Unseen Is Free


Friday, December 2, 2016

We Like What Experts Like and What Is Expensive: Art Taste Bends to Social Factors


Together with colleagues from the University of Copenhagen, Matthew Pelowski and Michael Forster from the Department of Basic Psychological Research and Research Methods at the University of Vienna have investigated how social and financial contextual information influences the enjoyment of art. The focus was on whether the purchase price, the prestige of a gallery, or the socioeconomic and educational status of other people affects personal taste.

Credit: Raphael, Web Gallery of Art

We like what experts or peers like - and what is expensive

During the study, students assessed a series of paintings according to personal pleasure. Before the presentation, the participants learned that certain social groups had already seen and evaluated the works before them. These included either peers (fellow university students), experts (museum curators at respected museums), or a group of similarly aged university dropouts who were currently unemployed and long-time social security recipients. The results were then compared with a control group that had evaluated the images without social context information.

This image shows Matthew Pelowski (left) and Michael Forster (right) from the Faculty of Psychology at the University of Vienna during the research for their study.

Copyright: Helmut Leder, Faculty of Psychology, University of Vienna

"Results showed that when participants thought that either experts or their peers liked a painting, they also liked it more", says Pelowski. "However, when they thought that the unemployed dropouts didn't like a painting, participants went in the opposite direction and said that they liked it more."

In a second study, the researchers also showed that telling participants the (fictitious) sales price of a painting at an art auction significantly changed the way they rated art. Very low prices made participants like the art less; very high prices made them like it more.

Art is used to show allegiance to desirable social groups

"These results provide empirical support for a 'social distinction' theory, first introduced by the French Sociologist and Philosopher Pierre Bourdieu," explains Pelowski. "According to how we use our evaluation and engagement with art in order to show allegiance to, or distance ourselves from, desirable or undesirable social groups." Both studies also have important implications for museums, suggesting that the context can affect how we see art.


Contacts and sources:
Matthew Pelowski
University of Vienna


Publication in "Psychology of Aesthetics, Creativity and the Arts":
Lauring, J. O., Pelowski, M., Forster, M., Gondan, M., Ptito, M., & Kupers, R. (2016, June 13). Well, if They Like it . . . Effects of Social Groups' Ratings and Price Information on the Appreciation of Art. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication.
DOI: http://dx.doi.org/10.1037/aca0000063

New Machine Learning Approach Can Be Used to Tell If Planetary Systems Are Stable or Not


Machine learning is a powerful tool used for a variety of tasks in modern life, from fraud detection and sorting spam at Google to making movie recommendations on Netflix. The same class of algorithms used by Google and Netflix can also tell us whether distant planetary systems are stable or not.

Now a team of researchers from the University of Toronto Scarborough has developed a novel way of using it to determine whether planetary systems are stable or not.

"Machine learning offers a powerful way to tackle a problem in astrophysics, and that's predicting whether planetary systems are stable," says Dan Tamayo, lead author of the research and a postdoctoral fellow in the Centre for Planetary Science at U of T Scarborough.

Machine learning is a form of artificial intelligence that gives computers the ability to learn without having to be constantly programmed for a specific task. The benefit is that it can teach computers to learn and change when exposed to new data, not to mention it's also very efficient.

Artist's depiction of a collision between two planetary bodies.

Credit: NASA/JPL-Caltech


The method developed by Tamayo and his team is 1,000 times faster than traditional methods in predicting stability.

"In the past we've been hamstrung in trying to figure out whether planetary systems are stable by methods that couldn't handle the amount of data we were throwing at it," he says.

It's important to know whether planetary systems are stable or not because it can tell us a great deal about how these systems formed. It can also offer valuable new information about exoplanets that is not offered by current methods of observation.

There are several current methods of detecting exoplanets that provide information such as a planet's size and orbital period, but they may not provide the planet's mass or how elliptical its orbit is, which are all factors that affect stability, notes Tamayo.
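
For readers who want a concrete picture of the general approach, the sketch below is a minimal, hypothetical example of training a classifier on summary features of simulated systems and labels from long integrations; the feature set, model choice and data are illustrative stand-ins, not the team's actual pipeline.

```python
# Hypothetical sketch of stability classification with machine learning.
# Feature names and data are illustrative, not the features used in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy dataset: each row summarizes a simulated planetary system
# (e.g., minimum pairwise separation, eccentricities, mass ratios);
# the label says whether a long N-body run stayed stable.
n_systems = 2000
X = rng.uniform(size=(n_systems, 4))
y = (X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] > 0.6).astype(int)  # stand-in label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)

# Once trained, predicting stability for a new system is a single fast call,
# which is where the large speed-up over running a full N-body integration comes from.
print("AUC on held-out systems:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```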

The method developed by Tamayo and his team is the result of a series of workshops at U of T Scarborough covering how machine learning could help tackle specific scientific problems. The research is currently published online in the Astrophysical Journal Letters.

"What's encouraging is that our findings tell us that investing weeks of computation to train machine learning models is worth it because not only is this tool accurate, it also works much faster," he adds.

It may also come in handy when analysing data from NASA's Transiting Exoplanet Survey Satellite (TESS), set to launch next year. The two-year mission will focus on discovering new exoplanets by focusing on the brightest stars near our solar system.

"It could be a useful tool because predicting stability would allow us to learn more about the system, from the upper limits of mass to the eccentricities of these planets," says Tamayo.

"It could be a very useful tool in better understanding those systems."



Contacts and sources:
Don Campbell
University of Toronto

Iron Age Ceramics Suggest Complex Pattern of Eastern Mediterranean Trade

Two markers of regional exchange in the Eastern Mediterranean during the first millennium BCE are the White Painted and Bichrome Wares from Cyprus's Cypro-Geometric and Cypro-Archaic periods.

Cypriot-style pottery may have been locally produced as well as imported and traded in Turkey during the Iron Age, according to a study published November 30, 2016 in the open-access journal PLOS ONE by Steven Karacic from Florida State University, USA, and James Osborne of the University of Chicago, USA.

White Painted and Bichrome Wares are Cypriot-style ceramics produced during the Iron Age that may provide clues about trade in the Eastern Mediterranean at that time. Although these ceramics are often assumed to be imports from Cyprus, excavations in southern Turkey have suggested that some pottery was produced locally, challenging previous assumptions about trade in the Eastern Mediterranean.

Cypro-Geometric III and Cypro-Archaic I (ca. 850-600 BCE) pottery from Tell Tayinat, ancient Kunulua. (1-3) White Painted Ware vertical-sided bowls; (4-7) White Painted Ware barrel jugs; (8-10) Bichrome Ware vertical-sided bowls; (11-12) Bichrome Ware barrel jugs; (13) Bichrome Ware juglet.

Credit: Karacic et al (2016)

The authors of the present study analyzed White Painted and Bichrome Wares recovered from three sites in the Hatay region of Turkey (Tell Tayinat, Çatal Höyük, and Tell Judaidah) using portable X-ray fluorescence and neutron activation analysis, techniques that bombard the pottery with X-rays and neutrons to reveal the chemical elements it contains. Imported and local versions of this pottery had different elemental compositions, which helped the authors determine where the pottery was produced.
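
As a rough, hypothetical illustration of how such elemental data can be grouped (not the statistical treatment used in the study), concentrations are often log-transformed and projected with principal component analysis so that sherds from similar clay sources cluster together; the elements and values below are invented.

```python
# Illustrative only: grouping sherds by elemental composition.
# Elements and concentrations (ppm) are invented, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Fe", "Rb", "Sr", "Zr", "Cr", "Ni"]
concentrations = np.array([
    [41000, 120, 310, 180, 150, 90],   # sherd 1
    [39500, 115, 305, 175, 160, 95],   # sherd 2
    [52000, 80, 450, 140, 300, 210],   # sherd 3
    [51500, 85, 440, 150, 290, 200],   # sherd 4
])

# Log-transform then standardize so abundant elements don't dominate.
X = StandardScaler().fit_transform(np.log10(concentrations))

# Project onto two principal components; sherds with similar compositions
# (and so, plausibly, similar clay sources) plot close together.
scores = PCA(n_components=2).fit_transform(X)
print(scores)
```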

When compared with existing datasets, the researchers found that Çatal Höyük and Tell Judaidah may only have had access to pottery imported from Cyprus whereas Tell Tayinat may have made Cypriot-style pottery locally as well as importing it.

Eastern Mediterranean Economic Exchange during the Iron Age: Portable X-Ray Fluorescence and Neutron Activation Analysis of Cypriot-Style Pottery in the Amuq Valley, Turkey

Credit: Steven Karacic and James F. Osborne

The authors suggest that feasting practices amongst the affluent in Tell Tayinat may have driven demand for Cypriot-style pottery, resulting in either local potters producing this pottery or Cypriot potters settling in the vicinity. 

Usually, pottery styles are expected to become increasingly rare the further away they are found from their origin of production, so these findings suggest a complex pattern of exchange in the Eastern Mediterranean during the Iron Age.

"We were surprised to find that locally produced Cypriot-style pottery was consumed at Tell Tayinat but not the other sites included in our study," says Karacic. "These results indicate complex social and economic interactions between the Amuq and Cyprus that we are only just beginning to understand for the Iron Age."



Contacts and sources:
Tessa Gregory
PLOS

Citation: Karacic S, Osborne JF (2016) Eastern Mediterranean Economic Exchange during the Iron Age: Portable X-Ray Fluorescence and Neutron Activation Analysis of Cypriot-Style Pottery in the Amuq Valley, Turkey. PLoS ONE 11(11): e0166399. doi:10.1371/journal.pone.0166399 : http://journals.plos.org/plosone/article?id=info%3Adoi/10.1371/journal.pone.0166399

Increasing Tornado Outbreaks—Is Climate Change To Blame?

Tornadoes and severe thunderstorms kill people and damage property every year. Estimated U.S. insured losses due to severe thunderstorms in the first half of 2016 were $8.5 billion. The largest U.S. impacts of tornadoes result from tornado outbreaks, sequences of tornadoes that occur in close succession. 

Last spring a research team led by Michael Tippett, associate professor of applied physics and applied mathematics at Columbia Engineering, published a study showing that the average number of tornadoes during outbreaks—large-scale weather events that can last one to three days and span huge regions—has risen since 1954. But they were not sure why.

In a new paper, published December 1 in Science via First Release, the researchers looked at increasing trends in the severity of tornado outbreaks where they measured severity by the number of tornadoes per outbreak. They found that these trends are increasing fastest for the most extreme outbreaks. While they saw changes in meteorological quantities that are consistent with these upward trends, the meteorological trends were not the ones expected under climate change.

A tornado near Elk Mountain, west of Laramie, Wyoming, on June 15, 2015. The tornado passed over mostly rural areas of the county, lasting over 20 minutes.

Credit: John Allen/Central Michigan University.

“This study raises new questions about what climate change will do to severe thunderstorms and what is responsible for recent trends,” says Tippett, who is also a member of the Data Science Institute and the Columbia Initiative on Extreme Weather and Climate. “The fact that we don’t see the presently understood meteorological signature of global warming in changing outbreak statistics leaves two possibilities: either the recent increases are not due to a warming climate, or a warming climate has implications for tornado activity that we don’t understand. This is an unexpected finding.”

The researchers used two NOAA datasets, one containing tornado reports and the other observation-based estimates of meteorological quantities associated with tornado outbreaks. “Other researchers have focused on tornado reports without considering the meteorological environments,” notes Chiara Lepore, associate research scientist at the Lamont-Doherty Earth Observatory, who is a coauthor of the paper. “The meteorological data provide an independent check on the tornado reports and let us check for what would be expected under climate change.”

U.S. tornado activity in recent decades has been drawing the attention of scientists. While no significant trends have been found in either the annual number of reliably reported tornadoes or of outbreaks, recent studies indicate increased variability in large normalized economic and insured losses from U.S. thunderstorms, increases in the annual number of days on which many tornadoes occur, and increases in the annual mean and variance of the number of tornadoes per outbreak. In the current study, the researchers used extreme value analysis and found that the frequency of U.S. outbreaks with many tornadoes is increasing, and is increasing faster for more extreme outbreaks. They modeled this behavior using extreme value distributions with parameters that vary to match the trends in the data.
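
As a simplified, hypothetical illustration of the extreme value machinery (stationary, with invented counts, unlike the time-varying fits in the paper), a generalized extreme value distribution can be fit to annual maxima with SciPy:

```python
# Simplified, stationary illustration of extreme value fitting.
# The outbreak counts below are invented; the paper fits distributions
# whose parameters are allowed to vary with time to capture trends.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pretend yearly maxima of "tornadoes per outbreak" for 62 years (1954-2015).
annual_max_counts = rng.gumbel(loc=20, scale=8, size=62).round()

# Fit a generalized extreme value (GEV) distribution to the annual maxima.
shape, loc, scale = stats.genextreme.fit(annual_max_counts)

# Probability of an outbreak year exceeding 50 tornadoes under the fitted model.
p_exceed_50 = stats.genextreme.sf(50, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}, P(max > 50)={p_exceed_50:.3f}")
```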


Annual 20th, 40th, 60th and 80th percentiles of the number of E/F1+ tornadoes per outbreak (6 or more E/F1+ tornadoes), 1954-2015 (solid lines), and quantile regression fits to 1965-2015 assuming linear growth in time (dashed lines).
Credit: Michael Tippett/Columbia Engineering.


Extreme meteorological environments associated with severe thunderstorms showed consistent upward trends, but the trends did not resemble those currently expected to result from global warming. They looked at two factors: convective available potential energy (CAPE) and a measure of vertical wind shear, storm relative helicity. Modeling studies have projected that CAPE will increase in a warmer climate leading to more frequent environments favorable to severe thunderstorms in the U.S. However, they found that the meteorological trends were not due to increasing CAPE but instead due to trends in storm relative helicity, which has not been projected to increase under climate change.

“Tornadoes blow people away, and their houses and cars and a lot else,” says Joel Cohen, coauthor of the paper and director of the Laboratory of Populations, which is based jointly at Rockefeller University and Columbia’s Earth Institute. “We've used new statistical tools that haven't been used before to put tornadoes under the microscope. The findings are surprising. We found that, over the last half century or so, the more extreme the tornado outbreaks, the faster the numbers of such extreme outbreaks have been increasing. What's pushing this rise in extreme outbreaks is far from obvious in the present state of climate science. Viewing the thousands of tornadoes that have been reliably recorded in the U.S. over the past half century or so as a population has permitted us to ask new questions and discover new, important changes in outbreaks of these tornadoes.”

Adds Harold Brooks, senior scientist at NOAA's National Severe Storms Laboratory, who was not involved with this project, “The study is important because it addresses one of the hypotheses that has been raised to explain the observed change in number of tornadoes in outbreaks. Changes in CAPE can't explain the change. It seems that changes in shear are more important, but we don't yet understand why those have happened and if they're related to global warming.”

Better understanding of how climate affects tornado activity can help to predict tornado activity in the short-term, a month, or even a year in advance, and would be a major aid to insurance and reinsurance companies in assessing the risks posed by outbreaks. “An assessment of changing tornado outbreak size is highly relevant to the insurance industry,” notes Kelly Hererid, AVP, Senior Research Scientist, Chubb Tempest Re R&D. “Common insurance risk management tools like reinsurance and catastrophe bonds are often structured around storm outbreaks rather than individual tornadoes, so an increasing concentration of tornadoes into larger outbreaks provides a mechanism to change loss potential without necessarily altering the underlying tornado count. This approach provides an expanded view of disaster potential beyond simple changes in event frequency.”

Tippett notes that more studies are needed to attribute the observed changes to either global warming or another component of climate variability. The research group plans next to study other aspects of severe thunderstorms such as hail, which causes less intense damage but is important for business (especially insurance and reinsurance) because it affects larger areas and is responsible for substantial losses every year.

The study was partially funded by Columbia University Research Initiatives for Science and Engineering (RISE) award; the Office of Naval Research; NOAA’s Climate Program Office’s Modeling, Analysis, Predictions and Projections; Willis Research Network; and the National Science Foundation. 




Contacts and sources:
Columbia University School of Engineering and Applied Science

Citation: "More tornadoes in the most extreme U.S. tornado outbreaks," Science, 01 Dec 2016. DOI: 10.1126/science.aah7393


How Do Children Hear Anger?

Even if they don’t understand the words, infants react to the way their mother speaks and the emotions conveyed through speech. What exactly they react to and how has yet to be fully deciphered, but could have significant impact on a child’s development. Researchers in acoustics and psychology teamed up to better define and study this impact.

Peter Moriarty, a graduate researcher at Pennsylvania State University, presented the results of these studies, conducted with Michelle Vigeant, professor of acoustics and architectural engineering, and Pamela Cole, professor of psychology, at the Acoustical Society of America and Acoustical Society of Japan joint meeting being held Nov. 28-Dec. 2 in Honolulu, Hawaii.

The team used functional magnetic resonance imaging (fMRI) to capture real-time information about the brain activity of children while they listened to samples of their mothers' voices with different affects -- or non-verbal emotional cues. Acoustic analysis of the voice samples was performed in conjunction with the fMRI data to correlate brain activity with quantifiable acoustical characteristics.

Credit: Pixabay

“We’re using acoustic analysis and fMRI to look at the interaction and specifically how the child’s brain responds to specific acoustic cues in their mother’s speech,” Moriarty said. Children in the study heard 15-second voice samples of the same words or sentences, but each sample conveyed anger or happiness, or was neutral in affect for control purposes. The emotional affects were defined and predicted quantitatively by a set of acoustic parameters.

“Most of these acoustic parameters are fairly well established,” Moriarty said. “We’re talking about things like the pitch of speech as a function of time... They have been used in hundreds of studies.” In a more general sense, they are looking at what’s called prosody, or the intonations of voice.

However, there are many acoustic parameters relevant to speech. Understanding patterns within various sets of these parameters, and how they relate to emotion and emotional processing, is far from straightforward.
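
As one concrete, hypothetical example of such a parameter, the pitch contour of a recording can be extracted with an off-the-shelf tool such as librosa; the filename and summary statistics below are placeholders, not the team's analysis code.

```python
# Hypothetical sketch: extracting a pitch (F0) contour from a voice sample.
# This is a generic prosody feature, not the study's actual analysis pipeline.
import numpy as np
import librosa

y, sr = librosa.load("mother_sample.wav")  # placeholder filename

# Probabilistic YIN pitch tracking over a plausible speech range (~65-500 Hz).
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=500, sr=sr)

# Simple summary statistics of the contour, the kind of quantity that
# could be related to perceived affect (e.g., higher, more variable pitch).
voiced_f0 = f0[~np.isnan(f0)]
print("mean F0 (Hz):", voiced_f0.mean())
print("F0 standard deviation (Hz):", voiced_f0.std())
```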

“You can’t just talk to Siri [referring to Apple’s virtual assistant] and Siri knows that you’re angry or not. There’s a very complicated model that you have to produce in order to make these judgements,” Moriarty explained. “The problem is that there’s a very complicated interaction between these acoustic parameters and the type of emotion … and the negativity or positivity we’d associate with some of these emotions.”

This work is a pilot study done as an early stage of a larger project called The Processing of the Emotional Environment Project (PEEP). In this early stage, the team is looking for the best set of variables to predict these emotions, as well as the effects these emotions have on processes in the brain. “[We want] an acoustic number or numbers doing a good job at predicting that we’re saying, ‘yes, we can say quantitatively that this was angry or this was happy,’” Vigeant said.

In the work to be presented, the team has demonstrated the importance of looking at lower frequency characteristics in voice spectra; the patterns that appear over many seconds of speech or the voice sample as a whole. These patterns, they report, may play a significant role in understanding the resulting brain activity and differentiating the information relevant to emotional processing.

With effective predictors and fMRI analysis of effects on the brain, the ultimate goal of PEEP is to learn how a toddler who has not yet developed language processes emotion through prosody and how the environment affects their development. “A long term goal is really to understand prosodic processing, because that is what young children are responding to before they can actually process and integrate the verbal content,” Cole said.

Toddlers, however, are somewhat harder to image in an fMRI device, as it requires them to be mostly motionless for long periods of time. So for now, the team is studying older children aged 6-10 -- though wriggling remains something of a challenge.

“We’re essentially trying to validate this type of procedure and look at whether or not we’re able to get meaningful results out of studying children that are so young. This really hasn’t been done at this age group in the past and that’s largely due to the difficulty of having children remain somewhat immobile in the scanner.”



Contacts and sources:
Acoustical Society of America

Presentation 4aAA11, "Low frequency analysis of acoustical parameters of emotional speech for use with functional magnetic resonance imaging," by Peter M. Moriarty is at 11:15 a.m. HAST, Dec. 1, 2016 in Room Lehua.



The ‘Frankenstein Effect’ of Working Memory

Imagine you’re at a cocktail party, and a couple comes up to you and introduces themselves. Though you try to remember their names, it’s difficult once the conversation has moved on.

In this common scenario, your working memory is the tool responsible for retaining names so you can address each person correctly. Working memory is a process psychologists are trying to understand better, though there are several theories about how it works.

A new study from Nathan Rose, assistant professor of psychology at the University of Notre Dame, examined a fundamental problem your brain has to solve, which is keeping information “in mind,” or active, so your brain can act accordingly.

The common theory is that the information is kept in mind by neurons related to the information actively firing throughout a delay period, a theory that’s been dominant since at least the 1940s, according to Rose.
Credit: Wikimedia Commons

However, in a new paper published in Science on Friday (Dec. 2), Rose and his team give weight to the synaptic theory, a less well-known and tested model. The synaptic theory suggests that information can be retained for short periods of time by specific changes in the links, or weights, between neurons.

Rose said this research advances the potential to understand a variety of higher-order cognitive functions including not only working memory but also perception, attention and long-term memory. Eventually, he said, this research could lay the groundwork for the potential to use noninvasive brain stimulation techniques such as transcranial magnetic stimulation, or TMS, to reactivate and potentially strengthen latent memories. Rose and his collaborators are currently working on extending these results to see how they relate to long-term memory.

Rose and his colleagues used a series of noninvasive procedures on healthy young adults to test the idea that certain information is retained in “activity-silent” neural mechanisms, an idea previously tested largely through mathematical modeling or in rodents. Participants were hooked up to neural imaging machines that allow researchers to “see” what the brain is thinking about by capturing which areas of the brain are active at any given time, since different areas of the brain correspond to different thoughts. Participants were given two items to keep in mind throughout the experiment — for example, a word and a face. Each of these items activates different areas in the brain, making it easier for the researchers to identify which one a person is thinking about. At first, Rose’s team saw neural evidence for the active representation of both items.

“Then, when we cued people about the item that was tested first, evidence for the cued item, or the attended memory item that was still in the focus of attention, remained elevated, but the neural evidence for the uncued item dropped all the way back to baseline levels of activation,” said Rose, “as if the item had been forgotten.”

In half of the tests, Rose’s team tested participants again on the second, uncued item – called the unattended memory item – to find out if the item was still in working memory, despite looking as if it had been forgotten. When the researchers cued participants to switch to thinking about the initially uncued item, “people accurately and rapidly did so,” said Rose. The researchers also saw a corresponding return of neural evidence for the active representation of the initially uncued item. This indicated that despite looking as if the second, unattended item had been forgotten, it remained in working memory.

“The unattended memory item seems to be represented without neural evidence of an active representation, but it’s still there, somehow,” Rose said.
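
The "neural evidence" described here comes from multivariate pattern classification of brain activity. The sketch below is a generic, hypothetical version of that idea (simulated patterns, not the study's data or code): a classifier trained to tell "word" trials from "face" trials reads out which item a pattern most resembles, and chance-level output corresponds to evidence at baseline.

```python
# Hypothetical multivariate pattern analysis (MVPA) sketch, not the study's pipeline.
# Rows are activity patterns (features per trial); labels say whether the
# participant was holding a word or a face in mind on that trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_features = 200, 300
labels = rng.integers(0, 2, size=n_trials)          # 0 = word, 1 = face
patterns = rng.normal(size=(n_trials, n_features))
patterns[labels == 1, :20] += 0.5                    # toy category signal

clf = LogisticRegression(max_iter=1000)

# Cross-validated decoding accuracy; "evidence at baseline" corresponds to
# chance-level classifier output for the uncued item during the delay.
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print("decoding accuracy:", acc)
```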

In a second round of experiments, Rose’s team added TMS, the noninvasive brain stimulation, to the testing for the unattended memory item. The TMS provided a painless jolt of energy to specific areas of the brain to see how it affected neural activity, looking for signs of the unattended memory item resurfacing.

“Although the TMS activates a highly specific part of the brain, it is a relatively nonspecific form of information that is applied to the network. It’s just a burst of energy that goes through the network, but when it’s filtered through this potentiated network, the output of the neural activity that we’re recording appears structured, as if that information has suddenly been reactivated,” Rose said. “We’re using this brain stimulation to reactivate a specific memory.”

The researchers found that after the TMS is applied to the part of the brain where information about the unattended memory item is processed, the neural signals fired back up in the exact form of the “forgotten” item, going from baseline back to the level of neural activity for the word or face that the participant was keeping in mind. The team dubbed this reactivation of memory using TMS the “Frankenstein effect,” since the neural signals for the secondary item went from baseline activity – looking like it was forgotten – back to full activity.

In further testing, Rose’s team discovered that once participants knew they wouldn’t have to remember the unattended item any longer in the tests, the memory items truly were dropped from their working memory.

“Once the item is no longer relevant on the trial, we don’t see the same reactivation effect,” Rose said. “So that means this is really a dynamic maintenance mechanism that is actually under cognitive control. This is a strategic process. This is a more dynamic process than we had anticipated.”

The Science paper is the first study published in the journal from the Notre Dame Department of Psychology and the second from the College of Arts and Letters this semester. Co-authors are Joshua LaRocque, Adam Riggall, Olivia Gosseries, Michael Starrett, Emma Meyering and Bradley Postle, all at the University of Wisconsin, Madison.



Contacts and sources: 
Nathan Rose
University of Notre Dame

Citation: "Reactivation of latent working memories with transcranial magnetic stimulation," Science, 02 Dec 2016: Vol. 354, Issue 6316, pp. 1136-1139. DOI: 10.1126/science.aah7011

How It Takes Just 6 Seconds to Hack a Credit Card (Video)

New research reveals the ease with which criminals can hack an account without any of the card details.

Dubbed the Distributed Guessing Attack, the technique can take just six seconds to find the card number, the expiry date and the CVV using nothing more than a laptop and an internet connection, says the team from Newcastle University, UK.

Investigators of the recent Tesco cyberattack believe that hackers used a 'guessing attack' method, which circumvents all the security features put in place to protect online payments from fraud, to defraud Tesco customers of £2.5m.

Working out the card number, expiry date and security code of any Visa credit or debit card can take as little as six seconds and uses nothing more than guesswork, new research has shown.


Credit: Newcastle University 


Research published in the academic journal IEEE Security & Privacy, shows how the so-called Distributed Guessing Attack is able to circumvent all the security features put in place to protect online payments from fraud.

Exposing the flaws in the VISA payment system, the team from Newcastle University, UK, found neither the network nor the banks were able to detect attackers making multiple, invalid attempts to get payment card data.

By automatically and systematically generating different variations of the card's security data and firing them at multiple websites, hackers are able to get a 'hit' and verify all the necessary security data within seconds.


Credit: Newcastle University 

Investigators believe this guessing attack method is likely to have been used in the recent Tesco cyberattack which the Newcastle team describe as "frighteningly easy if you have a laptop and an internet connection."

And they say the risk is greatest at this time of year when so many of us are purchasing Christmas presents online.

"This sort of attack exploits two weaknesses that on their own are not too severe but when used together, present a serious risk to the whole payment system," explains Mohammed Ali, a PhD student in Newcastle University's School of Computing Science and lead author on the paper.

"Firstly, the current online payment system does not detect multiple invalid payment requests from different websites. This allows unlimited guesses on each card data field, using up to the allowed number of attempts - typically 10 or 20 guesses - on each website.

"Secondly, different websites ask for different variations in the card data fields to validate an online purchase. This means it's quite easy to build up the information and piece it together like a jigsaw.

"The unlimited guesses, when combined with the variations in the payment data fields make it frighteningly easy for attackers to generate all the card details one field at a time.

"Each generated card field can be used in succession to generate the next field and so on. If the hits are spread across enough websites then a positive response to each question can be received within two seconds - just like any online payment.

"So even starting with no details at all other than the first six digits - which tell you the bank and card type and so are the same for every card from a single provider - a hacker can obtain the three essential pieces of information to make an online purchase within as little as six seconds."

How the Distributed Guessing Attack works

To obtain card details, the attack uses online payment websites to guess the data and the reply to the transaction will confirm whether or not the guess was right.

Different websites ask for different variations in the card data fields, and these can be divided into three categories: Card Number + Expiry Date (the absolute minimum); Card Number + Expiry Date + CVV (card security code); and Card Number + Expiry Date + CVV + Address.

Because the current online system does not detect multiple invalid payment requests on the same card from different websites, unlimited guesses can be made by distributing the guesses over many websites.

However, the team found it was only the VISA network that was vulnerable.

"MasterCard's centralised network was able to detect the guessing attack after less than 10 attempts - even when those payments were distributed across multiple networks," says Mohammed.

At the same time, because different online merchants ask for different information, it allows the guessing attack to obtain the information one field at a time.

Mohammed explains: "Most hackers will have got hold of valid card numbers as a starting point but even without that it's relatively easy to generate variations of card numbers and automatically send them out across numerous websites to validate them.

"The next step is the expiry date. Banks typically issue cards that are valid for 60 months so guessing the date takes at most 60 attempts.

"The CVV is your last barrier and theoretically only the card holder has that piece of information - it isn't stored anywhere else.

"But guessing this three-digit number takes fewer than 1,000 attempts. Spread this out over 1,000 websites and one will come back verified within a couple of seconds. And there you have it - all the data you need to hack the account."

Protecting ourselves from fraud

An online payment - or "card not present" transaction - is dependent on the customer providing data that only the owner of the card could know.

But unless all merchants ask for the same information then, says the team, jigsaw identification across websites is simple.

So how can we keep our money safe?

"Sadly there's no magic bullet," says Newcastle University's Dr Martin Emms, co-author on the paper.

"But we can all take simple steps to minimise the impact if we do find ourselves the victim of a hack. For example, use just one card for online payments and keep the spending limit on that account as low as possible. If it's a bank card then keep ready funds to a minimum and transfer over money as you need it.

"And be vigilant, check your statements and balance regularly and watch out for odd payments.



"However, the only sure way of not being hacked is to keep your money in the mattress and that's not something I'd recommend!"


Contacts and sources:
Louella Houldcroft
Newcastle University

Citation: "Does the Online Card Payment Landscape Unwittingly Facilitate Fraud?" Ali, Mohammed Aamir; Arief, Budi; Emms, Martin; van Moorsel, Aad. Newcastle University, UK. IEEE Security & Privacy.

Researchers Put Embryos in Suspended Animation

UC San Francisco researchers have found a way to pause the development of early mouse embryos for up to a month in the lab, a finding with potential implications for assisted reproduction, regenerative medicine, aging and even cancer, the authors say.

The new study — published online Nov. 23, 2016, in Nature — involved experiments with pre-implantation mouse embryos, called blastocysts. The researchers found that drugs that inhibit the activity of a master regulator of cell growth called mTOR can put these early embryos into a stable and reversible state of suspended animation.

“Normally, blastocysts only last a day or two, max, in the lab. But blastocysts treated with mTOR inhibitors could survive up to four weeks,” said the study’s lead author, Aydan Bulut-Karslioglu, PhD, a postdoctoral researcher in the lab of senior author Miguel Ramalho-Santos, PhD, who is an associate professor of obstetrics/gynecology and reproductive sciences at UCSF.

UCSF researchers have placed mouse embryos into suspended animation, pausing their development.

Image by Miguel Ramalho-Santos lab

Bulut-Karslioglu and colleagues showed that paused embryos could quickly resume normal growth when mTOR inhibiters were removed, and developed into healthy mice if implanted back into a recipient mother.
Discovery Was a Surprise to Researchers

The discovery was a surprise to the researchers, who had intended to study how mTOR-inhibiting drugs slow cell growth in blastocysts, not to find a way to put the embryos into hibernation.

“It was completely surprising. We were standing around in the tissue culture room, scratching our heads, and saying wow, what do we make of this?” said Ramalho-Santos, who is a member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research. “To put it in perspective, mouse pregnancies only last about 20 days, so the 30-day-old ‘paused’ embryos we were seeing would have been pups approaching weaning already if they’d been allowed to develop normally.”

Further experiments demonstrated that cultured mouse embryonic stem cells – which are derived from the blastocyst-stage embryo – can also be put into suspended animation by mTOR inhibitors. The drugs appear to act by reducing gene activity across much of the genome, the team found, with the exception of a handful of so-called “repressor” genes that themselves may act to inhibit gene activity. The researchers tested a number of different mTOR inhibitors and found that the most effective was a new synthetic drug called Rapa-Link that was recently developed at UCSF by the lab of Kevan Shokat, PhD.

The researchers believe that it should be possible to extend the suspended animation for much longer than the 30 days observed in the present study, Bulut-Karslioglu said: “Our dormant blastocysts are eventually dying when they run out of some essential metabolite within them. If we could supply those limiting nutrients in the culture medium, we should be able to sustain them even longer. We just don't know exactly what they need yet.”
Drug-Induced Dormancy Mimics Natural Pausing

Bulut-Karslioglu and colleagues demonstrated that the dormant state they were able to induce in blastocysts by blocking mTOR was almost identical to the natural ability of mice to pause a pregnancy in its early stages. This temporary stasis, called diapause, occurs in species across the animal kingdom, and in mammals from mice to wallabies, it typically allows mothers to delay pregnancy when food is scarce or they are otherwise stressed.

It makes sense that mTOR would be involved in the process of diapause, Ramalho-Santos said: “mTOR is this beautiful regulator of developmental timing that works by being a nutrient sensor. It doesn’t just drive cells into growing willy-nilly; it tunes cell growth based on the level of nutrients that are available in the environment.”

It is an open question whether humans also have the ability to pause pregnancies at the blastocyst stage, Bulut-Karslioglu said, because the time from fertilization to implantation is hard to measure in humans. However, anecdotal accounts from practitioners of in vitro fertilization of unusually long pregnancies and mismatches between the timing of artificial embryo transfer and the resulting pregnancy suggest that humans too may have the ability to delay implantation of fertilized embryos in some circumstances.
Implications for Other Fields of Medicine

The new research could have a big impact on the field of assisted reproduction, where practitioners are currently limited by the rapid degradation of embryos once they reach the blastocyst stage. Putting blastocysts into suspended animation may avoid the compromise of freezing embryos and give practitioners more time to test fertilized blastocysts for genetic defects before implanting them, Bulut-Karslioglu said.

mTOR inhibitors are already in clinical trials to treat certain forms of cancer, but the new results suggest a potential danger of this approach, Ramalho-Santos said: “Our results suggest that mTOR inhibitors may well slow cancer growth and shrink tumors, but could leave behind these dormant cancer stem cells that could go back to spreading after therapy is interrupted. You might use a second or third line of drugs specifically to kill off those remaining dormant cells.”

The authors are eager to explore whether mTOR inhibitors and related downstream biochemical pathways can drive stem cells into a dormant state at later stages of development, which could have major implications for efforts to repair or replace ailing organs in the field of regenerative medicine. The findings also have potential implications in aging research, the authors say, where mTOR inhibitors have already been shown to extend the lives of mice and other animals, an outcome which the authors suggest could result in part from preserving more youthful stem cells.

“This is a great example of the power of basic science,” Ramalho-Santos said. “We weren't looking for ways to pause blastocyst development or mimic diapause. We weren’t trying to model aging or test cancer therapies or develop better techniques for tissue regeneration or organ transplantation. None of that was in our mind, but our experiments told us we were on to something we had to understand, and we couldn’t ignore where they led.”

Additional authors on the paper are Steffen Biechele, PhD, and Trisha A. Macrae, of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research, Center for Reproductive Sciences, and Diabetes Center at UCSF; Hu Jin, Miroslav Hejna, PhD, and Jun S. Song, PhD, of the Carl R. Woese Institute for Genomic Biology at the University of Illinois, Urbana-Champaign; and Marina Gertsenstein, PhD, of the Centre for Phenogenomics in Toronto.

This research was supported by grants from the National Institutes of Health (5P30CA082103, P30DK063720, R01CA163336, R01OD012204, R01GM113014) and the National Science Foundation (1442504). The authors declare no competing financial interests.




Contacts and sources:
Nicholas Weiler
University of California, San Francisco

First Time: Scientists Catch Water Molecules Passing the Proton Baton

Water conducts electricity, but the process by which this familiar fluid passes along positive charges has puzzled scientists for decades.

But in a paper published in the Dec. 2 issue of the journal Science, an international team of researchers has finally caught water in the act — showing how water molecules pass along excess charges and, in the process, conduct electricity.

“This fundamental process in chemistry and biology has eluded a firm explanation,” said co-author Anne McCoy, a professor of chemistry at the University of Washington. “And now we have the missing piece that gives us the bigger picture: how protons essentially ‘move’ through water.”
Credit: University of Washington

There’s more going on in there than we know. Credit: Roger McLassus

The team was led by Mark Johnson, senior author and a professor at Yale University. For over a decade, Johnson, McCoy and two co-authors — professor Kenneth Jordan at the University of Pittsburgh and Knut Asmis, a professor at Leipzig University — have collaborated to understand how molecules in complex arrangements pass along charged particles.

For water, this is an old question. Chemists call the process by which water conducts electricity the Grotthuss mechanism. When excess protons — the positively charged subatomic particles within atoms — are introduced into water, they pass quickly through the fluid, riding a transient, ever-shifting network of loose bonds between water molecules. By the Grotthuss mechanism, a water molecule can pick up an excess charge and pass it along to a neighbor almost instantaneously.

The exchange is fundamental to understanding the behavior of water in biological and industrial settings. But it is also so fast and the vibrations between water molecules so great that the hand-off cannot be captured using traditional spectroscopy — a technique that scatters light against a molecule to learn about its structure.

“With spectroscopy, you hit objects with a beam of photons, see how those photons are scattered and use that scattering information to determine information about the object’s structure and arrangement of atoms,” said McCoy. “And this is where Mark Johnson’s lab at Yale has really been a leader — in adapting spectroscopy to better capture this transfer of protons among water molecules.”

A simplified view of the Grotthuss mechanism: Water molecules pass along an extra proton. Oxygen atoms are in red, with hydrogen atoms in grey.
Credit: Matt K. Petersen

Johnson’s lab, along with collaborators in Asmis’s lab in Germany, figured out how to freeze the proton relay to slow the process, giving the researchers time to visualize the Grotthuss mechanism using spectroscopy. When these “spectroscopic snapshots” proved still too blurry due to vibrations in chemical bonds, they switched to studying this mechanism in “heavy water.” In heavy water, regular hydrogen atoms are replaced by a heavier isotope called deuterium. By the quirky rules of quantum mechanics that underlie the behavior of subatomic particles, bonds in heavy water shake less than traditional H2O.

But this snapshot required massive amounts of theoretical and computational decoding to reveal just how water molecules momentarily altered their structure to both receive and pass along an extra proton. McCoy’s and Jordan’s groups helped develop computational approaches to analyze the spectroscopy data.

“In spectroscopy, your goal is to determine the structure of molecules based on how they scatter light,” said McCoy. “In our approach, we also asked how the behavior of bonds will affect spectroscopy. That really completed our circle of inquiry and allowed us to visualize this transfer of protons.”

In their paper, they describe the Grotthuss mechanism in complexes made up of four molecules of heavy water to which various tag molecules were attached. According to McCoy, they would like to see how the proton relay changes among larger groups of water molecules and to expand these spectroscopy techniques to include other small molecules with complex structures.

Lead author on the paper is Conrad Wolke at Yale University. Other co-authors are Joseph Fournier of the University of Chicago; Laura Dzugan of The Ohio State University; Matias Fagiani and Harald Knorke of Leipzig University; and Tuguldur Odbadrakh of the University of Pittsburgh. McCoy, who moved to UW from The Ohio State University in 2015, currently maintains labs at both institutions and Dzugan is a member of her research group. The research was funded by the U.S. Department of Energy, the National Science Foundation and the German Research Foundation.




Contacts and sources:
James Urton
University of Washington

Artificial Dog Nose Sniffs Out Explosives

By mimicking how dogs get their whiffs, a team of government and university researchers have demonstrated that “active sniffing” can improve by more than 10 times the performance of current technologies that rely on continuous suction to detect trace amounts of explosives and other contraband.

Researchers have looked to one of nature’s best chemical detectors – the dog – to help make today’s chemical detection devices better at sniffing out explosives and other contraband materials.

“The dog is an active aerodynamic sampling system that literally reaches out and grabs odorants,” explained Matthew Staymates, a mechanical engineer and fluid dynamicist at the National Institute of Standards and Technology (NIST).

 “It uses fluid dynamics and entrainment to increase its aerodynamic reach to sample vapors at increasingly large distances. Applying this bio-inspired design principle could lead to significantly improved vapor samplers for detecting explosives, narcotics, pathogens—even cancer.”

Video: https://www.youtube.com/embed/Eb1aWTqwDr0

Matt Staymates, a mechanical engineer at the National Institute of Standards and Technology (NIST), uses a schlieren imaging system to visualize the flow of vapors into an explosives detection device fitted with an artificial dog nose that mimics the “active sniffing” of a dog. The artificial dog nose, which was developed by Staymates and colleagues at NIST, the Massachusetts Institute of Technology Lincoln Laboratory, and the U.S. Food and Drug Administration, can improve trace chemical detection as much as 16-fold.

 Copyright Robert Rathe


Following nature’s lead, Staymates and colleagues from NIST, the Massachusetts Institute of Technology’s Lincoln Laboratory and the U.S. Food and Drug Administration fitted a dog-nose-inspired adapter to the front end of a commercially available explosives detector. Adding the artificial dog nose—made on a 3-D printer—to enable active sniffing improved odorant detection by up to 18 times, depending on the distance from the source.

Trace detection devices now used at points of entry and departure such as airports and seaports, and other sensitive locations, typically employ passive sampling. Examples include equipment that requires swabbing hands or other surfaces and then running the sample through a chemical detector—typically an ion mobility spectrometer. Wand-like vapor detectors accommodate more sampling mobility, but unless the detector scans immediately above it, the chemical signature of a bomb-making ingredient will go unnoticed.

Aiming to uncover clues on how to improve trace detection capabilities, the researchers turned to one of nature’s best chemical detectors: the dog. Through their review of previous studies, the team distilled what occurs during sniffing. Five times a second, dogs exhale to reach out, pull and then inhale to deliver a nose full of aromas for decoding by some 300 million receptor cells.

Using a 3-D printer, Staymates replicated the external features of a female Labrador retriever’s nose, including the shape, direction, and spacing of the nostrils. Moving air through the artificial nose at the same rate that a dog inhales and exhales allowed them to mimic the air sampling—or sniffing—of dogs.

With schlieren imaging—a technique widely used in aeronautical engineering to view the flow of air around objects—and high-speed video, the team first confirmed that their imitation nose could indeed sniff much like the real thing, a property documented in previous studies of live dogs.

With each sniff, air jets exit from both nostrils, moving downward and outward. Though it might seem counterintuitive, the air jets entrain—or draw in—vapor-laden air toward the nostrils. During inhalation, the entrained air is pulled into each nostril.

The team’s first set of experiments compared the air-sampling performance of their “actively sniffing” artificial dog nose with that of trace-detection devices that rely on continuous suction. The head-to-head comparison with an inhalation system used with a real-time monitoring mass spectrometer found that sampling efficiency with the sniffing artificial dog nose was four times better 10 centimeters (3.9 inches) away from the vapor source and 18 times better at a stand-off distance of 20 centimeters (7.9 inches).

On the basis of those results, the team chose to outfit a commercially available vapor detector with a bio-inspired 3D-printed inlet that would enable it to sniff like a dog, rather than to inhale only in 10-second intervals, the device’s normal mode of operation. The switch resulted in an improvement in odorant detection by a factor of 16 at a stand-off distance of 4 centimeters (1.6 inches).

“Their incredible air-sampling efficiency is one reason why the dog is such an amazing chemical sampler,” Staymates said. “It’s just a piece of the puzzle. There’s lots more to be learned and to emulate as we work to improve the sensitivity, accuracy and speed of trace-detection technology.”




Contacts and sources:
Rich Press
NIST

The research is reported in the journal Scientific Reports.

Article: M. Staymates, W. MacCrehan, J. Staymates, R. Kunz, T. Mendum, T-H. Ong, G. Geurtsen, G. Gillen and B.A. Craven. Biomimetic Sniffing Improves the Detection Performance of a 3D Printed Nose of a Dog and a Commercial Trace Vapor Detector. December 1, 2016. Scientific Reports. DOI: 10.1038/srep36876


White Deaths Exceed Births in One-Third of U.S. States

In 2014, deaths among non-Hispanic whites exceeded births in more states than at any time in U.S. history. Seventeen states, home to 121 million residents or roughly 38 percent of the U.S. population, had more deaths than births among non-Hispanic whites (hereafter referred to as whites) in 2014, compared to just four in 2004. When births fail to keep pace with deaths, a region is said to have a “natural decrease” in population, which can only be offset by migration gains. In twelve of the seventeen states with white natural decreases, the white population diminished overall between 2013 and 2014.
Over the last several decades, demographers have noted the growing incidence of natural decrease in the United States. More widespread natural decrease results from declining fertility due to the Great Recession, and the aging of the large baby boom cohorts born between 1946 and 1964. This senior population is projected to expand from nearly 15 percent of the total population in 2015 to nearly 24 percent in 2060.

Much of this aging baby boom population is white, and so white mortality is growing. Together, growing white mortality and the diminishing number of white births increase the likelihood of more white natural decrease. In contrast, births exceed deaths by a considerable margin among the younger Latino population, and the combination of these very different demographic trends is increasing the diversity of the U.S. population.
Credit: University of New Hampshire

The 17 states include California, Florida, Pennsylvania, New Hampshire, Connecticut, Maine and Rhode Island.

"When births fail to keep pace with deaths, a region is said to have a natural decrease in population," said Sáenz.


The researchers believe that the decreasing white population in these states can be attributed to the rising number of aging adults, a decrease in their fertility rates and the falling number of white women of childbearing age.

States by Incidence of White Natural Increase or Decrease, 2014
Credit: University of New Hampshire

Despite the large number of states with white natural decline, only two states had more deaths than births in their combined population. For the other 15 states, the white natural decrease has been offset by natural increases in minority populations. In particular, due to the youthfulness of the Latino population, Latino births exceeded deaths by a considerable margin during the same time frame. This trend, the researchers said, is related to the increasing diversity of the U.S. population. A 2014 report by Sáenz, in fact, contends that the single largest component of the U.S. child population will be Latino by 2060.


"Our analysis of the demographic factors causing white natural decrease and minority population growth suggests that the pace is likely to pick up in the future," said Sáenz. "These demographic trends have major policy implications from increasing demands on healthcare and retirement systems for aging populations to considerable necessary investments in education and training for younger ones."

The report suggests that competing demands between these populations could create considerable potential for disagreements regarding funding priorities.

The research brief was produced for the Carsey School of Public Policy at the University of New Hampshire. Sáenz is the Mark G. Yudof Endowed Professor at UTSA and a policy fellow at the Carsey School of Public Policy at the University of New Hampshire.


Contacts and sources:
Jesus Chavez
The University of Texas at San Antonio (UTSA)

Black-White Earnings Gap Returns to 1950 Levels

After years of progress, the median earnings gap between black and white men has returned to what it was in 1950, according to new research by economists from Duke University and the University of Chicago.

The experience of African-American men is not uniform, though: The earnings gap between black men with a college education and those with less education is at an all-time high, the authors say.

The research appears online in the National Bureau of Economic Research working paper series.

The paper looks at earnings for working-age men across a span of 75 years, from 1940 to 2014. The earnings gap between black and white men narrowed during the civil rights era. Then, starting around 1970, the gap between black and white men's wages started widening once again.

While salaries for upper income black men have continued to climb since the 1960s, a starkly different picture appears for lower income black men.
Credit: Duke University

"When it comes to the earnings gap between black and white men, we've gone all the way back to 1950," said Duke economist Patrick Bayer, who co-authored the paper with Kerwin Kofi Charles of the University of Chicago.

The picture for black men looks very different at the top of the economic ladder versus the bottom, the authors say. Since the 1960s, top black salaries have continued to climb. Those advances were fueled by more equal access to universities and high-skilled professions, the study finds.

Meanwhile, a starkly different story transpired at the bottom of the economic ladder. Massive increases in incarceration rates and the general decline of working-class jobs have devastated the labor market prospects of men with a high school degree or less, the authors say.

The changing economy has been hard on all workers with less than a high school education, but especially devastating for black men, Bayer said.

"The broad economic changes we've seen since the 1970s have clearly helped people at the top of the ladder," Bayer said. "But the labor market for low-skilled workers has basically collapsed.

"Back in 1940 there were plenty of jobs for men with less than a high school degree. Now education is more and more a determinant of who's working and who's not."

In fact, more and more working-age men in the United States aren't working at all. The share of nonworking white men grew from about 8 percent in 1960 to 17 percent in 2014. The numbers look still worse among black men: In 1960, 19 percent of black men were not working; by 2014, that figure had grown to 35 percent. That includes men who are incarcerated as well as those who can't find jobs.

"The rate at which men are not working has been skyrocketing, and it's not simply the result of the Great Recession," Bayer said. "It's a big part of what's been happening to our economy over the past 40 years."

The situation would be even worse if not for educational gains among African-Americans over the past 75 years, Bayer said.

On average, black men today have many more years of schooling than black men of the past, and the education gap between white and black men has shrunk considerably. Nevertheless, a gap remains: These days, black men have about a year's less education than white men, on average.

"In essence, the economic benefits that should have come from the substantial gains in education for black men over the past 75 years have been completely undone by the changing economy, which exacts an ever steeper price for the differences that still remain," Bayer said.

The findings show the need for renewed focus on closing racial gaps in education and school quality, which have been stuck in place for several decades, according to the authors. They also suggest that any economic changes that improve prospects for all low-skilled workers will have the important side effect of reducing racial economic inequality.

"We clearly need to create better job opportunities for everyone in the lower rungs of the economic ladder, where work has become increasingly hard to come by," Bayer said.




Contacts and sources:
Duke University

Citation: "Divergent Paths: Structural Change, Economic Rank, and the Evolution of Black-White Earnings Differences, 1940-2014," Patrick Bayer and Kerwin Kofi Charles. NBER Working Paper No. 2279, November 2016.
 

Thursday, December 1, 2016

The 'Hometown Effect,' Genetics Play Key Roles In Your Gut Microbiome

Genetics and birthplace have a big effect on the make-up of the microbial community in the gut, according to research published Nov. 28 in the journal Nature Microbiology.

The findings by a team of scientists from the Department of Energy's Pacific Northwest National Laboratory and Lawrence Berkeley National Laboratory (Berkeley Lab) represent an attempt to untangle the forces that shape the gut microbiome, which plays an important role in keeping us healthy.

In the study, scientists linked specific genes in an animal — in this case, a mouse — to the presence and abundance of specific microbes in its gut.

"We are starting to tease out the importance of different variables, like diet, genetics and the environment, on microbes in the gut," said PNNL's Janet Jansson, a corresponding author of the study. "It turns out that early life history and genetics both play a role."

Mice raised in environments with different relative abundances of diverse microbes (left and right) have a correspondingly diverse gut microbiome. These signature characteristics remained even when the mice were moved to a new facility, and they persisted into the next generation. 
Courtesy of Zosia Rostomian/Berkeley Lab

Scientists studied more than 50,000 genetic variations in mice and ultimately identified more than 100 snippets that affect the population of microbes in the gut. Some of those genes in mice are very similar to human genes that are involved in the development of diseases like arthritis, colon cancer, Crohn's disease, celiac disease and diabetes.
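
The release does not describe the statistical machinery behind these associations, but the general idea -- scan many variants and test each one for a link to a microbe's abundance, correcting for the number of tests -- can be sketched in a few lines. The sketch below is a hedged, generic illustration on simulated data (a simple per-variant regression with a crude Bonferroni cutoff), not the authors' actual analysis; every name and value in it is an assumption.

```python
# Illustrative sketch only: test each genetic variant for association with the
# abundance of one gut microbe, then apply a crude multiple-testing correction.
# Data, effect sizes, and thresholds are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_mice, n_variants = 200, 1000
genotypes = rng.integers(0, 3, size=(n_mice, n_variants))    # 0/1/2 allele counts
abundance = rng.normal(size=n_mice) + 0.5 * genotypes[:, 10] # variant 10 has a real effect

pvals = np.array([
    stats.linregress(genotypes[:, j], abundance).pvalue      # simple per-variant regression
    for j in range(n_variants)
])
bonferroni = 0.05 / n_variants                               # correct for 1,000 tests
hits = np.flatnonzero(pvals < bonferroni)
print("variants associated with microbe abundance:", hits)
```

On this toy data the scan recovers the one planted association; the real study's mapping across more than 50,000 variants and many microbial taxa is, of course, far more involved.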

The abundance of one microbe in particular, a probiotic strain of Lactobacillales, was affected by several host genes and was linked to higher levels of important immune cells known as T-helper cells. These results support the key role of the microbiome in the body's immune response, and suggest the possibility that controlling the microbes in the gut could influence the immune system and disease vulnerability.

"We know the microbiome likely plays an important role in fighting infections," said first author Antoine Snijders of the Berkeley Lab. "We found that the level of T-helper cells in the blood of mice is well explained by the level of Lactobacillales in the gut. It's the same family of bacteria found in yogurt and very often used as a probiotic."

The nuclei of mouse epithelial cells (red) and the microbes (green) in the mouse intestine are visible.
Credit: PNNL

To do the research, the team drew upon a genetically diverse set of "collaborative cross" mice that capture the genetic variation in human populations. Scientists studied 30 strains of the mice, which were housed in two facilities with different environments for the first four weeks of their lives. The scientists took fecal samples from the mice to characterize their gut microbiomes before transferring them to a third facility.

The researchers found that the microbiome retained a clear microbial signature formed where the mice were first raised — effectively their "hometown." Moreover, that microbial trait carried over to the next generation, surprising the scientists.

"The early life environment is very important for the formation of an individual's microbiome," said Jian-Hua Mao, a corresponding author from Berkeley Lab. "The first dose of microbes one gets comes from the mom, and that remains a strong influence for a lifetime and even beyond."

In brief, the team found that:
Both genetics and early environment play a strong role in determining an organism's microbiome.
The genes in mice that were correlated with microbes in the gut are very similar to genes involved in many diseases in people.

The researchers also found indications that moderate shifts in diet play a role in determining exactly what functions the microbes carry out in the gut.

"Our findings could have some exciting implications for people's health," said Jansson. "In the future, perhaps people could have designer diets, optimized according to their genes and their microbiome, to digest foods more effectively or to modulate their susceptibility to disease."

Berkeley Lab research scientist Antoine Snijders handles one of the mice used in the research study on how early environmental exposure and genetics play big roles in shaping the "signature" of the gut microbiome.
Courtesy of Marilyn Chung/Berkeley Lab


Other co-lead authors on this paper are Sasha Langley from Berkeley Lab and Young-Mo Kim from PNNL. Thomas Metz at PNNL is also a co-corresponding author. The study also included work by scientists at the University of Washington.

The research was funded primarily by the Office of Naval Research. Additional funding came from Berkeley Lab's Microbes to Biomes and PNNL's Microbiomes in Transition initiatives.

The mice were created at the Systems Genetics Core Facility at the University of North Carolina. The team made metabolomic measurements at EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility at PNNL.




Contacts and sources:
Tom Rickey, PNNL
Sarah Yang, Lawrence Berkeley National Laboratory

Reference: Antoine M. Snijders, Sasha A. Langley, Young-Mo Kim, Colin J. Brislawn, Cecilia Noecker, Erika M. Zink, Sarah J. Fansler, Cameron P. Casey, Darla R. Miller, Yurong Huang, Gary H. Karpen, Susan E. Celniker, James B. Brown, Elhanan Borenstein, Janet K. Jansson, Thomas O. Metz, Jian-Hua Mao, Influence of early life exposure, host genetics and diet on the mouse gut microbiome and metabolome, Nature Microbiology, Nov. 28, 2016, DOI: 10.1038/nmicrobiol.2016.221.


Where the Big Rains on the Great Plains Come From

Intense storms have become more frequent and longer-lasting in the Great Plains and Midwest in the last 35 years. What has fueled these storms? The temperature difference between the Southern Great Plains and the Atlantic Ocean produces winds that carry moisture from the Gulf of Mexico to the Great Plains, according to a recent study in Nature Communications.

"These storms are impressive," said atmospheric scientist Zhe Feng at the Department of Energy's Pacific Northwest National Laboratory. "A storm can span the entire state of Oklahoma and last 24 hours as it propagates eastward from the Rocky Mountain foothills across the Great Plains, producing heavy rain along the way."

Understanding how storms changed in the past is an important step towards projecting future changes. The largest storms, especially, have been challenging to simulate.


Multiple storm systems converge over the Great Plains into immense thunderstorms that bring much of the spring rainfall.
Credit: Roy Kaltschmidt, ARM Climate Research Facility


"These storms bring well over half of the rain received in the central U.S. in the spring and summer," said atmospheric scientist Ruby Leung, a coauthor with Feng and others at PNNL. "But almost no climate model can simulate these storms. Even though these storms are big enough for the models to capture, they are more complicated than the smaller isolated thunderstorms or the larger frontal rainstorms that models are wired to produce."

Previous research had found more heavy springtime rain falling in the central United States in recent decades, but scientists did not know what types of storms were causing the increase. Different storm types might respond in their own unique ways as the climate warms, so the PNNL researchers set out to find out.

To do so, the team worked out a way to identify storms called mesoscale convective systems. This type of storm develops from smaller convective storms that aggregate to form the largest type of convective storms on Earth. They are best detected using satellites with a bird’s eye view from space. Feng transformed well-established satellite detection methods into a new technique that he then applied to rainfall measured by radars and rain gauges for the past 35 years. This allowed the researchers to identify thousands of the large convective storms and their rainfall east of the Rocky Mountains.
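
The release does not spell out the detection algorithm, but the core step -- picking out very large contiguous regions of heavy rain from a gridded field -- can be sketched simply. The following is a hedged illustration on synthetic data, in the spirit of connected-component detection of mesoscale convective systems; the field, threshold, and minimum size are assumptions, not the study's parameters.

```python
# Illustrative sketch only: find large contiguous heavy-rain regions in a 2-D
# gridded precipitation field, in the spirit of detecting mesoscale convective
# systems. Field, threshold, and minimum size are assumptions, not study values.
import numpy as np
from scipy import ndimage

def find_large_rain_systems(rain, rain_threshold, min_cells=500):
    """Label contiguous regions where rain exceeds rain_threshold and keep only
    those spanning at least min_cells grid cells."""
    heavy = rain >= rain_threshold                    # boolean mask of heavy rain
    labels, n = ndimage.label(heavy)                  # connected-component labeling
    sizes = ndimage.sum(heavy, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_cells]
    return np.where(np.isin(labels, keep), labels, 0)

# Synthetic stand-in for one radar/gauge snapshot: smoothed noise gives spatially
# coherent "storms" rather than salt-and-pepper pixels.
rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.random((400, 600)), sigma=10)
threshold = np.percentile(field, 95)                  # keep the wettest 5 percent of cells
systems = find_large_rain_systems(field, threshold)
print("large contiguous systems found:", len(np.unique(systems)) - 1)
```

Tracking such regions through time, as the study does across 35 years of data, would add a further step that links overlapping regions from one snapshot to the next.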

The results showed that the frequency of very long-lasting storms increased by about 4 percent per decade, most notably in the northern half of the central region -- just below the Great Lakes. The researchers classified the storms that produced the top five percent of rainfall as extreme events and found that such events have become more frequent in the last 35 years.

But what contributes to the changes in the frequency and characteristics of mesoscale convective systems? To find out, the researchers analyzed the region's meteorological environment. They found that the Southern Great Plains warms more than the ocean does.

This difference in temperature creates a pressure gradient between the Rocky Mountains and the Atlantic Ocean that induces stronger winds, which push moisture up from the Gulf of Mexico. The warmer, moister air converges over the Northern Great Plains, where it falls as heavy rain in massive storms.

Although these storms are occurring more often and producing heavier rainfall, whether they turn into floods depends on the details.

"Flooding depends not only on precipitation intensity and duration, but also how much water the ground can hold," said Leung. "Teasing out whether the observed changes in rain have led to increased flooding is complicated by reservoirs and land use. Both have the ability to modulate soil moisture and streamflow, hence flooding."

This work was supported by the Department of Energy Office of Science.




Contacts and sources:
Mary Beckman
Pacific Northwest National Laboratory



Citation: Zhe Feng, L. Ruby Leung, Samson Hagos, Robert A. Houze, Jr., Casey D. Burleyson, Karthik Balaguru. More frequent intense and long-lived storms dominate the springtime trend in central U.S. rainfall, Nature Communications Nov. 11, 2016, doi: 10.1038/NCOMMS13429.

Cool New Theory on Galaxy Formation

Giant galaxies may grow from cold gas that condenses as stars rather than forming in hot, violent mergers.

The surprise finding was made with CSIRO and US radio telescopes by an international team including four CSIRO researchers and published in the journal Science today.

The biggest galaxies are found at the hearts of clusters, huge swarms of galaxies.

"Until now we thought these giants formed by small galaxies falling together and merging," team member Professor Ray Norris of CSIRO and Western Sydney University said.

 An artist's impression of the Spiderweb galaxy sitting in a cloud of cold gas (blue).  

Credit: ESO/M. Kornmesser.


But the researchers, led by Dr Bjorn Emonts from the Centro de Astrobiología in Spain, saw something very different when they looked at a protocluster, an embryonic cluster, 10 billion light-years away.

This protocluster was known to have a giant galaxy called the Spiderweb forming at its centre.

Dr Emonts' team found that the Spiderweb is wallowing in a huge cloud of very cold gas that could be up to 100 billion times the mass of our Sun.

Most of this gas must be hydrogen, the basic material from which stars and galaxies form.

Earlier work by another team had revealed young stars all across the protocluster. The new finding suggests that "rather than forming from infalling galaxies, the Spiderweb may be condensing directly out of the gas," according to Professor Norris.

The astronomers didn't see the hydrogen gas directly but located it by detecting a tracer gas, carbon monoxide (CO), which is easier to find.

The Very Large Array telescope in the USA showed that most of the CO could not be in the small galaxies in the protocluster, while CSIRO's Australia Telescope Compact Array saw the large cloud surrounding the galaxies.

"This is the sort of science the Compact Array excels at," Professor Norris said.

Co-author Professor Matthew Lehnert from the Institut Astrophysique de Paris described the gas as "shockingly cold" – about minus 200 degrees Celsius.

"We expected a fiery process – lots of galaxies falling in and heating gas up," he said.

Where the carbon monoxide came from is a puzzle.

"It's a by-product of previous stars but we cannot say for sure where it came from or how it accumulated in the cluster core," Dr Emonts said.

"To find out we'd have to look even deeper into the Universe's history."

CSIRO researchers Ron Ekers, James Allison and Balthasar Indermuehle also contributed to this study of the Spiderweb.

The Australia Telescope Compact Array is part of the Australia Telescope National Facility, which is funded by the Australian Government for operation as a National Facility managed by CSIRO.



Contacts and sources:
Andrew Warren
CSIRO.

6,000 Years Ago The Sahara Desert Was Tropical, So What Happened?

As little as 6,000 years ago, the vast Sahara Desert was covered in grassland that received plenty of rainfall, but shifts in the world’s weather patterns abruptly transformed the vegetated region into some of the driest land on Earth. A Texas A&M University researcher is trying to uncover the clues responsible for this enormous climate transformation – and the findings could lead to better rainfall predictions worldwide.

Sahara desert from space.

Credit: NASA

Robert Korty, associate professor in the Department of Atmospheric Sciences, and colleague William Boos of Yale University have had their work published in the current issue of Nature Geoscience.

The two researchers looked into precipitation patterns of the Holocene era and compared them with present-day movements of the intertropical convergence zone, a large region of intense tropical rainfall. Using computer models and other data, the researchers found links to rainfall patterns thousands of years ago.

“The framework we developed helps us understand why the heaviest tropical rain belts set up where they do,” Korty explains.

Eastern Desert mountains


Credit: M.M. Minderhoud


“Tropical rain belts are tied to what happens elsewhere in the world through the Hadley circulation, but it won’t predict changes elsewhere directly, as the chain of events is very complex. But it is a step toward that goal.”

The Hadley circulation is a tropical atmospheric circulation that rises near the equator. It is linked to the subtropical trade winds and tropical rain belts, and it affects the position of severe storms, hurricanes, and the jet stream. Where it descends in the subtropics, it can create desert-like conditions. The majority of Earth’s arid regions are located beneath the descending branches of the Hadley circulation.

“We know that 6,000 years ago, what is now the Sahara Desert was a rainy place,” Korty adds.


The Sahara desert was once a tropical jungle.

Credit: Shutterstock

“It has been something of a mystery to understand how the tropical rain belt moved so far north of the equator. Our findings show that large migrations in rainfall can occur in one part of the globe even while the belt doesn’t move much elsewhere.

“This framework may also be useful in predicting the details of how tropical rain bands tend to shift during modern-day El Niño and La Niña events (the cooling or warming of waters in the central Pacific Ocean which tend to influence weather patterns around the world).”

The findings could lead to better ways to predict future rainfall patterns in parts of the world, Korty believes.

“One of the implications of this is that we can deduce how the position of the rainfall will change in response to individual forces,” he says. “We were able to conclude that the variations in Earth's orbit that shifted rainfall north in Africa 6,000 years ago were by themselves insufficient to sustain the amount of rain that geologic evidence shows fell over what is now the Sahara Desert. Feedbacks between the shifts in rain and the vegetation that could exist with it are needed to get heavy rains into the Sahara.”




Contacts and sources:
Texas A&M University 

Synchronized Swimming: How Startled Fish Shoals Effectively Evade Danger

Understanding the response dynamics of a startled shoal of fish reveals how orientation within the group affects escape chances, and may offer insight into emergency response planning.

As panic spreads, an entire shoal (collective) of fish responds to an incoming threat in a matter of seconds, seemingly as a single body, to change course and evade a threatening predator. Within those few seconds, the panic-infused information – more technically known as the startle response – spreads through the collective, warning fish within the group that would otherwise have no way to detect such a threat.

The ways in which this information spreads and the role played by position dynamics may help us better plan for emergencies.

(a) Illustration of an agent (depicted here as a fish) at a random orientation at the first time step (state 0, blue) and transitioning to the startled state (state 1, red). When startled directly by a threat, agents react by instantaneously reorienting towards a reference direction, with some noise. Without loss of generality, the reference direction is set at zero degrees. (b) Illustration of two agents. When startled indirectly by social cues, agents move in the average direction of their startled neighbors. In this illustration, only the difference in orientation between the two agents is shown, along with the noise added to that difference, which represents the noise in an agent's ability to sense its neighbor's orientation.

Credit: Amanda Chicoli/University of Maryland

Amanda Chicoli is a post-doctoral biologist at the Carnegie Institution for Science in Baltimore. She teamed up with Derek Paley, a professor in the Department of Aerospace Engineering at the University of Maryland, College Park, to study and model this transmission of information. Their work appears in this week’s Chaos, published by AIP Publishing.

“The initial idea for this project stemmed from my Ph.D. research in collective behavior, my own expertise in fish sensory biology, and Dr. Paley’s expertise in coupled oscillator systems,” said Chicoli. Systems of coupled oscillators, like tethered pendula or connected springs, are often used to model more complex systems in science, from molecules to shoals of fish.
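
The release does not reproduce the study's equations, but a minimal Kuramoto-style sketch shows how coupled phase oscillators (standing in for fish headings) can synchronize. Everything here -- the group size, coupling strength, frequency spread, and time step -- is an illustrative assumption, not the authors' model.

```python
# Minimal Kuramoto-style sketch: N phase oscillators (think headings of fish)
# pull toward their neighbors' average phase and gradually synchronize.
# All parameter values are illustrative assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 50, 1.5, 0.05, 400       # oscillators, coupling, time step, iterations
theta = rng.uniform(-np.pi, np.pi, N)      # random initial orientations
omega = rng.normal(0.0, 0.1, N)            # small spread of natural frequencies

for _ in range(steps):
    # each oscillator feels the mean sine of the phase differences to all others
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)

# order parameter r in [0, 1]: ~0 for incoherent headings, ~1 for a polarized group
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"alignment (order parameter) after {steps} steps: {r:.2f}")
```

The order parameter printed at the end is a standard measure of polarization: near zero when headings are incoherent and near one when the group is strongly aligned.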

Superficially, a single shoal of fish – or even just a network of matching springs – may appear simple. But the factors at play in the group response can become complex very quickly. In this study, Chicoli and Paley looked at the orientation of fish within the group and the influence it can have on how well the whole group escapes a threat.

“One of the main goals of this study was to investigate the role of polarization (alignment) on information transmission and collective response to a threat. The model predicts faster spreading of threat information in groups that are strongly aligned, and slower but more accurate responses in groups that align only after a threat has been detected,” said Chicoli.

In fact, it is this alignment (while swimming) that distinguishes a school of fish from a mere social gathering, or shoal, of fish. It may feel intuitive that a school of fish would share a common alignment for purposes such as quick evasion, but proving this is not so clear-cut.

Compass and stem plots illustrating how coupled agents synchronize orientation.
Credit: Amanda Chicoli/University of Maryland

“Alignment in fish shoals is generally thought to be involved in predator defense, however it is difficult to directly test this with experimental methods,” said Chicoli, describing their methods. “The model incorporates several theoretical decision-making processes: the decision on which direction to head, whether to respond to a threat at all, and with whom to interact.”

The specific model used, a susceptible-infected-removed (SIR) model, is more commonly applied to calculating how diseases spread, and it proved to apply accurately to fish shoals.

“The SIR model was chosen because it is a probabilistic model of disease (or information) transmission through a group. Many biological processes, including the startle from a predator, are probabilistic in nature,” Chicoli said. “There is some probability of noticing the predator, some probability of responding, and some probability of escaping. These probabilities can also be observed experimentally at the physiological and behavioral levels.”
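
The paper's exact formulation is not given in the release, so the following is only a hedged sketch of the SIR idea applied to startles: one agent startles first, each startled agent may alarm a few neighbors with some probability, and agents that have startled are "removed" and cannot startle again. The group size, neighbor count, and probabilities are assumptions chosen for illustration.

```python
# Illustrative SIR-style sketch of startle propagation through a group:
# S = not yet startled, I = currently startling, R = already startled (cannot re-startle).
# Probabilities and neighbor count are assumptions for illustration only.
import random

def simulate_startle(n_agents=100, n_neighbors=5, p_spread=0.3, p_recover=0.5, seed=1):
    random.seed(seed)
    state = ["S"] * n_agents
    state[0] = "I"                       # one agent detects the threat directly
    while "I" in state:
        new_state = state[:]
        for i, s in enumerate(state):
            if s == "I":
                # a startled agent alarms a few randomly chosen neighbors...
                for j in random.sample(range(n_agents), n_neighbors):
                    if state[j] == "S" and random.random() < p_spread:
                        new_state[j] = "I"
                # ...then stops startling and never startles again
                if random.random() < p_recover:
                    new_state[i] = "R"
        state = new_state
    return sum(s == "R" for s in state)  # total number of agents that startled

print("agents that startled:", simulate_startle())
```

Because agents never re-startle, a cascade can die out before it reaches the whole group, which is the behavior Chicoli describes below.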

“To the best of my knowledge… this is the first study to directly model information transmission in schooling,” Chicoli also said. “The results from testing our model suggest that the degree of alignment in groups of fish may involve a speed-accuracy trade-off; they might also be interpreted in terms of decision-making processes in groups.”

Specifically, their work revealed and quantified the relationship between various orientation configurations and the accuracy and speed of responses. They found some surprising correlations.

“We did not expect to find more accurate responses in the groups that were initially randomly oriented. We also did not expect that the highest probability of responding to the external threat investigated would correspond to fewer individuals responding,” said Chicoli. “This result is due to the nature of our startle model, which is based on disease spreading. Since individuals do not re-startle, if too many agents startle without spreading the information, the startles die out before propagating and fewer individuals react overall.”

As for the potential impact for people, particularly in emergency situations, these and future results can provide valuable information when designing and planning for such events.

Chicoli said, “Importantly, the model may be relevant to investigating how individuals will respond to the movement of others in a group in order to decide what direction to travel or what exit route to follow. For example, if you had a large crowd all heading to one exit door, when two doors are available, would an individual follow the group or choose the available door with no crowd?”

Looking to further work, the relative simplicity of the model offers plenty of ways in which it can be scaled and enhanced, both in the context of fish shoals and more broadly in terms of animal behavior. Information, after all, is nature’s most fundamental currency.

“In future versions, it would be of use to include sensory capabilities and limitations for different species of interest to assess how this affects the results, and to probe the role of different sensory modalities in information transmission,” Chicoli said.

“The model may be used to vary group size and the number of informed individuals over the parameter space to assess the conditions under which knowledgeable individuals have the most influence. A small number of knowledgeable individuals has previously been shown to elicit a response from both small and large groups.”

In an era with more and more focus not just on the increasing volume of available information, but also on the ways in which that information moves, there may be even more potential applications of these efforts yet to come.

Contacts and sources:
American Institute of Physics (AIP)


The article, "Probabilistic information transmission in a network of coupled oscillators reveals speed-accuracy trade-off in responding to threats," is authored by Amanda Chicoli and Derek A. Paley. The article will appear in the journal Chaos, by AIP publishing, on November 29, 2016 (DOI: 10.1063/1.4966682). http://www.scitation.aip.org/content/aip/journal/chaos/26/11/10.1063/1.4966682