Tuesday, May 29, 2018

Bacteria and Viruses Ejected from the Ocean Identified

Certain types of bacteria and viruses are readily ejected into the atmosphere when waves break, while other taxa are less likely to be transported into the air by sea spray, researchers reported May 22.

An interdisciplinary team of scientists from Scripps Institution of Oceanography, the University of California San Diego, and the J. Craig Venter Institute (JCVI) reached this conclusion after replicating a phytoplankton bloom in a unique ocean-atmosphere wave facility developed by scientists in the National Science Foundation-funded Center for Aerosol Impacts on Chemistry of the Environment (CAICE) on the Scripps campus. 

They found that bacteria and viruses coated by waxy substances or lipids are enriched in sea spray aerosols. According to the researchers, the results suggest that the water-repellent surfaces of these microbes make them more likely to be cast out of the ocean as waves break at the sea surface.

A 2014 experiment at Scripps identified microbes in sea spray aerosols.
Photo: Christina McCluskey/Colorado St. Univ.

The team in the National Science Foundation-funded study included chemists, oceanographers, microbiologists, geneticists, and pediatric medicine specialists who are attempting to understand how far potentially infectious bacteria and viruses can travel and if those that pose the greatest risks to public health are among those most likely to escape the ocean. In previous studies, individual members of the team have characterized sea spray aerosols, which form when waves break and bubbles burst at the ocean surface.

“Some of the bacteria we detected have been found on skin as well as in your gut, so they could be affecting your health—at this point, no one really knows the health effects of breathing in ocean microbes,” said Kim Prather, who has a joint appointment at UC San Diego’s Scripps Institution of Oceanography and the Department of Chemistry and Biochemistry. “We are trying to understand sources of environmental microbes using the unique ocean-atmosphere facilities we have developed here at Scripps. By breaking waves in fresh seawater in an isolated wave channel, UC San Diego is the only place in the world that can directly measure the microbes transferred from the ocean to the atmosphere.”

Prather’s research group has previously shown how microbes have a nearly worldwide reach, able to travel tens of thousands of kilometers on the wind, sometimes re-entering the ocean and re-emerging from it along the journey. As they do, their chemical attributes, their ability to infect, and their effects on cloud formation and precipitation can evolve.

“In CAICE, we realized that many of the chemical components found in the aerosols are derived from living microorganisms in the ocean, so one of our first goals was to find out which ones are present in the water and then understand which of them are able to hitch a ride on the aerosol particles,” said Michael Burkart, a researcher at the Department of Chemistry and Biochemistry at UC San Diego.

The study tapped into techniques developed in the Earth Microbiome Project, which was founded by co-author Rob Knight and others in 2010 to sample as many microbial communities as possible to understand the ecology of microbes and their interactions with humans.

“In the Earth Microbiome Project a key challenge is to model microbes across the planet,” said Knight, a UC San Diego professor of pediatrics and computer science and engineering with the UC San Diego Center for Microbiome Innovation. “The ocean spray results provide a completely new and unexpected mechanism for dispersal that we will have to take into account for a full understanding of Earth’s microbial biosphere.”

The researchers conducted the experiment by establishing blooms of phytoplankton over a 34-day period inside an ocean-atmosphere facility housed at the Hydraulics Laboratory at Scripps Oceanography. After several days, the replicated ocean, which scientists call a mesocosm, began emitting bacterial taxa such as Actinobacteria and Corynebacteria into the air. Relatively few viruses became airborne compared with bacteria, but the strains detected in the air, such as Herpesvirales, had a similarly water-repellent surface that enables transfer from the ocean to the atmosphere.

The team at UC San Diego will now begin studying the potential human health effects of the ocean microbes most often found in sea spray aerosols. Little is known about the health effects—good or bad—of breathing ocean air enriched in microbes and other biological material. The researchers reported the presence of strains not often found in seawater, such as Legionella and an avian strain of E. coli, which they said could be evidence of contamination in certain coastal waters. Knowing which pathogens from pollution runoff become aerosolized could improve the understanding of human exposure pathways for those living near the coast.

“No one expected that it might be an evolved strategy for particular bacteria and viruses to get themselves into the ocean spray,” Knight said. “Our next challenge is to figure out why they’re doing it, and when it’s good or bad for our health, or even for the planet’s climate.”

Metagenomic analyses conducted by JCVI leverage methods and funding from the Global Ocean Sampling project, a long-running metagenomic survey of marine environments that was funded by the Beyster Family Fund of the San Diego Foundation and the Life Technologies Foundation.

Members of the research team include lead author Jennifer Michaud, Michael Burkart, Luke Thompson, Zhenjiang Zech Xu, Christopher Lee, Kevin Pham, Charlotte Beall, Francesca Malfatti, and Farooq Azam of UC San Diego; and Christopher Dupont, Drishti Kaul, Josh Espinoza, and R. Alexander Richter of the J. Craig Venter Institute in La Jolla, Calif.

Contacts and sources:
Robert Monroe
Heather Kowalski, JCVI
Scripps Institution of Oceanography

Citation: “Taxon-specific aerosolization of bacteria and viruses in an experimental ocean atmosphere mesocosm,” Nature Communications, May 22, 2018. The study was supported by CAICE, which is led by Prather and was designated in 2013 as an NSF Center for Chemical Innovation.

Going with the Flow: Jellyfish Invasion of Europe a Success

Scientists present the first comprehensive inventory of the invasive comb jelly Mnemiopsis leidyi in Europe.

Twelve years ago, the comb jelly Mnemiopsis leidyi, originating from the North American East Coast, appeared in northern European waters. Based on the first comprehensive data collection on the occurrence of this invasive jellyfish in Europe, scientists from 19 countries led by the GEOMAR Helmholtz Centre for Ocean Research Kiel and Technical University of Denmark have now shown that ocean currents play a key role for this successful invasion. The study has been published in the international journal Global Ecology and Biogeography.

Occurrence of Mnemiopsis leidyi in western Eurasian seas from 1990 to November 2016. The map is based on 12,400 geo-referenced observations (black dots), with regions of presence (red) and absence (dark blue) highlighted. Average currents (white arrows) are shown to depict general circulation patterns.
Graphic: Cornelia Jaspers/GEOMAR, DTU Aqua

When the American comb jelly Mnemiopsis leidyi, also known as the sea walnut, conquered the Black Sea as a new habitat 35 years ago, the ecosystem there changed lastingly. The economically important anchovy stocks collapsed because the fish had to compete with the jellyfish for food. Against this background, scientists, fisheries associations and environmental authorities were alarmed when, in 2005, the sea walnut also appeared in northern European waters. Although effects similar to those in the Black Sea have not yet been observed in the Baltic and North Seas, scientists are closely monitoring the development, particularly since many questions about invasion pathways remain unanswered.

A total of 47 scientists from 19 countries have now published the first comprehensive inventory of Mnemiopsis leidyi in European waters in the international journal Global Ecology and Biogeography. With this data, the interdisciplinary team of authors shows that ocean currents as pathways of invasive jellyfish and other drifting organisms in the seas have been significantly underestimated so far. “To explain the invasion of alien species in marine ecosystems, everybody is focused on transport in or on ships. That is important, but does not explain the whole phenomenon”, says lead author Dr. Cornelia Jaspers, Biological Oceanographer at the GEOMAR Helmholtz Centre for Ocean Research Kiel and at the Technical University of Denmark in Lyngby.

As the basis for their study, the participants collected all reliable data on the occurrence of the American comb jelly in European waters since 1990 - a total of more than 12,000 geo-referenced data points. “Even this inventory is new, because so far there were only regional studies on the distribution”, explains Dr. Jaspers.

In cooperation with oceanographers and ocean modelers, the team linked data on the occurrence of Mnemiopsis leidyi to prevailing currents in European waters. The analysis included not only the flow directions and their strength, but also their stability. The models showed that the steady flow pattern of the southern North Sea closely links the region with much of northwestern Europe, including the Norwegian coast and even the Baltic Sea.

Mnemiopsis leidyi. 
Photo: Cornelia Jaspers/GEOMAR, DTU Aqua


Due to this close connection, not only invasive jellyfish but drifting non-native species in general can be spread over long distances within a very short time. “Using the imported sea walnut as example, we were able to show that these species can travel up to 2000 kilometers within three months,” says Hans-Harald Hinrichsen, physical oceanographer at GEOMAR. Species that arrive in ports in the southwestern North Sea, such as Antwerp or Rotterdam, reach Norway and the Baltic Sea very quickly.
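
A quick plausibility check of that spreading speed, assuming "three months" means roughly 90 days, shows the implied average drift is well within the range of typical near-surface coastal currents:

```python
# Back-of-the-envelope check of the quoted 2,000 km in ~3 months.
km, days = 2000, 90
speed_m_s = km * 1000 / (days * 24 * 3600)
print(f"{km} km in {days} days ~ {speed_m_s:.2f} m/s average drift")
# ~0.26 m/s, comparable to typical near-surface current speeds
```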

To confirm this connection, the authors used a natural experiment. After a very cold winter season in early 2010, the jellyfish disappeared from the Baltic Sea and large parts of northwestern Europe in 2011 and stayed away until 2013. But after the warm winter of 2013/14, a new population of Mnemiopsis established itself in the Baltic very quickly. “This new population was of a different genotype than the first invaders. Thus, within a short time, a new immigration took place, driven by the prevailing ocean currents,” says Dr. Jaspers. Perhaps the new arrivals from the second invasion wave are even better adapted to the local conditions.

The authors therefore argue that researchers should not only keep an eye on transport routes across oceans, but also better investigate how species spread within a region. “The study shows that a single gateway, a single port for example, in which ships with invasive species arrive, is enough to redistribute non-native species across entire regions,” she concludes.



Contacts and sources:
Jan Steffen
Helmholtz Centre for Ocean Research Kiel (GEOMAR)


Citation: Jaspers, C. et al. (2018): Ocean current connectivity propelling the secondary spread of a marine invasive comb jelly across western Eurasia. Global Ecology and Biogeography, https://doi.org/10.1111/geb.12742

USA Consumers to Pay Cost of Flooding in China

Global warming could intensify river floods, leading to regional production losses worldwide. This would not only hamper local economies around the globe – the effects might also propagate through the global network of trade and supply chains, a study now published in Nature Climate Change shows. It is the first to assess this effect for flooding on a global scale, using a newly developed dynamic economic model.

It finds that economic flood damages in China, which could increase by 80 percent within the next 20 years without further adaptation, might also affect EU and US industries. The US economy might be particularly vulnerable due to its unbalanced trade relationship with China. Contrary to US President Trump’s current tariff sanctions, the study suggests that building stronger and thus more balanced trade relations might be a useful strategy to mitigate economic losses caused by intensifying weather extremes.


Transfer of economic losses due to river floods through trade networks from China to the US.
Image: PIK (cutout)


“Climate change will increase flood risks already in the next two decades – and this is not only a problem for millions of people but also for economies worldwide,” says Anders Levermann, project leader from the Potsdam Institute for Climate Impact Research (PIK) in Germany and the Lamont Doherty Earth Observatory, Columbia University in New York.

Without further adaptation measures, climate change will likely increase economic losses worldwide from fluvial floods by more than 15 percent, accumulating to a total of about 600 billion US dollars within the next 20 years. While the bulk of this total is independent of climate change, the rise is not. “Not only local industries will be affected by these climate impacts,” says Sven Willner, lead author of the study from PIK. “Through supply shortages, changes in demand and associated price signals, economic losses might be down-streamed along the global trade and supply network affecting other economies on a global scale – we were surprised about the size of this rather worrying effect.”


Credit: Potsdam Institute


World Bank Economist: “Natural disasters are not local events anymore”

The World Bank’s lead economist with the Global Facility for Disaster Reduction and Recovery, Stéphane Hallegatte, who pioneered research in the area of indirect disaster effects but was not involved in the present study, comments: “This work combines two very innovative lines of work: global risk assessment for natural hazards and network theory to understand how localized shocks propagate in time and space. It contributes to scientific progress in multiple ways, but one of the most important policy messages for me is that the world is so interconnected that natural disasters are not local events anymore: everybody can be affected by a disaster occurring far away. It means that risk management is more than each country’s responsibility: it has become a global public good.”

The study is based on projections of near-future river floods on a regional scale that are already determined by the greenhouse gas emissions humans have emitted into the atmosphere so far – impacts after 2035 depend on additional future emissions. The authors investigate the overall economic network response to river flood-related shocks, taking into account the inner dynamics of international trade. They do so with the specifically designed Acclimate model, a new dynamic economic computer simulation.
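
The core idea of down-streamed losses can be illustrated with a toy fixed-point calculation. The sketch below is a minimal stand-in, assuming an invented four-region trade network, a hypothetical 10 percent production shock in China, and a made-up pass-through factor; the Acclimate model itself resolves prices, demand changes, and timing explicitly.

```python
# Toy propagation of a production shock through a trade network.
# trade[i][j]: share of region j's inputs supplied by region i (hypothetical)
trade = {
    "CN": {"US": 0.20, "EU": 0.12, "IN": 0.08},
    "US": {"EU": 0.05, "CN": 0.04},
    "EU": {"US": 0.06, "CN": 0.05},
    "IN": {},
}
regions = list(trade)
PASS_THROUGH = 0.5            # fraction of a supplier's loss felt downstream

direct = {"CN": 0.10, "US": 0.0, "EU": 0.0, "IN": 0.0}  # 10% shock in China
loss = dict(direct)
for _ in range(50):           # iterate: total loss = direct + inherited loss
    loss = {j: direct[j] + PASS_THROUGH *
               sum(trade[i].get(j, 0.0) * loss[i] for i in regions)
            for j in regions}

for r in regions:
    print(f"{r}: total output loss {loss[r]:.2%}")
```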

Without major adaptation, China could suffer biggest direct losses

Without major adaptation measures, China could suffer the biggest direct economic losses from river floods – adding up to a total of more than 380 billion US dollars over the next 20 years, including natural flood events not related to global warming. This corresponds to about 5 percent of China’s annual economic output. About 175 billion US dollars of the total losses in China will likely occur due to climate change. “This is a lot,” says Willner, “and it is only the effect of river floods, not even taking into account other climate change impacts such as storms and heat waves.”

The European Union and the United States, by contrast, might be affected predominantly by indirect losses passed down along the global trade and supply network. In the US, direct losses might be around 30 billion US dollars, whereas indirect losses might reach 170 billion US dollars in the next 20 years. “The EU will suffer less from indirect losses caused by climate-related flooding in China due to its even trade balance. It will suffer when flooded regions in China temporarily fail to deliver, for instance, parts that European companies need for their production, but on the other hand Europe will profit from filling climate-induced production gaps in China by exporting goods to Asia. This leaves the European economy currently more climate-prepared for the future,” says Willner. “In contrast, the US imports much more from China than it exports to this country. This leaves the US more susceptible to climate-related risks of economic losses passed down along the global supply and trade chain.”

Global trade allows global buffering – India could be a winner

“More intense global trade can help to mitigate losses from local extreme events by facilitating market adjustments,” explains co-author Christian Otto from the Potsdam Institute and Columbia University. “When a supplier is impacted by a disaster hampering its production, international trade increases the chance that other suppliers can jump in and temporarily replace it. Interestingly, the global increase of climate-induced river floods could even cause net gains for some economies such as India, South East Asia, or Australia.”

The study’s focus is not on damages to production facilities of businesses, but to what extent a regional economy stagnates due to flooding. “We adopted a rather optimistic view when it comes to the flexibility and promptness of shifting production towards non-affected suppliers after an extreme weather event,” explains Christian Otto. “Hence our study rather underestimates than overestimates the production losses – things could eventually turn out to be worse.”

Trump’s tariffs might impede climate-proofing the US economy

“We find that the intensification of the mutual trade relation with China leaves the EU better prepared against production losses in Asia than the US. The prospect that the US will be worse off can be traced back to the fact that it is importing more products from China than it is exporting,” says PIK’s Anders Levermann. “Interestingly, such an unbalanced trade relation might be an economic risk for the US when it comes to climate-related economic losses. In the end, Trump’s tariffs might impede climate-proofing the US economy.”

For resolving this risk and balancing out the negative trade relation, there are generally two options: either isolation or more trade. “By introducing a tariff plan against China, Trump currently goes for isolation,” says Levermann. “But Trump’s tariff sanctions are likely to leave US economy even more vulnerable to climate change. As our study suggests, under climate change, the more reasonable strategy is a well-balanced economic connectivity, because it allows to compensate economic damages from unexpected weather events – of which we expect more in the future.”



Contacts and sources:
Potsdam Institute for Climate Impact Research (PIK)


Citation: Willner, S. N., Otto, C., Levermann, A. (2018): Global economic response to river floods. Nature Climate Change; DOI: 10.1038/s41558-018-0173-2

Unprecedented Self-Healing Material for Robots

Many natural organisms have the ability to repair themselves. Now, manufactured machines will be able to mimic this property. In findings published this week in Nature Materials, researchers at Carnegie Mellon University have created a self-healing material that spontaneously repairs itself under extreme mechanical damage.

This soft-matter composite material is composed of liquid metal droplets suspended in a soft elastomer. When damaged, the droplets rupture to form new connections with neighboring droplets and reroute electrical signals without interruption. Circuits produced with conductive traces of this material remain fully and continuously operational when severed, punctured, or when material is removed.

Source: Carnegie Mellon University College of Engineering

“Other research in soft electronics has resulted in materials that are elastic and deformable, but still vulnerable to mechanical damage that causes immediate electrical failure,” said Carmel Majidi, an associate professor of mechanical engineering. “The unprecedented level of functionality of our self-healing material can enable soft-matter electronics and machines to exhibit the extraordinary resilience of soft biological tissue and organisms.”

Applications for its use include bio-inspired robotics, human-machine interaction, and wearable computing. Because the material also exhibits high electrical conductivity that does not change when stretched, it is ideal for use in power and data transmission. Think of a first responder robot that can rescue humans during an emergency without sustaining damage, a health-monitoring device on an athlete during rigorous training, or an inflatable structure that can withstand environmental extremes on Mars.

Contacts and sources:
 College of Engineering, Carnegie Mellon University


Citation: “An Autonomously Electrically Self-Healing, Liquid Metal-Elastomer Composite for Robust Soft-Matter Robotics and Electronics,” Nature Materials, DOI: 10.1038/s41563-018-0084-7.

Twitter Rhetoric May Signal If a Protest Will Turn Violent

Moral rhetoric on Twitter may signal whether a protest will turn violent, according to a University of Southern California (USC)-led study.

USC researchers also found that people are more likely to endorse violence when they moralize the issue they are protesting — that is, when they see it as an issue of right and wrong. This is especially true when they believe that others in their social network moralize the issue, too.

“Extreme movements can emerge through social networks,” said the study’s corresponding author, Morteza Dehghani, a researcher at the Brain and Creativity Institute at USC. “We have seen several examples in recent years, such as the protests in Baltimore and Charlottesville, where people’s perceptions are influenced by the activity in their social networks. People identify others who share their beliefs and interpret this as consensus. In these studies, we show that this can have potentially dangerous consequences.”

USC researchers studied the wording of tweets surrounding certain topics and searched for key phrases related to morality.
Illustration/Nina Dehghani

The scientists analyzed 18 million tweets posted during the 2015 Baltimore protests over the death of 25-year-old Freddie Gray, who died as police took him to jail. Researchers used a deep neural network — an advanced machine learning technique — to detect moralized language on Twitter.

They investigated the association between moral tweets and arrest rates, a proxy for violence. This analysis showed that the number of hourly arrests made during the protests was associated with the number of moralized tweets posted in previous hours.

Tweets containing moral rhetoric nearly doubled on days when clashes between protesters and police turned violent.
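
A minimal sketch of this kind of lagged association test is shown below, assuming synthetic hourly series as stand-ins for the study's tweet and arrest counts; it simply correlates moralized-tweet counts with arrests in the following hours.

```python
# Lagged correlation between hourly moralized-tweet and arrest counts.
import numpy as np

rng = np.random.default_rng(42)
hours = 24 * 7
moral_tweets = rng.poisson(50, hours).astype(float)
arrests = rng.poisson(3, hours).astype(float)
arrests[2:] += 0.05 * moral_tweets[:-2]       # plant a 2-hour-lag dependence

for lag in range(1, 5):                       # tweets lead arrests by `lag` hours
    r = np.corrcoef(moral_tweets[:-lag], arrests[lag:])[0, 1]
    print(f"lag {lag} h: r = {r:+.2f}")
```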

The study was published on May 23 in Nature Human Behaviour.

Social media posts as a barometer for activism

Social media sites such as Twitter have become a significant platform for activism and a source for data on human behavior. That makes them ripe for research.

Recent examples of movements tied to social media include the #marchforourlives effort to seek gun control, the #metoo movement against sexual assault and harassment, and #blacklivesmatter, a campaign against systemic racism that rose to prominence in 2014 after the police shooting death of Michael Brown, 18, in Ferguson, Mo.

An example involving more violence is the Arab Spring revolution, which began in Tunisia in late 2010 and set off protests in Egypt, Libya and other nations, forcing changes in their leadership. In Syria, clashes escalated into a war that has killed hundreds of thousands of people and displaced millions more.

Detecting moralization online

The scientists developed a model for detecting moralized language based on a prior deep-learning framework that can reliably identify text evoking moral concerns associated with different types of moral values and their opposites. “Moral Foundations Theory” defines these dueling values:

Care/harm
Fairness/cheating
Loyalty/betrayal
Authority/subversion
Purity/degradation

Here are two examples of moralized language and the moral foundations with which they are associated:

Sample Tweet 1:

Why does the opposition speak only abt black on black crime as a rebuttal to police brutality/murder? #AllCrimeMatters, right? #FreddieGray

Moral Foundations: Fairness and Loyalty

Sample Tweet 2:

regardless of how anyone feels, prayers to the police force and their family

Moral Foundations: Care and Purity
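
As a toy illustration of mapping text to these foundations, the sketch below tags a tweet by dictionary lookup. The word lists are invented for illustration; the study itself used a deep neural network trained on annotated tweets, not keyword matching.

```python
# Toy dictionary-lookup tagging of text with moral foundations.
LEXICON = {
    "care":      {"prayers", "protect", "harm", "hurt"},
    "fairness":  {"justice", "equal", "brutality", "cheat"},
    "loyalty":   {"betray", "solidarity", "together"},
    "authority": {"law", "order", "obey", "riot"},
    "purity":    {"prayers", "sacred", "degrade", "sick"},
}

def foundations(text):
    words = set(text.lower().split())
    return sorted(f for f, keywords in LEXICON.items() if words & keywords)

tweet = "regardless of how anyone feels, prayers to the police force and their family"
print(foundations(tweet))   # ['care', 'purity'] with this toy lexicon
```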

Moralization and political polarization are exacerbated by online “echo chambers,” researchers say. These are social networks where people connect with like-minded people while distancing themselves from those who don’t share their beliefs.

Protests, social media and violence

Social media data help researchers illuminate real-world social dynamics and test hypotheses, explained Joe Hoover, a lead author of the paper and doctoral candidate in psychology at the USC Dornsife College of Letters, Arts and Sciences. “However, as with all observational data, it can be difficult to establish the statistical and experimental control that is necessary for drawing reliable conclusions.”

To compensate, the scientists conducted a series of controlled behavioral studies, each with more than 200 participants. Researchers first asked participants to read a paragraph about the 2017 clashes over the removal of Confederate monuments in Charlottesville, Va. Then the researchers asked how much participants agreed or disagreed with statements about the use of violence against far-right protesters.

The more certain people were that many others in their network shared their views, the more willing they were to consider the use of violence against their perceived opponents, the scientists found.

The work was supported by a grant from the U.S. Department of Defense. Other study co-authors were Marlon Mooijman of Northwestern University, Hoover from USC Dornsife and the Brain and Creativity Institute at USC, and Ying Lin and Heng Ji of the Rensselaer Polytechnic Institute.



Contacts and sources:
Emily Gersema
University of Southern California


Citation: Moralization in social networks and the emergence of violence during protests.
Marlon Mooijman, Joe Hoover, Ying Lin, Heng Ji, Morteza Dehghani. Nature Human Behaviour, 2018; DOI: 10.1038/s41562-018-0353-0

Video Reconstructed from What a Rat Saw

Using machine-learning techniques, a research team has reconstructed a short movie of small, randomly moving discs from signals produced by rat retinal neurons. Vicente Botella-Soler of the Institute of Science and Technology Austria and colleagues present this work in PLOS Computational Biology.

Reconstructing a video from the retinal activity. Left: two example stimulus frames displayed to the rat retina. Middle and right: Reconstructions obtained with two different methods (sparse linear decoding in the middle and nonlinear decoding on the right). Green circles denote true disc positions. 
Credit: Botella-Soler et al.


Neurons in the mammalian retina transform light patterns into electrical signals that are transmitted to the brain. Reconstructing light patterns from neuron signals, a process known as decoding, can help reveal what kind of information these signals carry. However, most decoding efforts to date have used simple stimuli and have relied on small numbers (fewer than 50) of retinal neurons.

In the new study, Botella-Soler and colleagues examined a small patch of about 100 neurons taken from the retina of a rat. They recorded the electrical signals produced by each neuron in response to short movies of small discs moving in a complex, random pattern. The researchers used various regression methods to compare their ability to reconstruct a movie one frame at a time, pixel by pixel.

The research team found that a mathematically simple linear decoder produced a reasonably accurate reconstruction of the movie. Nonlinear methods reconstructed the movie more accurately still, and two very different nonlinear methods, neural networks and kernelized decoders, performed similarly well.

The researchers demonstrated that, unlike linear decoders, nonlinear methods were sensitive to each neuron’s signal in the context of that neuron’s previous signals. They hypothesized that this history dependence enabled the nonlinear decoders to ignore spontaneous neuron signals that do not correspond to an actual stimulus, whereas a linear decoder might “hallucinate” stimuli in response to such spontaneously generated neural activity.
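
For intuition, here is a minimal sketch of pixel-wise linear decoding with ridge regression on synthetic stand-in data; the shapes, lag count, and penalty are assumptions, not the study's settings. The lagged design matrix gives even a linear decoder access to each neuron's recent history, though per the study only the nonlinear decoders used that context effectively.

```python
# Pixel-wise linear (ridge) decoding of movie frames from spike counts.
import numpy as np

rng = np.random.default_rng(0)
T, N, P, L = 2000, 100, 64, 5        # frames, neurons, pixels, history lags

spikes = rng.poisson(1.0, (T, N)).astype(float)   # stand-in spike counts
movie = rng.normal(size=(T, P))                   # stand-in stimulus frames

# Design matrix with each neuron's last L frames of activity.
X = np.hstack([np.roll(spikes, k, axis=0) for k in range(L)])[L:]
Y = movie[L:]

lam = 10.0                                        # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
reconstruction = X @ W                            # one decoded frame per row
print(reconstruction.shape)                       # (1995, 64)
```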

These findings could pave the way to improved decoding methods and a better understanding of what different types of retinal neurons do and why they are needed. As a next step, Botella-Soler and colleagues will investigate how well decoders trained on a new class of synthetic stimuli generalize to both simpler and naturally complex stimuli.

“I hope that our work showcases that with sufficient attention to experimental design and computational exploration, it is possible to open the box of modern statistical and machine learning methods and actually interpret which features in the data give rise to their extra predictive power,” says study senior author Gašper Tkačik. “This is the path to not only reporting better quantitative performance, but also extracting new insights and testable hypotheses about biological systems.”

Contacts and sources:
Institute of Science and Technology Austria

Citation: Nonlinear decoding of a complex movie from the mammalian retina
Botella-Soler V, Deny S, Martius G, Marre O, Tkačik G (2018)  PLoS Comput Biol 14(5): e1006057. doi.org/10.1371/journal.pcbi.1006057

Dinosaur Dandruff Discovered

Scientists at University College Cork (UCC) believe they have discovered 125 million-year-old dinosaur dandruff.

The flaky find was made in the preserved plumage of feathered dinosaurs Microraptor, Beipiaosaurus and Sinornithosaurus as well as early birds.

The palaeontologists say the discovery represents the first evidence of how dinosaurs shed their skin.

The study involved examining dandruff from modern birds, which was then compared with the fossil cells using electron microscopy.

The scientists found tough cells called corneocytes, the same type of cell found in human dandruff.

UCC's Dr Maria McNamara, who led the study.
Photo: John Sheehan.


The finding is evidence that this feature of modern skin may have evolved in the late Middle Jurassic period.

"There was a burst of evolution of feathered dinosaurs and birds at this time, and it's exciting to see evidence that the skin of early birds and dinosaurs was evolving rapidly in response to bearing feathers," said Dr Maria McNamara who led the research.

The research, published in the journal Nature Communications, also found remarkable similarities between the skin of the dinosaurs and that of modern birds.

"The fossil cells are preserved with incredible detail - right down to the level of nanoscale keratin fibrils," said Dr McNamara.

"What's remarkable is that the fossil dandruff is almost identical to that in modern birds, even the spiral twisting of individual fibres is still visible."

The research also provides the first evidence that dinosaurs shed their skin in small flakes, in the same way as modern birds and mammals do, rather than whole or in large pieces as many modern reptiles do.



Contacts and sources:
Will Goodbody
University College Cork


Citation: Fossilized skin reveals coevolution with feathers and metabolism in feathered dinosaurs and early birds.
Maria E. McNamara, Fucheng Zhang, Stuart L. Kearns, Patrick J. Orr, André Toulouse, Tara Foley, David W. E. Hone, Chris S. Rogers, Michael J. Benton, Diane Johnson, Xing Xu, Zhonghe Zhou. Nature Communications, 2018; 9 (1) DOI: 10.1038/s41467-018-04443-x

Science Proves Why It Pays To Be Humble


Scientists show that modesty can evolve naturally: a new model explains why we obscure positive traits and good deeds.

Why do people make anonymous donations, and why does the public perceive this as admirable? Why do we downplay our interest in a potential partner, even at the risk of missing out on a relationship?

A team of scientists consisting of Christian Hilbe, a postdoc at the Institute of Science and Technology Austria (IST Austria), and Moshe Hoffman and Martin Nowak, both at Harvard University, has developed a novel game-theoretic model that captures these behaviors and enables their study. Their new model is the first to include the idea that hidden signals, when discovered, provide additional information about the sender. They use this idea to explain under which circumstances people have an incentive to hide their positive attributes.

Credit: IST Austria

People often take actions that may be costly at first, but lead to reputational benefits in the long run. However, if good reputations are important, why are there numerous situations in which people hide accomplishments or good characteristics, like when we donate anonymously? Similarly, we often emphasize subtlety in art or fashion, avoid appearing over-eager, or otherwise obscure something positive. Why do others consider this behavior commendable? The team’s key insight into this societal puzzle is that “burying” a signal (i.e. obscuring information) is a signal in and of itself. This additional signal can have several interpretations: for instance, the sender may be unconcerned with those who might have been impressed, but who miss subtle messages (like an artist disregarding the philistine masses). Alternatively, the sender might be confident that those who matter to them will find out anyway (for instance, only those who have the taste and/or necessary wealth will recognize a designer bag without an obvious logo).


The scientists succeeded in formalizing these ideas in a new evolutionary game theory model they call the “signal-burying game”, which they detail in a paper published today in Nature Human Behaviour. In this game, there are different types of senders (high, medium, and low), and different types of receivers (selective and unselective). The sender and the receiver do not know the other’s type. To convey their type, senders may pay a cost to send a signal. Signals may be sent clearly or be buried. When a signal is buried, it has a lower probability of being observed by any kind of receiver. In particular, buried signals entail the risk that receivers will never learn that the sender has sent a signal at all. After the sender has made his signaling decision, receivers decide whether or not to engage in an economic interaction with the sender. The game has an element of risk, and therefore, senders and receivers must develop strategies to maximize their payoff.

“We wanted to understand what strategies would evolve naturally and be stable,” explains Christian Hilbe, co-first author of the paper and postdoc in the research group of Krishnendu Chatterjee at IST Austria. “In particular, is it possible to have a situation where high-level senders always choose to bury their signals, mid-level senders always send a clear signal, and low-level senders send no signal at all?” This would correspond to situations that come up in real life, and is one of the key distinguishing features of their model: they allow for strategies that target specific receivers at the risk of losing others. In their simulations, players started off neither sending nor receiving signals. Then, with some probability, a player either selects a random strategy (representing mutation) or imitates another player (representing a learning process biased towards strategies with higher payoff). In their simulations, the scientists found that populations quickly settled at the strategy described above.
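
The flavor of these dynamics can be sketched in a few lines. The payoff numbers, mutation rate, and Fermi-rule imitation step below are assumptions for illustration, covering a single sender type; in the paper, payoffs depend on sender and receiver types and on signalling costs.

```python
# Mutation-plus-imitation dynamics over three signalling strategies.
import math
import random

STRATEGIES = ["no_signal", "clear", "buried"]
PAYOFF = {"no_signal": 1.0, "clear": 1.2, "buried": 1.5}   # hypothetical

def step(population, mu=0.01, beta=5.0):
    i = random.randrange(len(population))
    if random.random() < mu:                  # mutation: adopt a random strategy
        population[i] = random.choice(STRATEGIES)
    else:                                     # imitate a random role model,
        j = random.randrange(len(population)) # biased toward higher payoff
        gain = PAYOFF[population[j]] - PAYOFF[population[i]]
        if random.random() < 1.0 / (1.0 + math.exp(-beta * gain)):
            population[i] = population[j]

population = ["no_signal"] * 100              # start: nobody signals
for _ in range(20000):
    step(population)
print({s: population.count(s) for s in STRATEGIES})
```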


The team also developed several extensions to the model, enabling them to cover more general scenarios. First, they added different levels of obscurity: senders could choose from several revelation probabilities. “We found that in this case, high senders tend to be modest...but not too modest,” adds Hilbe. “Even if you’re humble, you don’t try to be holier-than-thou.” It is moreover possible to increase the number of types of senders and receivers, as well as introduce subtleties in the preferences of the receivers.


Using their new model, Hilbe, Hoffman, and Nowak were able to offer a new perspective on various common situations: a donor giving anonymously, an academic not disclosing their degree, an artist creating art with hidden messages, and a possible partner hiding their interest, among others. Evolutionary game theory shows that, in the end, these puzzling social behaviors make sense.

Contacts and sources:
Stefan Bernhardt / Elisabeth Guggenberger
Institute of Science and Technology Austria (IST Austria)



Citation: “The signal-burying game can explain why we obscure positive traits and good deeds,” Moshe Hoffman, Christian Hilbe and Martin A. Nowak, Nature Human Behaviour, DOI: 10.1038/s41562-018-0354-z

Dinosaur-Killing Asteroid Caused 100,000-Year Temperature Spike of 5 Degrees Celsius

When the Chicxulub asteroid smashed into Earth about 66 million years ago, the event drove an abrupt and long-lasting era of global warming, a new study reports. The results, published in the May 25 issue of Science, suggest that the asteroid impact caused a rapid temperature increase of 5 degrees Celsius (roughly 9 degrees Fahrenheit) that endured for about 100,000 years.

The monumental event, which caused the extinction of many animals including dinosaurs, is a rare case where Earth's systems were perturbed at a rate greater than today's global temperature changes driven by human activity. It therefore provides valuable insights into what may happen from sudden, extreme environmental changes, say the study's authors.

The Chicxulub impact may have driven an abrupt global temperature increase that lasted for 100,000 years. This painting by Donald E. Davis depicts an asteroid slamming into the tropical, shallow seas of the sulfur-rich Yucatan Peninsula in what is today southeast Mexico.
Credit: Donald E. Davis via NASA

Ken MacLeod of the University of Missouri, an author of the study, said, "I think the impact and its aftermath is the single most spectacularly catastrophic event in Earth history."

The effects of the impact on climate have remained unclear, in part because the Chicxulub event disturbed sediments in many places worldwide. Scientists analyze layers of rock, sediment and the shells of marine animals to understand past climates. These features of the geological record provide important clues about the environment, such as the abundance of atmospheric carbon dioxide, at the time each was deposited in the ground.

MacLeod and his team discovered and analyzed a robust collection of well-preserved, sand-grain-sized remains of fish teeth, scales and bone from El Kef in Tunisia that were buried around the time of the asteroid's impact. These samples retain oxygen isotopic signatures that reveal the temperature at the time the animals were alive.
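
As a worked illustration of how such signatures translate into temperature, the sketch below applies one classic phosphate paleothermometry calibration (Longinelli & Nuti, 1973) to hypothetical values; the δ18O numbers and the assumed seawater value are invented, not the study's measurements.

```python
# Phosphate oxygen-isotope paleothermometry, classic linear calibration.
def temperature_c(d18O_phosphate, d18O_seawater):
    """Water temperature (deg C) from biogenic phosphate d18O (per mil)."""
    return 111.4 - 4.3 * (d18O_phosphate - d18O_seawater)

d18O_sw = -1.0                                # assumed ice-free-world seawater
pre_impact = temperature_c(19.0, d18O_sw)     # hypothetical fish-debris value
post_impact = temperature_c(19.0 - 1.16, d18O_sw)  # ~1.2 per-mil lighter

print(f"{pre_impact:.1f} C -> {post_impact:.1f} C "
      f"(+{post_impact - pre_impact:.1f} C)")  # a ~5 C warming
```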

The authors were able to collect samples from sediments spanning the time leading up to the Chicxulub impact until long afterward. Based on their analysis, they propose that global temperatures abruptly increased at the time of the impact by about 5 degrees Celsius and did not cool to previous values for roughly 100,000 years.

"The clear [change in the geological record] we found suggests temperature change at the boundary was as dramatic as one could imagine," said MacLeod, who compares it to the changes seen globally today.

"In less than two centuries, we've altered the composition of the atmosphere and reworked land surfaces on a global scale. The [perturbation from the Chicxulub impact] happened on human timescales or shorter, and the consequences of that perturbation, based on our data, lasted 100,000 years," he said.

MacLeod cautioned that these results are derived from just one site in Tunisia, and more pristine geological samples are needed to better understand the full effects of Chicxulub. He has recently returned from another expedition to Mentelle Basin, off the southwest end of Australia, where the team gathered more samples representing this critical era. He is also working with other scientists to use computer modeling to understand atmospheric carbon dioxide concentrations at the time of the impact.



Contacts and sources:
American Association for the Advancement of Science

Citation: Postimpact earliest Paleogene warming shown by fish debris oxygen isotopes (El Kef, Tunisia).
K. G. MacLeod, P. C. Quinton, J. Sepúlveda, M. H. Negra. Science, 2018; eaap8525 DOI: 10.1126/science.aap8525

It Knows Where: More Effective Location Awareness for the Internet-of-(Many)-Things Devised


The vast number of devices connected on 5G networks can help locate themselves, rather than rely on centralized “anchors.”

Anticipating a critical strain on the ability of fifth generation (5G) networks to keep track of a rapidly growing number of mobile devices, engineers at Tufts University have come up with an improved algorithm for localizing and tracking these devices that distributes the task among the devices themselves. It is a scalable solution that could meet the demands of a projected 50 billion connected devices in the Internet-of-Things by 2020, and it would enable a widening range of location-based services.

The results of the Tufts study were published in Proceedings of the IEEE, the leading peer-reviewed scientific journal published by the Institute of Electrical and Electronics Engineers.


Credit: Pixabay

Currently, positioning of wireless devices is centralized, depending on “anchors” with known locations such as cell towers or GPS satellites to communicate directly with each device. As the number of devices increases, anchors must be installed at higher density. Centralized positioning can become unwieldy as the number of items to track grows significantly.

As an alternative to centralized solutions, the authors’ method of distributed localization in a 5G network has the devices locate themselves without all of them needing direct access to anchors. Sensing and calculations are done locally on the device, so there is no need for a central coordinator to collect and process the data.

“The need to provide location awareness of every device, sensor, or vehicle, whether stationary or moving, is going to figure more prominently in the future,” said Usman Khan, Ph.D., associate professor of electrical and computer engineering in the School of Engineering at Tufts University. “There will be applications for tracking assets and inventory, healthcare, security, agriculture, environmental science, military operations, emergency response, industrial automation, self-driving vehicles, robotics – the list is endless. The virtually limitless potential of the Internet-of-Things requires us to develop smart decentralized algorithms,” said Khan, who is the paper’s corresponding author.

The self-localization algorithm developed by Khan and his colleagues makes use of device-to-device communication, so positioning can take place indoors (e.g., in offices and manufacturing facilities), underground, underwater, or under thick cloud cover. This is an advantage over GPS, which not only can go dark under those conditions but also adds to the cost and power requirements of a device.

The mobility of the devices makes self-localization challenging. The key is to obtain positions rapidly to track them in real-time, which means the calculations must be simplified without sacrificing accuracy. The authors accomplished this by substituting the non-linear position calculations, which are computationally demanding and can miss their mark if the initial guess at position is in the wrong place, with a linear model that quickly and reliably converges on the accurate position of the device. The move to a computationally simpler linear calculation emerges as a result of the devices measuring their location relative to each other or a point representing the “center of mass” of neighboring devices, rather than having all of them reference a set of stationary anchors. Convergence to accurate positions is extremely fast, making real-time tracking of a large number of devices feasible.
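
A toy version of such a linear, anchor-light iteration is sketched below: devices repeatedly average their neighbors' estimates plus noisy measured offsets, while a few anchors keep their known positions. It assumes a neighbor graph that connects every device to an anchor, and it is a consensus-style illustration rather than the authors' exact algorithm.

```python
# Distributed localization from noisy relative-displacement measurements.
import numpy as np

rng = np.random.default_rng(1)
n, n_anchors = 30, 4
true = rng.uniform(0, 100, (n, 2))            # true 2-D positions
d = np.linalg.norm(true[:, None] - true[None, :], axis=2)
neighbors = [np.argsort(d[i])[1:6] for i in range(n)]    # 5 nearest neighbors
offset = {(i, j): true[i] - true[j] + rng.normal(0, 0.5, 2)  # noisy measurement
          for i in range(n) for j in neighbors[i]}

est = rng.uniform(0, 100, (n, 2))             # arbitrary initial guesses
est[:n_anchors] = true[:n_anchors]            # anchors know where they are
for _ in range(300):                          # purely local, linear updates
    new = est.copy()
    for i in range(n_anchors, n):
        new[i] = np.mean([est[j] + offset[i, j] for j in neighbors[i]], axis=0)
    est = new

print("mean position error:", np.linalg.norm(est - true, axis=1).mean())
```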

“In addition to preparing us for a future of ubiquitous connected devices, this approach could relieve pressure on the current infrastructure by removing the need to install a lot of transmitters (anchors) in buildings and neighborhoods,” said Khan.

Other contributing authors include lead author Sam Safavi, a recent PhD graduate of the Department of Electrical and Computer Engineering at Tufts University and now a post-doctoral associate at Boston College, and Soummya Kar, Ph.D., and José M.F. Moura, D.Sc., of the Department of Electrical and Computer Engineering at Carnegie Mellon University. Dr. Moura is currently president-elect of IEEE.

The work was supported by the National Science Foundation (CCF1350264, CCF1513936, ECCS1408222).






Contacts and sources:
Mike Silver
Tufts University


Citation:  "Distributed localization: a linear theory," Savavi S., Khan U.A., Kar S., Moura J.M.F. Cell Reports, Proceedings of the IEEE, July 2018; 106(7). DOI: 10.1109/JPROC.2018.2823638

400 Million Year-Old Evolutionary Arms Race and HIV

Understanding the evolution of a 400 million-year-old antiviral protein that first emerged in marine life is helping researchers get the upper hand on human immunodeficiency virus (HIV).

Researchers at Western University were interested in the origin of a gene that encodes the protein HERC5, shown to potently inhibit HIV. In a new study published in the Journal of Virology, Stephen Barr, PhD, associate professor at Western’s Schulich School of Medicine & Dentistry, shows that the gene first emerged in fish over 400 million years ago and has been involved in an evolutionary arms race with viruses ever since.


PhD candidate Ermela Paparisto and principal investigator Stephen Barr, PhD
Credit: University of Western Ontario

The study shows that over hundreds of millions of years, this battle for survival caused the genes to develop sophisticated shields to block viruses, which in turn forced the viruses to continually evolve and change to circumvent these defences. This provides insight into how both viruses and the immune system have evolved.

Using sequencing technology, Barr and his research team found that the HERC5 gene from the coelacanth, a fish lineage that emerged more than 400 million years ago, encodes a protein that can potently block the primate version of HIV, known as simian immunodeficiency virus (SIV), but fails to block HIV.

“Of course HIV and these modern day viruses that we study aren’t present in fish, but ancient versions of them are. So what we assume is that as these ancient retroviruses wreaked havoc on marine life, their immune systems had to develop a defense,” Barr explained. “We think that one of those defenses is the HERC family. As retroviruses evolved, eventually giving rise to HIV, different variants of HERC genes emerged to combat these infections.”


Credit: University of Western Ontario

Because these viruses have been in this battle for so long, they have had time to evolve ways around these shields. This new level of sophistication likely helped the viruses jump the species barrier and establish new infections in humans.

“By learning the big picture and identifying all the different proteins that can make up this defense against viruses, we can develop a more global approach to advance antiviral drugs. Our future goal is to discover the mechanisms that viruses use to inactivate HERCs and other similar antiviral proteins so that we can exploit this knowledge for the development of novel antiviral drugs,” said Barr.

Contacts and sources:
Crystal Mackay
University of Western Ontario


Citation: Evolution-guided structural and functional analyses of the HERC family reveal an ancient marine origin and determinants of antiviral activity.
Ermela Paparisto, Matthew W. Woods, Macon D. Coleman, Seyed A. Moghadasi, Divjyot S. Kochar, Sean K. Tom, Hinissan P. Kohio, Richard M. Gibson, Taryn J. Rohringer, Nina R. Hunt, Eric J. Di Gravio, Jonathan Y. Zhang, Meijuan Tian, Yong Gao, Eric J. Arts, Stephen D. Barr. Journal of Virology, 2018; JVI.00528-18 DOI: 10.1128/JVI.00528-18

2.4 Billion Years Ago, Earth's First Snow Fell as Oxygen Appeared with Rising Land

Earth’s first snow may have fallen after a large amount of land rose swiftly from the sea and set off dramatic changes on Earth 2.4 billion years ago, says UO geologist Ilya Bindeman.

That notion comes from research done on shale in Bindeman’s Stable Isotope Laboratory. Shale is the world’s most abundant sedimentary rock, and the lab used samples drawn from every continent.

Scientists looked at ratios of three common oxygen isotopes — chemical signatures preserved in the rock. They found archival-quality evidence from as far back as 3.5 billion years ago showing traces of rainwater-driven weathering of land.

Shale rocks are formed by the weathering of crust. Bindeman, a professor in the Department of Earth Sciences, initially began collecting shale samples while doing petroleum-related research.

"They tell you a lot about the exposure to air and light and precipitation,” he said. “The process of forming shale captures organic products and eventually helps to generate oil. Shales provide us with a continuous record of weathering."

Previously submerged surfaces become exposed to weathering, leading to the accumulation of mudrocks and shales. In this scene, winter drainage at Fern Ridge reservoir west of Eugene exposes mudrocks, providing an example of how newly risen land is exposed to weathering forces.

Credit:  University of Oregon

Bindeman and his eight co-authors detected a major shift in the chemical makeup of 278 shale samples at the 2.4-billion-year mark. They detailed their conclusions in a paper in the May 24 issue of Nature.

Those changes began on a planet that was much hotter than today when the newly surfaced land rose rapidly and was exposed to weathering. Based on his own previous modeling and other studies, Bindeman said the total landmass of the planet 2.4 billion years ago may have reached about two-thirds of what is seen today.

Before and After: How Earth's land elevations may have looked before and after the Great Oxygenation Event 
Courtesy of Ilya Bindeman

The emergence of so much land changed the flow of atmospheric gases and other chemical and physical processes, primarily between 2.4 billion and 2.2 billion years ago, he said. It also happened amid large-scale changes in mantle dynamics.

"What we speculate is that once large continents emerged, light would have been reflected back into space and that would have initiated runaway glaciation," said Bindeman, a third-generation geologist who grew up in Russia. "Earth would have seen its first snowfall."

Chemical changes recorded in the rocks coincide with the theorized timing of land collisions that formed Earth's first supercontinent, Kenorland, and the planet’s first high-mountain ranges and plateaus. When the planet was much hotter, Bindeman said, such mountainous land could not be supported.

"Land rising from water changes the albedo of the planet,” he said. “Initially, Earth would have been dark blue with some white clouds when viewed from space. Early continents added to reflection.”

The rapid changes, the researchers noted, may have triggered what scientists call the Great Oxygenation Event, in which atmospheric changes brought significant amounts of free oxygen into the air.

Scientists have long believed that Earth experienced a gradual or stepwise emergence of land between 1.1 billion and 3.5 billion years ago. Bindeman’s study points to an age near the middle of that span.

The timing also coincides with the transition from the Archean Eon, when archaea and bacteria — simple, single-cell life forms — thrived in water, to the Proterozoic Eon, when more complex life forms, such as algae, plants and fungi, emerged.

UO co-authors with Bindeman, who was supported by the National Science Foundation, were doctoral student David O. Zakharov and research associate James Palandri, both in Bindeman's UO lab, and Gregory J. Retallack, a professor in the UO Department of Earth Sciences.



Contacts and sources:
Jim Barlow
University of Oregon


Citation: Rapid emergence of subaerial landmasses and onset of a modern hydrologic cycle 2.5 billion years ago.
I. N. Bindeman, D. O. Zakharov, J. Palandri, N. D. Greber, N. Dauphas, G. J. Retallack, A. Hofmann, J. S. Lackey, A. Bekker. Nature, 2018; 557 (7706): 545 DOI: 10.1038/s41586-018-0131-1

Hot Bodies Fight Infections and Tumors Better






The hotter our body temperature, the more our bodies speed up a key defense system that fights tumors, wounds and infections, new research by a multidisciplinary team of mathematicians and biologists from the Universities of Warwick and Manchester has found.

The researchers have demonstrated that small rises in temperature (such as during a fever) speed up a cellular ‘clock’ that controls the response to infections – a new understanding that could lead to more effective and faster-acting drugs that target a key protein involved in this process.


Professor David Rand, Professor of Mathematics and a member of the University of Warwick's Zeeman Institute for Systems Biology and Infectious Disease Epidemiology (SBIDER).

Credit: Professor Rand


Biologists found that inflammatory signals activate ‘Nuclear Factor kappa B’ (NF-κB) proteins to start a ‘clock’ ticking, in which NF-κB proteins move backwards and forwards into and out of the cell nucleus, where they switch genes on and off.

This allows cells to respond to a tumour, wound or infection. When NF-κB is uncontrolled, it is associated with inflammatory diseases, such as Crohn’s disease, psoriasis and rheumatoid arthritis.

At a body temperature of 34 degrees Celsius, the NF-κB clock slows down. At temperatures above the normal 37 degrees, such as the 40 degrees of a fever, the NF-κB clock speeds up.

Mathematicians at the University of Warwick’s Systems Biology Centre calculated how temperature increases make the cycle speed up.

They predicted that a protein called A20 - which is essential for avoiding inflammatory disease - might be critically involved in this process. The experimentalists then removed A20 from cells and found that the NF-κB clock lost its sensitivity to increases in temperature.
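
A toy delayed-negative-feedback model captures the qualitative behavior: NF-κB drives delayed A20-like repression of itself, and warming shortens the effective delay through a Q10 factor, speeding the oscillation up. All parameters below are illustrative, in arbitrary time units; this is not the study's fitted model.

```python
# Delayed self-repression oscillator with Q10 temperature scaling of the delay.
import numpy as np

def period_at(temp_c, k=1.0, d=0.1, hill=10, tau37=40.0, dt=0.1, t_end=4000.0):
    tau = tau37 / 2.0 ** ((temp_c - 37.0) / 10.0)   # Q10 = 2 scaling of delay
    lag = int(tau / dt)
    hist = [0.5] * lag                              # constant initial history
    N, crossings = 0.5, []
    for step in range(int(t_end / dt)):
        N_delayed = hist[step % lag]                # N from ~tau time ago
        dN = k / (1.0 + N_delayed ** hill) - d * N  # delayed self-repression
        prev, N = N, N + dt * dN
        hist[step % lag] = N
        if prev < 1.0 <= N:                         # upward crossings of N = 1
            crossings.append(step * dt)
    return float(np.diff(crossings[-6:]).mean())

for T in (34, 37, 40):
    print(f"{T} deg C: period ~ {period_at(T):.0f} time units")
```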

Lead mathematician Professor David Rand, Professor of Mathematics and a member of the University of Warwick's Zeeman Institute for Systems Biology and Infectious Disease Epidemiology (SBIDER), explained that in normal life the 24 hour body clock controls small (1.5 degree) changes in body temperature.

He commented: “The lower body temperature during sleep might provide a fascinating explanation of how shift work, jet lag or sleep disorders cause increased inflammatory disease.”

Mathematician Dan Woodcock from the University of Warwick said: “This is a good example of how mathematical modelling of cells can lead to useful new biological understanding.”

While the activities of many NF-kB controlled genes were not affected by temperature, a key group of genes showed altered profiles at the different temperatures. These temperature sensitive genes included key inflammatory regulators and controllers of cell communication that can alter cell responses.

Professor Mike White, lead biologist from the University of Manchester 

Credit: Professor White

This study shows that temperature changes inflammation in cells and tissues in a biologically organised way, and it suggests that new drugs might more precisely modulate the inflammatory response by targeting the A20 protein.

Professor Mike White, lead biologist from the University of Manchester, said the study provides a possible explanation of how both environmental and body temperature affect our health:

“We have known for some time that influenza and cold epidemics tend to be worse in the winter when temperatures are cooler. Also, mice living at higher temperatures suffer less from inflammation and cancer. These changes may now be explained by altered immune responses at different temperatures.”

The research is published in the journal Proceedings of the National Academy of Sciences (USA). The work was funded by the Biotechnology and Biological Sciences Research Council (BBSRC) with additional support from the Medical Research Council (MRC) and the European Union.

Contacts and sources:
Luke Walton
University of Warwick

Citation: Temperature regulates NF-κB dynamics and function through timing of A20 transcription.  C. V. Harper, D. J. Woodcock, C. Lam, M. Garcia-Albornoz, A. Adamson, L. Ashall, W. Rowe, P. Downton, L. Schmidt, S. West, D. G. Spiller, D. A. Rand, M. R. H. White.  Proceedings of the National Academy of Sciences, 2018; 201803609 DOI: 10.1073/pnas.1803609115

Monday, May 28, 2018

Pulsar Seen in Unprecedented Detail

A team of astronomers has performed one of the highest-resolution observations in astronomical history by observing two intense regions of radiation, 20 kilometers apart, around a star 6500 light-years away.

In an artist’s impression, the pulsar PSR B1957+20 is seen in the background through the cloud of gas enveloping its brown dwarf star companion.

Image: Dr. Mark A. Garlick; Dunlap Institute for Astronomy & Astrophysics, University of Toronto

The observation is equivalent to using a telescope on Earth to see a flea on the surface of Pluto.
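That comparison survives a quick back-of-the-envelope check (our arithmetic, not the paper's). In the small-angle approximation, angular size is simply physical size divided by distance:

# Angular sizes in radians: size / distance (small-angle approximation).
LY_M = 9.4607e15   # metres per light-year
AU_M = 1.496e11    # metres per astronomical unit

hotspots = 20e3 / (6500 * LY_M)  # 20 km seen from 6500 light-years
flea = 2e-3 / (39 * AU_M)        # a ~2 mm flea seen from Pluto's ~39 AU

print(f"hotspot separation: {hotspots:.1e} rad")  # ~3e-16 rad
print(f"flea on Pluto:      {flea:.1e} rad")      # ~3e-16 rad

Both angles come out around 3 × 10⁻¹⁶ radians, so the analogy holds.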

The extraordinary observation was made possible by the rare geometry and characteristics of a pair of stars orbiting each other. One is a cool, lightweight star called a brown dwarf, which features a “wake” or comet-like tail of gas. The other is an exotic, rapidly spinning star called a pulsar.

“The gas is acting like a magnifying glass right in front of the pulsar,” says Robert Main, lead author of the paper describing the observation being published May 24 in the journal Nature. “We are essentially looking at the pulsar through a naturally occurring magnifier which periodically allows us to see the two regions separately.”

Main is a PhD astronomy student in the Department of Astronomy & Astrophysics at the University of Toronto, working with colleagues at the University of Toronto’s Dunlap Institute for Astronomy & Astrophysics and Canadian Institute for Theoretical Astrophysics, and the Perimeter Institute.

The pulsar is a neutron star that rotates rapidly—over 600 times a second. As the pulsar spins, it emits beams of radiation from the two hotspots on its surface. The intense regions of radiation being observed are associated with the beams.

The brown dwarf star is about a third the diameter of the Sun. It is roughly two million kilometres from the pulsar—or five times the distance between the Earth and the moon—and orbits around it in just over 9 hours. The dwarf companion star is tidally locked to the pulsar so that one side always faces its pulsating companion, the way the moon is tidally locked to the Earth.
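As a consistency check (our arithmetic, not from the paper), Kepler's third law recovers the quoted two-million-kilometre separation from the nine-hour orbit, assuming a pulsar of about 2.4 solar masses, a published estimate for this unusually heavy system:

import math

G = 6.674e-11     # gravitational constant, SI units
M_SUN = 1.989e30  # solar mass in kg

# Kepler's third law: a**3 = G * M_total * P**2 / (4 * pi**2).
# The brown dwarf's few-percent mass contribution is neglected.
m_total = 2.4 * M_SUN    # assumed pulsar mass (published estimate)
period_s = 9.2 * 3600.0  # just over 9 hours, in seconds
a = (G * m_total * period_s ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print(f"separation ~ {a / 1e9:.1f} million km")  # ~2.1 million km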

Because it is so close to the pulsar, the brown dwarf star is blasted by the strong radiation coming from its smaller companion. The intense radiation from the pulsar heats one side of the relatively cool dwarf star to the temperature of our Sun, or some 6000°C.

The blast from the pulsar could ultimately spell its companion’s demise. Pulsars in these types of binary systems are called “black widow” pulsars. Just as a black widow spider eats its mate, it is thought the pulsar, given the right conditions, could gradually erode gas from the dwarf star until the latter is consumed.

In addition to being an observation of incredibly high resolution, the result could be a clue to the nature of mysterious phenomena known as Fast Radio Bursts, or FRBs.

“Many observed properties of FRBs could be explained if they are being amplified by plasma lenses,” says Main. “The properties of the amplified pulses we detected in our study show a remarkable similarity to the bursts from the repeating FRB, suggesting that the repeating FRB may be lensed by plasma in its host galaxy.”

Additional notes:

1. The pulsar is designated PSR B1957+20. Previous work led by Main’s co-author, Prof. Marten van Kerkwijk, from the University of Toronto, suggests that it is likely one of the most massive pulsars known, and further work to accurately measure its mass will help in understanding how matter behaves at the highest known densities, and equivalently, how massive a neutron star can be before collapsing into a black hole.

2. Main and his co-authors used data obtained with the Arecibo Observatory radio telescope before Hurricane Maria damaged the telescope in September 2017. The collaborators will use the telescope to make follow-up observations of PSR B1957+20.

Contacts and sources:
Robert Main
Department of Astronomy & Astrophysics
Dunlap Institute for Astronomy & Astrophysics (Associate)
University of Toronto

Signs of Life Near Ancient Mars' Lake


Iron-rich rocks near ancient lake sites on Mars could hold vital clues that show life once existed there, research suggests.

These rocks - which formed in lake beds - are the best place to seek fossil evidence of life from billions of years ago, researchers say.

A new study that sheds light on where fossils might be preserved could aid the search for traces of microbes on Mars, which is thought to have supported primitive life forms around four billion years ago.

The Jezero Crater delta, a well-preserved ancient river delta on Mars 
Credit: NASA/JPL-Caltech/MSSS/JHU-APL

A team of scientists has determined that sedimentary rocks made of compacted mud or clay are the most likely to contain fossils. These rocks are rich in iron and a mineral called silica, which helps preserve fossils.

They formed during the Noachian and Hesperian Periods of Martian history between three and four billion years ago. At that time, the planet's surface was abundant in water, which could have supported life.

The rocks are much better preserved than those of the same age on Earth, researchers say. This is because Mars is not subject to plate tectonics - the movement of huge rocky slabs that form the crust of some planets - which over time can destroy rocks and fossils inside them.

The team reviewed studies of fossils on Earth and assessed the results of lab experiments replicating Martian conditions to identify the most promising sites on the planet to explore for traces of ancient life.

Their findings could help inform NASA's next rover mission to the Red Planet, which will focus on searching for evidence of past life. The US space agency's Mars 2020 rover will collect rock samples to be returned to Earth for analysis by a future mission.

A similar mission led by the European Space Agency is also planned for the coming years.

The latest study of Mars rocks - led by a researcher from the University of Edinburgh - could aid in the selection of landing sites for both missions. It could also help to identify the best places to gather rock samples.

The study, published in the Journal of Geophysical Research, also involved researchers at NASA's Jet Propulsion Laboratory, Brown University, California Institute of Technology, Massachusetts Institute of Technology and Yale University in the US.

Dr Sean McMahon, a Marie Sklodowska-Curie fellow in the University of Edinburgh's School of Physics and Astronomy, said: "There are many interesting rock and mineral outcrops on Mars where we would like to search for fossils, but since we can't send rovers to all of them we have tried to prioritize the most promising deposits based on the best available information."

Contacts and sources:
Corin Campbell
University of Edinburgh

Saturday, May 26, 2018

'Traffic Jams' in Jet Streams Cause Abnormal Weather Patterns, Says New Theory

The sky sometimes has its limits, according to new research from two University of Chicago atmospheric scientists.

A study published May 24 in Science offers an explanation for a mysterious and sometimes deadly weather pattern in which the jet stream, the global air currents that circle the Earth, stalls out over a region. Much like highways, the jet stream has a capacity, researchers said, and when it's exceeded, blockages form that are remarkably similar to traffic jams--and climate forecasters can use the same math to model them both.

The deadly 2003 European heat wave, California's 2014 drought and the swing of Superstorm Sandy in 2012 that surprised forecasters--all of these were caused by a weather phenomenon known as "blocking," in which the jet stream meanders, stopping weather systems from moving eastward. Scientists have known about it for decades, almost as long as they've known about the jet stream--first discovered by pioneering University of Chicago meteorologist Carl-Gustaf Rossby, in fact--but no one had a good explanation for why it happens.

This is an illustration of the Northern Hemisphere's polar jet stream.
Credit:  NASA's Goddard Space Flight Center


"Blocking is notoriously difficult to forecast, in large part because there was no compelling theory about when it forms and why," said study coauthor Noboru Nakamura, a professor in the Department of the Geophysical Sciences.

Nakamura and then-graduate student Clare S.Y. Huang were studying the jet stream, trying to determine a clear set of measurements for blocking in order to better analyze the phenomenon. One of their new metrics was a term that measured the jet stream's meander. Looking over the math, Nakamura realized that the equation was nearly identical to one devised decades ago by transportation engineers trying to describe traffic jams.

"It turns out the jet stream has a capacity for 'weather traffic,' just as highway has traffic capacity, and when it is exceeded, blocking manifests as congestion," said Huang.

Much as car traffic slows where multiple highways converge, the jet stream slows where topography such as mountains or coastlines reduces its speed, bringing it closer to its capacity.
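The borrowed traffic mathematics can be sketched in a few lines. In the classic Lighthill-Whitham-Richards picture of highway flow (a generic illustration here, not the authors' meteorological equations), flux is density times speed, and speed falls as density rises, so throughput peaks at an intermediate density, the capacity; pushing demand past that point only breeds congestion:

import numpy as np

# Greenshields-type fundamental diagram: speed falls linearly with
# density, so flux q = rho * v peaks at half the jam density.
# All quantities are normalized.
v_max, rho_max = 1.0, 1.0

def flux(rho):
    return rho * v_max * (1.0 - rho / rho_max)

for rho in np.linspace(0.0, 1.0, 11):
    bar = "#" * int(40 * flux(rho))
    print(f"density {rho:.1f} | flux {flux(rho):.3f} {bar}")

print(f"capacity = {v_max * rho_max / 4:.3f} at density {rho_max / 2:.1f}")

Nakamura and Huang showed that the jet stream's wave activity obeys an equation of the same mathematical form, so blocking emerges when the atmospheric analogue of this capacity is exceeded.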

The result is a simple theory that not only reproduces blocking, but predicts it, said Nakamura, who called making the cross-disciplinary connection "one of the most unexpected, but enlightening moments in my research career--truly a gift from God."

The explanation may not immediately improve short-term weather forecasting, the researchers said, but it will certainly help predict long-term patterns, including which areas may see more drought or floods.

Their initial results suggest that while climate change probably increases blocking by running the jet stream closer to its capacity, there will be regional differences: for example, the Pacific Ocean may actually see a decrease in blocking over the decades.

"It's very difficult to forecast anything until you understand why it's happening, so this mechanistic model should be extremely helpful," Nakamura said.

And the model, unlike most modern climate science, is computationally simple: "This equation captures the essence with a much less complicated system," Huang said.

Contacts and sources:
Louise Lerner
University of Chicago

Citation: "Atmospheric Blocking as a Traffic Jam in the Jet Stream," Nakamura and Huang, Science, May 24, 2018.

New Insights into Solar Flares' Explosive Energy Releases

Last September, a massive new region of magnetic field erupted on the Sun’s surface next to an existing sunspot. The powerful collision of magnetic fields produced a series of potent solar flares, causing turbulent space weather conditions at Earth. These were the first flares to be captured, in their moment-by-moment progression, by NJIT’s recently expanded Owens Valley Solar Array (EOVSA).

With 13 antennas now working together, EOVSA was able to make images of the flare in multiple radio frequencies simultaneously for the first time. This enhanced ability to peer into the mechanics of flares offers scientists new pathways to investigate the most powerful eruptions in our solar system.

“These September flares included two of the strongest of the current 11-year solar activity cycle, hurling radiation and charged particles toward Earth that disrupted radio communications,” said Dale Gary, distinguished professor of physics at NJIT’s Center for Solar-Terrestrial Research (CSTR) and EOVSA’s director. The last flare of the period, on September 10, was “the most exciting,” he added.

“The sunspot region was just passing over the solar limb – the edge of the Sun as it rotates – and we could see the comparative height of the flare in many different wavelengths, from optical, to ultraviolet, to X-rays, to radio,” he recounted. “This view provided a wonderful chance to capture the structure of a large solar flare with all of its ingredients.”

Radio emissions are generated by energetic electrons accelerated in the corona, the Sun’s hot upper atmosphere. Modern solar physics relies on observations at many wavelengths; radio imaging complements these by directly observing the particle acceleration that drives the whole process. An instrument that can measure the radio spectrum at different places in the solar atmosphere, fast enough to follow the changes during solar flares, becomes a powerful diagnostic of the fast-changing solar environment during these eruptions.

EOVSA, which is funded by the National Science Foundation, is the first radio imaging instrument that can make spectral images fast enough – in one second – to follow the rapid changes that occur in solar flares. This capability allows the radio spectrum to be measured dynamically throughout the flaring region, to pinpoint the location of particle acceleration and map where those particles travel. Images of solar flares at most other wavelengths show only the consequences of heating by the accelerated particles, whereas radio emission can directly show the particles themselves.

“One of the great mysteries of solar research is to understand how the Sun produces extremely high-energy particles in such a short time,” Gary noted. “But to answer that question, we must have quantitative diagnostics of both the particles and the environment, especially the magnetic field that is at the heart of the energy release. EOVSA makes that possible at radio wavelengths for the first time.”

EOVSA radio intensity spectrogram of the 2017 September 10 solar flare, with frequency (vertical scale) and time (horizontal scale).

Gary presented EOVSA’s new findings this week at the Triennial Earth-Sun Summit (TESS) meeting, which brings together the solar physics division of the American Astronomical Society (AAS) and the solar physics and aeronomy section of the American Geophysical Union (AGU).

“EOVSA’s new results have sparked lots of interest at the TESS meeting,” said Bin Chen, assistant professor of physics at CSTR, who is chairing a session focused on the intense solar activity that occurred last September. “A number of experts at the meeting commented that these results would add fundamentally new insights into the understanding of energy release and particle acceleration in solar flares.”

Among other discoveries, scientists at EOVSA have learned that radio emissions in a flare are spread over a much larger region than previously known, indicating that high-energy particles are promptly transported in large numbers throughout the explosive magnetic field "bubble" called a coronal mass ejection (CME).

“This is important because CMEs drive shock waves that further accelerate particles that are dangerous to spacecraft, astronauts and even people in airplanes flying polar routes. To date, it remains a mystery how these shock waves alone accelerate particles, because the physics is not understood,” he said. “One of the theories is that ‘seed’ particles must be present in the shock region, which can generate the waves necessary for further acceleration. It has long been speculated that flares, which are known to accelerate particles, may provide them. Previous observations, mainly with X-rays, always show those particles confined to very low heights and it has not been understood how such particles could get to the shock. The radio images show evidence for particles in a much larger region, giving them more opportunity to gain access to the shock region.”

Sunspots are the primary generator of solar flares, the sudden, powerful blasts of electromagnetic radiation and charged particles that burst into space during explosions on the Sun’s surface. The twisting motion of sunspots causes magnetic energy to build up that is released in the form of flares.

EOVSA was designed to make high-resolution radio images of flares (1-second cadence), sunspot regions (20-minute cadence) and the full Sun (a few images per day), at hundreds of frequencies over a broad frequency band, making it the first solar instrument able to measure the radio spectrum from point to point in the flaring region.

“We are working towards a calibration and imaging pipeline to automatically generate microwave images observed by EOVSA, and make them available to the community on a day-to-day basis,” added Chen, who is leading the EOVSA pipeline effort.

“The most unexpected revelation so far from EOVSA is what we see at the lowest radio frequencies,” Gary noted. “Observations of flares based on high radio frequencies and based on X-ray observations show a flare that is a relatively small, compact region even though we see evidence for heating over a much larger area. Although we had rare observations from the past that seemed to show large radio sources, EOVSA has now made it routine to image large radio sources that are even bigger at lower frequencies.”

Initially, he and his colleagues were unable to tap into these new regions, however. After the array was completed, they realized that cell phone towers in the Owens Valley were causing much higher levels of radio frequency interference than expected. As a result, they designed "notch" filters that were able to cut out the frequencies most affected by cell towers.

“This is important because a lot of interesting solar radio bursts occur in the cell tower range (1.9-2.2 GHz). It is the lower frequencies that best show this new and not well understood phenomenon of large sources,” Gary said. “Somehow, the accelerated particles are being transported to a much greater volume of the corona than we thought.”

With new funding from NASA, Gary and colleagues will measure the spatially-resolved radio spectrum of solar flares, determine the particle and plasma parameters as a function of position and time, and then use 3-dimensional modeling, which his group has developed, to fully understand the initial acceleration and subsequent transport of high-energy particles.

“The Sun goes through 11-year cycles of activity, and this past year may have provided the last flares we will see for the next four or five years,” Gary said. “For the next few years, we will focus our efforts on improving the active sunspot regions and full-disk images with the array. This imaging on a larger spatial scale is more challenging, but could be just as important, since the larger scale features govern the Sun's influence on the Earth's atmosphere and the solar wind.”

Contacts and sources:
Tanya Klein /  Tracey Regan
 New Jersey Institute of Technology

“These Could Revolutionize the World”

Imagine a box you plug into the wall that cleans your toxic air and pays you cash.

That’s essentially what Vanderbilt University researchers produced after discovering the blueprint for turning carbon dioxide into the most valuable material ever sold – carbon nanotubes with small diameters.

Carbon nanotubes are supermaterials that can be stronger than steel and more conductive than copper. The reason they’re not in every application from batteries to tires is that these amazing properties only show up in the tiniest nanotubes, which are extremely expensive. The Vanderbilt team showed not only that these materials can be made from carbon dioxide sucked from the air, but also how to do so far more cheaply than any other method out there.

Cary Pint
Credit:  Vanderbilt  University

These materials, which Assistant Professor of Mechanical Engineering Cary Pint calls “black gold,” could steer the conversation from the negative impact of emissions to how we can use them in future technology.

“One of the most exciting things about what we’ve done is use electrochemistry to pull apart carbon dioxide into elemental constituents of carbon and oxygen and stitch together, with nanometer precision, those carbon atoms into new forms of matter,” Pint said. “That opens the door to being able to generate really valuable products with carbon nanotubes.

“These could revolutionize the world.”

Anna Douglas 

Credit:  Vanderbilt  University

In a report published today in ACS Applied Materials and Interfaces, Pint, interdisciplinary materials science Ph.D. student Anna Douglas and their team describe how tiny nanoparticles 10,000 times smaller than a human hair (on the order of 10 nanometers, taking a hair to be roughly 100 micrometers wide) can be produced from coatings on stainless steel surfaces. The key was making them small enough to be valuable.

“The cheapest carbon nanotubes on the market cost around $100-200 per kilogram,” Douglas said. “Our research advance demonstrates a pathway to synthesize carbon nanotubes better in quality than these materials with lower cost and using carbon dioxide captured from the air.”

But making small nanotubes is no small task. The research team showed that a process called Ostwald ripening, in which the nanoparticles that grow the carbon nanotubes coarsen to larger diameters, works against producing the far more useful small sizes. The team showed they could partially overcome this by tuning electrochemical parameters to minimize these pesky large nanoparticles.
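The coarsening the team had to fight follows a well-known pattern. In classical Lifshitz-Slyozov-Wagner (LSW) theory, the mean particle radius grows with the cube root of time; the sketch below uses a made-up rate constant purely to illustrate the trend (it is not fitted to the paper's data):

# LSW coarsening law: r(t)**3 = r0**3 + K * t. K is hypothetical,
# chosen only to show the trend, not taken from the study.
def mean_radius_nm(t_s, r0_nm=5.0, k_nm3_per_s=10.0):
    return (r0_nm ** 3 + k_nm3_per_s * t_s) ** (1.0 / 3.0)

for t_s in (0, 60, 600, 3600):
    print(f"t = {t_s:4d} s -> mean catalyst radius ~ {mean_radius_nm(t_s):5.1f} nm")

Left unchecked, the 5-nanometer catalyst particle in this toy example ripens past 30 nanometers within an hour; tuning the electrochemical parameters, as the team reports, suppresses that growth so the particles, and hence the nanotube diameters, stay small.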

Small-diameter carbon nanotubes grown on a stainless steel surface. Credit: Pint Lab/Vanderbilt University

This core technology led Pint and Douglas to co-found SkyNano LLC, a company focused on building upon the science of this process to scale up and commercialize products from these materials.

“What we’ve learned is the science that opens the door to now build some of the most valuable materials in our world, such as diamonds and single-walled carbon nanotubes, from carbon dioxide that we capture from air through our process,” Pint said.

Other researchers involved in the study were Rachel Carter, formerly a Vanderbilt University Ph.D. student and presently a Nuclear Regulatory Commission postdoctoral fellow at Naval Research Laboratory, and Mengya Li, graduate student in mechanical engineering at Vanderbilt University.

This work was supported in part by National Science Foundation grant CMMI 1400424 and Vanderbilt University start-up funds. Douglas is supported in part by a National Science Foundation Graduate Research Fellowship. 


Contacts and sources:
Heidi Hall
Vanderbilt University

Prehistoric Temple on Mississippi Delta Abandoned Due to Ecological Changes

Prehistoric people of the Mississippi Delta may have abandoned a large ceremonial site due to environmental stress, according to a new paper authored by Elizabeth Chamberlain, a postdoctoral researcher in Earth and environmental sciences, and University of Illinois anthropologist Jayur Mehta.

The study, published online May 18 in the peer-reviewed Journal of Island and Coastal Archaeology, used archaeological excavations, geologic mapping and coring, and radiocarbon dating to identify how Native Americans built and inhabited the Grand Caillou mound near Dulac, Louisiana. The work stemmed from research begun by Mehta and Chamberlain when they were graduate students at Tulane University.

The researchers found that construction of the Grand Caillou mound began around 800 years ago, when Bayou Grand Caillou was a major and active river channel. The site appeared to have been abandoned around 600 years ago, the same time Bayou Grand Caillou stopped carrying a significant portion of Mississippi River water and sediment.

The Grand Caillou Mound in the Mississippi River Delta may have been abandoned due to environmental stress.
 Credit: Jayur Mehta

"Dating an abandonment event can be challenging," said Chamberlain. "Within our suite of radiocarbon ages at Grand Caillou, there were none younger than 600 years. This suggests that people may have moved on to a new site at that time."

"The abandonment of the river channel would have caused a number of changes to the ecology and landscape at the Grand Caillou mound," added Mehta. "This may be an early example of people responding to changes in Mississippi Delta landscape by relocating."

The researchers also found that the location and architecture of the mound--it stands 20 feet tall--indicated a great degree of environmental and engineering expertise among the people who built it.

"This region is naturally high elevation, and offered access to waterways for transportation and hunting," said Mehta.

Geologic coring showed that the mound was built in alternating layers of sand and mud. While sand is a common deposit near the mound, mud must have been imported from farther away.

"This demonstrates a high level of geotechnical knowledge by native people, and a big group effort in mound construction," said Chamberlain. "Waterlogged mud is heavy, and all this material was moved by hand."

The study also used artifacts recovered from the mound to identify the culture of the people who lived there. This helped to connect the Grand Caillou inhabitants with mound-building groups farther upstream in the Mississippi River Valley and along the Gulf Coast.

"Research such as this is especially important in light of recent published work showing that net land loss will continue in the Mississippi Delta," said Chamberlain.

"Archaeological sites are a non-renewable resource," added Mehta. "With the loss of coastal land, we also risk losing these valuable records of how prehistoric people lived in the Mississippi Delta. It is critical to recover and document archaeological records before they slip beneath the sea."

Contacts and sources:
Liz Entman
Vanderbilt University


Citation: Mound Construction and Site Selection in the Lafourche Subdelta of the Mississippi River Delta, Louisiana, USA. Jayur Madhusudan Mehta & Elizabeth L. Chamberlain. The Journal of Island and Coastal Archaeology, 2018. http://dx.doi.org/10.1080/15564894.2018.1458764

'The Giant Comet' Cosmochemical Model of Pluto Formation


Southwest Research Institute scientists integrated NASA's New Horizons discoveries with data from ESA's Rosetta mission to develop a new theory about how Pluto may have formed at the edge of our solar system.

"We've developed what we call 'the giant comet' cosmochemical model of Pluto formation," said Dr. Christopher Glein of SwRI's Space Science and Engineering Division. The research is described in a paper published online in Icarus. At the heart of the research is the nitrogen-rich ice in Sputnik Planitia, a large glacier that forms the left lobe of the bright Tombaugh Regio feature on Pluto's surface. "We found an intriguing consistency between the estimated amount of nitrogen inside the glacier and the amount that would be expected if Pluto was formed by the agglomeration of roughly a billion comets or other Kuiper Belt objects similar in chemical composition to 67P, the comet explored by Rosetta."

NASA's New Horizons spacecraft captured this image of Sputnik Planitia — a glacial expanse rich in nitrogen, carbon monoxide and methane ices — that forms the left lobe of a heart-shaped feature on Pluto’s surface. SwRI scientists studied the dwarf planet’s nitrogen and carbon monoxide composition to develop a new theory for its formation.

Image Courtesy of NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

In addition to the comet model, scientists also investigated a solar model, with Pluto forming from very cold ices that would have had a chemical composition that more closely matches that of the Sun.

Scientists needed to understand not only the nitrogen present at Pluto now -- in its atmosphere and in glaciers -- but also how much of the volatile element potentially could have leaked out of the atmosphere and into space over the eons. They then needed to reconcile the proportion of carbon monoxide to nitrogen to get a more complete picture. Ultimately, the low abundance of carbon monoxide at Pluto points to burial in surface ices or to destruction from liquid water.


New Horizons not only showed humanity what Pluto looks like, but also provided information on the composition of Pluto’s atmosphere and surface. These maps — assembled using data from the Ralph instrument — indicate regions rich in methane (CH4), nitrogen (N2), carbon monoxide (CO) and water (H2O) ices. Sputnik Planitia shows an especially strong signature of nitrogen near the equator. SwRI scientists combined these data with Rosetta’s comet 67P data to develop a proposed “giant comet” model for Pluto formation.

Image Courtesy of NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

"Our research suggests that Pluto's initial chemical makeup, inherited from cometary building blocks, was chemically modified by liquid water, perhaps even in a subsurface ocean," Glein said. However, the solar model also satisfies some constraints. While the research pointed to some interesting possibilities, many questions remain to be answered.

"This research builds upon the fantastic successes of the New Horizons and Rosetta missions to expand our understanding of the origin and evolution of Pluto," said Glein. "Using chemistry as a detective's tool, we are able to trace certain features we see on Pluto today to formation processes from long ago. This leads to a new appreciation of the richness of Pluto's 'life story,' which we are only starting to grasp."

The research was supported by NASA Rosetta funding. The member states of the European Space Agency (ESA) and NASA both contributed to the Rosetta mission. NASA's Jet Propulsion Laboratory (JPL) manages the U.S. contribution to the Rosetta mission for NASA's Science Mission Directorate in Washington.

Contacts and sources:
Deb Schmid
Southwest Research Institute


Citation: "Primordial N2 provides a cosmochemical explanation for the existence of Sputnik Planitia, Pluto," Icarus, 2018, coauthored by Glein and Dr. J. Hunter Waite Jr., an SwRI program director.