Friday, June 30, 2017

Ordinary Light Drives Tiny Motors

Science fiction is full of fanciful devices that allow light to interact forcefully with matter, from light sabers to photon-drive rockets. In recent years, science has begun to catch up; some results hint at interesting real-world interactions between light and matter at atomic scales, and researchers have produced devices such as optical tractor beams, tweezers, and vortex beams.

Now, a team at MIT and elsewhere has pushed through another boundary in the quest for such exotic contraptions, by creating in simulations the first system in which particles — ranging from roughly molecule- to bacteria-sized — can be manipulated by a beam of ordinary light rather than the expensive specialized light sources required by other systems. 

The findings are reported today in the journal Science Advances, by MIT postdocs Ognjen Ilic PhD ’15, Ido Kaminer, and Bo Zhen; professor of physics Marin Soljačić; and two others.

Researchers have created in simulations the first system in which particles can be manipulated by a beam of ordinary light rather than the expensive specialized light sources required by other systems.
Image: Christine Daniloff/MIT

Most research that attempts to manipulate matter with light, whether by pushing away individual atoms or small particles, attracting them, or spinning them around, involves the use of sophisticated laser beams or other specialized equipment that severely limits the kinds of applications such systems can be used for. “Our approach is to look at whether we can get all these interesting mechanical effects, but with very simple light,” Ilic says.

The team decided to work on engineering the particles themselves, rather than the light beams, to get them to respond to ordinary light in particular ways. As their initial test, the researchers created simulated asymmetrical particles, called Janus (two-faced) particles, just a micrometer in diameter — one-hundredth the width of a human hair. These tiny spheres were composed of a silica core coated on one side with a thin layer of gold.

When exposed to a beam of light, the two-sided configuration of these particles causes an interaction that shifts their axes of symmetry relative to the orientation of the beam, the researchers found. At the same time, this interaction creates forces that set the particles spinning uniformly. Multiple particles can all be affected at once by the same beam. And the rate of spin can be changed by just changing the color of the light.

The same kind of system, the researchers say, could be applied to producing different kinds of manipulations, such as moving the positions of the particles. Ultimately, this new principle might be applied to moving particles around inside a body, using light to control their position and activity, for new medical treatments. It might also find uses in optically based nanomachinery.

About the growing number of approaches to controlling interactions between light and material objects, Kaminer says, “I think about this as a new tool in the arsenal, and a very significant one.”

Ilic says the study “enables dynamics that may not be achieved by the conventional approach of shaping the beam of light,” and could make possible a wide range of applications that are hard to foresee at this point. For example, in many potential applications, such as biological uses, nanoparticles may be moving in an incredibly complex, changing environment that would distort and scatter the beams needed for other kinds of particle manipulation. But these conditions would not matter to the simple light beams needed to activate the team’s asymmetric particles.

“Because our approach does not require shaping of the light field, a single beam of light can simultaneously actuate a large number of particles,” Ilic says. “Achieving this type of behavior would be of considerable interest to the community of scientists studying optical manipulation of nanoparticles and molecular machines.” Kaminer adds, “There’s an advantage in controlling large numbers of particles at once. It’s a unique opportunity we have here.”

Soljačić says this work fits into the area of topological physics, a burgeoning area of research that also led to last year’s Nobel Prize in physics. Most such work, though, has been focused on fairly specialized conditions that can exist in certain exotic materials called periodic media. “In contrast, our work investigates topological phenomena in particles,” he says.

And this is just the start, the team suggests. This initial set of simulations only addressed the effects with a very simple two-sided particle. “I think the most exciting thing for us,” Kaminer says, “is there’s an enormous field of opportunities here. With such a simple particle showing such complex dynamics,” he says, it’s hard to imagine what will be possible “with an enormous range of particles and shapes and structures we can explore.”

“Topology has been found to be a powerful tool in describing a select few physical systems,” says Mikael Rechtsman, an assistant professor of physics at Penn State who was not involved in this work. “Whenever a system can be described by a topological number, it is necessarily highly insensitive to imperfections that are present under realistic conditions. Soljačić's group has managed to find yet another important physical system in which this topological robustness can play a role, namely the control and manipulation of nanoparticles with light. Specifically, they have found that certain particles’ rotational states can be ‘topologically protected’ to be highly stable in the presence of a laser beam propagating through the system. This could potentially have importance for trapping and probing individual viruses and DNA, for example.”

The team also included Owen Miller at Yale University and Hrvoje Buljan at the University of Zagreb, in Croatia. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the European Research Council.


Contacts and sources:
David L. Chandler
Massachusetts Institute Of Technology

For Some US Counties, Climate Change Will Be Particularly Costly

A highly granular assessment of the impacts of climate change on the U.S. economy suggests that each 1°C increase in temperature will cost 1.2% of the country's gross domestic product, on average.

Unlike past analyses of climate change in the U.S., which treated the country as a single entity that would benefit or lose as a whole, this study captures regional differences: locations in the South, for example, are at much higher risk of incurring economic damage, while areas in the Pacific Northwest and New England may experience a slight economic gain.

County-level annual damages in the median scenario for climate during 2080-2099 under a business-as-usual emissions trajectory (RCP8.5). Negative damages indicate economic benefits. The map corresponds with Figure 2I in the main article. This material relates to a paper that appeared in the June 30, 2017, issue of Science, published by AAAS. The paper, by S. Hsiang at the University of California, Berkeley, and colleagues, was titled 'Estimating economic damage from climate change in the United States.'

Credit: Hsiang, Kopp, Jina, Rising, et al. (Science, 2017)

Because losses are expected to be largest in regions that are already poorer, on average, climate change will tend to increase preexisting inequality in the United States, the authors say. Estimates of climate change damage are central to the design of climate policies. 

To better quantify the costs of climate change in the U.S., and at higher resolution than in the past, Solomon Hsiang and colleagues developed a model that integrated data capturing the effects of short-term weather fluctuations between 1981 and 2010 on six key economic factors, such as agricultural yield and labor supply. They used these data to construct estimates of future economic impacts based on climate change projections under a "business-as-usual" approach (one in which fossil fuels continue to be used intensively). Unsurprisingly, Atlantic coast counties are expected to suffer the largest losses from cyclone intensification and sea level rise, they report.


Inequality of damages in the USA: Range of economic damages per year for groupings of US counties, based on their income (29,000 simulations for each of 3,143 counties). The poorest 10% of counties are the leftmost box plot. The richest 10% are the rightmost box plot. Damages are fraction of county income. White lines are median estimates, boxes show the inner 66% of possible outcomes, outer whiskers are inner 90% of possible outcomes. This figure is a simplified version of Figure 5C in the main article.

Overall, though, southern and midwestern populations are projected to suffer the largest losses, exceeding 20% of gross county product (GCP) in some instances, while some northern and western populations may actually see small economic gains of up to 10% of GCP. The model estimates that average agricultural yields will decline by about 9%.

Mortality rates will also increase by about 5.4 deaths per 100,000 people for each 1°C increase. In a related video interview, Hsiang highlights some of the policy-oriented implications of this work: "We've shown which U.S. regional economies are particularly vulnerable, which will help policymakers... If we are going to adapt, we need to know where to focus."
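For a rough sense of scale, the two headline rates above (1.2% of GDP and about 5.4 deaths per 100,000 people, each per 1°C of warming) can be extrapolated linearly. This is only a back-of-envelope sketch: the study itself runs roughly 29,000 county-level simulations, and the function name and the 2017-era GDP and population figures below are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope extrapolation of the study's headline averages.
# The real analysis is county-level and probabilistic; this linear
# scaling is purely illustrative.

GDP_LOSS_PCT_PER_DEG_C = 1.2      # % of U.S. GDP lost per 1 deg C of warming
DEATHS_PER_100K_PER_DEG_C = 5.4   # added annual deaths per 100,000 people

def damage_estimate(warming_c: float, gdp_usd: float, population: int):
    """Scale the paper's average damage rates linearly with warming."""
    gdp_loss = gdp_usd * (GDP_LOSS_PCT_PER_DEG_C / 100) * warming_c
    extra_deaths = population / 100_000 * DEATHS_PER_100K_PER_DEG_C * warming_c
    return gdp_loss, extra_deaths

# Hypothetical inputs: 3 deg C of warming, ~$19 trillion GDP, ~325 million people
loss, deaths = damage_estimate(3.0, 19e12, 325_000_000)
print(f"GDP loss: ${loss / 1e9:.0f} billion/yr; extra deaths: {deaths:,.0f}/yr")
```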

These results are highlighted in a Perspective by William A. Pizer. Reporters can find the economic damages projected for their county at http://globalpolicy.science/econ-damage-climate-change-usa


The map shows changes from the norm in temperature in the U.S. at the end of the century.





Contacts and sources:
American Association For The Advancement Of Science

New Self-Powered System Makes Smart Windows Smarter

Smart windows equipped with controllable glazing can augment lighting, cooling and heating systems by varying their tint, saving up to 40 percent in an average building's energy costs.

These smart windows require power for operation, so they are relatively complicated to install in existing buildings. But by applying a new solar cell technology, researchers at Princeton University have developed a different type of smart window: a self-powered version that promises to be inexpensive and easy to apply to existing windows. This system features solar cells that selectively absorb near-ultraviolet (near-UV) light, so the new windows are completely self-powered.


Graduate student Nicholas Davy holds a sample of the special window glass. 
Photo by David Kelly Crow

"Sunlight is a mixture of electromagnetic radiation made up of near-UV rays, visible light, and infrared energy, or heat," said Yueh-Lin (Lynn) Loo, director of the Andlinger Center for Energy and the Environment, and the Theodora D. '78 and William H. Walton III '74 Professor in Engineering. "We wanted the smart window to dynamically control the amount of natural light and heat that can come inside, saving on energy cost and making the space more comfortable."

The smart window controls the transmission of visible light and infrared heat into the building, while the new type of solar cell uses near-UV light to power the system.

"This new technology is actually smart management of the entire spectrum of sunlight," said Loo, who is a professor of chemical and biological engineering. Loo is one of the authors of a paper, published June 30, that describes this technology, which was developed in her lab.

A piece of the glass, which is shown in the video, harvests one portion of the light spectrum to control other parts of the spectrum. Specifically, it uses near-ultraviolet light to generate electricity, which powers chemical reactions that lighten or darken the glass as needed. When darkened, the window can block more than 80 percent of light.
 Credit: Princeton University

Because near-UV light is invisible to the human eye, the researchers set out to harness it for the electrical energy needed to activate the tinting technology.

"Using near-UV light to power these windows means that the solar cells can be transparent and occupy the same footprint of the window without competing for the same spectral range or imposing aesthetic and design constraints," Loo added. "Typical solar cells made of silicon are black because they absorb all visible light and some infrared heat – so those would be unsuitable for this application."

Princeton engineers invented a window system that could simultaneously generate electricity and lower heating and cooling costs. The team, led by Professor Yueh-Lin (Lynn) Loo, center, includes graduate students Nicholas Davy, left, and Melda Sezen-Edmonds, right. Behind them is a cleanroom at the Andlinger Center for Energy and the Environment, where Loo is the director.

Photo by David Kelly Crow

In the paper published in Nature Energy, the researchers described how they used organic semiconductors - contorted hexabenzocoronene (cHBC) derivatives - for constructing the solar cells. The researchers chose the material because its chemical structure could be modified to absorb a narrow range of wavelengths - in this case, near-UV light. To construct the solar cell, the semiconductor molecules are deposited as thin films on glass with the same production methods used by organic light-emitting diode manufacturers. When the solar cell is operational, sunlight excites the cHBC semiconductors to produce electricity.

At the same time, the researchers constructed a smart window consisting of electrochromic polymers, which control the tint, and can be operated solely using power produced by the solar cell. When near-UV light from the sun generates an electrical charge in the solar cell, the charge triggers a reaction in the electrochromic window, causing it to change from clear to dark blue. When darkened, the window can block more than 80 percent of light.

Nicholas Davy, a doctoral student in the chemical and biological engineering department and the paper's lead author, said other researchers have already developed transparent solar cells, but those target infrared energy. However, infrared energy carries heat, so using it to generate electricity can conflict with a smart window’s function of controlling the flow of heat in or out of a building. Transparent near-UV solar cells, on the other hand, don't generate as much power as the infrared version, but don’t impede the transmission of infrared radiation, so they complement the smart window’s task.

Davy said that the Princeton team’s aim is to create a flexible version of the solar-powered smart window system that can be applied to existing windows via lamination.

"Someone in their house or apartment could take these wireless smart window laminates – which could have a sticky backing that is peeled off – and install them on the interior of their windows," said Davy. "Then you could control the sunlight passing into your home using an app on your phone, thereby instantly improving energy efficiency, comfort, and privacy.”

Joseph Berry, senior research scientist at the National Renewable Energy Laboratory, who studies solar cells but was not involved in the research, said the research project is interesting because the device scales well and targets a specific part of the solar spectrum.

"Integrating the solar cells into the smart windows makes them more attractive for retrofits and you don't have to deal with wiring power," said Berry. "And the voltage performance is quite good. The voltage they have been able to produce can drive electronic devices directly, which is technologically quite interesting."

Davy and Loo have started a new company, called Andluca Technologies, based on the technology described in the paper, and are already exploring other applications for the transparent solar cells. They explained that the near-UV solar cell technology can also power internet-of-things sensors and other low-power consumer products.

"It does not generate enough power for a car, but it can provide auxiliary power for smaller devices, for example, a fan to cool the car while it’s parked in the hot sun," Loo said.

Besides Loo and Davy, Melda Sezen-Edmonds, a graduate student in chemical and biological engineering, is the co-author responsible for the electrochromic portion of the paper. Other authors are Jia Gao, then a postdoctoral researcher in Loo's group and now with Enablence Technologies in California; Xin Lin, a graduate student in electrical engineering; Amy Liu, an undergraduate in computer science; Nan Yao, director of Princeton's Imaging and Analysis Center; and Antoine Kahn, the Stephen C. Macaleer '63 Professor in Engineering and Applied Science and vice dean of Princeton's School of Engineering and Applied Science. Support for the project was provided in part by the National Science Foundation, and the Wilke Family Fund administered by the School of Engineering and Applied Science at Princeton.





Contacts and sources:
Sharon Adarlo
Princeton University

Echeclus Is a Bizarre Solar System Object

Scientists pursue research through observation, experimentation and modeling. They strive for all of these pieces to fit together, but sometimes finding the unexpected is even more exciting. That's what happened to University of Central Florida astrophysicist Gal Sarid, who studies comets, asteroids and planetary formation. Earlier this year he was part of a team that published a study focused on the comet 174P/Echeclus, and it didn't behave the way the team was expecting.

"This is another clue that Echeclus is a bizarre solar system object," said University of South Florida research professor of physics Maria Womack, who leads the team.

Here's an illustration of Echeclus, which UCF researcher Gal Sarid and USF professor Maria Womack are studying.

Credit: Florida Space Institute at UCF.

Comets streak across the sky and, as they get closer to the sun, look like bright fuzz balls with extended luminous trails in their wake. In reality, comets are bulky spheres of mixed ice and rock, many of them also rich in other frozen volatile compounds such as carbon monoxide, carbon dioxide, hydrogen cyanide and methanol.

Comets heat up as they get closer to the sun, losing their icy layers by sublimation and producing emission jets of water vapor, other gases and dust expelled from the comet nucleus, Sarid said. Once they move away from the sun, they cool off again. But some comets start showing emission activity while still very far from the sun, where heating is low.

That's what Sarid and Womack research as they study these kinds of distantly active comets. Womack and graduate student Kacper Wierzchos used the Arizona Radio Observatory Submillimeter telescope to observe Echeclus last year as it approached the sun. This work will be part of Wierzchos' doctoral dissertation in applied physics at USF. Sarid provided theoretical expertise for interpreting the observational results.

Echeclus is part of the population of objects called centaurs, which orbit the sun at distances between those of Jupiter and Neptune. It also belongs to a special group of centaurs that sometimes exhibit comet-like activity. Previous research indicated that Echeclus might have been spewing carbon monoxide as its icy material changed phases.

The team found that the levels of carbon monoxide were nearly 40 times lower than typically expected from comets at similar distances from the sun. This suggests that Echeclus and similar active centaurs may be more fragile than other comets. Echeclus may have gone through a different physical process from most comets that caused it to lose much of its original carbon monoxide, or it may have had less of that substance to begin with.

Understanding the composition of comets and how they work will help researchers understand how our solar system formed. It will also help space explorers plan their travels: what to avoid, and perhaps hidden resources within comet nuclei that may be useful on deep-space missions.

"These are minor bodies that we are studying, but they can provide major insights," Sarid said. "We believe they are rich in organics and could provide important hints of how life originated."

UCF researcher Gal Sarid loves challenges, and a comet that doesn't act as expected is definitely a challenge.

Credit: UCF

Sarid is determined to solve the puzzle. This week he hosts a group of comet experts at UCF to discuss the mysterious activity of Echeclus and other similar bodies. The idea for the workshop is to capitalize on the local expertise in observation, laboratory and theoretical work that is required to fully understand the mysteries of active comets at great distances from the sun. The inaugural Florida Distant Comets workshop was held a year ago at USF.

"I guess I've always liked challenges," Sarid said from his office at the Florida Space Institute at UCF, where he spends his days trying to decipher the models and mathematical equations related to his work.

Sarid has a Ph.D. in geophysics and planetary sciences from Tel Aviv University in Israel and completed postdoctoral work at the Institute for Astronomy and the NASA Astrobiology Institute in Hawaii, followed by a second postdoctoral research appointment at Harvard University. He was part of a team that used the telescopes in Hawaii for several years, chasing comets and asteroids for NASA observing campaigns and space missions, before joining UCF in 2014.

He teamed up with Womack in 2016 and on this most recent study provided theoretical expertise for interpreting the observational results. The National Science Foundation funds the project, under a grant awarded to USF, with Womack as the principal investigator and Sarid as a co-investigator.

They will continue to look at centaur-type comets and measure the level of their carbon monoxide emission and related activity.




Contacts and sources:
Zenaida Gonzalez Kotala
The University of Central Florida

Bizarre Scale Armor of 240 Million Year Old Swiss Reptile

Grisons, 241 million years ago: instead of amid high mountains, a small reptile suns itself on an island beach in a warm, shallow sea where many fish and marine reptiles frolic. This is the story told by an excellently preserved new specimen of the reptile Eusaurosphargis dalsassoi, studied by paleontologists from the University of Zurich.

About 20 centimeters in length, the Swiss reptile was small and juvenile, but its skin was already strongly armored with variously formed smooth, jagged or even thorny osteoderms. Its skeleton indicates a life on land, even though the animal was found together with fish and marine reptiles in the 241 million year old calcareous deposits of the Prosanto Formation near Ducanfurgga at an altitude of 2,740 meters south of Davos in the canton Grisons, Switzerland.

Live reconstruction of Eusaurosphargis dalsassoi 
Credit: Beat Scheffold, Paleontological Institute and Museum, University of Zurich

The Swiss-British team of researchers led by Torsten Scheyer, paleontologist at the University of Zurich, and James Neenan from the Oxford University Museum of Natural History therefore assumes that it was washed off a nearby island into the sea basin and became embedded in the finely layered marine sediments after death.

Skeleton and appearance reconstructed

Fourteen years ago, the species Eusaurosphargis dalsassoi was described from a partially preserved, completely disarticulated specimen found in the vicinity of the Swiss-Italian UNESCO World Heritage Site Monte San Giorgio. The new find from the Grisons mountains, on the other hand, is very well preserved, allowing researchers to reconstruct the skeleton and outward appearance of the animal for the first time.

In the process, they discovered something astonishing: externally, Eusaurosphargis dalsassoi looks very similar to girdled lizards (Cordylidae), a group of small, scaled reptiles (Lepidosauria) that usually live in the dry regions of southern Africa.

Fossil plate of Eusaurosphargis dalsassoi 
Credit: Torsten Scheyer, Paleontological Institute and Museum, University of Zurich

Some of the more strongly armored girdled lizard species could have served as the basis of mythical dragon legends due to their appearance. "This is a case of convergent development, as the extinct species is not closely related to today's African lizards," Scheyer explains.

Related to Helveticosaurus

An exact examination of the phylogenetic relationships instead confirms that its closest relatives are marine reptiles such as ichthyosaurs (Ichthyosauria, or "fish lizards"), sauropterygians (Sauropterygia, "lizard flippers") or even Helveticosaurus, a marine reptile unique to Switzerland, all of which have been found at Monte San Giorgio. The skeleton of Eusaurosphargis, however, shows neither a streamlined body structure, nor limbs that have transformed into flippers, nor a tail fin, features that would indicate a life at sea.

Discovery initially identified as fish remains

The astonishing fossil was originally discovered 15 years ago by amateur paleontologist and fossil preparator Christian Obrist during systematic fossil excavations of the University of Zurich under the leadership of Heinz Furrer, which were sponsored by the Natural History Museum of the Grisons in Chur and by the Grisons canton. 

The animal was found near Ducanfurgga at an altitude of 2,740 meters south of Davos in the canton Grisons, Switzerland 

Credit: Christian Obrist


It took more than a decade of elaborate preparation for the scientific value of the exceptional discovery to be recognized, because the fossil was initially identified as simple fish remains. "The excavations at Ducanfurgga are still in progress today and will hopefully reveal other spectacular discoveries in the future," Furrer says.




Contacts and sources:
Torsten M. Scheyer
University of Zurich

Citation: Scheyer, Torsten M., Neenan James M., Bodogan Timea, Furrer Heinz, Obrist Christian, and Plamondon Mathieu. A new, exceptionally preserved juvenile specimen of Eusaurosphargis dalsassoi (Diapsida) and implications for Mesozoic marine diapsid phylogeny. Scientific Reports, doi: 10.1038/s41598-017-04514-x.

Momentum Paradox of Light Solved

In a recent publication, Aalto University researchers show that in a transparent medium each photon is accompanied by an atomic mass density wave. The optical force of the photon sets the medium's atoms in motion and makes them carry 92% of the total momentum of light, in the case of silicon.

The discovery solves the century-old momentum paradox of light. In the literature, there have been two different values for the momentum of light in a transparent medium. Typically, these values differ by a factor of ten, and this discrepancy is known as the momentum paradox of light. The difference between the momentum values is caused by neglecting the momentum of the atoms moving with the light pulse.

The optical force on atoms forms a mass density wave that propagates with light through the crystal. 
Image: Jyrki Hokkanen, CSC.

To solve the momentum paradox, the authors prove that the special theory of relativity requires an extra atomic density to travel with the photon. In related classical computer simulations, they use the optical force field and Newton's second law to show that a wave of increased atomic mass density propagates through the medium with the light pulse.

The mass transfer leads to splitting of the total momentum of light into two components. The fields’ share of momentum is equal to the Abraham momentum while the total momentum, which includes also the momentum of atoms driven forward by the optical force, is equal to the Minkowski momentum.
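For a single photon of energy ℏω in a medium of refractive index n, this split can be written out explicitly (a standard textbook decomposition consistent with the figures quoted here, not the authors' full derivation):

```latex
p_{\mathrm{Abraham}} = \frac{\hbar\omega}{nc}, \qquad
p_{\mathrm{Minkowski}} = \frac{n\hbar\omega}{c}, \qquad
p_{\mathrm{atoms}} = p_{\mathrm{Minkowski}} - p_{\mathrm{Abraham}}
                   = \Bigl(n - \frac{1}{n}\Bigr)\frac{\hbar\omega}{c}
```

The field's fraction of the total momentum is then 1/n²; with n ≈ 3.5 for silicon at near-infrared wavelengths this is about 8%, leaving roughly 92% to the atoms, consistent with the figure quoted above.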

"Since our work is theoretical and computational, it must still be verified experimentally before it can become a standard model of light in a transparent medium. Measuring the total momentum of a light pulse is not enough; one also has to measure the transferred atomic mass. This should be feasible using present interferometric and microscopic techniques and common photonic materials," researcher Mikko Partanen says.

See the video: Photon mass drag and the momentum of light in a medium

Potential interstellar applications of the discovery


The researchers are working on potential optomechanical applications enabled by the optical shock wave of atoms predicted by the new theory. The theory applies not only to transparent liquids and solids but also to dilute interstellar gas. Using a simple kinematic consideration, it can be shown that, for dilute interstellar gas, the energy loss caused by the mass transfer effect becomes proportional to the photon energy and the distance travelled by the light.

"This prompts further simulations with realistic parameters for interstellar gas density, plasma properties and temperature. Presently, Hubble's law is explained by the Doppler shift being larger for distant stars, which supports the hypothesis of an expanding universe. In the mass polariton theory of light this hypothesis is not needed, since the redshift automatically becomes proportional to the distance from the star to the observer," explains Professor Jukka Tulkki.





Contacts and sources:
Mikko Partanen, Doctoral Candidate, Aalto University
Professor Jukka Tulkki, Aalto University

Research article: Mikko Partanen, Teppo Häyrynen, Jani Oksanen, and Jukka Tulkki. Photon mass drag and the momentum of light in a medium. Physical Review A 95. DOI: 10.1103/PhysRevA.95.063850

The True Colors of the Statue of Liberty

The Statue of Liberty is an iconic blue-green symbol of freedom. But did you know she wasn't always that color? When France gifted Lady Liberty to the U.S., she was a 305-foot statue with reddish-brown copper skin. 
Her color change is thanks to about 30 years' worth of chemistry in the air of New York Harbor. See how this monumental statue transitioned from penny red to chocolate brown to glorious liberty green in this Reactions video, just in time for Independence Day.



Contacts and sources:
Katie Cottingham
The American Chemical Society

Thursday, June 29, 2017

Origins of the Stone Fruit Species

As global competition for fresh and processed fruit increases, breeders and producers also have to deal with the effects of climate change and more pathogens, especially sharka disease, appearing in their orchards. EU-funded research traced the origins of stone fruits to discover genetic clues for better disease resistance.

In an effort to learn more about stone fruits, European scientists have traced the origins of the beloved apricot all the way back to wild Asian species. Their investigations reveal several significant evolutionary events and identify natural gene pools for higher resistance to the most serious threat to apricot and plum harvests in Europe today – sharka disease.

Studying the genetic make-up of domesticated plant and animal species, and comparing them to their wild relatives helps scientists understand how populations diverge and adapt over time and in different climates and conditions.

Credit: © M.studio - fotolia.com

Apricots are an important fruit in the Northern Hemisphere. In France, for example, the apricot is the third-largest fruit crop, with some 12 800 hectares under cultivation. Italy, France and Spain are the principal apricot-producing countries by weight in Europe. But pathogens like the Plum pox virus (PPV), which causes sharka, pose a huge threat to production everywhere.

Yet very little is known about the history of how the apricot tree was domesticated for farming and its resistance to sharka over time. Veronique Decroocq, a scientist at INRA in Bordeaux, took up the challenge with colleagues in the EU-funded STONE project to map the genetic landscape of domestic and wild apricot trees worldwide.

“We used 18 specific markers to genotype a collection of 230 wild trees from Central Asia and 142 cultivated ones, a representative sample of the cultivated apricot landscape around the world,” says Decroocq. She received a Marie Sklodowska-Curie research staff exchange grant to study the genetic diversity of stone fruit trees, such as apricots, peaches and cherries in Europe, the Caucasus and Central Asia.

The search for sharka resistance

Natural forests of wild apricot trees are expected to carry a higher level of resilience to pathogen attacks and climate changes thanks to their richer genetic diversity. A series of PPV inoculation tests was carried out on the apricot trees, as part of the four-year STONE project, to test this assumption.

Genetic markers during the testing revealed the highest levels of diversity in Central Asian and Chinese wild and cultivated apricots, which confirmed Decroocq’s suspicion that the original species can be traced back to this region. “There was a clear branching out of cultivated apricots between Chinese and Western varieties or ‘accessions’. We also noted distinct differences between cultivated and wild apricots.”

After some analysis, the STONE team now believes apricots underwent two independent ‘domestication events’ stemming from the same wild, ancestral gene pool. A genetic subdivision was also noted in apricots native to Central Asia, which showed higher resistance to sharka.

“These findings help us understand the domestication history of cultivated apricots and provide valuable evidence that a rich and exploitable source of genetic diversity and disease-resistance lies hidden in wild apricot varieties from Central Asia,” says Decroocq.

That is also potentially good news for producers in the € 260 billion (US$274 billion) a year global fruit and vegetable processing market.

These and other key outputs of the project’s research, such as a phytosanitary survey of fruit tree species in the Caucasian region, and the identification of key genes involved in peach fruit quality, would not have been possible without the EU staff exchange grant. “It has provided a chance to advance this little-known field and one day develop more effective disease-resistant stone fruit production,” concludes Decroocq.


Contacts and sources:
EC Research and Innovation

Graphene Dialysis Membrane Filters Nanometer-Sized Molecules at 10 to 100 Times the Rate of Commercial Membranes

Dialysis, in the most general sense, is the process by which molecules filter out of one solution, by diffusing through a membrane, into a more dilute solution. Outside of hemodialysis, which removes waste from blood, scientists use dialysis to purify drugs, remove residue from chemical solutions, and isolate molecules for medical diagnosis, typically by allowing the materials to pass through a porous membrane.

Today’s commercial dialysis membranes separate molecules slowly, in part due to their makeup: They are relatively thick, and the pores that tunnel through such dense membranes do so in winding paths, making it difficult for target molecules to quickly pass through.

1) Graphene, grown on copper foil, is pressed against a supporting sheet of polycarbonate. 2) The polycarbonate acts to peel the graphene from the copper. 3) Using interfacial polymerization, researchers seal large tears and defects in graphene. 4) Next, they use oxygen plasma to etch pores of specific sizes in graphene.
Courtesy of the researchers 

Now MIT engineers have fabricated a functional dialysis membrane from a sheet of graphene — a single layer of carbon atoms, linked end to end in a hexagonal configuration like that of chicken wire. The graphene membrane, about the size of a fingernail, is less than 1 nanometer thick. (The thinnest existing membranes are about 20 nanometers thick.) The team’s membrane is able to filter out nanometer-sized molecules from aqueous solutions up to 10 times faster than state-of-the-art membranes, with the graphene itself being up to 100 times faster.

While graphene has largely been explored for applications in electronics, Piran Kidambi, a postdoc in MIT’s Department of Mechanical Engineering, says the team’s findings demonstrate that graphene may improve membrane technology, particularly for lab-scale separation processes and potentially for hemodialysis.

“Because graphene is so thin, diffusion across it will be extremely fast,” Kidambi says. “A molecule doesn’t have to do this tedious job of going through all these tortuous pores in a thick membrane before exiting the other side. Moving graphene into this regime of biological separation is very exciting.”
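Kidambi's point follows from Fick's first law, under which steady-state diffusive flux across a membrane scales inversely with its thickness. A minimal sketch of that scaling, using the thicknesses quoted above (the diffusion coefficient and concentration difference here are illustrative placeholders, not measured values from the study):

```python
# Fick's first law: J = D * delta_c / L, where L is membrane thickness.
# All else being equal, halving L doubles the flux, so a ~1 nm graphene
# sheet should outpace a ~20 nm conventional membrane by the thickness ratio.

D = 1e-9        # m^2/s, illustrative diffusion coefficient in water
DELTA_C = 1.0   # mol/m^3, illustrative concentration difference

def flux(thickness_m):
    """Steady-state diffusive flux (mol m^-2 s^-1) across a membrane."""
    return D * DELTA_C / thickness_m

graphene = flux(1e-9)        # ~1 nm graphene membrane
conventional = flux(20e-9)   # ~20 nm state-of-the-art membrane

print(graphene / conventional)  # the inverse thickness ratio, ~20x
```

This idealized picture ignores pore density and the polycarbonate support, which is why the measured speedup (10x for the full membrane, up to 100x for the graphene alone) differs from the bare thickness ratio.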

Kidambi is a lead author of a study reporting the technology, published today in Advanced Materials. Six co-authors are from MIT, including Rohit Karnik, associate professor of mechanical engineering, and Jing Kong, associate professor of electrical engineering.

Plugging graphene

To make the graphene membrane, the researchers first used a common technique called chemical vapor deposition to grow graphene on copper foil. They then carefully etched away the copper and transferred the graphene to a supporting sheet of polycarbonate, studded throughout with pores large enough to let through any molecules that have passed through the graphene. The polycarbonate acts as a scaffold, keeping the ultrathin graphene from curling up on itself.

The researchers looked to turn graphene into a molecularly selective sieve, letting through only molecules of a certain size. To do so, they created tiny pores in the material by exposing the structure to oxygen plasma, a process by which oxygen, pumped into a plasma chamber, can etch away at materials.

“By tuning the oxygen plasma conditions, we can control the density and size of pores we make, in the areas where the graphene is pristine,” Kidambi says. “What happens is, an oxygen radical comes to a carbon atom [in graphene] and rapidly reacts, and they both fly out as carbon dioxide.”

What is left is a tiny hole in the graphene, where a carbon atom once sat. Kidambi and his colleagues found that the longer graphene is exposed to oxygen plasma, the larger and more dense the pores will be. Relatively short exposure times, of about 45 to 60 seconds, generate very small pores.

Desirable defects

The researchers tested multiple graphene membranes with pores of varying sizes and distributions, placing each membrane in the middle of a diffusion chamber. They filled the chamber’s feed side with a solution containing various mixtures of molecules of different sizes, ranging from potassium chloride (0.66 nanometers wide) to vitamin B12 (1 to 1.5 nanometers) and lysozyme (4 nanometers), a protein found in egg white. The other side of the chamber was filled with a dilute solution.

The team then measured the flow of molecules as they diffused through each graphene membrane.

Membranes with very small pores let through potassium chloride but not larger molecules such as L-tryptophan, which is only about 0.2 nanometers wider. Membranes with larger pores let through correspondingly larger molecules.

The team carried out similar experiments with commercial dialysis membranes and found that, in comparison, the graphene membranes performed with higher “permeance,” filtering out the desired molecules up to 10 times faster.

Kidambi points out that the polycarbonate support is etched with pores that only take up 10 percent of its surface area, which limits the amount of desired molecules that ultimately pass through both layers.

“Only 10 percent of the membrane’s area is accessible, but even with that 10 percent, we’re able to do better than state-of-the-art,” Kidambi says.

To make the graphene membrane even better, the team plans to improve the polycarbonate support by etching more pores into the material to increase the membrane’s overall permeance. They are also working to further scale up the dimensions of the membrane, which currently measures 1 square centimeter. Further tuning the oxygen plasma process to create tailored pores will also improve a membrane’s performance — something that Kidambi points out would have vastly different consequences for graphene in electronics applications.

“What’s exciting is, what’s not great for the electronics field is actually perfect in this [membrane dialysis] field,” Kidambi says. “In electronics, you want to minimize defects. Here you want to make defects of the right size. It goes to show the end use of the technology dictates what you want in the technology. That’s the key.”

This research was supported, in part, by the U.S. Department of Energy and a Lindemann Trust Fellowship.



Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology

Neanderthals Invented Dentistry

Neanderthal dentists?

A discovery of multiple toothpick grooves on teeth and signs of other manipulations by a Neanderthal of 130,000 years ago are evidence of a kind of prehistoric dentistry, according to a new study led by a University of Kansas researcher.


Photo courtesy of David Frayer

"As a package, this fits together as a dental problem that the Neanderthal was having and was trying to presumably treat itself, with the toothpick grooves, the breaks and also with the scratches on the premolar," said David Frayer, professor emeritus of anthropology. "It was an interesting connection or collection of phenomena that fit together in a way that we would expect a modern human to do. Everybody has had dental pain, and they know what it's like to have a problem with an impacted tooth."

The Bulletin of the International Association for Paleodontology recently published the study. The researchers analyzed four isolated but associated mandibular teeth on the left side of the Neanderthal's mouth. Frayer's co-authors are Joseph Gatti, a Lawrence dentist; Janet Monge, of the University of Pennsylvania, and Davorka Radovčić, curator at the Croatian Natural History Museum.



The teeth were found at Krapina site in Croatia, and Frayer and Radovčić have made several discoveries about Neanderthal life there, including a widely recognized 2015 study published in PLOS ONE about a set of eagle talons that included cut marks and were fashioned into a piece of jewelry.

The teeth and all the Krapina Neanderthal fossils were discovered more than 100 years ago at the site, which was originally excavated between 1899 and 1905. However, Frayer and Radovčić have in recent years re-examined many items collected from the site.

In this case, they analyzed the teeth with a light microscope to document occlusal wear, toothpick groove formation, dentin scratches and antemortem, lingual enamel fractures.

Photo courtesy of David Frayer 

Even though the teeth were isolated, previous researchers were able to reconstruct their order and location in the male or female Neanderthal's mouth. Frayer said researchers have not recovered the mandible to look for evidence of periodontal disease, but the scratches and grooves on the teeth indicate they were likely causing irritation and discomfort for some time for this individual.

They found the premolar and M3 molar were pushed out of their normal positions. Associated with that, they found six toothpick grooves among those two teeth and the two molars farther behind them.

"The scratches indicate this individual was pushing something into his or her mouth to get at that twisted premolar," Frayer said.

The features of the premolar and third molar are associated with several kinds of dental manipulations, he said. Mostly because the chips of the teeth were on the tongue side of the teeth and at different angles, the researchers ruled out that something happened to the teeth after the Neanderthal died.

Past research in the fossil record has identified toothpick grooves going back almost 2 million years, Frayer said. They did not identify what the Neanderthal would have used to produce the toothpick grooves, but it possibly could have been a bone or stem of grass.

"It's maybe not surprising that a Neanderthal did this, but as far as I know, there's no specimen that combines all of this together into a pattern that would indicate he or she was trying to presumably self-treat this eruption problem," he said.

The evidence from the toothpick marks and dental manipulations is also interesting in light of the discovery of the Krapina Neanderthals' ability to fashion eagle talons into jewelry because people often think of Neanderthals as having "subhuman" abilities.

"It fits into a pattern of a Neanderthal being able to modify its personal environment by using tools," Frayer said, "because the toothpick grooves, whether they are made by bones or grass stems or who knows what, the scratches and chips in the teeth, they show us that Neanderthals were doing something inside their mouths to treat the dental irritation. Or at least this one was."

Top right photo: Three views of the four teeth recovered from the Neanderthal Krapina site in Croatia, roughly 130,000 years old. A team of researchers led by David Frayer, KU professor emeritus of anthropology, examined the teeth and found evidence of scratches and toothpick grooves on the three molars and one premolar tooth from the bottom left side of the Neanderthal’s mouth. Because two of the teeth are pushed out of their normal positions, the researchers found the grooves indicate an effort by the Neanderthal to manipulate his or her teeth to relieve the pain.

Bottom right photo: Researchers led by David Frayer, KU professor emeritus of anthropology, discovered toothpick grooves and other irregular facets and marks on this premolar tooth that belonged to a Neanderthal at the Krapina site in Croatia. The grooves and marks on the teeth indicate the Neanderthal attempted to remove the tooth with some type of tool, likely to address the pain it caused, Frayer said.


Contacts and sources:
George Diepenbrock
The University of Kansas

Humans Are Interrupting the Ancient, Natural Cycle of Burning and Recovery

The world's open grasslands and the beneficial fires that sustain them have shrunk rapidly over the past two decades, thanks to a massive increase in agriculture, according to a new study led by University of California, Irvine and NASA researchers published today in Science.

Analyzing 1998 to 2015 data from NASA's Terra and Aqua satellites, the international team found that the total area of Earth's surface torched by flames had fallen by nearly 25 percent, or 452,000 square miles (1.2 million square kilometers). Decreases were greatest in Central America and South America, across the Eurasian steppe and in northern Africa, home to fast-disappearing lions, rhinoceroses and other iconic species that live on these fire-forged savannas.

Regular fires have long helped maintain healthy grasslands worldwide. But rapid expansion of industrial farming in Africa, Asia, Central America and South America has resulted in a nearly 25 percent decrease in fires and the loss of habitat for endangered lions and other large mammals.

Credit: Center of Environmental Monitoring & Fire Management, Federal University of Tocantins, Brazil

"A billion and a half more people have been added to the planet over the past 20 years, livestock has doubled in many places, and wide-open areas once kept open by fire are now being farmed," said James Randerson, Chancellor's Professor of Earth system science at UCI. "Our fire data are a sensitive indicator of the intense pressure humans are placing on these important ecosystems."

Modelers had forecast that as global temperatures rose, fire risk would soar. But the researchers learned that widely used prediction tools didn't account for surging population growth or the conversion of grasslands and subsistence farming to industrial agriculture in some of the world's poorest regions.

Fire has been an important factor for millennia in the maintenance of healthy grasslands, which support many large mammals. Without occasional blazes, trees and shrubs encroach on this habitat, which covers about a fifth of the planet's terrain. The researchers discovered a profound transformation over the past two decades.

"Satellite images revealed clear relationships among the rapid disappearance of fires from grassland ecosystems across the world, human activity and changes in plant cover," said lead author Niels Andela, a research scientist at NASA's Goddard Space Flight Center and UCI.

Sharp increases in the number of livestock, the expansion of croplands, and new buildings and roads have fragmented the savannas and reduced highly flammable dried grasses. The expanses have become prized assets for private landowners who want to prevent brush fires. Unlike international efforts to combat tropical deforestation, there's been less focus on protecting these vast semiarid stretches.

"Humans are interrupting the ancient, natural cycle of burning and recovery in these areas," Randerson said.

Losing a fourth of the planet's fires has benefits, increasing storage of dangerous carbon emissions and reducing lung-damaging smoke. But the drop-off in smoke in the atmosphere also allows more sunlight to reach the Earth's surface, causing more global warming.

The change is not uniform. Consistent with previous reports, more wildfires have occurred in the western U.S. and across North American boreal forests, where climate change is lengthening the fire season and drying out flammable vegetation faster.



Contacts and sources:
Brian Bell
University of California, Irvine

Chocolate Boosts Brain Performance and It Fights Cognitive Decline

A balanced diet is chocolate in both hands - a phrase commonly used to justify one's chocolate-snacking behavior. It is a phrase now shown to actually harbor some truth, as the cocoa bean is a rich source of flavanols: a class of natural compounds that has neuroprotective effects.

In their recent review published in Frontiers in Nutrition, Italian researchers examined the available literature for the effects of acute and chronic administration of cocoa flavanols on different cognitive domains. In other words: what happens to your brain up to a few hours after you eat cocoa flavanols, and what happens when you sustain such a cocoa flavanol enriched diet for a prolonged period of time?

File:Chocolate.jpg
Credit: Wikimedia Commons/André Karwath

Although randomized controlled trials investigating the acute effects of cocoa flavanols are sparse, most of them point towards a beneficial effect on cognitive performance. Participants showed, among other things, enhanced working memory performance and improved visual information processing after consuming cocoa flavanols. And for women, eating cocoa after a night of total sleep deprivation actually counteracted the cognitive impairment (i.e. less accuracy in performing tasks) that such a night brings about. These are promising results for people who suffer from chronic sleep deprivation or who work shifts.

It has to be noted, though, that the effects depended on the length and mental load of the cognitive tests used to measure the effect of acute cocoa consumption. In young and healthy adults, for example, a highly demanding cognitive test was required to uncover the subtle, immediate behavioral effects that cocoa flavanols have on this group.

The effects of relatively long-term ingestion of cocoa flavanols (ranging from 5 days up to 3 months) have generally been investigated in elderly individuals. It turns out that for them, cognitive performance was improved by a daily intake of cocoa flavanols. Factors such as attention, processing speed, working memory, and verbal fluency were greatly affected. These effects were, however, most pronounced in older adults with incipient memory decline or other mild cognitive impairments.

And this was exactly the most unexpected and promising result according to authors Valentina Socci and Michele Ferrara from the University of L'Aquila in Italy. "This result suggests the potential of cocoa flavanols to protect cognition in vulnerable populations over time by improving cognitive performance. If you look at the underlying mechanism, the cocoa flavanols have beneficial effects for cardiovascular health and can increase cerebral blood volume in the dentate gyrus of the hippocampus. This structure is particularly affected by aging and therefore the potential source of age-related memory decline in humans."

So should cocoa become a dietary supplement to improve our cognition? "Regular intake of cocoa and chocolate could indeed provide beneficial effects on cognitive functioning over time. There are, however, potential side effects of eating cocoa and chocolate. Those are generally linked to the caloric value of chocolate, some inherent chemical compounds of the cocoa plant such as caffeine and theobromine, and a variety of additives we add to chocolate such as sugar or milk."

Nonetheless, the scientists are the first to put their results into practice: "Dark chocolate is a rich source of flavanols. So we always eat some dark chocolate. Every day."

This research was published in the Research Topic "Chocolate and Health: Friend or Foe?". This Topic gathered papers covering the functional properties of cocoa, to unravel the pros and cons of cocoa in relation to human health.


Contacts and sources:
Melissa Cochrane
Frontiers


Citation: Enhancing Human Cognition with Cocoa Flavonoids http://dx.doi.org/10.3389/fnut.2017.00019

A Dual-Arm Remote-Control Construction Robot for Disaster Relief

A group of Japanese researchers developed a new concept construction robot for disaster relief situations. This robot has a double swing dual arm mechanism and has drastically improved operability and mobility compared to conventional construction machines.

In disaster areas, operating heavy construction equipment remotely and autonomously is necessary, but conventional remote-controlled heavy equipment has problems such as insufficient operability, inability to perform heavy-duty work, limited mobility on slopes and stairs, and low work efficiency because of difficult remote control. Thus, fundamental solutions to such problems have been sought after.

Double Swing Dual Arm Robot

Credit: Osaka University

As part of the Impulsing Paradigm Challenge through Disruptive Technologies Program (ImPACT)’s Tough Robotics Challenge Program, researchers from Osaka University, Kobe University, Tohoku University, The University of Tokyo, and Tokyo Institute of Technology tackled these challenges.

The researchers set out to solve these challenges by developing a prototype robot with a double swing dual arm mechanism and hydraulically powered robotic hands. With this robot, the group aims for discontinuous innovation: drastically increasing the efficiency of work and movement through a dual arm capable of handling heavy objects and high-powered hands for excavating and gripping. Specifically, this robot has the following functions.

1. A double swing dual arm mechanism capable of performing heavy work with high operability and terrain adaptability (smooth mobility on slopes and stairs).


Examples of work using double swing dual arm

Credit: Osaka University

In the double swing dual arm mechanism of this robot, its right and left arms and the rotating portions of its shoulders are on the same axis. Because of this, the robot can use bearings with a far bigger diameter in its rotating portion than can humans and animals, whose shoulder joints are arranged on different axes.

Also, these arms are supported close to the robot’s center of gravity, providing the robot with a high degree of stability. This structure allows the robot to withstand high loads and perform heavy-duty work. Additionally, since each coaxially-arranged arm rotates at 360 degrees, there is no distinction between right and left hands, which allows the user to freely change the layout of the robot’s hands.

2. Multi-fingered hand for construction robots

This group has developed a 4-fingered hand for use with construction robots and has equipped it to one of the robot’s arms. The operating modes -- excavation and grip -- can be selected by changing the hand’s shape. It is also possible to change the hand according to the shape of objects and control a wide range of grip strength.

Examples of work using tough robot hand

Credit: Osaka University

3. Basic technology for enhancement of remote controls

This robot allows a remote operator to control it precisely, with the senses of force and touch, as if he or she were actually touching the target object. The robot is equipped with a multi-rotor unmanned aerial vehicle (UAV, or "drone") powered through electric lines, which allows the operator to view objects and terrain from different viewpoints without a robot-mounted camera. The robot also has a bird's-eye view image composition system. These functions make the robot’s precise tasks and movement over intricate terrain easy.

Researchers in this group think that these functions will dramatically increase construction equipment’s capacity to deal with large-scale disasters and accidents and believe it is possible that the replacement of conventional construction equipment with this robot will drastically change civil engineering and construction methods. 

The researchers aim to put this robot to practical use in disaster relief situations within a few years through further improvement, integration with basic technology, and performance limit tests.


Contacts and sources:
Osaka University
http://resou.osaka-u.ac.jp/en/research/2017/20170619_1

Abundant Viruses Discovered in All the Oceans of the Planet

A group of scientists from several research centres and international universities, led by Manuel Martínez García of the University of Alicante Research Group in Molecular Microbial Ecology, has discovered forty-four new viruses that are among the most abundant in all the Earth's oceans. The finding was achieved thanks to the application of cutting-edge techniques that combine flow cytometry with genomics and molecular biology.

The technique developed by the researchers has revealed some of the most abundant viruses on a planetary level, especially on the surface of all oceans. "This finding would allow the discovery of emerging pathogenic viruses, which are impossible to cultivate in the laboratory due to technical difficulties. In this way, the technique gives us the genomic information that each virus carries, so we know what virus it is", as explained by UA researcher Manuel Martínez García.

Photographic montage simulating an isolated "single virus".

Credit: University of Alicante 

The findings appeared 23 June 2017, in the scientific journal Nature Communications.

Until now, there were hints, but it was not known which viruses were among the most abundant in the planet's oceans. This study sheds light on the issue and opens the way to studying other ecosystems. "With this technology, we open the door to deciphering the terrestrial viriosphere," according to Òscar Fornas, one of the researchers involved and head of the Flow Cytometry Unit at Pompeu Fabra University and the Centre for Genomic Regulation in Barcelona.

"Not only does it serve to discover new viruses or see the ecology of large groups of viruses in the samples studied, but it also sets the basis for studying the different viruses present in a particular ecosystem. In this regard, the human body is a particular ecosystem, and this is where much of the future of this project or possible emergent projects lies."

Now, after detecting viruses in aquatic environments, the researchers have begun applying the technique to human samples, such as saliva.

Martínez García stated that the achievement is that "a single virus is separated from the virus set". The process then breaks open the capsid, and copies of the genome are made using molecular biology techniques. After that, "we can sequence DNA and with that, we access genetic information" to know "who it is".


Dr. Manuel Martínez García, on the right, leads the project together with Dr. Óscar Fornas, expert in flow cytometry

Credit: University of Alicante 

"It is the first time that the genomic study of single-virus particles has been performed efficiently," said Fornas, whose unit was responsible for separating each virus particle one by one.

The article "Single-virus genomics reveals hidden cosmopolitan and abundant viruses" is the conclusion of the research study led by Manuel Martínez García, of the University of Alicante Molecular Microbial Ecology Group, in collaboration with Dr. Josefa Antón Botella, coordinator of the group; the Evolutionary Genomics group of the Miguel Hernández University, with Dr. Rodríguez Valera; the Institute of Marine Sciences (ICM) of the Spanish National Research Council (CSIC) in Barcelona, with Drs. Josep Maria Gasol and Silvia Acinas; the Pompeu Fabra University, with Dr. Oscar Fornas; as well as two American research groups, from Ohio State University and the Bigelow Laboratory for Ocean Sciences.


Contacts and sources:
Citation: Martinez-Hernandez F, Fornas O, Lluesma M, Bolduc B, Cruz MJ, Martínez Martínez J, Antón J, Gasol J, Rosselli R, Rodríguez-Valera R, Sullivan MB, Acinas S and Manuel Martinez-Garcia. “Single-virus genomics reveals hidden cosmopolitan and abundant viruses”. Nature Communications. 2017
DOI: 10.1038/NCOMMS15892.  

NASA Rocket Launch Lights up the Mid-Atlantic Coast

July 4 fireworks came early when a NASA Terrier-Improved Malemute sounding rocket was successfully launched at 4:25 a.m., Thursday, June 29, from the agency’s Wallops Flight Facility in Virginia.

During the 8-minute flight, 10 canisters about the size of a soft drink can were ejected in space, 6 to 12 miles away from the 670-pound main payload.

The canisters deployed blue-green and red vapor that formed artificial clouds visible from New York to North Carolina.
Vapor Traces
Credit: NASA/Wallops Range Optical Systems Group

During an ionosphere or aurora science mission, these clouds, or vapor tracers, allow scientists on the ground to visually track particle motions in space.

The development of the multi-canister ampoule ejection system will allow scientists to gather information over a much larger area than previously possible when deploying the tracers just from the main payload.

A NASA Terrier-Improved Malemute sounding rocket was successfully launched at 4:25 a.m., Thursday, June 29, from the agency’s Wallops Flight Facility in Virginia.
Credits: NASA/Terry Zaperach
The rocket, after being delayed multiple times over the last 30 days, flew to an altitude of about 118 miles.

Time-lapse video of the ampoule release creating artificial clouds, or vapor tracers, that allow scientists to track particle motions in space. The ampoule launch occurred June 29 from NASA's Wallops Flight Facility, Virginia.

Credits: NASA/Wallops Range Optical Systems Group

Wallops received nearly 2,000 reports and photos of the cloud sightings from areas as far north as New York, south to North Carolina, and inland throughout Virginia, Maryland, Pennsylvania, and points in-between. Submitted photos can be viewed on the Wallops Facebook Page.

NASA's Wallops Flight Facility provides agile, low-cost flight and launch range services to meet government and commercial sector needs for accessing flight regimes worldwide, from the Earth's surface to the moon. Wallops' flight assets, ranging from research aircraft, unmanned aerial systems and high-altitude balloons to suborbital and orbital rockets, provide a full range of capabilities, while operational launch range and airfield capabilities meet ongoing and emerging needs in the science, aerospace, defense, and commercial industries.


Contacts and sources:
Jeremy Eggers
NASA




Collapse of the European Ice Sheet Caused Ten Times the Melt of Greenland and Antarctica Today

The Eurasian ice sheet was an enormous conveyor of ice that covered most of northern Europe some 23,000 years ago. Its extent was such that one could have skied 4,500 km continuously across it - from the far southwestern isles of Britain to Franz Josef Land in the Siberian Arctic. Suffice it to say, its existence had a massive and extremely hostile impact on Europe at the time.

This ice sheet alone lowered global sea level by over 20 meters. As it melted and collapsed, it caused severe flooding across the continent, led to dramatic sea-level rise, and diverted mega-rivers that raged on the continent. A new model investigating the retreat of this ice sheet and its many impacts has just been published in Quaternary Science Reviews.

Based on the latest reconstruction of the famous ice age river system, Fleuve Manche, the scientists have calculated that its catchment area was similar to that of the Mississippi.

Illustration: H. Patton/CAGE

Ten times the melt of Greenland and Antarctica today

"Our model experiments show that from 15,000 to 13,000 years ago, the Eurasian ice sheet lost 750 cubic kilometres of ice a year. For short periods, it peaked at ice loss rates of over 3,000 cubic kilometres per year," says first author Henry Patton, researcher at the CAGE Centre for Arctic Gas Hydrate, Environment and Climate at UiT The Arctic University of Norway.

A cubic kilometre of ice is difficult to imagine, but picture a cube measuring 1 km on each side: it holds roughly a billion tonnes of water. Now multiply that by 3,000.
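The arithmetic scales up neatly to sea level. As a rough sketch of the conversion from the quoted ice-loss rates to sea-level equivalent (the glacier-ice density and global ocean area here are standard reference values, not figures from the study):

```python
# Back-of-envelope: convert ice-loss rates (km^3/yr) to sea-level equivalent.
ICE_DENSITY = 917.0      # kg/m^3, typical glacier ice
WATER_DENSITY = 1000.0   # kg/m^3, fresh water
OCEAN_AREA_KM2 = 3.61e8  # global ocean surface area, km^2

def sea_level_equivalent_mm(ice_volume_km3):
    """Sea-level rise (mm) if this ice volume melts and spreads over the ocean."""
    water_km3 = ice_volume_km3 * ICE_DENSITY / WATER_DENSITY
    return water_km3 / OCEAN_AREA_KM2 * 1e6  # km -> mm

print(f"{sea_level_equivalent_mm(750):.1f} mm/yr")   # average rate quoted above
print(f"{sea_level_equivalent_mm(3000):.1f} mm/yr")  # peak rate quoted above
```

With these reference values, the quoted rates correspond to roughly 1.9 mm and 7.6 mm of global sea-level rise per year.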

"There is an event in this deglaciation story called Meltwater Pulse 1A. This was a period of very rapid sea level rise that lasted some 400-500 years when global temperatures were rising very quickly. During this period, we estimate that the Eurasian ice sheet contributed around 2.5 metres to global sea level rise," states Patton.

The sea level rise and the colossal amounts of meltwater discharged from the collapsing ice sheet meant that areas that previously were land eventually became seabed. Britain and Ireland, which had been joined to Europe throughout the last ice age, finally separated with the flooding of the English Channel around 10,000 years ago. 
Illustration: H. Patton/CAGE

"To place it in context", says professor Alun Hubbard, the paper's second author and a leading glaciologist, "this is almost ten times the current rates of ice being lost from Greenland and Antarctica today. What's fascinating is that not all Eurasian ice retreat was from surface melting alone. Its northern and western sectors across the Barents Sea, Norway and Britain terminated directly into the sea. They underwent rapid collapse through calving of vast armadas of icebergs and undercutting of the ice margin by warm ocean currents."

"This is a harbinger of what's starting to happen to the Greenland ice sheet" warns Hubbard.

All rivers in Europe unite

The influence of the Eurasian ice sheet extended far beyond what was directly covered by ice. One of the most dramatic impacts was the formation of the enormous Fleuve Manche. This was a mega-river network that drained the present-day Vistula, Elbe, Rhine and Thames rivers, and the meltwater from the ice sheet itself, through the Seine Estuary and into the North Atlantic.

"Some speculate that at some points during the European deglaciation this river system had a discharge twice that of the Amazon today. Based on our latest reconstruction of this system, we have calculated that its catchment area was similar to that of the Mississippi. It was certainly the largest river system ever to have drained the Eurasian continent," says Patton.

The original Brexit is a fact

The vast reach of this catchment meant that this mega-river had the capacity to contribute enormous volumes of cold freshwater directly into the North Atlantic, enough to have severely modified the Gulf Stream - a major climate influencer.

"Britain and Ireland, which had been joined to Europe throughout the last ice age, finally separated with the flooding of the English Channel around 10,000 years ago. It was the original Brexit, so to speak," says Alun Hubbard.

The ice retreats, the humans advance

The ice reconstruction in this study provides a fascinating image of a changing Europe during the time prehistoric humans came to populate the continent. The environmental challenges they met must have been spectacular.

"One thing that we show pretty well in this study is that our simulation is relevant to a range of different research disciplines, not only glaciology. It can even be useful for archaeologists who look at human migration routes, and are interested to see how the European environment developed over the last 20,000 years," says Patton.

This model reconstruction has already proven a vital constraint for understanding complex systems beyond the ice sheet realm. For example, data from this study has been used to examine the evolution of gas hydrate stability within the Eurasian Arctic over glacial timescales, exploring the development of massive mounds and methane blow-out craters that have been recently discovered on the Arctic seafloor.


Contacts and sources:
Henry Patton
CAGE - Center for Arctic Gas Hydrate, Climate and Environment

Oceans Are Warming Rapidly, Says CAS Study

More than 90% of the Earth's energy imbalance (EEI) in the climate system is sequestered in the ocean, and consequently the ocean heat content (OHC) is increasing. OHC is therefore one of the most important indicators of global warming. During the past 30 years, many independent groups have worked to estimate historical OHC changes.
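Behind this indicator is a simple definition: the heat-content anomaly of a water column is its temperature anomaly integrated over depth, scaled by seawater's density and specific heat capacity. A minimal sketch of that calculation (the density and specific-heat values are typical reference numbers, not figures from the study):

```python
# OHC anomaly of a 1 m^2 water column: OHC = rho * c_p * sum(dT * dz)
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density
CP_SEAWATER = 3990.0   # J/(kg K), specific heat capacity of seawater

def column_ohc_anomaly(layers):
    """OHC anomaly (J/m^2) given (thickness_m, temp_anomaly_K) layers."""
    return sum(RHO_SEAWATER * CP_SEAWATER * dz * dt for dz, dt in layers)

# Example: a 0.1 K warming spread uniformly over the top 2000 m
print(f"{column_ohc_anomaly([(2000.0, 0.1)]):.2e} J/m^2")
```

Even this seemingly small anomaly stores on the order of 10^8 joules per square metre of ocean, which is why the ocean dominates the planetary energy budget.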

However, large uncertainties exist among the published global OHC time series. For example, during the recent surge of research on the so-called "hiatus" or "slowdown", different studies drew quite different conclusions on key scientific questions such as "Where is the heat redistributed in the ocean?" This motivated a detailed analysis of global and basin-scale OHC changes based on multiple ocean datasets.

This image shows the ocean warming rate (0-2000 m ocean heat content trend) from 1960 to 2016, in units of W/m², calculated from the IAP gridded data.

Credit: Cheng Lijing
A just-released study, led by Ph.D. student WANG Gong-jie of the National University of Defence Technology, in cooperation with Professor LI Chong-yin and Dr. CHENG Li-jing of the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, and Professor John P. ABRAHAM of the University of St. Thomas (USA), comprehensively examined OHC change on decadal and multi-decadal scales and in different ocean basins. Using three objectively analyzed ocean datasets (Ishii from Japan, EN4 from the Met Office, and IAP), they found that the oceans are robustly warming, regardless of which dataset was used. In addition, heat was significantly redistributed among the global ocean basins over the past several decades.

During 1998-2012, the period famous as the global warming "slowdown", all of these basins were accumulating heat, and no single ocean basin dominated the global OHC change.

In other words, there was statistically significant warming below 100 m depth in the Atlantic and Southern Oceans, and between 100 and 300 m depth in the Pacific and Indian Oceans, all of which contributed to global ocean warming. The discrepancies among previous studies stem from the different depth ranges used in calculating OHC, as well as from uncertainty in the subsurface temperature datasets.

Why are there substantial differences among the datasets? This study shows that the Ishii analysis underestimates the heating rate in the southern hemisphere over the past century, and that the EN4 analysis cannot correctly reconstruct sea surface temperature (SST) over the past 30 years, underestimating the warming rate by ~90% compared with independent SST datasets such as ERSST and OISST. This indicates that the Ishii and EN4 analyses may underestimate the ocean warming rate.

"In plain English, it will be important that we keep high-quality temperature sensors positioned throughout the oceans so in the future we will be able to predict where our climate is headed," explains co-author ABRAHAM. "We say in science that a measurement not made is a measurement lost forever. And there are no more important measurements than of heating of the oceans."


Contacts and sources:
Zheng Lin
Institute of Atmospheric Physics, Chinese Academy of Sciences


Citation: Consensuses and discrepancies of basin-scale ocean heat content changes in different ocean analyses http://dx.doi.org/10.1007/s00382-017-3751-5

What's on Your Skin? Archaea, an Extreme-Loving Microbe, That's What

It turns out your skin is crawling with single-celled microorganisms - and they're not just bacteria. A study by the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the Medical University of Graz has found that the skin microbiome also contains archaea, a type of extreme-loving microbe, and that its abundance varies with age.

The researchers conducted both genetic and chemical analyses of samples collected from human volunteers ranging in age from 1 to 75. They found that archaea (pronounced ar-KEY-uh) were most abundant in subjects younger than 12 and older than 60. Their study has been published in Scientific Reports (a Nature journal) in an article titled “Human age and skin physiology shape diversity and abundance of Archaea on skin.”

“The skin microbiome is usually dominated by bacteria,” said Hoi-Ying Holman, director of the Berkeley Synchrotron Infrared Structural Biology (BSISB) Program and a senior author on the paper. “Most of the scientific attention has been on bacteria, because it’s easier to detect. Based on the literature, six years ago we didn’t even know that archaea existed on human skin. Now we’ve found they’re part of the core microbiome and are an important player on human skin.”

As director of the Berkeley Synchrotron Infrared Structural Biology (BSISB) program at the Advanced Light Source, Hoi-Ying Holman focuses on developing and providing research communities with new synchrotron infrared technologies for deciphering the relationship between the genome and functional processes, and for identifying the connection between the genome and natural environments.

Credit: Marilyn Chung/Berkeley Lab

The study was a joint effort of Holman, Berkeley Lab postdoctoral fellow Giovanni Birarda (now a scientist at Elettra Sincrotrone Trieste in Italy), UC Berkeley postdoctoral fellow Alexander Probst (now associate professor at the University of Duisburg-Essen in Germany), and Christine Moissl-Eichinger, the corresponding author of the study. Moissl-Eichinger and her team at the Medical University of Graz in Austria and at the University of Regensburg in Germany analyzed the genetic features of the skin microbiomes.

Berkeley Lab postdoctoral fellow Giovanni Birarda

Credit:  Berkeley Lab

In addition to the influence of age, they found that gender was not a factor but that people with dry skin have more archaea. “Archaea might be important for the cleanup process under dry skin conditions,” said Moissl-Eichinger. “The results of our genetic analysis (DNA-based quantitative PCR and next-generation sequencing), together with results obtained from infrared spectroscopy imaging, allowed us to link lower levels of sebum [the oily secretion of sebaceous glands] and thus reduced skin moisture with an increase of archaeal signatures.”

More than skin deep

It was not until the 1970s that scientists realized how different archaea were from bacteria, and they became a separate branch on the tree of life - the three branches being Bacteria, Archaea, and Eukarya (which includes all plants and animals). Archaea are commonly found in extreme environments, such as hot springs and Antarctic ice. Nowadays it is known that archaea exist in sediments and in the Earth's subsurface as well, but they have only recently been found in the human gut and linked with the human microbiome.

Holman’s focus is on developing synchrotron infrared spectroscopy techniques to look at biological or ecological systems. Using Berkeley Lab’s Advanced Light Source (ALS), one of the world’s brightest sources of infrared beams, the Holman Group developed a rapid and label-free method to screen cells and immediately tell if they’re bacteria or archaea.

“The challenges in microbial profiling are speed, throughput, and sample integrity,” she said. “We spent years developing this technique and could not have done it without the unique resources of the ALS.”

But the dearth of studies on skin archaea is not just because of technical limitations. The researchers assert that the lack of age diversity in the sampling in previous studies was also a factor. “Sampling criteria and methods matter,” Holman said. “We found middle-aged human subjects have less archaea; therefore, the archaeal signatures have been overlooked in other skin microbiome studies.”

From astronauts to archaea

This study stemmed from a planetary protection project for NASA and the European Space Agency. “We were checking spacecraft and their clean rooms for the presence of archaea, as they are suspected to be possible critical contaminants during space exploration – certain methane-producing archaea, the so-called methanogens, could possibly survive on Mars,” Moissl-Eichinger said. “We did not find many signatures from methanogens, but we found loads of Thaumarchaeota, a very different type of archaea that survives with oxygen.”

(left) Fluorescence images of archaeal cells in skin-wipe samples; the ALS was used to measure infrared absorption spectra of different archaea types. (right) A hierarchical chart of the human skin archaeome, with Thaumarchaeota (red), Euryarchaeota (green), and Crenarchaeota (blue).

 Credit:  Berkeley Lab

At first it was thought the Thaumarchaeota were from the outside, but after finding them in hospitals and other clean rooms, the researchers suspected they were from human skin. So they conducted a pilot study of 13 volunteers and found they all had these archaea on their skin.

As a follow-up, which is the current study, they tested 51 volunteers and decided to get a large range in ages to test the age-dependency of the archaeal signatures. Samples were taken from the chest area. The variations in archaeal abundance among the age groups were statistically significant and unexpected. “It was surprising,” Holman said. “There’s a five- to eightfold difference between middle-aged people and the elderly – that’s a lot.”

Role in human health still a question

Their study focused on Thaumarchaeota, one of the many phyla of archaea, as little evidence of the others was found in the pilot study. “We know that Thaumarchaeota are supposed to be an ammonia-oxidizing microorganism, and ammonia is a major component of sweat, which means they might play a role in nitrogen turnover and skin health,” Holman said.

In collaboration with Peter Wolf of the Medical University of Graz, the team also correlated archaeal abundance with skin dryness, as middle-aged persons have higher sebum levels and thus moister skin than the elderly.

So far, most archaea are known to be beneficial rather than harmful to human health. They may be important for reducing skin pH or keeping it at low levels, and lower pH is associated with lower susceptibility to infections.

“The detected archaea are probably involved in nitrogen turnover on skin, and are capable of lowering the skin pH, supporting the suppression of pathogens,” said Moissl-Eichinger. “Bacteria with the same capacities are already used as skin probiotics, potentially improving skin moisture and reducing body odors. Nevertheless, the clinical relevance of Thaumarchaeota remains unclear and awaits further studies.”

Holman listed several avenues of inquiry for future studies with Moissl-Eichinger. “We would like to investigate the physiological role of human skin archaea and how they differ from environmental archaea,” she said. “We would like to find out which niches they prefer on or in the human body. We also want to know whether they might be involved in pathogenic processes, such as neurodermatitis or psoriasis. So far, there is little evidence of the pathogenicity of archaea.”

The study was funded by the U.S. Department of Energy’s Office of Science, BioTechMed Graz, the Bavaria California Technology Center, and the University of Regensburg. The Advanced Light Source is a DOE Office of Science User Facility at Berkeley Lab. Other co-authors of the study were Anna Auerbach of the University of Regensburg and Kaisa Koskinen of the Medical University of Graz.




Contacts and sources:
Julie Chao
Lawrence Berkeley National Laboratory

3,000-Year-Old Textiles Offer Earliest Evidence of Chemical Dyeing in The Levant

Tel Aviv University archaeologists have revealed that cloth samples found in the Israeli desert present the earliest evidence of plant-based textile dyeing in the region. They were found at a large-scale copper smelting site and a nearby temple in the copper ore district of Timna in Israel's Arava desert and are estimated to date from the 13th-10th centuries BCE.

The discovery provides insight into society and copper production in the Timna region at the time of David and Solomon, Tel Aviv University researchers say.

The wool and linen pieces shed light on a sophisticated textile industry and reveal details about a deeply hierarchical society dependent on long-distance trade to support its infrastructure in the unforgiving desert.

This is a dyed textile at Timna.

Credit: Clara Amit, courtesy of the Israeli Antiquities Authority.

The study was published in PLOS ONE. It was led by Dr. Erez Ben-Yosef of TAU's Department of Archaeology and Near Eastern Cultures and Dr. Naama Sukenik of the Israel Antiquities Authority; and conducted in collaboration with Vanessa Workman of TAU's Department of Archaeology, Dr. Orit Shamir of the Israel Antiquities Authority and Dr. Zohar Amar, Dr. Alexander Varvak and Dr. David Iluz of Bar-Ilan University.

Textiles suggest significant social stratification

"This was clearly a formative period, with local kingdoms emerging and replacing Egyptian hegemony in Canaan," Dr. Ben-Yosef said. "These beautiful masterpieces of weaving and dyeing -- the first evidence of industrial dyeing at the time, of wash-resistant color on textile -- support the idea of a strong, hierarchical Edomite Kingdom in Timna at the time.

"It is apparent that there was a dominant elite in this society that took pains to dress according to their 'class,' and had the means to engage in long-distance trade to transport these textiles -- and other materials and resources -- to the desert."

The research suggests a sophisticated dyeing process involving cooking colorful plants in water, then adding fleece fixed with alum to create a chemical bond between fabric and dye. The result is a wash-resistant colorful fabric.

The researchers radiocarbon-dated the textile pieces and harnessed gas chromatography to identify the cloth's organic molecules. They found "red" molecules produced from the madder plant and "blue" molecules from the woad plant.

"Both plants were known in antiquity as sources of organic dyes," said Dr. Ben-Yosef. "We know that these plants were used to create elaborate costumes during the Roman period, more than a thousand years later. Now we have evidence in the region of an Edomite society wearing textiles produced the same way, versus an earlier 'primitive' smearing of color on fabric."

"We can make many inferences according to this discovery," Dr. Ben-Yosef continued. "To force a large group of people to work in dangerous mines in the desert, you need a strong ruling party -- an elite that probably wore exquisite clothes to further distinguish themselves. The smelters, working in furnaces, were considered 'magicians' or even priests, and they probably wore fine clothing too. They represented the highest level of society, managing a sensitive and complex process to produce copper from rock."

Evidence of long-distance trade

The textile dye presents evidence of long-distance trade, Dr. Ben-Yosef noted. "Clearly this is not local. These plants require a lot of water and probably hail from the Mediterranean regions. The dyeing required special craftspeople, an entire industry that could not have subsisted in the desert. If Jerusalem was indeed opulent in the time of King Solomon, and the Temple covered in copper, we can assume a link to that kingdom."

The textiles are currently being stored in special facilities at the Israel Antiquities Authority and will one day be presented in museums in Israel and elsewhere.



Contacts and sources:
George Hunka
American Friends of Tel Aviv University (AFTAU)

'Bulges' In Volcanoes Can Predict Eruptions

A team of researchers from the University of Cambridge has developed a new way of measuring the pressure inside volcanoes, and found that it can be a reliable indicator of future eruptions.

Using a technique called 'seismic noise interferometry' combined with geophysical measurements, the researchers measured the energy moving through a volcano. They found that there is a good correlation between the speed at which the energy travelled and the amount of bulging and shrinking observed in the rock. The technique could be used to predict more accurately when a volcano will erupt. Their results are reported in the journal Science Advances.

Kilauea
Credit: Clare Donaldson

Data was collected by the US Geological Survey across Kilauea in Hawaii, a very active volcano with a lake of bubbling lava just beneath its summit. During a four-year period, the researchers used sensors to measure relative changes in the velocity of seismic waves moving through the volcano over time. They then compared their results with a second set of data which measured tiny changes in the angle of the volcano over the same time period.

Lava Waterfall, Kilauea Volcano, Hawaii. 
Credit: Dhilung Kirat


As Kilauea is such an active volcano, it is constantly bulging and shrinking as pressure in the magma chamber beneath the summit increases and decreases. Kilauea's current eruption started in 1983, and it spews and sputters lava almost constantly. Earlier this year, a large part of the volcano fell away and it opened up a huge 'waterfall' of lava into the ocean below. Due to this high volume of activity, Kilauea is also one of the most-studied volcanoes on Earth.

The Cambridge researchers used seismic noise to detect what was controlling Kilauea's movement. Seismic noise is a persistent low-level vibration in the Earth, caused by everything from earthquakes to waves in the ocean, and can often be read on a single sensor as random noise. But by pairing sensors together, the researchers were able to observe energy passing between the two, therefore allowing them to isolate the seismic noise that was coming from the volcano.
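The pairing step can be illustrated with a toy cross-correlation: if two sensors record the same diffuse noise field, one delayed relative to the other, the lag that maximizes their cross-correlation recovers the travel time of energy between them. A minimal sketch with synthetic data (this is an illustration of the principle, not the USGS or Cambridge processing pipeline):

```python
import random

random.seed(0)
N, TRUE_LAG = 2000, 37                  # record length; true inter-sensor delay (samples)
field = [random.gauss(0, 1) for _ in range(N + TRUE_LAG)]
a = field[TRUE_LAG:]                    # sensor A
b = field[:N]                           # sensor B records the same wavefield TRUE_LAG samples later

def xcorr(x, y, max_lag):
    """Correlation of x against y shifted by k samples, for k = 0..max_lag."""
    n = len(x) - max_lag
    return [sum(x[i] * y[i + k] for i in range(n)) for k in range(max_lag + 1)]

c = xcorr(a, b, 100)
best_lag = max(range(len(c)), key=c.__getitem__)
print(best_lag)  # the peak recovers the inter-sensor travel time: 37 samples
```

In practice, tracking small changes in that recovered travel time over months to years is what reveals the seismic velocity variations associated with the volcano's bulging and shrinking.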

"We were interested in how the energy travelling between the sensors changes, whether it's getting faster or slower," said Clare Donaldson, a PhD student in Cambridge's Department of Earth Sciences, and the paper's first author. "We want to know whether the seismic velocity changes reflect increasing pressure in the volcano, as volcanoes bulge out before an eruption. This is crucial for eruption forecasting."

One to two kilometres below Kilauea's lava lake, there is a reservoir of magma. As the amount of magma changes in this underground reservoir, the whole summit of the volcano bulges and shrinks. At the same time, the seismic velocity changes. As the magma chamber fills up, it causes an increase in pressure, which leads to cracks closing in the surrounding rock and producing faster seismic waves - and vice versa.

"This is the first time that we've been able to compare seismic noise with deformation over such a long period, and the strong correlation between the two shows that this could be a new way of predicting volcanic eruptions," said Donaldson.

Volcano seismology has traditionally measured small earthquakes at volcanoes. When magma moves underground, it often sets off tiny earthquakes, as it cracks its way through solid rock. Detecting these earthquakes is therefore very useful for eruption prediction. But sometimes magma can flow silently, through pre-existing pathways, and no earthquakes may occur. This new technique will still detect the changes caused by the magma flow.

Seismic noise occurs continuously, and is sensitive to changes that would otherwise have been missed. The researchers anticipate that this new research will allow the method to be used at the hundreds of active volcanoes around the world.



Contacts and sources:
Sarah Collins
University of Cambridge 

Citation: C. Donaldson et al. ‘Relative seismic velocity variations correlate with deformation at Kīlauea volcano’. Science Advances (2017) DOI: 10.1126/sciadv.1700219