Wednesday, July 31, 2019

What Does a Marsquake Look Like?

Southern California got all shook up after a set of recent quakes. But Earth isn't the only place that experiences quakes: the Moon and Mars have them as well. NASA sent the first seismometer to the Moon 50 years ago, during the Apollo 11 mission; the agency's InSight lander brought the first seismometer to Mars in late 2018, an instrument called the Seismic Experiment for Interior Structure (SEIS).

This artist's concept is a simulation of what seismic waves from a marsquake might look like as they move through different layers of the Martian interior.
Credit: NASA/JPL-Caltech/ETH Zurich/Van Driel


Provided by the French space agency, Centre National d'Études Spatiales (CNES), the seismometer detected its first marsquake on April 6, 2019. The InSight mission's Marsquake Service, which monitors the data from SEIS, is led by Swiss research university ETH Zurich.

Quakes look and feel different depending on the material their seismic waves pass through. In a new video, scientists at ETH demonstrate this by using data from the Apollo-era seismometers on the Moon, two of the first quakes detected on Mars by SEIS, and quakes recorded here on Earth.

By running data from these worlds through a quake simulator, or "shake room," scientists can experience for themselves how different the quakes can be. Researchers had to amplify the marsquake signals by a factor of 10 million to make the quiet and distant tremors perceptible in comparison to the similarly amplified moonquakes and unamplified earthquakes.
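That amplification step is easy to picture. Here is a minimal sketch, using invented ground-motion sample values (the real SEIS data pipeline is of course far more involved):

```python
# Sketch: scaling faint marsquake samples for playback in a quake simulator.
# The gain matches the factor of 10 million described above; the sample
# values below are hypothetical, purely for illustration.
MARS_GAIN = 10_000_000

def amplify(samples, gain=MARS_GAIN):
    """Return a copy of the signal with every sample scaled by `gain`."""
    return [s * gain for s in samples]

raw_marsquake = [1.2e-9, -3.4e-9, 2.1e-9]  # hypothetical ground-motion samples
shake_room_signal = amplify(raw_marsquake)
print(shake_room_signal)  # each value is now 10 million times larger
```

The same scaling applied to moonquake records, with unamplified earthquake records as the baseline, is what makes the three worlds comparable in the shake room.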

Fifty years after Apollo 11 astronauts deployed the first seismometer on the surface of the Moon, NASA InSight's seismic experiment transmits data giving researchers the opportunity to compare marsquakes to moonquakes and earthquakes. The Marsquake Service (MQS) center at ETH Zurich in Switzerland monitors daily seismic activity on Mars.

About InSight

JPL manages InSight for NASA's Science Mission Directorate. InSight is part of NASA's Discovery Program, managed by the agency's Marshall Space Flight Center in Huntsville, Alabama. Lockheed Martin Space in Denver built the InSight spacecraft, including its cruise stage and lander, and supports spacecraft operations for the mission.

A number of European partners, including France's Centre National d'Études Spatiales (CNES) and the German Aerospace Center (DLR), are supporting the InSight mission. CNES provided the Seismic Experiment for Interior Structure (SEIS) instrument to NASA, with the principal investigator at IPGP (Institut de Physique du Globe de Paris). Significant contributions for SEIS came from IPGP; the Max Planck Institute for Solar System Research (MPS) in Germany; the Swiss Federal Institute of Technology (ETH Zurich) in Switzerland; Imperial College London and Oxford University in the United Kingdom; and JPL. DLR provided the Heat Flow and Physical Properties Package (HP3) instrument, with significant contributions from the Space Research Center (CBK) of the Polish Academy of Sciences and Astronika in Poland. Spain's Centro de Astrobiología (CAB) supplied the temperature and wind sensors.


Contacts and sources:
Andrew Good
Jet Propulsion Laboratory, Pasadena, Calif.








Speech Development Starts in the Womb

New parents are always thrilled when their offspring finally speaks its first words – usually at the age of 12 to 18 months. What parents often don't know: the regions of the brain that recognise and process speech sounds start to specialise at a much earlier stage.

An interdisciplinary working group of the Department of Pediatrics and Adolescent Medicine of MedUni Vienna/Vienna General Hospital in the Comprehensive Center for Pediatrics (CCP), led by neurolinguistics expert Lisa Bartha-Doering, has now found that full-term newborns are able to discriminate between speech sounds and non-speech sounds the day after they are born. Even at this early age, specific regions of the frontal and temporal lobes of the brain's left hemisphere show specialisation for the processing of speech.


The hearing apparatus of the foetus is already functional in the last trimester of pregnancy and speech-specific regions develop in the brain. Babies therefore learn to distinguish the first speech sounds while still in the womb. The natural filtering of speech sounds through the amniotic fluid and through the background noise of the mother's body plays a major role in this. The last few weeks before birth are therefore very important for the first phases of a child’s speech development and affect its ongoing speech acquisition.

To measure this early brain activity, the researchers used the technique of functional near-infrared spectroscopy (fNIRS), which involves measuring the changes in oxygenation in the baby’s cerebral cortex as it recognises speech.

Crucial last weeks before birth

Bartha-Doering and her team also found that, in contrast to full-term babies, many babies that were born preterm still couldn’t discriminate between speech and non-speech sounds by their expected delivery date (days or weeks after their actual – premature – birth) and still didn’t exhibit any functional specialisation of the regions of the brain responsible for speech.

They lack the benefit of spending the crucial last few weeks before birth in the natural environment of the mother's body to help them recognise and process filtered speech sounds. These results also underscore the importance of the acoustic environment on preterm baby and neonatal units in hospitals. "A noisy environment similar to that encountered in the mother's body, including parental voices and reduction of ambient noise, can support the development of the speech areas in the brain of preterm infants, thus facilitating their further speech acquisition," stresses the MedUni Vienna expert.

 "Much of this knowledge about the importance of the sound environment for newborn babies has already been put into practice on the neonatal wards within Vienna General Hospital and the recent findings of this study will now feed into further optimisation of the acoustic environment on the preterm and neonatal units."
https://www.meduniwien.ac.at



Contacts and sources:
Medical University of Vienna


Citation: Bartha-Doering L, Alexopoulos J, Giordano V, Stelzer L, Kainz T, Benavides-Varela S, Wartenburger I, Klebermass-Schrehof K, Olischar M, Seidl R, Berger A (2019). "Absence of early speech discrimination in preterm infants at term-equivalent age." Developmental Cognitive Neuroscience, in press. https://doi.org/10.1016/j.dcn.2019.100679



Tuesday, July 30, 2019

Elephant Extinction Will Raise Carbon Dioxide Levels in Atmosphere



One of the last remaining megaherbivores, forest elephants shape their environment by serving as seed dispersers and forest bulldozers as they eat over a hundred species of fruit, trample bushes, knock over trees and create trails and clearings. Their ecological impact also affects tree populations and carbon levels in the forest, researchers report, with significant implications for climate and conservation policies.
 
Central African elephant. 
Photo by Stephen Blake, Ph.D.


In a paper recently published in Nature Geoscience, a Saint Louis University biologist and his colleagues found that elephant populations in central African forests encourage the growth of slow-growing trees with high wood density, which sequester more carbon from the atmosphere than the fast-growing species that are the preferred foods of elephants.

As forest elephants preferentially browse on the fast-growing species, they cause high levels of damage and mortality to these species compared to the slow-growing, high-wood-density species. The collapse of forest elephant populations will therefore likely cause an increase in the abundance of fast-growing tree species at the expense of slow-growing ones, reducing the ability of the forest to capture carbon.


Stephen Blake, Ph.D., assistant professor of biology at Saint Louis University, spent 17 years in central Africa doing, among other things, applied research and conservation work with elephants. While there, he collected a data set on forest structure and species composition in the Nouabalé-Ndoki Forest of northern Congo.

In the current study, Blake’s collaborators developed a mathematical computer model to answer the question ‘What would happen to the composition of the forest over time with and without elephant browsing?’

To find out, they simulated elephant damage through browsing in the forest and assumed they browse certain plant species at different rates. Elephants prefer fast-growing species in more open spaces. As they feed and browse, they cause damage, knocking off a limb or breaking a shrub. The model calculated feeding and breakage rates along with elephant mortality rates to see their effect on certain woody plants.
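A heavily simplified toy version of such a model can be sketched as follows. All of the rates below are invented for illustration; the published model is far more detailed. The point is only to show the dynamic the researchers describe: preferential browsing on fast growers shifts the forest toward slow-growing, carbon-dense species.

```python
# Toy simulation: two tree guilds competing for space, with and without
# elephant browsing. Elephants preferentially damage fast-growing,
# low-wood-density trees, which over time shifts the forest toward
# slow-growing, high-wood-density (carbon-rich) species.

def simulate_forest(years, elephants_present,
                    fast=500.0, slow=500.0,
                    fast_growth=0.08, slow_growth=0.03,
                    browse_mortality=0.06):
    """Return (fast, slow) abundances after the given number of years."""
    for _ in range(years):
        fast += fast * fast_growth
        slow += slow * slow_growth
        if elephants_present:
            fast -= fast * browse_mortality  # browsing mostly hits fast growers
        # Crude crowding: rescale so the total stays near carrying capacity.
        total = fast + slow
        fast, slow = fast * 1000.0 / total, slow * 1000.0 / total
    return fast, slow

with_elephants = simulate_forest(200, elephants_present=True)
without_elephants = simulate_forest(200, elephants_present=False)
# With elephants, slow growers come to dominate; without them, fast growers do.
```

Even this crude sketch reproduces the qualitative result: browsing pressure acts as a filter that favors the slow-growing guild.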

“Lo and behold, as we look at numbers of elephants in a forest and we look at the composition of forest over time, we find that the proportion of trees with high density wood is higher in forests with elephants,” Blake said.

“The simulation found that the slow-growing plant species survive better when elephants are present. These species aren’t eaten by elephants and, over time, the forest becomes dominated by these slow-growing species. Wood (lignin) has a carbon backbone, meaning it has a large number of carbon molecules in it. Slow growing high wood density species contain more carbon molecules per unit volume than fast growing low wood density species. As the elephants “thin” the forest, they increase the number of slow-growing trees and the forest is capable of storing more carbon.”

These findings suggest far-ranging ecological consequences of past and present extinctions. The loss of elephants will seriously reduce the ability of the remaining forest to sequester carbon. Trees and plants use carbon dioxide during photosynthesis, removing it from the atmosphere. For this reason, plants are helpful in combating global warming and serve to store carbon emissions.

Without the forest elephants, less carbon dioxide will be taken out of the atmosphere. In monetary terms, forest elephants represent a carbon storage service of $43 billion.

“The sad reality is that humanity is doing its best to rid the planet of elephants as quickly as it can,” Blake said. “Forest elephants are rapidly declining and facing extinction. From a climate perspective, all of their positive effect on carbon and their myriad other ecological roles as forest gardeners and engineers will be lost.”

The study authors note that forest elephant conservation could reverse this loss.

“Elephants are a flagship species. People love elephants – we spend millions every year on cuddly toys, they are zoo favourites and who didn’t cry during Dumbo? And yet we’re pushing them closer to extinction every day. On one hand we admire them, feel empathy and are horrified when they are murdered; on the other hand we’re not prepared to do anything serious about it. The consequences may be severe for us all. We need to change our ways.

“Besides, it just makes good sense to keep them around. They’re doing an amazing job of helping the planet store carbon for free.”

Other authors on the study include first author Fabio Berzaghi, Marcos Longo, Philippe Ciais, Francois Bretagnolle, Simone Vieira, Marcos Scaranello, Giuseppe Scarascia-Mugnozza and Christopher E. Doughty.
Key Take-aways
Researchers asked ‘What would happen to the composition of the forest over time with and without elephants?’
They found that elephant browsing on fast-growing tree species damages and kills young plants, which pushes the composition of the forest towards slow-growing plant species that increase in abundance in areas where elephants occur. Slow-growing plants have dense wood and therefore store more carbon than fast-growing species.
The loss of elephants will seriously reduce the ability of the forest to sequester carbon, so less carbon dioxide will be removed from the atmosphere.
Forest elephants are rapidly becoming extinct. From a climate perspective, all of the positive carbon effect that elephants provide will be lost if we do not reverse the trend of illegal killing for the ivory trade.

Contacts and sources:
Carrie Bebermeyer
Saint Louis University

Citation: Carbon stocks in central African forests enhanced by elephant disturbance.
Fabio Berzaghi, Marcos Longo, Philippe Ciais, Stephen Blake, François Bretagnolle, Simone Vieira, Marcos Scaranello, Giuseppe Scarascia-Mugnozza, Christopher E. Doughty. Nature Geoscience, 2019; DOI: 10.1038/s41561-019-0395-6

'Tickle' Therapy Using tVNS to Slow Down Ageing



'Tickling' the ear with a small electrical current appears to rebalance the autonomic nervous system for over-55s, potentially slowing down one of the effects of ageing, according to new research.

Scientists found that a short daily therapy delivered for two weeks led to both physiological and wellbeing improvements, including a better quality of life, mood and sleep.

The therapy, called transcutaneous vagus nerve stimulation, delivers a small, painless electrical current to the ear, which sends signals to the body's nervous system through the vagus nerve.

The new research, conducted at the University of Leeds, suggests the therapy may slow down an important effect associated with ageing.

A tVNS device attaches to the ear and gently provides electrical stimulation, which rebalances the autonomic nervous system.
Credit: University of Leeds

This could help protect people from chronic diseases which we become more prone to as we get older, such as high blood pressure, heart disease and atrial fibrillation. The researchers, who published their findings today in the journal Aging, suggest that the 'tickle' therapy has the potential to help people age more healthily, by recalibrating the body's internal control system.

Lead author Dr Beatrice Bretherton, from the School of Biomedical Sciences at the University of Leeds, said: "The ear is like a gateway through which we can tinker with the body's metabolic balance, without the need for medication or invasive procedures. We believe these results are just the tip of the iceberg.

"We are excited to investigate further into the effects and potential long-term benefits of daily ear stimulation, as we have seen a great response to the treatment so far."

The study was conducted by scientists from the University of Leeds and funded by the Dunhill Medical Trust.

What is the autonomic nervous system?

The autonomic nervous system controls many of the body's functions which don't require conscious thought, such as digestion, breathing, heart rate and blood pressure.

It contains two branches, the sympathetic and the parasympathetic, which work against each other to maintain a healthy balance of activity.

The sympathetic branch helps the body prepare for high intensity 'fight or flight' activity, whilst the parasympathetic is crucial to low intensity 'rest and digest' activity.

As we age, and when we are fighting diseases, the body's balance changes such that the sympathetic branch begins to dominate. This imbalance makes us more susceptible to new diseases and leads to the breakdown of healthy bodily function as we get older.

Clinicians have long been interested in the potential for using electrical currents to influence the nervous system. The vagus nerve, the major nerve of the parasympathetic system, has often been used for electrical stimulation and past research has looked at the possibility of using vagus nerve stimulation to tackle depression, epilepsy, obesity, stroke, tinnitus and heart conditions.

However, this kind of stimulation needs surgery to implant electrodes in the neck region, with associated expense and a small risk of side effects.

Fortunately, there is one small branch of the vagus nerve that can be stimulated without surgery, located in the skin of specific parts of the outer ear.

In Leeds, previous research has shown that applying a small electrical stimulus to the vagus nerve at the ear, which some people perceive as a tickling sensation, improves the balance of the autonomic nervous system in healthy 30-year-olds.

Other researchers worldwide are now investigating if this transcutaneous vagus nerve stimulation (tVNS) could provide a therapy for conditions ranging from heart problems to mental health.

Diane Crossley, aged 70, from Leeds, took part in the study and received the tVNS therapy for two weeks. She said: "I was happy to be a participant in this really interesting study, it helped me with my awareness of my own health.

"It was a fascinating project and I was proud to be part of it."

In their new study, scientists at the University of Leeds wanted to see whether tVNS could benefit over 55-year-olds, who are more likely to have out-of-balance autonomic systems that could contribute to health issues associated with ageing.

They recruited 29 healthy volunteers, aged 55 or above, and gave each of them the tVNS therapy for 15 minutes per day, over a two week period. Participants were taught to self-administer the therapy at home during the study.

The therapy led to an increase in parasympathetic activity and a decrease in sympathetic activity, rebalancing the autonomic function towards that associated with healthy function. In addition, some people reported improvements in measures of mental health and sleeping patterns.

Being able to correct this balance of activity could help us age more healthily, as well as having the potential to help people with a variety of disorders such as heart disease and some mental health issues.

Additionally, improving the balance of the autonomic nervous system lowers an individual's risk of death, as well as the need for medication or hospital visits.

Researchers found that individuals who displayed the greatest imbalance at the start of the study experienced the most pronounced improvements after receiving the therapy.

They suggest that in future it may be possible to identify who is most likely to benefit from the therapy, so it can be offered through a targeted approach.

tVNS therapy has previously been shown to have positive psychological effects for patients with depression, and this study shows it could also have significant physiological benefits.

Dr Susan Deuchars, one of the senior authors on the study, said: "We believe this stimulation can make a big difference to people's lives, and we're now hoping to conduct further studies to see if tVNS can benefit multiple disorders."

Further studies are now needed to understand what the long-term health effects of tVNS might be, as this study involved a small number of participants over a short time period.



Contacts and sources:
University of Leeds

Fish Reveal Limb-Regeneration Secrets



What can fish teach scientists about limb regeneration? Quite a bit, as it turns out.

In the current issue of Proceedings of the National Academy of Sciences, Michigan State University scientists show that gar, a toothy, freshwater fish, can reveal many evolutionary secrets - even possible genetic blueprints for limb regeneration in people.

Scientists knew that salamanders can regrow full limbs after amputation. Ingo Braasch, MSU assistant professor of integrative biology, and his team, however, were the first to study how gar and other fish regenerate entire fins. More importantly, the researchers focused on how they rebuild the endochondral bones within their fins, which are the equivalents of human arms and legs.

Credit: MSU

"Gars are often considered 'dinosaur fish' because of their ancestor-resembling body type," Braasch said. "They're becoming a popular, new research organism for biomedical research, in large part because the gar genome is quite similar to the human genome."

The gar has been called a "bridge species," as its genome is similar both to that of zebrafish - often used as a genetic model for human medical advances - and to the human genome, a discovery effort that Braasch led. Gar evolve slowly and have kept more ancestral elements in their genome than other fish. This means that along with serving as a bridge species to people, gar are also great connectors to the deep past.

Ingo Braasch studies the evolution of genomic and morphological relationships among vertebrate animals – connecting the past with the present – using gars as genetic models. 
Photo by Solomon David

So, by studying how fish regenerate fins, Braasch's team pinpointed the genes and mechanisms that drive the regrowth. When they compared their findings to the human genome, they made an interesting observation.

"The genes responsible for this action in fish also are largely present in humans," Braasch said. "What's missing, though, are the genetic mechanisms that activate these genes in humans. It is likely that the genetic switches that activate the genes have been lost or altered during the evolution of mammals, including humans."

Ingo Braasch has shown that gar, a toothy, freshwater fish, can reveal many evolutionary secrets ­– even possible genetic blueprints for limb regeneration in people. 
Courtesy of MSU


Evolutionarily speaking, this suggests that the last common ancestor of fish and tetrapods, or four-legged vertebrates, had already acquired a specialized response for appendage regeneration, and that this program has been maintained during evolution in many fish species as well as salamanders, he added.

Continuing research into these key genes and missing mechanisms could eventually lead to some revolutionary medical advances.

"The more we study these commonalities among vertebrates, the more we can home in on prime targets for awakening this program for regenerative therapies in humans," Braasch said. "Such direct biomedical advances remain in the distant future, but studies of fin regeneration in fish will continue to reveal much about the regenerative potential of vertebrates."


Contacts and sources:
Layne Cameron
Michigan State University

Citation: Deep evolutionary origin of limb and fin regeneration. Sylvain Darnet, Aline C. Dragalzew, Danielson B. Amaral, Josane F. Sousa, Andrew W. Thompson, Amanda N. Cass, Jamily Lorena, Eder S. Pires, Carinne M. Costa, Marcos P. Sousa, Nadia B. Fröbisch, Guilherme Oliveira, Patricia N. Schneider, Marcus C. Davis, Ingo Braasch, and Igor Schneider. PNAS, July 23, 2019, 116 (30): 15106-15115; first published July 3, 2019. https://doi.org/10.1073/pnas.1900475116

Andrew Thompson, an MSU researcher, contributed to this study. Scientists from the Universidade Federal do Pará (Brazil), Instituto Tecnológico Vale (Brazil), Laboratório de Biologia Molecular (Brazil), James Madison University, and the Leibniz Institute for Evolution and Biodiversity Science (Germany) also contributed to this research.
 


Researchers Repair Faulty Brain Circuits Using Nanotechnology





Working with mouse and human tissue, Johns Hopkins Medicine researchers report new evidence that a protein pumped out of some — but not all — populations of “helper” cells in the brain, called astrocytes, plays a specific role in directing the formation of connections among neurons needed for learning and forming new memories. The discovery points to possible new drug targets for dementia and intellectual disability.

Using mice genetically engineered and bred with fewer such connections, the researchers conducted proof-of-concept experiments that show they could deliver corrective proteins via nanoparticles to replace the missing protein needed for “road repairs” on the defective neural highway.

Red 8.3 astrocytes in the spine of a mouse.
Credit: Rothstein lab

Since such connective networks are lost or damaged by neurodegenerative diseases such as Alzheimer’s or certain types of intellectual disability, such as Norrie disease, the researchers say their findings advance efforts to regrow and repair the networks and potentially restore normal brain function.

The findings are described in the May issue of Nature Neuroscience.

“We are looking at the fundamental biology of how astrocytes function, but perhaps have discovered a new target for someday intervening in neurodegenerative diseases with novel therapeutics,” says Jeffrey Rothstein, M.D., Ph.D., the John W. Griffin Director of the Brain Science Institute and professor of neurology at the Johns Hopkins University School of Medicine.

“Although astrocytes appear to all look alike in the brain, we had an inkling that they might have specialized roles in the brain due to regional differences in the brain’s function and because of observed changes in certain diseases,” says Rothstein. “The hope is that learning to harness the individual differences in these distinct populations of astrocytes may allow us to direct brain development or even reverse the effects of certain brain conditions, and our current studies have advanced that hope.”

In the brain, astrocytes are the support cells that act as guides to direct new cells, promote chemical signaling, and clean up byproducts of brain cell metabolism.

Rothstein’s team focused on a particular astrocyte protein, glutamate transporter-1, which previous studies suggested was lost from astrocytes in certain parts of brains with neurodegenerative diseases. Like a biological vacuum cleaner, the protein normally sucks up the chemical “messenger” glutamate from the spaces between neurons after a message is sent to another cell, a step required to end the transmission and prevent toxic levels of glutamate from building up.

When these glutamate transporters disappear from certain parts of the brain, such as the motor cortex and spinal cord in people with amyotrophic lateral sclerosis (ALS), glutamate hangs around much too long, sending messages that overexcite and kill the cells.

To figure out how the brain decides which cells need the glutamate transporters, Rothstein and colleagues focused on the region of DNA in front of the gene that typically controls the on-off switch needed to manufacture the protein. They genetically engineered mice to glow red in every cell where the gene is activated.

Normally, the glutamate transporter is turned on in all astrocytes. But when the researchers used segments of between 1,000 and 7,000 bits of DNA code from the transporter's on-off switch, all the cells in the brain glowed red, including the neurons. It wasn't until they tried the largest sequence, an 8,300-bit DNA code from this location, that they began to see some selectivity in the red cells. These red cells were all astrocytes, but only in certain layers of the brain's cortex in mice.

Because they could identify these “8.3 red astrocytes,” the researchers thought they might have a specific function different than other astrocytes in the brain. To find out more precisely what these 8.3 red astrocytes do in the brain, the researchers used a cell-sorting machine to separate the red astrocytes from the uncolored ones in mouse brain cortical tissue, and then identified which genes were turned on to much higher than usual levels in the red compared to the uncolored cell populations. The researchers found that the 8.3 red astrocytes turn on high levels of a gene that codes for a different protein known as Norrin.

Rothstein’s team took neurons from normal mouse brains, treated them with Norrin, and found that those neurons grew more of the “branches” — or extensions — used to transmit chemical messages among brain cells. Then, Rothstein says, the researchers looked at the brains of mice engineered to lack Norrin, and saw that these neurons had fewer branches than in healthy mice that made Norrin.

In another set of experiments, the research team took the DNA code for Norrin plus the 8,300 “location” DNA and assembled them into deliverable nanoparticles. When they injected the Norrin nanoparticles into the brains of mice engineered without Norrin, the neurons in these mice began to quickly grow many more branches, a process suggesting repair to neural networks. They repeated these experiments with human neurons too.

Rothstein notes that mutations in the Norrin protein that reduce levels of the protein in people cause Norrie disease — a rare, genetic disorder that can lead to blindness in infancy and intellectual disability. Because the researchers were able to grow new branches for communication, they believe it may one day be possible to use Norrin to treat some types of intellectual disabilities such as Norrie disease.

For their next steps, the researchers are investigating whether Norrin can repair connections in the brains of animal models with neurodegenerative diseases, and in preparation for potential success, first author Sean Miller and Rothstein have filed a patent application for Norrin.

Other authors of the publication are Sean Miller, Thomas Philips, Namho Kim, Raha Dastgheyb, Zhuoxun Chen, Yi-Chun Hsieh, J. Gavin Daigle, Jeannie Chew, Svetlana Vidensky, Jacqueline Pham, Ethan Hughes, Michael Robinson, Rita Sattler, Jung Soo Suk, Dwight Bergles, Norman Haughey, Mikhail Pletnikov and Justin Hanes of Johns Hopkins, and Malika Datta and Raju Tomer of Columbia University.

This work was funded by grants from the National Science Foundation Graduate Fellowship Research Program and the National Institute of Neurological Disorders and Stroke (R01NS092067, R01NS094239).

Contacts and sources:
Johns Hopkins Medicine

Citation: Molecularly defined cortical astroglia subpopulation modulates neurons via secretion of Norrin.
Sean J. Miller, Thomas Philips, Namho Kim, Raha Dastgheyb, Zhuoxun Chen, Yi-Chun Hsieh, J. Gavin Daigle, Malika Datta, Jeannie Chew, Svetlana Vidensky, Jacqueline T. Pham, Ethan G. Hughes, Michael B. Robinson, Rita Sattler, Raju Tomer, Jung Soo Suk, Dwight E. Bergles, Norman Haughey, Mikhail Pletnikov, Justin Hanes, Jeffrey D. Rothstein.Nature Neuroscience, 2019; 22 (5): 741 DOI: 10.1038/s41593-019-0366-7



Ultra-thin Layers of Rust Generate Electricity from Flowing Water



There are many ways to generate electricity: batteries, solar panels, wind turbines, and hydroelectric dams, to name a few. And now there's rust. Rust is a common problem on infrastructure, but new research shows that when it's combined with salt water, it can also be a source of electricity.

New research conducted by scientists at Caltech and Northwestern University shows that thin films of rust—iron oxide—can generate electricity when saltwater flows over them. These films represent an entirely new way of generating electricity and could be used to develop new forms of sustainable power production.


Credit: Morteza Akhnia/Unsplash

Interactions between metal compounds and saltwater often generate electricity, but this is usually the result of a chemical reaction in which one or more compounds are converted to new compounds. Reactions like these are what is at work inside batteries.

In contrast, the phenomenon discovered by Tom Miller, Caltech professor of chemistry, and Franz Geiger, Dow Professor of Chemistry at Northwestern, does not involve chemical reactions, but rather converts the kinetic energy of flowing saltwater into electricity.

The phenomenon, known as the electrokinetic effect, has been observed before in thin films of graphene—sheets of carbon atoms arranged in a hexagonal lattice—and it is remarkably efficient: around 30 percent at converting kinetic energy into electricity. For reference, the best solar panels are only about 20 percent efficient.

"A similar effect has been seen in some other materials. You can take a drop of saltwater and drag it across graphene and see some electricity generated," Miller says.

However, it is difficult to fabricate graphene films and scale them up to usable sizes. The iron oxide films discovered by Miller and Geiger are relatively easy to produce and scalable to larger sizes, Miller says.

"It's basically just rust on iron, so it's pretty easy to make in large areas," Miller says. "This is a more robust implementation of the thing seen in graphene."

Though rust will form on iron alloys on its own, the team needed to ensure it formed in a consistently thin layer. To do that, they used a process called physical vapor deposition (PVD), which turns normally solid materials, in this case iron, into a vapor that condenses on a desired surface. PVD allowed them to create an iron layer 10 nanometers thick, about 10 thousand times thinner than a human hair. After taking the metal film out of the PVD machine, rust formed spontaneously in air to a thickness of about 2 nanometers.

When they took that rust-coated iron and flowed saltwater solutions of varying concentrations over it, they found that it generated several tens of millivolts and current densities of several microamps per square centimeter.
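As a rough sense of scale, those two figures multiply into an areal power density. The sketch below uses 50 mV and 5 µA per square centimeter, which are illustrative assumptions in the range quoted, not numbers from the paper:

```python
# Rough unit-conversion sketch (not the paper's calculation): turn an
# illustrative open-circuit voltage and current density into an areal
# power density. Input values here are assumed for illustration only.

def areal_power_density_uw_per_cm2(voltage_mv: float, current_ua_per_cm2: float) -> float:
    """Power density in microwatts per square centimeter (P = V * J)."""
    volts = voltage_mv * 1e-3              # millivolts -> volts
    amps_per_cm2 = current_ua_per_cm2 * 1e-6  # microamps -> amps
    return volts * amps_per_cm2 * 1e6      # W/cm^2 -> uW/cm^2

# e.g. 50 mV at 5 uA/cm^2 -> 0.25 uW/cm^2
print(areal_power_density_uw_per_cm2(50, 5))
```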

"For perspective, plates having an area of 10 square meters each would generate a few kilowatts per hour—enough for a standard US home," Miller says. "Of course, less demanding applications, including low-power devices in remote locations, are more promising in the near term."

The mechanism behind the electricity generation is complex, involving ion adsorption and desorption, but it essentially works like this: The ions present in saltwater attract electrons in the iron beneath the layer of rust. As the saltwater flows, so do those ions, and through that attractive force, they drag the electrons in the iron along with them, generating an electrical current.

Miller says this effect could be useful in specific scenarios where there are moving saline solutions, like in the ocean or the human body.

"For example, tidal energy, or things bobbing in the ocean, like buoys, could be used for passive electrical energy conversion," he says. "You have saltwater flowing in your veins in periodic pulses. That could be used to generate electricity for powering implants."

The paper describing their findings, titled "Energy Conversion via Metal Nanolayers," appears in the July 29 issue of the Proceedings of the National Academy of Sciences. Other co-authors include Mavis D. Boamah, Emilie H. Lozier, Paul E. Ohno, and Catherine E. Walker of Northwestern, and Jeongmin Kim, a graduate student in chemistry at Caltech.

Support for the research was provided by the National Science Foundation, the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), and the Army Research Office Chemical Sciences Division.



Contacts and sources:
Emily Velasco
California Institute of Technology


Citation: Energy conversion via metal nanolayers.
Mavis D. Boamah, Emilie H. Lozier, Jeongmin Kim, Paul E. Ohno, Catherine E. Walker, Thomas F. Miller, Franz M. Geiger. Proceedings of the National Academy of Sciences, 2019; 201906601. DOI: 10.1073/pnas.1906601116



Monday, July 29, 2019

New Technology to Harness Energy from Mixing of Freshwater and Seawater



A new battery made from affordable and durable materials generates energy from places where salt and fresh waters mingle. The technology could make coastal wastewater treatment plants energy-independent and carbon neutral.

Salt is power. It might sound like alchemy, but the energy in places where salty ocean water and freshwater mingle could provide a massive source of renewable power. Stanford researchers have developed an affordable, durable technology that could harness this so-called blue energy.


The Hyperion Water Reclamation Plant on Santa Monica Bay in Los Angeles is an example of a coastal wastewater treatment operation that could potentially recover energy from the mixing of seawater and treated effluent.

Image credit: Doc Searls / Flickr

The paper, recently published in American Chemical Society’s ACS Omega, describes the battery and suggests using it to make coastal wastewater treatment plants energy-independent.

“Blue energy is an immense and untapped source of renewable energy,” said study coauthor Kristian Dubrawski, a postdoctoral scholar in civil and environmental engineering at Stanford. “Our battery is a major step toward practically capturing that energy without membranes, moving parts or energy input.”

Dubrawski works in the lab of study co-author Craig Criddle, a professor of civil and environmental engineering known for interdisciplinary field projects of energy-efficient technologies. The idea of developing a battery that taps into salt gradients originated with study coauthors Yi Cui, a professor of materials science and engineering, and Mauro Pasta, a postdoctoral scholar in materials science and engineering at the time of the research. Applying that concept to coastal wastewater treatment plants was Criddle’s twist, born of his long experience developing technologies for wastewater treatment.

The researchers tested a prototype of the battery, monitoring its energy production while flushing it with alternating hourly exchanges of wastewater effluent from the Palo Alto Regional Water Quality Control Plant and seawater collected nearby from Half Moon Bay. Over 180 cycles, battery materials maintained 97 percent effectiveness in capturing the salinity gradient energy.

The technology could work any place where fresh and saltwater intermix, but wastewater treatment plants offer a particularly valuable case study. Wastewater treatment is energy-intensive, accounting for about three percent of the total U.S. electrical load. The process – essential to community health – is also vulnerable to power grid shutdowns. Making wastewater treatment plants energy independent would not only cut electricity use and emissions but also make them immune to blackouts – a major advantage in places such as California, where recent wildfires have led to large-scale outages.
Water power

Every cubic meter of freshwater that mixes with seawater produces about 0.65 kilowatt-hours of energy – enough to power the average American house for about 30 minutes. Globally, the theoretically recoverable energy from coastal wastewater treatment plants is about 18 gigawatt-hours a year – enough to power more than 1,700 homes for a year.
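The 30-minute figure can be sanity-checked against an assumed average US household load of about 1.3 kilowatts (the article does not state one):

```python
# Back-of-the-envelope check of the per-cubic-meter figure above.
# The household load is an assumed value, not from the article.

ENERGY_PER_M3_KWH = 0.65   # mixing energy per cubic meter of freshwater
HOME_AVG_LOAD_KW = 1.3     # assumed average household power draw

minutes_powered = ENERGY_PER_M3_KWH / HOME_AVG_LOAD_KW * 60
print(round(minutes_powered))  # -> 30, matching the "about 30 minutes" figure
```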

The Stanford group’s battery isn’t the first technology to succeed in capturing blue energy, but it’s the first to use battery electrochemistry instead of pressure or membranes. If it works at scale, the technology would offer a simpler, more robust and cost-effective solution.

The process first releases sodium and chloride ions from the battery electrodes into the solution, making the current flow from one electrode to the other. Then, a rapid exchange of wastewater effluent with seawater leads the electrode to reincorporate sodium and chloride ions and reverse the current flow. Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging. This means that the battery is constantly discharging and recharging without needing any input of energy.
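The reason swapping the electrolyte drives a current is that electrode potentials depend on salt concentration. A rough, ideal-solution Nernst estimate of the per-electrode potential swing is sketched below; the seawater and effluent concentrations are assumptions, and the paper's actual electrode chemistry is more involved:

```python
import math

# Physical constants
R = 8.314     # gas constant, J/(mol K)
T = 298.15    # room temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst_shift_volts(c_seawater_mm: float, c_freshwater_mm: float) -> float:
    """Ideal-solution estimate of how much one electrode's potential shifts
    when the electrolyte is swapped between two NaCl concentrations (mM)."""
    return (R * T / F) * math.log(c_seawater_mm / c_freshwater_mm)

# Assumed concentrations: seawater ~600 mM NaCl, treated effluent ~20 mM.
print(round(nernst_shift_volts(600, 20), 3))  # ~0.087 V per electrode
```

A swing of a few tens of millivolts per flush, harvested on every cycle with no external charging, is the "mixing entropy" the battery captures.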
Durable and affordable technology

While lab tests showed power output is still low per electrode area, the battery’s scale-up potential is considered more feasible than previous technologies because of its small footprint, simplicity, constant energy creation and lack of membranes or instruments to control charge and voltage. The electrodes are made with Prussian blue, a material widely used as a pigment and in medicine that costs less than $1 a kilogram, and polypyrrole, a material used experimentally in batteries and other devices, which sells for less than $3 a kilogram in bulk.

There’s also little need for backup batteries, as the materials are relatively robust, a polyvinyl alcohol and sulfosuccinic acid coating protects the electrodes from corrosion and there are no moving parts involved. If scaled up, the technology could provide adequate voltage and current for any coastal treatment plant. Surplus power production could even be diverted to a nearby industrial operation, such as a desalination plant.

“It is a scientifically elegant solution to a complex problem,” Dubrawski said. “It needs to be tested at scale, and it doesn’t address the challenge of tapping blue energy at the global scale – rivers running into the ocean – but it is a good starting point that could spur these advances.”

To assess the battery’s full potential in municipal wastewater plants, the researchers are working on a scaled version to see how the system functions with multiple batteries working simultaneously.



Contacts and sources:
Rob Jordan
Stanford University

Citation: Charge-Free Mixing Entropy Battery Enabled by Low-Cost Electrode Materials.
Meng Ye, Mauro Pasta, Xing Xie, Kristian L. Dubrawski, Jianqaio Xu, Chong Liu, Yi Cui, Craig S. Criddle. ACS Omega, 2019; 4 (7): 11785 DOI: 10.1021/acsomega.9b00863




Heat-Free Tech for Flexible Electronics; Prints Metal on Flowers, Gelatin



Martin Thuo of Iowa State University and the Ames Laboratory clicked through the photo gallery for one of his research projects.

How about this one? There was a rose with metal traces printed on a delicate petal.

Or this? A curled sheet of paper with a flexible, programmable LED display.

Maybe this? A gelatin cylinder with metal traces printed across the top.

Martin Thuo and his research group have developed heat-free technology that can print conductive, metallic lines and traces on just about anything, including a rose petal.

Credit: Martin Thuo/Iowa State University

All those photos showed the latest application of undercooled metal technology developed by Thuo and his research group. The technology features liquid metal (in this case Field's metal, an alloy of bismuth, indium and tin) trapped below its melting point in polished, oxide shells, creating particles about 10 millionths of a meter across.

When the shells are broken - with mechanical pressure or chemical dissolving - the metal inside flows and solidifies, creating a heat-free weld or, in this case, printing conductive, metallic lines and traces on all kinds of materials, everything from a concrete wall to a leaf.

That could have all kinds of applications, including sensors to measure the structural integrity of a building or the growth of crops. The technology was also tested in paper-based remote controls that read changes in electrical currents when the paper is curved. Engineers also tested the technology by making electrical contacts for solar cells and by screen printing conductive lines on gelatin, a model for soft biological tissues, including the brain.

Martin Thuo and his research group have printed electronic traces on gelatin.

Credit: Martin Thuo/Iowa State University

"This work reports heat-free, ambient fabrication of metallic conductive interconnects and traces on all types of substrates," Thuo and a team of researchers wrote in a paper describing the technology recently published online by the journal Advanced Functional Materials.

Thuo - an assistant professor of materials science and engineering at Iowa State, an associate of the U.S. Department of Energy's Ames Laboratory and a co-founder of the Ames startup SAFI-Tech Inc. that's commercializing the liquid-metal particles - is the lead author. Co-authors are Andrew Martin, a former undergraduate in Thuo's lab and now an Iowa State doctoral student in materials science and engineering; Boyce Chang, a postdoctoral fellow at the University of California, Berkeley, who earned his doctoral degree at Iowa State; Zachariah Martin, Dipak Paramanik and Ian Tevis, of SAFI-Tech; Christophe Frankiewicz, a co-founder of Sep-All in Ames and a former Iowa State postdoctoral research associate; and Souvik Kundu, an Iowa State graduate student in electrical and computer engineering.

The project was supported by university startup funds to establish Thuo's research lab at Iowa State, Thuo's Black & Veatch faculty fellowship and a National Science Foundation Small Business Innovation Research grant.

Thuo said he launched the project three years ago as a teaching exercise.

"I started this with undergraduate students," he said. "I thought it would be fun to get students to make something like this. It's a really beneficial teaching tool because you don't need to solve 2 million equations to do sophisticated science."

And once students learned to use a few metal-processing tools, they started solving some of the technical challenges of flexible, metal electronics.

"The students discovered ways of dealing with metal and that blossomed into a million ideas," Thuo said. "And now we can't stop."

And so the researchers have learned how to effectively bond metal traces to everything from water-repelling rose petals to watery gelatin. Based on what they now know, Thuo said it would be easy for them to print metallic traces on ice cubes or biological tissue.

All the experiments "highlight the versatility of this approach," the researchers wrote in their paper, "allowing a multitude of conductive products to be fabricated without damaging the base material."




Contacts and sources:
Martin Thuo
Iowa State University


Citation: Heat-Free Fabrication of Metallic Interconnects for Flexible/Wearable Devices.
Andrew Martin, Boyce S. Chang, Zachariah Martin, Dipak Paramanik, Christophe Frankiewicz, Souvik Kundu, Ian D. Tevis, Martin Thuo. Advanced Functional Materials, 2019. DOI: 10.1002/adfm.201903687




Most Precise Map Ever of Antarctic Ice Velocity



Constructed from a quarter century’s worth of satellite data, a new map of Antarctic ice velocity by glaciologists from the University of California, Irvine and NASA’s Jet Propulsion Laboratory is the most precise ever created.

The map, described in a paper published today in the AGU journal Geophysical Research Letters, is 10 times more accurate than previous renditions and covers more than 80 percent of the continent.

“By utilizing the full potential of interferometric phase signals from satellite synthetic-aperture radars, we have achieved a quantum leap in the description of ice flow in Antarctica,” said lead author Jeremie Mouginot, UCI associate researcher in Earth system science. “This more detailed representation will help improve our understanding of ice behavior under climate stress over a larger part of the continent, farther south, and will enable improved projections of sea level rise through numerical models.”

A new map of Antarctic ice velocity constructed from nearly a quarter century’s worth of satellite data.



Credit: AGU

To chart the movement of ice sheets across the surface of the enormous land mass, the researchers combined input from six satellite missions: the Canadian Space Agency’s Radarsat-1 and Radarsat-2; the European Space Agency’s Earth remote sensing satellites 1 and 2 and Envisat ASAR; and the Japan Aerospace Exploration Agency’s ALOS PALSAR-1.

While the data were spread across 25 years, the pace of signal gathering accelerated in the last decade as more resources were deployed in the Earth’s orbit. As ice sheet science coordinator in the World Meteorological Organization’s Polar Space Task Group, co-author Bernd Scheuchl, UCI associate project scientist in Earth system science, was responsible for acquiring the relevant data from the various international space agencies.

Previous mapping efforts relied heavily on “feature” and “speckle tracking” methods, which detect the subtle motion of parcels of ice on the ground over time; this approach has been proven effective in estimating ice flow speed. To measure significantly slower ice sheet movement in the vast interior regions, the UCI team augmented these techniques with synthetic-aperture radar phase interferometry, which detects the subtle motion of natural reflectors of radar signals in snow/ice independent of the size of the parcel of ice illuminated by the radar.

“The interferometric phase of SAR data measures the ice deformation signal with a precision of up to two orders of magnitude better than speckle tracking,” Mouginot said. “A drawback is that it requires a lot more data, namely multiple passes at different angles over the same point on the ground – a problem that was solved by a consortium of international space agencies pointing Earth-monitoring spacecraft to this part of the world.”
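The sensitivity of the phase signal comes from the standard repeat-pass InSAR relation between phase and line-of-sight motion; a minimal sketch, assuming the 5.6 cm C-band wavelength used by satellites such as ERS and Radarsat:

```python
import math

def los_displacement_cm(phase_rad: float, wavelength_cm: float = 5.6) -> float:
    """Convert an interferometric phase difference into line-of-sight ground
    displacement: d = lambda * phi / (4 * pi). The factor of 4*pi (rather
    than 2*pi) accounts for the two-way radar path."""
    return wavelength_cm * phase_rad / (4 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength of motion:
print(round(los_displacement_cm(2 * math.pi), 1))  # -> 2.8 cm
```

Because a fraction of a fringe is measurable, sub-centimeter motion can be resolved, which is why phase interferometry outperforms feature tracking in the slow-moving interior.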

The team was able to compose a map that resolves ice movement to a level of 20 centimeters (a little over half a foot) per year in speed and 5 degrees in annual flow direction for more than 70 percent of Antarctica. It’s the first time that high-precision mapping of the interior areas has been accomplished.

“This product will help climate scientists achieve a number of goals, such as a better determination of the boundaries between glaciers and a thorough evaluation of regional atmospheric climate models over the entire continent,” said co-author Eric Rignot, chair and Donald Bren Professor of Earth System Science at UCI and a JPL senior research scientist.

“It will also help in locating the most promising sites for ice core drilling to extract climate records and in examining the mass balance of Antarctica beyond its periphery.”

He said he’s looking forward to the joint NASA and Indian Space Research Organization satellite, launching in late 2021, which will be the first interferometric-mode SAR mission designed to look solely toward the South Pole. The spacecraft will provide a coast-to-coast view of Antarctica every 12 days.

“We’ll be able to collect enough quality phase data over the Antarctic to generate updates to the map we just created in one or two months instead of one or two decades,” Rignot said. “With this level of precision in the interior regions, we’ll be able to reconstruct high-resolution spatial details in the bed topography beneath the ice through inversion techniques over far broader areas than in previous attempts – essential to improving ice sheet models and projections of sea level rise from Antarctica.”

The new Antarctic ice velocity map and related datasets are available for download at the NASA Distributed Active Archive Center at the National Snow & Ice Data Center. This project was supported by NASA’s MEaSUREs program.


Contacts and sources:
Nanci Bompey
American Geophysical Union (AGU)

Brian Bell
University of California Irvine (UCI)


Citation: Continent-wide, interferometric SAR phase, mapping of Antarctic ice velocity.
J. Mouginot: Department of Earth System Science, University of California, Irvine, CA, USA; and Univ. Grenoble Alpes, CNRS, IRD, Grenoble, France;
E. Rignot: Department of Earth System Science, University of California, Irvine, CA, USA; California Institute of Technology’s Jet Propulsion Laboratory, Pasadena, CA, USA; and Department of Civil and Environmental Engineering, University of California, Irvine, CA, USA.
B. Scheuchl: Department of Earth System Science, University of California, Irvine, CA, USA.



Climate Change Could Revive Medieval Megadroughts in U.S. Southwest



About a dozen megadroughts struck the American Southwest during the 9th through the 15th centuries, but then they mysteriously ceased around the year 1600. What caused this clustering of megadroughts — that is, severe droughts that last for decades — and why do they happen at all?

If scientists can understand why megadroughts happened in the past, it can help us better predict whether, how, and where they might happen in the future. A study published today in Science Advances provides the first comprehensive theory for why there were megadroughts in the American Southwest. The authors found that ocean temperature conditions plus high radiative forcing — when Earth absorbs more sunlight than it radiates back into space — play important roles in triggering megadroughts. The study suggests an increasing risk of future megadroughts in the American Southwest due to climate change.

A drought-damaged ranch in California. 
Rising temperatures drive up the risk of a megadrought hitting the Southwestern United States this century, a new study finds.
Photo: Cynthia Mendoza/USDA


Previously, scientists have studied the individual factors that contribute to megadroughts. In the new study, a team of scientists at Columbia University’s Lamont-Doherty Earth Observatory has looked at how multiple factors from the global climate system work together, and projected that warming climate may bring a new round of megadroughts.

By reconstructing aquatic climate data and sea-surface temperatures from the last 2,000 years, the team found three key factors that led to megadroughts in the American Southwest: radiative forcing; severe and frequent La Niña events – cool tropical Pacific sea-surface temperatures that alter weather patterns worldwide; and warm conditions in the Atlantic. High radiative forcing appears to have dried out the American Southwest, likely due to an increase in solar activity (which sent more radiation toward Earth) and a decrease in volcanic activity (which let more of it through) at the time. The resulting increase in heat would have driven greater evaporation. At the same time, warmer-than-usual Atlantic sea-surface temperatures combined with very strong and frequent La Niñas decreased precipitation in the already dried-out region. Of these three factors, La Niña conditions were estimated to be more than twice as important as the others in causing the megadroughts.

While the Lamont scientists say they were able to pinpoint the causes of megadroughts in a more complete way than has been done before, they say such events will remain difficult for scientists to predict. There are predictions about future trends in temperatures, aridity, and sea surface temperatures, but future El Niño and La Niña activity remains difficult to simulate. Nevertheless, the researchers conclude that human-driven climate change is stacking the deck towards more megadroughts in the future.

“Because you increase the baseline aridity, in the future when you have a big La Niña, or several of them in a row, it could lead to megadroughts in the American West,” explained lead author Nathan Steiger, a Lamont-Doherty Earth Observatory hydroclimatologist.

During the time of the medieval megadroughts, increased radiative forcing was caused by natural climate variability. But today we are experiencing increased dryness in many locations around the globe due to human-made forces. Climate change is setting the stage for an increased possibility of megadroughts in the future through greater aridity, say the researchers.



Contacts and sources:
Nicole deRoberts
Earth Institute at Columbia University

Citation: Oceanic and radiative forcing of medieval megadroughts in the American Southwest.
Nathan J. Steiger, Jason E. Smerdon, Benjamin I. Cook, Richard Seager, A. Park Williams, Edward R. Cook. Science Advances, 2019; 5 (7): eaax0087 DOI: 10.1126/sciadv.aax0087



Study: Sizzling Southwest Summers Can Cause Pavement Burns in Seconds



When temperatures throughout the sizzling Southwestern U.S. climb to over 100 degrees, the pavement can get hot enough to cause second-degree burns on human skin in a matter of seconds. Pavement burns account for a significant share of burn-related injuries in the Southwestern United States and in other hot climates with nearly continuous sunlight and daily maximum temperatures above 100°F.

In a new study published in the Journal of Burn Care & Research, a team of surgeons from the UNLV School of Medicine reviewed all pavement burn admissions into a Las Vegas area burn center over five years. The team compared the outdoor temperatures at the time of each patient admission to, in essence, determine how hot is too hot.


Credit:  cwszot / Wikimedia Commons

"Pavement burns account for a significant number of burn-related injuries, particularly in the Southwestern United States," the study authors wrote. "The pavement can be significantly hotter than the ambient temperature in direct sunlight and can cause second-degree burns within two seconds."

For the study, researchers identified 173 pavement-related burn cases between 2013 and 2017. Of those, 149 were isolated pavement burns and 24 involved other injuries, including those from motor vehicle accidents. More than 88 percent (153) of the incidents occurred when temperatures were 95 degrees or higher, with the risk increasing exponentially as temperatures exceeded 105 degrees.

That's because pavement in direct sunlight absorbs radiant energy, making it significantly hotter and potentially dangerous. Study authors say that pavement on a 111-degree day, for example, can get as hot as 147 degrees in direct sunlight. For reference, a fried egg becomes firm at 158 degrees.
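For readers who think in Celsius, the key temperatures quoted above convert as follows:

```python
# Convert the article's Fahrenheit figures to Celsius.

def f_to_c(temp_f: float) -> float:
    return (temp_f - 32) * 5 / 9

for label, temp_f in [("air on a hot day", 111),
                      ("pavement in direct sun", 147),
                      ("fried egg firms", 158)]:
    print(f"{label}: {temp_f} F = {f_to_c(temp_f):.0f} C")
```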

And while it seems like a no-brainer to stay off a hot sidewalk, for some it's unavoidable - including victims of motor vehicle accidents, people with mobility issues or medical episodes who have fallen to the ground, or small children who may not know better.

The takeaway - summer in the desert is no joke, and more education is needed to warn people of the risks of hot pavement, particularly as temperatures creep above 100 degrees.

"This information is useful for burn centers in hotter climates, to plan and prepare for the coordination of care and treatment," says study lead author Dr. Jorge Vega. "It can also be used for burn injury prevention and public health awareness, including increased awareness and additional training to emergency medical service and police personnel when attending to pavement burn victims in the field."

The study, "A 5-Year Review of Pavement Burns from a Desert Burn Center," was published in the July/August 2019 issue of the Journal of Burn Care & Research. [WARNING: study contains graphic imagery]



Contacts and sources:
Tony Allen
University of Nevada Las Vegas


Citation: A 5-Year Review of Pavement Burns From a Desert Burn Center.
Jorge Vega, Jr., MD; Paul Chestovich, MD, FACS; Syed Saquib, MD; Douglas Fraser, MD, FACS. Journal of Burn Care & Research, Volume 40, Issue 4, July/August 2019, Pages 422–426. DOI: 10.1093/jbcr/irz049




Physicists Count Sound Particles with Quantum Microphone



Stanford physicists have developed a "quantum microphone" so sensitive that it can measure individual particles of sound, called phonons.

The device, which is detailed July 24 in the journal Nature, could eventually lead to smaller, more efficient quantum computers that operate by manipulating sound rather than light.

“We expect this device to allow new types of quantum sensors, transducers and storage devices for future quantum machines,” said study leader Amir Safavi-Naeini, an assistant professor of applied physics at Stanford’s School of Humanities and Sciences.
Quantum of motion

First proposed by Albert Einstein in 1907, phonons are packets of vibrational energy emitted by jittery atoms. These indivisible packets, or quanta, of motion manifest as sound or heat, depending on their frequencies.

Artist's impression of an array of nanomechanical resonators designed to generate and trap sound particles, or phonons. The mechanical motions of the trapped phonons are sensed by a qubit detector, which shifts its frequency depending on the number of phonons in a resonator. Different phonon numbers are visible as distinct peaks in the qubit spectrum, which are shown schematically behind the resonators.

Credit: Wentao Jiang


Like photons, which are the quantum carriers of light, phonons are quantized, meaning their vibrational energies are restricted to discrete values - similar to how a staircase is composed of distinct steps.

"Sound has this granularity that we don't normally experience," Safavi-Naeini said. "Sound, at the quantum level, crackles."

The energy of a mechanical system can be represented as different "Fock" states - 0, 1, 2, and so on - based on the number of phonons it generates. For example, a "1 Fock state" consists of one phonon of a particular energy, a "2 Fock state" consists of two phonons with the same energy, and so on. Higher phonon states correspond to louder sounds.

Until now, scientists have been unable to measure phonon states in engineered structures directly because the energy differences between states - in the staircase analogy, the spacing between steps - are vanishingly small. "One phonon corresponds to an energy ten trillion trillion times smaller than the energy required to keep a lightbulb on for one second," said graduate student Patricio Arrangoiz-Arriola, a co-first author of the study.
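That order of magnitude can be checked directly, assuming a gigahertz-scale mechanical resonator (the device's exact frequency is not given in this article):

```python
# One phonon carries energy E = h * f. Compare that to a 60 W lightbulb
# running for one second. The 5 GHz frequency is an assumed, typical value
# for a microwave-frequency mechanical resonator.

H = 6.62607015e-34   # Planck constant, J*s

phonon_energy = H * 5e9   # joules per phonon at an assumed 5 GHz
lightbulb_energy = 60.0   # joules: a 60 W bulb for one second

ratio = lightbulb_energy / phonon_energy
print(f"{ratio:.1e}")  # ~1.8e+25, i.e. about ten trillion trillion
```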

To address this issue, the Stanford team engineered the world's most sensitive microphone - one that exploits quantum principles to eavesdrop on the whispers of atoms.

In an ordinary microphone, incoming sound waves jiggle an internal membrane, and this physical displacement is converted into a measurable voltage. This approach doesn't work for detecting individual phonons because, according to the Heisenberg uncertainty principle, a quantum object's position can't be precisely known without changing it.

"If you tried to measure the number of phonons with a regular microphone, the act of measurement injects energy into the system that masks the very energy that you're trying to measure," Safavi-Naeini said.

Instead, the physicists devised a way to measure Fock states - and thus, the number of phonons - in sound waves directly. "Quantum mechanics tells us that position and momentum can't be known precisely - but it says no such thing about energy," Safavi-Naeini said. "Energy can be known with infinite precision."

Singing qubits

The quantum microphone the group developed consists of a series of supercooled nanomechanical resonators, so small that they are visible only through an electron microscope. The resonators are coupled to a superconducting circuit that contains electron pairs that move around without resistance. The circuit forms a quantum bit, or qubit, that can exist in two states at once and has a natural frequency, which can be read electronically. When the mechanical resonators vibrate like a drumhead, they generate phonons in different states.

"The resonators are formed from periodic structures that act like mirrors for sound. By introducing a defect into these artificial lattices, we can trap the phonons in the middle of the structures," Arrangoiz-Arriola said.

Like unruly inmates, the trapped phonons rattle the walls of their prisons, and these mechanical motions are conveyed to the qubit by ultra-thin wires. "The qubit's sensitivity to displacement is especially strong when the frequencies of the qubit and the resonators are nearly the same," said joint first-author Alex Wollack, also a graduate student at Stanford.

However, by detuning the system so that the qubit and the resonators vibrate at very different frequencies, the researchers weakened this mechanical connection and triggered a type of quantum interaction, known as a dispersive interaction, that directly links the qubit to the phonons.

This bond causes the frequency of the qubit to shift in proportion to the number of phonons in the resonators. By measuring the qubit's changes in tune, the researchers could determine the quantized energy levels of the vibrating resonators - effectively resolving the phonons themselves.
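The readout described above can be sketched in a few lines: each phonon adds a fixed dispersive shift to the qubit frequency, so Fock states appear as evenly spaced peaks in the qubit spectrum. All numbers here are illustrative, not the experiment's:

```python
# Dispersive readout sketch: in the dispersive regime the qubit frequency
# shifts by chi per phonon, f_n = f_qubit + n * chi. Counting peaks in the
# qubit spectrum therefore counts phonons. Values below are illustrative.

F_QUBIT_GHZ = 2.3   # assumed bare qubit frequency
CHI_MHZ = 2.0       # assumed dispersive shift per phonon

def peak_frequency_ghz(n_phonons: int) -> float:
    return F_QUBIT_GHZ + n_phonons * CHI_MHZ * 1e-3

peaks = [round(peak_frequency_ghz(n), 4) for n in range(4)]
print(peaks)  # Fock states 0..3 -> [2.3, 2.302, 2.304, 2.306]
```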

"Different phonon energy levels appear as distinct peaks in the qubit spectrum," Safavi-Naeini said. "These peaks correspond to Fock states of 0, 1, 2 and so on. These multiple peaks had never been seen before."

Mechanical quantum mechanical

Mastering the ability to precisely generate and detect phonons could help pave the way for new kinds of quantum devices that are able to store and retrieve information encoded as particles of sound or that can convert seamlessly between optical and mechanical signals.

Such devices could conceivably be made more compact and efficient than quantum machines that use photons, since phonons are easier to manipulate and have wavelengths that are thousands of times smaller than light particles.

"Right now, people are using photons to encode these states. We want to use phonons, which brings with it a lot of advantages," Safavi-Naeini said. "Our device is an important step toward making a 'mechanical quantum mechanical' computer."



Contacts and sources:
Ker Than
Stanford University - School of Humanities and Sciences


Citation: Resolving the energy levels of a nanomechanical oscillator
Patricio Arrangoiz-Arriola, E. Alex Wollack, Zhaoyou Wang, Marek Pechal, Wentao Jiang, Timothy P. McKenna, Jeremy D. Witmer, Raphaël Van Laer & Amir H. Safavi-Naeini
Nature, volume 571, pages 537–540 (2019). https://www.nature.com/articles/s41586-019-1386-x



Scientists Film Molecular Rotation

Scientists have used precisely tuned pulses of laser light to film the ultrafast rotation of a molecule. The resulting “molecular movie” tracks one and a half revolutions of carbonyl sulphide (OCS) – a rod-shaped molecule consisting of one oxygen, one carbon and one sulphur atom – taking place within 125 trillionths of a second, at a high temporal and spatial resolution.

The team, headed by DESY’s Jochen Küpper from the Center for Free-Electron Laser Science (CFEL) and Arnaud Rouzée from the Max Born Institute in Berlin, presents its findings in the journal Nature Communications. CFEL is a cooperation of DESY, the Max Planck Society and Universität Hamburg.

Quantum movie displays probability density distribution of rotating carbonyl sulphide molecules

The different stages of the molecule's periodic rotation repeat after about 82 picoseconds.

Credit: DESY, Evangelos Karamatskos/Britta Liebaug

“Molecular physics has long dreamed of capturing the ultrafast motion of atoms during dynamic processes on film,” explains Küpper, who is also a professor at the University of Hamburg. This is by no means simple, however, because in the realm of molecules you normally need high-energy radiation with a wavelength on the order of the size of an atom in order to see details. So Küpper’s team took a different approach: they used two precisely tuned pulses of infrared laser light, separated by 38 trillionths of a second (38 picoseconds), to set the carbonyl sulphide molecules spinning rapidly in unison (i.e. coherently). They then used a further laser pulse, with a longer wavelength, to determine the position of the molecules at intervals of around 0.2 picoseconds. “Since this diagnostic laser pulse destroys the molecules, the experiment had to be restarted for each snapshot,” reports Evangelos Karamatskos, the principal author of the study from CFEL.


Steps of the molecule's rotation, recorded at an average interval of seven picoseconds.

Credit: DESY, Evangelos Karamatskos

Altogether, the scientists took 651 pictures covering one and a half periods of rotation of the molecule. Assembled sequentially, the pictures produced a 125 picosecond film of the molecule’s rotation. The carbonyl sulphide molecule takes about 82 trillionths of a second, i.e. 0.000 000 000 082 seconds, to complete one whole revolution. “It would be wrong to think of its motion as being like that of a rotating stick, though,” says Küpper. “The processes we are observing here are governed by quantum mechanics. On this scale, very small objects like atoms and molecules behave differently from the everyday objects in our surroundings. The position and momentum of a molecule cannot be determined simultaneously with the highest precision; you can only define a certain probability of finding the molecule in a specific place at a particular point in time.”
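The ~82 ps repetition is the rotational revival period of a linear molecule, T_rev = 1/(2Bc), where B is the rotational constant. A quick check, using a literature value of B for OCS (not a number from this article):

```python
# Rotational revival period of OCS: T_rev = 1/(2*B*c) for a linear rotor.
# B ≈ 0.2029 cm^-1 is a literature value for OCS, not taken from the article.

C_CM_PER_S = 2.99792458e10   # speed of light in cm/s
B_OCS = 0.2029               # rotational constant of OCS in cm^-1

t_rev_s = 1.0 / (2.0 * B_OCS * C_CM_PER_S)
print(f"revival period ≈ {t_rev_s * 1e12:.1f} ps")  # ≈ 82.2 ps
```

The result agrees with the ~82 ps period the team deduced from the repeating images.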

The movie, assembled from the individual snapshots, covers about 1.5 rotational periods. 
Credit: DESY, Evangelos Karamatskos

The peculiar features of quantum mechanics can be seen in several of the movie’s many images, in which the molecule does not simply point in one direction, but in various different directions at the same time – each with a different probability (see for example the 3 o’clock position in the figure). “It is precisely those directions and probabilities that we imaged experimentally in this study,” adds Rouzée. “From the fact that these individual images start to repeat after about 82 picoseconds, we can deduce the period of rotation of a carbonyl sulphide molecule.”

The scientists believe that their method can also be used for other molecules and processes, for example to study the internal twisting, i.e., torsion, of molecules or chiral compounds, that are compounds that exist in two forms, which are mirror images of each other – much like the right and left hands of a human being. “We recorded a high-resolution molecular movie of the ultrafast rotation of carbonyl sulphide as a pilot project,” says Karamatskos, summarising the experiment. “The level of detail we were able to achieve indicates that our method could be used to produce instructive films about the dynamics of other processes and molecules.”

In addition to DESY, Universität Hamburg, the Max Born Institute in Berlin, and the University of Aarhus in Denmark were also involved in the project.




Contacts and sources:
Jochen Küpper
Deutsches Elektronen-Synchrotron - DESY
Citation: Molecular movie of ultrafast coherent rotational dynamics; Evangelos T. Karamatskos, Sebastian Raabe, Terry Mullins, Andrea Trabattoni, Philipp Stammer, Gildas Goldsztejn, Rasmus R. Johansen, Karol Długołęcki, Henrik Stapelfeldt, Marc J. J. Vrakking, Sebastian Trippel, Arnaud Rouzée, and Jochen Küpper; Nature Communications, 2019; DOI: 10.1038/s41467-019-11122-y




TESS Discovers Three New Planets Nearby, Including Temperate “Sub-Neptune”


NASA’s Transiting Exoplanet Survey Satellite, or TESS, has discovered three new worlds that are among the smallest, nearest exoplanets known to date. The planets orbit a star just 73 light-years away and include a small, rocky super-Earth and two sub-Neptunes — planets about half the size of our own icy giant.

The sub-Neptune furthest out from the star appears to be within a “temperate” zone, meaning that the very top of the planet’s atmosphere is within a temperature range that could support some forms of life. However, scientists say the planet’s atmosphere is likely a thick, ultradense heat trap that renders the planet’s surface too hot to host water or life.


This infographic illustrates key features of the TOI 270 system, located about 73 light-years away in the southern constellation Pictor. The three known planets were discovered by NASA’s Transiting Exoplanet Survey Satellite through periodic dips in starlight caused by each orbiting world. Insets show information about the planets, including their correct relative sizes, and how they compare to Earth. Temperatures given for TOI 270 planets are equilibrium temperatures, calculated without the warming effects of any possible atmospheres.
Credit: NASA’s Goddard Space Flight Center/Scott Wiessinger

Nevertheless, this new planetary system, which astronomers have dubbed TOI-270, is proving to have other curious qualities. For instance, all three planets appear to be relatively close in size. In contrast, our own solar system is populated with planetary extremes, from the small, rocky worlds of Mercury, Venus, Earth, and Mars, to the much more massive Jupiter and Saturn, and the more remote ice giants of Neptune and Uranus.

There’s nothing in our solar system that resembles an intermediate planet, with a size and composition somewhere between those of Earth and Neptune. But TOI-270 appears to host two such planets: both sub-Neptunes are smaller than our own Neptune and not much larger than the rocky planet in the system.

Astronomers believe TOI-270’s sub-Neptunes may be a “missing link” in planetary formation, as they are of an intermediate size and could help researchers determine whether small, rocky planets like Earth and more massive, icy worlds like Neptune follow the same formation path or evolve separately.

TOI-270 is an ideal system for answering such questions, because the star itself is nearby and therefore bright, and also unusually quiet. The star is an M-dwarf, a type of star that is normally extremely active, with frequent flares and stellar storms. TOI-270 appears to be an older M-dwarf that has since quieted down, giving off a steady brightness, against which scientists can measure many properties of the orbiting planets, such as their mass and atmospheric composition.

“There are a lot of little pieces of the puzzle that we can solve with this system,” says Maximilian Günther, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and lead author of a study published today in Nature Astronomy that details the discovery. “You can really do all the things you want to do in exoplanet science, with this system.”

Compare and contrast worlds in the TOI 270 system with these illustrations. Temperatures given for TOI 270 planets are equilibrium temperatures, calculated without the warming effects of any possible atmospheres.

Credit: NASA’s Goddard Space Flight Center

A planetary pattern

Günther and his colleagues detected the three new planets after looking through measurements of stellar brightness taken by TESS. The MIT-developed satellite stares at patches of the sky for 27 days at a time, monitoring thousands of stars for possible transits — characteristic dips in brightness that could signal a planet temporarily blocking the star’s light as it passes in front of it.

The team isolated several such signals from a nearby star, located 73 light-years away in the southern sky. They named the star TOI-270, for the 270th “TESS Object of Interest” identified to date. The researchers used ground-based instruments to follow up on the star’s activity, and confirmed that the signals are the result of three orbiting exoplanets: planet b, a rocky super-Earth with a roughly three-day orbit; planet c, a sub-Neptune with a five-day orbit; and planet d, another sub-Neptune slightly further out, with an 11-day orbit.

Günther notes that the planets seem to line up in what astronomers refer to as a “resonant chain,” meaning that the ratios of their orbital periods are close to ratios of small whole numbers — in this case, 3:5 for the inner pair, and 2:1 for the outer pair — and that the planets are therefore in “resonance” with each other. Astronomers have discovered other small stars with similarly resonant planetary formations. And in our own solar system, the moons of Jupiter also happen to line up in resonance with each other.
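A quick check of how close the system sits to these resonances, using approximate orbital periods from the discovery paper (treat the numbers as illustrative inputs, not exact values):

```python
# TOI-270 period ratios vs. the 5:3 and 2:1 resonances mentioned above.
# Periods in days are approximate values from the discovery paper.

P_B, P_C, P_D = 3.36, 5.66, 11.38

ratio_inner = P_C / P_B   # compare to 5/3 ≈ 1.667
ratio_outer = P_D / P_C   # compare to 2/1 = 2.0

print(f"c/b = {ratio_inner:.3f}  (5/3 = {5/3:.3f})")
print(f"d/c = {ratio_outer:.3f}  (2/1 = 2.000)")
```

Both ratios land within about 1–2 percent of the nearby integer ratios, which is what lets dynamicists study the chain's behavior and guess where a further-out planet might sit.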

“For TOI-270, these planets line up like pearls on a string,” Günther says. “That’s a very interesting thing, because it lets us study their dynamical behavior. And you can almost expect, if there are more planets, the next one would be somewhere further out, at another integer ratio.”

“An exceptional laboratory”

TOI-270’s discovery initially caused a stir of excitement within the TESS science team, as it seemed, in the first analysis, that planet d might lie in the star’s habitable zone, a region that would be cool enough for the planet’s surface to support water, and possibly life. But the researchers soon realized that the planet’s atmosphere was probably extremely thick, and would therefore generate an intense greenhouse effect, causing the planet’s surface to be too hot to be habitable.

But Günther says there is a good possibility that the system hosts other planets, further out from planet d, that might well lie within the habitable zone. Planet d, with an 11-day orbit, is about 10 million kilometers out from the star. Günther says that, given that the star is small and relatively cool — about half as hot as the sun — its habitable zone could potentially begin at around 15 million kilometers. But whether a planet exists within this zone, and whether it is habitable, depends on a host of other parameters, such as its size, mass, and atmospheric conditions.
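The ~15-million-kilometer estimate can be reproduced with a rough flux-scaling argument: the distance receiving Earth-like stellar flux scales as sqrt(L/L_sun) AU. The temperature ratio below comes from the article's "half as hot as the sun"; the stellar radius is an assumed typical M-dwarf value, not a figure from the article.

```python
# Rough habitable-zone scaling. L ∝ R^2 T^4, and the Earth-equivalent
# flux distance scales as sqrt(L/L_sun) AU.
# r_ratio = 0.38 is an ASSUMED typical M-dwarf radius, not from the article.

import math

AU_KM = 1.496e8          # astronomical unit in km
r_ratio = 0.38           # assumed R_star / R_sun
t_ratio = 0.5            # T_star / T_sun (from the article)

lum_ratio = r_ratio**2 * t_ratio**4        # L / L_sun
d_hz_km = math.sqrt(lum_ratio) * AU_KM     # distance with Earth-like flux

print(f"L/L_sun ≈ {lum_ratio:.4f}")
print(f"HZ distance ≈ {d_hz_km / 1e6:.0f} million km")
```

Under these assumptions the Earth-equivalent flux distance comes out near 14 million km, consistent with Günther's ~15-million-kilometer figure.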

Fortunately, the team writes in their paper that “the host star, TOI-270, is remarkably well-suited for future habitability searches, as it is particularly quiet.” The researchers plan to focus other instruments, including the upcoming James Webb Space Telescope, on TOI-270, to pin down various properties of the three planets, as well as search for additional planets in the star’s habitable zone.

“TOI-270 is a true Disneyland for exoplanet science, and one of the prime systems TESS was set out to discover,” Günther says. “It is an exceptional laboratory for not one, but many reasons — it really ticks all the boxes.”

This research was funded, in part, by NASA.


Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology

Citation: A super-Earth and two sub-Neptunes transiting the nearby and quiet M dwarf TOI-270
Maximilian N. Günther, Francisco J. Pozuelos, Jason A. Dittmann, Diana Dragomir, Stephen R. Kane, Tansu Daylan, Adina D. Feinstein, Chelsea X. Huang, Timothy D. Morton, Andrea Bonfanti, L. G. Bouma, Jennifer Burt, Karen A. Collins, Jack J. Lissauer, Elisabeth Matthews, Benjamin T. Montet, Andrew Vanderburg, Songhu Wang, Jennifer G. Winters, George R. Ricker, Roland K. Vanderspek, David W. Latham, Sara Seager, Joshua N. Winn, Jon M. Jenkins, James D. Armstrong, Khalid Barkaoui, Natalie Batalha, Jacob L. Bean, Douglas A. Caldwell, David R. Ciardi, Kevin I. Collins, Ian Crossfield, Michael Fausnaugh, Gabor Furesz, Tianjun Gan, Michaël Gillon, Natalia Guerrero, Keith Horne, Steve B. Howell, Michael Ireland, Giovanni Isopi, Emmanuël Jehin, John F. Kielkopf, Sebastien Lepine, Franco Mallia, Rachel A. Matson, Gordon Myers, Enric Palle, Samuel N. Quinn, Howard M. Relles, Bárbara Rojas-Ayala, Joshua Schlieder, Ramotholo Sefako, Avi Shporer, Juan C. Suárez, Thiam-Guan Tan, Eric B. Ting, Joseph D. Twicken, Ian A. Waite. Nature Astronomy, 2019; DOI: 10.1038/s41550-019-0845-5




Sunday, July 28, 2019

Hidden Genetic Variations Power Evolutionary Leaps



Laboratory populations that quietly amass 'cryptic' genetic variants are capable of surprising evolutionary leaps, according to a paper in the July 26 issue of Science. A better understanding of cryptic variation may improve directed evolution techniques for developing new biomolecules for medical and other applications.

Genetic variation — that is, accumulated mutations in the DNA — is the fuel for all evolutionary change: the more genetic variation, the faster evolution works and the more possibilities for novel adaptive solutions.

But one kind of genetic variation — hidden, or "cryptic," variation — doesn't alter the appearance or behavior of an organism in its usual environment.

"It's an underappreciated kind of genetic variation," says corresponding author Andreas Wagner, an evolutionary biologist at the University of Zurich and external professor at the Santa Fe Institute, "and it plays an important role in evolution."

The hominoids are descendants of a common ancestor.
Credit: TimVickers / Wikimedia Commons

Previous work has shown that cryptic variation in natural populations promotes rapid evolutionary adaptation. But the underlying molecular mechanisms were unclear.

To explore those mechanisms, Wagner's team worked with populations of the gut bacterium E. coli that carried a plasmid with a gene for a yellow fluorescent protein (YFP). The team designed a two-stage experiment. In stage 1, they used mutagenic PCR to increase variation in the YFP gene. Simultaneously, they selected for a narrow range of yellow fluorescence. Any bacteria not sufficiently yellow were excluded, a process called 'stabilizing selection.' In this way, they built up deep stores of cryptic genetic variation without altering the yellow color of the YFP protein.

During stage 2, the team changed the selection rules and began selecting for E. coli that fluoresced in the green part of the spectrum ('directional selection'). They also introduced control populations of E. coli that lacked enhanced cryptic variation in YFP. The cell lines with stores of cryptic variation evolved green fluorescent proteins (from the YFP genes) that were both greener and genetically more diverse than any produced by the control lineages.
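A toy model (not the authors' experiment or code) illustrates the two-stage logic: stabilizing selection holds a phenotype p = x + y near zero while mutations silently accumulate along the neutral axis q = x − y, the "cryptic" variation. When selection later targets q, the cryptic population spans a range of q values for selection to act on, while a clonal control population has none.

```python
# Toy two-stage selection model of cryptic variation (illustrative only).

import random

random.seed(1)

def stage1(pop, gens=200, mut=0.05, band=0.1):
    """Mutate while enforcing stabilizing selection on phenotype p = x + y."""
    for _ in range(gens):
        nxt = []
        while len(nxt) < len(pop):
            x, y = random.choice(pop)
            x += random.gauss(0, mut)
            y += random.gauss(0, mut)
            if abs(x + y) < band:      # phenotype unchanged -> survives
                nxt.append((x, y))
        pop = nxt
    return pop

def hidden_variation(pop):
    """Range of the hidden trait q = x - y available to later selection."""
    qs = [x - y for x, y in pop]
    return max(qs) - min(qs)

pop_cryptic = stage1([(0.0, 0.0)] * 100)
pop_control = [(0.0, 0.0)] * 100   # clonal control, no stage-1 mutation

print(hidden_variation(pop_cryptic))  # > 0: raw material for stage 2
print(hidden_variation(pop_control))  # 0.0: nothing for selection to act on
```

The cryptic population enters the directional-selection stage already spread across many genotypes of equal (old) fitness, which is why it can reach outcomes the clonal control cannot.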

In the experiment, says co-author Joshua Payne (ETH Zurich), cryptic variation did more than drive evolutionary adaptation faster. Cell lines with deep reserves of cryptic variation evolved greener YFP proteins, forms of the protein that were inaccessible to regular bacteria, and they evolved by multiple unique routes not available to regular E. coli.

Current laboratory directed evolution often leads to the same evolutionary outcomes each time. The new work shows how amassing cryptic variation can open doors to otherwise inaccessible regions of protein sequence space, says first author Jia Zheng, a postdoctoral researcher at the University of Zurich.

In the wild, cryptic variation helps fish adapt to life in caves. In the lab, cryptic variation might help a biomolecule bind a new receptor. "Our work can help develop new directed evolution strategies to find innovative biomolecules for biotechnological and medical applications," says Zheng.

Like a fat savings account, cryptic variation is a store of variation that becomes available in an emergency to fuel rapid evolutionary change critical to the survival of a lineage and useful for molecular biologists.

Read the paper, "Cryptic genetic variation accelerates evolution by opening access to diverse adaptive peaks," in Science (July 26, 2019)



Contacts and sources:
Jenna Marshall
Santa Fe Institute


Citation: