Thursday, August 31, 2017

NASA Creates Surface Flooding Maps from Hurricane Harvey

A new series of images generated with data from NASA's Soil Moisture Active Passive (SMAP) satellite illustrates the surface flooding caused by Hurricane Harvey from before its initial landfall through August 27, 2017. Harvey is now a depression spreading heavy rainfall through the south central and southeastern U.S.

The sequence of NASA SMAP images depicts successive satellite orbital swath observations showing the surface water conditions on Aug. 22, before Harvey's landfall (left), and on Aug. 27, two days after landfall (middle). The resulting increase in surface flooding from record rainfall over the three-day period, shown at right, depicts regionally heavy flooding around the Houston metropolitan area. The hardest hit areas (blue and purple shades) cover more than 23,000 square miles (about 59,600 square kilometers) and indicate a more than 1,000-fold increase in surface water cover from rainfall-driven flooding.


Credits: NASA/JPL-Caltech/GSFC/University of Montana


The SMAP observations detect the proportion of the ground covered by surface water within the satellite's field of view.

SMAP's low-frequency (L-band) microwave radiometer features enhanced capabilities for detecting surface water changes in nearly all weather conditions and under low-to-moderate vegetation cover. The satellite provides global coverage with one- to three-day repeat sampling, well suited for monitoring dynamic inland waters around the world.
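Conceptually, turning those fractional water observations into the area and fold-increase figures quoted above is straightforward bookkeeping. Below is a minimal sketch, with made-up numpy arrays standing in for the actual SMAP data products; the grid shape and 9-kilometer cells are illustrative assumptions, not the real product layout.

```python
import numpy as np

# Hypothetical fractional surface-water grids (values 0..1 per cell) from two
# satellite passes over the same region. Shapes and cell size are assumptions.
CELL_AREA_KM2 = 9.0 * 9.0
rng = np.random.default_rng(0)
before = rng.uniform(0.0, 0.001, size=(200, 200))  # pre-landfall pass
after = np.clip(before + rng.uniform(0.0, 0.8, size=(200, 200)), 0.0, 1.0)  # post-landfall

# Total surface-water area in each pass: sum of (fraction x cell area).
area_before = (before * CELL_AREA_KM2).sum()
area_after = (after * CELL_AREA_KM2).sum()

print(f"surface water before: {area_before:,.0f} km^2")
print(f"surface water after:  {area_after:,.0f} km^2")
print(f"fold increase: {area_after / area_before:,.0f}x")
```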

Top Texas Rainfall Totals from Harvey (in inches)

CEDAR BAYOU AT FM 1942               51.88                    
CLEAR CREEK AT I-45                  49.40                    
DAYTON 0.2 E                         49.23                    
MARYS CREEK AT WINDING ROAD          49.20                    
BEAUMONT/PORT ARTHUR                 47.35                    
SANTA FE 0.7 S                       46.70                    
PASADENA 4.4 WNW                     45.74                    
HORSEPEN CREEK AT BAY AREA BLVD      45.60                    
SOUTH HOUSTON 4.0 SSW                44.91                    
BERRY BAYOU AT FOREST OAKS BLVD      44.80                    
BERRY BAYOU AT NEVADA                44.44                    
FRIENDSWOOD 2.5 NNE                  44.05

Other NASA satellites continue to gather data on Harvey as it moves through the middle of the U.S. and weakens.

NASA's Aqua satellite provided a visible and infrared look at the weakening, soaking storm.

NASA's Aqua Satellite Provides a Double View of Harvey

On Aug. 30 at 3:20 p.m. EDT, the AIRS instrument aboard Aqua captured infrared temperature data on Tropical Storm Harvey, highlighting the strongest storms and coldest cloud tops near northwestern Louisiana and in bands of thunderstorms over southern Mississippi, northwestern Alabama, and southwestern Georgia.

Credits: NASA JPL, Ed Olsen


On Aug. 30 at 3:20 p.m. EDT the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured a visible light image of Tropical Storm Harvey moving north over Texas and Louisiana. At the same time, the Atmospheric Infrared Sounder or AIRS instrument aboard Aqua provided temperature data in infrared light. AIRS showed that the strongest storms and coldest cloud top temperatures were near northwestern Louisiana and in bands of thunderstorms over southern Mississippi, northwestern Alabama, and southwestern Georgia. Cloud top temperatures in those areas were as cold as minus 63 degrees Fahrenheit (minus 53 degrees Celsius). Storms with cloud tops that cold have been shown to generate heavy rainfall.

Harvey's Status on Aug. 31

The National Hurricane Center (NHC) has issued its final advisory on Harvey. Public Advisories from the Weather Prediction Center (WPC) will provide updates as long as the system remains a flood threat.

On Aug. 30 at 3:20 p.m. EDT NASA's Aqua satellite captured this visible light image of Tropical Storm Harvey moving north over Texas and Louisiana.

Credits: NASA Goddard MODIS Rapid Response Team


By Aug. 31, Harvey had been downgraded to a depression and was generating flooding rains in far eastern Texas and western Louisiana with heavy rainfall spreading northeastward through the Lower Mississippi Valley and into the Tennessee and Ohio Valleys and central Appalachians over the next day or two.

At 11 a.m. EDT (1500 UTC) the center of tropical depression Harvey was located near 32.5 degrees north latitude and 91.4 degrees west longitude. WPC said Tropical Depression Harvey is moving towards the northeast and is expected to continue this motion over the next 48 hours. This forecast track takes Harvey into northern Mississippi by Thursday evening, middle Tennessee by Friday, and into the Ohio Valley states on Saturday, Sept. 2 as a post-tropical low.

Harvey's Rainfall Spreading North and East

WPC said: Tropical Depression Harvey is expected to produce 3 to 5 inches of rain from eastern Arkansas and northern Mississippi northeastward across western to central Tennessee, western to central Kentucky, southern Ohio and into West Virginia. Locally higher totals of 6 to 10 inches are possible across northern Mississippi, western Tennessee and into southwest Kentucky. These rains will enhance the flash flooding risk across these areas, especially in northern Mississippi, western Tennessee and southwest Kentucky. Meanwhile, widespread flooding will continue in and around Houston, Beaumont/Port Arthur/Orange, and eastward around the Louisiana border through the weekend. The expected heavy rains spreading northeastward from Louisiana into western Kentucky may also lead to flash flooding and increased river and small stream flooding.



About SMAP

SMAP is managed for NASA's Science Mission Directorate in Washington by NASA's Jet Propulsion Laboratory in Pasadena, California, and NASA's Goddard Space Flight Center in Greenbelt, Maryland. JPL is managed for NASA by Caltech. A consortium of researchers from other universities participates on the SMAP science team, including the Massachusetts Institute of Technology in Cambridge; Princeton University in Princeton, New Jersey; and the University of Montana in Missoula, which provided the SMAP surface water imagery.




Contacts and sources:
By Karen Boggs / Rob Gutro
NASA's Jet Propulsion Laboratory, Pasadena, Calif.
NASA's Goddard Space Flight Center, Greenbelt, Md.

For more information about SMAP, visit http://smap.jpl.nasa.gov
For rainfall totals, visit: http://www.nhc.noaa.gov/text/refresh/MIAWPCAT4+shtml/311459.shtml
For updated forecasts on Harvey, visit: http://www.nhc.noaa.gov/#Harvey

Star-formation ‘Fuel Tanks’ Found around Distant Galaxies from the Early Universe

In the early universe, brilliant starburst galaxies converted vast stores of hydrogen gas into new stars at a furious pace.

The energy from this vigorous star formation took its toll on many young galaxies, blasting away much of their hydrogen gas, tamping down future star formation. For reasons that remained unclear, other young galaxies were somehow able to retain their youthful star-forming power long after similar galaxies settled into middle age.

This cartoon shows how gas falling into distant starburst galaxies ends up in vast turbulent reservoirs of cool gas extending 30,000 light-years from the central regions. ALMA has been used to detect these turbulent reservoirs of cold gas surrounding similar distant starburst galaxies. By detecting CH+ for the first time in the distant Universe, this research opens up a new window of exploration into a critical epoch of star formation.

Credit: ESO/L. Benassi


Shedding light on this mystery, astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) studied six distant starburst galaxies and discovered that five of them are surrounded by turbulent reservoirs of hydrogen gas, the fuel for future star formation.

These star-forming “fuel tanks” were uncovered through the discovery of extensive regions of carbon hydride (CH+) molecules in and around the galaxies. CH+ is an ion of the CH molecule, and it traces highly turbulent regions in galaxies that are teeming with hydrogen gas.

The new ALMA observations, led by Edith Falgarone (Ecole Normale Supérieure and Observatoire de Paris, France) and appearing in the journal Nature, help explain how galaxies manage to extend their period of rapid star formation.

“By detecting these molecules with ALMA, we discovered that there are huge reservoirs of turbulent gas surrounding distant starburst galaxies. These observations provide new insights into the growth of galaxies and how a galaxy’s environs fuel star formation,” said Edwin Bergin, an astronomer with the University of Michigan, Ann Arbor, and co-author on the paper.

“CH+ is a special molecule,” said Martin Zwaan, an astronomer at ESO, who contributed to the paper. “It needs a lot of energy to form and is very reactive, which means its lifetime is very short and it can’t be transported far. CH+ therefore traces how energy flows in the galaxies and their surroundings.”

This ALMA image shows the Cosmic Eyelash, a remote starburst galaxy that appears double and brightened by gravitational lensing. ALMA has been used to detect turbulent reservoirs of cold gas surrounding this and other distant starburst galaxies. By detecting CH+ for the first time in the distant Universe, this research opens up a new window of exploration into a critical epoch of star formation.
Credit: ALMA (ESO/NAOJ/NRAO)/E. Falgarone et al.

The observed CH+ reveals dense shock waves, powered by hot, fast galactic winds originating inside the galaxies’ star-forming regions. These winds flow through a galaxy and push material out of it. Their turbulent motions are such that the galaxy’s gravitational pull can recapture part of that material. This material then gathers into turbulent reservoirs of cool, low-density gas, extending more than 30,000 light-years from the galaxy’s star-forming region.

“With CH+, we learn that energy is stored within vast galaxy-sized winds and ends up as turbulent motions in previously unseen reservoirs of cold gas surrounding the galaxy,” said Falgarone. “Our results challenge the theory of galaxy evolution. By driving turbulence in the reservoirs, these galactic winds extend the starburst phase instead of quenching it.”

The team determined that galactic winds alone could not replenish the newly revealed gaseous reservoirs. The researchers suggest that the mass is provided by galactic mergers or accretion from hidden streams of gas, as predicted by current theory.

“This discovery represents a major step forward in our understanding of how the inflow of material is regulated around the most intense starburst galaxies in the early universe,” says ESO’s Director for Science, Rob Ivison, a co-author on the paper. “It shows what can be achieved when scientists from a variety of disciplines come together to exploit the capabilities of one of the world’s most powerful telescopes.”

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.



Contacts and sources:
Charles Blue
National Radio Astronomy Observatory

Citation: “Large turbulent reservoirs of cold molecular gas around high redshift starburst galaxies” by E. Falgarone et al., appearing in Nature [http://www.nature.com/nature/journal/v548/n7668/full/nature23298.html]

Healthy Glucose Levels Are Key to a Healthy Ageing Brain

New research has found that blood glucose levels even within the normal range can have a significant impact on brain atrophy in ageing.

Dr Erin Walsh, lead author and post-doctoral research fellow at ANU, said the impact of blood glucose on the brain is not limited to people with type 2 diabetes.

"People without diabetes can still have high enough blood glucose levels to have a negative health impact," said Dr Walsh from the Centre for Research on Ageing, Health and Wellbeing (CRAHW) at ANU.

Illustration by Erin Walsh.

"People with diabetes can have lower blood glucose levels than you might expect due to successful glycaemic management with medication, diet and exercise.

"The research suggests that maintaining healthy blood glucose levels can help promote healthy brain ageing. If you don't have diabetes it's not too early and if you do have diabetes it's not too late."

Dr Walsh said people should consider adopting healthy lifestyle habits, such as regular exercise and healthy diets.

"Having a healthy lifestyle contributes to good glycaemic control without needing a diabetes diagnosis to spur them into adopting these good habits," she said.

"It helps to keep unhealthy highly processed and sugary foods to a minimum. Also, regular physical activity every day can help, even if it is just a going for walk."

The research is part of the "Too sweet for our own good: An investigation of the effects of higher plasma glucose on cerebral health" project led by Associate Professor Nicolas Cherbuin, which is part of the longitudinal PATH through life study led by Professor Kaarin Anstey at ANU.

"The work would not be possible without being able to longitudinally explore blood glucose in members of the general public," said Dr Walsh.

The research has been published in the journal Diabetes & Metabolism.



Contacts and sources:
Kate Prestt
Australian National University

Gut Bacteria That “Talk” to Human Cells May Lead to New Treatments

A new twist on listening to your gut.

Scientists developed a method to genetically engineer gut bacteria to produce molecules that have the potential to treat certain disorders.

We have a symbiotic relationship with the trillions of bacteria that live in our bodies—they help us, we help them. It turns out that they even speak the same language. And new research from The Rockefeller University and the Icahn School of Medicine at Mt. Sinai suggests these newly discovered commonalities may open the door to “engineered” gut flora that can have therapeutically beneficial effects on disease.

Gut bacteria
Credit:  Rockefeller University

“We call it mimicry,” says Sean Brady, director of Rockefeller University’s Laboratory of Genetically Encoded Small Molecules, where the research was conducted. The breakthrough is described in a paper published this week in the journal Nature.

In a double-barreled discovery, Brady and co-investigator Louis Cohen found that gut bacteria and human cells, though different in many ways, speak what is basically the same chemical language, based on molecules called ligands. Building on that, they developed a method to genetically engineer the bacteria to produce molecules that have the potential to treat certain disorders by altering human metabolism. In a test of their system on mice, the introduction of modified gut bacteria led to reduced blood glucose levels and other metabolic changes in the animals.

Molecular impersonation

The method involves the lock-and-key relationship of ligands, which bind to receptors on the membranes of human cells to produce specific biological effects. In this case, the bacteria-derived molecules are mimicking human ligands that bind to a class of receptors known as GPCRs, for G-protein-coupled receptors.

Many of the GPCRs are implicated in metabolic diseases, Brady says, and are the most common targets of drug therapy. And they’re conveniently present in the gastrointestinal tract, where the gut bacteria are also found. “If you’re going to talk to bacteria,” says Brady, “you’re going to talk to them right there.” (Gut bacteria are part of the microbiome, the larger community of microbes that exist in and on the human body.)

E. coli bacteria
Credit: Hawaii Department of Health

In their work, Cohen and Brady engineered gut bacteria to produce specific ligands, N-acyl amides, that bind with a specific human receptor, GPR119, which is known to be involved in the regulation of glucose and appetite and has previously been a therapeutic target for the treatment of diabetes and obesity. The bacterial ligands they created turned out to be almost identical structurally to the human ligands, says Cohen, an assistant professor of gastroenterology in the Icahn School of Medicine at Mt. Sinai.

Manipulating the system

Among the advantages of working with bacteria, says Cohen, who spent five years in Brady’s lab as part of Rockefeller’s Clinical Scholars Program, is that their genes are easier to manipulate than human genes and much is already known about them. “All the genes for all the bacteria inside of us have been sequenced at some point,” he says.

In past projects, researchers in Brady’s lab have mined microbes from soil in search of naturally occurring therapeutic agents. In this instance, Cohen started with human stool samples in his hunt for gut bacteria with DNA he could engineer. When he found them, he cloned them and packaged them inside E. coli bacteria, which are easy to grow. He could then see what molecules the engineered E. coli strains were making.

Although they are the product of non-human microorganisms, Brady says it’s a mistake to think of the bacterial ligands they create in the lab as foreign. “The biggest change in thought in this field over the last 20 years is that our relationship with these bacteria isn’t antagonistic,” he says. “They are a part of our physiology. What we’re doing is tapping into the native system and manipulating it to our advantage.”

“This is a first step in what we hope is a larger-scale, functional interrogation of what the molecules derived from microbes can do,” Brady says. His plan is to systematically expand and define the chemistry that is being used by the bacteria in our guts to interact with us. Our bellies, it turns out, are full of promise.



Contacts and sources:
Katherine Fenz
Rockefeller University

Citation: Commensal bacteria make GPCR ligands that mimic human signalling molecules. Louis J. Cohen, Daria Esterhazy, Seong-Hwan Kim, Christophe Lemetre, Rhiannon R. Aguilar, Emma A. Gordon, Amanda J. Pickard, Justin R. Cross, Ana B. Emiliano, Sun M. Han, John Chu, Xavier Vila-Farres, Jeremy Kaplitt, Aneta Rogoz, Paula Y. Calle, Craig Hunter, J. Kipchirchir Bitok, and Sean F. Brady. Nature, 2017; DOI: 10.1038/nature23874

New Test Detects Antibiotic Resistance in Minutes Instead of Days



Researchers from Uppsala University have developed a new method to quickly determine whether an infection is caused by bacteria that are resistant or sensitive to antibiotics.

Antibiotic resistance is a growing medical problem that threatens human health globally. An important contributing factor to the development of resistance is the use of the wrong antibiotic during treatment. Reliable methods that can quickly and easily identify a bacterium's resistance pattern, so that the right treatment can be started from the beginning, that is, already at the doctor's visit, would address the problem. This has not been possible until now because existing antibiotic resistance tests take too long. Now, researchers at Uppsala University have shown for the first time that it is possible to build an antibiotic resistance test fast enough that a patient can bring the right antibiotic home from the health center after the first visit. The test is primarily intended for urinary tract infections.

Klebsiella pneumoniae bacteria growing in a microfluidic chip, imaged in phase contrast. The bacteria are 0.003 mm long and divide every half hour.

Credit: Uppsala University

"We have developed a new method that allows a determination of bacterial resistance patterns in urinary tract infections of 10 to 30 minutes. By comparison, the resistance determination currently in force requires 1 to 2 days. The rapid test is based on a new microflow plastic chip where the bacteria are captured and growing, as well as methods for analyzing bacterial growth at the single-cell level, "says Özden Baltekin, a doctoral student who performed most of the experimental work.

The method is very sensitive and builds on techniques developed to study the behavior of individual bacteria. By observing whether the captured cells continue to grow in the presence of an antibiotic (meaning they are resistant) or stop growing (meaning they are sensitive), the test can deliver an answer within minutes.
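In outline, the classification reduces to comparing single-cell growth rates with and without the antibiotic. Here is a minimal sketch of that comparison in Python, with hypothetical cell-length measurements and thresholds; this is not Astrego's actual analysis pipeline.

```python
import numpy as np

def classify_susceptibility(lengths_treated, lengths_reference,
                            window_min=10.0, growth_ratio_threshold=0.5):
    """Classify a sample as resistant or sensitive from single-cell growth.

    Each input is an (n_cells, 2) array of cell lengths (micrometers)
    measured at the start and end of the observation window, for the
    antibiotic-treated channel and the untreated reference channel.
    The 0.5 ratio threshold is an illustrative assumption.
    """
    # Mean exponential growth rate per channel: ln(L_end / L_start) / t.
    rate = lambda L: np.mean(np.log(L[:, 1] / L[:, 0])) / window_min
    ratio = rate(lengths_treated) / rate(lengths_reference)
    return ("resistant" if ratio > growth_ratio_threshold else "sensitive"), ratio

# Example: sensitive cells barely elongate in the antibiotic channel.
reference = np.array([[1.0, 1.4], [1.1, 1.5], [0.9, 1.3]])   # keeps growing
treated = np.array([[1.0, 1.02], [1.1, 1.10], [0.9, 0.93]])  # stalls
print(classify_susceptibility(treated, reference))            # ('sensitive', ~0.05)
```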

"It's super cool that the basic research methods we developed for completely other issues can benefit from such an extremely important medical application," says Johan Elf, one of the researchers behind the study.

The detection method is now being developed into a user-friendly product by a company in Uppsala, Astrego AB. The company expects to have an automated test for urinary tract infections ready within a couple of years.

"The hope is that the method in the future can be used in hospitals and health centers to quickly provide correct treatment and, in addition, reduce the unnecessary use of antibiotics," says Dan Andersson, one of the researchers behind the study. In addition, we believe that the method can also be used in other types of infections, such as blood infections where a fast and correct choice of antibiotics is vital to the patient.

The research has been funded by the Swedish Research Council, the Knut and Alice Wallenberg Foundation and the European Research Council, ERC. It takes place within the framework of the newly established National Center for Research on Antibiotic Resistance at Uppsala University, UAC (Uppsala Antibiotic Center).




Contacts and sources:
Linda Koffmar
Uppsala University

The study is published in the journal Proceedings of the National Academy of Sciences, USA (PNAS): Özden Baltekin, et al., Antibiotic susceptibility testing in less than 30 minutes using direct single-cell imaging, PNAS, doi: 10.1073/pnas.1708558114

Humans in Crete 5.7 Million Years Ago? Fossil Footprints Put Evolutionary Narrative in Question

Newly discovered human-like footprints from Crete may put the established narrative of early human evolution to the test. The footprints are approximately 5.7 million years old and were made at a time when previous research puts our ancestors in Africa – with ape-like feet.

Ever since the discovery of fossils of Australopithecus in South and East Africa during the middle years of the 20th century, the origin of the human lineage has been thought to lie in Africa. More recent fossil discoveries in the same region, including the iconic 3.7 million year old Laetoli footprints from Tanzania which show human-like feet and upright locomotion, have cemented the idea that hominins (early members of the human lineage) not only originated in Africa but remained isolated there for several million years before dispersing to Europe and Asia. The discovery of approximately 5.7 million year old human-like footprints from Crete, published online this week by an international team of researchers, overthrows this simple picture and suggests a more complex reality.

The footprints were discovered by Gerard Gierlinski (1st author of the study) by chance when he was on holiday on Crete in 2002. Gierlinski, a paleontologist at the Polish Geological Institute who specializes in footprints, identified them as mammal tracks but did not interpret them further at the time. In 2010 he returned to the site together with Grzegorz Niedzwiedzki (2nd author), a Polish paleontologist now at Uppsala University, to study the footprints in detail. Together they came to the conclusion that the footprints were made by hominins.

Credit: Andrzej Boczarowski

Human feet have a very distinctive shape, different from all other land animals. The combination of a long sole, five short forward-pointing toes without claws, and a hallux ("big toe") that is larger than the other toes, is unique. The feet of our closest relatives, the great apes, look more like a human hand with a thumb-like hallux that sticks out to the side. The Laetoli footprints, thought to have been made by Australopithecus, are quite similar to those of modern humans except that the heel is narrower and the sole lacks a proper arch. By contrast, the 4.4 million year old Ardipithecus ramidus from Ethiopia, the oldest hominin known from reasonably complete fossils, has an ape-like foot. The researchers who described Ardipithecus argued that it is a direct ancestor of later hominins, implying that a human-like foot had not yet evolved at that time.

The new footprints, from Trachilos in western Crete, have an unmistakably human-like form. This is especially true of the toes. The big toe is similar to our own in shape, size and position; it is also associated with a distinct 'ball' on the sole, which is never present in apes. The sole of the foot is proportionately shorter than in the Laetoli prints, but it has the same general form. In short, the shape of the Trachilos prints indicates unambiguously that they belong to an early hominin, somewhat more primitive than the Laetoli trackmaker. They were made on a sandy seashore, possibly a small river delta, whereas the Laetoli tracks were made in volcanic ash.

‘What makes this controversial is the age and location of the prints,’ says Professor Per Ahlberg at Uppsala University, last author of the study.

At approximately 5.7 million years, they are younger than the oldest known fossil hominin, Sahelanthropus from Chad, and contemporary with Orrorin from Kenya, but more than a million years older than Ardipithecus ramidus with its ape-like feet. This conflicts with the hypothesis that Ardipithecus is a direct ancestor of later hominins. Furthermore, until this year, all fossil hominins older than 1.8 million years (the age of early Homo fossils from Georgia) came from Africa, leading most researchers to conclude that this was where the group evolved. 

However, the Trachilos footprints are securely dated using a combination of foraminifera (marine microfossils) from over- and underlying beds, plus the fact that they lie just below a very distinctive sedimentary rock formed when the Mediterranean Sea briefly dried out, 5.6 million years ago. By curious coincidence, earlier this year, another group of researchers reinterpreted the fragmentary 7.2 million year old primate Graecopithecus from Greece and Bulgaria as a hominin. Graecopithecus is only known from teeth and jaws.


Credit: Andrzej Boczarowski

During the time when the Trachilos footprints were made, a period known as the late Miocene, the Sahara Desert did not exist; savannah-like environments extended from North Africa up around the eastern Mediterranean. Furthermore, Crete had not yet detached from the Greek mainland. It is thus not difficult to see how early hominins could have ranged across south-east Europe as well as Africa, and left their footprints on a Mediterranean shore that would one day form part of the island of Crete.

‘This discovery challenges the established narrative of early human evolution head-on and is likely to generate a lot of debate. Whether the human origins research community will accept fossil footprints as conclusive evidence of the presence of hominins in the Miocene of Crete remains to be seen,’ says Per Ahlberg.



Contacts and sources:
Per Ahlberg
Uppsala University

Biologists Find Unexpected Source for Brain’s Development

A team of biologists has found an unexpected source for the brain’s development, a finding that offers new insights into the building of the nervous system.

The research, which appears in the journal Science, discovered that glia, a collection of non-neuronal cells that had long been regarded as passive support cells, in fact are vital to nerve-cell development in the brain.

“The results lead us to revise the often neuro-centric view of brain development to now appreciate the contributions of non-neuronal cells such as glia,” explains Vilaiwan Fernandes, a postdoctoral fellow in New York University’s Department of Biology and the study’s lead author. “Indeed, our study found that fundamental questions in brain development with regard to the timing, identity, and coordination of nerve cell birth can only be understood when the glial contribution is accounted for.”

A confocal micrograph of a developing fruit fly visual system. Development of the retina (top) is coordinated with development of the optic lobe region of the brain (sphere below). All neurons are marked in yellow and their axon projections in cyan; magenta in the optic lobe marks the specific region of the brain where neuronal differentiation is regulated by glia.
Credit: Courtesy of Vilaiwan M Fernandes, Desplan Lab, NYU’s Department of Biology.


The brain is made up of two broad cell types, nerve cells or neurons and glia, which are non-nerve cells that make up more than half the volume of the brain. Neurobiologists have tended to focus on the former because these are the cells that form networks that process information.

 A time-lapse movie of a fruit fly visual system developing over the course of six hours.  A population of glia (bright green above magenta region) from the retina grow and infiltrate into the lamina region of the optic lobe (magenta), where they induce naïve cells to differentiate into neurons. In this way, glia coordinate neuronal development in the retina with that of the brain.

Credit: Courtesy of Vilaiwan M Fernandes, Desplan Lab, NYU’s Department of Biology.

However, given the preponderance of glia in the brain’s cellular make-up, the NYU researchers hypothesized that they could play a fundamental part in brain development.

To explore this, they examined the visual system of the fruit fly. The species serves as a powerful model organism for this line of study because its visual system, like the one in humans, holds repeated mini-circuits that detect and process light over the entire visual field.

This dynamic is of particular interest to scientists because, as the brain develops, it must coordinate the increase of neurons in the retina with other neurons in distant regions of the brain.

In their study, the NYU researchers found that the coordination of nerve-cell development is achieved through a population of glia, which relay cues from the retina to the brain to make cells in the brain become nerve cells.

“By acting as a signaling intermediary, glia exert precise control over not only when and where a neuron is born, but also the type of neuron it will develop into,” notes NYU Biology Professor Claude Desplan, the paper’s senior author.

The research was supported, in part, by a grant from the National Institutes of Health (EY13012).


Contacts and sources:
New York University

Apes' Abilities Misunderstood by Decades of Flawed Science, Wishful Thinking and a Superiority Complex

Apes' intelligence may be entirely misunderstood, because research has so far failed to measure it fairly and accurately, according to scientists.

Hundreds of scientific studies over two decades have told us that apes are clever - just not as clever as us.

A new analysis argues that what we think we know about apes' social intelligence is based on wishful thinking and flawed science.

Credit: Wikimedia Commons, By Aaron Logan 

Dr David Leavens, of the University of Sussex, with Professor Kim Bard, University of Portsmouth, and Professor Bill Hopkins, Georgia State University, USA, published their analysis in the journal Animal Cognition.

Dr Leavens said: "The fault underlying decades of research and our understanding of apes' abilities is due to such a strongly-held belief in our own superiority, that scientists have come to believe that human babies are more socially capable than ape adults. As humans, we see ourselves as top of the evolutionary tree. This had led to a systematic exaltation of the reasoning abilities of human infants, on the one hand, and biased research designs that discriminate against apes, on the other hand.

"Even when apes clearly outperform young human children, researchers tend to interpret the apes' superior performance to be a consequence of inferior cognitive abilities.

"There is not one scientifically sound report of an essential species difference between apes and humans in their abilities to use and understand clues from gestures, for example. Not one.

"This is not to say such a difference won't be found in future, but much of the existing scientific research is deeply flawed."

This isn't the first time science has seen such a pervasive collapse of rigor - 100 years ago scientists were sure that northern Europeans were the most intelligent in our species. Such bias is now seen as antiquated, but comparative psychology is applying the same bias to cross-species comparisons between humans and apes, the researchers say.

Professor Bard said: "In examining the literature, we found a chasm between evidence and belief. This suggests a deep commitment to the idea that humans alone possess sophisticated social intelligence, a bias that is often not supported by the evidence."


The starting point in comparative psychology research is that if an ape makes a pointing gesture, say, a point to a distant object, the meaning is treated as ambiguous; but if a human does the same, a double standard of interpretation is applied, and researchers conclude that humans have a degree of sophistication, a product of evolution, which other species can't possibly share.

In the absence of rigorous scientific research, Professor Bard said, "it is reasonable to ask if current comparative or developmental psychology has anything useful to contribute to our understanding of the 'cognitive foundations' of communication development.

"For researchers interested in the origins of language, focusing on behaviours without considering the animal's specific learning experiences will easily and inaccurately load results in favour of humans."

One example of this bias is a large set of studies in which the children were raised in Western households, steeped in the cultural conventions of nonverbal signalling, whereas the apes were raised without that cultural exposure. When both were tested on their understanding of Western conventions of non-verbal communication, the children naturally out-performed the apes on some tasks, but it remains ambiguous whether this is due to their evolutionary histories or to their specific learning experiences with respect to non-verbal communication.

Credit: Wikimedia Commons / Steve from Washington, DC

In another study, children aged 12 months were compared to apes aged, on average, 18-19 years old. The study concluded that humans alone have evolved the ability to point towards an absent object, while taking no account of the differences in the humans' and apes' age, life history, or environment. More recent studies have amply demonstrated that, like human children, adult apes do communicate about absent objects.

The researchers cite four possible remedies for what they describe as the pervasive superiority complex.

Hubble Delivers First Hints of Possible Water Content of TRAPPIST-1 Planets

An international team of astronomers used the NASA/ESA Hubble Space Telescope to estimate whether there might be water on the seven Earth-sized planets orbiting the nearby dwarf star TRAPPIST-1. The results suggest that the outer planets of the system might still harbour substantial amounts of water. This includes the three planets within the habitable zone of the star, lending further weight to the possibility that they may indeed be habitable.

On 22 February 2017 astronomers announced the discovery of seven Earth-sized planets orbiting the ultracool dwarf star TRAPPIST-1, 40 light-years away [1]. This makes TRAPPIST-1 the planetary system with the largest number of Earth-sized planets discovered so far.

This artist's impression shows the view from the surface of one of the planets in the TRAPPIST-1 system. At least seven planets orbit this ultracool dwarf star 40 light-years from Earth and they are all roughly the same size as the Earth. Several of the planets are at the right distances from their star for liquid water to exist on the surfaces.

Credit: ESO/N. Bartmann/spaceengine.org

Following up on the discovery, an international team of scientists led by the Swiss astronomer Vincent Bourrier from the Observatoire de l'Université de Genève, used the Space Telescope Imaging Spectrograph (STIS) on the NASA/ESA Hubble Space Telescope to study the amount of ultraviolet radiation received by the individual planets of the system. "Ultraviolet radiation is an important factor in the atmospheric evolution of planets," explains Bourrier. "As in our own atmosphere, where ultraviolet sunlight breaks molecules apart, ultraviolet starlight can break water vapour in the atmospheres of exoplanets into hydrogen and oxygen."

Comparison between the Sun and the ultracool dwarf star TRAPPIST-1
While lower-energy ultraviolet radiation breaks up water molecules -- a process called photodissociation -- ultraviolet rays with more energy (XUV radiation) and X-rays heat the upper atmosphere of a planet, which allows the products of photodissociation, hydrogen and oxygen, to escape.

As it is very light, hydrogen gas can escape the exoplanets' atmospheres and be detected around the exoplanets with Hubble, acting as a possible indicator of atmospheric water vapour [2]. The observed amount of ultraviolet radiation emitted by TRAPPIST-1 indeed suggests that the planets could have lost gigantic amounts of water over the course of their history.

This animation shows all seven planets orbiting the ultracool dwarf TRAPPIST-1. The constellation of Orion (The Hunter) is visible below the star, although it looks slightly different from how it appears from Earth because it is seen from a different star system. The artist’s impression in this video is based on the known physical parameters for the planets and stars seen, and uses a vast database of objects in the Universe.

Credit: ESO/L. Calçada/spaceengine.org

This is especially true for the innermost two planets of the system, TRAPPIST-1b and TRAPPIST-1c, which receive the largest amount of ultraviolet energy. "Our results indicate that atmospheric escape may play an important role in the evolution of these planets," summarises Julien de Wit, from MIT, USA, co-author of the study.

The inner planets could have lost more than 20 Earth-oceans-worth of water during the last eight billion years. However, the outer planets of the system -- including the planets e, f and g which are in the habitable zone -- should have lost much less water, suggesting that they could have retained some on their surfaces [3]. The calculated water loss rates as well as geophysical water release rates also favour the idea that the outermost, more massive planets retain their water. However, with the currently available data and telescopes no final conclusion can be drawn on the water content of the planets orbiting TRAPPIST-1.
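To put "more than 20 Earth-oceans over eight billion years" in rate terms, a back-of-the-envelope calculation looks like this; the ocean-mass constant is a commonly quoted figure, not a number from the paper, and the study models time-varying XUV-driven escape rather than a constant rate.

```python
# Back-of-the-envelope: average escape rate implied by losing 20 Earth-oceans
# of water over eight billion years. Illustrative arithmetic only.
EARTH_OCEAN_KG = 1.4e21      # commonly quoted mass of Earth's oceans
SECONDS_PER_YEAR = 3.156e7

oceans_lost = 20.0           # "more than 20 Earth-oceans-worth of water"
timespan_yr = 8.0e9          # "during the last eight billion years"

avg_rate_kg_per_s = oceans_lost * EARTH_OCEAN_KG / (timespan_yr * SECONDS_PER_YEAR)
print(f"average water loss rate: {avg_rate_kg_per_s:.1e} kg/s")  # ~1.1e5 kg/s
```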

A size comparison of the planets of the TRAPPIST-1 system, lined up in order of increasing distance from their host star. The planetary surfaces are portrayed with an artist’s impression of their potential surface features, including water, ice, and atmospheres.
Credit: NASA/R. Hurt/T. Pyle

"While our results suggest that the outer planets are the best candidates to search for water with the upcoming James Webb Space Telescope, they also highlight the need for theoretical studies and complementary observations at all wavelengths to determine the nature of the TRAPPIST-1 planets and their potential habitability," concludes Bourrier.


Notes

[1] The planets were discovered using: the ground-based TRAPPIST-South at ESO's La Silla Observatory in Chile; the orbiting NASA Spitzer Space Telescope; TRAPPIST-North in Morocco; ESO's HAWK-I instrument on the Very Large Telescope at the Paranal Observatory in Chile; the 3.8-metre UKIRT in Hawaii; the 2-metre Liverpool and 4-metre William Herschel telescopes at La Palma in the Canary Islands; and the 1-metre SAAO telescope in South Africa.

[2] This part of an atmosphere is called the exosphere. Earth's exosphere consists mainly of hydrogen with traces of helium, carbon dioxide and atomic oxygen.

[3] Results show that each of these planets may have lost less than three Earth-oceans of water.



Contacts and sources:
Mathias Jäger
ESA/Hubble Information Centre

Human Settlement In The Americas May Have Occurred in the Late Pleistocene

Analysis of a skeleton found in the Chan Hol cave near Tulum, Mexico suggests human settlement in the Americas occurred in the late Pleistocene era, according to a study published August 30, 2017 in the open-access journal PLOS ONE by Wolfgang Stinnesbeck from Universität Heidelberg, Germany, and colleagues.

Scientists have long debated about when humans first settled in the Americas. While osteological evidence of early settlers is fragmentary, researchers have previously discovered and dated well-preserved prehistoric human skeletons in caves in Tulum in Southern Mexico.

This is a prehistoric human skeleton in the Chan Hol cave near Tulum on the Yucatán peninsula prior to looting by unknown cave divers.

Credit: Tom Poole, Liquid Jungle Lab

To learn more about America's early settlers, Stinnesbeck and colleagues examined human skeletal remains found in the Chan Hol cave near Tulum. The researchers dated the skeleton by analyzing the uranium, carbon and oxygen isotopes found in its bones and in the stalagmite that had grown through its pelvic bone.

The researchers' isotopic analysis dated the skeleton to ~13 k BP, or approximately 13,000 years before present. This finding suggests that the Chan Hol cave was accessed during the late Pleistocene, providing one of the oldest examples of a human settler in the Americas. While the researchers acknowledge that changes in climate over time may have influenced the dating of the skeleton, future research could potentially disentangle how climate impacted the Chan Hol archaeological record.
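The dating rests on isotopic decay in the bones and the overgrown stalagmite. As a generic illustration of the principle, using the familiar radiocarbon half-life rather than the specific uranium-series systematics applied in the paper, an age follows from the measured fraction of a decaying isotope:

```python
import math

def radiometric_age(remaining_fraction, half_life_yr):
    """Age from the exponential decay law N(t) = N0 * exp(-lambda * t)."""
    decay_const = math.log(2) / half_life_yr     # lambda = ln(2) / t_half
    return math.log(1.0 / remaining_fraction) / decay_const

# Illustrative radiocarbon-style example (5,730-year 14C half-life): a sample
# retaining about 20.7% of its original 14C dates to roughly 13,000 years BP,
# the approximate age reported for the Chan Hol skeleton.
print(f"{radiometric_age(0.207, 5730):.0f} years")   # ~13,000
```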

  

Contacts and sources:
Tessa Gregory
PLOS ONE

Citation: Stinnesbeck W, Becker J, Hering F, Frey E, González AG, Fohlmeister J, et al. (2017) The earliest settlers of Mesoamerica date back to the late Pleistocene. PLoS ONE 12(8): e0183345. https://doi.org/10.1371/journal.pone.0183345. Freely available in PLOS ONE: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0183345

Wednesday, August 30, 2017

"Socially Aware" Robot Can Navigate Crowded Hallways and Thoroughfares

Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: Keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.

In drive tests performed inside MIT’s Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers have detailed their robotic design in a paper that they will present at the IEEE Conference on Intelligent Robots and Systems in September.

Engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.
Courtesy of the researchers

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”

Chen’s co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.




Social drive

In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localization (knowing where it is in the world), perception (recognizing its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).

Chen and his colleagues used standard approaches to solve the problems of localization and perception. For the latter, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For the problem of localization, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

“The part of the field that we thought we needed to innovate on was motion planning,” Everett says. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

That’s a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As a solution, roboticists sometimes take a trajectory-based approach, in which they program a robot to compute an optimal path that accounts for everyone's desired trajectories. These trajectories must be inferred from sensor data, because people don't explicitly tell the robot where they are trying to go.

“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Others have used faster, “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.

The problem with reactive-based approaches, Everett says, is the unpredictability of human nature — people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, such robots tend to collide with people or look like they are being pushed around by avoiding people excessively.

“The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says. “People don’t find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help.”

Training days

The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

They used reinforcement learning, a type of machine learning approach, in which they performed computer simulations to train a robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, in which they encouraged the robot in simulations to pass on the right, and penalized the robot when it passed on the left.
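The article does not reproduce the paper's actual reward function, but the idea of encoding social norms as training-time penalties can be sketched roughly as follows; every weight and distance here is an illustrative assumption, not a value from the study.

```python
def social_reward(reached_goal, min_separation_m, passed_on_left, collided):
    """Toy shaped reward for one simulated navigation step.

    Rewards reaching the goal, penalizes collisions and uncomfortably close
    approaches, and adds a small penalty for violating the pass-on-the-right
    convention. All weights and distances are illustrative assumptions.
    """
    if collided:
        return -0.25
    reward = 1.0 if reached_goal else 0.0
    if min_separation_m < 0.2:                 # too close to a pedestrian
        reward -= 0.1 * (0.2 - min_separation_m)
    if passed_on_left:                         # social-norm violation
        reward -= 0.05
    return reward

# Example: a step that passes a pedestrian on the left at a comfortable distance.
print(social_reward(reached_goal=False, min_separation_m=0.5,
                    passed_on_left=True, collided=False))   # -0.05
```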

“We want it to be traveling naturally among people and not be intrusive,” Everett says. “We want it to be following the same rules as everyone else.”

The advantage to reinforcement learning is that the researchers can perform these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, the researchers can program it to carry out the optimal paths, identified in the simulations, when the robot recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second, without pausing to reprogram its route.

“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett says. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
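That sense-decide-act cycle can be sketched as follows; the sensor, policy, and controller interfaces are hypothetical stand-ins for illustration, not MIT's code.

```python
import time

DECISION_PERIOD_S = 0.1   # re-choose a velocity ten times per second

def navigation_loop(sensors, policy, controller, goal):
    """Sense, choose one velocity, execute, and repeat until the goal.

    The sensors/policy/controller/goal interfaces are assumed:
      sensors.observe()                -> robot state plus pedestrian tracks
      policy.best_velocity(obs, goal)  -> velocity command from the trained network
      controller.execute(cmd)          -> send the command to the drive base
    """
    while not goal.reached():
        t0 = time.monotonic()
        obs = sensors.observe()
        cmd = policy.best_velocity(obs, goal)   # no full path to the goal is planned
        controller.execute(cmd)
        # Hold the command for one decision period, then look at the world again.
        time.sleep(max(0.0, DECISION_PERIOD_S - (time.monotonic() - t0)))
```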

Crowd control

Everett and his colleagues test-drove the robot in the busy, winding halls of MIT’s Stata Center, where the robot was able to drive autonomously for 20 minutes at a time. It rolled smoothly with the pedestrian flow, generally keeping to the right of hallways, occasionally passing people on the left, and avoiding any collisions.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Everett says going forward, he plans to explore how robots might handle crowds in a pedestrian environment.

“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”

This research was funded by Ford Motor Company.



Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology: MIT


Citation: Socially aware motion planning with deep reinforcement learning

What Is the Meaning of Ancient Geometric Earthworks in Southwestern Amazonia?

Researchers have examined pre-colonial geometric earthworks in southwestern Amazonia from the point of view of indigenous peoples and archaeology. The study shows that the earthworks were once important ritual communication spaces.

The geometric earthworks of southwestern Amazonia have raised the interest within the scientific community as well as the media and the general public, and they have been explored recently by several international research teams.

These unique archaeological sites have been labeled the Geoglyphs of Acre, as most of them are located in the Brazilian State of Acre. Nearly 500 sites have already been registered and have been included on the Brazilian State Party's Tentative List for inscription on the UNESCO World Heritage List.

Sá and Seu Chiquinho sites featuring circular, square, and U-shaped earthworks.

Photographer: Sanna Saunaluoma

The earthworks' construction and use span approximately 3000-1000 BP. The ditches form geometric patterns, such as squares, circles, U-forms, ellipses and octagons. They can be several meters deep and enclose areas of hundreds of square meters.

Members of the community interacted with the environment

Pirjo Kristiina Virtanen, Assistant Professor of Indigenous Studies at the University of Helsinki, Finland, has long conducted research with indigenous peoples in the study area. Sanna Saunaluoma, a postdoctoral researcher at the University of São Paulo, Brazil, specializes in Amazonian archaeology and wrote her doctoral dissertation on Acre's earthwork sites. Their article, published online in early view in American Anthropologist (119[4], 2017), examines pre-colonial geometric earthworks from the point of view of indigenous peoples and archaeology.

The study shows that the sites were once important ritual spaces where, through the geometric designs, certain members of the community communicated with various beings of the environment, such as ancestor spirits, animals, and celestial bodies. Thus people were constantly reminded that human life was intertwined with the environment and previous generations. People did not distinguish themselves from nature, but nonhumans enabled and produced life.

Geoglyphs on deforested land at the Fazenda Colorada site in the Amazon rainforest, Rio Branco area, Acre. Site dated c. AD 1283
Credit: Sanna Saunaluoma/ Wikimedia Commons

The geometric earthwork sites were especially used by the experts of that era, who specialized in the interaction with the nonhuman beings. The sites were important for members of the community at certain stages of life, and the various geometric patterns acted as "doors" and "paths" to gain the knowledge and strength of the different beings of the environment. Visualization and active interactions with nonhuman beings were constructive for these communities.

Contemporary indigenous peoples of Acre still regard earthwork sites as sacred places

The geometric patterns inspired by characteristics and skin patterns of animals still materialize the thinking of indigenous people of Amazonia and are also present in their modern pottery, fabrics, jewelry, and arts. As the theories of Amerindian visual art also show, geometric patterns can provide people with desired qualities and abilities, such as fertility, resistance, knowledge, and power.

Contemporary indigenous peoples of Acre still protect earthwork sites as sacred places and, unlike other Brazilian residents in the area, avoid using the sites for mundane activities, such as housing or agriculture, and therefore protect these peculiar ancient remains in their own way.



Contacts and sources:
Dr. Pirjo Kristiina Virtanen
University of Helsinki

Cassini Spacecraft Gets Ready for Final Plunge into Saturn

NASA's Cassini spacecraft is 18 days from its mission-ending dive into the atmosphere of Saturn. Its fateful plunge on Sept. 15 is a foregone conclusion -- an April 22 gravitational kick from Saturn's moon Titan placed the two-and-a-half-ton vehicle on its path for impending destruction. Yet several mission milestones have to occur over the coming two-plus weeks to prepare the vehicle for one last burst of trailblazing science.

NASA's Cassini spacecraft is shown heading for the gap between Saturn and its rings during one of 22 such dives of the mission's finale in this illustration. The spacecraft will make a final plunge into the planet's atmosphere on Sept. 15.
Credits: NASA/JPL-Caltech
"The Cassini mission has been packed full of scientific firsts, and our unique planetary revelations will continue to the very end of the mission as Cassini becomes Saturn’s first planetary probe, sampling Saturn's atmosphere up until the last second," said Linda Spilker, Cassini project scientist from NASA's Jet Propulsion Laboratory in Pasadena, California. "We'll be sending data in near real time as we rush headlong into the atmosphere -- it's truly a first-of-its-kind event at Saturn."

Team members reflect on what has made the NASA/ESA Cassini mission such an epic journey -- the extraordinary spacecraft, tremendous science and historic international collaboration. This video uses a combination of animations and actual imagery returned over the course of the mission.



The spacecraft is expected to lose radio contact with Earth within about one to two minutes after beginning its descent into Saturn's upper atmosphere. But on the way down, before contact is lost, eight of Cassini's 12 science instruments will be operating. In particular, the spacecraft's ion and neutral mass spectrometer (INMS) will directly sample the atmosphere's composition, potentially returning insights into the giant planet's formation and evolution.
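"Near real time" is limited by the speed of light: the signal still has to cross the gulf between Saturn and Earth. A minimal back-of-envelope sketch, assuming an Earth-Saturn distance of roughly 1.4 billion kilometers at the time (a round figure not given in the article, and one that varies with the planets' positions):

```python
# One-way radio delay from Saturn to Earth at the speed of light.
# The distance is an assumed round figure for September 2017.

C_KM_PER_S = 299_792.458      # speed of light, km/s
EARTH_SATURN_KM = 1.4e9       # assumed Earth-Saturn distance, km

delay_min = EARTH_SATURN_KM / C_KM_PER_S / 60
print(f"One-way light time: {delay_min:.0f} minutes")  # ~78 minutes
```

In other words, by the time Cassini's last data arrive, the spacecraft itself will have been gone for well over an hour.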

On the day before the plunge, other Cassini instruments will make detailed, high-resolution observations of Saturn's auroras, temperature, and the vortices at the planet's poles. Cassini's imaging camera will be off during this final descent, having taken a last look at the Saturn system the previous day (Sept. 14).

In its final week, Cassini will pass several milestones en route to its science-rich Saturn plunge. (Times below are predicted and may change slightly; see https://go.nasa.gov/2wbaCBT for updated times.)
-- Sept. 9 -- Cassini will make the last of 22 passes between Saturn itself and its rings -- closest approach is 1,044 miles (1,680 kilometers) above the cloud tops.

-- Sept. 11 -- Cassini will make a distant flyby of Saturn's largest moon, Titan. Even though the spacecraft will be 73,974 miles (119,049 kilometers) away, the moon's gravitational influence will slow the spacecraft slightly as it speeds past. A few days later, instead of passing through the outermost fringes of Saturn's atmosphere, Cassini will dive in too deep to survive the friction and heating.

-- Sept. 14 -- Cassini's imaging cameras take their last look around the Saturn system, sending back pictures of moons Titan and Enceladus, the hexagon-shaped jet stream around the planet's north pole, and features in the rings.

-- Sept. 14 (5:45 p.m. EDT / 2:45 p.m. PDT) -- Cassini turns its antenna to point at Earth, begins a communications link that will continue until end of mission, and sends back its final images and other data collected along the way.

-- Sept. 15 (4:37 a.m. EDT / 1:37 a.m. PDT) -- The "final plunge" begins. The spacecraft starts a 5-minute roll to position INMS for optimal sampling of the atmosphere, transmitting data in near real time from now to end of mission.

-- Sept. 15 (7:53 a.m. EDT / 4:53 a.m. PDT) -- Cassini enters Saturn's atmosphere. Its thrusters fire at 10 percent of their capacity to maintain directional stability, enabling the spacecraft's high-gain antenna to remain pointed at Earth and allowing continued transmission of data.

-- Sept. 15 (7:54 a.m. EDT / 4:54 a.m. PDT) -- Cassini's thrusters are at 100 percent of capacity. Atmospheric forces overwhelm the thrusters' capacity to maintain control of the spacecraft's orientation, and the high-gain antenna loses its lock on Earth. At this moment, expected to occur about 940 miles (1,510 kilometers) above Saturn's cloud tops, communication from the spacecraft will cease, and Cassini's mission of exploration will have concluded. The spacecraft will break up like a meteor moments later.

As Cassini completes its 13-year tour of Saturn, its Grand Finale -- which began in April -- and final plunge are just the last beat. Following a four-year primary mission and a two-year extension, NASA approved an ambitious plan to extend Cassini's service by an additional seven years. Called the Cassini Solstice Mission, the extension saw Cassini perform dozens more flybys of Saturn's moons as the spacecraft observed seasonal changes in the atmospheres of Saturn and Titan. From the outset, the planned endgame for the Solstice Mission was to expend all of Cassini's maneuvering propellant exploring, then eventually arriving in the ultra-close Grand Finale orbits, ending with safe disposal of the spacecraft in Saturn's atmosphere.

"The end of Cassini's mission will be a poignant moment, but a fitting and very necessary completion of an astonishing journey," said Earl Maize, Cassini project manager at NASA's Jet Propulsion Laboratory in Pasadena, California. "The Grand Finale represents the culmination of a seven-year plan to use the spacecraft’s remaining resources in the most scientifically productive way possible. By safely disposing of the spacecraft in Saturn's atmosphere, we avoid any possibility Cassini could impact one of Saturn's moons somewhere down the road, keeping them pristine for future exploration."

Since its launch in 1997, the findings of the Cassini mission have revolutionized our understanding of Saturn, its complex rings, the amazing assortment of moons and the planet's dynamic magnetic environment. The most distant planetary orbiter ever launched, Cassini started making astonishing discoveries immediately upon arrival and continues today. Icy jets shoot from the tiny moon Enceladus, providing samples of an underground ocean with evidence of hydrothermal activity. Titan's hydrocarbon lakes and seas are dominated by liquid ethane and methane, and complex pre-biotic chemicals form in the atmosphere and rain to the surface. Three-dimensional structures tower above Saturn's rings, and a giant Saturn storm circled the entire planet for most of a year. Cassini's findings at Saturn have also buttressed scientists' understanding of processes involved in the formation of planets.

The Cassini-Huygens mission is a cooperative project of NASA, ESA (European Space Agency) and the Italian Space Agency. NASA's Jet Propulsion Laboratory, a division of Caltech in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington. JPL designed, developed and assembled the Cassini orbiter.


Contacts and sources:
Preston Dyches
Jet Propulsion Laboratory
More information about Cassini: https://www.nasa.gov/cassini
https://saturn.jpl.nasa.gov   

Tuesday, August 29, 2017

Laser Zaps Decontaminate Soil

There might be a new and improved way to rid contaminated soil of toxins and pollutants: zap it with lasers. By directly breaking down pollutants, researchers say, high-powered lasers can now be more efficient and cheaper than conventional decontamination techniques.

"Other methods are either costly, labor intensive, have low efficiency, or take a long time," said Ming Su, an associate professor of chemical engineer at Northeastern University. With two of his graduate students, Wenjun Zheng and Sichao Hou, he has shown how such a laser system could work, describing the proof-of-principle results this week in the Journal of Applied Physics, from AIP Publishing.

The biggest advantage of lasers, Su explained, is that they can be used at the site of decontamination. Many conventional decontamination methods require digging up contaminated soil, hauling it somewhere else to be cleaned, and then returning it -- a process that is expensive and time-consuming. 

Laser induced soil decontamination (A), laser generated patterns (B and C), and an infrared image of temperature distribution along track of laser movement (D). 
Credit: AIP Publishing


These methods also have shortcomings in how well they can decontaminate. One of the most popular methods uses water or organic solvents to wash away the pollutants. But oftentimes, washing doesn't eradicate contaminants; it only dilutes them. And even if the soil ends up clean, you might be left with another problem: contaminated water. The organic solvents can themselves be harmful to people, and the process can create byproducts that become secondary contaminants.

There are ways to decontaminate soil on-site, but they have their own limitations. Soil vapor extraction, in which air is pumped into the ground to remove volatile organic compounds, only works on permeable or homogeneous soils. Biological approaches to break down pollutants using plants or microbes are slow, and only work for low concentrations of certain contaminants.

Lasers, however, can be used on-site to completely break down contaminants. "There is no other method that can do it at such high efficiency," Su said.

To demonstrate that the new method is feasible, the researchers tested it on a simulated soil made from porous silica. They contaminated their artificial soil with DDE, a carcinogenic breakdown product of DDT, the pesticide banned in the U.S. in 1972. The DDE molecules fluoresce under ultraviolet light, making them easier to detect.

Almost immediately after shining a high-powered infrared laser on the contaminated artificial soil, the glowing ceased. The lack of fluorescence indicated that the DDE was no longer present.

To remove the harmful substance, the laser light heats up the pollutant locally, reaching temperatures of thousands of degrees Celsius. This heat is sufficient to break the chemical bonds of the pollutant, fragmenting DDE into smaller, safer molecules such as carbon dioxide and water.
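Because the heating is local, the energy involved is modest. A rough, illustrative energy balance -- the article gives no laser parameters, so every number below is an assumption chosen only to show the scale:

```python
# Illustrative energy balance for local laser heating, Q = m * c * dT.
# None of these numbers come from the paper; they only show the scale.

mass_g = 1e-3        # assumed ~1 mg of soil in the laser spot
c_p_J_per_gK = 1.0   # assumed specific heat of silica, J/(g*K)
delta_T_K = 2000.0   # "thousands of degrees" of local heating

energy_J = mass_g * c_p_J_per_gK * delta_T_K
laser_power_W = 5.0  # assumed laser power

print(f"Energy to heat the spot: {energy_J:.0f} J")            # ~2 J
print(f"Dwell time at 5 W: {energy_J / laser_power_W:.1f} s")  # ~0.4 s
```

Sub-second dwell times per spot would be consistent with the near-immediate loss of fluorescence the researchers observed.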

In principle, lasers should be able to work on all types of contaminants, from organic compounds to metal ions. But first, Su said, the researchers will have to do more experiments with other contaminants. Future studies also need to involve more careful analysis to determine whether all of the contaminant is, in fact, broken down sufficiently to meet standards.

Eventually, Su envisions a multi-laser system carried on the back of a truck. The laser light, channeled through fiber-optic cables that penetrate the soil, could perhaps couple to a plow that loosens the dirt, better exposing it to the laser light.



Contacts and sources:
 American Institute of Physics (AIP)

Citation: "Laser induced rapid decontamination of aromatic compound from porous soil simulant," is authored by Wenjun Zheng, Sichao Hou and Ming Su. The article will appear in The Journal of Applied Physics on August 29, 2017 [DOI: 10.1063/1.4985813].  
http://aip.scitation.org/doi/full/10.1063/1.4985813

High-Tech Electronics Made from Autumn Leaves

Northern China’s roadsides are peppered with deciduous phoenix trees, producing an abundance of fallen leaves in autumn. These leaves are generally burned in the colder season, exacerbating the country’s air pollution problem. Investigators in Shandong, China, recently discovered a new method to convert this organic waste matter into a porous carbon material that can be used to produce high-tech electronics. The advance is reported in the Journal of Renewable and Sustainable Energy, by AIP Publishing.

The investigators used a multistep, yet simple, process to convert tree leaves into a form that could be incorporated into electrodes as active materials. The dried leaves were first ground into a powder, then heated to 220 degrees Celsius for 12 hours. This produced a powder composed of tiny carbon microspheres. These microspheres were then treated with a solution of potassium hydroxide and heated by increasing the temperature in a series of jumps from 450 to 800 C. 
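The sequence is simple enough to write down as a recipe. A sketch in Python, with steps and conditions taken from the paragraph above; fields the article does not specify are marked None:

```python
# The leaf-to-carbon conversion steps described above. Values come
# from the article; None marks conditions it does not specify.

recipe = [
    ("dry leaves, grind into powder",            None,        None),
    ("heat powder to form carbon microspheres",  "220 C",     "12 h"),
    ("treat microspheres with KOH solution",     None,        None),
    ("activate with temperature jumps",          "450-800 C", None),
]

for step, temperature, duration in recipe:
    print(f"{step}: T={temperature or 'n/a'}, t={duration or 'n/a'}")
```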

Phoenix tree (Paulownia imperialis) leaves 
Credit: U.S. National Park Service, Public Domain

The chemical treatment corrodes the surface of the carbon microspheres, making them extremely porous. The final product, a black carbon powder, has a very high surface area due to the presence of many tiny pores that have been chemically etched on the surface of the microspheres. The high surface area gives the final product its extraordinary electrical properties.

The investigators ran a series of standard electrochemical tests on the porous microspheres to quantify their potential for use in electronic devices. The current-voltage curves for these materials indicate that the substance could make an excellent capacitor. Further tests show that the materials are, in fact, supercapacitors, with specific capacitances of 367 farads per gram -- over three times higher than values seen in some graphene supercapacitors (http://aip.scitation.org/doi/full/10.1063/1.4984762).
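A figure like 367 F/g comes out of galvanostatic charge/discharge (GCD) measurements such as those shown below, via the standard relation C = I·Δt / (m·ΔV). A minimal sketch; the inputs are invented for illustration (chosen so the result matches the reported value), not taken from the paper:

```python
# Specific capacitance from a galvanostatic discharge:
#   C = I * dt / (m * dV)
# All inputs below are illustrative assumptions, not the paper's data.

current_A = 0.5e-3        # discharge current: 0.5 A/g on a 1 mg electrode
discharge_time_s = 734.0  # time to cross the full voltage window
mass_g = 1e-3             # active electrode mass
voltage_window_V = 1.0    # potential window

c_specific = current_A * discharge_time_s / (mass_g * voltage_window_V)
print(f"Specific capacitance: {c_specific:.0f} F/g")  # 367 F/g
```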


Galvanostatic charge/discharge (GCD) curves at different current densities, from 0.5 to 20 A/g.
Credit: Hongfang Ma, Qilu University of Technology

A capacitor is a widely used electrical component that stores energy by holding a charge on two conductors, separated from each other by an insulator. Supercapacitors can typically store 10-100 times as much energy as an ordinary capacitor, and can accept and deliver charges much faster than a typical rechargeable battery. For these reasons, supercapacitive materials hold great promise for a wide variety of energy storage needs, particularly in computer technology and hybrid or electric vehicles. 
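The "10-100 times" comparison follows from the capacitor energy formula E = ½CV². A quick sketch using the 367 F/g figure reported above and an assumed 1 V operating window (the article does not state the voltage, and the result counts only the electrode material, not a whole packaged device):

```python
# Stored energy per gram of electrode material, E = 0.5 * C * V^2.
# C is the reported 367 F/g; the 1 V window is an assumption, and the
# result ignores everything in a real device except the electrode.

c_per_gram_F = 367.0
voltage_V = 1.0          # assumed operating window

energy_J_per_g = 0.5 * c_per_gram_F * voltage_V**2
energy_Wh_per_kg = energy_J_per_g * 1000.0 / 3600.0
print(f"{energy_J_per_g:.0f} J/g (~{energy_Wh_per_kg:.0f} Wh/kg)")  # ~184 J/g
```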

Scanning Electron Microscopy (SEM) Image of Porous Carbon Microspheres 
Credit: Hongfang Ma, Qilu University of Technology

The research, led by Hongfang Ma of Qilu University of Technology, has been heavily focused on looking for ways to convert waste biomass into porous carbon materials that can be used in energy storage technology. In addition to tree leaves, the team and others have successfully converted potato waste, corn straw, pine wood, rice straw and other agricultural wastes into carbon electrode materials. Professor Ma and her colleagues hope to improve even further on the electrochemical properties of porous carbon materials by optimizing the preparation process and allowing for doping or modification of the raw materials.

The supercapacitive properties of the porous carbon microspheres made from phoenix tree leaves are higher than those reported for carbon powders derived from other biowaste materials. The fine-scale porous structure seems to be key to this property, since it facilitates contact between electrolyte ions and the surface of the carbon spheres, as well as enhancing ion transfer and diffusion on the carbon surface.



Contacts and sources:
American Institute of Physics (AIP)

The article, "Supercapacitive performance of porous carbon materials derived from tree leaves," is authored by Hongfang Ma, Zhibao Liu, Xiaodan Wang and Rongyan Jiang. The article appeared in the Journal of Renewable and Sustainable Energy on August 29, 2017 [DOI: 10.1063/1.4997019] and can be accessed at http://aip.scitation.org/doi/full/10.1063/1.4997019

How 139 Countries Could Be Powered by 100% Wind, Water, and Solar Energy by 2050

The latest roadmap to a 100% renewable energy future from Stanford's Mark Z. Jacobson and 26 colleagues is the most specific global vision yet, outlining infrastructure changes that 139 countries can make to be entirely powered by wind, water, and sunlight by 2050 after electrification of all energy sectors. Such a transition could mean less worldwide energy consumption due to the efficiency of clean, renewable electricity; a net increase of over 24 million long-term jobs; a decrease of 4-7 million air pollution deaths per year; stabilization of energy prices; and annual savings of over $20 trillion in health and climate costs.

The work appears August 23 in the journal Joule, Cell Press's new publication focused on sustainable energy.

The challenge of moving the world toward a low-carbon future in time to avoid exacerbating global warming and to create energy self-sufficient countries is one of the greatest of our time. The roadmaps developed by Jacobson's group provide one possible endpoint. For each of the 139 nations, they assess the raw renewable energy resources available to each country, the number of wind, water, and solar energy generators needed to be 80% renewable by 2030 and 100% by 2050, how much land and rooftop area these power sources would require (only around 1% of total available, with most of this open space between wind turbines that can be used for multiple purposes), and how this approach would reduce energy demand and cost compared with a business-as-usual scenario.

"Both individuals and governments can lead this change. Policymakers don't usually want to commit to doing something unless there is some reasonable science that can show it is possible, and that is what we are trying to do," says Jacobson, director of Stanford University's Atmosphere and Energy Program and co-founder of the Solutions Project, a U.S. non-profit educating the public and policymakers about a transition to 100% clean, renewable energy. "There are other scenarios. We are not saying that there is only one way we can do this, but having a scenario gives people direction."

This infographic represents the roadmaps developed by Jacobson et al for 139 countries to use 100 percent wind-water-solar in all energy sectors by 2050.

Credit: The Solutions Project

The analyses specifically examined each country's electricity, transportation, heating/cooling, industrial, and agriculture/forestry/fishing sectors. Of the 139 countries -- selected because data on them were publicly available from the International Energy Agency and because together they emit over 99% of all carbon dioxide worldwide -- the study showed that those with a greater share of land per population (e.g., the United States, China, the European Union) are projected to have the easiest time making the transition to 100% wind, water, and solar. Another finding was that the most difficult places to transition may be highly populated, very small countries surrounded by lots of ocean, such as Singapore, which may require an investment in offshore solar to convert fully.

As a result of a transition, the roadmaps predict a number of collateral benefits. For example, by eliminating oil, gas, and uranium use, the energy associated with mining, transporting and refining these fuels is also eliminated, reducing international power demand by around 13%. Because electricity is more efficient than burning fossil fuels, demand should go down another 23%. The changes in infrastructure would also mean that countries wouldn't need to depend on one another for fossil fuels, reducing the frequency of international conflict over energy. Finally, communities currently living in energy deserts would have access to abundant clean, renewable power.
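How those two reductions combine depends on a reading the article does not spell out: whether the 13% and 23% are additive percentage points or compounding factors. A short sketch of both interpretations:

```python
# Combining the two demand reductions quoted above. The article does
# not say whether they add as percentage points or compound, so both
# readings are shown.

mining_cut = 0.13       # eliminating fuel mining, transport, refining
efficiency_cut = 0.23   # electricity beating fuel combustion

additive = mining_cut + efficiency_cut
compounding = 1.0 - (1.0 - mining_cut) * (1.0 - efficiency_cut)

print(f"Additive reading:    {additive:.0%} lower demand")     # 36%
print(f"Compounding reading: {compounding:.0%} lower demand")  # 33%
```

Either way, the roadmaps project global energy demand roughly a third lower than business as usual.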

"Aside from eliminating emissions and avoiding 1.5 degrees Celsius global warming and beginning the process of letting carbon dioxide drain from the Earth's atmosphere, transitioning eliminates 4-7 million air pollution deaths each year and creates over 24 million long-term, full-time jobs by these plans," Jacobson says. "What is different between this study and other studies that have proposed solutions is that we are trying to examine not only the climate benefits of reducing carbon but also the air pollution benefits, job benefits, and cost benefits"

The Joule paper is an expansion of 2015 roadmaps to transition each of the 50 United States to 100% clean, renewable energy (doi:10.1039/C5EE01283J) and an analysis of whether the electric grid can stay stable upon such a transition (doi:10.1073/pnas.1510028112). Not only does this new study cover nearly the entire world, but it also includes improved calculations of the availability of rooftop solar energy, renewable energy resources, and jobs created versus lost.

The 100% clean, renewable energy goal has been criticized by some for focusing only on wind, water, and solar energy and excluding nuclear power, "clean coal," and biofuels. However, the researchers intentionally exclude nuclear power because of its 10-19 years between planning and operation, its high cost, and the acknowledged meltdown, weapons proliferation, and waste risks. "Clean coal" and biofuels are neglected because they both cause heavy air pollution, which Jacobson and coworkers are trying to eliminate, and emit over 50 times more carbon per unit of energy than wind, water, or solar power.

The 100% wind, water, solar studies have also been questioned for depending on some technologies such as underground heat storage in rocks, which exists only in a few places, and the proposed use of electric and hydrogen fuel cell aircraft, which exist only in small planes at this time. Jacobson counters that underground heat storage is not required but certainly a viable option since it is similar to district heating, which provides 60% of Denmark's heat. He also says that space shuttles and rockets have been propelled with hydrogen, and aircraft companies are now investing in electric airplanes. Wind, water, and solar can also face daily and seasonal fluctuation, making it possible that they could miss large demands for energy, but the new study refers to a new paper that suggests these stability concerns can be addressed in several ways.

These analyses have also been criticized for the massive investment it would take to move a country to the desired goal. Jacobson says that the overall cost to society (the energy, health, and climate cost) of the proposed system is one-fourth of that of the current fossil fuel system. In terms of upfront costs, most of these would be needed in any case to replace existing energy, and the rest is an investment that far more than pays itself off over time by nearly eliminating health and climate costs.

"It appears we can achieve the enormous social benefits of a zero-emission energy system at essentially no extra cost," says co-author Mark Delucchi, a research scientist at the Institute of Transportation Studies, University of California, Berkeley. "Our findings suggest that the benefits are so great that we should accelerate the transition to wind, water, and solar, as fast as possible, by retiring fossil-fuel systems early wherever we can."

"This paper helps push forward a conversation within and between the scientific, policy, and business communities about how to envision and plan for a decarbonized economy," writes Mark Dyson of Rocky Mountain Institute, in an accompanying preview of the paper. "The scientific community's growing body of work on global low-carbon energy transition pathways provides robust evidence that such a transition can be accomplished, and a growing understanding of the specific levers that need to be pulled to do so. Jacobson et al.'s present study provides sharper focus on one scenario, and refines a set of priorities for near-term action to enable it."


Contacts and sources:
Joseph Caputo
Joule (@Joule_CP) published monthly by Cell Press


Joule, Jacobson et al.: "100% Clean and Renewable Wind, Water, and Sunlight (WWS) All-Sector Energy Roadmaps for 139 Countries of the World" http://www.cell.com/joule/fulltext/S2542-4351(17)30012-0

Why Does Rubbing a Balloon on Your Hair Make It Stick?

For centuries, scientists have tried to understand triboelectric charging, commonly known as static electricity.

Triboelectric charging causes toner from a photocopier or laser printer to stick to paper, and likely facilitated the formation of planets from space dust and the origin of life on earth.

But the charges can also be destructive, sparking deadly explosions of coal dust in mines and of sugar and flour dust at food-processing plants.

New research led by Case Western Reserve University indicates that tiny holes and cracks in a material -- changes in the microstructure -- can control how the material becomes electrically charged through friction.

Changes in microstructure, such as this void and fibrils created by straining a polymer sheet, appear to control how the material charges through friction.

Credit: Case Western Reserve University

The research is a step toward understanding and, ultimately, managing the charging process for specific uses and to increase safety, the researchers say. The study is published in the journal Physical Review Materials.

"Electrostatic charging can be seen everywhere, but we noticed some cases where materials appeared to charge more -- like a balloon rubbed on your head, or packing peanuts sticking to your arm when you reach into a package," said Dan Lacks, chair of the Department of Chemical and Biomolecular Engineering and one of the study's lead authors.

"Our idea was that a strain on the materials was causing a higher propensity for the materials to become charged," Lacks said. "After blowing polystyrene to create the expanded polystyrene that comprises the peanut, the material maintains this distinct charging behavior indefinitely."

Testing the idea

Scientists have long known that rubbing two materials together, such as a balloon on hair, causes electrostatic charging. To test the theory that strain affects charging, the researchers stretched a film of polytetrafluoroethylene (PTFE) and rubbed it against a film of unstrained PTFE.

"Triboelectric charging experiments are generally known for their--as some would say--charmingly inconsistent results," said Andrew Wang, a Case Western Reserve PhD student and co-author who led the work. "What was surprising to me, initially, was the consistency of the unstrained versus strained charging results."

Lacks, Wang and Mohan Sankaran, professor of chemical engineering and the other lead author of the study, repeatedly found a systematic charge transfer in one direction, as if the materials were made of two different chemical compositions.

After rubbing, unstrained films clearly tended to carry a negative charge and strained films a positive charge. The polarity was not consistent 100 percent of the time, but the trend was statistically significant.

In contrast, unstrained films rubbed together and strained films rubbed together appeared to charge at random.
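The pattern described above -- a statistically significant preferred polarity for strained-versus-unstrained pairs, random charging otherwise -- can be made concrete with a simple two-sided sign test. A minimal sketch; the trial counts below are invented for illustration, since the article does not report them:

```python
# Two-sided binomial sign test for a preferred charging polarity.
# Null hypothesis: polarity is random, i.e. a fair coin per rub.
# Trial counts are invented for illustration only.

from math import comb

n_trials = 20      # assumed number of rubbing experiments
n_positive = 17    # assumed trials where the strained film went (+)

tail = sum(comb(n_trials, k) for k in range(n_positive, n_trials + 1))
p_two_sided = min(1.0, 2 * tail / 2**n_trials)
print(f"two-sided p-value: {p_two_sided:.4f}")  # ~0.0026, significant
```

Even with a minority of off-trend trials, a consistent majority in one direction is very unlikely under random charging.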

Analyzing the results

Collaborators at Bilkent University, in Ankara, Turkey, used X-ray diffraction and Raman spectroscopy to analyze samples of strained and unstrained films and found that, at the atomic level, they looked nearly the same.

The only detectable difference between the strained and unstrained films was the presence of voids in the strained material -- holes and fractures created by stretching, which changed the microstructure. Some holes and fractures could be seen with the naked eye, while others were so small they required a scanning electron microscope.

The researchers created molecular simulations of strained materials on a computer, which showed the birth of the voids but no other significant changes. That further indicated the change in microstructure is the likely cause of the systematic charge transfer.

"We think the void regions and the fibrils we see around them when we strain the polymer have different bonding and thus charge differently," Lacks said.

Although the experiment focused on one material, strain may affect all materials, Sankaran said. "The strain we put on the PTFE was large because we were looking for big effects," he said. "All materials may have a little strain from processing."

Next steps

The researchers are now focusing on granular materials as well as other polymers, including polystyrene peanuts and plastic bags.

They hope to understand the scientific basis of triboelectric charging and then control the process. The goal: to prevent damage and explosions or exploit the charging for beneficial uses, such as charged agricultural pesticides that stick better to plants, or paints for cars or even spray tans. Better adhesion would reduce the amounts applied and wasted.

Beyond earthly uses, Wang said, these applications and mitigation strategies might become more pertinent in the coming years as manned and unmanned space missions contend with dust on the moon, Mars, and asteroids.



Contacts and sources:
Kevin Mayhood
Case Western Reserve University

Citation: Dependence of triboelectric charging behavior on material microstructure
Andrew E. Wang, Phwey S. Gil, Moses Holonga, Zelal Yavuz, H. Tarik Baytekin, R. Mohan Sankaran, and Daniel J. Lacks
Phys. Rev. Materials 1, 035605 – Published 23 August 2017 http://dx.doi.org/10.1103/PhysRevMaterials.1.035605