Wednesday, January 23, 2019

Human Mutation Rate Has Slowed Recently

Researchers from Aarhus University and Copenhagen Zoo have discovered that the human mutation rate is significantly slower than that of our closest primate relatives. The new knowledge may be important for estimates of when the common ancestor of humans and chimpanzees lived - and for conservation of large primates in the wild.

The photograph shows Carl, an alpha-male chimpanzee at Copenhagen Zoo, and one of the participants in the study. 
Photo: Copenhagen Zoo, David Trood

Over the past million years or so, the human mutation rate has been slowing down so that significantly fewer new mutations now occur in humans per year than in our closest primate relatives. This is the conclusion of researchers from Aarhus University, Denmark, and Copenhagen Zoo in a new study in which they have found new mutations in chimpanzees, gorillas and orangutans, and compared these with corresponding studies in humans.

Using whole-genome sequencing of families, it is possible to discover new mutations by finding genetic variants that are only present in the child and not in the parents.
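In outline, this trio-based detection is a set comparison over the variants called in each family member. The toy sketch below illustrates the idea, assuming variants have already been called and reduced to (position, allele) pairs; a real pipeline would work from VCF files and filter on coverage and genotype quality.

```python
# Toy sketch of trio-based de novo mutation detection: a variant is a
# candidate de novo mutation if the child carries it but neither parent does.
# Variants are simplified here to (position, allele) pairs; real pipelines
# operate on VCF files and must filter on depth, quality and error rates.

def de_novo_candidates(child, mother, father):
    """Return variants present in the child but absent from both parents."""
    return child - (mother | father)

child  = {(1041, "A"), (5302, "T"), (9917, "G")}   # hypothetical calls
mother = {(1041, "A")}
father = {(5302, "T")}

print(de_novo_candidates(child, mother, father))   # {(9917, 'G')}
```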

"Over the past six years, several large studies have done this for humans, so we have extensive knowledge about the number of new mutations that occur in humans every year. Until now, however, there have not been any good estimates of mutation rates in our closest primate relatives," says Søren Besenbacher from Aarhus University.

The study looked at ten families of father, mother and offspring: seven chimpanzee families, two gorilla families and one orangutan family. In all the families, the researchers found more mutations than would be expected from the number that typically arise in human families with parents of similar age. This means that the annual mutation rate is now about one-third lower in humans than in apes.
Time of speciation fits better with fossil evidence

The higher rates in apes have an impact on the length of time estimated to have passed since the common ancestor of humans and chimpanzees lived. This is because a higher mutation rate means that the number of genetic differences between humans and chimpanzees will accumulate over a shorter period.

If the new mutation rates for apes are applied, the researchers estimate that the species formation (speciation) that separated humans from chimpanzees took place around 6.6 million years ago. If the mutation rate for humans is applied instead, speciation would fall around 10 million years ago.
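The logic of these two estimates follows the standard molecular-clock relation t = D / (2μ): the per-site divergence D between the two species, divided by twice the per-year mutation rate μ (mutations accumulate along both lineages). A minimal sketch, with illustrative numbers chosen only to reproduce the figures quoted above:

```python
# Molecular-clock sketch: divergence time t = D / (2 * mu).
# D and mu below are illustrative placeholders, not the study's inputs.

D        = 0.012           # ~1.2% per-site human-chimp divergence (illustrative)
mu_human = 6.0e-10         # per-site mutations per year (illustrative)
mu_ape   = 1.5 * mu_human  # apes ~50% faster, i.e. the human rate is a third lower

for label, mu in [("human rate", mu_human), ("ape rate", mu_ape)]:
    t = D / (2 * mu)       # factor 2: mutations accumulate along both lineages
    print(f"{label}: speciation ~{t / 1e6:.1f} million years ago")
# human rate -> ~10.0 Myr; ape rate -> ~6.7 Myr
```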

"The times of speciation we can now calculate on the basis of the new rate fit in much better with the speciation times we would expect from the dated fossils of human ancestors that we know of," explains Mikkel Heide Schierup from Aarhus University.

The reduction in the human mutation rate demonstrated in the study could also mean that we have to move our estimate for the split between Neanderthals and humans closer to the present.

Furthermore, the results could have an impact on conservation of the great apes. Christina Hvilsom from Copenhagen Zoo explains:

"All species of great apes are endangered in the wild. With more accurate dating of how populations have changed in relation to climate over time, we can get a picture of how species could cope with future climate change."

The study "Direct estimation of mutations in great apes reconciles phylogenetic dating" has been published in Nature Ecology and Evolution and is a collaboration between researchers from Aarhus University, Copenhagen Zoo and Universitat Pompeu Fabra in Barcelona.

Contacts and sources:
Christina Troelsen
Aarhus University

Professor Mikkel Heide Schierup
Bioinformatics Research Centre (BiRC)

Søren Besenbacher
Department of Clinical Medicine, Aarhus University

Christina Hvilsom, geneticist
Copenhagen Zoo, Copenhagen

Citation: Direct estimation of mutations in great apes reconciles phylogenetic dating.
Søren Besenbacher, Christina Hvilsom, Tomas Marques-Bonet, Thomas Mailund, Mikkel Heide Schierup. Nature Ecology & Evolution, 2019; DOI: 10.1038/s41559-018-0778-x

The Mysteries of the Hagfish's Slimy Defense Solved



The hagfish dates back at least 300 million years. The secret of these eel-like sea creatures’ survival lies in the rate and volume of slime they produce to fend off predators. Interestingly, the oldest hagfish fossils were found in Illinois, and today researchers from the University of Illinois at Urbana-Champaign are beginning to uncover the mystery of how the hagfish uses this substance to choke its predators.

Hagfish

Credit: University of Illinois at Urbana-Champaign

“Hagfish are both amazing and disgusting at the same time,” noted Randy Ewoldt, associate professor of mechanical science and engineering at Illinois, who has studied the hagfish for about a decade. “The reason that they are disgusting explains how they have survived all this time.”

Ewoldt, along with first author Gaurav Chaudhary, a PhD student in Ewoldt’s lab, and Jean-Luc Thiffeault, a mathematician from the University of Wisconsin, shared their findings in “Unravelling hagfish slime,” a paper published in the Journal of the Royal Society Interface.

Their work is receiving much interest, including features in Newsweek, New Scientist, Ars Technica, and Science magazine.

That hagfish are slimy has been known for many years, but only in the past decade did evidence finally emerge of the hagfish using its slime to clog the gills of suction-feeding predators, confirming suspicions about why the hagfish produces it.

“When the hagfish create this slime, they do so in an incredibly efficient way: the material they put into the water grows in volume by a factor of up to 10,000 to make the final slimy gel,” Ewoldt explained. “The volumetric increase is astounding and as far as I know untouched by anything else in nature or anything we’ve done as engineers.”

Ewoldt admits that although there are a number of real-world applications that will likely be designed in the future using the hagfish’s method (The United States Navy is interested in using the material in defense, for instance), explaining how the hagfish produces this material was more curiosity-driven research.

“We wanted to investigate the whys behind just how amazingly curious and weird this thing is,” Ewoldt said. “The interesting physics and the fact that there is no clear analogy became our motivation for trying to explain it.”

The Journal of the Royal Society Interface publishes research at the interface of different sciences. In this case, explaining the mysterious hagfish sat at the intersection of physics, applied mathematics, and biology.

“The hagfish slime doesn’t maintain the same topology while expanding in volume; it fundamentally changes as threads unravel and its volume increases,” Ewoldt said. “The focus of this study was on this unique mechanism.”

The team of researchers discovered that hagfish can unfurl a skein of slime in a fraction of a second.

Credit: University of Illinois at Urbana-Champaign

What the team discovered was that the hagfish threads, 100 times thinner than a human hair and initially wound up like a skein of yarn, can unravel in the blink of an eye due to fluid flow. In doing so, a thread goes from a coiled skein about 0.1 millimeter across to roughly 10 centimeters in length, incorporating water at a ratio of about 10,000:1.
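The two numbers quoted here can be sanity-checked with simple arithmetic; the sketch below uses only the figures given in the article.

```python
# Back-of-the-envelope check of the unravelling figures from the article.
skein_size = 0.1e-3   # coiled skein, ~0.1 millimeter
thread_len = 10e-2    # fully unravelled thread, ~10 centimeters

print(f"length increase: ~{thread_len / skein_size:,.0f}x")   # ~1,000x

# At a water-incorporation ratio of about 10,000:1, a single milliliter of
# secreted material would yield on the order of ten liters of slime.
material_mL = 1.0
print(f"{material_mL} mL of material -> ~{material_mL * 10_000 / 1000:.0f} L of slime")
```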

The work builds on papers that Ewoldt has published over the past decade in collaboration with experimental biologists, including Doug Fudge, an associate professor at Chapman University and the keeper of some hagfish.

The team of Chaudhary, Ewoldt, and Thiffeault hypothesized that fluid flow, rather than something like a chemical explosion, makes this happen. Using material generated by the hagfish in Fudge’s care, the team built a mathematical and physical model to test the hypothesis and discovered that the time and volume of the slime’s unraveling correlated with the viscosity and flow of the surrounding liquid. In other words, if the slime were put in still water, it wouldn’t be able to unravel, but in the midst of an attack there would be enough fluid flow for the slime to unravel.

Ewoldt noted that because attacks on the hagfish happen under different conditions, his team had to simplify to come up with a model scenario. Another driving factor likely affecting this process has yet to be measured: the stickiness inside the thread, which could provide some resistance.

“Because that physical resistance in the hagfish case has never been measured, we had to really generalize our mathematical model and use some reasonable range of values to do this,” Ewoldt said. “One really powerful way to do this is to write down governing equations and use dimensional analysis to take the complicated high dimensional equations and find a way to simplify them. The robustness of the conclusion is that it wasn’t just a single value of a property, instead a range of things that we could cover by framing the mathematics of this in a powerful way.”

In the end, under various conditions, the team concluded that this unraveling from about 0.1 millimeters to about 10 centimeters can happen in less than a few hundred milliseconds.

While the resulting slime gel is a solid, it is soft enough to take the form of its container, making it almost undetectable by the eye in a bucket of water. It is also strong enough and non-permeable enough to virtually stop the flow of water.

In the future, Ewoldt, a leading expert in the study of rheology, and his team want to further study the slime’s hydraulic permeability and ultimately play a part in developing materials with the same properties.

“I would love to think more about the material design aspect,” Ewoldt said. “We still haven’t figured out how to make a material like this, which swells up in water and clogs things up in a marine environment.”

You can say that when it comes to the hagfish, a creature that has survived at least 300 million years, there are still more mysteries to unravel.


Contacts and sources:
Mike Koon 
University of Illinois College of Engineering
Citation: Unravelling hagfish slime.
Gaurav Chaudhary, Randy H. Ewoldt, Jean-Luc Thiffeault. Journal of The Royal Society Interface, 2019; 16 (150): 20180710 DOI: 10.1098/rsif.2018.0710

Modern Humans Replaced the Neandertals in the South of the Iberian Peninsula 44,000 Years Ago, 5,000 Years Earlier Than Previously Believed


Modern humans replaced the Neanderthals about 44,000 years ago at Cueva Bajondillo (Torremolinos, Málaga), a site in southern Spain, according to a study carried out by researchers from Spain, Japan and the United Kingdom, coordinated by Professor Miguel Cortés of the University of Seville.

Bajondillo Cave and Málaga Bay (Spain) at the end of the 1950s. Foreground images show Neanderthal (La Chapelle-aux-Saints, France, left) and early modern human (Abri Cro-Magnon, France, right) skulls. The left lithic tool represents Mousterian technology and the right Aurignacian, both recovered at Bajondillo Cave.
Credit: University of Seville

The work, which involves scientists from the Spanish National Research Council (CSIC), indicates that this succession in southern Iberia is an early phenomenon in the context of Western Europe, contrary to what was previously believed. The work is published in the journal Nature Ecology & Evolution.

Western Europe is, according to the researchers, a key area for dating the replacement of Neandertals by modern humans, since the former are associated with Mousterian industries (named after the Neanderthal site of Le Moustier, France) and the latter with Aurignacian industries (named after the French site of Aurignac). Until now, the radiocarbon dates available in Western Europe placed the end of the replacement at around 39,000 years ago, although in the south of the Iberian Peninsula the survival of Mousterian industries, and therefore of Neanderthals, was thought to extend to 32,000 years ago, with no evidence in the area of the early Aurignacian documented elsewhere in Europe.

These are selected archaeological sites in Western Europe with Aurignacian industries actually or potentially older than 42,000 years, including Bajondillo Cave (Spain). Orange arrows indicate potential expansion routes across Europe at low sea level. Images on the left show a Neanderthal skull (La Chapelle-aux-Saints, France) and a Mousterian tool recovered at Bajondillo Cave. On the right the images show a Modern Human skull (Abri-Cro-Magnon, France) and an Aurignacian tool recovered at Bajondillo Cave


Credit: University of Seville

The new dates, however, place this replacement between 45,000 and 43,000 years before the present, which raises questions about the supposed late survival of Neanderthals in southern Iberia. The researchers suggest that further research will be necessary to determine whether the new dates reflect an earlier replacement of Neandertals throughout the peninsular south, or more complex scenarios of "mosaic" coexistence between the two groups over millennia.

No relation to cold phenomena

The results of the study show that the establishment of modern humans at Cueva Bajondillo is unconnected to episodes of extreme cold, the so-called Heinrich events, as it predates the nearest known such event (39,500 years ago). "The Heinrich events represent the most intense and variable climatic conditions in Western Europe at the millennial scale, but in this coastal region of the Mediterranean they do not seem to be involved in the transition from Mousterian to Aurignacian," comments Francisco Jiménez-Espejo, CSIC researcher at the Andalusian Institute of Earth Sciences.

The location of Bajondillo Cave points to coastal corridors as a preferred route in the dispersal of the first modern humans. Chris Stringer, researcher at the Natural History Museum in London (United Kingdom) and co-author of the study, says: "Finding an Aurignacian presence this early in a cave so close to the sea reinforces the idea that the Mediterranean coast was a route for modern humans penetrating Europe. It also reinforces evidence suggesting that more than 40,000 years ago Homo sapiens had dispersed rapidly across much of Eurasia."

For his part, Arturo Morales-Muñiz, a scientist at the Autonomous University of Madrid, suggests that the evidence from Cueva Bajondillo will help draw attention to the role the Strait of Gibraltar may have played as a potential route for the dispersal of modern humans out of Africa.


Contacts and sources:
University of Seville


Citation: An early Aurignacian arrival in southwestern Europe.
Miguel Cortés-Sánchez, Francisco J. Jiménez-Espejo, María D. Simón-Vallejo, Chris Stringer, María Carmen Lozano Francisco, Antonio García-Alix, José L. Vera Peláez, Carlos P. Odriozola, José A. Riquelme-Cantal, Rubén Parrilla Giráldez, Adolfo Maestro González, Naohiko Ohkouchi, Arturo Morales-Muñiz. Nature Ecology & Evolution, 2019; DOI: 10.1038/s41559-018-0753-6

Ancient Climate Change Triggered Warming That Lasted Thousands Of Years

A rapid rise in temperature on ancient Earth triggered a climate response that may have prolonged the warming for many thousands of years, according to scientists.

Their study, published online in Nature Geoscience, provides new evidence of a climate feedback that could explain the long duration of the Paleocene-Eocene Thermal Maximum (PETM), which is considered the best analogue for modern climate change.

The findings also suggest that climate change today could have long-lasting impacts on global temperature even if humans are able to curb greenhouse gas emissions.

Fossiliferous core from a drilling site in Maryland.
Image: Rosie Oakes / Penn State

"We found evidence for a feedback that occurs with rapid warming that can release even more carbon dioxide into the atmosphere," said Shelby Lyons, a doctoral student in geosciences at Penn State. "This feedback may have extended the PETM climate event for tens or hundreds of thousands of years. We hypothesize this is also something that could occur in the future."

Increased erosion during the PETM, approximately 56 million years ago, freed large amounts of fossil carbon stored in rocks and released enough carbon dioxide, a greenhouse gas, into the atmosphere to impact temperatures long term, researchers said.

Scientists found evidence for the massive carbon release in coastal sediment fossil cores. They analyzed the samples using an innovative molecular technique that enabled them to trace how processes like erosion moved carbon in deep time.

Victoria Fortiz (right), a former Penn State graduate student in geosciences, and a United States Geological Survey employee (left) wash a core sample from a site in Maryland.

Credit: Timothy Bralower / Penn State



"This technique uses molecules in a really innovative, out-of-the-box way to trace fossil carbon," said Katherine Freeman, Evan Pugh University Professor of Geosciences at Penn State. "We haven't really been able to do that before."

Global temperatures increased by about 9 to 14.4 degrees Fahrenheit (5 to 8 degrees Celsius) during the PETM, radically changing conditions on Earth. Severe storms and flooding became more common, and the warm, wet weather led to increased erosion of rocks.

As erosion wore down mountains over thousands of years, carbon was released from rocks and transported by rivers to oceans, where some was reburied in coastal sediments. Along the way, some of the carbon entered the atmosphere as greenhouse gas.

"What we found in records were signatures of carbon transport that indicated there were massive erosion regimes occurring on land," Lyons said. "Carbon was locked on land and during the PETM it was moved and reburied. We were interested in seeing how much carbon dioxide that could release."

Lyons was studying PETM core samples from Maryland, in a location that was once underwater, when she discovered traces of older carbon that must have once been stored in rocks on land. She initially believed the samples were contaminated, but she found similar evidence in sediments from other Mid-Atlantic sites and Tanzania.

Carbon in these samples did not share the isotope patterns common to life from the PETM and appeared oily, as if it had been heated over long periods of time in a different location.


A core sample of the Paleocene-Eocene Thermal Maximum taken from a drilling site in Maryland.

Credit: USGS

"That told us what we were looking at in the records was not just material that was formed during the PETM," Lyons said. "It was not just carbon that had been formed and deposited at that time, but likely represented something older being transported in."

The researchers developed a mixing model to distinguish the sources of carbon. Based on the amount of older carbon in the samples, scientists were able to estimate how much carbon dioxide was released during the journey from rock to ocean sediment.
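The article does not give the model's form, but a two-endmember isotope mixing calculation of this general kind fits in a few lines; the δ13C endmember values below are illustrative assumptions, not the study's measurements.

```python
# Sketch of a two-endmember carbon mixing model: estimate the fraction of
# old, rock-derived ("petrogenic") carbon in a sediment sample from its
# carbon isotope signature. Endmember values are illustrative assumptions.

d13C_modern = -28.0   # freshly produced PETM-age organic matter (illustrative)
d13C_old    = -22.0   # aged carbon weathered out of rocks (illustrative)

def fraction_old(d13C_sample):
    """Linear mixing: sample = f*old + (1 - f)*modern, solved for f."""
    return (d13C_sample - d13C_modern) / (d13C_old - d13C_modern)

print(f"fraction of old carbon: {fraction_old(-26.2):.0%}")   # ~30%
```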

They estimated the climate feedback could have released enough carbon dioxide to explain the roughly 200,000-year duration of the PETM, something that has not been well understood.

The researchers said the findings offer a warning about modern climate change. If warming reaches certain tipping points, feedbacks can be triggered that have the potential to cause even more temperature change.

"One lesson we can learn from this research is that carbon is not stored very well on land when the climate gets wet and hot," Freeman said. "Today, we're pushing the system out of equilibrium and it's not going to snap back, even when we start reducing carbon dioxide emissions."

Additional authors from Penn State are Timothy Bralower, Elizabeth Hajek and Lee Kump, professors of geosciences; and Ellen Polites, an undergraduate majoring in geosciences. Kump is also dean of the College of Earth and Mineral Sciences.

Researchers from the University of California Santa Cruz, the U.S. Geological Survey, the University of Delaware and the University of Louisiana at Lafayette also collaborated on this project.

The National Science Foundation provided funding for this research.


Contacts and sources:
Patricia L. Craig, Matthew Carroll,  A'ndrea Elyse Messer
Penn State


Citation: Palaeocene–Eocene Thermal Maximum prolonged by fossil carbon oxidation.
Shelby L. Lyons, Allison A. Baczynski, Tali L. Babila, Timothy J. Bralower, Elizabeth A. Hajek, Lee R. Kump, Ellen G. Polites, Jean M. Self-Trail, Sheila M. Trampush, Jamie R. Vornlocher, James C. Zachos, Katherine H. Freeman. Nature Geoscience, 2018; 12 (1): 54 DOI: 10.1038/s41561-018-0277-3

How Hot Are Atoms In The Shock Wave Of An Exploding Star?

An international team of researchers combined observations of the nearby supernova SN1987A, made with NASA's Chandra X-ray Observatory, with simulations to measure the temperature of atoms in the shock wave that occurs from the explosive death of a star. This image superimposes synthetic X-ray emission data onto a density map from the simulation of SN1987A.
Credit: Marco Miceli, Dipartimento di Fisica e Chimica, Università di Palermo, and INAF-Osservatorio Astronomico di Palermo, Palermo, Italy

A new method to measure the temperature of atoms during the explosive death of a star will help scientists understand the shock wave that occurs as a result of this supernova explosion. An international team of researchers, including a Penn State scientist, combined observations of a nearby supernova remnant—the structure remaining after a star’s explosion—with simulations in order to measure the temperature of slow-moving gas atoms surrounding the star as they are heated by the material propelled outward by the blast.

The research team analyzed long-term observations of the nearby supernova remnant SN1987A using NASA’s Chandra X-ray Observatory and created a model describing the supernova. The team confirmed that the temperature of even the heaviest atoms—which had not yet been investigated—is related to their atomic weight, answering a long-standing question about shock waves and providing important information about their physical processes. A paper describing the results appears January 21, 2019, in the journal Nature Astronomy.

“Supernova explosions and their remnants provide cosmic laboratories that enable us to explore physics in extreme conditions that cannot be duplicated on Earth,” said David Burrows, professor of astronomy and astrophysics at Penn State and an author of the paper. “Modern astronomical telescopes and instrumentation, both ground-based and space-based, have allowed us to perform detailed studies of supernova remnants in our galaxy and nearby galaxies. We have performed regular observations of supernova remnant SN1987A using NASA’s Chandra X-ray Observatory, the best X-ray telescope in the world, since shortly after Chandra was launched in 1999, and used simulations to answer longstanding questions about shock waves.”

The explosive death of a massive star like SN1987A propels material outwards at speeds of up to one tenth the speed of light, pushing shock waves into the surrounding interstellar gas. Researchers are particularly interested in the shock front, the abrupt transition between the supersonic explosion and the relatively slow-moving gas surrounding the star. The shock front heats this cool, slow-moving gas to millions of degrees, temperatures high enough for the gas to emit X-rays detectable from Earth.

The supernova shock front is similar to the transition in a “hydraulic jump,” where a high-speed stream of water hitting a surface flows smoothly outwards and then abruptly jumps in height and becomes turbulent.
Credit: James Kilfiger, Wikimedia Commons

“The transition is similar to one observed in a kitchen sink when a high-speed stream of water hits the sink basin, flowing smoothly outward until it abruptly jumps in height and becomes turbulent,” said Burrows. “Shock fronts have been studied extensively in the Earth’s atmosphere, where they occur over an extremely narrow region. But in space, shock transitions are gradual and may not affect atoms of all elements the same way.”

The research team, led by Marco Miceli and Salvatore Orlando of the University of Palermo, Italy, measured the temperatures of different elements behind the shock front, which will improve understanding of the physics of the shock process. These temperatures are expected to be proportional to the elements’ atomic weight, but the temperatures are difficult to measure accurately. Previous studies have led to conflicting results regarding this relationship, and have failed to include heavy elements with high atomic weights. The research team turned to supernova SN1987A to help address this dilemma.

Supernova SN1987A, which is located in a nearby galaxy called the Large Magellanic Cloud, was the first supernova visible to the naked eye since Kepler’s Supernova in 1604. It is also the first to be studied in detail with modern astronomical instruments. The light from its explosion first reached Earth on February 23, 1987, and since then it has been observed at all wavelengths of light, from radio waves to X-rays and gamma rays. The research team used these observations to build a model describing the supernova.

Models of SN1987A have typically focused on single observations, but in this study, the researchers used three-dimensional numerical simulations to incorporate the evolution of the supernova, from its onset to the current age. A comparison of the X-ray observations and the model allowed the researchers to accurately measure atomic temperatures of different elements with a wide range of atomic weights, and to confirm the relationship that predicts the temperature reached by each type of atom in the interstellar gas.

“We can now accurately measure the temperatures of elements as heavy as silicon and iron, and have shown that they indeed do follow the relationship that the temperature of each element is proportional to the atomic weight of that element,” said Burrows. “This result settles an important issue in the understanding of astrophysical shock waves and improves our understanding of the shock process.”
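The relationship Burrows describes follows from the strong-shock limit of the textbook Rankine-Hugoniot jump conditions: if each ion species thermalizes separately, its post-shock temperature is k_B·T_i ≈ (3/16)·m_i·v_s², proportional to its mass m_i. The sketch below evaluates that relation at an assumed shock speed; the numbers are illustrative, not the paper's fitted results.

```python
# Strong-shock heating sketch: k_B * T_i = (3/16) * m_i * v_s**2, so each
# ion species is heated in proportion to its atomic weight. The shock speed
# is an assumed, illustrative value for a young supernova remnant.

k_B = 1.380649e-23     # Boltzmann constant, J/K
amu = 1.66053907e-27   # atomic mass unit, kg
v_s = 3.0e6            # assumed shock speed: 3000 km/s

for element, A in [("H", 1), ("O", 16), ("Si", 28), ("Fe", 56)]:
    T_i = 3 * (A * amu) * v_s**2 / (16 * k_B)
    print(f"{element:>2}: T ~ {T_i:.1e} K")
# Temperatures scale linearly with atomic weight, from ~2e8 K for hydrogen
# to ~1e10 K for iron at this assumed shock speed.
```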


Contacts and sources:
David Burrows and Gail McCormick
Penn State

Citation: Collisionless shock heating of heavy ions in SN 1987A.
Marco Miceli, Salvatore Orlando, David N. Burrows, Kari A. Frank, Costanza Argiroffi, Fabio Reale, Giovanni Peres, Oleh Petruk, Fabrizio Bocchino. Nature Astronomy, 2019; DOI: 10.1038/s41550-018-0677-8

A Fleeting Moment in Time: The Last Breath of a Dying Star

The European Southern Observatory’s Cosmic Gems Programme captures last breath of a dying star.

The faint, ephemeral glow emanating from the planetary nebula ESO 577-24 persists for only a short time — around 10,000 years, a blink of an eye in astronomical terms. ESO’s Very Large Telescope captured this shell of glowing ionised gas — the last breath of the dying star whose simmering remains are visible at the heart of this image. As the gaseous shell of this planetary nebula expands and grows dimmer, it will slowly disappear from sight.
A Fleeting Moment in Time
Credit: ESO

An evanescent shell of glowing gas spreading into space — the planetary nebula ESO 577-24 — dominates this image [1]. This planetary nebula is the remains of a dead giant star that has thrown off its outer layers, leaving behind a small, intensely hot dwarf star. This diminished remnant will gradually cool and fade, living out its days as the mere ghost of a once-vast red giant star.

Red giants are stars at the end of their lives that have exhausted the hydrogen fuel in their cores and begun to contract under the crushing grip of gravity. As a red giant shrinks, the immense pressure reignites the core of the star, causing it to throw its outer layers into the void as a powerful stellar wind. The dying star’s incandescent core emits ultraviolet radiation intense enough to ionise these ejected layers and cause them to shine. The result is what we see as a planetary nebula — a final, fleeting testament to an ancient star at the end of its life [2].

Credit: ESO

This dazzling planetary nebula was discovered as part of the National Geographic Society–Palomar Observatory Sky Survey in the 1950s, and was recorded in the Abell Catalogue of Planetary Nebulae in 1966 [3]. At around 1400 light-years from Earth, the ghostly glow of ESO 577-24 is only visible through a powerful telescope. As the dwarf star cools, the nebula will continue to expand into space, slowly fading from view.

This pan video explores the planetary nebula ESO 577-24. ESO’s Very Large Telescope captured this shell of glowing ionised gas — the last breath of the dying star whose simmering remains are visible at the heart of this image. As the gaseous shell of this planetary nebula expands and grows dimmer, it will slowly disappear from sight.

Credit: ESO. Music: Thomas Edward Rice — Phantasm Retro.

This image of ESO 577-24 was created as part of the ESO Cosmic Gems Programme, an initiative that produces images of interesting, intriguing, or visually attractive objects using ESO telescopes for the purposes of education and public outreach. The programme makes use of telescope time that cannot be used for scientific observations; nevertheless, the data collected are made available to astronomers through the ESO Science Archive.
Notes

[1] Planetary nebulae were first observed by astronomers in the 18th century — to them, their dim glow and crisp outlines resembled planets of the Solar System.

[2] By the time our Sun evolves into a red giant, it will have reached the venerable age of 10 billion years. There is no immediate need to panic, however — the Sun is currently only 5 billion years old.

[3] Astronomical objects often have a variety of official names, with different catalogues providing different designations. The formal name of this object in the Abell Catalogue of Planetary Nebulae is PN A66 36.



Contacts and sources:
Calum Turner
ESO

New Camera Enables You to See the World the Way Birds Do

Using a specially designed camera, researchers at Lund University in Sweden have succeeded for the first time in recreating how birds see colours in their surroundings. The study reveals that birds see a very different reality compared to what we see.

Images of leaves taken with a regular camera (left) and the specially designed camera (right).
Photo: Cynthia Tedore

Human colour vision is based on three primary colours: red, green and blue. The colour vision of birds is based on the same three colours - but also ultraviolet. Biologists at Lund have now shown that this fourth primary colour, ultraviolet, means that birds see the world in a completely different way. Among other things, birds see contrasts in dense forest foliage, whereas people see only a wall of green.

“What appears to be a green mess to humans are clearly distinguishable leaves for birds. No one knew about this until this study”, says Dan-Eric Nilsson, professor at the Department of Biology at Lund University.

For birds, the upper sides of leaves appear much lighter in ultraviolet. From below, the leaves are very dark. In this way the three-dimensional structure of dense foliage is obvious to birds, which in turn makes it easy for them to move, find food and navigate. People, on the other hand, do not perceive ultraviolet, and see the foliage in green - the primary colour in which contrast is poorest.

Dan-Eric Nilsson founded the world-leading Lund Vision Group at Lund University. The study in question is a collaboration with Cynthia Tedore and was conducted during her time as a postdoc in Lund. She is now working at the University of Hamburg.



It is the first time that researchers have succeeded in imitating bird colour vision with a high degree of precision. This was achieved with the help of a unique camera and advanced calculations. The camera was designed within the Lund Vision Group and equipped with rotating filter wheels and specially manufactured filters, which make it possible to show clearly what different animals see. In this case, the camera imitates with a high degree of accuracy the colour sensitivity of the four different types of cones in bird retinas.
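Conceptually, each channel of such a reconstruction is a "quantum catch": the radiance spectrum at a pixel, weighted by one cone type's sensitivity curve and integrated over wavelength. The sketch below shows that weighting for a single pixel; all curves are made-up illustrative shapes, not the measured cone spectra or the Lund camera's calibration.

```python
import numpy as np

# Quantum-catch sketch: weight a pixel's radiance spectrum by the spectral
# sensitivity of each of the four avian cone types (UV, short, medium, long)
# and integrate over wavelength. All curves here are made-up illustrations.

wavelengths = np.arange(300, 701, 10)   # nm, ultraviolet through visible

def bell(peak, width=40):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

cone_sensitivity = {"UV": bell(370), "S": bell(445), "M": bell(508), "L": bell(565)}
radiance = bell(550, width=80)          # toy spectrum of a sunlit-leaf pixel

for cone, S in cone_sensitivity.items():
    Q = np.trapz(S * radiance, wavelengths)   # quantum catch for this cone type
    print(f"{cone}-cone catch: {Q:.1f}")
```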

“We have discovered something that is probably very important for birds, and we continue to reveal how reality appears also to other animals”, says Dan-Eric Nilsson, continuing:

“We may have the notion that what we see is the reality, but it’s a highly human reality. Other animals live in other realities, and we can now see through their eyes and reveal many secrets. Reality is in the eye of the beholder”, he concludes.




Contacts and sources:
Lund University
Citation: Avian UV vision enhances leaf surface contrasts in forest environments.
Cynthia Tedore, Dan-Eric Nilsson. Nature Communications, 2019; 10 (1) DOI: 10.1038/s41467-018-08142-5

Are Feathers Better than Velcro?

Engineers detail bird feather properties that could lead to better adhesives (and aerospace materials).
Feather zipping and unzipping
Credit: University of California San Diego

You may have seen a kid play with a feather, or you may have played with one yourself: Running a hand along a feather’s barbs and watching as the feather unzips and zips, seeming to miraculously pull itself back together.

That “magical” zipping mechanism could provide a model for new adhesives and new aerospace materials, according to engineers at the University of California San Diego. They detail their findings in the Jan. 16 issue of Science Advances in a paper titled “Scaling of bird wings and feathers for efficient flight.”

Researcher Tarah Sullivan, who earned a Ph.D. in materials science from the Jacobs School of Engineering at UC San Diego, is the first in about two decades to take a detailed look at the general structure of bird feathers (without focusing on a specific species). She 3D-printed structures that mimic the feathers’ vanes, barbs and barbules to better understand their properties—for example, how the underside of a feather can capture air for lift, while the top of the feather can block air out when gravity needs to take over.

Sullivan found that barbules— the smaller, hook-like structures that connect feather barbs— are spaced within 8 to 16 micrometers of one another in all birds, from the hummingbird to the condor. This suggests that the spacing is an important property for flight.

“The first time I saw feather barbules under the microscope I was in awe of their design: intricate, beautiful and functional,” she said. “As we studied feathers across many species it was amazing to find that despite the enormous differences in size of birds, barbule spacing was constant.”

Sullivan believes studying the vane-barb-barbule structure further could lead to the development of new materials for aerospace applications, and to new adhesives—think Velcro and its barbs. She built prototypes to prove her point, which she will discuss in a follow-up paper. “We believe that these structures could serve as inspiration for an interlocking one-directional adhesive or a material with directionally tailored permeability,” she said.

Sullivan, who is part of the research group of Marc Meyers, a professor in the Departments of Nanoengineering and Mechanical and Aerospace Engineering at UC San Diego, also studied the bones found in bird wings. Like many of her predecessors, she found that the humerus - the long bone in the wing - is bigger than expected. But she went a step further: using mechanics equations, she was able to show why. Because bird bone strength is limited, it can’t scale up proportionally with the bird’s weight. Instead, it needs to grow faster and be bigger to be strong enough to withstand the forces it is subjected to in flight.

This is known as allometry—the growth of certain parts of the body at different rates than the body as a whole. The human brain is allometric: in children, it grows much faster than the rest of the body. By contrast, the human heart grows proportionally to the rest of the body—researchers call this isometry.
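The bone-scaling argument can be sketched in the same scaling language: flight loads grow in proportion to body mass M, but an isometrically scaled bone's cross-sectional area grows only as M^(2/3), so the stress on the bone would rise as M^(1/3). A toy calculation, with illustrative masses:

```python
# Scaling sketch: why wing bones must grow allometrically. Under isometry,
# cross-sectional area A ~ M**(2/3) while flight loads ~ M, so bone stress
# ~ M / M**(2/3) = M**(1/3). Masses below are illustrative only.

for M in [0.01, 0.1, 1.0, 10.0]:            # kg, hummingbird- to condor-scale
    relative_stress = M ** (1 / 3)          # normalized units
    print(f"M = {M:5.2f} kg -> relative bone stress ~ {relative_stress:.2f}")
# Stress grows ~10x over a 1000x range in mass; a disproportionately large
# humerus keeps the actual stress within what bone can bear.
```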

“Professor Eduard Arzt, our co-author from Saarland University in Germany, is an amateur pilot and became fascinated by the ‘bird wing’ problem. Together, we started doing allometric analyses on them and the result is fascinating,” said Meyers. “This shows that the synergy of scientists from different backgrounds can produce wonderful new understanding.”


Contacts and sources:
Ioana Patringenaru
University of California - San Diego


Citation: Scaling of bird wings and feathers for efficient flight.
T. N. Sullivan, M. A. Meyers, and E. Arzt. Science Advances, 2019 DOI: 10.1126/sciadv.aat4269

We Need to Rethink Everything We Know about Global Warming, Says Researcher



The world’s scientific community has known for a long time that global warming is caused by manmade emissions in the form of greenhouse gases, while global cooling is caused by air pollution in the form of aerosols. 

An Israeli researcher claims his calculations show scientists have grossly underestimated the effects of air pollution.
In a new study published in the journal Science, Hebrew University of Jerusalem Prof. Daniel Rosenfeld argues that the degree to which aerosol particles cool the Earth has been grossly underestimated.


Image by Parabol Studio via Shutterstock.com, with elements furnished by NASA
Aerosols are tiny particles of many different materials that get into the air, like dust and vehicle exhaust. They cool our environment by enhancing the cloud cover that reflects the sun’s heat back to space.

Prof. Daniel Rosenfeld says we need to recalculate our global-warming predictions.

 Photo: Hebrew University of Jerusalem

Rosenfeld says his findings necessitate a recalculation of climate-change models to more accurately predict the pace of global warming.

He and his colleague Yannian Zhu from the Meteorological Institute of Shaanxi Province in China developed a new method that uses satellite images to calculate the effect of vertical winds as well as aerosol cloud droplet numbers. Until now, it was impossible to separate the effects of rising winds, which create the clouds, from the effects of aerosols, which determine clouds’ composition.

Using this new methodology, Rosenfeld and his colleagues were able to more accurately calculate aerosols’ cooling effect on the Earth. They discovered this effect is nearly twice as large as previously thought.

Good news or bad news?

But this finding does not necessarily mean we can stop worrying about global warming. Rosenfeld has several theories about why temperatures are rising despite the aerosol effect.

“If the aerosols indeed cause a greater cooling effect than previously estimated, then the warming effect of the greenhouse gases has also been larger than we thought, enabling greenhouse-gas emissions to overcome the cooling effect of aerosols,” Rosenfeld said.

Another hypothesis to explain why Earth is getting warmer, even though aerosols have been cooling it at an even greater rate than assumed, is a possible warming effect of aerosols when they lodge in “deep clouds” located 10 kilometers or more above the Earth.

Israel’s Space Agency and France’s National Centre for Space Studies have teamed up to develop new satellites that will be able to investigate this deep-cloud phenomenon, with Rosenfeld as its principal investigator.

Either way, the conclusion is the same: Current global climate predictions do not correctly take into account the significant effects of aerosols on clouds on Earth’s overall energy balance, Rosenfeld says.

Currently, scientists predict a 1.5-degree to 4.5-degree Celsius temperature increase by the end of the 21st century. Rosenfeld’s findings may help provide a more accurate diagnosis and prognosis of the Earth’s climate.

Funding for the study was provided by the Joint Israel Science Foundation and National Natural Science Foundation of China.



Contacts and sources:
The Hebrew University of Jerusalem
Citation: Aerosol-driven droplet concentrations dominate coverage and water of oceanic low-level clouds.
Daniel Rosenfeld, Yannian Zhu, Minghuai Wang, Youtong Zheng, Tom Goren, Shaocai Yu. Science, 2019; eaav0566 DOI: 10.1126/science.aav0566

Ancient Carpet Shark Discovered with ‘Spaceship-Shaped’ Teeth

The world of the dinosaurs just got a bit more bizarre with a newly discovered species of freshwater shark whose tiny teeth resemble the alien ships from the popular 1980s video game Galaga.

Unlike its gargantuan cousin the megalodon, Galagadon nordquistae was a small shark (approximately 12 to 18 inches long), related to modern-day carpet sharks such as the “whiskered” wobbegong. Galagadon once swam in the Cretaceous rivers of what is now South Dakota, and its remains were uncovered beside “Sue,” the world’s most famous T. rex fossil.

Galagadon
Credit: (c) Velizar Simeonovski, Field Museum

“The more we discover about the Cretaceous period just before the non-bird dinosaurs went extinct, the more fantastic that world becomes,” says Terry Gates, lecturer at North Carolina State University and research affiliate with the North Carolina Museum of Natural Sciences. Gates is lead author of a paper describing the new species along with colleagues Eric Gorscak and Peter J. Makovicky of the Field Museum of Natural History.

“It may seem odd today, but about 67 million years ago, what is now South Dakota was covered in forests, swamps and winding rivers,” Gates says. “Galagadon was not swooping in to prey on T. rex, Triceratops, or any other dinosaurs that happened into its streams. This shark had teeth that were good for catching small fish or crushing snails and crawdads.”

Galagadon teeth. 
Credit: Terry Gates, NC State University

The tiny teeth – each one measuring less than a millimeter across – were discovered in the sediment left behind when paleontologists at the Field Museum uncovered the bones of “Sue,” currently the most complete T. rex specimen ever described. Gates sifted through the almost two tons of dirt with the help of volunteer Karen Nordquist, whom the species name, nordquistae, honors. Together, the pair recovered over two dozen teeth belonging to the new shark species.

“It amazes me that we can find microscopic shark teeth sitting right beside the bones of the largest predators of all time,” Gates says. “These teeth are the size of a sand grain. Without a microscope you’d just throw them away.”

Despite its diminutive size, Gates sees the discovery of Galagadon as an important addition to the fossil record. “Every species in an ecosystem plays a supporting role, keeping the whole network together,” he says. “There is no way for us to understand what changed in the ecosystem during the mass extinction at the end of the Cretaceous without knowing all the wonderful species that existed before.”

Gates credits the idea for Galagadon’s name to middle school teacher Nate Bourne, who worked alongside Gates in paleontologist Lindsay Zanno’s lab at the N.C. Museum of Natural Sciences.

The work appears in the Journal of Paleontology and was supported in part by the National Science Foundation.



Contacts and sources:
Tracey Peake
North Carolina State University

Citation: New sharks and other chondrichthyans from the latest Maastrichtian (Late Cretaceous) of North America.
Terry A. Gates, Eric Gorscak, Peter J. Makovicky. Journal of Paleontology, 2019; 1 DOI: 10.1017/jpa.2018.92

More Animal Species Under Threat of Extinction, New Assessment Shows

Currently, approximately 600 species might be inaccurately assessed as non-threatened on the Red List of Threatened Species. More than a hundred others that could not be assessed before also appear to be threatened. This is shown by a new, more efficient, systematic and comprehensive approach to assessing the extinction risk of animals. The method, designed by Radboud University ecologist Luca Santini and colleagues, is described in Conservation Biology on January 17th.

Verreaux's sifaka from Madagascar, a threatened species on the Red List.
Photo credits: Luca Santini

Using their new method, the researchers’ predictions of extinction risk are quite consistent with the current published Red List assessments, and even a bit more optimistic overall. However, they found that 20% of the roughly 600 species that Red List experts previously could not assess are likely under threat of extinction, such as the brown-banded rail and Williamson’s mouse-deer. Additionally, some 600 species previously assessed as non-threatened are actually likely to be threatened, such as the red-breasted pygmy parrot and the Ethiopian striped mouse. “This indicates that urgent re-assessment is needed of the current statuses of animal species on the Red List”, Santini says.

Limited amount of data leads to misclassification

Once every few years, specialized researchers voluntarily assess the conservation status of animal species in the world, which is then recorded in the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Species are classified into five extinction risk categories ranging from Least Concern to Critically Endangered, based on data such as species distribution, population size and recent trends.

“While this process is extremely important for conservation, experts often have a limited amount of data to apply the criteria to the more than 90,000 species currently covered by the Red List”, Santini says. “Often these data are of poor quality: they are outdated, or inaccurate because species that live in very remote areas have not been properly studied. This can lead to species being misclassified or not assessed at all.”

New method: information and statistics lead to more efficiency

It’s time for a more efficient, systematic and comprehensive approach, according to Santini and his colleagues. They designed a new method that provides Red List experts with additional independent information, which should help them to better assess species.

The method uses information from land cover maps that show how the distribution of species in the world has changed over time. The researchers couple this information with statistical models to estimate a number of additional parameters, such as species’ ability to move through fragmented landscapes, and to classify species into a Red List extinction risk category.
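The article does not specify the model, but the general pattern, deriving features from land-cover change and feeding them to a statistical classifier over Red List categories, can be sketched as below. Every feature name and number here is hypothetical, invented for illustration.

```python
# Hypothetical sketch of the general approach: classify species into Red List
# categories from land-cover-derived features. The features, values and model
# choice are invented for illustration; the published method differs in detail.
from sklearn.ensemble import RandomForestClassifier

# features: [range size (km^2), habitat lost per decade (%), fragmentation index]
X_train = [
    [500_000,  2, 0.1],   # widespread species with stable habitat
    [  1_200, 35, 0.8],   # tiny, rapidly shrinking, fragmented range
    [ 80_000, 12, 0.4],
]
y_train = ["Least Concern", "Critically Endangered", "Vulnerable"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[3_000, 28, 0.7]]))   # flags a likely threatened species
```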

Algorithms for a more dynamic Red List

The new approach is meant to complement the traditional methods of Red List assessments. “As the Red List grows, keeping it updated becomes a daunting task. Algorithms that use near-real-time remote sensing products to scan across vast species lists, and flag those that may be nearing extinction, can dramatically improve the timeliness and effectiveness of the Red List”, says Carlo Rondinini, Director of the Global Mammal Assessment Programme for the Red List.

Santini: “Our vision is that our new method will soon be automated so that data is re-updated every year with new land cover information. Thus, our method really can speed up the process and provide an early warning system by pointing specifically to species that should be re-assessed quickly.”

The research was conducted in collaboration with Carlo Rondinini, Director of the Global Mammal Assessment Programme for the Red List, and Stuart Butchart, head scientist of BirdLife International, the Red List bird authority.

Contacts and sources:
Luca Santini
Radboud University Nijmegen


Citation: Combining remote sensing data and habitat suitability models to monitor species’ extinction risk through the IUCN Red List. Conservation Biology. DOI: 10.1111/cobi.13279

Greenland Ice Melting Rapidly, Could Be Major Contributor To Sea Level Rise



Greenland is melting faster than scientists previously thought—and will likely lead to faster sea level rise—thanks to the continued, accelerating warming of the Earth’s atmosphere, a new study has found.

Scientists concerned about sea level rise have long focused on Greenland’s southeast and northwest regions, where large glaciers stream iceberg-sized chunks of ice into the Atlantic Ocean. Those chunks float away, eventually melting. But a new study published Jan. 21 in the Proceedings of the National Academy of Sciences found that the largest sustained ice loss from early 2003 to mid-2013 came from Greenland’s southwest region, which is mostly devoid of large glaciers.
Credit: Ohio State University

“Whatever this was, it couldn’t be explained by glaciers, because there aren’t many there,” said Michael Bevis, lead author of the paper, Ohio Eminent Scholar and a professor of geodynamics at The Ohio State University. “It had to be the surface mass—the ice was melting inland from the coastline.”

That melting, which Bevis and his co-authors believe is largely caused by global warming, means that in the southwestern part of Greenland, growing rivers of water are streaming into the ocean during summer. The key finding from their study: Southwest Greenland, which previously had not been considered a serious threat, will likely become a major future contributor to sea level rise.

“We knew we had one big problem with increasing rates of ice discharge by some large outlet glaciers,” he said. “But now we recognize a second serious problem: Increasingly, large amounts of ice mass are going to leave as meltwater, as rivers that flow into the sea.”

Michael Bevis

Credit: Ohio State University

The findings could have serious implications for coastal U.S. cities, including New York and Miami, as well as island nations that are particularly vulnerable to rising sea levels.

And there is no turning back, Bevis said.

“The only thing we can do is adapt and mitigate further global warming—it’s too late for there to be no effect,” he said. “This is going to cause additional sea level rise. We are watching the ice sheet hit a tipping point.”

Climate scientists and glaciologists have been monitoring the Greenland ice sheet as a whole since 2002, when NASA and Germany joined forces to launch GRACE. GRACE stands for Gravity Recovery and Climate Experiment, and involves twin satellites that measure ice loss across Greenland. Data from these satellites showed that between 2002 and 2016, Greenland lost approximately 280 gigatons of ice per year, equivalent to 0.03 inches of sea level rise each year. But the rate of ice loss across the island was far from steady.
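The conversion from ice mass to sea level is worth seeing once: a gigaton of meltwater occupies one cubic kilometre, and spreading it over the ocean's roughly 3.61 × 10⁸ km² surface gives the rise. The sketch below reproduces the 0.03-inch figure; the ocean area is a standard reference value, not a number from the study.

```python
# Check: 280 Gt/yr of ice loss -> global-mean sea level rise.
# 1 Gt of water = 1 km^3; ocean surface area ~3.61e8 km^2 (standard value).

ice_loss_Gt = 280
ocean_km2   = 3.61e8

rise_mm = ice_loss_Gt / ocean_km2 * 1e6    # km -> mm
print(f"~{rise_mm:.2f} mm/yr = {rise_mm / 25.4:.3f} in/yr")   # ~0.78 mm ~ 0.03 in
```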

Bevis’ team used data from GRACE and from GPS stations scattered around Greenland’s coast to identify changes in ice mass. The patterns they found show an alarming trend—by 2012, ice was being lost at nearly four times the rate that prevailed in 2003. The biggest surprise: This acceleration was focused in southwest Greenland, a part of the island that previously hadn’t been known to be losing ice that rapidly.

Bevis said a natural weather phenomenon—the North Atlantic Oscillation, which brings warmer air to West Greenland, as well as clearer skies and more solar radiation—was building on man-made climate change to cause unprecedented levels of melting and runoff. Global atmospheric warming enhances summertime melting, especially in the southwest. The North Atlantic Oscillation is a natural—if erratic—cycle that causes ice to melt under normal circumstances. When combined with man-made global warming, though, the effects are supercharged.

“These oscillations have been happening forever,” Bevis said. “So why only now are they causing this massive melt? It’s because the atmosphere is, at its baseline, warmer. The transient warming driven by the North Atlantic Oscillation was riding on top of more sustained, global warming.”

Bevis likened the melting of Greenland’s ice to coral bleaching: Once the ocean’s water hits a certain temperature, coral in that region begins to bleach. There have been three global coral bleaching events. The first was caused by the 1997-98 El Niño, and the other two events by the two subsequent El Niños. But El Niño cycles have been happening for thousands of years—so why have they caused global coral bleaching only since 1997?

“What’s happening is sea surface temperature in the tropics is going up; shallow water gets warmer and the air gets warmer,” Bevis said. “The water temperature fluctuations driven by an El Niño are riding this global ocean warming. Because of climate change, the base temperature is already close to the critical temperature at which coral bleaches, so an El Niño pushes the temperature over the critical threshold value. And in the case of Greenland, global warming has brought summertime temperatures in a significant portion of Greenland close to the melting point, and the North Atlantic Oscillation has provided the extra push that caused large areas of ice to melt.”

Before this study, scientists understood Greenland to be one of the Earth’s major contributors to sea-level rise—mostly because of its glaciers. But these new findings, Bevis said, show that scientists need to be watching the island’s snowpack and ice fields more closely, especially in and near southwest Greenland.

GPS systems now monitor the margin of Greenland’s ice sheet around most of its perimeter, but the network is very sparse in the southwest, so, given these new findings, it will be necessary to densify the network there.

“We’re going to see faster and faster sea level rise for the foreseeable future,” Bevis said. “Once you hit that tipping point, the only question is: How severe does it get?”

Co-authors on the study include researchers from Ohio State, the University of Arizona, DTU Space in Denmark, Princeton University, the University of Colorado, the University of Liège in Belgium, Utrecht University in the Netherlands, the University of Luxembourg and UNAVCO, Inc.


Contacts and sources:
Laura Arenschield
Ohio State University

Hagfish Fossil Shakes Vertebrate Family Tree

Tethymyxine tapirostrum is a 100-million-year-old, 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon, believed to be the first detailed fossil of a hagfish.
Credit: Tetsuto Miyashita, University of Chicago.

Paleontologists at the University of Chicago have discovered the first detailed fossil of a hagfish, the slimy, eel-like carrion feeders of the ocean. The 100-million-year-old fossil helps answer questions about when these ancient, jawless fish branched off the evolutionary tree from the lineage that gave rise to modern-day jawed vertebrates, including bony fish and humans.

The fossil, named Tethymyxine tapirostrum, is a 12-inch-long fish embedded in a slab of Cretaceous period limestone from Lebanon. It fills a 100-million-year gap in the fossil record and shows that hagfish are more closely related to the blood-sucking lamprey than to other fishes. This means that both hagfish and lampreys evolved their eel-like body shape and strange feeding systems after they branched off from the rest of the vertebrate line of ancestry about 500 million years ago.

“This is a major reorganization of the family tree of all fish and their descendants. This allows us to put an evolutionary date on unique traits that set hagfish apart from all other animals,” said Tetsuto Miyashita, PhD, a Chicago Fellow in the Department of Organismal Biology and Anatomy at UChicago who led the research. The findings are published this week in the Proceedings of the National Academy of Sciences.

The slimy dead giveaway

Modern-day hagfish are known for their bizarre, nightmarish appearance and unique defense mechanism. They don’t have eyes, or jaws or teeth to bite with, but instead use a spiky tongue-like apparatus to rasp flesh off dead fish and whales at the bottom of the ocean. When harassed, they can instantly turn the water around them into a cloud of slime, clogging the gills of would-be predators.

This ability to produce slime is what gave away the Tethymyxine fossil. Miyashita used an imaging technology called synchrotron scanning at Stanford University to identify chemical traces of soft tissue that were left behind in the limestone when the hagfish fossilized. These soft tissues are rarely preserved, which is why there are so few examples of ancient hagfish relatives to study.

The scanning picked up a signal for keratin, the same material that makes up fingernails in humans. Keratin, as it turns out, is a crucial part of what makes the hagfish slime defense so effective. Hagfish have a series of glands along their bodies that produce tiny packets of tightly-coiled keratin fibers, lubricated by mucus-y goo. When these packets hit seawater, the fibers explode and trap the water within, turning everything into shark-choking slop. The fibers are so strong that when dried out they resemble silk threads; they’re even being studied as possible biosynthetic fibers to make clothes and other materials.

A normal-sized hagfish can turn about 20 liters of water around it into slime when threatened by predators.

Credit: Tetsuto Miyashita, University of Chicago

Miyashita and his colleagues found more than a hundred concentrations of keratin along the body of the fossil, suggesting that ancient hagfish had already evolved their slime defense when the seas held fearsome predators, such as plesiosaurs and ichthyosaurs, that we no longer see today.

“We now have a fossil that can push back the origin of the hagfish-like body plan by hundreds of millions of years,” Miyashita said. “Now, the next question is how this changes our view of the relationships between all these early fish lineages.”

Shaking up the vertebrate family tree

Features of the new fossil help place hagfish and their relatives on the vertebrate family tree. In the past, scientists have disagreed about where they belonged, depending on how they tackled the question. Those who rely on fossil evidence alone tend to conclude that hagfish are so primitive that they are not even vertebrates. This implies that all fishes and their vertebrate descendants had a common ancestor that — more or less — looked like a hagfish.

But those who work with genetic data argue that hagfish and lampreys are more closely related to each other. This suggests that modern hagfish and lampreys are the odd ones out in the family tree of vertebrates. In that case, the primitive appearance of hagfish and lampreys is deceptive, and the common ancestor of all vertebrates was probably something more conventionally fish-like.

Miyashita’s work reconciles these two approaches, using physical evidence of the animal’s anatomy from the fossil to come to the same conclusion as the geneticists: that the hagfish and lampreys should be grouped separately from the rest of fishes.
Credit: University of Chicago Medical Center


The Tethymyxine tapirostrum hagfish fossil suggests a new hypothesis for the structure of the vertebrate family tree, with hagfish and other eel-like creatures branching off early from the lineage that gave rise to modern-day jawed vertebrates, including bony fish and humans.

“In a sense, this resets the agenda of how we understand these animals,” said Michael Coates, PhD, professor of organismal biology and anatomy at UChicago and a co-author of the new study. “Now we have this important corroboration that they are a group apart. Although they're still part of vertebrate biodiversity, we now have to look at hagfish and lampreys more carefully, and recognize their apparent primitiveness as a specialized condition.”

Paleontologists have increasingly used sophisticated imaging techniques in the past few years, but Miyashita’s research is one of a handful so far to use synchrotron scanning to identify chemical elements in a fossil. While the technique was crucial for detecting anatomical structures in the hagfish fossil, he believes it could also help scientists detect paint or glue used to embellish a fossil, or even outright forge a specimen. Any attempt to spice up a fossil specimen leaves chemical fingerprints that light up like holiday decorations in a synchrotron scan.

“I’m impressed with what Tetsuto has marshaled here,” Coates said. “He's maxed out all the different techniques and approaches that can be applied to this fossil to extract information from it, to understand it and to check it thoroughly.”

The study, “A Hagfish from the Cretaceous Tethys Sea and a Reconciliation of the Morphological-Molecular Conflict in Early Vertebrate Phylogeny,” was supported by the National Science Foundation and the Natural Sciences and Engineering Research Council of Canada. Additional authors include Robert Farrar and Peter Larson from the Black Hills Institute of Geological Research; Phillip Manning and Roy Wogelius from the University of Manchester; Nicholas Edwards and Uwe Bergmann from the SLAC National Accelerator Laboratory; Jennifer Anné from the Children’s Museum of Indianapolis; and Richard Palmer and Philip Currie from the University of Alberta.



Contacts and sources:
Matt Wood
University of Chicago Medical Center




Monday, January 21, 2019

Decreased Deep Sleep Linked to Early Signs Of Alzheimer’s Disease






Reduced amounts of slow brain waves – the kind that occur in deep, refreshing sleep – are associated with high levels of the toxic brain protein tau. This computer-generated image maps the areas where the link is strongest, in shades of red and orange. A new study from Washington University School of Medicine in St. Louis has found that decreased deep sleep is associated with early signs of Alzheimer’s disease.

Credit: Brendan Lucey




Poor sleep is a hallmark of Alzheimer’s disease. People with the disease tend to wake up tired, and their nights become even less refreshing as memory loss and other symptoms worsen. But how and why restless nights are linked to Alzheimer’s disease is not fully understood.

Now, researchers at Washington University School of Medicine in St. Louis may have uncovered part of the explanation. They found that older people who have less slow-wave sleep – the deep sleep you need to consolidate memories and wake up feeling refreshed – have higher levels of the brain protein tau. Elevated tau is a sign of Alzheimer’s disease and has been linked to brain damage and cognitive decline.

The findings, published Jan. 9 in Science Translational Medicine, suggest that poor-quality sleep in later life could be a red flag for deteriorating brain health.

“What’s interesting is that we saw this inverse relationship between decreased slow-wave sleep and more tau protein in people who were either cognitively normal or very mildly impaired, meaning that reduced slow-wave activity may be a marker for the transition between normal and impaired,” said first author Brendan Lucey, MD, an assistant professor of neurology and director of the Washington University Sleep Medicine Center. “Measuring how people sleep may be a noninvasive way to screen for Alzheimer’s disease before or just as people begin to develop problems with memory and thinking.”

The brain changes that lead to Alzheimer’s, a disease that affects an estimated 5.7 million Americans, start slowly and silently. Up to two decades before the characteristic symptoms of memory loss and confusion appear, amyloid beta protein begins to collect into plaques in the brain. Tangles of tau appear later, followed by atrophy of key brain areas. Only then do people start showing unmistakable signs of cognitive decline.

The challenge is finding people on track to develop Alzheimer’s before such brain changes undermine their ability to think clearly. For that, sleep may be a handy marker.

To better understand the link between sleep and Alzheimer’s disease, Lucey, along with David Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of the Department of Neurology, and colleagues studied 119 people 60 years of age or older who were recruited through the Charles F. and Joanne Knight Alzheimer’s Disease Research Center. Most – 80 percent – were cognitively normal, and the remainder were very mildly impaired.

The researchers monitored the participants’ sleep at home over the course of a normal week. Participants were given a portable EEG monitor that strapped to their foreheads to measure their brain waves as they slept, as well as a wristwatch-like sensor that tracked body movement. They also kept sleep logs, where they made note of both nighttime sleep sessions and daytime napping. Each participant produced at least two nights of data; some had as many as six.

The researchers also measured levels of amyloid beta and tau in the brain and in the cerebrospinal fluid that bathes the brain and spinal cord. Thirty-eight people underwent PET brain scans for the two proteins, and 104 people underwent spinal taps to provide cerebrospinal fluid for analysis. Twenty-seven did both.

After controlling for factors such as sex, age and movements while sleeping, the researchers found that decreased slow-wave sleep coincided with higher levels of tau in the brain and a higher tau-to-amyloid ratio in the cerebrospinal fluid.
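
To make “controlling for factors” concrete, here is a minimal sketch of that kind of adjusted regression, with hypothetical variable names and simulated data; the study’s actual statistical models may differ.

```python
# Hypothetical sketch: does slow-wave sleep predict the tau-to-amyloid
# ratio after adjusting for sex, age and movement? Simulated stand-in data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 119  # same number of participants as the study

df = pd.DataFrame({
    "slow_wave_minutes": rng.normal(60, 15, n),  # nightly slow-wave sleep
    "age": rng.uniform(60, 85, n),
    "male": rng.integers(0, 2, n),
    "movement_index": rng.normal(0, 1, n),       # actigraphy-style movement
})
# Build the reported inverse relationship into the simulation:
df["tau_amyloid_ratio"] = (
    1.0
    - 0.004 * df["slow_wave_minutes"]
    + 0.003 * df["age"]
    + rng.normal(0, 0.05, n)
)

model = smf.ols(
    "tau_amyloid_ratio ~ slow_wave_minutes + age + male + movement_index",
    data=df,
).fit()
# A negative coefficient: less slow-wave sleep, higher tau-to-amyloid ratio
print(model.params["slow_wave_minutes"])
```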

“The key is that it wasn’t the total amount of sleep that was linked to tau, it was the slow-wave sleep, which reflects quality of sleep,” Lucey said. “The people with increased tau pathology were actually sleeping more at night and napping more in the day, but they weren’t getting as good quality sleep.”

If future research bears out their findings, sleep monitoring may be an easy and affordable way to screen earlier for Alzheimer’s disease, the researchers said. Daytime napping alone was significantly associated with high levels of tau, meaning that asking a simple question – How much do you nap during the day? – might help doctors identify people who could benefit from further testing.

“I don’t expect sleep monitoring to replace brain scans or cerebrospinal fluid analysis for identifying early signs of Alzheimer’s disease, but it could supplement them,” Lucey said. “It’s something that could be easily followed over time, and if someone’s sleep habits start changing, that could be a sign for doctors to take a closer look at what might be going on in their brains.”



Contacts and sources:
Diane Duke Williams / Tamara Bhandari
Washington University School of Medicine in St. Louis

Citation: Reduced non-rapid eye movement sleep is associated with tau pathology in early Alzheimer’s disease. Lucey BP, McCullough A, Landsness EC, Toedebusch CD, McLeland JS, Zaza AM, Fagan AM, McCue L, Xiong C, Morris JC, Benzinger TLS, Holtzman DM. Science Translational Medicine. Jan. 9, 2019. DOI: 10.1126/scitranslmed.aau6550


Blood Test Detects Alzheimer’s Damage Before Symptoms

The blood test may also identify neurodegeneration in other brain diseases and injuries.

A simple blood test reliably detects signs of brain damage in people on the path to developing Alzheimer’s disease – even before they show signs of confusion and memory loss, according to a new study from Washington University School of Medicine in St. Louis and the German Center for Neurodegenerative Diseases.

Credit: GrahamColm / Wikimedia Commons


The findings, published Jan. 21 in Nature Medicine, may one day be applied to quickly and inexpensively identify brain damage in people with not just Alzheimer’s disease but other neurodegenerative conditions such as multiple sclerosis, traumatic brain injury or stroke.

“This is something that would be easy to incorporate into a screening test in a neurology clinic,” said Brian Gordon, PhD, an assistant professor of radiology at Washington University’s Mallinckrodt Institute of Radiology and an author on the study. “We validated it in people with Alzheimer’s disease because we know their brains undergo lots of neurodegeneration, but this marker isn’t specific for Alzheimer’s. High levels could be a sign of many different neurological diseases and injuries.”

The test detects neurofilament light chain, a structural protein that forms part of the internal skeleton of neurons. When brain neurons are damaged or dying, the protein leaks out into the cerebrospinal fluid that bathes the brain and spinal cord and from there, into the bloodstream.

Finding high levels of the protein in a person’s cerebrospinal fluid has been shown to provide strong evidence that some of their brain cells have been damaged. But obtaining cerebrospinal fluid requires a spinal tap, which many people are reluctant to undergo. Senior author Mathias Jucker, PhD, a professor of cellular neurology at the German Center for Neurodegenerative Diseases in Tübingen, along with Gordon and colleagues from all over the world, studied whether levels of the protein in blood also reflect neurological damage.

They turned to a group of families with rare genetic variants that cause Alzheimer’s at a young age – typically in a person’s 50s, 40s or even 30s. The families form the study population of the Dominantly Inherited Alzheimer Network (DIAN), an international consortium led by Washington University that is investigating the roots of Alzheimer’s disease. A parent with such a mutation has a 50 percent chance of passing the genetic error to a child, and any child who inherits a variant is all but guaranteed to develop symptoms of dementia near the same age as his or her parent. This timeline gives researchers an opportunity to study what happens in the brain in the years before cognitive symptoms arise.

The researchers studied more than 400 people participating in the DIAN study, 247 who carry an early-onset genetic variant and 162 of their unaffected relatives. Each participant had previously visited a DIAN clinic to give blood, undergo brain scans and complete cognitive tests. Roughly half had been evaluated more than once, typically about two to three years apart.

In those with the faulty gene variant, protein levels were higher at baseline and rose over time. In contrast, protein levels were low and largely steady in people with the healthy form of the gene. This difference was detectable 16 years before cognitive symptoms were expected to arise.

In addition, when the researchers took a look at participants’ brain scans, they found that how quickly the protein levels rose tracked with the speed with which the precuneus – a part of the brain involved in memory – thinned and shrank.

“Sixteen years before symptoms arise is really quite early in the disease process, but we were able to see differences even then,” said Washington University graduate student Stephanie Schultz, one of the paper’s co-first authors. “This could be a good preclinical biomarker to identify those who will go on to develop clinical symptoms.”

To find out whether protein blood levels could be used to predict cognitive decline, the researchers collected data on 39 people with disease-causing variants when they returned to the clinic an average of two years after their last visit. The participants underwent brain scans and two cognitive tests: the Mini-Mental State Exam and the Logical Memory test. The researchers found that people whose blood protein levels had previously risen rapidly were most likely to show signs of brain atrophy and diminished cognitive abilities when they revisited the clinic.
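
The rate-of-change idea above can be sketched in a few lines: fit each participant’s slope of blood protein level against time across repeat visits, then flag fast risers. The column names and values below are invented stand-ins, not DIAN data.

```python
# Hypothetical sketch: per-participant rate of change in blood
# neurofilament light (NfL) across repeat visits, as a slope per year.
import numpy as np
import pandas as pd

# Invented stand-in for longitudinal visit data (names are assumptions)
visits = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p2"],
    "years_from_baseline": [0.0, 2.1, 0.0, 1.9, 4.0],
    "nfl_pg_ml": [12.0, 19.5, 10.0, 10.4, 11.1],
})

def slope_per_year(group: pd.DataFrame) -> float:
    """Least-squares slope of protein level against time for one person."""
    return float(
        np.polyfit(group["years_from_baseline"], group["nfl_pg_ml"], 1)[0]
    )

rates = visits.groupby("participant").apply(slope_per_year)
print(rates)  # fast risers (here p1) would be the ones to watch
```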

“It will be important to confirm our findings in late-onset Alzheimer’s disease and to define the time period over which neurofilament changes have to be assessed for optimal clinical predictability,” said Jucker, who leads the DIAN study in Germany.

All kinds of neurological damage can cause the neurofilament light protein to spill out of neurons and into blood. Protein levels are high in people with Lewy body dementia and Huntington’s disease; they rise dramatically in people with multiple sclerosis during a flare-up and in football players immediately after a blow to the head.

A commercial kit – very similar to the one used by the authors – is available to test for protein levels in the blood, but it has not been approved by the FDA to diagnose or predict an individual’s risk of brain damage. Before such a test can be used for individual patients with Alzheimer’s or any other neurodegenerative condition, researchers will need to determine how much protein in the blood is too much, and how quickly protein levels can rise before it becomes a cause for concern.

“I could see this being used in the clinic in a few years to identify signs of brain damage in individual patients,” said Gordon, who is also an assistant professor of psychological & brain sciences. “We’re not at the point we can tell people, ‘In five years you’ll have dementia.’ We are all working towards that.”



Contacts and sources:
Diane Duke Williams / Tamara Bhandari
Washington University School of Medicine in St. Louis


Citation: Serum neurofilament dynamics predicts neurodegeneration and clinical progression in presymptomatic Alzheimer’s Disease. Preische O, Schultz SA, Apel A, Kuhle J, Kaeser SA, Barro C, Gräber S, Kuder-Buletta E, LaFougere C, Laske C, Vöglein J, Levin J, Masters CL, Martins R, Schofield PR, Rossor MM, Graff-Radford NR, Salloway S, Ghetti B, Ringman JM, Noble JM, Chhatwal J, Goate AM, Benzinger TLS, Morris JC, Bateman RJ, Wang G, Fagan AM, McDade EM, Gordon BA, Jucker M, and the Dominantly Inherited Alzheimer Network. Nature Medicine. Jan. 21, 2019. DOI: 10.1038/s41591-018-0304-3


Scientists Turn Carbon Emissions Into Usable Energy



A recent study from the Ulsan National Institute of Science and Technology (UNIST) has developed a system that produces electricity and hydrogen (H2) while eliminating carbon dioxide (CO2), the main contributor to global warming.

This breakthrough was led by Professor Guntae Kim in the School of Energy and Chemical Engineering at UNIST, in collaboration with Professor Jaephil Cho in the Department of Energy Engineering and Professor Meilin Liu in the School of Materials Science and Engineering at the Georgia Institute of Technology.

In this work, the research team presented a hybrid Na-CO2 system that continuously produces electrical energy and hydrogen through efficient CO2 conversion, operating stably for over 1,000 hours on CO2 that dissolves spontaneously in aqueous solution.

This is a schematic illustration of the hybrid Na-CO2 system and its reaction mechanism.

Credit: UNIST

"Carbon capture, utilization, and sequestration (CCUS) technologies have recently received a great deal of attention for providing a pathway in dealing with global climate change," says Professor Kim. "The key to that technology is the easy conversion of chemically stable CO2 molecules to other materials." He adds, "Our new system has solved this problem with CO2 dissolution mechanism."

Much of the CO2 that humans emit is absorbed by the oceans, where it raises the acidity of the water. The researchers focused on this phenomenon and came up with the idea of dissolving CO2 in water to induce an electrochemical reaction. If acidity increases, the number of protons increases, which in turn increases the power to attract electrons. If a battery system is created based on this phenomenon, electricity can be produced while removing CO2.

Their hybrid Na-CO2 system, just like a fuel cell, consists of an anode (sodium metal), a separator (NASICON) and a cathode (a catalyst). Unlike in other batteries, the catalyst is contained in water and connected to the anode by a lead wire. When CO2 is injected into the water, the reaction starts, eliminating CO2 while creating electricity and H2. The conversion efficiency of CO2 is a high 50 percent.
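
Read as chemistry, the description above corresponds roughly to the following reaction scheme. This is a hedged reconstruction from the wording of the press release, not equations quoted from the paper:

```latex
% Assumed reaction scheme, reconstructed from the description above
\begin{align*}
\text{Dissolution:} \quad & \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3
    \rightleftharpoons H^+ + HCO_3^-}\\
\text{Anode (Na metal):} \quad & \mathrm{Na \rightarrow Na^+ + e^-}\\
\text{Cathode (catalyst):} \quad & \mathrm{2\,H^+ + 2\,e^- \rightarrow H_2}\\
\text{Overall:} \quad & \mathrm{2\,Na + 2\,CO_2 + 2\,H_2O \rightarrow
    2\,NaHCO_3 + H_2}
\end{align*}
```

On this reading, sodium is oxidized at the metal anode, protons freed by dissolved CO2 are reduced to hydrogen at the catalyst cathode, and the net by-product alongside electricity and H2 is dissolved sodium bicarbonate.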

"This hybrid Na-CO2 cell, which adopts efficient CCUS technologies, not only utilizes CO2 as the resource for generating electrical energy but also produces the clean energy source, hydrogen," says Jeongwon Kim in the Combined M.S/Ph.D. in Energy Engineering at UNIST, the co-first author for the research.

In particular, this system has shown its stability by operating for more than 1,000 hours without damage to the electrodes. The system can be applied to remove CO2 by inducing spontaneous chemical reactions.

"This research will lead to more derived research and will be able to produce H2 and electricity more effectively when electrolytes, separator, system design, and electrocatalysts are improved," said Professor Kim.



Contacts and sources:
JooHyeon Heo
Ulsan National Institute Of Science and Technology (UNIST)


Citation:"Efficient CO2 Utilization via a Hybrid Na-CO2 System Based on CO2 Dissolution,"  Changmin Kim et. al.,  iScience, (2018).


A Tilt of the Head Facilitates Social Engagement, Researchers Say

Every time we look at a face, we take in a flood of information effortlessly: age, gender, race, expression, the direction of our subject's gaze, perhaps even their mood. Faces draw us in and help us navigate relationships and the world around us.

How the brain does this is a mystery. Understanding how facial recognition works has great value—perhaps particularly for those whose brains process information in ways that make eye contact challenging, including people with autism. Helping people tap into this flow of social cues could be transformational.

A new study of facial "fixation" led by Nicolas Davidenko, an assistant professor of psychology at the University of California, Santa Cruz, boosts our insights considerably.
Credit: University of California - Santa Cruz

"Looking at the eyes allows you to gather much more information," said Davidenko. "It's a real advantage."

By contrast, the inability to make eye contact has real consequences. “It impairs your facial processing abilities and puts you at a real social disadvantage,” he said. People who are reluctant to make eye contact may also be misperceived as disinterested, distracted, or aloof, he noted.

Scientists have known for decades that when we look at a face, we tend to focus on the left side of the face we're viewing, from the viewer's perspective. Called the "left-gaze bias," this phenomenon is thought to be rooted in the brain, the right hemisphere of which dominates the face-processing task.

Researchers also know that we have a terrible time "reading" a face that's upside down. It's as if our neural circuits become scrambled, and we are challenged to grasp the most basic information. Much less is known about the middle ground, how we take in faces that are rotated or slightly tilted.

"We take in faces holistically, all at once—not feature by feature," said Davidenko. "But no one had studied where we look on rotated faces."

Davidenko used eye-tracking technology to get the answers, and what he found surprised him: The left-gaze bias completely vanished and an "upper eye bias" emerged, even with a tilt as minor as 11 degrees off center.

"People tend to look first at whichever eye is higher," he said. "A slight tilt kills the left-gaze bias that has been known for so long. That's what's so interesting. I was surprised how strong it was."

Perhaps more importantly for people with autism, Davidenko found that the tilt leads people to look more at the eyes, perhaps because it makes them more approachable and less threatening. "Across species, direct eye contact can be threatening," he said. "When the head is tilted, we look at the upper eye more than either or both eyes when the head is upright. I think this finding could be used therapeutically."

Davidenko is eager to explore two aspects of these findings: whether people with autism are more comfortable engaging with images of rotated faces, and whether tilts help facilitate comprehension during conversation.

The findings may also be of value for people with amblyopia, or "lazy eye," which can be disconcerting to others. "In conversation, they may want to tilt their head so their dominant eye is up," he said. "That taps into our natural tendency to fix our gaze on that eye."

The effect is strongest when the rotation is 45 degrees. The upper-eye bias is much weaker at a 90-degree rotation. "Ninety degrees is too weird," said Davidenko. "People don't know where to look, and it changes their behavior totally."
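
For intuition, here is a toy sketch of how such fixations might be scored, assuming a simple geometric rule (a clockwise tilt raises the viewer's-left eye) and simulated fixation labels; it is an illustration, not the study's analysis pipeline.

```python
# Hypothetical sketch: scoring an "upper eye bias" from fixation labels on
# a face rotated by a given angle (positive = clockwise, in degrees).
import numpy as np

def upper_eye(rotation_deg: float) -> str:
    """Which eye ends up higher for a given clockwise rotation?"""
    r = rotation_deg % 360
    if r == 0:
        return "neither"  # upright face: no eye is higher
    # Clockwise tilts (0-180) raise the viewer's-left eye; beyond that,
    # the viewer's-right eye is higher (180, upside down, lumped in here).
    return "left" if r < 180 else "right"

rng = np.random.default_rng(1)
# Simulated labels of which eye each fixation landed on, for a 45-degree tilt
fixations = rng.choice(["left", "right"], size=200, p=[0.7, 0.3])

target = upper_eye(45.0)
bias = np.mean(fixations == target)
print(f"proportion of fixations on the upper ({target}) eye: {bias:.2f}")
```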

Davidenko's findings appear in the latest edition of the journal Perception, in an article titled "The Upper Eye Bias: Rotated Faces Draw Fixations to the Upper Eye." His coauthors are Hema Kopalle, a graduate student in the Department of Neurosciences at UC San Diego who was an undergraduate researcher on the project, and the late Bruce Bridgeman, professor emeritus of psychology at UCSC.



Contacts and sources:
Jennifer McNulty
University of California - Santa Cruz

Citation: “The Upper Eye Bias: Rotated Faces Draw Fixations to the Upper Eye.” Nicolas Davidenko, Hema Kopalle and Bruce Bridgeman. Perception, first published December 27, 2018. https://doi.org/10.1177/0301006618819628



Mystery Orbits in Outermost Reaches of Solar System Not Caused by 'Planet Nine'



The strange orbits of some objects in the farthest reaches of our solar system, hypothesized by some astronomers to be shaped by an unknown ninth planet, can instead be explained by the combined gravitational force of small objects orbiting the Sun beyond Neptune, say researchers.

The alternative explanation to the so-called 'Planet Nine' hypothesis, put forward by researchers at the University of Cambridge and the American University of Beirut, proposes a disc made up of small icy bodies with a combined mass as much as ten times that of Earth. When combined with a simplified model of the solar system, the gravitational forces of the hypothesized disc can account for the unusual orbital architecture exhibited by some objects at the outer reaches of the solar system.

Kuiper Belt's ice cores
 Credit: ESO/M. Kornmesser

While the new theory is not the first to propose that the gravitational forces of a massive disc made of small objects could avoid the need for a ninth planet, it is the first such theory which is able to explain the significant features of the observed orbits while accounting for the mass and gravity of the other eight planets in our solar system. The results are reported in the Astronomical Journal.

Beyond the orbit of Neptune lies the Kuiper Belt, which is made up of small bodies left over from the formation of the solar system. Neptune and the other giant planets gravitationally influence the objects in the Kuiper Belt and beyond, collectively known as trans-Neptunian Objects (TNOs), which encircle the Sun on nearly-circular paths from almost all directions.

However, astronomers have discovered some mysterious outliers. Since 2003, around 30 TNOs on highly elliptical orbits have been spotted: they stand out from the rest of the TNOs by sharing, on average, the same spatial orientation. This type of clustering cannot be explained by our existing eight-planet solar system architecture and has led to some astronomers hypothesizing that the unusual orbits could be influenced by the existence of an as-yet-unknown ninth planet.
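
One standard way to quantify that shared orientation is the mean resultant length from circular statistics, which is near 1 for tightly aligned angles and near 0 for uniformly scattered ones. The sketch below uses made-up angles rather than the real TNO catalog.

```python
# Hypothetical sketch: measuring how strongly a set of orbital orientation
# angles clusters, using the mean resultant length from circular statistics.
import numpy as np

def mean_resultant_length(angles_deg) -> float:
    """Clustering strength of angles: 1 = perfectly aligned, 0 = uniform."""
    a = np.radians(angles_deg)
    return float(np.hypot(np.cos(a).mean(), np.sin(a).mean()))

clustered = [40, 55, 48, 62, 51, 45]  # loosely aligned, like the outlier TNOs
scattered = np.random.default_rng(2).uniform(0, 360, 30)

print(mean_resultant_length(clustered))  # close to 1
print(mean_resultant_length(scattered))  # much closer to 0
```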

The 'Planet Nine' hypothesis suggests that to account for the unusual orbits of these TNOs, there would have to be another planet, believed to be about ten times more massive than Earth, lurking in the distant reaches of the solar system and 'shepherding' the TNOs in the same direction through the combined effect of its gravity and that of the rest of the solar system.

"The Planet Nine hypothesis is a fascinating one, but if the hypothesized ninth planet exists, it has so far avoided detection," said co-author Antranik Sefilian, a PhD student in Cambridge's Department of Applied Mathematics and Theoretical Physics. "We wanted to see whether there could be another, less dramatic and perhaps more natural, cause for the unusual orbits we see in some TNOs. We thought, rather than allowing for a ninth planet, and then worry about its formation and unusual orbit, why not simply account for the gravity of small objects constituting a disc beyond the orbit of Neptune and see what it does for us?"

Professor Jihad Touma, from the American University of Beirut, and his former student Sefilian modelled the full spatial dynamics of TNOs with the combined action of the giant outer planets and a massive, extended disc beyond Neptune. The duo's calculations, which grew out of a seminar at the American University of Beirut, revealed that such a model can explain the perplexing spatially clustered orbits of some TNOs. In the process, they were able to identify ranges in the disc's mass, its 'roundness' (or eccentricity), and forced gradual shifts in its orientations (or precession rate), which faithfully reproduced the outlier TNO orbits.

"If you remove planet nine from the model and instead allow for lots of small objects scattered across a wide area, collective attractions between those objects could just as easily account for the eccentric orbits we see in some TNOs," said Sefilian, who is a Gates Cambridge Scholar and a member of Darwin College.

Earlier attempts to estimate the total mass of objects beyond Neptune have only added up to around one-tenth the mass of the Earth. However, in order for the TNOs to have the observed orbits and for there to be no Planet Nine, the model put forward by Sefilian and Touma requires the combined mass of the Kuiper Belt to be between a few to ten times the mass of the Earth.

"When observing other systems, we often study the disc surrounding the host star to infer the properties of any planets in orbit around it," said Sefilian. "The problem is when you're observing the disc from inside the system, it's almost impossible to see the whole thing at once. While we don't have direct observational evidence for the disc, neither do we have it for Planet Nine, which is why we're investigating other possibilities. Nevertheless, it is interesting to note that observations of Kuiper belt analogues around other stars, as well as planet formation models, reveal massive remnant populations of debris.

"It's also possible that both things could be true - there could be a massive disc and a ninth planet. With the discovery of each new TNO, we gather more evidence that might help explain their behavior."



Contacts and sources:
Sarah Collins
University of Cambridge

Citation: “Shepherding in a self-gravitating disk of trans-Neptunian objects.” Antranik A. Sefilian and Jihad R. Touma. Astronomical Journal (2019).