Saturday, August 31, 2019

Plant-Based Diets Risk Worsening Brain Health Due to Nutrient Deficiency

The momentum behind a move to plant-based and vegan diets for the good of the planet is commendable, but risks worsening an already low intake of an essential nutrient involved in brain health, warns a nutritionist in the online journal BMJ Nutrition, Prevention & Health.

To make matters worse, the UK government has failed to recommend or monitor dietary levels of this nutrient, choline, which is found predominantly in animal foods, says Dr Emma Derbyshire, of Nutritional Insight, a consultancy specializing in nutrition and biomedical science.

Choline is an essential dietary nutrient: although the liver produces some, the amount is not enough to meet the requirements of the human body, so the rest must come from the diet. In this respect choline resembles the omega-3 fatty acids, which likewise cannot be made by the body in the amounts humans require.

Choline is critical to brain health, particularly during fetal development. It also influences liver function, with shortfalls linked to irregularities in blood fat metabolism as well as excess free radical cellular damage, writes Dr Derbyshire.

Credit: Nitsan Simantov - Wikimedia Commons

The primary sources of dietary choline are found in beef, eggs, dairy products, fish, and chicken, with much lower levels found in nuts, beans, and cruciferous vegetables, such as broccoli.

In 1998, recognizing the importance of choline, the US Institute of Medicine recommended minimum daily intakes. These range from 425 mg/day for women to 550 mg/day for men, and 450 mg/day and 550 mg/day for pregnant and breastfeeding women, respectively, because of the critical role the nutrient has in fetal development.

In 2016, the European Food Safety Authority published similar daily requirements. Yet national dietary surveys in North America, Australia, and Europe show that habitual choline intake, on average, falls short of these recommendations.

"This is ... concerning given that current trends appear to be towards meat reduction and plant-based diets," says Dr Derbyshire.

She commends the first report (EAT-Lancet) to compile a healthy food plan based on promoting environmental sustainability, but suggests that the restricted intakes of whole milk, eggs and animal protein it recommends could affect choline intake.

And she is at a loss to understand why choline does not feature in UK dietary guidance or national population monitoring data.

"Given the important physiological roles of choline and authorisation of certain health claims, it is questionable why choline has been overlooked for so long in the UK," she writes. "Choline is presently excluded from UK food composition databases, major dietary surveys, and dietary guidelines," she adds.

It may be time for the UK government's independent Scientific Advisory Committee on Nutrition to reverse this, she suggests, particularly given the mounting evidence on the importance of choline to human health and growing concerns about the sustainability of the planet's food production.

"More needs to be done to educate healthcare professionals and consumers about the importance of a choline-rich diet, and how to achieve this," she writes.

"If choline is not obtained in the levels needed from dietary sources per se then supplementation strategies will be required, especially in relation to key stages of the life cycle, such as pregnancy, when choline intakes are critical to infant development," she concludes.

Contacts and sources:
British Medical Journal (BMJ)

Citation: Could we be overlooking a potential choline crisis in the United Kingdom?
Emma Derbyshire

First People Arrived in North America Earlier than Previously Thought

Stone tools and other artifacts unearthed from an archeological dig at the Cooper’s Ferry site in western Idaho suggest that people lived in the area 16,000 years ago, more than a thousand years earlier than scientists previously thought.

The artifacts would be considered among the earliest evidence of people in North America.

Possible early American Pacific coastal migration route. 
Map by Teresa Hall, Oregon State University.

The findings, published today in Science, add weight to the hypothesis that initial human migration to the Americas followed a Pacific coastal route rather than through the opening of an inland ice-free corridor, said Loren Davis, a professor of anthropology at Oregon State University and the study’s lead author.

“The Cooper’s Ferry site is located along the Salmon River, which is a tributary of the larger Columbia River basin. Early peoples moving south along the Pacific coast would have encountered the Columbia River as the first place below the glaciers where they could easily walk and paddle in to North America,” Davis said. “Essentially, the Columbia River corridor was the first off-ramp of a Pacific coast migration route.

“The timing and position of the Cooper’s Ferry site is consistent with and most easily explained as the result of an early Pacific coastal migration.”

Overview of Cooper's Ferry. 
Photo courtesy Loren Davis.

Cooper’s Ferry, located at the confluence of Rock Creek and the lower Salmon River, is known by the Nez Perce Tribe as an ancient village site named Nipéhe. Today the site is managed by the U.S. Bureau of Land Management.

Davis first began studying Cooper’s Ferry as an archaeologist for the BLM in the 1990s. After joining the Oregon State faculty, he partnered with the BLM to establish a summer archaeological field school there, bringing undergraduate and graduate students from Oregon State and elsewhere for eight weeks each summer from 2009 to 2018 to help with the research.

The site includes two dig areas; the published findings are about artifacts found in area A. In the lower part of that area, researchers uncovered several hundred artifacts, including stone tools; charcoal; fire-cracked rock; and bone fragments likely from medium- to large-bodied animals, Davis said. They also found evidence of a fire hearth, a food processing station and other pits created as part of domestic activities at the site.

Over the last two summers, the team of students and researchers reached the lower layers of the site, which, as expected, contained some of the oldest artifacts uncovered, Davis said. He worked with a team of researchers at Oxford University, who were able to successfully radiocarbon date a number of the animal bone fragments.

The results showed many artifacts from the lowest layers are associated with dates in the range of 15,000 to 16,000 years old.

Dig site at Cooper's Ferry. 
Photo courtesy Loren Davis.

“Prior to getting these radiocarbon ages, the oldest things we’d found dated mostly in the 13,000-year range, and the earliest evidence of people in the Americas had been dated to just before 14,000 years old in a handful of other sites,” Davis said. “When I first saw that the lower archaeological layer contained radiocarbon ages older than 14,000 years, I was stunned but skeptical and needed to see those numbers repeated over and over just to be sure they’re right. So we ran more radiocarbon dates, and the lower layer consistently dated between 14,000-16,000 years old.”

The dates from the oldest artifacts challenge the long-held “Clovis First” theory of early migration to the Americas, which suggested that people crossed from Siberia into North America and traveled down through an opening in the ice sheet near the present-day Dakotas. The ice-free corridor is hypothesized to have opened as early as 14,800 years ago, well after the date of the oldest artifacts found at Cooper’s Ferry, Davis said.

Loren Davis at Cooper's Ferry. 
Photo courtesy Loren Davis.

“Now we have good evidence that people were in Idaho before that corridor opened,” he said. “This evidence leads us to conclude that early peoples moved south of continental ice sheets along the Pacific coast.”

Davis’s team also found tooth fragments from an extinct form of horse known to have lived in North America at the end of the last glacial period. These tooth fragments, along with the radiocarbon dating, show that Cooper’s Ferry is the oldest radiocarbon-dated site in North America that includes artifacts associated with the bones of extinct animals, Davis said.

The oldest artifacts uncovered at Cooper’s Ferry also are very similar in form to older artifacts found in northeastern Asia, and particularly, Japan, Davis said. He is now collaborating with Japanese researchers to do further comparisons of artifacts from Japan, Russia and Cooper’s Ferry. He is also awaiting carbon-dating information from artifacts from a second dig location at the Cooper’s Ferry site.

“We have 10 years’ worth of excavated artifacts and samples to analyze,” Davis said. “We anticipate we’ll make other exciting discoveries as we continue to study the artifacts and samples from our excavations.”

Co-authors of the paper include David Sisson, an archaeologist with the BLM; David Madsen of the University of Texas at Austin; Lorena Becerra Valdivia and Thomas Higham of the Oxford University radiocarbon accelerator unit; and other researchers in the U.S., Japan and Canada. The research was funded in part by the Keystone Archaeological Research Fund and the Bernice Peltier Huber Charitable Trust.

Contacts and sources:
Michelle Klampe, Loren Davis
Oregon State University

Rapid Loss of Oxygen Led to Mass Extinction 420 Million Years Ago

Late in the prehistoric Silurian Period, around 420 million years ago, a devastating mass extinction event wiped 23 percent of all marine animals from the face of the planet.

Artist's impression of Silurian underwater fauna
Credit: Joseph Smit (1836-1929), from Nebula to Man, 1905, England / Wikimedia Commons

For years, scientists struggled to connect a mechanism to this mass extinction, one of the 10 most dramatic ever recorded in Earth’s history. Now, researchers from Florida State University have confirmed that this event, referred to by scientists as the Lau/Kozlowskii extinction, was triggered by an all-too-familiar culprit: rapid and widespread depletion of oxygen in the global oceans.

Their study, published today in the journal Geology, resolves a longstanding paleoclimate mystery, and raises urgent concerns about the ruinous fate that could befall our modern oceans if well-established trends of deoxygenation persist and accelerate.

Unlike other famous mass extinctions that can be tidily linked to discrete, apocalyptic calamities like meteor impacts or volcanic eruptions, there was no known, spectacularly destructive event responsible for the Lau/Kozlowskii extinction.

Assistant Professor Seth Young, graduate student Chelsie Bowman and Assistant Professor Jeremy Owens studied the link between oxygen depletion and a mass extinction event.

Credit: Florida State University

“This makes it one of the few extinction events that is comparable to the large-scale declines in biodiversity currently happening today, and a valuable window into future climate scenarios,” said study co-author Seth Young, an assistant professor in the Department of Earth, Ocean and Atmospheric Science.

Scientists have long been aware of the Lau/Kozlowskii extinction, as well as a related disruption in Earth’s carbon cycle during which the burial of enormous amounts of organic matter caused significant climate and environmental changes. But the link and timing between these two associated events — the extinction preceded the carbon cycle disruption by more than a hundred thousand years — remained stubbornly opaque.

“It’s never been clearly understood how this timing of events could be linked to a climate perturbation, or whether there was direct evidence linking widespread low-oxygen conditions to the extinction,” said FSU doctoral student Chelsie Bowman, who led the study.

To crack this difficult case, the team employed a pioneering research strategy.

Using advanced geochemical methods including thallium isotope, manganese concentration, and sulfur isotope measurements from important sites in Latvia and Sweden, the FSU scientists were able to reconstruct a timeline of ocean deoxygenation with relation to the Lau/Kozlowskii extinction and subsequent changes to the global carbon cycle.

The team’s new and surprising findings confirmed their original hypothesis that the extinction record might be driven by a decline of ocean oxygenation. Their multiproxy measurements established a clear connection between the steady creep of deoxygenated waters and the step-wise nature of the extinction event — its start in communities of deep-water organisms and eventual spread to shallow-water organisms.

Their investigations also revealed that the extinction was likely driven in part by the proliferation of sulfidic ocean conditions.

“For the first time, this research provides a mechanism to drive the observed step-wise extinction event, which first coincided with ocean deoxygenation and was followed by more severe and toxic ocean conditions with sulfide in the water column,” Bowman said.

With the oxygen-starved oceans of the Lau/Kozlowskii extinction serving as an unnerving precursor to the increasingly deoxygenated waters observed around the world today, study co-author Jeremy Owens, an assistant professor in the Department of Earth, Ocean and Atmospheric Science, said that there are still important lessons to be learned from ecological crises of the distant past.

“This work provides another line of evidence that initial deoxygenation in ancient oceans coincides with the start of extinction events,” he said. “This is important as our observations of the modern ocean suggest there is significant widespread deoxygenation which may cause greater stresses on organisms that require oxygen, and may be the initial steps towards another marine mass extinction.”

Dimitri Kaljo, Olle Hints and Tõnu Martma from Tallinn University of Technology; Mats E. Eriksson from Lund University; and Theodore R. Them from the College of Charleston contributed to this study. This research was funded by the National Science Foundation and the Estonian Research Council.

Contacts and sources:
Zachary Boehm
Florida State University

Citation: Linking the progressive expansion of reducing conditions to a stepwise mass extinction event in the late Silurian oceans.
Chelsie N. Bowman, Seth A. Young, Dimitri Kaljo, Mats E. Eriksson, Theodore R. Them, Olle Hints, Tõnu Martma, Jeremy D. Owens. Geology, 2019; DOI: 10.1130/G46571.1

Humans Were Changing the Planet Earlier Than We Knew

Humans had caused significant landcover change on Earth up to 4000 years earlier than previously thought, University of Queensland researchers have found.

The School of Social Sciences' Dr Andrea Kay said some scientists defined the Anthropocene as starting in the 20th century, but the new research showed human-induced landcover change was globally extensive by 2000 BC.

Rice terraces in Bali ... farmers have been altering the Earth's surface for thousands of years. 
Image: Andrea Kay

The Anthropocene – the current geological age – is viewed as the period in which human activity has been the dominant influence on Earth’s climate and environment.

“The activities of farmers, pastoralists and hunter-gatherers had significantly changed the planet four millennia ago,” Dr Kay said.

The ArchaeoGLOBE project used an online survey to gather land-use estimates over the past 10,000 years from archaeologists with regional expertise.

“The modern rate and scale of anthropogenic global change is far greater than those of the deep past, but the long-term cumulative changes that early food producers wrought on Earth are greater than many people realise,” Dr Kay said.

“Even small-scale, shifting agriculture can cause significant change when considered at large scales and over long time-periods.”

Image: Lucas Stephens

Fellow researcher Dr Nicole Boivin said the innovative crowdsourcing-from-experts approach to pooling archaeological data had provided the project with a unique perspective.

“Archaeologists possess critical datasets for assessing long-term human impacts to the natural world, but these remain largely untapped in terms of global-scale assessments,” Dr Boivin said.

Another researcher on the team, Dr Alison Crowther, said the study could help plan for future climate scenarios.

“This research and the collaborative approach we used means we can better understand early land use as a driver of long-term global environmental changes across the Earth’s system,” Dr Crowther said.

Dr Kay, Dr Boivin, Dr Crowther and UQ Senior Research Fellow Dr Patrick Roberts each have joint appointments at UQ and The Max Planck Institute for the Science of Human History’s Department of Archaeology.

Other researchers on the team were UQ’s head of archaeology, Associate Professor Andrew Fairbairn, and archaeologists from Australian National University, the University of Melbourne, University of Sydney, Flinders University and LaTrobe University.

Contacts and sources:
Dr Andrea Kay
University of Queensland

Citation: Archaeological assessment reveals Earth’s early transformation through land use
Lucas Stephens, Dorian Fuller, Nicole Boivin, Torben Rick, Nicolas Gauthier, Andrea Kay, Ben Marwick, Chelsey Geralda, Denise Armstrong, C. Michael Barton, Tim Denham, Kristina Douglass, Jonathan Driver, Lisa Janz, Patrick Roberts, J. Daniel Rogers, Heather Thakar, Mark Altaweel, Amber L. Johnson, Maria Marta Sampietro Vattuone, Mark Aldenderfer, Sonia Archila, Gilberto Artioli, Martin T. Bale, Timothy Beach, Ferran Borrell, Todd Braje, Philip I. Buckland, Nayeli Guadalupe Jiménez Cano, José M. Capriles, Agustín Diez Castillo, Çiler Çilingiroğlu, Michelle Negus Cleary, James Conolly, Peter R. Coutros, R. Alan Covey, Mauro Cremaschi, Alison Crowther, Lindsay Der, Savino di Lernia, John F. Doershuk, William E. Doolittle, Kevin J. Edwards, Jon M. Erlandson, Damian Evans, Andrew Fairbairn, Patrick Faulkner, Gary Feinman, Ricardo Fernandes, Scott M. Fitzpatrick, Ralph Fyfe, Elena Garcea, Steve Goldstein, Reed Charles Goodman, Jade Dalpoim Guedes, Jason Herrmann, Peter Hiscock, Peter Hommel, K. Ann Horsburgh, Carrie Hritz, John W. Ives, Aripekka Junno, Jennifer G. Kahn, Brett Kaufman, Catherine Kearns, Tristram R. Kidder, François Lanoë, Dan Lawrence, Gyoung-Ah Lee, Maureece J. Levin, Henrik B. Lindskoug, José Antonio López-Sáez, Scott Macrae, Rob Marchant, John M. Marston, Sarah McClure, Mark D. McCoy, Alicia Ventresca Miller, Michael Morrison, Giedre Motuzaite Matuzeviciute, Johannes Müller, Ayushi Nayak, Sofwan Noerwidi, Tanya M. Peres, Christian E. Peterson, Lucas Proctor, Asa R. Randall, Steve Renette, Gwen Robbins Schug, Krysta Ryzewski, Rakesh Saini, Vivian Scheinsohn, Peter Schmidt, Pauline Sebillaud, Oula Seitsonen, Ian A. Simpson, Arkadiusz Sołtysiak, Robert J. Speakman, Robert N. Spengler, Martina L. Steffen, Michael J. Storozum, Keir M. Strickland, Jessica Thompson, T. L. Thurston, Sean Ulm, M. Cemre Ustunkaya, Martin H. Welker, Catherine West, Patrick Ryan Williams, David K. Wright, Nathan Wright, Muhammad Zahir, Andrea Zerboni, Ella Beaudoin, Santiago Munevar Garcia, Jeremy Powell, Alexa Thornton, Jed O. Kaplan, Marie-José Gaillard, Kees Klein Goldewijk, Erle Ellis
Science 30 Aug 2019:
Vol. 365, Issue 6456, pp. 897-902
DOI: 10.1126/science.aax1192

Astronomers Determine Earth’s Fingerprint in Hopes of Finding Habitable Planets beyond the Solar System

Two McGill University astronomers have assembled a “fingerprint” for Earth, which could be used to identify a planet beyond our Solar System capable of supporting life.

An artist’s conception of Earth-like planets. 
Credit: NASA/ESA/G. Bacon (STScI)

McGill Physics student Evelyn Macdonald and her supervisor Prof. Nicolas Cowan used over a decade of observations of Earth’s atmosphere taken by the SCISAT satellite to construct a transit spectrum of Earth, a sort of fingerprint for Earth’s atmosphere in infrared light, which shows the presence of key molecules in the search for habitable worlds. This includes the simultaneous presence of ozone and methane, which scientists expect to see only when there is an organic source of these compounds on the planet. Such a detection is called a “biosignature”.

“A handful of researchers have tried to simulate Earth’s transit spectrum, but this is the first empirical infrared transit spectrum of Earth,” says Prof. Cowan. “This is what alien astronomers would see if they observed a transit of Earth.”

The findings, published Aug. 28 in the journal Monthly Notices of the Royal Astronomical Society, could help scientists determine what kind of signal to look for in their quest to find Earth-like exoplanets (planets orbiting a star other than our Sun). Developed by the Canadian Space Agency, SCISAT was created to help scientists understand the depletion of Earth’s ozone layer by studying particles in the atmosphere as sunlight passes through it. 

A view of Earth from space taken from the International Space Station. 
Credit: NASA/Reid Wiseman

In general, astronomers can tell what molecules are found in a planet’s atmosphere by looking at how starlight changes as it shines through the atmosphere. Instruments must wait for a planet to pass – or transit – over the star to make this observation. With sensitive enough telescopes, astronomers could potentially identify molecules such as carbon dioxide, oxygen or water vapour that might indicate if a planet is habitable or even inhabited.
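As a rough back-of-the-envelope illustration (ours, not the study's), the depth of a transit, i.e. the fraction of starlight a planet blocks, scales as the square of the planet-to-star radius ratio. A minimal sketch, with a hypothetical `transit_depth` helper:

```python
# Illustrative sketch only: transit depth = (planet radius / star radius)^2.
# The function name and constants below are ours, not from the study.
R_EARTH_KM = 6371.0    # mean radius of Earth
R_SUN_KM = 695_700.0   # nominal solar radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of starlight blocked when the planet crosses its star's disk."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth transiting the Sun dims it by ~{depth * 1e6:.0f} ppm")
```

A dip of roughly 84 parts per million is why Earth-sized transits demand such precise photometry, and why a small star like TRAPPIST-1, with a much larger planet-to-star radius ratio, makes the signal far easier to see.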

Cowan was explaining transit spectroscopy of exoplanets at a group lunch meeting at the McGill Space Institute (MSI) when Prof. Yi Huang, an atmospheric scientist and fellow member of the MSI, noted that the technique was similar to solar occultation studies of Earth’s atmosphere, as done by SCISAT.

Since the first discovery of an exoplanet in the 1990s, astronomers have confirmed the existence of 4,000 exoplanets. The holy grail in this relatively new field of astronomy is to find planets that could potentially host life – an Earth 2.0.

A very promising system that might hold such planets, called TRAPPIST-1, will be a target for the upcoming James Webb Space Telescope, set to launch in 2021. Macdonald and Cowan built a simulated signal of what an Earth-like planet’s atmosphere would look like through the eyes of this future telescope, which is a collaboration between NASA, the Canadian Space Agency and the European Space Agency.

The TRAPPIST-1 system located 40 light years away contains seven planets, three or four of which are in the so-called “habitable zone” where liquid water could exist. The McGill astronomers say this system might be a promising place to search for a signal similar to their Earth fingerprint since the planets are orbiting an M-dwarf star, a type of star which is smaller and colder than our Sun.

“TRAPPIST-1 is a nearby red dwarf star, which makes its planets excellent targets for transit spectroscopy. This is because the star is much smaller than the Sun, so its planets are relatively easy to observe,” explains Macdonald. “Also, these planets orbit close to the star, so they transit every few days. Of course, even if one of the planets harbours life, we don’t expect its atmosphere to be identical to Earth’s since the star is so different from the Sun.”

According to their analysis, Macdonald and Cowan affirm that the Webb Telescope will be sensitive enough to detect carbon dioxide and water vapour using its instruments. It may even be able to detect the biosignature of methane and ozone if enough time is spent observing the target planet.

Prof. Cowan and his colleagues at the Montreal-based Institute for Research on Exoplanets are hoping to be some of the first to detect signs of life beyond our home planet. The fingerprint of Earth assembled by Macdonald for her senior undergraduate thesis could tell other astronomers what to look for in this search. She will be starting her Ph.D. in the field of exoplanets at the University of Toronto in the Fall.

The James Webb Space Telescope, set to launch in 2021, will be studying the atmospheres of exoplanets and could determine if these planets are habitable or contain biosignatures. The Webb Telescope is an international collaboration between NASA, the Canadian Space Agency and the European Space Agency. 
Credit: Northrop Grumman

Funding for the research was provided by the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche du Québec – Nature et technologies, and a McGill Science Undergraduate Research Award.

“An empirical infrared transit spectrum of Earth: opacity windows and biosignatures,” by Evelyn J. R. Macdonald and Nicolas B. Cowan, was published online Aug. 28, 2019, in Monthly Notices of the Royal Astronomical Society.

Contacts and sources:
Nathalie Ouellette
Institute for Research on Exoplanets, Université de Montréal, Montréal, Canada

Nicolas Cowan
McGill Space Institute, McGill University, Montréal, Canada

Evelyn Macdonald
McGill Space Institute, McGill University, Montréal, Canada


Newly Discovered Giant Planet Slingshots Around Its Star, Unlike Anything in Our Solar System

Astronomers have discovered a planet three times the mass of Jupiter that travels on a long, egg-shaped path around its star. If this planet were somehow placed into our own solar system, it would swing from within our asteroid belt to out beyond Neptune. Other giant planets with highly elliptical orbits have been found around other stars, but none of those worlds were located at the very outer reaches of their star systems like this one.

This illustration compares the eccentric orbit of HR 5183 b to the more circular orbits of the planets in our own solar system.
Credit: W. M. Keck Observatory/Adam Makarenko

"This planet is unlike the planets in our solar system, but more than that, it is unlike any other exoplanets we have discovered so far," says Sarah Blunt, a Caltech graduate student and first author on the new study publishing in The Astronomical Journal. "Other planets detected far away from their stars tend to have very low eccentricities, meaning that their orbits are more circular. The fact that this planet has such a high eccentricity speaks to some difference in the way that it either formed or evolved relative to the other planets."

The planet was discovered using the radial velocity method, a workhorse of exoplanet discovery that detects new worlds by tracking how their parent stars "wobble" in response to gravitational tugs from those planets. However, analyses of these data usually require observations taken over a planet's entire orbital period. For planets orbiting far from their stars, this can be difficult: a full orbit can take tens or even hundreds of years.
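To make the "wobble" concrete: the star's line-of-sight velocity swings with a semi-amplitude K set by the planet's mass, orbital period and eccentricity. A minimal sketch of the standard Keplerian formula (our own illustration, not the California Planet Search pipeline):

```python
import math

# Illustration only (not the survey's code): radial-velocity semi-amplitude K,
# the peak line-of-sight speed a planet of mass m_p induces on its star.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def rv_semi_amplitude(p_seconds, m_planet_kg, m_star_kg, ecc=0.0, sin_i=1.0):
    """K in m/s for orbital period p_seconds, eccentricity ecc, inclination sin_i."""
    return ((2 * math.pi * G / p_seconds) ** (1 / 3)
            * m_planet_kg * sin_i / (m_star_kg + m_planet_kg) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# Sanity check: Jupiter tugs the Sun back and forth at roughly 12.5 m/s
# over its ~12-year orbit.
YEAR = 3.156e7    # seconds
M_SUN = 1.989e30  # kg
M_JUP = 1.898e27  # kg
print(f"Jupiter's signal on the Sun: ~{rv_semi_amplitude(11.86 * YEAR, M_JUP, M_SUN):.1f} m/s")
```

An eccentric planet concentrates this signal into a brief swing near closest approach, which is how the team below recognized HR 5183 b without waiting for a full orbit.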

The California Planet Search, led by Caltech Professor of Astronomy Andrew W. Howard, is one of the few groups that watches stars over the decades-long timescales necessary to detect long-period exoplanets using radial velocity. The data needed to make the discovery of the new planet were provided by the two observatories used by the California Planet Search—the Lick Observatory in Northern California and the W. M. Keck Observatory in Hawaii—and by the McDonald Observatory in Texas.

The astronomers have been watching the planet's star, called HR 5183, since the 1990s, but do not have data corresponding to one full orbit of the planet, called HR 5183 b, because it circles its star roughly every 45 to 100 years. The team instead found the planet because of its strange orbit.

"This planet spends most of its time loitering in the outer part of its star’s planetary system in this highly eccentric orbit, then it starts to accelerate in and does a slingshot around its star," explains Howard. "We detected this slingshot motion. We saw the planet come in and now it's on its way out. That creates such a distinctive signature that we can be sure that this is a real planet, even though we haven't seen a complete orbit."

The new findings show that it is possible to use the radial velocity method to make detections of other far-flung planets without waiting decades. And, the researchers suggest, looking for more planets like this one could illuminate the role of giant planets in shaping their solar systems.

Planets take shape out of disks of material left over after stars form. That means that planets should start off in flat, circular orbits. For the newly detected planet to be on such an eccentric orbit, it must have gotten a gravitational kick from some other object. The most plausible scenario, the researchers propose, is that the planet once had a neighbor of similar size. When the two planets got close enough to each other, one pushed the other out of the solar system, forcing HR 5183 b into a highly eccentric orbit.

"This newfound planet basically would have come in like a wrecking ball," says Howard, "knocking anything in its way out of the system."

This discovery demonstrates that our understanding of planets beyond our solar system is still evolving. Researchers continue to find worlds that are unlike anything in our solar system or in solar systems we have already discovered.

"Copernicus taught us that Earth is not the center of the solar system, and as we expanded into discovering other solar systems of exoplanets, we expected them to be carbon copies of our own solar system," Howard explains, "But it's just been one surprise after another in this field. This newfound planet is another example of a system that is not the image of our solar system but has remarkable features that make our universe incredibly rich in its diversity."

The study, titled "Radial Velocity of an Eccentric Jovian World Orbiting at 18 AU," was funded by the National Science Foundation, NASA, Tennessee State University and the State of Tennessee, the Beatrice Watson Parrent Fellowship, the Trottier Family Foundation, and Caltech. Other Caltech authors include: BJ Fulton, a staff scientist at IPAC; former postdoctoral scholar Sean Mills (BS '12); Erik Petigura, a former postdoctoral scholar now based at UCLA; and Arpita Roy, R.A. & G.B. Millikan Postdoctoral Scholar in Astronomy.

Contacts and sources:
Whitney Clavin
Caltech

Robotic Thread Designed to Slip Through the Brain's Blood Vessels

MIT engineers have developed robotic thread (in black) that can be steered magnetically and is small enough to work through narrow spaces such as the vasculature of the human brain. The researchers envision the technology may be used in the future to clear blockages in patients with stroke and aneurysms.
Image courtesy of the researchers

MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain.

In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.

“Stroke is the number five cause of death and a leading cause of disability in the United States. If acute stroke can be treated within the first 90 minutes or so, patients’ survival rates could increase significantly,” says Xuanhe Zhao, associate professor of mechanical engineering and of civil and environmental engineering at MIT. “If we could design a device to reverse blood vessel blockage within this ‘golden hour,’ we could potentially avoid permanent brain damage. That’s our hope.”

Zhao and his team, including lead author Yoonho Kim, a graduate student in MIT’s Department of Mechanical Engineering, describe their soft robotic design today in the journal Science Robotics. The paper’s other co-authors are MIT graduate student German Alberto Parada and visiting student Shengduo Liu.

In a tight spot

To clear blood clots in the brain, doctors often perform an endovascular procedure, a minimally invasive surgery in which a surgeon inserts a thin wire through a patient’s main artery, usually in the leg or groin. Guided by a fluoroscope that simultaneously images the blood vessels using X-rays, the surgeon then manually rotates the wire up into the damaged brain vessel. A catheter can then be threaded up along the wire to deliver drugs or clot-retrieval devices to the affected region.

Kim says the procedure can be physically taxing, requiring surgeons, who must be specifically trained in the task, to endure repeated radiation exposure from fluoroscopy.

“It’s a demanding skill, and there are simply not enough surgeons for the patients, especially in suburban or rural areas,” Kim says.

The medical guidewires used in such procedures are passive, meaning they must be manipulated manually, and are typically made from a core of metallic alloys coated in polymer. Kim says this combination could potentially generate friction and damage vessel linings if the wire were to get temporarily stuck in a particularly tight space.

The team realized that developments in their lab could help improve such endovascular procedures, both in the design of the guidewire and in reducing doctors’ exposure to any associated radiation.

Threading a needle

Over the past few years, the team has built up expertise in both hydrogels — biocompatible materials made mostly of water — and 3-D-printed magnetically-actuated materials that can be designed to crawl, jump, and even catch a ball, simply by following the direction of a magnet.

In this new paper, the researchers combined their work in hydrogels and in magnetic actuation, to produce a magnetically steerable, hydrogel-coated robotic thread, or guidewire, which they were able to make thin enough to magnetically guide through a life-size silicone replica of the brain’s blood vessels.

The core of the robotic thread is made from nickel-titanium alloy, or “nitinol,” a material that is both bendy and springy. Unlike a clothes hanger, which would retain its shape when bent, a nitinol wire would return to its original shape, giving it more flexibility in winding through tight, tortuous vessels. The team coated the wire’s core in a rubbery paste, or ink, which they embedded throughout with magnetic particles.

Finally, they used a chemical process they developed previously, to coat and bond the magnetic covering with hydrogel — a material that does not affect the responsiveness of the underlying magnetic particles and yet provides the wire with a smooth, friction-free, biocompatible surface.

They demonstrated the robotic thread’s precision and activation by using a large magnet, much like the strings of a marionette, to steer the thread through an obstacle course of small rings, reminiscent of a thread working its way through the eye of a needle.

The researchers also tested the thread in a life-size silicone replica of the brain’s major blood vessels, including clots and aneurysms, modeled after the CT scans of an actual patient’s brain. The team filled the silicone vessels with a liquid simulating the viscosity of blood, then manually manipulated a large magnet around the model to steer the robot through the vessels’ winding, narrow paths.

Kim says the robotic thread can be functionalized, meaning that features can be added — for example, to deliver clot-reducing drugs or break up blockages with laser light. To demonstrate the latter, the team replaced the thread’s nitinol core with an optical fiber and found that they could magnetically steer the robot and activate the laser once the robot reached a target region.

When the researchers ran comparisons between the robotic thread coated versus uncoated with hydrogel, they found that the hydrogel gave the thread a much-needed, slippery advantage, allowing it to glide through tighter spaces without getting stuck. In an endovascular surgery, this property would be key to preventing friction and injury to vessel linings as the thread works its way through.

“One of the challenges in surgery has been to be able to navigate through complicated blood vessels in the brain, which has a very small diameter, where commercial catheters can’t reach,” says Kyujin Cho, professor of mechanical engineering at Seoul National University. “This research has shown potential to overcome this challenge and enable surgical procedures in the brain without open surgery.”

And just how can this new robotic thread keep surgeons radiation-free? Kim says that a magnetically steerable guidewire does away with the need for surgeons to physically push a wire through a patient’s blood vessels. This means that doctors wouldn’t have to be in close proximity to the patient or, more importantly, to the radiation-generating fluoroscope.

In the near future, he envisions endovascular surgeries that incorporate existing magnetic technologies, such as pairs of large magnets, the directions of which doctors can manipulate from just outside the operating room, away from the fluoroscope imaging the patient’s brain, or even in an entirely different location.

“Existing platforms could apply magnetic field and do the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick,” Kim says. “Our hope is to leverage existing technologies to test our robotic thread in vivo in the next step.”

This research was funded, in part, by the Office of Naval Research, the MIT Institute for Soldier Nanotechnologies, and the National Science Foundation (NSF).

Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology - MIT

Fleet of Autonomous Boats Can Now Shapeshift

MIT’s fleet of robotic boats has been updated with new capabilities to “shapeshift,” by autonomously disconnecting and reassembling into a variety of configurations, to form floating structures in Amsterdam’s many canals.

The autonomous boats — rectangular hulls equipped with sensors, thrusters, microcontrollers, GPS modules, cameras, and other hardware — are being developed as part of the ongoing “Roboat” project between MIT and the Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute). The project is led by MIT professors Carlo Ratti, Daniela Rus, Dennis Frenchman, and Andrew Whittle. In the future, Amsterdam wants the roboats to cruise its 165 winding canals, transporting goods and people, collecting trash, or self-assembling into “pop-up” platforms — such as bridges and stages — to help relieve congestion on the city’s busy streets.

MIT’s fleet of robotic boats has been updated with new capabilities to “shapeshift,” by autonomously disconnecting and reassembling into different configurations to form various floating platforms in the canals of Amsterdam. In experiments in a pool, the boats rearranged themselves from a connected straight line into an “L” (shown here) and other shapes.
Images and gifs: courtesy of the researchers

In 2016, MIT researchers tested a roboat prototype that could move forward, backward, and laterally along a preprogrammed path in the canals. Last year, researchers designed low-cost, 3-D-printed, one-quarter scale versions of the boats, which were more efficient and agile, and came equipped with advanced trajectory-tracking algorithms. In June, they created an autonomous latching mechanism that let the boats target and clasp onto each other, and keep trying if they fail.

In a new paper presented at last week’s IEEE International Symposium on Multi-Robot and Multi-Agent Systems, the researchers describe an algorithm that enables the roboats to smoothly reshape themselves as efficiently as possible. The algorithm handles all the planning and tracking that enables groups of roboat units to unlatch from one another in one set configuration, travel a collision-free path, and reattach at their appropriate spots in the new configuration.

In demonstrations in an MIT pool and in computer simulations, groups of linked roboat units rearranged themselves from straight lines or squares into other configurations, such as rectangles and “L” shapes. The experimental transformations only took a few minutes. More complex shapeshifts may take longer, depending on the number of moving units — which could be dozens — and differences between the two shapes.

Courtesy of the researchers

“We’ve enabled the roboats to now make and break connections with other roboats, with hopes of moving activities on the streets of Amsterdam to the water,” says Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “A set of boats can come together to form linear shapes as pop-up bridges, if we need to send materials or people from one side of a canal to the other. Or, we can create pop-up wider platforms for flower or food markets.”

Joining Rus on the paper are: Ratti, director of MIT’s Senseable City Lab, and, also from the lab, first author Banti Gheneti, Ryan Kelly, and Drew Meyers, all researchers; postdoc Shinkyu Park; and research fellow Pietro Leoni.

Collision-free trajectories

For their work, the researchers had to tackle challenges with autonomous planning, tracking, and connecting groups of roboat units. Giving each unit unique capabilities to, for instance, locate each other, agree on how to break apart and reform, and then move around freely, would require complex communication and control techniques that could make movement inefficient and slow.

To enable smoother operations, the researchers developed two types of units: coordinators and workers. One or more workers connect to one coordinator to form a single entity, called a “connected-vessel platform” (CVP). All coordinator and worker units have four propellers, a wireless-enabled microcontroller, and several automated latching mechanisms and sensing systems that enable them to link together.

Coordinators, however, also come equipped with GPS for navigation, and an inertial measurement unit (IMU), which computes localization, pose, and velocity. Workers only have actuators that help the CVP steer along a path. Each coordinator is aware of and can wirelessly communicate with all connected workers. Structures comprise multiple CVPs, and individual CVPs can latch onto one another to form a larger entity.

During shapeshifting, the connected CVPs in a structure compare the geometric differences between the structure’s initial shape and its new shape. Each CVP then determines whether it stays in the same spot or needs to move. Each moving CVP is assigned a time to disassemble and a new position in the new shape.
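The bookkeeping described above can be sketched in a few lines, assuming grid positions for the units (the function names and the greedy nearest-slot matching are illustrative, not the authors' implementation):

```python
# Simplified sketch of the shapeshifting assignment step: units whose
# position also appears in the target shape stay put; the rest are
# matched to the vacant target slots, here greedily by distance.
from math import dist

def plan_reconfiguration(initial, target):
    initial, target = set(initial), set(target)
    stay = initial & target                   # already in a valid spot
    movers = sorted(initial - target)         # must unlatch and relocate
    vacant = set(target - initial)            # empty slots in new shape
    moves = {}
    for unit in movers:                       # greedy nearest-slot match
        slot = min(vacant, key=lambda s: dist(unit, s))
        moves[unit] = slot
        vacant.remove(slot)
    return stay, moves

# Line of four units reshaping into an "L":
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
ell  = [(0, 0), (1, 0), (2, 0), (2, 1)]
stay, moves = plan_reconfiguration(line, ell)
print(stay)    # three units stay in place
print(moves)   # {(3, 0): (2, 1)} -- one unit unlatches and relatches
```

In the real system each moving CVP would additionally receive a disassembly time; only the stay/move split and the target assignment are shown here.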

Each CVP uses a custom trajectory-planning technique to compute a way to reach its target position without interruption, while optimizing the route for speed. To do so, each CVP precomputes all collision-free regions around the moving CVP as it rotates and moves away from a stationary one.

After precomputing those collision-free regions, the CVP then finds the shortest trajectory to its final destination, which still keeps it from hitting the stationary unit. Notably, optimization techniques are used to make the whole trajectory-planning process very efficient, with the precomputation taking little more than 100 milliseconds to find and refine safe paths. Using data from the GPS and IMU, the coordinator then estimates its pose and velocity at its center of mass, and wirelessly controls all the propellers of each unit and moves into the target location.
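A toy version of that planning step, assuming a coarse grid world and a single stationary unit (the actual system plans continuous, optimized trajectories; this breadth-first search only illustrates the idea of a shortest collision-free route):

```python
# Toy illustration: find a shortest path for a moving unit to its
# target slot while never entering cells occupied by the stationary
# unit. BFS on an unweighted grid returns a minimum-length path.
from collections import deque

def shortest_path(start, goal, blocked, size=6):
    """BFS over a size x size grid with 4-connected moves."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < size and 0 <= ny < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no collision-free path exists

stationary = {(2, 0), (2, 1), (2, 2)}   # cells the docked unit occupies
path = shortest_path((0, 0), (4, 0), stationary)
print(path)  # detours around the stationary unit
```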

In their experiments, the researchers tested three-unit CVPs, consisting of one coordinator and two workers, in several different shapeshifting scenarios. Each scenario involved one CVP unlatching from the initial shape and moving and relatching to a target spot around a second CVP.

Three CVPs, for instance, rearranged themselves from a connected straight line — where they were latched together at their sides — into a straight line connected at front and back, as well as an “L.” In computer simulations, up to 12 roboat units rearranged themselves from, say, a rectangle into a square or from a solid square into a Z-like shape.

Courtesy of the researchers

Scaling up

Experiments were conducted on quarter-sized roboat units, which measure about 1 meter long and half a meter wide. But the researchers believe their trajectory-planning algorithm will scale well in controlling full-sized units, which will measure about 4 meters long and 2 meters wide.

The researchers hope to use the roboats to form a dynamic “bridge” across a 60-meter canal between the NEMO Science Museum in Amsterdam’s city center and an area that’s under development. Called RoundAround, the idea is to employ roboats to sail in a continuous circle across the canal, picking up and dropping off passengers at docks and stopping or rerouting when they detect anything in the way. Currently, walking around that waterway takes about 10 minutes, but the bridge could cut that time to around two minutes. For now, this remains an exploratory concept.

“This will be the world’s first bridge comprised of a fleet of autonomous boats,” Ratti says. “A regular bridge would be super expensive, because you have boats going through, so you’d need to have a mechanical bridge that opens up or a very high bridge. But we can connect two sides of canal [by using] autonomous boats that become dynamic, responsive architecture that float on the water.”

To reach that goal, the researchers are further developing the roboats to ensure they can safely hold people, and are robust to all weather conditions, such as heavy rain. They’re also making sure the roboats can effectively connect to the sides of the canals, which can vary greatly in structure and design.

Contacts and sources:
Rob Matheson

Massachusetts Institute of Technology

Hypothetical Particle Could Be Heavyweight Candidate for Dark Matter

Almost a quarter of the universe stands literally in the shadows. According to cosmologists’ theories, 25.8% of it is made up of dark matter, whose presence is signaled essentially only by its gravitational pull. What this substance consists of remains a mystery. 

Hermann Nicolai, Director at the Max Planck Institute for Gravitational Physics in Potsdam, and his colleague Krzysztof Meissner from the University of Warsaw have now proposed a new candidate - a superheavy gravitino. The existence of this still hypothetical particle follows from a hypothesis that seeks to explain how the observed spectrum of quarks and leptons in the standard model of particle physics might emerge from a fundamental theory. In addition, the researchers describe a possible method for actually tracking down this particle.

Looking at dark matter: this photo is a montage of several images and shows the colliding galaxy clusters collectively known as the “Bullet Cluster” (1E 0657-56). The galaxies visible in optical light in the background image are overlaid with X-rays from the intergalactic gas clouds (pink), as well as the mass distribution calculated from gravitational lensing effects and therefore – indirectly – the dark matter (blue).

Credit: © NASA/CXC/M. Weiss – Chandra X-Ray Observatory

The standard model of particle physics encompasses the building blocks of matter and the forces that hold them together. It states that there are six different quarks and six leptons that are grouped into three “families”. However, the matter around us and we ourselves are ultimately made up of only three particles from the first family: the up and down quarks and the electron, which is a member of the lepton family.

Until now, this long-established standard model has remained unchanged. The Large Hadron Collider (LHC) at CERN in Geneva was brought into service around ten years ago with the main purpose of exploring what might lie beyond it. However, after ten years of taking data, scientists have failed to detect any new elementary particles, apart from the Higgs boson, despite widely held expectations to the contrary. In other words, measurements at the LHC have so far failed to provide any hints whatsoever of “new physics” beyond the standard model. These findings stand in stark contrast to numerous proposed extensions of this model that suggest a large number of new particles.

In an earlier article published in Physical Review Letters, Hermann Nicolai and Krzysztof Meissner have presented a new hypothesis that seeks to explain why only the already known elementary particles occur as basic building blocks of matter in Nature – and why, contrary to what was previously thought, no new particles are to be expected in the energy range accessible to current or conceivable future experiments.

In addition, the two researchers postulate the existence of supermassive gravitinos, which could be highly unusual candidates for dark matter. In a second publication, which recently appeared in the journal Physical Review D, they also set out a proposal for how to track these gravitinos down.

In their work, Nicolai and Meissner take up an old idea from the Nobel Prize winner Murray Gell-Mann that is based on the “N=8 Supergravity” theory. One key element of their proposal is a new type of infinite-dimensional symmetry that is intended to explain the observed spectrum of the known quarks and leptons in three families. “Our hypothesis actually produces no additional particles for ordinary matter that would then need to be argued away because they do not show up in accelerator experiments,” says Hermann Nicolai. “By contrast, our hypothesis can in principle explain precisely what we see, in particular the replication of quarks and leptons in three families.”

However, processes in the cosmos cannot be explained entirely by the ordinary matter that we are already aware of. Galaxies are one sign of this: they rotate at high speed, and the visible matter – which accounts for only about 5% of the matter in the universe – would not be enough to hold them together. So far, however, no one knows what the rest is made of, despite numerous suggestions. The nature of dark matter is therefore one of the most important unanswered questions in cosmology.

“The common expectation is that dark matter is made up of an elementary particle, and that it hasn’t been possible to detect this particle yet because it interacts with ordinary matter almost exclusively by the gravitational force,” says Hermann Nicolai. The model developed in collaboration with Krzysztof Meissner offers a new candidate for a dark-matter particle of this kind, albeit one with completely different properties from all of the candidates discussed so far, such as axions or WIMPs. The latter interact only very weakly with known matter. The same holds true for the very light gravitinos that have been repeatedly proposed as dark matter candidates in connection with low energy supersymmetry. However, the present proposal goes in a completely different direction, in that it no longer assigns a primary role to supersymmetry, even though the scheme descends from maximal N=8 supergravity. “In particular, our scheme predicts the existence of superheavy gravitinos, which – unlike the usual candidates and unlike the previously considered light gravitinos – would also interact strongly and electromagnetically with ordinary matter,” says Hermann Nicolai.

Their large mass means that these particles could only occur in very dilute form in the universe; otherwise, they would “overclose” the universe and thus lead to its early collapse. According to the Max Planck researcher, one actually wouldn’t need very many of them to explain the dark matter content in the universe and in our galaxy – one particle per 10,000 cubic kilometres would be sufficient.

The mass of the particle postulated by Nicolai and Meissner lies in the region of the Planck mass – that is, around a hundred millionth of a kilogram. In comparison, protons and neutrons – the building blocks of the atomic nucleus – are around ten quintillion (ten million trillion) times lighter. In intergalactic space, the density would be even lower.
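As a back-of-envelope check, the quoted numbers can be compared with the commonly cited local dark-matter density of roughly 0.3 GeV/cm³ near the Sun (a standard reference value, not taken from the article):

```python
# Back-of-envelope check: does one Planck-mass gravitino per
# 10,000 cubic kilometres come out near the galactic dark-matter
# density? (All figures besides the article's "one per 10,000 km^3"
# are standard reference values.)

PLANCK_MASS_KG = 2.18e-8      # ~ a hundred millionth of a kilogram
VOLUME_KM3 = 1.0e4            # one particle per 10,000 cubic km

rho_gravitino = PLANCK_MASS_KG / VOLUME_KM3   # kg per km^3

# Commonly quoted local dark-matter density: ~0.3 GeV/cm^3
GEV_TO_KG = 1.783e-27
CM3_PER_KM3 = 1.0e15
rho_local_dm = 0.3 * GEV_TO_KG * CM3_PER_KM3  # kg per km^3

ratio = rho_gravitino / rho_local_dm
print(f"gravitino density: {rho_gravitino:.2e} kg/km^3")
print(f"local DM density:  {rho_local_dm:.2e} kg/km^3")
print(f"ratio: {ratio:.1f}")  # same order of magnitude
```

The two densities agree to within an order of magnitude, consistent with the article's claim that a very dilute population of such particles would suffice.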

“The stability of these heavy gravitinos hinges on their unusual quantum numbers (charges),” says Nicolai. “Specifically, there are quite simply no final states with the corresponding charges in the standard model into which these gravitinos could decay – otherwise, they would have disappeared shortly after the Big Bang.”

Their strong and electromagnetic interactions with known matter may make these dark matter particles easier to track down despite their extreme rarity. One possibility is to search for them with dedicated time-of-flight measurements deep underground, as these particles move a great deal slower than the speed of light, unlike ordinary elementary particles originating from cosmic radiation. Nevertheless, they would penetrate the Earth without effort because of their large mass – like a cannon ball that cannot be stopped by a swarm of mosquitoes.

This fact gives the researchers the idea of using our planet itself as a “paleo-detector”: the Earth has been orbiting through interplanetary space for some 4.5 billion years, during which time it must have been penetrated by many of these massive gravitinos. In the process, the particles should have left long, straight ionisation tracks in the rock, but it may not be easy to distinguish them from tracks caused by known particles.

“Ionising radiation is known to cause lattice defects in crystal structures. It may be possible to detect relics of such ionisation tracks in crystals that remain stable over millions of years,” says Hermann Nicolai. Because of its long “exposure time” such a search strategy could also be successful in case dark matter is not homogeneously distributed inside galaxies but subject to local density fluctuations – which could also explain the failure of searches for more conventional dark matter candidates so far.

Contacts and sources:
Dr. Elke Müller
Max Planck Institute for Gravitational Physics, Potsdam-Golm

Planck Mass Charged Gravitino Dark Matter
K. A. Meissner, H. Nicolai
Physical Review D 100, 035001 (2019)

Standard Model Fermions and Infinite-Dimensional R Symmetries
K. A. Meissner, H. Nicolai
Physical Review Letters 121, 091601 (2018)

Waist Size, Not Body Mass Index, Likely More Predictive of Coronary Artery Disease

For years, women have been told that weight gain could lead to heart disease. A new study indicates that it is the location of the fat that matters most, with abdominal fat representing the greatest harm and not overall body mass index (BMI) when assessing risk for coronary artery disease (CAD).

Eight women with the same body mass index (BMI of 30) but with different weight distribution and abdominal volume, and therefore different Body Volume Index (BVI) ratings.

Credit:  Richard2902 /Wikimedia Commons

Results are published online in Menopause, the journal of The North American Menopause Society (NAMS). Because CAD remains the leading cause of death worldwide, there is tremendous attention given to its modifiable risk factors. 

Estrogen protects women’s cardiovascular systems before menopause, which helps explain why the incidence of CAD in premenopausal women is lower than in men. However, as women’s estrogen levels decline during and after menopause, the incidence of CAD in postmenopausal women outpaces that of similarly aged men.

Obesity has long been known as a risk factor for CAD because it causes endothelial cell dysfunction, insulin resistance, and coronary atherosclerosis, among other problems. It is also often accompanied by other cardiovascular risk factors, such as hypertension and diabetes. In the past, it has been suggested that overall obesity (which is often defined by BMI) is a primary risk factor. Few studies have attempted to compare the effect of overall obesity versus central obesity, which is typically described by waist circumference and/or waist-to-hip ratio.
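For readers unfamiliar with the two measures, here is a minimal sketch contrasting them; the obesity cut-offs for women are assumed from common clinical guidance, not taken from this study:

```python
# Illustrative comparison of overall obesity (BMI) versus central
# obesity (waist-to-hip ratio). Thresholds are hypothetical examples
# drawn from common clinical guidance for women.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def waist_to_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    return waist_cm / hip_cm

def classify(weight_kg, height_m, waist_cm, hip_cm):
    # Assumed cut-offs: BMI >= 30 -> overall obesity;
    # WHR >= 0.85 -> central obesity (women).
    return {
        "overall_obesity": bmi(weight_kg, height_m) >= 30,
        "central_obesity": waist_to_hip_ratio(waist_cm, hip_cm) >= 0.85,
    }

# A woman can be centrally obese with a normal BMI -- the group the
# study flags as at risk despite a "healthy" weight:
print(classify(62, 1.65, 88, 98))
# BMI ~22.8 (normal), WHR ~0.90 (central obesity)
```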


The results of this new study of nearly 700 Korean women, however, demonstrated that the presence of obstructive CAD was significantly higher in women with central obesity.

No significant difference was identified based on BMI, indicating that overall obesity was not a risk factor for obstructive CAD. These results are especially relevant for postmenopausal women because menopause causes a change in body fat distribution, especially in the abdominal area. Findings were published in the article “Association between obesity type and obstructive coronary artery disease in stable symptomatic postmenopausal women: data from the KoRean wOmen’S chest pain rEgistry (KoROSE).”

“The findings of this study are consistent with what we know about the detrimental effects of central obesity. Not all fat is the same, and central obesity is particularly dangerous because it is associated with risk for heart disease, the number one killer of women. Identifying women with excess abdominal fat, even with a normal BMI, is important so that lifestyle interventions can be implemented,” says Dr. Stephanie Faubion, NAMS medical director.

Contacts and sources:
Eileen Petridis
The North American Menopause Society (NAMS)

Citation: Association between obesity type and obstructive coronary artery disease in stable symptomatic postmenopausal women.
Jun Hwan Cho, Hack-Lyoung Kim, Myung-A Kim, Sohee Oh, Mina Kim, Seong Mi Park, Hyun Ju Yoon, Mi Seung Shin, Kyung-Soon Hong, Gil Ja Shin, Wan-Joo Shim. Menopause, 2019; 1 DOI: 10.1097/GME.0000000000001392

The Face of a 3.8 Million Year Old Human Ancestor, A Game Changer

Researchers discover remarkably complete 3.8 million-year-old cranium of Australopithecus anamensis at Woranso-Mille in Ethiopia

Australopithecus anamensis is the earliest-known species of Australopithecus and widely accepted as the progenitor of “Lucy’s” species, Australopithecus afarensis. Until now, A. anamensis was known mainly from jaws and teeth. Yohannes Haile-Selassie of the Cleveland Museum of Natural History, Stephanie Melillo of the Max Planck Institute for Evolutionary Anthropology and their colleagues have discovered the first cranium of A. anamensis at the paleontological site of Woranso-Mille, in the Afar Region of Ethiopia.

The 3.8 million-year-old cranium of Australopithecus anamensis is remarkably complete.

Credit: © Dale Omori, Cleveland Museum of Natural History

The 3.8 million-year-old fossil cranium represents a time interval between 4.1 and 3.6 million years ago, when A. anamensis gave rise to A. afarensis. Researchers used morphological features of the cranium to identify which species the fossil represents. "Features of the upper jaw and canine tooth were fundamental in determining that MRD was attributable to A. anamensis", said Melillo. "It is good to finally be able to put a face to the name." The MRD cranium, together with other fossils previously known from the Afar, show that A. anamensis and A. afarensis co-existed for approximately 100,000 years. This temporal overlap challenges the widely-accepted idea of a linear transition between these two early human ancestors. Haile-Selassie said: "This is a game changer in our understanding of human evolution during the Pliocene."

Working for the past 15 years at the site, the team discovered the cranium (MRD-VP-1/1, here referred to as "MRD") in February 2016. In the years following their discovery, paleoanthropologists of the project conducted extensive analyses of MRD, while project geologists worked on determining the age and context of the specimen. The results of the team’s findings are published online in two papers in the international scientific journal Nature.

Discovery of the cranium

The cranium was discovered in 2016 at Miro Dora, Mille district of the Afar Regional State in Ethiopia.

Credit: © Yohannes Haile-Selassie, Cleveland Museum of Natural History

The Woranso-Mille project has been conducting field research in the central Afar region of Ethiopia since 2004. The project has collected more than 12,600 fossil specimens representing about 85 mammalian species. The fossil collection includes about 230 fossil hominin specimens dating to between more than 3.8 and about 3.0 million years ago. The first piece of MRD, the upper jaw, was found by Ali Bereino (a local Afar worker) on February 10, 2016 at a locality known as Miro Dora, Mille district of the Afar Regional State. The specimen was exposed on the surface and further investigation of the area resulted in the recovery of the rest of the cranium. "I couldn’t believe my eyes when I spotted the rest of the cranium. It was a eureka moment and a dream come true", said Haile-Selassie.

Geology and age determination

In a companion paper published in the same issue of Nature, Beverly Saylor of Case Western Reserve University and her colleagues determined the age of the fossil as 3.8 million years by dating minerals in layers of volcanic rocks nearby. They mapped the dated levels to the fossil site using field observations and the chemistry and magnetic properties of rock layers. Saylor and her colleagues combined the field observations with analysis of microscopic biological remains to reconstruct the landscape, vegetation and hydrology where MRD died.

MRD was found in the sandy deposits of a delta where a river entered a lake. The river likely originated in the highlands of the Ethiopian plateau, while the lake developed at lower elevations where rift activity caused the Earth’s surface to stretch and thin, creating the lowlands of the Afar region. Fossil pollen grains and chemical remains of fossil plants and algae that are preserved in the lake and delta sediments provide clues about the ancient environmental conditions. Specifically, they indicate that the watershed of the lake was mostly dry but that there were also forested areas on the shores of the delta or along the side of the river that fed the delta and lake system. "MRD lived near a large lake in a region that was dry. We’re eager to conduct more work in these deposits to understand the environment of the MRD specimen, the relationship to climate change and how it affected human evolution, if at all", said Naomi Levin, a co-author on the study from the University of Michigan.

A new face in the crowd

The facial reconstruction of "MRD" by John Gurche was made possible through a generous contribution from Susan and George Klein.
Credit: © Matt Crow, Cleveland Museum of Natural History

Australopithecus anamensis is the oldest known member of the genus Australopithecus. Due to the cranium’s rare near-complete state, the researchers identified never-before-seen facial features in the species. "MRD has a mix of primitive and derived facial and cranial features that I didn’t expect to see on a single individual", Haile-Selassie said. Some characteristics were shared with later species, while others had more in common with those of even older and more primitive early human ancestor groups such as Ardipithecus and Sahelanthropus. "Until now, we had a big gap between the earliest-known human ancestors, which are about 6 million years old, and species like 'Lucy', which are two to three million years old. One of the most exciting aspects of this discovery is how it bridges the morphological space between these two groups", said Melillo.

Branching out

Among the most important findings was the team’s conclusion that A. anamensis and its descendant species, the well-known A. afarensis, coexisted for a period of at least 100,000 years. This finding contradicts the long-held notion of an anagenetic relationship between these two taxa, instead supporting a branching pattern of evolution. Melillo explains: "We used to think that A. anamensis gradually turned into A. afarensis over time. We still think that these two species had an ancestor-descendant relationship, but this new discovery suggests that the two species were actually living together in the Afar for quite some time. It changes our understanding of the evolutionary process and brings up new questions - were these animals competing for food or space?"

This conclusion is based on the assignment of the 3.8-million-year-old MRD to A. anamensis and of the 3.9-million-year-old hominin cranial fragment, commonly known as the Belohdelie frontal, to A. afarensis. The Belohdelie frontal was discovered in the Middle Awash of Ethiopia by a team of paleontologists in 1981, but its taxonomic status has been questioned in the intervening years.

The new MRD cranium enabled the researchers to characterize frontal morphology in A. anamensis for the first time and to recognize that these features differed from the morphology common to the Belohdelie frontal and to other cranial specimens already known for Lucy’s species. As a result, the new study confirms that the Belohdelie frontal belonged to an individual of Lucy’s species. This identification extends the earliest record of A. afarensis back to 3.9 million years ago, while the discovery of MRD nudges the last appearance date of A. anamensis forward to 3.8 million years, indicating an overlap period of at least 100,000 years.

Contacts and sources:
Dr. Stephanie Melillo, Dr. Yohannes Haile-Selassie, Sandra Jacob
Max Planck Institute for Evolutionary Anthropology


A 3.8-million-year-old hominin cranium from Woranso-Mille, Ethiopia.
Yohannes Haile-Selassie, Stephanie M. Melillo, Antonino Vazzana, Stefano Benazzi, Timothy M. Ryan. Nature, 2019; DOI: 10.1038/s41586-019-1513-8

Age and context of mid-Pliocene hominin cranium from Woranso-Mille, Ethiopia. Beverly Z. Saylor, Luis Gibert, Alan Deino, Mulugeta Alene, Naomi E. Levin, Stephanie M. Melillo, Mark D. Peaple, Sarah J. Feakins, Benjamin Bourel, Doris Barboni, Alice Novello, Florence Sylvestre, Stanley A. Mertzman, Yohannes Haile-Selassie. Nature, 2019; DOI: 10.1038/s41586-019-1514-7

Wednesday, August 28, 2019

First Direct Evidence for a Mantle Plume Origin of Jurassic Flood Basalts in Southern Africa

Convection currents have stirred Earth’s mantle for some 4.5 billion years. The plume theory states that some of the currents bring material from the core-mantle boundary to the planetary surface.

Remnants of the Mesozoic flood basalts on the reconstructed Gondwana supercontinent. In the case of the Karoo province, intrusive rocks (formed when the feeding channels and magma chambers crystallized) are also shown. Main research areas are marked in red.
Image: Luomus / Jussi Heinonen

The origin of the gigantic magma eruptions that led to global climatic crises and extinctions of species has remained controversial. Two competing paradigms explain these cataclysms, either by the splitting of tectonic plates at the Earth's surface or by the impacts of hot currents, called mantle plumes, from the planetary interior. A group of geochemists from Finland and Mozambique suggests it has found the smoking gun in the Karoo magma province. Their new article reports the discovery of primitive picrite lavas that may provide the first direct sample of a hot mantle plume underneath southern Africa in the Jurassic period.

Professor Daúd Jamal standing next to picrite lava outcrops on the Luenha River, Central Mozambique.
Credit: Jussi Heinonen

The great Jurassic lava flows that flooded across southern Africa and parts of East Antarctica prior to the splitting of the Pangea supercontinent make up one of the largest volcanic systems on Earth. The magma eruptions caused global environmental turmoil and the extinctions of species. The rapid origin of this Karoo flood basalt province in southern Africa has been frequently associated with the melting of a large plume that ascended from the deep mantle around 180 million years ago. However, the plume model has lacked confirmation from lava compositions that preserve a geochemical 'plume signature'.

"To our knowledge, the Luenha picrites are the first lava samples that could originate from the plume source that has been previously inferred from various geological and geophysical data on the Karoo province. Therefore they allow compositional analysis of this source," says Sanni Turunen, the leading author and a doctoral student at the Finnish Museum of Natural History, which is part of the University of Helsinki. In the case of the Luenha picrites, named after the research area near the Luenha River, the geochemical compositions indicate a hot magma source that is in many respects different from previously reported magma sources in the Karoo province. They show compositional similarities to magmas formed in other deep mantle plume-related volcanic provinces worldwide.

"It is very important to realize that in huge and complex volcanic systems, such as the Karoo province, large amounts of magmas may be produced from several magma sources", explains Daúd Jamal, professor at the Eduardo Mondlane University, in Mozambique.

Picrite lava outcrops at the Luenha River, Central Mozambique.

Credit:  Arto Luttinen

"Previous studies of Karoo picrites in Africa and Antarctica by us and by other groups have suggested the generation of magmas in the upper mantle, but our new results indicate plume sources were also involved", adds Jussi Heinonen, an Academy of Finland fellow at the Department of Geosciences and Geography at the University of Helsinki.

Importantly, the Luenha picrites appear to represent the main source of the voluminous flood basalts of southern Africa. "We were fascinated to realise that the Luenha picrites revealed a type of magma source that was recently predicted using lava compositions, but which had not been confirmed by observational evidence", says Arto Luttinen, senior curator at the Finnish Museum of Natural History. According to the study, the presently available data are compatible with a plume source that has retained the composition of Earth's primitive mantle remarkably well. This is quite unusual given the 4.5-billion-year evolution of the convecting mantle.

Schematic cross-section of the Karoo continental flood basalt province c. 180 million years ago. 1) The mantle melts extensively and 2) the melts intrude the lithosphere (= crust + brittle upper mantle), where they form large magma chambers and mix with it. 3) The contaminated melts proceed upwards and 4) erupt from shield volcanoes or fissures. 5) Some rare melts do not assimilate lithosphere and preserve the original mantle-derived geochemical signature.
Image: Luomus / Jussi Heinonen

Confirmation of the age and evolution of the primitive mantle-like source of the Luenha picrites requires further constraints from future isotopic studies.

"Whatever the exact nature of the Luenha source turns out to be, we feel confident that we have uncovered rocks that help to address the complex origin of large eruptions in new detail", Turunen concludes.

Primitive lavas containing magnesium-rich olivine can record early events of the magmatic system.

Credit: Sanni Turunen

The research will be published in Lithos, Volume 346-347C, in December 2019. The open access article is available online.

Contacts and sources:
Arto Luttinen
University of Helsinki

Red Wine Benefits Linked to Better Gut Health, Study Finds

A study from King's College London has found that people who drank red wine had greater gut microbiota diversity (a sign of gut health) than non-red wine drinkers, as well as lower levels of obesity and 'bad' cholesterol.

In a paper published today in the journal Gastroenterology, a team of researchers from the Department of Twin Research & Genetic Epidemiology, King's College London explored the effect of beer, cider, red wine, white wine and spirits on the gut microbiome (GM) and subsequent health in a group of 916 UK female twins.

Credit: CDC

They found that the GM of red wine drinkers was more diverse compared to non-red wine drinkers. This was not observed with white wine, beer or spirits consumption.

First author of the study, Dr Caroline Le Roy from King's College London said: "While we have long known of the unexplained benefits of red wine on heart health, this study shows that moderate red wine consumption is associated with greater diversity and a healthier gut microbiota that partly explain its long debated beneficial effects on health."

The microbiome is the collection of microorganisms in an environment and plays an important role in human health. An imbalance of 'good' microbes compared to 'bad' in the gut can lead to adverse health outcomes such as reduced immune function, weight gain or high cholesterol.

A person's gut microbiome with a higher number of different bacterial species is considered a marker of gut health.
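Studies like this one typically express "number of different bacterial species" through alpha-diversity metrics. As an illustrative sketch only (the specific metric used in the paper is not stated here), the widely used Shannon index can be computed from per-species read counts:

```python
from math import log

def shannon_diversity(counts):
    """Shannon diversity index H = -sum(p_i * ln p_i) over observed
    species; higher H means more, and more evenly distributed, species."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in props)

# An evenly spread community scores higher than one dominated by a
# single species, even with the same species count and total reads.
even = shannon_diversity([25, 25, 25, 25])   # ≈ 1.386 (= ln 4)
skewed = shannon_diversity([97, 1, 1, 1])    # ≈ 0.168
```

The index rewards evenness as well as richness, which is why a gut dominated by one microbe scores as less "diverse" than one hosting the same species in balanced proportions.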

The team observed that the gut microbiota of red wine consumers contained a greater number of different bacterial species than that of non-consumers. This result was also observed in three different cohorts in the UK, the U.S., and the Netherlands. The authors took into account factors such as age, weight, regular diet and socioeconomic status of the participants and continued to see the association.

The authors believe the main reason for the association is the many polyphenols in red wine. Polyphenols are defence chemicals naturally present in many fruits and vegetables. They have many beneficial properties (including antioxidant effects) and mainly act as a fuel for the microbes present in our system.

Lead author Professor Tim Spector from King's College London said: "This is one of the largest ever studies to explore the effects of red wine in the guts of nearly three thousand people in three different countries and provides insights that the high levels of polyphenols in the grape skin could be responsible for much of the controversial health benefits when used in moderation."

The study also found that red wine consumption was associated with lower levels of obesity and 'bad' cholesterol which was in part due to the gut microbiota.

"Although we observed an association between red wine consumption and the gut microbiota diversity, drinking red wine rarely, such as once every two weeks, seems to be enough to observe an effect. If you must choose one alcoholic drink today, red wine is the one to pick as it seems to potentially exert a beneficial effect on you and your gut microbes, which in turn may also help weight and risk of heart disease. However, it is still advised to consume alcohol with moderation," added Dr Le Roy.

The TwinsUK microbiota project was funded by the National Institutes of Health. TwinsUK is funded by the Wellcome Trust, Medical Research Council, European Union, The CDRF, The Denise Coates Foundation and the National Institute for Health Research (NIHR) through the NIHR BioResource and the NIHR Guy's and St Thomas' Biomedical Research Centre.

Contacts and sources:
Tanya Wood
King's College London

Protein Batteries for Safer, Environmentally Friendly Power Storage

Proteins are good for building muscle, but their building blocks also might be helpful for building sustainable organic batteries that could someday be a viable substitute for conventional lithium-ion batteries, without their safety and environmental concerns. By using synthetic polypeptides — which make up proteins — and other polymers, researchers have taken the first steps toward constructing electrodes for such power sources. The work could also provide a new understanding of electron-transfer mechanisms.

“The trend in the battery field right now is to look at how the electrons are transported within a polymer network,” says Tan Nguyen, a Ph.D. student who helped develop the project. “The beauty of polypeptides is that we can control the chemistry on their side chains in 3D without changing the geometry of the backbone, or the main part of the structure. Then we can systematically examine the effect of changing different aspects of the side chains.”

Lithium-ion batteries could one day be replaced by power sources made of polypeptides, the building blocks of proteins.
Credit: Janaka Dharmasena

Current lithium-ion batteries can harm the environment, and because the cost of recycling them is higher than that of manufacturing them from scratch, they often accumulate in landfills. At the moment, there is no safe way of disposing of them. Developing a protein-based, or organic, battery would change this situation.

“The amide bonds along the peptide backbone are pretty stable — so the durability is there, and we can then trigger when they break down for recycling,” says Karen Wooley, Ph.D., who leads the team at Texas A&M University. She envisions that polypeptides could eventually be used in applications such as flow batteries for storing electrical energy. “The other advantage is that by using this protein-like architecture, we’re building in the kinds of conformations that are found in proteins in nature that already transport electrons efficiently,” Wooley says. “We can also optimize this to control battery performance.”

The researchers built the system using electrodes made of composites of carbon black, constructing polypeptides that contain either viologen or 2,2,6,6-tetramethylpiperidine 1-oxyl (TEMPO). They attached viologens to the matrix used for the anode, which is the negative electrode, and used a TEMPO-containing polypeptide for the cathode, which is the positive electrode. The viologens and TEMPO are redox-active molecules. “What we’ve measured so far for the range, the potential window between the two materials, is about 1.5 volts, suitable for low-energy requirement applications, such as biosensors,” Nguyen says.

Credit: American Chemical Society

For potential use in an organic battery, Nguyen has synthesized several polymers that adopt different conformations, such as a random coil, an alpha helix and a beta sheet, to investigate their electrochemical characteristics. With these peptides in hand, Nguyen is now collaborating with Alexandra Danielle Easley, a Ph.D. student in the laboratory of Jodie Lutkenhaus, Ph.D., also at Texas A&M University, to build the battery prototypes. Part of that work will include testing to better understand how the polymers function when they're organized on a substrate.

While this early stage research has far to go before organic-based batteries are commercially available, the flexibility and variety of structures that proteins can provide promise wide potential for sustainable energy storage that is safer for the environment.

The researchers acknowledge support and funding from the National Science Foundation, the Welch Foundation and the U.S. Department of Energy.

The researchers presented their results at the American Chemical Society (ACS) Fall 2019 National Meeting & Exposition. ACS, the world’s largest scientific society, is holding the meeting here through Thursday. It features more than 9,500 presentations on a wide range of science topics.

Contacts and sources:
Katie Cottingham, Ph.D.
American Chemical Society


Device Vanishes on Command after Military Missions

A polymer that self-destructs? While once a fictional idea, new polymers now exist that are rugged enough to ferry packages or sensors into hostile territory and vaporize immediately upon a military mission’s completion. The material has been made into a rigid-winged glider and a nylon-like parachute fabric for airborne delivery across distances of a hundred miles or more. It could also be used someday in building materials or environmental sensors.

“This is not the kind of thing that slowly degrades over a year, like the biodegradable plastics that consumers might be familiar with,” says Paul Kohl, Ph.D., whose team developed the material. “This polymer disappears in an instant when you push a button to trigger an internal mechanism or the sun hits it.” The disappearing polymers were developed for the Department of Defense, which is interested in deploying electronic sensors and delivery vehicles that leave no trace of their existence after use, thus avoiding discovery and alleviating the need for device recovery.

The key to making a polymer disappear, or break apart, is “ceiling temperature.” Below the ceiling temperature, a polymer configuration is favored, but above that temperature, the polymer will break apart into its component monomers. Common polymers, like polystyrene, have a ceiling temperature above ambient temperature and are very stable. And even when they are warmed above their ceiling temperature, some of these materials can take a long time to decompose. For example, thousands of chemical bonds link all of the monomers together in polystyrene, and all of these bonds must be broken for the materials to decompose. But with low ceiling-temperature polymers, such as the cyclic ones Kohl is using, only one bond needs to break, and then all of the other bonds come apart, so the depolymerization happens quickly. The process can be initiated by a temperature spike from an outside or embedded source, or by a light-sensitive catalyst.
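The contrast between slow random scission and rapid unzipping can be illustrated with a toy probability model (purely illustrative; the function names and rate values below are invented, not taken from Kohl's work): a high-ceiling-temperature chain only disappears once every bond has broken independently, while a low-ceiling-temperature cyclic polymer needs just one initiating event before the whole chain comes apart.

```python
import random

def random_scission_steps(n_bonds, p_break=0.01, rng=None):
    # High-ceiling-temperature polymer (polystyrene-like): every bond
    # must break independently before the chain is fully decomposed.
    rng = rng or random.Random(0)
    intact, steps = n_bonds, 0
    while intact > 0:
        intact -= sum(1 for _ in range(intact) if rng.random() < p_break)
        steps += 1
    return steps

def unzip_steps(p_initiate=0.01, rng=None):
    # Low-ceiling-temperature cyclic polymer: once a single bond
    # breaks, the rest of the chain "unzips" in the same instant.
    rng = rng or random.Random(0)
    steps = 1
    while rng.random() >= p_initiate:
        steps += 1
    return steps
```

Averaged over many chains at the same per-bond probability, waiting for every bond in a long chain to break takes several times longer than waiting for a single initiation, which mirrors why the cyclic polymers vaporize almost instantly once triggered.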

A polymer (left) depolymerizes and disappears after being exposed to sunlight for 10 min (right).
Credit: Paul Kohl

For many years, researchers have attempted to make these polymers, but were unsuccessful because of the materials’ instability at room temperature. Kohl’s research group at the Georgia Institute of Technology discovered that they could overcome this issue if they were careful to remove all impurities formed during the synthesis. In addition, they found a number of aldehydes, including phthalaldehyde, that readily form cyclic polymers. Once they had optimized this polymer’s synthesis, they focused on ways to make it disappear.

To do this, the researchers incorporated into the polymer a photosensitive additive, which absorbs light and catalyzes depolymerization. “Initially, we made it photosensitive to just ultraviolet light so we could make the parts in a well-lit room with fluorescent lighting, and it was just fine; it was stable,” Kohl says. But when the polymer was placed outside, exposure to sunlight vaporized it (or reverted it back to a liquid, in some cases). A vehicle deployed at night would, therefore, disappear with the sunrise.

Kohl’s group has since discovered new additives that can trigger depolymerization at different wavelengths of visible light, so the polymer can decompose indoors. “We have polymers designed for applications in which you come in the room, you turn the light on, and the thing disappears,” Kohl says.

The group has also determined how to stall depolymerization. “We have a way to delay the depolymerization for a specific amount of time – one hour, two hours, three hours,” he says. “You would keep it in the dark until you were going to use it, but then you would deploy it during the day, and you would have three hours before it decomposes.” The team has considered chemical methods to start the decomposition process, as well. In addition, they are testing various copolymers that can be added to phthalaldehyde to change the material’s properties without altering its ability to vanish.

Kohl says that this “James Bond”-like material is already being incorporated in military devices by other researchers. But he also sees the potential of the materials for non-military applications. For example, the researchers have made a disappearing epoxy for a temporary adhesive that could be used in building materials. They also imagine the material could be used as sensors for environmental monitoring. Once the sensors are finished collecting data, there is no risk of littering the environment since they can be triggered to vaporize. The material can also be used for delivery vehicles in remote areas where recovery is difficult.

The researchers acknowledge support and funding from the U.S. Department of Defense.

The researchers presented their results at the American Chemical Society (ACS) Fall 2019 National Meeting & Exposition.

Contacts and sources:
Katie Cottingham, Ph.D.
American Chemical Society