Wednesday, February 28, 2018

New Data Helps Explain Recent Fluctuations in Earth’s Magnetic Field

Using new data gathered from sites in southern Africa, University of Rochester researchers have extended their record of Earth’s magnetic field back thousands of years to the first millennium.

The record provides historical context to help explain recent, ongoing changes in the magnetic field, most prominently in an area in the Southern Hemisphere known as the South Atlantic Anomaly.

“We’ve known for quite some time that the magnetic field has been changing, but we didn’t really know if this was unusual for this region on a longer timescale, or whether it was normal,” says Vincent Hare, who recently completed a postdoctoral associate appointment in the Department of Earth and Environmental Sciences (EES) at the University of Rochester, and is lead author of a paper published in Geophysical Research Letters.

Earth’s geomagnetic field surrounds and protects our planet from harmful space radiation.
Earth's magnetic field connects the North Pole with the South Pole in this NASA-created image.
Credit: NASA Goddard Space Flight Center

Weakening magnetic field a recurrent anomaly

The new data also provides more evidence that a region in southern Africa may play a unique role in magnetic pole reversals.

The magnetic field that surrounds Earth not only dictates whether a compass needle points north or south, but also protects the planet from harmful radiation from space. Nearly 800,000 years ago, the poles were switched: north pointed south and vice versa. The poles have never completely reversed since, but for the past 160 years, the strength of the magnetic field has been decreasing at an alarming rate. The region where it is weakest, and continuing to weaken, is a large area stretching from Chile to Zimbabwe called the South Atlantic Anomaly.

In order to put these relatively recent changes into historical perspective, Rochester researchers—led by John Tarduno, a professor and chair of EES—gathered data from sites in southern Africa, which is within the South Atlantic Anomaly, to compile a record of Earth’s magnetic field strength over many centuries. Data previously collected by Tarduno and Rory Cottrell, an EES research scientist, together with theoretical models developed by Eric Blackman, a professor of physics and astronomy at Rochester, suggest the core region beneath southern Africa may be the birthplace of recent and future pole reversals.

“We were looking for recurrent behavior of anomalies because we think that’s what is happening today and causing the South Atlantic Anomaly,” Tarduno says. “We found evidence that these anomalies have happened in the past, and this helps us contextualize the current changes in the magnetic field.”

The researchers discovered that the magnetic field in the region fluctuated from 400-450 AD, from 700-750 AD, and again from 1225-1550 AD. This South Atlantic Anomaly, therefore, is the most recent display of a recurring phenomenon in Earth’s core beneath Africa that then affects the entire globe.

“We’re getting stronger evidence that there’s something unusual about the core-mantle boundary under Africa that could be having an important impact on the global magnetic field,” Tarduno says.

A pole reversal? Not yet, say researchers

The magnetic field is generated by swirling, liquid iron in Earth’s outer core. It is here, roughly 1800 miles beneath the African continent, that a special feature exists. Seismological data has revealed a denser region deep beneath southern Africa called the African Large Low Shear Velocity Province. The region is located right above the boundary between the hot liquid outer core and the stiffer, cooler mantle. Sitting on top of the liquid outer core, it may sink slightly, disturbing the flow of iron and ultimately affecting Earth’s magnetic field.

A major change in the magnetic field would have wide-reaching ramifications; the magnetic field induces currents in anything with long wires, including the electrical grid. Changes in the magnetic field could therefore cause electrical grid failures, navigation system malfunctions, and satellite breakdowns. A weakening of the magnetic field might also mean that more harmful radiation reaches Earth, triggering an increase in the incidence of skin cancer.

Hare and Tarduno warn, however, that their data does not necessarily portend a complete pole reversal.

“We now know this unusual behavior has occurred at least a couple of times before the past 160 years, and is part of a bigger long-term pattern,” Hare says. “However, it’s simply too early to say for certain whether this behavior will lead to a full pole reversal.”

Even if a complete pole reversal is not in the near future, however, the weakening of the magnetic field strength is intriguing to scientists, Tarduno says. “The possibility of a continued decay in the strength of the magnetic field is a societal concern that merits continued study and monitoring.”

This study was funded by the US National Science Foundation.

In the Field: “Archaeomagnetism” at work

Archaeologist Tom Huffman of the University of Witwatersrand in South Africa helps John Tarduno and his students orient and collect samples at a field site in southern Africa.

Credit: University of Rochester photo / courtesy John Tarduno

The researchers gathered data for this project from an unlikely source: ancient clay remnants from southern Africa dating back to the early and late Iron Ages. As part of a field called “archaeomagnetism,” geophysicists team up with archaeologists to study the past magnetic field.

The Rochester team, which included several undergraduate students, collaborated with archaeologist Thomas Huffman of the University of Witwatersrand in South Africa, a leading expert on Iron Age southern Africa. The group excavated clay samples from a site in the Limpopo River Valley, which borders Zimbabwe, South Africa, and Botswana.

During the Iron Age in southern Africa, around the time of the first millennium, there was a group of Bantu-speaking people who cultivated grain and lived in villages composed of grain bins, huts, and cattle enclosures. Droughts were devastating to their agriculturally based culture. During periods of drought, they would perform elaborate ritual cleansings of the villages by burning down the huts and grain bins.

“When you burn clay at very high temperatures, you actually stabilize the magnetic minerals, and when they cool from these very high temperatures, they lock in a record of the earth’s magnetic field,” Tarduno says.

Researchers excavate the samples, orient them in the field, and bring them back to the lab to conduct measurements using magnetometers. In this way, they are able to use the samples to compile a record of Earth’s magnetic field in the past.
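
The release does not describe the laboratory protocol, but the underlying principle can be sketched. The following is an illustrative example only, with made-up numbers: in Thellier-style paleointensity work, the thermoremanent magnetization (TRM) a mineral acquires on cooling is, for weak fields, proportional to the ambient field, so comparing natural remanence lost to lab remanence gained in a known field yields the ancient field strength.

```python
# Illustrative sketch (hypothetical numbers, not data from this study)
# of the proportionality principle behind Thellier-style paleointensity
# estimates from burnt-clay samples: B_ancient = B_lab * (NRM / TRM).

def paleointensity(nrm_lost, trm_gained, b_lab_microtesla):
    """Ancient field estimated from natural remanence (NRM) lost versus
    lab-induced remanence (TRM) gained in a known laboratory field."""
    return b_lab_microtesla * (nrm_lost / trm_gained)

# Hypothetical sample: loses 2.0 units of natural remanence while
# gaining 2.5 units of lab remanence in a 30 microtesla field.
print(paleointensity(2.0, 2.5, 30.0))  # 24.0 (microtesla)
```
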

Contacts and sources:
Lindsey Valich
University of Rochester

Search for First Stars in the Universe Uncovers "Dark Matter"

A new discovery offers the first direct proof that dark matter exists and that it is made up of low-mass particles, Tel Aviv University and Arizona State University researchers say.

A team of astronomers led by Prof. Judd Bowman of Arizona State University unexpectedly stumbled upon "dark matter," the most mysterious building block of outer space, while attempting to detect the earliest stars in the universe through radio wave signals, according to a study published this week in Nature.

The idea that these signals implicate dark matter is based on a second Nature paper published this week, by Prof. Rennan Barkana of Tel Aviv University, which suggests that the signal is proof of interactions between normal matter and dark matter in the early universe. According to Prof. Barkana, the discovery offers the first direct proof that dark matter exists and that it is composed of low-mass particles.

The signal, recorded by a novel radio telescope called EDGES, dates to 180 million years after the Big Bang.

What the universe is made of

"Dark matter is the key to unlocking the mystery of what the universe is made of," says Prof. Barkana, Head of the Department of Astrophysics at TAU's School of Physics and Astronomy. "We know quite a bit about the chemical elements that make up the earth, the sun and other stars, but most of the matter in the universe is invisible and known as 'dark matter.' The existence of dark matter is inferred from its strong gravity, but we have no idea what kind of substance it is. Hence, dark matter remains one of the greatest mysteries in physics.

Pattern of radio waves on the sky caused by the combination of radiation from the first stars and the effect of dark matter. Blue regions are those where the dark matter cooled down the ordinary matter most strongly. If a similar pattern is detected with new radio telescopes over the next few years, this will confirm that the first stars have revealed the dark matter.
Credit: Prof. Rennan Barkana

"To solve it, we must travel back in time. Astronomers can see back in time, since it takes light time to reach us. We see the sun as it was eight minutes ago, while the immensely distant first stars in the universe appear to us on earth as they were billions of years in the past."

Prof. Bowman and colleagues reported the detection of a radio wave signal at a frequency of 78 megahertz. The width of the observed profile is largely consistent with expectations, but they also found it had a larger amplitude (corresponding to deeper absorption) than predicted, indicating that the primordial gas was colder than expected.

Prof. Barkana suggests that the gas cooled through the interaction of hydrogen with cold, dark matter.

"Tuning in" to the early universe

"I realized that this surprising signal indicates the presence of two actors: the first stars, and dark matter," says Prof. Barkana. "The first stars in the universe turned on the radio signal, while the dark matter collided with the ordinary matter and cooled it down. Extra-cold material naturally explains the strong radio signal."

Physicists expected that any such dark matter particles would be heavy, but the discovery indicates low-mass particles. Based on the radio signal, Prof. Barkana argues that the dark-matter particle is no heavier than several proton masses. "This insight alone has the potential to reorient the search for dark matter," says Prof. Barkana.

Once stars formed in the early universe, their light was predicted to have penetrated the primordial hydrogen gas, altering its internal structure. This would cause the hydrogen gas to absorb photons from the cosmic microwave background, at the specific wavelength of 21 cm, imprinting a signature in the radio spectrum that should be observable today at radio frequencies below 200 megahertz. The observation matches this prediction except for the unexpected depth of the absorption.
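
As a quick consistency check (not taken from either paper), the redshift implied by the 78 megahertz detection follows directly from the 21 cm line's rest frequency:

```python
# The 21 cm hydrogen line has a rest frequency of ~1420.4 MHz.
# Observing the absorption feature at 78 MHz implies a redshift
#   1 + z = f_rest / f_obs,
# placing the signal in the era of the first stars.
F_REST_MHZ = 1420.405751768  # hydrogen hyperfine (21 cm) rest frequency

def redshift(f_obs_mhz):
    """Redshift implied by an observed 21 cm frequency."""
    return F_REST_MHZ / f_obs_mhz - 1.0

print(f"z = {redshift(78.0):.1f}")  # z = 17.2
```

A redshift of about 17 corresponds to the roughly 180-million-year cosmic age quoted above.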

Prof. Barkana predicts that the dark matter produced a very specific pattern of radio waves that can be detected with a large array of radio antennas. One such array is the SKA, the largest radio telescope in the world, now under construction. "Such an observation with the SKA would confirm that the first stars indeed revealed dark matter," concludes Prof. Barkana.

Contacts and sources:
George Hunka
American Friends of Tel Aviv University (AFTAU)

New Tardigrade Species Discovered

A new tardigrade species has been identified in Japan, according to a study published February 28, 2018 in the open-access journal PLOS ONE by Daniel Stec from the Jagiellonian University, Poland, and colleagues.

Credit: Stec et al (2018)

Tardigrades are microscopic metazoans found all over the world; 167 species were previously known from Japan. For decades, the globally distributed Macrobiotus hufelandi complex was represented only by the nominal taxon M. hufelandi, but numerous species within the complex are now recognised.

In this study, Stec and colleagues describe a new tardigrade species of the hufelandi group, Macrobiotus shonaicus sp. nov., from East Asia. The researchers collected a sample of moss from a car park in Japan and examined it for tardigrades, extracting 10 individuals that were used to start a laboratory culture and obtain the additional individuals required for the analyses. They then used phase contrast light microscopy (PCM) and scanning electron microscopy (SEM), and analyzed four molecular markers, to characterize the new species and determine where it fits in the phylogenetic tree.

To distinguish between different tardigrade species, the researchers paid special attention to their eggs. This new tardigrade species has a solid egg surface, placing it in the persimilis subgroup within the hufelandi complex. The eggs also have flexible filaments attached, resembling those of two other recently described species, Macrobiotus paulinae from Africa and Macrobiotus polypiformis from South America.

The researchers' phylogenetic and morphological analysis identifies M. shonaicus sp. nov. as a new species within the M. hufelandi complex, increasing the number of known tardigrade species from Japan to 168.

Co-author Kazuharu Arakawa says: "We revisit the large and long-standing Macrobiotus hufelandi group of tardigrades, originally described by Schultze in 1834 and where M. shonaicus also belongs, and suggest that the group contains two clades with different egg morphology."

Contacts and sources:
Tessa Gregory

Citation: Stec D, Arakawa K, Michalczyk Ł (2018) An integrative description of Macrobiotus shonaicus sp. nov. (Tardigrada: Macrobiotidae) from Japan with notes on its phylogenetic position within the hufelandi group. PLoS ONE 13(2): e0192210.

Squid Skin Could Be the Solution to Camouflage Material

Cephalopods -- which include octopuses, squid, and cuttlefish -- are masters of disguise. They can camouflage to precisely match their surroundings in a matter of seconds, and no scientist has quite been able to replicate the spectacle. But new research by Leila Deravi, assistant professor of chemistry and chemical biology at Northeastern, brings us a step closer.

The chromatophore organs, which appear as hundreds of multi-colored freckles on the surface of a cephalopod's body, contribute to fast changes in skin color. In a paper published last week in Advanced Optical Materials, Deravi's group describes its work in isolating the pigment granules within these organs to better understand their role in color change. The researchers discovered these granules have remarkable optical qualities and used them to make thin films and fibers that could be incorporated into textiles, flexible displays, and future color-changing devices. Deravi's lab collaborated with the U.S. Army Natick Soldier Research, Development, and Engineering Center for the study.

Researchers made spools of fibers from the squids’ pigment particles and are now exploring uses for the material. The fibers are so visually interesting that it’s not difficult to imagine weaving them into fabric for clothing or other art forms. But perhaps the most exciting possible application is wearable, flexible screens and textiles that are capable of adaptive coloration. 
Photo by Adam Glanzman/Northeastern University

Chromatophores come in shades of red, yellow, brown, and orange. They are similar to the freckles on human skin that appear over time. But in cephalopods, these freckles open and close within a fraction of a second to give rise to a continuously reconfiguring skin color. Underneath the chromatophores is a layer of iridophores that act as a mirror. Together, these organs reflect all colors of visible light.

By removing individual pigment particles from the squid, Deravi was able to explore the breadth of their capabilities as static materials. One particle is only 500 nanometers in size, 150 times smaller than the diameter of a human hair. Deravi's team layered and reorganized the particles and found they could produce an expansive color palette.

"We're showing these pigments are a powerful tool that can produce ultra-thin films that are really rich in colors," Deravi said.

Her team also discovered the pigments can scatter both visible and infrared light. This enhances brightness and light absorption and affects how a final color is perceived. And when Deravi engineered a system that included a mirror -- mimicking the layout of organs that squids have naturally -- she was able to further enhance the perceived color by scattering light through and off the granules. This process could potentially be replicated on functional materials like solar cells to increase the absorption of sunlight, Deravi said.

"From a scientific and technical engineering perspective, understanding how light scattering affects color is very important, and this is an exciting new development in the field of optics in biology," said Richard Osgood, a collaborator from the U.S. Army Natick Soldier Research, Development, and Engineering Center. "This is an unusual harnessing of optics and physics knowledge in scattering to understand biological systems."

Osgood said the research could allow the Army to create new capabilities for soldiers.

"For more than a decade, scientists and engineers have been trying to replicate this process and build these devices that can color match, color change, and camouflage just like the cephalopods, but many of them come nowhere near the speed or dynamic range of color that the animals can display," Deravi said. "Cephalopods have evolved to incorporate these specific pigment granules for a reason, and we're starting to piece together what that reason is."

Contacts and sources:
John O'Neill / Allie Nicodemo.
Northeastern University

New Origin Story for the Moon

A new explanation for the Moon's origin has it forming inside the Earth when our planet was a seething, spinning cloud of vaporized rock, called a synestia. The new model led by researchers at the University of California, Davis and Harvard University resolves several problems in lunar formation and is published Feb. 28 in the Journal of Geophysical Research: Planets.

"The new work explains features of the Moon that are hard to resolve with current ideas," said Sarah Stewart, professor of Earth and Planetary Sciences at UC Davis. "The Moon is chemically almost the same as the Earth, but with some differences," she said. "This is the first model that can match the pattern of the Moon's composition."

Current models of lunar formation suggest that the Moon formed as a result of a glancing blow between the early Earth and a Mars-size body, commonly called Theia. According to the model, the collision between Earth and Theia threw molten rock and metal into orbit that collided together to make the Moon.

This artist's rendering shows the hot, molten Moon emerging from a synestia, a giant spinning donut of vaporized rock that formed when planet-sized objects collided. The synestia is in the process of condensing to form the Earth. This new model for the Moon's origin answers outstanding questions about how the Moon's composition compares to that of Earth.

Image by Sarah Stewart/UC Davis based on NASA rendering.

The new theory relies instead on a synestia, a new type of planetary object proposed by Stewart and Simon Lock, graduate student at Harvard and visiting student at UC Davis, in 2017. A synestia forms when a collision between planet-sized objects results in a rapidly spinning mass of molten and vaporized rock with part of the body in orbit around itself. The whole object puffs out into a giant donut of vaporized rock.

Synestias likely don't last long - perhaps only hundreds of years. They shrink rapidly as they radiate heat, causing rock vapor to condense into liquid, finally collapsing into a molten planet.

"Our model starts with a collision that forms a synestia," Lock said. "The Moon forms inside the vaporized Earth at temperatures of four to six thousand degrees Fahrenheit and pressures of tens of atmospheres."

An advantage of the new model, Lock said, is that there are multiple ways to form a suitable synestia - it doesn't have to rely on a collision with the right sized object happening in exactly the right way.

Once the Earth-synestia formed, chunks of molten rock injected into orbit during the impact formed the seed for the Moon. Vaporized silicate rock condensed at the surface of the synestia and rained onto the proto-Moon, while the Earth-synestia itself gradually shrank. Eventually, the Moon would have emerged from the clouds of the synestia trailing its own atmosphere of rock vapor. The Moon inherited its composition from the Earth, but because it formed at high temperatures it lost the easily vaporized elements, explaining the Moon's distinct composition.

Additional authors on the paper are Michail Petaev and Stein Jacobsen at Harvard University, Zoe Leinhardt and Mia Mace at the University of Bristol, England and Matija Cuk, SETI Institute, Mountain View, Calif. The work was supported by grants from NASA, the U.S. Department of Energy and the UK's Natural Environment Research Council.

Contacts and sources:
Andy Fell
University of California, Davis

Mankind Created a "Rainforest Crisis" in Central Africa 2,600 Years Ago

Fields, streets and cities, but also forests planted in neat rows and arrow-straight rivers: humans shape nature to better suit their purposes, and not only since the onset of industrialization. Such influences are well documented in the Amazonian rainforest.

In Central Africa, by contrast, the influence of humans has been debated: major interventions seem to have occurred there 2,600 years ago. Potsdam geoscientist Yannick Garcin and his team have published a report on their findings in the journal PNAS. The research team examined lake sediments in southern Cameroon to solve the riddle of the "rainforest crisis." They found that the drastic transformation of the rainforest ecosystem at that time was not the result of climate change but of human activity.

More than 20 years ago, the analysis of lake sediments from Lake Barombi in southern Cameroon showed that older sediment layers mainly contained tree pollen reflecting a dense forested environment. In contrast, the newer sediments contained a significant proportion of savannah pollen: the dense primitive forest quickly transformed into savannahs around 2,600 years ago, followed by an equally abrupt recovery of the forest approximately 600 years later. For a long time, the most probable cause of this sudden change, dubbed the "rainforest crisis", was thought to be climate change brought about by a decrease in precipitation amount and increase in precipitation seasonality. Despite some controversy, the origin of the rainforest crisis was thought to be settled.

High volumes of precipitation in the region (over 3,000 mm annually) have ensured that the lake has not dried out over the millennia. The heavy rainfall also washes large volumes of sediment into the lake. These circumstances make it possible to perform sediment analyses with the utmost precision.

Credit: B. Brademann / GFZ

Yet Garcin, a postdoctoral researcher at the University of Potsdam, and his international team of scientists from UP, CEREGE, IRD, ENS Lyon, GFZ, MARUM, AMU, AWI, and from Cameroon suspected that other causes could have led to the ecosystem's transformation. By reconstructing vegetation and climate change independently - through stable isotope analysis of plant waxes, molecular fossils preserved in the sediment - the team confirmed that there was a large change in vegetation during the rainforest crisis, but found that it was not accompanied by a change in precipitation. "The rainforest crisis is proven, but it cannot be explained by climate change," says Garcin. 

"In fact, in over 460 archaeological finds in the region, we have found indications that humans triggered these changes in the ecosystem." Archaeological remains older than 3,000 years are rare in Central Africa. Around 2,600 years ago, coincident with the rainforest crisis, the number of sites increased significantly, suggesting a rapid human population growth - probably related to the expansion of the Bantu-speaking peoples in Central Africa. This period also saw the emergence of pearl millet cultivation, oil palm use, and iron metallurgy in the region.

This floating platform can be completely taken apart and transported overseas. The platform enabled the collection of sediment samples in the approximately 100-meter-deep Lake Barombi, which were then analyzed in the laboratory.

Credit: Y. Garcin / University of Potsdam

"The combination of regional archaeological data and our results from the sediments of Lake Barombi shows convincingly that humans strongly impacted the tropical forests of Central Africa thousands of years ago, and left detectable anthropogenic footprints in geological archives," says Dirk Sachse at the Helmholtz Center Potsdam - Research Center for Geosciences (GFZ). Sachse was one of the major contributors to the development of the method for analyzing plant wax molecular fossils (termed biomarkers).

“We are therefore convinced that it was not climate change that caused the rainforest crisis 2,600 years ago, but the growing populations that settled in the region and needed to clear the forest to exploit arable land,” says Garcin. “We are currently observing a similar process underway in many parts of Africa, South America, and Asia.” But the work of Garcin and his team also shows that nature has powerful regenerative abilities. When anthropogenic pressure decreased around 2,000 years ago, forest ecosystems recovered, but not necessarily as before: as in the Amazonian rainforest, field studies show that the presence of certain species is very often related to past human activity.

Contacts and sources: 
Yannick Garcin
GFZ German Research Centre for Geosciences

When ‘Colder’ Means ‘Hotter’: Explaining Increasing Temperature of Cooling Granular Gases

Researchers shed light on a scientific phenomenon that helps us better understand the evolution of interstellar dust and planetary rings in space

A Leicester mathematician has developed a theory to explain a peculiar phenomenon that can be observed both on Earth and in space – ‘heating by cooling’, in which the temperature of a granular gas increases while its total energy drops.

Granular gases are one of the few systems in which this scientific mystery can be observed. They are widespread in nature, in the form of aerosols and smoke on Earth, and of interstellar dust, planetary rings and proto-planetary discs in space.

Aggregating Granular Gases in Space

Credit: University of Leicester

The stunning ‘heating by cooling’ effect corresponds, in physical terms, to a negative heat capacity. Aggregating granular gases are only the second known class of system, after self-gravitating systems, to manifest this astonishing property.

“From secondary school we are taught that temperature means energy -- the higher the temperature, the larger the energy. If a system loses energy, its temperature drops down,” says Professor Nikolai Brilliantov from the University of Leicester’s Department of Mathematics, who led the research. “Surprisingly, this is not always true for granular gases.”

The international group of scientists has provided a vital clue to how granular gases function and display this mysterious quality, in a paper published in the journal Nature Communications in which they build a solid mathematical foundation for the phenomenon.

They have developed a novel mathematical tool: generalised Smoluchowski equations. While the classical Smoluchowski equations, known for more than a century, describe only the evolution of agglomerate concentrations, the new equations describe the evolution of agglomerate temperatures as well.
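
The paper's generalised equations are not reproduced in the release. As background, here is a minimal sketch of the classical, concentration-only Smoluchowski coagulation equations the text refers to, integrated naively with a constant kernel and hypothetical units:

```python
# Classical Smoluchowski coagulation equations (concentrations only;
# the paper's generalised, temperature-tracking equations are NOT
# reproduced here). n[k] is the concentration of clusters of size k+1:
#   dn_k/dt = 1/2 * sum_{i+j=k} K(i,j) n_i n_j - n_k * sum_j K(k,j) n_j

def smoluchowski_step(n, dt, kernel=lambda i, j: 1.0):
    """One forward-Euler step of the size-truncated coagulation equations."""
    kmax = len(n)
    dn = [0.0] * kmax
    for k in range(kmax):
        # gain: two smaller clusters (sizes i+1 and k-i) merge into size k+1
        gain = 0.5 * sum(kernel(i, k - 1 - i) * n[i] * n[k - 1 - i]
                         for i in range(k))
        # loss: a size-(k+1) cluster merges with any other cluster
        loss = n[k] * sum(kernel(k, j) * n[j] for j in range(kmax))
        dn[k] = gain - loss
    return [n[k] + dt * dn[k] for k in range(kmax)]

n = [1.0] + [0.0] * 49   # monodisperse start: all clusters are monomers
for _ in range(1000):    # integrate to t = 10
    n = smoluchowski_step(n, dt=0.01)

# With a constant kernel the exact total number is N(t) = 1/(1 + t/2),
# so the particle count decays toward 1/6 at t = 10 as aggregates grow.
print(sum(n))
```

The total cluster count falls as mass concentrates into ever-larger agglomerates; the generalised equations of the paper additionally evolve a temperature for each agglomerate population, which is what makes the ‘heating by cooling’ regime visible.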

The direct microscopic modelling of the system, by extensive computer simulations, has confirmed the existence of this surprising regime and other predictions of the theory.

It has also been shown that, in spite of its peculiarity, ‘heating by cooling’ may be observed in many systems under natural conditions. However, the inter-particle forces have to comply with an important prerequisite: the attraction strength must increase with agglomerate size.

“Understanding different regimes of the evolution of aggregating granular gases is important to comprehend numerous natural phenomena where these systems are involved,” adds Professor Brilliantov.

Contacts and sources:
University of Leicester

The paper, ‘Increasing temperature of cooling granular gases’, is published in the journal Nature Communications.

Some Black Holes Erase Your Past

A reasonably realistic simulation of falling into a black hole shows how space and time are distorted, and how light is blue shifted as you approach the inner or Cauchy horizon, where most physicists think you would be annihilated. However, a UC Berkeley mathematician argues that you could, in fact, survive passage through this horizon.
Credit: Animation by Andrew Hamilton, based on supercomputer simulation by John Hawley

In the real world, your past uniquely determines your future. If a physicist knows how the universe starts out, she can calculate its future for all time and all space.

But a UC Berkeley mathematician has found some types of black holes in which this law breaks down. If someone were to venture into one of these relatively benign black holes, they could survive, but their past would be obliterated and they could have an infinite number of possible futures.

Such claims have been made in the past, and physicists have invoked “strong cosmic censorship” to explain it away. That is, something catastrophic — typically a horrible death — would prevent observers from actually entering a region of spacetime where their future was not uniquely determined. This principle, first proposed 40 years ago by physicist Roger Penrose, keeps sacrosanct an idea — determinism — key to any physical theory. That is, given the past and present, the physical laws of the universe do not allow more than one possible future.

But, says UC Berkeley postdoctoral fellow Peter Hintz, mathematical calculations show that for some specific types of black holes in a universe like ours, which is expanding at an accelerating rate, it is possible to survive the passage from a deterministic world into a non-deterministic black hole.

What life would be like in a space where the future was unpredictable is unclear. But the finding does not mean that Einstein’s equations of general relativity, which so far perfectly describe the evolution of the cosmos, are wrong, said Hintz, a Clay Research Fellow.

“No physicist is going to travel into a black hole and measure it. This is a math question. But from that point of view, this makes Einstein’s equations mathematically more interesting,” he said. “This is a question one can really only study mathematically, but it has physical, almost philosophical implications, which makes it very cool.”

“This … conclusion corresponds to a severe failure of determinism in general relativity that cannot be taken lightly in view of the importance in modern cosmology” of accelerating expansion, said his colleagues at the University of Lisbon in Portugal, Vitor Cardoso, João Costa and Kyriakos Destounis, and at Utrecht University, Aron Jansen.

As quoted by Physics World, Gary Horowitz of UC Santa Barbara, who was not involved in the research, said that the study provides “the best evidence I know for a violation of strong cosmic censorship in a theory of gravity and electromagnetism.”

Hintz and his colleagues published a paper describing these unusual black holes last month in the journal Physical Review Letters.

Beyond the event horizon

For a massive black hole, passing through the outer or event horizon would be uneventful.

Credit: Art by Andrew Hamilton, based on supercomputer simulation by John Hawley

Black holes are bizarre objects that get their name from the fact that nothing can escape their gravity, not even light. If you venture too close and cross the so-called event horizon, you’ll never escape.

For small black holes, you’d never survive such a close approach anyway. The tidal forces close to the event horizon are enough to spaghettify anything: that is, stretch it until it’s a string of atoms.

But for large black holes, like the supermassive objects at the cores of galaxies like the Milky Way, which weigh tens of millions if not billions of times the mass of a star, crossing the event horizon would be, well, uneventful.

Because it should be possible to survive the transition from our world to the black hole world, physicists and mathematicians have long wondered what that world would look like, and have turned to Einstein’s equations of general relativity to predict the world inside a black hole. These equations work well until an observer reaches the center or singularity, where in theoretical calculations the curvature of spacetime becomes infinite.

Even before reaching the center, however, a black hole explorer — who would never be able to communicate what she found to the outside world — could encounter some weird and deadly milestones. Hintz studies a specific type of black hole — a standard, non-rotating black hole with an electrical charge — and such an object has a so-called Cauchy horizon within the event horizon.

The Cauchy horizon is the spot where determinism breaks down, where the past no longer determines the future. Physicists, including Penrose, have argued that no observer could ever pass through the Cauchy horizon point because they would be annihilated.

As the argument goes, as an observer approaches the horizon, time slows down, since clocks tick slower in a strong gravitational field. As light, gravitational waves and anything else encountering the black hole fall inevitably toward the Cauchy horizon, an observer also falling inward would eventually see all this energy barreling in at the same time. In effect, all the energy the black hole sees over the lifetime of the universe hits the Cauchy horizon at the same time, blasting into oblivion any observer who gets that far.

You can’t see forever in an expanding universe

Hintz realized, however, that this may not apply in an expanding universe that is accelerating, such as our own. Because spacetime is being increasingly pulled apart, much of the distant universe will not affect the black hole at all, since that energy can’t travel faster than the speed of light.

A spacetime diagram of the gravitational collapse of a charged spherical star to form a charged black hole. An observer traveling across the event horizon will eventually encounter the Cauchy horizon, the boundary of the region of spacetime that can be predicted from the initial data. Hintz and his colleagues found that a region of spacetime, denoted by a question mark, cannot be predicted from the initial data in a universe with accelerating expansion, like our own. This violates the principle of strong cosmic censorship.

Credit: APS/Alan Stonebraker

In fact, the energy available to fall into the black hole is only that contained within the observable horizon: the volume of the universe that the black hole can expect to see over the course of its existence. For us, for example, the observable horizon is bigger than the 13.8 billion light years we can see into the past, because it includes everything that we will see forever into the future. The accelerating expansion of the universe will prevent us from seeing beyond a horizon of about 46.5 billion light years.
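That 46.5-billion-light-year figure can be roughed out by numerically integrating the comoving distance in a flat, dark-energy-dominated universe. The cosmological parameters below are illustrative assumptions (roughly Planck-era values), not numbers from the article:

```python
import math

# Assumed flat LambdaCDM parameters (approximate Planck values)
H0_km_s_Mpc = 67.7
Omega_m, Omega_L = 0.31, 0.69

c_km_s = 299_792.458
Mpc_in_Gly = 3.2616e-3  # 1 Mpc = 3.2616 million light years
hubble_dist_Gly = c_km_s / H0_km_s_Mpc * Mpc_in_Gly  # ~14.4 Gly

def E(a):
    # Dimensionless expansion rate H(a)/H0
    return math.sqrt(Omega_m / a**3 + Omega_L)

# Comoving horizon: D = (c/H0) * integral_0^1 da / (a^2 E(a)).
# Substituting a = x^2 tames the integrable singularity at a = 0.
N = 200_000
total = 0.0
for i in range(N):
    x = (i + 0.5) / N
    a = x * x
    total += (2 * x) / (a * a * E(a)) / N

print(f"Comoving horizon ~= {hubble_dist_Gly * total:.1f} Gly")  # roughly 46 Gly
```

The integral evaluates to about 3.2 Hubble distances, landing near the 46.5 billion light years quoted above.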

In that scenario, the expansion of the universe counteracts the amplification caused by time dilation inside the black hole, and for certain situations, cancels it entirely. In those cases — specifically, smooth, non-rotating black holes with a large electrical charge, so-called Reissner-Nordström-de Sitter black holes — an observer could survive passing through the Cauchy horizon and into a non-deterministic world.

“There are some exact solutions of Einstein’s equations that are perfectly smooth, with no kinks, no tidal forces going to infinity, where everything is perfectly well behaved up to this Cauchy horizon and beyond,” he said, noting that the passage through the horizon would be painful but brief. “After that, all bets are off; in some cases, such as a Reissner-Nordström-de Sitter black hole, one can avoid the central singularity altogether and live forever in a universe unknown.”

Admittedly, he said, charged black holes are unlikely to exist, since they’d attract oppositely charged matter until they became neutral. However, the mathematical solutions for charged black holes are used as proxies for what would happen inside rotating black holes, which are probably the norm. Hintz argues that smooth, rotating black holes, called Kerr-Newman-de Sitter black holes, would behave the same way.

“That is upsetting, the idea that you could set out with an electrically charged star that undergoes collapse to a black hole, and then Alice travels inside this black hole and if the black hole parameters are sufficiently extremal, it could be that she can just cross the Cauchy horizon, survives that and reaches a region of the universe where knowing the complete initial state of the star, she will not be able to say what is going to happen,” Hintz said. “It is no longer uniquely determined by full knowledge of the initial conditions. That is why it’s very troublesome.”

He discovered these types of black holes by teaming up with Cardoso and his colleagues, who calculated how a black hole rings when struck by gravitational waves, and which of its tones and overtones lasted the longest. In some cases, even the longest surviving frequency decayed fast enough to prevent the amplification from turning the Cauchy horizon into a dead zone.

Hintz’s paper has already sparked other papers, one of which purports to show that most well-behaved black holes will not violate determinism. But Hintz insists that one instance of violation is one too many.

“People had been complacent for some 20 years, since the mid ’90s, that strong cosmological censorship is always verified,” he said. “We challenge that point of view.”

Hintz’s work was supported by the Clay Mathematics Institute and the Miller Institute for Basic Research in Science at UC Berkeley.

Contacts and sources: 
Robert Sanders
UC Berkeley

Cognitive Benefits of 'Young Blood' Linked to Brain Protein in Mice

Loss of an enzyme that modifies gene activity to promote brain regeneration may be partly responsible for age-related cognitive decline, according to new research in laboratory mice by UC San Francisco scientists, who also found that restoring the enzyme to youthful levels can improve memory in healthy adult mice. If the results translate to humans, the researchers say, it could lead to new therapies for maintaining healthy brain function into old age.

The study, published online February 20, 2018, in Cell Reports, was conducted by the UCSF lab of Saul Villeda, an assistant professor of anatomy and member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF.

Using a technique called parabiosis, in which the vascular systems of two mice are surgically connected, Villeda’s lab had previously discovered that infusing old mice with the blood of younger mice leads to brain rejuvenation, including improvements in learning and memory, while infusions of old blood cause premature brain aging in young mice. The lab has since been searching for the specific biological molecules in the blood and the brain that confer the benefits of youth or the impairments of aging.

Young blood and neurons
Credit: UCSF

Geraldine Gontier, a postdoctoral researcher in the lab, recently discovered that infusions of young blood increase levels of a molecule called Tet2 in the hippocampus, a part of the brain involved in learning and memory, suggesting that Tet2 might be a good candidate for driving the cognitive benefits of young blood.

Geraldine Gontier, Ph.D.
Credit: UCSF

“At first I didn’t believe it,” Gontier said. “I did the experiment again and again to make sure that it was right. But it became clear that some circulating factor in the blood is able to change the level of Tet2 in the brain.”

Tet2 declines in mouse brain with age

Tet2 is a type of cellular enzyme known as an epigenetic regulator — responsible for making specific chemical annotations to regions of DNA that alter the activity of many different genes. Recent genetic research in humans has implicated mutations in the gene for Tet2 as a risk factor for many different diseases of aging, including cancer, cardiovascular disease, and stroke.

Gontier and colleagues found that as mice age, Tet2 levels in the hippocampus decline, as do the epigenetic tags Tet2 makes on DNA. Among the genes that lost these tags with age were those associated with neurogenesis — the ability to produce new brain cells during adulthood — a process which also declines with age in the mouse hippocampus. Looking more closely at this decline, Villeda’s team found that it closely paralleled the age-related loss of Tet2 expression.

To find out if the loss of Tet2 in aging could directly cause cognitive decline, the researchers used a technique called RNAi to block Tet2 activity in the hippocampi of young adult mice. They found that the reduction of Tet2 significantly reduced the birth of new neurons, and also caused animals to perform significantly worse — more like aged mice — on tests of learning and memory such as remembering the location of a submerged platform in a water maze.

The birth of new neurons in the mouse hippocampus starts waning in early adulthood — well before cognitive decline becomes obvious — so the researchers wondered whether boosting Tet2 levels in the adult hippocampus could restore neurogenesis and potentially prevent the onset of cognitive decline later in life.

To test this hypothesis, they used custom-designed viruses to cause over-expression of Tet2 in the hippocampi of mature adult (6-month-old) mice. Boosting Tet2 increased epigenetic DNA tagging, the researchers found, and restored neurogenesis to more youthful levels. These mice did not perform significantly differently from untreated mice in many tests of learning and memory, but did improve their memory of places where they had previously received mild electrical shocks.

“This was amazing because it’s like improving memory in a healthy, 30-year-old human,” Villeda said. “I always assumed that because there are no overt cognitive impairments in middle-aged mice, we wouldn’t be able to improve their brain function, but here we see that, no, you can improve cognition to make it better than normal.”

“This finding is exciting on many levels,” Gontier added. “I’ve spent my entire Ph.D. and now my postdoc trying to understand how the brain ages and how can we reverse it. And in this study, we find that one molecule, Tet2, is able to rescue regenerative decline and enhance some cognitive functions in the adult mouse brain.”

Driving changes in the whole brain structure

Research in Saul Villeda's lab suggests the loss of the enzyme Tet2 is involved in age-related cognitive decline.
Credit: Steven Babuljak

It’s not yet clear exactly how Tet2 levels drive improved learning and memory in the mouse brain, or whether these improvements will translate to humans, Villeda cautions. For example, the existence of adult neurogenesis in humans is still controversial, suggesting that the same benefits seen in mice may not be possible in humans. However, Villeda says that he believes neurogenesis is just one aspect of the brain’s regenerative abilities, and probably not the only one being impacted by altering Tet2 levels.

“In our study, we found that removing Tet2 from the hippocampal stem cells that give birth to new neurons caused some cognitive impairment, but removing it from the whole hippocampus caused even more. That suggests that this is about more than just stem cells. This molecule is driving changes throughout the whole brain structure,” Villeda said. “I think of neurogenesis as a signpost of regeneration in the brain, but ultimately I think that it’s changes to the neurons themselves — preventing synapse loss, boosting plasticity — that are going to improve cognition. One of our next big steps is to catalogue exactly what’s happening, both at a genetic level and at a neural level, in mice who’ve had this treatment.”

Other authors of the paper were Manasi Iyer, Jeremy M. Shea, Gregor Bieri, Elizabeth G. Wheatley and Miguel Ramalho-Santos, all of UCSF.

The research was funded by the National Institutes of Health (F31-AG050415, F32-AG055292), the National Institute on Aging (R01 AG053382, R01 AG055797), the Irene Diamond Fund, the Glenn Foundation, and a gift from Marc and Lynne Benioff.

Contacts and sources:
Nicholas Weiler, UCSF

Wind and Solar Power Could Meet 80% of U.S. Electricity Demand

The United States could reliably meet about 80 percent of its electricity demand with solar and wind power generation, according to scientists at the University of California, Irvine; the California Institute of Technology; and the Carnegie Institution for Science.

However, meeting 100 percent of electricity demand with only solar and wind energy would require storing several weeks’ worth of electricity to compensate for the natural variability of these two resources, the researchers said.

“The sun sets, and the wind doesn’t always blow,” noted Steven Davis, UCI associate professor of Earth system science and co-author of a renewable energy study published today in the journal Energy & Environmental Science. “If we want a reliable power system based on these resources, how do we deal with their daily and seasonal changes?”

Solar panels cover the roof of UCI's Student Center Parking Structure. A new study co-authored by Steven Davis, associate professor of Earth system science, shows that the U.S. can meet 80 percent of its electricity demand with renewable solar and wind resources. 
Credit: Steve Zylius / UCI

The team analyzed 36 years of hourly U.S. weather data (1980 to 2015) to understand the fundamental geophysical barriers to supplying electricity with only solar and wind energy.

“We looked at the variability of solar and wind energy over both time and space and compared that to U.S. electricity demand,” Davis said. “What we found is that we could reliably get around 80 percent of our electricity from these sources by building either a continental-scale transmission network or facilities that could store 12 hours’ worth of the nation’s electricity demand.”

The researchers said that such expansion of transmission or storage capabilities would mean very substantial – but not inconceivable – investments. They estimated that the cost of the new transmission lines required, for example, could be hundreds of billions of dollars. In comparison, storing that much electricity with today’s cheapest batteries would likely cost more than a trillion dollars, although prices are falling.

Other forms of energy stockpiling, such as pumping water uphill to later flow back down through hydropower generators, are attractive but limited in scope. The U.S. has a lot of water in the East but not much elevation, with the opposite arrangement in the West.

Fossil fuel-based electricity production is responsible for about 38 percent of U.S. carbon dioxide emissions – CO2 pollution being the major cause of global climate change. Davis said he is heartened by the progress that has been made and the prospects for the future.

“The fact that we could get 80 percent of our power from wind and solar alone is really encouraging,” he said. “Five years ago, many people doubted that these resources could account for more than 20 or 30 percent.”

But beyond the 80 percent mark, the amount of energy storage required to overcome seasonal and weather variabilities increases rapidly. “Our work indicates that low-carbon-emission power sources will be needed to complement what we can harvest from the wind and sun until storage and transmission capabilities are up to the job,” said co-author Ken Caldeira of the Carnegie Institution for Science. “Options could include nuclear and hydroelectric power generation, as well as managing demand.”

Support for this study was provided by the National Science Foundation.

Contacts and sources: 
Brian Bell
University of California, Irvine

Chinook Salmon Kings Vanishing from West Coast

The largest and oldest Chinook salmon — fish also known as "kings" and prized for their exceptional size — have mostly disappeared along the West Coast.

That's the main finding of a new University of Washington-led study published Feb. 27 in the journal Fish and Fisheries. The researchers analyzed nearly 40 years of data from hatchery and wild Chinook populations from California to Alaska, looking broadly at patterns that emerged over the course of four decades and across thousands of miles of coastline. In general, Chinook salmon populations from Alaska showed the biggest reductions in age and size, with Washington salmon a close second.

"Chinook are known for being the largest Pacific salmon and they are highly valued because they are so large," said lead author Jan Ohlberger, a research scientist in the UW's School of Aquatic and Fishery Sciences. "The largest fish are disappearing, and that affects subsistence and recreational fisheries that target these individuals."

A Chinook salmon pictured in Oregon’s McKenzie River. This adult fish is smaller than its predecessors.

Credit: Morgan Bond

Chinook salmon are born in freshwater rivers and streams, then migrate to the ocean where they spend most of their lives feeding and growing to their spectacular body size. Each population's lifestyle in the ocean varies, mainly depending on where they can find food. California Chinook salmon tend to stay in the marine waters off the coast, while Oregon and Washington fish often migrate thousands of miles northward along the west coast to the Gulf of Alaska where they feed. Western Alaska populations tend to travel to the Bering Sea.

After one to five years in the ocean, the fish return to their home streams, where they spawn and then die.

Despite these differences in life history, most populations analyzed saw a clear reduction in the average size of the returning fish over the last four decades — up to 10 percent shorter in length, in the most extreme cases.
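A 10 percent reduction in length understates the change in bulk: body mass scales roughly with the cube of length (a simplifying assumption, not a figure from the study), so the drop in mass is considerably larger:

```python
# Illustrative cube-law scaling: mass ~ length**3 (idealized assumption)
length_reduction = 0.10
mass_fraction = (1 - length_reduction) ** 3  # 0.9**3 = 0.729

print(f"A {length_reduction:.0%} shorter fish is ~{1 - mass_fraction:.0%} lighter")
```

Under that scaling, a fish 10 percent shorter is on the order of a quarter lighter, which helps explain why the size decline matters so much for fisheries and for egg production.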

These broad similarities point to a cause that transcends regional fishing practices, ecosystems, or animal behaviors, the authors said.

A historically large Chinook salmon from the Columbia River.
Credit: Columbia River Maritime Museum

"This suggests that there is something about the larger ocean environment that is driving these patterns," Ohlberger said. "I think fishing is part of the story, but it's definitely not sufficient to explain all of the patterns we see. Many populations are exploited at lower rates than they were 20 to 30 years ago."

It used to be common to find Chinook salmon 40 inches or more in length, particularly in the Columbia River or Alaska's Kenai Peninsula and Copper River regions. The reductions in size could have a long-term impact on the abundance of Chinook salmon, because smaller females carry fewer eggs, so over time the number of fish that hatch and survive to adulthood may decrease.

There are likely many reasons for the changes in size and age, and the researchers say there is no "smoking gun." Their analysis, however, points to fishing pressure and marine mammal predation as two of the bigger drivers.

Commercial and sport fishing have for years targeted larger Chinook. But fishing pressure has relaxed in the last 30 years due to regulations to promote sustainable fishing rates, while the reductions in Chinook size have been most rapid over the past 15 years. Resident killer whales, which are known Chinook salmon specialists, as well as other marine mammals that feed on salmon are probably contributing to the overall changes, the researchers found.

"We know that resident killer whales have a very strong preference for eating the largest fish, and this selectivity is far greater than fisheries ever were," said senior author Daniel Schindler, a UW professor of aquatic and fishery sciences.

While southern resident killer whales that inhabit Puget Sound are in apparent decline, populations of northern resident killer whales, and those that reside in the Gulf of Alaska and along the Aleutian Islands, appear to be growing at extremely fast rates. The paper's authors propose that these burgeoning northern populations are possibly a critical, but poorly understood, cause of the observed declines in Chinook salmon sizes.

Chinook salmon, shorter in length than in earlier years, swim in Oregon’s McKenzie River.
Credit: Morgan Bond

Scientists are still trying to understand the impacts of orcas and other marine mammals on Chinook salmon, and the ways in which their relationships may have ebbed and flowed in the past. It may not be possible, for example, for marine mammals and Chinook salmon populations to be robust at the same time, given their predator-prey relationship.

"When you have predators and prey interacting in a real ecosystem, everything can't flourish all the time," Schindler said. "These observations challenge our thinking about what we expect the structure and composition of our ecosystems to be."

Co-authors are Eric Ward of NOAA's Northwest Fisheries Science Center and Bert Lewis of the Alaska Department of Fish and Game.

This study was funded by the Pacific States Marine Fisheries Commission.

Contacts and sources:
Michelle Ma
University of Washington

Plasma Bubbles Help Trigger Massive Magnetic Events in Outer Space

Scientists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have discovered key conditions that give rise to fast magnetic reconnection, the process that triggers solar flares, auroras, and geomagnetic storms that can disrupt signal transmissions and other electrical activities, including cell phone service. The process occurs when the magnetic field lines in plasma, the hot, charged state of matter composed of free electrons and atomic nuclei, break apart and violently reconnect, releasing vast amounts of energy. This happens in thin sheets of plasma, called current sheets, in which electric current is strongly concentrated.

The findings, based on computer simulations, add to an earlier theory of fast reconnection developed mathematically by physicists at PPPL and Princeton University. The new results incorporate a predictive model that gives a more complete description of the physics involved.

The impact of reconnection can be felt throughout the universe. The process may cause enormous bursts of gamma-ray radiation thought to be associated with supernova explosions and the formation of ultra-dense neutron stars and black holes. “A gamma-ray burst in our Milky Way galaxy, if pointing towards Earth, could potentially cause a mass extinction event,” said PPPL physicist Yi-Min Huang, lead author of a paper reporting the findings in the Astrophysical Journal. “Clearly, it is important to know when, how, and why magnetic reconnection takes place.”

PPPL physicist Yi-Min Huang

Credit: Elle Starkman / Office of Communications

Scientists have observed that reconnection happens suddenly, after a long period of quiescent behavior by magnetic fields inside current sheets. What exactly causes the magnetic fields to separate and reconnect, and why does the reconnection take place more quickly than theory says it should?

Using computer simulations and theoretical analysis, the physicists demonstrated that a phenomenon called the “plasmoid instability” creates bubbles within plasma that can lead to reconnection when certain conditions are met:

  • The plasma must have a high Lundquist number, which characterizes how well it conducts electricity.
  • Random fluctuations in the magnetic field of the plasma provide “seeds” from which the plasmoid instability grows.
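The Lundquist number is the ratio of the resistive diffusion time to the Alfvén crossing time of the current sheet, S = L·v_A/η. The values below are order-of-magnitude assumptions for a solar-corona current sheet, not numbers from the paper:

```python
import math

# Assumed order-of-magnitude coronal values (illustrative only)
mu0 = 4e-7 * math.pi   # vacuum permeability, H/m
B = 0.01               # magnetic field, tesla (~100 gauss)
rho = 1.67e-12         # mass density, kg/m^3 (~1e15 protons per m^3)
L = 1e7                # current-sheet length, m
eta = 1.0              # magnetic diffusivity, m^2/s

v_A = B / math.sqrt(mu0 * rho)  # Alfven speed
S = L * v_A / eta               # Lundquist number

print(f"v_A ~ {v_A:.1e} m/s, S ~ {S:.1e}")
```

Numbers like these give S of order 10^13, far above the threshold of roughly 10^4 commonly cited for the onset of the plasmoid instability, which is why coronal current sheets are such good candidates for fast reconnection.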

Taken together, these conditions allow plasmoid instabilities to give rise to reconnection in current sheets. “Our study suggests that disruption of the current sheet caused by the plasmoid instability may provide a trigger,” Huang said.

The trigger breaks up two-dimensional sheets of electric current within plasma into bubbles, or plasmoids, and many smaller sheets. The growing number of sheets creates more opportunity for magnetic lines to break apart and join together. Reconnection also occurs in more than one place, causing the aggregate rate for an entire system to increase.

The smaller size of current sheets speeds up reconnection as well. Electromagnetic forces tend to propel the plasma between sheets, producing motion that accelerates when the sheets break into smaller ones. The accelerating plasma brings magnetic lines together more quickly and leads to faster reconnection rates.

Huang and fellow physicists would like to test their new model using experimental machines with additional capability. While no such machine exists at present, researchers look forward to a new unit that is coming online.

Funding for this research was provided by the National Science Foundation and the DOE (Fusion Energy Sciences). The simulations were performed by supercomputers at the Oak Ridge Leadership Computing Facility and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory in Berkeley, California. Coauthors include Amitava Bhattacharjee, head of the Theory Department at PPPL, and Luca Comisso, a former PPPL and Princeton University physicist now at Columbia University.

Contacts and sources:
Princeton Plasma Physics Laboratory

Storm Waves Can Move Boulders Once Thought Only Tsunamis Had the Power to Shift

It's not just tsunamis that can change the landscape: storms shifted giant boulders four times the size of a house on the coast of Ireland in the winter of 2013-14, leading researchers to rethink the maximum energy storm waves can have - and the damage they can do.

In a new paper in Earth Science Reviews, researchers from Williams College in the US show that four years ago, storms moved huge boulders along the west coast of Ireland. The same storms shifted smaller ones as high as 26 meters above high water and 222 meters inland. Many of the boulders moved were heavier than 100 tons, and the largest moved was 620 tons - the equivalent of six blue whales or four single-storey houses.

An example of coastal boulder deposits on Inishmaan, Aran Islands. The cliffs are about 20 m high, and the boulders are piled 32-42 m inland from the cliff edge. Note the people near the cliff edge, showing the scale. Some of the boulders in this ridge, weighing many tonnes, were moved by storm waves in the winter of 2013-2014.
Credit: Peter Cox

It was previously assumed that only tsunamis could move boulders of the size seen displaced in Ireland, but the new paper provides direct evidence that storm waves can do this kind of work. According to the UN, about 40 percent of the world's population live in coastal areas (within 100 kilometers of the sea), so millions of people are at risk from storms. Understanding how those waves behave, and how powerful they can be, is key to preparation. It is therefore important to know the upper limits of storm wave energy, even in areas where such extreme wave energies are not expected.

"The effect of the storms of winter 2013-14 was dramatic," said Dr. Rónadh Cox, Professor and Chair of Geosciences at Williams College and lead author of the study. "We had been studying these sites for a number of years, and realised that this was an opportunity to measure the coastal response to very large storm events."

In the summer after the storms, Prof. Cox and a team of seven undergraduate students from Williams College surveyed 100 sites in western Ireland, documenting with photos the displacement of 1,153 boulders. They measured the dimensions and calculated the mass of each boulder. They knew where 374 of the boulders had come from, so for those they also documented the distance travelled. The largest boulder, at 237-239 m3, was an estimated 620 tons; the second biggest, at 180-185 m3, was about 475 tons. These giant rocks were close to sea level (although above the high tide mark). At higher elevations, and at greater distances inland, smaller boulders moved upwards and inland.
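The reported masses are consistent with the measured volumes and a typical rock density. The density used below is an assumption (about 2.6 t/m3, typical of limestone like that of the Aran Islands), not a value from the survey:

```python
# Sanity check: boulder mass = volume * density (density is an assumption)
density_t_per_m3 = 2.6  # typical limestone density, tonnes per cubic meter

for volume_m3, reported_t in [(238, 620), (182.5, 475)]:
    estimated_t = volume_m3 * density_t_per_m3
    print(f"{volume_m3} m^3 -> ~{estimated_t:.0f} t (reported ~{reported_t} t)")
```

Both estimates land within a few tonnes of the figures quoted in the paper, so the reported masses follow directly from the measured volumes.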

Analysis of this information showed that the waves had the most power at lower elevations and closer to the shore. While that may not be surprising, the sheer energy of the waves, and their ability to move such large boulders, was. The evidence demonstrates that storm waves, and not only tsunamis, can move objects of this size.

"These data will be useful to engineers and coastal scientists working in other locations," said Prof. Cox. "Now that we know what storm waves are capable of, we have much more information for policy makers who are responsible for preparing coastal communities for the impact of high-energy storms."

Contacts and sources:

The article is "Extraordinary boulder transport by storm waves (west of Ireland, winter 2013-2014), and criteria for analysing coastal boulder deposits," by Rónadh Cox, Kalle L. Jahn, Oona G. Watkins and Peter Cox. It appears in Earth Science Reviews, volume 177 (February 2018), published by Elsevier.

Tuesday, February 27, 2018

Study Reveals Milky Way Stars Being Evicted by Invading Galaxies

An international team of astronomers has discovered that some stars located in the Galactic halo surrounding the Milky Way - previously thought to be remnants of invading galaxies from the past - are instead former residents of the Galactic disk, kicked out by those invading dwarf galaxies.

One in a series of scientific papers contributing to this story appears this week in the journal Nature.

"These stars are teaching us what happened to the Galactic disk - what happened to the Milky Way in the past," said Columbia University Professor of Astronomy Kathryn Johnston, a co-author on the current paper and co- or lead author of previous papers leading up to this finding. "We're simultaneously learning about our history and our future. This gives us a whole new window into our universe."

The galaxy in which we live, the Milky Way, is a fairly average, spiral galaxy with the majority of its stars circling its center within a disk, and a dusting of stars beyond that orbiting in what's called the Galactic halo.

The Milky Way galaxy, perturbed by the tidal interaction with a dwarf galaxy, as predicted by N-body simulations. The locations of the observed stars above and below the disk, which are used to test the perturbation scenario, are indicated. 
Credit: T. Mueller/C. Laporte/NASA/JPL-Caltech
About five years ago, researchers set out to study a set of structures - large over-densities of stars streaming in partial rings close to the Galactic disk. These clusters extend well beyond what is considered to be the edge of the disk and bulge above and below the plane where disk stars normally lie, into the halo. The streams had been interpreted as signatures of the Milky Way's tumultuous past - debris from many smaller galaxies that are thought to have invaded the Milky Way galaxy and been disrupted by its gravitational field.

Allyson Sheffield, a post-doc at Columbia University at the time, led the first project to measure the speed of stars in the most distant structures, known as the Triangulum-Andromeda Clouds, and demonstrated that those stars formed coherent sequences in speed, as well as in space. Johnston then produced simulations that could explain the positions and speeds of the stars. Researchers at Fermilab next collected and consolidated data on the closer structures, known as A13 and the Monoceros Ring, to show that they also followed clear sequences in speed and space. Further, that study showed that all of the structures followed the same sequences, suggesting they could be related to each other, perhaps through the same disruption event.

The research team then looked at the composition of the stars in the over-densities. While previous studies had looked at "M giant" stars, which are relatively rich in elements heavier than helium, Adrian Price-Whelan, then a graduate student at Columbia University, proposed expanding this sample by also collecting speeds for "RR Lyrae" stars, which contain a much smaller fraction of heavy elements.

These first data for the Triangulum-Andromeda Clouds showed that none of the RR Lyrae stars followed the same sequence as the M giants. They also revealed that the Galactic disk contains much larger populations of M giant stars than RR Lyrae stars, the opposite of the pattern seen in smaller galaxies falling into the Milky Way, which typically contain larger populations of RR Lyrae stars and few, if any, M giant stars. The researchers realized that these stellar over-densities bulging from the Galactic disk were not remnants of an invading galaxy destroyed by the Milky Way's gravitational field, but were instead material evicted from the disk itself.

The current study confirms this emerging picture. The paper, which cites all of the team's previous work, reveals the compositions of several M giant stars burning in the rings around the Milky Way galaxy. As predicted, they are composed of a mixture of elements very similar to those in the Galactic disk and unlike the composition of stars located in the Galactic halo or in the still-invading satellite galaxies.

"I'm excited to see the final piece of evidence slot into place to confirm this story of our galaxy being bombarded from the outside - a story that astronomers from all over the globe have contributed to piecing together over many years," Johnston said. "Many recent efforts in the field have concentrated on how our galaxy rips other, smaller galaxies apart. It's refreshing to find confirmation of the damage those smaller galaxies inflict upon the Milky Way."

Researchers involved in this collaboration include: Maria Bergemann, Branimir Sesar, and Andrew Gould (Max Planck Institute for Astronomy); Kathryn Johnston and Chervin F.P. Laporte (Columbia University); Adrian M. Price-Whelan (Princeton University); Allyson Sheffield (City University of New York); Judith G. Cohen (California Institute of Technology); Aldo M. Serenelli (Institute of Space Sciences/IEEC-CSIC); Ting S. Li (Fermi National Accelerator Laboratory); Luca Casagrande (The Australian National University); and Ralph Schönrich (University of Oxford, UK).

Contacts and sources:
Jessica Guenzel
Columbia University

Lunar Origin Story: Is a Wet Moon Incompatible with a Giant Impact Formation?

It’s amazing what a difference a little water can make.

The Moon formed between about 4.4 and 4.5 billion years ago when an object collided with the still-forming proto-Earth. This impact created a hot and partially vaporized disk of material that rotated around the baby planet, eventually cooling and accreting into the Moon.

For years, scientists thought that in the aftermath of the collision, hydrogen dissociated from water molecules and, along with other elements that have low boiling temperatures, so-called "volatile elements," escaped from the disk and was lost to space. This would lead to a dry, volatile-element-depleted Moon, which seemed to be consistent with previous analyses of lunar samples.

A video simulation shows the canonical model of the Moon’s formation, in which the proto-Earth was hit by a Mars-sized object between 4.4 and 4.5 billion years ago.
Credit:  Miki Nakajima and Dave Stevenson.

But ongoing research about the Moon’s chemistry is revealing that it may be wetter than initially thought, which raises questions about some aspects of this origin story.

“This is still very much an area of active research, so there is much that scientists, including our Department of Terrestrial Magnetism staff scientist Erik Hauri, as well as many other Carnegie colleagues and alumni, are figuring out about how much water exists in the Moon. This is a highly important and challenging question to answer given that we have limited knowledge on the history and distribution of lunar water,” explained Carnegie’s Miki Nakajima who, together with Caltech’s Dave Stevenson, set out to determine whether prevailing Moon-formation theories needed to be adjusted to account for the more recent higher estimates of lunar water content.

The work is published by Earth and Planetary Science Letters.

They created detailed models to determine whether existing theories about the Moon-forming collision could explain a wet Moon that’s still depleted in other volatile elements like potassium and sodium.

 Credit:  Miki Nakajima and Dave Stevenson.

They modeled different temperature conditions and water abundances of the Moon-forming disk. At higher temperatures, their disk was dominated by silicate vapor, which came from evaporation of the mantles of both the proto-Earth and the impactor, with a relatively small abundance of hydrogen dissociated from water. At lower temperatures, their disk was dominated by water, from which hydrogen did not dissociate in this temperature range, making its escape mechanism very inefficient.

“The good news is that our models show that observations of a wet Moon are not incompatible with a giant impact origin,” Nakajima explained.

However, it also means that scientists need to come up with other explanations for why the Moon is depleted in potassium, sodium, and other volatile elements. One possibility is that the volatile elements in the disk fell onto Earth rather than escaping or becoming part of the Moon. Alternatively, they were part of the Moon when it first accreted from the post-collision disk but were lost later.

Contacts and sources:
Miki Nakajima
Carnegie Institution for Science

King Penguins Expected To Be on the Move Very Soon

"The main issue is that there is only a handful of islands in the Southern Ocean and not all of them are suitable to sustain large breeding colonies" says Robin Cristofari, first author of the study, from the Institut Pluridisciplinaire Hubert Curien (IPHC/CNRS/University of Strasbourg) and the Centre Scientifique de Monaco (CSM).

King penguins are in fact picky animals: in order to form a colony where they can mate, lay eggs and rear chicks over a year, they need a tolerable temperature all year round, no winter sea ice around the island, and smooth beaches of sand or pebbles. But, above all, they need an abundant and reliable source of food close by to feed their chicks. For millennia, this seabird has relied on the Antarctic Polar Front, an upwelling front in the Southern Ocean that concentrates enormous amounts of fish in a relatively small area.

The penguins form colonies on the Crozet, Kerguelen and Marion sub-Antarctic islands.

Credit: Céline Le Bohec

Yet, due to climate change, this area is drifting south, away from the islands where most King penguins currently live. Parents are then forced to swim farther to find food while their offspring wait on the shore, fasting for longer and longer. This study predicts that, for most colonies, the length of the parents’ foraging trips will soon exceed their offspring's resistance to starvation, leading to massive crashes in King penguin population size or, hopefully, to relocation.
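The study's core prediction, that a colony fails once the round trip to the feeding grounds outlasts a chick's fasting endurance, can be sketched as a simple threshold model. Every number below is an illustrative placeholder, not a value from the paper:

```python
def colony_viable(distance_km, swim_speed_kmh=8.0, max_fast_days=10.0):
    """Toy check: a colony remains viable only while a parent's round
    trip to the feeding front fits within the chick's fasting endurance.
    All parameters are hypothetical, chosen for illustration only."""
    trip_days = (2 * distance_km / swim_speed_kmh) / 24.0
    return trip_days <= max_fast_days

# As the Antarctic Polar Front drifts south, the foraging distance grows
# until the threshold is crossed:
for distance in (300, 700, 1200):
    print(distance, "km ->", colony_viable(distance))
```

With these made-up parameters, a colony 300 km from the front is fine, while one 1,200 km away fails; the real study derives the equivalent thresholds from penguin physiology and projected front positions.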

More than 70 percent of the global King penguin population may be nothing more than a memory in a matter of decades, as global warming will soon force the birds to move south, or disappear.

Credit: Céline Le Bohec

Using the information hidden away in the penguin’s genome, the research team has reconstructed the changes in the worldwide King penguin population throughout the last 50,000 years, and discovered that past climatic changes, causing shifts in marine currents, sea-ice distribution and Antarctic Polar Front location, have always been linked to critical episodes for the King penguins. However, hope is not lost yet: 

King penguins have already survived such crises several times (the last was 20 thousand years ago), and they may be particularly good at it. "Extremely low values in indices of genetic differentiation told us that all colonies are connected by a continuous exchange of individuals," says Emiliano Trucchi, formerly at the University of Vienna and now at the University of Ferrara, one of the coordinators of the study. "In other words, King penguins seem to be able to move around quite a lot to find the safest breeding locations when things turn grim."

King penguins are picky animals.

Credit: Céline Le Bohec

But there is a major difference this time: for the first time in the history of penguins, human activities are leading to rapid and/or irreversible changes in the Earth system, and remote areas are no exception. On top of climate change, whose impact is strongest in the polar regions, the Southern Ocean is now subject to industrial fishing, and penguins may soon have a very hard time fighting for their food.

"There are still some islands further south where King penguins may retreat," notes Céline Le Bohec (IPHC/CNRS/University of Strasbourg and CSM), leader of programme 137 of the French Polar Institute Paul-Emile Victor, within which the study was initiated, "but the competition for breeding sites and for food will be harsh, especially with other penguin species like the Chinstrap, Gentoo or Adélie penguins, even without the fisheries. It is difficult to predict the outcome, but there will surely be losses along the way. If we want to save anything, proactive and efficient conservation efforts and, above all, coordinated global action against global warming should start now."

Contacts and sources:
University of Vienna

Publication in "Nature Climate Change":
Cristofari R., Liu X., Bonadonna F., Cherel Y., Pistorius P., Le Maho Y., Raybaud V., Stenseth N.C., Le Bohec C. and Trucchi E. (2018) Climate-driven range shifts of the king penguin in a fragmented ecosystem. Nature Climate Change. DOI: 10.1038/s41558-018-0084-2

How Good Is Your Sense of Smell? You Could Recognize Odors Not Yet Invented

Historically speaking, smell is the Rodney Dangerfield of the human senses. A series of scientific mischaracterizations rooted in (of all things) the religious politics of 19th-century France and perpetuated by scientists like Sigmund Freud led to widespread acceptance that humans were underachievers in the smelling category.

But a growing number of scientists are rewriting the script, advancing the argument that smell has a powerful influence on emotions and behavior.

"We all have heard the hypothetical question 'Would you rather lose your eyesight or your sense of smell?'" says John McGann, an Associate Professor at Rutgers University. "Most people would rather be able to see than smell, of course, but smells evoke strong emotional and behavioral reactions and are often associated with distinct memories. The social role of smell is greater than we give it credit for, and olfactory impairment can have a disastrous effect on emotional well-being and diet. It's important that medical practice begin to take that into account."

People can smell as well as rats, scientists say...

Credit: John McGann, Rutgers University

The bulk of McGann's career has been dedicated to exploring how the brain uses smell to learn about the world. One of his most interesting findings turns cause and effect in the smelling world on its ear.

"We've found that when animals learn that a certain smell is associated with an unpleasant experience, the olfactory system itself changes. The cells in the nose that detect the odor become hypersensitive to that odor and send stronger signals to the brain that look like a warning signal for the associated danger,” he said. "We didn’t expect to see that learned information about odors would show up so early in the olfactory system, and it has inspired a series of follow-up experiments to see what else 'the nose knows.'”

Because mice have roughly 1,000 odor receptors and humans have about 350, people erroneously extrapolate that rodents' sense of smell is triple that of a human, but McGann says the math doesn’t work that way.

"Both mice and humans are capable of smelling almost anything volatile enough to get into the nose and more than a couple of atoms in size," McGann says. "In fact, we have odor receptors that are so broadly tuned that we can detect smells that scientists haven't yet invented."
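One way to see why raw receptor counts don't translate directly into smelling ability is combinatorial coding: odors are identified by the pattern of response across many broadly tuned receptors, not by one receptor per smell. The following toy calculation uses a simplified on/off response model and the approximate receptor counts from the text; it is an illustration, not McGann's analysis:

```python
def distinct_patterns(n_receptor_types):
    """Toy upper bound on distinct odor codes: with n broadly tuned
    receptor types that each either respond (1) or not (0) to an
    odorant, there are 2**n possible binary response patterns."""
    return 2 ** n_receptor_types

human_receptors = 350   # approximate count from the text
mouse_receptors = 1000  # approximate count from the text

# Even the smaller human repertoire spans an astronomically large
# pattern space, far more codes than there are plausible odorants.
print(distinct_patterns(human_receptors) > 10 ** 100)
```

On this crude model both species have vastly more codes than they could ever need, which is consistent with the quote: the limit on what can be smelled is volatility and molecular size, not receptor count.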

That provides hope for perfumers -- and for chefs and other food scientists looking for ways to help patients enjoy food and/or make healthy choices about what they eat.

McGann joins luminaries from the worlds of science, nutrition, and culinary arts at the International Society of Neurogastronomy Symposium next month, where they will share their data and experience on the psychological influences on eating and behavior, the chemosensory properties of food and how we experience them, the role of food as medicine, and the history and evolution of flavor and flavor perception.

The day's format differs from the typical symposium, featuring brief presentations modeled after the popular TED talks and punctuated with breaks for tastings and a contest where the food from nationally acclaimed chefs Taria Camerino and Jehangir Mehta will be judged by patients with diabetes.

This year, there is an experiential event on Friday, March 2: a five-course dinner with wine pairings and bourbon flavor wheel instruction by Chris Morris, Master Distiller at the Woodford Reserve, plus interdisciplinary clinical neuroscience lectures.

Contacts and sources: 
University of Kentucky   

Smellovision Invented: Brain Can Navigate Based Solely on Odors

Northwestern University researchers have developed a new “smell virtual landscape” that enables the study of how smells engage the brain’s navigation system. The work demonstrates, for the first time, that the mammalian brain can form a map of its surroundings based solely on smells.

The olfactory-based virtual reality system could lead to a fuller understanding of odor-guided navigation and explain why mammals have an aversion to unpleasant odors, an attraction to pheromones and an innate preference for one odor over another. The system could also help tech developers incorporate smell into current virtual reality systems to give users a more multisensory experience.

“We have invented what we jokingly call a ‘smellovision,’” said Daniel A. Dombeck, associate professor of neurobiology in Northwestern’s Weinberg College of Arts and Sciences, who led the study. “It is the world’s first method to control odorant concentrations rapidly in space for mammals as they move around.”

Olfactory virtual reality landscape

Credit: Daniel Dombeck and Brad Radvansky

The study was published online today, Feb. 26, by the journal Nature Communications.

Researchers have long known that odors can guide animals’ behaviors. But studying this phenomenon has been difficult because odors are nearly impossible to control as they naturally travel and diffuse in the air. By using a virtual reality system made of smells instead of audio and visuals, Dombeck and graduate student Brad Radvansky created a landscape in which smells can be controlled and maintained.

“Imagine a room in which each position is defined by a unique smell profile,” Radvansky said. “And imagine that this profile is maintained no matter how much time elapses or how fast you move through the room.”

That is exactly what Dombeck’s team developed, using mice in their study. Aided by a predictive algorithm that determined precise timing and distributions, the airflow system pumped scents — such as bubblegum, pine and a sour smell — past the mouse’s nose to create a virtual room. Mice first explored the virtual environment through both visual and olfactory cues. Researchers then shut off the visual virtual reality system, forcing the mice to navigate the room in total darkness based on olfactory cues alone. The mice did not show a decrease in performance. Instead, the study indicated that moving through a smell landscape engages the brain’s spatial mapping mechanisms.
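The core idea, that each position in the room is defined by a unique smell profile, can be sketched in a few lines. The odor names match the article's examples, but the source positions and the exponential distance-decay rule below are made up for illustration and are not the authors' algorithm:

```python
import math

# Hypothetical odorant sources at fixed positions in a virtual room (meters).
SOURCES = {"bubblegum": (0.0, 0.0), "pine": (2.0, 0.0), "sour": (1.0, 2.0)}

def smell_profile(x, y):
    """Concentration of each odorant at position (x, y), assuming a
    simple exponential fall-off with distance from each source."""
    return {odor: math.exp(-math.hypot(x - sx, y - sy))
            for odor, (sx, sy) in SOURCES.items()}

# Two different positions yield two distinct profiles, so the profile
# itself encodes location -- no visual cues required.
print(smell_profile(0.1, 0.1) != smell_profile(1.9, 0.1))
```

In the actual system, a predictive airflow algorithm keeps this mapping stable as the mouse moves, so the animal can read its position from the smell profile alone.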

Not only can the platform help researchers learn more about how the brain processes and uses smells, it could also lay the groundwork for human applications.

“Development of virtual reality technology has mainly focused on vision and sound,” Dombeck said. “It is likely that our technology will eventually be incorporated into commercial virtual reality systems to create a more immersive multisensory experience for humans.”

The study is titled “An olfactory virtual reality system for mice.”

The research was supported by The McKnight Foundation, The Klingenstein Foundation, The Whitehall Foundation, the Chicago Biomedical Consortium with support from the Searle Funds of The Chicago Community Trust, the National Institutes of Health (award number 1R01MH101297) and the National Science Foundation (award number CRCNS 1516235).

Contacts and sources:
Northwestern University

Continental Interiors May Not Be as Tectonically Stable as Geologists Think

A University of Illinois-led team has identified unexpected geophysical signals underneath tectonically stable interiors of South America and Africa. The data suggest that geologic activity within stable portions of Earth’s uppermost layer may have occurred more recently than previously believed. The findings, published in Nature Geoscience, challenge some of today’s leading theories regarding plate tectonics.

The most ancient rocks on Earth are located within continental interiors, far from active tectonic boundaries where rocks recycle back into the planet’s interior. These strong, buoyant and deeply rooted blocks of Earth, called cratons, have been drifting on the surface for billions of years, seemingly undisturbed. They occasionally join and break apart along their edges in a dance called the supercontinent cycle.

“We usually think of cratons as being cold, stable and low-elevation,” said professor of geology and study co-author Lijun Liu. “Cold because the rocks are far above the hot mantle layers, stable because their crusts have not been disturbed significantly by faulting or deformation, and their low elevation is because they have been sitting there, eroding down for billions of years.”

However, there are places where cratons don’t follow these rules.

“For example, there are regions of high topography within the cratons of South America and Africa,” said graduate student and lead author Jiashun Hu.

Cratonic lithosphere with a high-density root undergoes delamination when perturbed by mantle plumes from beneath. The removed cratonic root then thermally grows back, with its rock fabrics preserving recent mantle deformation.

Image courtesy of Lijun Liu

The researchers processed geophysical data with the Blue Waters supercomputer at the National Center for Supercomputing Applications at Illinois hoping to better understand these high-elevation regions. The thick roots of cratons have been thought to be buoyant due to their low-density mineral content, allowing them to float on top of the hot underlying mantle. However, the new data indicate that the cold mantle that lies below these regions in South America and Africa – once joined as part of the supercontinent Pangea – has a layered structure and that the lower layer was more dense in the past than it is today, Liu said.

This density difference could be the result of a process called mantle delamination. During delamination, the denser lower mantle layer peels away from the buoyant upper layer under the crust of the craton after interacting with hot magma from mantle plumes, the researchers said.

“From several types of seismic imaging data, we can see what we think are delaminated mantle slabs sinking into the hot, viscous deep mantle,” Liu said.

“The material that subsequently grows back at the roots of the cratons after delamination, due to cooling from above, is probably compositionally much less dense than what was there before,” said geology professor Craig Lundstrom. “That adds buoyancy, and that force from buoyancy could be what forms the anomalously high topography.”

Researchers, from left, Manuele Faccenda, of the University of Padova, and Stephen Marshak, Quan Zhou, Craig Lundstrom, Jiashun Hu and Lijun Liu, all of the University of Illinois, along with Karen Fischer of Brown University (not pictured), are challenging some of today’s leading theories regarding plate tectonics with their interpretation of ancient mantle-crust interactions.
Photo by L. Brian Stauffer

This multidisciplinary study is beginning to give the team a very logical – albeit complicated – update on the story of Earth’s tectonic history, the researchers said.

“The high topography of Africa and South America is only part of the story,” Hu said. “There are many geologic phenomena such as the location of hotspot trajectories, continental volcanism, surface uplift and erosion, as well as seismically imaged deformation within the craton roots that all seem to correlate well with the proposed delamination event, implying a potential causal relationship.”

There is also evidence to support other locations of craton-plume interaction during other times in Earth’s history.

“The rock record shows that uplift and erosion events have taken place during previous supercontinent cycles,” said geology professor and School of Earth, Society and Environment director Stephen Marshak. “A related study discusses what might be a similar event, namely continental uplift possibly related to delamination of cratonic lithosphere that caused the period of global erosion resulting in the Great Unconformity, which is the contact between Precambrian basement rock and Paleozoic sedimentary strata.”

For now, it is not clear if and how craton-plume interaction may affect modern-day earthquake activity and volcanism in areas thought of as geologically inactive. However, the study marks new thinking in how geologists may understand the so-called stable cratons.

The National Science Foundation, the National Center for Supercomputing Applications at the U. of I. and the Progetto di Ateneo grant funded this research.

Contacts and sources:
Lois Yoksoulian
University of Illinois at Urbana-Champaign

The paper, “Modification of the Western Gondwana craton by plume–lithosphere interaction,” was published in Nature Geoscience.