Friday, July 31, 2015

Death Throes of a Dying Star Captured by Hubble Space Telescope

A dying star’s final moments are captured in this image from the NASA/ESA Hubble Space Telescope. The death throes of this star may last mere moments on a cosmological timescale, but its demise is still quite lengthy by our standards, lasting tens of thousands of years!

Image credit: ESA/Hubble & NASA, Acknowledgement: Matej Novak

The star’s agony has culminated in a wonderful planetary nebula known as NGC 6565, a cloud of gas that was ejected from the star after strong stellar winds pushed the star’s outer layers away into space. Once enough material was ejected, the star’s luminous core was exposed, enabling its ultraviolet radiation to excite the surrounding gas to varying degrees and causing it to radiate in an attractive array of colors. These same colors can be seen in the famous and impressive Ring Nebula (heic1310), a prominent example of a nebula like this one.

Planetary nebulae are illuminated for around 10,000 years before the central star begins to cool and shrink to become a white dwarf. When this happens, the star’s light drastically diminishes and ceases to excite the surrounding gas, so the nebula fades from view.

Contacts and sources:
Ashley Morrow, NASA
Text credit: European Space Agency

Distant Uranus-Sized Planet Found Through Microlensing

NASA's Hubble Space Telescope and the W. M. Keck Observatory in Hawaii have made independent confirmations of an exoplanet orbiting far from its central star. The planet was discovered through a technique called gravitational microlensing.

This graphic illustrates how a star can magnify and brighten the light of a background star when it passes in front of the distant star. If the foreground star has planets, then the planets may also magnify the light of the background star, but for a much shorter period of time than their host star. Astronomers use this method, called gravitational microlensing, to identify planets.
Credit: NASA, ESA, and A. Feild (STScI)

This finding opens a new piece of discovery space in the extrasolar planet hunt: to uncover planets as far from their central stars as Jupiter and Saturn are from our sun. The Hubble and Keck Observatory results will appear in two papers in the July 30 edition of The Astrophysical Journal.

The large majority of exoplanets cataloged so far are very close to their host stars because several current planet-hunting techniques favor finding planets in short-period orbits. But this is not the case with the microlensing technique, which can find more distant and colder planets in long-period orbits that other methods cannot detect.

Microlensing occurs when a foreground star amplifies the light of a background star that momentarily aligns with it. If the foreground star has planets, then the planets may also amplify the light of the background star, but for a much shorter period of time than their host star. The exact timing and amount of light amplification can reveal clues to the nature of the foreground star and its accompanying planets.
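The shape of that amplification is well described by the standard point-lens (Paczyński) light curve, in which the magnification depends only on the lens–source separation in units of the lens's Einstein radius. A minimal sketch — the timescale and impact parameter below are illustrative placeholders, not measured values for OGLE-2005-BLG-169:

```python
import math

def magnification(u):
    """Point-lens magnification for lens-source separation u,
    measured in Einstein radii: A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def light_curve(t, t0=0.0, tE=40.0, u0=0.1):
    """Brightness amplification at time t (days) as the foreground star
    drifts past, with Einstein-crossing time tE and minimum impact
    parameter u0 (both hypothetical values here)."""
    u = math.sqrt(u0**2 + ((t - t0) / tE)**2)
    return magnification(u)

# At closest approach (t = t0) the background star is magnified most;
# far from alignment the magnification falls back toward 1 (no lensing).
peak = light_curve(0.0)        # u = u0 = 0.1 -> roughly tenfold brightening
baseline = light_curve(200.0)  # months away from alignment -> near 1.0
```

A planet around the lens star adds a short-lived deviation on top of this smooth curve, which is why the planetary signal lasts hours to days while the stellar event lasts months.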

The system, cataloged as OGLE-2005-BLG-169, was discovered in 2005 by the Optical Gravitational Lensing Experiment (OGLE), the Microlensing Follow-Up Network (MicroFUN), and members of the Microlensing Observations in Astrophysics (MOA) collaborations — groups that search for extrasolar planets through gravitational microlensing.

Without conclusively identifying and characterizing the foreground star, however, astronomers have had a difficult time determining the properties of the accompanying planet. Using Hubble and the Keck Observatory, two teams of astronomers have now found that the system consists of a Uranus-sized planet orbiting about 370 million miles from its parent star, slightly less than the distance between Jupiter and the sun. The host star, however, is about 70 percent as massive as our sun.

"These chance alignments are rare, occurring only about once every 1 million years for a given planet, so it was thought that a very long wait would be required before the planetary microlensing signal could be confirmed," said David Bennett of the University of Notre Dame, Indiana, the lead of the team that analyzed the Hubble data. "Fortunately, the planetary signal predicts how fast the apparent positions of the background star and planetary host star will separate, and our observations have confirmed this prediction. The Hubble and Keck Observatory data, therefore, provide the first confirmation of a planetary microlensing signal."

In fact, microlensing is such a powerful tool that it can uncover planets whose host stars cannot be seen by most telescopes. "It is remarkable that we can detect planets orbiting unseen stars, but we'd really like to know something about the stars that these planets orbit," explained Virginie Batista of the Institut d'Astrophysique de Paris, France, leader of the Keck Observatory analysis. "The Keck and Hubble telescopes allow us to detect these faint planetary host stars and determine their properties."

Planets are small and faint compared to their host stars; only a few have been observed directly outside our solar system. Astronomers often rely on two indirect techniques to hunt for extrasolar planets. The first method detects planets by the subtle gravitational tug they give to their host stars. In another method, astronomers watch for small dips in the amount of light from a star as a planet passes in front of it.

Both of these techniques work best when the planets are either extremely massive or when they orbit very close to their parent stars. In these cases, astronomers can reliably determine their short orbital periods, ranging from hours to days to a couple of years.

But to fully understand the architecture of distant planetary systems, astronomers must map the entire distribution of planets around a star. Astronomers, therefore, need to look farther away from the star, from about the distance Jupiter is from our sun, and beyond.

"It's important to understand how these systems compare with our solar system," said team member Jay Anderson of the Space Telescope Science Institute in Baltimore, Maryland. "So we need a complete census of planets in these systems. Gravitational microlensing is critical in helping astronomers gain insights into planetary formation theories."

The planet in the OGLE system is probably an example of a "failed-Jupiter" planet, an object that begins to form a Jupiter-like core of rock and ice weighing around 10 Earth masses but doesn't grow fast enough to accrete a significant mass of hydrogen and helium. It ends up with less than one-twentieth the mass of Jupiter. "Failed-Jupiter planets, like OGLE-2005-BLG-169Lb, are predicted to be more common than Jupiters, especially around stars less massive than the sun, according to the preferred theory of planet formation. So this type of planet is thought to be quite common," Bennett said.

Microlensing takes advantage of the random motions of stars, which are generally too small to be noticed without precise measurements. If one star, however, passes nearly precisely in front of a farther background star, the gravity of the foreground star acts like a giant lens, magnifying the light from the background star.

A planetary companion around the foreground star can produce a variation in the brightening of the background star. This brightening fluctuation can reveal the planet, which in some cases is too faint to be seen by telescopes. The duration of an entire microlensing event is several months, while the variation in brightening due to a planet lasts a few hours to a couple of days.

The initial microlensing data of OGLE-2005-BLG-169 had indicated a combined system of foreground and background stars plus a planet. But due to the blurring effects of our atmosphere, a number of unrelated stars are also blended with the foreground and background stars in the very crowded star field in the direction of our galaxy's center.

The sharp Hubble and Keck Observatory images allowed the research teams to separate the background source star from its neighbors. Although the Hubble images were taken 6.5 years after the lensing event, the source and lens stars were still so close together on the sky that their images merged into what looked like a single elongated stellar image.

Astronomers can measure the brightness of both the source and planetary host stars from the elongated image. When combined with the information from the microlensing light curve, the lens brightness reveals the masses and orbital separation of the planet and its host star, as well as the distance of the planetary system from Earth. The foreground and background stars were observed in several different colors with Hubble's Wide Field Camera 3 (WFC3), allowing independent confirmations of the mass and distance determinations.

The observations, taken with the Near Infrared Camera 2 (NIRC2) on the Keck 2 telescope more than eight years after the microlensing event, provided a precise measurement of the foreground and background stars' relative motion. "It is the first time we were able to completely resolve the source star and the lensing star after a microlensing event. This enabled us to discriminate between two models that fit the data of the microlensing light curve," Batista said.

The Hubble and Keck Observatory data are providing proof of concept for the primary method of exoplanet detection that will be used by NASA's planned, space-based Wide-Field Infrared Survey Telescope (WFIRST), which will allow astronomers to determine the masses of planets found with microlensing. WFIRST will have Hubble's sharpness to search for exoplanets using the microlensing technique. The telescope will be able to observe foreground, planetary host stars approaching the background source stars prior to the microlensing events, and receding from the background source stars after the microlensing events.

"WFIRST will make measurements like we have made for OGLE-2005-BLG-169 for virtually all the planetary microlensing events it observes. We'll know the masses and distances for the thousands of planets discovered by WFIRST," Bennett explained.

Contacts and sources:
Donna Weaver / Ray Villard
Space Telescope Science Institute, Baltimore, Maryland

David Bennett
University of Notre Dame, Notre Dame, Indiana

Jean-Philippe Beaulieu
Institut d'Astrophysique de Paris, Paris, France

Thursday, July 30, 2015

New Theory on the Origin of Life

When life on Earth began nearly 4 billion years ago, long before humans, dinosaurs or even the earliest single-celled forms of life roamed, it may have started as a hiccup rather than a roar: small, simple molecular building blocks known as "monomers" coming together into longer "polymer" chains and falling apart in the warm pools of primordial ooze over and over again.

A schematic drawing of template-assisted ligation, shown in this model to give rise to autocatalytic systems
Credit:  Maslov and Tkachenko

Then, somewhere along the line, these growing polymer chains developed the ability to make copies of themselves. Competition between these molecules would allow the ones most efficient at making copies of themselves to do so faster or with greater abundance, a trait that would be shared by the copies they made. These rapid replicators would fill the soup faster than the other polymers, allowing the information they encoded to be passed on from one generation to another and, eventually, giving rise to what we think of today as life.

Or so the story goes. But with no fossil record to check from those early days, it's a narrative that still has some chapters missing. One question in particular remains problematic: what enabled the leap from a primordial soup of individual monomers to self-replicating polymer chains?

A new model published this week in The Journal of Chemical Physics, from AIP Publishing, proposes a potential mechanism by which self-replication could have emerged. It posits that template-assisted ligation, the joining of two polymers by using a third, longer one as a template, could have enabled polymers to become self-replicating.

"We tried to fill this gap in understanding between simple physical systems to something that can behave in a life-like manner and transmit information," said Alexei Tkachenko, a researcher at Brookhaven National Laboratory. Tkachenko carried out the research alongside Sergei Maslov, a professor at the University of Illinois at Urbana-Champaign with joint appointment at Brookhaven.

Origins of Self-Replication

Self-replication is a complicated process -- DNA, the basis for life on earth today, requires a coordinated cohort of enzymes and other molecules in order to duplicate itself. Early self-replicating systems were surely more rudimentary, but their existence in the first place is still somewhat baffling.

Tkachenko and Maslov have proposed a new model that shows how the earliest self-replicating molecules could have worked. Their model switches between "day" phases, where individual polymers float freely, and "night" phases, where they join together to form longer chains via template-assisted ligation. The phases are driven by cyclic changes in environmental conditions, such as temperature, pH, or salinity, which throw the system out of equilibrium and induce the polymers to either come together or drift apart.

According to their model, during the night cycles, multiple short polymers bond to longer polymer strands, which act as templates. These longer template strands hold the shorter polymers in close enough proximity that they can ligate to form a longer strand -- a complementary copy of at least part of the template. Over time, the newly synthesized polymers come to dominate, giving rise to an autocatalytic and self-sustaining system of molecules large enough to potentially encode blueprints for life, the model predicts.
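The essential feedback loop — ligation speeds up as the population of potential templates grows — can be caricatured in a few lines of code. This is a deliberately crude toy, not the paper's model: all rate constants are invented, and every chain of length two or more is lumped into one pool (counted as two monomers' worth of material for bookkeeping).

```python
def run_cycles(n_cycles, monomer_feed=100.0, spont=1e-3, k_template=1e-3):
    """Toy day/night dynamics. Each 'night', pairs of monomers join
    end-to-end; ligation is mostly template-assisted (rate proportional
    to the number of existing chains), with a tiny spontaneous rate to
    seed the system. Each 'day', some chains drift apart again."""
    monomers = monomer_feed
    polymers = 0.0  # all chains of length >= 2, lumped together
    for _ in range(n_cycles):
        # Night: ligation. The k_template * polymers term is what makes
        # the system autocatalytic -- more chains mean faster chain-making.
        made = monomers * (spont + k_template * polymers)
        made = min(made, monomers / 2)  # each new chain uses two monomers
        polymers += made
        monomers -= 2 * made
        # Day: strands separate; a fraction of chains falls back apart.
        lost = 0.05 * polymers
        polymers -= lost
        monomers += 2 * lost
        monomers += monomer_feed * 0.1  # fresh monomer supply each cycle
    return monomers, polymers
```

Run long enough, the template-assisted term takes over and the polymer pool grows explosively; zero out `k_template` and only the slow spontaneous trickle remains — a cartoon of the bootstrapping the authors describe.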

Polymers can also link together without the aid of a template, but the process is somewhat more random -- a chain that forms in one generation will not necessarily be carried over into the next. Template-assisted ligation, on the other hand, is a more faithful means of preserving information, as the polymer chains of one generation are used to build the next. Thus, a model based on template-assisted ligation combines the lengthening of polymer chains with their replication, providing a potential mechanism for heritability.

While some previous studies have argued that a mix of the two is necessary for moving a system from monomers to self-replicating polymers, Maslov and Tkachenko's model demonstrates that it is physically possible for self-replication to emerge with only template-assisted ligation.

"What we have demonstrated for the first time is that even if all you have is template-assisted ligation, you can still bootstrap the system out of primordial soup," said Maslov.

The idea of template-assisted ligation driving self-replication was originally proposed in the 1980s, but in a qualitative manner. "Now it's a real model that you can run through a computer," said Tkachenko. "It's a solid piece of science to which you can add other features and study memory effects and inheritance."

Under Tkachenko and Maslov's model, the move from monomers to polymers is a very sudden one. It's also hysteretic -- that is, it takes a very particular set of conditions to make the initial leap from monomers to self-replicating polymers, but those stringent requirements are not necessary to maintain the self-replicating system once it has cleared the first hurdle.

One limitation of the model that the researchers plan to address in future studies is its assumption that all polymer sequences are equally likely to occur. Transmission of information requires heritable variation in sequence frequencies -- certain combinations of bases code for particular proteins, which have different functions. The next step, then, is to consider a scenario in which some sequences become more common than others, allowing the system to transmit meaningful information.

Maslov and Tkachenko's model fits into the currently favored RNA world hypothesis -- the belief that life on earth started with autocatalytic RNA molecules that then led to the more stable but more complex DNA as a mode of inheritance. But because it is so general, it could be used to test any origins of life hypothesis that relies on the emergence of a simple autocatalytic system.

"The model, by design, is very general," said Maslov. "We're not trying to address the question of what this primordial soup of monomers is coming from" or the specific molecules involved. Rather, their model shows a physically plausible path from monomer to self-replicating polymer, inching a step closer to understanding the origins of life.

Waiter, there's an RNA in my Primordial Soup -- Tracing the Origins of Life, from Darwin to Today

Nearly every culture on earth has an origins story, a legend explaining its existence. We humans seem to have a deep need for an explanation of how we ended up here, on this small planet spinning through a vast universe. Scientists, too, have long searched for our origins story, trying to discern how, on a molecular scale, the earth shifted from a mess of inorganic molecules to an ordered system of life. The question is impossible to answer for certain -- there's no fossil record, and no eyewitnesses. But that hasn't stopped scientists from trying.

Over the past 150 years, our shifting understanding of the origins of life has mirrored the emergence and development of the fields of organic chemistry and molecular biology. That is, increased understanding of the role that nucleotides, proteins and genes play in shaping our living world today has also gradually improved our ability to peer into their mysterious past.

When Charles Darwin published his seminal On the Origin of Species in 1859, he said little about the emergence of life itself, possibly because, at the time, there was no way to test such ideas. His only real remarks on the subject come from a later letter to a friend, in which he suggested that life emerged out of a "warm little pond" with a rich chemical broth of ions. Nevertheless, Darwin's influence was far-reaching, and his offhand remark formed the basis of many origins of life scenarios in the following years.

In the early 20th century, the idea was popularized and expanded upon by a Russian biochemist named Alexander Oparin. He proposed that the atmosphere on the early earth was reducing, meaning it was rich in hydrogen and other electron-donating compounds. Such an atmosphere could drive the reactions that transformed an existing prebiotic soup of organic molecules into the earliest forms of life.

Oparin's writing eventually inspired Harold Urey, who began to champion Oparin's proposal. Urey then caught the attention of Stanley Miller, who decided to formally test the idea. Miller took a mixture of what he believed the early earth's oceans may have contained -- a reducing mixture of methane, ammonia, hydrogen, and water -- and activated it with an electric spark. The jolt of electricity, acting like a strike of lightning, transformed nearly half of the carbon in the methane into organic compounds. One of the compounds he produced was glycine, the simplest amino acid.

The groundbreaking Miller-Urey experiment showed that inorganic matter could give rise to organic structures. And while the idea of a reducing atmosphere gradually fell out of favor, replaced by an environment rich in carbon dioxide, Oparin's basic framework of a primordial soup rich with organic molecules stuck around.

The identification of DNA as the hereditary material common to all life, and the discovery that DNA coded for RNA, which coded for proteins, provided fresh insight into the molecular basis for life. But it also forced origins of life researchers to answer a challenging question: how could this complicated molecular machinery have started? DNA is a complex molecule, requiring a coordinated team of enzymes and proteins to replicate itself. Its spontaneous emergence seemed improbable.

In the 1960s, three scientists -- Leslie Orgel, Francis Crick and Carl Woese -- independently suggested that RNA might be the missing link. Because RNA can self-replicate, it could have acted as both the genetic material and the catalyst for early life on earth. DNA, more stable but more complex, would have emerged later.

Today, it is widely believed (though by no means universally accepted) that at some point in history, an RNA-based world dominated the earth. But how it got there -- and whether there was a simpler system before it -- is still up for debate. Many argue that RNA is too complicated to have been the first self-replicating system on earth, and that something simpler preceded it.

Graham Cairns-Smith, for instance, has argued since the 1960s that the earliest gene-like structures were not based on nucleic acids, but on imperfect crystals that emerged from clay. The defects in the crystals, he believed, stored information that could be replicated and passed from one crystal to another. His idea, while intriguing, is not widely accepted today.

Others, taken more seriously, suspect that RNA may have emerged in concert with peptides -- an RNA-peptide world, in which the two worked together to build up complexity. Biochemical studies are also providing insight into simpler nucleic acid analogs that could have preceded the familiar bases that make up RNA today. It's also possible that the earliest self-replicating systems on earth have left no trace of themselves in our current biochemical systems. We may never know, and yet, the challenge of the search seems to be part of its appeal.

Recent research by Tkachenko and Maslov, published July 28, 2015 in The Journal of Chemical Physics, suggests that self-replicating molecules such as RNA may have arisen through a process called template-assisted ligation. That is, under certain environmental conditions, small polymers could be driven to bond to longer complementary polymer template strands, holding the short strands in close enough proximity to each other that they could fuse into longer strands. Through cyclic changes in environmental conditions that induce complementary strands to come together and then fall apart repeatedly, a self-sustaining collection of hybridized, self-replicating polymers able to encode the blueprints for life could emerge.

Contacts and sources:
American Institute of Physics (AIP)

Citation:  "Spontaneous emergence of autocatalytic information-coding polymers," is authored by Alexei Tkachenko and Sergei Maslov. It will appear in The Journal of Chemical Physics on July 28, 2015.

The Journal of Chemical Physics publishes concise and definitive reports of significant research in the methods and applications of chemical physics.

California “Rain Debt” Equal to Average Full Year of Precipitation

A new NASA study has concluded California accumulated a debt of about 20 inches of precipitation between 2012 and 2015 -- the average amount expected to fall in the state in a single year. The deficit was driven primarily by a lack of air currents moving inland from the Pacific Ocean that are rich in water vapor.

California's accumulated precipitation “deficit” from 2012 to 2014 shown as a percent change from the 17-year average based on TRMM multi-satellite observations.
Credits: NASA/Goddard Scientific Visualization Studio

In an average year, 20 to 50 percent of California's precipitation comes from relatively few, but extreme events called atmospheric rivers that move from over the Pacific Ocean to the California coast.

"When they say that an atmospheric river makes landfall, it's almost like a hurricane, without the winds. They cause extreme precipitation," said study lead author Andrey Savtchenko at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

Savtchenko and his colleagues examined data from 17 years of satellite observations and 36 years of combined observations and model data to understand how precipitation has varied in California since 1979. The results were published Thursday in Journal of Geophysical Research – Atmospheres, a journal of the American Geophysical Union.

The state as a whole can expect an average of about 20 inches of precipitation each year, with regional differences. But, the total amount can vary as much as 30 percent from year to year, according to the study.

In non-drought periods, wet years often alternate with dry years to balance out in the short term. However, from 2012 to 2014, California accumulated a deficit of almost 13 inches, and the 2014-2015 wet season increased the debt another seven inches, for a total accumulated deficit of 20 inches over the course of three dry years.

The majority of that precipitation loss is attributed to a high-pressure system in the atmosphere over the eastern Pacific Ocean that has interfered with the formation of atmospheric rivers since 2011.

Atmospheric rivers occur all over the world. They are narrow, concentrated tendrils of water vapor that travel through the atmosphere similar to, and sometimes with, the winds of a jet stream. Like a jet stream, they typically travel from west to east. The ones destined for California originate over the tropical Pacific, where warm ocean water evaporates a lot of moisture into the air. The moisture-rich atmospheric rivers, informally known as the Pineapple Express, then break northward toward North America.

The atmospheric rivers that drenched California in December 2014 are shown in this data visualization: water vapor (white) and precipitation (red to yellow).

Credits: NASA/Goddard Scientific Visualization Studio

Earlier this year, a NASA research aircraft participated in the CalWater 2015 field campaign to improve understanding of when and how atmospheric rivers reach California.

Some of the water vapor rains out over the ocean, but the show really begins when an atmospheric river reaches land. Two reached California around Dec. 1 and 10, 2014, and brought more than three inches of rain, according to NASA's Tropical Rainfall Measuring Mission (TRMM)'s multi-satellite dataset. The inland terrain, particularly mountains, forces the moist air to higher altitudes, where lower pressure causes it to expand and cool. The cooler air condenses the concentrated pool of water vapor into torrential rains, or into snowfall, as happens over the Sierra Nevada, where water is stored in the snowpack until the spring melt just before the growing season.

The current drought isn't the first for California. Savtchenko and his colleagues recreated a climate record for 1979 to the present using the Modern-Era Retrospective Analysis for Research and Applications, or MERRA. Their efforts show that a 27.5 inch deficit of rain and snow occurred in the state between 1986 and 1994.

"Drought has happened here before. It will happen again, and some research groups have presented evidence it will happen more frequently as the planet warms," Savtchenko said. "But, even if the climate doesn’t change, are our demands for fresh water sustainable?"

The current drought has been notably severe because, since the late 1980s, California's population, industry and agriculture have experienced tremendous growth, with a correlating growth in their demand for water. Human consumption has depleted California's reservoirs and groundwater reserves, as shown by data from NASA's Gravity Recovery and Climate Experiment (GRACE) mission, leading to mandatory water rationing.

"The history of the American West is written in great decade-long droughts followed by multi-year wet periods," said climatologist Bill Patzert at NASA's Jet Propulsion Laboratory in Pasadena, California. He was not involved in the research. "Savtchenko and his team have shown how variable California rainfall is.”

According to Patzert, this study added nuance to how scientists may interpret the atmospheric conditions that cause atmospheric rivers and an El Niño's capacity to bust the drought. Since March, rising sea surface temperatures in the central equatorial Pacific have indicated the formation of El Niño conditions. El Niño conditions are often associated with higher rainfall in the western United States, but it’s not guaranteed.

Savtchenko and his colleagues show that El Niño contributes only six percent to California's precipitation variability and is one factor among other, more random effects that influence how much rainfall the state receives. While it’s more likely El Niño increases precipitation in California, it’s still possible it will have no, or even a drying, effect.

A strong El Niño that lasts through the rainy months, from November to March, is more likely to increase the amount of rain that reaches California, and Savtchenko noted the current El Niño is quickly strengthening.

The National Oceanic and Atmospheric Administration (NOAA), which monitors El Niño events, ranks it as the third strongest in the past 65 years for May and June. Still, it will likely take several years of higher than normal rain and snowfall to recover from the current drought.

"If this El Niño holds through winter, California’s chances to recoup some of the precipitation increase. Unfortunately, so do the chances of floods and landslides," Savtchenko said. “Most likely the effects would be felt in late 2015-2016.”


Helium-Shrouded Planets May Be Common in Our Galaxy

They wouldn't float like balloons or give you the chance to talk in high, squeaky voices, but planets with helium skies may constitute an exotic planetary class in our Milky Way galaxy. Researchers using data from NASA's Spitzer Space Telescope propose that warm Neptune-size planets with clouds of helium may be strewn about the galaxy by the thousands.

This artist's concept depicts a proposed helium-atmosphere planet called GJ 436b, which was found by Spitzer to lack methane -- a first clue about its lack of hydrogen.

Credits: NASA/JPL-Caltech

"We don't have any planets like this in our own solar system," said Renyu Hu, NASA Hubble Fellow at the agency's Jet Propulsion Laboratory in Pasadena, California, and lead author of a new study on the findings accepted for publication in the Astrophysical Journal. "But we think planets with helium atmospheres could be common around other stars."

Prior to the study, astronomers had been investigating a surprising number of so-called warm Neptunes in our galaxy. NASA's Kepler space telescope has found hundreds of candidate planets that fall into this category. They are the size of Neptune or smaller, with tight orbits that are closer to their stars than our own sizzling Mercury is to our sun. These planets reach temperatures of more than 1,340 degrees Fahrenheit (1,000 Kelvin), and orbit their stars in as little as one or two days.

In the new study, Hu and his team make the case that some warm Neptunes -- and warm sub-Neptunes, which are smaller than Neptune -- could have atmospheres enriched with helium. They say that the close proximity of these planets to their searing stars would cause the hydrogen in their atmospheres to boil off.

"Hydrogen is four times lighter than helium, so it would slowly disappear from the planets' atmospheres, causing them to become more concentrated with helium over time," said Hu. "The process would be gradual, taking up to 10 billion years to complete." For reference, our planet Earth is about 4.5 billion years old.

Warm Neptunes are thought to have either rocky or liquid cores, surrounded by gas. If helium is indeed the dominant component in their atmospheres, the planets would appear white or gray. By contrast, the Neptune of our own solar system is a brilliant azure blue. The methane in its atmosphere absorbs the color red, giving Neptune its blue hue.

This diagram illustrates how hypothetical helium atmospheres might form. These would be on planets about the mass of Neptune, or smaller, which orbit tightly to their stars, whipping around in just days. They are thought to have cores of water or rock, surrounded by thick atmospheres of gas. Radiation from their nearby stars would boil off hydrogen and helium, but because hydrogen is lighter, more hydrogen would escape. It's also possible that planetary bodies, such as asteroids, could impact the planet, sending hydrogen out into space. Over time, the atmospheres would become enriched in helium.

Image credit: NASA/JPL-Caltech

With less hydrogen in the planets' atmospheres, the concentration of methane and water would go down, since both molecules contain hydrogen. Eventually, after billions of years, the abundances of water and methane would be greatly reduced. With so little hydrogen available, carbon would instead be forced to pair with oxygen, forming carbon monoxide.

A lack of methane in one particular warm Neptune, called GJ 436b, is in fact what led Hu and his team to develop their helium planet theory. Spitzer had previously observed GJ 436b, located 33 light-years away, and found evidence for carbon but not methane. This was puzzling to scientists, because methane molecules are made of one carbon and four hydrogen atoms, and planets like this are expected to have a lot of hydrogen. Why wasn't the hydrogen linking up with carbon to produce methane?

According to the new study, the hydrogen might have been slow-cooked off the planet by radiation from the host stars. With less hydrogen around, the carbon would pair up with oxygen to make carbon monoxide. In fact, Spitzer found evidence for a predominance of carbon monoxide in the atmosphere of GJ 436b.

The next step to test this theory is to look at other warm Neptunes for signs of carbon monoxide and carbon dioxide, which are indicators of helium atmospheres. The team says this might be possible with the help of NASA's Hubble Space Telescope, and NASA's upcoming James Webb Space Telescope may one day directly detect that helium.

Meanwhile, the wacky world of exoplanets continues to surprise astronomers.

"Any planet one can imagine probably exists, out there, somewhere, as long as it fits within the laws of physics and chemistry," said co-author Sara Seager of the Massachusetts Institute of Technology in Cambridge and JPL. "Planets are so incredibly diverse in their masses, sizes and orbits that we expect this to extend to exoplanet atmospheres."

A third author of the paper is Yuk Yung of the California Institute of Technology in Pasadena and JPL.

JPL manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

Contacts and sources:
Whitney Clavin 
Jet Propulsion Laboratory

Can Planets Be Rejuvenated Around Dead Stars?

For a planet, this would be like a day at the spa. After years of growing old, a massive planet could, in theory, brighten up with a radiant, youthful glow. Rejuvenated planets, as they are nicknamed, are only hypothetical. But new research from NASA's Spitzer Space Telescope has identified one such candidate, seemingly looking billions of years younger than its actual age.

"When planets are young, they still glow with infrared light from their formation," said Michael Jura of UCLA, coauthor of a new paper on the results in the June 10 issue of the Astrophysical Journal Letters. "But as they get older and cooler, you can't see them anymore. Rejuvenated planets would be visible again."

This artist's concept shows a hypothetical "rejuvenated" planet -- a gas giant that has reclaimed its youthful infrared glow. NASA's Spitzer Space Telescope found tentative evidence for one such planet around a dead star, or white dwarf, called PG 0010+280 (depicted as white dot in illustration).
Credits: NASA/JPL-Caltech

How might a planet reclaim the essence of its youth? Years ago, astronomers predicted that some massive, Jupiter-like planets might accumulate mass from their dying stars. As stars like our sun age, they puff up into red giants and then gradually lose about half or more of their mass, shrinking into skeletons of stars, called white dwarfs. The dying stars blow winds of material outward that could fall onto giant planets that might be orbiting in the outer reaches of the star system.

Thus, a giant planet might swell in mass, and heat up due to friction felt by the falling material. This older planet, having cooled off over billions of years, would once again radiate a warm, infrared glow.

The new study describes a dead star, or white dwarf, called PG 0010+280. An undergraduate student on the project, Blake Pantoja, then at UCLA, serendipitously discovered unexpected infrared light around this star while searching through data from NASA's Wide-field Infrared Survey Explorer, or WISE. Follow-up research led them to Spitzer observations of the star, taken back in 2006, which also showed the excess of infrared light.

At first, the team thought the extra infrared light was probably coming from a disk of material around the white dwarf. In the last decade or so, more and more disks around these dead stars have been discovered -- around 40 so far. The disks are thought to have formed when asteroids wandered too close to the white dwarfs, becoming chewed up by the white dwarfs' intense, shearing gravitational forces.

Other evidence for white dwarfs shredding asteroids comes from observations of the elements in white dwarfs. White dwarfs should contain only hydrogen and helium in their atmospheres, but researchers have found signs of heavier elements -- such as oxygen, magnesium, silicon and iron -- in about 100 systems to date. The elements are thought to be leftover bits of crushed asteroids, polluting the white dwarf atmospheres.

But the Spitzer data for the white dwarf PG 0010+280 did not fit well with models for asteroid disks, leading the team to look at other possibilities. Perhaps the infrared light is coming from a small companion "failed" star, called a brown dwarf -- or, more intriguingly, from a rejuvenated planet.

"I find the most exciting part of this research is that this infrared excess could potentially come from a giant planet, though we need more work to prove it," said Siyi Xu of UCLA and the European Southern Observatory in Germany. "If confirmed, it would directly tell us that some planets can survive the red giant stage of stars and be present around white dwarfs."

In the future, NASA's upcoming James Webb Space Telescope could possibly help distinguish between a glowing disk and a planet around the dead star, solving the mystery. But for now, the search for rejuvenated planets -- much like humanity's own quest for a fountain of youth -- endures.

JPL manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

Contacts and sources: 
Whitney Clavin
Jet Propulsion Laboratory

Super-Earth Found in Nearby Solar System, Only 21 Light Years Away

Using NASA's Spitzer Space Telescope, astronomers have confirmed the discovery of the nearest rocky planet outside our solar system, larger than Earth and a potential gold mine of science data.

Dubbed HD 219134b, this exoplanet, which orbits too close to its star to sustain life, is a mere 21 light-years away. While the planet itself can't be seen directly, even by telescopes, the star it orbits is visible to the naked eye in dark skies in the Cassiopeia constellation, near the North Star.

This artist's concept shows the silhouette of a rocky planet, dubbed HD 219134b. At 21 light-years away, the planet is the closest outside of our solar system that can be seen crossing, or transiting, its star.
Credits: NASA/JPL-Caltech

HD 219134b is also the closest exoplanet to Earth to be detected transiting, or crossing in front of, its star and, therefore, perfect for extensive research.

"Transiting exoplanets are worth their weight in gold because they can be extensively characterized," said Michael Werner, the project scientist for the Spitzer mission at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California. "This exoplanet will be one of the most studied for decades to come."

The planet, initially discovered using the HARPS-North instrument on the Italian 3.6-meter Galileo National Telescope in the Canary Islands, is the subject of a study accepted for publication in the journal Astronomy & Astrophysics.

Study lead author Ati Motalebi of the Geneva Observatory in Switzerland said she believes the planet is the ideal target for NASA’s James Webb Space Telescope in 2018.

"Webb and future large, ground-based observatories are sure to point at it and examine it in detail,” Motalebi said.

Only a small fraction of exoplanets can be detected transiting their stars due to their relative orientation to Earth. When the orientation is just right, the planet’s orbit places it between its star and Earth, dimming the detectable light of its star. It’s this dimming of the star that is actually captured by observatories such as Spitzer, and can reveal not only the size of the planet but also clues about its composition.
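To first order, the fractional dimming during a transit equals the projected area ratio (Rp/R★)², which is why the dip reveals the planet's size. A rough illustration: the 1.6 Earth-radii planet size is from the article, but the ~0.78 solar-radii stellar radius used here is an assumed value for a typical K dwarf, introduced only to make the sketch concrete:

```python
R_EARTH = 6.371e6  # Earth radius, m
R_SUN = 6.957e8    # solar radius, m

def transit_depth(planet_radius_m, star_radius_m):
    """Fractional dimming during transit: sky-projected area ratio."""
    return (planet_radius_m / star_radius_m) ** 2

# Planet radius from the article; stellar radius is an assumption.
depth = transit_depth(1.6 * R_EARTH, 0.78 * R_SUN)
print(f"transit depth ~ {depth:.2%}")  # a dip of a few hundredths of a percent
```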

"Most of the known planets are hundreds of light-years away. This one is practically a next-door neighbor," said astronomer and study co-author Lars A. Buchhave of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. For reference, the closest known planet is GJ674b at 14.8 light-years away; its composition is unknown.

HD 219134b was first sighted by the HARPS-North instrument using a method called the radial velocity technique, in which a planet's mass and orbit can be measured by the tug it exerts on its host star. The planet was determined to have a mass 4.5 times that of Earth, and a speedy three-day orbit around its star.

Spitzer followed up on the finding, discovering the planet transits its star. Infrared measurements from Spitzer revealed the planet's size, about 1.6 times that of Earth. Combining the size and mass gives it a density of 3.5 ounces per cubic inch (six grams per cubic centimeter) -- confirming HD 219134b is a rocky planet.
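The density figure follows directly from the measured mass and radius. A quick check, treating the planet as a uniform sphere (the Earth mass and radius constants are standard values; the 4.5 Earth-mass and 1.6 Earth-radius figures are from the article):

```python
import math

M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

mass = 4.5 * M_EARTH    # from the HARPS-North radial velocity measurement
radius = 1.6 * R_EARTH  # from the Spitzer transit
volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume  # kg/m^3

print(f"{density / 1000:.1f} g/cm^3")  # ~6 g/cm^3, consistent with rock
```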

Now that astronomers know HD 219134b transits its star, scientists will be scrambling to observe it from the ground and space. The goal is to tease chemical information out of the dimming starlight as the planet passes before it. If the planet has an atmosphere, chemicals in it can imprint patterns in the observed starlight.

Rocky planets such as this one, with bigger-than-Earth proportions, belong to a growing class of planets termed super-Earths.

"Thanks to NASA's Kepler mission, we know super-Earths are ubiquitous in our galaxy, but we still know very little about them," said co-author Michael Gillon of the University of Liege in Belgium, lead scientist for the Spitzer detection of the transit. "Now we have a local specimen to study in greater detail. It can be considered a kind of Rosetta Stone for the study of super-Earths."

Further observations with HARPS-North also revealed three more planets in the same star system, all orbiting farther out than HD 219134b. Two are relatively small and not too far from the star. Small, tightly packed multi-planet systems are completely different from our own solar system, but, like super-Earths, are being found in increasing numbers.

JPL manages the Spitzer mission for NASA's Science Mission Directorate in Washington. Science operations are conducted at the Spitzer Science Center at the California Institute of Technology (Caltech) in Pasadena. Spacecraft operations are based at Lockheed Martin Space Systems Company in Littleton, Colorado. Data are archived at the Infrared Science Archive, housed at Caltech’s Infrared Processing and Analysis Center. 

Contacts and sources:
Whitney Clavin
Jet Propulsion Laboratory

Dense Star Clusters Shown as Binary Black Hole Factories

The coalescence of two black holes -- a very violent and exotic event -- is one of the most sought-after observations of modern astronomy. But, as these mergers emit no light of any kind, finding such elusive events has been impossible so far.

Colliding black holes do, however, release a phenomenal amount of energy as gravitational waves. The first observatories capable of directly detecting these 'gravity signals' -- ripples in the fabric of spacetime first predicted by Albert Einstein 100 years ago -- will begin observing the universe later this year.

An artist's conception of a black hole binary in the heart of a quasar, with data showing the periodic variability superposed.
Credits: Santiago Lombeyda, Center for Data-Driven Discovery, Caltech

When the gravitational waves rolling in from space are detected on Earth for the first time, a team of Northwestern University astrophysicists predicts astronomers will "hear," through these waves, five times more colliding black holes than previously expected. Direct observations of these mergers will open a new window into the universe.

"This information will allow astrophysicists to better understand the nature of black holes and Einstein's theory of gravity," said Frederic A. Rasio, a theoretical astrophysicist and senior author of the study. "Our study indicates the observatories will detect more of these energetic events than previously thought, which is exciting."

Rasio is the Joseph Cummings Professor in the department of physics and astronomy in Northwestern's Weinberg College of Arts and Sciences.

Rasio's team, utilizing observations from our own galaxy, report in a new modeling study two significant findings about black holes:

Globular clusters (spherical collections of up to a million densely packed stars found in galactic haloes) could be factories of binary black holes (two black holes in close orbit around each other); and

The sensitive new observatories potentially could detect 100 merging binary black holes per year forged in the cores of these dense star clusters. (A burst of gravitational waves is emitted whenever two black holes merge.) This number is more than five times what previous studies predicted.

Supercomputer models of merging black holes reveal properties that are crucial to understanding future detections of gravitational waves. This movie follows two orbiting black holes and their accretion disk during their final three orbits and ultimate merger. Redder colors correspond to higher gas densities.
Credit: NASA

The study has been accepted for publication by the journal Physical Review Letters and is scheduled to be published today (July 29).

"Gravitational waves will let us hear the universe for the first time, through the ripples made by astronomical events in spacetime," said Carl L. Rodriguez, lead author of the paper. He is a Ph.D. student in Rasio's research group.

"Up until now, all of our observations have been from telescopes, literally looking out at the universe. Detecting gravitational waves will change that. And the cool part is we can hear things we could never see, such as binary black hole mergers, the subject of our study," he said.

Rodriguez and colleagues used detailed computer models to demonstrate how a globular cluster acts as a dominant source of binary black holes, producing hundreds of black hole mergers over a cluster's 12-billion-year lifetime.

By comparing the models to recent observations of clusters in the Milky Way galaxy and beyond, the results show that the next generation of gravitational-wave observatories could see more than 100 binary black hole mergers per year.

Frame from a simulation of the merger of two black holes and the resulting emission of gravitational radiation (colored fields). The outer red sheets correspond directly to the outgoing gravitational radiation that one day may be detected by gravitational-wave observatories.
Credit: NASA/C. Henze 

Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory) is one of the new gravitational-wave observatories. Slated to begin operation later this year, Advanced LIGO is a large-scale physics experiment designed to directly detect gravitational waves of cosmic origin. Laser interferometers detect gravitational waves from the minute oscillations of suspended mirrors set into motion as the waves pass through the Earth.

Rasio and Rodriguez are members of Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA).

For the study, the research team used a parallel computing code for modeling star clusters developed through a CIERA-supported interdisciplinary collaboration between Northwestern's physics and astronomy department and electrical engineering and computer science department.

Contacts and sources:
Megan Fellman
Northwestern University

The title of the paper is "Binary Black Hole Mergers From Globular Clusters: Implications for Advanced LIGO."

Stressed Out Plants Send Animal-Like Signals

University of Adelaide research has shown for the first time that, despite not having a nervous system, plants use signals normally associated with animals when they encounter stress.

In a study published today in the journal Nature Communications, researchers at the Australian Research Council (ARC) Centre of Excellence in Plant Energy Biology reported how plants respond to their environment with a combination of chemical and electrical responses similar to those of animals, but through machinery that is specific to plants.

"We've known for a long time that the animal neurotransmitter GABA (gamma-aminobutyric acid) is produced by plants under stress, for example when they encounter drought, salinity, viruses, acidic soils or extreme temperatures," says senior author Associate Professor Matthew Gilliham, ARC Future Fellow in the University's School of Agriculture, Food and Wine.

Associate Professor Matthew Gilliham
Credit: University of Adelaide

"But it was not known whether GABA was a signal in plants. We've discovered that plants bind GABA in a similar way to animals, resulting in electrical signals that ultimately regulate plant growth when a plant is exposed to a stressful environment."

By identifying how plants respond to GABA the researchers are optimistic that they have opened up many new possibilities for modifying how plants respond to stress.

"The major stresses agricultural crops face like pathogens and poor environmental conditions account for most yield losses around the planet - and consequently food shortages," says co-lead author Professor Stephen Tyerman.

"By identifying how plants use GABA as a stress signal we have a new tool to help in the global effort to breed more stress resilient crops to fight food insecurity."

Despite a similar function, the proteins that bind GABA and their mammalian counterparts only resemble each other in the region where they interact with the neurotransmitter - the rest of the protein looks quite different.

"This raises very interesting questions about how GABA has been recruited as a messenger in both plant and animal kingdoms," says co-lead author Dr Sunita Ramesh. "It seems likely that this has evolved in both kingdoms separately."

The researchers say these findings could also explain why particular plant-derived drugs used as sedatives and anti-epileptics work in humans. These drugs are able to interact with proteins in the GABA-signalling system in both plants and animals -- suggesting that future work on other plant GABA signalling agents will also benefit the medical field.

The work also involved researchers at CSIRO Canberra, the University of Tasmania, the Gulbenkian Institute in Portugal and the University of Maryland, USA.

Contacts and sources:
Associate Professor Matthew Gilliham
ARC Future Fellow and Chief Investigator
ARC Centre of Excellence in Plant Energy Biology
School of Agriculture, Food and Wine
The University of Adelaide

Electric Fields Signal 'No Flies Zone'

A new piece of research led by the University of Southampton has found that the behaviour of fruit flies, which are commonly used in laboratory experiments, is altered by electric fields.

The research indicates that the wings of the insects are disturbed by static electric fields, leading to changes in avoidance behaviour and the neurochemical balance of their brains.

Small male Drosophila melanogaster fly
Credit: André Karwath

The paper, published in Proceedings of the Royal Society B, suggests that the plastic housing laboratory fruit flies are commonly kept in (which holds its own static electric charge) could agitate the flies, changing their behaviour and neurochemical profile, which has the potential to impact or confound other studies in which they are used.

"Fruit flies are often used as model organisms to understand fundamental problems in biology," says Professor Philip Newland, Professor of Neuroscience at the University of Southampton and lead author of the study. "75 per cent of the genes that cause disease in humans are shared by fruit flies, so by studying them we can learn a lot about basic mechanisms.

"Plastic can retain a charge for a long period and, given the use of plastic in the rearing of these insects and other small insects such as mosquitoes, long-term exposure to these fields is inevitable."

The researchers put fruit flies in a Y-shaped maze, with one arm of the maze exposed to an electric charge and the other receiving none. They found that the flies avoided the charged chamber and gathered in the non-charged arm. Interestingly, flies with no wings didn't display this behaviour, and flies with smaller wings only avoided higher charges, suggesting it is the wings of the fly that detect the fields and are affected by them.

This was borne out when subjecting stationary flies to electric fields. The researchers observed that the wings of the flies could be manipulated by a field of a similar strength to that which produced the avoidance behaviour.

Professor Newland explains: "When a fly was placed underneath a negatively charged electrode, the static field forces caused elevation of the wings toward the electrode, as opposite charges were attracted.

"Static electric fields are all around us but for a small insect like a fruit fly it appears these fields' electrical charges are significant enough to have an effect on their wing movement and this means they will avoid them if possible."

This forced wing movement seems to agitate the flies, as revealed by changes in their brain chemistry. Flies exposed to an electric field showed increased levels of octopamine (similar to noradrenaline in humans), which indicates stress and aggression. The flies also showed decreased levels of dopamine, meaning they would be more responsive to external stimuli.

As well as having consequences for flies used in laboratories, the results also have implications for flies in their natural environment.

"We are particularly interested in how electric fields could be used in pest control," says co-author Dr Christopher Jackson, also of Southampton. "Meshes that can generate static electric fields could be put across windows of houses or green houses to prevent insects like fruit flies or even mosquitos entering, yet allow air movement."

"It also raises questions of how pollinating species like bees could be affected by power lines, which have stronger electric fields."

Contacts and sources:
Steven Williams

Black Phosphorus Computers: BP Could Compete with Silicon as a Chip Building Material

Silicon Valley in Northern California got its nickname from the multitude of computer chip manufacturers that sprang up in the surrounding area in the 1980s. Despite its ubiquity as a chip-building material, silicon may be facing some competition from a new version of an old substance.

Researchers working at the Institute for Basic Science (IBS) Center for Integrated Nanostructure Physics at Sungkyunkwan University (SKKU) in South Korea, led in part by Director Young Hee Lee, have created a high performance transistor using black phosphorus (BP) which has revealed some fascinating results.

This image shows the atomic structure of black phosphorus and the n- and p-type transistor properties of a BP transistor.

Credit: Institute For Basic Science

Transistors are made of materials with semiconducting properties, which come in two varieties: n-type (excess electrons) and p-type (excess holes). With the BP crystal, researchers discovered that changing its thickness and/or the contact metals determines whether it behaves as a high-performance n-type, p-type, or ambipolar (functioning as both n- and p-type) material.

What does this mean?

Silicon has to be extrinsically doped (inserting another element into its crystal structure) to make it n-type or p-type in order for it to work in a semiconductor chip. The BP crystals can operate as both n-type and p-type or something in between, but don't require extrinsic doping. This means that instead of having to fabricate a silicon-arsenic crystal sandwiched between silicon-boron crystals, a transistor can have a single, lightweight, pure black phosphorus logic chip -- no doping required.

Additionally, the metals used to connect the chip to the circuit influence whether BP behaves as n- or p-type. Instead of doping to make n- and p-type materials, both n- and p-type BP can be placed together on one chip simply by varying its thickness and the contact metal used.

Why is this important?

Technology manufacturers are in an arms race to make their devices lighter, smaller and more efficient. By using BP that is only several atomic layers thick, transistors can be made smaller and more energy efficient than what exists now.

Silicon chips exist in all of our electronic devices, and as manufacturers make devices smaller and more energy efficient, they begin to approach the threshold for just how small components can be. BP may provide a thinner, more efficient alternative to silicon chips in electrical devices.

Another example is the tiny autonomous data-recording and transmitting devices that will make up the Internet of Things (IoT). A major constraint preventing IoT from taking off immediately is the inability to scale down component size, along with the lack of a long-term power solution. Two-dimensional layered materials (such as black phosphorus) are interesting in this respect, since both their electrical and mechanical properties are often enhanced compared to their bulk (three-dimensional) counterparts.

Is BP a good alternative to current semiconductor materials?

It is a great material for transistors since it has a high carrier mobility (how quickly an electron can move through it). This gives BP the ability to operate at lower voltages while also increasing performance, which translates to greatly reduced power consumption.

With aluminum as a contact, thicker BP flakes (13 nanometers) show ambipolar properties similar to graphene, while thin 3 nm flakes are unipolar n-type with switching on/off ratios greater than 10^5. The thinner they can make the material, the better the switching performance.

SKKU research fellow David Perello explains, "The driving force in black phosphorus is the carrier mobility. Everything centers around that. The fact that the band gap changes with thickness also gives us flexibility in circuit design. As a researcher it gives me a lot of things to play with."

Is it ready to compete with silicon?

Unlike other industry-standard semiconductor materials, there isn't yet a good method for making pure BP on a large scale. Currently, thin layers can be made only by scraping bulk crystalline BP samples. Tackling the scaling problem is already underway, with chemical vapor deposition (CVD) and other thin-film growth techniques being investigated in labs across the world. The lack of a monolayer fabrication technique isn't necessarily a problem though. Perello explains, "We can probably operate with 3, 5, or 7 layers and that might actually be better in terms of performance."

When asked if BP was ready to compete with silicon today, Perello said, "I don't think it can compete with silicon at the moment, that's a dream everybody has. Silicon is cheap and plentiful and the best silicon transistors we can make have mobilities that are similar to what I was able to make in these BP devices."

This doesn't mean that BP isn't worth exploring further though. According to Perello, "The fact that it was so simple to make such an excellent transistor without having access to state of the art commercial growth, fabrication and lithography facilities means that we could make it significantly better. We expect the upper bound for carrier mobility in black phosphorus to be much higher than silicon."

At present, BP isn't ready for commercial use, and its potential has only started to be recognized. If it continues to perform well in further tests, it should be a strong contender as a chip material for future technology.

Contacts and sources: 
Sunny Kim

Occator Among Batch of New Names and Insights at Ceres

Scientists continue to analyze the latest data from Dawn as the spacecraft makes its way to its third mapping orbit.

This color-coded map from NASA's Dawn mission shows the highs and lows of topography on the surface of dwarf planet Ceres. 

Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

"The craters we find on Ceres, in terms of their depth and diameter, are very similar to what we see on Dione and Tethys, two icy satellites of Saturn that are about the same size and density as Ceres. The features are pretty consistent with an ice-rich crust," said Dawn science team member Paul Schenk, a geologist at the Lunar and Planetary Institute, Houston.

This pair of images shows color-coded maps from NASA's Dawn mission, revealing the highs and lows of topography on the surface of dwarf planet Ceres.
Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Colorful new maps of Ceres, based on data from NASA's Dawn spacecraft, showcase a diverse topography, with height differences between crater bottoms and mountain peaks as great as 9 miles (15 kilometers).

Some of these craters and other features now have official names, inspired by spirits and deities relating to agriculture from a variety of cultures. The International Astronomical Union recently approved a batch of names for features on Ceres.

The newly labeled features include Occator, the mysterious crater containing Ceres' brightest spots, which has a diameter of about 60 miles (90 kilometers) and a depth of about 2 miles (4 kilometers). Occator is the name of the Roman agriculture deity of harrowing, a method of leveling soil.

A smaller crater with bright material, previously labeled "Spot 1," is now identified as Haulani, after the Hawaiian plant goddess. Haulani has a diameter of about 20 miles (30 kilometers). Temperature data from Dawn's visible and infrared mapping spectrometer show that this crater seems to be colder than most of the territory around it.

Dantu crater, named after the Ghanaian god associated with the planting of corn, is about 75 miles (120 kilometers) across and 3 miles (5 kilometers) deep. A crater called Ezinu, after the Sumerian goddess of grain, is about the same size. Both are less than half the size of Kerwan, named after the Hopi spirit of sprouting maize, and Yalode, a crater named after the African Dahomey goddess worshipped by women at harvest rites.

This image, from Dawn's visible and infrared mapping spectrometer (VIR), highlights a bright region on Ceres known as Haulani, named after the Hawaiian plant goddess.

Image credit: NASA/JPL-Caltech/UCLA/ASI/INAF

Each row shows Ceres' surface at different wavelengths. On top is a black-and-white image; in the middle is a true-color image, and the bottom is in thermal infrared, where brighter colors represent higher temperatures and dark colors correspond to colder temperatures. The three images appear slightly flattened in the y-axis and smeared in the upper part due to the motion of the spacecraft.

"The impact craters Dantu and Ezinu are extremely deep, while the much larger impact basins Kerwan and Yalode exhibit much shallower depth, indicating increasing ice mobility with crater size and age," said Ralf Jaumann, a Dawn science team member at the German Aerospace Center (DLR) in Berlin.

Almost directly south of Occator is Urvara, a crater named for the Indian and Iranian deity of plants and fields. Urvara, about 100 miles (160 kilometers) wide and 3 miles (6 kilometers) deep, has a prominent central pointy peak that is 2 miles (3 kilometers) high.

Dawn is currently spiraling toward its third science orbit, 900 miles (less than 1,500 kilometers) above the surface, or three times closer to Ceres than its previous orbit. The spacecraft will reach this orbit in mid-August and begin taking images and other data again.

Ceres, with a diameter of 584 miles (940 kilometers), is the largest object in the main asteroid belt, located between Mars and Jupiter. This makes Ceres about 40 percent the size of Pluto, another dwarf planet, which NASA's New Horizons mission flew by earlier this month.

On March 6, 2015, Dawn made history as the first mission to reach a dwarf planet, and the first to orbit two distinct extraterrestrial targets. It conducted extensive observations of Vesta in 2011-2012.

Dawn's mission is managed by JPL for NASA's Science Mission Directorate in Washington. Dawn is a project of the directorate's Discovery Program, managed by NASA's Marshall Space Flight Center in Huntsville, Alabama. UCLA is responsible for overall Dawn mission science. Orbital ATK Inc., in Dulles, Virginia, designed and built the spacecraft. The German Aerospace Center, Max Planck Institute for Solar System Research, Italian Space Agency and Italian National Astrophysical Institute are international partners on the mission team.

Contacts and sources:
Elizabeth Landau
Jet Propulsion Laboratory

How Sunlight Pushes Asteroids

Rotating asteroids have a tough time sticking to their orbits. Their surfaces heat up during the day and cool down at night, giving off radiation that can act as a sort of mini-thruster.

This force, called the Yarkovsky effect, can cause rotating asteroids to drift widely over time, making it hard for scientists to predict their long-term risk to Earth. To learn more about the Yarkovsky effect, NASA is sending a spacecraft called OSIRIS-REx to the near-Earth asteroid Bennu. 

OSIRIS-REx will observe how Bennu’s shape, brightness, and surface features influence the strength of the Yarkovsky effect, helping scientists to better predict Bennu’s orbit over time and pin down its long-term risk.  
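As a rough back-of-the-envelope illustration of why emitted heat can act as a mini-thruster, one can divide the asymmetric thermal power radiated by a rotating asteroid's warm and cool sides by the speed of light. Every number below (size, mass, day and night surface temperatures) is an assumed round figure for a Bennu-sized body, not a mission value:

```python
# Order-of-magnitude sketch of the Yarkovsky "mini-thruster" effect.
# All input numbers are illustrative assumptions, not mission data.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 3.0e8         # speed of light, m/s

radius = 250.0    # m, roughly Bennu-sized (assumed)
mass = 7.8e10     # kg, rough published estimate for Bennu
T_day, T_night = 330.0, 250.0   # K, assumed day/night surface temperatures

area = 3.14159 * radius**2      # radiating cross-section
# The net thrust scales with the *difference* in thermal emission between
# the warm evening side and the cool morning side (Lambertian factor 2/3):
force = (2.0 / 3.0) * SIGMA * (T_day**4 - T_night**4) * area / C
accel = force / mass
print(f"net thrust ~ {force:.2e} N, acceleration ~ {accel:.1e} m/s^2")
```

The resulting acceleration is tiny (around a trillionth of a g), but acting continuously for millennia it shifts an orbit enough to matter for impact predictions, which is why OSIRIS-REx aims to pin the effect down.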

Contacts and sources: 
NASA Goddard Space Flight Center

Wednesday, July 29, 2015

Brown Dwarfs Host Powerful Aurora Displays

Brown dwarf stars host powerful aurora displays just like planets, astronomers have discovered.

These so-called failed stars, which are difficult to detect and remain hard to classify, are too massive to be planets, but physicists from the Universities of Sheffield and Oxford have revealed that they host powerful auroras just like Earth does.

The international team of researchers made the discovery by observing a brown dwarf 20 light years away using both radio and optical telescopes. Their findings provide further evidence suggesting that these objects act more like supersized planets.

Brown dwarf stars host powerful aurora displays just like planets, astronomers have discovered.
Credit:  Chuck Carter and Gregg Hallinan/Caltech.

Dr Stuart Littlefair, from the University of Sheffield's Department of Physics and Astronomy, said: "Brown dwarfs span the gap between stars and planets and these results are yet more evidence that we need to think of brown dwarfs as beefed-up planets, rather than 'failed stars'.

"We already know that brown dwarfs have cloudy atmospheres - like planets - although the clouds in brown dwarfs are made of minerals that form rocks on Earth now we know brown dwarfs host powerful auroras too."

He added: "Sometimes the best thing about a scientific result is simply the thrill of discovering something exciting and cool. The northern lights on Earth are one of the most spectacular and beautiful things you can see.

"I've always wanted to see them, but have never got the chance. It's particularly ironic that I got to discover an auroral light show which is vastly more powerful and many light years away!"

Auroral displays result when charged particles manage to enter a planet's magnetic field. Once within the magnetosphere, those particles get accelerated along the planet's magnetic field lines to the planet's poles where they collide with gas atoms in the atmosphere, producing the bright emissions associated with auroras.

During the study the international research team, led by Professor Gregg Hallinan from the California Institute of Technology, conducted an extensive observation campaign of a brown dwarf called LSRJ1835+3259.

The team used the most powerful radio telescope in the world, the National Radio Astronomy Observatory's Karl G. Jansky Very Large Array (JVLA) in New Mexico, as well as optical telescopes including Palomar's Hale Telescope and the W. M. Keck Observatory's telescopes, to make their groundbreaking observations.

Using the JVLA they detected a bright pulse of radio waves that appeared as the brown dwarf rotated. The object rotates every 2.84 hours, so the team were able to watch nearly three full rotations over the course of a single night.
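The "nearly three full rotations" figure follows directly from the 2.84-hour period; assuming a roughly eight-hour observing night (a duration not stated in the text):

```python
period_h = 2.84     # rotation period inferred from the radio pulses (hours)
night_h = 8.0       # assumed length of the observing night (hours)

rotations = night_h / period_h
print(f"{rotations:.2f} rotations observed")  # -> 2.82 rotations observed
```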

The astronomers worked with the Hale Telescope and observed that the brown dwarf varied optically on the same period as the radio pulses. The team found that the object's brightness varied periodically, indicating that there was a bright feature on the brown dwarf's surface. Dr Garret Cotter, from the University of Oxford, who also took part in the study, said: "It was incredibly exciting to track the optical light from the aurora during the night with the Hale Telescope in California, one of the most venerable telescopes in the world, while simultaneously tracking the radio emission with the JVLA, one of the world's newest radio telescopes."

Finally, the researchers used the Keck telescopes to precisely measure the brightness of the brown dwarf over time which was no simple feat given that these objects are extremely faint, many thousands of times fainter than our own Sun. The astronomers determined that the bright optical feature was likely to be caused by electrons hitting the hydrogen-dominated atmosphere of the brown dwarf to produce auroras.

The findings from the study, published in the journal Nature, offer astronomers a convenient stepping stone for further study into exoplanets, planets orbiting stars other than our own Sun.

Dr Cotter said: "In science, new knowledge often challenges our understanding. We know how controversial the situation was with Pluto, where astronomers had to look hard to try to decide if it is fundamentally one of the major planets of the solar system, or the first of the Kuiper Belt objects. Now, up at the other end of the size scale, we are challenged by seeing objects that traditionally would have been classified as stars, but seem to be showing more and more properties that make them look like super-sized planets."

Contacts and sources:
Amy Pullan
University of Sheffield

Is Earth’s Magnetic Field Reversing? Clues Found in Anomaly beneath South Africa

A team of researchers has for the first time recovered a magnetic field record from ancient minerals for Iron Age southern Africa (between 1000 and 1500 AD). The data, combined with the current weakening of Earth's magnetic field, suggest that the region of Earth's core beneath southern Africa may play a special role in reversals of the planet's magnetic poles.

Magnetic field strength in the South Atlantic Anomaly is shown. 
Credit:  Graphic by Michael Osadciw/University of Rochester.

The team was led by geophysicist John Tarduno from the University of Rochester and included researchers from Witwatersrand University and Kwa-Zulu Natal University of South Africa.

Reversals of the North and South Poles have occurred irregularly throughout history, with the last one taking place about 800,000 years ago. Once a reversal starts, it can take as long as 15,000 years to complete. The new data suggest the core region beneath southern Africa may be the birthplace of some of the more recent and future pole reversals.

"It has long been thought reversals start at random locations, but our study suggests this may not be the case," said Tarduno, a leading expert on Earth's magnetic field.

The results have been published in the latest issue of the journal Nature Communications.

Tarduno collected the data for his study from five sites along South Africa's borders with Zimbabwe and Botswana, near the Limpopo River. That part of Africa belongs to a region called the South Atlantic Anomaly--extending west beyond South America--that today has an unusually weak magnetic field strength.

Earth's dipole magnetic field strength has decreased 16 percent since 1840--with most of the decay related to the weakening field in the South Atlantic Anomaly--leading to much speculation that the planet is in the early stages of a field reversal. As Tarduno points out, it is only speculation because weakening magnetic fields can recover without leading to a reversal of the poles.

Tarduno and his fellow researchers believe they have found the reason for the unusually low magnetic field strength in that region of the Southern Hemisphere.

"The top of the core beneath this region is overlain by unusually hot and dense mantle rock," said Tarduno.

That hot and dense mantle rock lies 3000 km below the surface, has steep sides, and is about 6000 km across, which is roughly the distance from New York to Paris.

Together with Eric Blackman, an astrophysicist at the University of Rochester, and Michael Watkeys, a geologist at the University of KwaZulu-Natal in South Africa, Tarduno hypothesizes that the region--which is referred to as a Large Low Shear Velocity Province (LLSVP)--affects the direction of the churning liquid iron that generates Earth's magnetic field. Tarduno says it's the shift in the flow of liquid iron that causes irregularities in the magnetic field, ultimately resulting in a loss of magnetic intensity, giving the region its characteristically low magnetic field strength.

Until now, researchers have relied solely on estimates from models derived from data collected at other global sites for an Iron Age record of the magnetic field of southern Africa. Tarduno and his team wanted hard data on both the intensity and direction of the magnetic field, which are recorded and stored in minerals, such as magnetite, at the time they were formed.

The researchers were able to get their data thanks to a knowledge of ancient African practices--in this case, the ritualistic cleansing of villages in agricultural communities. Archeologist Thomas Huffman of Witwatersrand University, a member of the research team and a leading authority on Iron Age southern Africa, explains that villages were cleansed by burning down huts and grain bins. The burning clay floors reached temperatures in excess of 1,000 °C, hot enough to erase the magnetic information stored in the magnetite and create a new record of the magnetic field strength and direction at the time of the burning.

Modern grain bins in southern Africa, which are very similar to the grain bins found in that continent's Iron Age, are pictured.
Photo by John Tarduno/University of Rochester.

Tarduno and his team found a sharp 30 percent drop in magnetic field intensity from 1225 to 1550 AD. Given that the field intensity in the region is also declining today--though less rapidly, as measured by satellites--the research team believes that the process causing the weakening field may be a recurring feature of the magnetic field.
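To compare the two declines quoted in this article (a 30 percent drop between 1225 and 1550 AD, versus 16 percent since 1840), one can convert each to an equivalent percent-per-century rate. The assumption of smooth exponential decay over each interval is purely an illustration, not the authors' model:

```python
import math

def rate_per_century(drop_fraction, years):
    """Percent lost per century, assuming smooth exponential decay."""
    # Solve (1 - drop) = exp(-k * years) for k, then express over 100 years.
    k = -math.log(1.0 - drop_fraction) / years
    return 100.0 * (1.0 - math.exp(-k * 100.0))

iron_age = rate_per_century(0.30, 1550 - 1225)  # 30% drop over 325 years
modern   = rate_per_century(0.16, 2015 - 1840)  # 16% drop over 175 years
print(f"Iron Age: ~{iron_age:.0f}%/century; modern: ~{modern:.0f}%/century")
```

Under these assumptions the Iron Age episode comes out slightly faster than the modern one, consistent with the article's remark that the field is declining today "though less rapidly."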

"Because rock in the deep mantle moves less than a centimeter a year, we know the LLSVP is ancient, meaning it may be a longstanding site for the loss of magnetic field strength," said Tarduno. "And it is also possible that the region may actually be a trigger for magnetic pole reversals, which might happen if the weak field region becomes very large."


Tarduno points out that the new data cannot be used to predict with confidence that the present-day magnetic field is entering a reversal. However, it does suggest that the present-day pattern may be the latest manifestation of a repeating feature that occasionally leads to a global field reversal.

Contacts and sources:
Peter Iglinski 
University of Rochester 

What Aluminum Tells Us About Solar System Origins

Physicists at the University of York have revealed a new understanding of nucleosynthesis in stars, providing insight into the role massive stars play in the evolution of the Milky Way and the origins of the Solar System.

The Milky Way arching at a high inclination across the night sky

Credit: Bruno Gilli/ESO

Radioactive aluminium (aluminium-26, or Al26) is an isotope that emits gamma radiation as it decays, enabling astronomers to image its location in our galaxy. By studying how Al26 is created in massive stars, scientists have distinguished between previously conflicting assumptions about its rate of production by nuclear fusion.

By measuring the fusion of helium and sodium at two separate particle accelerators, in Canada and Denmark, the researchers determined the rate of production of Al26 to within a factor of two. This is an improvement on previous experiments, where measurements disagreed by around a factor of 100, and it removes dispute about the effect of sodium fusion on the rate of aluminium production.

Al26 is known for its relatively short lifespan (in astrophysical terms), decaying in around 1 million years, compared with the lifetime of massive stars of about 19 million years. This means we are now able to better understand gamma radiation maps of the galaxy, observed by space telescopes such as INTEGRAL and COMPTEL, and deduce a more accurate picture of recent activities of massive stars in the galaxy.
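Because Al26 decays on a roughly million-year timescale, its surviving fraction falls off exponentially, which is why its gamma-ray glow traces only recent massive-star activity. A short sketch, treating the quoted 1 million years as the mean lifetime:

```python
import math

MEAN_LIFE_MYR = 1.0   # approximate Al26 decay timescale from the text

def surviving_fraction(age_myr, mean_life=MEAN_LIFE_MYR):
    """Fraction of Al26 still undecayed after age_myr million years."""
    return math.exp(-age_myr / mean_life)

# After a few mean lifetimes essentially all the Al26 is gone, so any
# gamma-ray signal must come from recently active massive stars:
for age in (1, 3, 5):
    print(f"after {age} Myr: {surviving_fraction(age):.1%} remains")
```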

Evidence of Al26 decay observed in meteorites and pre-solar grains also suggests that material from massive stars contaminated the gas cloud from which the Solar System formed, providing insight into its early existence.

Dr Alison Laird, Reader in the University of York’s Department of Physics and lead author on one of the two research papers, said: “This research highlights clear and unambiguous evidence from gamma-ray observations of the galaxy that nucleosynthesis is happening in stars. By pinning down the production rate of radioactive aluminium, we will be able to interpret and understand these observations.

The gas cell target and silicon detector array inside the TUDA scattering chamber at TRIUMF  
Credit: Jessica Tomlinson

“Now we better understand the processes within stars that drive aluminium production, we pave the way for more detailed and thorough research into how massive stars affect our galaxy and the origins of our Solar System.”

Dr Christian Diget, Lecturer in Nuclear Astrophysics in York’s Department of Physics and a lead researcher on the second research paper, said: “These two experiments, completely independent of each other at a technical level and using opposite methodology, provide the most definitive research we have to date of radioactive aluminium production. Through this, we can now much better understand where and how aluminium-26 is produced in stars, and can simulate in the lab how stars work.

“By observing aluminium decay through gamma-radiation maps, we are now able to build a more accurate picture of the conditions when our Solar System formed.”

Contacts and sources:
Saskia Angenent
University of York

Causes of the Viking Age Revealed in New Research

The Viking hit-and-run raids on monastic communities such as Lindisfarne and Iona were the most infamous result of burgeoning Scandinavian maritime prowess in the closing years of the Eighth Century.

The Vale of York Cup - a Christian vessel from northern mainland Europe that was probably held by Scandinavians for some time after its capture, before finishing its life as the receptacle for a large silver hoard buried in Yorkshire. 
Credit : Copyright York Museums Trust (Yorkshire Museum)

These skirmishes led to more expansive military campaigns, settlement, and ultimately conquest of large swathes of the British Isles. But Dr Steve Ashby, of the Department of Archaeology at the University of York, wanted to explore the social justifications for this spike in aggressive activity.

Previous research has considered environmental, demographic, technological and political drivers, as well as the palpable lure of silver and slaves, and why these forms of wealth became important at this stage.

Dr Ashby said: "I wanted to try to discover what would make a young chieftain invest in the time and resources for such a risky venture. And what were the motives of his crew?"

In research published in Archaeological Dialogues, Dr Ashby argues that focusing on the spoils of raiding is to ignore half the picture as the rewards of such voyages consisted of much more than portable wealth.

Dr Ashby says: "The lure of the exotic, of the world beyond the horizon, was an important factor. Classic anthropology has shown that the mystique of the exotic is a powerful force, and something that leaders and people of influence often use to prop up their power base. It is not difficult to see how this would have worked in the Viking Age."

The acquisition not just of silver but of distinctive forms of Anglo-Saxon, Frankish, and Celtic metalwork provided tangible reminders of successful sorties, symbols of status and power, as well as calls-to-arms for future raids. Many of the large quantity of Christian artefacts found in Scandinavian contexts (particularly Norwegian pagan burials) escaped melting and recycling, not because of some form of artistic appreciation, but because they were foundation stones for power, and touchstones in any argument for undertaking military activity.

Dr Ashby says there was also a clear motive for joining raiding parties rather than blindly following their leaders. Raiding activity provided not only an opportunity for violence and the accumulation of wealth, but an arena in which individuals could be noticed by their peers and superiors. It was an opportunity to build reputations for skill, reliability, cunning, or courage. Just as leaders of raiding parties stood to gain more than portable wealth, so too their followers could seek intangible social capital from participation.

"The lure of the raid was thus more than booty; it was about winning and preserving power through the enchantment of travel and the doing of deeds. This provides an important correction to models that focus on the need for portable wealth; the act of acquiring silver was as important as the silver itself," Dr Ashby adds.

Contacts and sources:
David Garner
University of York

Cataclysmic Cosmic Collision Triggered Global Cooling about 12,800 Years Ago

At the end of the Pleistocene period, approximately 12,800 years ago -- give or take a few centuries -- a cosmic impact triggered an abrupt cooling episode that earth scientists refer to as the Younger Dryas.

New research by UC Santa Barbara geologist James Kennett and an international group of investigators has narrowed the date to a 100-year range, sometime between 12,835 and 12,735 years ago. The team's findings appear today in the Proceedings of the National Academy of Sciences.

This map shows the Younger Dryas Boundary locations that provided data for the analysis.

Credit: UCSB

The researchers used Bayesian statistical analyses of 354 dates taken from 30 sites on more than four continents. By using Bayesian analysis, the researchers were able to calculate more robust age models through multiple, progressive statistical iterations that consider all related age data.

"This range overlaps with that of a platinum peak recorded in the Greenland ice sheet and of the onset of the Younger Dryas climate episode in six independent key records," explained Kennett, professor emeritus in UCSB's Department of Earth Science. "This suggests a causal connection between the impact event and the Younger Dryas cooling."

In a previous paper, Kennett and colleagues conclusively identified a thin layer called the Younger Dryas Boundary (YDB) that contains a rich assemblage of high-temperature spherules, melt-glass and nanodiamonds, the production of which can be explained only by cosmic impact. However, in order for the major impact theory to be possible, the YDB layer would have to be the same age globally, which is what this latest paper reports.

"We tested this to determine if the dates for the layer in all of these sites are in the same window and statistically whether they come from the same event," Kennett said. "Our analysis shows with 95 percent probability that the dates are consistent with a single cosmic impact event."

This is James Kennett.
Credit: Sonia Fernandez

Altogether, the locations cover a huge geographic range, reaching from northern Syria to California and from Venezuela to Canada. Two California sites are on the Channel Islands off Santa Barbara.

However, Kennett and his team didn't rely solely on their own data, which mostly used radiocarbon dating to determine date ranges for each site. They also examined six instances of independently derived age data that used other dating methods, in most cases counting annual layers in ice and lake sediments.

Studies of two cores taken from the Greenland ice sheet revealed an anomalous platinum layer, a marker for the YDB. A study of tree rings in Germany also showed evidence of the YDB, as did freshwater and marine varves, the annual laminations that occur in bodies of water. Even stalagmites in China displayed signs of abrupt climate change around the time of the Younger Dryas cooling event.

"The important takeaway is that these proxy records suggest a causal connection between the YDB cosmic impact event and the Younger Dryas cooling event," Kennett said. "In other words, the impact event triggered this abrupt cooling.

"The chronology is very important because there's been a long history of trying to figure out what caused this anomalous and enigmatic cooling," he added. "We suggest that this paper goes a long way to answering that question and hope that this study will inspire others to use Bayesian statistical analysis in similar kinds of studies because it's such a powerful tool."

Contacts and sources:
Julie Cohen
UC Santa Barbara