Friday, July 25, 2014

Saharan Dust Is Key To The Formation Of Bahamas' Great Bank Says New Research


A new study suggests that Saharan dust played a major role in the formation of the Bahamas islands. Researchers from the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science showed that iron-rich Saharan dust provides the nutrients necessary for specialized bacteria to produce the island chain's carbonate-based foundation.

Distribution of insoluble material in the sediments and collection sites are shown. The insoluble material is derived from atmospheric dust.
Credit: Peter Swart, Ph.D., UM Rosenstiel School of Marine and Atmospheric Science

UM Rosenstiel School Lewis G. Weeks Professor Peter Swart and colleagues analyzed the concentrations of two trace elements characteristic of atmospheric dust – iron and manganese – in 270 seafloor samples collected along the Great Bahama Bank over a three-year period. The team found that the highest concentrations of these trace elements occurred to the west of Andros Island, an area which has the largest concentration of whitings, white sediment-laden bodies of water produced by photosynthetic cyanobacteria.

"Cyanobacteria need 10 times more iron than other photosynthesizers because they fix atmospheric nitrogen," said Swart, lead author of the study. "This process draws down the carbon dioxide and induces the precipitation of calcium carbonate, thus causing the whiting. The signature of atmospheric nitrogen, its isotopic ratio is left in the sediments."


  This is the Great Bahama Bank.
Credit: NASA

Swart's team suggests that high concentrations of iron-rich dust blown across the Atlantic Ocean from the Sahara are responsible for the existence of the Great Bahama Bank, which has been built up over the last 100 million years from sedimentation of calcium carbonate. The dust particles blown into the Bahamas' waters and directly onto the islands provide the nutrients necessary to fuel cyanobacteria blooms, which, in turn, produce carbonate whitings in the surrounding waters.

Persistent winds across Africa's 3.5-million-square-mile Sahara Desert lift mineral-rich sand into the atmosphere, where it makes the nearly 5,000-mile journey northwest towards the U.S. and the Caribbean.

The paper, titled "The fertilization of the Bahamas by Saharan dust: A trigger for carbonate precipitation?" was published in the early online edition of the journal Geology. The paper's authors include Swart, Amanda Oehlert, Greta Mackenzie, Gregor Eberli from the UM Rosenstiel School's Department of Marine Geosciences and John Reijmer of VU University Amsterdam in the Netherlands.



Contacts and sources:

Total Darkness At Night Is Key To Success Of Breast Cancer Therapy -- Tulane Study


Exposure to light at night, which shuts off nighttime production of the hormone melatonin, renders breast cancer completely resistant to tamoxifen, a widely used breast cancer drug, says a new study by Tulane University School of Medicine cancer researchers.

Principal investigators and co-leaders of Tulane's Circadian Cancer Biology Group, Steven Hill (left) and David Blask (right), and team members Robert Dauchy and Shulin Xiang.
Credit: Photograph by Paula Burch-Celentano, Tulane University

The study, "Circadian and Melatonin Disruption by Exposure to Light at Night Drives Intrinsic Resistance to Tamoxifen Therapy in Breast Cancer," published in the journal Cancer Research, is the first to show that melatonin is vital to the success of tamoxifen in treating breast cancer.

Principal investigators and co-leaders of Tulane's Circadian Cancer Biology Group, Steven Hill and David Blask, along with team members Robert Dauchy and Shulin Xiang, investigated the role of melatonin on the effectiveness of tamoxifen in combating human breast cancer cells implanted in rats.

"In the first phase of the study, we kept animals in a daily light/dark cycle of 12 hours of light followed by 12 hours of total darkness (melatonin is elevated during the dark phase) for several weeks," says Hill. "In the second study, we exposed them to the same daily light/dark cycle; however, during the 12 hour dark phase, animals were exposed to extremely dim light at night (melatonin levels are suppressed), roughly equivalent to faint light coming under a door."

Melatonin by itself delayed the formation of tumors and significantly slowed their growth, but tamoxifen caused a dramatic regression of tumors in animals either with high nighttime levels of melatonin during complete darkness or receiving melatonin supplementation during exposure to dim light at night.

These findings have potentially enormous implications for women being treated with tamoxifen who are also regularly exposed to light at night due to sleep problems, night-shift work or light from computer and TV screens.

"High melatonin levels at night put breast cancer cells to 'sleep' by turning off key growth mechanisms. These cells are vulnerable to tamoxifen. But when the lights are on and melatonin is suppressed, breast cancer cells 'wake up' and ignore tamoxifen," Blask says.

The study could make light at night a new and serious risk factor for developing resistance to tamoxifen and other anticancer drugs and make the use of melatonin in combination with tamoxifen, administered at the optimal time of day or night, standard treatment for breast cancer patients.



Contacts and sources:
Arthur Nead
Tulane University

Bacteria Build Shelters Of Salt To Sleep In

For the first time, Spanish researchers have detected a previously unknown interaction between microorganisms and salt. When Escherichia coli cells are introduced into a droplet of salt water that is then left to dry, the bacteria manipulate the sodium chloride crystallisation to create morphologically complex, three-dimensional biosaline formations in which they hibernate.

Afterwards, simply by rehydrating the material, bacteria are revived. The discovery was made by chance with a home microscope, but it made the cover of the 'Astrobiology' journal and may help us find signs of life on other planets.

Dried biosaline patterns formed by the interaction of Escherichia coli cells with common salt. 
Credit: J. M. Gómez-Gómez

The bacterium Escherichia coli is one of the living forms most studied by biologists, yet until now no one had noticed what this microorganism can do within a simple drop of salt water: create impressive biomineralogical patterns in which it shelters itself when the water dries.

"It was a complete surprise, a fully unexpected result, when I introduced E.. coli cells into salt water and I realised that the bacteria had the ability to join the salt crystallisation and modulate the development and growth of the sodium chloride crystals," biologist José María Gómez told SINC.

"Thus, in around four hours, in the drop of water that had dried, an impressive tapestry of biosaline patterns was created with complex 3D architecture," added the researcher, who made the discovery with the microscope in his house, although he later confirmed it with the help of his colleagues from the Laboratory of BioMineralogy and Astrobiological Research (LBMARS, University of Valladolid-CSIC), Spain.

Until now, similar patterns were known only from saline solutions and isolated proteins, but this is the first report demonstrating that whole bacterial cells can manage the crystallisation of sodium chloride (NaCl) and generate self-organised biosaline structures of a fractal or dendritic appearance. The study and the striking three-dimensional patterns are on the front cover of this month's 'Astrobiology' edition.

"The most interesting result is that the bacteria enter a state of hibernation inside these desiccated patterns, but they can later be 'revived' simply by rehydration," said Gómez, who highlighted a very important result from an astrobiological point of view: "Given the richness and complexity of these formations, they may be used as biosignatures in the search for life in extremely dry environments outside our own planet, such as the surface of Mars or that of Jupiter's satellite, Europa".

In fact, the LBMARS laboratory participates in the development of the Raman RLS instrument of the ExoMars rover, the mission that the European Space Agency (ESA) will send to the red planet in 2018, and this new finding may help them search for possible biological signs. According to the researcher, "the patterns observed will help calibrate the instrument and test its detection of signs of hibernation or traces of Martian life".

"The challenge we now face is to understand how the bacteria control the crystallisation of NaCl to create these incredible 3D structures and vice-versa, how salt influences this action, as well as studying the structure of these microorganisms that withstand desiccation," said Gómez, who reminds us that a simple curiosity and excitement about science, although it may be with simple means, still allows us to make some interesting discoveries: "This is a tribute to scientists such as the Spaniard Santiago Ramón y Cajal and the Dutch scientist Anton van Leeuwenhoek, who also worked from home with their own microscopes"


Contacts and sources:
Plataforma SINC


Citation: José María Gómez Gómez, Jesús Medina, David Hochberg, Eva Mateo-Martí, Jesús Martínez-Frías, Fernando Rull. "Drying Bacterial Biosaline Patterns Capable of Vital Reanimation upon Rehydration: Novel Hibernating Biomineralogical Life Formations". Astrobiology 14 (7): 589-602, 2014. doi:10.1089/ast.2014.1162

New Fast Charging Nano-Supercapacitors For Electric Cars Crush The Commercial Competition

Innovative nano-material-based supercapacitors are set to bring electric cars a good step closer to mass-market appeal in Germany, where public interest has so far been lukewarm. The momentum comes from recent advances in the state of the art of these devices.

Electric cars are very much welcomed in Norway and they are a common sight on the roads of the Scandinavian country – so much so that electric cars topped the list of new vehicle registrations for the second time. This poses a stark contrast to the situation in Germany, where electric vehicles claim only a small portion of the market.

Innovative nano-material-based supercapacitors are set to bring electric cars a good step closer to mass-market appeal in Germany, where public interest has so far been lukewarm. The momentum comes from recent advances in the state of the art of these devices.
Credit: © Fraunhofer IPA

Of the 43 million cars on the roads in Germany, a mere 8,000 are electric powered. The main factors discouraging motorists in Germany from switching to electric vehicles are the high investment costs, short driving ranges and the lack of charging stations. Another major obstacle en route to the mass acceptance of electric cars is the charging time involved.

Refueling a conventional car takes only minutes, a difference so large that the two situations are hardly comparable. However, charging durations could be dramatically shortened with the inclusion of supercapacitors. These alternative energy storage devices charge quickly and can therefore better support the economical use of energy in electric cars.

Take traditional gasoline-powered vehicles, for instance: braking converts kinetic energy into heat, which is dissipated and goes unused. By contrast, generators on electric vehicles are able to tap into that kinetic energy by converting it into electricity for further use. This electricity often comes in jolts and requires storage devices that can withstand high amounts of energy input within a short period of time.

This is where supercapacitors, with their ability to capture and store the converted energy in an instant, fit the bill. Unlike batteries, which offer limited charging and discharging rates, supercapacitors need only seconds to charge and can feed the electric power back into the air-conditioning system, defogger, radio and so on as required.

Rapid energy storage devices are distinguished by their energy and power density characteristics – in other words, the amount of electrical energy the device can deliver with respect to its mass and within a given period of time. 

Supercapacitors are known for high power density, whereby large amounts of electrical energy can be provided or captured within short durations, albeit with the shortcoming of low energy density. The amount of energy supercapacitors are able to store is generally about 10% of that of electrochemical batteries (when devices of the same weight are compared).
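
To put those orders of magnitude in context, the back-of-the-envelope sketch below compares a supercapacitor cell with a battery of the same weight, using the standard capacitor energy formula E = ½CV². The cell capacitance, voltage, mass and battery figure are illustrative round numbers chosen for the example, not values from the ElectroGraph project.

```python
# Back-of-the-envelope comparison of a supercapacitor cell with a battery of
# the same weight. All numbers are illustrative round values (not project
# data): a hypothetical 3000 F cell rated at 2.7 V and weighing 0.5 kg,
# versus a battery at roughly 100 Wh/kg.

def capacitor_energy_wh(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor, E = 1/2 * C * V^2, converted to watt-hours."""
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0

cell_energy_wh = capacitor_energy_wh(3000.0, 2.7)      # roughly 3 Wh
supercap_wh_per_kg = cell_energy_wh / 0.5              # roughly 6 Wh/kg
battery_wh_per_kg = 100.0                              # typical order of magnitude

print(f"Supercapacitor: ~{supercap_wh_per_kg:.1f} Wh/kg")
print(f"Battery:        ~{battery_wh_per_kg:.0f} Wh/kg")
print(f"Stored energy ratio: ~{supercap_wh_per_kg / battery_wh_per_kg:.0%}")
```

With these assumed values the supercapacitor comes out at a few watt-hours per kilogram, consistent with the roughly 10% figure quoted above.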

This is precisely where the challenge lies and what the “ElectroGraph” project is attempting to address. ElectroGraph is a project supported by the EU and its consortium consists of ten partners from both research institutes and industries. One of the main tasks of this project is to develop new types of supercapacitors with significantly improved energy storage capacities. 

As the project approaches its closing phase in June, the project coordinator at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, Carsten Glanz, explained the concept and approach taken en route to its successful conclusion: “During the storage process, the electrical energy is stored as charged particles attached on the electrode material. So to store more energy efficiently, we designed lightweight electrodes with larger, usable surfaces.”

Graphene electrodes significantly improve energy efficiency

In numerous tests, the researcher and his team investigated the nano-material graphene, whose extremely high specific surface area of up to 2,600 m2/g and high electrical conductivity practically cries out for use as an electrode material. It consists of an ultrathin monolayer lattice made of carbon atoms. When used as an electrode material, it greatly increases the surface area with the same amount of material. From this aspect, graphene is showing its potential in replacing activated carbon – the material that has been used in commercial supercapacitors to date – which has a specific surface area between 1000 and 1800 m2/g.

“The space between the electrodes is filled with a liquid electrolyte,” revealed Glanz. “We use ionic liquids for this purpose. Graphene-based electrodes together with ionic liquid electrolytes present an ideal material combination where we can operate at higher voltages.” 

By arranging the graphene layers so that there is a gap between the individual layers, the researchers were able to establish a manufacturing method that efficiently uses the intrinsic surface area of this nano-material. This prevents the individual graphene layers from restacking into graphite, which would reduce the storage surface and consequently the energy storage capacity. “Our electrodes have already surpassed commercially available ones by 75 percent in terms of storage capacity,” emphasizes the engineer.

“I imagine that the cars of the future will have a battery connected to many capacitors spread throughout the vehicle, which will take over the energy supply during high-power-demand phases, for example during acceleration or when ramping up the air-conditioning system. These capacitors will ease the burden on the battery and cover voltage peaks when starting the car. As a result, the size of massive batteries can be reduced.”

In order to present the new technology, the ElectroGraph consortium developed a demonstrator consisting of supercapacitors installed in an automobile side-view mirror and charged by a solar cell in an energetically self-sufficient system. The demonstrator will be unveiled at the end of May during the dissemination workshop at Fraunhofer IPA.

New Steel-Reinforced Concrete Better Protects Buildings From Bomb Attacks

A new type of steel-reinforced concrete protects buildings better from bomb attacks. Researchers have developed a formula to quickly calculate the concrete’s required thickness. The material will be used in the One World Trade Center at Ground Zero.

Earthquakes and explosions produce tremendous forces. Pressures in the immediate vicinity of a car bomb are in the range of several thousand megapascals, and even further away from the detonation itself, pressures are still in the order of several hundred kilopascals. Pressure in a bicycle tire – at about three bar – corresponds to about 300 kilopascals.
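
As a quick check on those figures, the short sketch below converts the quoted pressures to a single unit (pascals); the shock-tube driver pressure mentioned later in the article is included for comparison. The values are only the order-of-magnitude numbers from the text, not measured data.

```python
# Convert the pressures quoted in the article to one unit (pascals) to show
# the span of scales involved. Figures are order-of-magnitude values only.

BAR_TO_PA = 1.0e5
KPA_TO_PA = 1.0e3
MPA_TO_PA = 1.0e6

pressures_pa = {
    "near field of a car bomb (several thousand MPa)": 2000 * MPA_TO_PA,
    "further from the detonation (several hundred kPa)": 300 * KPA_TO_PA,
    "shock-tube driver section (up to 30 bar)": 30 * BAR_TO_PA,
    "bicycle tire (about 3 bar)": 3 * BAR_TO_PA,
}

for label, pascals in sorted(pressures_pa.items(), key=lambda item: -item[1]):
    print(f"{label:<50} {pascals:>15,.0f} Pa")
```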

The One World Trade Center at Ground Zero shortly before the official opening. One safety measure adopted was the use of specially formulated safety concrete, developed by DUCON Europe GmbH & Co. KG. Fraunhofer scientists were able to compute precisely how much of this concrete was needed for it to be used to best effect. 

Credit: © Fraunhofer EMI

“So people at a good distance from the detonation point are not so much endangered by a pressure wave – our bodies can usually cope pretty well with them – it’s flying debris that poses the real danger,” explains Dr. Alexander Stolz from the Safety Technology and Protective Structures department at the Fraunhofer Institute for High Speed Dynamics, Ernst-Mach-Institut, EMI in Efringen-Kirchen, a German town just north of Basel. This is exactly what happens to conventional reinforced concrete when it is hit by an explosion’s pressure wave: it is so brittle that individual and often large pieces are torn off and fly through the air uncontrolled.

Dr. Stephan Hauser, managing director of DUCON Europe GmbH & Co. KG, has developed a concrete that merely deforms when subjected to such pressures – and doesn’t break. Behind the development is a special mixture made from very hard high-performance concrete, combined with finely meshed reinforced steel. The EMI has been supporting Hauser for many years in the optimization of his patented innovation. 

In particular, the researchers take responsibility for dynamic qualification testing of the material under extreme loads. This also involves characterizing the material and calculating characteristic curve profiles. The researchers have developed a mathematical formula that simply and quickly computes the required thickness of the new concrete for each specific application. “Calculations used to be based on comparable and historical values,” says Stolz. “Now we can use a universal algorithm.”

The formula was developed during a test series with the new shock tube in Efringen-Kirchen. “We can simulate detonations of different blasting forces – from 100 to 2,500 kilograms TNT at distances from 35 to 50 meters from buildings. And that’s without even having to use explosives,” says Stolz. The principle behind it is this: The shock tube consists of a (high-pressure) driver section and a (low-pressure) driven section, which are separated by a steel diaphragm. Air can be compressed in the driver section to a pressure of up to 30 bar, i.e. to approximately 30 times atmospheric pressure at sea level. The steel diaphragm is ruptured when the desired level of pressure is reached: the air is forced through the driven section as a uniform shock front that hits the concrete sample being tested, attached to the end of the shock tube. 

“With conventional concrete, the impact pressure ripped out parts of the sample concrete wall, which failed almost instantly, while the ductile and more flexible security version of the concrete merely deformed. There was no debris, and the material remained intact,” says Stolz. Thanks to its ductile qualities, the security concrete is considerably less bulky and yet more stable than conventional steel-reinforced concrete. Thinner building components are possible. “As a rule of thumb, you get the same stability with half the thickness,” says Stolz.

Formula also appropriate for earthquake and blast protection

Designing elements with the ductile concrete is easier with the new computational formula. The material’s high load capacity, many years of experience in its use in a variety of applications, and ultimately its load limits under explosive charge led to it being used in the new One World Trade Center in New York. 

The building rests on a 20-story bombproof foundation that reaches 60 meters underground. Overall, at points within the building where safety is especially critical, several thousand square meters of safety concrete have been used to shore up the construction. Over the past few years, the skyscraper has been growing steadily upwards on the southern tip of Manhattan, on the site of the old World Trade Center’s Twin Towers. 

On September 11, 2001, an unprecedented act of terror resulted in the collapse of the towers, burying more than 3000 people under the debris. At 541.3 meters, the One World Trade Center is the tallest building in the USA and the third tallest in the world. “Our formula allows us to calculate the exact thickness of the concrete required to meet the safety considerations posed by such a special building,” says Stolz.


Contacts and sources:
Dr. Alexander Stolz
Fraunhofer-Gesellschaft

Zika Virus Escapes Africa, Carried By Mosquitoes, Coming To America And Europe, Causes Epidemics In Asia

A newcomer among arboviruses

In the group of viruses that includes dengue and chikungunya, a newcomer is now getting people talking. Also originating in Africa, Zika was isolated in humans in the 1970s. Until a few years ago, only a few human cases had been reported. It took until 2007 for the virus to show its epidemic capacity, with 5,000 cases in Micronesia in the Pacific, and then, above all, at the end of 2013 in Polynesia, where 55,000 people were affected.

Tiger mosquito Aedes albopictus
Credit; © IRD / M. Jacquet

In light of these recent events, researchers from IRD and the CIRMF in Gabon revisited the concomitant dengue and chikungunya epidemic that occurred in 2007 in the capital, Libreville, and which affected 20,000 people. Since Zika causes almost the same symptoms as its two dreaded cousins, had it passed unnoticed by the researchers?

As many cases of Zika fever as of dengue and chikungunya

To remove any doubt, the researchers conducted a second analysis of the blood samples taken from the patients seven years earlier. The result: many of the cases were due to the Zika virus, which had infected the inhabitants of Libreville at the same frequency as the dengue and chikungunya viruses. The capital therefore actually experienced a concomitant epidemic of dengue, chikungunya and Zika in 2007. Additionally, analysis of the phylogenetic tree of the Zika viruses detected in Libreville confirms that the strain belonged to the old African lineage. In other words, this lineage proved more virulent than previously thought.

An emerging threat to human health

The researchers also re-analysed the mosquitoes captured in 2007. These studies attested to the first known presence of Zika in Aedes albopictus, better known as the tiger mosquito. Thus, this insect, known to be a vector of dengue and chikungunya, also carries the Zika virus. It is the predominant species in Libreville, where it represents more than 55% of the mosquitoes collected. The tiger mosquito prospers in small bodies of standing water such as broken bottles, tins, flowerpots, abandoned used tires, etc.

Originally from Asia, the tiger mosquito was introduced to Africa in 1991 and detected in Gabon in 2007, where its arrival undoubtedly contributed to the emergence of dengue, chikungunya and, as shown by this new study, Zika. The rapid geographic expansion of this invasive species in Africa, Europe and America raises the risk that Zika fever will spread around the world, including to the south of France.


Contacts and sources:
Institut de Recherche pour le Développement (IRD)


Citation: Grard G., Caron M., Mombo I. M., Nkoghe D., Ondo S. M., Jiolle Davy, Fontenille Didier, Paupy Christophe, Leroy Eric. "Zika virus in Gabon (Central Africa) - 2007: a new threat from Aedes albopictus?" PLoS Neglected Tropical Diseases, 2014, 8 (2), art. e2681 [6 p.]. ISSN 1935-2735. doi:10.1371/journal.pntd.0002681

Thursday, July 24, 2014

Leaf-Mining Insects Destroyed With The Dinosaurs, Others Quickly Appeared

After the asteroid impact at the end of the Cretaceous period that triggered the dinosaurs' extinction and ushered in the Paleocene, leaf-mining insects in the western United States completely disappeared. Only a million years later, at Mexican Hat, in southeastern Montana, fossil leaves show diverse leaf-mining traces from new insects that were not present during the Cretaceous, according to paleontologists.

This is a Platanus raynoldski, or sycamore, with two mines at the leaf base produced by wasp larvae.
Credit: Michael Donovan, Penn State


"Our results indicate both that leaf-mining diversity at Mexican Hat is even higher than previously recognized, and equally importantly, that none of the Mexican Hat mines can be linked back to the local Cretaceous mining fauna," said Michael Donovan, graduate student in geosciences, Penn State.

Insects that eat leaves produce very specific types of damage. One type is from leaf miners -- insect larvae that live in the leaves and tunnel for food, leaving distinctive feeding paths and patterns of droppings.

Donovan, Peter Wilf, professor of geosciences, Penn State, and colleagues looked at 1,073 leaf fossils from Mexican Hat for mines. They compared these with more than 9,000 leaves from the end of the Cretaceous, 65 million years ago, from the Hell Creek Formation in southwestern North Dakota, and with more than 9,000 Paleocene leaves from the Fort Union Formation in North Dakota, Montana and Wyoming. The researchers present their results in today's (July 24) issue of PLOS ONE.

"We decided to focus on leaf miners because they are typically host specific, feeding on only a few plant species each," said Donovan. "Each miner also leaves an identifiable mining pattern."

The researchers found nine different mine-damage types at Mexican Hat attributable to the larvae of moths, wasps and flies, and six of these damage types were unique to the site.


This is a mine produced by a micromoth larva on Platanus raynoldski, a sycamore.
Credit: Michael Donovan, Penn State

The researchers were unsure whether the high diversity of leaf miners at Mexican Hat compared to other early Paleocene sites, where there is little or no leaf mining, was caused by insects that survived the extinction event in refugia -- areas where organisms persist during adverse conditions -- or were due to range expansions of insects from somewhere else during the early Paleocene.

However, with further study, the researchers found no evidence of the survival of any leaf miners over the Cretaceous-Paleocene boundary, suggesting an even more total collapse of terrestrial food webs than has been recognized previously.

"These results show that the high insect damage diversity at Mexican Hat represents an influx of novel insect herbivores during the early Paleocene and not a refugium for Cretaceous leaf miners," said Wilf. "The new herbivores included a startling diversity for any time period, and especially for the classic post-extinction disaster interval."

Insect extinction across the Cretaceous-Paleocene boundary may have been directly caused by catastrophic conditions after the asteroid impact and by the disappearance of host plant species. While insect herbivores constantly need leaves to survive, plants can remain dormant as seeds in the ground until more auspicious circumstances occur.

The low-diversity flora at Mexican Hat is typical for the area in the early Paleocene, so what caused the high insect damage diversity?

Insect outbreaks are associated with a rapid population increase of a single insect species, so the high diversity of mining damage seen in the Mexican Hat fossils makes the possibility of an outbreak improbable.

The researchers hypothesized that the leaf miners that are seen in the Mexican Hat fossils appeared in that area because of a transient warming event, a number of which occurred during the early Paleocene.

This is a micromoth larva mine on Juglandiphyllites glabra, the earliest known member of the walnut family.
Credit: Michael Donovan, Penn State

"Previous studies have shown a correlation between temperature and insect damage diversity in the fossil record, possibly caused by evolutionary radiations or range shifts in response to a warmer climate," said Donovan. "Current evidence suggests that insect herbivore extinction decreased with increasing distance from the asteroid impact site in Mexico, so pools of surviving insects would have existed elsewhere that could have provided a source for the insect influx that we observed at Mexican Hat."

Other researchers on this project were Conrad C. Labandeira, Department of Paleobiology, National Museum of Natural History, Smithsonian Institution and Department of Entomology and BEES Program, University of Maryland, College Park; Kirk R. Johnson, National Museum of Natural History, Smithsonian Institution; and Daniel J. Peppe, Department of Geology, Baylor University.



Contacts and sources:
A'ndrea Elyse Messer
Penn State

Highest-Precision Measurement Of Water In Planet Outside The Solar System


A team of astronomers using NASA's Hubble Space Telescope have gone looking for water vapour in the atmospheres of three planets orbiting stars similar to the Sun – and have come up nearly dry.

The three planets, HD 189733b, HD 209458b, and WASP-12b, are between 60 and 900 light-years away, and are all gas giants known as 'hot Jupiters.' These worlds are so hot, with temperatures between 900 and 2200 degrees Celsius, that they are ideal candidates for detecting water vapour in their atmospheres.

Illustration of a 'hot Jupiter' orbiting a sun-like star

Credit: Haven Giguere, Nikku Madhusudhan

However, the three planets have only one-tenth to one-thousandth the amount of water predicted by standard planet formation theories. The best water measurement, for the planet HD 209458b, was between 4 and 24 parts per million. The results raise new questions about how exoplanets form and highlight the challenges in searching for water on Earth-like exoplanets in the future. The findings are published today (24 July) in the journal Astrophysical Journal Letters.

"Our water measurement in one of the planets, HD 209458b, is the highest-precision measurement of any chemical compound in a planet outside the solar system, and we can now say with much greater certainty than ever before that we've found water in an exoplanet," said Dr Nikku Madhusudhan of the Institute of Astronomy at the University of Cambridge, who led the research. "However, the low water abundance we are finding is quite astonishing."

Dr Madhusudhan and his collaborators used near-infrared spectra of the planetary atmospheres, observed with the Hubble Space Telescope as the planets passed in front of their parent stars as viewed from Earth. Absorption features from water vapour in a planetary atmosphere are superimposed on the small amount of starlight that passes through the atmosphere before reaching the telescope. The planetary spectrum is obtained by determining the variation in the stellar spectrum caused by the planetary atmosphere, and is then used to estimate the amount of water vapour present using sophisticated computer models and statistical techniques.
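
The sketch below illustrates, with rough numbers, why such measurements demand extreme precision: the baseline transit depth is about one percent of the starlight, while the extra absorption from a species like water amounts to only a few hundred parts per million of it. The planetary and stellar radii and the scale height are approximate, literature-style values assumed purely for illustration, not results from this study.

```python
# Rough numbers for a transit-spectroscopy measurement of a hot Jupiter.
# Transit depth ~ (Rp/Rs)^2; an absorbing species such as water adds an
# annulus a few atmospheric scale heights thick. All inputs are assumed
# illustrative values.

R_SUN_KM = 695_700.0
R_JUP_KM = 71_492.0

r_star_km = 1.2 * R_SUN_KM      # host star, ~1.2 solar radii (assumed)
r_planet_km = 1.4 * R_JUP_KM    # hot Jupiter, ~1.4 Jupiter radii (assumed)
scale_height_km = 500.0         # typical hot-Jupiter scale height (assumed)
n_scale_heights = 5             # effective thickness of the absorbing layer (assumed)

transit_depth = (r_planet_km / r_star_km) ** 2
extra_depth = 2.0 * r_planet_km * n_scale_heights * scale_height_km / r_star_km ** 2

print(f"Baseline transit depth:          {transit_depth * 100:.2f} % of the starlight")
print(f"Extra absorption from the layer: {extra_depth * 1e6:.0f} parts per million")
```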

Madhusudhan said that the findings present a major challenge to exoplanet theory. "It basically opens a whole can of worms in planet formation. We expected these planets to have lots of water in their atmospheres. We have to revisit planet formation and migration models of giant planets, especially hot Jupiters, to investigate how they're formed."

The currently accepted theory on how giant planets in our solar system formed is known as core accretion, in which a planet is formed around the young star in a protoplanetary disc made primarily of hydrogen, helium, and particles of ices and dust composed of other chemical elements. The dust particles stick to each other, eventually forming larger and larger grains. The gravitational forces of the disc draw in these grains and larger planetesimals until a solid core forms. This core then leads to runaway accretion of both planetesimals and gas to eventually form a giant planet.

This theory predicts that the proportions of the different elements in the planet are enhanced relative to those in their star, especially oxygen, which is supposed to be the most enhanced. Once a giant planet forms, its atmospheric oxygen is expected to be largely in the form of water. Therefore, the very low levels of water vapour found by this research raise a number of questions about the chemical ingredients that lead to planet formation.

"There are so many things we still don't understand about exoplanets – this opens up a new chapter in understanding how planets and solar systems form," said Dr Drake Deming of the University of Maryland, who led one of the precursor studies and is a co-author in the present study. "These findings highlight the need for high-precision spectroscopy – additional observations from the Hubble Space Telescope and the next-generation telescopes currently in development will make this task easier."

The new discovery also highlights some major challenges in the search for the exoplanet 'holy grail' – an exoplanet with a climate similar to Earth, a key characteristic of which is the presence of liquid water.

"These very hot planets with large atmospheres orbit some of our nearest stars, making them the best possible candidates for measuring water levels, and yet the levels we found were much lower than expected," said Dr Madhusudhan. "These results show just how challenging it could be to detect water on Earth-like exoplanets in our search for potential life elsewhere." Instruments on future telescopes searching for biosignatures may need to be designed with a higher sensitivity to account for the possibility of planets being significantly drier than predicted.

The researchers also considered the possibility that clouds may be responsible for obscuring parts of the atmospheres, thereby leading to the low observed water levels. However, such an explanation requires heavy cloud particles to be suspended too high in the atmosphere to be physically plausible for all the planets in the study.



Contacts and sources:
Sarah Collins
University of Cambridge

Synchronization Of North Atlantic, North Pacific Preceded Abrupt Warming, End Of Ice Age

Scientists have long been concerned that global warming may push Earth's climate system across a "tipping point," where rapid melting of ice and further warming may become irreversible -- a hotly debated scenario with an unclear picture of what this point of no return may look like.

A newly published study by researchers at Oregon State University probed the geologic past to understand mechanisms of abrupt climate change. The study pinpoints the emergence of synchronized climate variability in the North Pacific Ocean and the North Atlantic Ocean a few hundred years before the rapid warming that took place at the end of the last ice age about 15,000 years ago.

This image depicts the Hubbard Glacier ice front, with floating ice 'growlers' in August 2004.
Credit: Photo courtesy of Oregon State University

The study suggests that the combined warming of the two oceans may have provided the tipping point for abrupt warming and rapid melting of the northern ice sheets.

Results of the study, which was funded by the National Science Foundation, appear this week in Science.

This new discovery by OSU researchers resulted from an exhaustive 10-year examination of marine sediment cores recovered off southeast Alaska where geologic records of climate change provide an unusually detailed history of changing temperatures on a scale of decades to centuries over many thousands of years.

"Synchronization of two major ocean systems can amplify the transport of heat toward the polar regions and cause larger fluctuations in northern hemisphere climate," said Summer Praetorius, a doctoral student in marine geology at Oregon State and lead author on the Science paper. "This is consistent with theoretical predictions of what happens when Earth's climate reaches a tipping point."

"That doesn't necessarily mean that the same thing will happen in the future," she pointed out, "but we cannot rule out that possibility."

The study found that synchronization of the two regional systems began as climate was gradually warming. After synchronization, the researchers detected wild variability that amplified the changes and accelerated into an abrupt warming event of several degrees within a few decades.

"As the systems become synchronized, they organized and reinforced each other, eventually running away like screeching feedback from a microphone," said Alan Mix, a professor in OSU's College of Earth, Ocean, and Atmospheric Sciences and co-author on the paper. "Suddenly you had the combined effects of two major oceans forcing the climate instead of one at a time."

"The example that we uncovered is a cause for concern because many people assume that climate change will be gradual and predictable," Mix added. "But the study shows that there can be vast climate swings over a period of decades to centuries. If such a thing happened in the future, it could challenges society's ability to cope."

What made this study unusual is that the researchers had such a detailed look at the geologic record. While modern climate observations can be made every day, the length of instrumental records is relatively short – typically less than a century. In contrast, paleoclimatic records extend far into the past and give good context for modern changes, the researchers say. However, the resolution of most paleo records is low, limited to looking at changes that occur over thousands of years.

In this study, the researchers examined sediment cores taken from the Gulf of Alaska in 2004 during an expedition led by Mix. The mountains in the region are eroding so fast that sedimentation rates are "phenomenal," he said. "Essentially, this rapid sedimentation provides a 'climate tape recorder' at extremely high fidelity."

Praetorius then led an effort to look at past temperatures by slicing the sediment into decade-long chunks spanning more than 8,000 years – a laborious process that took years to complete. She measured ratios of oxygen isotopes trapped in fossil shells of marine plankton called foraminifera. The isotopes record the temperature and salinity of the water where the plankton lived.
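
Oxygen-isotope measurements of this kind are conventionally reported in delta notation relative to a reference standard. The sketch below shows that arithmetic; the VPDB reference ratio is a commonly quoted value, and the sample ratios are invented purely to illustrate the calculation, not data from the Gulf of Alaska cores.

```python
# Oxygen-isotope results are reported as delta-18O = (R_sample / R_standard - 1) * 1000,
# in per mil, where R is the 18O/16O ratio and the standard is a reference
# such as VPDB. The sample ratios below are made up for illustration.

VPDB_18O_16O = 0.0020672   # commonly quoted 18O/16O ratio of the VPDB standard

def delta_18o(ratio_sample: float, ratio_standard: float = VPDB_18O_16O) -> float:
    """Return delta-18O in per mil relative to the chosen standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Hypothetical foraminifera shells from two intervals of a sediment core
print(f"{delta_18o(0.0020745):+.2f} per mil  (isotopically heavier: colder and/or saltier water)")
print(f"{delta_18o(0.0020610):+.2f} per mil  (isotopically lighter: warmer and/or fresher water)")
```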

When the foraminifera died, their shells sank to the sea floor and were preserved in the sediments that eventually were recovered by Mix's coring team.

The researchers then compared their findings with data from the North Greenland Ice Core Project to see if the two distinct high-latitude climate systems were in any way related.

Most of the time, the two regions vary independently, but about 15,500 years ago, temperature changes started to line up and then both regions warmed abruptly by about five degrees (C) within just a few decades. Praetorius noted that much warmer ocean waters likely would have a profound effect on northern-hemisphere climates by melting sea ice, warming the atmosphere and destabilizing ice sheets over Canada and Europe.

A tipping point for climate change "may be crossed in an instant," Mix noted, "but the actual response of the Earth's system may play out over centuries or even thousands of years during a period of dynamic adjustment."

"Understanding those dynamics requires that we look at examples from the past," Mix said. "If we really do cross such a boundary in the future, we should probably take a long-term perspective and realize that change will become the new normal. It may be a wild ride."

Added Praetorius: "Our study does suggest that the synchronization of the two major ocean systems is a potential early warning system to begin looking for the tipping point."


Contacts and sources:
Summer Praetorius
Oregon State University

Four Billion-Year-Old Primordial Soup Chemistry In Cells Today

Parts of the primordial soup in which life arose have been maintained in our cells today according to scientists at the University of East Anglia.
Credit: www.flickr.com

Research published today in the Journal of Biological Chemistry reveals how cells in plants, yeast and very likely also in animals still perform ancient reactions thought to have been responsible for the origin of life – some four billion years ago.

The primordial soup theory suggests that life began in a pond or ocean as a result of the combination of metals, gases from the atmosphere and some form of energy, such as a lightning strike, to make the building blocks of proteins which would then evolve into all species.

The new research shows how small pockets of a cell – known as mitochondria – continue to perform similar reactions in our bodies today. These reactions involve iron, sulfur and electro-chemistry and are still important for functions such as respiration in animals and photosynthesis in plants.

Lead researcher Dr Janneke Balk, from UEA’s School of Biological Sciences and the John Innes Centre, said: “Cells confine certain bits of dangerous chemistry to specific compartments of the cell.

“For example small pockets of a cell called mitochondria deal with electrochemistry and also with toxic sulfur metabolism. These are very ancient reactions thought to have been important for the origin of life.

“Our research has shown that a toxic sulfur compound is being exported by a mitochondrial transport protein to other parts of the cell. We need sulfur for making iron-sulfur catalysts, again a very ancient chemical process.

“The work shows that parts of the primordial soup in which life arose have been maintained in our cells today, and are in fact harnessed to maintain important biological reactions.”

The research was carried out at UEA and JIC in collaboration with Dr Hendrik van Veen at the University of Cambridge. It was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).

‘A Conserved Mitochondrial ATP-Binding Cassette Transporter Exports Glutathione Polysulfide for Cytosolic Metal Cofactor Assembly’ is published in the Journal of Biological Chemistry.


Contacts and sources: 
Lisa Horton
University of East Anglia

Warning Of Early Stages Of Earth's 6th Mass Extinction Event

Stanford Biology Professor Rodolfo Dirzo and his colleagues warn that this "defaunation" could have harmful downstream effects on human health.

The planet's current biodiversity, the product of 3.5 billion years of evolutionary trial and error, is the highest in the history of life. But it may be reaching a tipping point.

Elephants and other large animals face an increased risk of extinction in what Stanford Biology Professor Rodolfo Dirzo terms "defaunation."

Credit: Claudia Paulussen/Shutterstock

In a new review of scientific literature and analysis of data published in Science, an international team of scientists cautions that the loss and decline of animals is contributing to what appears to be the early days of the planet's sixth mass biological extinction event.

Since 1500, more than 320 terrestrial vertebrates have become extinct. Populations of the remaining species show a 25 percent average decline in abundance. The situation is similarly dire for invertebrate animal life.

And while previous extinctions have been driven by natural planetary transformations or catastrophic asteroid strikes, the current die-off can be associated with human activity, a situation that the lead author, Rodolfo Dirzo, a professor of biology at Stanford, designates an era of "Anthropocene defaunation."

Across vertebrates, 16 to 33 percent of all species are estimated to be globally threatened or endangered. Large animals – described as megafauna and including elephants, rhinoceroses, polar bears and countless other species worldwide – face the highest rate of decline, a trend that matches previous extinction events.

Larger animals tend to have lower population growth rates and produce fewer offspring. They need larger habitat areas to maintain viable populations. Their size and meat mass make them easier and more attractive hunting targets for humans.

Although these species represent a relatively low percentage of the animals at risk, their loss would have trickle-down effects that could shake the stability of other species and, in some cases, even human health.

For instance, previous experiments conducted in Kenya have isolated patches of land from megafauna such as zebras, giraffes and elephants, and observed how an ecosystem reacts to the removal of its largest species. Rather quickly, these areas become overwhelmed with rodents. Grass and shrubs increase and the rate of soil compaction decreases. Seeds and shelter become more easily available, and the risk of predation drops.

Consequently, the number of rodents doubles – and so does the abundance of the disease-carrying ectoparasites that they harbor.

"Where human density is high, you get high rates of defaunation, high incidence of rodents, and thus high levels of pathogens, which increases the risks of disease transmission," said Dirzo, who is also a senior fellow at the Stanford Woods Institute for the Environment. "Who would have thought that just defaunation would have all these dramatic consequences? But it can be a vicious circle."

The scientists also detailed a troubling trend in invertebrate defaunation. Human population has doubled in the past 35 years; in the same period, the number of invertebrate animals – such as beetles, butterflies, spiders and worms – has decreased by 45 percent.

As with larger animals, the loss is driven primarily by loss of habitat and global climate disruption, and could have trickle-up effects in our everyday lives.

For instance, insects pollinate roughly 75 percent of the world's food crops, an estimated 10 percent of the economic value of the world's food supply. Insects also play a critical role in nutrient cycling and decomposing organic materials, which helps ensure ecosystem productivity. In the United States alone, the value of pest control by native predators is estimated at $4.5 billion annually.

Dirzo said that the solutions are complicated. Immediately reducing rates of habitat change and overexploitation would help, but these approaches need to be tailored to individual regions and situations. He said he hopes that raising awareness of the ongoing mass extinction – and not just of large, charismatic species – and its associated consequences will help spur change.

"We tend to think about extinction as loss of a species from the face of Earth, and that's very important, but there's a loss of critical ecosystem functioning in which animals play a central role that we need to pay attention to as well," Dirzo said. "Ironically, we have long considered that defaunation is a cryptic phenomenon, but I think we will end up with a situation that is non-cryptic because of the increasingly obvious consequences to the planet and to human wellbeing."

The coauthors on the report include Hillary S. Young, University of California, Santa Barbara; Mauro Galetti, Universidade Estadual Paulista in Brazil; Gerardo Ceballos, Universidad Nacional Autonoma de Mexico; Nick J.B. Isaac, of the Natural Environment Research Council Centre for Ecology and Hydrology in England; and Ben Collen, of University College London.

Contacts and sources:
Bjorn Carey
Stanford University

Moose Drool Inhibits Growth Of Toxic Fungus: York University Research

Some sticky research out of York University shows a surprisingly effective way to fight against a certain species of toxic grass fungus: moose saliva (yes… moose saliva).

Credit: Dawn Bazely

Published in this month’s Biology Letters, “Ungulate saliva inhibits a grass–endophyte mutualism” shows that moose and reindeer saliva, when applied to red fescue grass (which hosts a fungus called Epichloë festucae that produces the toxin ergovaline), results in slower fungus growth and less toxicity.

“Plants have evolved defense mechanisms to protect themselves, such as thorns, bitter-tasting berries, and in the case of certain types of grass, by harbouring toxic fungus deep within them that can be dangerous or even fatal for grazing animals,” says York U Biology Professor Dawn Bazely, who worked with University of Cambridge researcher Andrew Tanentzap and York U researcher Mark Vicari on the project. “We wanted to find out how moose were able to eat such large quantities of this grass without negative effects.”

Inspired by an earlier study that showed that moose grazing and saliva distribution can have a positive effect on plant growth, the research team set out to test an interesting hypothesis – whether moose saliva may, in fact, “detoxify” the grass before it is eaten.

Working in partnership with the Toronto Zoo, the team collected saliva samples from moose and reindeer, which they then smeared onto clipped samples of red fescue grass carrying the toxic fungus, simulating the effect of grazing. They found that the application of saliva produced rapid results, inhibiting fungus growth within 12-36 hours.

“We found that the saliva worked very quickly in slowing the growth of the fungus, and the fungus colonies,” says Bazely. “In addition, by applying multiple applications of saliva to the grass over the course of two months, we found we could lower the concentration of ergovaline between 41 and 70 per cent.”

Bazely says that because moose tend to graze within a defined home range, it’s possible that certain groups of plants are receiving repeated exposure to the moose saliva, which over time has resulted in fewer toxins within their preferred area.

“We know that animals can remember if certain plants have made them feel ill, and they may avoid these plants in future,” says Bazely. “This study provides the first evidence, to our knowledge, of herbivore saliva being shown to ‘fight back’ and slow down the growth of the fungus.”


Contacts and sources:
Robin Heron
York University

Million Year Old Stone Age Artifacts Found In Northern Cape Of South Africa

Excavations at an archaeological site at Kathu in the Northern Cape province of South Africa have produced tens of thousands of Earlier Stone Age artifacts, including hand axes and other tools. These discoveries were made by archaeologists from the University of Cape Town (UCT), South Africa and the University of Toronto (U of T), in collaboration with the McGregor Museum in Kimberley, South Africa.

Steven James Walker from the Department of Archaeology at UCT extracts a sample at the interface between the overlying red sands and the Earlier Stone Age archaeological deposits at the Kathu Townlands site. 

Credit: Vasa Lukich.

The archaeologists’ research on the Kathu Townlands site, one of the richest early prehistoric archaeological sites in South Africa, was published in the journal, PLOS ONE, on 24 July 2014. It is estimated that the site is between 700,000 and one million years old.

Steven James Walker from the Department of Archaeology at UCT, lead author of the journal paper, says: “The site is amazing and it is threatened. We’ve been working well with developers as well as the South African Heritage Resources Agency to preserve it, but the town of Kathu is rapidly expanding around the site. It might get cut off on all sides by development and this would be regrettable.”

Flakes and cores from Kathu Townlands, Beaumont Excavation.
A: Large flake off the edge of the core consistent with biface shaping removal.
B: Large flake with centripetal dorsal scars.
C: Blade, note that there is some cortex (indicated by C in the sketch) and that scars are not parallel.
D-F: Small flakes, note that F is off the edge of the core.
G: Discoidal core with removals off both faces. Break on one edge (upper edge in right view).
H: Discoidal core with one large flake removal. Note that on the right-hand face the working is unclear and it is possible that this is a natural surface.
Credit: Steven James Walker & et al.

Today, Kathu is a major iron mining centre. Walker adds that the fact that such an extensive prehistoric site is located in the middle of a zone of intensive development poses a unique challenge for archaeologists and developers to find strategies to work cooperatively.

Profiles from 2013 excavation.
A. Trench A: Square 1. Massive deposit of Banded Ironstone rubble and artefacts overlying bedrock in a sandy matrix. Note lack of bedding or sorting.
B. Trench I: Square 5. Shallow massive deposit of Banded Ironstone rubble and artefacts overlying bedrock with overlying deposits of sand.
C. Trench E: Square 3. Discrete calcrete nodule that developed near the interface of the rubble/artefact deposit and underlying bedrock. Note parallel bedding of the Ironstone within the calcrete nodule. Approximate width of image 50cm.
D. Trench J/K. Discrete nodular calcrete developing in the sand and into the underlying Banded Ironstone rubble. Does not exhibit parallel Ironstone bedding found in (c). Approximate width of images 50cm.


Credit: Steven James Walker & et al.

The Kathu Townlands site is one component of a grouping of prehistoric sites known as the Kathu Complex. Other sites in the complex include Kathu Pan 1 which has produced fossils of animals such as elephants and hippos, as well as the earliest known evidence of tools used as spears from a level dated to half a million years ago. 

Hand axes from surface collection. A-B. Banded Ironstone  C. Quartzite
Credit: Steven James Walker & et al.

Michael Chazan, Director of the Archaeology Centre at U of T, emphasizes the scientific challenge posed by the density of the traces of early human activity in this area.

“We need to imagine a landscape around Kathu that supported large populations of human ancestors, as well as large animals like hippos. All indications suggest that Kathu was much wetter, maybe more like the Okavango than the Kalahari. There is no question that the Kathu Complex presents unique opportunities to investigate the evolution of human ancestors in Southern Africa.”

New Mass Map Of A Distant Galaxy Cluster Is The Most Precise Yet

Astronomers using the NASA/ESA Hubble Space Telescope have mapped the mass within a galaxy cluster more precisely than ever before. Created using observations from Hubble's Frontier Fields observing programme, the map shows the amount and distribution of mass within MCS J0416.1-2403, a massive galaxy cluster found to be 160 trillion times the mass of the Sun. The detail in this mass map was made possible thanks to the unprecedented depth of data provided by new Hubble observations, and the cosmic phenomenon known as strong gravitational lensing.
 

This image from the NASA/ESA Hubble Space Telescope shows the galaxy cluster MCS J0416.1-2403. This is one of six being studied by the Hubble Frontier Fields programme. This programme seeks to analyse the mass distribution in these huge clusters and to use the gravitational lensing effect of these clusters, to peer even deeper into the distant Universe.
Credit: ESA/Hubble, NASA, HST Frontier Fields Acknowledgement: Mathilde Jauzac (Durham University, UK and Astrophysics & Cosmology Research Unit, South Africa) and Jean-Paul Kneib (École Polytechnique Fédérale de Lausanne, Switzerland)

Measuring the amount and distribution of mass within distant objects in the Universe can be very difficult. A trick often used by astronomers is to explore the contents of large clusters of galaxies by studying the gravitational effects they have on the light from very distant objects beyond them. This is one of the main goals of Hubble's Frontier Fields, an ambitious observing programme scanning six different galaxy clusters -- including MCS J0416.1-2403, the cluster shown in this stunning new image [1].

Large clumps of mass in the Universe warp and distort the space-time around them. Acting like lenses, they appear to magnify and bend light that travels through them from more distant objects [2].

Despite their large masses, the effect of galaxy clusters on their surroundings is usually quite minimal. For the most part they cause what is known as weak lensing, making even more distant sources appear as only slightly more elliptical or smeared across the sky. However, when the cluster is large and dense enough and the alignment of cluster and distant object is just right, the effects can be more dramatic. The images of normal galaxies can be transformed into rings and sweeping arcs of light, even appearing several times within the same image. This effect is known as strong lensing, and it is this phenomenon, seen around the six galaxy clusters targeted by the Frontier Fields programme, that has been used to map the mass distribution of MCS J0416.1-2403, using the new Hubble data.
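
To give a sense of the angular scale on which strong lensing operates, the sketch below evaluates the point-mass Einstein radius, θ_E = sqrt(4GM/c² · D_ls/(D_l·D_s)). The core mass and angular-diameter distances are illustrative round numbers for a massive cluster at roughly this distance, not the measured properties of MCS J0416.1-2403.

```python
# Point-mass Einstein radius, theta_E = sqrt(4*G*M/c^2 * D_ls / (D_l * D_s)).
# Mass and angular-diameter distances are assumed, illustrative values only.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
GPC = 3.086e25       # gigaparsec, m

mass = 1.0e14 * M_SUN                                        # mass inside the lensing core (assumed)
d_lens, d_source, d_ls = 1.1 * GPC, 1.6 * GPC, 1.0 * GPC     # lens, source, lens-source distances (assumed)

theta_e_rad = math.sqrt(4.0 * G * mass / C**2 * d_ls / (d_lens * d_source))
theta_e_arcsec = math.degrees(theta_e_rad) * 3600.0

print(f"Einstein radius ~ {theta_e_arcsec:.0f} arcseconds")
```

With these assumed inputs the Einstein radius comes out at a couple of tens of arcseconds, which is the scale on which the arcs and multiple images described in the following paragraphs appear.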

"The depth of the data lets us see very faint objects and has allowed us to identify more strongly lensed galaxies than ever before," explains Mathilde Jauzac of Durham University, UK, and Astrophysics & Cosmology Research Unit, South Africa, lead author of the new Frontier Fields paper. "Even though strong lensing magnifies the background galaxies they are still very far away and very faint. The depth of these data means that we can identify incredibly distant background galaxies. We now know of more than four times as many strongly lensed galaxies in the cluster than we did before."

Using Hubble's Advanced Camera for Surveys, the astronomers identified 51 new multiply imaged galaxies around the cluster, quadrupling the number found in previous surveys and bringing the grand total of lensed galaxies to 68. Because these galaxies are seen several times this equates to almost 200 individual strongly lensed images which can be seen across the frame. This effect has allowed Jauzac and her colleagues to calculate the distribution of visible and dark matter in the cluster and produce a highly constrained map of its mass [3].

"Although we've known how to map the mass of a cluster using strong lensing for more than twenty years, it's taken a long time to get telescopes that can make sufficiently deep and sharp observations, and for our models to become sophisticated enough for us to map, in such unprecedented detail, a system as complicated as MCS J0416.1-2403," says team member Jean-Paul Kneib.

By studying 57 of the most reliably and clearly lensed galaxies, the astronomers modelled the mass of both normal and dark matter within MCS J0416.1-2403. "Our map is twice as good as any previous models of this cluster!" adds Jauzac.

The total mass within MCS J0416.1-2403 -- modelled over a region more than 650 000 light-years across -- was found to be 160 trillion times the mass of the Sun. This measurement is several times more precise than any other cluster map, and is the most precise ever produced [4]. By precisely pinpointing where the mass resides within clusters like this one, the astronomers are also measuring the warping of space-time with high precision.

"Frontier Fields' observations and gravitational lensing techniques have opened up a way to very precisely characterise distant objects -- in this case a cluster so far away that its light has taken four and a half billion years to reach us," adds Jean-Paul Kneib. "But, we will not stop here. To get a full picture of the mass we need to include weak lensing measurements too. Whilst it can only give a rough estimate of the inner core mass of a cluster, weak lensing provides valuable information about the mass surrounding the cluster core."

The team will continue to study the cluster using ultra-deep Hubble imaging and detailed strong and weak lensing information to map the outer regions of the cluster as well as its inner core, and will thus be able to detect substructures in the cluster's surroundings. They will also take advantage of X-ray measurements of hot gas and spectroscopic redshifts to map the contents of the cluster, evaluating the respective contribution of dark matter, gas and stars [5].

Combining these sources of data will further enhance the detail of this mass distribution map, showing it in 3D and including the relative velocities of the galaxies within it. This paves the way to understanding the history and evolution of this galaxy cluster.


Notes:

[1] The cluster is also known as MACS J0416.1-2403.

[2] The warping of space-time by large objects in the Universe was one of the predictions of Albert Einstein's theory of general relativity.

[3] Gravitational lensing is one of the few methods astronomers have to find out about dark matter. Dark matter, which makes up around three quarters of all matter in the Universe, cannot be seen directly as it does not emit or reflect any light, and can pass through other matter without friction (it is collisionless). It interacts only by gravity, and its presence must be deduced from its gravitational effects.

[4] The uncertainty on the measurement is only around 0.5%, or about 1 trillion times the mass of the Sun. That may not sound precise, but it is exceptionally so for a measurement of this kind.

[5] NASA's Chandra X-ray Observatory was used to obtain X-ray measurements of hot gas in the cluster and ground based observatories provide the data needed to measure spectroscopic redshifts.


The results of the study will be published online (mnras.oxfordjournals.org/lookup/doi/10.1093/mnras/stu) in Monthly Notices of the Royal Astronomical Society on 24 July 2014.

Hubble Finds Three Surprisingly Dry Exoplanets

Astronomers using NASA's Hubble Space Telescope have gone looking for water vapor in the atmospheres of three planets orbiting stars similar to the Sun — and have come up nearly dry.

The three planets, HD 189733b, HD 209458b, and WASP-12b, are between 60 and 900 light-years away. These giant gaseous worlds are so hot, with temperatures between 1,500 and 4,000 degrees Fahrenheit, that they are ideal candidates for detecting water vapor in their atmospheres.
This is an artistic illustration of the gas giant planet HD 209458b (unofficially named Osiris), located 150 light-years away in the constellation Pegasus. This "hot Jupiter"-class planet is estimated to be 220 times the mass of Earth, and its atmosphere is a seething 2,150 degrees Fahrenheit. It orbits very close to its bright Sun-like star, with the orbit tilted edge-on to Earth, making the planet an ideal candidate for the Hubble Space Telescope to make precise measurements of the chemical composition of the giant's atmosphere as starlight filters through it. To the surprise of astronomers, they have found much less water vapor in the atmosphere than standard planet-formation models predict.


However, to the surprise of the researchers, the planets surveyed have only one-tenth to one one-thousandth the amount of water predicted by standard planet-formation theories.

"Our water measurement in one of the planets, HD 209458b, is the highest-precision measurement of any chemical compound in a planet outside the solar system, and we can now say with much greater certainty than ever before that we've found water in an exoplanet," said Dr. Nikku Madhusudhan of the Institute of Astronomy at the University of Cambridge, United Kingdom, who led the research. "However, the low water abundance we are finding is quite astonishing."

Madhusudhan said that this finding presents a major challenge to exoplanet theory. "It basically opens a whole can of worms in planet formation. We expected all these planets to have lots of water in them. We have to revisit planet formation and migration models of giant planets, especially 'hot Jupiters', and investigate how they're formed."

He emphasizes that these results, though found in these large hot planets close to their parent stars, may have major implications for the search for water in potentially habitable Earth-sized exoplanets. Instruments on future space telescopes may need to be designed with a higher sensitivity if target planets are drier than predicted. "We should be prepared for much lower water abundances than predicted when looking at super-Earths (rocky planets that are several times the mass of Earth)," Madhusudhan said.

Using near-infrared spectra of the planets observed with Hubble, Madhusudhan and his collaborators from the Space Telescope Science Institute, Baltimore, Maryland; the University of Maryland, College Park, Maryland; the Johns Hopkins University, Baltimore, Maryland; and the Dunlap Institute at the University of Toronto, Ontario, Canada, estimated the amount of water vapor in the planetary atmospheres based on sophisticated computer models and statistical techniques to explain the data.

The planets were selected because they orbit relatively bright stars that provide enough radiation for an infrared-light spectrum to be taken. Absorption features from water vapor in a planet's atmosphere are superimposed on the small amount of starlight that passes through that atmosphere during a transit.
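The measurement rests on the fact that an absorbing atmosphere makes the planet appear slightly larger, and so block slightly more starlight, at wavelengths where water absorbs. The sketch below illustrates that transit-depth arithmetic with invented planet and star parameters; it is not the retrieval machinery used in the study, only the geometry underlying it.

import math

# Minimal sketch of why water absorption shows up in a transit spectrum.
# All parameter values are illustrative assumptions, not measured quantities.

k_B   = 1.381e-23          # Boltzmann constant, J/K
R_JUP = 7.149e7            # Jupiter radius, m
R_SUN = 6.957e8            # solar radius, m

R_star   = 1.2 * R_SUN     # assumed host-star radius
R_planet = 1.35 * R_JUP    # assumed planet radius at continuum wavelengths
T        = 1450.0          # assumed atmospheric temperature, K
mu       = 2.3 * 1.661e-27 # mean molecular mass of an H2-dominated atmosphere, kg
g        = 9.4             # assumed surface gravity, m/s^2

# The atmospheric scale height sets the size of spectral features.
H = k_B * T / (mu * g)

def transit_depth(extra_scale_heights):
    """Fraction of starlight blocked when the effective planet radius is
    inflated by 'extra_scale_heights' of absorbing atmosphere."""
    r = R_planet + extra_scale_heights * H
    return (r / R_star) ** 2

depth_continuum = transit_depth(0)    # outside the water band
depth_in_band   = transit_depth(3)    # inside a strong water band (assumed)

print(f"scale height      ~ {H / 1e3:.0f} km")
print(f"continuum depth   ~ {depth_continuum * 100:.3f} %")
print(f"in-band depth     ~ {depth_in_band * 100:.3f} %")
print(f"feature amplitude ~ {(depth_in_band - depth_continuum) * 1e6:.0f} ppm")
# A drier atmosphere produces weaker features than this simple picture would
# suggest, which is essentially what the Hubble observations found.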

Detecting water is almost impossible for transiting planets from the ground because Earth's atmosphere has a lot of water in it that contaminates the observation. "We really need the Hubble Space Telescope to make such observations," said Nicolas Crouzet of the Dunlap Institute at the University of Toronto and co-author of the study.
This graph compares observations with modeled infrared spectra of three hot-Jupiter-class exoplanets that were spectroscopically observed with the Hubble Space Telescope. The red curve in each case is the best-fit model spectrum for the detection of water vapor absorption in the planetary atmosphere. The blue circles and error bars show the processed and analyzed data from Hubble's spectroscopic observations.

Credit: NASA, ESA, N. Madhusudhan (University of Cambridge), and A. Feild and G. Bacon (STScI)

The currently accepted theory on how giant planets in our solar system formed is known as core accretion, in which a planet is formed around the young star in a protoplanetary disk made primarily of hydrogen, helium, and particles of ices and dust composed of other chemical elements. The dust particles stick to each other, eventually forming larger and larger grains. The gravitational forces of the disk draw in these grains and larger particles until a solid core forms. This core then leads to runaway accretion of both solids and gas to eventually form a giant planet.

This theory predicts that the proportions of the different elements in the planet are enhanced relative to those in its star, with oxygen expected to be the most enhanced of all. Once the giant planet forms, its atmospheric oxygen should be locked up largely in water molecules. The very low levels of water vapor found by this research therefore raise a number of questions about the chemical ingredients that lead to planet formation, the researchers say.

"There are so many things we still don't know about exoplanets, so this opens up a new chapter in understanding how planets and solar systems form," said Drake Deming of the University of Maryland, who led one of the precursor studies. "The problem is that we are assuming the water to be as abundant as in our own solar system. What our study has shown is that water features could be a lot weaker than our expectations."

The findings are being published on July 24 in The Astrophysical Journal Letters.


Contacts and sources:
Ray Villard
Space Telescope Science Institute, Baltimore, Md.

Nikku Madhusudhan
Institute of Astronomy, University of Cambridge, United Kingdom

Satellite Study Reveals Parched U.S. West Using Up Underground Water

A new study by NASA and University of California, Irvine, scientists finds more than 75 percent of the water loss in the drought-stricken Colorado River Basin since late 2004 came from underground resources. The extent of groundwater loss may pose a greater threat to the water supply of the western United States than previously thought.

The Colorado River Basin lost nearly 53 million acre feet of freshwater over the past nine years, according to a new study based on data from NASA’s GRACE mission. This is almost double the volume of the nation's largest reservoir, Nevada's Lake Mead (pictured).
Image Credit: U.S. Bureau of Reclamation

This study is the first to quantify the amount that groundwater contributes to the water needs of western states. According to the U.S. Bureau of Reclamation, the federal water management agency, the basin has been suffering from prolonged, severe drought since 2000 and has experienced the driest 14-year period in the last hundred years.

The research team used data from NASA's Gravity Recovery and Climate Experiment (GRACE) satellite mission to track changes in the mass of the Colorado River Basin, which are related to changes in water amount on and below the surface. Monthly measurements of the change in water mass from December 2004 to November 2013 revealed the basin lost nearly 53 million acre feet (65 cubic kilometers) of freshwater, almost double the volume of the nation's largest reservoir, Nevada's Lake Mead. More than three-quarters of the total -- about 41 million acre feet (50 cubic kilometers) -- was from groundwater.
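The volumes quoted above can be sanity-checked with simple unit arithmetic, using the standard definition of an acre-foot (about 1,233.5 cubic metres). The short sketch below only reproduces that conversion, with the figures from this article as inputs.

# Cross-check of the volumes quoted above; simple unit arithmetic only.

ACRE_FOOT_M3 = 1233.48          # one acre-foot in cubic metres (standard value)

total_loss_af       = 53.0e6    # acre-feet: total freshwater loss, Dec 2004 - Nov 2013
groundwater_loss_af = 41.0e6    # acre-feet: portion attributed to groundwater

def acre_feet_to_km3(volume_af):
    """Convert a volume in acre-feet to cubic kilometres."""
    return volume_af * ACRE_FOOT_M3 / 1e9

print(f"total loss        ~ {acre_feet_to_km3(total_loss_af):.1f} km^3")        # ~65 km^3
print(f"groundwater loss  ~ {acre_feet_to_km3(groundwater_loss_af):.1f} km^3")  # ~50 km^3
print(f"groundwater share ~ {groundwater_loss_af / total_loss_af:.0%}")         # ~77 %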

The Colorado River Basin (black outline) supplies water to about 40 million people in seven states. Major cities outside the basin (red shading) also use water from the Colorado River.
Image Credit: U.S. Bureau of Reclamation

"We don't know exactly how much groundwater we have left, so we don't know when we're going to run out," said Stephanie Castle, a water resources specialist at the University of California, Irvine, and the study's lead author. "This is a lot of water to lose. We thought that the picture could be pretty bad, but this was shocking."

Water above ground in the basin's rivers and lakes is managed by the U.S. Bureau of Reclamation, and its losses are documented. Pumping from underground aquifers is regulated by individual states and is often not well documented.

"There's only one way to put together a very large-area study like this, and that is with satellites," said senior author Jay Famiglietti, senior water cycle scientist at JPL on leave from UC Irvine, where he is an Earth system science professor. "There's just not enough information available from well data to put together a consistent, basin-wide picture."

Famiglietti said GRACE is like having a giant scale in the sky. Within a given region, the change in mass due to rising or falling water reserves influences the strength of the local gravitational attraction. By periodically measuring gravity regionally, GRACE reveals how much a region's water storage changes over time.
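As a rough illustration of the "giant scale" idea, the sketch below converts a basin-wide water loss into the total mass change GRACE must sense, and into an equivalent layer of water spread over the basin. The basin area is an approximate figure assumed for illustration, not a value taken from the study.

# Illustrative conversion from a basin-wide water loss to the mass change a
# GRACE-style measurement must detect. Numbers are rough assumptions.

RHO_WATER = 1000.0            # density of water, kg per cubic metre

basin_area_km2 = 637_000      # approximate Colorado River Basin area (assumed)
water_loss_km3 = 65.0         # freshwater volume lost over the study period

mass_loss_kg  = water_loss_km3 * 1e9 * RHO_WATER                # km^3 -> m^3 -> kg
equiv_depth_m = water_loss_km3 * 1e9 / (basin_area_km2 * 1e6)   # layer depth, m

print(f"mass lost              ~ {mass_loss_kg:.2e} kg")
print(f"equivalent water layer ~ {equiv_depth_m * 100:.0f} cm over the basin")
# GRACE senses this mass change as a tiny shift in the regional gravity field,
# measured as a change in the separation of its two orbiting spacecraft.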

The Colorado River is the only major river in the southwestern United States. Its basin supplies water to about 40 million people in seven states, as well as irrigating roughly four million acres of farmland.

"The Colorado River Basin is the water lifeline of the western United States," said Famiglietti. "With Lake Mead at its lowest level ever, we wanted to explore whether the basin, like most other regions around the world, was relying on groundwater to make up for the limited surface-water supply. We found a surprisingly high and long-term reliance on groundwater to bridge the gap between supply and demand."

Famiglietti noted that the rapid depletion rate will compound the problem of short supply by leading to further declines in streamflow in the Colorado River.

"Combined with declining snowpack and population growth, this will likely threaten the long-term ability of the basin to meet its water allocation commitments to the seven basin states and to Mexico," Famiglietti said.

The study has been accepted for publication in the journal Geophysical Research Letters, which posted the manuscript online Thursday. Coauthors included other scientists from NASA's Goddard Space Flight Center, Greenbelt, Maryland, and the National Center for Atmospheric Research, Boulder, Colorado. The research was funded by NASA and the University of California.

For more information on NASA's GRACE satellite mission, see: http://www.nasa.gov/grace and http://www.csr.utexas.edu/grace

GRACE is a joint mission with the German Aerospace Center and the German Research Center for Geosciences, in partnership with the University of Texas at Austin. JPL developed the GRACE spacecraft and manages the mission for NASA's Science Mission Directorate, Washington.

NASA monitors Earth's vital signs from land, air and space with a fleet of satellites and ambitious airborne and ground-based observation campaigns. NASA develops new ways to observe and study Earth's interconnected natural systems with long-term data records and computer analysis tools to better see how our planet is changing. The agency shares this unique knowledge with the global community and works with institutions in the United States and around the world that contribute to understanding and protecting our home planet.





Contacts and sources:
Alan Buis
Jet Propulsion Laboratory, Pasadena, Calif.
NASA

Artificial Intelligence Analyzes Beatles' Music Transformations


Music fans and critics know that the music of the Beatles underwent a dramatic transformation in just a few years, but until now there hasn’t been a scientific way to measure the progression. That could change now that computer scientists at Lawrence Technological University have developed an artificial intelligence algorithm that can analyze and compare musical styles, enabling research into the musical progression of the Beatles.


Assistant Professor Lior Shamir and graduate student Joe George had previously developed audio analysis technology to study the vocal communication of whales, and they expanded the algorithm to analyze the albums of the Beatles and other well-known bands such as Queen, U2, ABBA and Tears for Fears. The study, published in the August issue of the journal Pattern Recognition Letters, demonstrates scientifically that the structure of the Beatles music changes progressively from one album to the next.

The algorithm works by first converting each song to a spectrogram – a visual representation of the audio content. That turns an audio analysis task into an image analysis problem, which is solved by applying comprehensive algorithms that turn each music spectrogram into a set of almost 3,000 numeric descriptors reflecting visual aspects such as textures, shapes and the statistical distribution of the pixels. Pattern recognition and statistical methods are then used to detect and quantify the similarities between different pieces of music.
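The press release does not include code, but the pipeline it describes (audio converted to a spectrogram, the spectrogram reduced to numeric descriptors, and descriptors compared pairwise) can be sketched in a few lines of Python. The toy descriptors below are stand-ins chosen only to show the structure of the approach; the actual study used a far richer set of roughly 3,000 image-analysis features.

import numpy as np
from scipy import signal

# Minimal sketch of the pipeline described above: audio -> spectrogram ->
# numeric descriptors -> pairwise distance. The handful of statistics below
# are illustrative stand-ins, not the study's ~3,000-feature descriptor set.

def spectrogram_features(samples, sample_rate=44100):
    """Turn an audio signal into a small vector of spectrogram statistics."""
    _, _, spec = signal.spectrogram(samples, fs=sample_rate, nperseg=2048)
    spec = np.log1p(spec)                 # compress dynamic range
    return np.array([
        spec.mean(), spec.std(),          # overall level and spread
        spec.max(axis=0).mean(),          # average spectral peak per time slice
        (spec > spec.mean()).mean(),      # fraction of "bright" pixels
    ])

def song_distance(features_a, features_b):
    """Euclidean distance between two descriptor vectors."""
    return float(np.linalg.norm(features_a - features_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in "songs": random signals with different spectral colouring.
    song1 = rng.normal(size=44100 * 5)
    song2 = np.cumsum(rng.normal(size=44100 * 5))   # random walk, redder spectrum
    d = song_distance(spectrogram_features(song1), spectrogram_features(song2))
    print(f"descriptor distance: {d:.2f}")

Aggregating such song-to-song distances within and between albums is what allows album-level similarity, and hence an ordering of the albums, to be computed.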

In popular music, albums are widely considered milestones in the stylistic development of music artists, and these collections of songs provide a convenient unit for establishing measurements to quantify a band’s progression.

LTU's study analyzed 11 songs from each of the 13 Beatles studio albums released in Great Britain, and quantified the similarities between each song and all the others in the study. The results for the individual songs were then used to compare the similarities between the albums.

The automatic placement of the albums by the algorithm was in agreement with the chronological order in which each album was recorded, starting with the Beatles’ first album, “Please Please Me,” followed by the subsequent early albums “With the Beatles,” “Beatles for Sale” and “A Hard Day’s Night.”

The automatic association of these albums demonstrated that the computer algorithm determined that the songs on the first album, “Please Please Me,” were most like the group of songs on the second album, “With the Beatles,” and least like the songs on the last album recorded, “Abbey Road.”

The algorithm then placed the albums “Help!,” and “Rubber Soul,” followed by “Revolver,” “Sergeant Pepper’s Lonely Hearts Club Band,” “Magical Mystery Tour,” “Yellow Submarine,” and “The Beatles” (The White Album).
“Let It Be” was the last album released by the Beatles, but the algorithm correctly identified those songs as having been recorded earlier than the songs on “Abbey Road.”

“People who are not Beatles fans normally can’t tell that ‘Help!’ was recorded before ‘Rubber Soul,’ but the algorithm can,” Shamir said. “This experiment demonstrates that artificial intelligence can identify the changes and progression in musical styles by ‘listening’ to popular music albums in a completely new way.”

The computer algorithm was able to deduce the chronological order of the albums of the other groups in the study by analyzing the audio data alone – with one notable exception. Strong similarities were identified between two Tears for Fears albums released 15 years apart. That makes sense because “Seeds of Love,” released in 1989, was the last album before the band’s breakup, and “Everybody Loves a Happy Ending,” released in 2004, was recorded after the band reunited. Those two albums had less in common with two solo albums released by Roland Orzabal, the group’s principal songwriter, after the band split up in 1991.

In the case of “Queen,” the computer not only sorted the albums by their chronological order, but also distinguished between albums before and after the album “Hot Space,” which represented a major shift in Queen’s musical style.

In this era of big data, such algorithms can assist in searching, browsing, and organizing large music databases, as well as identifying music that matches an individual listener’s musical preferences.

In the case of the Beatles, Shamir believes this type of research will have historical significance. “The baby boomers loved the music of the Beatles, I love the Beatles, and now my daughters and their friends love the Beatles. Their music will live on for a very long time,” Shamir said. “It is worthwhile to study what makes their music so distinctive, and computer science and big data can help.”


Contacts and sources:

Wednesday, July 23, 2014

Voyager Spacecraft Might Not Have Reached Interstellar Space

In 2012, the Voyager mission team announced that the Voyager 1 spacecraft had passed into interstellar space, traveling further from Earth than any other manmade object.

But, in the nearly two years since that historic announcement, and despite subsequent observations backing it up, uncertainty about whether Voyager 1 really crossed the threshold continues. There are some scientists who say that the spacecraft is still within the heliosphere – the region of space dominated by the Sun and its wind of energetic particles – and has not yet reached the space between the stars.

Now, two Voyager team scientists have developed a test that they say could prove once and for all if Voyager 1 has crossed the boundary. The new test is outlined in a study accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.

The scientists predict that, in the next two years, Voyager 1 will cross the current sheet – the sprawling surface within the heliosphere where the polarity of the Sun’s magnetic field changes from plus to minus. The spacecraft would then detect a reversal in the magnetic field, proving that it is still within the heliosphere. But if the magnetic field reversal doesn’t happen in the next year or two as expected, that would be confirmation that Voyager 1 has already passed into interstellar space.
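In data terms, the proposed test amounts to watching for a sign change in the measured magnetic field. A minimal sketch of such a check is shown below; the field values are synthetic stand-ins, not Voyager telemetry.

import numpy as np

# Minimal sketch of the proposed test: look for a polarity reversal in a
# magnetic-field time series. The data here are synthetic; in practice the
# series would be Voyager 1 magnetometer measurements.

rng = np.random.default_rng(1)
days = np.arange(730)                                  # ~two years of daily samples
field = 0.4 + 0.05 * rng.normal(size=days.size)        # nT, steady positive polarity
field[500:] *= -1                                      # inject a reversal at day 500

def first_reversal(b):
    """Return the index of the first sign change in the series, or None."""
    signs = np.sign(b)
    changes = np.where(np.diff(signs) != 0)[0]
    return int(changes[0]) + 1 if changes.size else None

idx = first_reversal(field)
if idx is None:
    print("No polarity reversal seen: consistent with interstellar space.")
else:
    print(f"Polarity reversal at day {days[idx]}: still inside the heliosphere.")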

The heliosphere, in which the Sun and planets reside, is a large bubble inflated from the inside by the high-speed solar wind blowing out from the Sun. Pressure from the solar wind, along with pressure from the surrounding interstellar medium, determines the size and shape of the heliosphere. The supersonic flow of solar wind abruptly slows at the termination shock, the innermost boundary of the solar system. The edge of the solar system is the heliopause. The bow shock pushes ahead through the interstellar medium as the heliosphere plows through the galaxy.

Credit: Southwest Research Institute

“The proof is in the pudding,” said George Gloeckler, a professor in atmospheric, oceanic and space sciences at the University of Michigan in Ann Arbor and lead author of the new study.

Gloeckler has worked on the Voyager mission since 1972 and has been a vocal opponent of the view that Voyager 1 has entered interstellar space. He said that, although the spacecraft has observed many of the signs indicating it may have reached interstellar space, like cosmic rays, Voyager 1 did not see a change in magnetic field that many were expecting.

“This controversy will continue until it is resolved by measurements,” Gloeckler said.

If the new prediction is right, “this will be the highlight of my life,” he said. “There is nothing more gratifying than when you have a vision or an idea and you make a prediction and it comes true.”

The Voyager 1 and 2 spacecraft were launched in 1977 to study Jupiter and Saturn. The mission has since been extended to explore the outermost limits of the Sun’s influence and beyond. Voyager 2, which also flew by Uranus and Neptune, is on its way to interstellar space.

Gloeckler and co-author Len Fisk, also a professor in atmospheric, oceanic and space sciences at the University of Michigan, are basing their new test on a model they developed and published earlier this year in The Astrophysical Journal. The model assumes that the solar wind is slowing down and, as a result, can be compressed. Based on this assumption, the study says Voyager 1 is moving faster than the outward flow of the solar wind and will encounter current sheets where the polarity of the magnetic field reverses, proving that the spacecraft has not yet left the heliosphere. The scientists predict this reversal will most likely happen during 2015, based on observations made by Voyager 1.

“If that happens, I think if anyone still believes Voyager 1 is in the interstellar medium, they will really have something to explain,” Gloeckler said. “It is a signature that can’t be missed.”

Ed Stone of the California Institute of Technology in Pasadena and NASA’s Voyager Project Scientist said in a statement that “It is the nature of the scientific process that alternative theories are developed in order to account for new observations. This paper differs from other models of the solar wind and the heliosphere and is among the new models that the Voyager team will be studying as more data are acquired by Voyager.”

Alan Cummings, a senior research scientist at the California Institute of Technology in Pasadena and a co-investigator on the Voyager mission, believes Voyager 1 has most likely crossed into interstellar space, but he said there is a possibility that Gloeckler and Fisk are right and the spacecraft is still in the heliosphere. He said that if Voyager 1 experiences a current sheet crossing like the one proposed in the new study, it could also mean that the heliosphere expanded outward and passed over the spacecraft again.

“If the magnetic field had cooperated, I don’t think we’d be having this discussion,” Cummings said. “This is a puzzle. It is very reasonable to explore alternate explanations. We don’t understand everything that happened out there.”

Stephen Fuselier, director of the space science department at the Southwest Research Institute in San Antonio, Texas, who is not involved with the research and is not on the Voyager 1 team, said the scientists have come up with a good test to prove once and for all whether Voyager 1 has crossed into interstellar space. However, he does not agree with the paper's assumption about how fast the solar wind is moving. But, he said, there is no way to measure this flow velocity directly, and if Gloeckler and Fisk's assumptions are correct, the model makes sense and Voyager 1 could still be inside the heliosphere.

This artist’s concept shows the Voyager 1 spacecraft entering the space between stars. The Voyager mission team announced in 2012 that the Voyager 1 spacecraft had passed into interstellar space, but some scientists say it is still within the heliosphere – the region of space dominated by the Sun and its wind of energetic particles. In a new study, two Voyager team scientists are proposing a test that they say could prove once and for all if Voyager 1 has crossed the boundary.

Credit: NASA/JPL-Caltech

“I applaud them for coming out with a bold prediction,” said Fuselier, who works on the Interstellar Boundary Explorer mission that is examining the boundary between the solar wind and the interstellar medium. “If they are right, they are heroes. If they are wrong, though, it is important for the community to understand why … If they are wrong, then that must mean that one or more of their assumptions is incorrect, and we as a community have to understand which it is.”

Fuselier, who believes Voyager 1 has entered interstellar space, said he will reserve judgment on whether Gloeckler and Fisk are correct until 2016. He said there is a sizeable fraction of the space community that is skeptical that Voyager 1 has entered interstellar space, but the newly proposed test could help end that debate. Another good test will come when Voyager 2 crosses into interstellar space in the coming years, Fuselier and Cummings said.

“If you go back 10 years and talk to the Voyager people, they would have told you 10 years ago that what they would see upon exiting the heliosphere is very, very different from what they are seeing now,” Fuselier said. “We are just loaded down with surprises and this might be one of them.”


Contacts and sources:
Peter Weiss
American Geophysical Union 

Citation: “A test for whether or not Voyager 1 has crossed the heliopause.” Authors: G. Gloeckler and L.A. Fisk, Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, Michigan, USA.