Unseen Is Free

Thursday, July 24, 2014

Leaf-Mining Insects Destroyed With The Dinosaurs, Others Quickly Appeared

After the asteroid impact at the end of the Cretaceous period that triggered the dinosaurs' extinction and ushered in the Paleocene, leaf-mining insects in the western United States completely disappeared. Only a million years later, at Mexican Hat, in southeastern Montana, fossil leaves show diverse leaf-mining traces from new insects that were not present during the Cretaceous, according to paleontologists.

This is a Platanus raynoldsii, or sycamore, with two mines at the leaf base produced by wasp larvae.
Credit: Michael Donovan, Penn State


"Our results indicate both that leaf-mining diversity at Mexican Hat is even higher than previously recognized, and equally importantly, that none of the Mexican Hat mines can be linked back to the local Cretaceous mining fauna," said Michael Donovan, graduate student in geosciences, Penn State.

Insects that eat leaves produce very specific types of damage. One type is from leaf miners -- insect larvae that live in the leaves and tunnel for food, leaving distinctive feeding paths and patterns of droppings.

Donovan, Peter Wilf, professor of geosciences, Penn State, and colleagues looked at 1,073 leaf fossils from Mexican Hat for mines. They compared these with more than 9,000 leaves from the end of the Cretaceous, 65 million years ago, from the Hell Creek Formation in southwestern North Dakota, and with more than 9,000 Paleocene leaves from the Fort Union Formation in North Dakota, Montana and Wyoming. The researchers present their results in today's (July 24) issue of PLOS ONE.

"We decided to focus on leaf miners because they are typically host specific, feeding on only a few plant species each," said Donovan. "Each miner also leaves an identifiable mining pattern."

The researchers found nine different mine-damage types at Mexican Hat attributable to the larvae of moths, wasps and flies, and six of these damage types were unique to the site.


This is a mine produced by a micromoth larva on Platanus raynoldsii, a sycamore.
Credit: Michael Donovan, Penn State

The researchers were unsure whether the high diversity of leaf miners at Mexican Hat, compared with other early Paleocene sites where there is little or no leaf mining, was caused by insects that survived the extinction event in refugia -- areas where organisms persist during adverse conditions -- or by range expansions of insects from elsewhere during the early Paleocene.

However, with further study, the researchers found no evidence of the survival of any leaf miners over the Cretaceous-Paleocene boundary, suggesting an even more total collapse of terrestrial food webs than has been recognized previously.

"These results show that the high insect damage diversity at Mexican Hat represents an influx of novel insect herbivores during the early Paleocene and not a refugium for Cretaceous leaf miners," said Wilf. "The new herbivores included a startling diversity for any time period, and especially for the classic post-extinction disaster interval."

Insect extinction across the Cretaceous-Paleocene boundary may have been directly caused by catastrophic conditions after the asteroid impact and by the disappearance of host plant species. While insect herbivores constantly need leaves to survive, plants can remain dormant as seeds in the ground until more favorable conditions return.

The low-diversity flora at Mexican Hat is typical for the area in the early Paleocene, so what caused the high insect damage diversity?

Insect outbreaks are associated with a rapid population increase of a single insect species, so the high diversity of mining damage seen in the Mexican Hat fossils makes the possibility of an outbreak improbable.

The researchers hypothesized that the leaf miners that are seen in the Mexican Hat fossils appeared in that area because of a transient warming event, a number of which occurred during the early Paleocene.

This is a micromoth larva mine on Juglandiphyllites glabra, the earliest known member of the walnut family.
Credit: Michael Donovan, Penn State

"Previous studies have shown a correlation between temperature and insect damage diversity in the fossil record, possibly caused by evolutionary radiations or range shifts in response to a warmer climate," said Donovan. "Current evidence suggests that insect herbivore extinction decreased with increasing distance from the asteroid impact site in Mexico, so pools of surviving insects would have existed elsewhere that could have provided a source for the insect influx that we observed at Mexican Hat."

Other researchers on this project were Conrad C. Labandeira, Department of Paleobiology, National Museum of Natural History, Smithsonian Institution and Department of Entomology and BEES Program, University of Maryland, College Park; Kirk R. Johnson, National Museum of Natural History, Smithsonian Institution; and Daniel J. Peppe, Department of Geology, Baylor University.



Contacts and sources:
A'ndrea Elyse Messer
Penn State

Highest-Precision Measurement Of Water In Planet Outside The Solar System


A team of astronomers using NASA's Hubble Space Telescope have gone looking for water vapour in the atmospheres of three planets orbiting stars similar to the Sun – and have come up nearly dry.

The three planets, HD 189733b, HD 209458b, and WASP-12b, are between 60 and 900 light-years away, and are all gas giants known as 'hot Jupiters.' These worlds are so hot, with temperatures between 900 and 2200 degrees Celsius, that they are ideal candidates for detecting water vapour in their atmospheres.

Illustration of a 'hot Jupiter' orbiting a sun-like star

Credit: Haven Giguere, Nikku Madhusudhan

However, the three planets have only one-tenth to one-thousandth the amount of water predicted by standard planet formation theories. The best water measurement, for the planet HD 209458b, was between 4 and 24 parts per million. The results raise new questions about how exoplanets form and highlight the challenges in searching for water on Earth-like exoplanets in the future. The findings are published today (24 July) in the journal Astrophysical Journal Letters.

"Our water measurement in one of the planets, HD 209458b, is the highest-precision measurement of any chemical compound in a planet outside the solar system, and we can now say with much greater certainty than ever before that we've found water in an exoplanet," said Dr Nikku Madhusudhan of the Institute of Astronomy at the University of Cambridge, who led the research. "However, the low water abundance we are finding is quite astonishing."

Dr Madhusudhan and his collaborators used near-infrared spectra of the planetary atmospheres observed with the Hubble Space Telescope as the planets passed in front of their parent stars as viewed from Earth. Absorption features from water vapour in the planetary atmosphere are superimposed on the small amount of starlight that passes through the planetary atmosphere before reaching the telescope. The planetary spectrum is obtained by measuring the variation in the stellar spectrum caused by the planetary atmosphere, and is then used to estimate the amount of water vapour in the planetary atmosphere using sophisticated computer models and statistical techniques.
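The geometry behind this transmission-spectroscopy measurement can be sketched numerically. The following is a minimal illustration, using made-up radii and scale height (not values from the study), of how a water absorption band slightly deepens the measured transit:

```python
# Sketch of transmission spectroscopy: a water absorption band makes the
# planet's silhouette look larger by a few atmospheric scale heights,
# changing the transit depth by a small, measurable amount.
# All numbers below are illustrative assumptions, not the study's values.

R_star = 6.96e8     # stellar radius (m), Sun-like star
R_planet = 9.6e7    # planetary radius (m), roughly hot-Jupiter sized
H = 5.0e5           # atmospheric scale height (m), hot-Jupiter estimate

# Fraction of starlight blocked outside any absorption band:
depth = (R_planet / R_star) ** 2

# Inside a strong water band the effective radius grows by ~2 scale heights:
depth_in_band = ((R_planet + 2 * H) / R_star) ** 2

signal = depth_in_band - depth
print(f"continuum transit depth: {depth:.5f}")
print(f"extra depth in water band: {signal:.2e}")
```

The extra in-band depth here comes out to a few parts in ten thousand, which conveys why such measurements demand very high photometric precision.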

Madhusudhan said that the findings present a major challenge to exoplanet theory. "It basically opens a whole can of worms in planet formation. We expected these planets to have lots of water in their atmospheres. We have to revisit planet formation and migration models of giant planets, especially hot Jupiters, to investigate how they're formed."

The currently accepted theory on how giant planets in our solar system formed is known as core accretion, in which a planet is formed around the young star in a protoplanetary disc made primarily of hydrogen, helium, and particles of ices and dust composed of other chemical elements. The dust particles stick to each other, eventually forming larger and larger grains. The gravitational forces of the disc draw in these grains and larger planetesimals until a solid core forms. This core then leads to runaway accretion of both planetesimals and gas to eventually form a giant planet.

This theory predicts that the proportions of the different elements in the planet are enhanced relative to those in their star, especially oxygen, which is supposed to be the most enhanced. Once a giant planet forms, its atmospheric oxygen is expected to be largely in the form of water. Therefore, the very low levels of water vapour found by this research raise a number of questions about the chemical ingredients that lead to planet formation.

"There are so many things we still don't understand about exoplanets – this opens up a new chapter in understanding how planets and solar systems form," said Dr Drake Deming of the University of Maryland, who led one of the precursor studies and is a co-author in the present study. "These findings highlight the need for high-precision spectroscopy – additional observations from the Hubble Space Telescope and the next-generation telescopes currently in development will make this task easier."

The new discovery also highlights some major challenges in the search for the exoplanet 'holy grail' – an exoplanet with a climate similar to Earth, a key characteristic of which is the presence of liquid water.

"These very hot planets with large atmospheres orbit some of our nearest stars, making them the best possible candidates for measuring water levels, and yet the levels we found were much lower than expected," said Dr Madhusudhan. "These results show just how challenging it could be to detect water on Earth-like exoplanets in our search for potential life elsewhere." Instruments on future telescopes searching for biosignatures may need to be designed with a higher sensitivity to account for the possibility of planets being significantly drier than predicted.

The researchers also considered the possibility that clouds may be responsible for obscuring parts of the atmospheres, thereby leading to the low observed water levels. However, such an explanation requires heavy cloud particles to be suspended too high in the atmosphere to be physically plausible for all the planets in the study.



Contacts and sources:
Sarah Collins
University of Cambridge

Synchronization Of North Atlantic, North Pacific Preceded Abrupt Warming, End Of Ice Age

Scientists have long been concerned that global warming may push Earth's climate system across a "tipping point," where rapid melting of ice and further warming may become irreversible -- a hotly debated scenario with an unclear picture of what this point of no return may look like.

A newly published study by researchers at Oregon State University probed the geologic past to understand mechanisms of abrupt climate change. The study pinpoints the emergence of synchronized climate variability in the North Pacific Ocean and the North Atlantic Ocean a few hundred years before the rapid warming that took place at the end of the last ice age about 15,000 years ago.

This image depicts the Hubbard Glacier ice front, with floating ice 'growlers' in August 2004.
Credit: Photo courtesy of Oregon State University

The study suggests that the combined warming of the two oceans may have provided the tipping point for abrupt warming and rapid melting of the northern ice sheets.

Results of the study, which was funded by the National Science Foundation, appear this week in Science.

This new discovery by OSU researchers resulted from an exhaustive 10-year examination of marine sediment cores recovered off southeast Alaska where geologic records of climate change provide an unusually detailed history of changing temperatures on a scale of decades to centuries over many thousands of years.

"Synchronization of two major ocean systems can amplify the transport of heat toward the polar regions and cause larger fluctuations in northern hemisphere climate," said Summer Praetorius, a doctoral student in marine geology at Oregon State and lead author on the Science paper. "This is consistent with theoretical predictions of what happens when Earth's climate reaches a tipping point."

"That doesn't necessarily mean that the same thing will happen in the future," she pointed out, "but we cannot rule out that possibility."

The study found that synchronization of the two regional systems began as climate was gradually warming. After synchronization, the researchers detected wild variability that amplified the changes and accelerated into an abrupt warming event of several degrees within a few decades.

"As the systems become synchronized, they organized and reinforced each other, eventually running away like screeching feedback from a microphone," said Alan Mix, a professor in OSU's College of Earth, Ocean, and Atmospheric Sciences and co-author on the paper. "Suddenly you had the combined effects of two major oceans forcing the climate instead of one at a time."

"The example that we uncovered is a cause for concern because many people assume that climate change will be gradual and predictable," Mix added. "But the study shows that there can be vast climate swings over a period of decades to centuries. If such a thing happened in the future, it could challenge society's ability to cope."

What made this study unusual is that the researchers had such a detailed look at the geologic record. While modern climate observations can be made every day, the length of instrumental records is relatively short – typically less than a century. In contrast, paleoclimatic records extend far into the past and give good context for modern changes, the researchers say. However, the resolution of most paleo records is low, limited to looking at changes that occur over thousands of years.

In this study, the researchers examined sediment cores taken from the Gulf of Alaska in 2004 during an expedition led by Mix. The mountains in the region are eroding so fast that sedimentation rates are "phenomenal," he said. "Essentially, this rapid sedimentation provides a 'climate tape recorder' at extremely high fidelity."

Praetorius then led an effort to look at past temperatures by slicing the sediment into decade-long chunks spanning more than 8,000 years – a laborious process that took years to complete. She measured ratios of oxygen isotopes trapped in fossil shells of marine plankton called foraminifera. The isotopes record the temperature and salinity of the water where the plankton lived.

When the foraminifera died, their shells sank to the sea floor and were preserved in the sediments that eventually were recovered by Mix's coring team.
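The temperature reconstruction described above can be sketched with a standard paleothermometry equation. This is a minimal illustration using the classic Shackleton (1974) calcite calibration; the study's actual calibration and isotope values may differ, and the input values below are hypothetical:

```python
# Sketch: converting foraminiferal oxygen-isotope values to water temperature.
# Shackleton (1974) calibration, used here purely for illustration.
def d18o_temperature(d18o_calcite, d18o_water):
    """Estimated temperature (deg C) from delta-18O of calcite and seawater (per mil)."""
    delta = d18o_calcite - d18o_water
    return 16.9 - 4.38 * delta + 0.10 * delta ** 2

# Hypothetical values: isotopically heavier shells record colder water.
cold = d18o_temperature(2.0, 0.0)   # higher delta-18O in the shell
warm = d18o_temperature(0.5, 0.0)   # lower delta-18O in the shell
print(f"cold estimate: {cold:.1f} C, warm estimate: {warm:.1f} C")
```

The key point is the inverse relationship: as shells become enriched in the heavy isotope, the inferred water temperature falls, which is how decade-by-decade slices of sediment become a temperature record.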

The researchers then compared their findings with data from the North Greenland Ice Core Project to see if the two distinct high-latitude climate systems were in any way related.

Most of the time, the two regions vary independently, but about 15,500 years ago, temperature changes started to line up and then both regions warmed abruptly by about five degrees (C) within just a few decades. Praetorius noted that much warmer ocean waters likely would have a profound effect on northern-hemisphere climates by melting sea ice, warming the atmosphere and destabilizing ice sheets over Canada and Europe.

A tipping point for climate change "may be crossed in an instant," Mix noted, "but the actual response of the Earth's system may play out over centuries or even thousands of years during a period of dynamic adjustment."

"Understanding those dynamics requires that we look at examples from the past," Mix said. "If we really do cross such a boundary in the future, we should probably take a long-term perspective and realize that change will become the new normal. It may be a wild ride."

Added Praetorius: "Our study does suggest that the synchronization of the two major ocean systems is a potential early warning system to begin looking for the tipping point."


Contacts and sources:
Summer Praetorius
Oregon State University

Four Billion-Year-Old Primordial Soup Chemistry In Cells Today

Parts of the primordial soup in which life arose have been maintained in our cells today according to scientists at the University of East Anglia.
Credit: www.flickr.com

Research published today in the Journal of Biological Chemistry reveals how cells in plants, yeast and very likely also in animals still perform ancient reactions thought to have been responsible for the origin of life – some four billion years ago.

The primordial soup theory suggests that life began in a pond or ocean as a result of the combination of metals, gases from the atmosphere and some form of energy, such as a lightning strike, to make the building blocks of proteins which would then evolve into all species.

The new research shows how small pockets of a cell – known as mitochondria – continue to perform similar reactions in our bodies today. These reactions involve iron, sulfur and electro-chemistry and are still important for functions such as respiration in animals and photosynthesis in plants.

Lead researcher Dr Janneke Balk, from UEA’s school of Biological Sciences and the John Innes Centre, said: “Cells confine certain bits of dangerous chemistry to specific compartments of the cell.

“For example small pockets of a cell called mitochondria deal with electrochemistry and also with toxic sulfur metabolism. These are very ancient reactions thought to have been important for the origin of life.

“Our research has shown that a toxic sulfur compound is being exported by a mitochondrial transport protein to other parts of the cell. We need sulfur for making iron-sulfur catalysts, again a very ancient chemical process.

“The work shows that parts of the primordial soup in which life arose have been maintained in our cells today, and are in fact harnessed to maintain important biological reactions.”

The research was carried out at UEA and JIC in collaboration with Dr Hendrik van Veen at the University of Cambridge. It was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).

‘A Conserved Mitochondrial ATP-Binding Cassette Transporter Exports Glutathione Polysulfide for Cytosolic Metal Cofactor Assembly’ is published in the Journal of Biological Chemistry.


Contacts and sources: 
Lisa Horton
University of East Anglia

Warning Of Early Stages Of Earth's 6th Mass Extinction Event

Stanford Biology Professor Rodolfo Dirzo and his colleagues warn that this "defaunation" could have harmful downstream effects on human health.

The planet's current biodiversity, the product of 3.5 billion years of evolutionary trial and error, is the highest in the history of life. But it may be reaching a tipping point.

Elephants and other large animals face an increased risk of extinction in what Stanford Biology Professor Rodolfo Dirzo terms "defaunation."

Credit: Claudia Paulussen/Shutterstock

In a new review of scientific literature and analysis of data published in Science, an international team of scientists cautions that the loss and decline of animals is contributing to what appears to be the early days of the planet's sixth mass biological extinction event.

Since 1500, more than 320 terrestrial vertebrates have become extinct. Populations of the remaining species show a 25 percent average decline in abundance. The situation is similarly dire for invertebrate animal life.

And while previous extinctions have been driven by natural planetary transformations or catastrophic asteroid strikes, the current die-off can be associated with human activity, a situation that the lead author, Rodolfo Dirzo, a professor of biology at Stanford, designates an era of "Anthropocene defaunation."

Across vertebrates, 16 to 33 percent of all species are estimated to be globally threatened or endangered. Large animals – described as megafauna and including elephants, rhinoceroses, polar bears and countless other species worldwide – face the highest rate of decline, a trend that matches previous extinction events.

Larger animals tend to have lower population growth rates and produce fewer offspring. They need larger habitat areas to maintain viable populations. Their size and meat mass make them easier and more attractive hunting targets for humans.

Although these species represent a relatively low percentage of the animals at risk, their loss would have trickle-down effects that could shake the stability of other species and, in some cases, even human health.

For instance, previous experiments conducted in Kenya have isolated patches of land from megafauna such as zebras, giraffes and elephants, and observed how an ecosystem reacts to the removal of its largest species. Rather quickly, these areas become overwhelmed with rodents. Grass and shrubs increase and the rate of soil compaction decreases. Seeds and shelter become more easily available, and the risk of predation drops.

Consequently, the number of rodents doubles – and so does the abundance of the disease-carrying ectoparasites that they harbor.

"Where human density is high, you get high rates of defaunation, high incidence of rodents, and thus high levels of pathogens, which increases the risks of disease transmission," said Dirzo, who is also a senior fellow at the Stanford Woods Institute for the Environment. "Who would have thought that just defaunation would have all these dramatic consequences? But it can be a vicious circle."

The scientists also detailed a troubling trend in invertebrate defaunation. Human population has doubled in the past 35 years; in the same period, the number of invertebrate animals – such as beetles, butterflies, spiders and worms – has decreased by 45 percent.

As with larger animals, the loss is driven primarily by loss of habitat and global climate disruption, and could have trickle-up effects in our everyday lives.

For instance, insects pollinate roughly 75 percent of the world's food crops, an estimated 10 percent of the economic value of the world's food supply. Insects also play a critical role in nutrient cycling and decomposing organic materials, which helps ensure ecosystem productivity. In the United States alone, the value of pest control by native predators is estimated at $4.5 billion annually.

Dirzo said that the solutions are complicated. Immediately reducing rates of habitat change and overexploitation would help, but these approaches need to be tailored to individual regions and situations. He said he hopes that raising awareness of the ongoing mass extinction – and not just of large, charismatic species – and its associated consequences will help spur change.

"We tend to think about extinction as loss of a species from the face of Earth, and that's very important, but there's a loss of critical ecosystem functioning in which animals play a central role that we need to pay attention to as well," Dirzo said. "Ironically, we have long considered that defaunation is a cryptic phenomenon, but I think we will end up with a situation that is non-cryptic because of the increasingly obvious consequences to the planet and to human wellbeing."

The coauthors on the report include Hillary S. Young, University of California, Santa Barbara; Mauro Galetti, Universidade Estadual Paulista in Brazil; Gerardo Ceballos, Universidad Nacional Autonoma de Mexico; Nick J.B. Isaac, of the Natural Environment Research Council Centre for Ecology and Hydrology in England; and Ben Collen, of University College London.

Contacts and sources:
Bjorn Carey
Stanford University

Moose Drool Inhibits Growth Of Toxic Fungus: York University Research

Some sticky research out of York University shows a surprisingly effective way to fight against a certain species of toxic grass fungus: moose saliva (yes… moose saliva).

Credit: Dawn Bazely

Published in this month’s Biology Letters, “Ungulate saliva inhibits a grass–endophyte mutualism” shows that moose and reindeer saliva, when applied to red fescue grass (which hosts a fungus called Epichloë festucae that produces the toxin ergovaline), results in slower fungus growth and less toxicity.

“Plants have evolved defense mechanisms to protect themselves, such as thorns, bitter-tasting berries, and in the case of certain types of grass, by harbouring toxic fungus deep within them that can be dangerous or even fatal for grazing animals,” says York U Biology Professor Dawn Bazely, who worked with University of Cambridge researcher Andrew Tanentzap and York U researcher Mark Vicari on the project. “We wanted to find out how moose were able to eat such large quantities of this grass without negative effects.”

Inspired by an earlier study that showed that moose grazing and saliva distribution can have a positive effect on plant growth, the research team set out to test an interesting hypothesis – whether moose saliva may, in fact, “detoxify” the grass before it is eaten.

Working in partnership with the Toronto Zoo, the team collected saliva samples from moose and reindeer, which they then smeared onto clipped samples of red fescue grass carrying the toxic fungus, simulating the effect of grazing. They found that the application of saliva produced rapid results, inhibiting fungus growth within 12-36 hours.

“We found that the saliva worked very quickly in slowing the growth of the fungus, and the fungus colonies,” says Bazely. “In addition, by applying multiple applications of saliva to the grass over the course of two months, we found we could lower the concentration of ergovaline between 41 and 70 per cent.”

Bazely says that because moose tend to graze within a defined home range, it’s possible that certain groups of plants are receiving repeated exposure to the moose saliva, which over time has resulted in fewer toxins within their preferred area.

“We know that animals can remember if certain plants have made them feel ill, and they may avoid these plants in future,” says Bazely. “This study provides the first evidence, to our knowledge, of herbivore saliva being shown to ‘fight back’ and slow down the growth of the fungus.”


Contacts and sources:
Robin Heron
York University

Million Year Old Stone Age Artifacts Found In Northern Cape Of South Africa

Excavations at an archaeological site at Kathu in the Northern Cape province of South Africa have produced tens of thousands of Earlier Stone Age artifacts, including hand axes and other tools. These discoveries were made by archaeologists from the University of Cape Town (UCT), South Africa and the University of Toronto (U of T), in collaboration with the McGregor Museum in Kimberley, South Africa.

Steven James Walker from the Department of Archaeology at UCT extracts a sample at the interface between the overlying red sands and the Earlier Stone Age archaeological deposits at the Kathu Townlands site. 

Credit: Vasa Lukich.

The archaeologists’ research on the Kathu Townlands site, one of the richest early prehistoric archaeological sites in South Africa, was published in the journal, PLOS ONE, on 24 July 2014. It is estimated that the site is between 700,000 and one million years old.

Steven James Walker from the Department of Archaeology at UCT, lead author of the journal paper, says: “The site is amazing and it is threatened. We’ve been working well with developers as well as the South African Heritage Resources Agency to preserve it, but the town of Kathu is rapidly expanding around the site. It might get cut off on all sides by development and this would be regrettable.”

Flakes and cores from Kathu Townlands, Beaumont Excavation.
A: Large flake off the edge of the core consistent with biface shaping removal.
B: Large flake with centripetal dorsal scars.
C: Blade, note that there is some cortex (indicated by C in the sketch) and that scars are not parallel.
D-F: Small flakes, note that F is off the edge of the core.
G: Discoidal core with removals off both faces. Break on one edge (upper edge in right view).
H: Discoidal core with one large flake removal. Note that on the right-hand face the working is unclear and it is possible that this is a natural surface.
Credit: Steven James Walker & et al.

Today, Kathu is a major iron mining centre. Walker adds that the fact that such an extensive prehistoric site is located in the middle of a zone of intensive development poses a unique challenge for archaeologists and developers to find strategies to work cooperatively.

Profiles from 2013 excavation.
A. Trench A: Square 1. Massive deposit of Banded Ironstone rubble and artefacts overlying bedrock in a sandy matrix. Note lack of bedding or sorting.
B. Trench I: Square 5. Shallow massive deposit of Banded Ironstone rubble and artefacts overlying bedrock with overlying deposits of sand.
C. Trench E: Square 3. Discrete calcrete nodule that developed near the interface of the rubble/artefact deposit and underlying bedrock. Note parallel bedding of the Ironstone within the calcrete nodule. Approximate width of image 50cm.
D. Trench J/K. Discrete nodular calcrete developing in the sand and into the underlying Banded Ironstone rubble. Does not exhibit parallel Ironstone bedding found in (c). Approximate width of images 50cm.


Credit: Steven James Walker & et al.

The Kathu Townlands site is one component of a grouping of prehistoric sites known as the Kathu Complex. Other sites in the complex include Kathu Pan 1 which has produced fossils of animals such as elephants and hippos, as well as the earliest known evidence of tools used as spears from a level dated to half a million years ago. 

Hand axes from surface collection. A-B. Banded Ironstone  C. Quartzite
Credit: Steven James Walker & et al.

Michael Chazan, Director of the Archaeology Centre at U of T, emphasizes the scientific challenge posed by the density of the traces of early human activity in this area.

“We need to imagine a landscape around Kathu that supported large populations of human ancestors, as well as large animals like hippos. All indications suggest that Kathu was much wetter, maybe more like the Okavango than the Kalahari. There is no question that the Kathu Complex presents unique opportunities to investigate the evolution of human ancestors in Southern Africa.”

New Mass Map Of A Distant Galaxy Cluster Is The Most Precise Yet

Astronomers using the NASA/ESA Hubble Space Telescope have mapped the mass within a galaxy cluster more precisely than ever before. Created using observations from Hubble's Frontier Fields observing programme, the map shows the amount and distribution of mass within MCS J0416.1-2403, a massive galaxy cluster found to be 160 trillion times the mass of the Sun. The detail in this mass map was made possible thanks to the unprecedented depth of data provided by new Hubble observations, and the cosmic phenomenon known as strong gravitational lensing.
 

This image from the NASA/ESA Hubble Space Telescope shows the galaxy cluster MCS J0416.1-2403, one of six being studied by the Hubble Frontier Fields programme. The programme seeks to analyse the mass distribution in these huge clusters and to use their gravitational lensing effect to peer even deeper into the distant Universe.
Credit: ESA/Hubble, NASA, HST Frontier Fields Acknowledgement: Mathilde Jauzac (Durham University, UK and Astrophysics & Cosmology Research Unit, South Africa) and Jean-Paul Kneib (École Polytechnique Fédérale de Lausanne, Switzerland)

Measuring the amount and distribution of mass within distant objects in the Universe can be very difficult. A trick often used by astronomers is to explore the contents of large clusters of galaxies by studying the gravitational effects they have on the light from very distant objects beyond them. This is one of the main goals of Hubble's Frontier Fields, an ambitious observing programme scanning six different galaxy clusters -- including MCS J0416.1-2403, the cluster shown in this stunning new image [1].

Large clumps of mass in the Universe warp and distort the space-time around them. Acting like lenses, they appear to magnify and bend light that travels through them from more distant objects [2].

Despite their large masses, the effect of galaxy clusters on their surroundings is usually quite minimal. For the most part they cause what is known as weak lensing, making even more distant sources appear as only slightly more elliptical or smeared across the sky. However, when the cluster is large and dense enough and the alignment of cluster and distant object is just right, the effects can be more dramatic. The images of normal galaxies can be transformed into rings and sweeping arcs of light, even appearing several times within the same image. This effect is known as strong lensing, and it is this phenomenon, seen around the six galaxy clusters targeted by the Frontier Fields programme, that has been used to map the mass distribution of MCS J0416.1-2403, using the new Hubble data.
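For a feel for the angular scales involved, the size of strong-lensing arcs can be roughed out with the point-mass Einstein-radius formula, theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)). The sketch below is a toy calculation only: the mass and distances are illustrative stand-ins (not the measured values for MCS J0416.1-2403), and the lens-source distance is crudely taken as a flat-space difference rather than a proper cosmological angular-diameter distance.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec in metres

def einstein_radius_arcsec(mass_solar, d_lens_mpc, d_source_mpc):
    """Einstein radius of a point-mass lens, in arcseconds.

    theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)); here D_ls is
    crudely approximated as D_s - D_l (a toy flat-space shortcut).
    """
    d_l = d_lens_mpc * MPC
    d_s = d_source_mpc * MPC
    d_ls = d_s - d_l
    theta = math.sqrt(4 * G * mass_solar * M_SUN / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta) * 3600  # radians -> arcseconds

# Illustrative numbers: a 1.6e14 solar-mass clump lensing a much more
# distant background galaxy. Cluster-scale lenses give arcs of tens of
# arcseconds, which is why Hubble can resolve them so well.
print(f"{einstein_radius_arcsec(1.6e14, 1400, 3000):.1f} arcsec")
```

The point of the sketch is only the scaling: more mass, or a more favourable lens/source geometry, pushes the lensed images farther from the cluster centre.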

"The depth of the data lets us see very faint objects and has allowed us to identify more strongly lensed galaxies than ever before," explains Mathilde Jauzac of Durham University, UK, and Astrophysics & Cosmology Research Unit, South Africa, lead author of the new Frontier Fields paper. "Even though strong lensing magnifies the background galaxies they are still very far away and very faint. The depth of these data means that we can identify incredibly distant background galaxies. We now know of more than four times as many strongly lensed galaxies in the cluster than we did before."

Using Hubble's Advanced Camera for Surveys, the astronomers identified 51 new multiply imaged galaxies around the cluster, quadrupling the number found in previous surveys and bringing the total of lensed galaxies to 68. Because these galaxies are seen several times, this equates to almost 200 individual strongly lensed images across the frame. This effect has allowed Jauzac and her colleagues to calculate the distribution of visible and dark matter in the cluster and produce a highly constrained map of its mass [3].

"Although we've known how to map the mass of a cluster using strong lensing for more than twenty years, it's taken a long time to get telescopes that can make sufficiently deep and sharp observations, and for our models to become sophisticated enough for us to map, in such unprecedented detail, a system as complicated as MCS J0416.1-2403," says team member Jean-Paul Kneib.

By studying 57 of the most reliably and clearly lensed galaxies, the astronomers modelled the mass of both normal and dark matter within MCS J0416.1-2403. "Our map is twice as good as any previous models of this cluster!" adds Jauzac.

The total mass within MCS J0416.1-2403 -- modelled over a region more than 650,000 light-years across -- was found to be 160 trillion times the mass of the Sun. This measurement is several times more precise than for any other cluster, making it the most precise cluster mass map ever produced [4]. By precisely pinpointing where the mass resides within clusters like this one, the astronomers are also measuring the warping of space-time with high precision.

"Frontier Fields' observations and gravitational lensing techniques have opened up a way to very precisely characterise distant objects -- in this case a cluster so far away that its light has taken four and a half billion years to reach us," adds Jean-Paul Kneib. "But, we will not stop here. To get a full picture of the mass we need to include weak lensing measurements too. Whilst it can only give a rough estimate of the inner core mass of a cluster, weak lensing provides valuable information about the mass surrounding the cluster core."

The team will continue to study the cluster using ultra-deep Hubble imaging and detailed strong and weak lensing information to map the outer regions of the cluster as well as its inner core, and will thus be able to detect substructures in the cluster's surroundings. They will also take advantage of X-ray measurements of hot gas and spectroscopic redshifts to map the contents of the cluster, evaluating the respective contribution of dark matter, gas and stars [5].

Combining these sources of data will further enhance the detail of this mass distribution map, showing it in 3D and including the relative velocities of the galaxies within it. This paves the way to understanding the history and evolution of this galaxy cluster.


Notes:

[1] The cluster is also known as MACS J0416.1-2403.

[2] The warping of space-time by large objects in the Universe was one of the predictions of Albert Einstein's theory of general relativity.

[3] Gravitational lensing is one of the few methods astronomers have to find out about dark matter. Dark matter, which makes up around three quarters of all matter in the Universe, cannot be seen directly as it does not emit or reflect any light, and can pass through other matter without friction (it is collisionless). It interacts only by gravity, and its presence must be deduced from its gravitational effects.

[4] The uncertainty on the measurement is only around 0.5%, or about 1 trillion times the mass of the Sun. That may not sound precise, but it is exceptionally so for a measurement of this kind.

[5] NASA's Chandra X-ray Observatory was used to obtain X-ray measurements of hot gas in the cluster and ground based observatories provide the data needed to measure spectroscopic redshifts.


The results of the study will be published online (mnras.oxfordjournals.org/lookup/doi/10.1093/mnras/stu) in Monthly Notices of the Royal Astronomical Society on 24 July 2014.

Hubble Finds Three Surprisingly Dry Exoplanets

Astronomers using NASA's Hubble Space Telescope have gone looking for water vapor in the atmospheres of three planets orbiting stars similar to the Sun — and have come up nearly dry.

The three planets, HD 189733b, HD 209458b, and WASP-12b, are between 60 and 900 light-years away. These giant gaseous worlds are so hot, with temperatures between 1,500 and 4,000 degrees Fahrenheit, that they are ideal candidates for detecting water vapor in their atmospheres.
This is an artist's illustration of the gas giant planet HD 209458b (unofficially named Osiris), located 150 light-years away in the constellation Pegasus. A "hot Jupiter" class planet estimated to be 220 times the mass of Earth, it has a seething 2,150-degree-Fahrenheit atmosphere. It orbits very close to its bright Sun-like star, and the orbit is tilted edge-on to Earth. This makes the planet an ideal candidate for the Hubble Space Telescope to make precise measurements of the chemical composition of the giant's atmosphere as starlight filters through it. To the surprise of astronomers, they have found much less water vapor in the atmosphere than standard planet-formation models predict.


However, to the surprise of the researchers, the planets surveyed have only one-tenth to one one-thousandth the amount of water predicted by standard planet-formation theories.

"Our water measurement in one of the planets, HD 209458b, is the highest-precision measurement of any chemical compound in a planet outside the solar system, and we can now say with much greater certainty than ever before that we've found water in an exoplanet," said Dr. Nikku Madhusudhan of the Institute of Astronomy at the University of Cambridge, United Kingdom, who led the research. "However, the low water abundance we are finding is quite astonishing."

Madhusudhan said that this finding presents a major challenge to exoplanet theory. "It basically opens a whole can of worms in planet formation. We expected all these planets to have lots of water in them. We have to revisit planet formation and migration models of giant planets, especially 'hot Jupiters', and investigate how they're formed."

He emphasizes that these results, though found in these large hot planets close to their parent stars, may have major implications for the search for water in potentially habitable Earth-sized exoplanets. Instruments on future space telescopes may need to be designed with a higher sensitivity if target planets are drier than predicted. "We should be prepared for much lower water abundances than predicted when looking at super-Earths (rocky planets that are several times the mass of Earth)," Madhusudhan said.

Using near-infrared spectra of the planets observed with Hubble, Madhusudhan and his collaborators from the Space Telescope Science Institute, Baltimore, Maryland; the University of Maryland, College Park, Maryland; the Johns Hopkins University, Baltimore, Maryland; and the Dunlap Institute at the University of Toronto, Ontario, Canada, estimated the amount of water vapor in the planetary atmospheres based on sophisticated computer models and statistical techniques to explain the data.

The planets were selected because they orbit relatively bright stars that provide enough radiation for an infrared-light spectrum to be taken. Absorption features from the water vapor in the planet's atmosphere are superimposed on the small amount of starlight that glances through the planet's atmosphere.
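The size of that superimposed signal can be estimated from the transit geometry: the planet blocks a fraction (R_p/R_star)^2 of the starlight, and at wavelengths where the atmosphere absorbs, the planet effectively looks a few scale heights larger. The sketch below uses assumed, illustrative values for a generic hot Jupiter -- not the measured parameters of HD 189733b, HD 209458b, or WASP-12b.

```python
# Toy transit-spectroscopy estimate with invented hot-Jupiter numbers:
# how much deeper a transit looks in an atmospheric absorption band.
R_SUN = 6.957e8      # solar radius, metres
R_JUP = 7.149e7      # Jupiter radius, metres

r_star = 1.2 * R_SUN          # assumed stellar radius
r_planet = 1.35 * R_JUP       # assumed planetary radius
scale_height = 5.0e5          # assumed atmospheric scale height, metres

# Fraction of starlight blocked outside any absorption band.
depth = (r_planet / r_star) ** 2

# In a strong water band, the atmosphere is opaque out to a few scale
# heights above the nominal radius, so the transit is slightly deeper.
depth_in_band = ((r_planet + 5 * scale_height) / r_star) ** 2

print(f"continuum transit depth: {depth * 100:.3f}%")
print(f"extra depth in band: {(depth_in_band - depth) * 1e6:.0f} ppm")
```

The atmospheric signal comes out at a few hundred parts per million on top of a roughly one-percent transit, which is why these measurements demand Hubble-class photometric stability.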

Detecting water is almost impossible for transiting planets from the ground because Earth's atmosphere has a lot of water in it that contaminates the observation. "We really need the Hubble Space Telescope to make such observations," said Nicolas Crouzet of the Dunlap Institute at the University of Toronto and co-author of the study.
This graph compares observations with modeled infrared spectra of three hot-Jupiter-class exoplanets that were spectroscopically observed with the Hubble Space Telescope. The red curve in each case is the best-fit model spectrum for the detection of water vapor absorption in the planetary atmosphere. The blue circles and error bars show the processed and analyzed data from Hubble's spectroscopic observations.

Credit: NASA, ESA, N. Madhusudhan (University of Cambridge), and A. Feild and G. Bacon (STScI)

The currently accepted theory on how giant planets in our solar system formed is known as core accretion, in which a planet is formed around the young star in a protoplanetary disk made primarily of hydrogen, helium, and particles of ices and dust composed of other chemical elements. The dust particles stick to each other, eventually forming larger and larger grains. The gravitational forces of the disk draw in these grains and larger particles until a solid core forms. This core then leads to runaway accretion of both solids and gas to eventually form a giant planet.

This theory predicts that the proportions of the different elements in the planet are enhanced relative to those in its star, with oxygen expected to be the most enhanced of all. Once the giant planet forms, its atmospheric oxygen is expected to be bound up largely in water molecules. The very low levels of water vapor found by this research therefore raise a number of questions about the chemical ingredients that go into planet formation, the researchers say.

"There are so many things we still don't know about exoplanets, so this opens up a new chapter in understanding how planets and solar systems form," said Drake Deming of the University of Maryland, who led one of the precursor studies. "The problem is that we are assuming the water to be as abundant as in our own solar system. What our study has shown is that water features could be a lot weaker than our expectations."

The findings are being published on July 24 in The Astrophysical Journal Letters.


Contacts and sources:
Ray Villard
Space Telescope Science Institute, Baltimore, Md.

Nikku Madhusudhan
Institute of Astronomy, University of Cambridge, United Kingdom

Satellite Study Reveals Parched U.S. West Using Up Underground Water

A new study by NASA and University of California, Irvine, scientists finds more than 75 percent of the water loss in the drought-stricken Colorado River Basin since late 2004 came from underground resources. The extent of groundwater loss may pose a greater threat to the water supply of the western United States than previously thought.

The Colorado River Basin lost nearly 53 million acre feet of freshwater over the past nine years, according to a new study based on data from NASA’s GRACE mission. This is almost double the volume of the nation's largest reservoir, Nevada's Lake Mead (pictured).
Image Credit: U.S. Bureau of Reclamation

This study is the first to quantify the amount that groundwater contributes to the water needs of western states. According to the U.S. Bureau of Reclamation, the federal water management agency, the basin has been suffering from prolonged, severe drought since 2000 and has experienced the driest 14-year period in the last hundred years.

The research team used data from NASA's Gravity Recovery and Climate Experiment (GRACE) satellite mission to track changes in the mass of the Colorado River Basin, which are related to changes in water amount on and below the surface. Monthly measurements of the change in water mass from December 2004 to November 2013 revealed the basin lost nearly 53 million acre feet (65 cubic kilometers) of freshwater, almost double the volume of the nation's largest reservoir, Nevada's Lake Mead. More than three-quarters of the total -- about 41 million acre feet (50 cubic kilometers) -- was from groundwater.
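The reported figures are easy to cross-check with the standard conversion of 1 acre-foot ≈ 1,233.48 cubic metres:

```python
# Sanity check on the volumes reported in the study.
ACRE_FOOT_M3 = 1233.48  # one acre-foot in cubic metres

def acre_feet_to_km3(acre_feet):
    """Convert a volume in acre-feet to cubic kilometres."""
    return acre_feet * ACRE_FOOT_M3 / 1e9

total = acre_feet_to_km3(53e6)        # total basin freshwater loss
groundwater = acre_feet_to_km3(41e6)  # groundwater portion of the loss

print(f"total loss: {total:.0f} km^3")         # ~65 km^3, as reported
print(f"groundwater loss: {groundwater:.0f} km^3")
print(f"groundwater share: {groundwater / total:.0%}")
```

The groundwater share works out to about 77 percent, matching the study's "more than three-quarters" figure.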

The Colorado River Basin (black outline) supplies water to about 40 million people in seven states. Major cities outside the basin (red shading) also use water from the Colorado River.
Image Credit: U.S. Bureau of Reclamation

"We don't know exactly how much groundwater we have left, so we don't know when we're going to run out," said Stephanie Castle, a water resources specialist at the University of California, Irvine, and the study's lead author. "This is a lot of water to lose. We thought that the picture could be pretty bad, but this was shocking."

Water above ground in the basin's rivers and lakes is managed by the U.S. Bureau of Reclamation, and its losses are documented. Pumping from underground aquifers is regulated by individual states and is often not well documented.

"There's only one way to put together a very large-area study like this, and that is with satellites," said senior author Jay Famiglietti, senior water cycle scientist at JPL on leave from UC Irvine, where he is an Earth system science professor. "There's just not enough information available from well data to put together a consistent, basin-wide picture."

Famiglietti said GRACE is like having a giant scale in the sky. Within a given region, the change in mass due to rising or falling water reserves influences the strength of the local gravitational attraction. By periodically measuring gravity regionally, GRACE reveals how much a region's water storage changes over time.

The Colorado River is the only major river in the southwestern United States. Its basin supplies water to about 40 million people in seven states, as well as irrigating roughly four million acres of farmland.

"The Colorado River Basin is the water lifeline of the western United States," said Famiglietti. "With Lake Mead at its lowest level ever, we wanted to explore whether the basin, like most other regions around the world, was relying on groundwater to make up for the limited surface-water supply. We found a surprisingly high and long-term reliance on groundwater to bridge the gap between supply and demand."

Famiglietti noted that the rapid depletion rate will compound the problem of short supply by leading to further declines in streamflow in the Colorado River.

"Combined with declining snowpack and population growth, this will likely threaten the long-term ability of the basin to meet its water allocation commitments to the seven basin states and to Mexico," Famiglietti said.

The study has been accepted for publication in the journal Geophysical Research Letters, which posted the manuscript online Thursday. Coauthors included other scientists from NASA's Goddard Space Flight Center, Greenbelt, Maryland, and the National Center for Atmospheric Research, Boulder, Colorado. The research was funded by NASA and the University of California.

For more information on NASA's GRACE satellite mission, see: http://www.nasa.gov/grace and http://www.csr.utexas.edu/grace

GRACE is a joint mission with the German Aerospace Center and the German Research Center for Geosciences, in partnership with the University of Texas at Austin. JPL developed the GRACE spacecraft and manages the mission for NASA's Science Mission Directorate, Washington.

NASA monitors Earth's vital signs from land, air and space with a fleet of satellites and ambitious airborne and ground-based observation campaigns. NASA develops new ways to observe and study Earth's interconnected natural systems with long-term data records and computer analysis tools to better see how our planet is changing. The agency shares this unique knowledge with the global community and works with institutions in the United States and around the world that contribute to understanding and protecting our home planet.





Contacts and sources:
Alan Buis
Jet Propulsion Laboratory, Pasadena, Calif.
NASA

Artificial Intelligence Analyzes Beatles' Music Transformations


Music fans and critics know that the music of the Beatles underwent a dramatic transformation in just a few years, but until now there hasn’t been a scientific way to measure the progression. That could change now that computer scientists at Lawrence Technological University have developed an artificial intelligence algorithm that can analyze and compare musical styles, enabling research into the musical progression of the Beatles.


Assistant Professor Lior Shamir and graduate student Joe George had previously developed audio analysis technology to study the vocal communication of whales, and they expanded the algorithm to analyze the albums of the Beatles and other well-known bands such as Queen, U2, ABBA and Tears for Fears. The study, published in the August issue of the journal Pattern Recognition Letters, demonstrates scientifically that the structure of the Beatles music changes progressively from one album to the next.

The algorithm works by first converting each song to a spectrogram – a visual representation of the audio content. That turns an audio analysis task into an image analysis problem, which is solved by applying comprehensive algorithms that turn each music spectrogram into a set of almost 3,000 numeric descriptors reflecting visual aspects such as textures, shapes and the statistical distribution of the pixels. Pattern recognition and statistical methods are then used to detect and quantify the similarities between different pieces of music.
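That pipeline -- audio to spectrogram, spectrogram to numeric descriptors, descriptors to pairwise similarity -- can be sketched in a few lines. This is a minimal stand-in, not the study's actual method: the real system computed nearly 3,000 image descriptors per spectrogram, whereas the toy version below reduces each signal to just four numbers and compares synthetic "songs" rather than Beatles recordings. It assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.signal import spectrogram

def descriptors(audio, rate):
    """Reduce a mono signal to a small numeric descriptor vector
    (a toy stand-in for the ~3,000 image descriptors in the study)."""
    freqs, _, sxx = spectrogram(audio, fs=rate, nperseg=1024, window="hann")
    sxx = np.log1p(sxx)                      # compress dynamic range
    spectrum = sxx.mean(axis=1)              # average spectrum over time
    centroid = (freqs * spectrum).sum() / spectrum.sum()
    return np.array([centroid, sxx.mean(), sxx.std(), sxx.max()])

def dissimilarity(a, b):
    """Euclidean distance between descriptor vectors."""
    return float(np.linalg.norm(a - b))

# Demo with synthetic 'songs': two nearby pure tones and one noise signal.
rate = 8000
t = np.arange(rate * 2) / rate
tone_a = np.sin(2 * np.pi * 440 * t)
tone_b = np.sin(2 * np.pi * 450 * t)
noise = np.random.default_rng(0).normal(size=t.size)

da, db, dn = (descriptors(s, rate) for s in (tone_a, tone_b, noise))
print(dissimilarity(da, db) < dissimilarity(da, dn))  # similar signals score closer
```

In the study, song-level distances like these were aggregated per album, which is what let the algorithm place whole albums in order of recording.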

In popular music, albums are widely considered milestones in the stylistic development of music artists, and these collections of songs provide a convenient unit for establishing measurements to quantify a band’s progression.

LTU's study analyzed 11 songs from each of the 13 Beatles studio albums released in Great Britain, and quantified the similarities between each song and all the others in the study. The results for the individual songs were then used to compare the similarities between the albums.

The automatic placement of the albums by the algorithm was in agreement with the chronological order of the recording of each album, starting with the Beatles’ first album, “Please Please Me,” followed by the subsequent early albums, “With the Beatles,” “Beatles for Sale” and “A Hard Day’s Night.”
The automatic association of these albums demonstrated that the computer algorithm judged the songs on the first album, “Please Please Me,” to be most like the group of songs on the second album, “With the Beatles,” and least like the songs on the last album recorded, “Abbey Road.”

The algorithm then placed the albums “Help!,” and “Rubber Soul,” followed by “Revolver,” “Sergeant Pepper’s Lonely Hearts Club Band,” “Magical Mystery Tour,” “Yellow Submarine,” and “The Beatles” (The White Album).
“Let It Be” was the last album released by the Beatles, but the algorithm correctly identified those songs as having been recorded earlier than the songs on “Abbey Road.”

“People who are not Beatles fans normally can’t tell that ‘Help!’ was recorded before ‘Rubber Soul,’ but the algorithm can,” Shamir said. “This experiment demonstrates that artificial intelligence can identify the changes and progression in musical styles by ‘listening’ to popular music albums in a completely new way.”

The computer algorithm was able to deduce the chronological order of the albums of the other groups in the study by analyzing the audio data alone – with one notable exception. Strong similarities were identified between two Tears for Fears albums released 15 years apart. That makes sense because “Seeds of Love,” released in 1989, was the last album before the band’s breakup, and “Everybody Loves a Happy Ending,” released in 2004, was recorded after the band reunited. Those two albums had less in common with two solo albums released by Roland Orzabal, the group’s principal songwriter, after the band split up in 1991.

In the case of Queen, the computer not only sorted the albums into chronological order, but also distinguished between albums recorded before and after “Hot Space,” which marked a major shift in Queen’s musical style.

In this era of big data, such algorithms can assist in searching, browsing, and organizing large music databases, as well as identifying music that matches an individual listener’s musical preferences.

In the case of the Beatles, Shamir believes this type of research will have historical significance. “The baby boomers loved the music of the Beatles, I love the Beatles, and now my daughters and their friends love the Beatles. Their music will live on for a very long time,” Shamir said. “It is worthwhile to study what makes their music so distinctive, and computer science and big data can help.”


Contacts and sources:

Wednesday, July 23, 2014

Voyager Spacecraft Might Not Have Reached Interstellar Space

In 2012, the Voyager mission team announced that the Voyager 1 spacecraft had passed into interstellar space, traveling further from Earth than any other manmade object.

But, in the nearly two years since that historic announcement, and despite subsequent observations backing it up, uncertainty about whether Voyager 1 really crossed the threshold continues. There are some scientists who say that the spacecraft is still within the heliosphere – the region of space dominated by the Sun and its wind of energetic particles – and has not yet reached the space between the stars.

Now, two Voyager team scientists have developed a test that they say could prove once and for all if Voyager 1 has crossed the boundary. The new test is outlined in a study accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.

The scientists predict that, in the next two years, Voyager 1 will cross the current sheet – the sprawling surface within the heliosphere where the polarity of the sun’s magnetic field changes from plus to minus. The spacecraft will detect a reversal in the magnetic field, proving that it is still within the heliosphere. But, if the magnetic field reversal doesn’t happen in the next year or two as expected, that is confirmation that Voyager 1 has already passed into interstellar space.
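In data terms, the proposed test amounts to watching the radial component of the magnetometer time series for a sustained sign change. A toy detector is sketched below; the data are synthetic stand-ins for Voyager measurements, and the smoothing window is an arbitrary choice, not a mission parameter.

```python
import numpy as np

def crossed_current_sheet(b_radial, window=30):
    """Return True if the smoothed radial field component changes sign
    anywhere in the series (a toy stand-in for a polarity reversal).

    Smoothing suppresses short-lived fluctuations so that a single
    noisy sample does not count as a sector-boundary crossing.
    """
    smooth = np.convolve(b_radial, np.ones(window) / window, mode="valid")
    return bool((np.sign(smooth[:-1]) != np.sign(smooth[1:])).any())

rng = np.random.default_rng(1)
steady = 0.2 + 0.05 * rng.normal(size=400)               # polarity never flips
flipped = np.concatenate([steady[:200], -steady[200:]])  # reversal halfway through

print(crossed_current_sheet(steady))   # same magnetic sector throughout
print(crossed_current_sheet(flipped))  # current-sheet crossing detected
```

If a reversal like the second case shows up in Voyager 1's data, the spacecraft is still inside the heliosphere; if the predicted window passes with no reversal, that supports the interstellar interpretation.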

The heliosphere, in which the Sun and planets reside, is a large bubble inflated from the inside by the high-speed solar wind blowing out from the Sun. Pressure from the solar wind, along with pressure from the surrounding interstellar medium, determines the size and shape of the heliosphere. The supersonic flow of solar wind abruptly slows at the termination shock, the innermost boundary of the solar system. The edge of the solar system is the heliopause. The bow shock pushes ahead through the interstellar medium as the heliosphere plows through the galaxy.

Credit: Southwest Research Institute

“The proof is in the pudding,” said George Gloeckler, a professor in atmospheric, oceanic and space sciences at the University of Michigan in Ann Arbor and lead author of the new study.

Gloeckler has worked on the Voyager mission since 1972 and has been a vocal opponent of the view that Voyager 1 has entered interstellar space. He said that, although the spacecraft has observed many of the signs indicating it may have reached interstellar space, like cosmic rays, Voyager 1 did not see a change in magnetic field that many were expecting.

“This controversy will continue until it is resolved by measurements,” Gloeckler said.

If the new prediction is right, “this will be the highlight of my life,” he said. “There is nothing more gratifying than when you have a vision or an idea and you make a prediction and it comes true.”

The Voyager 1 and 2 spacecraft were launched in 1977 to study Jupiter and Saturn. The mission has since been extended to explore the outermost limits of the Sun’s influence and beyond. Voyager 2, which also flew by Uranus and Neptune, is on its way to interstellar space.

Gloeckler and co-author, Len Fisk, also a professor in atmospheric, oceanic and space sciences at the University of Michigan, are basing their new test on a model they developed and published earlier this year in The Astrophysical Journal. The model assumes that the solar wind is slowing down and, as a result, that the solar wind can be compressed. Based on this assumption, the study says Voyager 1 is moving faster than the outward flow of the solar wind and will encounter current sheets where the polarity of the magnetic field will reverse, proving that the spacecraft has not yet left the heliosphere. The scientists predict this reversal will most likely happen during 2015, based on observations made by Voyager 1.

“If that happens, I think if anyone still believes Voyager 1 is in the interstellar medium, they will really have something to explain,” Gloeckler said. “It is a signature that can’t be missed.”

Ed Stone of the California Institute of Technology in Pasadena and NASA’s Voyager Project Scientist said in a statement that “It is the nature of the scientific process that alternative theories are developed in order to account for new observations. This paper differs from other models of the solar wind and the heliosphere and is among the new models that the Voyager team will be studying as more data are acquired by Voyager.”

Alan Cummings, a senior research scientist at the California Institute of Technology in Pasadena and a co-investigator on the Voyager mission, believes Voyager 1 has most likely crossed into interstellar space, but he said there is a possibility that Gloeckler and Fisk are right and the spacecraft is still in the heliosphere. He said that if Voyager 1 experiences a current sheet crossing like the one proposed in the new study, it could also mean that the heliosphere has expanded outward and swept past the spacecraft again.

“If the magnetic field had cooperated, I don’t think we’d be having this discussion,” Cummings said. “This is a puzzle. It is very reasonable to explore alternate explanations. We don’t understand everything that happened out there.”

Stephen Fuselier, director of the space science department at the Southwest Research Institute in San Antonio, Texas, who is not involved with the research and is not on the Voyager 1 team, said the scientists have come up with a good test to prove once and for all whether Voyager 1 has crossed into interstellar space. However, he does not agree with the assumption the paper makes about how fast the solar wind is moving. But, he said, there is no way to measure this flow velocity, and if Gloeckler and Fisk’s assumptions are correct, the model makes sense and Voyager 1 could still be inside the heliosphere.

This artist’s concept shows the Voyager 1 spacecraft entering the space between stars. The Voyager mission team announced in 2012 that the Voyager 1 spacecraft had passed into interstellar space, but some scientists say it is still within the heliosphere – the region of space dominated by the Sun and its wind of energetic particles. In a new study, two Voyager team scientists propose a test that they say could prove once and for all whether Voyager 1 has crossed the boundary.

Credit: NASA/JPL-Caltech

“I applaud them for coming out with a bold prediction,” said Fuselier, who works on the Interstellar Boundary Explorer mission that is examining the boundary between the solar wind and the interstellar medium. “If they are right, they are heroes. If they are wrong, though, it is important for the community to understand why … If they are wrong, then that must mean that one or more of their assumptions is incorrect, and we as a community have to understand which it is.”

Fuselier, who believes Voyager 1 has entered interstellar space, said he will reserve judgment on whether Gloeckler and Fisk are correct until 2016. He said there is a sizeable fraction of the space community that is skeptical that Voyager 1 has entered interstellar space, but the newly proposed test could help end that debate. Another good test will come when Voyager 2 crosses into interstellar space in the coming years, Fuselier and Cummings said.

“If you go back 10 years and talk to the Voyager people, they would have told you 10 years ago that what they would see upon exiting the heliosphere is very, very different from what they are seeing now,” Fuselier said. “We are just loaded down with surprises and this might be one of them.”


Contacts and sources:
Peter Weiss
American Geophysical Union 

Citation: “A test for whether or not Voyager 1 has crossed the heliopause.” Authors: G. Gloeckler and L. A. Fisk, Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, Michigan, USA.

Model Of Titan's Atmosphere

A researcher from MIPT, Prof. Vladimir Krasnopolsky, who heads the Laboratory of High Resolution Infrared Spectroscopy of Planetary Atmospheres, has published the results of the comparison of his model of Titan’s atmosphere with the latest data.

The article, in the journal Icarus, compares the chemical composition of Titan’s atmosphere with the parameters predicted by a mathematical model. The atmosphere of Saturn’s largest moon was described by a model accounting for 83 neutral molecules, 33 ions and 420 different chemical reactions between them. Although Titan lies much farther from the Sun than Earth does and receives about 100 times less solar radiation, the intensity of UV rays is still enough to spur photochemical reactions in the upper layers of Titan’s atmosphere.
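Structurally, a photochemical model of this kind is a large system of coupled rate equations, one per species. A two-species toy version shows the shape of the calculation: a photolysis source feeds a product that is slowly lost. The rate constants below are invented for illustration and are not real Titan values.

```python
from scipy.integrate import solve_ivp

# Toy photochemical system (invented rates, not Titan's):
#   CH4 --(UV photolysis, rate J)--> P        source of product P
#   P   --(loss, rate K)--> removed           condensation / escape sink
J = 1e-3   # photolysis rate, 1/s (assumed)
K = 5e-4   # product loss rate, 1/s (assumed)

def rates(_, y):
    """Right-hand side of the coupled rate equations d[X]/dt."""
    ch4, p = y
    return [-J * ch4, J * ch4 - K * p]

# Integrate from an initial state of pure CH4 (normalised abundance 1).
sol = solve_ivp(rates, (0.0, 5000.0), [1.0, 0.0])
ch4_end, p_end = sol.y[:, -1]

print(f"CH4 remaining: {ch4_end:.3f}")
print(f"product abundance: {p_end:.3f}")
```

A full atmospheric model couples hundreds of such equations, adds altitude-dependent UV flux and transport, and is solved the same way: integrate the rate equations and compare the steady-state abundances with what instruments measure.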

The data on the composition of Titan’s atmosphere, which near the surface is 1.6 times denser than Earth’s air, were obtained from several sources, chief among them the Cassini orbiter. Cassini is equipped with a number of instruments, including ultraviolet and infrared spectrometers and equipment for studying ions. Over ten years in orbit around Saturn, a plasma instrument suite and a mass spectrometer designed specifically for this research gathered enough data to compare with mathematical models.

Titan’s atmosphere 
Image from the Cassini orbiter

In addition to Cassini, part of the data was obtained using the IRAM ground-based submillimeter telescope and the Herschel infrared space observatory. Data on the distribution of aerosol particles in Titan’s atmosphere came from the Huygens capsule, the first probe ever to land on Titan, which also sent back the first photos of its surface.

Comparing these data with the previously developed model, Krasnopolsky showed that the theoretical description of Titan’s atmosphere matches reality quite accurately. Some discrepancies remain, but they are caused by inevitable measurement errors; so far the concentrations of many substances are only approximate. The most important thing is not an exact match of specific parameters but the correctness of the general model of chemical processes.

“The coherence of the model with reality means that we can correctly tell where different substances go from Titan’s ionosphere and where they come from,” Krasnopolsky said.

Krasnopolsky is considered a leading global expert on the atmospheres of solar system bodies. He has participated in the creation of spectrometers for a variety of spacecraft, including the legendary Voyagers and the first Soviet interplanetary probes.


Contacts and sources: 
Prof. Vladimir Krasnopolsky
 MIPT 

SETI Targeting Alien Polluters With New Approach

Humanity is on the threshold of being able to detect signs of alien life on other worlds. By studying exoplanet atmospheres, we can look for gases like oxygen and methane that only coexist if replenished by life. But those gases come from simple life forms like microbes. What about advanced civilizations? Would they leave any detectable signs?

They might, if they spew industrial pollution into the atmosphere. New research by theorists at the Harvard-Smithsonian Center for Astrophysics (CfA) shows that we could spot the fingerprints of certain pollutants under ideal conditions. This would offer a new approach in the search for extraterrestrial intelligence (SETI).  
Credit: Harvard-Smithsonian Center for Astrophysics 

"We consider industrial pollution as a sign of intelligent life, but perhaps civilizations more advanced than us, with their own SETI programs, will consider pollution as a sign of unintelligent life since it's not smart to contaminate your own air," says Harvard student and lead author Henry Lin.

"People often refer to ETs as 'little green men,' but the ETs detectable by this method should not be labeled 'green' since they are environmentally unfriendly," adds Harvard co-author Avi Loeb.

The team, which also includes Smithsonian scientist Gonzalo Gonzalez Abad, finds that the upcoming James Webb Space Telescope (JWST) should be able to detect two kinds of chlorofluorocarbons (CFCs) -- ozone-destroying chemicals used in solvents and aerosols. They calculated that JWST could tease out the signal of CFCs if atmospheric levels were 10 times those on Earth. A particularly advanced civilization might intentionally pollute the atmosphere to high levels and globally warm a planet that is otherwise too cold for life.

There is one big caveat to this work. JWST can only detect pollutants on an Earth-like planet circling a white dwarf star, which is what remains when a star like our Sun dies. That scenario would maximize the atmospheric signal. Finding pollution on an Earth-like planet orbiting a Sun-like star would require an instrument beyond JWST -- a next-next-generation telescope.
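The white-dwarf caveat follows from simple geometry. In transit spectroscopy, the atmospheric signal scales roughly as the area of a one-scale-height annulus around the planet divided by the area of the stellar disk, so shrinking the host star from solar size to roughly Earth size (typical for a white dwarf) boosts the signal enormously. A back-of-envelope sketch (my own illustration, not the paper's calculation):

```python
R_EARTH = 6.371e6   # m
R_SUN = 6.957e8     # m
H = 8.5e3           # Earth-like atmospheric scale height, m

def transit_signal(r_planet, r_star, scale_height):
    """Fractional transit-depth change from one atmospheric scale height."""
    return 2 * r_planet * scale_height / r_star**2

sun_like = transit_signal(R_EARTH, R_SUN, H)
white_dwarf = transit_signal(R_EARTH, R_EARTH, H)  # white dwarf ~ Earth-sized
print(white_dwarf / sun_like)   # ~1.2e4: why a white-dwarf host maximizes the signal
```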

The team notes that a white dwarf might be a better place to look for life than previously thought, since recent observations found planets in similar environments. Those planets could have survived the bloating of a dying star during its red giant phase, or have formed from the material shed during the star's death throes.

While searching for CFCs could ferret out an existing alien civilization, it also could detect the remnants of a civilization that annihilated itself. Some pollutants last for 50,000 years in Earth's atmosphere while others last only 10 years. Detecting molecules from the long-lived category but none in the short-lived category would show that the sources are gone.
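The long-lived versus short-lived argument can be made concrete with simple exponential decay (an illustrative sketch assuming first-order removal with the lifetimes quoted above):

```python
import math

def remaining_fraction(t_years, lifetime_years):
    """Fraction of a pollutant left after t_years, given N(t) = N0 * exp(-t/tau)."""
    return math.exp(-t_years / lifetime_years)

print(remaining_fraction(100, 10))       # short-lived gas: essentially gone in a century
print(remaining_fraction(100, 50_000))   # long-lived gas: barely changed
```

Seeing only the long-lived molecules would therefore imply the polluting sources shut down long ago.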

"In that case, we could speculate that the aliens wised up and cleaned up their act. Or in a darker scenario, it would serve as a warning sign of the dangers of not being good stewards of our own planet," says Loeb.

This work has been accepted for publication in The Astrophysical Journal and is available online.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

Contacts and sources:
David A. Aguilar
Harvard-Smithsonian Center for Astrophysics

Lives And Deaths Of Sibling Stars

This beautiful star cluster, NGC 3293, is found 8000 light-years from Earth in the constellation of Carina (The Keel). This cluster was first spotted by the French astronomer Nicolas-Louis de Lacaille in 1751, during his stay in what is now South Africa, using a tiny telescope with an aperture of just 12 millimetres. It is one of the brightest clusters in the southern sky and can be easily seen with the naked eye on a dark clear night.

In this image from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in Chile young stars huddle together against a backdrop of clouds of glowing gas and lanes of dust. The star cluster, known as NGC 3293, would have been just a cloud of gas and dust itself about ten million years ago, but as stars began to form it became the bright group we see here. Clusters like this are celestial laboratories that allow astronomers to learn more about how stars evolve.

Credit: ESO/G. Beccari

Star clusters like NGC 3293 contain stars that all formed at the same time, at the same distance from Earth and out of the same cloud of gas and dust, giving them the same chemical composition. As a result clusters like this are ideal objects for testing stellar evolution theory.

Most of the stars seen here are very young, and the cluster itself is less than 10 million years old: a mere baby on cosmic scales, considering that the Sun is 4.6 billion years old and still only middle-aged. An abundance of these bright, blue, youthful stars is common in open clusters like NGC 3293, and, for example, in the better-known Kappa Crucis cluster, otherwise known as the Jewel Box or NGC 4755.

This zoom video starts from a broad view of the Milky Way and takes the viewer on a journey to the bright star cluster NGC 3293 in the constellation of Carina (The Keel). This spectacular object would have been just a cloud of gas and dust about ten million years ago, but as stars began to form it became the bright group we see here. Clusters like this are celestial laboratories that allow astronomers to learn more about how stars evolve.

Credit: ESO/G. Beccari/N. Risinger (skysurvey.org). Music: movetwo

These open clusters each formed from a giant cloud of molecular gas and their stars are held together by their mutual gravitational attraction. But these forces are not enough to hold a cluster together against close encounters with other clusters and clouds of gas as the cluster's own gas and dust dissipates. So, open clusters will only last a few hundred million years, unlike their big cousins, the globular clusters, which can survive for billions of years, and hold on to far more stars.

Despite some evidence suggesting that there is still some ongoing star formation in NGC 3293, it is thought that most, if not all, of the nearly fifty stars in this cluster were born in one single event. But even though these stars are all the same age, they do not all have the dazzling appearance of a star in its infancy; some of them look positively elderly, giving astronomers the chance to explore how and why stars evolve at different speeds.

This pan video gives a close-up view of a colourful image from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in Chile. It shows a group of young stars huddled together against a backdrop of clouds of glowing gas and lanes of dust. This star cluster, known as NGC 3293, would have been just a cloud of gas and dust itself about ten million years ago, but as stars began to form it became the bright group we see here. Clusters like this are celestial laboratories that allow astronomers to learn more about how stars evolve.

Credit: ESO/G. Beccari. Music: movetwo

Take the bright orange star at the bottom right of the cluster. This huge star, a red giant, would have been born as one of the biggest and most luminous of its litter, but bright stars burn out fast. As the star used up the fuel at its core its internal dynamics changed and it began to swell and cool, becoming the red giant we now observe. Red giants are reaching the end of their life cycle, but this red giant's sister stars are still in what is known as the pre-main-sequence — the period before the long, stable, middle period in a star's life. We see these stars in the prime of their life as hot, bright and white against the red and dusty background.

This image was taken with the Wide Field Imager (WFI) installed on the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in northern Chile.



Contacts and sources:

Transformer Pulsar Discovered

In late June 2013, an exceptional binary containing a rapidly spinning neutron star underwent a dramatic change in behavior never before observed. The pulsar's radio beacon vanished, while at the same time the system brightened fivefold in gamma rays, the most powerful form of light, according to measurements by NASA's Fermi Gamma-ray Space Telescope.

These artist's renderings show one model of pulsar J1023 before (top) and after (bottom) its radio beacon (green) vanished. Normally, the pulsar's wind staves off the companion's gas stream. When the stream surges, an accretion disk forms and gamma-ray particle jets (magenta) obscure the radio beam. 
Image Credit: NASA's Goddard Space Flight Center

"It's almost as if someone flipped a switch, morphing the system from a lower-energy state to a higher-energy one," said Benjamin Stappers, an astrophysicist at the University of Manchester, England, who led an international effort to understand this striking transformation. "The change appears to reflect an erratic interaction between the pulsar and its companion, one that allows us an opportunity to explore a rare transitional phase in the life of this binary."

Zoom into an artist's concept of AY Sextantis, a binary star system whose pulsar switched from radio emissions to high-energy gamma rays in 2013. This transition likely means the pulsar's spin-up process is nearing its end.



A binary consists of two stars orbiting around their common center of mass. This system, known as AY Sextantis, is located about 4,400 light-years away in the constellation Sextans. It pairs a 1.7-millisecond pulsar named PSR J1023+0038 -- J1023 for short -- with a star containing about one-fifth the mass of the sun. The stars complete an orbit in only 4.8 hours, which places them so close together that the pulsar will gradually evaporate its companion.
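Just how close the 4.8-hour orbit puts the pair follows from Kepler's third law. A rough estimate (my own, assuming a typical 1.4-solar-mass pulsar plus the quoted 0.2-solar-mass companion):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

period = 4.8 * 3600                  # orbital period, s
m_total = (1.4 + 0.2) * M_SUN        # assumed combined mass, kg

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * m_total * period**2 / (4 * math.pi**2)) ** (1 / 3)
print(a / R_SUN)   # ~1.7: the stars are less than two solar radii apart
```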

When a massive star collapses and explodes as a supernova, its crushed core may survive as a compact remnant called a neutron star or pulsar, an object squeezing more mass than the sun's into a sphere no larger than Washington, D.C. Young isolated neutron stars rotate tens of times each second and generate beams of radio, visible light, X-rays and gamma rays that astronomers observe as pulses whenever the beams sweep past Earth. Pulsars also generate powerful outflows, or "winds," of high-energy particles moving near the speed of light. The power for all this comes from the pulsar's rapidly spinning magnetic field, and over time, as the pulsars wind down, these emissions fade.

More than 30 years ago, astronomers discovered another type of pulsar, one that completes a rotation in 10 milliseconds or less, reaching rotational speeds up to 43,000 rpm. While young pulsars usually appear in isolation, more than half of millisecond pulsars occur in binary systems, which suggested an explanation for their rapid spin.
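The quoted figures are easy to check: a rotation period in seconds converts to revolutions per minute as 60 divided by the period (a quick sanity check, not from the article itself).

```python
def rpm(period_s):
    """Convert a rotation period in seconds to revolutions per minute."""
    return 60.0 / period_s

print(round(rpm(10e-3)))    # 10 ms period -> 6000 rpm
print(round(rpm(1.4e-3)))   # 1.4 ms period -> 42857 rpm, i.e. ~43,000 rpm
```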

"Astronomers have long suspected millisecond pulsars were spun up through the transfer and accumulation of matter from their companion stars, so we often refer to them as recycled pulsars," explained Anne Archibald, a postdoctoral researcher at the Netherlands Institute for Radio Astronomy (ASTRON) in Dwingeloo who discovered J1023 in 2007.

During the initial mass-transfer stage, the system would qualify as a low-mass X-ray binary, with a slower-spinning neutron star emitting X-ray pulses as hot gas raced toward its surface. A billion years later, when the flow of matter comes to a halt, the system would be classified as a spun-up millisecond pulsar with radio emissions powered by a rapidly rotating magnetic field.

To better understand J1023's spin and orbital evolution, the system was regularly monitored in radio using the Lovell Telescope in the United Kingdom and the Westerbork Synthesis Radio Telescope in the Netherlands. These observations revealed that the pulsar's radio signal had turned off and prompted the search for an associated change in its gamma-ray properties.

A few months before this, astronomers found a much more distant system that flipped between radio and X-ray states in a matter of weeks. Located in M28, a globular star cluster about 19,000 light-years away, a pulsar known as PSR J1824-2452I underwent an X-ray outburst in March and April 2013. As the X-ray emission dimmed in early May, the pulsar's radio beam emerged.

While J1023 reached much higher energies and is considerably closer, both binaries are otherwise quite similar. What's happening, astronomers say, are the last sputtering throes of the spin-up process for these pulsars.

In J1023, the stars are close enough that a stream of gas flows from the companion star toward the pulsar. The pulsar's rapid rotation and intense magnetic field are responsible for both the radio beam and its powerful pulsar wind. When the radio beam is detectable, the pulsar wind holds back the companion's gas stream, preventing it from approaching too closely. But now and then the stream surges, pushing its way closer to the pulsar and establishing an accretion disk.

Gas in the disk becomes compressed and heated, reaching temperatures hot enough to emit X-rays. Next, material along the inner edge of the disk quickly loses orbital energy and descends toward the pulsar. When it falls to an altitude of about 50 miles (80 km), processes involved in creating the radio beam are either shut down or, more likely, obscured.

The inner edge of the disk probably fluctuates considerably at this altitude. Some of it may become accelerated outward at nearly the speed of light, forming dual particle jets firing in opposite directions -- a phenomenon more typically associated with accreting black holes. Shock waves within and along the periphery of these jets are a likely source of the bright gamma-ray emission detected by Fermi.

The findings were published in the July 20 edition of The Astrophysical Journal. The team reports that J1023 is the first example of a transient, compact, low-mass gamma-ray binary ever seen. The researchers anticipate that the system will serve as a unique laboratory for understanding how millisecond pulsars form and for studying the details of how accretion takes place on neutron stars.

"So far, Fermi has increased the number of known gamma-ray pulsars by about 20 times and doubled the number of millisecond pulsars within in our galaxy," said Julie McEnery, the project scientist for the mission at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Fermi continues to be an amazing engine for pulsar discoveries."


Contacts and sources:
NASA

Scientists Successfully Generate Human Platelets Using Next-Generation Bioreactor

Scientists at Brigham and Women's Hospital (BWH) have developed a scalable, next-generation platelet bioreactor to generate fully functional human platelets in vitro. The work is a major biomedical advancement that will help address blood transfusion needs worldwide.

The study is published July 21, 2014 in Blood.

"The ability to generate an alternative source of functional human platelets with virtually no disease transmission represents a paradigm shift in how we collect platelets that may allow us meet the growing need for blood transfusions," said Jonathan Thon, PhD, Division of Hematology, BWH Department of Medicine, lead study author.

Scanning electron micrograph of blood cells. From left to right: human erythrocyte, activated platelet, leukocyte.

Credit: Wikipedia

According to the researchers, more than 2.17 million platelet units from donors are transfused yearly in the United States to treat patients undergoing chemotherapy, organ transplantation and surgery, as well as for those needing blood transfusions following a major trauma. However, increasing demand, a limited five-day shelf life, and the risks of contamination, rejection and infection have made blood platelet shortages common.

"Bioreactor-derived platelets theoretically have several advantages over conventional, donor-derived platelets in terms of safety and resource utilization," said William Savage, MD, PhD, medical director, Kraft Family Blood Donor Center at Dana Farber Cancer Institute/Brigham and Women's Hospital, who did not contribute to the study. "A major factor that has limited our ability to compare bioreactor platelets to donor platelets is the inefficiency of growing platelets, a problem that slows progress of clinical research. This study addresses that gap, while contributing to our understanding of platelet biology at the same time."

3D Rendering of Platelets

Credit:  Wikipedia

Blood cells, such as platelets, are made in bone marrow. The bioreactor, a device that mimics a biological environment to carry out a reaction on an industrial scale, uses biologically inspired engineering to fully integrate the major components of bone marrow, modeling both its composition and blood flow characteristics. The microfluidic platelet bioreactor recapitulates features such as bone marrow stiffness, extracellular matrix composition, micro-channel size, and blood flow stability under high-resolution live-cell microscopy to make human platelets.

Application of shear forces of blood flow in the bioreactor triggered a dramatic increase in platelet initiation from 10 percent to 90 percent, leading to functional human platelets.

"By being able to develop a device that successfully models bone marrow represents a crucial bridge connecting our understanding of the physiological triggers of platelet formation to support drug development and scale platelet production," said senior study author Joseph Italiano, Jr., PhD, Division of Hematology, BWH Department of Medicine, and the Vascular Biology Program at Boston Children's Hospital.

In terms of next steps, the researchers would like to commence phase 0/I human clinical trials in 2017.

"The regulatory bar is appropriately set high for blood products, and it is important to us that we show platelet quality, function and safety over these next three years since we'll likely be recipients of these platelets ourselves at some point," said Thon.

This research was supported by the National Institutes of Health (R01Hl68130), American Society of Hematology Scholar Award, Brigham Research Institute at Brigham and Women's Hospital, and Marie Curie Actions International Outgoing Fellowship (300121).

Jonathan Thon, PhD, and Joseph Italiano, Jr., PhD, are both founders of Platelet BioGenesis, a company that aims to produce donor-independent human platelets from human-induced pluripotent stem cells at scale.



Contacts and sources:
 Brigham and Women's Hospital (BWH)