Unseen Is Free

Wednesday, October 31, 2012

A Ghost in Cepheus

Described as a "dusty curtain" or "ghostly apparition," mysterious reflection nebula VdB 152 really is very faint. Far from your neighborhood on this Halloween Night, the cosmic phantom is nearly 1,400 light-years away. Also catalogued as Ced 201, it lies along the northern Milky Way in the royal constellation Cepheus. Near the edge of a large molecular cloud, pockets of interstellar dust in the region block light from background stars or scatter light from the embedded bright star giving parts of the nebula a characteristic blue color. 

Ultraviolet light from the star is also thought to cause a dim reddish luminescence in the nebular dust. Though stars do form in molecular clouds, this star seems to have only accidentally wandered into the area, as its measured velocity through space is very different from the cloud's velocity. This deep telescopic image of the region spans about 7 light-years.
Image Credit: NASA/Stephen Leshin

First Direct Detection Sheds Light On Dark Galaxies

Dark galaxies – galaxies with few if any stars and made predominantly of dense gas – have been impossible to detect directly... until now. Three leading members of an international scientific team discuss their discovery and the place these galaxies hold in the universe.

This video sequence begins with a photograph of the area around the constellation of Sculptor. It then zooms in through a Digitized Sky Survey 2 image to VLT observations of HE 0109-3518, a bright quasar which is illuminating the gas in surrounding dark galaxies. These galaxies are essentially devoid of stars and would not be visible at all without the light coming from the quasar.

 Credit: ESO, Digitized Sky Survey 2, Akira Fujii/David Malin Images. Music: Disasterpeace

Most people think of galaxies as huge islands of stars, gas and dust that populate the universe in groups and clusters. But theory has predicted that some galaxies, called “dark galaxies,” have few if any stars and are made predominantly of dense gas. Because they are devoid of stars, dark galaxies have been impossible to detect directly – until now. An international team of astronomers, using the European Southern Observatory’s 8.2-meter Very Large Telescope (VLT) in Chile, has detected several dark galaxies by observing the fluorescent glow of their hydrogen gas illuminated by the ultraviolet light of a nearby quasar.

The galaxies detected by the team are nearly 11 billion light years away, which means they existed at an early time in the 13.7 billion-year-old Universe. Dark galaxies are thought to be the building blocks of modern-day star-forming galaxies, either through mergers with star-forming galaxies, or by feeding them gas along filaments that connect the Universe’s galaxies in a kind of cosmic web.
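The quoted distance is a light-travel distance: the light we receive left these galaxies nearly 11 billion years ago. As a rough consistency check, here is a minimal sketch using astropy's cosmology module, assuming a generic flat Lambda-CDM cosmology and a field redshift of z ≈ 2.4; treat both as illustrative assumptions rather than the paper's exact values.

```python
# A rough consistency check, not from the paper: convert a redshift into a
# lookback time with astropy, assuming generic cosmological parameters.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative flat Lambda-CDM model

z = 2.4  # approximate redshift of the quasar field discussed here
print(f"Lookback time at z = {z}: {cosmo.lookback_time(z):.1f}")  # ~10.8 Gyr
print(f"Age of this model universe: {cosmo.age(0):.1f}")          # ~13.5 Gyr
```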

Three members of the team that made the detection spoke recently with The Kavli Foundation in a roundtable discussion about how they made the discovery, what it means, what questions are still unanswered, and what they plan next. Among the participants:
Sebastiano Cantalupo – Postdoctoral Fellow in the Department of Astronomy and Astrophysics at the University of California, Santa Cruz. He studies the high-redshift universe, and in particular the space between galaxies, known as the Intergalactic Medium (IGM), to learn about galaxy formation and evolution.

Dr. Cantalupo also studies how the first stars and galaxies ionized the fog of neutral hydrogen gas in the early universe, making it transparent to light.

Martin Haehnelt – Professor of Cosmology and Astrophysics, Institute of Astronomy and the Kavli Institute for Cosmology, University of Cambridge. An observational cosmologist interested in the emergence of structure during the epoch of reionization, he was a member of the initial science working group for the European Extremely Large Telescope project in Chile, anticipated to see First Light in the early 2020s.

Simon Lilly – Professor of Observational Cosmology, Institute for Astronomy at the Swiss Federal Institute of Technology in Zurich, Switzerland. His group seeks to understand the formation and evolution of galaxies, and uses observational data on the distant universe obtained from ground-based and space-based observatories. Dr. Lilly is involved in instrumentation projects for the VLT and the James Webb Space Telescope.

THE KAVLI FOUNDATION (TKF): To begin, what is a dark galaxy and how is it different from regular galaxies?

SEBASTIANO CANTALUPO: Dark galaxies are similar in some respects to those we see today, as they are composed of dark matter and gas; but for some reason they have not been able to form stars. As a result, they cannot be detected with our optical telescopes. Some theoretical models have predicted that dark galaxies were common in the early universe when galaxies had more difficulty forming stars – partly because their density of gas was not sufficient to form stars – and only later did galaxies begin to ignite stars, becoming like the galaxies we see today.

TKF: The dark galaxies that you discovered are nearly 11 billion light years away, so we’re seeing them as they existed 11 billion years ago. What was the universe like back then?

SIMON LILLY: This seems to be an epoch when the Universe as a whole was forming stars at a peak rate – about 20 times faster than today. It also seems to be a key time for the growth of black holes, because we see at that time a peak in the number of bright quasars where these growing black holes reside.

TKF: So this is a time when we had galaxies maturing very quickly with very rapid starbursts, but we also had these dark galaxies – these dense clouds of gas that are not yet forming stars. Should we consider dark galaxies to be precursors to modern-day galaxies?

Sebastiano Cantalupo, Postdoctoral Fellow in the Department of Astronomy and Astrophysics at the University of California, Santa Cruz.
  Courtesy: S. Cantalupo

CANTALUPO: We do believe these dark galaxies are the building blocks of modern galaxies. In our current theory of galaxy formation, we believe that big galaxies form from the merger of smaller galaxies. Dark galaxies bring to big galaxies a lot of gas, which then accelerates star formation in the bigger galaxies.

MARTIN HAEHNELT: To make that same point a little more concrete, we expect the precursor to the Milky Way was a smaller bright galaxy that merged with dark galaxies nearby. They all came together to form our Milky Way that we see today.

TKF: Did dark galaxies exist only early in the history of the universe?

HAEHNELT: They exist today but are not easy to see. There is actually a firm prediction from our current theory of galaxy formation that there should be many dark galaxies in our own local group of galaxies, which includes the Milky Way and Andromeda galaxies but also many smaller objects. Many small satellite galaxies in the Local Group are actually expected to be dark galaxies. However, many of them lack enough stars to be detected by starlight, and they also have very little gas at the present time.

TKF: And are these dark galaxies relics of the very early universe, or have they formed more recently?

HAEHNELT: They’re a mixture. Some should be very old, some would have formed later, and some may have joined the Local Group relatively recently.


TKF: How did you discover the dark galaxies you detail in your paper?

Simon Lilly, Professor of Observational Cosmology, Institute for Astronomy at the Swiss Federal Institute of Technology.
 Courtesy: S. Lilly

LILLY: By detecting the emission from hydrogen gas within them. This emission is generated when ultraviolet light shines onto the gas and causes its atoms to excite. When they de-excite, these atoms emit photons with a very particular wavelength that we can detect and recognize. Now, the required ultraviolet light permeates the universe but usually the resulting emission is very, very faint. Our approach was to look for dark galaxies in places where the ambient ultraviolet light would be much brighter than the usual background levels – and that was in the vicinity of bright quasars. In the neighborhood of a quasar, this extra UV light boosts the emission from the gas in dark galaxies to levels that we can in principle detect with powerful telescopes.
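The "very particular wavelength" used in surveys of this kind is hydrogen's Lyman-alpha line, with a rest wavelength of 121.567 nanometres. A minimal sketch of the redshift arithmetic, again assuming z ≈ 2.4 for this field:

```python
# Minimal sketch: where redshift places hydrogen's Lyman-alpha line for a
# distant object. The redshift value is an illustrative assumption.
LYA_REST_NM = 121.567  # Lyman-alpha rest wavelength in nanometres

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest_nm * (1.0 + z)

z = 2.4
print(f"Ly-alpha observed at ~{observed_wavelength(LYA_REST_NM, z):.0f} nm")
# ~413 nm: far-ultraviolet light shifted into the blue end of the visible
# band, where a narrowband filter on an optical telescope can isolate it.
```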

TKF: If you're just looking around quasars for these dark galaxies, aren’t you underestimating how many there might be?

LILLY: Yes, but the region of space where we are detecting dark galaxies extends out to something like 10 or more Megaparsecs away from the quasar (more than 30 million light years), so we are not just looking at a very small region around the quasars.

CANTALUPO: At the same time, the success of our survey depends on using one of the brightest sources of light that we know of in the universe, which is why quasars are so important.

LILLY: And, using quasars also means that we may be able to use dark galaxies to learn something about the quasars illuminating them – for instance, how long they have been shining at their current very high level, and whether the light from the quasar is emitted in all directions or is just beamed in some particular directions.

TKF: How large are the dark galaxies you’ve detected?

CANTALUPO: These things are probably as small as or smaller than the Magellanic Clouds near the Milky Way. That means they are probably five or six kiloparsecs across, [or about 16,500 to 19,800 light years. By comparison, the Milky Way Galaxy is about 100,000 light years across.] So they are really dwarf galaxies.
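The bracketed conversion is straightforward to check, assuming the standard factor of roughly 3.26 light years per parsec:

```python
# Quick unit check of the sizes quoted above (1 parsec ~ 3.2616 light years).
LY_PER_PC = 3.2616

for kpc in (5, 6):
    ly = kpc * 1_000 * LY_PER_PC
    print(f"{kpc} kpc ~ {ly:,.0f} light years")
# 5 kpc ~ 16,308 and 6 kpc ~ 19,570 light years, close to the quoted
# 16,500-19,800 range.
```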

TKF: Which makes it even more remarkable that you can detect them from 11 billion light years away.

CANTALUPO: That’s because they have a lot of gas, and the nearby quasar is illuminating all this gas to make it bright enough to be seen.

TKF: In your paper you also discuss the detection of filaments – something distinct from the dark galaxies. What are filaments?

Martin Haehnelt, Professor of Cosmology and Astrophysics, Institute of Astronomy and the Kavli Institute for Cosmology, University of Cambridge.
Courtesy: M. Haehnelt

HAEHNELT: Filaments are very interesting in their own right. Galaxies are actually believed to draw in gas that resides along filamentary strings. In fact, most of the material that falls into galaxies doesn't flow in uniformly from all directions; it flows in along a few well-defined filaments that interconnect the different galaxies of the Universe. We call this the “cosmic web.” If you look at computer models, it is immediately obvious why the structures they predict gave rise to the term cosmic web. If you look closely at these computer models you see free-flowing gas, smaller galaxies and dark matter all streaming along these filaments together. This is a very active area of research, and we hope to someday see gas as it falls into these more normal galaxies along these filamentary streams. So, our paper proposes a slightly tentative detection of this, and we are really interested in studying this further.

TKF: Do you expect dark galaxies are also embedded in these filaments?

HAEHNELT: Yes.

TKF: What are the prospects for detecting dark galaxies that are even farther away than the ones you found?

LILLY: A number of instruments are being designed and constructed, which will enable us to take our observations to the next level in sensitivity. Our current observations were done with a narrow filter to isolate the emission that fluorescent gas produces. One can use a spectrograph to split the light up much more finely, and that gives you a big potential increase in sensitivity to isolate this emission. But there’s currently a cost to that approach, because it significantly reduces the area of the sky that you can look at.

This deep image shows the region of the sky around the quasar HE0109-3518. The quasar is labelled with a red circle near the centre of the image. The energetic radiation of the quasar makes dark galaxies glow, helping astronomers to understand the obscure early stages of galaxy formation. The faint images of the glow from 12 dark galaxies are labelled with blue circles. Dark galaxies are essentially devoid of stars, therefore they don’t emit any light that telescopes can catch. This makes them virtually impossible to observe unless they are illuminated by an external light source like a background quasar.

This image combines observations from the Very Large Telescope, tuned to detect the fluorescent emissions produced by the quasar illuminating the dark galaxies, with colour data from the Digitized Sky Survey 2.

Credit: ESO, Digitized Sky Survey 2 and S. Cantalupo (UCSC)

However, the next generation of instruments is expected to enable us to look at a large area, about an arcminute on a side, [which is equal to about 1/30th the size of a full moon], across a wide range of wavelengths and with a high spectral resolution.

The MUSE (Multi Unit Spectroscopic Explorer) spectrograph is one of several competing instruments – another spectrograph will be on the Keck telescope in Hawaii. MUSE should be operating in about a year on the VLT in Chile. With long exposure times on MUSE, we should have the sensitivity to see these filaments in the cosmic web – particularly if UV light from a nearby quasar boosts the gas emission. We are quite optimistic about this, and the question is really whether we can make that detection first.

TKF: What are your more immediate plans for studying dark galaxies?

CANTALUPO: This first survey was sort of a test of the technique. Now that we know it works, we will study the regions around ten or so quasars and look for more dark galaxies. This new survey will be conducted in November, from the Keck telescope on Mauna Kea in Hawaii. But before that, in October at the VLT in Chile, we’ll keep studying the dark galaxies we’ve already detected. In particular, we are going to do spectroscopic analyses of these objects. In the future it will also be very important to get images from space, so the Hubble Space Telescope will be very valuable.

HAEHNELT: So far, we have relatively little direct information about the physical properties of dark galaxies from our observations. Computer models have helped us tremendously to understand what it is that we’re seeing. But with additional observations, we can learn more about the underlying properties of these objects, for example how much dark matter you would expect these objects to have. And depending on what dark matter is actually made of, there are very different predictions for how many of these dark galaxies there should actually be. If we manage to count accurately the numbers of dark galaxies that we see around quasars, then this might actually allow us to get a handle on discriminating between competing theories about what dark matter might be.


TKF: As each of you moves forward, what are your biggest questions about dark galaxies?

HAEHNELT: I would really like to know the minimum mass of galaxies that are able to efficiently produce stars. By observing a sufficiently large sample of the dark galaxies we have detected, we might be able to answer this question.

This image shows 12 close-up images of dark galaxies. These are essentially devoid of stars and would normally be invisible to telescopes. However, their gas is being illuminated by the intense light from a nearby quasar, making them visible to the VLT.
Credit: ESO, Digitized Sky Survey 2 and S. Cantalupo (UCSC)

LILLY: I am first and foremost an observer, and I wonder if we can indeed use this technique to see the emission of filamentary gas in the cosmic web, and if so, how close are we to seeing that? That has been something of a Holy Grail for many, many years and I think this most recent discovery of dark galaxies is a significant step toward the goal.

CANTALUPO: The questions that have motivated us for seven years now are, “How do galaxies form their stars?” and “How do we look at the earliest stages of galaxy formation?” Today, we can only see what dark galaxies look like, estimate their mass, say a few things about the efficiency at which they form any stars at all, and speculate why they are there and their ultimate fate. But these are now questions we can begin to address because of this new technique.

LILLY: As Sebastiano said, we started this project back in 2004. Eight years later, it's nice to see the little acorns that we planted then now growing into oak trees. 


First Ever Family Tree For All Living Birds Reveals Evolution And Diversification

A Yale-led scientific team has produced the most comprehensive family tree for birds to date, connecting all living bird species — nearly 10,000 in total — and revealing surprising new details about their evolutionary history and its geographic context.

Analysis of the family tree shows when and where birds diversified — and that birds’ diversification rate has increased over the last 50 million years, challenging the conventional wisdom of biodiversity experts.

“It’s the first time that we have — for such a large group of species and with such a high degree of confidence — the full global picture of diversification in time and space,” said biologist Walter Jetz of Yale, lead author of the team’s research paper, published Oct. 31 online in the journal Nature.

The world's first family tree linking all living birds and revealing when and where they evolved and diversified since dinosaurs walked the earth has been created by scientists from the University of Sheffield.

This shows the Bird Family Tree 

 Credit: University of Sheffield

Experts used the family tree to map out where the almost 10,000 species of birds live to show where the most diversification has taken place in the world.

Researchers, from the University of Sheffield, Yale University, the University of Tasmania and Simon Fraser University, say the creation of new species has sped up over the last 50 million years. Surprisingly, species formation is not faster in the species-rich tropics; instead it was found to be faster in the Western Hemisphere than in the Eastern Hemisphere, as well as on islands.


Credit: Yale University

As well as yielding the first complete family tree for birds, it is hoped the research could help prioritise conservation efforts in a bid to save the most evolutionarily distinct species from extinction.

Dr Gavin Thomas, of the University of Sheffield's Department of Animal and Plant Sciences, said: "We have built the first ever family tree showing the evolutionary relationship among the species of birds. We used fossils and genetic data to estimate the ages of all the different branches of the bird tree so that we could assess how diversity has accumulated through time. Our work is indebted to researchers from museums and universities who have collected astounding amounts of genetic data from birds around the world."

Despite major steps forward in modern supercomputers, it still took the researchers almost five years to analyse millions of years' worth of fossil data, DNA, maths and maps to create this never-before-seen snapshot of how the thousands of bird species alive today made it to where they are.

Even to calculate which species were more or less diverse, the scientists had to create a new "species rate" measure (a toy version of the underlying idea is sketched below, after Dr Thomas' remarks).

Dr Thomas added: "Diversification is the net outcome of new species arising, called speciation, and existing species going extinct. We combined this data with existing data on the geographic ranges of all living bird species so that we could map diversification across the world.

Bird diversity
Credit: Wikipedia

"This 'phylogeny' is important because it is the first that includes all living birds. It means we can ask questions about biodiversity and evolution on a global scale and gain new insight into how diversity has changed over millions of years as well as understand those changes. More widely, one way in which the phylogeny can be used, and which may not be obvious, is in helping to prioritise conservation efforts.

"We can identify where species at greatest risk of extinction are on the tree and ask how much distinct evolutionary history they represent. Some species have many close relatives and represent a small amount of distinct evolutionary history whereas others have few close relatives and their loss would represent the disappearance of vast amounts of evolutionary history that could never be recovered. Environmental change has very likely affected diversification over time. Climate change could be a part of that through its effects on the extent of different types of habitat."

The paper – titled 'The global diversity of birds in space and time' – is published in the journal Nature.


Contacts and sources: 
Eric Gershon
Yale University 

Protoplanet Vesta: Forever young?

Like a movie star constantly retouching her makeup, the protoplanet Vesta is continually stirring its outermost layer and presenting a young face.

New data from NASA's Dawn mission show that a common form of weathering that affects many airless bodies like Vesta in the inner solar system, including the moon, surprisingly doesn't age the protoplanet's outermost layer.

This shows Vesta, as seen by NASA's Dawn spacecraft.

Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/Brown

The data also indicate that carbon-rich asteroids have been splattering dark material on Vesta's surface over a long span of the body's history.

The findings are described in two papers published Nov. 1 in the journal Nature.

Over time, soils on the moon and on asteroids have undergone extensive weathering. Scientists see this in the accumulation of tiny metallic particles containing iron, which dulls the bright, fluffy outer layers of these bodies. Yet Dawn's visible and infrared mapping spectrometer (VIR) and framing camera detected no accumulation of these tiny particles on Vesta, and the protoplanet (sometimes called a giant asteroid) remains bright and pristine.

Vesta as seen by NASA's Dawn spacecraft
Credit:  NASA/JPL-Caltech/UCLA/MPS/DLR/IDA/Brown

Still, the bright rays of the youngest features on Vesta are seen to degrade rapidly and disappear into background soil. Scientists know that frequent, small impacts from asteroids and comets continually mix the fluffy outer layer of broken debris. Vesta also has unusually steep topography relative to other large bodies in the inner solar system, which leads to landslides that further mix the surface material.

Early pictures showed a variety of dramatic light and dark splotches on Vesta's surface. These light and dark materials were unexpected, and they reveal that Vesta has a brightness range that is among the largest observed on rocky bodies in our solar system.

Scientists initially theorized that the dark material on Vesta might come from the shock of high-speed impacts melting and darkening the underlying rocks or from recent volcanic activity. An analysis of data from VIR and the framing camera, however, has revealed that the distribution of dark material is widespread and occurs in both small spots and diffuse deposits, without correlation to any particular underlying geology. The likely source of the dark material is now shown to be carbon-rich asteroids, which are also believed to have deposited hydrated minerals on Vesta.

"Ever since Dawn arrived at Vesta [in July 2011] and we saw the bright and dark streaks across the surface, we have wondered how the zebra got her stripes," said Christopher T. Russell, a professor in UCLA's Department of Earth and Space Sciences and principal investigator for the Dawn mission. "Now we know that the bright streaks and spots are due to very pure early Vestan material, and the dark patches are deposits on the surface most probably due to collisions with material from the dark outer reaches of the asteroid belt."

Scientists estimate that to get the amount of darkening Dawn observed on Vesta, approximately 300 dark asteroids with diameters between 0.6 and 6 miles (1 and 10 kilometers) likely hit Vesta during the last 3.5 billion years. This would have been enough to wrap Vesta in a blanket of mixed material about 3 to 7 feet (1 to 2 meters) thick.
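Those numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a mean radius for Vesta of about 263 kilometers and a typical impactor diameter of 3 kilometers; only the impactor count and blanket thickness come from the estimate quoted above:

```python
import math

# Back-of-the-envelope check on the dark-material blanket, assuming a mean
# radius for Vesta of ~263 km; impactor count and blanket thickness are the
# figures quoted above, everything else is a rough assumption.
R_VESTA_M = 263e3
surface_area = 4 * math.pi * R_VESTA_M**2            # ~8.7e11 m^2

blanket_volume = surface_area * 1.5                  # 1.5 m mean thickness
print(f"Blanket volume:  {blanket_volume:.1e} m^3")  # ~1.3e12 m^3

# ~300 impactors of 1-10 km diameter; take ~3 km as a typical size.
impactor_volume = 300 * (4 / 3) * math.pi * (1.5e3) ** 3
print(f"Impactor volume: {impactor_volume:.1e} m^3") # ~4.2e12 m^3
# The two agree to within an order of magnitude, which is all such an
# estimate promises once mixing and ejecta losses are allowed for.
```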

Launched Sept. 27, 2007, Dawn spent more than a year investigating Vesta, which is in the doughnut-shaped asteroid belt between Mars and Jupiter. Dawn orbited Vesta and observed the protoplanet's surface beginning in July 2011. It departed in September 2012.

Studies of meteorites found on Earth that are linked to Vesta suggest that Vesta formed from interstellar gas and dust during the solar system's first 2 to 5 million years.

"Vesta has been recording the history of the solar system from the beginning — more than 4.5 billion years ago," Russell said. "We're going back further than ever before on the surface of a body."

Dawn has a high-quality camera, along with a back-up; a visible and near-infrared mapping spectrometer to identify minerals on the surface; and a gamma ray and neutron spectrometer to reveal the abundance of elements such as iron and hydrogen, possibly from water, in the soil. Dawn also probed Vesta's gravity using extremely precise navigation.

The study of Vesta, however, is only half of Dawn's mission. The spacecraft is now on its way to the dwarf planet Ceres, where it will conduct a detailed study of Ceres' structure and composition. Vesta and Ceres are the most massive objects in the main asteroid belt between Mars and Jupiter. Ceres, the largest object in the main belt, could harbor substantial water or ice beneath its rock crust — and possibly life. The spacecraft will rendezvous with Ceres and begin orbiting in 2015, conducting studies and observations for at least five months.

"Ceres is the largest asteroid and one of the darker bodies in the belt," Russell said. "We will soon learn more about the dark materials that have added so many highlights to the face of Vesta."
 
Dawn, the second scientific mission to be powered by an advanced NASA technology known as ion propulsion, is the first NASA mission to orbit two solar system targets beyond the moon.

UCLA is in charge of Dawn's science and public outreach. Russell leads the science team that has the lead role in analyzing and interpreting the data from Dawn.

For more information about Dawn, visit http://www.nasa.gov/dawn and http://dawn.jpl.nasa.gov.

JPL manages the Dawn mission for NASA's Science Mission Directorate in Washington. Dawn is a project of the directorate's Discovery Program, managed by NASA's Marshall Space Flight Center in Huntsville, Ala. The University of California at Los Angeles (UCLA) is responsible for overall Dawn mission science. Orbital Sciences Corp. in Dulles, Va., designed and built the spacecraft. The German Aerospace Center, the Max Planck Institute for Solar System Research, the Italian Space Agency and the Italian National Astrophysical Institute are international partners on the mission team. The California Institute of Technology in Pasadena manages JPL for NASA.


Contacts and sources:
Stuart Wolpert
University of California - Los Angeles

Green Tea Found To Reduce Rate Of Some GI Cancers

Women who drink green tea may lower their risk of developing some digestive system cancers, especially cancers of the stomach/esophagus and colorectum, according to a study led by researchers from Vanderbilt-Ingram Cancer Center.

The study by lead author Sarah Nechuta, Ph.D., MPH, assistant professor of Medicine, was published online in advance of the Nov. 1 edition of the American Journal of Clinical Nutrition. Wei Zheng, M.D., Ph.D., MPH, professor of Medicine, chief of the Division of Epidemiology and director of the Vanderbilt Epidemiology Center, was the principal investigator for the study.

Green Tea
Credit: Wikipedia

To determine green tea's impact on cancer risk, the investigators surveyed women enrolled in the Shanghai Women's Health Study, a population-based study of approximately 75,000 middle-aged and older Chinese women. During the initial interview participants were asked if they drank tea, the type of tea consumed and how much they consumed. Most of the Chinese women reported drinking primarily green tea.

The researchers found that regular tea consumption, defined as tea consumption at least three times a week for more than six months, was associated with a 17 percent reduced risk of all digestive cancers combined. A further reduction in risk was found to be associated with an increased level of tea drinking. Specifically, those who consumed about two to three cups per day (at least 150 grams of tea per month) had a 21 percent reduced risk of digestive system cancers.

The trend toward fewer digestive cancers was strongest for stomach/esophageal and colorectal cancers.

"For all digestive system cancers combined, the risk was reduced by 27 percent among women who had been drinking tea regularly for at least 20 years," said Nechuta. "For colorectal cancer, risk was reduced by 29 percent among the long-term tea drinkers. These results suggest long-term cumulative exposure may be particularly important."

Tea contains polyphenols, natural chemicals that include catechins such as EGCG and ECG. Catechins have antioxidant properties and may inhibit cancer by reducing DNA damage and blocking tumor cell growth and invasion.

The researchers also asked about other lifestyle factors including the kinds of food eaten regularly, exercise habits, education level and occupation. Women who had ever smoked or who drank alcohol were excluded from the study.

Regular tea drinkers in the study were younger, had higher education, exercised more and consumed more fruits and vegetables. While the researchers adjusted for these factors, they could not rule out an effect from these and other unmeasured lifestyle habits.

The study was conducted in nonsmoking and nondrinking Chinese women to minimize the potential influence of these two risk factors on the results for tea consumption and digestive system cancer risk.

Other investigators who contributed to the study included Xiao Ou Shu, M.D., Ph.D., MPH, Gong Yang, M.D., MPH, Hui Cai, M.D., Ph.D., VICC; Yu-Tang Gao, M.D., Hong-Lan Li, M.D., Yong-Bing Xiang, M.D., MPH, Department of Epidemiology, Shanghai Cancer Center; Bu-Tian Ji, M.D., Dr.P.H., Wong-Ho Chow, Ph.D., Division of Cancer Epidemiology and Genetics, National Cancer Institute.

The research was supported by funding from the National Cancer Institute (grant number R37 CA70867), which is a division of the National Institutes of Health.


Contacts and sources:
Dagny McMillin 
Vanderbilt University Medical Center

Unprecedented View Of 100 Galaxies In Our Local Universe

The Calar Alto Legacy Integral Field Area survey (CALIFA survey), which counts the Instituto de Astrofísica de Canarias (IAC) among its participants, today announces its first public release of data, offering an unprecedentedly detailed view of one hundred galaxies in the local universe and ample opportunities for scientific study. Together with the data release, two technical publications authored by members of the CALIFA collaboration have been made publicly available, describing the data and showing some of their scientific applications.

Credit: Instituto de Astrofísica de Canarias

“I am tremendously happy to see a dream come true,” says Sebastián Sánchez, Principal Investigator of CALIFA. “When we first thought of CALIFA, five years ago, the prospect of releasing such wonderful data seemed far off, yet it is happening right now! We hope and expect that the scientific community will make use of the opportunity,” points out Sánchez.

Galaxies are the end products of cosmic evolution over cosmological time, and their secret history is hidden in the properties of their different components. CALIFA is an ongoing project running at Calar Alto Observatory, focused on characterizing the galaxies in the local universe in unprecedented detail, trying to uncover these ‘archaeological treasures’.

An example of the scientific results that can be obtained from CALIFA data: A stack of maps of one galaxy (NGC 5406) displaying the spatial distribution of several properties; from top down: emission from ionized hydrogen, velocity of the gas inside the galaxy, estimated age of the stellar population, brightness in the visual band.
Credit: Instituto de Astrofísica de Canarias

To this end, CALIFA uses the technique called integral field spectroscopy (IFS) to obtain data on 600 galaxies in the local universe. Traditional observational studies in extragalactic astronomy used one of two classical techniques: either imaging, which gives detailed information about the spatial extent of galaxies, or spectroscopy, which gives detailed information about many properties of galaxies but little or no information about their spatial distribution. The recent technology of IFS makes it possible to take a multitude of simultaneous spectra at many points across each galaxy, thanks to a clever combination of fibre optics and classical techniques. CALIFA is the first IFS study to be explicitly designed as a legacy project and, upon completion, it will be the largest survey of this kind ever accomplished.

The integral field spectrograph used for the CALIFA survey at Calar Alto Observatory, PMAS (in a special configuration called PPAK) uses more than 350 optical fibres to cover a field of view of one square arcminute (equivalent to the apparent width of a 1 euro coin placed at a distance of approximately 80 metres). This way, a complete extended object, such as a galaxy, can be fully mapped in detail in just one exposure.
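The product of an instrument like PPAK is a datacube: two spatial axes plus a wavelength axis, with a full spectrum behind every position on the galaxy. The sketch below uses made-up dimensions and random stand-in data (it is not CALIFA's pipeline) to show how a two-dimensional map of a single emission line falls out of such a cube:

```python
import numpy as np

# Hypothetical integral-field datacube: (y, x, wavelength). The dimensions,
# wavelength range and line position are illustrative, not CALIFA's.
n_y, n_x, n_wave = 70, 70, 1900
cube = np.random.default_rng(0).random((n_y, n_x, n_wave))  # stand-in data
wavelengths = np.linspace(3700.0, 7000.0, n_wave)  # Angstroms

# Narrowband map: sum the flux in a window around one emission line
# (H-alpha at 6563 Angstroms rest, ignoring redshift for simplicity).
LINE = 6563.0
window = (wavelengths > LINE - 10) & (wavelengths < LINE + 10)
halpha_map = cube[:, :, window].sum(axis=2)  # one value per sky position

print(halpha_map.shape)  # (70, 70): an image of line emission, ready to plot
```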

Maps to ‘surf’ the history of cosmos

The delivered data make it possible to produce maps of the different properties of galaxies, such as velocity, stellar ages or chemical composition, to mention just a few. This information will yield new insights into several key issues linked to the structure and history of galaxies in the universe: for instance, which processes drove galaxy evolution over time, how the chemical elements needed for life are produced in different galaxies or in different regions within individual galaxies, and what phenomena are involved in galaxy collisions. This wealth of information makes it possible to uncover the history not only of an entire galaxy, but also of its different components.

"The amount of science we can do is simply incredible" says Jakob Walcher, Project Scientist of CALIFA. "We can study local processes that drive galaxy evolution and that happen at different places in the galaxies, such as star formation, dynamical effects etc. But we also globally characterize the properties of galaxies in the local universe in a way that was not possible before. For example we map the 2D distribution of the stellar mass and chemical elements in the galaxies. Finally, our large sample will allow us to draw comparisons between different galaxy types”, adds Walcher.

Calar Alto Observatory is jointly operated by the Max Planck Institute for Astronomy (MPIA-MPG, Heidelberg, Germany) and the Astrophysical Institute of Andalusia (IAA-CSIC, Granada, Spain). The Observatory has guaranteed 250 observing nights (distributed over three years) for the CALIFA survey with the 3.5 m Zeiss reflector, supporting the acquisition, reduction and data storage processes. Accomplishing this enormous effort requires a large collaboration of astronomers, its composition reflecting the dual Spanish/German heritage of the Observatory. However, it also includes participants from all over the world, comprising a total of 80 people from 13 nations spread among 25 research institutes, from as far away as Australia, Canada and the USA.

With ten scientists involved in the CALIFA survey, coordinated by Jesús Falcón, the Instituto de Astrofísica de Canarias (IAC) is one of the major partners of the collaboration. The IAC is the leading group in the study of all aspects of the kinematics of galaxies: from the stellar angular momentum and emission-line kinematics to the pattern speed of bars in spiral galaxies. These studies, across the large morphological coverage of the CALIFA survey, will provide an unbiased view of the dominant processes driving the evolution and fate of galaxies.


Contacts and sources:
CALIFA home page  

Calcium Keeps Night Vision From Tricking Our Brains

As candy-crazed kids run up and down driveways this Halloween, guided only by the flickering light of jack-o'-lanterns, it's easy to appreciate the low light vision that's preventing trips over superhero capes and princess dresses. But despite the usefulness of night vision, scientists have only now identified the important chemical process that compensates for visual errors in low light.

Credit: Wikipedia

Biochemist Marie Burns led the University of California, Davis team that made the discovery. The researchers hope their findings, published this month in the journal Neuron, will improve understanding of vision and offer insight to scientists creating treatments for eye diseases.

Our ability to see in low light comes from the rod cells in our eyes. These cells contain a special receptor called rhodopsin, which helps translate the light around us into an electrical signal our brains can understand.

Rhodopsin is very light sensitive and allows humans to see when light is scarce. Even just one photon, the smallest amount of light, can activate it. Burns calls the rod cell's ability to detect just a single photon "a biophysical amazement."

"The ability to signal single photons is absolutely essential for good nighttime vision," said Burns. "If it goes wrong, you can't see well at night. If it goes completely wrong, you can't see at all."

But rhodopsin's signal isn't consistent; sometimes it transmits significant amounts of random electrical disturbances, or noise, to the brain. This rhodopsin noise comes in short bursts, lasting only a few hundredths of a second, but is enough to keep the brain from understanding what the eyes are seeing, the researchers said.

The effect of this noise matters little in bright light, where an abundance of photons results in a consistent signal, but in near darkness suppressing it is critical for clear vision.

Scientists speculated there must be a process in the eye counteracting noisy rhodopsin and keeping the information sent to the brain reliable.

"The biology had apparently evolved in such a way to perfectly compensate for any noise that rhodopsin might inject into the system," said Burns.

Burns and her team tinkered with the genetics of the eye and zeroed in on the key chemical quieting noisy rhodopsin: calcium. When a rhodopsin receptor is activated, the calcium levels in the rod cell increase. The more over-active the rhodopsin is, the faster the increase in calcium.

This change triggers a series of chemical reactions that stifle the overactive rhodopsin's signal, standardizing the message sent to the brain every time a photon enters the eye.

"Rhodopsin is equivalent to someone driving a car who's either puttering along at 35 miles per hour or slamming on the gas trying to accelerate quickly," said Burns. "The calcium feedback is constantly the brake on the system that keeps everything going at the same speed."

The reliable signal sent to our brains from each photon of light makes our vision consistent. When our eyes see the same image twice, the same message is sent to our brain.

"It's important for our daily experience that each time you wake up in the morning your bedroom looks the same as it did yesterday," said ophthalmology researcher Vadim Arshavsky of Duke University in Durham, N.C. "That consistency is very important for us as functional and very visual creatures."

When Burns looked at her data, she found that all of the questions about the consistency of rhodopsin's signals were answered by the newly discovered calcium feedback mechanism. She had originally expected additional reactions to also play a part in quieting overactive receptors, but in the end the single discovery solved the whole mystery.

"For me that was a very humbling moment," said Burns. "I realized that one can't always rely on one's intuition when it comes to biology."

Knowing the chemical process behind night vision will have important benefits for scientists in related fields, Burns says. Arshavsky believes that Burns' findings could be a major boost for those creating prosthetic devices to restore normal sight to blind people.

"I think that one big challenge is to bring in these principles to the electronics behind these devices," said Arshavsky. "Understanding how the responses by these cells are so reproducible is important to building the prosthetic devices as they become more and more sophisticated down the road."

When asked whether any particular eye disease research could benefit from the knowledge, Burns replied that she believes "the work is bigger than one disease."

"In the case of our research, this understanding can prove essential for progress on a range of vision deficits that are currently poorly understood and untreatable," said Burns.
Contacts and sources:
Story by Thomas Sumner
ISNS Contributor

The Altar At the Center Of The Milky Way

This colorful view of the globular star cluster NGC 6362 was captured by the Wide Field Imager attached to the MPG/ESO 2.2-meter telescope at ESO's La Silla Observatory in Chile. This new picture, along with a new image of the central region from the NASA/ESA Hubble Space Telescope, provide the best view of this little-known cluster ever obtained. Globular clusters are mainly composed of tens of thousands of very ancient stars, but they also contain some stars that look suspiciously young.

The globular star cluster NGC 6362

Credit: ESO

Globular star clusters are among the oldest objects in the Universe, and NGC 6362 cannot hide its age in this picture. The many yellowish stars in the cluster have already run through much of their lives and become red giant stars. But globular clusters are not static relics from the past -- some curious stellar activities are still going on in these dense star cities.

The NASA/ESA Hubble Space Telescope offers an impressive view of the centre of globular cluster NGC 6362. The image of this spherical collection of stars takes a deeper look at the core of the globular cluster, which contains a high concentration of stars with different colours. This image was created combining ultraviolet, visual and infrared images taken with the Wide Field Channel of the Advanced Camera for Surveys and the Wide Field Camera 3.
Hubble image of the globular star cluster NGC 6362
Credit: ESA/Hubble & NASA
For instance, NGC 6362 is home to many blue stragglers -- old stars that really do succeed in passing for a younger age. All of the stars in a globular cluster formed from the same material at roughly the same time (typically, about 10 billion years ago for most globulars). Yet blue stragglers are bluer and more luminous -- and hence more massive -- than they should be after ten billion years of stellar evolution. Blue stars are hot and consume their fuel quickly, so if these stars had formed about ten billion years ago, then they should have fizzled out long ago. How did they survive?
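The puzzle can be made concrete with the standard textbook scaling for main-sequence lifetimes, roughly t ≈ 10 Gyr × (M/Msun)^(-2.5); the sketch below applies that approximation to illustrative masses:

```python
# Rough main-sequence lifetimes from the textbook scaling
# t ~ 10 Gyr * (M / Msun)^-2.5. Approximate, but it frames the puzzle.
T_SUN_GYR = 10.0

def ms_lifetime_gyr(mass_msun: float) -> float:
    return T_SUN_GYR * mass_msun ** -2.5

for m in (0.8, 1.0, 1.5, 2.0):
    print(f"{m:.1f} solar masses: ~{ms_lifetime_gyr(m):4.1f} Gyr lifetime")
# A 1.5-2 solar-mass blue straggler should burn out in ~1.8-3.6 Gyr, far
# less than the ~10 Gyr age of the cluster, hence mergers or mass transfer.
```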

This video starts with a broad view of the Milky Way. We close in gradually on a fuzzy blob in the southern constellation of Ara (The Altar). This is one of more than 150 globular star clusters that orbit the centre of our galaxy. The main image of the cluster used here comes from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile and the final detailed view of the centre from the NASA/ESA Hubble Space Telescope.

Credit: ESO/NASA/ESA/Hubble, Nick Risinger (skysurvey.org), Digitized Sky Survey 2, Music: delmo "acoustic"
Astronomers are keen to understand the secret of the youthful appearance of blue stragglers. Currently, there are two main theories: stars colliding and merging, and a transfer of material between two companion stars. The basic idea behind both of these options is that the stars were not born as big as we see them today, but that they received an injection of extra material at some point during their lifetimes and this then gave them a new lease of life.

This colorful view of the globular cluster NGC 6362 on the left was captured by the Wide Field Imager attached to the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This brilliant ball of ancient stars lies in the southern constellation of Ara (The Altar). The close up view of the core of the cluster on the right is from the NASA/ESA Hubble Space Telescope.

Credit: ESO

Although less well known than some brighter globular clusters, NGC 6362 holds much that is of interest to astronomers and has been well studied over the years. It was selected as one of the 160 stellar fields for the Pre-FLAMES Survey -- a preliminary survey conducted between 1999 and 2002 using the 2.2-metre telescope at La Silla to find suitable stars for follow-up observations with the VLT's spectroscopic instrument FLAMES. The picture here comes from data collected as part of this survey.

This video gives a close-up view of the globular cluster NGC 6362 in a picture that was captured by the Wide Field Imager attached to the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. This brilliant ball of ancient stars lies in the southern constellation of Ara (The Altar).

Credit: ESO, Music: delmo "acoustic"

The new image shows the entire cluster against a rich background of the carpet of stars in the Milky Way. The central parts of NGC 6362 have also been studied in detail by the NASA/ESA Hubble Space Telescope. The Hubble view shows a much smaller area of sky in much greater detail. The two views -- one wide-angle and one zoomed in -- complement each other perfectly.

This brilliant ball of stars lies in the southern constellation of Ara (The Altar). It can be easily seen in a small telescope. It was first spotted in 1826 by the Scottish astronomer James Dunlop using a 22-centimetre telescope in Australia.


Contacts and sources:
Richard Hook
ESO

Special Mouthwash Could Swish Away Tooth Decay

Microbiologists develop an antimicrobial rinse that could prevent cavities



A new mouthwash developed by a microbiologist at the UCLA School of Dentistry is highly successful in targeting the harmful Streptococcus mutans bacteria that is the principal cause of tooth decay and cavities.

In a recent clinical study, 12 subjects who rinsed just one time with the experimental mouthwash experienced a nearly complete elimination of the S. mutans bacteria over the entire four-day testing period. The findings from the small-scale study are published in the current edition of the international dental journal Caries Research.

Dental caries, commonly known as tooth decay or cavities, is one of the most common and costly infectious diseases in the United States, affecting more than 50 percent of children and the vast majority of adults aged 18 and older. Americans spend more than $70 billion each year on dental services, with the majority of that amount going toward the treatment of dental caries.

This new mouthwash is the product of nearly a decade of research conducted by Wenyuan Shi, chair of the oral biology section at the UCLA School of Dentistry. Shi developed a new antimicrobial technology called STAMP (specifically targeted anti-microbial peptides) with support from Colgate-Palmolive and from C3-Jian Inc., a company he founded around patent rights he developed at UCLA; the patents were exclusively licensed by UCLA to C3-Jian. The mouthwash uses a STAMP known as C16G2.

Wenyuan Shi
Credit:  UCLA

The human body is home to millions of different bacteria, some of which cause diseases such as dental caries but many of which are vital for optimum health. Most common broad-spectrum antibiotics, like conventional mouthwash, indiscriminately kill both benign and harmful pathogenic organisms and only do so for a 12-hour time period.

The overuse of broad-spectrum antibiotics can seriously disrupt the body's normal ecological balance, rendering humans more susceptible to bacterial, yeast and parasitic infections.

Shi's Sm STAMP C16G2 investigational drug, tested in the clinical study, acts as a sort of "smart bomb," eliminating only the harmful bacteria and remaining effective for an extended period.

Based on the success of this limited clinical trial, C3-Jian Inc. has filed an Investigational New Drug application with the U.S. Food and Drug Administration and is expected to begin more extensive clinical trials in March 2012. If the FDA ultimately approves Sm STAMP C16G2 for general use, it will be the first such anti–dental caries drug since fluoride was licensed nearly 60 years ago.

"With this new antimicrobial technology, we have the prospect of actually wiping out tooth decay in our lifetime," said Shi, who noted that this work may lay the foundation for developing additional target-specific "smart bomb" antimicrobials to combat other diseases.

"The work conducted by Dr. Shi's laboratory will help transform the concept of targeted antimicrobial therapy into a reality," said Dr. No-Hee Park, dean of the UCLA School of Dentistry. "We are proud that UCLA will become known as the birthplace of this significant treatment innovation."


Contacts and sources:
Inside Science TV
The UCLA School of Dentistry 

The Future Of Art? Viewers Affect Appearance Of Art On The Wall In Interactive Exhibit

''The Pixel's Habitat: From Code to Line'' is part of the Manofim project that opens the exhibition season in Jerusalem

Static image of an animation that will be altered by human interaction

Photo: The Hebrew University of Jerusalem

A new interactive art exhibition opens today at the Hebrew University of Jerusalem to mark the start of the 2012-13 academic year. ''The Pixel's Habitat: From Code to Line'' incorporates computer programming and audience participation to join the worlds of science and art in an ever-evolving process of two-dimensional animation. Visitors to the exhibit participate in making new virtual creations that are projected on the gallery’s walls and floors.

The exhibition, part of the Manofim project that kicks off the exhibition season in Jerusalem, is displayed in the Max and Iris Stern Gallery (small gallery) and the Bloomfield Library for the Humanities and Social Sciences on the Mount Scopus campus. Admission is free and it is open to visitors from 9 a.m. to 7 p.m. from Sunday to Thursday through the end of February. More information is available at 02-5882940 or michalmor@savion.huji.ac.il.

The artist, Reuven Zahavi, uses computer code to create a community of ''agents'' — digital creatures, each of which has a number of characteristics that define its behavior in space, such as speed of movement and relations with others. These characteristics differ from one agent to another, and their placement in space creates a visual performance in which the agents meet among themselves and with the viewers. These encounters cause some of the original agents — a kind of genetic base — to disappear, and others to remain and undergo mutation. Thus a new generation is created, which guarantees renewal of the population and an evolution of the work.
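For readers curious how such a system looks in code, here is a deliberately small sketch in the same spirit: agents with traits, encounters that remove some and mutate the survivors, and replenishment so the population renews itself. It is not Zahavi's code; every rule and constant is invented for illustration.

```python
import random

random.seed(42)

def make_agent():
    return {"x": random.random(), "y": random.random(),
            "speed": random.uniform(0.001, 0.01)}

def step(agents, mutation=0.2):
    # Move each agent a small random amount, scaled by its own speed trait.
    for a in agents:
        a["x"] = (a["x"] + random.uniform(-1, 1) * a["speed"]) % 1.0
        a["y"] = (a["y"] + random.uniform(-1, 1) * a["speed"]) % 1.0
    # "Encounters": agents that drift very close interact; one disappears,
    # and survivors may mutate, keeping the population evolving.
    survivors = []
    for a in agents:
        if any(abs(a["x"] - b["x"]) < 0.01 and abs(a["y"] - b["y"]) < 0.01
               for b in survivors):
            continue                    # absorbed in the encounter
        survivors.append(a)
    for a in survivors:
        if random.random() < mutation:
            a["speed"] *= random.uniform(0.8, 1.2)   # trait drift
    # Replenish the population so the "habitat" never empties.
    while len(survivors) < len(agents):
        survivors.append(make_agent())
    return survivors

agents = [make_agent() for _ in range(50)]
for _ in range(100):
    agents = step(agents)
print(len(agents), "agents after 100 steps")
```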

Visitors influence the exhibit through cameras installed in the area that record their movements and voices and project the information onto the gallery's walls and floors. In this way, viewers become, knowingly or not, active partners who influence the scene — even if they stand still without moving.

''The interaction created with the viewer is like a person walking and dragging with him seeds, smells and materials, so that you have a simultaneous process of construction and deconstruction,'' says Zahavi. ''I want to encourage the creation of unexpected phenomena stemming from simple rules while using an array of algorithms.''

Zahavi is a senior lecturer at the Bezalel Academy of Arts and Design in Jerusalem, in the Department of Jewelry and Fashion and in the Department of History and Theory. He completed his doctorate on the subject of artists' tools and strategies of creation in modern art at the Université Paris VIII in France. Exhibiting in Israel and abroad, he lives and works in Jerusalem.

The various projections, or habitats, are based on different themes. For example, the habitat ''Under-the-Skin/Underground'' evolves in an environment that resembles something between the surface of the body and the surface of the ground. It simulates geological layering and the presence of invisible, motile under-the-skin/underground forces. The ''skin'' or ground surface is formed, stretched and continually developed through these underground forces and the action of tiny creatures that ceaselessly crawl, twist, etch or stain the surface, each marking it with its own signature. The work, which is governed by code, develops in real time and displays the result of the relations between the forces moving below skin level as they respond to the tracks laid by the creatures moving above ground, as well as to the movement of spectators, which is captured by the camera.

The exhibit, which was curated by Michal Mor and designed and produced by Ron Yosef, was made possible with the support of the Hebrew University.


Tuesday, October 30, 2012

NASA Rover's First Soil Studies Help Fingerprint Martian Minerals

NASA's Mars rover Curiosity has completed initial experiments showing the mineralogy of Martian soil is similar to weathered basaltic soils of volcanic origin in Hawaii.

The minerals were identified in the first sample of Martian soil ingested recently by the rover. Curiosity used its Chemistry and Mineralogy instrument (CheMin) to obtain the results, which are filling gaps and adding confidence to earlier estimates of the mineralogical makeup of the dust and fine soil widespread on the Red Planet.

This graphic shows results of the first analysis of Martian soil by the Chemistry and Mineralogy (CheMin) experiment on NASA's Curiosity rover. 
First X-ray view of Martian soil
Image credit: NASA/JPL-Caltech/Ames

"We had many previous inferences and discussions about the mineralogy of Martian soil," said David Blake of NASA Ames Research Center in Moffett Field, Calif., who is the principal investigator for CheMin. "Our quantitative results provide refined and in some cases new identifications of the minerals in this first X-ray diffraction analysis on Mars."

The identification of minerals in rocks and soil is crucial for the mission's goal to assess past environmental conditions. Each mineral records the conditions under which it formed. The chemical composition of a rock provides only ambiguous mineralogical information, as in the textbook example of the minerals diamond and graphite, which have the same chemical composition, but strikingly different structures and properties.

This pair of images from the Mast Camera on NASA's Curiosity rover shows the upper portion of a wind-blown deposit dubbed "Rocknest." 
Image credit: NASA/JPL-Caltech/MSSS

CheMin uses X-ray diffraction, the standard practice for geologists on Earth using much larger laboratory instruments. This method provides more accurate identifications of minerals than any method previously used on Mars. X-ray diffraction reads minerals' internal structure by recording how their crystals distinctively interact with X-rays. Innovations from Ames led to an X-ray diffraction instrument compact enough to fit inside the rover.
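The relation behind the technique is Bragg's law, nλ = 2d sin θ: each angle at which X-rays diffract strongly implies a spacing d between atomic planes, and the set of spacings fingerprints a mineral. A minimal sketch (CheMin is reported to use a cobalt X-ray source near 1.79 angstroms; the peak angle below is just an example value):

```python
import math

# Bragg's law, n * lambda = 2 * d * sin(theta): a diffraction peak's angle
# implies a lattice spacing d. Wavelength assumes a cobalt K-alpha source.
WAVELENGTH_A = 1.79  # Angstroms

def d_spacing(two_theta_deg: float, n: int = 1) -> float:
    """Lattice spacing implied by a diffraction peak at angle 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH_A / (2.0 * math.sin(theta))

print(f"Peak at 2-theta = 33 deg -> d = {d_spacing(33.0):.2f} A")  # ~3.15 A
# Matching a set of measured d-values against reference patterns for
# feldspar, pyroxene, olivine and so on is how the minerals are identified.
```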

These NASA technological advances have resulted in other applications on Earth, including compact and portable X-ray diffraction equipment for oil and gas exploration, analysis of archaeological objects and screening of counterfeit pharmaceuticals, among other uses.

"Our team is elated with these first results from our instrument," said Blake. "They heighten our anticipation for future CheMin analyses in the months and miles ahead for Curiosity."

The specific sample for CheMin's first analysis was soil Curiosity scooped up at a patch of dust and sand that the team named Rocknest. The sample was processed through a sieve to exclude particles larger than 0.006 inch (150 micrometers), roughly the width of a human hair. The sample has at least two components: dust distributed globally in dust storms and fine sand originating more locally. Unlike conglomerate rocks Curiosity investigated a few weeks ago, which are several billion years old and indicative of flowing water, the soil material CheMin has analyzed is more representative of modern processes on Mars.

This image shows a "bite mark" where NASA's Curiosity rover scooped up some Martian soil.
Image credit: NASA/JPL-Caltech/MSSS

"Much of Mars is covered with dust, and we had an incomplete understanding of its mineralogy," said David Bish, CheMin co-investigator with Indiana University in Bloomington. "We now know it is mineralogically similar to basaltic material, with significant amounts of feldspar, pyroxene and olivine, which was not unexpected. Roughly half the soil is non-crystalline material, such as volcanic glass or products from weathering of the glass. "

Bish said, "So far, the materials Curiosity has analyzed are consistent with our initial ideas of the deposits in Gale Crater recording a transition through time from a wet to dry environment. The ancient rocks, such as the conglomerates, suggest flowing water, while the minerals in the younger soil are consistent with limited interaction with water."

During the two-year prime mission of the Mars Science Laboratory Project, researchers are using Curiosity's 10 instruments to investigate whether areas in Gale Crater ever offered environmental conditions favorable for microbial life.

NASA's Jet Propulsion Laboratory, a division of Caltech in Pasadena, manages the project for NASA's Science Mission Directorate, Washington, and built Curiosity and CheMin.
 

Contacts and sources:
Guy Webster
Jet Propulsion Laboratory, Pasadena, Calif.

Rachel Hoover
NASA Ames Research Center, Moffett Field, Calif.
 

Two Giant Planets Collided To Form Earth And The Moon Says New Theory

New research, funded by the NASA Lunar Science Institute (NLSI), theorizes that our early Earth and moon were both created together in a giant collision of two planetary bodies that were each five times the size of Mars. 

New research funded by NLSI theorizes that our early Earth and moon were perhaps created in a different manner than has previously been believed. 

Image credit: NASA

This new theory about how Earth’s moon formed is challenging the commonly believed “giant impact hypothesis,” which suggests that Earth's moon formed from a colossal impact of a hypothetical planetary embryo, named Theia, with Earth, early in our Solar System's history.

“Our understanding of the solar system is constantly being refined with each new discovery. This research illustrates the importance of modeling planetary formation to enhance our scientific understanding of the moon and its place in the solar system,” said NLSI Deputy Director Greg Schmidt.

The giant impact hypothesis has been a widely accepted theory for how the Earth-moon system formed. In the giant impact scenario, the moon forms from debris ejected into an Earth-orbiting disk by the collision of a smaller proto-planet with the early Earth. One challenge to this longstanding theory is that a Mars-sized impacting body, whose composition likely would have differed substantially from that of Earth, would have left Earth and the moon with different chemical compositions, when in fact the two are nearly identical.

In the new scenario, the two similar-sized bodies collided, separated, and then re-collided and merged, forming an early Earth surrounded by a disk of material that combined to form the moon. The re-collision and subsequent merger left the two bodies with the similar chemical compositions seen today.
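A rough mass budget shows why two such bodies are plausible parents for the system. Reading "five times the size of Mars" as five times its mass (an interpretation on our part, using standard planetary masses):

    # Back-of-the-envelope check with standard masses, in kilograms.
    MARS, EARTH, MOON = 6.42e23, 5.97e24, 7.35e22
    impactors = 2 * (5 * MARS)                   # two bodies, each ~5 Mars masses
    print(round(impactors / (EARTH + MOON), 2))  # ~1.06: roughly the right total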

The new model was developed by Robin M. Canup of the Southwest Research Institute (SwRI), San Antonio, Texas. Canup's research was motivated by accompanying studies by other scientists on the early dynamical history of the moon; her model accounts for this similarity in composition while also yielding appropriate masses for Earth and the moon.

“The ultimate likelihood of each impact scenario will need to be assessed by improved models of terrestrial planet formation,” Canup said.

The paper, “Forming a Moon with an Earth-like Composition via a Giant Impact,” by R. M. Canup, was published recently in the journal Science online, at the Science Express website. The results were also presented October 15 at the 44th Meeting of the AAS Division for Planetary Sciences in Reno, Nev.


Contacts and sources:
Karen Jenvey
Ames Research Center, Moffett Field, Calif.

$750 Billion In Health Care Dollars Wasted In U.S. Annually

Eliminating excessive spending could mean windfall for US, study suggests

The respected national Institute of Medicine estimates that $750 billion is lost each year to wasteful or excessive health care spending. This sum includes excess administrative costs, inflated prices, unnecessary services and fraud — dollars that add no value to health and well-being.

Redirecting health industry waste

If those wasteful costs could be corralled without sacrificing health care quality, how might that money be better spent?

In a study published in the current online edition of the American Journal of Preventive Medicine, Frederick J. Zimmerman, professor and chair of the Department of Health Policy and Management at the UCLA Fielding School of Public Health, and colleagues outline some of the myriad ways that $750 billion could benefit Americans.

"If cut from current health care expenditures, these funds could provide businesses and households with a huge windfall, with enough money left over to fund deficit reduction on the order of the most ambitious plans in Washington," Zimmerman said. "The money could also cover needed investments in transportation infrastructure, early childhood education, human capital programs, rural development, job retraining programs and much more. And it could transform America with little to no reduction in the quality of, or access to, health care actually provided."

Zimmerman noted that while different observers would likely have different priorities regarding the alternative uses toward which the wasted expenditures could be directed, all would agree that the alternatives proposed in this study have inherent social value.

"When the fastest-growing part of the economy is also the least efficient, the economy as a whole loses its ability over time to support our current living standards," said Jonathan Fielding, a UCLA professor of health policy and management and director of the Los Angeles County Department of Public Health, who is a co-author of the study. "The U.S. has become irrationally attached to its inefficient health care system. Recognizing the opportunity costs of this attachment is the first step in repairing the system."

In the study, the research group, which also included Dr. Steven Teutsch, chief science officer of the Los Angeles County Department of Public Health, and first author Jeffrey C. McCullough, a graduate student at the UCLA Fielding School, presented one scenario of how that money could be used.

For one, the authors propose that more than $410 billion per year — or 55 percent of the savings — could be returned to the private sector for individuals and companies to use as they please; another $202 billion (27 percent) could go toward deficit reduction, yielding a greater reduction than the congressional "super committee" sought and failed to achieve. An additional $104 billion (14 percent) could support additional investments in human capital and physical infrastructure.

"For example," Zimmerman said, "the Head Start program could be doubled in size, universal preschool could be provided, average class size could be reduced from 22 to 13 students. And trained nurses could conduct regular home visits for high-risk pregnancies."

Two percent of the savings, amounting to $18 billion, could promote urban and rural quality of life by improving the built environment surrounding schools, expanding and modernizing public libraries, improving wastewater treatment and providing rural development grants to every small town in the nation. Job-training opportunities would be affordable for nearly 50,000 unemployed persons. And under the research group's scenario, the remaining 2 percent of the savings would be devoted to fully funding an extensive wish list of transportation projects to alleviate road congestion and promote mass transit alternatives.
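Tallying the scenario shows how the pieces fit within the $750 billion total; the figures below simply restate the article's numbers:

    # The study's illustrative allocation of $750 billion (billions of dollars).
    total = 750
    allocations = {
        "returned to the private sector": 410,
        "deficit reduction": 202,
        "human capital and infrastructure": 104,
        "urban and rural quality of life": 18,
    }
    remainder = total - sum(allocations.values())
    print(remainder)  # ~16 billion, the final 2 percent, for transportation
    for use, billions in allocations.items():
        print(f"{use}: {100 * billions / total:.0f} percent")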

Freeing up this money would be no easy task, Fielding warned. These excess expenditures will be difficult to reduce because the costs are spread across many groups, and the financial beneficiaries are coordinated, clear-minded and powerful, he said. Overcoming this resistance will require concerted collective action on the part of many economic sectors, governmental agencies and other organizations that are not used to seeing themselves as sharing interests with the others.

But whatever one's values and preferences, said Zimmerman, "eliminating excess medical care costs provides a monumental opportunity to reallocate those resources to strengthen our international competitiveness, enhance our well-being and build a healthier nation."

The result of redirecting some $750 billion per year, he said, could be transformative for Americans, and the potential uses for these funds are panoramic in both scope and possibility.

"This will not be an easy fight," Zimmerman said. "But we believe reconceptualizing our excess health care spending by looking at its opportunity cost to society is an important first step."

A video of the group's research is available online at www.ajpmonline.org/content/video_pubcasts_collection.

This research was not supported by external grants or funding. The authors report no conflict of interest.

The UCLA Fielding School of Public Health is dedicated to enhancing the public's health by conducting innovative research; training future leaders and health professionals; translating research into policy and practice; and serving local, national and international communities.


Contacts and sources:
Mark Wheeler
University of California - Los Angeles

20,000 Trillion Calculations Each Second: ORNL Debuts Titan Supercomputer, World's Most Powerful Computer

The U.S. Department of Energy's (DOE) Oak Ridge National Laboratory launched a new era of scientific supercomputing today with Titan, a system capable of churning through more than 20,000 trillion calculations each second—or 20 petaflops—by employing a family of processors called graphics processing units, first created for computer gaming. Titan will be 10 times more powerful than ORNL's last world-leading system, Jaguar, while overcoming power and space limitations inherent in the previous generation of high-performance computers.

Oak Ridge National Laboratory is home to Titan, the world’s most powerful supercomputer for open science with a theoretical peak performance exceeding 20 petaflops (quadrillion calculations per second). That kind of computational capability—almost unimaginable—is on par with each of the world’s 7 billion people being able to carry out 3 million calculations per second.
Credit: ORNL 
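The caption's comparison checks out as straightforward arithmetic:

    # 7 billion people, each doing 3 million calculations per second.
    print(7e9 * 3e6 / 1e15)  # 21.0 petaflops, on par with Titan's ~20-petaflop peak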

Titan, which is supported by the Department of Energy, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials and other disciplines and pave the way for a wide range of achievements in science and technology.

The Cray XK7 system contains 18,688 nodes, with each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 graphics processing unit (GPU) accelerator. Titan also has more than 700 terabytes of memory. The combination of central processing units, the traditional foundation of high-performance computers, and more recent GPUs will allow Titan to occupy the same space as its Jaguar predecessor while using only marginally more electricity.
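The CPU-core arithmetic implied by that configuration matches the figure cited later in this article:

    # 18,688 nodes, each with a 16-core Opteron processor.
    print(18_688 * 16)  # 299008 CPU cores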



"One challenge in supercomputers today is power consumption," said Jeff Nichols, associate laboratory director for computing and computational sciences. "Combining GPUs and CPUs in a single system requires less power than CPUs alone and is a responsible move toward lowering our carbon footprint. Titan will provide unprecedented computing power for research in energy, climate change, materials and other disciplines to enable scientific leadership."

Because they handle hundreds of calculations simultaneously, GPUs can churn through many more calculations than CPUs can in a given time. By relying on its 299,008 CPU cores to guide simulations and allowing its new NVIDIA GPUs to do the heavy lifting, Titan will enable researchers to run scientific calculations with greater speed and accuracy.

"Titan will allow scientists to simulate physical systems more realistically and in far greater detail," said James Hack, director of ORNL's National Center for Computational Sciences. "The improvements in simulation fidelity will accelerate progress in a wide range of research areas such as alternative energy and energy efficiency, the identification and development of novel and useful materials and the opportunity for more advanced climate projections."

Titan will be open to select projects while ORNL and Cray work through the process of final system acceptance. The lion's share of access to Titan in the coming year will be allocated through the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment program, better known as INCITE.

Researchers have been preparing for Titan and its hybrid architecture for the past two years, with many ready to make the most of the system on day one. Among the flagship scientific applications on Titan:

Materials Science: The magnetic properties of materials hold the key to major advances in technology. The application WL-LSMS provides a nanoscale analysis of important materials such as steels, iron-nickel alloys and advanced permanent magnets that will help drive future electric motors and generators. Titan will allow researchers to improve the calculations of a material's magnetic states as they vary by temperature.

"The order-of-magnitude increase in computational power available with Titan will allow us to investigate even more realistic models with better accuracy," noted ORNL researcher and WL-LSMS developer Markus Eisenbach.

Combustion: The S3D application models the underlying turbulent combustion of fuels in an internal combustion engine. This line of research is critical to the American energy economy, given that three-quarters of the fossil fuel used in the United States goes to powering cars and trucks, which produce one-quarter of the country's greenhouse gases.

Titan will allow researchers to model large-molecule hydrocarbon fuels such as the gasoline surrogate isooctane; commercially important oxygenated alcohols such as ethanol and butanol; and biofuel surrogates that blend methyl butanoate, methyl decanoate and n-heptane.

"In particular, these simulations will enable us to understand the complexities associated with strong coupling between fuel chemistry and turbulence at low preignition temperatures," noted team member Jacqueline Chen of Sandia National Laboratories. "These complexities pose challenges, but also opportunities, as the strong sensitivities to both the fuel chemistry and to the fluid flows provide multiple control options which may lead to the design of a high-efficiency, low-emission, optimally combined engine-fuel system."


Nuclear Energy: Nuclear researchers use the Denovo application to, among other things, model the behavior of neutrons in a nuclear power reactor. America's aging nuclear power plants provide about a fifth of the country's electricity, and Denovo will help them extend their operating lives while ensuring safety. Titan will allow Denovo to simulate a fuel rod through one round of use in a reactor core in 13 hours; this job took 60 hours on the Jaguar system.
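The quoted runtimes imply a speedup of roughly a factor of 4.6 for that Denovo workload:

    # One fuel-rod simulation: 60 hours on Jaguar vs. 13 hours on Titan.
    print(round(60 / 13, 1))  # ~4.6x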

Climate Change: The Community Atmosphere Model-Spectral Element (CAM-SE) simulates long-term global climate. Improved atmospheric modeling under Titan will help researchers better understand future air quality as well as the effect of particles suspended in the air.

Using a grid of 14-kilometer cells, the new system will be able to simulate from one to five years per day of computing time, up from the three months or so that Jaguar was able to churn through in a day.
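That throughput claim translates to roughly a 4- to 20-fold gain over Jaguar:

    # Jaguar: ~3 simulated months per day; Titan: 1 to 5 simulated years per day.
    jaguar_years_per_day = 3 / 12
    for titan_years_per_day in (1, 5):
        print(round(titan_years_per_day / jaguar_years_per_day))  # 4 and 20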

"As scientists are asked to answer not only whether the climate is changing but where and how, the workload for global climate models must grow dramatically," noted CAM-SE team member Kate Evans of ORNL. "Titan will help us address the complexity that will be required in such models."

ORNL is managed by UT-Battelle for the Department of Energy. The Department of Energy is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

Robotic Prosthetic Arm Allows Amputees To Perform Everyday Tasks

A new validated and reliable measure of how well an adult amputee is able to perform everyday tasks with a prosthetic arm will help physical and occupational therapists, prosthetists, and doctors assess the progress that patients make during training with their new limb.

Researchers have devised standardized methods and criteria for clinicians to grade patients’ performance, speed, and skill using any kind of prosthetic arm to do 18 everyday tasks.
Measurement of success - Researchers have devised standardized methods and criteria for clinicians to grade patients’ performance, speed, and skill using any kind of prosthetic arm to do 18 everyday tasks.
Credit: Courtesy of Linda Resnik, U.S. Department of Veterans Affairs

Amputees with a new prosthetic arm must learn how to use their device to perform everyday tasks that were once second nature. Taking off a shirt becomes a conscious, multistep effort: grasp the shirt, lift the shirt over the head, pull arms through the sleeves, place the shirt on the table, let go of the shirt.

In the best cases of treatment, patients work with teams of doctors, prosthetists, and therapists to learn how their new limbs can help them regain function and quality of life. But clinicians have had few tools to assess whether that crucial teaching/learning process is going well, because of a lack of standardized measurements to use with adults with upper limb amputations. To change that, a research team has unveiled a new index that clinicians can use to assess their patients’ progress. They describe the Activities Measure for Upper Limb Amputees (the AM-ULA) in an article published online Oct. 17 in the Archives of Physical Medicine and Rehabilitation.

“Patients can’t just take a prosthesis out of the box and start using it skillfully,” said lead author and physical therapist Linda Resnik, an associate research professor in public health at Brown University and a research scientist at the Providence Veterans Affairs Medical Center. “The upper limb is used to perform so many types of tasks. Patients need training to make the most of an upper limb prosthesis. Physical and occupational therapists train people to use adaptive equipment and prosthetic devices, teaching them strategies to accomplish functional tasks and guiding them in therapeutic exercises and activities. We need measures to let us know if our patients are improving the way that we expect them to. When they get a new device, what are the benefits? Are they able to do more with it?”

The AM-ULA provides standardized methods and criteria for clinicians to grade patients’ performance, speed, and skill using any kind of prosthetic arm to do 18 everyday tasks. Tasks on the AM-ULA include not only putting on and removing a shirt, but also serving soda from a can, combing hair, tying shoes, and using a spoon. They are the kinds of tasks adults need to be independent in caring for themselves and others.

Resnik, who directs a prosthetics research program at the Providence VA’s new Center for Neurorestoration and Neurotechnology (http://news.brown.edu/pressreleases/2012/10/neurocenter), led the development of the new measure to aid her testing of a prosthetic arm developed by DEKA Research (http://www.dekaresearch.com/deka_arm.shtml). She and the team tested the metric with 49 veterans at VA facilities in Tampa and New York and the U.S. Army’s Fort Sam Houston in Texas.

Some existing measures are self-assessments in which patients report how they are doing on a standardized scale, but Resnik said that while those are essential, they don’t tell clinicians everything they need to know. For instance, an amputee might subconsciously use other body parts to compensate for an insufficiency with a prosthetic arm.

“This particular tool, because of the grading criteria that we use, considers aspects of movement quality that might not be picked up in a self report,” she said. “We look at the amount of body compensation used to perform a task – how much bending or use of other more proximal joints is involved in an activity. That’s important, because we know that upper limb amputees often develop problems in their neck and back.”

Resnik’s team built the new measure much like a prosthetics engineer builds an arm: they designed a prototype based on tasks from other measures, and refined it iteratively using feedback from content experts.

One of the key methods of refining the metric and ensuring its reliability was determining whether two independent raters, observing the same patient performance, arrived at the same ratings or strongly disagreed. The researchers also determined whether raters’ own scores differed when the test was administered twice within a short period of time. Originally the measure included 24 tasks, but ultimately six were dropped because the independent raters came to ratings that were too different, too often.
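The article does not spell out which agreement statistic the team used, but a common choice for this kind of rater comparison is Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch, with made-up ratings:

    # Illustrative inter-rater agreement via Cohen's kappa (hypothetical data;
    # the study's actual reliability analysis may differ).
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - chance) / (1 - chance)

    # Two raters scoring the same ten task performances on a 0-4 scale.
    a = [4, 3, 3, 2, 4, 1, 0, 2, 3, 4]
    b = [4, 3, 2, 2, 4, 1, 1, 2, 3, 3]
    print(round(cohens_kappa(a, b), 2))  # ~0.61: substantial agreement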

The researchers also validated the measure by making sure that the results made sense, based upon what was known clinically. For instance, scores on the AM-ULA were highest for people with amputation of the hand, lower for those with amputation above the elbow, and lowest for those with amputation at the shoulder.

To help clinicians interpret changes in AM-ULA scores in individual patients, the team analyzed the statistics to calculate how much of a change in the overall score could be considered more than just natural “noise” in the data. For example, a patient whose scores change more than 3.7 points between sessions is likely to have truly changed; a shift that large exceeds the random error of measurement.
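One standard way such a threshold is derived (an assumption here; the authors' exact method is not given in this article) is the "minimal detectable change," computed from test-retest reliability and the spread of scores:

    # MDC at 95% confidence from hypothetical reliability numbers.
    import math

    def mdc95(sd, icc):
        sem = sd * math.sqrt(1 - icc)     # standard error of measurement
        return 1.96 * math.sqrt(2) * sem  # two measurements, 95% confidence

    # Values chosen only to show the shape of the calculation.
    print(round(mdc95(sd=5.0, icc=0.93), 1))  # ~3.7 points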

Although Resnik was inspired to develop the metric for her own research, she said she hopes it will become a commonly used, standard tool in the field.

“Outcome measures are needed in all areas of health care, but particularly so in the area of prosthetic rehabilitation,” Resnik said. “High tech prosthetic devices have great promise to help people regain function, but the costs for myoelectric and microprocessor prosthetic devices are substantially higher than those for simpler, body powered devices.”

Most insurance companies restrict the amount of money allowable for prosthetic devices and the amount of therapy that’s available to people, she said. If patients don’t benefit from using a high tech device or from therapy, it’s difficult to justify prescription of such devices or continued therapy services.

“Thus, it is important to have sensitive and responsive methods to objectively assess the benefits of prosthetic devices and training,” Resnik said. “What I hope is that clinicians would use this measure to track how their patients are doing."

Resnik’s co-authors on the study are Matthew Borgia of the Providence VA, Laurel Adams and Jemy Delikat of the James A. Haley Veterans Hospital in Tampa, Roxanne Disla of the New York Harbor Healthcare System, and Christopher Ebner and Lisa Smurr Walters at the Brooke Army Medical Center at Fort Sam Houston.

The U.S. Department of Veterans Affairs provided funding for the study.

How Silver Can Turn Your Skin Blue

Researchers from Brown University have shown for the first time how ingesting too much silver can cause argyria, a rare condition in which patients' skin turns a striking shade of grayish blue.

Argyria
Credit: Wikipedia

"It's the first conceptual model giving the whole picture of how one develops this condition," said Robert Hurt, professor of engineering at Brown and part of the research team. "What's interesting here is that the particles someone ingests aren't the particles that ultimately cause the disorder."

Too much of a good thing - Scientists have known for years argyria — a condition that turns the skin blue — had something to do with silver. Brown scientists have figured out the complex chemistry behind it.
Credit: Brown University

Scientists have known for years argyria had something to do with silver. The condition has been documented in people who (ill-advisedly) drink antimicrobial health tonics containing silver nanoparticles and in people who have had alternative medical treatments involving silver. Tissue samples from patients showed silver particles actually lodged deep in the skin, but it wasn't clear how they got there.

As it turns out, argyria is caused by a complex series of chemical reactions, Hurt says. His paper on the subject, authored with Brown University colleagues Jingyu Liu, Zhongying Wang, Frances Liu, and Agnes Kane, was published online earlier this month in the journal ACS Nano.

Hurt and his team show that nanosilver is broken down in the stomach, absorbed into the bloodstream as a salt and finally deposited in the skin, where exposure to light turns the salt back into silver metal and creates the telltale bluish hue. That final stage, oddly, involves the same photochemical reaction used to develop black-and-white photographs.

From silver to salt and back again

Hurt and his team have been studying the environmental impact of silver, specifically silver nanoparticles, for years. They've found that nanosilver tends to corrode in acidic environments, giving off charged ions — silver salts — that can be toxic in large amounts. Hurt's graduate student, Jingyu Liu (now a postdoctoral researcher at the National Institute of Standards and Technology), thought those same toxic ions might also be produced when silver enters the body, and could play a role in argyria.

To find out, the researchers mixed a series of chemical treatments that could simulate what might happen to silver inside the body. One treatment simulated the acidic environment in the gastrointestinal tract; one mimicked the protein content of the bloodstream; and a collagen gel replicated the base membranes of the skin.

Robert Hurt: “The particles someone ingests aren’t the particles that ultimately cause the disorder.”

Credit: Brown University

They found that nanosilver corrodes in stomach acid in much the same way it does in other acidic environments. Corrosion strips silver atoms of electrons, forming positively charged silver salt ions. Those ions can easily be taken into the bloodstream through channels that absorb other types of salt. That's a crucial step, Hurt says. Silver metal particles themselves aren't terribly likely to make it from the GI tract to the blood, but when some of them are transformed into a salt, they're ushered right through.

From there, Hurt and his team showed that silver ions bind easily with sulfur present in blood proteins, which would give them a free ride through the bloodstream. Some of those ions would eventually end up in the skin, where they'd be exposed to light.

To re-create this end stage, the researchers shined ultraviolet light on collagen gel containing silver ions. The light caused electrons from the surrounding materials to jump onto the unstable ions, returning them to their original state — silver metal. This final reaction is ultimately what turns patients' skin blue. The photoreaction is similar to the way silver is used in black and white photography. When exposed to light, silver salts on a photographic film reduce to silver metal and darken, creating an image.
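Written schematically, the round trip the researchers describe amounts to an oxidation in the stomach followed by a light-driven reduction in the skin. The half-reactions below are a simplified summary (the exact species involved in vivo are more complex, and the stomach-acid oxidant shown is an assumption on our part):

    % Simplified redox summary of the proposed argyria pathway.
    \begin{align*}
    \text{stomach (corrosion):} \quad & 4\,\mathrm{Ag_{(s)}} + \mathrm{O_2} + 4\,\mathrm{H^+} \rightarrow 4\,\mathrm{Ag^+} + 2\,\mathrm{H_2O} \\
    \text{skin (photoreduction):} \quad & \mathrm{Ag^+} + e^- \xrightarrow{h\nu} \mathrm{Ag_{(s)}}
    \end{align*}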

Implications for nanosilver safety

Despite its potential toxicity, silver has been valued for centuries for its ability to kill germs, which is why silver nanoparticles are used today in everything from food packaging to bandages. Regulators have established limits for occupational exposure to silver, but there are questions as to whether there should be special limits on the nanoparticle form.

This research "would be one piece of evidence that you could treat nanoparticles in the same way as other forms of silver," Hurt says.

That's because the bioavailable form of silver — the form that is absorbed into the bloodstream — is the silver salt that's made in the stomach. Any silver metal that's ingested is just the raw material to make that bioavailable salt. So ingesting silver in any form, be it nano or not, would have basically the same effect, Hurt said.

"The concern in this case is the total dose of silver, not what form it's in," Hurt said. "This study implies that silver nanoparticles will be less toxic than an equivalent amount of silver salt, at least in this exposure scenario."

The National Science Foundation and the Superfund Research Program of the National Institute of Environmental Health Sciences funded the research.


Contacts and sources:
Kevin Stacey
Brown University

Monday, October 29, 2012

Brainless Slime Mold Remembers: It Excretes Memories

Can you have a memory if you don't have a brain? The question has been answered with the discovery that brainless slime molds use excreted chemicals as a memory system.

The finding by University of Sydney researchers is strong support for the theory that the first step toward the evolution of memory was the use of feedback from chemicals.

The research found that a single-celled organism with no brain uses an external spatial memory to navigate through a complex environment.
Image: Tanya Latty

"We have shown for the first time that a single-celled organism with no brain uses an external spatial memory to navigate through a complex environment," said Christopher Reid from the University's School of Biological Sciences.

The research, led by Reid, with colleagues from the school and a colleague from Toulouse University, is published in the Proceedings of the National Academy of Sciences journal today.

"Our discovery is evidence of how the memory of multi-cellular organisms may have evolved - by using external chemical trails in the environment before the development of internal memory systems," said Reid.

"Results from insect studies, for example ants leaving pheromone trails, have already challenged the assumption that navigation requires learning or a sophisticated spatial awareness. We've now gone one better and shown that even an organism without a nervous system can navigate a complex environment, with the help of externalized memory."

The research method was inspired by robots designed to respond only to feedback from their immediate environment to navigate obstacles and avoid becoming trapped. This 'reactive navigation' method allows robots to navigate without a programmed map or the ability to build one; slime molds use the same process.

The researchers used a classic test of independent navigational ability, commonly used in robotics, requiring the slime mold to navigate its way out of a U-shaped barrier.

As the slime mold (Physarum polycephalum) moves it leaves behind a thick mat of non-living, translucent, extracellular slime.

Physarum polycephalum
Credit: Wikipedia

When it is foraging, the slime mold avoids areas that it has already 'slimed', suggesting it can sense extracellular slime upon contact and will recognize and avoid areas it has already explored.

"This shows it is using a form of external spatial memory to more efficiently explore its environment," said Reid.

"We then upped the ante for the slime molds by challenging them with the U-shaped trap problem to test their navigational ability in a more complex situation than foraging. We found that, as we had predicted, its success was greatly dependent on being able to apply its external spatial memory to navigate its way out of the trap."

In simple environments the use of externalized spatial memory is not necessary for effective navigation but in more complex situations it significantly enhances the organism's chance of success, just as it does for robots using reactive navigation.
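The navigation strategy described above is simple enough to capture in a few lines. The grid-world sketch below is an illustration of the principle, not the study's model: the agent senses only its four neighbors, "slimes" every cell it visits, and prefers the least-slimed neighbor nearest the goal. With the external memory switched off, it oscillates forever against the inside of a U-shaped barrier; with it on, the trail steers it back out.

    # Reactive navigation with externalized ("slime") memory, illustrative only.
    from collections import Counter

    # A U-shaped barrier opening upward; the goal lies beyond its closed side.
    WALLS = ({(x, 3) for x in range(2, 7)} |
             {(2, y) for y in range(3, 7)} |
             {(6, y) for y in range(3, 7)})

    def navigate(start, goal, use_memory, steps=200):
        def dist(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        pos, slime = start, Counter([start])
        for _ in range(steps):
            if pos == goal:
                return True
            x, y = pos
            moves = [m for m in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                     if m not in WALLS]
            if use_memory:
                # Prefer unslimed ground; break ties toward the goal.
                pos = min(moves, key=lambda m: (slime[m], dist(m)))
            else:
                pos = min(moves, key=dist)  # purely reactive, no memory
            slime[pos] += 1
        return False

    print(navigate((4, 5), (4, 0), use_memory=False))  # False: trapped in the U
    print(navigate((4, 5), (4, 0), use_memory=True))   # True: escapes via its trail

Without the trail, the greedy agent shuttles between the two cells nearest the goal inside the trap; with the trail, previously visited cells become less attractive, so it backs out of the U and goes around, just as the slime mold did.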

Christopher Reid's work also appears in the Macleay Museum's current exhibition, The Meaning of Life, which features prominent research from the School of Biological Sciences over its 50-year history.


Contacts and sources:
University of Sydney