Unseen Is Free
Friday, April 18, 2014

Laser Beams To Trigger Rain And Lightning

The adage "Everyone complains about the weather but nobody does anything about it," may one day be obsolete if researchers at the University of Central Florida's College of Optics & Photonics and the University of Arizona further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning.

 This is an illustration of the dressed filament that fuels the high-intensity laser to travel farther.
Credit: Courtesy of University of Central Florida College of Optics and Photonics

The solution? Surround the beam with a second beam to act as an energy reservoir, sustaining the central beam to greater distances than previously possible. The secondary "dress" beam refuels and helps prevent the dissipation of the high-intensity primary beam, which on its own would break down quickly. A report on the project, "Externally refueled optical filaments," was recently published in Nature Photonics.

Water condensation and lightning activity in clouds are linked to large amounts of static charged particles. Stimulating those particles with the right kind of laser holds the key to possibly one day summoning a shower when and where it is needed.

Lasers can already travel great distances but "when a laser beam becomes intense enough, it behaves differently than usual – it collapses inward on itself," said Matthew Mills, a graduate student in the Center for Research and Education in Optics and Lasers (CREOL). "The collapse becomes so intense that electrons in the air's oxygen and nitrogen are ripped off creating plasma – basically a soup of electrons."

At that point, the plasma immediately tries to spread the beam back out, causing a struggle between the spreading and collapsing of an ultra-short laser pulse. This struggle is called filamentation, and creates a filament or "light string" that only propagates for a while until the properties of air make the beam disperse.


"Because a filament creates excited electrons in its wake as it moves, it artificially seeds the conditions necessary for rain and lightning to occur," Mills said. Other researchers have caused "electrical events" in clouds, but not lightning strikes.

But how do you get close enough to direct the beam into the cloud without being blasted to smithereens by lightning?

"What would be nice is to have a sneaky way which allows us to produce an arbitrary long 'filament extension cable.' It turns out that if you wrap a large, low intensity, doughnut-like 'dress' beam around the filament and slowly move it inward, you can provide this arbitrary extension," Mills said.

"Since we have control over the length of a filament with our method, one could seed the conditions needed for a rainstorm from afar. Ultimately, you could artificially control the rain and lightning over a large expanse with such ideas."

So far, Mills and fellow graduate student Ali Miri have been able to extend the pulse from 10 inches to about 7 feet. And they're working to extend the filament even farther.

"This work could ultimately lead to ultra-long optically induced filaments or plasma channels that are otherwise impossible to establish under normal conditions," said professor Demetrios Christodoulides, who is working with the graduate students on the project.

"In principle such dressed filaments could propagate for more than 50 meters or so, thus enabling a number of applications. This family of optical filaments may one day be used to selectively guide microwave signals along very long plasma channels, perhaps for hundreds of meters."

The technique could also be applied in long-distance sensors and spectrometers that identify chemical composition. Development of the technology was supported by a $7.5 million grant from the Department of Defense.


Contacts and sources: 
University of Central Florida

How The Brain Pays Attention

Neuroscientists identify a brain circuit that’s key to shifting our focus from one object to another.

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

Screen shots from a video of overlapping images of faces and houses, shown to subjects who were asked to pay attention to one or the other.
Video by Daniel Baldauf, screen shots colorized by MIT News

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.


Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
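The tagging idea can be sketched numerically: if each stimulus stream flickers at its own rate, any brain signal tracking that stream shows spectral power at the stream's frequency. A minimal synthetic sketch, with the sample rate, amplitudes, and noise level invented for illustration (only the 2.0 Hz and 1.5 Hz tag rates come from the study):

```python
import numpy as np

# Synthetic sketch of frequency tagging: each stimulus stream flickers
# at its own rate (2.0 Hz for faces, 1.5 Hz for houses, as in the study),
# so a brain signal tracking a stream shows power at that frequency.
rng = np.random.default_rng(0)
fs = 100.0                          # assumed sample rate, Hz
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated sensor data

faces = np.sin(2 * np.pi * 2.0 * t)           # entrainment to face stream
houses = 0.4 * np.sin(2 * np.pi * 1.5 * t)    # weaker entrainment to houses
signal = faces + houses + rng.normal(0, 1, t.size)  # plus sensor noise

# Power spectrum: the tagged frequencies stand out above the noise floor,
# so responses to each stimulus can be separated in one recording.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
peak = freqs[np.argmax(power)]
print(f"strongest spectral peak: {peak:.2f} Hz")  # 2.00 Hz, the face tag
```

Because the two tags sit at different frequencies, activity driven by each stream can be told apart even though both stimuli occupy the same location.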

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.
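A lag of that kind can be recovered with a simple cross-correlation between the two regions' time series. A toy sketch on purely synthetic data, where the "FFA" trace is just a delayed, noisy copy of the "IFJ" trace (nothing here is real MEG data):

```python
import numpy as np

# Toy illustration of lag estimation (synthetic data, not real MEG):
# model "FFA" as a noisy copy of "IFJ" delayed by 20 ms, then recover
# the delay from the peak of the cross-correlation.
rng = np.random.default_rng(1)
fs = 1000                         # 1 kHz sampling gives 1 ms resolution
n = 5000
ifj = rng.normal(0, 1, n)         # driving signal ("IFJ")
delay = 20                        # in samples, i.e. 20 ms at 1 kHz
ffa = np.roll(ifj, delay) + 0.5 * rng.normal(0, 1, n)  # delayed follower

# Full cross-correlation; the offset of its peak from zero lag is the
# estimated conduction delay.
xcorr = np.correlate(ffa, ifj, mode="full")
lags = np.arange(-n + 1, n)       # lag axis in samples (= milliseconds)
estimated = lags[np.argmax(xcorr)]
print(f"estimated lag: {estimated} ms")  # recovers the 20 ms delay
```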

The MEG scanner, as well as the study's "elegant design," was critical to discovering this relationship, says Robert Knight, a professor of psychology and neuroscience at the University of California at Berkeley who was not part of the research team.

“Functional MRI gives hints of connectivity,” Knight says, “but the time course is way too slow to show these millisecond-scale frequencies and to establish what they show, which is that the inferior frontal lobe is the prime driver.”

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.


Contacts and sources: 
By Anne Trafton 
MIT News Office

Alien Life Confirmed In Anuradhapura Meteorite

The discovery of biological structures in the Anuradhapura meteorite confirms life as a cosmic phenomenon.

Unmistakably biological structures on the micrometre scale were discovered as an integral part of the interior structure of a meteorite that fell near Anuradhapura, Sri Lanka (Thambuttegama) on 8 December 2013.

Credit: Buckingham Centre for Astrobiology

Carbonaceous structures of the type shown on the top left were discovered by Ms A.D.M. Damayanthi using equipment at Sri Lanka’s modern scientific research centre, the Sri Lanka Institute of Nanotechnology (SLINTEC), currently headed by Professor Gehan Amaratunga of Cambridge University.

The present work was carried out in collaboration with Keerthi Wickramarathne at the Medical Research Institute of Sri Lanka, and the project was conducted under the direction of Professor Chandra Wickramasinghe, Director of the Buckingham Centre for Astrobiology. A preliminary report of this work has been published in the Journal of Cosmology (http://journalofcosmology.com/JOC23/Anuradhapura.pdf).

Professor Wickramasinghe said that the new results give further support to the Hoyle–Wickramasinghe theory of panspermia, showing that extraterrestrial life exists on a cosmic scale.

Asteroid Impact Glass Stores Biodata For Millions Of Years

Bits of plant life encapsulated in molten glass by asteroid and comet impacts millions of years ago give geologists information about climate and life forms on the ancient Earth. Scientists exploring large fields of impact glass in Argentina suggest that what happened on Earth might well have happened on Mars millions of years ago. Martian impact glass could hold traces of organic compounds.

Asteroid and comet impacts can cause widespread ecological havoc, killing off plants and animals on regional or even global scales. But new research from Brown University shows that impacts can also preserve the signatures of ancient life at the time of an impact.

A snapshot of ancient environmental conditions: The scorching heat produced by asteroid or comet impacts can melt tons of soil and rock, some of which forms glass as it cools. Some of that glass preserves bits of ancient plant material.
Credit: Brown University

A research team led by Brown geologist Pete Schultz has found fragments of leaves and preserved organic compounds lodged inside glass created by several ancient impacts in Argentina. The material could provide a snapshot of environmental conditions at the time of those impacts. The find also suggests that impact glasses could be a good place to look for signs of ancient life on Mars.

The work is published in the latest issue of the journal Geology.

The scorching heat produced by asteroid or comet impacts can melt tons of soil and rock, some of which forms glass as it cools. The soil of eastern Argentina, south of Buenos Aires, is rife with impact glass created by at least seven different impacts that occurred between 6,000 years ago and 9 million years ago, according to Schultz. One of those impacts, dated to around 3 million years ago, coincides with the disappearance of 35 animal genera, as reported in the journal Science a few years back.

“We know these were major impacts because of the shocked minerals trapped inside with plant materials,” Schultz said. “These glasses are present in different layers of sediment throughout an area about the size of Texas.”

Within glass associated with two of those impacts — one from 3 million years ago and one from 9 million years ago — Schultz and his colleagues found exquisitely preserved plant matter. “These glasses preserve plant morphology from macro features all the way down to the micron scale,” Schultz said. “It’s really remarkable.”

The glass samples contain centimeter-size leaf fragments, including intact structures like papillae, tiny bumps that line leaf surfaces. Bundles of vein-like structures found in several samples are very similar to modern pampas grass, a species common to that region of Argentina.

Chemical analysis of the samples also revealed the presence of organic hydrocarbons, the chemical signatures of living matter.

To understand how these structures and compounds could have been preserved, Schultz and his colleagues tried to replicate that preservation in the lab. They mixed pulverized impact glass with fragments of pampas grass leaves and heated the mixture at various temperatures for various amounts of time. The experiments showed that plant material was preserved when the samples were quickly heated to above 1,500 degrees Celsius.

It appears, Schultz says, that water in the exterior layers of the leaves insulates the inside layers, allowing them to stay intact. “The outside of the leaves takes it for the interior,” he said. “It’s a little like deep frying. The outside fries up quickly but the inside takes much longer to cook.”

Implications for Mars

If impact glass can preserve the signatures of life on Earth, it stands to reason that it could do the same on Mars, Schultz says. And the soil conditions in Argentina that contributed to the preservation of samples in this study are not unlike soils found on Mars.

The Pampas region of Argentina is covered with thick layers of windblown sediment called loess. Schultz believes that when an object impacts this sediment, globs of melted material roll out from the edge of the impact area like molten snowballs. As they roll, they collect material from the ground and cool quickly — the dynamics that the lab experiments showed were important for preservation. After the impact, those glasses are slowly covered over as dust continues to accumulate. That helps to preserve both the glasses and the stowaways within them for long periods — in the Argentine case, for millions of years.

Much of the surface of Mars is covered in a loess-like dust, and the same mechanism that preserved the Argentine samples could also work on Mars.

“Impact glass may be where the 4 billion-year-old signs of life are hiding,” Schultz said. “On Mars they’re probably not going to come out screaming in the form of a plant, but we may find traces of organic compounds, which would be really exciting.”

Landscape Frozen In Time For Three Million Years

NSF-funded researchers say the massive ice sheet has fixed the landscape in place, rather than scouring it away
Some of the landscape underlying the massive Greenland ice sheet may have been undisturbed for almost 3 million years, ever since the island became completely ice-covered, according to researchers funded by the National Science Foundation (NSF).

Basing their discovery on an analysis of the chemical composition of silts recovered from the bottom of an ice core more than 3,000 meters long, the researchers argue that the find suggests "pre-glacial landscapes can remain preserved for long periods under continental ice sheets."

A camp at the edge of the Greenland ice sheet.
Credit: Paul Bierman, University of Vermont
In the time since the ice sheet formed "the soil has been preserved and only slowly eroded, implying that an ancient landscape underlies 3,000 meters of ice at Summit, Greenland," they conclude.

They add that "these new data are most consistent with [the concept of] a continuous cover of Summit… by ice … with at most brief exposure and minimal surface erosion during the warmest or longest interglacial [periods]."

They also note that fossils found in northern Greenland indicated there was a green and forested landscape prior to the time that the ice sheet began to form. The new discovery indicates that even during the warmest periods since the ice sheet formed, the center of Greenland remained stable, allowing the landscape to be locked away, unmodified, under ice through millions of years of cyclical warming and cooling.

The ice edge meets the landscape in modern Greenland.
Credit: Paul Bierman, University of Vermont

"Rather than scraping and sculpting the landscape, the ice sheet has been frozen to the ground, like a giant freezer that's preserved an antique landscape," said Paul R. Bierman, of the Department of Geology and Rubenstein School of the Environment and Natural Resources at the University of Vermont and lead author of the paper.

Bierman's work was supported by two NSF grants made by its Division of Polar Programs, 1023191 and 0713956. Thomas A. Neumann, a co-author on the paper, also of the University of Vermont and now at NASA's Goddard Space Flight Center, was a co-principal investigator on the latter grant.

Researchers from Idaho State University, the University of California, Santa Barbara, and the Scottish Universities Environmental Research Centre at the University of Glasgow also contributed to the paper.

The research also included contributions from two graduate students, both supported by NSF, one of whom was supported by the NSF Graduate Research Fellowships Program.

The team's analysis was published online on April 17 and will appear in Science magazine the following week.

Understanding how Greenland's ice sheet behaved in the past, in particular how much of it melted during previous warm periods and how it re-grew, is important to developing a scientific understanding of how the ice sheet might behave in the future.

As global average temperatures rise, scientists are concerned about how the ice sheets in Greenland and Antarctica will respond. Vast amounts of freshwater are stored in the ice and may be released by melting, which would raise sea levels, perhaps by many meters.

The magnitude and rate of sea level rise are unknown factors in climate models.

The team based its analysis on material taken from the bottom of an ice core retrieved by the NSF-funded Greenland Ice Sheet Project Two (GISP2), which drilled down into the ice sheet near NSF's Summit Station. An ice core is a cylinder of ice in which individual layers, compacted from snowfall over millennia, can be observed and sampled.

Summit is situated at an elevation of 3,216 meters (10,551 feet) above sea level.

In the case of GISP2, the core itself, taken from the center of the present-day Greenland ice sheet, was 3,054 meters (10,000 feet) deep. It provides a history of the balance of gases that made up the atmosphere at the time the snow fell, as well as movements in the ice sheet, stretching back more than 100,000 years. It also contains a mix of silts and sediments at its base, where ice and rock come together.

The scientists looked at the proportions of carbon, nitrogen and beryllium-10, an isotope produced by cosmic rays, in sediments taken from the bottom 13 meters (42 feet) of the GISP2 ice core.

They also compared levels of the various elements with soil samples taken in Alaska, leading them to the conclusion that the landscape under the ice sheet was indeed an ancient one that predates the advent of the ice sheet. The soil comparisons were supported by two NSF grants: 0806394 and 0806399.



Contacts and sources:
Joshua Brown, University of Vermont

Principal Investigators
Paul Bierman, University of Vermont 

Thursday, April 17, 2014

Food Shortages Coming, Warns Top Scientist

The world is less than 40 years away from a food shortage that will have serious implications for people and governments, according to a top scientist at the U.S. Agency for International Development.

"For the first time in human history, food production will be limited on a global scale by the availability of land, water and energy," said Dr. Fred Davies, senior science advisor for the agency's bureau of food security. "Food issues could become as politically destabilizing by 2050 as energy issues are today."

This is Dr. Fred Davies, US Agency for International Development senior science advisor for the agency's bureau of food security and a Texas A&M AgriLife Regents Professor of Horticultural Sciences.
Credit: Texas A&M AgriLife Research photo by Kathleen Phillips

Davies, who also is a Texas A&M AgriLife Regents Professor of Horticultural Sciences, addressed the North American Agricultural Journalists meeting in Washington, D.C. on the "monumental challenge of feeding the world."

He said the world population will increase 30 percent to 9 billion people by mid-century. That would call for a 70 percent increase in food to meet demand.
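The arithmetic behind those figures is easy to check: a 30 percent rise that ends at 9 billion implies a baseline population of roughly 6.9 billion. This quick calculation assumes nothing beyond the numbers cited in the article:

```python
# Sanity check of the cited figures: a 30 percent increase that ends at
# 9 billion implies a baseline of 9 / 1.30, about 6.9 billion people.
# Food demand is projected to grow faster (70 percent) than population
# (30 percent), typically because diets shift as incomes rise.
target_population = 9.0                 # billions, mid-century projection
population_growth = 0.30                # 30 percent increase
baseline = target_population / (1 + population_growth)
print(f"implied baseline population: {baseline:.2f} billion")  # 6.92

food_growth = 0.70
print(f"required food output: {1 + food_growth:.2f}x today's")  # 1.70x
```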

"But resource limitations will constrain global food systems," Davies added. "The increases currently projected for crop production from biotechnology, genetics, agronomics and horticulture will not be sufficient to meet food demand." Davies said the ability to discover ways to keep pace with food demand has been curtailed by cutbacks in research spending.

"U.S. agricultural productivity growth has averaged less than 1.2 percent per year between 1990 and 2007," he said. "More efficient technologies and crops will need to be developed -- and equally important, better ways of applying these technologies locally for farmers -- to address this challenge." Davies said when new technologies are developed, they often do not reach small-scale farmers worldwide.

"A greater emphasis is needed on high-value horticultural crops," he said. "Those create jobs and economic opportunities for rural communities and enable more profitable, intensive farming." Horticultural crops, Davies noted, account for 50 percent of the farm-gate value of all crops produced in the U.S.

He also made the connection between the consumption of fruits and vegetables and chronic disease prevention and pointed to research centers in the U.S. that are making links between farmers, biologists and chemists, grocers, health care practitioners and consumers. That connection, he suggested, also will be vital in the push to grow enough food to feed people in coming years.

"Agricultural productivity, food security, food safety, the environment, health, nutrition and obesity -- they are all interconnected," Davies said. One in eight people worldwide, he added, already suffers from chronic undernourishment, and 75 percent of the world's chronically poor are in the mid-income nations such as China, India, Brazil and the Philippines.

"The perfect storm for horticulture and agriculture is also an opportunity," Davies said. "Consumer trends such as views on quality, nutrition, production origin and safety impact what foods we consume. Also, urban agriculture favors horticulture." For example, he said, the fastest-growing segment of new farmers in California is female, non-Anglo growers who are "intensively growing horticultural crops on small acreages."
 

Contacts and sources:
Kathleen Phillips
Texas A&M AgriLife Communications

Ebola Outbreak Focuses Need For Global Surveillance Strategies

EcoHealth Alliance, a nonprofit organization that focuses on conservation and global public health issues, published a comprehensive review today examining the current state of knowledge of the deadly Ebola and Marburg viruses.

The review calls for improved global surveillance strategies to combat the emergence of infectious diseases such as the recent outbreak of Ebola in West Africa, which has claimed the lives of 122 people in Guinea and Liberia. According to the World Health Organization (WHO), the Ebola virus can kill up to 90 percent of those who contract the disease. No cure or vaccine exists for Ebola hemorrhagic fever, and public health officials are concerned about further spread of the virus in the region.

Bushmeat being prepared for cooking in Ghana. Human consumption of equatorial animals in Africa in the form of bushmeat has been linked to the transmission of diseases to people, including Ebola.
Credit: Wikipedia

The virus is transmitted from person to person through contact with infected blood or bodily fluids, but the origin of each outbreak is ultimately linked to wildlife. The consumption of bushmeat in Guinea may possibly serve as the transmission point from wildlife to human populations for the disease. Guinea has forbidden the sale and consumption of bats, which serve as natural reservoirs of the virus, and warned against eating rats and monkeys in its effort to keep the illness from spreading.

Since the late 1970s, Ebola outbreaks have sporadically erupted in various parts of Africa, and experts report this is the worst outbreak in the past 7 years. Historically, Ebola outbreaks have been contained through quarantine and public health measures, but where and when the next outbreak will emerge still remains a mystery. 

"Our scientists have developed a strategy to predict where the next new viruses from wildlife will emerge and affect people. These zoonotic viruses cause significant loss of life, create panic and disrupt the economy of an entire region," said Dr. Peter Daszak, Disease Ecologist and President of EcoHealth Alliance. "Our research shows that focusing surveillance on viruses in bats, rodents and non-human primates (a 'SMART' surveillance approach), and understanding what's disrupting these species' ecology, is the best strategy to predict and prevent local outbreaks and pandemic disease," Daszak continued.

Electron micrograph of an Ebola virus virion
Credit: Wikipedia

The study, published by EcoHealth Alliance's Dr. Kevin Olival and Dr. David Hayman from Massey University, reviewed all of the current literature on filoviruses - the class of viruses that include both Ebola and Marburg virus - and took a critical look at the ecological and virological methods needed to understand these viruses to protect human health. 

As part of the study, EcoHealth Alliance's modeling team mapped the geographic distribution of all known bat hosts for these viruses and found that Guinea and Liberia lie within the expected range of Zaire Ebola, the strain responsible for the current outbreak. The team highlighted the need for more unified and improved global surveillance strategies to monitor outbreak events in wildlife around the globe.

"We are in the beginning stages of developing early warning systems to identify disease 'spillover' events from wildlife to humans before they occur, but much work remains to be done. It's an exciting time where ecology, disease surveillance, mathematical modeling, and policy are all critically converging towards the goal of pandemic prevention," said Dr. Kevin Olival, Senior Research Scientist at EcoHealth Alliance. "Our work on bat ecology is specifically important since we know that they are reservoirs for a number of viruses, including Ebola and Marburg. Bat species are critical to the health of ecosystems, and disease studies must be conducted with conservation as an integral component," he continued.

EcoHealth Alliance continues to work around the globe to study and uncover the ecological drivers of disease emergence. An estimated 15 million people die from infectious disease each year, more than half of them children. For that reason alone, EcoHealth Alliance's work to find the wildlife reservoirs of potentially deadly diseases, and to discover how spillovers occur, is crucially important conservation-focused research.

The paper, "Filoviruses in Bats: Current Knowledge and Future Directions," was published in the journal Viruses and can be downloaded at http://www.mdpi.com/1999-4915/6/4/1759.




Contacts and sources:
Anthony M. Ramos
EcoHealth Alliance

Earth-Sized Planet In Habitable Zone Discovered

Notre Dame astrophysicist Justin R. Crepp and researchers from NASA working with the Kepler space mission have detected an Earth-like planet orbiting in the habitable zone of a cool star. The planet, found using the Kepler Space Telescope, has been designated Kepler-186f and is 1.11 times the radius of Earth. Their research, titled "An Earth-sized Planet in the Habitable Zone of a Cool Star," will be published in the journal Science today.



Kepler-186f is part of a multi-planet system around the star Kepler-186, which has five planets, one of which is in the center of the habitable zone, the region around a star within which a planet can sustain liquid water on its surface. While there have been other discoveries of Earth-sized and smaller planets, those planets orbit too close to their host stars for water to exist in liquid form. Findings from three years of data show that the intensity and spectrum of radiation reaching Kepler-186f indicate that the planet could have an Earth-like atmosphere and water at its surface, likely in liquid form.

Credit: Notre Dame

“The host star, Kepler 186, is an M1-type dwarf star, which means it will burn hydrogen forever, so there is ample opportunity to develop life around this particular star. And because it has just the right orbital period, water may exist in a liquid phase on this planet,” said Crepp, who is the Frank M. Freimann Assistant Professor of Physics in the College of Science.

"What makes this finding particularly compelling is that this Earth-sized planet, one of five orbiting this star, which is cooler than the Sun, resides in a temperate region where water could exist in liquid form," says Elisa Quintana of the SETI Institute and NASA Ames Research Center who led the paper published in the current issue of the journal Science. The region in which this planet orbits its star is called the habitable zone, as it is thought that life would most likely form on planets with liquid water.

Steve Howell, Kepler's Project Scientist and a co-author on the paper, adds that neither Kepler (nor any telescope) is currently able to directly spot an exoplanet of this size and proximity to its host star. "However, what we can do is eliminate essentially all other possibilities so that the validity of these planets is really the only viable option."

With such a small host star, the team employed a technique that eliminated the possibility that either a background star or a stellar companion could be mimicking what Kepler detected. To do this, the team obtained extremely high spatial resolution observations from the eight-meter Gemini North telescope on Mauna Kea in Hawai`i using a technique called speckle imaging, as well as adaptive optics (AO) observations from the ten-meter Keck II telescope, Gemini's neighbor on Mauna Kea. Together, these data allowed the team to rule out sources close enough to the star's line-of-sight to confound the Kepler evidence, and conclude that Kepler's detected signal has to be from a small planet transiting its host star.

The artistic concept of Kepler-186f is the result of scientists and artists collaborating to help imagine the appearance of these distant worlds.

Credit: NASA Ames/SETI Institute/JPL-Caltech

"The Keck and Gemini data are two key pieces of this puzzle," says Quintana. "Without these complementary observations we wouldn't have been able to confirm this Earth-sized planet."

The Gemini "speckle" data directly imaged the system to within about 400 million miles (about 4 AU, approximately equal to the orbit of Jupiter in our solar system) of the host star and confirmed that there were no other stellar size objects orbiting within this radius from the star. Augmenting this, the Keck AO observations probed a larger region around the star but to fainter limits. According to Quintana,

"These Earth-sized planets are extremely hard to detect and confirm, and now that we've found one, we want to search for more. Gemini and Keck will no doubt play a large role in these endeavors."
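The quoted scale is easy to sanity-check: 4 AU at roughly 500 light-years (the distance to Kepler-186 given later in the article) subtends only a few hundredths of an arcsecond, which is why speckle imaging on an 8-meter telescope was required. A minimal sketch of the small-angle arithmetic, using the numbers quoted in the article:

```python
# Small-angle estimate of the separation probed by the Gemini speckle data.
# Numbers are the ones quoted in the article (4 AU, ~500 light-years).
AU_PER_LIGHT_YEAR = 63_241.1   # astronomical units per light-year
ARCSEC_PER_RADIAN = 206_265.0  # arcseconds per radian

def angular_separation_arcsec(separation_au, distance_ly):
    """Angular separation (arcsec) of two points separation_au apart,
    seen from distance_ly light-years away (small-angle approximation)."""
    distance_au = distance_ly * AU_PER_LIGHT_YEAR
    return (separation_au / distance_au) * ARCSEC_PER_RADIAN

print(f"{angular_separation_arcsec(4, 500):.3f} arcsec")  # about 0.026 arcsec
```

Resolving a few hundredths of an arcsecond is well beyond ordinary seeing-limited imaging, consistent with the need for speckle and adaptive-optics techniques on Gemini and Keck.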

The host star, Kepler-186, is an M1-type dwarf star in the constellation Cygnus, relatively close to our solar system at about 500 light-years. The star is very dim, over half a million times fainter than the faintest stars we can see with the naked eye. Five small planets have been found orbiting it, four of which are in very short-period orbits and are very hot.

This animation depicts Kepler-186f, the first validated Earth-size planet orbiting a distant star in the habitable zone -- a range of distances from a star where liquid water might pool on the surface of an orbiting planet. The discovery of Kepler-186f confirms that Earth-size planets exist in the habitable zone of other stars and signals a significant step closer to finding a world similar to Earth. Kepler-186f is less than ten percent larger than Earth in size, but its mass and composition are not known.
 Credit: Sean Raymond. 

The planet designated Kepler-186f, however, is Earth-sized and orbits within the star's habitable zone. The Kepler evidence for this planetary system comes from the detection of planetary transits—tiny eclipses of the host star by a planet (or planets) as seen from Earth. When a planet blocks part of the star's light, the star's total brightness diminishes; Kepler registers that dip in light output as evidence for a planet. So far, more than 3,800 possible planets have been detected with this technique by Kepler.
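The transit signal Kepler measures comes down to one line of geometry: the fractional dip in brightness is the ratio of the planet's and star's projected disk areas, (Rp/Rs)². The radii below are illustrative assumptions (an M dwarf of about half the Sun's radius, a planet just under ten percent larger than Earth), not values from the paper:

```python
# Transit depth: fraction of starlight blocked when the planet crosses
# the stellar disk, assuming a central transit and a dark planet.
R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

def transit_depth(planet_radius_km, star_radius_km):
    """Fractional brightness dip: ratio of projected disk areas."""
    return (planet_radius_km / star_radius_km) ** 2

star_radius = 0.5 * R_SUN_KM      # assumed M-dwarf radius (illustrative)
planet_radius = 1.1 * R_EARTH_KM  # "less than ten percent larger than Earth"
depth = transit_depth(planet_radius, star_radius)
print(f"dip = {depth:.2e} (~{depth * 1e6:.0f} parts per million)")
```

A dip of a few hundred parts per million is far too small for ground-based photometry to pick out reliably, which is why a space telescope like Kepler is needed to find such planets in the first place.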
Crepp is building an instrument at Notre Dame named iLocater, which will be the first ultra-precise Doppler spectrometer to be fiber-fed and operated behind an adaptive optics system. The instrument, to be installed at the Large Binocular Telescope in Arizona, will identify terrestrial planets orbiting in the habitable zones of nearby M-dwarf stars, much closer to the Sun than Kepler-186, by achieving unprecedented radial-velocity precision at near-infrared wavelengths. He and his research collaborators will also probe nearby terrestrial planets to determine what their atmospheres are made of.

“Professor Justin Crepp’s outstanding exoplanet research is helping us comprehend our complex universe and in particular those planets that are in the habitable zone. This much-anticipated discovery is shedding new light on planetary systems and their composition,” said Greg Crawford, dean of the College of Science at the University of Notre Dame.

Crepp is one of only 11 Kepler Participating Scientists in the country. He and his colleagues are advancing the goals of the Kepler Mission by seeking to find terrestrial planets comparable in size to Earth, especially those in the habitable zone of their stars, where liquid water could exist.


Contacts and sources:
Justin Crepp
University of Notre Dame

Peter Michaud
Gemini Observatory

Wednesday, April 16, 2014

Meteorites Yield Clues To Mars' Early Atmosphere



Geologists who analyzed 40 meteorites that fell to Earth from Mars have unlocked secrets of the Martian atmosphere hidden in the chemical signatures of these ancient rocks. Their study, published April 17 in the journal Nature, shows that the atmospheres of Mars and Earth diverged in important ways very early in the 4.6-billion-year evolution of our solar system.

The results will help guide researchers’ next steps in understanding whether life exists, or has ever existed, on Mars and how water—now absent from the Martian surface—flowed there in the past.

A microscope reveals colorful augite crystals in this 1.3 billion-year-old meteorite from Mars, which researchers studied to understand the red planet's atmospheric history. 

Photo: James Day

Heather Franz, a former University of Maryland research associate who now works on the Curiosity rover science team at the NASA Goddard Space Flight Center, led the study with James Farquhar, co-author and UMD geology professor. The researchers measured the sulfur composition of 40 Mars meteorites—a much larger number than in previous analyses. Of more than 60,000 meteorites found on Earth, only 69 are believed to be pieces of rocks blasted off the Martian surface.

The meteorites are igneous rocks that formed on Mars, were ejected into space when an asteroid or comet slammed into the red planet, and landed on Earth. The oldest meteorite in the study is about 4.1 billion years old, formed when our solar system was in its infancy. The youngest are between 200 million and 500 million years old.

Studying Martian meteorites of different ages can help scientists investigate the chemical composition of the Martian atmosphere throughout history, and learn whether the planet has ever been hospitable to life. Mars and Earth share the basic elements for life, but conditions on Mars are much less favorable, marked by an arid surface, cold temperatures, bombardment by cosmic radiation, and ultraviolet radiation from the Sun. Still, some Martian geological features were evidently formed by water – a sign of milder conditions in the past. Scientists are not sure what conditions made it possible for liquid water to exist on the surface, but greenhouse gases released by volcanoes likely played a role.

Under a microscope, crystals of skeletal magnetite in this 1.3 billion-year-old Martian meteorite reminded scientists of a piranha.

 Photo courtesy of Heather Franz

Sulfur, which is plentiful on Mars, may have been among the greenhouse gases that warmed the surface, and could have provided a food source for microbes. Because meteorites are a rich source of information about Martian sulfur, the researchers analyzed sulfur atoms that were incorporated into the rocks.

In the Martian meteorites, some sulfur came from molten rock, or magma, which came to the surface during volcanic eruptions. Volcanoes also vented sulfur dioxide into the atmosphere, where it interacted with light, reacted with other molecules, and settled on the surface.


Sulfur has four naturally occurring stable isotopes, or different forms of the element, each with its own atomic signature. Sulfur is also chemically versatile, interacting with many other elements, and each type of interaction distributes sulfur isotopes in a different way. Researchers measuring the ratios of sulfur isotopes in a rock sample can learn whether the sulfur was magma from deep below the surface, atmospheric sulfur dioxide or a related compound, or a product of biological activity.
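The ratio measurements described above are conventionally reported in "delta" notation, per-mil deviations from a standard; deviations from the mass-dependent relationship between the isotopes (the capital-delta-33S value) are the fingerprint of atmospheric photochemistry. A sketch of that arithmetic in Python; the reference ratios are commonly cited VCDT standard values, and the sample ratios are hypothetical, purely for illustration:

```python
# Sulfur isotope deltas relative to the VCDT standard, in per mil.
# Reference ratios are commonly cited VCDT values; the "sample" ratios
# further down are hypothetical, purely for illustration.
VCDT_34S_32S = 0.0441626
VCDT_33S_32S = 0.0078772

def delta_per_mil(sample_ratio, reference_ratio):
    """Per-mil deviation of a sample isotope ratio from the reference."""
    return (sample_ratio / reference_ratio - 1.0) * 1000.0

def capital_delta_33(d33, d34, exponent=0.515):
    """Deviation of delta-33S from the mass-dependent fractionation line;
    a nonzero value signals photochemical (mass-independent) processing."""
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** exponent - 1.0)

d34 = delta_per_mil(0.0442000, VCDT_34S_32S)   # hypothetical sample
d33 = delta_per_mil(0.0078800, VCDT_33S_32S)   # hypothetical sample
print(f"delta-34S = {d34:+.2f} per mil, "
      f"Delta-33S = {capital_delta_33(d33, d34):+.2f} per mil")
```

Magmatic sulfur plots on the mass-dependent line (capital-delta-33S near zero), while photochemically processed atmospheric sulfur departs from it, which is how the researchers could tell the two sources apart in the meteorites.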

Using state-of-the-art techniques to track the sulfur isotopes in samples from the Martian meteorites, the researchers were able to identify some sulfur as a product of photochemical processes in the Martian atmosphere. The sulfur was deposited on the surface and later incorporated into erupting magma that formed igneous rocks. The isotopic fingerprints found in the meteorite samples are different from those that would have been produced by sulfur-based life forms.

The researchers found that the chemical reactions involving sulfur in the Martian atmosphere were different from those that took place early in Earth’s geological history. This suggests the two planets’ early atmospheres were very different, Franz said.

This Martian meteorite belongs to a group called shergottites, which are between 200 million and 500 million years old. Minerals shown include pyrrhotite (yellow), maskelynite (dark gray), pyroxene and olivine (light gray), and euhedral Cr-spinel grains (pinkish). 

Photo courtesy of Heather Franz

The exact nature of the differences is unclear, but other evidence suggests that soon after our solar system formed, much of Mars’ atmosphere was lost, leaving it thinner than Earth’s, with lower concentrations of carbon dioxide and other gases. That is one reason why Mars is too cold for liquid water today—but that may not always have been the case, said Franz.

“Climate models show that a moderate abundance of sulfur dioxide in the atmosphere after volcanic episodes, which have occurred throughout Mars’ history, could have produced a warming effect which may have allowed liquid water to exist at the surface for extended periods,” Franz said. “Our measurements of sulfur in Martian meteorites narrow the range of possible atmospheric compositions, since the pattern of isotopes that we observe points to a distinctive type of photochemical activity on Mars, different from that on early Earth.”

Periods of higher levels of sulfur dioxide may help explain the red planet’s dry lakebeds, river channels and other evidence of a watery past. Warm conditions may even have persisted long enough for microbial life to develop.

The team’s work has yielded the most comprehensive record of the distribution of sulfur isotopes on Mars. In effect, they have compiled a database of atomic fingerprints that provide a standard of comparison for sulfur-containing samples collected by NASA’s Curiosity rover and future Mars missions. This information will make it much easier for researchers to zero in on any signs of biologically produced sulfur, Farquhar said.



Contacts and sources:
Heather Dewar
University of Maryland

Targeting Cancer With A Triple Threat, Nanoparticle Delivers Three Drugs At Once

MIT chemists have designed nanoparticles that can deliver three cancer drugs at a time.

Delivering chemotherapy drugs in nanoparticle form could help reduce side effects by targeting the drugs directly to the tumors. In recent years, scientists have developed nanoparticles that deliver one or two chemotherapy drugs, but it has been difficult to design particles that can carry any more than that in a precise ratio.

Now MIT chemists have devised a new way to build such nanoparticles, making it much easier to include three or more different drugs. In a paper published in the Journal of the American Chemical Society, the researchers showed that they could load their particles with three drugs commonly used to treat ovarian cancer.

The new MIT nanoparticles consist of polymer chains (blue) and three different drug molecules — doxorubicin is red, the small green particles are camptothecin, and the larger green core contains cisplatin.
Image courtesy of Jeremiah Johnson

“We think it’s the first example of a nanoparticle that carries a precise ratio of three drugs and can release those drugs in response to three distinct triggering mechanisms,” says Jeremiah Johnson, an assistant professor of chemistry at MIT and the senior author of the new paper.

Such particles could be designed to carry even more drugs, allowing researchers to develop new treatment regimens that could better kill cancer cells while avoiding the side effects of traditional chemotherapy. In the JACS paper, Johnson and colleagues demonstrated that the triple-threat nanoparticles could kill ovarian cancer cells more effectively than particles carrying only one or two drugs, and they have begun testing the particles against tumors in animals.

Longyan Liao, a postdoc in Johnson’s lab, is the paper’s lead author.

Putting the pieces together

Johnson’s new approach overcomes the inherent limitations of the two methods most often used to produce drug-delivering nanoparticles: encapsulating small drug molecules inside the particles or chemically attaching them to the particle. With both of these techniques, the reactions required to assemble the particles become increasingly difficult with each new drug that is added.

Combining these two approaches — encapsulating one drug inside a particle and attaching a different one to the surface — has had some success, but is still limited to two drugs.

Johnson set out to create a new type of particle that would overcome those constraints, enabling the loading of any number of different drugs. Instead of building the particle and then attaching drug molecules, he created building blocks that already include the drug. These building blocks can be joined together in a very specific structure, and the researchers can precisely control how much of each drug is included.

Each building block consists of three components: the drug molecule, a linking unit that can connect to other blocks, and a chain of polyethylene glycol (PEG), which helps protect the particle from being broken down in the body. Hundreds of these blocks can be linked using an approach Johnson developed, called “brush-first polymerization.”

“This is a new way to build the particles from the beginning,” Johnson says. “If I want a particle with five drugs, I just take the five building blocks I want and have those assemble into a particle. In principle, there’s no limitation on how many drugs you can add, and the ratio of drugs carried by the particles just depends on how they are mixed together in the beginning.”
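The mixing idea in Johnson's quote can be illustrated with a toy model: if each building block carries one drug, the drug ratio in an assembled particle simply tracks the ratio in which the blocks were mixed. The drug names match the article; the block count, target ratio, and assembly-as-random-sampling are illustrative assumptions, not the actual chemistry:

```python
# Toy model of building-block assembly: draw blocks at random according
# to their mixing fractions and count the drugs in one particle.
import random
from collections import Counter

def assemble_particle(block_mix, n_blocks=300, seed=42):
    """Sample n_blocks blocks with probabilities proportional to block_mix."""
    rng = random.Random(seed)
    drugs = list(block_mix)
    weights = [block_mix[d] for d in drugs]
    return Counter(rng.choices(drugs, weights=weights, k=n_blocks))

# Target a (hypothetical) 2:1:1 cisplatin : doxorubicin : camptothecin mix.
particle = assemble_particle({"cisplatin": 2, "doxorubicin": 1, "camptothecin": 1})
for drug, count in particle.most_common():
    print(f"{drug:>12}: {count} blocks")
```

The drug counts come out close to the 2:1:1 mixing ratio, mirroring the paper's point that composition is set up front by the mix rather than by downstream loading steps.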

Varying combinations

For this paper, the researchers created particles that carry the drugs cisplatin, doxorubicin, and camptothecin, which are often used alone or in combination to treat ovarian cancer.

Each particle carries the three drugs in a specific ratio that matches the maximum tolerated dose of each drug, and each drug has its own release mechanism. Cisplatin is freed as soon as the particle enters a cell, as the bonds holding it to the particle break down on exposure to glutathione, an antioxidant present in cells. Camptothecin is also released quickly when it encounters cellular enzymes called esterases.

The third drug, doxorubicin, was designed so that it would be released only when ultraviolet light shines on the particle. Once all three drugs are released, all that is left behind is PEG, which is easily biodegradable.

This approach “represents a clever new breakthrough in multidrug release through the simultaneous inclusion of different drugs, through distinct chemistries, within the same … platform,” says Todd Emrick, a professor of polymer science and engineering at the University of Massachusetts at Amherst who was not involved in the study.

Working with researchers in the lab of Paula Hammond, the David H. Koch Professor of Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, the team tested the particles against ovarian cancer cells grown in the lab. Particles carrying all three drugs killed the cancer cells at a higher rate than those that delivered only one or two drugs.

Johnson’s lab is now working on particles that carry four drugs, and the researchers are also planning to tag the particles with molecules that will allow them to home to tumor cells by interacting with proteins found on the cell surfaces.

Johnson also envisions that the ability to reliably produce large quantities of multidrug-carrying nanoparticles will enable large-scale testing of possible new cancer treatments. “It’s important to be able to rapidly and efficiently make particles with different ratios of multiple drugs, so that you can test them for their activity,” he says. “We can’t just make one particle, we need to be able to make different ratios, which our method can easily do.”

Other authors of the paper are graduate students Jenny Liu and Stephen Morton, and postdocs Erik Dreaden and Kevin Shopsowitz.

The research was funded by the MIT Research Support Committee, the Department of Defense Ovarian Cancer Research Program Teal Innovator Award, the National Institutes of Health, the Natural Sciences and Engineering Research Council, and the Koch Institute Support Grant from the National Cancer Institute.

Contacts and sources:
Anne Trafton
MIT News Office

Excitons Observed In Action For The First Time

Technique developed at MIT reveals the motion of energy-carrying quasiparticles in solid material.

A quasiparticle called an exciton — responsible for the transfer of energy within devices such as solar cells, LEDs, and semiconductor circuits — has been understood theoretically for decades. But exciton movement within materials has never been directly observed.

Now scientists at MIT and the City University of New York have achieved that feat, imaging excitons’ motions directly. This could enable research leading to significant advances in electronics, they say, as well as a better understanding of natural energy-transfer processes, such as photosynthesis.

Diagram of an exciton within a tetracene crystal, used in these experiments, shows the line across which data was collected. That data, plotted below as a function of both position (horizontal axis) and time (vertical axis), provides the most detailed information ever obtained on how excitons move through the material.
Illustration courtesy of the researchers

The research is described this week in the journal Nature Communications, in a paper co-authored by MIT postdocs Gleb Akselrod and Parag Deotare, professors Vladimir Bulovic and Marc Baldo, and four others.

“This is the first direct observation of exciton diffusion processes,” Bulovic says, “showing that crystal structure can dramatically affect the diffusion process.”

“Excitons are at the heart of devices that are relevant to modern technology,” Akselrod explains: The particles determine how energy moves at the nanoscale. “The efficiency of devices such as photovoltaics and LEDs depends on how well excitons move within the material,” he adds.

An exciton, which travels through matter as though it were a particle, pairs an electron, which carries a negative charge, with a place where an electron has been removed, known as a hole. Overall, it has a neutral charge, but it can carry energy. For example, in a solar cell, an incoming photon may strike an electron, kicking it to a higher energy level. That higher energy is propagated through the material as an exciton: The particles themselves don’t move, but the boosted energy gets passed along from one to another.

While it was previously possible to determine how fast, on average, excitons could move between two points, “we really didn’t have any information about how they got there,” Akselrod says. Such information is essential to understanding which aspects of a material’s structure — for example, the degree of molecular order or disorder — might facilitate or slow that motion.

“People always assumed certain behavior of the excitons,” Deotare says. Now, using this new technique — which combines optical microscopy with the use of particular organic compounds that make the energy of excitons visible — “we can directly say what kind of behavior the excitons were moving around with.” This advance provided the researchers with the ability to observe which of two possible kinds of “hopping” motion was actually taking place.

“This allows us to see new things,” Deotare says, making it possible to demonstrate that the nanoscale structure of a material determines how quickly excitons get trapped as they move through it.
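The hopping motion described above is commonly modeled as a random walk: the mean squared displacement of an exciton grows linearly with the number of hops, so the diffusion length scales as the square root of the exciton lifetime. A minimal one-dimensional sketch of that scaling; the hop length and counts are illustrative, not measured values:

```python
# 1-D random-walk model of exciton hopping: average the squared
# displacement over many walkers and compare to the n * L^2 theory line.
import random

def random_walk_msd(n_hops, hop_length_nm=1.0, n_walkers=2000, seed=1):
    """Mean squared displacement (nm^2) after n_hops random +/- hops."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_hops):
            x += rng.choice((-hop_length_nm, hop_length_nm))
        total += x * x
    return total / n_walkers

for n in (25, 100, 400):
    print(f"{n:4d} hops -> MSD ~ {random_walk_msd(n):6.1f} nm^2 (theory: {n} nm^2)")
```

In this picture, disorder and traps cut the walk short and shrink the diffusion length, which is exactly the quantity the new imaging technique lets the researchers measure directly.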

For some applications, such as LEDs, Deotare says, it is desirable to maximize this trapping, so that energy is not lost to leakage; for other uses, such as solar cells, it is essential to minimize the trapping. The new technique should allow researchers to determine which factors are most important in increasing or decreasing this trapping.

“We showed how energy flow is impeded by disorder, which is the defining characteristic of most materials for low-cost solar cells and LEDs,” Baldo says.

While these experiments were carried out using a material called tetracene — a well-studied archetype of a molecular crystal — the researchers say that the method should be applicable to almost any crystalline or thin-film material. They expect it to be widely adopted by researchers in academia and industry.

“It’s a very simple technique, once people learn about it,” Akselrod says, “and the equipment required is not that expensive.”

Exciton diffusion is also a basic mechanism underlying photosynthesis: Plants absorb energy from photons, and this energy is transferred by excitons to areas where it can be stored in chemical form for later use in supporting the plant’s metabolism. The new method might provide an additional tool for studying some aspects of this process, the team says.

David Lidzey, a professor of physics and astronomy at the University of Sheffield who was not involved in this work, calls the research “a really impressive demonstration of a direct measurement of the diffusion of triplet excitons and their eventual trapping.” He adds, “Exciton diffusion and transport are important processes in solar-cell devices, so understanding what limits these may well help the design of better materials, or the development of better ways to process materials so that energy losses during exciton migration are limited.”

The work was supported by the U.S. Department of Energy and by the National Science Foundation, and used facilities of the Eni-MIT Solar Frontiers Center. 

Contacts and sources:
David L. Chandler 
MIT News Office

Floating Nuclear Power Plants: A Good Idea?

New power plant design could provide enhanced safety, easier siting, and centralized construction

When an earthquake and tsunami struck the Fukushima Daiichi nuclear plant complex in 2011, neither the quake nor the inundation caused the ensuing contamination. Rather, it was the aftereffects — specifically, the lack of cooling for the reactor cores, due to a shutdown of all power at the station — that caused most of the harm.

This illustration shows a possible configuration of a floating offshore nuclear plant, based on design work by Jacopo Buongiorno and others at MIT's Department of Nuclear Science and Engineering. Like offshore oil drilling platforms, the structure would include living quarters and a helipad for transportation to the site.

Illustration courtesy of Jake Jurewicz/MIT-NSE

A new design for nuclear plants built on floating platforms, modeled after those used for offshore oil drilling, could help avoid such consequences in the future. Such floating plants would be designed to be automatically cooled by the surrounding seawater in a worst-case scenario, which would indefinitely prevent any melting of fuel rods, or escape of radioactive material.

The concept is being presented this week at the Small Modular Reactors Symposium, hosted by the American Society of Mechanical Engineers, by MIT professors Jacopo Buongiorno, Michael Golay, and Neil Todreas, along with others from MIT, the University of Wisconsin, and Chicago Bridge and Iron, a major nuclear plant and offshore platform construction company.

Such plants, Buongiorno explains, could be built in a shipyard, then towed to their destinations five to seven miles offshore, where they would be moored to the seafloor and connected to land by an underwater electric transmission line. The concept takes advantage of two mature technologies: light-water nuclear reactors and offshore oil and gas drilling platforms. Using established designs minimizes technological risks, says Buongiorno, an associate professor of nuclear science and engineering (NSE) at MIT.

Although the concept of a floating nuclear plant is not unique — Russia is in the process of building one now, on a barge moored at the shore — none have been located far enough offshore to be able to ride out a tsunami, Buongiorno says. For this new design, he says, "the biggest selling point is the enhanced safety."



A floating platform several miles offshore, moored in about 100 meters of water, would be unaffected by the motions of a tsunami; earthquakes would have no direct effect at all. Meanwhile, the biggest issue that faces most nuclear plants under emergency conditions — overheating and potential meltdown, as happened at Fukushima, Chernobyl, and Three Mile Island — would be virtually impossible at sea, Buongiorno says: "It's very close to the ocean, which is essentially an infinite heat sink, so it's possible to do cooling passively, with no intervention. The reactor containment itself is essentially underwater."

Buongiorno lists several other advantages. For one thing, it is increasingly difficult and expensive to find suitable sites for new nuclear plants: They usually need to be next to an ocean, lake, or river to provide cooling water, but shorefront properties are highly desirable. By contrast, sites offshore, but out of sight of land, could be located adjacent to the population centers they would serve. "The ocean is inexpensive real estate," Buongiorno says.

In addition, at the end of a plant's lifetime, "decommissioning" could be accomplished by simply towing it away to a central facility, as is done now for the Navy's carrier and submarine reactors. That would rapidly restore the site to pristine conditions.

This design could also help to address practical construction issues that have tended to make new nuclear plants uneconomical: Shipyard construction allows for better standardization, and the all-steel design eliminates the use of concrete, which Buongiorno says is often responsible for construction delays and cost overruns.

There are no particular limits to the size of such plants, he says: They could be anywhere from small, 50-megawatt plants to 1,000-megawatt plants matching today's largest facilities. "It's a flexible concept," Buongiorno says.

Most operations would be similar to those of onshore plants, and the plant would be designed to meet all regulatory security requirements for terrestrial plants. "Project work has confirmed the feasibility of achieving this goal, including satisfaction of the extra concern of protection against underwater attack," says Todreas, the KEPCO Professor of Nuclear Science and Engineering and Mechanical Engineering.

Buongiorno sees a market for such plants in Asia, which has a combination of high tsunami risks and a rapidly growing need for new power sources. "It would make a lot of sense for Japan," he says, as well as for places such as Indonesia, Chile, and parts of Africa.

The paper was co-authored by NSE students Angelo Briccetti, Jake Jurewicz, and Vincent Kindfuller; Michael Corradini of the University of Wisconsin; and Daniel Fadel, Ganesh Srinivasan, Ryan Hannink, and Alan Crowle of Chicago Bridge and Iron, based in Canton, Mass.


Contacts and sources:
Andrew Carleen
Massachusetts Institute of Technology
Written by David Chandler, MIT News Office

Fish On Anti-Depressants In The Wild Show Altered Behaviors

Fish exposed to the antidepressant fluoxetine, the active ingredient in prescription drugs such as Prozac, exhibited a range of altered mating behaviors, repetitive behavior, and aggression towards female fish, according to new research published in the latest special issue of Aquatic Toxicology: Antidepressants in the Aquatic Environment.

Credit:  Elsevier

The authors of the study set up a series of experiments exposing a freshwater fish, the fathead minnow, to a range of fluoxetine concentrations. Following exposure for four weeks, the authors observed and recorded a range of behavioral changes among male and female fish relating to reproduction, mating, general activity and aggression.

On a positive note, author Rebecca Klaper, Director of the Great Lakes Genomics Center at the University of Wisconsin-Milwaukee, emphasizes that the impact on behavior is reversible once the concentration level is reduced. "With increased aggression, in the highest level of concentration, female survivorship was only 33% compared to the other exposures that had a survivorship of 77–87.5%. The females that died had visible bruising and tissue damage," according to Klaper.

Antidepressant prescriptions are increasingly common, and like most prescription drugs, the compounds end up, not fully broken down, back in our aquatic ecosystems, where they exert their therapeutic effects on wildlife. Although the concentrations observed in our rivers and estuaries are very small, a growing number of studies have shown that these incredibly small concentrations can dramatically alter the biology of the organisms they come into contact with.

The impact of pharmaceuticals is currently of interest not only to scientists but also to environmental regulators, industry and the general public. Some US states are looking to charge pharmaceutical companies for the cost of appropriate drug disposal, moves that are currently being challenged in the courts.

"This is just one of an increasing number of studies that suggest that pharmaceuticals in the environment can impact the complex range of behaviors in aquatic organisms," said Alex Ford, Guest Editor of the special issue of Aquatic Toxicology in which the study was published. "Worryingly, an increasing number of these studies are demonstrating that these effects can be seen at concentrations currently found in our rivers and estuaries and they appear to impact a broad range of biological functions and a wide variety of aquatic organisms."

This is one of the reasons why Ford proposed a full special issue dedicated to the topic. Antidepressants in the Aquatic Environment includes, among other studies, research demonstrating that antidepressants affect the ability of cuttlefish to change color, and a fish study in which reproductive effects were observed in offspring whose parents were exposed to mood-stabilizing drugs.

Ford emphasizes that although the results from this study and others published in the issue show troubling results for aquatic species, this doesn't indicate that these results are applicable to humans. "This special issue focuses on the biology of aquatic systems and organisms and results only indicate how pharmaceuticals could potentially have effects on this particular environment."



Contacts and sources:
Kitty van Hensbergen
Elsevier

Citation: The special issue is "Antidepressants in the Aquatic Environment," Aquatic Toxicology, Volume 151, Pages 1–134 (June 2014), published by Elsevier.

A Study In Scarlet: Star Forming In The Centaur

This area of the southern sky, in the constellation of Centaurus (The Centaur), is home to many bright nebulae, each associated with hot newborn stars that formed out of the clouds of hydrogen gas.

The intense radiation from the stellar newborns excites the remaining hydrogen around them, making the gas glow in the distinctive shade of red typical of star-forming regions. Another famous example of this phenomenon is the Lagoon Nebula, a vast cloud that glows in similar bright shades of scarlet.

This new image from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile reveals a cloud of hydrogen and newborn stars called Gum 41. In the middle of this little-known nebula, brilliant hot young stars emit energetic radiation that causes the surrounding hydrogen to glow with a characteristic red hue.
Credit: ESO

The nebula in this picture is located some 7300 light-years from Earth. Australian astronomer Colin Gum discovered it on photographs taken at the Mount Stromlo Observatory near Canberra, and included it in his catalogue of 84 emission nebulae, published in 1955; Gum died at a tragically early age in a skiing accident in Switzerland in 1960. Gum 41 is actually one small part of a bigger structure called the Lambda Centauri Nebula, also known by the more exotic name of the Running Chicken Nebula.

This pan video takes a close up look at a new image from the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. It reveals a cloud of hydrogen and newborn stars called Gum 41 in the constellation of Centaurus (The Centaur). In the middle of this little-known nebula, brilliant hot young stars emit energetic radiation that causes the surrounding hydrogen to glow with a characteristic red hue.

Credit: ESO. Music: movetwo

In this picture of Gum 41, the clouds appear quite thick and bright, but this is misleading. If a hypothetical human space traveller could pass through this nebula, they would probably not notice it: even at close quarters it would be too faint for the human eye to see. This helps to explain why such a large object had to wait until the mid-twentieth century to be discovered; its light is spread very thinly, and the red glow cannot be well seen visually.

This chart shows the location of a cloud of hydrogen and newborn stars called Gum 41 in the large southern constellation of Centaurus (The Centaur). This map shows most of the stars visible to the unaided eye under good conditions and the location of the nebula itself is marked with a red circle. This object is part of the larger Lambda Centauri Nebula. Gum 41 is very faint and was only discovered photographically in the mid-20th century.

Credit: ESO, IAU and Sky & Telescope

This new portrait of Gum 41 — likely one of the best so far of this elusive object — has been created using data from the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. It is a combination of images taken through blue, green, and red filters, along with an image using a special filter designed to pick out the red glow from hydrogen.

This zoom sequence starts with a broad view of the Milky Way and closes in on one of the more spectacular sections in the constellation of Centaurus (The Centaur). In the final sequence we see the star formation region known as Gum 41 in a new image from the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile.
Credit: ESO/N. Risinger (skysurvey.org)/Hisayoshi Kato. Music: movetwo


Contacts and sources: 
Richard Hook
ESO

Warm US West, Cold East: A 4,000-Year Pattern

Global warming may bring more curvy jet streams during winter.

Last winter's curvy jet stream pattern brought mild temperatures to western North America and harsh cold to the East. A University of Utah-led study shows that pattern became more pronounced 4,000 years ago, and suggests it may worsen as Earth's climate warms.

These maps show winter temperature patterns (top) and winter precipitation patterns (bottom) associated with a curvy jet stream (not shown) that moves north from the Pacific to the Yukon and Alaska, then plunges down over the Canadian plains and into the eastern United States. A University of Utah-led study shows that starting 4,000 years ago, the jet stream tended to become curvier than it was between 8,000 and 4,000 years ago, and suggests global warming will enhance such curviness and thus frigid weather in the eastern states similar to this past winter's. 

Credit: Zhongfang Liu, Tianjin Normal University, China.

The curvy jet stream brought abnormally warm temperatures (red and orange) to the West and Alaska and an abnormal deep freeze (blue) to the East this past winter, similar to what is shown in the top map, except the upper Midwest was colder than shown. The bottom map of a typical curvy jet stream precipitation pattern shows how that normally brings dry winters to reddish-orange areas and wet winters to blue regions. Precipitation patterns this winter matched the bottom map in many regions, except California was drier than expected and the upper Midwest was wetter than expected.

"If this trend continues, it could contribute to more extreme winter weather events in North America, as experienced this year with warm conditions in California and Alaska and intrusion of cold Arctic air across the eastern USA," says geochemist Gabe Bowen, senior author of the study.

The study was published online April 16 by the journal Nature Communications.

"A sinuous or curvy winter jet stream means unusual warmth in the West, drought conditions in part of the West, and abnormally cold winters in the East and Southeast," adds Bowen, an associate professor of geology and geophysics at the University of Utah. "We saw a good example of extreme wintertime climate that largely fit that pattern this past winter," although in the typical pattern California often is wetter.

Scientists have long forecast that the current warming of Earth's climate, driven by carbon dioxide, methane and other greenhouse gases, has already led to increased weather extremes and will continue to do so.

The new study shows the jet stream pattern that brings North American wintertime weather extremes is millennia old – "a longstanding and persistent pattern of climate variability," Bowen says. Yet it also suggests global warming may enhance the pattern so there will be more frequent or more severe winter weather extremes or both.

University of Utah geochemist Gabe Bowen led a new study, published in Nature Communications, showing that the curvy jet stream pattern that brought mild weather to western North America and intense cold to the eastern states this past winter has become more dominant during the past 4,000 years than it was from 8,000 to 4,000 years ago. The study suggests global warming may aggravate the pattern, meaning such severe winter weather extremes may be worse in the future.
Credit: Lee J. Siegel, University of Utah.

"This is one more reason why we may have more winter extremes in North America, as well as something of a model for what those extremes may look like," Bowen says. Human-caused climate change is reducing equator-to-pole temperature differences; the atmosphere is warming more at the poles than at the equator. Based on what happened in past millennia, that could make a curvy jet stream even more frequent and-or intense than it is now, he says.

Bowen and his co-authors analyzed previously published data on oxygen isotope ratios in lake sediment cores and cave deposits from sites in the eastern and western United States and Canada. Those isotopes were deposited in ancient rainfall and incorporated into calcium carbonate. They reveal jet stream directions during the past 8,000 years, a geological time known as middle and late stages of the Holocene Epoch.

Next, the researchers did computer modeling or simulations of jet stream patterns – both curvy and more direct west to east – to show how changes in those patterns can explain changes in the isotope ratios left by rainfall in the old lake and cave deposits.

They found that the jet stream pattern – known technically as the Pacific North American teleconnection – shifted to a generally more "positive phase" – meaning a curvy jet stream – over a 500-year period starting about 4,000 years ago. In addition to this millennial-scale change in jet stream patterns, they also noted a cycle in which increases in the sun's intensity every 200 years make the jet stream flatter.

Bowen conducted the study with Zhongfang Liu of Tianjin Normal University in China, Kei Yoshimura of the University of Tokyo, Nikolaus Buenning of the University of Southern California, Camille Risi of the French National Center for Scientific Research, Jeffrey Welker of the University of Alaska at Anchorage, and Fasong Yuan of Cleveland State University.

The study was funded by the National Science Foundation, National Natural Science Foundation of China, Japan Society for the Promotion of Science and a joint program by the society and Japan's Ministry of Education, Culture, Sports, Science and Technology: the Program for Risk Information on Climate Change.


Sinuous Jet Stream Brings Winter Weather Extremes

The Pacific North American teleconnection, or PNA, "is a pattern of climate variability" with positive and negative phases, Bowen says.

"In periods of positive PNA, the jet stream is very sinuous. As it comes in from Hawaii and the Pacific, it tends to rocket up past British Columbia to the Yukon and Alaska, and then it plunges down over the Canadian plains and into the eastern United States. The main effect in terms of weather is that we tend to have cold winter weather throughout most of the eastern U.S. You have a freight car of arctic air that pushes down there."

Jet streams flow from west to east in the upper portion of the troposphere.

Credit: Wikipedia

Bowen says that when the jet stream is curvy, "the West tends to have mild, relatively warm winters, and Pacific storms tend to occur farther north. So in Northern California, the Pacific Northwest and parts of western interior, it tends to be relatively dry, but tends to be quite wet and unusually warm in northwest Canada and Alaska."

This past winter, there were times of a strongly curving jet stream, and times when the Pacific North American teleconnection was in its negative phase, which means "the jet stream is flat, mostly west-to-east oriented," and sometimes split, Bowen says. In years when the jet stream pattern is more flat than curvy, "we tend to have strong storms in Northern California and Oregon. That moisture makes it into the western interior. The eastern U.S. is not affected by arctic air, so it tends to have milder winter temperatures."

The jet stream pattern – whether curvy or flat – has its greatest effects in winter and less impact on summer weather, Bowen says. The curvy pattern is enhanced by another climate phenomenon, the El Nino-Southern Oscillation, which sends a pool of warm water eastward to the eastern Pacific and affects climate worldwide.

Traces of Ancient Rains Reveal Which Way the Wind Blew

Over the millennia, oxygen in ancient rain water was incorporated into calcium carbonate deposited in cave and lake sediments. The ratio of rare, heavy oxygen-18 to the common isotope oxygen-16 in the calcium carbonate tells geochemists whether clouds that carried the rain were moving generally north or south during a given time.

Previous research determined the dates and oxygen isotope ratios for sediments in the new study, allowing Bowen and colleagues to use the ratios to tell if the jet stream was curvy or flat at various times during the past 8,000 years.

Bowen says air flowing over the Pacific picks up water from the ocean. As a curvy jet stream carries clouds north toward Alaska, the air cools and some of the water falls out as rain, with greater proportions of heavier oxygen-18 falling, thus raising the oxygen-18-to-16 ratio in rain and certain sediments in western North America. Then the jet stream curves south over the middle of the continent, and the water vapor, already depleted in oxygen-18, falls in the East as rain with lower oxygen-18-to-16 ratios.
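The reasoning above is essentially Rayleigh distillation expressed in the standard per-mil "delta" notation. The sketch below is illustrative only: the fractionation factor, the starting vapor composition of -10 per mil, and the chosen vapor fractions are invented round numbers, not values from the study. It shows why vapor, and therefore later rainfall, becomes progressively depleted in oxygen-18 as a curvy jet stream rains moisture out on its long path north and then east.

```python
# Toy sketch (not the study's model): Rayleigh distillation explains why
# rain falling from an air mass progressively depletes the remaining vapor
# in heavy oxygen-18, so rain falling later (in the East) carries lower
# 18O/16O ratios than rain falling earlier (in the West).

R_VSMOW = 0.0020052  # 18O/16O ratio of the VSMOW ocean-water standard

def delta18O(ratio):
    """Express an 18O/16O ratio in per-mil delta notation relative to VSMOW."""
    return (ratio / R_VSMOW - 1.0) * 1000.0

def rayleigh_vapor_ratio(r0, f, alpha=1.0094):
    """18O/16O of the vapor remaining after a fraction (1 - f) has rained out.

    alpha is the liquid-vapor fractionation factor; ~1.0094 is a typical
    value near 20 C. alpha > 1 means the rain preferentially removes 18O.
    """
    return r0 * f ** (alpha - 1.0)

# Start the vapor hypothetically at -10 per mil relative to ocean water.
r0 = R_VSMOW * (1 - 10.0 / 1000.0)

for f in (1.0, 0.7, 0.4, 0.1):
    d = delta18O(rayleigh_vapor_ratio(r0, f))
    print(f"vapor fraction remaining {f:.1f}: delta18O = {d:6.1f} per mil")
```

The vapor's delta-18O drops steadily as more of it rains out, which is the one-way "lighter and lighter downstream" signal that lets the sediment record distinguish northward-then-southward (curvy) moisture paths from direct west-to-east ones.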

When the jet stream is flat and moving west-to-east, oxygen-18 in rain is still elevated in the West and depleted in the East, but the difference is much smaller than when the jet stream is curvy.

Credit: Wikipedia

By examining oxygen isotope ratios in lake and cave sediments in the West and East, Bowen and colleagues showed that a flatter jet stream pattern prevailed from about 8,000 to 4,000 years ago in North America, but then, over only 500 years, the pattern shifted so that curvy jet streams became more frequent or severe or both. The method can't distinguish frequency from severity.

The new study is based mainly on isotope ratios at Buckeye Creek Cave, W. Va.; Lake Grinnell, N.J.; Oregon Caves National Monument; and Lake Jellybean, Yukon.

Additional data supporting increasing curviness of the jet stream over recent millennia came from seven other sites: Crawford Lake, Ontario; Castor Lake, Wash.; Little Salt Spring, Fla.; Estancia Lake, N.M.; Crevice Lake, Mont.; and Dog and Felker lakes, British Columbia. Some sites provided oxygen isotope data; others showed changes in weather patterns based on tree ring growth or spring deposits.

Simulating the Jet Stream

As a test of what the cave and lake sediments revealed, Bowen's team did computer simulations of climate using software that takes isotopes into account.

Simulations of climate and oxygen isotope changes for the Middle Holocene and for the present day resemble the flat and curvy jet stream patterns, respectively, supporting the inferred switch toward increasing jet stream sinuosity 4,000 years ago.

Why did the trend start then?

"It was a when seasonality becomes weaker," Bowen says. The Northern Hemisphere was closer to the sun during the summer 8,000 years ago than it was 4,000 years ago or is now due to a 20,000-year cycle in Earth's orbit. He envisions a tipping point 4,000 years ago when weakening summer sunlight reduced the equator-to-pole temperature difference and, along with an intensifying El Nino climate pattern, pushed the jet stream toward greater curviness.


Contacts and sources: 
Lee J. Siegel
University of Utah

Tuesday, April 15, 2014

Strange Tilt-A-Worlds Could Harbor Life

A fluctuating tilt in a planet’s orbit does not preclude the possibility of life, according to new research by astronomers at the University of Washington, Utah’s Weber State University and NASA. In fact, sometimes it helps.

That’s because such “tilt-a-worlds,” as astronomers sometimes call them — turned from their orbital plane by the influence of companion planets — are less likely than fixed-spin planets to freeze over, as heat from their host star is more evenly distributed.

Tilted orbits such as those shown might make some planets wobble like a top that’s almost done spinning, an effect that could maintain liquid water on the surface, thus giving life a chance.

Credit: NASA/GSFC

This happens only at the outer edge of a star’s habitable zone, the swath of space around it where rocky worlds could maintain liquid water at their surface, a necessary condition for life. Further out, a “snowball state” of global ice becomes inevitable, and life impossible.

The findings, which are published online and will appear in the April issue of Astrobiology, have the effect of expanding that perceived habitable zone by 10 to 20 percent.

And that in turn dramatically increases the number of worlds considered potentially right for life.

Such a tilt-a-world becomes potentially habitable because its wobbling axis would occasionally point each pole toward the host star, causing ice caps to quickly melt.

“Without this sort of ‘home base’ for ice, global glaciation is more difficult,” said UW astronomer Rory Barnes. “So the rapid tilting of an exoplanet actually increases the likelihood that there might be liquid water on a planet’s surface.”

Barnes is second author on the paper. First author is John Armstrong of Weber State, who earned his doctorate at the UW.

Earth and its neighbor planets occupy roughly the same plane in space. But there is evidence, Barnes said, of systems whose planets ride along at angles to each other. As such, “they can tug on each other from above or below, changing their poles’ direction compared to the host star.”

The team used computer simulations to reproduce such off-kilter planetary alignments, wondering, he said, “what an Earthlike planet might do if it had similar neighbors.”

Their findings also argue against the long-held view among astronomers and astrobiologists that a planet needs the stabilizing influence of a large moon — as Earth has — to have a chance at hosting life.

“We’re finding that planets don’t have to have a stable tilt to be habitable,” Barnes said. Minus the moon, he said, Earth’s tilt, now at a fairly stable 23.5 degrees, might increase by 10 degrees or so. Climates might fluctuate, but life would still be possible.

“This study suggests the presence of a large moon might inhibit life, at least at the edge of the habitable zone.”

The work was done through the UW’s Virtual Planetary Laboratory, an interdisciplinary research group that studies how to determine if exoplanets — those outside the solar system — might have the potential for life.

“The research involved orbital dynamics, planetary dynamics and climate studies. It’s bigger than any of those disciplines on their own,” Barnes said.

Armstrong said that expanding the habitable zone might almost double the number of potentially habitable planets in the galaxy.

Applying the research and its expanded habitable zone to our own celestial neighborhood for context, he said, “It would give the ability to put Earth, say, past the orbit of Mars and still be habitable at least some of the time — and that’s a lot of real estate.”

Barnes’ UW co-authors are Victoria Meadows, Thomas Quinn and Jonathan Breiner. Shawn Domagal-Goldman of NASA’s Goddard Space Flight Center is also a co-author. The research was funded by a grant from the NASA Astrobiology Institute.

Contacts and sources:
Peter Kelley
University of Washington

Monday, April 14, 2014

The Science Of Caffeine, The World's Most Popular Drug (Video)

It seems there are new caffeine-infused products hitting the shelves every day. From energy drinks to gum and even jerky, our love affair with that little molecule shows no signs of slowing. In the American Chemical Society's (ACS') latest Reactions video, we look at the science behind the world's most popular drug, including why it keeps you awake and how much caffeine is too much. 



Contacts and sources:
Michael Bernstein
American Chemical Society

Dogs Getting It On With Wolves: Study Finds Recent Wolf-Dog Hybridization

Dog owners in the Caucasus Mountains of Georgia might want to consider penning up their dogs more often: hybridization of wolves with shepherd dogs might be more common, and more recent, than previously thought, according to a recently published study in the Journal of Heredity (DOI: 10.1093/jhered/esu014).

Upper panel: This is a livestock-guarding shepherd dog; middle panel: This is a livestock-guarding dog with inferred wolf ancestry (first-generation hybrid); lower panel: This is a wolf (all from Kazbegi, Georgia).
Credit: Photo courtesy of David Tarkhnishvili and Natia Kopaliani

Dr. Natia Kopaliani, Dr. David Tarkhnishvili, and colleagues from the Institute of Ecology at Ilia State University in Georgia and from the Tbilisi Zoo in Georgia used a range of genetic techniques to extract and examine DNA taken from wolf and dog fur samples as well as wolf scat and blood samples. They found recent hybrid ancestry in about ten percent of the dogs and wolves sampled. About two to three percent of the sampled wolves and dogs were identified as first-generation hybrids. This included hybridization between wolves and the shepherd dogs used to guard sheep from wolf attacks.

The study was undertaken as part of Dr. Kopaliani's work exploring human-wolf conflict in Georgia. "Since the 2000s, the frequency of wolf depredation on cattle has increased in Georgia, and there were several reports of attacks on humans. Wolves were sighted even in densely populated areas," she explained.

"Reports suggested that, unlike wild wolves, wolf-dog hybrids might lack fear of humans, so we wanted to examine the ancestry of wolves near human settlements to determine if they could be of hybrid origin with free-ranging dogs such as shepherds," she added.

The research team examined maternally inherited DNA (mitochondrial DNA) and microsatellite markers to study hybridization rates. Microsatellite markers mutate rapidly, because they have no known function in the genome and are not constrained by natural selection, and so they are highly variable even within a single population. For these reasons, they are often used to study hybridization.
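A rough sketch of how such markers can flag hybrids: the toy likelihood comparison below uses invented allele frequencies at two hypothetical loci, not the actual data or analysis pipeline from the study. A first-generation (F1) hybrid carries one allele from each parent species, which gives its genotype a distinctive probability under each candidate model.

```python
# Toy sketch (invented frequencies, not the study's analysis): comparing
# the likelihood of a multilocus microsatellite genotype under three
# models -- pure wolf, pure dog, or first-generation (F1) hybrid.

from math import prod

# Hypothetical allele frequencies (allele size -> frequency) at two loci.
WOLF = [{140: 0.8, 144: 0.2}, {201: 0.7, 205: 0.3}]
DOG  = [{140: 0.1, 144: 0.9}, {201: 0.2, 205: 0.8}]

def geno_prob(freqs, a, b):
    """Hardy-Weinberg genotype probability within a single population."""
    p, q = freqs.get(a, 0.0), freqs.get(b, 0.0)
    return p * q if a == b else 2 * p * q

def f1_prob(fw, fd, a, b):
    """F1 hybrid: one allele drawn from the wolf pool, one from the dog pool."""
    if a == b:
        return fw.get(a, 0.0) * fd.get(a, 0.0)
    return fw.get(a, 0.0) * fd.get(b, 0.0) + fw.get(b, 0.0) * fd.get(a, 0.0)

def classify(genotype):
    """Return the most likely origin of a multilocus genotype."""
    models = {
        "wolf": prod(geno_prob(WOLF[i], *g) for i, g in enumerate(genotype)),
        "dog":  prod(geno_prob(DOG[i], *g) for i, g in enumerate(genotype)),
        "F1":   prod(f1_prob(WOLF[i], DOG[i], *g) for i, g in enumerate(genotype)),
    }
    return max(models, key=models.get)

print(classify([(140, 140), (201, 201)]))  # wolf-type alleles at both loci -> wolf
print(classify([(144, 144), (205, 205)]))  # dog-type alleles at both loci -> dog
print(classify([(140, 144), (201, 205)]))  # wolf/dog heterozygote at both -> F1
```

A real analysis uses many loci and Bayesian population-assignment methods rather than this bare maximum-likelihood pick, but the principle is the same: an F1 hybrid is heterozygous for wolf-type and dog-type alleles across many loci at once, a pattern neither pure population can easily produce.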

"We expected to identify some individuals with hybrid ancestry, but it was quite surprising that recent hybrid ancestry was found in every tenth wolf and every tenth shepherd dog," said study co-author Tarkhnishvili.

"Two dogs out of the 60 or so we studied were inferred to be first generation hybrids," he added.

The study also found that about a third of the dogs sampled shared relatively recent maternal ancestry with local wolves, rather than with the wolves from which dogs are thought to have been first domesticated in the Far East.

The research team used several alternate methods to confirm their results, and came to the same conclusions with each approach.
 
The shepherd dogs studied are a local breed used to guard livestock. "Ironically, their sole function is to protect sheep from wolves or thieves," Kopaliani explained. "The shepherd dogs are free-ranging, largely outside the tight control of their human masters. They guard the herds from wolves, which are common in the areas where they are used, but it appears that they are also consorting with the enemy."


Contacts and sources:
Nancy Steinberg
American Genetic Association