Wednesday, April 16, 2014

Meteorites Yield Clues To Mars' Early Atmosphere



Geologists who analyzed 40 meteorites that fell to Earth from Mars unlocked secrets of the Martian atmosphere hidden in the chemical signatures of these ancient rocks. Their study, published April 17 in the journal Nature, shows that the atmospheres of Mars and Earth diverged in important ways very early in the 4.6-billion-year evolution of our solar system.

The results will help guide researchers’ next steps in understanding whether life exists, or has ever existed, on Mars and how water—now absent from the Martian surface—flowed there in the past.

A microscope reveals colorful augite crystals in this 1.3 billion-year-old meteorite from Mars, which researchers studied to understand the red planet's atmospheric history. 

Photo: James Day

Heather Franz, a former University of Maryland research associate who now works on the Curiosity rover science team at the NASA Goddard Space Flight Center, led the study with James Farquhar, co-author and UMD geology professor. The researchers measured the sulfur composition of 40 Mars meteorites—a much larger number than in previous analyses. Of more than 60,000 meteorites found on Earth, only 69 are believed to be pieces of rocks blasted off the Martian surface.

The meteorites are igneous rocks that formed on Mars, were ejected into space when an asteroid or comet slammed into the red planet, and landed on Earth. The oldest meteorite in the study is about 4.1 billion years old, formed when our solar system was in its infancy. The youngest are between 200 million and 500 million years old.

Studying Martian meteorites of different ages can help scientists investigate the chemical composition of the Martian atmosphere throughout history, and learn whether the planet has ever been hospitable to life. Mars and Earth share the basic elements for life, but conditions on Mars are much less favorable, marked by an arid surface, cold temperatures, cosmic radiation, and ultraviolet radiation from the Sun. Still, some Martian geological features were evidently formed by water – a sign of milder conditions in the past. Scientists are not sure what conditions made it possible for liquid water to exist on the surface, but greenhouse gases released by volcanoes likely played a role.

Under a microscope, crystals of skeletal magnetite in this 1.3 billion-year-old Martian meteorite reminded scientists of a piranha.

 Photo courtesy of Heather Franz

Sulfur, which is plentiful on Mars, may have been among the greenhouse gases that warmed the surface, and could have provided a food source for microbes. Because meteorites are a rich source of information about Martian sulfur, the researchers analyzed sulfur atoms that were incorporated into the rocks.

In the Martian meteorites, some sulfur came from molten rock, or magma, which came to the surface during volcanic eruptions. Volcanoes also vented sulfur dioxide into the atmosphere, where it interacted with light, reacted with other molecules, and settled on the surface.


Sulfur has four naturally occurring stable isotopes, or different forms of the element, each with its own atomic signature. Sulfur is also chemically versatile, interacting with many other elements, and each type of interaction distributes sulfur isotopes in a different way. By measuring the ratios of sulfur isotopes in a rock sample, researchers can learn whether the sulfur came from magma deep below the surface, from atmospheric sulfur dioxide or a related compound, or from biological activity.
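
For readers curious how such isotopic "fingerprints" are quantified, here is a minimal sketch using standard geochemical conventions (delta notation and the mass-independent anomaly in sulfur-33); the conventions are textbook practice and the numerical values are hypothetical, not details taken from the study.

```python
# Sketch of standard conventions (assumptions ours, not from the article).
# Isotope abundances are reported as per-mil ("delta") deviations from a reference
# standard; atmospheric photochemistry produces a "mass-independent" anomaly in
# delta-33S that magmatic or purely biological sulfur does not.

def delta(ratio_sample, ratio_standard):
    """Per-mil deviation of a sample's isotope ratio from the reference standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

def capital_delta_33s(delta33, delta34):
    """Deviation of delta-33S from the purely mass-dependent expectation
    (reference exponent ~0.515); a nonzero value flags photochemical sulfur."""
    return delta33 - 1000.0 * ((1.0 + delta34 / 1000.0) ** 0.515 - 1.0)

# Hypothetical example values, purely for illustration:
print(capital_delta_33s(delta33=1.2, delta34=1.5))  # ~0.43 per mil: a mass-independent signal
```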

Using state-of-the-art techniques to track the sulfur isotopes in samples from the Martian meteorites, the researchers were able to identify some sulfur as a product of photochemical processes in the Martian atmosphere. The sulfur was deposited on the surface and later incorporated into erupting magma that formed igneous rocks. The isotopic fingerprints found in the meteorite samples are different from those that would have been produced by sulfur-based life forms. The researchers found that the chemical reactions involving sulfur in the Martian atmosphere were different from those that took place early in Earth's geological history. This suggests the two planets' early atmospheres were very different, Franz said.

This Martian meteorite belongs to a group called shergottites, which are between 200 million and 500 million years old. Minerals shown include pyrrhotite (yellow), maskelynite (dark gray), pyroxene and olivine (light gray) and euhedral Cr-spinel grains (pinkish). 

Photo courtesy of Heather Franz

The exact nature of the differences is unclear, but other evidence suggests that soon after our solar system formed, much of Mars’ atmosphere was lost, leaving it thinner than Earth’s, with lower concentrations of carbon dioxide and other gases. That is one reason why Mars is too cold for liquid water today—but that may not always have been the case, said Franz.

“Climate models show that a moderate abundance of sulfur dioxide in the atmosphere after volcanic episodes, which have occurred throughout Mars’ history, could have produced a warming effect which may have allowed liquid water to exist at the surface for extended periods,” Franz said. “Our measurements of sulfur in Martian meteorites narrow the range of possible atmospheric compositions, since the pattern of isotopes that we observe points to a distinctive type of photochemical activity on Mars, different from that on early Earth.”

Periods of higher levels of sulfur dioxide may help explain the red planet’s dry lakebeds, river channels and other evidence of a watery past. Warm conditions may even have persisted long enough for microbial life to develop.

The team’s work has yielded the most comprehensive record of the distribution of sulfur isotopes on Mars. In effect, they have compiled a database of atomic fingerprints that provide a standard of comparison for sulfur-containing samples collected by NASA’s Curiosity rover and future Mars missions. This information will make it much easier for researchers to zero in on any signs of biologically produced sulfur, Farquhar said.



Contacts and sources:
Heather Dewar
University of Maryland

Targeting Cancer With A Triple Threat, Nanoparticle Delivers Three Drugs At Once

MIT chemists have designed nanoparticles that can deliver three cancer drugs at a time.

Delivering chemotherapy drugs in nanoparticle form could help reduce side effects by targeting the drugs directly to the tumors. In recent years, scientists have developed nanoparticles that deliver one or two chemotherapy drugs, but it has been difficult to design particles that can carry any more than that in a precise ratio.

Now MIT chemists have devised a new way to build such nanoparticles, making it much easier to include three or more different drugs. In a paper published in the Journal of the American Chemical Society, the researchers showed that they could load their particles with three drugs commonly used to treat ovarian cancer.

The new MIT nanoparticles consist of polymer chains (blue) and three different drug molecules — doxorubicin is red, the small green particles are camptothecin, and the larger green core contains cisplatin.
Image courtesy of Jeremiah Johnson

“We think it’s the first example of a nanoparticle that carries a precise ratio of three drugs and can release those drugs in response to three distinct triggering mechanisms,” says Jeremiah Johnson, an assistant professor of chemistry at MIT and the senior author of the new paper.

Such particles could be designed to carry even more drugs, allowing researchers to develop new treatment regimens that could better kill cancer cells while avoiding the side effects of traditional chemotherapy. In the JACS paper, Johnson and colleagues demonstrated that the triple-threat nanoparticles could kill ovarian cancer cells more effectively than particles carrying only one or two drugs, and they have begun testing the particles against tumors in animals.

Longyan Liao, a postdoc in Johnson’s lab, is the paper’s lead author.

Putting the pieces together

Johnson’s new approach overcomes the inherent limitations of the two methods most often used to produce drug-delivering nanoparticles: encapsulating small drug molecules inside the particles or chemically attaching them to the particle. With both of these techniques, the reactions required to assemble the particles become increasingly difficult with each new drug that is added.

Combining these two approaches — encapsulating one drug inside a particle and attaching a different one to the surface — has had some success, but is still limited to two drugs.

Johnson set out to create a new type of particle that would overcome those constraints, enabling the loading of any number of different drugs. Instead of building the particle and then attaching drug molecules, he created building blocks that already include the drug. These building blocks can be joined together in a very specific structure, and the researchers can precisely control how much of each drug is included.

Each building block consists of three components: the drug molecule, a linking unit that can connect to other blocks, and a chain of polyethylene glycol (PEG), which helps protect the particle from being broken down in the body. Hundreds of these blocks can be linked using an approach Johnson developed, called "brush-first polymerization."

“This is a new way to build the particles from the beginning,” Johnson says. “If I want a particle with five drugs, I just take the five building blocks I want and have those assemble into a particle. In principle, there’s no limitation on how many drugs you can add, and the ratio of drugs carried by the particles just depends on how they are mixed together in the beginning.”

Varying combinations

For this paper, the researchers created particles that carry the drugs cisplatin, doxorubicin, and camptothecin, which are often used alone or in combination to treat ovarian cancer.

Each particle carries the three drugs in a specific ratio that matches the maximum tolerated dose of each drug, and each drug has its own release mechanism. Cisplatin is freed as soon as the particle enters a cell, as the bonds holding it to the particle break down on exposure to glutathione, an antioxidant present in cells. Camptothecin is also released quickly when it encounters cellular enzymes called esterases.

The third drug, doxorubicin, was designed so that it would be released only when ultraviolet light shines on the particle. Once all three drugs are released, all that is left behind is PEG, which is easily biodegradable.

This approach “represents a clever new breakthrough in multidrug release through the simultaneous inclusion of different drugs, through distinct chemistries, within the same … platform,” says Todd Emrick, a professor of polymer science and engineering at the University of Massachusetts at Amherst who was not involved in the study.

Working with researchers in the lab of Paula Hammond, the David H. Koch Professor of Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, the team tested the particles against ovarian cancer cells grown in the lab. Particles carrying all three drugs killed the cancer cells at a higher rate than those that delivered only one or two drugs.

Johnson’s lab is now working on particles that carry four drugs, and the researchers are also planning to tag the particles with molecules that will allow them to home to tumor cells by interacting with proteins found on the cell surfaces.

Johnson also envisions that the ability to reliably produce large quantities of multidrug-carrying nanoparticles will enable large-scale testing of possible new cancer treatments. “It’s important to be able to rapidly and efficiently make particles with different ratios of multiple drugs, so that you can test them for their activity,” he says. “We can’t just make one particle, we need to be able to make different ratios, which our method can easily do.”

Other authors of the paper are graduate students Jenny Liu and Stephen Morton, and postdocs Erik Dreaden and Kevin Shopsowitz.

The research was funded by the MIT Research Support Committee, the Department of Defense Ovarian Cancer Research Program Teal Innovator Award, the National Institutes of Health, the Natural Sciences and Engineering Research Council, and the Koch Institute Support Grant from the National Cancer Institute.

Contacts and sources:
Anne Trafton
MIT News Office

Excitons Observed In Action For The First Time

Technique developed at MIT reveals the motion of energy-carrying quasiparticles in solid material.

A quasiparticle called an exciton — responsible for the transfer of energy within devices such as solar cells, LEDs, and semiconductor circuits — has been understood theoretically for decades. But exciton movement within materials has never been directly observed.

Now scientists at MIT and the City University of New York have achieved that feat, imaging excitons’ motions directly. This could enable research leading to significant advances in electronics, they say, as well as a better understanding of natural energy-transfer processes, such as photosynthesis.

Diagram of an exciton within a tetracene crystal, used in these experiments, shows the line across which data was collected. That data, plotted below as a function of both position (horizontal axis) and time (vertical axis), provides the most detailed information ever obtained on how excitons move through the material.
Illustration courtesy of the researchers

The research is described this week in the journal Nature Communications, in a paper co-authored by MIT postdocs Gleb Akselrod and Parag Deotare, professors Vladimir Bulovic and Marc Baldo, and four others.

“This is the first direct observation of exciton diffusion processes,” Bulovic says, “showing that crystal structure can dramatically affect the diffusion process.”

“Excitons are at the heart of devices that are relevant to modern technology,” Akselrod explains: The particles determine how energy moves at the nanoscale. “The efficiency of devices such as photovoltaics and LEDs depends on how well excitons move within the material,” he adds.

An exciton, which travels through matter as though it were a particle, pairs an electron, which carries a negative charge, with a place where an electron has been removed, known as a hole. Overall, it has a neutral charge, but it can carry energy. For example, in a solar cell, an incoming photon may strike an electron, kicking it to a higher energy level. That higher energy is propagated through the material as an exciton: The particles themselves don’t move, but the boosted energy gets passed along from one to another.

While it was previously possible to determine how fast, on average, excitons could move between two points, “we really didn’t have any information about how they got there,” Akselrod says. Such information is essential to understanding which aspects of a material’s structure — for example, the degree of molecular order or disorder — might facilitate or slow that motion.

“People always assumed certain behavior of the excitons,” Deotare says. Now, using this new technique — which combines optical microscopy with the use of particular organic compounds that make the energy of excitons visible — “we can directly say what kind of behavior the excitons were moving around with.” This advance provided the researchers with the ability to observe which of two possible kinds of “hopping” motion was actually taking place.

“This allows us to see new things,” Deotare says, making it possible to demonstrate that the nanoscale structure of a material determines how quickly excitons get trapped as they move through it.

For some applications, such as LEDs, Deotare says, it is desirable to maximize this trapping, so that energy is not lost to leakage; for other uses, such as solar cells, it is essential to minimize the trapping. The new technique should allow researchers to determine which factors are most important in increasing or decreasing this trapping.

“We showed how energy flow is impeded by disorder, which is the defining characteristic of most materials for low-cost solar cells and LEDs,” Baldo says.

While these experiments were carried out using a material called tetracene — a well-studied archetype of a molecular crystal — the researchers say that the method should be applicable to almost any crystalline or thin-film material. They expect it to be widely adopted by researchers in academia and industry.

“It’s a very simple technique, once people learn about it,” Akselrod says, “and the equipment required is not that expensive.”

Exciton diffusion is also a basic mechanism underlying photosynthesis: Plants absorb energy from photons, and this energy is transferred by excitons to areas where it can be stored in chemical form for later use in supporting the plant’s metabolism. The new method might provide an additional tool for studying some aspects of this process, the team says.

David Lidzey, a professor of physics and astronomy at the University of Sheffield who was not involved in this work, calls the research “a really impressive demonstration of a direct measurement of the diffusion of triplet excitons and their eventual trapping.” He adds, “Exciton diffusion and transport are important processes in solar-cell devices, so understanding what limits these may well help the design of better materials, or the development of better ways to process materials so that energy losses during exciton migration are limited.”

The work was supported by the U.S. Department of Energy and by the National Science Foundation, and used facilities of the Eni-MIT Solar Frontiers Center. 

Contacts and sources:
David L. Chandler 
MIT News Office

Floating Nuclear Power Plants: A Good Idea?

New power plant design could provide enhanced safety, easier siting, and centralized construction

When an earthquake and tsunami struck the Fukushima Daiichi nuclear plant complex in 2011, neither the quake nor the inundation caused the ensuing contamination. Rather, it was the aftereffects — specifically, the lack of cooling for the reactor cores, due to a shutdown of all power at the station — that caused most of the harm.

This illustration shows a possible configuration of a floating offshore nuclear plant, based on design work by Jacopo Buongiorno and others at MIT's Department of Nuclear Science and Engineering. Like offshore oil drilling platforms, the structure would include living quarters and a helipad for transportation to the site.

Illustration courtesy of Jake Jurewicz/MIT-NSE

A new design for nuclear plants built on floating platforms, modeled after those used for offshore oil drilling, could help avoid such consequences in the future. Such floating plants would be designed to be automatically cooled by the surrounding seawater in a worst-case scenario, which would indefinitely prevent any melting of fuel rods, or escape of radioactive material.

The concept is being presented this week at the Small Modular Reactors Symposium, hosted by the American Society of Mechanical Engineers, by MIT professors Jacopo Buongiorno, Michael Golay, and Neil Todreas, along with others from MIT, the University of Wisconsin, and Chicago Bridge and Iron, a major nuclear plant and offshore platform construction company.

Such plants, Buongiorno explains, could be built in a shipyard, then towed to their destinations five to seven miles offshore, where they would be moored to the seafloor and connected to land by an underwater electric transmission line. The concept takes advantage of two mature technologies: light-water nuclear reactors and offshore oil and gas drilling platforms. Using established designs minimizes technological risks, says Buongiorno, an associate professor of nuclear science and engineering (NSE) at MIT.

Although the concept of a floating nuclear plant is not unique — Russia is in the process of building one now, on a barge moored at the shore — none have been located far enough offshore to be able to ride out a tsunami, Buongiorno says. For this new design, he says, "the biggest selling point is the enhanced safety."



A floating platform several miles offshore, moored in about 100 meters of water, would be unaffected by the motions of a tsunami; earthquakes would have no direct effect at all. Meanwhile, the biggest issue that faces most nuclear plants under emergency conditions — overheating and potential meltdown, as happened at Fukushima, Chernobyl, and Three Mile Island — would be virtually impossible at sea, Buongiorno says: "It's very close to the ocean, which is essentially an infinite heat sink, so it's possible to do cooling passively, with no intervention. The reactor containment itself is essentially underwater."

Buongiorno lists several other advantages. For one thing, it is increasingly difficult and expensive to find suitable sites for new nuclear plants: They usually need to be next to an ocean, lake, or river to provide cooling water, but shorefront properties are highly desirable. By contrast, sites offshore, but out of sight of land, could be located adjacent to the population centers they would serve. "The ocean is inexpensive real estate," Buongiorno says.

In addition, at the end of a plant's lifetime, "decommissioning" could be accomplished by simply towing it away to a central facility, as is done now for the Navy's carrier and submarine reactors. That would rapidly restore the site to pristine conditions.

This design could also help to address practical construction issues that have tended to make new nuclear plants uneconomical: Shipyard construction allows for better standardization, and the all-steel design eliminates the use of concrete, which Buongiorno says is often responsible for construction delays and cost overruns.

There are no particular limits to the size of such plants, he says: They could be anywhere from small, 50-megawatt plants to 1,000-megawatt plants matching today's largest facilities. "It's a flexible concept," Buongiorno says.

Most operations would be similar to those of onshore plants, and the plant would be designed to meet all regulatory security requirements for terrestrial plants. "Project work has confirmed the feasibility of achieving this goal, including satisfaction of the extra concern of protection against underwater attack," says Todreas, the KEPCO Professor of Nuclear Science and Engineering and Mechanical Engineering.

Buongiorno sees a market for such plants in Asia, which has a combination of high tsunami risks and a rapidly growing need for new power sources. "It would make a lot of sense for Japan," he says, as well as places such as Indonesia, Chile, and Africa.

The paper was co-authored by NSE students Angelo Briccetti, Jake Jurewicz, and Vincent Kindfuller; Michael Corradini of the University of Wisconsin; and Daniel Fadel, Ganesh Srinivasan, Ryan Hannink, and Alan Crowle of Chicago Bridge and Iron, based in Canton, Mass.


Contacts and sources:
Andrew Carleen
Massachusetts Institute of Technology
Written by David Chandler, MIT News Office

Fish On Anti-Depressants In The Wild Show Altered Behaviors

Fish exposed to the antidepressant fluoxetine, an active ingredient in prescription drugs such as Prozac, exhibited a range of altered mating behaviors, repetitive behavior and aggression towards female fish, according to new research published in the latest special issue of Aquatic Toxicology: Antidepressants in the Aquatic Environment.

Credit:  Elsevier

The authors of the study set up a series of experiments exposing a freshwater fish (the fathead minnow) to a range of Prozac concentrations. Following exposure for four weeks, the authors observed and recorded a range of behavioral changes among male and female fish relating to reproduction, mating, general activity and aggression.

On a positive note, author Rebecca Klaper, director of the Great Lakes Genomics Center at the University of Wisconsin-Milwaukee, emphasizes that the impact on behavior is reversible once the concentration is reduced. "With increased aggression, at the highest concentration level, female survivorship was only 33%, compared with 77–87.5% at the other exposures. The females that died had visible bruising and tissue damage," Klaper said.

Antidepressant prescriptions are steadily increasing, and like most prescription drugs, antidepressants end up, not fully broken down, in our aquatic ecosystems, where they can exert their pharmacological effects on wildlife. Although the concentrations observed in rivers and estuaries are very small, a growing number of studies have shown that these tiny concentrations can dramatically alter the biology of the organisms they come in contact with.

The impact of pharmaceuticals is of interest not only to scientists but also to environmental regulators, industry and the general public. Some US states are looking to charge pharmaceutical companies for the cost of appropriate drug disposal, a move that is currently being challenged in the courts.

"This is just one of an increasing number of studies that suggest that pharmaceuticals in the environment can impact the complex range of behaviors in aquatic organisms," said Alex Ford, Guest Editor of the special issue of Aquatic Toxicology in which the study was published. "Worryingly, an increasing number of these studies are demonstrating that these effects can be seen at concentrations currently found in our rivers and estuaries and they appear to impact a broad range of biological functions and a wide variety of aquatic organisms."

This is one of the reasons why Alex proposed a full special dedicated to this topic. Antidepressants in the Aquatic Environment, includes among other studies, research that demonstrates that antidepressants affect the ability of cuttlefish to change color and a fish study whereby reproductive effects were observed in offspring whose parents who were exposed to mood stabilizing drugs.

Ford emphasizes that although this study and others published in the issue show troubling results for aquatic species, this does not indicate that the findings are applicable to humans. "This special issue focuses on the biology of aquatic systems and organisms and results only indicate how pharmaceuticals could potentially have effects on this particular environment."



Contacts and sources:
Kitty van Hensbergen
Elsevier

Citation: The special issue is "Antidepressants in the Aquatic Environment," Aquatic Toxicology, Volume 151, Pages 1-134 (June 2014), published by Elsevier.

A Study In Scarlet: Star Formation In The Centaur

This area of the southern sky, in the constellation of Centaurus (The Centaur), is home to many bright nebulae, each associated with hot newborn stars that formed out of the clouds of hydrogen gas.

The intense radiation from the stellar newborns excites the remaining hydrogen around them, making the gas glow in the distinctive shade of red typical of star-forming regions. Another famous example of this phenomenon is the Lagoon Nebula, a vast cloud that glows in similar bright shades of scarlet.

This new image from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile reveals a cloud of hydrogen and newborn stars called Gum 41. In the middle of this little-known nebula, brilliant hot young stars emit energetic radiation that causes the surrounding hydrogen to glow with a characteristic red hue.
Credit: ESO

The nebula in this picture is located some 7300 light-years from Earth. Australian astronomer Colin Gum discovered it on photographs taken at the Mount Stromlo Observatory near Canberra, and included it in his catalogue of 84 emission nebulae, published in 1955. Gum 41 is actually one small part of a bigger structure called the Lambda Centauri Nebula, also known by the more exotic name of the Running Chicken Nebula. Gum died at a tragically early age in a skiing accident in Switzerland in 1960.

This pan video takes a close up look at a new image from the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. It reveals a cloud of hydrogen and newborn stars called Gum 41 in the constellation of Centaurus (The Centaur). In the middle of this little-known nebula, brilliant hot young stars emit energetic radiation that causes the surrounding hydrogen to glow with a characteristic red hue.

Credit: ESO. Music: movetwo

In this picture of Gum 41, the clouds appear to be quite thick and bright, but this is actually misleading. If a hypothetical human space traveller could pass through this nebula, it is likely that they would not notice it as — even at close quarters — it would be too faint for the human eye to see. This helps to explain why this large object had to wait until the mid-twentieth century to be discovered — its light is spread very thinly and the red glow cannot be well seen visually.

This chart shows the location of a cloud of hydrogen and newborn stars called Gum 41 in the large southern constellation of Centaurus (The Centaur). This map shows most of the stars visible to the unaided eye under good conditions and the location of the nebula itself is marked with a red circle. This object is part of the larger Lambda Centauri Nebula. Gum 41 is very faint and was only discovered photographically in the mid-20th century.

Credit: ESO, IAU and Sky & Telescope

This new portrait of Gum 41 — likely one of the best so far of this elusive object — has been created using data from the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. It is a combination of images taken through blue, green, and red filters, along with an image using a special filter designed to pick out the red glow from hydrogen.

This zoom sequence starts with a broad view of the Milky Way and closes in on one of the more spectacular sections in the constellation of Centaurus (The Centaur). In the final sequence we see the star formation region known as Gum 41 in a new image from the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile.
Credit: ESO/N. Risinger (skysurvey.org)/Hisayoshi Kato. Music: movetwo


Contacts and sources: 
Richard Hook
ESO

Warm US West, Cold East: A 4,000-Year Pattern

Global warming may bring more curvy jet streams during winter.

Last winter's curvy jet stream pattern brought mild temperatures to western North America and harsh cold to the East. A University of Utah-led study shows that pattern became more pronounced 4,000 years ago, and suggests it may worsen as Earth's climate warms.

These maps show winter temperature patterns (top) and winter precipitation patterns (bottom) associated with a curvy jet stream (not shown) that moves north from the Pacific to the Yukon and Alaska, then plunges down over the Canadian plains and into the eastern United States. A University of Utah-led study shows that starting 4,000 years ago, the jet stream tended to become curvier than it was between 8,000 and 4,000 years ago, and suggests global warming will enhance such curviness and thus frigid weather in the eastern states similar to this past winter's. 

Credit: Zhongfang Liu, Tianjin Normal University, China.

The curvy jet stream brought abnormally warm temperatures (red and orange) to the West and Alaska and an abnormal deep freeze (blue) to the East this past winter, similar to what is shown in the top map, except the upper Midwest was colder than shown. The bottom map of a typical curvy jet stream precipitation pattern shows how that normally brings dry winters to reddish-orange areas and wet winters to blue regions. Precipitation patterns this winter matched the bottom map in many regions, except California was drier than expected and the upper Midwest was wetter than expected.

"If this trend continues, it could contribute to more extreme winter weather events in North America, as experienced this year with warm conditions in California and Alaska and intrusion of cold Arctic air across the eastern USA," says geochemist Gabe Bowen, senior author of the study.

The study was published online April 16 by the journal Nature Communications.

"A sinuous or curvy winter jet stream means unusual warmth in the West, drought conditions in part of the West, and abnormally cold winters in the East and Southeast," adds Bowen, an associate professor of geology and geophysics at the University of Utah. "We saw a good example of extreme wintertime climate that largely fit that pattern this past winter," although in the typical pattern California often is wetter.

It is not new for scientists to forecast that the current warming of Earth's climate due to carbon dioxide, methane and other "greenhouse" gases already has led to increased weather extremes and will continue to do so.

The new study shows the jet stream pattern that brings North American wintertime weather extremes is millennia old – "a longstanding and persistent pattern of climate variability," Bowen says. Yet it also suggests global warming may enhance the pattern so there will be more frequent or more severe winter weather extremes or both.

University of Utah geochemist Gabe Bowen led a new study, published in Nature Communications, showing that the curvy jet stream pattern that brought mild weather to western North America and intense cold to the eastern states this past winter has become more dominant during the past 4,000 years than it was from 8,000 to 4,000 years ago. The study suggests global warming may aggravate the pattern, meaning such severe winter weather extremes may be worse in the future.
Credit: Lee J. Siegel, University of Utah.

"This is one more reason why we may have more winter extremes in North America, as well as something of a model for what those extremes may look like," Bowen says. Human-caused climate change is reducing equator-to-pole temperature differences; the atmosphere is warming more at the poles than at the equator. Based on what happened in past millennia, that could make a curvy jet stream even more frequent and-or intense than it is now, he says.

Bowen and his co-authors analyzed previously published data on oxygen isotope ratios in lake sediment cores and cave deposits from sites in the eastern and western United States and Canada. Those isotopes were deposited in ancient rainfall and incorporated into calcium carbonate. They reveal jet stream directions during the past 8,000 years, a geological time known as middle and late stages of the Holocene Epoch.

Next, the researchers did computer modeling or simulations of jet stream patterns – both curvy and more direct west to east – to show how changes in those patterns can explain changes in the isotope ratios left by rainfall in the old lake and cave deposits.

They found that the jet stream pattern – known technically as the Pacific North American teleconnection – shifted to a generally more "positive phase" – meaning a curvy jet stream – over a 500-year period starting about 4,000 years ago. In addition to this millennial-scale change in jet stream patterns, they also noted a cycle in which increases in the sun's intensity every 200 years make the jet stream flatter.

Bowen conducted the study with Zhongfang Liu of Tianjin Normal University in China, Kei Yoshimura of the University of Tokyo, Nikolaus Buenning of the University of Southern California, Camille Risi of the French National Center for Scientific Research, Jeffrey Welker of the University of Alaska at Anchorage, and Fasong Yuan of Cleveland State University.

The study was funded by the National Science Foundation, National Natural Science Foundation of China, Japan Society for the Promotion of Science and a joint program by the society and Japan's Ministry of Education, Culture, Sports, Science and Technology: the Program for Risk Information on Climate Change.


Sinuous Jet Stream Brings Winter Weather Extremes

The Pacific North American teleconnection, or PNA, "is a pattern of climate variability" with positive and negative phases, Bowen says.

"In periods of positive PNA, the jet stream is very sinuous. As it comes in from Hawaii and the Pacific, it tends to rocket up past British Columbia to the Yukon and Alaska, and then it plunges down over the Canadian plains and into the eastern United States. The main effect in terms of weather is that we tend to have cold winter weather throughout most of the eastern U.S. You have a freight car of arctic air that pushes down there."

Jet streams flow from west to east in the upper portion of the troposphere.

Credit: Wikipedia

Bowen says that when the jet stream is curvy, "the West tends to have mild, relatively warm winters, and Pacific storms tend to occur farther north. So in Northern California, the Pacific Northwest and parts of western interior, it tends to be relatively dry, but tends to be quite wet and unusually warm in northwest Canada and Alaska."

This past winter, there were times of a strongly curving jet stream, and times when the Pacific North American teleconnection was in its negative phase, which means "the jet stream is flat, mostly west-to-east oriented," and sometimes split, Bowen says. In years when the jet stream pattern is more flat than curvy, "we tend to have strong storms in Northern California and Oregon. That moisture makes it into the western interior. The eastern U.S. is not affected by arctic air, so it tends to have milder winter temperatures."

The jet stream pattern – whether curvy or flat – has its greatest effects in winter and less impact on summer weather, Bowen says. The curvy pattern is enhanced by another climate phenomenon, the El Nino-Southern Oscillation, which sends a pool of warm water eastward to the eastern Pacific and affects climate worldwide.

Traces of Ancient Rains Reveal Which Way the Wind Blew

Over the millennia, oxygen in ancient rain water was incorporated into calcium carbonate deposited in cave and lake sediments. The ratio of rare, heavy oxygen-18 to the common isotope oxygen-16 in the calcium carbonate tells geochemists whether clouds that carried the rain were moving generally north or south during a given time.

Previous research determined the dates and oxygen isotope ratios for sediments in the new study, allowing Bowen and colleagues to use the ratios to tell if the jet stream was curvy or flat at various times during the past 8,000 years.

Bowen says air flowing over the Pacific picks up water from the ocean. As a curvy jet stream carries clouds north toward Alaska, the air cools and some of the water falls out as rain, with greater proportions of heavier oxygen-18 falling, thus raising the oxygen-18-to-16 ratio in rain and certain sediments in western North America. Then the jet stream curves south over the middle of the continent, and the water vapor, already depleted in oxygen-18, falls in the East as rain with lower oxygen-18-to-16 ratios.
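
The rainout effect described above follows the textbook Rayleigh-distillation relation; the sketch below illustrates it with an assumed fractionation factor and starting composition, not values taken from the study.

```python
# Minimal Rayleigh-distillation sketch (assumptions ours, not from the study):
# as an air mass rains out, the heavy isotope O-18 preferentially leaves in the
# rain, so the remaining vapor, and the rain it later produces downwind, becomes
# progressively depleted in O-18.
ALPHA = 1.0094   # illustrative liquid-vapor fractionation factor (heavy isotope prefers the rain)
DELTA0 = -10.0   # assumed delta-18O of the starting vapor, in per mil

def delta_vapor(f_remaining, alpha=ALPHA, delta0=DELTA0):
    """delta-18O of the vapor when a fraction f_remaining of the original vapor is left."""
    r0 = 1.0 + delta0 / 1000.0
    r = r0 * f_remaining ** (alpha - 1.0)   # Rayleigh fractionation of the residual vapor
    return (r - 1.0) * 1000.0

for f in (1.0, 0.8, 0.5, 0.2):
    print(f"{f:.0%} of vapor left -> delta-18O = {delta_vapor(f):6.2f} per mil")
```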

When the jet stream is flat and moving west-to-east, oxygen-18 in rain is still elevated in the West and depleted in the East, but the difference is much less than when the jet stream is curvy.

By examining oxygen isotope ratios in lake and cave sediments in the West and East, Bowen and colleagues showed that a flatter jet stream pattern prevailed from about 8,000 to 4,000 years ago in North America, but then, over only 500 years, the pattern shifted so that curvy jet streams became more frequent or severe or both. The method can't distinguish frequency from severity.

The new study is based mainly on isotope ratios at Buckeye Creek Cave, W. Va.; Lake Grinnell, N.J.; Oregon Caves National Monument; and Lake Jellybean, Yukon.

Additional data supporting increasing curviness of the jet stream over recent millennia came from seven other sites: Crawford Lake, Ontario; Castor Lake, Wash.; Little Salt Spring, Fla.; Estancia Lake, N.M.; Crevice Lake, Mont.; and Dog and Felker lakes, British Columbia. Some sites provided oxygen isotope data; others showed changes in weather patterns based on tree ring growth or spring deposits.

Simulating the Jet Stream

As a test of what the cave and lake sediments revealed, Bowen's team did computer simulations of climate using software that takes isotopes into account.

Simulations of climate and oxygen isotope changes for the Middle Holocene and for the present day resemble, respectively, the flat and the curvy jet stream patterns, supporting the switch toward increasing jet stream sinuosity 4,000 years ago.

Why did the trend start then?

"It was a when seasonality becomes weaker," Bowen says. The Northern Hemisphere was closer to the sun during the summer 8,000 years ago than it was 4,000 years ago or is now due to a 20,000-year cycle in Earth's orbit. He envisions a tipping point 4,000 years ago when weakening summer sunlight reduced the equator-to-pole temperature difference and, along with an intensifying El Nino climate pattern, pushed the jet stream toward greater curviness.


Contacts and sources: 
Lee J. Siegel
University of Utah

Tuesday, April 15, 2014

Strange Tilt-A-Worlds Could Harbor Life

A fluctuating tilt in a planet’s orbit does not preclude the possibility of life, according to new research by astronomers at the University of Washington, Utah’s Weber State University and NASA. In fact, sometimes it helps.

That’s because such “tilt-a-worlds,” as astronomers sometimes call them — turned from their orbital plane by the influence of companion planets — are less likely than fixed-spin planets to freeze over, as heat from their host star is more evenly distributed.

Tilted orbits such as those shown might make some planets wobble like a top that’s almost done spinning, an effect that could maintain liquid water on the surface, thus giving life a chance.

Credit: NASA/GSFC

This happens only at the outer edge of a star’s habitable zone, the swath of space around it where rocky worlds could maintain liquid water at their surface, a necessary condition for life. Further out, a “snowball state” of global ice becomes inevitable, and life impossible.

The findings, which are published online and will appear in the April issue of Astrobiology, have the effect of expanding that perceived habitable zone by 10 to 20 percent.

And that in turn dramatically increases the number of worlds considered potentially right for life.

Such a tilt-a-world becomes potentially habitable because its spin would cause poles to occasionally point toward the host star, causing ice caps to quickly melt.

“Without this sort of ‘home base’ for ice, global glaciation is more difficult,” said UW astronomer Rory Barnes. “So the rapid tilting of an exoplanet actually increases the likelihood that there might be liquid water on a planet’s surface.”

Barnes is second author on the paper. First author is John Armstrong of Weber State, who earned his doctorate at the UW.

Earth and its neighbor planets occupy roughly the same plane in space. But there is evidence, Barnes said, of systems whose planets ride along at angles to each other. As such, “they can tug on each other from above or below, changing their poles’ direction compared to the host star.”

The team used computer simulations to reproduce such off-kilter planetary alignments, wondering, he said, “what an Earthlike planet might do if it had similar neighbors.”

Their findings also argue against the long-held view among astronomers and astrobiologists that a planet needs the stabilizing influence of a large moon — as Earth has — to have a chance at hosting life.

“We’re finding that planets don’t have to have a stable tilt to be habitable,” Barnes said. Minus the moon, he said, Earth’s tilt, now at a fairly stable 23.5 degrees, might increase by 10 degrees or so. Climates might fluctuate, but life would still be possible.

“This study suggests the presence of a large moon might inhibit life, at least at the edge of the habitable zone.”

The work was done through the UW’s Virtual Planetary Laboratory, an interdisciplinary research group that studies how to determine if exoplanets — those outside the solar system — might have the potential for life.

“The research involved orbital dynamics, planetary dynamics and climate studies. It’s bigger than any of those disciplines on their own,” Barnes said.

Armstrong said that expanding the habitable zone might almost double the number of potentially habitable planets in the galaxy.

Applying the research and its expanded habitable zone to our own celestial neighborhood for context, he said, “It would give the ability to put Earth, say, past the orbit of Mars and still be habitable at least some of the time — and that’s a lot of real estate.”

Barnes’ UW co-authors are Victoria Meadows, Thomas Quinn and Jonathan Breiner. Shawn Domagal-Goldman of NASA’s Goddard Space Flight Center is also a co-author. The research was funded by a grant from the NASA Astrobiology Institute.

Contacts and sources:
Peter Kelley
University of Washington

Monday, April 14, 2014

The Science Of Caffeine, The World's Most Popular Drug (Video)

It seems there are new caffeine-infused products hitting the shelves every day. From energy drinks to gum and even jerky, our love affair with that little molecule shows no signs of slowing. In the American Chemical Society's (ACS') latest Reactions video, we look at the science behind the world's most popular drug, including why it keeps you awake and how much caffeine is too much. 



Contacts and sources:
Michael Bernstein
American Chemical Society

Dogs Getting It On With Wolves: Study Finds Recent Wolf-Dog Hybridization

Dog owners in the Caucasus Mountains of Georgia might want to consider penning up their dogs more often: hybridization of wolves with shepherd dogs might be more common, and more recent, than previously thought, according to a recently published study in the Journal of Heredity (DOI: 10.1093/jhered/esu014).

Upper panel: This is a livestock-guarding shepherd dog; middle panel: This is a livestock-guarding dog with inferred wolf ancestry (first-generation hybrid); lower panel: This is a wolf (all from Kazbegi, Georgia).
Credit: Photo courtesy of David Tarkhnishvili and Natia Kopaliani

Dr. Natia Kopaliani, Dr. David Tarkhnishvili, and colleagues from the Institute of Ecology at Ilia State University in Georgia and from the Tbilisi Zoo in Georgia used a range of genetic techniques to extract and examine DNA taken from wolf and dog fur samples as well as wolf scat and blood samples. They found recent hybrid ancestry in about ten percent of the dogs and wolves sampled. About two to three percent of the sampled wolves and dogs were identified as first-generation hybrids. This included hybridization between wolves and the shepherd dogs used to guard sheep from wolf attacks.

The study was undertaken as part of Dr. Kopaliani's work exploring human-wolf conflict in Georgia. "Since the 2000s, the frequency of wolf depredation on cattle has increased in Georgia, and there were several reports of attacks on humans. Wolves were sighted even in densely populated areas," she explained.

"Reports suggested that, unlike wild wolves, wolf-dog hybrids might lack fear of humans, so we wanted to examine the ancestry of wolves near human settlements to determine if they could be of hybrid origin with free-ranging dogs such as shepherds," she added.

The research team examined maternally-inherited DNA (mitochondrial DNA) and microsatellite markers to study hybridization rates. Microsatellite markers mutate easily, as they do not have any discernible purpose in the genome, and are highly variable even within a single population. For these reasons, they are often used to study hybridization.

"We expected to identify some individuals with hybrid ancestry, but it was quite surprising that recent hybrid ancestry was found in every tenth wolf and every tenth shepherd dog," said study co-author Tarkhnishvili.

"Two dogs out of the 60 or so we studied were inferred to be first generation hybrids," he added.

The study also found that about a third of the dogs sampled shared relatively recent maternal ancestry with local wolves, not with wolves domesticated in the Far East, where most experts believe dogs were first domesticated.

The research team used several alternate methods to confirm their results, and came to the same conclusions with each approach.
 
The shepherd dogs studied are a local breed used to guard livestock. "Ironically, their sole function is to protect sheep from wolves or thieves," Kopaliani explained. "The shepherd dogs are free-ranging, largely outside the tight control of their human masters. They guard the herds from wolves, which are common in the areas where they are used, but it appears that they are also consorting with the enemy."


Contacts and sources:
Nancy Steinberg
American Genetic Association

Neanderthals And Cro-Magnons: They Lived In The Same Iberian Caves But Did Not Meet

A piece of research in which a UPV/EHU group is participating indicates that 1,000 years separate the records of the presence of the two species

The meeting between a Neanderthal and one of the first humans, which we used to picture in our minds, did not happen on the Iberian Peninsula. That is the conclusion reached by an international team of researchers from the Australian National University, Oxford University, the UPV/EHU-University of the Basque Country, the University of Maryland, the Universitat de Girona and the University of Oviedo, after redoing the dating of the remains in three caves located along the route followed through the Pyrenees by the first members of our species: L'Arbreda, Labeko Koba and La Viña.


The paper, entitled The chronology of the earliest Upper Palaeolithic in northern Iberia: New insights from L'Arbreda, Labeko Koba and La Viña, has been published in the Journal of Human Evolution.

Until now, prehistoric remains have been dated using carbon-14, a radioactive isotope that gradually decays as time passes. After about 40,000 years, roughly the period corresponding to the arrival of the first humans in Europe, the portion that remains is so small that it can easily become contaminated, causing the dates to appear more recent than they really are. From 2005 onwards a new technique began to be used: the same ultrafiltration method used to purify collagen in DNA tests. With it, a portion of the original organic material is recovered and all the subsequent contamination is removed.
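
A back-of-envelope sketch of why such old samples are so vulnerable, using only the standard carbon-14 half-life; the contamination fractions are illustrative assumptions, not figures from the paper.

```python
# Sketch (assumptions ours): how little carbon-14 survives after ~40,000 years,
# and how a trace of modern carbon makes a sample appear much younger.
import math

HALF_LIFE = 5730.0  # years, the modern value for the carbon-14 half-life

def fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after a given true age."""
    return 0.5 ** (age_years / HALF_LIFE)

def apparent_age(true_age_years, modern_fraction=0.0):
    """Apparent age if a small fraction of the measured carbon is modern contamination."""
    measured = (1.0 - modern_fraction) * fraction_remaining(true_age_years) + modern_fraction
    return -HALF_LIFE * math.log2(measured)

print(f"C-14 left after 40,000 years: {fraction_remaining(40_000):.2%}")        # about 0.8 %
for contam in (0.005, 0.01):  # 0.5 % and 1 % modern carbon in the sample
    print(f"{contam:.1%} contamination -> apparent age {apparent_age(40_000, contam):,.0f} years")
# A genuinely 40,000-year-old sample with just 1 % modern carbon dates to roughly 33,000 years.
```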

And by using this technique, scientists have been arriving at the same conclusions at key sites across Europe: "We can see that the arrival of our species in Europe took place 8,000 years earlier than had been thought, and that the earliest datings of our species and the most recent Neanderthal ones do not overlap within a specific regional framework," explained Alvaro Arrizabalaga, professor in the department of Geography, Prehistory and Archaeology, and one of the UPV/EHU researchers alongside María-José Iriarte and Aritza Villaluenga.

The three caves chosen for the recently published research are located in Girona (L'Arbreda), Gipuzkoa (Labeko Koba) and Asturias (La Viña); in other words, at the eastern and western ends of the Pyrenees, where the flow of populations and animals between the peninsula and the continent took place. 

"L'Arbreda is on the eastern pass; Labeko Koba, in the Deba valley, is located on the entry corridor through the Western Pyrenees (Arrizabalaga and Iriarte excavated it in a hurry in 1988 before it was destroyed by the building of the Arrasate-Mondragon bypass) and La Viña is of value as a paradigm, since it provides a magnificent sequence of the Upper Palaeolithic, in other words, of the technical and cultural behaviour of the Cro-magnons during the last glaciation", pointed out Arrizabalaga.


The selection of remains was very strict, allowing only tools made of bone or, in their absence, bones bearing clear traces of human activity, as a general rule butchery marks: cuts in the areas of the tendons made so that the muscle could be removed. 

"The Labeko Koba curve is the most consistent of the three, which in turn are the most consistent on the Iberian Peninsula," explained Arrizabalaga. 18 remains were dated at Labeko Koba and the results are totally convergent with respect to their stratigraphic position, in other words, those that appeared at the lowest depths are the oldest ones.

The main conclusion, "the scene of the meeting between a Neanderthal and a Cro-Magnon does not seem to have taken place on the Iberian Peninsula," is the same as the one that has been gradually reached over the last three years by different research groups studying key settlements in Great Britain, Italy, Germany and France. "For 25 years we had been saying that Neanderthals and early humans lived together for 8,000-10,000 years. Today, we think that in Europe there was a gap between one species and the other and, therefore, there was no hybridization, which did in fact take place in areas of the Middle East," explained Arrizabalaga. The UPV/EHU professor is also the co-author of a piece of research published in 2012 that pushes back the datings of the Neanderthals.

"We did the dating again in accordance with the ultrafiltration treatment that eliminates rejuvenating contamination, remains of the Mousterian, the material culture belonging to the Neanderthals from sites in the south of the Peninsula. Very recent dates had been obtained in them -up to 29,000 years- but the new datings go back to 44,000 years older than the first dates that can be attributed to the Cro-Magnons," explained the UPV/EHU professor.



Contacts and sources:
Matxalen Sotillo
Universidad del País Vasco
Translated by Basque Research.

Reference

R.E. Wood, A. Arrizabalaga, M. Camps, S. Fallon, M.-J. Iriarte-Chiapusso, R. Jones, J. Maroto, M. de la Rasilla, D. Santamaría, J. Soler, N. Soler, A. Villaluenga, T.F.G. Higham. The chronology of the earliest Upper Palaeolithic in northern Iberia: New insights from L'Arbreda, Labeko Koba and La Viña. Journal of Human Evolution (2014), http://dx.doi.org/10.1016/j.jhevol.2013.12.017

Julià Maroto, Manuel Vaquero, Álvaro Arrizabalaga, Javier Baena, Enrique Baquedano, Jesús Jordá, Ramon Julià, Ramón Montes, Johannes Van Der Plicht, Pedro Rasines, Rachel Wood. Current issues in late Middle Palaeolithic chronology: New assessments from Northern Iberia. Quaternary International (2012), doi:10.1016/j.quaint.2011.07.007

Saturday, April 12, 2014

Graduate Student Brings Extinct Plants To Life

Jeff Benca is an admitted über-geek when it comes to prehistoric plants, so it was no surprise that, when he submitted a paper describing a new species of long-extinct lycopod for publication, he ditched the standard line drawing and insisted on a detailed and beautifully rendered color reconstruction of the plant. This piece earned the cover of March’s centennial issue of the American Journal of Botany.

Benca described this 400-million-year-old fossil lycopod, Leclercqia scolopendra, and created a life-like computer rendering. The stem of the lycopod is about 2.5 millimeters across.
Credit: UC Berkeley/Jeff Benca

“Typically, when you see pictures of early land plants, they’re not that sexy: there is a green forking stick and that’s about it. We don’t have many thorough reconstructions,” said Benca, a graduate student in the Department of Integrative Biology and Museum of Paleontology at UC Berkeley. “I wanted to give an impression of what they may have really looked like. There are great color reconstructions of dinosaurs, so why not a plant?”

Benca’s realistic, full-color image could be a life portrait, except for the fact that it was drawn from a plant that lay flattened and compressed into rock for more than 375 million years.

Called Leclercqia scolopendra, or centipede clubmoss, the plant lived during the “age of fishes,” the Devonian Period. At that time, lycopods – the group Leclercqia belonged to – were one of few plant lineages with leaves. Leclercqia shoots were about a quarter-inch in diameter and probably formed prickly, scrambling, ground-covering mats. The function of Leclercqia’s hook-like leaf tips is unclear, Benca said, but they may have been used to clamber over larger plants. Today, lycopods are represented by a group of inconspicuous plants called club mosses, quillworts and spikemosses.

Both living and extinct lycopods have fascinated Benca since high school. When he came to UC Berkeley last year from the University of Washington, he brought a truckload of some 70 different species, now part of collections at the UC Botanical Garden.

Now working in the paleobotany lab of Cindy Looy, Berkeley assistant professor of integrative biology, Benca continues to expand a growing collection of living lycopod species, several of which will eventually be incorporated into the UC and Jepson Herbaria collections.

Visualizing plant evolution

Benca and colleagues wrote their paper primarily to demonstrate a new technique that is helping paleobotanists interpret early land plant fossils with greater confidence. Since living clubmosses share many traits with early lycopods, the research team was able to test their methods using living relatives Benca was growing in greenhouses. 
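To give a feel for what a morphometric analysis involves (this is a generic illustration, not the authors' actual workflow or data), the sketch below runs a small principal component analysis on a few hypothetical leaf measurements; specimens with similar shapes end up close together in the reduced space, which is the kind of signal used to tell otherwise similar fragments apart. All measurements and species labels here are invented.

```python
# Toy morphometric analysis (illustrative only; not the study's pipeline or data).
# Hypothetical leaf measurements are standardized and projected onto
# principal components to see whether specimens cluster by species.
import numpy as np

# Rows = specimens; columns = hypothetical measurements (length mm, width mm, tip angle deg).
measurements = np.array([
    [2.1, 0.4, 35.0],   # specimens assumed to belong to species A
    [2.3, 0.5, 33.0],
    [2.0, 0.4, 36.5],
    [3.4, 0.9, 18.0],   # specimens assumed to belong to species B
    [3.6, 0.8, 20.0],
    [3.3, 0.9, 17.5],
])

# Standardize each measurement so differences in scale don't dominate.
z = (measurements - measurements.mean(axis=0)) / measurements.std(axis=0)

# Principal components from the covariance matrix of the standardized data.
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigvals)[::-1]          # largest-variance axis first
scores = z @ eigvecs[:, order]

print("Fraction of variance explained:", eigvals[order] / eigvals.sum())
print("Specimen scores on the first two components:\n", scores[:, :2])
```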

Jeff Benca and the cover of the March 2014 issue of the American Journal of Botany.
Credit: Cathy Cockrell photo; Jeff Benca rendering

Early land plant fossils are not easy to come by, but they can be abundant in places where rocks from the Devonian Period form outcrops. However, a large portion of these fossils are just stem fragments with few diagnostic features to distinguish them, Benca said.

“The way we analyzed Leclercqia material makes it possible to gain more information from these fragments, increasing our sample size of discernible fossils,” he said.

“Getting a better grip on just how diverse and variable Devonian plants were will be important to understanding the origins of key traits we see in so many plants today,” Looy said.

Benca’s co-authors are Maureen H. Carlisle, Silas Bergen and Caroline A. E. Strömberg from the University of Washington and Burke Museum of Natural History and Culture, Seattle.


Contacts and sources:
Robert Sanders
University of California - Berkeley

More information
Applying morphometrics to early land plant systematics: a new Leclercqia (Lycopsida) species from Washington state, USA (American Journal of Botany, March 2014)
Primitive-plant uber-geek’s heart belongs to lycopods (11/4/12)



New Chinese Herbal Medicine Has Significant Potential In Treating Hepatitis C

Data from a late-breaking abstract presented at the International Liver Congress 2014 identifies a new compound, SBEL1, that has the ability to inhibit hepatitis C virus (HCV) activity in cells at several points in the virus' lifecycle.

Hepatitis C virus

Credit: Wikipedia

SBEL1 is a compound isolated from Chinese herbal medicines that was found to inhibit HCV activity by approximately 90%. SBEL1 is extracted from a herb found in certain regions of Taiwan and Southern China. In Chinese medicine, it is used to treat sore throats and inflammations. The function of SBEL1 within the plant is unknown and its role and origins are currently being investigated.

Scientists pre-treated human liver cells in vitro with SBEL1 prior to HCV infection and found that SBEL1 pre-treated cells contained 23 percent less HCV protein than the control, suggesting that SBEL1 blocks virus entry. Liver cells transfected with an HCV internal ribosome entry site (IRES)-driven luciferase reporter and then treated with SBEL1 showed a 50 percent reduction in reporter activity compared to the control. This suggests that SBEL1 inhibits IRES-mediated translation, a critical process for viral protein production.

In addition, the HCV ribonucleic acid (RNA) levels were significantly reduced by 78 percent in HCV infected cells treated with SBEL1 compared to the control group. This demonstrates that SBEL1 may also affect the viral RNA replication process.
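The reported figures are reductions relative to untreated control cells; a minimal sketch of that arithmetic, using placeholder assay values rather than the study's raw data, is below.

```python
# Percent inhibition relative to an untreated control:
#   inhibition (%) = (1 - treated / control) * 100
# The assay values are placeholders chosen only to show the calculation.
def percent_inhibition(treated, control):
    return (1.0 - treated / control) * 100.0

control_rna = 100.0   # arbitrary units of HCV RNA in untreated, infected cells
treated_rna = 22.0    # placeholder value for SBEL1-treated cells

print(f"RNA inhibition: {percent_inhibition(treated_rna, control_rna):.0f}%")
```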

Prof. Markus Peck-Radosavljevic, Secretary-General of the European Association for the Study of the Liver and Associate Professor of Medicine, University of Vienna, Austria, commented: "People infected with hepatitis C are at risk of developing severe liver damage including liver cancer and cirrhosis. 

In the past, less than 20 percent of all HCV patients were treated because the available treatments were unsuitable due to poor efficacy and high toxicity. Recent advances mean that we can now virtually cure HCV without unpleasant side effects. However, the different virus genotypes coupled with the complexity of the disease mean there is still a major unmet need to improve options for all populations."

Professor Peck-Radosavljevic continued: "SBEL1 has demonstrated significant inhibition of HCV at multiple stages of the viral lifecycle, which is an exciting discovery because it allows us to gain a deeper understanding of the virus and its interactions with other compounds. Ultimately this adds to our library of knowledge that may bring us closer to improving future treatment outcomes."

HCV invades cells in the body by binding to specific receptors on the cell, enabling the virus to enter it.  Once inside, HCV hijacks functions of the cell known as transcription, translation and replication, which enables HCV to make copies of its viral genome and proteins, allowing the virus to spread to other sites of the body.

When HCV enters the host cell, it releases viral (+)RNA, which viral RNA replicase transcribes into viral (-)RNA; the (-)RNA can then be used as a template for viral genome replication to produce more (+)RNA or for viral protein synthesis. Viral RNA is the genetic material that gives HCV its particular characteristics. Once the viral RNA is transcribed, HCV initiates a process known as IRES-mediated translation, which allows the viral RNA to be translated into proteins by bypassing certain checkpoints that the host cell would normally require to start protein translation. This process enables the virus to take advantage of the host cell's protein translation machinery for its own purposes.

There are an estimated 150 million to 200 million people living with chronic HCV and more than 350,000 people die annually from HCV-related diseases. HCV is transmitted through blood contact between an infected individual and someone who is not infected. This can occur through needlestick injuries or sharing of equipment used to inject drugs.



Contacts and sources:
Courtney Lock
European Association for the Study of the Liver

A Sneak Peek Through The Mist To Future 3D Fog Screen Display Technology

A tabletop display with personal screens made from a curtain of mist, which allows users to move images around and push them through the fog screens onto the display, will be unveiled at an international conference later this month.

MisTable supports different types of 3D interaction
Image courtesy of Bristol Interaction and Graphics group, University of Bristol 

The research paper, to be presented at one of the world’s most important conferences on human-computer interfaces - ACM CHI 2014 [26 April-1 May], could change the way people interact and collaborate in the future.

MisTable, led by Professor Sriram Subramanian and Dr Diego Martinez Plasencia from the University of Bristol’s Department of Computer Science, is a tabletop system that combines a conventional interactive table with personal screens, built using fog, between the user and the tabletop surface.

These personal screens are both see-through and reach-through. The see-through feature provides direct line of sight of the personal screen and the elements behind it on the tabletop. The reach-through feature allows the user to switch from interacting with the personal screen to reaching through it to interact with the tabletop or the space above it.

The personal screen allows a range of customisations and novel interactions such as presenting 2D personal content on the screen, 3D content above the tabletop or supplementing and renewing actual objects differently for each user.



Sriram Subramanian, Professor of Human-Computer Interaction, in the University’s Bristol Interaction and Graphics group, said: “MisTable broadens the potential of conventional tables in many novel and unique ways. The personal screen provides direct line of sight and access to the different interaction spaces. Users can be aware of each other’s actions and can easily switch between interacting with the personal screen, the tabletop surface, or the space above it. This allows users to break in or out of shared tasks and switch between “individual” and “group” work.

“Users can also move content freely between these interaction spaces. Moving content between the tabletop and the personal screen allows users to share it with others or to gain exclusive ownership over it.” The research team believes MisTable could support new forms of interaction and collaboration in the future.

With the new system, each user's personal screen allows their view to be customised to them, while maintaining all well-established tabletop interface techniques such as touch and tangible interactions.


Contacts and sources:
University of Bristol

Paper: MisTable: Reach-through Personal Screens for Tabletops, Diego Martinez Plasencia, Edward Joyce, Sriram Subramanian, Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems, Toronto, Canada, 26 April-1 May 2014.

Friday, April 11, 2014

First Moon Outside Our Solar System Detected

Titan, Europa, Io and Phobos are just a few members of our solar system's pantheon of moons. Are there other moons out there, orbiting planets beyond our sun?

NASA-funded researchers have spotted the first signs of an "exomoon," and though they say it's impossible to confirm its presence, the finding is a tantalizing first step toward locating others. The discovery was made by watching a chance encounter of objects in our galaxy, which can be witnessed only once.

"We won't have a chance to observe the exomoon candidate again," said David Bennett of the University of Notre Dame, Ind., lead author of a new paper on the findings appearing in the Astrophysical Journal. "But we can expect more unexpected finds like this."

Researchers have detected the first "exomoon" candidate -- a moon orbiting a planet that lies outside our solar system. Using a technique called "microlensing," they observed what could be either a moon and a planet -- or a planet and a star. This artist's conception depicts the two possibilities, with the planet/moon pairing on the left, and star/planet on the right. If the moon scenario is true, the moon would weigh less than Earth, and the planet would be more massive than Jupiter.
  
Credit: NASA/JPL-Caltech

The scientists can't confirm the results partly because microlensing events happen only once, arising from chance encounters. The events occur when a star or planet happens to pass in front of a more distant star, causing the distant star to brighten. If the passing object has a companion -- either a planet or moon -- it will alter the brightening effect. Once the event is over, it is possible in principle to study the passing object on its own, but the results would still not be able to distinguish between a planet/moon duo and a faint star/planet pair, because both pairings would be too dim to be seen.

In the future, it may be possible to enlist the help of multiple telescopes to watch a lensing event as it occurs, and confirm the presence of exomoons.

The international study is led by the joint Japan-New Zealand-American Microlensing Observations in Astrophysics (MOA) and the Probing Lensing Anomalies NETwork (PLANET) programs, using telescopes in New Zealand and Tasmania. Their technique, called gravitational microlensing, takes advantage of chance alignments between stars. When a foreground star passes between us and a more distant star, the closer star can act like a magnifying glass to focus and brighten the light of the more distant one. These brightening events usually last about a month.

If the foreground star -- or what astronomers refer to as the lens -- has a planet circling around it, the planet will act as a second lens to brighten or dim the light even more. By carefully scrutinizing these brightening events, astronomers can figure out the mass of the foreground star relative to its planet.
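For a single point lens, the brightening follows a smooth, symmetric curve set by how closely the foreground star passes in front of the background star; a companion planet adds a brief bump or dip whose size and duration encode the mass ratio. The sketch below evaluates the standard point-lens magnification with illustrative event parameters (the impact parameter and timescale are assumptions, not values from this study).

```python
# Standard point-lens microlensing magnification (Paczynski light curve).
# u is the lens-source separation in units of the lens's Einstein radius.
import numpy as np

def magnification(u):
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Illustrative event parameters (assumed, not fitted values from the study):
u0 = 0.1      # minimum impact parameter, in Einstein radii
tE = 30.0     # Einstein-radius crossing time, days
t0 = 0.0      # time of closest approach, days

t = np.linspace(-60.0, 60.0, 241)                # days relative to the peak
u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
A = magnification(u)

print(f"Peak magnification: {A.max():.1f}x; event lasts roughly {2 * tE:.0f} days")
```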

In some cases, however, the foreground object could be a free-floating planet, not a star. Researchers might then be able to measure the mass of the planet relative to its orbiting companion: a moon. While astronomers are actively looking for exomoons -- for example, using data from NASA's Kepler mission - so far, they have not found any.

Credit: NASA/JPL-Caltech

In the new study, the nature of the foreground, lensing object is not clear. The ratio of the larger body to its smaller companion is 2,000 to 1. That means the pair could be either a small, faint star circled by a planet about 18 times the mass of Earth -- or a planet more massive than Jupiter coupled with a moon weighing less than Earth.
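The ambiguity falls straight out of the arithmetic of a 2,000-to-1 ratio, as the rough check below shows (the specific companion masses are representative picks, not measured values).

```python
# Rough sanity check of the two readings of a 2,000:1 lens mass ratio.
# Masses in Earth masses; conversion factors are approximate.
EARTH_MASSES_PER_SUN = 333_000
EARTH_MASSES_PER_JUPITER = 318
RATIO = 2000

# Reading 1: the companion is an ~18 Earth-mass planet, so the host is a faint star.
host_star = 18 * RATIO
print(f"Host under reading 1: {host_star / EARTH_MASSES_PER_SUN:.2f} solar masses (small, faint star)")

# Reading 2: the companion is a sub-Earth-mass moon, so the host is a super-Jupiter.
moon = 0.5    # representative pick, less than one Earth mass
host_planet = moon * RATIO
print(f"Host under reading 2: {host_planet / EARTH_MASSES_PER_JUPITER:.1f} Jupiter masses (free-floating planet)")
```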

The problem is that astronomers have no way of telling which of these two scenarios is correct.

"One possibility is for the lensing system to be a planet and its moon, which if true, would be a spectacular discovery of a totally new type of system," said Wes Traub, the chief scientist for NASA's Exoplanet Exploration Program office at NASA's Jet Propulsion Laboratory, Pasadena, Calif., who was not involved in the study. "The researchers' models point to the moon solution, but if you simply look at what scenario is more likely in nature, the star solution wins."

The answer to the mystery lies in learning the distance to the circling duo. A lower-mass pair closer to Earth will produce the same kind of brightening event as a more massive pair located farther away. But once a brightening event is over, it's very difficult to take additional measurements of the lensing system and determine the distance. The true identity of the exomoon candidate and its companion, a system dubbed MOA-2011-BLG-262, will remain unknown.

In the future, however, it may be possible to obtain these distance measurements during lensing events. For example, NASA's Spitzer and Kepler space telescopes, both of which revolve around the sun in Earth-trailing orbits, are far enough away from Earth to be great tools for the parallax-distance technique.

The basic principle of parallax can be explained by holding your finger out, closing one eye after the other, and watching your finger jump back and forth. A distant star, when viewed from two telescopes spaced really far apart, will also appear to move. When combined with a lensing event, the parallax effect alters how a telescope will view the resulting magnification of starlight. Though the technique works best using one telescope on Earth and one in space, such as Spitzer or Kepler, two ground-based telescopes on different sides of our planet can also be used.
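The finger analogy maps onto a simple relation: the apparent angular shift is roughly the baseline between the two viewpoints divided by the distance to the object, and with the baseline in astronomical units and the distance in parsecs the shift comes out directly in arcseconds. The numbers below (a 1 AU baseline and a lens a few thousand parsecs away) are assumptions chosen only to show the scale involved.

```python
# Basic parallax: with the baseline in AU and the distance in parsecs,
# the apparent shift in arcseconds is baseline_au / distance_pc
# (that ratio is how the parsec is defined).
baseline_au = 1.0       # assumed separation between the two telescopes, ~1 AU
distance_pc = 5000.0    # assumed distance to the lensing system, parsecs

shift_arcsec = baseline_au / distance_pc
print(f"Apparent shift: {shift_arcsec * 1000:.2f} milliarcseconds")
```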

Meanwhile, surveys like MOA and the Polish Optical Gravitational Lensing Experiment, or OGLE, are turning up more and more planets. These microlensing surveys have discovered dozens of exoplanets so far, both in orbit around stars and free-floating. A previous NASA-funded study, also led by the MOA team, was the first to find strong evidence for planets the size of Jupiter roaming alone in space, presumably after they were kicked out of forming planetary systems. (See http://www.jpl.nasa.gov/news/news.php?release=2011-147).

The new exomoon candidate, if real, would orbit one such free-floating planet. The planet may have been ejected from the dusty confines of a young planetary system, while keeping its companion moon in tow.

The ground-based telescopes used in the study are the Mount John University Observatory in New Zealand and the Mount Canopus Observatory in Tasmania.

Additional observations were obtained with the W.M. Keck Observatory on Mauna Kea, Hawaii; the European Southern Observatory's VISTA telescope in Chile; the Optical Gravitational Lensing Experiment (OGLE) using the Las Campanas Observatory in Chile; the Microlensing Follow-Up Network (MicroFUN) using the Cerro Tololo Interamerican Observatory in Chile; and the RoboNet Collaboration using the Faulkes Telescope South in Siding Spring, Australia.



Contacts and sources:
Whitney Clavin 
Jet Propulsion Laboratory, Pasadena, Calif.
More information about exoplanets and NASA's planet-finding program is at http://planetquest.jpl.nasa.gov.

Why Are Night Shining Clouds Increasing?

First spotted in 1885, silvery blue clouds sometimes hover in the night sky near the poles, appearing to give off their own glowing light. Known as noctilucent clouds, this phenomenon began to be sighted at lower and lower latitudes -- between the 40th and 50th parallels -- during the 20th century, causing scientists to wonder whether the region these clouds inhabit had indeed changed, information that would tie in with understanding the weather and climate of the whole Earth.

Night-shining, or noctilucent, clouds on July 3, 2011, in Loch Leven, Fife, Scotland.


Image Credit: Courtesy of Adrian Maricic

A NASA mission called Aeronomy of Ice in the Mesosphere, or AIM, was launched in 2007 to observe noctilucent clouds, but it currently only has a view of the clouds near the poles. Now scientists have gathered information from several other missions, past and present, and combined it with computer simulations to systematically show that the presence of these bright shining clouds has indeed increased in areas between 40 and 50 degrees north latitude, a region that covers the northern third of the United States and the lowest parts of Canada. The research was published online in the Journal of Geophysical Research: Atmospheres on March 18, 2014.

NASA's Aeronomy of Ice in the Mesosphere, or AIM, mission captured this image of noctilucent clouds over the poles in 2010. By compiling data from several missions at once, researchers have now created a record of the clouds at lower latitudes as well.

Image Credit: NASA/AIM

"Noctilucent clouds occur at altitudes of 50 miles above the surface -- so high that they can reflect light from the sun back down to Earth," said James Russell, an atmospheric and planetary scientist at Hampton University in Hampton, Va., and first author on the paper. "AIM and other research has shown that in order for the clouds to form, three things are needed: very cold temperatures, water vapor and meteoric dust. The meteoric dust provides sites that the water vapor can cling to until the cold temperatures cause water ice to form."

To study long-term changes in noctilucent clouds, Russell and his colleagues used historical temperature and water vapor records and a validated model to translate this data into information on the presence of the clouds. They used temperature data from 2002 to 2011 from NASA's Thermosphere Ionosphere Mesosphere Energetics and Dynamics, or TIMED, mission and water vapor data from NASA's Aura mission from 2005 to 2011. They used a model previously developed by Mark Hervig, a co-author on the paper at GATS, Inc., in Driggs, Idaho.

The team tested the model by comparing its output to observations from the Osiris instrument on the Swedish Odin satellite, which launched in 2001, and the SHIMMER instrument on the U.S. Department of Defense STPSat-1 mission, both of which observed low level noctilucent clouds over various time periods during their flights. The output correlated extremely well to the actual observations, giving the team confidence in their model.

The model showed that the occurrence of noctilucent clouds had indeed increased from 2002 to 2011. These changes correlate to a decrease in temperature at the peak height where noctilucent clouds exist in the atmosphere. Temperatures at this height do not match temperatures at lower levels – indeed, the coldest place in the atmosphere is at this height during summertime over the poles – but a change there certainly does raise questions about change in the overall climate system.

Russell and his team will research further to determine if the noctilucent cloud frequency increase and accompanying temperature decrease over the 10 years could be due to a reduction in the sun’s energy and heat, which naturally occurred as the solar output went from solar maximum in 2002 to solar minimum in 2009.

"As the sun goes to solar minimum, the solar heating of the atmosphere decreases, and a cooling trend would be expected," said Russell.

NASA's Goddard Space Flight Center in Greenbelt, Md. manages the TIMED mission for the agency's Science Mission Directorate at NASA Headquarters in Washington. The spacecraft was built by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md.





Contacts and sources:
Karen C. Fox
NASA's Goddard Space Flight Center, Greenbelt, Md.

Thursday, April 10, 2014

Hades Project: Scientists Explore One Of Earth's Deepest Ocean Trenches

What lives in the deepest part of the ocean--the abyss?

A team of researchers funded by the National Science Foundation (NSF) will use the world's only full-ocean-depth, hybrid, remotely-operated vehicle, Nereus, and other advanced technology to find out. They will explore the Kermadec Trench at the bottom of the Pacific Ocean.

The trench, located off New Zealand, is the fifth deepest trench in the world. Its maximum depth is 32,963 feet or 6.24 miles (10,047 meters). It's also one of the coldest trenches due to the inflow of deep waters from Antarctica.

The 40-day expedition to the Kermadec Trench, which begins on April 12, 2014, kicks off a three-year collaborative effort.

The project, known as the Hadal Ecosystem Studies Project (HADES), will conduct the first systematic study of life in ocean trenches, comparing it to the neighboring abyssal plains--flat areas of the seafloor usually found at depths between 9,843 and 19,685 feet (3,000 and 6,000 meters).

"The proposal to study the deep-sea environment as part of HADES was high-risk, but, we hope, also high-reward," says David Garrison, program director in NSF's Division of Ocean Sciences, which funds HADES. "Through this exciting project, we will shine a light into the darkness of Earth's deep-ocean trenches, discovering surprising results all along the way."

Among least-explored environments on Earth

As a result of the extreme pressures in these deep-sea environments and the technical challenges involved in reaching them, ocean trenches remain among the least-explored environments on the planet.

"We know relatively little about life in ocean trenches--the deepest marine habitats on Earth," says Tim Shank, a biologist at the Woods Hole Oceanographic Institution, one of the participating organizations.

"We didn't have the technology to do these kinds of detailed studies before. This will be a first-order look at community structure, adaptation and evolution: how life exists in the trenches."

NSF HADES principal investigators are Shank, Jeff Drazen of the University of Hawaii and Paul Yancey of Whitman College.

Other participating researchers are Malcolm Clark and Ashley Rowden of the National Institute of Water and Atmospheric Research in New Zealand, Henry Ruhl of the National Oceanography Centre at the University of Southampton, Alan Jamieson and Daniel Mayor of the University of Aberdeen and collaborators from the Japan Agency for Marine-Earth Science and Technology, Scripps Institution of Oceanography and the University of Oregon.

A crustacean on the edge of the Marianas Trench in the central Pacific Ocean.
Credit: Alan Jamieson, Oceanlab, University of Aberdeen
Telepresence technology aboard the research vessel Thomas G. Thompson will allow the public to share in the discoveries. Live-streaming Web events from the seafloor will include narration from the science team.

The researchers' work will also be chronicled in video, still images and blog updates on the expedition website.

How does life exist in a deep-sea trench?

What marine animals live in the Kermadec Trench, and how do they survive the crushing pressures found at that depth--some 15,000 pounds per square inch? These are among the questions the scientists will try to answer.
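Those pressure figures follow from basic hydrostatics: pressure rises by roughly one atmosphere for every 10 meters of seawater. A back-of-the-envelope check at the trench's quoted maximum depth, assuming a constant seawater density (a simplification, since density increases slightly with depth), is below.

```python
# Hydrostatic pressure at the bottom of the Kermadec Trench (rough estimate,
# assuming constant seawater density).
RHO_SEAWATER = 1025.0    # kg/m^3, typical near-surface value
G = 9.81                 # m/s^2
DEPTH_M = 10_047.0       # maximum trench depth quoted above

pressure_pa = RHO_SEAWATER * G * DEPTH_M
print(f"~{pressure_pa / 101_325:.0f} times atmospheric pressure "
      f"(~{pressure_pa / 6_894.76:,.0f} pounds per square inch)")
```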

Denizen of The Deep. NSF HADES Project scientists hope to find out what else lives in the abyss.

Credit: RRS James Cook Cruise 62

The biologists plan to conduct research at 15 stations, including sites in shallow water for testing purposes, sites along the trench axis and sites in the abyssal plain.

At each one, they will deploy free-falling, full-ocean-depth, baited imaging landers called Hadal-Landers and "elevators" outfitted with experimental equipment--including respirometers to see how animal metabolism functions, plus water-sampling bottles to investigate microbial activity.

The team will use Nereus, which can remain deployed for up to 12 hours, to collect biological and sediment samples.

Nereus will stream imagery from its video camera to the ship via a fiber-optic filament about the width of a human hair.

The expedition will build on earlier studies of the Kermadec Trench by Jamieson and colleagues at the National Institute of Water and Atmospheric Research and the University of Tokyo. Using the Hadal-Lander, they documented new species of animals in the Kermadec and other trenches in the Pacific.

Ocean trenches: home to unique species

Once thought devoid of life, trenches may be home to many unique species. There is growing evidence that food is plentiful there. While it is still unclear why, organic material in the ocean may be transported by currents and deposited into the trenches.

Major Pacific Ocean trenches (1-10) and fracture zones (11-20); Kermadec Trench is at number one.
Credit: Wikimedia Commons

In addition to looking at how food supply varies at different depths, the researchers will investigate the role energy demand and metabolic rates of trench organisms play in animal community structure.

"The energy requirements of hadal animals have never been measured," says Drazen, who will lead efforts to study distribution of food supply and the energetic demands of the trench organisms.

How animals in the trenches evolved to withstand high pressures is unknown, but Shank's objective is to compare the genomes of trench animals to piece together how they can survive there.

Life in trenches like the Kermadec: How do animals survive there?
 
Credit: Alan Jamieson, Oceanlab, University of Aberdeen

"The challenge is to determine whether life in the trenches holds novel evolutionary pathways that are distinct from others in the oceans," he says.

Water pressure, which at depths found in ocean trenches can be up to 1,100 times that at the surface, is known to inhibit the activity of certain proteins.

Yancey will investigate the role that piezolytes--small molecules that protect proteins from pressure--play in the adaptation of trench animals. Piezolytes, which Yancey discovered, may explain previous findings that not all deep-sea proteins are able to withstand high pressures.

"We're trying to understand how life can function under massive pressures in the hadal zone," says Yancey. "Pressure might be the primary factor determining which species are able to live in these extreme environments."

Trenches and climate change

Evidence also suggests that trenches act as carbon sinks, making the research relevant to climate change studies. The V-shaped topography along trench axes funnels resources--including surface-derived organic carbon--downward.

This cusk eel shares the depths of the Marianas Trench with the crustacean in the image above.
Credit: Alan Jamieson, Oceanlab, University of Aberdeen
"The bulk of our knowledge of trenches is only from snapshot visits using mostly trawls and camera landers," Shank says.

"Only detailed systematic studies will reveal the role trenches may play as the final location of where most of the carbon and other chemicals are sequestered in the oceans."



Contacts and sources:
Cheryl Dybas, NSF
Erin Koenig, Woods Hole Oceanographic Institution
Marcie Grabowski, University of Hawaii
Daniel LeRay, Whitman College