Thursday, June 30, 2011

When Viruses Infect Bacteria

Viruses are the most abundant parasites on Earth. Well-known viruses, such as the influenza virus, attack human hosts, while others, such as the tobacco mosaic virus, infect plants.

Virus-bacterium associations were examined in the natural environment of a termite's hindgut. Three general scenarios were seen: (1) a one-to-one association, with one type of virus matched to one type of bacterial host; (2) a host bacterium associated with a diverse group of viruses, perhaps indicating a more ancient infection or a more susceptible host; and (3) very similar viruses infecting several different types of bacterial hosts.

This study tested methods of examining virus-bacterium interactions in nature, rather than in vitro, in a culture. It opens a new door to understanding the diverse and highly populated world of viruses and bacteria, about which we still know so little.
Illustration of viruses infecting bacteria in a termite's hindgut.
Credit: Zina Deretsky, National Science Foundation

More common, but less understood, are viruses that infect bacteria, known as bacteriophages, or phages. In part, this is because it is difficult to culture bacteria and viruses in vitro, cut off from their usual biological surroundings.

Researchers from the California Institute of Technology, funded in part by the National Science Foundation, were the first to use a clever technique to look at virus-bacterium interactions in vivo, that is, within an organism's normal state. The researchers report their results in the July 1 issue of the journal Science.

As a test case, Rob Phillips and his team considered the interaction between viruses and bacteria in the hindgut, or posterior part, of a termite. Using new microfluidic technology, they were able to isolate single bacterial cells from the termite hindgut in six-nanoliter chambers on an array containing 765 such chambers.

They were then able to determine whether the chambers contained bacterial DNA, viral DNA or both. In chambers containing both, the researchers could statistically deduce whether the virus was specifically associated with the host, for example by attaching to the host, injecting its DNA into it, being incorporated into the host genome as a prophage (a viral genome inserted and integrated into the bacterial DNA), riding on a plasmid, or assembling new viruses within the host. Through this snapshot, the group recorded virus-bacterium associations.
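The paper's exact statistical method isn't spelled out here, but the underlying idea, testing whether viral and bacterial DNA land in the same chambers more often than chance would allow, can be sketched with a hypergeometric tail test. All counts below are hypothetical, chosen only to illustrate the calculation:

```python
from math import comb

def cooccurrence_pvalue(n_chambers, n_with_bacterium, n_with_virus, n_with_both):
    """Probability of seeing at least n_with_both co-occupied chambers
    by chance, under a hypergeometric null: virus-positive chambers are
    distributed at random with respect to bacterium-positive ones."""
    total = 0.0
    upper = min(n_with_bacterium, n_with_virus)
    for k in range(n_with_both, upper + 1):
        total += (comb(n_with_bacterium, k)
                  * comb(n_chambers - n_with_bacterium, n_with_virus - k)
                  / comb(n_chambers, n_with_virus))
    return total

# Hypothetical counts for a 765-chamber array: 40 chambers with the
# bacterium, 25 with the virus, 12 with both (chance predicts ~1.3).
p = cooccurrence_pvalue(765, 40, 25, 12)
print(f"p = {p:.2e}")  # a tiny p-value argues for a real association
```

A small p-value says the overlap is far larger than random chamber loading would produce, which is the signature of a specific virus-host association.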

Frequently there was a one-to-one virus-bacterium correspondence. However, in some cases the host was associated with a viral gene exhibiting marked diversity, suggesting possibly a more ancient infection, a more susceptible host or a phage replicating at lower fidelity. By analyzing the evolutionary relationships of the bacteria and viruses, they deduced that horizontal gene transfer, while it may be occurring, is not occurring at a rate high enough to randomize host-virus associations.

This study was by no means exhaustive. Many similar associations may still be found in the termite hindgut. And further inquiry may lead to a better understanding of the coevolution of a virus and its host. However, this was the first in vivo exercise, and it opens the doors of the field much wider than previously possible through in vitro culture alone.

Source: National Science Foundation

'Dirty Hack' Saves Cluster Mission From Near Loss

Cluster satellites study the effects of solar wind. Artist's impression. 
Credits: ESA

Using ingenuity and an unorthodox 'dirty hack', ESA has recovered the four-satellite Cluster mission from near loss. The drama began in March, when a crucial science package stopped responding to commands – one of a mission controller's worst fears.

Since a pair of spectacular dual launches in 2000, the four Cluster satellites have been orbiting Earth in tightly controlled formation. Each of the 550 kg satellites carries an identical payload to investigate Earth's space environment and its interaction with the solar wind – the stream of charged particles pouring out from the Sun.

Among each satellite's 11 instruments, five comprise the Wave Experiment Consortium (WEC), which makes important measurements of electrical and magnetic fields. All four satellites must work together to make carefully orchestrated observations – the loss of any one could seriously affect the unique 'four-satellite science' delivered by the mission.

On 5 March, the WEC package on Cluster's number 3 satellite, Samba, failed to switch on. Ground controllers at ESA's European Space Operations Centre, in Darmstadt, Germany, immediately triggered a series of standard recovery procedures, none of which succeeded.

Even worse, no status information could be coaxed out of the instruments.

Dangerous scenario for orbiting satellite

"With no status data and no response from the instrument, we suspected either that the device's five power switches were locked closed or that an electrical short circuit had caused a failure, one of the most dangerous faults on any satellite," said ESA's Jürgen Volpp, Cluster operations manager.

Over the next several weeks, working closely with the satellites' builder, the WEC scientists and instrument manufacturer, and other ESA teams, the Cluster control team diagnosed the problem, eventually making use of onboard software that had been dormant since just after launch, more than 10 years ago.

The result ruled out a short circuit and pointed an accusing finger at the five power switches being locked in the 'closed' position.

Tests in 1995 had simulated what might happen if three of the five switches locked closed, but no one had ever considered how to recover from all five being locked – such a situation had not been deemed possible.

Armed with this information and a great deal of ingenuity, the team painstakingly designed a recovery procedure and tested it on one of Samba's functioning sister satellites.

Solution based on a 'dirty hack'

"The solution was based on a 'dirty hack' – jargon referring to any non-standard procedure – but we really had no other option," said Jürgen.

Finally, on 1 June, in a very tense mission control room, a series of commands was radioed up. To immense relief, these flipped the power switches to 'on' and the recalcitrant WEC came back to life.

Cluster has since returned to normal operation and measures are being taken to prevent this failure from happening again.

"When everything goes as planned, flying a mission can be routine," said ESA's Manfred Warhaut, Head of Mission Operations. "But when unexpected trouble occurs, and there's nothing in the manuals, you really want to have an experienced and talented team on hand to solve the problem."

Source: ESA

50 Common Eco-Crimes Committed By College Students

Most people try to be mindful of the environment, students included, but it’s not always easy to remember that almost every action we take has the potential to damage the world. College students especially have many opportunities to change their ways for the better, but for one reason or another, still continue to commit eco-crimes against nature. Read on to learn about 50 crimes you may be committing, and what you can do about them.
  1. Boiling unnecessary water: Overfilling your teakettle or boiling without a lid wastes energy and takes more time to get going.
  2. Buying new books: Instead of buying new books, it’s more eco-friendly to buy used ones, or even rent or check out books from the library instead.
  3. Throwing away pens: Cheap bulk bag pens are convenient, but wasteful. Get a refillable pen instead, and throw away just a small refill rather than an entire pen.
  4. Wasting paper: Unlimited printing at the library doesn’t mean you should go crazy — print only what you absolutely need to have on paper.
  5. Forgetting to turn off the tap: Turn off the tap when you’re brushing your teeth or washing dishes.
  6. Recycling incorrectly: Recycling doesn’t work if you don’t do it right. Remove caps and put each item in the right bin so it can be processed correctly.
  7. Washing partial loads: Run the dishwasher and washing machine with full loads instead of half-full or less.
  8. Writing inside the margins: You’ve probably been taught to stay neatly within the margins of your paper, but that is a wasteful practice when you’re just taking notes. Write all the way to the edge, and you may be surprised how much you can fit on each page.
  9. Dumping unwanted items: When the semester’s over and students move on, students often leave behind items that they don’t really want to take with them.
  10. Leaving your TV on in an empty room: If you’re not watching your TV or listening to the radio, turn them off.
  11. Buying more dorm stuff than necessary: Chances are, you’ll have a roommate in the dorms, and that roommate will be bringing lots of stuff with them. Instead of buying and loading up lots of your own stuff, check with your roommate to see if you can split items.
  12. Disposing electronic waste improperly: When you’re done with your laptop or cell phone, donate it or drop it in a recycling box designated for electronic waste.
  13. Forgetting to recycle: College campuses often have excellent recycling and even composting programs, but college students don’t always take advantage of them. Pay attention to what’s available, and use it.
  14. Forgetting what you have already: Remember to take inventory to avoid duplicates when you’re headed off to college.
  15. Unnecessary driving: College campuses are usually pedestrian friendly, so walk, bike, and avoid using your car unless you need to.
  16. Idling your car: Leaving the motor running when parked waiting for a friend wastes gas. It’s better to shut it off if you’ll be sitting for 10 seconds or longer.
  17. Buying notebooks: If you don’t fill your notebook each semester, you’re wasting paper. Use a refillable binder, or even a laptop instead.
  18. Buying bottled water: Buy a refillable bottle and wash it instead of buying disposable bottles.
  19. Buying cheap supplies: Cheap gear wears out fast. Buy a durable backpack to start with, and you won’t have to replace it later.
  20. Using incandescent light bulbs: If you’re renting or living in a dorm, you’re probably just going to use the bulbs provided, but switching them out for compact fluorescent light bulbs can save energy.
  21. Shipping your stuff everywhere: Instead of shipping your stuff back and forth, look into local storage options.
  22. Using disposables: It’s easy to clean up disposable plates and cups, but they’re terrible for the environment. Stick to inexpensive plates you can wash instead.
  23. Drinking bottled beer: Beer on tap doesn’t create nearly as much waste as bottled beer.
  24. Buying brand new supplies: Save your supplies from semester to semester and avoid having to buy them over again.
  25. Buying highly packaged food: Instead of buying processed foods, get fresh items that come with less packaging.
  26. Cooking with inefficient appliances: Instead of using outdated appliances, use efficient ones like microwaves and toaster ovens.
  27. Using the dryer: Invest in a clothes drying rack to hang dry your clothes instead of using the dryer.
  28. Replacing instead of upgrading: Before buying a new item, find out if you can upgrade first.
  29. Using wasteful beauty products: Look for natural and organic personal care brands to make sure your clean is really clean.
  30. Sleeping with the TV on: Set a timer for your TV to turn off once you’re asleep so it’s not on all night.
  31. Reading the school newspaper: Find out if your school’s newspaper offers an online edition instead of picking up a paper copy.
  32. Working out in the gym: Gym equipment like treadmills uses a lot of electricity. Save the earth while you work out by exercising outside.
  33. Taking more than you’ll eat: Be careful not to overfill your plate at the cafeteria; just take what you need.
  34. Leaving your air conditioner on: College students spend lots of time out of their dorms and apartments but may not remember to program their thermostat to go down when they’re not at home.
  35. Printing on one side: There’s no excuse for printing on just one side — learn how to do double-sided printing.
  36. Leaving electronics on: Leaving your laptop, TV, and other electronics on drains energy without you noticing. Plug your electronics into power strips and turn them all off at once.
  37. Eating mini packs: Pack your lunch in reusable bags instead of mini packs of chips and other items.
  38. Using paper napkins: If you’re eating fast food, chances are you’ve gotten napkins. Limit your use — you probably just need one.
  39. Spring breaking: Instead of flying to a far-flung destination, take an eco-friendly spring break trip.
  40. Forgetting to bring reusable shopping bags: Reusable bags are easy — if you actually remember to bring them to the store. Use keychain bags or ones that fold up to fit in your backpack so you won’t forget.
  41. Waiting for the hot water: Let the cool water fill up a bucket to use on your plants and other items while you’re waiting for the hot water to come in.
  42. Eating takeout: Takeout often comes in wasteful packaging. Find restaurants that use less, or just cook for yourself at home.
  43. Throwing away old clothes: Updating your wardrobe doesn’t have to mean being wasteful — donate your old clothes to a charity or a homeless shelter.
  44. Letting lint build up: Whether you’re using community dryers or your own at home, always remember to clean the lint filter for a more efficient dryer.
  45. Using unnecessary kitchen items: Tin foil, plastic wrap, disposable cleaning cloths, and more can be switched for reusable items.
  46. Buying new clothing: Buy used clothing, or swap with friends to save resources and money.
  47. Paper statements: Switch to paperless billing for your bank, credit cards, utilities, and more.
  48. Driving alone: On a college campus, you should be able to find someone to share rides with — rideshare home to visit family and friends, or just go to the store together.
  49. Leaving lights on: It doesn’t have to be day-bright in your room all of the time. Dim your lights or turn them off completely unless you really need them.
  50. Throwing away paper: Students deal with lots of paper, and it’s important to dispose of it correctly. Remember to recycle instead of throwing away paper.
Contacts and sources:
Story by Tim Handorf

Quantum ‘Graininess’ Dizzying Physics Beyond Einstein

ESA’s Integral gamma-ray observatory has provided results that will dramatically affect the search for physics beyond Einstein. It has shown that any underlying quantum ‘graininess’ of space must be at much smaller scales than previously predicted.

Einstein’s General Theory of Relativity describes the properties of gravity and assumes that space is a smooth, continuous fabric. Yet quantum theory suggests that space should be grainy at the smallest scales, like sand on a beach.

Integral’s IBIS instrument captured the gamma-ray burst (GRB) of 19 December 2004 that Philippe Laurent and colleagues have now analysed in detail. It was so bright that Integral could also measure its polarisation, allowing Laurent and colleagues to look for differences in the signal at different energies. The GRB shown here, from 25 November 2002, was the first captured using a gamma-ray camera as powerful as Integral’s. GRBs shine as brightly as hundreds of galaxies, each containing billions of stars.
Credits: ESA/SPI Team/ECF

One of the great challenges of modern physics is to marry these two concepts into a single theory of quantum gravity.

Now, Integral has placed stringent new limits on the size of these quantum ‘grains’ in space, showing them to be much smaller than some quantum gravity ideas would suggest.

According to calculations, the tiny grains would affect the way that gamma rays travel through space. The grains should ‘twist’ the light rays, changing the direction in which they oscillate, a property called polarisation.

High-energy gamma rays should be twisted more than the lower energy ones, and the difference in the polarisation can be used to estimate the size of the grains.
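The energy dependence described above can be put into a schematic formula. In a common dimensional parameterisation of Planck-scale birefringence (a sketch for illustration; the detailed model used by Laurent and colleagues may differ), the polarisation angle of a photon of energy $E$ travelling a distance $d$ rotates by roughly

```latex
% Schematic Planck-scale birefringence (dimensional sketch only).
\[
  \Delta\theta(E) \;\sim\; \xi \,\frac{E^{2}\,d}{E_{\mathrm{Pl}}\,\hbar c},
  \qquad E_{\mathrm{Pl}} \approx 1.22\times 10^{19}\ \mathrm{GeV},
\]
% Comparing two photon energies cancels any ordinary, energy-independent
% rotation and isolates the quantum-gravity term:
\[
  \Delta\theta(E_{2}) - \Delta\theta(E_{1})
  \;\sim\; \xi\,\frac{\left(E_{2}^{2}-E_{1}^{2}\right) d}{E_{\mathrm{Pl}}\,\hbar c}.
\]
```

Here $\xi$ is a dimensionless parameter that would be of order one if space were grainy at the Planck scale. A null difference between two energies over the enormous distance $d$ of a gamma-ray burst therefore forces $\xi$, and with it the grain scale, far below the naive Planck-scale expectation.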

ESA’s Integral gamma-ray observatory is able to detect gamma-ray bursts, the most energetic phenomena in the Universe.
Credits: ESA/Medialab

Philippe Laurent of CEA Saclay and his collaborators used data from Integral’s IBIS instrument to search for the difference in polarisation between high- and low-energy gamma rays emitted during one of the most powerful gamma-ray bursts (GRBs) ever seen.

GRBs come from some of the most energetic explosions known in the Universe. Most are thought to occur when very massive stars collapse into neutron stars or black holes during a supernova, leading to a huge pulse of gamma rays lasting just seconds or minutes, but briefly outshining entire galaxies.

GRB 041219A took place on 19 December 2004 and was immediately recognised as being in the top 1% of GRBs for brightness. It was so bright that Integral was able to measure the polarisation of its gamma rays accurately.

Dr Laurent and colleagues searched for differences in the polarisation at different energies, but found none to the accuracy limits of the data.

Some theories suggest that the quantum nature of space should manifest itself at the ‘Planck scale’: a minuscule 10⁻³⁵ m (for comparison, a millimetre is 10⁻³ m).

However, Integral’s observations are about 10 000 times more accurate than any previous measurement and show that any quantum graininess must be at a level of 10⁻⁴⁸ m or smaller.

“This is a very important result in fundamental physics and will rule out some string theories and loop quantum gravity theories,” says Dr Laurent.

Integral made a similar observation in 2006, when it detected polarised emission from the Crab Nebula, the remnant of a supernova explosion just 6500 light years from Earth in our own galaxy.

This new observation is much more stringent, however, because GRB 041219A was at a distance estimated to be at least 300 million light years.

In principle, the tiny twisting effect due to the quantum grains should have accumulated over the very large distance into a detectable signal. Because nothing was seen, the grains must be even smaller than previously suspected.

“Fundamental physics is a less obvious application for the gamma-ray observatory, Integral,” notes Christoph Winkler, ESA’s Integral Project Scientist. “Nevertheless, it has allowed us to take a big step forward in investigating the nature of space itself.”

Now it’s over to the theoreticians, who must re-examine their theories in the light of this new result.
Contacts and sources: 
Markus Bauer
European Space Agency

Cloud Mystery = Accidental Weather Manipulation: Takeoffs And Landings Cause More Precipitation Near Airports

Researchers have found that areas near commercial airports sometimes experience a small but measurable increase in rain and snow when aircraft take off and land under certain atmospheric conditions.

NASA’s Terra satellite captured this image of hole-punch and canal clouds on Jan. 29, 2007. These unusual gaps in clouds are often caused by aircraft under certain atmospheric conditions.
NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center

The new study, led by the National Center for Atmospheric Research (NCAR), is part of ongoing research into so-called hole-punch and canal clouds, which form when planes fly through certain mid-level clouds, forcing nearby air to rapidly expand and cool. This causes water droplets to freeze into ice and then turn to snow as they fall toward the ground, leaving behind odd-shaped gaps in the clouds.

The research team used satellite images and weather forecasting computer models to examine how often this type of inadvertent cloud seeding may occur within 62 miles (100 kilometers) of six commercial airports: London Heathrow, Frankfurt, Charles De Gaulle (Paris), Seattle-Tacoma, O'Hare (Chicago), and Yellowknife (Northwest Territories, Canada), as well as Byrd Station in Antarctica. They found that, depending on the airport and type of plane, the right atmospheric conditions typically exist up to 6 percent of the time, with somewhat more frequency in colder climates.

The lead author, NCAR scientist Andrew Heymsfield, says this phenomenon likely occurs at numerous other airports, especially in mid- and high-latitude areas during colder months. The key variable is whether there are cloud layers in the vicinity that contain water droplets at temperatures far below freezing, which is a common occurrence.

He adds that more research is needed before scientists can determine whether the precipitation produced by this effect is significant. The inadvertent cloud seeding may increase the need to de-ice planes more often, he adds.

"It appears to be a rather widespread effect for aircraft to inadvertently cause some measurable amount of rain or snow as they fly through certain clouds," Heymsfield says. "This is not necessarily enough precipitation to affect global climate, but it is noticeable around major airports in the midlatitudes."

The researchers did not estimate the total amount of rain or snow that would result from such inadvertent cloud seeding. However, they analyzed radar readings that, in one case, indicated a snowfall rate of close to an inch an hour after several planes had passed through.

The study is being published this week in the journal Science. Researchers from NASA Langley Research Center and the University of Wyoming, Laramie, co-authored the paper. Funding came from the National Science Foundation, which is NCAR's sponsor, and from NASA.

Solving a cloud mystery

Scientists for decades have speculated about the origins of mysterious holes and canals in clouds. Heymsfield led a study last year establishing that the gaps, which sometimes look as though a giant hole punch was applied to a cloud, are caused when aircraft fly through midlevel clouds that contain supercooled droplets.

When a turboprop plane flies through such a cloud layer at temperatures of about 5 degrees Fahrenheit or lower (about -15 degrees Celsius or lower), the tips of its propellers can cause the air to rapidly expand. As the air expands, it cools, causing some of the supercooled droplets to freeze into ice particles. The ice particles then grow at the expense of the remaining droplets and fall out of the clouds as snow or rain.

Jet aircraft need colder temperatures (below about -4 to -13 degrees F, or -20 to -25 degrees C) to generate the seeding effect. Air forced to expand over the wings as the aircraft moves forward cools and freezes the cloud droplets.
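As a rough illustration of the thresholds quoted above, a cloud layer's seeding potential can be classified by its temperature and whether it contains supercooled liquid. The function, the jet threshold chosen from the warmer end of the quoted range, and the sample profile below are all hypothetical, not from the study:

```python
# Temperature thresholds (degrees Celsius) as quoted in the article;
# -20 is taken from the warmer end of the jet range (-20 to -25).
PROP_THRESHOLD_C = -15.0   # turboprop propeller tips can trigger seeding
JET_THRESHOLD_C = -20.0    # jet wings need colder layers

def seeding_potential(layer_temp_c, has_supercooled_liquid):
    """Return the set of aircraft types that could seed this layer."""
    if not has_supercooled_liquid:
        return set()
    types = set()
    if layer_temp_c <= PROP_THRESHOLD_C:
        types.add("propeller")
    if layer_temp_c <= JET_THRESHOLD_C:
        types.add("jet")
    return types

# Hypothetical profile: (layer temperature, contains supercooled droplets)
layers = [(-5.0, True), (-17.0, True), (-23.0, True), (-30.0, False)]
for temp, liquid in layers:
    print(temp, sorted(seeding_potential(temp, liquid)))
```

The asymmetry between the two thresholds is why the study found suitable conditions for propeller aircraft more often than for jets at the same airports.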

The effect is unrelated to the trails of condensed water vapor known as contrails made by the exhaust of jet engines.

In the new research, the study team used cloud measurements taken by the NASA CALIPSO satellite to quantify how often such conditions exist within about 62 miles of several airports located in relatively cloudy areas. They chose the 62-mile radius because that is approximately the distance it takes for a commercial aircraft to climb above about 10,000 feet, where many of the supercooled cloud layers are located.

Of the major, mid-latitude airports studied, they found that the Frankfurt, DeGaulle, and O'Hare airports most frequently experienced the right conditions for propeller aircraft to generate precipitation. In each case, the conditions existed more than 5 percent of the time over the course of a year. The researchers found that the right conditions existed more than 3 percent of the time for jets at Heathrow, Frankfurt, and Seattle-Tacoma.

Yellowknife experienced such conditions more often, about 10 percent of the time for propeller planes and 5 percent for jets, presumably because of colder cloud conditions at higher latitudes. Byrd often experienced the very cold conditions that enable jets to cause inadvertent cloud seeding.

The researchers also found that a diverse range of aircraft can induce precipitation. By comparing observations of hole-punch and canal clouds made by a National Oceanic and Atmospheric Administration (NOAA) satellite with flight path records from the Federal Aviation Administration, they confirmed that commercial jets (such as Boeing 757s and the McDonnell Douglas MD-80 series of jets), military aircraft (B-52s), various regional and private jets, turboprops, and prop/piston planes all can induce precipitation.

"It appears that virtually any airplane that flies through clouds containing liquid water at temperatures much below freezing can cause this effect," Heymsfield says.

Satellite readings analyzed by the team showed that holes and canals generated by aircraft can occur with some frequency. For example, an extensive cloud layer over Texas on January 29, 2007, contained 92 such gaps, some of which persisted for more than four hours and reached lengths of 60 miles or more.

Heymsfield and his colleagues also used a powerful software tool, known as the Weather Research and Forecasting model, to learn more about how the holes form and develop. They found that the hole rapidly spreads about 30 to 90 minutes after an aircraft passes through. This would be the peak time for precipitation associated with the cloud-seeding effect. After about 90 minutes, ice and snow begin to dissipate.

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Red Wine: Exercise In A Bottle, Recommended for Astronauts, Resveratrol Prevents Negative Effects Of Couch Potato Lifestyle

New research in the FASEB Journal suggests that a daily intake of resveratrol prevents the ill effects of simulated weightlessness on muscle and bone metabolism.

As strange as it sounds, a new research study published in the FASEB Journal suggests that the "healthy" ingredient in red wine, resveratrol, may prevent the negative effects that spaceflight and sedentary lifestyles have on people. The report describes experiments in rats that simulated the weightlessness of spaceflight, during which the group fed resveratrol did not develop the insulin resistance or loss of bone mineral density seen in rats that were not fed resveratrol.

According to Gerald Weissmann, M.D., Editor-in-Chief of the FASEB Journal, "There are overwhelming data showing that the human body needs physical activity, but for some of us, getting that activity isn't easy. A low gravity environment makes it nearly impossible for astronauts. For the earthbound, barriers to physical activity are equally challenging, whether they be disease, injury, or a desk job. Resveratrol may not be a substitute for exercise, but it could slow deterioration until someone can get moving again."

Scientists studied rats that underwent simulated weightlessness by hindlimb tail suspension and were given a daily oral load of resveratrol. The control group showed a decrease in soleus muscle mass and strength, the development of insulin resistance, and a loss of bone mineral density and resistance to breakage. The group receiving resveratrol showed none of these complications. Study results further demonstrated some of the underlying mechanisms by which resveratrol acts to prevent the wasting adaptations to disuse-induced mechanical unloading. This study also suggests that resveratrol may be able to prevent the deleterious consequences of sedentary behaviors in humans.

"If resveratrol supplements are not your cup of tea," Weissmann added, "then there's good news. You can find it naturally in red wine, making it the toast of the Milky Way."

Satellite Sees La Niña's Exit & "La Nada" Arrival, Climate Forecasting Enters Limbo, Puzzling Period Between Earth Cycles

It's what Bill Patzert, a climatologist and oceanographer at NASA's Jet Propulsion Laboratory in Pasadena, Calif., likes to call a "La Nada" – that puzzling period between cycles of the El Niño-Southern Oscillation climate pattern in the Pacific Ocean when sea surface heights in the equatorial Pacific are near average.

The latest satellite data of Pacific Ocean sea surface heights from the NASA/European Ocean Surface Topography Mission/Jason-2 satellite show near-normal conditions in the equatorial Pacific. The image is based on the average of 10 days of data centered on June 18, 2011. Higher (warmer) than normal sea surface heights are indicated by yellows and reds, while lower (cooler) than normal sea surface heights are depicted in blues and purples. Green indicates near-normal conditions. 
Image credit: NASA/JPL Ocean Surface Topography Team 

The comings and goings of El Niño and La Niña are part of a long-term, evolving state of global climate, for which measurements of sea surface height are a key indicator. For the past three months, since last year's strong La Niña event dissipated, data collected by the U.S.-French Ocean Surface Topography Mission (OSTM)/Jason-2 oceanography satellite have shown that the equatorial Pacific sea surface heights have been stable and near average. Elsewhere, however, the northeastern Pacific Ocean remains quite cool, with sea levels much lower than normal. The presence of cool ocean waters off the U.S. West Coast has also been a factor in this year's cool and foggy spring there.

The current state of the Pacific is shown in the OSTM/Jason-2 image above, based on the average of 10 days of data centered on June 18, 2011. Sea surface height is an indicator of how much of the sun's heat is stored in the upper ocean.

For oceanographers and climate scientists like Patzert, "La Nada" conditions can bring with them a high degree of uncertainty. While some forecasters (targeting the next couple of seasons) have suggested La Nada will bring about "normal" weather conditions, Patzert cautions that previous protracted La Nadas have often delivered unruly jet stream patterns and wild weather swings.

In addition, some climatologists are pondering whether a warm El Niño pattern (which often follows La Niña) may be lurking over the horizon. Patzert says that would be perfectly fine for the United States.

"For the United States, there would be some positives to the appearance of El Niño this summer," Patzert said. "The parched and fire-ravaged southern tier of the country would certainly benefit from a good El Niño soaking. Looking ahead to late August and September, El Niño would also tend to dampen the 2011 hurricane season in the United States. We've had enough wild and punishing weather this year. Relief from the drought across the southern United States and a mild hurricane season would be very welcome."

Jason-2 scientists will continue to monitor Pacific Ocean sea surface heights for signs of El Niño, La Niña or prolonged neutral conditions.

JPL manages the U.S. portion of the OSTM/Jason-2 mission for NASA's Science Mission Directorate, Washington, D.C.

For more information on NASA's ocean surface topography missions, visit: .

To view the latest Jason-1 and OSTM/Jason-2 data, visit: .

Alan Buis 818-354-0474
Jet Propulsion Laboratory, Pasadena, Calif.

'Zombie' Stars Key To Measuring Dark Energy

"Zombie" stars that explode like bombs as they die, only to revive by sucking matter out of other stars. According to an astrophysicist at UC Santa Barbara, this isn't the plot for the latest 3D blockbuster movie. Instead, it's something that happens every day in the universe –– something that can be used to measure dark energy.

This is a Chandra X-ray image of Tycho's supernova remnant. This Type Ia supernova was observed by Tycho Brahe in 1572 and is today an expanding ball of gas. Astronomers once had to wait years for a close, bright supernova to learn about them; today, big surveys are discovering supernovae by the thousands.
Credit: NASA/Chandra X-ray Observatory

This special category of stars, known as Type Ia supernovae, helps scientists probe the mystery of dark energy, which is believed to drive the accelerating expansion of the universe.

Andy Howell, adjunct professor of physics at UCSB and staff scientist at Las Cumbres Observatory Global Telescope (LCOGT), wrote a review article about this topic, published recently in Nature Communications. LCOGT, a privately funded global network of telescopes, works closely with UCSB.

Supernovae are exploding stars, and they have been recorded since at least 1054 A.D., when one formed the Crab Nebula, a supernova remnant.

More recently, the discovery of dark energy is one of the most profound findings of the last half-century, according to Howell. Invisible dark energy makes up about three-fourths of the universe. "We only discovered this about 20 years ago by using Type Ia supernovae, thermonuclear supernovae, as standard or 'calibrated' candles," said Howell. "These stars are tools for measuring dark energy. They're all about the same brightness, so we can use them to figure out distances in the universe."
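Howell's "calibrated candle" logic can be made concrete with a rough calculation. The numbers below are illustrative assumptions, not figures from the article: Type Ia supernovae peak near absolute magnitude M of about -19.3, and the distance modulus formula relates that to an observed apparent magnitude m.

```python
# Rough standard-candle distance estimate (illustrative sketch).
# Distance modulus: m - M = 5 * log10(d_parsecs) - 5
M_TYPE_IA = -19.3   # assumed peak absolute magnitude of a Type Ia supernova
m_observed = 24.0   # hypothetical apparent magnitude of a faint, distant event

distance_pc = 10 ** ((m_observed - M_TYPE_IA + 5) / 5)
print(f"distance is roughly {distance_pc / 1e9:.2f} billion parsecs")
```

Because every Type Ia peaks at roughly the same absolute brightness, a single brightness measurement pins down the distance; real cosmological fits also fold in redshift corrections omitted here.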

This is Supernova 1994D, the bright point at the lower left. It is a Type Ia thermonuclear supernova like those described by Howell, near the galaxy NGC 4526, depicted in the center of the image.
Credit: NASA/Hubble Space Telescope

These supernovae are so bright that they shine with the approximate power of a billion suns, noted Howell.

He calls Type Ia supernovae "zombie" stars because they're dead, with a core of ash, but they come back to life by sucking matter from a companion star. Over the past 50 years, astrophysicists have discovered that Type Ia supernovae are part of binary systems –– two stars orbiting each other. The one that explodes is a white dwarf star. "That's what our sun will be at the end of its life," he said. "It will have the mass of the sun crammed into the size of the Earth."

The white dwarf stars that tend to explode as Type Ia supernovae have approximately the same mass. This was considered a fundamental limit of physics, according to Howell. However, in an article in Nature about five years ago, Howell reported his discovery of stars that go beyond this limit. These previously unknown Type Ia supernovae have more than the typical mass before they explode, a fact that confounds scientists.

Howell presented a hypothesis to understand this new class of objects. "One idea is that two white dwarfs could have merged together; the binary system could be two white dwarf stars," he said. "Then, over time, they spiral into each other and merge. When they merge, they blow up. This may be one way to explain what is going on."

Astrophysicists are using Type Ia supernovae to build a map of the history of the universe's expansion. "What we've found is that the universe hasn't been expanding at the same rate," said Howell. "And it hasn't been slowing down as everyone thought it would be, due to gravity. Instead, it has been speeding up. There's a force that counteracts gravity and we don't know what it is. We call it dark energy."

This is D. Andrew Howell from the University of California, Santa Barbara.
Credit: Katrina Marcinowski

The new findings relate to Einstein's cosmological constant, a term he added to his equations of general relativity to permit a static universe; Einstein didn't know the universe was expanding. When the expansion was revealed, Einstein came to regard the constant as his biggest blunder. "It turns out that this cosmological constant was actually one of his greatest successes," said Howell. "This is because it's what we need now to explain the data."

He said that dark energy is probably a property of space. "Space itself has some energy associated with it," said Howell. "That's what the results seem to indicate, that dark energy is distributed everywhere in space. It looks like it's a property of the vacuum, but we're not completely sure. We're trying to figure out how sure are we of that –– and if we can improve Type Ia supernovae as standard candles we can make our measurements better."

Throughout history, people have noticed a few supernovae so bright they could be seen with the naked eye. With telescopes, astronomers have discovered supernovae farther away. "Now we have huge digital cameras on our telescopes, and really big telescopes," said Howell, "We've been able to survey large parts of the sky, regularly. We find supernovae daily." Astronomers have discovered thousands of supernovae in recent years.

During his career, Howell has used these powerful telescopes to study supernovae. Currently, besides teaching at UCSB, he is involved in LCOGT's detailed study of supernovae that is aimed at helping to understand dark energy. With this extensive network of observatories, it will be possible to study the night sky continuously.

"The next decade holds real promise of making serious progress in the understanding of nearly every aspect of supernovae Ia, from their explosion physics, to their progenitors, to their use as standard candles," writes Howell in Nature Communications. "And with this knowledge may come the key to unlocking the darkest secrets of dark energy."


Bacteria-Virus Infection Networks: How They Work

Bacteria are common sources of infection, but these microorganisms can themselves be infected by even smaller agents: viruses. A new analysis of the interactions between bacteria and viruses has revealed patterns that could help scientists working to understand which viruses infect which bacteria in the microbial world.

A meta-analysis of the interactions shows that the infection patterns exhibit a nested structure, with hard-to-infect bacteria infected by generalist viruses and easy-to-infect bacteria attacked by both generalist and specialist viruses.

"Although it is well known that individual viruses do not infect all bacteria, this study provides an understanding of possibly universal patterns or principles governing the set of viruses able to infect a given bacteria and the set of bacteria that a given virus can infect," said Joshua Weitz, an assistant professor in the School of Biology at the Georgia Institute of Technology.

Bacteria-phage nested pattern
Credit: Georgia Tech/Sergi Valverde

Discovering this general pattern of nested bacteria-virus infection could improve predictions of microbial population dynamics and community assembly, which affect human health and global ecosystem function. Knowing the patterns of which bacteria are susceptible to which viruses could also provide insights into strategies for viral-based antimicrobial therapies.

The results of the meta-analysis were published June 27, 2011 in the early edition of the journal Proceedings of the National Academy of Sciences. The work was sponsored by the James S. McDonnell Foundation, the Defense Advanced Research Projects Agency and the Burroughs Wellcome Fund.

Georgia Tech physics graduate student Cesar Flores, Michigan State University zoology graduate student Justin Meyer, Georgia Tech biology undergraduate student Lauren Farr, and postdoctoral researcher Sergi Valverde from Pompeu Fabra University in Barcelona, Spain also contributed to this study.

The research team compiled 38 laboratory studies of interactions between bacteria and phages, the viruses that infect them. The studies represented approximately 12,000 distinct experimental infection assays across a broad spectrum of diversity, habitat and mode of selection. The studies covered a 20-year period and included hundreds of different host and phage strains.

The researchers converted each study into a matrix with rows containing bacterial types, columns containing phage strains, and cells with zeros or ones to indicate whether a given pair yielded an infection. Then they applied a rigorous network theory approach to examine whether the interaction networks exhibited a nonrandom structure, conformed to a characteristic shape, or behaved idiosyncratically -- making them hard to predict.

Researchers converted each of the 38 studies into a matrix, like the one shown here, with rows containing bacterial types, columns containing virus strains, and white cells indicating that a given pair yielded an infection.
Credit: Georgia Tech/Joshua Weitz

Of the 38 studies, the researchers found 27 that showed significant nestedness. Nestedness was measured by the extent to which phages that infected the most hosts tended to infect bacteria that were infected by the fewest phages. The researchers used statistical tests to rule out forms of bias. However, because the majority of the data consisted of closely related species, the researchers anticipate that more complex patterns of infection may form with species with more genetic diversity.
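The matrix idea can be sketched in a few lines of code. This is an illustrative toy, not the statistical test the researchers used: it represents hosts as rows and phages as columns, and checks whether the phages' host ranges form a perfect chain under set inclusion, the idealized nested pattern in which each generalist's hosts contain the more specialized phages' hosts.

```python
# Toy bacteria-phage infection matrix (illustrative, not real data).
# Rows = bacterial hosts, columns = phage strains; 1 = infection occurs.
matrix = [
    [1, 1, 1],  # easy-to-infect host: hit by generalist and specialist phages
    [1, 1, 0],
    [1, 0, 0],  # hard-to-infect host: hit only by the broadest generalist
]

def host_ranges(m):
    """Set of hosts (row indices) infected by each phage (column)."""
    return [{i for i, row in enumerate(m) if row[j]} for j in range(len(m[0]))]

def is_perfectly_nested(m):
    """True if the phages' host ranges form a chain under set inclusion."""
    ranges = sorted(host_ranges(m), key=len, reverse=True)
    return all(ranges[k] >= ranges[k + 1] for k in range(len(ranges) - 1))

print(is_perfectly_nested(matrix))  # True: each host range contains the next
```

Real analyses quantify partial nestedness statistically rather than demanding a perfect chain, but the toy shows the structure the meta-analysis repeatedly found.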

"Considering the large range of taxa, habitats and sampling techniques used to construct the matrices, the repeated sampling of a nested pattern of host-phage infections is salient, but the process driving the nestedness is not obvious. The pattern suggests a common mechanism or convergent set of mechanisms underlying microbial co-evolution and community assembly," explained Weitz.

The researchers examined three hypotheses to explain the nestedness pattern based on biochemical, ecological and evolutionary principles, but found that additional experiments will be required to determine why this pattern occurs so often.

This meta-analysis demonstrated the utility of network methods as a means for discovering novel interaction patterns. According to the researchers, viewing host-phage interaction networks through this type of unifying lens more often will likely unveil other hidden commonalities of microbial and viral communities that transcend species identity.

This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) (Award No. HR0011-09-1-0055). The content is solely the responsibility of the principal investigator and does not necessarily represent the official views of DARPA.

Contacts and sources:
Abby Robinson

Korean Herbal Medicine Reduces Inflammation In Allergen-Induced Asthma Says Science Study

Researchers from Boston University School of Medicine (BUSM), studying SO-CHEONG-RYONG-TANG (SCRT), a traditional Korean medicine long used to treat allergic diseases in Asia, found that SCRT treatment alleviates asthma-like pulmonary inflammation by suppressing specific chemokines, or signaling proteins. These findings appear online in the Annals of Allergy, Asthma & Immunology.

Asthma is a chronic respiratory disease characterized by reversible airway obstruction and pulmonary inflammation. One of the most common chronic inflammatory diseases, it affects an estimated 300 million people worldwide, a figure expected to reach 400 million by 2025. Its sharply rising prevalence and incidence cause concern in both developed and developing countries.

“In order to elucidate the mechanism of how SCRT modulates the allergic response, we evaluated the immunomodulatory effects of SCRT in a murine model of asthma induced by a house dust extract containing cockroach allergens and endotoxin,” explained Jiyoun Kim, PhD, a research assistant professor of pathology and laboratory medicine at BUSM. “In this study multiple aspects of pulmonary inflammation were examined including the production of inflammatory mediators and the pulmonary recruitment of inflammatory cells,” he added.

The researchers found that SCRT treatment significantly reduced airway hyper-reactivity as measured by both whole-body plethysmography and direct measurement of airway resistance. The immune response of pulmonary inflammation was also significantly inhibited by SCRT treatment, as demonstrated by reduced plasma IgE antibody levels and improved lung histology. SCRT significantly reduced the number of neutrophils in the bronchoalveolar lavage (BAL) fluid and also significantly reduced the BAL levels of CXC chemokines, both expressed as part of the immune response, providing a potential mechanism for the reduced inflammation.

This study was supported by grants from the National Institutes of Health and the Oriental Medicine R&D Project of the Ministry of Health & Welfare of the Republic of Korea.

Sea Urchins See With Their Whole Body, Animal World Wonder

Many animals have eyes that are incredibly complex; others manage without. Researchers at the University of Gothenburg have shown that sea urchins see with their entire body despite having no eyes at all. The study has been published in the scientific journal Proceedings of the National Academy of Sciences (PNAS).

These are sea urchins.
Credit: University of Gothenburg

Most animals react to light and have developed a very sophisticated way of seeing complex images so that they can function in their surroundings. Good examples include insects' compound eyes and the human eye. Charles Darwin and other evolutionary biologists were bewildered by the eye's complexity and wondered how this kind of structure could have evolved through natural selection.

But some creatures, such as sea urchins, can react to light even though they do not have eyes. Previous studies have shown that sea urchins have a large number of genes linked to the development of the retina, the light-sensitive tissue in the human eye. Among them are several genes coding for opsin, a widely occurring light-sensitive eye protein.

"It was this discovery that underpinned our research," says Sam Dupont from the University of Gothenburg's Department of Marine Ecology, one of the researchers behind the study and co-authors of the article. "We wanted to see where the opsin was located in sea urchins so that we could find the sensory light structures, or photoreceptors. We quite simply wanted to know where the sea urchin sees from."

The research group showed that the photoreceptors appear to be located at the tip and base of the tube feet, which cover the sea urchin's body and are used for movement.

"We argue that the entire adult sea urchin can act as a huge compound eye, and that the shadow that is cast by the animal's opaque skeleton over the light-sensitive cells can give it directional vision," says Dupont.

Contacts and sources:
Sam Dupont
University of Gothenburg

Journal: PNAS 2011/04/26
Title: Unique system of photoreceptors in sea urchin tube feet
Authors: Esther M. Ullrich-Lüter, Sam Dupont, Enrique Arboleda, Harald Hausen, and Maria Ina Arnone

Gravity Gone Wild: See Potsdam Gravity Potato, Earth's Eerie Gravity Anomalies Revealed In New Satellite Image

Potsdam Gravity potato, 2011
Geoid 2011, based on data from the satellites LAGEOS, GRACE and GOCE plus surface data (airborne gravimetry and satellite altimetry).

The "Potsdam Gravity potato", as this representation of terrestrial gravity has become known, can for the first time display gravity variations that change with time. The seasonal fluctuations of the water balance of continents or melting or growing ice masses, i.e. climate-related variables, are now included in the modeling of the gravity field. "EIGEN-6C" is the name of this latest global gravity field model of the GFZ German Research Centre for Geosciences. 

Geoid 1995: the irregular gravitational field of the Earth in a highly exaggerated representation, which has become known as the "Potsdam Gravity potato". Based on 37 satellites launched since 1960 (including ERS-1 and LAGEOS), measured by Satellite Laser Ranging (SLR) and other older methods, plus surface data (airborne gravimetry and satellite altimetry).

The new model was recently calculated in Potsdam in cooperation with the Groupe de Recherche de Géodésie Spatiale in Toulouse. It is based on measurements from the satellites LAGEOS, GRACE and GOCE, combined with ground-based gravity measurements and satellite altimetry data. EIGEN-6C has a spatial resolution of about 12 kilometres, a four-fold improvement over the previous version of the Potsdam potato.

Geoid 2005: based only on data from the two newer satellites CHAMP and GRACE, plus surface data (airborne gravimetry and satellite altimetry).

"Of particular importance is the inclusion of measurements from the satellite GOCE, from which the GFZ did its own calculation of the gravitational field," says Dr. Christoph Foerste, who together with his colleague Dr. Frank Flechtner directs the gravity field working group at the GFZ.

Gravity anomalies in mgal, deduced by CHAMP, GRACE and ground based measurement. 2005

The ESA mission GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) was launched in mid-March 2009 and has since measured the Earth's gravitational field using satellite gradiometry. "This allows the measurement of gravity in inaccessible regions with unprecedented accuracy, for example in Central Africa and the Himalayas," adds Dr. Flechtner. In addition, the Earth's gravity field over the vastness of the oceans can be measured much more accurately with GOCE than with previous satellite missions such as GFZ-CHAMP and GRACE.

Amongst other advantages, this allows a more faithful determination of the so-called dynamic ocean topography, i.e. the deviation of the ocean surface from the equilibrium with the force of gravity. This ocean topography is essentially determined by ocean currents. Therefore, the gravity field models calculated with GOCE measurements are of great interest for oceanography and climate research.

Besides GOCE, long-term measurement data from the twin-satellite mission GRACE (Gravity Recovery and Climate Experiment) of the GFZ were included in the new EIGEN-6C. GRACE allows the determination of large-scale temporal changes in the gravitational field caused for example by climate-induced mass displacements on the Earth's surface. 

Gravity field model from CHAMP and GRACE data. Gravity anomalies (lower picture) and geoid (upper picture).

These include the melting of large glaciers in the Polar Regions and the seasonal variation of water stored in large river systems. Temporal gravity changes determined with GRACE are included in the EIGEN-6C model. Therefore, the new Potsdam potato is for the first time no longer a static body, but a surface that varies over time. In order to record these climate-related processes over the long term, a follow-on to the GRACE mission, which ends in 2015, is urgently needed. A comparison of the various Potsdam potatoes since 1995 clearly shows the leaps in quality.


30 Thriving Careers Your Children Should Consider

The world of work is in a constant state of change, and so are the careers of the future. New technologies, developments and trends have a huge impact on which careers will thrive now and in years to come. Here, we'll take a look at 30 careers that are great now, and for the next generation.

  1. Delivery Service Driver: This job doesn’t sound like the most glamorous career, but delivery service drivers are predicted to continue to be in high demand. Drivers for carriers like UPS, FedEx and USPS should enjoy excellent job security as online shopping and delivery increases in use.
  2. Epidemiologist: As more and more outbreaks occur in the world, epidemiologists are needed now more than ever. These experts are often called in to explain how outbreaks spread and how they can be contained. You don’t even need an MD to become an epidemiologist.
  3. Tissue Engineer: Researchers have been able to create man-made skin, and are working on artificial cartilage, and growing new tissues can’t be far behind. Tissue engineers can expect to work on growing new organ tissue, including livers, hearts, and kidneys.
  4. Construction Worker: Although the housing market has died down, construction employment is expected to rise by 19%, fueled by commercial and roadway construction. This career is a good alternative for those who would otherwise be interested in manufacturing, as that is a job sector that is expected to decline.
  5. Mobile Application Developer: Mobile Application Developers are in great demand not just as freelancers, but also as paid employees. Whether you’re building your own apps, or working on someone else’s, the growing trend toward mobile application software ensures that these developers will do well for some time.
  6. Debt Counselor: Plenty of people are digging their way out of debt and bad mortgages, but many need help getting there. Debt counselors should be well employed for a good while.
  7. Teacher: Teaching is a safe career choice. Job opportunities are almost always available, with steady pay and an increasing need for teachers everywhere.
  8. Biologist: Biological scientist jobs are expected to increase much faster than the average for all occupations. It’s not surprising, given that biotechnology is growing, and the world is always working on new ways to clean and preserve the environment.
  9. Medical Records Technician: Medical records is a growing field, as more medical records are being saved and stored electronically. Employment for this career should grow faster than the average, with good job prospects for those with an associate’s degree.
  10. Cybersecurity Specialist: Companies and individuals are placing more information online than ever, and that information needs to be secured. To protect this vital information, cybersecurity specialists are needed and will grow even more valuable in the future.
  11. Pharmacist: Pharmacists are a part of the growing health care field that will benefit from great careers in the future. Through 2018, there should be more job openings than qualified job seekers, with an average salary in the six figures.
  12. Genetic Counselor: The world is growing in its understanding and development of genetics, and soon, families may be able to take advantage of genetic technologies and options. Genetic counselors can help facilitate these choices.
  13. Artist: Starving artists exist and probably always will, but we live in a society where art is increasingly appreciated and monetized. Artists will do well in the new creative class.
  14. Doctor: A classically safe job, doctors can expect to make a good living for the rest of their careers. As the population grows and gets older, more health care, and more doctors will be needed, giving doctors a secure future.
  15. Sustainability Officer: Sustainability officers are in charge of environmental programs within a company, and several companies have already created this position. By 2005, almost all of the 150 largest companies in the world had a sustainability officer, and that number is growing.
  16. Bioinformatician: As DNA and molecular biology continue to grow, the world will need more people to manage biodata. Those entering the bioinformatics field will do well in the future.
  17. Market Researcher: Market researchers collect and analyze information about the public, allowing product manufacturers, movie studios, and other groups to make business decisions. This career will grow much faster than the average for all careers, especially in the survey sector.
  18. Lawyer: Lawyers are almost always needed, and population growth, along with business growth, only serves to increase the need for legal transactions, disputes, and cases.
  19. Simulation Engineer: When you think of simulation, you probably think of arcade games, but simulation engineers do so much more than that. They create simulations that allow risks and benefits to be weighed while working in a virtual environment, good for several different types of research and development projects.
  20. Robotics Technician: Robots are everywhere, from our vacuums to assembly lines. They don’t run by themselves, though — at least not yet — and technicians are needed to build and maintain them.
  21. Gene Programmer: Like genetic counselors, gene programmers will ride the wave of developing genetics. Programmers can create customized prescriptions with gene therapy and smart molecules that can be used to treat just about everything.
  22. Nurse: Just as doctors will do well now and in the future, so will nurses. Employers often have difficulty attracting and retaining enough RNs, and employment is expected to grow much faster than the average for all occupations.
  23. Accountant: Accountants are needed for businesses to keep their tax and financial records accurate. Although the financial sector as a whole is not doing well, accountants and auditors will do well, even in times of recession, as businesses look to more effectively use their finances.
  24. Water Treatment Expert: Water is on its way to becoming a scarce commodity, and it’s important to not only conserve what we use, but to more effectively use what we have. Water treatment experts can offer their services now and in the future when it comes to cleaning and treating water.
  25. Physical Therapy Assistant: The 10-year growth rate for physical therapy assistants is 42%, which is extremely high. This technical job allows you to help people while earning a healthy salary — and you don’t even need a college degree, just a certification.
  26. Earth Scientist: Geologists and geophysicists will be a part of the growing movement to fight climate change, environmental damage, energy needs, and natural disasters.
  27. Stem Cell Researcher: Although controversial, stem cell research is growing. Researchers will be needed to develop the potential of stem cells in medicine and genetics.
  28. Chef: Many people don’t have time to cook, whether they’re busy heads of households or running their own business. Cooks and chefs are needed to prepare meals that will be served to these people.
  29. Geochemist: Now and in the future, geochemists provide information about chemicals found in rocks. Oil companies, environmental management companies, and government agencies have great use for geochemists.
  30. Organic Farmer: Organic farmers are standing out in an otherwise declining career, as organic produce is in higher demand than ever.

Contacts and sources:
Story by Emma Taylor

Neptune As Never Seen Before, Swirling Purple and Green Atmosphere, Rotation Measured For First Time

By tracking atmospheric features on Neptune, a UA planetary scientist has accurately determined the planet's rotation, a feat that had not been previously achieved for any of the gas planets in our solar system except Jupiter.

In this animation, the viewer takes the perspective of an object circling Neptune matching its rotational speed, much like a geostationary satellite hovering above the same spot. Only then does the giant gas planet reveal the movements of its features relative to each other, often in opposite directions.
Credit: E. Karkoschka/University of Arizona

A day on Neptune lasts precisely 15 hours, 57 minutes and 59 seconds, according to the first accurate measurement of its rotational period made by University of Arizona planetary scientist Erich Karkoschka.

In this image, the colors and contrasts were modified to make the planet’s atmospheric features stand out. The winds in Neptune's atmosphere can reach the speed of sound or more. Neptune's Great Dark Spot stands out as the most prominent feature on the left. The fainter Dark Spot 2 and the South Polar Feature are locked to the planet's rotation, which allowed Karkoschka to precisely determine how long a day lasts on Neptune.
Credit: Erich Karkoschka

His result is one of the largest improvements in measuring the rotational period of a gas planet in the almost 350 years since Italian astronomer Giovanni Cassini made the first observations of Jupiter's Red Spot.

"The rotational period of a planet is one of its fundamental properties," said Karkoschka, a senior staff scientist at the UA's Lunar and Planetary Laboratory. "Neptune has two features observable with the Hubble Space Telescope that seem to track the interior rotation of the planet. Nothing similar has been seen before on any of the four giant planets."

The discovery is published in Icarus, the official scientific publication of the Division for Planetary Sciences of the American Astronomical Society.

Unlike the rocky planets – Mercury, Venus, Earth and Mars – which behave like solid balls spinning in a rather straightforward manner, the giant gas planets – Jupiter, Saturn, Uranus and Neptune – rotate more like giant blobs of liquid. Since they are believed to consist of mainly ice and gas around a relatively small solid core, their rotation involves a lot of sloshing, swirling and roiling, which has made it difficult for astronomers to get an accurate grip on exactly how fast they spin around.

"If you looked at Earth from space, you'd see mountains and other features on the ground rotating with great regularity, but if you looked at the clouds, they wouldn't because the winds change all the time," Karkoschka explained. "If you look at the giant planets, you don't see a surface, just a thick cloudy atmosphere."

"On Neptune, all you see is moving clouds and features in the planet's atmosphere. Some move faster, some move slower, some accelerate, but you really don't know what the rotational period is, if there even is some solid inner core that is rotating."

In the 1950s, when astronomers built the first radio telescopes, they discovered that Jupiter sends out pulsating radio beams, like a lighthouse in space. Those signals originate from a magnetic field generated by the rotation of the planet's inner core.

No clues about the rotation of the other gas giants, however, were available because any radio signals they may emit are being swept out into space by the solar wind and never reach Earth.

"The only way to measure radio waves is to send spacecraft to those planets," Karkoschka said. "When Voyager 1 and 2 flew past Saturn, they found radio signals and clocked them at exactly 10.66 hours, and they found radio signals for Uranus and Neptune as well. So based on those radio signals, we thought we knew the rotation periods of those planets."

But when the Cassini probe arrived at Saturn 15 years later, its sensors detected that the planet's radio period had changed by about 1 percent. Karkoschka explained that because of Saturn's large mass, it was impossible for the planet's rotation to change that much over such a short time.

"Because the gas planets are so big, they have enough angular momentum to keep them spinning at pretty much the same rate for billions of years," he said. "So something strange was going on."

Even more puzzling was Cassini's later discovery that Saturn's northern and southern hemispheres appear to be rotating at different speeds.

"That's when we realized the magnetic field is not like clockwork but slipping," Karkoschka said. "The interior is rotating and drags the magnetic field along, but because of the solar wind or other, unknown influences, the magnetic field cannot keep up with respect to the planet's core and lags behind."

Instead of billion-dollar spacecraft, Karkoschka took advantage of what one might call the scraps of space science: publicly available images of Neptune from the Hubble Space Telescope archive. With unwavering determination and unmatched patience, he then pored over hundreds of images, recording every detail and tracking distinctive features over long periods of time.

Other scientists before him had observed Neptune and analyzed images, but nobody had sleuthed through 500 of them.

"When I looked at the images, I found Neptune's rotation to be faster than what Voyager observed," Karkoschka said. "I think the accuracy of my data is about 1,000 times better than what we had based on the Voyager measurements – a huge improvement in determining the exact rotational period of Neptune, which hasn't happened for any of the giant planets for the last three centuries."

This is the planet Neptune as seen by the Voyager 2 spacecraft in 1989.
Credit: NASA

Two features in Neptune's atmosphere, Karkoschka discovered, stand out in that they rotate about five times more steadily than even Saturn's hexagon, the most regularly rotating feature known on any of the gas giants.

Named the South Polar Feature and the South Polar Wave, the features are likely vortices swirling in the atmosphere, similar to Jupiter's famous Great Red Spot, which can persist for a long time because atmospheric friction is negligible. Karkoschka was able to track them over the course of more than 20 years.

An observer watching the massive planet turn from a fixed spot in space would see both features appear every 15.9663 hours, with only a few seconds of variation.
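The arithmetic behind such precision is worth spelling out: whatever the timing error on any single sighting, dividing it by the number of rotations elapsed between the first and last sightings yields the uncertainty on the period itself. The sketch below uses the article's period and baseline, but the per-sighting timing error is an assumed round number, not the study's actual error budget.

```python
# Back-of-the-envelope: why a 20-year baseline pins down the rotation
# period so tightly. The timing error per sighting is an assumption.

period_hours = 15.9663          # South Polar Feature period (from the article)
baseline_years = 20             # span over which the features were tracked
hours_per_year = 24 * 365.25

n_rotations = baseline_years * hours_per_year / period_hours

# Assume each sighting fixes the feature's rotation phase to within
# ~1 hour; the period error shrinks by the number of elapsed rotations.
timing_error_hours = 1.0
period_error_seconds = timing_error_hours / n_rotations * 3600

print(f"rotations in baseline: {n_rotations:.0f}")        # ~11,000
print(f"period uncertainty: ~{period_error_seconds:.2f} s")
```

With roughly 11,000 rotations in the baseline, even a one-hour phase error per sighting translates into a period known to a fraction of a second, which is why a long archive of Hubble images can outperform a single spacecraft flyby.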

"The regularity suggests those features are connected to Neptune's interior in some way," Karkoschka said. "How they are connected is up to speculation."

One possible scenario involves convection driven by warmer and cooler areas within the planet's thick atmosphere, analogous to hot spots within the Earth's mantle, giant circular flows of molten material that stay in the same location over millions of years.

"I thought the extraordinary regularity of Neptune's rotation indicated by the two features was something really special," Karkoschka said.

"So I dug up the images of Neptune that Voyager took in 1989, which have better resolution than the Hubble images, to see whether I could find anything else in the vicinity of those two features. I discovered six more features that rotate with the same speed, but they were too faint to be visible with the Hubble Space Telescope, and visible to Voyager only for a few months, so we wouldn't know if the rotational period was accurate to the six digits. But they were really connected. So now we have eight features that are locked together on one planet, and that is really exciting."

In addition to getting a better grip on Neptune's rotational period, the study could lead to a better understanding of the giant gas planets in general.

"We know Neptune's total mass but we don't know how it is distributed," Karkoschka explained. "If the planet rotates faster than we thought, it means the mass has to be closer to the center than we thought. These results might change the models of the planets' interior and could have many other implications."

Contacts and sources:
Daniel Stolte
University of Arizona

Heavy Metal Meets Hard Rock: Battling through the Ocean Crust’s Hardest Rocks to Capture the Boundary Between Magma and Water

Scientists and drillers recovered a remarkable suite of heat-tempered basalts that provide a detailed picture of the rarely seen boundary between magma and seawater. 

These samples were collected during a return to ODP Hole 1256D, one of the deepest “hard rock” penetration sites of scientific ocean drilling. ODP Hole 1256D has been stabilized, cleared to its full depth, and primed for further deepening.

Panama City, Panama – Integrated Ocean Drilling Program (IODP) Expedition 335 Superfast Spreading Rate Crust 4 recently completed operations in Ocean Drilling Program (ODP) Hole 1256D, a deep scientific borehole that extends more than 1500 meters below the seafloor into the Pacific Ocean’s igneous crust – rocks that formed through the cooling and crystallization of magma, and form the basement of the ocean floor.

An international team of scientists led by co-chief scientists Damon Teagle (National Oceanographic Center Southampton, University of Southampton in the UK) and Benoît Ildefonse (CNRS, Université Montpellier 2 in France) returned to ODP Hole 1256D aboard the scientific research vessel, JOIDES Resolution, to sample a complete section of intact oceanic crust down into gabbros.

A granoblastic basalt viewed under the microscope (picture is 2.3 mm across). Magnification shows a rock formed of small rounded mineral grains annealed together. These rocks are the hardest material ever drilled in more than 4 decades of scientific ocean drilling. They are very abrasive, hard on the drilling and coring tools, and difficult to penetrate.

Credit: IODP

Still, the samples recovered provide a treasure trove of information, recording the rocks' initial crystallization as a basaltic dike and then their reheating at the top of the mid-ocean ridge magma chamber. These rocks represent the heat exchanger where thermal energy from the cooling and solidifying melt in the magma chamber below is exchanged with seawater infiltrating from the oceans.

This expedition was the fourth in a series and builds on the efforts of three expeditions in 2002 and 2005.

Gabbros are coarse-grained intrusive rocks formed by the slow cooling of basaltic magmas. They make up the lower two-thirds of the ocean crust. The intrusion of gabbros at the mid-ocean ridges is the largest igneous process active on our planet with more than 12 cubic kilometers of new magma from the mantle intruded into the crust each year. The minerals, chemistry, and textures of gabbroic rocks preserve records of the processes that occur deep within the Earth’s mid-ocean ridges, where new ocean crust is formed.
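The "more than 12 cubic kilometers per year" figure can be sanity-checked from round-number global averages. The ridge length, crustal thickness, and mean spreading rate below are assumed textbook-scale values, not numbers from the expedition; only the period and the two-thirds gabbro fraction come from the article.

```python
# Rough consistency check on the annual volume of gabbro intruded at
# mid-ocean ridges. All three inputs are assumed global averages.

ridge_length_m = 65_000e3        # global mid-ocean ridge system, ~65,000 km (assumed)
crust_thickness_m = 6_000        # typical oceanic crust, ~6 km (assumed)
mean_full_rate_m_per_yr = 0.05   # ~50 mm/yr average full spreading rate (assumed)

# New crust produced per year = ridge length x thickness x spreading rate
crust_m3_per_yr = ridge_length_m * crust_thickness_m * mean_full_rate_m_per_yr

gabbro_fraction = 2 / 3          # gabbros form the lower two-thirds (from the article)
gabbro_km3_per_yr = crust_m3_per_yr * gabbro_fraction / 1e9

print(f"new crust: ~{crust_m3_per_yr / 1e9:.0f} km^3/yr")   # ~20 km^3/yr
print(f"of which gabbro: ~{gabbro_km3_per_yr:.0f} km^3/yr") # ~13 km^3/yr
```

Under these assumptions the estimate lands at roughly 13 cubic kilometers of gabbro per year, consistent with the figure quoted above.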

“The formation of new crust is the first step in Earth's plate tectonic cycle,” explained Teagle. “This is the principal mechanism by which heat and material rise from within the Earth to the surface of the planet. And it’s the motion and interactions of Earth’s tectonic plates that drive the formation of mountains and volcanoes, the initiation of earthquakes, and the exchange of elements (such as carbon) between the Earth's interior, oceans, and atmosphere.”

“Understanding the mechanisms that construct new tectonic plates has been a major, long-standing goal of scientific ocean drilling,” added Ildefonse, “but progress has been inhibited by a dearth of appropriate samples because deep drilling (at depths greater than 1000 meters into the crust) in the rugged lavas and intrusive rocks of the ocean crust continues to pose significant technical challenges.”

ODP Hole 1256D lies in the eastern equatorial Pacific Ocean about 900 kilometers to the west of Costa Rica and 1150 kilometers east of the present-day East Pacific Rise. This hole is in 15-million-year-old crust that formed during an episode of “superfast” spreading at the ancient East Pacific Rise, when the newly formed plates were moving apart by more than 200 millimeters per year (mm/yr).

“Although a spreading rate of 200 mm/yr is significantly faster than the fastest spreading rates on our planet today, superfast-spread crust was an attractive target,” stated Teagle, “because seismic experiments at active mid-ocean ridges indicated that gabbroic rocks should occur at much shallower depths than in crust formed at slower spreading rates. In 2005, we recovered gabbroic rocks at their predicted depth of approximately 1400 meters below the seafloor, vindicating the overall ‘Superfast’ strategy.”

Samples of granoblastic dikes were recovered in abundance by fishing tools during successive hole remediation operations. Sumiyo Miyashita and Yoshiko Adashi, from Niigata University, Japan, examine large rock samples from Hole 1256D.
Credit: IODP

Previous expeditions to Hole 1256D successfully drilled through the erupted lavas and thin (approximately one-meter-wide) intrusive “dikes” of the upper crust, reaching into the gabbroic rocks of the lower crust. The drilling efforts of Expedition 335 were focused just below the 1500-meter mark in the critical transition zone from dikes to gabbros, where magma at 1200°C exchanges heat with super-heated seawater circulating within cracks in the upper crust. This heat exchange occurs across a narrow thermal boundary that is perhaps only a few tens of meters thick.

In this zone, the intrusion of magma causes profound textural changes to the surrounding rocks, a process known as contact metamorphism. In the mid-ocean ridge environment this results in the formation of very fine-grained granular rocks, called granoblastic basalts, whose constituent minerals recrystallize at a microscopic scale and become welded together by magmatic heat. The resulting metamorphic rock is as hard as any formation encountered by ocean drilling and sometimes even tougher than the most resilient of hard formation drilling and coring bits.

Expedition 335 reentered Hole 1256D more than five years after the last expedition to this site. The expedition encountered and overcame a series of significant engineering challenges, each of which was unique, although difficulties were not unexpected when drilling in a deep, uncased, marine borehole into igneous rocks.

The patient, persistent efforts of the drilling crew successfully cleared a major obstruction at a depth of 920 meters that had initially prevented reentry into the hole to its full depth of 1507 meters. Then, at the bottom of the hole, the very hard granular rocks that had proved challenging during the previous Superfast expedition were encountered once more. Although there may be only a few tens of meters of these particularly tenacious granoblastic basalts, their extreme toughness again proved challenging to sample, resulting in the grinding down of one of the hardest formation coring bits into a smooth stump.

Laying out a fishing tool on the rig floor, an illustration of the hard work done by the drill ship crew.
Credit: IODP

A progressive, logical course of action was then undertaken to clear the bottom of the hole of metal debris from the failed coring bit and drilling cuttings. This effort required the innovative use of hole-clearing equipment such as large magnets, and involved over 240 kilometers of drilling pipe deployments (trips) down into the hole and back onto the ship. (The total amount of pipe “tripped” was roughly equivalent to the distance from Paris to the English coast, or from New York City to Philadelphia, or Tokyo to Niigata). These efforts returned hundreds of kilograms of rocks and drill cuttings, including large blocks (up to 5 kilograms) of the culprit granoblastic basalts that hitherto had only been very poorly recovered through coring. A limited number of gabbro boulders were also recovered, indicating that scientists are tantalizingly close to breaking through into the gabbroic layer.

Expedition 335 operations also succeeded in clearing Hole 1256D of drill cuttings, much of which appears to have been circulating in the hole since earlier expeditions.

“We recovered a remarkable sample suite of granoblastic basalts along with minor gabbros, providing a detailed picture of a rarely sampled, yet critical interval of the oceanic crust,” Ildefonse observed. “Most importantly,” he added, “the hole has been stabilized and cleared to its full depth, and is ready for deepening in the near future.”

Contacts and sources: 

The Integrated Ocean Drilling Program (IODP) is an international research program dedicated to advancing scientific understanding of the Earth through drilling, coring, and monitoring the subseafloor. The JOIDES Resolution is a scientific research vessel managed by the US Implementing Organization of IODP (USIO). CHIKYU is a drilling vessel operated by JAMSTEC/CDEX (Japan), and mission-specific platforms are supplied by ECORD (the European Consortium for Ocean Research Drilling). IODP is supported by two lead agencies: the US National Science Foundation (NSF) and Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). Additional program support comes from ECORD, the Australian-New Zealand IODP Consortium (ANZIC), India’s Ministry of Earth Sciences, the People's Republic of China (Ministry of Science and Technology), and the Korea Institute of Geoscience and Mineral Resources.

Useful Websites:
For more information about IODP Expedition 335, Superfast Spreading Rate Crust 4, visit

For more information about the JOIDES Resolution, visit

For more information about the Integrated Ocean Drilling Program, visit

Earthquake Triggers In Pacific Ocean

New samples of rock and sediment from the depths of the eastern Pacific Ocean may help explain the cause of large, destructive earthquakes similar to the Tohoku Earthquake that struck Japan in mid-March.

Nearly 1500 meters (almost one mile) of core collected from the ocean floor near the coast of Costa Rica reveal detailed records of approximately 2 million years of tectonic activity along a seismic plate boundary.

The samples were retrieved with the scientific drilling vessel JOIDES Resolution during the recent month-long Integrated Ocean Drilling Program (IODP) Costa Rica Seismogenesis Project (CRISP) Expedition. Participating scientists aim to use the samples to better understand the processes that control the triggering of large earthquakes at subduction zones, where one plate slides beneath another.

“We know that there are different factors that contribute to seismic activity – these include rock type and composition, temperature differences, and how water moves within the Earth’s crust,” explained co-chief scientist Paola Vannucchi (University of Florence, Italy), who led the expedition with co-chief scientist Kohtaro Ujiie (University of Tsukuba, Japan).

The CRISP research site is located 174 km (108 miles) off the coast of Costa Rica.

She added, “but what we don’t fully understand is how these factors interact with one another and if one may be more important than another in leading up to different magnitudes of earthquakes. This expedition provided us with crucial samples for answering some of these fundamental questions.”

More than 80% of global earthquakes above magnitude 8.0 occur along subduction zones. The Pacific Ocean is famous for these boundaries, known as convergent margins, which are found along the coasts of the eastern Pacific from Alaska to Patagonia, and from New Zealand, Tonga, and the Marianas all the way up to Japan and the Aleutians, making the margins of the world’s largest ocean basin a primary target for research into the triggering mechanisms of large quakes.

During four weeks at sea, the science party and crew successfully drilled four sites, recovering core samples of sand and clay-like sediment and basalt rock. In a preliminary report published this month, CRISP scientists say that they have found evidence for strong subsidence, or sinking, of the Costa Rica margin, combined with a large volume of sediment discharged from the continent and accumulated over the last 2 million years.

“The sediment samples provide novel information on different parameters which may regulate the mechanical state of the plate interface at depth,” said Ujiie. He adds, “knowing how the plates interact at the fault marking their boundary is critical to interpreting the behavior and frequency of earthquakes in the region.”

The rig floor of the JOIDES Resolution scientific drilling vessel.
 Credit: IODP

Vannucchi explains, “for example we now know that fluids from deeper parts of the subduction zone system have percolated up through the layers of sediment. Studying the composition and volume of these fluids, as well as how they have moved through the sediment helps us better understand the relationship between the chemical, thermal, and mass transfer activity in the seafloor and the earthquake-generating, or seismogenic, region of the plate boundary. They may be correlated.”

Cores from the CRISP Expedition are currently being further analyzed by different members of the research party at their home institutions. The scientists will meet beginning August 29 at Texas A&M University to share their initial results.

The CRISP Expedition is unique because it focuses on the properties of erosional convergent margins, where the overriding plate gets “consumed” by subduction processes. These plate boundaries are characterized by trenches with thin sediment cover (less than 400 meters), fast convergence between the plates (at rates greater than 8 centimeters per year), and abundant seismicity.

The seismically active CRISP research area is the only one of its kind that is accessible to research drilling. However, this subduction zone is representative of 50% of global subduction zones, making scientific insights gleaned here relevant to Costa Ricans and others living in earthquake-prone regions all around the Pacific Ocean. The recent Tohoku Earthquake in Japan was generated in an erosive portion of the plate interface.

Kristin Hillis (Texas A&M University) labels new cores in one of the shipboard labs on the JOIDES Resolution.
Credit: IODP

Other geoscience research drilling programs, such as IODP’s Nankai Trough Seismogenic Zone Experiment (NantroSEIZE), near the southeast coast of Japan, focus on accretionary margins, where the front part of the overriding tectonic plate is built up by the subduction processes (sometimes forming mountains) and the plate boundary input is trench material. In these environments, the trench sediments are significantly thick (greater than 1000 meters, or over half a mile). Accretionary margins are known for large earthquakes such as the 1964 Alaska and 2004 Sumatra quakes. Japan’s Nankai Trough itself was the center of two magnitude 8 earthquakes in 1944 and 1946.

The CRISP team hopes to return to the same drill site in the future to directly sample the plate boundary and fault zone before and after seismic activity in the region. Changes observed through this work may provide new insights into how earthquakes are generated.

Contacts and sources: 

The Integrated Ocean Drilling Program (IODP) is an international research program dedicated to advancing scientific understanding of the Earth through drilling, coring, and monitoring the subseafloor. The JOIDES Resolution is a scientific research vessel managed by the U.S. Implementing Organization of IODP (USIO). Together, Texas A&M University, Lamont-Doherty Earth Observatory of Columbia University, and the Consortium for Ocean Leadership comprise the USIO. IODP is supported by two lead agencies: the U.S. National Science Foundation (NSF) and Japan's Ministry of Education, Culture, Sports, Science, and Technology. Additional program support comes from the European Consortium for Ocean Research Drilling (ECORD), the Australian-New Zealand IODP Consortium (ANZIC), India’s Ministry of Earth Sciences, the People's Republic of China (Ministry of Science and Technology), and the Korea Institute of Geoscience and Mineral Resources.

Useful Websites:
For more information about the IODP CRISP Expedition, visit

For more information about the JOIDES Resolution, visit

For more information about the Integrated Ocean Drilling Program, visit