Unseen Is Free


Friday, August 31, 2012

Old And Impossible Blood Mystery Solved

Intriguing new knowledge about blood haemoglobin has just been published in Nature

Scientists at the research centre MEMBRANES at Aarhus University, Denmark, have solved an old puzzle that since the 1960s has widely been regarded as impossible to crack. The challenge was to determine the structure of the protective protein complex that forms when haemoglobin is released from red blood cells and becomes toxic. This toxic release of haemoglobin occurs in many diseases that affect red cell stability, e.g. malaria.

Atomic model of the haptoglobin-hemoglobin complex, which shows a barbell-like structure. When powerful X-rays are sent through protein crystals, the rays are scattered. From the intensities of the scattered rays, a three-dimensional map of the atoms in the crystal can be generated, which in turn is used to build a model of the protein.

Credit: Aarhus University

Technically, the most important finding in this report in Nature is a high-resolution three-dimensional mapping of the so-called 'haptoglobin-haemoglobin complex'.
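
The principle behind that mapping can be sketched in a few lines. The toy Python snippet below is not from the Nature paper; the reflection amplitudes and phases are invented, and real crystallographic maps are computed in three dimensions from thousands of experimentally phased reflections. It only illustrates how a density map emerges from a Fourier synthesis of structure factors, and why including more reflections gives a sharper, higher-resolution map.

```python
import numpy as np

# Toy 1-D Fourier synthesis: rho(x) = (1/V) * sum_h F_h * exp(-2*pi*i*h*x).
# For a real-valued density, Friedel pairs give
#   rho(x) = (F_0 + 2 * sum_{h>0} |F_h| * cos(2*pi*h*x - phi_h)) / V
# Amplitudes and phases below are invented purely for illustration.

amplitudes = {0: 10.0, 1: 6.0, 2: 3.5, 3: 2.0, 4: 1.2}   # |F_h|
phases     = {0: 0.0,  1: 0.3, 2: 1.1, 3: 2.0, 4: 0.7}   # phi_h (radians)
V = 1.0                                                   # unit-cell "volume"

def density(x, h_max):
    """Electron density at fractional coordinate x using reflections up to h_max."""
    rho = amplitudes[0] / V
    for h in range(1, h_max + 1):
        rho += 2.0 * amplitudes[h] * np.cos(2 * np.pi * h * x - phases[h]) / V
    return rho

x = np.linspace(0.0, 1.0, 200)
low_res  = density(x, h_max=2)   # few reflections: blurry map
high_res = density(x, h_max=4)   # more reflections: sharper features

print("peak position (low resolution): ", x[np.argmax(low_res)])
print("peak position (high resolution):", x[np.argmax(high_res)])
```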

"After many failing experiments, our breakthrough came when we gave up using human material and went to the local slaughterhouse to purchase pig blood. Not a particular high-technological approach, but this transition from studying human blood to blood from a species with close homology had magic effects. After running into dead ends for two years and trying out the most complex gene-technological ways to produce the right material, it suddenly worked", says Søren Kragh Moestrup, the head of the research group at Department of Biomedicine.

The discovery provides essential new information on haemoglobin, which makes up most of the red cell interior. Haemoglobin is an essential blood component for the transport of oxygen, but it becomes toxic, with potentially damaging effects on tissues, in particular the kidneys, when it is released from the red cells. An excessive release can occur in many diseases, such as malaria and other infections.

However, the body has a sophisticated defence system. The first line of defence is the blood protein haptoglobin, which captures haemoglobin and directs it to a receptor that engulfs the haemoglobin-haptoglobin complex. This function of the receptor, named CD163, was originally discovered by the same group.

"We have now shown how this unique protein complex forms by generation of a detailed 3-dimensional map of each atom. This shows for the first time how the complex is formed and explains the tight protein association", says PhD Christian Brix Folsted Andersen. He has together with Master's student Morten Torvund-Jensen been an essential driving force in the project.

The results have also led to the unexpected discovery of a novel type of protein structure and to a new patent application on exploiting the discovery to generate a new type of synthetic protein for use in therapy and diagnostics.

The Nature paper "Structure of the haptoglobin-haemoglobin complex" is available.

MEMBRANES is a research centre at Aarhus University with a focus on membrane proteins.

The present international and interdisciplinary project has been headed by Department of Biomedicine, Aarhus University, with participation of Science & Technology, Aarhus University and experts from Brazil and Norway.

Financial support was provided by The Lundbeck Foundation, The Novo Nordisk Foundation, The Research Council of Norway, The European Research Council and The Danish Council for Independent Research.

Contacts and sources:
Soeren Kragh Moestrup
Aarhus University

A Revolution In The Body: Millimeter-Scale, Wirelessly Powered Implantable Medical Devices

A millimeter-scale, wirelessly powered cardiac device developed by Stanford electrical engineers overturns existing models of wireless power transfer in the human body

A team of engineers at Stanford has demonstrated the feasibility of a super-small, implantable cardiac device that gets its power not from batteries, but from radio waves transmitted from outside the body. The implanted device is contained in a cube just eight-tenths of a millimeter in radius. It could fit on the head of a pin.

The findings were published in the journal Applied Physics Letters. In their paper, the researchers demonstrated wireless power transfer to a millimeter-sized device implanted five centimeters inside the chest on the surface of the heart—a depth once thought out of reach for wireless power transmission.

The paper’s senior author was Ada Poon, an assistant professor of electrical engineering at Stanford. Sanghoek Kim and John Ho, both doctoral candidates in Poon’s lab, were first authors.

The engineers say the research is a major step toward a day when all implants are driven wirelessly. Beyond the heart, they believe such devices might include swallowable endoscopes—so-called “pillcams” that travel the digestive tract—permanent pacemakers and precision brain stimulators. The devices could potentially be used for virtually any medical application for which device size and power matter.

A team of engineers at Stanford has shown that, contrary to earlier models, high-frequency wireless power transmission to a device in the human body is possible. These images show power delivery to the human heart from a 200MHz low-frequency transmitter (left) and a 1.7GHz high-frequency transmitter (right). Red indicates greatest power; blue is least. Note focusing of power on the heart in the right image. 
Image courtesy John Ho, Stanford Engineering.

A REVOLUTION IN THE BODY

Implantable medical devices in the human body have revolutionized medicine. Hundreds of thousands if not millions of pacemakers, cochlear implants and drug pumps are today helping people live relatively normal lives, but these devices are not without engineering challenges.

First off, they require power, which means batteries, and batteries are bulky. In a device like a pacemaker, the battery alone accounts for as much as half the volume of the device it drives. Second, batteries have finite lives. New surgery is needed when they wane.

“Wireless power solves both challenges,” said Poon.

Last year, Poon made headlines when she demonstrated a wirelessly powered, self-propelled device capable of swimming through the bloodstream. To get there she needed to overturn some long-held assumptions about delivery of wireless power through the human body.

Her latest device works by a combination of inductive and radiative transmission of power. Both are types of electromagnetic transfer in which a transmitter sends radio waves to a coil of wire inside the body. The radio waves produce an electrical current in the coil sufficient to operate a small device.

There is an inverse relationship between the frequency of the transmitted radio waves and the size of the receiving antenna. That is, to deliver a desired level of power, lower frequency waves require bigger coils, while higher frequency waves can work with smaller coils.
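
That inverse relationship can be illustrated with nothing more than Faraday's law of induction: the peak voltage induced in a small coil scales roughly as V ≈ ωBAN, so for a fixed target voltage the coil area can shrink as the frequency rises. The sketch below is purely illustrative; the field strength, turn count and target voltage are invented, and it ignores tissue losses, coupling and impedance matching, which dominate the real design problem Poon's group tackled.

```python
import numpy as np

# Toy illustration of Faraday's law: V_peak ~ omega * B * A * N.
# For a fixed required voltage, the coil radius needed scales as 1/sqrt(f).
# B, N and V_target are arbitrary illustrative values, not the Stanford design.

B = 1e-6          # peak magnetic flux density at the implant (tesla), illustrative
N = 10            # number of coil turns
V_target = 10e-3  # required induced peak voltage (volts), illustrative

def required_radius(freq_hz):
    """Coil radius (m) such that omega * B * (pi * r**2) * N = V_target."""
    omega = 2 * np.pi * freq_hz
    area = V_target / (omega * B * N)
    return np.sqrt(area / np.pi)

for f in (200e6, 1.7e9):
    r = required_radius(f)
    print(f"{f/1e6:7.0f} MHz -> coil radius ~ {r*1e3:.2f} mm")
```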

“For implantable medical devices, therefore, the goal is a high-frequency transmitter and a small receiver, but there is one big hurdle,” explained Kim.

IGNORING CONSENSUS

Existing mathematical models have held that high frequency radio waves do not penetrate far enough into human tissue, necessitating the use of low-frequency transmitters and large antennas—too large to be practical for implantable devices.

Poon proved the models wrong. Human tissues dissipate electric fields quickly, it is true, but radio waves can travel in a different way—as alternating waves of electric and magnetic fields. With the correct equations in hand, she discovered that high-frequency signals travel much deeper than anyone suspected.

Assistant Professor of Electrical Engineering Ada Poon. 

 Photo: L.A. Cicero / Stanford News Service

“In fact, to achieve greater power efficiency, it is actually advantageous that human tissue is a very poor electrical conductor,” said Kim. “If it were a good conductor, it would absorb energy, create heating and prevent sufficient power from reaching the implant.”

According to their revised models, the researchers found that the maximum power transfer through human tissue occurs at about 1.7 billion cycles per second.

“In this high-frequency range, we can increase power transfer by about 10 times over earlier devices,” said Ho, who honed the mathematical models.

The discovery meant that the team could shrink the receive antenna by a factor of 10 as well, to a scale that makes wireless implantable devices feasible. At the optimal frequency, a millimeter-radius coil is capable of harvesting more than 50 microwatts of power, well in excess of the needs of a recently demonstrated eight-microwatt pacemaker.

ADDITIONAL CHALLENGES

With the dimensional challenges solved, the team found itself hemmed in by other engineering constraints. First, electronic medical devices must meet stringent health standards established by the IEEE, particularly with regard to tissue heating. Second, the team found that the receive and transmit antennas had to be optimally oriented to achieve maximum efficiency. Differences in alignment of just a few degrees could produce troubling drops in power.

“This can’t happen with medical devices,” said Poon. “As the human heart and body are in constant motion, solving this issue was critical to the success of our research.”

The team responded by designing an innovative transmit antenna structure that delivers power efficiency regardless of orientation of the two antennas.

The new design serves additionally to focus the radio waves precisely at the point inside the body where the device rests on the surface of the heart, increasing the electric field where it is most needed, but canceling it elsewhere. This helps reduce tissue heating to levels well within the IEEE standards.

This research was made possible by funding from the C2S2 Focus Center, one of six research centers funded under the Focus Center Research Program (FCRP), a Semiconductor Research Corporation entity. Lisa Chen also contributed to this study.
Andrew Myers is associate director of communications for the Stanford University School of Engineering.
Contacts and sources: 
Andrew Myers
Stanford School of Engineering

The "Anternet" Discovered: The Insect Internet

A collaboration between a Stanford ant biologist and a computer scientist has revealed that the behavior of harvester ants as they forage for food mirrors the protocols that control traffic on the Internet.

On the surface, ants and the Internet don't seem to have much in common. But two Stanford researchers have discovered that a species of harvester ant determines how many foragers to send out of the nest in much the same way that Internet protocols discover how much bandwidth is available for the transfer of data. The researchers are calling it the "anternet."

Deborah Gordon, a biology professor at Stanford, has been studying ants for more than 20 years. When she figured out how the harvester ant colonies she had been observing in Arizona decided when to send out more ants to get food, she called across campus to Balaji Prabhakar, a professor of computer science at Stanford and an expert on how files are transferred on a computer network. At first he didn't see any overlap between his and Gordon's work, but inspiration would soon strike.

"The next day it occurred to me, 'Oh wait, this is almost the same as how [Internet] protocols discover how much bandwidth is available for transferring a file!'" Prabhakar said. "The algorithm the ants were using to discover how much food there is available is essentially the same as that used in the Transmission Control Protocol."

Harvester ants.  

Creative commons photo: Steve Jurvetson (MS '89 Electrical Engineering, MBA '95).

Transmission Control Protocol, or TCP, is an algorithm that manages data congestion on the Internet, and as such was integral in allowing the early web to scale up from a few dozen nodes to the billions in use today. Here's how it works: As a source, A, transfers a file to a destination, B, the file is broken into numbered packets. When B receives each packet, it sends an acknowledgment, or an ack, to A, that the packet arrived.

This feedback loop allows TCP to run congestion avoidance: If acks return more slowly than the data was sent out, that indicates that little bandwidth is available, and the source throttles data transmission down accordingly. If acks return quickly, the source boosts its transmission speed. In this way the source continuously estimates how much bandwidth is available and adjusts its transmission rate to match.

It turns out that harvester ants (Pogonomyrmex barbatus) behave nearly the same way when searching for food. Gordon has found that the rate at which harvester ants – which forage for seeds as individuals – leave the nest to search for food corresponds to food availability.

A forager won't return to the nest until it finds food. If seeds are plentiful, foragers return faster, and more ants leave the nest to forage. If, however, ants begin returning empty-handed, the search is slowed, and perhaps called off.

Prabhakar wrote an ant algorithm to predict foraging behavior depending on the amount of food – i.e., bandwidth – available. Gordon's experiments manipulate the rate of forager return. Working with Stanford student Katie Dektar, they found that the TCP-influenced algorithm almost exactly matched the ant behavior found in Gordon's experiments.

"Ants have discovered an algorithm that we know well, and they've been doing it for millions of years," Prabhakar said.

They also found that the ants followed two other phases of TCP. One phase is known as slow start, which describes how a source sends out a large wave of packets at the beginning of a transmission to gauge bandwidth; similarly, when the harvester ants begin foraging, they send out foragers to scope out food availability before scaling up or down the rate of outgoing foragers.

Another protocol, called time-out, occurs when a data transfer link breaks or is disrupted, and the source stops sending packets. Similarly, when foragers are prevented from returning to the nest for more than 20 minutes, no more foragers leave the nest.
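
A toy simulation makes the analogy concrete. The sketch below is not the model published in PLoS Computational Biology; the update rules and parameters are invented. It only illustrates the three behaviours described above: increasing the outgoing-forager rate when foragers return quickly with seeds, backing off when they return empty-handed, and a time-out that halts dispatch after a long gap with no returns.

```python
import random

# Toy "anternet" simulation: the outgoing forager rate adapts to the rate of
# successful returns, loosely mimicking TCP congestion avoidance.
# This is an illustration only, not the algorithm from the PLoS paper.

def simulate(food_availability, minutes=120, timeout=20, seed=1):
    random.seed(seed)
    rate = 1.0                 # foragers sent per minute (like a congestion window)
    minutes_since_return = 0
    history = []
    for t in range(minutes):
        sent = int(rate)
        # Each forager finds a seed (and so returns promptly) with probability
        # equal to the current food availability.
        returns = sum(1 for _ in range(sent) if random.random() < food_availability)

        if returns > 0:
            minutes_since_return = 0
            rate += 0.5 * returns        # additive increase on successful returns
        else:
            minutes_since_return += 1
            rate = max(1.0, rate * 0.5)  # back off when foragers come back empty

        if minutes_since_return >= timeout:
            rate = 0.0                   # "time-out": stop sending foragers
        history.append(rate)
    return history

plentiful = simulate(food_availability=0.8)
scarce    = simulate(food_availability=0.05)
print("final dispatch rate, plentiful seeds:", round(plentiful[-1], 1))
print("final dispatch rate, scarce seeds:   ", round(scarce[-1], 1))
```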

Prabhakar said that had this discovery been made in the 1970s, before TCP was written, harvester ants very well could have influenced the design of the Internet.

Gordon thinks that scientists have just scratched the surface for how ant colony behavior could help us in the design of networked systems.

There are 11,000 species of ants, living in every habitat and dealing with every type of ecological problem, Gordon said. "Ants have evolved ways of doing things that we haven't thought up, but could apply in computer systems. Computationally speaking, each ant has limited capabilities, but the collective can perform complex tasks.

"So ant algorithms have to be simple, distributed and scalable – the very qualities that we need in large engineered distributed systems," she said. "I think as we start understanding more about how species of ants regulate their behavior, we'll find many more useful applications for network algorithms."

The paper, "The Regulation of Ant Colony Foraging Activity without Spatial Information," appears in the August 23 issue of PLoS Computational Biology.


Contacts and sources:
By Bjorn Carey
Stanford University 

NORAD, Russia Train to Confront Terrorist Hijackings

It was a scene unthinkable even 30 years ago as U.S., Canadian and Russian militaries worked together this week at the North American Aerospace Command headquarters to confront a common enemy: terrorist hijackers.

Maj. Gen. Sergey Dronov of the Russian air force (L) and Joseph C. Bonnet III, director of joint training and exercises for North American Aerospace Defense Command and U.S. Northern Command (R), during exercise Vigilant Eagle 12 at NORAD headquarters at Peterson Air Force Base, Colo., Aug. 28, 2012. 
U.S. Air Force photo by Tech. Sgt. Thomas J. Doscher

That’s exactly what happened during Vigilant Eagle 12, the third exercise of its kind designed to promote collaboration in detecting hijacked aircraft and scrambling military jets to intercept and escort them to safety.

This year’s three-day exercise was computer-based, with participants at the NORAD headquarters at Peterson Air Force Base, Colo.; Joint Base Elmendorf-Richardson, Alaska; and at two bases in Russia.

The scenario involved commercial airliners on international flights that had been seized by terrorists, Air Force Brig. Gen. Richard W. Scobee, NORAD’s deputy operations director, told reporters as the exercise wrapped up yesterday. One simulated hijacking took off from Alaska and was headed for Russian airspace; the other originated in Russia and was bound for the United States.

The scenarios required NORAD – the U.S.-Canada command that safeguards U.S. skies under Operation Noble Eagle -- and the Russian air force to go through the procedures they would use to dispatch fighter jets to investigate and track the aircraft heading toward each other’s airspace. At that point, they handed off the missions to the other to complete.

Applying lessons learned during last year’s exercise, which involved actual aircraft, the participants worked through escort and handoff procedures using their different communications, command-and-control and air traffic control systems, Scobee explained.

To complicate the scenarios, and to reflect what assets might be available during a real-life hijacking, they had to work without input from the U.S. Air Force’s Airborne Warning and Control System or Russia’s A-50 Beriev system.

NORAD and Russia share surprisingly similar tactics, techniques and procedures, Scobee said yesterday during a post-exercise news conference. “It is remarkable that they are so similar,” he said. “Even though we developed them separately, we see the problem similarly.”

Subtle differences became transparent during the exercise, Scobee said, because of the “clean handoff” as one command handed the mission and authority over to the other. “It was like a handshake,” he said.

The unifying factor, Scobee said, was an understanding that actions taken could mean the difference between life and death for passengers. “That is the No. 1 thing – and the Russian Federation is just like NORAD [and] the United States and Canada,” Scobee said. “We want to protect our citizens, and that is our primary goal.”

Scobee and Maj. Gen. Sergey Dronov of the Russian air force, who led Russia's delegation in Colorado, praised the professionalism of both the NORAD and Russian militaries and their shared appreciation of the importance of the mission.

“Right now, we have a common enemy, and that is terrorism,” Dronov said through an interpreter.

“Our countries are uniquely plagued by terrorism,” agreed Scobee. “And this exercise gives us an opportunity to work together, to learn from each other about how we are dealing with those kinds of events.”

The goal, he said, is to increase the complexity of the exercises, refining concepts and procedures in simulation, then applying them in the sky the following year.

“Next year, we will go back and use lessons learned from this exercise and apply them to another live-fly exercise,” he said. “It will be one of those things where we learn from each other and keep building on the exercises we have.”

Future exercises will continue to integrate new curve balls that keep participants on their toes while reflecting how adaptable adversaries operate, Scobee said.

“It is a constant chess game, because just like we don’t keep our tactics stagnant, terrorists do the same thing,” he said. “They are always thinking of another way to try to get past our systems of control. So we always have to think about adjusting our tactics, our training and our procedures.”

Dronov said he was impressed during this year’s exercise by how quickly the participants dealt with challenging scenarios thrown their way. “They are also walking away with some priceless experience of interaction with each other,” he said. “I am confident that in the future, this cooperation will continue.”

The Vigilant Eagle series stems from a 2003 agreement between the U.S. and Russian presidents to promote closer cooperation as they move beyond the Cold War era, Scobee explained. The threat of international hijackers served as a foundation to help advance that effort, resulting in a relevant exercise program that helps address a recognized threat.

“The populations of the United States and Canada and the Russian Federation should hear this loud and clear: We are here to ensure their safety,” Scobee said. “Not only do we practice here at NORAD multiple times a day for this to happen, but now we are also practicing with our international partners to ensure that the air systems of all our countries are safe. And then, if something does go wrong, that we are there to take action.”

This helps to provide a unified front against terrorist hijackers like those who attacked the United States on 9/11, giving birth to the Noble Eagle mission, he said.

“We will never be helpless again,” Scobee added. “[The public] should hear that loud and clear.”

Contacts and sources:
By Donna Miles
American Forces Press Service

DOD Partners With Cities, Countries On Biosurveillance

In line with the first National Biosurveillance Strategy released last month, the Defense Department is working with U.S. cities and countries around the world to enhance capabilities needed to detect and track a range of natural or intentional global disease outbreaks.

Sandia National Laboratory researcher Mark Tucker examines two petri dishes in 1999. On the left is one with a simulant of anthrax growing in it and on the right is one treated with the decontaminating formulation developed at Sandia.
Photo by Randy Montoya, courtesy of Sandia National Laboratory

Biosurveillance involves using experts and a range of technologies to systematically gather, analyze and interpret data related to disease activity and threats to human and animal health for early warning and detection.

Though the strategy is new, a range of national policy documents has addressed biosurveillance, beginning in 2007 with Homeland Security Presidential Directive 21. The directive defined biosurveillance and discussed the need for a national capability.

In 2009, objectives stated in the National Strategy for Countering Biological Attacks sought to protect against the misuse of the life sciences to support biological weapons proliferation and terrorism. And the National Security Strategy of 2010 noted the ability of emerging infectious diseases to cross borders and threaten national security.

“DOD’s involvement in biosurveillance goes back probably before DOD to the Revolutionary War,” Andrew C. Weber, assistant secretary of defense for nuclear, chemical and biological defense programs, told American Forces Press Service.

“We didn’t call it biosurveillance then, but monitoring and understanding infectious disease has always been our priority, because for much of our history, we’ve been a global force,” he added.

Today, as part of its effort to prepare for microbial storms unleashed by nature and by adversaries, DOD works internationally and domestically to improve global biosurveillance cooperation, Weber said.

“While we worry a lot about nonstate actors launching a bioterrorist attack,” he added, “we also have to worry about rogue states like [North] Korea, Iran and Syria that have biological/chemical weapons programs.”

To enhance biodefense capabilities on the Korean peninsula, Weber said DOD and South Korea launched the Able Response exercise in May 2011 and ran it again in May 2012.

“This is a whole-of-government to whole-of-government tabletop exercise focused on a biological incident, not during a conventional war but some type of a covert release, … that could have a major impact on the civilian population … but also on our 28,000 forces deployed on the peninsula,” the assistant secretary said.

At the Defense Threat Reduction Agency, Ryan Madden is a science and technology manager in the chemical and biological technologies directorate’s physical science and technology division. Since 2007, DTRA, the Department of Homeland Security and other federal agencies have worked with the cities of Seattle and Denver, and now are working along with the State Department and with Poland on biosurveillance exercises, Madden told American Forces Press Service.

The first exercise, called the Interagency Biological Restoration Demonstration, or IBRD, ran from 2007 to 2010 in Seattle, he said, calling the demonstration “a very unique partnership” between DOD and Homeland Security.

The exercise was prompted by anthrax attacks that killed five people in the United States in 2001, Madden said.

The scenario involved a large biological anthrax release in a large city. The objective, he explained, was to get “from the baseline of more than 10 years for restoration [of the city after the attack] to a manageable number [of months or years for anthrax cleanup] that allows the city to maintain some form of viability.”

IBRD was conducted in partnership with the Seattle King County Urban Area Security Initiative, Madden said, “and at the end of the program, we had a number of toolsets for decision support or efficacy.”

The IBRD team had done studies on the efficacy of various solutions on Bacillus anthracis -- the bacterium that causes anthrax -- on various surfaces, Madden said. “So there was technical data and decision toolsets that help you use that data to inform sampling approaches or decontamination strategies,” he added.

As a result of the exercise, he said, “the [Seattle Urban Area Security Initiative], and their partnership with Joint Base Lewis-McChord as a key military installation there, have a regional consequence management plan that addresses catastrophic biological incidents.”

For a large city like Seattle, the community resilience factor -- based on how long leases and businesses can stay viable if people can’t get to work -- is about six months. “And we’re still not at six months,” Madden said.

Last year, Homeland Security took the lead, working with DOD, the Environmental Protection Agency and the Department of Health and Human Services in a follow-on effort in Denver, Madden said, working with the Denver Urban Area Security Initiative.

The Denver recovery and resilience program, which wraps up this year, “expanded on IBRD with anthrax, but added a blister agent and a radiological dispersion device, and it still focuses on physical contamination [and cleanup],” the science and technology manager said.

During the Denver program, Madden added, “we started looking at how this could apply in working with a partner nation.”

The international effort began in October as a partnership among DOD, the State Department, Homeland Security and Poland.

“I think [it] ties very closely with both the National Strategy for Countering Biological Threats as well as the National Strategy for Biosurveillance,” Madden said, both of which recommend leveraging international collaboration.

Between October 2011 and September 2014, the exercise will use the release of two agents -- one contagion and one environmentally persistent biothreat -- to develop and demonstrate a capability for resilience in countering a threat that affects U.S. and Polish civilians and military personnel and key infrastructure, Madden said.

“[The international effort] is a capability integration and demonstration program, so we’re looking at technical feasibility and then operational utility,” he added. “We’re working so the U.S. European Command, and warfighters are part of it. And later in the program, [we’ll have] field demonstrations and utility assessments.”

The first technical demonstration will be held in August 2013, he said, and the second in the early spring of 2014.

The final operational demonstration, involving the 773rd Civil Support Team in Germany, Eucom assets and Polish officials working together, will be in September 2014, Madden said. In the meantime, he added, “we’re funding Sandia National Laboratory to help with a methodology and a toolset we call Threat Probability to Action. The big gap we’re trying to bridge is between earlier warning and rapid response.

“The quicker you’re warned about something and the quicker you can make decisions about what to do,” he said, “all of that has an impact on [saving lives].”


Contacts and sources:

Orbiter View of Curiosity From Nearly Straight Overhead

Details such as the shadow of the mast on NASA's Mars rover Curiosity appear in an image taken Aug. 17, 2012, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter, from more directly overhead than previous HiRISE images of Curiosity. In this product, cutouts showing the rover and other hardware or ground markings from the landing of the Mars Science Laboratory spacecraft are presented across the top of a larger, quarter-resolution overview keyed to the full-resolution cutouts. North is up. The scale bar is 200 meters (one-eighth of a mile). 

Overhead HiRISE pass of Mars

Curiosity landed Aug. 5, PDT (Aug. 6, EDT). HiRISE imaged the spacecraft during its descent (PIA15993), on the first day after landing (PIA16001) and on the sixth day after landing (PIA16057). This image was acquired looking more directly down (9-degree roll angle) than the prior images, so the pixel scale is improved to approximately 11 inches (27 centimeters) per pixel. Each cutout is individually stretched to best show the information without saturation. A special noise cleaning method was applied to the images by Paul Geissler of the U.S. Geological Survey.

The shadow of Curiosity's mast extends southeast from the rover, opposite the solar illumination direction.

Dark spots on the left-side cutouts created streaks radial to the descent-stage impact site. They may be from far-flung rocks or objects associated with the impact. Seven bright spots associated with the descent-stage crash site may also be pieces of hardware.

There are also bright pieces scattered around the backshell, mostly downrange, and interesting detail in the parachute.

The rover is approximately 4,900 feet (1,500 meters) away from the heat shield, about 2,020 feet (615 meters) away from the parachute and back shell, and approximately 2,100 feet (650 meters) away from the discoloration consistent with the impact of the sky crane.

Other products from the same HiRISE observation can be found at http://www.uahirise.org/ESP_028401_1755 .

HiRISE is one of six instruments on NASA's Mars Reconnaissance Orbiter. The University of Arizona, Tucson, operates the orbiter's HiRISE camera, which was built by Ball Aerospace & Technologies Corp., Boulder, Colo. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter Project for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the spacecraft.

Image credit: NASA/JPL-Caltech/Univ. of Arizona/USGS

Hubble Spotted A Supernova in NGC 5806

A new image from the NASA/ESA Hubble Space Telescope shows NGC 5806, a spiral galaxy in the constellation Virgo (the Virgin). It lies around 80 million light years from Earth. Also visible in this image is a supernova explosion called SN 2004dg.

The exposures that are combined into this image were carried out in early 2005 in order to help pinpoint the location of the supernova, which exploded in 2004. The afterglow from this outburst of light, caused by a giant star exploding at the end of its life, can be seen as a faint yellowish dot near the bottom of the galaxy.

NGC 5806 was chosen to be one of a number of galaxies in a study into supernovae because Hubble’s archive already contained high resolution imagery of the galaxy, collected before the star had exploded. Since supernovae are both relatively rare, and impossible to predict with any accuracy, the existence of such before-and-after images is precious for astronomers who study these violent events.

Aside from the supernova, NGC 5806 is a relatively unremarkable galaxy: it is neither particularly large nor small, neither especially close nor distant.

The galaxy’s bulge (the densest part in the center of the spiral arms) is a so-called disk-type bulge, in which the spiral structure extends right to the center of the galaxy, instead of there being a large elliptical bulge of stars. It is also home to an active galactic nucleus, a supermassive black hole which is pulling in large amounts of matter from its immediate surroundings. As the matter spirals around the black hole, it heats up and emits powerful radiation.

This image is produced from three exposures in visible and infrared light, observed by Hubble’s Advanced Camera for Surveys. The field of view is approximately 3.3 by 1.7 arcminutes.

A version of this image was entered into the Hubble’s Hidden Treasures Image Processing Competition by contestant Andre van der Hoeven (who won second prize in the competition for his image of Messier 77). Hidden Treasures is an initiative to invite astronomy enthusiasts to search the Hubble archive for stunning images that have never been seen by the general public. The competition has now closed.

Credit: ESA/NASA, acknowledgement: Andre van der Hoeven

Earthquake Hazards Map Study Finds Deadly Flaws, MU Researcher Suggests Improvements

Three of the largest and deadliest earthquakes in recent history occurred where earthquake hazard maps didn’t predict massive quakes. A University of Missouri scientist and his colleagues recently studied the reasons for the maps’ failure to forecast these quakes. They also explored ways to improve the maps. Developing better hazard maps and alerting people to their limitations could potentially save lives and money in areas such as the New Madrid, Missouri fault zone.

A five-minute video summary presented at the 2012 UNAVCO science workshop: “Bad assumptions or bad luck: Tohoku’s embarrassing lessons for earthquake hazard mapping”

Seth Stein is Deering Professor of Geological Sciences at Northwestern. He graduated from MIT in 1975 (B.S.) and Caltech in 1978 (Ph.D.). His research interests are in plate tectonics, earthquake seismology, earthquake hazards, and space geodesy. He has been awarded the James B. Macelwane Medal of the American Geophysical Union, the George Woollard Award of the Geological Society of America, and the Stephan Mueller Medal of the European Geosciences Union; elected a foreign member of the Academy of Europe and a Fellow of the American Geophysical Union and the Geological Society of America; and named to the Institute for Scientific Information Highly Cited Researchers list. He was one of the organizers of EarthScope, a national initiative to dramatically advance our knowledge of the structure and evolution of North America, has served as Scientific Director of the UNAVCO consortium of universities using GPS for earth science, and has been Visiting Senior Scientist at NASA's Goddard Space Flight Center.

“Forecasting earthquakes involves many uncertainties, so we should inform the public of these uncertainties,” said Mian Liu, of MU’s department of geological sciences. “The public is accustomed to the uncertainties of weather forecasting, but foreseeing where and when earthquakes may strike is far more difficult. Too much reliance on earthquake hazard maps can have serious consequences. Two suggestions may improve this situation. First, we recommend a better communication of the uncertainties, which would allow citizens to make more informed decisions about how to best use their resources. Second, seismic hazard maps must be empirically tested to find out how reliable they are and thus improve them.”

Liu and his colleagues suggest testing maps against what is called a null hypothesis, the possibility that the likelihood of an earthquake in a given area – like Japan – is uniform. Testing would show which mapping approaches were better at forecasting earthquakes and subsequently improve the maps.
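
One way to picture such a test is sketched below. It is only an illustration in the spirit of the misfit scoring the authors advocate; all region counts and shaking values are invented. The idea is to score each candidate map by the mean squared difference between the shaking it predicted and the maximum shaking actually observed in each region, and then ask whether the published map beats a uniform-hazard null.

```python
import numpy as np

# Illustrative comparison of a hazard map against a uniform "null" map.
# Each region has a predicted maximum shaking (arbitrary intensity units)
# and an observed maximum shaking over some test interval.
# All numbers are invented; this only sketches the testing idea.

observed   = np.array([0.1, 0.2, 0.9, 0.1, 0.3, 0.8, 0.1, 0.2])  # per region
hazard_map = np.array([0.2, 0.3, 0.4, 0.2, 0.3, 0.3, 0.2, 0.2])  # published map
null_map   = np.full_like(observed, observed.mean())             # uniform hazard

def misfit(predicted, observed):
    """Mean squared difference between predicted and observed shaking."""
    return float(np.mean((predicted - observed) ** 2))

print("misfit, published map:", round(misfit(hazard_map, observed), 3))
print("misfit, uniform null: ", round(misfit(null_map, observed), 3))
# If the published map does not beat the uniform null, its extra structure
# added no forecasting skill over the test interval.
```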

Liu and his colleagues at Northwestern University and the University of Tokyo detailed how hazard maps had failed in three major quakes that struck within a decade of each other. The researchers interpreted the shortcomings of hazard maps as the result of bad assumptions, bad data, bad physics and bad luck.

Wenchuan, China – In 2008, a quake struck China’s Sichuan Province and cost more than 69,000 lives. Locals blamed the government and contractors for not making buildings in the area earthquake-proof, according to Liu, who says that hazard maps bear some of the blame as well since the maps, based on bad assumptions, had designated the zone as an area of relatively low earthquake hazard.

Léogâne, Haiti – The 2010 earthquake that devastated Port-au-Prince and killed an estimated 316,000 people occurred along a fault that had not caused a major quake in hundreds of years. Using only the short history of earthquakes recorded since seismometers were invented approximately one hundred years ago yielded hazard maps that didn’t indicate the danger there.

Tōhoku, Japan – Scientists previously thought the faults off the northeast coast of Japan weren’t capable of causing massive quakes and thus giant tsunamis like the one that destroyed the Fukushima nuclear reactor. This bad understanding of particular faults’ capabilities led to a lack of adequate preparation. The area had been prepared for smaller quakes and the resulting tsunamis, but the Tōhoku quake overwhelmed the defenses.

“If we limit our attention to the earthquake records in the past, we will be unprepared for the future,” Liu said. “Hazard maps tend to underestimate the likelihood of quakes in areas where they haven’t occurred previously. In most places, including the central and eastern U.S., seismologists don’t have a long enough record of earthquake history to make predictions based on historical patterns. Although bad luck can mean that quakes occur in places with a genuinely low probability, what we see are too many ‘black swans,’ or too many exceptions to the presumed patterns.”

“We’re playing a complicated game against nature,” said the study’s first author, Seth Stein of Northwestern University. “It’s a very high stakes game. We don’t really understand all the rules very well. As a result, our ability to assess earthquake hazards often isn’t very good, and the policies that we make to mitigate earthquake hazards sometimes aren’t well thought out. For example, the billions of dollars the Japanese spent on tsunami defenses were largely wasted.

“We need to very carefully try to formulate the best strategies we can, given the limits of our knowledge,” Stein said. “Understanding the uncertainties in earthquake hazard maps, testing them, and improving them is important if we want to do better than we’ve done so far.”

The study, “Why earthquake hazard maps often fail and what to do about it,” was published in the journal Tectonophysics. First author of the study was Seth Stein of Northwestern University. Robert Geller of the University of Tokyo was co-author. Mian Liu is William H. Byler Distinguished Chair in Geological Sciences in the College of Arts and Science at the University of Missouri.

Contacts and sources:
University of Missouri

Weird Chemistry: Study Identifies Prime Source Of Ocean Methane

Up to 4 percent of the methane on Earth comes from the ocean's oxygen-rich waters, but scientists have been unable to identify the source of this potent greenhouse gas. Now researchers report that they have found the culprit: a bit of "weird chemistry" practiced by the most abundant microbes on the planet.

Credit: University of Illinois

The findings appear in the journal Science.

The researchers who made the discovery did not set out to explain ocean geochemistry. They were searching for new antibiotics. Their research, funded by the National Institutes of Health, explores an unusual class of potential antibiotic agents, called phosphonates, already in use in agriculture and medicine.

Many microbes produce phosphonates to thwart their competitors. Phosphonates mimic molecules the microbes use, but tend to be more resistant to enzymatic breakdown. The secret of their success is the durability of their carbon-phosphorus bond.

"We're looking at all kinds of antibiotics that have this carbon-phosphorus bond," said University of Illinois microbiology and Institute for Genomic Biology (IGB) professor William Metcalf, who led the study with chemistry and IGB professor Wilfred van der Donk. "So we found genes in a microbe that we thought would make an antibiotic. They didn't. They made something different altogether."

University of Illinois chemistry professor Wilfred van der Donk (left), microbiology professor William Metcalf and their colleagues discovered the origin of much of the methane in the oxygen-rich regions of the ocean.
 
Credit: L. Brian Stauffer

The microbe was Nitrosopumilus maritimus, one of the most abundant organisms on the planet and a resident of the oxygen-rich regions of the open ocean. When scanning microbial genomes for promising leads, Benjamin Griffin, a postdoctoral researcher in Metcalf's lab, noticed that N. maritimus had a gene for an enzyme that resembled other enzymes involved in phosphonate biosynthesis. He saw that the microbe also contained genes to make a molecule, called HEP, which is an intermediate in phosphonate biosynthesis.

To determine whether N. maritimus was actually producing a desirable phosphonate antibiotic, chemistry postdoctoral researcher Robert Cicchillo cloned the gene for the mysterious enzyme, expressed it in a bacterium (E. coli), and ramped up production of the enzyme. When the researchers added HEP to the enzyme, the chemical reaction that ensued produced a long sought-after compound, one that could explain the origin of methane in the aerobic ocean.

Scientists had been searching for this compound, methylphosphonic acid, since 2008, when David Karl at the University of Hawaii, Edward DeLong at MIT and their colleagues published an elegant – yet unproven – hypothesis to explain how methane was arising in the aerobic ocean. The only microbes known to produce methane are anaerobes, unable to tolerate oxygen. And yet the aerobic ocean is saturated with methane.

To explain this "methane paradox," Karl and DeLong noted that many aerobic marine microbes host an enzyme that can cleave the carbon-phosphorus bond. If that bond were embedded in a molecule with a single carbon atom, methylphosphonic acid, one of the byproducts of this cleavage would be methane. Karl and DeLong even showed that incubation of seawater microbes with methylphosphonic acid led to methane production.

"There was just one problem with this theory," van der Donk said. "Methylphosphonic acid has never been detected in marine ecosystems. And based on known chemical pathways, it was difficult to see how this compound could be made without invoking unusual biochemistry."

Van der Donk's lab conducted further experiments demonstrating that N. maritimus was actually synthesizing phosphonic acids.

"The chemical analysis was a Herculean effort," Metcalf said. The microbe is "one-tenth the size of the standard lab rat microbe, E. coli, and grows at much lower cell densities," he said. The team relied on N. maritimus discoverer David Stahl, of the University of Washington, to grow the microbe in culture for their analysis.

"So we grew 100 liters of culture to get a few, maybe 50 or 100 milligrams of cells, of which maybe 1 percent is phosphorus, of which maybe 5 percent is methylphosphonate," Metcalf said.

The experiments indicated that the methylphosphonate was bound to another molecule, likely a sugar attached to the microbe's surface, van der Donk said. When N. maritimus dies, other marine microbes break the carbon-phosphorus bond of the methylphosphonate to gobble up the phosphorus, an element that is rare in the oceans but essential to life. This encounter generates methane.

The biochemistry that allows N. maritimus to produce methylphosphonate is "unprecedented," Metcalf said.

"Organisms that make phosphonates tend to use weird chemistry for all kinds of things," van der Donk said. "But this is very unusual. One of the carbon atoms of the HEP is oxidized by four electrons and the other is turned into a methyl group. I'm not aware of any other cases where that happens."

The new findings will help those modeling the geochemistry of the ocean to understand climate change, Metcalf said.

"We know that about 20 percent of the greenhouse effect comes from methane and 4 percent of that comes from this previously unexplained source," he said. "You have to know where the methane comes from and where it goes to understand what will happen when the system changes." 

Disease Bacteria And Soil Bacteria Trading Antibiotic Resistance Genes

Soil bacteria and bacteria that cause human diseases have recently swapped at least seven antibiotic-resistance genes, researchers at Washington University School of Medicine in St. Louis report Aug. 31 in Science.

According to the scientists, more studies are needed to determine how widespread this sharing is and to what extent it makes disease-causing pathogens harder to control. 

Graduate student Kevin Forsberg and colleagues found evidence that soil bacteria and disease-causing bacteria recently have shared antibiotic resistance genes.

Credit: Michael C. Purdy
“It is commonplace for antibiotics to make their way into the environment,” says first author Kevin Forsberg, a graduate student. “Our results suggest that this may enhance drug resistance in soil bacteria in ways that could one day be shared with bacteria that cause human disease.”

Among the questions still to be answered: Did the genes pass from soil bacteria to human pathogens or vice versa? And are the genes just the tip of a vast reservoir of shared resistance? Or did some combination of luck and a new technique for studying genes across entire bacterial communities lead the scientists to discover the shared resistance genes?

Humans only mix their genes when they produce offspring, but bacteria regularly exchange genes throughout their lifecycles. This ability is an important contributor to the rapid pace of bacterial evolution. When a bacterial strain develops a new way to beat antibiotics, it can share the strategy not only with its descendants but also with other bacteria.

Earlier studies by other scientists have identified numerous resistance genes in strains of soil bacteria. However, unlike the seven genes described in this report, the earlier genes were dissimilar to their analogs in disease-causing bacteria, implying that they had crossed between the bacterial communities a long time ago.

Most of the antibiotics used to fight illness today originated from the soil. Bacteria use the antibiotics, in part, as weapons to compete with each other for resources and survival. Scientists have long acknowledged that this gives environmental bacteria an evolutionary incentive to find ways to beat antibiotics.

“We wanted to try to get a broader sense of how often and extensively antibiotic-resistance genes are shared between environmental bacteria and pathogens,” says senior author Gautam Dantas, PhD, assistant professor of pathology and immunology.

The researchers isolated bacteria from soil samples taken at various U.S. locations. The bacteria’s DNA was broken into small chunks and randomly inserted into a strain of Escherichia coli that is vulnerable to antibiotics. Scientists treated the altered E. coli with multiple antibiotics.

“We knew that any E. coli that continued to grow after these treatments had picked up a gene from the soil bacteria that was helping it fight the antibiotics,” Forsberg says.

Scientists took the DNA from soil bacteria out of the surviving E. coli and prepared it for high-throughput sequencing. Dantas’ laboratory has developed techniques that make it possible to simultaneously sequence and analyze thousands of chunks of DNA from many diverse microorganisms. The DNA can be selected for a single function, such as antibiotic resistance.

When the scientists compared antibiotic-resistance genes found in the soil bacteria to disease-causing bacteria, they were surprised to find some genes were identical not only in the sections of the genes that code for proteins but also in nearby non-coding sections that help regulate the genes’ activities.

Since bacteria have such large population sizes and rapid reproduction times, their DNA normally accumulates mutations and other alterations much more quickly than the DNA of humans. The lack of changes in the resistance genes identified in the study suggests that the transfers of the genes must have occurred fairly recently, according to Dantas.
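
That reasoning can be made concrete with a toy comparison. The sequences below are invented and far shorter than real genes, and the actual study compared full-length genes plus their flanking regions against pathogen genomes; the sketch only shows why near-perfect nucleotide identity points to a recent transfer, while a diverged homolog points to an ancient split.

```python
# Toy percent-identity comparison between aligned nucleotide sequences.
# Sequences are invented for illustration; the real analyses compared full
# genes plus flanking regulatory regions against pathogen genomes.

def percent_identity(a, b):
    """Percent identity of two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

soil_gene     = "ATGGCTAAAGTTCTGACCGATCGTTTGGCAAGCTAA"
pathogen_gene = "ATGGCTAAAGTTCTGACCGATCGTTTGGCAAGCTAA"  # identical: recent transfer
older_homolog = "ATGGCAAAGGTACTTACGGACCGCTTAGCTTCGTAA"  # diverged: ancient split

print("soil vs pathogen gene:", round(percent_identity(soil_gene, pathogen_gene), 1), "%")
print("soil vs older homolog:", round(percent_identity(soil_gene, older_homolog), 1), "%")
```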

In some soil bacteria, the genes are present in clusters that make the bacteria resistant to multiple classes of antibiotics, including forms of penicillin, sulfonamide and tetracycline.

“I suspect the soil is not a teeming reservoir of resistance genes,” Dantas says. “But if factory farms or medical clinics continue to release antibiotics into the environment, it may enrich that reservoir, potentially making resistance genes more accessible to infectious bacteria.”

Forsberg KJ, Reyes A, Wang B, Selleck EM, Somer MOA, Dantas G. The shared antibiotic resistome of soil bacteria and human pathogens. Science, Aug. 31, 2012.
The Children’s Discovery Institute, the International Center for Advanced Renewable Energy and Sustainability at Washington University and the National Academies Keck Futures Initiatives supported this research.

Washington University School of Medicine’s 2,100 employed and volunteer faculty physicians also are the medical staff of Barnes-Jewish and St. Louis Children’s hospitals. The School of Medicine is one of the leading medical research, teaching and patient care institutions in the nation, currently ranked sixth in the nation by U.S. News & World Report. Through its affiliations with Barnes-Jewish and St. Louis Children’s hospitals, the School of Medicine is linked to BJC HealthCare.


Contacts and sources:
Story by Michael Purdy

Healthy Living Into Old Age Can Add Up To 6 Years To Your Life

Keeping physically active shows the strongest association with survival

Research: Lifestyle, social factors, and survival after age 75: population based study

Living a healthy lifestyle into old age can add five years to women's lives and six years to men's, finds a study from Sweden published on bmj.com today.

Credit: bmj.com

The authors say this is the first study that directly provides information about differences in longevity according to several modifiable factors.

It is well known that lifestyle factors, like being overweight, smoking and heavy drinking, predict death among elderly people. But it is uncertain whether these associations apply to people aged 75 years or more.

So a team of researchers based in Sweden measured the differences in survival among adults aged 75 and older based on modifiable factors such as lifestyle behaviours, leisure activities, and social networks.

The study involved just over 1,800 individuals who were followed for 18 years (1987-2005). Data on age, sex, occupation, education, lifestyle behaviours, social network and leisure activities were recorded.

During the follow-up period 92% of participants died. Half of the participants lived longer than 90 years.

Survivors were more likely to be women, be highly educated, have healthy lifestyle behaviours, have a better social network, and participate in more leisure activities than non-survivors.

The results show that smokers died one year earlier than non-smokers. Former smokers had a similar pattern of survival to never smokers, suggesting that quitting smoking in middle age reduces the effect on mortality.

Of the leisure activities, physical activity was most strongly associated with survival. The average age at death of participants who regularly swam, walked or did gymnastics was two years greater than those who did not.

Overall, the average survival of people with a low risk profile (healthy lifestyle behaviours, participation in at least one leisure activity, and a rich or moderate social network) was 5.4 years longer than those with a high risk profile (unhealthy lifestyle behaviours, no participation in leisure activities, and a limited or poor social network).
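
As an illustration of the kind of comparison behind these numbers (this is not the authors' analysis; the ages below are simulated, whereas the real study used 18 years of observed follow-up data), a simple survival-curve sketch shows how median age at death can be compared between a low-risk and a high-risk group.

```python
import numpy as np

# Minimal survival-curve sketch comparing two hypothetical risk-profile groups.
# Ages at death are simulated; the real study followed ~1,800 people aged 75+
# for 18 years and grouped them by modifiable lifestyle and social factors.

def survival_curve(ages_at_death):
    """Return (sorted ages, empirical survivor fraction) for a fully observed cohort."""
    ages = np.sort(np.asarray(ages_at_death, dtype=float))
    n = len(ages)
    survival = 1.0 - np.arange(1, n + 1) / n   # no censoring: empirical survivor function
    return ages, survival

def median_survival(ages, survival):
    """Age at which the survival curve first drops to 0.5 or below."""
    return float(ages[np.argmax(survival <= 0.5)])

rng = np.random.default_rng(0)
low_risk  = rng.normal(loc=90.0, scale=5.0, size=200)   # healthy-lifestyle group
high_risk = rng.normal(loc=84.6, scale=5.0, size=200)   # unhealthy-lifestyle group

for name, data in [("low risk ", low_risk), ("high risk", high_risk)]:
    ages, surv = survival_curve(data)
    print(f"{name} median age at death ~ {median_survival(ages, surv):.1f}")
```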

Even among those aged 85 years or older and people with chronic conditions, the average age at death was four years higher for those with a low risk profile compared with those with a high risk profile.

In summary, the associations between leisure activity, not smoking, and increased survival still existed in those aged 75 years or more, with women's lives prolonged by five years and men's by six years, say the authors.

These associations, although attenuated, were still present among people aged 85 or more and in those with chronic conditions, they add.

"Our results suggest that encouraging favourable lifestyle behaviours even at advanced ages may enhance life expectancy, probably by reducing morbidity," they conclude.

Contacts and sources:

Uncoiling The Cucumber's Mystery

In the creeping plant's tendrils, researchers discover a biological mechanism for coiling and stumble upon an unusual type of spring


An intact cucumber tendril (top) and a fiber ribbon (bottom) that has been extracted from a tendril both coil in the same, predictable way. Studying the cellular structure of these tendrils has helped researchers to understand a new type of spring.

Credit: Joshua Puzey and Sharon Gerbode


Captivated by a strange coiling behavior in the grasping tendrils of the cucumber plant, researchers at Harvard University have characterized a new type of spring that is soft when pulled gently and stiff when pulled strongly.

Instead of unwinding to a flat ribbon under stress, as an untwisted coil normally would, the cucumber's tendrils actually coil further. Understanding this counterintuitive behavior required a combination of head scratching, physical modeling, mathematical modeling, and cell biology—not to mention a large quantity of silicone.

The result, published in the August 31 issue of Science, describes the mechanism by which coiling occurs in the cucumber plant and suggests a new type of bio-inspired twistless spring.

Led by principal investigator L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics at the Harvard School of Engineering and Applied Sciences (SEAS), Professor of Organismic and Evolutionary Biology and Professor of Physics at Harvard, and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard, the researchers were motivated by simple curiosity about the natural world.

"Nature has solved all kinds of energetic and mechanical problems, doing it very slowly and really getting it right," says lead author Sharon Gerbode, a former postdoctoral fellow at SEAS who has now advanced to a faculty position in the physics department at Harvey Mudd College. "But few people have studied biological mechanisms from the point of view of a physicist or an engineer. We barely had to scratch the surface with this question about the cucumber—how does it coil? What could be a simpler question? And what we actually found was this new kind of spring that no one had characterized before."

Well known to botanists and gardeners, the coiling tendrils of climbing plants like cucumbers, sweet peas, and grape vines allow the plants to hoist themselves up towards sunlight and secure themselves tightly to existing structures like trees or trellises. Yet the biological and physical mechanism of this coiling, at the level of the plant's cells and tissues, has remained a mystery.

A cucumber tendril begins as a straight stem that elongates until it finds something to latch onto. Then, secured at both ends, it forms a left-handed helix and a right-handed helix, joined at the center by a "perversion"—Charles Darwin's strikingly Victorian term for the point at which the coiling changes direction.

"It's easy to create one of these twistless springs with a telephone cord," says Gerbode, "and they're annoying. But with the phone cord, you can pull on both ends and it will straighten out into a flat ribbon. What's strange about the cucumber tendril is that if you pull on the ends, it actually overwinds, adding more turns to both helices." 

Cucumber tendrils produce a particular type of strong, flexible spring that has not been characterized before. Shown here is a fiber ribbon extracted from a tendril.
 
Credit: Joshua Puzey and Sharon Gerbode

To explore the mechanism for this behavior, Gerbode and her Harvard colleagues took a closer look at the cells and tissue types inside the tendril.

A fibrous ribbon, made of thread-like cells called gelatinous fiber (g-fiber) cells, runs the length of each tendril. Two cell layers thick, this ribbon appears to provide the force required for the tendril to form a helix without the benefit of muscles. If the cells on one side of such a ribbon were to contract, the researchers thought, it would force the ribbon to curve and coil.

Gerbode and her coauthor Joshua Puzey (Ph.D. '12), who was studying organismic and evolutionary biology in the Graduate School of Arts and Sciences (GSAS) at the time, tried to reconstruct this fiber ribbon with a silicone model. They stretched a sheet of elastic silicone, secured the ends, and then spread a thin layer of silicone caulk across its surface. When the caulk cured, they cut a thin strip off the model, held both ends, and watched it coil into a pair of perfect helices. When they pulled on both ends, however, it simply unraveled and lay flat, adding no extra coils as they had hoped.

"This is when I spent a lot of time pulling on telephone cords," Gerbode admits.

The clue, as it turns out, was inside the g-fiber cells. These cells have been studied extensively in trees; they have the ability to shrink or elongate, thanks to a special type of architecture in the cell wall.

"What we think may be happening is that the inner cell layer of the tendril has more lignin in it, which is a sort of glue that gives cell walls stiffness and holds together the cellulose microfibrils, which are like rebar in the cells," explains Puzey. "We thought this stiffness must be related to the coiling somehow."

To test this idea, Gerbode and Puzey glued a fabric ribbon to one side of their silicone model and a copper wire to the other side. At last, the silicone strip formed a pair of helices that overwound, just like the cucumber tendril.

The structure they stumbled upon is a spring made of two joined, opposite-handed helices whose bending stiffness is higher than their twisting stiffness. In other words, to form this specific structure, the materials involved have to make it easier for the ribbon to twist axially than to change its curvature. Through mathematical models developed by Mahadevan and coauthor Andrew McCormick (a physics graduate student in GSAS), the team was able to fully understand the parameters and synthesize a simple principle for the design of these springs.
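The design rule can be illustrated with a toy calculation. The following Python sketch (not from the paper; the function name, threshold, and numbers are illustrative) encodes the criterion stated above: a perversion-joined double helix is expected to overwind when pulled if its bending stiffness exceeds its twisting stiffness, and to flatten out like a phone cord otherwise.

```python
def ribbon_spring_response(bend_stiffness, twist_stiffness):
    """Qualitative behaviour of a twistless (perversion-joined) spring
    when its ends are pulled, per the criterion described in the article.

    bend_stiffness  -- resistance to changing the ribbon's curvature
    twist_stiffness -- resistance to twisting the ribbon about its axis
    """
    if bend_stiffness > twist_stiffness:
        # Easier to twist than to straighten: pulling adds turns.
        return "overwinds (cucumber-tendril-like)"
    # Easier to straighten than to twist: pulling removes turns.
    return "unwinds toward a flat ribbon (phone-cord-like)"

# Illustrative numbers only (arbitrary units).
print(ribbon_spring_response(bend_stiffness=5.0, twist_stiffness=1.0))  # overwinds
print(ribbon_spring_response(bend_stiffness=1.0, twist_stiffness=5.0))  # unwinds
```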

The final stage in the research was to address the biological implications. By extracting the fiber ribbon from a cucumber tendril, Mahadevan's group had already noticed that moisture was playing a role in the spring's behavior. As the extracted ribbon dried out, its stiffness increased and it coiled more tightly. Lignin is also known to be hydrophobic, repelling water. What's more, Mahadevan's team measured the mechanical response of young tendrils and older ones, finding that the older tendrils put up much more resistance to pulling, a fact that they explained using a combination of theory and computer simulations.


The lignified cells in the tendril's fiber ribbon glow bright blue under ultraviolet light. The thickened cell walls are clearly visible in the bottom two images.
 
Credit: Joshua Puzey and Sharon Gerbode

Though the group has not yet explored these findings from an evolutionary perspective, they hypothesize that the mature coil structure allows the climbing plants just the right amount of structural flexibility.

"You want the plant to make a nice strong, secure connection, but you also don't want it to be too stiff or to snap," explains Gerbode. "You want it to have a little bit of flexibility so that if the wind blows or an animal brushes past it, it doesn't break. So one possibility is that this overwinding allows the plant to easily accommodate small motions, but then if something really serious happens it can get very stiff and protect itself."

To further study the evolutionary significance of the tendril's morphology, researchers would have to study the coils in numerous species and attempt to reconstruct the evolutionary history of that characteristic. Mahadevan suggests that such a project could provide important ecological insights.

"The advantage of using a tendril is that the plant saves on complex machinery to build structural supports such as trunks and branches," Mahadevan says. "The disadvantage is that it must depend on other species to build these supports. Thus, tendrils are an adaptation that is likely to develop only in regions replete with vegetation that can provide supports and where competition for resources is intense.

"The real question remains this: how difficult is it to evolve such tendril-like solutions?"

Now that nature has done the hard work, though, Mahadevan suggests that the benefits of understanding cucumber coils might be useful in technology—but hastens to add that this work was driven by pure curiosity, not with an end product in mind.

"This is likely to be useful anywhere we need a spring with a tunable mechanical response," he says.


Contacts and sources: 
This work was supported by the MacArthur Foundation, the Wyss Institute for Biologically Inspired Engineering at Harvard, and the Kavli Institute for Bionano Science and Technology at Harvard. The researchers are now pursuing a patent on the technology, through the Wyss Institute.

Smartphone App Can Track Objects On The Battlefield As Well As On The Sports Field


University of Missouri researchers have developed new software that uses smartphones’ GPS and imaging abilities to determine the exact location of distant objects and to monitor the speed and direction of moving objects. The software could eventually allow smartphone-armed soldiers to target the location of their enemies. On the home front, the software could be used by everyone, including golfers judging distance to the green and biologists documenting the location of a rare animal without disturbing it.
The PositionIt System: The three major components of the PositionIt system are single-image-based localization, two-image-based localization, and video-based remote moving-target tracking.
“The great advantage of a smartphone is that it provides so many tools in a single, readily available, relatively inexpensive package,” said Qia Wang, a doctoral student in MU’s College of Engineering who led the development of the software. “For example, on the battlefield, a soldier needs a rangefinder, compass, GPS and other tools to do reconnaissance before calling in an air strike. With our software, the soldier can have all those instruments in one device that can be purchased off the shelf. When that soldier returns from war, she can use the same software to protect her family by clocking a speeder near her children’s school and catching the culprit on video.”

Single-Image PositionIt: Single-image-based localization workflow. Steps 1, 2, and 4 are screen shots from the phone. Step 3 is an illustration of how to compute the remote target’s GPS coordinates.
Wang and his colleagues developed their software to locate and track:
  1. Targets of known size – When the size of the target is known, a single image is enough to pinpoint the target’s location. The software computes the latitude and longitude of the target from the smartphone’s GPS location, its compass reading, and the distance to the target, which is estimated by comparing the target’s size in the image with its known real-life size (a rough sketch of this geometry follows the list).
  2. Targets of unknown size – If the exact size of a target is unknown, the software uses two images to triangulate the location of the target.
  3. Moving targets – By taking a short video of a moving target, the smartphone software can calculate how fast the target is moving and in what direction it is going.
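As a rough illustration of the single-image case (item 1 above), the Python sketch below estimates the distance from the target’s apparent size using a simple pinhole-camera relation and then projects the phone’s GPS fix along the compass bearing to obtain the target’s coordinates. The focal length, function names, and numbers are assumptions for illustration only; the published PositionIt system is more sophisticated, and the two-image and video modes are not shown here.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def estimate_distance(real_height_m, pixel_height, focal_length_px):
    """Pinhole-camera estimate: a known-size target that spans fewer
    pixels in the image must be farther away."""
    return real_height_m * focal_length_px / pixel_height

def project_target(lat_deg, lon_deg, bearing_deg, distance_m):
    """Move distance_m from the phone's GPS fix along the compass bearing
    (great-circle 'destination point' formula) to get the target's lat/lon."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    d = distance_m / EARTH_RADIUS_M  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Hypothetical example: a 2 m tall target spanning 50 px in an image taken
# with an assumed 3000 px focal length, sighted on a 45-degree bearing.
dist = estimate_distance(2.0, 50, 3000)        # 120 m
print(project_target(38.9404, -92.3277, 45.0, dist))
```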
Two-Image PositionIt: Workflow of the two-image-based system. Steps 1 and 3 are screen shots. Step 2 illustrates how to compute the remote target’s GPS coordinates.
“Currently, our software is limited by the physical abilities of smartphone hardware, but the devices are improving rapidly,” Wang said. “We anticipate that improvements in GPS accuracy, battery life and camera resolution will allow our software to make even more accurate observations. We also are making our software more user-friendly.”
Video PositionIt: Workflow of video-based moving target tracking. Steps 1 and 2 are screen shots. Step 3 shows the target tracking results. Step 4 illustrates the trajectory generation process. Step 5 shows the results of trajectory smoothing and target speed estimation.

 
The targeting and tracking software is not available commercially yet. A prototype version has been created and is currently being tested. More algorithms and methods are being developed to improve the speed and accuracy. Details on the programming and functionality of the software were presented by Wang and his colleagues at the Geospatial InfoFusion II conference and published in the Proceedings of the Society of Photo-Optical Instrumentation Engineers.

Contacts and sources:
Tim Wall
University of Missouri-Columbia

Thursday, August 30, 2012

Four Looks At A Surprisingly Bright Superbubble Orbiting The Milky Way

This composite image shows a superbubble in the Large Magellanic Cloud (LMC), a small satellite galaxy of the Milky Way, located about 160,000 light years from Earth. Many new stars, some of them very massive, are forming in the star cluster NGC 1929, which is embedded in the nebula N44. The massive stars produce intense radiation, expel matter at high speeds, and race through their evolution to explode as supernovas.

Credit: Chandra X-ray Observatory

The winds and supernova shock waves carve out huge cavities called superbubbles in the surrounding gas. X-rays from NASA's Chandra X-ray Observatory (blue) show hot regions created by these winds and shocks, while infrared data from NASA's Spitzer Space Telescope (red) outline where the dust and cooler gas are found. The optical light from the 2.2m Max-Planck-ESO telescope (yellow) in Chile shows where ultraviolet radiation from hot, young stars is causing gas in the nebula to glow.



A long-running problem in high-energy astrophysics has been that some superbubbles in the LMC, including N44, give off a lot more X-rays than expected from models of their structure. A Chandra study published in 2011 showed that there are two extra sources of the bright X-ray emission: supernova shock waves striking the walls of the cavities, and hot material evaporating from the cavity walls.

Infrared image of superbubble
NGC 1929
Credit: Chandra X-ray Observatory

The observations show no evidence for an enhancement of elements heavier than hydrogen and helium in the cavities, thus ruling out this possibility as an explanation for the bright X-ray emission. This is the first time that the data have been good enough to distinguish between different sources of the X-rays produced by superbubbles.

X-ray Image of NGC 1929
Credit: Chandra X-ray Observatory

The Chandra study of N44 and another superbubble in the LMC was led by Anne Jaskot from the University of Michigan in Ann Arbor. The co-authors were Dave Strickland from Johns Hopkins University in Baltimore, MD, Sally Oey from University of Michigan, You-Hua Chu from University of Illinois and Guillermo Garcia-Segura from Instituto de Astronomia-UNAM in Ensenada, Mexico.

Optical image of the superbubble
NGC 1929
Credit: Chandra X-ray Observatory

NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.

Contacts and sources:
Megan Watzke
Chandra X-ray Center

People Merge Supernatural And Scientific Beliefs When Reasoning With the Unknown, Study Shows

Reliance on supernatural explanations for major life events, such as death and illness, often increases rather than declines with age, according to a new psychology study from The University of Texas at Austin.

The study, published in the June issue of Child Development, offers new insight into developmental learning.

“As children assimilate cultural concepts into their intuitive belief systems — from God to atoms to evolution — they engage in coexistence thinking,” said Cristine Legare, assistant professor of psychology and lead author of the study. “When they merge supernatural and scientific explanations, they integrate them in a variety of predictable and universal ways.”

A healing ritual (the laying on of hands) 
 
Credit: Wikipedia

Legare and her colleagues reviewed more than 30 studies on how people (ages 5-75) from various countries reason about three major existential questions: the origin of life, illness and death. They also conducted a study with 366 respondents in South Africa, where biomedical and traditional healing practices are both widely available.

As part of the study, Legare presented the respondents with a variety of stories about people who had AIDS. They were then asked to endorse or reject several biological and supernatural explanations for why the characters in the stories contracted the virus.

According to the findings, participants of all age groups agreed with biological explanations for at least one event. Yet supernatural explanations such as witchcraft were also frequently supported among children (ages 5 and up) and universally among adults.

Among the adult participants, only 26 percent believed the illness could be caused by either biology or witchcraft. Another 38 percent merged biological and supernatural explanations into a single theory. For example: “Witchcraft, which is mixed with evil spirits, and unprotected sex caused AIDS.” However, 57 percent combined both witchcraft and biological explanations. For example: “A witch can put an HIV-infected person in your path.”

A painting in the Rila Monastery in Bulgaria, condemning witchcraft and traditional folk magic
Credit: Wikipedia

Legare said the findings contradict the common assumption that supernatural beliefs dissipate with age and knowledge.

“The findings show supernatural explanations for topics of core concern to humans are pervasive across cultures,” Legare said. “If anything, in both industrialized and developing countries, supernatural explanations are frequently endorsed more often among adults than younger children.”

The results provide evidence that reasoning about supernatural phenomena is a fundamental and enduring aspect of human thinking, Legare said.

“The standard assumption that scientific and religious explanations compete should be re-evaluated in light of substantial psychological evidence,” Legare said. “The data, which spans diverse cultural contexts across the lifespan, shows supernatural reasoning is not necessarily replaced with scientific explanations following gains in knowledge, education or technology.”


Contacts and sources:
Cristine Legare
University of Texas at Austin

Monogamy And The Immune System

In the foothills of the Santa Cruz Mountains two closely related species of mice share a habitat and a genetic lineage, but have very different social lives. The California mouse (Peromyscus californicus) is characterized by a lifetime of monogamy; the deer mouse (Peromyscus maniculatus) is sexually promiscuous.

The California mouse (Peromyscus californicus) is a species of rodent in the family Cricetidae, found in northwestern Mexico and central to southern California. Most rodents are polygamous, but the California mouse pair bonds, making it a model organism for researchers studying the genetics and implications of partner fidelity. 
Credit:  University of California Berkeley

Researchers at the University of California Berkeley recently showed how these differences in sexual behavior impact the bacteria hosted by each species as well as the diversity of the genes that control immunity. The results were published in the May 2012 edition of PLoS One.

Phylogeny of the 16S rRNA sequences used in this study. Gray branches correspond to the bacterial sequences recovered from the monogamous P. californicus, while black branches correspond to those from the promiscuous P. maniculatus. Phylotypes recovered in both host species were assigned to the host in which they were more common.


Credit:  University of California Berkeley  

Monogamy is a fairly rare trait in mammals, possessed by only five percent of species. Rarely do two related, but socially distinguishable, species live side-by-side. This makes these two species of mice interesting subjects for Matthew MacManes, a National Institutes of Health-sponsored post-doctoral fellow at UC Berkeley.

Through a series of analyses, MacManes and researchers from the Lacey Lab examined the differences between these two species on the microscopic and molecular levels. They discovered that the lifestyles of the two mice had a direct impact on the bacterial communities that reside within the female reproductive tract. Furthermore, these differences correlate with enhanced diversifying selection on genes related to immunity against bacterial diseases.

Bacteria live on every part of our bodies and have distinctive ecologies. The first step of MacManes' project involved testing the bacterial communities that resided in the vaginas of both species of mice — the most relevant area for a study about monogamous and promiscuous mating systems.

Next, MacManes performed a genetic analysis on the variety of DNA present, revealing hundreds of different types of bacteria in each species. He found that the promiscuous deer mouse had twice the bacterial diversity of the monogamous California mouse. Since many bacteria cause sexually transmitted infections (like chlamydia or gonorrhea), he used the diversity of bacteria as a proxy for risk of disease. Results of the study were published in Naturwissenschaften in October 2011.
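One simple way to picture what "bacterial diversity" means in such a survey is to count how many distinct sequence types (phylotypes) turn up in each host and how evenly they are distributed, for instance with the Shannon index. The Python snippet below is an illustrative sketch with invented labels and counts, not the study's actual pipeline or data.

```python
import math
from collections import Counter

def richness_and_shannon(phylotype_labels):
    """Return (number of distinct phylotypes, Shannon diversity index)
    for a list of per-sample phylotype assignments."""
    counts = Counter(phylotype_labels)
    total = sum(counts.values())
    shannon = -sum((n / total) * math.log(n / total) for n in counts.values())
    return len(counts), shannon

# Hypothetical phylotype assignments for two hosts (labels are invented).
monogamous = ["A", "A", "A", "B", "B", "C"]
promiscuous = ["A", "B", "C", "C", "D", "E", "F", "F"]

print(richness_and_shannon(monogamous))    # 3 phylotypes, lower index
print(richness_and_shannon(promiscuous))   # 6 phylotypes, higher index
```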

But this wasn't the end of the exploration.

"The obvious next question was, does the bacterial diversity in the promiscuous mice translate into something about the immune system, or how the immune system functions?" MacManes asked.

MacManes hypothesized that selective pressures caused by generation after generation of bacterial warfare had fortified the genomes of the promiscuous deer mouse against the array of bacteria it hosts.

To find out, he sequenced genes related to immune function of the two mice species and compared each species' versions of one important immunity gene, MHC-DQa. Some forms of genes (alleles) are better at recognizing different pathogens than others. If an individual has only a single common allele, it may only recognize a limited set of bacterial pathogens. In contrast, if an individual has two different alleles it may recognize a more diverse set of bacterial pathogens, and thus be more protected against infection.
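The heterozygote logic in the preceding paragraph can be made concrete with a toy example; the allele names and pathogen sets below are invented for illustration and are not data from the study. Each allele recognizes its own set of pathogens, and carrying two different alleles simply widens the union of what can be detected.

```python
# Invented recognition repertoires for two hypothetical MHC alleles.
RECOGNIZED = {
    "DQa-1": {"pathogen_A", "pathogen_B", "pathogen_C"},
    "DQa-2": {"pathogen_C", "pathogen_D", "pathogen_E"},
}

def recognized_pathogens(genotype):
    """Union of pathogens detectable by an individual's two alleles."""
    detected = set()
    for allele in genotype:
        detected |= RECOGNIZED[allele]
    return detected

homozygote = ("DQa-1", "DQa-1")
heterozygote = ("DQa-1", "DQa-2")

print(len(recognized_pathogens(homozygote)))    # 3 pathogens covered
print(len(recognized_pathogens(heterozygote)))  # 5 pathogens covered
```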

Peromyscus maniculatus is a rodent native to North America. It is most commonly called the Deer mouse. Like other Peromyscus species, it is a vector and carrier of emerging infectious diseases such as hantaviruses and Lyme disease.
Credit:  University of California Berkeley 

Based on a comparison of the two species' genotypes he confirmed that the promiscuous mice had much more diversity in the genes related to their immune system.

"The promiscuous mice, by virtue of their sexual system, are in contact with more individuals and are exposed to a lot more bacteria," MacManes said. "They need a more robust immune system to fend off all of the bugs that they're exposed to."

The results, published in PLoS One, match findings in humans and other species with differential mating habits. They show that differences in social behavior can lead to changes in the selection pressures and gene-level evolutionary changes in a species.

Motivated by this result, MacManes began work on a project that looked to understand the genetics of a far more complex behavior—whether to stay at home with relatives, or to disperse to a new burrow.

Scientists have been sequencing and exploring the genome for more than a decade. For much of this time, studies have been limited to the most common and well-known species: humans, lab-mice, and fruit flies. But in recent years, as the cost of sequencing has dropped and the methods of exploring genomic information have improved, researchers have begun to analyze other less traditional organisms.

MacManes' project was one of the first studies to use next-generation gene sequencing and high performance computers to assess the influence of behavior on genes in a non-model species.

"This is a field that people have always been interested in, but the tools hadn't existed yet for people to really understand how complex the mechanisms were," MacManes said.

Next-generation sequencing determines the order of the nucleotide bases in a molecule of DNA by breaking the double helix into short fragments and rapidly analyzing thousands of chunks at a time. Once hundreds of millions of genetic snippets have been read out by a DNA sequencer, they must be assembled into a single genome, or mapped to a reference genome, and compared to other genetic sequences to be useful.
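To see why this downstream step, rather than the sequencing itself, becomes the bottleneck, consider the most naive possible read-mapping strategy, sketched below in Python with toy data and exact matching only (real aligners build an index of the reference and tolerate mismatches). Even this simplistic scan scales with the number of reads times the genome length, which is why hundreds of millions of reads against billions of bases push the analysis onto supercomputers.

```python
def map_reads_naive(reference, reads):
    """Map each short read to every exact-match position in the reference.

    Toy illustration only: real aligners index the reference (e.g. with a
    suffix array or FM-index) and tolerate sequencing errors and variants.
    """
    hits = {}
    for read in reads:
        positions = []
        start = reference.find(read)
        while start != -1:
            positions.append(start)
            start = reference.find(read, start + 1)
        hits[read] = positions
    return hits

# Tiny made-up reference and reads (real data: a ~3 Gb genome, ~10^8 reads).
reference = "ACGTTGCATGCGATACGTTGCA"
reads = ["ACGTTGCA", "TGCGATAC", "GGGGGGGG"]
print(map_reads_naive(reference, reads))
# {'ACGTTGCA': [0, 14], 'TGCGATAC': [8], 'GGGGGGGG': []}
```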

"The sequencing is something that you can do in any molecular biology lab—that's easy," MacManes said. "But when you try to do an analysis of the data, you get back something like several billion base pairs of data. How to actually analyze the data is the real issue."

As a National Science Foundation (NSF) graduate research fellow, MacManes learned that researchers could access NSF supercomputers through the Extreme Science and Engineering Discovery Environment (XSEDE) to analyze datasets too big for their university laboratory clusters. Once he had his sequences, MacManes turned to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, a lead partner in XSEDE and home to the Ranger supercomputer.

Matthew MacManes, a National Institutes of Health-sponsored post-doctoral fellow at UC Berkeley.
Credit:  University of California Berkeley 

"When we first started using Ranger, it was a breakthrough moment for us," he said. "We had the data set, but we didn't have any way to do anything with it. Ranger was really our first real chance at analyzing this data."

The alignment and analysis that MacManes accomplished on Ranger in a few weeks would have taken years with his local resources. It organized the data so MacManes could find insights about the relationship between genes and behavior.

"The ability to isolate and compare genetic differences related to social behavior using advanced computing is a fascinating application of emerging technologies," said Jennifer Verodolin, a researcher specializing in social rodents at the National Evolutionary Synthesis Center in Durham, North Carolina. "We often see individual and population-level social and mating differences within the same species. While ecological factors are linked to this variation, these sophisticated new tools will now allow us to see the genetic signature of how natural selection has shaped behavior."

Mating systems, and social systems more broadly, are important to basic evolutionary biology, MacManes asserted. "The things an animal does, the way it behaves, and who it interacts with, are important to natural selection. These factors can cause immunogenes to evolve at a much faster rate, or slower in the case of monogamous mice. That connection is important and probably under-recognized."

Monogamy and promiscuity are only one of a variety of social behaviors that are thought to influence gene expression. MacManes' current research involves analyzing gene expression in the hippocampus brain region of tuco-tucos (a sort of South American gopher) that live together in social groups and others that live independently. He is hoping to find what differentiates the social animals from the loners and what impact this change in their behavior has on their genetic profile.

"Now that we have these new sequencing technologies, people are going to be really interested in looking at the mechanisms that underlie these behaviors," MacManes said. "How might genes control what we do, and how we behave? We're going to see an explosion in these studies where people start to understand the very basic genetic mechanism for all sorts of behaviors that we know are out there."


Contacts and sources:
Aaron Dubrow
University of Texas at Austin, Texas Advanced Computing Center