Unseen Is Free


Friday, June 28, 2013

Synthetic Biology: Artificial Ribosomes Created From Scratch

Synthetic biology technology could lead to new antibiotics, modified protein-generators

Synthetic biology researchers at Northwestern University, working with partners at Harvard Medical School, have for the first time synthesized ribosomes -- cell structures responsible for generating all proteins and enzymes in our bodies -- from scratch in a test tube.

Others have previously tried to synthesize ribosomes from their constituent parts, but the efforts have yielded poorly functional ribosomes under conditions that do not replicate the environment of a living cell. In addition, attempts to combine ribosome synthesis and assembly in a single process have failed for decades.

Ribosome mRNA

Credit: Wikipedia

Michael C. Jewett, a synthetic biologist at Northwestern, George M. Church, a geneticist at Harvard Medical School, and colleagues recently took another approach: they mimicked the natural synthesis of a ribosome, allowing natural enzymes of a cell to help facilitate the man-made construction.

The technology could lead to the discovery of new antibiotics targeting ribosome assembly; an advanced understanding of how ribosomes form and function; and the creation of tailor-made ribosomes to produce new proteins with exotic functions that would be difficult, if not impossible, to make in living organisms.

"We can mimic nature and create ribosomes the way nature has evolved to do it, where all the processes are co-activated at the same time," said Jewett, who led the research along with Church. "Our approach is a one-pot synthesis scheme in which we toss genes encoding ribosomal RNA, natural ribosomal proteins, and additional enzymes of an E. coli cell together in a test tube, and this leads to the construction of a ribosome."

Jewett is an assistant professor of chemical and biological engineering at Northwestern's McCormick School of Engineering and Applied Science.

The in vitro construction of ribosomes, as demonstrated in this study, is of great interest to the synthetic biology field, which seeks to transform our ability to engineer novel life forms and biocatalytic ensembles for useful purposes.

The findings of the four-year research project were published June 25 in the journal Molecular Systems Biology.

Comprising 57 parts -- three strands of ribonucleic acid (RNA) and 54 proteins -- ribosomes carry out the translation of messenger RNA into proteins, a core process of the cell. The thousands of proteins per cell, in turn, carry out a vast array of functions, from digestion to the creation of antibodies. Cells require ribosomes to live.

Jewett likens a ribosome to a chef. The ribosome takes the recipe, encoded in DNA, and makes the meal, or a protein. "We want to make brand new chefs, or ribosomes," Jewett said. "Then we can alter ribosomes to do new things for us."

"The ability to make ribosomes in vitro in a process that mimics the way biology does it opens new avenues for the study of ribosome synthesis and assembly, enabling us to better understand and possibly control the translation process," he said. "Our technology also may enable us in the future to rapidly engineer modified ribosomes with new behaviors and functions, a potentially significant advance for the synthetic biology field."

The synthesis process developed by Jewett and Church -- termed "integrated synthesis, assembly and translation" (iSAT) technology -- mimics nature by enabling ribosome synthesis, assembly and function in a single reaction and in the same compartment.

Working with E. coli cells, the researchers combined natural ribosomal proteins with synthetically made ribosomal RNA, which self-assembled in vitro to create semi-synthetic, functional ribosomes.

They confirmed the ribosomes were active by assessing their ability to carry out translation of luciferase, the protein responsible for allowing a firefly to glow. The researchers then showed the ability of iSAT to make a modified ribosome with a point mutation that mediates resistance to the antibiotic clindamycin.

The researchers next want to synthesize all 57 ribosome parts, including the 54 proteins.

"I'm really excited about where we are," Jewett said. "This study is an important step along the way to synthesizing a complete ribosome. We will continue to push this work forward."

Jewett and Church, a professor of genetics at Harvard Medical School, are authors of the paper, titled "In Vitro Integration of Ribosomal RNA Synthesis, Ribosome Assembly, and Translation." Other authors are Brian R. Fritz and Laura E. Timmerman, graduate students in chemical and biological engineering at Northwestern.

The work was carried out at both Northwestern University and Harvard Medical School.

Contacts and sources:
Megan Fellman
Northwestern University

Breaking Habits Before They Start

Turning off cells in a habit-associated brain region prevents rats from learning to run a maze on autopilot.

Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day. Some behaviors, such as smoking or biting your fingernails, become so habitual that we can’t stop even if we want to.

Breaking habits before they start
Credit: MIT

Although breaking habits can be hard, MIT neuroscientists have now shown that they can prevent them from taking root in the first place, in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary in order for habits to crystallize. Then, they were able to block habits from forming by interfering with activity in one of the brain regions — the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics, which allowed them to control cells of the IL cortex with light. When those cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was made to taste bad, they stopped running it, showing that a habit had not formed; had one formed, the rats would have kept returning out of habit.

“It’s usually so difficult to break a habit,” Graybiel says. “It’s also difficult to have a habit not form when you get a reward for what you’re doing. But with this manipulation, it’s absolutely easy. You just turn the light on, and bingo.”

Graybiel, a member of MIT’s McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper’s lead author.

Patterns of habitual behavior

Previous studies of how habits are formed and controlled have implicated the IL cortex as well as the striatum, a part of the brain related to addiction and repetitive behavioral problems, as well as normal functions such as decision-making, planning and response to reward. It is believed that the motor patterns needed to execute a habitual behavior are stored in the striatum and its circuits.

Recent studies from Graybiel’s lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

“We have evidence that these two areas are important for habits, but they’re not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over,” Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive “task-bracketing” pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.

This kind of pattern “chunks” habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.
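To make the "task-bracketing" idea concrete, here is a toy sketch of how such a pattern could be quantified from a vector of firing rates sampled across one maze run. The function, the 20% edge window, and the example data are all invented for illustration; this is not the authors' analysis.

```python
import numpy as np

def bracketing_index(rates):
    """Hypothetical measure of task-bracketing: how much firing is
    concentrated at the start and end of a run versus the middle.
    `rates` is a 1-D array of firing rates across one maze run."""
    n = len(rates)
    edge = n // 5  # first and last 20% of the run
    ends = np.concatenate([rates[:edge], rates[-edge:]]).mean()
    middle = rates[edge:-edge].mean()
    # +1 means all activity at the edges, -1 means all in the middle
    return (ends - middle) / (ends + middle)

# A run with bursts at the start and finish scores high; a flat run scores 0.
bracketed = np.array([9, 8, 1, 1, 1, 1, 1, 1, 8, 9], dtype=float)
uniform = np.ones(10)
print(bracketing_index(bracketed) > bracketing_index(uniform))  # True
```

A rising index over training days would, in this toy picture, correspond to the habit "chunking" described above.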

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.

Multiple layers of control

The researchers' ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

“The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it’s a more fundamental contribution to habits, that the IL cortex is more actively making this happen,” Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

“A role for the IL cortex in the regulation of habit is not a new idea, but the details of the interaction between it and the striatum that emerge from this analysis are novel and interesting,” says Christopher Pittenger, an assistant professor of psychiatry and psychology at Yale University School of Medicine, who was not part of the research team. “Thinking in the long term, it raises the question of whether targeted manipulations of the IL cortex might be useful for breaking habits — an exciting possibility with potential clinical ramifications.”

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

The research was funded by the National Institutes of Health, the Office of Naval Research, the Stanley H. and Sheila G. Sydney Fund and funding from R. Pourian and Julia Madadi.

Contacts and sources:
Anne Trafton, MIT News Office

Exploring Dinosaur Growth

Psittacosaurus, the 'parrot dinosaur', is known from more than 1,000 specimens from the Cretaceous (about 100 million years ago) of China and other parts of east Asia. As part of his PhD thesis at the University of Bristol, Qi Zhao, now on the staff of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, carried out the intricate study on bones of babies, juveniles and adults.

Cluster of six juvenile Psittacosaurus from the Lower Cretaceous of Lujiatun, Liaoning Province, China. The cluster contains six aligned juvenile specimens. Bone histology indicates that specimens 2-6 were two years old at time of death, whereas specimen 1 was three years old. 
Image by © Institute of Vertebrate Paleontology and Paleoanthropology, Beijing

Dr Zhao said: "Some of the bones from baby Psittacosaurus were only a few millimetres across, so I had to handle them extremely carefully to be able to make useful bone sections. I also had to be sure to cause as little damage to these valuable specimens as possible."

With special permission from the Beijing Institute, Zhao sectioned two arm and two leg bones from 16 individual dinosaurs, ranging in age from less than one year to 10 years old, or fully-grown. He did the intricate sectioning work in a special palaeohistology laboratory in Bonn, Germany.

The one-year-olds had long arms and short legs, and scuttled about on all fours soon after hatching. The bone sections showed that the arm bones were growing fastest when the animals were ages one to three years. Then, from four to six years, arm growth slowed down, and the leg bones showed a massive growth spurt, meaning they ended up twice as long as the arms, necessary for an animal that stood up on its hind legs as an adult.

Skeletal reconstructions of hatchling, juvenile and adult individuals (left to right) showing inferred postural change, from quadrupedal to bipedal, with 178-cm-tall man for scale 
Image by © Institute of Vertebrate Paleontology and Paleoanthropology, Beijing

Professor Xing Xu of the Beijing Institute, one of Dr Zhao's thesis supervisors, said: "This remarkable study, the first of its kind, shows how much information is locked in the bones of dinosaurs. We are delighted the study worked so well, and see many ways to use the new methods to understand even more about the astonishing lives of the dinosaurs."

Professor Mike Benton of the University of Bristol, Dr Zhao's other PhD supervisor, said: "These kinds of studies can also throw light on the evolution of a dinosaur like Psittacosaurus. Having four-legged babies and juveniles suggests that at some time in their ancestry, both juveniles and adults were also four-legged, and Psittacosaurus and dinosaurs in general became secondarily bipedal."

The paper is published today in Nature Communications.

Contacts and sources:
Hannah Johnson
University of Bristol

Low-Power Wi-Fi Signal Tracks Movement -- Even Behind Walls

The comic-book hero Superman uses his X-ray vision to spot bad guys lurking behind walls and other objects. Now we could all have X-ray vision, thanks to researchers at MIT's Computer Science and Artificial Intelligence Laboratory.

New system uses low-power Wi-Fi signal to track moving humans — even behind walls
Credit:  Christine Daniloff

Researchers have long attempted to build a device capable of seeing people through walls. However, previous efforts to develop such a system have involved the use of expensive and bulky radar technology that uses a part of the electromagnetic spectrum only available to the military.

Now a system being developed by Dina Katabi, a professor in MIT's Department of Electrical Engineering and Computer Science, and her graduate student Fadel Adib, could give all of us the ability to spot people in different rooms using low-cost Wi-Fi technology. "We wanted to create a device that is low-power, portable and simple enough for anyone to use, to give people the ability to see through walls and closed doors," Katabi says.

The system, called "Wi-Vi," is based on a concept similar to radar and sonar imaging. But in contrast to radar and sonar, it transmits a low-power Wi-Fi signal and uses its reflections to track moving humans. It can do so even if the humans are in closed rooms or hiding behind a wall.

As a Wi-Fi signal is transmitted toward a wall, a portion of the signal penetrates it, reflecting off any humans on the other side. However, only a tiny fraction of the signal makes it through to the other room; the rest is reflected by the wall or by other objects. "So we had to come up with a technology that could cancel out all these other reflections, and keep only those from the moving human body," Katabi says.

Motion detector

To do this, the system uses two transmit antennas and a single receiver. The two antennas transmit almost identical signals, except that the signal from the second antenna is the inverse of the first. As a result, the two signals interfere with each other in such a way as to cancel each other out. Since any static objects that the signals hit — including the wall — create identical reflections, they too are cancelled out by this nulling effect.

In this way, only those reflections that change between the two signals, such as those from a moving object, arrive back at the receiver, Adib says. "So, if the person moves behind the wall, all reflections from static objects are cancelled out, and the only thing registered by the device is the moving human."
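The nulling idea above can be sketched numerically. This is a toy model under simplifying assumptions (flat, real-valued channel gains; `h_wall`, `h_person_a`, and `h_person_b` are invented numbers), not the actual Wi-Vi implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)  # transmitted baseband samples from antenna 1
                              # (antenna 2 transmits the inverse, -x)

h_wall = 0.9  # reflection off the wall: identical for both antennas

def received(h_person_a, h_person_b):
    """Receiver output when the person reflects antenna 1's signal with
    gain h_person_a and antenna 2's signal with gain h_person_b."""
    return (h_wall + h_person_a) * x + (h_wall + h_person_b) * (-x)

# Empty room: the two transmissions cancel exactly at the receiver.
print(np.allclose(received(0.0, 0.0), 0.0))    # True

# Person present: the two gains differ, so a residual echo survives,
# and it changes as the person moves.
print(np.allclose(received(0.05, 0.03), 0.0))  # False
```

The key point the sketch captures is that whatever is identical across the two transmissions vanishes at the receiver, leaving only the part of the scene that differs.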

Once the system has cancelled out all of the reflections from static objects, it can then concentrate on tracking the person as he or she moves around the room. Most previous attempts to track moving targets through walls have done so using an array of spaced antennas, which each capture the signal reflected off a person moving through the environment. But this would be too expensive and bulky for use in a handheld device.

So instead Wi-Vi uses just one receiver. As the person moves through the room, his or her distance from the receiver changes, meaning the time it takes for the reflected signal to make its way back to the receiver changes too. The system then uses this information to calculate where the person is at any one time.
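That round-trip timing reduces to the standard radar range relation, distance = (speed of light × delay) / 2. A minimal sketch with invented numbers (the article gives no actual delay figures):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(round_trip_seconds):
    """Distance to a reflector, given the round-trip delay of its echo.
    The signal travels to the person and back, hence the factor of 2."""
    return C * round_trip_seconds / 2.0

# Illustrative: an echo arriving 40 ns after transmission puts the
# person about 6 m from the device ...
print(round(range_from_delay(40e-9), 1))  # 6.0

# ... and if a later measurement shows 46.7 ns, the person has moved
# roughly 1 m farther away.
print(round(range_from_delay(46.7e-9) - range_from_delay(40e-9), 1))  # 1.0
```

Tracking how this delay changes over successive measurements is what lets a single receiver follow the person's position.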

Possible uses in disaster recovery, personal safety, gaming

Wi-Vi, being presented at the Sigcomm conference in Hong Kong in August, could be used to help search-and-rescue teams to find survivors trapped in rubble after an earthquake, say, or to allow police officers to identify the number and movement of criminals within a building to avoid walking into an ambush.

It could also be used as a personal safety device, Katabi says: "If you are walking at night and you have the feeling that someone is following you, then you could use it to check if there is someone behind the fence or behind a corner."

The device can also detect gestures or movements by a person standing behind a wall, such as a wave of the arm, Katabi says. This would allow it to be used as a gesture-based interface for controlling lighting or appliances within the home, such as turning off the lights in another room with a wave of the arm.

Unlike today's interactive gaming devices, where users must stay in front of the console and its camera at all times, users could still interact with the system while in another room, for example. This could open up the possibility of more complex and interesting games, Katabi says.
Contacts and sources:
Sarah McDonnell
Massachusetts Institute of Technology
Written by Helen Knight, MIT News Office

New Species of the Hornless Rhino Found From the Late Miocene of Nakhon Ratchasima, Thailand

In the Tha Chang area, Nakhon Ratchasima Province, Thailand, several sand pits previously have yielded fossils. The area is 220 km northeast of Bangkok, and the sand pits are located next to the Mun River. The sedimentary sequence of these sand pits consists of unconsolidated mudstone, sandstone, and conglomerate, deposited by the ancient Mun River. Almost all the fossils have been found and collected by local villagers working in these sand pits, and they have been brought to public institutions such as Nakhon Ratchasima Rajabhat University. Consequently, precise field information is unavailable for most of the fossils from the Tha Chang area, including the type mandible of the recently described new hominoid Khoratpithecus piriyai. 

Reconstruction of the Late Miocene habitat of Aceratherium piriyai at Tha Chang 

Illustrated by Chen Yu

Dr. Deng Tao from the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), Chinese Academy of Sciences, and his Thai colleagues from Nakhon Ratchasima Rajabhat University studied the rhino fossils collected from the Tha Chang sand pits and described them as a new species of the subfamily Aceratheriinae, Aceratherium piriyai sp. nov. Its holotype is an adult skull without premaxillae and the anterior portion of the nasals, and its paratype is an almost complete mandible. Her Royal Highness Princess Sirindhorn of Thailand took an interest in this study and viewed the holotype of A. piriyai when Dr. Deng and his Thai colleagues studied these fossils in Nakhon Ratchasima. The study was published online June 26, 2013 in the Journal of Vertebrate Paleontology. 

  Location and section of Tha Chang in Nakhon Ratchasima Province, northeastern Thailand

Credit: Institute of Vertebrate Paleontology and Paleoanthropology

Cuvier (1822) created the species Rhinoceros incisivus based on an isolated first upper incisor of large size from the Middle Miocene locality of Weisenau in Germany, but the tooth unambiguously belongs to a genus of the tribe Teleoceratini. Kaup (1832) described two skulls of a hornless rhinoceros from the Late Miocene locality of Eppelsheim in Germany and created a new genus, Aceratherium, for them, but used Cuvier's species. In practice, however, the prevailing usage of Aceratherium incisivum Kaup, 1832 has been conserved. Since Kaup (1832), many rhinoceroses, at least 83 species, have been described as species of Aceratherium, relegating this genus to a wastebasket taxon. Later, however, most of these species were referred to other genera within the subfamily Aceratheriinae or to other rhino groups.

A. piriyai has several characters that are more derived than in A. incisivum and A. depereti, such as very broadly separated parietal crests, a straight nuchal crest, and longer metalophs on M1-2. But A. piriyai also has some more primitive characters than A. incisivum, such as narrow zygomatic arches, a progressive anterior tip of the maxillary zygomatic process, and absence of the medifossette on P4. 

  Holotype skull of Aceratherium piriyai sp. nov. 
Credit: Institute of Vertebrate Paleontology and Paleoanthropology

The very broadly separated parietal crests, an important derived character in the morphological evolution of aceratheres, indicate that the age of A. piriyai must be later than the ages of the time-successive Aceratherium depereti-A. incisivum. A. depereti came from the Lower Miocene deposits of the Turgai region in Kazakhstan, and A. incisivum was distributed in MN 9-10 of the early Late Miocene of Western Europe. As a result, the age of A. piriyai should be the late Late Miocene. The stegolophodonts from the Tha Chang sand pits are more primitive than Stegodon in northern China, suggesting that the Tha Chang sand pits are older than 6 Ma. Based on other mammalian fossils from the Tha Chang area, the age of the fossiliferous deposits in Tha Chang Sand Pit 8 has been estimated to be 9-7 Ma, and later, 7.4-5.9 Ma. A. piriyai indicates that the age of 7.4-5.9 Ma should be reasonable for the Tha Chang sand pits.

On the other hand, while A. piriyai has a mixture of derived and primitive character states compared to A. incisivum, it is not more primitive than A. depereti. As a result, A. depereti could be the ancestor of both A. incisivum and A. piriyai. A. depereti was distributed in Central Asia, so its descendants, A. incisivum and A. piriyai, would have dispersed westward to Europe and southward to South Asia, respectively, evolving different derived characters along different evolutionary trends from A. depereti.

The occipital surface of A. depereti is vertical or only feebly deflected backward, a primitive character compared to the posteriorly inclined occiput of A. incisivum. The occipital surface of the Tha Chang rhino is slightly inclined posteriorly or nearly vertical, and the cheek teeth are subhypsodont, both indicating a woodland habitat. This result is consistent with the paleobotanical evidence from the Tha Chang sand pits, which indicates wet, tropical forest environments.

This work was supported by the National Basic Research Program of China, the Strategic Priority Research Program of the Chinese Academy of Sciences, and the National Natural Science Foundation of China.

Contacts and sources:
Institute of Vertebrate Paleontology and Paleoanthropology
Chinese Academy of Sciences

Survivor Of Stellar Collision Is New Type Of Pulsating Star

A team of astronomers from the UK, Germany and Spain have observed the remnant of a stellar collision and discovered that its brightness varies in a way not seen before on this rare type of star. By analysing the patterns in these brightness variations, astronomers will learn what really happens when stars collide. This discovery will be published in the 27 June 2013 issue of the journal Nature.

Artist's impression of the eclipsing, pulsating binary star J0247-25 
 Credit: Keele University 

Stars like our Sun expand and cool to become red giant stars when the hydrogen that fuels the nuclear fusion in their cores starts to run out. Many stars are born in binary systems so an expanding red giant star will sometimes collide with an orbiting companion star. As much as 90% of the red giant star’s mass can be stripped off in a stellar collision, but the details of this process are not well understood. 

Only a few stars that have recently emerged from a stellar collision are known, so it has been difficult to study the connection between stellar collisions and the various exotic stellar systems they produce. When an eclipsing binary system containing one such star turned up as a by-product of a search for extrasolar planets, Dr Pierre Maxted and his colleagues decided to use the high-speed camera ULTRACAM to study the eclipses of the star in detail. These new high-speed brightness measurements show that the remnant of the stripped red giant is a new type of pulsating star.

This animation shows the main features of the recently discovered binary system J0247-25. The larger star is an SX Phe-type star that pulsates in multiple modes with periods near 40 minutes. The smaller star, J0247-25B, is the remnant of a star that has been stripped of its outer layers, exposing a core that still burns hydrogen in a shell. This star pulsates in at least 3 modes with periods near 5 minutes.

The relative sizes of the stars are correctly shown relative to their separation. The timescales for the pulsations on both stars are correct relative to the orbital period of the binary, which is about 16 hours. We do not currently know the exact pulsation modes of the stars, so the patterns of the pulsations shown on the stars are arbitrary. The contrast of the image has been set to show the pulsations clearly; in reality the size of the variations is much more subtle than shown here.

Many stars, including our own Sun, vary in brightness because of pulsations caused by sound waves bouncing around inside the star. For both the Sun and the new variable star, each pulsation cycle takes about 5 minutes. These pulsations can be used to study the properties of a star below its visible surface. Computer models produced by the discovery team show that the sound waves probe all the way to the centre of the new pulsating star. Further observations of this star are now planned to work out how long it will be before the star starts to cool and fade to produce a stellar corpse (a "white dwarf") of abnormally low mass.

Dr Pierre Maxted from Keele University, who led the study, said “We have been able to find out a lot about these stars, such as how much they weigh, because they are in a binary system. This will really help us to interpret the pulsation signal and so figure out how these stars survived the collision and what will become of them over the next few billion years.”

The team involved in the discovery are: Dr Pierre Maxted and Dr Barry Smalley (Keele University, UK); Dr Aldo Serenelli (CSIC-IEEC, Spain); Andrea Miglio (University of Birmingham, UK); Prof. Thomas Marsh and Dr Elmé Breedt (University of Warwick, UK); Prof. Ulrich Heber and Veronika Schaffenroth (Dr. Karl Remeis-Observatory & ECAP, Germany); Prof. Vikram Dhillon and Dr Stuart Littlefair (University of Sheffield, UK); and Dr Chris Copperwheat (Liverpool John Moores University, UK).

ULTRACAM is a high-speed, 3-channel CCD camera for astrophysical research. ULTRACAM was funded by PPARC and is a collaboration between Professor Tom Marsh (University of Warwick), Professor Vik Dhillon (Sheffield) and the Astronomy Technology Centre (Edinburgh).

Contacts and sources: 
Chris Stone,  Keele University  
Dr Pierre Maxted, Keele University 

NASA Launches Satellite to Study How Sun's Atmosphere is Energized

NASA's Interface Region Imaging Spectrograph (IRIS) spacecraft launched Thursday at 7:27 p.m. PDT (10:27 p.m. EDT) from Vandenberg Air Force Base, Calif. The mission to study the solar atmosphere was placed in orbit by an Orbital Sciences Corporation Pegasus XL rocket. 

"We are thrilled to add IRIS to the suite of NASA missions studying the sun," said John Grunsfeld, NASA's associate administrator for science in Washington. "IRIS will help scientists understand the mysterious and energetic interface between the surface and corona of the sun."

IRIS is a NASA Explorer Mission to observe how solar material moves, gathers energy and heats up as it travels through a little-understood region in the sun's lower atmosphere. This interface region between the sun's photosphere and corona powers its dynamic million-degree atmosphere and drives the solar wind. The interface region also is where most of the sun's ultraviolet emission is generated. These emissions impact the near-Earth space environment and Earth's climate.

The Pegasus XL carrying IRIS was deployed from an Orbital L-1011 carrier aircraft over the Pacific Ocean at an altitude of 39,000 feet, off the central coast of California about 100 miles northwest of Vandenberg. The rocket placed IRIS into a sun-synchronous polar orbit that will allow it to make almost continuous solar observations during its two-year mission.

The L-1011 took off from Vandenberg at 6:30 p.m. PDT and flew to the drop point over the Pacific Ocean, where the aircraft released the Pegasus XL from beneath its belly. The first stage ignited five seconds later to carry IRIS into space. IRIS successfully separated from the third stage of the Pegasus rocket at 7:40 p.m. At 8:05 p.m., the IRIS team confirmed the spacecraft had successfully deployed its solar arrays, has power and has acquired the sun, indications that all systems are operating as expected.

"Congratulations to the entire team on the successful development and deployment of the IRIS mission," said IRIS project manager Gary Kushner of the Lockheed Martin Solar and Atmospheric Laboratory in Palo Alto, Calif. "Now that IRIS is in orbit, we can begin our 30-day engineering checkout followed by a 30-day science checkout and calibration period."

IRIS is expected to start science observations upon completion of its 60-day commissioning phase. During this phase the team will check image quality and perform calibrations and other tests to ensure a successful mission.

NASA's Explorer Program at Goddard Space Flight Center in Greenbelt, Md., provides overall management of the IRIS mission. The principal investigator institution is Lockheed Martin Space Systems Advanced Technology Center. NASA's Ames Research Center will perform ground commanding and flight operations and receive science data and spacecraft telemetry.

The Smithsonian Astrophysical Observatory designed the IRIS telescope. The Norwegian Space Centre and NASA's Near Earth Network provide the ground stations using antennas at Svalbard, Norway; Fairbanks, Alaska; McMurdo, Antarctica; and Wallops Island, Va. NASA's Launch Services Program at the agency's Kennedy Space Center in Florida is responsible for the launch service procurement, including managing the launch and countdown. Orbital Sciences Corporation provided the L-1011 aircraft and Pegasus XL launch system.

Contacts and sources:
Susan M. Hendrix
Goddard Space Flight Center
For more information about the IRIS mission, visit: http://www.nasa.gov/iris

City Lights Of Texas From Space

One of the Expedition 36 crew members aboard the International Space Station, some 240 miles above Earth, used a 50mm lens to record this oblique nighttime image of a large part of the nation’s second largest state in area, including the four largest metropolitan areas in population. The extent of the metropolitan areas is easily visible at night due to city and highway lights. The largest metro area, Dallas-Fort Worth, often referred to informally as the Metroplex, is the heavily cloud-covered area at the top center of the photo. Neighboring Oklahoma, on the north side of the Red River, less than 100 miles to the north of the Metroplex, appears to be experiencing thunderstorms. 
City lights in Texas
The Houston metropolitan area, including the coastal city of Galveston, is at lower right. To the east near the Texas border with Louisiana, the metropolitan area of Beaumont-Port Arthur appears as a smaller blotch of light, also hugging the coast of the Texas Gulf. Moving inland to the left side of the picture one can delineate the San Antonio metro area. The capital city of Austin can be seen to the northeast of San Antonio. This and hundreds of thousands of other Earth photos taken by astronauts and cosmonauts over the past 50 years are available on http://eol.jsc.nasa.gov

Thursday, June 27, 2013

Voyager 1 Explores Final Frontier of Our 'Solar Bubble'

Data from Voyager 1, now more than 11 billion miles from the sun, suggest the spacecraft is closer to becoming the first human-made object to reach interstellar space. 

This artist's concept shows NASA's Voyager 1 spacecraft exploring a region called the "depletion region" or "magnetic highway" at the outer limits of our heliosphere, the bubble the sun blows around itself. In this region, the magnetic field lines generated by our sun (yellow arcs) are piling up and intensifying, and low-energy charged particles that are accelerated in the heliosphere's turbulent outer layer (green dots) have disappeared. Scientists think the depletion region is the last region Voyager 1 has to cross before reaching interstellar space, the space between stars. Voyager 1 passed a shockwave known as the termination shock in 2004, where the solar wind suddenly slowed down and became turbulent. In 2010, it passed into an area called the "stagnation region," where the outward velocity of the solar wind slowed to zero and sporadically reversed direction. In the slow-down and stagnation regions, the prevalence of low-energy charged particles from our heliosphere jumped dramatically, as indicated by the green dots. On Aug. 25, 2012, Voyager 1 entered the depletion region, where the magnetic field acts as a kind of "magnetic highway" allowing energetic ions from inside the heliosphere to escape out, and cosmic rays from interstellar space to zoom in.

Research using Voyager 1 data and published in the journal Science Thursday provides new detail on the last region the spacecraft will cross before it leaves the heliosphere, or the bubble around our sun, and enters interstellar space. Three papers describe how Voyager 1's entry into a region called the magnetic highway resulted in simultaneous observations of the highest rate so far of charged particles from outside the heliosphere and the disappearance of charged particles from inside the heliosphere.

Scientists have seen two of the three signs of interstellar arrival they expected to see: charged particles disappearing as they zoom out along the solar magnetic field and cosmic rays from far outside zooming in. Scientists have not yet seen the third sign, an abrupt change in the direction of the magnetic field, which would indicate the presence of the interstellar magnetic field.

"This strange, last region before interstellar space is coming into focus, thanks to Voyager 1, humankind's most distant scout," said Ed Stone, Voyager project scientist at the California Institute of Technology in Pasadena. "If you looked at the cosmic ray and energetic particle data in isolation, you might think Voyager had reached interstellar space, but the team feels Voyager 1 has not yet gotten there because we are still within the domain of the sun's magnetic field."

Scientists do not know exactly how far Voyager 1 has to go to reach interstellar space. They estimate it could take several more months, or even years, to get there. The heliosphere extends at least 8 billion miles beyond all the planets in our solar system. It is dominated by the sun's magnetic field and an ionized wind expanding outward from the sun. Outside the heliosphere, interstellar space is filled with matter from other stars and the magnetic field present in the nearby region of the Milky Way. 

This artist's concept shows NASA's two Voyager spacecraft exploring a turbulent region of space known as the heliosheath, the outer shell of the bubble of charged particles around our sun. 
Image Credit: NASA/JPL-Caltech

Voyager 1 and its twin spacecraft, Voyager 2, were launched in 1977. Between them, they toured Jupiter, Saturn, Uranus and Neptune before embarking on their interstellar mission in 1990. They now aim to leave the heliosphere. Measuring the size of the heliosphere is part of the Voyagers' mission.

The Science papers focus on observations made from May to September 2012 by Voyager 1's cosmic ray, low-energy charged particle and magnetometer instruments, with some additional charged particle data obtained through April of this year.

Voyager 2 is about 9 billion miles from the sun and still inside the heliosphere. Voyager 1 was about 11 billion miles from the sun Aug. 25 when it reached the magnetic highway, also known as the depletion region, and a connection to interstellar space. This region allows charged particles to travel into and out of the heliosphere along a smooth magnetic field line, instead of bouncing around in all directions as if trapped on local roads. For the first time in this region, scientists could detect low-energy cosmic rays that originate from dying stars.

"We saw a dramatic and rapid disappearance of the solar-originating particles. They decreased in intensity by more than 1,000 times, as if there was a huge vacuum pump at the entrance ramp onto the magnetic highway," said Stamatios Krimigis, the low-energy charged particle instrument's principal investigator at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md. "We have never witnessed such a decrease before, except when Voyager 1 exited the giant magnetosphere of Jupiter, some 34 years ago."

Other charged particle behavior observed by Voyager 1 also indicates the spacecraft still is in a region of transition to the interstellar medium. While crossing into the new region, the charged particles originating from the heliosphere that decreased most quickly were those shooting straightest along solar magnetic field lines. Particles moving perpendicular to the magnetic field did not decrease as quickly. However, cosmic rays moving along the field lines in the magnetic highway region were somewhat more populous than those moving perpendicular to the field. In interstellar space, the direction of the moving charged particles is not expected to matter.

In the span of about 24 hours, the magnetic field originating from the sun also began piling up, like cars backed up on a freeway exit ramp. But scientists were able to quantify that the magnetic field barely changed direction -- by no more than 2 degrees.

"A day made such a difference in this region with the magnetic field suddenly doubling and becoming extraordinarily smooth," said Leonard Burlaga, the lead author of one of the papers, and based at NASA's Goddard Space Flight Center in Greenbelt, Md. "But since there was no significant change in the magnetic field direction, we're still observing the field lines originating at the sun."

NASA's Jet Propulsion Laboratory, in Pasadena, Calif., built and operates the Voyager spacecraft. California Institute of Technology in Pasadena manages JPL for NASA. The Voyager missions are a part of NASA's Heliophysics System Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate at NASA Headquarters in Washington. 

Contacts and sources:
Jia-Rui C. Cook
Jet Propulsion Laboratory

For more information about the Voyager spacecraft mission, visit:
http://www.nasa.gov/voyager and http://voyager.jpl.nasa.gov

The Fastest Winds Of Venus Are Getting Faster

The most detailed record of cloud motion in the atmosphere of Venus chronicled by ESA’s Venus Express has revealed that the planet’s winds have steadily been getting faster over the last six years.

Tracking clouds on Venus
Credit: ESA

Venus is well known for its curious super-rotating atmosphere, which whips around the planet once every four Earth days. This is in stark contrast to the rotation of the planet itself – the length of the day – which takes a comparatively laborious 243 Earth days.

By tracking the movements of distinct cloud features in the cloud tops some 70 km above the planet’s surface over a period of 10 venusian years (6 Earth years), scientists have been able to monitor patterns in the long-term global wind speeds.

When Venus Express arrived at the planet in 2006, average cloud-top wind speeds between latitudes 50º on either side of the equator were clocked at roughly 300 km/h. The results of two separate studies have revealed that these already remarkably rapid winds are becoming even faster, increasing to 400 km/h over the course of the mission.

“This is an enormous increase in the already high wind speeds known in the atmosphere. Such a large variation has never before been observed on Venus, and we do not yet understand why this occurred,” says Igor Khatuntsev from the Space Research Institute in Moscow and lead author of the Russian-led paper to be published in the journal Icarus.

Increasing wind speeds on Venus
Credit: ESA

Dr Khatuntsev’s team determined the wind speeds by measuring how cloud features in images moved between frames: over 45 000 features were painstakingly tracked by hand and more than 350 000 further features were tracked automatically using a computer programme.
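The arithmetic behind each tracked feature is straightforward: convert the feature's longitude drift into an arc length at cloud-top altitude and divide by the time between frames. A minimal Python sketch, using entirely illustrative coordinates and timings rather than Venus Express data:

```python
import math

# Hypothetical example: estimate a cloud-top wind speed from the longitudinal
# drift of one tracked feature between two image frames.
# All tracking numbers below are illustrative, not mission data.
R_VENUS_KM = 6051.8          # mean radius of Venus
CLOUD_TOP_KM = 70.0          # cloud-deck altitude used in the studies

def zonal_wind_kmh(lon1_deg, lon2_deg, lat_deg, dt_hours):
    """Speed of a feature drifting from lon1 to lon2 at a fixed latitude."""
    r = R_VENUS_KM + CLOUD_TOP_KM
    # Arc length along the circle of latitude at cloud-top height
    arc_km = math.radians(abs(lon2_deg - lon1_deg)) * r * math.cos(math.radians(lat_deg))
    return arc_km / dt_hours

# A feature drifting 10 degrees of longitude in 2.2 hours near 20 degrees south
print(round(zonal_wind_kmh(0.0, 10.0, 20.0, 2.2)))  # -> 456
```

Speeds of a few hundred km/h, as here, fall squarely in the range the mission reported; repeating this over hundreds of thousands of features is what yields the long-term averages.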

In a complementary study, a Japanese-led team used their own automated cloud tracking method to derive the cloud motions: their results are to be published in the Journal of Geophysical Research.

On top of this long-term increase in the average wind speed, however, both studies have also revealed regular variations linked to the local time of day and the altitude of the Sun above the horizon, and to the rotation period of Venus.

One regular oscillation occurs roughly every 4.8 days near the equator and is thought to be connected to atmospheric waves at lower altitudes.

But the research also unveiled some harder-to-explain curiosities.

“Our analysis of cloud motions at low latitudes in the southern hemisphere showed that over the six years of study the velocity of the winds changed by up to 70 km/h over a time scale of 255 Earth days – slightly longer than a year on Venus,” says Toru Kouyama from the Information Technology Research Institute in Ibaraki, Japan.

Contacts and sources:

NASA Thruster Achieves World-Record 5+ Years of Operation

A NASA advanced ion propulsion engine has successfully operated for more than 48,000 hours, or about five and a half years, the longest duration test of any space propulsion system ever demonstrated.

The thruster was developed under NASA's Evolutionary Xenon Thruster (NEXT) Project at NASA's Glenn Research Center in Cleveland. Glenn manufactured the test engine's core ionization chamber. Aerojet Rocketdyne of Sacramento, Calif., designed and built the ion acceleration assembly.

The 7-kilowatt class thruster could be used in a wide range of science missions, including deep space missions identified in NASA's Planetary Science Decadal Survey.

"The NEXT thruster operated for more than 48,000 hours," said Michael J. Patterson, principal investigator for NEXT at Glenn. "We will voluntarily terminate this test at the end of this month, with the thruster fully operational. Life and performance have exceeded the requirements for any anticipated science mission."

While the Dawn spacecraft is visiting the asteroids Vesta and Ceres, NASA Glenn has been developing the next generation of ion thrusters for future missions. NASA's Evolutionary Xenon Thruster (NEXT) Project has developed a 7-kilowatt ion thruster that can provide the capabilities needed in the future.

An ion thruster produces small levels of thrust relative to chemical thrusters, but does so at a higher specific impulse (higher exhaust velocity), which means an ion thruster's fuel efficiency is 10 to 12 times greater than a chemical thruster's. The higher the rocket's specific impulse (fuel efficiency), the farther the spacecraft can go with a given amount of fuel. Because an ion thruster produces so little thrust, it needs to operate in excess of 10,000 hours to slowly accelerate the spacecraft to the speeds necessary to reach the asteroid belt or beyond.
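The fuel-efficiency advantage can be made concrete with the ideal rocket equation, dv = Isp * g0 * ln(m0/m1). The sketch below uses illustrative round numbers for specific impulse and spacecraft mass, not NEXT's actual specifications:

```python
import math

# Sketch of why higher specific impulse stretches propellant further,
# via the Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / m1).
# Isp and mass values are illustrative round numbers, not NEXT data.
G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0_kg, m1_kg):
    """Ideal velocity change for initial mass m0 and final (dry) mass m1."""
    return isp_s * G0 * math.log(m0_kg / m1_kg)

wet, dry = 1300.0, 1000.0           # 300 kg of propellant on a 1-tonne craft
chem = delta_v(450.0, wet, dry)     # high-end chemical engine
ion = delta_v(4100.0, wet, dry)     # representative gridded ion thruster
print(f"chemical: {chem:.0f} m/s, ion: {ion:.0f} m/s")
```

With the same propellant load, the ion engine's velocity change comes out roughly nine times larger here, matching the ratio of the assumed specific impulses.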

The NEXT ion thruster has been operated for over 43,000 hours, which for rocket scientists means that the thruster has processed over 770 kilograms of xenon propellant and can provide 30 million-newton-seconds of total impulse to the spacecraft. This demonstrated performance permits future science spacecraft to travel to varied destinations, such as extended tours of multi-asteroids, comets, and outer planets and their moons.
NEXT Ion Thruster
Image Credit: NASA, Christopher J. Lynch (Wyle Information Systems, LLC)

The NEXT engine is a type of solar electric propulsion in which thruster systems use the electricity generated by the spacecraft's solar panel to accelerate the xenon propellant to speeds of up to 90,000 mph. This provides a dramatic improvement in performance compared to conventional chemical rocket engines.

During the endurance test, performed in a high vacuum test chamber at Glenn, the engine consumed about 1,918 pounds (870 kilograms) of xenon propellant, providing an amount of total impulse that would take more than 22,000 pounds (10,000 kilograms) of conventional rocket propellant for comparable applications.
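As a sanity check, the figures quoted in this article hang together: effective exhaust velocity is simply total impulse divided by propellant mass, and the 30 million newton-seconds and 770 kilograms cited above recover a speed close to the "up to 90,000 mph" mentioned earlier.

```python
# Cross-check of the quoted figures: effective exhaust velocity equals
# total impulse divided by the mass of propellant expelled.
total_impulse_Ns = 30e6      # 30 million newton-seconds
propellant_kg = 770.0        # xenon processed at the 43,000-hour mark

v_exhaust_ms = total_impulse_Ns / propellant_kg
v_exhaust_mph = v_exhaust_ms / 0.44704   # 1 mph = 0.44704 m/s
print(f"{v_exhaust_ms:.0f} m/s is about {v_exhaust_mph:.0f} mph")
```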

"Aerojet Rocketdyne fully supports NASA's vision to develop high power solar electric propulsion for future exploration," said Julie Van Kleeck, Aerojet Rocketdyne's vice president for space advanced programs. "NASA-developed next generation high power solar electric propulsion systems will enhance our nation's ability to perform future science and human exploration missions."

The NEXT project is a technology development effort led by Glenn to develop a next generation electric propulsion system, including power processing, propellant management and other components. The project, conducted under the In-Space Propulsion Technology Program at Glenn, is managed by NASA's Science Mission Directorate in Washington.

Aerojet Rocketdyne provides propulsion expertise for domestic and international markets. For more information about Aerojet Rocketdyne, visit: http://www.Rocket.com

To view the NEXT ion engine in operation, visit: http://go.nasa.gov/16v9y8g

Contacts and sources:
Katherine K. Martin
Glenn Research Center, Cleveland

Glenn Mahone
Aerojet Rocketdyne

Wednesday, June 26, 2013

Quantum Engines Must Break Down

Our present understanding of thermodynamics is fundamentally incorrect if applied to small systems and needs to be modified, according to new research from University College London and the University of Gdansk

Credit: thequantumlife.tumblr.com

Our present understanding of thermodynamics is fundamentally incorrect if applied to small systems and needs to be modified, according to new research from University College London (UCL) and the University of Gdańsk. The work establishes new laws in the rapidly emerging field of quantum thermodynamics.

The findings, published today in Nature Communications, have wide applications in small systems, from nanoscale engines and quantum technologies, to biological motors and systems found in the body.

The laws of thermodynamics govern much of the world around us – they tell us that a hot cup of tea in a cold room will cool down rather than heat up; they tell us that unless we are vigilant, our houses will become untidy rather than spontaneously tidy; they tell us how efficient the best heat engines can be.

The current laws of thermodynamics only apply to large objects, when many particles are involved. The laws of thermodynamics for smaller systems are not well understood but will have implications for the construction of molecular motors and quantum computers, and might even determine how efficient energy extracting processes such as photosynthesis can be.

In this study researchers used results from quantum information theory to adapt the laws of thermodynamics for small systems, such as microscopic motors, nanoscale devices and quantum technologies.

Small systems behave very differently from large systems composed of many particles, and when systems are very small, quantum effects come into play. The researchers found a set of laws that determine what happens to such microscopic systems when we heat them up or cool them down. An important consequence of their laws is that there is more fundamental irreversibility in small systems, which means that microscopic heat engines cannot be as efficient as their larger counterparts.

"We see that nature imposes fundamental limitations to extracting energy from microscopic systems and heat engines. A quantum heat engine is not as efficient as a macroscopic one, and will sometimes fail," said Professor Oppenheim, a Royal Society University Research Fellow at UCL's Department of Physics and Astronomy and one of the authors of the research. "The limitations are due to both finite size effects, and to quantum effects."

The researchers investigated the efficiency of microscopic heat engines and found that one of the basic quantities in thermodynamics, the free energy, does not determine what can happen in small systems, and especially in quantum mechanical systems. Instead, several new free energies govern the behaviour of these microscopic systems.

In large systems, if you put pure energy into a system, you can recover all of that energy to power an engine which can perform work (such as lifting a heavy weight). But the researchers found that this is not the case for microscopic systems. If you put work into a quantum system, you generally cannot get it all back.

Professor Michal Horodecki of the University of Gdansk, and co-author of the paper, said: "Thermodynamics at the microscopic scale is fundamentally irreversible. This is dramatically different to larger systems where all thermodynamic processes can be made reversible if we change systems slowly enough."

Contacts and sources:
Rosie Waldron
University College London

Breastfeeding Boosts Ability To Climb Social Ladder

Breastfeeding not only boosts children’s chances of climbing the social ladder, but it also reduces the chances of downwards mobility, suggests a large study published online in the Archives of Disease in Childhood.
Credit: by Fikirbaz on Flickr

The findings, produced by ESRC-funded researchers in the International Centre for Lifecourse Studies in Society and Health at UCL, are based on changes in the social class of two groups of individuals born in 1958 (17,419 people) and in 1970 (16,771 people).

The researchers asked each of the children’s mothers, when their child was five or seven years old, whether they had breastfed him/her.

They then compared people’s social class as children - based on the social class of their father when they were 10 or 11 - with their social class as adults, measured when they were 33 or 34.

Social class was categorised on a four-point scale ranging from unskilled/semi-skilled manual to professional/managerial.

The research also took account of a wide range of other potentially influential factors, derived from regular follow-ups every few years. These included children’s brain (cognitive) development and stress scores, which were assessed using validated tests at the ages of 10-11.

Significantly fewer children were breastfed in 1970 than in 1958. More than two-thirds (68%) of mothers breastfed their children in 1958, compared with just over one in three (36%) in 1970.

Social mobility also changed over time, with those born in 1970 more likely to be upwardly mobile, and less likely to be downwardly mobile, than those born in 1958.

None the less, when background factors were accounted for, children who had been breastfed were consistently more likely to have climbed the social ladder than those who had not been breastfed. This was true of those born in both 1958 and 1970.

What’s more, the size of the “breastfeeding effect” was the same in both time periods. Breastfeeding increased the odds of upwards mobility by 24% and reduced the odds of downward mobility by around 20% for both groups.

Intellect and stress accounted for around a third (36%) of the total impact of breastfeeding: breastfeeding enhances brain development, which boosts intellect, which in turn increases upwards social mobility. Breastfed children also showed fewer signs of stress.

The evidence suggests that breastfeeding confers a range of long-term health, developmental, and behavioural advantages to children, which persist into adulthood, say the authors.

They note that it is difficult to pinpoint which affords the greatest benefit to the child - the nutrients found in breast milk, or the skin to skin contact and associated bonding during breastfeeding.

“Perhaps the combination of physical contact and the most appropriate nutrients required for growth and brain development is implicated in the better neurocognitive and adult outcomes of breastfed infants,” they suggest.

Lead author Professor Amanda Sacker, of the ESRC International Centre for Lifecourse Studies in Society and Health at UCL, says: “This is the first large scale study to find that the benefits of breastfeeding extend beyond infancy and childhood into adulthood. Independent of other biological, social and economic circumstances, those who were breastfed were about 1.25 times more likely to be upwardly mobile."

Yukon Gold Mine Yields Ancient Horse Fossil

When University of Alberta researcher Duane Froese found an unusually large horse fossil in the Yukon permafrost, he knew it was important. Now, in a new study published online today in Nature, this fossil is rewriting the story of equine evolution as the ancient horse has its genome sequenced.
University of Alberta researcher Duane Froese with the skull of the extinct Late Pleistocene horse Equus lambei in the Klondike area, Yukon.
Credit: Photo by Grant Zazula

Unlike the small ice age horse fossils that are common across the unglaciated areas of the Yukon, Alaska and Siberia that date to the last 100,000 years, this fossil was at least the size of a modern domestic horse. Froese, an associate professor in the U of A Department of Earth and Atmospheric Sciences, and Canada Research Chair in Northern Environmental Change, had seen these large horses only a few times at geologically much older sites in the region—but none were so remarkably well preserved in permafrost.

Froese and his colleagues from the University of Copenhagen, who led the study, had dated the permafrost at the site from volcanic ashes in the deposits and knew that it was about 700,000 years old—representing some of the oldest known ice in the northern hemisphere. They also knew the fossil was similarly old. The team, which also included collaborators from the Yukon and the University of California, Santa Cruz, extracted collagen from the fossil and found it had preserved blood proteins and that short fragments of ancient DNA were present within the bone. The DNA showed that the horse fell outside the diversity of all modern and ancient horse DNA ever sequenced, consistent with its geologic age. After several years of work, a draft genome of the horse was assembled and is providing new insight into the evolution of horses.

The study showed that the horse fell within a line that includes all modern horses and the last remaining truly wild horses, the Przewalski's Horse from the Mongolian steppes. The 700,000-year-old horse genome—along with the genome of a 43,000-year-old horse, six present-day horses and a donkey—has allowed the research team to estimate how fast mutations accumulate through time.

In addition, the new genomes revealed episodes of severe demographic fluctuations in horse populations in phase with major climatic changes.

Contacts and sources: 
Bev Betkowski
University of Alberta

Hunting For Neutrinos

Physicist Joseph Formaggio seeks new ways to detect and measure the elusive particles.

Every second, trillions of particles called neutrinos pass through your body. These particles have a mass so tiny it has never been measured, and they interact so weakly with other matter that it is nearly impossible to detect them, making it very difficult to study their behavior.

Since arriving at MIT in 2005, Joseph Formaggio, an associate professor of physics, has sought new ways to measure the mass of neutrinos. Nailing down that value — and answering questions such as whether neutrinos are identical to antineutrinos — could help scientists refine the Standard Model of particle physics, which outlines the 16 types of subatomic particles (including the three neutrinos) that physicists have identified.

Those discoveries could also shed light on why there is more matter than antimatter in the universe, even though they were formed in equal amounts during the Big Bang.

“There are big questions that we still haven’t answered, all centered around this little particle. It’s not just measuring some numbers; it’s really about understanding the nature of the equation that explains particle physics. That’s really exciting,” Formaggio says.

Joseph Formaggio

Photo credit:  M. Scott Brauer
Paradigm shift

Formaggio, the only child of Italian immigrants, was the first in his family to attend college. Born in New York City, he spent part of his childhood in Sicily, his parents’ homeland, before returning to New York. From an early age, he was interested in science, especially physics and math.

At Yale University, he studied physics but was also interested in creative writing. The summer after his freshman year, in search of a summer job, he “called every publishing house in New York City, all of which resoundingly rejected me,” he says. However, his call to the Yale physics department yielded an immediate offer to work with a group that was doing research at the Collider Detector at Fermilab. That led to a senior thesis characterizing the excited states of the upsilon particle, which had recently been discovered.

As a student, Formaggio was drawn to both particle physics and astrophysics. At Columbia University, where he earned his PhD, he started working in an astrophysics group that was studying dark matter. Neutrinos were then thought to be a prime candidate for dark matter, and the mysterious particles intrigued Formaggio. He eventually joined a neutrino research group at Columbia, which included Janet Conrad, a professor who is now at MIT.

While a postdoc at the University of Washington, Formaggio participated in experiments at the Sudbury Neutrino Observatory (SNO), located in a Canadian nickel mine some 6,800 feet underground. Those were the first experiments to show definitively that neutrinos have mass — albeit a very tiny mass.

Until then, “there were definitely hints that neutrinos undergo this process called oscillation where they transmute from one type to another, which is a signature for mass, but all the evidence was sort of murky and not quite definitive,” Formaggio says.

The SNO experiments revealed that there are three “flavors” of neutrino that can morph from one to the other. Those experiments “basically put the nail in the coffin and said that neutrinos change flavors, so they must have mass,” Formaggio says. “It was a big paradigm shift in thinking about neutrinos, because the Standard Model of particle physics wants neutrinos to be massless, and the fact that they’re not means we don’t understand it at some very deep level.”

Another possible discovery that could throw a wrench into the Standard Model is the existence of a fourth type of neutrino. There have been hints of such a particle but no definitive observation yet. “If you put in four neutrinos, the Standard Model is done,” Formaggio says, “but we’re not there yet.”

‘A giant electromagnetic problem’

In his current work, Formaggio is focused on trying to measure the mass of neutrinos. In one approach, he is working with an international team on a detector called KATRIN, located in a small town in southwest Germany. This detector, about the size of a large hangar, is filled with tritium, an unstable radioactive isotope. When tritium decays, it produces neutrinos and electrons. By measuring the energy of the electron released during the decay, physicists hope to be able to calculate the mass of the neutrino — an approach based on Einstein’s E=mc2 equation.

“Because energy is conserved, if you know how much you started out with and how much the electron took away, you can figure out how much the neutrino weighs,” Formaggio says. “It’s a very hard measurement but I like it because the experiment is a giant electromagnetic problem.”
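In simplified form (ignoring nuclear recoil and molecular effects), the endpoint method reduces to a single subtraction: the neutrino's rest-mass energy is the gap between the decay energy Q and the highest electron energy actually observed. The numbers below are illustrative:

```python
# Minimal sketch of the endpoint idea behind KATRIN, with illustrative numbers.
# In tritium beta decay the released energy Q is shared between the electron
# and the antineutrino; the neutrino mass shows up as a deficit at the
# electron's maximum ("endpoint") energy: m_nu * c^2 = Q - E_e_max.
# This ignores nuclear recoil and molecular final-state effects.
Q_eV = 18_570.0              # tritium decay energy, about 18.6 keV

def neutrino_mass_eV(e_endpoint_eV):
    """Neutrino rest-mass energy implied by a measured electron endpoint."""
    return Q_eV - e_endpoint_eV

# If the electron spectrum were observed to stop 0.5 eV short of Q:
print(neutrino_mass_eV(18_569.5))  # -> 0.5
```

The experimental difficulty Formaggio alludes to is visible in the numbers: a sub-electronvolt deficit must be resolved at the end of an 18,570 eV spectrum.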

The KATRIN detector is under construction and scheduled to begin taking data within the next two years. Formaggio is also developing another tritium detector, known as Project 8, which uses the radio frequency of electrons to measure their energies.

Formaggio hopes that one day, tritium-based detectors could be used to find neutrinos still lingering from the Big Bang, which would require even larger quantities of tritium.

“There are many holy grails in physics, and finding those neutrinos is definitely one of them. People look at the light from the Big Bang, but that’s actually closer to 300,000 years old, or thereabouts. Neutrinos from the Big Bang have been around since the first second of the universe,” Formaggio says.

Contacts and sources:
Anne Trafton, MIT News Office

Solar Power Heads In A Thinner Direction

Most efforts at improving solar cells have focused on increasing the efficiency of their energy conversion, or on lowering the cost of manufacturing. But now MIT researchers are opening another avenue for improvement, aiming to produce the thinnest and most lightweight solar panels possible.

Such panels, which have the potential to surpass any substance other than reactor-grade uranium in terms of energy produced per pound of material, could be made from stacked sheets of one-molecule-thick materials such as graphene or molybdenum disulfide.

The MIT team found that an effective solar cell could be made from a stack of two one-molecule-thick materials: Graphene (a one-atom-thick sheet of carbon atoms, shown at bottom in blue) and molybdenum disulfide (above, with molybdenum atoms shown in red and sulfur in yellow). The two sheets together are thousands of times thinner than conventional silicon solar cells. 
Solar power heads in a new direction: thinner
Graphic credit: Jeffrey Grossman and Marco Bernardi

Jeffrey Grossman, the Carl Richard Soderberg Associate Professor of Power Engineering at MIT, says the new approach "pushes towards the ultimate power conversion possible from a material" for solar power. Grossman is the senior author of a new paper describing this approach, published in the journal Nano Letters.

Although scientists have devoted considerable attention in recent years to the potential of two-dimensional materials such as graphene, Grossman says, there has been little study of their potential for solar applications. It turns out, he says, "they're not only OK, but it's amazing how well they do."

Using two layers of such atom-thick materials, Grossman says, his team has predicted solar cells with 1 to 2 percent efficiency in converting sunlight to electricity. That's low compared to the 15 to 20 percent efficiency of standard silicon solar cells, he says, but it's achieved using material that is thousands of times thinner and lighter than tissue paper. The two-layer solar cell is only 1 nanometer thick, while typical silicon solar cells can be hundreds of thousands of times that. The stacking of several of these two-dimensional layers could boost the efficiency significantly.

"Stacking a few layers could allow for higher efficiency, one that competes with other well-established solar cell technologies," says Marco Bernardi, a postdoc in MIT's Department of Materials Science and Engineering who was the lead author of the paper. Maurizia Palummo, a senior researcher at the University of Rome visiting MIT through the MISTI Italy program, was also a co-author.

For applications where weight is a crucial factor — such as in spacecraft, aviation or for use in remote areas of the developing world where transportation costs are significant — such lightweight cells could already have great potential, Bernardi says.

Pound for pound, he says, the new solar cells produce up to 1,000 times more power than conventional photovoltaics. At about one nanometer (billionth of a meter) in thickness, "It's 20 to 50 times thinner than the thinnest solar cell that can be made today," Grossman adds. "You couldn't make a solar cell any thinner."
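A back-of-envelope calculation shows where a pound-for-pound figure like this comes from. The numbers below are assumptions for illustration (a ~1 nm active stack at 1.5 percent efficiency with a density near that of molybdenum disulfide, versus a ~180 micrometer silicon wafer at 18 percent), and the comparison is between bare active layers only; real modules add glass, wiring and frames, which is why the quoted advantage is "up to" 1,000 times rather than the larger raw ratio.

```python
# Back-of-envelope watts-per-kilogram comparison (assumed round numbers;
# bare active layers only -- real modules add substrates and packaging).
SOLAR_FLUX = 1000.0  # W/m^2, standard test-condition sunlight

def specific_power(efficiency, thickness_m, density_kg_m3):
    """Power generated per unit mass of the bare active layer."""
    areal_mass = thickness_m * density_kg_m3     # kg/m^2
    return SOLAR_FLUX * efficiency / areal_mass  # W/kg

# Two-layer graphene/MoS2 stack: ~1 nm thick, ~1.5% efficient,
# density taken as ~5,000 kg/m^3 (roughly MoS2 -- an assumption).
stack = specific_power(0.015, 1e-9, 5000.0)

# Conventional silicon cell: ~180 um wafer, ~18% efficient, 2,330 kg/m^3.
silicon = specific_power(0.18, 180e-6, 2330.0)

print(stack / silicon)  # pound-for-pound advantage, well over 1,000x
```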

This slenderness is not only advantageous in shipping, but also in ease of mounting solar panels. About half the cost of today's panels is in support structures, installation, wiring and control systems, expenses that could be reduced through the use of lighter structures.

In addition, the material itself is much less expensive than the highly purified silicon used for standard solar cells — and because the sheets are so thin, they require only minuscule amounts of the raw materials.

The MIT team's work so far to demonstrate the potential of atom-thick materials for solar generation is "just the start," Grossman says. For one thing, molybdenum disulfide and molybdenum diselenide, the materials used in this work, are just two of many 2-D materials whose potential could be studied, to say nothing of different combinations of materials sandwiched together. "There's a whole zoo of these materials that can be explored," Grossman says. "My hope is that this work sets the stage for people to think about these materials in a new way."

While no large-scale methods of producing molybdenum disulfide and molybdenum diselenide exist at this point, this is an active area of research. Manufacturability is "an essential question," Grossman says, "but I think it's a solvable problem."

An additional advantage of such materials is their long-term stability, even in open air; other solar-cell materials must be protected under heavy and expensive layers of glass. "It's essentially stable in air, under ultraviolet light, and in moisture," Grossman says. "It's very robust."

The work so far has been based on computer modeling of the materials, Grossman says, adding that his group is now trying to produce such devices. "I think this is the tip of the iceberg in terms of utilizing 2-D materials for clean energy," he says.

Contacts and sources:
Andrew Carleen
Massachusetts Institute of Technology

A Clue To A Great Science Mystery: How Earth Got Its Oxygen

Caltech researchers find evidence of an early manganese-oxidizing photosystem

For most terrestrial life on Earth, oxygen is necessary for survival. But the planet's atmosphere did not always contain this life-sustaining substance, and one of science's greatest mysteries is how and when oxygenic photosynthesis—the process responsible for producing oxygen on Earth through the splitting of water molecules—first began. Now, a team led by geobiologists at the California Institute of Technology (Caltech) has found evidence of a precursor photosystem involving manganese that predates cyanobacteria, the first group of organisms to release oxygen into the environment via photosynthesis.

Caltech graduate student Jena Johnson examines a 2.415 billion-year-old rock in South Africa where evidence of an early manganese-oxidizing photosystem was found
Credit: Caltech

The findings, outlined in the June 24 early edition of the Proceedings of the National Academy of Sciences (PNAS), strongly support the idea that manganese oxidation—which, despite the name, is a chemical reaction that does not have to involve oxygen—provided an evolutionary stepping-stone for the development of water-oxidizing photosynthesis in cyanobacteria.

"Water-oxidizing or water-splitting photosynthesis was invented by cyanobacteria approximately 2.4 billion years ago and then borrowed by other groups of organisms thereafter," explains Woodward Fischer, assistant professor of geobiology at Caltech and a coauthor of the study. "Algae borrowed this photosynthetic system from cyanobacteria, and plants are just a group of algae that took photosynthesis on land, so we think with this finding we're looking at the inception of the molecular machinery that would give rise to oxygen."

Photosynthesis is the process by which energy from the sun is used by plants and other organisms to split water and carbon dioxide molecules to make carbohydrates and oxygen. Manganese is required for water splitting to work, so when scientists began to wonder what evolutionary steps may have led up to an oxygenated atmosphere on Earth, they started to look for evidence of manganese-oxidizing photosynthesis prior to cyanobacteria. Since oxidation simply involves the transfer of electrons to increase the charge on an atom—and this can be accomplished using light or O2—it could have occurred before the rise of oxygen on this planet.

"Manganese plays an essential role in modern biological water splitting as a necessary catalyst in the process, so manganese-oxidizing photosynthesis makes sense as a potential transitional photosystem," says Jena Johnson, a graduate student in Fischer's laboratory at Caltech and lead author of the study.

To test the hypothesis that manganese-based photosynthesis occurred prior to the evolution of oxygenic cyanobacteria, the researchers examined drill cores (newly obtained by the Agouron Institute) from 2.415 billion-year-old South African marine sedimentary rocks with large deposits of manganese.

Manganese is soluble in seawater. Indeed, if there are no strong oxidants around to accept electrons from the manganese, it will remain aqueous, Fischer explains, but the second it is oxidized, or loses electrons, manganese precipitates, forming a solid that can become concentrated within seafloor sediments.

"Just the observation of these large enrichments—16 percent manganese in some samples—provided a strong implication that the manganese had been oxidized, but this required confirmation," he says.

To prove that the manganese was originally part of the South African rock and not deposited there later by hydrothermal fluids or some other phenomena, Johnson and colleagues developed and employed techniques that allowed the team to assess the abundance and oxidation state of manganese-bearing minerals at a very tiny scale of 2 microns.

"And it's warranted—these rocks are complicated at a micron scale!" Fischer says. "And yet, the rocks occupy hundreds of meters of stratigraphy across hundreds of square kilometers of ocean basin, so you need to be able to work between many scales—very detailed ones, but also across the whole deposit to understand the ancient environmental processes at work."

Using these multiscale approaches, Johnson and colleagues demonstrated that the manganese was original to the rocks and first deposited in sediments as manganese oxides, and that manganese oxidation occurred over a broad swath of the ancient marine basin during the entire timescale captured by the drill cores.

"It's really amazing to be able to use X-ray techniques to look back into the rock record and use the chemical observations on the microscale to shed light on some of the fundamental processes and mechanisms that occurred billions of years ago," says Samuel Webb, coauthor on the paper and beam line scientist at the SLAC National Accelerator Laboratory at Stanford University, where many of the study's experiments took place. "Questions regarding the evolution of the photosynthetic pathway and the subsequent rise of oxygen in the atmosphere are critical for understanding not only the history of our own planet, but also the basics of how biology has perfected the process of photosynthesis."

Once the team confirmed that the manganese had been deposited as an oxide phase when the rock was first forming, they checked to see if these manganese oxides were actually formed before water-splitting photosynthesis or if they formed after as a result of reactions with oxygen. They used two different techniques to check whether oxygen was present. It was not—proving that water-splitting photosynthesis had not yet evolved at that point in time. The manganese in the deposits had indeed been oxidized and deposited before the appearance of water-splitting cyanobacteria. This implies, the researchers say, that manganese-oxidizing photosynthesis was a stepping-stone for oxygen-producing, water-splitting photosynthesis.

"I think that there will be a number of additional experiments that people will now attempt to try and reverse engineer a manganese photosynthetic photosystem or cell," Fischer says. "Once you know that this happened, it all of a sudden gives you reason to take more seriously an experimental program aimed at asking, 'Can we make a photosystem that's able to oxidize manganese but doesn't then go on to split water? How does it behave, and what is its chemistry?' Even though we know what modern water splitting is and what it looks like, we still don't know exactly how it works. There is still a major discovery to be made to find out exactly how the catalysis works, and now knowing where this machinery comes from may open new perspectives into its function—an understanding that could help target technologies for energy production from artificial photosynthesis."

Next up in Fischer's lab, Johnson plans to work with others to try and mutate a cyanobacterium to "go backwards" and perform manganese-oxidizing photosynthesis. The team also plans to investigate a set of rocks from western Australia that are similar in age to the samples used in the current study and may also contain beds of manganese. If their current study results are truly an indication of manganese-oxidizing photosynthesis, they say, there should be evidence of the same processes in other parts of the world.

"Oxygen is the backdrop on which this story is playing out, but really, this is a tale of the evolution of this very intense metabolism that happened once—an evolutionary singularity that transformed the planet," Fischer says. "We've provided insight into how the evolution of one of these remarkable molecular machines led up to the oxidation of our planet's atmosphere, and now we're going to follow up on all angles of our findings."

Funding for the research outlined in the PNAS paper, titled "Manganese-oxidizing photosynthesis before the rise of cyanobacteria," was provided by the Agouron Institute, NASA's Exobiology Branch, the David and Lucile Packard Foundation, and the National Science Foundation Graduate Research Fellowship program. Joseph Kirschvink, Nico and Marilyn Van Wingen Professor of Geobiology at Caltech, also contributed to the study along with Katherine Thomas and Shuhei Ono from the Massachusetts Institute of Technology.

Contacts and sources:
Written by Katie Neith

First Transiting Planets Discovered In A Star Cluster

All stars begin their lives in groups. Most stars, including our Sun, are born in small, benign groups that quickly fall apart. Others form in huge, dense swarms that survive for billions of years as stellar clusters. Within such rich and dense clusters, stars jostle for room with thousands of neighbors while strong radiation and harsh stellar winds scour interstellar space, stripping planet-forming materials from nearby stars.

In the star cluster NGC 6811, astronomers have found two planets smaller than Neptune orbiting Sun-like stars. 
Credit: Michael Bachofner

It would thus seem an unlikely place to find alien worlds. Yet 3,000 light-years from Earth, in the star cluster NGC 6811, astronomers have found two planets smaller than Neptune orbiting Sun-like stars. The discovery, published in the journal Nature, shows that planets can develop even in crowded clusters jam-packed with stars.

"Old clusters represent a stellar environment much different than the birthplace of the Sun and other planet-hosting field stars," says lead author Soren Meibom of the Harvard-Smithsonian Center for Astrophysics (CfA). "And we thought maybe planets couldn't easily form and survive in the stressful environments of dense clusters, in part because for a long time we couldn't find them."

The two new alien worlds appeared in data from NASA's Kepler spacecraft. Kepler hunts for planets that transit, or cross in front of, their host stars. During a transit, the star dims by an amount that depends on the size of the planet, allowing the size to be determined. Kepler-66b and Kepler-67b are both less than three times the size of Earth, or about three-fourths the size of Neptune (mini-Neptunes).
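The size measurement follows from simple geometry: the fractional dimming during a transit is the ratio of the planet's disk area to the star's. A quick sketch with round numbers (not the published Kepler photometry) shows why detecting a mini-Neptune demands such precise photometry:

```python
# Transit depth scales as (planet radius / star radius)^2 -- an
# illustration with round numbers, not the published Kepler values.
R_EARTH_KM = 6371.0
R_SUN_KM = 696000.0

def transit_depth(planet_radius_earths, star_radius_suns=1.0):
    """Fractional dimming as the planet crosses the stellar disk."""
    ratio = (planet_radius_earths * R_EARTH_KM) / (star_radius_suns * R_SUN_KM)
    return ratio ** 2

# A mini-Neptune just under three Earth radii around a Sun-like star
# blocks less than a tenth of a percent of the starlight.
depth = transit_depth(2.8)
print(f"{depth:.6f}")  # ~0.000657, i.e. about 0.066% dimming
```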

Of the more than 850 known planets beyond our solar system, only four - all similar to or greater than Jupiter in mass - were found in clusters. Kepler-66b and -67b are the smallest planets to be found in a star cluster, and the first cluster planets seen to transit their host stars, which enables the measurement of their sizes.

Meibom and his colleagues have measured the age of NGC 6811 to be one billion years. Kepler-66b and Kepler-67b therefore join a small group of planets with precisely determined ages, distances, and sizes.

Considering the number of stars observed by Kepler in NGC 6811, the detection of two such planets implies that the frequency and properties of planets in open clusters are consistent with those of planets around field stars (stars not within a cluster or association) in the Milky Way galaxy.

"These planets are cosmic extremophiles," says Meibom. "Finding them shows that small planets can form and survive for at least a billion years, even in a chaotic and hostile environment."

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

Contacts and sources: 
Christine Pulliam
Harvard-Smithsonian Center for Astrophysics

Homo Erectus Was The Original Starting Pitcher

It's completely ordinary to see today's athletes throw a javelin hundreds of feet in the air or fire baseballs accurately and in excess of 90 mph dozens of times during a game. However, not every close human relative has that ability to throw, despite the great strength that many possess. Researchers say they traced that ability back to three changes to the waist, shoulder and upper arm that happened about 2 million years ago in the early human Homo erectus.

These two images show the muscular and skeletal differences in the position of the shoulder between chimpanzees (left on both) and humans (right on both).
Image credit:  Brian Roach/Neil Roach

Making a strong, accurate throw requires the different parts of the body to work together in what biomechanics researchers call a kinetic chain -- the rapid and sequential activation of different muscles. The motion that launches a throw begins with the legs, moves through the hips, torso, shoulder, and through the arm to the hand. Throwing projectiles fast and with high accuracy requires coordination, and also the anatomical features that first appeared together in Homo erectus.

A team of researchers, reporting in Nature, found that the three key traits can be found in humans, but not our closest relatives, chimpanzees. Each feature allows the body to store more energy before a quick rotation that releases it: tall and mobile waists that permit torso rotation; the way the elbow and the bone in the upper arm, the humerus, join together and rotate; and the placement of the shoulders. Each trait has "a major role in storing and releasing elastic energy during throwing," the researchers wrote.

The change to the shoulder is crucial, explained Neil Roach, a biological anthropologist at George Washington University in Washington, D.C. While chimpanzee shoulders sit very high and close to the neck, almost as if the animal is permanently shrugging its shoulders, human shoulders are much more relaxed.

"That change in the shoulder really brings all of those things together and that didn't happen until 2 million years ago," said Roach. "That allows us to essentially use the arm like a catapult, to store energy as we cock our arm or rotate our arm away from the target before we rapidly rotate it toward the target."

The rotation of the humerus is the fastest motion the human body produces, said Roach, at over 9,000 degrees per second. 
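To put that figure in more familiar units, a couple of conversions (illustrative arithmetic on the quoted 9,000 degrees-per-second rate):

```python
# Unit conversions for the quoted humeral rotation rate.
HUMERUS_DEG_PER_SEC = 9000.0

# Equivalent full rotations per second -- 25 complete turns every second.
revolutions_per_sec = HUMERUS_DEG_PER_SEC / 360.0

# Time to sweep a 90-degree arc at that rate: just 10 milliseconds.
ms_to_rotate_90_deg = 90.0 / HUMERUS_DEG_PER_SEC * 1000.0

print(revolutions_per_sec, ms_to_rotate_90_deg)  # 25.0, ~10.0
```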

Sending Modern Baseball Players Back in Time

The researchers studied both the fossil record and Harvard University baseball players in order to develop their insights. They used motion capture technology to track the way experienced throwers launch the ball.

The researchers also studied restricted motion using braces, Roach said. They prevented subjects from relaxing their shoulders and restricted the motion of the arm.

"What that did was give us the ability to at least mimic what the ancestral anatomy would have been like," said Roach.

The resulting observations allowed the researchers to zero in on the most important features for throwing: the elbow, shoulder and waist. The fossil record showed that when Homo erectus developed these features together, it made them the first of our relatives that could throw like modern humans.

William Hopkins, a neuroscientist at Georgia State University in Atlanta, said that most other research on the origin of throwing focused primarily on the hand and wrist. He studies chimpanzee behavior, including throwing.

The researchers, Hopkins said, "have really pushed the area forward in terms of describing exactly what changes biomechanically that allows for this enormous skill in humans."

Throwing and Hunting

When Homo erectus appeared in the fossil record about 2 million years ago, it coincided with an increasing amount of meat consumption, and probably more hunting. Roach thinks that the shifts in arm and shoulder structure probably made that easier.

"Given that this important change in terms of our throwing performance occurs at the time that we see a real intensification of hunting … we think there's a good possibility that that's the case," said Roach.

The researchers plan to investigate primitive spears and their effectiveness in injuring or killing animals, as well as how the throwing motion differs for launching different objects.

Throwing may have been connected to other behaviors as well, such as defense. Slinging rocks at a potential predator might have offered early humans protection. Scientists can look to chimpanzees for clues about early throwing.

With chimpanzees, Hopkins found that some of the animals threw and some didn't. He began investigating and noticed that chimpanzees mostly throw food or feces, often in defense, and that some of them are better at it than others.

"The more interesting forms of throwing are what we call aimed throwing," said Hopkins. "In many ways they look like a baseball pitcher."

But chimpanzees can only throw about 20 mph, despite their great strength, while baseball pitchers, cricket bowlers or even football quarterbacks can greatly exceed that figure with their respective projectiles.

Roach said that human ancestors prior to Homo erectus probably had better throwing performance than chimpanzees, but not nearly the same capacity that the anatomical changes made possible.

From the Mound to the Classroom

While the paper's information about throwing mechanics is generally well-known to professional baseball players, the link to evolution is particularly interesting, said Tim Layden, head baseball coach and evolutionary biology teacher at Florida's Montverde Academy. Incidentally, Layden was also a freshman All-American pitcher at Duke University and pitched in the Chicago Cubs’ minor league system.

"I'm probably going to use this next semester in my primate course, for sure," said Layden. "It makes perfect sense from an evolutionary standpoint that there would be a selective force for high velocity throwing and the buildup of energy within the shoulder joint."

Although there's no evidence of sports among Homo erectus, Roach said that play could have been crucial to learning to throw.

"Like any ability that requires incredible performance, play is an important mechanism for learning that behavior," said Roach.

Contacts and sources:
By Chris Gorski
Inside Science News Service

Microscopy Technique Could Help Computer Industry Develop 3-D Components

A technique developed several years ago at the National Institute of Standards and Technology (NIST) for improving optical microscopes now has been applied to monitoring the next generation of computer chip circuit components, potentially providing the semiconductor industry with a crucial tool for improving chips for the next decade or more.

These three-dimensional tri-gate (FinFET) transistors are among the 3-D microchip structures that could be measured using through-focus scanning optical microscopy (TSOM).
Courtesy Intel Corporation
The technique, called Through-Focus Scanning Optical Microscopy (TSOM), has now been shown to detect tiny differences in the three-dimensional shapes of circuit components, which until very recently have been essentially two-dimensional objects. TSOM is sensitive to features that are as small as 10 nanometers (nm) across, perhaps smaller—addressing some important industry measurement challenges for the near future for manufacturing process control and helping maintain the viability of optical microscopy in electronics manufacturing.

For decades, computer chips have resembled city maps in which components are essentially flat. But as designers strive to pack more components onto chips, they have reached the same conclusion as city planners: The only direction left to build is upwards. New generations of chips feature 3-D structures that stack components atop one another, but ensuring these components are all made to the right shapes and sizes requires a whole new dimension—literally—of measurement capability.

"Previously, all we needed to do was show we could accurately measure the width of a line a certain number of nanometers across," explains NIST's Ravikiran Attota. "Now, we will need to measure all sides of a three-dimensional structure that has more nooks and crannies than many modern buildings. And the nature of light makes that difficult."

Part of the trouble is that components now are growing so small that a light beam can't quite get at them. Optical microscopes are normally limited to features larger than about half the wavelength of the light used—about 250 nanometers for green light. So microscopists have worked around the issue by lining up a bunch of identical components at regular distances apart and observing how light scatters off the group and fitting the data with optical models to determine the dimensions. But these optical measurements, as currently used in manufacturing, have great difficulty measuring newer 3-D structures.
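The "half the wavelength" rule of thumb is the Abbe diffraction limit, d = λ / (2·NA), where NA is the objective's numerical aperture. A minimal sketch, with illustrative numbers:

```python
# Abbe diffraction limit: d = wavelength / (2 * NA). Numbers here are
# illustrative; the article's ~250 nm figure assumes green light, NA ~ 1.
def abbe_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Smallest resolvable feature for a conventional optical microscope."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(abbe_limit_nm(500.0))        # green light, NA 1  -> 250 nm
print(abbe_limit_nm(500.0, 1.4))   # oil-immersion lens -> ~179 nm
```

Either way, the limit sits far above the ~10 nm features of interest, which is why direct optical imaging of individual components fails and indirect approaches are needed.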

Other non-optical methods of imaging such as scanning probe microscopy are expensive and slow, so the NIST team decided to test the abilities of TSOM, a technique that Attota played a major role in developing. The method uses a conventional optical microscope, but rather than taking a single image, it collects 2-D images at different focal positions forming a 3-D data space. A computer then extracts brightness profiles from these multiple out-of-focus images and uses the differences between them to construct the TSOM image. The TSOM images it provides are somewhat abstract, but the differences between them are still clear enough to infer minute shape differences in the measured structures—bypassing the use of optical models, which introduce complexities that industry must face.
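The core data manipulation described above can be sketched with numpy. This is a minimal reading of the published description, not NIST's actual processing: stack one brightness profile per focal position into a 2-D (focus × position) map, then difference the maps of two nominally identical targets to reveal shape deviations.

```python
import numpy as np

def tsom_image(through_focus_stack, row):
    """Build a TSOM image: one brightness profile (a line cut through
    the target) per focal position, stacked into a 2-D map.
    `through_focus_stack` has shape (n_focus, height, width)."""
    return np.stack([frame[row, :] for frame in through_focus_stack])

def differential_tsom(stack_a, stack_b, row):
    """Minute shape differences between two nominally identical targets
    show up as structure in the difference of their TSOM images."""
    return tsom_image(stack_a, row) - tsom_image(stack_b, row)

# Toy data: 5 defocus positions of a 16x16 scene; target B is slightly
# brighter everywhere, standing in for a small dimensional difference.
rng = np.random.default_rng(0)
a = rng.random((5, 16, 16))
b = a * 1.02
diff = differential_tsom(a, b, row=8)
print(diff.shape)  # (5, 16): focal positions x lateral position
```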

"Our simulation studies show that TSOM might measure features as small as 10 nm or smaller, which would be enough for the semiconductor industry for another decade," Attota says. "And we can look at anything with TSOM, not just circuits. It could become useful to any field where 3-D shape analysis of tiny objects is needed."

*R. Attota, B. Bunday and V. Vartanian. Critical dimension metrology by through-focus scanning optical microscopy beyond the 22 nm node. Applied Physics Letters, DOI: 10.1063/1.4809512, published online June 6, 2013.

Contacts and sources: