Saturday, April 30, 2011

The Universe's First Stars Were Extraordinarily Fast Spinners

Astronomers using data from the Very Large Telescope (VLT) of the European Southern Observatory (ESO) have spotted the remains of some of the Universe's very first stars in the Milky Way. The gas cloud left behind when the stars exploded billions of years ago contains elements in proportions different to those found in new stars, shedding light on the 'missing links' between the Big Bang and today's Universe.

Even with the most powerful telescopes, it is not possible to observe these stars directly. They are so old that only the most massive, with eight times or more the mass of our sun, would have had the time to die and pollute the gas from which they were formed with elements heavier than helium. These stars lived fast and died young, after no more than 30 million years. 

'We think that the first generations of massive stars were very fast rotators - that's why we called them spinstars', explains Cristina Chiappini from the Leibniz Institute for Astrophysics Potsdam (AIP) in Germany and the Istituto Nazionale di Astrofisica (INAF) in Italy, who led the study published in the journal Nature.

Dr Chiappini and her colleagues have found the remains of these stars in the oldest known globular cluster in our Galaxy, the 12-billion-year-old NGC 6522 which probably witnessed the early phases of the seeding of heavy elements across the Universe. Professor Georges Meynet, from the University of Geneva in Switzerland, explains that it is like trying 'to reveal the character of a cook from the taste of his dishes'.

The researchers discovered eight old stars with strangely high levels of the rare elements strontium and yttrium. They also calculated that the stars would have whirled with a surface speed of 1.8 million kilometres per hour. By comparison, massive stars in the Milky Way typically spin at about 360,000 kilometres per hour.

This high rate of spin would cause overlap between inner and outer gas layers of the stars that would not otherwise mix. The resulting cascade of nuclear reactions would generate radioactive neon, which in turn would emit neutrons that would collide with iron and other heavy atoms to create strontium and yttrium. After the spinstars' death, these elements made their way into new star-forming clouds and eventually into the stars of NGC 6522.

These findings suggest that these fast spinners may have changed the face of the Universe in dramatic ways. For instance, their fast spinning could have led them to create and disperse heavy elements across the Universe much earlier than previously thought. It could also have led to a greater-than-expected number of gamma ray bursts, the most powerful explosions known in the Universe.

However, 'alternative scenarios cannot yet be discarded, but we show that if the first generations of massive stars were spinstars, this would offer a very elegant explanation to this puzzle!', says Cristina Chiappini. Therefore, Urs Frischknecht, a PhD student at the University of Basel in Switzerland, is currently working on further testing the proposed scenario.

Source: CORDIS 

For more information, please visit the Leibniz Institute for Astrophysics Potsdam (AIP) website.

Tox21 To Test 10,000 Chemicals for Toxicity

Several federal agencies, including the National Institutes of Health, last month unveiled a new high-speed robot screening system that will test 10,000 different chemicals for potential toxicity. The system marks the beginning of a new phase of an ongoing collaboration, referred to as Tox21, that is working to protect human health by improving how chemicals are tested in the United States.

 High-speed robot screening system
Credit: NIH

The robot system, which is located at the NIH Chemical Genomics Center (NCGC) in Rockville, Md., was purchased as part of the Tox21 collaboration. Tox21 was established in 2008 between the National Institute of Environmental Health Sciences National Toxicology Program (NTP), the National Human Genome Research Institute (NHGRI), and the U.S. Environmental Protection Agency (EPA), with the addition of the U.S. Food and Drug Administration (FDA) in 2010. Tox21 merges existing agency resources (research, funding, and testing tools) to develop ways to more effectively predict how chemicals will affect human health and the environment.

The 10,000 chemicals screened by the robot system include compounds found in industrial and consumer products, food additives, and drugs. The initial 10,000 chemicals were selected through a thorough analysis and prioritization of more than 200 public databases of chemicals and drugs used in the United States and abroad. Testing results will provide information useful for evaluating whether these chemicals have the potential to disrupt human body processes enough to lead to adverse health effects.

“Tox21 has used robots to screen chemicals since 2008, but this new robotic system is dedicated to screening a much larger compound library,” said NHGRI Director Eric Green, M.D., Ph.D. The director of the NCGC at NHGRI, Christopher Austin, M.D., added “The Tox21 collaboration will transform our understanding of toxicology with the ability to test in a day what would take one year for a person to do by hand.”

“The addition of this new robot system will allow the National Toxicology Program to advance its mission of testing chemicals smarter, better, and faster,” said Linda Birnbaum, Ph.D., NIEHS and NTP director. “We will be able to more quickly provide information about potentially dangerous substances to health and regulatory decision makers, and others, so they can make informed decisions to protect public health.”

Tox21 has already screened more than 2,500 chemicals for potential toxicity, using robots and other innovative chemical screening technologies.

“Understanding the molecular basis of hazard is fundamental to the protection of human health and the environment,” said Paul Anastas, Ph.D., assistant administrator of the EPA Office of Research and Development. “Tox21 allows us to obtain deeper understanding and more powerful insights, faster than ever before.”

“This partnership builds upon FDA's commitment to developing new methods to evaluate the toxicity of the substances that we regulate,” said Janet Woodcock, M.D., director of the FDA Center for Drug Evaluation and Research.


The NIEHS supports research to understand the effects of the environment on human health and is part of NIH. Subscribe to one or more of the NIEHS news lists to stay current on NIEHS news, press releases, grant opportunities, training, events, and publications.

The NTP is an interagency program established in 1978. The program was created as a cooperative effort to coordinate toxicology testing programs within the federal government, strengthen the science base in toxicology, develop and validate improved testing methods, and provide information about potentially toxic chemicals to health, regulatory, and research agencies, scientific and medical communities, and the public. The NTP is headquartered at the NIEHS.

The National Human Genome Research Institute is part of the National Institutes of Health.

The National Institutes of Health (NIH) — The Nation's Medical Research Agency — includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. It is the primary federal agency for conducting and supporting basic, clinical and translational medical research, and it investigates the causes, treatments, and cures for both common and rare diseases.

Errors Put Infants, Children At Risk For Overdose Of Painkillers; Prescriptions For Narcotics Often Contain Too Much Medication Per Dose

Parents who give young children prescription painkillers should take extra care to make sure they give just the right amount. What they may be surprised to learn, however, is that the dose dispensed by the pharmacy could be too high, according to research to be presented Saturday, April 30, at the Pediatric Academic Societies (PAS) annual meeting in Denver.

Researchers from South Carolina identified the top 19 narcotic-containing drugs prescribed to children ages 0-36 months who were enrolled in the Medicaid program from 2000-2006. For each of 50,462 outpatient prescriptions, they calculated the expected daily dose of the narcotic based on an estimate of the child's weight, age and gender. Then they compared that dosage with the actual amount of painkiller dispensed by the pharmacy.
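The comparison the researchers describe can be sketched in a few lines of code. This is a hypothetical illustration of the method, not the study's actual dosing rules: the mg/kg/day ceiling and the weights are invented for the example.

```python
# Hypothetical sketch of the dose-audit method described above: compare the
# expected daily narcotic dose (based on an estimate of the child's weight)
# with the amount actually dispensed. The 1.0 mg/kg/day ceiling used here is
# illustrative only, not the study's actual threshold.
def expected_daily_dose_mg(weight_kg, mg_per_kg_per_day):
    """Weight-based maximum daily dose in mg."""
    return weight_kg * mg_per_kg_per_day

def is_overdose(dispensed_mg_per_day, weight_kg, mg_per_kg_per_day=1.0):
    """Flag a prescription whose daily dose exceeds the weight-based maximum."""
    return dispensed_mg_per_day > expected_daily_dose_mg(weight_kg, mg_per_kg_per_day)

# A 5 kg infant under the illustrative 1.0 mg/kg/day ceiling:
print(is_overdose(7.1, weight_kg=5.0))  # True  (42% over the expected 5 mg)
print(is_overdose(4.0, weight_kg=5.0))  # False
```

The 7.1 mg example mirrors the article's finding that, for the average child with an overdose quantity dispensed, the amount was 42 percent greater than expected.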

Results showed that 4.1 percent of all children received an overdose amount.

Of more concern was the finding that the youngest children had the greatest chance of receiving an overdose, according to lead researcher William T. Basco Jr., MD, MS, FAAP, associate professor and director of the Division of General Pediatrics at the Medical University of South Carolina.

"Our goal was to determine the magnitude of overdosing for this high-risk drug class in a high-risk population, and these results are concerning," Dr. Basco said.

Narcotics such as codeine and hydrocodone can be dangerous for infants and children because of their sedative effects.

About 40 percent of children younger than 2 months of age received an overdose amount compared to 3 percent of children older than 1 year. For the average child who had an overdose quantity dispensed, the amount of narcotic drug dispensed was 42 percent greater than would have been expected.

"Almost one in 10 of the youngest infants ages 0-2 months received more than twice the dose that they should have received based on their age, gender and a conservative estimate of their weight," Dr. Basco said.

"Since we know that parents have difficulty measuring doses of liquid medication accurately," Dr. Basco concluded, "it is critical to strive for accurate narcotic prescribing by providers and dispensing by pharmacies."

Contacts and Sources:

The Pediatric Academic Societies (PAS) are four individual pediatric organizations that co-sponsor the PAS Annual Meeting – the American Pediatric Society, the Society for Pediatric Research, the Academic Pediatric Association, and the American Academy of Pediatrics. Members of these organizations are pediatricians and other health care providers practicing in the research, academic and clinical arenas. The four sponsoring organizations are leaders in the advancement of pediatric research and child advocacy within pediatrics, and all share a common mission of fostering the health and well-being of children worldwide.

Chemical Found In Crude Oil and Cleaning Agents Linked To Congenital Heart Disease: Study Shows Fetal Exposure To Solvents May Damage Heart

While it may be years before the health effects of the 2010 oil spill in the Gulf of Mexico are known, a new study shows that fetal exposure to a chemical found in crude oil is associated with an increased risk of congenital heart disease (CHD).

The study, to be presented Saturday, April 30, at the Pediatric Academic Societies (PAS) annual meeting in Denver, also showed that babies who had been exposed in utero to a chemical found in cleaning agents and spot removers were at increased risk of CHD.

Deepwater Horizon oil spill
Credit: NASA

Environmental causes of CHD have been suspected, and animal studies have suggested certain chemicals may cause CHD, a problem with the heart's structure and function due to abnormal heart development before birth.

"Congenital heart disease is a major cause of childhood death and life-long health problems," said D. Gail McCarver, MD, FAAP, lead author of the study and professor of pediatrics at the Medical College of Wisconsin and Children's Research Institute, Milwaukee. "Thus, identifying risk factors contributing to CHD is important to public health."

Dr. McCarver and her colleagues sought to determine whether human fetal exposure to solvents is associated with increased risk for CHD. The researchers tested samples of meconium, or fetal stool, from 135 newborns with CHD and 432 newborns without CHD. Meconium has been used to assess fetal exposure to illicit drugs such as cocaine. Seventeen compounds were measured in meconium samples using methods that detect very low levels of chemicals.

Additional data collected included race of the mothers and infants, family history for CHD, and maternal alcohol, tobacco, vitamin and drug use.

Infants with chromosomal abnormalities known to be linked to CHD, and babies of diabetic mothers were excluded from the study.

Results showed that 82 percent of infants had evidence of intrauterine exposure to one or more of the solvents measured.

Among white infants, but not black infants, fetal exposure to ethyl benzene was associated with a four-fold increased risk of CHD. In addition, exposure to trichloroethylene was associated with a two-fold increased risk for CHD among white infants and an eight-fold increased risk among black infants.

"This is the first report that exposure to ethyl benzene, a compound present in crude oil, was associated with CHD," Dr. McCarver said. Humans also can be exposed to ethyl benzene through inhalation of motor vehicle emissions, gasoline pump vapors and cigarette smoke.

"The association with ethyl benzene exposure is concerning, particularly considering recent oil spills," she said. "However, additional confirmatory studies are needed."

The study also adds to existing concerns about trichloroethylene (TCE). "This is of particular importance because TCE is a commonly used degreasing agent, which also is present in many cleaners and spot removers. TCE also has been the most common chemical identified around hazardous waste sites," Dr. McCarver said.

"Limiting known maternal exposure to this compound during early pregnancy appears prudent, particularly among those at increased CHD risk," Dr. McCarver concluded.


Friday, April 29, 2011

ARL Funding Develops Concepts In Collective Intelligence

The newly-developed concept of collective intelligence could someday predict how an existing group or organization will perform on new tasks, help select group members from a population to form maximally-functional teams, and eventually do the same for groups made of humans and machines working together.

ARL funding develops concepts in collective intelligence
Image: USARL

Measuring general intelligence in individuals is essentially factor analysis: explaining the variability in performance across a number of diverse cognitive tasks in terms of a smaller number of unobserved variables, the most significant of which is labeled general intelligence ("g").

ARL-funded principal investigators (PIs) at the Massachusetts Institute of Technology and their colleagues at Carnegie Mellon University are developing analogous measures for groups. Through performance measurements of many diverse groups of two to five people on a diverse set of cognitive tasks, the researchers have identified a single significant factor that explains most of the variation in group performance; they have labeled this factor collective intelligence ("c").
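The kind of factor extraction described above can be sketched with synthetic data. This is a simplified illustration, not the researchers' actual analysis: it generates fake group scores driven by one latent factor and uses the first principal component (via SVD) as a stand-in for a full factor-analysis fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores: 50 groups x 6 tasks, driven partly by one latent "c"
# factor plus task-specific noise (illustrative data, not the study's).
c = rng.normal(size=(50, 1))                 # latent collective intelligence
loadings = rng.uniform(0.5, 1.0, size=(1, 6))
scores = c @ loadings + 0.5 * rng.normal(size=(50, 6))

# Center the data and take the first principal component via SVD --
# a simple proxy for the single dominant factor the researchers report.
X = scores - scores.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)  # share of variance on factor 1

print(f"first factor explains {explained:.0%} of score variance")
```

When one latent factor really does drive performance across tasks, as the study found for "c", the first factor accounts for most of the variance in the score matrix.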

These researchers have found that the group characteristics that most significantly increase collective intelligence are social perceptiveness, evenness of interactions among members, and a higher proportion of female members.

Experiments, and thus results, are so far limited to small face-to-face ad hoc human groups; researchers plan to soon investigate extensions to larger, functional human groups. While general intelligence in individuals is difficult to change, this project has shown that collective intelligence can be modified through changes in personnel, motivation, procedural rules, and collaboration tools (e.g., email, wikis, etc).

Potential uses of this new understanding include the ability to predict performance of an existing group or organization on new and different tasks, to predict performance of a not-yet assembled group on a variety of tasks, to select group members from a population in order to form maximally-functional teams, and eventually to do all of the above for human-machine groups as well.

The United States Army Fires Center of Excellence at Fort Sill, Okla., is discussing with the PI the possibility of doing trials involving their Air Defense Artillery teams, and discussions with the Office of the Director of National Intelligence on possible trials are ongoing.

This research has received considerable media attention over the past three months, appearing more than 40 times in outlets such as Science, the Wall Street Journal, the New York Times, NPR, the web version of Scientific American, and other print, radio and web sources.

Article by Dr. Joseph Myers, U.S. Army Research Office Information Sciences Directorate
U.S. Army Research Laboratory

Black And White Cardiac Arrest Victims Both Less Apt To Survive At Hospitals Treating Large Black Populations

Black cardiac arrest victims are more likely to die when they're treated in hospitals that care for a large black population than when they're brought to hospitals with a greater proportion of white patients, according to new research from the University of Pennsylvania School of Medicine. The study is published in the April issue of the American Heart Journal.

The Penn team found that, among 68,115 cardiac arrest admissions analyzed through Medicare records, only 31 percent of black patients treated in hospitals that care for a higher proportion of black patients survived to be discharged from the hospital, compared to 46 percent of those cared for in predominantly white hospitals. Results showed that even white patients were less likely to survive when treated at these hospitals, which provide care for higher proportions of black patients.

"Our results also found that black patients were much more likely to be admitted to hospitals with low survival rates," says lead author Raina M. Merchant, MD, MS, an assistant professor of Emergency Medicine. "Since cardiac arrest patients need help immediately and are brought to the nearest hospital, these findings appear to show geographic disparities in which minority patients have limited access to hospitals that have better arrest outcomes. For example, these hospitals may not utilize best practices in post-arrest care, such as therapeutic hypothermia and coronary artery stenting procedures. These findings have implications for patients of all races, since these same hospitals had poor survival rates across the board."

Several factors may influence the disparities: differences in staff quality and training; patient and family preferences regarding end-of-life care and withdrawal of life support during the post-arrest period, when prognosis is often uncertain; and variation in ancillary supports such as laboratory, cardiac testing or pharmacy services. Merchant and her colleagues suggest that further research into how the use of advanced postresuscitation therapies influences survival is necessary to improve outcomes for all patients, perhaps leading to the development of a regionalized care model for cardiac arrest, similar to the system that funnels trauma patients to hospitals that meet strict national standards.

Contacts and Sources:

Other authors of the study include Lance B. Becker, MD, Feifei Yang, MS, and Peter W. Groeneveld, MD, MS.

Penn Medicine is one of the world's leading academic medical centers, dedicated to the related missions of medical education, biomedical research, and excellence in patient care. Penn Medicine consists of the University of Pennsylvania School of Medicine (founded in 1765 as the nation's first medical school) and the University of Pennsylvania Health System, which together form a $4 billion enterprise.

Penn's School of Medicine is currently ranked #2 in U.S. News & World Report's survey of research-oriented medical schools and among the top 10 schools for primary care. The School is consistently among the nation's top recipients of funding from the National Institutes of Health, with $507.6 million awarded in the 2010 fiscal year.

The University of Pennsylvania Health System's patient care facilities include: The Hospital of the University of Pennsylvania – recognized as one of the nation's top 10 hospitals by U.S. News & World Report; Penn Presbyterian Medical Center; and Pennsylvania Hospital – the nation's first hospital, founded in 1751. Penn Medicine also includes additional patient care facilities and services throughout the Philadelphia region.

Penn Medicine is committed to improving lives and health through a variety of community-based programs and activities. In fiscal year 2010, Penn Medicine provided $788 million to benefit our community.

Everything Interesting In Space Weather Happens Due To A Phenomenon Called Magnetic Reconnection

Electromagnetic energy from the sun interacts with Earth's magnetosphere during magnetic reconnection events that kick off additional bursts of energy.

Credit: NASA/Goddard/Conceptual Image Lab

Whether it's a giant solar flare or a beautiful green-blue aurora, just about everything interesting in space weather happens due to a phenomenon called magnetic reconnection. Reconnection occurs when magnetic field lines cross and create a burst of energy. These bursts can be so big they're measured in megatons of TNT.

Several spacecraft have already sent back tantalizing data when they happened to witness a magnetic reconnection event in Earth's magnetosphere. However, there are no spacecraft currently dedicated to the study of this phenomenon.

All this will change in 2014 when NASA launches the Magnetospheric Multiscale (MMS) mission, a fleet of four identical spacecraft that will focus exclusively on this dynamic magnetic system that stretches from the sun to Earth and beyond.

At NASA's Goddard Space Flight Center in Greenbelt, Md., a team of scientists and engineers is working on a crucial element of the MMS instrument suite: the Fast Plasma Instrument (FPI). Some 100 times faster than any previous similar instrument, the FPI will collect a full sky map of data at the rate of 30 times per second – a necessary speed given that MMS will spend less than a second passing through the reconnection site.

"Imagine flying by a tiny object on an airplane very rapidly," says Craig Pollock, the Co-Investigator for FPI at Goddard. "You want to capture a good picture of it, but you don't get to just walk around it and take your time snapping photos from different angles. You have to grab a quick shot as you're passing. That's the challenge."

Previous spacecraft, such as Cluster and THEMIS, have helped narrow down the regions near Earth where magnetic reconnection happens. The solar wind streams towards Earth until it hits our planet's magnetic field, says Tom Moore, the project scientist for MMS at Goddard. "The solar wind comes flying in and the terrestrial stuff is like molasses – slow, cold and reluctant to do whatever the solar wind wants. So there is a contest of wills whenever the two fields connect up via reconnection."

That's what happens on the sun side of Earth. On the other side, the night side, magnetic reconnection in Earth's magnetic tail causes a geometry change in the shape of the field lines. Portions of the magnetic field get disconnected from the rest of the tail and shoot away from Earth.

The orbit for MMS will be tailored to hit these spots of magnetic reconnection on a regular basis. The first year and a half will be spent on the day side and the last six months on the night side. In the case of both day- and night-side reconnection, the changing magnetic fields also send the local ionized gas, or plasma, off with a great push. Measuring that plasma – a concrete, physical entity unlike the more abstract magnetic fields themselves – is one way to learn more about what's happening in that process.

"Right now the state of reconnection knowledge is simply that we know it's going on," says Moore. "One of the fundamental questions is to figure out what controls the process – the little stuff deep inside or the larger, external, boundary conditions. Some conditions produce a small burst of energy and sometimes, during what we think are the same external conditions, there's a huge burst of energy. That might be explained if the reconnection event depended crucially on what's going on deep inside, in an area we've never been able to see before."

A prototype of the dual electron spectrometer being built in a clean room at Goddard Space Flight Center in Greenbelt, Md.
Credit: NASA/Karen C. Fox

The FPI instrument will measure the plasma in these small regions using electron and ion spectrometers. In order to capture as much as possible in the second-long journey through a magnetic reconnection site, each detector will be made of two spectrometers whose field of view is separated by 45 degrees, each of which can scan through a 45-degree arc for a larger panorama. There will be four dual electron spectrometers and four dual ion spectrometers onboard each MMS spacecraft. In combination, the ion spectrometers will produce a three-dimensional picture of the ion plasma every 150 milliseconds, while the electron spectrometers will do the same for the electrons every 30 milliseconds.
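The cadences above can be checked with back-of-envelope arithmetic: how many full 3-D plasma snapshots does FPI capture during a sub-second pass through a reconnection site? The one-second pass duration here is an assumed upper bound taken from the article's "under a second".

```python
# Back-of-envelope check using the cadences quoted in the article.
pass_duration_s = 1.0       # "under a second" -- assumed upper bound
electron_cadence_s = 0.030  # electron picture every 30 milliseconds
ion_cadence_s = 0.150       # ion picture every 150 milliseconds

electron_snapshots = int(pass_duration_s / electron_cadence_s)
ion_snapshots = int(pass_duration_s / ion_cadence_s)

print(electron_snapshots, ion_snapshots)  # 33 6
```

Even a one-second pass yields only a handful of ion snapshots, which is why the 100-fold speedup over earlier instruments matters.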

Not only is this approach an improvement of 100 times over previous plasma data collection, it's an advancement in terms of instrument building. For those doing the math: there are four plus four instruments plus one data processing unit on each of four spacecraft, which equates to 32 sensors and four data processing units, 36 boxes total.
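The tally in the paragraph above works out as follows (simply restating the article's own arithmetic):

```python
# 4 dual electron + 4 dual ion spectrometers plus 1 data processing
# unit per spacecraft, across the 4 MMS spacecraft.
spacecraft = 4
sensors_per_craft = 4 + 4   # dual electron + dual ion spectrometers
dpus_per_craft = 1

sensors = spacecraft * sensors_per_craft   # 32 spectrometers
dpus = spacecraft * dpus_per_craft         # 4 data processing units
print(sensors, dpus, sensors + dpus)       # 32 4 36
```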

"That's a huge number," says Pollock. "We're used to delivering one box, or occasionally two or three."

These instruments are, in turn, just part of the 100 instruments being built for MMS, each tailored to measure various electric and magnetic signals in space. The production is made even more challenging, says Karen Halterman, the program manager for MMS who oversees all pre-launch activities of the mission, because the entire spacecraft must be created to exacting standards. "You can't have a satellite that produces its own large electromagnetic signature when you're trying to precisely measure electromagnetism outside the satellite," she says. "We can't even use standard metal tools to build the hundreds of pieces in each satellite since they will add magnetic signatures into the spacecraft."

An artist's rendition of MMS as it sweeps through a magnetic reconnection event caused when the solar wind meets Earth's magnetic fields.
Credit: SWRI

The Southwest Research Institute in San Antonio, which designed the original instrument suite, is overseeing all the instruments for MMS, which are being built all over the country and around the globe, including in Japan. A Japanese company called Meisei has been contracted to build the ion spectrometers for the FPI.

"In the middle of a huge catastrophe," says Pollock, "the (Japanese) response has been remarkable and admirable. They have their problems, not least of which is rolling blackouts when some of the upcoming tests will require achieving vacuums that need several days of continuous electricity. We came up with a lot of contingency plans, but it turns out they don’t need much help."

The Japanese instruments are still on track. The engineering test unit of the ion spectrometers is scheduled to arrive at Goddard -- after testing is completed at Japan's Institute of Space and Astronautical Science and NASA’s Marshall Space Flight Center in Huntsville, Ala. -- early this summer. Indeed, the first FPI instrument for the first MMS spacecraft is due to arrive at the Southwest Research Institute in March of 2012.

Naturally, it's a busy time. The FPI team is finalizing the hardware and making sure all the parts pass a variety of standard tests, from ensuring the instrument won't vibrate apart during launch to making sure they still function properly when placed next to the electromagnetic signals streaming out from other instruments.

"What MMS is looking for is not something visible," says Halterman. "If you have a mission to study the sun or Jupiter, you can look at a picture and see the sun or Jupiter. Magnetic reconnection is a fundamental physics process. It happens on stars, on the sun, all over the universe, but it's much harder to get a deep understanding of it. FPI and the rest of the MMS instrument suite, with their great improvement in speed and resolution, are going to help change that."


US Appeals Court Opens Federal Funding For Stem Cell Research

The U.S. Federal Court of Appeals has overturned an August 2010 ban on federal funding of embryonic stem cell research, paving the way for broader exploration of how stem cells function and how they can be harnessed to treat a wide range of currently incurable diseases.

Arnold Kriegstein, MD, PhD
Photo: Susan Merrell/UCSF

The ruling has been welcomed by the Obama Administration, which attempted to lift the ban in 2009, and by the nation’s top researchers in the field, including Arnold Kriegstein, MD, PhD, director of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF.

“This is a victory not only for the scientists, but for the patients who are waiting for treatments and cures for terrible diseases,” Kriegstein said. “This ruling allows critical research to move forward, enabling scientists to compare human embryonic stem cells to other forms of stem cells, such as the cell lines which are derived from skin cells, and to pursue potentially life-saving therapies based on that research.”

Kriegstein said the ruling will make a significant difference for stem cell research in general, including at UCSF, where the majority of stem cell investigators receive some funding from the National Institutes of Health for their research, as well as from private sources and from the state. The ruling enables those scientists to integrate research from various funding sources, thereby more quickly addressing the causes and therapies for diseases.

Kriegstein was one of two University of California scientists to file a Declaration in September 2010 in support of the UC Board of Regents’ motion to intervene in the August lawsuit, Sherley v. Sebelius.

Sherley v. Sebelius had argued that when the Obama Administration lifted a ban on federal funding for the research in March 2009, it had violated the 1996 Dickey-Wicker Amendment, which barred using taxpayer funds in research that destroyed embryos.

In response, a U.S. District Court judge temporarily ordered a ban on the use of federal money for the research until the court battle could be resolved.

The Appeals Court decision put the Dickey-Wicker question to rest, ruling that the amendment was “ambiguous” and that the NIH “seems reasonably to have concluded that although Dickey-Wicker bars funding for the destructive act of deriving an ESC (embryonic stem cell) from an embryo, it does not prohibit funding a research project in which an ESC will be used,” according to the 2-1 decision.

“I am very happy with this decision, although I am surprised that it wasn’t a unanimous vote,” Kriegstein said. “In my opinion, the evidence in favor of pursuing this research is overwhelming compared to the arguments submitted to stop the research.”

UCSF is one of two universities, along with the University of Wisconsin, that pioneered human embryonic stem cell research in the United States, beginning in the late 1990s.

UCSF has developed one of the largest programs in the nation, primarily funded by the California Institute for Regenerative Medicine, a voter-supported initiative that provided $3 billion to fund statewide research in lieu of federal funding for it. Funding from the NIH, private philanthropy and other state sources also have been critical for the program.

UCSF also launched the nation’s first stem cell PhD program in 2010, for which the first class already has been chosen and will begin in fall 2011.

Contacts and Sources:

UCSF is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. For more information, visit

Diabetes Breakthrough: Researchers Discover Mechanism That Could Convert Certain Cells Into Insulin-Making Cells

Findings of UCLA study hold promise for fight against diabetes

Simply put, people develop diabetes because they don't have enough pancreatic beta cells to produce the insulin necessary to regulate their blood sugar levels.

But what if other cells in the body could be coaxed into becoming pancreatic beta cells? Could we potentially cure diabetes?

Researchers from UCLA's Larry L. Hillblom Islet Research Center have taken an important step in that direction. They report in the April issue of the journal Developmental Cell that they may have discovered the underlying mechanism that could convert other cell types into pancreatic beta cells.

While the current standard of treatment for diabetes — insulin therapy — helps patients maintain sugar levels, it isn't perfect, and many patients remain at high risk of developing a variety of medical complications. Replenishing lost beta cells could serve as a more permanent solution, both for those who have lost such cells due to an immune assault (Type 1 diabetes) and those who acquire diabetes later in life due to insulin resistance (Type 2).

"Our work shows that beta cells and related endocrine cells can easily be converted into each other," said study co-author Dr. Anil Bhushan, an associate professor of medicine in the endocrinology division at the David Geffen School of Medicine at UCLA and in the UCLA Department of Molecular, Cell and Developmental Biology.

It had long been assumed that the identity of cells was "locked" into place and that they could not be switched into other cell types. But recent studies have shown that some types of cells can be coaxed into changing into others — findings that have intensified interest in understanding the mechanisms that maintain beta cell identity.

The UCLA researchers show that chemical tags called "methyl groups" that bind to DNA — where they act like a volume knob, turning up or down the activity of certain genes — are crucial to understanding how cells can be converted into insulin-secreting beta cells. They show that DNA methylation keeps ARX, a gene that triggers the formation of glucagon-secreting alpha cells in the embryonic pancreas, silent in beta cells.

Deletion of Dnmt1, the gene encoding the enzyme responsible for maintaining DNA methylation, from insulin-producing beta cells converts them into alpha cells.

These findings suggest that a defect in beta cells' DNA methylation process interferes with their ability to maintain their "identity." So if this "epigenetic mechanism," as the researchers call it, can produce alpha cells, there may be an analogous mechanism that can produce beta cells that would maintain blood sugar equilibrium.

"We show that the basis for this conversion depends not on genetic sequences but on modifications to the DNA that dictate how the DNA is wrapped within the cell," Bhushan said. "We think this is crucial to understanding how to convert a variety of cell types, including stem cells, into functional beta cells."

According to the American Diabetes Association, 25.8 million children and adults in the U.S. — 8.3 percent of the population — have diabetes.

Contacts and Sources:

The National Institute of Diabetes and Digestive and Kidney Diseases, the Juvenile Diabetes Research Foundation, and the Helmsley Trust funded this study.

Additional co-authors of the study are Sangeeta Dhawan, Senta Georgia, Shuen-ing Tschen and Guoping Fan, all of UCLA.

The Larry L. Hillblom Islet Research Center at UCLA, established in 2004, is the first center dedicated to the study of the Islets of Langerhans, which include the insulin-producing cells in the pancreas. An understanding of the causes of islet cell destruction is key to finding a cure for diabetes. The center's faculty members, recruited from around the world, provide leadership in the worldwide fight against the disease. The center is made possible through a grant from the Larry Hillblom Foundation, established to support medical research in the state of California.

Discovery of Structure of Radio Source from a Pulsar Orbiting a Massive Star

In work led by researchers from the University of Barcelona, the morphology of an extended radio source in a binary system formed of a pulsar and a massive star has been determined for the first time. In a few such systems, the strong interaction of the stellar winds produces high-energy gamma radiation, up to 10 million times more energetic than visible light. The results, published in Astrophysical Journal Letters, directly show the effect of the colliding winds and support existing theoretical models of the radiation emitted by these high-energy binary systems, known as gamma-ray binaries.

Images taken 1 and 21 days after the periastron passage of PSR B1259-63 around the massive star LS 2883 with the Long Baseline Array (LBA) radio interferometer. The changing colour represents the intensity of the radiation detected. The small green ellipse is the projection of the orbit of the binary system and the black line shows a model of the trajectory of the nebular flow of particles that emit the synchrotron radiation.
Credit: Universidad de Barcelona

The research was carried out by Javier Moldón, Marc Ribó and Josep Maria Paredes, of the Department of Astronomy and Meteorology at the University of Barcelona and the UB Institute of Cosmos Sciences, together with Simon Johnston, of the Australia Telescope National Facility (Australia), and Adam Deller, of the National Radio Astronomy Observatory (USA). They studied the only known gamma-ray binary formed of a pulsar (PSR B1259-63, a neutron star with a radius of some 10 km that spins extremely fast) and a massive star (LS 2883) of about 30 times the mass of the Sun.

As the researchers from the UB explain, it is the first time that anyone has been able to observe the morphology, at different positions in the orbit, of the radio source of a gamma-ray binary in which the pulsar has known properties. The results show how the emission forms a type of cometary tail which moves around as the pulsar traces out its orbit. They have thus been able to see that the radio source is up to ten times larger than the orbit of the binary system.

The radio emission is produced during the periastron passage of the system—which is the point at which the two components of the binary system are at their closest to each other—once every 3.4 years. It has been shown that the radio emission is due to the synchrotron radiation produced by electrons that escape from the binary system at relativistic speeds of up to 100,000 km per second. This has allowed limits to be placed on the magnetization, which is essential for understanding the relativistic winds emitted by pulsars.

The PSR B1259-63/LS 2883 system is 7,500 light years away, in the direction of the constellation of Centaurus. The pulsar’s orbit is 14 times larger than the Earth’s orbit around the Sun, but because of its extreme eccentricity, the pulsar passes within just 0.9 AU (astronomical units: the Earth-Sun distance) of the star during periastron. At such short distances the powerful wind from the massive star, travelling at over 1,000 km per second, collides with the wind from the pulsar, which is less dense but travels at 100,000 km per second. This collision of winds accelerates particles, which emit photons across the whole electromagnetic spectrum through synchrotron emission and inverse Compton scattering. The new radio observations directly show the radiation in the tail of particles accelerated in the shock, which spreads out over some 120 AU. This has allowed the researchers to infer the conditions under which the acceleration of the particles is produced in the region of the shock.
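The quoted orbital figures are mutually consistent. If "14 times larger than the Earth's orbit" is read as the orbit's overall size (an assumed interpretation, giving a semi-major axis of roughly 7 AU), then a periastron distance of 0.9 AU implies an eccentricity near 0.87, which indeed qualifies as extreme:

```python
# Hedged consistency check of the quoted orbit figures.
# Assumption: "14 times larger than Earth's orbit" refers to the orbit's
# overall diameter, i.e. a semi-major axis a of about 7 AU.
a = 14 / 2          # semi-major axis in AU (assumed interpretation)
r_peri = 0.9        # periastron distance in AU (from the article)

# For a Keplerian ellipse, r_peri = a * (1 - e)  =>  e = 1 - r_peri / a
e = 1 - r_peri / a
print(f"implied eccentricity: {e:.2f}")  # ~0.87, an extremely eccentric orbit
```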

The observations of the binary system PSR B1259-63/LS 2883 were performed using the Australian Long Baseline Array (LBA) made up of five antennas separated by distances of up to 1,500 km. Using interferometry techniques, this network allowed the researchers to explore spatial scales of the order of 0.02 seconds of arc, an unprecedented resolution for observations of this binary system. To give an idea of the resolution involved, it corresponds to distinguishing features just 40 metres long on the surface of the Moon observed from the Earth.
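The Moon comparison follows from a one-line small-angle conversion, using the mean Earth-Moon distance (a standard value of about 384,400 km, not stated in the article):

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)   # 1 arcsecond in radians

theta = 0.02 * ARCSEC_TO_RAD             # LBA resolution quoted in the article
d_moon = 384_400e3                       # mean Earth-Moon distance in metres (standard value)

# Small-angle approximation: linear size = angle (radians) * distance
size = theta * d_moon
print(f"resolved feature size on the Moon: {size:.0f} m")  # ~37 m, i.e. "just 40 metres"
```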

Source: Universidad de Barcelona

Citation: J. Moldón, S. Johnston, M. Ribó, J.M. Paredes, A.T. Deller. "Discovery of extended and variable radio structure from the gamma-ray binary system PSR B1259-63/LS 2883". Astrophysical Journal Letters (2011). doi: 10.1088/2041-8205/732/1/L10.

50 Year Old Textbook Concept of Columnar Nerve Cells in Cortex Revised

For more than 50 years, a dominating assumption in brain research was that nerve cells in the cortex of the brain are organised in the form of microscopically small columns. Subsequently, it became a textbook standard that connections are created predominantly between nerve cells within these columns. In a review article for the journal “Frontiers in Neuroscience”, Clemens Boucsein and colleagues from the Bernstein Centers in Freiburg and Berlin show that this view has to be revised: input from cells that lie outside this column plays a much more important role than hitherto assumed.

Wide cylinder, not slender column: a nerve cell of the cortex receives much of its input from the wider surroundings (yellow), not just from the narrow column (blue) of its direct vicinity
Credit: Boucsein/University of Freiburg

It was one of the great discoveries of the 20th century in the neurosciences that nerve cells lying on top of each other in the cortex react to the same stimulus – for example edges of different orientation that are presented to the eye. Investigations of the connectivity between nerve cells further supported the assumption that these column-like units might constitute the basic building blocks of the cortex. In the following decades, much research was conducted on cortical columns, not least because the investigation of long-range connections within the brain is a very complicated affair.

But now, these assumptions about a columnar cortex structure come under scrutiny. New experimental techniques allow the tracing of connections over long distances. Boucsein and his colleagues at the University of Freiburg refined a technique to use laser flashes to specifically activate single nerve cells and to analyse their connections. These experiments led to surprising results: less than half of the input that a cortical nerve cell receives originates from peers within the same column. Many more connections reached the cells from more distant, surrounding regions.

The experiments also revealed that these horizontal connections operate very accurately in terms of timing. To the scientists, this is an indication that the brain may use the exact point in time of an electrical impulse to code information, a hypothesis that is gaining more and more experimental support. These new insights into structure and function of the brain suggest that the idea of a column-based structure of the cortex has to be replaced with that of a densely woven tapestry, in which nerve cells are connected over long distances.

Source: Albert-Ludwigs-Universität Freiburg

Citation: Boucsein C, Nawrot MP, Schnepel P and Aertsen A (2011) Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround. Front. Neurosci. 5:32. doi: 10.3389/fnins.2011.00032

Electric Cooperation Between Cells: Cell Signals Via Membrane Nanotubes

Most of the body’s cells communicate with each other by sending electrical signals through nano-thin membrane tubes. A sensational Norwegian research discovery may help to explain how cells cooperate to develop tissue in the embryo and how wounds heal.

For nearly ten years, researchers have known that cells can “grow” ultra-thin tubes named tunnelling nanotubes (TNTs) between one another. These nanotubes – the length of two to three cells and just 1/500th the thickness of a human hair – are connections that develop between nearly all cell types to form a communication channel different from any previously known mechanisms.

In 2010, Dr. Xiang Wang and Professor Hans-Hermann Gerdes – colleagues at the University of Bergen’s Department of Biomedicine – discovered that electrical signals were being passed through nanotubes from one cell to another at high speed (roughly 1-2 m/sec). Their research receives funding under the Research Council’s large-scale research programme Nanotechnology and New Materials (NANOMAT).
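For a rough sense of the timescales involved (every figure here is an illustrative assumption, not a measurement from the study): taking a typical animal cell diameter of about 20 micrometres, a nanotube spanning two to three cell lengths is on the order of 40-60 micrometres long, so a signal travelling at the reported 1-2 m/sec crosses it in tens of microseconds:

```python
# Back-of-envelope transit time across a tunnelling nanotube.
# Both the cell size and the chosen speed are illustrative assumptions.
cell_diameter = 20e-6              # metres; assumed typical animal cell size
tube_length = 3 * cell_diameter    # upper end of "two to three cells"
speed = 1.0                        # m/s; lower end of the reported 1-2 m/s range

transit_time = tube_length / speed
print(f"transit time: {transit_time * 1e6:.0f} microseconds")  # ~60 microseconds
```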

The breakthrough
In their key experiment, Dr Wang used fluorescent dye that changes in intensity as the electric potential of the cell membrane changes. When two cells connected by forming a nanotube, he poked into one of them with a microinjection needle to depolarise that cell’s membrane potential. This caused the fluorescent indicator on the cell membrane to light up like a firework, and it was soon followed by a similar light display in the cell on the other end of the nanotube.

The breakthrough discovery began with an experiment demonstrating intercellular transmission of electrical signals via nanotubes in 2007. The researchers then carried out similar trials with a number of other cell types, observing similar occurrences.

“We confirmed that this is a common phenomenon between cells,” explains Professor Gerdes. “Still, not every cell type shares this characteristic.”

The experiment was replicated a number of times to obtain statistically reliable data. The electrophysiology group at the University of Bergen took precise conductivity measurements of the cell systems to determine the strength of the electrical coupling. In autumn 2010 the results were published in Proceedings of the National Academy of Sciences (PNAS) and were highlighted as top news in Nature News.

Short lifespan
Intercellular nanotubes are far from permanent. Most of them last only a few minutes. This means the researchers cannot predict where and when the cells will form nanotube connections.

“It is truly painstaking work,” says Professor Gerdes. “You may sit there examining cells for hours through a microscope without seeing a single tube. If you are lucky, however, you catch sight of a nanotube being created and can film the event.”

To raise the likelihood of finding nanotubes, the researchers developed a micro-matrix consisting of thousands of points and bridges on a plate surface. Smaller than a postage stamp, the plate is covered by a nano-structured material to which the cells adhere. The researchers place one cell onto each point and hope that nanotubes will form along the bridges between the points. The camera is focused on these bridges.

Once the nanotubes have been established, the researchers manipulate the cells at specified times; meanwhile the microscope is programmed to photograph, say, 50 preselected points every five minutes. The team can thus obtain data about many nanotube connections in a short time.

How do cells do this?
Dr. Wang quickly discovered that the mere presence of a nanotube was not sufficient to transmit an electrical signal. There had to be another mechanism involved as well.

Many cells form tiny membrane pores with each other called gap junctions, which are made up of ring-shaped proteins. Back in the 1960s it was discovered that directly adjacent cells could exchange electrical impulses through these gap junctions. What Dr Wang found was that one end of the nanotube was always connected to the receiving cell by a gap junction before it transmitted its electrical impulses.

He also found that in some coupled cells voltage-gated calcium channels were involved in the forwarding of the incoming signals. When the electrical signal being sent through the nanotube reaches the membrane of the receiving cell, the membrane surface is depolarised, opening the calcium channel and allowing calcium – a vital ion in cell signalling – to enter.

“In other words,” explains Professor Gerdes, “there are two components: a nanotube and a gap junction. The nanotube grows out from one cell and connects to the other cell through a gap junction. Only then can the two cells be coupled electrically.”

Controls embryonic cells?
Now the scientists are seeking answers as to why the cells send signals to each other in this way.

“It’s quite possible that the discovery of nanotubes will give us new insight into intercellular communication,” asserts Professor Gerdes. “The process could explain how cells are coordinated during embryo growth. In that phase cells travel long distances – yet they demonstrate a kind of collective behaviour, moving together like a flock of birds.”

Nanotubes may also be a factor in explaining cell movement associated with wound healing, since cells move toward a wound in order to close it. We already know that electrical signals are somehow involved in this process; scientists can only speculate as to whether nanotubes are involved here as well, stresses Professor Gerdes.

Perhaps brain cells, too?
In terms of electrical signal processing, the human brain surpasses all other organs. If this same signalling mechanism proves to be present in human brain cells, it could add a new dimension to our understanding of how the brain functions. The communication channels already identified, involving synapses and dendrites, differ widely from nanotubes.

The Bergen-based neuroscientists see this research as an opportunity to formulate better explanations for phenomena related to consciousness and electrical connections in the brain. In the project “Cell-to-cell communication: Mechanism of tunnelling nanotube formation and function”, they are now studying precisely how nanotube mechanisms function in brain cells.

Professor Gerdes is currently conducting research at the European Molecular Biology Laboratory in Heidelberg. By studying the electrical connections in vivo he hopes to figure out how the mechanisms work in live subjects. The results could enhance understanding of diseases that occur when cell mechanisms fail to function properly.

Source: Research Council of Norway

Citation: Xiang Wang, Margaret Lin Veruki, Nickolay V. Bukoreshtliev, Espen Hartveit, and Hans-Hermann Gerdes: "Animal cells connected by nanotubes can be electrically coupled through interposed gap-junction channels"
PNAS 2010 107 (40) 17194-17199; published ahead of print September 20, 2010, doi:10.1073/pnas.1006785107

A Hot, Dense Super-Earth Found: Planet "55 Cancri e"

A planet that we thought we knew turns out to be rather different than first suspected. Our revised view comes from new data released today by an international team of astronomers. They made their observations of the planet "55 Cancri e" based on calculations by Harvard graduate student Rebekah Dawson (Harvard-Smithsonian Center for Astrophysics), who worked with Daniel Fabrycky (now at the University of California, Santa Cruz) to predict when the planet crosses in front of its star as seen from Earth. Such transits give crucial information about a planet's size and orbit.

This illustration shows the current night sky at 9:00 p.m. local time. The constellation Cancer the Crab is well placed for viewing.
Credit: Created with Voyager 4, copyright Carina Software

The team found that 55 Cancri e is 60 percent larger in diameter than Earth but eight times as massive. (A super-Earth has one to 10 times the mass of Earth.) It's the densest solid planet known, almost as dense as lead. Even better, the star it orbits is so close and bright that it's visible to the naked eye in the constellation Cancer the Crab. This makes it an excellent target for follow-up studies.
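The density claim follows directly from the quoted size and mass: density scales as mass over radius cubed, so a planet 1.6 times Earth's diameter and 8 times its mass is roughly twice as dense as Earth, putting it close to the density of lead (about 11.3 g/cm³):

```python
EARTH_DENSITY = 5.51   # g/cm^3, mean density of Earth (standard value)
LEAD_DENSITY = 11.34   # g/cm^3, for comparison

radius_ratio = 1.6     # 60 percent larger in diameter than Earth
mass_ratio = 8.0       # eight times as massive

# Density scales as mass / radius^3
density = EARTH_DENSITY * mass_ratio / radius_ratio**3
print(f"estimated density: {density:.1f} g/cm^3")  # ~10.8, almost as dense as lead
```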

Dawson and Fabrycky's prediction played a crucial role in this new work by motivating the search for transits. When the planet was discovered by a Texas team in 2004, it was calculated to orbit its star every 2.8 days. Dawson and Fabrycky reanalyzed the data and found that 55 Cancri e was much closer to its star, orbiting it in less than 18 hours. As a result, the chances of seeing a transit were much higher.

This close-up of the constellation Cancer shows the location of 55 Cancri (circled in red). Its larger component, 55 Cancri A, hosts a planetary system that includes the hottest, densest super-Earth currently known: 55 Cancri e.
Credit: Created with Voyager 4, copyright Carina Software

Josh Winn of MIT and Smithsonian astronomer Matthew Holman brought the new calculation to Jaymie Matthews (University of British Columbia), who scheduled observations with Canada's MOST (Microvariability & Oscillations of STars) satellite. The research team found that 55 Cancri e transits its star every 17 hours and 41 minutes, just as Dawson and Fabrycky predicted.

"I'm excited that by calculating the planet's true orbital period, we were able to detect transits, which tell us so much more about it," said Dawson.

The new technique applies to planets discovered by the radial velocity method, in which astronomers hunt for a star that "wobbles" from the gravitational tug of an orbiting world.

The initial confusion about the orbit of 55 Cancri e arose because of natural gaps in the radial velocity data (because astronomers can only observe a star at night and when it's above the horizon). Sometimes these gaps introduce "ghost" signals that can masquerade as the planet's true signal.
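The particular "ghost" at work here is a one-cycle-per-day alias: sampling only at night makes a true frequency f and f ± 1 cycle/day nearly indistinguishable. The 2.8-day period reported in 2004 and the roughly 17.7-hour period found by Dawson and Fabrycky are related in exactly this way (the standard aliasing relation, applied here as an illustration):

```python
# True period found from the transit observations: 17 h 41 min
p_true = (17 + 41 / 60) / 24          # days (~0.737)

# Daily sampling gaps alias a frequency f to f - 1 cycle/day
f_alias = 1 / p_true - 1.0            # cycles per day
p_alias = 1 / f_alias

print(f"aliased period: {p_alias:.2f} days")  # ~2.80 days, the originally reported value
```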

Dawson and Fabrycky chose to analyze six planetary systems where the data seemed particularly ambiguous. In two cases they confirmed previous results, while others remained unclear. For 55 Cancri e, a period revision was certainly needed.

"It became very clear that the planet's actual orbital period was closer to 18 hours," stated Dawson.

This places the planet so close to its star that it's blasted with heat, baked to a temperature of 4,900 degrees Fahrenheit (about 2,700 degrees Celsius).

The star itself, 55 Cancri A, is a yellow star very similar to the Sun and located 40 light-years away. It's the brightest, closest star known to have a transiting planet.

Dawson recommends that the analysis method she developed with Fabrycky be used on future planet discoveries. "We've cleared up some confusion in the systems we studied, and we believe we've provided a way to avoid future confusion," she said.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

For more information, contact:

David A. Aguilar
Director of Public Affairs
Harvard-Smithsonian Center for Astrophysics

3D Nanocones Boost Solar Cell Efficiency

With the creation of a 3-D nanocone-based solar cell platform, a team led by Oak Ridge National Laboratory's Jun Xu has boosted the light-to-power conversion efficiency of photovoltaics by nearly 80 percent.

The technology substantially overcomes the problem of poor transport of charges generated by solar photons. These charges -- negative electrons and positive holes -- typically become trapped by defects in bulk materials and their interfaces and degrade performance.

Nanocone-based solar cell consisting of n-type nanocones, p-type matrix, transparent conductive oxide (TCO) and glass substrate.
Credit: ORNL

"To solve the entrapment problems that reduce solar cell efficiency, we created a nanocone-based solar cell, invented methods to synthesize these cells and demonstrated improved charge collection efficiency," said Xu, a member of ORNL's Chemical Sciences Division.

The new solar structure consists of n-type nanocones surrounded by a p-type semiconductor. The n-type nanocones are made of zinc oxide and serve as the junction framework and the electron conductor. The p-type matrix is made of polycrystalline cadmium telluride and serves as the primary photon absorber medium and hole conductor.

With this approach at the laboratory scale, Xu and colleagues were able to obtain a light-to-power conversion efficiency of 3.2 percent compared to 1.8 percent efficiency of conventional planar structure of the same materials.
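The "nearly 80 percent" figure in the opening paragraph is simply the relative improvement between these two efficiencies:

```python
planar = 1.8      # percent: conventional planar cell of the same materials
nanocone = 3.2    # percent: nanocone-based cell

relative_gain = (nanocone - planar) / planar * 100
print(f"relative improvement: {relative_gain:.0f}%")  # ~78%, i.e. "nearly 80 percent"
```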

"We designed the three-dimensional structure to provide an intrinsic electric field distribution that promotes efficient charge transport and high efficiency in converting energy from sunlight into electricity," Xu said.

Key features of the solar material include its unique electric field distribution that achieves efficient charge transport; the synthesis of nanocones using inexpensive proprietary methods; and the minimization of defects and voids in semiconductors. The latter provides enhanced electric and optical properties for conversion of solar photons to electricity.

Because of efficient charge transport, the new solar cell can tolerate defective materials and reduce cost in fabricating next-generation solar cells.

"The important concept behind our invention is that the nanocone shape generates a high electric field in the vicinity of the tip junction, effectively separating, injecting and collecting minority carriers, resulting in a higher efficiency than that of a conventional planar cell made with the same materials," Xu said.

Research that forms the foundation of this technology was accepted by this year's IEEE Photovoltaic Specialists Conference and will be published in the conference proceedings. The papers are titled "Efficient Charge Transport in Nanocone Tip-Film Solar Cells" and "Nanojunction solar cells based on polycrystalline CdTe films grown on ZnO nanocones."

The research was supported by the Laboratory Directed Research and Development program and the Department of Energy's Office of Nonproliferation Research and Engineering.

Other contributors to this technology are Sang Hyun Lee, X-G Zhang, Chad Parish, Barton Smith, Yongning He, Chad Duty and Ho Nyung Lee.

UT-Battelle manages ORNL for DOE's Office of Science.

Contacts and Sources:

New Method Found For Controlling Conductivity: Reversible Control Of Electrical And Thermal Properties Could Find Uses In Storage Systems.

A team of researchers at MIT has found a way to manipulate both the thermal conductivity and the electrical conductivity of materials simply by changing external conditions, such as the surrounding temperature. The technique can change electrical conductivity by factors of well over 100, and thermal conductivity by more than threefold.

An artistic rendering of the suspension as it freezes shows graphite flakes clumping together to form a connected network (dark spiky shapes at center), as they are pushed into place by the crystals that form as the liquid hexadecane surrounding them begins to freeze.
Image: Jonathan Tong

“It’s a new way of changing and controlling the properties” of materials — in this case a class called percolated composite materials — by controlling their temperature, says Gang Chen, MIT’s Carl Richard Soderberg Professor of Power Engineering and director of the Pappalardo Micro and Nano Engineering Laboratories. Chen is the senior author of a paper describing the process that was published online on April 19 and will appear in a forthcoming issue of Nature Communications. The paper’s lead authors are former MIT visiting scholars Ruiting Zheng of Beijing Normal University and Jinwei Gao of South China Normal University, along with current MIT graduate student Jianjian Wang. The research was partly supported by grants from the National Science Foundation.

The system Chen and his colleagues developed could be applied to many different materials for either thermal or electrical applications. The finding is so novel, Chen says, that the researchers hope some of their peers will respond with an immediate, “I have a use for that!”

One potential use of the new system, Chen explains, is for a fuse to protect electronic circuitry. In that application, the material would conduct electricity with little resistance under normal, room-temperature conditions. But if the circuit begins to heat up, that heat would increase the material’s resistance, until at some threshold temperature it essentially blocks the flow, acting like a blown fuse. But then, instead of needing to be reset, as the circuit cools down the resistance decreases and the circuit automatically resumes its function.
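The self-resetting fuse behaviour described above can be sketched as a simple threshold model (an illustration of the behaviour only, not the authors' model; the threshold temperature and resistance factor are made-up values):

```python
def fuse_resistance(temp_c, base_resistance=1.0,
                    threshold_c=60.0, blown_factor=100.0):
    """Toy model of the self-resetting fuse: low resistance below the
    threshold temperature, sharply higher resistance above it.
    All parameter values are illustrative, not from the paper."""
    if temp_c < threshold_c:
        return base_resistance
    return base_resistance * blown_factor

# The "fuse" blocks current when hot, then recovers on its own as it cools
print(fuse_resistance(25.0))   # normal operation: 1.0
print(fuse_resistance(80.0))   # overheated: 100.0 (current effectively blocked)
print(fuse_resistance(25.0))   # cooled down again: 1.0 -- no reset needed
```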

Another possible application is for storing heat, such as from a solar thermal collector system, later using it to heat water or homes or to generate electricity. The system’s much-improved thermal conductivity in the solid state helps it transfer heat.

Essentially, what the researchers did was suspend tiny flakes of one material in a liquid that, like water, forms crystals as it solidifies. For their initial experiments, they used flakes of graphite suspended in liquid hexadecane, but they showed the generality of their process by demonstrating the control of conductivity in other combinations of materials as well. The liquid used in this research has a melting point close to room temperature — advantageous for operations near ambient conditions — but the principle should be applicable for high-temperature use as well.

The process works because when the liquid freezes, the pressure of its forming crystal structure pushes the floating particles into closer contact, increasing their electrical and thermal conductance. When it melts, that pressure is relieved and the conductivity goes down. In their experiments, the researchers used a suspension that contained just 0.2 percent graphite flakes by volume. Such suspensions are remarkably stable: Particles remain suspended indefinitely in the liquid, as was shown by examining a container of the mixture three months after mixing.

By selecting different fluids and different materials suspended within that liquid, the critical temperature at which the change takes place can be adjusted at will, Chen says.

“Using phase change to control the conductivity of nanocomposites is a very clever idea,” says Li Shi, a professor of mechanical engineering at the University of Texas at Austin. Shi adds that as far as he knows “this is the first report of this novel approach” to producing such a reversible system.

“I think this is a very crucial result,” says Joseph Heremans, professor of physics and of mechanical and aerospace engineering at Ohio State University. “Heat switches exist,” but involve separate parts made of different materials, whereas “here we have a system with no macroscopic moving parts,” he says. “This is excellent work.”

Contacts and Sources:
Story by David L. Chandler, MIT News Office

Feared Brown Recluse Spider Habitat Range May Expand as Climate Warms Says KU Researcher

One of the most feared spiders in North America is the subject of a new University of Kansas study that aims to predict its distribution and how that distribution may be affected by climate change.

When provoked, the spider, commonly known as the brown recluse (Loxosceles reclusa), injects powerful venom that can kill the tissues at the site of the bite. This can lead to a painful deep sore and occasional scarring.

Brown recluse (Loxosceles reclusa)
Image credit: Wikipedia

But the wounds are not always easy to diagnose. Medical practitioners can confuse the bite with other serious conditions, including Lyme disease and various cancers. The distribution of the spider is poorly understood as well, and medical professionals routinely diagnose brown recluse bites outside of the areas where it is known to exist.

The brown recluse spider has three pairs of eyes
Image: Wikipedia

By better characterizing the spider's distribution, and by examining potential new areas of distribution under future climate change scenarios, the medical community and the public can be better informed about this species, said study author Erin Saupe, a graduate student in geology and at the Biodiversity Institute.

To address the issue of brown recluse distribution, Saupe and other researchers used a predictive mapping technique called ecological niche modeling. They applied future climate change scenarios to the spider’s known distribution in the Midwest and southern United States. The researchers concluded that the range may expand northward, potentially invading previously unaffected regions. Newly influenced areas may include parts of Nebraska, Minnesota, Wisconsin, Michigan, South Dakota, Ohio and Pennsylvania.
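
Ecological niche modeling comes in many flavors; a minimal climate-envelope sketch conveys the basic idea. Everything below, the two climate variables, the site values and the simple min/max envelope, is hypothetical and far cruder than the methods the team actually used.

```python
# Minimal climate-envelope sketch of ecological niche modeling. All numbers
# are hypothetical; the study's actual algorithm and variables differ.

def fit_envelope(occurrences):
    """Learn the min and max of each climate variable across occurrence sites."""
    n_vars = len(occurrences[0])
    mins = [min(site[i] for site in occurrences) for i in range(n_vars)]
    maxs = [max(site[i] for site in occurrences) for i in range(n_vars)]
    return mins, maxs

def suitable(site, envelope):
    """A site is predicted suitable if every variable lies inside the envelope."""
    mins, maxs = envelope
    return all(lo <= v <= hi for v, lo, hi in zip(site, mins, maxs))

# Hypothetical (mean annual temperature C, annual precipitation mm) at sites
# where the spider is known to occur:
occurrences = [(13.0, 900.0), (15.5, 1100.0), (14.2, 1000.0)]
env = fit_envelope(occurrences)

northern_site = (9.0, 950.0)              # too cold for the spider today
print(suitable(northern_site, env))       # False

warmed_site = (northern_site[0] + 5.0, northern_site[1])
print(suitable(warmed_site, env))         # True: the range can expand north
```

Applying the fitted envelope to a warmed climate at each location is, in spirit, how a projected range expansion like the one described above is produced.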

Current habitat
Image: Wikipedia

“These results illustrate a potential negative consequence of climate change on humans and will aid medical professionals in proper bite identification and treatment, potentially reducing bite misdiagnoses,” Saupe said.

The paper was published in the March 25 edition of the journal PLoS ONE. The research team included Saupe; Monica Papes, a Biodiversity Institute and ecology and evolutionary biology alumna; Paul Selden, director of the Paleontological Institute and the Gulf-Hedberg Distinguished Professor of Invertebrate Paleontology; and Richard S. Vetter, University of California-Riverside.

Contacts and sources:

Jen Humphrey, Natural History Museum and Biodiversity Institute

KU Researchers Assess Power Of Stereo 3-D Technology To Boost Geography Instruction

With the recent popularity of stereoscopic 3-D movies like “Avatar” and “Alice in Wonderland,” a new generation of Americans is donning plastic, polarized glasses and enjoying images that seem to spill from the screen into the theater itself.

Now, supported by a two-year, $150,000 grant from the National Science Foundation, some of that stereoscopic pizzazz is finding its way into geography classrooms at the University of Kansas and Haskell Indian Nations University.

“Our central question is whether stereo will lead to an improvement over a simple three-dimensional representation,” said Terry Slocum, associate professor of geography, who is leading the research. “The results would affect anyone who would want to use stereo either in a classroom or in research. So we would argue it could affect potentially thousands and maybe even millions of viewers.”

Slocum and his co-investigators now are developing stereoscopic 3-D materials for introductory physical geography classes at KU and Haskell. Those classes will be taught with and without stereoscopic 3-D visual cues. Then, researchers will analyze the speed and accuracy of students’ performance on tests, conduct interviews with students and gather focus groups to judge the impact of stereo 3-D displays. The Center for Teaching Excellence at KU will evaluate the results.

“Anything that you can show in three dimensions has the potential to be an improvement,” said Slocum. “For example, in cartography we could have data values for counties, which we call ‘enumeration units.’ We can raise those counties to a height proportional to the data, creating what are called ‘prisms’ above those counties, which will look three dimensional, then we can also show that in stereo. So the question would be does the stereo option enhance just the three-dimensional representation?”
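
The prism idea in the quote above reduces to a simple scaling step: each county's height is proportional to its data value. A minimal sketch, with hypothetical county names and values:

```python
# Toy computation of "prism" heights for county-level data, as described in
# the quote above (county names and values are hypothetical).

def prism_heights(values, max_height=100.0):
    """Scale data values linearly so the largest county gets max_height units."""
    peak = max(values.values())
    return {county: max_height * v / peak for county, v in values.items()}

population_density = {"Douglas": 420.0, "Johnson": 1150.0, "Wabaunsee": 9.0}
heights = prism_heights(population_density)
print(heights["Johnson"])   # the largest value maps to the full height: 100.0
```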

A classroom system for displaying stereoscopic images, including photographs, maps and other visual materials, is commonly referred to as a GeoWall. Separate GeoWall systems are being installed at KU and Haskell. The GeoWall images appear to float off the screen, allowing the viewer to see a degree of depth impossible with conventional displays.

While the stereoscopic 3-D images can make for more exciting viewing, Slocum and his colleagues are more interested in discovering if the technology improves geographic education in the classroom.

“We can show things in 3-D without stereo — there are lots of visual cues we can use to determine whether or not something is 3-D,” said Slocum. “The question is what does the stereo capability add that is not in the 3-D format? Do you gain anything by going to a stereo option?”

While at face value it may seem that stereoscopic 3-D would enhance the classroom experience, the KU researcher said that the technology could have drawbacks as well.

“It’s definitely more work at the front end for the instructor,” said Slocum. “For a lot of images, you’ve got to have special software or you’ve got to have a camera that can take 3-D. And there’s a potential for misleading people, because in order to show, say, a three-dimensional structure of elevation, what you have to do is exaggerate the surface. So it’s possible to actually mislead students.”

Slocum’s colleagues on the project include KU professors Steve Egbert, William Johnson and Dan Hirmas; KU graduate students Travis White and Alan Halfen; and Dave McDermott, a geography professor at Haskell.

Contacts and sources:
Brendan M. Lynch, University Relations

Thursday, April 28, 2011

Animation of Pacific Ocean Sea Surface Heights Showing El Nino and La Nina Events 1993 to 2011

This animation depicts year-to-year variability in sea surface height, and chronicles two decades of El Nino and La Nina events. It was created using NASA ocean altimetry data from 1993 to 2011, as measured by several NASA spacecraft.

This remarkable video chronicles almost 19 years of sea-surface height measurements collected by the TOPEX/Poseidon, Jason-1 and Jason-2 altimetry satellites, which together form a precise and continuous record. These ocean-viewing spacecraft use radar altimetry to collect sea surface height data over all the world's oceans.

What is sea-surface height? The height (or "relief") of the sea surface is set both by gravity (which doesn't change much over hundreds of years) and by the active, always-changing ocean circulation. The normal slow, regular circulation (ocean current) patterns of sea-surface height move up and down (due to warming, cooling and wind forcing) with the normal progression of the seasons: winter to spring to summer to fall. Using the theory of ocean dynamics, TOPEX/Poseidon and Jason sea-surface heights can be used to calculate how much heat is stored in the ocean below. The year-to-year and even decade-to-decade changes in the ocean that indicate climate events such as El Niño, La Niña and the Pacific Decadal Oscillation are dramatically visualized by these data. Sea-surface height is the most modern and powerful tool for taking the "pulse" of the global oceans.
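
The link between sea-surface height and stored heat can be sketched with a standard back-of-envelope relation: if a rise Δη is purely thermal expansion, the heat added per unit area is roughly Q = ρ·c_p·Δη/α. The constants below are typical seawater values chosen for illustration, not mission parameters.

```python
# Back-of-envelope sketch of inferring stored heat from sea-surface height.
# All constants are rough, illustrative seawater values.

RHO = 1025.0      # seawater density, kg/m^3
CP = 3990.0       # seawater specific heat, J/(kg*K)
ALPHA = 2.0e-4    # thermal expansion coefficient, 1/K (varies with depth, T, S)

def heat_per_area_from_steric_rise(delta_eta_m):
    """If a rise delta_eta is purely thermal expansion, the heat added per
    square metre of ocean is Q = rho * cp * delta_eta / alpha (J/m^2)."""
    return RHO * CP * delta_eta_m / ALPHA

q = heat_per_area_from_steric_rise(0.01)   # a 1 cm thermally driven rise
print(f"{q:.2e} J per square metre")
```

Even a centimetre of thermally driven rise corresponds to an enormous amount of stored heat, which is why altimetry is such a sensitive probe of the ocean's heat budget.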

In this video the ocean height measurements are processed to highlight the interannual ocean signals of sea surface height. An example of "interannual" variability is the 3- to 5-year flipping of the climate from El Nino to La Nina and back again. The blue areas represent a drop in ocean height where cooler (and therefore denser) than normal water is located; a La Nina event is identified by a pool of colder than normal water in the eastern Pacific Ocean. The red areas represent a rise in ocean height where sea surface temperatures are warmer than normal; an El Nino is identified by a pool of warmer than normal water in the eastern Pacific Ocean.

Below the Pacific Basin animation is shown the growth (from 1992 to 2011) of the globally averaged sea surface height. To simplify this presentation, the seasonal signal (annual heating during summer and cooling during winter) has been removed. What is seen is the gradual, steady rise in sea level during these two decades. The average worldwide rise in sea level is almost 60 mm (2.4 inches), about 3 mm/year (1/8 inch/year). This rise is due to the warming of the global oceans and an increase in mass (new water) from the observed melting of terrestrial glaciers and polar icecaps.

The animation shows sea surface height anomalies with the seasonal cycle (the effects of summer, fall, winter, and spring) removed. The differences between what we see and what is normal for a given time and region are called anomalies, or residuals. When oceanographers and climatologists view these anomalies, they can identify unusual patterns and tell us how heat is being stored in the ocean to influence future planetary climate events.
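
Removing the seasonal cycle typically means subtracting a monthly climatology, the long-term average for each calendar month, from every sample. A minimal sketch on synthetic data (the mission's actual processing is far more sophisticated):

```python
# Minimal sketch of turning raw monthly values into anomalies by removing a
# monthly climatology. The data are synthetic and purely illustrative.

def deseasonalize(samples):
    """samples: list of (calendar_month_0_to_11, value) pairs. Returns one
    anomaly per sample: the value minus that calendar month's long-term mean."""
    sums, counts = [0.0] * 12, [0] * 12
    for month, value in samples:
        sums[month] += value
        counts[month] += 1
    climatology = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return [value - climatology[month] for month, value in samples]

# Ten years of synthetic heights: +10 in the warm half of the year, -10 in
# the cold half, plus one El Nino-like spike in year six.
data = [(m % 12, 10.0 if m % 12 < 6 else -10.0) for m in range(120)]
data[66] = (66 % 12, -10.0 + 50.0)   # month 66 is anomalously high

anomalies = deseasonalize(data)
print(max(anomalies))   # the spike stands out once the seasons are removed
```

Ordinary seasonal swings cancel out to near-zero anomalies, while the one-off event survives as a large residual, which is exactly what makes anomaly maps useful for spotting El Nino and La Nina.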

Contacts and Sources:
Text Credit:
Alan Buis/NASA's Jet Propulsion Laboratory
Rob Gutro/NASA's Goddard Space Flight Center

A Tale Of 2 Lakes: One Gives Early Warning Signal For Ecosystem Collapse

Researchers eavesdropping on complex signals from a remote Wisconsin lake have detected what they say is an unmistakable warning--a death knell--of the impending collapse of the lake's aquatic ecosystem.

The finding, reported today in the journal Science by a team of researchers led by Stephen Carpenter, an ecologist at the University of Wisconsin-Madison (UW-Madison), is the first experimental evidence that radical change in an ecosystem can be detected in advance, possibly in time to prevent ecological catastrophe.

"For a long time, ecologists thought these changes couldn't be predicted," says Carpenter. "But we've now shown that they can be foreseen. The early warning is clear. It is a strong signal."

A tale of two lakes: Paul (reference lake) is the smaller lake; Peter (manipulated lake) is in the background.
Credit: Steve Carpenter

The implications of the National Science Foundation (NSF)-supported study are big, says Carpenter.

"This research shows that, with careful monitoring, we can foresee shifts in the structure of ecosystems despite their complexity," agrees Alan Tessier, program director in NSF's Division of Environmental Biology. "The results point the way for ecosystem management to become a predictive science."

The findings suggest that, with the right kind of monitoring, it may be possible to track the vital signs of any ecosystem and intervene in time to prevent what is often irreversible damage to the environment.

"With more work, this could revolutionize ecosystem management," Carpenter says. "The concept has now been validated in a field experiment and the fact that it worked in this lake opens the door to testing it in rangelands, forests and marine ecosystems."

"Networks for long-term ecological observation, such as the [NSF] Long-Term Ecological Research network, increase the possibility of detecting early warnings through comparisons across sites and among regions," the scientists write in their paper.

Ecosystems often change in radical ways. Lakes, forests, rangelands, coral reefs and many other ecosystems are often transformed by overfishing, insect pests, chemical changes in the environment, overgrazing and shifting climate.

For humans, ecosystem change can impact economies and livelihoods such as when forests succumb to an insect pest, rangelands to overgrazing, or fisheries to overexploitation.

A vivid example of a collapsed resource is the Atlantic cod fishery.

Once the most abundant and sought-after fish in the North Atlantic, cod stocks collapsed in the 1990s due to overfishing, causing widespread economic hardship in New England and Canada. Now, the ability to detect when an ecosystem is approaching the tipping point could help prevent such calamities.

In the new study, the Wisconsin researchers, collaborating with scientists at the Cary Institute for Ecosystem Studies in Millbrook, N.Y., the University of Virginia in Charlottesville and St. Norbert College in De Pere, Wis., focused their attention on Peter and Paul Lakes, two isolated and undeveloped lakes in northern Wisconsin.

Peter is a six-acre lake whose biota were manipulated for the study and nearby Paul served as a control.

An explosion of largemouth bass young-of-year accelerated the manipulated lake's changes.
Credit: Tim Cline

The group led by Carpenter experimentally manipulated Peter Lake over a three-year period by gradually adding predatory largemouth bass to the lake, which was previously dominated by small fish that consumed water fleas, a type of zooplankton.

The purpose, Carpenter notes, was to destabilize the lake's food web to the point where it would become an ecosystem dominated by large predators.

In the process, the researchers expected to see a relatively rapid cascading change in the lake's biological community, one that would affect all its plants and animals in significant ways.

"We start adding these big ferocious fish and almost immediately this instills fear in the other fish," Carpenter says.

"The small fish begin to sense there is trouble and they stop going into the open water and instead hang around the shore and structures, things like sunken logs. They become risk-averse."

The biological upshot, says Carpenter, is that the lake became "water flea heaven."

The system became one where the phytoplankton, the preferred food of the lake's water fleas, was highly variable.

"The phytoplankton get hammered and at some point the system snaps into a new mode," says Carpenter.

Throughout the lake's three-year manipulation, all its chemical, biological and physical vital signs were continuously monitored to track even the smallest changes that would announce what ecologists call a "regime shift," where an ecosystem undergoes radical and rapid change from one type to another.

It was in these massive sets of data that Carpenter and his colleagues were able to detect the signals of the ecosystem's impending collapse.

Ecologists first discovered similar signals in computer simulations of spruce budworm outbreaks.

Every few decades the insect's populations explode, causing widespread deforestation in boreal forests in Canada. Computer models of a virtual outbreak, however, seemed to undergo odd blips just before the outbreak.

The problem was solved by William "Buz" Brock, a UW-Madison economist who for decades has worked on the mathematical connections of economics and ecology.

Brock utilized a branch of applied mathematics known as bifurcation theory to show that the odd behavior was in fact an early warning of catastrophic change.

In short, he devised a way to sense the transformation of an ecosystem by detecting subtle changes in the system's natural patterns of variability.
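
Brock's bifurcation-theory analysis is not reproduced here, but the general idea, that fluctuations become larger and more persistent as a system loses resilience, can be illustrated with simple sliding-window statistics. The synthetic model, the window length, and all parameters below are assumptions for illustration, not the study's actual analysis.

```python
# Simplified sketch of statistical early-warning indicators: variance and
# lag-1 autocorrelation computed in a sliding window over a time series.
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den if den else 0.0

def rolling(stat, series, window):
    return [stat(series[i:i + window]) for i in range(len(series) - window + 1)]

# Synthetic time series whose return-to-equilibrium force slowly weakens,
# mimicking "critical slowing down" as a regime shift approaches.
random.seed(0)
series, x = [], 0.0
for t in range(400):
    memory = 0.2 + 0.0018 * t      # fluctuations become more persistent
    x = memory * x + random.gauss(0.0, 1.0)
    series.append(x)

var_track = rolling(variance, series, 100)
ac_track = rolling(lag1_autocorr, series, 100)

# Both indicators creep upward as the tipping point nears:
print(var_track[-1] > var_track[0], ac_track[-1] > ac_track[0])
```

The same logic, applied to continuously monitored lake chemistry and biology rather than a toy model, is what allowed the rising variability in Peter Lake to be read as a warning signal.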

The upshot of the Peter Lake field experiment, says Carpenter, is a validated statistical early warning system for ecosystem collapse.

The catch, however, is that for the early warning system to work, intense and continuous monitoring of an ecosystem's chemistry, physical properties and biota are required.

Such an approach may not be practical for every threatened ecosystem, says Carpenter, but he also cites the price of doing nothing.

"These regime shifts tend to be hard to reverse. It is like a runaway train once it gets going and the costs--both ecological and economic--are high."

In addition to Carpenter and Brock, authors of the Science paper include Jonathan Cole of the Cary Institute of Ecosystem Studies; Michael Pace, James Coloso and David Seekell of the University of Virginia at Charlottesville; James Hodgson of St. Norbert College; and Ryan Batt, Tim Cline, James Kitchell, Laura Smith and Brian Weidel of UW-Madison.

Contacts and sources: