Monday, January 31, 2011

Quantum-Mechanical Implementation Of 'Shell Game' Leading To Quantum Computer

Inspired by the popular confidence trick known as the shell game, researchers at UC Santa Barbara have demonstrated the ability to hide and shuffle "quantum-mechanical peas" –– single microwave photons –– under and between three microwave resonators, or "quantized shells."

In a paper published in the Jan. 30 issue of the journal Nature Physics, UCSB researchers report the first demonstration of coherent control of a multi-resonator architecture. This feat has been a holy grail among physicists studying photons at the quantum-mechanical level for more than a decade.

The UCSB researchers are Matteo Mariantoni, postdoctoral fellow in the Department of Physics; Haohua Wang, postdoctoral fellow in physics; John Martinis, professor of physics; and Andrew Cleland, professor of physics.

According to the paper, the "shell man," the researcher, makes use of two superconducting quantum bits (qubits) to move the photons –– particles of light –– between the resonators. The qubits –– the quantum-mechanical equivalent of the classical bits used in a common PC –– are studied at UCSB for the development of a quantum computer. They constitute one of the key elements for playing the photon shell game.

"This is an important milestone toward the realization of a large-scale quantum register," said Mariantoni. "It opens up an entirely new dimension in the realm of on-chip microwave photonics and quantum-optics in general."

The researchers fabricated a chip where three resonators of a few millimeters in length are coupled to two qubits. "The architecture studied in this work resembles a quantum railroad," said Mariantoni. "Two quantum stations –– two of the three resonators –– are interconnected through the third resonator which acts as a quantum bus. The qubits control the traffic and allow the shuffling of photons among the resonators."

The photon shell game architecture: Two superconducting phase qubits (squares in the center of the image) are connected to three microwave resonators (three meander lines).
Credit: UCSB

In a related experiment, the researchers played a more complex game inspired by the Towers of Hanoi, an ancient mathematical puzzle that, according to legend, originated in an Indian temple.

The Towers of Hanoi puzzle consists of three posts and a pile of disks of different diameters, each of which can slide onto any post. The puzzle starts with the disks stacked on one post in ascending order of size, with the smallest disk at the top. The aim is to move the entire stack to another post, moving only one disk at a time and never placing a disk on top of a smaller one.
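As an aside, the classical puzzle described above has a well-known recursive solution. The sketch below (plain Python, the textbook algorithm, not part of the quantum experiment) returns the move sequence:

```python
def hanoi(n, src, dst, aux, moves=None):
    """Return the list of (from, to) moves solving an n-disk Towers of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)  # park the n-1 smaller disks on the spare post
    moves.append((src, dst))            # move the largest disk straight to the target
    hanoi(n - 1, aux, dst, src, moves)  # re-stack the smaller disks on top of it
    return moves

print(len(hanoi(3, "A", "C", "B")))  # → 7, i.e. 2**3 - 1 moves
```

The minimum number of moves grows as 2^n − 1, which is why the legend's 64-disk version would take billions of years to finish.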

In the quantum-mechanical version of the Towers of Hanoi, the three posts are represented by the resonators and the disks by quanta of light with different energies. "This game demonstrates that a truly bosonic excitation can be shuffled among resonators –– an interesting example of the quantum-mechanical nature of light," said Mariantoni.

Mariantoni was supported in this work by an Elings Prize Fellowship in Experimental Science from UCSB's California NanoSystems Institute.

Contacts and sources:
Gail Gallessich
University of California - Santa Barbara

Newly Discovered Dinosaur Likely Father of Triceratops

Triceratops and Torosaurus have long been considered the kings of the horned dinosaurs. But a new discovery traces the giants' family tree further back in time, to a newly discovered species that appears to have reigned long before its better-known descendants, making it the earliest known member of its family.

The new species, named Titanoceratops after the Titans of Greek myth, rivaled Triceratops in size, with an estimated weight of nearly 15,000 pounds and a massive eight-foot-long skull.

The skull on the left is the Titanoceratops skull, the missing parts of which were reconstructed to look like a Pentaceratops. The illustration on the right shows the missing parts of the frill (shaded).
Credit: Yale University

Titanoceratops, which lived in the American southwest during the late Cretaceous period around 74 million years ago, is the earliest known triceratopsin, suggesting the group evolved its large size more than five million years earlier than previously thought, according to Nicholas Longrich, the paleontologist at Yale who made the discovery. The finding, which will appear in an upcoming issue of the journal Cretaceous Research, helps shed light on the poorly understood origins of these giant horned dinosaurs.

Longrich was searching through scientific papers when he came across a description of a partial skeleton of a dinosaur discovered in New Mexico in 1941. The skeleton went untouched until 1995, when it was finally prepared and identified incorrectly as Pentaceratops, a species common to the area. When the missing part of its frill — the signature feature of the horned dinosaurs — was reconstructed for display in the Oklahoma Museum of Natural History, it was modeled after Pentaceratops.

As this chart shows, Titanoceratops rivaled Triceratops in size.

Illustration by Nicholas Longrich

"When I looked at the skeleton more closely, I realized it was just too different from the other known Pentaceratops to be a member of the species," Longrich said, adding that the specimen's size indicated that it likely weighed about twice as much as adult Pentaceratops. The new species is very similar to Triceratops, but with a thinner frill, longer nose and slightly bigger horns, Longrich said.

Instead, Longrich believes that Titanoceratops is the ancestor of both Triceratops and Torosaurus, and that the latter two split several million years after Titanoceratops evolved. "This skeleton is exactly what you would expect their ancestor to look like," he said.

Titanoceratops was probably only around for about a million years, according to Longrich, while the triceratopsin family existed for a total of about 10 million years and roamed beyond the American southwest into other parts of the country and as far north as Canada.

To confirm the discovery beyond a shadow of a doubt, Longrich hopes paleontologists will find other fossil skeletons with intact frills, which would help confirm the differences between Titanoceratops and Pentaceratops.

"There have got to be more of them out there," Longrich said.

Contacts and sources:
Suzanne Taylor Muzzin
Yale University

Citation: DOI: 10.1016/j.cretres.2010.12.007

Nanosilver: A New Name – Well-Known Effects At Work Against Microbes For Over A Century

Nanosilver is not a new discovery by nanotechnologists – it has been used in various products for over a hundred years, as a new Empa study shows. The antimicrobial effects of minute silver particles, then known as "colloidal silver," were recognized from the earliest days of their use.

As early as the 19th century, minute silver particles were used, for example, in antibacterial water filters.
Symbolic image, iStock

Numerous nanomaterials are currently the focus of public attention. Silver nanoparticles in particular are being investigated in detail, both by scientists and by the regulatory authorities. The assumption behind this interest is that researchers are dealing with a completely new substance.

However, Empa researchers Bernd Nowack and Harald Krug, together with Murray Height of the company HeiQ, have shown in a paper recently published in the journal "Environmental Science & Technology" that nanosilver is by no means a discovery of the 21st century. Silver particles with diameters of seven to nine nm were described as early as 1889. They were used in medications or as biocides to prevent the growth of bacteria on surfaces, for example in antibacterial water filters or in algaecides for swimming pools.

TEM image of silver nanoparticles in the algicide Algaedyn used for swimming pools.
Credit: Empa

The material has always been the same
The nanoparticles were known as "colloidal silver" in those days, but what was meant was the same then as now – extremely small particles of silver. The only new aspect is today's use of the prefix "nano." "However," according to Bernd Nowack, "nano does not mean something new, nor does it mean something harmful." When colloidal silver became available on the market in large quantities in the 1920s, it was the topic of numerous studies and was subject to appropriate regulation by the authorities. Even then, the significance of nanoparticles and how they worked was recognized. "That is not to say that the possible side-effects of nanoparticles on humans and the environment should be played down or ignored," adds Nowack. It is important to characterize the material properties of nanosilver in exact detail, rather than unquestioningly accepting the doubts and reservations surrounding the product.

Nanosilver has different effects than bulk silver
The term nanoparticle refers to particles whose dimensions are less than 100 nm. Because of their minute size, nanoparticles have different properties than larger particles of the same material. For example, for a given volume, nanoparticles have a much greater surface area, so they are frequently much more reactive than the bulk material. In addition, even in small quantities, nanosilver releases more silver ions than solid silver. These silver ions are toxic to bacteria. Whether or not nanosilver represents a risk to humans and the environment is currently the subject of a great deal of investigation.
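The surface-to-volume argument is easy to quantify. The sketch below (Python, illustrative numbers only) compares a sphere of 8 nm diameter, in the size range of the 1889 colloids, with a 1 µm silver grain:

```python
import math

def area_to_volume(d_nm):
    """Surface-area-to-volume ratio (per nm) of a sphere of diameter d_nm."""
    r = d_nm / 2.0
    return (4 * math.pi * r**2) / ((4.0 / 3.0) * math.pi * r**3)  # simplifies to 3/r

# An 8 nm particle versus a 1 micrometre grain of the same material:
print(round(area_to_volume(8) / area_to_volume(1000)))  # → 125
```

Since the ratio reduces to 3/r, shrinking the particle by a factor of 125 gives 125 times the surface area per unit volume, which is the geometric reason nanoparticles are more reactive and shed ions faster.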

Nanosilver in wastewater treatment plants
Currently there are hundreds of products in circulation which contain silver nanoparticles. Examples include cosmetics, food packaging materials, disinfectants, cleaning agents and – not least – antibacterial socks and underwear. Every year some 320 tonnes of nanosilver are used worldwide, some of which is released into wastewater, thus finding its way into natural waters. What effects silver particles have on rivers, soil and the organisms that live in them has not yet been clarified in detail.

A commentary by Bernd Nowack in the scientific journal "Science" discusses the implications of the newest studies on nanosilver in sewage treatment plants. More than 90% of the silver remains bound in the sewage sludge in the form of silver sulfide, a substance which is extremely insoluble and orders of magnitude less poisonous than free silver ions. It apparently does not matter what the original form of the silver in the wastewater was: metallic nanoparticles, silver ions in solution or precipitated insoluble silver salts.

"As far as the environmental effects are concerned, it seems that nanosilver in consumer goods is no different than other forms of silver and represents only a minor problem for eco-systems," says Nowack. What is still to be clarified, however, is in what form the unbound silver is present in the treated water released from sewage works, and what happens to the silver sulfide in natural waters. Is this stable and unreactive or is it transformed into other forms of silver?

Contacts and sources:

PD Dr. Bernd Nowack, Technology and Society

120 Years of Nanosilver History: Implications for Policy Makers, Bernd Nowack, Harald F. Krug, Murray Height, Environ Sci Technol, 2011, DOI: 10.1021/es103316q

Nanosilver Revisited Downstream, Bernd Nowack, Science, 2010, Vol. 330 no. 6007, pp. 1054-1055, DOI: 10.1126/science.1198074

Greater Horn of Africa Droughts To Worsen, 17.5 Million At Risk For Food Shortages

The increased frequency of drought observed in eastern Africa over the last 20 years is likely to continue as long as global temperatures continue to rise, according to new research published in Climate Dynamics.

This poses increased risk to the estimated 17.5 million people in the Greater Horn of Africa who currently face potential food shortages.

Credit: USGS

Scientists from the U.S. Geological Survey and the University of California, Santa Barbara, determined that warming of the Indian Ocean, which causes decreased rainfall in eastern Africa, is linked to global warming. These new projections of continued drought contradict previous scenarios by the Intergovernmental Panel on Climate Change predicting increased rainfall in eastern Africa.

This new research supports efforts by the USGS and the U.S. Agency for International Development to identify areas of potential drought and famine in order to target food aid and help inform agricultural development, environmental conservation, and water resources planning.


“Global temperatures are predicted to continue increasing, and we anticipate that average precipitation totals in Kenya and Ethiopia will continue decreasing or remain below the historical average,” said USGS scientist Chris Funk. “The decreased rainfall in eastern Africa is most pronounced in the March to June season, when substantial rainfall usually occurs. Although drought is one reason for food shortages, it is exacerbated by stagnating agricultural development and continued population growth.”

As the globe has warmed over the last century, the Indian Ocean has warmed especially fast. The resulting warmer air and increased humidity over the Indian Ocean produce more frequent rainfall in that region. The air then rises, loses its moisture during rainfall, and then flows westward and descends over Africa, causing drought conditions in Ethiopia and Kenya.

“Forecasting precipitation variability from year to year is very difficult, and research on the links between global change and precipitation in specific regions is ongoing so that more accurate projections of future precipitation can be developed,” said University of California, Santa Barbara, scientist Park Williams. “It is also important to note that while sea-surface temperatures are expected to continue to increase in the Indian Ocean and cause an average decrease in rainfall in eastern Africa, there will still occasionally be very wet years because there are many factors that influence precipitation.”


Scientists compiled existing datasets on temperature, wind speed and precipitation to see what was driving climate variations in the tropical Indian and Pacific Ocean region. Most of the Indian Ocean warming is linked to human activities, particularly greenhouse gas and aerosol emissions. The Indian Ocean has warmed especially fast because it is being encroached upon by the Tropical Warm Pool, an area with the warmest ocean surface temperatures on Earth.

This research supports efforts by the USGS and the U.S. Agency for International Development through the Famine Early Warning Systems Network. FEWS NET is a decision support system that helps target more than two billion dollars of food aid to more than 40 countries each year. Through this system, scientists are helping with early identification of agricultural drought that might trigger food insecurity. For more information, visit

The article, “A westward extension of the warm pool intensifies the walker circulation, drying eastern Africa,” was published in Climate Dynamics and can be found at

Source: USGS

Super Bowls Kill Maniac Fans, Doctors Expect Rise in Cardiac Deaths When The Packers Meet The Steelers

Some armchair quarterbacks may be leaving it all on the field on Super Bowl Sunday, paying the ultimate price of fandom.

A new study published in the journal Clinical Cardiology reveals that a Super Bowl loss for a home team was associated with increased death rates in both men and women and in older individuals.

Sports fans may be emotionally involved in watching their favorite teams. When the team loses, it can cause some degree of emotional stress.

Led by Robert A. Kloner, MD, PhD, of the Heart Institute, Good Samaritan Hospital, and the Keck School of Medicine at USC in Los Angeles, researchers assessed how often this emotional stress may translate into increases in cardiac death. They ran regression models of mortality rates from cardiac causes for the 1980 Los Angeles Super Bowl loss and for the 1984 Los Angeles Super Bowl win.

Results show that the Los Angeles Super Bowl loss of 1980 increased total and cardiac deaths in both men and women, and triggered more deaths in older than in younger patients. In contrast, there was a trend for a Super Bowl win to reduce deaths, most noticeably in older people and in women.

Specifically, in men there was a 15 percent increase in all circulatory deaths associated with the Super Bowl loss; in women there was a 27 percent increase in all circulatory deaths associated with the loss. Thus, unlike previous reports from some soccer games, the findings were not confined mainly to male fans, but also were seen in women. In older patients, there was a 22 percent increase in circulatory deaths associated with the Super Bowl loss.
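The paper's regression models are not reproduced here, but the basic rate comparison behind percentage figures like these can be sketched with made-up counts (every number in this snippet is hypothetical, chosen only to illustrate what a "15 percent increase" means):

```python
def rate_ratio(deaths_exposed, days_exposed, deaths_control, days_control):
    """Crude mortality rate ratio: deaths/day in the exposed window over deaths/day in the control window."""
    return (deaths_exposed / days_exposed) / (deaths_control / days_control)

# Hypothetical counts: 460 cardiac deaths in the two weeks after a loss
# versus 400 in a comparable two-week control window.
print(round(rate_ratio(460, 14, 400, 14), 2))  # → 1.15, i.e. a 15% increase
```

A real analysis would adjust for season, age structure, and long-term trends, which is what the study's regression models do; the crude ratio only conveys the headline arithmetic.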

"Physicians and patients should be aware that stressful games might elicit an emotional response that could trigger a cardiac event," Kloner notes. "Stress reduction programs or certain medications might be appropriate in individual cases."

Contacts and sources:

Sunday, January 30, 2011

Skin Cells Converted to Beating Heart Cells; Breakthrough Discovery Offers Hope for New Therapies for Range of Diseases

Scripps Research Institute scientists have converted adult skin cells directly into beating heart cells efficiently without having to first go through the laborious process of generating embryonic-like stem cells. The powerful general technology platform could lead to new treatments for a range of diseases and injuries involving cell loss or damage, such as heart disease, Parkinson’s, and Alzheimer’s disease.

Scripps Research scientists have created mature heart muscle cells directly from skin cells.

Credit: Scripps Research Institute

The work was published January 30, 2011, in an advance, online issue of Nature Cell Biology.

"This work represents a new paradigm in stem cell reprogramming," said Scripps Research Associate Professor Sheng Ding, Ph.D., who led the study. "We hope it helps overcome major safety and other technical hurdles currently associated with some types of stem cell therapies."

Making Stem Cells

As the human body develops, embryonic-like stem cells multiply and transform themselves into more mature cell types through a process known as differentiation, producing all of the body’s different cell types and tissues. Past the embryonic stage, however, the human body has limited capacity to generate new cells to replace ones that have been lost or damaged.

Thus, scientists have been trying to develop ways to “reprogram” adult human cells back to a more embryonic-like, or pluripotent, state, from which they are able to divide and then change into any of the body’s cell types. Using these techniques, scientists aim to someday be able to take a patient’s own cells, say skin cells, change them into heart or brain cells, and then insert them back into the patient to fix damaged tissues. In 2006, Japanese scientists reported that they could reprogram mouse skin cells to become pluripotent simply by inserting a set of four genes into the cells.

Scripps Research Associate Professor Sheng Ding led the new research.
Credit: Scripps Research Institute

Although the technology to generate these cells, dubbed induced pluripotent stem (iPS) cells, represents a major advance, there are some hurdles to overcome before it can be adapted to therapies.

“It takes a long time to generate iPS cells and then differentiate them into tissue-specific functional cell types," said Ding, "and it’s a tedious process. Also, what you generate is not ideal.”

Specifically, it takes some two to four weeks for scientists to create iPS cells from skin cells and the process is far from efficient, with only one cell out of thousands making the complete transformation. Furthermore, once scientists obtain iPS cells, they then have to go through the tricky procedure of inducing the iPS cells to differentiate into desired types of cells, which takes an additional two to four weeks.

In addition, the process of generating mature cells from iPS cells is not foolproof. When, for example, scientists induce iPS cells to become heart cells, the resulting cells are a mix of heart cells and some lingering iPS cells. Scientists are concerned that giving these new heart cells (along with the remaining pluripotent cells) to patients might be dangerous. When pluripotent cells are injected in mice, they cause cancer-like growths.

Because of these concerns, Ding and colleagues decided to try to tweak the process by completely bypassing the iPS stage and going directly from one type of mature cell (a skin cell) to another (a heart cell).

Bypassing the Stem Cell Stage

The team introduced the same four genes initially used to make iPS cells into adult skin fibroblast cells, but instead of letting the genes remain continuously active for several weeks, they switched off their activity after just a few days, long before the cells had turned into iPS cells. Once the four genes were switched off, the scientists gave the cells a signal to turn into heart cells.

“In 11 days, we went from skin cells to beating heart cells in a dish,” said Ding. “It was phenomenal to see.”

Ding points out the protocol is fundamentally different from what has been done by other scientists in the past and notes that giving the cells a different kind of signal could turn them into brain cells or pancreatic cells.

“It is like launching a rocket," he said. "Until now, people thought you needed to first land the rocket on the moon and then from there you could go to other planets. But here we show that just after the launch you can redirect the rocket to another planet without having to first go to the moon. This is a totally new paradigm.”

In addition to pursuing a better understanding of the basic biology of stem cells, the team's next step will be to modify the technique to remove the need for inserting the four genes, which have been linked to the development of cancer. For this reason, many scientists, including Ding, have been working on techniques to develop iPS cells without these genes. That has proven difficult. But with the new protocol, which bypasses the iPS cell stage, the genes are needed for a much shorter time.

“Action for such a short period of time is a lot easier to replace,” Ding noted.

In addition to Ding, authors of the paper, "Conversion of mouse fibroblasts into cardiomyocytes using a direct reprogramming strategy," are Jem A. Efe, Simon Hilcove, Janghwan Kim, and Hongyan Zhou of Scripps Research, and Kunfu Ouyang, Gang Wang, and Ju Chen of the University of California, San Diego.

The research was funded by The Scripps Research Institute, California Institute for Regenerative Medicine, Fate Therapeutics, and the Esther B. O’Keeffe Foundation.

Contacts and sources:
The Scripps Research Institute

Full bibliographic information
Journal: Nature Cell Biology
Title: "Conversion of mouse fibroblasts into cardiomyocytes using a direct reprogramming strategy"
Date: Advance, online publication scheduled for January 30, 2011
Authors: Jem A. Efe, Simon Hilcove, Janghwan Kim, Hongyan Zhou, Kunfu Ouyang, Gang Wang, Ju Chen, and Sheng Ding

What An Expectant Mother Eats Affects Children’s Psychology in Later Life

How maternal essential fatty acid deficiency affects offspring is poorly understood. Dietary insufficiency of omega-3 fatty acids has been implicated in many disorders.

Researchers from Inserm and INRA, in collaboration with colleagues in Spain, studied mice fed a diet low in omega-3 fatty acids. They discovered that reduced levels of omega-3 had deleterious consequences on synaptic functions and emotional behaviours. Details of this work are available in the online version of the journal Nature Neuroscience, which can be accessed at:

In industrialized nations, diets have become impoverished in essential fatty acids since the beginning of the 20th century. The dietary ratio between omega-6 and omega-3 polyunsaturated fatty acids increased continuously over the course of the 20th century.

These fatty acids are "essential" lipids because the body cannot synthesize them de novo. They must therefore be provided through food, and their dietary balance is essential to maintain optimal brain function.

Olivier Manzoni (Head of Research at Inserm Unit 862, "Neurocentre Magendie," in Bordeaux, and Unit 901, "Institut de Neurobiologie de la Méditerranée," in Marseille) and Sophie Layé (Head of Research at INRA Unit 1286, "Nutrition et Neurobiologie Intégrative," in Bordeaux) and their co-workers hypothesized that chronic malnutrition during intra-uterine development may later influence synaptic activity involved in emotional behaviour (e.g. depression, anxiety) in adulthood.

To verify their hypotheses, the researchers studied mice fed a life-long diet imbalanced in omega-3 and omega-6 fatty acids. They found that omega-3 deficiency disturbed neuronal communication specifically. The researchers observed that only the cannabinoid receptors, which play a strategic role in neurotransmission, suffer a complete loss of function.

This neuronal dysfunction was accompanied by depressive behaviours among the malnourished mice. In omega-3 deficient mice, the usual effects produced by cannabinoid receptor activation, at both the synaptic and behavioural levels, no longer appear. Thus, the CB1R receptors lose their synaptic activity and the antioxidant effect of the cannabinoids disappears.

Consequently, the researchers discovered that among mice subjected to an omega-3 deficient dietary regime, synaptic plasticity, which is dependent on the CB1R cannabinoid receptors, is disturbed in at least two structures involved with reward, motivation and emotional regulation: the prefrontal cortex and the nucleus accumbens.

 These parts of the brain contain a large number of CB1R cannabinoid receptors and have important functional connections with each other.

"Our results can now corroborate clinical and epidemiological studies which have revealed associations between an omega-3/omega-6 imbalance and mood disorders", explain Olivier Manzoni and Sophie Layé. "To determine if the omega-3 deficiency is responsible for these neuropsychiatric disorders additional studies are, of course, required".

In conclusion, the authors estimate that their results provide the first biological components of an explanation for the observed correlation between omega-3 poor diets, which are very widespread in the industrialized world, and mood disorders such as depression.

Contacts and sources:
Olivier Manzoni, Research Director, Inserm
Sophie Layé, Research Director, INRA
Citation: "Nutritional omega-3 deficiency abolishes endocannabinoid-mediated neuronal functions"
Mathieu Lafourcade, Thomas Larrieu, Susana Mato, Anais Duffaud, Marja Sepers, Isabelle Matias, Veronique De Smedt, Virginie Labrousse, Lionel Bretillon, Carlos Matute, Rafael Rodríguez-Puertas, Sophie Layé, and Olivier J. Manzoni
Affiliations: Unité Inserm 862, Physiopathology of Synaptic Plasticity Group, Neurocentre Magendie, 146 Rue Léo-Saignat, F-33077 Bordeaux Cedex, France; INRA UMR 1286, CNRS UMR 5226, PsyNuGen, F-33077 Bordeaux Cedex, France; University of Bordeaux, F-33077 Bordeaux, France; Departments of Neuroscience and Pharmacology, University of the Basque Country, 48940 Leioa, Bizkaia, Spain; UMR1324 CGSA, INRA, 17 Rue Sully, 21065 Dijon, France; Unité Inserm 901, Marseille, 13009, France; Université de la Méditerranée UMR S901 Aix-Marseille 2, France; INMED, Marseille, France
Nature Neuroscience, January 30, 2011

Smaller And 100,000 Times More Energy-Efficient Electronic Chips Could Be Made Using Molybdenite, A Material Studied In Switzerland

Smaller and more energy-efficient electronic chips could be made using molybdenite. In an article appearing online January 30 in the journal Nature Nanotechnology, EPFL's Laboratory of Nanoscale Electronics and Structures (LANES) publishes a study showing that this material has distinct advantages over traditional silicon or graphene for use in electronics applications.

This is a digital model showing how molybdenite can be integrated into a transistor.
Credit: EPFL

A discovery made at EPFL could play an important role in electronics, allowing us to make transistors that are smaller and more energy efficient. Research carried out in the Laboratory of Nanoscale Electronics and Structures (LANES) has revealed that molybdenite, or MoS2, is a very effective semiconductor. This mineral, which is abundant in nature, is often used as an element in steel alloys or as an additive in lubricants. But it had not yet been extensively studied for use in electronics.

100,000 times less energy
"It's a two-dimensional material, very thin and easy to use in nanotechnology. It has real potential in the fabrication of very small transistors, light-emitting diodes (LEDs) and solar cells," says EPFL Professor Andras Kis, who carried out the study with his LANES colleagues Radisavljevic and Brivio and with Prof. Radenovic.

He compares its advantages with two other materials: silicon, currently the primary component used in electronic and computer chips, and graphene, whose discovery in 2004 earned University of Manchester physicists André Geim and Konstantin Novoselov the 2010 Nobel Prize in Physics.

One of molybdenite's advantages is that it is less voluminous than silicon, which is a three-dimensional material. "In a 0.65-nanometer-thick sheet of MoS2, the electrons can move around as easily as in a 2-nanometer-thick sheet of silicon," explains Kis. "But it's not currently possible to fabricate a sheet of silicon as thin as a monolayer sheet of MoS2."

Another advantage of molybdenite is that it can be used to make transistors that consume 100,000 times less energy in standby than traditional silicon transistors. A semiconductor with a "gap" must be used to turn a transistor on and off, and molybdenite's 1.8 electron-volt gap is ideal for this purpose.
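One way to see why a band gap matters for standby power is a back-of-the-envelope estimate (our illustration, not the analysis in the paper): the Boltzmann factor exp(−Eg/kT) sets the rough scale of thermally excited carriers that can leak through a channel that is switched off.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(gap_ev, temp_k=300.0):
    """exp(-Eg / kT): rough scale of thermally excited carriers across a gap."""
    return math.exp(-gap_ev / (K_B_EV * temp_k))

# A 1.8 eV gap at room temperature suppresses thermal excitation by roughly
# 30 orders of magnitude, which is why a gapped channel can shut off so completely.
print(boltzmann_factor(1.8) < 1e-30)  # → True
```

By the same estimate, a gapless material gives boltzmann_factor(0.0) == 1.0: with no gap there is nothing to suppress, which is exactly the contrast with graphene drawn below.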

Better than graphene
In solid-state physics, band theory is a way of representing the energies of electrons in a given material. In semiconductors, electron-free ranges of energy exist between these bands, the so-called "band gaps." If the gap is neither too small nor too large, certain electrons can hop across it. This gives a greater level of control over the electrical behavior of the material, which can then be turned on and off easily.

The existence of this gap in molybdenite also gives it an advantage over graphene. Considered today by many scientists as the electronics material of the future, the "semi-metal" graphene doesn't have a gap, and it is very difficult to artificially reproduce one in the material.
For more information:
Direct link to the article:
Contact: Professor Andras Kis, Laboratory of Nanoscale Electronics and Structures (LANES), tel: +41 21 693 39 25
Groups which were involved in or supported this research: European Research Council

U.S. Mineral Production Up 9% to $64 Billion in 2010

The value of mineral production in the U.S. increased 9 percent in 2010 over 2009, suggesting that the nonfuel minerals industries, particularly metals, were beginning to recover from the economic recession that began in December 2007 and lasted well into 2009.

The value of raw, nonfuel minerals mined in the U.S. was $64 billion in 2010, up from $59 billion in 2009, according to the U.S. Geological Survey’s annual release of mineral production statistics and summary of events and trends affecting domestic and global nonfuel minerals.

"During the past year, we began to see increases in domestic mineral production, after significant declines in 2009," said USGS Mineral Resources Program Coordinator Kathleen Johnson. "This report allows for timely research and analysis of our nation’s minerals sector."

The metals sector was marked by higher prices across the board and a substantial increase in tonnage of iron ore mined. The metals industries supported the overall gains in the minerals sector, offsetting a 6 percent decline in the value of non-metals in 2010.

The non-metallic minerals sector continued to decline in 2010, but at a slower rate than in 2009. More non-metallic mineral commodities showed increases in mine production and value than those that decreased, but the production and consumption of dominant materials, particularly those used in construction, declined.

U.S. dependence on foreign sources for minerals increased, continuing a trend that has been evident for more than 30 years. The U.S. relied on foreign sources to supply more than 50 percent of domestic consumption of 43 mineral commodities in 2010. The U.S. was 100 percent reliant on imports for 18 mineral commodities in 2010.

Minerals are a fundamental component of the U.S. economy. Final products, such as cars and houses, produced by major U.S. industries using mineral materials made up about 13 percent (more than $2.1 trillion) of the 2010 gross domestic product. Domestic raw materials, along with domestically recycled materials, were used to process mineral materials worth $578 billion, such as aluminum, brick, copper, fertilizers, and steel. These products were, in turn, used to produce cars, houses, and other products.

The report, Mineral Commodity Summaries 2011, is an annual report that includes statistics on about 90 mineral commodities and addresses events, trends, and issues in the domestic and international minerals industries. It is used by public- and private-sector analysts for planning and decision making in government and business.

The USGS is the sole Federal provider of objective assessments on mineral resources, production, consumption, and environmental effects. The USGS collects, analyzes, and disseminates current data on minerals industries in the United States and about 180 other countries.

The USGS report "Mineral Commodity Summaries 2011" is available online.

Source: USGS

Saturday, January 29, 2011

Fusion Energy A Step Closer To Reality: Plasma Stability Controlled

If you haven't heard about ITER, chances are you will soon. The scale and scope of the ITER project rank it among the most ambitious science endeavors of our time. With the Organization in place and site work completed, scientists are now poised to begin construction on the buildings that will house the ITER fusion experiments. Scroll down for some interesting facts about the project.

The ITER Tokamak will weigh 23 000 tons. The metal contained in the Eiffel Tower can't compare - it only weighs 7 300 tons. The ITER Tokamak will be as heavy as three Eiffel Towers.

Incorporation of control coils in the plasma vessel of the ASDEX Upgrade fusion device
Photo: IPP; Volker Rohde

Fusion is the process at the core of our Sun. What we see as light and feel as warmth is the result of a fusion reaction: Hydrogen nuclei collide, fuse into heavier Helium atoms and release tremendous amounts of energy in the process.


In the stars of our universe, gravitational forces have created the necessary conditions for fusion. Over billions of years, gravity gathered the Hydrogen clouds of the early Universe into massive stellar bodies. In the extreme density and temperature of their cores, fusion occurs.
Fusion is the energy source of the Universe, occurring in the core of the Sun and stars.
Credit: ITER

Atoms never rest: the hotter they are, the faster they move. In the core of our Sun, temperatures reach 15 000 000° Celsius. Hydrogen atoms are in a constant state of agitation, colliding at very great speeds. The natural electrostatic repulsion that exists between the positive charges of their nuclei is overcome, and the atoms fuse. The fusion of two light Hydrogen atoms (H-H) produces a heavier element, Helium.

The mass of the resulting Helium atom is not the exact sum of the two initial atoms, however: some mass has been lost and great amounts of energy have been gained. This is what Einstein's formula E=mc² describes: the tiny bit of lost mass (m), multiplied by the square of the speed of light (c²), results in a very large figure (E) which is the amount of energy created by a fusion reaction.

Every second, our Sun turns 600 million tons of Hydrogen into Helium, releasing an enormous amount of energy. But without the benefit of gravitational forces at work in our Universe, achieving fusion on Earth has required a different approach.
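Einstein's formula lets us check this figure. A quick sketch, assuming the textbook value that roughly 0.7% of the hydrogen mass is converted to energy when hydrogen fuses into helium:

```python
C = 2.998e8                    # speed of light, m/s
HYDROGEN_PER_SECOND = 6.0e11   # kg: 600 million tons of hydrogen fused each second
MASS_FRACTION_LOST = 0.007     # ~0.7% of the mass becomes energy in H -> He fusion

mass_lost = HYDROGEN_PER_SECOND * MASS_FRACTION_LOST  # kg of mass converted per second
power = mass_lost * C**2                              # watts, via E = m c^2
print(f"estimated solar power output: {power:.1e} W")
```

The result, a few times 10^26 watts, agrees with the Sun's measured luminosity, confirming that the "enormous amount of energy" really does follow from a tiny fractional mass loss.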

20th century fusion science has identified the most efficient fusion reaction to reproduce in the laboratory setting: the reaction between the two Hydrogen (H) isotopes Deuterium (D) and Tritium (T). The D-T fusion reaction produces the highest energy gain at the 'lowest' temperatures. It nonetheless requires temperatures of 150 000 000° Celsius to take place - ten times higher than the H-H reaction occurring at the Sun's core.

At extreme temperatures, electrons are separated from nuclei and a gas becomes a plasma - a hot, electrically charged gas. In a star as in a fusion device, plasmas provide the environment in which light elements can fuse and yield energy.

In ITER, the fusion reaction will be achieved in a tokamak device that uses magnetic fields to contain and control the hot plasma. The fusion between Deuterium and Tritium (D-T) will produce one Helium nucleus, one neutron and energy.
Three, two, one ... We have plasma! Inside the European JET Tokamak, both before and during operation.
Photo: EFDA, JET.

Compensation of edge instabilities in ASDEX Upgrade successful, pointing the way for ITER

The Helium nucleus carries an electric charge which will respond to the magnetic fields of the tokamak, and remain confined within the plasma. However some 80% of the energy produced is carried away from the plasma by the neutron which has no electrical charge and is therefore unaffected by magnetic fields. The neutrons will be absorbed by the surrounding walls of the tokamak, transferring their energy to the walls as heat.
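The 80% figure follows directly from the well-established energy split of the D-T reaction: of the 17.6 MeV released, the neutron carries 14.1 MeV and the helium nucleus 3.5 MeV.

```python
E_NEUTRON = 14.1  # MeV carried by the neutron in D + T -> He-4 + n
E_ALPHA = 3.5     # MeV carried by the helium nucleus (alpha particle)

# Fraction of the reaction energy leaving the plasma with the neutron
neutron_share = E_NEUTRON / (E_NEUTRON + E_ALPHA)
print(f"neutron energy share: {neutron_share:.0%}")  # ~80%
```

Because momentum is conserved between just two products, the lighter neutron necessarily takes the larger share, which is why the vessel walls, not the plasma, receive most of the fusion energy.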

In ITER, this heat will be dispersed through cooling towers. In the subsequent fusion plant prototype DEMO and in future industrial fusion installations, the heat will be used to produce steam and - by way of turbines and alternators - electricity.

Fusion Reactor

Credit: ITER

After barely a year of modification work, the first experiments conducted have already proved successful. Eight magnetic control coils on the wall of the plasma vessel of the ASDEX Upgrade fusion device have now succeeded in reducing perturbing instabilities of the plasma, so-called ELMs, to the level required. If these outbursts of the edge plasma become too severe, they can cause major damage to the plasma vessel in devices of the ITER class. The results now achieved go a long way towards solving this important problem for ITER.

The research objective of Max Planck Institute for Plasma Physics (IPP) at Garching is to develop a power plant that, like the sun, derives energy from fusion of atomic nuclei. Whether this is feasible is to be demonstrated with a fusion power of 500 megawatts by the ITER (Latin for “the way”) experimental fusion reactor, now being built at Cadarache, France, as an international cooperation. This requires that the fuel, an ionized low-density hydrogen gas – a plasma – be confined in a magnetic field cage without touching the wall of the plasma vessel and heated to ignition temperatures of over 100 million degrees.

The complex interaction between the charged plasma particles and the confining magnetic field can cause all kinds of perturbations of the plasma confinement. Edge Localized Modes (ELMs) are very much under discussion at present in relation to ITER. These cause the edge plasma to briefly lose its confinement and periodically hurl bundles of plasma particles and energy outwards to the vessel wall. Up to one-tenth of the total energy content is thus ejected. Whereas the present generation of medium-sized fusion devices can easily cope with this, in large-scale devices such as ITER it might overload, in particular, the divertor – specially equipped collector plates at the bottom of the vessel, to which the plasma edge layer is magnetically diverted. This would make continuous operation inconceivable.

This ELM instability is, however, not altogether unwelcome, because it expels undesirable impurities from the plasma. Instead of the usual hefty impacts the aim is therefore to achieve weaker but more frequent ELMs. The 300-million-euro decision, originally scheduled for last year, on how to achieve this tailor-made solution for ITER was postponed by the ITER team, pending incorporation of the control coils in ASDEX Upgrade. This was because other fusion devices using similar coils – DIII-D at San Diego being the first – came up with conflicting results.

The experiments on ASDEX Upgrade now pave the way to clarification: shortly after the power in the new control coils is switched on, the ELM impacts decline to a harmless level. But they occur often enough to prevent the accumulation of impurities in the plasma. The good confinement of the main plasma is also maintained. The ELMs do not regain their original intensity until the coil field is switched off. This experimental result goes a long way towards answering the question of how the energy produced in the ITER plasma can be properly extracted.

But the goal has not quite been attained, because the plasma edge of ITER cannot be completely simulated in smaller devices such as ASDEX Upgrade. It is therefore all the more important to understand exactly the processes underlying the suppression of ELMs; this calls for sophisticated measuring facilities for observation and a powerful theory group for clarification. The physical theory hitherto developed at IPP does fit the present results, but has yet to be checked and expanded. Until the decision on ITER scheduled for 2012, there is time to solve the problem for the test reactor – and for a future power plant.

The possibilities afforded by control coils on ASDEX Upgrade are still far from exhausted: another eight coils, to be added in 2012, will make many new investigations possible.

Source: Max-Planck-Institut für Plasmaphysik (IPP)

New Insight Into Greenland's Plumbing Provided by ERS Satellite

Warmer summers may paradoxically slow down the speed of glaciers flowing towards the sea, suggests new research. This investigation, using data from ESA's oldest environmental satellite, has important implications for future estimates of sea-level rise.

It has been well understood that, in recent years, glaciers on Greenland's massive ice sheet have been flowing towards the sea faster than they did in the past. This has been attributed, in part, to higher temperatures melting the surface of the ice sheet.

A moulin is a hole in a glacier that funnels meltwater from the surface to the bedrock beneath. This flow of water has important consequences for the speed at which the glacier moves.
Meltwater pouring into a moulin on Greenland
Credits: J. Box

The surface meltwater winds its way down to the bedrock through cracks and holes in the ice called moulins. At the base of the glacier, this water is generally thought to act as a lubricant, helping the ice sheet flow rapidly towards the sea.

However, acceleration of ice flow during the summer has proved difficult for scientists to model, leading to uncertainties in projections of future sea-level rise.

Ice-velocity map: An example of a two-dimensional ice-velocity map of the study area in southwest Greenland. The map is derived from Synthetic Aperture Radar images from ESA's veteran ERS-1 satellite separated by 35 days: 2 June to 7 July 1995.

Ice velocity map
Credits: University of Leeds

The letter published in this week's Nature journal explains how increased melting in the summer may actually be slowing down the flow of glaciers.

Using observations from ESA's veteran ERS-1 satellite, which in July will have been in orbit for 20 years, new research suggests that the internal drainage system of the ice sheet adapts to accommodate more meltwater, without speeding up the flow of the glacier.

Prof. Andrew Shepherd from the University of Leeds, UK, who led the study said, "It had been thought that more surface melting would speed up flow and cause the ice sheet to retreat faster, but our research suggests the process is more complicated."

Research centred on six landlocked glaciers in the southwest of Greenland and used data from the radar on ERS-1 from 1992 to 1998. This period included particularly warm summers in Greenland, with 1998 being one of the warmest on record.

"We used ERS-1 data and a technique called 'intensity tracking' over periods of 35 days to estimate the speed at which the glaciers were moving throughout the study," explained Prof. Shepherd.
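A minimal sketch of how such offset tracking converts to a speed: displacements measured between two radar images, divided by the time separating the acquisitions, give a velocity. The pixel size and offset below are illustrative assumptions, not values from the study:

```python
# Hypothetical numbers for illustration; the real processing chain
# cross-correlates SAR image patches to measure sub-pixel offsets.
PIXEL_SIZE_M = 20.0   # assumed ground pixel spacing in metres
BASELINE_DAYS = 35.0  # days separating the two ERS-1 acquisitions

def ice_velocity_m_per_day(offset_pixels: float) -> float:
    """Convert a tracked image offset into an ice speed in metres per day."""
    return offset_pixels * PIXEL_SIZE_M / BASELINE_DAYS

# A tracked offset of 0.35 pixels over 35 days:
print(ice_velocity_m_per_day(0.35))  # 0.2 m/day
```

The 35-day repeat cycle of ERS-1 sets the time baseline: a longer baseline accumulates a larger, easier-to-measure displacement, at the cost of averaging out shorter-term speed changes.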

"Our research suggests that increases in surface melting may not change the rate of flow at all. However, this doesn't mean that the ice sheet is safe from climate change because changes in ocean melting also play an important role."

Edge of the Greenland ice sheet near Ilulissat 
Edge of Greenland ice sheet
Credits: ESA

The observations from ERS-1 showed that although the initial ice speed-up was similar in all years, the glacier experienced a dramatic late summer slowdown in warmer years when there was more meltwater. The research team put this down to an efficient subglacial drainage during warm melt seasons – a process that is commonly observed in Alpine glacier systems.

Although there is more to understand about the dynamics of glacier motion, these new findings will need to be taken into account when assessing how much the Greenland ice sheet may contribute to future sea-level rise.

ERS-2 satellite: Like its predecessor ERS-1 (launched in July 1991 by Ariane 4), the ERS-2 satellite (launched on 21 April 1995, also by Ariane 4) monitors the ground day and night under all weather conditions thanks to its powerful sharp-eyed, cloud-piercing radars. ERS-2 also carries an instrument to help monitor the ozone layer.
ERS satellite
Credits: ESA

Launched in 1991, ERS-1 was Europe's first radar satellite dedicated to environmental monitoring. The success of this first mission provided the basis for the routine remote sensing we have come to rely upon today to unravel the complexities of the way Earth works.

ERS-1 and the follow-on ERS-2 have proved very innovative missions. To exploit the outstanding science derived from ERS data even more, investigations are currently being made into the possibility of an additional ERS-2 phase dedicated to ice monitoring before the end of the mission in mid-2011.

Source: ESA

Smoking Habits Are Transmitted From Mother To Daughter And Father To Son, Single Parent To Both Sexes

A European research group has studied how smoking habits are transmitted within the home. The results show that, in homes where both parents are present, there is a significant degree of inter-generational transmission of smoking habits between parents and children, particularly between individuals of the same gender.

A Nazi anti-smoking ad titled "The chain-smoker" saying "He does not devour it [the cigarette], it devours him"
Credit: Wikipedia

"Fathers transmit their smoking habits to a statistically significant level to their sons, and the same is true of mothers and daughters. However, if a mother smokes it does not seem to impact on the probability of her son smoking, and similarly a father that smokes does not affect his daughter", Loureiro, a researcher at the USC and co-author of the study, tells SINC.

The research, which has been published in the journal Oxford Bulletin of Economics and Statistics, is based on information from the British Household Panel Survey 1994-2002. "We selected this data source because it gives detailed information on the products consumed in households, including tobacco, making it possible to analyse the transmission of smoking habits between generations", the experts explain.

The study was carried out in homes where both parents were present as well as in single parent households, which were primarily headed by mothers.

"The results obtained show that, in terms of smoking habits, after taking socio-economic variables into account, daughters tend to imitate their mothers, while sons imitate their fathers", says Loureiro.

The estimated probability of a son smoking if both parents smoke is 24%, but this falls to almost 12% if neither parent smokes. For daughters, the probability of smoking if both parents smoke is 23%, also falling to 12% if neither parent smokes.

In single-parent households, mothers transmit their smoking habits to their children - regardless of their gender. In this case, a son's likelihood of smoking if the mother smokes is 32%, and 28% for a daughter.

"These results have clear importance in terms of designing public policies to combat smoking. Policies that are successful in reducing smoking habits among parents will also affect their children. Anti-smoking policies for young people need to be put in place that will also include the family and social context in which they live", explains Loureiro.

Source: Plataforma SINC

Psychology Of The Seven Deadly Sins

We see evidence of the seven deadly sins in action in the news almost daily and this month’s Psychologist magazine explores how relevant these ancient vices are to modern life.

Dr Christian Jarrett, editor of the award-winning Research Digest Blog, analyses the contemporary psychology view and provides evidence-based advice on how to avoid the temptations of sin.

The Research Digest blog will host a 'sin week' starting on 7 February. This will include top psychologists confessing their own sins, and seven new sins for the 21st century such as 'iphonophilia' – the sin of checking one's smartphone for updates whilst in conversation with people in the real world.

Christian said: “The original deadly sins were inspired by humankind’s struggle to rise above animalistic instincts and rein in our emotions. It’s the occasional success at these that makes us human. To sacrifice our own needs for the good of others or to postpone gratification today for a greater reward tomorrow. It’s our frequent inability to achieve this level of control that makes the sins as relevant today as they ever were.”

Elsewhere in the Psychologist, Louise Elliman looks at the strengths associated with Asperger's syndrome in 'Asperger's syndrome – difference or disorder' and Paul Howard-Jones asks how insights from neuroscience can provide more effective teaching and learning in 'From brain scan to lesson plan'.


Source: British Psychological Society (BPS)

Why Best Actress Oscar Curse Is Real Explained by Behaviorists

Will Academy Award nominees Nicole Kidman and Annette Bening be at higher risk for a divorce if they win the Oscar for best actress next month? A long line of best actress winners including Joan Crawford, Bette Davis, Halle Berry and Kate Winslet experienced the end of their marriages not long after taking home their awards.

Joan Crawford, 1948
Credit: Wikipedia

A study by researchers at the University of Toronto’s Rotman School of Management and Carnegie Mellon University finds that Oscar winners in the Best Actress category are at a higher risk of divorce than nominees who do not win. By contrast, Best Actor winners do not experience an increase in the risk of divorce after an Oscar.

“Research has shown that, in the general population, gender differences have historically given roles with greater power and status to men and roles with lesser status and power to women. Studies have demonstrated that breaching this social norm within a marriage—for example, when a wife earns more than her husband—can strain the relationship,” says Tiziana Casciaro, an assistant professor of organizational behavior at the Rotman School, who co-authored the study with Colleen Stuart, a post-doctoral fellow at Carnegie Mellon University, and Sue Moon, a PhD student at the Rotman School.

“It appears that even the marriages of Hollywood actresses at the top of their careers are not immune to the consequences of violating social norms that affect the wider population. Our results suggest that the sudden success reduces the longevity of their marriages,” says Stuart.

The study looked at the 751 nominees in the best actor and actress categories of the Academy Awards between 1936 and 2010. The results show that Best Actress winners have a 63% chance of their marriages ending sooner than the marriages of non-winners. The median marriage duration for Best Actress winners was 4.30 years, substantially lower than the 9.51-year median for non-winners. By contrast, the difference between Best Actor non-winners (median = 12.66 years) and Best Actor winners (median = 11.97 years) was not statistically significant.

The complete study is available online.


Contacts and sources:
Ken McGuffin
University of Toronto, Rotman School of Management

Molecular Machine Is A 1.2 Nanometer Record Player Molecule, A World First

Scientists at Kiel University have developed a molecular machine in the form of a record player.

A Kiel research group headed by the chemist, Professor Rainer Herges, has succeeded for the first time in directly controlling the magnetic state of a single molecule at room temperature. The paper will appear this Friday (28 January 2011) in Science magazine.

The switchable molecule, which is the result of a sub-project of the Collaborative Research Centre 677 "Function by Switching", could be used both in the construction of tiny electromagnetic storage units and in medical imaging.

An enhanced model of the molecular switch developed by the Kiel researchers on an old gramophone. This tiny machine basically works like a record player under blue-green/blue-violet light.
Copyright: CAU, Photo: Rainer Herges/Torsten Winkler

The scientists at Kiel University developed a molecular machine constructed in a similar way to a record player. The molecule consists of a nickel ion surrounded by a pigment ring (porphyrin), and a nitrogen atom which hovers above the ring like the tone arm on a record player. "When we irradiate this molecule with blue-green light, the nitrogen atom is placed exactly vertically to the nickel ion like a needle", Rainer Herges explains. "This causes the nickel ion to become magnetic, because the pairing of two electrons is cancelled out", says the chemistry professor.

The counter effect is blue-violet light: The nitrogen atom is raised, the electrons form a pair and the nickel ion is no longer magnetic. "We can repeat this switching of the magnetic state over 10,000 times by varied irradiation with the two different wavelengths of light, without wearing out the molecular machine or encountering side reactions", Herges enthuses.
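The switching behaviour described above can be summarised as a simple two-state machine. This toy model only illustrates the logic of the light-driven toggle, not the underlying coordination chemistry:

```python
# A toy state machine for the wavelength-controlled magnetic switch.
class RecordPlayerMolecule:
    def __init__(self):
        self.magnetic = False  # nitrogen "tone arm" raised, electrons paired

    def irradiate(self, light: str) -> None:
        if light == "blue-green":
            self.magnetic = True    # tone arm lowered onto the nickel ion
        elif light == "blue-violet":
            self.magnetic = False   # tone arm raised, electron pairing restored

mol = RecordPlayerMolecule()
mol.irradiate("blue-green")
assert mol.magnetic          # nickel ion is now magnetic
mol.irradiate("blue-violet")
assert not mol.magnetic      # switched back off
```

The key experimental point is that this toggle is non-destructive: the researchers report cycling it more than 10,000 times without wearing out the molecule.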

Professor Rainer Herges (left) and Marcel Dommaschk irradiate a solution of the molecular magnet switch with blue-green and blue-violet light. The researchers can change the magnetic state of the molecule using the light waves.

Copyright: CAU, Photo: Torsten Winkler

The newly discovered switch, with a diameter of only 1.2 nanometres, could be used as a tiny magnetic reservoir in molecular electronics. Most of all, hard disk manufacturers may be interested in this, as a higher storage capacity can be achieved by reducing the size of the magnetic particles on the surface of the disks. Professor Herges also believes the use of the magnetic switch in the medical field is feasible:

"The record player molecule can be used intravenously as a contrast agent in MRT (magnetic resonance tomography) in order to search for tumours or constricted blood vessels. Initial tests in the University Medical Center Schleswig-Holstein’s neuroradiology department were successful."

Caption: The record player molecule as a model. The arrows symbolise the magnetic state in the nickel ion which can be directly switched by contact with the nitrogen atom on the 'tone arm'.
Copyright: CAU, Diagram: Rainer Herges

As the signal-to-noise ratio is improved by the switching process, a smaller amount of the contrast agent is required than for the magnetic salts currently being used. In addition, according to Herges, the molecular machine could also serve as a basis for developing new contrast agents to depict such features as temperature, pH value or even certain biochemical markers in the body in a three-dimensional form. Rainer Herges lists the possible fields of application: "Using contrast agents such as these, it could be possible to localise centres of inflammation, detect tumours and visualise many metabolic processes."

The Christian-Albrechts-Universität zu Kiel has proven international expertise as a North German research university in the field of Nanoscience, for example, in the German Research Foundation’s Collaborative Research Centre 677 "Function by Switching". Furthermore, the CAU is applying for the current round of the Excellence Initiative with a nano-excellence cluster.

Source: Christian-Albrechts-Universität zu Kiel

Winter Storms, Etna Eruption, Australian Floods: EUMETSAT Extends Coverage Of Direct Readout Of Metop Data

EUMETSAT has extended the existing coverage of direct readout of data from the instruments on board its Metop polar-orbiting satellite via the Advanced High Resolution Picture Transmission (AHRPT) system, making the data available to more users globally.

“This is good news for users around the world who can benefit from the real-time transmission of Metop data,” said the EUMETSAT Director-General, Dr. Lars Prahm.

Metop-A AVHRR observes the record floods in the government area of Rockhampton (Queensland, Australia), along the winding Fitzroy River and the discharge into the sea.



With this initiative, AHRPT coverage is now being extended beyond the limited area over the North Atlantic and Europe covered by the operational AHRPT service since it was resumed at the beginning of 2009 following the onboard failure of the system on Metop-A. In July 2007, the satellite’s Solid State Power Amplifier (SSPA) stopped working due to heavy ion radiation affecting a power transistor.

Brisbane Flood: The city of Brisbane faces a partial evacuation as fresh rainfall brings more flooding to the region. In Toowoomba, west of Brisbane, flash floods resulted in multiple fatalities on Monday following heavy rains
Credit: Metop A

The extended AHRPT service now includes ascending orbits and covers much of Africa, Asia and the Pacific Ocean, in addition to the North Atlantic and Europe, thus reaching more users worldwide. The ascending orbits will be made while maintaining the same operational restrictions over the polar caps and South Atlantic Earth Magnetic Anomaly. More stringent risk reduction measures will be taken than for the descending passes because the Fast Dump Extract System covers the Northern Hemisphere with dumps on Svalbard during the satellite’s northbound passes, resulting in very little delay between the time of the observation and the dump itself for the northern regions and offering a safe and competitive alternative to AHRPT.

Etna's brief eruption, beginning at around 21:30 UTC, was observed by Meteosat-9. The eruption produced a mixed ice and SO2 cloud, and a faintly visible lava hot spot at the site of the eruption. Initial analysis does not indicate the presence of ash.


Credit: EUMETS

The SSPA components on the future Metop-B and -C satellites are being modified to cope with the radiation that adversely affected Metop-A. The AHRPT sub-system on these satellites is being fully re-engineered to ensure the continued availability of AHRPT data once they are launched in 2012 and 2016, respectively.

The first major winter cyclone over the eastern Mediterranean Sea brings long expected rains to the Middle East, causing stormy seas along the Egyptian coastline and a large scale dust outbreak over Syria, Iraq and Saudi Arabia.



The European Organisation for the Exploitation of Meteorological Satellites is an intergovernmental organisation based in Darmstadt, Germany, currently with 26 European Member States (Austria, Belgium, Croatia, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey and the United Kingdom) and five Cooperating States (Bulgaria, Estonia, Iceland, Lithuania, and Serbia).

EUMETSAT operates the geostationary satellites Meteosat-8 and -9 over Europe and Africa, and Meteosat-6 and -7 over the Indian Ocean.

Metop-A, the first European polar-orbiting meteorological satellite, was launched in October 2006 and has been delivering operational data since 15 May 2007.

The Jason-2 ocean altimetry satellite, launched on 20 June 2008, added ocean surface topography to the missions EUMETSAT conducts.

The data and products from EUMETSAT’s satellites make a significant contribution to weather forecasting and to the monitoring of the global climate.

18 October 2010: Megi, a category 5 super typhoon shortly before it hit the Philippines, crossing the island of Luzon. It later weakened to a category 2 typhoon as it headed towards southern China (source: SMHI).


A Little Nano-Disorder Goes A Long Way When Harnessing Solar Energy to Produce Hydrogen From Water

A little disorder goes a long way, especially when it comes to harnessing the sun’s energy. Scientists from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) jumbled the atomic structure of the surface layer of titanium dioxide nanocrystals, creating a catalyst that is both long lasting and more efficient than most materials in using the sun’s energy to extract hydrogen from water.

A nanoscale look at a photocatalyst that is both durable and very efficient. This high-resolution transmission electron microscope image of a titanium dioxide nanocrystal after hydrogenation reveals engineered disorder on the crystal's surface, a change that enables the photocatalyst to absorb infrared light.
Their photocatalyst, which accelerates light-driven chemical reactions, is the first to combine durability and record-breaking efficiency, making it a contender for use in several clean-energy technologies.

It could offer a pollution-free way to produce hydrogen for use as an energy carrier in fuel cells. Fuel cells have been eyed as an alternative to combustion engines in vehicles. Molecular hydrogen, however, exists naturally on Earth only in very low concentrations. It must be extracted from feedstocks such as natural gas or water, an energy-intensive process that is one of the barriers to the widespread implementation of the technology.

“We are trying to find better ways to generate hydrogen from water using sunshine,” says Samuel Mao, a scientist in Berkeley Lab’s Environmental Energy Technologies Division who led the research. “In this work, we introduced disorder in titanium dioxide nanocrystals, which greatly improves its light absorption ability and efficiency in producing hydrogen from water.”

Mao is the corresponding author of a paper on this research that was published online Jan. 20, 2011 in Science Express with the title “Increasing Solar Absorption for Photocatalysis with Black, Hydrogenated Titanium Dioxide Nanocrystals.” Co-authoring the paper with Mao are fellow Berkeley Lab researchers Xiaobo Chen, Lei Liu, and Peter Yu.

Mao and his research group started with nanocrystals of titanium dioxide, a semiconductor material used as a photocatalyst to accelerate chemical reactions, such as harnessing energy from the sun to supply electrons that split water into oxygen and hydrogen. Although durable, titanium dioxide isn't a very efficient photocatalyst. Scientists have worked to increase its efficiency by adding impurities and making other modifications.

The Berkeley Lab scientists tried a new approach. In addition to adding impurities, they engineered disorder into the ordinarily perfect atom-by-atom lattice structure of the surface layer of titanium dioxide nanocrystals. This disorder was introduced via hydrogenation.

Berkeley Lab scientist Samuel Mao leads a research team that is searching for sustainable ways to generate hydrogen for use in clean-energy technologies. In a first-of-its-kind development, they jumbled the surface layer of titanium dioxide nanocrystals, a feat that turned the material from white to black. It also created a photocatalyst whose efficiency outpaces others in using the sun’s energy to extract hydrogen from water.
(Photo by Roy Kaltschmidt, Berkeley Lab Public Affairs)

The result is the first disorder-engineered nanocrystal. One transformation was obvious: the usually white titanium dioxide nanocrystals turned black, a sign that engineered disorder yielded infrared absorption.

The scientists also surmised that the disorder boosted the photocatalyst's performance. To find out if their hunch was correct, they immersed disorder-engineered nanocrystals in water and exposed them to simulated sunlight. They found that 24 percent of the sunlight absorbed by the photocatalyst was converted into hydrogen when using a sacrificial reagent, a production rate about 100 times greater than the yields of most semiconductor photocatalysts under the same conditions. More work needs to be done to reach comparable efficiency without the use of a sacrificial reagent, according to Mao.

In addition, their photocatalyst did not show any signs of degradation during a 22-day testing period, meaning it is potentially durable enough for real-world use.

Its landmark efficiency stems largely from the photocatalyst's ability to absorb infrared light, making it the first titanium dioxide photocatalyst to absorb light in this wavelength range. It also absorbs visible and ultraviolet light. In contrast, most titanium dioxide photocatalysts absorb only ultraviolet light, and those containing defects may absorb visible light. Ultraviolet light accounts for less than ten percent of solar energy.
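That last figure is why infrared absorption matters so much. The tally below uses round, approximate fractions of terrestrial (AM1.5) solar irradiance by spectral band; the percentages are illustrative assumptions for this sketch, not numbers from the paper.

```python
# Approximate split of terrestrial (AM1.5) solar irradiance by band.
# These round percentages are illustrative assumptions, not from the paper.
solar_bands = {
    "ultraviolet (<400 nm)": 0.05,
    "visible (400-700 nm)": 0.43,
    "infrared (>700 nm)": 0.52,
}

# A UV-only photocatalyst can tap at most the UV fraction of sunlight;
# a material that also absorbs visible and infrared light can, in
# principle, draw on the whole spectrum.
uv_only = solar_bands["ultraviolet (<400 nm)"]
full_spectrum = sum(solar_bands.values())
print(f"UV-only photocatalyst: {uv_only:.0%} of solar energy accessible")
print(f"Full-spectrum absorber: {full_spectrum:.0%} accessible")
```

Under these rough fractions, moving from a UV-only absorber to one that also captures visible and infrared light raises the accessible share of solar energy from a few percent to essentially all of it.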

“The more energy from the sun that can be absorbed by a photocatalyst, the more electrons can be supplied to a chemical reaction, which makes black titanium dioxide a very attractive material,” says Mao, who is also an adjunct engineering professor at the University of California, Berkeley.

The team’s intriguing experimental findings were further elucidated by theoretical physicists Peter Yu and Lei Liu, who explored how jumbling the latticework of atoms on the nanocrystal’s surface via hydrogenation changes its electronic properties. Their calculations revealed that disorder, in the form of lattice defects and hydrogen, makes it possible for incoming photons to excite electrons, which then jump across a gap where no electron states can exist. Once across this gap, the electrons are free to energize the chemical reaction that splits water into hydrogen and oxygen.

“By introducing a specific kind of disorder, mid-gap electronic states are created accompanied by a reduced band gap,” says Yu, who is also a professor in the University of California at Berkeley’s Physics Department. “This makes it possible for the infrared part of the solar spectrum to be absorbed and contribute to the photocatalysis.”
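The link Yu describes between band gap and absorbable light follows from the standard photon-energy relation λ = hc/E: a smaller gap means longer wavelengths can excite electrons. The sketch below uses ~3.2 eV, a widely quoted literature value for pristine anatase titanium dioxide, and an illustrative reduced gap of ~1.5 eV for the disordered material; the 1.5 eV figure is an assumption for this example, not a value taken from the paper.

```python
# Convert a band-gap energy (eV) to the longest photon wavelength (nm)
# that can be absorbed, via lambda = h*c / E.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def cutoff_nm(gap_ev: float) -> float:
    """Longest absorbable wavelength (nm) for a given band gap (eV)."""
    return H * C / (gap_ev * EV) * 1e9

# Pristine anatase TiO2 (~3.2 eV): the cutoff sits in the near-ultraviolet.
print(f"3.2 eV gap -> {cutoff_nm(3.2):.0f} nm (ultraviolet)")
# Illustrative reduced gap (~1.5 eV): the cutoff moves into the infrared.
print(f"1.5 eV gap -> {cutoff_nm(1.5):.0f} nm (near-infrared)")
```

A 3.2 eV gap cuts off absorption near 387 nm, in the ultraviolet; shrinking the effective gap to 1.5 eV extends the cutoff past 800 nm, which is why a reduced band gap lets the infrared part of the solar spectrum contribute to photocatalysis.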

This research was supported by the Department of Energy’s Office of Energy Efficiency and Renewable Energy. Transmission electron microscopy imaging used to study the nanocrystals at the atomic scale was performed at the National Center for Electron Microscopy, a national user facility located at Berkeley Lab.

Lawrence Berkeley National Laboratory is a U.S. Department of Energy (DOE) national laboratory managed by the University of California for the DOE Office of Science. Berkeley Lab provides solutions to the world’s most urgent scientific challenges including sustainable energy, climate change, human health, and a better understanding of matter and force in the universe. It is a world leader in improving our lives through team science, advanced computing, and innovative technology.

Source: Lawrence Berkeley National Laboratory