
Tuesday, July 29, 2014

Brainwaves Can Predict Audience Reaction For Television Programming

Media and marketing experts have long sought a reliable method of forecasting responses from the general population to future products and messages. According to a study conducted at the City College of New York (CCNY) in partnership with Georgia Tech, it appears that the brain responses of just a few individuals are a remarkably strong predictor.

By analyzing the brainwaves of 16 individuals as they watched mainstream television content, researchers were able to accurately predict the preferences of large TV audiences, up to 90 percent in the case of Super Bowl commercials. The findings appear in a paper entitled "Audience Preferences Are Predicted by Temporal Reliability of Neural Processing," which was just published in the latest edition of Nature Communications.



"Alternative methods such as self-reports are fraught with problems as people conform their responses to their own values and expectations," said Jacek Dmochowski, lead author of the paper and a postdoctoral fellow at CCNY at the time the study was being conducted. However, brain signals measured using electroencephalography (EEG) can, in principle, alleviate this shortcoming by providing immediate physiological responses immune to such self-biasing. "Our findings show that these immediate responses are in fact closely tied to the subsequent behavior of the general population," he added.

Lucas Parra, Herbert Kayser Professor of Biomedical Engineering at CCNY and the paper's senior author, explained: "When two people watch a video, their brains respond similarly – but only if the video is engaging. Popular shows and commercials draw our attention and make our brainwaves very reliable; the audience is literally 'in-sync'."

In the study, participants watched scenes from The Walking Dead TV show and several commercials from the 2012 and 2013 Super Bowls. EEG electrodes were placed on their heads to capture brain activity. The reliability of the recorded neural activity was then compared to audience reactions in the general population using publicly available social media data provided by the Harmony Institute and ratings from USA Today's Super Bowl Ad Meter.
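The "reliability" here is essentially a measure of how similarly different viewers' brains respond moment to moment while watching the same content. As a rough, hypothetical sketch only – the published analysis of multi-electrode EEG is more involved – one could score a clip by the average pairwise correlation of viewers' EEG traces:

    import numpy as np

    def neural_reliability(eeg):
        # eeg: array of shape (n_viewers, n_samples), one trace per viewer,
        # all time-locked to the same video segment.
        # Returns the mean pairwise Pearson correlation across viewers;
        # higher values mean the audience is more "in sync".
        z = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
        corr = (z @ z.T) / eeg.shape[1]              # viewer-by-viewer correlation matrix
        pairs = corr[np.triu_indices(eeg.shape[0], k=1)]
        return pairs.mean()

    # toy example: 16 viewers sharing a common signal plus individual noise
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(60 * 256)           # 60 s of "video-driven" signal at 256 Hz
    viewers = shared + 2.0 * rng.standard_normal((16, 60 * 256))
    print(round(neural_reliability(viewers), 3))     # roughly 0.2 at this noise level

A per-clip score of this kind could then be compared against tweet counts, Nielsen ratings, or Ad Meter rankings, which is the spirit of the comparisons described below.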


"Brain activity among our participants watching The Walking Dead predicted 40 percent of the associated Twitter traffic," said Parra. "When brainwaves were in agreement, the number of tweets tended to increase." Brainwaves also predicted 60 percent of the Nielsen ratings that measure the size of a TV audience.

The study was even more accurate (90 percent) when comparing preferences for Super Bowl ads. For instance, researchers saw very similar brainwaves from their participants as they watched a 2012 Budweiser commercial that featured a beer-fetching dog. The general public voted the ad as their second favorite that year. The study found little agreement in the brain activity among participants when watching a GoDaddy commercial featuring a kissing couple. It was among the worst rated ads in 2012.

The CCNY researchers collaborated with Matthew Bezdek and Eric Schumacher from Georgia Tech to identify which brain regions are involved and explain the underlying mechanisms. Using functional magnetic resonance imaging (fMRI), they found evidence that brainwaves for engaging ads could be driven by activity in visual, auditory and attention brain areas.

"Interesting ads may draw our attention and cause deeper sensory processing of the content," said Bezdek, a postdoctoral researcher at Georgia Tech's School of Psychology.

Apart from applications to marketing and film, Parra is investigating whether this measure of attentional draw can be used to diagnose neurological disorders such as attention deficit disorder or mild cognitive decline. Another potential application is to predict the effectiveness of online educational videos by measuring how engaging they are.



Contacts and sources:
Jason Maderer
Georgia Institute of Technology

The Control Of Nature: Stewardship Of Fire Ecology By Native Californian Cultures

Before the colonial era, hundreds of thousands of people lived on the land now called California, and many of their cultures manipulated fire to control the availability of plants they used for food, fuel, tools, and ritual. Contemporary tribes continue to use fire to maintain desired habitat and natural resources.

Frank Lake, an ecologist with the U.S. Forest Service’s Pacific Southwest Station, will lead a field trip to the Stone Lake National Wildlife Refuge during the Ecological Society of America’s 99th Annual Meeting in Sacramento, Calif., this August. Visitors will learn about plant and animal species of cultural importance to local tribes. Don Hankins, a faculty associate at California State University at Chico and a member of the Miwok people, will co-lead the trip, which will end with a visit to the California State Indian Museum.

Stone Lake National Wildlife Refuge in Elk Grove, Calif.

Credit: Justine Belson/USFWS

Lake will also host a special session on a “sense of place,” sponsored by the Traditional Ecological Knowledge section of the Ecological Society, that will bring representatives of local tribes into the Annual Meeting to share their cultural and professional experiences working on tribal natural resources issues.

“The fascinating thing about the Sacramento Valley and the Miwok lands where we are taking the field trip is that it was a fire and flood system,” said Lake. “To maintain the blue and valley oak, you need an anthropogenic fire system.”

Lake, raised among the Yurok and Karuk tribes in the Klamath River area of northernmost California, began his career with an interest in fisheries, but soon realized he would need to understand fire to restore salmon. Fire exerts a powerful effect on ecosystems, including the quality and quantity of water available in watersheds, in part by reducing the density of vegetation.

“Those trees that have grown up since fire suppression are like straws sucking up the groundwater,” Lake said.

The convergence of the Sacramento and San Joaquin rivers was historically one of the largest salmon-bearing runs on the West Coast, Lake said, and the Miwok, Patwin and Yokut tribal peoples who lived in the area saw and understood how fire was involved.

California native cultures burned patches of forest in deliberate sequence to diversify the resources available within their region. The first year after a fire brought sprouts for forage and basketry. In 3 to 5 years, shrubs produced a wealth of berries. Mature trees remained for the acorn harvest, but burning also made way for the next generation of trees, to ensure a consistent future crop. Opening the landscape improved game and travel, and created sacred spaces.

“They were aware of the succession, so they staggered burns by 5 to 10 years to create mosaics of forest in different stages, which added a lot of diversity for a short proximity area of the same forest type,” Lake said. “Complex tribal knowledge of that pattern across the landscape gave them access to different seral stages of soil and vegetation when tribes made their seasonal rounds.”

In oak woodlands, burning killed mold and pests like the filbert weevil and filbert moth harbored by the duff and litter on the ground. People strategically burned in the fall, after the first rain, to hit a vulnerable point in the pests' life cycle and to maximize the next acorn crop. Lake thinks that understanding tribal use of these forest environments provides context for, and is relevant to, contemporary management and restoration of endangered ecosystems and tribal cultures.

“Working closely with tribes, the government can meet its trust responsibility and have accountability to tribes, and also fulfill the public trust of protection of life, property, and resources,” Lake said. “By aligning tribal values with public values you can get a win-win, reduce fire along wildland-urban interfaces, and make landscapes more resilient.”

Contacts and sources:
Liza Lester
Ecological Society of America

Violent Aftermath For The Warriors At Alken Enge

Four pelvic bones on a stick and bundles of desecrated bones testify to the ritual violence perpetrated on the corpses of the many warriors who fell in a major battle close to the Danish town of Skanderborg around the time Christ was born.

Denmark attracted international attention in 2012 when archaeological excavations revealed the bones of an entire army, whose warriors had been thrown into the bogs near the Alken Enge wetlands in East Jutland after losing a major engagement in the era around the birth of Christ. Work has continued in the area since then and archaeologists and experts from Aarhus University, Skanderborg Museum and Moesgaard Museum have now made sensational new findings.

Four pelvic bones on a stick. 

Photo: Peter Jensen, Aarhus University

“We have found a wooden stick bearing the pelvic bones of four different men. In addition, we have unearthed bundles of bones, bones bearing marks of cutting and scraping, and crushed skulls. Our studies reveal that a violent sequel took place after the fallen warriors had lain on the battlefield for around six months,” relates Project Manager Mads Kähler Holst from Aarhus University.
Religious act

The remains of the fallen were gathered together and all the flesh was cleaned from the bones, which were then sorted and brutally desecrated before being cast into the lake. The warriors’ bones are mixed with the remains of slaughtered animals and clay pots that probably contained food sacrifices.

“We are fairly sure that this was a religious act. It seems that this was a holy site for a pagan religion – a sacred grove – where the victorious conclusion of major battles was marked by the ritual presentation and destruction of the bones of the vanquished warriors,” adds Mads Kähler Holst.
Remains of corpses thrown in the lake

Geological studies have revealed that back in the Iron Age, the finds were thrown into the water from the end of a tongue of land that stretched out into Mossø lake, which was much larger back then than it is today.

“Most of the bones we find here are spread out over the lake bed seemingly at random, but the new finds have suddenly given us a clear impression of what actually happened. This applies in particular to the four pelvic bones. They must have been threaded onto the stick after the flesh was cleaned from the skeletons,” explains Field Director Ejvind Hertz from Skanderborg Museum.
Internal Germanic conflict

The battles near Alken Enge were waged during that part of the Iron Age when major changes were taking place in Northern Europe because the Roman Empire was expanding northwards, putting pressure on the Germanic tribes. This resulted in wars between the Romans and the Germanic tribes, and between the Germanic peoples themselves.

Archaeologists assume that the recent finds at the Alken dig stem from an internal conflict of this kind. Records kept by the Romans describe the macabre rituals practised by the Germanic peoples on the bodies of their vanquished enemies, but this is the first time that traces of an ancient holy site have been unearthed.


Contacts and sources:
Mads Kähler Holst
Aarhus University

Rocket Research Confirms X-Ray Glow Emanates From Galactic Hot Bubble

When we look up to the heavens on a clear night, we see an immense dark sky with uncountable stars. With a small telescope we can also see galaxies, nebulae, and the disks of planets. If you looked at the sky with an X-ray detector, you would see many of these same familiar objects; in addition, you would see the whole sky glowing brightly with X-rays. This glow is called the “diffuse X-ray background.”


Credit: NASA

At higher energies, the diffuse emission is due to point sources too distant and too faint to be seen individually. The origin of the soft X-ray glow, however, has been controversial even 50 years after it was first discovered. The longstanding debate centers on whether the soft X-ray emission comes from outside our solar system, from a hot bubble of gas called the local hot bubble, or from within the solar system, due to the solar wind colliding with diffuse gas.

New findings settle this controversy. A study published online Sunday in the journal Nature shows that the emission is dominated by the local hot bubble of gas (1 million degrees), with, at most, 40 percent of the emission originating within the solar system. The findings should put to rest the disagreement about the origin of the X-ray emission and confirm the existence of the local hot bubble.



“We now know that the emission comes from both sources, but is dominated by the local hot bubble,” said Massimiliano Galeazzi, professor and associate chair in the Department of Physics in the College of Arts and Sciences, and principal investigator of the study. “This is a significant discovery. Specifically, the existence or nonexistence of the local bubble affects our understanding of the galaxy close to the sun and can be used as the foundation for future models of the galaxy structure.”

Galeazzi, who led the investigation, and his collaborators from NASA, the University of Wisconsin-Madison, the University of Michigan, the University of Kansas, the Johns Hopkins University and CNES in France, launched a sounding rocket to analyze the diffuse X-ray emission, with the goal of identifying how much of that emission comes from within our solar system and how much from the local hot bubble.

UM’s Massimiliano Galeazzi, in blue on the left, and his collaborators ready the sounding rocket for launch with NASA engineers.
Credit: UM

“The DXL team is an extraordinary example of cross-disciplinary science, bringing together astrophysicists, planetary scientists, and heliophysicists,” said F. Scott Porter, astrophysicist at NASA’s Goddard Space Flight Center. “It’s unusual but very rewarding when scientists with such diverse interests come together to produce such groundbreaking results.”

The study measured the diffuse X-ray emission at low energy, what is referred to as the 1/4 keV band, corresponding to radiation with wavelength of the order of 5 nm.
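As a quick consistency check of those numbers, the photon energy–wavelength relation $E = hc/\lambda$ gives

    \lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\cdot nm}}{250\ \mathrm{eV}} \approx 5\ \mathrm{nm},

so a quarter-keV (roughly 250 eV) photon indeed corresponds to a wavelength of about 5 nm.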

“At that low energy, the light gets absorbed by the neutral gas in our galaxy, so the fact that we observe it means that the source must be ‘local,’ possibly within a few hundred light-years from earth,” Galeazzi said. “However, until now it was unclear whether it comes from within the solar system (within a few astronomical units from earth), or a very hot bubble of gas in the solar neighborhood (hundreds of light-years from earth). This is like traveling at night and seeing a light, not knowing if the light comes from 10 yards or 1,000 miles away.”

Interstellar bubbles are probably created by stellar winds and supernova explosions, which cast material outward, forming large cavities in the interstellar medium—the material that fills the space between the stars in a galaxy. Hot X-ray emitting gas can fill the bubble, if a second supernova occurs within the empty cavity.

X-ray emission also occurs within our solar system when the solar wind collides with interplanetary neutral gas. The solar wind is a stream of charged particles released with great energy from the atmosphere of the sun; it travels vast distances, carving out a region called the heliosphere. As these particles travel through space at supersonic speeds, they may collide with neutral hydrogen and helium atoms that enter the solar system due to the motion of the sun through the galaxy, capturing an electron and emitting X-rays. This is called the solar wind charge exchange process.
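Schematically, for one representative solar wind ion species (many species contribute, so this is illustrative rather than exhaustive), the charge exchange sequence is

    \mathrm{O^{8+} + H \rightarrow O^{7+*} + H^{+}}, \qquad \mathrm{O^{7+*} \rightarrow O^{7+}} + \gamma_{\mathrm{soft\ X\text{-}ray}},

where the captured electron lands in a highly excited state of the ion and a soft X-ray photon is emitted as the ion relaxes.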

The team refurbished and modernized an X-ray detector that was mounted on a sounding rocket. The X-ray detector was originally flown by the University of Wisconsin-Madison on multiple missions during the 1970s to map the soft X-ray sky. The current team, led by Galeazzi, rebuilt, tested, calibrated, and adapted the detectors to a modern NASA suborbital sounding rocket. Components from a 1993 Space Shuttle mission also were used. The sounding rocket mission, known as “The Diffuse X-ray emission from the Local Galaxy,” aimed at separating and quantifying the X-ray emission from the two suspected sources: the local hot bubble and the solar wind charge exchange. This was the first mission designed for this kind of study.

“X-ray telescopes on satellites can observe for long periods of time and have reasonably large collecting areas, but very tiny fields of view, so they are very good for studying a small area in great detail,” said Dan McCammon, professor of physics at the University of Wisconsin-Madison and one of the scientists who built the original instrument. “However, the observations for this experiment needed to look at a large part of the sky in a short time, to make sure the solar wind did not change during the measurements. The sounding rocket could do it 4,000 times faster.”

The rocket was launched with the support of NASA’s Wallops Flight Facility, from White Sands Missile Range in New Mexico, on December 12, 2012. It reached an altitude of 258 km (160 miles), and stayed above the Earth’s atmosphere for five minutes, enough time to carry out its mission successfully. The information collected was transmitted directly to researchers on the ground at the launch facility.

“The sounding rocket program allows us to conduct high-risk, high-payoff science quickly and inexpensively,” Porter said. “It is really one of NASA’s crown jewels.”

Galeazzi and collaborators are already planning the next launch, scheduled for December 2015. That mission will be similar in design and goals, but will carry multiple instruments to characterize the emission in more detail.

The Nature article is titled “The origin of the ‘local’ ¼ keV X-ray flux in both charge exchange and a hot bubble.” Other authors are M. Chiao, M.R. Collier, F. S. Porter, S. L. Snowden, N. E. Thomas and B. M. Walsh, from NASA’s Goddard Space Flight Center; T. Cravens and I. Robertson, from the Department of Physics and Astronomy, University of Kansas; D. Koutroumpa, from Université Versailles St-Quentin; Sorbonne Universités & CNRS/INSU, LATMOS-IPSL; K.D. Kuntz, from The Henry A. Rowland Department of Physics and Astronomy, Johns Hopkins University; R. Lallement, from GEPI Observatoire de Paris, CNRS, Université Paris Diderot; S. T. Lepri from the Department of Atmospheric, Oceanic, and Space Sciences, University of Michigan; D. McCammon and K. Morgan, from the Department of Physics, University of Wisconsin-Madison; and Y. Uprety and E. Ursino, from the UM Department of Physics.


Contacts and sources:
By Marie Guma-Diaz and Annette Gallagher
University of Miami

Citation:  Galeazzi et al. "The origin of the local 1/4-keV X-ray flux in both charge exchange and a hot bubble." Nature online, 27 July 2014.  

The Real Price Of Steak

New research reveals the comparative environmental costs of livestock-based foods.

We are told that eating beef is bad for the environment, but do we know its real cost? Are the other animal or animal-derived foods better or worse? New research at the Weizmann Institute of Science, conducted in collaboration with scientists in the US, compared the environmental costs of various foods and came up with some surprisingly clear results.

The findings, which appear in the Proceedings of the National Academy of Sciences (PNAS), will hopefully not only inform individual dietary choices, but those of governmental agencies that set agricultural and marketing policies.

Dr. Ron Milo
of the Institute’s Plant Sciences Department, together with his research student Alon Shepon, in collaboration with Tamar Makov of Yale University and Dr. Gidon Eshel in New York, asked which types of animal-based food one should consume, environmentally speaking. Though many studies have addressed parts of the issue, none has done a thorough, comparative study that gives a multi-perspective picture of the environmental costs of food derived from animals.

Credit: Weizmann Institute of Science

The team looked at the five main sources of protein in the American diet: dairy, beef, poultry, pork and eggs. Their idea was to calculate the environmental inputs – the costs – per nutritional unit: a calorie or gram of protein. The main challenge the team faced was to devise accurate, faithful input values. 

For example, cattle grazing on arid land in the western half of the US use enormous amounts of land, but relatively little irrigation water. Cattle in feedlots, on the other hand, eat mostly corn, which requires less land, but much more irrigation and nitrogen fertilizer. The researchers needed to account for these differences, but determine aggregate figures that reflect current practices and thus approximate the true environmental cost for each food item.

The inputs the researchers employed came from the US Department of Agriculture databases, among other resources. Using the US for this study is ideal, says Milo, because much of the data quality is high, enabling them to include, for example, figures on import-export imbalances that add to the cost. The environmental inputs the team considered included land use, irrigation water, greenhouse gas emissions, and nitrogen fertilizer use. Each of these costs is a complex environmental system. For example, land use, in addition to tying up this valuable resource in agriculture, is the main cause of biodiversity loss. Nitrogen fertilizer creates water pollution in natural waterways.

When the numbers were in, including those for the environmental costs of different kinds of feed (pasture, roughage such as hay, and concentrates such as corn), the team developed equations that yielded values for the environmental cost – per calorie and then per unit of protein, for each food.
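In outline, the calculation normalizes each environmental input by the nutritional content of the food. A minimal sketch, with made-up placeholder numbers rather than the study's USDA-derived figures, might look like this:

    # Placeholder inputs per kg of edible product -- illustrative values only,
    # not the figures used in the PNAS study.
    foods = {
        #            land_m2  water_L  ghg_kgCO2e  nitrogen_g  kcal  protein_g
        "beef":     (300.0,   1500.0,   30.0,       300.0,      2500, 190),
        "poultry":  ( 12.0,    300.0,    4.0,        50.0,      2100, 210),
    }

    def cost_per_nutritional_unit(land, water, ghg, nitrogen, kcal, protein):
        # Express each environmental input per 1000 kcal and per gram of protein.
        inputs = {"land_m2": land, "water_L": water,
                  "ghg_kgCO2e": ghg, "nitrogen_g": nitrogen}
        per_1000_kcal = {k: v / kcal * 1000.0 for k, v in inputs.items()}
        per_g_protein = {k: v / protein for k, v in inputs.items()}
        return per_1000_kcal, per_g_protein

    for name, values in foods.items():
        per_kcal, per_protein = cost_per_nutritional_unit(*values)
        print(name, per_kcal, per_protein)

The study aggregates figures of this kind across production systems (pasture, feedlot, and so on), weighted by current US practice, before making the comparison.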

The calculations showed that the biggest culprit, by far, is beef. That was no surprise, say Milo and Shepon. The surprise was in the size of the gap: In total, eating beef is more costly to the environment by an order of magnitude – about ten times on average – than other animal-derived foods, including pork and poultry. 

Compared with eggs or poultry, cattle require on average 28 times more land and 11 times more irrigation water, release 5 times more greenhouse gases, and consume 6 times as much nitrogen. Poultry, pork, eggs and dairy all came out fairly similar. That was also surprising, because dairy production is often thought to be relatively environmentally benign. But the research shows that the price of irrigating and fertilizing the crops fed to milk cows – as well as the relative inefficiency of cows in comparison to other livestock – jacks up the cost significantly.

Milo believes that this study could have a number of implications. In addition to helping individuals make better choices about their diet, it should hopefully help inform agricultural policy. And the tool the team has created for analyzing the environmental costs of agriculture can be expanded and refined to be applied, for example, to understanding the relative cost of plant-based diets, or those of other nations. In addition to comparisons, it can point to areas that might be improved. Models based on this study can help policy makers decide how to better ensure food security through sustainable practices.

Dr. Ron Milo’s research is supported by the Mary and Tom Beck-Canadian Center for Alternative Energy Research; the Lerner Family Plant Science Research Endowment Fund; the European Research Council; the Leona M. and Harry B. Helmsley Charitable Trust; Dana and Yossie Hollander, Israel; the Jacob and Charlotte Lehrman Foundation; the Larson Charitable Foundation; the Wolfson Family Charitable Trust; Charles Rothschild, Brazil; Selmo Nissenbaum, Brazil; and the estate of David Arthur Barton. Dr. Milo is the incumbent of the Anna and Maurice Boukstein Career Development Chair in Perpetuity.


Contacts and sources:
Weizmann Institute of Science

Mutations From Venus, Mutations From Mars

Weizmann Institute researchers explain why genetic fertility problems can persist in a population

Some 15% of adults suffer from fertility problems, many of these due to genetic factors. This is something of a paradox: We might expect such genes, which reduce an individual’s ability to reproduce, to disappear from the population. Research at the Weizmann Institute of Science that recently appeared in Nature Communications may now have solved this riddle. Not only can it explain the high rates of male fertility problems, it may open new avenues in understanding the causes of genetic diseases and their treatment.

Various theories explain the survival of harmful mutations: A gene that today causes obesity, for example, may once have granted an evolutionary advantage, or a disease-causing gene may persist because it is passed on in a small, relatively isolated population.

Dr. Moran Gershoni, a postdoctoral fellow in the group of Prof. Shmuel Pietrokovski of the Molecular Genetics Department, decided to investigate another approach – one based on differences between males and females. Although males and females carry nearly identical sets of genes, many are activated differently in each sex. So natural selection works differently on the same genes in males and females.

Genes that affect only half the population will have double the mutation rate


Take, for example, a mutation that impairs breast milk production. It will undergo negative selection only in women. Conversely, a hypothetical gene variant that benefits women but is harmful to men could spread in a population, as it undergoes positive selection in half that population. Gershoni and Pietrokovski created a mathematical model for harmful mutations that affect only half the population; their model showed that these mutations should occur twice as often as those that affect males and females equally.
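A back-of-the-envelope mutation–selection-balance argument (a simplification, not the authors' full model) shows where the factor of two comes from. If a deleterious mutation arises at rate \mu and carriers pay a fitness cost s, its equilibrium frequency is roughly \mu/s; if only one sex pays the cost, the population-averaged cost is s/2 and the equilibrium frequency doubles:

    q_{\text{both sexes}} \approx \frac{\mu}{s}, \qquad q_{\text{one sex only}} \approx \frac{\mu}{s/2} = \frac{2\mu}{s}.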

To test the model, the researchers performed a computational analysis of the activity of all human genes recorded in public databases, identifying 95 genes that are exclusively active in the testes. Most of these genes are vital for procreation, and damage to them leads, in many cases, to male sterility.

The researchers then looked at these 95 genes in people whose genomes had been made available through the 1000 Genomes Project, which gave them a broad cross-section of human populations. Their analysis revealed that genes that are active only in the testes have double the harmful mutation rate of those that are active in both sexes – right in line with the mathematical model. Pietrokovski and his team are now conducting follow-up experiments to see whether the mutations they identified do, indeed, play a role in these problems and whether the “sex-difference” approach can explain their survival.

This new understanding of the persistence of genetic mutations could yield insights into other diseases with genetic components, especially those that affect the sexes asymmetrically, including schizophrenia and Parkinson’s, which are more likely to affect men, and depression and autoimmune diseases, which affect more women. And, say Gershoni and Pietrokovski, these findings highlight the need to fit even common medical treatments to the gender of the patient.

Prof. Shmuel Pietrokovski is the incumbent of the Herman and Lilly Schilling Foundation Professorial Chair.

Contacts and sources:
Weizmann Institute of Science

Measuring The Smallest Magnets - Two Single Electrons

Weizmann Institute of Science physicists measured magnetic interactions between single electrons

Imagine trying to measure a tennis ball that bounces wildly, every time to a distance a million times its own size. The bouncing obviously creates enormous “background noise” that interferes with the measurement. But if you attach the ball directly to a measuring device, so they bounce together, you can eliminate the noise problem.

As reported recently in Nature, physicists at the Weizmann Institute of Science used a similar trick to measure the interaction between the smallest possible magnets – two single electrons – after neutralizing magnetic noise that was a million times stronger than the signal they needed to detect.

An illustration showing the magnetic field lines of two electrons, arranged so that their spins point in opposite directions

Dr. Roee Ozeri of the Institute’s Physics of Complex Systems Department says: “The electron has spin, a form of orientation involving two opposing magnetic poles. In fact, it’s a tiny bar magnet.” The question is whether pairs of electrons act like regular bar magnets in which the opposite poles attract one another.

Dr. Shlomi Kotler performed the study while a graduate student under Dr. Ozeri’s guidance, with Drs. Nitzan Akerman, Nir Navon and Yinnon Glickman. Detecting the magnetic interaction of two electrons poses an enormous challenge: When the electrons are at a close range – as they normally are in an atomic orbit – forces other than the magnetic one prevail. On the other hand, if the electrons are pulled apart, the magnetic force becomes dominant, but so weak in absolute terms that it’s easily drowned out by ambient magnetic noise emanating from power lines, lab equipment and the earth’s magnetic field.

The scientists overcame the problem by borrowing a trick from quantum computing that protects quantum information from outside interference. This technique binds two electrons together so that their spins point in opposite directions. Thus, like the bouncing tennis ball attached to the measuring device, the combination of equal but opposite spins makes the electron pair impervious to magnetic noise.
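A schematic way to see why the antiparallel arrangement is immune to this noise (a simplified two-spin picture, not the full experimental treatment): a fluctuating common field B(t) couples to the total spin projection, and states with equal and opposite projections acquire no relative phase from it:

    H_{\text{noise}} = \gamma\, B(t)\,(S_{1z} + S_{2z}), \qquad H_{\text{noise}}\,|\!\uparrow\downarrow\rangle = 0 = H_{\text{noise}}\,|\!\downarrow\uparrow\rangle,

so any superposition of the two antiparallel states is unaffected by the common-mode field, while the far weaker electron–electron magnetic interaction within this subspace can still accumulate a measurable phase.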

The Weizmann scientists built an electric trap in which two electrons are bound to two strontium ions that are cooled close to absolute zero and separated by 2 micrometers (millionths of a meter). At this distance, which is astronomic by the standards of the quantum world, the magnetic interaction is very weak. But because the electron pairs were not affected by external magnetic noise, the interactions between them could be measured with great precision. The measurement lasted for 15 seconds – tens of thousands of times longer than the milliseconds during which scientists have until now been able to preserve quantum data.

The measurements showed that the electrons interacted magnetically just as two large magnets do: Their north poles repelled one another, rotating on their axes until their unlike poles drew near. This is in line with the predictions of the Standard Model, the currently accepted theory of matter. Also as predicted, the magnetic interaction weakened in proportion to the inverse cube of the distance between the electrons.
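That scaling matches the textbook magnetic dipole–dipole interaction between two moments \boldsymbol{\mu}_1 and \boldsymbol{\mu}_2 separated by a distance r:

    U(\mathbf{r}) = \frac{\mu_0}{4\pi r^{3}}\left[\boldsymbol{\mu}_1\cdot\boldsymbol{\mu}_2 - 3\,(\boldsymbol{\mu}_1\cdot\hat{\mathbf{r}})(\boldsymbol{\mu}_2\cdot\hat{\mathbf{r}})\right],

whose strength falls off as 1/r^3, in agreement with the measured distance dependence.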

In addition to revealing a fundamental principle of particle physics, the measurement approach may prove useful in such areas as the development of atomic clocks or the study of quantum systems in a noisy environment.

Dr. Roee Ozeri’s research is supported by the Crown Photonics Center; the Yeda-Sela Center for Basic Research; the Wolfson Family Charitable Trust; Martin Kushner Schnur, Mexico; Friends of the Weizmann Institute of Science in Memory of Richard Kronstein; and the Zumbi Stiftung.



Contacts and sources:
Weizmann Institute of Science

Learning The Smell Of Fear: Mothers Teach Babies Their Own Fears Via Odor, U-M Research Finds

Babies can learn what to fear in the first days of life just by smelling the odor of their distressed mothers, new research suggests. And not just “natural” fears: If a mother experienced something before pregnancy that made her fear something specific, her baby will quickly learn to fear it too -- through the odor she gives off when she feels fear.

The study involved rat mothers and pups, and found that mothers conditioned to fear the smell of peppermint could transmit that fear to their babies simply through the odor they gave off while feeling that fear. 

Photo illustration - research animals not shown 
 Credit: University of Michigan Health System

In the first direct observation of this kind of fear transmission, a team of University of Michigan Medical School and New York University researchers studied mother rats who had learned to fear the smell of peppermint – and showed how they “taught” this fear to their babies in their first days of life through the alarm odor they released during distress.

In a new paper in the Proceedings of the National Academy of Sciences, the team reports how they pinpointed the specific area of the brain where this fear transmission takes root in the earliest days of life.

Their findings in animals may help explain a phenomenon that has puzzled mental health experts for generations: how a mother’s traumatic experience can affect her children in profound ways, even when it happened long before they were born.

The researchers also hope their work will lead to better understanding of why not all children of traumatized mothers, or of mothers with major phobias, other anxiety disorders or major depression, experience the same effects.

Jacek Debiec, M.D., Ph.D.
 Credit: University of Michigan Health System

“During the early days of an infant rat’s life, they are immune to learning information about environmental dangers. But if their mother is the source of threat information, we have shown they can learn from her and produce lasting memories,” says Jacek Debiec, M.D., Ph.D., the U-M psychiatrist and neuroscientist who led the research.

“Our research demonstrates that infants can learn from maternal expression of fear, very early in life,” he adds. “Before they can even make their own experiences, they basically acquire their mothers’ experiences. Most importantly, these maternally-transmitted memories are long-lived, whereas other types of infant learning, if not repeated, rapidly perish.”

Peering inside the fearful brain

Debiec, who treats children and mothers with anxiety and other conditions in the U-M Department of Psychiatry, notes that the research on rats allows scientists to see what’s going on inside the brain during fear transmission, in ways they could never do in humans.

He began the research during his fellowship at NYU with Regina Marie Sullivan, Ph.D., senior author of the new paper, and continues it in his new lab at U-M’s Molecular and Behavioral Neuroscience Institute.

The researchers taught female rats to fear the smell of peppermint by exposing them to mild, unpleasant electric shocks while they smelled the scent, before they were pregnant. Then after they gave birth, the team exposed the mothers to just the minty smell, without the shocks, to provoke the fear response. They also used a comparison group of female rats that didn’t fear peppermint.

They exposed the pups of both groups of mothers to the peppermint smell, under many different conditions with and without their mothers present.

Using special brain imaging, and studies of genetic activity in individual brain cells and cortisol in the blood, they zeroed in on a brain structure called the lateral amygdala as the key location for learning fears. During later life, this area is key to detecting and planning response to threats – so it makes sense that it would also be the hub for learning new fears.

But the fact that these fears could be learned in a way that lasted, during a time when the baby rat’s ability to learn any fears directly was naturally suppressed, is what makes the new findings so interesting, says Debiec.

The team even showed that the newborns could learn their mothers’ fears when the mothers weren’t present. Just the piped-in scent of their mother reacting to the peppermint odor she feared was enough to make them fear the same thing.

 Credit: University of Michigan Health System

Even when just the odor of the frightened mother was piped into a chamber where baby rats were exposed to the peppermint smell, the babies developed a fear of the same smell, and their blood cortisol levels rose when they smelled it.

And when the researchers gave the baby rats a substance that blocked activity in the amygdala, they failed to learn the fear of peppermint smell from their mothers. This suggests, Debiec says, that there may be ways to intervene to prevent children from learning irrational or harmful fear responses from their mothers, or reduce their impact.

From animals to humans: next steps

The new research builds on what scientists have learned over time about the fear circuitry in the brain, and what can go wrong with it. That work has helped psychiatrists develop new treatments for human patients with phobias and other anxiety disorders – for instance, exposure therapy that helps them overcome fears by gradually confronting the thing or experience that causes their fear.

In much the same way, Debiec hopes that exploring the roots of fear in infancy, and how maternal trauma can affect subsequent generations, could help human patients. While it’s too soon to know if the same odor-based effect happens between human mothers and babies, the role of a mother’s scent in calming human babies has been shown.

Debiec, who hails from Poland, recalls working with the grown children of Holocaust survivors, who experienced nightmares, avoidance instincts and even flashbacks related to traumatic experiences they never had themselves. While they would have learned about the Holocaust from their parents, this deeply ingrained fear suggests something more at work, he says.

Going forward, he hopes to work with U-M researchers to observe human infants and their mothers -- including U-M psychiatrist Maria Muzik, M.D. and psychologist Kate Rosenblum, Ph.D., who run a Women and Infants Mental Health clinic and research program and also work with military families. The program is currently seeking women and their children to take part in a range of studies; those interested in learning more can call the U-M Mental Health Research Line at (734) 232-0255.

The research was supported by the National Institutes of Health (DC009910, MH091451), by a NARSAD Young Investigator Award from the Brain and Behavior Research Foundation, and by University of Michigan funds. Reference: www.pnas.org/cgi/doi/10.1073/pnas.1316740111



Contacts and sources:
University of Michigan Health System

Impact Of Deepwater Horizon Oil Spill On Coral Communities Is Deeper And Broader Than Predicted

A new discovery of two additional coral communities showing signs of damage from the Deepwater Horizon oil spill expands the impact footprint of the 2010 spill in the Gulf of Mexico.

A colony of coral at the impacted site 6 km from the Deepwater Horizon oil spill taken in June 2014. The patchy brown growth on the normally gold-colored coral is not found on healthy colonies and is diagnostic for corals impacted during the spill.
Credit: Fisher lab, Penn State University

The discovery was made by a team led by Charles Fisher, professor of biology at Penn State University. A paper describing this work and additional impacts of human activity on corals in the Gulf of Mexico will be published during the last week of July 2014 in the online Early Edition of the journal Proceedings of the National Academy of Sciences.



"The footprint of the impact of the spill on coral communities is both deeper and wider than previous data indicated," said Fisher. "This study very clearly shows that multiple coral communities, up to 22 kilometers from the spill site and at depths over 1800 meters, were impacted by the spill."

The oil from the spill in the Gulf of Mexico has largely dissipated, so other clues now are needed to identify marine species impacted by the spill. Fisher's team used the current conditions at a coral community known to have been impacted by the spill in 2010 as a model "fingerprint" for gauging the spill's impact in newly discovered coral communities.

Unlike other species impacted by the spill whose remains quickly disappeared from the ocean floor, corals form a mineralized skeleton that can last for years after the organism has died. "One of the keys to coral's usefulness as an indicator species is that the coral skeleton retains evidence of the damage long after the oil that caused the damage is gone," said Fisher. 

The scientists compared the newly discovered coral communities with one they had discovered and studied around the time of the oil spill, using it as a model for the progression of damage caused by the spill over time. "We were able to identify evidence of damage from the spill in the two coral communities discovered in 2011 because we know exactly what our model coral colonies, impacted by the oil spill in 2010, looked like at the time we found the new communities."

Corals are sparse in the deep waters of the Gulf of Mexico, but because they act as an indicator species for tracking the impact of environmental disasters like the Deepwater Horizon blowout, the effort to find them pays off in useful scientific data. "We were looking for coral communities at depths of over 1000 meters that are often smaller than the size of a tennis court," said Fisher. "We needed high-resolution images of the coral colonies that are scattered across these communities and that range in size from a small houseplant to a small shrub."

Healthy colonies of coral with attached anemone and brittle stars at a site 1050 m deep, 183 km from the Deepwater Horizon oil spill. 
Credit: Fisher lab, Penn State University

To begin the search, the team used 3D seismic data from the Bureau of Ocean Energy Management to identify 488 potential coral habitats within a 40 km radius of the spill site. From that list they chose the 29 sites they judged most likely to contain corals impacted by the spill. The team then used towed camera systems and Sentry, an autonomous underwater vehicle (AUV) programmed to travel back and forth across specific areas, collecting images of the sites from just meters above the ocean floor. Finally, the team used a Shilling ultra-heavy-duty remotely operated vehicle (ROV) to collect high-resolution images of corals at the sites where they were discovered.

"With the cameras on board the ROV we were able to collect beautiful, high-resolution images of the corals," said Fisher. "When we compared these images with our example of known oil damage, all the signs were present providing clear evidence in two of the newly discovered coral communities of the impact of the Deepwater Horizon oil spill."

In searching for coral communities impacted by the Deepwater Horizon oil spill, the team also found two coral sites entangled with commercial fishing line. These additional discoveries serve as a reminder that the Gulf is being impacted by a diversity of human activities. 

A colony of coral from a newly discovered coral community with attached anemones and brittle stars from a site 6 km from the Deepwater Horizon oil spill site. The patchy brown growth on the normally gold-colored coral is not found on healthy colonies and is diagnostic for corals impacted during the spill.

Credit: Fisher lab, Penn State University

In addition to Fisher, the research team included Pen-Yuan Hsing, Samantha P. Berlet, Miles G. Saunders and Elizabeth A. Larcom from Penn State; Carl L. Kaiser, Dana R. Yoerger, and Timothy M. Shank from the Woods Hole Oceanographic Institution; Harry H. Roberts from Louisiana State University; William W. Shedd from the Bureau of Ocean Energy Management; Erik E. Cordes from Temple University; and James M. Brooks from TDI-Brooks International Inc.


The research was supported by the Assessment and Restoration Division of the National Oceanic and Atmospheric Administration (NOAA), the Gulf of Mexico Research Initiative funding to support the Ecosystem Impacts of Oil and Gas Inputs to the Gulf (ECOGIG) consortium administered by the University of Mississippi, and BP as part of the Deepwater Horizon Oil Spill Natural Resource Damage Assessment.



Contacts and sources:
Charles Fisher  
Barbara Kennedy (PIO)  
Penn State University

'Holy Grail' In Battery Design Close

Engineers across the globe have been racing to design smaller, cheaper and more efficient rechargeable batteries to meet the power storage needs of everything from handheld gadgets to electric cars. Now pure lithium anodes are closer to reality with the development of a protective layer of interconnected carbon domes.

In a paper published in the journal Nature Nanotechnology, researchers at Stanford University report that they have taken a big step toward accomplishing what battery designers have been trying to do for decades – design a pure lithium anode.

All batteries have three basic components: an anode that discharges electrons into a circuit, a cathode that receives them, and an electrolyte that carries ions between the two.

Today, we say we have lithium batteries, but that is only partly true. What we have are lithium ion batteries. The lithium is in the electrolyte, but not in the anode. An anode of pure lithium would be a huge boost to battery efficiency.

"Of all the materials that one might use in an anode, lithium has the greatest potential. Some call it the Holy Grail," said Yi Cui, a Stanford professor of materials science and engineering and leader of the research team. "It is very lightweight and it has the highest energy density. You get more power per volume and weight, leading to lighter, smaller batteries with more power."

Yi Cui, Stanford professor of materials science and engineering, and his team are designing a pure lithium anode for rechargeable batteries.
Credit: Steve Castillo

But engineers have long tried and failed to reach this Holy Grail.

"Lithium has major challenges that have made its use in anodes difficult," said Guangyuan Zheng, a doctoral candidate in Cui's lab and first author of the paper. "Many engineers had given up the search, but we found a way to protect the lithium from the problems that have plagued it for so long."

In addition to Cui and Zheng, the research team includes Steven Chu, the former U.S. secretary of energy and Nobel laureate who recently resumed his professorship at Stanford.

"In practical terms, if we can triple the energy density and simultaneously decrease the cost four-fold, that would be very exciting," Chu said. "We would have a cell phone with triple the battery life and an electric vehicle with a 300-mile range that cost $25,000 – and with better performance than an internal combustion engine car getting 40 mpg."

The engineering challenge

In the paper, the authors explain how they are overcoming the problems posed by lithium.

Steven Chu
Credit: L.A. Cicero


Most lithium ion batteries, like those in a smart phone or hybrid car, work similarly. The key components include an anode, the negative pole from which electrons flow out and into a power-hungry device, and the cathode, where the electrons re-enter the battery once they have traveled through the circuit. Separating them is an electrolyte, a solid or liquid loaded with positively charged lithium ions that travel between the anode and cathode.

During charging, the positively charged lithium ions in the electrolyte are attracted to the negatively charged anode and the lithium accumulates on the anode. Today, the anode in a lithium ion battery is actually made of graphite or silicon.

Engineers would like to use lithium for the anode, but so far they have been unable to do so. That's because the lithium expands as it accumulates on the anode during charging.

All anode materials, including graphite and silicon, expand somewhat during charging, but not like lithium. Researchers say that lithium's expansion during charging is "virtually infinite" relative to the other materials. Its expansion is also uneven, causing pits and cracks to form in the outer surface, like paint on the exterior of a balloon that is being inflated.

The resulting fissures on the surface of the anode allow the precious lithium ions to escape, forming hair-like or mossy growths, called dendrites. Dendrites, in turn, short-circuit the battery and shorten its life.

Preventing this buildup is the first challenge of using lithium for the battery's anode.

The second engineering challenge involves finding a way to deal with the fact that lithium anodes are highly chemically reactive with the electrolyte. This reactivity uses up the electrolyte and reduces battery life.

An additional problem is that the anode and electrolyte produce heat when they come into contact. Lithium batteries, including those in use today, can overheat to the point of fire, or even explosion. They are, therefore, a serious safety concern. The recent battery fires in Tesla cars and on Boeing's Dreamliner jet plane are prominent examples of the challenges of lithium ion batteries.

To solve these problems, the Stanford researchers built a protective layer of interconnected carbon domes on top of their lithium anode – a layer the team calls nanospheres.

The Stanford team's nanosphere layer resembles a honeycomb: it creates a flexible, uniform and non-reactive film that protects the unstable lithium from the drawbacks that have made it such a challenge. The carbon nanosphere wall is just 20 nanometers thick. It would take 5,000 layers stacked one atop another to equal the width of a single human hair.

"The ideal protective layer for a lithium metal anode needs to be chemically stable to protect against the chemical reactions with the electrolyte and mechanically strong to withstand the expansion of the lithium during charge," said Cui, who is a member of the Stanford Institute for Materials and Energy Sciences at SLAC National Accelerator Laboratory.

The Stanford nanosphere layer is just that. It is made of amorphous carbon, which is chemically stable, yet strong and flexible so as to move freely up and down with the lithium as it expands and contracts during the battery's normal charge-discharge cycle.

Ideal within reach

In technical terms, the nanospheres improve the coulombic efficiency of the battery – the ratio of the amount of lithium that can be extracted from the anode when the battery is in use to the amount deposited during charging. A single round of this give-and-take process is called a cycle.

Generally, to be commercially viable, a battery must have a coulombic efficiency of 99.9 percent or more, ideally over as many cycles as possible. Previous anodes of unprotected lithium metal achieved approximately 96 percent efficiency, which dropped to less than 50 percent in just 100 cycles – not nearly good enough. The Stanford team's new lithium metal anode achieves 99 percent efficiency even at 150 cycles.

"The difference between 99 percent and 96 percent, in battery terms, is huge," Cui said. "So, while we're not quite to that 99.9 percent threshold where we need to be, we're close and this is a significant improvement over any previous design. With some additional engineering and new electrolytes, we believe we can realize a practical and stable lithium metal anode that could power the next generation of rechargeable batteries."


Contacts and sources:
Andrew Myers
Stanford University

101 Geysers And More Revealed On Icy Saturn Moon By Cassini Spacecraft

Scientists using mission data from NASA’s Cassini spacecraft have identified 101 distinct geysers erupting on Saturn’s icy moon Enceladus. Their analysis suggests it is possible for liquid water to reach from the moon’s underground sea all the way to its surface.

This artist's rendering shows a cross-section of the ice shell immediately beneath one of Enceladus' geyser-active fractures, illustrating the physical and thermal structure and the processes ongoing below and at the surface.

Image Credit: NASA/JPL-Caltech/Space Science Institute

These findings, and clues to what powers the geyser eruptions, are presented in two articles published in the current online edition of the Astronomical Journal.

Over a period of almost seven years, Cassini’s cameras surveyed the south polar terrain of the small moon, a unique geological basin renowned for its four prominent "tiger stripe” fractures and the geysers of tiny icy particles and water vapor first sighted there nearly 10 years ago. The result of the survey is a map of 101 geysers, each erupting from one of the tiger stripe fractures, and the discovery that individual geysers are coincident with small hot spots. These relationships pointed the way to the geysers’ origin.

This Cassini narrow-angle camera image -- one of those acquired in the survey conducted by the Cassini imaging science team of the geyser basin at the south pole of Enceladus -- was taken as Cassini was looking across the moon's south pole. At the time, the spacecraft was essentially in the moon's equatorial plane. The image scale is 1280 feet (390 meters) per pixel and the sun-Enceladus-spacecraft, or phase, angle is 162.5 degrees.


Credit: NASA

After the first sighting of the geysers in 2005, scientists suspected repeated flexing of Enceladus by Saturn’s tides as the moon orbits the planet had something to do with their behavior. One suggestion included the back-and-forth rubbing of opposing walls of the fractures generating frictional heat that turned ice into geyser-forming vapor and liquid.

Alternate views held that the opening and closing of the fractures allowed water vapor from below to reach the surface. Before this new study, it was not clear which process was the dominating influence. Nor was it certain whether excess heat emitted by Enceladus was everywhere correlated with geyser activity.

Dramatic plumes, both large and small, spray water ice and vapor from many locations along the famed "tiger stripes" near the south pole of Saturn's moon Enceladus. The tiger stripes are four prominent, approximately 84-mile- (135-kilometer-) long fractures that cross the moon's south polar terrain.



Credit: NASA

To determine the surface locations of the geysers, researchers employed the same process of triangulation used historically to survey geological features on Earth, such as mountains. When the researchers compared the geysers’ locations with low-resolution maps of thermal emission, it became apparent the greatest geyser activity coincided with the greatest thermal radiation. Comparisons between the geysers and tidal stresses revealed similar connections. However, these correlations alone were insufficient to answer the question, “What produces what?”
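On the triangulation step: in the plane, the idea reduces to intersecting two sight lines drawn from known viewpoints. A minimal sketch with hypothetical coordinates, purely to illustrate the geometry rather than the imaging team's actual pipeline:

    import numpy as np

    def triangulate_2d(p1, d1, p2, d2):
        # Intersect two observation rays, each given by a viewpoint p and a
        # unit direction d, by solving p1 + t1*d1 = p2 + t2*d2 for t1 and t2.
        A = np.column_stack((d1, -d2))
        t = np.linalg.solve(A, p2 - p1)
        return p1 + t[0] * d1

    # two viewpoints sighting the same surface feature at different angles
    p1, d1 = np.array([0.0, 0.0]),  np.array([np.cos(0.8), np.sin(0.8)])
    p2, d2 = np.array([10.0, 0.0]), np.array([np.cos(2.2), np.sin(2.2)])
    print(triangulate_2d(p1, d1, p2, d2))   # estimated feature position

With several sight lines from different spacecraft positions, the same idea yields both a surface location and an uncertainty, which is what the circle sizes on the published map represent.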

The answer to this mystery came from comparison of the survey results with high-resolution data collected in 2010 by Cassini’s heat-sensing instruments. Individual geysers were found to coincide with small-scale hot spots, only a few dozen feet (or tens of meters) across, which were too small to be produced by frictional heating, but the right size to be the result of condensation of vapor on the near-surface walls of the fractures. This immediately implicated the hot spots as the signature of the geysering process.

On this polar stereographic map of Enceladus' south polar terrain, all 100 geysers whose source locations have been determined in Cassini's imaging survey of the moon's geyser basin have been plotted. The uncertainty attached to each location is given by the size of the surrounding circle.

Five sources are indicated by dashed circles. Each of these jets appeared only in images taken very close together in time; in other words, the source locations have been confidently determined, but their tilts are uncertain.
Credit: NASA

The two crosses -- one on Alexandria and one at the end of Baghdad -- indicate two jets. Each was observed in one image only but each was intersected by the shadow of Enceladus, as in PIA17184, allowing a determination of the fracture on which they lie.

“Once we had these results in hand we knew right away heat was not causing the geysers, but vice versa,” said Carolyn Porco, leader of the Cassini imaging team from the Space Science Institute in Boulder, Colorado, and lead author of the first paper. “It also told us the geysers are not a near-surface phenomenon, but have much deeper roots.”

This artist's rendering shows a regional cross-section of the ice shell underlying Enceladus' south polar terrain, illustrating our current knowledge of the physical and thermal structure and processes ongoing below and at the surface.

Credit:  NASA

Thanks to recent analysis of Cassini gravity data, the researchers concluded the only plausible source of the material forming the geysers is the sea now known to exist beneath the ice shell. They also found that narrow pathways through the ice shell can remain open from the sea all the way to the surface, if filled with liquid water.

In the companion paper, the authors report that the brightness of the plume formed by all the geysers, as seen with Cassini's high-resolution cameras, changes periodically as Enceladus orbits Saturn. Armed with the conclusion that the opening and closing of the fractures modulates the venting, the authors compared the observations with the venting schedule expected from the tides.

They found that the simplest model of tidal flexing provides a good match for the brightness variations Cassini observes, but it does not predict when the plume begins to brighten. Some other important effect is at work, and the authors considered several candidates in the course of their work.

The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory (JPL) in Pasadena, California, manages the mission for NASA's Science Mission Directorate in Washington. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging team consists of scientists from the United States, England, France and Germany. The imaging team is based at the Space Science Institute.


Contacts and sources:
Dwayne Brown
NASA Headquarters, Washington

Preston Dyches
Jet Propulsion Laboratory, Pasadena, Calif.

Steve Mullins
Space Science Institute, Boulder, Colo.

Friday, July 25, 2014

Saharan Dust Is Key To The Formation Of Bahamas' Great Bank Says New Research


A new study suggests that Saharan dust played a major role in the formation of the Bahamas islands. Researchers from the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science showed that iron-rich Saharan dust provides the nutrients necessary for specialized bacteria to produce the island chain's carbonate-based foundation.

Distribution of insoluble material in the sediments and collection sites are shown. The insoluble material is derived from atmospheric dust.
Credit: Peter Swart, Ph.D., UM Rosenstiel School of Marine and Atmospheric Science

UM Rosenstiel School Lewis G. Weeks Professor Peter Swart and colleagues analyzed the concentrations of two trace elements characteristic of atmospheric dust – iron and manganese – in 270 seafloor samples collected along the Great Bahama Bank over a three-year period. The team found that the highest concentrations of these trace elements occurred to the west of Andros Island, an area which has the largest concentration of whitings, white sediment-laden bodies of water produced by photosynthetic cyanobacteria.

"Cyanobacteria need 10 times more iron than other photosynthesizers because they fix atmospheric nitrogen," said Swart, lead author of the study. "This process draws down the carbon dioxide and induces the precipitation of calcium carbonate, thus causing the whiting. The signature of atmospheric nitrogen, its isotopic ratio is left in the sediments."


  This is the Great Bahama Bank.
Credit: NASA

Swart's team suggests that high concentrations of iron-rich dust blown across the Atlantic Ocean from the Sahara are responsible for the existence of the Great Bahama Bank, which has been built up over the last 100 million years from the sedimentation of calcium carbonate. The dust particles blown into the Bahamas' waters and directly onto the islands provide the nutrients necessary to fuel cyanobacteria blooms, which, in turn, produce carbonate whitings in the surrounding waters.

Persistent winds across Africa's 3.5-million-square-mile Sahara Desert lift mineral-rich sand into the atmosphere, where it travels a nearly 5,000-mile journey northwest toward the U.S. and the Caribbean.

The paper, titled "The fertilization of the Bahamas by Saharan dust: A trigger for carbonate precipitation?" was published in the early online edition of the journal Geology. The paper's authors include Swart, Amanda Oehlert, Greta Mackenzie, Gregor Eberli from the UM Rosenstiel School's Department of Marine Geosciences and John Reijmer of VU University Amsterdam in the Netherlands.



Contacts and sources:

Total Darkness At Night Is Key To Success Of Breast Cancer Therapy -- Tulane Study


Exposure to light at night, which shuts off nighttime production of the hormone melatonin, renders breast cancer completely resistant to tamoxifen, a widely used breast cancer drug, says a new study by Tulane University School of Medicine cancer researchers.

Principal investigators and co-leaders of Tulane's Circadian Cancer Biology Group, Steven Hill (left) and David Blask (right), and team members Robert Dauchy and Shulin Xiang.
Credit: Photograph by Paula Burch-Celentano, Tulane University

The study, "Circadian and Melatonin Disruption by Exposure to Light at Night Drives Intrinsic Resistance to Tamoxifen Therapy in Breast Cancer," published in the journal Cancer Research, is the first to show that melatonin is vital to the success of tamoxifen in treating breast cancer.

Principal investigators and co-leaders of Tulane's Circadian Cancer Biology Group, Steven Hill and David Blask, along with team members Robert Dauchy and Shulin Xiang, investigated the role of melatonin on the effectiveness of tamoxifen in combating human breast cancer cells implanted in rats.

"In the first phase of the study, we kept animals in a daily light/dark cycle of 12 hours of light followed by 12 hours of total darkness (melatonin is elevated during the dark phase) for several weeks," says Hill. "In the second study, we exposed them to the same daily light/dark cycle; however, during the 12 hour dark phase, animals were exposed to extremely dim light at night (melatonin levels are suppressed), roughly equivalent to faint light coming under a door."

Melatonin by itself delayed the formation of tumors and significantly slowed their growth, but tamoxifen caused a dramatic regression of tumors in animals that had either high nighttime levels of melatonin during complete darkness or melatonin supplementation during exposure to dim light at night.

These findings have potentially enormous implications for women being treated with tamoxifen who are also regularly exposed to light at night due to sleep problems, night-shift work or light from computer and TV screens.

"High melatonin levels at night put breast cancer cells to 'sleep' by turning off key growth mechanisms. These cells are vulnerable to tamoxifen. But when the lights are on and melatonin is suppressed, breast cancer cells 'wake up' and ignore tamoxifen," Blask says.

The study could establish light at night as a new and serious risk factor for developing resistance to tamoxifen and other anticancer drugs, and could make the use of melatonin in combination with tamoxifen, administered at the optimal time of day or night, standard treatment for breast cancer patients.



Contacts and sources:
Arthur Nead
Tulane University

Bacteria Build Shelters Of Salt To Sleep In

For the first time, Spanish researchers have detected a previously unknown interaction between microorganisms and salt. When Escherichia coli cells are introduced into a droplet of salt water that is then left to dry, the bacteria manipulate the crystallisation of the sodium chloride to create morphologically complex, three-dimensional biosaline formations in which they hibernate.

Afterwards, simply by rehydrating the material, bacteria are revived. The discovery was made by chance with a home microscope, but it made the cover of the 'Astrobiology' journal and may help us find signs of life on other planets.

Dried biosaline patterns formed by the interaction of Escherichia coli cells with common salt. 
Credit: J. M. Gómez-Gómez

The bacterium Escherichia coli is one of the living forms most studied by biologists, yet until now no one had noticed what this microorganism can do inside a simple drop of salt water: create impressive biomineralogical patterns in which it shelters itself as the water dries.

"It was a complete surprise, a fully unexpected result, when I introduced E.. coli cells into salt water and I realised that the bacteria had the ability to join the salt crystallisation and modulate the development and growth of the sodium chloride crystals," biologist José María Gómez told SINC.

"Thus, in around four hours, in the drop of water that had dried, an impressive tapestry of biosaline patterns was created with complex 3D architecture," added the researcher, who made the discovery with the microscope in his house, although he later confirmed it with the help of his colleagues from the Laboratory of BioMineralogy and Astrobiological Research (LBMARS, University of Valladolid-CSIC), Spain.

Until now, similar patterns were known only from saline solutions and isolated proteins; this is the first report demonstrating that whole bacterial cells can manage the crystallisation of sodium chloride (NaCl) and generate self-organised biosaline structures with a fractal or dendritic appearance. The study and its striking three-dimensional patterns are on the front cover of this month's edition of 'Astrobiology'.

"The most interesting result is that the bacteria enter a state of hibernation inside these desiccated patterns, but they can later be 'revived' simply by rehydration," said Gómez, who highlighted a very important result from an astrobiological point of view: "Given the richness and complexity of these formations, they may be used as biosignatures in the search for life in extremely dry environments outside our own planet, such as the surface of Mars or that of Jupiter's satellite, Europa".

In fact, the LBMARS laboratory participates in the development of the Raman RLS instrument of the ExoMars rover, the mission that the European Space Agency (ESA) will send to the red planet in 2018, and this new finding may help them search for possible biological signs. According to the researcher, "the patterns observed will help calibrate the instrument and test its detection of signs of hibernation or traces of Martian life".

"The challenge we now face is to understand how the bacteria control the crystallisation of NaCl to create these incredible 3D structures and vice-versa, how salt influences this action, as well as studying the structure of these microorganisms that withstand desiccation," said Gómez, who reminds us that a simple curiosity and excitement about science, although it may be with simple means, still allows us to make some interesting discoveries: "This is a tribute to scientists such as the Spaniard Santiago Ramón y Cajal and the Dutch scientist Anton van Leeuwenhoek, who also worked from home with their own microscopes"


Contacts and sources:
Plataforma SINC


Citation: José María Gómez Gómez, Jesús Medina, David Hochberg, Eva Mateo-Martí, Jesús Martínez-Frías, Fernando Rull. "Drying Bacterial Biosaline Patterns Capable of Vital Reanimation upon Rehydration: Novel Hibernating Biomineralogical Life Formations." Astrobiology 14(7): 589-602, 2014. doi:10.1089/ast.2014.1162

New Fast Charging Nano-Supercapacitors For Electric Cars Crush The Commercial Competition

Innovative supercapacitors based on nano-materials could help bring electric cars a good step closer to mass-market appeal in Germany, where public interest has so far been lukewarm. Recent advances in the state of the art of these devices are driving that shift.

Electric cars are very much welcome in Norway and are a common sight on the roads of the Scandinavian country – so much so that electric cars topped the list of new vehicle registrations there for the second time. This stands in stark contrast to the situation in Germany, where electric vehicles claim only a small portion of the market.

Innovative nano-material based supercapacitors could bring electric cars a step closer to mass-market appeal in Germany.
Credit: © Fraunhofer IPA

Of the 43 million cars on the roads in Germany, a mere 8,000 are electrically powered. The main factors discouraging motorists in Germany from switching to electric vehicles are the high investment costs, short driving ranges and the lack of charging stations. Another major obstacle on the route to mass acceptance of electric cars is the charging time involved.

Refueling a conventional car takes only minutes, so much less time that the comparison is hardly fair. Charging durations could be dramatically shortened, however, by including supercapacitors. These alternative energy storage devices charge quickly and can therefore better support the economical use of energy in electric cars.

Take a traditional gasoline-powered vehicle: braking converts kinetic energy into heat, which is dissipated and goes unused. In electric vehicles, by contrast, generators can tap into that kinetic energy by converting it into electricity for later use. This electricity arrives in short bursts and requires storage devices that can absorb large amounts of energy within a short period of time.

This is where supercapacitors, with their ability to capture and store the recovered energy almost instantly, fit the picture perfectly. Unlike batteries, which offer only limited charging and discharging rates, supercapacitors need just seconds to charge and can feed the electric power back into the air-conditioning system, defogger, radio and other consumers as required.

Rapid energy storage devices are characterised by their energy density and power density – in other words, how much electrical energy a device can store per unit of mass, and how quickly it can deliver or absorb that energy.

Supercapacitors are known for their high power density: large amounts of electrical energy can be delivered or captured within short durations, albeit with the shortcoming of low energy density. The amount of energy supercapacitors are able to store is generally only about 10 percent of that of electrochemical batteries of the same weight.
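To make these two figures of merit concrete, here is a rough back-of-the-envelope comparison in Python. The numbers are assumed, order-of-magnitude values chosen purely for illustration (they are not measurements from the ElectroGraph project), but they show the trade-off: the battery runs a steady load far longer, while only the supercapacitor can swallow a short, high-power braking pulse.

# Back-of-the-envelope comparison of energy density vs. power density.
# The figures below are assumed, order-of-magnitude values for illustration,
# not measurements from the ElectroGraph project.
devices = {
    #                  Wh/kg (energy density), W/kg (power density)
    "Li-ion battery": {"wh_per_kg": 150.0, "w_per_kg": 300.0},
    "Supercapacitor": {"wh_per_kg": 15.0,  "w_per_kg": 5000.0},  # ~10 % of the battery's energy
}

mass_kg = 10.0            # same mass for both devices
brake_pulse_w = 20000.0   # hypothetical 20 kW burst from regenerative braking

for name, d in devices.items():
    energy_wh = d["wh_per_kg"] * mass_kg
    max_power_w = d["w_per_kg"] * mass_kg
    runtime_min = energy_wh / 1000.0 * 60.0          # minutes powering a 1 kW load
    can_take_pulse = max_power_w >= brake_pulse_w
    print(f"{name:15s} stores {energy_wh:6.0f} Wh, delivers up to {max_power_w/1000:4.1f} kW, "
          f"runs a 1 kW load for {runtime_min:5.1f} min, absorbs 20 kW braking pulse: {can_take_pulse}")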

This is precisely the challenge that the "ElectroGraph" project is attempting to address. ElectroGraph is an EU-supported project whose consortium consists of ten partners from research institutes and industry. One of its main tasks is to develop new types of supercapacitors with significantly improved energy storage capacities.

As the project approaches its closing phase in June, the project coordinator at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart, Carsten Glanz, explained the concept and the approach taken en route to its successful conclusion: "During the storage process, the electrical energy is stored as charged particles attached to the electrode material. So to store more energy efficiently, we designed lightweight electrodes with larger usable surfaces."

Graphene electrodes significantly improve energy efficiency

In numerous tests, the researcher and his team investigated the nano-material graphene, whose extremely high specific surface area of up to 2,600 m2/g and high electrical conductivity practically cry out for its use as an electrode material. Graphene consists of an ultrathin lattice of carbon atoms just one layer thick. Used as an electrode material, it greatly increases the surface area obtained from the same amount of material. In this respect, graphene shows potential to replace activated carbon – the material used in commercial supercapacitors to date – which has a specific surface area of between 1,000 and 1,800 m2/g.

“The space between the electrodes is filled with a liquid electrolyte,” revealed Glanz. “We use ionic liquids for this purpose. Graphene-based electrodes together with ionic liquid electrolytes present an ideal material combination where we can operate at higher voltages.” 
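The figures above contain the ingredients for a rough estimate of why this combination matters: the energy stored in a capacitor is E = ½CV², the double-layer capacitance C grows roughly with the electrode surface area the electrolyte can reach, and ionic-liquid electrolytes tolerate higher cell voltages than conventional ones. The sketch below uses assumed voltage values and a deliberately simplified "capacitance proportional to specific surface area" rule, so it gives an idealized upper bound rather than ElectroGraph data; in practice not all of the nominal surface is electrochemically accessible, which is why the layer-spacing trick described in the next paragraph, and the 75 percent gain reported there, matter.

# Rough estimate of how electrode surface area and cell voltage feed into
# stored energy, using E = 0.5 * C * V**2 and the simplifying assumption that
# double-layer capacitance scales with specific surface area. The voltage
# values and the strict proportionality are illustrative assumptions, not
# figures from the ElectroGraph project.
def relative_energy(surface_m2_per_g, cell_voltage_v):
    capacitance = surface_m2_per_g          # arbitrary units, C ~ surface area
    return 0.5 * capacitance * cell_voltage_v ** 2

baseline = relative_energy(1400, 2.7)   # activated carbon (1,000-1,800 m2/g), conventional electrolyte
graphene = relative_energy(2600, 3.5)   # graphene electrode, ionic-liquid electrolyte (assumed voltage)
print(f"idealized energy gain vs. activated-carbon baseline: x{graphene / baseline:.1f}")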

By arranging the graphene layers so that there is a gap between the individual layers, the researchers were able to establish a manufacturing method that efficiently uses the intrinsic surface area of this nano-material. This prevents the individual graphene layers from restacking into graphite, which would reduce the storage surface and consequently the energy storage capacity. "Our electrodes have already surpassed commercially available ones by 75 percent in terms of storage capacity," emphasizes the engineer.

"I imagine that the cars of the future will have a battery connected to many capacitors spread throughout the vehicle, which will take over the energy supply during high-power demand phases, for example during acceleration or when ramping up the air-conditioning system. These capacitors will ease the burden on the battery and cover voltage peaks when starting the car. As a result, the size of massive batteries can be reduced."

In order to present the new technology, the ElectroGraph consortium developed a demonstrator consisting of supercapacitors installed in an automobile side-view mirror and charged by a solar cell in an energetically self-sufficient system. The demonstrator will be unveiled at the end of May during the dissemination workshop at Fraunhofer IPA.

New Steel-Reinforced Concrete Better Protects Buildings From Bomb Attacks

A new type of steel-reinforced concrete protects buildings better from bomb attacks. Researchers have developed a formula to quickly calculate the concrete’s required thickness. The material will be used in the One World Trade Center at Ground Zero.

Earthquakes and explosions produce tremendous forces. Pressures in the immediate vicinity of a car bomb are in the range of several thousand megapascals, and even further away from the detonation itself, pressures are still on the order of several hundred kilopascals. For comparison, the pressure in a bicycle tire – about three bar – corresponds to roughly 300 kilopascals.
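A quick unit conversion puts those figures on a single scale. The short Python snippet below uses the rough values quoted in this article (the blast pressures are order-of-magnitude figures, not measured data) to show how far apart these regimes are.

# Put the pressures quoted above on a single scale (pascals) for comparison.
BAR_TO_PA = 1.0e5   # 1 bar = 100 kPa
examples_pa = {
    "near a car bomb (several thousand MPa)": 2_000e6,        # rough figure from the article
    "farther from the blast (several hundred kPa)": 300e3,
    "bicycle tire (~3 bar)": 3 * BAR_TO_PA,
    "atmosphere at sea level (~1.013 bar)": 101_325,
}
tire = examples_pa["bicycle tire (~3 bar)"]
for label, p in examples_pa.items():
    print(f"{label:45s} {p/1e3:12,.0f} kPa  ({p/tire:10,.1f} x bicycle tire)")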

The One World Trade Center at Ground Zero shortly before the official opening. One safety measure adopted was the use of a specially formulated safety concrete developed by DUCON Europe GmbH & Co KG. Fraunhofer scientists were able to accurately compute how much of this concrete should be used to best effect.

Credit: © Fraunhofer EMI

"So people at a good distance from the detonation point are not so much endangered by pressure waves – our bodies can usually cope pretty well with them – it's flying debris that poses the real danger," explains Dr. Alexander Stolz from the Safety Technology and Protective Structures department at the Fraunhofer Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, in Efringen-Kirchen, a German town just north of Basel. This is exactly what happens to conventional reinforced concrete when it is hit by an explosion's pressure wave: it is so brittle that individual, often large pieces are torn off and fly through the air uncontrolled.

Dr. Stephan Hauser, managing director of DUCON Europe GmbH & Co KG, has developed a concrete that merely deforms when subjected to such pressures – and doesn't break. Behind the development is a special mixture of very hard high-performance concrete combined with finely meshed reinforcing steel. EMI has been supporting Hauser for many years in optimizing his patented innovation.

In particular, the researchers take responsibility for dynamic qualification testing of the material under extreme loads. This also involves characterizing the material and calculating characteristic curve profiles. The researchers have developed a mathematical formula that simply and quickly computes the required thickness of the new concrete for each specific application. “Calculations used to be based on comparable and historical values,” says Stolz. “Now we can use a universal algorithm.”

The formula was developed during a test series with the new shock tube in Efringen-Kirchen. "We can simulate detonations of different blasting forces – from 100 to 2,500 kilograms of TNT at distances of 35 to 50 meters from buildings. And that's without even having to use explosives," says Stolz. The principle behind it is this: the shock tube consists of a high-pressure driver section and a low-pressure driven section, which are separated by a steel diaphragm. Air can be compressed in the driver section to a pressure of up to 30 bar, i.e. to approximately 30 times atmospheric pressure at sea level. The steel diaphragm ruptures when the desired pressure is reached: the air is forced through the driven section as a uniform shock front that hits the concrete sample being tested, attached to the end of the shock tube.
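For readers who want to see the principle in numbers: the strength of the shock front produced by bursting a diaphragm at a given pressure ratio can be estimated from the classical ideal shock-tube relation. The sketch below applies that textbook relation for air on both sides at equal temperature; it is an idealized illustration of the principle described above, not the EMI team's own calibration or formula.

# Sketch: the textbook ideal shock-tube relation (air/air, equal temperatures),
# used here only to illustrate the principle described above -- it is not the
# EMI team's own calibration. Given the diaphragm pressure ratio p4/p1, it
# yields the strength p2/p1 of the shock front racing down the driven section.
GAMMA = 1.4  # ratio of specific heats for air

def diaphragm_ratio(p21):
    """Ideal shock-tube relation: p4/p1 as a function of shock strength p2/p1."""
    g = GAMMA
    num = (g - 1.0) * (p21 - 1.0)                       # a1/a4 = 1 (equal temperatures)
    den = (2.0 * g * (2.0 * g + (g + 1.0) * (p21 - 1.0))) ** 0.5
    return p21 * (1.0 - num / den) ** (-2.0 * g / (g - 1.0))

def shock_strength(p41, lo=1.0 + 1e-9, hi=20.0, tol=1e-9):
    """Invert the relation for p2/p1 by bisection (monotonic on this interval)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if diaphragm_ratio(mid) < p41:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

p1_kpa = 101.325                 # driven section at atmospheric pressure
p41 = 30.0                       # driver compressed to ~30 bar, i.e. ~30 atmospheres
p21 = shock_strength(p41)
overpressure_kpa = (p21 - 1.0) * p1_kpa
print(f"shock strength p2/p1 = {p21:.2f}, overpressure = {overpressure_kpa:.0f} kPa")
# -> a shock of a few hundred kilopascals overpressure, the regime quoted earlier
#    for loads at some distance from a detonation.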

“With conventional concrete, the impact pressure ripped out parts of the sample concrete wall, which failed almost instantly, while the ductile and more flexible security version of the concrete merely deformed. There was no debris, and the material remained intact,” says Stolz. Thanks to its ductile qualities, the security concrete is considerably less bulky and yet more stable than conventional steel-reinforced concrete. Thinner building components are possible. “As a rule of thumb, you get the same stability with half the thickness,” says Stolz.

Formula also appropriate for earthquake and blast protection

Designing elements with the ductile concrete is easier with the new computational formula. The material’s high load capacity, many years of experience in its use in a variety of applications, and ultimately its load limits under explosive charge led to it being used in the new One World Trade Center in New York. 

The building rests on a 20-story, bombproof foundation that reaches 60 meters underground. Overall, at points within the building where safety is especially critical, several thousand square meters of safety concrete have been used to shore up the construction. Over the past few years, the skyscraper has been rising steadily on the southern tip of Manhattan, on the site of the old World Trade Center's Twin Towers.

On September 11, 2001, an unprecedented act of terror resulted in the collapse of the towers, claiming the lives of nearly 3,000 people. At 541.3 meters, the One World Trade Center is the tallest building in the USA and the third tallest in the world. "Our formula allows us to calculate the exact thickness of the concrete required to meet the safety considerations posed by such a special building," says Stolz.


Contacts and sources:
Dr. Alexander Stolz
Fraunhofer-Gesellschaft

Zika Virus Escapes Africa, Carried By Mosquitoes, Coming To America And Europe, Causes Epidemics In Asia

A newcomer among arboviruses

In the group of viruses that includes dengue and chikungunya, a newcomer is now attracting attention. Also originating in Africa, zika was isolated in humans in the 1970s. Until a few years ago, only a few human cases had been reported. It took until 2007 for the virus to show its epidemic capacity, with 5,000 cases in Micronesia in the Pacific, and then, above all, at the end of 2013 in Polynesia, where 55,000 people were affected.

Tiger mosquito Aedes albopictus
Credit: © IRD / M. Jacquet

In light of these recent events, researchers from the IRD and the CIRMF in Gabon revisited their work on the concomitant dengue and chikungunya epidemic that occurred in 2007 in the capital, Libreville, and which affected 20,000 people. Since zika produces almost the same symptoms as its two dreaded cousins, could it have passed unnoticed by the researchers at the time?

As many cases of zika fever as of dengue and chikungunya

To remove any doubt, the researchers conducted a second analysis of the blood samples taken from the patients seven years earlier. The result: many of the cases were due to the zika virus, which had infected the inhabitants of Libreville at the same frequency as the dengue and chikungunya viruses. The capital therefore actually experienced a concomitant epidemic of dengue, chikungunya and zika in 2007. Additionally, analysis of the phylogenetic tree of the zika viruses detected in Libreville confirms that the strain belonged to the old African lineage, which thus proves to be more virulent than previously thought.

An emerging threat to human health

The researchers also re-analysed the mosquitoes captured in 2007. These studies attested to the first known presence of zika in Aedes albopictus, better known as the tiger mosquito. This insect, already known to be a vector of dengue and chikungunya, therefore also carries the zika virus. It is the predominant species in Libreville, where it represents more than 55% of the mosquitoes collected. The tiger mosquito thrives in small bodies of standing water, such as broken bottles, tins, flowerpots and abandoned used tires.

Originally from Asia, the tiger mosquito was introduced to Africa in 1991 and detected in Gabon in 2007, where its arrival undoubtedly contributed to the emergence of dengue, chikungunya and, as this new study shows, zika. The rapid geographic expansion of this invasive species in Africa, Europe and the Americas raises the risk of zika fever spreading around the world, including to the south of France.


Contacts and sources:
Institut de Recherche pour le Développement (IRD)


Citation: Grard G., Caron M., Mombo I. M., Nkoghe D., Ondo S. M., Jiolle D., Fontenille D., Paupy C., Leroy E. "Zika virus in Gabon (Central Africa) – 2007: a new threat from Aedes albopictus?" PLoS Neglected Tropical Diseases, 2014, 8(2): e2681. ISSN 1935-2735. doi:10.1371/journal.pntd.0002681