Unseen Is Free


Wednesday, April 30, 2014

A Mystery Of Thermoelectrics

New analysis explains why some materials are good thermal insulators while similar ones are not

Materials that can be used for thermoelectric devices — those that turn a temperature difference into an electric voltage — have been known for decades. But until now there has been no good explanation for why just a few materials work well for these applications, while most others do not. Now researchers at MIT and elsewhere say they have finally found a theoretical explanation for the differences, which could lead to the discovery of new, improved thermoelectric materials.

This image shows the resonant bonding in lead telluride, one of the materials whose properties the team studied. It shows the calculated electron density distribution within the material.

Illustration courtesy of Sangyeop Lee 

The findings — by MIT graduate student Sangyeop Lee; Gang Chen, the Carl Richard Soderberg Professor of Power Engineering; and four others — are reported this week in the journal Nature Communications.

For thermoelectric applications, Chen explains, "It is important to find a material with low thermal conductivity" — since thermoelectrics work by maintaining a temperature difference from one side of a device to the other. If a material conducts heat well, then heat leaks quickly from the hot side to the cold side, reducing its efficiency in converting heat to electricity. But predicting which materials have low conductivity — which is to say, those that are good thermal insulators — has proved elusive.

For example, some compounds that are good insulators are made up of elements similar to those found in other compounds that are not good insulators at all. "Why," Chen wondered, "does one material have a low thermal conductivity, while another that is very similar does not?"

The solution to the puzzle turned out to come from work in other areas, including research to understand a different class of materials known as phase-change materials. These are being studied as a potential basis for computer memory devices that would retain information even when power is switched off. Phase-change materials change from an orderly, crystalline structure to a disordered structure in response to a change in temperature; they can then be switched back again with another temperature change.

Analysis of phase-change materials showed that they work because of a particular kind of chemical bonding, called resonant bonding — a type of bond in which electrons flip back and forth between several adjacent atoms. While resonant bonds' effects on electrical and optical properties have been studied, nobody had previously examined their effect on thermal properties, Lee says.

"There is little communication between people doing phase-change research and those doing thermoelectric research," Lee says. Interdisciplinary meetings at MIT helped lay the foundation for this research, he says: "This is an example where communication between people with different backgrounds can lead to new opportunities and boost understanding."

It turns out that the electrons' "flipping" in resonant bonding leads to long-range interactions among the atoms, Chen says — producing the material's low thermal conductivity.

Using first-principles calculations to account for the effects of resonant bonding, Lee was able to demonstrate that this effect could explain the known discrepancies between similar materials with low and high thermal conductivity.

"We found some general rules which can be used to explain other materials," Lee says.

This could lead to the discovery of new kinds of materials that also have very low thermal conductivity.

That, however, is just "one piece of the puzzle," Chen says: In order to be useful for thermoelectric devices, a material must combine low thermal conductivity with high electrical conductivity. Figuring out which materials possess that combination of characteristics will require further research, he says.
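For reference, thermoelectric performance is usually summarized by a dimensionless figure of merit that captures exactly this trade-off (the formula is standard textbook material, not quoted in the article):

ZT = S²σT / κ

where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. A good thermoelectric therefore needs a large S and σ together with a small κ, which is why low thermal conductivity on its own is only part of the puzzle.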

The work, which also included researchers at Rutgers University and at the University of Notre Dame, was partly supported by the U.S. Department of Energy, through the Solid State Solar-Thermal Energy Conversion Center, and by the Department of Defense.




Contacts and sources: 
Written by David Chandler, MIT News Office
Andrew Carleen
Massachusetts Institute of Technology

Octillions Of Microbes In The Seas: Ocean Microbes Show Incredible Genetic Diversity

The smallest, most abundant marine microbe, Prochlorococcus, is a photosynthetic bacterial species essential to the marine ecosystem.

It's estimated that billions of the single-celled creatures live in the oceans, forming the center of the marine food web.

Artist's interpretation of Prochlorococcus diversity in a drop of seawater.

Credit: Carly Sanker, MIT

They occupy a range of ecological niches based on temperature, light, water chemistry and interactions with other species.

But the diversity within this single species remains a puzzle.

To probe this question, scientists at the Massachusetts Institute of Technology (MIT) recently performed a cell-by-cell genomic analysis of a wild population of Prochlorococcus living in a milliliter of ocean water--less than a quarter of a teaspoon--and found hundreds of distinct genetic subpopulations.

Each subpopulation in those few drops of water is characterized by a set of core gene alleles linked to a few associated flexible genes--a combination the scientists call the "genomic backbone."

Scanning electron micrograph of the marine microbe Prochlorococcus.

Credit: Anne Thompson, MIT

This backbone gives the subpopulation a finely tuned ability to fill a particular ecological niche.

Diversity also exists within backbone subpopulations; most individual cells in the samples carried at least one set of flexible genes not found in any other cell in their subpopulation.

A report on the research by Sallie Chisholm and Nadav Kashtan at MIT, along with co-authors, appears in this week's issue of the journal Science.

The National Science Foundation (NSF), through its Divisions of Environmental Biology and Ocean Sciences, supported the research.

"In this extraordinary finding on the power of natural selection, the scientists have discovered a mosaic of genetically distinct populations of one of the most abundant organisms on Earth," says George Gilchrist, program director in NSF's Division of Environmental Biology.

"In spite of the constant mixing of the oceans," Gilchrist says, "variations in light, temperature and chemistry create unique habitats that evolution has filled with an enormous diversity of populations over millions of years."

Adds David Garrison, program director in NSF's Division of Ocean Sciences, "The results will change the way marine ecologists think about how planktonic microbes and, in turn, planktonic communities may respond to climate and environmental change."

The scientists estimate that the subpopulations diverged at least a few million years ago.

The backbone is an older, more slowly evolving, component of the genome, while the flexible genes reside in areas of the genome where gene exchange is relatively frequent, facilitating more rapid evolution.

The study also revealed that the relative abundance of the backbone subpopulations changes with the seasons at the study site near Bermuda, adding strength to the argument that each subpopulation is finely tuned for optimal growth under different conditions.

"The sheer enormity of diversity that must be in the octillionProchlorococcus cells living in the seas is daunting to consider," Chisholm says. "It creates a robust and stable population in the face of environmental instability."

Ocean turbulence also plays a role in the evolution and diversity of Prochlorococcus.

A fluid mechanics model predicts that in typical ocean flow, just-divided daughter cells drift rapidly, placing them centimeters apart from one another in minutes, tens of meters apart in an hour, and kilometers apart in a week's time.

"The interesting question is, 'Why does such a diverse set of subpopulations exist?'" Kashtan says.

"The huge population size of Prochlorococcus suggests that this remarkable diversity and the way it is organized is not random, but is a masterpiece product of natural selection."

Chisholm and Kashtan say the evolutionary and ecological distinction among the subpopulations is probably common in other wild, free-living (not attached to particles or other organisms) bacteria species with large populations and highly mixed habitats.

Other co-authors of the paper are Sara Roggensack, Sébastien Rodrigue, Jessie Thompson, Steven Biller, Allison Coe, Huiming Ding, Roman Stocker and Michael Follows of MIT; Pekka Marttinen of the Helsinki Institute for Information Technology; Rex Malmstrom of the U.S. Department of Energy Joint Genome Institute and Ramunas Stepanauskas of the Bigelow Laboratory for Ocean Sciences.

The NSF Center for Microbial Oceanography, U.S. Department of Energy Genomics Science Program and the Gordon and Betty Moore Foundation Marine Microbiology Initiative also supported the work.





Contacts and sources:
Cheryl Dybas
National Science Foundation

Magnitude Of Quake Scales With Maturity Of Fault, Suggests New German Study

The oldest sections of transform faults, such as the North Anatolian Fault Zone (NAFZ) and the San Andreas Fault, produce the largest earthquakes, putting important limits on the potential seismic hazard for less mature parts of fault zones, according to a new study to be presented today at the Seismological Society of America (SSA) 2014 Annual Meeting in Anchorage, Alaska. The finding suggests that maximum earthquake magnitude scales with the maturity of the fault.

San Andreas Fault
Credit: Wikipedia

Identifying the likely maximum magnitude for the NAFZ is critical for seismic hazard assessments, particularly given its proximity to Istanbul.

"It has been argued for decades that fault systems evolving over geological time may unify smaller fault segments, forming mature rupture zones with a potential for larger earthquake," said Marco Bohnhoff, professor of geophysics at the German Research Center for Geosciences in Potsdam, Germany, who sought to clarify the seismic hazard potential from the NAFZ. "With the outcome of this study it would in principal be possible to improve the seismic hazard estimates for any transform fault near a population center, once its maturity can be quantified," said Bohnhoff.

Bohnhoff and colleagues investigated the maximum magnitude of historic earthquakes along the NAFZ, which poses significant seismic hazard to northwest Turkey and, specifically, Istanbul.

Relying on the region's extensive literary sources that date back more than 2000 years, Bohnhoff and colleagues used catalogues of historical earthquakes in the region, analyzing the earthquake magnitude in relation to the fault-zone age and cumulative offset across the fault, including recent findings on fault-zone segmentation along the NAFZ.

"What we know of the fault zone is that it originated approximately 12 million years ago in the east and migrated to the west," said Bohnhoff. "In the eastern portion of the fault zone, individual fault segments are longer and the offsets are larger."

The largest earthquakes of approximately M 8.0 are exclusively observed along the older eastern section of the fault zone, says Bohnhoff. The younger western sections, in contrast, have historically produced earthquakes of magnitude no larger than 7.4.

"While a 7.4 earthquake is significant, this study puts a limit on the current seismic hazard to northwest Turkey and its largest regional population and economical center Istanbul," said Bohnhoff.

Bohnhoff compared the study of the NAFZ to the San Andreas and the Dead Sea Transform fault systems. While the former is well studied instrumentally but has few historical records, the latter has an extensive record of historical earthquakes but few available modern fault-zone investigations. Both of these major transform fault systems support the findings for the NAFZ, which were derived from a unique combination of long historical earthquake records and in-depth fault-zone studies.

Bohnhoff will present his study, "Fault-Zone Maturity Defines Maximum Earthquake Magnitude," today at the SSA Annual Meeting. SSA is an international scientific society devoted to the advancement of seismology and the understanding of earthquakes for the benefit of society. Its 2014 Annual Meeting will be held in Anchorage, Alaska, April 30 – May 2, 2014.



Contacts and sources:
Nan Broadbent
Seismological Society of America

Tuesday, April 29, 2014

Search For Extraterrestrial Life More Difficult Than First Thought

A new study from the University of Toronto Scarborough suggests the search for life on planets outside our solar system may be more difficult than previously thought.

The study, authored by a team of international researchers led by UTSC Assistant Professor Hanno Rein from the Department of Physical and Environmental Science, finds the method used to detect biosignatures on such planets, known as exoplanets, can produce a false positive result.

UTSC Assistant Professor Hanno Rein 

Credit: University of Toronto

The presence of multiple chemicals such as methane and oxygen in an exoplanet’s atmosphere is considered an example of a biosignature, or evidence of past or present life. Rein’s team discovered that a lifeless planet with a lifeless moon can mimic the same results as a planet with a biosignature.

“You wouldn’t be able to distinguish between them because they are so far away that you would see both in one spectrum,” says Rein.

The resolution needed to properly identify a genuine biosignature from a false positive would be impossible to obtain even with telescopes available in the foreseeable future, says Rein.

“A telescope would need to be unrealistically large, something one hundred metres in size and it would have to be built in space,” he says. “This telescope does not exist, and there are no plans to build one any time soon.”

Current methods can estimate the size and temperature of an exoplanet in order to determine whether liquid water could exist on the planet’s surface, believed to be one of the criteria for a planet hosting the right conditions for life.

While many researchers use modeling to imagine the atmosphere of these planets, they still aren’t able to make conclusive observations, says Rein. “We can’t get an idea of what the atmosphere is actually like, not with the methods we have at our disposal.”

There are 1,774 confirmed exoplanets known to exist, but there could be more than 100 billion planets in the Milky Way Galaxy alone. Despite the results, Rein is optimistic that the search for life on planets outside our solar system is possible if done the right way.

The artist's rendering depicts the multiple planet systems discovered by NASA's Kepler mission. Out of hundreds of candidate planetary systems, scientists had previously verified six systems with multiple transiting planets (denoted here in red). Now, Kepler observations have verified planets (shown here in green) in 11 new planetary systems. Many of these systems contain additional planet candidates that are yet to be verified (shown here in dark purple). For reference, the eight planets of our Solar System are shown in blue along the left edge of the image. 
Credit: NASA Ames/Jason Steffen, Fermilab Center for Particle Astrophysics

“We should make sure we are looking at the right objects,” he says, adding that the search for life within our solar system should remain a priority. He points to the recent discovery of a liquid ocean on Enceladus, one of Saturn’s larger moons, as a prime example.

“As for exoplanets we want to broaden the search and study planets around stars that are cooler and fainter than our own Sun. One example is the recently discovered planet Kepler-186f, which is orbiting an M-dwarf star,” says Rein.

Rein says that locating a planet in a habitable zone, while also obtaining good enough resolution to model its atmosphere, will help determine what’s on the planet.

“There are plenty of reasons to be optimistic that we will find hints of extraterrestrial life within the next few decades, just maybe not on an Earth-like planet around a Sun-like star.”



Contacts and sources:
Hanno Rein
University of Toronto

The Intergalactic Medium Unveiled: Caltech's Cosmic Web Imager Directly Observes "Dim Matter"

Caltech astronomers have taken unprecedented images of the intergalactic medium (IGM)—the diffuse gas that connects galaxies throughout the universe—with the Cosmic Web Imager, an instrument designed and built at Caltech.

Comparison of Lyman alpha blob observed with Cosmic Web Imager and a simulation of the cosmic web based on theoretical predictions.

Credit: Christopher Martin, Robert Hurt 

Until now, the structure of the IGM has mostly been a matter for theoretical speculation. However, with observations from the Cosmic Web Imager, deployed on the Hale 200-inch telescope at Palomar Observatory, astronomers are obtaining our first three-dimensional pictures of the IGM. The Cosmic Web Imager will make possible a new understanding of galactic and intergalactic dynamics, and it has already detected one possible spiral-galaxy-in-the-making that is three times the size of our Milky Way.

The Cosmic Web Imager was conceived and developed by Caltech professor of physics Christopher Martin. "I've been thinking about the intergalactic medium since I was a graduate student," says Martin. "Not only does it comprise most of the normal matter in the universe, it is also the medium in which galaxies form and grow."

Observation of quasar (QSO 1549+19) taken with Caltech's Cosmic Web Imager. Blue shows hydrogen gas surrounding and inflowing to quasar.
Credit: Christopher Martin, Robert Hurt 

Since the late 1980s and early 1990s, theoreticians have predicted that primordial gas from the Big Bang is not spread uniformly throughout space, but is instead distributed in channels that span galaxies and flow between them. This "cosmic web"—the IGM—is a network of smaller and larger filaments crisscrossing one another across the vastness of space and back through time to an era when galaxies were first forming and stars were being produced at a rapid rate.

Martin describes the diffuse gas of the IGM as "dim matter," to distinguish it from the bright matter of stars and galaxies, and the dark matter and energy that compose most of the universe. Though you might not think so on a bright sunny day or even a starlit night, fully 96 percent of the mass and energy in the universe is dark energy and dark matter (first inferred by Caltech's Fritz Zwicky in the 1930s), whose existence we know of only due to its effects on the remaining 4 percent that we can see: normal matter. Of this 4 percent that is normal matter, only one-quarter is made up of stars and galaxies, the bright objects that light our night sky. The remainder, which amounts to only about 3 percent of everything in the universe, is the IGM.
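As a quick back-of-the-envelope check on those numbers: normal matter ≈ 4% of the universe's mass and energy; stars and galaxies ≈ 1/4 × 4% ≈ 1%; so the IGM ≈ 4% - 1% ≈ 3%.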

Observation of Lyman alpha blob in emerging galaxy cluster SSA22 taken with Caltech's Cosmic Web Imager, showing gas filaments flowing into blob as shown by arrows.

Credit: Christopher Martin, Robert Hurt 

As Martin's name for the IGM suggests, "dim matter" is hard to see. Prior to the development of the Cosmic Web Imager, the IGM was observed primarily via foreground absorption of light—indicating the presence of matter—occurring between Earth and a distant object such as a quasar (the nucleus of a young galaxy).

"When you look at the gas between us and a quasar, you have only one line of sight," explains Martin. "You know that there's some gas farther away, there's some gas closer in, and there's some gas in the middle, but there's no information about how that gas is distributed across three dimensions."

Matt Matuszewski, a former graduate student at Caltech who helped to build the Cosmic Web Imager and is now an instrument scientist at Caltech, likens this line-of-sight view to observing a complex cityscape through a few narrow slits in a wall: "All you would know is that there is some concrete, windows, metal, pavement, maybe an occasional flash of color. Only by opening the slit can you see that there are buildings and skyscrapers and roads and bridges and cars and people walking the streets. Only by taking a picture can you understand how all these components fit together, and know that you are looking at a city."

Image of quasar (QSO 1549+19) taken with Caltech's Cosmic Web Imager, showing surrounding gas (in blue) and direction of filamentary gas inflow.
Credit: Christopher Martin, Robert Hurt 

Martin and his team have now seen the first glimpse of the city of dim matter. It is not full of skyscrapers and bridges, but it is both visually and scientifically exciting.

The first cosmic filaments observed by the Cosmic Web Imager are in the vicinity of two very bright objects: a quasar labeled QSO 1549+19 and a so-called Lyman alpha blob in an emerging galaxy cluster known as SSA22. These objects were chosen by Martin for initial observations because they are bright, lighting up the surrounding IGM and boosting its detectable signal.

Observations show a narrow filament, one million light-years long, flowing into the quasar, perhaps fueling the growth of the galaxy that hosts the quasar. Meanwhile, there are three filaments surrounding the Lyman alpha blob, with a measured spin that shows that the gas from these filaments is flowing into the blob and affecting its dynamics.

The Cosmic Web Imager is a spectrographic imager, taking pictures at many different wavelengths simultaneously. This is a powerful technique for investigating astronomical objects, as it makes it possible to not only see these objects but to learn about their composition, mass, and velocity. Under the conditions expected for cosmic web filaments, hydrogen is the dominant element and emits light at a specific ultraviolet wavelength called Lyman alpha. Earth's atmosphere blocks light at ultraviolet wavelengths, so one needs to be outside Earth's atmosphere, observing from a satellite or a high-altitude balloon, to observe the Lyman alpha signal.

The Cosmic Web Imager installed in the Cassegrain cage of the Hale 200 inch telescope at Palomar Observatory.

Credit: Matt Matuszewski 

However, if the Lyman alpha emission lies much further away from us—that is, it comes to us from an earlier time in the universe—then it arrives at a longer wavelength (a phenomenon known as redshifting). This brings the Lyman alpha signal into the visible spectrum such that it can pass through the atmosphere and be detected by ground-based telescopes like the Cosmic Web Imager.
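A rough worked example (the specific redshifts are not given in the article): Lyman alpha is emitted at a rest wavelength of about 121.6 nanometers, and cosmic expansion stretches it to

λ_observed = (1 + z) × 121.6 nm

so for an object at redshift z ≈ 3, roughly 2 billion years after the Big Bang, the line arrives at about 4 × 121.6 nm ≈ 486 nm, comfortably within the visible band that ground-based telescopes can detect.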

The objects the Cosmic Web Imager has observed date to approximately 2 billion years after the Big Bang, a time of rapid star formation in galaxies. "In the case of the Lyman alpha blob," says Martin, "I think we're looking at a giant protogalactic disk. It's almost 300,000 light-years in diameter, three times the size of the Milky Way."

The Cosmic Web Imager was funded by grants from the NSF and Caltech. Having successfully deployed the instrument at the Palomar Observatory, Martin's group is now developing a more sensitive and versatile version of the Cosmic Web Imager for use at the W. M. Keck Observatory atop Mauna Kea in Hawaii. "The gaseous filaments and structures we see around the quasar and the Lyman alpha blob are unusually bright. Our goal is to eventually be able to see the average intergalactic medium everywhere. It's harder, but we'll get there," says Martin.

Plans are also under way for observations of the IGM from a telescope aboard a high-altitude balloon, FIREBALL (Faint Intergalactic Redshifted Emission Balloon); and from a satellite, ISTOS (Imaging Spectroscopic Telescope for Origins Surveys). By virtue of bypassing most, if not all, of our atmosphere, both instruments will enable observations of Lyman alpha emission—and therefore the IGM—that are closer to us; that is, that are from more recent epochs of the universe.

Two papers describing the initial data from the Cosmic Web Imager have been published in the Astrophysical Journal: "Intergalactic Medium Observations with the Cosmic Web Imager: I. The Circum-QSO Medium of QSO 1549+19, and Evidence for a Filamentary Gas Inflow" and "Intergalactic Medium Observations with the Cosmic Web Imager: II. Discovery of Extended, Kinematically-linked Emission around SSA22 Lyα Blob 2." The Cosmic Web Imager was built principally by three Caltech graduate students—the late Daphne Chang, Matuszewski, and Shahinur Rahman—and by Caltech principal research scientist Patrick Morrissey, who are all coauthors on the papers. Additional coauthors are Christopher Martin, Anna Moore, Charles Steidel, and Yuichi Matsuda.



Contacts and sources:
Written by Cynthia Eller
Brian Bell
California Institute of Technology

Monday, April 28, 2014

The Funniest Cities In America Ranked

Chicago is the funniest city in the United States, according to a University of Colorado Boulder study.

Chicago skyline

Credit: Wikimedia Commons

Boston is the No. 2 wise guy, followed by Atlanta in third place. Denver made the top 10 list at No. 8.

The study out today is the most comprehensive analysis of humorous cities and was led by Peter McGraw, associate professor of marketing and psychology at CU-Boulder’s Leeds School of Business. His team collected data across the nation using an algorithm created at his Humor Research Lab (HuRL).

According to the findings, the following are the top 10 funniest cities in the United States:

1. Chicago
2. Boston
3. Atlanta
4. Washington, D.C.
5. Portland, Ore.
6. New York
7. Los Angeles
8. Denver
9. San Francisco
10. Seattle

The project grew out of McGraw’s new book co-authored with journalist Joel Warner, “The Humor Code: A Global Search for What Makes Things Funny.”

“We found humor often has a local flavor,” said McGraw. “The jokes that get laughs at comedy clubs in Denver seem unlikely to fly with a cartoon editor at The New Yorker, for example. The kind of torturous game shows that some Japanese find amusing would likely fall flat to a sitcom producer in Los Angeles.”

Over a nine-month period, McGraw and his team surveyed the 50 largest U.S. cities to track the frequency of visits by community members to comedy websites; the number of comedy clubs per square mile; traveling comedians’ ratings of each city’s comedy club audiences; the number of native-born famous comedians; the number of local funny tweeters; the number of local comedy radio stations; and the frequency of humor-related Web searches originating in each city.
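The article does not say how the HuRL algorithm combines those inputs into one ranking. Purely as an illustration of one plausible approach (the function, metric names and equal weighting below are assumptions, not the study's actual method), a composite score could standardize each metric across the 50 cities and then average the standardized values:

from statistics import mean, stdev

def rank_cities(city_metrics):
    """city_metrics: dict of city -> dict of raw metric values
    (e.g. comedy clubs per square mile, comedy-site visits per capita)."""
    metrics = sorted({m for vals in city_metrics.values() for m in vals})
    stats = {}
    for m in metrics:
        col = [vals[m] for vals in city_metrics.values()]
        stats[m] = (mean(col), stdev(col) or 1.0)  # guard against zero spread
    scores = {}
    for city, vals in city_metrics.items():
        # z-score each metric so no single unit dominates, then average them
        zs = [(vals[m] - stats[m][0]) / stats[m][1] for m in metrics]
        scores[city] = mean(zs)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with made-up numbers:
print(rank_cities({
    "Chicago": {"clubs_per_sq_mile": 0.9, "comedy_site_visits": 1.4},
    "Boston": {"clubs_per_sq_mile": 0.8, "comedy_site_visits": 1.2},
    "Denver": {"clubs_per_sq_mile": 0.5, "comedy_site_visits": 0.9},
}))

Cities would then be ranked by their average z-score, so a city that is strong on several indicators rises to the top even if it leads on none of them individually.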

Co-authors of the study were Warner; Adrian Ward, senior research associate at the Leeds School; and Caleb Warren, assistant professor of marketing at Texas A&M University.

“A city’s humor score isn’t just a measure of historic reputation or big-name productions,” said Ward. “It’s a way of looking at the day-to-day lives of people in that city. A city’s sense of humor is a living, breathing thing, created by everything from coffee shop conversations to Web videos shared between friends to the laughter that erupts at comedy clubs.”

The researchers also conducted a survey of more than 900 residents from the top 10 cities deemed funny by the algorithm. The team asked the residents about the kinds of funny entertainment they enjoy and whether they look for humor in their friends and partners. In addition the residents took a personality test assessing their “need for levity.”

Participants also described their city’s sense of humor and told their favorite joke.

“The result was a window into the humor profiles of each of the top 10 cities,” said McGraw. “Boston residents balance high-brow intellectualism with drunken rowdiness while Washington, D.C., finds humor in the absurdities of political systems. Portlanders are just plain weird."

Also involved in the study were CU-Boulder undergraduate students Christopher Miller, Alexandra Weiner, Allison Paul, Anthony Levy, Hayley Dunn, Alec Wilkie and graduate student Erin Percival Carter.

To see the study, including a list of all 50 funniest cities and humor profiles, visit http://humorcode.com/funniest-cities.

“The Humor Code: A Global Search for What Makes Things Funny” details McGraw and Warner’s international journey inspired by McGraw’s benign violation theory of humor.


Contacts and sources:
Peter McGraw

Thin-Crusted U.S. Sierra Nevada Mountains: Where Did the Earth Go?

In an addition to Geosphere's ongoing themed issue series, "Geodynamics and Consequences of Lithospheric Removal in the Sierra Nevada, California," Craig H. Jones of the University of Colorado Boulder and colleagues examine the seismological study of the entire extent of the U.S. Sierra Nevada range using seismograms collected in the Sierra Nevada EarthScope field experiment from 2005 to 2007.

Geologic map showing seismic stations used in the study by C.H. Jones and colleagues, "P-wave tomography of potential convective downwellings and their source regions, Sierra Nevada, California."
Credit: U.S. Geological Society 

The southern Sierra Nevada is known to have unusually thin crust for mountains with such high elevations (peaks higher than 4 km/14,000 ft, and average elevations near 3 km/10,000 ft). Jones and his team use measurements of the arrival times of seismic waves (called P-waves) from earthquakes around the globe to image the earth under the Sierra Nevada and neighboring locations.

Their results reveal that the entire eastern Sierra overlies low-velocity upper mantle and lacks the dense, quartz-poor lower crust that they say must have existed 80 million years ago when the granites of the range were created. 

U.S. Sierra Nevada   
Credit: NASA

Jones and colleagues write that this missing dense material probably was removed within the past 10 million years. "Previous workers," they note, "have suggested it might be within a high-velocity mantle anomaly under the southeastern San Joaquin Valley," which is "the right size to be the old, dense rock previously under the eastern Sierra."

They argue, however, that the geometry and extent of earth within the anomaly does not appear to be consistent with it being a piece of old subducted ocean floor. This would mean that a long strip of dense rock under the Sierra somehow deformed into a steeply plunging ellipsoid at the southwestern edge of the range. This conclusion suggests that the range rose within the past 10 million years as this dense material fell away to the west and south. Finally, Jones and colleagues note that something similar might be underway at the northern edge of the range.


Contacts and sources:

Mystery Of Animal And Plant Domestication Deepens

In recent decades, research has unraveled much of what we thought we knew about this crucial event in human history

We all think we have a rough idea of what happened 12,000 years ago when people at several different spots around the globe brought plants under cultivation and domesticated animals for transport, food or fiber. But how much do we really know?

Recent research suggests less than we think. For example, why did people domesticate a mere dozen or so of the roughly 200,000 species of wild flowering plants? And why only about five of the 148 species of large wild mammalian herbivores or omnivores? And while we’re at it, why haven’t more species of either plants or animals been domesticated in modern times?

If nothing else, the tiny percentages of domesticates suggest there are limits to human agency, and that it almost certainly is not true that people can step in and, through artificial selection, completely remodel an organism shaped for millennia by natural selection.

The small number of domesticates is just one of many questions raised in a special issue of the Proceedings of the National Academy of Sciences published online April 21.

The issue is the product of a 2011 meeting of scholars with an interest in domestication at the National Evolutionary Synthesis Center, a nonprofit science center jointly operated by Duke University, the University of North Carolina at Chapel Hill and North Carolina State University.

Of the 25 scholars at the conference, two were from Washington University in St. Louis: Arts & Sciences’ Fiona Marshall, PhD, professor of archaeology, who studies animal domestication, and Kenneth Olsen, PhD, associate professor of biology, who studies plant domestication.

Both Marshall and Olsen are currently engaged in research on the crumbling margins of domestication where questions about this evolutionary process loom the largest.

Marshall studies two species that are famously ambivalently domesticated: donkeys and cats. Olsen studies rice and cassava and is currently interested in rice mimics, weeds that look enough like rice that they fly under the radar even when rice fields are handweeded.

Both Marshall and Olsen contributed articles to the special PNAS issue (see “The story of animal domestication retold” and “Genetic study tackles mystery of slow plant domestications”) and helped write the introductory essay that raises the big questions confronting the field.

“This workshop was especially fun,” said Olsen, “because it brought together people working on plants and animals and archeologists and geneticists. I hadn’t really thought much about animal domestication because I work primarily with plants, so it was exciting to see the same problem from a very different perspective.”

How much of it was our doing? 

Many of our ideas about domestication are derived from modern experience with animal breeding. Anyone familiar with the huge variety of dog breeds, all of which belong to the same subspecies of the gray wolf, has some appreciation of the power of selective breeding to alter appearance and behavior.



Credit: Brian Hare, Duke University

Perhaps the most famous experiment in domestication is a project in Russia that turned silver morphs of the wild red fox into tamer and more dog-like silver foxes in just 40 generations. But the silver foxes were kept in cages on a fox farm where they were sheltered and fed and illicit liaisons with wild foxes were thwarted. How representative was this experiment of prehistoric domestication events?

But what about self-fertilizing or wind-pollinated plants, or for that matter, domesticated animals accidentally or deliberately bred with wild relatives?

Recent evidence that cereal crops, such as wheat or barley, evolved domestication traits much more slowly than had been thought has led to renewed interest in the idea that selection during domestication may have been partly accidental.

Charles Darwin himself drew a distinction between conscious selection, in which humans directly select for desirable traits, and unconscious selection, where traits evolve as a byproduct of natural selection in crop fields or from selection on other traits.

“The big focus right now is how much unintentional change people were causing environmentally that resulted in natural selection altering both plants and animals,” said Marshall.

“We used to think cats and dogs were real outliers in the animal domestication process because they were attracted to human settlements for food and in some sense domesticated themselves. But new research is showing that other domesticated animals may be more like cats and dogs than we thought.”

Why weren’t zebras ever domesticated? Baron Rothschild frequently drove a carriage pulled by zebras through the streets of 19th-century London. In “Guns, Germs and Steel,” Jared Diamond says the reason zebras were not domesticated is that they are extraordinarily vicious and will bite and not let go. But why weren’t people able to modify this temperament if they were able to gentle wolves into dogs?

 Credit: Creative Commons 

“Once animals such as donkeys or cattle were caught,” Marshall said, “the changes humans sought to make were pretty minimal. Really it just came down to culling a few of the males and breeding all of the females.”

Even today, she points out, African pastoralists can afford to kill only four out of every 100 cows or they run the risk that drought and disease will wipe out the entire herd. “So I think outside of industrialized societies or special situations, artificial selection was very weak,” she said.

“In the donkeys and other transport animals, it’s not affiliative [tame] behavior the herders want,” Marshall said. “What they care about more than anything else is that their animals stay alive.”

So artificial selection is acting in the same direction as natural selection, or maybe pushing even harder, because humans often place animals in harsher conditions than natural ones.

“The comparable idea for plants,” said Olsen, “is the dump heap hypothesis, originally proposed by Edgar Anderson, a botany professor here at Washington University. The idea is that when people threw out the refuse of plant foods, including seeds, some grew and again set seed, and in this way people inadvertently selected species they were eating that also did well in the disturbed and nutrient-rich environment of the dump heap.”

“Cultivation practices play a huge role in selection,” said Olsen. “Traditionally in Southeast Asia, many different varieties of rice were grown simultaneously in a given field. It was a bet-hedging strategy,” he said, “that ensured some plants would survive and produce seed even in a bad season.” So it wasn’t people selecting the crop plants directly so much as people changing the landscape in ways that altered the selection pressure on plants.

How best to time travel

Questions about the original domestication events are difficult to answer because plants and animals were domesticated before humans invented writing, and so figuring out what happened has been a matter of making do with the limited evidence that has survived.

The problem is particularly difficult for animal domestication because what matters most is animal behavior, which leaves few traces. In the past, scientists tried measuring bones or examining teeth, looking for age or size differences or pathology that might plausibly be related to animals living with people.

“Sometimes there aren’t morphological shifts that are easy to find or they’re too late to tell us anything,” Marshall said. “We’ve gone away from morphological identifiers of domestication, and we’re going with behavior now, however we can get it. If we’ve got concentrations of dung, that means animals were being corralled,” she said.

Olsen, on the other hand, seeks to identify genes in modern crop species that are associated with domestication traits in the plant, such as an erect rather than a sprawling architecture. The techniques used to isolate these genes are difficult and time consuming and may not always penetrate as deeply into the past as scientists had once assumed because present-day plants are only a subset of the crop varieties that may have once existed.


Credit: Prof. Saxx/Wikimedia Commons 
 
Aurochs, the ancestors of modern cattle, depicted in this cave painting in Lascaux, France, are now extinct. The last recorded aurochs died in Poland in 1627. Marshall worries that the erosion of genetic diversity symbolized by this extinction might make it harder to remold domesticated species to meet the challenges of climate change.

So both Marshall and Olsen are excited by recent successes in sequencing ancient DNA. Ancient DNA, they say, will allow hypotheses about domestication to be tested over the entire evolutionary time period of domestication.

Another only recently appreciated clue to plant domestication is the presence of enriched soils, created through human activities. One example is the terra preta in the Amazon basin, which bears silent witness to the presence of a pre-Columbian agricultural society in what had been thought to be untouched forest.

By mapping distributions of enriched soils, scientists hope to better understand how ancient people altered landscapes and the effects that had on plant communities.

Washington University biologist Ken Olsen, who studies the genetic basis of evolution in plants, and archeologist Fiona Marshall, whose research focuses on animal domestication in Africa, enjoy an interdisciplinary chat.
Credit: Sid Hastings/Washington University 

“It is really clear,” Marshall said, “that we need all the different approaches that we can possibly get in order to triangulate back. We’re using all kinds of ways, coarse-grained and fine, long-term and short, because the practical implications for us are quite great.”

After all, the first domestications may have been triggered by climate change at the end of the last ice age — in combination with social issues.

As a result, people abandoned the hunter-gatherer lifestyle they had successfully followed for 95 percent of human history and turned instead to the new strategies of farming and herding.

As we head into a new era of climate change, Marshall said it would be comforting to know that we understood what happened then and why.

“The Modern View of Domestication,” a special issue of PNAS edited by Greger Larson and Dolores R. Piperno, resulted from a meeting titled “Domestication as an Evolutionary Phenomenon: Expanding the Synthesis,” held April 7–11, 2011, and funded and hosted by the National Evolutionary Synthesis Center (National Science Foundation EF-0905606).



Contacts and sources:
By Diana Lutz

Laughter Increases Memory And Learning Ability In Elderly People

Watching a funny video increased memory and learning ability in elderly people.

Too much stress can take its toll on the body, mood, and mind. As we age it can contribute to a number of health problems, including high blood pressure, diabetes, and heart disease. Recent research has shown that the stress hormone cortisol damages certain neurons in the brain and can negatively affect memory and learning ability in the elderly. 

Credit: Wikipedia

Researchers at Loma Linda University have delved deeper into cortisol’s relationship to memory and whether humor and laughter—a well-known stress reliever—can help lessen the damage that cortisol can cause. Their findings were presented on Sunday, April 27, at the Experimental Biology meeting (San Diego Convention Center from 12:45–3:00 PM PDT).

Gurinder Singh Bains et al. showed a 20-minute laugh-inducing funny video to a group of healthy elderly individuals and a group of elderly people with diabetes. The groups were then asked to complete a memory assessment that measured their learning, recall, and sight recognition. Their performance was compared to a control group of elderly people who also completed the memory assessment, but were not shown a funny video. Cortisol concentrations for both groups were also recorded at the beginning and end of the experiment.

The research team found a significant decrease in cortisol concentrations among both groups who watched the video. Video-watchers also showed greater improvement in all areas of the memory assessment when compared to controls, with the diabetic group seeing the most dramatic benefit in cortisol level changes and the healthy elderly seeing the most significant changes in memory test scores.

“Our research findings offer potential clinical and rehabilitative benefits that can be applied to wellness programs for the elderly,” Dr. Bains said. “The cognitive components—learning ability and delayed recall—become more challenging as we age and are essential to older adults for an improved quality of life: mind, body, and spirit. Although older adults have age-related memory deficits, complementary, enjoyable, and beneficial humor therapies need to be implemented for these individuals.”

Study co-author and long-time psychoneuroimmunology humor researcher, Dr. Lee Berk, added, “It’s simple: the less stress you have, the better your memory. Humor reduces detrimental stress hormones like cortisol that decrease memory hippocampal neurons, lowers your blood pressure, and increases blood flow and your mood state. The act of laughter—or simply enjoying some humor—increases the release of endorphins and dopamine in the brain, which provides a sense of pleasure and reward. These positive and beneficial neurochemical changes, in turn, make the immune system function better. There are even changes in brain wave activity towards what's called the 'gamma wave band frequency', which also amp up memory and recall. So, indeed, laughter is turning out to be not only a good medicine, but also a memory enhancer adding to our quality of life.”


Contacts and sources: 
Federation of American Societies for Experimental Biology (FASEB)

Flexible Metallic Wires Only Three Atoms Wide, 1/1000th The Width Of The Wires Connecting Transistors In Today's Chips

Junhao Lin, a Vanderbilt University Ph.D. student and visiting scientist at Oak Ridge National Laboratory (ORNL), has found a way to use a finely focused beam of electrons to create some of the smallest wires ever made. The flexible metallic wires are only three atoms wide: One thousandth the width of the microscopic wires used to connect the transistors in today’s integrated circuits.

Molecular model of nanowires made out of TMDC.

Credit: Junhao Lin, Vanderbilt University

Lin’s achievement is described in an article published online on April 28 by the journal Nature Nanotechnology. According to his advisor Sokrates Pantelides, University Distinguished Professor of Physics and Engineering at Vanderbilt University, and his collaborators at ORNL, the technique represents an exciting new way to manipulate matter at the nanoscale and should give a boost to efforts to create electronic circuits out of atomic monolayers, the thinnest possible form factor for solid objects.

“Junhao took this project and really ran with it,” said Pantelides.

Lin made the tiny wires from a special family of semiconducting materials that naturally form monolayers. These materials, called transition-metal dichalcogenides (TMDCs), are made by combining the metals molybdenum or tungsten with either sulfur or selenium. The best-known member of the family is molybdenum disulfide, a common mineral that is used as a solid lubricant.

Atomic monolayers are the object of considerable scientific interest these days because they tend to have a number of remarkable qualities, such as exceptional strength and flexibility, transparency and high electron mobility. This interest was sparked in 2004 by the discovery of an easy way to create graphene, an atomic-scale honeycomb lattice of carbon atoms that has exhibited a number of record-breaking properties, including strength, electrical conduction and heat conduction. Despite graphene’s superlative properties, experts have had trouble converting it into useful devices, a process materials scientists call functionalization. So researchers have turned to other monolayer materials like the TMDCs.

Other research groups have already created functioning transistors and flash memory gates out of TMDC materials. So the discovery of how to make wires provides the means for interconnecting these basic elements. Next to the transistors, wiring is one of the most important parts of an integrated circuit. Although today’s integrated circuits (chips) are the size of a thumbnail, they contain more than 20 miles of copper wiring.

“This will likely stimulate a huge research interest in monolayer circuit design,” Lin said. “Because this technique uses electron irradiation, it can in principle be applicable to any kind of electron-based instrument, such as electron-beam lithography.”

One of the intriguing properties of monolayer circuitry is its toughness and flexibility. It is too early to predict what kinds of applications it will produce, but “If you let your imagination go, you can envision tablets and television displays that are as thin as a sheet of paper that you can roll up and stuff in your pocket or purse,” Pantelides commented.

In addition, Lin envisions that the new technique could make it possible to create three-dimensional circuits by stacking monolayers “like Lego blocks” and using electron beams to fabricate the wires that connect the stacked layers.

The nanowire fabrication was carried out at ORNL in the microscopy group that was headed until recently by Stephen J. Pennycook, as part of an ongoing Vanderbilt-ORNL collaboration that combines microscopy and theory to study complex materials systems. Junhao is a graduate student who pursues both theory and electron microscopy in his doctoral research. His primary microscopy mentor has been ORNL Wigner Fellow Wu Zhou.

“Junhao used a scanning transmission electron microscope (STEM) that is capable of focusing a beam of electrons down to a width of half an angstrom (about half the size of an atom) and aims this beam with exquisite precision,” Zhou said.

The collaboration included a group headed by Kazu Suenaga at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, where the electrical measurements that confirmed the theoretical predictions were made by post-doctoral associate Ovidiu Cretu. Other collaborators at ORNL, the University of Tennessee in Knoxville, Vanderbilt University, and Fisk University contributed to the project.

Primary funding for the research was provided by the Department of Energy’s Office of Science grant DE-FG02-09ER46554 and by the ORNL Wigner Fellowship. The work was carried out at the ORNL Center for Nanophase Materials Science user facility. Computations were done at the National Energy Research Scientific Computing Center.


Contacts and sources:
Vanderbilt University

This Mighty Mite Runs At The Equivalent Of 1300 Miles Per Hour

Move over, Australian tiger beetle. There’s a new runner in town.

Relative to its size, a Southern California mite runs faster than any other animal, and it also thrives in temperatures that would kill most other animals.

Although the mite Paratarsotomus macropalpis is no bigger than a sesame seed, it was recently recorded running at up to 322 body lengths per second, a measure of speed that reflects how quickly an animal moves relative to its body size. The previous record-holder, the Australian tiger beetle, tops out at 171 body lengths per second. By comparison, a cheetah running at 60 miles per hour attains only about 16 body lengths per second.

Paratarsotomus macropalpis
Credit: Samuel Rubin (W.M. Keck Science Center, Pitzer College), Dr. J.C. Wright Laboratory (Department Of Biology, Pomona College), The Claremont University Consortium, Claremont, CA.

Extrapolated to the size of a human, the mite’s speed is equivalent to a person running roughly 1300 miles per hour.
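The arithmetic behind that comparison is straightforward (assuming a human "body length" of roughly 1.8 meters): 322 body lengths per second × 1.8 m ≈ 580 meters per second, which works out to about 2,090 kilometers per hour, or roughly 1300 miles per hour.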

The California college student who spent a summer chasing down the remarkable mites says the discovery is exciting not only because it sets a new world record, but also for what it reveals about the physiology of movement and the physical limitations of living structures.

“It’s so cool to discover something that’s faster than anything else, and just to imagine, as a human, going that fast compared to your body length is really amazing,” said Samuel Rubin, a junior and physics major at Pitzer College who led much of the fieldwork to document the mite’s movements. “But beyond that, looking deeper into the physics of how they accomplish these speeds could help inspire revolutionary new designs for things like robots or biomimetic devices.”

Rubin’s advisor, Jonathan Wright, Ph.D., a professor of biology at Pomona College, became interested in the mites while studying the effect of muscle biochemistry on how quickly animals can move their legs. But it wasn’t until Rubin and other students documented the mites’ running speeds in their natural environment that the research team knew they had found a new world record.

Both relative speed and stride frequency increase as animals get smaller, and in theory, muscle physiology should at some point limit how fast a leg can move. “We were looking at the overarching question of whether there is an upper limit to the relative speed or stride frequency that can be achieved,” said Wright. “When the values for mites are compared with data from other animals, they indicate that, if there is an upper limit, we haven't found it yet.”

The mite is local to Southern California and is often found running along rocks or sidewalks. Although it was first identified in 1916, little is known about its habits or food sources.

The research team used high-speed cameras to record the mites’ sprints in the laboratory and in their natural environment. “It was actually quite difficult to catch them, and when we were filming outside, you had to follow them incredibly quickly as the camera’s field of view is only about 10 centimeters across,” said Rubin.

The research team was also surprised to find the mites running on concrete up to 140 degrees Fahrenheit (60 degrees Celsius), a temperature significantly higher than the upper lethal temperature of most animals. “They’re operating at temperatures that seem to preclude activities of any other animal group. We’ve seen them running where there were no other animals visibly active,” said Wright.

The mites also are adept at stopping and changing directions extremely quickly, attributes the researchers are investigating further for potential insights that may be relevant to bioengineering applications.


Contacts and sources:
Federation of American Societies for Experimental Biology (FASEB)

Anti-Allergy GM Apples

Scientists are trying to engineer apples so that the most widely consumed fruit in Europe no longer triggers allergic reactions. But would people want to eat them?

Peanut, egg and soy are among the more common foods known to trigger allergic reactions, a problem affecting around 8% of children in the EU. Intuitively, you might not list apples as causing allergic reactions. But in fact 75% of people allergic to birch pollen are also allergic to apples. This happens because a protein in the pollen, which causes an allergic reaction, is similar to a protein found in apples and some other fruit and vegetables. The issue is more common in regions with many birch trees, such as central and northern Europe.

Credit: Jeremy Hiebert

Allergies of this kind are not easy to identify. “Some people who are allergic may simply say they don’t like apples, since they have a very mild reaction after eating them,” explains Eric van de Weg, a plant scientist at Wageningen University in the Netherlands, “but others will suffer blistering, problems catching their breath and swollen lips, tongue and throat.” He is among a group of scientists in Europe working to develop new non-allergenic fruit. “We wanted to increase the low availability of hypo-allergenic fruit but also come to a better understanding of the genes and proteins involved,” van de Weg says. One solution, tried in a previous European project called ISAFRUIT, was to genetically modify apples.

This was done by gene silencing—designed to produce a genetically modified (GM) fruit. Scientists hunted out the proteins which caused the allergic reactions and then switched off the genes responsible. Though van de Weg used some fungal genetic material for the initial apple experiments, he believes the genes could be switched off using apple genetics only, without involving any other species in the genetic engineering process. “When you silence a gene you are not making any new protein, so this means the risks are lower,” van de Weg adds.

Risks of producing GM apples may be limited, but a focus group study under another part of the ISAFRUIT project, run in four European countries, showed that the idea of genetically modified fruit provoked heated debate. And it was uncertain whether non-allergenic GM apples would be acceptable to consumers. The reduction of allergens in the food chain is extremely important, according to Lynn Frewer, an expert in risk communication at Newcastle University, UK. However, studies she was involved in suggested that non-allergenic apples may not open the door to GM fruits in our supermarkets. “Although consumers – and in particular food-allergic consumers – were more positive about the [GM] apple, there was still a clear preference for traditional breeding methods applied to the same end if possible, even for food allergic consumers,” she recalls.

Nature itself may shed further light on this issue. “There are hundreds of apple varieties already available,” notes Alessandro Botton, a plant geneticist at the University of Padova, Italy, and some of these may hold answers. For instance, it is known that apple varieties such as “Golden Delicious” and “Granny Smith” are part of the high-allergenic group, whereas “Jonagold” and “Gloster” induce only low allergenic responses.

Botton, who worked on the genetics of fruit allergens in the project, says it is possible to focus on existing biological variability to look for low-allergenic apples, and says the time is not yet right to think about gene silencing techniques. “We must understand the biological functions of these proteins first,” he says, adding that it’s not certain what effect silencing genes might have on the plant’s health.

Asked if gene silencing and the use of genetic material from only apples might make a difference to public attitudes to GM, US geneticist Nina Fedoroff at the King Abdullah University of Science and Technology in Saudi Arabia, believes that “public understanding is not generally sophisticated enough to make such fine distinctions.” She adds: “I don’t think the general views [to genetically-modified organisms] are all that different in the US and Europe, although the reasons given for being against GM are sometimes a bit different.” She concludes: “What is different in reality is that the pipeline hasn’t been completely closed off in the US by over-regulation, whereas it has in Europe. That is largely a consequence of the fact that all members of the EU have to agree on approval and just one member can keep approval from happening.”

For now, van de Weg has no plans to grow non-allergenic apples in Europe, but there is now the potential to use genetic engineering to produce such fruit. We have enough basic knowledge, he says, but whether society wants to go in that direction is another issue.


Contacts and sources: 
by Anthony King
http://www.youris.com

Beautiful Nebula

"Beautiful Nebula discovered between the Balance [Libra] and  the Serpent [Serpens] ..." begins the description of the 5th entry in 18th century astronomer Charles Messier's famous catalog of nebulae and star clusters. Though it appeared to Messier to be fuzzy and round and without stars, Messier 5 (M5) is now known to be a globular star cluster, 100,000 stars or more, bound by gravity and packed into a region around 165 light-years in diameter. It lies some 25,000 light-years away.

Roaming the halo of our galaxy, globular star clusters are ancient members of the Milky Way. M5 is one of the oldest globulars, its stars estimated to be nearly 13 billion years old. The beautiful star cluster is a popular target for Earthbound telescopes. Of course, deployed in low Earth orbit on April 25, 1990, the Hubble Space Telescope has also captured its own stunning close-up view that spans about 20 light-years near the central region of M5. Even close to its dense core at the left, the cluster's aging red and blue giant stars and rejuvenated blue stragglers stand out in yellow and blue hues in the sharp color image.
"Beautiful Nebula discovered between the Balance [Libra] & the Serpent [Serpens] ..." begins the description of the 5th entry in 18th century astronomer Charles Messier's famous catalog of nebulae and star clusters.
Image Credit: NASA, Hubble Space Telescope, ESA

Does Too Much Hygiene Cause Diabetes?

Scientists in northern Europe are conducting a major survey to determine whether standards of hygiene contribute to the development of auto-immune diseases such as type 1 diabetes

The incidence of auto-immune diseases like type 1 diabetes and allergies has risen dramatically in developed countries over the past fifty years. The reasons for this trend are not fully understood but a theory known as the ‘hygiene hypothesis’ links it to a rise in hygiene standards. According to this theory, eliminating bacteria in food and the environment of infants may be depriving the immune system of the stimulus it needs to develop adequately, especially during the first critical years of childhood.



Now, an EU-funded project, called Diabimmune, has set out to test the hygiene hypothesis. Finland and its neighbouring countries are an ideal place to do this, according to the project director, Mikael Knip, professor of pediatrics at the University of Helsinki Children’s Hospital. Finland has the highest incidence in the world of type 1 diabetes. Across the border in Russian Karelia, standards of living and hygiene are significantly poorer than in Finland, and the incidence of the disease is six times lower. To the south, in Estonia, a country with an intermediate standard of living and hygiene, the incidence is just under half that of Finland. Nowhere else in the world is there such a contrast in the same geographic area.


Main symptoms of diabetes
Credit: Wikipedia

The project has been studying about 2,000 children aged between 3 and 5 years and 300 babies of up to 3 months old in each of the three countries. The participants were followed over a period of three years, from 2010 to 2013.

The study is based on a vast array of tests. Families had to respond to an extensive questionnaire covering the child’s home environment, contact with animals, diet and family predisposition to immune-related diseases, such as allergies. Children were also subjected to a battery of tests, including blood and stool samples and allergy tests; even the presence of dust under the child’s bed was recorded.

The project scientists now need to analyse the data and the tens of thousands of samples collected to identify the bacteria involved, or to determine whether it is the total number of infections, rather than a specific germ, that is the critical factor.

Should the project find specific bacteria, this would open the prospect of developing preventive therapies through vaccines or probiotic additives to food products.



Contacts and sources:
David Hover
youris.com
 

Common Links Between Neurodegenerative Diseases Identified

The pattern of brain alterations may be similar in several different neurodegenerative diseases, which opens the door to alternative therapeutic strategies to tackle these diseases

Diseases of the central nervous system are a big burden to society. According to estimates, they cost EUR 800 billion per year in Europe. And for most of them, there is no definitive cure. This is true, for example, of Parkinson’s disease. Although good treatments exist to manage its symptoms, they become increasingly ineffective as the disease progresses.

Credit: M.R. McGill

Now, the EU-funded REPLACES project, completed in 2013, which brought together scientists and clinicians, has shed light on the abnormal working of a particular brain circuit related to Parkinson’s disease. The results of the project suggest that these same circuits are implicated in different forms of pathology. And this gives important insights into possible common links between neurodegenerative diseases such as Parkinson’s and intellectual disabilities or autism.

Existing treatments for Parkinson’s are very effective at the beginning. As the disease progresses, however, drugs such as levodopa and so-called dopamine agonists produce side effects that are sometimes even worse than the initial symptoms of the condition. In particular, they cause a complication called dyskinesia, characterised by abnormal involuntary movements. Therapies are therefore sought that allow better management of symptoms.

The project focused on the study of a highly plastic brain circuit, which connects regions of the cerebral cortex with the basal ganglia. It is involved in very important functions such as learning and memory. “This system, based on glutamate as a means of signalling between neurons, has also been discovered to be damaged in Parkinson’s disease,” says Monica Di Luca, professor of neuropharmacology at the University of Milan, Italy, and the project coordinator. She adds: “Parkinson’s better-known and characteristic trait is the selective loss of the cells that produce the neurotransmitter dopamine.”

Researchers involved in the project studied the function and plasticity of this circuit in different animal models of Parkinson’s disease, from mice to non-human primates. They found that exactly the same alterations were present and conserved. This makes the circuit an interesting alternative target for trying to re-establish correct functioning and reverse the symptoms of the disease.

One expert agrees with the need to target alternative systems. 'What researchers are trying to do is to intervene to modulate other systems that do not involve dopamine and obtain better symptom management,' explains Erwan Bezard, a researcher at the Neurodegenerative Diseases Institute at the University of Bordeaux, in France. He also works on alternative targets in Parkinson’s disease. In monkeys, compounds that target glutamate receptors, used in combination with traditional drugs, have previously been shown to improve some deficits in voluntary motor control.

But the research has also shed some light on apparently unrelated diseases. It is becoming more and more obvious that the same alterations in the working of the communication systems among neurons are shared among different diseases. 'This is why we speak about ‘synaptopathies’: there are common players among Parkinson’s disease, autism and other forms of intellectual disability, and even schizophrenia. Several of the mutated genes are the same, and affect the signalling systems through common molecules,' says Claudia Bagni, who works on synaptic plasticity in the context of intellectual disabilities at the University of Leuven in Belgium and the University of Rome Tor Vergata in Italy. 'For example, the glutamatergic system is also affected in fragile X syndrome, the most common form of inherited intellectual disability.'

Progress is in sight thanks to a much better understanding of the working of the abnormal synapses in Parkinson’s disease, and to experiments performed in monkeys showing encouraging results. Indeed, 'the team studied non-human primates, the model system closest to humans, and therefore their findings are relevant to human health,' says Bagni. Project researchers hope the door is now open for the first clinical trials in humans. 'We have identified a potential new target for treatment, and tested a couple of molecules in animals,' says Di Luca. 'The next step would be to find a partnership with pharmaceutical industries interested in pursuing this research.'


Contacts and sources:
by Chiara Palmerini
youris.com 

Mother’s Diet Mirrors Kid's Food Allergies

A long-term study evaluating maternal diet’s impact on food allergy in later life is expected to uncover causes of allergy in children.

About 20 million Europeans suffer from food allergies. Now scientists are looking at these allergies in new ways through iFAAM, an EU-funded research project that involves the food industry in its work and pays special attention to the link between early diets and allergy in later life. Clare Mills, professor of allergy at the Institute of Inflammation and Repair at the University of Manchester, UK, coordinates the project, which follows in the footsteps of European research projects dating back over a decade.


Credit: Wikipedia

In particular, the conclusions from a long-term study of a cohort of young people, now six years old, who have been tracked from birth and whose diets and allergies have been recorded, are now in sight. 'Our aim is to see the allergy outcomes of their diet in early life, and even before they were born, as we have information on their mothers’ diets and on their weaning,' Mills tells CommNet. 'This work has been coordinated at the Charité [University Hospital] in Berlin and involves 12 000 people in samples from Iceland to Greece.'

Mills says that although the project has only been going for a year, this work is already producing interesting pointers. For example, a comparison between the UK and Israel shows that children in Israel typically eat nuts at an earlier age than in the UK. This suggests that such dietary habits may have a protective effect against nut allergies later on. 'This means that the current advice that young children should avoid nuts may make things worse,' she observes.

A particular focus for the project is the different effects of allergenic foods in different contexts. 'Someone might react very differently to nuts in a cookie or in a chocolate dessert,' says Mills. The project aims to produce risk models, which will enable food manufacturers to look at these issues, perhaps leading them to alter cleaning protocols in their factories.

In addition, project researchers are working with allergy patient groups. Mills tells CommNet: 'Often people don’t report allergies, but instead just cope with them. This means that we don’t get to know about them. So we are working with patient groups, and setting up an online tool to allow people to record their allergy experiences.'

One expert recognises that the focus on maternal diet is the right one. 'There is reason to worry about maternal diet during breast feeding and pregnancy with regard to food allergy outcomes in children. The diet may alter the nutrients and proteins in breast milk and affect the immune system. Studies thus far mostly suggest that a ‘healthy’ diet is important,' says Scott Sicherer, professor of paediatrics, allergy and immunology at the Icahn School of Medicine at Mount Sinai, in New York, USA.

He adds that the food industry has a significant part to play, saying: 'Proper labelling of food allergens is important for keeping persons with food allergies safe.' But he cautions that it is not the whole story. 'It is also important not to label foods overly cautiously. Rampant use of cautionary labels might, if used improperly, be overly limiting,' Sicherer tells youris.com.

However, the link between early and maternal diet and the onset of allergy is not proven and no valuable biomarkers for allergy have yet been found, according to Karin Hoffmann-Sommergruber, associate professor in the department of pathophysiology and allergy research at Vienna Medical University, Austria. Allergy is a global health issue, with rising incidence in newly industrializing nations, she believes. It varies in nature from place to place, with rice more of a problem in Asia, peanuts in Europe and the US, and fish and seafood everywhere.

In addition, Hoffmann-Sommergruber pinpoints allergy as a key issue that the European food industry has yet to tackle. She concludes: '[The] food industry has to set up a risk assessment and risk management plan in compliance with the current allergen labeling legislation.'



Contacts and sources:
by Martin Ince 

Quantum Chaos In Ultracold Gas Discovered

A specialized University of Innsbruck laboratory has successfully identified chaotic behavior in atoms by using ultracold gas. This breakthrough could enable physicists to better understand the world of quantum mechanics.

In a relatively short space of time, the study of ultracold gas has become one of the most interesting and potentially significant fields of atomic and molecular physics. This is because, in an ultracold world, scientists can control and observe atoms in a manner that is not possible in other conditions.


 Illustration: Erbium Team

In labs where temperatures are measured in micro- or nano-kelvin (millionths or even billionths of a degree above absolute zero), atoms move incredibly slowly, and their behavior changes. This provides physicists with the opportunity to better understand the world of quantum physics (i.e. what happens at the sub-atomic or nanoscopic scale). Indeed, if sufficiently low temperatures are reached, atoms form a new state of matter that is governed by quantum mechanics.

One laboratory with ultracold facilities is situated at the Institute for Experimental Physics at the University of Innsbruck. Ground-breaking research carried out at the lab - funded under the FP7 project ERBIUM - identified chaotic behavior of particles in a quantum gas. This discovery is significant because it opens up new possibilities of observing interactions between quantum particles.

'For the first time we have been able to observe quantum chaos in the scattering behavior of ultracold atoms,' says team leader Francesca Ferlaino. 'Our work represents a turning point in the world of ultracold gases.'

Observing random behavior

Chaos to physicists does not mean disorder but rather a well-ordered system which, due to its complexity, shows random behavior. In order to observe quantum chaos, the physicists in Innsbruck cooled erbium atoms (erbium is a silvery white solid metal) to a few hundred nano-kelvin, and loaded them into a trap composed of laser beams. They then used a magnetic field to encourage particles to scatter, and after 400 milliseconds, recorded the number of atoms remaining in the trap.

This enabled the team to determine at which magnetic field strengths two atoms couple to form a weakly bound molecule. At these field values, so-called Fano-Feshbach resonances emerge. After varying the magnetic field in each experimental cycle and repeating the experiment 14,000 times, the physicists identified 200 resonances - a number unprecedented in ultracold quantum gases.

The scientists were able to show that the particular properties of erbium caused highly complex coupling behaviour between particles, which could be described as chaotic. Erbium is a relatively heavy and highly magnetic element, and the interaction between two erbium atoms was shown to be significantly different from that in the other quantum gases investigated up until now.

While the experiment was unable to characterize the behavior of single atoms, it did enable the team, through complex statistical methods, to describe the behavior of particles. Ferlaino compares the method with sociology, which studies the behavior of bigger communities of people (whereas psychology describes relations between individuals).

The research has been published in the journal Nature. 'In the experiment, an ultracold gas of erbium atoms is shown to exhibit many Fano-Feshbach resonances,' the team writes in its summary. 'Analysis verifies that their distribution of nearest-neighbor spacings is what one would expect from random matrix theory ... our results therefore reveal chaotic behavior in the native interaction between ultracold atoms.'
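
As a point of reference, the "random matrix theory" expectation mentioned in that summary is usually taken to be the Wigner surmise; the formula below is standard textbook background rather than something taken from the paper. For spacings s between neighbouring resonances, measured in units of the mean spacing,

$$P(s) = \frac{\pi}{2}\, s\, e^{-\pi s^{2}/4},$$

which vanishes as s approaches zero ("level repulsion"), in contrast to the Poisson distribution $P(s) = e^{-s}$ expected for uncorrelated, non-chaotic spectra.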



Contacts and sources:
CORDIS
University of Innsbruck Institute for Experimental Physics
http://www.ultracold.at/

Volcanoes: A Friendly Force?

Some of the most famous and devastating natural disasters in history relate to volcano eruptions. It is estimated that more than 260 000 people have died in the past 300 years from eruptions and their aftermath. But volcanoes should not be judged as purely destructive forces - they may also have played a vital part in ensuring life could evolve on Earth and they may now be helping to slow down the warming of the atmosphere. 


Credit: Wikimedia Commons 

According to New Scientist, we now have the best evidence yet that volcanoes were responsible for pulling the Earth out of a period of frigid chill over 600 million years ago. This may make them the driving force behind the evolutionary explosions that made life more diverse and laid the foundations for future animal species.

New Scientist reports that Ryan McKenzie of the University of Texas at Austin and colleagues have shown that volcanism may have shaped life during the crucial Cambrian period. McKenzie's study of volcanic rocks from early in life's evolutionary story shows that volcanic eruptions coincided with a change in the climate from frigid chill to sweltering heat.

This swing, and the way it affected the oceans, caused an explosion of evolutionary diversity, followed by a mass extinction when temperatures got too hot. Then, when Gondwana had formed and the volcanism died down, the planet cooled and life began to bloom again.

Volcanic activity during the formation of Gondwana had previously been suggested as a driver of these violent changes, but McKenzie's new evidence (based on counts of zircon crystals formed in particular volcanic eruptions) strengthens the argument.

Volcanoes are also getting good press in the Guardian, which reports on a study focused on volcanoes as a factor in the slowed warming of the atmosphere. In the study, Dr. Ben Santer and colleagues asked whether small volcanoes could be causing a slight reduction in the amount of sunlight that reaches the Earth.

The Guardian quotes co-author Carl Mears who says, 'We were able to show that part of the cause of the recent lack of temperature increase is the large number of minor volcanic eruptions during the last 15 years. The ash and chemicals from these eruptions caused less sunlight than usual to arrive at the Earth's surface, temporarily reducing the amount of temperature increase we measured at the surface and in the lower troposphere. 

The most recent round of climate models studied for the IPCC report did not adequately include the effects of these volcanoes, making their predictions show too much warming. For climate models to make accurate predictions, it is necessary that the input data that is fed into the model is accurate. Examples of input data include information about changes in greenhouse gases, atmospheric particles and solar output.'
  

Contacts and sources:
CORDIS
Nature http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2098.html

Paradigm Change: Swarming Robots Have Far-Reaching Implications

EVOLVINGROBOT is a European Union (EU)-funded research project which has developed an artificial intelligence system to control tiny robots, enabling them to replicate the ‘swarming’ behavior seen in insects such as bees or ants, or even in birds and fish. It is an innovation which could have far-reaching implications for a range of human activities, from medical to industrial, military and disaster relief.



Credit: © Peter Galbraith fotolia

“The breakthrough, allowing robots to act collectively in a way never achieved before, could herald a ‘paradigm change’ in robotics,” according to the project leader, Dr Roderich Gross of the Natural Robotics Lab at Sheffield University in the United Kingdom.

“The project addressed one of the grand challenges in robotics – how a robotic system can exhibit properties of living beings,” explains Dr Gross, whose work was supported by a Marie Curie European Reintegration Grant (ERG). The project succeeded in getting miniature robots, acting autonomously on the basis of very simple pre-programming, to perform a number of collective tasks. These tasks included gathering together in a single place, segregating themselves into distinct sub-groups, and cooperatively transporting objects.

The most important aspect was that these tasks required only minimal computing power. None of them required the robots to communicate with each other, and some did not require the robots to have memory or be able to compute at all. For example, to move an object, the robots were simply programmed to position themselves on the side of the object which obscured their view of the target area they were aiming for. Without communicating, the robots were thus able to ‘swarm’ in the right place to begin cooperatively pushing the object in the necessary direction.
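
As an illustration of how little computation such a rule needs, here is a minimal Python sketch of the occlusion principle described above. It assumes point robots and a circular object; all names and the geometry are illustrative only and do not come from the project's actual controller.

import math

def unit(dx, dy):
    """Return the unit vector in direction (dx, dy), or (0, 0) for zero length."""
    n = math.hypot(dx, dy)
    return (dx / n, dy / n) if n > 0 else (0.0, 0.0)

def goal_occluded(robot, obj_centre, obj_radius, goal):
    """True if the circular object blocks the robot's line of sight to the goal."""
    rx, ry = robot
    gx, gy = goal
    cx, cy = obj_centre
    dx, dy = gx - rx, gy - ry
    seg_sq = dx * dx + dy * dy
    if seg_sq == 0:
        return False
    # closest point on the robot-goal segment to the object centre
    t = max(0.0, min(1.0, ((cx - rx) * dx + (cy - ry) * dy) / seg_sq))
    px, py = rx + t * dx, ry + t * dy
    return math.hypot(cx - px, cy - py) < obj_radius

def next_position(robot, obj_centre, obj_radius, goal, speed=0.05):
    """One control step: push the object if it hides the goal, otherwise circle round it."""
    rx, ry = robot
    tx, ty = unit(obj_centre[0] - rx, obj_centre[1] - ry)
    if goal_occluded(robot, obj_centre, obj_radius, goal):
        dx, dy = tx, ty        # goal hidden: move towards the object and push
    else:
        dx, dy = -ty, tx       # goal visible: skirt around the object
    return (rx + speed * dx, ry + speed * dy)

Because a robot only pushes when the object stands between it and the goal, the individual pushes of the swarm add up to a net motion of the object towards the goal, with no communication or shared map required.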

“These results may lead to a paradigm change in robotics,” says Dr Gross. “Rather than building robots of increasing complexity, the results suggest that a range of capabilities could be realised with exceedingly simple mechanisms. Our truly minimalistic approach may pave the way for implementing massively distributed robotic systems at scales where conventional approaches to sensing and information processing are no longer applicable, for example, at the nanoscale. The amount of computing that can be used at such a scale is next to nothing.”

Enabling this technology at the nanoscale would be of major interest to the field of micro-medicine, where ‘nanobots’ are seen as the key to non-invasive treatment. The robots’ ability to gather together autonomously means individual robots could be injected into the body and then ‘self-assemble’ into the larger groupings needed to carry out specific tasks.

“The idea is to make a robot so simple in terms of intelligence required that it could be further miniaturised, hopefully to a few microns in the next five to ten years – the size of a red blood cell. These robots could be used to transport drugs around the body or carry out treatments such as clearing blockages in the vascular system,” explains Dr Gross.

The technology also has exciting implications for manufacturing, based on the same idea of robotic self-assembly. “If you want to make a machine with dimensions of less than a millimetre, how can you have manufacturing equipment that is precise enough to do that?” asks Dr Gross. “In the future,” he says, “the goods might manufacture themselves, using robot ‘modules’ programmed to replicate a given template.”

At a larger scale, ‘swarming’ robots could be used in situations too dangerous or impractical for humans, such as search and rescue or military situations.

For development at the nanoscale, one issue that would need to be addressed is how to power the robots, since batteries are not possible. EVOLVINGROBOT has been examining ways in which the robots can harvest energy from their environment – another important property of living beings. The project has developed solar-powered robots, but other ways such as magnetic induction may also prove fruitful in the future.

For Dr Gross, the exciting significance of EVOLVINGROBOT is clear. “Less is more,” he says. “We have shown that robots can use very limited information and yet achieve tasks that, until now, no one knew could be done without computation. That is the most striking result.”


Contacts and sources: 
European Commission Research & Innovation

Solarjet: Sunlight + Water + CO2 = Jet Fuel

In the framework of the EU-project Solarjet, scientists demonstrate for the first time the entire production path to liquid hydrocarbon fuels from water, CO2 and solar energy. The key technological component is a solar reactor developed at ETH Zurich.



A European consortium with the participation of ETH Zurich has experimentally demonstrated the first ever production of jet fuel via a thermochemical process using concentrated solar energy. Researchers from ETH Zurich conducted the EU funded project Solarjet together with the German Aerospace Center (DLR), the fuel company Shell, the think-tank Bauhaus Luftfahrt, and the consulting firm Arttic.

The key component of the production process of sustainable “solar kerosene” is a high-temperature solar reactor developed by the group of Aldo Steinfeld, Professor of Renewable Energy Carriers at ETH Zurich and Head of the Solar Technology Laboratory at the Paul Scherrer Institute. The reactor contains a porous ceramic solar absorber made of the metal oxide ceria, which enables the molecular splitting of water and CO2 in a cyclic two-step reduction-oxidation (redox) process.

Syngas for kerosene synthesis

The first, energy-intensive step proceeds at 1500 degrees Celsius using concentrated solar radiation as the energy source. The metal oxide releases oxygen, assuming a reduced state. In the second step at 700 degrees Celsius, the reduced metal oxide reacts with water and CO2, thus re-acquiring oxygen. As the metal oxide is thereby returned to its original state, it can enter the next cycle of the redox process. The net chemical product is synthesis gas – or syngas – a gas mixture of hydrogen (H2) and carbon monoxide (CO), which serves as the precursor for the synthesis of liquid hydrocarbon fuels.
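
In simplified stoichiometric terms, the cycle just described can be summarised as follows, using nonstoichiometric ceria, CeO2−δ; this is a schematic textbook form, and the actual reactor chemistry is more involved:

$$\mathrm{CeO_2} \;\xrightarrow{\;\sim 1500\,^{\circ}\mathrm{C}\;}\; \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2}$$
$$\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \;\xrightarrow{\;\sim 700\,^{\circ}\mathrm{C}\;}\; \mathrm{CeO_2} + \delta\,\mathrm{H_2}$$
$$\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \;\xrightarrow{\;\sim 700\,^{\circ}\mathrm{C}\;}\; \mathrm{CeO_2} + \delta\,\mathrm{CO}$$

Over a complete cycle the ceria is recovered unchanged, so the net effect is that solar heat splits H2O and CO2 into H2, CO (the syngas) and O2.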

“We were able to successfully perform 240 consecutive cycles,” says Daniel Marxer, PhD student of Steinfeld’s group. The yield was 750 litres of syngas, which were shipped in a pressurized vessel from Zurich to Amsterdam. There, at a Shell research centre, the solar syngas was finally converted into kerosene by an established method (Fischer-Tropsch process).

Exploring the industrial application

 Artist's rendering of the functional principle 
Illustration: Solarjet 

In a next phase the scientists aim to optimize the solar reactor technology. “Enhanced heat transfer and fast reaction kinetics are crucial for maximizing the solar-to-fuel energy conversion efficiency,” says Steinfeld. The industrial application in megawatt solar tower systems, such as those already applied commercially for electricity generation, is being explored.

It might also be possible in the future to obtain the required CO2 feedstock from flue gas separation or directly from atmospheric air, thereby closing the material cycle for a CO2-neutral process. The scientists are well aware of the large areas required for fueling commercial aviation with solar kerosene. “The long-term goal is to reach a 15 per cent efficiency with the solar-driven cyclic process,” says Steinfeld. 20,000 litres of kerosene per day could then be produced in a solar tower system of one square kilometer.
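
As a rough plausibility check of that figure, the back-of-envelope calculation below reproduces its order of magnitude. The irradiance, mirror-coverage and fuel energy-density values are assumptions made here for illustration, not numbers from the Solarjet project; only the 15 per cent efficiency and the one-square-kilometre field come from the text above.

# Back-of-envelope check of the ~20,000 litres/day figure (all inputs below are assumptions)
dni_kwh_per_m2_day = 6.0       # assumed direct normal irradiance at a sunny site
mirror_coverage = 0.25         # assumed fraction of the field covered by heliostat mirrors
solar_to_fuel_eff = 0.15       # the 15 per cent target quoted by Steinfeld
field_area_m2 = 1.0e6          # one square kilometre
kerosene_mj_per_litre = 34.7   # approximate volumetric energy density of jet fuel

fuel_energy_mj = (dni_kwh_per_m2_day * 3.6        # kWh -> MJ
                  * mirror_coverage * solar_to_fuel_eff * field_area_m2)
litres_per_day = fuel_energy_mj / kerosene_mj_per_litre
print(round(litres_per_day))   # roughly 23,000 litres/day, the same order as the quoted 20,000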


Contacts and sources:
ETH Zurich