Unseen Is Free

Tuesday, August 31, 2010

Quantum Interference Effect Transistor (QuIET) Developed by University of Arizona Scientists

University of Arizona (Tucson, AZ) earned U.S. Patent 7,786,472 for a quantum interference effect transistor (QuIET).

According to inventors Charles Allen Stafford, David Michael Cardamone and Sumitendra Mazumdar, the patent covers a molecular-based switching device together with a method for controlling charge transport across a molecule.

The switching device comprises a molecule with first and second nodes, between which destructive quantum interference restricts electrical conduction in the off-state; a first electrode connected to the first node and configured to supply charge carriers to it; a second electrode connected to the second node and configured to remove the charge carriers; and a control element configured to reduce coherence in, or alter, the charge transport paths between the two nodes, thereby weakening the destructive interference and permitting the charge carriers to flow from the first node to the second.

The method applies an electric potential between the two electrodes, controls coherence in the charge transport paths between the nodes so as to maintain or reduce the destructive interference, and, when the coherence is tuned to reduce that interference, injects charge carriers from the first electrode into the first node and collects them from the second node at the second electrode.
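
To make the switching principle concrete, here is a toy two-path interference model in Python. It is an illustrative sketch only, not the device physics or parameters from the patent: the current through the molecule is treated as two interfering transmission amplitudes, and a "decoherence" control knob suppresses the interference term, turning the device from off to on.

```python
# Toy two-path interference model of a QuIET-like switch (illustrative only;
# not the device physics from the patent). Two transmission amplitudes through
# the molecule interfere; a control parameter 'decoherence' (0..1) suppresses
# the interference cross-term, mimicking the control element that spoils the
# destructive interference and lets current flow.
import cmath
import math

def transmission(phase_difference, decoherence):
    """Transmission probability for two half-amplitude paths."""
    a1 = 0.5                                     # amplitude of path 1
    a2 = 0.5 * cmath.exp(1j * phase_difference)  # amplitude of path 2
    coherent = abs(a1 + a2) ** 2                 # paths interfere fully
    incoherent = abs(a1) ** 2 + abs(a2) ** 2     # interference destroyed
    return (1 - decoherence) * coherent + decoherence * incoherent

print(transmission(math.pi, 0.0))  # 0.0 -> "off": complete destructive interference
print(transmission(math.pi, 1.0))  # 0.5 -> "on": interference suppressed, carriers flow
```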

One problem for nanoscale device technologies is fanout, owing to the need for intermediate current drive (e.g., amplifiers) between nanoscale devices. One solution proposed for the QuIET is to incorporate the nanoscale devices onto a conventional transistor amplifier layout that serves as the fabrication substrate, producing a "hybrid" device structure. After the conventional transistor amplifier layout is coated with an insulating layer, the metal interconnect and molecular layers are fabricated on top, with input to the amplifiers made through via holes. This configuration permits an internal signal to be driven or input anywhere in the array.

Developments In Nanobiotechnology At UCSB Point To Medical Applications



Two new groundbreaking scientific papers by researchers at UC Santa Barbara demonstrate the synthesis of nanosize biological particles with the potential to fight cancer and other illnesses. The studies introduce new approaches that are considered "green" nanobiotechnology because they use no artificial compounds.

Luc Jaeger, associate professor of chemistry and biochemistry at UCSB, explained that there is nothing short of a revolution going on in his field –– one that permeates all areas of biochemistry, especially his area of nanobiotechnology. The revolution involves understanding the role of RNA in cells.

Top row, three different RNA objects rendered from molecular computer models: from left, RNA antiprism composed of eight RNAs, a six-stranded RNA cube, and a 10-stranded RNA cube. Bottom row, the corresponding three-dimensional reconstructions of the objects obtained from cryo-electron microscopy.
 
Credit: Cody Geary and Kirill A. Afonin

"Considering the fact that up to 90 percent of the human genome is transcribed into RNA, it becomes clear that RNA is one of the most important biopolymers on which life is based," said Jaeger. "We are still far from understanding all the tremendous implications of RNA in living cells."

Jaeger's team is putting together complex three-dimensional RNA molecules –– nanosize polyhedrons that could be used to fight disease. The molecules self-assemble into the new shapes. The work is funded by the National Institutes of Health (NIH), and there is a patent pending jointly between NIH and UCSB on the new designs.

"We are interested in using RNA assemblies to deliver silencing RNAs and therapeutic RNA aptamers to target cancer and other diseases," said Jaeger. "It is clear that RNA is involved in a huge number of key processes that are related to health issues."

Jaeger believes the RNA-based approaches to delivering new therapies in the body will be safer than those using artificial compounds that might have undesirable side effects down the line.

"Considering the fact that up to 90 percent of the human genome is transcribed into RNA, it becomes clear that RNA is one of the most important biopolymers on which life is based," said Jaeger. "We are still far from understanding all the tremendous implications of RNA in living cells."

Jaeger's team is putting together complex three-dimensional RNA molecules –– nanosize polyhedrons that could be used to fight disease. The molecules self assemble into the new shapes. The work is funded by the National Institutes of Health (NIH), and there is a patent pending jointly between NIH and UCSB on the new designs.

"We are interested in using RNA assemblies to deliver silencing RNAs and therapeutic RNA aptamers to target cancer and other diseases," said Jaeger. "It is clear that RNA is involved in a huge number of key processes that are related to health issues."

Jaeger believes the RNA-based approaches to delivering new therapies in the body will be safer than those using artificial compounds that might have undesirable side effects down the line.

"By using RNA molecules as our primary medium, we are practicing 'green' nanobiotechnology," explained Jaeger. "The research program developed in my lab at UCSB aims at contributing in a positive way to medicine and synthetic biology. We try to avoid any approaches that raise controversial bioethical issues in the public square. It's not an easy task, but I am convinced that it will pay off in the long run."

The more recent of the two scientific papers describing the new work –– "In vitro assembly of cubic RNA-based scaffolds designed in silico" –– was published online Monday, August 30, by Nature Nanotechnology. The earlier paper –– "A polyhedron made of tRNAs" by Severcan and colleagues –– was published online on July 18 by Nature Chemistry. The print version of that article will appear in Nature Chemistry's September issue.

The second author on the Nature Chemistry paper is Cody Geary, a postdoctoral fellow in Jaeger's lab. Kirill A. Afonin, also a postdoctoral fellow in Jaeger's lab, is the first author on the Nature Nanotechnology article.

Bruce Shapiro, a senior author on the Nature Nanotechnology article, is based at the National Cancer Institute in Frederick, Md., and is also funded by NIH. Jaeger and his team worked with Shapiro to develop a computerized approach for facilitating the design of self-assembling RNA strands. Further assistance came from the National Resource for Automated Molecular Microscopy, located at the Scripps Research Institute in La Jolla, Calif.


Contacts and sources:
University of California - Santa Barbara

To Triple Fuel Efficiency, Cars Need To Be Wired With Better Brainpower, Says Researcher


A University of Michigan researcher says it's possible to triple fuel economy in gasoline-powered cars by 2035, but it'll mean getting our automotive kicks from smart electronic technology and other forms of virtual performance rather than horsepower. 

As federal regulators are poised to propose the next round of fuel economy mandates, John DeCicco, a senior lecturer at the School of Natural Resources and Environment and faculty fellow with the Michigan Memorial Phoenix Energy Institute, says the most cost-effective answer is steady progress in advanced combustion engines and hybrid drive---but stopping short of plugging in and requiring super batteries or gaseous fuels.

He finds that the solution is in our garages if Americans shift gears in terms of priorities. What DeCicco calls a “revolution by evolution” avoids politically trendy breakthrough technologies that will remain too expensive for most consumers.

“If we really prioritize efficiency, we can get just as far with less sticker shock,” he said. “Evolutionary change can be of profound consequence for cutting oil use and greenhouse gas emissions, and do so with manageable costs and minimal risks for automakers.”

DeCicco has completed a study for The Energy Foundation examining how far fuel economy can be taken if it becomes a top priority in product planning.

His analysis shows that optimizing internal combustion engines plus rising adoption of grid-free hybrids will enable new fleet efficiency to reach 52 mpg by 2025 and 74 mpg by 2035.
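
A quick back-of-envelope check puts those targets in perspective; note that the roughly 25 mpg baseline used below is an assumed round number for illustration, not a figure from DeCicco's study.

```python
# Rough arithmetic behind the headline targets. The ~25 mpg baseline is an
# assumed round number for illustration, not a value taken from the study.
MPG_TO_L_PER_100KM = 235.215   # 1 mpg (US gallon) expressed as litres per 100 km

baseline_mpg = 25.0
for year, mpg in [(2025, 52), (2035, 74)]:
    litres = MPG_TO_L_PER_100KM / mpg
    print(f"{year}: {mpg} mpg = {litres:.1f} L/100 km, "
          f"about {mpg / baseline_mpg:.1f}x the assumed baseline")
# 2025: 52 mpg = 4.5 L/100 km, about 2.1x the assumed baseline
# 2035: 74 mpg = 3.2 L/100 km, about 3.0x the assumed baseline
```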

Reaching such a horizon would entail cultural change in a gearhead world attuned to nuances of power performance. DeCicco identifies emerging trends for what he dubs “efficiency compatible” design strategies, enticing buyers away from brute force and toward smart technologies, intelligent safety features and svelte styling. Amenities like Bluetooth hookups, communication bandwidth and other information technology enhance customer value with minimal demands on power.

The report develops new interpretations of technology cost estimates that better depict the benefits of ongoing innovation while acknowledging the limits of how much consumers can spend. The analysis reflects the three-way trade-off among efficiency, performance and cost that the car market is likely to face in the years ahead.

“The fleet I’ve modeled for 2025 does not give up any of the performance and creature comforts consumers already enjoy,” he said. “You don’t have to go back to being Fred Flintstone, but you will see lower fuel costs instead of ever more mass and muscle.”

At the U-M, DeCicco teaches courses in sustainable energy and transportation energy policy and researches solutions to transportation energy and climate problems. Before returning to academia, he was the green world’s top vehicle technology expert, most recently as senior fellow for automotive strategies at the Environmental Defense Fund.

DeCicco’s earlier studies of auto efficiency were influential in building the analytic foundation for major policy changes, including recent updates to Corporate Average Fuel Economy (CAFE) standards. He also pioneered environmental rating methodologies for motor vehicles as father of ACEEE's Green Book and designer of the Yahoo! Autos Green Ratings.

The Michigan Memorial Phoenix Energy Institute develops, coordinates and promotes multidisciplinary energy research and education.

Contacts and sources:
 The Energy Foundation


IceCube Neutrino Observatory Nears Completion


In December 2010, IceCube -- the world's first kilometer-scale neutrino observatory, which is located beneath the Antarctic ice -- will finally be completed after two decades of planning. In an article in the AIP's Review of Scientific Instruments, Francis Halzen, the principal investigator of the IceCube project, and his colleague Spencer Klein of Lawrence Berkeley National Laboratory provide a comprehensive description of the observatory, its instrumentation, and its scientific mission—including its most publicized goal: finding the sources of cosmic rays.

Signals from the sensors are carried by cables to the IceCube counting house that houses a large cluster of computers to reconstruct in real time some 2,000 muon tracks every second.
 
Credit: J. Haugen

"Almost a century after their discovery, we do not know from where the most energetic particles to hit the Earth originate and how they acquire their incredible energies," says Halzen, a professor of physics at the University of Wisconsin in Madison.

After light, neutrinos, which are created in the decay of radioactive particles, are the most abundant particles in the universe. High-energy neutrinos are formed in the universe's most violent events, like exploding stars and gamma ray bursts. Because the neutrino has no charge, essentially no mass, and only interacts weakly with matter, trillions of neutrinos pass through our bodies each day, without effect. On extremely rare occasions, a neutrino will strike the nucleus of an atom, creating a particle, called a muon, and blue light that can be detected with optical sensors. The trick is spying those collisions—and, in particular, the collisions of high-energy neutrinos. IceCube does it by sheer virtue of its size.

At 1 kilometer on a side -- with 5,160 optical sensors occupying a gigaton of ice -- the observatory is orders of magnitude bigger than other neutrino detectors; the Superkamiokande detector in the Japanese Alps, for example, is only 40 meters on a side.
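
For a rough sense of that scale difference, both instrumented volumes can be treated as simple cubes (an approximation; Super-Kamiokande is in fact a cylinder of roughly those dimensions):

```python
# Back-of-envelope scale comparison, approximating both detectors as cubes.
icecube_side_m = 1000.0   # ~1 km of instrumented ice
superk_side_m = 40.0      # ~40 m, per the comparison above

ratio = (icecube_side_m / superk_side_m) ** 3
print(f"IceCube encloses roughly {ratio:,.0f} times the volume")   # ~15,625x
```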

"IceCube has been totally optimized for size in order to be sensitive to the very small neutrino fluxes that may reveal the sources of cosmic rays and the particle nature of dark matter," Halzen says.

IceCube scientists deploy a calibration light source, called the Standard Candle, in one of the 2.5-km-deep holes. Each of the 86 holes contains a string of 60 Digital Optical Modules (DOMs) that detect the blue light from neutrino events in the deep, clear ice.
 
Credit: J. Haugen

Contacts and sources:
The article, "IceCube: An instrument for neutrino astronomy" by Francis Halzen and Spencer R. Klein appears in the journal Review of Scientific Instruments. See: http://rsi.aip.org/resource/1/rsinak/v81/i8/p081101_s1

Tiny Rulers To Measure Nanoscale Structures

Physicists at China's Wuhan University discovered that nanospheres combined with a nanorod dimer could be used to solve the problem of measurement sensitivity at the nanoscale -- work reported in the Journal of Applied Physics.

With the advent of nanometer-sized machines, there is considerable demand for stable, precise tools to measure absolute distances and distance changes. One way to do this is with a plasmon ruler. In physics jargon, a "plasmon" is the quasiparticle resulting from the quantization of plasma oscillation; it's essentially the collective oscillations of the free electron gas at a metallic surface, often at optical frequencies.

A noble metallic dimer (a molecule that results from combining two entities of the same species) has been used as a plasmon ruler to make absolute distance and distance change measurements.

In contrast to a conventional nanoparticle dimer plasmon ruler, this new one shows an approximately linear relationship between the resonance wavelength shift and the nanosphere dimer interparticle separation.
 
Credit: Wuhan University/American Institute of Physics



Shao-Ding Liu and Mu-Tian Cheng used a nanostructure as a linear plasmon ruler. Nanospheres were used to modify the surface plasmon coupling of a nanorod dimer. They found that the resonance wavelength shift increases approximately linearly with increasing nanosphere interparticle separation -- resulting in a structure that is useful as a plasmon ruler with homogeneous measurement sensitivity.
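
The practical appeal of an approximately linear ruler is that a single slope and intercept calibrate it, after which an unknown gap can be read straight off a measured resonance shift. The sketch below illustrates that workflow; all of the numbers in it are invented placeholders, not data from the paper.

```python
# How a linear plasmon ruler would be used in practice: fit a straight line to
# known separations, then invert it for an unknown gap. All numbers below are
# hypothetical placeholders, not values from Liu and Cheng's paper.

def fit_line(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return m, mean_y - m * mean_x

separations_nm = [5, 10, 15, 20]       # hypothetical calibration gaps
shifts_nm = [12.1, 24.0, 35.8, 48.2]   # hypothetical resonance shifts

m, b = fit_line(separations_nm, shifts_nm)
measured_shift = 30.0                  # shift observed for an unknown sample
print(f"Estimated separation: {(measured_shift - b) / m:.1f} nm")   # ~12.5 nm
```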

"A nanoparticle dimer plasmon ruler possesses many advantages because its measurement sensitivity is homogeneous, it can operate in the near-infrared region, and the structure's size and nanorod aspect ratio can be modified freely to get the desired measurement range and sensitivity," notes Liu.

Applications for the linear plasmon ruler extend beyond studies of optical properties of metallic nanostructures to single-molecule microscopy, surface-enhanced Raman spectroscopy, waveguiding and biosensing.

Contacts and sources:
The article, "Linear plasmon ruler with tunable measurement range and sensitivity" by Shao-Ding Liu and Mu-Tian Cheng will appear in the Journal of Applied Physics. http://jap.aip.org/resource/1/japiau/v108/i3/p034313_s1

Scientists Discover New 'Sprouty' Protein To Control Obesity And Osteoporosis


New research in the FASEB Journal suggests that the 'Sprouty' protein could be a therapeutic target for patients with obesity and/or osteoporosis, as well as diabetes, osteoarthritis and heart disease.

Here's good news for anyone trying to lose weight or who has osteoporosis: Scientists from Maine are on the trail of a weight loss drug that may revolutionize how we treat these two conditions. In a new research report published in the September 2010 print issue of The FASEB Journal, the researchers describe a newly discovered protein, called "Sprouty," responsible for regulating body fat and bone mass. They then manipulated how much of this protein was expressed in different groups of mice specially bred to have some human genes. They found that the more of this protein that the transgenic mice expressed, the leaner and stronger they became. Furthermore, the scientists found that when mice with low levels of the Sprouty protein were made to express more of it, they lost weight and increased bone density.

"When the U.S. military has to turn to fitness gurus like Tony Horton to help its soldiers slim down, you know obesity is a serious problem," said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal, "and all you have to do is visit to a nursing home to see the devastating effects of osteoporosis. "Sprouty" –well named– gets to the roots of extra fat and shrinking bone."

To make this discovery, the researchers studied two groups of transgenic mice, one group with a genetic deletion of the Sprouty gene in cells that develop into fat and bone, and the second group with high levels of expressed Sprouty proteins in the same cell types. Results showed that the mice with the deleted Sprouty gene had increased body fat and loss of bone mass similar to osteoporosis as compared to normal mice. Bone loss was then reversed by adding more Sprouty protein. The group with excess Sprouty expression produced lean mice with increased bone mass.

"Our study provides insight into the regulation of bone mass and body fat," said Lucy Liaw, Ph.D., co-author of the study from the Maine Medical Center Research Institute in Scarborough, ME. "Therefore, future application of this knowledge may help treat common conditions such as bone loss and obesity."

Receive monthly highlights from The FASEB Journal by e-mail. Sign up at http://www.faseb.org/fjupdate.aspx. The FASEB Journal (http://www.fasebj.org) is published by the Federation of American Societies for Experimental Biology (FASEB). The journal has been recognized by the Special Libraries Association as one of the top 100 most influential biomedical journals of the past century and is the most cited biology journal worldwide according to the Institute for Scientific Information.

FASEB comprises 23 societies with more than 100,000 members, making it the largest coalition of biomedical research associations in the United States. FASEB enhances the ability of scientists and engineers to improve—through their research—the health, well-being and productivity of all people. FASEB's mission is to advance health and welfare by promoting progress and education in biological and biomedical sciences through service to our member societies and collaborative advocacy.

Contacts and sources:
FASEB Journal
Publication: Sumithra Urs, Deepak Venkatesh, Yuefeng Tang, Terry Henderson, Xuehui Yang, Robert E. Friesel, Clifford J. Rosen, and Lucy Liaw. Sprouty1 is a critical regulatory switch of mesenchymal stem cell lineage allocation. FASEB J. 2010 24: 3264-3273. doi: 10.1096/fj.10-155127; http://www.fasebj.org/cgi/content/abstract/24/9/3264

Silicon Oxide Circuits Break Barrier: The First Two-Terminal Memory Chips That Use Only Silicon


Rice University scientists have created the first two-terminal memory chips that use only silicon, one of the most common substances on the planet, in a way that should be easily adaptable to nanoelectronic manufacturing techniques and promises to extend the limits of miniaturization subject to Moore's Law. 

Last year, researchers in the lab of Rice Professor James Tour showed how electrical current could repeatedly break and reconnect 10-nanometer strips of graphite, a form of carbon, to create a robust, reliable memory "bit." At the time, they didn't fully understand why it worked so well.

Now, they do. A new collaboration by the Rice labs of professors Tour, Douglas Natelson and Lin Zhong proved the circuit doesn't need the carbon at all.

A 1-kilobit silicon oxide memory has been assembled by Rice and a commercial partner as a proof of concept. A silicon nanowire forms when charge is pumped through the silicon oxide, creating a two-terminal resistive switch.
 
Images courtesy Jun Yao/Rice University

Jun Yao, a graduate student in Tour's lab and primary author of the paper to appear in the online edition of Nano Letters, confirmed his breakthrough idea when he sandwiched a layer of silicon oxide, an insulator, between semiconducting sheets of polycrystalline silicon that served as the top and bottom electrodes.

Applying a charge to the electrodes created a conductive pathway by stripping oxygen atoms from the silicon oxide and forming a chain of nano-sized silicon crystals. Once formed, the chain can be repeatedly broken and reconnected by applying a pulse of varying voltage.
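
A schematic way to picture such a two-terminal cell is as a tiny state machine: one voltage pulse forms the conductive silicon pathway (low resistance), an opposite pulse ruptures it (high resistance), and a small read voltage senses the state without disturbing it. The thresholds and resistance values below are invented for illustration, not measured figures from the Rice device.

```python
# Minimal state model of a two-terminal resistive memory cell, in the spirit of
# the silicon-oxide switch described above. The voltage thresholds and
# resistance values are invented for illustration only.

class ResistiveCell:
    SET_V, RESET_V = 3.5, -2.5      # hypothetical programming thresholds (volts)
    R_ON, R_OFF = 1e3, 1e9          # hypothetical low/high resistance (ohms)

    def __init__(self):
        self.filament = False        # no conductive silicon pathway yet

    def pulse(self, volts):
        """Apply a programming pulse across the two terminals."""
        if volts >= self.SET_V:
            self.filament = True     # pathway forms: cell switches on
        elif volts <= self.RESET_V:
            self.filament = False    # pathway breaks: cell switches off
        # small voltages (reads) leave the state untouched

    def read(self, read_volts=0.2):
        """Return the sensed current at a small, non-destructive read voltage."""
        return read_volts / (self.R_ON if self.filament else self.R_OFF)

cell = ResistiveCell()
cell.pulse(4.0);  print(cell.read())   # ~2e-4 A  -> logical 1
cell.pulse(-3.0); print(cell.read())   # ~2e-10 A -> logical 0
```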

The nanocrystal wires are as small as 5 nanometers (billionths of a meter) wide, far smaller than circuitry in even the most advanced computers and electronic devices.

"The beauty of it is its simplicity," said Tour, Rice's T.T. and W.F. Chao Chair in Chemistry as well as a professor of mechanical engineering and materials science and of computer science. That, he said, will be key to the technology's scalability. Silicon oxide switches or memory locations require only two terminals, not three (as in flash memory), because the physical process doesn't require the device to hold a charge.

It also means layers of silicon-oxide memory can be stacked in tiny but capacious three-dimensional arrays. "I've been told by industry that if you're not in the 3-D memory business in four years, you're not going to be in the memory business. This is perfectly suited for that," Tour said.

Silicon-oxide memories are compatible with conventional transistor manufacturing technology, said Tour, who recently attended a workshop by the National Science Foundation and IBM on breaking the barriers to Moore's Law, which states the number of devices on a circuit doubles every 18 to 24 months.

"Manufacturers feel they can get pathways down to 10 nanometers. Flash memory is going to hit a brick wall at about 20 nanometers. But how do we get beyond that? Well, our technique is perfectly suited for sub-10-nanometer circuits," he said.

Austin tech design company PrivaTran is already bench testing a silicon-oxide chip with 1,000 memory elements built in collaboration with the Tour lab. "We're real excited about where the data is going here," said PrivaTran CEO Glenn Mortland, who is using the technology in several projects supported by the Army Research Office, National Science Foundation, Air Force Office of Scientific Research, and the Navy Space and Naval Warfare Systems Command Small Business Innovation Research (SBIR) and Small Business Technology Transfer programs.

"Our original customer funding was geared toward more high-density memories," Mortland said. "That's where most of the paying customers see this going. I think, along the way, there will be side applications in various nonvolatile configurations."

Yao had a hard time convincing his colleagues that silicon oxide alone could make a circuit. "Other group members didn't believe him," said Tour, who added that nobody recognized silicon oxide's potential, even though it's "the most-studied material in human history."

"Most people, when they saw this effect, would say, 'Oh, we had silicon-oxide breakdown,' and they throw it out," he said. "It was just sitting there waiting to be exploited."

In other words, what used to be a bug turned out to be a feature.

Yao went to the mat for his idea. He first substituted a variety of materials for graphite and found none of them changed the circuit's performance. Then he dropped the carbon and metal entirely and sandwiched silicon oxide between silicon terminals. It worked.

"It was a really difficult time for me, because people didn't believe it," Yao said. Finally, as a proof of concept, he cut a carbon nanotube to localize the switching site, sliced out a very thin piece of silicon oxide by focused ion beam and identified a nanoscale silicon pathway under a transmission electron microscope.

"This is research," Yao said. "If you do something and everyone nods their heads, then it's probably not that big. But if you do something and everyone shakes their heads, then you prove it, it could be big.

"It doesn't matter how many people don't believe it. What matters is whether it's true or not."

Silicon-oxide circuits carry all the benefits of the previously reported graphite device. They feature high on-off ratios, excellent endurance and fast switching (below 100 nanoseconds).

They will also be resistant to radiation, which should make them suitable for military and NASA applications. "It's clear there are lots of radiation-hardened uses for this technology," Mortland said.

Silicon oxide also works in reprogrammable gate arrays being built by NuPGA, a company formed last year through collaborative patents with Rice University. NuPGA's devices will assist in the design of computer circuitry based on vertical arrays of silicon oxide embedded in "vias," the holes in integrated circuits that connect layers of circuitry. Such rewritable gate arrays could drastically cut the cost of designing complex electronic devices.

Zhengzong Sun, a graduate student in Tour's lab, was co-author of the paper with Yao; Tour; Natelson, a Rice professor of physics and astronomy; and Zhong, assistant professor of electrical and computer engineering.

The David and Lucile Packard Foundation, the Texas Instruments Leadership University Fund, the National Science Foundation, PrivaTran and the Army Research Office SBIR supported the research.

Read the abstract here: http://pubs.acs.org/journal/nalefd

Contacts and sources:
Nano Letters
David and Lucile Packard Foundation
Texas Instruments Leadership University Fund
PrivaTran, Army Research Office SBIR

Monday, August 30, 2010

Researchers To Activate Anti-Cancer Genes to Fight Colon Cancer

Researchers at the University of Copenhagen's Faculty of Health Sciences have succeeded in decoding the genetic key that gives particular intestinal cells their identity. With this knowledge of the complex network of genes the researchers now hope to stop colon cancer by activating special anti-cancer genes. 
Graphic rendering of the human intestinal system.

Colon sloughs lining
The intestines have to work properly if we are to benefit from the food we eat. Digestive juices must be secreted, and the food broken down into smaller components and then transported through the gut wall and onwards to muscles and organs. The lining of the gut is coated in epithelial cells, a specialised layer that produces mucus and hormones while keeping dangerous bacteria and toxins at bay. Close contact with pathogenic microbes and toxins means that the epithelial cells may mutate and form cancer. The small intestine therefore sheds its entire epithelial layer in the course of two to five days, while the large intestine takes three weeks to perform the same process.

Gene provides cell ID
A triggered CDX2 gene tells a cell that it is located in the epithelial tissue of the intestine and thus enables the cell to do its job correctly. Associate Professor Jesper Troelsen and colleagues from the University of Copenhagen made this discovery several years ago: CDX2 may thus be regarded as an identity gene.

Cancer cells deactivate important gene
Using advanced equipment for DNA sequencing at the Department of Cellular and Molecular Medicine, the research group has now revealed that CDX2 controls more than 600 other genes governing the way the cells of the intestinal epithelial tissue work, ensuring that the intestine functions properly. The discovery has now been published in the Journal of Biological Chemistry.

- "Among the 600 genes we have found five that you can call anti-cancer genes", Associate Professor Troelsen says. "We have also studied early stages of colon cancer. We observed that before the colonic cancer cells began to invade the tissue outside the colon, they deactivated the CDX2 gene, removing their "ID".

- "We are now applying for funds to study the properties of CDX2 that enable it to suppress colon cancer and to find a way of reactivating the CDX2 gene to allow us to halt the progression of colon cancer".

Sources and contacts:
Journal of Biological Chemistry

Feasts At A Funeral: A 12,000-Year-Old Tradition


Whether the occasion is a wedding reception or another milestone in life, the feast is a time-honored ritual in which a large meal marks a significant occasion. We know that the Romans, Greeks and Vikings did it, and today it's still an active part of occasions such as birthdays, weddings and anniversaries. Now a University of Connecticut (UConn) anthropologist says there is new evidence that nearly 12,000 years ago, feasts were used to celebrate burial of the dead, bringing about the world's first established communities.

UConn Associate Professor of Anthropology Natalie Munro and a team of scientists found clear evidence of feasting at the ancient Hilazon Tachtit Cave burial site near Karmiel, Israel. Unusually high densities of butchered tortoise and wild cattle led them to conclude that the Natufian community members who lived in the area at the time gathered at the site for "special rituals to commemorate the burial of the dead, and that feasts were central elements."


Some 14,500 to 11,500 years before the present, the Natufian people occupied the area around Karmiel, near the Mediterranean Sea. They lived there during the region's pre-Neolithic period, which marked the end of the very long Stone Age.

"Feasting [...] is one of humanity's most universal and unique social behaviors," the researchers write in their report published in the Aug. 30 early online edition of Proceedings of the National Academy of Sciences.

"Our paper documents the first good evidence for feasting in the archaeological record that we know of," said Munro. She said that although many researchers believe feasting likely began with the emergence of modern humans, compelling supporting proofs have not been found.

A structure at Hilazon Tachtit Cave, Israel, contained the remains from at least three aurochs (wild cattle) that were consumed by humans as part of a feast.
Credit: Photo by Naftali Hilger

Detection of feasting nearly 12,000 years ago may signal important cultural changes. The Natufian people were the first to settle into more or less permanent communities, and the act of settling would have been a time of social and economic upheaval.

Prior to this, populations were more mobile and could separate into smaller groups to find food and other resources or to resolve disputes. Settling down, however, probably strained social relationships.

The researchers theorize that feasts may have played a significant role in easing the potentially rocky transition from a hunting-gathering lifestyle to one of agricultural dependency.

"Sedentary communities require other means to resolve conflict, smooth tensions and provide a sense of community," said Munro. "We believe that feasts, especially in funerary contexts, served to integrate communities by providing this sense of community."

Funerals may have provided special opportunities to bring communities together to mark the last event in a person's life and send the deceased off to another life. Instilled with additional layers of spiritual meaning, they may have provided an opportunity to commemorate an individual's life and soothe social disputes. And it appears that feasts would have played a significant role in that.

The discovery of cattle and other animals at "Hilazon Tachtit testifies to symbolic and ritual continuity with the succeeding Neolithic cultures," the researchers write. "This continuity in tradition emphasizes the importance of local contributions to the agricultural transition."

Leore Grosman of the Institute of Archaeology at Hebrew University in Jerusalem also contributed to this research. A grant from the National Science Foundation's Behavioral and Cognitive Sciences division supported it.

Sources and contacts:
Bobbie Mixon, NSF   
Program Contacts
Principal Investigators
Natalie Munro, University of Connecticut (860) 486-0090
Proceedings of the National Academy of Sciences
http://www.pnas.org/

Microfluidic Device Allows Collection, Analysis Of Hard-To-Handle Immune Cells

A team led by Massachusetts General Hospital scientists has developed a new microfluidic tool for quickly and accurately isolating neutrophils -- the most abundant type of white blood cell -- from small blood samples, an accomplishment that could provide information essential to better understanding the immune system's response to traumatic injury.  The system, described in a Nature Medicine paper that received advance online release, also can be adapted to isolate almost any type of cell.

"Neutrophils are currently garnering a lot of interest from researchers and clinicians, but collecting and processing them has been a real challenge," says Kenneth Kotz, PhD, of the MGH Center for Engineering in Medicine, lead author of the study. "This tool will allow a new range of studies and diagnostics based on cell-specific genomic and proteomic signatures."

Part of the body's first-line defense against injury or infection, neutrophils were long thought to play fairly simple roles, such as releasing antimicrobial proteins and ingesting pathogens. But recent studies find their actions to be more complex and critical to both chronic and acute inflammation, particularly the activation of the immune system in response to injury.

Studying patterns of gene expression and protein synthesis in neutrophils could reveal essential information about the immune response, but gathering the cells for analysis has been challenging. Standard isolation procedures take more than two hours and require relatively large blood samples. Neutrophils also are sensitive to handling and easily become activated, changing the molecular patterns of interest, and they contain very small amounts of messenger RNA, which is required for studies of gene expression.

Building on their experience developing silicon-chip-based devices that capture CD4 T cells for HIV diagnosis or isolate circulating tumor cells, Kotz's team developed a system that gathers a neutrophil-rich sample from microliter-sized blood samples in less than 5 minutes, reducing the risk of disturbing cells in the process. To meet the requirements for speed and precision, the researchers completely redesigned the geometry, antibody-based coating and other aspects of the cell-capture module at the heart of the device. The samples collected were successful in revealing differences in gene and protein activity relevant to the cells' activation status.

While the laboratory tests were encouraging, samples from critically injured patients need to be handled and processed in real-world clinical environments. Through the efforts of study co-author Lyle Moldawer, PhD, of the University of Florida College of Medicine, the devices were tested at six sites participating in a major National Institutes of Health-sponsored study of the immune response to injury, led by Ronald Tompkins, MD, ScD, chief of the MGH Burns Service and also a study co-author. Analyzing samples from 26 patients with serious burns or other traumatic injuries revealed complex gene expression patterns that shifted during the 28 days after injury, probably reflecting complex interactions between various immune system components.

Kotz says, "Until now, it's been logistically impossible to study neutrophils to the extent we have in this paper." He notes that their analysis of neutrophil samples from trauma patients is the largest such investigation to date and adds, "This technology – which is much faster and gentler than current approaches to isolating cells – can be scaled and modified to capture just about any cell type, and we're working to apply it to other cell-based assays."

Mehmet Toner, PhD, director of the BioMEMS Resource Center in the MGH Center for Engineering in Medicine, is senior author of the Nature Medicine article. In addition to Tompkins and Moldawer, primary co-authors are Aman Russom, Alan Rosenbach, Jeremy Goverman, Shawn Fagan and Daniel Irimia, MGH; Wenzong Xiao, Weihong Xu, Julie Wilhelmy, Michael Mindrinos, and Ronald Davis, Stanford Genome Technology Center; Carol Miller-Graziano, Asit De and Paul Bankey, University of Rochester School of Medicine; Wei-Jun Qian, Brianne Petritis, David Camp, and Richard Smith, Pacific Northwest National Laboratory; Elizabeth Warner, University of Florida College of Medicine; and Bernard Brownstein, Washington University in St. Louis. The study was supported by grants from the National Institutes of Health.

Massachusetts General Hospital, established in 1811, is the original and largest teaching hospital of Harvard Medical School. The MGH conducts the largest hospital-based research program in the United States, with an annual research budget of more than $600 million and major research centers in AIDS, cardiovascular research, cancer, computational and integrative biology, cutaneous biology, human genetics, medical imaging, neurodegenerative disorders, regenerative medicine, systems biology, transplantation biology and photomedicine.

Contacts and sources:
Massachusetts General Hospital

Going Live To The Beating Heart: Filming Organs And Joints In Real Time Using Magnetic Resonance

Max-Planck-Gesellschaft scientists succeed in filming organs and joints in real time using magnetic resonance imaging.

"Please hold absolutely still": This instruction is crucial for patients being examined by magnetic resonance imaging (MRI). It is the only way to obtain clear images for diagnosis. Up to now, it was therefore almost impossible to image moving organs using MRI. Max Planck researchers from Göttingen have now succeeded in significantly reducing the time required for recording images - to just one fiftieth of a second. 

With this breakthrough, the dynamics of organs and joints can be filmed "live" for the first time: movements of the eye and jaw as well as the bending knee and the beating heart. The new MRI method promises to add important information about diseases of the joints and the heart. In many cases MRI examinations may become easier and more comfortable for patients. (NMR in Biomedicine 2010, Journal of Cardiovascular Magnetic Resonance 2010)

Real-time MRI of the heart with a measurement time of 33 milliseconds per image and 30 images per second. The spatial resolution is 1.5 millimetres in the image plane (section thickness 8 millimetres). The eight successive images show the movement of the heart muscle of a healthy subject for a period of 0.264 seconds during a single heartbeat. The images range from the systolic phase (arrow, top left: contraction of the heart muscle) to the diastolic phase (arrow, bottom right: relaxation and expansion). The bright signal in the heart chambers is the blood.
 
Image: Frahm

A process that required several minutes until well into the 1980s now takes only a matter of seconds: the recording of cross-sectional images of our body by magnetic resonance imaging (MRI). This was enabled by the FLASH (fast low angle shot) method developed by Göttingen scientists Jens Frahm and Axel Haase at the Max Planck Institute for Biophysical Chemistry. FLASH revolutionised MRI and was largely responsible for its establishment as one of the most important modalities in diagnostic imaging. MRI is completely painless and, moreover, extremely safe. Because the technique works with magnetic fields and radio waves, patients are not subjected to any radiation exposure as is the case with X-rays. At present, however, the procedure is still too slow for the examination of rapidly moving organs and joints. For example, to trace the movement of the heart, the measurements must be synchronised with the electrocardiogram (ECG) while the patient holds their breath. Afterwards, the data from different heart beats have to be combined into a film.

Future prospect: extended diagnostics for diseases

The researchers working with Jens Frahm, head of the non-profit "Biomedizinische NMR Forschungs GmbH", have now succeeded in further accelerating the image acquisition process. The new MRI method developed by Jens Frahm, Martin Uecker and Shuo Zhang reduces the image acquisition time to one fiftieth of a second (20 milliseconds), making it possible to obtain "live recordings" of moving joints and organs at a previously inaccessible temporal resolution and without artefacts. Filming the dynamics of the jaw during opening and closing of the mouth is just as easy as filming the movements involved in speech production or the rapid beating of the heart.

"A real-time film of the heart enables us to directly monitor the pumping of the heart muscle and the resulting blood flow - heartbeat by heartbeat and without the patient having to hold the breath," explains Frahm. The scientists believe that the new method could help to improve the diagnosis of conditions such as coronary heart disease and myocardial insufficiency. Another application involves minimally invasive interventions which, thanks to this discovery, could be carried out in future using MRI instead of X-rays. "However, as it was the case with FLASH, we must first learn how to use the real-time MRI possibilities for medical purposes," says Frahm. "New challenges therefore also arise for doctors. The technical progress will have to be ‘translated’ into clinical protocols that provide optimum responses to the relevant medical questions."


Less is more: acceleration through better image reconstruction

To achieve the breakthrough to MRI measurement times that only take very small fractions of a second, several developments had to be successfully combined with each other. Whilst still relying on the FLASH technique, the scientists used a radial encoding of the spatial information which renders the images insensitive to movements.

Mathematics was then required to further reduce the acquisition times. "Considerably fewer data are recorded than are usually necessary for the calculation of an image. We developed a new mathematical reconstruction technique which enables us to calculate a meaningful image from data which are, in fact, incomplete," explains Frahm. In the most extreme case it is possible to calculate an image of comparative quality out of just five percent of the data required for a normal image - which corresponds to a reduction of the measurement time by a factor of 20. As a result, the Göttingen scientists have accelerated MRI from the mid 1980s by a factor of 10000.

Although these fast MRI measurements can be easily implemented on today’s MRI devices, something of a bottleneck exists when it comes to the availability of sufficiently powerful computers for image reconstruction. Physicist Martin Uecker explains: "The computational effort required is gigantic. For example, if we examine the heart for only a minute in real time, between 2000 and 3000 images arise from a data volume of two gigabytes."
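
Those figures are easy to sanity-check: at 20-33 milliseconds per frame, a one-minute scan yields roughly 1,800-3,000 images, and two gigabytes spread across them comes to about a megabyte of raw data per image.

```python
# Sanity check of the numbers quoted above.
scan_seconds = 60
data_bytes = 2 * 1024**3                       # "two gigabytes" of raw data

for frame_ms in (20, 33):
    frames = scan_seconds * 1000 // frame_ms   # images acquired in one minute
    print(f"{frame_ms} ms/frame: {frames} images, "
          f"~{data_bytes / frames / 1e6:.1f} MB of raw data per image")
# 20 ms/frame: 3000 images, ~0.7 MB of raw data per image
# 33 ms/frame: 1818 images, ~1.2 MB of raw data per image
```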

Uecker consequently designed the mathematical process in such a way that it is divided into steps that can be calculated in parallel. These complex calculations are carried out using fast graphical processing units that were originally developed for computer games and three-dimensional visualization. "Our computer system requires about 30 minutes at present to process one minute’s worth of film," says Uecker. Therefore, it will take a while until MRI systems are equipped with computers that will enable the immediate calculation and live presentation of the images during the scan. In order to minimise the time their innovation will take to reach practical application, the Göttingen researchers are working in close cooperation with the company Siemens Healthcare. 

Contacts and sources:
Journal of Cardiovascular Magnetic Resonance
Publication: Martin Uecker, Shuo Zhang, Dirk Voit, Alexander Karaus, Klaus-Dietmar Merboldt, Jens Frahm, Real-time MRI at a resolution of 20 ms. 
NMR in Biomedicine 23, doi:10.1002/nbm.1585 (Online)





Dramatic Climate Change Is Unpredictable

Around the world there is great fear that global temperatures could change very quickly, causing dramatic climate shifts with disastrous impacts on many countries and populations. But what causes such climate change, and is it possible to predict it?

New research from the Niels Bohr Institute at the University of Copenhagen shows that it may be due to an accumulation of different chaotic influences and as a result would be difficult to predict. The results have just been published in Geophysical Research Letters.

For millions of years the Earth's climate has alternated between roughly 100,000 years of ice age and approximately 10,000-15,000 years of a warm climate like the one we have today. These changes are controlled by the Earth's orbit in space, that is to say the Earth's tilt and its distance from the sun. But there have also been other climatic shifts in the Earth's history, and what caused those?

Dramatic climate change of the past
By analysing ice cores drilled through the more than three-kilometer-thick ice sheet in Greenland, scientists can obtain information about the temperature and climate going back around 140,000 years.

The most pronounced climate shifts, besides the end of the ice age, are a series of climate changes during the ice age in which the temperature suddenly rose 10-15 degrees in less than 10 years. The warm phase lasted perhaps 1,000 years, then – bang – the temperature fell drastically and the climate changed again. This happened several times during the ice age, and these climate shifts are called the Dansgaard-Oeschger events after the researchers who discovered and described them. Such a sudden, dramatic shift in climate from one state to another is called a tipping point. However, the cause of the rapid climate changes is not known, and researchers have been unable to reproduce them in modern climate models.

The climate in the balance
"We have made a theoretical modelling of two different scenarios that might trigger climate change. We wanted to investigate if it could be determined whether there was an external factor which caused the climate change or whether the shift was due to an accumulation of small, chaotic fluctuations", explains Peter Ditlevsen, a climate researcher at the Niels Bohr Institute.

He explains that in one scenario the climate is like a seesaw that has tipped to one side. If sufficient weight is placed on the other side the seesaw will tip – the climate will change from one state to another. This could be, for example, an increase in the atmospheric content of CO2 triggering a shift in the climate.

In the second scenario the climate is like a ball in a trench, which represents one climate state. The ball will be continuously pushed by chaos-dynamical fluctuations such as storms, heat waves, heavy rainfall and the melting of ice sheets, which affect ocean currents and so on. The turmoil in the climate system may finally push the ball over into the other trench, which represents a different climate state.

Peter Ditlevsen's research shows that you can actually distinguish between the two scenarios and it was the chaos-dynamical fluctuations that were the triggering cause of the dramatic climate changes during the ice age. This means that they are very difficult to predict.
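
The "ball in a trench" picture corresponds to a standard toy model: an overdamped particle in a double-well potential, continually kicked by random noise, occasionally hops between wells with no external forcing at all. The sketch below simulates that generic model; it is an illustration of the idea, not Ditlevsen's actual analysis.

```python
# Toy "ball in a trench" model: noise-induced jumps between two climate states,
# simulated as an overdamped particle in the double-well potential
# V(x) = x^4/4 - x^2/2. Generic textbook model, not the analysis in the paper.
import random

def simulate(steps=200_000, dt=0.01, noise=0.35, seed=1):
    random.seed(seed)
    x, jumps, state = -1.0, 0, -1           # start in the "cold" well at x = -1
    for _ in range(steps):
        drift = x - x**3                    # -dV/dx for the double-well potential
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        if state == -1 and x > 0.8:         # crossed into the "warm" well
            state, jumps = 1, jumps + 1
        elif state == 1 and x < -0.8:       # crossed back into the "cold" well
            state, jumps = -1, jumps + 1
    return jumps

print(simulate())   # typically several abrupt, irregularly timed transitions
```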

Warm future climate
But what about today – what can happen to the climate of the future? "Today we have a different situation than during the ice age. The Earth has not had such a high CO2 content in the atmosphere since more than 15 million years ago, when the climate was very warm and alligators lived in England. So we have already started tilting the seesaw, and at the same time the ball is perhaps getting kicked more and could jump over into the other trench. This could mean that the climate might not just slowly get warmer over the next 1000 years, but that major climate changes theoretically could happen within a few decades", estimates Peter Ditlevsen, who stresses that his research deals only with investigating the climate of the past, not predictions of the future climate.

Contacts and sources:
Peter Ditlevsen, climate researcher, PhD, Dr. Scient., Associate Professor, Centre for Ice and Climate, Niels Bohr Institute, University of Copenhagen
Link to article in Geophysical Research Letters: http://www.agu.org/journals/gl/papersinpress.shtml#id2010GL044486

'Greener' Than Expected: The Ecobalance Of Li-Ion Rechargeable Batteries For Electric Cars

Battery-powered cars will play a major role in the future of mobility. What was not known until now is how environmentally friendly the manufacture, operation and disposal of the batteries are. Empa researchers have now calculated the ecological footprint of the most commonly used type, the lithium-ion battery. A car with a petrol engine must consume less than 4 liters of fuel per 100 km, roughly 70 mpg (miles per gallon) or better, in order to be as environmentally friendly as modern electric cars.

It is not an easy task to compare the environmental effects of battery powered cars to those caused by conventionally fuelled automobiles. The degree to which manufacture, usage and disposal of the batteries used to store the necessary electrical energy are detrimental to the environment is not exactly known. Now, for the first time, a team of Empa scientists have made a detailed life cycle assessment (LCA) or ecobalance of lithium-ion (Li-ion) batteries, in particular the chemically improved (i.e. more environmentally friendly) version of the ones most frequently used in electric vehicles.

The investigation shows that if the power used to charge the battery is not derived from purely hydroelectric sources, then it is primarily the operation of the electric car that has an environmental impact, exactly as is the case with conventionally fuelled automobiles. The size of the environmental footprint depends on which sources of power are used to "fuel" the e-mobile. The Li-ion battery itself has, in contrast, a limited effect on the LCA of the electric vehicle. This is contrary to initial expectations that the manufacture of the batteries could negate the advantages of the electric drive.

The environmental impact of batteries for electric vehicles

Battery powered electric cars are often promoted as the ideal solution to the challenges of future mobility, since they produce no exhaust gases in operation. Li-ion batteries have established themselves over competing lead-acid and nickel metal-hydride (NiMH) types because they are lighter and can store more energy. Li-ion batteries are also basically maintenance-free, display no memory effect (loss of capacity when repeatedly charged after partial discharge), have a low self-discharge rate and are regarded as safe and long-lived. For these reasons they find use in many products such as laptop computers. But are they also environmentally friendly?

Researchers at Empa's "Technology and Society Laboratory" decided to find out for sure. They calculated the ecological footprints of electric cars fitted with Li-ion batteries, taking into account all possible relevant factors, from those associated with the production of individual parts all the way through to the scrapping of the vehicle and the disposal of the remains, including the operation of the vehicle during its lifetime. Data with which to evaluate the rechargeable batteries was not available and had to be obtained specifically for this purpose. In doing so the researchers made intentionally unfavorable assumptions. One such was to ignore the fact that after use in a car, a battery might well be used in a stationary setting for other purposes. Other relevant LCA information was obtained from the "ecoinvent" database (www.ecoinvent.org), managed by Empa. The electric vehicles evaluated were equivalent in size and performance to a VW Golf, and the power used to charge the batteries was assumed to be derived from sources representing an average European electricity mix.

A new petrol-engined car, meeting the Euro 5 emission regulations, was used for comparison. It consumes on average 5.2 liters per 100 kilometers when put through the New European Driving Cycle (NEDC), a value significantly lower than the European average. In this respect, therefore, the conventional vehicle belongs to the best of its class on the market.

More a question of the power source rather than the battery

The study shows that the electric car's Li-ion battery drive is in fact only a moderate environmental burden. At most only 15 per cent of the total burden can be ascribed to the battery (including its manufacture, maintenance and disposal). Half of this figure, that is about 7.5 per cent of the total environmental burden, occurs during the refining and manufacture of the battery's raw materials, copper and aluminium. The production of the lithium, on the other hand, is responsible for only 2.3 per cent of the total. "Lithium-ion rechargeable batteries are not as bad as previously assumed," according to Dominic Notter, coauthor of the study which has just been published in the scientific journal "Environmental Science & Technology".

The outlook is not as rosy when one looks at the operation of an electric vehicle over an expected lifetime of 150'000 kilometers. The greatest ecological impact is caused by the regular recharging of the battery, that is, the "fuel" of the e-car. "Refueling" with electricity sourced from a mixture of atomic, coal-fired and hydroelectric power stations, as is usual in Europe, results in three times as much pollution as from the Li-ion battery alone. It is therefore worth considering alternative power sources: If the electricity is generated exclusively by coal-fired power stations, the ecobalance worsens by another 13 per cent. If, on the other hand, the power is purely hydroelectric, then this figure improves by no less than 40 per cent.

The conclusion drawn by the Empa team: a petrol-engined car must consume between three and four liters per 100 kilometers (or about 70 mpg) in order to be as environmentally friendly as the e-car studied, powered with Li-ion batteries and charged with a typical European electricity mix.
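
For readers more used to miles per gallon, the conversion behind that three-to-four-litre threshold is shown below; both gallon definitions are given, since the "about 70 mpg" shorthand matches the four-litre figure only when imperial gallons are meant.

```python
# Conversion behind the "three to four liters per 100 km, or about 70 mpg" figure.
L_PER_100KM_US = 235.215    # 1 mpg (US gallon)  corresponds to this many L/100 km
L_PER_100KM_IMP = 282.481   # 1 mpg (imperial gallon)

for litres in (3.0, 4.0):
    print(f"{litres} L/100 km = {L_PER_100KM_US / litres:.0f} mpg (US) "
          f"or {L_PER_100KM_IMP / litres:.0f} mpg (imperial)")
# 3.0 L/100 km = 78 mpg (US) or 94 mpg (imperial)
# 4.0 L/100 km = 59 mpg (US) or 71 mpg (imperial)
```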

Sources and contacts:
Environmental Science and Technology
Publication: "Contribution of Li-Ion Batteries to the Environmental Impact of Electric Vehicles", D.A. Notter, M. Gauch, R. Widmer, P. Waeger, A. Stamp, R. Zah, H.J. Althaus, Environmental Science & Technology, 9 August 2010, DOI: 10.1021/es903729a

Two Heads Are Better Than One Say MINDBRIDGE Scientists

Are two heads better than one when it comes to solving problems and reaching decisions? Yes, says a team of EU-funded researchers from Denmark and the UK, after discovering that two heads are indeed better, but only when both partners are equally competent and can reach agreement after discussing the problem thoroughly.

The work, published in the journal Science, is an outcome of the MINDBRIDGE ('Measuring consciousness - bridging the mind-brain gap') project, which received EUR 2.14 million under the 'New and emerging science and technology' (NEST) Activity of the EU's Sixth Framework Programme (FP6) to develop strategies and methodologies to bridge the gap between subjective experience and objective observation of neural phenomena.

In their study, Professor Chris Frith of the UK's Wellcome Trust Centre for Neuroimaging at University College London (UCL), together with colleagues at Aarhus University in Denmark, investigated whether two people have the ability to combine their sensory information. Their results show that human beings have a knack for combining information from various sensory sources in order to make a decision - one that is a great deal more solid than one that comes from either source on its own.

'When we are trying to solve problems, we usually put our heads together in teams, calling on each other's opinions,' explained UCL's Dr Bahador Bahrami, lead author of the study. 'For our study, we wanted to see if two people could combine information from each other in a difficult judgment task and how much this would improve their performance.'

In a first experiment, the researchers asked the study's participants, who worked in pairs, to detect a very weak signal displayed on a computer screen. If the volunteers disagreed about when the signal had appeared, they conferred until they reached a joint decision.

The experiment's findings show that joint decisions were considerably better than those made by even the 'better-performing' member of the pair. So, in a nutshell, two heads are indeed better than one.

The team also carried out two further experiments, which indicated that this improvement depends on the partners' ability to talk to one another. Just telling a person whether they are right is not enough, the researchers said.

The fourth and final experiment found the opposite, however. The participants, again working in pairs, performed the same task, but one of the volunteers was secretly made 'incompetent' - without their knowledge - by being shown a noisier image in which the signal was much harder to detect. In this case, two heads were not better than one: the researchers found that the pairs would have done better had they paid no attention to what the 'incompetent' partner said.

'When two people working together can discuss their disagreements, two heads can be better than one,' Professor Frith said. 'But, when one person is working with flawed information - or perhaps is less able at their job - then this can have a very negative effect on the outcome.

'Being able to work together successfully requires that we know how competent we are. Joint decisions don't work when a member of the team is incompetent, but doesn't know it.

'We know all too well about the catastrophic consequences of consulting "evidence" of unknown reliability on problems as diverse as the existence of weapons of mass destruction and the possibility of risk free investments.'
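The intuition behind these results can be illustrated with a toy signal-detection simulation (our own illustration, not the model used in the Science paper): two observers each receive noisy evidence about whether a signal is present, and the pair decides by averaging that evidence with equal weight. Equally reliable partners gain roughly a factor of √2 in sensitivity; pairing with a much noisier partner on equal terms does worse than the better observer deciding alone.

```python
# Toy signal-detection simulation (illustrative only, not the authors' model).
import numpy as np

rng = np.random.default_rng(0)

def accuracy(noise_levels, n_trials=200_000):
    """Fraction of correct present/absent calls when each observer's noisy
    evidence is averaged with equal weight before deciding."""
    signal = rng.choice([-0.5, 0.5], size=n_trials)             # absent / present
    evidence = [signal + rng.normal(0.0, s, n_trials) for s in noise_levels]
    decision = np.mean(evidence, axis=0) > 0.0
    return np.mean(decision == (signal > 0))

print("better observer alone:       ", accuracy([1.0]))
print("two equally good observers:  ", accuracy([1.0, 1.0]))   # clear improvement
print("paired with a noisy partner: ", accuracy([1.0, 3.0]))   # worse than going alone
```

The equal weighting is the crucial simplification here: the benefit of being able to talk, as the experiments above suggest, is precisely that it lets partners weight each other's reports by how reliable they actually are.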

Sources and contacts:
Publication: Bahrami, B., et al. (2010) Optimally interacting minds. Science 329: 1081-1085. DOI: 10.1126/science.1185718.
For more information, please visit:
Wellcome Trust Centre for Neuroimaging
Aarhus University
Science

Sunday, August 29, 2010

Project "MobilityFirst: Improving the Internet for 4 Billion Wireless Devices by 2015

The National Science Foundation has awarded a three-year, $7.5 million grant to a Rutgers University-led research team to develop a future Internet design optimized for mobile networking and communication. The team of nine universities and several industrial partners has dubbed its project "MobilityFirst," reflecting the Internet's evolution away from traditional wired connections between desktop PCs and servers toward wireless data services on mobile platforms.

The group will design a “clean-slate” network architecture to accommodate the shift of Internet traffic to smart cellular phones, tablet computers and emerging mobile data services, said Dipankar Raychaudhuri, professor of electrical and computer engineering and director of Rutgers’ Wireless Information Network Laboratory.

There are more than four billion mobile devices in use worldwide today, and experts predict that by 2015, these wireless devices will significantly outnumber wired devices on the Internet.

NSF-funded ORBIT wireless network testbed at Rutgers WINLAB, featuring an indoor grid of 400 radio transmitters to simulate a variety of network conditions and proposed mobile services.
Credit: Carl Blesch

“The mobile Internet will do much more than support today’s impressive lineup of smart cellular phones. It will simplify people’s interactions with their physical world,” Raychaudhuri said. For instance, he said, it will enable location-aware computing, allowing people to find nearby merchants or get driving or public transit directions even when they don’t know where they are. It will also support machine-to-machine communications, such as wearable devices that monitor your health and communicate with hospitals, or cars that alert other cars to congestion and send split-second commands to each other to avert collisions.

The research team will address technical issues such as ensuring reliable data networking despite variations in wireless signal quality and strength, and determining how to route traffic across the burgeoning number of nodes on the Internet. It will also address security and privacy needs in both mobile and wired networks and explore how the network can best support features such as location awareness.

“The goal is to make the mobile Internet reliable, available, secure and trustworthy,” said Raychaudhuri.


Dipankar Raychaudhuri, principal investigator on Rutgers-led MobilityFirst research team
Credit: Nick Romanenko

The MobilityFirst research team is one of four chosen by the National Science Foundation (NSF) to participate in its Future Internet Architecture (FIA) program. The awards, each worth up to $8 million over three years, will enable researchers at dozens of institutions across the country to pursue new ways to build a more trustworthy and robust Internet than the current network the world has come to depend on.

Collaborating on MobilityFirst are experts in computer and communications networking and security from Rutgers, University of Massachusetts Amherst, University of Michigan, Duke University, University of North Carolina, Massachusetts Institute of Technology, University of Wisconsin, University of Massachusetts Lowell and University of Nebraska.

Rutgers leadership and contributions to the MobilityFirst team are based in the university’s Wireless Information Network Laboratory (WINLAB), a 20-year-old industry-university cooperative research center focused on developing the architectural and technical underpinnings for the mobile Internet. Affiliated with the university’s School of Engineering, WINLAB operates sophisticated laboratories used by more than 250 universities and corporate research groups worldwide to test new wireless capabilities and services. The crown jewel of WINLAB is its NSF-funded ORBIT wireless network testbed, featuring an indoor grid of 400 radio transmitters to simulate a variety of network conditions and proposed mobile services.

Raychaudhuri will serve as principal investigator for the MobilityFirst project and collaborate with the following site leaders: Arun Venkataramani at UMass Amherst, Z. Morley Mao at Michigan, Xiaowei Yang at Duke, Michael Reiter at North Carolina, William Lehr at MIT, Suman Banerjee at Wisconsin, Guanling Chen at UMass-Lowell and Byrav Ramamurthy at Nebraska.

Contacts and sources:

North American Continent Is A Layer Cake, Scientists Discover How Continent Grew

The North American continent is not one thick, rigid slab, but a layer cake of ancient, 3-billion-year-old rock on top of much newer material probably less than 1 billion years old, according to a new study by UC Berkeley seismologists. The new findings by Barbara Romanowicz and Huaiyu Yuan also indicate that the continent grew by addition of rock from subducting ocean floor, not by mantle plume upwelling from below.

A diagram showing the three layers beneath North America. The top layer, the ancient craton, is chemically distinct from younger lithosphere below (the thermal root), which is separated from the asthenosphere by a boundary layer (LAB).
Credit: Barbara Romanowicz, UC Berkeley 

The finding, which is reported in the Aug. 26 issue of Nature, explains inconsistencies arising from new seismic techniques being used to explore the interior of the Earth, and illuminates the mystery of how the Earth's continents formed.

This graphic shows the thickness (in kilometers) of the North American lithosphere. The blue area is about 250 km thick and, based on new findings reported in Nature, is composed of a 3-billion-year old craton underlain by younger lithosphere deposited as ocean floor subducted under the continent within the past billion years. The green, yellow and red areas are younger and thinner continental lithosphere added around the margins of the original craton, also by subducting sea floor. The thick broken line indicates the borders of the stable part of the continent.
 
Credit: Barbara Romanowicz and Huaiyu Yuan, UC Berkeley

"This is exciting because it is still a mystery how continents grow," said study co-author Barbara Romanowicz, director of the Berkeley Seismological Laboratory and a UC Berkeley professor of earth and planetary science. "We think that most of the North American continent was constructed in the Archean (eon) in several episodes, perhaps as long ago as 3 billion years, though now, with the present regime of plate tectonics, not much new continent is being formed."

The Earth's original continents started forming some 3 billion years ago when the planet was much hotter and convection in the mantle more vigorous, Romanowicz said. The continental rocks rose to the surface – much like scum floats to the top of boiling jam – and eventually formed the lithosphere, Earth's hard outer layer. These old floating pieces of the lithosphere, called cratons, apparently stopped growing about 2 billion years ago as the Earth cooled, though within the last 500 million years, and perhaps for as long as 1 billion years, the modern era of plate tectonics has added new margins to the original cratons, slowly expanding the continents.

"Since the Archean, the continents have been broken up in pieces, glued back together and then broken up again, but those pieces of the very old lithosphere – very old pieces of continents – have been there for a very long time," she said.

One of those original continents is the North American craton, located mostly in the Canadian part of North America. The study suggests that what continental lithosphere has been added since the original North American craton formed was scraped off of the ocean floor as it plunged beneath the continent, not deposited from below by plumes of hot material welling up through the mantle.

The history of the Earth's oldest continental plates is vague because details of their interiors are hidden from geologists. The top 40 km of the lithosphere is crust that is chemically distinct from the mantle below, and while activities such as mountain building can dredge up deeper material, mountain building is rare in the planet's stable cratons. The deep interior of the North American craton is known only from so-called xenoliths – rock inclusions in igneous rock – or xenocrysts such as diamonds that have been delivered to the surface from deep below by volcanoes.

Seismologists, however, have the ability to probe the Earth's interior thanks to seismic waves from earthquakes around the globe, which can be used much like sound waves are used to probe the interior of the human body. Such seismic tomography has established that the bottom of the North American craton is about 250 km deep at its thickest, thinning out toward the margins where new chunks have been added to the continental lithosphere. Below the rigid lithosphere is the softer asthenosphere, on which the continental and oceanic plates ride.

Romanowicz and UC Berkeley postdoctoral fellow Huaiyu Yuan are testing a new technique, seismic azimuthal anisotropy, to look for the boundary between the lithosphere and asthenosphere. The technique takes advantage of the fact that seismic waves travel faster when moving in the same direction that a rock has been stretched than when traveling across the stretch marks. The difference in speed makes it possible to detect layers that have been stretched in different directions.

"As the lithosphere moves over the asthenosphere, the material gets stretched and acquires texture, which indicates the direction in which the plates are moving," she said.

Surprisingly, they found a sharp boundary 150 kilometers below the surface, far too shallow to be the lithosphere-asthenosphere boundary. The scientists believe the sharp boundary lies between two types of lithosphere: the old craton and younger material whose chemical composition should match that of the sea floor. Their interpretation fits with studies of xenoliths and xenocrysts, which indicate that there are two chemically distinct layers within the Archean lithosphere.

Coincidentally, three years ago, researchers using a popular new technique called receiver function studies detected a sharp boundary below the North American craton at a depth of about 120 km. Receiver function studies take advantage of the fact that seismic waves change character – converting from a P wave to an S wave, for example – at sharp boundaries.

"We think they are seeing the same layering we are seeing, a sharp boundary within the lithosphere," Romanowicz said.

The stretch marks revealed by azimuthal anisotropy seem to rule out one theory of how the older continents have accrued more lithosphere.

"One hypothesis was that the bottom part was formed by underplating," Romanowicz said. "You would have a big plume of material, an upwelling, that would get stuck under the root. But what we are observing is not consistent with that. The material would spread in all directions and you would see anisotropy that is pointing like spokes in a bicycle."

"We are seeing a very consistent direction across the whole craton. In the top lithospheric layer the fast axis is, on average, aligned northeast-southwest. In the bottom layer it is aligned more north-south. So underplating doesn't work," she said.

If subduction is adding to the continental lithosphere, on the other hand, the north-south strike of the subduction zones on the east and west sides of the North American craton is consistent with the direction Romanowicz and Yuan found.

"I think our paper will stimulate people to look more carefully at distinguishing the ages of the lithosphere as a function of depth," she said. "Any information we can provide that constrains models of continental formation is really useful to the geodynamicists."

The study was supported by a grant from the Earthscope program of the National Science Foundation, and relied on seismic data from Earthscope, the Geological Survey of Canada and the Northern California Earthquake Data Center.

Contacts and sources: