Unseen Is Free

Friday, February 27, 2015

Alien Life Possible On Titan: Oxygen-Free Life Forms, Say Cornell Researchers

A new type of methane-based, oxygen-free life form that can metabolize and reproduce similar to life on Earth has been modeled by a team of Cornell University researchers.

Taking a simultaneously imaginative and rigidly scientific view, chemical engineers and astronomers offer a template for life that could thrive in a harsh, cold world - specifically Titan, the giant moon of Saturn. A planetary body awash with seas not of water, but of liquid methane, Titan could harbor methane-based, oxygen-free cells. 

Credit: NASA/JPL/University of Arizona

Their theorized cell membrane, composed of small organic nitrogen compounds and capable of functioning in liquid methane at temperatures of 292 degrees below zero, is published in Science Advances, Feb. 27. The work is led by chemical molecular dynamics expert Paulette Clancy and first author James Stevenson, a graduate student in chemical engineering. The paper's co-author is Jonathan Lunine, director of Cornell's Center for Radiophysics and Space Research.

A representation of a nine-nanometer azotosome, about the size of a virus, with a piece of membrane cut away to show the interior.
Credit: James Stevenson

Lunine is an expert on Saturn's moons and an interdisciplinary scientist on the Cassini-Huygens mission that discovered methane-ethane seas on Titan. Intrigued by the possibilities of methane-based life on Titan, and armed with a grant from the Templeton Foundation to study non-aqueous life, Lunine sought assistance about a year ago from Cornell faculty with expertise in chemical modeling. Clancy, who had never met Lunine, offered to help.

"We're not biologists, and we're not astronomers, but we had the right tools," Clancy said. "Perhaps it helped, because we didn't come in with any preconceptions about what should be in a membrane and what shouldn't. We just worked with the compounds that we knew were there and asked, 'If this was your palette, what can you make out of that?'"

On Earth, life is based on the phospholipid bilayer membrane, the strong, permeable, water-based vesicle that houses the organic matter of every cell. A vesicle made from such a membrane is called a liposome. Thus, many astronomers seek extraterrestrial life in what's called the circumstellar habitable zone, the narrow band around the sun in which liquid water can exist. But what if cells weren't based on water, but on methane, which has a much lower freezing point?

The engineers named their theorized cell membrane an "azotosome," "azote" being the French word for nitrogen. "Liposome" comes from the Greek "lipos" and "soma" to mean "lipid body;" by analogy, "azotosome" means "nitrogen body."

The azotosome is made from nitrogen, carbon and hydrogen molecules known to exist in the cryogenic seas of Titan, but shows the same stability and flexibility that Earth's analogous liposome does. This came as a surprise to chemists like Clancy and Stevenson, who had never thought about the mechanics of cell stability before; they usually study semiconductors, not cells.

The engineers employed a molecular dynamics method that screened for candidate compounds from methane for self-assembly into membrane-like structures. The most promising compound they found is an acrylonitrile azotosome, which showed good stability, a strong barrier to decomposition, and a flexibility similar to that of phospholipid membranes on Earth. Acrylonitrile - a colorless, poisonous, liquid organic compound used in the manufacture of acrylic fibers, resins and thermoplastics - is present in Titan's atmosphere.

Excited by the initial proof of concept, Clancy said the next step is to try to demonstrate how these cells would behave in the methane environment - what might be the analogue to reproduction and metabolism in oxygen-free, methane-based cells.

Lunine looks forward to the long-term prospect of testing these ideas on Titan itself, as he put it, by "someday sending a probe to float on the seas of this amazing moon and directly sampling the organics."

Stevenson said he was in part inspired by science fiction writer Isaac Asimov, who wrote about the concept of non-water-based life in a 1962 essay, "Not as We Know It."

Said Stevenson: "Ours is the first concrete blueprint of life not as we know it."


Contacts and sources:
Syl Kacapyr
Cornell University

Artificial Leg Muscles Grown For Grafting In A Dish

A team of researchers from Italy, Israel and the United Kingdom has succeeded in generating mature, functional skeletal muscles in mice using a new approach for tissue engineering. The scientists grew a leg muscle starting from engineered cells cultured in a dish to produce a graft. The subsequent graft was implanted close to a normal, contracting skeletal muscle where the new muscle was nurtured and grown. In time, the method could allow for patient-specific treatments for a large number of muscle disorders. The results are published in EMBO Molecular Medicine.

Skeletal muscle

Credit: Eastern Kentucky University

The scientists used muscle precursor cells - mesoangioblasts - grown in the presence of a hydrogel (support matrix) in a tissue culture dish. The cells were also genetically modified to produce a growth factor that stimulates blood vessel and nerve growth from the host. Cells engineered in this way express a protein growth factor that attracts other essential cells that give rise to the blood vessels and nerves of the host, contributing to the survival and maturation of newly formed muscle fibres. After the graft was implanted onto the surface of the skeletal muscle underneath the skin of the mouse, mature muscle fibres formed a complete and functional muscle within several weeks. Replacing a damaged muscle with the graft also resulted in a functional artificial muscle very similar to a normal Tibialis anterior.

Tissue engineering of skeletal muscle is a significant challenge but has considerable potential for the treatment of the various types of irreversible damage to muscle that occur in diseases like Duchenne muscular dystrophy. So far, attempts to re-create a functional muscle either outside or directly inside the body have been unsuccessful. In vitro-generated artificial muscles normally do not survive the transfer in vivo because the host does not create the necessary nerves and blood vessels that would support the muscle's considerable requirements for oxygen.

"The morphology and the structural organisation of the artificial organ are extremely similar to if not indistinguishable from a natural skeletal muscle," says Cesare Gargioli of the University of Rome, one of the lead authors of the study.

In future, irreversibly damaged muscles could be restored by implanting the patient's own cells within the hydrogel matrix on top of a residual muscle, adjacent to the damaged area. "While we are encouraged by the success of our work in growing a complete intact and functional mouse leg muscle we emphasize that a mouse muscle is very small and scaling up the process for patients may require significant additional work," comments EMBO Member Giulio Cossu, one of the authors of the study. The next step in the work will be to use larger animal models to test the efficacy of this approach before starting clinical studies.




Contacts and sources:
Barry Whyte
EMBO Molecular Medicine

Citation: In vivo generation of a mature and functional artificial skeletal muscle:  Claudia Fuoco, Roberto Rizzi, Antonella Biondo, Emanuela Longa, Anna Mascaro, Keren Shapira-Schweitzer, Olga Kossovar, Sara Benedetti, Maria L Salvatori, Sabrina Santoleri, Stefano Testa, Sergio Bernardini, Roberto Bottinelli, Claudia Bearzi, Stefano M Cannata, Dror Seliktar, Giulio Cossu and Cesare Gargioli. doi: 10.15252/emmm.201404062

Thursday, February 26, 2015

The Monster Of Monsters From The Dawn Of Time


Scientists have discovered the brightest quasar in the early universe, powered by the most massive black hole yet known at that time. The international team led by astronomers from Peking University in China and from the University of Arizona announce their findings in the scientific journal Nature on Feb. 26.

This is an artist's impression of a quasar with a supermassive black hole in the distant universe.

Credit:  Zhaoyu Li/NASA/JPL-Caltech/Misti Mountain Observatory  


The discovery of this quasar, named SDSS J0100+2802, marks an important step in understanding how quasars, the most powerful objects in the universe, have evolved from the earliest epoch, only 900 million years after the Big Bang, which is thought to have happened 13.7 billion years ago. The quasar, with its central black hole mass of 12 billion solar masses and the luminosity of 420 trillion suns, is at a distance of 12.8 billion light-years from Earth.
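
As a rough back-of-the-envelope illustration of how those figures fit together (an added consistency check, not a calculation from the study), the quoted light-travel distance and the quoted age of the universe agree:

    # Illustrative consistency check of the numbers quoted above (not from the paper).
    age_of_universe_gyr = 13.7      # billion years since the Big Bang
    light_travel_time_gyr = 12.8    # 12.8 billion light-years of distance ~ 12.8 billion years of travel

    age_at_emission_gyr = age_of_universe_gyr - light_travel_time_gyr
    print(f"Light left the quasar ~{age_at_emission_gyr:.1f} billion years after the Big Bang")  # ~0.9, i.e. ~900 million years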

The discovery of this ultraluminous quasar also presents a major puzzle to the theory of black hole growth in the early universe, according to Xiaohui Fan, Regents' Professor of Astronomy at the UA's Steward Observatory, who co-authored the study.

"How can a quasar so luminous, and a black hole so massive, form so early in the history of the universe, at an era soon after the earliest stars and galaxies have just emerged?" Fan said. "And what is the relationship between this monster black hole and its surrounding environment, including its host galaxy?

"This ultraluminous quasar with its supermassive black hole provides a unique laboratory to the study of the mass assembly and galaxy formation around the most massive black holes in the early universe."

The quasar dates from a time close to the end of an important cosmic event that astronomers refer to as the "epoch of reionization": the cosmic dawn when light from the earliest generations of galaxies and quasars is thought to have ended the "cosmic dark ages" and transformed the universe into how we see it today.

Discovered in 1963, quasars are the most powerful objects beyond our Milky Way galaxy, beaming vast amounts of energy across space as the supermassive black hole in their center sucks in matter from its surroundings. Thanks to the new generation of digital sky surveys, astronomers have discovered more than 200,000 quasars, with ages ranging from 0.7 billion years after the Big Bang to today.

The newly discovered quasar SDSS J0100+2802 is the one with the most massive black hole and the highest luminosity among all known distant quasars. The background photo, provided by Yunnan Observatory, shows the dome of the 2.4-meter telescope and the sky above it.

Credit:  Zhaoyu Li/Shanghai Observatory

Shining with the equivalent of 420 trillion suns, the new quasar is seven times brighter than the most distant quasar known (which is 13 billion light-years away). It harbors a black hole with a mass of 12 billion solar masses, making it the most luminous quasar with the most massive black hole among all the known high-redshift (very distant) quasars.

"By comparison, our own Milky Way galaxy has a black hole with a mass of only 4 million solar masses at its center; the black hole that powers this new quasar is 3,000 time heavier," Fan said.

Feige Wang, a doctoral student from Peking University who is supervised jointly by Fan and Prof. Xue-Bing Wu at Peking University, the study's lead author, initially spotted this quasar for further study.

"This quasar was first discovered by our 2.4-meter Lijiang Telescope in Yunnan, China, making it the only quasar ever discovered by a 2-meter telescope at such distance, and we're very proud of it," Wang said. "The ultraluminous nature of this quasar will allow us to make unprecedented measurements of the temperature, ionization state and metal content of the intergalactic medium at the epoch of reionization."

Following the initial discovery, two telescopes in southern Arizona did the heavy lifting in determining the distance and mass of the black hole: the 8.4-meter Large Binocular Telescope, or LBT, on Mount Graham and the 6.5-meter Multiple Mirror Telescope, or MMT, on Mount Hopkins. Additional observations with the 6.5-meter Magellan Telescope at Las Campanas Observatory, Chile, and the 8.2-meter Gemini North Telescope on Mauna Kea, Hawaii, confirmed the results.

"This quasar is very unique," said Xue-Bing Wu, a professor of the Department of Astronomy, School of Physics at Peking University and the associate director of the Kavli Institute of Astronomy and Astrophysics. "Just like the brightest lighthouse in the distant universe, its glowing light will help us to probe more about the early universe."

Wu leads a team that has developed a method to effectively select quasars in the distant universe based on optical and near-infrared photometric data, in particular using data from the Sloan Digital Sky Survey and NASA's Wide-field Infrared Survey Explorer, or WISE, satellite.

"This is a great accomplishment for the LBT," said Fan, who chairs the LBT Scientific Advisory Committee and also discovered the previous record holders for the most massive black hole in the early universe, about a fourth of the size of the newly discovered object. "The especially sensitive optical and infrared spectrographs of the LBT provided the early assessment of both the distance of the quasars and the mass of the black hole at the quasar's center."

For Christian Veillet, director of the Large Binocular Telescope Observatory, or LBTO, this discovery demonstrates both the power of international collaborations and the benefit of using a variety of facilities spread throughout the world.

"This result is particularly gratifying for LBTO, which is well on its way to full nighttime operations," Veillet said. "While in this case the authors used two different instruments in series, one for visible light spectroscopy and one for near-infrared imaging, LBTO will soon offer a pair of instruments that can be used simultaneously, effectively doubling the number of observations possible in clear skies and ultimately creating even more exciting science."

To further unveil the nature of this remarkable quasar, and to shed light on the physical processes that led to the formation of the earliest supermassive black holes, the research team will carry out further investigations on this quasar with more international telescopes, including the Hubble Space Telescope and the Chandra X-ray Observatory.


Contacts and sources:
Daniel Stolte
University of Arizona

How Eyelash Length Affects Eye Health


It started with a trip to the basement of the American Museum of Natural History in New York to inspect preserved animal hides. Later, Georgia Institute of Technology researchers built a wind tunnel about 2 feet tall, complete with a makeshift eye. By putting both steps together, the team discovered that 22 species of mammals - from humans, to hedgehogs, to giraffes - are the same: their eyelash length is one-third the width of their eye. Anything shorter or longer, including the fake eyelashes that are popular in Hollywood and make-up aisles, increases airflow around the eye and leads to more dust hitting the surface.

Giraffes and 21 other mammals, including humans, all have eyelashes that are one-third the width of their eye.

Credit: Georgia Institute of Technology

"Eyelashes form a barrier to control airflow and the rate of evaporation on the surface of the cornea," said Guillermo Amador, a Georgia Tech Ph.D. candidate in the George W. Woodruff School of Mechanical Engineering who authored the study. "When eyelashes are shorter than the one-third ratio, they have only a slight effect on the flow. Their effect is more pronounced as they lengthen up until one-third. After that, they start funneling air and dust particles into the eye."

The study is published in the Journal of the Royal Society Interface.

Amador and the research team, which is led by Assistant Professor David Hu, sent a student to the museum in 2012 to measure eyes and eyelashes of various animals. Aside from an elephant, which has extremely long eyelashes, every species studied had evolved to the same ratio of lash length to eye width.

These are the eyelashes of a goat.

Credit: Georgia Institute of Technology

The team then built the wind tunnel to re-create air flows on a mimic of an adult human eye. A 4-millimeter-deep, 20-millimeter-diameter aluminum dish served as the cornea. It sat on top of an acrylic plate, which imitated the rest of the face. Mesh surrounded the dish to replicate the eyelashes.
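
To make the one-third rule concrete, here is a minimal sketch (illustrative only; the function name and code are not from the study) of the lash length that ratio predicts for the 20-millimeter model eye used in the wind tunnel:

    # Illustrative sketch of the one-third ratio reported by the Georgia Tech team.
    def ideal_lash_length_mm(eye_width_mm):
        # Predicted eyelash length: one-third of the eye width.
        return eye_width_mm / 3.0

    model_eye_width_mm = 20.0   # diameter of the aluminum dish that served as the cornea
    print(f"Predicted mesh 'lash' length: {ideal_lash_length_mm(model_eye_width_mm):.1f} mm")  # ~6.7 mm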

They discovered the ideal ratio while varying the mesh length during evaporation and particle deposition studies.

"As short lashes grew longer, they reduced air flow, creating a layer of slow-moving air above the cornea," said Hu. "This kept the eye moist for a longer time and kept particles away. The majority of air essentially hit the eyelashes and rolled away from the eye."

This image shows the eye and eyelashes of an ostrich.

Credit: Georgia Institute of Technology

The opposite process occurred with longer eyelashes. The lashes extended further into the airflow and created a cylinder. The air and its molecules channeled toward the eye and led to faster evaporation.

"This is why long, elegant, fake eyelashes aren't ideal," said Amador. "They may look good, but they're not the best thing for the health of your eyes."

There are exceptions, though. The research team notes that people who can't grow eyelashes could wear fake ones, if they're the correct length, for extra protection and to reduce dry eye.

"Even if they're not the correct length, more eyelashes are always better than less," said Alexander Alexeev, an associate professor in the School of Mechanical Engineering. "If fake eyelashes are dense enough, they may give the same overall effect in protecting the eye even if they are longer than one-third."

The team also says the findings could be used to create eyelash-inspired filaments to protect solar panels, photographic sensors or autonomous robots in dusty environments.


Contacts and sources:
Jason Maderer
Georgia Institute of Technology

Origin Of Matter Mystery: Possible Solution To Puzzle Offered

Most of the laws of nature treat particles and antiparticles equally, but stars and planets are made of particles, or matter, and not antiparticles, or antimatter. That asymmetry, which favors matter to a very small degree, has puzzled scientists for many years.

UCLA physicists offer a possible solution to the mystery of the origin of matter in the universe.

Credit: NASA

New research by UCLA physicists, published in the journal Physical Review Letters, offers a possible solution to the mystery of the origin of matter in the universe.

Alexander Kusenko, a professor of physics and astronomy in the UCLA College, and colleagues propose that the matter-antimatter asymmetry could be related to the Higgs boson particle, which was the subject of prominent news coverage when it was discovered at Switzerland's Large Hadron Collider in 2012.

Specifically, the UCLA researchers write, the asymmetry may have been produced as a result of the motion of the Higgs field, which is associated with the Higgs boson, and which could have made the masses of particles and antiparticles in the universe temporarily unequal, allowing for a small excess of matter particles over antiparticles.

If a particle and an antiparticle meet, they disappear by emitting two photons or a pair of some other particles. In the "primordial soup" that existed after the Big Bang, there were almost equal amounts of particles and antiparticles, except for a tiny asymmetry: one extra particle per 10 billion. As the universe cooled, the particles and antiparticles annihilated each other in equal numbers, and only a tiny number of particles remained; this tiny remainder makes up all the stars, planets and gas in today's universe, said Kusenko, who is also a senior scientist with the Kavli Institute for the Physics and Mathematics of the Universe.
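
A toy bookkeeping example (purely illustrative, with numbers scaled to the one-in-10-billion figure above) shows why so little matter survives the annihilation:

    # Toy illustration of the one-extra-particle-per-10-billion asymmetry described above.
    antiparticles = 10_000_000_000
    particles = antiparticles + 1                    # the tiny excess favoring matter

    annihilated_pairs = min(particles, antiparticles)
    surviving_particles = particles - annihilated_pairs

    print(f"Pairs annihilated: {annihilated_pairs:,}")            # 10,000,000,000
    print(f"Matter particles left over: {surviving_particles}")   # 1 -- the material of stars, planets and gas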

The research also is highlighted by Physical Review Letters in a commentary in the current issue.

The 2012 discovery of the Higgs boson particle was hailed as one of the great scientific accomplishments of recent decades. The Higgs boson was first postulated some 50 years ago as a crucial element of the modern theory of the forces of nature, and is, physicists say, what gives everything in the universe mass. Physicists at the LHC measured the particle's mass and found its value to be peculiar; it is consistent with the possibility that the Higgs field in the first moments of the Big Bang was much larger than its "equilibrium value" observed today.

The Higgs field "had to descend to the equilibrium, in a process of 'Higgs relaxation,'" said Kusenko, the lead author of the UCLA research.

Two of Kusenko's graduate students, Louis Yang of UCLA and Lauren Pearce of the University of Minnesota, Minneapolis, were co-authors of the study. The research was supported by the U.S. Department of Energy (DE-SC0009937), the World Premier International Research Center Initiative in Japan and the National Science Foundation (PHYS-1066293).



Contacts and sources: 
Stuart Wolpert
UCLA

SOHO Sees Something New Near The Sun



An unusual comet skimmed past the sun on Feb. 18-21, 2015, as captured by the European Space Agency (ESA) and NASA's Solar and Heliospheric Observatory, or SOHO.

This comet was interesting for two reasons. First, it's what's called a non-group comet, meaning it's not part of any known family of comets. Most comets seen by SOHO belong to the Kreutz family - all of which broke off from a single giant comet many centuries ago.
Image Credit: NASA/Goddard Space Flight Center/Duberstein

Watch the video to see the comet fly around the sun. Toward the end of the video, as the comet begins to develop a tail, the sun releases an eruption of solar material, called a coronal mass ejection, or CME, to add something more to the scene.

Image Credit: ESA/NASA/SOHO/Hill

The second reason it's interesting is because the vast majority of comets that come close enough to the sun to be seen by SOHO do not survive the trip. Known as sungrazers, these comets usually evaporate in the intense sunlight. This comet made it to within 2.2 million miles of the sun's surface - but survived the trip intact.

A description of sungrazer comets and where they come from.
Image Credit: NASA/Goddard Space Flight Center/Duberstein

"There's a half-decent chance that ground observers might be able to detect it in the coming weeks," said Karl Battams, a solar scientist at the Naval Research Lab in Washington, D.C. "But it's also possible that events during its trip around the sun will cause it to die fairly fast."

Since launching in 1995, SOHO has become the number one comet finder of all time -- this was comet discovery number 2,875. However, SOHO sees non-group comets like this only a few times a year.
 

Contacts and sources:


Quasars--supermassive black holes found at the center of distant massive galaxies--are the most-luminous beacons in the sky. These central supermassive black holes actively accrete the surrounding materials and release a huge amount of their gravitational energy. An international team of astronomers, including Carnegie's Yuri Beletsky, has discovered the brightest quasar ever found in the early universe, which is powered by the most massive black hole observed for an object from that time. Their work is published February 26 by Nature.

This is an artist's rendering of a very distant, very ancient quasar, courtesy of the European Southern Observatory.

Credit: ESO/M. Kornmesser

The quasar was found at a redshift of z=6.30. This is a measurement of how much the wavelength of the light it emitted is stretched by the expansion of the universe by the time it reaches us on Earth. As such, it can be used to calculate the quasar's age and distance from our planet. A higher redshift means a larger distance and hence looking further back in time.
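
As an illustration of what a redshift of z=6.30 means in practice (an example added here, not drawn from the paper), every wavelength is stretched by a factor of 1 + z, which is why follow-up spectroscopy of such distant quasars is done in the near-infrared:

    # Illustrative wavelength stretch for redshift z = 6.30.
    z = 6.30
    stretch = 1 + z                    # observed wavelength / emitted (rest-frame) wavelength

    # Example: hydrogen's Lyman-alpha line, emitted in the ultraviolet at 121.6 nm,
    # arrives at roughly 888 nm, in the near-infrared.
    lyman_alpha_rest_nm = 121.6
    print(f"Stretch factor: {stretch:.2f}")
    print(f"Lyman-alpha observed at ~{lyman_alpha_rest_nm * stretch:.0f} nm")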

At a distance of 12.8 billion light years from Earth, this quasar was formed only 900 million years after the Big Bang. Named SDSS J0100+2802, the quasar will help scientists understand how quasars evolved in the earliest days of the universe. Only about 40 known quasars have a redshift higher than 6, a point that marks the beginning of the early universe.

"This quasar is very unique. Just like the brightest lighthouse in the distant universe, its glowing light will help us to probe more about the early universe," said team-leader Xue-Bing Wu of Peking University and the Kavli Institute of Astronomy and Astrophysics.

With a luminosity 420 trillion times that of our own sun, this new quasar is seven times brighter than the most distant quasar known (which is 13 billion light-years away). It harbors a black hole with a mass of 12 billion solar masses, making it the most luminous quasar with the most massive black hole among all the known high-redshift quasars.

The team developed a method of detecting quasars at redshifts of 5 and higher. These detections were verified by the 6.5-meter Multiple Mirror Telescope (MMT) and 8.4m Large Binocular Telescope (LBT) in Arizona; the 6.5m Magellan Telescope at Carnegie's Las Campanas Observatory in Chile; and the 8.2m Gemini North Telescope in Hawaii.

"This quasar is a unique laboratory to study the way that a quasar's black hole and host galaxy co-evolve," Beletsky said. "Our findings indicate that in the early Universe, quasar black holes probably grew faster than their host galaxies, although more research is needed to confirm this idea."


Contacts and sources:
Yuri Beletsky
Carnegie Institution

Tuesday, February 24, 2015

NASA Satellite Reveals How Much Saharan Dust Feeds Amazonian Plants

What connects Earth's largest, hottest desert to its largest tropical rain forest?

The Sahara Desert is a near-uninterrupted brown band of sand and scrub across the northern third of Africa. The Amazon rain forest is a dense green mass of humid jungle that covers northeast South America. But after strong winds sweep across the Sahara, a tan cloud rises in the air, stretches between the continents, and ties together the desert and the jungle. It’s dust. And lots of it.

The lidar instrument aboard the CALIPSO satellite sends out pulses of light that bounce off particles in the atmosphere and back to the satellite. It distinguishes dust from other particles based on optical properties.

Image Credit: NASA Goddard's Scientific Visualization Studio

For the first time, a NASA satellite has quantified in three dimensions how much dust makes this trans-Atlantic journey. Scientists have not only measured the volume of dust, they have also calculated how much phosphorus – remnant in Saharan sands from part of the desert’s past as a lake bed – gets carried across the ocean from one of the planet’s most desolate places to one of its most fertile.

For the first time, a NASA satellite has quantified in three dimensions how much dust makes the trans-Atlantic journey from the Sahara Desert the Amazon rain forest. Among this dust is phosphorus, an essential nutrient that acts like a fertilizer, which the Amazon depends on in order to flourish.

Image Credit: NASA's Goddard Space Flight Center

A new paper published Feb. 24 in Geophysical Research Letters, a journal of the American Geophysical Union, provides the first satellite-based estimate of this phosphorus transport over multiple years, said lead author Hongbin Yu, an atmospheric scientist at the University of Maryland who works at NASA's Goddard Space Flight Center in Greenbelt, Maryland. A paper published online by Yu and colleagues Jan. 8 in Remote Sensing of Environment provided the first multi-year satellite estimate of overall dust transport from the Sahara to the Amazon.

This trans-continental journey of dust is important because of what is in the dust, Yu said. Specifically, much of the dust is picked up from the Bodélé Depression in Chad, an ancient lake bed where rock minerals composed of dead microorganisms are loaded with phosphorus. Phosphorus is an essential nutrient for plant proteins and growth, which the Amazon rain forest depends on in order to flourish.

Nutrients – the same ones found in commercial fertilizers – are in short supply in Amazonian soils. Instead they are locked up in the plants themselves. Fallen, decomposing leaves and organic matter provide the majority of nutrients, which are rapidly absorbed by plants and trees after entering the soil. But some nutrients, including phosphorus, are washed away by rainfall into streams and rivers, draining from the Amazon basin like a slowly leaking bathtub.

The phosphorus that reaches Amazon soils from Saharan dust, an estimated 22,000 tons per year, is about the same amount as that lost from rain and flooding, Yu said. The finding is part of a bigger research effort to understand the role of dust and aerosols in the environment and on local and global climate.

Dust in the Wind

"We know that dust is very important in many ways. It is an essential component of the Earth system. Dust will affect climate and, at the same time, climate change will affect dust," said Yu. To understand what those effects may be, "First we have to try to answer two basic questions. How much dust is transported? And what is the relationship between the amount of dust transport and climate indicators?"

Conceptual image of dust from the Saharan Desert crossing the Atlantic Ocean to the Amazon rainforest in South America.
Credit: Conceptual Image Lab, NASA/Goddard Space Flight Center

The new dust transport estimates were derived from data collected by a lidar instrument on NASA's Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation, or CALIPSO, satellite from 2007 through 2013.

The data show that wind and weather pick up on average 182 million tons of dust each year and carry it past the western edge of the Sahara at longitude 15W. This volume is the equivalent of 689,290 semi trucks filled with dust. The dust then travels 1,600 miles across the Atlantic Ocean, though some drops to the surface or is flushed from the sky by rain. Near the eastern coast of South America, at longitude 35W, 132 million tons remain in the air, and 27.7 million tons – enough to fill 104,908 semi trucks – fall to the surface over the Amazon basin. About 43 million tons of dust travel farther to settle out over the Caribbean Sea, past longitude 75W.
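
A quick arithmetic check of those figures (illustrative only, not from the study) shows that the semi-truck comparison implies a capacity of roughly 264 tons per truck and that the two truck counts quoted above are mutually consistent:

    # Sanity check of the dust-transport figures quoted above (illustrative arithmetic only).
    leaving_sahara_tons    = 182e6    # crossing longitude 15W each year
    reaching_samerica_tons = 132e6    # still airborne at longitude 35W
    amazon_deposit_tons    = 27.7e6   # settling over the Amazon basin

    tons_per_truck = leaving_sahara_tons / 689_290                  # ~264 tons per semi truck
    print(f"Implied truck capacity: {tons_per_truck:.0f} tons")
    print(f"Amazon deposit in trucks: {amazon_deposit_tons / tons_per_truck:,.0f}")   # ~104,900, consistent with the article
    print(f"Dust lost over the Atlantic before 35W: {(leaving_sahara_tons - reaching_samerica_tons) / 1e6:.0f} million tons")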

Yu and colleagues focused on the Saharan dust transport across the Atlantic Ocean to South America and then beyond to the Caribbean Sea because it is the largest transport of dust on the planet.

Dust collected from the Bodélé Depression and from ground stations on Barbados and in Miami gives scientists an estimate of the proportion of phosphorus in Saharan dust. This estimate is used to calculate how much phosphorus gets deposited in the Amazon basin from this dust transport.

The seven-year data record, while too short for looking at long-term trends, is nevertheless very important for understanding how dust and other aerosols behave as they move across the ocean, said Chip Trepte, project scientist for CALIPSO at NASA's Langley Research Center in Virginia, who was not involved in either study.

"We need a record of measurements to understand whether or not there is a fairly robust, fairly consistent pattern to this aerosol transport," he said.

Looking at the data year by year shows that that pattern is actually highly variable. There was an 86 percent change between the highest amount of dust transported in 2007 and the lowest in 2011, Yu said.

Why so much variation? Scientists believe it has to do with the conditions in the Sahel, the long strip of semi-arid land on the southern border of the Sahara. After comparing the changes in dust transport to a variety of climate factors, the one Yu and his colleagues found correlated with it was the previous year's Sahel rainfall. When Sahel rainfall increased, the next year's dust transport was lower.

The mechanism behind the correlation is unknown, Yu said. One possibility is that increased rainfall means more vegetation and less soil exposed to wind erosion in the Sahel. A second, more likely explanation is that the amount of rainfall is related to the circulation of winds, which are what ultimately sweep dust from both the Sahel and Sahara into the upper atmosphere where it can survive the long journey across the ocean.

CALIPSO collects "curtains" of data that show valuable information about the altitude of dust layers in the atmosphere. Knowing the height at which dust travels is important for understanding, and eventually using computers to model, where that dust will go and how the dust will interact with Earth's heat balance and clouds, now and in future climate scenarios.

"Wind currents are different at different altitudes," said Trepte. "This is a step forward in providing the understanding of what dust transport looks like in three dimensions, and then comparing with these models that are being used for climate studies."

Climate studies range in scope from global to regional changes, such as those that may occur in the Amazon in coming years. In addition to dust, the Amazon is home to many other types of aerosols like smoke from fires and biological particles, such as bacteria, fungi, pollen, and spores released by the plants themselves. In the future, Yu and his colleagues plan to explore the effects of those aerosols on local clouds – and how they are influenced by dust from Africa.

"This is a small world," Yu said, "and we're all connected together."


Contacts and sources:
Ellen Gray
NASA's Earth Science News Team
NASA's Goddard Space Flight Center,

Monday, February 23, 2015

With Low Level Electricity, Scientists Change The Way We Think

Scientists show that electrical stimulation of the frontal lobes triggers spontaneous 'mind wandering' -- alongside improved task performance.

Does your mind wander when performing monotonous, repetitive tasks? Of course! But daydreaming involves more than just beating back boredom. In fact, according to a new study published in the Proceedings of the National Academy of Sciences, a wandering mind can impart a distinct cognitive advantage.

Art by Giora Eshkol

Scientists at Bar-Ilan University are the first to demonstrate how an external stimulus of low-level electricity can literally change the way we think, producing a measurable up-tick in the rate at which daydreams - or spontaneous, self-directed thoughts and associations - occur. Along the way, they made another surprising discovery: that while daydreams offer a welcome "mental escape" from boring tasks, they also have a positive, simultaneous effect on task performance.

The new study was carried out in Bar-Ilan's Cognitive Neuroscience Laboratory supervised by Prof. Moshe Bar, part of the University's Gonda (Goldschmied) Multidisciplinary Brain Research Center which Prof. Bar also directs.

What Makes a Mind Wander?

While a far cry from the diabolical manipulation of dream content envisioned in "Inception" - the science-fiction thriller starring Leonardo DiCaprio - the Bar-Ilan University study is the first to prove that a generic external stimulus, unrelated to sensory perception, triggers a specific type of cognitive activity.

In the experiment - designed and executed by Prof. Bar's post-doctoral researcher Dr. Vadim Axelrod - participants were treated with transcranial direct current stimulation (tDCS), a non-invasive and painless procedure that uses low-level electricity to stimulate specific brain regions. During treatment, the participants were asked to track and respond to numerals flashed on a computer screen. They were also periodically asked to respond to an on-screen "thought probe" in which they reported - on a scale of one to four - the extent to which they were experiencing spontaneous thoughts unrelated to the numeric task they had been given.

The Brain-Daydream Connection

According to Prof. Bar - a long-time faculty member at Harvard Medical School who has authored several studies exploring the link between associative thinking, memory and predictive ability - the specific brain area targeted for stimulation in this study was anything but random.

"We focused tDCS stimulation on the frontal lobes because this brain region has been previously implicated in mind wandering, and also because is a central locus of the executive control network that allows us to organize and plan for the future," Bar explains, adding that he suspeced that there might be a connection between the two.

As a point of comparison and in separate experiments, the researchers used tDCS to stimulate the occipital cortex - the visual processing center in the back of the brain. They also conducted control studies where no tDCS was used.

While the self-reported incidence of mind wandering was unchanged in the case of occipital and sham stimulation, it rose considerably when this stimulation was applied to the frontal lobes. "Our results go beyond what was achieved in earlier, fMRI-based studies," Bar states. "They demonstrate that the frontal lobes play a causal role in the production of mind wandering behavior."

Improved "Cognitive Capacity" of the Wandering Mind

In an unanticipated finding, the present study demonstrated that the increased mind wandering behavior produced by external stimulation not only did not harm subjects' ability to succeed at an appointed task, it actually helped. Bar believes that this surprising result might stem from the convergence, within a single brain region, of both the "thought controlling" mechanisms of executive function and the "thought freeing" activity of spontaneous, self-directed daydreams.

"Over the last 15 or 20 years, scientists have shown that - unlike the localized neural activity associated with specific tasks - mind wandering involves the activation of a gigantic default network involving many parts of the brain," Bar says. "This cross-brain involvement may be involved in behavioral outcomes such as creativity and mood, and may also contribute to the ability to stay successfully on-task while the mind goes off on its merry mental way."

While it is commonly assumed that people have a finite cognitive capacity for paying attention, Bar says that the present study suggests that the truth may be more complicated.

"Interestingly, while our study's external stimulation increased the incidence of mind wandering, rather than reducing the subjects' ability to complete the task, it caused task performance to become slightly improved. The external stimulation actually enhanced the subjects' cognitive capacity."

Toward A Less-Mysterious Mind

Bar says that, in the future, he would be interested in studying how external stimulation might affect other cognitive behaviors, such as the ability to focus or perform multiple tasks in parallel. And while any therapeutic application of this technique is speculative at best, he believes that it might someday help neuroscientists understand the behavior of people suffering from low or abnormal neural activity.

In the meantime, Bar's team at the Bar-Ilan University Lab for Cognitive Neuroscience is pleased to note that in their work on mind wandering - probably the most omnipresent internal cognitive function - they have made the human brain just a little less mysterious.

The research described above was funded, in part, by the Israeli Center of Research Excellence in Cognition (ICORE).


Contacts and sources: 
Elana Oberlander
Bar-Ilan University

Love Hormone Has Sobering Effect When Drinking

Oxytocin, sometimes referred to as the 'love' or 'cuddle' hormone, has a legendary status in popular culture due to its vital role in social and sexual behaviour and long-term bonding.

Now researchers from the University of Sydney and the University of Regensburg have discovered it also has a remarkable influence on the intoxicating effect of alcohol, which they report in the scientific journal Proceedings of the National Academy of Sciences on 24 February.

Dr Michael Bowen, School of Psychology, University of Sydney is lead author on a PNAS paper showing oxytocin counteracts the intoxicating effect of alcohol in rats.
Credit:  University of Sydney

When the researchers infused oxytocin into the brains of rats that were then given alcohol, it prevented the drunken lack of coordination caused by the alcohol.

"In the rat equivalent of a sobriety test, the rats given alcohol and oxytocin passed with flying colours, while those given alcohol without oxytocin were seriously impaired," Dr Bowen said.

The researchers demonstrated that oxytocin prevents alcohol from accessing specific sites in the brain that cause alcohol's intoxicating effects, sites known as delta-subunit GABA-A receptors.

"Alcohol impairs your coordination by inhibiting the activity of brain regions that provide fine motor control. Oxytocin prevents this effect to the point where we can't tell from their behaviour that the rats are actually drunk. It's a truly remarkable effect," Dr Bowen said.

This 'sobering-up' effect of oxytocin has yet to be shown in humans but the researchers plan to conduct these studies in the near future.

"The first step will be to ensure we have a method of drug delivery for humans that allows sufficient amounts of oxytocin to reach the brain. If we can do that, we suspect that oxytocin could also leave speech and cognition much less impaired after relatively high levels of alcohol consumption," Dr Bowen said.

It's worth noting that oxytocin can't save you from being arrested while driving home from the pub.

"While oxytocin might reduce your level of intoxication, it won't actually change your blood alcohol level," Dr Bowen said. "This is because the oxytocin is preventing the alcohol from accessing the sites in the brain that make you intoxicated, it is not causing the alcohol to leave your system any faster".

Some people might worry a drug which decreases your level of intoxication could encourage you to drink more. As it turns out, separate experiments conducted by the researchers and other groups have shown that taking oxytocin actually reduces alcohol consumption and craving in both rats and humans.

"We believe that the effects of oxytocin on alcohol consumption and craving act through a similar mechanism in the brain to the one identified in our research," said Dr Bowen.

Their findings could see the development of new oxytocin-based treatments for alcohol-use disorders that target this mechanism.



 Contacts and sources:
Verity Leatherdale 
University of Sydney

Threat Of Ocean Acidification To U.S. Outlined


Coastal communities in 15 states that depend on the $1 billion shelled mollusk industry (primarily oysters and clams) are at long-term economic risk from the increasing threat of ocean acidification, a new report concludes.

This first nationwide vulnerability analysis, which was funded through the National Science Foundation's National Socio-Environmental Synthesis Center, was published today in the journal Nature Climate Change.

Oysters at hatcheries in Oregon and Washington are showing the effects of ocean acidification.

Credit:  Oregon State University

The Pacific Northwest has been the most frequently cited region with vulnerable shellfish populations, the authors say, but the report notes that newly identified areas of risk from acidification range from Maine to the Chesapeake Bay, to the bayous of Louisiana.

"Ocean acidification has already cost the oyster industry in the Pacific Northwest nearly $110 million and jeopardized about 3,200 jobs," said Julie Ekstrom, who was lead author on the study while with the Natural Resources Defense Council. She is now at the University of California at Davis.

George Waldbusser, an Oregon State University marine ecologist and biogeochemist, said the spreading impact of ocean acidification is due primarily to increases in greenhouse gases.

"This clearly illustrates the vulnerability of communities dependent on shellfish to ocean acidification," said Waldbusser, a researcher in OSU's College of Earth, Ocean, and Atmospheric Sciences and co-author on the paper. "We are still finding ways to increase the adaptive capacity of these communities and industries to cope, and refining our understanding of various species' specific responses to acidification.

"Ultimately, however, without curbing carbon emissions, we will eventually run out of tools to address the short-term and we will be stuck with a much larger long-term problem," Waldbusser added.

The analysis identified several "hot zones" facing a number of risk factors. These include:


The Pacific Northwest: Oregon and Washington coasts and estuaries have a "potent combination" of risk factors, including cold waters, upwelling currents that bring corrosive waters closer to the surface, corrosive rivers, and nutrient pollution from land runoff;

New England: The product ports of Maine and southern New Hampshire feature poorly buffered rivers running into cold New England waters, which are especially enriched with acidifying carbon dioxide;

Mid-Atlantic: East coast estuaries including Narragansett Bay, Chesapeake Bay, and Long Island Sound have an abundance of nitrogen pollution, which exacerbates ocean acidification in waters that are shellfish-rich;

Gulf of Mexico: Terrebonne and Plaquemines Parishes of Louisiana, and other communities in the region, have shellfish economies based almost solely on oysters, giving this region fewer options for alternative - and possibly more resilient - mollusk fisheries.

The project team has also developed an interactive map to explore the vulnerability factors regionally.

One concern, the authors say, is that many of the most economically dependent regions - including Massachusetts, New Jersey, Virginia and Louisiana - are least prepared to respond, with minimal research and monitoring assets for ocean acidification.

The Pacific Northwest, on the other hand, has a robust research effort led by Oregon State University researchers, who already have helped oyster hatcheries rebound from near-disastrous larval die-offs over the past decade. The university recently announced plans to launch a Marine Studies Initiative that would help address complex, multidisciplinary problems such as ocean acidification.

"The power of this project is the collaboration of natural and social scientists focused on a problem that has and will continue to impact industries dependent on the sea," Waldbusser said.

Waldbusser recently led a study that documented how larval oysters are sensitive to a change in the "saturation state" of ocean water - which ultimately is triggered by an increase in carbon dioxide. The inability of ecosystems to provide enough alkalinity to buffer the increase in CO2 is what kills young oysters in the environment.


Contacts and sources:
George Waldbusser
Oregon State University 

'Walking Football' Phenomenon Has Great Health Benefits

Aston University (UK) researchers have said that 'walking football' could have a multitude of health benefits.

The new sporting craze of ‘Walking Football’ may enable people to continue playing football into their 60s and 70s while reaping a multitude of health benefits, according to Aston University researchers.

Walking Football participants
Credit: Aston University

The sport was created in 2011 to help keep older players involved in football for longer and stop them from hanging up their boots before they need to. Games are played at a slower pace to reduce the threat of pain, discomfort and injury, with players briskly walking through matches. Across the country, new Walking Football clubs and groups are setting up every week as its popularity rockets.

Although it is known that regular football, including 11-a-side and 5-a-side versions of the sport, has considerable health benefits such as reducing the risk of cardiovascular disease and improving blood pressure, little research has been done into the impact of Walking Football.

Aston University researcher, Peter Reddy, felt compelled to conduct a study into the sport to discover just how healthy it is. The investigation will assess two groups of men and women over the age of 48 playing Walking Football once a week for 12 weeks. Participants will be regularly assessed to measure changes in their postural balance, blood pressure and resting heart rate, cholesterol, blood sugar and bone density – all indicators of general good health.

The study will also look into the psychological advantages of playing Walking Football. Recent research into older males exposed to lifelong football found they had high levels of ‘flow’ while playing football – a state of psychological reward and satisfaction. They also reported low levels of stress and exertion while playing, despite working hard. Peter hopes to see similarly positive results in his investigation.

Peter, a Reader in Psychology, said: “Football is a fantastically good way of staying fit and healthy. Studies have shown it can be effective in the treatment of mild to moderate hypertension and that it can produce high aerobic activity with marked improvements in fat oxidation and aerobic power. Most people who play the sport, at amateur and professional levels, give it up in their late 30s but there’s no reason not to enjoy the beautiful game until well into your 60s and even 70s.

“We hope this study will establish that Walking Football has health benefits on proportionally similar lines to regular football and that older adults can happily play every week without pain or discomfort. If the data is positive, it will form a basis for local and national charities and authorities to set up and support local Walking Football groups. We want to ensure people are healthy for longer – and that they can enjoy a kickabout at any age.”

In the context of an ageing society, rising levels of obesity and the growing incidence of late-onset diabetes, it is thought Walking Football has the potential to make a significant impact.

In the UK around 22% of men die before the age of 65, compared to 13% of women. Although physically active men have a 20 – 30% reduced risk of premature death and 50% less chronic disease, by the age of 55-64 only 32% of men say they take the recommended half hour of exercise five times a week.


Contacts and sources:
Peter Reddy
Aston University 

New Nanogel Improves Drug Delivery

A self-healing nanogel developed by MIT researchers can be injected into the body and act as a long-term drug depot.

Scientists are interested in using gels to deliver drugs because they can be molded into specific shapes and designed to release their payload over a specified time period. However, current versions aren't always practical because they must be implanted surgically.

To help overcome that obstacle, MIT chemical engineers have designed a new type of self-healing hydrogel that could be injected through a syringe. Such gels, which can carry one or two drugs at a time, could be useful for treating cancer, macular degeneration, or heart disease, among other diseases, the researchers say.

These scanning electron microscopy images, taken at different magnifications, show the structure of new hydrogels made of nanoparticles interacting with long polymer chains.

Courtesy of the researchers

The new gel consists of a mesh network made of two components: nanoparticles made of polymers entwined within strands of another polymer, such as cellulose.

“Now you have a gel that can change shape when you apply stress to it, and then, importantly, it can re-heal when you relax those forces. That allows you to squeeze it through a syringe or a needle and get it into the body without surgery,” says Mark Tibbitt, a postdoc at MIT’s Koch Institute for Integrative Cancer Research and one of the lead authors of a paper describing the gel in Nature Communications on Feb. 19.

Koch Institute postdoc Eric Appel is also a lead author of the paper, and the paper’s senior author is Robert Langer, the David H. Koch Institute Professor at MIT. Other authors are postdoc Matthew Webber, undergraduate Bradley Mattix, and postdoc Omid Veiseh.

Heal thyself

Scientists have previously constructed hydrogels for biomedical uses by forming irreversible chemical linkages between polymers. These gels, used to make soft contact lenses, among other applications, are tough and sturdy, but once they are formed their shape cannot easily be altered.

The MIT team set out to create a gel that could survive strong mechanical forces, known as shear forces, and then reform itself. Other researchers have created such gels by engineering proteins that self-assemble into hydrogels, but this approach requires complex biochemical processes. The MIT team wanted to design something simpler.

“We’re working with really simple materials,” Tibbitt says. “They don’t require any advanced chemical functionalization.”

The MIT approach relies on a combination of two readily available components. One is a type of nanoparticle formed of PEG-PLA copolymers, first developed in Langer’s lab decades ago and now commonly used to package and deliver drugs. To form a hydrogel, the researchers mixed these particles with a polymer — in this case, cellulose.

Each polymer chain forms weak bonds with many nanoparticles, producing a loosely woven lattice of polymers and nanoparticles. Because each attachment point is fairly weak, the bonds break apart under mechanical stress, such as when injected through a syringe. When the shear forces are over, the polymers and nanoparticles form new attachments with different partners, healing the gel.

Using two components to form the gel also gives the researchers the opportunity to deliver two different drugs at the same time. PEG-PLA nanoparticles have an inner core that is ideally suited to carry hydrophobic small-molecule drugs, which include many chemotherapy drugs. Meanwhile, the polymers, which exist in a watery solution, can carry hydrophilic molecules such as proteins, including antibodies and growth factors.

Long-term drug delivery

In this study, the researchers showed that the gels survived injection under the skin of mice and successfully released two drugs, one hydrophobic and one hydrophilic, over several days.

This type of gel offers an important advantage over injecting a liquid solution of drug-delivery nanoparticles: While a solution will immediately disperse throughout the body, the gel stays in place after injection, allowing the drug to be targeted to a specific tissue. Furthermore, the properties of each gel component can be tuned so the drugs they carry are released at different rates, allowing them to be tailored for different uses.

The researchers are now looking into using the gel to deliver anti-angiogenesis drugs to treat macular degeneration. Currently, patients receive these drugs, which cut off the growth of blood vessels that interfere with sight, as an injection into the eye once a month. The MIT team envisions that the new gel could be programmed to deliver these drugs over several months, reducing the frequency of injections.

Another potential application for the gels is delivering drugs, such as growth factors, that could help repair damaged heart tissue after a heart attack. The researchers are also pursuing the possibility of using this gel to deliver cancer drugs to kill tumor cells that get left behind after surgery. In that case, the gel would be loaded with a chemical that lures cancer cells toward the gel, as well as a chemotherapy drug that would kill them. This could help eliminate the residual cancer cells that often form new tumors following surgery.

“Removing the tumor leaves behind a cavity that you could fill with our material, which would provide some therapeutic benefit over the long term in recruiting and killing those cells,” Appel says. “We can tailor the materials to provide us with the drug-release profile that makes it the most effective at actually recruiting the cells.”

The research was funded by the Wellcome Trust, the Misrock Foundation, the Department of Defense, and the National Institutes of Health.


Contacts and sources:
Anne Trafton
MIT News Office

How Brain Wave Frequencies Guide Memory Formation

Neurons hum at different frequencies to tell the brain which memories it should store.

Our brains generate a constant hum of activity: As neurons fire, they produce brain waves that oscillate at different frequencies. Long thought to be merely a byproduct of neuron activity, recent studies suggest that these waves may play a critical role in communication between different parts of the brain.

Two areas of the brain — the hippocampus (yellow) and the prefrontal cortex (blue) — use two different brain-wave frequencies to communicate as the brain learns to associate unrelated objects.

Illustration: Jose-Luis Olivares/MIT

A new study from MIT neuroscientists adds to that evidence. The researchers found that two brain regions that are key to learning — the hippocampus and the prefrontal cortex — use two different brain-wave frequencies to communicate as the brain learns to associate unrelated objects. Whenever the brain correctly links the objects, the waves oscillate at a higher frequency, called “beta,” and when the guess is incorrect, the waves oscillate at a lower “theta” frequency.

“It’s like you’re playing a computer game and you get a ding when you get it right, and a buzz when you get it wrong. These two areas of the brain are playing two different ‘notes’ for correct guesses and wrong guesses,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and senior author of a paper describing the findings in the Feb. 23 online edition of Nature Neuroscience.

Furthermore, these oscillations may reinforce the correct guesses while repressing the incorrect guesses, helping the brain learn new information, the researchers say.

Signaling right and wrong

Miller and lead author Scott Brincat, a research scientist at the Picower Institute, examined activity in the brain as it forms a type of memory called explicit memory — memory for facts and events. This includes linkages between items such as names and faces, or between a location and an event that took place there.

During the learning task, animals were shown pairs of images and gradually learned, through trial and error, which pairs went together. Each correct response was signaled with a reward.

As the researchers recorded brain waves in the hippocampus and the prefrontal cortex during this task, they noticed that the waves occurred at different frequencies depending on whether the correct or incorrect response was given. When the guess was correct, the waves occurred in the beta frequency, about 9 to 16 hertz (cycles per second). When incorrect, the waves oscillated in the theta frequency, about 2 to 6 hertz.
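As a rough illustration of the frequency bands reported above, the snippet below labels a dominant oscillation frequency as theta or beta using the study's approximate cutoffs. It is only a toy classifier, not the authors' analysis pipeline.

def classify_band(dominant_freq_hz: float) -> str:
    """Label an oscillation frequency using the approximate bands from the study.

    Theta is roughly 2-6 Hz and beta roughly 9-16 Hz, as described in the
    article; frequencies outside those windows are left unlabeled.
    """
    if 2 <= dominant_freq_hz <= 6:
        return "theta (seen after incorrect guesses)"
    if 9 <= dominant_freq_hz <= 16:
        return "beta (seen after correct guesses)"
    return "outside the reported bands"

for freq in (4.0, 12.0, 25.0):
    print(f"{freq:5.1f} Hz -> {classify_band(freq)}")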

Previous studies by MIT’s Mark Bear, also a member of the Picower Institute, have found that stimulating neurons in brain slices at beta frequencies strengthens the connections between the neurons, while stimulating the neurons at theta frequencies weakens the connections.

Miller believes the same thing is happening during this learning task.

“When the animal guesses correctly, the brain hums at the correct answer note, and that frequency reinforces the strengthening of connections,” he says. “When the animal guesses incorrectly, the ‘wrong’ buzzer buzzes, and that frequency is what weakens connections, so it’s basically telling the brain to forget about what it just did.”

The findings represent a major step in revealing how memories are formed, says Howard Eichenbaum, director of the Center for Memory and Brain at Boston University.

“This study offers a very specific, detailed story about the role of different directions of flow, who’s sending information to whom, at what frequencies, and how that feedback contributes to memory formation,” says Eichenbaum, who was not part of the research team.

The study also highlights the significance of brain waves in cognitive function, a role that Miller and others have only recently begun to uncover.

“Brain waves had been ignored for decades in neuroscience. It’s been thought of as the humming of a car engine,” Miller says. “What we’re discovering through this experiment and others is that these brain waves may be the infrastructure that supports neural communication.”

Enhancing memory

The researchers are now investigating whether they can speed up learning by delivering noninvasive electrical stimulation that oscillates at beta frequencies when the correct answer is given and at theta frequencies when the incorrect answer is given. “The idea is that you make the correct guesses feel more correct to the brain, and the incorrect guesses feel more incorrect,” Miller says.

This form of very low voltage electrical stimulation has already been approved for use in humans.

“This is a technique that people have used in humans, so if it works, it could potentially have clinical relevance for enhancing memory or treating neurological disorders,” Brincat says.

The research was funded by the National Institute of Mental Health and the Picower Foundation.

 


Contacts and sources:
Anne Trafton
MIT News Office

Radio Chip For The “Internet Of Things”

A circuit developed at MIT that reduces power leakage when transmitters are idle could greatly extend battery life.

At this year's Consumer Electronics Show in Las Vegas, the big theme was the "Internet of things" -- the idea that everything in the human environment, from kitchen appliances to industrial equipment, could be equipped with sensors and processors that can exchange data, helping with maintenance and the coordination of tasks.

Realizing that vision, however, requires transmitters that are powerful enough to broadcast to devices dozens of yards away but energy-efficient enough to last for months -- or even to harvest energy from heat or mechanical vibrations.

Credit: Jose-Luis Olivares/MIT

"A key challenge is designing these circuits with extremely low standby power, because most of these devices are just sitting idling, waiting for some event to trigger a communication," explains Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering at MIT. "When it's on, you want to be as efficient as possible, and when it's off, you want to really cut off the off-state power, the leakage power."

This week, at the Institute of Electrical and Electronics Engineers' International Solid-State Circuits Conference, Chandrakasan's group will present a new transmitter design that reduces off-state leakage 100-fold. At the same time, it provides adequate power for Bluetooth transmission, or for the even longer-range 802.15.4 wireless-communication protocol.

"The trick is that we borrow techniques that we use to reduce the leakage power in digital circuits," Chandrakasan explains. The basic element of a digital circuit is a transistor, in which two electrical leads are connected by a semiconducting material, such as silicon. In their native states, semiconductors are not particularly good conductors. But in a transistor, the semiconductor has a second wire sitting on top of it, which runs perpendicularly to the electrical leads. Sending a positive charge through this wire -- known as the gate -- draws electrons toward it. The concentration of electrons creates a bridge that current can cross between the leads.

But while semiconductors are not naturally very good conductors, neither are they perfect insulators. Even when no charge is applied to the gate, some current still leaks across the transistor. It's not much, but over time, it can make a big difference in the battery life of a device that spends most of its time sitting idle.

Going negative

Chandrakasan -- along with Arun Paidimarri, an MIT graduate student in electrical engineering and computer science and first author on the paper, and Nathan Ickes, a research scientist in Chandrakasan's lab -- reduces the leakage by applying a negative charge to the gate when the transmitter is idle. That drives electrons away from the electrical leads, making the semiconductor a much better insulator.

Of course, that strategy works only if generating the negative charge consumes less energy than the circuit would otherwise lose to leakage. In tests conducted on a prototype chip fabricated through the Taiwan Semiconductor Manufacturing Company's research program, the MIT researchers found that their circuit spent only 20 picowatts of power to save 10,000 picowatts in leakage.
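The arithmetic behind that trade-off is simple; the illustrative check below just restates the reported figures.

# Illustrative check of the reported standby-power trade-off.
leakage_saved_pw = 10_000   # picowatts of leakage eliminated (from the article)
pump_cost_pw     = 20       # picowatts spent generating the negative bias (from the article)

net_saving_pw = leakage_saved_pw - pump_cost_pw
ratio = leakage_saved_pw / pump_cost_pw
print(f"Net standby saving: {net_saving_pw} pW (about {ratio:.0f}x more saved than spent)")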

To generate the negative charge efficiently, the MIT researchers use a circuit known as a charge pump, which is a small network of capacitors -- electronic components that can store charge -- and switches. When the charge pump is exposed to the voltage that drives the chip, charge builds up in one of the capacitors. Throwing one of the switches connects the positive end of the capacitor to the ground, causing a current to flow out the other end. This process is repeated over and over. The only real power drain comes from throwing the switch, which happens about 15 times a second.
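For a rough sense of scale, the average current such a pump can supply is about the charge moved per cycle times the switching rate. In the sketch below, only the ~15 Hz switching rate comes from the article; the capacitance and supply voltage are hypothetical.

# Rough sense of scale for a switched-capacitor pump: the average current it can
# supply is about the charge moved per cycle times the switching rate.
# Only the ~15 Hz switching rate comes from the article; the capacitance and
# supply voltage are hypothetical.
f_switch_hz = 15          # pump cycles per second (from the article)
c_pump_farads = 100e-12   # pump capacitance, 100 pF (assumed)
v_supply = 0.9            # chip supply voltage in volts (assumed)

charge_per_cycle = c_pump_farads * v_supply        # coulombs moved each cycle
avg_bias_current = charge_per_cycle * f_switch_hz  # amperes available on average
print(f"~{avg_bias_current * 1e12:.0f} pA of bias current from a {f_switch_hz} Hz pump")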

Turned on

To make the transmitter more efficient when it's active, the researchers adopted techniques that have long been a feature of work in Chandrakasan's group. Ordinarily, the frequency at which a transmitter can broadcast is a function of its voltage. But the MIT researchers decomposed the problem of generating an electromagnetic signal into discrete steps, only some of which require higher voltages. For those steps, the circuit uses capacitors and inductors to increase voltage locally. That keeps the overall voltage of the circuit down, while still enabling high-frequency transmissions.


What those efficiencies mean for battery life depends on how frequently the transmitter is operational. But if it can get away with broadcasting only every hour or so, the researchers' circuit can reduce power consumption 100-fold.
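A back-of-the-envelope duty-cycle calculation shows why cutting idle leakage matters so much for rarely transmitting devices. Every number below is a hypothetical stand-in, chosen only to illustrate the effect, not measured values from the chip.

# Hedged duty-cycle arithmetic: average power = duty * active power + (1 - duty) * idle power.
# Every number here is a hypothetical stand-in, chosen only to show why idle
# leakage dominates when the radio transmits rarely.
active_power_w = 1e-3     # power while transmitting (assumed)
burst_s = 0.01            # length of one transmission (assumed)
period_s = 3600.0         # one transmission per hour, as in the article's example

idle_conventional_w = 1e-6   # conventional idle leakage (assumed)
idle_negative_bias_w = 1e-8  # idle leakage with the negative-bias scheme, ~100x lower

def average_power(idle_w):
    duty = burst_s / period_s
    return duty * active_power_w + (1 - duty) * idle_w

old = average_power(idle_conventional_w)
new = average_power(idle_negative_bias_w)
print(f"conventional: {old:.2e} W, negative bias: {new:.2e} W ({old / new:.0f}x lower)")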


Contacts and sources:
Larry Hardesty 
MIT News Office

50/50 Chance Of 30-Year Mega Drought For American Southwest While Pacific Northwest To Get Drier Summers And Wetter Winters, Say Scientists

Climate scientists now put the odds that the American Southwest is headed into a 30-year "mega drought" at 50/50. Meanwhile, the forecast for the Pacific Northwest is continued warming with slightly drier summers and even wetter winters.

However, 21,000 years ago, at the peak of the last Ice Age, a period known as the Last Glacial Maximum, the Southwest was wetter than it is today - much wetter - and the Northwest was drier - much drier.
  
Reconstruction of the climate 21,000 years ago at the peak of the last ice age in the western US found that the transition between the drier zone in the north and the wetter zone in the south ran diagonally from the northwest to the southeast.

Credit: Jessica Oster, Vanderbilt University

A team of scientists from Vanderbilt and Stanford universities has created the first comprehensive map of the topsy-turvy climate of the period and is using it to test and improve the global climate models that have been developed to predict how precipitation patterns will change in the future. Their efforts are described in a paper published online on Feb. 23 by the journal Nature Geoscience.

"Most of the previous research of the past climate in this region is based on detailed studies of specific sites," said the lead author Jessica Oster, assistant professor of earth and environmental sciences at Vanderbilt University. "We combined these records to create a detailed map of past climate change in the American West. We then compared this map to computer climate models to understand what caused these changes."

"Our previous research used field studies to understand the history of climate change in the Western US," said study coauthor Kate Maher, assistant professor of geological and environmental sciences at Stanford University. " It was amazing to see how our results, when combined with work of many other research groups and compared to the newest generation of climate models, revealed a consistent story about how rainfall patterns were altered in the past."

One of the reasons that Oster and her colleagues picked this region to map is that the mid-latitudes in general, and the western United States in particular, are regions where the climate models tend to disagree on the magnitude and, in some cases, even the direction in which precipitation patterns will change in the future.

"This is a transition zone. There are strong competing effects such as changes in the large-scale atmospheric circulation, sea surface temperature changes like El Niño and La Niña and the dynamics of westerly storm tracks that all interact at the mid-latitudes," said Stanford co-author Matthew Winnick who contributed to the study with fellow doctoral student Daniel Ibarra. "As a result, understanding the exact nature of how these different effects express themselves to form the north/south transition zone will be extremely important for freshwater resource management in major population centers across the Western US."

Of course, there aren't any direct records of precipitation levels thousands of years ago. So climate scientists rely on indirect means, called proxies, to reconstruct past variations in precipitation patterns. In this case, the researchers combined records of ancient lake levels, location and extent of glaciation, variations in the composition of stalagmites in caves, and evidence for changes in vegetation and subsurface soil deposits associated with water table depth. (One of the smelliest proxies that they used is pollen preserved in ancient packrat middens.)

During the Last Glacial Maximum, Canada was completely covered by the massive Laurentide Ice Sheet. A number of site-specific studies in the Northwest had provided evidence for a drier climate during the period, while similar studies in the Southwest found evidence for a wetter climate. For instance, a 1997 vegetation study from the University of Wisconsin found that much of the Northwest was covered by polar desert or tundra while the Southwest supported extensive conifer and broadleaf forests. However, there was also conflicting evidence of drier conditions at some sites in Utah and Colorado and of wetter conditions in Idaho and Montana.

"People hypothesized that the transition between the two climate zones ran along a straight east-west line, but that didn't work very well," said Oster. "Our study indicates that the transition zone is angled from the northwest to the southeast." This explains the drier conditions in Utah and Colorado. Their analysis also found that the wetter sites in the north were situated next to large inland lakes that existed at the time, so they attribute them to local, lake effects.

Two basic theories have been advanced to explain the dramatically different rainfall patterns of this period:

One is that the cold air above the Laurentide Ice Sheet created a tremendous high pressure system that shifted the polar jet stream to the south, pushing the track followed by winter storms down into the Southwest, which had the effect of dramatically reducing the amount of rainfall in the Northwest while increasing it in the Southwest.

An alternative explanation is that the subtropical jet stream was enhanced, increasing the frequency with which the Southwest was hit by "Pineapple Expresses": water-saturated subtropical plumes of air that periodically swing up from Hawaii toward the West Coast and that today cause serious flooding. When combined with a strengthened summer monsoon, this could also explain the wetter conditions in the Southwest.

When the researchers compared their results with the output of a number of climate models, they found that several of the newer models that have higher resolution and use updated ice sheet configurations do "a very good job" of reproducing the patterns observed in the proxy records.

"According to these models, it is the high pressure cells that are really important in steering winter storms, and in determining the shape and location of the transition zone," said Oster. "Some models do hint at an increase in subtropical winter moisture, but we don't see evidence of an enhanced summer monsoon."

Given the prospect of continued global warming, there is no chance that this ancient weather pattern will return in the foreseeable future. Curiously, however, a similar pattern re-emerges periodically during the warm phase of the El Niño-Southern Oscillation, which produces drier-than-normal winters in the Northwest and wetter-than-normal winters in the Southwest.


Contacts and sources:
Jessica Oster
Vanderbilt University

Kate Maher
Stanford University

Sauna Use Associated With Reduced Risk Of Cardiac, All-Cause Mortality

A sauna may do more than just make you sweat. A new study suggests men who engaged in frequent sauna use had reduced risks of fatal cardiovascular events and all-cause mortality, according to an article published online by JAMA Internal Medicine.

Although some studies have found sauna bathing to be associated with better cardiovascular and circulatory function, the association between regular sauna bathing and risk of sudden cardiac death (SCD) and fatal cardiovascular diseases (CVD) is not known.

A sauna  is a small room or building designed as a place to experience dry or wet heat sessions, or an establishment with one or more of these and auxiliary facilities. The steam and high heat make the bathers perspire. Saunas can be divided into two basic styles: conventional saunas that warm the air or infrared saunas that warm objects.
Credit: Wikipedia

Jari A. Laukkanen, M.D., Ph.D., of the University of Eastern Finland, Kuopio, and coauthors investigated the association between sauna bathing and the risk of SCD, fatal coronary heart disease (CHD), fatal CVD and all-cause mortality in a group of 2,315 middle-aged men (42 to 60 years old) from eastern Finland.

Results show that during a median (midpoint) follow-up of nearly 21 years, there were 190 SCDs, 281 fatal CHDs, 407 fatal CVDs and 929 deaths from all causes. Compared with men who reported one sauna bathing session per week, the risk of SCD was 22 percent lower for 2 to 3 sauna bathing sessions per week and 63 percent lower for 4 to 7 sauna sessions per week. The risk of fatal CHD events was 23 percent lower for 2 to 3 bathing sessions per week and 48 percent lower for 4 to 7 sauna sessions per week compared to once a week. CVD death also was 27 percent lower for men who took saunas 2 to 3 times a week and 50 percent lower for men who were in the sauna 4 to 7 times a week compared with men who indulged just once per week. For all-cause mortality, sauna bathing 2 to 3 times per week was associated with a 24 percent lower risk and 4 to 7 times per week with a 40 percent reduction in risk compared to only one sauna session per week.
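For readability, the associations above can be restated as a small lookup structure; the percentages are copied from the preceding paragraph, not recomputed from the underlying data.

# The reported risk reductions, restated from the paragraph above (percent lower
# risk compared with one sauna session per week).
risk_reduction_pct = {
    "sudden cardiac death":         {"2-3 sessions/week": 22, "4-7 sessions/week": 63},
    "fatal coronary heart disease": {"2-3 sessions/week": 23, "4-7 sessions/week": 48},
    "fatal cardiovascular disease": {"2-3 sessions/week": 27, "4-7 sessions/week": 50},
    "all-cause mortality":          {"2-3 sessions/week": 24, "4-7 sessions/week": 40},
}

for outcome, by_frequency in risk_reduction_pct.items():
    for frequency, pct in by_frequency.items():
        print(f"{outcome}: {pct}% lower risk at {frequency}")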

The amount of time spent in the sauna seemed to matter too. Compared with men who spent less than 11 minutes in the sauna, the risk of SCD was 7 percent lower for sauna sessions of 11 to 19 minutes and 52 percent less for sessions lasting more than 19 minutes. Similar associations were seen for fatal CHDs and fatal CVDs but not for all-cause mortality events.

"Further studies are warranted to establish the potential mechanism that links sauna bathing and cardiovascular health," the study concludes.



Contacts and sources:
Jari A. Laukkanen, M.D.
University of Eastern Finland
JAMA Internal Medicine.

Fever Alarm Armband: A Wearable, Printable Temperature Sensor


University of Tokyo researchers have developed a "fever alarm armband," a flexible, self-powered wearable device that sounds an alarm in case of high body temperature.

The armband is 30 cm long and 18 cm wide, and can be worn either directly on the skin or on top of clothing. The device is designed so that the thermal sensor is located between the arm and the body. The organic power supply circuit is located under the piezo film speaker to reduce surface area.
Credit:  © 2015 Sakurai Lab. / Someya Lab.

This armband will be presented at the 2015 IEEE International Solid State Circuits Conference, San Francisco, on 22-26 February, 2015. The flexible organic components developed for this device are well-suited to wearable devices that continuously monitor vital signs including temperature and heart rate for applications in healthcare settings.

The new device, developed by research groups led by Professor Takayasu Sakurai at the Institute of Industrial Science and Professor Takao Someya at the Graduate School of Engineering, combines a flexible amorphous silicon solar panel, piezoelectric speaker, temperature sensor, and power supply circuit created with organic components in a single flexible, wearable package.

Constant monitoring of health indicators such as heart rate and body temperature is the focus of intense interest in infant, elderly and patient care. Sensors for such applications need to be flexible and wireless for patient comfort, maintenance-free and able to operate without an external power supply, and cheap enough to be disposable for hygiene. Conventional sensors based on rigid components cannot meet these requirements, so the researchers developed a flexible solution built from organic components that can be printed with an inkjet printer onto a polymeric film.

The fever alarm armband incorporates several first-ever achievements. It is the first organic circuit able to produce a sound output, and the first to incorporate an organic power supply circuit. The former enables the device to provide audible information when the flexible thermal sensor detects a preset value within the range of 36.5 °C to 38.5 °C, while the latter extends the range of operational illumination 7.3-fold in indoor lighting conditions.
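A minimal software sketch of that alarm behavior might look as follows. The real device implements this in flexible organic hardware, and the 37.5 °C default below is an arbitrary example value within the reported preset range, not a figure from the paper.

def fever_alarm(temperature_c, threshold_c=37.5):
    """Return True (sound the alarm) when the measured temperature reaches the preset threshold.

    The 36.5-38.5 C window for the preset comes from the article; the 37.5 C
    default is an arbitrary example value within that window.
    """
    if not 36.5 <= threshold_c <= 38.5:
        raise ValueError("threshold outside the device's reported preset range")
    return temperature_c >= threshold_c

print(fever_alarm(36.8))  # False: no alarm
print(fever_alarm(38.1))  # True: alarm sounds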

"Our fever alarm armband demonstrates that it is possible to produce flexible, disposable devices that can greatly enhance the amount of information available to carers in healthcare settings," says Professor Someya. "We have demonstrated the technology with a temperature sensor and fever alarm, but the system could also be adapted to provide audible feedback on body temperature, or combined with other sensors to register wetness, pressure or heart rate."


Contacts and sources: 
Takao Someya
University of Tokyo


Citation: Hiroshi Fuketa, Masamune Hamamatsu, Tomoyuki Yokota, Wakako Yukita, Teruki Someya, Tsuyoshi Sekitani, Makoto Takamiya, Takao Someya, and Takayasu Sakurai "Energy Autonomous Fever Alarm Armband Integrating Fully Flexible Solar Cells, Piezoelectric Speaker, Temperature Detector, and 12V Organic Complementary FET Circuits." Paper to be presented at the 2015 IEEE International Solid State Circuits Conference San Francisco 22-26 February, 2015

Memory Effect In Bacteria


Bacteria are masters at adapting to their environment, and this adaptability contributes to their survival inside a host. Researchers at the Vetmeduni Vienna have now demonstrated that the bacterial pathogen Listeria monocytogenes adapts its metabolism specifically to the host genotype. The bacterial metabolic fingerprint correlated with the susceptibility of the infected mouse strain. The researchers published their results in the journal PLOS ONE.

This image shows bacteria on a plate and the respective FTIR spectrum in front.

Photo: Tom Grunert/Vetmeduni Vienna

Monika Ehling-Schulz's group from the Institute of Microbiology, together with Mathias Müller's group at the Institute of Animal Breeding and Genetics studied the influence of host organisms on bacterial metabolism. The researchers infected three different lineages of mice with the bacteria Listeria monocytogenes. The mouse strains showed significant differences in their response to the infection and in the severity of the clinical symptoms.

The researchers isolated the bacteria days after infection and analysed them for changes in their metabolism. They used a specific infrared spectroscopy method (FTIR) to monitor metabolic changes. The chemometric analysis of the bacterial metabolic fingerprints revealed host genotype specific imprints and adaptations of the bacterial pathogen.

"Our findings may have implications on how to treat infectious diseases in general. Every patient is different and so are their bacteria", first author Tom Grunert states.

Memory effect in bacteria

After isolation from the mice, all bacteria were cultured under laboratory conditions. After prolonged cultivation under laboratory conditions all three bacterial batches switched back to the same metabolic fingerprint. "Based on our results it can be assumed that bacteria have some sort of memory. It takes some time under host-free laboratory conditions for this 'memory effect' to vanish," explains the head of the Institute, Monika Ehling-Schulz.

Vibrating molecules decipher bacterial metabolism

The researchers employed a technique known as Fourier-transform infrared (FTIR) spectroscopy to monitor the metabolism of the bacteria. An infrared beam directed through the bacteria causes molecules such as proteins, polysaccharides and fatty acids to vibrate, and each type of molecule absorbs light at characteristic frequencies, so more or less light passes through depending on the sample's composition. The resulting spectrum therefore carries information about the molecules inside the cells.
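As a simplified stand-in for the chemometric comparison (not the authors' actual analysis pipeline), one could treat each FTIR spectrum as a vector of absorbance values and score how far two fingerprints diverge. The spectra below are invented for illustration.

import numpy as np

def fingerprint_distance(spectrum_a, spectrum_b):
    """Toy distance between two FTIR absorbance spectra sampled at the same wavenumbers.

    This normalised Euclidean distance is only a simplified stand-in for the
    chemometric analysis used in the study.
    """
    a = (spectrum_a - spectrum_a.mean()) / spectrum_a.std()
    b = (spectrum_b - spectrum_b.mean()) / spectrum_b.std()
    return float(np.linalg.norm(a - b) / np.sqrt(a.size))

# Invented spectra: a lab-grown control and a "host-adapted" variant with a small shift.
wavenumbers = np.linspace(800, 1800, 500)
control = np.exp(-((wavenumbers - 1650) / 40) ** 2)       # hypothetical absorbance peak
host_adapted = control + 0.05 * np.sin(wavenumbers / 30)  # hypothetical metabolic shift

print(f"distance from control: {fingerprint_distance(host_adapted, control):.3f}")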

"This method is used especially in microbiological diagnostics to identify bacteria. But we refined the method to decipher and monitor differences in the metabolic fingerprint of the same bacteria," says Grunert.

In the future, the researchers want to extend the concept to other species of bacteria and further study the impact of host organisms on pathogens. As a next step, the team plans to find out what exactly it is that leads to these metabolic changes in the bacteria.


 
Contacts and sources:
 Susanna Kautschitsch, Science Communication / Public Relations
Prof. Monika Ehling-Schulz, Unit of functional Microbiology
University of Veterinary Medicine Vienna (Vetmeduni Vienna) 

Citation: "Deciphering Host Genotype-Specific Impacts on the Metabolic Fingerprint of Listeria monocytogenes by FTIR Spectroscopy" by Tom Grunert, Avril Monahan, Caroline Lassnig, Claus Vogl, Mathias Müller and Monika Ehling-Schulz was published in the journal PLOS ONE. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0115959




Arsenic In Baby Formula, Breast Feeding Safer


Baby formula poses higher arsenic risk to newborns than breast milk, shows Dartmouth study

In the first U.S. study of urinary arsenic in babies, Dartmouth College researchers found that formula-fed infants had higher arsenic levels than breast-fed infants, and that breast milk itself contained very low arsenic concentrations.
 
Credit: Dartmouth College

The findings appear Feb. 23 online in the journal Environmental Health Perspectives. A PDF is available on request.

The researchers measured arsenic in home tap water, urine from 72 six-week-old infants and breast milk from nine women in New Hampshire. Urinary arsenic was 7.5 times lower for breast-fed than formula-fed infants. The highest tap water arsenic concentrations far exceeded the arsenic concentrations in powdered formulas, but for the majority of the study's participants, both the powder and water contributed to exposure.
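A back-of-the-envelope sketch shows why both the powder and the mixing water contribute to a formula-fed infant's exposure. Every concentration and volume below is a hypothetical placeholder, not a value from the study.

# Back-of-the-envelope intake comparison. None of these concentrations or volumes
# come from the study; they are hypothetical placeholders to show how both the
# powder and the mixing water can contribute to a formula-fed infant's exposure.
water_arsenic_ug_per_l = 5.0         # arsenic in tap water used to mix formula (assumed)
powder_arsenic_ug_per_kg = 20.0      # arsenic in formula powder (assumed)
breast_milk_arsenic_ug_per_l = 0.05  # arsenic in breast milk (assumed, very low)

daily_volume_l = 0.8    # milk or formula consumed per day (assumed)
daily_powder_kg = 0.1   # powder used to prepare that volume (assumed)

formula_intake_ug = (daily_volume_l * water_arsenic_ug_per_l
                     + daily_powder_kg * powder_arsenic_ug_per_kg)
breast_intake_ug = daily_volume_l * breast_milk_arsenic_ug_per_l

print(f"formula-fed: ~{formula_intake_ug:.1f} micrograms/day (water + powder)")
print(f"breast-fed:  ~{breast_intake_ug:.2f} micrograms/day")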

"This study's results highlight that breastfeeding can reduce arsenic exposure even at the relatively low levels of arsenic typically experienced in the United States," says lead author Professor Kathryn Cottingham. "This is an important public health benefit of breastfeeding."

Arsenic occurs naturally in bedrock and is a common global contaminant of well water. It causes cancers and other diseases, and early-life exposure has been associated with increased fetal mortality, decreased birth weight and diminished cognitive function. The Environmental Protection Agency has set a maximum contaminant level for public drinking water, but private well water is not subject to regulation and is the primary water source in many rural parts of the United States.

"We advise families with private wells to have their tap water tested for arsenic," says senior author Professor Margaret Karagas, principal investigator at Dartmouth's Children's Environmental Health and Disease Prevention Research Center. Added study co-lead author Courtney Carignan: "We predict that population-wide arsenic exposure will increase during the second part of the first year of life as the prevalence of formula-feeding increases."



Contacts and sources:
 Professor Kathryn Cottingham