Unseen Is Free


Wednesday, August 27, 2014

Best View Yet Of Merging Galaxies In Distant Universe

Using the NASA/ESA Hubble Space Telescope and many other telescopes on the ground and in space, an international team of astronomers has obtained the best view yet of a collision that took place between two galaxies when the Universe was only half its current age.
Credit: ESO

They enlisted the help of a galaxy-sized magnifying glass to reveal otherwise invisible detail. These new studies of the galaxy H-ATLAS J142935.3-002836 have shown that this complex and distant object looks like the well-known local galaxy collision, the Antennae Galaxies.

The famous fictional detective Sherlock Holmes used a magnifying lens to reveal barely visible but important evidence. Astronomers are now combining the power of many telescopes on Earth and in space [1] with a vastly larger form of lens to study a case of vigorous star formation in the early Universe.

This diagram shows how the effect of gravitational lensing around a normal galaxy focuses the light coming from a very distant star-forming galaxy merger to create a distorted, but brighter view.
 Credit: ESA/ESO/M. Kornmesser

"While astronomers are often limited by the power of their telescopes, in some cases our ability to see detail is hugely boosted by natural lenses, created by the Universe," explains lead author Hugo Messias of the Universidad de Concepción (Chile) and the Centro de Astronomia e Astrofísica da Universidade de Lisboa (Portugal). "Einstein predicted in his theory of general relativity that, given enough mass, light does not travel in a straight line but will be bent in a similar way to light refracted by a normal lens."

These cosmic lenses are created by massive structures like galaxies and galaxy clusters, which deflect the light from objects behind them due to their strong gravity — an effect called gravitational lensing. The magnifying properties of this effect allow astronomers to study objects that would not otherwise be visible and to directly compare local galaxies with much more remote ones, seen when the Universe was significantly younger.
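For the idealised case of a single point-mass lens, general relativity gives a simple expression for the angular size of the resulting ring of lensed light (the Einstein radius). The sketch below uses illustrative round-number masses and distances, not measured values for any of the galaxies discussed here:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec in metres

def einstein_radius(mass_kg, d_lens, d_source, d_lens_source):
    """Angular Einstein radius (radians) of a point-mass lens.

    The three distances are angular-diameter distances in metres:
    observer-to-lens, observer-to-source, and lens-to-source.
    """
    return math.sqrt(4 * G * mass_kg / c**2 * d_lens_source / (d_lens * d_source))

# Hypothetical numbers: a 10^12 solar-mass galaxy halfway to a source,
# with round distances of 1000 and 2000 Mpc.
theta = einstein_radius(1e12 * M_SUN, 1000 * MPC, 2000 * MPC, 1000 * MPC)
arcsec = math.degrees(theta) * 3600
print(f"Einstein radius ≈ {arcsec:.2f} arcseconds")
```

A ring of roughly an arcsecond or two is typical of galaxy-scale lenses, which is why the high resolution of Hubble and Keck is needed to resolve the lensed structure.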

This video takes the viewer deep into an apparently sparsely occupied region of the constellation of Virgo (The Virgin). Here at the centre, looking like many other faint spots, is a remarkable object, a gravitationally lensed view of a distant galaxy merger.

Credit: NASA/ESA/W. M. Keck Observatory/Digitized Sky Survey 2. Music: movetwo

But for these gravitational lenses to work, the lensing galaxy, and the one far behind it, need to be very precisely aligned.

"These chance alignments are quite rare and tend to be hard to identify," adds Messias, "but recent studies have shown that by observing at far-infrared and millimetre wavelengths we can find these cases much more efficiently."

This artist's impression shows how the effect of gravitational lensing by an intervening galaxy magnifies, brightens and distorts the appearance of a remote merging galaxy far behind it.

The viewpoint of the observer moves sideways so that the distant galaxy merger appears first to one side, where it is faint, and then passes right behind the foreground object, where it is dramatically magnified and its total apparent brightness increases.

Credit: ESA/Hubble & ESO/M. Kornmesser

H-ATLAS J142935.3-002836 (or just H1429-0028 for short) is one of these sources and was found in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). It is among the brightest gravitationally lensed objects in the far-infrared regime found so far, even though we are seeing it at a time when the Universe was just half its current age.

Probing this object was at the limit of what is possible, so the international team of astronomers started an extensive follow-up campaign using the NASA/ESA Hubble Space Telescope alongside other space telescopes and some of the most powerful telescopes on the ground — including the Atacama Large Millimeter/submillimeter Array (ALMA), the Keck Observatory, the Karl G. Jansky Very Large Array (JVLA), and others. The different telescopes provided different views, which could be combined to get the best insight yet into the nature of this unusual object.

The Hubble and Keck images revealed a detailed gravitationally induced ring of light around the foreground galaxy. These high-resolution images also showed that the lensing galaxy is an edge-on disc galaxy — similar to our galaxy, the Milky Way — which obscures parts of the background light due to the large dust clouds it contains.

"We need to observe with Hubble to find cases of gravitational lensing and to highlight in high resolution the clues left by these huge cosmic lenses," adds Rob Ivison, co-author and ESO's Director for Science.

But it is not possible to see past the large dust clouds of the foreground galaxy with Hubble. The obscuration was overcome by ALMA and the JVLA, since these two facilities observe the sky at longer wavelengths, which are unaffected by dust. Using the combined data the team discovered that the background system was actually an ongoing collision between two galaxies.

Further characterisation of the object was undertaken with ALMA, which traced carbon monoxide, allowing for detailed studies of star formation mechanisms in galaxies and for the motion of material in the galaxy to be measured. This confirmed that the lensed object is indeed an ongoing galactic collision forming hundreds of new stars each year, and that one of the colliding galaxies still shows signs of rotation — an indication that it was a disc galaxy just before this encounter.

The system of these two colliding galaxies resembles the Antennae Galaxies, an object much closer to us than H1429-0028 that Hubble has imaged several times before in stunning detail. This is a spectacular collision between two galaxies, which are believed to have had a disc structure in the past. But while the Antennae system is forming stars at a total rate of only a few tens of times the mass of our Sun each year, H1429-0028 turns more than 400 times the mass of the Sun of gas into new stars each year.

Ivison concludes: "With the combined power of Hubble and these other telescopes we have been able to locate this very fortunate alignment, take advantage of the foreground galaxy's lensing effects and characterise the properties of this distant merger and the extreme starburst within it. It is very much a testament to the power of telescope teamwork."

[1] The telescopes and surveys employed were: the NASA/ESA Hubble Space Telescope, ALMA, APEX, VISTA, the Gemini South telescope, the Keck-II telescope, the NASA Spitzer Space Telescope, the Jansky Very Large Array, CARMA, IRAM, and the SDSS and WISE surveys.

Contacts and sources:
Georgia Bladon
ESA/Hubble Information Centre

What Lit Up The Universe

New research from UCL shows we will soon uncover the origin of the ultraviolet light that bathes the cosmos, helping scientists understand how galaxies were built.

A computer model shows one scenario for how light is spread through the early universe on vast scales (more than 50 million light years across). Astronomers will soon know whether or not these kinds of computer models give an accurate portrayal of light in the real cosmos.

Credit: Andrew Pontzen/Fabio Governato

The study published today in The Astrophysical Journal Letters by UCL cosmologists Dr Andrew Pontzen and Dr Hiranya Peiris (both UCL Physics & Astronomy), together with collaborators at Princeton and Barcelona Universities, shows how forthcoming astronomical surveys will reveal what lit up the cosmos.

"Which produces more light? A country's biggest cities or its many tiny towns?" asked Dr Pontzen, lead author of the study. "Cities are brighter, but towns are far more numerous. Understanding the balance would tell you something about the organisation of the country. We're posing a similar question about the universe: does ultraviolet light come from numerous but faint galaxies, or from a smaller number of quasars?"

Quasars are the brightest objects in the Universe; their intense light is generated by gas as it falls towards a black hole. Galaxies can contain millions or billions of stars, but are still dim by comparison. Understanding whether the numerous small galaxies outshine the rare, bright quasars will provide insight into the way the universe built up today's populations of stars and planets. It will also help scientists properly calibrate their measurements of dark energy, the agent thought to be accelerating the universe's expansion and determining its far future.

The new method proposed by the team builds on a technique already used by astronomers in which quasars act as beacons to understand space. The intense light from quasars makes them easy to spot even at extreme distances, up to 95% of the way across the observable universe. The team think that studying how this light interacts with hydrogen gas on its journey to Earth will reveal the main sources of illumination in the universe, even if those sources are not themselves quasars.

Two types of hydrogen gas are found in the universe – a plain, neutral form and a second charged form which results from bombardment by UV light. These two forms can be distinguished by studying a particular wavelength of light called 'Lyman-alpha' which is only absorbed by the neutral type of hydrogen. Scientists can see where in the universe this 'Lyman-alpha' light has been absorbed to map the neutral hydrogen.
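Because the universe is expanding, rest-frame Lyman-alpha light (emitted at about 121.6 nm, in the far ultraviolet) is stretched to longer wavelengths by the time it reaches Earth, which is what lets ground-based surveys map the absorption. A minimal sketch of this relation, using illustrative redshifts rather than values from the study:

```python
# Rest wavelength of the hydrogen Lyman-alpha transition, in nanometres.
LYMAN_ALPHA_REST_NM = 121.567

def observed_wavelength(redshift):
    """Wavelength at which rest-frame Lyman-alpha light arrives at Earth."""
    return LYMAN_ALPHA_REST_NM * (1 + redshift)

# For quasars at these example redshifts, Lyman-alpha is shifted out of
# the ultraviolet and into the near-ultraviolet/visible band, where each
# intervening cloud of neutral hydrogen imprints its own absorption line.
for z in (2.0, 3.0, 4.0):
    print(f"z = {z}: Lyman-alpha observed at {observed_wavelength(z):.0f} nm")
```

Each absorbing cloud sits at its own redshift along the line of sight, so a single quasar spectrum yields a one-dimensional map of neutral hydrogen between the quasar and us.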

Since the quasars being studied are billions of light years away, they act as a time capsule: looking at the light shows us what the universe looked like in the distant past. The resulting map will reveal where neutral hydrogen was located billions of years ago as the universe was vigorously building its galaxies.

An even distribution of neutral hydrogen gas would suggest numerous galaxies as the source of most light, whereas a much less uniform pattern, showing a patchwork of charged and neutral hydrogen gas, would indicate that rare quasars were the primary origin of light.

Current samples of quasars aren't quite big enough for a robust analysis of the differences between the two scenarios; however, a number of surveys currently being planned should help scientists find the answer.

Chief among these is the DESI (Dark Energy Spectroscopic Instrument) survey which will include detailed measurements of about a million distant quasars. Although these measurements are designed to reveal how the expansion of the universe is accelerating due to dark energy, the new research shows that results from DESI will also determine whether the intervening gas is uniformly illuminated. In turn, the measurement of patchiness will reveal whether light in our universe is generated by 'a few cities' (quasars) or by 'many small towns' (galaxies).

Co-author Dr Hiranya Peiris said: "It's amazing how little is known about the objects that bathed the universe in ultraviolet radiation while galaxies assembled into their present form. This technique gives us a novel handle on the intergalactic environment during this critical time in the universe's history."

Dr Pontzen said: "It's good news all round. DESI is going to give us invaluable information about what was going on in early galaxies, objects that are so faint and distant we would never see them individually. And once that's understood in the data, the team can take account of it and still get accurate measurements of how the universe is expanding, telling us about dark energy. It illustrates how these big, ambitious projects are going to deliver astonishingly rich maps to explore. We're now working to understand what other unexpected bonuses might be pulled out from the data."

Contacts and sources:
Rebecca Caygill
University College London

Graphene Gets Competition

A new argument has just been added to the growing case for graphene being bumped off its pedestal as the next big thing in the high-tech world by the two-dimensional semiconductors known as MX2 materials.

Illustration of a MoS2/WS2 heterostructure with a MoS2 monolayer lying on top of a WS2 monolayer. Electrons and holes created by light are shown to separate into different layers.
Image courtesy of Feng Wang group

An international collaboration of researchers led by a scientist with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) has reported the first experimental observation of ultrafast charge transfer in photo-excited MX2 materials. The recorded charge transfer time clocked in at under 50 femtoseconds, comparable to the fastest times recorded for organic photovoltaics.

“We’ve demonstrated, for the first time, efficient charge transfer in MX2 heterostructures through combined photoluminescence mapping and transient absorption measurements,” says Feng Wang, a condensed matter physicist with Berkeley Lab’s Materials Sciences Division and the University of California (UC) Berkeley’s Physics Department. “Having quantitatively determined charge transfer time to be less than 50 femtoseconds, our study suggests that MX2 heterostructures, with their remarkable electrical and optical properties and the rapid development of large-area synthesis, hold great promise for future photonic and optoelectronic applications.”

Feng Wang is a condensed matter physicist with Berkeley Lab’s Materials Sciences Division and UC Berkeley’s Physics Department.
 Photo by Roy Kaltschmidt

Wang is the corresponding author of a paper in Nature Nanotechnology describing this research. The paper is titled “Ultrafast charge transfer in atomically thin MoS2/WS2 heterostructures.” Co-authors are Xiaoping Hong, Jonghwan Kim, Su-Fei Shi, Yu Zhang, Chenhao Jin, Yinghui Sun, Sefaattin Tongay, Junqiao Wu and Yanfeng Zhang.

MX2 monolayers consist of a single layer of transition metal atoms, such as molybdenum (Mo) or tungsten (W), sandwiched between two layers of chalcogen atoms, such as sulfur (S). When two such monolayers are stacked, the resulting heterostructure is bound by the relatively weak intermolecular attraction known as the van der Waals force. These 2D semiconductors feature the same hexagonal “honeycombed” structure as graphene and superfast electrical conductance, but, unlike graphene, they have natural energy band-gaps. This facilitates their application in transistors and other electronic devices because, unlike graphene, their electrical conductance can be switched off.

“Combining different MX2 layers together allows one to control their physical properties,” says Wang, who is also an investigator with the Kavli Energy NanoSciences Institute (Kavli-ENSI). “For example, the combination of MoS2 and WS2 forms a type-II semiconductor that enables fast charge separation. The separation of photoexcited electrons and holes is essential for driving an electrical current in a photodetector or solar cell.”

In demonstrating the ultrafast charge separation capabilities of atomically thin samples of MoS2/WS2 heterostructures, Wang and his collaborators have opened up potentially rich new avenues, not only for photonics and optoelectronics, but also for photovoltaics.

Photoluminescence mapping of a MoS2/WS2 heterostructure with the color scale representing photoluminescence intensity shows strong quenching of the MoS2 photoluminescence.

Image courtesy of Feng Wang group

“MX2 semiconductors have extremely strong optical absorption properties and, compared with organic photovoltaic materials, have a crystalline structure and better electrical transport properties,” Wang says. “Factor in a femtosecond charge transfer rate and MX2 semiconductors provide an ideal way to spatially separate electrons and holes for electrical collection and utilization.”

Wang and his colleagues are studying the microscopic origins of charge transfer in MX2 heterostructures and the variation in charge transfer rates between different MX2 materials.

“We’re also interested in controlling the charge transfer process with external electrical fields as a means of utilizing MX2 heterostructures in photovoltaic devices,” Wang says.

This research was supported by an Early Career Research Award from the DOE Office of Science through UC Berkeley, and by funding agencies in China through Peking University in Beijing.

Contacts and sources:
Lynn Yarris
DOE/Lawrence Berkeley National Laboratory

Composition Of Earth’s Mantle Revisited Thanks To Research At Argonne’s Advanced Photon Source

We live atop the thinnest layer of the Earth: the crust. Below is the mantle (red), outer core (orange), and finally inner core (yellow-white). The lower portion of the mantle is the largest layer – stretching from 400 to 1,800 miles below the surface. Research at Argonne’s Advanced Photon Source recently suggested the makeup of the lower mantle is significantly different from what was previously thought. 
Image by Johan Swanepoel/Shutterstock.

Research published this past June in Science suggested that the makeup of the Earth's lower mantle, which makes up the largest part of the Earth by volume, is significantly different than previously thought.

The work, performed at the Advanced Photon Source at the U.S. Department of Energy’s Argonne National Laboratory, will have a significant impact on our understanding of the lower mantle, scientists said. Understanding the composition of the mantle is essential to seismology, the study of earthquakes and movement below the Earth's surface, and should shed light on unexplained seismic phenomena observed there.

Though humans haven't yet managed to drill further than seven and a half miles into the Earth, we've built a comprehensive picture of what's beneath our feet through calculations and limited observation. We all live atop the crust, the thin outer layer; just beneath is the mantle, outer core and finally inner core. The lower portion of the mantle is the largest layer — stretching from 400 to 1,800 miles below the surface — and gives off the most heat. Until now, the entire lower mantle was thought to be composed of the same mineral throughout: ferromagnesian silicate, arranged in a type of structure called perovskite.

The pressure and heat of the lower mantle are intense — more than 3,500° Fahrenheit. Materials may have very different properties under these conditions; structures may exist there that would collapse at the surface.

To simulate these conditions, researchers use special facilities at the Advanced Photon Source, where they shine high-powered lasers onto a sample held inside a pressure cell made of a pair of diamonds, heating it. Then they aim powerful beams of X-rays at the sample, which scatter in all directions. By gathering the scattering data, scientists can reconstruct how the atoms in the sample were arranged.
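The reconstruction step relies on standard diffraction relations: Bragg's law ties the angle at which X-rays scatter to the spacing between planes of atoms in the crystal. As a hedged illustration (the wavelength and angle below are round example numbers, not values from this experiment):

```python
import math

def d_spacing(wavelength_angstrom, two_theta_deg, order=1):
    """Interplanar spacing d from Bragg's law: n*lambda = 2*d*sin(theta).

    two_theta_deg is the scattering angle (2-theta) recorded at the
    detector; the X-ray wavelength is given in angstroms.
    """
    theta = math.radians(two_theta_deg / 2)
    return order * wavelength_angstrom / (2 * math.sin(theta))

# Example: a hard 0.4-angstrom synchrotron X-ray beam and a diffraction
# peak observed at 2-theta = 10 degrees.
d = d_spacing(0.4, 10.0)
print(f"d ≈ {d:.3f} angstroms")
```

Measuring many such peaks at once, and watching how they shift or split as pressure and temperature change, is how a phase transition like the perovskite-to-H-phase breakup shows up in the data.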

The team found that at conditions that exist below about 1,200 miles underground, the ferromagnesian silicate perovskite actually breaks into two separate phases. One contains nearly no iron, while the other is full of iron. The iron-rich phase, called the H-phase, is much more stable under these conditions.

"We still don't fully understand the chemistry of the H-phase," said lead author and Carnegie Institution of Washington scientist Li Zhang. "But this finding indicates that all geodynamic models need to be reconsidered to take the H-phase into account. And there could be even more unidentified phases down there in the lower mantle as well, waiting to be identified."

The facilities at Argonne’s Advanced Photon Source were key to the findings, said Carnegie scientist Yue Meng, also an author on the paper. "Recent technological advances at our beamline allowed us to create the conditions to simulate these intense temperatures and pressures and probe the changes in chemistry and structure of the sample in situ," she said.

"What distinguished this work was the exceptional attention to detail in every aspect of the research — it demonstrates a new level for high-pressure research," Meng added.

The paper, "Disproportionation of (Mg,Fe)SiO3 perovskite in Earth’s deep lower mantle," was published in Science. Other Argonne coauthors were Wenjun Liu and Ruqing Xu.

The work was performed at the High Pressure Collaborative Access Team (HPCAT) beamline, which is run by the Geophysical Laboratory at the Carnegie Institution of Washington. Wenge Yang and Lin Wang from the APS-Carnegie Institution's High Pressure Synergetic Consortium (HPSynC) also contributed to the paper.

The research was supported by the National Science Foundation and performed at the HPCAT beamline of the Advanced Photon Source, which is supported by the U.S. Department of Energy's Office of Basic Energy Sciences, the National Nuclear Security Administration and the National Science Foundation. Portions of this work were performed at GeoSoilEnviroCARS at the APS, run by the University of Chicago and supported by the National Science Foundation and the DOE; at the 34-ID-E beamline; and at the Shanghai Synchrotron Radiation Facility. The Advanced Photon Source is supported by DOE's Office of Basic Energy Sciences.

Contacts and sources:
Tona Kunz
DOE/Argonne National Laboratory

Sunday, August 24, 2014

Neptune's Strange Moon Triton, Best-Ever Global Color Map From Voyager's Historic Footage, Video

NASA's Voyager 2 spacecraft gave humanity its first glimpse of Neptune and its moon Triton in the summer of 1989. Like an old film, Voyager’s historic footage of Triton has been “restored” and used to construct the best-ever global color map of that strange moon.

The Voyager 2 spacecraft flew by Triton, a moon of Neptune, in the summer of 1989. Paul Schenk, a scientist at the Lunar and Planetary Institute in Houston, used Voyager data to construct the best-ever global color map of Triton. This map has a resolution of 1,970 feet (600 meters) per pixel.
Image Credit: NASA/JPL-Caltech/Lunar & Planetary Institute

The map, produced by Paul Schenk, a scientist at the Lunar and Planetary Institute in Houston, has also been used to make a movie recreating that historic Voyager encounter, which took place 25 years ago, on August 25, 1989.

The new Triton map has a resolution of 1,970 feet (600 meters) per pixel. The colors have been enhanced to bring out contrast but are a close approximation to Triton’s natural colors. Voyager’s “eyes” saw in colors slightly different from human eyes, and this map was produced using orange, green and blue filter images.

The Voyager 2 spacecraft flew by Triton, a moon of Neptune, on August 25, 1989. Paul Schenk, a scientist at the Lunar and Planetary Institute in Houston, used Voyager data to construct this video recreating that exciting encounter.
Image Credit: NASA/JPL-Caltech/Lunar & Planetary Institute

In 1989, most of the northern hemisphere was in darkness and unseen by Voyager. Because of the speed of Voyager's visit and the slow rotation of Triton, only one hemisphere was seen clearly at close distance. The rest of the surface was either in darkness or seen as blurry markings.

The production of the new Triton map was inspired by anticipation of NASA's New Horizons encounter with Pluto, coming up a little under a year from now. Among the improvements on the map are updates to the accuracy of feature locations, sharpening of feature details by removing some of the blurring effects of the camera, and improved color processing.

Although Triton is a moon of a planet and Pluto is a dwarf planet, Triton serves as a preview of sorts for the upcoming Pluto encounter. Although both bodies originated in the outer solar system, Triton was captured by Neptune and has undergone a radically different thermal history than Pluto. Tidal heating has likely melted the interior of Triton, producing the volcanoes, fractures and other geological features that Voyager saw on that bitterly cold, icy surface.

Pluto is unlikely to be a copy of Triton, but some of the same types of features may be present. Triton is slightly larger than Pluto, has a very similar internal density and bulk composition, and has the same low-temperature volatiles frozen on its surface. The surface composition of both bodies includes carbon monoxide, carbon dioxide, methane and nitrogen ices.

Voyager also discovered atmospheric plumes on Triton, making it one of the known active bodies in the outer solar system, along with objects such as Jupiter's moon Io and Saturn's moon Enceladus. Scientists will be looking at Pluto next year to see if it will join this list. They will also be looking to see how Pluto and Triton compare and contrast, and how their different histories have shaped the surfaces we see.

Although it will be a fast flyby, New Horizons' Pluto encounter on July 14, 2015, will not be a replay of Voyager but more of a sequel and a reboot, with a new and more technologically advanced spacecraft and, more importantly, a new cast of characters. Those characters are Pluto and its family of five known moons, all of which will be seen up close for the first time next summer.

Triton may not be a perfect preview of coming attractions, but it serves as a prequel to the cosmic blockbuster expected when New Horizons arrives at Pluto next year.

In another historic milestone for the Voyager mission, Aug. 25 also marks the two-year anniversary of Voyager 1 reaching interstellar space.

The Voyager mission is managed by NASA's Jet Propulsion Laboratory, in Pasadena, California, for NASA's Science Mission Directorate at NASA Headquarters in Washington. Caltech manages JPL for NASA. The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, manages the New Horizons mission for NASA's SMD.

Contacts and sources:
Elizabeth Landau/Preston Dyches
 Jet Propulsion Laboratory, Pasadena, Calif.

Paul Schenk
Lunar and Planetary Institute, Houston, Texas

Michael Buckley
Johns Hopkins University Applied Physics Laboratory

New Ground X-Vehicle Technology (GXV-T) Program Aims To Break The “More Armor” Paradigm For Protection

GXV-T seeks to develop revolutionary technologies to make future armored fighting vehicles more mobile, effective and affordable

For the past 100 years of mechanized warfare, protection for ground-based armored fighting vehicles and their occupants has boiled down almost exclusively to a simple equation: More armor equals more protection. Weapons’ ability to penetrate armor, however, has advanced faster than armor’s ability to withstand penetration. As a result, achieving even incremental improvements in crew survivability has required significant increases in vehicle mass and cost.

The trend of increasingly heavy, less mobile and more expensive combat platforms has limited Soldiers’ and Marines’ ability to rapidly deploy and maneuver in theater and accomplish their missions in varied and evolving threat environments. 

Moreover, larger vehicles are limited to roads, require more logistical support and are more expensive to design, develop, field and replace. The U.S. military is now at a point where—considering tactical mobility, strategic mobility, survivability and cost—innovative and disruptive solutions are necessary to ensure the operational viability of the next generation of armored fighting vehicles. 

Ground-based armored fighting vehicles and their occupants have traditionally relied on armor and maneuverability for protection. The amount of armor needed for today’s threat environments, however, is becoming increasingly burdensome and ineffective against ever-improving weaponry.

DARPA's Ground X-Vehicle Technology (GXV-T) program seeks to develop revolutionary technologies to enable a layered approach to protection that would use less armor more strategically and improve vehicles’ ability to avoid detection, engagement and hits by adversaries. Such capabilities would enable smaller, faster vehicles in the future to more efficiently and cost-effectively tackle varied and unpredictable combat situations.

DARPA has created the Ground X-Vehicle Technology (GXV-T) program to help overcome these challenges and disrupt the current trends in mechanized warfare. GXV-T seeks to investigate revolutionary ground-vehicle technologies that would simultaneously improve the mobility and survivability of vehicles through means other than adding more armor, including avoiding detection, engagement and hits by adversaries. This improved mobility and warfighting capability would enable future U.S. ground forces to more efficiently and cost-effectively tackle varied and unpredictable combat situations.

DARPA’s Ground X-Vehicle Technology (GXV-T) program seeks to investigate revolutionary technologies for ground-based armored fighting vehicles that would significantly improve the mobility and survivability of vehicles through means other than adding more armor.

“GXV-T’s goal is not just to improve or replace one particular vehicle—it’s about breaking the ‘more armor’ paradigm and revolutionizing protection for all armored fighting vehicles,” said Kevin Massey, DARPA program manager. “Inspired by how X-plane programs have improved aircraft capabilities over the past 60 years, we plan to pursue groundbreaking fundamental research and development to help make future armored fighting vehicles significantly more mobile, effective, safe and affordable.”

GXV-T’s technical goals include the following improvements relative to today’s armored fighting vehicles:
Reduce vehicle size and weight by 50 percent
Reduce onboard crew needed to operate vehicle by 50 percent
Increase vehicle speed by 100 percent
Access 95 percent of terrain
Reduce signatures that enable adversaries to detect and engage vehicles

The GXV-T program provides the following four technical areas as examples where advanced technologies could be developed that would meet the program’s objectives:
Radically Enhanced Mobility – Ability to traverse diverse off-road terrain, including slopes and various elevations; advanced suspensions and novel track/wheel configurations; extreme speed; rapid omnidirectional movement changes in three dimensions
Survivability through Agility – Autonomously avoid incoming threats without harming occupants through technologies such as agile motion (dodging) and active repositioning of armor
Crew Augmentation – Improved physical and electronically assisted situational awareness for crew and passengers; semi-autonomous driver assistance and automation of key crew functions similar to capabilities found in modern commercial airplane cockpits
Signature Management – Reduction of detectable signatures, including visible, infrared (IR), acoustic and electromagnetic (EM)

Technology development beyond these four examples is desired so long as it supports the program’s goals. DARPA is particularly interested in engaging nontraditional contributors to help develop leap-ahead technologies in the focus areas above, as well as other technologies that could potentially improve both the survivability and mobility of future armored fighting vehicles.

DARPA aims to develop GXV-T technologies over 24 months after initial contract awards, which are currently planned on or before April 2015. The GXV-T program plans to pursue research, development, design and testing and evaluation of major subsystem capabilities in multiple technology areas with the goal of integrating these capabilities into future ground X-vehicle demonstrators.

SyNAPSE Program Develops Advanced Brain-Inspired Chip

New chip design mimics brain’s power-saving efficiency; uses 100x less power for complex processing than state-of-the-art chips

DARPA-funded researchers have developed one of the world’s largest and most complex computer chips ever produced—one whose architecture is inspired by the neuronal structure of the brain and requires only a fraction of the electrical power of conventional chips.

A circuit board shows 16 of the new brain-inspired chips in a 4 x 4 array along with interface hardware. The board is being used to rapidly analyze high-resolution images.
Courtesy: IBM

Designed by researchers at IBM in San Jose, California, under DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, the chip is loaded with more than 5 billion transistors and boasts more than 250 million “synapses,” or programmable logic points, analogous to the connections between neurons in the brain. That’s still orders of magnitude fewer than the number of actual synapses in the brain, but a giant step toward making ultra-high performance, low-power neuro-inspired systems a reality.

Many tasks that people and animals perform effortlessly, such as perception and pattern recognition, audio processing and motor control, are difficult for traditional computing architectures to do without consuming a lot of power. Biological systems consume much less energy than current computers attempting the same tasks. The SyNAPSE program was created to speed the development of a brain-inspired chip that could perform difficult perception and control tasks while at the same time achieving significant energy savings.

The SyNAPSE-developed chip, which can be tiled to create large arrays, has one million electronic “neurons” and 256 million electronic synapses between neurons. Built on Samsung Foundry's 28nm process technology, the 5.4 billion transistor chip has one of the highest transistor counts of any chip ever produced.  
Each chip consumes less than 100 milliwatts of electrical power during operation. When applied to benchmark tasks of pattern recognition, the new chip achieved energy savings of two orders of magnitude compared to state-of-the-art traditional computing systems.
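As a rough sanity check of these figures (a sketch based only on the numbers quoted above, not on any chip specification), the per-component power budget can be worked out directly:

```python
# Back-of-envelope arithmetic from the reported figures (illustrative only).
neurons = 1_000_000            # electronic "neurons" per chip
synapses = 256_000_000         # electronic synapses per chip
power_w = 0.100                # upper bound: < 100 milliwatts during operation

# Power budget per neuron and per synapse, in nanowatts
power_per_neuron_nw = power_w / neurons * 1e9
power_per_synapse_nw = power_w / synapses * 1e9

print(f"~{power_per_neuron_nw:.0f} nW per neuron")    # ~100 nW
print(f"~{power_per_synapse_nw:.2f} nW per synapse")  # ~0.39 nW

# "Two orders of magnitude" in savings implies a conventional system
# would draw on the order of 100x more power for the same benchmark:
conventional_w = power_w * 100
print(f"conventional estimate: ~{conventional_w:.0f} W")  # ~10 W
```

At well under a nanowatt per synapse, the arithmetic makes clear why the design suits power-limited platforms such as the mobile robots and remote sensors mentioned below.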

The high energy efficiency is achieved, in part, by distributing data and computation across the chip, alleviating the need to move data over large distances. In addition, the chip runs in an asynchronous manner, processing and transmitting data only as required, similar to how the brain works. The new chip’s high energy efficiency makes it a candidate for defense applications such as mobile robots and remote sensors where electrical power is limited.

“Computer chip design is driven by a desire to achieve the highest performance at the lowest cost. Historically, the most important cost was that of the computer chip. But Moore’s law—the exponentially decreasing cost of constructing high-transistor-count chips—now allows computer architects to borrow an idea from nature, where energy is a more important cost than complexity, and focus on designs that gain power efficiency by sparsely employing a very large number of components to minimize the movement of data. 

“IBM’s chip, which is by far the largest one yet made that exploits these ideas, could give unmanned aircraft or robotic ground systems with limited power budgets a more refined perception of the environment, distinguishing threats more accurately and reducing the burden on system operators,” said Gill Pratt, DARPA program manager. 

“Our troops often are in austere environments and must carry heavy batteries to power mobile devices, sensors, radios and other electronic equipment. Air vehicles also have very limited power budgets because of the impact of weight. For both of these environments, the extreme energy efficiency achieved by the SyNAPSE program’s accomplishments could enable a much wider range of portable computing applications for defense.”

Another potential application for the SyNAPSE-developed chip is neuroscience modelling. The large number of electronic neurons and synapses in each chip and the ability to tile multiple chips could lead to the development of complex, networked neuromorphic simulators for testing network models in neurobiology and deepening current understanding of brain function.

A technical paper on the new chip is available here: http://www.sciencemag.org/content/345/6197/668.full

Northrop Grumman Developing XS-1 Experimental Spaceplane Design for DARPA

Northrop Grumman Corporation with Scaled Composites and Virgin Galactic is developing a preliminary design and flight demonstration plan for the Defense Advanced Research Projects Agency's (DARPA) Experimental Spaceplane XS-1 program.

Credit:  Northrop Grumman

XS-1 features a reusable booster that, when coupled with an expendable upper stage, provides affordable, available and responsive space lift, placing 3,000-pound-class spacecraft into low Earth orbit. Reusable boosters with aircraft-like operations promise a breakthrough in space lift costs for this payload class, enabling new generations of lower cost, innovative and more resilient spacecraft.

The company is defining its concept for XS-1 under a 13-month, phase one contract valued at $3.9 million. In addition to low-cost launch, the XS-1 would serve as a test-bed for a new generation of hypersonic aircraft.

A key program goal is to fly 10 times in 10 days using a minimal ground crew and infrastructure. Reusable aircraft-like operations would help reduce launch costs for light military and commercial spacecraft by a factor of 10 from current costs in this payload class.

To complement its aircraft, spacecraft and autonomous systems capabilities, Northrop Grumman has teamed with Scaled Composites of Mojave, which will lead fabrication and assembly, and Virgin Galactic, the privately-funded spaceline, which will head commercial spaceplane operations and transition.

"Our team is uniquely qualified to meet DARPA's XS-1 operational system goals, having built and transitioned many developmental systems to operational use, including our current work on the world's only commercial spaceline, Virgin Galactic's SpaceShipTwo," said Doug Young, vice president, missile defense and advanced missions, Northrop Grumman Aerospace Systems.

"We plan to bundle proven technologies into our concept that we developed during related projects for DARPA, NASA and the U.S. Air Force Research Laboratory, giving the government maximum return on those investments," Young added.

The design would be built around operability and affordability, emphasizing aircraft-like operations including:

– Clean pad launch using a transporter erector launcher, minimal infrastructure and ground crew;

– Highly autonomous flight operations that leverage Northrop Grumman's unmanned aircraft systems experience; and

– Aircraft-like horizontal landing and recovery on standard runways.

Contacts and sources: 
Northrop Grumman

Wednesday, August 20, 2014

Neanderthals 'Overlapped' With Modern Humans For Up To 5,400 Years

Neanderthals and modern humans coexisted in Europe for between 2,600 and 5,400 years, according to a new paper published in the journal Nature. For the first time, scientists have constructed a robust timeline showing when the last Neanderthals died out.

The image shows a Neanderthal model from the Natural History Museum. The Museum carried out the research in collaboration with Oxford.
Credit: University of Oxford

Significantly, the research paper says there is strong evidence to suggest that Neanderthals disappeared at different times across Europe rather than being rapidly replaced by modern humans.

A team, led by Professor Thomas Higham of the University of Oxford, obtained new radiocarbon dates for around 200 samples of bone, charcoal and shell from 40 key European archaeological sites. The sites, ranging from Russia in the east to Spain in the west, were either linked with the Neanderthal tool-making industry, known as Mousterian, or were ‘transitional’ sites containing stone tools associated with either early modern humans or Neanderthals.

The chronology was pieced together during a six-year research project by building mathematical models that combine the new radiocarbon dates with established archaeological stratigraphic evidence. The results showed that both groups overlapped for a significant period, giving ‘ample time’ for interaction and interbreeding. The paper adds, however, it is not clear where interbreeding may have happened in Eurasia or whether it occurred once or several times.

Professor Thomas Higham said: "Other recent studies of Neanderthal and modern human genetic make-up suggest that both groups interbred outside Africa, with 1.5%-2.1% or more of the DNA of modern non-African human populations originating from Neanderthals."

He added, "We believe we now have the first robust timeline that sheds new light on some of the key questions around the possible interactions between Neanderthals and modern humans. The chronology also pinpoints the timing of the Neanderthals’ disappearance, and suggests they may have survived in dwindling populations in pockets of Europe before they became extinct."

In 2011, another Nature paper featuring Dr Katerina Douka of the Oxford team obtained some very early dates (around 45,000 years old) for the so-called ‘transitional’ Uluzzian stone-tool industry of Italy and identified teeth remains in the site of the Grotta del Cavallo, Apulia, as those of anatomically modern humans. 

Under the new timeline published today, the Mousterian industry (attributed to Neanderthals and found across vast areas of Europe and Eurasia) is shown to have ended between 41,030 and 39,260 years ago. This strongly suggests an extensive overlap between Neanderthals and modern humans lasting several thousand years. The scientific team has for the first time specified, with 95% probability, exactly how long this overlap lasted.

The Uluzzian also contains objects, such as shell beads, that scholars widely believe signify symbolic or advanced behaviour in early human groups. One or two of the Châtelperronian sites of France and northern Spain (currently, although controversially, associated with Neanderthals) contain some similar items. 

This supports the theory first advanced several years ago that the arrival of early modern humans in Europe may have stimulated the Neanderthals into copying aspects of their symbolic behaviour in the millennia before they disappeared. The paper also presents an alternative theory: that the similar start dates of the two industries could mean that Châtelperronian sites are associated with modern humans and not Neanderthals after all.

There is currently no evidence to show that Neanderthals and early modern humans lived closely together, regardless of whether the Neanderthals were responsible for the Châtelperronian culture, the paper says. Rather than modern humans rapidly replacing Neanderthals, there seems to have been a more complex picture ‘characterised by a biological and cultural mosaic that lasted for several thousand years’. 

The Châtelperronian industry follows the Mousterian in archaeological layers at all sites where both occur. Importantly, however, the Châtelperronian industry appears to have started significantly before the end of Mousterian at some sites in Europe. This suggests that if Neanderthals were responsible for both cultures, there may have been some regional variation in their tool-making, says the paper.

Professor Higham said: ‘Previous radiocarbon dates have often underestimated the age of samples from sites associated with Neanderthals because the organic matter was contaminated with modern particles. We used ultrafiltration methods, which purify the extracted collagen from bone, to avoid the risk of modern contamination. This means we can say with more confidence that we have finally resolved the timing of the disappearance of our close cousins, the Neanderthals. Of course the Neanderthals are not completely extinct because some of their genes are in most of us today.’

Previous research had suggested that the Iberian Peninsula (modern-day Spain and Portugal) and the site of Gorham’s Cave, Gibraltar, might have been the final places in Europe where Neanderthals survived. Despite extensive dating work, the research team could not confirm the previous dates. The paper suggests that poor preservation of the dating material could have led to contamination and to the falsely young dates obtained previously.

Contacts and sources:
University of Oxford

Tuesday, August 19, 2014

Solar Energy That Doesn't Block The View

A team of researchers at Michigan State University has developed a new type of solar concentrator that, when placed over a window, generates solar energy while still allowing people to see through the window.

It is called a transparent luminescent solar concentrator and can be used on buildings, cell phones and any other device that has a clear surface.

And, according to Richard Lunt of MSU’s College of Engineering, the key word is “transparent.”

Solar power with a view: MSU doctoral student Yimu Zhao holds up a transparent luminescent solar concentrator module.
Photo by Yimu Zhao.

Research in the production of energy from solar cells placed around luminescent plastic-like materials is not new. These past efforts, however, have yielded poor results – the energy production was inefficient and the materials were highly colored.

“No one wants to sit behind colored glass,” said Lunt, an assistant professor of chemical engineering and materials science. “It makes for a very colorful environment, like working in a disco. We take an approach where we actually make the luminescent active layer itself transparent.”

The solar harvesting system uses small organic molecules developed by Lunt and his team to absorb specific nonvisible wavelengths of sunlight.

“We can tune these materials to pick up just the ultraviolet and the near infrared wavelengths that then ‘glow’ at another wavelength in the infrared,” he said.

The “glowing” infrared light is guided to the edge of the plastic where it is converted to electricity by thin strips of photovoltaic solar cells.

A transparent luminescent solar concentrator waveguide is shown with colorful traditional luminescent solar concentrators in the background. The new LSC can create solar energy but is not visible on windows or other clear surfaces.
Photo by G.L. Kohuth   

“Because the materials do not absorb or emit light in the visible spectrum, they look exceptionally transparent to the human eye,” Lunt said.

One of the benefits of this new development is its flexibility. While the technology is at an early stage, it has the potential to be scaled to commercial or industrial applications at an affordable cost.

“It opens a lot of area to deploy solar energy in a non-intrusive way,” Lunt said. “It can be used on tall buildings with lots of windows or any kind of mobile device that demands high aesthetic quality like a phone or e-reader. Ultimately we want to make solar harvesting surfaces that you do not even know are there.”

Lunt said more work is needed to improve the system’s energy-producing efficiency. Currently it achieves a solar conversion efficiency close to 1 percent, but the team aims to reach efficiencies beyond 5 percent when fully optimized. The best colored LSC has an efficiency of around 7 percent.

The research was featured on the cover of a recent issue of the journal Advanced Optical Materials.

Other members of the research team include Yimu Zhao, an MSU doctoral student in chemical engineering and materials science; Benjamin Levine, assistant professor of chemistry; and Garrett Meek, doctoral student in chemistry.

Contacts and sources:
Tom Oswald
Michigan State University

Climate Change Will Threaten Fish By Drying Out Southwest U.S. Streams, Study Predicts

Fish species native to a major Arizona watershed may lose access to important segments of their habitat by 2050 as surface water flow is reduced by the effects of climate warming, new research suggests.

Most of these fish species, found in the Verde River Basin, are already threatened or endangered. Their survival relies on easy access to various resources throughout the river and its tributary streams. The species include the speckled dace (Rhinichthys osculus), roundtail chub (Gila robusta) and Sonora sucker (Catostomus insignis).

 Speckled Dace 
Credit: Wikipedia

A key component of these streams is hydrologic connectivity – a steady flow of surface water throughout the system that enables fish to make use of the entire watershed as needed for eating, spawning and raising offspring.

Models that the researchers produced to gauge the effects of climate change on the watershed suggest that by the mid-21st century, the network will experience a 17 percent increase in the frequency of stream drying events and a 27 percent increase in the frequency of zero-flow days.

“We have portions of the channel that are going to dry more frequently and for longer periods of time,” said lead author Kristin Jaeger, assistant professor in The Ohio State University School of Environment and Natural Resources. “As a result, the network will become fragmented, contracting into isolated, separated pools.

Kristin Jaeger
Credit: OSU

“If water is flowing throughout the network, fish are able to access all parts of it and make use of whatever resources are there. But when systems dry down, temporary fragmented systems develop that force fish into smaller, sometimes isolated channel reaches or pools until dry channels wet up again.”

This study covers climate change’s effects on surface water availability from precipitation and temperature changes. It does not take into account any withdrawals of groundwater that will be needed during droughts to support the estimated 50 percent or more increase in Arizona’s population by 2050.

“These estimates are conservative,” said Jaeger, who conducted the study with co-authors Julian Olden and Noel Pelland of the University of Washington. The study is published in the Proceedings of the National Academy of Sciences.

The researchers used a rainfall runoff model, the Soil and Water Assessment Tool (SWAT), which incorporates the study basin’s elevation, terrain, soil, land use, vegetation coverage, and both current and future climate data, including precipitation and temperature.

“It’s a hydrological model that routes water received from precipitation through the landscape, a portion of which eventually becomes streamflow in the river,” Jaeger said. “We partitioned the watershed into many smaller pieces all linked to each other, with nodes placed 2 kilometers apart throughout the entire river network to evaluate if that portion of the river channel at an individual node supported streamflow for a given day.”

Jaeger describes the river network, as envisioned by this model, as a mosaic of wet and dry patches. Piecing data from all of those nodes together, the researchers established an index of connectivity for the entire watershed, which predicts that the mid-century and late-century climate will reduce connectivity by 6 to 9 percent over the course of a year and by up to 12 to 18 percent during spring spawning months.
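The article does not give the study's actual formulation of the connectivity index, but the idea described above can be sketched with a toy model: treat a channel as a chain of nodes that are each wet or dry on a given day, and score connectivity as the fraction of node pairs joined by an unbroken wet path. The function below is a hypothetical illustration, not the published metric:

```python
def connectivity_index(wet):
    """Fraction of node pairs connected by an unbroken wet path.

    `wet` is a list of booleans for nodes along a single channel
    (spaced, say, 2 km apart). Illustrative stand-in only; the
    study's published index may be defined differently.
    """
    n = len(wet)
    # Split the chain into maximal runs of consecutive wet nodes.
    runs, length = [], 0
    for w in wet:
        if w:
            length += 1
        else:
            if length:
                runs.append(length)
            length = 0
    if length:
        runs.append(length)
    # Connected pairs = sum over runs of C(r, 2); normalize by C(n, 2).
    connected = sum(r * (r - 1) // 2 for r in runs)
    total = n * (n - 1) // 2
    return connected / total if total else 0.0

# A fully wet channel scores 1.0; a single dry node fragments the
# network into isolated reaches and the score drops sharply.
print(connectivity_index([True] * 10))                        # 1.0
print(connectivity_index([True] * 5 + [False] + [True] * 4))  # ~0.36
```

The sharp drop from one dry node mirrors the fragmentation the model predicts: a small increase in drying events produces a disproportionate loss of whole-network access for fish.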

“The index decreases that are predicted by the model will affect spawning the most,” said Jaeger, who also holds an appointment with the Ohio Agricultural Research and Development Center. “During the spring spawning period, fish are more mobile, traveling longer distances to access necessary habitat. Projected decreased connectivity compromises access to different parts of the network.”

Flowing portions of the system will diminish between 8 and 20 percent in spring and early summer, producing lengthier channels that will dry more frequently and over longer periods of time. These changes will reduce available habitat for fish and force them to travel longer distances for resources once channels rewet, Jaeger said.

The fish are already subject to stressors on the system, including surface and groundwater extraction for irrigation and drinking water, loss of habitat and the introduction of nonnative species that prey on the native fish, Jaeger noted. The overall system’s connectivity is already compromised because of existing dry conditions in the American Southwest.

“These fish are important cogs in the wheel of this greater ecosystem,” Jaeger said. “Loss of endemic species is a big deal in and of itself, and native species evaluated in this study are particularly evolved to this watershed. In this river network that currently supports a relatively high level of biodiversity, the suite of endemic fish species are filling different niches in the ecosystem, which allows the system to be more resilient to disturbances such as drought.

“If species are pushed over the edge to extinction, then what they bring to the ecosystem will be lost and potentially very difficult to replace.”

This project was funded by the Department of Defense Strategic Environmental Research and Development Program.

Contacts and sources:
By: Emily Caldwell

First Indirect Evidence Of So-Far Undetected Strange Baryons

New supercomputing calculations provide the first evidence that particles predicted by the theory of quark-gluon interactions but never before observed are being produced in heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC), a facility that is dedicated to studying nuclear physics.

Brookhaven theoretical physicist Swagato Mukherjee
Credit: BNL 

These heavy strange baryons, containing at least one strange quark, still cannot be observed directly, but instead make their presence known by lowering the temperature at which other strange baryons "freeze out" from the quark-gluon plasma (QGP) discovered and created at RHIC, a U.S. Department of Energy (DOE) Office of Science user facility located at DOE's Brookhaven National Laboratory.

RHIC is one of just two places in the world where scientists can create and study a primordial soup of unbound quarks and gluons—akin to what existed in the early universe some 14 billion years ago. The research is helping to unravel how these building blocks of matter became bound into hadrons, particles composed of two or three quarks held together by gluons, the carriers of nature's strongest force.

Added Berndt Mueller, Associate Laboratory Director for Nuclear and Particle Physics at Brookhaven, "This finding is particularly remarkable because strange quarks were one of the early signatures of the formation of the primordial quark-gluon plasma. Now we're using this QGP signature as a tool to discover previously unknown baryons that emerge from the QGP and could not be produced otherwise."

"Baryons, which are hadrons made of three quarks, make up almost all the matter we see in the universe today," said Brookhaven theoretical physicist Swagato Mukherjee, a co-author on a paper describing the new results in Physical Review Letters. 

"The theory that tells us how this matter forms—including the protons and neutrons that make up the nuclei of atoms—also predicts the existence of many different baryons, including some that are very heavy and short-lived, containing one or more heavy 'strange' quarks. Now we have indirect evidence from our calculations and comparisons with experimental data at RHIC that these predicted higher mass states of strange baryons do exist," he said. 
Freezing point depression and supercomputing calculations

The evidence comes from an effect on the thermodynamic properties of the matter that nuclear physicists detect coming out of collisions at RHIC. Specifically, the scientists observe certain more-common strange baryons (omega baryons, cascade baryons, lambda baryons) "freezing out" of RHIC's quark-gluon plasma at a lower temperature than would be expected if the predicted extra-heavy strange baryons didn't exist.

"It's similar to the way table salt lowers the freezing point of liquid water," said Mukherjee. "These 'invisible' hadrons are like salt molecules floating around in the hot gas of hadrons, making other particles freeze out at a lower temperature than they would if the 'salt' wasn't there."

To see the evidence, the scientists performed calculations using lattice QCD, a technique that uses points on an imaginary four-dimensional lattice (three spatial dimensions plus time) to represent the positions of quarks and gluons, and complex mathematical equations to calculate interactions among them, as described by the theory of quantum chromodynamics (QCD).

"The calculations tell you where you have bound or unbound quarks, depending on the temperature," Mukherjee said.

The scientists were specifically looking for fluctuations of conserved baryon number and strangeness and exploring how the calculations fit with the observed RHIC measurements at a wide range of energies.

The calculations show that inclusion of the predicted but "experimentally uncharted" strange baryons fits the data better, providing the first evidence that these so-far unobserved particles exist and exert their effect on the freeze-out temperature of the observable particles.

These findings are helping physicists quantitatively plot the points on the phase diagram that maps out the different phases of nuclear matter, including hadrons and quark-gluon plasma, and the transitions between them under various conditions of temperature and density.

"To accurately plot points on the phase diagram, you have to know what the contents are on the bound-state, hadron side of the transition line—even if you haven't seen them," Mukherjee said. "We've found that the higher mass states of strange baryons affect the production of ground states that we can observe. And the line where we see the ordinary matter moves to a lower temperature because of the multitude of higher states that we can't see."

The research was carried out by the Brookhaven Lab's Lattice Gauge Theory group, led by Frithjof Karsch, in collaboration with scientists from Bielefeld University, Germany, and Central China Normal University. The supercomputing calculations were performed using GPU-clusters at DOE's Thomas Jefferson National Accelerator Facility (Jefferson Lab), Bielefeld University, Paderborn University, and Indiana University with funding from the Scientific Discovery through Advanced Computing (SciDAC) program of the DOE Office of Science (Nuclear Physics and Advanced Scientific Computing Research), the Federal Ministry of Education and Research of Germany, the German Research Foundation, the European Commission Directorate-General for Research & Innovation and the GSI BILAER grant. The experimental program at RHIC is funded primarily by the DOE Office of Science.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Contacts and sources: 
Karen McNulty Walsh
DOE/Brookhaven National Laboratory

Seafood Substitutions Can Expose Consumers To Unexpectedly High Mercury

New measurements from fish purchased at retail seafood counters in 10 different states show the extent to which mislabeling can expose consumers to unexpectedly high levels of mercury, a harmful pollutant.

Fishery stock "substitutions"—which falsely present a fish of the same species, but from a different geographic origin—are the most dangerous mislabeling offense, according to new research by University of Hawai‘i at Mānoa scientists.

Chilean sea bass fillet 
Photo courtesy Flickr user Artizone

“Accurate labeling of seafood is essential to allow consumers to choose sustainable fisheries,” said UH Mānoa biologist Peter B. Marko, lead author of the new study published in the scientific journal PLOS One. “But consumers also rely on labels to protect themselves from unhealthy mercury exposure. Seafood mislabeling distorts the true abundance of fish in the sea, defrauds consumers, and can cause unwanted exposure to harmful pollutants such as mercury.”

The study included two kinds of fish: those labeled as Marine Stewardship Council (MSC)-certified Chilean sea bass, and those labeled simply as Chilean sea bass (uncertified). The MSC-certified version is supposed to be sourced from the Southern Ocean waters of South Georgia, near Antarctica, far from man-made sources of pollution. MSC-certified fish is often favored by consumers seeking sustainably harvested seafood, but is also potentially attractive given its consistently low levels of mercury.

In a previous study, the scientists had determined that fully 20 percent of fish purchased as Chilean sea bass were not genetically identifiable as such. Further, of those Chilean sea bass positively identified using DNA techniques, 15 percent had genetic markers that indicated that they were not sourced from the South Georgia fishery.

In the new study, the scientists used the same fish samples to collect detailed mercury measurements. When they compared the mercury levels of verified, MSC-certified sea bass with those of verified, uncertified sea bass, they found no significant difference. That is not what would be expected based on known geographic patterns of mercury accumulation in Chilean sea bass.

Fish market in Oahu's Chinatown
Photo courtesy Flickr user Michelle Lee.

“What’s happening is that the species are being substituted,” Marko explained. “The ones that are substituted for MSC-certified Chilean sea bass tend to have very low mercury, whereas those substituted for uncertified fish tend to have very high mercury. These substitutions skew the pool of fish used for MSC comparison purposes, making certified and uncertified fish appear to be much more different than they actually are.”

But there’s another confounding factor. Even within the verified, MSC-certified Chilean sea bass samples, certain fish had very high mercury levels—up to 2 or 3 times higher than expected, and sometimes even exceeding some countries’ import limits.

Marko and his team again turned to genetics to learn more about these fishes’ true nature. “It turns out that the fish with unexpectedly high mercury originated from some fishery other than the certified fishery in South Georgia,” said Marko. “Most of these fish had mitochondrial DNA that indicated they were from Chile. Thus, fishery stock substitutions are also contributing to the pattern by making MSC-certified fish appear to have more mercury than they really should have.”

The bottom line: Most consumers already know that mercury levels vary between species, and many public outreach campaigns have helped educate the public about which fish species to minimize or avoid. Less appreciated is the fact that mercury varies considerably within a species.

“Because mercury accumulation varies within a species’ geographic range, according to a variety of environmental factors, the location where the fish is harvested matters a great deal,” Marko said.

“Although on average MSC-certified fish is a healthier option than uncertified fish, with respect to mercury contamination, our study shows that fishery-stock substitutions can result in a larger proportional increase in mercury,” Marko said. “We recommend that consumer advocates take a closer look at the variation in mercury contamination depending on the geographic source of the fishery stock when they consider future seafood consumption guidelines.”

Contacts and sources:
Peter Marko, Associate Professor, Biology
Talia Ogliore, PIO
University of Hawaiʻi at Mānoa

Citation:  Marko PB, Nance HA, van den Hurk P (2014) Seafood Substitutions Obscure Patterns of Mercury Contamination in Patagonian Toothfish (Dissostichus eleginoides) or “Chilean Sea Bass”. PLoS ONE 9(8): e104140. doi: 10.1371/journal.pone.0104140

Has The Puzzle Of Rapid Climate Change In The Last Ice Age Been Solved?

How rapid temperature changes might have occurred during times when the Northern Hemisphere ice sheets were at intermediate sizes  
The Northern Hemisphere in a cold (stadial) phase: During the cold stadial periods of the last ice age, massive ice sheets covered northern parts of North America and Europe. Strong northwest winds drove the Arctic sea ice southward, even as far as the French coast. Since the extended ice cover over the North Atlantic prevented the exchange of heat between the atmosphere and the ocean, the strong driving forces for the ocean currents that prevail today were lacking. Ocean circulation, which is a powerful “conveyor belt” in the world’s oceans, was thus much weaker than at present, and consequently transported less heat to northern regions.

Map: Alfred-Wegener-Institut 
During the last ice age a large part of North America was covered with a massive ice sheet up to 3 km thick. The water stored in this ice sheet is part of the reason why the sea level was then about 120 meters lower than today. 

Has the puzzle of rapid climate change in the last ice age been solved? New report published in Nature shows that small variations in the climate system can result in dramatic temperature changes

Over the past one hundred thousand years cold temperatures largely prevailed over the planet in what is known as the last ice age. However, the cold period was repeatedly interrupted by much warmer climate conditions. Scientists have long attempted to find out why these drastic temperature jumps of up to ten degrees took place in the far northern latitudes within just a few decades.

Now, for the first time, a group of researchers at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), have been able to reconstruct these climate changes during the last ice age using a series of model simulations. The surprising finding is that minor variations in the ice sheet size can be sufficient to trigger abrupt climate changes. 

The Northern Hemisphere in a warm phase (a brief, warm interstadial phase during the glacial climate): During the extended cold phases the ice sheets continued to thicken. When higher ice sheets prevailed over North America, as was typical of periods with intermediate sea levels, the prevailing northwest winds split into two branches.
Map: Alfred-Wegener-Institut

In the map, the major wind field ran to the north of the so-called Laurentide Ice Sheet and ensured that the sea ice boundary off the European coast shifted to the north. Ice-free seas permitted heat exchange to take place between the atmosphere and the ocean. At the same time, the southern branch of the northwesterly winds drove warmer water into the ice-free areas of the northeast Atlantic and thus amplified the transportation of heat to the north.

As the map shows, the modified conditions stimulated enhanced circulation in the ocean. Consequently, a thicker Laurentide Ice Sheet over North America resulted in increased ocean circulation and therefore greater transportation of heat to the north. The climate in the Northern Hemisphere became dramatically warmer within a few decades until, due to the retreat of the glaciers over North America and the renewed change in wind conditions, it began to cool off again.

The new study was published online in the scientific journal Nature last week and will be appearing in the 21 August print issue.

Young Chinese scientist Xu Zhang, lead author of the study who undertook his PhD at the Alfred Wegener Institute, explains, “The rapid climate changes known in the scientific world as Dansgaard-Oeschger events were limited to a period of time from 110,000 to 23,000 years before present. The abrupt climate changes did not take place at the extreme low sea levels, corresponding to the time of maximum glaciation 20,000 years ago, nor at high sea levels such as those prevailing today; they occurred during periods of intermediate ice volume and intermediate sea levels.” 

The results presented by the AWI researchers, which compare simulated model data with data retrieved from ice cores and marine sediments, can explain the history of climate changes during glacial periods.

During the cold stadial periods of the last ice age, massive ice sheets covered northern parts of North America and Europe. Strong westerly winds drove the Arctic sea ice southward, even as far as the French coast. Since the extended ice cover over the North Atlantic prevented the exchange of heat between the atmosphere and the ocean, the strong driving forces for the ocean currents that prevail today were lacking. Ocean circulation, which is a powerful “conveyor belt” in the world’s oceans, was thus much weaker than at present, and consequently transported less heat to northern regions.

During the extended cold phases the ice sheets continued to thicken. When higher ice sheets prevailed over North America, typical in periods of intermediate sea levels, the prevailing westerly winds split into two branches. The major wind field ran to the north of the so-called Laurentide Ice Sheet and ensured that the sea ice boundary off the European coast shifted to the north. 

Ice-free seas permitted heat exchange to take place between the atmosphere and the ocean. At the same time, the southern branch of the northwesterly winds drove warmer water into the ice-free areas of the northeast Atlantic and thus amplified the transportation of heat to the north. 

The modified conditions stimulated enhanced circulation in the ocean. Consequently, a thicker Laurentide Ice Sheet over North America resulted in increased ocean circulation and therefore greater transportation of heat to the north. The climate in the Northern Hemisphere became dramatically warmer within a few decades until, due to the retreat of the glaciers over North America and the renewed change in wind conditions, it began to cool off again.

“Using the simulations performed with our climate model, we were able to demonstrate that the climate system can respond to small changes with abrupt climate swings,” explains Professor Gerrit Lohmann, leader of the Paleoclimate Dynamics group at the Alfred Wegener Institute, Germany. 

Schematic depiction of current climate conditions in the Northern Hemisphere
At present, the extent of the Arctic sea ice is far less than during the last glacial period. The Laurentide Ice Sheet, the major driving force for ocean circulation during the glacials, has also disappeared. 
Map: Alfred-Wegener-Institut

The model simulations shown above demonstrate that today’s climate is much more stable than the climate that existed during phases of intermediate ice thickness and intermediate sea levels. It was during those phases of the last ice age that the most rapid temperature swings in the Northern Hemisphere took place. 

Lohmann illustrates the new study’s significance with regard to contemporary climate change: “At medium sea levels, powerful forces, such as the dramatic acceleration of polar ice cap melting, are not necessary to result in abrupt climate shifts and associated drastic temperature changes.”

At present, the extent of Arctic sea ice is far less than during the last glacial period. The Laurentide Ice Sheet, the major driving force for ocean circulation during the glacials, has also disappeared. Climate changes following the pattern of the last ice age are therefore not to be anticipated under today’s conditions.

“There are apparently some situations in which the climate system is more resistant to change while in others the system tends toward strong fluctuations,” summarises Gerrit Lohmann. “In terms of the Earth’s history, we are currently in one of the climate system’s more stable phases. The preconditions which gave rise to rapid temperature changes during the last ice age do not exist today. But this does not mean that sudden climate changes can be excluded in the future.”

Contacts and sources:
Sina Loeschke
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research

Citation:  Xu Zhang, Gerrit Lohmann, Gregor Knorr, Conor Purcell:Abrupt glacial climate shifts controlled by ice sheet changes. Nature, DOI: 10.1038/nature13592

Why Global Warming Is Taking A Break

The average temperature on Earth has barely risen over the past 16 years. ETH researchers have now found out why. And they believe that global warming is likely to continue again soon.

The number of sunspots (white area here) varies in multi-year cycles. As a result, solar irradiance, which influences the Earth's climate, also fluctuates. The photo shows a UV image of the sun.

 Image: Trace Project / NASA 

Global warming is currently taking a break: whereas global temperatures rose drastically into the late 1990s, the global average temperature has risen only slightly since 1998. This is surprising, considering that climate models had predicted considerable warming due to rising greenhouse gas emissions. 

Climate sceptics used this apparent contradiction to question climate change per se – or at least the harm potential caused by greenhouse gases – as well as the validity of the climate models. Meanwhile, the majority of climate researchers continued to emphasise that the short-term ‘warming hiatus’ could largely be explained on the basis of current scientific understanding and did not contradict longer term warming.

Researchers have been looking into the possible causes of the warming hiatus over the past few years. For the first time, Reto Knutti, Professor of Climate Physics at ETH Zurich, has systematically examined all current hypotheses together with a colleague. In a study published in the latest issue of the journal Nature Geoscience, the researchers conclude that two important factors are equally responsible for the hiatus.
El Niño warmed the Earth

One of the important reasons is natural climate fluctuations, of which the weather phenomena El Niño and La Niña in the Pacific are the most important and well known. "1998 was a strong El Niño year, which is why it was so warm that year," says Knutti. In contrast, the counter-phenomenon La Niña has made the past few years cooler than they would otherwise have been.

Although climate models generally take such fluctuations into account, it is impossible to predict the year in which these phenomena will emerge, says the climate physicist. To clarify, he uses the stock market as an analogy: "When pension funds invest the pension capital in shares, they expect to generate a profit in the long term." 

At the same time, they are aware that their investments are exposed to price fluctuations and that performance can also be negative in the short term. However, what finance specialists and climate scientists and their models are not able to predict is when exactly a short-term economic downturn or a La Niña year will occur.

Longer solar cycles

According to the study, the second important reason for the warming hiatus is that solar irradiance has been weaker than predicted in the past few years. This is because the identified fluctuations in the intensity of solar irradiance are unusual at present: whereas the so-called sunspot cycles each lasted eleven years in the past, for unknown reasons the last period of weak solar irradiance lasted 13 years. 

Furthermore, several volcanic eruptions, such as that of Eyjafjallajökull in Iceland in 2010, have increased the concentration of suspended particles (aerosols) in the atmosphere, which has further reduced the solar irradiance reaching the Earth's surface.

The scientists drew their conclusions from corrective calculations with climate models. In all climate simulations, they looked for periods in which the El Niño/La Niña patterns corresponded to the measured data from the years 1997 to 2012. By combining the more than 20 matching periods they found, they arrived at a realistic estimate of the influence of El Niño and La Niña. They also fed the actual measured values for solar activity and aerosol concentration in the Earth's atmosphere back into the model calculations. Model calculations corrected in this way match the measured temperature data much more closely.
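The paper itself does not publish its selection code, but the analog-matching idea described above can be sketched in a few lines: slide a window over a long simulated ENSO index and keep the segments whose phasing correlates with the observed 1997–2012 record. The function names, the use of Pearson correlation, and the threshold are all illustrative assumptions, not the study's actual procedure.

```python
def enso_correlation(model_window, observed_enso):
    """Pearson correlation between a simulated ENSO window and the observed index."""
    n = len(observed_enso)
    mx = sum(model_window) / n
    my = sum(observed_enso) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(model_window, observed_enso))
    sx = sum((a - mx) ** 2 for a in model_window) ** 0.5
    sy = sum((b - my) ** 2 for b in observed_enso) ** 0.5
    if sx == 0 or sy == 0:        # constant window: correlation undefined
        return 0.0
    return cov / (sx * sy)

def find_analog_periods(model_enso, observed_enso, threshold=0.9):
    """Return start indices of model periods whose El Niño/La Niña phasing
    matches the observed record better than the (assumed) threshold."""
    n = len(observed_enso)
    analogs = []
    for start in range(len(model_enso) - n + 1):
        window = model_enso[start:start + n]
        if enso_correlation(window, observed_enso) > threshold:
            analogs.append(start)
    return analogs

# Toy example: the observed alternation [1, -1, 1, -1] appears once in the model run.
matches = find_analog_periods([0, 0, 1, -1, 1, -1, 0, 0], [1, -1, 1, -1])
```

Averaging the model's temperature response over all matched periods would then give the ensemble estimate of ENSO's contribution to the hiatus.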
Incomplete measured data

The discrepancy between the climate models and measured data over the past 16 years cannot solely be attributed to the fact that these models predict too much warming, says Knutti. The interpretation of the official measured data should also be critically scrutinised.

According to Knutti, the measured data are likely to be too low, since the global average temperature is estimated only from values obtained at weather stations on the ground, and these do not exist everywhere on Earth. From satellite data, for example, scientists know that the Arctic region in particular has become warmer over the past years, but because there are no weather stations in that area, few ground measurements exist there. As a result, the reported average temperature is estimated too low.

Last year, British and Canadian researchers proposed an alternative temperature curve with higher values, in which they incorporated estimated temperatures from satellite data for regions with no weather stations. If the model data is corrected downwards, as suggested by the ETH researchers, and the measurement data is corrected upwards, as suggested by the British and Canadian researchers, then the model and actual observations are very similar.
Warming to recommence

Despite the warming hiatus, Knutti is convinced there is no reason to doubt either the existing calculations for the climate activity of greenhouse gases or the latest climate models. "Short-term climate fluctuations can easily be explained. They do not alter the fact that the climate will become considerably warmer in the long term as a result of greenhouse gas emissions," says Knutti. He believes that global warming will recommence as soon as solar activity, aerosol concentrations in the atmosphere and weather phenomena such as El Niño naturally start returning to the values of previous decades.

Contacts and sources:
Fabio Bergamin
ETH Zurich

Citation: Huber M, Knutti R: Natural variability, radiative forcing and climate response in the recent hiatus reconciled. Nature Geoscience, online publication 17 August 2014, doi: 10.1038/ngeo2228

Love Makes Sex Better For Most Women Says Study

Love and commitment can make sex physically more satisfying for many women, according to a Penn State Abington sociologist.

In a series of interviews, heterosexual women between the ages of 20 and 68 and from a range of backgrounds said that they believed love was necessary for maximum satisfaction in both sexual relationships and marriage. The benefits of being in love with a sexual partner are more than just emotional. Most of the women in the study said that love made sex physically more pleasurable.

Credit: Wikimedia Commons

"Women said that they connected love with sex and that love actually enhanced the physical experience of sex," said Beth Montemurro, associate professor of sociology.

Women who loved their sexual partners also said they felt less inhibited and more willing to explore their sexuality.

"When women feel love, they may feel greater sexual agency because they not only trust their partners but because they feel that it is OK to have sex when love is present," Montemurro said.

While 50 of the 95 women interviewed said that love was not necessary for sex, only 18 unequivocally believed that love was unnecessary in a sexual relationship.

Older women who were interviewed indicated that this connection between love, sex and marriage remained important throughout their lifetimes, not just in certain eras of their lives.

The connection between love and sex may show how women are socialized to see sex as an expression of love, Montemurro said. Despite decades of the women's rights movement and an increased awareness of women's sexual desire, the media continue to send a strong cultural message for women to connect sex and love and to look down on girls and women who have sex outside of committed relationships.

"On one hand, the media may seem to show that casual sex is OK, but at the same time, movies and television, especially, tend to portray women who are having sex outside of relationships negatively," said Montemurro.

In a similar way, the media often portray marriage as largely sexless, even though the participants in the study said that sex was an important part of their marriage, according to Montemurro, who presented her findings today (Aug. 19) at the annual meeting of the American Sociological Association.

"For the women I interviewed, they seemed to say you need love in sex and you need sex in marriage," said Montemurro.

From September 2008 to July 2011, Montemurro conducted in-depth interviews with 95 women who lived in Pennsylvania, New Jersey and New York. The interviews generally lasted 90 minutes.

Although some of the women who were interviewed said they had sexual relationships with women, most of the women were heterosexual and all were involved in heterosexual relationships.

Funds from the Career Development Professorship and the Rubin Fund supported this work.

Contacts and sources:
Matt Swayne
Penn State