Unseen Is Free

Monday, August 31, 2015

99% of All Seabird Species Will Have Plastic in Their Gut by 2050, 60% Have It Now

Researchers from CSIRO and Imperial College London have assessed how widespread the threat of plastic is for the world's seabirds, including albatrosses, shearwaters and penguins, and found the majority of seabird species have plastic in their gut.

The study, led by Dr Chris Wilcox with co-authors Dr Denise Hardesty and Dr Erik van Sebille and published today in the journal PNAS, found that nearly 60 per cent of all seabird species have plastic in their gut.

A red-footed booby on Christmas Island, in the Indian Ocean.
Credit: © CSIRO, Britta Denise Hardesty

Based on analysis of published studies since the early 1960s, the researchers found that plastic is increasingly common in seabirds' stomachs.

In 1960, plastic was found in the stomach of less than 5 per cent of individual seabirds, rising to 80 per cent by 2010.
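As a rough illustration (not the paper's actual risk model, which incorporates debris distributions and species ranges), a simple logistic curve drawn through those two data points already extrapolates to near-total prevalence by mid-century:

```python
import math

# Illustrative toy model only: fit a logistic curve
#   p(t) = 1 / (1 + exp(-k * (t - t0)))
# through the two figures quoted in the article:
#   ~5% of individual seabirds in 1960, ~80% in 2010.
def logit(p):
    return math.log(p / (1 - p))

t1, p1 = 1960, 0.05
t2, p2 = 2010, 0.80
k = (logit(p2) - logit(p1)) / (t2 - t1)  # growth rate per year
t0 = t1 - logit(p1) / k                  # year of 50% prevalence

def prevalence(year):
    return 1 / (1 + math.exp(-k * (year - t0)))

print(f"fitted midpoint year: {t0:.0f}")
print(f"extrapolated prevalence in 2050: {prevalence(2050):.1%}")
```

On this toy fit, prevalence in 2050 lands very close to the 99 per cent figure the researchers report from their far more detailed analysis.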

The researchers predict that plastic ingestion will affect 99 per cent of the world's seabird species by 2050, based on current trends.

The scientists estimate that 90 per cent of all seabirds alive today have eaten plastic of some kind.

This includes bags, bottle caps, and plastic fibres from synthetic clothes, which have washed out into the ocean from urban rivers, sewers and waste deposits.

Birds mistake the brightly coloured items for food, or swallow them by accident, and this causes gut impaction, weight loss and sometimes even death.

"For the first time, we have a global prediction of how wide-reaching plastic impacts may be on marine species - and the results are striking," senior research scientist at CSIRO Oceans and Atmosphere Dr Wilcox said.

Plastic fragments washing in the surf on Christmas Island, in the northeastern Indian Ocean.
Credit: © CSIRO, Britta Denise Hardesty

"We predict, using historical observations, that 90 per cent of individual seabirds have eaten plastic. This is a huge amount and really points to the ubiquity of plastic pollution."

Dr Denise Hardesty from CSIRO Oceans and Atmosphere said seabirds were excellent indicators of ecosystem health.

"Finding such widespread estimates of plastic in seabirds is borne out by some of the fieldwork we've carried out where I've found nearly 200 pieces of plastic in a single seabird," Dr Hardesty said.

The researchers found that plastics will have the greatest impact on wildlife in the Southern Ocean, in a band around the southern edges of Australia, South Africa and South America, where large numbers of seabirds gather.

Dr van Sebille, from the Grantham Institute at Imperial College London, said the plastics had the most devastating impact in the areas where there was the greatest diversity of species.

"We are very concerned about species such as penguins and giant albatrosses, which live in these areas," Erik van Sebille said.

"While the infamous garbage patches in the middle of the oceans have strikingly high densities of plastic, very few animals live here."

Dr Hardesty said there was still the opportunity to change the impact plastic had on seabirds.

"Improving waste management can reduce the threat plastic is posing to marine wildlife," she said.

"Even simple measures can make a difference. Efforts to reduce plastic losses into the environment in Europe resulted in measurable changes in plastic in seabird stomachs within less than a decade, which suggests that improvements in basic waste management can reduce plastic in the environment in a really short time."

Chief Scientist at the US-based Ocean Conservancy Dr George H. Leonard said the study was highly important and demonstrated how pervasive plastics were in oceans.

"Hundreds of thousands of volunteers around the world come face-to-face with this problem during annual Coastal Cleanup events," Dr Leonard said.

"Scientists, the private sector and global citizens working together against the growing onslaught of plastic pollution can reduce plastic inputs to help protect marine biodiversity."



Contacts and sources:
Simon Torok
CSIRO 

Mummified Life Found Deep Beneath the Ocean Floor, Like Life Found at "Lost City"

Ancient rocks harbored microbial life deep below the seafloor, reports a team of scientists from the Woods Hole Oceanographic Institution (WHOI), Virginia Tech, and the University of Bremen. This new evidence was contained in drilled rock samples of Earth's mantle - thrust by tectonic forces to the seafloor during the Early Cretaceous period. The new study was published today in the Proceedings of the National Academy of Sciences.

Scientists found mummified microbial life in rocks from a seafloor hydrothermal system that was active more than 100 million years ago, during the Early Cretaceous, when the supercontinent Pangaea was breaking apart and the Atlantic Ocean was just about to open. Buried under almost 700 meters of sediment, the samples were recovered by the seafloor drilling vessel JOIDES Resolution near the coast of Portugal. Hydrothermal fluids rich in hydrogen and methane mixed with seawater about 65 meters below the seafloor. This process supported bacteria and archaea in what scientists call 'the deep biosphere' in rocks from Earth's mantle. Conditions for microbial life in this seemingly inhospitable environment were nearly ideal, the study showed.

Credit: Illustration by Jack Cook, Woods Hole Oceanographic Institution. Inset paleogeographic reconstruction by Ron Blakey, Colorado Plateau Geosystems

The discovery confirms a long-standing hypothesis that interactions between mantle rocks and seawater can create potential for life even in hard rocks deep below the ocean floor. The fossilized microbes are likely the same as those found at the active Lost City hydrothermal field, providing potentially important clues about the conditions that support 'intraterrestrial' life in rocks below the seafloor.

"We were initially looking at how seawater interacts with mantle rocks, and how that process generates hydrogen," said Frieder Klein, an associate scientist at WHOI and lead author of the study. "But during our analysis of the rock samples, we discovered organic-rich inclusions that contained lipids, proteins and amino acids - the building blocks of life - mummified in the surrounding minerals."

These remarkable rocks, recovered by the Ocean Drilling Program (ODP) on board the drilling vessel JOIDES Resolution, are from the Earth's upper mantle that underwent intense alteration by heated seawater. The rocks show a systematic change in color from rusty brown (top) to green and black (bottom), reflecting the chemical gradients across the fluid mixing zone. These chemical gradients played a key role in supporting microbes with chemical energy and the substrates they needed to thrive. Fossilized microbes were found in white veins consisting of the minerals calcite and brucite.
Photo courtesy of Ocean Drilling Program

This study, which was a collaborative effort between Klein, WHOI scientists Susan Humphris, Weifu Guo and William Orsi, Esther Schwarzenbach from Virginia Tech and Florence Schubotz from the University of Bremen, focused on mantle rocks that were originally exposed to seawater approximately 125 million years ago when a large rift split the massive supercontinent known as Pangaea. The rift, which eventually evolved into the Atlantic Ocean, pulled mantle rocks from Earth's interior to the seafloor, where they underwent chemical reactions with seawater, transforming the seawater into a hydrothermal fluid.

"The hydrothermal fluid likely had a high pH and was depleted in carbon and electron acceptors," Klein said. "These extreme chemical conditions can be challenging for microbes. However, the hydrothermal fluid contained hydrogen and methane and seawater contains dissolved carbon and electron acceptors. So when you mix the two in just the right proportions, you can have the ingredients to support life."

According to Dr. Everett Shock, a professor at Arizona State University's School of Earth and Space Exploration, the study underscores the influence major geologic processes can have on the prospects for life.

"This research makes the connection all the way from convection of the mantle to the break-up of the continents to ultimately providing geochemical options for microbiology," Shock said. "It's just such a nice demonstration of real-world geobiology with a lot of 'geo' in it."

Drilling Deep

The rock samples analyzed in the study were originally drilled from the Iberian continental margin off the coast of Spain and Portugal in 1993. During the expedition aboard the research vessel JOIDES Resolution, operated by the Ocean Drilling Program (ODP), researchers drilled through 690 meters of mud and sediment deposited on the ocean floor to reach the ancient seafloor created during the break-up of the supercontinent Pangaea and the opening of the Atlantic Ocean. The drill samples had been stored in core repositories at room temperature for more than two decades before Klein and his colleagues began their investigation and discovered the fossilized microbial remains.

"Colonies of bacteria and archaea were feeding off the seawater-hydrothermal fluid mix and became engulfed in the minerals growing in the fractured rock," Klein said. "This kept them completely isolated from the environment. The minerals proved to be the ultimate storage containers for these organisms, preserving their lipids and proteins for over 100 million years."

"It's exciting that the research team was able to go back and examine samples that had been collected years ago for other reasons and find new discoveries," Shock said. "There will always be active new drilling, but this study raises the possibility of there being a lot more out there in the way of existing samples that could be analyzed."

In the lab, samples from the rock interior had to be extracted since the outside of the drill core was stored under non-sterile conditions. So Klein and his colleagues took a number of careful steps to ensure the integrity of the sample interior wasn't compromised, and then analyzed the rocks with high-resolution microscopes, a confocal Raman spectrometer and a range of isotope techniques.

A Link to the Lost City

While Raman spectroscopy enabled Klein to verify the presence of amino acids, proteins and lipids in the samples, it did not provide enough detailed information to correlate them with other hydrothermal systems. The lipids were of particular interest to Klein since they tend to be better preserved over long timescales, and have been studied in a wide range of seafloor environments. This prompted Klein to ask Schubotz, an expert in lipid biomarker analysis at the University of Bremen, if she could tease out further information about the lipids from these ancient rocks.

Schubotz ran the lipids through an advanced liquid chromatography-based mass spectrometer system to separate out and identify their biochemical components. The analysis led to a remarkable discovery: the lipids from the Iberian margin match up with those from the Lost City hydrothermal field, which was discovered in 2000 in the Mid-Atlantic Ridge during an expedition on board the WHOI-operated research vessel Atlantis. This is significant because researchers believe the Lost City is a present-day analog to ancient hydrothermal systems on early Earth where life may have emerged.

The active Lost City hydrothermal field, located at the Mid-Atlantic Ridge, is hosted by rocks very similar to those from the Iberia continental margin analyzed in this study. Lost City will be drilled during a forthcoming expedition of the International Ocean Discovery Program (IODP). Klein and his colleagues hope to gain more detailed insight into subseafloor life by comparing rocks from the Iberia continental margin with those from other ODP and IODP drill cores.
Photo by National Science Foundation, Univ. of Washington, Woods Hole Oceanographic Institution

"I was stoked when I saw Dr. Schubotz's email detailing the analytical results," Klein said. "It was fascinating to find these particular biological substances - which had previously been found only at the Lost City hydrothermal field and in cold seeps - in rocks below the seafloor where life is extremely challenging. At that point we knew we were onto something really cool!"

A Deeper Understanding

According to Klein, confirmation that life is possible in mantle rocks deep below the seafloor may have important implications for understanding subseafloor life across a wide range of geologic environments.

"All the ingredients necessary to drive these ecosystems were made entirely from scratch," he said. "Similar systems have likely existed throughout most of Earth's history to the present day and possibly exist(ed) on other water-bearing rocky planetary bodies, such as Jupiter's moon Europa."

The study reinforces the idea that life springs up anywhere there is water, even in seemingly hostile geological environments - a tantalizing prospect as scientists find more and more water elsewhere in the solar system. But Klein contends that, while scientists have long understood many of the forces driving microbial life above the seafloor, there is still a great deal of uncertainty when it comes to understanding biogeochemical processes occurring in the oceanic basement.

"In the future, we'll be trying to learn more about these particular microorganisms and what the environmental conditions were in the mixing zone in that location. We also plan to go to different places where we think similar processes may have taken place, such as along the Newfoundland margin, and analyze samples to see if we find similar signatures. Broadening this research could provide additional insights about Earth's history and the search for life in the solar system."


Contacts and sources:
Woods Hole Oceanographic Institution (WHOI)

Saturday, August 29, 2015

NASA Team Selects Potential Kuiper Belt Object for New Horizons Flyby Target, A Billion Miles from Pluto

NASA has selected the potential next destination for the New Horizons mission to visit after its historic July 14 flyby of the Pluto system. The destination is a small Kuiper Belt object (KBO) known as 2014 MU69 that orbits nearly a billion miles beyond Pluto.

Artist's impression of NASA's New Horizons spacecraft encountering a Pluto-like object in the distant Kuiper Belt.
Credits: NASA/JHUAPL/SwRI/Alex Parker

This remote KBO was one of two identified as potential destinations and the one recommended to NASA by the New Horizons team. Although NASA has selected 2014 MU69 as the target, as part of its normal review process the agency will conduct a detailed assessment before officially approving the mission extension to conduct additional science.

“Even as the New Horizons spacecraft speeds away from Pluto out into the Kuiper Belt, and the data from the exciting encounter with this new world is being streamed back to Earth, we are looking outward to the next destination for this intrepid explorer,” said John Grunsfeld, astronaut and chief of the NASA Science Mission Directorate at the agency’s headquarters in Washington. “While discussions about whether to approve this extended mission will take place in the larger context of the planetary science portfolio, we expect it to be much less expensive than the prime mission while still providing new and exciting science.”

Like all NASA missions that have finished their main objective but seek to do more exploration, the New Horizons team must write a proposal to the agency to fund a KBO mission. That proposal – due in 2016 – will be evaluated by an independent team of experts before NASA can decide about the go-ahead.

Early target selection was important; the team needs to direct New Horizons toward the object this year in order to perform any extended mission with healthy fuel margins. New Horizons will perform a series of four maneuvers in late October and early November to set its course toward 2014 MU69 – nicknamed “PT1” (for “Potential Target 1”) – which it expects to reach on January 1, 2019. Any delays from those dates would cost precious fuel and add mission risk.

“2014 MU69 is a great choice because it is just the kind of ancient KBO, formed where it orbits now, that the Decadal Survey desired us to fly by,” said New Horizons Principal Investigator Alan Stern, of the Southwest Research Institute (SwRI) in Boulder, Colorado. “Moreover, this KBO costs less fuel to reach [than other candidate targets], leaving more fuel for the flyby, for ancillary science, and greater fuel reserves to protect against the unforeseen.”

New Horizons was originally designed to fly beyond the Pluto system and explore additional Kuiper Belt objects. The spacecraft carries extra hydrazine fuel for a KBO flyby; its communications system is designed to work from far beyond Pluto; its power system is designed to operate for many more years; and its scientific instruments were designed to operate in light levels much lower than it will experience during the 2014 MU69 flyby.

The 2003 National Academy of Sciences’ Planetary Decadal Survey (“New Frontiers in the Solar System”) strongly recommended that the first mission to the Kuiper Belt include flybys of Pluto and small KBOs, in order to sample the diversity of objects in that previously unexplored region of the solar system. The identification of PT1, which is in a completely different class of KBO than Pluto, potentially allows New Horizons to satisfy those goals.

But finding a suitable KBO flyby target was no easy task. Starting a search in 2011 using some of the largest ground-based telescopes, the New Horizons team found several dozen KBOs, but none were reachable within the fuel supply available aboard the spacecraft.

The powerful Hubble Space Telescope came to the rescue in summer 2014, discovering five objects, since narrowed to two, within New Horizons’ flight path. Scientists estimate that PT1 is just under 30 miles (about 45 kilometers) across; that’s more than 10 times larger and 1,000 times more massive than typical comets, like the one the Rosetta mission is now orbiting, but only about 0.5 to 1 percent of the size (and about 1/10,000th the mass) of Pluto. As such, PT1 is thought to be like the building blocks of Kuiper Belt planets such as Pluto.
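The comet comparison is consistent with simple cube-law scaling, which assumes the bodies have roughly equal densities so that mass grows with the cube of diameter. A quick back-of-envelope check (the reference sizes for comet 67P and Pluto below are assumed values, not taken from the article):

```python
# Cube-law sanity check of the size/mass comparisons for PT1 (2014 MU69).
# Assumed reference diameters: comet 67P ~4 km, Pluto ~2,370 km.
# PT1 ~45 km is the article's figure.
d_pt1, d_comet, d_pluto = 45.0, 4.0, 2370.0  # km

size_vs_comet = d_pt1 / d_comet          # linear size ratio
mass_vs_comet = size_vs_comet ** 3       # mass ratio, equal-density assumption

print(f"PT1 is ~{size_vs_comet:.0f}x the size of a typical comet")
print(f"and ~{mass_vs_comet:,.0f}x its mass (cube-law estimate)")
print(f"PT1 is ~{d_pt1 / d_pluto:.1%} of Pluto's diameter")
```

The equal-density estimate reproduces the "more than 10 times larger and 1,000 times more massive" comparison with typical comets quoted above.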

Path of NASA's New Horizons spacecraft toward its next potential target, the Kuiper Belt object 2014 MU69, nicknamed "PT1" (for "Potential Target 1") by the New Horizons team. NASA must approve any New Horizons extended mission to explore a KBO.

Credits: NASA/JHUAPL/SwRI/Alex Parker

Unlike asteroids, KBOs have been heated only slightly by the Sun, and are thought to represent a well preserved, deep-freeze sample of what the outer solar system was like following its birth 4.6 billion years ago.

“There’s so much that we can learn from close-up spacecraft observations that we’ll never learn from Earth, as the Pluto flyby demonstrated so spectacularly,” said New Horizons science team member John Spencer, also of SwRI. “The detailed images and other data that New Horizons could obtain from a KBO flyby will revolutionize our understanding of the Kuiper Belt and KBOs.”

The New Horizons spacecraft – currently 3 billion miles [4.9 billion kilometers] from Earth – is just starting to transmit the bulk of the images and other data, stored on its digital recorders, from its historic July encounter with the Pluto system. The spacecraft is healthy and operating normally.

New Horizons is part of NASA’s New Frontiers Program, managed by the agency’s Marshall Space Flight Center in Huntsville, Ala. The Johns Hopkins University Applied Physics Laboratory in Laurel, Md., designed, built, and operates the New Horizons spacecraft and manages the mission for NASA’s Science Mission Directorate. SwRI leads the science mission, payload operations, and encounter science planning. 


Contacts and sources:
Tricia Talbert
NASA

Artificial Leaf Harnesses Sunlight to Produce Fuel Efficiently

Generating and storing renewable energy, such as solar or wind power, is a key barrier to a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP) was established at Caltech and its partnering institutions in 2010, the U.S. Department of Energy (DOE) Energy Innovation Hub had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand.

Illustration of an efficient, robust and integrated solar-driven prototype featuring protected photoelectrochemical assembly coupled with oxygen and hydrogen evolution reaction catalysts. 
Credit: Image provided courtesy of Joint Center for Artificial Photosynthesis; artwork by Darius Siwek. 

Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuels.

"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.

The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy and Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.

"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."

Solar Fuels Prototype in Operation
A fully integrated photoelectrochemical device performing unassisted solar water splitting for the production of hydrogen fuel. 
Credit: Erik Verlage and Chengxiang Xiang/Caltech

The new system consists of three main components: two electrodes—one photoanode and one photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules, generating protons and electrons as well as oxygen gas. The photocathode recombines the protons and electrons to form hydrogen gas. A key part of the JCAP design is the plastic membrane, which keeps the oxygen and hydrogen gases separate. If the two gases are allowed to mix and are accidentally ignited, an explosion can occur; the membrane lets the hydrogen fuel be separately collected under pressure and safely pushed into a pipeline.

Semiconductors such as silicon or gallium arsenide absorb light efficiently and are therefore used in solar panels. However, these materials also oxidize (or rust) on the surface when exposed to water, so cannot be used to directly generate fuel. A major advance that allowed the integrated system to be developed was previous work in Lewis's laboratory, which showed that adding a nanometers-thick layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—onto the electrodes could prevent them from corroding while still allowing light and electrons to pass through. The new complete solar fuel generation system developed by Lewis and colleagues uses such a 62.5-nanometer-thick TiO2 layer to effectively prevent corrosion and improve the stability of a gallium arsenide–based photoelectrode.

Another key advance is the use of active, inexpensive catalysts for fuel production. The photoanode requires a catalyst to drive the essential water-splitting reaction. Rare and expensive metals such as platinum can serve as effective catalysts, but in its work the team discovered that it could create a much cheaper, active catalyst by adding a 2-nanometer-thick layer of nickel to the surface of the TiO2. This catalyst is among the most active known catalysts for splitting water molecules into oxygen, protons, and electrons and is a key to the high efficiency displayed by the device.

A highly efficient photoelectrochemical (PEC) device uses the power of the sun to split water into hydrogen and oxygen. The stand-alone prototype includes two chambers separated by a semi-permeable membrane that allows collection of both gas products.
Credit: Lance Hayashida/Caltech 


The photoanode was grown onto a photocathode, which also contains a highly active, inexpensive, nickel-molybdenum catalyst, to create a fully integrated single material that serves as a complete solar-driven water-splitting system.

A critical component that contributes to the efficiency and safety of the new system is the special plastic membrane that separates the gases and prevents the possibility of an explosion, while still allowing the ions to flow seamlessly to complete the electrical circuit in the cell. All of the components are stable under the same conditions and work together to produce a high-performance, fully integrated system. The demonstration system is approximately one square centimeter in area, converts 10 percent of the energy in sunlight into stored energy in the chemical fuel, and can operate for more than 40 hours continuously.
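Taking the quoted figures at face value, a back-of-envelope estimate gives a sense of how much hydrogen such a demonstration cell could store over its 40-hour run. The solar irradiance and hydrogen heating value below are standard assumed values, not numbers from the paper:

```python
# Rough estimate (not from the paper) of hydrogen output implied by the
# quoted figures: a 1 cm^2 device storing 10% of incident sunlight as fuel.
solar_flux = 1000.0   # W/m^2, assumed full-sun irradiance
area = 1e-4           # m^2 (1 cm^2 demonstration device)
efficiency = 0.10     # solar-to-fuel conversion, from the article
hours = 40.0          # continuous operation, from the article
hhv_h2 = 142e6        # J/kg, assumed higher heating value of hydrogen

energy_stored = solar_flux * area * efficiency * hours * 3600  # joules
h2_mass_g = energy_stored / hhv_h2 * 1000

print(f"energy stored as fuel: {energy_stored:.0f} J")
print(f"about {h2_mass_g:.2f} g of hydrogen over the 40-hour run")
```

The tiny absolute yield underscores why scale-up and cost-effective manufacturing, mentioned below, are the remaining challenges rather than the device physics.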

"This new system shatters all of the combined safety, performance, and stability records for artificial leaf technology by factors of 5 to 10 or more," Lewis says.

(From left to right): Chengxiang Xiang and Erik Verlage assemble a monolithically integrated III-V device, protected by a TiO2 stabilization layer, which performs unassisted solar water splitting with collection of hydrogen fuel and oxygen.

Credit:  Lance Hayashida/Caltech

"Our work shows that it is indeed possible to produce fuels from sunlight safely and efficiently in an integrated system with inexpensive components," Lewis adds. "Of course, we still have work to do to extend the lifetime of the system and to develop methods for cost-effectively manufacturing full systems, both of which are in progress."

Because the work assembled various components that were developed by multiple teams within JCAP, coauthor Chengxiang Xiang, who is co-leader of the JCAP prototyping and scale-up project, says that the successful end result was a collaborative effort. "JCAP's research and development in device design, simulation, and materials discovery and integration all funneled into the demonstration of this new device," Xiang says.

These results are published in a paper titled "A monolithically integrated, intrinsically safe, 10% efficient, solar-driven water-splitting system based on active, stable earth-abundant electrocatalysts in conjunction with tandem III-V light absorbers protected by amorphous TiO2 films." In addition to Lewis, Atwater, and Xiang, other Caltech coauthors include graduate student Erik Verlage, postdoctoral scholars Shu Hu and Ke Sun, material processing and integration research engineer Rui Liu, and JCAP mechanical engineer Ryan Jones. Funding was provided by the Office of Science at the U.S. Department of Energy, and the Gordon and Betty Moore Foundation.


Contacts and sources:
Judy Asbury
Caltech
Written by Jessica Stoller-Conrad

Researchers See Quantum Motion For the First Time


Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. However, this simple observation is valid only at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. Quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can ever be completely at rest.

Credit: Chan Lei and Keith Schwab/Caltech

For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science.

Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl.

"In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain."

Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it.

The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structure eventually will come to a complete rest if cooled to the ground state.

But that is not what Schwab and his colleagues observed when they actually cooled the device to the ground state in their experiments. Instead, the residual energy—quantum noise—remained.
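For a sense of scale, one can estimate how cold a 3.5 MHz resonator must be before thermal motion drops below a single vibrational quantum, using kB·T ≈ ħω as the crossover. The 25 mK comparison temperature below is an assumed illustrative value for a dilution refrigerator, not a figure from the experiment:

```python
import math

# Crossover temperature where one thermal quantum matches the 3.5 MHz mode.
hbar = 1.054571817e-34   # J*s, reduced Planck constant
k_B = 1.380649e-23       # J/K, Boltzmann constant
f = 3.5e6                # Hz, vibration frequency from the article
omega = 2 * math.pi * f

T_quantum = hbar * omega / k_B
print(f"hbar*omega/k_B is about {T_quantum * 1e6:.0f} microkelvin")

# Mean thermal phonon number at temperature T (Bose-Einstein statistics):
def occupancy(T):
    return 1 / math.expm1(hbar * omega / (k_B * T))

print(f"mean phonon number at an assumed 25 mK: {occupancy(25e-3):.0f}")
```

Even at millikelvin temperatures the mode still holds many thermal phonons, which is why reaching the ground state of such a low-frequency mechanical mode requires additional cooling of the motion itself.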

"This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one."

Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object.

But that limit, Schwab and his colleagues discovered, is not insurmountable. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a method to manipulate the inherent quantum noise and reduce it periodically. The technique was then implemented on the micron-scale mechanical device in Schwab's low-temperature laboratory at Caltech.

"There are two main variables that describe the noise or movement," Schwab explains. "We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter."
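Schwab's description can be sketched numerically: shrinking the fluctuations of one quadrature by a factor e^(-r) inflates the other by e^(+r), so their product stays pinned at the zero-point level. The squeezing parameters below are illustrative values, not the experiment's measured ones:

```python
import math

# Toy sketch of a quantum squeezed state: one quadrature's fluctuations
# shrink below the zero-point level while the conjugate quadrature grows,
# keeping the uncertainty product constant.
x_zp = 1.0   # zero-point fluctuation, arbitrary units

for r in [0.0, 0.35, 0.7]:           # assumed squeezing parameters
    dX1 = x_zp * math.exp(-r)        # squeezed quadrature
    dX2 = x_zp * math.exp(+r)        # anti-squeezed quadrature
    print(f"r={r:.2f}: dX1={dX1:.2f}, dX2={dX2:.2f}, "
          f"product={dX1 * dX2:.2f}")
```

The constant product is the "noise squirting out in other places" in Schwab's phrasing: measurements that depend only on the squeezed quadrature can beat the naive zero-point limit.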

The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometry Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time.

"We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves."

In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says.

These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.



Contacts and sources:
Judy Asbury
Caltech
Written by Jessica Stoller-Conrad


One of the most pressing issues in modern biological conservation is "invasion biology". Due to unprecedented contact between peoples and cultures in today's "global village", certain animal and plant species are spreading widely throughout the world, often causing enormous damage to local species.

This is the structure of the Iron Age floral list at each site. Circle size reflects the total number of new plant species recognized at Iron Age sites. Red indicates new species that appeared only in Philistine Iron Age sites. Green indicates species that appeared only in non-Philistine Iron Age contexts. Blue denotes species shared by Philistine and non-Philistine sites. The three numbers give the quantity of Philistine/non-Philistine/shared species at a site.
Credit: Map produced by M. Frumin using ArcGIS for Desktop (ArcMap 10.1), ESRI.

Recent studies have shown that alien species have had a substantial impact not only in recent times but also in antiquity. This is exemplified in a study published in the August 25th issue of Scientific Reports by a team led by archaeologists from Bar-Ilan University's Martin (Szusz) Department of Land of Israel Studies and Archaeology (Suembikya (Sue) Frumin, Prof. Ehud Weiss and Prof. Aren Maeir) and the Hebrew University (Dr. Liora Kolska Horwitz), describing the bio-archaeological remains of the Philistine culture during the Iron Age (12th century to 7th century BCE). The team compiled a database of plant remains extracted from Bronze and Iron Age sites in the southern Levant, both Philistine and non-Philistine. By analyzing this database, the researchers concluded that the Philistines brought to Israel not just themselves but also their plants.

The species they brought are all cultivars that had not been seen in Israel previously. This includes edible parts of the opium poppy (Papaver somniferum) which originates in western Europe; the sycamore tree (Ficus sycomorus), whose fruits are known to be cultivated in the eastern Mediterranean, especially Egypt, and whose presence in Israel as a locally grown tree is first attested to in the Iron Age by the presence of its fruit; and finally, cumin (Cuminum cyminum), a spice originating in the Eastern Mediterranean. 

Sue Frumin, a PhD student at Prof. Ehud Weiss's archaeobotanical lab, Bar-Ilan University, explains that "the edible parts of these species - opium poppy, sycamore, and cumin - were not identified in the archaeobotanical record of Israel prior to the Iron Age, when the Philistine culture first appeared in the region. None of these plants grows wild in Israel today, but instead grows only as cultivated plants."

Species turnover between the Bronze and Iron Age at Iron Age sites. Each site is marked by two columns. The green column marks the number of Bronze Age species found in the Iron Age floral list. The red column marks the number of new species in Iron Age sites. Numbers beneath the site name give the absolute numbers of Bronze Age /Iron Age species.
Map produced by M. Frumin using ArcGIS for Desktop (ArcMap 10.1), ESRI.

In addition to the translocation of exotic plants from other regions, the Philistines were the first community to exploit over 70 species of synanthropic plants (species which benefit from living in the vicinity of man) that were locally available in Israel, such as Purslane, Wild Radish, Saltwort, Henbane and Vigna. These plant species were not found in archaeological sites pre-dating the Iron Age, or in Iron Age archaeological sites recognized as belonging to non-Philistine cultures - Canaanite, Israelite, Judahite, and Phoenician. The "agricultural revolution" that accompanied the Philistine culture reflects a different agrarian regime and dietary preferences to that of their contemporaries.

The fact that the three exotic plants introduced by the Philistines originate from different regions accords well with the diverse geographic origin of these people. The Philistines - one of the so called Sea Peoples, and mentioned in the Bible and other ancient sources - were a multi-ethnic community with origins in the Aegean, Turkey, Cyprus and other regions in the Eastern Mediterranean who settled on the southern coastal plain of Israel in the early Iron Age (12th century BCE), and integrated with Canaanite and other local populations, finally to disappear at the end of the Iron Age (ca. 600 BCE).

The results of this research indicate that the ca. 600 year presence of the Philistine culture in Israel had a major and long-term impact on local floral biodiversity. The Philistines left as a biological heritage a variety of plants still cultivated in Israel, including, among others, sycamore, cumin, coriander, bay tree and opium poppy.

The Philistines also left their mark on the local fauna. In a previous study, also published in Scientific Reports, in which two of the present authors (Maeir and Kolska Horwitz) participated, DNA extracted from ancient pig bones from Philistine and non-Philistine sites in Israel demonstrated that European pigs were introduced by the Philistines into Israel and slowly swamped the local pig populations through inter-breeding. As a consequence, modern wild boar in Israel today bears a European haplotype rather than a local, Near Eastern one.

As illustrated by these studies, the examination of the ancient bio-archaeological record has the potential to help us understand the long-term mechanisms and vectors that have contributed to current floral and faunal biodiversity, information that may also assist contemporary ecologists in dealing with the pressing issue of invasive species.



Contacts and sources: 
Elana Oberlander
Bar-Ilan University

Two Black Holes Found in Quasar Nearest Earth


A University of Oklahoma astrophysicist and his Chinese collaborator have found two supermassive black holes in Markarian 231, the nearest quasar to Earth, using observations from NASA's Hubble Space Telescope. The discovery of two supermassive black holes, one larger and one smaller, is evidence of a binary black hole and suggests that supermassive black holes assemble their masses through violent mergers.

An OU astrophysicist and his Chinese collaborator used observations from NASA's Hubble Space Telescope to find two supermassive black holes in Markarian 231.

Credit:  Space Telescope Science Institute, Baltimore, Maryland

Xinyu Dai, professor in the Homer L. Dodge Department of Physics and Astronomy, OU College of Arts and Sciences, collaborated on this project with Youjun Lu of the National Astronomical Observatories of China, Chinese Academy of Sciences. Dai and Lu looked at ultraviolet radiation emitted from the center of Mrk 231 in Hubble observations, then applied a model developed by Lu to the spectrum of the galaxy. As a result, they were able to predict the existence of the binary black holes in Mrk 231.

"We are extremely excited about this finding because it not only shows the existence of a close binary black hole in Mrk 231, but also paves a new way to systematically search for binary black holes via the nature of their ultraviolet light emission," said Lu, National Astronomical Observatories of China, Chinese Academy of Sciences.

"The structure of our universe, such as those giant galaxies and clusters of galaxies, grows by merging smaller systems into larger ones, and binary black holes are natural consequences of these mergers of galaxies," said Dai.

So over time, the two black holes discovered by Dai and Lu in Mrk 231 will collide and merge to form a quasar with a single supermassive black hole. A quasar is an active galaxy with an illuminated center, and is short-lived compared to the age of the universe.




Contacts and sources:
 Xinyu Dai  
Jana Smith
University of Oklahoma 

The results of this project were published in the August 14, 2015, edition of The Astrophysical Journal.

Thursday, August 27, 2015

NASA Finds Vegetation Essential For Limiting City Warming Effects


Cities are well known hot spots - literally. The urban heat island effect has long been observed to raise the temperature of big cities by 1 to 3°C (1.8 to 5.4°F), a rise that is due to the presence of asphalt, concrete, buildings, and other so-called impervious surfaces disrupting the natural cooling effect provided by vegetation. According to a new NASA study that makes the first assessment of urbanization impacts for the entire continental United States, the presence of vegetation is an essential factor in limiting urban heating.

The temperature difference between urban areas and surrounding vegetated land due to the presence of impervious surfaces across the continental United States.

Credits: NASA's Earth Observatory

Impervious surfaces' biggest effect is causing a difference in surface temperature between an urban area and surrounding vegetation. The researchers, who used multiple satellites' observations of urban areas and their surroundings combined into a model, found that averaged over the continental United States, areas covered in part by impervious surfaces, be they downtowns, suburbs, or interstate roads, had a summer temperature 1.9°C higher than surrounding rural areas. In winter, the temperature difference was 1.5 °C higher in urban areas.

"This has nothing to do with greenhouse gas emissions. It's in addition to the greenhouse gas effect. This is the land use component only," said Lahouari Bounoua, research scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the study.

The study, published this month in Environmental Research Letters, also quantifies how plants within existing urban areas, along roads, in parks and in wooded neighborhoods, for example, regulate the urban heat effect.

"Everybody thinks, 'urban heat island, things heat up.' But it's not as simple as that. The amount and type of vegetation plays a big role in how much the urbanization changes the temperature," said research scientist and co-author Kurtis Thome of Goddard.

The urban heat island effect occurs primarily during the day when urban impervious surfaces absorb more solar radiation than the surrounding vegetated areas, resulting in a few degrees temperature difference. The urban area has also lost the trees and vegetation that naturally cool the air. As a by-product of photosynthesis, leaves release water back into the atmosphere in a process called evapotranspiration, which cools the local surface temperature the same way that sweat evaporating off a person's skin cools them off. Trees with broad leaves, like those found in many deciduous forests on the East Coast, have more pores to exchange water than trees with needles, and so have more of a cooling effect.

Impervious surface and vegetation data from NASA/U.S. Geological Survey's Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor and NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua satellites were combined with NASA's Simple Biosphere model to recreate the interaction between vegetation, urbanization and the atmosphere at five-kilometer resolution and at half-hour time steps across the continental United States for the year 2001. The temperatures associated with urban heat islands range within a couple degrees, even within a city, with temperatures peaking in the central, often tree-free downtown and tapering out over tree-rich neighborhoods often found in the suburbs.

The northeast I-95 corridor, Baltimore-Washington, Atlanta and the I-85 corridor in the southeast, and the major cities and roads of the Midwest and West Coast show the highest urban temperatures relative to their surrounding rural areas. Smaller cities have less pronounced increases in temperature compared to the surrounding areas. In cities like Phoenix built in the desert, the urban area actually has a cooling effect because of irrigated lawns and trees that wouldn't be there without the city.

"Anywhere in the U.S. small cities generate less heat than mega-cities," said Bounoua. The reason is the effect vegetation has on keeping a lid on rising temperatures.

Bounoua and his colleagues used the model environment to simulate what the temperature would be for a city if all the impervious surfaces were replaced with vegetation. Then slowly they began reintroducing the urban impervious surfaces one percentage point at a time, to see how the temperature rose as vegetation decreased and impervious surfaces expanded.

What they found was unexpected. When the impervious surfaces were at one percent, the corresponding rise in temperature was about 1.3°C. That temperature difference then held steady at about 1.3°C as impervious surfaces increased to 35 percent. Once urban impervious surfaces surpassed 35 percent of the city's land area, the temperature began climbing as the area of urban surfaces increased, reaching 1.6°C warmer by 65 percent urbanization.
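The threshold behavior the team reports can be captured in a small piecewise function. This is an illustrative reconstruction of the numbers quoted above (a flat 1.3°C offset up to 35 percent impervious cover, rising roughly linearly to 1.6°C at 65 percent), not code or a formula from the study itself.

```python
def urban_warming_c(impervious_pct):
    """Approximate urban-rural temperature offset (deg C) versus the
    percentage of a city's land covered by impervious surfaces,
    following the trend reported in the study."""
    if not 0 <= impervious_pct <= 100:
        raise ValueError("percentage must be in [0, 100]")
    if impervious_pct <= 35:
        return 1.3  # offset holds steady up to 35 percent cover
    # beyond 35 percent, warming rises roughly linearly,
    # reaching about 1.6 C at 65 percent urbanization
    slope = (1.6 - 1.3) / (65 - 35)
    return 1.3 + slope * (impervious_pct - 35)

assert urban_warming_c(10) == 1.3                  # small town, flat regime
assert abs(urban_warming_c(65) - 1.6) < 1e-9       # dense city
```

Extrapolating the same slope past 65 percent would be a guess; the article reports values only up to that point.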

At the human level, a rise of 1°C can raise energy demands for air conditioning in the summer by 5 to 20 percent in the United States, according to the Environmental Protection Agency. So even though 0.3°C may seem like a small difference, it still may have an impact on energy use, said Bounoua, especially when urban heat island effects are exacerbated by global temperature rises due to climate change.

Understanding the tradeoffs between urban surfaces and vegetation may help city planners in the future mitigate some of the heating effects, said Thome.

"Urbanization is a good thing," said Bounoua. "It brings a lot of people together in a small area. Share the road, share the work, share the building. But we could probably do it a little bit better."



Contacts and sources:
Ellen Gray
NASA Goddard Space Flight Center

3-D Cancer Models Give Fresh Perspective on Progress of Disease


Computer models of developing cancers reveal how tiny movements of cells can quickly transform the makeup of an entire tumour.

The models reinforce laboratory studies of how tumours evolve and spread, and why patients can respond well to therapy, only to relapse later.

This is a three-dimensional model of a tumor showing cell types in varying colors.

Credit: Bartek Waclaw and Martin Nowak

Researchers used mathematical algorithms to create three-dimensional simulations of cancers developing over time. They studied how tumours begin with one rogue cell which multiplies to become a malignant mass containing many billions of cells.

Their models took into account changes that occur in cancerous cells as they move within the landscape of a tumour, and as they replicate or die. They also considered genetic variation, which makes some cells more suited to the environment of a tumour than others.

They found that movement and turnover of cells in a tumour allows those that are well suited to the environment to flourish. Any one of these can take over an existing tumour, replacing the original mass with new cells quickly - often within several months.

This helps explain why tumours are comprised mostly of one type of cell, whereas healthy tissue tends to be made up of a mixture of cell types.

However, this mechanism does not entirely mix the cells inside the tumour, the team say. This can lead to parts of the tumour becoming immune to certain drugs, which enables them to resist chemotherapy treatment. Those cells that are not killed off by treatment can quickly flourish and repopulate the tumour as it regrows. Researchers say treatments that target small movements of cancerous cells could help to slow progress of the disease.

The study, a collaboration between the University of Edinburgh, Harvard University and Johns Hopkins University, is published in the journal Nature. The research was supported by the Leverhulme Trust and The Royal Society of Edinburgh.

Dr Bartlomiej Waclaw, of the University of Edinburgh's School of Physics and Astronomy, who is the lead author of the study, said: "Computer modelling of cancer enables us to gain valuable insight into how this complex disease develops over time and in three dimensions."


Contacts and sources:
Catriona Kelly
University of Edinburgh

'Brainbow' Reveals Surprising Data about Visual Connections in Brain

Neuroscientists know that some connections in the brain are pruned through neural development. Function gives rise to structure, according to the textbooks. But scientists at the Virginia Tech Carilion Research Institute have discovered that the textbooks might be wrong.

Their results were published today in Cell Reports.

"Retinal neurons associated with vision generate connections in the brain, and as the brain develops it strengthens and maintains some of those connections more than others. The disused connections are eliminated," said Michael Fox, an associate professor at the Virginia Tech Carilion Research Institute who led the study. "We found that this activity-dependent pruning might not be as simple as we'd like to believe."

Fox and his team of researchers used two different techniques to examine how retinal ganglion cells - neurons that live in the retina and transmit visual information to the visual centers in the brain - develop in a mouse model.

"It's widely accepted that synaptic connections from about 20 retinal ganglion cells converge onto cells in the lateral geniculate nucleus during development, but that number reduces to just one or two by the third week of a mouse's life," Fox said. "It was thought that the mature retinal ganglion cells develop several synaptic terminals that cluster around information exchange points."

The theory of several terminals blossoming from the same retinal ganglion cell had not been proved, though, so Fox and his researchers decided to follow the terminals to their roots.

Using a technique dubbed "brainbow," the scientists tagged the terminals with proteins that fluoresce different colors. The researchers thought one color, representing the single source of the many terminals, would dominate in the clusters. Instead, several different colors appeared together, intertwined but distinct.

Using a technique dubbed "brainbow," the Virginia Tech Carilion Research Institute scientists tagged synaptic terminals with proteins that fluoresce different colors. The researchers thought one color, representing the single source of the many terminals, would dominate in the clusters. Instead, several different colors appeared together, intertwined but distinct.

Credit: Virginia Tech

"The samples showed a true 'brainbow,'" said Aboozar Monavarfeshani, a graduate student in Fox's laboratory who tagged the terminals. "I could see, right in front of me, something very different than the concept I learned from my textbooks."

The results showed individual terminals from more than one retinal ganglion cell in a mature mouse brain.

The study is a direct contradiction to some other research indicating neural development weeds out most connections between retinal ganglion axons and target cells in the brain, and Fox and his team have more questions.

"Is this discrepancy a technical issue with the different types of approaches applied in all of these disparate studies?" Fox asked. "Possibly, but perhaps it's more likely that retinal ganglion cells are more complex than previously thought."

Along with the brainbow technique, Fox's team also imaged these synaptic connections with electron microscopy.

Sarah Hammer, currently a sophomore at Virginia Tech, traced individual retinal terminals through hundreds of serial images.

The data confirmed the results from "brainbow" analysis - retinal axons from numerous retinal ganglion cells remained present on adult brain cells.

"These results are not what we expected, and they will force us to reevaluate our understanding of the architecture and flow of visual information through neural pathways," Fox said. "The dichotomy of these results also sheds important light on the benefits of combining approaches to understand complicated problems in science."

Albert Pan, an assistant professor in the Medical College of Georgia at Georgia Regents University, who is an expert in neural circuitry development, said the results are unexpected.

"The research provides strong evidence for multiple innervation and calls for a reevaluation of the current understanding of information flow and neural circuit maturation in the visual system" said Pan, who was not involved in the study. "The paper probably generates more questions than it answers, which is a hallmark of an exciting research study."

The research continues, as Fox's team works to understand exactly how many retinal terminals converge and how they might convey information differently. Once the scientists understand the intricacies of the brain's visual circuitry, they might be able to start developing therapeutics for when it goes wrong.

"The lesson in this particular study is that no single technique gives us all the right answers," Fox said. "Science is never as simple as we like to make it seem."



Contacts and sources:
Paula Brewer Byron
Virginia Tech

What Would A Tsunami In The Mediterranean Look Like?


A team of European researchers have developed a model to simulate the impact of tsunamis generated by earthquakes and applied it to the Eastern Mediterranean. The results show how tsunami waves could hit and inundate coastal areas in southern Italy and Greece. The study is published today (27 August) in Ocean Science, an open access journal of the European Geosciences Union (EGU).

This animation shows water elevation for an earthquake-induced tsunami southwest of Crete.
Credit: Samaras et al., Ocean Science, 2015

Though not as frequent as in the Pacific and Indian oceans, tsunamis also occur in the Mediterranean, mainly due to earthquakes generated when the African plate slides underneath the Eurasian plate. About 10% of all tsunamis worldwide happen in the Mediterranean, with, on average, one large tsunami striking the region each century. The risk to coastal areas is high because of the high population density in the area - some 130 million people live along the sea's coastline. Moreover, tsunami waves in the Mediterranean need to travel only a very short distance before hitting the coast, reaching it with little advance warning. The new study shows the extent of flooding in selected areas along the coasts of southern Italy and Greece, if hit by large tsunamis in the region, and could help local authorities identify vulnerable areas.

Beaches in southern Crete could be affected by an Eastern Mediterranean tsunami 
Credit: Olaf Tausch

"The main gap in relevant knowledge in tsunami modelling is what happens when tsunami waves approach the nearshore and run inland," says Achilleas Samaras, the lead author of the study and a researcher at the University of Bologna in Italy. The nearshore is the zone where waves transform - becoming steeper and changing their propagation direction - as they propagate over shallow water close to the shore. "We wanted to find out how coastal areas would be affected by tsunamis in a region that is not only the most active in the Mediterranean in terms of seismicity and tectonic movements, but has also experienced numerous tsunami events in the past."

The team developed a computer model to represent how tsunamis in the Mediterranean could form, propagate and hit the coast, using information about the seafloor depth, shoreline and topography. "We simulate tsunami generation by introducing earthquake-generated displacements at either the sea bed or the surface," explains Samaras. "The model then simulates how these disturbances - the tsunami waves - propagate and are transformed as they reach the nearshore and inundate coastal areas."
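The propagation step Samaras describes rests on the shallow-water equations. The toy one-dimensional solver below is purely illustrative (the study's model is far more sophisticated and two-dimensional; grid size, depth profile and source here are invented): an initial sea-surface displacement splits into waves travelling at roughly √(gh), which slow and transform as the sea bed shoals toward the coast.

```python
import math

g = 9.81                       # gravity (m/s^2)
n = 200                        # number of 1 km cells
dx = 1000.0
depth = [4000.0 * (1 - 0.9 * i / n) for i in range(n)]          # shoaling bed
h_face = [depth[0]] + [(depth[i - 1] + depth[i]) / 2
                       for i in range(1, n)] + [depth[-1]]      # depth at faces
dt = 0.5 * dx / math.sqrt(g * max(depth))                       # CFL-stable step

eta = [0.0] * n                # sea-surface elevation (m)
u = [0.0] * (n + 1)            # velocity at cell faces; ends act as walls

# earthquake-generated displacement: a 1 m hump offshore
for i in range(20, 40):
    eta[i] = 1.0

for _ in range(200):
    for i in range(1, n):      # momentum: du/dt = -g * d(eta)/dx
        u[i] -= g * dt * (eta[i] - eta[i - 1]) / dx
    for i in range(n):         # continuity: d(eta)/dt = -d(h*u)/dx
        eta[i] -= dt * (h_face[i + 1] * u[i + 1] - h_face[i] * u[i]) / dx

# the disturbance has propagated well away from the source region
assert any(abs(e) > 0.05 for e in eta[100:])
```

Real tsunami codes like the one in the study add the second horizontal dimension, nonlinear terms, bottom friction and wetting/drying of the shoreline, which is what allows them to estimate inundation rather than just offshore wave heights.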

This animation shows water elevation for an earthquake-induced tsunami east of Sicily.

Credit: Samaras et al., Ocean Science, 2015

As detailed in the Ocean Science study, the team applied their model to tsunamis generated by earthquakes of approximately M7.0 magnitude off the coasts of eastern Sicily and southern Crete. Results show that, in both cases, the tsunamis would inundate the low-lying coastal areas up to approximately 5 metres above sea level. The effects would be more severe for Crete where some 3.5 square kilometres of land would be under water.

"Due to the complexity of the studied phenomena, one should not arbitrarily extend the validity of the presented results by assuming that a tsunami with a magnitude at generation five times larger, for example, would result in an inundation area five times larger," cautions Samaras. "It is reasonable, however, to consider such results as indicative of how different areas in each region would be affected by larger events."

"Although the simulated earthquake-induced tsunamis are not small, there has been a recorded history of significantly larger events, in terms of earthquake magnitude and mainshock areas, taking place in the region," says Samaras. For example, a clustering of earthquakes, the largest with a magnitude between 8.0 and 8.5, hit off the coast of Crete in 365 AD. The resulting tsunami destroyed ancient cities in Greece, Italy and Egypt, killing some 5,000 people in Alexandria alone. More recently, an earthquake with a magnitude of about 7.0 hit the Messina region of Italy in 1908, causing a tsunami that killed thousands, with observed waves locally exceeding 10 metres in height.

The team sees the results as a starting point for a more detailed assessment of coastal flooding risk and mitigation along the coasts of the Eastern Mediterranean. "Our simulations could be used to help public authorities and policy makers create a comprehensive database of tsunami scenarios in the Mediterranean, identify vulnerable coastal regions for each scenario, and properly plan their defence."



Contacts and sources:
Barbara Ferreira
European Geoscience Union

Citation: Samaras, A. G., Karambas, Th. V., and Archetti, R.: Simulation of tsunami generation, propagation and coastal inundation in the Eastern Mediterranean, Ocean Sci., 11, 643-655, doi:10.5194/os-11-643-2015, 2015.

Discovering Dust-Obscured Active Galaxies As They Grow

A group of researchers from Ehime University, Princeton University, and the National Astronomical Observatory of Japan (NAOJ), among others, has performed an extensive search for Dust Obscured Galaxies (DOGs) using data obtained from the Subaru Strategic Program with Hyper Suprime-Cam (HSC). HSC is a new wide-field camera mounted at the prime focus of the Subaru Telescope and is an ideal instrument for searching for this rare and important class of galaxy. The research group discovered 48 DOGs, and has measured how common they are. Since DOGs are thought to harbor a rapidly growing black hole in their centers, these results give us clues for understanding the evolution of galaxies and supermassive black holes.

The left, middle, and right panels show an optical image from HSC, a near-infrared image from VIKING, and a mid-infrared image from WISE, respectively. The image size is 20 square arcseconds (1 arcsecond is 1/3600 of a degree). It is clear that DOGs are faint in the optical but extremely bright in the infrared.

Credit: Ehime University/NAOJ/NASA/ESO

How did galaxies form and evolve during the 13.8-billion-year history of the universe? This question has been the subject of intense observational and theoretical investigation. Recent studies have revealed that almost all massive galaxies harbor a supermassive black hole whose mass reaches up to a hundred thousand or even a billion times the mass of the sun, and their masses are tightly correlated with those of their host galaxies. This correlation suggests that supermassive black holes and their host galaxies have evolved together, closely interacting as they grow.

The group of researchers, led by Dr. Yoshiki Toba (Ehime University), focused on Dust Obscured Galaxies (DOGs) as a key population for tackling the mystery of the co-evolution of galaxies and black holes. DOGs are very faint in visible light, because of the large quantity of obscuring dust, but are bright in the infrared. The brightest infrared DOGs in particular are expected to harbor the most actively growing black holes. In addition, most DOGs are seen in the epoch when the star formation activity of galaxies reached its peak, 8-10 billion years ago. Thus both DOGs and their black holes are rapidly growing, at an early phase of their co-evolution. However, since DOGs are rare and are hidden behind a significant amount of dust, previous visible-light surveys have found very few such objects.

Hyper Suprime-Cam (HSC) is a new instrument installed on the 8.2 meter Subaru Telescope in 2012. It is a wide-field camera with a field of view nine times the size of the full moon. An ambitious legacy survey with HSC started in March 2014 as a "Subaru Strategic Program" (Note 1); a total of 300 nights have been allocated over a five-year period. The Subaru Strategic Program with HSC has started to deliver large quantities of excellent imaging data.

The research team selected DOGs from early data from the HSC Subaru Strategic Program (SSP). DOGs are a thousand times brighter in the infrared than in the optical, and the team selected their targets using HSC and NASA's Wide-field Infrared Survey Explorer (WISE: Note 2). They also utilized data from the VISTA Kilo-degree Infrared Galaxy survey (VIKING: Note 3). The all-sky survey data from WISE are crucial for discovering spatially rare DOGs, while the VIKING data are useful for identifying the DOGs more precisely.
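At its core, the selection described above is a flux-ratio cut. The snippet below is only a schematic of that idea, with hypothetical source names and fluxes; the actual HSC/WISE selection works with calibrated magnitudes and several color criteria.

```python
def is_dog_candidate(flux_infrared, flux_optical, min_ratio=1000.0):
    """Flag sources roughly a thousand times brighter in the infrared
    than in the optical, the defining DOG property. Both fluxes are
    assumed to be in the same units."""
    if flux_optical <= 0:   # undetected in the optical: no ratio to form
        return False
    return flux_infrared / flux_optical >= min_ratio

# hypothetical catalog rows: (name, optical flux, infrared flux)
catalog = [
    ("src-001", 2.0, 150.0),   # ordinary galaxy: only 75x redder
    ("src-002", 0.05, 80.0),   # 1600x brighter in the infrared: DOG
    ("src-003", 0.01, 4.0),    # 400x: red, but below the cut
]
dogs = [name for name, opt, ir in catalog if is_dog_candidate(ir, opt)]
print(dogs)  # ['src-002']
```

This is also why the WISE all-sky data matter: a source this extreme is rare, so the infrared survey must cover a huge area to catch any at all.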

The number density of DOGs that were newly selected in this study, as a function of infrared luminosity. Data represented by the red star is the HSC result. The research team found that (i) their infrared luminosity exceeds 10 trillion suns, and (ii) their number density is about 300 per cubic gigaparsecs.

Credit: Ehime University/NAOJ/NASA/ESO

Consequently, 48 DOGs were discovered. Each of these is 10 trillion times more luminous in the infrared than the sun. The number density of these luminous DOGs is about 300 per cubic gigaparsec. It is theoretically predicted that these DOGs harbor an actively evolving supermassive black hole. This result provides researchers with new insights into the mysteries of the co-evolution of galaxies and supermassive black holes from a unique observational perspective.

In this research, the team discovered 48 Dust Obscured Galaxies and revealed, for the first time, the statistical properties of particularly infrared-luminous DOGs.

The first author of the paper, Dr. Yoshiki Toba, said, "There are no instruments on large telescopes with the sensitivity and field of view of HSC, and hence HSC is unique in its ability to search for DOGs. The HSC survey will cover more than 100 times as much area of the sky as was used for this study when it is complete, allowing the identification of thousands of DOGs in the near future. We are planning to investigate the detailed properties of DOGs and their central black holes using observations from many telescopes."

Also, Professor Tohru Nagao, second author of the paper, said, "The Subaru Strategic Program with HSC has just begun. In the near future, exciting results will be released not only from studies on galaxy evolution, but also in fields such as the Solar System, stars, nearby galaxies, and cosmology."



Contacts and sources:
Saeko S. Hayashi
National Institutes of Natural Sciences

New Theory Leads To Radiationless Revolution

Physicists have found a radical new way to confine electromagnetic energy without it leaking away, akin to throwing a pebble into a pond with no splash.

The theory could have wide-ranging applications, from explaining dark matter to combating energy losses in future technologies.

Visualization of dark matter as energy confined within non-radiating anapoles.
Credit: Andrey Miroshnichenko

However, it appears to contradict a fundamental tenet of electrodynamics, that accelerated charges create electromagnetic radiation, said lead researcher Dr Andrey Miroshnichenko from The Australian National University (ANU).

"This problem has puzzled many people. It took us a year to get this concept clear in our heads," said Dr Miroshnichenko, from the ANU Research School of Physics and Engineering.

The fundamental new theory could be used in quantum computers, lead to new laser technology and may even hold the key to understanding how matter itself hangs together.

"Ever since the beginning of quantum mechanics people have been looking for a configuration which could explain the stability of atoms and why orbiting electrons do not radiate," Dr Miroshnichenko said.

Dr. Miroshnichenko with his visualization of anapoles as dark matter.

Credit: Stuart Hay, ANU
The absence of radiation is the result of the current being divided between two different components, a conventional electric dipole and a toroidal dipole (associated with a poloidal current configuration), which produce identical fields at a distance.

If these two configurations are out of phase then the radiation will be cancelled out, even though the electromagnetic fields are non-zero in the area close to the currents.
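In the standard multipole language this cancellation can be written compactly (a textbook-style sketch, not a formula from the press release; the sign depends on the assumed time convention). With wavenumber \(k\) and an \(e^{-i\omega t}\) time dependence, the far field carried by the electric dipole moment \(\mathbf{P}\) and toroidal dipole moment \(\mathbf{T}\) is

```latex
% Combined far-field amplitude of the electric and toroidal dipoles:
E_{\mathrm{rad}} \;\propto\; \mathbf{P} + ik\,\mathbf{T}
% The anapole (nonradiating) condition is complete destructive interference:
\mathbf{P} \;=\; -\,ik\,\mathbf{T}
```

The near fields remain non-zero when this condition holds, which is why the energy is confined rather than absent.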

Dr Miroshnichenko, in collaboration with colleagues from Germany and Singapore, successfully tested his new theory with single silicon nanodiscs between 160 and 310 nanometres in diameter and 50 nanometres high, which he was able to make effectively invisible by cancelling each disc's scattering of visible light.

This type of excitation is known as an anapole (from the Greek, 'without poles').

Dr Miroshnichenko's insight came while trying to reconcile differences between two different mathematical descriptions of radiation; one based on Cartesian multipoles and the other on vector spherical harmonics used in a Mie basis set.

"The two gave different answers, and they shouldn't. Eventually we realised the Cartesian description was missing the toroidal components," Dr Miroshnichenko said.

"We realised that these toroidal components were not just a correction, they could be a very significant factor."

Dr Miroshnichenko said the confined energy of anapoles could be important in the development of tiny lasers on the surface of materials, called spasers, and also in the creation of efficient X-ray lasers by high-order harmonic generation.



Contacts and sources:
Dr. Andrey Miroshnichenko
Australian National University

Unravelling the History and Metamorphosis of Galaxies

A team of international scientists, led by astronomers from the Cardiff University School of Physics and Astronomy, has shown for the first time that galaxies can change their structure over the course of their lifetime.

By observing the sky as it is today, and peering back in time using the Hubble and Herschel telescopes, the team have shown that a large proportion of galaxies have undergone a major ‘metamorphosis’ since they were initially formed after the Big Bang.

 

By providing the first direct evidence of the extent of this transformation, the team hope to shed light on the processes that caused these dramatic changes, and therefore gain a greater understanding of the appearance and properties of the Universe as we know it today.

In their study, which has been published in the Monthly Notices of the Royal Astronomical Society, the researchers observed around 10,000 galaxies currently present in the Universe using a survey of the sky created by the Herschel ATLAS and GAMA projects.

The researchers then classified the galaxies into the two main types: flat, rotating, disc-shaped galaxies (much like our own galaxy, the Milky Way); and large, spherical galaxies with a swarm of disordered stars.

Using the Hubble and Herschel telescopes, the researchers then looked further out into the Universe, and thus further back in time, to observe the galaxies that formed shortly after the Big Bang.

The researchers showed that 83 per cent of all the stars formed since the Big Bang were initially located in a disc-shaped galaxy.

However, only 49 per cent of stars that exist in the Universe today are located in these disc-shaped galaxies—the remainder are located in spherical-shaped galaxies.

The results suggest a massive transformation in which disc-shaped galaxies became spherical-shaped galaxies.
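Those two percentages already bound how many stars must have changed address. A back-of-envelope sketch (the "minimum migrated fraction" is a derived illustration, not a number quoted by the study, and it assumes stars only move from discs to spheroids):

```python
def min_migrated_fraction(frac_formed_in_discs, frac_now_in_discs):
    """Lower bound on the fraction of disc-formed stars that must now
    reside in spherical galaxies, assuming no flow in the other direction."""
    moved = frac_formed_in_discs - frac_now_in_discs
    return moved / frac_formed_in_discs

# 83% of stars formed in discs, but only 49% remain in discs today,
# so at least ~41% of disc-formed stars must have migrated:
print(round(min_migrated_fraction(0.83, 0.49), 2))  # 0.41
```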

A popular theory is that this transformation was caused by many cosmic catastrophes, in which two disc-dominated galaxies, straying too close to each other, were forced by gravity to merge into a single galaxy, with the merger destroying the discs and producing a huge pile-up of stars. An opposing theory is that the transformation was a gentler process, with stars formed in a disc gradually moving to its centre and producing a central pile-up of stars.

Lead author of the study Professor Steve Eales, from Cardiff University’s School of Physics and Astronomy, said: “Many people have claimed before that this metamorphosis has occurred, but by combining Herschel and Hubble, we have for the first time been able to accurately measure the extent of this transformation.

“Galaxies are the basic building blocks of the Universe, so this metamorphosis really does represent one of the most significant changes in its appearance and properties in the last 8 billion years.”



Contacts and sources:
Cardiff University 

Did Alien Life Arise Spontaneously? Seeds of Life Spread from One Living Planet in All Directions: New Theory Says "Clusters of Life Form, Grow and Overlap"

We only have one example of a planet with life: Earth. But within the next generation, it should become possible to detect signs of life on planets orbiting distant stars. If we find alien life, new questions will arise. For example, did that life arise spontaneously? Or could it have spread from elsewhere? If life crossed the vast gulf of interstellar space long ago, how would we tell?

In this theoretical artist's conception of the Milky Way galaxy, translucent green "bubbles" mark areas where life has spread beyond its home system to create cosmic oases, a process called panspermia. New research suggests that we could detect the pattern of panspermia, if it occurs.

Credit: NASA/JPL/R. Hurt

New research by Harvard astrophysicists shows that if life can travel between the stars (a process called panspermia), it would spread in a characteristic pattern that we could potentially identify.

"In our theory clusters of life form, grow, and overlap like bubbles in a pot of boiling water," says lead author Henry Lin of the Harvard-Smithsonian Center for Astrophysics (CfA).

There are two basic ways for life to spread beyond its host star. The first would be via natural processes such as gravitational slingshotting of asteroids or comets. The second would be for intelligent life to deliberately travel outward. The paper does not deal with how panspermia occurs. It simply asks: if it does occur, could we detect it? In principle, the answer is yes.

The model assumes that seeds from one living planet spread outward in all directions. If a seed reaches a habitable planet orbiting a neighboring star, it can take root. Over time, the result of this process would be a series of life-bearing oases dotting the galactic landscape.

"Life could spread from host star to host star in a pattern similar to the outbreak of an epidemic. In a sense, the Milky Way galaxy would become infected with pockets of life," explains CfA co-author Avi Loeb.

If we detect signs of life in the atmospheres of alien worlds, the next step will be to look for a pattern. For example, in an ideal case where the Earth is on the edge of a "bubble" of life, all the nearby life-hosting worlds we find will be in one half of the sky, while the other half will be barren.

Lin and Loeb caution that a pattern will only be discernible if life spreads somewhat rapidly. Since stars in the Milky Way drift relative to each other, stars that are neighbors now won't be neighbors in a few million years. In other words, stellar drift would smear out the bubbles.
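The "half the sky" signature described above can be illustrated with a toy Monte Carlo (a sketch under idealised assumptions: a spherical bubble of seeded systems, Earth exactly on its edge, and no stellar drift; none of this code comes from Lin and Loeb's paper):

```python
import random

def sample_point_in_ball(radius, rng):
    # Rejection-sample a uniformly distributed point inside a sphere.
    while True:
        x, y, z = (rng.uniform(-radius, radius) for _ in range(3))
        if x * x + y * y + z * z <= radius * radius:
            return (x, y, z)

def fraction_toward_bubble_centre(n_worlds=10_000, radius=1.0, seed=1):
    """Place Earth on the edge of a bubble of life centred at the origin
    and count the fraction of seeded worlds that lie in the half of the
    sky facing the bubble's centre."""
    rng = random.Random(seed)
    earth_x = radius  # Earth sits at (radius, 0, 0)
    toward = sum(
        1
        for _ in range(n_worlds)
        if sample_point_in_ball(radius, rng)[0] - earth_x < 0
    )
    return toward / n_worlds
```

In this idealised case essentially every life-hosting world falls in the hemisphere toward the bubble's centre while the opposite half of the sky is barren; stellar drift would smear out exactly this asymmetry.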



Contacts and sources:
Christine Pulliam
Harvard-Smithsonian Center for Astrophysics (CfA)

Wednesday, August 26, 2015

Giant Collision Triggered “Radio Phoenix,” Chandra Data Suggest

Astronomers have found evidence for a faded electron cloud “coming back to life,” much like the mythical phoenix, after two galaxy clusters collided. This “radio phoenix,” so-called because the high-energy electrons radiate primarily at radio frequencies, is found in Abell 1033. The system is located about 1.6 billion light years from Earth.

Abell 1033 galaxy cluster
Image credit: X-ray: NASA/CXC/Univ of Hamburg/F. de Gasperin et al; Optical: SDSS; Radio: NRAO/VLA

By combining data from NASA’s Chandra X-ray Observatory, the Westerbork Synthesis Radio Telescope in the Netherlands, NSF’s Karl Jansky Very Large Array (VLA), and the Sloan Digital Sky Survey (SDSS), astronomers were able to recreate the scientific narrative behind this intriguing cosmic story of the radio phoenix.

Galaxy clusters are the largest structures in the Universe held together by gravity. They consist of hundreds or even thousands of individual galaxies, unseen dark matter, and huge reservoirs of hot gas that glow in X-ray light. Understanding how clusters grow is critical to tracking how the Universe itself evolves over time.

Astronomers think that the supermassive black hole close to the center of Abell 1033 erupted in the past. Streams of high-energy electrons filled a region hundreds of thousands of light years across and produced a cloud of bright radio emission. This cloud faded over a period of millions of years as the electrons lost energy and the cloud expanded.

The radio phoenix emerged when another cluster of galaxies slammed into the original cluster, sending shock waves through the system. These shock waves, similar to sonic booms produced by supersonic jets, passed through the dormant cloud of electrons. The shock waves compressed the cloud and re-energized the electrons, which caused the cloud to once again shine at radio frequencies.

A new portrait of this radio phoenix is captured in this multiwavelength image of Abell 1033. X-rays from Chandra are in pink and radio data from the VLA are colored green. The background image shows optical observations from the SDSS. A map of the density of galaxies, made from the analysis of optical data, is seen in blue.

The Chandra data show hot gas in the clusters, which seems to have been disturbed during the same collision that caused the re-ignition of radio emission in the system. The peak of the X-ray emission is seen to the south (bottom) of the cluster, perhaps because the dense core of gas in the south is being stripped away by surrounding gas as it moves. The cluster in the north may not have entered the collision with a dense core, or perhaps its core was significantly disrupted during the merger. On the left side of the image, a so-called wide-angle tail radio galaxy shines in the radio. The lobes of plasma ejected by the supermassive black hole in its center are bent by the interaction with the cluster gas as the galaxy moves through it.

Astronomers think they are seeing the radio phoenix soon after it was reborn, since these sources fade very quickly when located close to the center of the cluster, as this one is in Abell 1033. Because of the intense density, pressure, and magnetic fields near the center of Abell 1033, a radio phoenix is expected to last only a few tens of millions of years.

A paper describing these results was published in a recent issue of the Monthly Notices of the Royal Astronomical Society and a preprint is available online. The authors are Francesco de Gasperin from the University of Hamburg, Germany; Georgiana Ogrean and Reinout van Weeren from the Harvard-Smithsonian Center for Astrophysics; William Dawson from the Lawrence Livermore National Lab in Livermore, California; Marcus Brüggen and Annalisa Bonafede from the University of Hamburg, Germany, and Aurora Simionescu from the Japan Aerospace Exploration Agency in Sagamihara, Japan.

NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.




Contacts and sources: 
Janet Anderson
Marshall Space Flight Center, Huntsville, Ala.

Megan Watzke
Chandra X-ray Center, Cambridge, Mass.

Black Holes Store, And Garble, Information: Stephen Hawking Offers New Solution To Black Hole Mystery

Black holes don't actually swallow and destroy physical information, according to an idea proposed today by Stephen Hawking at the Hawking Radiation conference being held at KTH Royal Institute of Technology. Instead, they store it in a two-dimensional hologram.



One of the most baffling questions facing a generation of physicists is what happens to the information about the physical state of things swallowed up by black holes. Is it destroyed, as our understanding of general relativity would predict? If so, that would violate the laws of quantum mechanics.

This artist's concept illustrates a supermassive black hole with millions to billions of times the mass of our sun. Supermassive black holes are enormously dense objects buried at the hearts of galaxies.
Image credit: NASA/JPL-Caltech

Today at the Hawking Radiation conference, Hawking presented his latest idea about how this paradox can be solved — that is, how information is preserved even if it's sucked into a black hole.

Nobel physics laureate Gerard 't Hooft, of Utrecht University, the Netherlands, confers with Stephen Hawking after the Cambridge professor presented his solution to the information loss paradox. Hawking is in town for a weeklong conference on the information loss paradox, which is co-hosted by Nordita at KTH Royal Institute of Technology.  
Photo: Håkan Lindgren
Hawking is in town for the weeklong conference, which is co-sponsored by Nordita, UNC and the Julian Schwinger Foundation. Nordita is co-hosted by KTH and Stockholm University. UNC physicist Laura Mersini-Houghton was instrumental in assembling 32 of the world's leading physicists to tackle the problem, which stems from contradictions between quantum mechanics and general relativity.

Everything in our world is encoded with quantum mechanical information; and according to the laws of quantum mechanics, this information should never entirely disappear, no matter what happens to it. Not even if it gets sucked into a black hole.

But Hawking's new idea is that the information doesn't make it inside the black hole at all. Instead, it's permanently encoded in a 2D hologram at the black hole's event horizon, the boundary surrounding each black hole that marks its point of no return.

As we understand them, black holes are regions of space-time where stars, having exhausted their fuel, collapse under their own gravity, creating a bottomless pit that swallows anything approaching too closely. Not even light can escape them, since their gravitational pull is so powerful.

"The information is not stored in the interior of the black hole as one might expect, but in its boundary — the event horizon," he said. Working with Cambridge Professor Malcolm Perry (who spoke afterward) and Harvard Professor Andrew Strominger, Hawking formulated the idea that information is stored in the form of what are known as supertranslations.

Conference participants wait while Stephen Hawking composes an answer to a question.

Photo, Håkan Lindgren

"The idea is that the supertranslations are a hologram of the ingoing particles," Hawking said. "Thus they contain all the information that would otherwise be lost."

This information is emitted in the quantum fluctuations that black holes produce, albeit in "chaotic, useless form," Hawking said. "For all practical purposes the information is lost."

But in his lecture in Stockholm the previous night, Hawking also offered compelling thoughts about where things that fall into a black hole could eventually wind up.

"The existence of alternative histories with black holes suggests this might be possible," Hawking said. "The hole would need to be large and if it was rotating it might have a passage to another universe. But you couldn't come back to our universe.

"So although I'm keen on space flight, I'm not going to try that."


Contacts and sources:
 David Callahan
KTH Royal Institute of Technology

Tuesday, August 25, 2015

First of Its Kind Fuel Cell Tri-Generator Promises To Reduce Energy Loss, Costs and Emissions


TRISOFC coordinator Dr Mark Worall speaks about the project's unique solid oxide fuel cell (SOFC) tri-generator, which has the potential to increase the utilisation of available energy, reduce costs, add value, and decrease primary energy use and emissions.
 
The complete TriSOFC system schematic is shown below.

Credit: © TRISOFC

Almost half of the world’s primary energy consumption goes to providing electricity, heating and cooling. Most of this energy comes from centralised power stations, where up to 70 % of available energy is wasted. The inefficiency of this model is unacceptably high, leading to considerable CO2 emissions and unnecessarily high running costs. These problems could be addressed by moving from conventional centralised power generation to efficient onsite micro-generation technology, and one promising possibility in this line is the solid oxide fuel cell (SOFC).

SOFC technology combines hydrogen and oxygen in an electro-chemical reaction to generate electricity, with the only by-products being water vapour, heat and a modest amount of carbon dioxide. Hydrogen can be supplied from hydrocarbon fuels such as natural gas, which is widely available for domestic and public buildings. For three years, the TRISOFC project team worked to advance this type of technology by developing a low-cost durable low temperature (LT) SOFC tri-generation (cooling, heating and power) prototype.

TRISOFC coordinator Dr Mark Worall from the University of Nottingham provided more specific details on the outcomes of the project, which officially concluded at the end of July: ‘The team designed, optimised and built an LT-SOFC tri-generation prototype, based on the integration of a novel LT-SOFC stack and a desiccant cooling unit.’ Additional components of the system are a fuel processor to generate reformate gas and other equipment for the electrical, mechanical and control balance of plant (BoP).

TRISOFC unique features

The TRISOFC system boasts a number of unique features that set it apart from anything that has been done before. In particular, the operating temperature of the TRISOFC system is between 500 and 600 degrees Celsius, compared with 800 to 1000 degrees Celsius for normal SOFCs. ‘This is important,’ Dr Worall notes, ‘because it enables BoP and other temperature-dependent components to be manufactured from relatively low-cost materials, such as stainless steel, and so potentially it substantially reduces the costs of materials and components.’

Additionally, the LT-SOFC is based on a single component nanocomposite material, an invention of a team led by Professor Binzhu Zhu of KTH, one of the consortium partners, which is unique in that it can act as an anode, cathode and an electrolyte. Dr Worall adds, ‘Again, this has the potential to reduce costs and complexity and increase reliability and durability.’ Finally, the system has been integrated with an open cycle desiccant dehumidification and cooling system to provide heating, cooling and thermal storage. This has not been used before in fuel cells and it has the advantage of potentially increasing the utilisation of the waste heat (currently 40 % to 50 % of the total energy input is wasted).

Dr Worall notes, ‘In our system, the waste heat from the SOFC is used to re-concentrate the solution. This is a form of thermal storage, which allows us to operate the fuel cell when we don’t need cooling and use it when we do. Our system has three main advantages: firstly, it increases the conversion efficiency of the SOFC from 45 % to 55% to potentially 85 % to 95 %; secondly, it reduces the demand for electrical energy that would be needed to provide comfort cooling (and by reducing electrical energy use, we are also reducing primary energy consumption and carbon dioxide emissions) and thirdly, it reduces cooling provided by vapour-compression refrigeration systems, which currently rely on working fluids that have a global warming potential (GWP) when released.’

Putting the LT- SOFC tri-generation system to the test

The team has successfully proved the concept of an LT-SOFC tri-generation system. Dr Worall draws attention to two particularly significant test results: ‘Tests of two-cell 6 cm x 6 cm LT planar-type SOFCs have shown a power density of up to 1100W/cm² with a power output of 22 W, at 530 degrees Celsius. Researchers are in the process of developing 200 We stacks and we should be able to demonstrate large-scale, low-temperature electrical output.’

‘Additionally, tests on the desiccant dehumidification system showed that a coefficient of performance (COP) of above 1.0 was achieved. COP is the ratio of the cooling output to the total energy input, and so represents a key performance parameter… In overall conversion terms, our heat-powered cooler is competitive with other systems.’
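The COP quoted above is a plain ratio, so a heat-driven cooler with COP above 1.0 delivers more cooling than the total energy fed into it. A minimal sketch (the example figures are illustrative, not TRISOFC measurements):

```python
def coefficient_of_performance(cooling_output_kwh, total_energy_input_kwh):
    # COP = cooling delivered / total energy consumed.
    # COP > 1.0 means the unit delivers more cooling than it draws.
    return cooling_output_kwh / total_energy_input_kwh

# Illustrative: 1.1 kWh of cooling from 1.0 kWh of total input energy
cop = coefficient_of_performance(1.1, 1.0)  # 1.1, above break-even
```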

TRISOFC impact

Now the concept has been proven, the next steps will be to prove long-term durability, scale-up production and reduce costs further. Dr Worall and the team expect the system developed under TRISOFC to have a significant impact on a number of levels: ‘This system is a first-of-its-kind fuel cell tri-generator and has great potential to increase the utilisation of the available energy, reduce costs, add value, reduce primary energy use and emissions and promote distributed energy production.’

One group that could feel the benefits the most is consumers, as Dr Worall explains: ‘Most buildings use primary energy for heating, cooling and electricity, so by generating electricity at the domestic level, consumers can potentially benefit from the sale of excess electricity production (depending on local energy costs, incentives and tariffs), reduce demand for heat for the provision of hot water and heating, and reduce the electricity needed for cooling. As we are getting three for the price of one, consumers should benefit financially and in terms of reducing their impact on the environment.’

The team is confident that, both as an integrated system and as individual components, the LT-SOFC tri-generation system has potential for commercialisation. ‘We are actively engaged with industry and end users to develop user-friendly, reliable and financially viable systems and subsystems,’ Dr Worall concludes.

 

Contacts and sources:
CORDIS
TRISOFC