Unseen Is Free


Thursday, August 27, 2015

NASA Finds Vegetation Essential For Limiting City Warming Effects

Cities are well known hot spots - literally. The urban heat island effect has long been observed to raise the temperature of big cities by 1 to 3°C (1.8 to 5.4°F), a rise that is due to the presence of asphalt, concrete, buildings, and other so-called impervious surfaces disrupting the natural cooling effect provided by vegetation. According to a new NASA study that makes the first assessment of urbanization impacts for the entire continental United States, the presence of vegetation is an essential factor in limiting urban heating.

The temperature difference between urban areas and surrounding vegetated land due to the presence of impervious surfaces across the continental United States.

Credits: NASA's Earth Observatory

Impervious surfaces' biggest effect is causing a difference in surface temperature between an urban area and surrounding vegetation. The researchers, who combined multiple satellites' observations of urban areas and their surroundings into a model, found that, averaged over the continental United States, areas covered in part by impervious surfaces, be they downtowns, suburbs, or interstate roads, had summer temperatures 1.9°C higher than surrounding rural areas. In winter, the difference was 1.5°C.

"This has nothing to do with greenhouse gas emissions. It's in addition to the greenhouse gas effect. This is the land use component only," said Lahouari Bounoua, research scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the study.

The study, published this month in Environmental Research Letters, also quantifies how plants within existing urban areas, along roads, in parks and in wooded neighborhoods, for example, regulate the urban heat effect.

"Everybody thinks, 'urban heat island, things heat up.' But it's not as simple as that. The amount and type of vegetation plays a big role in how much the urbanization changes the temperature," said research scientist and co-author Kurtis Thome of Goddard.

The urban heat island effect occurs primarily during the day, when urban impervious surfaces absorb more solar radiation than the surrounding vegetated areas, resulting in a temperature difference of a few degrees. The urban area has also lost the trees and vegetation that naturally cool the air. As a by-product of photosynthesis, leaves release water back into the atmosphere in a process called evapotranspiration, which cools the local surface temperature the same way that sweat evaporating off a person's skin cools them off. Trees with broad leaves, like those found in many deciduous forests on the East Coast, have more pores to exchange water than trees with needles, and so have more of a cooling effect.

Impervious surface and vegetation data from the NASA/U.S. Geological Survey Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor and NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua satellites were combined with NASA's Simple Biosphere model to recreate the interaction between vegetation, urbanization and the atmosphere at five-kilometer resolution and at half-hour time steps across the continental United States for the year 2001. The temperatures associated with urban heat islands vary within a couple of degrees, even within a city, with temperatures peaking in the central, often tree-free downtown and tapering off over the tree-rich neighborhoods often found in the suburbs.

The northeast I-95 corridor, Baltimore-Washington, Atlanta and the I-85 corridor in the southeast, and the major cities and roads of the Midwest and West Coast show the highest urban temperatures relative to their surrounding rural areas. Smaller cities have less pronounced increases in temperature compared to their surroundings. In cities like Phoenix that are built in the desert, the urban area actually has a cooling effect because of irrigated lawns and trees that wouldn't be there without the city.

"Anywhere in the U.S., small cities generate less heat than mega-cities," said Bounoua. The reason is the effect vegetation has in keeping a lid on rising temperatures.

Bounoua and his colleagues used the model environment to simulate what the temperature would be for a city if all the impervious surfaces were replaced with vegetation. Then slowly they began reintroducing the urban impervious surfaces one percentage point at a time, to see how the temperature rose as vegetation decreased and impervious surfaces expanded.

What they found was unexpected. When impervious surfaces covered just one percent of the land, the corresponding rise in temperature was about 1.3°C, and that difference held steady at about 1.3°C as impervious cover increased to 35 percent. Once impervious surfaces surpassed 35 percent of the city's land area, however, the temperature began climbing as the area of urban surfaces increased, reaching 1.6°C warmer by 65 percent urbanization.
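Those reported numbers trace out a simple threshold relationship. The sketch below is an illustrative interpolation of the quoted values only (the function and its linear ramp are assumptions for illustration, not the study's biosphere model):

```python
def urban_temp_rise(impervious_pct):
    """Illustrative piecewise fit to the reported numbers: ~1.3 C from
    1% to 35% impervious cover, then a roughly linear rise to ~1.6 C
    at 65%. Values outside the studied range are simply clamped."""
    if impervious_pct < 1:
        return 0.0
    if impervious_pct <= 35:
        return 1.3
    # linear ramp from 1.3 C at 35% cover to 1.6 C at 65% cover
    slope = (1.6 - 1.3) / (65 - 35)
    return 1.3 + slope * (min(impervious_pct, 65) - 35)
```

For example, `urban_temp_rise(20)` gives the flat 1.3°C regime, while `urban_temp_rise(50)` lands midway up the ramp, near 1.45°C.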

At the human level, a rise of 1°C can raise energy demands for summer air conditioning by 5 to 20 percent in the United States, according to the Environmental Protection Agency. So even though 0.3°C may seem like a small difference, it can still have an impact on energy use, said Bounoua, especially when urban heat island effects are exacerbated by global temperature rises due to climate change.

Understanding the tradeoffs between urban surfaces and vegetation may help city planners in the future mitigate some of the heating effects, said Thome.

"Urbanization is a good thing," said Bounoua. "It brings a lot of people together in a small area. Share the road, share the work, share the building. But we could probably do it a little bit better."

Contacts and sources:
Ellen Gray
NASA Goddard Space Flight Center

3-D Cancer Models Give Fresh Perspective on Progress of Disease

Computer models of developing cancers reveal how tiny movements of cells can quickly transform the makeup of an entire tumour.

The models reinforce laboratory studies of how tumours evolve and spread, and why patients can respond well to therapy, only to relapse later.

This is a three-dimensional model of a tumor showing cell types in varying colors.

Credit: Bartek Waclaw and Martin Nowak

Researchers used mathematical algorithms to create three-dimensional simulations of cancers developing over time. They studied how tumours begin with one rogue cell which multiplies to become a malignant mass containing many billions of cells.

Their models took into account changes that occur in cancerous cells as they move within the landscape of a tumour, and as they replicate or die. They also considered genetic variation, which makes some cells more suited to the environment of a tumour than others.

They found that movement and turnover of cells in a tumour allows those that are well suited to the environment to flourish. Any one of these can take over an existing tumour, replacing the original mass with new cells quickly - often within several months.
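The dynamic described here, in which turnover plus selection lets a well-suited clone sweep a tumour, can be caricatured in a few lines. This is a toy well-mixed birth-death model with invented parameters, not the authors' spatial 3-D simulation:

```python
import random

def simulate_takeover(pop_size=1000, generations=200, mut_rate=0.001,
                      fitness_boost=1.3, seed=1):
    """Each generation, the next population is sampled in proportion to
    cell fitness (replication and death in one step); rare 'driver'
    mutations multiply a cell's fitness, so its descendants can sweep
    the population. Returns the final fraction of mutant cells."""
    rng = random.Random(seed)
    cells = [1.0] * pop_size  # every cell starts at baseline fitness
    for _ in range(generations):
        # turnover: fitter cells leave more offspring
        cells = rng.choices(cells, weights=cells, k=pop_size)
        # occasional driver mutation boosts a cell's fitness
        cells = [f * fitness_boost if rng.random() < mut_rate else f
                 for f in cells]
    return sum(f > 1.0 for f in cells) / pop_size
```

With these arbitrary settings, mutant lineages arise roughly once per generation and a fit clone usually comes to dominate, echoing the rapid takeover the models describe.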

This helps explain why tumours consist mostly of one type of cell, whereas healthy tissue tends to be made up of a mixture of cell types.

However, this mechanism does not entirely mix the cells inside the tumour, the team say. This can lead to parts of the tumour becoming immune to certain drugs, which enables them to resist chemotherapy treatment. Those cells that are not killed off by treatment can quickly flourish and repopulate the tumour as it regrows. Researchers say treatments that target small movements of cancerous cells could help to slow progress of the disease.

The study, a collaboration between the University of Edinburgh, Harvard University and Johns Hopkins University, is published in the journal Nature. The research was supported by the Leverhulme Trust and The Royal Society of Edinburgh.

Dr Bartlomiej Waclaw, of the University of Edinburgh's School of Physics and Astronomy, who is the lead author of the study, said: "Computer modelling of cancer enables us to gain valuable insight into how this complex disease develops over time and in three dimensions."

Contacts and sources:
Catriona Kelly
University of Edinburgh

'Brainbow' Reveals Surprising Data about Visual Connections in Brain

Neuroscientists know that some connections in the brain are pruned through neural development. Function gives rise to structure, according to the textbooks. But scientists at the Virginia Tech Carilion Research Institute have discovered that the textbooks might be wrong.

Their results were published today in Cell Reports.

"Retinal neurons associated with vision generate connections in the brain, and as the brain develops it strengthens and maintains some of those connections more than others. The disused connections are eliminated," said Michael Fox, an associate professor at the Virginia Tech Carilion Research Institute who led the study. "We found that this activity-dependent pruning might not be as simple as we'd like to believe."

Fox and his team of researchers used two different techniques to examine how retinal ganglion cells - neurons that live in the retina and transmit visual information to the visual centers in the brain - develop in a mouse model.

"It's widely accepted that synaptic connections from about 20 retinal ganglion cells converge onto cells in the lateral geniculate nucleus during development, but that number reduces to just one or two by the third week of a mouse's life," Fox said. "It was thought that the mature retinal ganglion cells develop several synaptic terminals that cluster around information exchange points."

The theory of several terminals blossoming from the same retinal ganglion cell had not been proved, though, so Fox and his researchers decided to follow the terminals to their roots.

Using a technique dubbed "brainbow," the scientists tagged the terminals with proteins that fluoresce different colors. The researchers thought one color, representing the single source of the many terminals, would dominate in the clusters. Instead, several different colors appeared together, intertwined but distinct.

Using a technique dubbed "brainbow," the Virginia Tech Carilion Research Institute scientists tagged synaptic terminals with proteins that fluoresce different colors. The researchers thought one color, representing the single source of the many terminals, would dominate in the clusters. Instead, several different colors appeared together, intertwined but distinct.

Credit: Virginia Tech

"The samples showed a true 'brainbow,'" said Aboozar Monavarfeshani, a graduate student in Fox's laboratory who tagged the terminals. "I could see, right in front of me, something very different than the concept I learned from my textbooks."

The results showed individual terminals from more than one retinal ganglion cell in a mature mouse brain.

The study directly contradicts other research indicating that neural development weeds out most connections between retinal ganglion axons and target cells in the brain, and Fox and his team have more questions.

"Is this discrepancy a technical issue with the different types of approaches applied in all of these disparate studies?" Fox asked. "Possibly, but perhaps it's more likely that retinal ganglion cells are more complex than previously thought."

Along with the brainbow technique, Fox's team also imaged these synaptic connections with electron microscopy.

Sarah Hammer, currently a sophomore at Virginia Tech, traced individual retinal terminals through hundreds of serial images.

The data confirmed the results from "brainbow" analysis - retinal axons from numerous retinal ganglion cells remained present on adult brain cells.

"These results are not what we expected, and they will force us to reevaluate our understanding of the architecture and flow of visual information through neural pathways," Fox said. "The dichotomy of these results also sheds important light on the benefits of combining approaches to understand complicated problems in science."

Albert Pan, an assistant professor in the Medical College of Georgia at Georgia Regents University, who is an expert in neural circuitry development, said the results are unexpected.

"The research provides strong evidence for multiple innervation and calls for a reevaluation of the current understanding of information flow and neural circuit maturation in the visual system," said Pan, who was not involved in the study. "The paper probably generates more questions than it answers, which is a hallmark of an exciting research study."

The research continues, as Fox's team works to understand exactly how many retinal terminals converge and how they might convey information differently. Once the scientists understand the intricacies of the brain's visual circuitry, they might be able to start developing therapeutics for when it goes wrong.

"The lesson in this particular study is that no single technique gives us all the right answers," Fox said. "Science is never as simple as we like to make it seem."

Contacts and sources:
Paula Brewer Byron
Virginia Tech

What Would A Tsunami In The Mediterranean Look Like?

A team of European researchers have developed a model to simulate the impact of tsunamis generated by earthquakes and applied it to the Eastern Mediterranean. The results show how tsunami waves could hit and inundate coastal areas in southern Italy and Greece. The study is published today (27 August) in Ocean Science, an open access journal of the European Geosciences Union (EGU).

This  animation shows water elevation for an earthquake-induced tsunami at the Southwest of Crete.   
Credit: Samaras et al., Ocean Science, 2015

Though not as frequent as in the Pacific and Indian oceans, tsunamis also occur in the Mediterranean, mainly due to earthquakes generated when the African plate slides underneath the Eurasian plate. About 10% of all tsunamis worldwide happen in the Mediterranean, with, on average, one large tsunami striking the region once a century. The risk to coastal areas is high because of the high population density in the area - some 130 million people live along the sea's coastline. Moreover, tsunami waves in the Mediterranean need to travel only a very short distance before hitting the coast, reaching it with little advance warning. The new study shows the extent of flooding in selected areas along the coasts of southern Italy and Greece, if hit by large tsunamis in the region, and could help local authorities identify vulnerable areas.

Beaches in southern Crete could be affected by an Eastern Mediterranean tsunami 
Credit: Olaf Tausch

"The main gap in relevant knowledge in tsunami modelling is what happens when tsunami waves approach the nearshore and run inland," says Achilleas Samaras, the lead author of the study and a researcher at the University of Bologna in Italy. The nearshore is the zone where waves transform - becoming steeper and changing their propagation direction - as they propagate over shallow water close to the shore. "We wanted to find out how coastal areas would be affected by tsunamis in a region that is not only the most active in the Mediterranean in terms of seismicity and tectonic movements, but has also experienced numerous tsunami events in the past."

The team developed a computer model to represent how tsunamis in the Mediterranean could form, propagate and hit the coast, using information about the seafloor depth, shoreline and topography. "We simulate tsunami generation by introducing earthquake-generated displacements at either the sea bed or the surface," explains Samaras. "The model then simulates how these disturbances - the tsunami waves - propagate and are transformed as they reach the nearshore and inundate coastal areas."
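As a sketch of the propagation step, a tsunami far from shore behaves roughly like a linear shallow-water wave travelling at c = √(gh), where h is the water depth. The snippet below is a deliberately simplified 1-D finite-difference toy over a flat seabed, with made-up grid parameters; the study itself uses a far more complete Boussinesq-type model with real bathymetry:

```python
import math

def propagate_hump(depth_m=2000.0, nx=200, dx=5000.0, dt=5.0, steps=300):
    """Evolve a Gaussian sea-surface hump under eta_tt = c^2 * eta_xx,
    with c = sqrt(g * depth). Zero initial velocity makes the hump
    split into two half-amplitude waves racing in opposite directions
    (about 140 m/s for a 2 km deep basin)."""
    g = 9.81
    courant2 = g * depth_m * (dt / dx) ** 2  # (c*dt/dx)^2, must be < 1
    eta = [math.exp(-(((i - nx // 2) * dx) / 5e4) ** 2) for i in range(nx)]
    eta_prev = eta[:]  # same field one step earlier => zero velocity
    for _ in range(steps):
        eta_next = [0.0] * nx  # fixed zero boundaries
        for i in range(1, nx - 1):
            eta_next[i] = (2 * eta[i] - eta_prev[i]
                           + courant2 * (eta[i + 1] - 2 * eta[i] + eta[i - 1]))
        eta_prev, eta = eta, eta_next
    return eta
```

After 300 five-second steps the wave fronts sit roughly 210 km from the source, which illustrates why warning times in the Mediterranean are so short.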

This  animation shows water elevation for an earthquake-induced tsunami at the East of Sicily. 

Credit: Samaras et al., Ocean Science, 2015

As detailed in the Ocean Science study, the team applied their model to tsunamis generated by earthquakes of approximately M7.0 magnitude off the coasts of eastern Sicily and southern Crete. Results show that, in both cases, the tsunamis would inundate the low-lying coastal areas up to approximately 5 metres above sea level. The effects would be more severe for Crete where some 3.5 square kilometres of land would be under water.

"Due to the complexity of the studied phenomena, one should not arbitrarily extend the validity of the presented results by assuming that a tsunami with a magnitude at generation five times larger, for example, would result in an inundation area five times larger," cautions Samaras. "It is reasonable, however, to consider such results as indicative of how different areas in each region would be affected by larger events."

"Although the simulated earthquake-induced tsunamis are not small, there has been a recorded history of significantly larger events, in terms of earthquake magnitude and mainshock areas, taking place in the region," says Samaras. For example, a clustering of earthquakes, the largest with magnitude between 8.0 and 8.5, hit off the coast of Crete in 365 AD. The resulting tsunami destroyed ancient cities in Greece, Italy and Egypt, killing some 5000 people in Alexandria alone. More recently, an earthquake of magnitude of about 7.0 hit the Messina region in Italy in 1908, causing a tsunami that killed thousands, with observed waves locally exceeding 10 metres in height.

The team sees the results as a starting point for a more detailed assessment of coastal flooding risk and mitigation along the coasts of the Eastern Mediterranean. "Our simulations could be used to help public authorities and policy makers create a comprehensive database of tsunami scenarios in the Mediterranean, identify vulnerable coastal regions for each scenario, and properly plan their defence."

Contacts and sources:
Barbara Ferreira
European Geosciences Union

Citation: Samaras, A. G., Karambas, Th. V., and Archetti, R.: Simulation of tsunami generation, propagation and coastal inundation in the Eastern Mediterranean, Ocean Sci., 11, 643-655, doi:10.5194/os-11-643-2015, 2015.

Discovering Dust-Obscured Active Galaxies As They Grow

A group of researchers from Ehime University, Princeton University, and the National Astronomical Observatory of Japan (NAOJ) among others has performed an extensive search for Dust Obscured Galaxies (DOGs) using data obtained from the Subaru Strategic Program with Hyper Suprime-Cam (HSC). HSC is a new wide-field camera mounted at the prime focus of the Subaru Telescope and is an ideal instrument for searching for this rare and important class of galaxy. The research group discovered 48 DOGs, and has measured how common they are. Since DOGs are thought to harbor a rapidly growing black hole in their centers, these results give us clues for understanding the evolution of galaxies and supermassive black holes.

The left, middle, and right panels show the optical image from HSC, the near-infrared image from VIKING, and the mid-infrared image from WISE, respectively. The image size is 20 square arcseconds (1 arcsecond is 1/3600 of a degree). It is clear that DOGs are faint in the optical but extremely bright in the infrared.

Credit: Ehime University/NAOJ/NASA/ESO

How did galaxies form and evolve during the 13.8-billion-year history of the universe? This question has been the subject of intense observational and theoretical investigation. Recent studies have revealed that almost all massive galaxies harbor a supermassive black hole whose mass reaches up to a hundred thousand or even a billion times the mass of the sun, and their masses are tightly correlated with those of their host galaxies. This correlation suggests that supermassive black holes and their host galaxies have evolved together, closely interacting as they grow.

The group of researchers, led by Dr. Yoshiki Toba (Ehime University), focused on Dust Obscured Galaxies (DOGs) as a key population for tackling the mystery of the co-evolution of galaxies and black holes. DOGs are very faint in visible light, because of the large quantity of obscuring dust, but are bright in the infrared. The brightest infrared DOGs in particular are expected to harbor the most actively growing black holes. In addition, most DOGs are seen in the epoch when the star formation activity of galaxies reached its peak, 8-10 billion years ago. Thus both DOGs and their black holes are growing rapidly, at an early phase of their co-evolution. However, since DOGs are rare and hidden behind significant amounts of dust, previous visible-light surveys have found very few such objects.

Hyper Suprime-Cam (HSC) is a new instrument installed on the 8.2-meter Subaru Telescope in 2012. It is a wide-field camera with a field of view nine times the size of the full moon. An ambitious legacy survey with HSC began in March 2014 as a "Subaru Strategic Program (Note 1)"; a total of 300 nights have been allocated over a five-year period. The program has already started to deliver large quantities of excellent imaging data.

The research team selected DOGs from early data from the HSC Subaru Strategic Program (SSP). DOGs are a thousand times brighter in the infrared than in the optical, and the team selected their targets using the HSC together with NASA's Wide-field Infrared Survey Explorer (WISE; Note 2). They also utilized data from the VISTA Kilo-degree Infrared Galaxy survey (VIKING; Note 3). The all-sky survey data from WISE are crucial for discovering spatially rare DOGs, while the VIKING data are useful for identifying the DOGs more precisely.
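That thousand-to-one brightness contrast is essentially the selection criterion. A hedged illustration of such a flux-ratio cut follows; the helper function and its inputs are invented for illustration, and the study's real criteria are calibrated HSC and WISE magnitude cuts rather than this raw ratio:

```python
def is_dog_candidate(optical_flux_mjy, infrared_flux_mjy, ratio=1000.0):
    """Flag a source whose infrared flux exceeds its optical flux by a
    factor of ~1000 - the hallmark of a Dust Obscured Galaxy."""
    return infrared_flux_mjy >= ratio * optical_flux_mjy

print(is_dog_candidate(0.001, 5.0))  # True: 5000x brighter in the infrared
print(is_dog_candidate(0.01, 5.0))   # False: only 500x brighter
```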

The number density of DOGs newly selected in this study, as a function of infrared luminosity. The red star represents the HSC result. The research team found that (i) their infrared luminosity exceeds 10 trillion suns, and (ii) their number density is about 300 per cubic gigaparsec.

Credit: Ehime University/NAOJ/NASA/ESO

Consequently, 48 DOGs were discovered. Each of these is 10 trillion times more luminous in the infrared than the sun. The number density of these luminous DOGs is about 300 per cubic gigaparsec. These DOGs are theoretically predicted to harbor actively evolving supermassive black holes. This result gives researchers new insight into the mysteries of the co-evolution of galaxies and supermassive black holes from a unique observational perspective.

In this research, the team discovered 48 Dust Obscured Galaxies and, for the first time, revealed the statistical properties of infrared-luminous DOGs in particular.

The first author of the paper, Dr. Yoshiki Toba, said, "There are no instruments on large telescopes with the sensitivity and field of view of HSC, and hence HSC is unique in its ability to search for DOGs. The HSC survey will cover more than 100 times as much area of the sky as the area used for this study when it is complete, allowing the identification of thousands of DOGs in the near future. We are planning to investigate the detailed properties of DOGs and their central black holes using observations from many telescopes."

Also, Professor Tohru Nagao, second author of the paper, said, "The Subaru Strategic Program with HSC has just begun. In the near future, exciting results will be released not only from studies of galaxy evolution, but also in fields such as the solar system, stars, nearby galaxies, and cosmology."

Contacts and sources:
Saeko S. Hayashi
National Institutes of Natural Sciences

New Theory Leads To Radiationless Revolution

Physicists have found a radical new way to confine electromagnetic energy without it leaking away, akin to throwing a pebble into a pond with no splash.

The theory could have broad-ranging applications, from explaining dark matter to combating energy losses in future technologies.

Visualization of dark matter as energy confined within non-radiating anapoles.
Credit: Andrey Miroshnichenko

However, it appears to contradict a fundamental tenet of electrodynamics, that accelerated charges create electromagnetic radiation, said lead researcher Dr Andrey Miroshnichenko from The Australian National University (ANU).

"This problem has puzzled many people. It took us a year to get this concept clear in our heads," said Dr Miroshnichenko, from the ANU Research School of Physics and Engineering.

The fundamental new theory could be used in quantum computers, lead to new laser technology and may even hold the key to understanding how matter itself hangs together.

"Ever since the beginning of quantum mechanics people have been looking for a configuration which could explain the stability of atoms and why orbiting electrons do not radiate," Dr Miroshnichenko said.

Dr. Miroshnichenko with his visualization of anapoles as dark matter.

Credit: Stuart Hay, ANU

The absence of radiation is the result of the current being divided between two different components, a conventional electric dipole and a toroidal dipole (associated with a poloidal current configuration), which produce identical fields at a distance.

If these two configurations are out of phase then the radiation will be cancelled out, even though the electromagnetic fields are non-zero in the area close to the currents.
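In the usual multipole language, the far field carries an effective dipole combining the Cartesian electric dipole moment p and the toroidal moment T. One common way to write the non-radiating (anapole) condition, up to sign and normalization conventions that vary between papers, is:

```latex
% Effective far-field dipole: electric dipole p plus toroidal dipole T
\mathbf{D} = \mathbf{p} + ik\,\mathbf{T},
\qquad
\mathbf{p} = -ik\,\mathbf{T}
\;\Longrightarrow\;
\mathbf{D} = 0
```

Here k is the wavenumber of the light; when the two moments satisfy this phase relation the radiated field cancels, even though the near fields around the currents remain non-zero.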

Dr Miroshnichenko, in collaboration with colleagues from Germany and Singapore, successfully tested his new theory with single silicon nanodiscs between 160 and 310 nanometres in diameter and 50 nanometres high, which he was able to make effectively invisible by cancelling the discs' scattering of visible light.

This type of excitation is known as an anapole (from the Greek, 'without poles').

Dr Miroshnichenko's insight came while trying to reconcile differences between two different mathematical descriptions of radiation; one based on Cartesian multipoles and the other on vector spherical harmonics used in a Mie basis set.

"The two gave different answers, and they shouldn't. Eventually we realised the Cartesian description was missing the toroidal components," Dr Miroshnichenko said.

"We realised that these toroidal components were not just a correction, they could be a very significant factor."

Dr Miroshnichenko said the confined energy of anapoles could be important in the development of tiny lasers on the surface of materials, called spasers, and also in the creation of efficient X-ray lasers by high-order harmonic generation.

Contacts and sources:
Dr. Andrey Miroshnichenko
Australian National University

Unravelling the History and Metamorphosis of Galaxies

A team of international scientists, led by astronomers from the Cardiff University School of Physics and Astronomy, has shown for the first time that galaxies can change their structure over the course of their lifetime.

By observing the sky as it is today, and peering back in time using the Hubble and Herschel telescopes, the team have shown that a large proportion of galaxies have undergone a major ‘metamorphosis’ since they were initially formed after the Big Bang.


By providing the first direct evidence of the extent of this transformation, the team hope to shed light on the processes that caused these dramatic changes, and therefore gain a greater understanding of the appearance and properties of the Universe as we know it today.

In their study, which has been published in the Monthly Notices of the Royal Astronomical Society, the researchers observed around 10,000 galaxies currently present in the Universe using a survey of the sky created by the Herschel ATLAS and GAMA projects.

The researchers then classified the galaxies into the two main types: flat, rotating, disc-shaped galaxies (much like our own galaxy, the Milky Way); and large, spherical galaxies with a swarm of disordered stars.

Using the Hubble and Herschel telescopes, the researchers then looked further out into the Universe, and thus further back in time, to observe the galaxies that formed shortly after the Big Bang.

The researchers showed that 83 per cent of all the stars formed since the Big Bang were initially located in a disc-shaped galaxy.

However, only 49 per cent of stars that exist in the Universe today are located in these disc-shaped galaxies—the remainder are located in spherical-shaped galaxies.

The results suggest a massive transformation in which disc-shaped galaxies became spherical-shaped galaxies.
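Those two percentages already bound the scale of the change. If we assume stars move only from discs to spheroids (a simplifying assumption for this back-of-envelope estimate, not a claim from the paper), the fraction of disc-formed stars that ended up in spherical galaxies is at least:

```latex
f \;\ge\; \frac{0.83 - 0.49}{0.83} \;\approx\; 0.41
```

In other words, roughly four in ten stars born in discs would now sit in spheroids.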

A popular theory is that this transformation was caused by many cosmic catastrophes, in which two disc-dominated galaxies, straying too close to each other, were forced by gravity to merge into a single galaxy, with the merger destroying the discs and producing a huge pile-up of stars. An opposing theory is that the transformation was a more gentle process, with stars formed in a disc gradually moving to the centre of the disc and producing a central pile-up of stars.

Lead author of the study Professor Steve Eales, from Cardiff University’s School of Physics and Astronomy, said: “Many people have claimed before that this metamorphosis has occurred, but by combining Herschel and Hubble, we have for the first time been able to accurately measure the extent of this transformation.

“Galaxies are the basic building blocks of the Universe, so this metamorphosis really does represent one of the most significant changes in its appearance and properties in the last 8 billion years.”

Contacts and sources:
Cardiff University 

Did Alien Life Arise Spontaneously? Seeds of Life Spread from One Living Planet in All Directions: New Theory Says "Clusters of Life Form, Grow and Overlap"

We only have one example of a planet with life: Earth. But within the next generation, it should become possible to detect signs of life on planets orbiting distant stars. If we find alien life, new questions will arise. For example, did that life arise spontaneously? Or could it have spread from elsewhere? If life crossed the vast gulf of interstellar space long ago, how would we tell?

In this theoretical artist's conception of the Milky Way galaxy, translucent green "bubbles" mark areas where life has spread beyond its home system to create cosmic oases, a process called panspermia. New research suggests that we could detect the pattern of panspermia, if it occurs.

Credit: NASA/JPL/R. Hurt

New research by Harvard astrophysicists shows that if life can travel between the stars (a process called panspermia), it would spread in a characteristic pattern that we could potentially identify.

"In our theory clusters of life form, grow, and overlap like bubbles in a pot of boiling water," says lead author Henry Lin of the Harvard-Smithsonian Center for Astrophysics (CfA).

There are two basic ways for life to spread beyond its host star. The first would be via natural processes such as gravitational slingshotting of asteroids or comets. The second would be for intelligent life to deliberately travel outward. The paper does not deal with how panspermia occurs. It simply asks: if it does occur, could we detect it? In principle, the answer is yes.

The model assumes that seeds from one living planet spread outward in all directions. If a seed reaches a habitable planet orbiting a neighboring star, it can take root. Over time, the result of this process would be a series of life-bearing oases dotting the galactic landscape.
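The bubble picture is easy to caricature: scatter stars at random, let life hop between any pair of systems closer than some reach, and watch a colonized region grow around the origin world. This toy 2-D sketch uses invented parameters and is not the authors' model:

```python
import math
import random

def spread_life(n_stars=300, steps=10, hop_radius=0.08, seed=7):
    """Scatter n_stars in a unit square, seed life on star 0, and each
    step colonize every star within hop_radius of any living system.
    Returns the number of colonized stars - a growing 'bubble' of life."""
    rng = random.Random(seed)
    stars = [(rng.random(), rng.random()) for _ in range(n_stars)]
    alive = {0}  # the one known living world
    for _ in range(steps):
        newly = set()
        for i in alive:
            for j, s in enumerate(stars):
                if j not in alive and math.dist(stars[i], s) <= hop_radius:
                    newly.add(j)
        alive |= newly
    return len(alive)
```

Running it with increasing `steps` shows the oasis expanding outward from the home system, the kind of clustered, one-sided pattern the authors argue observers could look for.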

"Life could spread from host star to host star in a pattern similar to the outbreak of an epidemic. In a sense, the Milky Way galaxy would become infected with pockets of life," explains CfA co-author Avi Loeb.

If we detect signs of life in the atmospheres of alien worlds, the next step will be to look for a pattern. For example, in an ideal case where the Earth is on the edge of a "bubble" of life, all the nearby life-hosting worlds we find will be in one half of the sky, while the other half will be barren.

Lin and Loeb caution that a pattern will only be discernible if life spreads somewhat rapidly. Since stars in the Milky Way drift relative to each other, stars that are neighbors now won't be neighbors in a few million years. In other words, stellar drift would smear out the bubbles.

Contacts and sources:
Christine Pulliam
Harvard-Smithsonian Center for Astrophysics (CfA)

Wednesday, August 26, 2015

Chandra Data Suggest Giant Collision Triggered “Radio Phoenix”

Astronomers have found evidence for a faded electron cloud “coming back to life,” much like the mythical phoenix, after two galaxy clusters collided. This “radio phoenix,” so-called because the high-energy electrons radiate primarily at radio frequencies, is found in Abell 1033. The system is located about 1.6 billion light years from Earth.

Abell 1033 galaxy cluster
Image credit: X-ray: NASA/CXC/Univ of Hamburg/F. de Gasperin et al; Optical: SDSS; Radio: NRAO/VLA

By combining data from NASA’s Chandra X-ray Observatory, the Westerbork Synthesis Radio Telescope in the Netherlands, NSF’s Karl Jansky Very Large Array (VLA), and the Sloan Digital Sky Survey (SDSS), astronomers were able to reconstruct the story behind this intriguing radio phoenix.

Galaxy clusters are the largest structures in the Universe held together by gravity. They consist of hundreds or even thousands of individual galaxies, unseen dark matter, and huge reservoirs of hot gas that glow in X-ray light. Understanding how clusters grow is critical to tracking how the Universe itself evolves over time.

Astronomers think that the supermassive black hole close to the center of Abell 1033 erupted in the past. Streams of high-energy electrons filled a region hundreds of thousands of light years across and produced a cloud of bright radio emission. This cloud faded over a period of millions of years as the electrons lost energy and the cloud expanded.

The radio phoenix emerged when another cluster of galaxies slammed into the original cluster, sending shock waves through the system. These shock waves, similar to sonic booms produced by supersonic jets, passed through the dormant cloud of electrons. The shock waves compressed the cloud and re-energized the electrons, which caused the cloud to once again shine at radio frequencies.

A new portrait of this radio phoenix is captured in this multiwavelength image of Abell 1033. X-rays from Chandra are in pink and radio data from the VLA are colored green. The background image shows optical observations from the SDSS. A map of the density of galaxies, made from the analysis of optical data, is seen in blue.

The Chandra data show hot gas in the clusters, which seems to have been disturbed during the same collision that caused the re-ignition of radio emission in the system. The peak of the X-ray emission is seen to the south (bottom) of the cluster, perhaps because the dense core of gas in the south is being stripped away by surrounding gas as it moves. The cluster in the north may not have entered the collision with a dense core, or perhaps its core was significantly disrupted during the merger. On the left side of the image, a so-called wide-angle tail radio galaxy shines in the radio. The lobes of plasma ejected by the supermassive black hole in its center are bent by the interaction with the cluster gas as the galaxy moves through it.

Astronomers think they are seeing the radio phoenix soon after it was reborn, since these sources fade very quickly when located close to the center of the cluster, as this one is in Abell 1033. Because of the intense density, pressure, and magnetic fields near the center of Abell 1033, a radio phoenix is only expected to last a few tens of millions of years.

A paper describing these results was published in a recent issue of the Monthly Notices of the Royal Astronomical Society and a preprint is available online. The authors are Francesco de Gasperin from the University of Hamburg, Germany; Georgiana Ogrean and Reinout van Weeren from the Harvard-Smithsonian Center for Astrophysics; William Dawson from the Lawrence Livermore National Lab in Livermore, California; Marcus Brüggen and Annalisa Bonafede from the University of Hamburg, Germany, and Aurora Simionescu from the Japan Aerospace Exploration Agency in Sagamihara, Japan.

NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.

Contacts and sources: 
Janet Anderson
Marshall Space Flight Center, Huntsville, Ala.

Megan Watzke
Chandra X-ray Center, Cambridge, Mass.

Black Holes Store, And Garble, Information: Stephen Hawking Offers New Solution To Black Hole Mystery

Black holes don't actually swallow and destroy physical information, according to an idea proposed today by Stephen Hawking at the Hawking Radiation conference being held at KTH Royal Institute of Technology. Instead, they store it in a two-dimensional hologram.

One of the most baffling questions facing a generation of physicists is what happens to the information about the physical state of things that are swallowed up by black holes. Is it destroyed, as our understanding of general relativity would predict? If so, that would violate the laws of quantum mechanics.

This artist's concept illustrates a supermassive black hole with millions to billions of times the mass of our sun. Supermassive black holes are enormously dense objects buried at the hearts of galaxies.
Image credit: NASA/JPL-Caltech

Today at the Hawking Radiation conference, Hawking presented his latest idea about how this paradox can be solved — that is, how information is preserved even if it's sucked into a black hole.

Nobel physics laureate Gerard 't Hooft, of Utrecht University, the Netherlands, confers with Stephen Hawking after the Cambridge professor presented his solution to the information loss paradox. Hawking is in town for a weeklong conference on the information loss paradox, which is co-hosted by Nordita at KTH Royal Institute of Technology.  
Photo: Håkan Lindgren
Hawking is in town for the weeklong conference, which is co-sponsored by Nordita, UNC and the Julian Schwinger Foundation. Nordita is co-hosted by KTH and Stockholm University. UNC physicist Laura Mersini-Houghton was instrumental in assembling 32 of the world's leading physicists to tackle the problem, which stems from contradictions between quantum mechanics and general relativity.

Everything in our world is encoded with quantum mechanical information; and according to the laws of quantum mechanics, this information should never entirely disappear, no matter what happens to it. Not even if it gets sucked into a black hole.

But Hawking's new idea is that the information doesn't make it inside the black hole at all. Instead, it's permanently encoded in a 2D hologram at the surface of the black hole's event horizon, the boundary surrounding each black hole that marks its point of no return.

As we understand them, black holes are regions of space-time where stars, having exhausted their fuel, collapse under their own gravity, creating a bottomless pit that swallows anything approaching too closely. Not even light can escape them, since their gravitational pull is so powerful.

"The information is not stored in the interior of the black hole as one might expect, but in its boundary — the event horizon," he said. Working with Cambridge Professor Malcolm Perry (who spoke afterward) and Harvard Professor Andrew Strominger, Hawking formulated the idea that information is stored in the form of what are known as supertranslations.

Conference participants wait while Stephen Hawking composes an answer to a question.

Photo: Håkan Lindgren

"The idea is the super translations are a hologram of the ingoing particles," Hawking said. "Thus they contain all the information that would otherwise be lost."

This information is emitted in the quantum fluctuations that black holes produce, albeit in "chaotic, useless form," Hawking said. "For all practical purposes the information is lost."

But in his lecture in Stockholm the previous night, Hawking also offered compelling thoughts about where things that fall into a black hole could eventually wind up.

"The existence of alternative histories with black holes suggests this might be possible," Hawking said. "The hole would need to be large and if it was rotating it might have a passage to another universe. But you couldn't come back to our universe.

"So although I'm keen on space flight, I'm not going to try that."

Contacts and sources:
 David Callahan
KTH Royal Institute of Technology

Tuesday, August 25, 2015

First of Its Kind Fuel Cell Tri-Generator Promises To Reduce Energy Loss, Costs and Emissions

TRISOFC coordinator Dr Mark Worall speaks about the project’s unique solid oxide fuel cell (SOFC) tri-generator, which has the potential to increase the utilisation of available energy, reduce costs, add value, and decrease primary energy use and emissions.
The complete TriSOFC system schematic is shown below.

Credit: © TRISOFC

Almost half of the world’s primary energy consumption is in the provision of electricity, heating and cooling. Most of this energy comes from centralised power stations, where up to 70 % of available energy is wasted. The inefficiency of this model is unacceptably high, leading to considerable CO2 emissions and unnecessarily high running costs. These problems could be addressed by moving from conventional centralised power generation to efficient onsite micro-generation technology, and one promising possibility in this line is the solid oxide fuel cell (SOFC).

SOFC technology combines hydrogen and oxygen in an electro-chemical reaction to generate electricity, with the only by-products being water vapour, heat and a modest amount of carbon dioxide. Hydrogen can be supplied from hydrocarbon fuels such as natural gas, which is widely available for domestic and public buildings. For three years, the TRISOFC project team worked to advance this type of technology by developing a low-cost durable low temperature (LT) SOFC tri-generation (cooling, heating and power) prototype.

TRISOFC coordinator Dr Mark Worall from the University of Nottingham provided more specific details on the outcomes of the project, which officially concluded at the end of July: ‘The team designed, optimised and built an LT-SOFC tri-generation prototype, based on the integration of a novel LT-SOFC stack and a desiccant cooling unit.’ Additional components of the system are a fuel processor to generate reformate gas and other equipment for the electrical, mechanical and control balance of plant (BoP).

TRISOFC unique features

The TRISOFC system boasts a number of unique features that set it apart from anything that has been done before. In particular, the operating temperature of the TRISOFC system is between 500 and 600 degrees Celsius, compared to 800 to 1000 degrees Celsius for normal SOFCs. ‘This is important,’ Dr Worall notes, ‘because it enables BoP and other temperature-dependent components to be manufactured from relatively low-cost materials, such as stainless steel, and so potentially it substantially reduces the costs of materials and components.’ 

Additionally, the LT-SOFC is based on a single component nanocomposite material, an invention of a team led by Professor Binzhu Zhu of KTH, one of the consortium partners, which is unique in that it can act as an anode, cathode and an electrolyte. Dr Worall adds, ‘Again, this has the potential to reduce costs and complexity and increase reliability and durability.’ Finally, the system has been integrated with an open cycle desiccant dehumidification and cooling system to provide heating, cooling and thermal storage. This has not been used before in fuel cells and it has the advantage of potentially increasing the utilisation of the waste heat (currently 40 % to 50 % of the total energy input is wasted).

Dr Worall notes, ‘In our system, the waste heat from the SOFC is used to re-concentrate the solution. This is a form of thermal storage, which allows us to operate the fuel cell when we don’t need cooling and use it when we do. Our system has three main advantages: firstly, it increases the conversion efficiency of the SOFC from 45 % to 55% to potentially 85 % to 95 %; secondly, it reduces the demand for electrical energy that would be needed to provide comfort cooling (and by reducing electrical energy use, we are also reducing primary energy consumption and carbon dioxide emissions) and thirdly, it reduces cooling provided by vapour-compression refrigeration systems, which currently rely on working fluids that have a global warming potential (GWP) when released.’
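The quoted efficiency gain can be made concrete with a back-of-the-envelope energy balance (the numbers below are illustrative assumptions, not project data): a cell converting roughly half the fuel energy to electricity, plus recovery of much of the remaining heat through the desiccant loop, pushes overall utilisation toward the quoted 85 % to 95 % range.

```python
# Hypothetical tri-generation energy balance; all figures are illustrative.
fuel_input_kw = 1.0
electrical_eff = 0.50        # mid-range of the quoted 45-55 % electrical efficiency
heat_recovered_frac = 0.40   # assumed share of input recovered as useful heat/cooling

electricity_kw = fuel_input_kw * electrical_eff
useful_heat_kw = fuel_input_kw * heat_recovered_frac

# Overall utilisation counts both the electricity and the recovered heat.
utilisation = (electricity_kw + useful_heat_kw) / fuel_input_kw
print(f"overall utilisation: {utilisation:.0%}")
```

With these assumed figures the unit utilises 90 % of the fuel energy, squarely within the range Dr Worall cites; without heat recovery, the same cell would stop at 50 %.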

Putting the LT- SOFC tri-generation system to the test

The team has successfully proved the concept of an LT-SOFC tri-generation system. Dr Worall draws attention to two particularly significant test results: ‘Tests of two-cell 6 cm x 6 cm LT planar-type SOFCs have shown a power density of up to 1100 mW/cm², with a power output of 22 W, at 530 degrees Celsius. Researchers are in the process of developing 200 We stacks and we should be able to demonstrate large-scale, low-temperature electrical output.’

‘Additionally, tests on the desiccant dehumidification system showed that a coefficient of performance (COP) of above 1.0 was achieved. COP is the ratio of the cooling output to the total energy input, and so represents a key performance parameter… In overall conversion terms, our heat powered cooler is competitive with other systems.’
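The COP definition above is just a ratio; as a minimal sketch (the kilowatt figures here are hypothetical, not measurements from the project):

```python
def coefficient_of_performance(cooling_output_kw, total_energy_input_kw):
    # COP as defined in the article: cooling delivered per unit of total
    # energy supplied to the desiccant cooling system.
    return cooling_output_kw / total_energy_input_kw

# Hypothetical example: 1.2 kW of cooling from 1.0 kW of total input
# gives a COP of 1.2, i.e. above the 1.0 threshold reported in the tests.
print(coefficient_of_performance(1.2, 1.0))
```

A COP above 1.0 means the system delivers more cooling than the energy fed into it, which is why it is the headline figure for a heat-powered cooler.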

TRISOFC impact

Now that the concept has been proven, the next steps will be to prove long-term durability, scale up production and reduce costs further. Dr Worall and the team expect the system developed under TRISOFC to have a significant impact on a number of levels: ‘This system is a first-of-its-kind fuel cell tri-generator and has great potential to increase the utilisation of the available energy, reduce costs, add value, reduce primary energy use and emissions and promote distributed energy production.’

One group that could feel the benefits the most is consumers, as Dr Worall explains: ‘Most buildings use primary energy for heating, cooling and electricity, so by generating electricity at domestic level, consumers can potentially benefit from the sale of excess electricity production (depending on local energy costs, incentives and tariffs), reduce demand for heat for the provision of hot water and heating, and reduce the provision of electricity for cooling. As we are getting three for the price of one, consumers should benefit financially and in terms of reducing their impact on the environment.’

The team is confident that both as an integrated system and as individual components the LT-SOFC tri-generation system has potential for commercialisation. ‘We are actively engaged with industry and end users to develop user-friendly, reliable and financial systems and subsystems,’ Dr Worall concludes.


Contacts and sources:

Link Between Amazon Fires and Devastating Hurricanes Found

Researchers from the University of California, Irvine and NASA have uncovered a remarkably strong link between high wildfire risk in the Amazon basin and the devastating hurricanes that ravage North Atlantic shorelines. The climate scientists' findings appear in the journal Geophysical Research Letters near the 10th anniversary of Hurricane Katrina's calamitous August 2005 landfall at New Orleans and the Gulf Coast.

This map of ocean surface temperatures shows how warm waters in the North Atlantic fueled Hurricane Katrina. NASA and UCI researchers have found that the same conditions heighten fire risk in the Amazon basin.

Credits: Scientific Visualization Studio, NASA's Goddard Space Flight Center

"Hurricane Katrina is, indeed, part of this story," said UCI Earth system scientist James Randerson, senior author on the paper. "The ocean conditions that led to a severe hurricane season in 2005 also reduced atmospheric moisture flow to South America, contributing to a once in a century dry spell in the Amazon. The timing of these events is perfectly consistent with our research findings."

Lead author Yang Chen discovered that in addition to the well-understood east-west influence of El Niño on the Amazon, there is also a north-south control on fire activity that is set by the state of the tropical North Atlantic. The North Atlantic has two modes. In years of high numbers of hurricanes and high fire risk, warm waters in the North Atlantic help hurricanes develop and gather strength and speed on their way to North American shores. They also tend to pull a large belt of tropical rainfall - known as the Intertropical Convergence Zone - to the north, Chen said, drawing moisture away from the southern Amazon.

As a consequence, ground water is not fully replenished by the end of the rainy season. Coming into the next dry spell with less water stored in the soils, plants can't evaporate and transpire as much water through their stems and leaves into the atmosphere. The atmosphere gets drier and drier, creating conditions where fires can spread rapidly three to six months later. Ground-clearing fires set by farmers for agriculture or new deforestation can easily jump from fields to dense forests under these conditions.

"Understory fires in Amazon forests are extremely damaging, since most rainforest trees are not adapted to fire," noted co-author Douglas Morton at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The synchronization of forest damages from fires in South America and tropical storms in North America highlights how important it is to consider the Earth as a system."

The team pored over years of historical storm and sea surface temperature data from the National Oceanic & Atmospheric Administration and fire data gathered by NASA satellites. The results showed a striking pattern, a progression over the course of several months from a warm condition in the tropical North Atlantic to a dry and fire-prone southern Amazon, and more destructive hurricane landfalls in North and Central America.

According to Randerson, the importance of this study is that it may help meteorologists develop better seasonal outlooks for drought and fire risk in the Amazon, leveraging large investments by NOAA and other agencies in understanding hurricanes.

"The fires we see in the U.S. West are generally lightning-ignited, whereas they are mostly human-ignited in the Amazon, but climate change can have really large effects on the fire situation in both regions," Randerson said. "Keeping fire out of the Amazon basin is critical from a carbon cycle perspective. There's a huge amount of carbon stored in tropical forests. We really want to keep the forests intact."

Contacts and sources:
Ellen Gray
NASA Goddard Space Flight Center

Incense May Be As Harmful as Smoking Tobacco

Comparison between indoor use of cigarettes and incense provides surprising results

The burning of incense might need to come with a health warning. This follows the first study evaluating the health risks associated with its indoor use. The effects of incense and cigarette smoke were also compared, and made for some surprising results. The research was led by Rong Zhou of the South China University of Technology and the China Tobacco Guangdong Industrial Company in China, and is published in Springer's journal Environmental Chemistry Letters.

Burning incense at the Longhua Temple

Credit:  NosniboR80

Incense burning is a traditional and common practice in many families and in most temples in Asia. It is not only used for religious purposes, but also because of its pleasant smell. During the burning process, particle matter is released into the air. This can be breathed in and trapped in the lungs, and is known to cause an inflammatory reaction. Not much research has been done on incense as a source of air pollution, although it has been linked to the development of lung cancer, childhood leukemia and brain tumors.

Zhou's team therefore assessed the health hazards associated with using incense smoke in the home. They went one step further by comparing these results for the first time with those for mainstream cigarette smoke. Two types of incense were tested. Both contained agarwood and sandalwood, which are among the most common ingredients used to make this product. Tests were run, among others, to gauge the effects of incense and cigarette smoke on Salmonella tester strains and on the ovary cells of Chinese hamsters.

Incense smoke was found to be mutagenic, meaning that it contains chemical compounds that can change genetic material such as DNA and therefore cause mutations. It was also more cytotoxic and genotoxic than the cigarette smoke used in the study. This means that incense smoke is potentially more toxic to a cell, and especially to its genetic contents. Mutagens, genotoxins and cytotoxins have all been linked to the development of cancers.

Smoke from the sampled incense was found to consist almost exclusively (99 percent) of ultrafine and fine particles, and is therefore likely to have adverse health effects. Taken together, the four incense smoke samples contained 64 compounds. While some of these are irritants or are only slightly harmful (hypotoxic), ingredients in two of the samples are known to be highly toxic.

"Clearly, there needs to be greater awareness and management of the health risks associated with burning incense in indoor environments," says Zhou, who hopes the results will lead to an evaluation of incense products and help to introduce measures to reduce smoke exposure.

However, he warns that one should not simply conclude that incense smoke is more toxic than cigarette smoke. The small sample size, the huge variety of incense sticks on the market and differences in how it is used compared to cigarettes must be taken into account.

Contacts and sources: 
Springer Science+Business Media

Zhou, R. et al (2015). Higher cytotoxicity and genotoxicity of burning incense than cigarette, Environmental Chemistry Letters. DOI 10.1007/s10311-015-0521-7

Water Trapped in Buckyball Cage Prevents Freezing

New research by scientists from the University of Southampton has found that the two spin isomers of water respond differently to electric fields, which could provide a new way to study spin isomers at the single-molecule level.

Water molecule inside a buckyball
Credit: University of Southampton 

Water molecules exist in two forms or ‘isomers’, ortho and para, that have different nuclear spin states. In ortho-water, the nuclear spins are parallel to one another; in para-water, they are antiparallel. The conversion of ortho-water into para-water and vice versa is relevant to a broad range of scientific fields, from nuclear magnetic resonance (NMR) to astrophysics.

While it is possible to separate ortho- and para-water molecules, it is difficult to study them in bulk water because rapid proton exchange and hindered molecular rotation obscure the direct observation of the two spin isomers.

To help observe this transformation in bulk water, the Southampton research team confined single water molecules in C60 carbon cages, or ‘buckyballs’, to produce the supramolecular endofullerene H2O@C60. The yield of this chemical synthesis was improved dramatically by the team, allowing them to study bulk quantities of this substance.

The carbon cages prevent water molecules from freezing and keep them separate, so that they continue to rotate freely at very low temperatures, making it possible to study the conversion.

Since water has an electric dipole moment (a measure of the separation of positive and negative electrical charges), the researchers measured the dielectric constant of H2O@C60 at cryogenic temperatures and found that it decreases as water converts from ortho to para, in line with quantum theory and previous NMR studies.

Dr Benno Meier, Senior Research Fellow in Chemistry and lead author, says: “The bulk dielectric constant of H2O@C60 depends on the spin isomer composition of the encapsulated water molecules. The observed time-dependent change in the bulk dielectric constant at 5 K, as encapsulated water converts from the ortho to the para isomer, is due to a change in molecular polarisability on spin conversion.

“This work is a result of a long-standing and fruitful collaboration between Professors Malcolm Levitt and Richard Whitby, who have been studying the ortho to para conversion on a molecular level for several years.”

The research, which is published in Nature Communications, was funded by the Engineering and Physical Sciences Research Council (EPSRC), European Research Council (ERC) and the Wolfson Foundation.

Contacts and sources: 
University of Southampton 

Mammoth Remains - As Far As the Eye Can See - Widest Distribution of Mammoths during the Last Ice Age

Ice Age paleontologist Prof. Dr. Ralf-Dietrich Kahlke of the Senckenberg Research Station for Quaternary Paleontology in Weimar has recorded the maximum geographic distribution of the woolly mammoth during the last Ice Age and published the most accurate global map of its kind to date.

The ice-age pachyderms populated a total area of 33,301,000 square kilometers and may thus be called the most successful large mammals of this era. The study, recently published online in the scientific journal “Quaternary International,” determined that the distribution was limited by a number of climate-driven as well as climate-independent factors.

Complete left tusk of an ice-age Woolly Mammoth (Mammuthus primigenius) from the Siberian Arctic on the Taimyr Peninsula. Each individual discovery increases our knowledge about the past distribution of these Ice Age giants.
Credit: © R.-D. Kahlke/ Senckenberg Weimar

The mammoth is the quintessential symbol of the Ice Age – and the status of these shaggy pachyderms has now been confirmed scientifically. “The recent research findings show that during the last Ice Age, mammoths were the most widely distributed large mammals, thus rightfully serving as a flagship species of the glacial era,” according to Prof. Dr. Ralf-Dietrich Kahlke, an Ice Age researcher at the Senckenberg Research Station for Quaternary Paleontology in Weimar.

Kahlke has summarized the mammoth’s distribution during the most recent Ice Age, i.e., the period between approx. 110,000 and 12,000 years ago, on a worldwide map. All in all, the Weimar paleontologist determined a total distribution area of 33,301,000 square kilometers for these large mammals – almost 100 times the area of Germany today. From Portugal in the southwest across Central and Eastern Europe, Mongolia, Northern China, South Korea and Japan up to Northeastern Siberia, and thence to the American Midwest and Eastern Canada, from the shelf regions of the Arctic Ocean and Northwestern Europe to the bottom of the Adriatic Sea and to the mountains of Crimea: the fossil remains of woolly mammoths have been found everywhere.
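The "almost 100 times" comparison is easy to check (the figure for Germany's present-day area is an approximation supplied here; the article does not state it):

```python
mammoth_range_km2 = 33_301_000   # distribution area reported in the study
germany_km2 = 357_582            # approximate present-day area of Germany (assumed)

# Ratio of the mammoth's last-Ice-Age range to modern Germany's area.
ratio = mammoth_range_km2 / germany_km2
print(f"{ratio:.0f} times the area of Germany")
```

The ratio comes out in the low nineties, consistent with the article's "almost 100 times".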

The Siberian permafrost soil contains the best preserved mammoth remains in the world. A recently discovered tusk is carried off for inspection. 
Credit: © R.-D. Kahlke/ Senckenberg Weimar

“We related the computed distribution area to the real land surface at that time, thus generating the most precise map to date regarding the global habitats of the woolly mammoth,” explains Kahlke, and he adds, “Such detailed knowledge regarding the distribution area is not even available for many species of animals alive today.”

The generated map is based on decades of surveys of thousands of excavation sites on three continents. “Even sites under water, off the North American Atlantic shore and the North Sea, were taken into account. Due to the lower sea levels during the Ice Age – a large volume of water was bound in glaciers – these areas were dry land and were also inhabited by Mammuthus primigenius,” according to Kahlke. 

Prof. R.-D. Kahlke inspects the newly preserved femur of a mammoth 
Credit: © T. Korn/Senckenberg Weimar 

Only the ice-age bison (Bison priscus) had a widespread distribution similar to that of the mammoths. Kahlke explains, “The bison were clearly more variable than the woolly mammoths. Obviously, the mammoths had a higher tolerance toward various environmental factors and they were able to successfully settle in a variety of rather different open landscapes.”

But there were certain factors that limited the distribution of the hirsute pachyderms: glaciers, mountain chains, semi-deserts and deserts, as well as changes in sea level and shifts in vegetation placed restrictions on the mammoths’ distribution area. “The analysis of these limiting factors is useful in understanding the distribution of fossil species and their extinction – as with the mammoths toward the end of the last Ice Age. In addition, the data aid in comprehending current changes in the distribution areas of recent animal species,” offers Kahlke in summary.

Contacts and sources:
Senckenberg Research Institute and Natural History Museum

Monday, August 24, 2015

Study: U.S. Had 31% of World's Mass Shootings from 1966-2012

Despite having only about 5 percent of the world's population, the United States was the attack site for a disproportionate 31 percent of public mass shooters globally from 1966-2012, according to new research that will be presented at the 110th Annual Meeting of the American Sociological Association (ASA).
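The headline disproportion is a simple ratio of the two percentages quoted above (both figures are from the article):

```python
us_population_share = 0.05   # ~5% of the world's population
us_shooter_share = 0.31      # 31% of public mass shooters worldwide, 1966-2012

# How many times more mass shooters the U.S. produced than its
# population share alone would predict.
overrepresentation = us_shooter_share / us_population_share
print(f"{overrepresentation:.1f}x overrepresented")
```

In other words, the U.S. accounted for roughly six times as many public mass shooters as its share of world population would predict.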

"The United States, Yemen, Switzerland, Finland, and Serbia are ranked as the Top 5 countries in firearms owned per capita, according to the 2007 Small Arms Survey, and my study found that all five are ranked in the Top 15 countries in public mass shooters per capita," said study author Adam Lankford, an associate professor of criminal justice at the University of Alabama. "That is not a coincidence."

Credit: DHS
Lankford's study, which examines the period from 1966-2012, relies on data from the New York City Police Department's 2012 active shooter report, the FBI's 2014 active shooter report, and multiple international sources. It is the first quantitative analysis of all reported public mass shootings around the world that resulted in the deaths of four or more people. By definition, these shootings do not include incidents that occurred solely in domestic settings or were primarily gang-related, drive-by shootings, hostage taking incidents, or robberies.

"My study provides empirical evidence, based on my quantitative assessment of 171 countries, that a nation's civilian firearm ownership rate is the strongest predictor of its number of public mass shooters," Lankford said. "Until now, everyone was simply speculating about the relationship between firearms and public mass shootings. My study provides empirical evidence of a positive association between the two."

As part of his study, Lankford explored how public mass shootings in the U.S. differed from those abroad. He found that public mass shooters in other countries were 3.6 times less likely to have used multiple weapons (typically multiple guns, but occasionally a gun plus another weapon or weapons) than those in the U.S., where more than half of shooters used at least two weapons.

"Given the fact that the United States has over 200 million more firearms in circulation than any other country, it's not surprising that our public mass shooters would be more likely to arm themselves with multiple weapons than foreign offenders," Lankford said. "I was surprised, however, that the average number of victims killed by each shooter was actually higher in other countries (8.81 victims) than it was in the United States (6.87 victims) because so many horrific attacks have occurred here."

One side effect of America having experienced so many mass shootings may be that U.S. police are better trained to respond to these incidents than law enforcement in other countries, which reduces the number of casualties, Lankford suggested.

In addition to killing fewer people and using more weapons, U.S. public mass shooters were also more likely to attack in schools, factories/warehouses, and office buildings than offenders in other countries. But compared to U.S. shooters, attackers abroad were significantly more likely to strike in military settings, such as bases, barracks, and checkpoints.

While Lankford's study revealed a strong link between the civilian firearm ownership rate and the large number of public mass shooters in the United States, he said there could be other factors that make the U.S. especially prone to public mass shooting incidents.

"In the United States, where many individuals are socialized to assume that they will reach great levels of success and achieve 'the American Dream,' there may be particularly high levels of strain among those who encounter blocked goals or have negative social interactions with their peers, coworkers, or bosses," Lankford explained. "When we add depression, schizophrenia, paranoia, or narcissism into the mix, this could explain why the U.S. has such a disproportionate number of public mass shooters. Other countries certainly have their share of people who struggle with these problems, but they may be less likely to indulge in the delusions of grandeur that are common among these offenders in the U.S., and, of course, less likely to get their hands on the guns necessary for such attacks."

In terms of the study's policy implications, Lankford said, "The most obvious implication is that the United States could likely reduce its number of school shootings, workplace shootings, and public mass shootings in other places if it reduced the number of guns in circulation."

There is evidence that such an approach could be successful, according to Lankford. "From 1987-1996, four public mass shootings occurred in Australia," Lankford said. "Just 12 days after a mass shooter killed 35 people in the last of these attacks, Australia agreed to pass comprehensive gun control laws. It also launched a major buyback program that reduced Australia's total number of firearms by 20 percent. My study shows that in the wake of these policies, Australia has yet to experience another public mass shooting."

Contacts and sources:

Evidence of Fountains of Fire on the Moon

Tiny beads of volcanic glass found on the lunar surface during the Apollo missions are a sign that fire fountain eruptions took place on the Moon's surface. Now, scientists from Brown University and the Carnegie Institution for Science have identified the volatile gas that drove those eruptions.

Fire fountains, a type of eruption that occurs frequently in Hawaii, require the presence of volatiles mixed in with the erupting lava. Volatile compounds turn into gas as the lava rises from the depths. The expansion of that gas causes lava to blast into the air once it reaches the surface, a bit like taking the lid off a shaken bottle of soda.

"The question for many years was what gas produced these sorts of eruptions on the Moon," said Alberto Saal, associate professor of earth, environmental, and planetary sciences at Brown and corresponding author of the new research. "The gas is gone, so it hasn't been easy to figure out."

Melt inclusions are tiny dots of magma frozen within olivine crystals. The crystals lock in volatile elements that might otherwise have escaped from the magma. Researchers have shown that melt inclusions within volcanic glasses from the Moon contain carbon. They conclude that gas-phase carbon likely drove the "fire fountain" eruptions that produced the glass.

Credit: Saal Lab / Brown University

The research, published in Nature Geoscience, suggests that lava associated with lunar fire fountains contained significant amounts of carbon. As it rose from the lunar depths, that carbon combined with oxygen to make substantial amounts of carbon monoxide (CO) gas. That CO gas was responsible for the fire fountains that sprayed volcanic glass over parts of the lunar surface.

For many years, the Moon was thought to be devoid of volatiles like hydrogen and carbon. It wasn't until the last decade or so that volatiles were definitively detected in lunar samples. In 2008, Saal and colleagues detected water in lunar volcanic beads. They followed that discovery with detections of sulfur, chlorine and fluorine. While it became apparent that the Moon was not completely depleted of volatiles as was once thought, none of the volatiles that had been detected were consistent with fire fountain eruptions. For example, if water had been the driving force, there should be mineralogical signatures in recovered samples. There are none.

For this research, Saal and his colleagues carefully analyzed glass beads brought back to Earth from the Apollo 15 and 17 missions. In particular, they looked at samples that contained melt inclusions, tiny dots of molten magma that became trapped within crystals of olivine. The crystals trap gases present in the magma before they can escape.

Although other volatiles had previously been detected in the lunar volcanic glasses and melt inclusions, measuring carbon remained elusive because of the high detection limits of the available analytical techniques. Erik Hauri of the Carnegie Institution for Science developed a state-of-the-art ion probe technique that reduces the detection limit for carbon by two orders of magnitude, allowing measurements as low as 0.1 part per million.
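As a back-of-envelope check on the figures quoted above (using only numbers from the text), a two-order-of-magnitude improvement ending at 0.1 ppm implies the earlier techniques bottomed out around 10 ppm:

```python
# Back-of-envelope check of the figures quoted above: a detection limit
# of 0.1 ppm after a two-order-of-magnitude improvement implies the
# earlier techniques bottomed out around 10 ppm.
new_limit_ppm = 0.1      # stated NanoSIMS carbon detection limit
improvement = 10 ** 2    # "two orders of magnitude"
old_limit_ppm = new_limit_ppm * improvement
print(old_limit_ppm)     # roughly 10 ppm
```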

"This breakthrough depended on the ability of Carnegie's NanoSIMS ion probe to measure incredibly low levels of carbon, on objects that are the diameter of a human hair," said Hauri. "It is really a remarkable achievement both scientifically and technically."

The researchers probed the melt inclusions using secondary ion mass spectrometry. They calculated that the samples initially contained 44 to 64 parts per million of carbon. Having detected carbon, the researchers devised a theoretical model of how gases would escape from lunar magma at various depths and pressures, calibrated against the results of high-pressure lab experiments. The model had long been used for Earth; Saal and colleagues changed several parameters to match the composition and conditions affecting lunar magma.

The model showed that carbon, as it combines with oxygen to form CO gas, would have degassed before other volatiles.

"Most of the carbon would have degassed deep under the surface," Saal said. "Other volatiles like hydrogen degassed later, when the magma was much closer to the surface and after the lava began breaking up into small globules. That suggests carbon was driving the process in its early stages."
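The ordering Saal describes can be illustrated with a deliberately simplified sketch. This is not the authors' calibrated model: the solubility law and every constant below are hypothetical, chosen only to show how a poorly soluble gas (CO) saturates and escapes at depth while a more soluble one (water) holds on until the magma nears the surface.

```python
# Toy degassing sketch (illustrative only; not the paper's calibrated model).
# Assume a Henry's-law-like solubility: the melt can hold k * P ppm of a
# volatile at pressure P. CO is given a much smaller k than water, so it
# saturates (and starts escaping) at far greater depth.
def exsolved_ppm(initial_ppm, k_ppm_per_bar, pressure_bar):
    """Gas released once the melt can no longer hold the volatile."""
    dissolved = min(initial_ppm, k_ppm_per_bar * pressure_bar)
    return initial_ppm - dissolved

k_co, k_h2o = 0.01, 5.0        # hypothetical solubility constants (ppm/bar)
c_init, w_init = 50.0, 1000.0  # assumed initial volatile contents (ppm)

for p_bar in (4000, 1000, 100, 10):   # decreasing pressure as magma rises
    print(p_bar, exsolved_ppm(c_init, k_co, p_bar),
          exsolved_ppm(w_init, k_h2o, p_bar))
# CO begins escaping while the magma is still deep; water only near the top.
```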

In addition to providing a potential answer to longstanding questions surrounding lunar fire fountains, the findings also serve as more evidence that some volatile reservoirs in the Moon's interior share a common origin with reservoirs in the Earth, the researchers say.

The amount of carbon detected in the melt inclusions was found to be very similar to the amount of carbon found in basalts erupted at Earth's mid-ocean ridges. Saal and his colleagues have previously shown that Earth and the Moon have similar concentrations of water and other volatiles. They have also shown that hydrogen isotope ratios in lunar samples are similar to those of Earth.

If volatile reservoirs on the Earth and Moon do indeed share a common source, it has implications for understanding the Moon's origin. Scientists believe the Moon formed when Earth was hit by a Mars-size object very early in its history. Debris from that impact accreted to form the Moon.

"The volatile evidence suggests that either some of Earth's volatiles survived that impact and were included in the accretion of the Moon or that volatiles were delivered to both the Earth and Moon at the same time from a common source -- perhaps a bombardment of primitive meteorites," Saal said.

Contacts and sources:
Kevin Stacey
Brown University

‘Kathryn’s Wheel’ Collision Lights Up Galaxy

A spectacular collision between galaxies has been spotted near the Milky Way. Two small star systems are slamming into each other, producing a colourful firework display.

Discovered by academics from the University of Manchester and the University of Hong Kong, the so-called ‘bull's-eye’ collision is happening just 30 million light years away from Earth, in a relatively nearby galaxy.

Shock-waves from the collision compress reservoirs of gas in each galaxy and trigger the formation of new stars. This creates a spectacular ring of intense emission, and lights up the system like a Catherine wheel on bonfire night. Such systems are very rare and arise from collisions between two galaxies of similar mass.

The discovery, the closest such system ever found, is announced today by a team of astronomers led by Professor Quentin Parker at the University of Hong Kong and Professor Albert Zijlstra at the University of Manchester. The scientists publish their results in the journal Monthly Notices of the Royal Astronomical Society.

It has been dubbed “Kathryn’s Wheel” both after the famous firework that it resembles and after Kathryn Zijlstra, who is married to Prof Zijlstra.

Galaxies grow through collisions but it is rare to catch one in the process, and extremely rare to see a bull's-eye collision in progress. Fewer than 20 systems with complete rings are known.

Kathryn's Wheel was discovered during a special wide-field survey of the southern Milky Way undertaken with the UK Schmidt Telescope in Australia. The survey used a narrow optical wavelength band centred on the red “H-alpha” emission line of hydrogen gas. This rare jewel was uncovered during a search of the survey images for the remnants of dying stars in our Milky Way. The authors were surprised also to find this spectacular cosmic ring, sitting remotely behind the dust and gas of the Milky Way in the constellation of Ara (the Altar).

Kathryn's Wheel 

The newly discovered ring galaxy is seven times closer than anything found before, and forty times closer than the most famous example of collisional ring galaxies, the ‘Cartwheel’ galaxy. Kathryn's Wheel is located behind a dense star field and close to a very bright foreground star, which is why it had not been noted before. There are very few other galaxies in its neighbourhood: the odds of a collision in such an empty region of space are low.

Professor Zijlstra said: “This is a very exciting find because it will allow astronomers to study how collisions cause star formation, how long the collision takes, and what types of stars form.

“It is not often that you get to name any objects in the sky. But I think Kathryn’s Wheel is particularly fitting, resembling as it does a firework and continuing the tradition of naming objects after loved ones.”

Professor Parker said: “Not only is this system visually stunning, but it’s close enough to be an ideal target for detailed study. The ring is also quite low in mass – a few thousand million Suns or less than 1% of the Milky Way – so our discovery shows that collision rings can form around much smaller galaxies than we thought.”
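Professor Parker's "less than 1%" figure is easy to verify, provided one assumes a total Milky Way mass; the value below (roughly a trillion solar masses, including dark matter) is our assumption, as the article does not state one:

```python
# Checking "a few thousand million Suns or less than 1% of the Milky Way".
# The Milky Way mass below is our assumption (a commonly quoted total
# including dark matter); the article does not state one.
ring_mass_suns = 3e9        # "a few thousand million Suns"
milky_way_suns = 1e12       # assumed total Milky Way mass
fraction_percent = ring_mass_suns / milky_way_suns * 100
print(fraction_percent)     # about 0.3 percent, comfortably under 1%
```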

Smaller galaxies are more common than large ones, implying that collisional rings could be ten times as common as previously thought. The authors plan more detailed studies with larger telescopes, since the system is currently the only one of its kind close enough to be studied in high detail.

Contacts and sources:
Sam Wood
University of Manchester

Citation:  The new work appears in Monthly Notices of the Royal Astronomical Society, Oxford University Press http://mnras.oxfordjournals.org/lookup/doi/10.1093/mnras/stv1432

Bizarre Bat Found With Longest Tongue of Any Mammal Relative to Its Size

The Wildlife Conservation Society (WCS) reports that the groundbreaking Bolivian scientific expedition, Identidad Madidi, has found a bizarre bat along with a new species of big-headed or robber frog (Oreobates sp. nov.) from the Craugastoridae family in Madidi National Park.


The researchers found the bizarre tube-lipped nectar bat (Anoura fistulata) – the first record of this species in the park. 
Credit: Mileniusz Spanowicz/WCS

The species was described in Ecuador just a decade ago and is known from only three records. It has the longest tongue relative to body size of any mammal, stretching 8.5 cm to reach into the deepest flowers.

The frog was found during the first leg of an 18-month-long expedition to chronicle the staggering wildlife living in what is believed to be the world’s most biodiverse park.

James Aparicio and Mauricio Ocampo, two professional herpetologists from the Bolivian Faunal Collection and the National Natural History Museum, immediately suspected they had found something exceptional in the first week of the expedition in the tropical montane savannas and gallery forests of the Apolo region of Bolivia. Subsequent examination of available literature supports this discovery as a probable new species for science to be confirmed with forthcoming genetic studies.

James Aparicio said, “Robber frogs are small to medium-sized frogs distributed in the Andes and Amazon region and to date there are 23 known species. As soon as we saw these frogs’ distinctive orange inner thighs, it aroused our suspicions about a possible new species, especially because this habitat has never really been studied in detail before Identidad Madidi.”

Mauricio Ocampo added, “We have spent the last two months ruling out known species at the Bolivian Faunal Collection and also from published accounts, especially recently described species from southern Peru, but we are now confident that this will indeed be confirmed as a new species for science once genetic analyses are completed.”

Identidad Madidi is a multi-institutional effort to describe still unknown species and to showcase the wonders of Bolivia’s extraordinary natural heritage at home and abroad. The expedition officially began on June 5, 2015, and over 18 months will visit 14 sites as a team of Bolivian scientists works to expand existing knowledge of Madidi’s birds, mammals, reptiles, amphibians, and fish along an altitudinal gradient descending more than 5,000 meters (more than 16,000 feet) from the mountains of the high Andes into the tropical Amazonian forests and grasslands of northern Bolivia.

Participating institutions include the Ministry of the Environment and Water, the Bolivian National Park Service, the Vice Ministry of Science and Technology, Madidi National Park, the Bolivian Biodiversity Network, WCS, the Institute of Ecology, Bolivian National Herbarium, Bolivian Faunal Collection and Armonia with funding from the Gordon and Betty Moore Foundation and WCS.

Teresa Chávez, Director of the Bolivian Biodiversity and Protected Areas Directorate expressed her satisfaction with the scientific results of the Identidad Madidi expedition: “The description of a new species of robber frog (Oreobates) for science is important news for the country as it confirms the extraordinary biodiversity of Madidi National Park and demonstrates the importance of scientific research in protected areas.”

Across the first two study sites in June and July the Identidad Madidi team registered 208 and 254 species of vertebrates respectively, including an impressive 60 species of vertebrates that are new records for the official park list: 15 fish, 5 amphibians, 11 reptiles, 4 birds and 25 mammals. Five of these additions, three catfish, a lizard and another frog, are candidate new species for science, and the team continues efforts to determine their identity. Notable new records for the park include the incredible tube-lipped nectar bat (Anoura fistulata), with its record-breaking tongue, only the fourth continental distribution record since the species' discovery in 2005; the beautiful but deadly annellated coral snake (Micrurus annellatus); the bizarre Hagedorn’s tube-snouted ghost knifefish (Sternarchorhynchus hagedornae); and the long-tailed rice rat (Nephelomys keaysi).
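The tally quoted above is internally consistent; summing the new records by group reproduces the stated total:

```python
# Sanity check of the figures above: the new park records by group
# should sum to the stated 60 vertebrate additions.
new_records = {"fish": 15, "amphibians": 5, "reptiles": 11,
               "birds": 4, "mammals": 25}
total = sum(new_records.values())
print(total)  # 60
```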

Dr. Robert Wallace of the Wildlife Conservation Society stated, “This is just the beginning. We are incredibly proud of the team’s efforts across the first two study sites and while we are expecting more new species for science, just as important is the astounding number of additional species confirmed for Madidi, further establishing it as the world's most biologically diverse park.”

The next leg of the expedition will begin on August 20th and will explore three study sites in the High Andes of Madidi, specifically within the Puina valley between 3,750 meters (12,303 feet) and 5,250 meters (17,224 feet) above sea level in Yungas paramo grasslands, Polylepis forests and high mountain puna vegetation.

Wallace added, “The success of the communication and social media campaign is also especially pleasing for the scientific team and you can follow the adventure online at www.identidadmadidi.org, www.facebook.com/IdentidadMadidi, #IDMadidi.”

Contacts and sources:
Wildlife Conservation Society (WCS)

Satellites Yield Clues to 70-Year-Old Solar Mystery

Solar physicists have captured the first direct observational signatures of resonant absorption, thought to play an important role in solving the "coronal heating problem" which has defied explanation for over 70 years.

An international research team from Japan, the U.S.A., and Europe led by Drs. Joten Okamoto and Patrick Antolin combined high resolution observations from JAXA's Hinode mission and NASA's IRIS (Interface Region Imaging Spectrograph) mission, together with state-of-the-art numerical simulations and modeling from NAOJ's ATERUI supercomputer. In the combined data, they were able to detect and identify the observational signatures of resonant absorption.

(Left) For reference, this is an image of the entire Sun taken by SDO/AIA in extreme ultra-violet light (false color). (Right) An image of a solar prominence at the limb of the Sun was taken by Hinode/SOT in visible light (Ca II H line, false color). As shown in the image, a prominence is composed of long, thin structures called threads. A scale model of the Earth is shown on the right for reference.


Resonant absorption is a process where two different types of magnetically driven waves resonate, strengthening one of them. In particular this research looked at a type of magnetic waves known as Alfvénic waves which can propagate through a prominence (a filamentary structure of cool, dense gas floating in the corona). Here, for the first time, researchers were able to directly observe resonant absorption between transverse waves and torsional waves, leading to a turbulent flow which heats the prominence. Hinode observed the transverse motion and IRIS observed the torsional motion; these results would not have been possible without both satellites.

This new information can help explain how the solar corona reaches temperatures of 1,000,000 degrees Celsius: the so-called "coronal heating problem."

The solar corona, the outer layer of the Sun's atmosphere, is composed of extremely hot gas, known as plasma, with temperatures reaching millions of degrees Celsius. As the outer layer of the Sun, the part farthest from the core where the nuclear reactions powering the Sun occur, it would logically be expected to be the coolest part of the Sun. But in fact, it is 200 times hotter than the photosphere, the layer beneath it. This contradiction, dubbed "the coronal heating problem," has puzzled astrophysicists ever since the temperature of the corona was first measured over 70 years ago.
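The "200 times hotter" figure checks out against the numbers involved, taking a photosphere temperature of roughly 5,500 degrees Celsius (a standard value; the article does not quote one):

```python
# Quick check of the "200 times hotter" claim. The photosphere temperature
# below is our assumption (a standard figure); the article does not give one.
corona_c = 1_000_000
photosphere_c = 5_500
ratio = corona_c / photosphere_c
print(round(ratio))  # ~182, i.e. on the order of 200
```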

Space-borne missions to observe the Sun and other technological advances have revealed that the magnetic field of the Sun plays an essential role in this riddle. But the key to solving the "coronal heating problem" is understanding how magnetic energy can be converted efficiently into heat in the corona. There have been two competing theories.

The first theory involves solar flares. Although each flare converts large amounts of magnetic energy into thermal energy, the overall frequency of solar flares is not high enough to account for all of the energy needed to heat and maintain the solar corona. To solve this discrepancy, the idea of "nanoflares" was introduced. It is thought that nanoflares, miniature solar flares, occur continuously throughout the corona and that the sum of their actions converts enough magnetic energy into heat to make up the difference. Unfortunately, such nanoflares have yet to be detected.

The second hypothesis is based on magnetically driven waves. Thanks to space missions such as the Japanese "Hinode" mission (launched in 2006), we now know that the solar atmosphere is permeated with "Alfvénic" waves. These magnetically driven waves can carry significant amounts of energy along the magnetic field lines, enough energy in fact to heat and maintain the corona. But for this theory to work, there needs to be a mechanism through which this energy can be converted into heat.

To look for this conversion mechanism, the research team combined data from two state-of-the-art missions: Hinode and the IRIS imaging and spectroscopic satellite (the newest NASA solar mission, launched in 2013).

Both instruments targeted the same solar prominence (see Figure 1). A prominence is a filamentary bundle of cool, dense gas floating in the corona. Here, 'cool' is a relative term; a prominence is typically about 10,000 degrees. Although denser than the rest of the corona, a prominence doesn't sink because magnetic field lines act like a net to hold it aloft. The individual filaments composing the prominence, called threads, follow the magnetic field lines.

Signs of resonant absorption are shown. Alfvénic waves propagating in the prominence produce transverse oscillations of the threads. On a graph plotting displacement vs. time, this motion appears as a wavy structure (shown in yellow in the top left panel). The waves resonate and produce a characteristic torsional flow which was observed by IRIS (purple dots in the panel). This flow becomes turbulent and heats the plasma. 3D numerical simulations of an oscillating prominence thread (top-right panel) successfully reproduce this process. The schematic diagram on the bottom shows the evolution of the thread's cross-section.


Hinode's very high spatial and temporal resolution allowed researchers to detect small motions in the 2-dimensional plane of the image (up/down and left/right). To understand the complete 3-dimensional phenomenon, researchers used IRIS to measure the Doppler velocity (i.e. velocity along the line of sight, in-to/out-of the picture). The IRIS spectral data also provided vital information about the temperature of the prominence.

These different instruments allow the satellites to detect different varieties of Alfvénic waves: Hinode can detect transverse waves while IRIS can detect torsional waves. Comparing the two data sets shows that these two types of waves are indeed synchronized, and that at the same time there is a temperature increase in the prominence from 10,000 degrees to more than 100,000 degrees. This is the first time that such a close relationship has been established between Alfvénic waves and prominence heating.

But the waves are not synchronized in the way scientists expected. Think of moving a spoon back-and-forth in a cup of coffee: the half-circular torsional flows around the edges of the spoon appear instantly as the spoon moves. But in the case of the prominence threads, the torsional motion is half-a-beat out of sync with the transverse motion driving it: there is a delay between the maximum speed of the transverse motions and the maximum speed of the torsional motion (see Figure 2), like the delay between the motion of the hips of a dancer in a long skirt and the motions of the skirt hem.

To understand this unexpected pattern the team used NAOJ's ATERUI supercomputer to conduct 3D numerical simulations of an oscillating prominence thread. Of the theoretical models they tested, one involving resonant absorption provides the best match to the observed data. In this model, transverse waves resonate with torsional waves, strengthening the torsional waves; similar to how a child on a swing can add energy to the swing, causing it to swing higher and faster, by moving his body in time with the motion. The simulations show that this resonance occurs within a specific layer of the prominence thread close to its surface (see Figure 3). When this happens, a half-circular torsional flow around the boundary is generated and amplified. This is known as the resonant flow. Because of its location close to the boundary, the maximum speed of this flow is delayed by half-a-beat from the maximum speed of the transverse motion, just like the pattern actually observed (see Figure 2).
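The swing analogy can be made concrete with the textbook driven, damped oscillator, used here as a stand-in for a prominence thread pumped by transverse waves. This is an illustrative idealisation, not the team's 3D simulation; the closed-form amplitude and the quarter-cycle phase lag at resonance are standard results, and all the constants below are arbitrary.

```python
import math

def steady_amplitude(drive_freq, nat_freq=1.0, damping=0.05, force=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator
    (standard closed form); a toy stand-in for a thread pumped by
    transverse Alfvenic waves."""
    return force / math.sqrt((nat_freq**2 - drive_freq**2)**2
                             + (2 * damping * drive_freq)**2)

on_res = steady_amplitude(1.0)   # driven exactly at the natural frequency
off_res = steady_amplitude(0.5)  # driven well away from resonance
print(on_res, off_res)           # the resonant response is far larger

# At resonance the response lags the drive by a quarter cycle (pi/2 in
# this idealised model), echoing the delay observed between the
# transverse driving motion and the torsional resonant flow.
phase_lag = math.atan2(2 * 0.05 * 1.0, 1.0**2 - 1.0**2)
print(phase_lag / math.pi)       # 0.5, i.e. a quarter-cycle lag
```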

(Top) Numerical simulations by ATERUI show how resonant absorption can explain the relationship observed between the resonant flow (purple) and the transverse motion (green). The simulations also show that turbulence appears. (Bottom) A time sequence showing one complete cycle of the resonant flow in relation to the transverse motion. 
Credit: NAOJ

The simulations further reveal that this resonant flow along the surface of a thread can become turbulent. The appearance of turbulence is of great importance since it is effective at converting wave energy into heat energy. Another important effect of this turbulence is to enlarge the resonant flow predicted in the models to the size actually observed.

This model can explain the main features of the observations as the results of a two-step process. First resonant absorption transfers energy to the torsional motions, producing a resonant flow along the surface of the prominence thread. Then turbulence in this strengthened resonant flow converts the energy into heat (see Figure 2).

This work shows how the power of multiple satellites, such as Hinode and IRIS, can be combined to investigate long-standing astrophysical problems and will serve as an example for other research looking for similar heating in other solar observations.

These results were published in The Astrophysical Journal, Vol. 809, in August 2015.

Contacts and sources: 
Masaaki Hiramatsu
National Institutes of Natural Sciences