
Unseen Is Free


Thursday, May 21, 2015

Weird One-of-a-Kind Star Called "Nasty 1"

Astronomers using NASA's Hubble Space Telescope have uncovered surprising new clues about a hefty, rapidly aging star whose behavior has never been seen before in our Milky Way galaxy. In fact, the star is so weird that astronomers have nicknamed it "Nasty 1," a play on its catalog name of NaSt1. The star may represent a brief transitory stage in the evolution of extremely massive stars.

Astronomers using NASA's Hubble Space Telescope have uncovered surprising new clues about a hefty, rapidly aging star whose behavior has never been seen before in our Milky Way galaxy. Astronomers have nicknamed it 'Nasty 1,' a play on its catalog name of NaSt1.

Credit: NASA/Space Telescope

First discovered several decades ago, Nasty 1 was identified as a Wolf-Rayet star, a rapidly evolving star that is much more massive than our sun. The star loses its hydrogen-filled outer layers quickly, exposing its super-hot and extremely bright helium-burning core.

But Nasty 1 doesn't look like a typical Wolf-Rayet star. The astronomers using Hubble had expected to see twin lobes of gas flowing from opposite sides of the star, perhaps similar to those emanating from the massive star Eta Carinae, which is a Wolf-Rayet candidate. Instead, Hubble revealed a pancake-shaped disk of gas encircling the star. The vast disk is nearly 2 trillion miles wide, and may have formed from an unseen companion star that snacked on the outer envelope of the newly formed Wolf-Rayet. Based on current estimates, the nebula surrounding the stars is just a few thousand years old and lies as close as 3,000 light-years from Earth.
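
As a rough sense of scale, a quick back-of-the-envelope check (using the article's round numbers, not values from the study itself) shows why Hubble can resolve such a disk at that distance:

# Back-of-the-envelope check using the article's round numbers (not values
# from the study itself): how big a ~2-trillion-mile disk appears on the
# sky from ~3,000 light-years away.
MILES_PER_LIGHT_YEAR = 5.88e12

disk_width_ly = 2e12 / MILES_PER_LIGHT_YEAR   # about 0.34 light-years
distance_ly = 3000.0

angle_rad = disk_width_ly / distance_ly       # small-angle approximation
angle_arcsec = angle_rad * 206265             # radians to arcseconds

print(f"apparent size ~ {angle_arcsec:.0f} arcsec")  # ~23", far above Hubble's ~0.05" resolution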

"We were excited to see this disk-like structure because it may be evidence for a Wolf-Rayet star forming from a binary interaction," said study leader Jon Mauerhan of the University of California, Berkeley. "There are very few examples in the galaxy of this process in action because this phase is short-lived, perhaps lasting only a hundred thousand years, while the timescale over which a resulting disk is visible could be only ten thousand years or less."

In the team's proposed scenario, a massive star evolves very quickly, and as it begins to run out of hydrogen, it swells up. Its outer hydrogen envelope becomes more loosely bound and vulnerable to gravitational stripping, or a type of stellar cannibalism, by a nearby companion star. In that process, the more compact companion star winds up gaining mass, and the original massive star loses its hydrogen envelope, exposing its helium core to become a Wolf-Rayet star.

Another way Wolf-Rayet stars are said to form is when a massive star ejects its own hydrogen envelope in a strong stellar wind streaming with charged particles. The binary interaction model where a companion star is present is gaining traction because astronomers realize that at least 70 percent of massive stars are members of double-star systems. Direct mass loss alone also cannot account for the number of Wolf-Rayet stars relative to other less-evolved massive stars in the galaxy.

"We're finding that it is hard to form all the Wolf-Rayet stars we observe by the traditional wind mechanism, because mass loss isn't as strong as we used to think," said Nathan Smith of the University of Arizona in Tucson, who is a co-author on the new NaSt1 paper. "Mass exchange in binary systems seems to be vital to account for Wolf-Rayet stars and the supernovae they make, and catching binary stars in this short-lived phase will help us understand this process."

But the mass transfer process in mammoth binary systems isn't always efficient. Some of the stripped matter can spill out during the gravitational tussle between the stars, creating a disk around the binary.

"That's what we think is happening in Nasty 1," Mauerhan said. "We think there is a Wolf-Rayet star buried inside the nebula, and we think the nebula is being created by this mass-transfer process. So this type of sloppy stellar cannibalism actually makes Nasty 1 a rather fitting nickname."

The star's catalogue name, NaSt1, is derived from the first two letters of the surnames of the two astronomers who discovered it in 1963, Jason Nassau and Charles Stephenson.

Viewing the Nasty 1 system hasn't been easy. The system is so heavily cloaked in gas and dust that it blocks even Hubble's view of the stars. As a result, Mauerhan's team cannot measure the mass of each star, the distance between them, or the amount of material spilling onto the companion star.

Previous observations of Nasty 1 have provided some information on the gas in the disk. The material, for example, is travelling about 22,000 miles per hour in the outer nebula, slower than the outflows of similar stars. The comparatively slow speed indicates that the star expelled its material through a less violent event than Eta Carinae's explosive outbursts, where the gas is travelling hundreds of thousands of miles per hour.

Nasty 1 may also be shedding the material sporadically. Past studies in infrared light have shown evidence for a compact pocket of hot dust very close to the central stars. Recent observations by Mauerhan and colleagues at the University of Arizona, using the Magellan telescope at Las Campanas Observatory in Chile, have resolved a larger pocket of cooler dust that may be indirectly scattering the light from the central stars. The presence of warm dust implies that it formed very recently, perhaps in spurts, as chemically enriched material from the two stellar winds collides at different points, mixes, flows away, and cools. Sporadic changes in the wind strength or the rate the companion star strips the main star's hydrogen envelope might also explain the clumpy structure and gaps seen farther out in the disk.

To measure the hypersonic winds from each star, the astronomers turned to NASA's Chandra X-ray Observatory. The observations revealed scorching hot plasma, indicating that the winds from both stars are indeed colliding, creating high-energy shocks that glow in X-rays. These results are consistent with what astronomers have observed from other Wolf-Rayet systems.

The chaotic mass-transfer activity will end when the Wolf-Rayet star runs out of material. Eventually, the gas in the disk will dissipate, providing a clear view of the binary system.

"What evolutionary path the star will take is uncertain, but it will definitely not be boring," said Mauerhan. "Nasty 1 could evolve into another Eta Carinae-type system. To make that transformation, the mass-gaining companion star could experience a giant eruption because of some instability related to the acquiring of matter from the newly formed Wolf-Rayet. Or, the Wolf-Rayet could explode as a supernova. A stellar merger is another potential outcome, depending on the orbital evolution of the system. The future could be full of all kinds of exotic possibilities depending on whether it blows up or how long the mass transfer occurs, and how long it lives after the mass transfer ceases."


Contacts and sources:
Ray Villard
NASA/Goddard Space Flight Center 

Bronze Age Egtved Girl Found in Denmark Came From Germany's Black Forest


The Bronze Age Egtved Girl came from far away, as revealed by strontium isotope analyses of the girl's teeth. The analyses show that she was born and raised outside Denmark's current borders, and strontium isotope analyses of the girl's hair and a thumb nail also show that she travelled great distances during the last two years of her life.

The wool from the Egtved Girl's clothing, the blanket she was covered with, and the oxhide she was laid to rest on in the oak coffin all originate from a location outside present-day Denmark.

This is the Egtved Girl's grave, from 1370 BC.
Credit:  The National Museum of Denmark

The combination of the different provenance analyses indicates that the Egtved Girl, her clothing, and the oxhide come from Schwarzwald ("the Black Forest") in South West Germany - as do the cremated remains of a six-year-old child who was buried with the Egtved Girl. The girl's coffin dates the burial to a summer day in the year 1370 BC.

Senior researcher Karin Margarita Frei, from the National Museum of Denmark and the Centre for Textile Research at the University of Copenhagen, analysed the Egtved Girl's strontium isotope signatures. The analyses were carried out in collaboration with Kristian Kristiansen from the University of Gothenburg and with the Department of Geosciences and Natural Resource Management and the Centre for GeoGenetics, both at the University of Copenhagen.

The research was made possible by support from the Danish National Research Foundation, the European Research Council, the Carlsberg Foundation and the L'Oréal Denmark-UNESCO For Women in Science Award.

The results have just been published in Scientific Reports.

The girl's movements mapped month by month

Strontium is an element that exists in the earth's crust, but its isotopic composition varies with the local geology. Humans, animals, and plants absorb strontium through water and food. By measuring the strontium isotope signatures in archaeological remains, researchers can therefore determine where humans and animals lived and where plants grew. In that sense, strontium serves as a kind of GPS for scientists.
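
To make the "GPS" analogy concrete, here is a minimal sketch of the matching step: a measured 87Sr/86Sr ratio is compared against published baseline ranges for candidate regions. The baseline values and the measured ratio below are invented placeholders, not data from the study.

# Minimal sketch of the "strontium GPS" idea. The baseline ranges and the
# measured ratio are invented placeholders, not data from the study.
REGION_BASELINES = {
    "Jutland, Denmark": (0.7080, 0.7110),
    "Black Forest, Germany": (0.7130, 0.7250),
}

def candidate_regions(measured_ratio, baselines=REGION_BASELINES):
    """Return the regions whose baseline range contains the measured ratio."""
    return [region for region, (low, high) in baselines.items()
            if low <= measured_ratio <= high]

print(candidate_regions(0.7135))  # -> ['Black Forest, Germany']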

"I have analysed the strontium isotopic signatures of the enamel from one of the Egtved Girl's first molars, which was fully formed/crystallized when she was three or four years old, and the analysis tells us that she was born and lived her first years in a region that is geologically older than and different from the peninsula of Jutland in Denmark," Karin Margarita Frei says.

Karin Margarita Frei has also traced the last two years of the Egtved Girl's life by examining the strontium isotopic signatures in the girl's 23-centimetre-long hair. The analysis shows that she had been on a long journey shortly before she died, and this is the first time that researchers have been able to so accurately track a prehistoric person's movements.

"If we consider the last two years of the girl's life, we can see that, 13 to 15 months before her death, she stayed in a place with a strontium isotope signature very similar to the one that characterizes the area where she was born. Then she moved to an area that may well have been Jutland. After a period of c. 9 to 10 months there, she went back to the region she originally came from and stayed there for four to six months before she travelled to her final resting place, Egtved. Neither her hair nor her thumb nail contains a strontium isotopic signatures which indicates that she returned to Scandinavia until very shortly before she died. As an area's strontium isotopic signature is only detectable in human hair and nails after a month, she must have come to "Denmark" and "Egtved" about a month before she passed away," Karin Margarita Frei explains.

The Black Forest Girl

If the Egtved Girl was not born in Jutland, then where did she come from? Karin Margarita Frei suggests that she came from South West Germany, more specifically the Black Forest, which is located 500 miles south of Egtved.

Considered in isolation, the Egtved Girl's strontium isotope signature could indicate that she came from Sweden, Norway or Western or Southern Europe. She could also have come from the island of Bornholm in the Baltic Sea. But when Karin Margarita Frei combines the girl's strontium isotopic signatures with those of her clothing, she can pinpoint the girl's place of origin relatively accurately.

"The wool that her clothing was made from did not come from Denmark and the strontium isotope values vary greatly from wool thread to wool thread. This proves that the wool was made from sheep that either grazed in different geographical areas or that they grazed in one vast area with very complex geology, and Black Forest's bedrock is characterized by a similarly heterogeneous strontium isotopic range," Karin Margarita Frei says.

That the Egtved Girl in all probability came from the Black Forest region in Germany comes as no surprise to professor Kristian Kristiansen from the University of Gothenburg; the archaeological finds confirm that there were close relations between Denmark and Southern Germany in the Bronze Age.

"In Bronze Age Western Europe, Southern Germany and Denmark were the two dominant centres of power, very similar to kingdoms. We find many direct connections between the two in the archaeological evidence, and my guess is that the Egtved Girl was a Southern German girl who was given in marriage to a man in Jutland so as to forge an alliance between two powerful families," Kristian Kristiansen says.

According to him, Denmark was rich in amber and traded amber for bronze. In Mycenaean Greece and in the Middle East, Baltic amber was as coveted as gold, and, through middlemen in Southern Germany, large quantities of amber were transported to the Mediterranean, and large quantities of bronze came to Denmark as payment. In the Bronze Age, bronze was as valuable a raw material as oil is today, so Denmark became one of the richest areas of Northern Europe.

"Amber was the engine of Bronze Age economy, and in order to keep the trade routes going, powerful families would forge alliances by giving their daughters in marriage to each other and letting their sons be raised by each other as a kind of security," Kristian Kristiansen says.

A great number of Danish Bronze Age graves contain human remains that are as well preserved as those found in the Egtved Girl's grave. Karin Margarita Frei and Kristian Kristiansen plan to examine these remains with a view to analysing their strontium isotope signatures.


Contacts and sources:

Senior researcher Karin Margarita Frei
National Museum of Denmark and University of Copenhagen

Professor Kristian Kristiansen
University of Gothenburg

Exploring the Mysteries of Cosmic Explosions

An automated software system developed at Los Alamos National Laboratory played a key role in the discovery of supernova iPTF 14atg and could serve as a virtual Rosetta stone, providing insight into future supernovae and their underlying physics.

A Los Alamos National Laboratory simulation of an exploding white dwarf, in which the supernova drives an expanding shock wave that collides with a torus of material accreted from a companion star. 
Credit: Los Alamos National Laboratory

"Over the past decade, rapid advances in imaging and computing technology have completely transformed time-domain astronomy," said Przemek Wozniak, the principal investigator of a Laboratory Directed Research and Development (LDRD) project that funds the Laboratory's contributions to the research. "The Intermediate Palomar Transient Factory (iPTF) is a leader among the new breed of data-intensive sky monitoring surveys that seek to discover and understand transient events of astrophysical origin."

The Laboratory is partnering with an international consortium, led by the California Institute of Technology, to conduct the iPTF project.

Type Ia supernovae, such as supernova iPTF 14atg, occur in binary systems, in which two stars orbit one another and one of the stars is a dense white dwarf. This supernova demonstrated a rarely observed phenomenon that allowed scientists to understand the underlying physics of Type Ia supernovae.

"The challenge in this work is to select transients from the torrent of images and quickly identify the ones that deserve further attention," Wozniak said. "Too many transients compete for scarce resources such as observing time on large telescopes. We are developing new machine learning technology that will allow us to tackle these big data challenges."

Researchers at Los Alamos developed an automated software system based on machine learning algorithms to reliably separate real astronomical transients from false detections. Wozniak said without machine learning technology, it is impossible to find events such as iPTF 14atg before it is too late for detailed follow-up observations that tell scientists about the broad spectral energy distribution of radiation emitted by supernovae.
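
The article describes the system only at a high level. A common way to attack this "real versus bogus" problem, sketched below with scikit-learn, is a supervised classifier trained on image-derived features of each candidate detection; the feature names and training data here are invented placeholders, not the Laboratory's actual code.

# Minimal sketch of a "real vs. bogus" transient classifier, in the spirit
# of the pipeline described above. Features and training labels are invented
# placeholders; this is not the Los Alamos system's actual code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each candidate is summarized by a few image-derived features, e.g.
# [signal-to-noise, ellipticity, distance to nearest known star].
X_train = rng.random((500, 3))
y_train = rng.integers(0, 2, 500)          # 1 = real transient, 0 = artifact

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_candidates = rng.random((5, 3))
scores = clf.predict_proba(new_candidates)[:, 1]   # probability "real"
for i, score in enumerate(scores):
    print(f"candidate {i}: P(real) = {score:.2f}")  # follow up the top scorers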

An important piece of the puzzle in the case of iPTF 14atg came from NASA's Swift satellite, which detected the event in time to catch the rapidly fading ultraviolet radiation from the young supernova.

"This excess UV emission is strong evidence that the supernova is interacting with its surrounding medium, such as an exploding white dwarf colliding with its companion star in the so-called single degenerate scenario," said Chris Fryer, a computational scientist at Los Alamos who leads the supernova simulation and modeling group at the Laboratory.

In this model, the ejecta from an exploding degenerate object called a white dwarf collide with a normal companion star, producing a UV transient lasting at most a few days. The competing double-degenerate model, which uses a pair of merging white dwarfs, predicts no UV excess.

Wozniak said Los Alamos is at the forefront of this fast-evolving field and well equipped to make important contributions in the future. He said the main idea is to automate and optimize the entire process of selecting, vetting and prioritizing transients in order to collect the most effective follow-up observations for events that matter.




Contacts and sources:
Nancy Ambrosiano
Los Alamos National Laboratory

Supernova Ignition Surprises Astronomers


Scientists have captured the early death throes of supernovae for the first time and found that the universe's benchmark explosions are much more varied than expected.

The scientists used the Kepler space telescope to photograph three Type Ia supernovae in the earliest stages of ignition. They then tracked the explosions in detail to full brightness around three weeks later, and the subsequent decline over the next few months.

Supernova SN2012fr, just below the center of the host galaxy, outshone the rest of the galaxy for several weeks.
       
Credit: Brad Tucker and Emma Kirby

They found the initial stages of a supernova explosion did not fit with the existing theories.

"The stars all blow up uniquely. It doesn't make sense," said Dr Brad Tucker from The Australian National University (ANU).

"It's particularly weird for these supernovae because even though their initial shockwaves are very different, they end up doing the same thing."

This is a timelapse compilation of the brightness of SN 2012fr over several weeks.  
Credit: ANU

Before this study, the earliest a Type Ia supernova had been glimpsed was more than 2.5 hours after ignition, after which the explosions all followed an identical pattern.

This led astronomers to theorise that supernovae, the brilliant explosions of dying stars, all occurred through an identical process.

Astronomers had thought supernovae all happened when a dense star steadily sucked in material from a large nearby neighbour until it became so dense that carbon in the star's core ignited.

"Somewhat to our surprise the results suggest an alternative hypothesis, that a violent collision between two smallish white dwarf stars sets off the explosion," said lead researcher Dr Robert Olling, from the University of Maryland in the United States.

At the peak of their brightness, supernovae are brighter than the billions of stars in their galaxy. Because of their brightness, astronomers have been able to use them to calculate distances to distant galaxies.

Dr. Brad Tucker discusses the first observations of supernova ignition which are challenging theories of how they form.  
Credit: ANU

Measurements of distant supernovae led to the discovery that some unknown force, now called dark energy, is causing the accelerated expansion of the universe. Brian Schmidt from the ANU, Saul Perlmutter (Berkeley) and Adam Riess (Johns Hopkins) were awarded the 2011 Nobel Prize in Physics for this discovery.

Dr Tucker said the new results did not undermine the discovery of dark energy.

"The accelerating universe will not now go away - they will not have to give back their Nobel prizes," he said.

"The new results will actually help us to better understand the physics of supernovae, and figure out what is this dark energy that is dominating the universe."

The findings are published in Nature.

Contacts and sources:
Dr. Brad Tucker
Australian National University

Infections Can Lower I.Q.

New research shows that infections can impair your cognitive ability measured on an IQ scale. The study is the largest of its kind to date, and it shows a clear correlation between infection levels and impaired cognition.

Anyone can suffer from an infection, for example in their stomach, urinary tract or skin. However, a new Danish study shows that a patient's troubles do not necessarily end once the infection has been treated. In fact, past infections can affect your cognitive ability as measured by an IQ test:

"Our research shows a correlation between hospitalisation due to infection and impaired cognition corresponding to an IQ score of 1.76 lower than the average. People with five or more hospital contacts with infections had an IQ score of 9.44 lower than the average. The study thus shows a clear dose-response relationship between the number of infections, and the effect on cognitive ability increased with the temporal proximity of the last infection and with the severity of the infection. 

An example of one kind of IQ test item, modeled after items in the Raven's Progressive Matrices test
Credit: Life of Riley

Infections in the brain affected the cognitive ability the most, but many other types of infections severe enough to require hospitalisation can also impair a patient's cognitive ability. Moreover, it seems that the immune system itself can affect the brain to such an extent that the person's cognitive ability measured by an IQ test will also be impaired many years after the infection has been cured," explains MD and PhD Michael Eriksen Benrós, who is affiliated with the National Centre for Register-Based Research at Aarhus BSS and the Mental Health Centre Copenhagen, University of Copenhagen.

He has conducted the research in collaboration with researchers from the University of Copenhagen and Aarhus University.

190,000 Danes participated in the study

The study is a nationwide register study tracking 190,000 Danes born between 1974 and 1994, who have had their IQ assessed between 2006 and 2012. 35% of these individuals had a hospital contact with infections before the IQ testing was conducted.

According to Senior Researcher Michael Eriksen Benrós, part of the explanation of the increased risk of impaired cognition following an infection may be as follows:

"Infections can affect the brain directly, but also through peripheral inflammation, which affects the brain and our mental capacity. Infections have previously been associated with both depression and schizophrenia, and it has also been proven to affect the cognitive ability of patients suffering from dementia. This is the first major study to suggest that infections can also affect the brain and the cognitive ability in healthy individuals."

"We can see that the brain is affected by all types of infections. Therefore, it is important that more research is conducted into the mechanisms which lie behind the connection between a person's immune system and mental health," says Michael Eriksen Benrós.

He hopes that learning more about this connection will help to prevent the impairment of people's mental health and improve future treatment.

Experiments on animals have previously shown that the immune system can affect cognitive capabilities, and more recent minor studies in humans have also pointed in that direction. Normally, the brain is protected from the immune system, but with infections and inflammation the brain may be affected. 

Michael Eriksen Benrós' research suggests that it may be the immune system that causes the cognitive impairment, not just the infection, because many different types of infections were associated with a decrease in cognitive abilities. This is the first study to examine these correlations in this manner. 

The results suggest that the immune system's response to infections can possibly affect the brain and thereby also the person's cognitive ability. This is in line with previous studies, some of which have also been conducted by Dr. Michael Eriksen Benrós, which show that infections are associated with an increased risk of developing mental disorders such as depression or schizophrenia.

The researchers behind the study hope that their results may spur on further research on the possible involvement of the immune system in the development of psychiatric disorders and whether the discovered correlations contribute to the development of mental disorders or whether they may be caused by e.g. genetic liability toward acquiring infections in patients with reduced cognitive ability. 

The study has been adjusted for social conditions and parental educational levels; however, it cannot be ruled out that heritable and environmental factors associated with infections might also influence the associations.


Contacts and sources:
Michael Eriksen Benrós, MD, PhD, Senior Researcher
Mental Health Centre Copenhagen, University of Copenhagen
National Centre for Register-Based Research, Aarhus University

Ancient Snakes Had Tiny Limbs Complete with Ankles and Toes


The ancestral snakes in the grass actually lived in the forest, according to the most detailed look yet at the iconic reptiles.

A comprehensive analysis by Yale University paleontologists reveals new insights into the origin and early history of snakes. For one thing, they kept late hours; for another, they also kept their hind legs.

This is an artist's rendering of an ancient snake, with tiny hind limbs.

Credit: Julius T. Csotonyi

"We generated the first comprehensive reconstruction of what the ancestral snake was like," said Allison Hsiang, lead author the study published online May 19 in the journal BMC Evolutionary Biology. Hsiang is a postdoctoral researcher in Yale's Department of Geology and Geophysics.

"We infer that the most recent common ancestor of all snakes was a nocturnal, stealth-hunting predator targeting relatively large prey, and most likely would have lived in forested ecosystems in the Southern Hemisphere," Hsiang said.

Snakes have always captured the imagination of humans. Their long, sinuous bodies, fearsome reputation, and great diversity -- with more than 3,400 living species -- make them one of the most recognizable groups of living vertebrate animals. Yet little has been known about how, where, and when modern snakes emerged.

The Yale team analyzed snake genomes, modern snake anatomy, and new information from the fossil record to find answers. In doing so, the researchers generated a family tree for both living and extinct snakes, illuminating major evolutionary patterns that have played out across snake evolutionary history.

"Our analyses suggest that the most recent common ancestor of all living snakes would have already lost its forelimbs, but would still have had tiny hind limbs, with complete ankles and toes. It would have first evolved on land, instead of in the sea," said co-author Daniel Field, a Yale Ph.D. candidate. "Both of those insights resolve longstanding debates on the origin of snakes."

The researchers said ancestral snakes were non-constricting, wide-ranging foragers that seized their prey with needle-like hooked teeth and swallowed them whole. They originated about 128.5 million years ago, during the middle Early Cretaceous period.

"Primate brains, including those of humans, are hard-wired to attend to serpents, and with good reason," said Jacques Gauthier, senior author of the study, a Yale professor of geology and geophysics, and curator of fossil vertebrates at the Yale Peabody Museum of Natural History. "Our natural and adaptive attention to snakes makes the question of their evolutionary origin especially intriguing."

Contacts and sources:
Jim Shelton
Yale University

3.3 Million Year Old Tools Oldest Yet Found


The discovery of stone tools dating to 3.3 million years ago is the first evidence that an even earlier group of proto-humans may have had the thinking abilities needed to figure out how to make sharp-edged tools. The stone tools mark "a new beginning to the known archaeological record," say the authors of a new paper about the discovery, published today in the leading scientific journal Nature.

The finds were made in the desert badlands near Lake Turkana, Kenya. Many other important discoveries of fossils and artifacts have been made nearby.

Credit: West Turkana Archaeological Project

"The whole site's surprising, it just rewrites the book on a lot of things that we thought were true," said geologist Chris Lepre of the Lamont-Doherty Earth Observatory and Rutgers University, a co-author of the paper who precisely dated the artifacts.

The tools "shed light on an unexpected and previously unknown period of hominin behavior and can tell us a lot about cognitive development in our ancestors that we can't understand from fossils alone," said lead author Sonia Harmand, of the Turkana Basin Institute at Stony Brook University and the Universite? Paris Ouest Nanterre.

Hominins are a group of species that includes modern humans, Homo sapiens, and our closest evolutionary ancestors. Anthropologists long thought that our relatives in the genus Homo - the line leading directly to Homo sapiens - were the first to craft such stone tools. But researchers have been uncovering tantalizing clues that some other, earlier species of hominin, distant cousins, if you will, might have figured it out.

The researchers do not know who made these oldest of tools. But earlier finds suggest a possible answer: The skull of a 3.3-million-year-old hominin, Kenyanthropus platyops, was found in 1999 about a kilometer from the tool site. A K. platyops tooth and a bone from a skull were discovered a few hundred meters away, and an as-yet unidentified tooth has been found about 100 meters away.

Sammy Lokorodi, a resident of Kenya's northwestern desert who works as a fossil and artifact hunter, led the way to a trove of 3.3 million-year-old tools.
Credit: West Turkana Archaeological Project

The precise family tree of modern humans is contentious, and so far, no one knows exactly how K. platyops relates to other hominin species. Kenyanthropus predates the earliest known Homo species by half a million years. This species could have made the tools; or, the toolmaker could have been some other species from the same era, such as Australopithecus afarensis, or an as-yet undiscovered early type of Homo.

Lepre said a layer of volcanic ash below the tool site set a "floor" on the site's age: It matched ash elsewhere that had been dated to about 3.3 million years ago, based on the ratio of argon isotopes in the material. To more sharply define the time period of the tools, Lepre and co-author and Lamont-Doherty colleague Dennis Kent examined magnetic minerals beneath, around and above the spots where the tools were found.

Sonia Harmand and Jason Lewis examine stone artifacts at the Lomekwi dig in Kenya. 
Credit: West Turkana Archaeological Project

The Earth's magnetic field periodically reverses itself, and the chronology of those changes is well documented going back millions of years. "We essentially have a magnetic tape recorder that records the magnetic field ... the music of the outer core," Kent said. By tracing the variations in the polarity of the samples, they dated the site to between 3.33 million and 3.11 million years old.
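
In outline, the dating logic works like this: the polarity pattern measured up the sediment column is matched against the independently dated geomagnetic polarity timescale, and the dated ash layer beneath the tools supplies a floor. The sketch below uses approximate, rounded chron boundaries around 3.3 million years ago for illustration only:

# Sketch of bracketing an age against the geomagnetic polarity timescale
# (GPTS). The chron boundaries below are approximate, rounded values used
# for illustration only: around 3.3 Ma the field was mostly normal (the
# Gauss chron), interrupted by the reversed Kaena and Mammoth subchrons.
GPTS = [  # (younger bound in Ma, older bound in Ma, polarity)
    (3.03, 3.12, "reversed"),  # Kaena subchron (approximate)
    (3.12, 3.22, "normal"),
    (3.22, 3.33, "reversed"),  # Mammoth subchron (approximate)
    (3.33, 3.60, "normal"),
]

def consistent_intervals(measured_polarity):
    """Return the GPTS intervals that match the polarity measured in the
    sediments at the tool-bearing layer."""
    return [(young, old) for young, old, pol in GPTS if pol == measured_polarity]

# The polarity sequence above and below the layer, plus the ~3.3 Ma ash
# "floor", narrows the match to the published 3.33-3.11 Myr window.
print(consistent_intervals("reversed"))  # -> [(3.03, 3.12), (3.22, 3.33)]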

Lepre's wife and another co-author, Rhoda Quinn of Rutgers, studied carbon isotopes in the soil, which along with animal fossils at the site allowed researchers to reconstruct the area's vegetation. This led to another surprise: The area was at that time a partially wooded, shrubby environment. Conventional thinking has been that sophisticated tool-making came in response to a change in climate that led to the spread of broad savannah grasslands, and the consequent evolution of large groups of animals that could serve as a source of food for human ancestors.

Chris Lepre of Columbia University's Lamont-Doherty Earth Observatory (back to camera) precisely dated the artifacts by analyzing layers above, around and below them for reversals in earth's magnetic field.
Credit: West Turkana Archaeological Project

One line of thinking is that hominins started knapping - banging one rock against another to make sharp-edged stones - so they could cut meat off of animal carcasses, said paper co-author Jason Lewis of the Turkana Basin Institute and Rutgers. But the size and markings of the newly discovered tools "suggest they were doing something different as well, especially if they were in a more wooded environment with access to various plant resources," Lewis said. The researchers think the tools could have been used for breaking open nuts or tubers, bashing open dead logs to get at insects inside, or maybe something not yet thought of.

"The capabilities of our ancestors and the environmental forces leading to early stone technology are a great scientific mystery," said Richard Potts, director of the Human Origins Program at the Smithsonian's National Museum of Natural History, who was not involved in the research. The newly dated tools "begin to lift the veil on that mystery, at an earlier time than expected," he said.

Potts said he had examined the stone tools during a visit to Kenya in February.

"Researchers have thought there must be some way of flaking stone that preceded the simplest tools known until now," he said. "Harmand's team shows us just what this even simpler altering of rocks looked like before technology became a fundamental part of early human behavior."

Photos of selected Lomekwi 3 stones accompanying the paper show both cores and flakes knapped from the cores that the authors say illustrate various techniques.
Credit: West Turkana Archaeological Project

Ancient stone artifacts from East Africa were first uncovered at Olduvai Gorge in Tanzania in the mid-20th century, and those tools were later associated with fossil discoveries in the 1960s of the early human ancestor Homo habilis. That species has been dated to 2.1 million to 1.5 million years ago.

Subsequent finds have pushed back the dates of humans' evolutionary ancestors, and of stone tools, raising questions about who first made that cognitive leap. The discovery of a partial lower jaw in the Afar region of Ethiopia, announced on March 4, pushes the fossil record for the genus Homo to 2.8 million years ago. Recent papers, the authors note, provide anatomical evidence that Homo had evolved into several distinct lines by 2 million years ago.

There is some evidence of more primitive tool use going back even before the new find. In 2009, researchers at Dikika, Ethiopia, dug up 3.39 million-year-old animal bones marked with slashes and other cut marks, evidence that someone used stones to trim flesh from bone and perhaps crush bones to get at the marrow inside. That is the earliest evidence of meat and marrow consumption by hominins. No tools were found at the site, so it's unclear whether the marks were made with crafted tools or simply sharp-edged stones. The only hominin fossil remains in the area dating to that time are from Australopithecus afarensis.



The new find came about almost by accident: Harmand and Lewis said that on the morning of July 9, 2011, they had wandered off on the wrong path, and climbed a hill to scout a fresh route back to their intended track. They wrote that they "could feel that something was special about this particular place." They fanned out and surveyed a nearby patch of craggy outcrops. "By teatime," they wrote, "local Turkana tribesman Sammy Lokorodi had helped [us] spot what [we] had come searching for."

By the end of the 2012 field season, excavations at the site, named Lomekwi 3, had uncovered 149 stone artifacts tied to tool-making, from stone cores and flakes to rocks used for hammering and others possibly used as anvils to strike on.

The researchers tried knapping stones themselves to better understand how the tools they found might have been made. They concluded that the techniques used "could represent a technological stage between a hypothetical pounding-oriented stone tool use by an earlier hominin and the flaking-oriented knapping behavior of [later] toolmakers." Chimpanzees and other primates are known to use a stone to hammer open nuts atop another stone. But using a stone for multiple purposes, and using one to crack apart another into a sharper tool, is more advanced behavior.

The find also has implications for understanding the evolution of the human brain. The toolmaking required a level of hand motor control that suggests that changes in the brain and spinal tract needed for such activity could have occurred before 3.3 million years ago, the authors said.

"This is a momentous and well-researched discovery," said paleoanthropologist Bernard Wood of George Washington University, who was not involved in the study. "I have seen some of these artifacts in the flesh, and I am convinced they were fashioned deliberately." Wood said he found it intriguing to see how different the tools are from so-called Oldowan stone tools, which up to now have been considered the oldest and most primitive.

Lepre, who has been conducting fieldwork in eastern Africa for about 15 years, said he arrived at the dig site about a week after the discovery. The site is several hours' drive on rough roads from the nearest town, located in a hot, dry landscape he said is reminiscent of Arizona and New Mexico. Lepre collected chunks of sediment from a series of depths and brought them back to Lamont-Doherty for analysis. He and Kent used a bandsaw to trim the samples into sugar cube-size blocks and inserted them into a magnetometer, which measured the polarity of tiny grains of the minerals hematite and magnetite contained in the sediment.

"The magnetics pretty much clinches that the age is something like 3.3 million years old," said Kent, who also is a professor at Rutgers.

Earlier dating work by Lepre and Kent helped lead to another landmark paper in 2011: a study that suggested Homo erectus, another precursor to modern humans, was using more advanced tool-making methods 1.8 million years ago, at least 300,000 years earlier than previously thought.

"I realized when you [figure out] these things, you don't solve anything, you just open up new questions," said Lepre. "I get excited, then realize there's a lot more work to do."


  
Contacts and sources:
David Funkhouser
Columbia University's Lamont-Doherty Earth Observatory

Galaxy's Cannibalistic Feeding Habits Revealed


A team of Australian and Spanish astronomers have caught a greedy galaxy gobbling on its neighbours and leaving crumbs of evidence about its dietary past.

Galaxies grow by churning loose gas from their surroundings into new stars, or by swallowing neighbouring galaxies whole. However, they normally leave very few traces of their cannibalistic habits.

This is a chemical enrichment map of the NGC 1512 and NGC 1510 galaxy system showing the amount of oxygen gas in the star-forming regions around the two galaxies.

Credit: Ángel R. López-Sánchez (AAO/MQU) and Baerbel Koribalski (CSIRO/CASS)

A study published today in Monthly Notices of the Royal Astronomical Society (MNRAS) not only reveals a spiral galaxy devouring a nearby compact dwarf galaxy, but shows evidence of its past galactic snacks in unprecedented detail.

Australian Astronomical Observatory (AAO) and Macquarie University astrophysicist, Ángel R. López-Sánchez, and his collaborators have been studying the galaxy NGC 1512 to see if its chemical story matches its physical appearance.

The team of researchers used the unique capabilities of the 3.9-metre Anglo-Australian Telescope (AAT), near Coonabarabran, New South Wales, to measure the level of chemical enrichment in the gas across the entire face of NGC 1512.

Chemical enrichment occurs when stars churn the hydrogen and helium from the Big Bang into heavier elements through nuclear reactions at their cores. These new elements are released back into space when the stars die, enriching the surrounding gas with chemicals like oxygen, which the team measured.
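
The "level of chemical enrichment" in gas like this is usually quoted as a gas-phase oxygen abundance, 12 + log(O/H), estimated from the ratios of strong emission lines in each spectrum. As an illustration (not necessarily the calibration this team used), one widely cited strong-line method is the N2 index of Pettini & Pagel (2004); the line fluxes below are invented:

# One common way to express "chemical enrichment": the gas-phase oxygen
# abundance 12 + log(O/H), estimated here with the N2 strong-line
# calibration of Pettini & Pagel (2004). Illustrative only; the line
# fluxes are invented placeholders, not the study's data.
import math

def oxygen_abundance_n2(flux_nii_6584, flux_halpha):
    """12 + log(O/H) = 8.90 + 0.57 * log10([NII]6584 / Halpha)."""
    return 8.90 + 0.57 * math.log10(flux_nii_6584 / flux_halpha)

# Values near ~8.7 are roughly solar; lower values mean less enriched gas.
print(f"12 + log(O/H) = {oxygen_abundance_n2(12.0, 100.0):.2f}")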

This is a multiwavelength image of galaxies NGC 1512 and NGC 1510 combining optical and near-infrared data (light blue, yellow, orange), ultraviolet data (dark blue), mid-infrared data (red), and radio data (green).

Credit: Ángel R. López-Sánchez (AAO/MQU) and Baerbel Koribalski (CSIRO/CASS)

"We were expecting to find fresh gas or gas enriched at the same level as that of the galaxy being consumed, but were surprised to find the gases were actually the remnants of galaxies swallowed earlier," Dr López-Sánchez said.

"The diffuse gas in the outer regions of NGC 1512 is not the pristine gas created in the Big Bang but is gas that has already been processed by previous generations of stars."

CSIRO's Australia Telescope Compact Array, a powerful 6-km diameter radio interferometer located in eastern Australia, was used to detect large amounts of cold hydrogen gas that extends way beyond the stellar disk of the spiral galaxy NGC 1512.

"The dense pockets of hydrogen gas in the outer disk of NGC 1512 accurately pin-point regions of active star formation", said CSIRO's Dr Baerbel Koribalski, a member of the research collaboration.

When this finding was examined in combination with radio and ultraviolet observations, the scientists concluded that the enriched gas being processed into new stars did not come from the inner regions of the galaxy either. Instead, the gas was likely absorbed by the galaxy over its lifetime as NGC 1512 accreted other, smaller galaxies around it.

Dr Tobias Westmeier, from the International Centre for Radio Astronomy Research in Perth, said that while galaxy cannibalism has been known for many years, this is the first time that it has been observed in such fine detail.

"By using observations from both ground and space based telescopes we were able to piece together a detailed history for this galaxy and better understand how interactions and mergers with other galaxies have affected its evolution and the rate at which it formed stars," he said.

The team's successful and novel approach to investigating how galaxies grow is being used in a new program to further refine the best models of galaxy evolution.

For this work the astronomers used spectroscopic data from the AAT at Siding Spring Observatory in Australia to measure the chemical distribution around the galaxies. They identified the diffuse gas around the dual galaxy system using Australian Telescope Compact Array (ATCA) radio observations. In addition, they identified regions of new star formation with data from the Galaxy Evolution Explorer (GALEX) orbiting space telescope.

"The unique combination of these data provide a very powerful tool to disentangle the nature and evolution of galaxies," said Dr López-Sánchez.

"We will observe several more galaxies using the same proven techniques to improve our understanding of the past behaviour of galaxies in the local Universe."

Contacts and sources:
Pete Wheeler
International Centre for Radio Astronomy Research

Supernova Hunting with Supercomputers


Berkeley researchers provide 'roadmap' and tools for finding and studying Type Ia supernovae in their natural habitat. 

Type Ia supernovae are famous for their consistency. Ironically, new observations suggest that their origins may not be uniform at all. Using a "roadmap" of theoretical calculations and supercomputer simulations, astronomers observed for the first time a flash of light caused by a supernova slamming into a nearby star, allowing them to determine the stellar system from which the supernova was born.

This is a simulation of the expanding debris from a supernova explosion (shown in red) running over and shredding a nearby star (shown in blue).

Credit: Daniel Kasen, Berkeley Lab/UC Berkeley

This finding confirms one of two competing theories about the birth of Type Ia supernovae. But taken with other observations, the results imply that there could be two distinct populations of these objects. The details of these findings will appear May 20 in an advance online issue of Nature.

"By calibrating the relative brightness of Type Ia supernovae to several percent accuracy, astronomers were able to use them to discover the acceleration of the Universe. But if we want to push further and constrain the detailed properties of the dark energy driving acceleration, we need more accurate measurements. If we don't know where Type Ia supernovae come from, we can't be totally confident that our cosmological measurements are correct," says Daniel Kasen, an Associate Professor of Astronomy and Physics at UC Berkeley, who holds a joint appointment at the Lawrence Berkeley National Laboratory (Berkeley Lab).

In 2010, Kasen predicted a new way to test the origins of supernovae. Using theoretical arguments and simulations run on supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC), he showed that if a supernova is born in a binary star system, the collision of the debris with the companion star will produce a brief, hot flash of light. The challenge is then to find a Type Ia event shortly after it ignites, and quickly follow it up with ultraviolet telescopes. 

Using an automated supernova-hunting pipeline--the intermediate Palomar Transient Factory (iPTF), which uses machine-learning algorithms running on NERSC supercomputers--astronomers did just that. They found iPTF14atg just hours after it ignited in a nearby galaxy. Follow up observations with NASA's Swift Space Telescope showed ultraviolet signals consistent with Kasen's predictions.

"Kasen's paper was very important to our work. Without it, we wouldn't have known what to look for," says Yi Cao, a graduate student at Caltech and lead author of the Nature paper. "With the help of NERSC's Edison supercomputer, the iPTF pipeline can turn up supernova candidates 10-15 minutes after its initial detection. This is crucial to our work to search for the ephemeral signal predicted by Kasen."

"We often talk about how computational science is the third pillar of the scientific method, next to theory and experimentation, this finding really brings that point home. In this case, we can see how computational models and tools are driving discovery and transforming our knowledge about the cosmos," says Peter Nugent, Berkeley Lab scientist and member of the iPTF collaboration.

Origin Theories for Type Ia Supernovae

Because the relative brightness of Type Ia supernovae can be measured so well no matter where they are located in the Universe, they make excellent distance markers. In fact, they were instrumental to measuring the accelerating expansion of the Universe in the 1990s--a finding that netted three scientists the 2011 Nobel Prize in Physics, including one for Berkeley Lab's Saul Perlmutter. Yet, astronomers still do not fully understand where they come from.
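
Concretely, the distance measurement rests on the distance modulus relation: if every Type Ia peaks near the same absolute magnitude M (roughly -19.3), then the observed apparent magnitude m gives the distance d through m - M = 5 log10(d / 10 pc). A worked example with a made-up apparent magnitude:

# The standard-candle arithmetic behind using Type Ia supernovae as
# distance markers: m - M = 5 * log10(d / 10 pc). M ~ -19.3 is a typical
# peak absolute magnitude; the apparent magnitude is a made-up example.
M_PEAK = -19.3

def distance_parsecs(apparent_mag, absolute_mag=M_PEAK):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

m = 19.0                                  # hypothetical observed peak magnitude
d_pc = distance_parsecs(m)
print(f"distance ~ {d_pc / 1e6:.0f} million parsecs")  # ~458 Mpc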

There are currently two competing origin theories. In both theories, the white dwarf star that eventually becomes a Type Ia supernova is one of a pair of stars that orbits around a common center of mass. In the double-degenerate model the stellar companions are both white dwarfs and the supernova ignites when both stars merge.

In the competing single-degenerate model a white dwarf star orbits with a Sun-like star or a red giant star, which is essentially a dying Sun-like star. As these stars orbit, the white dwarf's gravity pulls, or accretes, material from its stellar companion. As the white dwarf becomes more massive, the temperature and pressure in its core increases, eventually initiating a runaway nuclear reaction, which will end in a dramatic explosion or Type Ia supernova.

In the single-degenerate model, Kasen predicted that the material ejected from a Type Ia supernova would slam into its companion star, generating a shockwave that heats the surrounding material. According to his calculations, the collision should produce emissions detectable at ultraviolet wavelengths in the hours and days following the supernova explosion. And, that's exactly what Cao and his team at Caltech saw in the Swift observations.

The Swift telescope measured a pulse of ultraviolet radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than the iPTF does.
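
That decline-then-rise shape is exactly what a two-component picture predicts: a brief collision flash that fades within days, superposed on the supernova itself brightening over weeks. The toy model below illustrates the shape only; its functional forms and timescales are invented and are not Kasen's actual calculation.

# Toy two-component light curve showing why a UV pulse can decline and
# then rise again. Functional forms and timescales are invented to
# illustrate the shape only; this is not Kasen's actual collision model.
import numpy as np

t = np.linspace(0.1, 20.0, 50)            # days since explosion

collision_flash = 5.0 * np.exp(-t / 1.5)  # brief flash, fades within days
rising_supernova = 0.05 * t**2            # supernova brightens over weeks

total = collision_flash + rising_supernova
dip = t[np.argmin(total)]
print(f"light curve dips near day {dip:.1f}, then rises")
# Surveys that revisit a field only every few nights can miss the early
# dip entirely, which is why rapid follow-up mattered here.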

"We have never observed a white dwarf just before it went supernova, but if you can get data soon after ignition, it may be possible to infer the nature of the progenitor system," says Kasen.

After Kasen made his prediction in 2010, he notes that a lot of people tried to look for the ultraviolet signature, but this is the first time that anyone has seen it. "This discovery is a proof of principle that we can get images of Type Ia supernovae in their infancy. Now we can move forward and try to acquire a large number of these 'baby pictures,' which will tell us how the different channels for igniting stars affect the properties of the supernova," says Kasen.

According to Shrinivas Kulkarni, Professor of Astronomy and Planetary Science at Caltech and principal investigator for the iPTF, the discovery "provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some type Ia supernovae originate from the single-degenerate channel."

Although the data from supernova iPTF14atg support the single-degenerate model, the double-degenerate model has not been disproven. In fact, previous data from the iPTF have provided credible evidence to support that alternative theory. And that means that both theories actually may be valid, says Caltech Professor of Theoretical Astrophysics Sterl Phinney. "The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae."

"It's really exciting to learn that something that once only existed in your imagination, is actually out there in the real Universe. Automated surveys like iPTF have revolutionized the field by catching these events earlier and earlier. It opens up a new avenue for studying the life and death of stars," says Kasen.


Contacts and sources:
Linda Vu
Lawrence Berkeley National Laboratory

Wednesday, May 20, 2015

Mystery: Why Do Thunderstorms Form at Night?

Thunderstorms that form at night, without a spark from the sun's heat, are a mysterious phenomenon. This summer, scientists will be staying up late in search of some answers.

Plains Elevated Convection at Night (PECAN) scientists are studying nighttime thunderstorms.

Credit: NOAA

From June 1 through July 15, researchers from across North America will fan out each evening across the Great Plains, where storms are more common at night than during the day.

The effort, co-organized by numerous collaborating institutions, will use lab-equipped aircraft, ground-based instruments, and weather balloons to better understand the atmospheric conditions that lead to storm formation and evolution after sunset.

What the scientists find may ultimately help improve forecasts of these sometimes damaging storms.

The Plains Elevated Convection at Night (PECAN) field campaign will involve scientists, students and support staff from eight research laboratories and 14 universities.

The $13.5 million project is largely funded by the National Science Foundation (NSF), which contributed $10.6 million. Additional support is provided by NASA, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Department of Energy.

Aloft in the night

Thunderstorms that form during the day are less puzzling than nighttime storms.

The sun heats the surface of Earth, which in turn warms the air directly above the ground. When that warm air is forced to rise, it causes convection--a circulation of warm updrafts and cool downdrafts--and sometimes creates a storm.
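
That daytime mechanism can be captured by a simple parcel argument: a blob of surface air warmed by the ground rises and cools at the dry adiabatic lapse rate of roughly 9.8 degrees Celsius per kilometre, and it keeps rising while it remains warmer than its surroundings. A minimal sketch with an invented environmental profile (real storms also tap the latent heat released when moisture condenses):

# Minimal parcel-buoyancy sketch of daytime convection. A surface parcel
# cools at the dry adiabatic lapse rate as it rises, and keeps rising
# while it is warmer than the air around it. The environmental lapse rate
# and temperatures below are invented for illustration.
DRY_ADIABATIC = 9.8   # parcel cooling, deg C per km
ENV_LAPSE = 7.0       # assumed environmental lapse rate, deg C per km

def rises_to(parcel_surface_temp, env_surface_temp, max_km=12.0, step=0.1):
    """Height (km) at which the lifted parcel stops being warmer than air."""
    z = 0.0
    while z < max_km:
        parcel = parcel_surface_temp - DRY_ADIABATIC * z
        env = env_surface_temp - ENV_LAPSE * z
        if parcel <= env:
            return z
        z += step
    return max_km

# Midday sun heats the surface parcel 3 C above its surroundings:
print(f"parcel rises ~{rises_to(33.0, 30.0):.1f} km before losing buoyancy")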

As part of PECAN, researchers will fan across the U.S. Great Plains at night.

Credit: NAS

But the formation of thunderstorms at night, when the sun is not baking the land, is less well understood.

"At night, the entire storm circulation is elevated higher off the ground," said National Center for Atmospheric Research (NCAR) scientist Tammy Weckwerth, a PECAN principal investigator.

"This makes observations of the conditions leading to nighttime thunderstorms much more challenging because that part of the atmosphere is not well covered by the network of instruments we normally rely on."

The vast array of instruments available to PECAN researchers will allow them to collect data higher in the atmosphere.

The data will help scientists characterize the conditions that lead to individual storm formation as well as to the clustering and organizing of these storms into large-scale systems, which can result in significant precipitation.

"Nighttime thunderstorms are an essential source of summer rain for crops, but are also a potential hazard through excessive rainfall, flash flooding and dangerous cloud-to-ground lightning," said Ed Bensman, program director in NSF’s Division of Atmospheric and Geospace Sciences, which funded the research.

"Weather forecast models often struggle to accurately account for this critical element of summer rainfall on the Great Plains," said Bensman. "The PECAN field campaign will provide researchers and operational forecasters with valuable insights into thunderstorms at night and improve our ability to model them more accurately."

Deploying in the dark

The campaign, based in Hays, Kan., will begin each day at 8 a.m. when a crew of forecasters starts developing a nightly forecast.

Through PECAN, scientists hope to make discoveries that will help forecast severe storms.

Credit: NOAA

At 3 p.m., the scientists will use the forecast to determine where across northern Oklahoma, central Kansas or south-central Nebraska to deploy their mobile resources.

Moving dozens of people around the Great Plains each night will be a challenge for PECAN, but it's also what distinguishes it from past field projects.

"Previous severe weather campaigns focused mostly on daytime storms, for largely practical reasons, as it's more difficult to set up instruments in the dark," said Bart Geerts, an atmospheric scientist at the University of Wyoming and a PECAN principal investigator.

"But the large thunderstorm complexes traveling across the Great Plains at night really are a different beast. "

Scientists believe that several factors may interact to contribute to nocturnal storm formation and maintenance: a stable layer of air at the surface; a strong wind current above that layer, known as a low-level jet; and atmospheric waves, some of which are called "bores," that ripple out from the storms themselves.

"But we don't really know how they interact," Geerts said. "That's what PECAN is about."

A better understanding of these storms will have relevance for areas beyond the Great Plains. Clustered nighttime thunderstorms are common in regions scattered across the globe.

Adds Howie Bluestein, an atmospheric scientist at the University of Oklahoma who is participating in PECAN, “Thunderstorms that occur during the middle of the night over the central Plains in the late spring and early summer have been enigmatic. Data collected during PECAN will help us better understand and predict these rain systems.”

A fleet of instruments

PECAN will use three research aircraft, two of which--a University of Wyoming King Air and a NASA DC-8--will fly in clear air away from storms.

Only the third, a NOAA P-3, which is widely used in hurricane research and reconnaissance, will be able to fly into the trailing regions of storms.

The researchers also will rely on a number of ground-based instruments, known as PECAN Integrated Sounding Arrays, or PISAs.

Six PISAs will operate from fixed locations around the study area and four will be mobile, allowing them to be repositioned each night depending on where storms are expected to form.

The instruments within each PISA vary, but collectively they will give each array the ability to measure temperature, moisture and wind profiles, as well as launch weather balloons.

Among the instruments are several that were developed at NCAR's Earth Observing Laboratory, including one that uses an innovative laser-based technique to remotely measure water vapor, and an advanced wind profiler.

Finally, the scientists will have a fleet of mobile and fixed radars.

In all, PECAN researchers will have access to more than 100 instruments brought to the effort by partner institutions.

"PECAN will be using mobile radars, traveling weather stations on vans and trucks, and other systems to probe inside severe nighttime storms," said scientist Karen Kosiba of the Center for Severe Weather Research, a PECAN participant.

"We want to understand more about when, where and why winds, hail and flooding rains occur," Kosiba said. "That will allow us to better forecast these damaging events."


Contacts and sources:
Cheryl Dybas, NSF
David Hosansky, NCAR

Body's 'Serial Killers' Filmed Killing Cancer

A dramatic video has captured the behaviour of cytotoxic T cells - the body's 'serial killers' - as they hunt down and eliminate cancer cells before moving on to their next target.

This is a cytotoxic T cell -- one of the body's 'serial killers' -- as it hunts down and eliminates cancer cells.

Credit: University of Cambridge

In a study published today in the journal Immunity, a collaboration of researchers from the UK and the USA, led by Professor Gillian Griffiths at the University of Cambridge, describe how specialised members of our white blood cells known as cytotoxic T cells destroy tumour cells and virally-infected cells. Using state-of-the-art imaging techniques, the research team, with funding from the Wellcome Trust, has captured the process on film.


Credit: University of Cambridge

"Inside all of us lurks an army of serial killers whose primary function is to kill again and again," explains Professor Griffiths, Director of the Cambridge Institute for Medical Research. "These cells patrol our bodies, identifying and destroying virally infected and cancer cells and they do so with remarkable precision and efficiency."

There are billions of T cells within our blood - a single teaspoon of blood alone is believed to contain around 5 million T cells, each measuring around 10 micrometres in length, about a tenth the width of a human hair. Each cell is engaged in a ferocious and unrelenting battle to keep us healthy. The cells, seen in the video as orange or green amorphous 'blobs', move around rapidly, investigating their environment as they travel.

When a cytotoxic T cell finds an infected cell or, in the case of the film, a cancer cell (blue), membrane protrusions rapidly explore the surface of the cell, checking for tell-tale signs that this is an uninvited guest. The T cell binds to the cancer cell and injects poisonous proteins known as cytotoxins (red) down special pathways called microtubules to the interface between the T cell and the cancer cell, before puncturing the surface of the cancer cell and delivering its deadly cargo.

"In our bodies, where cells are packed together, it's essential that the T cell focuses the lethal hit on its target, otherwise it will cause collateral damage to neighbouring, healthy cells," says Professor Griffiths. "Once the cytotoxins are injected into the cancer cells, its fate is sealed and we can watch as it withers and dies. The T cell then moves on, hungry to find another victim."

The researchers captured the footage through high-resolution 3D time-lapse multi-colour imaging, making use of both spinning disk confocal microscopy and lattice light sheet microscopy. These techniques involve capturing slices through an object and 'stitching' them together to provide final 3D images across the whole cell. Using these approaches, the researchers were able to elucidate the order of events leading to delivery of the lethal hit from these serial killers.
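In code, the stacking idea at the heart of this 'stitching' is simple. Here is a minimal sketch in Python with NumPy (the dimensions are invented, and real confocal and light-sheet pipelines also perform registration and deconvolution):

```python
# Toy sketch of the 'stitching' step only: a z-stack of 2-D slices becomes a
# 3-D volume. Real pipelines also register, deconvolve and merge colour
# channels; the dimensions here are made up for illustration.
import numpy as np

slices = [np.random.rand(512, 512) for _ in range(60)]  # one image per focal plane
volume = np.stack(slices, axis=0)                        # shape: (60, 512, 512)
print(volume.shape)
```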


Contacts and sources:
Craig Brierley, University of Cambridge

Monday, May 18, 2015

Agriculture, Declining Mobility Drove Humans' Evolution to Lighter Bones

Modern lifestyles have famously made humans heavier but, in one particular way, noticeably lighter than our hunter-gatherer ancestors: in the bones. Now a new study of the bones of hundreds of humans who lived in Europe during the past 33,000 years finds that the rise of agriculture and a corresponding fall in mobility drove the change, rather than urbanization, nutrition or other factors.

These are cross-sections of an Upper Paleolithic, left, and Early Medieval, right, thigh bone, showing the change in bone shape and reduction in strength in the later individual.

Credit: Study authors/ Johns Hopkins University School of Medicine

The discovery is reported in the early edition of Proceedings of the National Academy of Sciences the week of May 18. It sheds light, researchers say, on a monumental change that has left modern humans susceptible to osteoporosis, a condition marked by brittle and thinning bones.

At the root of the finding, the researchers say, is the knowledge that putting bones under the "stress" of walking, lifting and running leads them to pack on more calcium and grow stronger.

"There was a lot of evidence that earlier humans had stronger bones and that weight-bearing exercise in modern humans prevents bone loss, but we didn't know whether the shift to weaker bones over the past 30,000 years or so was driven by the rise in agriculture, diet, urbanization, domestication of the horse or other lifestyle changes," says Christopher Ruff, Ph.D. , a professor of functional anatomy and evolution at the Johns Hopkins University School of Medicine.

"By analyzing many arm and leg bone samples from throughout that time span, we found that European humans' bones grew weaker gradually as they developed and adopted agriculture and settled down to a more sedentary lifestyle, and that moving into cities and other factors had little impact."

The study was a collaborative effort of researchers from across Europe and the United States that began in 2008. The group focused on Europe because it has many well-studied archeological sites, Ruff says, and because the population has relatively little genetic variation, despite some population movements. That meant that any changes observed could be attributed more to lifestyle than to genetics.

For the study, the researchers took molds of bones from museums' collections and used a portable X-ray machine to scan them, focusing on two major bones from the legs and one from the arms. "By comparing the lower limbs with the upper limbs, which are little affected by how much walking or running a person does, we could determine whether the changes we saw were due to mobility or to something else, like nutrition," Ruff says.

When they analyzed the geometry of bones over time, the researchers found a decline in leg bone strength between the Mesolithic era, which began about 10,000 years ago, and the age of the Roman Empire, which began about 2,500 years ago. Arm bone strength, however, remained fairly steady. "The decline continued for thousands of years, suggesting that people had a very long transition from the start of agriculture to a completely settled lifestyle," Ruff says. "But by the medieval period, bones were about the same strength as they are today."

Ruff notes that Paleolithic-style bones are still likely achievable, at least for younger humans, if they recreate to some extent the lifestyle of their ancestors, notably doing a lot more walking than their peers. He cites studies of professional athletes that have demonstrated how lifestyle is written in our bones. "The difference in bone strength between a professional tennis player's arms is about the same as that between us and Paleolithic humans," he says.


Contacts and sources:
Shawna Williams

Other authors on the paper are Brigitte M. Holt and Erin Whittey of the University of Massachusetts, Amherst; Markku Niskanen, Juho-Antti Junno and Rosa Vilkama of the University of Oulu in Finland; Vladimir Sladek, Martin Hora and Eliska Schuplerova of Charles University in Prague; Margit Berner of Vienna's Natural History Museum; Evan Garofalo of the University of Maryland, Baltimore; and Heather M. Garvin of Mercyhurst University.

Liquid-Crystal-Based Compound Lenses Work Like Insect Eyes


The compound eyes found in insects and some sea creatures are marvels of evolution. There, thousands of lenses work together to provide sophisticated information without the need for a sophisticated brain. Human artifice can only begin to approximate these naturally self-assembled structures, and, even then, they require painstaking manufacturing techniques.
Credit: University of Pennsylvania

An array of liquid crystal microlenses self-assembles around a central pillar.

Credit: University of Pennsylvania

Now, engineers and physicists at the University of Pennsylvania have shown how liquid crystals can be employed to create compound lenses similar to those found in nature. Taking advantage of the geometry in which these liquid crystals like to arrange themselves, the researchers are able to grow compound lenses with controllable sizes.

These lenses produce sets of images with different focal lengths, a property that could be used for three-dimensional imaging. They are also sensitive to the polarization of light, one of the qualities that are thought to help bees navigate their environments.

The study was led by Francesca Serra and Mohamed Amine Gharbi, postdoctoral researchers in the Department of Physics and Astronomy in Penn's School of Arts and Sciences, along with Kathleen Stebe, the School of Engineering and Applied Science's deputy dean for research and a professor in Chemical and Biomolecular Engineering; Randall Kamien, professor in Physics and Astronomy; and Shu Yang, professor in Engineering's departments of Materials Science and Engineering and Chemical and Biomolecular Engineering. Yimin Luo, Iris Liu and Nathan Bade, members of Stebe's lab, also contributed to the study.

It was published in Advanced Optical Materials.

Previous work by the group had shown how smectic liquid crystal, a transparent, soap-like class of the material, naturally self-assembled into flower-like structures when placed around a central silica bead. Each "petal" of these flowers is a "focal conic domain," a structure that other researchers had shown could be used as a simple lens.

"Given the liquid crystal flower's outward similarity to a compound lens, we were curious about its optical properties," said Gharbi.

"Our first question," Serra said, "was what kind of lens is this? Is it an array of individual microlenses, or does it essentially act as one big lens? Both types exist in nature."

To make the lenses, the researchers used photolithography to fashion a sheet of micropillars, then spread the liquid crystal on the sheet. At room temperature, the liquid crystal adheres to the top edges of the posts, transmitting an elastic energy cue that causes the crystal's focal conic domains to line up in concentric circles around the posts.

With these liquid crystal lenses so easy to make, the experiment to test their properties was also relatively simple. Finding a suitable compound lens under a microscope, the researchers put a test image, a glass slide with the letter "P" drawn on in marker, between it and the microscope's light source. Starting with the post in focus, they moved the microscope's objective up and down until they could see an image form.

"If the array worked as a single lens," Serra said, "a single virtual image would appear below the sample. But because they work as separate microlenses, I saw multiple P's, one in each of the lenses."

Because the focal conic domains vary in size, with the largest ones closest to the pillars and descending in size from there, the focal lengths for each ring of microlenses are different. As the researchers moved the microscope objective up, the images of the P's came into focus in sequence, from the outside layers inward.

"That they focus on different planes is what allows for 3-D image reconstruction," Yang said. "You can use that information to see how far away the object you're seeing is."

A second experiment also showed this parallax effect. Replacing the P with two test images, a cross with a square suspended several inches above it, the researchers showed that the cross intersected the square at different points in different lenses. This phenomenon would allow the reconstruction of the square and the cross's spatial relationship.
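The depth cue at work here is ordinary stereo parallax. A minimal sketch of the triangulation, assuming hypothetical microlens dimensions (this shows the general principle, not the Penn team's reconstruction procedure):

```python
# Stereo-parallax depth sketch; NOT the Penn team's reconstruction pipeline.
# Standard pinhole triangulation: depth = focal_length * baseline / disparity.
# All dimensions below are hypothetical illustration values.

def depth_from_parallax(focal_um: float, baseline_um: float, disparity_um: float) -> float:
    """Distance to a feature from its image shift between two microlenses."""
    if disparity_um <= 0:
        raise ValueError("disparity must be positive")
    return focal_um * baseline_um / disparity_um

# Two microlenses 50 um apart with 20 um focal length: a feature that shifts
# by 0.5 um between their sub-images lies about 2,000 um (2 mm) away.
print(depth_from_parallax(focal_um=20.0, baseline_um=50.0, disparity_um=0.5))
```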

A third experiment showed that the team's lenses were sensitive to light polarization, a trait that had not been demonstrated in liquid crystal lenses before. Bees are thought to use this information to better identify flowers by seeing how light waves align as they bounce off their petals. By putting another image, a smiley face, above the microscope's lamp and a polarizing filter on top, the researchers were able to block the images from forming in some lenses but not others.

"For example," Serra said, "the lenses on the right and left of the pillar will show images just for vertically polarized light. This sensitivity results from the peculiar geometrical arrangement of these liquid crystal defects, which other artificial compound eyes or microlens arrays lack."

Answering fundamental questions about how these microlenses work extends this area of research in the direction of practical applications. With an understanding of the geometric relationships between the size of the pillars, the arrangement of the focal conic domains and the focal lengths of the microlenses they produce, the team has shown how to grow these compound lenses to order.

"Last time we had tiny flowers. Now they are 10 times bigger," Stebe said. "That's important because it shows that the system scales; if we ever wanted to mass-produce these lenses, we can use the same technique on arbitrarily large surfaces. We know how to put the pillars in any given position and size, how to cast out thin films of smectic liquid crystal and exactly where and how the lenses form based on this geometric seed."



Contacts and sources:
Evan Lerner
University of Pennsylvania 

Giant Iceberg Runs Aground

The grounding of a giant iceberg in Antarctica has provided a unique real-life experiment that has revealed the vulnerability of marine ecosystems to sudden changes in sea-ice cover.

University of New South Wales (UNSW) Australia scientists have found that within just three years of the iceberg becoming stuck in Commonwealth Bay - an event which dramatically increased sea-ice cover in the bay - almost all of the seaweed on the sea floor had decomposed, or become discoloured or bleached due to lack of light.

UNSW Australia scientists travel out onto the sea-ice covering Commonwealth Bay in East Antarctica, in preparation to drill a hole in the ice and observe life on the seafloor with a video camera.
Credit: UNSW

"Understanding the ecological effect of changes in sea-ice is vital for understanding the future impacts of climate change, but it is impossible to manipulate sea-ice at the scale we need to conduct experiments," says the study's lead author, UNSW's Dr Graeme Clark.

"Luckily, the grounding of an iceberg in Commonwealth Bay in 2010 provided us with a perfect natural experiment to carry out research on this important issue.

UNSW's Dr Ezequiel Marzinelli and Dr Graeme Clark drill a hole in the sea-ice in Commonwealth Bay in East Antarctica.

Credit: UNSW Science

"Historically, the winds coming down from the Antarctic ice cap have kept the bay free of sea-ice for most of the year. But since the massive iceberg got stuck, the bay has been covered in sea-ice several metres thick, all year round," he says.

The study is published in the journal Polar Biology.

The scientists travelled to Commonwealth Bay in the east of the continent in December 2013, as part of the UNSW-led Australasian Antarctic Expedition 2013-2014, which retraced the route of Sir Douglas Mawson's scientific expedition a century before.

Dr Clark and team member Dr Ezequiel Marzinelli drilled through the sea ice in the bay and used underwater cameras to survey the seabed.

The UNSW team, led by Professor Emma Johnston, also examined the historical records made by Sir Douglas Mawson, who established his research station in Commonwealth Bay. Mawson's expedition used dredge hauls to survey the bay, which was then rarely covered in ice, at five locations between 1912 and 1913.

An underwater camera is lowered through a hole in the sea ice to the sea floor in Commonwealth Bay in East Antarctica by UNSW scientists. The images reveal disintegrating seaweed and bleached pink coralline algae, plus some small sea stars and brittle stars.

Credit: UNSW Science

The UNSW researchers also compared their survey results with footage taken in the area in 2008 by underwater photographer Erwan Amice.

"After three years of sea-ice cover, the forests of healthy seaweeds that previously dominated the seabed were in a severe state of decline," says Professor Johnston, of the UNSW School of Biological, Earth and Environmental Sciences.

"About three quarters were decomposing, while the remainder were discoloured or bleached. However, darkness-adapted vertebrates such as brittle stars and fan worms were starting to colonise the bay."

Dr Marzinelli says: "This is the first time a shift from an algae-dominated ecosystem to an invertebrate-dominated one has been observed with a known start date. Continued sampling at the site would allow the rate at which this transition is occurring to be worked out."

Sea-ice not only affects how much light reaches the animals and plants living on the seabed; it also regulates the amount of disturbance from drifting icebergs that scour the bottom, and determines the amount of detritus and vegetation that settles there.



Contacts and sources:
Deborah Smith

Extreme Heat Exposure to Rise Dramatically by Mid-Century


U.S. residents' exposure to extreme heat could increase four- to six-fold by mid-century, due to both a warming climate and a population that's growing especially fast in the hottest regions of the country, according to new research.

The study, by researchers at the National Center for Atmospheric Research (NCAR) and the City University of New York (CUNY), highlights the importance of considering societal changes when trying to determine future climate impacts.

This graphic illustrates the expected increase in average annual person-days of exposure to extreme heat for each US Census Division when comparing the period 1971-2000 to the period 2041-2070. Person-days are calculated by multiplying the number of days when the temperature is expected to hit at least 95 degrees by the number of people who are projected to live in the areas where extreme heat is occurring. The scale is in billions.

Credit: ©UCAR.

"Both population change and climate change matter," said NCAR scientist Brian O'Neill, one of the study's co-authors. "If you want to know how heat waves will affect health in the future, you have to consider both."

Extreme heat kills more people in the United States than any other weather-related event, and scientists generally expect the number of deadly heat waves to increase as the climate warms. The new study, published May 18 in the journal Nature Climate Change, finds that the overall exposure of Americans to these future heat waves would be vastly underestimated if the role of population changes were ignored.

The total number of people exposed to extreme heat is expected to increase the most in cities across the country's southern reaches, including Atlanta, Charlotte, Dallas, Houston, Oklahoma City, Phoenix, Tampa, and San Antonio.

The research was funded by the National Science Foundation, which is NCAR's sponsor, and the U.S. Department of Energy.

Climate, population, and how they interact

For the study, the research team used 11 different high-resolution simulations of future temperatures across the United States between 2041 and 2070, assuming no major reductions in greenhouse gas emissions. The simulations were produced with a suite of global and regional climate models as part of the North American Regional Climate Change Assessment Program.

Using a newly developed demographic model, the scientists also studied how the U.S. population is expected to grow and shift regionally during the same time period, assuming current migration trends within the country continue.

Total exposure to extreme heat was calculated in "person-days" by multiplying the number of days when the temperature is expected to hit at least 95 degrees by the number of people who are projected to live in the areas where extreme heat is occurring.

The researchers found that average annual exposure to extreme heat in the United States during the study period is expected to reach between 10 and 14 billion person-days, compared with an annual average of 2.3 billion person-days between 1971 and 2000.

Of that increase, roughly a third is due solely to the warming climate (the increase in exposure to extreme heat that would be expected even if the population remained unchanged). Another third is due solely to population change (the increase in exposure that would be expected if the climate remained unchanged but the population continued to grow and people continued to move to warmer places). The final third is due to the interaction between the two (the increase in exposure expected because the population is growing fastest in places that are also getting hotter).
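The bookkeeping behind that three-way split can be sketched in a few lines of Python; the numbers below are invented stand-ins rather than the study's gridded data:

```python
# Illustrative bookkeeping for the three-way split described above; the
# numbers are invented stand-ins, not the study's projections.

def person_days(hot_days: float, population: float) -> float:
    """Exposure = days at or above 95 F times the people living there."""
    return hot_days * population

days_base, days_future = 10.0, 20.0    # hypothetical 95-F days per year
pop_base, pop_future = 1.0e6, 2.0e6    # hypothetical residents

base = person_days(days_base, pop_base)
total_increase = person_days(days_future, pop_future) - base

climate_only = person_days(days_future, pop_base) - base     # population held fixed
population_only = person_days(days_base, pop_future) - base  # climate held fixed
interaction = total_increase - climate_only - population_only

# Here each term is 1.0e7: equal thirds, mirroring the national-scale result.
print(climate_only, population_only, interaction)
```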

"We asked, 'Where are the people moving? Where are the climate hot spots? How do those two things interact?'" said NCAR scientist Linda Mearns, also a study co-author. "When we looked at the country as a whole, we found that each factor had relatively equal effect."

At a regional scale, the picture is different. In some areas of the country, climate change packs a bigger punch than population growth and vice versa.

For example, in the U.S. Mountain region--defined by the Census Bureau as the area stretching from Montana and Idaho south to Arizona and New Mexico--the impact of a growing population significantly outstrips the impact of a warming climate. But the opposite is true in the South Atlantic region, which encompasses the area from West Virginia and Maryland south through Florida.

Exposure vs. vulnerability

Regardless of the relative role that population or climate plays, some increase in total exposure to extreme heat is expected in every region of the continental United States. Even so, the study authors caution that exposure is not necessarily the same thing as vulnerability.

"Our study does not say how vulnerable or not people might be in the future," O'Neill said. "We show that heat exposure will go up, but we don't know how many of the people exposed will or won't have air conditioners or easy access to public health centers, for example."

The authors also hope the study will inspire other researchers to more frequently incorporate social factors, such as population change, into studies of climate change impacts.

"There has been so much written regarding the potential impacts of climate change, particularly as they relate to physical climate extremes," said Bryan Jones, a postdoctoral researcher at the CUNY Institute for Demographic Research and lead author of the study. "However, it is how people experience these extremes that will ultimately shape the broader public perception of climate change."



Contacts and sources: 
Laura Snider
National Center for Atmospheric Research (NCAR)

Citation:  "Future population exposure to U.S. heat extremes"
Authors: Bryan Jones, Brian C. O'Neill, Larry McDaniel, Seth McGinnis, Linda O. Mearns, and Claudia Tebaldi.  Publication: Nature Climate Change

Fewer But More Powerful Hurricanes Predicted Due To Climate Change

Climate change may be the driving force behind fewer, yet more powerful hurricanes and tropical storms, says a Florida State geography professor.

In a paper published today by Nature Climate Change, Professor Jim Elsner and his former graduate student Namyoung Kang found that rising ocean temperatures are having an effect on how many tropical storms and hurricanes develop each year.

Hurricane Katrina
Credit: Jeff Schmaltz, MODIS Rapid Response Team, NASA/GSFC

"We're seeing fewer hurricanes, but the ones we do see are more intense," Elsner said. "When one comes, all hell can break loose."

Prior to this research, there had been some discussions among scientists about how warmer ocean temperatures affected the intensity of a hurricane. Elsner and Kang wanted to further explore that concept as well as the number of storms that occurred each year.

Hurricanes can form when ocean waters are 79 degrees Fahrenheit or more. As the warm water evaporates, it provides the energy a storm needs to become a hurricane. Higher temperatures mean higher levels of energy, which would ultimately affect wind speed.

Specifically, Elsner and Kang estimated that over the past 30 years, storm wind speeds increased on average by 1.3 meters per second -- about 3 miles per hour -- and there were 6.1 fewer storms than there would have been if land and water temperatures had remained constant.
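As a quick check, 1.3 meters per second does round to about 3 miles per hour:

```python
# Quick unit check on the quoted wind-speed increase (arithmetic only).
MPS_TO_MPH = 2.23694  # miles per hour in one meter per second

print(1.3 * MPS_TO_MPH)  # ~2.9 mph, i.e. "about 3 miles per hour"
```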

"It's basically a tradeoff between frequency and intensity," Elsner said.

According to the National Oceanic and Atmospheric Administration, the Earth is roughly 1.53 degrees Fahrenheit warmer than it was last century.

Elsner and Kang said the yearly temperatures can also be a good indicator of what's yet to come in a given storm season.

"In a warmer year, stronger but fewer tropical cyclones are likely to occur," said Kang, now deputy director of the National Typhoon Center in South Korea. "In a colder year, on the other hand, weaker but more tropical cyclones."

For the 2015 Atlantic storm season, which begins June 1, the Weather Channel has projected a total of nine named storms, five hurricanes and one major hurricane. The 30-year average is 12 named storms, six hurricanes and three major hurricanes.

The Geophysical Fluid Dynamics Institute at Florida State supported this research.


Contacts and sources:
Kathleen Haughney
Florida State University

With HiLight, Cameras Can Talk to Screens Without the User Knowing It

Opening the way for new applications of smart devices, Dartmouth researchers have created the first form of real-time communication that allows screens and cameras to talk to each other without the user knowing it.

Using off-the-shelf smart devices, the new system supports an unobtrusive, flexible and lightweight communication channel between screens (of TVs, laptops, tablets, smartphones and other electronic devices) and cameras. The system, called HiLight, will enable new context-aware applications for smart devices. 

Dartmouth researchers have created the first form of real-time communication that allows screens, displaying images such as this landscape, and cameras to talk to each other without the user knowing it.
Credit: Dartmouth College

Such applications include smart glasses communicating with screens to realize augmented reality or acquire personalized information without affecting the content that users are currently viewing. The system also provides far-reaching implications for new security and graphics applications.

The findings will be presented May 20 at the ACM MobiSys'15, a top conference in mobile systems, applications and services. A PDF of the study, further information and demonstration videos are available at the HiLight project website.

In a world of ever-increasing smart devices, enabling screens and cameras to communicate has been attracting growing interest. The idea is simple: information is encoded into a visual frame shown on a screen, and any camera-equipped device can turn to the screen and immediately fetch the information. 

Operating on the visible light spectrum band, screen-camera communication is free of electromagnetic interference, offering a promising alternative for acquiring short-range information. But these efforts commonly require displaying visible coded images, which interfere with the content the screen is playing and create unpleasant viewing experiences.

The Dartmouth team studied how to enable screens and cameras to communicate without the need to show any coded images such as QR codes, the barcodes readable by mobile phones. In the HiLight system, screens display content as they normally do, and the content can change as users interact with the screens. At the same time, screens transmit dynamic data instantaneously to any camera-equipped devices behind the scenes, unobtrusively and in real time.

HiLight supports communication atop any screen content, such as an image, movie, video clip, game, web page or any other application window, so that camera-equipped devices can fetch the data by turning their cameras to the screen. HiLight leverages the alpha channel, a well-known concept in computer graphics, to encode bits into pixel translucency changes. HiLight overcomes the key bottleneck of existing designs by removing the need to directly modify pixel color values, decoupling the communication layer from the screen-content image layer.
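A minimal sketch of the idea, assuming a simplified encoder and frame-differencing decoder (this is not the authors' implementation; the alpha depth and threshold below are made-up illustration values):

```python
# Rough sketch in the spirit of HiLight's alpha-channel trick; NOT the
# authors' implementation. Alpha depth, threshold and the frame-differencing
# decoder are simplified assumptions for illustration.
import numpy as np

ALPHA_ONE, ALPHA_ZERO = 0.99, 1.0   # ~1% translucency dip encodes a 1-bit

def encode_frames(content: np.ndarray, bits: list[int]) -> list[np.ndarray]:
    """Blend each bit into a copy of the content frame via its alpha value.

    A camera comparing frames can detect the tiny brightness dip; a human
    viewer perceives unchanged content.
    """
    return [(content.astype(np.float32) * (ALPHA_ONE if b else ALPHA_ZERO)).astype(np.uint8)
            for b in bits]

def decode_frames(frames: list[np.ndarray], reference: np.ndarray) -> list[int]:
    """Recover bits by thresholding each frame's mean brightness drop."""
    ref = reference.astype(np.float32).mean()
    threshold = 0.5 * ref * (ALPHA_ZERO - ALPHA_ONE)
    return [1 if ref - f.astype(np.float32).mean() > threshold else 0 for f in frames]

content = np.full((4, 4, 3), 200, dtype=np.uint8)   # stand-in for screen content
bits = [1, 0, 1, 1, 0]
assert decode_frames(encode_frames(content, bits), content) == bits
```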

"Our work provides an additional way for devices to communicate with one another without sacrificing their original functionality," says senior author Xia Zhou, an assistant professor of computer science and co-director of the DartNets (Dartmouth Networking and Ubiquitous Systems) Lab. 

"It works on off-the-shelf smart devices. Existing screen-camera work either requires showing coded images obtrusively or cannot support arbitrary screen content that can be generated on the fly. Our work advances the state-of-the-art by pushing screen-camera communication to the maximal flexibility."


Contacts and sources:
John Cramer