Unseen Is Free

Monday, April 20, 2015

Tiny Sensor Detects Spoiled Meat: Use In “Smart Packaging” Could Improve Food Safety

MIT chemists have devised an inexpensive, portable sensor that can detect gases emitted by rotting meat, allowing consumers to determine whether the meat in their grocery store or refrigerator is safe to eat.

The sensor, which consists of chemically modified carbon nanotubes, could be deployed in “smart packaging” that would offer much more accurate safety information than the expiration date on the package, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT.

This MIT device, based on modified carbon nanotubes, can detect amines produced by decaying meat.

Photo: Sophie Liu

It could also cut down on food waste, he adds. “People are constantly throwing things out that probably aren’t bad,” says Swager, who is the senior author of a paper describing the new sensor this week in the journal Angewandte Chemie.

The paper’s lead author is graduate student Sophie Liu. Other authors are former lab technician Alexander Petty and postdoc Graham Sazama.

The sensor is similar to other carbon nanotube devices that Swager’s lab has developed in recent years, including one that detects the ripeness of fruit. All of these devices work on the same principle: Carbon nanotubes can be chemically modified so that their ability to carry an electric current changes in the presence of a particular gas.

In this case, the researchers modified the carbon nanotubes with metal-containing compounds called metalloporphyrins, which contain a central metal atom bound to several nitrogen-containing rings. Hemoglobin, which carries oxygen in the blood, is a metalloporphyrin with iron as the central atom.

For this sensor, the researchers used a metalloporphyrin with cobalt at its center. Metalloporphyrins are very good at binding to nitrogen-containing compounds called amines. Of particular interest to the researchers were the so-called biogenic amines, such as putrescine and cadaverine, which are produced by decaying meat.

When the cobalt-containing porphyrin binds to any of these amines, it increases the electrical resistance of the carbon nanotube, which can be easily measured.

“We use these porphyrins to fabricate a very simple device where we apply a potential across the device and then monitor the current. When the device encounters amines, which are markers of decaying meat, the current of the device will become lower,” Liu says.
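The read-out logic Liu describes is simple enough to caricature in a few lines. The sketch below is purely illustrative (the baseline, threshold, units, and function names are invented, not taken from the MIT device): it flags spoilage once the monitored current sags well below a fresh-meat baseline, since amine binding raises the nanotube resistance and lowers the current.

```python
# Illustrative sketch, NOT the MIT device's firmware: flag spoilage when
# the measured current drops well below a fresh-meat baseline. Amine
# binding increases nanotube resistance, so the current falls.

def spoilage_flag(currents_uA, baseline_uA, drop_fraction=0.2):
    """Return True once any reading falls more than drop_fraction below baseline."""
    threshold = baseline_uA * (1.0 - drop_fraction)
    return any(i < threshold for i in currents_uA)

# Fresh sample: current stays near a hypothetical 50 uA baseline.
fresh = [50.1, 49.8, 50.0, 49.9]
# Decaying sample: amines bind, resistance rises, current sags.
spoiled = [50.0, 47.2, 43.5, 38.9]

assert spoilage_flag(fresh, 50.0) is False
assert spoilage_flag(spoiled, 50.0) is True
```

In a real device the threshold would be calibrated per meat type; the study's point is that the electrical change is large and easy to measure.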

In this study, the researchers tested the sensor on four types of meat: pork, chicken, cod, and salmon. They found that when refrigerated, all four types stayed fresh over four days. Left unrefrigerated, the samples all decayed, but at varying rates.

There are other sensors that can detect the signs of decaying meat, but they are usually large and expensive instruments that require expertise to operate. “The advantage we have is these are the cheapest, smallest, easiest-to-manufacture sensors,” Swager says.

“There are several potential advantages in having an inexpensive sensor for measuring, in real time, the freshness of meat and fish products, including preventing foodborne illness, increasing overall customer satisfaction, and reducing food waste at grocery stores and in consumers’ homes,” says Roberto Forloni, a senior science fellow at Sealed Air, a major supplier of food packaging, who was not part of the research team.

The new device also requires very little power and could be incorporated into a wireless platform Swager’s lab recently developed that allows a regular smartphone to read output from carbon nanotube sensors such as this one.

The researchers have filed for a patent on the technology and hope to license it for commercial development. The research was funded by the National Science Foundation and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.


Contacts and sources: 
Anne Trafton 
MIT News Office

NailO: Wireless Mouse Worn On Thumb

Researchers at the MIT Media Laboratory are developing a new wearable device that turns the user’s thumbnail into a miniature wireless track pad.

They envision that the technology could let users control wireless devices when their hands are full — answering the phone while cooking, for instance. It could also augment other interfaces, allowing someone texting on a cellphone, say, to toggle between symbol sets without interrupting his or her typing. Finally, it could enable subtle communication in circumstances that require it, such as sending a quick text to a child while attending an important meeting.
 
A new wearable device, NailO, turns the user’s thumbnail into a miniature wireless track pad. Here, it works as an X-Y coordinate touch pad for a smartphone.
Credit: MIT Media Lab

The researchers describe a prototype of the device, called NailO, in a paper they’re presenting next week at the Association for Computing Machinery’s Computer-Human Interaction conference in Seoul, South Korea.

According to Cindy Hsin-Liu Kao, an MIT graduate student in media arts and sciences and one of the new paper’s lead authors, the device was inspired by the colorful stickers that some women apply to their nails. “It’s a cosmetic product, popular in Asian countries,” says Kao, who is Taiwanese. “When I came here, I was looking for them, but I couldn’t find them, so I’d have my family mail them to me.”

Indeed, the researchers envision that a commercial version of their device would have a detachable membrane on its surface, so that users could coordinate surface patterns with their outfits. To that end, they used capacitive sensing — the same kind of sensing the iPhone’s touch screen relies on — to register touch, since it can tolerate a thin, nonactive layer between the user’s finger and the underlying sensors.

Designed in the MIT Media Lab, NailO is a thumbnail-mounted wireless track pad that controls digital devices. Watch it in action.

Video: Melanie Gonick/MIT

Instant access

As the site for a wearable input device, however, the thumbnail has other advantages: It’s a hard surface with no nerve endings, so a device affixed to it wouldn’t impair movement or cause discomfort. And it’s easily accessed by the other fingers — even when the user is holding something in his or her hand.

“It’s very unobtrusive,” Kao explains. “When I put this on, it becomes part of my body. I have the power to take it off, so it still gives you control over it. But it allows this very close connection to your body.”

To build their prototype, the researchers needed to find a way to pack capacitive sensors, a battery, and three separate chips — a microcontroller, a Bluetooth radio chip, and a capacitive-sensing chip — into a space no larger than a thumbnail. “The hardest part was probably the antenna design,” says Artem Dementyev, a graduate student in media arts and sciences and the paper’s other lead author. “You have to put the antenna far enough away from the chips so that it doesn’t interfere with them.”

Kao and Dementyev are joined on the paper by their advisors, principal research scientist Chris Schmandt and Joe Paradiso, an associate professor of media arts and sciences. Dementyev and Paradiso focused on the circuit design, while Kao and Schmandt concentrated on the software that interprets the signal from the capacitive sensors, filters out the noise, and translates it into movements on screen.
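As a rough illustration of that software stage (the electrode count, smoothing window, and function names here are hypothetical, not NailO's actual pipeline), noisy capacitance readings can be smoothed with a moving average and converted to a position estimate via a capacitance-weighted centroid:

```python
# Hypothetical sketch of the kind of processing described above:
# smooth noisy capacitance samples, then estimate finger position as
# the capacitance-weighted centroid along one electrode row.

def smooth(samples, window=3):
    """Simple trailing moving average to suppress sensor noise."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def centroid(capacitances):
    """Weighted centroid gives a sub-electrode position estimate."""
    total = sum(capacitances)
    if total == 0:
        return None  # no touch detected
    return sum(i * c for i, c in enumerate(capacitances)) / total

# Finger resting between electrodes 1 and 2 of a 4-electrode row:
row = [0.0, 3.0, 3.0, 0.0]
assert centroid(row) == 1.5
```

The centroid trick is why a coarse electrode grid can still yield smooth cursor motion: the position estimate interpolates between electrodes rather than snapping to them.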

For their initial prototype, the researchers built their sensors by printing copper electrodes on sheets of flexible polyester, which allowed them to experiment with a range of different electrode layouts. But in ongoing experiments, they’re using off-the-shelf sheets of electrodes like those found in some track pads.

Slimming down

They’ve also been in discussion with battery manufacturers — traveling to China to meet with several of them — and have identified a technology that they think could yield a battery that fits in the space of a thumbnail, but is only half a millimeter thick. A special-purpose chip that combines the functions of the microcontroller, radio, and capacitive sensor would further save space.

At such small scales, however, energy efficiency is at a premium, so the device would have to be deactivated when not actually in use. In the new paper, the researchers also report the results of a usability study that compared different techniques for turning it off and on. They found that requiring surface contact with the operator’s finger for just two or three seconds was enough to guard against inadvertent activation and deactivation.
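That activation scheme amounts to a small debounce state machine. The sketch below is an assumed reconstruction, not the paper's implementation: sustained contact for a set number of consecutive samples toggles the device, while brief accidental brushes are ignored.

```python
# Hedged sketch of the dwell-time activation idea: require sustained
# contact (e.g. ~2 s of consecutive samples) before toggling power,
# so a brief accidental brush does not switch the device on or off.

class DwellToggle:
    def __init__(self, required_samples=20):
        self.required = required_samples
        self.count = 0
        self.active = False

    def update(self, touching):
        """Feed one sensor sample; return the current active state."""
        if touching:
            self.count += 1
            if self.count == self.required:
                self.active = not self.active  # sustained contact toggles once
        else:
            self.count = 0  # contact broken: reset the dwell timer
        return self.active

t = DwellToggle(required_samples=3)
assert t.update(True) is False   # 1 sample: too short
assert t.update(True) is False   # 2 samples: still too short
assert t.update(True) is True    # 3rd consecutive sample: toggles on
assert t.update(False) is True   # releasing does not toggle back
```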

“Keyboards and mice — still — are not going away anytime soon,” says Steve Hodges, who leads the Sensors and Devices group at Microsoft Research in Cambridge, England. “But more and more that’s being complemented by use of our devices and access to our data while we’re on the move. I’ve got desktop, I’ve got a mobile phone, but that’s still not enough. Different ways of displaying and controlling devices while we’re on the go are, I believe, going to be increasingly important.”

“Is it the case that we’ll all be walking around with digital fingernails in five years’ time?” Hodges asks. “Maybe it is. Most likely, we’ll have a little ecosystem of these input devices. Some will be audio based, which is completely hands free. But there are a lot of cases where that’s not going to be appropriate. NailO is interesting because it’s thinking about much more subtle interactions, where gestures or speech input are socially awkward.”

Contacts and sources: 
Larry Hardesty  
MIT News Office

Astronomers Solve Decades-Long Mystery of the "Lonely Old Stars"

Many, perhaps most, stars in the Universe live their lives with companions by their sides – in so-called binary systems. Until recently, however, the ancient RR Lyrae stars appeared, for mysterious reasons, to live their lives all alone. A recent study led by Chilean astronomers shows that RR Lyrae stars may not be as lonely as previously thought. The new research is published in Monthly Notices of the Royal Astronomical Society Letters.

Map of the sky towards the central bulge of the Milky Way, with the positions of the binary candidates indicated as red circles. The background image is based on near-infrared observations obtained in the course of the Vista Variables in the Vía Láctea (VVV) ESO Public Survey. The scale is approximately 20 by 15 degrees.  
Credit: D. Minniti. 

Stars are very often found not in isolation, but rather in pairs. In these so-called binary systems, two stars orbit around their common centre of gravity. Suitable binary systems are of extreme importance in astrophysics, as their properties can be inferred with unparalleled accuracy from detailed analysis of their orbital properties.

Puzzlingly, however, an overwhelming majority of the known members of a very important family of stars, known to astronomers as RR Lyrae variables, have for long appeared to live their lives all alone. These stars, being among the oldest known in the cosmos, contain precious information about the origin and evolution of the stellar systems that harbour them, such as the Milky Way itself. However, the lack of RR Lyrae stars in binary systems has made a direct assessment of some of their key properties difficult. Most often, theory had to be invoked to fill the gap.

This apparent solitude has always intrigued astronomers. Now, however, an international research team led by experts of the Millennium Institute of Astrophysics (MAS) and the Pontificia Universidad Católica de Chile’s Institute of Astrophysics (IA-PUC) has found evidence that these stars may not abhor companionship so thoroughly after all. In a Letter published in the journal Monthly Notices of the Royal Astronomical Society, the team reports on the identification of as many as 20 candidate RR Lyrae binaries – an increase of up to 2000% with respect to previous tallies. Twelve of those candidates have enough measurements to conclude with high confidence that they do indeed consist of two stars orbiting each other.

"In the solar neighbourhood, about every second star is in a binary. The problem with RR Lyrae variables is that for a long time only one of them was known to be in a long-period binary system. The fact that among 100,000 known RR Lyrae stars only one of them had been seen to have such a companion was something really intriguing for astronomers," explains Gergely Hajdu, IA-PUC Ph.D. student, MAS researcher, and lead author of the study.

Animation showing the light-travel time effect. The upper plot shows the brightness of a binary RR Lyrae star (OGLE-RRLYR-06498) as a function of time. Grey symbols indicate all the available individual measurements, whereas the red symbols show a selection of the data points as they were collected over time. The bottom plot shows the same data, but folded according to the pulsation period. The main brightness variations show up as a sawtooth-like distribution corresponding to the pulsation of the RR Lyrae variable, whilst the back-and-forth movement shown by the red dots is the signal that is brought about by the presence of a binary companion.
Credit:  Royal Astronomical Society
  

In their paper, the authors used a method that astronomers call the "light-travel time effect," which exploits subtle differences in the time it takes starlight to reach us.

"The RR Lyrae stars pulsate regularly, significantly increasing, and then decreasing, their sizes, temperatures, and brightness, in a matter of just a few hours. When a pulsating star is in a binary system, the changes in brightness perceived by us can be affected by where exactly the star is in the course of its orbit around the companion. Thus, the starlight takes longer to reach us when it is at the farthest point along its orbit, and vice-versa. This subtle effect is what we have detected in our candidates," according to Hajdu.
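For a pulsator on a circular orbit, this timing modulation can be written schematically as follows (a standard textbook form, quoted here for context rather than taken from the paper itself):

```latex
% Light-travel time effect for a pulsating star in a circular binary:
% observed times of pulsation maxima are delayed or advanced by the
% star's changing distance from the observer along its orbit.
\[
  \Delta t(t) \;=\; \frac{a_1 \sin i}{c}\,
  \sin\!\left(\frac{2\pi t}{P_{\mathrm{orb}}} + \phi_0\right)
\]
% a_1 sin i : projected orbital semi-major axis of the pulsator,
% P_orb     : orbital period,  phi_0 : orbital phase at t = 0,
% c         : speed of light.
```

Because the amplitude scales with the projected orbital size, long-period (wide) binaries produce the largest, most detectable timing shifts, which is consistent with the multi-year orbits the team reports.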

"Our measurements were based on data published by the Polish OGLE Project. The OGLE team have obtained their data using the 1.3m Warsaw telescope, located in Las Campanas Observatory, northern Chile, repeatedly observing the same patches of the sky for many years. Our 20 candidates were found by analysing the roughly 2000 best-observed RR Lyrae stars towards the central parts of the Milky Way. That's about 5% of the known ones. It was only thanks to the high quality of the OGLE data and the long timespan of these observations that we could finally find signs of companions around so many of these stars," says Hajdu.

Indeed, the systems detected by Hajdu et al. have orbital periods of several years, which indicates that the companions, though bound together by gravity, are not very close to one another. "Binaries with even longer periods may also exist, but the current data do not extend long enough for us to reach strong conclusions in this respect," he adds.

For co-author Márcio Catelan, MAS Associate Researcher, IA-PUC astrophysicist and Hajdu's thesis advisor, these results have significant implications for astrophysics. "These are extremely old stars, which have witnessed the formation of galaxies like our own Milky Way, and survived to tell us the story. Besides, they are easy to identify, since they show characteristic, cyclical brightness variations, which make them excellent distance indicators for the nearby Universe. However, a lot of what we know about them relies on theoretical modelling. We can now exploit the orbital information contained in these binary systems – and there are quite a few of them now – in order to directly measure their physical properties, especially their masses but possibly also their diameters, thus opening new doors to discoveries that until recently seemed impossible," he says.

This is just the first step towards achieving these goals, however: according to Catelan, more data will be needed, particularly follow-up observations of the binary candidates with sophisticated techniques like spectroscopy and astrometry. The rewards awaiting at the end of the road seem well worth the long journey, and the RR Lyrae will happily traverse that path with their companions firmly by their sides.


Contacts and sources:
Dr Keith Smith
Royal Astronomical Society

Gergely Hajdu
Instituto de Astrofísica, Pontificia Universidad Católica de Chile & Millennium Institute of Astrophysics

The new work appears in G. Hajdu et al., "New RR Lyrae variables in binary systems", Monthly Notices of the Royal Astronomical Society, vol. 449, pp. L113-L117, 2015, published by Oxford University Press.

Largest Structure In Universe: A Supervoid 1.8 Billion Light Years Across

Astronomers may have found "the largest individual structure ever identified by humanity", according to Dr István Szapudi of the University of Hawaii at Manoa. Dr Szapudi and his team report their findings in the journal Monthly Notices of the Royal Astronomical Society.

A map of the cosmic microwave background made using the Planck satellite. The Cold Spot, shown by the ellipse at the bottom right, resides in the constellation Eridanus in the southern galactic hemisphere. The insets show the environment of this anomalous patch of the sky, as mapped by Szapudi’s team using PS1 and WISE data and as observed in the cosmic microwave background temperature data. The angular diameter of the vast supervoid aligned with the Cold Spot, which exceeds 30 degrees, is marked by the white circles. Graphics by Gergő Kránicz.
Image credit: ESA Planck Collaboration.

In 2004, astronomers examining a map of the radiation left over from the Big Bang (the cosmic microwave background, or CMB) discovered the Cold Spot, a larger-than-expected unusually cold area of the sky. The physics surrounding the Big Bang theory predicts warmer and cooler spots of various sizes in the infant universe, but a spot this large and this cold was unexpected. Now astronomers may have found an explanation for the existence of the Cold Spot.

If the Cold Spot originated from the Big Bang itself, it could be a rare sign of exotic physics that the standard cosmology (basically, the Big Bang theory and related physics) does not explain. If, however, it is caused by a foreground structure between us and the CMB, it would be a sign that there is an extremely rare large-scale structure in the mass distribution of the universe.

Using data from Hawaii’s Pan-STARRS1 (PS1) telescope located on Haleakala, Maui, and NASA’s Wide-field Infrared Survey Explorer (WISE) satellite, Szapudi’s team discovered a large supervoid, a vast region 1.8 billion light-years across, in which the density of galaxies is much lower than usual in the known universe. This void was found by combining observations taken by PS1 at optical wavelengths with observations taken by WISE at infrared wavelengths to estimate the distance to and position of each galaxy in that part of the sky.

Earlier studies, also done in Hawaii, observed a much smaller area in the direction of the Cold Spot, but they could establish only that no very distant structure is in that part of the sky. Paradoxically, identifying nearby large structures is harder than finding distant ones, since we must map larger portions of the sky to see the closer structures. The large three-dimensional sky maps created from PS1 and WISE by Dr András Kovács (Eötvös Loránd University, Budapest, Hungary) were thus essential for this study. The supervoid is only about 3 billion light-years away from us, a relatively short distance in the cosmic scheme of things.

Imagine there is a huge void with very little matter between you (the observer) and the CMB. Now think of the void as a hill. As the light enters the void, it must climb this hill. If the universe were not undergoing accelerating expansion, then the void would not evolve significantly, and light would descend the hill and regain the energy it lost as it exited the void. But with the accelerating expansion, the hill is measurably stretched as the light is traveling over it. By the time the light descends the hill, the hill has gotten flatter than when the light entered, so the light cannot recover all the energy it lost upon entering the void. The light exits the void with less energy, and therefore at a longer wavelength, which corresponds to a colder temperature.
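In standard cosmology the hill analogy corresponds to a small temperature shift accumulated along the photon's path through the evolving gravitational potential (a textbook form, quoted here for context rather than taken from the paper):

```latex
% A photon crossing a gravitational potential \Phi that decays while
% the photon is in transit picks up a net temperature shift:
\[
  \frac{\Delta T}{T} \;=\; \frac{2}{c^{2}}
  \int_{\mathrm{path}} \frac{\partial \Phi(\mathbf{x}, t)}{\partial t}\, dt
\]
% In an accelerating universe the potential "hill" of a void decays
% (\partial\Phi/\partial t < 0 inside the void), so \Delta T < 0:
% photons crossing the supervoid emerge slightly colder.
```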

Getting through a supervoid takes hundreds of millions of years, even at the speed of light, so this measurable effect (known as the Integrated Sachs-Wolfe (ISW) effect) might provide an explanation for the Cold Spot. The spot is one of the most significant anomalies found to date in the CMB, first by a NASA satellite called the Wilkinson Microwave Anisotropy Probe (WMAP), and more recently by Planck, a satellite launched by the European Space Agency.

While the existence of the supervoid and its expected effect on the CMB do not fully explain all the properties of the Cold Spot, it is very unlikely that the supervoid and the Cold Spot at the same location are a coincidence. The team will continue its work using improved data from PS1, and from the Dark Energy Survey being conducted with a telescope in Chile to study the Cold Spot and supervoid, as well as another large void located near the constellation Draco.


Contacts and sources:
Louise Good
Royal Astronomical Society
  
Dr István Szapudi
Institute for Astronomy at the University of Hawaii at Manoa


Citation: The study appears in István Szapudi et al., "Detection of a supervoid aligned with the cold spot of the cosmic microwave background", Monthly Notices of the Royal Astronomical Society, vol. 450, pp. 288-294, 2015, published by Oxford University Press.

A preprint of the paper is available on the arXiv server.

Ocean Winds Blow Away Hopes El Niño Would End California Drought

A UMD study points to prolonged wind bursts originating in the western Pacific as the reason that the 2014/2015 El Niño will be far less powerful than anticipated and thus unlikely to deliver much-needed rain to California and other western states.


An El Niño is a sporadic warming of ocean temperatures in the central and eastern tropical Pacific that can have ripple effects in weather systems around the globe. Last month, the federal Center for Weather and Climate Prediction in College Park, Md., announced a long-predicted El Niño had finally arrived, but was far less powerful than expected.

The new study, published online April 13, 2015, in the journal Nature Geoscience, found that prolonged westerly wind bursts can have a strong effect on whether an El Niño event will occur and how strong it is likely to be. In addition, the paper identifies three distinct varieties or “flavors” of El Niño, and explains how these westerly wind bursts (WWBs) can determine which of these flavors will take shape. The findings should help refine future predictions of these global-scale climate events.


“These westerly wind bursts are intraseasonal—they’re not weather, they’re not climate, but somewhere in between,” said Raghu Murtugudde, a professor of atmospheric and oceanic science at the University of Maryland who is a co-author on the study. “Our study shows that the wind bursts are definitely having an effect. We better learn to predict them if we are going to have skillful El Niño predictions.”

The researchers analyzed 50 years of tropical Pacific sea surface temperature and westerly wind burst data. They found differences, especially when comparing the data from this year’s weak El Niño event with the record-breaking event of 1997/98.

“The most notable difference was the existence of strong westerly winds extending from the western to central equatorial Pacific in May 1997, which were not seen in 2014,” said Murtugudde, who also has an appointment in the university’s Earth System Science Interdisciplinary Center (ESSIC). “The development of strong westerly winds in the central equatorial Pacific in association with the warming to its east appears to be an essential element of large El Niño events.”

After adding westerly wind bursts to their intermediate ocean-atmosphere coupled model, the researchers consistently found three “flavors” of El Niño (rather than one, which was the model’s output without the winds). The three warm patterns included extremely strong events with the largest warming near the South American coast, a cluster of weak warm events centered near the dateline, or moderate warming in the central-eastern equatorial Pacific. For strong El Niño events, the westerly wind bursts grow strong and extend east of the dateline.

According to the research team, the wind bursts affect ocean dynamics by exciting Kelvin waves that produce surface warming in the eastern equatorial Pacific and by generating strong equatorial surface currents that extend the eastern edge of the warm pool.

“We hope this study helps other climate modeling researchers realize the importance of westerly wind bursts on El Niño severity and diversity, and the importance of extending our weather forecast capabilities from two to four weeks to capture WWB variability. Fortunately, the latter is now a focus at the National Oceanic and Atmospheric Administration, which develops our weather forecasts,” said Murtugudde.



Contacts and sources:
Abby Robinson 

Black Hole Hunters Tackle A Cosmic Conundrum

Dartmouth astrophysicists and their colleagues have not only proven that a supermassive black hole exists in a place where it isn't supposed to be, but in doing so have opened a new door to what things were like in the early universe.

Henize 2-10 is a small irregular galaxy that is not too far away in astronomical terms -- 30 million light-years. "This is a dwarf starburst galaxy -- a small galaxy with regions of very rapid star formation -- about 10 percent of the size of our own Milky Way," says co-author Ryan Hickox, an assistant professor in Dartmouth's Department of Physics and Astronomy. "If you look at it, it's a blob, but it surprisingly harbors a central black hole."

A Hubble Space Telescope image shows the Henize 2-10 galaxy, with a hidden supermassive black hole at its center.
Credit: NASA

Hickox says there may be similar small galaxies in the known universe, but this is one of the only ones close enough to allow detailed study. Lead author Thomas Whalen, Hickox and a team of other researchers have now analyzed a series of four X-ray observations of Henize 2-10 using three space telescopes over 13 years, providing conclusive evidence for the existence of a black hole.

Their findings appear as an online preprint to be published in The Astrophysical Journal Letters. A PDF also is available on request.

Suspicions about Henize 2-10 first arose in 2011, when another team, which included some of the present co-authors, looked at the galaxy and tried to explain its behavior. The observed dual emissions of X-ray and radio waves, often associated with a black hole, gave credence to the presence of one. The instruments utilized were Japan’s Advanced Satellite for Cosmology and Astrophysics (1997), the European Space Agency’s XMM-Newton (2004, 2011) and NASA’s Chandra X-ray Observatory (2001).

“The galaxy was bright in 2001, but it has gotten less bright over time,” says Hickox. “This is not consistent with being powered only by star formation processes, so it almost certainly had to have a small supermassive black hole -- small compared to the largest supermassive black holes in massive elliptical galaxies, but still a million times the mass of the sun.”

A characteristic of supermassive black holes is that they do change with time -- not a huge amount, explains Hickox, "and that is exactly what Tom Whalen found," he says. "This variability definitely tells us that the emission is coming from a compact source at the center of this system, consistent with it being a supermassive black hole."

While supermassive black holes are typically found in the central bulges of galaxies, Henize 2-10 has no bulge. "All the associations that people have made between galaxies and black holes tell us there ought to be no black hole in this system," says Whalen, but the team has proven otherwise. Whalen, a recent Dartmouth graduate, is now a member of the Chandra X-ray Center team at the Harvard-Smithsonian Center for Astrophysics.

A big question is where black holes come from. "When people try to simulate where the galaxies come from, you have to put in these black holes at the beginning, but we don't really know what the conditions were. These dwarf starburst galaxies are the closest analogs we have in the universe around us now, to the first galaxies early in the universe," says Whalen.

The authors conclude: "Our results confirm that nearby star-forming galaxies can indeed form massive black holes and that by implication so can their primordial counterparts."

"Studying those to get some sense of what might have happened very early in the universe is very powerful," says Hickox.


Contacts and sources:
John Cramer
Dartmouth College

Pulsing Light Suggests Supermassive Black Hole Merger

As two galaxies enter the final stages of merging, scientists have theorized that the galaxies' supermassive black holes will form a "binary," or two black holes in such close orbit they are gravitationally bound to one another. In a new study, astronomers at the University of Maryland present direct evidence of a pulsing quasar, which may substantiate the existence of black hole binaries.

Two black holes entwined in a gravitational tango in an earlier artist's conception of black holes.
Credit: NASA

"We believe we have observed two supermassive black holes in closer proximity than ever before," said Suvi Gezari, assistant professor of astronomy at the University of Maryland and a co-author of the study. "This pair of black holes may be so close together that they are emitting gravitational waves, which were predicted by Einstein's theory of general relativity."

The study was published online April 14, 2015, in the Astrophysical Journal Letters. The discovery could shed light on how often black holes get close enough to form a gravitationally bound binary and eventually merge together.

Black holes typically gobble up matter, which accelerates and heats up, emitting electromagnetic energy and creating some of the most luminous beacons in the sky called quasars. When two black holes orbit as a binary, they absorb matter cyclically, leading theorists to predict that the binary's quasar would respond by periodically brightening and dimming.

The researchers conducted a systematic search for so-called variable quasars using the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS1) Medium Deep Survey. This Haleakala, Hawaii-based telescope imaged the same patch of sky once every three days and collected hundreds of data points for each object over four years.

In that data, the astronomers found quasar PSO J334.2028+01.4075, which has a very large black hole of almost 10 billion solar masses and emits a periodic optical signal that repeats every 542 days. The quasar's signal was unusual because the light curves of most quasars are arrhythmic. To verify their finding, the research team performed rigorous calculations and simulations and examined additional data, including photometric data from the Catalina Real-Time Transient Survey and spectroscopic data from the FIRST Bright Quasar Survey.
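As a toy illustration of how a periodic signal can be pulled out of such survey data (the method below is generic epoch folding, not necessarily the team's analysis, and the synthetic light curve is invented), one can fold the observations at trial periods and keep the period that minimizes the scatter within phase bins:

```python
import math

# Toy illustration, NOT the team's actual analysis: recover a known
# period from sampled brightness data by epoch folding -- fold the
# light curve at trial periods and keep the one whose phase-binned
# curve has the least internal scatter.

def phase_dispersion(times, mags, period, nbins=10):
    """Sum of within-bin squared deviations of the phase-folded light curve."""
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[int(phase * nbins) % nbins].append(m)
    disp = 0.0
    for b in bins:
        if len(b) > 1:
            mean = sum(b) / len(b)
            disp += sum((m - mean) ** 2 for m in b)
    return disp

# Synthetic quasar-like light curve: one point every 3 days over ~4 years,
# varying with a true period of 542 days (as for PSO J334.2028+01.4075).
true_period = 542.0
times = [3.0 * i for i in range(500)]
mags = [math.sin(2 * math.pi * t / true_period) for t in times]

trials = [400.0, 475.0, 542.0, 610.0, 700.0]
best = min(trials, key=lambda p: phase_dispersion(times, mags, p))
assert best == 542.0
```

Real quasar light curves are noisy and stochastically variable, which is why the team also needed simulations and independent photometric and spectroscopic data to rule out chance rhythms.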

"The discovery of a compact binary candidate supermassive black hole system like PSO J334.2028+01.4075, which appears to be at such close orbital separation, adds to our limited knowledge of the end stages of the merger between supermassive black holes," said UMD astronomy graduate student Tingting Liu, the paper's first author.

The researchers plan to continue searching for new variable quasars. Beginning in 2023, their search could be aided by the Large Synoptic Survey Telescope, which is expected to survey a much larger area and could potentially pinpoint the locations of thousands of these merging supermassive black holes in the night sky.

"These telescopes allow us to watch a movie of how these systems evolve," said Liu. "What's really cool is that we may be able to watch the orbital separation of these supermassive black holes get smaller and smaller until they merge."


Contacts and sources:
Abby Robinson
University of Maryland 

Sunday, April 19, 2015

Beavers The Size of Bears: Extinction, Survival and Evolution in Kentucky


Researchers at an old geological site talk 'dirt' about how Ice Age climate change led to the extinction of mammoths and mastodons, but to the evolution and survival of bison, deer and other present-day species.

This is an 18,000-year-old mastodon molar (Mammut americanum).
Credit: Tom Robinette, UC

"The answers to extinction, survival and evolution are right here in the dirt," says University of Cincinnati Quaternary science researcher Ken Tankersley, associate professor of anthropology and geology. "And we are continually surprised by what we find."

While many scientists focus on species' extinction wherever there has been rapid and profound climate change, Tankersley looks closely at why certain species survived.

For many years he has invited students and faculty from archeology and geology, and representatives from the Cincinnati Museum Center and Kentucky State Parks to participate in an in-the-field investigation at a rich paleontological and archeological site not too far from UC's campus.

Drawing on scientific data extracted from fossilized vegetation and from the bones and teeth of animals and humans, Tankersley has been able to trace periods of dramatic climate change, identify which animals roamed the Earth during those epochs and determine how they survived. His most recent evidence reveals when humans came on the scene and how they helped change the environment at Big Bone Lick, Kentucky.

"What we found is that deforestation efforts over 5,000 years ago by humans significantly modified the environment to the degree that the erosion began filling in the Ohio River Valley, killing off much of the essential plant life," says Tankersley. "At that point animals had to either move, evolve or they simply died off."

Tankersley will present the culmination of his years of 'Surviving Climate Change' research - countless hours in the field and in the lab, as well as multiple published works - in a talk titled Quaternary Chronostratigraphy of Big Bone Lick, Kentucky, USA, at the Society for American Archaeology annual meeting, April 15-19 in San Francisco. He also has a paper, "Quaternary chronostratigraphy and stable isotope paleoecology of Big Bone Lick, Kentucky, USA," published online in the March issue of the prestigious journal Quaternary Research.

STUDENTS DIG DEEP FOR ANSWERS

Big Bone Lick (BBL) State Park in north-central Kentucky has over 25,000 years of well-preserved bones, rocks and other archeological treasures that have been easily accessible since the 1700s. But until recently, the evidence for why some of this region's former inhabitants evolved into present-day animals, while others simply died off, was buried deep in the sediment.

Only 20 minutes away from UC's main campus by interstate, Tankersley and his students have been taking advantage of BBL's rich and accessible history for the past three years. By digging through layers of soil, scavenging around in creek bottoms and scraping specimens from bone fragments, they have unearthed a treasure trove of ancient specimens - some more than 25,000 years old.

"One of my students, Stephanie Miller, discovered a 10-foot mastodon tusk beneath the water table at the bottom of a creek when she reached below the mud and felt a hard object pointed at the end," says Tankersley. "That tusk is now on display at the Cincinnati Museum Center."

Tankersley, who is of Native American Cherokee ancestry, feels that the need for all this discovery is in his bones, too.

Tankersley originally thought that when the ice reached its maximum advance 25,000 years ago - covering the area now known as Sharonville - the mammoths were grazing on C4 tundra vegetation of herbaceous plants and sedges. To his surprise, he found he couldn't have been more wrong.

While mammoths and mastodons are two distinct species of the proboscidean family, they were originally thought to have lived in different epochs in time and in separate areas of the world:


Mastodons existed earlier, appearing about 27-30 million years ago, primarily in North and Central America.
Mammoths came on the scene about 5.1 million years ago, arising out of Africa.

University of Cincinnati students hold a newly discovered 10-foot mastodon tusk.
Credit: Tom Robinette, UC

The evidence at BBL now shows that mammoths and mastodons both roamed together - possibly through intercontinental migration - and were both eating the same vegetation, even with the difference in the shape of their molars.

The original model of the changing landscape - both botanically and in terms of the animals' diet - was completely wrong, which came as a big shock to Tankersley.

Tankersley's evidence also revealed significant periods of radical shifts in environmental temperature since the last glacial maximum more than 25,000 years ago, which caused sediment to be deposited faster than the system could carry it away. Those radical shifts from cold and dry to warm and moist significantly altered the landscape and its vegetation.

"To determine what animals roamed the area and how they survived, we looked at the stable carbon and nitrogen isotope chemistry of both the animals and plants that were in the sediment for the past 25,000 years," says Tankersley. "Since we are what we eat, we discovered that the mammoths, mastodons and bison were not eating the plants we originally thought. As it turns out, they were eating more C3 vegetation, which is tree leaves and weedy vegetation more like we see outside today."

After cataclysmic events - such as cosmic impacts that caused temperatures to drop and darkened the air with clouds of poisonous gas - the resulting climate change made continued survival difficult for most plant and animal species. According to Tankersley, life at that time became a true test of survival skills: those that could move or adapt to their new surroundings survived, many by evolving into smaller, lighter and faster species.

Larger animal species that could not move fast or far starved, or became mired in the muddy landscape and fell as easy prey to hungry predators.

"My students discovered all of this," says Tankersley. "My job in this 'Surviving Climate Change' project was to give them the resources and tools and teach them the scientific techniques we use, but then let them be the discoverers, which is exactly what happened."

SURVIVAL OF THE MOST FLEXIBLE

At BBL, Tankersley focused on which species survived and which went extinct. His team determined that during times when food sources were declining, animals had to move to more fruitful environments or learn to do with less food - pressures that ultimately drove the evolution of today's surviving species.

Looking closer at those survival patterns, Tankersley found that species like caribou could no longer make a living in this area, but they could up north. And although bison are still around, they are a lot smaller than they were thousands of years ago.

The moral to this story, explains Tankersley, is that many species evolved into smaller animals over time as their food sources started to decline. While some larger species simply died off from a lack of necessary resources, bison and deer were two mammals that were able to survive by evolving with a smaller body mass and shorter stature.

"If you look at a species and you have an environmental downturn or major change in the amount of solar radiation, the amount of water moisture and the amount of frost-free days, can all plants respond to that equally? Of course not," says Tankersley. "As individuals, we all have different tolerance levels for change. So in the case of the caribou, when the climate changed rapidly and profoundly it could no longer make a living at BBL. But it could continue to make a living up north where it had the environment for survival.

"Species get bigger when there is a lot of food available and smaller when there is not. So the bison downsized, but the mammoth and mastodon did not. They could neither move nor downsize quickly enough so they simply died off."

BEAVERS THE SIZE OF BLACK BEARS - OH MY! 

Tankersley's team also discovered different species within a species. For example, while there were small beavers then just like there are now, from 25,000 until 10,000 years ago there were also large beavers the size of black bears.

"The larger extinct beaver lost its battle to survive because it was dependent on a certain environment that was dying off, but the modern beaver could make its own environment and consequently survived," says Tankersley. "So there is a lesson there. Animals had to adapt, downsize or go extinct."

Last year Tankersley and his students excavated over 17,000 specimens that are now housed at the Cincinnati Museum Center. While digging up animal bones, they also found evidence that humans had butchered these animals.

ENTER THE HUMANS

To effectively date the plant and animal specimens, Tankersley's students determined radiocarbon and optically stimulated luminescence (OSL) ages. Dating much of the material to 5,000 years ago using OSL procedures, Tankersley was shocked to find evidence of human activity - and of a new anthropological time period now called the Anthropocene, when humans became the most powerful natural force.
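The radiocarbon half of that dating rests on a simple exponential-decay relation; conventional radiocarbon ages are defined using the Libby mean life of 8,033 years. A minimal sketch follows - the sample value is hypothetical, and real laboratories further calibrate these raw ages against tree-ring and other calibration curves:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional 14C ages use this by definition

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C/12C
    ratio expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A hypothetical sample retaining about 53.7% of modern 14C dates to roughly
# 5,000 years BP - the horizon where the team found human activity.
print(round(radiocarbon_age(0.537)))
```

OSL dating works on a different principle (the light released by mineral grains when last exposed to sunlight), which is why the team could cross-check the two methods against each other.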



"So much of science is serendipitous," claims Tankersley. "What the students discovered serendipitously, by dating these deposits, was that humans came in and broke the sod."

Deeper into the sediment, Tankersley found that humans had dug pits into the ground to process animal skins to wear as clothing. According to French ethnographic accounts, the Native Americans placed rocks and hickory nuts inside the water-filled pits, then used heated rocks to boil the water. The oily, greasy meat of the hickory nut would float to the top, while the inedible remains, like the shell and hull, sank to the bottom.

"They would skim it off and drink the water, as it was very nutritious," says Tankersley. "When they were finished, they would grab the softened deerskin and leave the rocks and nutshells behind, which is what we found."

To protect their hickory-nut trees from squirrels and other animals, the inhabitants cleared large areas of competing trees to create separate hickory orchards - evidence, Tankersley found, of human deforestation. That deforestation and degradation resulted in substantial erosion of the uplands, which caused overbank and backwater flooding of the Ohio Valley area.

The changing vegetation that resulted from this deforestation also contributed to the demise, adaptation or evolution of several species.

Furthermore, Tankersley and his students uncovered evidence that animals were hunted by humans during this same period. Looking closely at the hash marks on animal bones, the team found strong clues that humans had greatly contributed to the extinction of some BBL species, such as the larger bison.

Consequently, through deforestation and arboriculture behavior, and the hunting and extinction of many species of animals, Tankersley found clear evidence that humans indeed contributed to the changing landscape even as far back as 5,000 years ago.

"It's hard to believe, but there is no volcano, no earthquake or tsunami that is moving more sediment than we are," says Tankersley. "Humans are the most powerful force on the planet right now."

To help prevent an underlying assumption of landscape change or stability where it does not exist, Tankersley's team showed that both natural and anthropogenic erosional processes were taking place 5,000 years ago. This activity is directly responsible for the primary and secondary deposits of animal, plant and human artifact remains at Big Bone Lick, Kentucky.



Contacts and sources:
Tom Robinette

The Forces That Move Stars in Galaxies

Cosmic accidents are frequent occurrences in space: two or more disk galaxies collide and form elliptical systems. These contain regions in which the stars orbit the centre in precisely the opposite direction to what happens in the rest of the galaxy. Previous attempts to explain this assumed the colliding galaxies had a special relative orientation (“retrograde”). Athanasia Tsatsi, a doctoral student at the Max Planck Institute for Astronomy in Heidelberg, has now found a further possibility: the mass loss of the galaxies involved acts as a kind of huge rocket engine.

Galaxies about to collide: Snapshots from the simulation in which Athanasia Tsatsi was able to prove the effect of the galactic rocket engine. Left: galaxies before the merger; right: the result afterwards.  
© B. Moster / MPIA

Elliptical galaxies form when at least two disk galaxies (our Milky Way is one such galaxy) collide with each other and coalesce. Unusual things may happen in such systems: while the stars in the outer regions all rotate in one direction, the orbital direction shared by the stars in the core region may be a completely different one.

Why is this? One can imagine that the central region of one of the predecessor galaxies is held together particularly well by the gravitational force of the mass assembled in it. Now, the orbital orientation of the stars in this predecessor galaxy is in precisely the opposite direction to the orbital direction in which the two predecessor galaxies orbited each other before the merger (“retrograde merger”).

Under these conditions, it is plausible that the stable central region becomes the heart of the new elliptical galaxy after the merger, and that the stars in it continue to orbit in precisely the same direction as before. The surrounding stars will move in the opposite direction, however – continuing in the orbital direction in which the predecessor galaxies orbited each other before the merger.

This model appears to work well, but predicts a lower number of counter-rotating cores than are actually observed.

This was the point of departure when Athanasia Tsatsi began her doctoral research at the Max Planck Institute for Astronomy in Heidelberg and evaluated computer simulations of galactic collisions. Tsatsi’s aim was actually to find out what the evolving elliptical galaxies would look like through different types of astronomical observation instruments.

Instead, the young researcher made an unexpected discovery when looking through such a “virtual telescope”: although the galaxy that formed in the simulated merger had a counter-rotating core, the predecessor galaxies by no means had the special retrograde orientation that the conventional explanation requires.

The result of the simulated merger did match what was already known from observations, however. At 130 billion solar masses, the resulting elliptical galaxy was one of the more massive representatives of its class; it is precisely in such high-mass elliptical galaxies that counter-rotating cores are particularly common and long-lived: they could still be detected in the simulations even two billion years or so after the merger.

Athanasia Tsatsi saw something in the simulation which all her predecessors had missed: as the cores of the two galaxies orbit each other, there comes a moment in time when the direction reverses. This reversal takes place just as their reciprocal gravitational effect is causing the two systems to lose significant amounts of mass – and especially stars from their outer regions.

What happens in such a galaxy is closely related to the special case of a problem which the Russian mathematician Ivan Vsevolodovich Meshchersky (1859 to 1935) investigated: point particles whose masses change over time and move under the reciprocal effect of their gravitational force. The change in mass means additional forces, also known as Meshchersky forces, come into play here.

The best-known example of such forces occurs with rocket propulsion: the rocket ejects hot gases from its nozzle; the force thereby exerted on the rocket is in the opposite direction and the rocket accelerates.
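That rocket analogy can be made quantitative with the Tsiolkovsky rocket equation, the best-known solution of Meshchersky's variable-mass equation of motion. The numbers below are purely illustrative, not values from the galaxy simulation:

```python
import math

def delta_v(exhaust_speed, m_initial, m_final):
    """Tsiolkovsky rocket equation: velocity change gained by ejecting mass
    at exhaust_speed while the body's mass drops from m_initial to m_final."""
    return exhaust_speed * math.log(m_initial / m_final)

# Illustrative only: a body that sheds 30% of its mass while ejecting material
# at 100 km/s (think of a galaxy core shedding stars during the merger) gains
# a velocity change of order tens of km/s - in the galactic case, potentially
# enough to reverse the orbital direction of the remaining stars.
print(round(delta_v(100.0, 1.0, 0.7), 1))  # prints 35.7 (km/s)
```

The same logarithmic dependence explains why the effect only becomes dramatic when the mass loss is substantial, which is exactly the situation during the close passage of two merging galaxies.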

This explains why counter-rotating cores can form even with galactic mergers with the same direction of rotation (“prograde merger”): The mass loss of the two galaxies has the same effect as a gigantic rocket engine and can therefore be powerful enough to reverse the orbital direction of the stars, which ultimately end up in the central region of the newly formed galaxy. Tsatsi therefore calls this way of forming counter-rotating cores the Meshchersky mechanism.

Although Athanasia Tsatsi’s discovery initially relates only to one individual case, it is sufficient to prove that central regions rotating in opposite directions really can form in this way. Next, the astronomers must find out how widespread formation processes of this type are – by investigating galactic mergers with a wide range of initial conditions.

If these systematic tests show that the Meshchersky mechanism for the formation of counter-rotating cores occurs frequently enough, this could explain the observed frequency of the phenomenon – theory and practice would then be in harmony.


Contacts and sources:
Dr. Markus Pössel
Max Planck Institute for Astronomy, Heidelberg

Proto-suns Teeming with Prebiotic Molecules

Complex organic molecules such as formamide, from which sugars, amino acids and even nucleic acids essential for life can be made, already appear in the regions where stars similar to our Sun are born. Astrophysicists from Spain and other countries have detected this biomolecule in five protostellar clouds and propose that it forms on tiny dust grains.

Nebula NGC 1333, one of the star-forming regions where formamide has been detected.

Credit:  NASA-Spitzer

One of science's greatest challenges is learning about the origin of life and its precursor molecules. Formamide (NH2CHO) is an excellent candidate for helping to search for answers, as it contains four essential elements (nitrogen, hydrogen, carbon and oxygen) and can be used to synthesise amino acids, carbohydrates, nucleic acids and other key compounds for living organisms.

This molecule is also abundant in space, mainly in molecular clouds, the concentrations of gas and dust where stars are born. This has been confirmed by an international team of researchers, including Spanish investigators, after searching for formamide in ten star-forming regions.

"We have detected formamide in five protosuns, which proves that this molecule (in all probability also present in our Solar System) is relatively abundant in molecular clouds and is formed in the very early stages of evolution towards a star and its planets," Ana López Sepulcre, lead author of the study and a researcher at the University of Tokyo (Japan), tells SINC.

The other five objects, where formamide was not detected, are less evolved and colder, "which indicates that a minimum temperature is needed for it to be detected in the gas," adds the scientist.

The study, which has just been published in the 'Monthly Notices of the Royal Astronomical Society', also offers clues on how formamide could be created in interstellar conditions. "We propose that it is formed on the surface of the dust grains."


Contacts and sources:
Plataforma SINC

Citation: A. López-Sepulcre, Ali A. Jaber, E. Mendoza, B. Lefloch, C. Ceccarelli, C. Vastel, R. Bachiller, J. Cernicharo, C. Codella, C. Kahane, M. Kama, M. Tafalla. "Shedding light on the formation of the pre-biotic molecule formamide with ASAI." Monthly Notices of the Royal Astronomical Society, April 2015.

Inconspicuous, Tiny Particles Deform the Large-Scale Structure of the Universe

A systematic study of all massive galaxy clusters in the local universe provides information on the lightest elementary particles: scientists at the Max Planck Institute for Extraterrestrial Physics analysed an X-ray catalogue to show that there is less structure in the universe today than expected from cosmic microwave background observations of the very early universe. This discrepancy can be explained if the three neutrino families have an overall mass of about half an electron-volt.

We are surrounded by them everywhere and they fly right through us, yet we don’t feel them at all: neutrinos, the strangest of the known elementary particles. They hardly interact with other matter - every second, billions fly right through the Earth, but only a tiny fraction gets stuck. They are left over in large numbers from the Big Bang, about 340 million per cubic metre on average. Together with photons, the particles of light, they are the most numerous elementary particles in the universe.

Projection of the three-dimensional distribution of galaxy clusters detected in X-rays by the ROSAT satellite. The data are shown in galactic coordinates with the galactic plane in the centre. The gap in the data is due to the “zone-of-avoidance”, an area around the galactic plane where the extinction by the galactic interstellar medium makes observations very difficult. Blue dots are in the northern sky, red dots in the southern sky.

Because of massive neutrinos, the number of galaxy clusters with a given mass is smaller than predicted by the cosmological standard model based on the results from the Planck satellite.
Credit: © MPE

For a long time, neutrinos were thought to be massless. But now we know from observations of solar neutrinos and from terrestrial experiments that they do carry mass. But we still don’t know how heavy they are. Nevertheless, due to their large number density they can contribute significantly to the mass density of the Universe even if they are relatively light-weight.

In space, another property of cosmic neutrinos becomes important: they are the fastest massive elementary particles left over from the Big Bang. While most other matter agglomerates under gravity over cosmic time into the large-scale structure we see today, neutrinos to some extent resist this concentration and clumping, and actually hinder the growth of structure. Their effectiveness depends on their mass: the more massive they are, the more they impede the clumping of matter.

Astrophysics can take advantage of this damping effect by measuring its imprint on the formation of large-scale structure. A comparison of two observations unveils the effect. On one side, we see the density fluctuations in the early Universe, about 380,000 years after the Big Bang, as observed in the cosmic microwave background by the Planck satellite. With this input, accepted cosmological models can be used to calculate quite precisely what the structure of the present-day Universe should look like - for example, how many galaxy clusters of a certain mass should be found per unit volume.

Scientists at the Max Planck Institute for Extraterrestrial Physics in Garching, Hans Böhringer and Gayoung Chon, have searched for all massive galaxy clusters in the nearby Universe (out to a distance of more than 3 billion light years). They used X-ray observations with ROSAT to compile a complete catalogue of these objects, which allows a comparison of the observations with predictions from the cosmological standard model.

“Observations and theoretical prediction fit surprisingly well together,” asserts Hans Böhringer. “But a closer look reveals that the present-day structures are less pronounced than predicted - when the mass of neutrinos is neglected.”

Even though the discrepancy is only 10%, the precision of measurements has increased dramatically over the past years, so that the scientists take the 10% discrepancy seriously.

“We can reconcile observation and theory if we allow for the neutrinos to have mass,” explains Gayoung Chon. “Our analysis indicates that all three neutrino families together have a mass in the range 0.17 to 0.73 eV.”

There are three neutrino families - electron, muon and tau - which can “oscillate”, i.e. change into one another. Many experiments - and also the estimate based on the large-scale structure - can only determine the mass differences, or the combined mass of all three families. And this combined mass is indeed tiny: about 0.8 x 10^-36 kg, one million times lighter than the mass of the electron, the lightest elementary particle in ordinary matter (the matter that makes up our bodies).

Neutrinos therefore contribute only about 1-5% to dark matter. But even this tiny contribution causes an effect measurable with the current, precise methods. Some other cosmological measurements, such as the study of the gravitational lensing effect of large-scale structure and the peculiar motions of galaxies, suggest a damping in the growth of the large-scale structure amplitude as well.
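That percentage can be checked with a standard cosmology-textbook relation, Omega_nu = (sum of neutrino masses in eV) / (93.14 h^2 eV), where h is the Hubble parameter in units of 100 km/s/Mpc. The sketch below assumes h = 0.68; neither the formula's constant nor that value of h comes from the MPE paper itself:

```python
def omega_nu(total_mass_ev, h=0.68):
    """Fraction of the critical density contributed by relic neutrinos
    with a combined mass of total_mass_ev (in electron-volts)."""
    return total_mass_ev / (93.14 * h * h)

# The quoted range of 0.17-0.73 eV maps to roughly 0.4%-1.7% of the
# critical density - a small but measurable slice of the dark matter budget.
for total_mass in (0.17, 0.73):
    print(round(100.0 * omega_nu(total_mass), 2), "% of critical density")
```

Since dark matter makes up roughly a quarter of the critical density, this back-of-the-envelope range is consistent with the few-percent contribution quoted in the article.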

Other, more exotic effects could be the cause for such a damping effect, for example the interaction of dark matter and dark energy has been suggested. “Massive neutrinos seem, however, to be the most plausible interpretation of the data at the moment,” says Hans Böhringer. “This is very encouraging and we are currently improving our measurements to provide more precise results in the future.”

This is a fascinating example for how the world is interwoven on smallest and largest scales. The largest clearly defined objects in the Universe, galaxy clusters, provide information on the lightest known elementary particles with mass. There are 48 orders of magnitude in between the mass scale of the two systems! Astrophysics is providing an important contribution to elementary particle physics.


Contacts and sources:
Dr. Hans Böhringer

Galaxies Die from the inside Out


A major astrophysical mystery has centred on how massive, quiescent elliptical galaxies, common in the modern Universe, quenched their once furious rates of star formation. Such colossal galaxies, often also called spheroids because of their shape, typically pack in stars ten times as densely in the central regions as in our home galaxy, the Milky Way, and have about ten times its mass.

Star formation in what are now "dead" galaxies sputtered out billions of years ago. ESO's Very Large Telescope and the NASA/ESA Hubble Space Telescope have revealed that three billion years after the Big Bang, these galaxies still made stars on their outskirts, but no longer in their interiors. The quenching of star formation seems to have started in the cores of the galaxies and then spread to the outer parts.

This diagram illustrates this process. Galaxies in the early Universe appear at the left. The blue regions are where star formation is in progress and the red regions are the "dead" regions where only older redder stars remain and there are no more young blue stars being formed. The resulting giant spheroidal galaxies in the modern Universe appear on the right.
Credit:  ESO

Astronomers refer to these big galaxies as red and dead as they exhibit an ample abundance of ancient red stars, but lack young blue stars and show no evidence of new star formation. The estimated ages of the red stars suggest that their host galaxies ceased to make new stars about ten billion years ago. This shutdown began right at the peak of star formation in the Universe, when many galaxies were still giving birth to stars at a pace about twenty times faster than nowadays.

"Massive dead spheroids contain about half of all the stars that the Universe has produced during its entire life," said Sandro Tacchella of ETH Zurich in Switzerland, lead author of the article. "We cannot claim to understand how the Universe evolved and became as we see it today unless we understand how these galaxies come to be."

Tacchella and colleagues observed a total of 22 galaxies, spanning a range of masses, from an era about three billion years after the Big Bang [1]. The SINFONI instrument on ESO's Very Large Telescope (VLT) collected light from this sample of galaxies, showing precisely where they were churning out new stars. SINFONI could make these detailed measurements of distant galaxies thanks to its adaptive optics system, which largely cancels out the blurring effects of Earth's atmosphere.

The researchers also trained the NASA/ESA Hubble Space Telescope on the same set of galaxies, taking advantage of the telescope's location in space above our planet's distorting atmosphere. Hubble's WFC3 camera snapped images in the near-infrared, revealing the spatial distribution of older stars within the actively star-forming galaxies.

"What is amazing is that SINFONI's adaptive optics system can largely beat down atmospheric effects and gather information on where the new stars are being born, and do so with precisely the same accuracy as Hubble allows for the stellar mass distributions," commented Marcella Carollo, also of ETH Zurich and co-author of the study.

According to the new data, the most massive galaxies in the sample kept up a steady production of new stars in their peripheries. In their bulging, densely packed centres, however, star formation had already stopped.

"The newly demonstrated inside-out nature of star formation shutdown in massive galaxies should shed light on the underlying mechanisms involved, which astronomers have long debated," says Alvio Renzini, Padova Observatory, of the Italian National Institute of Astrophysics.

A leading theory is that star-making materials are scattered by torrents of energy released by a galaxy's central supermassive black hole as it sloppily devours matter. Another idea is that fresh gas stops flowing into a galaxy, starving it of fuel for new stars and transforming it into a red and dead spheroid.

"There are many different theoretical suggestions for the physical mechanisms that led to the death of the massive spheroids," said co-author Natascha Förster Schreiber, at the Max-Planck-Institut für extraterrestrische Physik in Garching, Germany. "Discovering that the quenching of star formation started from the centres and marched its way outwards is a very important step towards understanding how the Universe came to look like it does now."


Contacts and sources:
Sandro Tacchella
ETH Zurich 

Richard Hook
ESO

Paleolithic Remains Show Cannibalistic Habits of Human Ancestors

Analysis of ancient cadavers recovered at a famous archaeological site confirms the existence of a sophisticated culture of butchering and carving human remains, according to a team of scientists from the Natural History Museum, University College London, and a number of Spanish universities.

Gough’s Cave in Somerset was thought to have given up all its secrets when excavations ended in 1992, yet research on human bones from the site has continued in the decades since. After its discovery in the 1880s, the site was developed as a show cave and largely emptied of sediment, at times with minimal archaeological supervision. The excavations uncovered intensively-processed human bones intermingled with abundant butchered large mammal remains and a diverse range of flint, bone, antler, and ivory artefacts.

Credit: The Natural History Museum

New radiocarbon techniques have revealed that the remains were deposited over a very short period of time, possibly during a series of seasonal occupations, about 14,700 years ago.

Dr Silvia Bello, from the Natural History Museum’s Department of Earth Sciences and lead researcher of the work, said, “The human remains have been the subject of several studies. In a previous analysis, we could determine that the cranial remains had been carefully modified to make skull-cups. During this research, however, we’ve identified a far greater degree of human modification than recorded in earlier studies. We’ve found unambiguous evidence for defleshing, disarticulation, human chewing, crushing of spongy bone, and the cracking of bones to extract marrow.”

The presence of human tooth marks on many of the bones provides incontrovertible evidence for cannibalism, the team found. In a wider context, the treatment of the human corpses and the manufacture and use of skull-cups at Gough’s Cave has parallels with other ancient sites in central and western Europe. But the new evidence from Gough’s Cave suggests that cannibalism during the ‘Magdalenian period’ was part of a customary mortuary practice that combined intensive processing and consumption of the bodies with the ritual use of skull-cups.


Credit: The Natural History Museum

Simon Parfitt, of University College London, said, “A recurring theme of this period is the remarkable rarity of burials and how commonly we find human remains mixed with occupation waste at many sites. Further analysis along the lines used to study Gough's Cave will help to establish whether the type of ritualistic cannibalism practiced there is a regional (‘Creswellian’) phenomenon, or a more widespread practice found throughout the Magdalenian world.”


Contacts and sources:
The Natural History Museum

Citation: Silvia M. Bello, Palmira Saladié, Isabel Cáceres, Antonio Rodríguez-Hidalgo, Simon A. Parfitt, “Upper Palaeolithic ritualistic cannibalism at Gough's Cave (Somerset, UK): The human remains from head to toe”, Journal of Human Evolution, available online 15 April 2015.

Engineers Purify Sea and Wastewater in 2.5 Minutes

A group of Mexican engineers from the Jhostoblak Corporate has created a technology that recovers and purifies seawater or wastewater from households, hotels, hospitals, and commercial and industrial facilities, regardless of its content of pollutants and microorganisms, in just 2.5 minutes.

Credit: Investigación y Desarrollo

The PQUA system works with a mixture of dissociating elements capable of separating and removing all contaminants, both organic and inorganic. "The methodology is founded on molecularly dissociating water pollutants to recover the minerals necessary and sufficient for the human body to be properly nourished," the technical staff explained.

Notably, the engineers developed eight dissociating elements, and after extensive testing on different types of contaminated water, implemented a unique methodology that indicates what and how much of each element should be combined.

"During the purification process, no gases, odors, or toxic elements that might damage or alter the environment, human health, or quality of life are generated," said the Mexican firm.

The corporation has a pilot plant at its offices that was used to demonstrate the purification process, which uses gravity to save energy. We observed that the residual water in the container was pumped into a reactor tank, where it received a dose of the dissociating elements in predetermined amounts.

In this phase, solid organic and inorganic matter as well as heavy metals are removed by precipitation and gravity, and a sludge settles at the bottom of the reactor. The sludge is removed and examined to determine whether it is suitable for use as fertilizer or in the manufacture of construction materials.

Subsequently, the water is conducted to a clarifier tank, where the excess load of dissolved elements settles out; the liquid then passes through a filter to remove turbidity and finally through a polishing tank that eliminates odors, colors, and flavors. The treated water is transported to a container where ozone is added to ensure its purity, at which point it is ready to drink. Indeed, the resulting liquid is fresh, odorless, and has a neutral taste.
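The treatment sequence described above can be sketched as a simple staged pipeline. The stage names below paraphrase the article; the `treat` function and its structure are illustrative assumptions, not the firm's actual process control.

```python
# Minimal sketch of the PQUA treatment sequence as described in the article.
# Stage names paraphrase the text; nothing here is the firm's real process code.

STAGES = [
    "reactor: dose dissociating elements, precipitate solids and heavy metals",
    "clarifier: settle excess dissolved elements",
    "filter: remove turbidity",
    "polishing tank: remove odors, colors, and flavors",
    "ozonation: final disinfection before storage",
]

def treat(batch):
    """Return the ordered log of stages a batch of water passes through."""
    return ["%s -> %s" % (batch, stage) for stage in STAGES]

log = treat("wastewater batch 1")
assert len(log) == 5
assert log[0].startswith("wastewater batch 1 -> reactor")
```

The point of the sketch is simply the fixed ordering: precipitation first, then settling, filtration, polishing, and ozonation last.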

"We have done over 50 tests on different types of wastewater, and all have been certified and authorized by the laboratories of the Mexican Accreditation Agency (EMA). Also, the Monterrey Institute of Technology and Higher Education (ITESM), the College of Mexico, and the National Polytechnic Institute (IPN) have validated that water treated with our technology meets the SSA NOM 127 standard, which sets the parameters and quality characteristics for water intended for human consumption," says Corporate Jhostoblak.

Moreover, they report that this development is protected as a trade secret in America and will soon receive the same protection in Switzerland. Its implementation in the market will depend on the needs of users and on new laws regarding water use, consumption, and discharge.

For more information, visit Corporate Jhostoblak’s website at www.sistemaspqua.weebly.com or write to vrcorporativo@gmail.com.

Contacts and sources:
Investigación y Desarrollo

Disney Researchers’ 3-D Printer Shows Soft Sides With Layered Fabric and Wiring

A team from Disney Research and Carnegie Mellon University has devised a 3-D printer that layers laser-cut sheets of fabric to form soft, squeezable objects such as bunnies, doll clothing, and phone cases. These objects can have complex geometries and incorporate circuitry that makes them interactive.

"Today's 3-D printers can easily create custom metal, plastic, and rubber objects," said Jim McCann, associate research scientist at Disney Research Pittsburgh. "But soft fabric objects, like plush toys, are still fabricated by hand. Layered fabric printing is one possible method to automate the production of this class of objects."

3D printed objects from our layered fabric 3D printer: (a) printed fabric Stanford bunny, (b) printed Japanese sunny doll with two materials, (c) printed touch sensor, (d) printed cellphone case with an embedded conductive fabric coil for wireless power reception.
Credit: Disney Research Pittsburgh

The fabric printer is similar in principle to laminated object manufacturing, which takes sheets of paper or metal that have each been cut into a 2-D shape and then bonds them together to form a 3-D object. Fabric presents particular cutting and handling challenges, however, which the Disney team has addressed in the design of its printer.

The layered-fabric printer will be described at the Association for Computing Machinery's annual Conference on Human Factors in Computing Systems, CHI 2015, April 18-23 in Seoul, South Korea, where the report has received an honorable mention for a Best Paper award. In addition to McCann, the team included Huaishu Peng, a Ph.D. student in information science at Cornell University, and Scott Hudson and Jen Mankoff, both faculty members in Carnegie Mellon's Human-Computer Interaction Institute.

Last year at CHI, Hudson presented a soft 3-D object printer he developed at Disney Research that deposits layers of needle-felted yarn. The layered-fabric printing method, by contrast, can produce thicker, more squeezable objects.

Disney presents a new type of 3D printer that can form precise, but soft and deformable 3D objects from layers of off-the-shelf fabric. Their printer employs an approach where a sheet of fabric forms each layer of a 3D object. The printer cuts this sheet along the 2D contour of the layer using a laser cutter and then bonds it to previously printed layers using a heat sensitive adhesive. Surrounding fabric in each layer is temporarily retained to provide a removable support structure for layers printed above it. This process is repeated to build up a 3D object layer by layer. 

The printer is capable of automatically feeding two separate fabric types into a single print. This allows specially cut layers of conductive fabric to be embedded in the soft prints. Using this capability, Disney demonstrates 3D models with touch-sensing capability built into a soft print in one complete printing process, and a simple LED display that uses a conductive fabric coil for wireless power reception.
Credit: Disney Research

The latest soft printing apparatus includes two fabrication surfaces - an upper cutting platform and a lower bonding platform. Fabric is fed from a roll into the device, where a vacuum holds the fabric up against the upper cutting platform while a laser cutting head moves below. The laser cuts a rectangular piece out of the fabric roll, then cuts the layer's desired 2-D shape or shapes within that rectangle. This second set of cuts is left purposefully incomplete so that the shapes receive support from the surrounding fabric during the fabrication process.

Once the cutting is complete, the bonding platform is raised up to the fabric and the vacuum is shut off to release the fabric. The platform is lowered and a heated bonding head is deployed, heating and pressing the fabric against previous layers. The fabric is coated with a heat-sensitive adhesive, so the bonding process is similar to a person using a hand iron to apply non-stitched fabric ornamentation onto a costume or banner.

Once the process is complete, the surrounding support fabric is torn away by hand to reveal the 3-D object.
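The feed, cut, and bond steps above repeat once per layer, so the whole build is a simple loop. The sketch below captures that ordering; `feed`, `cut`, and `bond` are hypothetical stand-ins for the printer's actual control routines, not Disney's real API.

```python
# Hedged sketch of the layer-by-layer print loop described above.
# The callables are illustrative stand-ins for the printer's control routines.

def print_object(layers, feed, cut, bond):
    """Build an object by cutting and bonding one fabric layer at a time."""
    for contour in layers:                 # 2-D contour of each slice
        sheet = feed()                     # pull fabric from the roll
        cut(sheet, contour, partial=True)  # leave bridges so support remains
        bond(sheet)                        # heat-press onto previous layers
    # afterwards, the surrounding support fabric is torn away by hand

# Demo with recording stubs to show the strict cut-then-bond ordering.
events = []
print_object(
    layers=["circle", "square"],
    feed=lambda: "sheet",
    cut=lambda s, c, partial: events.append(("cut", c, partial)),
    bond=lambda s: events.append(("bond",)),
)
assert events == [("cut", "circle", True), ("bond",),
                  ("cut", "square", True), ("bond",)]
```

The `partial=True` flag models the purposefully incomplete second set of cuts, which keeps each shape attached to its surrounding support fabric until the print is finished.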

The researchers demonstrated this technique by using 32 layers of 2-millimeter-thick felt to create a 2 ½-inch bunny. The process took about 2 ½ hours.

"The layers in the bunny print are evident because the bunny is relatively small compared to the felt we used to print it," McCann said. "It's a trade-off -- with thinner fabric, or a larger bunny, the layers would be less noticeable, but the printing time would increase."
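The numbers quoted above check out arithmetically, assuming 25.4 mm to the inch:

```python
# Quick check of the bunny print figures quoted in the article.
layers = 32
layer_thickness_mm = 2
height_mm = layers * layer_thickness_mm   # 64 mm total height
height_in = height_mm / 25.4              # ~2.52 in, the "2 1/2-inch" bunny
assert height_mm == 64
assert abs(height_in - 2.5) < 0.1

# McCann's trade-off: halving the layer thickness doubles the layer count
# (and roughly the print time) for the same object height.
layers_at_1mm = height_mm // 1
assert layers_at_1mm == 2 * layers
```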

Two types of material can be used to create objects by feeding one roll of fabric into the machine from left to right, while a second roll of a different material is fed front to back. If one of the materials is conductive, the equivalent of wiring can be incorporated into the device. The researchers demonstrated the possibilities by building a fabric starfish that serves as a touch sensor, as well as a fabric smartphone case with an antenna that can harvest enough energy from the phone to light an LED.

The feel of a fabricated object can be manipulated in the fabrication process by adding small interior cuts that make it easy to bend the object in one direction, while maintaining stiffness in the perpendicular direction.


Contacts and sources: 
Jennifer Liu
Disney Research

Laser System Proposed To Clear Space Debris

An international team of scientists has put forward a blueprint for a purely space-based system to solve the growing problem of space debris. The proposal, published in Acta Astronautica, combines a super-wide field-of-view telescope developed by RIKEN's EUSO team, which will be used to detect objects, with a recently developed high-efficiency laser system, the CAN laser presented in Nature Photonics in 2013, which will be used to track space debris and remove it from orbit.

Space debris seen from outside geosynchronous orbit (GEO). The two main debris fields are the ring of objects in GEO and the cloud of objects in low Earth orbit (LEO). Debris plot by NASA.
Credit: NASA 


ROBEAR: Strong Caregiving Robot With The Gentle Touch


Scientists from RIKEN and Sumitomo Riko Company Limited have developed a new experimental nursing care robot, ROBEAR, which is capable of performing tasks such as lifting a patient from a bed into a wheelchair or providing assistance to a patient who is able to stand up but requires help to do so. ROBEAR will provide impetus for research on the creation of robots that can supplement Japan’s need for new approaches to caregiving.

Credit: RIKEN

The new robot developed by the RIKEN-SRK Collaboration Center for Human-Interactive Robot Research in Nagoya is a successor to RIBA, which was announced in 2009, and RIBA-II, which was developed in 2011. The new ROBEAR robot is lighter than its predecessors, weighing just 140 kilograms compared to RIBA-II’s 230 kilograms, and it incorporates a number of features that enable it to exert force in a gentle way.

Specifically, it includes actuator units with a very low gear ratio, allowing the joints to move quickly and precisely while remaining backdrivable: the force encountered by the actuators as they perform their tasks is quickly fed back into the system, allowing softer movement. It also incorporates three types of sensors, including torque sensors and Smart Rubber capacitance-type tactile sensors made entirely of rubber, which allow for gentle movements, ensuring that the robot can perform power-intensive tasks such as lifting patients without endangering them.
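The idea behind backdrivable, torque-feedback control can be illustrated with a toy rule: the contact torque the sensors report is subtracted from the commanded torque, so the joint yields instead of pushing rigidly. The control law and gain below are illustrative assumptions, not RIKEN's actual controller.

```python
# Toy illustration of torque feedback for soft movement. The compliance gain
# and the linear control law are illustrative, not ROBEAR's real control code.

def soft_command(desired_torque, sensed_external_torque, compliance=0.5):
    """Relax the commanded torque in proportion to sensed contact torque."""
    return desired_torque - compliance * sensed_external_torque

# Free motion: the full command is applied.
assert soft_command(10.0, 0.0) == 10.0
# Contact resisting the motion: the command relaxes, so the joint yields.
assert soft_command(10.0, 8.0) == 6.0
```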

ROBEAR helps a person rise from a sofa and sit in a wheelchair.
Credit: RIKEN

The robot also improves on its predecessors by having a small base, making the total system more lightweight. It avoids falling over through the use of legs that can be extended when necessary for lifting a patient but retracted to allow the robot to maneuver through tight spaces such as doorways.

Credit: RIKEN

With its rapidly increasing elderly population, Japan faces an urgent need for new approaches to assist care-giving personnel. One of the most strenuous tasks for such personnel, carried out an average of 40 times every day, is lifting a patient from a bed into a wheelchair, and it is a major cause of lower back pain. Robots are well suited to this task, yet none has so far been deployed in care-giving facilities.

According to Toshiharu Mukai, leader of the Robot Sensor Systems Research Team, "We really hope that this robot will lead to advances in nursing care, relieving the burden on care-givers today. We intend to continue with research toward more practical robots capable of providing powerful yet gentle care to elderly people."


Contacts and sources:
Toshiharu Mukai, Team Leader 
Robot Sensor Systems Research Team
RIKEN―SRK Collaboration Center for Human-Interactive Robot Research
RIKEN Innovation Center

Intense Magnetic Field Discovered Close To Supermassive Black Hole


Supermassive black holes, often with masses billions of times that of the Sun, are located at the heart of almost all galaxies in the Universe. These black holes can accrete huge amounts of matter in the form of a surrounding disc.

This artist's impression shows the surroundings of a supermassive black hole, typical of that found at the heart of many galaxies. The black hole itself is surrounded by a brilliant accretion disc of very hot, infalling material and, further out, a dusty torus. There are also often high-speed jets of material ejected at the black hole's poles that can extend huge distances into space.
Credit: ESO/L. Calçada


While most of this matter is fed into the black hole, some can escape moments before capture and be flung out into space at close to the speed of light as part of a jet of plasma. How this happens is not well understood, although it is thought that strong magnetic fields, acting very close to the event horizon, play a crucial part in this process, helping the matter to escape from the gaping jaws of darkness.

A team of five astronomers from Chalmers University of Technology has used the giant telescope Alma to reveal an extremely powerful magnetic field, beyond anything previously detected in the core of a galaxy, very close to the event horizon of a supermassive black hole in a distant galaxy. The results appear in the 17 April 2015 issue of the journal Science. The new observation helps astronomers to understand the structure and formation of these massive inhabitants of the centres of galaxies, and the twin high-speed jets of plasma they frequently eject from their poles.

Up to now only weak magnetic fields far from black holes -- several light-years away -- had been probed. In this study, however, astronomers from Chalmers University of Technology and Onsala Space Observatory in Sweden have now used Alma to detect signals directly related to a strong magnetic field very close to the event horizon of the supermassive black hole in a distant galaxy named PKS 1830-211. This magnetic field is located precisely at the place where matter is suddenly boosted away from the black hole in the form of a jet.

The giant telescope Alma, made up of 66 individual antennas and located at an altitude of 5,000 metres in northern Chile, has revealed the intense magnetic field close to a supermassive black hole. In this image, taken during the ESO Ultra HD (UHD) Expedition, the central parts of our galaxy, the Milky Way, can be seen above the telescope.
Credit: ESO/B. Tafreshi

The team measured the strength of the magnetic field by studying the way in which light was polarised as it moved away from the black hole.

"Polarisation is an important property of light and is much used in daily life, for example in sunglasses or 3D glasses at the cinema," says Ivan Marti-Vidal, lead author of this work.

"When produced naturally, polarisation can be used to measure magnetic fields, since light changes its polarisation when it travels through a magnetised medium. In this case, the light that we detected with Alma had been travelling through material very close to the black hole, a place full of highly magnetised plasma."

The astronomers applied a new analysis technique that they had developed to the Alma data and found that the direction of polarisation of the radiation coming from the centre of PKS 1830-211 had rotated.

Magnetic fields introduce Faraday rotation, which makes the polarisation rotate in different ways at different wavelengths. The way in which this rotation depends on the wavelength tells us about the magnetic field in the region.

The Alma observations were at an effective wavelength of about 0.3 millimetres, the shortest wavelengths ever used in this kind of study. This allows the regions very close to the central black hole to be probed. Earlier investigations were at much longer radio wavelengths. Only light of millimetre wavelengths can escape from the region very close to the black hole; longer wavelength radiation is absorbed.
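The wavelength dependence described above follows the standard Faraday rotation relation, in which the polarisation angle rotates by Δχ = RM · λ², with RM the rotation measure in rad/m². The rotation measure used below is illustrative, not the value measured for PKS 1830-211.

```python
import math

# Faraday rotation: the polarisation angle rotates by RM * wavelength^2,
# where RM (rad/m^2) encodes the magnetic field and electron density along
# the line of sight. The RM value here is illustrative only.

def rotation_deg(rm_rad_per_m2, wavelength_m):
    """Polarisation rotation angle in degrees for a given rotation measure."""
    return math.degrees(rm_rad_per_m2 * wavelength_m**2)

rm = 1.0e8                        # illustrative rotation measure, rad/m^2
alma = rotation_deg(rm, 0.3e-3)   # 0.3 mm, as in the Alma observations
radio = rotation_deg(rm, 0.01)    # 1 cm, a longer radio wavelength

# The lambda^2 dependence: the same RM rotates polarisation far more at
# longer wavelengths, which is why comparing wavelengths reveals the field.
assert math.isclose(radio / alma, (0.01 / 0.3e-3) ** 2)
```

This is why observing at 0.3 millimetres both probes closer to the black hole (longer wavelengths are absorbed there) and requires very large rotation measures to produce a detectable rotation.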

"We have found clear signals of polarisation rotation that are hundreds of times higher than the highest ever found in the Universe," says Sebastien Muller, co-author of the paper. "Our discovery is a giant leap in terms of observing frequency, thanks to the use of Alma, and in terms of distance to the black hole where the magnetic field has been probed -- of the order of only a few light-days from the event horizon. These results, and future studies, will help us understand what is really going on in the immediate vicinity of supermassive black holes."


Contacts and sources:
Robert Cumming
Chalmers University of Technology