Unseen Is Free


Friday, May 29, 2015

High Capacity Soft Batteries Made from Trees

A method for making elastic high-capacity batteries from wood pulp was unveiled by researchers in Sweden and the US. Using nanocellulose broken down from tree fibres, a team from KTH Royal Institute of Technology and Stanford University produced an elastic, foam-like battery material that can withstand shock and stress.

"It is possible to make incredible materials from trees and cellulose," says Max Hamedi, who is a researcher at KTH and Harvard University. One benefit of the new wood-based aerogel material is that it can be used for three-dimensional structures. 

A closeup of the soft battery, created with wood pulp nanocellulose. 
Image: courtesy of Max Hamedi and Wallenberg Wood Science Center

"There are limits to how thin a battery can be, but that becomes less relevant in 3D," Hamedi says. "We are no longer restricted to two dimensions. We can build in three dimensions, enabling us to fit more electronics in a smaller space."

A 3D structure enables storage of significantly more power in less space than is possible with conventional batteries, he says.

"Three-dimensional, porous materials have been regarded as an obstacle to building electrodes. But we have proven that this is not a problem. In fact, this type of structure and material architecture allows flexibility and freedom in the design of batteries," Hamedi says.

The process for creating the material begins with breaking down tree fibres, making them roughly one million times thinner. The nanocellulose is dissolved, frozen and then freeze-dried so that the moisture evaporates without passing through a liquid state.

Then the material goes through a process in which the molecules are stabilised so that the material does not collapse.

"The result is a material that is at once strong, light and soft," Hamedi says. "The material resembles the foam in a mattress, though it is a little harder, lighter and more porous. You can touch it without it breaking."

The finished aerogel can then be given electronic functionality. "We use a very precise technique, verging on the atomic level, which adds ink that conducts electricity within the aerogel. You can coat the entire surface within."

In terms of surface area, Hamedi compares the material to a pair of human lungs, which if unfurled could be spread over a football field. Similarly, a single cubic decimeter of the battery material would cover most of a football pitch, he says.
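That comparison can be sanity-checked with simple surface-to-volume geometry: if one cubic decimetre of material really unfolds to a pitch-sized area, its internal structure must be sub-micrometre, consistent with a nanocellulose network. A rough sketch (the pitch dimensions and the strut-geometry factor are illustrative assumptions, not figures from the study):

```python
# Sanity check of the surface-area claim: if one cubic decimetre of aerogel
# unfolds to roughly a football pitch, how fine must its internal structure be?
pitch_area_m2 = 105 * 68        # assumed typical pitch, ~7,140 m^2
volume_m3 = 1e-3                # one cubic decimetre (1 litre)

surface_to_volume = pitch_area_m2 / volume_m3   # ~7.1e6 m^-1

# For a porous network of roughly cylindrical struts, surface/volume ~ 4/d,
# where d is the characteristic strut diameter.
feature_size_m = 4 / surface_to_volume

print(f"implied feature size: {feature_size_m * 1e6:.2f} micrometres")
```

The implied feature size is of order half a micrometre, so the claim only works for a nanostructured material.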

"You can press it as much as you want. While flexible and stretchable electronics already exist, the insensitivity to shock and impact are somewhat new."

Hamedi says the aerogel batteries could be used in electric car bodies, as well as in clothing, provided the garment has a lining.

The research has been carried out at the Wallenberg Wood Science Center at KTH. KTH Professor Lars Wågberg has also been involved, and his work on aerogels is the basis for the invention of soft electronics. Another partner is leading battery researcher Professor Yi Cui of Stanford University.


Contacts and sources:
KTH Royal Institute of Technology 

Citation: "Self-Assembled Three-Dimensional And Compressible Interdigitated Thin Film Supercapacitors And Batteries"
Nature Communications, May 29, 2015
DOI: 10.1038/ncomms8259

DALI: Robot Walker for Elderly People in Public Spaces

Elderly people with walking difficulties are often intimidated by busy public places. This led an EU research project to develop a robot walker to guide them around shopping centres, museums and other public buildings, thus enhancing their autonomy.
Credit: © DALI

Shopping centres, airports, museums and hospitals are the kind of complex and confusing environments where elderly people on the verge of cognitive decline could have difficulties walking around without help. The walking frames they may currently use do not have the flexibility to help them navigate in often-crowded places.

This led researchers on the DALI project to develop a robotic cognitive walker (c-Walker) that can be taken to, or picked up at, the place to be visited, gently guiding the person around the building safely. The device takes corrective action when the user comes across the kind of busy area, obstacle or incident they want to avoid.



‘The c-Walker is aimed at providing physical and cognitive support to older adults. It can give them confidence in public environments,’ explained Luigi Palopoli, professor at Italy’s Trento University who coordinated DALI (Devices for Assisted Living). ‘The device is full of hi-tech solutions, but the user is not necessarily aware of them. She or he comes into contact with a ‘standard’ walker, with a few additions such as the display or bracelets and does not need any kind of computer literacy. The robot simply guides them so that they have a nice, safe experience.’

Programming the robot before setting off

Shopping is recommended as a useful way for elderly people to exercise and is viewed as an important activity for prolonging their autonomous mobility. It also provides them with good opportunities to interact socially. For these reasons, shopping centres were considered by the DALI project to be a typical environment an elderly person would ideally visit. Picking up the c-Walker at the entrance, the elderly shopper selects the profile most suited to them on its simple touch-screen and the shops to visit. The c-Walker then recommends the best course to the user and guides them using visual, acoustic and haptic (tactile) interfaces.

The c-Walker uses different solutions (RFID tags, invisible QR codes, and cameras) to localise itself in the environment. Furthermore, it can connect with remote sensors, such as surveillance cameras, and with other c-Walkers deployed in the environment to gain remote knowledge of the presence of anomalies, crowded spaces or hazards. The device is equipped with brakes and motorised wheels. Haptic armbands tell users when and how to turn. They can also call for assistance if necessary.
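The article does not describe the planner itself; as an illustration, "recommending the best course" over a pre-mapped building can be as simple as a shortest-path search on a graph of tagged waypoints. A minimal sketch with a hypothetical mall map (the shop names and distances are invented for the example):

```python
# A minimal route-planning sketch, assuming the building is pre-mapped as a
# graph of waypoints (e.g. RFID-tagged locations) with walking distances in
# metres. The map below is hypothetical, not from the DALI project.
import heapq

mall = {
    "entrance": {"pharmacy": 40, "cafe": 60},
    "pharmacy": {"entrance": 40, "grocer": 30},
    "cafe":     {"entrance": 60, "grocer": 50},
    "grocer":   {"pharmacy": 30, "cafe": 50},
}

def best_course(graph, start, goal):
    """Dijkstra's shortest path: returns (total distance, list of waypoints)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph[node].items():
            if nxt not in visited:
                heapq.heappush(queue, (dist + step, nxt, path + [nxt]))
    return float("inf"), []

print(best_course(mall, "entrance", "grocer"))
```

On this toy map the planner routes via the pharmacy (70 m) rather than the cafe (110 m); the real device would add re-planning around crowds and hazards reported by remote sensors.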

DALI has been very much a user-driven project. The scientists spoke with focus groups of over 50 elderly people in Spain and the UK who explained their mobility needs so that features helping them could be incorporated into the robot. The c-Walker was later tested at residential care homes in Ciudad Real in Spain and Trento in Italy. Feedback from these trials was used to design a more advanced prototype.

By basing the design on software rather than expensive mechatronic components, the DALI consortium has been able to bring unit cost down from tens of thousands of euros to around EUR 2 000 per device.

Stepping out through social networks

DALI, funded with EUR 3 million from FP7, ended last October. But now there is a new three-and-a-half year project, ACANTO, which is developing the c-Walker further. ACANTO, receiving EUR 4.2 million from Horizon 2020, aims to bring c-Walker users together in social networks. ‘This will give them more incentive to go places,’ said Prof Palopoli. ‘While in DALI we focused on the single user, in ACANTO we are thinking of groups of users who can do things together, such as visit a museum.’

The consortium believes that, by spinning off a company to market the device, or attracting investment from a major technological manufacturer, c-Walkers could be in common use by 2020.


Contacts and sources:

Autonomous Underwater Robot Swarms with Collective Cognition Move Like Schools of Fish

Scientists have created underwater robot swarms that function like schools of fish, exchanging information to monitor the environment, searching, maintaining, exploring and harvesting resources in underwater habitats. The EU-supported COCORO project explored and developed collective cognition in autonomous robots in a rich set of 10 experimental demonstrators, which are shown in 52 videos.

Credit: © COCORO

The COCORO project’s robot swarms not only look like schools of fish, they behave like them too. The project developed autonomous robots that interact with each other and exchange information, resulting in a cognitive system that is aware of its environment. 



According to Dr. Thomas Schmickl, coordinator of the project and Associate Professor in the Department of Zoology at the University of Graz in Austria, what distinguishes COCORO from other similar projects is that researchers created robot swarms that are capable of collective cognition. They work as a collective system of autonomous agents that can learn from past experience and their environment.

Robot swarm cognition in action

In one experiment, twenty Jeff robots floated in a tank of water. As they came into contact with each other, they gradually became aware of the size of their swarm. This ‘swarm size awareness’ was made possible by relaying status information using LEDs.
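The article does not spell out how the LED relaying works; one simple scheme consistent with the description is gossip-style information pooling, in which each robot merges what it has heard at every encounter until everyone's estimate converges on the true swarm size. A toy sketch (the pairwise-meeting model and robot IDs are illustrative, not COCORO's actual protocol):

```python
# A toy model of 'swarm size awareness': robots exchange the set of IDs they
# have heard of whenever two of them meet. This gossip scheme is illustrative;
# the article only says status information was relayed via LEDs.
import random

random.seed(1)
N = 20                                    # twenty Jeff robots
known = [{i} for i in range(N)]           # each robot initially knows only itself

meetings = 0
while any(len(k) < N for k in known) and meetings < 10_000:
    a, b = random.sample(range(N), 2)     # a random pairwise encounter
    known[a] |= known[b]                  # both robots pool their information
    known[b] = set(known[a])
    meetings += 1

estimates = [len(k) for k in known]
print(f"after {meetings} encounters, every robot's size estimate is "
      f"{min(estimates)}-{max(estimates)}")
```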



In another scenario, the robots’ mission was to find debris originating from a sunken airplane. Lily robots searched just below the surface while Jeff robots searched at the bottom of the pool.



Magnets were placed around the airplane to mimic an electro-magnetic signal emitted locally and the robots used their built-in compasses to locate the target. A Jeff robot soon discovered the target and settled on it at the bottom of the pool.

By transmitting LED signals, it ‘recruited’ the other Jeff robots, which then gathered around the target, while Lily robots collected overhead. 



During field trials in Livorno Harbour, Italy, the robots were exposed to waves, currents and corrosive salt water. Despite the difficult conditions the robot swarms were able to remain clustered around their base station as well as go on “patrols” and successfully return to base.

Bio-mimicry: inspired by nature

‘We didn’t invent all of this ourselves,’ says Dr. Schmickl, explaining that COCORO’s scientists modelled collective cognition in nature. Observing how honeybees cluster, for example, helped them to develop the BEECLUST algorithm that they used to aggregate robots at a specific location. They also applied mechanisms derived from existing studies on how slime mould amoebas congregate using chemical waves to communicate with each other.
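The core of BEECLUST is a single local rule: drive until you bump into another robot, then measure the local stimulus and wait for a time that grows with it before moving off in a random direction; robots thereby pile up near the stimulus maximum without any robot knowing where it is. A compressed one-dimensional sketch (the arena, parameters and stimulus gradient are illustrative, not COCORO's actual values):

```python
# A toy 1-D BEECLUST simulation. Robots that meet where the stimulus is high
# wait longer, so clusters preferentially form near the stimulus maximum.
import random

random.seed(4)
N, ARENA = 10, 100                 # 10 robots on a 1-D arena of 100 cells
W_MAX = 60                         # maximum waiting time, in ticks

def stimulus(x):
    """Hypothetical stimulus gradient, strongest at the right-hand wall."""
    return x / (ARENA - 1)

pos = [random.randrange(ARENA) for _ in range(N)]
vel = [random.choice((-1, 1)) for _ in range(N)]
wait = [0] * N

for tick in range(20_000):
    for i in range(N):
        if wait[i] > 0:            # a waiting robot stands still
            wait[i] -= 1
            continue
        pos[i] += vel[i]
        if not 0 <= pos[i] < ARENA:
            pos[i] = max(0, min(ARENA - 1, pos[i]))
            vel[i] = -vel[i]       # bounce off the wall
        elif any(j != i and abs(pos[j] - pos[i]) < 2 for j in range(N)):
            s = stimulus(pos[i])   # met another robot: sample the stimulus...
            wait[i] = int(W_MAX * s * s)     # ...and wait longer where it is high
            vel[i] = random.choice((-1, 1))  # then leave in a random direction

print("final positions:", sorted(pos))
```

No robot ever compares notes with another or senses the gradient directly; the bias toward the maximum emerges purely from the waiting rule.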

A diverse group of biologists, computer scientists and other experts participated in COCORO, which ran from 1 April 2011 until 30 September 2014 and received EUR 2.9 million in EU funding.

Although the project concluded in 2014, its results could have wide application in the fields of computer science, biology, ethology, meta-cognition, psychology, and philosophy, as well as a broader impact on our economy and society. Possible applications are in distributed environmental monitoring and search-and-rescue operations.

‘The way in which some swarm members influence others is very similar to how trends are set by opinion leaders in our society,’ notes Dr. Schmickl.

The COCORO project team has announced that 2015 will be the year of COCORO events. Every week they are presenting a new video made during the project, which featured the largest autonomous underwater swarm in the world: 41 robots of 3 different kinds.


Contacts and sources:
CORDIS

How Comets Were Assembled

Rosetta’s target “Chury” and other comets observed by space missions show common evidence of layered structures and bi-lobed shapes. With 3D computer simulations Martin Jutzi, astrophysicist at the University of Bern, was able to reconstruct the formation of these features as a result of gentle collisions and mergers. The study has now been published online in the journal “Science Express”.

In a video sequence based on a computer simulation two icy spheres with a diameter of about one kilometer are moving towards each other. They collide at bicycle speed, start to mutually rotate and separate again after the smaller body has left traces of material on the larger one. 

Collision of two icy spheres with a diameter of about one kilometer. After a first impact the bodies separate and reimpact a day later. 
Simulation: Jutzi/Asphaug

The time sequence shows that the smaller object is slowed down by mutual gravity. After about 14 hours it turns back and impacts again, a day after the first collision. The two bodies finally merge to form one body that looks oddly familiar: its bi-lobed form resembles that of comet 67P/Churyumov-Gerasimenko, imaged by ESA’s Rosetta mission.
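A back-of-the-envelope calculation shows why such encounters must be so gentle: the mutual escape speed of two kilometre-sized icy bodies is only tens of centimetres per second, so even an impact at "bicycle speed" must dissipate most of its kinetic energy on first contact for the bodies to fall back and merge. A sketch assuming a comet-like bulk density (the density value is an assumption; the sizes follow the article):

```python
# Mutual escape speed of two 1-km-diameter icy spheres at the moment of
# contact. The bulk density is an assumed comet-like value.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
r = 500.0              # radius of each sphere (1 km diameter), m
rho = 500.0            # assumed bulk density of porous ice, kg/m^3

m = (4 / 3) * math.pi * r**3 * rho            # mass of each body, ~2.6e11 kg
v_esc = math.sqrt(2 * G * (2 * m) / (2 * r))  # escape speed at contact (d = 2r)

print(f"escape speed at contact: {v_esc:.2f} m/s")
```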

Image taken on 26 September from a distance of 26.3 km from Comet “Chury”. The image shows the spectacular region of activity at the “neck” of the comet with ices sublimating and gases escaping from inside the comet.
Credit: ESA/Rosetta/NAVCAM

100 simulations performed

The simulation is part of a study published in “Science Express” by Bernese astrophysicist Martin Jutzi and his US colleague Erik Asphaug (Arizona State University). With their three-dimensional computer models the researchers reconstruct what happened in the early solar system. “Comets or their precursors formed in the outer planets region, possibly millions of years before planet formation,” Martin Jutzi explains. 

“Reconstructing the formation process of comets can provide crucial information about the initial phase of planet formation, for instance, the initial sizes of the building blocks of planets, the so-called planetesimals or cometesimals in the outer solar system.” 

About 100 simulations were performed, each of them taking one to several weeks to complete, depending on the collision type. The work was supported by the Swiss National Science Foundation through the Ambizione program and in part carried out within the frame of the Swiss National Centre for Competence in Research “PlanetS”.

67P/Churyumov-Gerasimenko isn’t the only comet showing a bi-lobed shape and evidence of a layered structure. Crashing into 9P/Tempel 1 in 2005, NASA’s Deep Impact probe revealed similar layers, a feature that is also presumed on two other comets visited by NASA missions. Half of the comet nuclei that spacecraft have observed so far – among them comets 103P/Hartley 2 and 19P/Borrelly – have bi-lobed shapes. “How and when these features formed is much debated, with distinct implications for solar system formation, dynamics, and geology”, Martin Jutzi says.

Primordial remnants of a quiet phase

In their study, the researchers applied 3D collisional models, constrained by these shape and topographic data, to understand the basic accretion mechanism and its implications for internal structure. As their three-dimensional computer simulations indicate, the major structural features observed on cometary nuclei can be explained by the pairwise low velocity accretion of weak cometesimals. The model is also compatible with the observed low bulk densities of comets as the collisions result in only minor compaction.

“These slow mergers might represent the quiet, early phase of planet formation around 4.5 billion years ago, before large bodies excited the system to disruptive velocities, supporting the idea that cometary nuclei are primordial remnants of early agglomeration of small bodies,” Martin Jutzi says. 

Alternatively, the same processes of coagulation might have occurred among debris clumps ejected from much larger parent bodies. Along with future space missions using radar to directly image internal structure, the 3D computer simulations are an important step to clarify the question of how the cometary nuclei were assembled.


Contacts and sources:
Martin Jutzi 


Discovery Reveals What Our Solar System Looked Like as a Toddler

Astronomers have discovered a disc of planetary debris surrounding a young sun-like star that shares remarkable similarities with the Kuiper Belt that lies beyond Neptune, and may aid in understanding how our solar system developed.

Left: Image of HD 115600 showing a bright debris ring viewed nearly edge-on and located just beyond a Pluto-like distance to the star. Right: A model of the HD 115600 debris ring on the same scale.
Credit: T. Currie 

An international team of astronomers, including researchers from the University of Cambridge, has identified a young planetary system which may aid in understanding how our own solar system formed and developed billions of years ago.

Using the Gemini Planet Imager (GPI) at the Gemini South telescope in Chile, the researchers identified a disc-shaped bright ring of dust around a star only slightly more massive than the sun, located 360 light years away in the Centaurus constellation. The disc is located between about 37 and 55 Astronomical Units (3.4 – 5.1 billion miles) from its host star, which is almost the same distance as the solar system’s Kuiper Belt is from the sun. The brightness of the disc, which is due to the starlight reflected by it, is also consistent with a wide range of dust compositions including the silicates and ice present in the Kuiper Belt.

The Kuiper Belt lies just beyond Neptune, and contains thousands of small icy bodies left over from the formation of the solar system more than four billion years ago. These objects range in size from specks of debris dust, all the way up to moon-sized objects like Pluto – which used to be classified as a planet, but has now been reclassified as a dwarf planet.

The star observed in this new study is a member of the massive 10-20 million year-old Scorpius-Centaurus OB association, a region similar to that in which the sun was formed. The disc is not perfectly centred on the star, which is a strong indication that it was likely sculpted by one or more unseen planets. By using models of how planets shape a debris disc, the team found that ‘eccentric’ versions of the giant planets in the outer solar system could explain the observed properties of the ring.

“It’s almost like looking at the outer solar system when it was a toddler,” said principal investigator Thayne Currie, an astronomer at the Subaru Observatory in Hawaii.

The current theory on the formation of the solar system holds that it originated within a giant molecular cloud of hydrogen, in which clumps of denser material formed. One of these clumps, rotating and collapsing under its own gravitation, formed a flattened spinning disc known as the solar nebula. The sun formed at the hot and dense centre of this disc, while the planets grew by accretion in the cooler outer regions. The Kuiper Belt is believed to be made up of the remnants of this process, so there is a possibility that once the new system develops, it may look remarkably similar to our solar system.

“To be able to directly image planetary birth environments around other stars at orbital distances comparable to the solar system is a major advancement,” said Dr Nikku Madhusudhan of Cambridge’s Institute of Astronomy, one of the paper’s co-authors. “Our discovery of a near-twin of the Kuiper Belt provides direct evidence that the planetary birth environment of the solar system may not be uncommon.”

This is the first discovery with the new cutting-edge Gemini instrument. “In just one of our many 50-second exposures we could see what previous instruments failed to see in more than 50 minutes,” said Currie.

The star, going by the designation HD 115600, was the first object the research team looked at. “Over the next few years, I’m optimistic that GPI will reveal many more debris discs and young planets. Who knows what strange, new worlds we will find,” Currie added.

The paper is accepted for publication in The Astrophysical Journal Letters.



Contacts and sources:
Sarah Collins

Little-Known Quake, Tsunami Hazards Lurk Offshore of Southern California


While their attention may be inland on the San Andreas Fault, residents of coastal Southern California could be surprised by very large earthquakes - and even tsunamis - from several major faults that lie offshore, a new study finds.

The latest research into the little known, fault-riddled, undersea landscape off of Southern California and northern Baja California has revealed more worrisome details about a tectonic train wreck in the Earth's crust with the potential for magnitude 7.9 to 8.0 earthquakes. The new study supports the likelihood that these vertical fault zones have displaced the seafloor in the past, which means they could send out tsunami-generating pulses towards the nearby coastal mega-city of Los Angeles and neighboring San Diego.

"We're dealing with continental collision," said geologist Mark Legg of Legg Geophysical in Huntington Beach, California, regarding the cause of the offshore danger. "That's fundamental. That's why we have this mess of a complicated logjam."

This map shows the California Borderland and its major tectonic features, as well as the locations of earthquakes greater than Magnitude 5.5. The dashed box shows the area of the new study. Large arrows show relative plate motion for the Pacific-North America fault boundary. The abbreviations stand for the following: BP = Banning Pass, CH = Chino Hills, CP = Cajon Pass, LA = Los Angeles, PS = Palm Springs, V = Ventura; ESC = Santa Cruz Basin; ESCBZ = East Santa Cruz Basin Fault Zone; SCI = Santa Catalina Island; SCL = San Clemente Island; SMB = Santa Monica Basin; SNI = San Nicolas Island. 
Credit: Mark Legg

Legg is the lead author of the new analysis accepted for publication in the Journal of Geophysical Research: Earth Surface, a journal of the American Geophysical Union. He is also one of a handful of geologists who have been trying for decades to piece together the complicated picture of what lies beyond Southern California's famous beaches.

The logjam Legg referred to is composed of blocks of the Earth's crust caught in the ongoing tectonic battle between the North American tectonic plate and the Pacific plate. The blocks are wedged together all the way from the San Andreas Fault on the east, to the edge of the continental shelf on the west, from 150 to 200 kilometers (90 to 125 miles) offshore. These chunks of crust get squeezed and rotated as the Pacific plate slides northwest, away from California, relative to the North American plate. The mostly underwater part of this region is called the California Continental Borderland, and includes the Channel Islands.

By combining older seafloor data and digital seismic data from earthquakes along with 4,500 kilometers (2,796 miles) of new seafloor depth measurements, or bathymetry, collected in 2010, Legg and his colleagues were able to take a closer look at the structure of two of the larger seafloor faults in the Borderland: the Santa Cruz-Catalina Ridge Fault and the Ferrelo Fault. What they were searching for are signs, like those seen along the San Andreas, that indicate how much the faults have slipped over time and whether some of that slippage caused some of the seafloor to thrust upwards.

This is a satellite picture of the Channel Islands off the coast of Southern California. New research into the little known, fault-riddled, undersea landscape off of Southern California and northern Baja California has revealed more worrisome details about a tectonic train wreck in the Earth's crust with the potential for magnitude 7.9 to 8.0 earthquakes.

Credit: NASA

What they found along the Santa Cruz-Catalina Ridge Fault are ridges, valleys and other clear signs that the fragmented, blocky crust has been lifted upward, while also slipping sideways like the plates along the San Andreas Fault do. Further out to sea, the Ferrelo Fault zone showed thrust faulting, an upward movement of one side of the fault. The vertical movement means that blocks of crust are being compressed as well as sliding horizontally relative to each other, a combination Legg describes as "transpression."

Compression comes from the blocks of the Borderland being dragged northwest, but then slamming into the roots of the Transverse Ranges - which are east-west running mountains north and west of Los Angeles. In fact, the logjam has helped build the Transverse Ranges, Legg explained.

"The Transverse Ranges rose quickly, like a mini Himalaya," Legg said.

The real Himalaya arose from a tectonic-plate collision in which the crumpled crust on both sides piled up into fast-growing, steep mountains rather than getting pushed down into Earth's mantle as happens at some plate boundaries.

As Southern California's pile-up continues, the plate movements that build up seismic stress on the San Andreas are also putting stress on the long Santa Cruz-Catalina Ridge and Ferrelo Faults. And there is no reason to believe that those faults and others in the Borderlands can't rupture in the same manner as the San Andreas, said Legg.

"Such large faults could even have the potential of a magnitude 8 quake," said geologist Christopher Sorlien of the University of California at Santa Barbara, who is not a co-author on the new paper.
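For scale, the standard seismic-moment relation indicates what a magnitude-8 rupture would involve. The rupture dimensions and slip below are illustrative assumptions of the right order for faults this long, not values from the study:

```python
# Moment magnitude for an assumed large offshore rupture, using the standard
# Hanks-Kanamori relation Mw = (2/3) log10(M0) - 6.07, with M0 in N*m.
import math

mu = 3.0e10            # crustal rigidity, Pa (typical value)
length = 250e3         # rupture length, m (assumed)
width = 20e3           # rupture width (down-dip), m (assumed)
slip = 8.0             # average slip, m (assumed)

m0 = mu * length * width * slip             # seismic moment, N*m
mw = (2 / 3) * math.log10(m0) - 6.07        # moment magnitude

print(f"Mw ~ {mw:.2f}")
```

With those numbers the moment magnitude comes out very close to 8, so a fault system of this extent is at least geometrically capable of the quakes the study warns about.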

"This continental shelf off California is not like other continental shelves - like in the Eastern U.S.," said Sorlien.

Whereas most continental shelves are about twice as wide and inactive, like that off the U.S. Atlantic coast, the California continental shelf is very narrow and is dominated by active faults and tectonics. In fact, it's unlike most continental shelves in the world, he said. It's also one of the least well mapped and understood. "It's essentially terra incognita."

"This is one of the only parts of the continental shelf of the 48 contiguous states that didn't have complete ... high-resolution bathymetry years ago," Sorlien said.

And that's why getting a better handle on the hazards posed by the Borderland's undersea faults has been long in coming and slow to catch on, even among earth scientists, he said.

NOAA was working on complete high-resolution bathymetry of the U.S. Exclusive Economic Zone - the waters within 200 miles of shore - until the budget was cut, said Legg. That left out Southern California and left researchers like him to assemble a picture of what's going on in the Borderland from whatever bits and pieces of smaller surveys they could find, he explained.

"We've got high resolution maps of the surface of Mars," Legg said, "yet we still don't have decent bathymetry for our own backyard."


Contacts and sources: 
Nanci Bompey
The American Geophysical Union


Astronomers See Flare on Famous Red Giant Star

Super-sharp observations with the telescope Alma have revealed what seems to be a gigantic flare on the surface of Mira, one of the closest and most famous red giant stars in the sky. Activity like this in red giants - similar to what we see in the Sun - comes as a surprise to astronomers. The discovery could help explain how winds from giant stars make their contribution to our galaxy's ecosystem.

This is an artist's impression of a giant flare on the surface of red giant Mira A. Behind the star, material is falling onto the star's tiny companion Mira B.
Credit:  Katja Lindblom, CC BY-NC-ND 4.0

New observations with Alma have given astronomers their sharpest ever view of the famous double star Mira. The images clearly show the two stars in the system, Mira A and Mira B, but that's not all. For the first time ever at millimetre wavelengths, they reveal details on the surface of Mira A.

"Alma's vision is so sharp that we can begin to see details on the surface of the star. Part of the stellar surface is not just extremely bright, it also varies in brightness. This must be a giant flare, and we think it's related to a flare which X-ray telescopes observed some years ago", says Wouter Vlemmings, astronomer at Chalmers University of Technology, who led the team.

The team's results were recently published in the journal Astronomy & Astrophysics.

Red giants like Mira A are crucial components of our galaxy's ecosystem. As they near the end of their lives, they lose their outer layers in the form of uneven, smoky winds. These winds carry heavy elements that the stars have manufactured - out into space where they can form new stars and planets. Most of the carbon, oxygen, and nitrogen in our bodies was formed in stars and redistributed by their winds.

This is Alma's false-color image of the double star Mira, 420 light years from Earth. The two stars, separated by a distance similar to the distance between the Sun and Pluto, are imaged so sharply that astronomers can discern surface details. The ellipse in the lower left corner shows the size of the smallest details that Alma can distinguish.

Credit: W. Vlemmings/Alma

Mira - the name means "Wonderful" in Latin - has been known for centuries as one of the most famous variable stars in the sky. At its brightest, it can be clearly seen with the naked eye, but when it's at its faintest a telescope is needed. The star, 420 light years away in the constellation Cetus, is in fact a binary system, made up of two stars of about the same mass as the sun: one is a dense, hot white dwarf and the other a fat, cool, red giant, orbiting each other at a distance about the same as Pluto's average distance from the Sun.

"Mira is a key system for understanding how stars like our sun reach the end of their lives, and what difference it makes for an elderly star to have a close companion", says Sofia Ramstedt, astronomer at Uppsala University and co-author on the paper.

The Sun, our closest star, shows activity powered by magnetic fields, and this activity, sometimes in the form of solar storms, drives the particles that make up the solar wind which in its turn can create auroras on Earth.

"Seeing a flare on Mira A suggests that magnetic fields also have a role to play for red giants' winds", says Wouter Vlemmings.

The new images give astronomers their sharpest ever view of Mira B, which is so close to its companion that material flows from one star to the other.

"This is our clearest view yet of gas from Mira A that is falling towards Mira B" says Eamon O'Gorman, astronomer at Chalmers and member of the team.

The observations were carried out as part of Alma's first long-baseline observations. By placing the telescope's antennas at their maximum distance from each other, Alma reached its maximum resolution for the first time. Mira was one of several targets in the campaign, alongside a young solar system, a gravitationally lensed galaxy and an asteroid. Now Wouter Vlemmings and his team plan new observations of Mira and other similar stars.
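The diffraction limit gives a feel for why those long baselines matter. With round-number assumptions for the observing wavelength and maximum baseline (both are assumptions here, not figures from the article), the smallest resolvable scale at Mira's distance comes out at a couple of astronomical units, comparable to the radius of a red giant, which is why surface details become visible at all:

```python
# Angular resolution of an interferometer, theta ~ wavelength / baseline,
# translated into a physical size at Mira's distance (420 light years,
# from the article). Wavelength and baseline are assumed round numbers.
wavelength = 1.3e-3        # observing wavelength, m (millimetre band, assumed)
baseline = 15e3            # maximum antenna separation, m (assumed ~15 km)
distance = 420 * 9.461e15  # 420 light years in metres

theta = wavelength / baseline        # angular resolution, radians
theta_arcsec = theta * 206265        # ... in arcseconds
resolved_size_au = theta * distance / 1.496e11  # smallest resolvable size, AU

print(f"resolution ~ {theta_arcsec:.3f} arcsec, "
      f"~ {resolved_size_au:.1f} AU at Mira's distance")
```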

"Alma has shown us details on the surface of Mira for the first time. Now we can begin to discover our closest red giants in detail that hasn't previously been possible", he says.


Contacts and sources: 
Robert Cumming
Chalmers University of Technology

Thursday, May 28, 2015

How Spacetime Is Built by Quantum Entanglement

A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory.

This is an illustration of the concept of the holography.
Credit: Hirosi Ooguri

The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields."

Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales.

The mathematical formula derived by Ooguri and his collaborators expresses local data in the extra dimensions of the gravitational theory, depicted by the red point, in terms of quantum entanglement, depicted by the blue domes.
Credit: (c) 2015 Jennifer Lin et al.

The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive.

Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without explicitly including gravity in the theory. The importance of quantum entanglement had been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators.
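The article does not quote the underlying mathematics, but the standard relation behind this line of research is the holographic entanglement entropy (Ryu-Takayanagi) formula, which ties the entanglement entropy S_A of a region A of the two-dimensional surface to the area of a minimal surface gamma_A reaching into the three-dimensional volume:

\[
S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}
\]

where G_N is Newton's constant. More entanglement on the surface corresponds to more geometry in the bulk, which is the sense in which spacetime "emerges" from entanglement.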

Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory.

"It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields. I myself am collaborating with information scientists to pursue this line of research further."


Contacts and sources:
Motoko Kakubayashi
Hirosi Ooguri, Principal Investigator
Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo 

Motoko Kakubayashi
Project Specialist
Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo

The University of Tokyo http://www.u-tokyo.ac.jp/en/
Kavli Institute for the Physics and Mathematics of the Universe http://www.ipmu.jp/

Citation: "Locality of Gravitational Systems from Entanglement of Conformal Field Theories," Physical Review Letters
Authors: Jennifer Lin (1), Matilde Marcolli (2), Hirosi Ooguri (3,4), Bogdan Stoica (3)
Author affiliations: (1) Enrico Fermi Institute and Department of Physics, University of Chicago
(2) Department of Mathematics, California Institute of Technology
(3) Walter Burke Institute for Theoretical Physics, California Institute of Technology
(4) Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo

First Search for Pluto System Hazards Completed: So Far, All Clear

NASA’s New Horizons team has analyzed the first set of hazard-search images of the Pluto system taken by the spacecraft itself – and so far, all looks clear for the spacecraft’s safe passage.

The observations were made May 11-12 from a range of 47 million miles (76 million kilometers) using the telescopic Long Range Reconnaissance Imager (LORRI) on New Horizons. For these observations, LORRI was instructed to take 144 10-second exposures, designed to allow a highly sensitive search for faint satellites, rings or dust sheets in the system.

This image shows the results of the New Horizons team’s first search for potentially hazardous material around Pluto, conducted May 11-12, 2015, from a range of 47 million miles (76 million kilometers). The image combines 48 10-second exposures, taken with the spacecraft’s Long Range Reconnaissance Imager (LORRI), to offer the most sensitive view yet of the Pluto system. 
The left panel is a combination of the original images before any processing. The combined glare of Pluto and its large moon Charon in the center of the field, along with the thousands of background stars, overwhelm any faint moons or rings that might pose a threat to the New Horizons spacecraft. 

The central panel is the same image after extensive processing to remove Pluto and Charon’s glare and most of the background stars, revealing Pluto’s four small moons -- Styx, Nix, Kerberos and Hydra -- as points of light. 

The right panel overlays the orbits and locations of all five moons, including Charon. Remaining unlabeled spots and blemishes in the processed image are imperfectly removed stars, including variable stars which appear as bright or dark dots. The faint grid pattern is an artifact of the image processing. Celestial north is up in these images.
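The sensitivity gain from stacking exposures follows the usual square-root-of-N rule for independent noise; a quick sketch (my illustration, not mission code):

```python
import math

# Hedged sketch: combining N equal exposures improves the signal-to-noise
# ratio by roughly sqrt(N), assuming the noise in each frame is independent.
def stacking_snr_gain(n_frames):
    return math.sqrt(n_frames)

# 48 frames were combined for the released image; 144 were taken in total.
print(f"48-frame stack:  ~{stacking_snr_gain(48):.1f}x single-frame SNR")
print(f"144-frame stack: ~{stacking_snr_gain(144):.1f}x single-frame SNR")
```

This is why stacking dozens of short exposures can reveal moons far too faint to see in any single frame.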

The mission team is looking carefully for any indications of dust or debris that might threaten New Horizons before the spacecraft’s flight through the Pluto system on July 14; a particle as small as a grain of rice could be fatal.

The observations, downlinked to Earth May 12-15 and processed and analyzed May 12-18, detected Pluto and all five of its known moons, but no rings, new moons, or hazards of any kind. The New Horizons hazard detection team, led by John Spencer of the Southwest Research Institute in Boulder, Colorado, determined that small satellites with about half the brightness of Pluto’s faintest known moon, Styx, could have been detected at this range. 

 Any undiscovered moons outside the orbit of Pluto’s largest and closest moon, Charon, are thus likely smaller than 3-10 miles (5-15 kilometers) in diameter. If any undiscovered rings are present around Pluto outside Charon’s orbit, they must be very faint or narrow – less than 1,000 miles wide or reflecting less than one 5-millionth of the incoming sunlight.

The next hazard-search images will be taken May 29-30, and should have about twice the sensitivity of the first batch. The team expects to complete a thorough analysis of the data and report on its results by June 12. The New Horizons team has until July 4 to divert the spacecraft to one of three alternate routes if any dangers are found.

New Horizons is nearly 2.95 billion miles from home, speeding toward Pluto and its moons at just under 750,000 miles per day. The spacecraft is healthy and all systems are operating normally.

The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, designed, built, and operates the New Horizons spacecraft, and manages the mission for NASA’s Science Mission Directorate. Southwest Research Institute, San Antonio and Boulder, Colorado, leads the science team, payload operations and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama.


Contacts and sources:
Tricia Talbert
Marshall Space Flight Center

How Sleep Helps Us Learn and Memorize


Sleep is important for long-lasting memories, particularly during exam season. Research published in PLOS Computational Biology suggests that sleep triggers the synapses in our brain to both strengthen and weaken, which prompts the forgetting, strengthening or modification of our memories; the strengthening occurs through a process known as long-term potentiation (LTP).
Credit: Blanco et al.

Researchers led by Sidarta Ribeiro at the Brain Institute of the Federal University of Rio Grande do Norte, Brazil, measured the levels of a protein related to LTP during the sleep cycle of rats. The authors then used the data to build models of sleep-dependent synaptic plasticity.

The results show that sleep can have completely different effects depending on whether LTP is present or not. A lack of LTP leads to memory erasure, while the presence of LTP can either strengthen memories or prompt the emergence of new ones.
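That qualitative result can be sketched in a deliberately simplified toy model (my illustration, not the authors' published model): without LTP all synaptic weights decay toward erasure, while with LTP strong synapses are selectively reinforced.

```python
# Toy sketch (not the paper's model): synaptic weights during sleep either
# decay uniformly (no LTP -> memory erasure) or are selectively boosted
# (LTP present -> strengthening/restructuring of the memory trace).
def sleep_update(weights, ltp_present, decay=0.5, boost=1.5):
    """Apply one simulated sleep cycle to a list of synaptic weights."""
    if ltp_present:
        # LTP reinforces already-strong synapses; weak ones still decay
        return [w * boost if w > 0.5 else w * decay for w in weights]
    # Without LTP, all weights decay homogeneously
    return [w * decay for w in weights]

memory = [0.2, 0.6, 0.9]
print(sleep_update(memory, ltp_present=False))  # uniform decay
print(sleep_update(memory, ltp_present=True))   # strong synapses boosted
```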

The research provides an empirical and theoretical framework to understand the mechanisms underlying the complex role of sleep for learning, which involves selective remembering as well as creativity.


Contacts and sources:
Sidarta Ribeiro
Federal University of Rio Grande do Norte
Brain Institute

Citation: Blanco W, Pereira CM, Cota VR, Souza AC, Rennó-Costa C, Santos S, et al. (2015) Synaptic Homeostasis and Restructuring across the Sleep-Wake Cycle. PLoS Comput Biol 11(5): e1004241. doi:10.1371/journal.pcbi.1004241
 

Natural Cancer Cure: Marine Creature Yields New Cancer-Killing Drug

For decades, scientists have known that ET-743, a compound extracted from a marine invertebrate called a mangrove tunicate, can kill cancer cells. The drug has been approved for use in patients in Europe and is in clinical trials in the U.S.

Scientists suspected the mangrove tunicate, which is a type of a sea squirt, doesn't actually make ET-743. But the precise origins of the drug, which is also known as trabectedin, were a mystery.

A mangrove tunicate clings to a tree root 
Photo by Michael M. Schofield 

By analyzing the genome of the tunicate, along with the microbes that live inside it, using advanced sequencing techniques, researchers at the University of Michigan were able to isolate the genetic blueprint of ET-743's producer--which turns out to be a type of bacteria called Candidatus Endoecteinascidia frumentensis.

The findings greatly expand understanding of the microbe and of how ET-743 is produced, the researchers reported online May 27 in the journal Environmental Microbiology. They're optimistic that the insights will help make it possible to culture the bacteria in the laboratory without its host.

"These symbiotic microbes have long been thought to be the true sources of many of the natural products that have been isolated from invertebrates in the ocean and on the land. But very little is known about them because we're not able to get most of them to grow in a laboratory setting," said study senior author David Sherman, the Hans W. Vahlteich Professor of Medicinal Chemistry in the College of Pharmacy and a faculty member of the U-M Life Sciences Institute, where his lab is located.

"Currently, many of these compounds can only be harvested in small amounts from host animals, which is unsustainable from an economic and environmental perspective," said Michael Schofield, one of two first authors on the study and a member of the Sherman lab before she graduated from U-M this spring. "Our hope is that understanding the genomes of these micro-organisms and the chemical reactions that occur inside of them will provide new avenues to economical and sustainable production of the medicinal molecules they make."

ET-743 is currently made using a complicated, partially synthetic process.

"A major challenge of sequencing genomes from samples containing a mixture of different organisms is figuring out which DNA sequences go with which organisms. We used bioinformatic approaches that allowed us to tease that apart," said Sunit Jain, a bioinformatics specialist in the U-M Department of Earth and Environmental Sciences, and the study's other first author.

Bioinformatics involves the collection, classification, storage and analysis of biochemical and biological information using computers.

The University of Michigan has filed for patent protection on this discovery.


Contacts and sources:
Ian Demsky
University of Michigan

Super-Efficient Light-Based Computers Possible, Breakthrough Achieved at Stanford

Stanford electrical engineer Jelena Vuckovic wants to make computers faster and more efficient by reinventing how they send data back and forth between chips, where the work is done.

In computers today, data is pushed through wires as a stream of electrons. That takes a lot of power, which helps explain why laptops get so warm.

Infrared light enters this silicon structure from the left. The cut-out patterns, determined by an algorithm, route two different frequencies of this light into the pathways on the right. This is a greatly magnified image of a working device that is about the size of a speck of dust.
Photo: Alexander Piggott

"Several years ago, my colleague David Miller carefully analyzed power consumption in computers, and the results were striking," said Vuckovic, referring to electrical engineering Professor David Miller. "Up to 80 percent of the microprocessor power is consumed by sending data over the wires - so-called interconnects."

In a Nature Photonics article whose lead author is Stanford graduate student Alexander Piggott, Vuckovic, a professor of electrical engineering, and her team explain a process that could revolutionize computing by making it practical to use light instead of electricity to carry data inside computers.

Proven technology

In essence, the Stanford engineers want to miniaturize the proven technology of the Internet, which moves data by beaming photons of light through fiber optic threads.

"Optical transport uses far less energy than sending electrons through wires," Piggott said. "For chip-scale links, light can carry more than 20 times as much data."

Theoretically, this is doable because silicon is transparent to infrared light - the way glass is transparent to visible light. So wires could be replaced by optical interconnects: silicon structures designed to carry infrared light.

But so far, engineers have had to design optical interconnects one at a time. Given that thousands of such linkages are needed for each electronic system, optical data transport has remained impractical.

Now the Stanford engineers believe they've broken that bottleneck by inventing what they call an inverse design algorithm.

It works as the name suggests: the engineers specify what they want the optical circuit to do, and the software provides the details of how to fabricate a silicon structure to perform the task.
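As a rough illustration of the inverse-design idea (a toy model, not the Stanford team's algorithm or any real photonics solver): specify the target response, then let gradient descent adjust structure parameters until a simple forward model reproduces it.

```python
# Toy inverse design: find structure parameters whose simulated response
# matches a specified target, by gradient descent on the squared error.

def forward_model(structure):
    """Stand-in physics model: response is a weighted sum of the structure."""
    return sum((i + 1) * s for i, s in enumerate(structure))

def inverse_design(target, n_params=4, lr=0.01, steps=2000):
    structure = [0.0] * n_params
    for _ in range(steps):
        error = forward_model(structure) - target
        # Gradient of the squared error with respect to each parameter
        for i in range(n_params):
            structure[i] -= lr * 2 * error * (i + 1)
    return structure

design = inverse_design(target=5.0)
print(round(forward_model(design), 3))  # converges to ~5.0
```

A real electromagnetic inverse-design tool replaces `forward_model` with a full Maxwell-equation simulation and the parameters with the silicon cut-out pattern, but the specify-then-optimize loop is the same.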

"We used the algorithm to design a working optical circuit and made several copies in our lab," Vuckovic said.

In addition to Piggott, the research team included former graduate student Jesse Lu (now at Google), graduate student Jan Petykiewicz and postdoctoral scholars Thomas Babinec and Konstantinos Lagoudakis. As they reported in Nature Photonics, the devices functioned flawlessly despite tiny imperfections.

"Our manufacturing processes are not nearly as precise as those at commercial fabrication plants," Piggott said. "The fact that we could build devices this robust on our equipment tells us that this technology will be easy to mass-produce at state-of-the-art facilities."

The researchers envision many other potential applications for their inverse design algorithm, including high bandwidth optical communications, compact microscopy systems and ultra-secure quantum communications.

Light and silicon

The Stanford work relies on the well-known fact that infrared light will pass through silicon the way sunlight shines through glass.

And just as a prism bends visible light to reveal the rainbow, different silicon structures can bend infrared light in useful ways.

The Stanford algorithm designs silicon structures so slender that more than 20 of them could sit side-by-side inside the diameter of a human hair. These silicon interconnects can direct a specific frequency of infrared light to a specific location to replace a wire.

By loading data onto these frequencies, the Stanford algorithm can create switches or conduits or whatever else is required for the task.

The inverse design algorithm is what makes optical interconnects practical by describing how to create what amount to silicon prisms to bend infrared light.

Once the algorithm has calculated the proper shape for the task, engineers can use standard industrial processes to transfer that pattern onto a slice of silicon.

"Our structures look like Swiss cheese but they work better than anything we've seen before," Vuckovic said.

She and Piggott have made several different types of optical interconnects and they see no limits on what their inverse design algorithm can do.

In their Nature Photonics paper, the Stanford authors note that the automation of large-scale circuit design enabled engineers to create today's sophisticated electronics.

By automating the process of designing optical interconnects, they feel that they have set the stage for the next generation of even faster and far more energy-efficient computers that use light rather than electricity for internal data transport.


Contacts and sources:
Tom Abate
Stanford University


Merging Galaxies Send Signal


In the most extensive survey of its kind ever conducted, a team of scientists have found an unambiguous link between the presence of supermassive black holes that power high-speed, radio-signal-emitting jets and the merger history of their host galaxies.

This artist's impression illustrates how high-speed jets from supermassive black holes would look. These outflows of plasma are the result of the extraction of energy from a supermassive black hole's rotation as it consumes the disc of swirling material that surrounds it. These jets have very strong emissions at radio wavelengths.
Credit: ESA/Hubble, L. Calçada (ESO)

Almost all of the galaxies hosting these jets were found to be merging with another galaxy, or to have done so recently. The results lend significant weight to the case for jets being the result of merging black holes and will be presented in the Astrophysical Journal.

A team of astronomers using the NASA/ESA Hubble Space Telescope's Wide Field Camera 3 (WFC3) have conducted a large survey to investigate the relationship between galaxies that have undergone mergers and the activity of the supermassive black holes at their cores.

The team studied a large selection of galaxies with extremely luminous centres -- known as active galactic nuclei (AGNs) -- thought to be the result of large quantities of heated matter circling around and being consumed by a supermassive black hole. Whilst most galaxies are thought to host a supermassive black hole, only a small percentage of them are this luminous and fewer still go one step further and form what are known as relativistic jets [1]. The two high-speed jets of plasma move at almost the speed of light and stream out in opposite directions at right angles to the disc of matter surrounding the black hole, extending thousands of light-years into space. The hot material within the jets is also the origin of radio waves.

It is these jets that Marco Chiaberge from the Space Telescope Science Institute, USA (also affiliated with Johns Hopkins University, USA and INAF-IRA, Italy) and his team hoped to confirm were the result of galactic mergers [2].

The team inspected five categories of galaxies for visible signs of recent or ongoing mergers -- two types of galaxies with jets, two types of galaxies that had luminous cores but no jets, and a set of regular inactive galaxies [3].

"The galaxies that host these relativistic jets give out large amounts of radiation at radio wavelengths," explains Marco. "By using Hubble's WFC3 camera we found that almost all of the galaxies with large amounts of radio emission, implying the presence of jets, were associated with mergers. However, it was not only the galaxies containing jets that showed evidence of mergers!" [4].

"We found that most merger events in themselves do not actually result in the creation of AGNs with powerful radio emission," added co-author Roberto Gilli from Osservatorio Astronomico di Bologna, Italy. "About 40% of the other galaxies we looked at had also experienced a merger and yet had failed to produce the spectacular radio emissions and jets of their counterparts."

Although it is now clear that a galactic merger is almost certainly necessary for a galaxy to host a supermassive black hole with relativistic jets, the team deduce that there must be additional conditions which need to be met. They speculate that the collision of one galaxy with another produces a supermassive black hole with jets when the central black hole is spinning faster -- possibly as a result of meeting another black hole of a similar mass -- as the excess energy extracted from the black hole's rotation would power the jets.

"There are two ways in which mergers are likely to affect the central black hole. The first would be an increase in the amount of gas being driven towards the galaxy's centre, adding mass to both the black hole and the disc of matter around it," explains Colin Norman, co-author of the paper. "But this process should affect black holes in all merging galaxies, and yet not all merging galaxies with black holes end up with jets, so it is not enough to explain how these jets come about. The other possibility is that a merger between two massive galaxies causes two black holes of a similar mass to also merge. It could be that a particular breed of merger between two black holes produces a single spinning supermassive black hole, accounting for the production of jets."

Future observations using both Hubble and ESO's Atacama Large Millimeter/submillimeter Array (ALMA) are needed to expand the survey set even further and continue to shed light on these complex and powerful processes.



Contact and sources:
Mathias Jäger
ESA/Hubble Information Center

Notes

[1] Relativistic jets travel at close to the speed of light, making them among the fastest-moving phenomena known in astronomy.

[2] The new observations used in this research were taken in collaboration with the 3CR-HST team. This international team of astronomers is currently led by Marco Chiaberge and has conducted a series of surveys of radio galaxies and quasars from the 3CR catalogue using the Hubble Space Telescope.

[3] The team compared their observations with the swathes of archival data from Hubble. They directly surveyed twelve very distant radio galaxies and compared the results with data from a large number of galaxies observed during other observing programmes.

[4] Other studies had shown a strong relationship between the merger history of a galaxy and the high levels of radiation at radio wavelengths that suggest the presence of relativistic jets lurking at the galaxy's centre. However, this survey is much more extensive, and its results are very clear, meaning it can now be said with near certainty that radio-loud AGNs -- that is, galaxies with relativistic jets -- are the result of galactic mergers.


Donuts, Math, and Superdense Teleportation of Quantum Information

Putting a hole in the center of the donut--a mid-nineteenth-century invention--allows the deep-fried pastry to cook evenly, inside and out. As it turns out, the hole in the center of the donut also holds answers for a more efficient and reliable type of quantum information teleportation, a critical goal for quantum information science.

In superdense teleportation of quantum information, Alice (near) selects a particular set of states to send to Bob (far), using the hyperentangled pair of photons they share. The possible states Alice may send are represented as the points on a donut shape, here artistically depicted in sharp relief from the cloudy silhouette of general quantum state that surrounds them. To transmit a state, Alice makes a measurement on her half of the entangled state, which has four possible outcomes shown by red, green, blue, and yellow points. She then communicates the outcome of her measurement (in this case, yellow, represented by the orange streak connecting the two donuts) to Bob using a classical information channel. Bob then can make a corrective rotation on his state to recover the state that Alice sent.

Credit: Image by Precision Graphics, copyright Paul Kwiat, University of Illinois at Urbana-Champaign

Quantum teleportation is a method of communicating information from one location to another without moving the physical matter to which the information is attached. Instead, the sender (Alice) and the receiver (Bob) share a pair of entangled elementary particles--in this experiment, photons, the smallest units of light--that transmit information through their shared quantum state. In simplified terms, Alice encodes information in the form of the quantum state of her photon. She then sends a key to Bob over traditional communication channels, indicating what operation he must perform on his photon to prepare the same quantum state, thus teleporting the information.

Quantum teleportation has been achieved by a number of research teams around the globe since it was first theorized in 1993, but current experimental methods require extensive resources and/or only work successfully a fraction of the time.

Now, by taking advantage of the mathematical properties intrinsic to the shape of a donut--or torus, in mathematical terminology--a research team led by physicist Paul Kwiat of the University of Illinois at Urbana-Champaign has made great strides by realizing "superdense teleportation". This new protocol, developed by physicist and paper co-author Herbert Bernstein of Hampshire College in Amherst, MA, effectively reduces the resources and effort required to teleport quantum information, while at the same time improving the reliability of the information transfer.

With this new protocol, the researchers have experimentally achieved 88 percent transmission fidelity, twice the classical upper limit of 44 percent. The protocol uses pairs of photons that are "hyperentangled"--simultaneously entangled in more than one state variable, in this case in polarization and in orbital angular momentum--with a restricted number of possible states in each variable. In this way, each photon can carry more information than in earlier quantum teleportation experiments.

At the same time, this method makes Alice's measurements and Bob's transformations far more efficient than their corresponding operations in quantum teleportation: the number of possible operations being sent to Bob as the key has been reduced, hence the term "superdense."

Kwiat explains, "In classical computing, a unit of information, called a bit, can have only one of two possible values--it's either a zero or a one. A quantum bit, or qubit, can simultaneously hold many values, arbitrary superpositions of 0 and 1 at the same time, which makes faster, more powerful computing systems possible.

"So a qubit could be represented as a point on a sphere, and to specify what state it is, one would need longitude and latitude. That's a lot of information compared to just a 0 or a 1."

"What makes our new scheme work is a restrictive set of states. The analog would be, instead of using a sphere, we are going to use a torus, or donut shape. A sphere can only rotate on an axis, and there is no way to get an opposite point for every point on a sphere by rotating it--because the axis points, the north and the south, don't move. With a donut, if you rotate it 180 degrees, every point becomes its opposite. Instead of axis points you have a donut hole. Another advantage, the donut shape actually has more surface area than the sphere, mathematically speaking--this means it has more distinct points that can be used as encoded information."
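Kwiat's sphere-versus-torus picture can be written out explicitly (standard notation, not quoted from the paper). A general qubit is a point on a sphere, specified by a "latitude" and a "longitude":

\[
|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\phi}\sin\tfrac{\theta}{2}\,|1\rangle .
\]

Restricting to states whose amplitudes all have equal magnitude leaves only the phases free, for example

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle + e^{i\phi}|1\rangle\right),
\]

and with several components the independent phase angles trace out a torus rather than a sphere.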

Lead author, Illinois physics doctoral candidate Trent Graham, comments, "We are constrained to sending a certain class of quantum states called 'equimodular' states. We can deterministically perform operations on this constrained set of states, which are impossible to perfectly perform with completely general quantum states. Deterministic describes a definite outcome, as opposed to one that is probabilistic. With existing technologies, previous photonic quantum teleportation schemes either cannot work every time or require extensive experimental resources. Our new scheme could work every time with simple measurements."

This research team is part of a broader collaboration that is working toward realizing quantum communication from a space platform, such as the International Space Station, to an optical telescope on Earth. The collaboration--Kwiat, Graham, Bernstein, physicist Jungsang Kim of Duke University in Durham, NC, and scientist Hamid Javadi of NASA's Jet Propulsion Laboratory in Pasadena, CA--recently received funding from NASA Headquarters' Space Communication and Navigation program (with project directors Badri Younes and Barry Geldzahler) to explore the possibility.

"It would be a stepping stone toward building a quantum communications network, a system of nodes on Earth and in space that would enable communication from any node to any other node," Kwiat explains. "For this, we're experimenting with different quantum state properties that would be less susceptible to air turbulence disruptions."



Contacts and sources: 
Siv Schwink
University of Illinois at Urbana-Champaign

Citation: The team's recent experimental findings are published in the May 28, 2015 issue of Nature Communications, and represent the collaborative effort of Kwiat, Graham, and Bernstein, as well as physicist Tzu-Chieh Wei of State University of New York at Stony Brook, and mathematician Marius Junge of the University of Illinois.

Oldest Light in the Universe Allows Insight into Birth of the Cosmos

Astrophysicists have developed a new method for calculating the effect of Rayleigh scattering on photons, potentially allowing researchers to better understand the formation of the Universe.

University of British Columbia (UBC) theoretical cosmology graduate student Elham Alipour, UBC physicist Kris Sigurdson and Ohio State University astrophysicist Christopher Hirata probed the effect of Rayleigh scattering -- the process that makes the sky appear blue when the Sun's photons are scattered by molecules in the atmosphere -- on the cosmic microwave background (CMB).

By using different high-frequency channels to observe the CMB and combining this information, researchers may be able to better isolate the Rayleigh signal.
Credit:  UBC Science

The CMB is the oldest light in the universe, which originated when electrons combined with protons to form the first atoms. These primordial atoms were also the first to Rayleigh scatter light.

"Detecting the Rayleigh signal is challenging because the frequency range where Rayleigh scattering has the biggest effect is contaminated by 'noise' and foregrounds, such as galactic dust," lead author Elham Alipour said.

By using different high-frequency channels to observe the CMB and combining this information, researchers may be able to better isolate the Rayleigh signal. This calculation of the effects of Rayleigh scattering on cosmology might help us better understand the formation of our Universe 13.8 billion years ago.
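The release does not give the scaling, but the reason high-frequency channels matter is the steep frequency dependence of Rayleigh scattering: well below the atomic resonance, the cross-section grows roughly as the fourth power of the photon frequency,

\[
\sigma_R(\nu) \approx \sigma_T \left(\frac{\nu}{\nu_{\mathrm{eff}}}\right)^{4},
\]

where \(\sigma_T\) is the Thomson cross-section and \(\nu_{\mathrm{eff}}\) an effective resonance frequency of the atom. The Rayleigh signal is therefore strongest at high frequencies, which is also where foregrounds such as galactic dust are worst.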

"The CMB sky is a snapshot of the early Universe; it is a single frame in the movie of the Universe, and we have shown that the Rayleigh signal gives us another, fainter snapshot of the same scene at a slightly different time," co-author Kris Sigurdson explained.

The findings have been highlighted in Physical Review D.


Contacts and sources:
Silvia Moreno-Garcia
University of British Columbia

430,000-Year-Old Murder? Lethal Wounds on Skull Indicate Homicide

Lethal wounds identified on a human skull in the Sima de los Huesos, Spain, may indicate one of the first cases of murder in human history, some 430,000 years ago, according to a study published May 27, 2015 in the open-access journal PLOS ONE by Nohemi Sala from Centro Mixto UCM-ISCIII de Evolución y Comportamiento Humanos, Spain, and colleagues.
 
This is a frontal view of Cranium 17 showing the position of the traumatic events T1 (inferior) and T2 (superior).
Credit: Javier Trueba / Madrid Scientific Films
The archeological site, Sima de los Huesos in northern Spain, is located deep within an underground cave system and contains the skeletal remains of at least 28 individuals that date to around 430,000 years ago, during the Middle Pleistocene. The only access to the site is through a 13-meter deep vertical shaft, and how the human bodies arrived there remains a mystery.

A nearly complete skull, Cranium 17 from the Sima de los Huesos, comprises 52 cranial fragments recovered during excavations at the site over the last 20 years. This skull shows two penetrating lesions on the frontal bone, above the left eye. Relying on modern forensic techniques, such as contour and trajectory analysis of the traumas, the authors of the study showed that both fractures were likely produced by two separate impacts with the same object, with slightly different trajectories, around the time of the individual's death.

According to the authors, the injuries are unlikely to be the result of an accidental fall down the vertical shaft. Rather, the type of fractures, their location, and the fact that they appear to have been produced by two blows with the same object lead the authors to interpret them as the result of an act of lethal interpersonal aggression--or what may constitute the earliest case of murder in human history.

Furthermore, since this individual was already dead, the authors infer that the body was likely carried to the top of the vertical shaft by other humans. The authors suggest that humans were likely responsible for the accumulation of bodies in the Sima de los Huesos, which supports the idea that this site represents early evidence of funerary behavior.


Contacts and sources: 
Kayla Graham
PLOS One

Citation: Sala N, Arsuaga JL, Pantoja-Pérez A, Pablos A, Martínez I, Quam RM, et al. (2015) Lethal Interpersonal Violence in the Middle Pleistocene. PLoS ONE 10(5): e0126589. doi:10.1371/journal.pone.0126589

New Human Ancestor Species Found: Australopithecus deyiremeda, 3.3 to 3.5 Million Years Old, Lived Alongside "Lucy"


A new relative joins "Lucy" on the human family tree. An international team of scientists, led by Dr. Yohannes Haile-Selassie of The Cleveland Museum of Natural History, has discovered a 3.3 to 3.5 million-year-old new human ancestor species. Upper and lower jaw fossils recovered from the Woranso-Mille area of the Afar region of Ethiopia have been assigned to the new species Australopithecus deyiremeda. This hominin lived alongside the famous "Lucy's" species, Australopithecus afarensis. The species will be described in the May 28, 2015 issue of the international scientific journal Nature.

This is the holotype upper jaw of a new human ancestor species found on March 4, 2011.

Credit: Yohannes Haile-Selassie, Cleveland Museum of Natural History

Lucy's species lived from 2.9 million years ago to 3.8 million years ago, overlapping in time with the new species Australopithecus deyiremeda. The new species is the most conclusive evidence for the contemporaneous presence of more than one closely related early human ancestor species prior to 3 million years ago. The species name "deyiremeda" (day-ihreme-dah) means "close relative" in the language spoken by the Afar people.

Australopithecus deyiremeda differs from Lucy's species in terms of the shape and size of its thick-enameled teeth and the robust architecture of its lower jaws. The anterior teeth are also relatively small, indicating that it probably had a different diet.

"The new species is yet another confirmation that Lucy's species, Australopithecus afarensis, was not the only potential human ancestor species that roamed in what is now the Afar region of Ethiopia during the middle Pliocene," said lead author and Woranso-Mille project team leader Dr. Yohannes Haile-Selassie, curator of physical anthropology at The Cleveland Museum of Natural History. "Current fossil evidence from the Woranso-Mille study area clearly shows that there were at least two, if not three, early human species living at the same time and in close geographic proximity."

Dr. Yohannes Haile-Selassie of The Cleveland Museum of Natural History announced a new human ancestor species from Ethiopia. He is pictured holding a cast of the holotype upper jaw.

Credit:  Laura Dempsey, Cleveland Museum of Natural History

"The age of the new fossils is very well constrained by the regional geology, radiometric dating, and new paleomagnetic data," said co-author Dr. Beverly Saylor of Case Western Reserve University. The combined evidence from radiometric, paleomagnetic, and depositional rate analyses yields estimated minimum and maximum ages of 3.3 and 3.5 million years.



"This new species from Ethiopia takes the ongoing debate on early hominin diversity to another level," said Haile-Selassie. "Some of our colleagues are going to be skeptical about this new species, which is not unusual. However, I think it is time that we look into the earlier phases of our evolution with an open mind and carefully examine the currently available fossil evidence rather than immediately dismissing the fossils that do not fit our long-held hypotheses," said Haile-Selassie.

This photo shows the jaw.

Credit:  Laura Dempsey ©Cleveland Museum of Natural History

Scientists have long argued that there was only one pre-human species at any given time between 3 and 4 million years ago, subsequently giving rise to another new species through time. This was what the fossil record appeared to indicate until the end of the 20th century. However, the naming of Australopithecus bahrelghazali from Chad and Kenyanthropus platyops from Kenya, both from the same time period as Lucy's species, challenged this long-held idea. Although a number of researchers were skeptical about the validity of these species, the announcement by Haile-Selassie of the 3.4 million-year-old Burtele partial foot in 2012 cleared some of the skepticism on the likelihood of multiple early hominin species in the 3 to 4 million-year range.

Cast of  fossil specimens of Australopithecus deyiremeda 
Credit:  Laura Dempsey ©Cleveland Museum of Natural History

The Burtele partial fossil foot did not belong to a member of Lucy's species. However, despite the similarity in geological age and close geographic proximity, the researchers have not assigned the partial foot to the new species due to lack of clear association. Regardless, the new species Australopithecus deyiremeda incontrovertibly confirms that multiple species did indeed co-exist during this time period.

This discovery has important implications for our understanding of early hominin ecology. It also raises significant questions, such as how multiple early hominins living at the same time and geographic area might have used the shared landscape and available resources.

To view a photo gallery and video interview, visit www.cmnh.org/nature2015.

Discovery of Australopithecus deyiremeda:

The holotype (type specimen) of Australopithecus deyiremeda is an upper jaw with teeth discovered on March 4, 2011, on top of a silty clay surface at one of the Burtele localities. The paratype lower jaws were also surface discoveries found on March 4 and 5, 2011, at the same locality as the holotype and another nearby locality called Waytaleyta. The holotype upper jaw was found in one piece (except for one of the teeth which was found nearby), whereas the mandible was recovered in two halves that were found about two meters apart from each other. The other mandible was found about 2 kilometers east of where the Burtele specimens were found.

Location of the Discovery:

The fossil specimens were found in the Woranso-Mille Paleontological Project study area located in the central Afar region of Ethiopia about 325 miles (520 kilometers) northeast of the capital Addis Ababa and 22 miles (35 kilometers) north of Hadar ("Lucy's" site). Burtele and Waytaleyta are local names for the areas where the holotype and paratypes were found and they are located in the Mille district, Zone 1 of the Afar Regional State.

The Woranso-Mille Project:

The Woranso-Mille Paleontological project conducts field and laboratory work in Ethiopia every year. This multidisciplinary project is led by Dr. Yohannes Haile-Selassie of The Cleveland Museum of Natural History. Additional co-authors of this research include: Dr. Luis Gibert of University of Barcelona (Spain), Dr. Stephanie Melillo of the Max Planck Institute (Leipzig, Germany), Dr. Timothy M. Ryan of Pennsylvania State University, Dr. Mulugeta Alene of Addis Ababa University (Ethiopia), Drs. Alan Deino and Gary Scott of the Berkeley Geochronology Center, Dr. Naomi E. Levin of Johns Hopkins University, and Dr. Beverly Z. Saylor of Case Western Reserve University. Graduate and undergraduate students from Ethiopia and the United States of America also participated in the field and laboratory activities of the project.


Contacts and sources:
Glenda Bogar
Cleveland Museum of Natural History

Tuesday, May 26, 2015

Cosmic Maid Service: Supernovas, Black Holes Team up to Clean Galaxies

Supernovas just might be the maid service of the universe.

It seems these explosions that mark the end of a star's life work hand-in-hand with supermassive black holes to sweep out gas and shut down galaxies' star-forming factories.

Jets erupting from a supermassive black hole, such as the one in Centaurus A (shown in this color composite image), might clear the way for supernovas to sweep out gas and stop star formation.

 Photo credit: WFI/ESO (optical); A. Weill et al/APEX/MPIFR and ESO (submillimeter); R. Kraft et al/ CXC/CFA and NASA (X-ray).

Recent research, led by Michigan State University astronomers, finds that the black holes located at the cores of galaxies launch fountains of charged particles, which can stir up gas throughout the galaxy and temporarily interrupt star formation.

But unless something intervenes, the gas will eventually cool and start forming stars again.

One mega-outburst from the black hole, though, could heat the gas surrounding the galaxy enough to let supernovas take over and mop up the mess. A celestial cleaning partnership might help astronomers understand why some massive galaxies stopped forming stars billions of years ago.

"Our previous research had shown that black-hole outbursts can limit star formation in massive galaxies, but they can't completely shut it off," said team leader Mark Voit, MSU professor of physics and astronomy in the College of Natural Science. "Something else needs to keep sweeping out the gas that dying stars continually dump into a galaxy, and supernova sweeping appears to work perfectly for that."

Other members of the research team are Megan Donahue, MSU professor of physics and astronomy; Brian O'Shea, MSU associate professor of physics and astronomy; Greg Bryan, Columbia University professor of astronomy; Ming Sun, University of Alabama in Huntsville assistant professor of physics; and Norbert Werner, Stanford University research associate.

This research was recently published in Astrophysical Journal Letters and covered in Science News.

Contacts and sources:
Tom Oswald
Michigan State University

Could a Left-Handed Cosmic Magnetic Field Explain Missing Antimatter?



The discovery of a 'left-handed' magnetic field that pervades the universe could help explain a long-standing mystery – the absence of cosmic antimatter.

A group of scientists, led by Prof. Tanmay Vachaspati from Arizona State University in the United States, with collaborators at Washington University and Nagoya University, announce their result in Monthly Notices of the Royal Astronomical Society.
An artist’s impression of the Fermi Gamma ray Space Telescope (FGST) in orbit. 

Credit: NASA.  

Planets, stars, gas and dust are almost entirely made up of 'normal' matter of the kind we are familiar with on Earth. But theory predicts that there should be a similar amount of antimatter, like normal matter, but with the opposite charge. For example, an antielectron (called a positron) has the same mass as its conventional counterpart, but a positive rather than negative charge.

In 2001 Prof. Vachaspati published theoretical models to try to solve this puzzle, which predict that the entire universe is filled with helical (screw-like) magnetic fields. He and his team were inspired to search for evidence of these fields in data from the NASA Fermi Gamma ray Space Telescope (FGST).

FGST, launched in 2008, observes gamma rays (electromagnetic radiation with a shorter wavelength than X-rays) from very distant sources, such as the supermassive black holes found in many large galaxies. The gamma rays are sensitive to the effect of the magnetic fields they travel through on their long journey to the Earth. If the field is helical, it will imprint a spiral pattern on the distribution of gamma rays.

Vachaspati and his team see exactly this effect in the FGST data, allowing them to not only detect the magnetic field but also to measure its properties. The data shows not only a helical field, but also that there is an excess of left-handedness - a fundamental discovery that for the first time suggests the precise mechanism that led to the absence of antimatter.

For example, mechanisms that occur nanoseconds after the Big Bang, when the Higgs field gave masses to all known particles, predict left-handed fields, while mechanisms based on interactions that occur even earlier predict right-handed fields.

 Illustration of the Fermi Gamma ray Space Telescope (FGST) map of the sky with the central band removed to block out gamma rays originating in the Milky Way. Gamma rays of different energies are represented by dots of various colors – red dots represent arrival locations of very energetic gamma rays, green dots represent lower energy, and blue dots represent lowest energy. 

Credit: Hiroyuki Tashiro.

The new analysis looks for spiral patterns in the distribution of gamma rays within patches on the sky, with the highest energy gamma ray at the center of the spiral and the lower energy gamma rays further along the spiral. A helical magnetic field in the universe gives an excess of spirals of one handedness - and FGST data shows an excess of left-handed spirals.
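The spiral-pattern search described above can be sketched with a parity-odd statistic: a triple product of photon arrival directions changes sign with the winding sense of the spiral and averages to zero on an isotropic sky. The construction below uses entirely synthetic photons and is only an illustration of the idea, not the estimator actually applied to the FGST data.

```python
import numpy as np

def handedness(center, mid, low):
    """Mean signed triple product center . (mid x low) over photon pairs.

    One sign corresponds to one winding sense (e.g. left-handed spirals),
    the opposite sign to the other; an isotropic sky averages to zero.
    """
    q = 0.0
    for m in mid:
        for l in low:
            q += np.dot(center, np.cross(m, l))
    return q / (len(mid) * len(low))

rng = np.random.default_rng(1)

# The highest-energy gamma ray defines the patch center (unit vector on sky).
center = np.array([0.0, 0.0, 1.0])

def offset(theta, r=0.1):
    """Unit vector a small angle r from the patch center, at position angle theta."""
    v = np.array([r * np.cos(theta), r * np.sin(theta), 1.0])
    return v / np.linalg.norm(v)

# Lower-energy photons placed further along a spiral whose position angle
# increases as the energy drops -- a fixed winding sense.
mid = [offset(t) for t in rng.uniform(0.0, 1.0, 50)]
low = [offset(t + 1.0) for t in rng.uniform(0.0, 1.0, 50)]
q_spiral = handedness(center, mid, low)

# Control patch: the same photons scattered at random position angles.
mid_iso = [offset(t) for t in rng.uniform(0.0, 2.0 * np.pi, 50)]
low_iso = [offset(t) for t in rng.uniform(0.0, 2.0 * np.pi, 50)]
q_iso = handedness(center, mid_iso, low_iso)

print(f"spiral patch statistic:    {q_spiral:+.4f}")
print(f"isotropic patch statistic: {q_iso:+.4f}")
```

The spiral patch yields a statistic of one definite sign, while the isotropic control hovers near zero; an excess of one sign averaged over many patches is what signals a preferred handedness.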

Prof. Vachaspati commented: "Both the planet we live on and the star we orbit are made up of 'normal' matter. Although it features in many science fiction stories, antimatter seems to be incredibly rare in nature. With this new result, we have one of the first hints that we might be able to solve this mystery."

This discovery has wide ramifications, as a cosmological magnetic field could play an important role in the formation of the first stars and could seed the stronger field seen in galaxies and clusters of galaxies in the present day.



Contacts and sources:
Dr Robert Massey
Royal Astronomical Society
 
Prof Tanmay Vachaspati
Director, Cosmology Initiative
Arizona State University

Citation: The new work appears in W. Chen et al., "Intergalactic magnetic field spectra from diffuse gamma rays", Monthly Notices of the Royal Astronomical Society, vol. 450, pp. 3371-3380, 2015, published by Oxford University Press.

Details of the earlier theoretical models appear in T. Vachaspati, "Estimate of the Primordial Magnetic Field Helicity", Physical Review Letters, vol. 87, p. 251302, 2001.

 

Monitoring Magnetospheres of Massive Stars

Queen's University PhD student Matt Shultz is researching magnetic, massive stars, and his work has raised new questions about the behaviour of plasma within their magnetospheres.

A huge, billowing pair of gas and dust clouds are captured in this stunning NASA Hubble Space Telescope image of the supermassive star Eta Carinae
Credit: Nathan Smith (University of California, Berkeley), and NASA

Drawing upon the extensive dataset assembled by the international Magnetism in Massive Stars (MiMeS) collaboration, led by Mr. Shultz's supervisor, Queen's professor Gregg Wade, along with some of his own observations collected with both the Canada-France-Hawaii Telescope and the European Southern Observatory's Very Large Telescope, Mr. Shultz is conducting the first systematic population study of magnetosphere-host stars.

"All massive stars have winds: supersonic outflows of plasma driven by the stars' intense radiation. When you put this plasma inside a magnetic field you get a stellar magnetosphere," explains Mr. Shultz (Physics, Engineering Physics and Astronomy). "Since the 1980s, theoretical models have generally found that the plasma should escape the magnetosphere in sporadic, violent eruptions called centrifugal breakout events, triggered when the density of plasma grows beyond the ability of the magnetic field to contain it.

"However, no evidence of this dramatic process has yet been observed, so the community has increasingly been calling that narrative into question."

Before now, obvious disagreements with theory had been noted primarily for a single, particularly well-studied star. Studying the full population of magnetic, massive stars with detectable magnetospheres, Mr. Shultz has determined that the plasma density within all such magnetospheres is far lower than the limiting value implied by the centrifugal breakout model. This suggests that plasma might be escaping gradually, maintaining magnetospheres in an essentially steady state.
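The breakout ceiling that these measured densities fall below can be illustrated with a rough order-of-magnitude estimate: confinement fails roughly where the centrifugal energy density of the co-rotating plasma reaches the magnetic energy density B²/8π. The field strength, radius, and rotation period below are invented fiducial values, not parameters from this study, and the balance condition is a dimensional sketch rather than the detailed breakout model.

```python
import math

# Order-of-magnitude sketch (cgs units): take breakout to occur roughly
# where the centrifugal energy density of co-rotating plasma,
# ~ (1/2) * rho * (omega * r)^2, reaches the magnetic energy density
# B^2 / (8 pi). Solving for rho gives a crude ceiling on the confined
# plasma density. All fiducial numbers below are invented for illustration.

R_SUN = 6.96e10          # cm
B_local = 100.0          # G, field strength at the confinement radius (assumed)
r = 5.0 * R_SUN          # cm, radius of the centrifugal magnetosphere (assumed)
period = 1.0 * 86400.0   # s, rotation period of one day (assumed)

omega = 2.0 * math.pi / period          # angular velocity, rad/s
u_B = B_local**2 / (8.0 * math.pi)      # magnetic energy density, erg/cm^3
rho_max = 2.0 * u_B / (omega * r)**2    # crude breakout density ceiling, g/cm^3

print(f"co-rotation speed at r:   {omega * r / 1e5:.0f} km/s")
print(f"breakout density ceiling: {rho_max:.2e} g/cm^3")
```

Finding magnetospheric densities far below any such ceiling is what motivates the search for gradual, steady-state leakage mechanisms instead of violent breakout events.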

"We don't know yet what is going on," says Mr. Shultz. "But, when centrifugal breakout was first identified as the most likely process for mass escape, only the simplest diffusive mechanisms were ruled out. Our understanding of space plasmas has developed quite a bit since then. We now need to go back and look more closely at the full range of diffusive mechanisms and plasma instabilities. There are plenty to choose from: the real challenge is developing the theoretical tools that will be necessary to test them."


Contacts and sources:
Anne Craig
Queen's University