Unseen Is Free


Wednesday, June 28, 2017

Early, Permanent Human Settlement in Andes Documented

Using five different scientific approaches, a team including University of Wyoming researchers has given considerable support to the idea that humans lived year-round in the Andean highlands of South America over 7,000 years ago.

Examining human remains and other archaeological evidence from a site nearly 12,500 feet above sea level in Peru, the scientists show that intrepid hunter-gatherers -- men, women and children -- managed to survive at high elevation before the advent of agriculture, despite the lack of oxygen, frigid temperatures and exposure to the elements.

Intrepid hunter-gatherer families permanently occupied high-elevation environments of the Andes Mountains at least 7,000 years ago, according to new research led by University of Wyoming scientists.

Credit: Lauren A. Hayes

"This gives us a very strong baseline to help understand the rates of cultural and genetic change in the Andean highlands, a region known for the domestication of alpaca, potatoes and other plants; emergence of state-level political and economic complexity; and rapid human adaptation to high-elevation life," says Randy Haas, a postdoctoral research associate in the University of Wyoming's Department of Anthropology and the team's leader.

The research appears in the July issue of Royal Society Open Science, a peer-reviewed, open-access scientific journal. Along with Haas, the second author is Ioana Stefanescu, a graduate student in UW's Department of Geology and Geophysics. Also contributing to the paper were Alexander Garcia-Putnam, a doctoral student in the UW Department of Anthropology; Mark Clementz, associate professor in the Department of Geology and Geophysics; Melissa Murphy, associate professor in the Department of Anthropology; and researchers from the University of California-Davis, the University of California-Merced, the University of Arizona and Peruvian institutions.

Excavations led by Haas at the site in southern Peru produced the remains of 16 people, along with more than 80,000 artifacts, dating to as early as 8,000 years ago. Evidence from that site, as well as others, has led some researchers to estimate that hunter-gatherers began living in the Andes around 9,000 years ago, but debate has continued over whether that human presence was permanent or seasonal.

The research team led by Haas took five different approaches to test whether there was early permanent use of the region: analyzing oxygen isotopes and carbon isotopes in the human bones; assessing the travel distances from the site to low-elevation zones; examining the demographic mixture of the human remains; and studying the types of tools and other materials found with them.

The scientists found low oxygen and high carbon isotope values in the bones, revealing the distinct signature of permanent high-elevation occupation; that travel distances to low-elevation zones were too long for seasonal human migration; that the presence of women and small children meant such migration was highly unlikely; and that almost all of the tools used by the hunter-gatherers were made with high-elevation stone material, not brought from elsewhere.

"These results constitute the strongest evidence to date that people were living year-round in the Andean highlands at least 7,000 years ago," Haas says. "Such high-elevation environments were among the last frontiers of human colonization, and this knowledge holds implications for understanding rates of genetic, physiological and cultural adaption in the human species.

Contacts and sources:
Randy Haas
University of Wyoming 

Citation: Randall Haas, Ioana C. Stefanescu, Alexander Garcia-Putnam, Mark S. Aldenderfer, Mark T. Clementz, Melissa S. Murphy, Carlos Viviano Llave and James T. Watson, "Humans permanently occupied the Andean highlands by at least 7 ka," Royal Society Open Science, published 28 June 2017. DOI: 10.1098/rsos.170331 http://dx.doi.org/10.1098/rsos.170331

Tuesday, June 27, 2017

Bizarre Bee-Zed Asteroid Orbits the Sun in the Opposite Direction as Planets

In our solar system, one asteroid orbits the Sun in the opposite direction to the planets. Asteroid 2015 BZ509, also known as Bee-Zed, takes 12 years to make one complete orbit around the Sun. This is the same orbital period as Jupiter's; the asteroid shares the planet's orbit but moves in the opposite direction to the planet's motion.
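The period match is exactly what Kepler's third law requires: any body sharing Jupiter's semi-major axis must circle the Sun in roughly 12 years, whichever way it travels. A quick sanity check (the 5.2 AU figure is the standard value for Jupiter's orbit, not taken from the article):

```python
# Kepler's third law in solar units: T^2 = a^3, with T in years and a in AU.
a = 5.2            # semi-major axis of Jupiter's orbit in AU (standard value)
period = a ** 1.5  # orbital period in years

print(f"Orbital period at {a} AU: {period:.1f} years")  # ~11.9 years
```

The direction of travel does not enter the law at all, which is why a retrograde co-orbital can keep perfect time with the planet.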

The asteroid with the retrograde co-orbit was identified by Helena Morais, a professor at São Paulo State University's Institute of Geosciences & Exact Sciences (IGCE-UNESP). Morais had predicted such a discovery two years earlier, and the article describing observations of the asteroid, published in Nature, is accompanied by a commentary from Morais in the News & Views section of the same issue of the journal.

Co-orbital bodies that orbit the Sun in the same direction as a planet can follow trajectories (blue curves with arrows) that, from the perspective of the planet, look like tadpoles, horseshoes or 'quasi-satellites'.

Credit: Helena Morais & Fathi Namouni

"It's good to have confirmation," Morais said. "I was sure retrograde co-orbitals existed. We've known about this asteroid since 2015, but its orbit hadn't been clearly determined, and it wasn't possible to confirm the co-orbital configuration. Now it's been confirmed after further observations that reduced the uncertainties in the orbital parameters. So, we're sure the asteroid is retrograde, co-orbital and stable."

In partnership with Fathi Namouni at the Côte d'Azur Observatory in France, Morais developed a general theory on retrograde co-orbitals and retrograde orbital resonance.

The paper by Paul Wiegert of the University of Western Ontario, Canada, published in March in Nature, describes how object 2015 BZ509, detected in January 2015 using the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) in Hawaii, was tracked using the Large Binocular Telescope in Arizona. These additional observations confirmed that its orbit is retrograde and co-orbital with Jupiter.

Retrograde orbits are rare. It is estimated that only 82 of the more than 726,000 known asteroids are orbiting the "wrong way". By contrast, prograde co-orbitals that move 'with traffic' are nothing new; Jupiter alone is accompanied by some 6,000 Trojan asteroids that share the giant planet's orbit.

Bee-Zed is unusual because it shares a planet's orbit, because its own orbit is retrograde, and above all, because it has been stable for millions of years. "Instead of being ejected from orbit by Jupiter, as one would expect, the asteroid is in a configuration that assures stability thanks to co-orbital resonance, meaning its motion is synchronized with the planet's, avoiding collisions," Morais said.

The asteroid crosses Jupiter's path every six years, but owing to their co-orbital resonance, the two never come closer than 176 million km -- far enough to avoid major disturbances to the asteroid's orbit, although Jupiter's gravity is essential to keeping Bee-Zed locked in the 1:1 retrograde resonance.

All the planets and most of the asteroids in the solar system orbit the Sun in the same direction because the solar system emerged from a revolving cloud of dust and gas, most of the constituent objects of which continue to revolve as they did before.

"The vast majority of retrograde objects are comets. Their orbits are typically inclined as well as retrograde. The most famous, of course, is Halley's comet, which has a retrograde orbit with an inclination of 162°, practically identical to that of 2015 BZ509," Morais said.

In the final stages of planetary formation, she explained, small bodies were expelled far from the Sun and planets, forming the spherical shell of debris and comets known as the Oort cloud.

"At these distances, the Milky Way's gravitational effects disturb small bodies. To begin with, they orbited close to the plane of the ecliptic in the same direction as the planets, but their orbits were deformed by the galaxy's tidal force and by interactions with nearby stars, gradually becoming more inclined and forming a more or less spherical reservoir," Morais said.

If the orbits of these bodies are disturbed - by a passing star, for example - they return to paths close to the planets of the solar system and can become active comets. "The icy small bodies warm up as they approach the Sun, and the ice sublimes to form a coma [a dense cloud of gas and dust particles around a nucleus] and often a tail, making the comets observable," she explained.

In the case of 2015 BZ509, the most surprising feature is its long period of stability. In their commentary in Nature, Morais and Namouni say the particularly long life of 2015 BZ509 in its retrograde orbit makes it the most intriguing object in the vicinity of Jupiter. "Further studies are needed to confirm how this mysterious object arrived at its present configuration," they conclude.

Wiegert speculates that Bee-Zed probably originated in the Oort cloud, like the Halley family comets. In any event, more research will be necessary to reconstruct Bee-Zed's epic voyage through the solar system.

"Actually, 2006 BZ8 might even enter into co-orbital retrograde resonance with Saturn in the future. Our simulations showed that resonance capture is more likely for objects with retrograde orbits than for those orbiting in the same direction as the planets," Morais said.

Bee-Zed is expected to stay in the same state for another million years. Its discovery has led researchers to suspect that asteroids in retrograde co-orbits with Jupiter and other planets may be more common than was previously thought, making the theory expounded by Morais and Namouni even more compelling.

Contacts and sources:
Samuel Antenor
Fundação De Amparo À Pesquisa Do Estado De São Paulo

Colliding Galaxies Make Cosmic Goulash

What would happen if you took two galaxies and mixed them together over millions of years? A new image including data from NASA's Chandra X-ray Observatory reveals the cosmic culinary outcome.

Arp 299 is a system located about 140 million light years from Earth. It contains two galaxies that are merging, creating a partially blended mix of stars from each galaxy in the process.

However, this stellar mix is not the only ingredient. New data from Chandra reveals 25 bright X-ray sources sprinkled throughout the Arp 299 concoction. Fourteen of these sources are such strong emitters of X-rays that astronomers categorize them as "ultra-luminous X-ray sources," or ULXs.

This new composite image of Arp 299 contains X-ray data from Chandra (pink), higher-energy X-ray data from NuSTAR (purple), and optical data from the Hubble Space Telescope (white and faint brown). Arp 299 also emits copious amounts of infrared light that has been detected by observatories such as NASA's Spitzer Space Telescope, but those data are not included in this composite.
Arp 299
Image credit: X-ray: NASA/CXC/Univ. of Crete/K. Anastasopoulou et al, NASA/NuSTAR/GSFC/A. Ptak et al; Optical: NASA/STScI

These ULXs are found embedded in regions where stars are currently forming at a rapid rate. Most likely, the ULXs are binary systems where a neutron star or black hole is pulling matter away from a companion star that is much more massive than the Sun. These double star systems are called high-mass X-ray binaries.

Such a loaded buffet of high-mass X-ray binaries is rare, but Arp 299 is one of the most powerful star-forming galaxies in the nearby Universe. This is due at least in part to the merger of the two galaxies, which has triggered waves of star formation. The formation of high-mass X-ray binaries is a natural consequence of such blossoming star birth as some of the young massive stars, which often form in pairs, evolve into these systems.

The infrared and X-ray emission of the galaxy is remarkably similar to that of galaxies found in the very distant Universe, offering an opportunity to study a relatively nearby analog of these distant objects. A higher rate of galaxy collisions occurred when the universe was young, but these objects are difficult to study directly because they are located at colossal distances.

X-ray Image of Arp 299

Credit: NASA

The Chandra data also reveal diffuse X-ray emission from hot gas distributed throughout Arp 299. Scientists think the high rate of supernovas, another common trait of star-forming galaxies, has expelled much of this hot gas out of the center of the system.

A paper describing these results appeared in the August 21st, 2016 issue of the Monthly Notices of the Royal Astronomical Society and is available online. The lead author of the paper is Konstantina Anastasopoulou from the University of Crete in Greece. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.

Contacts and sources:
NASA/Chandra X-Ray Observatory 

Moisture-Driven ‘Robots’ Crawl with No External Power Source

Using an off-the-shelf camera flash, researchers turned an ordinary sheet of graphene oxide into a material that bends when exposed to moisture. They then used this material to make a spider-like crawler and claw robot that move in response to changing humidity without the need for any external power.

“The development of smart materials such as moisture-responsive graphene oxide is of great importance to automation and robotics,” said Yong-Lai Zhang of Jilin University, China, and leader of the research team. “Our very simple method for making typical graphene oxides smart is also extremely efficient. A sheet can be prepared within one second.”

The researchers used flash-treated graphene oxide to create a crawler that moved when humidity was increased. Switching the humidity off and on several times induced the crawler to move 3.5 millimeters in 12 seconds, with no external energy supply.
Credit: Yong-Lai Zhang of Jilin University

In the journal Optical Materials Express, from The Optical Society (OSA), the researchers reported that graphene oxide sheets treated with brief exposure to bright light in the form of a camera flash exhibited reversible bending at angles from zero to 85 degrees in response to switching the relative humidity between 33 and 86 percent. They also demonstrated that their method is repeatable and the simple robots they created have good stability.

Although other materials can change shape in response to moisture, the researchers experimented with graphene-based materials because they are incredibly thin and have unique properties such as flexibility, conductivity, mechanical strength and biocompatibility. These properties make graphene ideal for broad applications in various fields. For example, the material’s excellent biocompatibility could allow moisture-responsive graphene oxide to be used in organ-on-a-chip systems that simulate the mechanics and physiological response of entire organs and are used for drug discovery and other biomedical research.

Making a moisture-responsive material
Other groups have shown that graphene oxide can be made moisture responsive through a chemical reaction called reduction, which removes oxygen from molecules. In fact, the researchers previously demonstrated that both sunlight and UV light can induce the effect. However, these approaches were hard to precisely control and not very efficient.


The research team experimented with using a camera flash, which typically covers a broad spectral range, as a simple and effective way to create moisture-responsive graphene. A camera flash allowed the researchers to remove oxygen from, or reduce, just one side of a sheet of graphene oxide. When moisture is present, the reduced side of the graphene oxide absorbs fewer water molecules, causing the non-reduced side to expand and the sheet to bend toward the reduced side. If the material is then exposed to dry air, it flattens out.

The researchers found that keeping the flash about 20 to 30 centimeters away from the graphene oxide sheet was enough to selectively modify the top layer of the sheet without penetrating all the way through to the other side. The sheet also needs to be more than 5 microns thick to prevent it from being completely reduced by the flash exposure.

Graphene robots
To make a moisture-driven crawler, the researchers cut flash-treated graphene oxide into an insect shape with four legs. The free-standing crawler was about 1 centimeter wide and moved forward when humidity was increased. Switching the humidity off and on several times induced the crawler to move 3.5 millimeters in 12 seconds, with no external energy supply.
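For scale, the quoted figures imply a very modest average speed; this is a simple division of the numbers above, assuming the 3.5 millimeters were covered over the full 12 seconds:

```python
distance_mm = 3.5  # distance crawled, from the article (millimeters)
time_s = 12        # elapsed time, from the article (seconds)

speed_mm_per_s = distance_mm / time_s
print(f"Average crawl speed: {speed_mm_per_s:.2f} mm/s")  # ~0.29 mm/s
```

Slow by robot standards, but notable given that the only "power source" is ambient humidity cycling.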

The researchers also made a claw shape by sticking together eight 5-by-1 millimeter ribbons of flash-treated graphene oxide in a star shape. When moisture was present, the claw closed within 12 seconds. It returned back to an open position after 56 seconds of exposure to dry air.

“These robots are simple and can be flexibly manipulated by changing the environmental humidity,” said Zhang. “These designs are very important because moving and capturing/releasing are basic functions of automated systems.”

Zhang added that integrating moisture-responsive graphene into a microchannel system connected to a humidity controller could allow even more precise control and enable other types of robots or simple machines. The researchers are now working on ways to improve the control of the material’s bending and are experimenting with ways to gain more complex performance from robots made of moisture-responsive graphene oxide.

Contacts and sources:
The Optical Society
Optical Materials Express (OMEx)

Paper: Y.-Q. Liu, J.-N. Ma, Y. Liu, D.-D. Han, H.-B. Jiang, J.-W. Mao, C.-H. Han, Z.-Z. Jiao and Y.-L. Zhang, “Facile fabrication of moisture responsive graphene actuators by moderate flash reduction of graphene oxides films,” Opt. Mater. Express 7(7), 2617-2625 (2017). DOI: 10.1364/OME.7.002617

Supermassive Black Holes in Orbital Dance: Groundbreaking Discovery

For the first time ever, astronomers at The University of New Mexico say they've been able to observe and measure the orbital motion between two supermassive black holes hundreds of millions of light years from Earth - a discovery more than a decade in the making.

UNM Department of Physics & Astronomy graduate student Karishma Bansal is the first author on the paper, ‘Constraining the Orbit of the Supermassive Black Hole Binary 0402+379’, recently published in The Astrophysical Journal. She, along with UNM Professor Greg Taylor and colleagues at Stanford, the U.S. Naval Observatory and the Gemini Observatory, has been studying the interaction between these black holes for 12 years.

"For a long time, we've been looking into space to try and find a pair of these supermassive black holes orbiting as a result of two galaxies merging," said Taylor. "Even though we've theorized that this should be happening, nobody had ever seen it until now."

Artist's conception shows two supermassive black holes, similar to those observed by UNM researchers, orbiting one another more than 750 million light years from Earth.
Credit: Josh Valenzuela/UNM

In early 2016, an international team of researchers, including a UNM alumnus, working on the LIGO project detected the existence of gravitational waves, confirming Albert Einstein's 100-year-old prediction and astonishing the scientific community. Those gravitational waves were produced by the collision of two stellar-mass black holes, each roughly 30 times the mass of the Sun.

Now, thanks to this latest research, scientists will be able to start to understand what leads up to the merger of supermassive black holes that creates ripples in the fabric of space-time and begin to learn more about the evolution of galaxies and the role these black holes play in it.

Using the Very Long Baseline Array (VLBA), a system made up of 10 radio telescopes across the U.S. and operated in Socorro, N.M., researchers have been able to observe several frequencies of radio signals emitted by these supermassive black holes (SMBH). Over time, astronomers have essentially been able to plot their trajectory and confirm them as a visual binary system. In other words, they've observed these black holes in orbit with one another.

This is a false-color VLBA map of the radio galaxy 0402+379 at 15 GHz. It hosts two supermassive black holes at its center, represented here by accretion discs with twin jets.
Credit: UNM

"When Dr. Taylor gave me this data I was at the very beginning of learning how to image and understand it," said Bansal. "And, as I learned there was data going back to 2003, we plotted it and determined they are orbiting one another. It's very exciting."

For Taylor, the discovery is the result of more than 20 years of work and an incredible feat given the precision required to pull off these measurements. At roughly 750 million light years from Earth, the galaxy named 0402+379 and the supermassive black holes within it are incredibly far away, but they are also at the perfect distance from Earth, and from each other, to be observed.

Bansal says these supermassive black holes have a combined mass of 15 billion times that of our Sun, or 15 billion solar masses. The enormous size of these black holes means their orbital period is around 24,000 years, so while the team has been observing them for over a decade, they've yet to see even the slightest curvature in their orbit.
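Kepler's third law gives a rough feel for the scale involved. Treating the pair as a simple two-body system with the combined mass and period quoted above -- a back-of-the-envelope simplification, not the paper's full orbit fit -- the implied separation comes out at roughly ten parsecs, i.e. a few tens of light years:

```python
# Kepler's third law in convenient units: a^3 = M * T^2,
# with M in solar masses, T in years, and a in AU.
M = 15e9     # combined mass in solar masses (from the article)
T = 24_000   # orbital period in years (from the article)

a_au = (M * T ** 2) ** (1 / 3)  # separation in AU
a_pc = a_au / 206_265           # 1 parsec = 206,265 AU

print(f"Implied separation: ~{a_pc:.0f} parsecs")  # ~10 pc
```

A parsec-scale separation at 750 million light years is why resolving this orbit demanded the VLBA's extreme angular resolution.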

"If you imagine a snail on the recently-discovered Earth-like planet orbiting Proxima Centauri - 4.243 light years away - moving at 1 cm a second, that's the angular motion we're resolving here," said Roger W. Romani, professor of physics at Stanford University and member of the research team.

"What we've been able to do is a true technical achievement over this 12-year period using the VLBA to achieve sufficient resolution and precision in the astrometry to actually see the orbit happening," said Taylor. "It's a bit of triumph in technology to have been able to do this."

VLBA map of the radio galaxy 0402+379 at 15 GHz. It hosts two supermassive black holes at its center, denoted C1 and C2.
Credit: UNM

While the technical accomplishment of this discovery is truly amazing, Bansal and Taylor say the research could also teach us a lot about the universe, where galaxies come from and where they're going.

"The orbits of binary stars provided tremendous insights about stars," said Bob Zavala, an astronomer with the U.S. Naval Observatory. "Now we'll be able to use similar techniques to understand super-massive black holes and the galaxies they reside within."

Continuing to observe the orbit and interaction of these two supermassive black holes could also help us gain a better understanding of what the future of our own galaxy might look like. Right now, the Andromeda galaxy, which also has an SMBH at its center, is on a path to collide with our Milky Way, meaning an event like the one Bansal and Taylor are currently observing might occur in our own galaxy in a few billion years.

"Supermassive black holes have a lot of influence on the stars around them and the growth and evolution of the galaxy," explained Taylor. "So, understanding more about them and what happens when they merge with one another could be important for our understanding for the universe."

Bansal says the research team will take another observation of this system in three or four years to confirm the motion and obtain a precise orbit. In the meantime, the team hopes that this discovery will encourage related work from astronomers around the world.

Contacts and sources:
Aaron Hilf
The University of New Mexico

Monday, June 26, 2017

Chimpanzee 'Super Strength' and Human Muscle Evolution

Since at least the 1920s, anecdotes and some studies have suggested that chimpanzees are “super strong” compared to humans, implying that their muscle fibers, the cells that make up muscles, are superior to those of humans.

But now a research team reports that, contrary to this belief, the maximum dynamic force and power output of chimp muscle is only about 1.35 times that of human muscle of similar size -- a difference the team calls “modest” compared with popular, historical accounts of chimp “super strength” that portray chimps as many times stronger than humans.

Credit: Ikiwaner / Wikimedia Commons

Further, says biomechanist Brian Umberger, an expert in musculoskeletal biomechanics in kinesiology at the University of Massachusetts Amherst, the researchers found that this modest performance advantage for chimps was not due to stronger muscle fibers, but rather the different mix of muscle fibers found in chimpanzees compared to humans.

As the authors explain, the long-standing but untested assumption of chimpanzees’ exceptional strength, if true, “would indicate a significant and previously unappreciated evolutionary shift in the force and/or power-producing capabilities of skeletal muscle” in either chimps or humans, whose lines diverged some 7 or 8 million years ago.

Umberger was part of the team led by Matthew O’Neill at the University of Arizona College of Medicine, Phoenix, along with others at Harvard and Ohio State University. Details of this work, supported in part by a National Science Foundation grant to Umberger, appear in the current early online edition of Proceedings of the National Academy of Sciences.

The researchers began by critically examining the scientific literature, where studies reported a wide range of estimates for how much chimpanzees outstrip humans in strength and power, averaging about 1.5 times overall. But Umberger says reaching this value from such disparate reports “required a lot of analysis on our part, accounting for differences between subjects, procedures and so on.” He and colleagues say 1.5 times is considerably less than anecdotal reports of chimps being several-fold stronger, but it is still a meaningful difference, and explaining it could advance understanding of early human musculoskeletal evolution.

Umberger adds, “There are nearly 100 years of accounts suggesting that chimpanzees must have intrinsically superior muscle fiber properties compared with humans, yet there had been no direct tests of that idea. Such a difference would be surprising, given what we know about how similar muscle fiber properties are across species of similar body size, such as humans and chimps.”

He explains that muscle fibers come in two general types: fast-twitch fibers, which are fast and powerful but fatigue quickly, and slow-twitch fibers, which are slower and less powerful but have good endurance. “We found that within fiber types, chimp and human muscle fibers were actually very similar. However, we also found that chimps have about twice as many fast-twitch fibers as humans,” he notes.

For this work, the team used an approach combining isolated muscle fiber preparations, experiments and computer simulations. They directly measured the maximum isometric force and maximum shortening velocity of skeletal muscle fibers of the common chimpanzee. In general, they found that chimp limb and trunk skeletal muscle fibers are similar to humans and other mammals and “generally consistent with expectations based on body size and scaling.”

Umberger, whose primary scientific contribution was in interpreting how muscle properties will affect whole-animal performance, developed computer simulation models that allowed the researchers to integrate the various data on individual muscle properties and assess their combined effects on performance.

O’Neill, Umberger and colleagues also measured the distribution of muscle fiber types and found it to be quite different in humans and chimps; chimps also have longer muscle fibers than humans. They combined the individual measurements in the computer simulation model of muscle function to better understand the combined effects of the experimental observations on whole-muscle performance. When all factors were integrated, chimp muscle produces about 1.35 times more dynamic force and power than human muscle.

Umberger says the advantage for chimps in dynamic strength and power comes from the global characteristics of whole muscles, rather than the intrinsic properties of the cells those muscles are made of. “The flip side is that humans, with a high percentage of slow-twitch fibers, are adapted for endurance, such as long-distance travel, at the expense of dynamic strength and power. When we compared chimps and humans to muscle fiber type data for other species we found that humans are the outlier, suggesting that selection for long distance, over-ground travel may have been important early in the evolution of our musculoskeletal system.”

The authors conclude, “Contrary to some long-standing hypotheses, evolution has not altered the basic force, velocity or power-producing capabilities of skeletal muscle cells to induce the marked differences between chimpanzees and humans in walking, running, climbing and throwing capabilities. This is a significant, but previously untested assumption. Instead, natural selection appears to have altered more global characteristics of muscle tissue, such as muscle fiber type distributions and muscle fiber lengths.”

This work is part of a long-running collaboration among Umberger, O’Neill and Susan Larson at Stony Brook University School of Medicine on the general topics of musculoskeletal design, locomotion and human evolution.

Contacts and sources:
Janet Lathrop
University of Massachusetts Amherst

Arsenic Compounds in Rice More Prevalent than Previously Known, Risk for Humans Unknown

Rice is a staple food in many regions of the world; however, it sometimes contains levels of arsenic that are hazardous to our health. An interdisciplinary team of researchers at the University of Bayreuth has now discovered that there are arsenic compounds which have a toxic effect on plants and yet had not previously been considered in chemical analyses of rice or in estimates of the health risks for humans.

The research concerns thioarsenates, compounds made up of arsenic and sulphur, which may be present in rice fields more often than previously assumed. The scientists have published their findings in the journal Environmental Science and Technology.

Bayreuth doctoral researchers Carolin Kerl, M.Sc. (left), and Colleen Rafferty, M.Sc. (right), are investigating the absorption of thioarsenates in thale cress (Arabidopsis thaliana).
Photo: Christian Wissler.

Increased concentrations in rice fields?

Thioarsenates can be found in surface water, groundwater, and bottom water with high levels of sulphide. Sulphide is the reduced form of sulphate; it reacts spontaneously with arsenic and can form thioarsenates. Rice fields provide favourable conditions for these processes.

“Rice is usually grown on flooded fields. The resulting lack of oxygen in the ground can reduce sulphate to sulphide. We were able to demonstrate for the first time that a considerable amount of the arsenic in rice fields – namely 20–30% – is bound up in the form of thioarsenates,” explained Prof. Dr. Britta Planer-Friedrich, Professor of Environmental Geochemistry at the University of Bayreuth.

 “Further research to shed more light on the spread of thioarsenates is now even more urgent since we were able to show for the first time that thioarsenates can be absorbed by plants and are harmful to them.”

Harmfulness for biological model organisms

The experiments in Bayreuth – which also involved several doctoral researchers – concentrated on thale cress (Arabidopsis thaliana), a common plant in the fields of Europe and Asia that has proven to be a useful model organism in biological research. Together with plant physiologist Prof. Dr. Stephan Clemens, various mutants of thale cress were tested in the laboratory to see how they reacted to thioarsenates added to their nutrient solution. The results were clear: the plants absorb the arsenic-sulphur compounds and their growth is visibly limited. The more arsenic reaches the plant this way, the more its roots shrivel.

Toxic for humans too?

“In the wake of these unsettling findings, we plan to investigate the effects of thioarsenates on different types of rice over the next several months. At present, we do not yet sufficiently understand whether, and to what extent, rice plants absorb sulphur-bound arsenic, or how far it adversely affects their metabolic processes. Above all, it is unclear whether thioarsenates also make their way into the rice grains,” explained Prof. Clemens.

He added, “At the University of Bayreuth, we have all the research technology necessary to see these experiments through. If it turns out that thioarsenates are absorbed by the roots of the rice plants and make their way to the rice grains unaltered, then further research will be needed. In particular, we would need to clarify whether thioarsenates are toxic for humans who consume food containing rice over an extended period. What’s more: in addition to the previously known forms of arsenic, thioarsenates must be considered in the future when developing rice plants that accumulate less arsenic in their grains. This is an objective currently being pursued by numerous research groups around the world.”

“Not only the EU, which has had a limit for arsenic in rice since 2016, but above all countries in Asia and Africa – where yearly rice consumption can be well above 100 kilograms per person – should be following rice research closely with an eye to amending their food safety regulations. Traces of arsenic are also found in drinking water and other types of food. These trace amounts can add up to a daily dose representing a health risk that is not to be underestimated,” Prof. Planer-Friedrich said.
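The arithmetic behind that concern is straightforward. A minimal sketch, assuming the EU limit of 0.2 mg of inorganic arsenic per kilogram of white rice (the specific limit value is our assumption for illustration, not a figure from the article) together with the yearly consumption of 100 kilograms per person mentioned above:

```python
# Back-of-the-envelope arsenic intake from rice alone (illustrative figures).
# Assumptions: an EU limit of 0.2 mg inorganic arsenic per kg of white rice,
# and the article's ~100 kg of rice per person per year.
EU_LIMIT_MG_PER_KG = 0.2      # assumed regulatory limit, white rice
ANNUAL_RICE_KG = 100.0        # high-consumption countries, per the article

daily_rice_kg = ANNUAL_RICE_KG / 365.0
daily_arsenic_mg = daily_rice_kg * EU_LIMIT_MG_PER_KG

print(f"Rice per day: {daily_rice_kg * 1000:.0f} g")
print(f"Arsenic per day (rice at the limit): {daily_arsenic_mg * 1000:.0f} µg")
```

Even rice that just meets the limit would then contribute roughly 55 µg of arsenic per day, before drinking water and other foods are counted, which is why the cumulative dose matters.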

A few years ago, Planer-Friedrich discovered that thioarsenates could play a more significant role in the Earth's arsenic balance than previously thought. The starting point was a study at the hot springs in Yellowstone National Park, where it was discovered that more than 80% of the arsenic in the hot springs is bound up in thioarsenates.

 In the following years, it was shown that thioarsenates can occur in soil and groundwater under less extreme conditions. Depending on the sulphide content, they may even account for more than a quarter of total arsenic. These findings have provided impetus for further experiments on the spread of such arsenic compounds – at the University of Bayreuth, such research will focus on the staple food rice.

Contacts and sources:
University of Bayreuth

Citation: Britta Planer-Friedrich, Tanja Kühnlenz, Dipti Halder, Regina Lohmayer, Nathaniel Wilson, Colleen Rafferty, and Stephan Clemens. Thioarsenate Toxicity and Tolerance in the Model System Arabidopsis thaliana. Environmental Science & Technology (2017). DOI: 10.1021/acs.est.6b06028

The Brightest Light Ever Produced on Earth Equal to 1 Billion Suns

Physicists from the University of Nebraska-Lincoln are seeing an everyday phenomenon in a new light.

By focusing laser light to a brightness one billion times greater than the surface of the sun - the brightest light ever produced on Earth - the physicists have observed changes in a vision-enabling interaction between light and matter.

Those changes yielded unique X-ray pulses with the potential to generate extremely high-resolution imagery useful for medical, engineering, scientific and security purposes. The team's findings, detailed June 26 in the journal Nature Photonics, should also help inform future experiments involving high-intensity lasers.

A rendering of how changes in an electron's motion (bottom view) alter the scattering of light (top view), as measured in a new experiment that scattered more than 500 photons of light from a single electron. Previous experiments had managed to scatter no more than a few photons at a time.

Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

Donald Umstadter and colleagues at the university's Extreme Light Laboratory fired their Diocles Laser at helium-suspended electrons to measure how the laser's photons - considered both particles and waves of light - scattered from a single electron after striking it.

Under typical conditions, as when light from a bulb or the sun strikes a surface, that scattering phenomenon makes vision possible. But an electron - the negatively charged particle present in matter-forming atoms - normally scatters just one photon of light at a time. And the average electron rarely enjoys even that privilege, Umstadter said, getting struck only once every four months or so.

Though previous laser-based experiments had scattered a few photons from the same electron, Umstadter's team managed to scatter nearly 1,000 photons at a time. At the ultra-high intensities produced by the laser, both the photons and electron behaved much differently than usual.

"When we have this unimaginably bright light, it turns out that the scattering - this fundamental thing that makes everything visible - fundamentally changes in nature," said Umstadter, the Leland and Dorothy Olson Professor of physics and astronomy.

Using the brightest light ever produced, University of Nebraska-Lincoln physicists obtained this high-resolution X-ray image of a USB drive. The image reveals details not visible with ordinary X-ray imaging.

Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

A photon from standard light will typically scatter at the same angle and energy it featured before striking the electron, regardless of how bright its light might be. Yet Umstadter's team found that, above a certain threshold, the laser's brightness altered the angle, shape and wavelength of that scattered light.

"So it's as if things appear differently as you turn up the brightness of the light, which is not something you normally would experience," Umstadter said. "(An object) normally becomes brighter, but otherwise, it looks just like it did with a lower light level. But here, the light is changing (the object's) appearance. The light's coming off at different angles, with different colors, depending on how bright it is."

That phenomenon stemmed partly from a change in the electron, which abandoned its usual up-and-down motion in favor of a figure-8 flight pattern. As it would under normal conditions, the electron also ejected its own photon, which was jarred loose by the energy of the incoming photons. But the researchers found that the ejected photon absorbed the collective energy of all the scattered photons, granting it the energy and wavelength of an X-ray.
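The energy bookkeeping in that last step can be sketched with back-of-the-envelope arithmetic. Assuming a near-infrared laser wavelength of 800 nm (typical of Ti:sapphire systems, but an assumption here, not a figure from the article) and ignoring any additional relativistic upshift, pooling the energy of roughly 1,000 laser photons into one ejected photon lands it in the X-ray band:

```python
# Rough illustration of the photon-energy bookkeeping described above:
# if ~1,000 laser photons are effectively combined into one ejected photon,
# its energy reaches the X-ray regime. The 800 nm wavelength is an assumed,
# typical near-infrared value, not a number given in the article.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

wavelength_m = 800e-9
photon_eV = H * C / wavelength_m / EV   # energy of one laser photon
xray_eV = 1000 * photon_eV              # collective energy of ~1,000 photons

print(f"Laser photon: {photon_eV:.2f} eV")
print(f"Ejected photon: {xray_eV / 1000:.2f} keV (X-ray regime)")
```

A single 800 nm photon carries about 1.55 eV; a thousand of them add up to roughly 1.5 keV, thousands of times the energy of visible light.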

The unique properties of that X-ray might be applied in multiple ways, Umstadter said. Its extreme but narrow range of energy, combined with its extraordinarily short duration, could help generate three-dimensional images on the nanoscopic scale while reducing the dose necessary to produce them.

Those qualities might qualify it to hunt for tumors or microfractures that elude conventional X-rays, map the molecular landscapes of nanoscopic materials now finding their way into semiconductor technology, or detect increasingly sophisticated threats at security checkpoints. Atomic and molecular physicists could also employ the X-ray as a form of ultrafast camera to capture snapshots of electron motion or chemical reactions.

A scientist at work in the Extreme Light Laboratory at the University of Nebraska-Lincoln, where physicists using the brightest light ever produced were able to change the way photons scatter from electrons.
Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

As physicists themselves, Umstadter and his colleagues also expressed excitement for the scientific implications of their experiment. By establishing a relationship between the laser's brightness and the properties of its scattered light, the team confirmed a recently proposed method for measuring a laser's peak intensity. The study also supported several longstanding hypotheses that technological limitations had kept physicists from directly testing.

"There were many theories, for many years, that had never been tested in the lab, because we never had a bright-enough light source to actually do the experiment," Umstadter said. "There were various predictions for what would happen, and we have confirmed some of those predictions.

"It's all part of what we call electrodynamics. There are textbooks on classical electrodynamics that all physicists learn. So this, in a sense, was really a textbook experiment."

Contacts and sources:
Donald Umstadter
Extreme Light Laboratory|University of Nebraska-Lincoln

Surprised Scientists Find Water Exists as Two Different Liquids

We normally consider liquid water as disordered, with the molecules rearranging on a short time scale around some average structure. Now, however, scientists at Stockholm University have discovered two phases of the liquid with large differences in structure and density. The results are based on experimental studies using X-rays, now published in the Proceedings of the National Academy of Sciences (US).

Most of us know that water is essential for our existence on planet Earth. It is less well-known that water has many strange or anomalous properties and behaves very differently from all other liquids. Some examples are the melting point, the density, the heat capacity, and all-in-all there are more than 70 properties of water that differ from most liquids. These anomalous properties of water are a prerequisite for life as we know it.

Pictured is an artist's impression of the two forms of ultra-viscous liquid water with different density. On the background is depicted the x-ray speckle pattern taken from actual data of high-density amorphous ice, which is produced by pressurizing water at very low temperatures.

Credit: Mattias Karlén

"The new remarkable property is that we find that water can exist as two different liquids at low temperatures where ice crystallization is slow", says Anders Nilsson, professor in Chemical Physics at Stockholm University. The breakthrough in the understanding of water was made possible by combining X-ray studies at Argonne National Laboratory near Chicago, where the two different structures were evidenced, with studies at the large X-ray laboratory DESY in Hamburg, where the dynamics could be investigated, demonstrating that both phases were indeed liquids. Water can thus exist as two different liquids.

"It is very exciting to be able to use X-rays to determine the relative positions between the molecules at different times", says Fivos Perakis, postdoc at Stockholm University with a background in ultrafast optical spectroscopy. "We have in particular been able to follow the transformation of the sample at low temperatures between the two phases and demonstrated that there is diffusion as is typical for liquids".

When we think of ice, it is most often as the ordered, crystalline phase that you get out of the ice box. But the most common form of ice in our planetary system is amorphous, that is, disordered, and there are two forms of amorphous ice, with low and high density. The two forms can interconvert, and there has been speculation that they may be related to low- and high-density forms of liquid water. To investigate this hypothesis experimentally has been a great challenge, which the Stockholm group has now overcome.

"I have studied amorphous ices for a long time with the goal to determine whether they can be considered a glassy state representing a frozen liquid", says Katrin Amann-Winkel, researcher in Chemical Physics at Stockholm University. "It is a dream come true to follow in such detail how a glassy state of water transforms into a viscous liquid which almost immediately transforms to a different, even more viscous, liquid of much lower density".

"The possibility to make new discoveries in water is totally fascinating and a great inspiration for my further studies", says Daniel Mariedahl, PhD student in Chemical Physics at Stockholm University. "It is particularly exciting that the new information has been provided by X-rays, since the pioneer of X-ray radiation, Wilhelm Röntgen, himself speculated that water can exist in two different forms and that the interplay between them could give rise to its strange properties".

"The new results give very strong support to a picture where water at room temperature can't decide in which of the two forms it should be, high or low density, which results in local fluctuations between the two", says Lars G.M. Pettersson, professor in Theoretical Chemical Physics at Stockholm University. "In a nutshell: Water is not a complicated liquid, but two simple liquids with a complicated relationship."

These new results not only create an overall understanding of water at different temperatures and pressures, but also how water is affected by salts and biomolecules important for life. In addition, the increased understanding of water can lead to new insights on how to purify and desalinate water in the future. This will be one of the main challenges to humanity in view of the global climate change.

These studies were led by Stockholm University and involve a collaboration including the KTH Royal Institute of Technology in Stockholm, DESY in Hamburg, University of Innsbruck, Argonne National Laboratory in Chicago and SLAC National Accelerator Laboratory in California. The other participants from Stockholm University involved in the study are Harshad Pathak, Alexander Späh, Filippo Cavalca and Daniel Schlesinger. Experiments were conducted at APS BL 6-ID-D at Argonne National Laboratory and PETRA III BL P10 at DESY.

Additional information:
Contacts and sources:
Professor Anders Nilsson
Stockholm University 

The recently published study by Fivos Perakis and Katrin Amann-Winkel et al. can be found here: https://www.eurekalert.org/pio/view.tipsheet.php?id=237&pubdate=2017-06-21

New Extinction Event Discovered

Over two million years ago, a third of the largest marine animals like sharks, whales, sea birds and sea turtles disappeared. This previously unknown extinction event not only had a considerable impact on the earth’s historical biodiversity but also on the functioning of ecosystems. This has been demonstrated by researchers at the University of Zurich.

Fossils from the Pliocene: shark tooth from Carcharhinus leucas on the left, from Negaprion on the right.
Image: UZH

The disappearance of a large part of the terrestrial megafauna, such as the saber-toothed cat and the mammoth, during the ice age is well known. Now, researchers at the University of Zurich and the Naturkunde Museum in Berlin have shown that a similar extinction event took place earlier, in the oceans.

New extinction event discovered

The international team investigated fossils of marine megafauna from the Pliocene and the Pleistocene epochs (5.3 million to around 9,700 years BC). “We were able to show that around a third of marine megafauna disappeared about three to two million years ago. Therefore, the marine megafaunal communities that humans inherited were already altered and functioning at a diminished diversity”, explains lead author Dr. Catalina Pimiento, who conducted the study at the Paleontological Institute and Museum of the University of Zurich.

Above all, the newly discovered extinction event affected marine mammals, which lost 55 per cent of their diversity. As many as 43 per cent of sea turtle species were lost, along with 35 per cent of sea birds and 9 per cent of sharks. On the other hand, new forms of life developed during the subsequent Pleistocene epoch: around a quarter of animal species, including the polar bear Ursus, the storm petrel Oceanodroma and the penguin Megadyptes, had not existed during the Pliocene. Overall, however, earlier levels of diversity were never reached again.

Effects on functional diversity

In order to determine the consequences of this extinction, the research team concentrated on shallow coastal shelf zones, investigating the effects that the loss of entire functional entities had on coastal ecosystems. Functional entities are groups of animals, not necessarily related, that share similar characteristics in terms of the role they play in ecosystems. The finding: seven functional entities were lost in coastal waters during the Pliocene.

Even though the loss of seven functional entities and one third of the species is relatively modest, it led to an important erosion of functional diversity: 17 per cent of the total diversity of ecological functions in the ecosystem disappeared and 21 per cent changed. Previously common predators vanished, while new competitors emerged and marine animals were forced to adjust. In addition, the researchers found that at the time of the extinction, coastal habitats were significantly reduced by violent sea-level fluctuations.

Large warm-blooded marine animals are more vulnerable

The researchers propose that the sudden loss of the productive coastal habitats, together with oceanographic factors such as altered sea currents, greatly contributed to these extinctions. 

“Our models have demonstrated that warm-blooded animals in particular were more likely to become extinct. For example, species of sea cows and baleen whales, as well as the giant shark Carcharocles megalodon disappeared”, explains Dr. Pimiento. “This study shows that marine megafauna were far more vulnerable to global environmental changes in the recent geological past than had previously been assumed”. The researcher also points to a present-day parallel: Nowadays, large marine species such as whales or seals are also highly vulnerable to human influences.

Contacts and sources:
Catalina Pimiento Hernandez
Museum of Natural History
Leibniz Institute for Evolution and Biodiversity Science
University of Zurich.

Citation: Catalina Pimiento, John N. Griffin, Christopher F. Clements, Daniele Silvestro, Sara Varela, Mark D. Uhen and Carlos Jaramillo. The Pliocene marine megafauna extinction and its impact on functional diversity. June 26, 2017. Nature Ecology & Evolution. DOI: 10.1038/s41559-017-0223-6

What Happened to the Deepwater Horizon Oil Plume?

The Deepwater Horizon oil spill in the Gulf of Mexico in 2010 is one of the most studied spills in history, yet scientists haven’t agreed on the role of microbes in eating up the oil. 

Now a research team at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has identified all of the principal oil-degrading bacteria as well as their mechanisms for chewing up the many different components that make up the released crude oil.

Gary Andersen holds a PhyloChip, which was used for genetic analysis of the oil spill microbes

Credit: Roy Kaltschmidt/Berkeley Lab

The team, led by Berkeley Lab microbial ecologist Gary Andersen, is the first to simulate the conditions that occurred in the aftermath of the spill. Their study, “Simulation of Deepwater Horizon oil plume reveals substrate specialization within a complex community of hydrocarbon-degraders,” was just published in the Proceedings of the National Academy of Sciences.

“This provides the most complete account yet of what was happening in the hydrocarbon plumes in the deep ocean during the event,” said Andersen. Berkeley Lab’s Ping Hu, the lead author of the study, added: “We simulated the conditions of the Gulf of Mexico oil spill in the lab and were able to understand the mechanisms for oil degradation from all of the principal oil-degrading bacteria that were observed in the original oil spill.”

This oil spill was the largest in history, with the release of 4.1 million barrels of crude oil as well as large amounts of natural gas from a mile below the surface of the ocean. After the initial explosion and uncontained release of oil, researchers observed a phenomenon that had not been seen before: More than 40 percent of the oil, combined with an introduced chemical dispersant, was retained in a plume nearly 100 miles long at this great depth.

Yet because of the difficulty in collecting samples from so far below the ocean surface, and because of the large area that was impacted by the spill, a number of gaps in understanding the fate of the oil over time remained.

Discovery of a new bacterium

Andersen and his team returned to the spill location four years later to collect water at depth. With the assistance of co-authors Piero Gardinali of Florida International University and Ron Atlas of the University of Louisville, a suspension of small, insoluble oil droplets was evenly distributed in bottles, along with the more soluble oil fractions and chemical dispersant to mimic the conditions of the oil plume. Over the next 64 days the composition of the microbes and the crude oil were intensively studied.

Two-liter bottles containing Gulf of Mexico seawater on a rotating carousel to keep oil microdroplets in suspension

Courtesy Gary Andersen

The researchers witnessed an initial rapid growth of a microbe that had previously been observed to be the dominant bacterium in the early stages of the oil release, but which had eluded subsequent attempts by others to recreate the conditions of the Gulf of Mexico oil plume.

Through DNA sequencing of its genome they were able to identify its mechanism for degrading oil. They gave this newly discovered bacterium the tentative name of Bermanella macondoprimitus based on its relatedness to other deep-sea microbes and the location where it was discovered.

“Our study demonstrated the importance of using dispersants in producing neutrally buoyant, tiny oil droplets, which kept much of the oil from reaching the ocean surface,” Andersen said. “Naturally occurring microbes at this depth are highly specialized in growing by using specific components of the oil for their food source. So the oil droplets provided a large surface area for the microbes to chew up the oil.”

Working with Berkeley Lab scientist Jill Banfield, a study co-author and also a professor in UC Berkeley’s Department of Earth and Planetary Sciences, the team used newly developed DNA-based methods to identify all of the genomes of the microbes that used the introduced oil for growth along with their specific genes that were responsible for oil degradation. Many of the bacteria that were identified were similar to oil-degrading bacteria found on the ocean surface but had considerably streamlined sets of genes for oil degradation.

Filling in the gaps

Early work on microbial activity after the oil spill, led by Berkeley Lab’s Terry Hazen (now primarily associated with the University of Tennessee), provided the first data ever on microbial activity from a deepwater dispersed oil plume.

While Hazen’s work revealed a variety of hydrocarbon degraders, this latest study identified the mechanisms the bacteria used to degrade oil and the relationship of these organisms involved in the spill to previously characterized hydrocarbon-degrading organisms.

“We now have the capability to identify the specific organisms that would naturally degrade the oil if spills occurred in other regions and to calculate the rates of the oil degradation to figure out how long it would take to consume the spilled oil at depth,” Andersen said.
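As a hypothetical sketch of the kind of calculation Andersen describes, a first-order (exponential) decay model converts a measured degradation rate constant into a time-to-consume estimate. The rate constant below is illustrative only, not a value from the study:

```python
import math

# Hypothetical sketch: given a first-order biodegradation rate constant
# (k, per day) measured in a bottle incubation, estimate how long it takes
# for a dissolved oil component to decay to a given remaining fraction.
# The value of k below is illustrative, not a number from the study.
def days_to_degrade(fraction_remaining: float, k_per_day: float) -> float:
    """Time for an exponentially decaying pool to reach the given fraction."""
    return -math.log(fraction_remaining) / k_per_day

k = 0.05  # hypothetical rate: ~5% of the remaining pool consumed per day
print(f"Half-life: {days_to_degrade(0.5, k):.1f} days")
print(f"99% degraded after: {days_to_degrade(0.01, k):.0f} days")
```

With this assumed rate, half the pool is gone in about two weeks, but the last percent lingers for roughly three months, which is why rate measurements at depth matter for spill response planning.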


Implications for future spills

Andersen noted that it is not clear if the degradation of oil at these depths would have occurred in other offshore oil-producing regions. “The Gulf of Mexico is home to one of the largest concentrations of underwater hydrocarbon seeps, and it has been speculated that this helped in the selection of oil-degrading microbes that were observed in the underwater plumes,” he said.

Although the well drilled by the Deepwater Horizon rig was one of the deepest of its time, new oil exploration offshore of Brazil, Uruguay, and India now exceeds 2 miles below the ocean surface. By capturing water from these areas and subjecting it to the same test, it may be possible in the future to understand the consequences of an uncontrolled release of oil there in greater detail.

“Our greatest hope would be that there were no oil spills in the future,” Andersen said. “But having the ability to manipulate conditions in the laboratory could potentially allow us to develop new insights for mitigating their impact.”

This research was funded by the Energy Biosciences Institute, a partnership led by UC Berkeley that includes Berkeley Lab and the University of Illinois at Urbana-Champaign. Other study co-authors were Eric Dubinsky, Lauren Tom, Christian Sieber, and Jian Wang of Berkeley Lab, and Alexander Probst of UC Berkeley.

Contacts and sources:
Julie Chao
Lawrence Berkeley National Laboratory

Textiles Made from Synthetic Fibers Pollute Ocean with Microplastics from the Washing Machine

Choose your clothes wisely or eat your shirt. 

Even before the UN Ocean Conference in early June, we already knew about the disastrous ways in which plastic affects the world's oceans. Billions of pieces of plastic are floating in the oceans. Their effects are also sufficiently well-known: marine animals swallow them or get tangled up in them, which causes them to die in agony. 

On the other hand, we know less about the consequences of the smallest pieces of plastic, known as microplastics. Empa researchers have now started to investigate how microplastics are generated and where they actually come from.

Bern Nowack in the Lab.

Credit: Empa

The presence of microplastics in our wastewater can be attributed primarily to two sources. Firstly, many cosmetic products, such as toothpaste, creams, shower gels, and exfoliating scrubs, contain tiny pieces of plastic to achieve a mechanical cleaning effect. Secondly, microplastics are washed out of polymer textiles during laundering, and thus enter the environment via wastewater.

Many researchers who have recently studied nanoparticles are now also investigating microplastics. They include Bernd Nowack, Edgar Hernandez, and Denise Mitrano (who is now working at the water research institute Eawag) from Empa's "Technology and Society" department. 

On the basis of their nanoparticle research, these three researchers recently published the first quantitative investigation of the release of microfibers from polyester textiles during washing, in the journal Environmental Science & Technology. In this study, the Empa team primarily investigated how washing agents, water temperature, and the number and length of wash cycles affect the release of microfibers.

A hypothesis that could not be confirmed

To date, the study is the most meticulous and systematic investigation of the release of microfibers from textiles ever carried out, both in the number of parameters investigated and in the characterization of the released fibers by count and length. Nowack and his colleagues found that the quantity of fibers released by five different washing programs was always more or less constant, while washing agents and detergents increased the quantity of microfibers released compared with "normal" water. Washing temperature, however, had no effect on the number of microfibers that Nowack's team subsequently found in the wastewater.

Remarkably, the same was true of the duration of the wash cycles. "And for us, that was really quite astonishing," says Bernd Nowack. He had assumed that they would confirm the well-established hypothesis that the longer a wash cycle lasts, the more microfibers it will release. "At first, it looked as though microfibers were generated during washing," says Nowack. However, if this were the case, longer wash cycles should release more fibers. But this is not the case. The Empa researcher makes a frank admission: "Unfortunately, this means that we are not yet able to explain how the released fibers are generated."

A good basis for follow-up investigations

To ensure that this does not remain the case, a follow-up study is already planned. In cooperation with Manfred Heuberger of Empa's "Advanced Fibers" lab, a PhD thesis on the generation of microfibers during washing will soon be underway. This study will then systematically analyze different types of materials in order to shed light on the generation of microfibers in the washing machine.

Contacts and sources:
Empa

Citation: Polyester Textiles as a Source of Microplastic from Households: A Mechanistic Study to Understand Microfiber Release During Washing; DM Mitrano, E Hernandez, B Nowack; Environmental Science & Technology (2017); DOI: 10.1021/acs.est.7b01750

Topsy-Turvy Motion and the Light Switch Effect at Uranus

More than 30 years after Voyager 2 sped past Uranus, Georgia Institute of Technology researchers are using the spacecraft’s data to learn more about the icy planet. Their new study suggests that Uranus’ magnetosphere, the region defined by the planet’s magnetic field and the material trapped inside it, gets flipped on and off like a light switch every day as it rotates along with the planet. It’s “open” in one orientation, allowing solar wind to flow into the magnetosphere; it later closes, forming a shield against the solar wind and deflecting it away from the planet.

This is much different from Earth’s magnetosphere, which typically switches between open and closed only in response to changes in the solar wind. Earth’s magnetic field is nearly aligned with its spin axis, causing the entire magnetosphere to spin like a top along with the Earth’s rotation. Because the same alignment of Earth’s magnetosphere always faces the sun, the magnetic field threaded through the ever-present solar wind must change direction in order to reconfigure Earth’s field from closed to open. This frequently occurs during strong solar storms.

This is a composite image of Uranus by Voyager 2 and two different observations made by the Hubble Space Telescope -- one for the ring and one for the auroras.

Credit: ESA/Hubble & NASA, L. Lamy/Observatoire de Paris

But Uranus lies and rotates on its side, and its magnetic field is lopsided — it’s off-centered and tilted 60 degrees from its axis. Those features cause the magnetic field to tumble asymmetrically relative to the solar wind direction as the icy giant completes its 17.24-hour full rotation.

Rather than the solar wind dictating a switch as it does here on Earth, the researchers say Uranus’ rapid rotational changes in field strength and orientation lead to a periodic open-close-open-close scenario as the planet tumbles through the solar wind.

“Uranus is a geometric nightmare,” said Carol Paty, the Georgia Tech associate professor who co-authored the study. “The magnetic field tumbles very fast, like a child cartwheeling down a hill head over heels. When the magnetized solar wind meets this tumbling field in the right way, it can reconnect and Uranus’ magnetosphere goes from open to closed to open on a daily basis.”
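A toy geometric model (not the study's numerical simulation) illustrates the tumbling. Taking the article's 60° dipole tilt and an assumed 30° angle between the spin axis and the sun line (the seasonal geometry is our assumption), the angle between the dipole and the solar-wind direction swings widely over each rotation, alternately favoring and disfavoring reconnection:

```python
import math

# Geometric sketch (not the study's model): how the angle between Uranus'
# tilted dipole and the sunward direction varies over one 17.24 h rotation.
# The 60° dipole tilt is from the article; the 30° spin-axis/sun-line angle
# is a hypothetical seasonal geometry chosen for illustration.
DIPOLE_TILT = math.radians(60.0)
SPIN_SUN_ANGLE = math.radians(30.0)   # assumed; varies with season

def dipole_sun_angle(phase: float) -> float:
    """Angle (deg) between dipole axis and sunward direction at a rotation phase."""
    # Spin axis in the x-z plane, tilted away from the sunward x-axis.
    ax, az = math.cos(SPIN_SUN_ANGLE), math.sin(SPIN_SUN_ANGLE)
    # Orthonormal frame around the spin axis for the rotating dipole.
    e1 = (-az, 0.0, ax)               # perpendicular to spin axis, in x-z plane
    e2 = (0.0, 1.0, 0.0)              # perpendicular to both
    ct, st = math.cos(DIPOLE_TILT), math.sin(DIPOLE_TILT)
    cp, sp = math.cos(phase), math.sin(phase)
    m = (ct * ax + st * (cp * e1[0] + sp * e2[0]),
         st * (cp * e1[1] + sp * e2[1]),
         ct * az + st * (cp * e1[2] + sp * e2[2]))
    return math.degrees(math.acos(m[0]))  # dot with sunward unit vector (1,0,0)

angles = [dipole_sun_angle(2 * math.pi * i / 360) for i in range(360)]
print(f"Dipole-sun angle swings between {min(angles):.0f}° and {max(angles):.0f}°")
```

In this configuration the dipole-sun angle sweeps from 30° to 90° every 17.24-hour rotation, the kind of rotation-driven modulation that can flip the magnetosphere between open and closed.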

Paty says this solar wind reconnection is predicted to occur upstream of Uranus’ magnetosphere over a range of latitudes, with magnetic flux closing in various parts of the planet’s twisted magnetotail.

Reconnection of magnetic fields is a phenomenon throughout the solar system. It occurs when the direction of the interplanetary magnetic field – which comes from the sun and is also known as the heliospheric magnetic field – is opposite a planet’s magnetospheric alignment. Magnetic field lines are then spliced together and rearrange the local magnetic topology, allowing a surge of solar energy to enter the system.

Magnetic reconnection is one reason for Earth’s auroras. Auroras could be possible at a range of latitudes on Uranus due to its off-kilter magnetic field, but the aurora is difficult to observe because the planet is nearly 2 billion miles from Earth. The Hubble Space Telescope occasionally gets a faint view, but it can’t directly measure Uranus’ magnetosphere.

This is an image of the planet Uranus taken by the spacecraft Voyager 2. NASA's Voyager 2 spacecraft flew closely past distant Uranus, the seventh planet from the Sun, in January 1986.
Photo credit: NASA/JPL

The Georgia Tech researchers used numerical models to simulate the planet’s global magnetosphere and to predict favorable reconnection locations. They plugged in data collected by Voyager 2 during its five-day flyby in 1986. It’s the only time a spacecraft has visited.

The researchers say learning more about Uranus is one key to discovering more about planets beyond our solar system.

“The majority of exoplanets that have been discovered appear to also be ice giants in size,” said Xin Cao, the Georgia Tech Ph.D. candidate in earth and atmospheric sciences who led the study. “Perhaps what we see on Uranus and Neptune is the norm for planets: very unique magnetospheres and less-aligned magnetic fields. Understanding how these complex magnetospheres shield exoplanets from stellar radiation is of key importance for studying the habitability of these newly discovered worlds.”

Contacts and sources:
Jason Maderer
Georgia Institute of Technology (Georgia Tech)

The paper, “Diurnal and Seasonal Variability of Uranus’ Magnetosphere,” is currently published in the Journal of Geophysical Research: Space Physics.

Saturday, June 24, 2017

A Rough Diet from Human Predecessors 800,000 Years Ago

Homo antecessor, a hominin species that inhabited the Iberian Peninsula around 800,000 years ago, had a mechanically more demanding diet than other hominin species in Europe and on the African continent.

Their unique pattern, which would be characterized by the consumption of hard and abrasive foods, may be explained by the differences in food processing in a very demanding environment with fluctuations in climate and food resources, according to a study published in the journal Scientific Reports and led by a team from the University of Alicante, the Faculty of Biology of the University of Barcelona and the Catalan Institute of Human Paleoecology and Social Evolution (IPHES).

Homo antecessor from Atapuerca
Credit: Asociación RUVID

This new research, which reveals for the first time the evidence on the diet of these hominines with the study of the microscopic traces left by food in the dental enamel, relies on the participation of the researchers Alejandro Pérez-Pérez and his team, formed by the doctors Laura Martínez, Ferrán Estebaranz, and Beatriz Pinilla (UB), Marina Lozano (Catalan Institute of Human Paleoecology and Social Evolution, IPHES), Alejandro Romero (University of Alicante), Jordi Galbany (George Washington University, United States) and the co-directors of Atapuerca, José María Bermúdez de Castro (National Research Centre on Human Evolution, CENIEH), Eudald Carbonell (IPHES) and Juan Luís Arsuaga (Universidad Complutense de Madrid).

Before this research, the diet of the hominins of the Lower Pleistocene of Atapuerca (Burgos, Spain), our most remote European ancestors, had been inferred from animal remains – a great variety of large mammals and even turtles – found in the same levels as the human remains. Evidence of cannibalism has also been suggested in some of these fossils.

Food that leaves a mark on the enamel

The study is based on the analysis of the buccal microwear pattern of the fossils from Trinchera Elefante and Gran Dolina at the Atapuerca site. The examined microwear features are small marks on the buccal enamel surface of the teeth, whose density and length depend on the types of food chewed.

Credit: Asociación RUVID

"The usefulness of this methodology has been proved by the study of the microwear patterns of present-day populations, both hunter-gatherer and agricultural, showing that different feeding patterns correlate with specific microwear patterns on the vestibular surface of the dental crown", explains Alejandro Pérez-Pérez, professor at the Zoology and Biological Anthropology Unit of the Department of Evolutionary Biology, Ecology and Environmental Sciences at the University of Barcelona.

In the new study, the Atapuerca fossils have been compared with samples from other Lower Pleistocene populations: with fossils of the African Homo ergaster, ancestors of all Europeans dated from 1.8 million years ago; and also with Homo heidelbergensis, which appeared more than 500,000 years ago in Europe and lasted until at least 200,000 years ago, and finally with Homo neanderthalensis specimens from the Iberian Peninsula that lived between 200,000 and 40,000 years ago.

Higher striation densities in 'Homo antecessor'

The results of the study show that the teeth of H. antecessor exhibit higher striation densities than those of the other analyzed species. "Our findings do not allow us to say exactly what foods they ate, since the abrasive materials that cause the marks on the teeth may have different origins, but they do allow us to point out that H. antecessor would have had a diet largely based on hard and abrasive foods, such as plants containing phytoliths (silica particles produced by plants that are as hard as enamel), tubers with traces of soil particles, collagen or connective tissue, and bone or raw meat", the researchers said.
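The kind of comparison behind this finding can be illustrated with a short sketch. The striation counts below are invented for illustration only; the study's actual data and statistics are in the Scientific Reports paper. What the snippet shows is simply the qualitative pattern the researchers report: higher mean striation density in H. antecessor than in the compared species.

```python
from statistics import mean, stdev

# Illustrative sketch only: comparing buccal-microwear striation densities
# between fossil samples. These numbers are made up; the study reports
# that H. antecessor shows higher densities than the compared species.
striations_per_area = {
    "H. antecessor":       [142, 155, 149, 161, 150],
    "H. heidelbergensis":  [110, 118, 104, 121, 113],
    "H. neanderthalensis": [98, 105, 92, 110, 101],
}

for species, counts in striations_per_area.items():
    print(f"{species:22s} mean = {mean(counts):6.1f}  sd = {stdev(counts):5.1f}")
```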

Credit:  Asociación RUVID
The researchers suggest that differences in the Gran Dolina microwear patterns among the compared samples could reflect cultural differences in the way food was processed.  

"Hunting and gathering activities are consistent with the highly abrasive wear pattern we have encountered, but it is very difficult to think that the food available in the Atapuerca area was very different from that available to other hunter-gatherer hominins. Therefore, it would be the different ways of processing the food that gave rise to these differences in the dental microwear patterns. That is to say, they processed and consumed food in different ways", explained Alejandro Pérez-Pérez, who leads a team that has also applied this methodology to the study of the feeding behaviours of the hominins of the East African Pleistocene, including the species Paranthropus boisei and Homo habilis.

A more primitive lithic industry

This pattern of great abrasiveness, observed on the enamel surfaces of the Gran Dolina teeth, contrasts with what was observed in the species compared in the study. "Unlike H. neanderthalensis, which had a more advanced lithic industry (called Mode 3 or Mousterian), the tools found in association with Homo antecessor are primitive (Mode 1). These industries would not have facilitated food processing, as is also suggested by evidence that they used their teeth to chew bones. Moreover, the lack of evidence for the use of fire at Atapuerca suggests that they ate everything raw, including plant foods, meat, tendons or skin, causing more dental wear."

For the researchers, a diet with high meat consumption could have evolutionary implications. "Meat in the diet could have contributed to gaining the energy necessary to sustain a large brain like that of H. antecessor, with a volume of approximately 1,000 cubic centimeters, compared with the 764 of H. ergaster, but it would also have represented a significant source of food in a highly demanding environment where preferred foods, such as ripe fruits and tender vegetables, would vary seasonally", the researchers added.

The research contributes significantly to the better understanding of the dietary adaptations of our ancestors and highlights the importance of the ecological and cultural factors that have conditioned our biological evolution.

Contacts and sources:
Asociación RUVID

Citation: A. Pérez-Pérez, M. Lozano, A. Romero, L. M. Martínez, J. Galbany, B. Pinilla, F. Estebaranz-Sánchez, J. M. Bermúdez de Castro, E. Carbonell and J. L. Arsuaga. «The diet of the first Europeans from Atapuerca». Scientific Reports, February 2017.

How Eggs Get Their Shapes

The evolution of the amniotic egg — complete with membrane and shell — was key to vertebrates leaving the oceans and colonizing the land and air. Now, 360 million years later, bird eggs come in all shapes and sizes, from the almost perfectly spherical eggs of brown hawk-owls to the tear-drop shape of sandpipers’ eggs. The question is, how and why did this diversity in shape evolve?

The answer to that question may help explain how birds evolved and solve an old mystery in natural history.

New study finds birds may have evolved elliptical or asymmetric eggs to maintain streamlined bodies for flight 
Credit: Museum of Comparative Zoology and Harvard University

An international team of scientists led by researchers at Harvard and Princeton universities, with colleagues in the UK, Israel and Singapore, took a quantitative approach to this question. Using methods and ideas from mathematics, physics and biology, they characterized the shape of eggs from about 1,400 species of birds and developed a model that explains how an egg’s membrane determines its shape. Using an evolutionary framework, the researchers found that the shape of an egg correlates with flight ability, suggesting that adaptations for flight may have been critical drivers of egg-shape variation in birds.

The research is published in Science.

“Our study took a unified approach to understanding egg shape by asking three questions: how to quantify egg shape and provide a basis for comparison of shapes across species, what are the biophysical mechanisms that determine egg shape, and what are the implications of egg shape in an evolutionary and ecological setting,” said senior author, L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at the John A. Paulson School of Engineering and Applied Sciences (SEAS), Professor of Organismic and Evolutionary Biology, and of Physics at Harvard. 

“We showed that egg shapes vary smoothly across species, that shape is determined by the membrane properties rather than the shell, and finally that there is a strong correlation linking birds that lay elliptical and asymmetric eggs with strong flight ability – the last a real surprise.”

Mahadevan is also a Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University.

The researchers began by plotting the shape – as defined by the pole-to-pole asymmetry and the ellipticity – of some 50,000 eggs, representing about 1,400 species (14 percent of bird species) in 35 orders, including two extinct orders.

Average egg shapes for each of 1400 species (black dots), illustrating variation in asymmetry and ellipticity.

 Image courtesy of L. Mahadevan/Museum of Vertebrate Zoology, Berkeley

The researchers found that egg shape was a continuum — with many species overlapping. The shapes ranged from almost perfectly spherical eggs to conical-shaped eggs.
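Two simple descriptors capture most of this spectrum. The sketch below is an illustrative take on asymmetry and ellipticity, not the paper's exact formulas: ellipticity measures how elongated the egg is, and asymmetry measures how far the widest cross-section sits from the midpoint. All the measurements used are invented example numbers.

```python
# Sketch of two simple egg-shape descriptors, loosely in the spirit of the
# study's asymmetry and ellipticity measures (illustrative definitions,
# not the paper's exact formulas).

def ellipticity(length, max_width):
    """How elongated the egg is: 0 for a sphere, larger for longer eggs."""
    return length / max_width - 1.0

def asymmetry(dist_blunt_to_widest, length):
    """How far the widest cross-section sits from the midpoint:
    0 for a symmetric (elliptical) egg, larger for pointier eggs."""
    return abs(dist_blunt_to_widest / length - 0.5)

# A near-spherical owl-like egg vs. a pointed sandpiper-like egg
# (made-up measurements in millimetres).
print("owl-like:      ", ellipticity(33, 31), asymmetry(16.5, 33))
print("sandpiper-like:", ellipticity(47, 33), asymmetry(18, 47))
```

Plotting species on these two axes gives a scatter like the one in the figure above: spheres near the origin, conical eggs far out along both axes.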

So, how is this diverse spectrum of shapes formed?

Researchers have long known that egg membranes play an important role in egg shape — after all, if an egg shell is dissolved in a mild acid, like vinegar, the egg actually maintains its shape. But how do the properties of the membrane contribute to shape?

Think of a balloon. If a balloon is uniformly thick and made of one material, it will be spherical when inflated. But if it is not uniform, all manner of shapes can be obtained.

Common murre or common guillemot egg 
Credit: Harvard Museum of Comparative Zoology

“Guided by observations that show that the membrane thickness varies from pole to pole, we constructed a mathematical model that considers the egg to be a pressurized elastic shell that grows and showed that we can capture the entire range of egg shapes observed in nature,” said Mahadevan.

The variations of shape come from the variation in the membrane’s thickness and material properties and the ratio of the differential pressure to the stretchiness of the membrane.
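A way to see how a couple of parameters can span the whole range of shapes is to play with a simple parametric egg curve. This is not the authors' elastic-shell model, just an illustrative two-parameter family: T sets how round the profile is, and lam skews the widest point toward one pole.

```python
import math

# Illustrative sketch (not the paper's pressurized elastic-shell model):
# a two-parameter family of egg profiles. T controls roundness;
# lam controls asymmetry (lam = 0 gives a symmetric ellipse).

def egg_profile(T, lam, n=400):
    """Return (x, y) points of the upper half of an egg outline."""
    pts = []
    for i in range(n + 1):
        t = math.pi * i / n
        x = math.cos(t)
        y = T * math.sin(t) * (1.0 + lam * math.cos(t))
        pts.append((x, y))
    return pts

def widest_x(pts):
    """x-position of the maximum half-width (0 means symmetric)."""
    return max(pts, key=lambda p: p[1])[0]

print("symmetric ellipse, widest at x =", round(widest_x(egg_profile(0.7, 0.0)), 2))
print("asymmetric egg,    widest at x =", round(widest_x(egg_profile(0.7, 0.3)), 2))
```

Increasing lam shifts the widest cross-section away from the middle, mimicking the effect of a membrane whose thickness varies from pole to pole.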

The next question is, how are these forms related to the function of the bird?

The researchers looked at correlations between egg shape and traits associated with the species of bird, including nest type and location, clutch size (the number of eggs laid at a time), diet and flight ability.

“We discovered that flight may influence egg shape,” said lead author Mary Caswell Stoddard, Assistant Professor of Ecology and Evolutionary Biology at Princeton University and former Junior Fellow in the Harvard Society of Fellows. “To maintain sleek and streamlined bodies for flight, birds appear to lay eggs that are more asymmetric or elliptical. With these egg shapes, birds can maximize egg volume without increasing the egg’s width – this is an advantage in narrow oviducts.”

So an albatross and a hummingbird, while two very different birds, may have evolved similarly shaped eggs because both are high-powered fliers.

Broad-tailed hummingbird egg

Credit: Museum of Comparative Zoology and Harvard University

"It’s clear from our study that variation in the size and shape of bird eggs is not simply random but is instead related to differences in ecology, including the amount of calcium in the diet, and particularly the extent to which each species is designed for powerful flight," says coauthor Dr. Joseph Tobias from Imperial College, UK.

Next, the researchers hope to observe the egg laying process in real time, to compare it to and refine their model.

This paper was coauthored by E. H. Yong, D. Akkaynak and C. Sheard. The major funding sources for this work include Princeton University, the L’Oréal USA For Women in Science Fellowship, the Harvard Society of Fellows, the Milton Fund, Nanyang Technological University, the Oxford Clarendon Fund, the Fulbright Commission, the Natural Environment Research Council, the MacArthur Foundation and the Radcliffe Institute.

Contacts and sources:
Leah Burrows

Out of Africa; New Research Explores Drive Behind Early Humanity's Travels Across the Globe

A new research project led by Royal Holloway, University of London starts this July, having received a grant of over £450,000 from the Leverhulme Trust to explore the migrations of humans out of Africa.

Over the next three years the team led from the Department of Geography at Royal Holloway will examine important archaeological and environmental sites across the Levant and Arabian Peninsula to understand when and why early humans travelled from Africa, a movement that saw humans dominating the globe. 

Skull of an early Homo sapiens
Image courtesy of the Natural History Museum, London

Working in close collaboration with expert archaeologists and scientists from across the region and also from leading research centers in Europe, the research will seek to resolve uncertainties about the chronology of early human dispersals Out of Africa.

Searching for clues in ancient volcanic ash

New environmental and archaeological information coupled with genetic evidence will help uncover the drivers behind the global distribution and dominance of our species. The team will investigate the role of factors such as environmental changes and species interbreeding in spreading humanity across the world.

“Current thinking suggests that humans started moving out of Africa over 120,000 years ago,” explained Professor Simon Blockley of Royal Holloway’s Department of Geography who is leading the project.

“For the first time in this region we will be using a state-of-the-art method for dating events by finding microscopic traces of volcanic ash within archeological and environmental sites that can then be linked to known and dated eruptions. This ash will contain clues as to the timing of the dispersal of human groups and any climatic triggers behind them,” he concluded.

The project, a collaborative effort including Michael Petraglia of the Max Planck Institute for the Science of Human History, Simon Armitage of Royal Holloway and Professor Chris Stringer of the Natural History Museum in London, begins July 1, 2017.

Contacts and sources:
Royal Holloway, University of London

Cannabinoids Suitable for Migraine Prevention Says EU Study

A study presented at the Congress of the European Academy of Neurology in Amsterdam found that cannabinoids are as suitable for the prophylaxis of migraine attacks as other pharmaceutical treatments. Interestingly, though, for treating acute cluster headaches they are effective only in patients who suffered from migraine in childhood.

Germany’s recent decision to liberalise the use of cannabis for medical purposes has rekindled policy debate across Europe. While politics and health authorities continue to weigh up the pros and cons of this treatment method, researchers are constantly furthering scientific understanding of the use of cannabinoids.

Credit: Wikimedia Commons

Progress was reflected in the results of a current Italian study presented at the 3rd Congress of the European Academy of Neurology (EAN). A research team led by Dr Maria Nicolodi investigated the suitability of cannabinoids as a prophylaxis for migraine and in the acute treatment of migraines and cluster headaches. 

To start with, the researchers had to identify the dosage required to treat headaches effectively. A group of 48 chronic migraine volunteers were given a starting oral dose of 10mg of a combination of two compounds: one contained 19 per cent tetrahydrocannabinol (THC), while the other had virtually no THC but a 9 per cent cannabidiol (CBD) content. Doses of less than 100mg produced no effect; it was not until an oral dose of 200mg was administered that acute pain dropped by 55 per cent.

In phase 2 of the study, 79 chronic migraine patients were given a daily dose of either 25mg of amitriptyline – a tricyclic antidepressant commonly used to treat migraine – or 200mg of the THC-CBD combination for a period of three months. 48 cluster headache patients also received either 200mg THC-CBD or a daily dose of 480mg of the calcium channel blocker verapamil. For acute pain, an additional 200mg of THC-CBD was administered for both types of headache.

The results after three months of treatment, and follow-up after a further four weeks, produced various insights. The THC-CBD combination reduced migraine attacks by 40.4 per cent, slightly better than amitriptyline's 40.1 per cent, while the severity and number of cluster headache attacks fell only slightly. When analysing use in the treatment of acute pain, the researchers came across an interesting phenomenon: cannabinoids reduced pain intensity among migraine patients by 43.5 per cent. The same results were seen in cluster headache patients, but only in those who had experienced migraine in childhood.

In patients without previous history, THC-CBD had no effect whatsoever as an acute treatment. “We were able to demonstrate that cannabinoids are an alternative to established treatments in migraine prevention. That said, they are only suited for use in the acute treatment of cluster headaches in patients with a history of migraine from childhood on,” Dr Nicolodi summarised.

Drowsiness and difficulty concentrating aside, the side-effect findings were positive: the incidence of stomach ache, colitis and musculoskeletal pain – in female subjects – decreased.

Contacts and sources:
B&K Kommunikation

Citation: 3rd EAN Congress Amsterdam 2017, 24 - 27 June 2017, Abstract Nicolodi, et al. Therapeutic Use of Cannabinoids - Dose Finding, Effects and Pilot Data of Effects in Chronic Migraine and Cluster Headache

Origami Everything: New Algorithm Shows Where to Fold Paper To Make Any 3-D Shape

A new algorithm generates practical paper-folding patterns to produce any 3-D structure.

In a 1999 paper, Erik Demaine — now an MIT professor of electrical engineering and computer science, but then an 18-year-old PhD student at the University of Waterloo, in Canada — described an algorithm that could determine how to fold a piece of paper into any conceivable 3-D shape.

It was a milestone paper in the field of computational origami, but the algorithm didn’t yield very practical folding patterns. Essentially, it took a very long strip of paper and wound it into the desired shape. The resulting structures tended to have lots of seams where the strip doubled back on itself, so they weren’t very sturdy.

Researchers have created a universal algorithm for folding origami shapes that guarantees a minimum number of seams.
Image: Christine Daniloff/MIT

At the Symposium on Computational Geometry in July, Demaine and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with that 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.

“In 1999, we proved that you could fold any polyhedron, but the way that we showed how to do it was very inefficient,” Demaine says. “It’s efficient if your initial piece of paper is super-long and skinny. But if you were going to start with a square piece of paper, then that old method would basically fold the square paper down to a thin strip, wasting almost all the material. The new result promises to be much more efficient. It’s a totally different strategy for thinking about how to make a polyhedron.”

Demaine and Tachi are also working to implement the algorithm in a new version of Origamizer, the free software for generating origami crease patterns whose first version Tachi released in 2008.

Maintaining boundaries

The researchers’ algorithm designs crease patterns for producing any polyhedron — that is, a 3-D surface made up of many flat facets. Computer graphics software, for instance, models 3-D objects as polyhedra consisting of many tiny triangles. “Any curved shape you could approximate with lots of little flat sides,” Demaine explains.
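The "approximate a curve with flat sides" idea is easy to see in one dimension down: a regular polygon's perimeter converges to a circle's circumference as the number of sides grows. The snippet below is a generic illustration of that convergence, not part of Demaine and Tachi's algorithm.

```python
import math

# Illustration of approximating a curved shape with flat facets: a regular
# n-gon inscribed in a unit circle has perimeter approaching 2*pi as n grows.
def ngon_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r."""
    return n * 2 * r * math.sin(math.pi / n)

for n in (6, 24, 96, 384):
    p = ngon_perimeter(n)
    print(f"n = {n:4d}  perimeter = {p:.6f}  error = {2 * math.pi - p:.6f}")
```

The same trade-off applies to polyhedra: more, smaller facets give a closer approximation to a curved surface, at the cost of a more complex crease pattern.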

Technically speaking, the guarantee that the folding will involve the minimum number of seams means that it preserves the “boundaries” of the original piece of paper. Suppose, for instance, that you have a circular piece of paper and want to fold it into a cup. Leaving a smaller circle at the center of the piece of paper flat, you could bunch the sides together in a pleated pattern; in fact, some water-cooler cups are manufactured on this exact design.

In this case, the boundary of the cup — its rim — is the same as that of the unfolded circle — its outer edge. The same would not be true with the folding produced by Demaine and his colleagues’ earlier algorithm. There, the cup would consist of a thin strip of paper wrapped round and round in a coil — and it probably wouldn’t hold water.

“The new algorithm is supposed to give you much better, more practical foldings,” Demaine says. “We don’t know how to quantify that mathematically, exactly, other than it seems to work much better in practice. But we do have one mathematical property that nicely distinguishes the two methods. The new method keeps the boundary of the original piece of paper on the boundary of the surface you’re trying to make. We call this watertightness.”

A closed surface — such as a sphere — doesn’t have a boundary, so an origami approximation of it will require a seam where boundaries meet. But “the user gets to choose where to put that boundary,” Demaine says. “You can’t get an entire closed surface to be watertight, because the boundary has to be somewhere, but you get to choose where that is.”

Lighting fires

The algorithm begins by mapping the facets of the target polyhedron onto a flat surface. But whereas the facets will be touching when the folding is complete, they can be quite far apart from each other on the flat surface. “You fold away all the extra material and bring together the faces of the polyhedron,” Demaine says.

Folding away the extra material can be a very complex process. Folds that draw together multiple faces could involve dozens or even hundreds of separate creases.

Developing a method for automatically calculating those crease patterns involved a number of different insights, but a central one was that they could be approximated by something called a Voronoi diagram. To understand this concept, imagine a grassy plain. A number of fires are set on it simultaneously, and they all spread in all directions at the same rate. The Voronoi diagram — named after the Ukrainian mathematician Georgy Voronoi — describes both the locations at which the fires are set and the boundaries at which adjacent fires meet. In Demaine and Tachi’s algorithm, the boundaries of a Voronoi diagram define the creases in the paper.
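The fire-spreading picture corresponds to a simple rule: every point of the plain belongs to its nearest fire, and the boundaries are where the nearest fire changes. The brute-force sketch below labels a small grid by nearest site; the sites and grid size are arbitrary illustrations, not anything from Demaine and Tachi's algorithm.

```python
# Sketch of the Voronoi idea: each point belongs to the nearest "fire"
# (site). Labeling a grid by nearest site reveals the cells; cell
# boundaries are where the label changes.

def nearest_site(p, sites):
    """Index of the site closest to point p (squared Euclidean distance)."""
    return min(range(len(sites)),
               key=lambda i: (p[0] - sites[i][0]) ** 2 + (p[1] - sites[i][1]) ** 2)

sites = [(2, 2), (7, 3), (4, 8)]   # where the "fires" are lit
for y in range(10):
    print("".join(str(nearest_site((x, y), sites)) for x in range(10)))
```

Production code would use a proper geometric construction (e.g. Fortune's sweep-line algorithm) rather than this grid labeling, but the nearest-site rule is the same.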

“We have to tweak it a little bit in our setting,” Demaine says. “We also imagine simultaneously lighting a fire on the entire polygon of the polyhedron and growing out from there. But that concept was really useful. The challenge is to set up where to light the fires, essentially, so that the Voronoi diagram has all the properties we need.”

Completed quest

“It’s very impressive stuff,” says Robert Lang, one of the pioneers of computational origami and a fellow of the American Mathematical Society, who in 2001 abandoned a successful career in optical engineering to become a full-time origamist. “It completes what I would characterize as a quest that began some 20-plus years ago: a computational method for efficiently folding any specified shape from a sheet of paper. Along the way, there have been several nice demonstrations of pieces of the puzzle: an algorithm to fold any shape, but not very efficiently; an algorithm to efficiently fold particular families of tree-like shapes, but not surfaces; an algorithm to fold trees and surfaces, but not every shape. This one covers it all! The algorithm is surprisingly complex, but that arises because it is comprehensive. It truly covers every possibility. And it is not just an abstract proof; it is readily computationally implementable.”

Joseph O’Rourke, a professor of mathematics and computer science at Smith College and the author of How To Fold It: The Mathematics of Linkages, Origami, and Polyhedra, agrees. “What was known before was either ‘cheating’ — winding the polyhedron with a thin strip — or not guaranteed to succeed,” he says. “Their new algorithm is guaranteed to produce a folding, and it is the opposite of cheating in that every facet of the polyhedron is covered by a ‘seamless’ facet of the paper, and the boundary of the paper maps to the boundary of the polyhedral manifold — their ‘watertight’ property. Finally, the extra structural ‘flash’ needed to achieve their folding can all be hidden on the inside and so is invisible.”

Contacts and sources:
Abby Abazorius
Massachusetts Institute of Technology