Sunday, April 30, 2017

Evolving ‘Lovesick’ Organisms Found Survival in Sex

Being ‘lovesick’ takes on a whole new meaning in a new theory that answers a fundamental unsolved question: why do we have sex?

University of Adelaide researchers have developed a computer simulation model which supports the theory that sexual reproduction evolved because of the presence of disease-causing microbes and the need to constantly adapt to resist these co-evolving pathogens.

Published in the Journal of Evolutionary Biology, the study by researcher Dr Jack da Silva and student James Galbraith tackles an age-old puzzle that has occupied evolutionary geneticists for a century or more: why do most complex organisms reproduce sexually when asexual reproduction is much more efficient?

“Asexual reproduction, such as laying unfertilised eggs or budding off a piece of yourself, is a much simpler way of reproducing,” says Dr da Silva, Senior Lecturer in the University of Adelaide’s School of Biological Sciences. “It doesn’t require finding a mate, and the time and energy involved in that, nor the intricate and complicated genetics that come into play with sexual reproduction. It’s hard to understand why sex evolved at all.”

Asexual reproduction: the brooding sea anemone, Epiactis prolifera. The numerous young seen on the pedal disk are derived from eggs fertilized in the digestive cavity. After swimming out of the mouth, the motile larvae migrate down to the disk and remain installed there until they are little anemones able to move off and feed themselves.
Credit: Wikimedia Commons

He says one decades-old theory – known as Hill-Robertson Interference – has attracted more attention recently. This theory says sex evolved because it allows the recombination of DNA between mating pairs so that individuals are produced that carry more than one beneficial mutation. Otherwise beneficial mutations compete with each other so that no one mutation is selected over another.

However, Dr da Silva says this “elegant theory” doesn’t explain why sexual reproduction would be maintained in a stable, well-adapted population. “It is hard to imagine why this sort of natural selection should be ongoing, which would be required for sex to be favoured,” he says. “Most mutations in an adapted population will be bad. For a mutation to be good for you, the environment needs to be changing fairly rapidly. There would need to be some strong ongoing selective force for sex to be favoured over asexual reproduction.”

An answer lay in bringing another evolutionary theory into the equation. The so-called Red Queen theory says that our pathogens, such as bacteria, viruses and parasites, are continuously adapting to us and we are constantly having to evolve to become resistant to them. This provides the opportunity for new mutations to be beneficial and maintains a strong selective force.

“These two theories have been pushed around and analysed independently but we’ve brought them together,” says Dr da Silva. “Either on their own can’t explain sex, but looking at them together we’ve shown that the Red Queen dynamics of co-evolving pathogens produces that changing environment that makes sex advantageous through the simple genetic mechanism of the Hill-Robertson theory.”

The combined theory was developed and tested through computer simulations. With their model, the researchers accurately reproduced the rapid evolutionary increase in outcrossing (sex with other individuals) observed in nematode worms coevolving with a highly pathogenic bacterium.

“This is not a definitive test but it shows our model is consistent with the best experimental evidence that exists,” Dr da Silva says.
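The Hill-Robertson mechanism at the heart of the model can be illustrated with a minimal deterministic two-locus sketch. This is our own toy example, not the authors' published simulation: without recombination, two beneficial mutations (A and B) that arise on different genetic backgrounds can never be combined into one genome; with recombination, the double mutant AB is created and sweeps.

```python
def evolve(r, s=0.05, generations=500):
    """Deterministic two-locus model with haplotypes ab, Ab, aB, AB.
    A and B are beneficial mutations; r is the recombination rate."""
    freq = {"ab": 0.98, "Ab": 0.01, "aB": 0.01, "AB": 0.0}
    fitness = {"ab": 1.0, "Ab": 1 + s, "aB": 1 + s, "AB": (1 + s) ** 2}
    for _ in range(generations):
        # Selection: reweight haplotype frequencies by fitness
        total = sum(freq[h] * fitness[h] for h in freq)
        freq = {h: freq[h] * fitness[h] / total for h in freq}
        # Recombination: linkage disequilibrium D couples the two loci;
        # crossing Ab with aB produces AB and ab in equal numbers
        D = freq["AB"] * freq["ab"] - freq["Ab"] * freq["aB"]
        freq["AB"] -= r * D
        freq["ab"] -= r * D
        freq["Ab"] += r * D
        freq["aB"] += r * D
    return freq

p_asex = evolve(r=0.0)  # no recombination: AB can never be assembled
p_sex = evolve(r=0.5)   # free recombination: AB is created and sweeps
print(p_asex["AB"], p_sex["AB"])
```

With r = 0 the AB haplotype frequency stays at exactly zero forever, while with free recombination it approaches fixation within a few hundred generations: the changing environment of the Red Queen scenario keeps supplying new beneficial mutations for this mechanism to act on.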

Contacts and sources:
Dr Jack da Silva, Senior Lecturer, School of Biological Sciences
University of Adelaide

Giant Prehistoric Worm Discovered

Researchers from Lund University, among others, have recently discovered a giant prehistoric worm with massive jaws. The worm lived in the sea 400 million years ago and is estimated to have been up to two metres long. The newly discovered species’ scientific name was inspired by a bassist in an American hard rock band.

The worm species is the largest marine jawed worm ever found, and was discovered in sedimentary rock from Canada. These animals are normally quite small, between a few centimetres and a few decimetres long. But the new fossil finding indicates an unusually large worm. Its body is estimated to have been 1–2 metres long.

Illustration: James Ormiston

“The only thing left of the animal is its jaws, which are much larger compared to similar fossils”, says Mats Eriksson, professor of geology at Lund University.

Together with a researcher in Canada and a researcher in England, Eriksson got wind of the fossil in question. By then, the worm remains had lain undiscovered for several years in a museum in Toronto, after rock samples were collected during fieldwork in 1994 in the province of Ontario.

The gigantic worm species, called Websteroprion armstrongi, lived in the sea. But what it fed on is uncertain. Considering its jaws, researchers believe that it may have been both a predator and a scavenger.

Photo: Luke Parry

An interesting aspect of the finding is that it shows that gigantism existed as early as 400 million years ago. Gigantism is a phenomenon in evolution, where an unusually large body can lead to a competitive advantage over other species.

“Our study shows that this phenomenon of gigantism seems to have been limited to a certain evolutionary branch among jawed worms”, says Mats Eriksson.

A several hundred million year-old worm can thus contribute to knowledge of both animal life on Earth in the past and of evolution as a process. In the long run, this type of palaeontological knowledge is very important when trying to understand and conserve biodiversity today, according to Mats Eriksson.

The three researchers behind the present study are fond of music, and have therefore named the worm after a bassist in an American hard rock band, Alex Webster.

Contacts and sources:
Mats Eriksson, Professor, Department of Geology
Lund University

The Midas Touch: Gold ‘Nugget-Producing’ Bacteria Discovered

Special ‘nugget-producing’ bacteria may hold the key to more efficient processing of gold ore, mine tailings and recycled electronics, as well as aid in exploration for new deposits, University of Adelaide research has shown.

For more than 10 years, University of Adelaide researchers have been investigating the role of microorganisms in gold transformation. In the Earth’s surface, gold can be dissolved, dispersed and reconcentrated into nuggets. This epic ‘journey’ is called the biogeochemical cycle of gold.

Now they have shown for the first time just how long this biogeochemical cycle takes, and they hope to make it even faster in the future.

Gold nugget found in the field

Credit: Joel Brugger

“Primary gold is produced under high pressures and temperatures deep below the Earth’s surface and is mined, nowadays, from very large primary deposits, such as at the Superpit in Kalgoorlie,” says Dr Frank Reith, Australian Research Council Future Fellow in the University of Adelaide’s School of Biological Sciences, and Visiting Fellow at CSIRO Land and Water at Waite.

“In the natural environment, primary gold makes its way into soils, sediments and waterways through biogeochemical weathering and eventually ends up in the ocean. On the way bacteria can dissolve and re-concentrate gold – this process removes most of the silver and forms gold nuggets.

“We’ve known that this process takes place, but for the first time we’ve been able to show that this transformation takes place in just years to decades – that’s a blink of an eye in terms of geological time.

“These results have surprised us and pave the way for many interesting applications, such as optimising the processes for gold extraction from ore and re-processing old tailings or recycled electronics, which isn’t currently economically viable.”

Electron microscope images of a gold grain surface (A) and a bacterial cell on the surface of the gold (B) 
Credit: Adelaide Microscopy

Working with John and Johno Parsons (Prophet Gold Mine, Queensland), Professor Gordon Southam (University of Queensland) and Dr Geert Cornelis (formerly of the CSIRO), Dr Reith and postdoctoral researcher Dr Jeremiah Shuster analysed numerous gold grains collected from West Coast Creek using high-resolution electron-microscopy.

Published in the journal Chemical Geology, they showed that five ‘episodes’ of gold biogeochemical cycling had occurred on each gold grain. Each episode was estimated to take between 3.5 and 11.7 years – a total of under 18 to almost 60 years to form the secondary gold.
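The quoted totals follow directly from the per-episode estimates reported in the study (the multiplication below is ours):

```python
episodes = 5                      # biogeochemical cycling episodes per gold grain
years_per_episode = (3.5, 11.7)   # estimated duration range of a single episode
low = episodes * years_per_episode[0]
high = episodes * years_per_episode[1]
print(low, high)                  # roughly 17.5 to 58.5 years of secondary gold formation
```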

“Understanding this gold biogeochemical cycle could help mineral exploration by finding undiscovered gold deposits or developing innovative processing techniques,” says Dr Shuster, University of Adelaide. “If we can make this process faster, then the potential for re-processing tailings and improving ore-processing would be game-changing. Initial attempts to speed up these reactions are looking promising.”

The researchers say that this new understanding of the gold biogeochemical process and transformation may also help verify the authenticity of archaeological gold artefacts and distinguish them from fraudulent copies.

Contacts and sources:
Dr Frank Reith, ARC Future Fellow, School of Biological Sciences
University of Adelaide

The World’s Fastest Film Camera: When Light Practically Stands Still (Videos)

Forget high-speed cameras capturing 100 000 images per second. A research group at Lund University in Sweden has developed a camera that can film at a rate equivalent to five trillion images per second, or events as short as 0.2 trillionths of a second. This is faster than has previously been possible.

Credit: Lund University

The new super-fast film camera will therefore be able to capture incredibly rapid processes in chemistry, physics, biology and biomedicine, that so far have not been caught on film.

To illustrate the technology, the researchers have successfully filmed how light – a collection of photons – travels a distance corresponding to the thickness of a sheet of paper.
Credit: Lund University

In reality, it only takes a picosecond, but on film the process has been slowed down by a trillion times.
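The two headline numbers are easy to sanity-check (the arithmetic is ours, using a rounded speed of light):

```python
frame_rate = 5e12                   # images per second (five trillion)
frame_interval = 1.0 / frame_rate   # time between frames
c = 3.0e8                           # speed of light in m/s (rounded)
distance_per_ps = c * 1e-12         # how far light travels in one picosecond

print(frame_interval)               # 2e-13 s, i.e. 0.2 trillionths of a second
print(distance_per_ps * 1000)       # ~0.3 mm, on the order of a sheet of paper
```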

Currently, high-speed cameras capture images one by one in a sequence. The new technology is based on an innovative algorithm, and instead captures several coded images in one picture. It then sorts them into a video sequence afterwards.

Elias Kristensson and Andreas Ehn 
Photo: Kennet Ruona

In short, the method involves exposing what you are filming (for example a chemical reaction) to light in the form of laser flashes where each light pulse is given a unique code. The object reflects the light flashes which merge into the single photograph. They are subsequently separated using an encryption key.

The film camera is initially intended to be used by researchers who want to gain better insight into many of the extremely rapid processes that occur in nature. Many take place on a picosecond and femtosecond scale, which is unbelievably fast – the number of femtoseconds in one second is significantly larger than the number of seconds in a person’s lifetime.

“This does not apply to all processes in nature, but quite a few, for example, explosions, plasma flashes, turbulent combustion, brain activity in animals and chemical reactions. We are now able to film such extremely short processes”, says Elias Kristensson. “In the long term, the technology can also be used by industry and others”.

For the researchers themselves, however, the greatest benefit of this technology is not that they set a new speed record, but that they are now able to film how specific substances change in the same process.

“Today, the only way to visualise such rapid events is to photograph still images of the process. You then have to attempt to repeat identical experiments to provide several still images which can later be edited into a movie. The problem with this approach is that it is highly unlikely that a process will be identical if you repeat the experiment”, he says.

Most days, Elias Kristensson and Andreas Ehn conduct research on combustion – an area which is known to be difficult and complicated to study. The ultimate purpose of this basic research is to make next-generation car engines, gas turbines and boilers cleaner and more fuel-efficient. Combustion is controlled by a number of ultra-fast processes at the molecular level, which can now be captured on film.

For example, the researchers will study the chemistry of plasma discharges, the lifetime of quantum states in combustion environments and in biological tissue, as well as how chemical reactions are initiated. In the autumn, there will be more film material available.

About the camera:

The researchers call the technology FRAME – Frequency Recognition Algorithm for Multiple Exposures.

A regular camera with a flash uses regular light, but in this case the researchers use “coded” light flashes, as a form of encryption. Every time a coded light flash hits the object – for example, a chemical reaction in a burning flame – the object emits an image signal (response) with the exact same coding. The following light flashes all have different codes, and the image signals are captured in one single photograph. These coded image signals are subsequently separated using an encryption key on the computer.
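The core idea behind this kind of coding, frequency multiplexing, can be sketched in one dimension: each exposure is multiplied by its own carrier frequency (the "code"), the tagged exposures are summed into a single record, and each one is recovered by shifting its sideband back to baseband and low-pass filtering. This is our own simplified illustration, not the published FRAME algorithm:

```python
import numpy as np

n = 4096
t = np.arange(n) / n

# Two slowly varying "scenes" standing in for two frames of a video
s1 = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
s2 = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)

# Tag each exposure with its own carrier frequency and sum into one record
f1, f2 = 200, 400
single_shot = s1 * np.cos(2 * np.pi * f1 * t) + s2 * np.cos(2 * np.pi * f2 * t)

def decode(m, carrier, cutoff=50):
    """Shift the sideband tagged with `carrier` to baseband, then low-pass."""
    nn = len(m)
    tt = np.arange(nn) / nn
    base = 2 * m * np.cos(2 * np.pi * carrier * tt)
    spec = np.fft.fft(base)
    freqs = np.fft.fftfreq(nn, d=1 / nn)
    spec[np.abs(freqs) > cutoff] = 0   # discard the other codes' sidebands
    return np.fft.ifft(spec).real

r1, r2 = decode(single_shot, f1), decode(single_shot, f2)
print(float(np.max(np.abs(r1 - s1))))  # reconstruction error near machine precision
```

Because the carriers are well separated in frequency, both scenes are recovered from the single summed record almost exactly.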

A German company has already developed a prototype of the technology, which means that within an estimated two years more people will be able to use it.

Contacts and sources:
Elias Kristensson
Lund University

Citation: Andreas Ehn, Joakim Bood, Zheming Li, Edouard Berrocal, Marcus Aldén and Elias Kristensson. FRAME: femtosecond videography for atomic and molecular dynamics. Light: Science & Applications 2017; doi: 10.1038/l

Saturday, April 29, 2017

Scientists Set Record Resolution for Drawing at the One-Nanometer Length Scale

An electron microscope-based lithography system for patterning materials at sizes as small as a single nanometer could be used to create and study materials with new properties.

The ability to pattern materials at ever-smaller sizes -- using electron-beam lithography (EBL), in which an electron-sensitive material is exposed to a focused beam of electrons, as a primary method -- is driving advances in nanotechnology. When the feature size of materials is reduced from the macroscale to the nanoscale, individual atoms and molecules can be manipulated to dramatically alter material properties, such as color, chemical reactivity, electrical conductivity, and light interactions.

This is a schematic showing a focused electron beam (green) shining through a polymeric film (grey: carbon atoms; red: oxygen atoms; white: hydrogen atoms). The glowing area (yellow) indicates the molecular volume chemically modified by the focused electron beam.
Credit: Brookhaven National Laboratory

In the ongoing quest to pattern materials with ever-smaller feature sizes, scientists at the Center for Functional Nanomaterials (CFN) -- a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory -- have recently set a new record. Performing EBL with a scanning transmission electron microscope (STEM), they have patterned thin films of the polymer poly(methyl methacrylate), or PMMA, with individual features as small as one nanometer (nm), and with a spacing between features of 11 nm, yielding an areal density of nearly one trillion features per square centimeter. These record achievements are published in the April 18 online edition of Nano Letters.

"Our goal at CFN is to study how the optical, electrical, thermal, and other properties of materials change as their feature sizes get smaller," said lead author Vitor Manfrinato, a research associate in CFN's electron microscopy group who began the project as a CFN user while completing his doctoral work at MIT. "Until now, patterning materials at a single nanometer has not been possible in a controllable and efficient way."

Commercial EBL instruments typically pattern materials at sizes between 10 and 20 nanometers. Techniques that can produce higher-resolution patterns require special conditions that either limit their practical utility or dramatically slow down the patterning process. Here, the scientists pushed the resolution limits of EBL by installing a pattern generator -- an electronic system that precisely moves the electron beam over a sample to draw patterns designed with computer software -- in one of CFN's aberration-corrected STEMs, a specialized microscope that provides a focused electron beam at the atomic scale.

"We converted an imaging tool into a drawing tool that is capable of not only taking atomic-resolution images but also making atomic-resolution structures," said coauthor Aaron Stein, a senior scientist in the electronic nanomaterials group at CFN.

Their measurements with this instrument show a nearly threefold reduction in feature size (from 5 to 1.7 nm) and a doubling of areal pattern density (from 0.4 to 0.8 trillion dots per square centimeter, corresponding to a reduction in feature spacing from 16 to 11 nm) over previous scientific reports.
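Those density figures follow from the feature spacing alone; assuming a square grid (our assumption), the areal density is one feature per pitch squared:

```python
def dots_per_cm2(pitch_nm):
    """Areal density of a square grid of features with the given spacing."""
    pitch_cm = pitch_nm * 1e-7    # 1 nm = 1e-7 cm
    return 1.0 / pitch_cm ** 2

print(dots_per_cm2(16) / 1e12)    # ~0.39 trillion dots per square centimeter
print(dots_per_cm2(11) / 1e12)    # ~0.83 trillion dots per square centimeter
```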

The team's patterned PMMA films can be used as stencils for transferring the drawn single-digit nanometer feature into any other material. In this work, the scientists created structures smaller than 5 nm in both metallic (gold palladium) and semiconducting (zinc oxide) materials. Their fabricated gold palladium features were as small as six atoms wide.

Despite this record-setting demonstration, the team remains interested in understanding the factors that still limit resolution, and ultimately pushing EBL to its fundamental limit.

"The resolution of EBL can be impacted by many parameters, including instrument limitations, interactions between the electron beam and the polymer material, molecular dimensions associated with the polymer structure, and chemical processes of lithography," explained Manfrinato.

An exciting result of this study was the realization that polymer films can be patterned at sizes much smaller than the 26 nm effective radius of the PMMA macromolecule. "The polymer chains that make up a PMMA macromolecule are a million repeating monomers (molecules) long--in a film, these macromolecules are all entangled and balled up," said Stein. "We were surprised to find that the smallest size we could pattern is well below the size of the macromolecule and nears the size of one of the monomer repeating units, as small as a single nanometer."

Next, the team plans to use their technique to study the properties of materials patterned at one-nanometer dimensions. One early target will be the semiconducting material silicon, whose electronic and optical properties are predicted to change at the single-digit nanometer scale.

"This technique opens up many exciting materials engineering possibilities, tailoring properties if not atom by atom, then closer than ever before," said Stein. "Because the CFN is a national user facility, we will soon be offering our first-of-a-kind nanoscience tool to users from around the world. It will be really interesting to see how other scientists make use of this new capability."

This work is supported by DOE's Office of Science.

Contacts and sources:  
Ariana Tantillo
Brookhaven National Laboratory

Citation: Aberration-Corrected Electron Beam Lithography at the One Nanometer Length Scale

Discovered: Binary Star Polluted with Calcium from Supernova

Astrophysicists reported the discovery of a binary solar-type star inside the supernova remnant RCW 86.

An international team of astrophysicists led by a scientist from the Sternberg Astronomical Institute of the Lomonosov Moscow State University reported the discovery of a binary solar-type star inside the supernova remnant RCW 86. Spectroscopic observation of this star revealed that its atmosphere is polluted by heavy elements ejected during the supernova explosion that produced RCW 86. 

In particular, it was found that the calcium abundance in the stellar atmosphere exceeds the solar one by a factor of six, which hints at the possibility that the supernova might belong to the rare class of calcium-rich supernovae: enigmatic objects whose origin is not yet clear. The research results were published in Nature Astronomy on April 24, 2017.

From the upper left, clockwise: 843-MHz image of RCW 86; image of an arc-like optical nebula in the southwest corner of RCW 86; optical and x-ray images of two point sources, [GV2003] N and [GV2003] S, in the centre of the optical arc

Credit: Vasilii Gvaramadze

The evolution of a massive star ends with a violent explosion called a supernova. The central part of the exploded star contracts into a neutron star, while the outer layers expand at huge velocity and form an extended gaseous shell called a supernova remnant (SNR). Several hundred SNRs are currently known in the Milky Way, several dozen of which have been found to be associated with neutron stars. Detection of new examples of neutron stars in SNRs is very important for understanding the physics of supernova explosions.

In 2002 Vasilii Gvaramadze, a scientist from the Sternberg Astronomical Institute, proposed that the pyriform appearance of RCW 86 can be due to a supernova explosion near the edge of a bubble blown by the wind of a moving massive star - the supernova progenitor star. This allowed him to detect a candidate neutron star, currently known as [GV2003] N, associated with RCW 86 using the data from the Chandra X-ray Observatory.

If [GV2003] N is indeed a neutron star, then it should be a very weak source of optical emission. But in the optical image obtained in 2010, a quite bright star was detected at the position of [GV2003] N. This could mean that [GV2003] N was not a neutron star. Vasilii Gvaramadze, the leading author of the Nature Astronomy publication, explains: "In order to determine the nature of the optical star at the position of [GV2003] N, we obtained its images using the seven-channel optical/near-infrared imager GROND at the 2.2-metre telescope of the European Southern Observatory (ESO). The spectral energy distribution has shown that this star is of solar type (a so-called G star). But since the X-ray luminosity of the G star should be significantly less than that measured for [GV2003] N, we have come to the conclusion that we are dealing with a binary system, composed of a neutron star (visible in X-rays as [GV2003] N) and a G star (visible in optical wavelengths)".

The existence of such systems is a natural result of massive binary star evolution. Recently, it was recognized that the majority of massive stars form in binary and multiple systems. When one of the stars explodes in a binary system, the second one could become polluted by heavy elements, ejected by a supernova.

To check the hypothesis that [GV2003] N is a binary system, the astrophysicists obtained four spectra of the G star in 2015 with the Very Large Telescope (VLT) of the ESO. It was found that the radial velocity of this star changed significantly over one month, which is indicative of an eccentric binary with an orbital period of about a month. This result proved that [GV2003] N is a neutron star and that RCW 86 is the result of a supernova explosion near the edge of a wind-blown bubble. This is very important for understanding the structure of some peculiar SNRs, as well as for detection of their associated neutron stars.

Until recently, the most popular explanation of the origin of the calcium-rich supernovae was the helium shell detonation on low-mass white dwarfs. The results obtained by Vasilii Gvaramadze and his colleagues, however, imply that under certain circumstances a large amount of calcium could also be synthesized by explosion of massive stars in binary systems.

Vasilii Gvaramadze sums up: "We continue studying [GV2003] N. We are going to determine orbital parameters of the binary system, estimate the initial and final masses of the supernova progenitor, and the kick velocity obtained by the neutron star at birth. Moreover, we are also going to measure abundances of additional elements in the G star atmosphere. The obtained information could be crucially important for understanding the nature of the calcium-rich supernovae".

Contacts and sources:
Vladimir Koryagin
Lomonosov Moscow State University

Citation: A solar-type star polluted by calcium-rich supernova ejecta inside the supernova remnant RCW 86. Vasilii V. Gvaramadze, Norbert Langer, Luca Fossati, Douglas C.-J. Bock, Norberto Castro, Iskren Y. Georgiev, Jochen Greiner, Simon Johnston, Arne Rau & Thomas M. Tauris. Nature Astronomy 1, Article number: 0116 (2017).

Friday, April 28, 2017

Synthetic Two-Sided Gecko’s Foot Could Enable Underwater Robotics

Geckos are well known for effortlessly scrambling up walls and upside down across ceilings. Even in slippery rain forests, the lizards maintain their grip. Now scientists have created a double-sided adhesive that copies this reversible ability to stick and unstick to surfaces even in wet conditions. They say their development, reported in ACS’ Journal of Physical Chemistry C, could be useful in underwater robotics, sensors and other bionic devices.

Inspired by geckos’ natural ability to attach and release their feet from surfaces as slick as glass, scientists have made a number of adhesives that can similarly stick and unstick with changes in temperature, light or magnetic field, but mostly in dry conditions. 

Scientists mimic the gecko’s feet to create an adhesive that can stick and unstick to surfaces, even when wet.
Credit: nico99/ 

One promising approach to expanding this to underwater scenarios involves hydrogels that can swell and shrink in response to different acidity levels and other conditions. These volume differences change the gels’ friction and stickiness levels. Feng Zhou, Daniele Dini and colleagues recently developed a method to integrate nanostructured hydrogel fibers on an inorganic membrane. The material’s friction and stickiness levels changed with pH even when wet. The researchers wanted to further expand on this strategy to make the adhesive work on two sides.

The researchers first made the inorganic membrane double-faced and then added the hydrogel nanofibers on both sides. Testing showed that the material exhibited ultra-high friction and adhesion in an acidic liquid (pH of 2), and would rapidly switch to a state of ultra-low friction and stickiness when a basic solution (pH of 12) was added. Additionally, the two sides of the material can stick and slide independently of each other.

The authors acknowledge funding from the National Natural Science Foundation of China, and the U.K.’s Engineering and Physical Sciences Research Council

Contacts and sources:
Katie Cottingham, Ph.D
American Chemical Society

Journal of Physical Chemistry C

Bricks for Martian Pioneers Will Be Stronger Than Reinforced Concrete

Explorers planning to settle on Mars might be able to turn the planet's red soil into bricks without needing to use an oven or additional ingredients. Instead, they would just need to apply pressure to compact the soil--the equivalent of a blow from a hammer.

These are the findings of a study published in Scientific Reports on April 27, 2017. The study was authored by a team of engineers at the University of California San Diego and funded by NASA. The research is all the more important since Congress passed a bill, signed by President Donald Trump in March 2017, directing NASA to send a manned mission to Mars in 2033.

This is a brick made of Martian soil simulant compacted under pressure. The brick was made without any additional ingredients and without baking.

Credit: Jacobs School of Engineering/UC San Diego

"The people who will go to Mars will be incredibly brave. They will be pioneers. And I would be honored to be their brick maker," said Yu Qiao, a professor of structural engineering at UC San Diego and the study's lead author.

Proposals to use Martian soil to build habitats for manned missions on the planet are not new. But this is the first study to show that astronauts would need minimal resources to do so. Previous plans included nuclear-powered brick kilns or using complex chemistry to turn organic compounds found on Mars into binding polymers.

Researchers compacted Mars simulant under pressure in a cylindrical, flexible rubber tube. This is what the result of the experiment looked like before it was cut into bricks.

Credit: Jacobs School of Engineering/UC San Diego

In fact, the UC San Diego engineers were initially trying to cut down on the amount of polymers required to shape Martian soil into bricks, and accidentally discovered that none was needed. To make bricks out of Mars soil simulant, without additives and without heating or baking the material, two steps were key. One was to enclose the simulant in a flexible container, in this case a rubber tube. The other was to compact the simulant at a high enough pressure. The amount of pressure needed for a small sample is roughly the equivalent of someone dropping a 10-lb hammer from a height of one meter, Qiao said.
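The hammer comparison corresponds to a modest amount of impact energy; a rough conversion (ours, using rounded constants) puts a number on it:

```python
mass_lb = 10.0
mass_kg = mass_lb * 0.4536   # pound-to-kilogram conversion
g = 9.81                     # gravitational acceleration, m/s^2
height_m = 1.0
energy_joules = mass_kg * g * height_m   # potential energy released in the drop
print(energy_joules)                     # ~44.5 J delivered per blow
```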

The process produces small round soil pellets that are about an inch tall and can then be cut into brick shapes. The engineers believe that the iron oxide, which gives Martian soil its signature reddish hue, acts as a binding agent. They investigated the simulant's structure with various scanning tools and found that the tiny iron particles coat the simulant's bigger rocky basalt particles. The iron particles have clean, flat facets that easily bind to one another under pressure.

Researchers investigated the bricks' strengths and found that even without rebar, they are stronger than steel-reinforced concrete. Here is a sample after testing to the point of failure.
Credit: Jacobs School of Engineering/UC San Diego

Researchers also investigated the bricks' strengths and found that even without rebar, they are stronger than steel-reinforced concrete.

Researchers said their method may be compatible with additive manufacturing. To build up a structure, astronauts could lay down a layer of soil, compact it, then lay down an additional layer and compact that, and so on.

The logical next step for the research would be to increase the size of the bricks.

Contacts and sources:
Ioana Patringenaru
University of California San Diego

Tibetan People Have Multiple Adaptations for Life at High Altitudes

The Tibetan people have inherited variants of five different genes that help them live at high altitudes, with one gene originating in the extinct human subspecies, the Denisovans.

The people of Tibet have survived on an extremely high and arid plateau for thousands of years, due to their amazing natural ability to withstand low levels of oxygen, extreme cold, exposure to UV light and very limited food sources. Researchers sequenced the whole genomes of 27 Tibetans and searched for advantageous genes.

Hao Hu and Chad Huff of the University of Texas, Houston, and colleagues report these findings in a new study published April 27th, 2017 in PLOS Genetics.

This is the Tibetan Plateau in Qinghai.

Credit: DaiLuo, Flickr, CC BY

The analysis identified two genes already known to be involved in adaptation to high altitude, EPAS1 and EGLN1, as well as two genes related to low oxygen levels, PTGIS and KCTD12. They also picked out a variant of VDR, which plays a role in vitamin D metabolism and may help compensate for vitamin D deficiency, which commonly affects Tibetan nomads. The Tibetan variant of the EPAS1 gene originally came from the archaic Denisovan people, but the researchers found no other genes related to high altitude with Denisovan roots. 

Further analysis showed that Han Chinese and Tibetan subpopulations split as early as 44 to 58 thousand years ago, but that gene flow between the groups continued until approximately 9 thousand years ago.

The study represents a comprehensive analysis of the demographic history of the Tibetan population and its adaptations to the challenges of living at high altitudes. The results also provide a rich genomic resource of the Tibetan population, which will aid future genetic studies.

Tatum Simonson adds: "The comprehensive analysis of whole-genome sequence data from Tibetans provides valuable insights into the genetic factors underlying this population's unique history and adaptive physiology at high altitude. This study provides further context for analyses of other permanent high-altitude populations, who exhibit characteristics distinct from Tibetans despite similar chronic stresses, as well as lowland populations, in whom hypoxia-related challenges, such as those inherent to cardiopulmonary disease or sleep apnea, elicit a wide range of unique physiological responses."

"Future research efforts will focus on identifying the interplay between various adaptive versus non-adaptive genetic pathways and environmental factors (e.g., hypoxia, diet, cold, UV) in these informative populations to reveal the biological underpinnings of individualized physiological responses."

Contacts and sources:
Chad D. Huff
PLOS Genetics

Citation: Hu H, Petousi N, Glusman G, Yu Y, Bohlender R, Tashi T, et al. (2017) Evolutionary history of Tibetans inferred from whole-genome sequencing. PLoS Genet 13(4): e1006675. doi:10.1371/journal.pgen.1006675

First Global Simulation Yields New Insights into Ring System

A team of researchers in Japan modeled the two rings around Chariklo, the smallest body in the Solar System known to have rings (Figure 1). This is the first time an entire ring system has been simulated using realistic sizes for the ring particles while also taking into account collisions and gravitational interactions between the particles.

The team's simulation revealed information about the size and density of the particles in the rings. By considering both the detailed structure and the global picture for the first time, the team found that Chariklo's inner ring should be unstable on its own. It is possible the ring particles are much smaller than predicted, or that an undiscovered shepherd satellite around Chariklo is stabilizing the ring.

Figure 1: Visualization constructed from simulation of Chariklo’s double ring.

Credit: Shugo Michikoshi, Eiichiro Kokubo, Hirotaka Nakayama, 4D2U Project, NAOJ

In order to elucidate the detailed structure and evolution of Chariklo's rings, Dr. Shugo Michikoshi (Kyoto Women's University/University of Tsukuba) and Prof. Eiichiro Kokubo (National Astronomical Observatory of Japan, NAOJ) performed simulations of the rings by using the supercomputer ATERUI*1 at NAOJ. 

They calculated the motions of 345 million ring particles with a realistic size of a few meters, taking into account the inelastic collisions and mutual gravitational attractions between the particles. Thanks to ATERUI's many CPUs and the small size of Chariklo's ring system, the researchers successfully performed the first ever global simulation with realistically sized particles.*2
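
To illustrate the two ingredients the team modeled, mutual gravitational attraction and inelastic collisions, here is a minimal toy N-body sketch. This is not the FDPS/ATERUI code; the gravitational constant, softening length, collision radius and restitution coefficient are arbitrary illustrative values.

```python
import numpy as np

def gravity_accel(pos, masses, G=1.0, soft=0.05):
    # Pairwise softened gravity: a_i = G * sum_j m_j (r_j - r_i) / (|r|^2 + soft^2)^(3/2)
    d = pos[None, :, :] - pos[:, None, :]          # d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(-1) + soft ** 2
    np.fill_diagonal(r2, np.inf)                   # no self-force
    return G * (masses[None, :, None] * d / r2[:, :, None] ** 1.5).sum(axis=1)

def inelastic_collisions(vel, pos, r_coll=0.05, restitution=0.1):
    # Equal-mass pairs closer than r_coll rebound with reduced relative speed;
    # restitution < 1 dissipates energy, as in real ring-particle collisions.
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            dr = pos[j] - pos[i]
            dist = np.linalg.norm(dr)
            if dist < r_coll:
                nhat = dr / dist
                v_rel = np.dot(vel[j] - vel[i], nhat)
                if v_rel < 0:                      # only damp approaching pairs
                    impulse = -(1 + restitution) * v_rel / 2 * nhat
                    vel[i] -= impulse              # equal and opposite impulses,
                    vel[j] += impulse              # so momentum is conserved
    return vel

def step(pos, vel, masses, dt):
    # One kick-drift-kick (leapfrog) step, then collision damping.
    vel = vel + 0.5 * dt * gravity_accel(pos, masses)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * gravity_accel(pos, masses)
    vel = inelastic_collisions(vel, pos)
    return pos, vel
```

A real ring simulation replaces these O(n²) loops with tree or neighbor-list algorithms; that bookkeeping, distributed over many CPUs, is what a framework like FDPS provides.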

Their results show that the density of the ring particles must be less than half the density of Chariklo itself. Their results also showed that a striped pattern, known as "self-gravity wakes," forms in the inner ring due to interactions between the particles (Figure 2). These self-gravity wakes accelerate the break-up of the ring. The team recalculated the expected lifetime of Chariklo's rings based on their results and found it to be only 1 to 100 years, much shorter than previous estimates. This is so short that it's surprising the ring is still there.

The research team suggested two possibilities to explain the continued existence of the ring. "Small ring particles are one possibility. If the size of the ring particles is only a few millimeters, the rings can be maintained for 10 million years. Another possibility is the existence of an undiscovered shepherd satellite which slows down the dissolution of the rings," explains Prof. Kokubo.

Dr. Michikoshi adds, "The interaction between the rings and a satellite is also an important process in Saturn's rings. To better understand the effect of a satellite on ring structure, we plan to construct a new model for the formation of Chariklo's rings."

Ring systems, such as the iconic rings around Saturn and Uranus, are composed of particles ranging from centimeters to meters in size. Until now, the difficulty of calculating the trajectories and mutual interactions of all these particles had confounded attempts to study rings through computer simulations. Previous researchers either simulated only a portion of a ring system, ignoring the overall structure, or used unrealistically large particles, ignoring the detailed structures.

With a particle density equal to half of Chariklo's density, the overall structure can be maintained. In the close-up view (right), complicated, elongated structures are visible. These structures are called self-gravity wakes. The numbers along the axes indicate distances in km.

Credit: Shugo Michikoshi (Kyoto Women's University/University of Tsukuba)

In 2014, two rings separated by a gap were discovered around Chariklo, the largest known centaur. Centaurs are small bodies wandering between Jupiter and Neptune. Although Chariklo is only hundreds of kilometers in size, its rings are as opaque as those around Saturn and Uranus. Thus Chariklo offered an ideal chance to model a complete ring system.

Contacts and sources:
Dr. Hinako Fukushi
National Institutes Of Natural Sciences

*1 "ATERUI" is a supercomputer for astrophysical simulations in the Center for Computational Astrophysics, NAOJ. Its theoretical peak performance is 1.058 Pflops. It is installed at NAOJ Mizusawa Campus in Oshu City, Iwate, Japan. (Related Article: Supercomputer for Astronomy "ATERUI" Upgraded to Double its Speed. (November 13, 2014))

*2 This study used FDPS (Framework for Developing Particle Simulator), a general-purpose high-performance library for particle simulations developed by RIKEN Advanced Institute for Computational Science. On parallel computers such as ATERUI, FDPS calculates particle interactions efficiently with ideal load balancing. By developing the new simulation code with FDPS, the research team succeeded in global simulations of the rings.

Mineral Resources: Exhaustion Is Just A Myth

Recent articles have declared that deposits of mineral raw materials (copper, zinc, etc.) will be exhausted within a few decades. An international team, including the University of Geneva (UNIGE), Switzerland, has shown that this is incorrect and that the resources of most mineral commodities are sufficient to meet the growing demand from industrialization and future demographic changes.

 Future shortages will arise not from physical exhaustion of different metals but from causes related to industrial exploitation, the economy, and environmental or societal pressures on the use of mineral resources. The report can be read in the journal Geochemical Perspectives.

Some scientists have declared that mineral deposits containing important non-renewable resources such as copper and zinc will be exhausted in a few decades if consumption does not decrease. Reaching the opposite conclusion, the international team of researchers shows that even though mineral resources are finite, geological arguments indicate that they are sufficient for at least many centuries, even taking into account the increasing consumption required to meet the growing needs of society. How can this difference be explained?

Comparison of changing estimates for copper reserves, resources and theoretical estimate of ultimate resource to depth of 3.3 km. These estimates are based on grades similar to those of deposits exploited today. If lower grades become feasible to mine, as has occurred over the past century, the resource size could increase significantly. Note log scale.

Credit: © UNIGE

Definitions matter: reserves and resources

"Do not confuse the mineral resources that exist within the Earth with reserves, which are mineral resources that have been identified and quantified and are able to be exploited economically. Some studies that predict upcoming shortages are based on statistics that only take reserves into account, i.e. a tiny fraction of the deposits that exist", explains Lluis Fontboté, Professor in the Department of Earth Sciences, University of Geneva. To define reserves is a costly exercise that requires investment in exploration, drilling, analyses, and numerical and economic evaluations. Mining companies explore and delineate reserves sufficient for a few decades of profitable operation. Delineation of larger reserves would be a costly and unproductive investment, and does not fit the economic logic of the modern market.

The result is that the estimated life of most mineral commodities is between twenty and forty years, and has remained relatively constant over decades. Use of these values to predict the amount available leads to the frequently announced risks of impending shortages. But this type of calculation is obviously wrong, because it takes into account neither the metal in lower-quality deposits that are not included in reserves nor the huge amount of metal in deposits that have not yet been discovered. Some studies have produced figures that include the known and undiscovered resources, but as our knowledge of ore deposits in large parts of the Earth's crust is very fragmentary, these estimates are generally very conservative.
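
The "estimated life" figures in question come from a simple static calculation, reserves divided by annual production, which assumes flat production and frozen reserves. A toy example with hypothetical figures (not real commodity data) shows the arithmetic:

```python
def static_lifetime(reserves, annual_production):
    """Static reserve lifetime in years: reserves / annual production.

    This is the calculation the article criticizes: it ignores lower-grade
    deposits outside current reserves and deposits not yet discovered."""
    return reserves / annual_production

# Hypothetical figures for illustration only, in million tonnes:
print(static_lifetime(reserves=800.0, annual_production=20.0))  # 40.0 years
```

Because companies only delineate a few decades' worth of reserves, both numbers grow together and the ratio stays near twenty to forty years indefinitely.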

The vast majority of mined deposits have been discovered at the surface or in the uppermost 300 meters of the crust, but we know that deposits are also present at greater depths. Current techniques allow mining to depths of at least 2000 to 3000 meters. Thus, many mineral deposits that exist have not yet been discovered, and are not included in the statistics. There have been some mineral shortages in the past, especially during the boom related to China's growth, but these are not due to a lack of supplies, but to operational and economic issues. For instance, between the discovery of a deposit and its effective operation, 10 to 20 years or more can elapse, and if demand rises sharply, industrial exploitation cannot respond instantly, creating a temporary shortage.

Environment and society

"The real problem is not the depletion of resources, but the environmental and societal impact of mining operations", says Professor Fontboté. Mining has been undeniably linked to environmental degradation. While impacts can be mitigated by modern technologies, many challenges remain. The financial, environmental and societal costs of mining must be equitably apportioned between industrialized and developing countries, as well as between local communities near mines and the rest of society. 

"Recycling is important and essential, but is not enough to meet the strong growth in demand from developing countries. We must continue to seek and carefully exploit new deposits, both in developing and in industrialized countries", says the researcher at the University of Geneva.

The importance of research

But how can we protect the environment while continuing to mine? Continuing research provides the solutions. If we are to continue mining while minimizing associated environmental effects, we need to better understand the formation of ore deposits and to open new areas of exploration with advanced methods of remote sensing. The continual improvement of exploration and mining techniques is reducing the impact on the Earth's surface.

"Rapid evolution of technologies and society will eventually reduce our need for mineral raw materials, but at the same time, these new technologies are creating new needs for metals, such as many of the 60 elements that make up every smart phone", adds Professor Fontboté.

The geological perspective that guided the present study leads to the conclusion that shortages will not become a threat for many centuries as long as there is a major effort in mineral exploration, coupled with conservation and recycling. To meet this challenge, society must find ways to discover and mine the needed mineral resources while respecting the environment and the interests of local communities.

Contacts and sources:
Lluis Fontboté
University of Geneva (UNIGE)

The Reason Food Looks Even Better When Dieting

A newly discovered molecule increases appetite during fasting and decreases it during gorging. The neuron-exciting protein, named NPGL, apparently aims to maintain body mass at a constant, come feast or famine. An evolutionary masterstroke, but not great news for those looking to trim down - or beef up for the summer.

Over recent decades, our understanding of hunger has greatly increased, but this new discovery turns things on their head. Up until now, scientists knew that leptin, a hormone released by fatty tissue, reduces appetite, while ghrelin, a hormone released by stomach tissue, makes us want to eat more. These hormones, in turn, activate a host of neurons in the brain’s hypothalamus – the body’s energy control center.

Can you really control what you eat?

Credit: Hiroshima University

The discovery of NPGL by Professor Kazuyoshi Ukena of Hiroshima University shows that hunger and energy consumption mechanisms are even more complex than we realized - and that NPGL plays a central role in what were thought to be well-understood processes.

Professor Ukena first discovered NPGL in chickens after noticing that growing birds put on mass irrespective of their diet - suggesting there was more to energy metabolism than meets the eye. Intrigued, the researchers at HU performed a DNA database search to see if mammals might also possess this elusive substance. They found that it exists in all vertebrates - including humans.

In order to investigate its role, if any, in mammals, Professor Ukena’s team fed three groups of mice, on three distinct diets, to see how NPGL levels are altered. The first set of mice was fed on a low-calorie diet for 24 hours. The second group was fed on a high-fat diet for 5 weeks - and the third lucky group was fed on a high-fat diet, but for an extended period of 13 weeks.

The mice fed on a low-calorie diet were found to experience an extreme increase in NPGL expression, while the 5-week high-fat-diet group saw a large decrease in NPGL expression.

NPGL apparently aims to maintain body mass at a constant, come feast or famine.

Credit: Hiroshima University

Further analysis found that mice possess NPGL, and its associated neuron network, in the exact same locations of the brain as those regions already known to control appetite suppression and energy use.

Professor Ukena proposes that NPGL plays a vital role in these mechanisms - increasing appetite when energy levels fall and reducing appetite when an energy overload is detected – together, helping to keep us at a healthy and functioning weight, and more importantly alive!

As NPGL levels greatly increased in mice exposed to a low calorie diet, Professor Ukena believes it is an appetite promoter, working in opposition to appetite suppressing hormones such as leptin. Backing this hypothesis up, it was found that mice directly injected with NPGL exhibited a voracious appetite.

Interestingly, NPGL levels, which plummeted in the 5-week-long high-fat-diet mice, fell back to normal levels in mice that gorged themselves for the longer period of 13 weeks.

It is proposed that exposure to high-fat diets for long periods of time leads to insensitivity to leptin’s appetite-suppressing effects, and so NPGL - even at normal levels - leads to weight gain and obesity, showing that the body can only do so much to keep our weight in check.

Professor Ukena says that further study is required to understand the interaction of previously known appetite mechanisms with this new kid on the homeostasis block. It does seem, however, that we still have a lot to learn about appetite, hunger, and energy consumption. It is hoped that this study into mammalian NPGL adds another piece to the puzzle.

What is certain - but you knew this already - is that dieting is difficult. The discovery and study of mammalian NPGL helps explain why, and provides a plausible excuse for those whose good intentions fall short.

Contacts and sources:
Professor Kazuyoshi Ukena
Hiroshima University

Citation: Neurosecretory protein GL, a hypothalamic small secretory protein, participates in energy homeostasis in male mice.
Daichi Matsuura Kenshiro Shikano Takaya Saito Eiko Iwakoshi-Ukena Megumi Furumitsu Yuta Ochi Manami Sato George E. Bentley Lance J. Kriegsfeld Kazuyoshi Ukena
Endocrinology en.2017-00064. DOI:
Published: 17 March 2017

AI-Based Smartphone Application Can Predict User’s Health Risks

VTT has developed artificial intelligence (AI)-based data analysis methods used in a smartphone application of Odum Ltd. The application can estimate its users' health risks and, if necessary, guide them towards a healthier lifestyle.

"Based on an algorithm developed by VTT, we can predict the risk of illness-related absence from work among members of the working population over the next 12 months, with up to 80 percent sensitivity," says VTT's Mark van Gils, the scientific coordinator of the project.

"This is a good example of how we can find new insights that are concretely valuable to both citizens and health care professionals by analysing large and diverse data masses."

Photo: Odum

The application also guides individuals at risk to complete an electronic health exam and take the initiative in promoting their own health.

During the project Odum and VTT examined health data collected from 18–64 year-olds over the course of several years. The project received health data from a total of 120,000 working individuals.

"Health care costs are growing at an alarming pace and health problems are not being addressed early enough," says Jukka Suovanen, CEO of Odum. "Our aim is to decrease illness-related absences by 30 percent among application users and add 10 healthy years to their lives."

Photo: Odum

The most cost-efficient way to improve quality of life and decrease health care costs for both individuals and society is to promote the health of individuals and encourage them to take initiative in reducing their health risks.

VTT is one of the leading research and technology companies in Europe. We help our clients develop new technologies and service concepts in the areas of Digital Health, Wearable technologies and Diagnostics - supporting their growth with top-level research and science-based results.

Contacts and sources:
Technical Research Centre of Finland (VTT)

Jukka Suovanen, CEO

Ice cave in Transylvania Reveals How Winter Changed over the Past 10,000 Years

Ice cores drilled from a glacier in a cave in Transylvania offer new evidence of how Europe's winter weather and climate patterns fluctuated during the last 10,000 years, known as the Holocene period.

The cores provide insights into how the region's climate has changed over time. The researchers' results, published this week in the journal Scientific Reports, could help reveal how the climate of the North Atlantic region, which includes the U.S., varies on long time scales.

The project, funded by the National Science Foundation (NSF) and the Romanian Ministry of Education, involved scientists from the University of South Florida (USF), University of Belfast, University of Bremen and Stockholm University, among other institutions.

A view of what scientists call "The Church," a chamber with exceptionally rich ice formations.

Credit: B. Onac

Researchers from the Emil Racoviță Institute of Speleology in Cluj-Napoca, Romania, and USF's School of Geosciences gathered their evidence in the world's most-explored ice cave and oldest cave glacier, hidden deep in the heart of Transylvania in central Romania.

With its towering ice formations and large underground ice deposit, Scărișoara Ice Cave is among the most important scientific sites in Europe.

Scientist Bogdan Onac of USF and his colleague Aurel Perșoiu, working with a team of researchers in Scărișoara Ice Cave, sampled the ancient ice there to reconstruct winter climate conditions during the Holocene period.

Over the last 10,000 years, snow and rain dripped into the depths of Scărișoara, where they froze into thin layers of ice containing chemical evidence of past winter temperature changes.

Until now, scientists lacked long-term reconstructions of winter climate conditions. That knowledge gap hampered a full understanding of past climate dynamics, Onac said.

"Most of the paleoclimate records from this region are plant-based, and track only the warm part of the year -- the growing season," says Candace Major, program director in NSF's Directorate for Geosciences, which funded the research. "That misses half the story. The spectacular ice cave at Scărișoara fills a crucial piece of the puzzle of past climate change in recording what happens during winter."

The "Great Hall" in the Scărișoara Ice Cave, where researchers extracted ice cores.

Credit: A. Persoiu

Reconstructions of Earth's climate record have relied largely on summer conditions, charting fluctuations through vegetation-based samples, such as tree ring width, pollen and organisms that thrive in the warmer growing season.

Absent, however, were important data from winters, Onac said.

Located in the Apuseni Mountains, the region surrounding the Scărișoara Ice Cave receives precipitation from the Atlantic Ocean and the Mediterranean Sea and is an ideal location to study shifts in the courses storms follow across East and Central Europe, the scientists say.

Radiocarbon dating of minute leaf and wood fragments preserved in the cave's ice indicates that its glacier is at least 10,500 years old, making it the oldest cave glacier in the world and one of the oldest glaciers on Earth outside the polar regions.
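
The age estimate rests on standard radiocarbon arithmetic: the fraction of carbon-14 remaining in organic material decays exponentially with a half-life of about 5,730 years. The sketch below shows the uncalibrated calculation (real labs apply calibration corrections, and the 28 percent figure is an illustrative input, not a measurement from this study):

```python
import math

C14_HALF_LIFE = 5730.0  # years

def radiocarbon_age(fraction_remaining):
    """Uncalibrated age in years from the surviving 14C fraction:
    t = (half-life / ln 2) * ln(1 / fraction_remaining)."""
    return C14_HALF_LIFE / math.log(2) * math.log(1.0 / fraction_remaining)

# A sample retaining about 28% of its original 14C dates to roughly 10,500 years:
print(round(radiocarbon_age(0.28)))
```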

The 16-meter (52-foot) ice cliff, seen here from the "Small Reserve."
Credit: C. Ciubotarescu

From samples of the ice, the researchers were able to chart the details of winter conditions growing warmer and wetter over time in Eastern and Central Europe. Temperatures reached a maximum during the mid-Holocene some 7,000 to 5,000 years ago and decreased afterward toward the Little Ice Age, 150 years ago.

A major shift in atmospheric dynamics occurred during the mid-Holocene, when winter storm tracks switched, producing wetter and colder conditions in northwestern Europe and expanding a Mediterranean-type climate toward southeastern Europe.

"Our reconstruction provides one of the very few winter climate reconstructions, filling in numerous gaps in our knowledge of past climate variability," Onac said.

Panoramic view of an ice cliff inside the Scărișoara Ice Cave, where the research was done.

Credit: Gigi Fratila & Claudiu Szabo

Warming winter temperatures led to rapid environmental changes that allowed the northward expansion of Neolithic farmers into mainland Europe, and the rapid peopling of the continent.

"Our data allow us to reconstruct the interplay between Atlantic and Mediterranean sources of moisture," Onac said. "We can also draw conclusions about past atmospheric circulation patterns, with implications for future climate changes. Our research offers a long-term context to better understand these changes."

The results from the study tell scientists how the climate of the North Atlantic region, which includes the U.S., varies on long time scales. The scientists are continuing their cave study, working to extend the record back 13,000 years or more.


Media Contacts
Cheryl Dybas, NSF
Vickie Chachere, USF

Cancer Diagnosis Possible with New Breath Test

A new test for the early detection of lung cancer measures tiny changes in the composition of the breath

“Inhale deeply ... and exhale.” This is what a test for lung cancer could be like in future. Scientists at the Max Planck Institute for Heart and Lung Research in Bad Nauheim have developed a method that can detect the disease at an early stage. To this end, they investigated the presence of traces of RNA molecules that are altered by cancer growth.

 In a study on healthy volunteers and cancer patients, the breath test correctly determined the health status of 98 percent of the participants. The method will now be refined in cooperation with licensing partners so that it can be used for the diagnosis of lung cancer.

Scientists can collect RNA molecules released from lung tissue and amplify them with the help of quantitative reverse transcription polymerase chain reaction (qRT-PCR). The molecules can serve as a diagnostic tool to detect cancer cells in the lungs.

Credit: © MPI f. Heart and Lung Research/ G. Barreto

Most lung cancer patients die within five years of diagnosis. One of the main reasons for this is the insidious and largely symptom-free onset of the disease, which often remains unnoticed. In the USA, high-risk groups, such as heavy smokers, are therefore routinely examined by CAT scan. However, patients can be wrongly classified as having the disease.

Together with cooperation partners, researchers at the Max Planck Institute for Heart and Lung Research have now developed a breath test that is much more accurate. In their research, the diagnosis of lung cancer was correct in nine out of ten cases. The method is therefore reliable enough to be used for the routine early detection of lung cancer.

The researchers analyzed RNA molecules released from lung tissue into expired breath, noting differences between healthy subjects and lung cancer patients. Unlike DNA, the RNA profile is not identical in every cell. Several RNA variants, and therefore different proteins, can arise from one and the same DNA segment. In healthy cells, such variants are present in a characteristic ratio. The scientists discovered that cancerous and healthy cells contain different amounts of RNA variants of the GATA6 and NKX2-1 genes; cancer cells resemble lung cells in the embryonic stage.

The researchers developed a method to isolate RNA molecules. Not only is their concentration in expired breath extremely low, but they are also frequently highly fragmented. The researchers then investigated the RNA profile in subjects with and without lung cancer and from these data established a model for diagnosing the disease. In a test of 138 subjects whose health status was known, the test was able to identify 98 percent of the patients with lung cancer, and 90 percent of the detected abnormalities were in fact cancerous.
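
The two percentages quoted correspond to the standard classification metrics sensitivity (the share of true cancers detected) and precision (the share of positive test results that are correct). A sketch of how they are computed from a confusion matrix, using hypothetical counts chosen to reproduce figures of that order (not the study's actual data):

```python
def sensitivity(true_pos, false_neg):
    # Fraction of actual cancer cases that the test flags.
    return true_pos / (true_pos + false_neg)

def precision(true_pos, false_pos):
    # Fraction of flagged cases that really are cancer.
    return true_pos / (true_pos + false_pos)

# Hypothetical confusion-matrix counts for illustration only:
tp, fn, fp = 49, 1, 5
print(sensitivity(tp, fn))   # 0.98
print(precision(tp, fp))     # about 0.907
```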

“The breath test could make the detection of early-stage lung cancer easier and more reliable, but it will not completely supplant conventional techniques,” says Guillermo Barreto, a Working Group Leader at the Max Planck Institute in Bad Nauheim. “However, it can complement other techniques for detecting early cancer stages and reduce false-positive diagnoses.”

The scientists will contribute to future large-scale clinical trials. Together with the technology transfer organization Max Planck Innovation, they are seeking licensing partners to develop the breath test to maturity and market it. They also hope to use RNA profiles for the early detection of other diseases. Tiny changes could produce tissue profiles, akin to an RNA fingerprint, that reveal diseased cells and allow for rapid treatment.

Contacts and sources:
Dr. Guillermo Barreto
Max Planck Institute for Heart and Lung Research, Bad Nauheim

Citation: Mehta et al. Non-invasive lung cancer diagnosis by detection of GATA6 and NKX2-1 isoforms in exhaled breath condensate. EMBO Molecular Medicine. DOI:

Why Some Let Their Thoughts Run Free and Others Do Not

A wandering mind and daydreaming are more than just a fault in the system

In people who intentionally let their minds wander, two main brain cell networks broadly overlap

Our thoughts are not always tethered to events in the moment. Although mind wandering is often considered a lapse in attention, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and the University of York in England have shown that when we engage internal thoughts in a deliberate manner, this is reflected by more effective processing in brain systems involved in cognitive control. This could explain why some people benefit from letting their thoughts run free and others do not.

A daydreaming gentleman.
Credit: Wikimedia Commons

Since people start to make mistakes as soon as they lose concentration on their surroundings, mind wandering has long been interpreted as a failure in control. Now we know that this phenomenon is more complex: Besides the unintentional, spontaneous wandering of our thoughts, mind wandering can serve as a kind of deliberate mental rehearsal that allows us to consider future events and solve problems.

Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and the University of York in England have shown that involuntary and intentional mind wandering can be dissociated based on brain structure and function, building on prior studies that demonstrate behavioral and psychological differences.

“We found that in people who often purposefully allow their minds to go off on a tangent, the cortex is thicker in some prefrontal regions”, says Johannes Golchert, PhD student at the Max Planck Institute in Leipzig and first author of the study. “Furthermore, we found that in people who intentionally mind wander, two main brain networks broadly overlap each other: the default-mode network, which is active when focusing on information from memory, and the fronto-parietal network, which stabilizes our focus and inhibits irrelevant stimuli as part of our cognitive control system.”

While both networks are strongly connected to each other, the control network can influence our thoughts, helping us focus on goals in a more stable manner. This can be seen as evidence that our mental control is not impaired when we deliberately allow our mind to wander. 

“In this case, our brain barely distinguishes between focusing outwards on our environment or inwards on our thoughts. In both situations the control network is involved”, Golchert explains. “Mind wandering should not just be considered as something disturbing. If you’re able to control it to some extent, that is to say, suppress it when necessary and to let it run free when possible, then you can make the most of it.”

The neuroscientists investigated these relationships using psychological questionnaires and magnetic resonance imaging (MRI). Participants were asked to respond to statements such as: “I allow my thoughts to wander on purpose,” or “I find my thoughts wandering spontaneously”, and then underwent MRI scanning for measures of brain structure and connectivity. The differences in types of mind wandering across participants were then related to differences in brain organization.

Contacts and sources:
Dr. Daniel S. Margulies
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig

Citation: Golchert, J.; Smallwood, J.; Jefferies, E.; Seli, P.; Huntenburg, J. M.; Liem, F.; Lauckner, M.; Oligschläger, S.; Bernhardt, B.; Villringer, A.; Margulies, D. S.
Individual variation in intentionality in the mind-wandering state is reflected in the integration of the default-mode, fronto-parietal, and limbic networks.
NeuroImage 2017; 146, 226 - 235

Illuminating the Cosmic Web

Astronomers use the light of double quasars to measure the structure of the universe

Astronomers believe that matter in intergalactic space is distributed in a vast network of interconnected filamentary structures known as the cosmic web. Nearly all the atoms in the Universe reside in this web, vestigial material left over from the Big Bang. A team led by researchers from the Max Planck Institute for Astronomy in Heidelberg has made the first measurements of small-scale fluctuations in the cosmic web just 2 billion years after the Big Bang. These measurements were enabled by a novel technique using pairs of quasars to probe the cosmic web along adjacent, closely separated lines of sight. They promise to help astronomers reconstruct an early chapter of cosmic history known as the epoch of reionization.
Snapshot of a supercomputer simulation showing part of the cosmic web, 11.5 billion years ago. The researchers created this and other models of the universe and directly compared them with quasar pair data in order to measure the small-scale ripples in the cosmic web. The cube is 24 million light-years on a side.

Credit: © J. Oñorbe / MPIA

The most barren regions of the Universe are the far-flung corners of intergalactic space. In these vast expanses between the galaxies there are only a few atoms per cubic meter – a diffuse haze of hydrogen gas left over from the Big Bang. Viewed on the largest scales, this diffuse material nevertheless accounts for the majority of atoms in the Universe, and fills the cosmic web, its tangled strands spanning billions of light years.

Now, a team led by astronomers from the Max Planck Institute for Astronomy (MPIA) has made the first measurements of small-scale ripples in this primeval hydrogen gas. Although the regions of cosmic web they studied lie nearly 11 billion light years away, they were able to measure variations in its structure on scales a hundred thousand times smaller, comparable to the size of a single galaxy.

Intergalactic gas is so tenuous that it emits no light of its own. Instead astronomers study it indirectly by observing how it selectively absorbs the light coming from faraway sources known as quasars. Quasars constitute a brief hyperluminous phase of the galactic life-cycle, powered by the infall of matter onto a galaxy's central supermassive black hole.

Quasars act like cosmic lighthouses – bright, distant beacons that allow astronomers to study intergalactic atoms residing between the quasar's location and Earth. But because these hyperluminous episodes last only a tiny fraction of a galaxy’s lifetime, quasars are correspondingly rare on the sky, and are typically separated by hundreds of millions of light years from each other.

Schematic representation of the technique used to probe the small-scale structure of the cosmic web using light from a rare quasar pair. The spectra (bottom right) contain information about the hydrogen gas the light has encountered on its journey to Earth, as well as the distance of that gas.
Credit: © J. Oñorbe / MPIA

In order to probe the cosmic web on much smaller length scales, the astronomers exploited a fortuitous cosmic coincidence: They identified exceedingly rare pairs of quasars right next to each other on the sky, and measured subtle differences in the absorption of intergalactic atoms measured along the two sightlines.
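The core idea of comparing absorption along two adjacent sightlines can be illustrated with a toy sketch. This is synthetic data, not the study's actual pipeline: flux fluctuations imprinted by the same cosmic-web filaments should correlate strongly for a close quasar pair, while an unrelated sightline shows essentially no correlation.

```python
import numpy as np

# Toy model: two nearby sightlines share absorption from common structure,
# plus independent noise; a distant sightline shares nothing.
rng = np.random.default_rng(0)
shared = rng.normal(size=2000)            # absorption from common filaments
flux_a = shared + 0.3 * rng.normal(size=2000)
flux_b = shared + 0.3 * rng.normal(size=2000)
flux_far = rng.normal(size=2000)          # unrelated, widely separated sightline

def corr(x, y):
    """Normalized cross-correlation coefficient at zero lag."""
    return float(np.corrcoef(x, y)[0, 1])

print(f"close pair:   {corr(flux_a, flux_b):.2f}")    # strongly correlated
print(f"distant pair: {corr(flux_a, flux_far):.2f}")  # consistent with zero
```

The real measurement quantifies such correlations as a function of sightline separation and compares them against simulated universes, but the underlying statistic is of this flavor.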

Alberto Rorai, a post-doctoral researcher at Cambridge University and lead author of the study, says: “One of the biggest challenges was developing the mathematical and statistical tools to quantify the tiny differences we measure in this new kind of data.”

Rorai developed these tools as part of the research for his doctoral degree at the MPIA, and applied his tools to spectra of quasars obtained with the largest telescopes in the world, including the 10 meter diameter Keck telescopes at the summit of Mauna Kea in Hawaii, as well as ESO's 8 meter diameter Very Large Telescope on Cerro Paranal, and the 6.5 meter diameter Magellan telescope at Las Campanas Observatory, both located in the Chilean Atacama Desert.

The astronomers compared their measurements to supercomputer models that simulate the formation of cosmic structures from the Big Bang to the present. “The input to our simulations are the laws of Physics and the output is an artificial Universe which can be directly compared to astronomical data. I was delighted to see that these new measurements agree with the well-established paradigm for how cosmic structures form,” says Jose Oñorbe, a post-doctoral researcher at the MPIA, who led the supercomputer simulation effort.

On a single laptop, these complex calculations would have required almost a thousand years to complete, but modern supercomputers enabled the researchers to carry them out in just a few weeks.

Joseph Hennawi, who leads the research group at MPIA responsible for the measurement, explains: “One reason why these small-scale fluctuations are so interesting is that they encode information about the temperature of gas in the cosmic web just a few billion years after the Big Bang.” According to current understanding, the universe had a turbulent youth: about 400,000 years after the Big Bang, it had cooled enough for neutral hydrogen to form. At that point there were practically no luminous objects, and therefore no light. It was not until a few hundred million years later that this 'dark age' ended and a new era began, in which stars and quasars lit up and emitted energetic ultraviolet radiation. This radiation was so intense that it stripped atoms in intergalactic space of their electrons – the gas was ionized again.

How and when reionization occurred is one of the biggest open questions in the field of cosmology, and these new measurements provide important clues that will help narrate this chapter of cosmic history.

Contacts and sources:
Dr. Markus Pössel
Max Planck Institute for Astronomy in Heidelberg

Citation: A. Rorai et al. Measurement of the Small-Scale Structure of the Intergalactic Medium Using Close Quasar Pairs. Science, 28 April 2017

DNA from Extinct Humans Discovered in Cave Sediments

Researchers have developed a new method to retrieve hominin DNA from cave sediments -- even in the absence of skeletal remains.
While there are numerous prehistoric sites in Europe and Asia that contain tools and other human-made artefacts, skeletal remains of ancient humans are scarce. Researchers of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, have therefore looked into new ways to get hold of ancient human DNA.

From sediment samples collected at seven archaeological sites, the researchers "fished out" tiny DNA fragments that had once belonged to a variety of mammals, including our extinct human relatives. They retrieved DNA from Neandertals in cave sediments of four archaeological sites, also in layers where no hominin skeletal remains have been discovered. In addition, they found Denisovan DNA in sediments from Denisova Cave in Russia. These new developments now enable researchers to uncover the genetic affiliations of the former inhabitants of many archaeological sites which do not yield human remains.

This is an entrance to the archaeological site of Vindija Cave, Croatia.

Credit: MPI f. Evolutionary Anthropology/ J. Krause

By looking into the genetic composition of our extinct relatives, the Neandertals, and their cousins from Asia, the Denisovans, researchers can shed light on our own evolutionary history. However, fossils of ancient humans are rare, and they are not always available or suitable for genetic analyses. "We know that several components of sediments can bind DNA", says Matthias Meyer of the Max Planck Institute for Evolutionary Anthropology. "We therefore decided to investigate whether hominin DNA may survive in sediments at archaeological sites known to have been occupied by ancient hominins."

To this aim Meyer and his team collaborated with a large network of researchers excavating at seven archaeological sites in Belgium, Croatia, France, Russia and Spain. Overall, they collected sediment samples covering a time span from 14,000 to over 550,000 years ago. Using tiny amounts of material the researchers recovered and analyzed fragments of mitochondrial DNA - genetic material from the mitochondria, the "energy factories" of the cell - and identified them as belonging to twelve different mammalian families that include extinct species such as the woolly mammoth, the woolly rhinoceros, the cave bear and the cave hyena.

A sediment sample is prepared for DNA extraction.

Credit: MPI f. Evolutionary Anthropology/ S. Tüpke

The researchers then looked specifically for ancient hominin DNA in the samples. "From the preliminary results, we suspected that in most of our samples, DNA from other mammals was too abundant to detect small traces of human DNA", says Viviane Slon, Ph.D. student at the Max Planck Institute in Leipzig and first author of the study. "We then switched strategies and started targeting specifically DNA fragments of human origin."

Nine samples from four archaeological sites contained enough ancient hominin DNA for further analyses: Eight sediment samples contained Neandertal mitochondrial DNA from either one or multiple individuals, while one sample contained Denisovan DNA. Most of these samples originated from archaeological layers or sites where no Neandertal bones or teeth were previously found.

A new tool for archaeology

"By retrieving hominin DNA from sediments, we can detect the presence of hominin groups at sites and in areas where this cannot be achieved with other methods", says Svante Pääbo, director of the Evolutionary Genetics department at the Max Planck Institute for Evolutionary Anthropology and co-author of the study. "This shows that DNA analyses of sediments are a very useful archaeological procedure, which may become routine in the future".

This image shows excavations at the site of El Sidrón, Spain.

Credit: El Sidrón research team

Even sediment samples that were stored at room temperature for years still yielded DNA. Analyses of these and of freshly-excavated sediment samples recovered from archaeological sites where no human remains are found will shed light on these sites' former occupants and our joint genetic history.

Contacts and sources:
Dr. Matthias Meyer

Citation: Viviane Slon, Charlotte Hopfe, Clemens L. Weiß, Fabrizio Mafessoni, Marco de la Rasilla, Carles Lalueza-Fox, Antonio Rosas, Marie Soressi, Monika V. Knul, Rebecca Miller, John R. Stewart, Anatoly P. Derevianko, Zenobia Jacobs, Bo Li, Richard G. Roberts, Michael V. Shunkov, Henry de Lumley, Christian Perrenoud, Ivan Gušić, Željko Kućan, Pavao Rudan, Ayinuer Aximu-Petri, Elena Essel, Sarah Nagel, Birgit Nickel, Anna Schmidt, Kay Prüfer, Janet Kelso, Hernán A. Burbano, Svante Pääbo, Matthias Meyer
Neandertal and Denisovan DNA from Pleistocene sediments.
Science, 27 April 2017

Thursday, April 27, 2017

Long-held Tsunami Formation Theory Challenged by New NASA Study

A new NASA study is challenging a long-held theory that tsunamis form and acquire their energy mostly from vertical movement of the seafloor.

It is undisputed that most tsunamis result from a massive shifting of the seafloor -- usually from the subduction, or sliding, of one tectonic plate under another during an earthquake. Experiments conducted in wave tanks in the 1970s demonstrated that vertical uplift of the tank bottom could generate tsunami-like waves. In the following decade, Japanese scientists simulated horizontal seafloor displacements in a wave tank and observed that the resulting energy was negligible. This led to the current widely held view that vertical movement of the seafloor is the primary factor in tsunami generation.

Photo taken March 11, 2011, by Sadatsugu Tomizawa and released via Jiji Press on March 21, 2011, showing tsunami waves hitting the coast of Minamisoma in Fukushima prefecture, Japan.
Credits: Sadatsugu Tomizawa CC BY-NC-ND 2.0

In 2007, Tony Song, an oceanographer at NASA’s Jet Propulsion Laboratory in Pasadena, California, cast doubt on that theory after analyzing the powerful 2004 Sumatra earthquake in the Indian Ocean. Seismograph and GPS data showed that the vertical uplift of the seafloor did not produce enough energy to create a tsunami that powerful. But formulations by Song and his colleagues showed that once energy from the horizontal movement of the seafloor was factored in, all of the tsunami’s energy was accounted for. Those results matched tsunami data collected from a trio of satellites – the NASA/Centre National d’Etudes Spatiales (CNES) Jason, the U.S. Navy’s Geosat Follow-on and the European Space Agency’s Environmental Satellite.

Further research by Song on the 2004 Sumatra earthquake, using satellite data from the NASA/German Aerospace Center Gravity Recovery and Climate Experiment (GRACE) mission, also backed up his claim that the amount of energy created by the vertical uplift of the seafloor alone was insufficient for a tsunami of that size.

“I had all this evidence that contradicted the conventional theory, but I needed more proof,” Song said.

His search for more proof rested on physics -- namely, the fact that horizontal seafloor movement creates kinetic energy, which is proportional to the depth of the ocean and the speed of the seafloor's movement. After critically evaluating the wave tank experiments of the 1980s, Song found that the tanks used did not accurately represent either of these two variables. They were too shallow to reproduce the actual ratio between ocean depth and seafloor movement that exists in a tsunami, and the wall in the tank that simulated the horizontal seafloor movement moved too slowly to replicate the actual speed at which a tectonic plate moves during an earthquake.
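The scale of the effect can be illustrated with a toy estimate. This is not Song's actual formulation, just a back-of-the-envelope sketch assuming the water column above the rupture briefly acquires the seafloor's horizontal speed; the depth and speed values are illustrative round numbers.

```python
RHO = 1025.0  # approximate density of seawater, kg/m^3

def column_kinetic_energy(depth_m, speed_m_s, area_m2):
    """KE = 1/2 * m * v^2 for a water column of the given depth and footprint."""
    mass = RHO * depth_m * area_m2  # kg of water above the moving seafloor
    return 0.5 * mass * speed_m_s ** 2

# A shallow, slow wave tank versus the open ocean over a fast rupture:
shallow = column_kinetic_energy(depth_m=0.5, speed_m_s=0.1, area_m2=1.0)
deep = column_kinetic_energy(depth_m=4000.0, speed_m_s=2.0, area_m2=1.0)
print(f"shallow tank: {shallow:.2f} J per m^2")
print(f"open ocean:   {deep:.2e} J per m^2")
```

Because the energy grows linearly with depth and quadratically with speed, a tank that understates both variables understates the horizontal contribution by many orders of magnitude, which is the distortion Song identified in the 1980s experiments.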

“I began to consider that those two misrepresentations were responsible for the long-accepted but misleading conclusion that horizontal movement produces only a small amount of kinetic energy,” Song said.

Building a Better Wave Tank

To put his theory to the test, Song and researchers from Oregon State University in Corvallis simulated the 2004 Sumatra and 2011 Tohoku earthquakes at the university’s Wave Research Laboratory by using both directly measured and satellite observations as reference. Like the experiments of the 1980s, they mimicked horizontal land displacement in two different tanks by moving a vertical wall in the tank against water, but they used a piston-powered wave maker capable of generating faster speeds. They also better accounted for the ratio of how deep the water is to the amount of horizontal displacement in actual tsunamis.

The new experiments illustrated that horizontal seafloor displacement contributed more than half the energy that generated the 2004 and 2011 tsunamis.

“From this study, we’ve demonstrated that we need to look at not only the vertical but also the horizontal movement of the seafloor to derive the total energy transferred to the ocean and predict a tsunami,” said Solomon Yim, a professor of civil and construction engineering at Oregon State University and a co-author on the study.

The finding further validates an approach developed by Song and his colleagues that uses GPS technology to detect a tsunami’s size and strength for early warnings.

The JPL-managed Global Differential Global Positioning System (GDGPS) is a highly accurate real-time GPS processing system that can measure seafloor movement during an earthquake. As the land shifts, ground receiver stations near the epicenter shift with it. Through real-time communication with a constellation of satellites, the stations can report their movement every second, allowing scientists to estimate the amount and direction of horizontal and vertical land displacement that took place in the ocean. Song and his colleagues developed computer models that combine this data with ocean-floor topography and other information to calculate the size and direction of a tsunami.

“By identifying the important role of the horizontal motion of the seafloor, our GPS approach directly estimates the energy transferred by an earthquake to the ocean,” Song said. “Our goal is to detect a tsunami’s size before it even forms, for early warnings.”

The study is published in Journal of Geophysical Research -- Oceans.

Contacts and sources:
Alan Buis
Jet Propulsion Laboratory,

'Iceball' Planet Discovered Through Microlensing

Scientists have discovered a new planet with the mass of Earth, orbiting its star at the same distance that we orbit our sun. The planet is likely far too cold to be habitable for life as we know it, however, because its star is so faint. But the discovery adds to scientists' understanding of the types of planetary systems that exist beyond our own.

"This 'iceball' planet is the lowest-mass planet ever found through microlensing," said Yossi Shvartzvald, a NASA postdoctoral fellow based at NASA's Jet Propulsion Laboratory, Pasadena, California, and lead author of a study published in the Astrophysical Journal Letters.

This artist's concept shows OGLE-2016-BLG-1195Lb, a planet discovered through a technique called microlensing.
Artist's concept shows OGLE-2016-BLG-1195Lb
Credits: NASA/JPL-Caltech

Microlensing is a technique that facilitates the discovery of distant objects by using background stars as flashlights. When a star crosses precisely in front of a bright star in the background, the gravity of the foreground star focuses the light of the background star, making it appear brighter. A planet orbiting the foreground object may cause an additional blip in the star’s brightness. In this case, the blip only lasted a few hours. This technique has found the most distant known exoplanets from Earth, and can detect low-mass planets that are substantially farther from their stars than Earth is from our sun.
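The brightening described above follows the standard point-lens (Paczyński) magnification curve, which depends only on the projected separation between the two stars in units of the lens's Einstein radius. A minimal sketch:

```python
import math

def magnification(u):
    """Point-lens (Paczynski) magnification for impact parameter u,
    measured in units of the Einstein radius: A = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# As the foreground star drifts in front of the background star,
# u shrinks and the background star appears brighter.
for u in (1.0, 0.5, 0.1):
    print(f"u = {u}: A = {magnification(u):.2f}")
```

A planet orbiting the lens star adds a short-lived deviation on top of this smooth curve; in the OGLE-2016-BLG-1195Lb event that deviation lasted only a few hours.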

The newly discovered planet, called OGLE-2016-BLG-1195Lb, aids scientists in their quest to figure out the distribution of planets in our galaxy. An open question is whether there is a difference in the frequency of planets in the Milky Way's central bulge compared to its disk, the pancake-like region surrounding the bulge. OGLE-2016-BLG-1195Lb is located in the disk, as are two planets previously detected through microlensing by NASA's Spitzer Space Telescope.

"Although we only have a handful of planetary systems with well-determined distances that are this far outside our solar system, the lack of Spitzer detections in the bulge suggests that planets may be less common toward the center of our galaxy than in the disk," said Geoff Bryden, astronomer at JPL and co-author of the study.

For the new study, researchers were alerted to the initial microlensing event by the ground-based Optical Gravitational Lensing Experiment (OGLE) survey, managed by the University of Warsaw in Poland. Study authors used the Korea Microlensing Telescope Network (KMTNet), operated by the Korea Astronomy and Space Science Institute, and Spitzer, to track the event from Earth and space.

KMTNet consists of three wide-field telescopes: one in Chile, one in Australia, and one in South Africa. When scientists from the Spitzer team received the OGLE alert, they realized the potential for a planetary discovery. The microlensing event alert was only a couple of hours before Spitzer's targets for the week were to be finalized, but it made the cut.

With both KMTNet and Spitzer observing the event, scientists had two vantage points from which to study the objects involved, as though two eyes separated by a great distance were viewing it. Having data from these two perspectives allowed them to detect the planet with KMTNet and calculate the mass of the star and the planet using Spitzer data.

"We are able to know details about this planet because of the synergy between KMTNet and Spitzer," said Andrew Gould, professor emeritus of astronomy at Ohio State University, Columbus, and study co-author.

Although OGLE-2016-BLG-1195Lb is about the same mass as Earth, and the same distance from its host star as our planet is from our sun, the similarities may end there.

OGLE-2016-BLG-1195Lb is nearly 13,000 light-years away and orbits a star so small, scientists aren't sure if it's a star at all. It could be a brown dwarf, a star-like object whose core is not hot enough to generate energy through nuclear fusion. This particular star is only 7.8 percent the mass of our sun, right on the border between being a star and not.

Alternatively, it could be an ultra-cool dwarf star much like TRAPPIST-1, which Spitzer and ground-based telescopes recently revealed to host seven Earth-size planets. Those seven planets all huddle closely around TRAPPIST-1, even closer than Mercury orbits our sun, and they all have potential for liquid water. But OGLE-2016-BLG-1195Lb, at the sun-Earth distance from a very faint star, would be extremely cold -- likely even colder than Pluto is in our own solar system, such that any surface water would be frozen. A planet would need to orbit much closer to the tiny, faint star to receive enough light to maintain liquid water on its surface.
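The "colder than Pluto" comparison can be checked with a simple blackbody equilibrium-temperature scaling. The 278 K reference value (zero albedo, 1 AU from the Sun) is a standard textbook figure, and the 1e-4 solar luminosity assumed for the ultra-cool dwarf is an illustrative value, not a measurement from the study:

```python
T_EQ_1AU = 278.0  # K: rough zero-albedo equilibrium temperature at 1 AU from the Sun

def equilibrium_temp(luminosity_solar, distance_au):
    """Blackbody equilibrium temperature: scales as L^(1/4) and d^(-1/2)."""
    return T_EQ_1AU * luminosity_solar ** 0.25 / distance_au ** 0.5

# Earth-Sun distance from a star emitting ~1e-4 of the Sun's light (assumed):
print(f"planet around ultra-cool dwarf: {equilibrium_temp(1e-4, 1.0):.0f} K")
# Pluto orbits the Sun at roughly 39.5 AU on average:
print(f"Pluto:                          {equilibrium_temp(1.0, 39.5):.0f} K")
```

Under these assumptions the planet sits near 28 K while Pluto's equilibrium temperature is near 44 K, consistent with the article's claim that any surface water would be frozen solid.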

Ground-based telescopes available today are not able to find smaller planets than this one using the microlensing method. A highly sensitive space telescope would be needed to spot smaller bodies in microlensing events. NASA's upcoming Wide Field Infrared Survey Telescope (WFIRST), planned for launch in the mid-2020s, will have this capability.

"One of the problems with estimating how many planets like this are out there is that we have reached the lower limit of planet masses that we can currently detect with microlensing," Shvartzvald said. "WFIRST will be able to change that."

JPL manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington. Science operations are conducted at the Spitzer Science Center at Caltech in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

Contacts and sources:
Elizabeth Landau
Jet Propulsion Laboratory