Unseen Is Free


Tuesday, June 28, 2016

Dwarf Planet Makemake Moon Discovered in The Kuiper Belt, Nicknamed MK2

A Southwest Research Institute-led team has discovered an elusive, dark moon orbiting Makemake, one of the “big four” dwarf planets populating the Kuiper Belt region at the edge of our solar system. The findings are detailed in the paper “Discovery of a Makemakean Moon,” published in the June 27 issue of Astrophysical Journal Letters.

“Makemake’s moon proves that there are still wild things waiting to be discovered, even in places people have already looked,” said Dr. Alex Parker, lead author of the paper and the SwRI astronomer credited with discovering the satellite. Parker spotted a faint point of light close to the dwarf planet using data from Hubble’s Wide Field Camera 3. “Makemake’s moon — nicknamed MK2 — is very dark, 1,300 times fainter than the dwarf planet.”

A SwRI-led team analyzed data from Hubble’s Wide Field Camera 3 to discover a small, dark moon around the dwarf planet Makemake. The image shows different views of the Makemake system taken two days apart. The moon just above Makemake is faint but visible on the left, yet completely lost in the glare of the parent dwarf on the right. 
 Image Courtesy of NASA/Hubble WFC3/SwRI/Alex Parker

A nearly edge-on orbital configuration helped the moon evade detection, placing it deep within the glare of the icy dwarf during a substantial fraction of its orbit. Makemake is one of the largest and brightest known Kuiper Belt Objects (KBOs), second only to Pluto. The moon is likely less than 100 miles wide, while its parent dwarf planet is about 870 miles across. Discovered in 2005, Makemake is shaped like a football and sheathed in frozen methane.

“With a moon, we can calculate Makemake’s mass and density,” Parker said. “We can contrast the orbits and properties of the parent dwarf and its moon, to understand the origin and history of the system. We can compare Makemake and its moon to other systems, and broaden our understanding of the processes that shaped the evolution of our solar system.”

With the discovery of MK2, all four of the currently designated dwarf planets are known to host one or more satellites. The fact that Makemake’s satellite went unseen despite previous searches suggests that other large KBOs may host hidden moons.

Trans-Neptunian Dwarf Planets 
Credit: NASA

Prior to this discovery, the lack of a satellite for Makemake suggested that it had escaped a past giant impact. Now, scientists will be looking at its density to determine if it was formed by a giant collision or if it was grabbed by the parent dwarf’s gravity. The apparent ubiquity of moons orbiting KBO dwarf planets supports the idea that giant collisions are a near-universal fixture in the histories of these distant worlds.

The authors of this paper were supported by a grant from the Space Telescope Science Institute (STScI), which conducts Hubble Space Telescope operations. The Association of Universities for Research in Astronomy Inc. in Washington, D.C., operates STScI for NASA. The Hubble telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center in Greenbelt, Md., manages the telescope.



Contacts and sources:
Deb Schmid
Southwest Research Institute

Monday, June 27, 2016

Hydrogel Hybrid Doesn’t Dry Out: New Water-Based Material Could Be Used to Make Artificial Skin, Longer-Lasting Contact Lenses


If you leave a cube of Jell-O on the kitchen counter, eventually its water will evaporate, leaving behind a shrunken, hardened mass — hardly an appetizing confection. The same is true for hydrogels. Made mostly of water, these gelatin-like polymer materials are stretchy and absorbent until they inevitably dry out.

Now engineers at MIT have found a way to prevent hydrogels from dehydrating, with a technique that could lead to longer-lasting contact lenses, stretchy microfluidic devices, flexible bioelectronics, and even artificial skin.

See how MIT researchers designed a hydrogel that doesn't dry out.

Video: Melanie Gonick/MIT

The engineers, led by Xuanhe Zhao, the Robert N. Noyce Career Development Associate Professor in MIT’s Department of Mechanical Engineering, devised a method to robustly bind hydrogels to elastomers — elastic polymers such as rubber and silicone that are stretchy like hydrogels yet impervious to water. They found that coating hydrogels with a thin elastomer layer provided a water-trapping barrier that kept the hydrogel moist, flexible, and robust. The results are published today in the journal Nature Communications.

Zhao says the group took inspiration for its design from human skin, which is composed of an outer epidermis layer bonded to an underlying dermis layer. The epidermis acts as a shield, protecting the dermis and its network of nerves and capillaries, as well as the rest of the body’s muscles and organs, from drying out.

Engineers at MIT have devised a method to bind two stretchy materials: gelatin-like polymer materials called hydrogels, and elastomers, which are impervious to water and can thus seal in the hydrogel’s water.
Photo: Melanie Gonick/MIT

The team’s hydrogel-elastomer hybrid is similar in design to, and in fact multiple times tougher than, the bond between the epidermis and dermis. The team developed a physical model to quantitatively guide the design of various hydrogel-elastomer bonds. In addition, the researchers are exploring various applications for the hybrid material, including artificial skin. In the same paper, they report inventing a technique to pattern tiny channels into the hybrid material, similar to blood vessels. They have also embedded complex ionic circuits in the material to mimic nerve networks.

“We hope this work will pave the way to synthetic skin, or even robots with very soft, flexible skin with biological functions,” Zhao says.

The paper’s lead author is MIT graduate student Hyunwoo Yuk. Co-authors include MIT graduate students German Alberto Parada and Xinyue Liu and former Zhao group postdoc Teng Zhang, now an assistant professor at Syracuse University.

Getting under the skin

In December 2015, Zhao’s team reported that they had developed a technique to achieve extremely robust bonding of hydrogels to solid surfaces such as metal, ceramic, and glass. The researchers used the technique to embed electronic sensors within hydrogels to create a “smart” bandage. They found, however, that the hydrogel would eventually dry out, losing its flexibility.

Others have tried to treat hydrogels with salt to prevent dehydration, but Zhao says this method can make a hydrogel incompatible with biological tissues, and even toxic. Instead, the researchers, inspired by skin, reasoned that coating hydrogels with a material that was similarly stretchy but also water-resistant would be a better strategy for preventing dehydration. They soon landed on elastomers as the ideal coating, though the rubbery material came with one major challenge: It was inherently resistant to bonding with hydrogels.

“Most elastomers are hydrophobic, meaning they do not like water,” Yuk says. “But hydrogels are a modified version of water. So these materials don’t like each other much and usually can’t form good adhesion.”

The team tried to bond the materials together using the technique they developed for solid surfaces, but with elastomers, Yuk says, the hydrogel bonding was “horribly weak.” After searching through the literature on chemical bonding agents, the researchers found a candidate compound that might bring hydrogels and elastomers together: benzophenone, which is activated via ultraviolet (UV) light.

Figure (a) shows the fabrication procedure for a hydrogel-elastomer microfluidic chip. Figure (b) shows that the hydrogel-elastomer microfluidic hybrid supports convection of chemicals (represented by food dye in different colors) in the microfluidic channels and diffusion of chemicals in the hydrogel, even when the material is stretched, as seen in figure (c). In figure (d), the microfluidic hybrid is used as a platform for a diffusion-reaction study. Acid and base solutions from two microfluidic channels diffuse in the pH-sensitive hydrogel and form regions of different colors (light red for acid and dark violet for base).
 
Courtesy of the researchers

After dipping a thin sheet of elastomer into a solution of benzophenone, the researchers wrapped the treated elastomer around a sheet of hydrogel and exposed the hybrid to UV light. They found that after 48 hours in a dry laboratory environment, the weight of the hybrid material did not change, indicating that the hydrogel retained most of its moisture. They also measured the energy required to peel the two materials apart, finding a bond toughness of about 1,000 joules per square meter — much higher than that of the bond between the skin’s epidermis and dermis.
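
For context on what that figure means: in a standard peel test, the measured peel force per unit width of the strip converts to an interfacial toughness, i.e. the energy needed to create a unit area of new surface. For a peel angle θ and an effectively inextensible backing, the classical Kendall relation is

```latex
\Gamma = \frac{F}{w}\,(1 - \cos\theta)
```

so a 90-degree peel gives Γ = F/w directly. (The specific peel geometry used in this study is an assumption here; the relation is shown only to connect the force and energy units.)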

“This is tougher even than skin,” Zhao says. “We can also stretch the material to seven times its original length, and the bond still holds.”

Expanding the hydrogel toolset

Taking the comparison with skin a step further, the team devised a method to etch tiny channels within the hydrogel-elastomer hybrid to simulate a simple network of blood vessels. They first cured a common elastomer onto a silicon wafer mold with a simple three-channel pattern, etching the pattern onto the elastomer using soft lithography. They then dipped the patterned elastomer in benzophenone, laid a sheet of hydrogel over the elastomer, and exposed both layers to ultraviolet light. In experiments, the researchers were able to flow red, blue, and green food coloring through each channel in the hybrid material.

Yuk says in the future, the hydrogel-elastomer material may be used as a stretchy microfluidic bandage, to deliver drugs directly through the skin.

“We showed that we can use this as a stretchable microfluidic circuit,” Yuk says. “In the human body, things are moving, bending, and deforming. Here, we can perhaps do microfluidics and see how [the device] behaves in a moving part of the body.”

The researchers also explored the hybrid material’s potential as a complex ionic circuit. A neural network is such a circuit; nerves in the skin send ions back and forth to signal sensations such as heat and pain. Zhao says hydrogels, being mostly composed of water, are natural conductors through which ions can flow. The addition of an elastomer layer, he says, acts as an insulator, preventing ions from escaping — an essential combination for any circuit.

To make it conductive to ions, the researchers submerged the hybrid material in a concentrated solution of sodium chloride, then connected the material to an LED light. By placing electrodes at either end of the material, they were able to generate an ionic current that switched on the light.

“We show very beautiful circuits not made of metal, but of hydrogels, simulating the function of neurons,” Yuk says. “We can stretch them, and they still maintain connectivity and function.”

Seok-Hyun Yun, an associate professor at Harvard Medical School and Massachusetts General Hospital, says that hydrogels and elastomers have distinct physical and chemical properties that, when combined, may lead to new applications.

“It is a thought-provoking work,” says Yun, who was not involved in the research. “Among many [applications], I can imagine smart artificial skins that are implanted and provide a window to interact with the body for monitoring health, sensing pathogens, and delivering drugs.”

Next, the group hopes to further test the hybrid material’s potential in a number of applications, including wearable electronics and on-demand drug-delivering bandages, as well as nondrying, circuit-embedded contact lenses.

“Ultimately, we’re trying to expand the argument of using hydrogels as an advanced engineering toolset,” Zhao says.

This research was funded, in part, by the Office of Naval Research, Draper Laboratory, MIT Institute for Soldier Nanotechnologies, and National Science Foundation.


Contacts and sources: 
Jennifer Chu 
MIT

SDO Watches Twisting Solar Material Over the Sun’s Surface

Solar material twists above the sun’s surface in this close-up captured by NASA’s Solar Dynamics Observatory on June 7-8, 2016, showcasing the turbulence caused by combative magnetic forces on the sun. 

This spinning cloud of solar material is part of a dark filament angling down from the upper left of the frame. 


Credits: NASA/SDO/Goddard Space Flight Center, Joy Ng

Filaments are long, unstable clouds of solar material suspended above the sun’s surface by magnetic forces. SDO captured this video in wavelengths of extreme ultraviolet light, which is typically invisible to our eyes, but is colorized here in red for easy viewing.
Contacts and sources:
By Steele Hill and Sarah Frazier
NASA's Goddard Space Flight Center

Mercury’s Origins Traced to Rare Meteorite; Planet Cooled Dramatically in Half a Billion Years


Around 4.6 billion years ago, the young solar system was a chaos of collapsing gas and spinning debris. Small particles of gas and dust clumped together into larger and more massive meteoroids that in turn smashed together to form planets. Scientists believe that shortly after their formation, these planets — and particularly Mercury — were fiery spheres of molten material, which cooled over millions of years.

Now, geologists at MIT have traced part of Mercury’s cooling history and found that between 4.2 and 3.7 billion years ago, soon after the planet formed, its interior temperatures plummeted by 240 degrees Celsius, a drop of 432 degrees Fahrenheit.
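
One detail worth noting: a temperature difference converts between the Celsius and Fahrenheit scales by the ratio of degree sizes alone, without the 32-degree offset used for absolute temperatures:

```latex
\Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}} = \tfrac{9}{5}\times 240 = 432
```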

An image, taken by MESSENGER during its Mercury flyby on Jan. 14, 2008, of Mercury’s full crescent.

Image: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington


They also determined, based on this rapid cooling rate and the composition of lava deposits on Mercury’s surface, that the planet likely has the composition of an enstatite chondrite — a type of meteorite that is extremely rare here on Earth.

Timothy Grove, the Cecil and Ida Green Professor of Geology in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, says new information on Mercury’s past is of interest for tracing Earth’s early formation.

“Here we are today, with 4.5 billion years of planetary evolution, and because the Earth has such a dynamic interior, because of the water we’ve preserved on the planet, [volcanism] just wipes out its past,” Grove says. “On planets like Mercury, early volcanism is much more dramatic, and [once] they cooled down there were no later volcanic processes to wipe out the early history. This is the first place where we actually have an estimate of how fast the interior cooled during an early part of a planet’s history.”

Grove and his colleagues, including researchers from the University of Hanover, in Germany; the University of Liège, in Belgium; and the University of Bayreuth, in Germany, have published their results in Earth and Planetary Science Letters.

Compositions in craters

For their analysis, the team utilized data collected by NASA’s MESSENGER spacecraft. The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) probe orbited Mercury between 2011 and 2015, collecting measurements of the planet’s chemical composition with each orbital pass. During its mission, MESSENGER produced images that revealed kilometer-thick lava deposits covering the planet’s entire surface.

An X-ray spectrometer onboard the spacecraft measured the X-ray emission from the planet’s surface, excited by solar flares, to determine the chemical composition of more than 5,800 lava deposits on Mercury’s surface.

Grove’s co-author, Olivier Namur of the University of Hanover, recalculated the surface compositions of all 5,800 locations, and correlated each composition with the type of terrain in which it was found, from heavily cratered regions to those that were less impacted. The density of a region’s craters can tell something about that region’s age: The more craters there are, the older the surface is, and vice versa. The researchers were able to correlate Mercury’s lava composition with age and found that older deposits, around 4.2 billion years old, contained elements that were very different from younger deposits that were estimated to be 3.7 billion years old.

“It’s true of all planets that different age terrains have different chemical compositions because things are changing inside the planet,” Grove says. “Why are they so different? That’s what we’re trying to figure out.”

A rare rock, 10 standard deviations away

To answer that question, Grove attempted to retrace a lava deposit’s path, from the time it melted inside the planet to the time it ultimately erupted onto Mercury’s surface.

To do this, he started by recreating Mercury’s lava deposits in the lab. From MESSENGER’s 5,800 compositional data points, Grove selected two extremes: one representing the older lava deposits and one from the younger deposits. He and his team converted the lava deposits’ element ratios into the chemical building blocks that make up rock, then followed this recipe to create synthetic rocks representing each lava deposit.

Perspective view of ancient volcanic plains in the northern high-latitudes of Mercury revealed by instruments on board the MESSENGER spacecraft.
Image: NASA/JHUAPL/CIW-DTM/GSFC/MIT/Brown Univ.; rendering by James Dickson

The team melted the synthetic rocks in a furnace to simulate the point in time when the deposits were lava, and not yet solidified as rock. Then, the researchers dialed the temperature and pressure of the furnace up and down to effectively turn back the clock, simulating the lava’s eruption from deep within the planet to the surface, in reverse.

Throughout these experiments, the team looked for tiny crystals forming in each molten sample, representing the point at which the sample turns from lava to rock. Run in reverse, this marks the stage at which the planet’s solid interior rock begins to melt, creating molten material that sloshes around in Mercury’s mantle before erupting onto the surface.

The team found a surprising disparity between the two samples: The older rock melted deeper in the planet, at 360 kilometers, and at higher temperatures of 1,650 C, while the younger rock melted at shallower depths, at 160 kilometers, and 1,410 C. The experiments indicate that the planet’s interior cooled dramatically, by about 240 degrees Celsius, between 4.2 and 3.7 billion years ago — a geologically short span of 500 million years.

“Mercury has had a huge variation in temperature over a fairly short period of time, that records a really amazing melting process,” Grove says.

The researchers determined the chemical compositions of the tiny crystals that formed in each sample, in order to identify the original material that may have made up Mercury’s interior before it melted and erupted onto the surface. They found the closest match to be an enstatite chondrite, an extremely rare form of meteorite that is thought to make up only about 2 percent of the meteorites that fall to Earth.

“We now know something like an enstatite chondrite was the starting material for Mercury, which is surprising, because they are about 10 standard deviations away from all other chondrites,” Grove says.

Grove cautions that the group’s results are not set in stone and that Mercury may have been an accumulation of other types of starting materials. To know this would require an actual sample from the planet’s surface.

“The next thing that would really help us move our understanding of Mercury way forward is to actually have a meteorite from Mercury that we could study,” Grove says. “That would be lovely.”

This research was funded, in part, by NASA.


Contacts and sources:
Jennifer Chu 
MIT

Cosmos Redshift 7: The Earliest Stars and Super Bright Galaxies Discovered in the Early Universe


Astronomers have identified a family of incredible galaxies that could shed further light on the transformation of the early Universe known as the ‘epoch of reionisation’. Dr David Sobral of Lancaster University will present their results on Monday 27 June at the National Astronomy Meeting in Nottingham.

About 150 million years after the Big Bang, some 13 billion years ago, the Universe was completely opaque to high energy ultraviolet light, with neutral hydrogen gas blocking its passage. Astronomers have long realised that this situation ended in the so-called ‘epoch of reionisation’, when ultraviolet light from the earliest stars broke apart neutral hydrogen atoms, allowing the light to travel freely through the cosmos. This reionisation period marks a key transition between the relatively simple early cosmos, with normal matter made up of hydrogen and helium, and the universe as we see it today: transparent on large scales and filled with heavier elements.

This illustration of reionisation shows a timeline summarizing the evolution of the Universe running from left to right, where the Big Bang is on the left and the age of the Universe is about two billion years on the right. It shows how the cosmic “fog” of neutral (uncharged) hydrogen pervading the early Universe is cleared by the first objects to emit radiation.
 Credit: NASA / CXC / M.Weiss.

In 2015 Sobral led a team that found the first example of a spectacularly bright galaxy within the epoch of reionisation, named Cosmos Redshift 7 or CR7, which may harbour first generation stars. The team also discovered a similar galaxy, MASOSA, which, together with Himiko, discovered by a Japanese team, hinted at a larger population of similar objects, perhaps made up of the earliest stars and/or black holes.

Using the Subaru and Keck telescopes on Hawaii, and the Very Large Telescope in Chile, Sobral and his team, along with a group in the US, have now found more examples of this population. All of the newly found galaxies seem to have a large bubble of ionised gas around them.

Sobral comments: “Stars and black holes in the earliest, brightest galaxies must have pumped out so much ultraviolet light that they quickly broke up hydrogen atoms in the surrounding universe. The fainter galaxies seem to have stayed shrouded from view for a lot longer. Even when they eventually become visible, they show evidence of plenty of opaque material still in place around them.”

“This makes the bright galaxies visible much earlier on in the history of the Universe, allowing us to not only use them to study reionisation itself, but also to study the properties of the very first galaxies and the black holes they may contain”, adds team member Jorryt Matthee, a PhD student at Leiden Observatory.

This artist’s impression shows CR7, a very distant galaxy discovered using ESO’s Very Large Telescope. It is by far the brightest galaxy yet found in the early Universe and there is strong evidence that examples of the first generation of stars lurk within it.

Credit: ESO / M. Kornmesser.  

Such first-generation stars — massive, brilliant, and previously purely theoretical — created the first heavy elements in history: the elements necessary to forge the stars around us today, the planets that orbit them, and life as we know it. CR7 itself is three times brighter than the previously brightest known distant galaxy.

Sergio Santos is another co-author of the study and will soon be a PhD student at Lancaster University. He adds: “Our results highlight how hard it is to study the small faint sources in the early Universe. The neutral hydrogen gas blocks out some of their light, and because they are not capable of building their own local bubbles as quickly as the bright galaxies, they are much harder to detect.”

With five bright sources now confirmed, and many more expected to follow, CR7 may be part of a ‘team’ of tens to hundreds of thousands of bright galaxies. The fifth (V) galaxy, also discovered by Sobral and his team, takes the name VR7, this time in tribute to the astrophysicist Vera Rubin, who in 1996 became the first woman in over 150 years to win the Gold Medal of the Royal Astronomical Society.

Sobral is now looking forward to continuing his work with the new telescopes that will become available in the next few years. He comments: “What is really surprising is that the galaxies we find are much more numerous than people assumed, and they have a puzzling diversity. When telescopes like the James Webb Space Telescope are up and running, we will be able to take a closer look at these intriguing objects. We have only scratched the surface, and so the next few years will certainly bring fantastic new discoveries.”

For now, astronomers are using the largest existing telescopes on the ground and in space to better assess the composition, size and shape of the new galaxies. Results from this work have appeared, and will continue to appear, in papers in the journal Monthly Notices of the Royal Astronomical Society.



Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

Dr David Sobral
Lancaster University

The Effects of Sad Music on Emotions

Sad music can provide enjoyment, comfort or pain to different people, according to new research looking at the effects of melancholy songs on the emotions.

Researchers at Durham University and the University of Jyväskylä, Finland, said their findings could have implications for how music therapy and rehabilitation could help people’s moods.

The musicologists examined the emotional experiences that 2,436 people, surveyed in three large-scale studies in the UK and Finland, associated with sad music.


Credit:  Durham University

They identified the reasons for listening to sad music, and the emotions involved in memorable experiences related to it.

Sad music can be enjoyable

Writing in the prestigious scientific journal PLOS ONE, the researchers said that the majority of people surveyed highlighted the enjoyable nature of such experiences, which in general led to a clear improvement in mood.

The researchers said that listening to sad music led to feelings of pleasure related to enjoyment of the music in some people, or feelings of comfort where sad music evoked memories in others.

However, a significant portion of people also reported painful experiences associated with listening to sad music, which invariably related to personal loss such as the death of a loved one, divorce, breakup, or other significant adversity in life.

The research was funded by the Academy of Finland.

Music rehabilitation and music therapy

Lead researcher Professor Tuomas Eerola, Professor of Music Cognition in the Department of Music, said: “Previous research in music psychology and film studies has emphasized the puzzling pleasure that people experience when engaging with tragic art.


Credit:  PLOS ONE

“However, there are people who absolutely hate sad-sounding music and avoid listening to it. In our research, we wanted to investigate this wide spectrum of experiences that people have with sad music, and find reasons for both listening to and avoiding that kind of music.

“The results help us to pinpoint the ways people regulate their mood with the help of music, as well as how music rehabilitation and music therapy might tap into these processes of comfort, relief, and enjoyment.

“The findings also have implications for understanding the paradoxical nature of enjoyment of negative emotions within the arts and fiction.”

Study co-author Dr Henna-Riikka Peltola, from the University of Jyväskylä in Finland, said sad music led to mixed emotions.

Dr Peltola added: “There seem to be two types of enjoyable experiences evoked by sad music listening.

“In these instances, music is typically the central source of these experiences, and aesthetic qualities were very much involved in the experienced pleasure.

“Alternatively, sad music is also associated with a set of emotions that give comfort to the listener, and where memories and associations play a strong part of making the experience pleasant. These experiences were often mentioned to confer relief and companionship in difficult situations of life.

“However, a large number of people also associated sad music with painful experiences. Such intense experiences seemed to be mentally and even physically straining, and thus far from pleasurable.”

Pleasure, comfort and pain associated with sad music

The three types of experience associated with listening to sad music (pleasure, comfort and pain) were found across the different surveys.

The researchers added that experiences of enjoyable sadness were not affected by gender or age, although musical expertise and interest in music seemed to amplify these feelings.

Older people reported stronger experiences of comforting sadness, while strong negative feelings when listening to sad music were more pronounced for younger people and women.

Each type of emotional experience associated with sad music could be connected to a distinct profile of reasons, psychological mechanisms, and reactions, the researchers added.

Professor Eerola added: “We think that this demonstrates well the functional nature of these experiences.

“Although the positive experiences seemed to be the most frequently associated with sad music, truly negative experiences are not uncommon in any of the samples in our research.”

Commenting on the study, Professor Jörg Fachner, Professor of Music, Health and the Brain at Anglia Ruskin University, who was not part of the research team, said: “This study confirms that music therapists can work with authentic experiences when using music representing the sorrowful and painful content of sad life events such as the death of a spouse or child.

“Some people enjoy sad music and derive a lot of comfort out of such music in certain situations but when a particular piece of music becomes a container for a negative emotion related to a personal or environmental challenge, a music therapist would carefully start working on its representations.

“A skillful, trained music therapist can sense and adapt to the individual meaning of the sad music representing negative experiences and memories as described in this study.”


Contacts and sources:
Durham University

Parallax: When It Comes to Brown Dwarfs, 'How Far?' Is a Key Question


Brown dwarfs are sometimes called failed stars. They're stars' dim, low-mass siblings and they fade in brightness over time. They're fascinating to astronomers for a variety of reasons, but much about them remains unknown. New work from a Carnegie-led team reports the distances of a number of brown dwarfs, as well as low-mass stars, in The Astronomical Journal.

Brown dwarfs are too small to sustain the hydrogen fusion process that powers stars. Their temperatures can range from nearly as hot as a star to as cool as a planet, and their masses also range between star-like and giant planet-like. They are of particular interest to scientists because they can offer clues to star-formation processes.

An artist's conception of a cool brown dwarf.

Credit: NASA/JPL-Caltech/Penn State University

The intrinsic brightness of brown dwarfs, particularly cool brown dwarfs, is poorly known, and this key parameter can only be determined once an object's distance has been measured. Intrinsic brightness is a measure of how bright an object would be if observed from a common distance, removing the effect that a bright star can seem dim if it is far away and a dim star can seem bright if it is close.
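
The standard way to put objects at a common distance is the distance modulus, which references everything to 10 parsecs:

```latex
m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right)
```

Here m is the apparent magnitude as observed, M is the absolute (intrinsic) magnitude, and d is the distance in parsecs — which is why a measured distance is the prerequisite for pinning down intrinsic brightness.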

An ongoing planet-hunting survey run by Carnegie co-authors Alycia Weinberger, Alan Boss, Ian Thompson, Sandy Keiser, and others has yielded the distances to 134 low mass stars and brown dwarfs, of which 38 had not been previously measured.

"Accurate distances are the foundation upon which we can determine the physical properties and luminosities of brown dwarfs and low mass stars," Weinberger said.

The team built a special instrument for precisely measuring the locations of stars over time, the CAPSCam — the Carnegie Astrometric Planet Search Camera — and they use it at the du Pont Telescope at Carnegie's Las Campanas Observatory in Chile.

The primary goal of the CAPS project is to search for extrasolar planets by the astrometric method, where the planet's presence can be detected indirectly through the wobble of the host star around the center of mass of the system. But CAPSCam also measures parallaxes to stars and brown dwarfs, including the 134 objects published in this study.
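
For a sense of scale (a standard relation, not a result from this study): with the planet's orbital semi-major axis a in astronomical units, the distance d in parsecs, and the planet and star masses in the same units, the angular semi-amplitude of the star's wobble in arcseconds is

```latex
\theta \;\approx\; \frac{m_{p}}{M_{\ast}}\,\frac{a}{d}
```

which is why the astrometric method favors nearby, low-mass host stars — precisely the sample this survey targets.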

An illustration of parallax.
Credit:  Carnegie Institution for Science

Parallax may sound like a word straight out of science fiction, but it's something you've almost certainly experienced yourself. Hold a pen up in front of your face and look at it first with just your right eye and then just your left eye. It appears to move relative to background objects as you switch from eye to eye, even though you know it isn't moving at all. That's parallax!

What's more, if you hold the pen further from your face, it appears to move less when you switch eyes than it did when it was closer to you. In the same way, closer stars have larger parallactic motion.

What does it have to do with astronomy? It's the only direct way to measure astronomical distances and the CAPSCam is capable of doing so very precisely. By measuring the shift in an object's position from different viewpoints in the Earth's orbit relative to something fixed in the background, astronomers can use geometry to calculate how far away the object is.
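
In practice the geometry reduces to a simple reciprocal rule: an annual parallax of p arcseconds corresponds to a distance of 1/p parsecs. A minimal sketch of the arithmetic (illustrative values only, not CAPSCam data or code):

```python
import math

def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    """Intrinsic brightness via the distance modulus M = m - 5*log10(d/10 pc)."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

# A hypothetical brown dwarf with a measured parallax of 0.05 arcsec:
d = parallax_to_distance_pc(0.05)   # -> 20 pc
M = absolute_magnitude(14.0, d)     # apparent mag 14 -> absolute mag ~12.5
print(f"distance = {d:.1f} pc, absolute magnitude = {M:.2f}")
```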

"There is still so much about brown dwarfs that remains unknown," explained Weinberger. "As we learn more about them, it could improve our knowledge about the star formation process and possibly also refine our understanding of the distribution of matter in the universe, since it seems that there are far more brown dwarfs than initially thought."

The study revealed some other useful distance measurements in addition to the brown dwarf discoveries.

The team used the motion of two stars, compared to others in two different stellar groups, to confirm the two stars' ages: between 30 and 50 million years for one and about 100 million years for the other. This is because distance measurements can tell researchers about the location of a star in 3-D, not just the star's position in 2-D on the sky, and let them measure the star's velocity. Finding groups of young stars moving together lets astronomers study them in aggregate and better estimate how old they are and learn about their evolution.

The team also reported the first parallax for a star that is notable for hosting a Neptune-sized planet. Relatively few giant planets orbiting low-mass stars are known, so every instance is of interest to planet hunters. Using this measurement the team refined the radius and density estimates for the planet, finding it to be about half as dense as Neptune, closer to Saturn's density.

"In 2007, we began our long-term search for gas giant planets and brown dwarfs orbiting nearby low mass dwarf stars," said Boss. "We're excited to have such a wealth of measurements to publish from our CAPSCam project."

The study's other co-authors are Guillem Anglada-Escudé, a former Carnegie fellow now at Queen Mary University of London, and Gregory Burley of the National Research Council of Canada.

This work was supported by the National Science Foundation, a NASA Origins of Solar Systems grant, and a NASA Astrobiology grant.



Contacts and sources: 
Alycia Weinberger
The Carnegie Institution for Science

Jupiter Is Glowing in New Images

A team led by Leigh Fletcher of the University of Leicester in the United Kingdom is presenting new images of Jupiter at the UK's Royal Astronomical Society's National Astronomy Meeting in Nottingham.

This video was created from many short VISIR exposures at a wavelength of 5 micrometres. The telescope was moved slightly between exposures and the changing turbulence of the Earth’s atmosphere can be seen.

Credit: ESO/L. Fletcher

Obtained with the VISIR instrument on ESO's Very Large Telescope, the new images are part of a focused effort to improve understanding of Jupiter's atmosphere prior to the arrival of NASA's Juno spacecraft in July this year.

The Juno spacecraft was named after the mythological wife of the god Jupiter. Just like his planetary counterpart, the god Jupiter veiled himself in clouds to hide his mischief, and only Juno was able to peer through them to see his true nature.

In preparation for the imminent arrival of NASA's Juno spacecraft in July 2016, astronomers used ESO's Very Large Telescope to obtain spectacular new infrared images of Jupiter using the VISIR instrument. They are part of a campaign to create high-resolution maps of the giant planet to inform the work to be undertaken by Juno over the following months, helping astronomers to better understand the gas giant. This false-color image was created by selecting and combining the best images obtained from many short VISIR exposures at a wavelength of 5 micrometers.

Credit: ESO/L. Fletcher

The campaign has involved the use of several telescopes based in Hawaii and Chile, as well as contributions from amateur astronomers around the world. The maps do not just give snapshots of the planet, they also reveal how Jupiter's atmosphere has been shifting and changing in the months prior to Juno's arrival.

The Juno spacecraft was launched in 2011 and has travelled nearly 3,000 million kilometres to reach the Jovian system. Spacecraft can collect data free from the limitations affecting telescopes on Earth, so it might seem surprising that this ground-based campaign was considered so important.

Leigh Fletcher describes the significance of this research in preparing for Juno's arrival: "These maps will help set the scene for what Juno will witness in the coming months. Observations at different wavelengths across the infrared spectrum allow us to piece together a three-dimensional picture of how energy and material are transported upwards through the atmosphere."

False colour images generated from VLT observations in February and March 2016, showing two different faces of Jupiter. The bluer areas are cold and cloud-free, the orangey areas are warm and cloudy, more colourless bright regions are warm and cloud-free, and dark regions are cold and cloudy (such as the Great Red Spot and the prominent ovals). The wave pattern over the North Equatorial Band shows up in orange.

This view was created from VLT/VISIR infrared images from February 2016 (left) and March 2016 (right). The orange images were obtained at 10.7 micrometres wavelength and highlight the different temperatures and presence of ammonia. The blue images at 8.6 micrometres highlight variations in cloud opacity.

Credit: ESO/L.N. Fletcher


Capturing sharp images through the Earth's constantly shifting atmosphere is one of the greatest challenges faced by ground-based telescopes. This glimpse of Jupiter's own turbulent atmosphere, rippling with cooler gas clouds, was possible thanks to a technique known as lucky imaging. Sequences of very short exposures were taken of Jupiter by VISIR, producing thousands of individual frames. The lucky frames, where the image is least affected by the atmosphere's turbulence, are selected and the rest discarded. Those selected frames are aligned and combined to produce remarkable final pictures like the ones shown here.
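
A minimal sketch of that selection-and-stacking idea (illustrative only; this is not VISIR's actual pipeline): rank frames by a simple sharpness metric, keep the best few percent, align them to the sharpest frame by integer-pixel cross-correlation, and average.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    # Variance of a discrete Laplacian: larger when fine detail survives the seeing.
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
           np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return float(lap.var())

def align_to(ref: np.ndarray, frame: np.ndarray) -> np.ndarray:
    # Integer-pixel shift found at the peak of the FFT cross-correlation.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the frame wrap around and are really negative.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return np.roll(frame, (dy, dx), axis=(0, 1))

def lucky_stack(frames: np.ndarray, keep_fraction: float = 0.05) -> np.ndarray:
    """Keep the sharpest `keep_fraction` of frames, align them, and average."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = frames[np.argsort(scores)[::-1][:n_keep]]
    ref = best[0]  # the single sharpest frame anchors the alignment
    return np.mean([align_to(ref, f) for f in best], axis=0)

# Usage with a synthetic stack of 200 noisy 64x64 frames:
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64, 64))
final = lucky_stack(frames)
```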

Glenn Orton, leader of the ground-based campaign in support of Juno's mission, elaborates on why the preparatory observations from Earth are so valuable: "The combined efforts of an international team of amateur and professional astronomers have provided us with an incredibly rich dataset over the past eight months. Together with the new results from Juno, the VISIR dataset in particular will allow researchers to characterise Jupiter's global thermal structure, cloud cover and distribution of gaseous species."

This view compares a lucky imaging view of Jupiter from VISIR (left) at infrared wavelengths with a very sharp amateur image in visible light from about the same time (right).

Credit: ESO/L.N. Fletcher/Damian Peach

Whilst the modern Juno's mission to unveil mighty Jupiter will bring new and highly anticipated results, its way has been paved by these ground-based efforts here on Earth.


Contacts and sources:
Richard Hook
ESO


Supermassive Black Hole Seeds Could Be Revealed by Gravitational Waves


Gravitational waves captured by space-based detectors could help identify the origins of supermassive black holes, according to new computer simulations of the Universe.

Scientists led by Durham University’s Institute for Computational Cosmology ran the huge cosmological simulations that can be used to predict the rate at which gravitational waves caused by collisions between the monster black holes might be detected.



The amplitude and frequency of these waves could reveal the initial mass of the seeds from which the first supermassive black holes grew when they formed some 13 billion years ago, and provide further clues about what caused them and where they formed, the researchers said.

RAS National Astronomy Meeting

The research was presented today (Monday, June 27, 2016) at the Royal Astronomical Society’s National Astronomy Meeting in Nottingham, UK. It was funded by the Science and Technology Facilities Council, the European Research Council and the Belgian Interuniversity Attraction Poles Programme.

Gas and stars in a slice of the EAGLE simulations at the present day. The intensity shows the gas density, while the color encodes the gas temperature. Researchers used the EAGLE simulations to predict the rate at which gravitational waves caused by collisions between supermassive black holes might be detected.

Credit: The EAGLE project/Stuart McAlpine

The study combined simulations from the EAGLE project – which aims to create a realistic simulation of the known Universe inside a computer – with a model to calculate gravitational wave signals.

Two detections of gravitational waves caused by collisions between supermassive black holes should be possible each year using space-based instruments such as the Evolved Laser Interferometer Space Antenna (eLISA) detector that is due to launch in 2034, the researchers said.

In February the international LIGO and Virgo collaborations announced that they had detected gravitational waves for the first time using ground-based instruments and in June reported a second detection.

Supermassive black holes

As eLISA will be in space – and will be at least 250,000 times larger than detectors on Earth – it should be able to detect the much lower frequency gravitational waves caused by collisions between supermassive black holes that are up to a million times the mass of our sun.

13.8 billion years of evolution of the gas in the EAGLE simulations. The intensity shows the gas density, while the colour encodes the gas temperature. Researchers used EAGLE simulations to predict the rate at which gravitational waves caused by collisions between supermassive black holes might be detected.
Credit: The EAGLE project/Stuart McAlpine

Current theories suggest that the seeds of these black holes were the result of either the growth and collapse of the first generation of stars in the Universe; collisions between stars in dense stellar clusters; or the direct collapse of extremely massive stars in the early Universe.

As each of these theories predicts different initial masses for supermassive black hole seeds, the collisions would produce different gravitational wave signals.

This means that the potential detections by eLISA could help pinpoint the mechanism that helped create supermassive black holes and when in the history of the Universe they formed.

Gravitational waves

Lead author Jaime Salcido, PhD student in Durham University’s Institute for Computational Cosmology, said: “Understanding more about gravitational waves means that we can study the Universe in an entirely different way.

“These waves are caused by massive collisions between objects with a mass far greater than our sun.

“By combining the detection of gravitational waves with simulations we could ultimately work out when and how the first seeds of supermassive black holes formed.”

13.8 billion years of evolution of the dark matter in the EAGLE simulations. The intensity shows the density of dark matter. Researchers used EAGLE simulations to predict the rate at which gravitational waves caused by collisions between supermassive black holes might be detected.

Credit: The EAGLE project/Stuart McAlpine

Co-author Professor Richard Bower, of Durham University’s Institute for Computational Cosmology, added: “Black holes are fundamental to galaxy formation and are thought to sit at the centre of most galaxies, including our very own Milky Way.

“Discovering how they came to be where they are is one of the unsolved problems of cosmology and astronomy.

“Our research has shown how space based detectors will provide new insights into the nature of supermassive black holes.”

Detecting gravitational waves in space

Gravitational waves were first predicted 100 years ago by Albert Einstein as part of his Theory of General Relativity.

The waves are concentric ripples caused by violent events in the Universe that squeeze and stretch the fabric of spacetime, but most are so weak they cannot be detected.

LIGO detected gravitational waves using ground-based instruments, called interferometers, that use laser beams to pick up subtle disturbances caused by the waves.

eLISA will work in a similar way, detecting the small changes in distances between three satellites that will orbit the sun in a triangular pattern connected by beams from lasers in each satellite.

In June it was reported that the LISA Pathfinder, the forerunner to eLISA, had successfully demonstrated the technology that opens the door to the development of a large space observatory capable of detecting gravitational waves in space.

* Durham's researchers will show how they use supercomputer simulations to test how galactic ingredients and violent events combine to shape the life history of galaxies when they exhibit at the Royal Society Summer Science Exhibition in London from 4 to 10 July, 2016.


Contacts and sources:
Leighton Kitson
Durham University

Sunday, June 26, 2016

First Quantum Satellite Will Seek Cryptography Breakthroughs, China Schedules Launch for August

Launch of the world’s first quantum communications satellite will take place in August, the leader of China's space science program has said.

Dr Wu Ji of the National Space Science Centre (NSSC) under the Chinese Academy of Sciences (CAS) made the announcement to reporters in Beijing while giving an update on the country's space science missions.

The pioneering QUantum Experiments at Space Scale (QUESS) mission, part of China's ambitious space science agenda, was expected to launch from Jiuquan Satellite Launch Centre in July, but has now been moved to August.

Once launched on a Long March 2D rocket, the 620kg QUESS satellite will delve into the counter-intuitive quantum world, including the spooky phenomenon of quantum entanglement, and seek breakthroughs in cryptography.

It will also attempt quantum teleportation over space-scale distances, and perform fundamental tests of the laws of quantum mechanics on a global scale.
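
The cryptography goal centers on quantum key distribution (QKD). As background only — the article does not specify which protocols QUESS will fly — here is a toy, purely classical simulation of the basis-sifting step used in BB84-style QKD, where sender and receiver keep just the bits for which their randomly chosen measurement bases happened to agree:

```python
import secrets

n = 32                                                   # raw qubits sent (toy size)
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0: rectilinear, 1: diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Ideal channel, no eavesdropper: Bob recovers Alice's bit whenever the bases
# match; a mismatched basis yields a random outcome that will be discarded.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: the two sides publicly compare bases (never bits) and keep only the
# positions where the bases agree.
sifted = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(f"{len(sifted)} of {n} raw bits survive sifting")
```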

 Payloads for China's QUESS quantum satellite 


Credit: NSSC

While the cause of the delay was not stated, major problems with the payload seem unlikely.

At the same time, 2016 will be China's busiest year so far in terms of launches, with more than 20 planned, placing added strain on rocket production capabilities.

Dark matter, microgravity and beyond

QUESS is just one of four missions in the Strategic Priority Program on Space Science run by the Chinese Academy of Sciences, initiated in 2011.

Implemented by the NSSC in Beijing, two missions – the DAMPE (Wukong) dark matter probe in December, and April’s Shijian-10 retrievable microgravity space science satellite – have already been launched.

QUESS will be followed later in the year by the fourth mission, the Hard X-ray Modulation Telescope (HXMT), which will observe black holes, neutron stars and other phenomena based on their X-ray and gamma ray emissions over a four-year lifetime.

Shijian-10 undergoing tests before launch in April 2016. 

Credit: CAS

New space science missions

With this first batch of space science missions about to bear fruit, China is working on five new probes to study a range of Earth, solar and deep space phenomena.

The missions, announced earlier this month, are the space-weather observatory mission in collaboration with the European Space Agency (SMILE), a global water cycle observation mission (WCOM), the Magnetosphere, Ionosphere and Thermosphere mission (MIT), Einstein Probe (EP), and the Advanced Space-based Solar Observatory (ASO-S).

The missions were selected from those outlined in a national roadmap for space science for 2016-2030 produced by the NSSC, and will follow on from a range of exciting Chinese space science missions in 2016.

Wu Ji, recently profiled by Nature as a star of Chinese science, said each scientific satellite is pioneering and non-repetitive, meaning the missions require new ideas, new designs, and new materials and technologies; as such, Chinese space science efforts are a major driving force for innovation.



Contacts and sources: 
Chinese Academy of Sciences (CAS)

Scientists Begin Modeling Universe with Einstein's Full Theory of General Relativity

Research teams on both sides of the Atlantic have shown that precise modeling of the universe and its contents will change the detailed understanding of the evolution of the universe and the growth of structure in it.

One hundred years after Einstein introduced general relativity, it remains the best theory of gravity, the researchers say, consistently passing high-precision tests in the solar system and successfully predicting new phenomena such as gravitational waves, which were recently discovered by the Laser Interferometer Gravitational-Wave Observatory.

In a simulation of the universe without commonly made simplifications, galaxy profiles float atop a grid representing the spacetime background shaped by the distribution of matter. Regions of blue color contain more matter, which generates a deeper gravitational potential. Regions devoid of matter, darker in color, have a shallower potential.

Credit: James Mertens


The equations of general relativity, unfortunately, are notoriously difficult to solve. For the past century, physicists have used a variety of assumptions and simplifications in order to apply Einstein's theory to the universe.

On Earth, that's something like averaging the music made by a symphony. The audience would hear a single averaged note that keeps the overall beat and grows generally louder and softer, rather than the individual notes and rhythms of each of the orchestra's instruments.

Wanting details and their effects, U.S. and European teams each wrote computer codes that will eventually lead to the most accurate possible models of the universe and provide new insights into gravity and its effects.

While simulations of the universe and the structures within it have been the subject of scientific discovery for decades, these codes have made some simplifications or assumptions. These two codes are the first to use Einstein's complete theory of general relativity to account for the effects of the clumping of matter in some regions and the dearth of matter in others.

Both groups of physicists were trying to answer the question of whether small-scale structures in the universe produce effects on larger distance scales. Both confirmed that's the case, though neither has found qualitative changes in the expansion of the universe as some scientists have predicted.

"Both we and the other group examine the universe using the full theory of general relativity, and have therefore been able to create more accurate models of physical processes than have been done before," said James Mertens, a physics PhD student at Case Western Reserve University who took the lead in developing and implementing the numerical techniques for the U.S. team.

Mertens worked with John T. Giblin Jr., the Harvey F. Lodish Development Professor of Natural Science at Kenyon College and an adjunct associate professor of physics at Case Western Reserve; and Glenn Starkman, professor of physics and director of the Institute for the Science of Origins at Case Western Reserve. They submitted two manuscripts describing their work to the arXiv preprint website on Nov. 3, 2015.

Less than two weeks later, Marco Bruni, reader in cosmology and gravitation at the University of Portsmouth, in England, and Eloisa Bentivegna, Senior Researcher and Rita Levi Montalcini Fellow at the University of Catania, Italy, submitted a similar study.

Letters by the two groups appear back-to-back in the June 24 issue of Physical Review Letters, and the U.S. group has a second paper giving more of the details in the issue of Physical Review D published the same day. The work will be highlighted as an Editors' Suggestion by Physical Review Letters and Physical Review D and in a Synopsis on the American Physical Society Physics website.

The researchers say computers employing the full power of general relativity are the key to producing more accurate results and perhaps new or deeper understanding.

"No one has modeled the full complexity of the problem before," Starkman said. "These papers are an important step forward, using the full machinery of general relativity to model the universe, without unwarranted assumptions of symmetry or smoothness. The universe doesn't make these assumptions, neither should we."

Both groups independently created software to solve the Einstein Field Equations, which describe the complicated interrelationships between the contents of the universe and the curvature of space and time, at billions of places and times over the history of the universe.
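
The equations in question couple the curvature of spacetime (the left-hand side) to its matter and energy content (the right-hand side):

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```

Written out, this is a set of ten coupled, nonlinear partial differential equations for the metric — the reason the theory is "notoriously difficult to solve" without simplifying symmetry assumptions.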

Comparing the outcomes of these numerical simulations of the correct nonlinear dynamics to the outcomes of traditional simplified linear models, the researchers found that approximations break down.

"By assuming less, we're seeing something new," Giblin said.

Bentivegna said that their preliminary applications of numerical relativity have shown how and by how much approximations miss the correct answers. More importantly, she said, "This will allow us to comprehend a larger class of observational effects that are likely to emerge as we do precision cosmology."

"There are indeed several aspects of large-scale structure formation (and their consequences on, for example, the cosmic microwave background) which call for a fully general relativistic approach," said Sabino Matarrese, professor of physics and astronomy at the University of Padua, who was not involved in the studies.

This approach will also provide accuracy and insight to such things as gravitational lensing maps and studying the cross-correlation among different cosmological datasets, he added.

The European team found that perturbations reached a "turnaround point" and collapsed much earlier than predicted by approximate models. When they compared their model to the commonly assumed homogeneous expansion of the universe, they found that local deviations in an underdensity (a region with less than the average amount of matter) reached nearly 30 percent.

The U.S. team found that inhomogeneous matter generates local differences in the expansion rate of an evolving universe, deviating from the behavior of a widely used approximation to the behavior of space and time, called the Friedmann-Lemaître-Robertson-Walker metric.
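
For reference, the FLRW approximation assumes exact large-scale homogeneity and isotropy, so the whole geometry reduces to one function of time, the scale factor a(t):

```latex
ds^{2} = -c^{2}dt^{2} + a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\,d\phi^{2}\right)\right]
```

Local variations in the expansion rate, like those reported here, are precisely what a single global a(t) cannot capture.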

Stuart L. Shapiro, professor of physics and astronomy at the University of Illinois at Urbana-Champaign, is among the acknowledged leaders of solving Einstein's equations on the computer. "These works are important, not only for the new results that they report, but also for being forerunners in the application of numerical relativity to long-standing problems in cosmology," said Shapiro, who was not involved in the studies.

No longer restricted by the assumptions, researchers must abandon some traditional approaches, he continued, "and these papers begin to show us the way."

Bruni said galaxy surveys coming in the next decade will provide new high-precision measurements of cosmological parameters and that theoretical predictions must be equally precise and accurate.

"Numerical relativity simulations apply general relativity in full and aim precisely at this high level of accuracy," he said. "In the future they should become the new standard, or at least the benchmark for any work that makes simplifying assumptions."

Both teams are continuing to explore aspects of the universe using numerical relativity and enhancing their codes.

Bentivegna and Bruni built their code on the open-source Einstein Toolkit. The U.S. team created CosmoGRaPH and will soon make it open-source as well. Both codes will be available online for other researchers to use and improve.


Contacts and sources: 
Kevin Mayhood
Case Western Reserve University

Volcanoes Get Quiet Before They Erupt

When dormant volcanoes are about to erupt, they show some predictive characteristics—seismic activity beneath the volcano starts to increase, gas escapes through the vent, or the surrounding ground starts to deform. However, until now, there has not been a way to forecast eruptions of more restless volcanoes because of the constant seismic activity and gas and steam emissions. 

Carnegie volcanologist Diana Roman, working with a team of scientists from Penn State, Oxford University, the University of Iceland, and INETER*, has shown that periods of seismic quiet occur immediately before eruptions and can thus be used to forecast an impending eruption at restless volcanoes. The duration of the silence can indicate the level of energy that will be released when an eruption occurs: longer quiet periods mean a bigger bang. The research is published in Earth and Planetary Science Letters.

Iceland's Eyjafjallajökull eruption in 2010  
Credit: Árni Friðriksson

The team monitored a sequence of eruptions at the Telica Volcano in Nicaragua in 2011. It is a so-called stratovolcano, with a classic-looking cone built up by many layers of lava and ash. They started monitoring Telica in 2009 with various instruments and by 2011 they had a comprehensive network within 2.5 miles (4 kilometers) of the volcano’s summit.

The 2011 eruptive event was a month-long series of small to moderate ash explosions. Prior to the eruption there was no deep seismicity or deformation and only small changes in sulfur dioxide gas emissions, indicating that the eruption was not driven by fresh magma. Instead, the eruption likely resulted from the vents being sealed off so that gas could not escape, increasing the pressure until it eventually caused the explosions.

Of the 50 explosions that occurred, 35 were preceded by quiet periods lasting 30 minutes or longer, and a further 13 by shorter quiet intervals of at least five minutes. Only two of the 50 had no quiet period preceding the explosion.

“It is the proverbial calm before the storm,” remarked Roman. “The icing on the cake is that we could also use these quiet periods to forecast the amount of energy released.”

The researchers did a “hindsight” analysis of the energy released. They found that the longer the quiet phase preceding an explosion, the more energy was released in the ensuing explosion. The quiet periods ranged from 6 minutes before an explosion to over 10 hours (619 minutes) for the largest explosion.

The researchers were also able to forecast a minimum energy for impending explosions based on the data from previous quiet/explosion pairs and the duration of the particular quiet period being analyzed. The correlation between the duration of quiet periods and the amount of energy released is tied to how long the gas pathways stay blocked: the longer the blockage, the more pressure builds up, and the more energy is released. Sealing might occur due to mineral precipitation in cracks that previously acted as gas pathways, or due to the settling of rock near the volcano's surface.
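
A hindcast of this kind boils down to fitting the observed quiet-duration/energy pairs and reading a minimum energy off the fit for the quiet period in progress. A minimal sketch, using made-up numbers rather than the team's Telica catalog:

    import numpy as np

    # Hypothetical quiet-period durations (minutes) and released seismic
    # energies (arbitrary units); illustration only, not the Telica data.
    quiet_min = np.array([6.0, 12.0, 30.0, 45.0, 90.0, 180.0, 619.0])
    energy = np.array([1.2, 2.0, 4.5, 6.1, 11.0, 19.0, 60.0])

    # Fit a power law E = a * d^b by least squares in log-log space.
    b, log_a = np.polyfit(np.log(quiet_min), np.log(energy), 1)

    def min_energy_forecast(duration_min):
        """Minimum-energy forecast for an ongoing quiet period."""
        return np.exp(log_a) * duration_min ** b

    print(f"Fitted exponent b = {b:.2f}")
    print(f"Forecast after a 120-minute quiet period: "
          f"{min_energy_forecast(120):.1f} (arbitrary units)")

In real-time use, the forecast would be recomputed as a quiet period lengthens, raising the minimum-energy estimate the longer the silence lasts.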

Telica Volcano

Credit: Smithsonian Institution

“What is clear is that this method of careful monitoring of Telica or other similar volcanoes in real time could be used for short-term forecasts of eruptions,” Roman said. “Similar observations of this phenomenon have been noted anecdotally elsewhere. Our work has now quantified that quiet periods can be used for eruption forecasts and that longer quiet periods at recently active volcanoes could indicate a higher risk of energetic eruptions.”

*The paper’s other authors are Mel Rodgers of Oxford University, Peter LaFemina of Penn State University, Halldor Geirsson of the University of Iceland, and Virginia Tenorio of the Instituto Nicaraguense de Estudios Territoriales.

This work was supported by the National Science Foundation and the Nicaraguan Institute of Earth Sciences (INETER).


Contacts and sources: 
Carnegie Institution for Science

What Did Earth's Ancient Magnetic Field Look Like?

New work from Carnegie's Peter Driscoll suggests Earth's ancient magnetic field was significantly different than the present day field, originating from several poles rather than the familiar two. It is published in Geophysical Research Letters.

Earth generates a strong magnetic field extending from the core out into space that shields the atmosphere and deflects harmful high-energy particles from the Sun and the cosmos. Without it, our planet would be bombarded by cosmic radiation, and life on Earth's surface might not exist. The motion of liquid iron in Earth's outer core drives a phenomenon called the geodynamo, which creates Earth's magnetic field. This motion is driven by the loss of heat from the core and the solidification of the inner core.

But the planet's inner core was not always solid. What effect did the initial solidification of the inner core have on the magnetic field? Figuring out when solidification happened and how the field responded has posed a particularly vexing problem for those trying to understand our planet's geologic evolution, one that Driscoll set out to resolve.

This is an illustration of ancient Earth's magnetic field compared to the modern magnetic field 

Courtesy of Peter Driscoll.

Here's the issue: Scientists are able to reconstruct the planet's magnetic record through analysis of ancient rocks that still bear a signature of the magnetic polarity of the era in which they were formed. This record suggests that the field has been active and dipolar, having two poles, through much of our planet's history. The geological record also doesn't show much evidence for major changes in the intensity of the ancient magnetic field over the past 4 billion years. A critical exception is the Neoproterozoic Era, 0.5 to 1 billion years ago, where gaps in the intensity record and anomalous directions exist. Could this exception be explained by a major event like the solidification of the planet's inner core?

In order to address this question, Driscoll modeled the planet's thermal history going back 4.5 billion years. His models indicate that the inner core should have begun to solidify around 650 million years ago. Using further 3-D dynamo simulations, which model the generation of magnetic field by turbulent fluid motions, Driscoll looked more carefully at the expected changes in the magnetic field over this period.
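
As a schematic of how a thermal-history calculation pins down the onset of solidification (not Driscoll's actual model, and with placeholder numbers throughout), one can cool the core from an assumed initial temperature and record when the centre first drops below the melting point of iron:

    import numpy as np

    # Schematic secular cooling of the core's central temperature.  All
    # values are illustrative placeholders, not parameters from the
    # Geophysical Research Letters paper.
    T0 = 6000.0      # assumed initial central temperature (K)
    T_melt = 5500.0  # assumed central melting temperature of iron (K)
    tau = 44.0       # cooling timescale (Gyr), tuned so onset is recent

    t = np.linspace(0.0, 4.5, 4501)   # time since Earth formed (Gyr)
    T_center = T0 * np.exp(-t / tau)

    onset = t[np.argmax(T_center < T_melt)]
    print(f"Inner core begins to solidify ~{4.5 - onset:.2f} Gyr before present")

With these placeholder numbers the onset lands roughly 0.65 to 0.7 billion years before the present, the same ballpark as the 650-million-year figure above; the real model earns that date from a detailed energy budget rather than a tuned timescale.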

"What I found was a surprising amount of variability," Driscoll said. "These new models do not support the assumption of a stable dipole field at all times, contrary to what we'd previously believed."

His results showed that around 1 billion years ago, Earth could have transitioned from a modern-looking field, having a "strong" magnetic field with two opposite poles in the north and south of the planet, to having a "weak" magnetic field that fluctuated wildly in terms of intensity and direction and originated from several poles. Then, shortly after the predicted timing of the core solidification event, Driscoll's dynamo simulations predict that Earth's magnetic field transitioned back to a "strong," two-pole one.

"These findings could offer an explanation for the bizarre fluctuations in magnetic field direction seen in the geologic record around 600 to 700 million years ago," Driscoll added. "And there are widespread implications for such dramatic field changes."

Overall, the findings have major implications for Earth's thermal and magnetic history, particularly when it comes to how magnetic measurements are used to reconstruct continental motions and ancient climates. Driscoll's modeling and simulations will have to be compared with future data gleaned from high quality magnetized rocks to assess the viability of the new hypothesis.


Contacts and sources:
Peter Driscoll
The Carnegie Institution for Science

Why Are Some Nice and Others Nasty?



A University of Exeter scientist has helped develop an innovative mathematical model for exploring why some individuals evolve to be genetically programmed to be nice, while others stay nasty.

Dr Sasha Dall, Senior Lecturer in Mathematical Ecology based at the Penryn Campus in Cornwall, and a team of international colleagues have designed a new mathematical framework for examining social behaviour in a range of different species that will help advance our understanding of the evolution of sociality.

The researchers produced an innovative model of social evolution.
Credit: University of Exeter

The theory of kin selection has long allowed biologists to explain why some animals and other organisms adopt altruistic behaviour, at their own expense, for the benefit of their relatives – worker bees laying down their lives to promote the welfare of their mother, the queen, for example. But until now, scientists have not been able to explain genetic polymorphism: why some individuals appear to be genetically programmed to help others while living side by side with others who tend to exploit their generosity.

Using colony-living microbes as inspiration to explore why some individuals are by nature generous and others less so, the researchers produced an innovative model of social evolution that allows them to understand how far this is likely to be influenced by conditioning or the surrounding environment.

They found that the behaviour of individuals can often evolve to be determined by inherited genetic tendencies that accurately predict their social relationships, including their likely relatedness to other members of their community, and their surroundings, rather than by direct responses to what they sense or experience.

Dr Dall, a co-author on the paper, which is published in the journal PLOS Computational Biology, said: “As humans our behaviours are flexible and we base what we are meant to do on what we see after processing information about our world. However, some species rely on inherited instructions on what to do – individuals behave differently according to which specific genetic variants they are born with. What we have been able to show is how you can get a situation where you end up with distinct levels of genetically determined niceness coexisting within populations.”

Lead author on the paper Professor Olof Leimar, of Stockholm University, said: “Social evolution theory hasn’t previously addressed genetic polymorphism. We have developed a model that allows us to explore this within a general framework alongside other behavioural influences. Our hope and aim is to do further work in this area to test our model experimentally.”
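
A minimal sketch of the underlying idea, that frequency-dependent payoffs can hold "nice" and "nasty" genotypes in stable coexistence, is shown below. It is a generic snowdrift-game toy under replicator dynamics, not the Leimar et al. framework, and the payoff values are arbitrary:

    # Replicator dynamics for a snowdrift game: helping pays off most when
    # helpers are rare, so helpers and free-riders coexist at equilibrium.
    b, c = 5.0, 3.0  # benefit and cost of helping; arbitrary values

    def payoffs(x):
        """Average payoffs to helpers and free-riders when a fraction x helps."""
        w_help = x * (b - c / 2.0) + (1.0 - x) * (b - c)
        w_free = x * b   # free-riders benefit only when paired with a helper
        return w_help, w_free

    x, dt = 0.01, 0.01   # helpers start rare
    for _ in range(2000):
        w_h, w_f = payoffs(x)
        x += dt * x * (1.0 - x) * (w_h - w_f)   # replicator update

    print(f"Equilibrium fraction of helpers: {x:.3f}")               # ~0.571
    print(f"Analytic prediction 1 - c/(2b - c): {1 - c / (2*b - c):.3f}")

Because the rarer type always holds the payoff advantage here, neither "nice" nor "nasty" can drive the other out, which is the flavor of stable polymorphism the new framework is built to analyse.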

Genes as Cues of Relatedness and Social Evolution in Heterogeneous Environments by Olof Leimar, Sasha Dall, Peter Hammerstein and John McNamara is published in PLOS Computational Biology. This work was funded by a Leverhulme Trust International Network Grant to the team.



Contacts and sources:
Louise Vennells
University of Exeter

Hordes of Black Holes Merge Annually

New calculations predict that the Laser Interferometer Gravitational-Wave Observatory (LIGO) will detect approximately 1,000 mergers of massive black holes annually once it achieves full sensitivity early next decade.

The prediction, published online June 22 in the journal Nature, is based on computer simulations of more than a billion evolving binary stars. The simulations are based on state-of-the-art modeling of the physics involved, informed by the most recent astronomical and astrophysical observations.

"The main thing we find is that what LIGO detected makes sense," said Daniel Holz, assistant professor in physics and astronomy at the University of Chicago and a co-author of the Nature paper. The simulations predict the formation of black-hole binary stars in a range of masses that includes the two already observed. As more LIGO data become available, Holz and his colleagues will be able to test their results more rigorously.

This artist's illustration depicts the merging black hole binary systems for GW150914 (left) and GW151226 (right). The black hole pairs are shown together in this illustration, but were actually detected at different times and in different parts of the sky. The images have been scaled to show the difference in black hole masses. New research predicts that the Laser Interferometer Gravitational-Wave Observatory will detect gravitational waves generated by many more merging black holes in the coming years.

Credit: LIGO/A. Simonnet

The paper's lead author, Krzysztof Belczynski of Warsaw University in Poland, said he hopes the results will surprise him by exposing flaws in the work. Their calculations show, for example, that once LIGO reaches full sensitivity, it will detect only one pair of colliding neutron stars for every 1,000 detections of the far more massive black-hole collisions.

"Actually, I would love to be proven wrong on this issue. Then we will learn a lot," Belczynski said.

Forming big black holes

The new Nature paper, which includes co-authors Tomasz Bulik of Warsaw University and Richard O'Shaughnessy of the Rochester Institute of Technology, describes the most likely black-hole formation scenario that generated the first LIGO gravitational-wave detection in September 2015. That detection confirmed a major prediction of Albert Einstein's 1915 general theory of relativity.

The paper is the most recent in a series of publications capping a decade of analyses in which Holz, Belczynski and their associates theorized that the universe has produced many black-hole binaries in this mass range that are close enough to Earth for LIGO to detect.

LIGO scientist David Reitze takes us on a 1.3 billion year journey that begins with the violent merger of two black holes in the distant universe. The event produced gravitational waves, tiny ripples in the fabric of space and time, which LIGO detected on September 14, 2015, as they passed Earth.

Credit: LIGO Lab/Caltech/MIT

"Here we simulate binary stars, how they evolve, turn into black holes, and eventually get close enough to crash into each other and make gravitational waves that we would observe," Holz said.

The simulations show that the formation and evolution of a typical binary-star system produces a merger similar in mass to the event that LIGO detected last September, and after a similar elapsed time. These black-hole mergers have masses ranging from 20 to 80 times that of the sun.

LIGO will begin recording more gravitational-wave-generating events as the system becomes more sensitive and operates for longer periods of time. LIGO will go through successive upgrades over the coming years, and is expected to reach its design sensitivity by 2020. By then, the Nature study predicts that LIGO might be detecting more than 100 black hole collisions annually.
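
The steep climb in expected detections follows from simple geometry: the volume a detector surveys grows as the cube of its range, so even modest sensitivity gains multiply the rate. A back-of-envelope sketch with assumed round numbers (not the paper's population-synthesis calculation):

    # Detection rate scales with surveyed volume, i.e. the cube of the
    # detector's range.  Both the baseline rate and the range ratio are
    # assumed round numbers, not figures from the Nature paper.
    rate_initial = 10.0   # assumed detections per year at early sensitivity
    range_ratio = 2.5     # assumed design-to-early range ratio

    rate_design = rate_initial * range_ratio ** 3
    print(f"Implied rate at design sensitivity: ~{rate_design:.0f} per year")

With these assumptions the implied rate is roughly 150 per year, illustrating how a modest improvement in instrument range translates into an order-of-magnitude jump in detections.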

LIGO has detected big black holes and big collisions, with a combined mass greater than 30 times that of the sun. These can only be formed out of big stars.

"To make those you need to have low metallicity stars, which just means that these stars have to be relatively pristine," Holz said. The Big Bang produced mainly hydrogen and helium, which eventually collapsed into stars.

Forging metals

As these stars burned they forged heavier elements, which astronomers call "metals." Those stars with fewer metals lose less mass as they burn, resulting in the formation of more massive black holes when they die. That most likely happened approximately two billion years after the Big Bang, before the young universe had time to form significant quantities of heavy metals. Most of those black holes would have merged relatively quickly after their formation.

LIGO would be unable to detect the ones that merged early and quickly. But if the binaries were formed in large enough numbers, a small fraction would survive for longer periods and would end up merging 11 billion years after the Big Bang (2.8 billion years ago), recently enough for LIGO to detect.

This artist's animation shows the merger of two black holes and the gravitational waves that ripple outward during the event. The black holes—which represent those detected by LIGO on Dec. 26, 2015—were 14 and 8 times the mass of the sun until they merged, forming a single black hole 21 times the sun's mass. One solar mass was converted to gravitational waves.
Credit: LIGO Lab/Caltech/MIT

"That's in fact what we think happened," Holz said. Statistically speaking, "it's the most likely scenario." He added, however, that the universe continues to produce binary stars in local, still pristine pockets of low metallicity that resemble conditions of the early universe.

"In those pockets you can make these big stars, make the binaries, and then they'll merge right away and we would detect those as well."

Belczynski, Holz, and collaborators have based their simulations on what they regard as the best models available. They assume "isolated formation," which involves two stars forming in a binary, evolving in tandem into black holes, and eventually merging with a burst of gravitational wave emission. A competing model is "dynamical formation," which focuses on regions of the galaxy that contain a high density of independently evolving stars. Eventually, many of them will find each other and form binaries.

"There are dynamical processes by which those black holes get closer and closer and eventually merge," Holz said. Identifying which black holes merged under which scenario is difficult. One potential method would entail examining the black holes' relative spins. Binary stars that evolved dynamically are expected to have randomly aligned spins; detecting a preference for aligned spins would be clear evidence in favor of the isolated evolutionary model.

LIGO is not yet able to precisely measure black hole spin alignment, "but we're starting to get there," Holz said. "This study represents the first steps in the birth of the entirely new field of gravitational wave astronomy. We have been waiting for a century, and the future has finally arrived."


Contacts and sources:
Steve Koppes
University of Chicago 

Hair, Scales and Feathers Have Common Ancestor

The potential evolutionary link between hairs in mammals, feathers in birds and scales in reptiles has been debated for decades. Today, researchers of the University of Geneva (UNIGE) and the SIB Swiss Institute of Bioinformatics, Switzerland, demonstrate that all these skin appendages are homologous: they share a common ancestry. 

On the basis of new analyses of embryonic development, the Swiss biologists identified molecular and micro-anatomical signatures that are shared by hairs, feathers and scales at their early developmental stages. These new observations, published in Science Advances, indicate that the three structures evolved from their common reptilian ancestor.

Mammalian hairs and avian feathers develop from a similar primordial structure called a 'placode': a local thickening of the epidermis with columnar cells that reduce their rate of proliferation and express very specific genes. This observation has puzzled evolutionary and developmental biologists for many years because birds and mammals are not sister groups: they evolved from different reptilian lineages.

Placodes (spots stained in dark blue by the expression of an early developmental gene) are visible before the development of hair, scales and feathers in (from left to right) the mouse, the snake, the chicken and the crocodile.

Copyright UNIGE 2016 (Tzika, Di-Poï, Milinkovitch).

According to previous studies, however, reptile scales do not develop from an anatomical placode. This would imply that birds and mammals independently 'invented' placodes during their evolution.


The single evolutionary origin of placodes revealed

In 2015, a team from Yale University (USA) published an article showing that scales, hairs and feathers share molecular signatures during their development. These results fueled an old debate between two schools: one holds that the shared molecular signatures indicate a common evolutionary origin of skin appendages, while the other proposes that the same genes are simply re-used in the development of different skin appendages.

Today, Nicolas Di-Poï and Michel C. Milinkovitch of the Department of Genetics and Evolution at the UNIGE Faculty of Science and at the SIB put this long controversy to rest by demonstrating that reptilian scales develop from a placode with all the anatomical and molecular signatures of avian and mammalian placodes. The two scientists closely observed and analysed the morphological and molecular characteristics of the skin during embryonic development in crocodiles, snakes and lizards. 'Our study not only provides new molecular data that complement the work of the American team, but also reveals key micro-anatomical facts,' explains Michel Milinkovitch. 'Indeed, we have identified in reptiles new molecular signatures that are identical to those observed during the development of hairs and feathers, as well as the presence of the same anatomical placode as in mammals and birds. This indicates that the three types of skin appendages are homologous: reptilian scales, avian feathers and mammalian hairs, despite their very different final shapes, evolved from the scales of their reptilian common ancestor.'

A key gene for skin appendage development

During their new study, the researchers from UNIGE and SIB also investigated the bearded dragon, a species of lizard that comes in three variants. The first is the normal wild-type form. The second has scales of reduced size because it bears one copy of a natural genetic mutation. The third has two copies of the mutation ... and lacks all scales. By comparing the genomes of these three variants, Di-Poï and Milinkovitch discovered the gene affected by this mutation.

'We identified that the peculiar look of these naked lizards is due to the disruption of ectodysplasin-A (EDA), a gene whose mutations in humans and mice are known to generate substantial abnormalities in the development of teeth, glands, nails and hairs,' says Michel Milinkovitch. The Swiss researchers demonstrated that when EDA is malfunctioning in lizards, they fail to develop a proper scale placode, exactly as mammals or birds carrying similar mutations in that same gene cannot develop proper hair or feather placodes. These data all coherently indicate the common ancestry of scales, feathers and hairs.

The next challenge for the Swiss team, and many other researchers around the world, is to decipher the fine mechanisms explaining the diversity of forms of skin appendages. How has the ancestral scaly skin given rise to the very different morphologies of scales, feathers and hairs, as well as the astonishing variety of forms that these appendages can take? These future studies will hopefully fine-tune our understanding of the physical and molecular mechanisms generating the complexity and the diversity of life during evolution.





Contacts and sources:
Michel Milinkovitch
University of Geneva (UNIGE)