Thursday, July 31, 2014

Mercury's Magnetic Field Tells Scientists How Its Interior Is Different From Earth's

Earth and Mercury are both rocky planets with iron cores, but Mercury's interior differs from Earth's in a way that explains why the planet has such a bizarre magnetic field, UCLA planetary physicists and colleagues report.

Measurements from NASA's Messenger spacecraft have revealed that Mercury's magnetic field is approximately three times stronger in its northern hemisphere than in its southern one. In the current research, scientists led by Hao Cao, a UCLA postdoctoral scholar working in the laboratory of Christopher T. Russell, created a model to show how the dynamics of Mercury's core contribute to this unusual phenomenon.

This is Mercury, with colors enhanced to emphasize the chemical, mineralogical and physical differences among the rocks that make up its surface.

Credit: NASA

The magnetic fields that surround and shield many planets from the sun's energetic charged particles differ widely in strength. While Earth's is powerful, Jupiter's is more than 12 times stronger, and Mercury has a rather weak magnetic field. Venus likely has none at all. The magnetic fields of Earth, Jupiter and Saturn show very little difference between the planets' two hemispheres.

Within Earth's core, iron turns from a liquid to a solid at the inner boundary of the planet's liquid outer core; this results in a solid inner part and liquid outer part. The solid inner core is growing, and this growth provides the energy that generates Earth's magnetic field. Many assumed, incorrectly, that Mercury would be similar.

"Hao's breakthrough is in understanding how Mercury is different from the Earth so we could understand Mercury's strongly hemispherical magnetic field," said Russell, a co-author of the research and a professor in the UCLA College's department of Earth, planetary and space sciences. "We had figured out how the Earth works, and Mercury is another terrestrial, rocky planet with an iron core, so we thought it would work the same way. But it's not working the same way."

Mercury's peculiar magnetic field provides evidence that iron turns from a liquid to a solid at the core's outer boundary, say the scientists, whose research currently appears online in the journal Geophysical Research Letters and will be published in an upcoming print edition.

"It's like a snow storm in which the snow formed at the top of the cloud and middle of the cloud and the bottom of the cloud too," said Russell. "Our study of Mercury's magnetic field indicates iron is snowing throughout this fluid that is powering Mercury's magnetic field."

The research implies that planets have multiple ways of generating a magnetic field.

Hao and his colleagues conducted mathematical modeling of the processes that generate Mercury's magnetic field. In creating the model, Hao considered many factors, including how fast Mercury rotates and the chemistry and complex motion of fluid inside the planet.

The cores of both Mercury and Earth contain light elements such as sulfur, in addition to iron; the presence of these light elements keeps the cores from being completely solid and "powers the active magnetic field–generation processes," Hao said.

Hao's model is consistent with data from Messenger and other research on Mercury and explains Mercury's asymmetric magnetic field in its hemispheres. He said the first important step was to "abandon assumptions" that other scientists make.

"Planets are different from one another," said Hao, whose research is funded by a NASA fellowship. "They all have their individual character."

Co-authors include Jonathan Aurnou, professor of planetary science and geophysics in UCLA's Department of Earth, Planetary and Space Sciences, and Johannes Wicht, a research scientist at Germany's Max Planck Institute for Solar System Research.



Contacts and sources:
Stuart Wolpert
University of California - Los Angeles

Wednesday, July 30, 2014

DNA Reveals 10,000 Years Of Cattle Domestication

A research team from the University of Basel made a surprising find in a Neolithic settlement on the shores of Lake Biel in Switzerland: the DNA of a cattle bone shows genetic traces of the European aurochs and thus adds a further facet to the history of cattle domestication. The journal Scientific Reports has published the results.

Metacarpus of a small and compact adult bovid found in Twann after sampling for genetic analysis. 
(Illustration: University of Basel, Integrative Prehistory and Archaeological Science)

Modern cattle are the domesticated descendants of the aurochs, a wild species that became extinct in the 17th century. Domestication of the aurochs began roughly 10,000 years ago in the Near East, and DNA reveals this ancestry: aurochs of the Near East carried a maternally inherited genetic signature (mtDNA) called the T haplogroup. Modern cattle still carry this signature, showing that they derive from these early domesticated cattle of the Near East. This suggests that as early farmers spread from the Near East to Europe, they brought their domesticated cattle with them.

Unlike the aurochs of the Near East, the local wild aurochs of Europe belonged to the P haplogroup. Until now, scientists believed that female European aurochs did not genetically influence the Near Eastern cattle imported during the Neolithic Age (5,500 – 2,200 BC).

Small sturdy cows as draft animals

Scientists from the University of Basel found by chance a very small metacarpal bone from a Neolithic cow among other animal bones excavated at the lake settlement of Twann in Switzerland, and analyzed its mtDNA. The analysis showed that the bovine bone carried the European aurochs' genetic signature of the P haplogroup. The bone thus represents the first indisputable evidence that female European aurochs also crossbred with domestic cattle from the Near East.

The bone, dated to around 3,100 BC, is evidence of earlier crossbreeding between a wild female European aurochs and a domestic bull. “Whether these were coincidental single events or cases of intentional crossbreeding cannot be clearly answered on the basis of our results,” explains Prof. Jörg Schibler, head of the research groups for Integrative Prehistoric and Archaeological Science (IPAS) in the Department of Environmental Sciences at the University of Basel.

The animal to which the bone belonged was exceptionally small, with a withers height of only 112 centimeters. “This raises a number of questions for us: How difficult were copulation and birth in this case? And how many generations did it take to develop such small animals?” explains archaeogenetics specialist Angela Schlumbaum, regarding the significance of the discovery.

The scientists assume that the early farmers of the Horgen culture (3,400 – 2,750 BC), to which the bone dates, may have been trying to create a new, smaller and sturdier type of cattle, especially suitable as a draft animal, by intentionally crossbreeding domestic cattle with wild aurochs. This assumption accords with archaeological finds of wooden wheels, wagons and a yoke from the Horgen culture.


Contacts and sources:
University of Basel

Mysterious Molecules In Space

Over the vast, empty reaches of interstellar space, countless small molecules tumble quietly through the cold vacuum. Forged in the fusion furnaces of ancient stars and ejected into space when those stars exploded, these lonely molecules account for a significant amount of all the carbon, hydrogen, silicon and other atoms in the universe. In fact, some 20 percent of all the carbon in the universe is thought to exist as some form of interstellar molecule.

Absorption wavelength as a function of the number of carbon atoms in the silicon-terminated carbon chains SiC_(2n+1)H, for the extremely strong π-π electronic transitions. When the chain contains 13 or more carbon atoms - not significantly longer than carbon chains already known to exist in space - these strong transitions overlap with the spectral region occupied by the elusive diffuse interstellar bands (DIBs). 
Credit: D. Kokkin, ASU

Many astronomers hypothesize that these interstellar molecules are also responsible for an observed phenomenon on Earth known as the "diffuse interstellar bands," spectrographic proof that something out there in the universe is absorbing certain distinct colors of light from stars before it reaches the Earth. But since we don't know the exact chemical composition and atomic arrangements of these mysterious molecules, it remains unproven whether they are, in fact, responsible for the diffuse interstellar bands.

Now in a paper appearing this week in The Journal of Chemical Physics, from AIP Publishing, a group of scientists led by researchers at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., has offered a tantalizing new possibility: these mysterious molecules may be silicon-capped hydrocarbons like SiC3H, SiC4H and SiC5H. The group presents data and theoretical arguments to back that hypothesis.

At the same time, the group cautions that history has shown that while many possibilities have been proposed as the source of diffuse interstellar bands, none has been proven definitively.

"There have been a number of explanations over the years, and they cover the gamut," said Michael McCarthy a senior physicist at the Harvard-Smithsonian Center for Astrophysics who led the study.

Molecules in Space and How We Know They're There

Astronomers have long known that interstellar molecules containing carbon atoms exist and that by their nature they will absorb light shining on them from stars and other luminous bodies. Because of this, a number of scientists have previously proposed that some type of interstellar molecules are the source of diffuse interstellar bands -- the hundreds of dark absorption lines seen in color spectrograms taken from Earth.

In showing nothing, these dark bands reveal everything. The missing colors correspond to photons of given wavelengths that were absorbed as they travelled through the vast reaches of space before reaching us. More than that, if these photons were filtered by falling on space-based molecules, the wavelengths reveal the exact energies it took to excite the electronic structures of those absorbing molecules in a defined way.

Armed with that information, scientists here on Earth should be able to use spectroscopy to identify those interstellar molecules -- by demonstrating which molecules in the laboratory have the same absorptive "fingerprints." But despite decades of effort, the identity of the molecules that account for the diffuse interstellar bands remains a mystery. Nobody has been able to reproduce the exact same absorption spectra in laboratories here on Earth.
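In practical terms, the comparison is a line-matching exercise: measure absorption wavelengths in the laboratory and look for coincidences with catalogued DIB wavelengths within the measurement uncertainty. Here is a minimal sketch of that bookkeeping in Python; the wavelengths and the tolerance below are illustrative placeholders, not real laboratory or catalogue values.

```python
# Minimal sketch of absorption-"fingerprint" matching: compare laboratory
# line positions against catalogued diffuse interstellar bands (DIBs).
# All wavelengths and the tolerance are illustrative placeholders.

DIB_CATALOG_NM = [442.8, 578.0, 579.7, 628.4, 661.4]  # hypothetical DIB centres
LAB_LINES_NM = [578.1, 610.0, 661.3]                   # hypothetical lab spectrum
TOLERANCE_NM = 0.2                                     # assumed matching tolerance

def match_lines(lab_lines, catalog, tol):
    """Return (lab_line, dib) pairs that agree to within tol nanometres."""
    return [(lab, dib) for lab in lab_lines for dib in catalog
            if abs(lab - dib) <= tol]

for lab, dib in match_lines(LAB_LINES_NM, DIB_CATALOG_NM, TOLERANCE_NM):
    print(f"lab line at {lab} nm matches DIB at {dib} nm")
```

A real identification demands far more than a wavelength coincidence; relative line strengths and band profiles must also agree, which is why no assignment has yet been definitive.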

"Not a single one has been definitively assigned to a specific molecule," said Neil Reilly, a former postdoctoral fellow at Harvard-Smithsonian Center for Astrophysics and a co-author of the new paper.

Now Reilly, McCarthy and their colleagues are pointing to an unusual set of molecules — silicon-terminated carbon chain radicals — as a possible source of these mysterious bands.

As they report in their new paper, the team first created silicon-containing carbon chains SiC3H, SiC4H and SiC5H in the laboratory using a jet-cooled silane-acetylene discharge. They then analyzed their spectra and carried out theoretical calculations to predict that longer chains in this family might account for some portion of the diffuse interstellar bands.

However, McCarthy cautioned that the work has not yet revealed the smoking-gun source of the diffuse interstellar bands. In order to prove that these larger silicon-capped hydrocarbon molecules are such a source, more work needs to be done in the laboratory to define the exact types of transitions these molecules undergo, and these would have to be related directly to astronomical observations. But the study provides a tantalizing possibility for finding the elusive source of some of the mystery absorption bands -- and it reveals more of the rich molecular diversity of space.

"The interstellar medium is a fascinating environment," McCarthy said. "Many of the things that are quite abundant there are really unknown on Earth."


Contacts and sources:
American Institute of Physics (AIP)

Citation: "Optical Spectra of the Silicon-Terminated Carbon Chain Radicals SiCnH (n=3,4,5)," is authored by D. L. Kokkin, N. J. Reilly, R. C. Fortenberry, T. D. Crawford and M. C. McCarthy. It will be published in The Journal of Chemical Physics on July 29, 2014. After that date, it can be accessed at: http://scitation.aip.org/content/aip/journal/jcp/141/4/10.1063/1.4883521

Mapping Dark Matter, 4.5 Billion Light Years Away

Using the NASA/ESA Hubble Space Telescope, an international team of astronomers have mapped the mass within a galaxy cluster more precisely than ever before. Created using observations from Hubble's Frontier Fields observing programme, the map shows the amount and distribution of mass within MCS J0416.1–2403, a massive galaxy cluster found to be 160 trillion times the mass of the Sun.

The detail in this 'mass map' was made possible thanks to the unprecedented depth of data provided by new Hubble observations, and the cosmic phenomenon known as strong gravitational lensing. The team, led by Dr Mathilde Jauzac of Durham University in the UK and the Astrophysics & Cosmology Research Unit in South Africa, publish their results in the journal Monthly Notices of the Royal Astronomical Society.

Galaxy cluster MCS J0416.1–2403, one of six clusters targeted by the Hubble Frontier Fields programme. The blue in this image is a mass map created by using new Hubble observations combined with the magnifying power of a process known as gravitational lensing. In red is the hot gas detected by NASA’s Chandra X-Ray Observatory and shows the location of the gas in the cluster. The matter shown in blue that is separate from the red areas detected by Chandra consists of what is known as dark matter, and which can only be detected directly by gravitational lensing.
Credit: ESA/Hubble, NASA, HST Frontier Fields. Acknowledgement: Mathilde Jauzac (Durham University, UK) and Jean-Paul Kneib (École Polytechnique Fédérale de Lausanne, Switzerland).

Measuring the amount and distribution of mass within distant objects in the Universe can be very difficult. A trick often used by astronomers is to explore the contents of large clusters of galaxies by studying the gravitational effects they have on the light from very distant objects beyond them. This is one of the main goals of Hubble's Frontier Fields, an ambitious observing programme scanning six different galaxy clusters — including MCS J0416.1–2403.

Around three quarters of all matter in the Universe is so-called ‘dark matter’, which cannot be seen directly as it does not emit or reflect any light, and can pass through other matter without friction (it is collisionless). It interacts only by gravity, and its presence must be deduced from its gravitational effects.

One of these effects was predicted by Einstein’s general theory of relativity and sees large clumps of mass in the Universe warp and distort the space-time around them. Acting like lenses, they appear to magnify and bend light that travels through them from more distant objects. This is one of the few techniques astronomers can use to study dark matter.

Despite their large masses, the effect of galaxy clusters on their surroundings is usually quite minimal. For the most part they cause what is known as weak lensing, making even more distant sources appear as only slightly more elliptical or smeared across the sky. However, when the cluster is large and dense enough and the alignment of cluster and distant object is just right, the effects can be more dramatic. The images of normal galaxies can be transformed into rings and sweeping arcs of light, even appearing several times within the same image. This effect is known as strong lensing, and it is this phenomenon, seen around the six galaxy clusters targeted by the Frontier Fields programme, that has been used to map the mass distribution of MCS J0416.1–2403, using the new Hubble data.

"The depth of the data lets us see very faint objects and has allowed us to identify more strongly lensed galaxies than ever before," explains Dr Jauzac, lead author of the new Frontier Fields paper.

"Even though strong lensing magnifies the background galaxies they are still very far away and very faint. The depth of these data means that we can identify incredibly distant background galaxies. We now know of more than four times as many strongly lensed galaxies in the cluster than we did before."

Using Hubble's Advanced Camera for Surveys, the astronomers identified 51 new multiply imaged galaxies around the cluster, quadrupling the number found in previous surveys and bringing the grand total of lensed galaxies to 68. Because these galaxies are seen several times, this equates to almost 200 individual strongly lensed images which can be seen across the frame. This effect has allowed Jauzac and her colleagues to calculate the distribution of visible and dark matter in the cluster and produce a highly constrained map of its mass.
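The quoted counts are internally consistent: subtracting the 51 new systems from the total of 68 leaves 17 previously known, and 68 is four times 17, matching the "quadrupling". A quick check:

```python
# Check the lensed-galaxy counts quoted in the article.
new_galaxies = 51
total_galaxies = 68

previously_known = total_galaxies - new_galaxies   # 17
print(total_galaxies / previously_known)           # 4.0, i.e. "quadrupling"
```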

"Although we’ve known how to map the mass of a cluster using strong lensing for more than twenty years, it’s taken a long time to get telescopes that can make sufficiently deep and sharp observations, and for our models to become sophisticated enough for us to map, in such unprecedented detail, a system as complicated as MCS J0416.1–2403," says team member Jean-Paul Kneib.

By studying 57 of the most reliably and clearly lensed galaxies, the astronomers modelled the mass of both normal and dark matter within MCS J0416.1-2403. "Our map is twice as good as any previous models of this cluster!" adds Jauzac.

The total mass within MCS J0416.1-2403 — modelled over a region more than 650,000 light-years across — was found to be 160 trillion times the mass of the Sun. With an uncertainty of 0.5%, this measurement is the most precise mass of a cluster ever produced. By precisely pinpointing where the mass resides within clusters like this one, the astronomers are also measuring the warping of space-time with high precision.
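For a sense of scale, the quoted figure converts straightforwardly to kilograms. This back-of-the-envelope sketch uses the standard solar-mass value; the conversion is ours, not from the paper.

```python
# Convert the quoted cluster mass to kilograms and attach the 0.5% uncertainty.
SOLAR_MASS_KG = 1.989e30       # standard value for one solar mass
cluster_mass_suns = 160e12     # 160 trillion solar masses (from the article)

mass_kg = cluster_mass_suns * SOLAR_MASS_KG
uncertainty_kg = 0.005 * mass_kg   # 0.5% relative uncertainty

print(f"M = {mass_kg:.2e} kg +/- {uncertainty_kg:.1e} kg")
# M = 3.18e+44 kg +/- 1.6e+42 kg
```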

"The Frontier Fields observations and gravitational lensing techniques have opened up a way to very precisely characterise distant objects — in this case a cluster so far away that its light has taken four and a half billion years to reach us," adds Jean-Paul Kneib.

"But we will not stop here. To get a full picture of the mass we need to include weak lensing measurements too. Whilst it can only give a rough estimate of the inner core mass of a cluster, weak lensing provides valuable information about the mass surrounding the cluster core."

The team will continue to study the cluster using ultra-deep Hubble imaging and detailed strong and weak lensing information to map the outer regions of the cluster as well as its inner core, and will thus be able to detect substructures in the cluster's surroundings. They will also use X-ray measurements of hot gas from the Chandra observatory and spectroscopic redshifts from ground-based observatories to map the contents of the cluster, evaluating the respective contributions of dark matter, gas and stars.

Combining these sources of data will further enhance the detail of this mass distribution map, showing it in 3D and including the relative velocities of the galaxies within it. This paves the way to understanding the history and evolution of this galaxy cluster.



Contact and sources:

Georgia Bladon
ESA/Hubble,





Milky Way Less Massive Than Previously Thought

The Milky Way is less massive than astronomers previously thought, according to new research. 

For the first time, scientists have been able to precisely measure the mass of the galaxy that contains our Solar System. A team led by researchers at the University of Edinburgh have found that the Milky Way is approximately half the mass of a neighbouring galaxy – known as Andromeda – which has a similar structure to our own. They publish their results in the journal Monthly Notices of the Royal Astronomical Society.

An image of the Andromeda galaxy, Messier 31. 
Credit: Adam Evans.

The Milky Way and Andromeda are the two largest members of a cluster of galaxies which astronomers call the Local Group. Both galaxies have a spiral shape and appear to be of similar dimensions, but until now scientists had been unable to prove which is more massive, as previous studies were only able to measure the mass enclosed within both galaxies’ inner regions.

The Edinburgh astronomers used recently published data on the known distances between galaxies – as well as their velocities – to calculate the total masses of Andromeda and the Milky Way. In doing so, they also found that so-called ‘dark’ matter makes up 90% of the matter in both systems.

Dark matter is a little-understood invisible substance that makes up most of the outer regions of galaxies and around 27% of the content of the Universe. The researchers estimate that Andromeda contains twice as much dark matter as the Milky Way, causing it to be about twice as massive in total. Their work should help astronomers learn more about how the outer regions of galaxies are structured.
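Those two statements are arithmetically consistent: if dark matter supplies roughly 90% of each galaxy's mass and Andromeda holds about twice the Milky Way's dark matter, its total mass comes out about twice as large too. A quick sketch in normalised units (illustrative, not the paper's calculation):

```python
# Consistency check of the quoted fractions, in arbitrary normalised units.
mw_dark = 1.0                # Milky Way's dark matter content
m31_dark = 2.0 * mw_dark     # Andromeda holds roughly twice as much

DARK_FRACTION = 0.9          # dark matter is ~90% of each galaxy's mass

mw_total = mw_dark / DARK_FRACTION
m31_total = m31_dark / DARK_FRACTION

print(f"Andromeda / Milky Way mass ratio: {m31_total / mw_total:.1f}")  # 2.0
```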

Dr Jorge Peñarrubia, of the University of Edinburgh’s School of Physics and Astronomy, who led the study, said: “We always suspected that Andromeda is more massive than the Milky Way, but weighing both galaxies simultaneously proved to be extremely challenging. Our study combined recent measurements of the relative motion between our galaxy and Andromeda with the largest catalogue of nearby galaxies ever compiled to make this possible.”


Contacts and sources:

Citation: The study, carried out by University of Edinburgh scientists in collaboration with the University of British Columbia, Carnegie Mellon University and NRC Herzberg Institute of Astrophysics, appears in “A dynamic model of the local cosmic expansion”, J. Peñarrubia, Y-Z. Ma, M. G. Walker and A. McConnachie, Monthly Notices of the Royal Astronomical Society, Oxford University Press. From 30 July the paper will be available from http://mnras.oxfordjournals.org/lookup/doi/10.1093/mnras/stu879

A preprint of the paper can be seen at http://arxiv.org/pdf/1405.3662v2.pdf

Double Star With Weird And Wild Planet-Forming Discs Found

Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have found wildly misaligned planet-forming gas discs around the two young stars in the binary system HK Tauri. These new ALMA observations provide the clearest picture ever of protoplanetary discs in a double star.

This artist’s impression shows a striking pair of wildly misaligned planet-forming gas discs around both the young stars in the binary system HK Tauri. ALMA observations of this system have provided the clearest picture ever of protoplanetary discs in a double star. The new result demonstrates one possible way to explain why so many exoplanets — unlike the planets in the Solar System — came to have strange, eccentric or inclined orbits.
Credit: R. Hurt (NASA/JPL-Caltech/IPAC)

The new result also helps to explain why so many exoplanets — unlike the planets in the Solar System — came to have strange, eccentric or inclined orbits. The results will appear in the journal Nature on 31 July 2014.

Unlike our solitary Sun, most stars form in binary pairs — two stars that are in orbit around each other. Binary stars are very common, but they pose a number of questions, including how and where planets form in such complex environments.

This image of the binary system HK Tauri combines visible light and infrared data from the NASA/ESA Hubble Space Telescope with new data from ALMA. The ALMA observations of this system have provided the clearest picture ever of protoplanetary discs in a double star. The new result demonstrates one possible way to explain why so many exoplanets — unlike the planets in the Solar System — came to have strange, eccentric or inclined orbits.

Credit: B. Saxton (NRAO/AUI/NSF); K. Stapelfeldt et al. (NASA/ESA Hubble)


“ALMA has now given us the best view yet of a binary star system sporting protoplanetary discs — and we find that the discs are mutually misaligned!” said Eric Jensen, an astronomer at Swarthmore College in Pennsylvania, USA.

The two stars in the HK Tauri system, which is located about 450 light-years from Earth in the constellation of Taurus (The Bull), are less than five million years old and separated by about 58 billion kilometres — this is 13 times the distance of Neptune from the Sun.
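Those two numbers agree: Neptune orbits the Sun at roughly 30 astronomical units (about 4.5 billion kilometres), and 58 billion kilometres is close to 13 times that. A quick verification:

```python
# Verify the quoted HK Tauri separation against Neptune's orbital distance.
AU_KM = 1.496e8               # one astronomical unit in kilometres
neptune_km = 30.1 * AU_KM     # Neptune's mean distance from the Sun

separation_km = 58e9          # HK Tauri A-B separation (from the article)

print(f"separation = {separation_km / AU_KM:.0f} AU")                  # ~388 AU
print(f"ratio to Neptune's orbit = {separation_km / neptune_km:.1f}")  # ~12.9
```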

This picture shows the key velocity data taken with ALMA that helped the astronomers determine that the discs in HK Tauri were misaligned. The red areas represent material moving away from Earth and the blue indicates material moving toward us.

Credit: NASA/JPL-Caltech/R. Hurt (IPAC)

The fainter star, HK Tauri B, is surrounded by an edge-on protoplanetary disc that blocks the starlight. Because the glare of the star is suppressed, astronomers can easily get a good view of the disc by observing in visible light or at near-infrared wavelengths.

The companion star, HK Tauri A, also has a disc, but in this case it does not block out the starlight. As a result the disc cannot be seen in visible light because its faint glow is swamped by the dazzling brightness of the star. But it does shine brightly in millimetre-wavelength light, which ALMA can readily detect.

Using ALMA (http://www.eso.org/alma), the team were not only able to see the disc around HK Tauri A, but they could also measure its rotation for the first time. This clearer picture enabled the astronomers to calculate that the two discs are out of alignment with each other by at least 60 degrees. So rather than being in the same plane as the orbits of the two stars, at least one of the discs must be significantly misaligned.

This video takes us from a broad view of the sky deep into the star forming clouds of Taurus. The final sequence shows an artist’s impression of HK Tauri, a young double star with a protoplanetary disc around each of its component stars. ALMA observations of this system have provided the clearest picture ever of protoplanetary discs in a double star. The new result demonstrates one possible way to explain why so many exoplanets — unlike the planets in the Solar System — came to have strange, eccentric or inclined orbits.

Credit: ESO/Digitized Sky Survey 2/N. Risinger (skysurvey.org). Music: movetwo

“This clear misalignment has given us a remarkable look at a young binary star system,” said Rachel Akeson of the NASA Exoplanet Science Institute at the California Institute of Technology in the USA. “Although there have been earlier observations indicating that this type of misaligned system existed, the new ALMA observations of HK Tauri show much more clearly what is really going on in one of these systems.”

Stars and planets form out of vast clouds of dust and gas. As material in these clouds contracts under gravity, it begins to rotate until most of the dust and gas falls into a flattened protoplanetary disc swirling around a growing central protostar (http://en.wikipedia.org/wiki/Protostar).

This wide field image shows extensive dust and small clumps of star formation in part of the Taurus star formation region. A faint star at the centre of this picture is the young binary star system HK Tauri. ALMA observations of this system have provided the clearest picture ever of protoplanetary discs in a double star. The new result demonstrates one possible way to explain why so many exoplanets — unlike the planets in the Solar System — came to have strange, eccentric or inclined orbits. This picture was created from images from the Digitized Sky Survey 2.

Credit: ESO/Digitized Sky Survey 2. Acknowledgement: Davide De Martin

But in a binary system like HK Tauri things are much more complex. When the orbits of the stars and the protoplanetary discs are not roughly in the same plane, any planets that may be forming can end up in highly eccentric and tilted orbits [1].

This chart shows the constellation of Taurus (The Bull). All the stars visible on a dark clear night are shown. As well as the famous star clusters of the Hyades and Pleiades this area of the sky also contains dark dust clouds and is the site of star formation. One of these newly formed stars is HK Tauri (marked with a red circle). ALMA observations of this system have provided the clearest picture ever of protoplanetary discs in a double star.

This double star is very faint and red and cannot be seen visually in any but the largest telescopes.
Credit: ESO, IAU and Sky & Telescope

“Our results show that the necessary conditions exist to modify planetary orbits and that these conditions are present at the time of planet formation, apparently due to the formation process of a binary star system,” noted Jensen. “We can’t rule other theories out, but we can certainly rule in that a second star will do the job.”

Since ALMA can see the otherwise invisible dust and gas of protoplanetary discs, it allowed for never-before-seen views of this young binary system. “Because we’re seeing this in the early stages of formation with the protoplanetary discs still in place, we can see better how things are oriented,” explained Akeson.

Looking forward, the researchers want to determine if this type of system is typical or not. They note that this is a remarkable individual case, but additional surveys are needed to determine if this sort of arrangement is common throughout our home galaxy, the Milky Way.

Jensen concludes: “Although understanding this mechanism is a big step forward, it can’t explain all of the weird orbits of extrasolar planets — there just aren’t enough binary companions for this to be the whole answer. So that’s an interesting puzzle still to solve, too!”

[1] If the two stars and their discs are not all in the same plane, the gravitational pull of one star will perturb the other disc, making it wobble or precess, and vice versa. A planet forming in one of these discs will also be perturbed by the other star, which will tilt and deform its orbit.


Contacts and sources:
ESO

Scientists Reproduce Evolutionary Changes By Manipulating Embryonic Development Of Mice

A group of researchers from the University of Helsinki and the Universitat Autònoma de Barcelona have experimentally reproduced in mice morphological changes that took millions of years to occur in nature. 

Through small and gradual modifications in the embryonic development of mouse teeth, induced in the laboratory, the scientists obtained teeth that are morphologically very similar to those observed in the fossil record of rodent species which separated from mice millions of years ago.


Credit: Wikipedia

To modify the development of the teeth, the team from the Institute of Biotechnology of the University of Helsinki worked with cultures of embryonic teeth from mice lacking the ectodysplasin A (EDA) protein, which regulates the formation of structures and the differentiation of organs throughout embryonic development. Teeth grown from these mutant cultures develop very basic forms, with very uniform crowns. The scientists gradually added different amounts of the EDA protein to the embryonic cells and let them develop.

The researchers observed that the teeth formed with different degrees of complexity in their crowns. The most primitive changes observed coincide with those which took place in animals of the Triassic period, some two hundred million years ago. The development of later patterns coincides with the different stages of evolution found in rodents that became extinct in the Palaeocene Epoch, some 60 million years ago. The researchers thus managed to reproduce experimentally the transitions observed in the fossil record of mammalian teeth.

The team of scientists was able to compare the shape of these teeth against a computational prediction model created by Isaac Salazar-Ciudad, researcher at the UAB and at the University of Helsinki, which reproduces how the tooth changes from a group of identical cells into a complex three-dimensional structure with the full shape of a molar, calculating the position in space of each cell. The model is capable of predicting the changes in the morphology of the tooth when a gene is modified, and therefore offers an explanation of the mechanisms that cause these specific changes in tooth shape over the course of evolution.

"Evolution has been explained as the ability of individuals to adapt to their environment in different ways" Isaac Salazar-Ciudad states, "but we do not know why or how individuals differ morphologically. The research helps to understand evolution, in each generation, as a game between the possible variations in form and natural selection".


Contacts and sources:
Isaac Salazar-Ciudad
Universitat Autonoma de Barcelona

Antarctic Ice Sheet Is Result Of CO2 Decrease, Not Continental Breakup

Climate modelers from the University of New Hampshire have shown that the most likely explanation for the initiation of Antarctic glaciation during a major climate shift 34 million years ago was decreased carbon dioxide (CO2) levels. 

Credit: NASA

The finding counters a 40-year-old theory suggesting massive rearrangements of Earth's continents caused global cooling and the abrupt formation of the Antarctic ice sheet. It will provide scientists insight into the climate change implications of current rising global CO2 levels.

In a paper published today in Nature, Matthew Huber of the UNH Institute for the Study of Earth, Oceans, and Space and department of Earth sciences provides evidence that the long-held, prevailing theory known as "Southern Ocean gateway opening" is not the best explanation for the climate shift that occurred during the Eocene-Oligocene transition when Earth's polar regions were ice-free.

"The Eocene-Oligocene transition was a major event in the history of the planet and our results really flip the whole story on its head," says Huber. "The textbook version has been that gateway opening, in which Australia pulled away from Antarctica, isolated the polar continent from warm tropical currents, and changed temperature gradients and circulation patterns in the ocean around Antarctica, which in turn began to generate the ice sheet. We've shown that, instead, CO2-driven cooling initiated the ice sheet and that this altered ocean circulation."

Huber adds that the gateway theory has been supported by a specific, unique piece of evidence—a "fingerprint" gleaned from oxygen isotope records derived from deep-sea sediments. These sedimentary records have been used to map out gradient changes associated with ocean circulation shifts that were thought to bear the imprint of changes in ocean gateways.

Although declining atmospheric levels of CO2 have been the other main hypothesis used to explain the Eocene-Oligocene transition, previous modeling efforts were unsuccessful in bearing this out because the CO2 drawdown does not by itself match the isotopic fingerprint. It occurred to Huber's team that the fingerprint might not be so unique and that it might also have been produced indirectly by CO2 drawdown, through feedbacks between the growing Antarctic ice sheet and the ocean.

Says Huber, "One of the things we were always missing with our CO2 studies, and it had been missing in everybody's work, is if conditions are such to make an ice sheet form, perhaps the ice sheet itself is affecting ocean currents and the climate system—that once you start getting an ice sheet to form, maybe it becomes a really active part of the climate system and not just a passive player."

For their study, Huber and colleagues used brute force to generate results: they simply modeled the Eocene-Oligocene world as if it contained an Antarctic ice sheet of near-modern size and shape and explored the results within the same kind of coupled ocean-atmosphere model used to project future climate change and across a range of CO2 values that are likely to occur in the next 100 years (560 to 1200 parts per million).

"It should be clear that resolving these two very different conceptual models for what caused this huge transformation of the Earth's surface is really important because today as a global society we are, as I refer to it, dialing up the big red knob of carbon dioxide but we're not moving continents around."

Just what caused the sharp drawdown of CO2 is unknown, but Huber points out that having now resolved whether gateway opening or CO2 decline initiated glaciation, more pointed scientific inquiry can be focused on answering that question.

Huber notes that despite his team's finding, the gateway opening theory won't now be shelved, for that massive continental reorganization may have contributed to the CO2 drawdown by changing ocean circulation patterns that created huge upwellings of nutrient-rich waters containing plankton that, upon dying and sinking, took vast loads of carbon with them to the bottom of the sea.

The National Science Foundation provided funding for the project and the computing was carried out using clusters at Purdue University's Rosen Center for Advanced Computing.


Contacts and sources:
David Sims
University of New Hampshire
 
The article is available to download here: http://www.nature.com/nature/journal/v511/n7511/full/nature13597.html.

Watching Schrödinger's Cat Die (Or Come To Life)

One of the famous examples of the weirdness of quantum mechanics is the paradox of Schrödinger's cat.

If you put a cat inside an opaque box and make his life dependent on a random event, when does the cat die? When the random event occurs, or when you open the box?

Continuous monitoring of a quantum system can direct the quantum state along a random path. This three-dimensional map shows how scientists tracked the transition between two qubit states many times to determine the optimal path.
Credit: Irfan Siddiqi, UC Berkeley

Though common sense suggests the former, quantum mechanics – or at least the most common "Copenhagen" interpretation enunciated by Danish physicist Niels Bohr in the 1920s – says it's the latter. Someone has to observe the result before it becomes final. Until then, paradoxically, the cat is both dead and alive at the same time.

University of California, Berkeley, physicists have for the first time shown that, in fact, it's possible to follow the metaphorical cat through the whole process, whether he lives or dies in the end.

"Gently recording the cat's paw prints both makes it die, or come to life, as the case may be, and allows us to reconstruct its life history," said Irfan Siddiqi, UC Berkeley associate professor of physics, who is senior author of a cover article describing the result in the July 31 issue of the journal Nature.

The Schrödinger's cat paradox is a critical issue in quantum computers, where the input is an entanglement of states – like the cat's entangled life and death – yet the answer to whether the animal is dead or alive has to be definite.

"To Bohr and others, the process was instantaneous – when you opened the box, the entangled system collapsed into a definite, classical state. This postulate stirred debate in quantum mechanics," Siddiqi said. "But real-time tracking of a quantum system shows that it's a continuous process, and that we can constantly extract information from the system as it goes from quantum to classical. This level of detail was never considered accessible by the original founders of quantum theory."

For quantum computers, this would allow continuous error correction. The real world, everything from light and heat to vibration, can knock a quantum system out of its quantum state into a real-world, so-called classical state, like opening the box to look at the cat and forcing it to be either dead or alive. A big question regarding quantum computers, Siddiqi said, is whether you can extract information without destroying the quantum system entirely.

"This gets around that fundamental problem in a very natural way," he said. "We can continuously probe a system very gently to get a little bit of information and continuously correct it, nudging it back into line, toward the ultimate goal."

Being two opposing things at the same time

In the world of quantum physics, a system can be in two superposed states at the same time, as long as no one is observing. An observation perturbs the system and forces it into one or the other. Physicists say that the original entangled wave functions collapsed into a classical state.

In the past 10 years, theorists such as Andrew N. Jordan, professor of physics at the University of Rochester and coauthor of the Nature paper, have developed theories predicting the most likely way in which a quantum system will collapse.

"The Rochester team developed new mathematics to predict the most likely path with high accuracy, in the same way one would use Newtown's equations to predict the least cumbersome path of a ball rolling down a mountain," Siddiqi said. "The implications are significant, as now we can design control sequences to steer a system along a certain trajectory. For example, in chemistry one could use this to prefer certain products of a reaction over others."

Lead researcher Steve Weber, a graduate student in Siddiqi's group, and Siddiqi's former postdoctoral fellow Kater Murch, now an assistant professor of physics at Washington University in St. Louis, proved Jordan correct. They measured the trajectory of the wave function of a quantum circuit – a qubit, analogous to the bit in a normal computer – as it changed. The circuit, a superconducting pendulum, could be in two different energy states and was coupled to a second circuit to read out the final voltage, corresponding to the pendulum's frequency.

"If you did this experiment many, many times, measuring the road the system took each time and the states it went through, we could determine what the most likely path is," Siddiqi said. "Then we could design a control sequence to take the road we want to take for a given quantum evolution."

If you probed a chemical reaction in detail, for example, you could find the most likely path the reaction would take and design a way to steer the reaction to the products you want, not the most likely, Siddiqi said.

"The experiment demonstrates that, for any choice of final quantum state, the most likely or 'optimal path' connecting them in a given time can be found and predicted," Jordan said. "This verifies the theory and opens the way for active quantum control techniques."

The work was supported in part by the Office of Naval Research and the Office of the Director of National Intelligence (ODNI) of the Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office.


Contacts and sources:
Robert Sanders
University of California - Berkeley

40,000 Year Old Missing Piece Found For Ice Age Lion Sculpture

Archaeologists from the University of Tübingen have found an ancient fragment of ivory belonging to a 40,000-year-old animal figurine. Both pieces were found in the Vogelherd Cave in southwestern Germany, which has yielded a number of remarkable works of art dating to the Ice Age.

The fragment on the left makes up half the head of the animal figure on the right, showing that the “lion” was fully three-dimensional, and not a relief as long thought. 
Photo: Hilde Jensen, Universität Tübingen

The mammoth ivory figurine depicting a lion was discovered during excavations in 1931. The new fragment makes up one side of the figurine’s head, and the sculpture may be viewed at the Tübingen University Museum from 30 July.

“The figurine depicts a lion,” says Professor Nicholas Conard of Tübingen University’s Institute of Prehistory and Medieval Archaeology, and the Senckenberg Center for Human Evolution and Palaeoenvironment Tübingen. “It is one of the most famous Ice Age works of art, and until now, we thought it was a relief, unique among these finds dating to the dawn of figurative art. The reconstructed figurine clearly is a three-dimensional sculpture.”

Now complete: This lion was carved from mammoth ivory 40,000 years ago. 

Photos: Hilde Jensen, University of Tübingen

The new fragment was discovered when today’s archaeologists revisited the work of their predecessors from the 1930s. “We have been carrying out renewed excavations and analysis at Vogelherd Cave for nearly ten years,” says Conard. 

“The site has yielded a wealth of objects that illuminate the development of early symbolic artifacts dating to the period when modern humans arrived in Europe and displaced the indigenous Neanderthals.” He points out that the Vogelherd Cave has provided evidence of the world’s earliest art and music and is a key element in the push to make the caves of the Swabian Jura a UNESCO World Heritage site.

Vogelherd is one of four caves in the region where the world’s earliest figurines have been found, dating back to 40,000 years ago. Several dozen figurines and fragments of figurines have been found in the Vogelherd alone, and researchers are piecing together thousands of mammoth ivory fragments.

The new-look lion can be seen at the University Museum in Hohentübingen Castle, Wed.-Sun., 10am – 5pm; and Thursdays 10am – 7pm.


Contacts and sources:
Professor Nicholas Conard
University of Tübingen

Tuesday, July 29, 2014

Brainwaves Can Predict Audience Reaction For Television Programming

Media and marketing experts have long sought a reliable method of forecasting responses from the general population to future products and messages. According to a study conducted at the City College of New York (CCNY) in partnership with Georgia Tech, it appears that the brain responses of just a few individuals are a remarkably strong predictor.

By analyzing the brainwaves of 16 individuals as they watched mainstream television content, researchers were able to accurately predict the preferences of large TV audiences, with up to 90 percent accuracy in the case of Super Bowl commercials. The findings appear in a paper entitled "Audience Preferences Are Predicted by Temporal Reliability of Neural Processing," which was just published in the latest edition of Nature Communications.



"Alternative methods such as self-reports are fraught with problems as people conform their responses to their own values and expectations," said Jacek Dmochowski, lead author of the paper and a postdoctoral fellow at CCNY at the time the study was being conducted. However, brain signals measured using electroencephalography (EEG) can, in principle, alleviate this shortcoming by providing immediate physiological responses immune to such self-biasing. "Our findings show that these immediate responses are in fact closely tied to the subsequent behavior of the general population," he added.

Lucas Parra, Herbert Kayser Professor of Biomedical Engineering at CCNY and the paper's senior author, explained that "when two people watch a video, their brains respond similarly – but only if the video is engaging. Popular shows and commercials draw our attention and make our brainwaves very reliable; the audience is literally 'in sync'."

In the study, participants watched scenes from The Walking Dead TV show and several commercials from the 2012 and 2013 Super Bowls. EEG electrodes were placed on their heads to capture brain activity. The reliability of the recorded neural activity was then compared to audience reactions in the general population using publicly available social media data provided by the Harmony Institute and ratings from USA Today's Super Bowl Ad Meter.
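The key quantity is how similarly different viewers' brain signals evolve in time while they watch the same footage. As a rough illustration of the idea, and not the paper's actual method, one simple way to score such agreement is the mean pairwise correlation across viewers; the data below are random placeholders.

```python
import numpy as np

def neural_reliability(eeg):
    """Mean pairwise Pearson correlation across viewers.

    eeg: array of shape (n_viewers, n_samples), one EEG trace per
    viewer recorded while all watched the same video segment.
    """
    corr = np.corrcoef(eeg)                        # viewer-by-viewer matrix
    pairs = corr[np.triu_indices_from(corr, k=1)]  # unique off-diagonal pairs
    return pairs.mean()

# Placeholder data: 16 viewers sharing a common signal plus private noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
eeg = 0.5 * shared + rng.standard_normal((16, 1000))

print(f"reliability score: {neural_reliability(eeg):.2f}")
```

Higher scores mean the audience's brainwaves are more 'in sync', which is the property the study found to track Twitter traffic, Nielsen ratings and ad preferences.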


"Brain activity among our participants watching The Walking Dead predicted 40 percent of the associated Twitter traffic," said Parra. "When brainwaves were in agreement, the number of tweets tended to increase." Brainwaves also predicted 60 percent of the Nielsen ratings that measure the size of a TV audience.

The study was even more accurate (90 percent) when comparing preferences for Super Bowl ads. For instance, researchers saw very similar brainwaves from their participants as they watched a 2012 Budweiser commercial that featured a beer-fetching dog. The general public voted the ad as their second favorite that year. The study found little agreement in the brain activity among participants when watching a GoDaddy commercial featuring a kissing couple. It was among the worst rated ads in 2012.

The CCNY researchers collaborated with Matthew Bezdek and Eric Schumacher from Georgia Tech to identify which brain regions are involved and explain the underlying mechanisms. Using functional magnetic resonance imaging (fMRI), they found evidence that brainwaves for engaging ads could be driven by activity in visual, auditory and attention brain areas.

"Interesting ads may draw our attention and cause deeper sensory processing of the content," said Bezdek, a postdoctoral researcher at Georgia Tech's School of Psychology.

Apart from applications to marketing and film, Parra is investigating whether this measure of attentional draw can be used to diagnose neurological disorders such as attention deficit disorder or mild cognitive decline. Another potential application is to predict the effectiveness of online educational videos by measuring how engaging they are.



Contacts and sources:
Jason Maderer
Georgia Institute of Technology

The Control Of Nature: Stewardship Of Fire Ecology By Native Californian Cultures

Before the colonial era, hundreds of thousands of people lived on the land now called California, and many of their cultures manipulated fire to control the availability of plants they used for food, fuel, tools, and ritual. Contemporary tribes continue to use fire to maintain desired habitats and natural resources.

Frank Lake, an ecologist with the U.S. Forest Service’s Pacific Southwest Station, will lead a field trip to the Stone Lake National Wildlife Refuge during the Ecological Society of America’s 99th Annual Meeting in Sacramento, Cal., this August. Visitors will learn about plant and animal species of cultural importance to local tribes. Don Hankins, a faculty associate at California State University at Chico and a member of the Miwok people, will co-lead the trip, which will end with a visit to the California State Indian Museum.

Stone Lake National Wildlife Refuge in Elk Grove, Cal.

Credit: Justine Belson/USFWS.

Lake will also host a special session on a “sense of place,” sponsored by the Traditional Ecological Knowledge section of the Ecological Society, that will bring representatives of local tribes into the Annual Meeting to share their cultural and professional experiences working on tribal natural resources issues.

“The fascinating thing about the Sacramento Valley and the Miwok lands where we are taking the field trip is that it was a fire and flood system,” said Lake. “To maintain the blue and valley oak, you need an anthropogenic fire system.”

Lake, raised among the Yurok and Karuk tribes in the Klamath River area of northernmost California, began his career with an interest in fisheries, but soon realized he would need to understand fire to restore salmon. Fire exerts a powerful effect on ecosystems, including the quality and quantity of water available in watersheds, in part by reducing the density of vegetation.

“Those trees that have grown up since fire suppression are like straws sucking up the groundwater,” Lake said.

The convergence of the Sacramento and San Joaquin rivers was historically one of the largest salmon-bearing runs on the West Coast, Lake said, and the Miwok, Patwin and Yokut tribal peoples who lived in the area saw and understood how fire was involved.

California native cultures burned patches of forest in deliberate sequence to diversify the resources available within their region. The first year after a fire brought sprouts for forage and basketry. In 3 to 5 years, shrubs produced a wealth of berries. Mature trees remained for the acorn harvest, but burning also made way for the next generation of trees, to ensure a consistent future crop. Opening the landscape improved game and travel, and created sacred spaces.

“They were aware of the succession, so they staggered burns by 5 to 10 years to create mosaics of forest in different stages, which added a lot of diversity for a short proximity area of the same forest type,” Lake said. “Complex tribal knowledge of that pattern across the landscape gave them access to different seral stages of soil and vegetation when tribes made their seasonal rounds.”

In oak woodlands, burning killed mold and pests like the filbert weevil and filbert moth harbored by the duff and litter on the ground. People strategically burned in the fall, after the first rain, to hit a vulnerable time in the life cycle of the pests, and maximize the next acorn crop. Lake thinks that understanding tribal use of these forest environments has context for and relevance to contemporary management and restoration of endangered ecosystems and tribal cultures.

“Working closely with tribes, the government can meet its trust responsibility and have accountability to tribes, and also fulfill the public trust of protection of life, property, and resources,” Lake said. “By aligning tribal values with public values you can get a win-win, reduce fire along wildlife-urban interfaces, and make landscapes more resilient.”

Contacts and sources:
Liza Lester
Ecological Society of America

Violent Aftermath For The Warriors At Alken Enge

Four pelvic bones on a stick and bundles of desecrated bones testify to the ritual violence perpetrated on the corpses of the many warriors who fell in a major battle close to the Danish town of Skanderborg around the time Christ was born.

Denmark attracted international attention in 2012 when archaeological excavations revealed the bones of an entire army, whose warriors had been thrown into the bogs near the Alken Enge wetlands in East Jutland after losing a major engagement in the era around the birth of Christ. Work has continued in the area since then and archaeologists and experts from Aarhus University, Skanderborg Museum and Moesgaard Museum have now made sensational new findings.

Four pelvic bones on a stick. 

Photo: Peter Jensen, Aarhus University

“We have found a wooden stick bearing the pelvic bones of four different men. In addition, we have unearthed bundles of bones, bones bearing marks of cutting and scraping, and crushed skulls. Our studies reveal that a violent sequel took place after the fallen warriors had lain on the battlefield for around six months,” relates Project Manager Mads Kähler Holst from Aarhus University.

Religious act

The remains of the fallen were gathered together and all the flesh was cleaned from the bones, which were then sorted and brutally desecrated before being cast into the lake. The warriors’ bones are mixed with the remains of slaughtered animals and clay pots that probably contained food sacrifices.

“We are fairly sure that this was a religious act. It seems that this was a holy site for a pagan religion – a sacred grove – where the victorious conclusion of major battles was marked by the ritual presentation and destruction of the bones of the vanquished warriors,” adds Mads Kähler Holst.

Remains of corpses thrown in the lake

Geological studies have revealed that back in the Iron Age, the finds were thrown into the water from the end of a tongue of land that stretched out into Mossø lake, which was much larger back then than it is today.

“Most of the bones we find here are spread out over the lake bed seemingly at random, but the new finds have suddenly given us a clear impression of what actually happened. This applies in particular to the four pelvic bones. They must have been threaded onto the stick after the flesh was cleaned from the skeletons,” explains Field Director Ejvind Hertz from Skanderborg Museum.

Internal Germanic conflict

The battles near Alken Enge were waged during that part of the Iron Age when major changes were taking place in Northern Europe because the Roman Empire was expanding northwards, putting pressure on the Germanic tribes. This resulted in wars between the Romans and the Germanic tribes, and between the Germanic peoples themselves.

Archaeologists assume that the recent finds at the Alken dig stem from an internal conflict of this kind. Records kept by the Romans describe the macabre rituals practised by the Germanic peoples on the bodies of their vanquished enemies, but this is the first time that traces of an ancient holy site have been unearthed.


Contacts and sources:
Mads Kähler Holst
Aarhus University

Rocket Research Confirms X-Ray Glow Emanates From Galactic Hot Bubble

When we look up to the heavens on a clear night, we see an immense dark sky with uncountable stars. With a small telescope we can also see galaxies, nebulae, and the disks of planets. If you looked at the sky with an X-ray detector, you would still see many of these same familiar objects; in addition, you would see the whole sky glowing brightly with X-rays. This glow is called the “diffuse X-ray background.”


Credit: NASA

At higher energies, the diffuse emission comes from point sources too distant and faint to be seen individually. The origin of the soft X-ray glow, however, has remained controversial even 50 years after it was first discovered. The longstanding debate centers on whether the soft X-ray emission comes from outside our solar system, from a hot bubble of gas called the local hot bubble, or from within the solar system, where the solar wind collides with diffuse gas.

New findings settle this controversy. A study published online Sunday in the journal Nature shows that the emission is dominated by the local hot bubble of gas (1 million degrees), with, at most, 40 percent of the emission originating within the solar system. The findings should put to rest the disagreement about the origin of the X-ray emission and confirm the existence of the local hot bubble.



“We now know that the emission comes from both sources, but is dominated by the local hot bubble,” said Massimiliano Galeazzi, professor and associate chair in the Department of Physics in the College of Arts and Sciences, and principal investigator of the study. “This is a significant discovery. Specifically, the existence or nonexistence of the local bubble affects our understanding of the galaxy close to the sun and can be used as the foundation for future models of the galaxy structure.”

Galeazzi, who led the investigation, and his collaborators from NASA, the University of Wisconsin-Madison, the University of Michigan, the University of Kansas, the Johns Hopkins University and CNES in France, launched a sounding rocket to analyze the diffuse X-ray emission, with the goal of identifying how much of that emission comes from within our solar system and how much from the local hot bubble.

UM’s Massimiliano Galeazzi, in blue on the left, and his collaborators ready the sounding rocket for launch with NASA engineers.
Credit: UM

“The DXL team is an extraordinary example of cross-disciplinary science, bringing together astrophysicists, planetary scientists, and heliophysicists,” said F. Scott Porter, astrophysicist at NASA’s Goddard Space Flight Center. “It’s unusual but very rewarding when scientists with such diverse interests come together to produce such groundbreaking results.”

The study measured the diffuse X-ray emission at low energy, in what is referred to as the 1/4 keV band, corresponding to radiation with a wavelength on the order of 5 nm.

“At that low energy, the light gets absorbed by the neutral gas in our galaxy, so the fact that we observe it means that the source must be ‘local,’ possibly within a few hundred light-years of Earth,” Galeazzi said. “However, until now it was unclear whether it comes from within the solar system (within a few astronomical units of Earth) or from a very hot bubble of gas in the solar neighborhood (hundreds of light-years from Earth). This is like traveling at night and seeing a light, not knowing if the light comes from 10 yards or 1,000 miles away.”
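As a quick check of that correspondence, the photon energy-wavelength relation converts the middle of the 1/4 keV band to a wavelength of about 5 nm:

\[
\lambda = \frac{hc}{E} \approx \frac{1239.84\ \mathrm{eV\,nm}}{250\ \mathrm{eV}} \approx 4.96\ \mathrm{nm},
\]

right at the roughly 5 nm figure quoted above.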

Interstellar bubbles are probably created by stellar winds and supernova explosions, which cast material outward, forming large cavities in the interstellar medium, the material that fills the space between the stars in a galaxy. Hot, X-ray-emitting gas can fill such a bubble if a second supernova occurs within the cavity.

X-ray emission also occurs within our solar system, when the solar wind collides with interplanetary neutral gas. The solar wind is a stream of charged particles released with great energy from the sun's atmosphere; it travels vast distances, carving out a region called the heliosphere. As these particles move through space at supersonic speeds, they can collide with the neutral hydrogen and helium that drifts into the solar system as the sun moves through the galaxy. In such a collision, a solar wind ion captures an electron from the neutral atom and then emits an X-ray. This is called the solar wind charge exchange process.
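Schematically, one representative charge exchange channel looks like the following (highly charged oxygen in the solar wind is a commonly cited example; many other ions contribute as well):

\[
\mathrm{O}^{7+} + \mathrm{H} \;\rightarrow\; \mathrm{O}^{6+\,\ast} + \mathrm{H}^{+}, \qquad \mathrm{O}^{6+\,\ast} \;\rightarrow\; \mathrm{O}^{6+} + h\nu\ (\text{X-ray}),
\]

where the asterisk marks the electronically excited ion, which relaxes by emitting an X-ray photon.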

The team refurbished and modernized an X-ray detector that was mounted on a sounding rocket. The X-ray detector was originally flown by the University of Wisconsin-Madison on multiple missions during the 1970s to map the soft X-ray sky. The current team, led by Galeazzi, rebuilt, tested, calibrated, and adapted the detectors to a modern NASA suborbital sounding rocket; components from a 1993 Space Shuttle mission also were used. The sounding rocket mission, known as “The Diffuse X-ray emission from the Local Galaxy,” aimed to separate and quantify the X-ray emission from the two suspected sources: the local hot bubble and solar wind charge exchange. This was the first mission designed for this kind of study.
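In principle, such a separation reduces to a two-component fit: the charge exchange signal varies from pointing to pointing with the column of inflowing neutral gas along each line of sight, while the local hot bubble contributes a component fixed on the sky. A minimal least-squares sketch of that idea, using made-up placeholder numbers rather than anything from the DXL analysis:

```python
import numpy as np

# Placeholder data: observed 1/4 keV count rates along a scan path, and a
# model of the relative SWCX intensity expected at each pointing (e.g., from
# the neutral He/H column along each line of sight). All values are made up.
observed = np.array([105.0, 118.0, 131.0, 122.0, 109.0])  # counts/s
swcx_model = np.array([0.8, 1.4, 2.0, 1.6, 1.0])          # relative units

# Fit observed = a * swcx_model + b, where b is the sky-fixed
# local-hot-bubble level and a scales the SWCX template.
A = np.column_stack([swcx_model, np.ones_like(swcx_model)])
(a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)

# Average fraction of the emission attributed to charge exchange.
swcx_fraction = a * swcx_model.mean() / observed.mean()
print(f"SWCX scale a = {a:.1f}, LHB level b = {b:.1f} counts/s")
print(f"Mean SWCX fraction of the emission: {swcx_fraction:.0%}")
```

With these illustrative numbers the fit attributes roughly a quarter of the emission to charge exchange, qualitatively the same kind of decomposition (dominant bubble, minority solar wind contribution) that the study reports.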

“X-ray telescopes on satellites can observe for long periods of time and have reasonably large collecting areas, but very tiny fields of view, so they are very good for studying a small area in great detail,” said Dan McCammon, professor of physics at the University of Wisconsin-Madison and one of the scientists who built the original instrument. “However, the observations for this experiment needed to look at a large part of the sky in a short time, to make sure the solar wind did not change during the measurements. The sounding rocket could do it 4,000 times faster.”

The rocket was launched with the support of NASA’s Wallops Flight Facility, from White Sands Missile Range in New Mexico, on December 12, 2012. It reached an altitude of 258 km (160 miles), and stayed above the Earth’s atmosphere for five minutes, enough time to carry out its mission successfully. The information collected was transmitted directly to researchers on the ground at the launch facility.

“The sounding rocket program allows us to conduct high-risk, high-payoff science quickly and inexpensively,” Porter said. “It is really one of NASA’s crown jewels.”

Galeazzi and collaborators are already preparing the next launch, scheduled for December 2015. That mission will be similar in design and goals, but will carry multiple instruments to characterize the emission in more detail.

The Nature article is titled “The origin of the ‘local’ 1/4 keV X-ray flux in both charge exchange and a hot bubble.” Other authors are M. Chiao, M.R. Collier, F.S. Porter, S.L. Snowden, N.E. Thomas and B.M. Walsh, from NASA’s Goddard Space Flight Center; T. Cravens and I. Robertson, from the Department of Physics and Astronomy, University of Kansas; D. Koutroumpa, from Université Versailles St-Quentin, Sorbonne Universités & CNRS/INSU, LATMOS-IPSL; K.D. Kuntz, from the Henry A. Rowland Department of Physics and Astronomy, Johns Hopkins University; R. Lallement, from GEPI Observatoire de Paris, CNRS, Université Paris Diderot; S.T. Lepri, from the Department of Atmospheric, Oceanic, and Space Sciences, University of Michigan; D. McCammon and K. Morgan, from the Department of Physics, University of Wisconsin-Madison; and Y. Uprety and E. Ursino, from the UM Department of Physics.


Contacts and sources:
By Marie Guma-Diaz and Annette Gallagher
University of Miami

Citation:  Galeazzi et al. "The origin of the local 1/4-keV X-ray flux in both charge exchange and a hot bubble." Nature online, 27 July 2014.  

The Real Price Of Steak

New research reveals the comparative environmental costs of livestock-based foods.

We are told that eating beef is bad for the environment, but do we know its real cost? Are other animal-derived foods better or worse? New research at the Weizmann Institute of Science, conducted in collaboration with scientists in the US, compared the environmental costs of various animal-based foods and came up with some surprisingly clear results.

The findings, which appear in the Proceedings of the National Academy of Sciences (PNAS), will hopefully inform not only individual dietary choices but also those of governmental agencies that set agricultural and marketing policies.

Dr. Ron Milo of the Institute’s Plant Sciences Department, together with his research student Alon Shepon and in collaboration with Tamar Makov of Yale University and Dr. Gidon Eshel in New York, asked which types of animal-based food one should consume, environmentally speaking. Though many studies have addressed parts of the issue, none had done a thorough comparative study giving a multi-perspective picture of the environmental costs of food derived from animals.

Credit: Weizmann Institute of Science

The team looked at the five main sources of protein in the American diet: dairy, beef, poultry, pork and eggs. Their idea was to calculate the environmental inputs – the costs – per nutritional unit: a calorie or gram of protein. The main challenge the team faced was to devise accurate, faithful input values. 

For example, cattle grazing on arid land in the western half of the US use enormous amounts of land, but relatively little irrigation water. Cattle in feedlots, on the other hand, eat mostly corn, which requires less land, but much more irrigation and nitrogen fertilizer. The researchers needed to account for these differences, but determine aggregate figures that reflect current practices and thus approximate the true environmental cost for each food item.

The inputs the researchers employed came from US Department of Agriculture databases, among other resources. The US is ideal for such a study, says Milo, because the data quality is high, enabling the team to include, for example, figures on import-export imbalances that add to the cost. The environmental inputs considered were land use, irrigation water, greenhouse gas emissions, and nitrogen fertilizer use. Each of these inputs feeds into a complex environmental system: land use, in addition to tying up a valuable resource in agriculture, is the main cause of biodiversity loss, and nitrogen fertilizer pollutes natural waterways.

When the numbers were in, including those for the environmental costs of different kinds of feed (pasture, roughage such as hay, and concentrates such as corn), the team developed equations that yielded the environmental cost per calorie, and then per unit of protein, for each food.
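In outline, the calculation is bookkeeping: sum each environmental input over a food's production chain, then divide by its nutritional output. A minimal sketch of that per-nutritional-unit normalization, with made-up placeholder numbers rather than the study's USDA-derived inputs:

```python
# Sketch of the per-calorie / per-gram-protein cost calculation described
# above. All numbers below are illustrative placeholders, NOT study values.

# Aggregate annual environmental inputs per food category.
inputs = {
    "beef":    {"land_m2": 1.0e9, "water_L": 5.0e9, "ghg_kgCO2e": 2.0e8, "nitrogen_kg": 1.0e7},
    "poultry": {"land_m2": 4.0e7, "water_L": 4.5e8, "ghg_kgCO2e": 4.0e7, "nitrogen_kg": 1.7e6},
}

# Corresponding annual nutritional output.
output = {
    "beef":    {"kcal": 5.0e8, "protein_g": 4.0e7},
    "poultry": {"kcal": 6.0e8, "protein_g": 7.0e7},
}

for food, cost in inputs.items():
    per_kcal = {k: v / output[food]["kcal"] for k, v in cost.items()}
    per_protein = {k: v / output[food]["protein_g"] for k, v in cost.items()}
    print(food, "per kcal:", per_kcal)
    print(food, "per g protein:", per_protein)
```

The hard part of the real study is not this division but the aggregation step: choosing input values (grazing vs. feedlot, feed type, regional water use) that faithfully reflect current practice.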

The calculations showed that the biggest culprit, by far, is beef. That was no surprise, say Milo and Shepon. The surprise was in the size of the gap: In total, eating beef is more costly to the environment by an order of magnitude – about ten times on average – than other animal-derived foods, including pork and poultry. 

Cattle require on average 28 times more land and 11 times more irrigation water, release 5 times more greenhouse gases, and consume 6 times as much nitrogen as eggs or poultry. Poultry, pork, eggs and dairy all came out fairly similar. That was also surprising, because dairy production is often thought to be relatively environmentally benign. But the research shows that the cost of irrigating and fertilizing the crops fed to milk cows, together with the relative inefficiency of cows compared to other livestock, drives up the cost significantly.

Milo believes that this study could have a number of implications. In addition to helping individuals make better choices about their diet, it should hopefully help inform agricultural policy. And the tool the team has created for analyzing the environmental costs of agriculture can be expanded and refined to be applied, for example, to understanding the relative cost of plant-based diets, or those of other nations. In addition to comparisons, it can point to areas that might be improved. Models based on this study can help policy makers decide how to better ensure food security through sustainable practices.

Dr. Ron Milo’s research is supported by the Mary and Tom Beck-Canadian Center for Alternative Energy Research; the Lerner Family Plant Science Research Endowment Fund; the European Research Council; the Leona M. and Harry B. Helmsley Charitable Trust; Dana and Yossie Hollander, Israel; the Jacob and Charlotte Lehrman Foundation; the Larson Charitable Foundation; the Wolfson Family Charitable Trust; Charles Rothschild, Brazil; Selmo Nissenbaum, Brazil; and the estate of David Arthur Barton. Dr. Milo is the incumbent of the Anna and Maurice Boukstein Career Development Chair in Perpetuity.


Contacts and sources:
Weizmann Institute of Science

Mutations From Venus, Mutations From Mars

Weizmann Institute researchers explain why genetic fertility problems can persist in a population

Some 15% of adults suffer from fertility problems, many of them due to genetic factors. This is something of a paradox: we might expect such genes, which reduce an individual’s ability to reproduce, to disappear from the population. Research at the Weizmann Institute of Science that recently appeared in Nature Communications may now have solved this riddle. Not only can it explain the high rate of male fertility problems, it may open new avenues in understanding the causes of genetic diseases and their treatment.

Various theories have been proposed to explain the survival of harmful mutations: a gene that causes obesity today, for example, may once have granted an evolutionary advantage, or a disease-causing gene may persist because it is passed on within a small, relatively isolated population.

Dr. Moran Gershoni, a postdoctoral fellow in the group of Prof. Shmuel Pietrokovski of the Molecular Genetics Department, decided to investigate another approach – one based on differences between males and females. Although males and females carry nearly identical sets of genes, many are activated differently in each sex. So natural selection works differently on the same genes in males and females.

Genes that affect only half the population will have double the mutation rate


Take, for example, a mutation that impairs milk production. It will undergo negative selection only in women. Conversely, a hypothetical gene variant that benefits women but harms men could spread in a population, because it undergoes positive selection in half that population. Gershoni and Pietrokovski created a mathematical model for harmful mutations that affect only half the population; their model showed that such mutations should occur twice as often as those that affect males and females equally.
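The doubling follows from textbook mutation-selection balance (a sketch of the argument, not the authors' full model). A harmful allele arises by mutation at rate \(\mu\) per generation and is removed by selection of strength \(s\); at equilibrium its frequency sits near \(\mu/s\). If the allele is expressed, and therefore selected against, in only one sex, the selection it feels averaged over the whole population is effectively halved:

\[
q_{\text{both sexes}} \approx \frac{\mu}{s}, \qquad q_{\text{one sex}} \approx \frac{\mu}{s/2} = \frac{2\mu}{s},
\]

so sex-limited harmful mutations should segregate at roughly twice the frequency of mutations harmful to both sexes.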

To test the model, the researchers computationally analyzed the activity of all human genes recorded in public databases, identifying 95 genes that are active exclusively in the testes. Most of these genes are vital for procreation, and damage to them leads, in many cases, to male sterility.

The researchers then looked at these 95 genes in people whose genomes had been made available through the 1000 Genomes Project, which gave them a broad cross-section of human populations. Their analysis revealed that genes active only in the testes have double the harmful mutation rate of genes active in both sexes – right in line with the mathematical model. Pietrokovski and his team are now conducting follow-up experiments to see whether the mutations they identified do indeed play a role in male fertility problems and whether the “sex-difference” approach can explain their persistence.

This new understanding of the persistence of genetic mutations could yield insights into other diseases with genetic components, especially those that affect the sexes asymmetrically, including schizophrenia and Parkinson’s, which are more likely to affect men, and depression and autoimmune diseases, which affect more women. And, say Gershoni and Pietrokovski, these findings highlight the need to fit even common medical treatments to the gender of the patient.

Prof. Shmuel Pietrokovski is the incumbent of the Herman and Lilly Schilling Foundation Professorial Chair.

Contacts and sources:
Weizmann Institute of Science

Measuring The Smallest Magnets - Two Single Electrons

Weizmann Institute of Science physicists measured magnetic interactions between single electrons

Imagine trying to measure a tennis ball that bounces wildly, every time to a distance a million times its own size. The bouncing obviously creates enormous “background noise” that interferes with the measurement. But if you attach the ball directly to a measuring device, so they bounce together, you can eliminate the noise problem.

As reported recently in Nature, physicists at the Weizmann Institute of Science used a similar trick to measure the interaction between the smallest possible magnets – two single electrons – after neutralizing magnetic noise that was a million times stronger than the signal they needed to detect.

An illustration showing the magnetic field lines of two electrons, arranged so that their spins point in opposite directions

Dr. Roee Ozeri of the Institute’s Physics of Complex Systems Department says: “The electron has spin, a form of orientation involving two opposing magnetic poles. In fact, it’s a tiny bar magnet.” The question is whether pairs of electrons act like regular bar magnets in which the opposite poles attract one another.

Dr. Shlomi Kotler performed the study while a graduate student under Dr. Ozeri’s guidance, with Drs. Nitzan Akerman, Nir Navon and Yinnon Glickman. Detecting the magnetic interaction of two electrons poses an enormous challenge: When the electrons are at a close range – as they normally are in an atomic orbit – forces other than the magnetic one prevail. On the other hand, if the electrons are pulled apart, the magnetic force becomes dominant, but so weak in absolute terms that it’s easily drowned out by ambient magnetic noise emanating from power lines, lab equipment and the earth’s magnetic field.

The scientists overcame the problem by borrowing a trick from quantum computing that protects quantum information from outside interference. This technique binds two electrons together so that their spins point in opposite directions. Thus, like the bouncing tennis ball attached to the measuring device, the combination of equal but opposite spins makes the electron pair impervious to magnetic noise.
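A compact way to see the protection (a standard decoherence-free-subspace argument; the experiment's actual encoding may differ in detail): a fluctuating but spatially uniform magnetic field \(B(t)\) couples to the total spin projection, so the two anti-aligned states acquire no relative phase from the noise:

\[
H_{\text{noise}} = \gamma B(t)\left(S_z^{(1)} + S_z^{(2)}\right), \qquad \left(S_z^{(1)} + S_z^{(2)}\right)\lvert\uparrow\downarrow\rangle = \left(S_z^{(1)} + S_z^{(2)}\right)\lvert\downarrow\uparrow\rangle = 0,
\]

leaving the tiny electron-electron magnetic interaction as the only term that shifts the pair's state.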

The Weizmann scientists built an electric trap in which two electrons are bound to two strontium ions that are cooled to near absolute zero and separated by 2 micrometers (millionths of a meter). At this distance, which is astronomical by the standards of the quantum world, the magnetic interaction is very weak. But because the electron pairs were unaffected by external magnetic noise, the interaction between them could be measured with great precision. The measurement lasted 15 seconds – tens of thousands of times longer than the milliseconds for which scientists had previously been able to preserve quantum data.

The measurements showed that the electrons interacted magnetically just as two large magnets do: their north poles repelled one another, rotating on their axes until their unlike poles drew near. This is in line with the predictions of the Standard Model, the currently accepted theory of matter. Also as predicted, the magnetic interaction weakened with the cube of the distance between the electrons.
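That cubic fall-off is exactly what the classical magnetic dipole-dipole interaction predicts for two moments \(\mathbf{m}_1\) and \(\mathbf{m}_2\) separated by \(\mathbf{r}\):

\[
U(\mathbf{r}) = \frac{\mu_0}{4\pi r^3}\left[\mathbf{m}_1\cdot\mathbf{m}_2 - 3\,(\mathbf{m}_1\cdot\hat{\mathbf{r}})(\mathbf{m}_2\cdot\hat{\mathbf{r}})\right],
\]

so halving the separation makes the interaction eight times stronger.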

In addition to revealing a fundamental principle of particle physics, the measurement approach may prove useful in such areas as the development of atomic clocks or the study of quantum systems in a noisy environment.

Dr. Roee Ozeri’s research is supported by the Crown Photonics Center; the Yeda-Sela Center for Basic Research; the Wolfson Family Charitable Trust; Martin Kushner Schnur, Mexico; Friends of the Weizmann Institute of Science in Memory of Richard Kronstein; and the Zumbi Stiftung.



Contacts and sources:
Weizmann Institute of Science

Learning The Smell Of Fear: Mothers Teach Babies Their Own Fears Via Odor, U-M Research Finds

Babies can learn what to fear in the first days of life just by smelling the odor of their distressed mothers, new research suggests. And not just “natural” fears: If a mother experienced something before pregnancy that made her fear something specific, her baby will quickly learn to fear it too -- through the odor she gives off when she feels fear.

The study involved rat mothers and pups, and found that mothers conditioned to fear the smell of peppermint could transmit that fear to their babies simply through the odor they gave off while feeling that fear. 

Photo illustration - research animals not shown 
 Credit: University of Michigan Health System

In the first direct observation of this kind of fear transmission, a team of University of Michigan Medical School and New York University researchers studied mother rats that had learned to fear the smell of peppermint, and showed how the mothers “taught” this fear to their babies in their first days of life through the alarm odor they released during distress.

In a new paper in the Proceedings of the National Academy of Sciences, the team reports how they pinpointed the specific area of the brain where this fear transmission takes root in the earliest days of life.

Their findings in animals may help explain a phenomenon that has puzzled mental health experts for generations: how a mother’s traumatic experience can affect her children in profound ways, even when it happened long before they were born.

The researchers also hope their work will lead to better understanding of why not all children of traumatized mothers, or of mothers with major phobias, other anxiety disorders or major depression, experience the same effects.

Jacek Debiec, M.D., Ph.D.
 Credit: University of Michigan Health System

“During the early days of an infant rat’s life, they are immune to learning information about environmental dangers. But if their mother is the source of threat information, we have shown they can learn from her and produce lasting memories,” says Jacek Debiec, M.D., Ph.D., the U-M psychiatrist and neuroscientist who led the research.

“Our research demonstrates that infants can learn from maternal expression of fear, very early in life,” he adds. “Before they can even make their own experiences, they basically acquire their mothers’ experiences. Most importantly, these maternally-transmitted memories are long-lived, whereas other types of infant learning, if not repeated, rapidly perish.”

Peering inside the fearful brain

Debiec, who treats children and mothers with anxiety and other conditions in the U-M Department of Psychiatry, notes that the research on rats allows scientists to see what’s going on inside the brain during fear transmission, in ways they could never do in humans.

He began the research during his fellowship at NYU with Regina Marie Sullivan, Ph.D., senior author of the new paper, and continues it in his new lab at U-M’s Molecular and Behavioral Neuroscience Institute.

The researchers taught female rats to fear the smell of peppermint by exposing them to mild, unpleasant electric shocks while they smelled the scent, before they were pregnant. Then after they gave birth, the team exposed the mothers to just the minty smell, without the shocks, to provoke the fear response. They also used a comparison group of female rats that didn’t fear peppermint.

They exposed the pups of both groups of mothers to the peppermint smell, under many different conditions with and without their mothers present.

Using specialized brain imaging, studies of genetic activity in individual brain cells, and measurements of cortisol in the blood, they zeroed in on a brain structure called the lateral amygdala as the key location for learning fears. In later life, this area is key to detecting and planning responses to threats – so it makes sense that it would also be the hub for learning new fears.

But the fact that these fears could be learned in a way that lasted, during a time when the baby rat’s ability to learn any fears directly was naturally suppressed, is what makes the new findings so interesting, says Debiec.

The team showed that the newborns could learn their mothers’ fears even when the mothers weren’t present: just the piped-in scent of their mother reacting to the peppermint odor she feared was enough to make the pups fear the same thing.

 Credit: University of Michigan Health System

When the odor of the frightened mother was piped into a chamber where baby rats were exposed to the peppermint smell, the pups developed a fear of that smell, and their blood cortisol levels rose when they encountered it.

And when the researchers gave the baby rats a substance that blocked activity in the amygdala, they failed to learn the fear of peppermint smell from their mothers. This suggests, Debiec says, that there may be ways to intervene to prevent children from learning irrational or harmful fear responses from their mothers, or reduce their impact.

From animals to humans: next steps

The new research builds on what scientists have learned over time about the fear circuitry in the brain, and what can go wrong with it. That work has helped psychiatrists develop new treatments for human patients with phobias and other anxiety disorders – for instance, exposure therapy that helps them overcome fears by gradually confronting the thing or experience that causes their fear.

In much the same way, Debiec hopes that exploring the roots of fear in infancy, and how maternal trauma can affect subsequent generations, could help human patients. While it’s too soon to know if the same odor-based effect happens between human mothers and babies, the role of a mother’s scent in calming human babies has been shown.

Debiec, who hails from Poland, recalls working with the grown children of Holocaust survivors, who experienced nightmares, avoidance instincts and even flashbacks related to traumatic experiences they never had themselves. While they would have learned about the Holocaust from their parents, this deeply ingrained fear suggests something more at work, he says.

Going forward, he hopes to work with U-M researchers to observe human infants and their mothers -- including U-M psychiatrist Maria Muzik, M.D. and psychologist Kate Rosenblum, Ph.D., who run a Women and Infants Mental Health clinic and research program and also work with military families. The program is currently seeking women and their children to take part in a range of studies; those interested in learning more can call the U-M Mental Health Research Line at (734) 232-0255.

The research was supported by the National Institutes of Health (DC009910, MH091451), by a NARSAD Young Investigator Award from the Brain and Behavior Research Foundation, and by University of Michigan funds. Reference: www.pnas.org/cgi/doi/10.1073/pnas.1316740111



Contacts and sources:
University of Michigan Health System