Unseen Is Free

Thursday, June 29, 2017

'Bulges' In Volcanoes Can Predict Eruptions

A team of researchers from the University of Cambridge have developed a new way of measuring the pressure inside volcanoes, and found that it can be a reliable indicator of future eruptions.

Using a technique called 'seismic noise interferometry' combined with geophysical measurements, the researchers measured the energy moving through a volcano. They found that there is a good correlation between the speed at which the energy travelled and the amount of bulging and shrinking observed in the rock. The technique could be used to predict more accurately when a volcano will erupt. Their results are reported in the journal Science Advances.

Credit: Clare Donaldson

Data was collected by the US Geological Survey across Kilauea in Hawaii, a very active volcano with a lake of bubbling lava just beneath its summit. During a four-year period, the researchers used sensors to measure relative changes in the velocity of seismic waves moving through the volcano over time. They then compared their results with a second set of data which measured tiny changes in the angle of the volcano over the same time period.

Lava Waterfall, Kilauea Volcano, Hawaii. 
Credit: Dhilung Kirat

As Kilauea is such an active volcano, it is constantly bulging and shrinking as pressure in the magma chamber beneath the summit increases and decreases. Kilauea's current eruption started in 1983, and it spews and sputters lava almost constantly. Earlier this year, a large part of the volcano fell away, opening up a huge 'waterfall' of lava into the ocean below. Due to this high volume of activity, Kilauea is also one of the most-studied volcanoes on Earth.

The Cambridge researchers used seismic noise to detect what was controlling Kilauea's movement. Seismic noise is a persistent low-level vibration in the Earth, caused by everything from earthquakes to waves in the ocean, and often appears on a single sensor as random noise. But by pairing sensors together, the researchers were able to observe energy passing between the two, allowing them to isolate the seismic noise coming from the volcano.
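The pairing step is essentially a cross-correlation: correlating the noise recorded at two sensors produces a peak at the lag corresponding to the travel time of energy passing between them, and tracking small shifts in that lag over time reveals relative velocity changes. A minimal sketch of the idea (the delay, sampling rate, and record length are illustrative assumptions, not values from the study):

```python
import random

random.seed(0)
n = 4000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]  # shared ambient noise field

true_delay = 50  # samples of propagation between the two sensors (assumed)
sensor_a = noise
sensor_b = [0.0] * true_delay + noise[:-true_delay]  # sensor B records the field later

def correlation(a, b, lag):
    """Dot product of record b against record a shifted by `lag` samples."""
    return sum(b[i] * a[i - lag] for i in range(lag, len(b)))

# The lag that maximizes the cross-correlation estimates the inter-sensor
# travel time; monitoring changes in that estimate over time is the core
# of seismic noise interferometry.
best_lag = max(range(200), key=lambda l: correlation(sensor_a, sensor_b, l))
fs = 100.0  # sampling rate in Hz (assumed)
print(f"estimated travel time: {best_lag / fs:.2f} s")
```

In practice researchers stack correlations over long periods and measure tiny stretches of the recovered waveform rather than a single peak, but the principle is the same.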

"We were interested in how the energy travelling between the sensors changes, whether it's getting faster or slower," said Clare Donaldson, a PhD student in Cambridge's Department of Earth Sciences, and the paper's first author. "We want to know whether the seismic velocity changes reflect increasing pressure in the volcano, as volcanoes bulge out before an eruption. This is crucial for eruption forecasting."

One to two kilometres below Kilauea's lava lake, there is a reservoir of magma. As the amount of magma changes in this underground reservoir, the whole summit of the volcano bulges and shrinks. At the same time, the seismic velocity changes. As the magma chamber fills up, it causes an increase in pressure, which leads to cracks closing in the surrounding rock and producing faster seismic waves - and vice versa.

"This is the first time that we've been able to compare seismic noise with deformation over such a long period, and the strong correlation between the two shows that this could be a new way of predicting volcanic eruptions," said Donaldson.

Volcano seismology has traditionally measured small earthquakes at volcanoes. When magma moves underground, it often sets off tiny earthquakes, as it cracks its way through solid rock. Detecting these earthquakes is therefore very useful for eruption prediction. But sometimes magma can flow silently, through pre-existing pathways, and no earthquakes may occur. This new technique will still detect the changes caused by the magma flow.

Seismic noise occurs continuously and is sensitive to changes that would otherwise be missed. The researchers anticipate that the method could now be used at the hundreds of active volcanoes around the world.

Contacts and sources:
Sarah Collins
University of Cambridge 

Citation: C. Donaldson et al. ‘Relative seismic velocity variations correlate with deformation at Kīlauea volcano’. Science Advances (2017) DOI: 10.1126/sciadv.1700219

Evidence of Neolithic "Skull Cult" Discovered at Göbekli Tepe

In Turkey, Carved Skulls Provide the First Evidence of a Neolithic "Skull Cult": Three carved skull fragments uncovered at a Neolithic dig site in Turkey feature modifications not seen before among human remains of the time, researchers say. 

Thus, these modified skull fragments could point to a new "skull cult" -- or ritual group -- from the Neolithic period. Throughout history, people have valued skulls for different reasons, from ancestor worship to the belief that human skulls transmit protective properties. This focus on the skull has led to the establishment of the term skull cult in anthropology, and various such cults -- each with characteristic modifications to skull bones -- have been catalogued.

This is a pillar from Building D at Göbekli Tepe seen from the southeast.
Credit: German Archaeological Institute (DAI)

Recently, Julia Gresky and colleagues observed a previously unknown type of modification in three partial skulls uncovered at Göbekli Tepe. Each skull had intentional deep incisions along its sagittal axes and one of those skulls also displayed a drilled hole in the left parietal bone, as well as red ochre remnants, the authors say. 

By using different microscopic techniques to analyze the fragments, Gresky et al. verified that the carvings were executed using lithic tools, thus ruling out natural causes, like animal gnawing. In addition, they were able to discount scalping as a source of the marks, due to the depth of the carvings; however, other minor cut-marks on the skulls show signs of possible defleshing, they say. 

More likely, the skulls were carved to venerate ancestors not long after their death, say the authors, or to put recently "dispatched" enemies on display. These findings present the first evidence for treatment of the dead at Göbekli Tepe.

Contacts and sources:
American Association For The Advancement Of Science

Rising Seas Could Create 2 Billion Refugees by 2100

Global sea levels have risen by 8 inches in the last 130 years.
In the year 2100, 2 billion people - about one-fifth of the world's population - could become climate change refugees due to rising ocean levels. Those who once lived on coastlines will face displacement and resettlement bottlenecks as they seek habitable places inland, according to Cornell University research.

"We're going to have more people on less land and sooner than we think," said lead author Charles Geisler, professor emeritus of development sociology at Cornell. "The future rise in global mean sea level probably won't be gradual. Yet few policy makers are taking stock of the significant barriers to entry that coastal climate refugees, like other refugees, will encounter when they migrate to higher ground."

The graph tracks the change in sea level since 1993 as observed by satellites.
Data source: Satellite sea level observations.
Credit: NASA Goddard Space Flight Center

Earth's escalating population is expected to top 9 billion people by 2050 and climb to 11 billion people by 2100, according to a United Nations report. Feeding that population will require more arable land even as swelling oceans consume fertile coastal zones and river deltas, driving people to seek new places to dwell.

By 2060, about 1.4 billion people could be climate change refugees, according to the paper. Geisler extrapolated that number to 2 billion by 2100.

"The colliding forces of human fertility, submerging coastal zones, residential retreat, and impediments to inland resettlement is a huge problem. We offer preliminary estimates of the lands unlikely to support new waves of climate refugees due to the residues of war, exhausted natural resources, declining net primary productivity, desertification, urban sprawl, land concentration, 'paving the planet' with roads and greenhouse gas storage zones offsetting permafrost melt," Geisler said.

The paper describes tangible solutions and proactive adaptations in places like Florida and China, which coordinate coastal and interior land-use policies in anticipation of weather-induced population shifts.

Florida has the second-longest coastline in the United States, and its state and local officials have planned for a coastal exodus through the state's Comprehensive Planning Act, Geisler said.

Sea Level Rise

Beyond sea level rise, low-elevation coastal zones in many countries face intensifying storm surges that will push sea water further inland. "Historically, humans have spent considerable effort reclaiming land from oceans, but now live with the opposite - the oceans reclaiming terrestrial spaces on the planet," said Geisler. In their research, Geisler and Currens explore a worst-case scenario for the present century.

The authors note that the competition for reduced space that they foresee will induce land-use trade-offs and conflicts. In the United States and elsewhere, this could mean selling off public lands for human settlement.

"The pressure is on us to contain greenhouse gas emissions at present levels. It's the best 'future proofing' against climate change, sea level rise and the catastrophic consequences likely to play out on coasts, as well as inland in the future," said Geisler.

Contacts and sources:
Lindsey Hadlock
Cornell University

Citation: "Impediments to inland resettlement under conditions of accelerated sea level rise" will be published in the July issue of the journal Land Use Policy and is already available online.

Wednesday, June 28, 2017

Brewing Viking Beer - A Legal Requirement for Farmers

When archaeologist Geir Grønnesby dug test pits at 24 different farms in central Norway, he nearly always found thick layers of fire-cracked stones dating from the Viking Age and earlier. Long ago, Norwegians brewed beer using stones.

There’s nothing archaeologists like better than piles of centuries-old rubbish. Ancient bones and stones from trash heaps can tell complex stories. And in central Norway, at least, the story seems to be that Vikings and their descendants brewed beer by tossing hot rocks into wooden kettles.

Some of the best archaeological finds come from rubbish heaps. Throughout mid-Norway, these rubbish heaps often contain cracked stones that have been used to brew beer. 
Photo: Åge Hojem, NTNU University Museum

“There are a lot of these stones, and they are found at most of the farmyards on old, named farms,” says Geir Grønnesby, an archaeologist at the NTNU University Museum.

Grønnesby is fascinated by the history of Norwegian farm settlements, and with good reason. Much of the story of how Norwegian farms were settled and developed over the millennia remains a mystery.

Geir Grønnesby, an archaeologist at the NTNU University Museum, has buckets full of rocks that have been used to brew beer since the Viking age. They’re found in buried rubbish heaps around many farms in Trøndelag.

Photo: Nancy Bazilchuk

There’s a simple reason for this: most archaeological digs come from construction projects, because developers are required to check for cultural artefacts before beginning construction. Developers rarely build roads or other big projects through farms, which means farms are rarely dug up by archaeologists.

In other words, “most of the archaeological information we have about the Viking Age comes from graves, and most of the archaeological information about the Middle Ages comes from excavations in cities,” Grønnesby said. That’s a problem because “most people lived in the countryside.”

Essentially, he says, Norwegian farms are sitting on an enormous underground treasure trove that in places dates from AD 600, the late Iron Age — and yet they are mostly untouched.

“So I started doing these small excavations to look for cultural layers in farmyards,” he said. “The oldest carbon-14 dates I found are from 600 AD, and all the dates are from this time or later. And when I found the stones, I had to write about them, since there were so many.”
A curious sociologist

Grønnesby is not the first to remark on fire-cracked stones on farms in central Norway. That distinction goes to a pioneering sociologist named Eilert Sundt, who recorded an encounter on a farm in 1851 in Hedmark.

As Sundt later wrote, he was walking and saw a farmer near a pile of strange-looking, smallish stones.

Mounds of fire-cracked stones, or brewing stones, from a farmstead in Ranheim, outside of Trondheim.

Photo: Geir Grønnesby

“What’s with these stones?” he asked the farmer, pointing to the pile. “They’re brewing stones,” the farmer told him. “Stones they used for cooking to brew beer — from the old days when they didn’t have iron pots.”

In his article, Sundt noted that most of the farms he visited had piles of burned or fire-cracked stones. Every time he asked about them, the answer was the same: they were from brewing, when the stones were heated until they were “glowing hot” and then plopped into wooden vessels to heat things up.

The stones were so omnipresent, Sundt wrote, and so thick and compact in places that houses were built right on top of them.

Reports from archaeologists who examined farmsteads in more recent times also confirm this observation.

When one archaeologist dug a test trench in the 1980s at a farm in Steinkjer, north of Trondheim, he found a cultural layer more than a metre thick, much of which was fire-cracked stone.

This excavation in Hofstad, on the island of Hitra, shows a thick layer of brewing stones and other Viking- and medieval-age trash.

 Photo: Geir Grønnesby

Grønnesby himself excavated more than 700 cubic metres of stone from a portion of a farmstead in Ranheim, also north of Trondheim. And when Grønnesby did his test sample of the 24 farms, 71 per cent either had cracked stone layers or probably did.
Rituals and the Reformation

It’s not so unusual that Vikings brewed beer using stones, Grønnesby said. Brewing with heated stones has also been reported from England, Finland and the Baltics. It’s a tradition that continues in Germany, where it’s possible even today to buy “stone-brewed beer”.

Grønnesby says the presence of great numbers of brewing stones on Norwegian farms underscores the cultural importance of beer itself.

“Beer drinking was an important part of social and religious institutions,” he said. For example, the Gulating, a Norwegian parliamentary assembly that met from 900 to 1300 AD, regulated even the smallest details of beer brewing and drinking at that time.

The Gulating’s laws required three farmers to work together to brew beer, which then had to be blessed. An individual who failed to brew beer for three consecutive years had to give half his farm to the bishop and the other half to the King and then leave the country. Only very small farms were exempt from this strict regulation.

What’s equally interesting is when brewing stones disappear from cultural layers — at about 1500, right around the time of the Reformation.

The work of an archaeologist may seem glamorous — think Indiana Jones — but for Geir Grønnesby from NTNU’s University Museum, it sometimes involves digging holes in search of Viking and Middle Age trash heaps.

 Photo: Geir Grønnesby, NTNU University Museum

“It could just be a strange coincidence,” Grønnesby said. “It could be religion. Or it could be that iron vessels were more widely available by then.”
From rubbish heaps to treasure troves

Each time a glowing hot brewing stone was plopped into a cold vat of fluid, it would crack. After several of these cycles, the stones would be too small to be useful and the brewers would toss them out onto a rubbish heap.

That means the thick layers of stones also contain other artefacts, like old spinning weights and loom weights, animal bones and beads. It is for this reason, as much as for the stones themselves, that the layers are important, Grønnesby said.

“Archaeologists are always finding these layers, but they used to look at them and scratch their heads, and (the layers) didn’t get the kind of recognition they deserve,” he said. “These layers represent archives from the Viking Age to medieval times, so we should excavate them more often.”

You can read about Grønnesby’s research in the recently published book, “The Agrarian Life of the North: 2000 BC to AD 1000: Studies in Rural Settlement and Farming in Norway”, edited by Frode Iversen & Håkan Petersson. Grønnesby’s chapter is entitled “Hot Rocks! Beer Brewing on Viking and Medieval Age Farms in Trøndelag.”

Contacts and sources:
Geir Grønnesby
By Nancy Bazilchuk

Helping Dogs Cope with Fireworks and Other Noise Anxiety

Who doesn't like 4th of July celebrations?  Dogs. 

Independence Day fireworks can be a highlight of the summer season for many. But for the family dog, fireworks and thunderstorms can trigger fear and anxiety similar to a panic attack. A third of all dogs suffer from noise anxiety connected to the suddenness of the sound.

“To some degree, their ears are more sensitive. They pick up a wider range of sounds than we do,” said veterinarian Mark D. Freeman at the Virginia-Maryland College of Veterinary Medicine at Virginia Tech. “Sudden very loud sounds can cause dogs to be very frightened, and with the continuation of noise, it’s sort of a ramping up effect. The more noise they are exposed to, the more reactive they become.”

Every year near the 4th of July, there is a significant increase in the number of traumatic injuries to dogs, specifically related to the fear response associated with fireworks.

Credit: Virginia Tech

Many dogs will look for a place to hide or a place where they feel safer and more secure. “When they are in a situation where they are being bombarded with noises that are causing a tremendous amount of stress for them, they are looking for any source of security, and that includes a ‘safe’ hiding place,” said Freeman.

Quoting Freeman

“There are a number of different techniques that can be utilized for animals that have phobias associated with loud noises. A general rule is to approach any phobia through behavior modification therapy, if that’s an option; desensitizing animals to the loud noises so they pretty much ignore them.”

“Medications, such as sedatives, can be effective in helping a dog feel calm and quiet. Sileo actually is a sedative that we use very commonly in veterinary practice. It had originally only been available as an injectable medication, but has now been produced in an oral gel that is absorbed through the gum tissue.”

“Unless you know for a fact that your dog has been desensitized and is not afraid of loud noises, I would advise against taking your dog to a fireworks show.”

“Every year near the 4th of July, we see a significant increase in the number of traumatic injuries to dogs, specifically related to the fear response associated with fireworks. Dogs have jumped through glass windows and off decks and balconies, chewed through doors and walls, and many get hit by cars when they panic and run away from the noise.”

Contacts and sources:
Dr. Mark D. Freeman  
Virginia Tech.

Lightning Starting More Boreal Forest Fires

A new NASA-funded study finds that lightning storms were the main driver of recent massive fire years in Alaska and northern Canada, and that these storms are likely to move farther north with climate warming, potentially altering northern landscapes.

The study, led by Vrije Universiteit Amsterdam and the University of California, Irvine, examined the cause of the fires, which have been increasing in number in recent years. There was a record number of lightning-ignited fires in the Canadian Northwest Territories in 2014 and in Alaska in 2015. The team found increases of between two and five percent a year in the number of lightning-ignited fires since 1975.
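For a sense of scale, a steady two to five per cent annual increase compounds substantially over the four decades from 1975 to the record 2015 fire season. A quick back-of-the-envelope calculation (the sustained-compounding assumption is illustrative; the study reports trends, not these exact multiples):

```python
# Compound a 2-5% per year growth rate over 1975-2015.
years = 2015 - 1975

multipliers = {rate: (1 + rate) ** years for rate in (0.02, 0.05)}
for rate, factor in multipliers.items():
    print(f"{rate:.0%}/yr sustained for {years} years -> roughly {factor:.1f}x as many fires")
```

Even the low end more than doubles the annual count of lightning-ignited fires; the high end multiplies it roughly sevenfold.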

To study the fires, the team analyzed data from NASA’s Terra and Aqua satellites and from ground-based lightning networks.

A lightning-caused wildfire burns in Alberta, Canada.
Credits: The Government of Alberta

Lead author Sander Veraverbeke of Vrije Universiteit Amsterdam, who conducted the work while at UC Irvine, said that while the drivers of large fire years in the high north are still poorly understood, the observed trends are consistent with climate change.

“We found that it is not just a matter of more burning with higher temperatures. The reality is more complex: higher temperatures also spur more thunderstorms. Lightning from these thunderstorms is what has been igniting many more fires in these recent extreme events,” Veraverbeke said.

Study co-author Brendan Rogers at Woods Hole Research Center in Falmouth, Massachusetts, said these trends are likely to continue. “We expect an increasing number of thunderstorms, and hence fires, across the high latitudes in the coming decades as a result of climate change.” Output from several climate models in the study supports this expectation.

Study co-author Charles Miller of NASA’s Jet Propulsion Laboratory in Pasadena, California, said while data from the lightning networks were critical to this study, it is challenging to use these data for trend detection because of continuing network upgrades. “A spaceborne sensor that provides high northern latitude lightning data that can be linked with fire dynamics would be a major step forward,” he said.

A lightning-caused wildfire burns in central Alaska in July 2008.

Credits: BLM Alaska Fire Service

The researchers found that the fires are creeping farther north, near the transition from boreal forests to Arctic tundra. “In these high-latitude ecosystems, permafrost soils store large amounts of carbon that become vulnerable after fires pass through,” said co-author James Randerson of the University of California, Irvine. “Exposed mineral soils after tundra fires also provide favorable seedbeds for trees migrating north under a warmer climate.”

“Taken together, we discovered a complex feedback loop between climate, lightning, fires, carbon and forests that may quickly alter northern landscapes,” Veraverbeke concluded. “A better understanding of these relationships is critical to better predict future influences from climate on fires, and from fires on climate.”

The study was published in the journal Nature Climate Change. The Alaska Fire Science Consortium at the University of Alaska, Fairbanks, also participated in the study.

Contacts and sources:
Alan Buis, Jet Propulsion Laboratory, Pasadena, California
Sander Veraverbeke, Vrije Universiteit Amsterdam, The Netherlands
Brian Bell, University of California, Irvine

Japanese Food Will Be Contaminated by Low-Level Radioactivity from Fukushima for Decades

Food in Japan will be contaminated by low-level radioactivity for decades following the 2011 Fukushima nuclear disaster, but not at a level which poses a serious risk to human health, according to new research.

Scientists can predict with confidence what effect the Japanese disaster has had and will continue to have, thanks to a legacy of data on radioactive pollution in the environment gathered over decades of nuclear testing worldwide.

Professor Jim Smith, from the University of Portsmouth, and Dr Keiko Tagami, from the Japanese National Institute of Radiological Sciences, say radiation in the average diet in the region is very low.

Credit: University of Portsmouth

The exception is for wild food, such as mushrooms and game animals, where contamination remains high.

Professor Jim Smith, from the School of Earth and Environmental Sciences, said: “The world’s deadliest nuclear weapons tests during the Cold War have yielded one benefit: a better understanding of how radioactivity contaminates the environment.”

It’s been six years since a tsunami triggered a major nuclear disaster at the Fukushima Nuclear Power Plant in Japan.

The team have predicted the effects of radioactive contamination on the Japanese diet by analysing thousands of measurements spanning 50 years of nuclear weapons testing.

Professor Smith said: “Hundreds of above-ground nuclear weapons tests carried out by the US, USSR, Britain, France and China during the Cold War spread thousands of peta-Becquerels of radioactivity around the world, dwarfing emissions from the Chernobyl and Fukushima nuclear accidents.

“Radioactive elements such as Caesium-137, Strontium-90 and Carbon-14 contaminated the global environment, potentially causing hundreds of thousands of unseen cancer deaths.

“These deadly above-ground tests were stopped by the 1963 Test Ban Treaty, but over the last 60 years, their lingering effects have yielded one benefit: a better understanding of how radioactivity contaminates the environment after nuclear accidents like Fukushima.”

The study uses data from the Japanese Environmental Radioactivity Database, which was established to monitor radioactivity in response to the global fallout from Cold War nuclear weapons testing and from the Chernobyl accident in 1986.

It focuses on radiocaesium - the most important radioactive contaminant affecting the Fukushima area - in food.

Professor Smith said: “From 1959-2009, thousands of measurements were made of radiocaesium in nuclear fallout, wheat, rice and in people’s average diet in Japan. This unique historical data has allowed us to evaluate radiocaesium levels in Japanese agricultural systems, which can be used to inform predictions of the long-term consequences of food chain contamination post-Fukushima.

“The results show that radioactivity will continue to be found in foodstuffs for many decades, but that current levels are low and will continue to decline over time.”
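The physics behind that slow decline can be made concrete. Caesium-137 has a physical half-life of about 30 years, so radioactive decay alone removes only half the activity every three decades; environmental processes typically remove radiocaesium from foodstuffs faster still. A short sketch of the decay law (the half-life is standard nuclear data, not a figure from the paper):

```python
import math

HALF_LIFE_CS137 = 30.2  # years, approximate physical half-life of Cs-137

def remaining_fraction(years, half_life=HALF_LIFE_CS137):
    """Fraction of the original activity left after `years` of decay."""
    return math.exp(-math.log(2) * years / half_life)

for t in (10, 30, 60):
    print(f"after {t:>2} years: {remaining_fraction(t):.0%} of the activity remains")
```

Physical decay sets the floor on how fast contamination can decline, which is why measurable radiocaesium is expected in foodstuffs for decades even as levels keep falling.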

Dr Keiko Tagami, a co-author at the Japanese National Institute of Radiological Sciences said: “This study gives us the evidence to explain to people how contamination levels will change over time. It gives us confidence that radiation doses in the average diet in the Fukushima region are very low and do not present a significant health risk now or in the future.

“But we have to continue monitoring foodstuffs, particularly “wild” foods such as mushrooms, new shoots of edible plants and game animals where contamination levels remain high.”

Data from Chernobyl was also analysed because a small amount of radiocaesium reached Japan.

Professor Smith said: “It’s important to use data gathered from nuclear tests to help inform communities affected by more recent nuclear incidents. It’s one positive that can come from the deadly nuclear experiments of the past.”

The paper was published in Science of the Total Environment and part-funded by the Natural Environment Research Council.

Contacts and  sources:
Professor Jim Smith
University of Portsmouth

Citation: "Time trends in radiocaesium in the Japanese diet following nuclear weapons testing and Chernobyl: Implications for long term contamination post-Fukushima". Science of the Total Environment. http://www.sciencedirect.com/science/article/pii/S0048969717313487

Memories Can Be Selectively Erased While Leaving Others Intact: Proof of Principle

A new study of snail neurons suggests memories that trigger anxiety, PTSD could be ‘erased’ without affecting normal memory of past events.

Different types of memories stored in the same neuron of the marine snail Aplysia can be selectively erased, according to a new study by researchers at Columbia University Medical Center (CUMC) and McGill University and published today in Current Biology.

The findings suggest that it may be possible to develop drugs to delete memories that trigger anxiety and post-traumatic stress disorder (PTSD) without affecting other important memories of past events.

During emotional or traumatic events, multiple memories can become encoded, including memories of any incidental information that is present when the event occurs. In the case of a traumatic experience, the incidental, or neutral, information can trigger anxiety attacks long after the event has occurred, say the researchers.

Two Aplysia sensory neurons with synaptic contacts on the same motor neuron in culture after isolation from the nervous system of Aplysia. The motor neuron has been injected with a fluorescent molecule that blocks the activity of a specific Protein Kinase M molecule.

Credit: Schacher Lab/Columbia University Medical Center

“The example I like to give is, if you are walking in a high-crime area and you take a shortcut through a dark alley and get mugged, and then you happen to see a mailbox nearby, you might get really nervous when you want to mail something later on,” says Samuel Schacher, PhD, a professor of neuroscience in the Department of Psychiatry at CUMC and co-author of the paper. 

In the example, fear of dark alleys is an associative memory: it encodes useful information (avoid dark alleys in high-crime areas) based on a previous experience. Fear of mailboxes, however, is an incidental, non-associative memory that is not directly related to the traumatic event.

“One focus of our current research is to develop strategies to eliminate problematic non-associative memories that may become stamped on the brain during a traumatic experience without harming associative memories, which can help people make informed decisions in the future—like not taking shortcuts through dark alleys in high-crime areas,” Dr. Schacher adds.

Brains create long-term memories, in part, by increasing the strength of connections between neurons and maintaining those connections over time. Previous research suggested that increases in synaptic strength in creating associative and non-associative memories share common properties. This suggests that selectively eliminating non-associative synaptic memories would be impossible, because for any one neuron, a single mechanism would be responsible for maintaining all forms of synaptic memories.

The new study tested that hypothesis by stimulating two sensory neurons connected to a single motor neuron of the marine snail Aplysia; one sensory neuron was stimulated to induce an associative memory and the other to induce a non-associative memory.

By measuring the strength of each connection, the researchers found that the increase in the strength of each connection produced by the different stimuli was maintained by a different form of a Protein Kinase M (PKM) molecule (PKM Apl III for associative synaptic memory and PKM Apl I for non-associative). They found that each memory could be erased, without affecting the other, by blocking one of the PKM molecules.

In addition, they found that specific synaptic memories may also be erased by blocking the function of distinct variants of other molecules that either help produce PKMs or protect them from breaking down.

The researchers say that their results could be useful in understanding human memory because vertebrates have similar versions of the Aplysia PKM proteins that participate in the formation of long-term memories. In addition, the PKM-protecting protein KIBRA is expressed in humans, and mutations of this gene produce intellectual disability.

“Memory erasure has the potential to alleviate PTSD and anxiety disorders by removing the non-associative memory that causes the maladaptive physiological response,” says Jiangyuan Hu, PhD, an associate research scientist in the Department of Psychiatry at CUMC and co-author of the paper. “By isolating the exact molecules that maintain non-associative memory, we may be able to develop drugs that can treat anxiety without affecting the patient’s normal memory of past events.”

“Our study is a ‘proof of principle’ that presents an opportunity for developing strategies and perhaps therapies to address anxiety,” said Dr. Schacher. “For example, because memories are still likely to change immediately after recollection, a therapist may help to ‘rewrite’ a non-associative memory by administering a drug that inhibits the maintenance of non-associative memory.”

Future studies in preclinical models are needed to better understand how PKMs are produced and localized at the synapse before researchers can determine which drugs may weaken non-associative memories.

Contacts and sources:
Columbia University Medical Center (CUMC)

Age-Related Macular Degeneration and the Intestinal Microbiome Connection Discovered

Scientists discover a connection between the development of age-related macular degeneration and the intestinal microbiome.

Age-related macular degeneration is the leading cause of blindness in developed countries, but its causes are unknown, and no effective treatment exists. In a collaborative study with Tufts University, Weizmann Institute of Science’s researchers have discovered a connection between the development of this disease and the intestinal microbiome.

Credit: Weizmann Institute of Science

As reported recently in the Proceedings of the National Academy of Sciences, USA, the Tufts researchers showed that when mice ate simple carbohydrates, they had an increased risk of developing macular degeneration. But once the mice switched to a diet including complex carbohydrates, the degeneration stopped.

Weizmann Institute’s Tal Korem and other members of Prof. Eran Segal’s team in the Computer Science and Applied Mathematics Department then entered the picture. The scientists found that when the mice switched to eating complex carbohydrates, the composition of their intestinal microbes also changed. “We don’t know yet if this change affects the course of retinal degeneration, but if it does, it may in the future be possible to prevent or stop this degeneration by altering the microbiome,” Korem says.

Also taking part in the study were Dr. Adina Weinberger, Dr. Tali Avnit-Sagi and Maya Lotan-Pompan.

Contacts and sources:
Weizmann Institute of Science

Wind from the Desert Brings the Dust Storm Microbiome

Israel is subjected to sand and dust storms from several directions: from the southwest out of the Sahara, from the southeast out of Saudi Arabia, and from the northeast out of the desert regions of Syria. The airborne dust carried in these storms affects the health of people and ecosystems alike. New research at the Weizmann Institute of Science suggests that part of the effect might lie not in the particles of dust but rather in bacteria that cling to them, traveling many kilometers in the air with the storms.

Some of these bacteria might be pathogenic - harmful to us or the environment - and a few of them also carry genes for antibiotic resistance. Others may induce ecosystem functions such as nitrogen fixation. Prof. Yinon Rudich and his research group, including postdoctoral fellow Dr. Daniela Gat and former research student Yinon Mazar, in Weizmann's Earth and Planetary Sciences Department investigated the genetics of the windborne bacteria arriving along with the dust.

Dust storm in Timna Park is shown.
Credit: The Weizmann Institute of Science

"In essence, we investigated the microbiome of windborne dust," says Rudich. "The microbiome of a dust storm originating in the Sahara is different from one blowing in from the Saudi or Syrian deserts, and we can see the fit between the bacterial population and the environmental conditions existing in each area."

The researchers found that during a dust storm the concentration of bacteria and the number of bacterial species present in the atmosphere rise sharply, so people walking outdoors in these storms are exposed to many more bacteria than usual.

Rudich and his team then explored the genes in these bacteria, checking for antibiotic resistance -- a trait that can arise owing to elevated use of antibiotics but also naturally, especially in soil bacteria. Antibiotic resistance has been defined by the World Health Organization as one of the primary global health challenges of the twenty-first century, and its main driver is the overuse of antibiotics. But bacteria can pass on the genes for antibiotic resistance, so any source of resistance is concerning. How many different genes for antibiotic resistance come to Israel from the various dust storms, and how prevalent are these genes?

Rudich says that the study enabled the researchers to identify a "signature" for each source of bacteria based on the prevalence of antibiotic-resistance genes, which revealed whether the genes were local or imported from distant deserts. "We found that as more 'mixing' occurs between local dust and that which comes from far off, the lower the contribution of the imported antibiotic resistance genes." In other words, antibiotic resistance coming from Africa or Saudi Arabia is still a very minor threat compared to that caused and spread by human activity, especially animal husbandry. Also participating in this research were Dr. Eddie Cytryn of the Volcani Center and Prof. Yigal Erel of the Hebrew University of Jerusalem.

City air not set to improve

Urban air pollution is attributed, to a large extent, to emissions from transportation. Prof. Rudich and Staff Scientist Dr. Michal Pardo-Levin asked how these sources contribute to air pollution. Their findings show that pollution that does not come from the combustion engine, but rather is released by the friction of the vehicle's tires on the road and by braking systems, can lead to serious health effects upon inhalation. That means that even if we manage to significantly reduce our cars' tailpipe emissions, city air will still be polluted, to a large extent, with these other substances. And since tire and brake friction is necessary for driving, reducing their emissions could be much harder.

Contacts and sources:
The Weizmann Institute of Science  

Early, Permanent Human Settlement in Andes Documented

Using five different scientific approaches, a team including University of Wyoming researchers has given considerable support to the idea that humans lived year-round in the Andean highlands of South America over 7,000 years ago.

Examining human remains and other archaeological evidence from a site nearly 12,500 feet above sea level in Peru, the scientists show that intrepid hunter-gatherers -- men, women and children -- managed to survive at high elevation before the advent of agriculture, despite low oxygen, frigid temperatures and exposure to the elements.

Intrepid hunter-gatherer families permanently occupied high-elevation environments of the Andes Mountains at least 7,000 years ago, according to new research led by University of Wyoming scientists.

Credit: Lauren A. Hayes

"This gives us a very strong baseline to help understand the rates of cultural and genetic change in the Andean highlands, a region known for the domestication of alpaca, potatoes and other plants; emergence of state-level political and economic complexity; and rapid human adaptation to high-elevation life," says Randy Haas, a postdoctoral research associate in the University of Wyoming's Department of Anthropology and the team's leader.

The research appears in the July issue of Royal Society Open Science, a peer-reviewed, open-access scientific journal. Along with Haas, the second author is Ioana Stefanescu, a graduate student in UW's Department of Geology and Geophysics. Also contributing to the paper were Alexander Garcia-Putnam, doctoral student in the UW Department of Anthropology; Mark Clementz, associate professor in the Department of Geology and Geophysics; Melissa Murphy, associate professor in the Department of Anthropology; and researchers from the University of California-Davis, the University of California-Merced, the University of Arizona and Peruvian institutions.

Excavations led by Haas at the site in southern Peru produced the remains of 16 people, along with more than 80,000 artifacts, dating to as early as 8,000 years ago. Evidence from that site, as well as others, has led some researchers to estimate that hunter-gatherers began living in the Andes around 9,000 years ago, but debate has continued over whether that human presence was permanent or seasonal.

The research team led by Haas tested for early permanent use of the region along five lines of evidence: oxygen isotopes and carbon isotopes in the human bones; the travel distances from the site to low-elevation zones; the demographic mixture of the human remains; and the types of tools and other materials found with them.

The scientists found low oxygen and high carbon isotope values in the bones, revealing the distinct signature of permanent high-elevation occupation; that travel distances to low-elevation zones were too long for seasonal human migration; that the presence of women and small children meant such migration was highly unlikely; and that almost all of the tools used by the hunter-gatherers were made with high-elevation stone material, not brought from elsewhere.

"These results constitute the strongest evidence to date that people were living year-round in the Andean highlands at least 7,000 years ago," Haas says. "Such high-elevation environments were among the last frontiers of human colonization, and this knowledge holds implications for understanding rates of genetic, physiological and cultural adaptation in the human species."

Contacts and sources:
Randy Haas
University of Wyoming 

Citation: Randall Haas, Ioana C. Stefanescu, Alexander Garcia-Putnam, Mark S. Aldenderfer, Mark T. Clementz, Melissa S. Murphy, Carlos Viviano Llave and James T. Watson. "Humans permanently occupied the Andean highlands by at least 7 ka." Royal Society Open Science. Published 28 June 2017. DOI: 10.1098/rsos.170331 http://dx.doi.org/10.1098/rsos.170331

Tuesday, June 27, 2017

Bizarre Bee-Zed Asteroid Orbits the Sun in the Opposite Direction as Planets

In our solar system, an asteroid orbits the Sun in the opposite direction to the planets. Asteroid 2015 BZ509, also known as Bee-Zed, takes 12 years to make one complete orbit around the Sun, the same orbital period as Jupiter; the asteroid shares the planet's orbit but moves in the opposite direction to the planet's motion.
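That co-orbital claim can be sanity-checked with Kepler's third law: for a body orbiting the Sun, the semi-major axis in AU satisfies a³ = T², with T in years (a quick sketch, not a calculation from the article):

```python
# Kepler's third law in solar units: a^3 = T^2
# (a in astronomical units, T in years)
T = 12.0                    # Bee-Zed's orbital period, years
a = T ** (2.0 / 3.0)        # semi-major axis, AU
print(f"semi-major axis ≈ {a:.2f} AU")   # prints: semi-major axis ≈ 5.24 AU
```

The result, about 5.24 AU, matches Jupiter's orbital distance of roughly 5.20 AU, which is exactly what a co-orbital configuration requires.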

The asteroid with the retrograde co-orbit was identified by Helena Morais, a professor at São Paulo State University's Institute of Geosciences & Exact Sciences (IGCE-UNESP). Morais had predicted the discovery two years earlier, and the article describing observations of the asteroid, published in Nature, is discussed by Morais in the News & Views section of the same issue of the journal.

Co-orbital bodies that orbit the Sun in the same direction as a planet can follow trajectories (blue curves with arrows) that, from the perspective of the planet, look like tadpoles, horseshoes or 'quasi-satellites'

Credit: Helena Morais & Fathi Namouni

"It's good to have confirmation," Morais said. "I was sure retrograde co-orbits existed. We've known about this asteroid since 2015, but its orbit hadn't been clearly determined, and it wasn't possible to confirm the co-orbital configuration. Now it's been confirmed, after more observations that reduced the errors in the orbital parameters. So, we're sure the asteroid is retrograde, co-orbital and stable."

In partnership with Fathi Namouni at the Côte d'Azur Observatory in France, Morais developed a general theory on retrograde co-orbitals and retrograde orbital resonance.

The paper by Paul Wiegert of the University of Western Ontario, Canada, published in March in Nature, describes how object 2015 BZ509, detected in January 2015 using the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) in Hawaii, was tracked using the Large Binocular Telescope in Arizona. These additional observations confirmed that its orbit is retrograde and co-orbital with Jupiter.

Retrograde orbits are rare. It is estimated that only 82 of the more than 726,000 known asteroids are orbiting the "wrong way". By contrast, prograde co-orbitals that move 'with traffic' are nothing new; Jupiter alone is accompanied by some 6,000 Trojan asteroids that share the giant planet's orbit.
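The counts quoted above make that rarity concrete:

```python
# Rarity of retrograde orbits among known asteroids (figures from the text).
retrograde = 82
known = 726_000
fraction = retrograde / known
print(f"about 1 in {round(1 / fraction):,} known asteroids is retrograde")
# prints: about 1 in 8,854 known asteroids is retrograde
```

Roughly 0.01 percent of catalogued asteroids move the "wrong way," which is why a stable retrograde co-orbital stands out.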

Bee-Zed is unusual because it shares a planet's orbit, because its own orbit is retrograde, and above all, because it has been stable for millions of years. "Instead of being ejected from orbit by Jupiter, as one would expect, the asteroid is in a configuration that assures stability thanks to co-orbital resonance, meaning its motion is synchronized with the planet's, avoiding collisions," Morais said.

The asteroid crosses Jupiter's path every six years, but owing to their co-orbital resonance, the two never come closer than 176 million km, far enough to avoid major disturbances to the asteroid's orbit. Even so, Jupiter's gravity is essential to keeping Bee-Zed locked in the 1:1 retrograde resonance.

All the planets and most of the asteroids in the solar system orbit the Sun in the same direction because the solar system emerged from a revolving cloud of dust and gas, most of the constituent objects of which continue to revolve as they did before.

"The vast majority of retrograde objects are comets. Their orbits are typically inclined as well as retrograde. The most famous, of course, is Halley's comet, which has a retrograde orbit with an inclination of 162°, practically identical to that of 2015 BZ509," Morais said.

In the final stages of planetary formation, she explained, small bodies were expelled far from the Sun and planets, forming the spherical shell of debris and comets known as the Oort cloud.

"At these distances, the Milky Way's gravitational effects disturb small bodies. To begin with, they orbited close to the plane of the ecliptic in the same direction as the planets, but their orbits were deformed by the galaxy's tidal force and by interactions with nearby stars, gradually becoming more inclined and forming a more or less spherical reservoir," Morais said.

If the orbits of these bodies are disturbed - by a passing star, for example - they return to paths close to the planets of the solar system and can become active comets. "The icy small bodies warm up as they approach the Sun, and the ice sublimes to form a coma [a dense cloud of gas and dust particles around a nucleus] and often a tail, making the comets observable," she explained.

In the case of 2015 BZ509, the most surprising feature is its long period of stability. In their commentary in Nature, Morais and Namouni say the particularly long life of 2015 BZ509 in its retrograde orbit makes it the most intriguing object in the vicinity of Jupiter. "Further studies are needed to confirm how this mysterious object arrived at its present configuration," they conclude.

Wiegert speculates that Bee-Zed probably originated in the Oort cloud, like the Halley family comets. In any event, more research will be necessary to reconstruct Bee-Zed's epic voyage through the solar system.

Morais added that another known retrograde asteroid, 2006 BZ8, might even enter into co-orbital retrograde resonance with Saturn in the future. "Our simulations showed that resonance capture is more likely for objects with retrograde orbits than for those orbiting in the same direction as the planets," she said.

Bee-Zed is expected to stay in the same state for another million years. Its discovery has led researchers to suspect that asteroids in retrograde co-orbits with Jupiter and other planets may be more common than was previously thought, making the theory expounded by Morais and Namouni even more compelling.

Contacts and sources:
Samuel Antenor
Fundação De Amparo À Pesquisa Do Estado De São Paulo

Colliding Galaxies Make Cosmic Goulash

What would happen if you took two galaxies and mixed them together over millions of years? A new image including data from NASA's Chandra X-ray Observatory reveals the cosmic culinary outcome.

Arp 299 is a system located about 140 million light years from Earth. It contains two galaxies that are merging, creating a partially blended mix of stars from each galaxy in the process.

However, this stellar mix is not the only ingredient. New data from Chandra reveals 25 bright X-ray sources sprinkled throughout the Arp 299 concoction. Fourteen of these sources are such strong emitters of X-rays that astronomers categorize them as "ultra-luminous X-ray sources," or ULXs.

This new composite image of Arp 299 contains X-ray data from Chandra (pink), higher-energy X-ray data from NuSTAR (purple), and optical data from the Hubble Space Telescope (white and faint brown). Arp 299 also emits copious amounts of infrared light that has been detected by observatories such as NASA's Spitzer Space Telescope, but those data are not included in this composite.
Arp 299
Image credit: X-ray: NASA/CXC/Univ. of Crete/K. Anastasopoulou et al, NASA/NuSTAR/GSFC/A. Ptak et al; Optical: NASA/STScI

These ULXs are found embedded in regions where stars are currently forming at a rapid rate. Most likely, the ULXs are binary systems where a neutron star or black hole is pulling matter away from a companion star that is much more massive than the Sun. These double star systems are called high-mass X-ray binaries.

Such a loaded buffet of high-mass X-ray binaries is rare, but Arp 299 is one of the most powerful star-forming galaxies in the nearby Universe. This is due at least in part to the merger of the two galaxies, which has triggered waves of star formation. The formation of high-mass X-ray binaries is a natural consequence of such blossoming star birth as some of the young massive stars, which often form in pairs, evolve into these systems.

The infrared and X-ray emission of the galaxy is remarkably similar to that of galaxies found in the very distant Universe, offering an opportunity to study a relatively nearby analog of these distant objects. A higher rate of galaxy collisions occurred when the universe was young, but these objects are difficult to study directly because they are located at colossal distances.

X-ray Image of Arp 299

Credit: NASA

The Chandra data also reveal diffuse X-ray emission from hot gas distributed throughout Arp 299. Scientists think the high rate of supernovas, another common trait of star-forming galaxies, has expelled much of this hot gas out of the center of the system.

A paper describing these results appeared in the August 21st, 2016 issue of the Monthly Notices of the Royal Astronomical Society and is available online. The lead author of the paper is Konstantina Anastasopoulou from the University of Crete in Greece. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.

Contacts and sources:
NASA/Chandra X-Ray Observatory 

Moisture-Driven ‘Robots’ Crawl with No External Power Source

Using an off-the-shelf camera flash, researchers turned an ordinary sheet of graphene oxide into a material that bends when exposed to moisture. They then used this material to make a spider-like crawler and claw robot that move in response to changing humidity without the need for any external power.

“The development of smart materials such as moisture-responsive graphene oxide is of great importance to automation and robotics,” said Yong-Lai Zhang of Jilin University, China, and leader of the research team. “Our very simple method for making typical graphene oxides smart is also extremely efficient. A sheet can be prepared within one second.”

The researchers used flash-treated graphene oxide to create a crawler that moved when humidity was increased. Switching the humidity off and on several times induced the crawler to move 3.5 millimeters in 12 seconds, with no external energy supply.
Credit: Yong-Lai Zhang of Jilin University

In the journal Optical Materials Express, from The Optical Society (OSA), the researchers reported that graphene oxide sheets treated with brief exposure to bright light in the form of a camera flash exhibited reversible bending at angles from zero to 85 degrees in response to switching the relative humidity between 33 and 86 percent. They also demonstrated that their method is repeatable and the simple robots they created have good stability.

Although other materials can change shape in response to moisture, the researchers experimented with graphene-based materials because they are incredibly thin and have unique properties such as flexibility, conductivity, mechanical strength and biocompatibility. These properties make graphene ideal for broad applications in various fields. For example, the material’s excellent biocompatibility could allow moisture-responsive graphene oxide to be used in organ-on-a-chip systems that simulate the mechanics and physiological response of entire organs and are used for drug discovery and other biomedical research.

Making a moisture-responsive material
Other groups have shown that graphene oxide can be made moisture responsive through a chemical reaction called reduction, which removes oxygen from molecules. In fact, the researchers previously demonstrated that both sunlight and UV light can induce the effect. However, these approaches were hard to precisely control and not very efficient.

The research team experimented with using a camera flash, which typically covers a broad spectral range, as a simple and effective way to create moisture-responsive graphene. A camera flash allowed the researchers to remove oxygen from, or reduce, just one side of a sheet of graphene oxide. When moisture is present, the reduced side of the graphene oxide absorbs fewer water molecules, causing the non-reduced side to expand and the sheet to bend toward the reduced side. If the material is then exposed to dry air, it flattens out.

The researchers found that keeping the flash about 20 to 30 centimeters away from the graphene oxide sheet was enough to selectively modify the top layer of the sheet without penetrating all the way through to the other side. The sheet also needs to be more than 5 microns thick to prevent it from being completely reduced by the flash exposure.

Graphene robots
To make a moisture-driven crawler, the researchers cut flash-treated graphene oxide into an insect shape with four legs. The free-standing crawler was about 1 centimeter wide and moved forward when humidity was increased. Switching the humidity off and on several times induced the crawler to move 3.5 millimeters in 12 seconds, with no external energy supply.

The researchers also made a claw shape by sticking together eight 5-by-1 millimeter ribbons of flash-treated graphene oxide in a star shape. When moisture was present, the claw closed within 12 seconds. It returned back to an open position after 56 seconds of exposure to dry air.

“These robots are simple and can be flexibly manipulated by changing the environmental humidity,” said Zhang. “These designs are very important because moving and capturing/releasing are basic functions of automated systems.”

Zhang added that integrating moisture-responsive graphene into a microchannel system connected to a humidity controller could allow even more precise control and other types of robots or simple machines. The researchers are now working on ways to improve the control of the material’s bending and are experimenting with ways to gain more complex performance from robots made of moisture-responsive graphene oxide.

Contacts and sources:
The Optical Society
Optical Materials Express (OMEx)

Paper: Y.-Q. Liu, J.-N. Ma, Y. Liu, D.-D. Han, H.-B. Jiang, J.-W. Mao, C.-H. Han, Z.-Z. Jiao, Y.-L. Zhang, “Facile fabrication of moisture responsive graphene actuators by moderate flash reduction of graphene oxides films,” Opt. Mater. Express Volume 7, Issue 7, 2617-2625 (2017).
DOI: 10.1364/OME.7.002617.

Supermassive Black Holes in Orbital Dance: Groundbreaking Discovery

For the first time ever, astronomers at The University of New Mexico say they've been able to observe and measure the orbital motion between two supermassive black holes hundreds of millions of light years from Earth - a discovery more than a decade in the making.

UNM Department of Physics & Astronomy graduate student Karishma Bansal is the first author on the paper, ‘Constraining the Orbit of the Supermassive Black Hole Binary 0402+379’, recently published in The Astrophysical Journal. She, along with UNM Professor Greg Taylor and colleagues at Stanford, the U.S. Naval Observatory and the Gemini Observatory, has been studying the interaction between these black holes for 12 years.

"For a long time, we've been looking into space to try and find a pair of these supermassive black holes orbiting as a result of two galaxies merging," said Taylor. "Even though we've theorized that this should be happening, nobody had ever seen it until now."

Artist's conception shows two supermassive black holes, similar to those observed by UNM researchers, orbiting one another more than 750 million light years from Earth.
Credit: Josh Valenzuela/UNM

In early 2016, an international team of researchers, including a UNM alumnus, working on the LIGO project detected the existence of gravitational waves, confirming Albert Einstein's 100-year-old prediction and astonishing the scientific community. Those gravitational waves were the result of two stellar-mass black holes, each of roughly 30 solar masses, colliding in space.

Now, thanks to this latest research, scientists will be able to start to understand what leads up to the merger of supermassive black holes that creates ripples in the fabric of space-time and begin to learn more about the evolution of galaxies and the role these black holes play in it.

Using the Very Long Baseline Array (VLBA), a system made up of 10 radio telescopes across the U.S. and operated in Socorro, N.M., researchers have been able to observe several frequencies of radio signals emitted by these supermassive black holes (SMBH). Over time, astronomers have essentially been able to plot their trajectory and confirm them as a visual binary system. In other words, they've observed these black holes in orbit with one another.

This is a false-color VLBA map of the radio galaxy 0402+379 at 15 GHz. It hosts two supermassive black holes at its center, represented here by accretion discs with twin jets.
Credit: UNM

"When Dr. Taylor gave me this data I was at the very beginning of learning how to image and understand it," said Bansal. "And, as I learned there was data going back to 2003, we plotted it and determined they are orbiting one another. It's very exciting."

For Taylor, the discovery is the result of more than 20 years of work and an incredible feat, given the precision required to pull off these measurements. At roughly 750 million light years from Earth, the galaxy 0402+379 and the supermassive black holes within it are incredibly far away, but they are also at the right distance from Earth, and from each other, to be observed.

Bansal says these supermassive black holes have a combined mass of 15 billion times that of our sun, or 15 billion solar masses. The unbelievable size of these black holes means their orbital period is around 24,000 years, so while the team has been observing them for over a decade, they've yet to see even the slightest curvature in their orbit.
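For a sense of scale, Kepler's third law gives a rough separation for such a binary. This is an illustrative back-of-the-envelope estimate assuming a circular orbit and the round numbers quoted above, not a figure from the study:

```python
# Kepler's third law in solar units: a^3 = M_total * T^2
# (a in AU, M_total in solar masses, T in years)
M = 15e9         # combined mass of the binary, solar masses
T = 24_000.0     # orbital period, years
a_au = (M * T**2) ** (1.0 / 3.0)   # separation in AU
a_pc = a_au / 206_265              # 1 parsec = 206,265 AU
print(f"separation ≈ {a_pc:.0f} pc")   # of order ten parsecs
```

An orbit of order ten parsecs across, traversed over tens of millennia, explains why a decade of VLBA data shows only a tiny arc of the motion.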

"If you imagine a snail on the recently-discovered Earth-like planet orbiting Proxima Centauri - 4.243 light years away - moving at 1 cm a second, that's the angular motion we're resolving here," said Roger W. Romani, professor of physics at Stanford University and member of the research team.
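Romani's comparison can be turned into numbers. The short sketch below (standard constants, not figures from the article) converts a snail's 1 cm/s at the distance of Proxima Centauri into an angular rate:

```python
import math

# Angular rate of a snail moving 1 cm/s seen from 4.243 light years away.
v = 0.01                          # snail speed, m/s
ly = 9.4607e15                    # meters per light year
d = 4.243 * ly                    # distance to Proxima Centauri, m
rad_per_s = v / d                 # angular rate in radians per second
arcsec_per_rad = 180 / math.pi * 3600
muas_per_year = rad_per_s * arcsec_per_rad * 1e6 * 3.156e7  # seconds per year
print(f"≈ {muas_per_year:.1f} microarcseconds per year")
# prints: ≈ 1.6 microarcseconds per year
```

A drift of a microarcsecond or two per year is at the limit of what sustained VLBA astrometry can resolve, which is consistent with the team's framing of the measurement as a technical feat.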

"What we've been able to do is a true technical achievement over this 12-year period, using the VLBA to achieve sufficient resolution and precision in the astrometry to actually see the orbit happening," said Taylor. "It's a bit of a triumph in technology to have been able to do this."

VLBA map of the radio galaxy 0402+379 at 15 GHz. It hosts two supermassive black holes at its center, denoted C1 and C2.
Credit: UNM

While the technical accomplishment of this discovery is truly amazing, Bansal and Taylor say the research could also teach us a lot about the universe, where galaxies come from and where they're going.

"The orbits of binary stars provided tremendous insights about stars," said Bob Zavala, an astronomer with the U.S. Naval Observatory. "Now we'll be able to use similar techniques to understand supermassive black holes and the galaxies they reside within."

Continuing to observe the orbit and interaction of these two supermassive black holes could also help us gain a better understanding of what the future of our own galaxy might look like. Right now, the Andromeda galaxy, which also has a SMBH at its center, is on a path to collide with our Milky Way, meaning the event Bansal and Taylor are currently observing might occur in our galaxy in a few billion years.

"Supermassive black holes have a lot of influence on the stars around them and the growth and evolution of the galaxy," explained Taylor. "So, understanding more about them and what happens when they merge with one another could be important for our understanding of the universe."

Bansal says the research team will take another observation of this system in three or four years to confirm the motion and obtain a precise orbit. In the meantime, the team hopes that this discovery will encourage related work from astronomers around the world.

Contacts and sources:
Aaron Hilf
The University of New Mexico

Monday, June 26, 2017

Chimpanzee 'Super Strength' and Human Muscle Evolution

Since at least the 1920s, anecdotes and some studies have suggested that chimpanzees are “super strong” compared to humans, implying that their muscle fibers, the cells that make up muscles, are superior to those of humans.

But now a research team reports that, contrary to this belief, the maximum dynamic force and power output of chimp muscle is just about 1.35 times higher than that of human muscle of similar size, a difference they call “modest” compared with historical, popular accounts of chimp “super strength” many times that of humans.

Credit: Ikiwaner / Wikimedia Commons

Further, says biomechanist Brian Umberger, an expert in musculoskeletal biomechanics in kinesiology at the University of Massachusetts Amherst, the researchers found that this modest performance advantage for chimps was not due to stronger muscle fibers, but rather the different mix of muscle fibers found in chimpanzees compared to humans.

As the authors explain, the long-standing but untested assumption of chimpanzees’ exceptional strength, if true, “would indicate a significant and previously unappreciated evolutionary shift in the force and/or power-producing capabilities of skeletal muscle” in either chimps or humans, whose lines diverged some 7 or 8 million years ago.

Umberger was part of the team led by Matthew O’Neill at the University of Arizona College of Medicine, Phoenix, and others at Harvard and Ohio State University. Details of this work, supported in part by a National Science Foundation grant to Umberger, appear in the current early online edition of Proceedings of the National Academy of Sciences.

The researchers began by critically examining the scientific literature, where studies reported a wide range of estimates for how much chimpanzees outstrip humans in strength and power, averaging about 1.5 times overall. But Umberger says reaching this value from such disparate reports “required a lot of analysis on our part, accounting for differences between subjects, procedures and so on.” He and colleagues say 1.5 times is considerably less than anecdotal reports of chimps being several-fold stronger, but it is still a meaningful difference, and explaining it could advance understanding of early human musculoskeletal evolution.

Umberger adds, “There are nearly 100 years of accounts suggesting that chimpanzees must have intrinsically superior muscle fiber properties compared with humans, yet there had been no direct tests of that idea. Such a difference would be surprising, given what we know about how similar muscle fiber properties are across species of similar body size, such as humans and chimps.”

He explains that muscle fiber comes in two general types: fast-twitch fibers, which are fast and powerful but fatigue quickly, and slow-twitch fibers, which are slower and less powerful but have good endurance. “We found that within fiber types, chimp and human muscle fibers were actually very similar. However, we also found that chimps have about twice as many fast-twitch fibers as humans,” he notes.

For this work, the team used an approach combining isolated muscle fiber preparations, experiments and computer simulations. They directly measured the maximum isometric force and maximum shortening velocity of skeletal muscle fibers of the common chimpanzee. In general, they found that chimp limb and trunk skeletal muscle fibers are similar to those of humans and other mammals, and “generally consistent with expectations based on body size and scaling.”

Umberger, whose primary scientific contribution was in interpreting how muscle properties will affect whole-animal performance, developed computer simulation models that allowed the researchers to integrate the various data on individual muscle properties and assess their combined effects on performance.

O’Neill, Umberger and colleagues also measured the distribution of muscle fiber types and found it to be quite different in humans and chimps, who also have longer muscle fibers than humans. They combined the individual measurements in the computer simulation model of muscle function to better understand the combined effects of the experimental observations on whole-muscle performance. When all factors were integrated, chimp muscle produced about 1.35 times more dynamic force and power than human muscle.
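As a rough illustration of how a fiber-type mix alone can shift whole-muscle output without any change in the fibers themselves, the toy calculation below treats whole-muscle power as a weighted average over fiber types. The relative power values are made-up assumptions, and the fractions only echo the roughly two-fold fast-twitch difference mentioned above; this is not the team's actual simulation model, which also accounts for fiber lengths and other factors.

```python
# Toy weighted-average model of whole-muscle power from fiber-type mix.
# All numbers are illustrative assumptions, NOT values from the PNAS paper.

FAST_POWER = 4.0   # assumed relative peak power of a fast-twitch fiber
SLOW_POWER = 1.0   # assumed relative peak power of a slow-twitch fiber

def muscle_power(fast_fraction: float) -> float:
    """Whole-muscle relative power as a fiber-fraction weighted average."""
    return fast_fraction * FAST_POWER + (1.0 - fast_fraction) * SLOW_POWER

chimp = muscle_power(0.66)   # chimps: roughly twice the fast-twitch fraction
human = muscle_power(0.33)
print(f"chimp/human power ratio: {chimp / human:.2f}")
```

Even with identical fiber properties on both sides, the different mix alone produces a whole-muscle advantage of the same order as the reported one.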

Umberger says the advantage for chimps in dynamic strength and power comes from the global characteristics of whole muscles, rather than the intrinsic properties of the cells those muscles are made of. “The flip side is that humans, with a high percentage of slow-twitch fibers, are adapted for endurance, such as long-distance travel, at the expense of dynamic strength and power. When we compared chimps and humans to muscle fiber type data for other species we found that humans are the outlier, suggesting that selection for long distance, over-ground travel may have been important early in the evolution of our musculoskeletal system.”

The authors conclude, “Contrary to some long-standing hypotheses, evolution has not altered the basic force, velocity or power-producing capabilities of skeletal muscle cells to induce the marked differences between chimpanzees and humans in walking, running, climbing and throwing capabilities. This is a significant, but previously untested assumption. Instead, natural selection appears to have altered more global characteristics of muscle tissue, such as muscle fiber type distributions and muscle fiber lengths.”

This work is part of a long-running collaboration among Umberger, O’Neill and Susan Larson at Stony Brook University School of Medicine on the general topics of musculoskeletal design, locomotion and human evolution.

Contacts and sources:
Janet Lathrop
University of Massachusetts Amherst

Arsenic Compounds in Rice More Prevalent than Previously Known, Risk for Humans Unknown

Rice is a staple food in many regions of the world; however, it sometimes contains levels of arsenic that are hazardous to our health. An interdisciplinary team of researchers at the University of Bayreuth has now discovered that there are arsenic compounds which have a toxic effect on plants and yet had not previously been considered in chemical analyses of rice or in the estimated health risks for humans.

The research concerns thioarsenates, compounds made up of arsenic and sulphur, which may be present in rice fields more often than previously assumed. The scientists have published their findings in the journal Environmental Science and Technology.

Doctoral researchers in Bayreuth Carolin Kerl M.Sc. (left) and Colleen Rafferty M.Sc. (right) are investigating the absorption of thioarsenates in the thale cress (Arabidopsis thaliana). 
Photo: Christian Wissler.

Increased concentrations in rice fields?

Thioarsenates can be found in surface water, groundwater, and bottom water with high levels of sulphide. Sulphide is the reduced form of sulphate; it reacts spontaneously with arsenic and can form thioarsenates. Rice fields provide favourable conditions for these processes.

“Rice is usually grown on flooded fields. The resulting lack of oxygen in the ground can reduce sulphate to sulphide. We were able to demonstrate for the first time that a considerable amount of the arsenic in rice fields – namely 20–30% – is bound up in the form of thioarsenates,” explained Prof. Dr. Britta Planer-Friedrich, Professor of Environmental Geochemistry at the University of Bayreuth.

 “Further research to shed more light on the spread of thioarsenates is now even more urgent since we were able to show for the first time that thioarsenates can be absorbed by plants and are harmful to them.”

Harmfulness for biological model organisms

The experiments in Bayreuth – which also involved several doctoral researchers – concentrated on the thale cress (Arabidopsis thaliana), a plant common in the fields of Europe and Asia that has proven to be a useful model organism in biological research. Working with plant physiologist Prof. Dr. Stephan Clemens, the team tested various mutants of the thale cress in the laboratory to see how they reacted to thioarsenates added to their nutrient solution. The results were clear: the plants absorb the arsenic-sulphur compounds and their growth is visibly limited. The more arsenic reaches the plant in this way, the more its roots shrivel up.

Toxic for humans too?

“In the wake of these unsettling findings, we plan to investigate the effects of thioarsenates on different types of rice over the next several months. At present, we do not yet sufficiently understand whether, and to what extent, rice plants absorb arsenic that is bonded to sulphur, and to what extent this adversely affects their metabolic processes. Above all, it is unclear whether thioarsenates also make their way into the rice grains,” explained Prof. Clemens.

He added, “At the University of Bayreuth, we have all the research technology necessary to see these experiments through. If it turns out that thioarsenates are absorbed by the roots of the rice plants and make their way to the rice grains unaltered, then further research will be needed. In particular, we would need to clarify whether thioarsenates are toxic for humans who consume food containing rice over an extended period. What’s more: in addition to the previously known forms of arsenic, thioarsenates must be considered in the future when developing rice plants that accumulate less arsenic in their grains. This is an objective currently being pursued by numerous research groups around the world.”

“Not only the EU, which has had a limit for arsenic in rice since 2016, but above all countries in Asia and Africa – where yearly rice consumption can be well above 100 kilograms per person – should be following rice research closely with an eye to amending their food safety regulations. Traces of arsenic are also found in drinking water and other types of food. These trace amounts can add up to a daily dose representing a health risk that is not to be underestimated,” Prof. Planer-Friedrich said.
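A back-of-the-envelope calculation shows how such trace amounts add up at high consumption levels. The figures below are assumptions for illustration only: rice exactly at the EU limit of 0.2 mg inorganic arsenic per kg (the white-rice limit in force since 2016) and the roughly 100 kg per person per year consumption cited above.

```python
# Back-of-the-envelope daily arsenic intake from rice alone.
# Assumptions (illustrative, not measurements from the article):
# rice at the EU limit of 0.2 mg inorganic arsenic per kg, and the
# ~100 kg/year per-capita consumption cited for parts of Asia and Africa.

EU_LIMIT_MG_PER_KG = 0.2
ANNUAL_RICE_KG = 100.0

daily_rice_kg = ANNUAL_RICE_KG / 365.0
daily_arsenic_mg = daily_rice_kg * EU_LIMIT_MG_PER_KG
print(f"rice per day: {daily_rice_kg * 1000:.0f} g")
print(f"arsenic per day: {daily_arsenic_mg * 1000:.1f} µg")
```

Even rice that just meets the limit contributes tens of micrograms of arsenic per day before drinking water and other foods are counted.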

A few years ago, Planer-Friedrich discovered that thioarsenates could play a more significant role in the earth’s arsenic balance than previously thought. The starting point was a study at the hot springs in Yellowstone National Park. Here it was discovered that more than 80% of the arsenic from the hot springs is bound up in thioarsenates.

 In the following years, it was shown that thioarsenates can occur in soil and groundwater under less extreme conditions. Depending on the sulphide content, they may even account for more than a quarter of total arsenic. These findings have provided impetus for further experiments on the spread of such arsenic compounds – at the University of Bayreuth, such research will focus on the staple food rice.

Contacts and sources:
University of Bayreuth

Citation: Britta Planer-Friedrich, Tanja Kühnlenz, Dipti Halder, Regina Lohmayer, Nathaniel Wilson, Colleen Rafferty, and Stephan Clemens, Thioarsenate Toxicity and Tolerance in the Model System Arabidopsis thaliana, Environmental Science & Technology (2017),
DOI: 10.1021/acs.est.6b06028

The Brightest Light Ever Produced on Earth: 1 Billion Times Brighter Than the Sun's Surface

Physicists from the University of Nebraska-Lincoln are seeing an everyday phenomenon in a new light.

By focusing laser light to a brightness one billion times greater than the surface of the sun - the brightest light ever produced on Earth - the physicists have observed changes in a vision-enabling interaction between light and matter.

Those changes yielded unique X-ray pulses with the potential to generate extremely high-resolution imagery useful for medical, engineering, scientific and security purposes. The team's findings, detailed June 26 in the journal Nature Photonics, should also help inform future experiments involving high-intensity lasers.

A rendering of how changes in an electron's motion (bottom view) alter the scattering of light (top view), as measured in a new experiment that scattered more than 500 photons of light from a single electron. Previous experiments had managed to scatter no more than a few photons at a time.

Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

Donald Umstadter and colleagues at the university's Extreme Light Laboratory fired their Diocles Laser at helium-suspended electrons to measure how the laser's photons - considered both particles and waves of light - scattered from a single electron after striking it.

Under typical conditions, as when light from a bulb or the sun strikes a surface, that scattering phenomenon makes vision possible. But an electron - the negatively charged particle present in matter-forming atoms - normally scatters just one photon of light at a time. And the average electron rarely enjoys even that privilege, Umstadter said, getting struck only once every four months or so.

Though previous laser-based experiments had scattered a few photons from the same electron, Umstadter's team managed to scatter nearly 1,000 photons at a time. At the ultra-high intensities produced by the laser, both the photons and electron behaved much differently than usual.

"When we have this unimaginably bright light, it turns out that the scattering - this fundamental thing that makes everything visible - fundamentally changes in nature," said Umstadter, the Leland and Dorothy Olson Professor of physics and astronomy.

Using the brightest light ever produced, University of Nebraska-Lincoln physicists obtained this high-resolution X-ray of a USB drive. The image reveals details not visible with ordinary X-ray imaging

Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

A photon from standard light will typically scatter at the same angle and energy it featured before striking the electron, regardless of how bright its light might be. Yet Umstadter's team found that, above a certain threshold, the laser's brightness altered the angle, shape and wavelength of that scattered light.

"So it's as if things appear differently as you turn up the brightness of the light, which is not something you normally would experience," Umstadter said. "(An object) normally becomes brighter, but otherwise, it looks just like it did with a lower light level. But here, the light is changing (the object's) appearance. The light's coming off at different angles, with different colors, depending on how bright it is."

That phenomenon stemmed partly from a change in the electron, which abandoned its usual up-and-down motion in favor of a figure-8 flight pattern. As it would under normal conditions, the electron also ejected its own photon, which was jarred loose by the energy of the incoming photons. But the researchers found that the ejected photon absorbed the collective energy of all the scattered photons, granting it the energy and wavelength of an X-ray.
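A rough energy-bookkeeping sketch illustrates why combining many laser photons into one yields an X-ray: if roughly n laser photons are absorbed and re-emitted as a single photon, that photon carries roughly n times the laser photon energy. The 800 nm wavelength below is an assumption (typical of Ti:sapphire lasers such systems use, not stated in the article), and recoil and Doppler effects are ignored.

```python
# Rough photon-energy bookkeeping for nonlinear Thomson scattering:
# one scattered photon carrying the combined energy of ~n laser photons.
# Assumes an 800 nm drive laser; ignores electron recoil and Doppler shifts.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def scattered_energy_ev(wavelength_nm: float, n_photons: int) -> float:
    """Energy of one photon carrying n laser photons' worth of energy."""
    return n_photons * HC_EV_NM / wavelength_nm

e_xray = scattered_energy_ev(800.0, 1000)
print(f"~{e_xray / 1000:.1f} keV")  # soft X-ray regime
```

A single 800 nm photon carries only about 1.5 eV; pooling a thousand of them pushes the scattered photon into the keV range, which is why the experiment produced X-ray pulses.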

The unique properties of that X-ray might be applied in multiple ways, Umstadter said. Its extreme but narrow range of energy, combined with its extraordinarily short duration, could help generate three-dimensional images on the nanoscopic scale while reducing the dose necessary to produce them.

Those qualities might qualify it to hunt for tumors or microfractures that elude conventional X-rays, map the molecular landscapes of nanoscopic materials now finding their way into semiconductor technology, or detect increasingly sophisticated threats at security checkpoints. Atomic and molecular physicists could also employ the X-ray as a form of ultrafast camera to capture snapshots of electron motion or chemical reactions.

A scientist at work in the Extreme Light Laboratory at the University of Nebraska-Lincoln, where physicists using the brightest light ever produced were able to change the way photons scatter from electrons.
Credit: Extreme Light Laboratory|University of Nebraska-Lincoln

As physicists themselves, Umstadter and his colleagues also expressed excitement for the scientific implications of their experiment. By establishing a relationship between the laser's brightness and the properties of its scattered light, the team confirmed a recently proposed method for measuring a laser's peak intensity. The study also supported several longstanding hypotheses that technological limitations had kept physicists from directly testing.

"There were many theories, for many years, that had never been tested in the lab, because we never had a bright-enough light source to actually do the experiment," Umstadter said. "There were various predictions for what would happen, and we have confirmed some of those predictions.

"It's all part of what we call electrodynamics. There are textbooks on classical electrodynamics that all physicists learn. So this, in a sense, was really a textbook experiment."

Contacts and sources:
Donald Umstadter
Extreme Light Laboratory|University of Nebraska-Lincoln

Surprised Scientists Find Water Exists as Two Different Liquids

We normally consider liquid water as disordered, with the molecules rearranging on a short time scale around some average structure. Now, however, scientists at Stockholm University have discovered two phases of the liquid with large differences in structure and density. The results are based on experimental studies using X-rays and are now published in the Proceedings of the National Academy of Sciences (USA).

Most of us know that water is essential for our existence on planet Earth. It is less well known that water has many strange or anomalous properties and behaves very differently from all other liquids. Some examples are its melting point, density and heat capacity; all in all, there are more than 70 properties of water that differ from those of most liquids. These anomalous properties of water are a prerequisite for life as we know it.

Pictured is an artist's impression of the two forms of ultra-viscous liquid water with different density. On the background is depicted the x-ray speckle pattern taken from actual data of high-density amorphous ice, which is produced by pressurizing water at very low temperatures.

Credit: Mattias Karlén

"The new remarkable property is that we find that water can exist as two different liquids at low temperatures where ice crystallization is slow", says Anders Nilsson, professor in Chemical Physics at Stockholm University. The breakthrough in the understanding of water has been possible through a combination of studies using X-rays at Argonne National Laboratory near Chicago, where the two different structures were evidenced and at the large X-ray laboratory DESY in Hamburg where the dynamics could be investigated and demonstrated that the two phases indeed both were liquid phases. Water can thus exist as two different liquids.

"It is very exciting to be able to use X-rays to determine the relative positions between the molecules at different times", says Fivos Perakis, postdoc at Stockholm University with a background in ultrafast optical spectroscopy. "We have in particular been able to follow the transformation of the sample at low temperatures between the two phases and demonstrated that there is diffusion as is typical for liquids".

When we think of ice, it is most often as the ordered, crystalline phase that comes out of the ice box. But the most common form of ice in our planetary system is amorphous, that is, disordered, and there are two forms of amorphous ice, with low and high density. The two forms can interconvert, and there has been speculation that they are related to low- and high-density forms of liquid water. Experimentally investigating this hypothesis has been a great challenge, which the Stockholm group has now overcome.

"I have studied amorphous ices for a long time with the goal to determine whether they can be considered a glassy state representing a frozen liquid", says Katrin Amann-Winkel, researcher in Chemical Physics at Stockholm University. "It is a dream come true to follow in such detail how a glassy state of water transforms into a viscous liquid which almost immediately transforms to a different, even more viscous, liquid of much lower density".

"The possibility to make new discoveries in water is totally fascinating and a great inspiration for my further studies", says Daniel Mariedahl, PhD student in Chemical Physics at Stockholm University. "It is particularly exciting that the new information has been provided by X-rays since the pioneer of X-ray radiation, Wolfgang Röntgen, himself speculated that water can exist in two different forms and that the interplay between them could give rise to its strange properties".

"The new results give very strong support to a picture where water at room temperature can't decide in which of the two forms it should be, high or low density, which results in local fluctuations between the two", says Lars G.M. Pettersson, professor in Theoretical Chemical Physics at Stockholm University. "In a nutshell: Water is not a complicated liquid, but two simple liquids with a complicated relationship."

These new results not only create an overall understanding of water at different temperatures and pressures, but also of how water is affected by salts and by biomolecules important for life. In addition, the increased understanding of water can lead to new insights on how to purify and desalinate water in the future. This will be one of the main challenges to humanity in view of global climate change.

These studies were led by Stockholm University and involve a collaboration including the KTH Royal Institute of Technology in Stockholm, DESY in Hamburg, University of Innsbruck, Argonne National Laboratory in Chicago and SLAC National Accelerator Laboratory in California. The other participants from Stockholm University involved in the study are Harshad Pathak, Alexander Späh, Filippo Cavalca and Daniel Schlesinger. Experiments were conducted at APS BL 6-ID-D at Argonne National Laboratory and PETRA III BL P10 at DESY.

Additional information:
Contacts and sources:
Professor Anders Nilsson
Stockholm University 

The recently published study by Fivos Perakis and Katrin Amann-Winkel et al. can be found here: https://www.eurekalert.org/pio/view.tipsheet.php?id=237&pubdate=2017-06-21

New Extinction Event Discovered

More than two million years ago, a third of the largest marine animals, such as sharks, whales, sea birds and sea turtles, disappeared. This previously unknown extinction event had a considerable impact not only on the earth’s historical biodiversity but also on the functioning of ecosystems. This has been demonstrated by researchers at the University of Zurich.

Fossils from the Pliocene: a shark tooth from Carcharhinus leucas on the left, and from Negaprion on the right.
Image: UZH

The disappearance of a large part of the terrestrial megafauna, such as the saber-toothed cat and the mammoth, during the ice age is well known. Now, researchers at the University of Zurich and the Naturkunde Museum in Berlin have shown that a similar extinction event had taken place earlier, in the oceans.

New extinction event discovered

The international team investigated fossils of marine megafauna from the Pliocene and the Pleistocene epochs (5.3 million to around 9,700 years BC). “We were able to show that around a third of marine megafauna disappeared about three to two million years ago. Therefore, the marine megafaunal communities that humans inherited were already altered and functioning at a diminished diversity”, explains lead author Dr. Catalina Pimiento, who conducted the study at the Paleontological Institute and Museum of the University of Zurich.

Above all, the newly discovered extinction event affected marine mammals, which lost 55 per cent of their diversity. As many as 43 per cent of sea turtle species were lost, along with 35 per cent of sea birds and 9 per cent of sharks. On the other hand, new forms of life developed during the subsequent Pleistocene epoch: around a quarter of animal species, including the polar bear (Ursus), the storm petrel (Oceanodroma) and the penguin (Megadyptes), had not existed during the Pliocene. Overall, however, earlier levels of diversity were never reached again.

Effects on functional diversity

In order to determine the consequences of this extinction, the research team concentrated on shallow coastal shelf zones, investigating the effects that the loss of entire functional entities had on coastal ecosystems. Functional entities are groups of animals, not necessarily related, that share similar characteristics in terms of the role they play in ecosystems. The finding: seven functional entities were lost in coastal waters during the Pliocene.

Even though the loss of seven functional entities and one third of the species is relatively modest, it led to an important erosion of functional diversity: 17 per cent of the total diversity of ecological functions in the ecosystem disappeared and 21 per cent changed. Previously common predators vanished, while new competitors emerged and marine animals were forced to adjust. In addition, the researchers found that at the time of the extinction, coastal habitats were significantly reduced due to violent sea-level fluctuations.

Large warm-blooded marine animals are more vulnerable

The researchers propose that the sudden loss of the productive coastal habitats, together with oceanographic factors such as altered sea currents, greatly contributed to these extinctions. 

“Our models have demonstrated that warm-blooded animals in particular were more likely to become extinct. For example, species of sea cows and baleen whales, as well as the giant shark Carcharocles megalodon disappeared”, explains Dr. Pimiento. “This study shows that marine megafauna were far more vulnerable to global environmental changes in the recent geological past than had previously been assumed”. The researcher also points to a present-day parallel: Nowadays, large marine species such as whales or seals are also highly vulnerable to human influences.

Contacts and sources:
Catalina Pimiento Hernandez
Museum of Natural History
Leibniz Institute for Evolution and Biodiversity Science
University of Zurich.

Citation: Catalina Pimiento, John N. Griffin, Christopher F. Clements, Daniele Silvestro, Sara Varela, Mark D. Uhen and Carlos Jaramillo. The Pliocene marine megafauna extinction and its impact on functional diversity. June 26, 2017. Nature Ecology & Evolution. DOI: 10.1038/s41559-017-0223-6