Wednesday, April 25, 2018

Assembly of Most Massive Galaxy Cluster Seen for the First Time

A dense flock of 14 galaxies from 1.4 billion years after the Big Bang is destined to become one of the most massive structures in the modern universe.

For the first time, astronomers have witnessed the birth of a colossal cluster of galaxies. Their observations reveal at least 14 galaxies packed into an area only four times the diameter of the Milky Way’s galactic disk. Computer simulations of the galaxies predict that over time the cluster will assemble into one of the most massive structures in the modern universe, the astronomers report in the April 26 issue of Nature.

Galaxies within the cluster are churning out stars at an incredible pace, ranging from 50 to 1,000 times the Milky Way’s star formation rate. These rates are higher than can be explained for solitary galaxies, suggesting that the galaxies are influencing one another and are actively assembling into a cluster.

Astronomical Assembly: Astronomers recently discovered a group of interacting and merging galaxies in the early universe, as seen in this artist’s illustration.

Credit: ESO/M. Kornmesser

“More so than any other candidate discovered to date, this seems like we’re catching a cluster in the process of being assembled,” says study co-author Chris Hayward, an associate research scientist at the Center for Computational Astrophysics at the Flatiron Institute in New York City. “This is the missing link in our understanding of how clusters form.”

Astrophysicists simulated the assembly of a budding cluster of 14 galaxies that was recently discovered by the Atacama Large Millimeter/submillimeter Array. As the simulation progresses, the galaxies merge into one titanic elliptical galaxy surrounded by a halo of galaxies, stars and dust. This configuration resembles that of galaxy clusters seen in the modern universe.

Credit: D. Rennehan, A. Babul & B. Moa (University of Victoria); C. Hayward (Flatiron Institute); S. Chapman (Dalhousie University/University of Victoria/NRC Herzberg); P. Hopkins (Caltech)

Galaxy clusters are the largest structures held together by gravity in the present-day universe and contain hundreds or even thousands of galaxies. Clusters grow over time as gravity draws in more material. This newborn galaxy cluster, or protocluster, is around 12.4 billion light-years away from Earth. That distance means that the protocluster appears today as it existed 1.4 billion years after the Big Bang.

How the assembly of galaxies got so big so fast “is a bit of a mystery,” says study co-author Scott Chapman, the Killam Professor in astrophysics at Dalhousie University in Halifax, Canada. “It wasn’t built up gradually over billions of years, as you might expect.”

Chapman, Hayward, Tim Miller of Yale University, and collaborators spotted the protocluster during a follow-up to a survey conducted using the South Pole Telescope in Antarctica. That undertaking inspected around 6 percent of the sky, but with relatively coarse resolution. In those observations, the protocluster was the brightest light source not magnified by the effect of a massive object’s gravity, which bends light like a lens. While bright, the source just looked like a fuzzy blob composed of at least three galaxies. An additional study by the Atacama Large Millimeter/submillimeter Array in Chile provided clarity and a surprise.

“It just hit you in the face because all of a sudden there are all these galaxies there,” Chapman says. “We went from three to 14 in one fell swoop. It instantly became obvious this was a very interesting, massive structure forming and not just a flash in the pan.”

In total, the protocluster contains around 10 trillion suns’ worth of mass. All that material in such a confined space means that the galaxies will probably merge over time, rather than drift apart. A numerical simulation developed by Hayward, Chapman and colleagues projected how the protocluster would grow over the next billion years. Over that time span, the 14 galaxies will merge into one giant elliptical galaxy surrounded by a halo of galaxies, stars and dust. The researchers estimate that in the modern-day universe, the cluster will contain roughly 1,000 trillion suns’ worth of mass. That’s comparable to the mass of the Coma cluster of galaxies that lies a few hundred million light-years from Earth.

The surprisingly high star formation rates within the galaxies provide further evidence that the galaxies are forming a cluster, says Hayward. The observed surge of star formation during the protocluster’s assembly fits with the composition of modern galaxy clusters, which contain an abundance of old stars of around the same age. “There’s some special aspect of this environment that’s causing the galaxies to form stars much more rapidly than individual galaxies that aren’t in this special place,” says Hayward. One possible explanation is that the gravitational tug of neighboring galaxies compresses gas within a galaxy, triggering star formation.

The protocluster is a precursor to the larger and more mature galaxy clusters seen in the modern universe, making the protocluster an excellent test bed for learning more about how present-day clusters formed and evolved. Modern clusters, for instance, brim with superheated gas that can reach temperatures of more than 1 million degrees. Scientists aren’t sure how that gas got there, though. The high rate of star formation in the newly discovered protocluster may provide a clue: A deluge of newborn stars in a forming cluster may spew hot gas into the voids between the galaxies. That expelled gas is not dense enough to form stars and instead lingers throughout the cluster.

Further exploration of protoclusters will provide additional insight, Chapman says. The group has already identified two more protoclusters from the South Pole Telescope survey, though they are not as spectacular, he says. “As we flesh out the details of those, we’ll see just how similar they are to this structure.”

Contacts and sources: 
Anastasia Greenebaum
Simons Foundation

Citation: A massive core for a cluster of galaxies at a redshift of 4.3
Nature, volume 556, pages 469–472 (2018)

Winter Wave Heights and Extreme Storms on the Rise in Western Europe

Average winter wave heights along the Atlantic coast of Western Europe have been rising for almost seven decades, according to new research.

The coastlines of Scotland and Ireland have seen the largest increases, with the average height of winter waves rising by more than 10 mm/year (more than 0.7 metres in total) since 1948.

That has also led to increased wave heights during extreme weather conditions, with levels off the Irish coast increasing by 25 mm/year during the past 70 years, an average rise of 1.7 m.

Waves crashing onto Chesil Beach in Dorset during the winter of 2013/14.

Credit: Tim Poate/University of Plymouth

The study, accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union, was conducted by scientists at the National Centre for Scientific Research (CNRS) in France, the University of Bordeaux and the University of Plymouth.

They say its findings are important for scientists and coastal managers looking to predict future wave heights, and take measures to protect coastal communities across Western Europe.

Dr Bruno Castelle, Senior Scientist at CNRS, said: "The height of waves during winter storms is the primary factor affecting dune and cliff erosion, explaining up to 80% of the shoreline variability along exposed sandy coasts. So any increases in wave heights, and greater frequency of extreme storms, are going to have a major impact on thousands of communities along the Atlantic coastlines of Western Europe. This work and our other recent studies have shown both are on the rise, meaning there is a real need to ensure the Atlantic coasts of Europe are protected against present and future storm threats."

The study used a combination of weather and wave hindcasts, and actual data, to measure changes in wave height and variability on coastlines from Scotland in the north to Portugal in the south.

These were then correlated against two climate indices - the North Atlantic Oscillation (NAO), which has long been known to affect climate variability in the Northern Hemisphere, and the West Europe Pressure Anomaly (WEPA), based on atmospheric pressure along the Atlantic coast of Europe.

The results showed that all areas had seen an average rise in winter wave heights during this period, although it varied from 10 mm/year in Scotland to 5 mm/year in France and 1 mm/year in Portugal.
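As a sanity check on the reported totals, a constant trend in mm/year can be accumulated over the roughly 70-year record beginning in 1948. The helper below is purely illustrative arithmetic, not part of the study's methodology:

```python
# Illustrative arithmetic: accumulate a linear wave-height trend
# (mm/year) over the ~70-year record starting in 1948.
def total_rise_m(trend_mm_per_year, years=70):
    """Total rise in metres for a constant trend sustained over `years`."""
    return trend_mm_per_year * years / 1000.0

print(total_rise_m(10))  # 0.7 m, matching the Scotland/Ireland figure
print(total_rise_m(25))  # 1.75 m, consistent with the ~1.7 m Irish extreme-wave figure
```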

The same scientists have previously shown that the winter storms of 2013/14 were the most energetic to hit the Atlantic coast of western Europe since records began in 1948.

Professor Gerd Masselink, Lead of the Coastal Processes Research Group at the University of Plymouth, said: "Whether extreme winters such as that of 2013/2014 will repeat more frequently and/or further intensify in the future is a key issue for the Atlantic coast of western Europe. It is therefore important to investigate if these extreme winters are already happening with increasing regularity and intensity, and why this is happening. If human-induced climate change is responsible, we need to seriously start thinking about decreasing our vulnerability to extreme storm events and pro-actively adapt to a more energetic future wave climate."

Contacts and sources:
Alan Williams
University of Plymouth

Climate Change Not the Key Driver of Human Conflict and Displacement in East Africa

Over the last 50 years, climate change has not been the key driver of human displacement or conflict in East Africa; rather, it is politics and poverty, according to new research by UCL. According to the UN Refugee Agency, in 2016 there were over 20 million displaced people in Africa.
Human displacement refers to the total number of forcibly displaced people, and includes internally displaced people - the largest group represented - and refugees, those forced to cross international borders.

“Terms such as climate migrants and climate wars have increasingly been used to describe displacement and conflict; however, these terms imply that climate change is the main cause. Our research suggests that socio-political factors are the primary cause while climate change is a threat multiplier," said Professor Mark Maslin (UCL Geography).

Total East African conflict, governance and economics graph.

Credit: Mark Maslin, UCL

The study, published in Palgrave Communications, found that climate variations such as regional drought and global temperature played little part in the causation of conflict and displacement of people in East Africa over the last 50 years.

The major driving forces on conflict were rapid population growth, reduced or negative economic growth and instability of political regimes. The total number of displaced people, meanwhile, is linked to rapid population growth and low or stagnating economic growth.

However the study found that variations in refugee numbers, people forced to cross international borders, are significantly linked to the incidence of severe regional droughts as well as political instability, rapid population growth and low economic growth.

The UN Refugee Agency reported there were over 20 million displaced people in Africa in 2016 - a third of the world's total. There has been considerable debate as to whether climate change will exacerbate this situation in the future by increasing conflict and triggering displacement of people.

This new study suggests that stable effective governance, sustained economic growth and reduced population growth are essential if conflict and forced displacement of people are to be reduced in Africa, which will be severely affected by climate change.

A new composite conflict and displacement database was used to identify major episodes of political violence and numbers of displaced people at country level over the last 50 years. These were compared to past global temperatures, the Palmer Drought Index, and, for the 10 East African countries in the study, data on population size, population growth, GDP per capita, rate of change of GDP per capita, life expectancy and political stability.

Total East African displaced people and drought index graphic.
Credit: Mark Maslin, UCL

The data were then analysed together using optimization regression modelling to identify whether climate change between 1963 and 2014 impacted the risk of conflict and displacement of people in East Africa.

The findings suggest that about 80% of conflict throughout the period can be explained by population growth 10 years earlier, political stability three years earlier and economic growth within the same year.

For total displacement of people, the modelling suggests that 70% can be predicted by population growth and economic growth from 10 years before.

For refugees, 90% can be explained by severe droughts one year earlier, population growth 10 years earlier, economic growth one year earlier, and political stability two years earlier. This correlates with an increase in refugees in the 1980s during a period of major droughts across East Africa.
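The lagged-predictor structure described above can be sketched as an ordinary least-squares fit. Everything below - the series, lags and coefficients - is synthetic and for illustration only; the study's actual model and data are not reproduced here:

```python
import numpy as np

# Hypothetical sketch of a lagged regression of the kind described:
# conflict(t) ~ population growth(t-10) + political stability(t-3)
#               + economic growth(t).
rng = np.random.default_rng(42)
n = 52  # 1963-2014

pop_growth = rng.normal(2.5, 0.5, n + 10)  # starts 10 years early for the lag
stability = rng.normal(0.0, 1.0, n + 3)    # starts 3 years early
gdp_growth = rng.normal(1.0, 2.0, n)

X = np.column_stack([
    pop_growth[:n],   # value 10 years before each conflict year
    stability[:n],    # value 3 years before
    gdp_growth,       # same year
    np.ones(n),       # intercept
])
true_beta = np.array([1.5, -0.8, -0.3, 2.0])  # invented "ground truth"
conflict = X @ true_beta + rng.normal(0, 0.5, n)

beta_hat, *_ = np.linalg.lstsq(X, conflict, rcond=None)
print(np.round(beta_hat, 2))  # recovers values close to true_beta
```

The point of the sketch is only the bookkeeping: each predictor column is shifted by its stated lag before fitting, so a single regression can mix same-year and decade-old influences.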

"The question remains as to whether drought would have exacerbated the refugee situation in East Africa had there been slower expansion of population, positive economic growth and more stable political regimes in the region," said Erin Owain, first author of the study.

"Our research suggests that the fundamental cause of conflict and displacement of large numbers of people is the failure of political systems to support and protect their people", concluded Professor Maslin.

The research was funded by the Natural Environmental Research Council and the Royal Society.

Contacts and sources:
Natasha Downes
 UCL (University College London)

Citation: 'Addressing the relative contribution of economic, political and environmental factors on past conflict and displacement of people in Africa', Owain, L.E., Maslin, A.M., Palgrave Communications

How Fungus Knows Which Way Is Up

The pin mold fungus Phycomyces blakesleeanus forms a dense forest of vertically growing fruiting bodies, but how does it know which way is "up"? New research published 24 April in the open access journal PLOS Biology, from Gregory Jedd's group at the Temasek Life Sciences Laboratory, National University of Singapore, reveals that the fungi have acquired and re-modelled a gene from bacteria to help them make large gravity-sensing crystals.

It was already known that the fruiting bodies of this fungus sense gravity and grow upright by detecting how octahedral protein crystals in fluid-filled chambers (vacuoles) settle under their own weight. But how the fungus acquired this trick during its evolution has remained unclear. To solve this puzzle, the team purified the crystals and identified a protein which they named OCTIN as their primary building block.

Phycomyces fruiting bodies. Each stalk is a single cell that elongates to form a structure 1-3 cm tall, with a spore-containing sphere at its tip. The spores accumulate melanin as they mature, explaining the black color. Inset: An OCTIN crystal from Phycomyces blakesleeanus; the crystal is about 5 microns across, dwarfing typical bacteria (1-2 microns in length) from which the OCTIN gene is likely to have been acquired.

Credit: Tu Anh Nguyen

Genetic information is generally transmitted vertically from parents to offspring. Horizontal gene transfer (HGT) is a phenomenon that occurs when DNA is transferred between unrelated individuals and can lead to the acquisition of useful functions such as resistance to environmental extremes and expanded metabolic capacity. In most well understood cases of HGT, however, it tends to be enzymes that confer these traits, and the original and acquired functions tend to remain closely related to each other.

"We were surprised that OCTIN-related genes are found in bacteria and that all the evidence pointed to horizontal gene transfer from bacteria into the ancestor of Phycomyces," said the authors. "This was intriguing because estimates of sedimentation show that bacteria are too small to employ gravity sensing structures. This made it clear that we were looking at the emergence of an evolutionary novelty based on how the proteins assembled."
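The sedimentation argument can be made concrete with Stokes' law for a small sphere settling in fluid, v = 2(ρp − ρf)gr²/(9μ). The densities, viscosity and sizes below are illustrative assumptions, not values from the paper:

```python
# Stokes settling speed for a small sphere: v = 2*(rho_p - rho_f)*g*r**2 / (9*mu)
# Assumed, illustrative values: particle density ~1300 kg/m^3,
# surrounding fluid ~ water (1000 kg/m^3, viscosity 1e-3 Pa*s).
def settling_speed(radius_m, rho_particle=1300.0, rho_fluid=1000.0,
                   viscosity=1e-3, g=9.81):
    """Terminal settling speed (m/s) of a small sphere in viscous fluid."""
    return 2 * (rho_particle - rho_fluid) * g * radius_m**2 / (9 * viscosity)

crystal = settling_speed(2.5e-6)    # ~5-micron OCTIN crystal (radius 2.5 um)
bacterium = settling_speed(0.5e-6)  # ~1-micron bacterium-sized particle
print(crystal / bacterium)  # ~25: settling speed scales as r**2
```

Because speed scales with the square of radius, a micron-scale bacterium settles far too slowly to serve as a gravity sensor, while a 5-micron crystal settles an order of magnitude faster.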

In the case of OCTIN, the researchers noticed that the position of cysteine residues, which have the capacity to form molecular bonds within and between proteins, has become rearranged during the evolution of fungal OCTIN. Correspondingly, the fungal OCTIN crystals dramatically swell and dissolve upon the addition of chemical agents that break such bonds, indicating that they indeed play a critical role in holding the crystal lattice together.

By contrast, bacterial OCTIN, which can also assemble into cysteine-bonded arrays, does so at a much smaller size scale (thousands of times smaller). However, this innate tendency to assemble suggests that bacterial OCTIN might have been predisposed to accumulating the mutations required to eventually build a crystal lattice.

The authors note that OCTIN itself is not the end of the story; when they forced mammalian cells to make fungal OCTIN, it did not form crystals. This suggests that the fungus has co-factors that are required to assist crystal assembly. Dr Jedd adds, "We are currently searching for these factors with the aim of reconstituting OCTIN crystal formation in the test tube. This will allow us to better understand and manipulate the assembly process and its products.

"High-order assemblies like those formed by OCTIN are not uncommon in nature. Identifying and studying these types of proteins will not only reveal mechanisms of adaptation and evolution, but can also lead to engineered smart protein assemblies with applications in areas such as drug delivery and immune system modulation."

Contacts and sources:
Gregory Jedd

Citation: Nguyen TA, Greig J, Khan A, Goh C, Jedd G (2018) Evolutionary novelty in gravity sensing through horizontal gene transfer and high-order protein assembly. PLoS Biol 16(4): e2004920. The article is freely available in PLOS Biology.

Future Wearable Device Could Tell How We Power Human Movement

For athletes and weekend warriors alike, returning from a tendon injury too soon often ensures a trip right back to physical therapy. However, a new technology developed by University of Wisconsin-Madison engineers could one day help tell whether your tendons are ready for action.

A team of researchers led by UW-Madison mechanical engineering professor Darryl Thelen and graduate student Jack Martin has devised a new approach for noninvasively measuring tendon tension while a person is engaging in activities like walking or running.

This advance could provide new insights into the motor control and mechanics of human movement. It also could apply to fields ranging from orthopedics and rehabilitation to ergonomics and sports. The researchers described their approach in a paper published (April 23, 2018) in the journal Nature Communications.

UW-Madison researchers developed a simple, noninvasive wearable device that enables them to measure tendon tension while a person is engaging in activities like walking or running. Here, the device is placed over the Achilles tendon.

Credit: Renee Meiller/UW-Madison

Muscles generate movement at joints by pulling on tendons, which are bands of tissue that connect muscles to the skeleton. But assessing the forces transmitted by tendons inside the body of a living person is tricky.

"Currently, wearables can measure our movement, but do not provide information on the muscle forces that generate the movement," says Thelen, whose work is supported by the National Institutes of Health.

To overcome this challenge, Thelen and his collaborators developed a simple, noninvasive device that can be easily mounted on the skin over a tendon. The device enables the researchers to assess tendon force by looking at how the vibrational characteristics of the tendon change when it undergoes loading, as it does during movement.

This phenomenon is similar to a guitar string, where the tension in the string changes the vibrational response. When a guitar string is plucked, the speed of the wave traveling along the string, and thus the vibration frequency, is related to the tension, or force, applied to the string.

UW-Madison Professor Darryl Thelen (on left) and graduate student Jack Martin measure tension in student Alexander Teague's Achilles tendon as he's running using a noninvasive wearable device that the research team developed.

Credit: Renee Meiller/UW-Madison

"We've found a way to measure the vibrational characteristics -- in this case, the speed of a shear wave traveling along a tendon -- and then we went further and determined how we can interpret this measurement to find the tensile stress within the tendon," Thelen says.
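The guitar-string analogy can be turned into a back-of-the-envelope calculation. In an idealized tensioned-string model, wave speed v relates to tension T and linear density mu by v = sqrt(T/mu), so T = mu * v**2. The sensor spacing, time delay and tendon linear density below are illustrative assumptions, not the study's calibration:

```python
# Sketch of the measurement chain described above (illustrative values only).
# Two accelerometers a known distance apart time the passing wave; an
# idealized tensioned-string model then converts wave speed to tension.

def wave_speed(sensor_spacing_m, time_delay_s):
    """Wave speed (m/s) from accelerometer spacing and arrival-time delay."""
    return sensor_spacing_m / time_delay_s

def tension_from_speed(speed_m_s, linear_density_kg_m):
    """String-model tension (N): T = mu * v**2."""
    return linear_density_kg_m * speed_m_s**2

v = wave_speed(0.01, 2e-4)          # 1 cm spacing, 0.2 ms delay -> 50 m/s
print(tension_from_speed(v, 0.01))  # tension in N for an assumed mu of 0.01 kg/m
```

The real tendon is stiffer than an ideal string, so the published method involves more than this one formula, but the sketch captures the core idea: a doubling of wave speed implies roughly a fourfold increase in tension.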

The new system for measuring wave speed is portable and relatively inexpensive. It includes a mechanical device that lightly taps the tendon 50 times per second. Each tap initiates a wave in the tendon, and two miniature accelerometers determine how quickly it travels.

The researchers have used the device to measure forces on the Achilles tendon, as well as the patellar and hamstring tendons. In each case, they can measure what happens in the tendon when users modify their gait -- for example, by changing step length or speed.

By measuring how muscles and tendons behave within the human body, this system could eventually enable clinicians to plan more effective treatments for patients suffering from musculoskeletal diseases and injuries.

"We think the potential of this new technology is high, both from a basic science standpoint and for clinical applications," Thelen says. "For example, tendon force measures could be used to guide treatments of individuals with gait disorders. It may also be useful to objectively assess when a repaired tendon is sufficiently healed to function normally and allow a person to return to activity."

Contacts and sources:
Darryl Thelen 
University of Wisconsin-Madison

Deadly Dust Levels Expected to Increase in the American Southwest

The American southwest, already prone to high levels of dust, could see a deadly increase due to climate change by 2100.

Increases in dust due to climate change could result in additional illness and deaths in the U.S. Southwest by 2100.

Credit: SEAS

In 1935, at the height of the Dust Bowl, a team of researchers from the Kansas Board of Health set out to understand the impact of dust on human health. In areas impacted by dust storms, the researchers documented an increase in respiratory infections, a 50-to-100 percent increase in pneumonia cases and an overall increase in “morbidity and mortality from the acute infections of the respiratory tract.”

And yet, the report concluded on an optimistic note. After all, the researchers noted, the rain fell, the skies cleared, and it was hoped that something like this “will never occur again.”

Fast forward to 2018. Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), the Department of Earth and Planetary Sciences (EPS), and the George Washington University (GWU) have found that in the coming decades, increased dust emissions from severe and prolonged droughts in the American Southwest could result in significant increases in hospital admissions and premature deaths.

“Our results indicate that future droughts driven by climate change could pose a potentially substantial public health burden in the U.S. Southwest,” said Pattanun “Ploy” Achakulwisut, former Harvard graduate student and first author of the paper. “This is a climate penalty that is not yet widely recognized.”

Achakulwisut is currently a postdoctoral fellow at GWU’s Milken Institute School of Public Health.

The research is published in Environmental Research Letters.

Since the Dust Bowl, the link between exposure to airborne particles and cardiovascular and respiratory illness has been well documented, but little attention has been given to how climate change may impact airborne dust levels.

This research, led by Achakulwisut, Loretta Mickley, Senior Research Fellow at SEAS, and Susan Anenberg, Associate Professor of Environmental and Occupational Health at GW, is the first to quantify how changing climatic conditions in the U.S. Southwest and northern Mexico may impact airborne dust levels and human health.

“The U.S. Southwest has been seeing some of the fastest population growth in the U.S., and the area is projected to experience severe and persistent droughts in coming decades due to human-caused climate change,” Anenberg said. “We know that droughts are associated with increases in exposure to small dust particles (PM2.5) and minerals. These pollutants can penetrate deeply into the lung and are linked to asthma, respiratory inflammation, and cardiovascular mortality, as well as an illness known as Valley Fever that is on the rise in the Southwest.”

Relying on observational data of airborne fine dust levels and regional drought conditions over the past 16 years, the researchers found that years with higher-than-normal dust levels in the U.S. Southwest correspond to dry soil conditions across southwestern North America, including areas spanning the Chihuahuan, Mojave, and Sonoran Deserts. The researchers estimated future changes in dust levels associated with drought conditions under the best- and worst-case climate change scenarios, and quantified how those changes would impact human health in the surrounding areas.

Credit: SEAS

They found that, depending on different climate change scenarios, airborne dust levels may increase by 10 to 30 percent. As a result, premature deaths attributable to fine dust could rise by between 20 and 130 percent, and annual hospitalizations due to dust-related cardiovascular and respiratory illness could grow by between 60 and 300 percent by 2100. These estimates take into account population growth in the region, as well as changing vulnerability to disease.

Credit: SEAS

“This research highlights the need to better understand both the potential effects climate change will have on dust levels, as well as the specific health impacts of exposure to fine dust in populated, arid regions that may be vulnerable to climate change,” said Mickley. “Our results suggest that drought-driven increases in fine dust would pose a substantial public health burden in the U.S. Southwest, especially under the worst-case climate change scenario.”

This research was supported by the U.S. Environmental Protection Agency.

Contacts and sources:
Leah Burrows

Why Uranus Stinks to High Heaven

Hydrogen sulfide, the gas that gives rotten eggs their distinctive odor, permeates the upper atmosphere of the planet Uranus - as has long been debated, but never definitively proven. Based on sensitive spectroscopic observations with the Gemini North telescope, astronomers uncovered the noxious gas swirling high in the giant planet's cloud tops. This result resolves a stubborn, long-standing mystery of one of our neighbors in space.

Even after decades of observations, and a visit by the Voyager 2 spacecraft, Uranus held on to one critical secret: the composition of its clouds. Now, one of the key components of the planet's clouds has finally been verified.

Patrick Irwin from the University of Oxford, UK and global collaborators spectroscopically dissected the infrared light from Uranus captured by the 8-meter Gemini North telescope on Hawaii's Maunakea. They found hydrogen sulfide, the odiferous gas that most people avoid, in Uranus's cloud tops. The long-sought evidence is published in the April 23rd issue of the journal Nature Astronomy.

This image of a crescent Uranus, taken by Voyager 2 on January 24th, 1986, reveals its icy blue atmosphere. Despite Voyager 2's close flyby, the composition of the atmosphere remained a mystery until now.

Image credit: NASA/JPL

The Gemini data, obtained with the Near-Infrared Integral Field Spectrometer (NIFS), sampled reflected sunlight from a region immediately above the main visible cloud layer in Uranus's atmosphere. "While the lines we were trying to detect were just barely there, we were able to detect them unambiguously thanks to the sensitivity of NIFS on Gemini, combined with the exquisite conditions on Maunakea," said Irwin. "Although we knew these lines would be at the edge of detection, I decided to have a crack at looking for them in the Gemini data we had acquired."

"This work is a strikingly innovative use of an instrument originally designed to study the explosive environments around huge black holes at the centers of distant galaxies," said Chris Davis of the United States' National Science Foundation, a leading funder of the Gemini telescope. "To use NIFS to solve a longstanding mystery in our own Solar System is a powerful extension of its use," Davis adds.

Astronomers have long debated the composition of Uranus's clouds and whether hydrogen sulfide or ammonia dominate the cloud deck, but lacked definitive evidence either way. "Now, thanks to improved hydrogen sulfide absorption-line data and the wonderful Gemini spectra, we have the fingerprint which caught the culprit," says Irwin. The spectroscopic absorption lines (where the gas absorbs some of the infrared light from reflected sunlight) are especially weak and challenging to detect according to Irwin.

The detection of hydrogen sulfide high in Uranus's cloud deck (and presumably Neptune's) contrasts sharply with the inner gas giant planets, Jupiter and Saturn, where no hydrogen sulfide is seen above the clouds, but instead ammonia is observed. The bulk of Jupiter and Saturn's upper clouds are comprised of ammonia ice, but it seems this is not the case for Uranus. These differences in atmospheric composition shed light on questions about the planets' formation and history.

Leigh Fletcher, a member of the research team from the University of Leicester in the UK, adds that the differences between the cloud decks of the gas giants (Jupiter and Saturn), and the ice giants (Uranus and Neptune), were likely imprinted way back during the birth of these worlds. "During our Solar System's formation the balance between nitrogen and sulphur (and hence ammonia and Uranus's newly-detected hydrogen sulfide) was determined by the temperature and location of the planet's formation."

Another factor in Uranus's early history is the strong evidence that our Solar System's giant planets likely migrated from where they initially formed. Confirming this compositional information is therefore invaluable for understanding Uranus's birthplace and evolution, and for refining models of planetary migration.

According to Fletcher, when a cloud deck forms by condensation, it locks away the cloud-forming gas in a deep internal reservoir, hidden away beneath the levels that we can usually see with our telescopes. "Only a tiny amount remains above the clouds as a saturated vapour," said Fletcher. "And this is why it is so challenging to capture the signatures of ammonia and hydrogen sulfide above cloud decks of Uranus. The superior capabilities of Gemini finally gave us that lucky break," concludes Fletcher.

Glenn Orton, of NASA's Jet Propulsion Laboratory, and another member of the research team notes, "We've strongly suspected that hydrogen sulfide gas was influencing the millimeter and radio spectrum of Uranus for some time, but we were unable to attribute the absorption needed to identify it positively. Now, that part of the puzzle is falling into place as well."

While the results set a lower limit to the amount of hydrogen sulfide around Uranus, it is interesting to speculate what the effects would be on humans even at these concentrations. "If an unfortunate human were ever to descend through Uranus's clouds, they would be met with very unpleasant and odiferous conditions." But the foul stench wouldn't be the worst of it according to Irwin. "Suffocation and exposure in the negative 200 degrees Celsius atmosphere made of mostly hydrogen, helium, and methane would take its toll long before the smell," concludes Irwin.

The new findings indicate that although the atmosphere might be unpleasant for humans, this far-flung world is fertile ground for probing the early history of our Solar System and perhaps understanding the physical conditions on other large, icy worlds orbiting the stars beyond our Sun.

Contacts and sources:
Peter Michaud
Gemini Observatory

Patrick Irwin
Professor of Planetary Physics
Department of Physics
University of Oxford

Drinking Baking Soda Could Be an Inexpensive, Safe Way To Combat Autoimmune Disease

A daily dose of baking soda may help reduce the destructive inflammation of autoimmune diseases like rheumatoid arthritis, scientists say.

They have some of the first evidence of how the cheap, over-the-counter antacid can encourage our spleen to instead promote an anti-inflammatory environment that could be therapeutic in the face of inflammatory disease, Medical College of Georgia scientists report in the Journal of Immunology.

They have shown that when rats or healthy people drink a solution of baking soda, or sodium bicarbonate, it triggers the stomach to make more acid to digest the next meal. It also prompts little-studied mesothelial cells sitting on the spleen to tell the fist-sized organ that there's no need to mount a protective immune response.

"It's most likely a hamburger not a bacterial infection," is basically the message, says Dr. Paul O'Connor, renal physiologist in the MCG Department of Physiology at Augusta University and the study's corresponding author.

Pictured is Dr. Paul O'Connor, renal physiologist in the lab at the Medical College of Georgia Department of Physiology at Augusta University.

Credit: Phil Jones, Senior Photographer, Augusta University

Mesothelial cells line body cavities, like the one that contains our digestive tract, and they also cover the exterior of our organs to quite literally keep them from rubbing together. About a decade ago, it was found that these cells also provide another level of protection. They have little fingers, called microvilli, that sense the environment, and warn the organs they cover that there is an invader and an immune response is needed.

Drinking baking soda, the MCG scientists think, tells the spleen - which is part of the immune system, acts like a big blood filter and is where some white blood cells, like macrophages, are stored - to go easy on the immune response. "Certainly drinking bicarbonate affects the spleen and we think it's through the mesothelial cells," O'Connor says.

The conversation, which occurs with the help of the chemical messenger acetylcholine, appears to promote a landscape that shifts against inflammation, they report.

In the spleen, as well as in the blood and kidneys, they found that after two weeks of drinking water with baking soda, the population of immune cells called macrophages shifted from primarily those that promote inflammation, called M1, to those that reduce it, called M2. Macrophages, perhaps best known for their ability to consume garbage in the body, like debris from injured or dead cells, are early arrivers to a call for an immune response.

In the case of the lab animals, the problems were hypertension and chronic kidney disease, which got O'Connor's lab thinking about baking soda.

One of the many functions of the kidneys is balancing important compounds like acid, potassium and sodium. With kidney disease, there is impaired kidney function and one of the resulting problems can be that the blood becomes too acidic, O'Connor says. Significant consequences can include increased risk of cardiovascular disease and osteoporosis.

"It sets the whole system up to fail basically," O'Connor says. Clinical trials have shown that a daily dose of baking soda can not only reduce acidity but actually slow progression of the kidney disease, and it's now a therapy offered to patients.
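The acid-base problem O'Connor describes can be made concrete with the standard Henderson-Hasselbalch relation for the bicarbonate buffer system. The values below are textbook normal and acidotic figures, not data from this study, so treat this as an illustrative sketch of why falling bicarbonate acidifies the blood:

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pka=6.1, co2_solubility=0.03):
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = pKa + log10([HCO3-] / (0.03 * pCO2))."""
    return pka + math.log10(hco3_mmol_l / (co2_solubility * pco2_mmhg))

# Textbook normal values: [HCO3-] ~24 mmol/L, pCO2 ~40 mmHg
print(round(blood_ph(24, 40), 2))  # -> 7.4

# Metabolic acidosis of kidney disease: bicarbonate falls, pH drops
print(round(blood_ph(15, 40), 2))  # -> 7.2
```

Oral sodium bicarbonate works on the first term: raising [HCO3-] back toward normal pushes the ratio, and hence the pH, back up.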

"We started thinking, how does baking soda slow progression of kidney disease?" O'Connor says.

That's when the anti-inflammatory impact began to unfold as they saw reduced numbers of M1s and increased M2s in their kidney disease model after consuming the common compound.

When they looked at a rat model without actual kidney damage, they saw the same response. So the basic scientists worked with the investigators at MCG's Georgia Prevention Institute to bring in healthy medical students who drank baking soda in a bottle of water and also had a similar response.

"The shift from inflammatory to an anti-inflammatory profile is happening everywhere," O'Connor says. "We saw it in the kidneys, we saw it in the spleen, now we see it in the peripheral blood."

The shifting landscape, he says, is likely due to increased conversion of some of the proinflammatory cells to anti-inflammatory ones coupled with actual production of more anti-inflammatory macrophages. The scientists also saw a shift in other immune cell types, like more regulatory T cells, which generally drive down the immune response and help keep the immune system from attacking our own tissues. That anti-inflammatory shift was sustained for at least four hours in humans and three days in rats.

The shift ties back to the mesothelial cells and their conversations with our spleen with the help of acetylcholine. Part of the new information about mesothelial cells is that they are neuron-like, but not neurons, O'Connor is quick to clarify.

"We think the cholinergic (acetylcholine) signals that we know mediate this anti-inflammatory response aren't coming directly from the vagal nerve innervating the spleen, but from the mesothelial cells that form these connections to the spleen," O'Connor says.

In fact, when they cut the vagal nerve, a big cranial nerve that starts in the brain and reaches into the heart, lungs and gut to help control things like a constant heart rate and food digestion, it did not impact the mesothelial cells' neuron-like behavior.

The effect, it appears, was more local: just touching the spleen did have an effect.

When they removed or even just moved the spleen, it broke the fragile mesothelial connections and the anti-inflammatory response was lost, O'Connor says. In fact, when they only slightly moved the spleen as might occur in surgery, the previously smooth covering of mesothelial cells became lumpier and changed colors.

"We think this helps explain the cholinergic (acetylcholine) anti-inflammatory response that people have been studying for a long time," O'Connor says.

Studies are currently underway at other institutions that, much like vagal nerve stimulation for seizures, electrically stimulate the vagal nerve to tamp down the immune response in people with rheumatoid arthritis. While there is no known direct connection between the vagal nerve and the spleen - and O'Connor and his team looked again for one - the treatment also attenuates inflammation and disease severity in rheumatoid arthritis, researchers at the Feinstein Institute for Medical Research reported in 2016 in the journal Proceedings of the National Academy of Sciences.

O'Connor hopes drinking baking soda can one day produce similar results for people with autoimmune disease.

Credit: Pixabay

"You are not really turning anything off or on, you are just pushing it toward one side by giving an anti-inflammatory stimulus," he says, in this case, away from harmful inflammation. "It's potentially a really safe way to treat inflammatory disease."

The spleen also got bigger with baking soda consumption, the scientists think because of the anti-inflammatory stimulus it produces. Infection also can increase spleen size, and physicians often palpate the spleen when concerned about a big infection.

Other cells besides neurons are known to use the chemical communicator acetylcholine. Baking soda also interacts with acidic ingredients like buttermilk and cocoa in cakes and other baked goods to help the batter expand and, along with heat from the oven, rise. It can also help raise the pH in pools, is found in antacids and can help clean your teeth and tub.

The research was funded by the National Institutes of Health.

Contacts and sources: 
Toni Baker 
Medical College of Georgia at Augusta University 

Tuesday, April 24, 2018

3-D Printed Food with Custom Taste and Texture: Could It Change How We Eat?

Researchers have 3-D printed food with customized texture and body absorption characteristics.

Imagine a home appliance that, at the push of a button, turns powdered ingredients into food that meets the individual nutrition requirements of each household member. Although it may seem like something from science fiction, new research aimed at using 3-D printing to create customized food could one day make this a reality.

A: Food materials are pulverized at ultra-low temperatures close to -100 degrees Celsius. B: Micro-sized food materials are reconstructed into a porous, film-shaped material by jetting a bonding agent under optimized water content and heat conditions. This film-building process is repeated layer by layer to form a three-dimensional food block. C: The exterior of the food and the internal microstructure of a food block with specific porosity are designed to give texture with controlled absorption by the body during eating and digestion.

Credit: Jin-Kyu Rhee, Ewha Womans University

Jin-Kyu Rhee, associate professor at Ewha Womans University in South Korea, discussed his new research and the potential of 3-D printing technology for food production at the American Society for Biochemistry and Molecular Biology annual meeting during the 2018 Experimental Biology meeting to be held April 21-25 in San Diego.

"We built a platform that uses 3-D printing to create food microstructures that allow food texture and body absorption to be customized on a personal level," said Rhee. "We think that one day, people could have cartridges that contain powdered versions of various ingredients that would be put together using 3-D printing and cooked according to the user's needs or preferences."

3-D printing of food works much like 3-D printing of other materials in that layers of raw material are deposited to build up a final product. In addition to offering customized food options, the ability to 3-D print food at home or on an industrial scale could greatly reduce food waste and the cost involved with storage and transportation. It might also help meet the rapidly increasing food needs of a growing world population.

For the new study, the researchers used a prototype 3-D printer to create food with microstructures that replicated the physical properties and nanoscale texture they observed in actual food samples. They also demonstrated that their platform and optimized methods can turn carbohydrate and protein powders into food with microstructures that can be tuned to control food texture and how the food is absorbed by the body.

"We are only in early stages, but we believe our research will move 3-D food printing to the next level," said Rhee. "We are continuing to optimize our 3-D print technology to create customized food materials and products that exhibit longer storage times and enhanced functionality in terms of body absorption."

Jin-Kyu Rhee presented the research from 12:45-1:30 p.m. Tuesday, April 24, in Exhibit Halls A-D, San Diego Convention Center (poster B284 801.9) (abstract).

Contacts and sources:
Anne Frances Johnson
Experimental Biology 2018

American Society for Biochemistry and Molecular Biology

Experimental Biology is an annual meeting comprised of more than 14,000 scientists and exhibitors from five host societies and multiple guest societies. With a mission to share the newest scientific concepts and research findings shaping clinical advances, the meeting offers an unparalleled opportunity for exchange among scientists from across the United States and the world who represent dozens of scientific areas, from laboratory to translational to clinical research.

How Animals Sense Danger

Have you ever wondered how animals avoid danger by sensing "signs" associated with it? A simple form of this phenomenon is called "fear conditioning", a type of learning seen in animals throughout the animal kingdom. By manipulating the activity of specific neurons in the zebrafish brain, scientists at the National Institute of Genetics (NIG) in Japan have identified a neuronal population essential for fear conditioning in zebrafish.

The study, published in the April 25 issue of BMC Biology, suggests that such a neural circuit essential for fear conditioning exists and has been conserved during vertebrate evolution.

How can animals avoid danger to survive? If animals experience dangerous events together with certain "signs", they memorize the sign, come to fear it, and perform fear responses, such as escape behavior. This type of learning is called "fear conditioning". In mammals, including humans, the amygdala, one of the structures of the brain, plays an important role in fear conditioning. However, how the brain structures and neural circuits essential for fear conditioning have been conserved (or changed) during vertebrate evolution has remained unknown.

A section of the zebrafish telencephalon. The neurons essential for fear conditioning are illuminated with GFP (green fluorescence protein). Scale bars: 200 μm.

Credit:  Koichi Kawakami

Zebrafish, a popular model animal in biological studies, can perform fear conditioning just as humans and other mammals can. Professor Kawakami's group has developed technologies for visualizing and manipulating specific brain neurons in zebrafish by employing the yeast transcription factor Gal4, the green fluorescent protein (GFP), and the botulinum neurotoxin (BoTx). They have generated a collection of transgenic fish lines that zebrafish researchers all over the world use to study brain functions as well as various other organs. Of the nearly 2,000 such transgenic fish lines in his lab, one that labels neurons in the dorsomedial (Dm) area of the zebrafish telencephalon played an important role in the current study.

"In mammals, including humans and mice, fear conditioning is mediated by a brain area called the amygdala. The amygdala integrates information about dangerous events, like electric shock, with signs such as visual or auditory stimuli. However, in fish, such neurons have not been found," Prof. Kawakami said.

"It is important to explore such neurons in fish because doing so increases our knowledge of the fundamental neural circuits that allow animals to perform evolutionarily conserved fear conditioning."

For this purpose, Dr. Lal, a former graduate student in the lab, developed a behavioral analysis system. Fish are placed in a small tank with two compartments. Green LEDs, which are not themselves harmful to fish, serve as the sign: the researchers paired electric shocks with the green LEDs being on, ten times a day for five consecutive days. Eventually, when the green LEDs came on, the fish learned to escape from the illuminated compartment and move to the other one.
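A rough way to picture the readout of such a protocol is as an escape rate per training day. The trial log below is entirely hypothetical (the paper's actual data will differ); it only sketches how a rising escape fraction across the five days would indicate conditioning:

```python
# Hypothetical trial log: 5 days x 10 light-shock pairings per day.
# True = fish escaped the illuminated compartment before the shock arrived.
trials = {
    1: [False] * 9 + [True],
    2: [False] * 7 + [True] * 3,
    3: [False] * 4 + [True] * 6,
    4: [False] * 2 + [True] * 8,
    5: [False] * 1 + [True] * 9,
}

def escape_rate(day_trials):
    """Fraction of trials on which the fish escaped before the shock."""
    return sum(day_trials) / len(day_trials)

learning_curve = {day: escape_rate(t) for day, t in trials.items()}
print(learning_curve)  # a rising escape rate indicates conditioning
```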

"It is fun to see how smart they are," Dr. Lal said.

Using these technologies and resources, they found that neurons in the region of the telencephalon called Dm are essential for fear conditioning in fish. Namely, these neurons are a functional equivalent of the mammalian amygdala. This result is a clue to clarifying the structure and evolution of the neural circuit essential for fear conditioning.

Prof. Kawakami showed us his zebrafish facility, where thousands of fish tanks can be seen, each containing genetically distinct fish that can drive GFP or BoTx expression in different types of neurons in the brain or in the body.

"This work showcases a successful application of our genetic resources to the study of brain function. It is also expected to be the basis for clarifying the causes and treatment of diseases involving fear and anxiety, such as PTSD," Prof. Kawakami said.

This study was supported partly by JSPS KAKENHI Grant Numbers JP15H02370 and JP16H01651, and NBRP from Japan Agency for Medical Research and Development (AMED).

Contacts and sources:
Koichi Kawakami
Research Organization of Information and Systems

Citation: Identification of a neuronal population in the telencephalon essential for fear conditioning in zebrafish. Authors: Pradeep Lal, Hideyuki Tanabe, Maximiliano L. Suster, Deepak Ailani, Yuri Kotani, Akira Muto, Mari Itoh, Miki Iwasaki, Hironori Wada, Emre Yaksi and Koichi Kawakami.
BMC Biology 2018, 16:45. © Kawakami et al. 2018

Why Zero-Calorie Sweeteners Can Still Lead to Diabetes, Obesity

Increased awareness of the health consequences of eating too much sugar has fueled a dramatic uptick in the consumption of zero-calorie artificial sweeteners in recent decades. However, new research finds sugar replacements can also cause health changes that are linked with diabetes and obesity, suggesting that switching from regular to diet soda may be a case of ‘out of the frying pan, into the fire.’

Artificial sweeteners are one of the most common food additives worldwide, frequently consumed in diet and zero-calorie sodas and other products. While some previous studies have linked artificial sweeteners with negative health consequences, earlier research has been mixed and raised questions about potential bias related to study sponsorship.

Credit: Steve Snodgrass / Wikimedia Commons

This new study is the largest examination to date that tracks biochemical changes in the body—using an approach known as unbiased high-throughput metabolomics—after consumption of sugar or sugar substitutes. Researchers also looked at impacts on vascular health by studying how the substances affect the lining of blood vessels. The studies were conducted in rats and cell cultures.

“Despite the addition of these non-caloric artificial sweeteners to our everyday diets, there has still been a drastic rise in obesity and diabetes,” said lead researcher Brian Hoffmann, PhD, assistant professor in the department of biomedical engineering at the Medical College of Wisconsin and Marquette University. “In our studies, both sugar and artificial sweeteners seem to exhibit negative effects linked to obesity and diabetes, albeit through very different mechanisms from each other.”

Hoffmann will present the research at the American Physiological Society annual meeting during the 2018 Experimental Biology meeting, held April 21-25 in San Diego.

The team fed different groups of rats diets high in glucose or fructose (kinds of sugar), or aspartame or acesulfame potassium (common zero-calorie artificial sweeteners). After three weeks, the researchers saw significant differences in the concentrations of biochemicals, fats and amino acids in blood samples.

The results suggest artificial sweeteners change how the body processes fat and gets its energy. In addition, they found acesulfame potassium seemed to accumulate in the blood, with higher concentrations having a more harmful effect on the cells that line blood vessels.

“We observed that in moderation, your body has the machinery to handle sugar; it is when the system is overloaded over a long period of time that this machinery breaks down,” Hoffmann said. “We also observed that replacing these sugars with non-caloric artificial sweeteners leads to negative changes in fat and energy metabolism.”

So, which is worse, sugar or artificial sweeteners? Researchers cautioned that the results do not provide a clear answer and the question warrants further study. It is well known that high dietary sugar is linked to negative health outcomes and the study suggests artificial sweeteners do, too.

“It is not as simple as ‘stop using artificial sweeteners’ being the key to solving overall health outcomes related to diabetes and obesity,” Hoffmann added. “If you chronically consume these foreign substances (as with sugar) the risk of negative health outcomes increases. As with other dietary components, I like to tell people moderation is the key if one finds it hard to completely cut something out of their diet.”

Brian Hoffmann  presented this research on Sunday, April 22, from 10 a.m.–noon in the San Diego Convention Center Exhibit Hall (poster A322) (abstract). 

About Experimental Biology 2018

Experimental Biology is an annual meeting that attracts more than 14,000 scientists and exhibitors from five host societies and more than two dozen guest societies. With a mission to share the newest scientific concepts and research findings shaping clinical advances, the meeting offers an unparalleled opportunity for exchange among scientists from across the U.S. and the world who represent dozens of scientific areas, from laboratory to translational to clinical research. #expbio

Contacts and sources:
American Physiological Society (APS)
Federation of American Societies for Experimental Biology (FASEB)

Harvesting Water from Fog with Harps (Video)

As summertime draws near, some people around the U.S. will face annual water usage restrictions as water supplies become strained. But for those who live in arid climates year-round, water shortages are a constant concern. In these areas, residents must capitalize on even the smallest bit of moisture in the air.

Now researchers report in ACS Applied Materials & Interfaces that they have developed a type of “harp” to harvest fresh water from fog.

According to the World Wildlife Fund, as much as two-thirds of the world’s population could face water shortages by 2025. To combat this, fog harvesting is used to collect fresh water in dry climates. Current methods involve setting out a mesh netting with vertical and horizontal wires to catch water droplets, which then fall into a collector. If the wire mesh is too coarse, it cannot effectively capture water, but if the wire mesh is too fine, it gets clogged easily.

Credit: ACS

Coatings and lubricants have been applied to the mesh to prevent clogging, but they don’t last and can leach into the water, contaminating it. Although previous studies have tested harp-like structures for this application, the researchers had not optimized the harps, nor had they compared their performances to mesh devices. So, Jonathan Boreyko and colleagues wanted to take those extra steps.

The researchers made three harp prototypes with uncoated vertical wires of three different diameters pulled taut on support frames. They then compared these harps with uncoated meshes of nearly the same wire thicknesses. The water collection efficiency of the meshes decreased for fine wires as they became clogged, but the efficiency of the harps rose with smaller-diameter wires because droplets shed along the plane of the wires experience a reduced pinning force. In addition, the harps consistently collected more water than the equivalent meshes at all wire diameters.

In fact, the fog harp with the finest wires collected more than three times as much water as the finest mesh. The researchers also showed that the harp could be scaled up to a real-world size.
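One way to see how such comparisons work is to tabulate harp-to-mesh collection ratios per wire diameter. The collection rates below are invented for illustration (the article gives only the more-than-threefold figure for the finest wires), so this is a sketch of the bookkeeping, not the paper's data:

```python
# Hypothetical collection rates (mL of water per hour) for harps vs meshes
# at three wire diameters. The article reports that harps beat meshes at
# every diameter, with the advantage growing as the wires get finer.
rates = {  # wire diameter in mm: (harp, mesh)
    1.30: (20.0, 15.0),
    0.51: (28.0, 12.0),
    0.25: (36.0, 10.0),
}

for diameter, (harp, mesh) in sorted(rates.items(), reverse=True):
    print(f"{diameter} mm wires: harp/mesh ratio = {harp / mesh:.1f}")
```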

The authors acknowledge funding from the Institute for Creativity, Arts, and Technology at Virginia Tech and the Department of Biomedical Engineering and Mechanics at Virginia Tech.

Contacts and sources:
Katie Cottingham, Ph.D.
The American Chemical Society

Citation: “Fog Harvesting with Harps,” ACS Applied Materials & Interfaces

Handheld Device to Sniff Out Humans Trapped in Collapsed Buildings

Earthquakes are lethal natural disasters that frequently bury people alive under collapsed buildings. Tracking entrapped humans by their unique volatile chemical signature with handheld devices would accelerate urban search and rescue (USaR) efforts.

The first step after buildings collapse from an earthquake, bombing or other disaster is to rescue people who could be trapped in the rubble. But finding entrapped humans among the ruins can be challenging. 

Scientists now report in the ACS journal Analytical Chemistry the development of an inexpensive, selective sensor that is light and portable enough for first responders to hold in their hands or for drones to carry on a search for survivors.

A new sensor could aid first responders in their search for survivors after buildings collapse.
Credit: Linda Macpherson

In the hours following a destructive event, the survival rate of people trapped in the rubble drops rapidly, so it’s critical to get in there fast. Current approaches include the use of sniffer dogs and acoustic probes that can detect cries for help. But these methods have drawbacks, such as the limited availability of canines and the silence of unconscious victims.

Devices that detect a human chemical signature, which includes molecules that are exhaled or that waft off the skin, are promising. But so far, these devices are too bulky and expensive for wide implementation, and they can miss signals that are present at low concentrations. So, Sotiris E. Pratsinis and colleagues wanted to develop an affordable, compact sensor array to detect even the most faint signs of life.

The researchers built their palm-sized sensor array from three existing gas sensors, each tailored to detect a specific chemical emitted by breath or skin: acetone, ammonia or isoprene. They also included two commercially available sensors for detecting humidity and CO2. In a human entrapment simulation, the sensors rapidly detected tiny amounts of these chemicals, at levels unprecedented for portable detectors--down to three parts per billion. The next step is to test the sensor array in the field under conditions similar to those expected in the aftermath of a calamity.
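One simple way the channels of such an array might be combined is a threshold-based decision rule. The rule and threshold values below are hypothetical, not the authors' actual signal-processing method; they only illustrate the idea of requiring several breath/skin tracers plus elevated CO2 or humidity before flagging a find:

```python
# Hypothetical decision rule combining the five channels described in the
# article: acetone, ammonia and isoprene sensors (parts per billion) plus
# humidity and CO2. Threshold values are invented for illustration only.
THRESHOLDS_PPB = {"acetone": 3, "ammonia": 3, "isoprene": 3}

def human_signature(readings_ppb, co2_elevated, humidity_elevated):
    """Flag a possible entrapped person when at least two breath/skin
    tracers exceed threshold and CO2 or humidity is also elevated."""
    hits = sum(readings_ppb.get(gas, 0) >= limit
               for gas, limit in THRESHOLDS_PPB.items())
    return hits >= 2 and (co2_elevated or humidity_elevated)

print(human_signature({"acetone": 5, "ammonia": 4, "isoprene": 1}, True, False))
```

Requiring agreement across channels is what lets an array of cheap sensors stay selective at very low concentrations.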

The authors acknowledge funding from the Swiss National Science Foundation, the European Union’s Horizon 2020 research and innovation program and the Austrian Research Promotion Agency.

Contacts and sources:
Katie Cottingham, Ph.D.
The American Chemical Society

Citation: “Sniffing Entrapped Humans with Sensor Arrays”
Andreas T. Güntner, Nicolay J. Pineau†, Paweł Mochalski‡, Helmut Wiesenhofer‡, Agapios Agapiou§, Christopher A. Mayhew‡, and Sotiris E. Pratsinis*
† Particle Technology Laboratory, ETH Zurich, Zurich CH-8092, Switzerland
‡ Institute for Breath Research of the University of Innsbruck, Dornbirn AT-6850, Austria
§ Department of Chemistry, University of Cyprus, P.O. Box 20537, Nicosia CY-1678, Cyprus
Analytical Chemistry, 2018, 90 (8), pp 4940–4945

Some Kitchen Cabinets Emit Potentially Harmful Compounds

Probably the last place anyone would want to find airborne polychlorinated biphenyls (PCBs) is in the kitchen, yet that's exactly where scientists detected their presence, according to a new report in ACS’ journal Environmental Science & Technology. They say that the PCBs, which are widely considered carcinogenic, are unwanted byproducts of sealant breakdown in modern kitchen cabinetry.

Credit: Kotivalo / Wikimedia Commons

As a group, PCBs are classified by the International Agency for Research on Cancer as known human carcinogens, and their manufacture was banned in the U.S. in 1979. But because of the tendency of these chemicals to stick around in the environment and their inadvertent production as manufacturing byproducts, PCBs can still be found in offices and schools. Keri C. Hornbuckle and colleagues at the University of Iowa College of Engineering wanted to determine how much and what types of PCBs are present in and around residences.

The researchers measured the concentrations of PCBs using polyurethane-equipped passive air samplers (PUF-PAS) for a 6-week interval from August 22, 2017, to October 2, 2017, inside and outside 16 homes in Iowa. They found neurotoxic PCB-47 and PCB-51, as well as PCB-68, at much higher levels than expected. The concentrations appeared to depend on the year the house was built, with higher levels in more recently built homes. After testing the emissions from a variety of household items, including the stove, floor and walls, the researchers found the PCBs wafting off the finished kitchen cabinetry. The researchers suspect that the substances come from the decomposition of 2,4-dichlorobenzoyl peroxide, a common ingredient in modern cabinet sealants. This finding brings to light a previously unknown source of a toxic chemical in the home.
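The reported trend (newer homes, higher PCB levels) is the kind of relationship a Pearson correlation would quantify. The year and concentration values below are invented for illustration, not the study's measurements:

```python
# Hypothetical data: house construction year vs measured indoor PCB
# concentration (ng per cubic metre). Values are invented to illustrate
# the reported trend of higher levels in more recently built homes.
years = [1965, 1978, 1990, 2001, 2010, 2015]
pcb = [0.5, 0.8, 1.9, 3.1, 4.4, 5.2]

def pearson(x, y):
    """Pearson correlation coefficient: covariance over the product of
    standard deviations, computed from scratch for clarity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"year vs PCB correlation: r = {pearson(years, pcb):.2f}")
```

A coefficient near +1 would indicate that build year alone tracks the contamination level closely, consistent with a sealant introduced in recent decades.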

The authors acknowledge funding from the Superfund Research Program of the National Institute of Environmental Health Sciences.

Contacts and sources:
Katie Cottingham, Ph.D.
The American Chemical Society

Record Amounts of Microplastic Frozen in Arctic Sea Ice, 12,000 Particles Per Liter

Experts at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) have recently found higher amounts of microplastic in Arctic sea ice than ever before. However, the majority of particles were microscopically small.

The ice samples from five regions throughout the Arctic Ocean contained up to 12,000 microplastic particles per litre of sea ice. Further, the different types of plastic showed a unique footprint in the ice, allowing the researchers to trace them back to possible sources. These include the massive garbage patch in the Pacific Ocean, while the high percentage of paint and nylon particles pointed to intensified shipping and fishing activities in some parts of the Arctic Ocean.

The new study has just been released in the journal Nature Communications.

An AWI scientist is preparing an Arctic sea-ice core for a microplastic analysis in a lab at the AWI Helgoland.
Credit: © Alfred-Wegener-Institut/Tristan Vankann

"During our work, we realised that more than half of the microplastic particles trapped in the ice were less than a twentieth of a millimetre wide, which means they could easily be ingested by arctic microorganisms like ciliates, but also by copepods," says AWI biologist and first author Dr Ilka Peeken. The observation is a very troubling one because, as she explains, "No one can say for certain how harmful these tiny plastic particles are for marine life, or ultimately also for human beings."

The AWI research team gathered the ice samples in the course of three expeditions to the Arctic Ocean aboard the research icebreaker Polarstern in the spring of 2014 and the summer of 2015. The samples come from five regions along the Transpolar Drift and the Fram Strait, which transport sea ice from the Central Arctic to the North Atlantic.

Infrared spectrometer reveals heavy contamination with microparticles

The term microplastic refers to plastic particles, fibres, pellets and other fragments with a length, width or diameter ranging from only a few micrometres - thousandths of a millimetre - to under five millimetres. A considerable amount of microplastic is released directly into the ocean by the gradual deterioration of larger pieces of plastic. But microplastic can also be created on land - e.g. by laundering synthetic textiles or the abrasion of car tyres. This microplastic initially floats through the air as dust and is then blown to the ocean by the wind, or finds its way there through sewer networks.

Germany's research icebreaker POLARSTERN above the Lomonossov Ridge in the central Arctic Ocean.

Credit: © Alfred-Wegener-Institut/Ruediger Stein

In order to determine the exact amount and distribution of microplastic in the sea ice, the AWI researchers were the first to analyse the ice cores layer by layer using a Fourier transform infrared (FTIR) spectrometer, a device that bombards microparticles with infrared light and uses a special mathematical method to analyse the radiation they reflect back. Depending on their makeup, the particles absorb and reflect different wavelengths, allowing every substance to be identified by its optical fingerprint.
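The fingerprint-matching idea can be sketched as a nearest-match search over a reference library. The spectra below are short made-up vectors, not real FTIR data, and real spectral libraries use far more sophisticated matching; this only shows the principle of identifying a substance by its closest reference fingerprint:

```python
# Toy sketch of spectral fingerprinting: match a measured absorption
# spectrum to a library of reference fingerprints by smallest summed
# squared difference. Spectra here are invented 4-point vectors.
REFERENCE = {
    "polyethylene": [0.9, 0.1, 0.7, 0.2],
    "nylon":        [0.2, 0.8, 0.3, 0.6],
    "polyester":    [0.5, 0.5, 0.9, 0.1],
}

def identify(measured):
    """Return the reference material whose fingerprint is closest to
    the measured spectrum (least sum of squared differences)."""
    def distance(ref):
        return sum((m - r) ** 2 for m, r in zip(measured, ref))
    return min(REFERENCE, key=lambda name: distance(REFERENCE[name]))

print(identify([0.85, 0.15, 0.65, 0.25]))  # closest to polyethylene
```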

"Using this approach, we also discovered plastic particles that were only 11 micrometres across. That's roughly one-sixth the diameter of a human hair, and also explains why we found concentrations of over 12,000 particles per litre of sea ice - which is two to three times higher than what we'd found in past measurements," says Gunnar Gerdts, in whose laboratory the measurements were carried out. Surprisingly, the researchers found that 67 percent of the particles detected in the ice belonged to the smallest-scale category, "50 micrometres and smaller".

Ice drift and the chemical fingerprint offer clues to pollutants' regions of origin

The particle density and composition varied significantly from sample to sample. At the same time, the researchers determined that the plastic particles were not uniformly distributed throughout the ice core. "We traced back the journey of the ice floes we sampled and can now safely say that both the region in which the sea ice is initially formed and the water masses in which the floes drift through the Arctic while growing, have an enormous influence on the composition and layering of the encased plastic particles," relates Ilka Peeken.

The team of researchers also learned, for example, that ice floes that drift in the Pacific water masses of the Canadian Basin contain particularly high concentrations of polyethylene particles. Polyethylene is used above all in packaging material. As the experts write in their study, "Accordingly, we assume that these fragments represent remains of the so-called Great Pacific Garbage Patch and are pushed along the Bering Strait and into the Arctic Ocean by the Pacific inflow."

In contrast, in ice from the shallow marginal seas of Siberia, the scientists predominantly found particles of ship paint and nylon waste from fishing nets. "These findings suggest that both the expanding shipping and fishing activities in the Arctic are leaving their mark. The high microplastic concentrations in the sea ice can thus not be attributed solely to sources outside the Arctic Ocean. Instead, they also point to local pollution in the Arctic," says Ilka Peeken.

The researchers found a total of 17 different types of plastic in the sea ice, including packaging materials like polyethylene and polypropylene, but also paints, nylon, polyester, and cellulose acetate - the latter primarily used in the manufacture of cigarette filters. Taken together, these six materials accounted for roughly half of all the microplastic particles detected.

According to Ilka Peeken, "The sea ice binds all this plastic litter for two to a maximum of eleven years - the time it takes for ice floes from the marginal seas of Siberia or the North American Arctic to reach the Fram Strait, where they melt." But conversely, this also means that sea ice transports large quantities of microplastic to the waters off the northeast coast of Greenland.

The researchers can't yet say whether the released plastic particles subsequently remain in the Arctic or are transported farther south; in fact, it seems likely that the plastic litter begins sinking into deeper waters relatively quickly. "Free-floating microplastic particles are often colonized by bacteria and algae, which makes them steadily heavier. Sometimes they also clump together with algae, causing them to drift down to the seafloor much faster," explains AWI biologist and co-author Dr Melanie Bergmann.

The observations made by researchers at the AWI's deep-sea network HAUSGARTEN in the Fram Strait lend additional weight to this thesis. As Melanie Bergmann relates, "We recently recorded microplastic concentrations of up to 6500 plastic particles per kilogram of seafloor; those are extremely high values."

Contacts and sources:
Dr. Folke Mehrtens
The Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI)

Evolutionary Anthropology: Rhythm Crucial in Drummed Speech

Researchers find that the Amazonian Bora people mimic the rhythm of their language to communicate over large distances using drums

How can an entire language be mapped onto beats on two drums? To answer this question, an international team of researchers, including Frank Seifart and Sven Grawunder of the former Department of Linguistics at the Max Planck Institute for Evolutionary Anthropology in Leipzig and Julien Meyer from the Université Grenoble Alpes in France, carried out research into the drummed speech system of the Bora people of the Northwest Amazon. They found that the Boras reproduce not only the melody of words and sentences in this endangered language, but also their rhythm. This suggests that the crucial role of linguistic rhythm in language processing has been underestimated.

The Amazonian Bora people mimic the rhythm of their language using drums.
Credit: © Gaiamedia/ Aexcram

The human voice can produce rich and varied acoustic signals to transmit information. Normally, this transmission only has a reach of about 200 metres. The Boras, an indigenous group of about 1,500 members residing in small communities in the Amazonian rainforest of Colombia and Peru, can extend this range by a factor of 100 by emulating Bora phrases in sequences of drumbeats. They do this with manguaré drums - pairs of wooden slit drums, each about two metres long, traditionally carved from single logs through burning. Each drum can produce two pitches, so a pair can produce four in total.

The Boras use manguaré drums in two ways. One is the ‘musical mode’, which is used to perform memorised drum sequences with little or no variation as part of rituals and festivals. The other is the ‘talking mode’, which is used to transmit relatively informal messages and public announcements. 

“For example, the manguaré is used to ask someone to bring something or to come do something, to announce the outcome of non-alcoholic drinking competitions or the arrival of visitors”, says Seifart of the former Department of Linguistics at the Max Planck Institute for Evolutionary Anthropology, where the major part of the now published work was done. “In this mode, only two pitches are used and each beat corresponds to a syllable of a corresponding phrase of spoken Bora. The announcements contain on average 15 words and 60 drum beats.”

Rhythm essential

The Boras use drummed Bora to mimic the tone and rhythm of their spoken language and to elaborate Bora phrases in order to overcome remaining ambiguities. “Rhythm turns out to be crucial for distinguishing words in drummed Bora”, says Seifart. “There are four rhythmic units encoded in the length of pauses between beats. These units correspond to vowel-to-vowel intervals with different numbers of consonants and vowel lengths. The two phonological tones represented in drummed speech encode only a few lexical contrasts. Rhythm therefore appears to crucially contribute to the intelligibility of drummed Bora.”
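The encoding principle Seifart describes - one beat per syllable, pitch following the syllable's phonological tone, and the pause before the next beat signalling one of four rhythmic categories - can be sketched in a few lines. The pause durations and the example phrase below are hypothetical; the paper reports the actual measured intervals.

```python
# Hedged sketch of the drummed-speech encoding described above. One
# drumbeat per syllable: the beat's pitch follows the syllable's tone
# (low or high), and the pause after the beat encodes one of four
# rhythmic categories. All durations are invented for illustration.

# Hypothetical pause lengths (ms) for the four vowel-to-vowel interval types.
PAUSE_MS = {"short": 250, "medium": 350, "long": 450, "extra": 550}

def drum_encode(syllables):
    """syllables: list of (tone, interval) pairs, tone in {'L', 'H'}.
    Returns a list of (pitch, pause_ms) drumbeat instructions."""
    beats = []
    for tone, interval in syllables:
        pitch = "high" if tone == "H" else "low"
        beats.append((pitch, PAUSE_MS[interval]))
    return beats

# A made-up three-syllable phrase: low-high-low tones with varying rhythm.
phrase = [("L", "short"), ("H", "long"), ("L", "medium")]
print(drum_encode(phrase))
# [('low', 250), ('high', 450), ('low', 350)]
```

Because only two pitches are available, many syllables share the same beat pitch; it is the rhythmic categories in the pauses that carry much of the disambiguating information, which is the study's central finding.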

This, the researchers argue, provides novel evidence for the role of rhythmic structures composed of vowel-to-vowel intervals in the complex puzzle concerning the redundancy and distinctiveness of acoustic features embedded in speech.

Contacts and sources:
Sandra Jacobs, Max Planck Institute for Evolutionary Anthropology
Dr. Frank Seifart, Amsterdam Center for Language and Communication, University of Amsterdam

Citation: Reducing language to rhythm: Amazonian Bora drummed language exploits speech rhythm for long-distance communication.
Frank Seifart, Julien Meyer, Sven Grawunder & Laure Dentel
Royal Society Open Science; 25 April, 2018 (DOI: 10.1098/rsos.170354)

Artificial Leaf Is Mini Medicine Factory

Using sunlight for the sustainable, inexpensive production of medicines and other chemicals: the 'mini-factory' in the form of a leaf that chemical engineers from Eindhoven University of Technology presented in 2016 showed that it is possible. Now the researchers have come up with an improved version: their 'mini-factory' can keep production at the same level irrespective of variations in sunlight due to cloudiness or the time of day. This boosts the average yield by about 20%, thanks to a clever feedback system costing less than 50 euros that automatically slows down or speeds up production. This removes a significant practical barrier for green reactors that operate purely on sunlight.

With their 'artificial leaf', the researchers, under the guidance of the Eindhoven chemical engineer Timothy Noël, earned widespread admiration about a year and a half ago. They first succeeded in driving chemical reactions with sunlight - something chemists had dreamed of for ages, but which had long seemed out of reach because the amount of captured sunlight was not sufficient.

Even with the naked eye, the light captured by the 'mini-factories' is visible: they light up bright red. The 'veins' through the leaves are the thin channels through which liquid can be pumped. The starting materials enter through one channel, light drives the reactions, and the end product flows out via the other channels.

Credit: Bart van Overbeeke

Their breakthrough can be partly attributed to the use of relatively new materials (so-called luminescent solar concentrators) that trap a specific part of the sunlight inside, in a similar way to plants, which do this using special antenna molecules in their leaves. The second discovery was to apply very thin channels in these materials, through which liquids are pumped, exposing the liquids to enough sunlight for chemical reactions to take place. The end products then flow out at the extremities of the channels.

Problem: not always the same amount of sun

One of the biggest practical obstacles to applying this on a large scale is that the intensity of sunlight is not constant: the sky may be cloudy, for example, and sunlight varies in intensity and composition over the course of the day. "If there is too much light, you get unwanted by-products, and if there is too little light, the reactions do not take place or do so too slowly," Noël explains. "Ideally, the system should automatically adapt to the amount of incoming sunlight."

The feedback system developed does exactly that. It consists of just three relatively simple elements. A light sensor measures the amount of light that reaches the channels. A microcontroller translates this signal to a pump speed. And the pump drives the fluids through the channels at that speed. All this costs less than 50 euros. Experiments to determine the required pump speed for a specific light intensity enabled the researchers to optimize the feedback loop.
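The three-element loop described above amounts to a calibration table plus interpolation: measure the light, look up the pump speed that keeps the light dose per fluid parcel constant, and set the pump accordingly. A minimal sketch follows; the calibration values and units are invented for illustration, as the researchers determined their actual calibration experimentally for their specific reactor.

```python
# Hedged sketch of the sensor -> microcontroller -> pump feedback loop.
# The calibration table below is hypothetical: pairs of (light intensity
# in W/m^2, pump speed in uL/min) that keep the per-parcel light dose,
# and hence the reaction yield, roughly constant.

CALIBRATION = [(200, 50), (400, 100), (600, 150), (800, 200)]

def pump_speed(intensity):
    """Linearly interpolate pump speed from the calibration table,
    clamping at the table's edges."""
    if intensity <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    if intensity >= CALIBRATION[-1][0]:
        return CALIBRATION[-1][1]
    for (x0, y0), (x1, y1) in zip(CALIBRATION, CALIBRATION[1:]):
        if x0 <= intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)

# A cloud passes: the controller slows the pump so each fluid parcel
# still receives the same total light dose.
for reading in [800, 500, 300]:
    print(reading, "->", pump_speed(reading))
```

On the real device this loop would run continuously on the microcontroller, re-reading the sensor and updating the pump several times per second; the logic itself stays this simple, which is how the whole system can cost under 50 euros.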

Test on the roof

In addition to lab testing under artificial light, they also tested their system outdoors in natural sunlight, on the roof of one of the buildings on the TU/e campus. At a yield setting of 90%, the system kept production stable for an hour at between 86% and 93%. The same system without the feedback loop varied significantly, between 55% and 97%. The average yield was increased by about 20% thanks to the feedback loop.

According to Noël, this brings a cheap and sustainable reactor considerably closer to being able to produce chemical products on a large scale, wherever you want, with only sunlight as an energy source. "It is inevitable that energy prices will rise. And with a source of energy like the sun that is free and available, these kinds of technological solutions can make the difference."

Contacts and sources:
Barry van der Meer
Eindhoven University of Technology

Citation: Real-time reaction control for solar production of chemicals under fluctuating irradiance,
Fang Zhao, Dario Cambié, Volker Hessel, Michael Debije and Timothy Noël, Green Chemistry 2018, 24 April 2018, DOI: 10.1039/c8gc00613j

"Potentially Apocalyptic Risks": RAND Study Says By 2040, Artificial Intelligence Could End Nuclear Stability

A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take potentially apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

Credit: Chara Williams, RAND Corporation

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces such as submarine and mobile missiles could be targeted and destroyed.

Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s that sought to use AI to translate reconnaissance data into nuclear targeting plans.

Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.

Researchers say that given future improvements, it is possible that eventually AI systems will develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term.

"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, co-author on the paper and associate engineer at RAND. "There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."

RAND researchers based their perspective on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy and national security.

"Will Artificial Intelligence Increase the Risk of Nuclear War?" is available at

The perspective is part of a broader effort to envision critical security challenges in the world of 2040, considering the effects of political, technological, social, and demographic trends that will shape those security challenges in the coming decades.

Funding for the Security 2040 initiative was provided by gifts from RAND supporters and income from operations.

The research was conducted within the RAND Center for Global Risk and Security, which works across the RAND Corporation to develop multi-disciplinary research and policy analysis dealing with systemic risks to global security. The center draws on RAND's expertise to complement and expand RAND research in many fields, including security, economics, health, and technology.

‘Unseen’ Siblings of Milky Way’s Supermassive Black Hole Sought

Astronomers are beginning to understand what happens when black holes get the urge to roam the Milky Way.

Typically, a supermassive black hole (SMBH) exists at the core of a massive galaxy. But sometimes SMBHs may “wander” throughout their host galaxy, remaining far from the center in regions such as the stellar halo, a nearly spherical area of stars and gas that surrounds the main section of the galaxy.

Credit: Yale University

Astronomers theorize that this phenomenon often occurs as a result of mergers between galaxies in an expanding universe. A smaller galaxy will join with a larger, main galaxy, depositing its own, central SMBH onto a wide orbit within the new host.

In a new study published in the Astrophysical Journal Letters, researchers from Yale, the University of Washington, Institut d’Astrophysique de Paris, and University College London predict that galaxies with a mass similar to the Milky Way should host several supermassive black holes.

Credit: Yale University

The team used a new, state-of-the-art cosmological simulation, Romulus, to predict the dynamics of SMBHs within galaxies with better accuracy than previous simulation programs.

“It is extremely unlikely that any wandering supermassive black hole will come close enough to our Sun to have any impact on our solar system,” said lead author Michael Tremmel, a postdoctoral fellow at the Yale Center for Astronomy and Astrophysics. “We estimate that a close approach of one of these wanderers that is able to affect our solar system should occur every 100 billion years or so, or nearly 10 times the age of the universe.”

Tremmel said that since wandering SMBHs are predicted to exist far from the centers of galaxies and outside of galactic disks, they are unlikely to accrete more gas — making them effectively invisible. “We are currently working to better quantify how we might be able to infer their presence indirectly,” Tremmel said.

Co-authors of the study are Fabio Governato, Marta Volonteri, Andrew Pontzen, and Thomas Quinn.

The study is part of the Blue Waters computing project supported by the National Science Foundation and the University of Illinois.

Contacts and sources:
Jim Shelton
Yale University

Citation: Wandering Supermassive Black Holes in Milky-Way-mass Halos.  Authors: Michael Tremmel, Fabio Governato, Marta Volonteri, Andrew Pontzen, Thomas R. Quinn. The Astrophysical Journal, 2018; 857 (2): L22 DOI: 10.3847/2041-8213/aabc0a

Dark Chocolate Reduces Stress and Inflammation, While Improving Memory, Immunity and Mood

Data represent first human trials examining the impact of dark chocolate consumption on cognition and other brain functions

New research shows there might be health benefits to eating certain types of dark chocolate. Findings from two studies being presented today at the Experimental Biology 2018 annual meeting in San Diego show that consuming dark chocolate that has a high concentration of cacao (minimally 70% cacao, 30% organic cane sugar) has positive effects on stress levels, inflammation, mood, memory and immunity. While it is well known that cacao is a major source of flavonoids, this is the first time the effect has been studied in human subjects to determine how it can support cognitive, endocrine and cardiovascular health.

Lee S. Berk, DrPH, associate dean of research affairs, School of Allied Health Professions and a researcher in psychoneuroimmunology and food science from Loma Linda University, served as principal investigator on both studies.

“For years, we have looked at the influence of dark chocolate on neurological functions from the standpoint of sugar content - the more sugar, the happier we are," Berk said. "This is the first time that we have looked at the impact of large amounts of cacao in doses as small as a regular-sized chocolate bar in humans over short or long periods of time, and are encouraged by the findings. These studies show us that the higher the concentration of cacao, the more positive the impact on cognition, memory, mood, immunity and other beneficial effects.”

This is 72% cacao organic dark chocolate 
Credit: John Loo / Wikimedia Commons

The flavonoids found in cacao are extremely potent antioxidants and anti-inflammatory agents, with known mechanisms beneficial for brain and cardiovascular health. The following results were presented in live poster sessions during the Experimental Biology 2018 meeting.

Dark Chocolate (70% Cacao) Effects Human Gene Expression: Cacao Regulates Cellular Immune Response, Neural Signaling, and Sensory Perception (Monday, April 23, 10:00 a.m. – 12:00 p.m., San Diego Convention Center, Exhibit Halls A - D)

· This pilot feasibility experimental trial examined the impact of 70 percent cacao chocolate consumption on human immune and dendritic cell gene expression, with focus on pro- and anti-inflammatory cytokines. Study findings show cacao consumption up-regulates multiple intracellular signaling pathways involved in T-cell activation, cellular immune response and genes involved in neural signaling and sensory perception - the latter potentially associated with the phenomena of brain hyperplasticity.

Dark Chocolate (70% Organic Cacao) Increases Acute and Chronic EEG Power Spectral Density (μv2) Response of Gamma Frequency (25-40Hz) for Brain Health: Enhancement of Neuroplasticity, Neural Synchrony, Cognitive Processing, Learning, Memory, Recall, and Mindfulness Meditation (Tuesday, April 24, 10:00 a.m. – 12:00 p.m., San Diego Convention Center, Exhibit Halls A - D)

This study assessed the electroencephalography (EEG) response to consuming 48 g of dark chocolate (70% cacao) after an acute period of time (30 minutes) and after a chronic period of time (120 minutes), looking at modulation of brain frequencies from 0-40 Hz, specifically the beneficial gamma frequency band (25-40 Hz). Findings show that this superfood of 70 percent cacao enhances neuroplasticity for behavioral and brain health benefits.
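To illustrate what a gamma-band power measurement involves (this is not the study's actual analysis pipeline), the sketch below computes the power falling in two frequency bands of a synthetic two-second signal using a naive discrete Fourier transform. The signal is invented: a 10 Hz alpha component plus a weaker 30 Hz gamma component.

```python
import math

# Illustrative sketch of EEG band-power estimation. Real analyses use
# windowed FFTs (e.g. Welch's method) over many electrodes and epochs;
# the synthetic signal and band limits here are for demonstration only.

FS = 128   # sampling rate in Hz
N = 256    # two seconds of signal

# Synthetic "EEG": 10 Hz alpha (amplitude 1.0) + 30 Hz gamma (amplitude 0.5).
signal = [math.sin(2 * math.pi * 10 * n / FS)
          + 0.5 * math.sin(2 * math.pi * 30 * n / FS)
          for n in range(N)]

def band_power(x, f_lo, f_hi):
    """Sum of squared DFT magnitudes over frequency bins in [f_lo, f_hi]."""
    total = 0.0
    for k in range(N // 2):
        freq = k * FS / N
        if f_lo <= freq <= f_hi:
            re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
            im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
            total += re * re + im * im
    return total

alpha = band_power(signal, 8, 12)    # alpha band
gamma = band_power(signal, 25, 40)   # gamma band, as in the poster title
print(f"alpha/gamma power ratio: {alpha / gamma:.1f}")  # 4.0
```

Since power scales with amplitude squared, the 2:1 amplitude ratio between the components yields a 4:1 power ratio, which the computation recovers. Comparing such band powers before and after an intervention is the basic shape of an EEG power-spectral-density study.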

Berk said the studies require further investigation, specifically to determine the significance of these effects for immune cells and the brain in larger study populations. Further research is in progress to elaborate on the mechanisms that may be involved in the cause-and-effect brain-behavior relationship with cacao at this high concentration.

Contacts and sources:
Briana Pastorino
Loma Linda University Adventist Health Sciences Center