Friday, October 30, 2020

Models Show How COVID-19 Cuts a Neighborhood Path

New models show how COVID-19 can spread through a city, based on population demographics, simulation techniques and virus case data.
Credit: Mark Stone/U. of Washington

The coronavirus doesn’t spread uniformly through a community.

But in the world of disease modeling, many projections take a high-level view of a geographic area, such as a county or state, and forecast on the general assumption that a virus will take root and spread at a uniform rate until infections peak.

A research team led by UC Irvine and the University of Washington has created a new model of coronavirus diffusion through a community. This approach, published Sept. 10 in the Proceedings of the National Academy of Sciences, factors in network exposure — whom one interacts with — and demographics to simulate at a more detailed level both where and how quickly the coronavirus could spread through Seattle and 18 other major cities.

The team used U.S. Census Bureau tract demographics, simulation techniques and COVID-19 case data from spring 2020 to estimate a range of days for the virus to spread within a given city.
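The tract-level approach can be illustrated with a toy network SIR simulation. Everything below (tract populations, the contact graph, the transmission, recovery and coupling rates) is invented for illustration and is not taken from the study:

```python
# Toy tract-level SIR model. Populations, the contact graph, and all
# rates below are invented for illustration -- not the study's values.
tracts = [
    {"pop": 5000, "neighbors": [1, 2]},   # dense urban core (seeded)
    {"pop": 2000, "neighbors": [0, 2]},
    {"pop": 800,  "neighbors": [0, 1, 3]},
    {"pop": 300,  "neighbors": [2]},      # outlying tract
]

def simulate(tracts, beta=0.3, gamma=0.1, coupling=0.05, days=400):
    """Discrete-time SIR where infection spreads within each tract and,
    more weakly, across edges to neighboring tracts. Returns the day on
    which each tract's infection count peaks."""
    n = len(tracts)
    S = [float(t["pop"]) for t in tracts]
    I = [0.0] * n
    S[0] -= 10.0
    I[0] = 10.0                            # seed the outbreak in tract 0
    peak_I, peak_day = I[:], [0] * n
    for day in range(1, days + 1):
        new_inf = []
        for i, t in enumerate(tracts):
            # Force of infection: local prevalence plus neighbor spillover.
            force = I[i] / t["pop"]
            for j in t["neighbors"]:
                force += coupling * I[j] / tracts[j]["pop"]
            new_inf.append(min(S[i], beta * force * S[i]))
        for i in range(n):
            S[i] -= new_inf[i]
            I[i] += new_inf[i] - gamma * I[i]
            if I[i] > peak_I[i]:
                peak_I[i], peak_day[i] = I[i], day
    return peak_day

print(simulate(tracts))   # outlying tracts peak later than the seeded core
```

Even in this stripped-down sketch, the tract seeded first peaks first, and the weakly connected outlying tract peaks much later, which is the qualitative "bursty", staggered behavior the researchers describe.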

The result: Some neighborhoods peak sooner than others. And in every city, the virus sticks around far longer than some might expect.

“The most basic takeaway from this research is risk. People are at risk longer than they think, the virus will last longer than expected, and the point at which you think you don’t need to be vigilant means that it just hasn’t happened to you yet,” said co-author Zack Almquist, an assistant professor of sociology at the UW.

This census tract map shows estimated ranges of the numbers of days to peak infection.

Credit: Thomas et al., 2020, PNAS

Almquist and the team took on their study with two basic premises: Account for the social and geographic connections within a tract that could affect the course of the virus; and assume no vaccine or other major intervention alters its path. Then, based on actual COVID-19 and demographic data, project a likely scenario for spread over time.

Take Seattle. The study’s map of the city outlines each census tract and provides a color-coded range of days each tract could take to reach peak infection before infections subside to a low level. The overall range is vast, from neighborhoods with the fastest peak — 83 days — to those that take more than 1,000. That’s more than three years, assuming there is no significant intervention to stem the spread.

Left: a map of Seattle, with neighborhoods delineated, showing individual residents as colored dots. Color indicates the timing of infection, from red (earliest) to blue (latest); the scale is depicted in the lower right. Black means no infection (seen most clearly in the zoomed-in view around Capitol Hill). In the zoomed-in map of Capitol Hill in the lower right, dots again represent residents and colors infection timing, with social connections shown as gray and black edges. Neighborhood boundaries are provided by Zillow.

Credit: Zack Almquist/U. of Washington

Denser neighborhoods in Seattle, such as Capitol Hill or the University District, reach peak infection rate earlier. But simulations predict that even nearby neighborhoods won’t reach peak infection until weeks or even years later. These models predict more “burst-like” behavior of the virus’ spread than standard models — with short, sudden episodes of infection across the city, Almquist said.

In the study’s model of Washington, D.C., census tracts also appear to reach peak infection rates at different times.

This map shows the peak infection day range for census tracts in Washington, D.C.

Credit: Thomas et al., 2020, PNAS

Again, denser areas tend to peak sooner. But the network connections can cause “bursty” peak infection days, with some areas seeing early peak infections and others seeing it much later based on the neighborhoods’ relative connections with each other, Almquist said.

Projecting the path of the virus can help estimate the impact on local hospitals. The researchers modeled this impact in several ways, such as tracking the number of cases per hospital over time and the number of days a hospital remains at peak capacity.

The model of projected hospital cases shows how geographic variations in the timing of peak COVID-19 infections could affect hospitals in different areas. Without outside intervention, some hospitals would remain at capacity for years, especially those farthest from major population centers.

These charts show hospital load predictions for two different scenarios: a community with a 20% hospitalization rate (left), and one with a 2% hospitalization rate (right), both indicating the number of days that a hospital stays at full capacity, based on the number of beds projected to be filled.

Credit: Thomas et al., 2020, PNAS

These types of models are important because they provide a more detailed and nuanced prediction of an unknown like the novel coronavirus, said Almquist. Gauging how the virus might spread throughout a city and strain its hospitals can help local officials and health care providers plan for many scenarios. And while this study assumes no major interventions will rein in the virus, it’s reasonable to believe the virus will linger to some degree, even with solutions such as a vaccine, according to Almquist.

“If you project these models for what it means over the country, we might expect to see some areas, such as rural populations, not see infection for months or even years before their peak infection occurs,” Almquist said. “These projections, as well as others, are beginning to suggest that it could take years for the spread of COVID-19 to reach saturation in the population, and even if it does so it is likely to become endemic without a vaccine.”

Co-authors are Loring Thomas, Peng Huang, Fan Yin, Xiaoshuang Iris Luo, John Hipp and Carter Butts, all of UC Irvine. The study was funded by the National Science Foundation and UC Irvine.

Contacts and sources:
Kim Eckart
University of Washington

Publication: Spatial heterogeneity can lead to substantial local variations in COVID-19 timing and severity

Breakthrough Quantum-Dot Transistors Create a Flexible Alternative to Conventional Electronics

Quantum dot logic circuits provide the long-sought building blocks for innovative devices, including printable electronics, flexible displays, and medical diagnostics.

Researchers at Los Alamos National Laboratory and their collaborators from the University of California, Irvine have created fundamental electronic building blocks out of tiny structures known as quantum dots and used them to assemble functional logic circuits. The innovation promises a cheaper and manufacturing-friendly approach to complex electronic devices that can be fabricated in a chemistry laboratory via simple, solution-based techniques, and offer long-sought components for a host of innovative devices.

By depositing gold (Au) and indium (In) contacts, researchers create two crucial types of quantum dot transistors on the same substrate, opening the door to a host of innovative electronic devices.

Credit: Los Alamos National Laboratory

"Potential applications of the new approach to electronic devices based on non-toxic quantum dots include printable circuits, flexible displays, lab-on-a-chip diagnostics, wearable devices, medical testing, smart implants, and biometrics," said Victor Klimov, a physicist specializing in semiconductor nanocrystals at Los Alamos and lead author on a paper announcing the new results in the October 19 issue of Nature Communications.

For decades, microelectronics has relied on extra-high purity silicon processed in a specially created clean-room environment. Recently, silicon-based microelectronics has been challenged by several alternative technologies that allow for fabricating complex electronic circuits outside a clean room, via inexpensive, readily accessible chemical techniques. Colloidal semiconductor nanoparticles made with chemistry methods in much less stringent environments are one such emerging technology. Due to their small size and unique properties directly controlled by quantum mechanics, these particles are dubbed quantum dots.

A colloidal quantum dot consists of a semiconductor core covered with organic molecules. As a result of this hybrid nature, they combine the advantages of well-understood traditional semiconductors with the chemical versatility of molecular systems. These properties are attractive for realizing new types of flexible electronic circuits that could be printed onto virtually any surface including plastic, paper, and even human skin. This capability could benefit numerous areas including consumer electronics, security, digital signage and medical diagnostics.

A key element of electronic circuitry is a transistor that acts as a switch of electrical current activated by applied voltage. Usually transistors come in pairs of n- and p-type devices that control flows of negative and positive electrical charges, respectively. Such pairs of complementary transistors are the cornerstone of the modern CMOS (complementary metal oxide semiconductor) technology, which enables microprocessors, memory chips, image sensors and other electronic devices.
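The complementary pairing can be sketched as a toy logic model: a p-type device conducts when its gate is at logic low, an n-type when its gate is at logic high. This is a schematic illustration of CMOS logic in general, not a model of the quantum dot devices themselves:

```python
# Toy CMOS logic. A p-type transistor conducts when its gate is at
# logic 0; an n-type conducts when its gate is at logic 1.
def p_on(gate):
    return gate == 0

def n_on(gate):
    return gate == 1

def inverter(a):
    # One p-FET to the supply, one n-FET to ground: exactly one of the
    # pair conducts for any input, so no static current flows.
    if p_on(a):
        return 1          # p-FET pulls the output high
    return 0              # n-FET pulls the output low

def nand(a, b):
    if p_on(a) or p_on(b):            # parallel p-FET path to the supply
        return 1
    assert n_on(a) and n_on(b)        # series n-FET path to ground
    return 0

print([inverter(x) for x in (0, 1)])                  # [1, 0]
print([nand(a, b) for a in (0, 1) for b in (0, 1)])   # [1, 1, 1, 0]
```

Because one transistor of each pair is always off, a complementary gate draws essentially no static current — the property that makes CMOS, whether silicon or quantum dot based, so power efficient.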

The first quantum dot transistors were demonstrated almost two decades ago. However, integrating complementary n- and p-type devices within the same quantum dot layer remained a long-standing challenge. In addition, most of the efforts in this area have focused on nanocrystals based on lead and cadmium. These elements are highly toxic heavy metals, which greatly limits practical utility of the demonstrated devices.

The team of Los Alamos researchers and their collaborators from the University of California, Irvine demonstrated that copper indium selenide (CuInSe2) quantum dots, which contain no heavy metals, solve the toxicity problem while allowing straightforward integration of n- and p-type transistors in the same quantum dot layer. As a proof of the approach's practical utility, they created functional circuits that performed logical operations.

The innovation that Klimov and colleagues are presenting in their new paper allows them to define p- and n-type transistors by applying two different types of metal contacts (gold and indium, respectively). They completed the devices by depositing a common quantum dot layer on top of the pre-patterned contacts. "This approach permits straightforward integration of an arbitrary number of complementary p- and n-type transistors into the same quantum dot layer prepared as a continuous, un-patterned film via standard spin-coating," said Klimov.
Funding: This work was supported by the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory under project 20200213DR and the University of California (UC) Office of the President under the UC Laboratory Fees Research Program Collaborative Research and Training Award LFR-17-477148.

Contacts and sources:
James Riordon
Los Alamos National Laboratory

Publication: J. Yun, J. Lim, J. Roh, D. C. J. Neo, M. Law, and V. I. Klimov, Solution-processable integrated CMOS circuits based on colloidal CuInSe2 quantum dots, Nature Communications

Denisovan DNA found in sediments of Baishiya Karst Cave on Tibetan Plateau

One year after the publication of research on the Xiahe mandible, the first Denisovan fossil found outside of Denisova Cave, the same research team has now reported their findings of Denisovan DNA from sediments of the Baishiya Karst Cave (BKC) on the Tibetan Plateau where the Xiahe mandible was found. The study was published in Science on Oct. 29.

Baishiya Karst Cave

Credit: HAN Yuanyuan

The research team was led by Prof. CHEN Fahu from the Institute of Tibetan Plateau Research (ITP) of the Chinese Academy of Sciences (CAS), Prof. ZHANG Dongju from Lanzhou University, Prof. FU Qiaomei from the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) of CAS, Prof. Svante Pääbo from the Max Planck Institute for Evolutionary Anthropology, and Prof. LI Bo from University of Wollongong.

Using cutting-edge paleogenetic technology, the researchers successfully extracted Denisovan mtDNA from Late Pleistocene sediment samples collected during the excavation of BKC. Their results show that this Denisovan group is closely related to the late Denisovans from Denisova Cave, indicating Denisovans occupied the Tibetan Plateau for a rather long time and had probably adapted to the high-altitude environment.

Denisovans were first discovered and identified in 2010 by a research team led by Prof. Svante Pääbo. Almost a decade later, the Xiahe mandible was found on the Tibetan Plateau. As the first Denisovan fossil found outside of Denisova Cave, it confirmed that Denisovans had occupied the roof of the world in the late Middle Pleistocene and were widespread. Although the Xiahe mandible shed great new light on Denisovan studies, without DNA and secure stratigraphic and archaeological context, the information it revealed about Denisovans was still considerably restricted.

Collecting sediment DNA samples (YAO Juanting and CHEN Xiaoshan)

Credit: HAN Yuanyuan

In 2010, a research team from Lanzhou University led by Prof. CHEN Fahu, current director of ITP, began to work in BKC and the Ganjia basin where it is located. Since then, thousands of pieces of stone artifacts and animal bones have been found. Subsequent analysis indicated that the stone artifacts were mainly produced using simple core-flake technology. Among animal species represented, gazelles and foxes dominated in the upper layers, but rhinoceros, wild bos and hyena dominated in the lower layers. Some of the bones had been burnt or have cut-marks, indicating that humans occupied the cave for a rather long time.

To determine when people occupied the cave, researchers used radiocarbon dating of bone fragments recovered from the upper layers and optical dating of sediments collected from all layers in the excavated profile. They measured 14 bone fragments and about 30,000 individual grains of feldspar and quartz minerals from 12 sediment samples to construct a robust chronological framework for the site. Dating results suggest that the deepest excavated deposits contain stone artifacts buried more than ~190 ka (thousand years) ago. Sediments and stone artifacts accumulated over time until at least ~45 ka or even later.

Preparing sediment samples in IVPP cleanroom (FU Qiaomei)
Credit: WANG Xiao

To determine who occupied the cave, researchers used sedimentary DNA technology to analyze 35 sediment samples specially collected during the excavation for DNA analysis. They captured 242 mammalian and human mtDNA samples, thus enriching the record of DNA related to ancient hominins. Interestingly, they detected ancient human fragments that matched mtDNA associated with Denisovans in four different sediment layers deposited ~100 ka and ~60 ka.

More interestingly, they found that the hominin mtDNA from ~60 ka shares the closest genetic relationship with Denisova 3 and Denisova 4, i.e., specimens sampled from Denisova Cave in the Altai Mountains of Russia. In contrast, mtDNA dating to ~100 ka shows a separation from the lineage leading to Denisova 3 and 4.

Using sedimentary DNA from BKC, researchers found the first genetic evidence that Denisovans lived outside of Denisova Cave. This new study supports the idea that Denisovans had a wide geographic distribution not limited to Siberia, and they may have adapted to life at high altitudes and contributed such adaptation to modern humans on the Tibetan Plateau.

However, there are still many questions left. For example, what's the latest age of Denisovans in BKC? Due to the reworked nature of the top three layers, it is difficult to directly associate the mtDNA with their depositional ages, which are as late as 20-30 ka BP. Therefore, it is uncertain whether these late Denisovans had encountered modern humans or not. In addition, just based on mtDNA, we still don't know the exact relationship between the BKC Denisovans, those from Denisova Cave in Siberia and modern Tibetans. Future nuclear DNA from this site may provide a tool to further explore these questions.

Contacts and sources:
FU Qiaomei
Chinese Academy of Sciences (CAS)

Publication: Denisovan DNA in Late Pleistocene sediments from Baishiya Karst Cave on the Tibetan Plateau Science 30 Oct 2020:Vol. 370, Issue 6516, pp. 584-587
DOI: 10.1126/science.abb6320

Tuesday, October 27, 2020

Ancient Lake Contributed to Past San Andreas Fault Ruptures and Could Help Explain Fault’s “Earthquake Drought”

The San Andreas fault, which runs along the western coast of North America and crosses dense population centers like Los Angeles, California, is one of the most-studied faults in North America because of its significant hazard risk. Based on its roughly 150-year recurrence interval for magnitude 7.5 earthquakes and the fact that it’s been over 300 years since that’s happened, the southern San Andreas fault has long been called “overdue” for such an earthquake. For decades, geologists have been wondering why it has been so long since a major rupture has occurred. Now, some geophysicists think the “earthquake drought” could be partially explained by lakes — or a lack thereof.

San Andreas area map by Rebecca Dzombak.

Credit: Rebecca Dzombak.

Today, at the Geological Society of America’s 2020 Annual Meeting, Ph.D. student Ryley Hill will present new work using geophysical modeling to quantify how the presence of a large lake overlying the fault could have affected rupture timing on the southern San Andreas in the past. Hundreds of years ago, a giant lake — Lake Cahuilla — in southern California and northern Mexico covered swathes of the Mexicali, Imperial, and Coachella Valleys, through which the southern San Andreas cuts. The lake served as a key point for multiple Native American populations in the area, as evidenced by archaeological remains of fish traps and campsites. It has been slowly drying out since its most recent high water mark (between 1000 and 1500 CE). If the lake over the San Andreas has dried up and the weight of its water has been removed, could that help explain why the San Andreas fault is in an earthquake drought?

Some researchers have already found a correlation between high water levels on Lake Cahuilla and fault ruptures by studying a 1,000-year record of earthquakes, written in disrupted layers of soils that are exposed in deeply dug trenches in the Coachella Valley. Hill’s research builds on an existing body of modeling but expands to incorporate this unique 1,000-year record and focuses on improving one key factor: the complexity of water pressures in rocks under the lake.

Hill is exploring the effects of a lake on a fault’s rupture timing, known as lake loading. Lake loading on a fault is the cumulative effect of two forces: the weight of the lake’s water and the way in which that water creeps, or diffuses, into the ground under the lake. The weight of the lake’s water pressing down on the ground increases the stress put on the rocks underneath it, weakening them — including any faults that are present. The deeper the lake, the more stress those rocks are under, and the more likely the fault is to slip.

What’s more complicated is how the pressure of water in empty spaces in soils and bedrock (porewater) changes over both time and space. “It’s not that [water] lubricates the fault,” Hill explains. It’s more about one force balancing another, making it easier or harder for the fault to give way. “Imagine your hands stuck together, pressing in. If you try to slip them side by side, they don’t want to slip very easily. But if you imagine water between them, there’s a pressure that pushes [your hands] out — that’s basically reducing the stress [on your hands], and they slip really easily.” Together, these two forces create an overall amount of stress on the fault. Once that stress builds up to a critical threshold, the fault ruptures, and Los Angeles experiences “the Big One.”
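The balance Hill describes can be sketched with the standard Coulomb failure criterion, in which pore pressure reduces the effective normal stress clamping the fault. All the numbers below are illustrative round values, not parameters from Hill's model:

```python
# Coulomb failure with pore pressure: tau_fail = MU * (sigma_n - p) + C.
# All values here are illustrative round numbers, not the study's.
MU = 0.6          # friction coefficient (assumed)
COHESION = 5e6    # cohesion, Pa (assumed)

def shear_strength(normal_stress, pore_pressure):
    """Higher pore pressure -> lower effective normal stress -> a fault
    that slips under less shear stress."""
    return MU * (normal_stress - pore_pressure) + COHESION

# A fault patch at ~5 km depth under dry conditions.
sigma_n = 2700 * 9.8 * 5000        # lithostatic load, ~132 MPa

# An assumed ~90 m water column standing over the fault.
water_col = 1000 * 9.8 * 90        # ~0.9 MPa

# The lake's weight raises the clamping stress only partially (taken here
# as half the water load), but water diffusing downward can eventually
# raise pore pressure by the full column -- a net unclamping of the fault.
dry = shear_strength(sigma_n, 0.0)
wet = shear_strength(sigma_n + 0.5 * water_col, water_col)
print(f"dry: {dry / 1e6:.2f} MPa, with lake: {wet / 1e6:.2f} MPa")
```

The change is tiny compared to the total strength, which matches Hill's point later in the article: lake loading can only nudge the timing of a rupture that tectonic stresses are already driving.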

Where previous modeling work focused on a fully drained state, with all of the lake water having diffused straight down (and at a single time), Hill’s model is more complex, incorporating different levels of porewater pressure in the sediments and rocks underneath the lake and allowing pore pressures to be directly affected by the stresses from the water mass. That, in turn, affects the overall fault behavior.

While the work is ongoing, Hill says they’ve found two key responses. When lake water is at its highest, it increases the stresses enough to push the timeline for the fault reaching that critical stress point just over 25% sooner. “The lake could modulate this [fault slip] rate just a little bit,” Hill says. “That’s what we think maybe tipped the scales to cause the [fault] failure.”

The overall effect of Lake Cahuilla drying up makes it harder for a fault to rupture in his model, pointing to its potential relevance for the recent quiet on the fault. But, Hill stresses, this influence pales in comparison to continent-scale tectonic forces. “As pore pressures decrease, technically, the bedrock gets stronger,” he says. “But how strong it’s getting is all relevant to tectonically driven slip rates. They’re much, much stronger.”

Session no. 36 – T94. Induced and triggered earthquakes in the United States and Canada
Monday, 26 Oct.: 5:30 to 8:00 p.m. EDT
Presentation time: 6:05 to 6:20 p.m. EDT
Paper 148-9: Can the lack of lake loading explain the earthquake drought on the southern San Andreas Fault?
Contact : Ryley Hill, University of California San Diego, California, USA;

Contacts and sources:
Kea Giles
Geological Society of America


Monday, October 26, 2020

New More Complete View of Massive Metallic Asteroid Psyche

A new study authored by Southwest Research Institute planetary scientist Dr. Tracy Becker discusses several new views of the asteroid 16 Psyche, including the first ultraviolet observations. The study, which was published today in The Planetary Science Journal and presented at the virtual meeting of the American Astronomical Society’s Division for Planetary Sciences, paints a clearer view of the asteroid than was previously available.

At about 140 miles in diameter, Psyche is one of the most massive objects in the main asteroid belt orbiting between Mars and Jupiter. Previous observations indicate that Psyche is a dense, largely metallic object, thought to be the leftover core of a protoplanet that failed to fully form.

 The massive asteroid 16 Psyche is the subject of a new study by SwRI scientist Tracy Becker, who observed the object at ultraviolet wavelengths.

Courtesy of Maxar/ASU/P. Rubin/NASA/JPL-Caltech

“We’ve seen meteorites that are mostly metal, but Psyche could be unique in that it might be an asteroid that is totally made of iron and nickel,” Becker said. “Earth has a metal core, a mantle and crust. It’s possible that as a Psyche protoplanet was forming, it was struck by another object in our solar system and lost its mantle and crust.”

Becker observed the asteroid at two specific points in its rotation, viewing both sides of Psyche completely, to learn as much as possible from observations of the surface at ultraviolet (UV) wavelengths.

“We were able to identify for the first time on any asteroid what we think are iron oxide ultraviolet absorption bands,” she said. “This is an indication that oxidation is happening on the asteroid, which could be a result of the solar wind hitting the surface.”

Becker’s study comes as NASA is preparing to launch the spacecraft Psyche, which will travel to the asteroid as part of an effort to understand the origin of planetary cores. The mission is set to launch in 2022. Metal asteroids are relatively rare in the solar system, and scientists believe Psyche could offer a unique opportunity to see inside a planet.

“What makes Psyche and the other asteroids so interesting is that they’re considered to be the building blocks of the solar system,” Becker said. “To understand what really makes up a planet and to potentially see the inside of a planet is fascinating. Once we get to Psyche, we’re really going to understand if that’s the case, even if it doesn’t turn out as we expect. Any time there’s a surprise, it’s always exciting.”

Becker also observed that the asteroid’s surface could be mostly iron, but cautioned that even a small amount of iron can dominate UV observations. In addition, she found that Psyche appeared increasingly reflective at deeper UV wavelengths.

“This is something that we need to study further,” she said. “This could be indicative of it being exposed in space for so long. This type of UV brightening is often attributed to space weathering.”

Read the study "HST Ultraviolet Observations of Asteroid (16) Psyche."

Contacts and sources:
Southwest Research Institute

Publication: HST UV Observations of Asteroid (16) Psyche.
Tracy M. Becker, Nathaniel Cunningham, Philippa Molyneux, Lorenz Roth, Lori M. Feaga, Kurt D. Retherford, Zoe A. Landsman, Emma Peavler, Linda T. Elkins-Tanton, Jan-Erik Wahlund. The Planetary Science Journal, 2020; 1 (3): 53 DOI: 10.3847/PSJ/abb67e

Hot-Button Words Trigger Conservatives and Liberals Differently

How can the partisan divide be bridged when conservatives and liberals consume the same political content, yet interpret it through their own biased lens?

Researchers from UC Berkeley, Stanford University and Johns Hopkins University scanned the brains of more than three dozen politically left- and right-leaning adults as they viewed short videos involving hot-button immigration policies, such as the building of the U.S.-Mexico border wall, and the granting of protections for undocumented immigrants under the federal Deferred Action for Childhood Arrivals (DACA) program.

Their findings, published today in the Proceedings of the National Academy of Sciences journal, show that liberals and conservatives respond differently to the same videos, especially when the content being viewed contains vocabulary that frequently pops up in political campaign messaging.

“Our study suggests that there is a neural basis to partisan biases, and some language especially drives polarization,” said study lead author Yuan Chang Leong, a postdoctoral scholar in cognitive neuroscience at UC Berkeley. “In particular, the greatest differences in neural activity across ideology occurred when people heard messages that highlight threat, morality and emotions.”

Overall, the results offer a never-before-seen glimpse into the partisan brain in the weeks leading up to what is arguably the most consequential U.S. presidential election in modern history. They underscore that multiple factors, including personal experiences and the news media, contribute to what the researchers call “neural polarization.”

“Even when presented with the same exact content, people can respond very differently, which can contribute to continued division,” said study senior author Jamil Zaki, a professor of psychology at Stanford University. “Critically, these differences do not imply that people are hardwired to disagree. Our experiences, and the media we consume, likely contribute to neural polarization.”

Partisan trigger points

Study shows conservative-liberal disparity in brain response to hot-button vocabulary. 

Image by Yuan Chang Leong

Specifically, the study traces the source of neural polarization to a higher-order brain region known as the dorsomedial prefrontal cortex, which is believed to track and make sense of narratives, among other functions.

Another key finding is that the closer the brain activity of a study participant resembles that of the “average liberal” or the “average conservative,” as modeled in the study, the more likely it is that the participant, after watching the videos, will adopt that particular group’s position.

“This finding suggests that the more participants adopt the conservative interpretation of a video, the more likely they are to be persuaded to take the conservative position, and vice versa,” Leong said.

Leong and fellow researchers launched the study with a couple of theories about how people with different ideological biases would differ in the way they process political information. They hypothesized that if sensory information, like sounds and visual imagery, drove polarization, they would observe differences in brain activity in the visual and auditory cortices.

However, if the narrative storytelling aspects of the political information people absorbed in the videos drove them apart ideologically, the researchers expected to see those disparities also revealed in higher-order brain regions, such as the prefrontal cortex. And that theory panned out.

How they conducted the study

To establish that attitudes toward hardline immigration policies predicted both conservative and liberal biases, the researchers first tested questions out on 300 people recruited via the Amazon Mechanical Turk online marketplace who identified, to varying degrees, as liberal, moderate or conservative.

They then recruited 38 young and middle-aged men and women with similar socio-economic backgrounds and education levels who had rated their opposition or support for controversial immigration policies, such as those that led to the U.S.-Mexico border wall, DACA protections for undocumented immigrants, the ban on refugees from majority-Muslim countries coming to the U.S. and the cutting of federal funding to sanctuary cities.

Researchers scanned the study participants’ brains via functional Magnetic Resonance Imaging (fMRI) as they viewed two dozen brief videos representing liberal and conservative positions on the various immigration policies. The videos included news clips, campaign ads and snippets of speeches by prominent politicians.

After each video, the participants rated on a scale of one to five how much they agreed with the general message of the video, the credibility of the information presented and the extent to which the video made them likely to change their position and to support the policy in question.

To calculate group brain responses to the videos, the researchers used a measure known as inter-subject correlation, which can be used to measure how similarly two brains respond to the same message.
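A leave-one-out variant of that measure can be sketched as follows: correlate each subject's response time course with the average of everyone else's. This is one common formulation of inter-subject correlation; the study's exact procedure may differ, and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def intersubject_correlation(timeseries):
    """Leave-one-out ISC: correlate each subject's time course with the
    mean of all other subjects'. Input shape: (subjects, timepoints)."""
    ts = np.asarray(timeseries)
    iscs = []
    for i in range(ts.shape[0]):
        others = np.delete(ts, i, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(ts[i], others)[0, 1])
    return np.array(iscs)

# Toy data: a shared "stimulus-driven" signal plus subject-specific noise.
shared = rng.standard_normal(200)
subjects = np.array([shared + 0.5 * rng.standard_normal(200)
                     for _ in range(10)])
print(intersubject_correlation(subjects).mean())   # high: brains track the video
```

When responses are driven by a common stimulus, ISC is high; when each subject's signal is independent noise, it hovers near zero. Splitting participants by ideology and comparing within-group to between-group ISC is what reveals the "neural polarization" the authors describe.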

Partisans showed differences in their brain responses to political messaging. 

Graphic by Yuan Chang Leong

Their results showed a high shared response across the group in the auditory and visual cortices, regardless of the participants’ political attitudes. However, neural responses diverged along partisan lines in the dorsomedial prefrontal cortex, where semantic information, or word meanings, is processed.

Next, the researchers drilled down further to learn what specific words were driving neural polarization. To do this, they edited the videos into 87 shorter segments and placed the words in the segments into one of 50 categories. Those categories included words related to morality, emotions, threat and religion.

The researchers found that the use of words related to risk and threat, and to morality and emotions, led to greater polarization in the study participants’ neural responses.

Risk, threat, emotional, moral

An example of a risk-related statement was, “I think it’s very dangerous, because what we want is cooperation amongst the cities and the federal government to ensure that we have safety in our communities, and to ensure that our citizens are protected.”

Meanwhile, an example of a moral-emotional statement was, “What are the fundamental ethical principles that are the basis of our society? Do no harm, and be compassionate, and this federal policy violates both of these principles.”

Overall, the research study’s results suggest that political messages that use threat-related and moral-emotional language drive partisans to interpret the same message in opposite ways, contributing to increasing polarization, Leong said.

Going forward, Leong hopes to use neuroimaging to build more precise models of how political content is interpreted and to inform interventions aimed at narrowing the divide between conservatives and liberals.

In addition to Leong and Zaki, co-authors of the study are Robb Willer at Stanford University and Janice Chen at Johns Hopkins University.

STUDY IN PNAS: Conservative and liberal attitudes drive polarized neural responses to political content

Contacts and sources:
Yasmin Anwar
University of California - Berkeley

Publication: Conservative and liberal attitudes drive polarized neural responses to political content.
Yuan Chang Leong, Janice Chen, Robb Willer, Jamil Zaki. Proceedings of the National Academy of Sciences, Oct. 20, 2020; DOI: 10.1073/pnas.2008530117

NASA Discovers Water on Sunlit Surface of Moon

This illustration highlights the Moon’s Clavius Crater with an illustration depicting water trapped in the lunar soil there, along with an image of NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) that found sunlit lunar water.
Credit: NASA/Daniel Rutter

NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) has confirmed, for the first time, water on the sunlit surface of the Moon. This discovery indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places.

SOFIA has detected water molecules (H2O) in Clavius Crater, one of the largest craters visible from Earth, located in the Moon’s southern hemisphere. Previous observations of the Moon’s surface detected some form of hydrogen, but were unable to distinguish between water and its close chemical relative, hydroxyl (OH). Data from this location reveal water in concentrations of 100 to 412 parts per million – roughly equivalent to a 12-ounce bottle of water – trapped in a cubic meter of soil spread across the lunar surface. The results are published in the latest issue of Nature Astronomy.
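A back-of-the-envelope check of the bottle comparison: converting the reported abundance (parts per million by mass) into grams of water per cubic metre of soil. The regolith bulk density below is an assumed round value for illustration, not a figure from the paper.

```python
# Convert water abundance (ppm by mass) to grams of water per cubic metre of soil.
REGOLITH_DENSITY_KG_M3 = 1800   # assumed bulk density of lunar regolith
OUNCE_ML = 29.57                # 1 US fluid ounce in millilitres (~1 g of water per ml)

for ppm in (100, 412):          # concentration range reported by SOFIA
    grams_water = REGOLITH_DENSITY_KG_M3 * 1000 * ppm * 1e-6
    ounces = grams_water / OUNCE_ML
    print(f"{ppm} ppm -> {grams_water:.0f} g (~{ounces:.0f} fl oz) per cubic metre")
```

The middle of that range works out to roughly 360 g, close to the 12-ounce (about 355 ml) bottle quoted above.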

“We had indications that H2O – the familiar water we know – might be present on the sunlit side of the Moon,” said Paul Hertz, director of the Astrophysics Division in the Science Mission Directorate at NASA Headquarters in Washington. “Now we know it is there. This discovery challenges our understanding of the lunar surface and raises intriguing questions about resources relevant for deep space exploration.”

As a comparison, the Sahara desert has 100 times the amount of water that SOFIA detected in the lunar soil. Despite the small amounts, the discovery raises new questions about how water is created and how it persists on the harsh, airless lunar surface.

Water is a precious resource in deep space and a key ingredient of life as we know it. Whether the water SOFIA found is easily accessible for use as a resource remains to be determined. Under NASA’s Artemis program, the agency is eager to learn all it can about the presence of water on the Moon in advance of sending the first woman and next man to the lunar surface in 2024 and establishing a sustainable human presence there by the end of the decade.

SOFIA’s results build on years of previous research examining the presence of water on the Moon. When the Apollo astronauts first returned from the Moon in 1969, it was thought to be completely dry. Orbital and impactor missions over the past 20 years, such as NASA’s Lunar Crater Observation and Sensing Satellite, confirmed ice in permanently shadowed craters around the Moon’s poles. Meanwhile, several spacecraft – including the Cassini mission and Deep Impact comet mission, as well as the Indian Space Research Organization’s Chandrayaan-1 mission – and NASA’s ground-based Infrared Telescope Facility, looked broadly across the lunar surface and found evidence of hydration in sunnier regions. Yet those missions were unable to definitively distinguish the form in which it was present – either H2O or OH.

“Prior to the SOFIA observations, we knew there was some kind of hydration,” said Casey Honniball, the lead author who published the results from her graduate thesis work at the University of Hawaii at Mānoa in Honolulu. “But we didn’t know how much, if any, was actually water molecules – like we drink every day – or something more like drain cleaner.”

Scientists using NASA’s telescope on an airplane, the Stratospheric Observatory for Infrared Astronomy, discovered water on a sunlit surface of the Moon for the first time. SOFIA is a modified Boeing 747SP aircraft that allows astronomers to study the solar system and beyond in ways that are not possible with ground-based telescopes. Molecular water, H2O, was found in Clavius Crater, one of the largest craters visible from Earth in the Moon’s southern hemisphere. This discovery indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places.
Credits: NASA/Ames Research Center

SOFIA offered a new means of looking at the Moon. Flying at altitudes of up to 45,000 feet, this modified Boeing 747SP jetliner with a 106-inch diameter telescope reaches above 99% of the water vapor in Earth’s atmosphere to get a clearer view of the infrared universe. Using its Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST), SOFIA was able to pick up the specific wavelength unique to water molecules, at 6.1 microns, and discovered a relatively surprising concentration in sunny Clavius Crater.

“Without a thick atmosphere, water on the sunlit lunar surface should just be lost to space,” said Honniball, who is now a postdoctoral fellow at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “Yet somehow we’re seeing it. Something is generating the water, and something must be trapping it there.”

Several forces could be at play in the delivery or creation of this water. Micrometeorites raining down on the lunar surface, carrying small amounts of water, could deposit the water on the lunar surface upon impact. Another possibility is there could be a two-step process whereby the Sun’s solar wind delivers hydrogen to the lunar surface and causes a chemical reaction with oxygen-bearing minerals in the soil to create hydroxyl. Meanwhile, radiation from the bombardment of micrometeorites could be transforming that hydroxyl into water.

How the water then gets stored – making it possible to accumulate – also raises some intriguing questions. The water could be trapped into tiny beadlike structures in the soil that form out of the high heat created by micrometeorite impacts. Another possibility is that the water could be hidden between grains of lunar soil and sheltered from the sunlight – potentially making it a bit more accessible than water trapped in beadlike structures.

For a mission designed to look at distant, dim objects such as black holes, star clusters, and galaxies, SOFIA’s spotlight on Earth’s nearest and brightest neighbor was a departure from business as usual. The telescope operators typically use a guide camera to track stars, keeping the telescope locked steadily on its observing target. But the Moon is so close and bright that it fills the guide camera’s entire field of view. With no stars visible, it was unclear if the telescope could reliably track the Moon. To determine this, in August 2018, the operators decided to try a test observation.

“It was, in fact, the first time SOFIA has looked at the Moon, and we weren’t even completely sure if we would get reliable data, but questions about the Moon’s water compelled us to try,” said Naseem Rangwala, SOFIA’s project scientist at NASA's Ames Research Center in California's Silicon Valley. “It’s incredible that this discovery came out of what was essentially a test, and now that we know we can do this, we’re planning more flights to do more observations.”

SOFIA’s follow-up flights will look for water in additional sunlit locations and during different lunar phases to learn more about how the water is produced, stored, and moved across the Moon. The data will add to the work of future Moon missions, such as NASA’s Volatiles Investigating Polar Exploration Rover (VIPER), to create the first water resource maps of the Moon for future human space exploration.

In the same issue of Nature Astronomy, scientists have published a paper using theoretical models and NASA’s Lunar Reconnaissance Orbiter data, pointing out that water could be trapped in small shadows, where temperatures stay below freezing, across more of the Moon than currently expected.

“Water is a valuable resource, for both scientific purposes and for use by our explorers,” said Jacob Bleacher, chief exploration scientist for NASA’s Human Exploration and Operations Mission Directorate. “If we can use the resources at the Moon, then we can carry less water and more equipment to help enable new scientific discoveries.”

SOFIA is a joint project of NASA and the German Aerospace Center. Ames manages the SOFIA program, science, and mission operations in cooperation with the Universities Space Research Association, headquartered in Columbia, Maryland, and the German SOFIA Institute at the University of Stuttgart. The aircraft is maintained and operated by NASA’s Armstrong Flight Research Center Building 703, in Palmdale, California.

Contacts and sources:
Felicia Chou
NASA Headquarters

Alison Hawkes
Ames Research Center

Irregular Appearances of Glacial and Interglacial Climate States: Deviations in the 41,000-Year Cycle

Over the last 2.6 million years, Earth’s climate has alternated between glacial and interglacial states. The transitions between the two states have sometimes followed a regular periodicity and sometimes appeared irregularly. AWI researcher Peter Köhler has now discovered that irregular appearances of interglacials were more frequent than previously thought. His study makes a significant contribution to our understanding of Earth’s fundamental climate changes.

In order to understand human beings’ role in the development of our current climate, we have to look back a long way, since there has always been climate change – albeit over vastly different timescales than the anthropogenic climate change of the past 200 years, which is mainly due to the use of fossil fuels. Without humans, the climate alternated for millions of years between glacial and interglacial states over periods of many thousands of years, mainly because Earth’s axial tilt changes by a few degrees with a periodicity of 41,000 years. This in turn changes the angle at which the sun’s rays strike Earth – and with it the energy that reaches the planet, especially at high latitudes in summer.

However, there is strong evidence that during the course of the last 2.6 million years, interglacials have repeatedly been ‘skipped’. The Northern Hemisphere – particularly North America – remained frozen for long periods, despite the angle of the axial tilt changing to such an extent that more solar energy once again reached Earth during the summer, which should have melted the inland ice masses. This means Earth’s tilt can’t be the sole reason the climate alternates between glacial and interglacial states.

Aerial view of the Beyond EPICA camp
 Photo: Beyond EPICA

In order to solve the puzzle, climate researchers are investigating more closely the points in Earth’s history at which irregularities occurred. Together with colleagues at Utrecht University, physicist Peter Köhler from the Alfred Wegener Institute (AWI) has now made a significant contribution towards providing a clearer picture of the sequence of glacial and interglacial periods over the last 2.6 million years.

Until now, experts thought that, especially over the past 1.0 million years, glacial and interglacial periods deviated from their 41,000-year cycle, and that interglacial periods were skipped, as a result of which some glacial periods lasted for 80,000 or even 120,000 years. “For the period between 2.6 and 1.0 million years ago, it was assumed that the rhythm was 41,000 years,” says Peter Köhler. But as his study, which has now been published in the scientific journal Nature Communications, shows, there were also repeated irregularities during the period between 2.6 and 1.0 million years ago.

Köhler’s study is particularly interesting because he re-evaluated a well-known dataset that researchers have been using for several years – the LR04 climate dataset – yet arrived at completely different conclusions. This dataset consists of a global evaluation of core samples from deep-sea sediments that are millions of years old, and includes measurements from the ancient shells of microscopic, single-celled marine organisms – foraminifera – that were deposited on the ocean floor. Foraminifera incorporate oxygen from the seawater into their calcium shells. But over millennia, the level of specific oxygen isotopes – oxygen atoms that have differing numbers of neutrons and therefore different masses – varies in seawater. 
18O reveals what the world was like in the past

The LR04 dataset contains measurements of the ratio of the heavy oxygen isotope 18O to the lighter 16O. The ratio of 18O/16O stored in the foraminifera’s shells depends on the water temperature. But there is also another effect that leads to relatively large amounts of 18O being found in the foraminifera’s shells in glacial periods: when, during the course of a glacial period, there is heavy snowfall on land, which leads to the formation of thick ice sheets, the sea level falls – in the period studied, by as much as 120 m. Since 18O is heavier than 16O, water molecules containing this heavy isotope evaporate less readily than molecules containing the lighter isotope. As such, comparatively more 18O remains in the ocean and the 18O content of the foraminifera shells increases. 
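Isotope ratios like these are conventionally reported in delta notation relative to a reference standard. The formula below is the standard geochemical definition, not code from the study:

```python
# delta-18O in per mil, relative to VSMOW (Vienna Standard Mean Ocean Water).
# Positive values mean the sample is enriched in the heavy isotope 18O.
R_VSMOW = 2005.2e-6  # 18O/16O ratio of the VSMOW reference standard

def delta_18O(r_sample, r_standard=R_VSMOW):
    return (r_sample / r_standard - 1.0) * 1000.0

# A glacial-age foraminifera shell slightly enriched in 18O:
print(delta_18O(2009.2e-6))  # about +2 per mil
```

Higher delta-18O in foraminifera shells thus records some mix of colder water and larger land-ice volume – exactly the two effects the study tries to disentangle.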

“If you take the LR04 dataset at face value, it means you blur two effects – the influence of ocean temperature and that of land ice, or rather that of sea level change,” says Peter Köhler. “This makes statements regarding the alternation of the glacial periods uncertain.” And there is an additional factor: climate researchers mainly determine the sequence of glacial periods on the basis of glaciation in the Northern Hemisphere. But using 18O values doesn’t allow us to say whether prehistoric glaciation chiefly occurred in the Northern Hemisphere or in Antarctica.

Computer model separates the influencing parameters

In an attempt to solve this problem, Köhler and his team evaluated the LR04 dataset in a completely different way. The data was fed into a computer model that simulates the growth and melting of the large continental ice sheets. What sets it apart: the model is capable of separating the influence of temperature and that of sea level change on the 18O concentration.

Furthermore, it can accurately analyse where and when snow falls and the ice increases – more in the Northern Hemisphere or in Antarctica. “Mathematicians call this separation a deconvolution,” Köhler explains, “which our model is capable of delivering.” The results show that the sequence of glacials and interglacials was irregular even in the period 2.6 to 1.0 million years ago – a finding that could be crucial in the coming years.

As part of the ongoing major EU project ‘BE-OIC (Beyond EPICA Oldest Ice Core)’, researchers are drilling deeper than ever before into the Antarctic ice. With the oldest ice core recovered to date, ‘EPICA’, they have ‘only’ travelled back roughly 800,000 years into the past. The ancient ice provides, among other things, information on how much carbon dioxide Earth’s atmosphere contained at that time. With ‘Beyond EPICA’ they will delve roughly 1.5 million years into the past. By combining the carbon dioxide measurements with Köhler’s analyses, valuable insights can be gained into the relation between these two factors – the fluctuations in the sequence of glacials and the carbon dioxide content of the atmosphere. And this can help us understand the fundamental relationship between greenhouse gases and climate change in Earth’s glacial history.

Contacts and sources:
Ulrike Windhövel
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research.

Publication: Interglacials of the Quaternary defined by northern hemispheric land ice distribution outside of Greenland.
Peter Köhler, Roderik S. W. van de Wal. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-18897-5

Thursday, October 22, 2020

Discovery Enables Adult Skin To Regenerate Like A Newborn’s

A newly identified genetic factor allows adult skin to repair itself like the skin of a newborn baby. The discovery by Washington State University researchers has implications for better skin wound treatment as well as preventing some of the aging process in skin.

In a study, published in the journal eLife on Sept. 29, the researchers identified a factor that acts like a molecular switch in the skin of baby mice that controls the formation of hair follicles as they develop during the first week of life. The switch is mostly turned off after skin forms and remains off in adult tissue. When it was activated in specialized cells in adult mice, their skin was able to heal wounds without scarring. The reformed skin even included fur and could make goose bumps, an ability that is lost in adult human scars.

An image of a regenerating skin wound with hair follicles that can make goose bumps. The green lines are the muscles attached to individual regenerating hairs so that they can stand up.
Credit: Washington State University

“We were able to take the innate ability of young, neonatal skin to regenerate and transfer that ability to old skin,” said Ryan Driskell, an assistant professor in WSU’s School of Molecular Biosciences. “We have shown in principle that this kind of regeneration is possible.”

Mammals are not known for their regenerative abilities compared to other organisms, such as salamanders that can regrow entire limbs and regenerate their skin. The WSU study suggests that the secret to human regeneration might be found by studying our own early development.

“We can still look to other organisms for inspiration, but we can also learn about regeneration by looking at ourselves,” said Driskell. “We do generate new tissue, once in our life, as we are growing.”

Driskell’s team used a new technique called single-cell RNA sequencing to compare genes and cells in developing and adult skin. In developing skin, they found a transcription factor – a protein that binds to DNA and can influence whether genes are turned on or off. The factor the researchers identified, called Lef1, was associated with papillary fibroblasts – developing cells in the papillary dermis, a layer of skin just below the surface that gives skin its tension and youthful appearance.

When the WSU researchers activated the Lef1 factor in specialized compartments of adult mouse skin, it enhanced the skin’s ability to regenerate wounds with reduced scarring, even growing new hair follicles that could make goose bumps.

Driskell first got the idea to look at early stages of mammalian life for the capacity to repair skin after learning of the work of Dr. Michael Longaker of Stanford University. When performing emergency life-saving surgery in utero, Longaker and his colleagues observed that when those babies were born they did not have any scars from the surgery.

A lot of work still needs to be done before this latest discovery in mice can be applied to human skin, Driskell said, but this is a foundational advance. With the support from a new grant from the National Institutes of Health, the WSU research team will continue working to understand how Lef1 and other factors work to repair skin. To help further this research, the Driskell lab has also created an open, searchable web resource for the RNA sequence data for other scientists to access.

Contacts and sources:
Sara Zaske
Ryan Driskell, WSU School of Molecular Biosciences
Washington State University

Smile, Wave: Some Exoplanets May Be Able To See Us, Too

Cornell astronomer Lisa Kaltenegger and Lehigh University’s Joshua Pepper have identified 1,004 main-sequence stars – similar to our sun – that might host Earth-like planets in their own habitable zones, all within about 300 light-years of here, from which observers should be able to detect Earth’s chemical traces of life.
John Munson/Cornell University

Three decades after Cornell astronomer Carl Sagan suggested that Voyager 1 snap Earth’s picture from billions of miles away – resulting in the iconic Pale Blue Dot photograph – two astronomers now offer another unique cosmic perspective:

Some exoplanets – planets from beyond our own solar system – have a direct line of sight to observe Earth’s biological qualities from far, far away.

Lisa Kaltenegger, associate professor of astronomy in the College of Arts and Sciences and director of Cornell’s Carl Sagan Institute, and Joshua Pepper, associate professor of physics at Lehigh University, have identified 1,004 main-sequence stars (similar to our sun) that might host Earth-like planets in their own habitable zones – all within about 300 light-years of Earth – and from which observers should be able to detect Earth’s chemical traces of life.

The paper, “Which Stars Can See Earth as a Transiting Exoplanet?” was published Oct. 21 in the Monthly Notices of the Royal Astronomical Society.

“Let’s reverse the viewpoint to that of other stars and ask from which vantage point other observers could find Earth as a transiting planet,” Kaltenegger said. A transiting planet is one that passes through the observer’s line of sight to another star, such as the sun, revealing clues as to the makeup of the planet’s atmosphere.

“If observers were out there searching, they would be able to see signs of a biosphere in the atmosphere of our Pale Blue Dot,” she said. “And we can even see some of the brightest of these stars in our night sky without binoculars or telescopes.”

The iconic “pale blue dot” photograph of planet Earth, which was taken Feb. 14, 1990 by NASA’s Voyager 1 spacecraft, from a distance of 3.7 billion miles. Now 30 years later, Voyager 1 is nearly 14 billion miles away.

Credit: NASA

Transit observations are a crucial tool for Earth’s astronomers to characterize inhabited extrasolar planets, Kaltenegger said – a tool astronomers will start to use with the launch of NASA’s James Webb Space Telescope next year.

But which star systems could find us? Holding the key to this science is Earth’s ecliptic – the plane of Earth’s orbit around the Sun. Exoplanets with a view of Earth lie along this plane, since those are the vantage points from which observers could see Earth crossing its own sun – effectively giving them a way to discover our planet’s vibrant biosphere.
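The geometry can be sketched with a simplified calculation. Here the transit zone’s half-width is taken to be just the Sun’s angular radius as seen from 1 au; the paper’s actual definition is more refined, so treat the numbers as illustrative:

```python
import math

R_SUN_KM = 695_700        # solar radius
AU_KM = 149_597_870       # Earth-Sun distance (1 au)

# A distant observer sees Earth cross the Sun's disc only from within a
# narrow band around the ecliptic plane.
half_width_deg = math.degrees(math.atan(R_SUN_KM / AU_KM))

def can_see_earth_transit(ecliptic_latitude_deg):
    """True if a star at this ecliptic latitude could watch Earth transit."""
    return abs(ecliptic_latitude_deg) <= half_width_deg

print(f"Transit-zone half-width: ~{half_width_deg:.3f} degrees")
```

The half-width works out to roughly a quarter of a degree, which is why only stars very close to the ecliptic make the list.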

Pepper and Kaltenegger created the list of the thousand closest stars using NASA’s Transiting Exoplanet Survey Satellite (TESS) star catalog.

“Only a very small fraction of exoplanets will just happen to be randomly aligned with our line of sight so we can see them transit,” Pepper said. “But all of the thousand stars we identified in our paper in the solar neighborhood could see our Earth transit the sun, calling their attention.”

“If we found a planet with a vibrant biosphere, we would get curious about whether or not someone is there looking at us too,” Kaltenegger said.

“If we’re looking for intelligent life in the universe that could find us and might want to get in touch,” she said, “we’ve just created the star map of where we should look first.”

This work was funded by the Carl Sagan Institute and the Breakthrough Initiative.

Contacts and sources:
Jeff Tyson
Cornell University


Wednesday, October 21, 2020

Cannabis Reduces OCD Symptoms by Half in the Short-Term

People with obsessive-compulsive disorder, or OCD, report that the severity of their symptoms was reduced by about half within four hours of smoking cannabis, according to a Washington State University study.

Buds of marijuana (cannabis indica inflorescence) in a small green ceramic cup.

Credit: Efiks / Wikimedia Commons

The researchers analyzed data entered into the Strainprint app by people who self-identified as having OCD, a condition characterized by intrusive, persistent thoughts and repetitive behaviors, such as compulsively checking whether a door is locked. After smoking cannabis, users with OCD reported reductions in compulsions of 60%, in intrusions (unwanted thoughts) of 49%, and in anxiety of 52%.

The study, recently published in the Journal of Affective Disorders, also found that higher doses and cannabis with higher concentrations of CBD, or cannabidiol, were associated with larger reductions in compulsions.

"The results overall indicate that cannabis may have some beneficial short-term but not really long-term effects on obsessive-compulsive disorder," said Carrie Cuttler, the study's corresponding author and WSU assistant professor of psychology. "To me, the CBD findings are really promising because it is not intoxicating. This is an area of research that would really benefit from clinical trials looking at changes in compulsions, intrusions and anxiety with pure CBD."

The WSU study drew from data of more than 1,800 cannabis sessions that 87 individuals logged into the Strainprint app over 31 months. The long time period allowed the researchers to assess whether users developed tolerance to cannabis, but those effects were mixed. As people continued to use cannabis, the associated reductions in intrusions became slightly smaller, suggesting they were building tolerance, but the relationship between cannabis and reductions in compulsions and anxiety remained fairly constant.

Traditional treatments for obsessive-compulsive disorder include exposure and response prevention therapy where people's irrational thoughts around their behaviors are directly challenged, and prescribing antidepressants called serotonin reuptake inhibitors to reduce symptoms. While these treatments have positive effects for many patients, they do not cure the disorder nor do they work well for every person with OCD.

"We're trying to build knowledge about the relationship of cannabis use and OCD because it's an area that is really understudied," said Dakota Mauzay, a doctoral student in Cuttler's lab and first author on the paper.

Aside from their own research, the researchers found only one other human study on the topic: a small clinical trial with 12 participants that revealed that there were reductions in OCD symptoms after cannabis use, but these were not much larger than the reductions associated with the placebo.

The WSU researchers noted that one of the limitations of their study was the inability to use a placebo control and an "expectancy effect" may play a role in the results, meaning when people expect to feel better from something they generally do. The data was also from a self-selected sample of cannabis users, and there was variability in the results which means that not everyone experienced the same reductions in symptoms after using cannabis.

However, Cuttler said this analysis of user-provided information via the Strainprint app was especially valuable because it provides a large data set and the participants were using market cannabis in their home environment, as opposed to federally grown cannabis in a lab which may affect their responses. Strainprint's app is intended to help users determine which types of cannabis work the best for them, but the company provided the WSU researchers free access to users' anonymized data for research purposes.

Cuttler said this study points out that further research, particularly clinical trials on the cannabis constituent CBD, may reveal a therapeutic potential for people with OCD.

This is the fourth study Cuttler and her colleagues have conducted examining the effects of cannabis on various mental health conditions using the data provided by the app created by the Canadian company Strainprint. Others include studies on how cannabis impacts PTSD symptoms, reduces headache pain, and affects emotional well-being.

Contacts and sources:
Sara Zaske / Carrie Cuttler
Washington State University

Publication: Acute Effects of Cannabis on Symptoms of Obsessive-Compulsive Disorder. Dakota Mauzay, Emily M. LaFrance, Carrie Cuttler. Journal of Affective Disorders

High Levels of Microplastics Released from Infant Feeding Bottles during Formula Prep

AMBER and Trinity researchers discover infants up to 12 months old ingest on average 1,000,000 microplastic particles every day from baby bottles, based on World Health Organisation guidelines for sterilisation and infant formula preparation.
Credit: AMBER, and Trinity College Dublin.

New research shows that high levels of microplastics (MPs) are released from infant-feeding bottles (IFBs) during formula preparation. The research also indicates a strong relationship between heat and MP release, such that warmer liquids (formula or water used to sterilise bottles) result in far greater release of MPs.

In response, the researchers involved - from AMBER, the SFI Research Centre for Advanced Materials and Bioengineering Research, TrinityHaus and the Schools of Engineering and Chemistry at Trinity College Dublin - have developed a set of recommendations for infant formula preparation when using plastic IFBs that minimise MP release.

AMBER and Trinity researchers identify the global prevalence of microplastics produced by plastic infant feeding bottles and suggest behavioural and technological solutions.
Credit: AMBER, and Trinity College Dublin.

Led by Dr Jing Jing Wang, Professor John Boland and Professor Liwen Xiao at Trinity, the team analysed the potential for release of MPs from polypropylene infant-feeding bottles (PP-IFBs) during formula preparation by following international guidelines. They also estimated the exposure of 12-month-old infants to MPs in 48 countries and regions and have just published their findings in the high-profile journal Nature Food.

Key findings

  • PP-IFBs can release up to 16 million MPs and trillions of smaller nanoplastics per litre. Sterilisation and exposure to high-temperature water significantly increase microplastic release, from 0.6 million to 55 million particles per litre as the temperature increases from 25 to 95 °C
  • Other polypropylene plastic-ware products (kettles, lunchboxes) release similar levels of MPs
  • The team undertook a global survey and estimated the exposure of 12-month-old infants to microplastics in 48 regions. Following current guidelines for infant-feeding bottle sterilisation and feeding formula preparation, the average daily exposure level for infants is in excess of 1 million MPs. Oceania, North America and Europe have the highest levels of potential exposure, at 2,100,000, 2,280,000, and 2,610,000 particles/day, respectively
  • The level of microplastics released from PP-IFBs can be significantly reduced by following modified sterilisation and formula preparation procedure
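To see how a per-litre release figure becomes a daily exposure estimate, consider this illustrative calculation. Both numbers below are assumptions chosen for illustration, not values taken from the study:

```python
# Daily microplastic exposure = particles released per litre x litres consumed.
PARTICLES_PER_LITRE = 1_600_000   # assumed release level for one preparation procedure
DAILY_FORMULA_LITRES = 0.8        # assumed daily formula intake for a 12-month-old

daily_exposure = PARTICLES_PER_LITRE * DAILY_FORMULA_LITRES
print(f"~{daily_exposure:,.0f} microplastic particles per day")
```

With these assumed inputs the estimate lands above one million particles per day, the same order of magnitude as the study’s reported averages.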

Recommended sterilisation and formula preparation procedures

Sterilising infant feeding bottles
  • Sterilise the bottle following WHO recommended guidelines and allow to cool
  • Prepare sterilised water by boiling in a non-plastic kettle/cooker (e.g. glass or stainless steel)
  • Rinse the sterilised bottle using room-temperature sterilised water at least three times

Preparing infant formula
  • Prepare hot water using a non-plastic kettle or cooker
  • Prepare the infant formula in a non-plastic container using water at 70 °C or above. Allow it to cool to room temperature, then transfer the prepared formula into a high-quality plastic infant feeding bottle

Standard Precautions
  • Do not reheat prepared formula in plastic containers and avoid microwave ovens
  • Do not vigorously shake the formula in the bottle at any time
  • Do not use sonication to clean plastic infant feeding bottles

Studying microplastics through a project of scale

There is growing evidence to suggest that micro- and nanoplastics are released into our food and water sources through the chemical and physical degradation of larger plastic items. Some studies have demonstrated the potential transfer of micro- and nanoplastics from oceans to humans via the food chain, but little is known about the direct release of microplastics (MPs) from plastic products through everyday use.

Polypropylene (PP) is one of the most commonly produced plastics in the world for food preparation and storage. It is used to make everyday items such as lunch boxes, kettles and infant feeding bottles (IFBs). Despite its widespread use, the capacity of PP to release microplastics had not been appreciated until now.

Measuring polypropylene microplastic (PP-MP) release from infant feeding bottles (IFBs)

Drawing on international guidelines for infant formula preparation (cleaning, sterilising and mixing techniques), the team developed a protocol to quantify the PP-MPs released from 10 representative infant feeding bottles that together account for 68.8% of the global infant feeding bottle market.

When the role of temperature in the release of PP-MPs was analysed, a clear trend emerged: the higher the temperature of the liquid inside the bottle, the more microplastics were released.

Under a standardised protocol, after sterilisation and exposure to water at 70 °C, the PP-IFBs released up to 16.2 million PP-MPs per litre. When the water temperature was increased to 95 °C, as many as 55 million PP-MPs per litre were released, while exposure to water at 25 °C - well below the temperatures set by international guidelines for sterilisation and formula preparation - still generated 600,000 PP-MPs per litre.
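The reported trend can be sketched numerically. Assuming, purely for illustration, a log-linear relationship between water temperature and release, anchored at the study's two endpoint measurements, a few lines of Python interpolate the release rate at intermediate temperatures:

```python
import math

# Measured PP-MP release (particles per litre) from the study:
# 0.6 million at 25 degrees C and 55 million at 95 degrees C.
T0, R0 = 25.0, 0.6e6
T1, R1 = 95.0, 55e6

# Assumed (for illustration only): release grows exponentially
# with temperature between the two measured endpoints.
k = (math.log10(R1) - math.log10(R0)) / (T1 - T0)

def estimated_release(temp_c):
    """Log-linear interpolation of PP-MP release (particles/l)."""
    return R0 * 10 ** (k * (temp_c - T0))

print(f"{estimated_release(70.0):,.0f} particles/l at 70 C")
```

Notably, the measured 16.2 million particles per litre at 70 °C exceeds what this simple curve predicts (roughly 11 million), so a single exponential understates release near sterilisation temperatures.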

Estimating the exposure of 12-month-old infants to MPs from PP-IFBs

Given the widespread use of PP-IFBs and the quantity of MPs released through normal daily use, the team realised the potential exposure of infants to MPs is a worldwide issue. The team estimated the exposure of 12-month-old infants to MPs in 48 countries and regions by using MP release rates from PP-IFBs, the market share of each PP-IFB, the infant daily milk-intake volume, and breastfeeding rates.

The team found that the overall average daily consumption of PP-MPs by infants per capita was 1,580,000 particles.

Oceania, North America and Europe were found to have the highest levels of potential exposure corresponding to 2,100,000, 2,280,000, and 2,610,000 particles/day, respectively.
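The exposure arithmetic described above is straightforward to reproduce in outline. The sketch below uses the paper's roughly 16 million particles-per-litre release figure, but the daily intake volume and formula-feeding fraction are placeholder assumptions, not the study's per-region inputs:

```python
# Hypothetical illustration of the exposure estimate:
# daily MP exposure = release rate (particles/l)
#                     x daily milk intake (l)
#                     x fraction of intake that is bottle-fed formula.

def daily_mp_exposure(release_per_l, milk_intake_l, formula_fraction):
    """Average MPs ingested per infant per day under the assumed inputs."""
    return release_per_l * milk_intake_l * formula_fraction

# Example with assumed values: 16.2 million particles/l, 0.8 l of
# milk per day, 20% of which is formula fed from a PP bottle.
exposure = daily_mp_exposure(16.2e6, 0.8, 0.2)
print(f"{exposure:,.0f} particles/day")  # 2,592,000 particles/day
```

Even with these rough placeholder inputs, the result lands in the same range as the study's reported averages of roughly 1.6 to 2.6 million particles per day.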

Mitigating exposure

Given the global preference for PP-IFBs, it is important to mitigate the unintended generation of micro- and nanoplastics in infant formula. Based on their findings, the team devised and tested a series of recommendations for the preparation of baby formula that help minimise the production of MPs.

They note though, that given the prevalence of plastic products in daily food storage and food preparation, and the fact that every PP product tested in the study (infant bottles, kettles, lunch boxes and noodle cups) released similar levels of MPs, there is an urgent need for technological solutions.

As Professor John Boland, AMBER, CRANN, and Trinity's School of Chemistry explains: "When we saw these results in the lab we recognised immediately the potential impact they might have. The last thing we want is to unduly alarm parents, particularly when we don't have sufficient information on the potential consequences of microplastics on infant health.

"We are calling on policy makers, however, to reassess the current guidelines for formula preparation when using plastic infant feeding bottles. Crucially, we have found that it is possible to mitigate the risk of ingesting microplastics by changing practices around sterilisation and formula preparation."

Professor Liwen Xiao at TrinityHaus and Trinity's School of Engineering said: "Previous research has predominantly focused on human exposure to micro and nanoplastics via transfer from ocean and soils into the food chain driven by the degradation of plastics in the environment.

"Our study indicates that daily use of plastic products is an important source of microplastic release, meaning that the routes of exposure are much closer to us than previously thought. We need to urgently assess the potential risks of microplastics to human health. Understanding their fate and transport through the body following ingestion is an important focus of future research. Determining the potential consequences of microplastics on our health is critical for the management of microplastic pollution."

Lead authors Dr Dunzhu Li and Dr Yunhong Shi, researchers at CRANN and Trinity's School of Engineering, said: "We have to accept that plastics are pervasive in modern life, and that they release micro- and nanoplastics through everyday use. We don't yet know the risks to human health of these tiny plastic particles, but we can develop behavioural and technological solutions and strategies to mitigate against exposure."

Dr Jing Jing Wang, Microplastics Group at AMBER and CRANN, said: "While this research points to the role of plastic products as a direct source of microplastics, the removal of microplastics from the environment and our water supplies remains a key future challenge.

"Our team will investigate specific mechanisms of micro and nano plastic release during food preparation in a host of different contexts. We want to develop appropriate technologies that will prevent plastics degrading and effective filtration technologies that will remove micro and nanoplastics from our environment for large scale water treatment and local distribution and use."

This work has been undertaken by the Microplastics Group led by Dr Jing Jing Wang at AMBER and CRANN, with internal collaboration from TrinityHaus and Trinity's School of Engineering and School of Chemistry. This research was supported by Enterprise Ireland, Science Foundation Ireland, a School of Engineering Scholarship at Trinity, and the China Scholarship Council.

Contacts and sources:
Rachel Kavanagh
Trinity College Dublin

Publication: Microplastic release from the degradation of polypropylene feeding bottles during infant formula preparation. Dunzhu Li, Yunhong Shi, Luming Yang, Liwen Xiao, Daniel K. Kehoe, Yurii K. Gun’ko, John J. Boland & Jing Jing Wang   Nature Food (2020)

Mouthwashes, Oral Rinses May Inactivate Human Coronaviruses

Certain oral antiseptics and mouthwashes may have the ability to inactivate human coronaviruses, according to a Penn State College of Medicine research study. The results indicate that some of these products might be useful for reducing the viral load, or amount of virus, in the mouth after infection and may help to reduce the spread of SARS-CoV-2, the coronavirus that causes COVID-19.

Craig Meyers, distinguished professor of microbiology and immunology and obstetrics and gynecology, led a group of physicians and scientists who tested several oral and nasopharyngeal rinses in a laboratory setting for their ability to inactivate human coronaviruses, which are similar in structure to SARS-CoV-2. The products evaluated include a 1% solution of baby shampoo, a neti pot, peroxide sore-mouth cleansers, and mouthwashes.

Credit: Claude TRUONG-NGOC / Wikimedia Commons

The researchers found that several of the nasal and oral rinses had a strong ability to neutralize human coronavirus, which suggests that these products may have the potential to reduce the amount of virus spread by people who are COVID-19-positive.

“While we wait for a vaccine to be developed, methods to reduce transmission are needed,” Meyers said. “The products we tested are readily available and often already part of people’s daily routines.”

Meyers and colleagues used a test to replicate the interaction of the virus in the nasal and oral cavities with the rinses and mouthwashes. Nasal and oral cavities are major points of entry and transmission for human coronaviruses. They treated solutions containing a strain of human coronavirus, which served as a readily available and genetically similar alternative for SARS-CoV-2, with the baby shampoo solutions, various peroxide antiseptic rinses and various brands of mouthwash. They allowed the solutions to interact with the virus for 30 seconds, one minute and two minutes, before diluting the solutions to prevent further virus inactivation. According to Meyers, the outer envelopes of the human coronavirus tested and SARS-CoV-2 are genetically similar so the research team hypothesizes that a similar amount of SARS-CoV-2 may be inactivated upon exposure to the solution.

To measure how much virus was inactivated, the researchers placed the diluted solutions in contact with cultured human cells. They counted how many cells remained alive after a few days of exposure to the viral solution and used that number to calculate the amount of human coronavirus that was inactivated as a result of exposure to the mouthwash or oral rinse that was tested. The results were published in the Journal of Medical Virology.

The 1% baby shampoo solution, which is often used by head and neck doctors to rinse the sinuses, inactivated greater than 99.9% of human coronavirus after a two-minute contact time. Several of the mouthwash and gargle products also were effective at inactivating the infectious virus. Many inactivated greater than 99.9% of virus after only 30 seconds of contact time and some inactivated 99.99% of the virus after 30 seconds.
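The ">99.9%" and "99.99%" figures correspond to 3-log and 4-log reductions in infectious virus titre. The conversion below is the standard relationship, not code from the study:

```python
import math

def log10_reduction(initial_titer, surviving_titer):
    """Log10 drop in infectious virus titre after treatment."""
    return math.log10(initial_titer / surviving_titer)

def percent_inactivated(log_red):
    """Fraction of virus inactivated, expressed as a percentage."""
    return (1 - 10 ** (-log_red)) * 100

# A 3-log reduction is the ">99.9%" figure; 4-log is "99.99%".
print(f"{percent_inactivated(3):.2f}%")   # 99.90%
print(f"{percent_inactivated(4):.3f}%")   # 99.990%
```

Reporting results on a log scale is conventional in virology because serial-dilution assays like the one described above measure titre in powers of ten.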

According to Meyers, the results with mouthwashes are promising and add to the findings of a study showing that certain types of oral rinses could inactivate SARS-CoV-2 in similar experimental conditions. In addition to evaluating the solutions at longer contact times, they studied over-the-counter products and nasal rinses that were not evaluated in the other study. Meyers said the next step to expand upon these results is to design and conduct clinical trials that evaluate whether products like mouthwashes can effectively reduce viral load in COVID-19-positive patients.

“People who test positive for COVID-19 and return home to quarantine may possibly transmit the virus to those they live with,” said Meyers, a researcher at Penn State Cancer Institute. “Certain professions including dentists and other health care workers are at a constant risk of exposure. Clinical trials are needed to determine if these products can reduce the amount of virus COVID-positive patients or those with high-risk occupations may spread while talking, coughing or sneezing. Even if the use of these solutions could reduce transmission by 50%, it would have a major impact.”

Future studies may include a continued investigation of products that inactivate human coronaviruses and of the specific ingredients in the solutions tested that inactivate the virus.

Janice Milici, Samina Alam, David Quillen, David Goldenberg and Rena Kass of Penn State College of Medicine and Richard Robison of Brigham Young University also contributed to this research.

The research was supported by funds from Penn State Huck Institutes for the Life Sciences. The researchers declare no conflict of interest.

Contacts and sources:
Barbara Schindo
Penn State

Publication: Lowering the transmission and spread of human coronavirus. Craig Meyers PhD, Richard Robison PhD, Janice Milici BS, Samina Alam PhD, David Quillen MD, David Goldenberg MD FACS, Rena Kass MD. Journal of Medical Virology (2020)

Monday, October 19, 2020

Ground-Breaking Discovery Finally Proves Rain Really Can Move Mountains

A pioneering technique which captures precisely how mountains bend to the will of raindrops has helped to solve a long-standing scientific enigma.

The dramatic effect rainfall has on the evolution of mountainous landscapes is widely debated among geologists, but new research led by the University of Bristol and published today in Science Advances, clearly calculates its impact, furthering our understanding of how peaks and valleys have developed over millions of years.

Its findings, which focused on the mightiest of mountain ranges – the Himalaya – also pave the way for forecasting the possible impact of climate change on landscapes and, in turn, human life.

Lead author Dr Byron Adams, Royal Society Dorothy Hodgkin Fellow at the university’s Cabot Institute for the Environment, said: “It may seem intuitive that more rain can shape mountains by making rivers cut down into rocks faster. But scientists have also believed rain can erode a landscape quickly enough to essentially ‘suck’ the rocks out of the Earth, effectively pulling mountains up very quickly.

“Both these theories have been debated for decades because the measurements required to prove them are so painstakingly complicated. That’s what makes this discovery such an exciting breakthrough, as it strongly supports the notion that atmospheric and solid earth processes are intimately connected.”

While there is no shortage of scientific models aiming to explain how the Earth works, the greater challenge can be making enough good observations to test which are most accurate.

The study was based in the central and eastern Himalaya of Bhutan and Nepal, because this region of the world has become one of the most sampled landscapes for erosion rate studies. Dr Adams, together with collaborators from Arizona State University (ASU) and Louisiana State University, used cosmic clocks within sand grains to measure the speed at which rivers erode the rocks beneath them.

“When a cosmic particle from outer space reaches Earth, it is likely to hit sand grains on hillslopes as they are transported toward rivers. When this happens, some atoms within each grain of sand can transform into a rare element. By counting how many atoms of this element are present in a bag of sand, we can calculate how long the sand has been there, and therefore how quickly the landscape has been eroding,” Dr Adams said.
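The rare element in question is a cosmogenic nuclide such as beryllium-10, and the standard steady-state relation, shown here as a textbook approximation rather than the paper's full model, connects its measured concentration to an erosion rate:

```python
# Steady-state cosmogenic-nuclide erosion-rate relation:
#   N = P / (lam + rho * eps / LAM_ATTEN)
# so a measured concentration N yields the erosion rate eps.
# All numerical values below are typical/assumed, not the study's.

LAM_ATTEN = 160.0     # attenuation length, g/cm^2 (typical value)
RHO = 2.7             # rock density, g/cm^3 (assumed)
DECAY_10BE = 4.99e-7  # 10Be decay constant, 1/yr

def erosion_rate_cm_per_yr(production, concentration):
    """Erosion rate from surface production rate (atoms/g/yr)
    and measured nuclide concentration (atoms/g)."""
    return (production / concentration - DECAY_10BE) * LAM_ATTEN / RHO

# Example with assumed values: P = 4 atoms/g/yr, N = 1e5 atoms/g.
eps = erosion_rate_cm_per_yr(4.0, 1e5)
print(f"{eps * 10000:.1f} mm/kyr")  # cm/yr -> mm per thousand years
```

The intuition: the faster the surface erodes, the less time each sand grain spends near the surface accumulating nuclides, so a low concentration implies a high erosion rate.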

“Once we have erosion rates from all over the mountain range, we can compare them with variations in river steepness and rainfall. However, such a comparison is hugely problematic because each data point is very difficult to produce and the statistical interpretation of all the data together is complicated.”

First and corresponding author Dr Byron Adams in the steep terrain of the Greater Himalaya, central Bhutan.

Credit: Professor Kelin Whipple

Dr Adams overcame this challenge by combining regression techniques with numerical models of how rivers erode.

“We tested a wide variety of numerical models to reproduce the observed erosion rate pattern across Bhutan and Nepal. Ultimately only one model was able to accurately predict the measured erosion rates,” Dr Adams said. “This model allows us for the first time to quantify how rainfall affects erosion rates in rugged terrain.”
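The models in question belong to the stream-power family of river-incision laws, in which erosion depends on upstream drainage area (a proxy for discharge) and channel slope. A minimal sketch, with all parameter values assumed for illustration:

```python
# Stream-power river-incision law: E = K * A**m * S**n, where A is
# upstream drainage area, S is channel slope, and the coefficient K
# bundles erodibility and, in this framing, rainfall. The exponents
# m and n and all values below are illustrative assumptions.

def stream_power_erosion(K, area_m2, slope, m=0.5, n=1.0):
    """Detachment-limited erosion rate (units set by K)."""
    return K * area_m2 ** m * slope ** n

# Wetter climate represented as a larger K: for the same channel
# steepness, predicted erosion scales directly with K.
base = stream_power_erosion(K=1e-6, area_m2=1e8, slope=0.05)
wet = stream_power_erosion(K=2e-6, area_m2=1e8, slope=0.05)
print(wet / base)  # -> 2.0
```

This is why accounting for rainfall matters when reading tectonics from topography: the same measured erosion rate can come from a steep, dry channel or a gentler, wetter one.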

Research collaborator Professor Kelin Whipple, Professor of Geology at ASU, said: “Our findings show how critical it is to account for rainfall when assessing patterns of tectonic activity using topography, and also provide an essential step forward in addressing how much the slip rate on tectonic faults may be controlled by climate-driven erosion at the surface.”

The study findings also carry important implications for land use management, infrastructure maintenance, and hazards in the Himalaya.

Looking upstream within a tributary of the Wang Chu, southwestern Bhutan.
Credit: Dr Byron Adams

In the Himalaya, there is the ever-present risk that high erosion rates can drastically increase sedimentation behind dams, jeopardising critical hydropower projects. The findings also suggest greater rainfall can undermine hillslopes, increasing the risk of debris flows or landslides, some of which may be large enough to dam the river creating a new hazard – lake outburst floods.

Dr Adams added: “Our data and analysis provides an effective tool for estimating patterns of erosion in mountainous landscapes such as the Himalaya, and thus, can provide invaluable insight into the hazards that influence the hundreds of millions of people who live within and at the foot of these mountains.”

The Ta Dzong overlooking the Paro Valley, western Bhutan.
Credit: Dr Byron Adams

The research was funded by the Royal Society, the UK Natural Environmental Research Council (NERC), and the National Science Foundation (NSF) of the US.

Building on this research, Dr Adams is currently exploring how landscapes respond after large volcanic eruptions.

“This new frontier of landscape evolution modelling is also shedding new light on volcanic processes. With our cutting-edge techniques to measure erosion rates and rock properties, we will be able to better understand how rivers and volcanoes have influenced each other in the past,” Dr Adams said. “This will help us to more accurately anticipate what is likely to happen after future volcanic eruptions and how to manage the consequences for communities living nearby.”

Paper: ‘Climate controls on erosion in tectonically active landscapes’ by Byron Adams et al. in Science Advances.

Contacts and sources:
The Cabot Institute for the Environment
University of Bristol