Tuesday, August 20, 2019

The Meat Allergy: UVA IDs Biological Changes Triggered by Tick Bites

A University of Virginia School of Medicine scientist has identified key immunological changes in people who abruptly develop an allergic reaction to mammalian meat, such as beef. His work also provides an important framework for other scientists to probe this strange, recently discovered allergy caused by tick bites.

The findings by UVA’s Loren Erickson, PhD, and his team offer important insights into why otherwise healthy people can enjoy meat all their lives until a hot slab of ground beef or a festive Fourth of July hot dog suddenly becomes potentially life-threatening. Symptoms of the meat allergy can range from mild hives to nausea and vomiting to severe anaphylaxis, which can result in death.

“We don’t know what it is about the tick bite that causes the meat allergy. And, in particular, we haven’t really understood the source of immune cells that produce the antibodies that cause the allergic reactions,” Erickson explained. “There’s no way to prevent or cure this food allergy, so we need to first understand the underlying mechanism that triggers the allergy so we can devise a new therapy.”

Meat Allergy Caused by Tick Bites

People who develop the allergy in response to the bite of the Lone Star tick often have to give up eating mammalian meat, including beef and pork, entirely. Even food that does not appear to contain meat can contain meat-based ingredients that trigger the allergy. That means people living with the meat allergy must be hyper-vigilant. (For one person’s experience with the meat allergy, visit UVA’s Making of Medicine blog.)

The allergy was first discovered by UVA’s Thomas Platts-Mills, MD, a renowned allergist who determined that people were suffering reactions to a sugar called alpha-gal found in mammalian meat. Exactly what is happening inside the body, though, has remained poorly understood. Erickson’s work, along with that of others at UVA, is changing that.

Erickson’s team in UVA’s Department of Microbiology, Immunology and Cancer Biology has found that people with the meat allergy have a distinctive form of immune cells known as B cells, and they have them in great numbers. These white blood cells produce the antibodies that trigger the release of the chemicals that cause the allergic reaction to meat.

In addition, Erickson, a member of UVA’s Carter Immunology Center, has developed a mouse model of the meat allergy so that scientists can study the mysterious allergy more effectively.

“This is the first clinically relevant model that I know of, so now we can go and ask a lot of these important questions,” he said. “We can actually use this model to identify underlying causes of the meat allergy that may inform human studies. So it’s sort of a back-and-forth of experiments that you can do in animal models that you can’t do in humans. But you can identify potential mechanisms that could lead to new therapeutic strategies so that we can go back to human subjects and test some of those hypotheses.”

Findings Published

Erickson describes the new meat allergy model in an article in the Journal of Immunology. The research team consisted of Jessica L. Chandrasekhar, Kelly M. Cox, William M. Loo, Hui Qiao, Kenneth S. Tung and Erickson. Tung and Erickson are both part of UVA’s Carter Immunology Center.

Contacts and sources:
Joshua Barney
University of Virginia Health System

Stone Age Boat Building Site Discovered Underwater

The Maritime Archaeological Trust has discovered a new 8,000-year-old structure next to what is believed to be the oldest boat building site in the world on the Isle of Wight.

Director of the Maritime Archaeological Trust, Garry Momber, said “This new discovery is particularly important as the wooden platform is part of a site that doubles the amount of worked wood found in the UK from a period that lasted 5,500 years.”

Garry Momber tagging structure
Credit: National Oceanography Centre, UK

The site lies east of Yarmouth, and the new platform is the most intact wooden Middle Stone Age structure ever found in the UK. The site is now 11 metres below sea level; during the period of human activity there, it was dry land with lush vegetation. Importantly, this was before the North Sea was fully formed, when the Isle of Wight was still connected to mainland Europe.

The site was first discovered in 2005 and contains an arrangement of trimmed timbers that could be platforms, walkways or collapsed structures. However, these were difficult to interpret until the Maritime Archaeological Trust used state-of-the-art photogrammetry techniques to record the remains. During the late spring, the new structure was spotted eroding from within the drowned forest. The first task was to create a 3D digital model of the landscape so it could be experienced by non-divers. The structure was then excavated by the Maritime Archaeological Trust during the summer, revealing a cohesive platform consisting of split timbers, several layers thick, resting on horizontally laid round-wood foundations.

Garry continued “The site contains a wealth of evidence for technological skills that were not thought to have been developed for a further couple of thousand years, such as advanced wood working. This site shows the value of marine archaeology for understanding the development of civilisation.

“Yet, being underwater, there are no regulations that can protect it. Therefore, it is down to our charity, with the help of our donors, to save it before it is lost forever.”

The Maritime Archaeological Trust is working with the National Oceanography Centre (NOC) to record, study, reconstruct and display the collection of timbers. Many of the wooden artefacts are being stored in the British Ocean Sediment Core Research Facility (BOSCORF), operated by the National Oceanography Centre.

As with sediment cores, ancient wood will degrade more quickly if it is not kept in a dark, wet and cold setting. While the timber is kept cold, dark and wet, the aim is to remove salt from within its wood cells, allowing it to be analysed and recorded. This is important because archaeological information, such as cut marks or engravings, is most often found on the surface of the wood and is lost quickly when timber degrades. Once the timbers have been recorded and desalinated, the wood can be conserved for display.

Dr Suzanne Maclachlan, the curator at BOSCORF, said “It has been really exciting for us to assist the Trust’s work with such unique and historically important artefacts. This is a great example of how the BOSCORF repository is able to support the delivery of a wide range of marine science.”

When diving on the submerged landscape, Dan Snow, the history broadcaster and host of History Hit, one of the world's biggest history podcasts, commented that he was both awestruck by the incredible remains and shocked by the rate of erosion.

This material, coupled with advanced woodworking skills and finely crafted tools, suggests a European Neolithic (New Stone Age) influence. The problem is that it is all being lost. As the Solent evolves, sections of the ancient land surface are being eroded by up to half a metre per year and the archaeological evidence is disappearing.

Research in 2019 was funded by the Scorpion Trust, the Butley Research Group, the Edward Fort Foundation and the Maritime Archaeological Trust. Work was conducted with the help of volunteers and many individuals who gave their time, and often money, to ensure the material was recovered successfully.

Contacts and sources:
National Oceanography Centre, UK

New Insights into What May Go Awry in Brains of People with Alzheimer’s

More than three decades of research on Alzheimer’s disease have not produced any major treatment advances for those with the disorder, according to a UCLA expert who has studied the biochemistry of the brain and Alzheimer’s for nearly 30 years. “Nothing has worked,” said Steven Clarke, a distinguished professor of chemistry and biochemistry. “We’re ready for new ideas.” Now, Clarke and UCLA colleagues have reported new insights that may lead to progress in fighting the devastating disease.

Scientists have known for years that amyloid fibrils — harmful, elongated, water-tight rope-like structures — form in the brains of people with Alzheimer’s, and likely hold important clues to the disease. UCLA Professor David Eisenberg and an international team of chemists and molecular biologists reported in the journal Nature in 2005 that amyloid fibrils contain proteins that interlock like the teeth of a zipper. The researchers also reported their hypothesis that this dry molecular zipper is in the fibrils that form in Alzheimer’s disease, as well as in Parkinson’s disease and two dozen other degenerative diseases. Their hypothesis has been supported by recent studies.

Alzheimer’s disease, the most common cause of dementia among older adults, is an irreversible, progressive brain disorder that kills brain cells, gradually destroys memory and eventually affects thinking, behavior and the ability to carry out the daily tasks of life. More than 5.5 million Americans, most of whom are over 65, are thought to have dementia caused by Alzheimer’s.

The UCLA team reports in the journal Nature Communications that beta amyloid, a small protein (peptide) that plays an important role in Alzheimer’s, has a normal version that may be less harmful than previously thought and an age-damaged version that is more harmful.

Rebeccah Warmack, who was a UCLA graduate student at the time of the study and is its lead author, discovered that a specific version of age-modified beta amyloid contains a second molecular zipper not previously known to exist. Proteins live in water, but all the water gets pushed out as the fibril is sealed and zipped up. Warmack worked closely with UCLA graduate students David Boyer, Chih-Te Zee and Logan Richards; as well as senior research scientists Michael Sawaya and Duilio Cascio.

What goes wrong with beta amyloid, whose most common forms have 40 or 42 amino acids that are connected like a string of beads on a necklace?

The researchers report that with age, the 23rd amino acid can spontaneously form a kink, similar to one in a garden hose. This kinked form is known as isoAsp23. The normal version does not create the stronger second molecular zipper, but the kinked form does.

“Now we know a second water-free zipper can form, and is extremely difficult to pry apart,” Warmack said. “We don’t know how to break the zipper.”

The normal form of beta amyloid has six water molecules that prevent the formation of a tight zipper, but the kink ejects these water molecules, allowing the zipper to form.

When one of its amino acids forms a kink, beta amyloid creates a harmful molecular zipper, shown here in green.

Rebeccah Warmack/UCLA

“Rebeccah has shown this kink leads to faster growth of the fibrils that have been linked to Alzheimer’s disease,” said Clarke, who has conducted research on biochemistry of the brain and Alzheimer’s disease since 1990. “This second molecular zipper is double trouble. Once it’s zipped, it’s zipped, and once the formation of fibrils starts, it looks like you can’t stop it. The kinked form initiates a dangerous cascade of events that we believe can result in Alzheimer’s disease.”

Why does beta amyloid’s 23rd amino acid sometimes form this dangerous kink?

Clarke thinks the kinks in this amino acid form throughout our lives, but we have a protein repair enzyme that fixes them.

“As we get older, maybe the repair enzyme misses the repair once or twice,” he said. “The repair enzyme might be 99.9% effective, but over 60 years or more, the kinks eventually build up. If not repaired or if degraded in time, the kink can spread to virtually every neuron and can do tremendous damage.”
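Clarke’s 99.9% figure makes the intuition easy to check with arithmetic. The sketch below takes the quoted per-repair success rate at face value; the number of repair opportunities per year is a made-up figure for illustration only, since the article gives no such number:

```python
# Back-of-envelope sketch of why a highly effective repair enzyme still
# "misses the repair once or twice" over a lifetime. The 99.9% success
# rate is quoted in the article; events_per_year is purely illustrative.
p_success = 0.999          # assumed probability a single kink is repaired
events_per_year = 100      # hypothetical repair opportunities per year
years = 60

n_events = events_per_year * years
p_perfect_record = p_success ** n_events       # every repair succeeds
expected_misses = (1 - p_success) * n_events   # average number of misses

print(f"P(no missed repairs in {years} years) = {p_perfect_record:.4f}")
print(f"Expected missed repairs: {expected_misses:.1f}")
```

Even at 99.9% effectiveness, the chance of a perfect record across 6,000 hypothetical repair events is under 0.3%, which is the sense in which kinks "eventually build up."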

“The good news is that knowing what the problem is, we can think about ways to solve it,” he added. “This kinked amino acid is where we want to look.”

The research offers clues to pharmaceutical companies, which could develop ways to prevent formation of the kink, get the repair enzyme to work better, or design a cap that would prevent fibrils from growing.

Clarke said beta amyloid and a much larger protein tau — with more than 750 amino acids — make a devastating one-two punch that forms fibrils and spreads them to many neurons throughout the brain. All humans have both beta amyloid and tau. Researchers say it appears that beta amyloid produces fibrils that can lead to tau aggregates, which can spread the toxicity to other brain cells. However, exactly how beta amyloid and tau work together to kill neurons is not yet known.

 Research by UCLA professor Steven Clarke and former graduate student Rebeccah Warmack, as well as UCLA colleagues, reveals new information about the brain’s biochemistry.
Steven Clarke, Rebeccah Warmack
Credit: Reed Hutchinson/UCLA

In this study, Warmack produced crystals of both the normal and kinked forms of a 15-amino-acid segment of beta amyloid. She used a modified type of cryo-electron microscopy to analyze the crystals. Cryo-electron microscopy, whose development won its creators the 2017 Nobel Prize in chemistry, enables scientists to see large biomolecules in extraordinary detail. Professor Tamir Gonen pioneered the modified technique, called microcrystal electron diffraction, which enables scientists to study biomolecules of any size.

Eisenberg is UCLA’s Paul D. Boyer Professor of Molecular Biology and a Howard Hughes Medical Institute investigator. Other researchers are co-author Gonen, a professor of biological chemistry and physiology at the UCLA David Geffen School of Medicine and a Howard Hughes Medical Institute investigator; and Jose Rodriguez, assistant professor of chemistry and biochemistry who holds the Howard Reiss Career Development Chair.

The research was funded by the National Science Foundation, National Institutes of Health, Howard Hughes Medical Institute, and the UCLA Longevity Center’s Elizabeth and Thomas Plott Chair in Gerontology, which Clarke held for five years.

Contacts and sources:
Stuart Wolpert
University of California - Los Angeles

Citation: Structure of amyloid-β (20-34) with Alzheimer’s-associated isomerization at Asp23 reveals a distinct protofilament interface.
Rebeccah A. Warmack, David R. Boyer, Chih-Te Zee, Logan S. Richards, Michael R. Sawaya, Duilio Cascio, Tamir Gonen, David S. Eisenberg, Steven G. Clarke. Nature Communications, 2019; 10 (1). DOI: 10.1038/s41467-019-11183-z

Stardust Discovered in Antarctic Snow

The rare isotope iron-60 is created in massive stellar explosions. Only a very small amount of this isotope reaches Earth from distant stars. Now, a research team with significant involvement from the Technical University of Munich (TUM) has discovered iron-60 in Antarctic snow for the first time. The scientists suggest that the iron isotope comes from the interstellar neighborhood.

The quantity of cosmic dust that trickles down to Earth each year ranges between several thousand and ten thousand tons. Most of the tiny particles come from asteroids or comets within our solar system. However, a small percentage comes from distant stars. There are no natural terrestrial sources for the iron-60 isotope contained therein; it originates exclusively as a result of supernova explosions or through the reactions of cosmic radiation with cosmic dust.

The Kohnen Station is a container settlement in the Antarctic, from whose vicinity the snow samples in which iron-60 was found originate.
Image: Martin Leonhardt / Alfred-Wegener-Institut (AWI)

Antarctic snow travels around the world

The first evidence of the occurrence of iron-60 on Earth was discovered in deep-sea deposits by a TUM research team 20 years ago. Among the scientists on the team was Dr. Gunther Korschinek, who hypothesized that traces of stellar explosions could also be found in the pure, untouched Antarctic snow. In order to verify this assumption, Dr. Sepp Kipfstuhl from the Alfred Wegener Institute collected 500 kg of snow at the Kohnen Station, a container settlement in the Antarctic, and had it transported to Munich for analysis. There, a TUM team melted the snow and separated the meltwater from the solid components. These were processed at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) using various chemical methods, so that the iron needed for the subsequent analysis was present in the milligram range, and the samples could be returned to Munich.

Korschinek and Dominik Koll from the research area Nuclear, Particle and Astrophysics at TUM found five iron-60 atoms in the samples using the accelerator laboratory in Garching near Munich. “Our analyses allowed us to rule out cosmic radiation, nuclear weapons tests or reactor accidents as sources of the iron-60,” states Koll. “As there are no natural sources for this radioactive isotope on Earth, we knew that the iron-60 must have come from a supernova.”

Stardust comes from the interstellar neighborhood

The research team was able to make a relatively precise determination as to when the iron-60 was deposited on Earth: the snow layer that was analyzed was not older than 20 years. Moreover, the iron isotope that was discovered did not seem to come from particularly distant stellar explosions, as the iron-60 dust would have dissipated too much throughout the universe if this had been the case. Based on the half-life of iron-60, any atoms originating from the formation of the Earth would have completely decayed by now. Koll therefore assumes that the iron-60 in the Antarctic snow originates from the interstellar neighborhood, for example from an accumulation of gas clouds in which our solar system is currently located.
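The half-life argument can be checked with a quick calculation. Assuming a half-life of about 2.6 million years for iron-60 and an Earth age of about 4.5 billion years (commonly cited values; the article quotes neither number), any primordial atoms are long gone:

```python
import math

half_life_yr = 2.6e6   # assumed iron-60 half-life (~2.6 million years)
earth_age_yr = 4.5e9   # approximate age of the Earth

n_half_lives = earth_age_yr / half_life_yr
# Remaining fraction = (1/2)^n; work in log10 to avoid floating-point underflow.
log10_remaining = n_half_lives * math.log10(0.5)

print(f"{n_half_lives:.0f} half-lives since Earth formed")
print(f"Surviving fraction ~ 10^{log10_remaining:.0f}")
```

A surviving fraction on the order of 10^-521 is zero for every practical purpose, so any iron-60 detected today must have been delivered recently.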

"Our solar system entered one of these clouds approximately 40,000 years ago," says Korschinek, "and will exit it in a few thousand years. If the gas cloud hypothesis is correct, then material from ice cores older than 40,000 years would not contain interstellar iron-60,” adds Koll. "This would enable us to verify the transition of the solar system into the gas cloud – that would be a groundbreaking discovery for researchers working on the environment of the solar system.

Contacts and sources:
Technical University of Munich (TUM)

Citation: Interstellar Fe60 in Antarctica
Dominik Koll, Gunther Korschinek, Thomas Faestermann, J. M. Gómez-Guzmán, Sepp Kipfstuhl, Silke Merchel, Jan M. Welch. Physical Review Letters, 2019; 123 (7). DOI: 10.1103/PhysRevLett.123.072701

Sunday, August 18, 2019

Global Urban Water Scarcity Endures as a ‘Daily Reality’

Urban water provision is a social good, but one that will become increasingly difficult for cities and water utilities to provide due to climate change and population growth. More than 40% of residents in 15 cities in the “global south” – developing nations in Sub-Saharan Africa, South Asia and Latin America – still lack quality, affordable water that can be piped into dwellings, according to a report released by the World Resources Institute’s Ross Center for Sustainable Cities.

“Cities need to rethink how they view equitable access to water,” said report co-author Victoria Beard, professor of city and regional planning at Cornell University and a fellow at the World Resources Institute.

Baseline water stress: ratio of total annual water withdrawals to total available annual renewable supply, accounting for upstream consumptive use.
Credit: World Resources Institute (WRI)

“In many developing countries … urban residents lack access to safe, reliable and affordable water on a daily basis,” said Beard, also a fellow at Cornell’s Atkinson Center for a Sustainable Future. “These are the same countries that have made huge strides in guaranteeing universal access to primary education. Equitable access to water requires similar levels of political commitment. The solutions are not high tech. We know what needs to be done.”

In addition to Beard, the report – “Unaffordable and Undrinkable: Rethinking Urban Water Access in the Global South” – was prepared by lead author Diana Mitlin, professor at the University of Manchester; David Satterthwaite, senior fellow at the International Institute for Environment and Development; and Jillian Du, research analyst at the WRI Ross Center for Sustainable Cities.

The authors analyzed data from 15 cities in the global south and found that, on average, 58% of households have water piped into their home dwelling or plot. In Latin America, about 97% of urban households had running water, while South Asia had 63% and sub-Saharan Africa had 22% – and often the water was poor quality.

Lack of access to piped-in water means that families must purchase water from private sources (tanker trucks, vendors) or buy bottled water, which can cost up to 52 times as much as piped utility water, Beard said.

When water is either unavailable or too expensive, households in these countries are forced into tough decisions, Beard said.

“Families will sacrifice their health and time to self-provide ‘free’ – but likely unsafe – ground and surface water, or they will buy water that requires financial cutbacks on food, electricity, education, health care or other household needs,” she said. “‘Day Zero’ [a phrase that denotes water scarcity or a complete lack of water] is a daily reality for nearly half the population in many cities in the global south.”

The report offers four general solutions:
  • Extending a municipal piped water system to all households or plots;
  • Addressing intermittent water service to reduce contamination;
  • Implementing diverse strategies to make water affordable; and
  • Supporting citywide upgrading of informal settlements around the world, to improve rather than displace urban residents.

Beard said reports over the past decade have claimed that society had turned the corner on delivering water to meet basic human needs in the global south. But her own observations, she explained, and the data show that the urban water crisis remains a problem.

Said Beard: “Widely used global indicators used to monitor water access have failed to capture the everyday reality on the ground in urban neighborhoods.”

Decades of attempts to increase the private sector’s role in water provision and to corporatize water utilities have not adequately improved access, especially for the urban under-served. Cities and urban change agents should commit to providing equitable access to safe, reliable and affordable water.

Contacts and sources:
World Resources Institute
Cornell University

Early Species Developed Much Faster Than Previously Thought

When Earth's species were rapidly diversifying nearly 500 million years ago, that evolution was driven by complex factors including global cooling, more oxygen in the atmosphere, and more nutrients in the oceans. But it took a combination of many global environmental and tectonic changes occurring simultaneously and combining like building blocks to produce rapid diversification into new species, according to a new study by Dr. Alycia Stigall, Professor of Geological Sciences at Ohio University.

She and fellow researchers have narrowed in on a specific time during an era known as the Ordovician Radiation, showing that new species actually developed rapidly during a much shorter time frame than previously thought. The Great Ordovician Biodiversification Event, during which many new species developed, they argue, happened during the Darriwilian Stage, about 465 million years ago. Their research, "Coordinated biotic and abiotic change during the Great Ordovician Biodiversification Event: Darriwilian assembly of early Paleozoic building blocks," was published in Palaeogeography, Palaeoclimatology, Palaeoecology as part of a special issue they are editing on the Great Ordovician Biodiversification Event.

Building block model of the earth system that produced the Great Ordovician Biodiversification Event.
Figure from Stigall et al., 2019.

New datasets have allowed them to show that what previously looked like species development widespread over time and geography was actually a diversification pulse. Picture a world before the continents as we know them, when most of the land mass was south of the equator, with only small continents and islands in the vast oceans above the tropics. Then picture ice caps forming over the southern pole. As the ice caps form, the ocean recedes and local, isolated environments form around islands and in seas perched atop continents. In those shallow marine environments, new species develop.

Then picture the ice caps melting and the oceans rising again, with those new species riding the waves of global diversification to populate new regions. The cycle then repeats, producing waves of new species and new dispersals.

Lighting the Spark of Diversification

The early evolution of animal life on Earth is a complex and fascinating subject. The Cambrian Explosion (about 540 to 510 million years ago) produced a stunning array of body plans, but very few separate species of each, notes Stigall. But nearly 40 million years later, during the Ordovician Period, this situation changed, with a rapid radiation of species and genera during the Great Ordovician Biodiversification Event (GOBE).

The triggers of the GOBE and processes that promoted diversification have been subject to much debate, but most geoscientists haven't fully considered how changes like global cooling or increased oxygenation would foster increased diversification.

A recent review paper by Stigall and an international team of collaborators attempts to provide clarity on these issues. For this study, Stigall teamed up with Cole Edwards (Appalachian State University), a sedimentary geochemist, and fellow paleontologists Christian Mac Ørum Rasmussen (University of Copenhagen) and Rebecca Freeman (University of Kentucky) to analyze how changes to the physical earth system during the Ordovician could have promoted this rapid increase in diversity.

In their paper, Stigall and colleagues demonstrate that the main pulse of diversification during the GOBE is temporally restricted and occurred in the Middle Ordovician Darriwilian Stage (about 465 million years ago). Many changes to the physical earth system, including oceanic cooling, increased nutrient availability, and increased atmospheric oxygen accumulate in the interval leading up to the Darriwilian.

These physical changes were necessary building blocks, but on their own were not enough to light the spark of diversification.

The missing ingredient was a method to alternately connect and isolate populations of species through cycles of vicariance and dispersal. That spark finally occurs in the Darriwilian Stage when ice caps form over the south pole of the Ordovician Earth. The waxing and waning of these ice sheets caused sea level to rise and fall (similar to the Pleistocene), which provided the alternate connection and disconnection needed to facilitate rapid diversity accumulation.

Stigall and her collaborators compared this to the assembly of building blocks required to pass a threshold.

Contacts and sources:
Ohio University

Citation: Coordinated biotic and abiotic change during the Great Ordovician Biodiversification Event: Darriwilian assembly of early Paleozoic building blocks
Alycia L. Stigall, Cole T. Edwards, Rebecca L. Freeman, Christian M. Ø. Rasmussen. Palaeogeography, Palaeoclimatology, Palaeoecology, Volume 530, 15 September 2019, Pages 249-270. https://www.sciencedirect.com/science/article/pii/S0031018219302305?via%3Dihub

Wearable Sensors Detect What’s in Your Sweat

Needle pricks not your thing? A team of scientists at the University of California, Berkeley, is developing wearable skin sensors that can detect what’s in your sweat.

New wearable sensors developed by scientists at UC Berkeley can provide real-time measurements of sweat rate and electrolytes and metabolites in sweat. 
Photo by Bizen Maskey, Sunchon National University

They hope that one day, monitoring perspiration could bypass the need for more invasive procedures like blood draws, and provide real-time updates on health problems such as dehydration or fatigue.

In a paper appearing (Friday, August 16) in Science Advances, the team describes a new sensor design that can be rapidly manufactured using a “roll-to-roll” processing technique that essentially prints the sensors onto a sheet of plastic like words on a newspaper.

They used the sensors to monitor the sweat rate, and the electrolytes and metabolites in sweat, from volunteers who were exercising, and others who were experiencing chemically induced perspiration.

“The goal of the project is not just to make the sensors but start to do many subject studies and see what sweat tells us — I always say ‘decoding’ sweat composition,” said Ali Javey, a professor of electrical engineering and computer science at UC Berkeley and senior author on the paper.

“For that we need sensors that are reliable, reproducible, and that we can fabricate to scale so that we can put multiple sensors in different spots of the body and put them on many subjects,” said Javey, who also serves as a faculty scientist at Lawrence Berkeley National Laboratory.

The sensors can be rapidly manufactured using a roll-to-roll processing technique that prints the sensors onto a sheet of plastic.
 Photo by Antti Veijola, VTT

The new sensors contain a spiraling microscopic tube, or microfluidic, that wicks sweat from the skin. By tracking how fast the sweat moves through the microfluidic, the sensors can report how much a person is sweating, or their sweat rate.
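Converting the tracked front speed into a sweat rate is a matter of simple channel geometry. The sketch below uses made-up channel dimensions (the Berkeley patches' actual microfluidic geometry is not given in this article) just to show the idea:

```python
# Hypothetical microfluidic geometry -- illustrative values only, not the
# dimensions of the actual Berkeley sensor patches.
channel_width_m = 100e-6     # assumed 100 µm wide rectangular channel
channel_height_m = 50e-6     # assumed 50 µm deep

def sweat_rate_ul_per_min(front_speed_m_per_min: float) -> float:
    """Volumetric flow implied by how fast the liquid front advances."""
    cross_section_m2 = channel_width_m * channel_height_m
    flow_m3_per_min = front_speed_m_per_min * cross_section_m2
    return flow_m3_per_min * 1e9   # convert m^3/min to µL/min

# A sweat front advancing 2 mm per minute through this channel:
print(f"{sweat_rate_ul_per_min(2e-3):.3f} µL/min")
```

The same tracking principle works for any channel shape, as long as the cross-sectional area is known; a spiral layout simply packs a long channel into a small patch.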

The microfluidics are also outfitted with chemical sensors that can detect concentrations of electrolytes like potassium and sodium, and metabolites like glucose.

Javey and his team worked with researchers at the VTT Technical Research Center of Finland to develop a way to quickly manufacture the sensor patches in a roll-to-roll processing technique similar to screen printing.

“Roll-to-roll processing enables high-volume production of disposable patches at low cost,” Jussi Hiltunen of VTT said. “Academic groups gain significant benefit from roll-to-roll technology when the number of test devices is not limiting the research. Additionally, up-scaled fabrication demonstrates the potential to apply the sweat-sensing concept in practical applications.”

To better understand what sweat can say about the real-time health of the human body, the researchers first placed the sweat sensors on different spots on volunteers’ bodies — including the forehead, forearm, underarm and upper back — and measured their sweat rates and the sodium and potassium levels in their sweat while they rode on an exercise bike.

They found that local sweat rate could indicate the body’s overall liquid loss during exercise, meaning that tracking sweat rate might be a way to give athletes a heads up when they may be pushing themselves too hard.

“Traditionally what people have done is they would collect sweat from the body for a certain amount of time and then analyze it,” said Hnin Yin Yin Nyein, a graduate student in materials science and engineering at UC Berkeley and one of the lead authors on the paper. “So you couldn’t really see the dynamic changes very well with good resolution. Using these wearable devices we can now continuously collect data from different parts of the body, for example to understand how the local sweat loss can estimate whole-body fluid loss.”

They also used the sensors to compare sweat glucose levels and blood glucose levels in healthy and diabetic patients, finding that a single sweat glucose measurement cannot necessarily indicate a person’s blood glucose level.

“There’s been a lot of hope that non-invasive sweat tests could replace blood-based measurements for diagnosing and monitoring diabetes, but we’ve shown that there isn’t a simple, universal correlation between sweat and blood glucose levels,” said Mallika Bariya, a graduate student in materials science and engineering at UC Berkeley and the other lead author on the paper. “This is important for the community to know, so that going forward we focus on investigating individualized or multi-parameter correlations.”
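The point about individualized correlations can be illustrated with a toy calculation (hypothetical numbers, not the study's data): the same blood-glucose excursion may be tracked well in one person's sweat and barely at all in another's, so a single universal sweat-to-blood mapping cannot work:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical paired readings for two individuals
# (blood glucose in mg/dL; sweat glucose in arbitrary units).
blood_a = [90, 110, 140, 170, 200]
sweat_a = [0.10, 0.14, 0.19, 0.24, 0.30]   # tracks blood glucose closely
blood_b = [90, 110, 140, 170, 200]
sweat_b = [0.22, 0.18, 0.25, 0.19, 0.24]   # essentially uncorrelated

r_a = pearson_r(blood_a, sweat_a)   # near 1: sweat is informative for person A
r_b = pearson_r(blood_b, sweat_b)   # near 0: sweat alone says little for person B
```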

Ali Javey describes an earlier version of his lab’s wearable sweat sensor in this video from 2016. (UC Berkeley video by Roxanne Makasdjian and Stephen McNally)

Co-authors on the paper include Liisa Kivimaki, Sanna Uusitalo, Elina Jansson, Tuomas Happonen and Christina Liedert of the VTT Technical Research Center of Finland; and Tiffany Sun Liaw, Christine Heera Ahn, John A. Hangasky, Jianqi Zhao, Yuanjing Lin, Minghan Chao, Yingbo Zhao and Li-Chia Tai of UC Berkeley.

This work was supported by the NSF Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT), the Berkeley Sensor and Actuator Center (BSAC), and the Bakar fellowship.

Contacts and sources:
Kara Manke
University of California, Berkeley

Citation: Regional and correlative sweat analysis using high-throughput microfluidic sensing patches toward decoding sweat
Hnin Yin Yin Nyein1,2,3,*, Mallika Bariya1,2,3,*, Liisa Kivimäki4, Sanna Uusitalo4, Tiffany Sun Liaw1, Elina Jansson4, Christine Heera Ahn1, John A. Hangasky5, Jiangqi Zhao1,3, Yuanjing Lin1,3, Tuomas Happonen4, Minghan Chao1, Christina Liedert4, Yingbo Zhao1,3, Li-Chia Tai1,2,3, Jussi Hiltunen4 and Ali Javey1,2,3,†

1Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA.
2Berkeley Sensor and Actuator Center, University of California, Berkeley, Berkeley, CA 94720, USA.
3Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
4VTT-Technical Research Centre of Finland, Kaitoväylä 1, FIN-90590 Oulu, Finland.
5California Institute for Quantitative Biosciences (QB3), University of California, Berkeley, Berkeley, CA 94720, USA.
* These authors contributed equally to this work.
Science Advances 16 Aug 2019:
Vol. 5, no. 8, eaaw9906
DOI: 10.1126/sciadv.aaw9906 http://dx.doi.org/10.1126/sciadv.aaw9906

Saturday, August 17, 2019

Soft Robot Exoskeleton Makes Running and Walking Easier

A versatile, portable exosuit that assists both walking and running highlights the potential for lightweight and non-restrictive wearable robots outside the lab
 Credit: Wyss Institute at Harvard University

Between walking at a leisurely pace and running for your life, human gaits cover a wide range of speeds. Typically, we choose the gait that consumes the least energy at a given speed. For example, at low speeds the metabolic cost of walking is lower than that of a slow jog; conversely, at high speeds the metabolic cost of running is lower than that of speed walking.

Researchers in academic and industry labs have previously developed robotic devices for rehabilitation and other areas of life that can assist either walking or running, but no untethered portable device could efficiently do both. Assisting walking and running with a single device is challenging because of the fundamentally different biomechanics of the two gaits. However, both gaits have in common an extension of the hip joint, which starts around the time when the foot comes in contact with the ground and requires considerable energy for propelling the body forward.

This video demonstrates the use of the hip-assisting exosuit in different natural environments, showing how the robotic device senses changes in the gait-specific vertical movements of the center of mass during walking and running and rapidly adjusts its actuation.

 Credit: Wyss Institute at Harvard University

As reported today in Science, a team of researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS), and the University of Nebraska Omaha has now developed a portable exosuit that assists with gait-specific hip extension during both walking and running. Their lightweight exosuit is made of textile components worn at the waist and thighs, plus a mobile actuation system attached to the lower back that is controlled by an algorithm able to robustly detect the transition from walking to running and vice versa.

The team first showed that, in treadmill-based indoor tests, the exosuit on average reduced wearers’ metabolic cost of walking by 9.3% and of running by 4% compared to walking and running without the device. “We were excited to see that the device also performed well during uphill walking, at different running speeds and during overground testing outside, which showed the versatility of the system,” said Conor Walsh, Ph.D., who led the study. Walsh is a Core Faculty member of the Wyss Institute, the Gordon McKay Professor of Engineering and Applied Sciences at SEAS, and Founder of the Harvard Biodesign Lab. “While the metabolic reductions we found are modest, our study demonstrates that it is possible to have a portable wearable robot assist more than just a single activity, helping to pave the way for these systems to become ubiquitous in our lives,” said Walsh.

The hip exosuit was developed as part of the Defense Advanced Research Projects Agency (DARPA)’s former Warrior Web program and is the culmination of years of research and optimization of the soft exosuit technology by the team. A previous multi-joint exosuit developed by the team could assist both the hip and ankle during walking, and a medical version of the exosuit aimed at improving gait rehabilitation for stroke survivors is now commercially available in the US and Europe, via a collaboration with ReWalk Robotics.

The light-weight versatile exosuit assists hip extension during uphill walking and at different running speeds in natural terrain.
 Credit: Wyss Institute at Harvard University

The team’s most recent hip-assisting exosuit is designed to be simpler and lighter than their past multi-joint exosuit. It assists the wearer via a cable actuation system: the actuation cables apply a tensile force between the waist belt and thigh wraps to generate an external extension torque at the hip joint that works in concert with the gluteal muscles. The device weighs 5 kg in total, with more than 90% of its weight located close to the body’s center of mass. “This approach to concentrating the weight, combined with the flexible apparel interface, minimizes the energetic burden and movement restriction to the wearer,” said co-first-author Jinsoo Kim, a SEAS graduate student in Walsh’s group. “This is important for walking, but even more so for running as the limbs move back and forth much faster.” Kim shared the first-authorship with Giuk Lee, Ph.D., a former postdoctoral fellow on Walsh’s team and now Assistant Professor at Chung-Ang University in Seoul, South Korea.

A major challenge the team had to solve was that the exosuit needed to be able to distinguish between walking and running gaits and change its actuation profiles accordingly with the right amount of assistance provided at the right time of the gait cycle.

To explain the different kinetics of the two gait cycles, biomechanists often compare walking to the motion of an inverted pendulum and running to the motion of a spring-mass system. During walking, the body’s center of mass moves upward after heel-strike, reaches its maximum height at the middle of the stance phase, and then descends towards the end of the stance phase. In running, the movement of the center of mass is reversed: it descends to a minimum height at the middle of the stance phase and then moves upward towards push-off.

The team’s portable exosuit is made of textile components worn at the waist and thighs, and a mobile actuation system attached to the lower back which uses an algorithm that robustly predicts transitions between walking and running gaits.

Credit: Wyss Institute at Harvard University

“We took advantage of these biomechanical insights to develop our biologically inspired gait classification algorithm that can robustly and reliably detect a transition from one gait to the other by monitoring the acceleration of an individual’s center of mass with sensors that are attached to the body,” said co-corresponding author Philippe Malcolm, Ph.D., Assistant Professor at University of Nebraska Omaha. “Once a gait transition is detected, the exosuit automatically adjusts the timing of its actuation profile to assist the other gait, as we demonstrated by its ability to reduce metabolic oxygen consumption in wearers.”
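A toy version of such a sign-based gait test (a sketch, not the authors' algorithm; the threshold is hypothetical) follows directly from the inverted-pendulum vs. spring-mass picture described above:

```python
def classify_gait(com_vertical_accel, threshold=0.5):
    """Toy gait classifier based on the inverted-pendulum vs. spring-mass
    picture: at mid-stance a walker's center of mass sits at its apex, so its
    vertical acceleration (gravity removed) points downward, while a runner's
    center of mass sits at its lowest point, so the acceleration points upward.

    com_vertical_accel: vertical acceleration of the center of mass sampled
    at mid-stance, in m/s^2 with gravity removed (hypothetical input).
    """
    if com_vertical_accel > threshold:
        return "running"   # spring-like stance: upward acceleration
    if com_vertical_accel < -threshold:
        return "walking"   # pendulum-like stance: downward acceleration
    return "uncertain"     # too close to the threshold to call
```

The real controller additionally has to time its actuation profile within each gait cycle; this sketch only captures the walk/run decision itself.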

In ongoing work, the team is focused on optimizing all aspects of the technology, including further reducing weight, individualizing assistance and improving ease of use. “It is very satisfying to see how far our approach has come,” said Walsh, “and we are excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors.”

“This breakthrough study coming out of the Wyss Institute’s Bioinspired Soft Robotics platform gives us a glimpse into a future where wearable robotic devices can improve the lives of the healthy, as well as serve those with injuries or in need of rehabilitation,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School, the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

Other authors on the study are past and present members of Walsh’s team, including data analyst Roman Heimgartner; Research Fellow Dheepak Arumukhom Revi; Control Engineer Nikos Karavas, Ph.D.; Functional Apparel Designer Danielle Nathanson; Robotics Engineer Ignacio Galiana, Ph.D.; Robotics Engineer Asa Eckert-Erdheim; Electromechanical Engineer Patrick Murphy; Engineer David Perry; Software Engineer Nicolas Menard, and graduate student Dabin Kim Choe. The study was funded by the Defense Advanced Research Projects Agency’s Warrior Web Program, the National Science Foundation and Harvard’s Wyss Institute for Biologically Inspired Engineering.

Contacts and sources:
Benjamin Boettner
Wyss Institute for Biologically Inspired Engineering at Harvard

Citation: Reducing the metabolic rate of walking and running with a versatile, portable exosuit
Jinsoo Kim, Giuk Lee, Roman Heimgartner, Dheepak Arumukhom Revi, Nikos Karavas, Danielle Nathanson, Ignacio Galiana, Asa Eckert-Erdheim, Patrick Murphy, David Perry, Nicolas Menard, Dabin Kim Choe, Philippe Malcolm, Conor J. Walsh. . Science, 2019 DOI: 10.1126/science.aav7536

Friday, August 16, 2019

Reading Invisible Writings of the Egyptians Now Possible

The first thing that catches an archaeologist's eye on the small piece of papyrus from Elephantine Island on the Nile is the apparently blank patch. Researchers from the Egyptian Museum, Berlin universities and Helmholtz-Zentrum Berlin have now used the synchrotron radiation from BESSY II to unveil its secret. This pushes the door wide open for analysing the giant Berlin papyrus collection and many more.

For more than a century, numerous metal crates and cardboard boxes have sat in storage at the Egyptian Museum and Papyrus Collection Berlin, all of which were excavated by Otto Rubensohn from 1906 to 1908 from an island called Elephantine on the River Nile in the south of Egypt, near the city of Aswan. Eighty percent of the texts on the papyrus in these containers have yet to be studied, and this can hardly be done using conventional methods anymore. Thousands of years ago, the Egyptians would carefully roll up or fold together letters, contracts and amulets to a tiny size so that they would take up the least possible space. In order to read them, the papyri would have to be just as carefully unfolded again.

 "Today, however, much of this papyrus has aged considerably, so the valuable texts can easily crumble if we try to unfold or unroll them," Prof. Dr. Heinz-Eberhard Mahnke of Helmholtz-Zentrum Berlin and Freie Universität Berlin describes the greatest obstacle facing the Egyptologists, who are eager to unearth the scientific treasures waiting in the boxes and crates in the Berlin Egyptian Museum.

Testing the fragile papyrus with nondestructive methods

The physicist at Helmholtz-Zentrum Berlin knew from many years of research how to analyse the fragile papyrus without destroying it: shining a beam of X-ray light on the specimen causes the atoms in the papyrus to become excited and send back X-rays of their own, much like an echo. Because the respective elements exhibit different X-ray fluorescence behaviour, the researchers can distinguish the atoms in the sample by the energy of the radiation they return. The scientists therefore long ago developed laboratory equipment that uses this X-ray fluorescence to analyse sensitive specimens without destroying them.

Credit: HZB

Scholars in ancient Egypt typically wrote with a black soot ink made from charred pieces of wood or bone and which consisted mainly of elemental carbon. "For certain purposes, however, the ancient Egyptians also used coloured inks containing elements such as iron, copper, mercury or lead," Heinz-Eberhard Mahnke explains. If the ancient Egyptian scribes had used such a "metal ink" to inscribe the part that now appears blank on the Elephantine papyrus, then X-ray fluorescence should be able to reveal traces of those metals. Indeed, using the equipment in their laboratory, the researchers were able to detect lead in the blank patch of papyrus.
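The element-identification step can be sketched as a simple peak lookup (the line energies below are standard tabulated Kα/Lα values for the ink metals the article mentions; the tolerance is an illustrative choice, not the instrument's actual resolution):

```python
# Characteristic X-ray fluorescence lines (keV) for the metals named above;
# these are standard tabulated K-alpha / L-alpha emission energies.
LINES = {"Fe Ka": 6.40, "Cu Ka": 8.05, "Hg La": 9.99, "Pb La": 10.55}

def identify(peak_kev, tolerance=0.15):
    """Return the candidate elements whose characteristic line falls within
    `tolerance` keV of a measured fluorescence peak."""
    return [name for name, energy in LINES.items()
            if abs(energy - peak_kev) <= tolerance]
```

A peak near 10.5 keV would thus point to lead, consistent with what the researchers found in the blank patch.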

Revealing sharper details at BESSY II with "absorption edge radiography"

In fact, they even managed to discern characters, albeit as a blurry image. To capture a much sharper image, they studied the papyrus with X-ray radiography at BESSY II, where the synchrotron radiation illuminates the specimen with many X-ray photons of high coherence. Using "absorption edge radiography" at the BAMline station of BESSY II, they were able to sharpen the contrast for the sample studied, and thus better distinguish the characters written on the papyrus from the structure of the ancient paper. So far it has not been possible to decipher the character, but it could conceivably depict a deity.

Composition of the invisible ink resolved in the Rathgen laboratory

The analysis at BESSY II did not identify the kind of leaded ink the ancient scribes used to write these characters on the papyrus. Only by using a "Fourier-transform infrared spectrometer" could the scientists of the Rathgen Research Laboratory Berlin finally identify the substance as lead carboxylate, which is in fact colourless. But why would the ancient scribe have wanted to write on the papyrus with this kind of "invisible ink"? "We suspect the characters may originally have been written in bright minium (red lead) or perhaps coal-black galena (lead glance)," says Heinz-Eberhard Mahnke, summarising the researchers' deliberations.

If such inks are exposed to sunlight for too long, the energy of the light can trigger chemical reactions that alter the colours. Even many modern dyes similarly fade over time in the bright sunlight. It is therefore easily conceivable that, over thousands of years, the bright red minium or jet black galena would transform into the invisible lead carboxylate, only to mystify researchers as a conspicuously blank space on the papyrus fragment.

Method developed to study folded papyri without contact

With their investigation, Dr. Tobias Arlt of Technische Universität Berlin, Prof. Dr. Heinz-Eberhard Mahnke and their colleagues have pushed the door wide open for future studies to decipher texts even on finely folded or rolled papyri from the Egyptian Museum without having to unfold them and risk destroying the precious finds. Specifically, the researchers developed a new technique for virtually opening the valuable papyri on a computer without ever touching them.

The Elephantine project funded by the European Research Council, ERC, and headed by Prof. Dr. Verena Lepper (Stiftung Preußischer Kulturbesitz-Staatliche Museen zu Berlin) is thus well on its way to studying many more of the hidden treasures in the collection of papyrus in Berlin and other parts of the world, and thus to learning more about Ancient Egypt.

Contacts and sources: 
Dr. Heinz-Eberhard Mahnke, Silvia Zerbe
Helmholtz-Zentrum Berlin (HZB)

Citation: "Absorption Edge Sensitive Radiography and Tomography of Egyptian Papyri". T. Arlt, H.-E. Mahnke, T. Siopi, E. Menei, C. Aibéo, R.-R. Pausewein, I. Reiche, I. Manke, V. Lepper. Journal of Cultural Heritage (2019): https://doi.org/10.1016/j.culher.2019.04.007


"Virtual unfolding of folded papyri"; H.-E. Mahnke, T. Arlt, D. Baum, H.-C. Hege, F. Herter, N. Lindow, I. Manke, T. Siopi, E. Menei, M. Etienne, V. Lepper (https://doi.org/10.1016/j.culher.2019.07.007)

Humans Migrated to Mongolia Much Earlier Than Previously Believed

Stone tools uncovered in Mongolia by an international team of archaeologists indicate that modern humans traveled across the Eurasian steppe about 45,000 years ago, according to a new University of California, Davis, study. The date is about 10,000 years earlier than archaeologists previously believed.

The site also points to a new location for where modern humans may have first encountered their mysterious cousins, the now extinct Denisovans, said Nicolas Zwyns, an associate professor of anthropology and lead author of the study.

Ancient tools were found at a site on the western flank of the Tolbor Valley in the mountains of Mongolia.
Credit: UC Davis

Zwyns led excavations from 2011 to 2016 at the Tolbor-16 site along the Tolbor River in the Northern Hangai Mountains between Siberia and northern Mongolia.

The excavations yielded thousands of stone artifacts, with 826 stone artifacts associated with the oldest human occupation at the site. With long and regular blades, the tools resemble those found at other sites in Siberia and Northwest China — indicating a large-scale dispersal of humans across the region, Zwyns said.

A sampling of stone tools uncovered at the Tolbor-16 site in Mongolia, with examples of long triangular blades (bottom row, left) and double-edged blades (bottom row, middle) that resemble those found at other sites in Siberia and Northwest China. The discovery suggests a dispersal through the region of early modern humans who shared a cultural and technological background. The shorter blades (top row) are examples of tool technology previously known to researchers.
Credit: UC Davis

“These objects existed before, in Siberia, but not to such a degree of standardization,” Zwyns said. “The most intriguing (aspect) is that they are produced in a complicated yet systematic way — and that seems to be the signature of a human group that shares a common technical and cultural background.”

That technology, known in the region as the Initial Upper Palaeolithic, led the researchers to rule out Neanderthals or Denisovans as the site’s occupants. “Although we found no human remains at the site, the dates we obtained match the age of the earliest Homo sapiens found in Siberia,” Zwyns said. “After carefully considering other options, we suggest that this change in technology illustrates movements of Homo sapiens in the region.”

Their findings were published online in an article in Scientific Reports.

The age of the site — determined by luminescence dating of the sediment and radiocarbon dating of animal bones found near the tools — is about 10,000 years older than the oldest known human fossil from Mongolia, a skullcap, and roughly 15,000 years after modern humans left Africa.

Evidence of soil development (grass and other organic matter) associated with the stone tools suggests that the climate for a period became warmer and wetter, making the normally cold and dry region more hospitable to grazing animals and humans.

Preliminary analysis identifies bone fragments at the site as belonging to large bovids (wild cattle or bison), medium-sized bovids (wild sheep and goats) and horses, which frequented the open steppe, forests and tundra during the Pleistocene — another sign of human occupation at the site.

The dates for the stone tools also match the age estimates obtained from genetic data for the earliest encounter between Homo sapiens and the Denisovans.

“Although we don’t know yet where the meeting happened, it seems that the Denisovans passed along genes that will later help Homo sapiens settling down in high altitude and to survive hypoxia on the Tibetan Plateau,” Zwyns said. “From this point of view, the site of Tolbor-16 is an important archaeological link connecting Siberia with Northwest China on a route where Homo sapiens had multiple possibilities to meet local populations such as the Denisovans.”

Co-authors of the paper include UC Davis anthropology graduate students Roshanne Bakhtiary and Kevin Smith, former graduate student Joshua Noyer, and undergraduate alumna Aurora Allshouse, now a graduate student at Harvard University.

Other members of the team included colleagues from universities and institutes in South Carolina, the United Kingdom, Mongolia, Germany, Belgium and Russia.

Contacts and sources:
Karen Nikos-Rose, Kathleen Holder
University of California, Davis

Unraveling the Stripe Order Mystery

One of the greatest mysteries in condensed matter physics is the exact relationship between charge order and superconductivity in cuprate superconductors. In a superconductor cooled below its critical temperature, electrons move freely through the material with zero resistance. However, the cuprates simultaneously exhibit superconductivity and charge order in patterns of alternating stripes. This is paradoxical, in that charge order describes areas of confined electrons. How can superconductivity and charge order coexist?

Doped charges in the CuO2 planes of cuprate superconductors form regular one-dimensional 'stripes' at low temperatures. Excitation with ultrafast near-infrared pulses allows direct observation of diffusive charge dynamics, which may be involved in establishing in-plane superconductivity.

Credit: Greg Stewart/SLAC National Accelerator Laboratory

Now researchers at the University of Illinois at Urbana-Champaign, collaborating with scientists at the SLAC National Accelerator Laboratory, have shed new light on how these disparate states can exist adjacent to one another. Illinois Physics post-doctoral researcher Matteo Mitrano, Professor Peter Abbamonte, and their team applied a new x-ray scattering technique, time-resolved resonant soft x-ray scattering, taking advantage of the state-of-the-art equipment at SLAC. This method enabled the scientists to probe the striped charge order phase with an unprecedented energy resolution. This is the first time this has been done at an energy scale relevant to superconductivity.

The scientists measured the fluctuations of charge order in a prototypical copper-oxide superconductor, La2−xBaxCuO4 (LBCO), and found the fluctuations had an energy that matched the material's superconducting critical temperature, implying that superconductivity in this material--and by extrapolation, in the cuprates--may be mediated by charge-order fluctuations.

The researchers further demonstrated that, if the charge order melts, the electrons in the system will reform the striped areas of charge order within tens of picoseconds. As it turns out, this process obeys a universal scaling law. To understand what they were seeing in their experiment, Mitrano and Abbamonte turned to Illinois Physics Professor Nigel Goldenfeld and his graduate student Minhui Zhu, who were able to apply theoretical methods borrowed from soft condensed matter physics to describe the formation of the striped patterns.

Professor Nigel Goldenfeld (right) and his graduate student Minhui Zhu pose outside the Institute for Genomic Biology on the University of Illinois at Urbana-Champaign campus. Goldenfeld and Zhu elucidated the experimental observations using a theory borrowed from the field of soft condensed matter, establishing that charge-order stripe formation in cuprate superconductors adheres to a universal scaling law, akin to pattern formation in liquids and polymers.

Credit: Siv Schwink, Illinois Physics

These findings were published on August 16, 2019, in the online journal Science Advances.

Cuprates have stripes

The significance of this mystery can be understood within the context of research in high-temperature superconductors (HTS), specifically the cuprates--layered materials that contain copper complexes. The cuprates, some of the first discovered HTS, have significantly higher critical temperatures than "ordinary" superconductors (e.g., aluminum and lead superconductors have a critical temperature below 10 K). In the 1980s, LBCO, a cuprate, was found to have a superconducting critical temperature of 35 K (-396°F), a discovery for which Bednorz and Müller won the Nobel Prize.

That discovery precipitated a flood of research into the cuprates. In time, scientists found experimental evidence of inhomogeneities in LBCO and similar materials: insulating and metallic phases that were coexisting. In 1998, Illinois Physics Professor Eduardo Fradkin, Stanford Professor Steven Kivelson, and others proposed that Mott insulators--materials that ought to conduct under conventional band theory but insulate due to repulsion between electrons--are able to host stripes of charge order and superconductivity. La2CuO4, the parent compound of LBCO, is an example of a Mott insulator. As Ba is added to that compound, replacing some La atoms, stripes form due to the spontaneous organization of holes--vacancies of electrons that act like positive charges.

Professor Peter Abbamonte (middle, in navy sweater) and postdoctoral researcher Matteo Mitrano (right, in white dress shirt) pose with their team at the SLAC National Accelerator Laboratory in Menlo Park, California. The experimental team used a new investigative technique called time-resolved resonant soft x-ray scattering, to probe the striped charge order phase in a well-studied cuprate superconductor, with an unprecedented energy resolution, finding that superconductivity in cuprates may be mediated by charge-order fluctuations. This is the first time such an experiment has been done at an energy scale relevant to superconductivity.
Credit: SLAC

Still, other questions regarding the behavior of the stripes remained. Are the areas of charge order immobile? Do they fluctuate?

"The conventional belief is that if you add these doped holes, they add a static phase which is bad for superconductivity--you freeze the holes, and the material cannot carry electricity," Mitrano comments. "If they are dynamic--if they fluctuate--then there are ways in which the holes could aid high-temperature superconductivity."

Probing the fluctuations in LBCO

To understand what exactly the stripes are doing, Mitrano and Abbamonte conceived of an experiment to melt the charge order and observe the process of its reformation in LBCO. Mitrano and Abbamonte reimagined a measurement technique called resonant inelastic x-ray scattering, adding a time-dependent protocol to observe how the charge order recovers over a duration of 40 picoseconds. The team shot a laser at the LBCO sample, imparting extra energy into the electrons to melt the charge order and introduce electronic homogeneity.

"We used a novel type of spectrometer developed for ultra-fast sources, because we are doing experiments in which our laser pulses are extremely short," Mitrano explains. "We performed our measurements at the Linac Coherent Light Source at SLAC, a flagship in this field of investigation. Our measurements are two orders of magnitude more sensitive in energy than what can be done at any other conventional scattering facility."

Abbamonte adds, "What is innovative here is using time-domain scattering to study collective excitations at the sub-meV energy scale. This technique was demonstrated previously for phonons. Here, we have shown the same approach can be applied to excitations in the valence band."

Hints of a mechanism for superconductivity

The first significant result of this experiment is that the charge order does in fact fluctuate, moving with an energy that almost matches the energy established by the critical temperature of LBCO. This suggests that Josephson coupling may be crucial for superconductivity.
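As a back-of-envelope check on what "an energy matching the critical temperature" means, a temperature scale converts to an energy scale via E = k_B·T (the 35 K figure below is the LBCO critical temperature quoted earlier in the article; actual samples vary with doping):

```python
K_B_MEV_PER_K = 0.08617  # Boltzmann constant, meV per kelvin

def thermal_energy_mev(temperature_k):
    """Energy scale equivalent to a temperature scale: E = k_B * T."""
    return K_B_MEV_PER_K * temperature_k

# LBCO's 35 K critical temperature corresponds to roughly 3 meV,
# the sub-10-meV regime the new technique was built to resolve.
e_mev = thermal_energy_mev(35.0)
```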

The idea behind the Josephson effect, discovered by Brian Josephson in 1962, is that two superconductors can be connected via a weak link, typically an insulator or a normal metal. In this type of system, superconducting electrons can leak from the two superconductors into the weak link, generating within it a current of superconducting electrons.

Josephson coupling provides a possible explanation for the coupling between superconductivity and striped regions of charge order, wherein the stripes fluctuate such that superconductivity leaks into the areas of charge order, the weak links.

Obeying universal scaling laws of pattern formation

After melting the charge order, Mitrano and Abbamonte measured the recovery of the stripes as they evolved in time. As the charge order approached its full recovery, it followed an unexpected time dependence. This result was nothing like what the researchers had encountered in the past. What could possibly explain this?

The answer is borrowed from the field of soft condensed matter physics, and more specifically from a scaling law theory Goldenfeld had developed two decades prior to describe pattern formation in liquids and polymers. Goldenfeld and Zhu demonstrated that the stripes in LBCO recover according to a universal, dynamic, self-similar scaling law.

Goldenfeld explains, "By the mid-1990s, scientists had an understanding of how uniform systems approach equilibrium, but how about stripe systems? I worked on this question about 20 years ago, looking at the patterns that emerge when a fluid is heated from below, such as the hexagonal spots of circulating, upwelling white flecks in hot miso soup. Under some circumstances these systems form stripes of circulating fluid, not spots, analogous to the stripe patterns of electrons in the cuprate superconductors. And when the pattern is forming, it follows a universal scaling law. This is exactly what we see in LBCO as it reforms its stripes of charge order."

Through their calculations, Goldenfeld and Zhu were able to elucidate the process of time-dependent pattern reformation in Mitrano and Abbamonte's experiment. The stripes reform with a logarithmic time dependence--a very slow process. Adherence to the scaling law in LBCO further implies that it contains topological defects, or irregularities in its lattice structure. This is the second significant result from this experiment.
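What a logarithmic recovery looks like numerically can be sketched with synthetic data (illustrative numbers, not the measured curve): fitting I(t) = a + b·ln(t) reduces to linear least squares in ln(t):

```python
import numpy as np

# Synthetic charge-order recovery curve with a logarithmic time dependence.
t_ps = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # time after melt pulse (ps)
intensity = 0.2 + 0.15 * np.log(t_ps)                 # illustrative "data"

# I(t) = a + b*ln(t) is linear in ln(t), so solve by ordinary least squares.
design = np.column_stack([np.ones_like(t_ps), np.log(t_ps)])
(a, b), *_ = np.linalg.lstsq(design, intensity, rcond=None)
```

On real data the quality of such a fit, versus an exponential or power-law alternative, is what distinguishes the slow logarithmic recovery described above.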

Zhu comments, "It was exciting to be a part of this collaborative research, working with solid-state physicists, but applying techniques from soft condensed matter to analyze a problem in a strongly correlated system, like high-temperature superconductivity. I not only contributed my calculations, but also picked up new knowledge from my colleagues with different backgrounds, and in this way gained new perspectives on physical problems, as well as new ways of scientific thinking."

In future research, Mitrano, Abbamonte, and Goldenfeld plan to further probe the physics of charge order fluctuations with the goal of completely melting the charge order in LBCO to observe the physics of stripe formation. They also plan similar experiments with other cuprates, including yttrium barium copper oxide compounds, better known as YBCO.

Goldenfeld sees this and future experiments as ones that could catalyze new research in HTS: "What we learned in the 20 years since Eduardo Fradkin and Steven Kivelson's work on the periodic modulation of charge is that we should think about the HTS as electronic liquid crystals," he states. "We're now starting to apply the soft condensed matter physics of liquid crystals to HTS to understand why the superconducting phase exists in these materials."

Contacts and sources:
Dr. Jeff Damasco, Peter Abbamonte
Grainger College of Engineering
University of Illinois at Urbana-Champaign

Best of Both Worlds: Asteroids and Massive Mergers

The race is on. Since the construction of technology able to detect the ripples in space and time triggered by collisions of massive objects in the universe, astronomers around the world have been searching for the bursts of light that could accompany such collisions, which are thought to be the sources of rare heavy elements.

The University of Arizona’s Steward Observatory has partnered with the Catalina Sky Survey, which searches for near-Earth asteroids from atop Mount Lemmon, in an effort dubbed Searches after Gravitational Waves Using ARizona Observatories, or SAGUARO, to find optical counterparts to massive mergers. 

An artist's conception of two merging neutron stars creating ripples in spacetime known as gravitational waves.
Image: NASA

“Catalina Sky Survey has all of this infrastructure for their asteroid survey. So we have deployed additional software to take gravitational wave alerts from LIGO (the Laser Interferometer Gravitational-Wave Observatory) and the Virgo interferometer, and then notify the survey to search an area of sky most likely to contain the optical counterpart,” said Michael Lundquist, postdoctoral research associate and lead author on the study published today in the Astrophysical Journal Letters.

“Essentially, instead of searching the next section of sky that we would normally, we go off and observe some other area that has a higher probability of containing an optical counterpart of a gravitational wave event,” said Eric Christensen, Catalina Sky Survey director and Lunar and Planetary Laboratory senior staff scientist. “The main idea is we can run this system while still maintaining the asteroid search.”

The ongoing campaign began in April, and in that month alone, the team was notified of three massive collisions. Because the precise location from which a gravitational wave originates is hard to pin down, finding optical counterparts can be difficult.

According to Lundquist, two strategies are being employed. In the first, teams with small telescopes target galaxies that are at the right approximate distance, according to the gravitational wave signal. Catalina Sky Survey, on the other hand, utilizes a 60-inch telescope with a wide field of view to scan large swaths of sky in 30 minutes.
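The targeting logic behind the wide-field strategy can be sketched as a greedy selection over a probability map: observe the highest-probability fields first until most of the localization is covered. This is an illustrative toy, not SAGUARO's actual software; the field names and probabilities are made up, and real LIGO/Virgo skymaps are HEALPix grids rather than a handful of named tiles.

```python
# Toy sketch of "observe the highest-probability area first" targeting.
# Field names and probabilities are illustrative, not real skymap data.

def pick_fields(tiles, coverage=0.9):
    """Greedily select survey fields until `coverage` of the gravitational
    wave localization probability is enclosed.

    tiles: dict mapping field name -> probability the source lies there.
    Returns (chosen field names, total enclosed probability)."""
    chosen, enclosed = [], 0.0
    for name, prob in sorted(tiles.items(), key=lambda kv: -kv[1]):
        if enclosed >= coverage:
            break
        chosen.append(name)
        enclosed += prob
    return chosen, enclosed

# Four candidate survey fields for a toy localization region.
fields = {"F1": 0.55, "F2": 0.25, "F3": 0.15, "F4": 0.05}
targets, prob = pick_fields(fields, coverage=0.9)
print(targets, round(prob, 2))  # ['F1', 'F2', 'F3'] 0.95
```

The same idea scales up: a wide-field telescope like Catalina's 60-inch can enclose most of the probability in a few pointings, which is what makes the 30-minute scans described above feasible.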

Three alerts, on April 9, 25 and 26, triggered the team’s software to search nearly 20,000 objects. Machine learning software then trimmed down the total number of potential optical counterparts to five.

The first gravitational wave event was a merger of two black holes, Lundquist said.

“There are some people who think you can get an optical counterpart to those, but it’s definitely inconclusive,” he said.

The second event was a merger of two neutron stars, the incredibly dense cores of collapsed giant stars. The third is thought to be a merger between a neutron star and a black hole, Lundquist said.

While no teams confirmed optical counterparts, the UA team did find several supernovae. They also used the Large Binocular Telescope Observatory to spectroscopically classify one promising target from another group. It was determined to be a supernova and not associated with the gravitational wave event.

“We also found a near-Earth object in the search field on April 25,” Christensen said. “That proves right there we can do both things at the same time.”

They were able to do this because Catalina Sky Survey has observations of the same swaths of sky going back many years. Many other groups don’t have easy access to past photos for comparison, offering the UA team a leg up.

“We have really nice references,” Lundquist said. “We subtract the new image from the old image and use that difference to look for anything new in the sky.”
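The subtraction step Lundquist describes can be sketched in a few lines. This toy version assumes the new and reference images are already aligned and calibrated; real difference-imaging pipelines also match point-spread functions and flux scales. The pixel values and threshold here are illustrative only.

```python
# Minimal sketch of difference imaging: subtract an archival reference
# image from a new image and flag pixels that brightened significantly.

def find_transients(new, ref, threshold=50):
    """Return (row, col) positions where new - ref exceeds threshold."""
    hits = []
    for r, (new_row, ref_row) in enumerate(zip(new, ref)):
        for c, (n, o) in enumerate(zip(new_row, ref_row)):
            if n - o > threshold:
                hits.append((r, c))
    return hits

reference = [[10, 12, 11],
             [11, 10, 12],
             [12, 11, 10]]
new_image = [[10, 13, 11],
             [11, 95, 12],   # a new bright source appears here
             [12, 11, 10]]

print(find_transients(new_image, reference))  # [(1, 1)]
```

Anything left over after the subtraction, like the bright pixel above, becomes a candidate detection that the machine learning filter then vets.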

“The process Michael described,” Christensen said, “starting with a large number of candidate detections and filtering down to whatever the true detections are, is very familiar. We do that with near-Earth objects, as well.”

The team is planning on deploying a second telescope in the hunt for optical counterparts: Catalina Sky Survey’s 0.7-meter Schmidt telescope. While the telescope is smaller than the 60-inch telescope, it has an even wider field of view, which allows astronomers to quickly search an even larger chunk of sky. They’ve also improved their machine learning software to filter out stars that regularly change in brightness.

"Catalina Sky Survey takes hundreds of thousands of images of the sky every year, from multiple telescopes. Our survey telescopes image the entire visible nighttime sky several times per month, then we are looking for one kind of narrow slice of the pie," Christensen said. “So, we’ve been willing to share the data with whoever wants to use it.”

Contacts and sources:
Mikayla Mace
University of Arizona
Citation: Searches after Gravitational Waves Using ARizona Observatories (SAGUARO): System Overview and First Results from Advanced LIGO/Virgo's Third Observing Run M. J. Lundquist et al. The Astrophysical Journal Letters, Volume 881, Number 2 http://dx.doi.org/10.3847/2041-8213/ab32f2

Thursday, August 15, 2019

Gut-Brain Connection Helps Explain Obesity from Overeating

Eating extra servings typically shows up on the scale later, but how this happens has not been clear. A new study published today in the Journal of Clinical Investigation by a multi-institutional team led by researchers at Baylor College of Medicine reveals a previously unknown gut-brain connection that helps explain how those extra servings lead to weight gain.

Mice consuming a high-fat diet show increased levels of gastric inhibitory polypeptide (GIP), a hormone produced in the gut that is involved in managing the body’s energy balance. The study reports that the excess GIP travels through the blood to the brain where it inhibits the action of leptin, the satiety hormone; consequently, the animals continue eating and gain weight. Blocking the interaction of GIP with the brain restores leptin’s ability to inhibit appetite and results in weight loss in mice.

Credit: Baylor College of Medicine

“We have uncovered a new piece of the complex puzzle of how the body manages energy balance and affects weight,” said corresponding author Dr. Makoto Fukuda, assistant professor of pediatrics at Baylor and the USDA/ARS Children’s Nutrition Research Center at Baylor and Texas Children’s Hospital.

Researchers know that leptin, a hormone produced by fat cells, is important in the control of body weight in both humans and mice. Leptin works by triggering in the brain the sensation of fullness when we have eaten enough, prompting us to stop eating. However, in obesity resulting from consuming a high-fat diet or overeating, the body stops responding to leptin signals – it does not feel full, and eating continues, leading to weight gain.

“We didn’t know how a high-fat diet or overeating leads to leptin resistance,” Fukuda said. “My colleagues and I started looking for what causes leptin resistance in the brain when we eat fatty foods. Using cultured brain slices in petri dishes we screened blood circulating factors for their ability to stop leptin actions. After several years of efforts, we discovered a connection between the gut hormone GIP and leptin.”

GIP is one of the incretin hormones produced in the gut in response to eating and known for their ability to influence the body’s energy management. To determine whether GIP was involved in leptin resistance, Fukuda and his colleagues first confirmed that the GIP receptor, the molecule on cells that binds to GIP and mediates its effects, is expressed in the brain.

Then the researchers evaluated the effect blocking the GIP receptor would have on obesity by infusing directly into the brain a monoclonal antibody developed by Dr. Peter Ravn at AstraZeneca that effectively prevents the GIP-GIP receptor interaction. This significantly reduced the body weight of high-fat-diet-fed obese mice.

“The animals ate less and also reduced their fat mass and blood glucose levels,” Fukuda said. “In contrast, normal chow-fed lean mice treated with the monoclonal antibody that blocks GIP-GIP receptor interaction neither reduced their food intake nor lost body weight or fat mass, indicating that the effects are specific to diet-induced obesity.”

Further experiments showed that if the animals were genetically engineered to be leptin deficient, then the treatment with the specific monoclonal antibody did not reduce appetite and weight in obese mice, indicating that GIP in the brain acts through leptin signaling. In addition, the researchers identified intracellular mechanisms involved in GIP-mediated modulation of leptin activity.

“In summary, when eating a balanced diet, GIP levels do not increase and leptin works as expected, triggering in the brain the feeling of being full when the animal has eaten enough and the mice stop eating,” Fukuda said. “But, when the animals eat a high-fat diet and become obese, the levels of blood GIP increase. GIP flows into the hypothalamus where it inhibits leptin’s action. Consequently, the animals do not feel full, overeat and gain weight. Blocking the interaction of GIP with the hypothalamus of obese mice restores leptin’s ability to inhibit appetite and reduces body weight.”

These data indicate that GIP and its receptor in the hypothalamus, a brain area that regulates appetite, are necessary and sufficient to elicit leptin resistance. This is a previously unrecognized role for GIP in obesity, one that acts directly on the brain.

Although more research is needed, the researchers speculate that these findings might one day be translated into weight loss strategies that restore the brain’s ability to respond to leptin by inhibiting the anti-leptin effect of GIP.

Other contributors to this work include Kentaro Kaneko, Yukiko Fu, Hsiao-Yun Lin, Elizabeth L. Cordonier, Qianxing Mo, Yong Gao, Ting Yao, Jacqueline Naylor, Victor Howard, Kenji Saito, Pingwen Xu, Siyu S. Chen, Miao-Hsueh Chen, Yong Xu, Kevin W. Williams and Peter Ravn. For author affiliations refer to the published paper.

This work was supported by USDA CRIS 6250-51000-055, AHA-14BGIA20460080, NIH-P30-DK079638, NIH R01DK104901, AHA-15POST22500012, the Uehara Memorial Foundation 201340214, NIH-T32HD071839, AHA-13POST13800000 and AHA-15POST22670017. Further support was provided by NIH R01DK100699, DK119169, China Scholarship Council 201406280111, CRIS 6250-51000-059 and NIH P30-DK079638. This project was also supported in part by the Genomic and RNA Profiling Core at Baylor College of Medicine with funding from P30 Digestive Disease Center Support Grant (NIDDK-DK56338) and P30 Cancer Center Support Grant (NCI-CA125123).

Contacts and sources:
Homa Shalchi
Baylor College of Medicine

Citation: Gut-derived GIP activates central Rap1 to impair neural leptin sensitivity during overnutrition
Kentaro Kaneko, Yukiko Fu, Hsiao-Yun Lin, Elizabeth L. Cordonier, Qianxing Mo, Yong Gao, Ting Yao, Jacqueline Naylor, Victor Howard, Kenji Saito, Pingwen Xu, Siyu S. Chen, Miao-Hsueh Chen, Yong Xu, Kevin W. Williams, Peter Ravn, Makoto Fukuda. Journal of Clinical Investigation, 2019; DOI: 10.1172/JCI126107

Flavonoid-rich Foods Protect Against Cancer and Heart Disease

Consuming flavonoid-rich items such as apples and tea protects against cancer and heart disease, particularly for smokers and heavy drinkers, according to new research from Edith Cowan University (ECU).

Researchers from ECU’s School of Medical and Health Sciences analysed data from the Danish Diet, Cancer and Health cohort that assessed the diets of 53,048 Danes over 23 years.

They found that people who habitually consumed moderate to high amounts of foods rich in flavonoids, compounds found in plant-based foods and drinks, were less likely to die from cancer or heart disease.

Flavonoid-rich items such as apples and tea protect against cancer and heart disease.
Credit: Edith Cowan University

No quick fix for poor habits

Lead researcher Dr Nicola Bondonno said while the study found a lower risk of death in those who ate flavonoid-rich foods, the protective effect appeared to be strongest for those at high risk of chronic diseases due to cigarette smoking and those who drank more than two standard alcoholic drinks a day.

“These findings are important as they highlight the potential to prevent cancer and heart disease by encouraging the consumption of flavonoid-rich foods, particularly in people at high risk of these chronic diseases,” she said.

“But it’s also important to note that flavonoid consumption does not counteract all of the increased risk of death caused by smoking and high alcohol consumption. By far the best thing to do for your health is to quit smoking and cut down on alcohol.

“We know these kind of lifestyle changes can be very challenging, so encouraging flavonoid consumption might be a novel way to alleviate the increased risk, while also encouraging people to quit smoking and reduce their alcohol intake.”
How much is enough?

Participants consuming about 500mg of total flavonoids each day had the lowest risk of a cancer or heart disease-related death.

“It’s important to consume a variety of different flavonoid compounds found in different plant-based foods and drinks. This is easily achievable through the diet: one cup of tea, one apple, one orange, 100g of blueberries, and 100g of broccoli would provide a wide range of flavonoid compounds and over 500mg of total flavonoids.”
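The arithmetic behind that example day can be sketched as below. The per-serving flavonoid values are placeholders chosen only to illustrate how several ordinary servings can add up past the 500mg mark; they are not figures from the study.

```python
# Illustrative tally of the example day above. The mg values per serving
# are placeholders for the sake of the arithmetic, NOT data from the study.
servings = {
    "cup of tea":        150,
    "apple":             100,
    "orange":             50,
    "100 g blueberries": 150,
    "100 g broccoli":     60,
}
total = sum(servings.values())
print(total, total >= 500)  # 510 True
```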

Dr Bondonno said while the research had established an association between flavonoid consumption and lower risk of death, the exact nature of the protective effect was unclear but likely to be multifaceted.

“Alcohol consumption and smoking both increase inflammation and damage blood vessels, which can increase the risk of a range of diseases,” she said.

“Flavonoids have been shown to be anti-inflammatory and improve blood vessel function, which may explain why they are associated with a lower risk of death from heart disease and cancer.”

Dr Bondonno said the next step for the research was to look more closely at which types of heart disease and cancer were most protected by flavonoids.

‘Flavonoid intake is associated with lower mortality in the Danish Diet Cancer and Health Cohort’ was recently published in Nature Communications.

The ECU study was a collaboration with researchers from the Herlev & Gentofte University Hospital, Aarhus University, as well as the Danish Cancer Society Research Centre, Aalborg University Hospital, the Universities of Western Australia and the International Agency for Research on Cancer.

Contacts and sources:
Edith Cowan University

Citation: Flavonoid intake is associated with lower mortality in the Danish Diet Cancer and Health Cohort.
Nicola P. Bondonno, Frederik Dalgaard, Cecilie Kyrø, Kevin Murray, Catherine P. Bondonno, Joshua R. Lewis, Kevin D. Croft, Gunnar Gislason, Augustin Scalbert, Aedin Cassidy, Anne Tjønneland, Kim Overvad, Jonathan M. Hodgson. Nature Communications, 2019; 10 (1) DOI: 10.1038/s41467-019-11622-x

July 2019 Hottest July Since Record Keeping Began 140 Years Ago

Much of the planet sweltered in unprecedented heat in July, as temperatures soared to new heights in the hottest month ever recorded. The record warmth also shrank Arctic and Antarctic sea ice to historic lows.

Here’s a closer look into NOAA’s latest monthly global climate report:

Climate by the numbers

The average global temperature in July was 1.71 degrees F above the 20th-century average of 60.4 degrees, making it the hottest July in the 140-year record, according to scientists at NOAA’s National Centers for Environmental Information. The previous hottest month on record was July 2016.

Nine of the 10 hottest Julys have occurred since 2005—with the last five years ranking as the five hottest. Last month was also the 43rd consecutive July and 415th consecutive month with above-average global temperatures.

Year to date | January through July

The period from January through July produced a global temperature that was 1.71 degrees F above the 20th-century average of 56.9 degrees, tying with 2017 as the second-hottest year to date on record.
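The NOAA figures quoted above are anomalies in degrees Fahrenheit relative to 20th-century averages; the quick arithmetic below recovers the absolute temperatures and converts the anomaly to Celsius.

```python
# Arithmetic check on the NOAA anomalies quoted above (degrees Fahrenheit
# relative to the respective 20th-century averages).
july_avg_20c = 60.4   # July 20th-century average, deg F
ytd_avg_20c  = 56.9   # January-July 20th-century average, deg F
anomaly_f    = 1.71   # deg F above average, reported for both periods

july_2019 = july_avg_20c + anomaly_f   # absolute July 2019 temperature
ytd_2019  = ytd_avg_20c + anomaly_f    # absolute Jan-Jul 2019 temperature
anomaly_c = anomaly_f * 5 / 9          # same anomaly in degrees Celsius

print(round(july_2019, 2), round(ytd_2019, 2), round(anomaly_c, 2))
# 62.11 58.61 0.95
```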

It was the hottest year to date for parts of North and South America, Asia, Australia, New Zealand, the southern half of Africa, portions of the western Pacific Ocean, western Indian Ocean and the Atlantic Ocean. 

An annotated map of the world showing notable climate events that occurred around the world in July 2019; for details, see the list below.

More notable stats and facts

Record-low sea ice: Average Arctic sea ice set a record low for July, running 19.8% below average – surpassing the previous historic low of July 2012.

Average Antarctic sea-ice coverage was 4.3% below the 1981-2010 average, making it the smallest for July in the 41-year record.

Some cool spots: Parts of Scandinavia and western and eastern Russia had temperatures at least 2.7 degrees F below average.

More: Access NOAA’s full climate report and download images.

Contacts and sources:
John Bateman

Moon Glows Brighter Than Sun in Gamma Ray Images

If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That’s how NASA’s Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade.

Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon’s disk or any surface features. Instead, Fermi’s Large Area Telescope (LAT) detects a prominent glow centered on the Moon’s position in the sky.

Banner image: The Moon shines brightly in gamma rays as seen in this time sequence from NASA’s Fermi Gamma-ray Space Telescope. Each 5-by-5-degree image is centered on the Moon and shows gamma rays with energies above 31 million electron volts, or tens of millions of times that of visible light. At these energies, the Moon is actually brighter than the Sun. Brighter colors indicate greater numbers of gamma rays. This animation shows how longer exposure, ranging from two to 128 months (10.7 years), improved the view.
Credit: NASA/DOE/Fermi LAT Collaboration

Mario Nicola Mazziotta and Francesco Loparco, both at Italy’s National Institute of Nuclear Physics in Bari, have been analyzing the Moon’s gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays.

“Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes,” explained Mazziotta.

Because the particles are electrically charged, they’re strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.

Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts — more than 10 million times greater than the energy of visible light — and organized them over time, showing how longer exposures improve the view.
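The "more than 10 million times" comparison above is easy to verify. The assumed visible-photon energy of 2.5 eV is a typical mid-visible value chosen for illustration, not a number from the study.

```python
# Rough check of the gamma-ray vs. visible-light energy comparison.
gamma_ev   = 31e6   # Fermi LAT energy threshold used in the images, eV
visible_ev = 2.5    # assumed typical mid-visible photon energy, eV

ratio = gamma_ev / visible_ev
print(f"{ratio:.1e}")  # 1.2e+07, i.e. more than ten million times
```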

“Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full,” said Loparco.


As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.

While the Moon’s gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun’s denser atmosphere, producing gamma rays that can reach Fermi.

Although the gamma-ray Moon doesn’t show a monthly cycle of phases, its brightness does change over time. Fermi LAT data show that the Moon’s brightness varies by about 20% over the Sun’s 11-year activity cycle. Variations in the intensity of the Sun’s magnetic field during the cycle change the rate of cosmic rays reaching the Moon, altering the production of gamma rays.

Contacts and sources:
Francis Reddy
NASA/Goddard Space Flight Center