Friday, June 28, 2019

Climate Impact of Clouds Made from Airplane Contrails May Triple by 2050



In the right conditions, airplane contrails can linger in the sky as contrail cirrus - ice clouds that can trap heat inside the Earth's atmosphere. Their climate impact has been largely neglected in global schemes to offset aviation emissions, even though contrail cirrus have contributed more to warming the atmosphere than all the CO2 emitted by aircraft since the start of aviation. A new study published in the European Geosciences Union (EGU) journal Atmospheric Chemistry and Physics has found that, due to growing air traffic, the climate impact of contrail cirrus will become even more significant in the future, tripling by 2050.

Contrail cirrus change global cloudiness, which creates an imbalance in the Earth's radiation budget - called 'radiative forcing' - that results in warming of the planet. The larger this radiative forcing, the more significant the climate impact. In 2005, air traffic made up about 5% of all anthropogenic radiative forcing, with contrail cirrus being the largest contributor to aviation's climate impact.


Radiative forcing due to the formation of contrails for present-day climate conditions and (a) present-day air traffic volume, and (b) for air traffic volume expected for the year 2050. Panels on the right hand side show the radiative forcing for climate conditions expected for 2050 and (c) air traffic volume for the year 2050, and (d) air traffic volume for the year 2050 assuming an increase in fuel efficiency and a 50% decrease in soot emissions. The numbers in the boxes show the global mean radiative forcing for each simulation.

Credit: Bock and Burkhardt, Atmos. Chem. Phys., 2019


"It is important to recognise the significant impact of non-CO2 emissions, such as contrail cirrus, on climate and to take those effects into consideration when setting up emission trading systems or schemes like the Corsia agreement," says Lisa Bock, a researcher at DLR, the German Aerospace Center, and lead-author of the new study. Corsia, the UN's scheme to offset air traffic carbon emissions from 2020, ignores the non-CO2 climate impacts of aviation.

But the new Atmospheric Chemistry and Physics study shows these non-CO2 climate impacts cannot be neglected. Bock and her colleague Ulrike Burkhardt estimate that contrail cirrus radiative forcing will be 3 times larger in 2050 than in 2006. This increase is predicted to be faster than the rise in CO2 radiative forcing since expected fuel efficiency measures will reduce CO2 emissions.

The increase in contrail cirrus radiative forcing is due to air traffic growth, expected to be 4 times larger in 2050 compared to 2006 levels, and a slight shift of flight routes to higher altitudes, which favours the formation of contrails in the tropics. The impact on climate due to contrail cirrus will be stronger over North America and Europe, the busiest air traffic areas on the globe, but will also significantly increase in Asia.

"Contrail cirrus' main impact is that of warming the higher atmosphere at air traffic levels and changing natural cloudiness. How large their impact is on surface temperature and possibly on precipitation due to the cloud modifications is unclear," says Burkhardt. Bock adds: "There are still some uncertainties regarding the overall climate impact of contrail cirrus and in particular their impact on surface temperatures because contrail cirrus themselves and their effects on the surface are ongoing topics of research. But it's clear they warm the atmosphere."

Cleaner aircraft emissions would solve part of the problem highlighted in the study. Reducing the number of soot particles emitted by aircraft engines decreases the number of ice crystals in contrails, which in turn reduces the climate impact of contrail cirrus. However, "larger reductions than the projected 50% decrease in soot number emissions are needed," says Burkhardt. She adds that even 90% reductions would likely not be enough to limit the climate impact of contrail cirrus to 2006 levels.

Another often discussed mitigation method is rerouting flights to avoid regions that are particularly sensitive to the effects of contrail formation. But Bock and Burkhardt caution against measures that reduce the climate impact of short-lived contrail cirrus at the cost of increasing emissions of long-lived CO2, particularly given the uncertainties in estimating the climate impact of contrail cirrus. They say that measures to reduce soot emissions would be preferable for minimising the overall radiative forcing of future air traffic, since they do not involve an increase in CO2 emissions.

"This would enable international aviation to effectively support measures to achieve the Paris climate goals," Burkhardt concludes.




Contacts and sources:
Barbara Ferreira
European Geosciences Union


Citation: Contrail cirrus radiative forcing for future air traffic.
Lisa Bock and Ulrike Burkhardt. Atmospheric Chemistry and Physics, 19, 8163-8174, https://doi.org/10.5194/acp-19-8163-2019, 2019



Study Shows How Fast Human Brains ‘See’ The World



A new study from Western University’s renowned Brain and Mind Institute shows how fast our brain makes sense of a world in which the images of people, places and things are constantly shrinking, expanding and changing on the retina at the back of our eyes.

The Western-led research team discovered that once the image of an object falls on the retina, it takes just over a tenth of a second for the brain to understand the real-world size of that object.

Juan Chen, Melvyn Goodale and their collaborators at Western’s Brain and Mind Institute, South China Normal University and the University of East Anglia (U.K.) also found that the representations of the real size of objects in the world emerge in the very earliest stages of visual processing in the cerebral cortex of the brain. These findings were published in Current Biology.

A visual representation of size constancy using an animated gif of the oil painting Paris Street; Rainy Day by Gustave Caillebotte (1877).

Animated by Western University

As Goodale explains, our innate ability to see the real-world size of objects, despite dramatic changes in the images captured by our eyes, is called size constancy.

“Remarkably we see a world that is stable, and things are perceived to be the size they really are,” says Goodale, the founding director of the Brain and Mind Institute and senior author of the study. “This is a good thing because otherwise our perception of the world would be chaotic and impossible to interpret.”


Credit: Western University


It is understood that human brains create size constancy by calculating the distance of objects we see – the further away the object, the smaller the retinal image. As a result, even though the image of a car driving away from us becomes smaller and smaller on our retina, we continue to see the car as being the same size.
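
To make the geometry concrete, here is a minimal sketch (not from the study; the object sizes and distances are illustrative) of how the retinal image shrinks with distance while a distance-corrected estimate of real-world size stays constant:

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Angle the object subtends on the retina at a given viewing distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

def inferred_size_m(angle_deg, distance_m):
    """Invert the visual angle with a distance estimate: the essence of size constancy."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

car_length = 4.5  # metres (illustrative)
for d in (10, 20, 40, 80):  # the car drives away from us
    angle = visual_angle_deg(car_length, d)
    print(f"{d:3d} m away: retinal image {angle:5.2f} deg, "
          f"inferred real size {inferred_size_m(angle, d):.2f} m")

# The retinal angle roughly halves with each doubling of distance,
# yet the distance-corrected size stays at 4.5 m: size constancy.
```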

For the study, Chen, Goodale and their collaborators used electroencephalography (EEG) to measure the tiny electrical signals in the brain that occur when people are presented with objects of different sizes at different distances. Unlike previous experiments, in which investigators manipulated the apparent distance of objects by changing their appearance on a computer screen, the Brain and Mind investigators moved the entire display closer or further away from the observers while their brain activity was being measured with EEG.

By conducting the experiment in this way, all of the cues to distance, such as stereo vision, pictorial cues and the vergence of the eyes were available and completely congruent with one another. Using this technique, the team was also able to pinpoint exactly when size constancy emerges in the visual areas of the brain.

“In the first 100 milliseconds after the presentation of an object on the screen, the EEG signal reflects the size of the image on the retina of the eye but by 150 milliseconds, the signal represents the real size of the object,” explains Goodale.

This change from retinal to real-size coding in the EEG signals reflects the merging of information about the size of the retinal image and information about the distance of the object from the observer.

This Brain and Mind Institute discovery about how the human brain allows us to see the real size of objects in the world can help engineers who are trying to devise machine vision systems for everything from robots to self-driving cars. It also represents a first step in understanding how our brain provides us with a compelling but stable representation of the visual world.


Contacts and sources:
Jeff Renaud
University of Western Ontario






Life on Mars Possible after Last Great Meteorite Impact Nearly 4.5 Billion Years Ago



A new international study led by Western University shows that Mars’ first ‘real chance’ at developing life started very early, 4.48 billion years ago, when giant, life-inhibiting meteorites stopped striking the red planet.

Tiny igneous zircon grains within this rock fragment were fractured by the launch from Mars but otherwise unaltered for more than 4.4 billion years. The image was taken with an optical polarizing compound microscope at Western's Zircon & Accessory Phase Laboratory.

Credit: Desmond Moser, Western University

These findings, published online today in Nature Geoscience, suggest that conditions under which life could have thrived may have occurred on Mars from around 4.2 to 3.5 billion years ago. This predates the earliest evidence of life on Earth by up to 500 million years.

It is known that the number and size of meteorite impacts on Mars and Earth gradually declined after the planets formed. Eventually, impacts became small and infrequent enough that the near-surface conditions could allow life to develop. However, when the heavy meteorite bombardment waned has long been debated. It has been proposed that there was a ‘late’ phase of heavy bombardment of both planets that ended around 3.8 billion years ago.

For the study, Desmond Moser from Western's Departments of Earth Sciences and Geography, and his students and collaborators, analyzed the oldest-known mineral grains from meteorites that are believed to have originated from Mars' southern highlands. These ancient grains, imaged down to atomic levels, are almost unchanged since they crystallized near the surface of Mars.


A microscope image, taken in Western's Zircon and Accessory Phase Laboratory, of a thin slice through the meteorite shows the most ancient (>4.43 billion years) crust of Mars. Because it shows no sign of giant impact processes, any giant impacts had to have happened earlier.

In comparison, analysis of impacted areas on Earth and its Moon shows that more than 80 per cent of the grains studied contained features associated with impacts, such as exposure to intense pressures and temperatures. The analyses of Earth, Mars and Moon samples were conducted at Western’s nationally unique Zircon & Accessory Phase Laboratory, which is led by Moser.

The results suggest that heavy bombardment of Mars ended before the analyzed minerals formed, which means the Martian surface would have become habitable by the time water is believed to have been abundant there. Water was also present on Earth by this time, so it is plausible that the solar system's biological clock started much earlier than previously accepted.

“Giant meteorite impacts on Mars between 4.2 and 3.5 billion years ago may have actually accelerated the release of early waters from the interior of the planet setting the stage for life-forming reactions,” says Moser. “This work may point out good places to get samples returned from Mars.”



Contacts and sources:
Jeff Renaud
University of Western Ontario






Opioids Study Shows High-Risk Counties Across the Country, Suggests Local Solutions



Dozens of counties in the Midwest and South are at the highest risk for opioid deaths in the United States, say University of Michigan researchers.

In a study of more than 3,000 counties across the U.S., the researchers found that residents of 412 counties are at least twice as likely to be at high risk for opioid overdose deaths and to lack providers who can deliver medications to treat opioid use disorder.

States with among the most high-risk counties include: North Carolina, Ohio, Virginia, Kentucky, Michigan, Tennessee, Illinois, Indiana, Georgia, Oklahoma, West Virginia, South Carolina, Wisconsin and Florida.

Map shows opioid high-risk counties (red), defined by high rates of opioid overdose deaths and a low rate of providers of medications to treat opioid use disorder.

Credit: Rebecca L. Haffajee, JD, PhD, MPH; Lewei Allison Lin, MD, MS; Amy S. B. Bohnert, PhD, MHS; Jason E. Goldstick, PhD


The study, published in the June 28 issue of JAMA Network Open, suggests strategies for increasing treatment for opioid addiction, including by increasing the number of primary care clinicians capable of providing medications as well as improving employment opportunities in those communities.

"We hope policymakers can use this information to funnel additional money and resources to specific counties within their states," said lead author Rebecca Haffajee, assistant professor of health management and policy at the U-M School of Public Health. "We need more strategies to augment and increase the primary care provider workforce in those high-risk counties, people who are willing and able to provide opioid use disorder treatments."

The U-M researchers looked at opioid overdose mortality rates in 3,142 U.S. counties between January 2015 and December 2017. They defined an opioid high-risk county as one with opioid overdose mortality above the national rate and with the availability of providers to deliver opioid use disorder medications below the national rate.
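
As a rough sketch of that definition (hypothetical field names; the national thresholds below are illustrative placeholders, not the study's actual rates), classifying a county comes down to two comparisons:

```python
from dataclasses import dataclass

@dataclass
class County:
    name: str
    overdose_deaths_per_100k: float  # opioid overdose mortality rate
    providers_per_100k: float        # publicly listed providers of OUD medications

# Illustrative placeholders, not the study's actual national rates.
NATIONAL_MORTALITY_PER_100K = 13.0
NATIONAL_PROVIDERS_PER_100K = 8.0

def is_high_risk(c: County) -> bool:
    """High risk = mortality above the national rate AND provider availability below it."""
    return (c.overdose_deaths_per_100k > NATIONAL_MORTALITY_PER_100K
            and c.providers_per_100k < NATIONAL_PROVIDERS_PER_100K)

counties = [
    County("Example A", 25.4, 2.1),   # high mortality, few providers -> high risk
    County("Example B", 25.4, 12.0),  # high mortality, adequate providers
    County("Example C", 6.0, 2.1),    # low mortality, few providers
]
print([c.name for c in counties if is_high_risk(c)])  # ['Example A']
```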

The study, they say, is the first to include data on all three opioid use disorder medications on the market: methadone, buprenorphine and naltrexone. Their analysis included publicly listed providers of methadone (1,517 opioid treatment programs), buprenorphine (24,851 clinicians approved to prescribe the medication) and the extended-release naltrexone product Vivitrol (5,222 health care providers, as compiled by the drug manufacturer).

In their cross-sectional study, the researchers also looked at demographics, workforce, access to health care insurance, road density, urbanicity and opioid prescriptions.

Among counties analyzed, they found that:


412 counties (13%) are classified as high-risk, having both high opioid overdose mortality and low treatment capacity.
751 counties (24%) had a high rate of opioid overdose mortality.
1,457 (46%) counties lacked a publicly available provider of opioid use disorder medication.
946 out of 1,328 rural counties (71%) lacked a publicly available provider of opioid use disorder medication.



The study found that certain factors--such as a younger population, lower rates of unemployment and higher density of primary care physicians--are associated with a lower risk of opioid overdose death and lack of capacity to treat opioid use disorder.

Haffajee, also a member of the U-M Institute for Healthcare Policy and Innovation, said it's important to understand the differences of the opioid epidemic at the local level.

"In rural areas, the opioid crisis is often still a prescription opioid issue. But in metropolitan counties, highly potent illicit fentanyl and other synthetic opioids are more prevalent and are killing people," she said. "That's likely why we identified metropolitan areas as higher-risk, despite the fact that these counties typically have some (just not enough) treatment providers.

"Understanding these differences at the sub-state level and coming up with strategies that target specific county needs can allow us to more efficiently channel the limited amount of resources we have to combat this crisis."






Contacts and sources:
Nardy Baeza Bickel
University of Michigan


Citation: Characteristics of US Counties With High Opioid Overdose Mortality and Low Capacity to Deliver Medications for Opioid Use Disorder. JAMA Network Open, 2019; DOI: 10.1001/jamanetworkopen.2019.6373
http://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2019.6373

In addition to Haffajee, authors included Lewei Allison Lin, assistant professor of psychiatry; Amy Bohnert, associate professor of medicine; and Jason Goldstick, research assistant professor of emergency medicine. All are part of the Opioid Solutions Network at U-M.

JAMA Network Open: https://jamanetwork.com/journals/jamanetworkopen

U-M Opioid Solutions: https://opioids.umich.edu



Smart Materials Provide Real-Time Insight into Wearers' Emotions



Smart wearable technology that changes color, heats up, squeezes or vibrates as your emotions are heightened has the potential to help people with affective disorders better control their feelings.

Researchers from Lancaster University's School of Computing and Communications have worked with smart materials on wrist-worn prototypes that can aid people diagnosed with depression, anxiety and bipolar disorder in monitoring their emotions.

Wrist bands that change color depending upon the level of emotional arousal allow users to easily see or feel what is happening without having to refer to mobile or desktop devices.

Co-creator Muhammad Umair wearing one of the prototype smart materials wrist bands.
Credit: Paul Turner/Lancaster University

"Knowing our emotions and how we can control them are complex skills that many people find difficult to master," said co-author Muhammad Umair, who will present the research at DIS 19 in San Diego.

"We wanted to create low-cost, simple prototypes to support understanding and engagement with real-time changes in arousal. The idea is to develop self-help technologies that people can use in their everyday life and be able to see what they are going through. Wrist-worn private affective wearables can serve as a bridge between mind and body and can really help people connect to their feelings.

"Previous work on this technologies has focused on graphs and abstract visualizations of biosignals, on traditions mobile and desktop interfaces. But we have focused on devices that are wearable and provide not only visual signals but also can be felt through vibration, a tightening feeling or heat sensation without the need to access other programs - as a result we believe the prototype devices provide real-time rather than historic data."

The researchers worked with thermochromic materials that change color when heated, as well as devices that vibrate or squeeze the wrist. In tests, participants wore the prototypes for between eight and 16 hours, each reporting four to eight occasions when the device activated - during events such as playing games, working, having conversations, watching movies, laughing, relaxing and becoming scared.

One of the prototype smart materials wrist bands.

Credit: Paul Turner/Lancaster University

A skin response sensor picked up changes in arousal - through galvanic skin response, which measures the electrical conductivity of the skin - and represented them through the various prototype designs. The smart materials that responded both instantly and continuously, and that had a physical rather than visual output, were the most effective.
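
A minimal sketch of that sensing loop (not the Lancaster prototypes' firmware; the threshold, smoothing factor and function names are assumptions) might map a rising galvanic skin response onto a physical actuator like this:

```python
def smooth(baseline, sample, alpha=0.02):
    """Slow exponential moving average tracking the wearer's resting skin conductance."""
    return (1 - alpha) * baseline + alpha * sample

def actuate(level):
    """Stand-in for hardware output: heat, squeeze or vibration intensity from 0 to 1."""
    print(f"actuator intensity: {level:.2f}")

def run(gsr_samples_microsiemens, threshold=1.15):
    baseline = gsr_samples_microsiemens[0]
    for sample in gsr_samples_microsiemens:
        baseline = smooth(baseline, sample)
        ratio = sample / baseline
        # Output scales continuously with how far arousal has risen above
        # baseline (instant, constant, physical feedback) and is off otherwise.
        actuate(min(1.0, ratio - threshold) if ratio > threshold else 0.0)

run([5.0, 5.1, 5.0, 6.5, 7.2, 6.0, 5.2])  # illustrative skin-conductance trace
```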

Muhammad added: "Participants started to pay attention to their in-the-moment emotional responses, realizing that their moods had changed quickly and understanding what it was that was causing the device to activate. It was not always an emotional response, but sometimes other activities - such as taking part in exercise - could cause a reaction.

"One of the most striking findings was that the devices helped participants started to identify emotional responses which they had been unable to beforehand, even after only two days.

"We believes that a better understanding of the materials we employed and their qualities could open up new design opportunities for representing heightened emotions and allowing people a better sense of sense and emotional understanding."


Contacts and sources:
Paul Turner
Lancaster University






Researchers Decipher the History of Supermassive Black Holes in the Early Universe



Astrophysicists at Western University have found evidence for the direct formation of black holes that do not need to emerge from a star remnant. The production of black holes in the early universe, formed in this manner, may provide scientists with an explanation for the presence of extremely massive black holes at a very early stage in the history of our universe.

Shantanu Basu and Arpan Das from Western's Department of Physics & Astronomy have developed an explanation for the observed distribution of supermassive black hole masses and luminosities, for which there was previously no scientific explanation. The findings were published today in Astrophysical Journal Letters.

An illustration of a supermassive black hole.

Credit:  Scott Woods, Western University

The model is based on a very simple assumption: supermassive black holes form very, very quickly over very, very short periods of time and then, suddenly, they stop. This explanation contrasts with the current understanding of how stellar-mass black holes are formed, which is that they emerge when the centre of a very massive star collapses in upon itself.

"This is indirect observational evidence that black holes originate from direct-collapses and not from stellar remnants," says Basu, an astronomy professor at Western who is internationally recognized as an expert in the early stages of star formation and protoplanetary disk evolution.

Basu and Das developed the new mathematical model by calculating the mass function of supermassive black holes that form over a limited time period and undergo rapid exponential growth of mass. The mass growth can be regulated by the Eddington limit, which is set by a balance of radiation and gravitational forces, or can even exceed it by a modest factor.
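
As a back-of-the-envelope illustration of that growth (not the authors' full mass-function calculation; the seed masses and growth window below are illustrative), Eddington-limited accretion is exponential, with an e-folding (Salpeter) time of roughly 50 million years for the standard ~10% radiative efficiency:

```python
import math

SALPETER_TIME_MYR = 50.0  # e-folding time at the Eddington limit, ~10% efficiency

def mass_after(seed_mass_msun, time_myr, eddington_ratio=1.0):
    """Exponential Eddington-limited growth: M(t) = M0 * exp(ratio * t / t_Salpeter)."""
    return seed_mass_msun * math.exp(eddington_ratio * time_myr / SALPETER_TIME_MYR)

# A direct-collapse seed (~1e5 solar masses, illustrative) growing for 500 Myr:
print(f"{mass_after(1e5, 500):.1e} M_sun")  # ~2e9: comparable to observed early quasars
# A stellar-remnant seed (~10 solar masses) over the same window falls far short
# unless it sustains super-Eddington accretion:
print(f"{mass_after(10, 500):.1e} M_sun")   # ~2e5
```

This arithmetic is why a limited burst of rapid direct-collapse formation, followed by a shutoff, can reproduce billion-solar-mass black holes so early in cosmic history.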

"Supermassive black holes only had a short time period where they were able to grow fast and then at some point, because of all the radiation in the universe created by other black holes and stars, their production came to a halt," explains Basu. "That's the direct-collapse scenario."

During the last decade, many supermassive black holes that are a billion times more massive than the Sun have been discovered at high 'redshifts,' meaning they were in place in our universe within 800 million years after the Big Bang. The presence of these young and very massive black holes challenges our understanding of black hole formation and growth. The direct-collapse scenario allows for initial masses that are much greater than implied by the standard stellar remnant scenario, and can go a long way to explaining the observations. This new result provides evidence that such direct-collapse black holes were indeed produced in the early universe.

Basu believes that these new results can be used with future observations to infer the formation history of the extremely massive black holes that exist at very early times in our universe.



Contacts and sources:
 Jeff Renaud
Western University

Milk: Best Drink to Reduce Burn from Chili Peppers



People who order their Buffalo wings especially spicy and sometimes find them to be too "hot," should choose milk to reduce the burn, according to Penn State researchers, who also suggest it does not matter if it is whole or skim.

The research originated as an effort by the Sensory Evaluation Center in Penn State's College of Agricultural Sciences to identify a beverage to clear the palates of participants in tasting studies involving capsaicin. An extract from chili peppers, capsaicin is considered an irritant because it causes warming and burning sensations.

Capsaicin, an extract from chili peppers like these, is considered an irritant, due to the warming and burning sensations it causes. Widespread consumption of chili peppers and foods such as wings spiced with sriracha and hot sauce show that many people enjoy this burn. But these sensations also can be overwhelming.
Credit: Basile Morin / Wikimedia Commons

"We were interested in giving capsaicin solutions to many test participants and we were concerned with the lingering burn at the end of an experiment," said center director John Hayes, associate professor of food science. "Initially, one of our undergrad researchers wanted to figure out the best way to cut the burn for people who found our samples to be too intense."

Widespread consumption of chili peppers and foods such as wings spiced with sriracha and hot sauce show that many people enjoy this burn, Hayes added. But these sensations also can be overwhelming. While folklore exists on the ability of specific beverages to mitigate capsaicin burn, quantitative data to support these claims are lacking.

The researchers tested seven beverages with 72 people — 42 women and 30 men. Participants drank spicy Bloody Mary mix, containing capsaicin. Immediately after swallowing, they rated the initial burn.

Then, in subsequent separate trials, they drank purified water, cola, cherry-flavored Kool-Aid, seltzer water, non-alcoholic beer, skim milk and whole milk. Participants continued to rate perceived burn every 10 seconds for two minutes. There were eight trials. Seven included one of the test beverages and one trial did not include a test beverage.

Researchers did not expect skim milk to be as effective at reducing the burn as whole milk. The finding indicates, they say, that the fat content of the beverage is not the critical factor and suggests the presence of protein may be more relevant than the fat content.



Credit: Stefan Kühn / Wikimedia Commons


The initial burn of the spicy Bloody Mary mix was, on average, rated below "strong" but above "moderate" by participants and continued to decay over the two minutes of the tests to a mean just above "weak," according to lead researcher Alissa Nolden. All beverages significantly reduced the burn of the mix, but the largest reductions in burn were observed for whole milk, skim milk and Kool-Aid.

More work is needed to determine how these beverages reduce burn, noted Nolden, a doctoral student in food science at Penn State when she conducted the research, now an assistant professor in the Department of Food Science at the University of Massachusetts. She suspects it is related to how capsaicin reacts in the presence of fat, protein and sugar.

"We weren't surprised that our data suggest milk is the best choice to mitigate burn, but we didn't expect skim milk to be as effective at reducing the burn as whole milk," she said. "That appears to mean that the fat context of the beverage is not the critical factor and suggests the presence of protein may be more relevant than lipid content."

Following the completion of all the trials, the participants answered two questions: "How often do you consume spicy food?" and "Do you like spicy food?" Researchers had hoped to see some correlation between participants' perception of the burn from capsaicin and their exposure to spicy food, Nolden pointed out. But no such relationship emerged from the study.

The findings of the research, recently published in Physiology and Behavior, might surprise some spicy foods consumers, but they should not, Nolden noted.

"Beverages with carbonation such as beer, soda and seltzer water predictably performed poorly at reducing the burn of capsaicin," she said. "And if the beer tested would have contained alcohol, it would have been even worse because ethanol amplifies the sensation."

While most people seem to drink beer or soft drinks with their spicy "hot" wings, Penn State research shows that milk or non-carbonated beverages would be better choices to reduce the oral burn triggered by capsaicin.

Credit: Wikimedia Commons


In the case of Kool-Aid, Nolden and her colleagues do not think that the drink removes the capsaicin but rather overwhelms it with a sensation of sweet.

The study was novel, Nolden believes, because it incorporated products found on food-market shelves, making it more user friendly.

"Traditionally, in our work, we use capsaicin and water for research like this, but we wanted to use something more realistic and applicable to consumers, so we chose spicy Bloody Mary mix," she said. "That is what I think was really cool about this project — all the test beverages are commercially available, too."

Also involved in the research was Gabrielle Lenart, an undergraduate student in food science.

The National Institutes of Health and the U.S. Department of Agriculture's National Institute of Food and Agriculture supported this work.



Contacts and sources:
Jeff Mulhollem
Penn State

Pig-Pen Effect: Mixing Skin Oil and Ozone Can Produce a Personal Pollution Cloud


When ozone and skin oils meet, the resulting reaction may help remove ozone from an indoor environment, but it can also produce a personal cloud of pollutants that affects indoor air quality, according to a team of researchers.

In a computer model of indoor environments, the researchers show that a range of volatile and semi-volatile gases and substances are produced when ozone, a form of oxygen that can be toxic, reacts with skin oils carried by soiled clothes, a reaction that some researchers have likened to the less-than-tidy Peanuts comic strip character.

“When the ozone is depleted through human skin, we become the generator of the primary products, which can cause sensory irritations,” said Donghyun Rim, assistant professor of architectural engineering and an Institute for CyberScience associate, Penn State. “Some people call this higher concentration of pollutants around the human body the personal cloud, or we call it the 'Pig-Pen Effect.'”

The substances that are produced by the reaction include organic compounds, such as carbonyls, that can irritate the skin and lungs, said Rim. People with asthma may be particularly vulnerable to ozone and ozone reaction products, he said.

According to the researchers, who reported their findings in a recent issue of Nature’s Communications Chemistry, skin oils contain substances, such as squalene, fatty acids and wax esters. If a person wears the same clothes too long — for example, more than a day — without washing, there is a chance that the clothes become more saturated with the oils, leading to a higher chance of reaction with ozone, which is an unstable gas.

“Squalene can react very effectively with ozone,” said Rim. “Squalene has a higher reaction rate with ozone because it has a double carbon bond and, because of its chemical makeup, the ozone wants to jump in and break this bond.”

Indoors, ozone concentration can range from 5 to 25 parts per billion — ppb — depending on how the air is circulating from outside to inside and what types of chemicals and surfaces are used in the building. In a polluted city, for example, the amount of ozone in indoor environments may be much higher.

“A lot of people think of the ozone layer when we talk about ozone,” said Rim. “But, we’re not talking about that ozone, that’s good ozone. But ozone at the ground level has adverse health impacts.”

When ozone hits oils on skin and in dirty clothes, it can produce a personal cloud of irritants.


Credit: Penn State


Wearing clean clothes might be a good idea for a lot of reasons, but it might not necessarily lead to reducing exposure to ozone, said Rim. For example, a single soiled t-shirt helps keep ozone out of the breathing zone by removing about 30 to 70 percent of the ozone circulating near a person.

“If you have clean clothes, that means you might be breathing in more of this ozone, which isn’t good for you either,” said Rim.

Rim said that the research is one part of a larger project to better understand the indoor environment where people spend most of their time.

“The bottom line is that we, humans, spend more than 90 percent of our time in buildings, or indoor environments, but, as far as actual research goes, there are still a lot of unknowns about what's going on and what types of gases and particles we're exposed to in indoor environments,” said Rim. “The things that we inhale, that we touch, that we interact with, many of those things are contributing to the chemical accumulations in our body and our health.”

Rather than advising people whether to wear clean or dirty clothes, the researchers suggest that people should focus on keeping ground ozone levels down. Better building design and filtration, along with cutting pollution, are ways that could cut the impact of the Pig-Pen Effect, they added.

To build and validate the models, the researchers used experimental data from prior experiments investigating reactions between ozone and squalene, and between ozone and clothing. The researchers then analyzed further how the squalene-ozone reaction creates pollutants in various indoor conditions.

The team relied on computer modeling to simulate indoor spaces that vary with ventilation conditions and how inhabitants of those spaces manage air quality, Rim said.
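
For intuition about this kind of simulation, here is a minimal one-box sketch (not the study's model; every parameter value below is an illustrative assumption): indoor ozone settles at a balance between air exchange with outdoors and first-order removal on indoor surfaces, including clothing.

```python
# One-box steady state: ventilation brings ozone in, surfaces remove it.
#   C_in = lambda * C_out / (lambda + sum of surface sink rates)
# Each sink contributes roughly deposition_velocity * area / room_volume (1/h).

def indoor_ozone_ppb(c_out_ppb, air_exchange_per_h, sink_rates_per_h):
    total_sink = sum(sink_rates_per_h)
    return air_exchange_per_h * c_out_ppb / (air_exchange_per_h + total_sink)

C_OUT = 40.0   # outdoor ozone, ppb (illustrative urban value)
ACH = 0.5      # air changes per hour (assumed)
K_ROOM = 1.0   # removal by walls, carpet, etc., 1/h (assumed)
K_SHIRT = 1.5  # extra removal by a soiled shirt near the breathing zone (assumed)

clean = indoor_ozone_ppb(C_OUT, ACH, [K_ROOM])
soiled = indoor_ozone_ppb(C_OUT, ACH, [K_ROOM, K_SHIRT])
print(f"clean clothes:  {clean:.1f} ppb")
print(f"soiled clothes: {soiled:.1f} ppb "
      f"({100 * (1 - soiled / clean):.0f}% less ozone, but more skin-oil products)")
```

The trade-off described in the article falls out directly: the soiled shirt lowers the ozone a person inhales, but every ozone molecule it removes becomes a squalene-ozonolysis product in the wearer's personal cloud.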

In the future, the team may look at how other common indoor sources, such as candle and cigarette smoke, could affect the indoor air quality and its impact on human health.

Rim worked on the study with Pascale S.J. Lakey, postdoctoral researcher and first author of the paper; Michael von Domaros, postdoctoral scholar; Krista M. Parry, graduate student; Douglas J. Tobias, professor, and Manabu Shiraiwa, associate professor, all in the department of chemistry, University of California, Irvine; Glenn C. Morrison, professor of environmental science and engineering, and Youngbo Won, graduate student in architectural engineering, Penn State.

The researchers are part of the Chemistry of Indoor Environment Consortium, a network of scientists studying indoor environments, which is supported by the Alfred P. Sloan Foundation.




Contacts and sources:
Matt Swayne
Penn State


Citation: The impact of clothing on ozone and squalene ozonolysis products in indoor environments.
Pascale S. J. Lakey, Glenn C. Morrison, Youngbo Won, Krista M. Parry, Michael von Domaros, Douglas J. Tobias, Donghyun Rim, Manabu Shiraiwa. Communications Chemistry, 2019; 2 (1) DOI: 10.1038/s42004-019-0159-7

Medicines Made of Solid Gold to Help the Immune System


By studying the effects of gold nanoparticles on the immune cells related to antibody production, researchers at the University of Geneva (UNIGE), Swansea University and the NCCR "Bio-inspired Materials" are paving the way for more effective vaccines and therapies.


B lymphocytes (blue and green) and gold nanoparticles (red) measured with dark field hyperspectral imaging coupled with fluorescent detection

Credit: © UNIGE

Over the past twenty years, the use of nanoparticles in medicine has steadily increased. However, their safety and effect on the human immune system remain an important concern. By testing a variety of gold nanoparticles, researchers at the University of Geneva (UNIGE), in collaboration with the National Centre of Competence in Research "Bio-inspired Materials" and Swansea University Medical School (United Kingdom), are providing the first evidence of their impact upon human B lymphocytes – the immune cells responsible for antibody production. The use of these nanoparticles is expected to improve the efficacy of pharmaceutical products while limiting potential adverse effects. These results, published in the journal ACS Nano, will lead to the development of more targeted and better tolerated therapies, particularly in the field of oncology. The methodology developed also makes it possible to test the biocompatibility of any nanoparticle at an early stage in the development of a new nanodrug.

Responsible for the production of antibodies, B lymphocytes are a crucial part of the human immune system, and therefore an interesting target for the development of preventive and therapeutic vaccines. However, to achieve their goal, vaccines must reach B lymphocytes quickly without being destroyed, making the use of nanoparticles particularly interesting. “Nanoparticles can form a protective vehicle for vaccines – or other drugs – to specifically deliver them where they can be most effective, while sparing other cells,” explains Carole Bourquin, a Professor at the UNIGE’s Faculties of Medicine and Science, who co-led this study. “This targeting also allows the use of a lower dose of immunostimulant while maintaining an effective immune response. It increases its efficacy while reducing side-effects, provided that the nanoparticles are harmless to all immune cells.” Similar studies have already been conducted for other immune cells such as macrophages, which seek out and interact with nanoparticles, but never before for the smaller, and more difficult to handle, B lymphocytes.


Gold is an ideal material

Gold is an excellent candidate for nanomedicine because of its particular physico-chemical properties. Well tolerated by the body and easily malleable, this metal has, for instance, the particularity of absorbing light and then releasing heat, a property that can be exploited in oncology. “Gold nanoparticles can be used to target tumours. When exposed to a light source, the nanoparticles release heat and destroy neighbouring cancer cells. We could also attach a drug to the surface of the nanoparticles to be delivered to a specific location,” explains UNIGE researcher Sandra Hočevar. “To test their safety and the best formula for medical use, we have created gold spheres with or without a polymer coating, as well as gold rods to explore the effects of coating and shape. We then exposed human B lymphocytes to our particles for 24 hours to examine the activation of the immune response.”

By following activation markers expressed on the surface of B cells, the scientists were able to determine how much their nanoparticles activated or inhibited the immune response. While none of the nanoparticles tested demonstrated adverse effects, their influence on the immune response differed depending on their shape and the presence of a surface polymer coating. "Surface properties, as well as nanoparticle morphology, definitely are important when it comes to the nanoparticle-cell interaction. Interestingly, the gold nanorods inhibited the immune response instead of activating it, probably by causing interference on the cell membrane, or because they are heavier," says Martin Clift, an Associate Professor of Nanotoxicology and In Vitro Systems at Swansea University Medical School, and the project's co-leader.

Uncoated, spherical particles easily aggregate and are therefore not appropriate for biomedical use. On the other hand, gold spheres coated with a protective polymer are stable and do not impair B lymphocyte function. "And we can easily place the vaccine or drug to be delivered to the B lymphocytes in this coating," says Carole Bourquin. "In addition, our study established a methodology for assessing the safety of nanoparticles on B lymphocytes, something that had never been done before. This could be especially useful for future research, as the use of nanoparticles in medicine still requires clear guidelines."


Many clinical applications

B cells are at the heart of vaccine response, but also in other areas such as oncology and autoimmune diseases. The gold nanoparticles developed by the team of researchers could make it possible to deliver existing drugs directly to B lymphocytes to reduce the necessary dosage and potential side effects. In fact, studies in patients are already being carried out for the treatment of brain tumours. Gold nanoparticles can be made small enough to cross the blood-brain barrier, allowing specific anti-tumoural drugs to be delivered directly into the cancerous cells.


Contacts and sources:
Carole Bourquin, Full Professor, Faculties of Medicine and Science, UNIGE, CH
Martin Clift, Associate Professor, Swansea University Medical School, Wales, UK
 

Citation: Polymer-Coated Gold Nanospheres Do Not Impair the Innate Immune Function of Human B Lymphocytes in Vitro.
Sandra Hočevar, Ana Milošević, Laura Rodriguez-Lorenzo, Liliane Ackermann-Hirschi, Ines Mottas, Alke Petri-Fink, Barbara Rothen-Rutishauser, Carole Bourquin, Martin James David Clift. ACS Nano, 2019; 13 (6): 6790 DOI: 10.1021/acsnano.9b01492

Thursday, June 27, 2019

Synthetic Joint Lubricant Holds Promise for Osteoarthritis



A new type of treatment for osteoarthritis, currently in canine clinical trials, shows promise for eventual use in humans.

The treatment, developed by Cornell biomedical engineers, is a synthetic version of a naturally occurring joint lubricant that binds to the surface of cartilage in joints and acts as a cushion during high-impact activities, such as running.

“When the production of that specific lubricant goes down, it creates higher contact between the surfaces of the joint and, over time, it leads to osteoarthritis,” said David Putnam, a professor in the College of Engineering with appointments in the Meinig School of Biomedical Engineering and the Smith School of Chemical and Biomolecular Engineering.

Areas of the body most affected by osteoarthritis

Credit:  NIH / Wikimedia Commons

Putnam is senior author of “Boundary Mode Lubrication of Articular Cartilage With a Biomimetic Diblock Copolymer,” published June 4 in Proceedings of the National Academy of Sciences, USA. Zhexun Sun, a postdoctoral researcher in Putnam’s lab, is the paper’s first author.

The study focuses on a naturally occurring joint lubricant called lubricin, the production of which declines following traumatic injuries to a joint, such as a ligament tear in a knee.

The knee is lubricated in two ways – hydrodynamic mode and boundary mode.

Hydrodynamic mode lubrication occurs when the joint is moving fast and there isn’t a strong force pushing down on it. In this mode, joints are lubricated by compounds like hyaluronic acid (HA) that are thick and gooey, like car oil. There are numerous HA products on the market, approved by the Food and Drug Administration, for treating hydrodynamic mode lubrication disorders.

But HA is ineffective when strong forces are pushing down on the joint, such as those that occur during running or jumping. In these instances, thick gooey HA squirts out from between the cartilage surfaces, and boundary mode lubrication is necessary. Under these forces, lubricin binds to the surface of the cartilage. It contains sugars that hold on to water, to cushion hard forces on the knee.

In the paper, the researchers describe a synthetic polymer they developed that mimics the function of lubricin and is much easier to produce. “We are in clinical trials, with dogs that have osteoarthritis, with our collaborators at Cornell’s College of Veterinary Medicine,” Putnam said.

Those collaborators – Ursula Krotscheck and Kei Hayashi, both associate professors in the Section of Small Animal Surgery in the Department of Clinical Sciences – use a force plate to measure the efficacy of the treatments. The force plate quantifies the amount of force that a dog exerts with each paw, to measure whether they are favoring one paw over another.

“Once we finalize the efficacy study in dogs, we will be in a very good position to market the material for veterinary osteoarthritis treatment,” Putnam said. From there, the human market for a lubricin substitute should follow, just as HA has been made available for human use, mainly in knees.

The synthetic lubricin is patented through Cornell Technology Licensing; an Ithaca company, iFyber, is working with the researchers to develop the synthetic lubricin therapeutic for humans.

Lawrence Bonassar, the Daljit S. and Elaine Sarkaria Professor in Biomedical Engineering and in Mechanical and Aerospace Engineering, is a co-author of the paper. Scott Rodeo, a clinician-scientist at Hospital for Special Surgery (HSS) in New York, has been a longtime collaborator on this project. Every HSS doctor holds an appointment on the faculty of Weill Cornell Medical College.

The study was funded by the National Institutes of Health.


Contacts and sources:
Krishna Ramanujan
Cornell University






Are Testosterone-Boosting Supplements Effective?

Over-the-counter “T boosters” are a popular choice for men looking to raise their testosterone levels, and are frequently marketed as an effective “natural” option. However, new research suggests these supplements have little or no known effect.


Man holding testosterone-boosting supplements
Credit: Shutterstock / Pop Paul-Catalin

Men who want to improve their libido or build body mass may want to think twice before using testosterone-boosting supplements – also known as “T boosters” – as research shows these alternatives to traditional testosterone replacement therapy may not have ingredients to support their claims, according to Mary K. Samplaski, MD, assistant professor of clinical urology at the Keck School of Medicine of USC.

“Many supplements on the market merely contain vitamins and minerals, but don’t do anything to improve testosterone,” says Samplaski. “Often, people can be vulnerable to the marketing component of these products, making it difficult to tease out what is myth and what is reality.”

Testosterone is the primary male sex hormone and the reason why men produce sperm and have Adam’s apples. It’s also why men develop more “masculine” features like bulging muscles, a deep voice, broad shoulders and a hairy chest. After age 30, most men experience a gradual decline in testosterone, sometimes causing these features to diminish or new symptoms to occur, like erectile dysfunction. In an attempt to turn back the hands of time, some men will turn to T boosters.

Using a structured review approach, Samplaski and a team of researchers explored the active ingredients and advertised claims of 50 T boosting supplements. Their findings were published as an original article in The World Journal of Men’s Health.

Researchers performed a Google search with the search term “Testosterone Booster,” thus mimicking a typical internet search by someone looking to increase testosterone levels, and then selected the first 50 products that came up in their search. Then, the team reviewed published scientific literature on testosterone and the 109 components found in the supplements. Zinc, fenugreek extract and vitamin B6 were three of the most common components in the supplements.

The team also compared the content for each supplement with the Food and Drug Administration’s (FDA) Recommended Daily Allowance (RDA) and the upper tolerable intake level (UL) as set by the Institute of Medicine of the National Academy of Science.
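
That comparison is straightforward to sketch (hypothetical ingredient names and a simplified rule; the RDA and UL values below are standard adult figures used purely for illustration):

```python
# Flag each labeled ingredient dose against the RDA and the tolerable
# upper intake level (UL); ingredients without established values get noted.
RDA_MG = {"zinc": 11, "vitamin_b6": 1.3}
UL_MG = {"zinc": 40, "vitamin_b6": 100}

def flag_doses(supplement_mg):
    flags = {}
    for ingredient, dose in supplement_mg.items():
        if ingredient not in UL_MG:
            flags[ingredient] = "no established RDA/UL"
        elif dose > UL_MG[ingredient]:
            flags[ingredient] = f"exceeds UL of {UL_MG[ingredient]} mg"
        elif dose > RDA_MG[ingredient]:
            flags[ingredient] = "above RDA, below UL"
        else:
            flags[ingredient] = "at or below RDA"
    return flags

# A hypothetical T-booster label:
print(flag_doses({"zinc": 50, "vitamin_b6": 2, "fenugreek_extract": 500}))
# {'zinc': 'exceeds UL of 40 mg', 'vitamin_b6': 'above RDA, below UL',
#  'fenugreek_extract': 'no established RDA/UL'}
```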

Of the 50 supplements, researchers came across 16 general claims to benefit patients, including claims to “boost T or free T”, “build body lean mass or muscle mass”, or “increase sex drive or libido.”

While 90% of the T booster supplements claimed to boost testosterone, researchers found that less than 25% of the supplements had data to support their claims. Many also contained high doses of vitamins and minerals, occasionally more than the tolerable limit.

Unlike drugs, supplements are not intended to treat, diagnose, prevent, or cure diseases, according to the FDA. As such, Samplaski would like to see more regulation around testosterone-boosting supplements to protect consumers. She also would like to explore disseminating handouts to her patients with more accurate information in the hopes that it encourages patients to seek a medical professional for low testosterone issues.

While no one can escape the effects of aging, Samplaski says there is something men can do to address their concerns. “The safest and most effective way for men to boost low testosterone levels is to talk with a medical professional or a nutritionist.”



Contacts and sources:
Wendy Wolfson
Keck School of Medicine of USC





More Than 50 Uncharted Lakes Found Beneath Greenland's Ice Sheet



Researchers have discovered 56 previously uncharted subglacial lakes beneath the Greenland Ice Sheet, bringing the total known number of lakes to 60.

Although these lakes are typically smaller than similar lakes in Antarctica, their discovery demonstrates that lakes beneath the Greenland Ice Sheet are much more common than previously thought.

The Greenland Ice Sheet covers an area approximately seven times the size of the UK, is in places more than three kilometers thick and currently plays an important role in rising global sea levels.

Surface meltwater in Greenland
Credit: © Dr Andrew Sole, University of Sheffield

Subglacial lakes are bodies of water that form beneath ice masses. Meltwater is derived from the pressure of the thick overlying ice, heat generated by the flow of the ice, geothermal heat retained in the Earth, or water on the surface of the ice that drains to the bed. This water can become trapped in depressions or due to variations in ice thickness.

Knowledge of these new lakes helps form a much fuller picture of where water occurs and how it drains under the ice sheet, which influences how the ice sheet will likely respond dynamically to rising temperatures.

Published in Nature Communications this week, their paper, "Distribution and dynamics of Greenland subglacial lakes", provides the first ice-sheet wide inventory of subglacial lakes beneath the Greenland Ice Sheet.

By analyzing more than 500,000 km of airborne radio echo sounding data, which provide images of the bed of the Greenland Ice Sheet, researchers from the Universities of Lancaster, Sheffield and Stanford identified 54 subglacial lakes, as well as a further two using ice-surface elevation changes.

Lead author Jade Bowling of the Lancaster Environment Centre, Lancaster University, said:

“Researchers have a good understanding of Antarctic subglacial lakes, which can fill and drain and cause overlying ice to flow quicker. However, until now little was known about subglacial lake distribution and behaviour beneath the Greenland Ice Sheet.

"This study has for the first time allowed us to start to build up a picture of where lakes form under the Greenland Ice Sheet. This is important for determining their influence on the wider subglacial hydrological system and ice-flow dynamics, and improving our understanding of the ice sheet's basal thermal state."

The newly discovered lakes range from 0.2 to 5.9 km in length. Most were found beneath relatively slow-moving ice, away from the largely frozen bed of the ice sheet interior, and appear to be relatively stable.

However, in the future as the climate warms, surface meltwater will form lakes and streams at higher elevations on the ice sheet surface, and the drainage of this water to the bed could cause these subglacial lakes to drain and therefore become active. Closer to the margin where water already regularly gets to the bed, the researchers saw some evidence for lake activity, with two new subglacial lakes observed to drain and then refill.

Dr Stephen J. Livingstone, Senior Lecturer in Physical Geography, University of Sheffield, said:

“The lakes we have identified tend to cluster in eastern Greenland where the bed is rough and can therefore readily trap and store meltwater and in northern Greenland, where we suggest the lakes indicate a patchwork of frozen and thawed bed conditions.

“These lakes could provide important targets for direct exploration to look for evidence of extreme life and to sample the sediments deposited in the lake that preserve a record of environmental change.”






Contacts and sources:
Lancaster University

Citation: Distribution and dynamics of Greenland subglacial lakes
J. S. Bowling (Lancaster Environment Centre, Lancaster University), S. J. Livingstone and A. J. Sole (Department of Geography, University of Sheffield) and W. Chu (Department of Geophysics, Stanford University). Nature Communications, 2019; DOI: 10.1038/s41467-019-10821-w, http://www.nature.com/ncomms




Dating and Picking a Pet: The Heart Doesn't Always Know What It Wants


Psychologists at Indiana University who study how people pick their spouses have turned their attention to another important relationship: choosing a canine companion.

 Samantha Cohen.
Photo by Erin Powell


Their work, published in the journal Behavior Research Methods, recently found that, when it comes to puppy love, the heart doesn't always know what it wants.

The results are based upon data from a working animal shelter and could help improve the pet adoption process.

"What we show in this study is that what people say they want in a dog isn't always in line with what they choose," said Samantha Cohen, who led the study as a Ph.D. student in the IU Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences. "By focusing on a subset of desired traits, rather than everything a visitor says, I believe we can make animal adoption more efficient and successful."

As a member of the lab of IU Provost Professor Peter Todd, Cohen conducted the study while also volunteering as an adoption counselor at an animal shelter. Todd is co-author on the study.

 Peter Todd.
Photo by Anna Powell Teeter, Indiana University


"It was my responsibility to match dogs to people based on their preferences, but I often noticed that visitors would ultimately adopt some other dog than my original suggestion," Cohen said. "This study provides a reason: Only some desired traits tend to be fulfilled above chance, which means they may have a larger impact on dog selection."

The researchers categorized dogs based upon 13 traits: age, sex, color, size, purebred status, previous training, nervousness, protectiveness, intelligence, excitability, energy level, playfulness and friendliness. They surveyed the preferences of 1,229 people who visited dogs at an animal shelter, including 145 who decided to make an adoption.
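
The "fulfilled above chance" analysis can be sketched as follows (toy data and simplified logic, not the published code): for each trait, compare how often adopters' chosen dogs matched their stated preference against how often a randomly drawn shelter dog would have matched it.

```python
import random

def match_rate(stated_prefs, chosen_dogs, trait):
    """Fraction of adopters whose chosen dog matched their stated preference."""
    pairs = [(p[trait], d[trait]) for p, d in zip(stated_prefs, chosen_dogs)
             if trait in p]
    return sum(p == d for p, d in pairs) / len(pairs)

def chance_rate(stated_prefs, shelter_dogs, trait, draws=1000):
    """Match rate expected if adopters had picked shelter dogs at random."""
    hits = sum(p[trait] == random.choice(shelter_dogs)[trait]
               for p in stated_prefs if trait in p
               for _ in range(draws))
    considered = sum(trait in p for p in stated_prefs) * draws
    return hits / considered

# Toy data (hypothetical):
prefs = [{"age": "puppy"}, {"age": "adult"}, {"age": "puppy"}]
chosen = [{"age": "puppy"}, {"age": "adult"}, {"age": "adult"}]
shelter = [{"age": "puppy"}, {"age": "adult"}, {"age": "senior"}] * 10

print(match_rate(prefs, chosen, "age"))   # 0.67: two of three preferences fulfilled
print(chance_rate(prefs, shelter, "age")) # ~0.33: so "age" is fulfilled above chance
```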

A similar disconnect has been found in research on speed dating led by Todd, who has shown that people's stated romantic preferences tend not to match the partners they choose.

 Cohen surveys the preferences of people who visited dogs at an animal shelter.
 Photo by Cadence Baugh Chang, Indiana University


Although most participants in the dog adoption study listed many traits they preferred -- with "friendliness" as the most popular -- they ultimately selected dogs most consistent with just a few preferences, like age and playfulness, suggesting that others, like color or purebred status, exerted less influence on decision-making.

There was also another parallel to the world of dating. In short: Looks matter.

"As multiple psychologists have shown in speed-dating experiments, physical attractiveness is very important," Cohen said. "Most people think they've got a handsome or good-looking dog."

In the article, Cohen outlines some challenges facing aspiring dog-owners:
  • Focusing on "the one": Although adopters often came to the shelter with a vision of the perfect pet, Cohen said many risked missing a good match due to overemphasis on specific physical and personality traits. For example, an adopter who wants an Irish wolfhound because they're large, loyal and light shedders might fail to consider a non-purebred with the same qualities.
  • Mismatched perceptions: Surprisingly, adopters and shelters often used different traits to describe the same dog. These included subjective traits, such as obedience and playfulness, as well as seemingly objective traits, such as color.
  • Missed signals: People who have never had a dog may not grasp the implications of certain behaviors. A dog seen as "playful" at the shelter may come across as "destructive" in a small home, for example.
  • Performance anxiety: Shelters are high-stress environments for dogs, whose personalities may shift when they're more relaxed at home. Picking a dog based upon personality at the shelter is akin to choosing a date based on how well they perform while public speaking, Cohen said.
 Cohen conducted the research while volunteering as an adoption counselor at an animal shelter. 
Photo by Cadence Baugh Chang, Indiana University


To improve pet adoptions, Cohen said animal shelters need to know that people tend to rely on certain traits more strongly when choosing a dog, which might make it easier to match adopters to dogs. She also suggested shelters consider interventions, such as temporary placement in a calmer environment, to help stressed or under-socialized dogs put their best paw forward, showing their typical level of desirable traits, such as friendliness.

Finally, Cohen advises caution about online adoption, since adopters are dependent upon someone else's description of the dogs. She suggests users limit their search criteria to their most desired traits to avoid filtering out a good match based upon less important preferences.

The study was supported in part by an IU Graduate and Professional Student Government Research Award.


Contacts and sources:
Kevin Fryling / Elizabeth Rosdeitcher
Indiana University


Citation: Samantha E. Cohen, Peter M. Todd. Stated and revealed preferences in companion animal choice. Behavior Research Methods, 2019. DOI: 10.3758/s13428-019-01253-x



Secrets of the Ice Worm



The ice worm is one of the largest organisms that spends its entire life in ice, and Washington State University scientist Scott Hotaling is one of the only people on the planet studying it.

He is the author of a new paper that shows ice worms in the interior of British Columbia have evolved into what may be a genetically distinct species from Alaskan ice worms.

Hotaling and colleagues also identified an ice worm on Vancouver Island that is closely related to a separate population of ice worms located 1,200 miles away in southern Alaska. The researchers believe the genetic intermingling is the result of birds eating the glacier-bound worms (or their eggs) at one location and then dropping them off at another as they migrate up and down the west coast.

Closeup of an ice worm on someone's fingertip.
Photo by Carson Baughman, USGS, Alaska Science Center.


“If you are a worm isolated on a mountaintop glacier, the expectation is you aren’t going anywhere,” said Hotaling, a postdoctoral biology researcher. “But lo and behold, we found this one ice worm on Vancouver Island that is super closely related to ice worms in southern Alaska. The only reasonable explanation we can think of to explain this is birds.”

The ice worm resembles the common earthworm but is smaller and darker in color. What sets the ice worm apart from other members of the Mesenchytraeus genus is its ability to live its entire life in glacial ice.

Millions, perhaps hundreds of millions, of ice worms can be seen wriggling to the top of glaciers from the Chugach Mountains in southeast Alaska to the Cascade Volcanoes of Washington and Oregon during the summer months. In the fall and winter, ice worms subsist deep beneath the surface of glaciers where temperatures stay around freezing.

Scott Hotaling
Credit: WSU

Super cool organism

Hotaling’s interest in ice worms began back in 2009 while he was working as a mountaineering ranger on the high-elevation slopes of Mt. Rainier. He was climbing at three in the morning when he noticed a lot of small, black worms crawling around on the surface of a glacier.

“I wasn’t even a biology undergraduate yet but I remember being so fascinated by the fact that there is this worm that can live in a glacier,” he said. “It is not a place where we think of life being able to flourish and these things can be present at like 200 per sq. meter, so dense you can’t walk without stepping in them.”

Hotaling eventually went back to school and earned a PhD in biology at the University of Kentucky where he studied how climate change is affecting mountain biodiversity.

In the summer of 2017, he finally got the opportunity to circle back and do some research on the ice worm when he arrived in Pullman to start a postdoc position in the laboratory of Associate Professor Joanna Kelley, senior author of the study who specializes in evolutionary genomics and extremophile lifeforms.

“In the Kelley lab, we study organisms that have evolved to live in places that are inhospitable to pretty much everything else,” Hotaling said. “Determining the evolutionary mechanisms that enable something like an ice worm to live in a glacier or bacteria to live in a Yellowstone hot spring is a really exciting way to learn about what is possible at the bounds of evolution. That’s where we are working now, understanding the evolution of ice worms.”

In the study

Hotaling and colleagues extracted and sequenced DNA from 59 ice worms collected from nine glaciers across most of their geographical range. Their analysis revealed a genetic divergence between ice worm populations to the north and west and to the south and east of the Coast Mountains of British Columbia.
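
As a rough illustration of how such a divergence shows up in sequence data, the sketch below clusters individuals by pairwise genetic distance. The genotypes are synthetic, and the published analysis used far more rigorous population-genomic methods.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Synthetic stand-in: 59 worms x 1,000 biallelic SNP sites (0/1/2 copies).
rng = np.random.default_rng(1)
north = rng.binomial(2, 0.2, size=(30, 1000))   # "Alaska-side" allele frequencies
south = rng.binomial(2, 0.7, size=(29, 1000))   # "Pacific Northwest-side" frequencies
genotypes = np.vstack([north, south])

# Pairwise genetic distance = mean allele-count difference per site.
diff = np.abs(genotypes[:, None, :] - genotypes[None, :, :]).mean(axis=2)

# Hierarchical clustering on the condensed distance matrix; two groups
# fall out cleanly if the populations have diverged.
tree = linkage(squareform(diff, checks=False), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)   # first 30 samples land in one cluster, last 29 in the other
```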

The researchers predict that this deeper split into two genetically distinct ice worm groups occurred as a result of glacial ice sheets contracting around a few hundred thousand years ago, isolating worms in the Pacific Northwest from their counterparts in Alaska.

The most surprising finding of the study was the discovery of a single ice worm on Vancouver Island that was closely related to a population of ice worms 1,200 miles away in Alaska.

“At first we thought there has to be some kind of error in the analysis or prep methods but upon further investigation we confirmed our initial results,” Hotaling said. “These are worms isolated on mountaintops and there is no other explanation for how they covered that gap than on, or perhaps within, migrating birds.”

Gray-Crowned Rosy Finch eating ice worms on a glacier. 
Photo by Scott Hotaling

The research illuminates an important relationship between two of the few large organisms that inhabit North America’s high elevation alpine ecosystems, the ice worm and the Gray-Crowned Rosy Finch, one of North America’s highest elevation nesting birds.

“We knew that ice worms were an important source of food for the birds but we didn’t know until now that the birds are also likely very important for the ice worms,” Hotaling said. “If you are super isolated like an ice worm, you could easily become inbred. But if birds are bringing little bits of new diversity to your mountaintop glacier that could be really good for you.”

Hotaling and Kelley’s study was published this month in Proceedings of the Royal Society B.
Contacts and sources:
Will Ferguson
Washington State University
Citation:




Being Too Harsh on Yourself Could Lead to OCD or GAD



A correlation was found between strong feelings of responsibility and likelihood of developing OCD or GAD in American university students 

Two types of responsibility are predictors of OCD or GAD


 (Credit: Emma Buchet and Associate Professor Yoshinori Sugiura/University of Hiroshima)

A new study published in the International Journal of Cognitive Therapy has found that people who report intense feelings of responsibility are more susceptible to developing Obsessive Compulsive Disorder (OCD) or Generalized Anxiety Disorder (GAD).

“People with OCD [are] tortured by repeatedly occurring negative thinking and they take some strategy to prevent it… GAD is a very pervasive type of anxiety. [Patients] worry about everything,” says Associate Professor Yoshinori Sugiura of Hiroshima University.

Anxiety and OCD-like behaviors, such as checking whether the door is locked, are common in the general population. However, it is the frequency and intensity of these behaviors or feelings that make the difference between a character trait and a disorder.

“For example, you’re using two recorders instead of one,” says Sugiura when interviewed. “It’s just in case one fails … having two recorders will enhance your work but if you prepare [too] many recorders … that will interfere with your work.”

A problem Sugiura identifies in psychology is that each disorder has several competing theories about its cause.

“There are too many theories and therapies for mental disorders for one expert to master them all,” elaborates Sugiura.

The goal of the research team, consisting of Sugiura and Associate Professor Brian Fisak of the University of Central Florida, was to find a common cause for these disorders and simplify the theories behind them.

Associate Professor Brian Fisak (University of Central Florida) and Associate Professor Yoshinori Sugiura

Credit: Hiroshima University

Sugiura and Fisak first focused on “inflated responsibility,” identifying three types: 1) responsibility to prevent or avoid danger and/or harm, 2) a sense of personal responsibility and blame for negative outcomes, and 3) responsibility to continue thinking about a problem. The research group combined tests used to study OCD and GAD, as no previous work had compared these tests in the same study.

To establish whether inflated responsibility was a predictor of OCD or GAD, Sugiura and Fisak sent an online questionnaire to American university students. The survey showed that respondents who scored higher on questions about responsibility were more likely to exhibit behaviors resembling those of OCD or GAD patients. Personal responsibility and blame, and the responsibility to continue thinking, had the strongest links to the disorders.
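
At its core, this is a correlation analysis between questionnaire scales. A minimal sketch of that kind of check, with invented scale names and simulated scores standing in for the study's survey data:

```python
import numpy as np
from scipy import stats

# Hypothetical scale scores per respondent (the study used validated
# questionnaires; the names and values here are illustrative only).
rng = np.random.default_rng(2)
blame = rng.normal(size=300)                        # personal responsibility/blame
keep_thinking = 0.5 * blame + rng.normal(size=300)  # responsibility to keep thinking
ocd_symptoms = 0.4 * blame + 0.3 * keep_thinking + rng.normal(size=300)

for name, scale in [("blame", blame), ("keep_thinking", keep_thinking)]:
    r, p = stats.pearsonr(scale, ocd_symptoms)
    print(f"{name} vs OCD-like symptoms: r = {r:.2f}, p = {p:.3g}")
```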

The researchers caution that this preliminary study is not representative of the general population, given its small scale and skewed sample (mostly female university students). The promising findings, however, suggest the approach could be applied to larger populations.

Sugiura is currently looking into how to reduce responsibility and the preliminary outcomes are positive. When asked for any tips to reduce anxiety or obsessive behaviors he said:

“[A] very quick or easy way is to realize that responsibility is working behind your worry. I ask [patients] “Why are you worried so much?” so they will answer “I can’t help but worry” but they will not spontaneously think “Because I feel responsibility” … just realizing it will make some space between responsibility thinking and your behavior.”



Contacts and sources:
Emma Buchet
Hiroshima University

Citation: Sugiura, Y. & Fisak, B. (2019). Inflated Responsibility in Worry and Obsessive Thinking. International Journal of Cognitive Therapy. https://doi.org/10.1007/s41811-019-00041-x






3D Printed Prosthetic Hand Can Guess How You Play Rock, Paper, Scissors

Losing a limb, whether through illness or accident, can present emotional and physical challenges for an amputee, damaging their quality of life. Prosthetic limbs can be very useful but are often expensive and difficult to use. The Biological Systems Engineering Lab at Hiroshima University has developed a new 3D printed prosthetic hand combined with a computer interface, their cheapest and lightest model yet and the most responsive to motion intent so far. Previous generations of the lab's prosthetic hands were made of metal, which is heavy and expensive to produce.

Professor Toshio Tsuji of the Graduate School of Engineering, Hiroshima University describes the mechanism of this new hand and computer interface using a game of “Rock, Paper, Scissors”. The wearer imagines a hand movement, such as making a fist for Rock or a peace sign for Scissors, and the computer attached to the hand combines the previously learned movements of all 5 fingers to make this motion.

The prosthetic hand uses signals from electrodes (arrow) and machine learning to copy different hand positions.
 Image Credit: Hiroshima University Biological Systems Engineering Lab. 


“The patient just thinks about the motion of the hand and then the robot automatically moves. The robot is like a part of his body. You can control the robot as you want. We will combine the human body and machine like one living body,” explains Tsuji.

Electrodes in the socket of the prosthetic equipment measure the electrical activity of muscles through the skin, much as an ECG measures the heart's electrical activity. The signals are sent to the computer, which takes only five milliseconds to decide what the movement should be. The computer then sends electrical signals to the motors in the hand.

The neural network (named Cybernetic Interface) that allows the computer to “learn” was trained to recognize movements of each of the five fingers and then combine them into different patterns, to turn Scissors into Rock, pick up a water bottle, or control the force used to shake someone’s hand.
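
The published system uses a muscle synergy-based model; purely as an illustration of the general pipeline (windowed EMG features in, a learned motion class out, mapped to a gesture), here is a hypothetical sketch with a generic classifier standing in for the actual Cybernetic Interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for the pipeline described above; the real
# system uses a muscle-synergy model, not this simple classifier.
rng = np.random.default_rng(3)
n_electrodes, n_samples = 8, 400
X = rng.random((n_samples, n_electrodes))   # windowed EMG RMS features
y = rng.integers(0, 5, n_samples)           # which finger is flexing (0-4)

finger_model = LogisticRegression(max_iter=1000).fit(X, y)

def decide_motion(window):
    """Classify one short decision window and map the detected finger
    activity to a composite gesture (hypothetical mapping)."""
    finger = finger_model.predict(window.reshape(1, -1))[0]
    gestures = {0: "rock (fist)", 1: "scissors (peace sign)"}
    return gestures.get(finger, "open hand (paper)")

print(decide_motion(rng.random(n_electrodes)))
```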

“This is one of the distinctive features of this project. The machine can learn simple basic motions and then combine and then produce complicated motions,” Tsuji says.

The prosthetic hand and socket. The hand is controlled by the Cybernetic Interface attached to the socket.

 Image Credit: Hiroshima University Biological Systems Engineering Lab

The Hiroshima University Biological Systems Engineering Lab tested the equipment with patients at the Robot Rehabilitation Center of the Hyogo Institute of Assistive Technology in Kobe. The researchers also collaborated with the company Kinki Gishi to develop a socket to accommodate the amputee patients’ arms.

Seven participants were recruited for the study, including one amputee who had worn a prosthesis for 17 years. Participants were asked to perform a variety of tasks with the hand that simulated daily life, such as picking up small items or clenching their fist. In the study, the hand's accuracy was above 95% for single simple motions and 93% for complicated, unlearned motions.

However, this hand is not quite ready for all wearers. Using the hand for a long time can be burdensome, as the wearer must concentrate on the hand position to sustain it, which causes muscle fatigue. The team plans to create a training program to help wearers make the best use of the hand, and hopes it will become an affordable alternative on the prosthetics market.





Contacts and sources:
Hiroshima University
https://www.hiroshima-u.ac.jp/en

Citation: A myoelectric prosthetic hand with muscle synergy-based motion determination and impedance model-based biomimetic control, Akira Furui, Shintaro Eto, Kosuke Nakagaki, Kyohei Shimada, Go Nakamura, Akito Masuda, Takaaki Chin, and Toshio Tsuji,  Science Robotics, Vol. 4, Issue 31, eaaw6339, DOI: 10.1126/scirobotics.eaaw6339, 26 June 2019.




Neanderthals Used Resin 'Glue' to Craft Their Stone Tools

"The hafting of stone tools was an important advance in the technological evolution of Paleolithic humans. Joining a handle to a knife or scraper and attaching a sharp point to a wooden shaft made stone tools more efficient and easier to use."
Archaeologists working in two Italian caves have discovered some of the earliest known examples of ancient humans using an adhesive on their stone tools—an important technological advance called “hafting.”

The new study, which included CU Boulder's Paola Villa, shows that Neanderthals living in Europe from about 55 to 40 thousand years ago traveled away from their caves to collect resin from pine trees. They then used that sticky substance to glue stone tools to handles made out of wood or bone.

Illustration of Neanderthals around a fire
Credit: NASA

The findings add to a growing body of evidence that suggests that these cousins of Homo sapiens were more clever than some have made them out to be.

“We continue to find evidence that the Neanderthals were not inferior primitives but were quite capable of doing things that have traditionally only been attributed to modern humans,” said Villa, corresponding author of the new study and an adjoint curator at the CU Museum of Natural History.

That insight, she added, came from a chance discovery from Grotta del Fossellone and Grotta di Sant’Agostino, a pair of caves near the beaches of what is now Italy’s west coast.

Those caves were home to Neanderthals who lived in Europe during the Middle Paleolithic period, thousands of years before Homo sapiens set foot on the continent. Archaeologists have uncovered more than 1,000 stone tools from the two sites, including pieces of flint that measured not much more than an inch or two from end to end.

Flints bearing traces of pine resin. The letter "R" indicates the presence of visible resin, and the arrows point to spots where researchers sampled material for chemical analysis.
Credit: Degano et al. 2019, PLOS ONE

In a recent study of the materials, Villa and her colleagues noticed a strange residue on just a handful of the flints—bits of what appeared to be organic material.

“Sometimes that material is just inorganic sediment, and sometimes it’s the traces of the adhesive used to keep the tool in its socket,” Villa said.




Researchers excavate the Grotta del Fossellone.
Credit: Paola Villa

Warm fires

To find out, study lead author Ilaria Degano at the University of Pisa conducted a chemical analysis of 10 flints using a technique called gas chromatography/mass spectrometry. The tests showed that the stone tools had been coated with resin from local pine trees. In one case, that resin had also been mixed with beeswax.

The findings, Villa said, indicate that Italian Neanderthals didn’t just resort to their bare hands to use stone tools. In at least some cases, they also attached those tools to handles to give them better purchase as they sharpened wooden spears or performed other tasks like butchering or scraping leather.

“You need stone tools to cut branches off of trees and make them into a point,” Villa said.

 The entrance to the Grotta di Sant’Agostino.
Credit: Paola Villa


The find isn’t the oldest known example of hafting by Neanderthals in Europe—two flakes discovered in the Campitello Quarry in central Italy predate it. But it does suggest that this technique was more common than previously believed.

The existence of hafting also provides more evidence that Neanderthals, like their smaller human relatives, were able to build a fire whenever they wanted one, Villa said, something that scientists have long debated. She explained that pine resin dries when exposed to air. As a result, Neanderthals needed to warm it over a small fire to make an effective glue.

“This is one of several proofs that strongly indicate that Neanderthals were capable of making fire whenever they needed it,” Villa said.

In other words, enjoying the glow of a warm campfire isn’t just for Homo sapiens.

Other coauthors on the study included researchers at Paris Nanterre University in France, University of the Witwatersrand in South Africa, University of Wollongong in Australia, Max Planck Institute for the Science of Human History in Germany, Istituto Italiano di Paleontologia Umana and the University of Pisa.

The research was funded by a National Science Foundation grant to Paola Villa and Sylvain Soriano.



Contacts and sources:
Daniel Strain
University of Colorado Boulder


Citation: Ilaria Degano, Sylvain Soriano, Paola Villa, et al. Hafting of Middle Paleolithic tools in Latium (central Italy): New data from Fossellone and Sant’Agostino caves. PLOS ONE. http://dx.doi.org/10.1371/journal.pone.0213473




What Made Humans the Fat Primate?

How did humans get to be so much fatter than chimps, despite sharing 99% of the same DNA? The answer may have to do with an ancient shift in how DNA is packaged inside fat cells. 
Credit: UConn Rudd Center for Food Policy & Obesity

Blame junk food or a lack of exercise. But long before the modern obesity epidemic, evolution made us fat too. Changes in DNA packaging curbed our body's ability to turn 'bad' fat into 'good' fat.

"We're the fat primates," said Devi Swain-Lenz, a postdoctoral associate in biology at Duke University.

The fact that humans are chubbier than chimpanzees isn't news to scientists. But new evidence could help explain how we got that way.

Despite having nearly identical DNA sequences, chimps and early humans underwent critical shifts in how DNA is packaged inside their fat cells, Swain-Lenz and her Duke colleagues have found. As a result, the researchers say, this decreased the human body's ability to turn "bad" calorie-storing fat into the "good" calorie-burning kind.

The results were published June 24 in the journal Genome Biology and Evolution.

Compared to our closest animal relatives, even people with six-pack abs and rippling arms have considerable fat reserves, researchers say. While other primates have less than 9% body fat, a healthy range for humans is anywhere from 14% to 31%.

To understand how humans became the fat primate, a team led by Swain-Lenz and Duke biologist Greg Wray compared fat samples from humans, chimps and a more distantly-related monkey species, rhesus macaques. Using a technique called ATAC-seq, they scanned each species' genome for differences in how their fat cell DNA is packaged.

Normally most of the DNA within a cell is condensed into coils and loops and tightly wound around proteins, such that only certain DNA regions are loosely packed enough to be accessible to the cellular machinery that turns genes on and off.

The researchers identified roughly 780 DNA regions that were accessible in chimps and macaques, but had become more bunched up in humans. Examining these regions in detail, the team also noticed a recurring snippet of DNA that helps convert fat from one cell type to another.
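
Conceptually, the comparison reduces to flagging regions that are open in chimp and macaque but closed in humans. A toy version with made-up accessibility scores (real ATAC-seq analysis works from aligned reads and statistical peak calls, not a simple threshold):

```python
import pandas as pd

# Made-up accessibility scores per DNA region (rows) and species (columns).
peaks = pd.DataFrame(
    {"human": [0.1, 2.3, 0.2], "chimp": [1.8, 2.1, 1.5], "macaque": [1.6, 2.2, 1.4]},
    index=["regionA", "regionB", "regionC"],
)

OPEN = 1.0  # illustrative threshold for calling a region "accessible"

# Regions open in both chimp and macaque but closed in human -- the
# pattern the study reports for roughly 780 regions.
closed_in_human = peaks[
    (peaks["chimp"] > OPEN) & (peaks["macaque"] > OPEN) & (peaks["human"] < OPEN)
]
print(closed_in_human.index.tolist())   # ['regionA', 'regionC']
```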

Not all fat is created equal, Swain-Lenz explained. Most fat is made up of calorie-storing white fat. It's what makes up the marbling in a steak and builds up around our waistlines. Specialized fat cells called beige and brown fat, on the other hand, can burn calories rather than store them to generate heat and keep us warm.

One of the reasons we're so fat, the research suggests, is because the regions of the genome that help turn white fat to brown were essentially locked up -- tucked away and closed for business -- in humans but not in chimps.

"We've lost some of the ability to shunt fat cells toward beige or brown fat, and we're stuck down the white fat pathway," Swain-Lenz said. It's still possible to activate the body's limited brown fat by doing things like exposing people to cold temperatures, she explained, "but we need to work for it."

Humans, like chimps, need fat to cushion vital organs, insulate us from the cold, and buffer us from starvation. But early humans may have needed to plump up for another reason, the researchers say -- as an additional source of energy to fuel our growing, hungry brains.

In the six to eight million years since humans and chimps went their separate ways, human brains have roughly tripled in size. Chimpanzee brains haven't budged.

The human brain uses more energy, pound for pound, than any other tissue. Steering fat cells toward calorie-storing white fat rather than calorie-burning brown fat, the thinking goes, would have given our ancestors a survival advantage.

Swain-Lenz said another question she gets a lot is: "Are you going to make me skinny?"

"I wish," she said.

Because of brown fat's calorie-burning abilities, numerous researchers are trying to figure out if boosting our body's ability to convert white fat to beige or brown fat could make it easier to slim down.

Swain-Lenz says the differences they found among primates might one day be used to help patients with obesity -- but we're not there yet.

"Maybe we could figure out a group of genes that we need to turn on or off, but we're still very far from that," Swain-Lenz said. "I don't think that it's as simple as flipping a switch. If it were, we would have figured this out a long time ago," she explained.

This research was supported by a Charles W. Hargitt Research Fellowship through the Duke biology department.


Contacts and sources:
Robin Ann Smith
Duke University

Citation: "Comparative Analyses of Chromatin Landscape in White Adipose Tissue Suggest Humans May Have Less Beigeing Potential Than Other Primates," Devjanee Swain-Lenz, Alejandro Berrio, Alexias Safi, Gregory E. Crawford, Gregory A. Wray. Genome Biology and Evolution, June 24, 2019. DOI: 10.1093/gbe/evz134.




Is A Great Iron Fertilization Experiment Already Underway in the Ocean?



It’s no secret that massive dust storms in the Sahara Desert occasionally shroud the North Atlantic Ocean with iron, but it turns out these natural dust plumes are not the only source. Iron released by human activities contributes as much as 80 percent of the iron falling on the ocean surface, even in the dusty North Atlantic, and is likely underestimated worldwide, according to a new study in Nature Communications.

“People don’t even realize it,” said lead author Dr. Tim Conway, Assistant Professor at the USF College of Marine Science, “but we’ve already been doing an iron fertilization experiment of sorts for many decades.”
"People don't even realize it," said lead author Dr. Tim Conway, Assistant Professor at the USF College of Marine Science, "but we've already been doing an iron fertilization experiment of sorts for many decades."

Burning fossil fuels, biofuels, and forests all release iron, which can be transported as an aerosol over large distances from land into the guts of the North Atlantic and beyond. But human-derived iron aerosols have been nearly impossible to see in the data - until now. The team used the isotope ratios of iron in the atmosphere to 'fingerprint' whether the iron came from Saharan desert dust or human sources such as cars, combustion, or fires.

The R/V Knorr was operated by Woods Hole Oceanographic Institution from 1970-2016. It was used on the GEOTRACES expeditions in 2010-2011 during which iron aerosol samples were collected for the study led by the USF College of Marine Science.
Credit: University of South Florida


"Despite much research, iron chemistry is still something of a black box in the ocean," Conway said. Iron, a trace element, is found in exceedingly low amounts in the ocean; one liter of seawater contains 35 grams of salt but only around one billionth of a gram of iron. This makes it very hard to measure. The iron is also hard to sample without risking contamination, especially if working on a rusty ship.

Trying to establish how much atmospheric iron lands on and dissolves in the ocean presents even more challenges, with storms, seasons, and land use all changing how much dust gets blown from the continents. Digesting dust particles in the lab to see how much iron dissolves is also problematic, and has led to estimates of iron that dissolves when it hits the ocean ranging from 0 to 100 percent.

The current study addresses some of these mysteries that remain in iron chemistry, taking our understanding of atmospheric iron supply to the oceans to the next level.

Conway and his colleagues analyzed aerosol samples collected on research cruises to the North Atlantic in 2010 and 2011 on board the R/V Knorr. The cruises were part of GEOTRACES, a global coordinated research program of 35 countries to study trace metals and their isotopes in the ocean.

The work by Conway and others showed that scientists have significantly underestimated the amount of human-derived iron aerosols to the North Atlantic Ocean compared to natural-derived iron aerosols from Saharan dust storms. Right hand panels show the updated model scenarios (left show the originals). As seen in the new panels, many more areas are deep orange, indicating up to 80% iron deposition from human sources such as fossil fuels, biofuels, and fires, especially to the iron-limited Southern Ocean region.
Credit: University of South Florida

Some samples were taken from an area off West Africa known to collect dust from Saharan dust storms; others were taken off the coasts of New England and Europe, where human-derived pollution is expected to be more important. The team then measured iron isotope ratios in the samples to determine whether the iron came from a natural or a human source.

Iron isotope ratios (56Fe/54Fe) can change in response to chemical reactions, so human-induced processes like burning fossil fuels release iron with a different isotope 'signature' than iron derived from natural materials. Saharan dust particles were previously assumed to have a ratio that looked like the average continental crust, and Conway has suggested that when Saharan dust particles hit the ocean, the iron that dissolves interacts with organic molecules that bind the heavier 56Fe.

"We carried out this research to investigate that idea and fully expected to see continental signals or perhaps more heavy isotopes in the samples from all three regions," said Conway. "What we found was pretty crazy and very light. We weren't expecting this at all," Conway said.

The iron in Saharan air was indeed a match for the continental crust, but was much heavier than the samples from North America and Europe, which were loaded with lighter (more 54Fe), human-derived iron - not iron from the Sahara.
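
The "fingerprinting" logic can be made concrete with a standard two-endmember mixing calculation; the endmember values below are placeholders for illustration, not the study's measured numbers.

```python
def anthropogenic_fraction(delta_sample, delta_natural=0.0, delta_anthro=-1.6):
    """Two-endmember mixing: fraction of iron from the human-derived
    endmember, given delta-56Fe values in per mil. The endmember values
    here are placeholders, not the study's measured numbers."""
    return (delta_sample - delta_natural) / (delta_anthro - delta_natural)

# A hypothetical aerosol sample isotopically lighter than continental crust:
print(f"{anthropogenic_fraction(-1.2):.0%} anthropogenic iron")  # 75%
```

The closer a sample's isotope signature sits to the light, combustion-derived endmember, the larger the inferred human contribution.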

"The fact that we found human-derived iron in the dusty North Atlantic shows how effective this tracer is for anthropogenic iron," Conway said.

Map showing the sampling locations in the North Atlantic Ocean in 2010 and 2011.
Credit: University of South Florida

Next, they used the iron-isotope tracer work to improve the models used to predict the amount of dust that falls over the global ocean, and were able to show that the iron from human input is much greater than previously thought.

Since the 1990s scientists have proposed the idea of fertilizing the water with iron released from ships to accelerate the growth of phytoplankton. The thinking goes like this:

Iron is a vital micronutrient that phytoplankton need to grow, but it's generally scarce in the ocean. When iron becomes available via a dust storm or another source, phytoplankton slurp up carbon dioxide during photosynthesis at the ocean's surface. When they die and sink to the ocean bottom, they take the carbon with them, effectively acting as a "carbon sink." So let's add more iron to draw down the carbon dioxide driving climate change, say geoengineering enthusiasts.

This geoengineering exercise is still hotly debated today, and the study by Conway and team adds fuel to the fire with an unexpected twist.

"It seems we've already been fertilizing the ocean. We just couldn't quantify it," Conway said, although scientists have had a hunch about the human iron input since the mid-2000s.

"We've completely changed the system," he said, and routinely add iron to the ocean when cutting down forests or driving cars. Ironically, because of the way iron works it's therefore possible that these human sources of iron to the ocean may in fact have been acting to mitigate climate change.

"We don't know the magnitude of it yet but it's a fair statement," Conway said.
Contacts and sources:
Kristen Kusek
University of South Florida

Citation: