Tuesday, July 17, 2018

Origins of Pottery Linked with Intensified Fishing in the Post-Glacial Period



A study into some of the earliest known pottery remains has suggested that the rise of ceramic production was closely linked with intensified fishing at the end of the last Ice Age.

Scientists examined 800 pottery vessels in one of the largest studies ever undertaken, focusing mainly on Japan, a country recognized as one of the earliest centers of ceramic innovation.

A three-year study led by researchers at BioArCh, the University of York, concluded that the ceramic vessels were used by our hunter-gatherer ancestors to store and process fish, initially salmon but then, as fishing intensified, a wider range including shellfish, freshwater and marine fish, and mammals.

Scientists say this association with fish remained stable even after the onset of climate warming, including in more southerly areas, where expanding forests provided new opportunities for hunting game and gathering plants.

This is incipient Jōmon pottery from the Hanamiyama site, Yokohama-shi, Kanagawa Prefecture, Japan.
Credit: Nara National Research Institute for Cultural Properties

The research team were able to determine the use of a range of ceramic vessels through chemical analysis of organic food compounds that remained trapped in the pots despite ca. 10,000 years of burial.

The samples analysed are some of the earliest found and date from the end of the Late Pleistocene - a time when our ancestors were living in glacial conditions - to the post-glacial period when the climate warmed close to its current temperature and when pottery began to be produced in much greater quantity.

The study has shed new light on how prehistoric hunter-gatherers processed and consumed foods over this period - until now virtually nothing was known of how or for what early pots were used.

As part of the study, researchers recovered diagnostic lipids from the charred surface deposits of the pottery with most of the compounds deriving from the processing of freshwater or marine organisms.

Lead author, Dr Alex Lucquin, from BioArCh, Department of Archaeology, University of York, said: "Thanks to the exceptional preservation of traces of animal fat, we now know that pottery changed from a rare and special object to an every-day tool for preparing fish.

"I think that our study not only reveals the subsistence of the ancient Jomon people of Japan but also its resilience to a dramatic change in climate."

Professor Oliver Craig, from the Department of Archaeology and Director of the BioArCh research centre at York, who led the study, said: "Our results demonstrate that pottery had a strong association with the processing of fish, irrespective of the ecological setting.

"Contrary to expectations, this association remained stable even after the onset of warming, including in more southerly areas, where expanding forests provided new opportunities for hunting and gathering.

"The results indicate that a broad array of fish was processed in the pottery after the end of the last Ice Age, corresponding to a period when hunter-gatherers began to settle in one place for longer periods and develop more intensive fishing strategies."

"We suggest this marks a significant change in the role of pottery for hunter-gatherers, corresponding to a massively increased volume of production, greater variation in forms and sizes, and the onset of shellfish exploitation."

Dr Simon Kaner, from the University of East Anglia, who was involved in the study, added: "The research highlights the benefits of this kind of international collaboration for unlocking some of the big questions about the human past, and the potential of engaging with established research networks as created by the Sainsbury Institute over the years."

The findings are published in Proceedings of the National Academy of Sciences and the study was funded by the AHRC. It was an international collaboration including researchers in Japan, Sweden and the Netherlands.


Contacts and sources:
Alistair Keely
University of York

Archaeologists Discover Bread That Predates Agriculture By 4,000 Years



At an archaeological site in northeastern Jordan, researchers have discovered the charred remains of a flatbread baked by hunter-gatherers 14,400 years ago. It is the oldest direct evidence of bread found to date, predating the advent of agriculture by at least 4,000 years. The findings suggest that bread production based on wild cereals may have encouraged hunter-gatherers to cultivate cereals, and thus contributed to the agricultural revolution in the Neolithic period.

A team of researchers from the University of Copenhagen, University College London and University of Cambridge have analysed charred food remains from a 14,400-year-old Natufian hunter-gatherer site - a site known as Shubayqa 1 located in the Black Desert in northeastern Jordan. The results, which are published today in the journal Proceedings of the National Academy of Sciences, provide the earliest empirical evidence for the production of bread:

One of the stone structures of the Shubayqa 1 site. The fireplace, where the bread was found, is in the middle.

Photo: Alexis Pantos

"The presence of hundreds of charred food remains in the fireplaces from Shubayqa 1 is an exceptional find, and it has given us the chance to characterize 14,000-year-old food practices. The 24 remains analysed in this study show that wild ancestors of domesticated cereals such as barley, einkorn, and oat had been ground, sieved and kneaded prior to cooking. The remains are very similar to unleavened flatbreads identified at several Neolithic and Roman sites in Europe and Turkey. So we now know that bread-like products were produced long before the development of farming. The next step is to evaluate if the production and consumption of bread influenced the emergence of plant cultivation and domestication at all," said University of Copenhagen archaeobotanist Amaia Arranz Otaegui, who is the first author of the study.

University of Copenhagen archaeologist Tobias Richter, who led the excavations at Shubayqa 1 in Jordan, explained:

"Natufian hunter-gatherers are of particular interest to us because they lived through a transitional period when people became more sedentary and their diet began to change. Flint sickle blades as well as ground stone tools found at Natufian sites in the Levant have long led archaeologists to suspect that people had begun to exploit plants in a different and perhaps more effective way. But the flat bread found at Shubayqa 1 is the earliest evidence of bread making recovered so far, and it shows that baking was invented before we had plant cultivation. So this evidence confirms some of our ideas. Indeed, it may be that the early and extremely time-consuming production of bread based on wild cereals may have been one of the key driving forces behind the later agricultural revolution where wild cereals were cultivated to provide more convenient sources of food."

Charred remains under the microscope

The charred food remains were analysed with electron microscopy at a University College London lab by PhD candidate Lara Gonzalez Carratero (UCL Institute of Archaeology), who is an expert on prehistoric bread:


Dr. Amaia Arranz-Otaegui and Ali Shakaiteer sampling cereals in the Shubayqa area.

Photo: Joe Roe

"The identification of 'bread' or other cereal-based products in archaeology is not straightforward. There has been a tendency to simplify classification without really testing it against identification criteria. We have established a new set of criteria to identify flatbread, dough and porridge-like products in the archaeological record. Using Scanning Electron Microscopy we identified the microstructures and particles of each charred food remain," said Gonzalez Carratero.

"Bread involves labour-intensive processing, including dehusking, grinding the cereals, kneading and baking. That it was produced before farming methods suggests it was seen as special, and the desire to make more of this special food probably contributed to the decision to begin to cultivate cereals. All of this relies on new methodological developments that allow us to identify the remains of bread from very small charred fragments using high magnification," said Professor Dorian Fuller (UCL Institute of Archaeology).

Research into prehistoric food practices continues

A grant recently awarded to the University of Copenhagen team will ensure that research into food making during the transition to the Neolithic will continue:

"The Danish Council for Independent Research has recently approved further funding for our work, which will allow us to investigate how people consumed different plants and animals in greater detail. Building on our research into early bread, this will in the future give us a better idea why certain ingredients were favoured over others and were eventually selected for cultivation," said Tobias Richter.

The Shubayqa project research was funded by the Independent Research Fund Denmark. Permission to excavate was granted by the Department of Antiquities of Jordan.




Contacts and sources:
Dr. Amaia Arranz Otaegui
Postdoc in Archaeobotany
Centre for the Study of Early Agricultural Societies
University of Copenhagen

The Ancient Armor of Fish -- Scales -- Provides Clues to Hair, Feather Development



When sea creatures first began crawling and slithering onto land about 385 million years ago, they carried with them their body armor: scales. Fossil evidence shows that the earliest land animals retained scales as a protective feature as they evolved to flourish on terra firma.

But as time passed, and species diversified, animals began to shed the heavy scales from their ocean heritage and replace them with fur, hair and feathers.

Today the molecular mechanisms of scale development in fish remain remarkably similar to the mechanisms that produce feathers on birds, fur on dogs and hair on humans - suggesting a common evolutionary origin for a host of vastly different skin appendages.

In these images of zebrafish scales, yellow marks the cells that produce bony material. Magenta marks the bony material.
Photos by Andrew Aman, David Parichy, University of Virginia, for the journal eLife.



A new study, scheduled for online publication Tuesday in the journal eLife, examines the process as it occurs in a common laboratory genetics model, the zebrafish.

"We've found that the molecular pathways that underlie development of scales, hairs and feathers are strikingly similar," said the study's lead author, Andrew Aman, a postdoctoral researcher in biology at the University of Virginia.

Aman and his co-authors, including UVA undergraduate researcher Alexis Fulbright, now a Ph.D. candidate at the University of Utah, used molecular tools to manipulate and visualize scale development in zebrafish and tease out the details of how it works. It turns out, as the researchers suspected, skin appendages seen today originated hundreds of millions of years ago in primitive vertebrate ancestors, prior to the origin of limbs, jaws, teeth or even the internal skeleton.

While zebrafish have been studied for decades in wide-ranging genetic experiments, their scale development has mostly been overlooked, according to Aman.

"Zebrafish skin, including the bony scales, is largely transparent and researchers probably have simply looked past the scales to the internal structures," he said. "This is an area ripe for investigation, so we got the idea to look at the molecular machinery that drives the development of patterning in surface plating. We discovered profound similarities in the development of all skin appendages, whether scales, hair, fur or feathers."

In this image of zebrafish scales, yellow marks the cells that produce bony material. Magenta marks the bony material.
Photo by Andrew Aman, David Parichy, University of Virginia, for the journal eLife.


Aman works in the lab of David Parichy, the study's senior author and the Pratt-Ivy Foundation Distinguished Professor of Morphogenesis in UVA's Department of Biology. Parichy's lab investigates developmental genetics of adult morphology, stem cell biology and evolution, using zebrafish and related species as models. A high percentage of the genes in these common aquarium fish are the same as in humans - reflecting a common ancestry going back to the earliest common vertebrates that populated the ancient seas.

Developmental patterning - such as how scales take shape and form in slightly overlapping layers (in the case of zebrafish, there are more than 200 round scales on each side of the fish) - is a critical part of all development, including how stem cells differentiate and become, for example, bone cells, skin cells or any of the hundreds of cell types that make up the roughly 37 trillion cells in the human body.

How cells differentiate and organize into precise shapes (and sometimes develop into misshapen forms that can result in congenital diseases, cancers and other abnormalities) is of utmost interest to developmental biologists like Parichy and Aman. Understanding the process provides insights into birth defects, cancer and genetic disease, and into how the process might be corrected when it goes awry.

As an example, teeth, which are actually an epidermal appendage, sometimes are subject to developmental problems. "Defects we find in fish scale development are reminiscent of the developmental problems that can occur with teeth," Parichy said. "Since scales regenerate, maybe there is a way to get teeth to regenerate."

"This research helps us make important links between the natural history of life on Earth, the evolutionary process and human disease," Aman said.

In addition to the journal publication, Parichy will present the study Saturday at the annual meeting of the Society for Developmental Biology in Portland, Oregon.


Contacts and sources: 
Fariss Samarrai
 University of Virginia

Monday, July 16, 2018

Senolytic Drugs Reverse Damage Caused by Aging Cells in Mice

NIH-funded researchers see extended health span and life span in treated mice

Injecting senescent cells into young mice results in a loss of health and function, but treating the mice with a combination of two existing drugs cleared the senescent cells from tissues and restored physical function. The drugs also extended both life span and health span in naturally aging mice, according to a new study in Nature Medicine, published on July 9, 2018. The research was supported primarily by the National Institute on Aging (NIA), part of the National Institutes of Health (NIH).

A research team led by James L. Kirkland, M.D., Ph.D., of the Mayo Clinic in Rochester, Minnesota, found that injecting even a small number of senescent cells into young, healthy mice causes damage that can result in physical dysfunction. The researchers also found that treatment with a combination of dasatinib and quercetin could prevent cell damage, delay physical dysfunction, and, when used in naturally aging mice, extend their life span.

Credit: NIH

"This study provides compelling evidence that targeting a fundamental aging process—in this case, cell senescence in mice—can delay age-related conditions, resulting in better health and longer life," said NIA Director Richard J. Hodes, M.D. "This study also shows the value of investigating biological mechanisms which may lead to better understanding of the aging process."

Many normal cells continuously grow, die, and replicate. Cell senescence is a process in which cells lose function, including the ability to divide and replicate, but are resistant to cell death. Such cells have been shown to affect neighboring ones because they secrete several pro-inflammatory and tissue remodeling molecules. Senescent cells increase in many tissues with aging; they also occur in organs associated with many chronic diseases and after radiation or chemotherapy.

Senolytics are a class of drugs that selectively eliminate senescent cells. In this study, Kirkland's team used a combination of dasatinib and quercetin (D+Q) to test whether this senolytic combination could slow physical dysfunction caused by senescent cells. Dasatinib is used to treat some forms of leukemia; quercetin is a plant flavonol found in some fruits and vegetables.

To determine whether senescent cells caused physical dysfunction, the researchers first injected young (four-month-old) mice with either senescent (SEN) cells or non-senescent control (CON) cells. As early as two weeks after transplantation, the SEN mice showed impaired physical function as determined by maximum walking speed, muscle strength, physical endurance, daily activity, food intake, and body weight. In addition, the researchers saw increased numbers of senescent cells, beyond what was injected, suggesting a propagation of the senescence effect into neighboring cells.

To then analyze whether a senolytic compound could stop or delay physical dysfunction, researchers treated both SEN and CON mice for three days with the D+Q compound mix. They found that D+Q selectively killed senescent cells and slowed the deterioration in walking speed, endurance, and grip strength in the SEN mice.

In addition to the young mice injected with senescent cells, the researchers also treated older (20-month-old), non-transplanted mice with D+Q intermittently for four months. D+Q alleviated normal age-related physical dysfunction, resulting in higher walking speed, treadmill endurance, grip strength, and daily activity.

Finally, the researchers found that treating very old (24- to 27-month-old) mice with D+Q biweekly led to a 36 percent higher average post-treatment life span and lower mortality hazard than control mice. This indicates that senolytics can reduce risk of death in old mice.
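As a back-of-the-envelope illustration of the kind of comparison reported here, the sketch below computes the relative increase in mean post-treatment life span between a treated and a control group. The survival times are made up for the example; they are not the study's data.

```python
def percent_increase(treated, control):
    """Relative increase (%) in mean post-treatment life span of the
    treated group over the control group."""
    mean_t = sum(treated) / len(treated)
    mean_c = sum(control) / len(control)
    return (mean_t - mean_c) / mean_c * 100.0

# Hypothetical post-treatment survival times in days (illustrative only).
treated = [180, 210, 195, 240, 205]
control = [140, 155, 150, 160, 145]
print(f"{percent_increase(treated, control):.1f}% longer on average")
```

The published figure comes from a full survival analysis with mortality-hazard modelling, not a simple comparison of means; this sketch only shows what a "percent higher average life span" claim quantifies.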

"This is exciting research," said Felipe Sierra, Ph.D., director of NIA's Division of Aging Biology. "This study clearly demonstrates that senolytics can relieve physical dysfunction in mice. Additional research will be necessary to determine if compounds, like the one used in this study, are safe and effective in clinical trials with people."

The researchers noted that current and future preclinical studies may show that senolytics could be used to enhance life span not only in older people, but also in cancer survivors treated with senescence-inducing radiation or chemotherapy and people with a range of senescence-associated chronic diseases.

This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process—each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge of fundamental basic research.




Contacts and sources:
Barbara Cire
NIH/National Institute on Aging


Citation: Senolytics improve physical function and increase lifespan in old age.
Ming Xu, Tamar Pirtskhalava, Joshua N. Farr, Bettina M. Weigand, Allyson K. Palmer, Megan M. Weivoda, Christina L. Inman, Mikolaj B. Ogrodnik, Christine M. Hachfeld, Daniel G. Fraser, Jennifer L. Onken, Kurt O. Johnson, Grace C. Verzosa, Larissa G. P. Langhi, Moritz Weigl, Nino Giorgadze, Nathan K. LeBrasseur, Jordan D. Miller, Diana Jurk, Ravinder J. Singh, David B. Allison, Keisuke Ejima, Gene B. Hubbard, Yuji Ikeno, Hajrunisa Cubro, Vesna D. Garovic, Xiaonan Hou, S. John Weroha, Paul D. Robbins, Laura J. Niedernhofer, Sundeep Khosla, Tamara Tchkonia, James L. Kirkland. Nature Medicine, 2018; DOI: 10.1038/s41591-018-0092-9

Oxygen Levels on Early Earth Rose and Fell Several Times Before the Successful Great Oxidation Event



The Jeerinah Formation in Western Australia, where a University of Washington (UW)-led team found a sudden shift in nitrogen isotopes. “Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years,” said lead author Matt Koehler.
Credit: Roger Buick

Earth’s oxygen levels rose and fell more than once hundreds of millions of years before the planetwide success of the Great Oxidation Event about 2.4 billion years ago, new research from the University of Washington shows.

The evidence comes from a new study that indicates a second and much earlier “whiff” of oxygen in Earth’s distant past — in the atmosphere and on the surface of a large stretch of ocean — showing that the oxygenation of the Earth was a complex process of repeated trying and failing over a vast stretch of time.

The finding also may have implications in the search for life beyond Earth. Coming years will bring powerful new ground- and space-based telescopes able to analyze the atmospheres of distant planets. This work could help keep astronomers from unduly ruling out “false negatives,” or inhabited planets that may not at first appear to be so due to undetectable oxygen levels.

“The production and destruction of oxygen in the ocean and atmosphere over time was a war with no evidence of a clear winner, until the Great Oxidation Event,” said Matt Koehler, a UW doctoral student in Earth and space sciences and lead author of a new paper published the week of July 9 in the Proceedings of the National Academy of Sciences.

“These transient oxygenation events were battles in the war, when the balance tipped more in favor of oxygenation.”

In 2007, co-author Roger Buick, UW professor of Earth and space sciences, was part of an international team of scientists that found evidence of an episode — a “whiff” — of oxygen some 50 million to 100 million years before the Great Oxidation Event. This they learned by drilling deep into sedimentary rock of the Mount McRae Shale in Western Australia and analyzing the samples for the trace metals molybdenum and rhenium, accumulation of which is dependent on oxygen in the environment.

Now, a team led by Koehler has confirmed a second such appearance of oxygen in Earth’s past, this time roughly 150 million years earlier — or about 2.66 billion years ago — and lasting for less than 50 million years. For this work they used two different proxies for oxygen — nitrogen isotopes and the element selenium — substances that, each in its way, also tell of the presence of oxygen.

“What we have in this paper is another detection, at high resolution, of a transient whiff of oxygen,” said Koehler. “Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years.”

The team analyzed drill samples taken by Buick in 2012 at another site in the northwestern part of Western Australia called the Jeerinah Formation.

The researchers drilled two cores about 300 kilometers apart but through the same sedimentary rocks - one core samples sediments deposited in shallower waters, and the other samples sediments from deeper waters. Analyzing successive layers in the rocks shows, Buick said, a "stepwise" change in nitrogen isotopes "and then back again to zero. This can only be interpreted as meaning that there is oxygen in the environment. It's really cool - and it's sudden."
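To illustrate the idea of a transient excursion that steps up and then returns to baseline, here is a toy sketch (not the authors' method) that scans a depth-ordered series of isotope values for the longest contiguous run deviating from baseline. The profile values are invented for the example.

```python
def find_excursion(values, baseline=0.0, threshold=2.0):
    """Return the (start, end) index span of the longest contiguous run of
    samples deviating from `baseline` by more than `threshold`, or None."""
    best = None
    start = None
    for i, v in enumerate(values):
        if abs(v - baseline) > threshold:
            if start is None:
                start = i          # excursion begins
        else:
            if start is not None:  # excursion just ended at i - 1
                span = (start, i - 1)
                if best is None or span[1] - span[0] > best[1] - best[0]:
                    best = span
                start = None
    if start is not None:          # excursion runs to the end of the record
        span = (start, len(values) - 1)
        if best is None or span[1] - span[0] > best[1] - best[0]:
            best = span
    return best

# Synthetic isotope profile: near zero, a positive excursion, back to zero.
profile = [0.1, -0.2, 0.3, 4.5, 5.1, 4.8, 5.3, 0.2, -0.1]
print(find_excursion(profile))  # index span of the excursion interval
```

Real isotope records are noisier and irregularly sampled, so the actual interpretation rests on geochemical context, not a simple threshold; the sketch only conveys the "step up, then back to zero" pattern described above.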

The nitrogen isotopes reveal the activity of certain marine microorganisms that use oxygen to form nitrate, and other microorganisms that use this nitrate for energy. The data collected from nitrogen isotopes sample the surface of the ocean, while selenium suggests oxygen in the air of ancient Earth. Koehler said the deep ocean was likely anoxic, or without oxygen, at the time.

The team found plentiful selenium in the shallow hole only, meaning that it came from the nearby land, not making it to deeper water. Selenium is held in sulfur minerals on land; higher atmospheric oxygen would cause more selenium to be leached from the land through oxidative weathering — “the rusting of rocks,” Buick said — and transported to sea.

“That selenium then accumulates in ocean sediments,” Koehler said. “So when we measure a spike in selenium abundances in ocean sediments, it could mean there was a temporary increase in atmospheric oxygen.”

The finding, Buick and Koehler said, also has relevance for detecting life on exoplanets, or those beyond the solar system.

“One of the strongest atmospheric biosignatures is thought to be oxygen, but this study confirms that during a planet’s transition to becoming permanently oxygenated, its surface environments may be oxic for intervals of only a few million years and then slip back into anoxia,” Buick said.

“So, if you fail to detect oxygen in a planet’s atmosphere, that doesn’t mean that the planet is uninhabited or even that it lacks photosynthetic life. Merely that it hasn’t built up enough sources of oxygen to overwhelm the ‘sinks’ for any longer than a short interval.

“In other words, lack of oxygen can easily be a ‘false negative’ for life.”

Koehler added: “You could be looking at a planet and not see any oxygen — but it could be teeming with microbial life.”

Koehler’s other co-authors are UW Earth and space sciences doctoral student Michael Kipp, former Earth and space sciences postdoctoral researcher Eva Stüeken — now a faculty member at the University of St. Andrews in Scotland — and Jonathan Zaloumis of Arizona State University.

The research was funded by grants from NASA, the UW-based Virtual Planetary Laboratory and the National Science Foundation; drilling was funded by the Agouron Institute.




Contacts and sources:
Peter Kelley
University of Washington




Citation: Transient surface ocean oxygenation recorded in the ∼2.66-Ga Jeerinah Formation, Australia.
Matthew C. Koehler, Roger Buick, Michael A. Kipp, Eva E. Stüeken, Jonathan Zaloumis. Proceedings of the National Academy of Sciences, 2018; 201720820 DOI: 10.1073/pnas.1720820115

Digitizing and Uploading Copies of the Human Brain

The goal of a technology known as mind upload is to make it possible to create functional copies of the human brain on computers. The development of this technology, which involves scanning the brain and detailed cell-specific emulation, is currently receiving billions in funding. Science fiction enthusiasts express a more positive attitude towards the technology than others do.

“Mind upload is a technology rife with unsolved philosophical questions,” says researcher Michael Laakasuo.

“For example, is the potential for conscious experiences transmitted when the brain is copied? Does the digital brain have the ability to feel pain, and is switching off the emulated brain comparable to homicide? And what might potentially everlasting life be like on a digital platform?”

Credit: Nevit Dilmen / Wikimedia Commons

A positive attitude from science fiction enthusiasts

Such questions can be considered science fiction, but the first breakthroughs in digitizing the brain have already been made: for example, the nervous system of the roundworm (C. elegans) has been successfully modelled within a Lego robot capable of independently moving and avoiding obstacles. Recently, a functional digital copy of a piece of the somatosensory cortex of the rat brain was also created.

Scientific discoveries in the field of brain digitisation and related questions are given consideration in both science fiction and scientific journals in philosophy. Moralities of Intelligent Machines, a research group working at the University of Helsinki, is also investigating the subject from the perspective of moral psychology, in other words mapping out the tendency of ordinary people to either approve of or condemn the use of such technology.

“In the first sub-project, where data was collected in the United States, it was found that men are more approving of the technology than women. But standardising for interest in science fiction evened out such differences,” explains Laakasuo.
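The "standardising" step described here is, in statistical terms, controlling for a covariate. A minimal sketch with synthetic data (not the study's dataset) shows how a raw group difference in attitudes can vanish once science-fiction interest is included as a predictor in an ordinary least-squares fit:

```python
import numpy as np

# Synthetic data: attitude is driven entirely by sci-fi interest, but one
# group happens to have higher sci-fi interest on average.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
scifi = np.array([1., 2., 3., 4., 3., 4., 5., 6.])  # group 1 higher on average
attitude = 2.0 * scifi + 1.0                        # no direct group effect

# Raw group difference looks substantial...
raw_diff = attitude[group == 1].mean() - attitude[group == 0].mean()

# ...but fitting attitude ~ intercept + group + scifi removes it.
X = np.column_stack([np.ones_like(group), group, scifi])
coefs, *_ = np.linalg.lstsq(X, attitude, rcond=None)
print(raw_diff, coefs[1])  # group coefficient is ~0 after adjustment
```

The study's actual analysis will have used its own models and measures; this only illustrates the general logic of "evening out" a group difference by standardising for a shared covariate.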

According to Laakasuo, a stronger exposure to science fiction correlated with a more positive outlook on the mind upload technology overall. The study also found that traditional religiousness is linked with negative reactions towards the technology.

Disapproval from those disgust-sensitive to sexual matters

Another sub-study, where data was collected in Finland, indicated that people disapproved in general of uploading a human consciousness regardless of the target, be it a chimpanzee, a computer or an android.

In a third sub-project, the researchers observed a more positive outlook on, and approval of, the technology among those troubled by death and disapproving of suicide. They also found a strong connection between sexual disgust sensitivity and disapproval of the mind upload technology: people with this sensitivity find, for example, the viewing of pornographic videos and the lovemaking noises of neighbours disgusting. The link between sexual disgust sensitivity and disapproval of the technology is surprising, given that, on the face of it, the technology has no relevant association with procreation and mate choice.

“However, the inability to biologically procreate with a person who has digitised his or her brain may make the findings seem reasonable. In other words, technology is posing a fundamental challenge to our understanding of human nature,” reasons Laakasuo.

Digital copies of the human brain can reproduce much like an amoeba, by division, which makes sexuality, one of the founding pillars of humanity, obsolete. Against this background, the link between sexual disgust and the condemnation of using the technology in question seems rational.

Funding for research on machine intelligence and robotics

The research projects above were funded by the Jane and Aatos Erkko Foundation; in addition, the Moralities of Intelligent Machines project has received €100,000 from the Weisell Foundation for a year of follow-up research. According to foundation chair Mikko Voipio, humanism has great significance for research focused on machine intelligence and robotics.

“The bold advances in artificial intelligence as well as its increasing prevalence in various aspects of life are raising concern about the ethical and humanistic side of technological applications. Are the ethics of the relevant field of application also taken into consideration when developing and training such systems? The Moralities of Intelligent Machines research group is concentrating on this often forgotten factor of applying technology. The board of the Weisell Foundation considers this type of research important right now when artificial intelligence seems to have become a household phrase among politicians. It’s good that the other side of the coin also receives attention.”

According to Michael Laakasuo, funding prospects for research on the moral psychology of robotics and artificial intelligence are currently somewhat hazy, but the Moralities of Intelligent Machines group is grateful to both its funders and Finnish society for their continuous interest and encouragement.




Contacts and sources:
Michael Laakasuo / Niina Niskanen
University of Helsinki

Citation: What makes people approve or condemn mind upload technology? Untangling the effects of sexual disgust, purity and science fiction familiarity.
Michael Laakasuo, Marianna Drosinou, Mika Koverola, Anton Kunnari, Juho Halonen, Noora Lehtonen, Jussi Palomäki. Palgrave Communications, 2018; 4 (1) DOI: 10.1057/s41599-018-0124-6

Sunday, July 15, 2018

Rats Trail Behind Shrews, Monkeys, and Humans in Visual Problem Solving



Rats take a fundamentally different approach toward solving a simple visual discrimination task than tree shrews, monkeys, and humans, according to a comparative study of the four mammal species published in eNeuro. The work could have important implications for the translation of research in animal models to humans.

Credit: Mustafar et al., eNeuro (2018)

Scientists have developed powerful technologies that allow for precise manipulation of the rodent nervous system, which is similar to that of humans, making rodents crucial laboratory animals in neuroscience. Although it is thought that learning in mice and rats can overcome initial species differences on a task, few studies have directly tested this idea.

Gregor Rainer and colleagues addressed this gap in knowledge by comparing the ability of two species more closely related to humans -- macaques (Macaca fascicularis) and tree shrews (Tupaia belangeri) -- to discriminate a flickering light from two distracting stimuli. While the macaques and tree shrews used similar visual learning strategies and their performance improved over time, rats (Rattus norvegicus) were instead focused on where they had previously received a food reward and their performance did not improve. These findings suggest that rats use their brains differently than the other species in the context of this particular task.



Contacts and sources:
David Barnstone
eNeuro, the Society for Neuroscience


Citation: Divergent solutions to visual problem solving across mammalian species DOI: https://doi.org/10.1523/ENEURO.0167-18.2018
Corresponding author: Gregor Rainer (University of Fribourg, Switzerland), gregor.rainer@unifr.ch

Who Is Watching Who Is Grooming Who Is Important to Primates

Not only does the attractiveness of a potential grooming partner matter to wild chimpanzees and sooty mangabeys; their choice also depends on who is observing them

When humans cooperate with others they take their previous experiences with specific individuals into account, as well as their usefulness for carrying out a specific task. Moreover, they consider whether a better candidate is available, and whether the potential cooperation partner is actually reliable. 

Researchers of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, show that wild chimpanzees and sooty mangabeys, two primate species who live in complex social groups, choose their grooming partners based on a variety of criteria, including their social relationship with them and their potential partner’s dominance rank. In particular, individuals of both species avoided grooming group mates whose friends were among the bystanders, as grooming might be interrupted.

Chimpanzees prefer grooming mothers with babies, or their friends.

Credit: © MPI f. Evolutionary Anthropology/ A. Mielke

Working together and exchanging services for the benefit of everyone involved is crucial for humans and partly responsible for our success as a species. In order to achieve a goal, we need to choose the best possible cooperation partners. Yet who qualifies as the best possible partner depends on the task at hand, the abilities of all available candidates, and on our social relationships with them. Like humans, many non-human primates live in close-knit social groups. Individuals cooperate with each other to their mutual benefit, and often by exchanging services.

Grooming interactions play a special role within this system as they are exchangeable for support during a fight, for sharing food from a hunt or other services. Different group members can offer different services as reciprocation for the grooming; for example, high-ranking individuals are more useful supporters in a fight. Since grooming time is limited, individuals strive to pick the best grooming partner from all available candidates. Importantly, the success of the grooming interaction does not only depend on the individual and his or her chosen partner, but also on the audience: If the partner has a friend in the audience, the friend might interfere or the partner might leave the grooming bout early, in which case the time and effort of the groomer would be wasted.

Data out of the rainforest

Primatologist Alexander Mielke and his colleagues from the Max Planck Institute for Evolutionary Anthropology have now investigated at Taï National Park, Côte d’Ivoire, which properties chimpanzees and sooty mangabeys take into account when selecting a grooming partner, and if the composition of their audience influences their decision. Within the context of the Taï Chimpanzee Project, the researchers collected data in two chimpanzee communities and one mangabey community. In contrast to previous studies, they regarded each grooming initiation as an individual's personal decision: From all possible partners with their specific properties, which one would they choose?

In order to determine the social relationships and ranks of all individuals, Mielke and his colleagues analyzed data that researchers and field assistants had collected over many years. They then assessed the impact of the reproductive state of the potential partner (whether they had a baby or were receptive females), their social relationship with the decision-maker, whether there had been aggression between the two just before the decision was made, their sex and their dominance rank.

The researchers checked for each individual whether they had friends amongst the by-standers, and they differentiated between two types of dominance rank: global (compared to all individuals in the community) and local (in comparison to those nearby). This was to test whether non-human primates change their preference for individuals fluidly based on the social environment, or whether they simply preferred certain individuals based on their rank.
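The global/local rank distinction above can be made concrete with a minimal sketch. The community ordering and individual names here are hypothetical, purely for illustration; the study itself derived ranks from long-term observational data:

```python
# Hypothetical dominance hierarchy, ordered from highest to lowest rank.
community = ["A", "B", "C", "D", "E"]

def global_rank(individual):
    """Rank within the whole community (0 = highest-ranking)."""
    return community.index(individual)

def local_rank(individual, bystanders):
    """Rank counted only among the individuals currently nearby."""
    present = [x for x in community if x == individual or x in bystanders]
    return present.index(individual)

# "D" ranks low globally but is the top-ranking animal actually present:
print(global_rank("D"))        # 3
print(local_rank("D", {"E"}))  # 0
```

The point of the distinction is visible in the example: an animal choosing partners by local rank would treat "D" as the most attractive option in this audience, even though "D" ranks near the bottom of the community overall.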

Friends may interrupt grooming

"Choosing a grooming partner from among ten, fifteen possible candidates – some of them friends, high-ranking, or with babies – is a very difficult task indeed. And yet individuals from both species, chimpanzee and sooty mangabey, chose their partner flexibly," says Alexander Mielke, first author of the study. "Both mangabeys and chimpanzees actually preferred grooming mothers with babies, something we did not know was the case in chimpanzees. Both species used grooming for reconciliation, and both species groomed their friends more. Most strikingly, in both species grooming choices depended strongly on the social environment."

Sooty mangabeys also took their social environment into consideration when deciding for or against a specific grooming partner.

Credit: © MPI f. Evolutionary Anthropology/ A. Mielke

"The animals avoided grooming individuals with close friends nearby, possibly because these friends might interrupt the grooming interaction, or because their potential partner might prefer to go and groom these friends," explains Mielke. "Individuals also chose partners who were higher-ranking compared to other possible partners, independent of their overall rank in the community. This shows that both species are flexible when it comes to taking a decision, and that individuals use the information they have about all available partners. Yet they also consider the wider social circumstances, and adapt their choice to maximize their own benefit."

These results show that primates are not only aware of the ranks and social relationships of their conspecifics, but that they can judge many individuals simultaneously and flexibly select the best option. The impact of the social environment suggests that additionally, mangabeys and chimpanzees can inhibit a preferred response (e.g. grooming a high-ranking group mate) if their action would not lead to success (because this individual’s friend is present), a skill often considered too difficult for non-human animals.

"The fact that we found these results in both mangabeys and chimpanzees might indicate that this impressive cognitive feat is more widespread amongst primates than previously known," concludes Mielke. "Grooming is an important part of primate cooperation and choosing the best partner in a specific situation is a vital skill. As in humans, primate social groups consist of many individuals, each with their own status, own objectives, own history. This study gives further evidence that at least mangabeys and chimpanzees are equipped with the cognitive abilities to navigate and thrive in this complex social world."




Contacts and sources:
Alexander Mielke / Sandra Jacob 
Max Planck Institute for Evolutionary Anthropology


Citation: Flexible decision-making in grooming partner choice in sooty mangabeys and chimpanzees. Alexander Mielke, Anna Preis, Liran Samuni, Jan F. Gogarten, Roman M. Wittig and Catherine Crockford. Royal Society Open Science; 11 July, 2018 (DOI: 10.1098/rsos.172143)

Saturday, July 14, 2018

Humans Evolved in Partially Isolated Populations Scattered Across Africa


The textbook narrative of human evolution casts Homo sapiens as evolving from a single ancestral population in one region of Africa around 300,000 years ago. However, in a commentary published July 11 in the journal Trends in Ecology & Evolution, an interdisciplinary group of researchers concludes that early humans comprised a subdivided, shifting, pan-African meta-population with physical and cultural diversity. This framework better explains existing genetic, fossil, and cultural patterns and clarifies our shared ancestry.

"In the fossil record, we see a mosaic-like, continent-wide trend toward the modern human form, and the fact that these features appear at different places at different times tells us that these populations were not well connected," says Eleanor Scerri, a British Academy postdoctoral fellow in archaeology at the University of Oxford and the Max Planck Institute for the Science of Human History. "This fits with a subdivided population model in which genetic exchanges are neither random nor frequent. This allows us to start detailing the processes that shaped our evolutionary history."

Taken at the David H. Koch Hall of Human Origins at the Smithsonian Natural History Museum
Skulls of our Ancestors 3.jpg
Credit: Ryan Somma / Wikimedia Commons

This poor connectivity is explained by a series of shifting rivers, deserts, forests, and other physical barriers separating these subpopulations, as highlighted in the ecological record. "These barriers created migration and contact opportunities for groups that may previously have been separated, and later fluctuation might have meant populations that mixed for a short while became isolated again," says Scerri.

The theory that there was mingling and isolation of subpopulations from the southern tip to the northern coasts of Africa is a much better fit with the fossil and genetic data than is a single population model. Examination of H. sapiens fossils paired with inferences made from contemporary DNA samples suggested levels of early human diversity that supported the researchers' shifting subdivided population model.

"For the first time, we've examined all the relevant archaeological, fossil, genetic, and environmental data together to eliminate field-specific biases and assumptions and confirm that a mosaic, pan-African origin view is a much better fit with the data that we have," says Scerri. "To understand our genetic and cultural diversity or where being human comes from - our behavioral flexibility and biological plasticity - we have to look at an ancient history of population subdivision and diverse ecologies across Africa."

Moving forward, this research will allow our models of human evolutionary history to reject the simple linear progression from what might be termed "archaic morphology" toward a recognizably human form in favor of a more accurate account of the complexity and irregularity involved in our evolution and an acknowledgment of a pan-African origin of our species.

"In bringing together people from such diverse fields, we've arrived at a place where we can begin to address some key questions about our shared ancestry and even emerge with new questions we haven't known to ask before," Scerri says. "We are an evolving lineage with deep African roots, so to understand this history, we must re-examine evidence from diverse sources without a priori conceptions."

This research was primarily funded by the British Academy of Humanities and Social Sciences and the Wellcome Trust.







Contacts and sources:
Christina Monnen
Cell Press

Citation: Trends in Ecology & Evolution, Scerri, et al.: "Did our species evolve in subdivided populations across Africa, and why does it matter?" https://www.cell.com/trends/ecology-evolution/fulltext/S0169-5347(18)30117-4

Deep Subterranean Connection Found Between Two Japanese Volcanoes

Scientists have confirmed for the first time that radical changes at one volcano in southern Japan were the direct result of an erupting volcano 22 kilometers (13.7 miles) away. The observations from the two volcanoes--Aira caldera and Kirishima--show that the two were connected through a common subterranean magma source in the months leading up to the 2011 eruption of Kirishima.

The Japanese cities of Kirishima and Kagoshima lie directly on the border of the Aira caldera, one of the most active, hazardous, and closely monitored volcanoes in southern Japan. Identifying how volcanoes interact is critical to determine if and how an eruption can influence the activity of a distant volcano or raise the threat of a new strong explosive event.

Southern Japan on Feb. 3rd, 2011, showing the active cones of Kirishima (Shinmoedake) and Aira caldera (Sakurajima) volcanoes. While Kirishima is erupting very strongly, Aira's activity is relatively low.
Credit: NASA

The research team from the University of Miami's (UM) Rosenstiel School of Marine and Atmospheric Science and Florida International University analyzed deformation data from 32 permanent GPS stations in the region to identify the existence of a common magma reservoir that connected the two volcanoes.

Leading up to the eruption of Kirishima, which is located in the densely-populated Kagoshima region, the Aira caldera stopped inflating, which experts took as a sign that the volcano was at rest. The results from this new study, however, indicated that the opposite was happening--the magma chamber inside Aira began to deflate temporarily while Kirishima was erupting and resumed shortly after the activity at Kirishima stopped.

"We observed a radical change in the behavior of Aira before and after the eruption of its neighbor Kirishima," said Elodie Brothelande, a postdoctoral researcher at the UM Rosenstiel School and lead author of the study. "The only way to explain this interaction is the existence of a connection between the two plumbing systems of the volcanoes at depth."

Prior to this new study, scientists had geological records of volcanoes erupting or collapsing at the same time, but this is the first example of an unambiguous connection between volcanoes that allowed scientists to study the underlying mechanisms involved. The findings confirm that volcanoes with no distinct connection at the surface can be part of a giant magmatic system at depth.

"To what extent magmatic systems are connected is an important question in terms of the hazards," said Falk Amelung, professor of geophysics at the UM Rosenstiel School and coauthor of the study. "Is there a lot of magma underground and can one eruption trigger another volcano? Up until now there was little or no evidence of distinct connections."

"Eruption forecasting is crucial, especially in densely populated volcanic areas," said Brothelande. "Now, we know that a change in behavior can be the direct consequence of the activity of its neighbor Kirishima."

The findings also illustrate that large volcanic systems such as Aira caldera can respond to smaller eruptions at nearby volcanoes if fed from a common deep reservoir but not all the time, since magma pathways open and close periodically.

"Now, we have to look at whether this connection is particular to these volcanoes in southern Japan or is widespread and occurs around the world," said Amelung.






Contacts and sources:
Diana Udel
University of Miami's (UM) Rosenstiel School of Marine and Atmospheric Science

Citation: "Geodetic evidence for interconnectivity between Aira and Kirishima magmatic systems, Japan," was published June 28 in the journal Scientific Reports. The coauthors include: Elodie Brothelande, Falk Amelung and Zhang Yunjun from the UM Rosenstiel School of Marine and Atmospheric Science and Shimon Wdowinski of Florida International University. The study was supported by NASA's Earth Surface and Interior Program (grant #NNX16AL19G). http://dx.doi.org/10.1038/s41598-018-28026-4

Army Teaching Robots To Be More Reliable Teammates for Soldiers

Researchers at the U.S. Army Research Laboratory and the Robotics Institute at Carnegie Mellon University developed a new technique to quickly teach robots novel traversal behaviors with minimal human oversight.

The technique allows mobile robot platforms to navigate autonomously in environments while carrying out actions a human would expect of the robot in a given situation.

A small unmanned Clearpath Husky robot, which was used by ARL researchers to develop a new technique to quickly teach robots novel traversal behaviors with minimal human oversight.

Credit:  US Army

The experiments of the study were recently published and presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation held in Brisbane, Australia.

ARL researchers Drs. Maggie Wigness and John Rogers engaged in face-to-face discussions with hundreds of conference attendees during their two and a half hour interactive presentation.

According to Wigness, one of the research team's goals in autonomous systems research is to provide reliable autonomous robot teammates to the Soldier.

"If a robot acts as a teammate, tasks can be accomplished faster and more situational awareness can be obtained," Wigness said. "Further, robot teammates can be used as an initial investigator for potentially dangerous scenarios, thereby keeping Soldiers further from harm."

To achieve this, Wigness said the robot must be able to use its learned intelligence to perceive, reason and make decisions.

"This research focuses on how robot intelligence can be learned from a few human example demonstrations," Wigness said. "The learning process is fast and requires minimal human demonstration, making it an ideal learning technique for on-the-fly learning in the field when mission requirements change."

ARL and CMU researchers focused their initial investigation on learning robot traversal behaviors with respect to the robot's visual perception of terrain and objects in the environment.

More specifically, the robot was taught how to navigate from various points in the environment while staying near the edge of a road, and also how to traverse covertly using buildings as cover.

According to the researchers, given different mission tasks, the most appropriate learned traversal behavior can be activated during robot operation.

ARL researchers Drs. Maggie Wigness and John Rogers pose with a small unmanned Clearpath Husky robot in their lab at the Adelphi Laboratory Center in Maryland.
Credit:  US Army

This is done by leveraging inverse optimal control, also commonly referred to as inverse reinforcement learning, which is a class of machine learning that seeks to recover a reward function given a known optimal policy.

In this case, a human demonstrates the optimal policy by driving a robot along a trajectory that best represents the behavior to be learned.

These trajectory exemplars are then related to the visual terrain/object features, such as grass, roads and buildings, to learn a reward function with respect to these environment features.
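The idea of recovering a reward function over terrain features from demonstrations can be illustrated with a minimal sketch. This is a simplified, linear-reward toy version of inverse reinforcement learning under stated assumptions, not ARL's actual system; the feature names ([grass, road, building]) and numbers are hypothetical:

```python
import numpy as np

def feature_expectations(trajectories):
    """Average total feature counts over a set of trajectories."""
    return np.mean([traj.sum(axis=0) for traj in trajectories], axis=0)

def learn_reward_weights(expert_trajs, sampled_trajs, lr=0.1, iters=100):
    """Margin-style IRL sketch: push reward weights toward the expert's
    feature expectations and away from those of sampled non-expert behavior."""
    mu_expert = feature_expectations(expert_trajs)
    mu_other = feature_expectations(sampled_trajs)
    w = np.zeros_like(mu_expert)
    for _ in range(iters):
        w += lr * (mu_expert - mu_other)  # gradient of the linear margin
        norm = np.linalg.norm(w)
        if norm > 1.0:                    # keep weights on the unit ball
            w /= norm
    return w

# Each trajectory step is a feature vector [grass, road, building].
# The expert demonstration stays on the road; a sampled non-expert
# trajectory wanders across grass.
expert = [np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)]
other = [np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)]
weights = learn_reward_weights(expert, other)
print(weights)  # road weighted positively, grass negatively
```

Once weights like these are learned, a planner can score candidate paths by their accumulated reward, which is how a single demonstration generalizes to navigating from new starting points.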

While similar research exists in the field of robotics, what ARL is doing is especially unique.

"The challenges and operating scenarios that we focus on here at ARL are extremely unique compared to other research being performed," Wigness said. "We seek to create intelligent robotic systems that reliably operate in warfighter environments, meaning the scene is highly unstructured, possibly noisy, and we need to do this given relatively little a priori knowledge of the current state of the environment. The fact that our problem statement is so different than so many other researchers allows ARL to make a huge impact in autonomous systems research. Our techniques, by the very definition of the problem, must be robust to noise and have the ability to learn with relatively small amounts of data."

According to Wigness, this preliminary research has helped the researchers demonstrate the feasibility of quickly learning an encoding of traversal behaviors.

"As we push this research to the next level, we will begin to focus on more complex behaviors, which may require learning from more than just visual perception features," Wigness said. "Our learning framework is flexible enough to use a priori intel that may be available about an environment. This could include information about areas that are likely visible by adversaries or areas known to have reliable communication. This additional information may be relevant for certain mission scenarios, and learning with respect to these features would enhance the intelligence of the mobile robot."

The researchers are also exploring how this type of behavior learning transfers between different mobile platforms.

Their evaluation to date has been performed with a small unmanned Clearpath Husky robot, which has a visual field of view that is relatively low to the ground.

"Transferring this technology to larger platforms will introduce new perception viewpoints and different platform maneuvering capabilities," Wigness said. "Learning to encode behaviors that can be easily transferred between different platforms would be extremely valuable given a team of heterogeneous robots. In this case, the behavior can be learned on one platform instead of each platform individually."

This research is funded through the Army's Robotics Collaborative Technology Alliance, or RCTA, which brings together government, industrial and academic institutions to address research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles.

"ARL is positioned to actively collaborate with other members of the RCTA, leveraging the efforts of top researchers in academia to work on Army problems," Rogers said. "This particular research effort was the synthesis of several components of the RCTA with our internal research; it would not have been possible if we didn't work together so closely."

Ultimately, this research is crucial for the future battlefield, where Soldiers will be able to rely on robots with more confidence to assist them in executing missions.

"The capability for the Next Generation Combat Vehicle to autonomously maneuver at optempo in the battlefield of the future will enable powerful new tactics while removing risk to the Soldier," Rogers said. "If the NGCV encounters unforeseen conditions which require teleoperation, our approach could be used to learn to autonomously handle these types of conditions in the future."

The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.




Contacts and sources:
T'Jae Ellis
U.S. Army Research Laboratory

Friday, July 13, 2018

Unraveling a Long-Standing Cosmic Mystery

Astronomers and physicists around the world, including in Hawaii, have begun to unravel a long-standing cosmic mystery. Using a vast array of telescopes in space and on Earth, they have identified a source of cosmic rays--highly energetic particles that continuously rain down on Earth from space.

In a paper published this week in the journal Science, scientists have, for the first time, provided evidence for a known blazar, designated TXS 0506+056, as a source of high-energy neutrinos. At 8:54 p.m. on September 22, 2017, the National Science Foundation-supported IceCube neutrino observatory at the South Pole detected a high energy neutrino from a direction near the constellation Orion. Just 44 seconds later an alert went out to the entire astronomical community.

The All Sky Automated Survey for SuperNovae team (ASAS-SN), an international collaboration headquartered at Ohio State University, immediately jumped into action. ASAS-SN uses a network of 20 small, 14-centimeter telescopes in Hawaii, Texas, Chile and South Africa to scan the visible sky every 20 hours looking for very bright supernovae. It is the only all-sky, real-time variability survey in existence.

Artist’s impression of a blazar emitting neutrinos and gamma rays. 
Credit: IceCube/NASA

"When ASAS-SN receives an alert from IceCube, we automatically find the first available ASAS-SN telescope that can see that area of the sky and observe it as quickly as possible," said Benjamin Shappee, an astronomer at the University of Hawaii's Institute for Astronomy and an ASAS-SN core member.

On September 23, only 13 hours after the initial alert, the recently commissioned ASAS-SN unit at McDonald Observatory in Texas mapped the sky in the area of the neutrino detection. Those observations and the more than 800 images of the same part of the sky taken since October 2012 by the first ASAS-SN unit, located on Maui's Haleakala, showed that TXS 0506+056 had entered its highest state since 2012.

"The IceCube detection and the ASAS-SN detection, combined with gamma-ray detections from NASA's Fermi gamma-ray space telescope and the MAGIC telescopes showing that TXS 0506+056 was undergoing the strongest gamma-ray flare in a decade, indicate that this could be the first identified source of high-energy neutrinos, and thus a cosmic-ray source," said Anna Franckowiak, ASAS-SN and IceCube team member, Helmholtz Young Investigator, and staff scientist at DESY in Germany.

Since they were first detected more than one hundred years ago, cosmic rays have posed an enduring mystery: What creates and launches these particles across such vast distances? Where do they come from?

One of the best suspects has been quasars, supermassive black holes at the centers of galaxies that are actively consuming gas and dust. Quasars are among the most energetic phenomena in the universe and can form relativistic jets where elementary particles are accelerated and launched at nearly the speed of light. If that jet happens to be pointed toward Earth, the light from the jet outshines all other emission from the host galaxy and the highly accelerated particles are launched toward the Milky Way. This specific type of quasar is called a blazar.

However, because cosmic rays are charged particles, their paths cannot be traced directly back to their places of origin. Due to the powerful magnetic fields that fill space, they don't travel along a straight path. Luckily, the powerful cosmic accelerators that produce them also emit neutrinos, which are uncharged and unaffected by even the most powerful magnetic fields. Because they rarely interact with matter and have almost no mass, these "ghost particles" travel nearly undisturbed from their cosmic accelerators, giving scientists an almost direct pointer to their source.

"Crucially, the presence of neutrinos also differentiates between two types of gamma-ray sources: those that accelerate only cosmic-ray electrons, which do not produce neutrinos, and those that accelerate cosmic-ray protons, which do," said John Beacom, an astrophysicist at the Ohio State University and an ASAS-SN member.

The ASAS-SN telescope on Haleakala on the island of Maui.
Credit: Ben Shappee, University of Hawaii, Institute for Astronomy

Detecting the highest energy neutrinos requires a massive particle detector, and the National Science Foundation-supported IceCube observatory is the world's largest. The detector is composed of more than 5,000 light sensors arranged in a grid, buried in a cubic kilometer of deep, pristine ice a mile beneath the surface at the South Pole. When a neutrino interacts with an atomic nucleus, it creates a secondary charged particle, which, in turn, produces a characteristic cone of blue light that is detected by IceCube's grid of photomultiplier tubes. Because the charged particle and the light it creates stay essentially true to the neutrino's original direction, they give scientists a path to follow back to the source.

About 20 observatories on Earth and in space have also participated in this discovery. This includes the 8.4-meter Subaru Telescope on Maunakea, which was used to observe the host galaxy of TXS 0506+056 in an attempt to measure its distance, and thus determine the intrinsic luminosity, or energy output, of the blazar. These observations are difficult, because the blazar jet is much brighter than the host galaxy. Disentangling the jet and the host requires the largest telescopes in the world, like those on Maunakea.

"This discovery demonstrates how the many different telescopes and detectors around and above the world can come together to tell us something amazing about our Universe. This also emphasizes the critical role that telescopes in Hawaii play in that community," said Shappee.







Contacts and sources:
Ben Shappee
University of Hawaii at Manoa

Citation: Multimessenger observations of a flaring blazar coincident with high-energy neutrino IceCube-170922A http://dx.doi.org/10.1126/science.aat1378

Does Dark Matter "Talk" with Ordinary Matter?

Researchers are interpreting new experimental data aimed at showing dark matter interacts with ordinary matter -- an unmet challenge in modern physics

An international team of scientists that includes University of California, Riverside, physicist Hai-Bo Yu has imposed conditions on how dark matter may interact with ordinary matter -- constraints that can help identify the elusive dark matter particle and detect it on Earth.

Dark matter -- nonluminous material in space -- is understood to constitute 85 percent of the matter in the universe. Unlike normal matter, it does not absorb, reflect, or emit light, making it difficult to detect.

Physicists are certain dark matter exists, having inferred this existence from the gravitational effect dark matter has on visible matter. What they are less certain of is how dark matter interacts with ordinary matter -- or even if it does.

Photo shows PandaX, a xenon-based detector in China.

Credit: PandaX.

In the search for direct detection of dark matter, the experimental focus has been on WIMPs, or weakly interacting massive particles, the hypothetical particles thought to make up dark matter.

But Yu's international research team invokes a different theory to challenge the WIMP paradigm: the self-interacting dark matter model, or SIDM, a well-motivated framework that can explain the full range of diversity observed in the galactic rotation curves. First proposed in 2000 by a pair of eminent astrophysicists, SIDM has regained popularity in both the particle physics and the astrophysics communities since around 2009, aided, in part, by work Yu and his collaborators did.

Yu, a theorist in the Department of Physics and Astronomy at UCR, and Yong Yang, an experimentalist at Shanghai Jiaotong University in China, co-led the team analyzing and interpreting the latest data collected in 2016 and 2017 at PandaX-II, a xenon-based dark matter direct detection experiment in China (PandaX refers to Particle and Astrophysical Xenon Detector; PandaX-II refers to the experiment). Should a dark matter particle collide with PandaX-II's liquefied xenon, the result would be two simultaneous signals: one of photons and the other of electrons.

Particle physicist Hai-Bo Yu is an assistant professor of physics and astronomy at UC Riverside.

Credit: I. Pittalwala, UC Riverside.


Yu explained that PandaX-II assumes dark matter "talks to" normal matter -- that is, interacts with protons and neutrons -- by means other than gravitational interaction (gravitational interaction alone is not enough). The researchers then search for a signal that identifies this interaction. In addition, the PandaX-II collaboration assumes the "mediator particle," which mediates interactions between dark matter and normal matter, has far less mass than the mediator particle in the WIMP paradigm.

"The WIMP paradigm assumes this mediator particle is very heavy -- 100 to 1000 times the mass of a proton -- or about the mass of the dark matter particle," Yu said. "This paradigm has dominated the field for more than 30 years. In astrophysical observations, we don't, however, see all its predictions. The SIDM model, on the other hand, assumes the mediator particle is about 0.001 times the mass of the dark matter particle, inferred from astrophysical observations from dwarf galaxies to galaxy clusters. The presence of such a light mediator could lead to smoking-gun signatures of SIDM in dark matter direct detection, as we suggested in an earlier theory paper. Now, we believe PandaX-II, one of the world's most sensitive direct detection experiments, is poised to validate the SIDM model when a dark matter particle is detected."
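As a rough back-of-the-envelope check on the mass scales Yu quotes, the sketch below compares the two mediator regimes. The 100 GeV dark matter mass is an assumed benchmark chosen for illustration, not a value from the study:

```python
# Compare mediator mass scales in the WIMP and SIDM pictures.
# m_dm is a hypothetical benchmark dark matter mass, not from the study.
m_proton = 0.938          # GeV, approximate proton mass
m_dm = 100.0              # GeV, assumed dark matter particle mass (illustrative)

# WIMP paradigm: mediator ~100-1000x the proton mass,
# i.e. roughly comparable to the dark matter mass itself.
wimp_lo, wimp_hi = 100 * m_proton, 1000 * m_proton   # ~93.8 to ~938 GeV

# SIDM: mediator ~0.001x the dark matter mass -- far lighter.
sidm_mediator = 1e-3 * m_dm                          # 0.1 GeV

print(f"WIMP mediator: {wimp_lo:.1f}-{wimp_hi:.1f} GeV")
print(f"SIDM mediator: {sidm_mediator:.3f} GeV")
```

Under these assumptions the SIDM mediator is lighter than a WIMP-paradigm mediator by roughly three orders of magnitude, which is the gap the direct-detection analysis exploits.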

The international team of researchers reports July 12 in Physical Review Letters the strongest limit on the interaction strength between dark matter and visible matter with a light mediator. The journal has selected the research paper as a highlight, a significant honor.

"This is a particle physics constraint on a theory that has been used to understand astrophysical properties of dark matter," said Flip Tanedo, a dark matter expert at UCR, who was not involved in the research. "The study highlights the complementary ways in which very different experiments are needed to search for dark matter. It also shows why theoretical physics plays a critical role to translate between these different kinds of searches. The study by Hai-Bo Yu and his colleagues interprets new experimental data in terms of a framework that makes it easy to connect to other types of experiments, especially astrophysical observations, and a much broader range of theories."

PandaX-II is located at the China Jinping Underground Laboratory, Sichuan Province, where pandas are abundant. The laboratory is the deepest underground laboratory in the world. PandaX-II had generated the largest dataset for dark matter detection when the analysis was performed. One of only three xenon-based dark matter direct detection experiments in the world, PandaX-II is one of the frontier facilities to search for extremely rare events where scientists hope to observe a dark matter particle interacting with ordinary matter and thus better understand the fundamental particle properties of dark matter.

Particle physicists' attempts to understand dark matter have yet to yield definitive evidence for it in the lab.

"The discovery of a dark matter particle interacting with ordinary matter is one of the holy grails of modern physics and represents the best hope to understand the fundamental, particle properties of dark matter," Tanedo said.

For the past decade, Yu, a world expert on SIDM, has led an effort to bridge particle physics and cosmology by looking for ways to understand dark matter's particle properties from astrophysical data. He and his collaborators have discovered a class of dark matter theories with a new dark force that may explain unexpected features seen in systems across a wide range, from dwarf galaxies to galaxy clusters. More importantly, this new SIDM framework serves as a tool for particle physicists to convert astronomical data into particle physics parameters of dark matter models. In this way, the SIDM framework is a translator that helps two different scientific communities understand each other's results.

Now with the PandaX-II experimental collaboration, Yu has shown how self-interacting dark matter theories may be distinguished at the PandaX-II experiment.

"Prior to this line of work, these types of laboratory-based dark matter experiments primarily focused on dark matter candidates that did not have self-interactions," Tanedo said. "This work has shown how dark forces affect the laboratory signals of dark matter."

Yu noted that this is the first direct detection result for SIDM reported by an experimental collaboration.

"With more data, we will continue to probe the dark matter interactions with a light mediator and the self-interacting nature of dark matter," he said.

Yu was joined in the study by researchers from institutes in China, including Shanghai Jiaotong University, Xinjiang University, Yalong River Hydropower Development Company, Chinese Academy of Sciences, Shandong University, Tsung-Dao Lee Institute, Peking University, and University of Shanghai for Science and Technology; and from the University of Maryland, College Park, USA. The spokesperson for the PandaX-II collaboration is Xiangdong Ji.

A grant from the U.S. Department of Energy supported Yu.



Contacts and sources:
Iqbal Pittalwala
University of California - Riverside (UCR)

Antioxidant Benefits of Sleep Discovered

It has been estimated that the average person spends about a third of his or her life sleeping. The biological function of sleep seems to be to rejuvenate the body.

Understanding sleep has become increasingly important in modern society, where chronic loss of sleep has become rampant and pervasive. As evidence mounts for a correlation between lack of sleep and negative health effects, the core function of sleep remains a mystery. 

In a new study published 12 July in the open access journal PLOS Biology, Vanessa Hill, Mimi Shirasu-Hiza and colleagues at Columbia University, New York, found that short-sleeping fruit fly mutants shared the common defect of sensitivity to acute oxidative stress, and thus that sleep supports antioxidant processes. Understanding this ancient bi-directional relationship between sleep and oxidative stress in the humble fruit fly could provide much-needed insight into modern human diseases such as sleep disorders and neurodegenerative diseases.

 A defect shared among short-sleeping fruit fly mutants suggests that sleep supports antioxidant processes.


Image Credit: pbio.2005206

Why do we sleep? During sleep, animals are vulnerable, immobile, and less responsive to their environments; they are unable to forage for food, mate, or run from predators. Despite the cost of sleep behavior, almost all animals sleep, suggesting that sleep fulfills an essential and evolutionarily conserved function from humans to fruit flies.

The researchers reasoned that if sleep is required for a core function of health, animals that sleep significantly less than usual should all share a defect in that core function. For this study, they used a diverse group of short-sleeping Drosophila (fruit fly) mutants. They found that these short-sleeping mutants do indeed share a common defect: they are all sensitive to acute oxidative stress.

Oxidative stress results from excess free radicals that can damage cells and lead to organ dysfunction. Toxic free radicals, or reactive oxygen species, build up in cells from normal metabolism and environmental damage. If the function of sleep is to defend against oxidative stress, then increasing sleep should increase resistance to oxidative stress. Hill and co-workers used both pharmacological and genetic methods to show that this is true.

Finally, the authors reasoned that if sleep has antioxidant effects, then oxidative stress might in turn regulate sleep itself. Consistent with this hypothesis, they found that reducing oxidative stress in the brain by overexpressing antioxidant genes also reduced the amount of sleep. Taken together, these results point to a bi-directional relationship between sleep and oxidative stress -- that is, sleep functions to defend the body against oxidative stress, and oxidative stress in turn helps to induce sleep.

This work is relevant to human health because sleep disorders are correlated with many diseases that are also associated with oxidative stress, such as Alzheimer's, Parkinson's, and Huntington's diseases. Sleep loss could make individuals more sensitive to oxidative stress and subsequent disease; conversely, pathological disruption of the antioxidant response could also lead to loss of sleep and associated disease pathologies.



Contacts and sources:
PLOS Biology
PLOS

Citation: Hill VM, O'Connor RM, Sissoko GB, Irobunda IS, Leong S, Canman JC, et al. (2018) A bidirectional relationship between sleep and oxidative stress in Drosophila. PLoS Biol 16(7): e2005206. https://doi.org/10.1371/journal.pbio.2005206   The article is freely available in PLOS Biology: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2005206
 

Thursday, July 12, 2018

T Cells Reprogrammed Using CRISPR Gene Editing

In an achievement that has significant implications for research, medicine, and industry, UC San Francisco scientists have genetically reprogrammed the human immune cells known as T cells without using viruses to insert DNA. 

The researchers said they expect their technique—a rapid, versatile, and economical approach employing CRISPR gene-editing technology—to be widely adopted in the burgeoning field of cell therapy, accelerating the development of new and safer treatments for cancer, autoimmunity, and other diseases, including rare inherited disorders.

The new method, described in the July 11, 2018 issue of Nature, offers a robust molecular “cut and paste” system to rewrite genome sequences in human T cells. It relies on electroporation, a process in which an electrical field is applied to cells to make their membranes temporarily more permeable. After experimenting with thousands of variables over the course of a year, the UCSF researchers found that when certain quantities of T cells, DNA, and the CRISPR “scissors” are mixed together and then exposed to an appropriate electrical field, the T cells will take in these elements and integrate specified genetic sequences precisely at the site of a CRISPR-programmed cut in the genome.

“This is a rapid, flexible method that can be used to alter, enhance, and reprogram T cells so we can give them the specificity we want to destroy cancer, recognize infections, or tamp down the excessive immune response seen in autoimmune disease,” said UCSF’s Alex Marson, MD, PhD, associate professor of microbiology and immunology, a member of the UCSF Helen Diller Family Comprehensive Cancer Center, and senior author of the new study. “Now we’re off to the races on all these fronts.”

Set of test tubes in Alex Marson's lab.
 Credit: Noah Berger.

But just as important as the new technique’s speed and ease of use, said Marson, also scientific director of biomedicine at the Innovative Genomics Institute, is that the approach makes it possible to insert substantial stretches of DNA into T cells, which can endow the cells with powerful new properties. Members of Marson’s lab have had some success using electroporation and CRISPR to insert bits of genetic material into T cells, but until now, numerous attempts by many researchers to place long sequences of DNA into T cells had caused the cells to die, leading most to believe that large DNA sequences are excessively toxic to T cells.

To demonstrate the new method’s versatility and power, the researchers used it to repair a disease-causing genetic mutation in T cells from children with a rare genetic form of autoimmunity, and also created customized T cells to seek and kill human melanoma cells.

Viruses cause infections by injecting their own genetic material through cell membranes, and since the 1970s scientists have exploited this capability, stripping viruses of infectious features and using the resulting “viral vectors” to transport DNA into cells for research, gene therapy, and in a well-publicized recent example, to create the CAR-T cells used in cancer immunotherapy.

T cells engineered with viruses are now approved by the U.S. Food and Drug Administration to combat certain types of leukemia and lymphoma. But creating viral vectors is a painstaking, expensive process, and a shortage of clinical-grade vectors has led to a manufacturing bottleneck for both gene therapies and cell-based therapies. Even when available, viral vectors are far from ideal, because they insert genes haphazardly into cellular genomes, which can damage existing healthy genes or leave newly introduced genes ungoverned by the regulatory mechanisms that ensure that cells function normally. These limitations, which could potentially lead to serious side effects, have been cause for concern in both gene therapy and cell therapies such as CAR-T-based immunotherapy.

“There has been thirty years of work trying to get new genes into T cells,” said first author Theo Roth, a student pursuing MD and PhD degrees in UCSF’s Medical Scientist Training Program who designed and led the new study in Marson’s lab. “Now there should no longer be a need to have six or seven people in a lab working with viruses just to engineer T cells, and if we begin to see hundreds of labs engineering these cells instead of just a few, and working with increasingly more complex DNA sequences, we’ll be trying so many more possibilities that it will significantly speed up the development of future generations of cell therapy.”

After nearly a year of trial-and-error, Roth determined the ratios of T cell populations, DNA quantity, and CRISPR abundance that, combined with an electrical field delivered with the proper parameters, would result in efficient and accurate editing of the T cells’ genomes.

The research team created CRISPR guides that would cause green fluorescent protein to be expressed in only certain cellular locations and structures. 
Credit: Alex Marson's Lab. 

To validate these findings, Roth directed CRISPR to label an array of different T cell proteins with green fluorescent protein (GFP), and the outcome was highly specific, with very low levels of “off-target” effects: each subcellular structure Roth’s CRISPR-Cas9 templates had been designed to tag with GFP—and no others—glowed green under the microscope.

Then, in complementary experiments devised to serve as proof-of-principle of the new technique’s therapeutic promise, Roth, Marson, and colleagues showed how it could potentially be used to marshal T cells against either autoimmune disease or cancer.

In the first example, Roth and colleagues used T cells provided to the Marson lab by Yale School of Medicine’s Kevan Herold, MD. The cells came from three siblings with a rare, severe autoimmune disease that has so far been resistant to treatment. Genomic sequencing had shown that the T cells in these children carried mutations in a gene called IL2RA. This gene carries instructions for a cell-surface receptor essential for the development of regulatory T cells, or Tregs, which keep other immune cells in check and prevent autoimmunity.

With the non-viral CRISPR technique, the UCSF team was able to quickly repair the IL2RA defect in the children’s T cells, and to restore cellular signals that had been impaired by the mutations. In CAR-T therapy, T cells that have been removed from the body are engineered to enhance their cancer-fighting ability, and then returned to the body to target tumors. The researchers hope that a similar approach could be effective for treating autoimmune diseases in which Tregs malfunction, such as that seen in the three children with the IL2RA mutations.

In a second set of experiments conducted in collaboration with Cristina Puig-Saus, PhD, and Antoni Ribas, MD, of the Parker Institute for Cancer Immunotherapy at UCLA, the scientists completely replaced native T cell receptors in a population of normal human T cells with new receptors that had been specifically engineered to seek out a particular subtype of human melanoma cells. T cell receptors are the sensors the cells use to detect disease or infection, and in lab dishes the engineered cells efficiently homed in on the targeted melanoma cells while ignoring other cells, exhibiting the sort of specificity that is a major goal of precision cancer medicine.

Without using viruses, the researchers were able to generate large numbers of CRISPR-engineered cells reprogrammed to display the new T cell receptor. When transferred into mice implanted with human melanoma tumors, the engineered human T cells went to the tumor site and showed anti-cancer activity.

“This strategy of replacing the T cell receptor can be generalized to any T cell receptor,” said Marson, also a member of the Parker Institute for Cancer Immunotherapy at UCSF and a Chan Zuckerberg Biohub Investigator. “With this new technique we can cut and paste into a specified place, rewriting a specific page in the genome sequence.”

Roth said that because the new technique makes it possible to create viable custom T cell lines in a little over a week, it has already transformed the research environment in Marson’s lab. Ideas for experiments that were previously deemed too difficult or expensive because of the obstacles presented by viral vectors—are now ripe for investigation. “We’ll work on 20 ‘crazy’ ideas,” Roth said, “because we can create CRISPR templates very rapidly, and as soon as we have a template we can get it into T cells and grow them up quickly.”

Marson attributes the new method’s success to Roth’s “absolute perseverance” in the face of the widespread beliefs that viral vectors were necessary and that only small pieces of DNA could be tolerated by T cells. “Theo was convinced that if we could figure out the right conditions we could overcome these perceived limitations, and he put in a Herculean effort to test thousands of different conditions: the ratio of the CRISPR to the DNA; different ways of culturing the cells; different electrical currents. By optimizing each of these parameters and putting the best conditions together he was able to see this astounding result.”

See the study online for a complete list of authors, as well as funding information. Marson is a co-founder of Spotlight Therapeutics, and serves as an advisor to Juno Therapeutics and PACT Pharma. The Marson laboratory has received sponsored funding from Juno, Epinomics, and Sanofi, as well as a gift from Gilead Sciences. Roth, Puig-Saus, Eric Shifrut, PhD, Ribas, and Marson are inventors on new patent applications related to this research.





Contacts and sources:
Pete Farley
University of California - San Francisco

Citation: Reprogramming human T cell function and specificity with non-viral genome targeting.
Theodore L. Roth, Cristina Puig-Saus, Ruby Yu, Eric Shifrut, Julia Carnevale, P. Jonathan Li, Joseph Hiatt, Justin Saco, Paige Krystofinski, Han Li, Victoria Tobin, David N. Nguyen, Michael R. Lee, Amy L. Putnam, Andrea L. Ferris, Jeff W. Chen, Jean-Nicolas Schickel, Laurence Pellerin, David Carmody, Gorka Alkorta-Aranburu, Daniela del Gaudio, Hiroyuki Matsumoto, Montse Morell, Ying Mao, Min Cho, Rolen M. Quadros, Channabasavaiah B. Gurumurthy, Baz Smith, Michael Haugwitz, Stephen H. Hughes, Jonathan S. Weissman, Kathrin Schumann, Jonathan H. Esensten, Andrew P. May, Alan Ashworth, Gary M. Kupfer, Siri Atma W. Greeley, Rosa Bacchetta, Eric Meffre, Maria Grazia Roncarolo, Neil Romberg, Kevan C. Herold, Antoni Ribas, Manuel D. Leonetti, Alexander Marson. Nature, 2018; DOI: 10.1038/s41586-018-0326-5

Just Being Fat Does Not Increase the Risk of Death Says New Study



Researchers at York University's Faculty of Health have found that patients who have metabolically healthy obesity, and no other metabolic risk factors, do not have an increased rate of mortality.

The results of this study could impact how we think about obesity and health, says Jennifer Kuk, associate professor at the School of Kinesiology and Health Science, who led the research team at York University.

"This is in contrast with most of the literature and we think this is because most studies have defined metabolic healthy obesity as having up to one metabolic risk factor," says Kuk. "This is clearly problematic, as hypertension alone increases your mortality risk and past literature would have called these patients with obesity and hypertension, 'healthy'. This is likely why most studies have reported that 'healthy' obesity is still related with higher mortality risk."

Credit: Wikimedia Commons

Kuk's study showed that, unlike dyslipidemia, hypertension or diabetes alone, each of which is associated with a higher mortality risk, obesity alone is not.

The study followed 54,089 men and women from five cohort studies who were categorized as having obesity alone or clustered with a metabolic factor, or elevated glucose, blood pressure or lipids alone or clustered with obesity or another metabolic factor. Researchers looked at how many people within each group died as compared to those within the normal weight population with no metabolic risk factors.

Current weight management guidelines suggest that anyone with a BMI over 30 kg/m2 should lose weight. This implies that if you have obesity, even without any other risk factors, it makes you unhealthy. Researchers found that 1 out of 20 individuals with obesity had no other metabolic abnormalities.
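For reference, BMI is body weight in kilograms divided by the square of height in meters; the weight and height below are arbitrary illustrative numbers, not figures from the study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Illustrative example: a 95 kg person who is 1.75 m tall.
value = bmi(95, 1.75)               # about 31.0 kg/m^2
print(round(value, 1), value > 30)  # just over the BMI-30 guideline threshold
```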

"We're showing that individuals with metabolically healthy obesity are actually not at an elevated mortality rate. We found that a person of normal weight with no other metabolic risk factors is just as likely to die as the person with obesity and no other risk factors," says Kuk. "This means that hundreds of thousands of people in North America alone with metabolically healthy obesity will be told to lose weight when it's questionable how much benefit they'll actually receive."




Contacts and sources:
Anjum Nayyar
York University

Citation: "Individuals with obesity but no other metabolic risk factors are not at significantly elevated all-cause mortality risk in men and women" is published today in Clinical Obesity.

16,000 Year Old Weather Pattern May Be Returning to Southern Ocean



Stronger westerly winds in the Southern Ocean could be the cause of a sudden rise in atmospheric CO2 and temperatures in a period of less than 100 years about 16,000 years ago, according to a study published in Nature Communications.

The westerly winds during that event strengthened as they contracted closer to Antarctica, leading to a domino effect that caused an outgassing of carbon dioxide from the Southern Ocean into the atmosphere.

This contraction and strengthening of the winds is very similar to what we are already seeing today as a result of human-caused climate change.

Strengthening westerly winds close to Antarctica could lead to a significant spike in atmospheric CO2 as occurred 16,000 years ago.

Picture credit: Ameen Fahmy (Unsplash.com)

"During this earlier period, known as Heinrich stadial 1, atmospheric CO2 increased by a total of ~40ppm, Antarctic surface atmospheric temperatures increased by around 5°C and Southern Ocean temperatures increased by 3°C," said lead author Dr Laurie Menviel, a Scientia Fellow with the University of New South Wales (Sydney).

"With this in mind, the contraction and strengthening of westerly winds today could have significant implications for atmospheric CO2 concentrations and our future climate."

Scientists know changes in atmospheric carbon dioxide have profound impacts on our climate system. This is why researchers are so interested in Heinrich events, where rapid increases in atmospheric carbon dioxide occur over a very short period of time.

Heinrich event 1, which occurred about 16,000 years ago, is a favorite to study because alterations in ocean currents, temperature, ice and sea levels are clearly captured in an array of geological records. This allows theories to be tested against these changes.

Until now, many of the propositions put forward for the carbon dioxide spike struggled to explain its timing, rapidity and magnitude.

But when the researchers used climate models to replicate an increase in the strength of westerly winds as they contracted towards the Antarctic, the elements began to align. The stronger winds caused a domino effect that not only reproduced the increase in atmospheric carbon dioxide but also other changes seen during Heinrich 1.

The stronger winds had a direct impact on the ocean circulation, increasing the formation of bottom water along the Antarctic coast and enhancing the transport of carbon rich waters from the deep Pacific Ocean to the surface of the Southern Ocean. As a result, about 100Gt of carbon dioxide was emitted into the atmosphere by the Southern Ocean.

Today, observations suggest westerly winds are again contracting southwards and getting stronger in response to the warming of our planet.

"The carbon exchange between the Southern Ocean and the atmosphere in particular matters deeply for our climate. It is estimated the Southern Ocean absorbs around 25% of our atmospheric carbon emissions and that ~43% of that carbon is taken up by the ocean south of 30°S," said Dr Menviel.

"With westerly winds already contracting towards Antarctica, it's important to know if this event is an analogue for what we may see in our own future.

"For this reason, it is vital to bring more observational networks into the Southern Ocean to monitor these changes. We need a clear warning if we are approaching a point in our climate system where we may see a spike in atmospheric carbon dioxide and the rapid temperature rise that inevitably follows."
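Taking the quoted percentages at face value gives a quick sense of the Southern Ocean's role as a carbon sink. The annual emissions figure below is an assumed round number for illustration only, not a value from the study:

```python
# Back-of-the-envelope Southern Ocean carbon uptake using the
# percentages quoted by Dr Menviel. Annual emissions are an assumed
# round figure for illustration, not from the study.
emissions = 10.0                    # GtC per year (assumed)
ocean_uptake = 0.25 * emissions     # ~25% of emissions absorbed: 2.5 GtC/yr
south_of_30S = 0.43 * ocean_uptake  # ~43% of that south of 30S: ~1.1 GtC/yr

print(ocean_uptake, south_of_30S)
```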






Contacts and sources:
Alvin Stone
University of New South Wales


Citation: Menviel L., Spence P., Yu J., Chamberlain M. A., Matear R. J., Meissner K. J., England M. H. Southern Hemisphere westerlies as a driver of the early deglacial atmospheric CO2 rise. Nature Communications. DOI: 10.1038/s41467-018-04876-4