Thursday, October 31, 2019

Secrets of Skeleton Lake Revealed



Over centuries, the shores of a small Himalayan lake became the final resting place for hundreds of individuals, so much so that today the lake is locally known as Skeleton Lake. Skeletal remains of these ancient people are scattered around the lake, partly due to rockslides, as well as visitors handling and moving the bones around.

Until recently, almost everything about these ancient people was unknown — where they came from, why they were there, how old they were. The prevailing modern theory was that the remains belong to one group of people, perhaps traveling together from the same geographical area, who died about 200-300 years ago. It was proposed that all of these people were killed by a single catastrophic event, perhaps a rockslide, or maybe a deadly epidemic.

Skeletal remains are mysteriously scattered on the shores of Roopkund Lake, in the Himalayas. 
Credit: Himadri Sinha Roy  


Roopkund Lake — the lake’s officially recognized name — is located near a shrine to the mountain goddess Nanda Devi. Local folklore tells the story of a pilgrimage to the shrine that made Nanda Devi unhappy, and so the pilgrims met their demise due to her wrath.

Recently, with the help of Penn State’s Radiocarbon Laboratory, researchers discovered that the mystery of Roopkund Lake runs deeper than originally thought.

Unexpected results

About two to three times a week Brendan Culleton, a laboratory scientist for Penn State's Institutes of Energy and the Environment, runs samples through the accelerator mass spectrometer (AMS) housed at the University Park campus. The AMS is used for radiocarbon dating, which calculates the age of an object by measuring how much carbon-14 is in it. Only material that is carbon-bearing, such as bones, teeth and wood, can be dated using an AMS.
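
Radiocarbon dating rests on a simple exponential-decay relationship. As a rough illustration (using the standard Stuiver-Polach convention, not the Penn State lab's actual processing pipeline), a conventional radiocarbon age can be computed from the measured carbon-14 fraction like this:

    import math

    # Conventional radiocarbon ages use the Libby mean-life of 8,033 years
    # (the standard Stuiver-Polach convention).
    LIBBY_MEAN_LIFE = 8033  # years

    def radiocarbon_age(fraction_modern):
        """Age in years BP from the measured carbon-14 fraction (F14C)."""
        return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

    # A bone retaining about 88% of modern carbon-14 dates to roughly
    # 1,000 years BP, the size of the gap between the Roopkund groups.
    print(round(radiocarbon_age(0.88)))  # -> 1027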

As he does most days, Culleton ran several samples without knowing where they were geographically from or what was expected to be found.

“On this particular run there were skeletal samples from all over the world — Mongolia, Sudan, Spain, Turkmenistan and elsewhere,” Culleton said. “Most of them come with the site name. The descriptions I had for the Roopkund samples were simply codes like ‘R66. Bone powder. India.’ I really had no sense at all of the locale or the archaeological context.”

The AMS calculated that the seven or so samples tested from Roopkund didn’t all date to the same time period. In fact, the samples were about 1,000 years apart.

Without any context, the results didn’t mean anything significant to Culleton. He sent the results back to David Reich, a geneticist at Harvard University who had ordered the radiocarbon testing. Culleton assumed his role in this research was finished.

It turns out that Reich was expecting these samples to all date back to the same time period. For decades, the running theory was that the remains were the result of a single catastrophic event that had occurred in just the last 200 to 300 years.

Curiouser and curiouser

Meanwhile, the DNA of the same skeletons was being tested through collaborative work at the Centre for Cellular and Molecular Biology in India and at Harvard Medical School. These labs were learning that there were three distinct genetic groups among the 38 samples being tested — 23 from present-day India, 14 from the eastern Mediterranean area, and one from southeastern Asia.


Analysis of the femur bones of 38 individuals found at Roopkund Lake (aka Skeleton Lake), in India, revealed that people from different areas of the world and different time periods either died or were laid to rest at the lake.

Credit: Brenna Buck


When Reich received the radiocarbon results, he asked Culleton to double check the AMS results.

“David told me that the individuals were all supposed to have died in a single event, and so the results really pointed to some kind of mix-up with the samples or maybe contamination at some point,” Culleton said. “We looked through all the lab notes and the AMS run and saw nothing unusual. After that, David was very excited because the genomic data correlated these two time periods. He sent me a table with the carbon-14 dates and population labels ‘Roopkund_A’ and ‘Roopkund_B.’ Still none the wiser, I looked up Roopkund.”

Culleton was astounded when he understood the context and why researchers thought this might be a single event.

So Reich sent more samples to Penn State for radiocarbon dating to complement the genomic data.

By combining the radiocarbon data from Penn State with the ancient DNA results, as well as with stable isotope measures — which reveal dietary profiles — the researchers found that there were in fact three distinct populations at Roopkund Lake who died at two very different times.

The results suggest a group with Indian-related ancestry died at the site between 1,100 and 1,400 years ago. Later, people from the eastern Mediterranean and southeastern Asia perished there between 100 and 400 years ago.

Collaborative discovery

“Projects like this, especially with ancient DNA, are inherently multidisciplinary,” Culleton said. “The challenge is always for everyone to be able to communicate openly and effectively within and between our areas of expertise. We have to be able to educate each other across domains and be willing to be educated as well. It can be threatening for a scientist to have their hypotheses and theories challenged by an ‘outsider.’ But when a team brings together multiple lines of evidence to bear on a problem, everyone reaps the benefits. In the case of Roopkund Lake, the whole is far greater than the sum of the parts, and the great collaboration is very gratifying.”

Due to modern technology and collaborative work across several disciplines, researchers now know more than they expected to about the individuals who perished at Roopkund Lake.

This research, titled "Ancient DNA from the skeletons of Roopkund Lake reveals Mediterranean migrants in India," recently appeared in Nature Communications.

The Radiocarbon Lab is one of several shared multi-user instrumentation facilities that make up the Energy and Environmental Sustainability Laboratories (EESL), which provide cutting-edge research equipment in the areas of energy and the environment.

EESL is a part of the Institutes of Energy and the Environment (IEE), which works to build teams of researchers from different disciplines to see how new partnerships and new ways of thinking can solve some of the world’s most difficult energy and environmental challenges.

Contacts and sources:
Victoria M. Indivero
Penn State





Sieve Captures and Converts Carbon Dioxide into High Value Chemicals


A new material captures and allows CO2 to be converted into useful organic materials

A new material that can selectively capture carbon dioxide (CO2) molecules and efficiently convert them into useful organic materials has been developed by researchers at Kyoto University, along with colleagues at the University of Tokyo and Jiangsu Normal University in China. They describe the material in the journal Nature Communications.

Human consumption of fossil fuels has resulted in rising global CO2 emissions, leading to serious problems associated with global warming and climate change. One possible way to counteract this is to capture and sequester carbon from the atmosphere, but current methods are highly energy intensive. The low reactivity of CO2 makes it difficult to capture and convert it efficiently.

This new porous coordination polymer has propeller-shaped molecular structures that enable it to selectively capture CO2 and efficiently convert it into useful organic materials.
Illustration by Izumi Mindy Takamiya


“We have successfully designed a porous material which has a high affinity towards CO2 molecules and can quickly and effectively convert it into useful organic materials,” says Ken-ichi Otake, Kyoto University materials chemist from the Institute for Integrated Cell-Material Sciences (iCeMS).

The material is a porous coordination polymer (PCP, also known as a metal-organic framework, or MOF) built around zinc metal ions. The researchers tested their material using X-ray structural analysis and found that it can selectively capture CO2 molecules with ten times the efficiency of other PCPs.

The material has an organic component with a propeller-like molecular structure, and as CO2 molecules approach the structure, they rotate and rearrange to permit CO2 trapping, resulting in slight changes to the molecular channels within the PCP – this allows it to act as a molecular sieve that can recognize molecules by size and shape. The PCP is also recyclable; the efficiency of the catalyst did not decrease even after 10 reaction cycles.

“One of the greenest approaches to carbon capture is to recycle the carbon dioxide into high-value chemicals, such as cyclic carbonates which can be used in petrochemicals and pharmaceuticals,” says Susumu Kitagawa, materials chemist at Kyoto University.

After capturing the carbon, the converted material can be used to make polyurethane, a material with a wide variety of applications including clothing, domestic appliances and packaging.

This work highlights the potential of porous coordination polymers for trapping carbon dioxide and converting it into useful materials, opening up an avenue for future research into carbon capture materials.




Contacts and sources:
Kyoto University
Citation: Carbon dioxide capture and efficient fixation in a dynamic porous coordination polymer.
Pengyan Wu, Yang Li, Jia-Jia Zheng, Nobuhiko Hosono, Ken-ichi Otake, Jian Wang, Yanhong Liu, Lingling Xia, Min Jiang, Shigeyoshi Sakaki & Susumu Kitagawa (2019). Nature Communications, 10:4362.  https://doi.org/10.1038/s41467-019-12414-z
 http://hdl.handle.net/2433/244247



In and Out with 10-Minute Electric Vehicle Recharge, Battery Good for 1/2 Million Miles



Electric vehicle owners may soon be able to pull into a fueling station, plug their car in, go to the restroom, get a cup of coffee and in 10 minutes, drive out with a fully charged battery, according to a team of engineers.

"We demonstrated that we can charge an electrical vehicle in ten minutes for a 200 to 300 mile range," said Chao-Yang Wang, William E. Diefenderfer Chair of mechanical engineering, professor of chemical engineering and professor of materials science and engineering, and director of the Electrochemical Engine Center at Penn State. "And we can do this maintaining 2,500 charging cycles, or the equivalent of half a million miles of travel."

In a battery, ions flow from the cathode to the anode, resulting in a positive energy charge for the unit.
drawing of a battery with nickel added for heating
Credit: Chao-Yang Wang Lab, Penn State

Lithium-ion batteries degrade when rapidly charged at ambient temperatures under 50 degrees Fahrenheit because, rather than being smoothly inserted into the carbon anodes, the lithium deposits in spikes on the anode surface. This lithium plating not only reduces cell capacity but can also cause electrical spikes and unsafe battery conditions.

Batteries heated above the lithium plating threshold, whether by external or internal heating, will not exhibit lithium plating.

The researchers had previously developed their battery to charge at 50 degrees F in 15 minutes. Charging at higher temperatures would be more efficient, but long periods of high heat also degrade the batteries.

"Fast charging is the key to enabling wide spread introduction of electric vehicles," said Wang.

Wang and his team realized that if the batteries could heat up to 140 degrees F for only 10 minutes and then rapidly cool to ambient temperatures, lithium spikes would not form and heat degradation of the battery would also not occur. They report their results in today's (Oct 30) issue of Joule.

"Taking this battery to the extreme of 60 degrees Celsius (140 degrees F) is forbidden in the battery arena," said Wang. "It is too high and considered a danger to the materials and would shorten battery life drastically."

The rapid cooling of the battery would be accomplished using the cooling system designed into the car, explained Wang. The large difference from 140 degrees to about 75 degrees F will also help increase the speed of cooling.

"The 10-minute trend is for the future and is essential for adoption of electric vehicles because it solves the range anxiety problem," said Wang.

Adding to the reduction of range anxiety -- fear of running out of power with no way or time to recharge -- will be, according to Reuters, the establishment of 2,800 charging stations across the U.S., funded by the more than $2 billion penalty paid by Volkswagen after admitting to diesel emissions cheating. These charging stations will be in 500 locations.

The self-heating battery uses a thin nickel foil with one end attached to the negative terminal and the other extending outside the cell to create a third terminal. A temperature sensor attached to a switch causes electrons to flow through the nickel foil to complete the circuit. This rapidly heats up the nickel foil through resistance heating and warms the inside of the battery.
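
The release describes the heating hardware but not the control logic. The sketch below is a minimal simulation of how such an asymmetric-temperature charge could be sequenced: heat the cell with the foil, hold it hot while fast-charging, then hand off to the vehicle's cooling system. The cell model and all numbers except the 60 degrees Celsius (140 F) charge temperature are invented for illustration.

    # Hypothetical cell model and control thresholds; only the 60 C (140 F)
    # charge temperature comes from the work described above.
    class SimulatedCell:
        def __init__(self):
            self.temp_c = 24.0    # start near 75 F ambient
            self.soc = 0.1        # state of charge, 0..1
            self.heating = False  # nickel-foil third-terminal circuit closed?
            self.cooling = False  # vehicle cooling system engaged?

        def step(self, seconds=1.0):
            if self.heating:
                self.temp_c += 2.0 * seconds   # resistance heating via the foil
            elif self.cooling and self.temp_c > 24.0:
                self.temp_c -= 0.5 * seconds   # vehicle cooling system

    def fast_charge(cell, target_soc=0.8):
        while cell.soc < target_soc:
            cell.heating = cell.temp_c < 60.0  # thermostat: hold the cell near 60 C
            if cell.temp_c >= 55.0:            # charge only while hot, so lithium
                cell.soc += 0.0015             # plating cannot form
            cell.step()
        cell.heating = False
        cell.cooling = True                    # then rapidly return to ambient,
        while cell.temp_c > 24.0:              # limiting time spent at high heat
            cell.step()

    cell = SimulatedCell()
    fast_charge(cell)
    print(f"charged to {cell.soc:.0%}, back at {cell.temp_c:.0f} C")

At the illustrative charge rate above, the charging phase takes roughly 470 simulated seconds, in line with the 10-minute target described in the text.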

Also working on this project from Penn State are Xiao-Guang Yang, assistant research professor; Teng Liu, graduate student; Yue Gao, post-doctoral scholar; Shanhai Ge, assistant research professor; Yongjun Leng, assistant research professor; and Donghai Wang, professor, all in the Department of Mechanical Engineering.

The U.S. Department of Energy supported this work.

Contacts and sources:
A'ndrea Elyse Messer
Penn State





Wednesday, October 30, 2019

First Time Water Detected on Potentially ‘Habitable’ Super-Earth



Water vapor has been detected in the atmosphere of a super-Earth with habitable temperatures by UCL researchers in a world first.

K2-18b, which is eight times the mass of Earth, is now the only planet orbiting a star outside the Solar System, or ‘exoplanet’, known to have both water and temperatures that could support life.

Exoplanet K2-18b (artist’s impression) showing the planet, its host star and an accompanying planet in this system.
Credit: ESA/Hubble, M. Kornmesser

The discovery, published today in Nature Astronomy, is the first successful atmospheric detection for an exoplanet orbiting in its star’s ‘habitable zone’, at a distance where water can exist in liquid form.

First author, Dr Angelos Tsiaras (UCL Centre for Space Exochemistry Data (CSED)), said: “Finding water in a potentially habitable world other than Earth is incredibly exciting. K2-18b is not ‘Earth 2.0’ as it is significantly heavier and has a different atmospheric composition. However, it brings us closer to answering the fundamental question: Is the Earth unique?”


Video credit: ESA/Hubble, M. Kornmesser

The team used archive data from 2016 and 2017 captured by the ESA/NASA Hubble Space Telescope and developed open-source algorithms to analyse the starlight filtered through K2-18b’s atmosphere. The results revealed the molecular signature of water vapour, also indicating the presence of hydrogen and helium in the planet’s atmosphere.
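
The release does not detail the analysis, but the basic observable in transmission spectroscopy is straightforward: the planet blocks a fraction (Rp/Rs)^2 of the starlight, and an extended atmosphere adds a small, wavelength-dependent extra depth of order 2*Rp*H/Rs^2 per atmospheric scale height H. Below is a rough sketch with approximate literature values for K2-18b; these are assumptions for illustration, not the paper's fitted parameters.

    # Physical constants (SI units)
    G, K_B, M_H = 6.674e-11, 1.381e-23, 1.673e-27
    R_EARTH, M_EARTH, R_SUN = 6.371e6, 5.972e24, 6.957e8

    # Approximate literature values for K2-18b and its star (assumptions)
    m_p = 8 * M_EARTH        # ~8 Earth masses, as stated above
    r_p = 2.7 * R_EARTH      # ~2.7 Earth radii
    r_star = 0.41 * R_SUN    # the red dwarf K2-18
    temp = 270.0             # kelvin, near the habitable-zone temperature
    mu = 2.3                 # mean molecular weight of a hydrogen/helium envelope

    g = G * m_p / r_p**2                        # surface gravity
    scale_height = K_B * temp / (mu * M_H * g)  # H = kT / (mu m_H g)

    transit_depth = (r_p / r_star)**2             # fraction of starlight blocked
    feature = 2 * r_p * scale_height / r_star**2  # extra depth per scale height

    print(f"transit depth ~{transit_depth * 1e6:.0f} ppm; "
          f"one scale height adds ~{feature * 1e6:.0f} ppm")

With these inputs the transit depth is a few thousand parts per million, and each scale height of hydrogen-rich atmosphere modulates it by only tens of parts per million, which is why detecting water vapour requires careful analysis of Hubble-quality data.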

The authors believe that other molecules including nitrogen and methane may be present but, with current observations, they remain undetectable. Further studies are required to estimate cloud coverage and the percentage of atmospheric water present.

The planet orbits the cool dwarf star K2-18, which is about 110 light years from Earth in the Leo constellation. Given the high level of activity of its red dwarf star, K2-18b may be more hostile than Earth and is likely to be exposed to more radiation.

K2-18b was discovered in 2015 and is one of hundreds of super-Earths – planets with a mass between Earth and Neptune – found by NASA’s Kepler spacecraft. NASA’s TESS mission is expected to detect hundreds more super-Earths in the coming years.

Co-author Dr Ingo Waldmann (UCL CSED), said: “With so many new super-Earths expected to be found over the next couple of decades, it is likely that this is the first discovery of many potentially habitable planets. This is not only because super-Earths like K2-18b are the most common planets in our Galaxy, but also because red dwarfs - stars smaller than our Sun - are the most common stars.”

The next generation of space telescopes, including the NASA/ESA/CSA James Webb Space Telescope and ESA’s ARIEL mission, will be able to characterise atmospheres in more detail as they will carry more advanced instruments. ARIEL is expected to launch in 2028, and will observe 1,000 planets in detail to get a truly representative picture of what they are like.

Professor Giovanna Tinetti (UCL CSED), co-author and Principal Investigator for ARIEL, said: “Our discovery makes K2-18 b one of the most interesting targets for future study. Over 4,000 exoplanets have been detected but we don’t know much about their composition and nature. By observing a large sample of planets, we hope to reveal secrets about their chemistry, formation and evolution.”

“This study contributes to our understanding of habitable worlds beyond our Solar System and marks a new era in exoplanet research, crucial to ultimately place the Earth, our only home, into the greater picture of the Cosmos,” said Dr Tsiaras.

Business Secretary Andrea Leadsom said: “Space exploration is one of the greatest adventures of our time, and for decades, scientists and astronomers have scoured the skies for planets capable of supporting life. This discovery by UK researchers is a giant leap forward in this endeavour, opening a new world of possibilities. The secrets of our universe are out there, and I am enormously proud that our Government-backed researchers and councils are at the forefront of efforts to unlock answers to mysteries that have endured for centuries.”

Chris Lee, the UK Space Agency’s Chief Scientist, said: “This exciting discovery demonstrates the UK’s leading strengths in the science of exoplanets.”

“We continue to build on this expertise, with UCL at the heart of a new mission - ARIEL - to study the atmosphere of worlds orbiting other stars in our Galaxy. This is one of a number of international space science missions involving leading roles for UK science and industry and forms part of our ongoing commitment to the European Space Agency,” he said.

Dr Colin Vincent, Head of STFC’s Astronomy Division, said: “Finding other planets that might have the capability to support life is one of the holy grails of astronomy research. This result based on data from the Hubble Space Telescope gives an exciting taste of what may be possible in the next few years as a number of new telescopes and space missions supported by STFC and the UK Space Agency come online.”

The research was funded by the European Research Council and the UK Science and Technology Facilities Council, which is part of UKRI.

Contacts and sources:
Bex Caygill 
University College London




Macaques’ Stone Tool Use Varies Despite Same Environment



Stone tool use develops differently within species of Old World monkeys in spite of shared environmental and ecological settings, according to a new study involving  University College London (UCL).

Macaque at Wat Khao Takiab.

Credit: Heiko S via Flickr

Macaques are the only Old World monkeys that have been observed using percussive stone tools and scientists do not know for certain how or why certain groups have developed this behaviour.

In the study, published in eLife journal, researchers analysed data on wild long-tailed macaques living on two islands (Boi Yai Island and Lobi Bay) about 15 kilometers apart in southern Thailand, within the Ao Phang Nga National Park.

The researchers assessed 115 stone tools combined from both locations. They found that despite living in the same environment, with access to the same stone tool making resources (with limestone constituting the most available raw material) and the same prey species, wild long-tailed macaques on Boi Yai Island selected heavier tools than those on Lobi Bay.

In addition, the stone tools on Boi Yai Island showed a greater number of wear patterns, indicating that the tools are being used multiple times and on more than one prey species.

In particular, they found that stone tools used to crack open oysters are larger and more intensively used on Boi Yai Island compared to Lobi Bay. While oysters on Boi Yai Island are, in general, larger than those at Lobi Bay, researchers believe this is learned rather than environmentally driven behavior. This also supports previous findings that macaques adjust tool size to prey size.

Commenting on the findings, co-author Dr Tomos Proffitt (UCL Institute of Archaeology), said: “We observed differences among macaques on two different islands, in relation to tool selection and the degree of tool re-use when foraging for marine prey.

“The theory is that if the environmental factors are the same – the only reasonable conclusion is that one island has developed its own tool using culture either through genetics or through passing down through a learning mechanism. While the other group exhibits a tool use culture which is more ephemeral and ad hoc.”

Long-tailed macaques are the only Old World monkeys who use stone tools in their daily foraging. This behaviour is mainly observed in populations that live along the ocean shores of southern Thailand and Myanmar where long-tailed macaques use tools primarily to prey on shellfish, including oysters, crabs and mussels.

Stone tools in the prehistoric records are a key source of evidence for understanding early hominin technological and cultural variation. Primate archaeology is well placed to improve our scientific knowledge by using the tool behaviors of living primates as models to test hypotheses related to the adoption of tools by early stone-age hominins.

Lead author, Dr Lydia Luncz (Institute of Cognitive and Evolutionary Anthropology, University of Oxford), said: "That we find a potential cultural behavior in macaques is not surprising to us. The interesting part is that the same foraging behavior creates distinct tool evidence in the environment. This might be useful to keep in mind when we look at the archaeological record of human ancestors as well".

However, scientists are concerned about the impact that the tourism boom in Thailand could have on the habits of wild macaques.

The study was carried out with researchers at University of Oxford, Oxford Brookes University, Max Planck Institute for Evolutionary Anthropology, Chulalongkorn University, Bangkok and the National Primate Research Centre of Thailand.

It was funded by the Leverhulme Trust, British Academy and by the German Primate Centre in Goettingen, Germany.



Contacts and sources:
Natasha Downes
University College London



Citation: Group-specific archaeological signatures of stone tool use in wild macaques. Lydia V. Luncz, Mike Gill, Tomos Proffitt, Magdalena S. Svensson, Lars Kulik, Suchinda Malaivijitnond. eLife, 2019. https://elifesciences.org/articles/46961





Favorite Tunes Recognized in Milliseconds



The human brain can recognize a familiar song within 100 to 300 milliseconds, highlighting the deep hold favorite tunes have on our memory, a UCL study finds.

Anecdotally the ability to recall popular songs is exemplified in game shows such as ‘Name That Tune’, where contestants can often identify a piece of music in just a few seconds.

For this study, published in Scientific Reports, researchers at the UCL Ear Institute wanted to find out exactly how fast the brain responded to familiar music, as well as the temporal profile of processes in the brain which allow for this.



The main participant group consisted of five men and five women who had each provided five songs that were very familiar to them. For each participant, researchers then chose one of the familiar songs and matched it to a tune that was similar (in tempo, melody, harmony, vocals and instrumentation) but known to be unfamiliar to the participant.

Participants then passively listened to 100 snippets (each less than a second) of both the familiar and unfamiliar song, presented in random order; around 400 seconds of audio was presented in total. Researchers used electro-encephalography (EEG) imaging, which records electrical activity in the brain, and pupillometry (a technique that measures pupil diameter – considered a measure of arousal).

The study found the human brain recognised ‘familiar’ tunes from 100 milliseconds (0.1 of a second) of sound onset, with the average recognition time between 100ms and 300ms. This was first revealed by rapid pupil dilation, likely linked to increased arousal associated with the familiar sound, followed by cortical activation related to memory retrieval.
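
The paper's actual pipeline is more sophisticated, but the underlying logic of such latency estimates can be illustrated with a toy computation: average the EEG epochs for each condition, then find the first time point where the familiar and unfamiliar responses diverge. Everything below (the synthetic data, the response window, the threshold) is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(600)  # milliseconds after snippet onset (1 kHz sampling)

    def average_epochs(has_response, n_trials=100):
        # Synthetic evoked response between 150 and 350 ms for the
        # "familiar" condition; pure noise otherwise.
        signal = np.where((t > 150) & (t < 350), 2.0, 0.0) if has_response else 0.0
        return (signal + rng.normal(0, 1.0, size=(n_trials, t.size))).mean(axis=0)

    familiar = average_epochs(True)
    unfamiliar = average_epochs(False)

    diverged = np.abs(familiar - unfamiliar) > 1.0  # crude amplitude threshold
    print(f"responses diverge from ~{t[diverged][0]} ms after onset")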

No such differences were found in a control group, comprising international students who were unfamiliar with all the songs, both ‘familiar’ and ‘unfamiliar’.

Senior author, Professor Maria Chait, (UCL Ear Institute) said: “Our results demonstrate that recognition of familiar music happens remarkably quickly.

“These findings point to very fast temporal circuitry and are consistent with the deep hold that highly familiar pieces of music have on our memory.”

Professor Chait added: “Beyond basic science, understanding how the brain recognises familiar tunes is useful for various music-based therapeutic interventions.

“For instance, there is a growing interest in exploiting music to break through to dementia patients for whom memory of music appears well preserved despite an otherwise systemic failure of memory systems.

“Pinpointing the neural pathway and processes which support music identification may provide a clue to understanding the basis of this phenomena.”

Study limitations

‘Familiarity’ is a multifaceted concept. In this study, songs were explicitly selected to evoke positive feelings and memories. Therefore, for the ‘main’ group the ‘familiar’ and ‘unfamiliar’ songs did not just differ in terms of recognisability but also in terms of emotional engagement and affect.

While the songs are referred to as ‘familiar’ and ‘unfamiliar’, the effects observed may also be linked with these other factors.

While care was taken in the song matching process, this was ultimately done by hand due to lack of availability of appropriate technology. Advancements in automatic processing of music may improve matching in the future.

Another limitation is the fact that only one ‘familiar’ song was used per subject. This likely limited the demands on the memory processes studied.



Contacts and sources:
Henry Killworth
University College London

Citation: Rapid Brain Responses to Familiar vs. Unfamiliar Music – an EEG and Pupillometry study. Robert Jagiello, Ulrich Pomper, Makoto Yoneya, Sijia Zhao & Maria Chait. Scientific Reports, volume 9, Article number: 15570 (2019). https://www.nature.com/articles/s41598-019-51759-9




Tuesday, October 29, 2019

Birthplace and First Homeland for Modern Humans Called "Perfect for Life"

A landmark study pinpoints the birthplace of modern humans in southern Africa and suggests how past climate shifts drove their first migration.

The study has concluded that the earliest ancestors of anatomically modern humans (Homo sapiens sapiens) emerged in a southern African ‘homeland’ and thrived there for 70 thousand years.

Lake Makgadikgadi by SPOT Satellite
Credit:  Cnes - Spot Image / Wikimedia Commons

The breakthrough findings are published in the prestigious journal Nature today.

The authors propose that changes in Africa’s climate triggered the first human explorations, which initiated the development of humans’ genetic, ethnic and cultural diversity.

This study provides a window into the first 100 thousand years of modern humans’ history.

DNA as a time capsule

“It has been clear for some time that anatomically modern humans appeared in Africa roughly 200 thousand years ago. What has been long debated is the exact location of this emergence and subsequent dispersal of our earliest ancestors,” says study lead Professor Vanessa Hayes from the Garvan Institute of Medical Research and University of Sydney, and Extraordinary Professor at the University of Pretoria.

“Mitochondrial DNA acts like a time capsule of our ancestral mothers, accumulating changes slowly over generations. Comparing the complete DNA code, or mitogenome, from different individuals provides information on how closely they are related.”
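
The study's dating relies on full phylogenetic models, but the intuition behind a mitochondrial clock can be shown with a back-of-the-envelope calculation. The sketch below assumes a published whole-mitogenome substitution rate (Soares et al. 2009); it is an illustration, not the calibration used in this paper.

    # Whole-mitogenome rate from Soares et al. (2009); an assumption here,
    # not this study's calibrated clock.
    MITOGENOME_SITES = 16_569   # length of the human mitochondrial genome
    RATE = 1.665e-8             # substitutions per site per year

    def divergence_time(pairwise_differences):
        """Years since two maternal lineages shared a common ancestor.
        Mutations accumulate on both branches, hence the factor of 2."""
        return pairwise_differences / (2 * RATE * MITOGENOME_SITES)

    # Two mitogenomes differing at 110 positions split roughly 200 thousand
    # years ago, the timescale reported for the deepest L0 branches.
    print(f"{divergence_time(110):,.0f} years")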

In their study, Professor Hayes and her colleagues collected blood samples to establish a comprehensive catalogue of modern humans’ earliest mitogenomes from the so-called ‘L0’ lineage.

“Our work would not have been possible without the generous contributions of local communities and study participants in Namibia and South Africa, which allowed us to uncover rare and new L0 sub-branches,” says study author and public health Professor Riana Bornman from the University of Pretoria.

“We merged 198 new, rare mitogenomes to the current database of modern human’s earliest known population, the L0 lineage. This allowed us to refine the evolutionary tree of our earliest ancestral branches better than ever before,” says first author Dr Eva Chan from the Garvan Institute of Medical Research, who led the phylogenetic analyses.

By combining the L0 lineage timeline with the linguistic, cultural and geographic distributions of different sublineages, the study authors revealed that 200 thousand years ago, the first Homo sapiens sapiens maternal lineage emerged in a ‘homeland’ south of the Greater Zambezi River Basin region, which includes the entire expanse of northern Botswana into Namibia to the west and Zimbabwe to the east.

Zambezi river basin

Credit: User:Worldtraveller, User:Aymatth2 / Wikimedia Commons


A homeland to thrive in

Investigating existing geological, archeological and fossil evidence, geologist Dr Andy Moore, from Rhodes University, revealed that the homeland region once held Africa’s largest ever lake system, Lake Makgadikgadi.

“Prior to modern human emergence, the lake had begun to drain due to shifts in underlying tectonic plates. This would have created a vast wetland, which is known to be one of the most productive ecosystems for sustaining life,” says Dr Moore.

Modern humans’ first migrations

The authors’ new evolutionary timelines suggest that the ancient wetland ecosystem provided a stable ecological environment for modern humans’ first ancestors to thrive for 70 thousand years.

“We observed significant genetic divergence in the modern humans’ earliest maternal sub-lineages, that indicates our ancestors migrated out of the homeland between 130 and 110 thousand years ago,” explains Professor Hayes. “The first migrants ventured northeast, followed by a second wave of migrants who travelled southwest. A third population remained in the homeland until today.”

“In contrast to the northeasterly migrants, the southwesterly explorers appear to flourish, experiencing steady population growth,” says Professor Hayes. The authors speculate that the success of this migration was most likely a result of adaptation to marine foraging, which is further supported by extensive archaeological evidence along the southern tip of Africa.

Climate effects

To investigate what may have driven these early human migrations, co-corresponding author Professor Axel Timmermann, Director of the IBS Center for Climate Physics at Pusan National University, analysed climate computer model simulations and geological data, which capture Southern Africa’s climate history of the past 250 thousand years.

“Our simulations suggest that the slow wobble of Earth’s axis changes summer solar radiation in the Southern Hemisphere, leading to periodic shifts in rainfall across southern Africa,” says Professor Timmermann. “These shifts in climate would have opened green, vegetated corridors, first 130 thousand years ago to the northeast, and then around 110 thousand years ago to the southwest, allowing our earliest ancestors to migrate away from the homeland for the first time.”

“These first migrants left behind a homeland population,” remarks Professor Hayes. “Eventually adapting to the drying lands, maternal descendants of the homeland population can be found in the greater Kalahari region today.”

This study uniquely combined the disciplines of genetics, geology and climatic physics to rewrite our earliest human history.

The research was supported by an Australian Research Council Discovery Project grant (DP170103071) and the Institute for Basic Science (IBS-R028-D1). Professor Vanessa Hayes holds the Sydney University Petre Chair of Prostate Cancer Research.

This study was conducted in consultation with the local African communities, approval from community leaders and ethics approval from the Ministry of Health and Social Services in Namibia, the University of Pretoria Human Research Ethics Committee and St Vincent’s Hospital, Sydney.

Participants for this study were recruited within the borders of South Africa and Namibia. The study was reviewed and approved by the Ministry of Health and Social Services (MoHSS) in Namibia (#17-3-3), with additional local approvals from community leaders, and the University of Pretoria Human Research Ethics Committee (HREC #43/2010 and HREC #280/2017), including US Federal-wide assurance (FWA00002567 and IRB00002235 IORG0001762).




Contacts and sources:
Dr Viviane Richter
Garvan Institute of Medical Research
Vivienne Reiner
University of Sydney

Citation: Human origins in a southern African palaeo-wetland and first migrations.
Eva K. F. Chan, Axel Timmermann, Benedetta F. Baldi, Andy E. Moore, Ruth J. Lyons, Sun-Seon Lee, Anton M. F. Kalsbeek, Desiree C. Petersen, Hannes Rautenbach, Hagen E. A. Förtsch, M. S. Riana Bornman, Vanessa M. Hayes. Nature, 2019; DOI: 10.1038/s41586-019-1714-1




3-D Models of Cascadia Megathrust Events Match Coastal Changes from 1700 Quake; 10%-14% Chance of Another Predicted within 50 Years



By combining models of magnitude 9 to 9.2 earthquakes on the Cascadia Subduction Zone with geological evidence of past coastal changes, researchers have a better idea of what kind of megathrust seismic activity was behind the 1700 Cascadia earthquake.

The analysis by Erin Wirth and Arthur Frankel of the U.S. Geological Survey indicates that a rupture extending to just offshore along most of the Pacific Northwest could cause the pattern of coastal subsidence seen in geologic evidence from the 1700 earthquake, which had an estimated magnitude between 8.7 and 9.2.

An earthquake rupture that also contains smaller patches of high stress drop, strong motion-generating “subevents” matches the along-fault variations in coastal subsidence seen from southern Oregon to British Columbia from the 1700 earthquake, the researchers conclude in their study published in the Bulletin of the Seismological Society of America.

The Neskowin “Ghost Forest” is the result of coastal subsidence during the 1700 Cascadia earthquake.
Credit: Rob DeGraff

The seismic hazard associated with Cascadia megathrust earthquakes depends on how far landward the rupture extends, along with differences in slip along the fault. The new study could help improve seismic hazard assessments for the region, including estimates of ground shaking intensity in Portland, Oregon, Seattle, Washington and Vancouver, British Columbia.

For instance, the 2014 National Seismic Hazard Maps assigned different “weights” to earthquake scenarios that rupture to different extents of the down-dipping plate in the region’s subduction zone, as a way to express their potential contribution to overall megathrust earthquake hazard. An earthquake where the rupture extends deep and partially inland is weighted at 30%, a shallow rupture that is entirely offshore is weighted at 20%, and a mid-depth rupture that extends approximately to the coastline is weighted at 50%.
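
Those weights act as a simple logic tree: each rupture scenario's hazard contribution is multiplied by its weight and summed. A minimal sketch follows; the ground-motion numbers are invented placeholders, and only the 30/50/20 weighting comes from the text.

    # Hypothetical peak-ground-acceleration values (in g) for one site;
    # only the 30/50/20 weights come from the 2014 hazard maps cited above.
    scenarios = {
        "deep, partly inland rupture": (0.30, 0.45),
        "mid-depth, to the coastline": (0.50, 0.30),
        "shallow, entirely offshore":  (0.20, 0.20),
    }

    expected_pga = sum(weight * pga for weight, pga in scenarios.values())
    print(f"weighted ground-motion estimate: {expected_pga:.3f} g")
    # Down-weighting the end-member scenarios, as this study suggests,
    # shifts the combined estimate directly.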

“We looked at various magnitude 9 rupture scenarios for Cascadia, to see how the coastal land level changes under those scenarios,” said Wirth, “and you can’t match the paleoseismic estimates for how the land level changed along the Pacific Northwest coast during the 1700 Cascadia earthquake” with rupture scenarios at the shallowest and deepest points.

“This may mean that these scenarios deserve less weight in assessing the overall seismic hazard for Cascadia,” Wirth noted.

The researchers used data from other megathrust earthquakes around the world, such as the 2010 magnitude 8.8 Maule, Chile earthquake and the 2011 magnitude 9.0 Tohoku, Japan earthquake, to inform their models. One of the features found in these and other megathrust events around the world is distinct patches of strong motion-generating “subevents” that take place in the deeper portions of the megathrust fault.

Wirth and Frankel show that variations in coastal subsidence caused by the 1700 earthquake may be due to the locations of these subevents. But improving the accuracy of paleoseismic estimates for how the land level changed during previous Cascadia earthquakes is critical to ascertain this, said Wirth.

Cascadia earthquake sources
Credit: USGS / Wikimedia Commons

It’s unclear what causes these subevents, other than that these areas of the fault generate high stress that is released in the form of strong ground shaking. This might indicate that the subevents have a physical cause, like the structure or composition of the rocks along the fault that makes them mechanically strong, or changes in friction or fluid pore pressure related to their depth.

In the Tohoku and Maule earthquakes, Wirth noted, “the frequency of ground shaking that is most damaging to buildings and infrastructure seemed to be radiated from these discrete patches on the fault.”

To improve seismic hazard assessment in Cascadia, she said, the next steps would be to understand what and where these subevents are, and whether they change over time. “If we could constrain the location of these subevents ahead of time, then you could anticipate where your strongest ground shaking might be.”

In 2002, the USGS estimated that there was a 10% to 14% chance of another magnitude 9.0 Cascadia earthquake occurring in the next 50 years.
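
A probability of that size is what a simple time-independent (Poisson) recurrence model gives for events with a mean repeat time of a few hundred years. The sketch below is illustrative arithmetic, not the USGS's actual 2002 calculation; the recurrence intervals are assumptions consistent with published Cascadia estimates.

    import math

    def chance_within(window_years, mean_recurrence_years):
        # Poisson (time-independent) model: P = 1 - exp(-t / tau)
        return 1 - math.exp(-window_years / mean_recurrence_years)

    # Assumed mean recurrence intervals; not the USGS's exact 2002 inputs.
    for tau in (500, 350):
        print(f"mean recurrence {tau} yr -> "
              f"{chance_within(50, tau):.0%} chance in 50 years")
    # mean recurrence 500 yr -> 10% chance in 50 years
    # mean recurrence 350 yr -> 13% chance in 50 years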


Contacts and sources:
Becky Ham
Seismological Society of America

Citation: Impact of Down‐Dip Rupture Limit and High‐Stress Drop Subevents on Coseismic Land‐Level Change during Cascadia Megathrust Earthquakes. Erin A. Wirth, Arthur D. Frankel. Bulletin of the Seismological Society of America, 2019; DOI: 10.1785/0120190043



Blame the Buzz, Players Play Worse Against Opponents Hyped in Rising Rankings

Chess and tennis players perform worse against opponents who are rising in the rankings, says a new study.

Buzz about tennis’s newest rising stars – like 15-year-old prodigy Coco Gauff, who beat Venus Williams at Wimbledon – can be so intimidating it can make their opponents play worse, according to new research from Duke University’s Fuqua School of Business.

A study of more than 117,000 pro tennis matches and more than 5 million observations in online amateur chess indicates that even when competitors are evenly matched, players perform worse against an opponent they know has been climbing in rank.

Animation showing tennis player getting ready to serve with sweat droplet rolling down her face.
Credit:  Duke University

As players rise, they gather what social scientists call “status momentum,” said Hemant Kakkar, Ph.D., an assistant professor at Fuqua and author of the research published in the Proceedings of the National Academy of Sciences.

Kakkar said an opponent’s momentum is not just hype; a positive trend in an opponent’s ranking can be threatening for athletes, even for seasoned pros whose training often includes creating the right mental space for high-stakes match-ups.

“Our experiments suggest this is because people take the physical laws of momentum into their mental landscape,” Kakkar said. “For instance, they know a ball rolling downhill will keep rolling until someone applies a force to stop it. They do the same mental gymnastics or mental calculations about a competitor. They tend to think, yeah, this person will keep moving up. Because of this, they start feeling threatened and their performance tends to suffer.”

In tennis, for example, the researchers found that players committed more double faults when facing an opponent with status momentum. This type of unforced error suggests the player’s mental game was faltering, the researchers said.

This theory poses a counterpoint to the widely debated “hot hand” concept in sports psychology that suggests a player’s positive momentum can heighten his own performance – in other words, a basketball shooting guard experiences a psychological boost when he makes a basket, and therefore is more likely to sink the next few shots.

While the “hot hand” theory examines how a player’s own momentum could improve performance, Kakkar and his co-authors examine how a player’s momentum actually influences the performance of their opponents.

In addition to analyzing chess and tennis results, the researchers tested their theory with more than 1,800 online research participants. Participants faced various competitive scenarios and took tests to measure how threatened they felt. Results showed they were more threatened by upwardly mobile opponents than by opponents with the same rank who lacked momentum.

Two tactics that many people already use in daily life measurably reduced participants’ threat levels when facing an opponent on a hot streak, the studies found. People who practiced affirmations of their own skills and strengths before a potential matchup were less threatened, as were those who found a reason to doubt an opponent’s momentum.

“Once you present people with some kind of doubt to the veracity of the rankings, such as a clerical error that affected the rankings, that alleviates some of the adverse effect of the opponent’s momentum,” Kakkar said. “We are generally motivated to think more favorably about ourselves, so when given a reason to doubt others – even a slight one – we tend to think, maybe this person isn’t actually that good, and that can change how threatened we feel.”

In addition to Kakkar, study authors included Niro Sivanathan of the London Business School and Nathan C. Pettit of New York University.

Contacts and sources:
Duke University


Citation: The impact of dynamic status changes within competitive rank-ordered hierarchies. Hemant Kakkar, Niro Sivanathan, Nathan C. Pettit. Proceedings of the National Academy of Sciences, 2019; 201908320 DOI: 10.1073/pnas.1908320116




Asteroid Hygiea, in Inner Solar System, Is Smallest Dwarf Planet Say Astronomers

Astronomers using ESO’s SPHERE instrument at the Very Large Telescope (VLT) have revealed that the asteroid Hygiea could be classified as a dwarf planet. The object is the fourth largest in the asteroid belt after Ceres, Vesta and Pallas. For the first time, astronomers have observed Hygiea in sufficiently high resolution to study its surface and determine its shape and size. They found that Hygiea is spherical, potentially taking the crown from Ceres as the smallest dwarf planet in the Solar System.

A new SPHERE/VLT image of Hygiea, which could be the Solar System’s smallest dwarf planet yet.
SPHERE image of Hygiea
Credit: ESO/P. Vernazza et al./MISTRAL algorithm (ONERA/CNRS)

As an object in the main asteroid belt, Hygiea satisfies right away three of the four requirements to be classified as a dwarf planet: it orbits around the Sun, it is not a moon and, unlike a planet, it has not cleared the neighbourhood around its orbit. The final requirement is that it has enough mass for its own gravity to pull it into a roughly spherical shape. This is what VLT observations have now revealed about Hygiea.

Astronomers using the SPHERE instrument on ESO's Very Large Telescope have revealed that the asteroid Hygiea could be a dwarf planet. Find out more about this fascinating object in the new ESOcast Light.
Credit: ESO

New observations with ESO’s SPHERE instrument on the Very Large Telescope have revealed that the surface of Hygiea lacks the very large impact crater that scientists expected to see. Since Hygiea was formed from one of the largest impacts in the history of the asteroid belt, they were expecting to find at least one large, deep impact basin, similar to the one on Vesta (bottom right in the central panel).

SPHERE images of Hygiea, Vesta and Ceres
Credit: ESO/P. Vernazza et al., L. Jorda et al./MISTRAL algorithm (ONERA/CNRS)



This animation shows where Hygiea’s orbit is in our Solar System. Like Ceres, Hygiea is in the main asteroid belt, which is located between the orbits of Mars and Jupiter. While Hygiea was thought to be an asteroid, new observations with the SPHERE instrument on the VLT have revealed that Hygiea is spherical in shape, meaning it could be reclassified as a dwarf planet. This would make it the smallest dwarf planet yet in our Solar System, after Ceres.

Credit: ESO/spaceengine.org

“Thanks to the unique capability of the SPHERE instrument on the VLT, which is one of the most powerful imaging systems in the world, we could resolve Hygiea’s shape, which turns out to be nearly spherical,” says lead researcher Pierre Vernazza from the Laboratoire d'Astrophysique de Marseille in France. “Thanks to these images, Hygiea may be reclassified as a dwarf planet, so far the smallest in the Solar System.”

The team also used the SPHERE observations to constrain Hygiea’s size, putting its diameter at just over 430 km. Pluto, the most famous of dwarf planets, has a diameter close to 2400 km, while Ceres is close to 950 km in size.

Surprisingly, the observations also revealed that Hygiea lacks the very large impact crater that scientists expected to see on its surface, the team report in the study published today in Nature Astronomy. Hygiea is the main member of one of the largest asteroid families, with close to 7000 members that all originated from the same parent body. Astronomers expected the event that led to the formation of this numerous family to have left a large, deep mark on Hygiea.

“This result came as a real surprise as we were expecting the presence of a large impact basin, as is the case on Vesta,” says Vernazza. Although the astronomers observed Hygiea’s surface with a 95% coverage, they could only identify two unambiguous craters. “Neither of these two craters could have been caused by the impact that originated the Hygiea family of asteroids whose volume is comparable to that of a 100 km-sized object. They are too small,” explains study co-author Miroslav Brož of the Astronomical Institute of Charles University in Prague, Czech Republic.

The team decided to investigate further. Using numerical simulations, they deduced that Hygiea’s spherical shape and large family of asteroids are likely the result of a major head-on collision with a large projectile of diameter between 75 and 150 km. Their simulations show this violent impact, thought to have occurred about 2 billion years ago, completely shattered the parent body. Once the left-over pieces reassembled, they gave Hygiea its round shape and thousands of companion asteroids. “Such a collision between two large bodies in the asteroid belt is unique in the last 3–4 billion years,” says Pavel Ševeček, a PhD student at the Astronomical Institute of Charles University who also participated in the study.


Computational simulation of the fragmentation and reassembly that led to the formation of Hygiea and its family of asteroids, following an impact with a large object. While changes in the shape of Hygiea occur after the impact, the dwarf-planet candidate eventually acquires a round shape.

Credit: P. Ševeček/Charles University


Studying asteroids in detail has been possible thanks not only to advances in numerical computation, but also to more powerful telescopes. “Thanks to the VLT and the new generation adaptive-optics instrument SPHERE, we are now imaging main belt asteroids with unprecedented resolution, closing the gap between Earth-based and interplanetary mission observations,” Vernazza concludes.



Contacts and sources:
Bárbara Ferreira
ESO (European Southern Observatory)

Pierre Vernazza
Laboratoire d’Astrophysique de Marseille

Miroslav Brož
Charles University
Pavel Ševeček
Charles University


Journal Reference: A basin-free spherical shape as an outcome of a giant impact on asteroid Hygiea.
P. Vernazza, L. Jorda, P. Ševeček, M. Brož, M. Viikinkoski, J. Hanuš, B. Carry, A. Drouard, M. Ferrais, M. Marsset, F. Marchis, M. Birlan, E. Podlewska-Gaca, E. Jehin, P. Bartczak, G. Dudzinski, J. Berthier, J. Castillo-Rogez, F. Cipriani, F. Colas, F. DeMeo, C. Dumas, J. Durech, R. Fetick, T. Fusco, J. Grice, M. Kaasalainen, A. Kryszczynska, P. Lamy, H. Le Coroller, A. Marciniak, T. Michalowski, P. Michel, N. Rambaux, T. Santana-Ros, P. Tanga, F. Vachier, A. Vigan, O. Witasse, B. Yang, M. Gillon, Z. Benkhaldoun, R. Szakats, R. Hirsch, R. Duffard, A. Chapman, J. L. Maestre. Nature Astronomy, 2019; DOI: 10.1038/s41550-019-0915-8

Monday, October 28, 2019

Enceladus Vents Ice Grains, New Organic Compounds Found in Geyser Vapor

New organic compounds have been detected by NASA's Cassini spacecraft in the ice grains emitted from Saturn's moon Enceladus. Powerful hydrothermal vents eject material from Enceladus' core into the moon's massive subsurface ocean. After mixing with the water, the material is released into space as water vapor and ice grains. Condensed onto the ice grains are nitrogen- and oxygen-bearing organic compounds.

On Earth hydrothermal vents on the ocean floor provide the energy that fuels reactions that produce amino acids, the building blocks of life. Scientists believe Enceladus' hydrothermal vents may operate in the same way, supplying energy that leads to the production of amino acids.
Enceladus Organics on Grains of Ice (Illustration)
Credit: NASA/JPL-Caltech



New kinds of organic compounds, the ingredients of amino acids, have been detected in the plumes bursting from Saturn's moon Enceladus. The findings are the result of the ongoing deep dive into data from the NASA-ESA mission Cassini-Huygens. A team of scientists led by Nozair Khawaja of the Free University of Berlin, publish the work in a new paper in Monthly Notices of the Royal Astronomical Society.

Powerful hydrothermal vents eject material from Enceladus' core, which mixes with water from the moon's massive subsurface ocean before it is released into space as water vapour and ice grains. The newly discovered molecules, condensed onto the ice grains, were determined to be nitrogen- and oxygen-bearing compounds.

On Earth, similar compounds are part of chemical reactions that produce amino acids, the building blocks of life. Hydrothermal vents on the ocean floor provide the energy that fuels the reactions. Scientists believe Enceladus' hydrothermal vents may operate in the same way, supplying energy that leads to the production of amino acids.

"If the conditions are right, these molecules coming from the deep ocean of Enceladus could be on the same reaction pathway as we see here on Earth. We don't yet know if amino acids are needed for life beyond Earth, but finding the molecules that form amino acids is an important piece of the puzzle," said Khawaja.

Although the Cassini-Huygens mission ended in September 2017, the data it provided will be mined for decades. Khawaja's team used data from the spacecraft's Cosmic Dust Analyser, or CDA, which detected ice grains emitted from Enceladus into Saturn's E ring.

The scientists used the CDA's mass spectrometer measurements to determine the composition of organic material in the grains.

The identified organics first dissolved in the ocean of Enceladus, then evaporated from the water surface before condensing and freezing onto ice grains inside the fractures in the moon's crust, scientists found. Blown into space with the rising plume emitted through those fractures, the ice grains were then analysed by Cassini's CDA.

The new findings complement the team's discovery last year of large, insoluble complex organic molecules believed to float on the surface of Enceladus' ocean. The team went deeper with this recent work to find the ingredients, dissolved in the ocean, that are needed for the hydrothermal processes that would spur amino acid formation.

"Here we are finding smaller and soluble organic building blocks — potential precursors for amino acids and other ingredients required for life on Earth," said co-author Jon Hillier.

"This work shows that Enceladus' ocean has reactive building blocks in abundance, and it's another green light in the investigation of the habitability of Enceladus," added co-author Frank Postberg.


The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency (ESA) and the Italian Space Agency. NASA's Jet Propulsion Laboratory, a division of Caltech in Pasadena, California, manages the mission for NASA's Science Mission Directorate, Washington. JPL designed, developed and assembled the Cassini orbiter. The radar instrument was built by JPL and the Italian Space Agency, working with team members from the U.S. and several European countries.

The Royal Astronomical Society (RAS), founded in 1820, encourages and promotes the study of astronomy, solar-system science, geophysics and closely related branches of science. The RAS organises scientific meetings, publishes international research and review journals, recognises outstanding achievements by the award of medals and prizes, maintains an extensive library, supports education through grants and outreach activities and represents UK astronomy nationally and internationally. Its more than 4,400 members (Fellows), a third based overseas, include scientific researchers in universities, observatories and laboratories as well as historians of astronomy and others.


Contacts and sources:
Gretchen McCartney
Jet Propulsion Laboratory, Pasadena, California, USA

Alana Johnson
NASA Headquarters, Washington DC, USA

Dr. Nozair Khawaja
Free University of Berlin


The new work appears in "Low-mass nitrogen-, oxygen-bearing, and aromatic compounds in Enceladean ice grains", N. Khawaja, F. Postberg, J. Hillier, F. Klenner, S. Kempf, L. Nölle, R. Reviol, Z. Zou and R. Srama, Monthly Notices of the Royal Astronomical Society, Volume 489, Issue 4, November 2019, Pages 5231–5243, published by Oxford University Press. (The paper is free to view on the OUP website.)











Fish Walking Out of Foul Water onto Land Spark Fear



The largest fish to walk on land, the voracious northern snakehead, will flee water that is too acidic, salty or high in carbon dioxide - important information for future management of this invasive species.

Snakeheads eat native species of fish, frogs and crayfish, destroying the food web in some habitats. They can survive on land for up to 20 hours if conditions are moist.

Snakehead viewed from the side
Photo courtesy of Noah Bressman

In a new study published Oct. 21 in the peer-reviewed journal Integrative Organismal Biology, Wake Forest researcher Noah Bressman reported for the first time the water conditions that could drive snakeheads onto land.

Earlier this month, wildlife resources officials in Georgia advised anglers to kill the fish on sight after one was caught in a Gwinnett County pond, and the Pennsylvania Fish and Boat Commission confirmed that a 28-inch northern snakehead was caught in the Monongahela River in Pittsburgh.

Bressman also observed the fish moving in a way no other amphibious fish do: It makes near-simultaneous rowing movements with its pectoral fins while wriggling its axial fin back and forth. These combined motions could help the snakehead travel across uneven surfaces such as grass.

"Snakeheads move more quickly and erratically than once believed," said Bressman, a Ph.D. candidate and the corresponding author of Emersion and terrestrial locomotion of the northern snakehead on multiple substrates. "The fish we studied moved super quickly on rough surfaces such as grass, and we think they use their pectoral fins to push off these three-dimensional surfaces."

Native to Asia, the northern snakehead was first found in the United States in 2002, in a Maryland pond. Since then, the fish have been discovered in the Potomac River, Florida, New York City, Philadelphia, Massachusetts, California and North Carolina.


Credit: Billings Brett, U.S. Fish and Wildlife Service

Bressman studied snakehead populations in Maryland, where the fish is considered a threat to the Chesapeake Bay watershed. The Maryland Department of Natural Resources collected snakeheads by electrofishing in tributaries of the Potomac River and adjacent drainage ditches. The fish, which ranged in size from about 1 inch to 27 inches, were subjected to poor water conditions including high salinity, high acidity, stagnation, crowding, high temperatures, pollution and low light.

The fish tolerated all conditions but high salinity and acidity, and stagnant water with too much carbon dioxide.

Although it is unclear how often snakeheads leave water voluntarily and cross over land to invade other waterways, Bressman said these findings can inform how natural resources agencies plan to contain the fish.

Northern Snakehead (Channa argus) at the Tokyo Tower Aquarium in Japan

Credit: George Berninger Jr. / Wikimedia Commons
"When snakeheads were discovered on land, it caused a lot of fear because not much was known about them," he said. "Sure, they can move fairly quickly on land, and they have sharp teeth. But you can easily outrun them, and they won't hurt you, your children or your pets.

"But having a better understanding of how amphibious they are can help us better manage their population."

Bressman's current research focuses on invasive walking catfish in Florida.



Contacts and sources:
Cheryl Walker 
Wake Forest University

Fantastic Grandmothers Find Nests of Venomous Sea Snakes









A group of snorkeling grandmothers is helping scientists better understand marine ecology by photographing venomous sea snakes in waters off the city of Noumea, New Caledonia.

Two years ago the seven women, all in their 60s and 70s, who call themselves "the fantastic grandmothers", offered to help scientists Dr Claire Goiran from the University of New Caledonia and Professor Rick Shine from Australia's Macquarie University in their quest to document the sea snake population in a popular swimming spot known as Baie des citrons.

A group of snorkeling grandmothers is helping scientists better understand marine ecology by photographing venomous sea snakes in waters off the city of Noumea, New Caledonia.

Credit: Claire Goiran/UNC

For 15 years Dr Goiran and Professor Shine had been documenting the presence of a small harmless species, known as the turtle-headed sea snake (Emydocephalus annulatus). In the first eight years of the project they also glimpsed - just six times - another species, the 1.5 metre-long, venomous greater sea snake (Hydrophis major).

From 2013 the pair decided to look more closely for this much larger and much more robust snake, but over the ensuing 36 months saw just 10 per year.

Enter the Fantastic Grandmothers, who were fond of snorkelling recreationally in the Baie des citrons and proposed a citizen science project. Armed with cameras, for the past couple of years the women have been venturing underwater and getting up close and personal with the potentially lethal reptiles.



A group of snorkelling grandmothers is helping scientists better understand marine ecology by photographing venomous sea snakes in waters off the city of Noumea, New Caledonia.

Credit: Claire Goiran/UNC

"The results have been astonishing," says Dr Goiran.

"As soon as the grandmothers set to work, we realized that we had massively underestimated the abundance of greater sea snakes in the bay."

Greater sea snakes have distinctive markings, allowing individuals to be easily identified from photographs. In a paper just published in the journal Ecosphere, the scientists reveal that, thanks to the diving grannies, they now know there are at least 249 of the snakes in the single bay.
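The bookkeeping behind such photo-identification counts is simple to sketch. Below is a minimal Python illustration, with made-up sighting records, of how distinct photographed individuals give a minimum count and how a classic Lincoln-Petersen mark-resight estimate extends it; the paper's actual analysis may differ, so treat this purely as a sketch.

```python
# Minimal sketch (not the study's analysis): tally photo-identified
# individuals and extend the count with a Lincoln-Petersen estimate.
# Sighting records are hypothetical: (survey_date, individual_id).

sightings = [
    ("2017-03-01", "HM-001"), ("2017-03-01", "HM-002"),
    ("2017-03-15", "HM-001"), ("2017-03-15", "HM-003"),
    ("2018-02-10", "HM-002"), ("2018-02-10", "HM-004"),
]

# Minimum number alive: distinct individuals ever photographed.
known = {ind for _, ind in sightings}
print(f"Distinct individuals photographed: {len(known)}")

def lincoln_petersen(first: set, second: set) -> float:
    """Two-session mark-resight estimate: N = n1 * n2 / m,
    where m is the number of individuals seen in both sessions."""
    m = len(first & second)
    if m == 0:
        raise ValueError("No resightings; estimate undefined.")
    return len(first) * len(second) / m

seen_2017 = {ind for date, ind in sightings if date.startswith("2017")}
seen_2018 = {ind for date, ind in sightings if date.startswith("2018")}
print(f"Estimated population: {lincoln_petersen(seen_2017, seen_2018):.0f}")
```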

The photography project has also revealed crucial new information about the snakes' breeding patterns, and numbers of young - more information, says Dr Goiran, than for any other related species, worldwide.

"Remarkably," says Professor Shine, "they found a large number of lethally toxic sea snakes in a small bay that is occupied every day by hordes of local residents and cruise?ship passengers - yet no bites by the species have ever been recorded at Baie des citrons, testifying to their benevolent disposition."

Dr Goiran is full of praise for the elderly women who happily volunteered to take part in what became a very notable citizen science project.

"I have been studying sea snakes in the Baie des Citrons for 20 years, and thought I understood them very well - but the Fantastic Grandmothers have shown me just how wrong I was," she says.

"The incredible energy of the Grandmothers, and their intimate familiarity with 'my' study area, have transformed our understanding of the abundance and ecology of marine snakes in this system. It's a great pleasure and privilege to work with them."



Contacts and sources:
Benjamin Keirnan
Macquarie University



Citation: Claire Goiran and Richard Shine, "Grandmothers and deadly snakes: an unusual project in 'citizen science'", Ecosphere, first published 16 October 2019. https://doi.org/10.1002/ecs2.2877 (full text: https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecs2.2877)



Memory Training Builds upon Strategy Use



Researchers from Åbo Akademi University, Finland, and Umeå University, Sweden, have for the first time obtained clear evidence of the important role strategies have in memory training. Training makes participants adopt various strategies to manage the task, which then affects the outcome of the training.

Strategy acquisition can also explain why the effects of memory training are so limited. Typically, improvements are limited only to tasks that are very similar to the training task – training has provided ways to handle a given type of task, but not much else.

Memory and intellectual improvement applied to self-education and juvenile instruction (1850)
Credit:  Wikimedia Commons

A newly published study sheds light on the underlying mechanisms of working memory training that have remained unclear. It rejects the original idea that repetitive computerized training can increase working memory capacity. Working memory training should rather be seen as a form of skill learning in which the adoption of task-specific strategies plays an important role. 

Hundreds of commercial training programs that promise memory improvements are available for the public. However, the effects of the programs do not extend beyond tasks similar to the ones one has been trained on.

FMRI scan during working memory tasks. Working memory tasks typically show activation in the bilateral and superior frontal cortex as well as in parts of the superior bilateral parietal cortex.
Credit: John Graner, Neuroimaging Department, National Intrepid Center of Excellence, Walter Reed National Military Medical Center

The study included 258 adults who were randomized into three groups. Two of the groups completed a four-week working memory training period during which participants completed three 30-minute training sessions per week with a working memory updating task. One group trained with an externally provided strategy instruction, while the other group trained without it. The third group served as controls and only participated in a pretest, an intermediate test and a posttest. Self-generated strategies were probed with questionnaires at each training session and assessment point.

This study was conducted within the BrainTrain project (http://www.braintrain.fi), one of the Research Centers of Excellence 2015–2018 at Åbo Akademi University.

The study was funded by the Åbo Akademi University Endowment, the Academy of Finland and the Signe and Ane Gyllenberg Foundation. The study was recently published in the Journal of Memory and Language.




Contacts and sources:
Åbo Akademi University


Citation: Fellman, D., Jylkkä, J., Waris, O., Soveri, A., Ritakallio, L., Haga, S., Salmi, J., Nyman, T.N., & Laine, M. (2019). The role of strategy use in working memory training outcomes. Journal of Memory and Language. https://doi.org/10.1016/j.jml.2019.104064









When Is Leaving a Child Home Alone Neglect?



Research shows social workers’ opinions vary depending on whether laws are in place and whether a child is injured while left unsupervised. "Decisions made by child welfare workers related to the determination of child neglect play an important role in promoting responsible childcare and preventing harm to children. However, the factors that influence these decisions are poorly understood," say the researchers.

Four children - about 3 to 6 years old. Home all alone and settlement deserted. Mother and father working on Five-acre Bog.
Credit:  Hine, Lewis Wickes; National Child Labor Committee Collection - Library of Congress

A majority of social workers surveyed believe children should be at least 12 before being left home alone four hours or longer, and they are more likely to consider a home-alone scenario as neglect if a child is injured while left unsupervised, according to research being presented at the American Academy of Pediatrics (AAP) 2019 National Conference & Exhibition.

The research abstract, “Social Workers’ Determination of When a Child Being Left Home Alone Constitutes Child Neglect,” will be presented on Monday, Oct. 28 at the Ernest N. Morial Convention Center.

“We found that social workers who participated in the study were significantly more likely to consider it child neglect when a child was left home alone if the child had suffered an injury, as compared to when they did not,” said Charles Jennissen, MD, FAAP, clinical professor and pediatric emergency medicine staff physician for the University of Iowa Carver College of Medicine in Iowa City.

“The level of neglect is really the same whether a child knowingly left home alone is injured or not, and such situations should be handled the same by child protective investigators,” he said.

For the study, researchers surveyed 485 members of the National Association of Social Workers (NASW) who designated their practice as “Child/Family Welfare” from October to December 2015. The emailed survey presented scenarios in which a child of varying age was left home alone for four hours. The scenarios also varied by whether the child had been injured while home alone and whether a relevant “home alone” law existed.

In cases where a child was not injured, nearly every social worker determined that leaving a child home alone for four hours was child neglect when the child was 6 years old or younger. More than 80% of social workers stated that this was child neglect if the child was 8 years or younger; about 50% stated it was child neglect if the child was 10 or younger. A lower proportion described the scenario as child neglect when a child was age 12 or 14.

When the scenarios included a law making it illegal to leave a child at home alone, or the child was injured, social workers were significantly more likely to consider it a case of child neglect at 8, 10, 12 and 14 years of age. The social workers were also asked at what age it should be illegal to leave a child alone for four hours: over one-half stated it should be illegal for children under 12 years of age, and four-fifths agreed it should be illegal for children under 10 years.

Studies have shown that the lack of adult supervision contributes to more than 40% of U.S. pediatric injury-related deaths, the authors note. They say that the results suggest the need for uniform guidelines and safety laws related to childhood supervision nationally, in order to direct social workers in their evaluation of potential cases of child neglect and to better protect children from harm.

“This study recognizes that there are critical connections between safety laws, advocates and professionals in child welfare, and families with small children,” said Gerene Denning, PhD, emeritus research scientist at the University of Iowa Carver College of Medicine. “It takes partnership between all of these to prevent childhood injuries.”

Dr. Jennissen will present an abstract of the study, available below, between 8 – 9 a.m. Oct. 28 at the Council on Child Abuse and Neglect Program in the Ernest N. Morial Convention Center, Room 386-387. To request an interview with an author, journalists may contact AAP Public Affairs or Cheryl Hodgson at the University of Iowa Stead Family Children’s Hospital, cheryl-hodgson@uiowa.edu or (319) 353-7193.

In addition, Dr. Jennissen will be among the highlighted abstract authors giving brief presentations and available for interviews during a press conference starting at noon on Sunday, Oct. 27, in rooms 208-209 (Press Office) of the Ernest N. Morial Convention Center. During the meeting, you may reach AAP media relations staff in the National Conference Press Room at 504-670-5406.



Contacts and sources:
The American Academy of Pediatrics
http://www.aapexperience.org/


Abstract Title: Social Workers’ Determination of When a Child Being Left Home Alone Constitutes Child Neglect
Charles Jennissen, MD, FAAP












Supercomputer Analyzes Web Traffic across Entire Internet

Computer scientists think modeling web traffic could aid cybersecurity, computing infrastructure design, Internet policy, and more.

Using a supercomputing system, MIT researchers developed a model that captures what global web traffic could look like on a given day, including previously unseen isolated links (left) that rarely connect but seem to impact core web traffic (right).
Image courtesy of the researchers, edited by MIT News

Using a supercomputing system, MIT researchers have developed a model that captures what web traffic looks like around the world on a given day, which can be used as a measurement tool for internet research and many other applications.

Understanding web traffic patterns at such a large scale, the researchers say, is useful for informing internet policy, identifying and preventing outages, defending against cyberattacks, and designing more efficient computing infrastructure. A paper describing the approach was presented at the recent IEEE High Performance Extreme Computing Conference.

For their work, the researchers gathered the largest publicly available internet traffic dataset, comprising 50 billion data packets exchanged in different locations across the globe over a period of several years.

They ran the data through a novel “neural network” pipeline operating across 10,000 processors of the MIT SuperCloud, a system that combines computing resources from the MIT Lincoln Laboratory and across the Institute. That pipeline automatically trained a model that captures the relationships among all links in the dataset — from common pings to giants like Google and Facebook, to rare links that only briefly connect yet seem to have some impact on web traffic.

The model can take any massive network dataset and generate some statistical measurements about how all connections in the network affect each other. That can be used to reveal insights about peer-to-peer filesharing, nefarious IP addresses and spamming behavior, the distribution of attacks in critical sectors, and traffic bottlenecks to better allocate computing resources and keep data flowing.

In concept, the work is similar to measuring the cosmic microwave background of space, the near-uniform radio waves traveling around our universe that have been an important source of information to study phenomena in outer space. “We built an accurate model for measuring the background of the virtual universe of the Internet,” says Jeremy Kepner, a researcher at the MIT Lincoln Laboratory Supercomputing Center and an astronomer by training. “If you want to detect any variance or anomalies, you have to have a good model of the background.”

Joining Kepner on the paper are: Kenjiro Cho of the Internet Initiative Japan; KC Claffy of the Center for Applied Internet Data Analysis at the University of California at San Diego; Vijay Gadepally and Peter Michaleas of Lincoln Laboratory’s Supercomputing Center; and Lauren Milechin, a researcher in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

Breaking up data

In internet research, experts study anomalies in web traffic that may indicate, for instance, cyber threats. To do so, it helps to first understand what normal traffic looks like. But capturing that has remained challenging. Traditional “traffic-analysis” models can only analyze small samples of data packets exchanged between sources and destinations limited by location. That reduces the model’s accuracy.

The researchers weren’t specifically looking to tackle this traffic-analysis issue. But they had been developing new techniques that could be used on the MIT SuperCloud to process massive network matrices. Internet traffic was the perfect test case.

Networks are usually studied in the form of graphs, with actors represented by nodes and links representing connections between them. With internet traffic, the nodes vary in size and location. Large supernodes are popular hubs, such as Google or Facebook. Leaf nodes spread out from that supernode and have multiple connections to each other and to the supernode. Located outside that “core” of supernodes and leaf nodes are isolated nodes and links, which connect to each other only rarely.

Capturing the full extent of those graphs is infeasible for traditional models. “You can’t touch that data without access to a supercomputer,” Kepner says.

In partnership with the Widely Integrated Distributed Environment (WIDE) project, founded by several Japanese universities, and the Center for Applied Internet Data Analysis (CAIDA), in California, the MIT researchers captured the world’s largest packet-capture dataset for internet traffic. The anonymized dataset contains nearly 50 billion unique source and destination data points between consumers and various apps and services during random days across various locations over Japan and the U.S., dating back to 2015.

Before they could train any model on that data, they needed to do some extensive preprocessing. To do so, they utilized software they created previously, called Dynamic Distributed Dimensional Data Model (D4M), which uses some averaging techniques to efficiently compute and sort “hypersparse data” that contains far more empty space than data points. The researchers broke the data into units of about 100,000 packets across 10,000 MIT SuperCloud processors. This generated more compact matrices of billions of rows and columns of interactions between sources and destinations.
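D4M itself is associative-array software developed at MIT; as a rough stand-in, the sketch below uses Python's scipy.sparse to show the same idea: stream anonymized (source, destination) packet records in fixed-size chunks and accumulate them into a hypersparse traffic count matrix. The chunk size and toy records are illustrative, not taken from the paper.

```python
# Rough stand-in for the aggregation step (not D4M itself): accumulate
# anonymized (source, destination) packet records into a hypersparse
# source x destination count matrix, one fixed-size chunk at a time.
import numpy as np
from scipy import sparse

def chunk_to_csr(chunk, num_ids):
    src, dst = zip(*chunk)
    ones = np.ones(len(chunk), dtype=np.int64)
    # coo_matrix sums duplicate (src, dst) entries on conversion.
    return sparse.coo_matrix((ones, (src, dst)),
                             shape=(num_ids, num_ids)).tocsr()

def traffic_matrix(packets, num_ids, chunk_size=100_000):
    """packets: iterable of (source_id, dest_id) integer pairs.
    chunk_size is illustrative, echoing the ~100,000-packet units."""
    total = sparse.csr_matrix((num_ids, num_ids), dtype=np.int64)
    chunk = []
    for pair in packets:
        chunk.append(pair)
        if len(chunk) == chunk_size:
            total = total + chunk_to_csr(chunk, num_ids)
            chunk = []
    if chunk:
        total = total + chunk_to_csr(chunk, num_ids)
    return total

# Toy usage: six packets among four endpoints.
pkts = [(0, 1), (0, 1), (2, 3), (1, 0), (0, 2), (2, 3)]
print(traffic_matrix(pkts, num_ids=4, chunk_size=2).toarray())
```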

Capturing outliers

But the vast majority of cells in this hypersparse dataset were still empty. To process the matrices, the team ran a neural network on the same 10,000 cores. Behind the scenes, a trial-and-error technique started fitting models to the entirety of the data, creating a probability distribution of potentially accurate models.

Then, it used a modified error-correction technique to further refine the parameters of each model to capture as much data as possible. Traditionally, error-correcting techniques in machine learning will try to reduce the significance of any outlying data in order to make the model fit a normal probability distribution, which makes it more accurate overall. But the researchers used some math tricks to ensure the model still saw all outlying data — such as isolated links — as significant to the overall measurements.

In the end, the neural network essentially generates a simple model, with only two parameters, that describes the internet traffic dataset, “from really popular nodes to isolated nodes, and the complete spectrum of everything in between,” Kepner says.
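The article does not give the functional form of that two-parameter model. Related work from this group has used a modified Zipf-Mandelbrot distribution, so the sketch below, which fits p(d) proportional to 1/(d + delta)^alpha to a toy degree histogram, should be read as a plausible illustration rather than the paper's exact model.

```python
# Hedged sketch: fit a two-parameter heavy-tailed curve to a degree
# histogram. The modified Zipf-Mandelbrot shape p(d) ~ 1/(d + delta)^alpha
# is an assumption here, not necessarily the paper's exact model.
import numpy as np
from scipy.optimize import curve_fit

# Toy data: node counts n(d) at each observed degree d (heavy-tailed).
degrees = np.array([1, 2, 3, 5, 10, 30, 100, 300, 1000], dtype=float)
counts = np.array([5000, 1800, 900, 400, 120, 20, 4, 1.5, 0.4])
probs = counts / counts.sum()

def log_model(d, alpha, delta):
    w = 1.0 / np.power(d + delta, alpha)
    return np.log(w / w.sum())  # normalize over the observed degrees

# Fitting in log space keeps the rare, high-degree tail significant,
# echoing the paper's point about not washing out outliers.
(alpha, delta), _ = curve_fit(log_model, degrees, np.log(probs),
                              p0=(1.5, 1.0),
                              bounds=((0.1, 0.0), (5.0, 10.0)))
print(f"alpha = {alpha:.2f}, delta = {delta:.2f}")
```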

The researchers are now reaching out to the scientific community to find their next application for the model. Experts, for instance, could examine the significance of the isolated links the researchers found in their experiments that are rare but seem to impact web traffic in the core nodes.

Beyond the internet, the neural network pipeline can be used to analyze any hypersparse network, such as biological and social networks. “We’ve now given the scientific community a fantastic tool for people who want to build more robust networks or detect anomalies of networks,” Kepner says. “Those anomalies can be just normal behaviors of what users do, or it could be people doing things you don’t want.”


Contacts and sources:
 Rob Matheson
Massachusetts Institute of Technology (MIT)





Cars That See Shadows and Look Around Corners Are Coming



To improve the safety of autonomous systems, MIT engineers have developed a system that can sense tiny changes in shadows on the ground to determine if there’s a moving object coming around the corner.

Autonomous cars could one day use the system to quickly avoid a potential collision with another car or pedestrian emerging from around a building’s corner or from in between parked cars. In the future, robots that may navigate hospital hallways to make medication or supply deliveries could use the system to avoid hitting people.

MIT engineers have developed a system for autonomous vehicles that senses tiny changes in shadows on the ground to determine if there’s a moving object coming around the corner, such as when another car is approaching from behind a pillar in a parking garage.
Credit: MIT

In a paper being presented at next week’s International Conference on Intelligent Robots and Systems (IROS), the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, the car-based system beats traditional LiDAR — which can only detect visible objects — by more than half a second.

That may not seem like much, but fractions of a second matter when it comes to fast-moving autonomous vehicles, the researchers say.

“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

Currently, the system has only been tested in indoor settings. Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.

Joining Rus on the paper are: first author Felix Naser SM ’19, a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; recent graduate Christina Liao ’19; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Extending ShadowCam

For their work, the researchers built on their system, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper, collaborated on the earlier versions of the system, which were presented at conferences in 2017 and 2018.

For input, ShadowCam uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner. It detects changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. Some of those changes may be difficult to detect, or even invisible to the naked eye, depending on properties of the object and environment. ShadowCam computes that information and classifies each image as containing either a stationary object or a dynamic, moving one. If it detects a dynamic image, it reacts accordingly.
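The article describes the idea but not the code; a minimal sketch of that classify-by-intensity-change loop might look like the following, where the region of interest and threshold are placeholders rather than ShadowCam's actual values.

```python
# Minimal sketch of the classify-by-intensity-change idea (not the actual
# ShadowCam code). ROI and THRESHOLD are placeholders, not tuned values.
import cv2
import numpy as np

ROI = (slice(300, 400), slice(200, 500))  # watched floor patch (rows, cols)
THRESHOLD = 4.0                           # mean-intensity-change cutoff

def classify(frames):
    """Yield 'dynamic' or 'static' for each BGR frame after the first."""
    prev = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        patch = gray[ROI]
        if prev is not None:
            # Mean absolute intensity change inside the watched patch.
            score = float(np.mean(np.abs(patch - prev)))
            yield "dynamic" if score > THRESHOLD else "static"
        prev = patch
```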

Adapting ShadowCam for autonomous vehicles required a few advances. The early version, for instance, relied on lining an area with augmented reality labels called “AprilTags,” which resemble simplified QR codes. Robots scan AprilTags to detect and compute their precise 3D position and orientation relative to the tag. ShadowCam used the tags as features of the environment to zero in on specific patches of pixels that may contain shadows. But modifying real-world environments with AprilTags is not practical.

The researchers developed a novel process that combines image registration and a new visual-odometry technique. Often used in computer vision, image registration essentially overlays multiple images to reveal variations in the images. Medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.

Visual odometry, used for Mars Rovers, estimates the motion of a camera in real-time by analyzing pose and geometry in sequences of images. The researchers specifically employ “Direct Sparse Odometry” (DSO), which can compute feature points in environments similar to those captured by AprilTags. Essentially, DSO plots features of an environment on a 3D point cloud, and then a computer-vision pipeline selects only the features located in a region of interest, such as the floor near a corner. (Regions of interest were annotated manually beforehand.)

As ShadowCam takes input image sequences of a region of interest, it uses the DSO-image-registration method to overlay all the images from the same viewpoint of the robot. Even as the robot moves, ShadowCam can zero in on the exact same patch of pixels where a shadow is located, helping it detect subtle deviations between images.
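DSO is a full visual-odometry system, but the overlay step by itself can be approximated with off-the-shelf tools. The sketch below uses OpenCV's ECC image alignment (a stand-in, not the authors' pipeline) to warp each frame onto a reference view so the same pixel patch can be compared across frames; translation-only motion is an assumption made for brevity.

```python
# Stand-in for the overlay step (the paper pairs registration with
# DSO-based odometry): align each frame to a reference view with
# OpenCV's ECC method, assuming pure translation. Requires OpenCV >= 4.1.
import cv2
import numpy as np

def align_to_reference(ref_gray, frame_gray):
    """Warp frame_gray (single-channel) onto ref_gray's viewpoint."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, frame_gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria,
                                   None, 5)
    h, w = ref_gray.shape
    # Inverse map brings the frame into the reference's pixel grid, so
    # the same patch of floor lands on the same pixels in every image.
    return cv2.warpAffine(frame_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```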

Next is signal amplification, a technique introduced in the first paper. Pixels that may contain shadows get a boost in color that raises the signal-to-noise ratio, making extremely weak signals from shadow changes far more detectable. If the boosted signal reaches a certain threshold — based partly on how much it deviates from other nearby shadows — ShadowCam classifies the image as “dynamic.” Depending on the strength of that signal, the system may tell the robot to slow down or stop.
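As a loose illustration of that amplify-then-threshold step (the paper's boost operates on color channels; the gain, cutoff and update rate below are placeholders):

```python
# Loose illustration of amplify-then-threshold (not the paper's exact
# method, which boosts color channels). GAIN, CUTOFF and the running-mean
# update rate are placeholders.
import numpy as np

GAIN = 8.0     # amplification applied to per-pixel deviations
CUTOFF = 10.0  # threshold on the boosted mean deviation

def amplify_and_classify(patch, running_mean):
    """patch, running_mean: float32 arrays over the watched pixel region."""
    boosted = running_mean + GAIN * (patch - running_mean)
    deviation = float(np.mean(np.abs(boosted - running_mean)))
    label = "dynamic" if deviation > CUTOFF else "static"
    # Update the background slowly so genuine shadow motion stands out.
    new_mean = 0.95 * running_mean + 0.05 * patch
    return label, new_mean
```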

“By detecting that signal, you can then be careful. It may be a shadow of some person running from behind the corner or a parked car, so the autonomous car can slow down or stop completely,” Naser says.

Tag-free testing

In one test, the researchers evaluated the system’s performance in classifying moving or stationary objects using AprilTags and the new DSO-based method. An autonomous wheelchair steered toward various hallway corners while humans turned the corner into the wheelchair’s path. Both methods achieved the same 70-percent classification accuracy, indicating AprilTags are no longer needed.

In a separate test, the researchers implemented ShadowCam in an autonomous car in a parking garage, where the headlights were turned off, mimicking nighttime driving conditions. They compared car-detection times versus LiDAR. In an example scenario, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR. Moreover, because the researchers had tuned ShadowCam specifically to the garage’s lighting conditions, the system achieved a classification accuracy of around 86 percent.

Next, the researchers are developing the system further to work in different indoor and outdoor lighting conditions. In the future, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.

This work was funded by the Toyota Research Institute.


Contacts and sources:
Rob Matheson
Massachusetts Institute of Technology (MIT)