Friday, May 24, 2019

Chimpanzees Menu Includes Tortoises

Chimpanzees found to be eating tortoises after breaking their shells on tree trunks.

An international team of researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig and the University of Osnabrück, Germany, has observed wild chimpanzees in the Loango National Park, Gabon, eating tortoises. The researchers describe the first observations of this potentially cultural behavior, in which chimpanzees hit tortoises against tree trunks until the shells break open, then feed on the meat.


Chimpanzee at Loango National Park in Gabon feeding on tortoise meat.

Credit: © Erwan Theleste

"We have known for decades that chimpanzees feed on meat from a variety of animal species, but until now the consumption of reptiles has not been observed", says Tobias Deschner, a primatologist at the Max Planck Institute for Evolutionary Anthropology. "What is particularly interesting is that they use a percussive technique that they normally employ to open hard-shelled fruits to gain access to meat of an animal that is almost inaccessible for any other predator".

The researchers studied the behaviour of chimpanzees of the newly habituated Rekambo community. They observed 38 predation events by ten different chimpanzees in the dry season, a period when other preferred foods such as fruit are abundant. "Sometimes, younger animals or females were unable to crack open the tortoise on their own. They then regularly handed the tortoise over to a stronger male who cracked the tortoise’s shell open and shared the meat with all other individuals present", says Simone Pika, first author of the study and a cognitive scientist at the University of Osnabrück.

Leftovers from dinner


Credit: Max Planck Society


There was one exceptional case in which an adult male, who was on his own, cracked a tortoise, ate half of it up while sitting in a tree and then tucked the rest of it in a tree fork. He climbed down, built his nest in a nearby tree and came back the next morning to retrieve the leftovers and continue to feast on them for breakfast. "This indicates that chimpanzees may plan for the future", says Pika. "The ability to plan for a future need, such as for instance hunger, has so far only been shown in non-human animals in experimental and/or captive settings. Many scholars still believe that future-oriented cognition is a uniquely human ability. Our findings thus suggest that even after decades of research, we have not yet grasped the full complexity of chimpanzees’ intelligence and flexibility".

Deschner adds: "Wild chimpanzee behaviour has been studied now for more than 50 years and at more than ten long-term field sites all across tropical Africa. It is fascinating that we can still discover completely new facets of the behavioural repertoire of this species as soon as we start studying a new population".

The authors further emphasize the importance of non-human primate field observations for informing theories of hominin evolution. "As chimpanzees are among our closest living relatives, the study of their behaviour is a window into our own history and evolution", says Pika. "To prevent this window from closing once and for all, we need to do whatever we can to secure the survival of these fascinating animals in their natural habitats across Africa", concludes Deschner.





Contacts and sources:
Prof. Dr. Simone Pika, Sandra Jacob
Max Planck Institute for Evolutionary Anthropology
Citation: Wild chimpanzees (Pan troglodytes troglodytes) exploit tortoises (Kinixys erosa) via percussive technology.
Simone Pika, Harmonie Klein, Sarah Bunel, Pauline Baas, Erwan Théleste, Tobias Deschner. Scientific Reports, 2019; 9 (1) DOI: 10.1038/s41598-019-43301-8



The Beer of Pharaohs and Drunken Philistines Recreated with Ancient Yeast

What kind of beer did the Pharaohs drink? In ancient times, beer was an important part of people's daily diet. Great powers were attributed to beer in the ancient world, particularly in religious worship and healing.

Israeli scientists resurrected yeast from ancient beer jugs to recreate a 5,000-year-old brew.

The pottery used to produce beer in antiquity served as the basis for this new research. The research was led by Dr. Ronen Hazan and Dr. Michael Klutstein, microbiologists from the School of Dental Medicine at the Hebrew University of Jerusalem (HUJI). They examined the colonies of yeast that formed and settled in the pottery's nano-pores. Ultimately, they were able to resurrect this yeast to create a high-quality beer...that's approximately 5,000 years old.

Beer cruse from Tel Tzafit/Gath archaeological digs, from which Philistine beer was produced.

Credit:  Yaniv Berman/Israel Antiquities Authority.


Many cooks were invited into this 'beer kitchen' to isolate the yeast specimens from the ancient debris and to create a beer with them. First, the scientists reached out to vintners at Kadma Winery. This winery still produces wine in clay vessels, showing that yeast can be safely recovered from pottery, even after lying dormant in the sun for years.

The yeast was then photographed by Dr. Tziona Ben-Gedalya at Ariel University's Eastern R&D Center. Following her initial examination, the team reached out to archaeologists Dr. Yitzhak Paz from the Israel Antiquities Authority (IAA), Professor Aren Maeir at Bar-Ilan University and Professors Yuval Gadot and Oded Lipschits from Tel Aviv University. These archaeologists gave them shards of pottery that had been used as beer and mead (honey wine) jugs in ancient times--and, miraculously, still had yeast specimens stuck inside. These jars date back to the reign of Egyptian Pharaoh Narmer (roughly 3000 BCE), to Aramean King Hazael (800 BCE) and to the Prophet Nehemiah (400 BCE) who, according to the Bible, governed Judea under Persian rule.

The researchers, with the help of HUJI student Tzemach Aouizerat, cleaned and sequenced the full genome of each yeast specimen and turned them over to Dr. Amir Szitenberg at the Dead Sea-Arava Science Center for analysis. Szitenberg found that these 5,000-year-old yeast cultures are similar to those used in traditional African brews, such as the Ethiopian honey wine tej, and to modern beer yeast.
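Genome-scale similarity of the kind Szitenberg reports is often quantified by comparing shared sequence content. The study's actual pipeline is not described here, but as an illustration only, a minimal alignment-free comparison based on shared k-mers (overlapping substrings of length k) can be sketched as follows; the sequences and the value of k are purely toy choices:

```python
def kmers(seq: str, k: int = 4) -> set:
    """All overlapping substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: str, b: str, k: int = 4) -> float:
    """Jaccard similarity of two sequences' k-mer sets:
    1.0 for identical sequences, 0.0 when no k-mers are shared."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Toy sequences only -- real genome comparisons use much longer k-mers
# and whole assembled genomes.
similarity = jaccard("ACGTACGTAC", "ACGTACGGAC")
```

Real bio-archaeological comparisons layer phylogenetics and careful contamination controls on top of this kind of raw similarity measure, but the underlying idea of scoring shared sequence content is the same.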


L'chaim! The Israeli research team samples their ancient brew.

Credit: Yaniv Berman, Israel Antiquities Authority.


Now it was time to recreate the ancient brew. Local Israeli beer expert Itai Gutman helped the scientists make the beer and the brew was sampled by Ariel University's Dr. Elyashiv Drori, as well as by certified tasters from the International Beer Judge Certification Program (BJCP), under the direction of brewer and Biratenu owner Shmuel Nakai. The tasters gave the beer a thumbs up, deeming it high-quality and safe for consumption.

Dr. Ronen Hazan, Hebrew University-Hadassah School of Dental Medicine: "The greatest wonder here is that the yeast colonies survived within the vessel for thousands of years--just waiting to be excavated and grown. This ancient yeast allowed us to create beer that lets us know what ancient Philistine and Egyptian beer tasted like. By the way, the beer isn't bad. Aside from the gimmick of drinking beer from the time of King Pharaoh, this research is extremely important to the field of experimental archaeology--a field that seeks to reconstruct the past. Our research offers new tools to examine ancient methods, and enables us to taste the flavors of the past."

Dr. Yitzchak Paz, Israel Antiquities Authority: "We are talking about a real breakthrough here. This is the first time we succeeded in producing ancient alcohol from ancient yeast. In other words, from the original substances from which alcohol was produced. This has never been done before."

Prof. Yuval Gadot, Tel Aviv University's Department of Archaeology and Ancient Near Eastern Cultures: "We dug at Ramat Rachel, the largest Persian site in the Judaean kingdom, and found a large concentration of jugs with the letters J, H, D - Yahud - written on them. In a royal site like Ramat Rachel it makes sense that alcohol would be consumed at the home of the Persian governor."

Prof. Aren Maeir, Bar-Ilan University's Department of Land of Israel Studies and Archaeology: "These findings paint a portrait that supports the biblical image of drunken Philistines."


Contacts and sources:
Tali Aronsky
Hebrew University of Jerusalem

Citation: Isolation and Characterization of Live Yeast Cells from Ancient Vessels as a Tool in Bio-Archaeology. Tzemach Aouizerat, Itai Gutman, Yitzhak Paz, Aren M. Maeir, Yuval Gadot, Daniel Gelman, Amir Szitenberg, Elyashiv Drori, Ania Pinku. mBio 2019; 10(2): e00388-19. https://doi.org/10.1128/mBio.00388-19



Unique Iron Age Shield Gives Insight into Prehistoric Technology



A unique bark shield, constructed with wooden laths during the Iron Age, has provided new insight into the construction and design of prehistoric weaponry.

The unique find has given new insight into prehistoric technology

The only one of its kind ever discovered in Europe, the shield was found south of Leicester on the Everards Meadows site, in what is believed to have been a livestock watering hole.

Credit:  University of York

Following analysis of the construction of the shield by Michael Bamforth at the University of York, it became apparent that the shield had been carefully constructed with wooden laths to stiffen the structure, a wooden edging rim, and a woven boss to protect the wooden handle.

Although prior evidence has shown that prehistoric people used bark to make bowls and boxes, this is the first time researchers have seen the material used for a weapon of war.

Severe damage

The outside of the shield has been painted and scored in red chequerboard decoration. Radiocarbon dating has revealed that the shield was made between 395 and 255 BC.
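Radiocarbon dates like the 395-255 BC range come from measuring how much of a sample's carbon-14 has decayed. As an illustration only (real dates require calibration curves and laboratory corrections well beyond this), the conventional radiocarbon age follows directly from exponential decay using the Libby mean life of 8,033 years:

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional ages use the Libby half-life (5,568 yr)

def radiocarbon_age_bp(fraction_modern: float) -> float:
    """Conventional radiocarbon age (years before present) from the measured
    14C content expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining ~75% of modern 14C gives roughly 2,300 radiocarbon years BP --
# broadly the uncalibrated ballpark for an object made a few centuries BC.
age = radiocarbon_age_bp(0.75)
```

Converting such a raw radiocarbon age into a calendar range like 395-255 BC requires calibration against tree-ring records, which is why published dates are quoted as intervals rather than single years.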

The shield was severely damaged before being deposited in the ground, with some of the damage likely to have been caused by the pointed tips of spears. Further analysis is planned to help understand if this occurred in battle or as an act of ritual destruction.

Prehistoric technology

Michael Bamforth, from the University of York’s Department of Archaeology, said: “This truly astonishing and unparalleled artefact has given us an insight into prehistoric technology that we could never have guessed at.

“Although we know that bark has many uses, including making boxes and containers, it doesn't survive well in the archaeological record. Initially we didn't think bark could be strong enough to use as a shield to defend against spears and swords, and we wondered if it could have been for ceremonial use.

"It was only through experimentation that we realised it could be tough enough to protect against blows from metal weapons. Although a bark shield is not as strong as one made from wood or metal, it would be much lighter allowing the user much more freedom of movement."

CT scanning

The shield was first discovered by archaeologists from University of Leicester Archaeological Services in 2015 at an Iron Age site within a farming landscape known to have been used and managed by Iron Age communities, with the Fosse Way Roman road running close by.

Many cutting-edge analytical techniques have been used to understand the construction of the object, including CT scanning and 3D printing.

Dr Rachel Crellin, Lecturer in later Prehistory at the University of Leicester, who assessed the evidence for impact damage, said: “The first time I saw the shield I was absolutely awed by it: the complex structure, the careful decorations, and the beautiful boss.

“I must admit I was initially sceptical about whether the shield would have functioned effectively, however the experimental work showed that the shield would have worked very effectively, and analysis of the surface of the object has identified evidence of use.”

Craft practices

The shield has now been conserved by York Archaeological Trust and will be deposited with the British Museum on behalf of Everards of Leicestershire, who funded and supported the project.

Dr Julia Farley, Curator of British and European Iron Age Collections at the British Museum, said: “This is an absolutely phenomenal object, one of the most marvellous, internationally important finds that I've encountered in my career.

“Bark and basketry objects were probably commonplace in ancient Britain, but they seldom survive, so to be able to study this shield is a great privilege. It holds a rich store of information about Iron Age society and craft practices.”

Contacts and sources:
Samantha Martin
University of York






Simple Test Can Tell If You're Stressed Out


Would you like to know if you are stressed out?

Stress is often called "the silent killer" because of its stealthy and mysterious effects on everything from heart disease to mental health.

Researchers at the University of Cincinnati have developed a new test that can easily measure common stress hormones in sweat, blood, urine or saliva. Eventually, they hope to turn their work into a simple home-testing kit that patients can use to monitor their health.

University of Cincinnati research assistant Shima Dalirirad holds up a test strip that can measure stress biomarkers in UC's Nanoelectronics Laboratory.

Credit: Andrew Higley/UC Creative Services

The results were published this month in the journal American Chemical Society Sensors.

“I wanted something that’s simple and easy to interpret,” said Andrew Steckl, an Ohio Eminent Scholar and professor of electrical engineering in UC’s College of Engineering and Applied Science.

“This may not give you all the information, but it tells you whether you need a professional who can take over,” Steckl said.

The breakthrough was made possible by UC's commitment to research, as described in its strategic plan, Next Lives Here.

UC researchers developed a device that uses ultraviolet light to measure stress hormones in a drop of blood, sweat, urine or saliva. These stress biomarkers are found in all of these fluids, albeit in different quantities, Steckl said.

"It measures not just one biomarker but multiple biomarkers. And it can be applied to different bodily fluids. That's what's unique," he said.

Steckl has been studying biosensors for years in his Nanoelectronics Laboratory. The latest journal article is part of a series of research papers his group has written on biosensors, including one that provides a review of methods for point-of-care diagnostics of stress biomarkers.

University of Cincinnati research assistant Shima Dalirirad examines a machine that prints test strips in UC's Nanoelectronics Laboratory.
Credit: Andrew Higley/UC Creative Services

Personal experience helping his father with a health crisis informed his research and opinion that a home test for various health concerns would be incredibly helpful.

"I had to take him quite often to the lab or doctor to have tests done to adjust his medication. I thought it would be great if he could just do the tests himself to see if he was in trouble or just imagining things," Steckl said. "This doesn't replace laboratory tests, but it could tell patients more or less where they are."

UC received grant funding for the project from the National Science Foundation and the U.S. Air Force Research Lab. Steckl said the military studies acute stress in its pilots and others who are pushing the edges of human performance.

"Pilots are placed under enormous stress during missions. The ground controller would like to know when the pilot is reaching the end of his or her ability to control the mission properly and pull them out before a catastrophic ending," Steckl said.

But the UC device has widespread applications, Steckl said. His lab is pursuing the commercial possibilities.

"You're not going to replace a full-panel laboratory blood test. That's not the intent," Steckl said. "But if you're able to do the test at home because you're not feeling well and want to know where you stand, this will tell whether your condition has changed a little or a lot."

UC graduate Prajokta Ray, the study's first author, said she was excited to work on such a pressing problem for her Ph.D. studies.

"Stress harms us in so many ways. And it sneaks up on you. You don't know how devastating a short or long duration of stress can be," Ray said. "So many physical ailments such as diabetes, high blood pressure and neurological or psychological disorders are attributed to stress the patient has gone through. That's what interested me."

Ray said taking exams always gave her stress. Understanding how stress affects you individually could be extremely valuable, she said.

"Stress has been a hot topic over the past couple of years. Researchers have tried very hard to develop a test that is cheap, easy and effective and detects these hormones in low concentrations," she said. "This test has the potential to make a strong commercial device. It would be great to see the research go in that direction."

UC is at the forefront of biosensor technology. Its labs are examining continuous sweat testing and point-of-care diagnostics for everything from traumatic brain injury to lead poisoning.

Steckl, too, has been a preeminent innovator at UC. His papers have been cited more than 13,000 times, according to Google Scholar. In 2016, he used salmon sperm, a common byproduct of the fishing industry, to replace rare earth metals used in light-emitting diodes for a new kind of organic LED.

"We're device engineers at heart," Steckl said. "We don't shy away from things we don't know much about to begin with. We look for opportunities. That's a hallmark of electrical engineers. We're not smart enough not to go where we shouldn't. Sometimes that pays off!"





Bacteria in Fermented Food Signal the Human Immune System, Explaining Health Benefits

Scientists have discovered how fermented foods help our immune system.

Researchers have discovered that humans and great apes possess a receptor on their cells that detects metabolites from bacteria commonly found in fermented foods and triggers movement of immune cells. Claudia Stäubert of the University of Leipzig and colleagues report these findings in a new study published 23rd May in PLOS Genetics.

Consuming lactic acid bacteria - the kind that turn milk into yogurt and cabbage into sauerkraut - can offer many health benefits, but scientists still don't understand, on a molecular level, why it is helpful to ingest these bacteria and how that affects our immune system. Now, Stäubert and her colleagues have found one way that lactic acid bacteria interact with our bodies. Initially the researchers were investigating proteins on the surface of cells called hydroxycarboxylic acid (HCA) receptors. 

Researchers found that D-phenyllactic acid is absorbed from food fermented by lactic acid bacteria (e.g. sauerkraut) and induces HCA3-dependent migration in human monocytes. Future studies need to address how HCA3 activation by lactic acid bacteria-derived metabolites modulates immune function and energy storage.

Credit: Claudia Stäubert

Most animals have only two types of this receptor, but humans and great apes have three. The researchers discovered that a metabolite produced by lactic acid bacteria, D-phenyllactic acid, binds strongly to the third HCA receptor, signalling their presence to the immune system. The researchers propose that the third HCA receptor arose in a common ancestor of humans and great apes, enabling them to consume foods that are starting to decay, such as fruits picked up from the ground.

The study yields new insights into the evolutionary dynamics between microbes and their human hosts and opens new research directions for understanding the multiple positive effects of eating fermented foods. "We are convinced that this receptor very likely mediates some beneficial and anti-inflammatory effects of lactic acid bacteria in humans," stated author Claudia Stäubert. "That is why we believe it could serve as a potential drug target to treat inflammatory diseases."

Future studies may reveal the details of how D-phenyllactic acid impacts the immune system, and whether the metabolite also affects fat cells, which also carry the third HCA receptor on their surfaces.


Contacts and sources:
Claudia Stäubert
PLOS


Citation: Metabolites of lactic acid bacteria present in fermented foods are highly potent agonists of human hydroxycarboxylic acid receptor 3. Peters A, Krumbholz P, Jäger E, Heintz-Buschart A, Çakir MV, Rothemund S, et al. (2019) PLoS Genet 15(5): e1008145. https://doi.org/10.1371/journal.pgen.1008145. The article is freely available.



Exotic Matter Discovered in The Sun's Atmosphere

Solar flares are the most energetic phenomena in the solar system. A major new finding about how matter behaves in the extreme conditions of the Sun's atmosphere was announced this week by scientists from Ireland and France.

The scientists used large radio telescopes and ultraviolet cameras on a NASA spacecraft to better understand the exotic but poorly understood "fourth state of matter". Known as plasma, this matter could hold the key to developing safe, clean and efficient nuclear energy generators on Earth. The scientists published their findings in the leading international journal Nature Communications.

Most of the matter we encounter in our everyday lives comes in the form of solid, liquid or gas, but the majority of the Universe is composed of plasma - a highly unstable and electrically charged fluid. The Sun is also made up of this plasma.

A solar flare captured by NASA's Solar Dynamics Observatory in 2015.

Credit: NASA/SDO

Despite being the most common form of matter in the Universe, plasma remains a mystery, mainly due to its scarcity in natural conditions on Earth, which makes it difficult to study. Special laboratories recreate the extreme conditions of space for this purpose, but the Sun represents an all-natural laboratory for studying how plasma behaves in conditions that are often too extreme for man-made, Earth-based laboratories.

Postdoctoral Researcher at Trinity College Dublin and the Dublin Institute of Advanced Studies (DIAS), Dr Eoin Carley, led the international collaboration. He said: "The solar atmosphere is a hotbed of extreme activity, with plasma temperatures in excess of 1 million degrees Celsius and particles that travel close to light-speed. The light-speed particles shine bright at radio wavelengths, so we're able to monitor exactly how plasmas behave with large radio telescopes."

"We worked closely with scientists at the Paris Observatory and performed observations of the Sun with a large radio telescope located in Nançay in central France. We combined the radio observations with ultraviolet cameras on NASA's space-based Solar Dynamics Observatory spacecraft to show that plasma on the Sun can often emit radio light that pulses like a lighthouse. We have known about this activity for decades, but our use of space- and ground-based equipment allowed us to image the radio pulses for the first time and see exactly how plasmas become unstable in the solar atmosphere."

Studying the behaviour of plasmas on the Sun allows for a comparison with how they behave on Earth, where much effort is now under way to build magnetic confinement fusion reactors. These are nuclear energy generators that are much safer, cleaner and more efficient than the fission reactors we currently use for energy.

Professor at DIAS and collaborator on the project, Peter Gallagher, said: "Nuclear fusion is a different type of nuclear energy generation that fuses plasma atoms together, as opposed to breaking them apart like fission does. Fusion is more stable and safer, and it doesn't require highly radioactive fuel; in fact, much of the waste material from fusion is inert helium."

"The only problem is that nuclear fusion plasmas are highly unstable. As soon as the plasma starts generating energy, some natural process switches off the reaction. While this switch-off behaviour is like an inherent safety switch -- fusion reactors cannot form runaway reactions -- it also means the plasma is difficult to maintain in a stable state for energy generation. By studying how plasmas become unstable on the Sun, we can learn about how to control them on Earth."

The success of this research was made possible by the close ties between researchers at Trinity, DIAS, and their French collaborators.

Dr Nicole Vilmer, lead collaborator on the project in Paris, said: "The Paris Observatory has a long history of making radio observations of the Sun, dating back to the 1950s. By teaming up with other radio astronomy groups around Europe we are able to make groundbreaking discoveries such as this one and continue the success we have in solar radio astronomy in France. It also further strengthens scientific collaboration between France and Ireland, which I hope continues in the future."

Dr Carley previously worked at the Paris Observatory, funded by a fellowship awarded by the Irish Research Council and the European Commission. He continues to work closely with his French colleagues today, and hopes to soon study the same phenomena using both French instruments and newly built, state-of-the-art equipment in Ireland.

Dr Carley added: "The collaboration with French scientists is ongoing and we're already making progress with newly built radio telescopes in Ireland, such as the Irish Low Frequency Array (I-LOFAR). I-LOFAR can be used to uncover new plasma physics on the Sun in far greater detail than before, teaching us about how matter behaves in both plasmas on the Sun, here on Earth and throughout the Universe in general."

The work was funded by the Irish Research Council.


Contacts and sources:
Thomas Deane
Trinity College Dublin

Citation: Loss-cone instability modulation due to a magnetohydrodynamic sausage mode oscillation in the solar corona. Eoin P. Carley, Laura A. Hayes, Sophie A. Murray, Diana E. Morosan, Warren Shelley, Nicole Vilmer & Peter T. Gallagher. Nature Communications 10, Article number: 2276 (2019). https://doi.org/10.1038/s41467-019-10204-1





Geometry of an Electron Shown in an Artificial Atom


Physicists at the University of Basel are able to show for the first time how a single electron looks in an artificial atom. A newly developed method enables them to show the probability of an electron being present in a space. This allows improved control of electron spins, which could serve as the smallest information unit in a future quantum computer. The experiments were published in Physical Review Letters and the related theory in Physical Review B.

An electron is trapped in a quantum dot, which is formed in a two-dimensional gas in a semiconductor wafer. However, the electron moves within the space and, with different probabilities corresponding to a wave function, remains in certain locations within its confinement (red ellipses). Using electric fields applied via the gold gates, the geometry of this wave function can be changed.

Image: University of Basel, Department of Physics


The spin of an electron is a promising candidate for use as the smallest information unit (qubit) of a quantum computer. Controlling and switching this spin or coupling it with other spins is a challenge on which numerous research groups worldwide are working. The stability of a single spin and the entanglement of various spins depend, among other things, on the geometry of the electrons – which previously had been impossible to determine experimentally.

Only possible in artificial atoms

Scientists in the teams headed by professors Dominik Zumbühl and Daniel Loss from the Department of Physics and the Swiss Nanoscience Institute at the University of Basel have now developed a method by which they can spatially determine the geometry of electrons in quantum dots.

A quantum dot is a potential trap which allows free electrons to be confined in an area about 1,000 times larger than a natural atom. Because the trapped electrons behave similarly to electrons bound to an atom, quantum dots are also known as “artificial atoms”.

The electron is held in the quantum dot by electric fields. However, it moves within the space and, with different probabilities corresponding to a wave function, remains in certain locations within its confinement.

Charge distribution sheds light

The scientists use spectroscopic measurements to determine the energy levels in the quantum dot and study the behavior of these levels in magnetic fields of varying strength and orientation. Based on their theoretical model, it is possible to determine the electron’s probability density and thus its wave function with a precision on the sub-nanometer scale.
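The quantity being mapped here is the electron's probability density. As a toy illustration only (not the group's actual model), the ground state of an electron in an anisotropic harmonic trap, a common first approximation for a gate-defined quantum dot, has a Gaussian density whose widths are set by the confinement strengths; the GaAs effective mass and the trap frequencies below are assumed values:

```python
import numpy as np

HBAR = 1.0545718e-34          # reduced Planck constant, J*s
M_EFF = 0.067 * 9.109e-31     # electron effective mass in GaAs, kg (assumed material)

def ground_state_density(x, y, omega_x, omega_y):
    """|psi|^2 of the anisotropic 2D harmonic-oscillator ground state.
    The characteristic lengths l = sqrt(hbar / (m * omega)) shrink as the
    confinement stiffens, squeezing the electron cloud along that axis."""
    lx = np.sqrt(HBAR / (M_EFF * omega_x))
    ly = np.sqrt(HBAR / (M_EFF * omega_y))
    norm = 1.0 / (np.pi * lx * ly)   # normalizes the density to integrate to 1
    return norm * np.exp(-(x / lx) ** 2 - (y / ly) ** 2)

# Stiffer confinement along y (larger omega_y) gives an elliptical cloud,
# elongated along x -- the kind of geometry change the gate voltages control.
x = np.linspace(-200e-9, 200e-9, 401)
X, Y = np.meshgrid(x, x)
density = ground_state_density(X, Y, omega_x=1e12, omega_y=2e12)
```

Changing the gate-applied fields in the experiment corresponds, in this sketch, to changing `omega_x` and `omega_y`, which reshapes the ellipse of high probability density.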

“To put it simply, we can use this method to show what an electron looks like for the first time,” explains Loss.

Better understanding and optimization

The researchers, who work closely with colleagues in Japan, Slovakia and the US, thus gain a better understanding of the correlation between the geometry of electrons and the electron spin, which should be stable for as long as possible and quickly switchable for use as a qubit.

“We are able to not only map the shape and orientation of the electron, but also control the wave function according to the configuration of the applied electric fields. This gives us the opportunity to optimize control of the spins in a very targeted manner,” says Zumbühl.

The spatial orientation of the electrons also plays a role in the entanglement of several spins. Similarly to the binding of two atoms to a molecule, the wave functions of two electrons must lie on one plane for successful entanglement.

With the aid of the developed method, numerous earlier studies can be better understood and the performance of spin qubits can be further optimized in the future.



Contacts and sources:
Prof. Dr. Dominik Zumbühl
University of Basel


Citation:
Spectroscopy of quantum dot orbitals with in-plane magnetic fields
Leon C. Camenzind, Liuqi Yu, Peter Stano, Jeramy D. Zimmerman, Arthur C. Gossard, Daniel Loss, and Dominik M. Zumbühl
Physical Review Letters (2019), doi: 10.1103/PhysRevLett.122.207701

Orbital effects of a strong in-plane magnetic field on a gate-defined quantum dot
Peter Stano, Chen-Hsuan Hsu, Leon C. Camenzind, Liuqi Yu, Dominik Zumbühl, and Daniel Loss
Physical Review B (2019), doi: 10.1103/PhysRevB.99.085308





Did Leonardo da Vinci Have ADHD?

The best explanation for Leonardo da Vinci's inability to finish his works is that the great artist may have had Attention Deficit Hyperactivity Disorder (ADHD), suggests King's College London Professor Marco Catani. ADHD is a brain disorder marked by an ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development.

Leonardo da Vinci, presumed self-portrait circa 1512
Credit: Wikimedia Commons

Leonardo da Vinci produced some of the world’s most iconic art, but historical accounts of his work practices and behavior show that he struggled to complete projects. Drawing on these accounts, Professor Catani lays out the evidence supporting his hypothesis that, as well as explaining his chronic procrastination, ADHD could have been a factor in Leonardo’s extraordinary creativity and achievements across the arts and sciences.

Professor Catani, from the Institute of Psychiatry, Psychology & Neuroscience at King’s, says: ‘While impossible to make a post-mortem diagnosis for someone who lived 500 years ago, I am confident that ADHD is the most convincing and scientifically plausible hypothesis to explain Leonardo’s difficulty in finishing his works. Historical records show Leonardo spent excessive time planning projects but lacked perseverance. ADHD could explain aspects of Leonardo’s temperament and his strange mercurial genius.’

ADHD is a behavioral disorder characterized by continuous procrastination, the inability to complete tasks, mind-wandering and a restlessness of the body and mind. While most commonly recognised in childhood, ADHD is increasingly being diagnosed among adults including university students and people with successful careers.

Leonardo’s difficulties with sticking to tasks were pervasive from childhood. Accounts from biographers and contemporaries show Leonardo was constantly on the go, often jumping from task to task. Like many of those suffering with ADHD, he slept very little and worked continuously night and day by alternating rapid cycles of short naps and time awake.

Leonardo da Vinci, possible self-portrait, c. 1513
Credit: Wikimedia Commons

Alongside reports of erratic behaviour and incomplete projects from fellow artists and patrons, including Pope Leo X, there is indirect evidence to suggest that Leonardo’s brain was organised differently from average. He was left-handed, was likely dyslexic and probably had language dominance in the right-hand side of his brain, all of which are common among people with ADHD.

Perhaps the most distinctive and yet disruptive side of Leonardo’s mind was his voracious curiosity, which both propelled his creativity and also distracted him. Professor Catani suggests ADHD can have positive effects, for example mind-wandering can fuel creativity and originality. However, while beneficial in the initial stages of the creative process, the same traits can be a hindrance when interest shifts to something else.

Design for a helicopter
Credit: Wikimedia Commons

Professor Catani, who specialises in treating neurodevelopmental conditions like autism and ADHD, says: ‘There is a prevailing misconception that ADHD is typical of misbehaving children with low intelligence, destined for a troubled life. On the contrary, most of the adults I see in my clinic report having been bright, intuitive children but develop symptoms of anxiety and depression later in life for having failed to achieve their potential.’

‘It is incredible that Leonardo considered himself as someone who had failed in life. I hope that the case of Leonardo shows that ADHD is not linked to low IQ or lack of creativity but rather the difficulty of capitalising on natural talents. I hope that Leonardo’s legacy can help us to change some of the stigma around ADHD.’

The article, published in the journal BRAIN, is available online.


Contacts and sources:
King's College London

Citation: Leonardo da Vinci: a genius driven to distraction.
Marco Catani, Paolo Mazzarello. Brain, 2019; DOI: 10.1093/brain/awz131



Soft, Social Robot Brings Coziness to Home Robotics

Soft and cuddly social robots are coming. 

A few years ago, when social robots began appearing in stores and homes, Guy Hoffman wondered why they all looked so much alike.

“I noticed a lot of them had a very similar kind of feature – white and plasticky, designed like consumer electronic devices,” said Hoffman, assistant professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering. “Especially when these social robots were marketed to be part of our families, I thought it would be strange to all have identical family members.”

He envisioned robots built from warmer, homier materials, such as wood and wool; he also imagined robots that could be customized by their owners, so each would be unique. A friend gave him crocheted models of his robots and he thought: What if the robot itself was crocheted? So he learned to crochet.

Project “Blossom” is a robot that is soft inside and out, built using traditional craft materials like wool and wood. We wanted to bring warm materials back to home robotics, instead of more plastic, glass and metal. You can customize Blossom by knitting new exteriors and attaching different crafted parts to make each robot unique.

Credit: Human-Robot Collaboration and Companionship Lab

Then he watched another friend crochet part of the robot far faster than he could. “That made me think people who are not engineers could also participate in making a robot,” he said.

These ideas led Hoffman to create Blossom – a simple, expressive, inexpensive robot platform that could be made from a kit and creatively outfitted with handcrafted materials.

“We wanted to empower people to build their own robot, but without sacrificing how expressive it is,” said Hoffman, senior author of “Blossom: A Handcrafted Open-Source Robot,” published in March in the Association for Computing Machinery Transactions on Human-Robot Interaction. “Also, it’s nice to have every robot be a little bit different. If you knit your robot, every family would have their own robot that would be unique to them.”

Blossom robots can be constructed by users from handcrafted materials, making each one a little bit different.
Credit: Michael Suguitan

Blossom’s mechanical design – developed with Michael Suguitan, a doctoral student in Hoffman’s lab and first author of the paper – is centered on a floating “head” platform using strings and cables for movement, making its gestures more flexible and organic than those of a robot composed of rigid parts.

Blossom can be controlled by moving a smartphone using an open-source puppeteering app; the robot’s movements resemble bouncing, stretching and dancing. The cost of the parts needed to assemble a Blossom is less than $250, and researchers are currently working on a Blossom kit made entirely of cardboard, which would be even cheaper.
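As a rough illustration of smartphone puppeteering, the sketch below maps a phone's tilt to the lengths of the cables that pose a Blossom-style head. The function name, geometry and gains are invented for illustration; this is not the actual app's code.

```python
# Hypothetical sketch: map phone orientation (roll, pitch, yaw in radians)
# to four head-cable lengths (meters) plus a base rotation angle.
def orientation_to_cable_lengths(roll, pitch, yaw, rest_length=0.10, gain=0.03):
    """Tilting the head toward a cable shortens it and lengthens the
    opposite one; yaw is handled by a separate base rotation motor."""
    # Clamp tilts to a safe range so the head cannot over-travel.
    roll = max(-0.5, min(0.5, roll))
    pitch = max(-0.5, min(0.5, pitch))
    front = rest_length - gain * pitch   # pitching forward shortens the front cable
    back = rest_length + gain * pitch
    left = rest_length - gain * roll
    right = rest_length + gain * roll
    return {"front": front, "back": back, "left": left, "right": right,
            "base": yaw}  # yaw passed straight through to the rotation motor

lengths = orientation_to_cable_lengths(roll=0.2, pitch=-0.3, yaw=1.0)
```

Driving cables rather than rigid joints is what gives the head its bouncy, organic motion: small phone tilts become smooth, coupled changes in all four cable lengths at once.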

Partly because of its simplicity, Blossom has a variety of potential uses, Hoffman said. Human-robot interaction researchers who aren’t engineers could build their own from a kit to use in studies. Because of the ease of interacting with the robot and the hands-on experience of helping to build it, it could help teach children about robotics.

In a case study, children ages 4-8 had a chance to control and make accessories for Blossom at a science fair. Some children created accessories, such as appendages or jewelry, while others controlled the robot so the new items could be attached, illustrating how Blossom could inspire collaboration.

“The children also had additional expectations of the robot’s movement, such as making it locomote and jump. These expectations were emphasized by the fact that several children chose to make appendages such as legs and wings,” the authors wrote.

In the coming months, Blossom will be used by the Upper Grand school district in Ontario, Canada, to help teach math to fourth-graders, Hoffman said.

He said his team also has been working on an algorithm to make Blossom react to YouTube videos – performing a certain dance in response to a certain song, for instance, building on previous research showing that a robot’s response to listening to songs can influence a human’s reaction. This might be particularly useful in modeling behavior for children with autism, Hoffman said.

“It’s meant to be a flexible kit that is also very low cost. Especially if we can make it out of cardboard, you could make it very inexpensively,” he said. “Because of computation becoming so powerful, it could be a really open-ended way for people to do whatever they want with robotics.”

The work was partly supported by a grant from Google Creative Robotics.





Contacts and sources:
Melanie Lefkowitz
Cornell University


Citation: Blossom: A Handcrafted Open-Source Robot.
Michael Suguitan, Guy Hoffman. ACM Transactions on Human-Robot Interaction, 2019; 8 (1): 1 DOI: 10.1145/3310356




Complex Bacterial Ecosystem at Work on the International Space Station


A new genomic approach provides a glimpse into the diverse bacterial ecosystem aboard the International Space Station. Scientists at Université de Montréal and McGill University have pioneered and tested the methodology, which reveals a complex bacterial ecosystem at work on the station.

NASA astronaut Peggy Whitson walks in space outside the Destiny and Harmony modules of the International Space Station.
Credit:  University of Montreal.

Their study is published today in Environmental Microbiology.

Until now, relatively little was known about the different types of microbes found on the space station. The new approach enables researchers to identify and map different species inside the ISS, which will ultimately help safeguard astronauts’ health and be key to future long-term space travel.

It will also have applications in the realms of environmental management and health care.

“The new methodology provides us spectacular snapshots of the bacterial world in space and the possibilities of applying this method to explore new microbiome environments are really exciting,” said Nicholas Brereton, a researcher at UdeM's Institut de recherche en biologie végétale.



Credit University of Montreal 

The challenge of maintaining cleanliness within space environments was first documented on the Russian Mir space station, where conditions eventually deteriorated so much that mold became widespread. On the ISS, space agencies have been trying to limit microbial growth ever since the station was first launched in 1998.

Resupply missions bring new bacteria

Strict cleaning and decontamination protocols are now in place to maintain a healthy ISS environment; in orbit, crew members regularly clean and vacuum the space station’s living and working quarters. But as resupply missions arrive carrying a range of material including food, lab equipment, live plants and animals, new bacteria species are continually being added.

Combined with the bacteria the crew members themselves carry, and because no windows can be opened, the build-up of bacteria inside the cramped quarters can be significant.

"Scientists have a well-documented understanding of broad bacterial families on the ISS, but now we’ve discovered a more diverse bacterial ecosystem that we ever expected," said Emmanuel Gonzalez, a metagenomic specialist at McGill. "It’s an exciting step forward in understanding the biosphere that will accompany humans into extra-terrestrial habitats."




Credit:  University of Montreal.

Although the microbial characterization method was piloted in space, its applications will be far broader, say the scientists behind the technology. Researchers can replicate the approach to address many other challenges and environments, including oceans and soils. It is already being applied to human diseases and microbiomes.




Contacts and sources:
Jeff Heinrich
University of Montreal
Citation: ANCHOR: a 16S rRNA gene amplicon pipeline for microbial analysis of multiple environmental samples.
Emmanuel Gonzalez, Frederic E. Pitre, Nicholas J. B. Brereton. Environmental Microbiology, 2019; DOI: 10.1111/1462-2920.14632




Oldest Meteorite Collection on Earth Found in One of the Driest Places

Meteorite with thin, dark, fusion crust in the Atacama Desert.
 Photo by Jérôme Gattacceca (CEREGE).

Earth is bombarded every year by rocky debris, but the rate of incoming meteorites can change over time. Finding enough meteorites scattered on the planet’s surface can be challenging, especially if you are interested in reconstructing how frequently they land. Now, researchers have uncovered a wealth of well-preserved meteorites that allowed them to reconstruct the rate of falling meteorites over the past two million years.

“Our purpose in this work was to see how the meteorite flux to Earth changed over large timescales—millions of years, consistent with astronomical phenomena,” says Alexis Drouard, Aix-Marseille Université, lead author of the new paper in Geology.

The L6 ordinary chondrite El Médano 128, a 556 g meteorite recovered in the Atacama Desert.

Photo courtesy CCJ-CNRS, P. Groscaux.


To recover a meteorite record for millions of years, the researchers headed to the Atacama Desert. Drouard says they needed a study site that would preserve a wide range of terrestrial ages where the meteorites could persist over long time scales.

While Antarctica and hot deserts both host a large percentage of meteorites on Earth (about 64% and 30%, respectively), Drouard says, “Meteorites found in hot deserts or Antarctica are rarely older than half a million years.” He adds that meteorites naturally disappear because of weathering processes (e.g., erosion by wind), but because these locations themselves are young, the meteorites found on the surface are also young.

“The Atacama Desert in Chile is very old ([over] 10 million years),” says Drouard. “It also hosts the densest collection of meteorites in the world.”

Meteorite recovery campaign in the Atacama Desert (Nov. 2017). 
Photo by Katherine Joy (University of Manchester).


The team collected 388 meteorites and focused on 54 stony samples from the El Médano area in the Atacama Desert. Using cosmogenic age dating, they found a mean age of 710,000 years. In addition, 30% of the samples were older than one million years, and two samples were older than two million. All 54 meteorites were ordinary chondrites, stony meteorites that contain grainy minerals, but they spanned three different types.

“We were expecting more ‘young’ meteorites than ‘old’ ones (as the old ones are lost to weathering),” says Drouard. “But it turned out that the age distribution is perfectly explained by a constant accumulation of meteorites for millions of years.” The authors note that this is the oldest meteorite collection on Earth’s surface.
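The age statistics above can be illustrated with a toy model (an assumption made here for illustration, not the authors' actual method): meteorites fall at a constant rate over the two-million-year window, and each one's chance of surviving weathering decays with its age. Under those assumptions, a mean age well below half the window and a sizeable tail of million-year-old survivors emerge naturally.

```python
# Toy model: constant infall over 2 Myr, with an assumed exponential
# survival probability against weathering. All parameters are illustrative.
import math
import random

random.seed(42)
T = 2_000_000              # observation window, years
mean_survival = 1_000_000  # assumed e-folding survival time, years

ages = []
for _ in range(100_000):
    age = random.uniform(0, T)                       # constant flux -> uniform fall times
    if random.random() < math.exp(-age / mean_survival):
        ages.append(age)                             # this meteorite survived weathering

mean_age = sum(ages) / len(ages)                     # comes out near 700,000 years
frac_older_1my = sum(a > 1_000_000 for a in ages) / len(ages)  # roughly a quarter to a third
```

With these (made-up) numbers the surviving population has a mean age and an old-age fraction in the same ballpark as the El Médano samples, showing how a constant flux plus steady weathering losses can reproduce such a distribution.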

Large meteorite found in the Atacama Desert. 
Photo by Jérôme Gattacceca (CEREGE).

Drouard says this terrestrial crop of meteorites in the Atacama can foster more research on meteorite fluxes over large time scales. “We found that the meteorite flux seems to have remained constant over this [two-million-year] period in numbers (222 meteorites larger than 10 g per square kilometer per million years), but not in composition,” he says. Drouard adds that the team plans to expand their work, measuring more samples and narrowing in on how much time the meteorites spent in space. “This will tell us about the journey of these meteorites from their parent body to Earth’s surface.”



Contacts and sources:
Kea Giles
Geological Society of America
Citation: The meteorite flux of the past 2 m.y. recorded in the Atacama Desert
Alexis Drouard; J. Gattacceca; A. Hutzler; P. Rochette; R. Braucher; D. Bourlès; ASTER Team; M. Gounelle; A. Morbidelli; V. Debaille; M. Van Ginneken; M. Valenzuela; Y. Quesnel; R. Martinez.  https://pubs.geoscienceworld.org/gsa/geology/article/570818/the-meteorite-flux-of-the-past-2-m-y-recorded-in.




Machines Getting "Spidey Senses"



Researchers are building animal-inspired sensors into the shells of aircraft and cars, giving them "spidey senses."

Soon drones and self-driving cars may have the tingling “spidey senses” of Spider-Man.

They might actually detect and avoid objects better, says Andres Arrieta, an assistant professor of mechanical engineering at Purdue University, because they would process sensory information faster.

Illustration by Taylor Callery

Better sensing capabilities would make it possible for drones to navigate in dangerous environments and for cars to prevent accidents caused by human error. Current state-of-the-art sensor technology doesn’t process data fast enough – but nature does.

And researchers wouldn’t have to create a radioactive spider to give autonomous machines superhero sensing abilities.

Instead, Purdue researchers have built sensors inspired by spiders, bats, birds and other animals, whose actual spidey senses are nerve endings linked to special neurons called mechanoreceptors.

In nature, ‘spidey-senses’ are activated by a force associated with an approaching object. Researchers are giving autonomous machines the same ability through sensors that change shape when prompted by a predetermined level of force.

Credit: ETH Zürich images/Hortense Le Ferrand

The nerve endings – mechanosensors – detect and process only the information essential to an animal’s survival. They come in the form of hair, cilia or feathers. Mechanosensing is ubiquitous in natural systems: from the skin ridges of our fingertips to the microscopic ion channels in cells, mechanosensors allow organisms to probe their environment and gather the information needed for processing, decision-making and actuation.

“There is already an explosion of data that intelligent systems can collect – and this rate is increasing faster than what conventional computing would be able to process,” said Arrieta, whose lab applies principles of nature to the design of structures, ranging from robots to aircraft wings.

“Nature doesn’t have to collect every piece of data; it filters out what it needs,” he said.

Many biological mechanosensors filter data – the information they receive from an environment – according to a threshold, such as changes in pressure or temperature.

A spider’s hairy mechanosensors, for example, are located on its legs. When a spider’s web vibrates at a frequency associated with prey or a mate, the mechanosensors detect it, generating a reflex in the spider that then reacts very quickly. The mechanosensors wouldn’t detect a lower frequency, such as that of dust on the web, because it’s unimportant to the spider’s survival.

The idea would be to integrate similar sensors straight into the shell of an autonomous machine, such as an airplane wing or the body of a car. The researchers demonstrated in a paper published in ACS Nano that engineered mechanosensors inspired by the hairs of spiders could be customized to detect predetermined forces. In real life, these forces would be associated with a certain object that an autonomous machine needs to avoid.

But the sensors they developed don’t just sense and filter at a very fast rate – they also compute, and without needing a power supply.

“There’s no distinction between hardware and software in nature; it’s all interconnected,” Arrieta said. “A sensor is meant to interpret data, as well as collect and filter it.”

In nature, once a particular level of force activates the mechanoreceptors associated with the hairy mechanosensor, these mechanoreceptors compute information by switching from one state to another.

Purdue researchers, in collaboration with Nanyang Technological University in Singapore and ETH Zürich, designed their sensors to do the same, and to use these on/off states to interpret signals. An intelligent machine would then react according to what these sensors compute.

These artificial mechanosensors are capable of sensing, filtering and computing very quickly because they are stiff, Arrieta said. The sensor material is designed to rapidly change shape when activated by an external force. Changing shape makes conductive particles within the material move closer to each other, which then allows electricity to flow through the sensor and carry a signal. This signal informs how the autonomous system should respond.
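A minimal software sketch of that snap-through idea, with hypothetical names and values (the real sensors are analog composite structures, not code): the sensor stays electrically open below a preset force and latches into a conductive "on" state once the threshold is crossed, so only significant events ever produce a signal downstream.

```python
# Hypothetical sketch of threshold-based mechanosensing with a latching
# (bistable, snap-through) on state. Threshold and force values are made up.
class SnapThroughSensor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.state = 0  # 0 = open circuit, 1 = conductive (snapped through)

    def apply_force(self, force):
        # Snap-through is one-way until mechanically reset, like a
        # bistable composite shell: crossing the threshold latches it on.
        if force >= self.threshold:
            self.state = 1
        return self.state

sensor = SnapThroughSensor(threshold=5.0)
# Low forces (dust-like events) are filtered out; the 6.1 event latches the sensor.
readings = [sensor.apply_force(f) for f in [0.5, 2.0, 6.1, 1.0]]
```

The point of the latching behavior is that filtering and a one-bit computation happen in the structure itself, before any power-hungry electronics are involved.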

“With the help of machine learning algorithms, we could train these sensors to function autonomously with minimum energy consumption,” Arrieta said. “There are also no barriers to manufacturing these sensors to be in a variety of sizes.”

This work is financially supported by ETH Zürich and Purdue University, and aligns with Purdue's Giant Leaps celebration, acknowledging the university’s global advancements made in AI, algorithms and automation as part of Purdue’s 150th anniversary. This is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.



Contacts and sources:
Kayla Wiles
Purdue University


Citation: Filtered Mechanosensing Using Snapping Composites with Embedded Mechano-Electrical Transduction.
Hortense Le Ferrand, André R. Studart, Andres F. Arrieta. ACS Nano, 2019; 13 (4): 4752 DOI: 10.1021/acsnano.9b01095




On Mars, Sands Shift to a Different Drum

In the most detailed analysis of how sands move around on Mars, a team of planetary scientists led by the University of Arizona (UA) found that processes not involved in controlling sand movement on Earth play major roles on Mars.

The retreat of Mars' polar cap of frozen carbon dioxide during the spring and summer generates winds that drive the largest movements of sand dunes observed on the red planet.
Credit: NASA/JPL/University of Arizona/USGS


Wind has shaped the face of Mars for millennia, but its exact role in piling up sand dunes, carving out rocky escarpments or filling impact craters has eluded scientists until now.

In the most detailed analysis of how sands move around on Mars, a team of planetary scientists led by Matthew Chojnacki at the University of Arizona Lunar and Planetary Lab set out to uncover the conditions that govern sand movement on Mars and how they differ from those on Earth.

The results, published in the current issue of the journal Geology, reveal that processes not involved in controlling sand movement on Earth play major roles on Mars, especially large-scale features on the landscape and differences in landform surface temperature.

"Because there are large sand dunes found in distinct regions of Mars, those are good places to look for changes," said Chojnacki, associate staff scientist at the UA and lead author of the paper, "Boundary conditions controls on the high-sand-flux regions of Mars." "If you don't have sand moving around, that means the surface is just sitting there, getting bombarded by ultraviolet and gamma radiation that would destroy complex molecules and any ancient Martian biosignatures."

Compared to Earth's atmosphere, the Martian atmosphere is so thin its average pressure on the surface is a mere 0.6 percent of our planet's air pressure at sea level. Consequently, sediments on the Martian surface move more slowly than their Earthly counterparts.

The Martian dunes observed in this study ranged from 6 to 400 feet tall and were found to creep along at a fairly uniform average speed of two feet per Earth year. For comparison, some of the fastest sand dunes on Earth, such as those in North Africa, migrate 100 feet per year.

"On Mars, there simply is not enough wind energy to move a substantial amount of material around on the surface," Chojnacki said. "It might take two years on Mars to see the same movement you'd typically see in a season on Earth."

Planetary geologists had been debating whether the sand dunes on the red planet were relics from a distant past, when the atmosphere was much thicker, or whether drifting sands still reshape the planet's face today, and if so, to what degree.

"We wanted to know: Is the movement of sand uniform across the planet, or is it enhanced in some regions over others?" Chojnacki said. "We measured the rate and volume at which dunes are moving on Mars."

The team used images taken by the HiRISE camera aboard NASA's Mars Reconnaissance Orbiter, which has been surveying Earth's next-door neighbor since 2006. HiRISE, which stands for High Resolution Imaging Science Experiment, is led by the UA's Lunar and Planetary Laboratory and has captured about three percent of the Martian surface in stunning detail.

The researchers mapped sand volumes, dune migration rates and heights for 54 dune fields, encompassing 495 individual dunes.

"This work could not have been done without HiRISE," said Chojnacki, who is a member of the HiRISE team. "The data did not come just from the images, but was derived through our photogrammetry lab that I co-manage with Sarah Sutton. We have a small army of undergraduate students who work part time and build these digital terrain models that provide fine-scale topography."

Across Mars, the survey found active, wind-shaped beds of sand and dust in structural fossae – craters, canyons, rifts and cracks – as well as volcanic remnants, polar basins and plains surrounding craters.

In the study's most surprising finding, the researchers discovered that the largest movements of sand in terms of volume and speed are restricted to three distinct regions: Syrtis Major, a dark spot larger than Arizona that sits directly west of the vast Isidis basin; Hellespontus Montes, a mountain range about two-thirds the length of the Cascades; and North Polar Erg, a sea of sand lapping around the north polar ice cap. All three areas are set apart from other parts of Mars by conditions not known to affect terrestrial dunes: stark transitions in topography and surface temperatures.

"Those are not factors you would find in terrestrial geology," Chojnacki said. "On Earth, the factors at work are different from Mars. For example, ground water near the surface or plants growing in the area retard dune sand movement."

On a smaller scale, basins filled with bright dust were found to have higher rates of sand movement, as well.

"A bright basin reflects the sunlight and heats up the air above much more quickly than the surrounding areas, where the ground is dark," Chojnacki said, "so the air will move up the basin toward the basin rim, driving the wind, and with it, the sand."

Understanding how sand and sediment move on Mars may help scientists plan future missions to regions that cannot easily be monitored and has implications for studying ancient, potentially habitable environments.

Funded by NASA and HiRISE, the study is co-authored by Maria Banks at NASA's Goddard Space Flight Center in Greenbelt, Maryland, Lori Fenton at the Carl Sagan Center at the SETI Institute in Mountain View, California, and Anna Urso at LPL.



Contacts and sources:
Daniel Stolte
University of Arizona

Citation: Boundary conditions controls on the high-sand-flux regions of Mars
Matthew Chojnacki, Maria E. Banks, Lori K. Fenton, Anna C. Urso. Geology, 2019; 47 (5): 427 DOI: 10.1130/G45793.1



A Driverless Car That Reasons Like a Human


Aiming to bring more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. The autonomous control system “learns” to use the maps and image data to follow routes it has never seen.

To bring more human-like reasoning to autonomous vehicle navigation, MIT researchers have created a system that enables driverless cars to check a simple map and use visual data to follow routes in new, complex environments.
Image: Chelsea Turner

Human drivers are exceptionally good at navigating roads they haven’t driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time consuming. The systems also rely on complex maps — usually generated by 3-D scans — which are computationally intensive to generate and process on the fly.

In a paper being presented at this week’s International Conference on Robotics and Automation, MIT researchers describe an autonomous control system that “learns” the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.

Similarly to human drivers, the system also detects any mismatches between its map and features of the road. This helps the system determine if its position, sensors, or mapping are incorrect, in order to correct the car’s course.

To train the system initially, a human operator controlled an automated Toyota Prius — equipped with several cameras and a basic GPS navigation system — to collect data from local suburban streets including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.

“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”

Joining Rus and Amini on the paper are Guy Rosman, a researcher at the Toyota Research Institute, and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Credit: Alexander Amini

Point-to-point navigation

Traditional navigation systems process data from sensors through multiple modules customized for tasks such as localization, mapping, object detection, motion planning, and steering control. For years, Rus’s group has been developing “end-to-end” navigation systems, which process inputted sensory data and output steering commands, without a need for any specialized modules.

Until now, however, these models were strictly designed to safely follow the road, without any real destination in mind. In the new paper, the researchers advanced their end-to-end system to drive from goal to destination, in a previously unseen environment. To do so, the researchers trained their system to predict a full probability distribution over all possible steering commands at any given instant while driving.

The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering wheel rotations to road curvatures it observes through cameras and an inputted map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.

“Initially, at a T-shaped intersection, there are many different directions the car could turn,” Rus says. “The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.”

What does the map say?

In testing, the researchers fed the system a map with a randomly chosen route. When driving, the system extracts visual features from the camera, which enables it to predict road structures. For instance, it identifies a distant stop sign or line breaks on the side of the road as signs of an upcoming intersection. At each moment, it uses its predicted probability distribution of steering commands to choose the most likely one to follow its route.
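The selection step described above can be sketched as follows. The steering bins, raw scores and route-masking rule are illustrative assumptions, not the paper's actual code: the network outputs scores over discretized steering commands, a softmax turns them into a probability distribution, and the command most consistent with the planned route is chosen.

```python
# Hedged sketch: pick the most likely steering command from a predicted
# distribution, masked by what the planned route allows. Values are made up.
import math

def softmax(scores):
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Discretized steering-wheel angles (degrees): left ... straight ... right
bins = [-45, -20, 0, 20, 45]
scores = [0.1, 2.3, 0.2, 1.7, 0.05]       # raw network outputs (illustrative)
probs = softmax(scores)

# Route says "turn left at this intersection": zero out rightward bins.
feasible = [p if angle <= 0 else 0.0 for p, angle in zip(probs, bins)]
command = bins[max(range(len(bins)), key=lambda i: feasible[i])]
```

Predicting a full distribution rather than a single angle is what lets the route mask do its work: at a T-intersection the network can keep both "left" and "right" plausible, and the map decides between them.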

Importantly, the researchers say, the system uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create massive, complex maps that take roughly 4,000 gigabytes (4 terabytes) of data to store just the city of San Francisco. For every new destination, the car must create new maps, which amounts to tons of data processing. The maps used by the researchers’ system, however, capture the entire world in just 40 gigabytes of data.

During autonomous driving, the system also continuously matches its visual data to the map data and notes any mismatches. Doing so helps the autonomous vehicle better determine where it is located on the road. And it ensures the car stays on the safest path if it’s being fed contradictory input information: If, say, the car is cruising on a straight road with no turns, and the GPS indicates the car must turn right, the car will know to keep driving straight or to stop.
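A toy, rule-based sketch can illustrate this fallback behavior. Note that this is only an illustration of the logic described above, not the paper's variational formulation; the function, threshold, and command indices are all hypothetical:

```python
import numpy as np

def choose_command(visual_probs, route_command, feasible, threshold=0.05):
    """Pick a steering command by reconciling the route instruction with
    what the camera-based distribution says is actually possible.

    visual_probs  : distribution over commands predicted from camera input
    route_command : index of the command the (possibly noisy) route suggests
    feasible      : boolean mask of commands the visual model considers possible
    """
    # If the route's suggestion is visually plausible, follow it.
    if feasible[route_command] and visual_probs[route_command] > threshold:
        return route_command
    # Otherwise ignore the contradictory instruction and take the most
    # likely feasible command instead (e.g. keep driving straight).
    masked = np.where(feasible, visual_probs, 0.0)
    return int(np.argmax(masked))

# Commands: 0 = left, 1 = straight, 2 = right.
# A straight road with no turns: the cameras see no intersection...
visual_probs = np.array([0.02, 0.95, 0.03])
feasible = np.array([False, True, False])
# ...but a faulty GPS says "turn right" (index 2). The car keeps going straight.
print(choose_command(visual_probs, route_command=2, feasible=feasible))
```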

“In the real world, sensors do fail,” Amini says. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”


Contacts and sources:
Rob Matheson.
Massachusetts Institute of Technology (MIT)


Citation: Variational End-to-End Navigation and Localization. Alexander Amini, Guy Rosman, Sertac Karaman, and Daniela Rus. http://arxiv.org/abs/1811.10119




Global Temperature Change Attributable to External Factors Says New Study

Human activity and other external factors are responsible for the rise in global temperature, researchers at the University of Oxford confirm.

While this has long been the consensus of the scientific community, uncertainty remained around how much natural ocean cycles might be influencing global warming over the course of multiple decades. The answer we can now give is: very little to none.

In a new study, published in the Journal of Climate, researchers at the Environmental Change Institute looked at observed ocean and land temperature data since 1850. Apart from human-induced factors such as greenhouse gas concentrations, other occurrences such as volcanic eruptions, solar activity and air pollution peaks were included in the analysis. The findings demonstrated that slow-acting ocean cycles do not explain the long-term changes in global temperature, which includes several decades of accelerated or slowed warming.

‘We can now say with confidence that human factors like greenhouse gas emissions and particulate pollution, along with year-to-year changes brought on by natural phenomena like volcanic eruptions or El Niño, are sufficient to explain virtually all of the long-term changes in temperature,’ says study lead author Dr Karsten Haustein. ‘The idea that oceans could have been driving the climate in a colder or warmer direction for multiple decades in the past, and therefore will do so in the future, is unlikely to be correct.’
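The general approach of this kind of attribution work, regressing the observed temperature record onto known external drivers and asking how much variance is left over, can be illustrated with synthetic data. Everything below is a toy illustration with made-up forcing series, not the study's data or statistical method:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2018)
n = len(years)

# Toy "forcing" series (illustrative shapes, not real data):
ghg = np.linspace(0.0, 1.0, n) ** 2                      # accelerating greenhouse forcing
volcanic = np.zeros(n)
volcanic[[33, 100, 141]] = -1.0                          # brief eruption cooling spikes
enso = rng.normal(0.0, 0.1, n)                           # year-to-year natural variability

# Synthetic "observed" temperature built from those drivers plus noise.
temp = 0.9 * ghg + 0.3 * volcanic + enso + rng.normal(0.0, 0.05, n)

# Ordinary least squares: how much of the record do the known drivers explain?
X = np.column_stack([ghg, volcanic, enso, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((temp - fitted) ** 2) / np.sum((temp - temp.mean()) ** 2)
print(f"variance explained by external drivers: {r2:.2f}")
```

When the drivers really do account for the record, as the study concludes for the real data, the unexplained residual is small, leaving little room for slow ocean cycles.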

A spiral graph represents global temperature change (1850 to 2017) 
Credit: Ed Hawkins / http://www.climate-lab-book.ac.uk/spirals/  Data from the HadCRUT4 dataset http://www.metoffice.gov.uk/hadobs/hadcrut4/

'Unfortunately a number of previous studies have compared flawed observations with flawed modelling results to claim naturally occurring ocean cycles have played a large role in changes in the global temperature record,' says Peter Jacobs, co-author on the study and PhD student at George Mason University in the USA.  'We show here that in fact there's little role for such cycles in explaining temperature changes when more accurate representations of both the temperature record and factors like volcanic eruptions, solar energy, and of course human activities are used. The climate system is endlessly interesting and no doubt has many mysteries left to explore, but this is really not one of them. Being sure that we're comparing like-with-like before jumping to the conclusion that there are discrepancies between our understanding of the climate and how it is behaving in the real world is a lesson we seem to have to relearn over and over again.'

The study showed that the global warming that occurred during the ‘early warming’ period (1915–1945) was in fact also caused by external factors. Formerly, it had been largely attributed to natural ocean temperature changes, which is why there was uncertainty over how much of global warming is driven by unpredictable natural factors.

‘Our study showed that there are no hidden drivers of global mean temperature,’ says co-author Dr Friederike Otto. ‘The temperature change we observe is due to the drivers we know. This sounds boring, but sometimes boring results are really important. In this case, it means we will not see any surprises when these drivers - such as gas emissions - change. In good news, this means when greenhouse gas concentrations go down, temperatures will do so as predicted; the bad news is there is nothing that saves us from temperatures going up as forecasted if we fail to drastically cut greenhouse gas emissions.’


Contacts and sources:
University of Oxford

Citation: A limited role for unforced internal variability in 20th century warming. Karsten Haustein, Friederike E.L. Otto, Victor Venema, Peter Jacobs, Kevin Cowtan, Zeke Hausfather, Robert G. Way, Bethan White, Aneesh Subramanian, Andrew P. Schurer. Journal of Climate, 2019; DOI: 10.1175/JCLI-D-18-0555.1

Ice IX Created on Earth: Strange Ice That Exists Naturally Only on Other Planets and Moons

Ice IX has been created on Earth. In Kurt Vonnegut's novel Cat's Cradle, ice-nine acts as a seed crystal that turns all the water in the world's seas, rivers, and groundwater into ice, sure to kill almost all life in a few days. Fortunately, the real ice IX lacks that power.
An ORNL-led team's observation of certain crystalline ice phases challenges accepted theories about super-cooled water and non-crystalline ice. Their findings, reported in the journal Nature, will also lead to better understanding of ice and its various phases found on other planets, moons and elsewhere in space.

Credit: Jill Hemman/Oak Ridge National Laboratory, U.S. Dept. of Energy


Through an experiment designed to create a super-cold state of water, scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering to discover a pathway to the unexpected formation of dense, crystalline phases of ice thought to exist beyond Earth’s limits.

Observation of these particular crystalline ice phases, known as ice IX, ice XV and ice VIII, challenges accepted theories about super-cooled water and amorphous, or non-crystalline, ice. The researchers’ findings, reported in the journal Nature, will also lead to better basic understanding of ice and its various phases found on other planets and moons and elsewhere in space.

“Hydrogen and oxygen are among the most abundant elements in the universe, and the simplest molecular compound of the two, H2O, is common,” said Chris Tulk, ORNL neutron scattering scientist and lead author. “In fact, a popular theory suggests that most of Earth’s water was brought here through collisions with icy comets.”

On Earth, when water molecules reach zero degrees Celsius, they enter a lower energy state and settle onto a hexagonal crystalline lattice. This frozen form is denoted as ice Ih, the most common phase of water that can be found in household freezers or at skating rinks.

Ice IX, ice XV and ice VIII are three of at least 17 ice phases realized when molecules reorganize into a stable crystalline structure at varying super-low temperatures and very high pressures, conditions that don’t occur naturally on Earth.

“As ice changes phases, it’s similar to water going from a gas to a liquid to a solid except at low temperatures and high pressure—the ice transforms between various different solid forms,” Tulk said.

Each known ice phase is characterized by a unique crystal structure within its pressure-temperature range of stability, where the water molecules reach equilibrium and settle into a regular, stable three-dimensional pattern.

Initially, Tulk and colleagues at the National Research Council of Canada and from the University of California at Los Angeles were exploring the structural nature of amorphous ice—a state of ice that forms with no ordered crystalline structure—as it recrystallizes at even higher pressures.

ORNL scientists Chris Tulk, left, and Jamie Molaison were part of a team that discovered a pathway to the unexpected formation of dense, crystalline phases of ice thought to exist beyond Earth’s limits. They used the unique neutron scattering capability of the Spallation Neutrons and Pressure Diffractometer at ORNL’s Spallation Neutron Source for the experiment.

 Credit: Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy


To make amorphous ice, scientists freeze water in a high-pressure device cooled to minus 173 degrees Celsius and pressurized to approximately 10,000 atmospheres, or 147,000 pounds per square inch (car tires are inflated to about 32 pounds per square inch).

“This type of amorphous ice is thought to be related to liquid water, and understanding that link was the original purpose of this study,” said Tulk.

At ORNL’s Spallation Neutron Source, the team froze a three-millimeter sphere, about half a drop, of deuterated water, which carries an extra neutron in each hydrogen nucleus as required for neutron scattering analysis. They then cooled the sample in the Spallation Neutrons and Pressure, or SNAP, diffractometer to minus 173 degrees Celsius. The instrument increased the pressure incrementally every couple of hours, up to 411,000 pounds per square inch (about 28,000 atmospheres), collecting neutron scattering data between each hike in pressure.
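The pressures quoted in this story are easy to cross-check, since one standard atmosphere is about 14.7 pounds per square inch:

```python
PSI_PER_ATM = 14.696  # pounds per square inch in one standard atmosphere

def psi_to_atm(psi):
    """Convert pounds per square inch to standard atmospheres."""
    return psi / PSI_PER_ATM

# The two pressures mentioned in the text:
print(round(psi_to_atm(147_000)))   # initial compression: ~10,000 atmospheres
print(round(psi_to_atm(411_000)))   # final pressure: ~28,000 atmospheres
```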

“Once we achieved amorphous ice, we planned to raise the temperature and pressure and observe the local molecular ordering as the amorphous ice ‘melts’ into a supercooled liquid and then recrystallizes,” Tulk said. However, after analyzing the data, they were surprised to learn they had not created amorphous ice, but rather a sequence of crystalline transformations through four phases of ice with ever-increasing density: from ice Ih to ice IX to ice XV to ice VIII. There was no evidence of amorphous ice at all.

“I’ve made many of these samples always by compressing ice at low temperature,” said co-author Dennis Klug from the National Research Council of Canada, the lab that originally discovered the pressure-induced amorphization of ice in 1984. “I’ve never previously seen this pressure-temperature path result in a series of crystalline forms like this.”

“If the data from our experiment was true, it would mean that amorphous ice is not related to liquid water but is rather an interrupted transformation between two crystalline phases, a major departure from widely accepted theory,” Klug added.

At first, the team thought their observation was the result of a contaminated sample.

Three more experiments with fresh, carefully handled samples on SNAP produced identical results, reconfirming the structural transformation sequence with no formation of amorphous ice.

The key was the slow rate of pressure increase and the collection of data at lower pressures, which allowed the ice structure to relax into the stable ice IX form. Previous experiments passed over the ice IX structure too quickly for relaxation to occur, which resulted in the amorphous phase.

For 35 years, scientists have been researching the properties of super-cold water and looking for what’s known as the second critical point, which is buried within the solid ice phases. But these results question its very existence. “The relationship between pressure-induced amorphous ice and water is now in doubt, and the second critical point may not even exist,” Tulk said.

“The results of this paper will form the basis of the analysis of future studies of amorphous ice phases during upcoming experiments done at the SNS,” he added.

Co-authors of the study, titled “Absence of amorphous forms when ice is compressed at low temperature,” included Chris A. Tulk and Jamie J. Molaison of ORNL; Adam Makhluf and Craig E. Manning of UCLA; and Dennis D. Klug of the NRC of Canada.

Experimental measurements were performed at ORNL’s Spallation Neutron Source, a DOE Office of Science User Facility, using the Spallation Neutrons and Pressure diffractometer. The research was supported by DOE’s Office of Science and the Sloan Foundation’s Deep Carbon Observatory, a global community of more than 1,000 scientists working to better understand the quantities, movements, forms and origins of carbon inside the Earth.

UT-Battelle manages ORNL for DOE’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.




Contacts and sources
Sara S Shoemaker
DOE/Oak Ridge National Laboratory

Citation: Absence of amorphous forms when ice is compressed at low temperature.
Chris A. Tulk, Jamie J. Molaison, Adam R. Makhluf, Craig E. Manning, Dennis D. Klug. Nature, 2019; 569 (7757): 542 DOI: 10.1038/s41586-019-1204-5





Thursday, May 23, 2019

Artificial Photosynthesis Transforms Carbon Dioxide into Liquefiable Fuels



Chemists at the University of Illinois have successfully produced fuels using water, carbon dioxide and visible light through artificial photosynthesis. By converting carbon dioxide into more complex molecules like propane, green energy technology is now one step closer to using excess CO2 to store solar energy – in the form of chemical bonds – for use when the sun is not shining and in times of peak demand.

Plants use sunlight to drive chemical reactions between water and CO2 to create and store solar energy in the form of energy-dense glucose. In the new study, the researchers developed an artificial process that uses the same green light portion of the visible light spectrum used by plants during natural photosynthesis to convert CO2 and water into fuel, in conjunction with electron-rich gold nanoparticles that serve as a catalyst. The new findings are published in the journal Nature Communications.

“The goal here is to produce complex, liquefiable hydrocarbons from excess CO2 and other sustainable resources such as sunlight,” said Prashant Jain, a chemistry professor and co-author of the study. “Liquid fuels are ideal because they are easier, safer and more economical to transport than gas and, because they are made from long-chain molecules, contain more bonds – meaning they pack energy more densely.”


Jain, left, and Yu performing artificial photosynthesis experiments using green light.
Photo by Fred Zwicky

In Jain’s lab, Sungju Yu, a postdoctoral researcher and first author of the study, uses metal catalysts to absorb green light and transfer electrons and protons needed for chemical reactions between CO2 and water – filling the role of the pigment chlorophyll in natural photosynthesis.

Gold nanoparticles work particularly well as a catalyst, Jain said, because their surfaces interact favorably with the CO2 molecules, are efficient at absorbing light and do not break down or degrade like other metals that can tarnish easily.

Under green light and assisted by an ionic liquid, gold nanoparticles, bottom, lend electrons to convert CO2 molecules, the red and grey spheres in the center, to more complex hydrocarbon fuel molecules.


Graphic courtesy Sungju Yu, Jain Lab at University of Illinois at Urbana-Champaign

There are several ways in which the energy stored in the bonds of the hydrocarbon fuel can be released. However, the easy conventional method, combustion, ends up producing more CO2 – which is counterproductive to the notion of harvesting and storing solar energy in the first place, Jain said.

“There are other, more unconventional potential uses for the hydrocarbons created by this process,” Jain said. “They could be used to power fuel cells for producing electrical current and voltage. There are labs across the world trying to figure out how the hydrocarbon-to-electricity conversion can be conducted efficiently.”

As exciting as the development of this CO2-to-liquid fuel may be for green energy technology, the researchers acknowledge that Jain’s artificial photosynthesis process is nowhere near as efficient as it is in plants.

“We need to learn how to tune the catalyst to increase the efficiency of the chemical reactions,” he said. “Then we can start the hard work of determining how to go about scaling up the process. And, like any unconventional energy technology, there will be many economic feasibility questions to be answered, as well.”



The Energy and Biosciences Institute, through the EBI-Shell program, supported this research.

Contacts and sources
Lois Yoksoulian
University of Illinois at Urbana-Champaign

Citation: “Plasmonic photosynthesis of C1–C3 hydrocarbons from carbon dioxide assisted by an ionic liquid.” Nature Communications, 2019. DOI: 10.1038/s41467-019-10084-5

Octopus “Suckers” Inspire Wearable Biosensor for Wet and Dry Situations

Wearable electronics that adhere to skin are an emerging trend in health sensor technology for their ability to monitor a variety of human activities, from heart rate to step count. But finding the best way to stick a device to the body has been a challenge. Now, a team of researchers reports the development of a graphene-based adhesive biosensor inspired by octopus “suckers.” 

A graphene-based adhesive biosensor inspired by octopus “suckers” is flexible and holds up in wet and dry environments.

Credit: Adapted from ACS Appl. Mater. Interfaces 2019, 11, 16951−16957

The findings are reported in ACS Applied Materials & Interfaces.

For a wearable sensor to be truly effective, it must be flexible and adhere fully to both wet and dry skin but still remain comfortable for the user. Thus, the choice of substrate, the material that the sensing compounds rest upon, is crucial. Woven yarn is a popular substrate, but it sometimes doesn’t fully contact the skin, especially if that skin is hairy. Typical yarns and threads are also vulnerable to wet environments. Adhesives can lose their grip underwater, and in dry environments they can be so sticky that they can be painful when peeled off. To overcome these challenges, Changhyun Pang, Changsoon Choi and colleagues worked to develop a low-cost, graphene-based sensor with a yarn-like substrate that uses octopus-like suckers to adhere to skin.

The researchers coated an elastic polyurethane and polyester fabric with graphene oxide and soaked it in L-ascorbic acid to impart conductivity while retaining its strength and stretch. From there, they added a coating of a graphene and poly(dimethylsiloxane) (PDMS) film to form a conductive path from the fabric to the skin. Finally, they etched tiny, octopus-like sucker patterns into the film. The sensor could detect a wide range of pressures and motions in both wet and dry environments. It could also monitor an array of human activities, including electrocardiogram signals, pulse and speech patterns, demonstrating its potential use in medical applications, the researchers say.

The authors acknowledge funding from the National Research Foundation of Korea, the Korean Ministry of Education and the Korean Ministry of Science.


Contacts and sources:
American Chemical Society

Citation: “Water-Resistant and Skin-Adhesive Wearable Electronics Using Graphene Fabric Sensor with Octopus-Inspired Microsuckers.” ACS Applied Materials & Interfaces



Are Your Clothes Toxic?

 A growing body of evidence shows clothing is an important factor in "human exposure to chemicals and particles, which may have public health significance." According to new research, "chemicals of concern have been identified in clothing, including byproducts of their manufacture and chemicals that adhere to clothing during use and care."

Dusan Licina, a tenure-track assistant professor at the Smart Living Lab, EPFL Fribourg, has taken a critical look at how much we really know about our exposure to particles and chemicals transported by our clothing. His study concludes that further research is needed and opens up new areas of investigation. 
Some substances are removed by washing, while others stick around and become difficult to get rid of. © iStock
Credit EPFL

There is growing evidence that our clothing exposes us to particles and chemicals on a daily basis – and that this exposure could carry significant health risks. Scientists therefore need to better quantify this exposure so that we can develop strategies for mitigating those risks. At least that’s according to Dusan Licina, a tenure-track assistant professor at EPFL’s Smart Living Lab in Fribourg, who has just published a critical review of research on this topic in Environmental Science and Technology.

Our clothing acts as a protective barrier against physical and chemical hazards. However, it can also expose us to potentially toxic chemicals and biological particles by releasing millions of such substances every day, depending on how we use and treat the fabrics. Some substances are removed by washing, drying and storing clothing properly, while others stick around and become difficult to get rid of.

Analysis of 260 articles
These potentially toxic substances include molecular compounds, abiotic particles and biotic particles (such as microbes and allergens), and can end up in our lungs. Common examples include nicotine residue from cigarette smoke, microbes from pets and hazardous compounds used in the farming, medical and manufacturing industries. Surprisingly, until now scientists have taken little interest in this issue. In the first part of his paper, Licina summarizes the findings of 260 articles on the subject and identifies some serious knowledge gaps as well as specific avenues for further research.

“We feel this issue has been understudied so far. The clothing and fabrics people wear have changed enormously over the past few years – today our clothes contain synthetic materials with antimicrobial, anti-UV, stain-repellent and water-repellent additives – but nobody can say whether these new materials expose us to more chemicals and particles than natural fibers,” says Licina.

A significant impact
He suggests requiring that all clothing include a label indicating not just what materials it is made from, but also what substances were used in the fabrication process – much like the ingredient and nutrition-information labels that are required on food. “Today there are no laws or regulations addressing this issue,” says Licina. In short, we know that clothing can have a significant impact on our daily exposure to particles and chemicals, through the air we breathe and through contact with our skin, but we don’t know what the full ramifications are in terms of public health.


Dusan Licina has been writing articles on this subject for several years. While he was performing research in the US (from 2016 to 2018), he spent a year continually monitoring indoor air quality at a neonatal intensive-care unit. He measured how particles are transported inside the unit and even to babies’ incubators. Licina found that when nurses entered the unit, the air-particle concentration increased by a factor of 2.5, and that some of those particles could be traced directly to the shirts that nurses wore during their commute to the hospital. These particles could feasibly play a major role in the development of the babies’ immune systems. But, once again, more research is needed.

Non-smokers exposed
Similar studies carried out on clothing worn in other settings revealed significant traces of insecticides, fungicides and herbicides – compounds that could be absorbed by fabrics in one place and released in another. “Research has already shown that an individual’s clothing can carry potentially toxic particles that can expose people nearby. For example, scientists have found that non-smokers who sit next to smokers with nicotine particles on their clothing have traces of nicotine in their blood and urine later on," says Licina. “What’s more, the particle concentrations that people are thus exposed to from clothing are substantial when compared with total exposure estimates in health effect studies. However, what’s missing are data on how that exposure affects us on a day-to-day basis.”

To fill in those gaps, Licina calls on biologists and chemists to work closely with environmental engineers, in the interests of public health. He also suggests that until further research is conducted and better clothing-information regulations are adopted, consumers pay more attention to how their clothes are made and wash them regularly using gentle, all-natural detergents.
 


Contacts and sources:
Sandrine Perroud
École Polytechnique Fédérale de Lausanne

Citation: “Clothing-mediated exposure to chemicals and particles.” Dusan Licina, Glenn C. Morrison, Gabriel Beko, Charles J. Weschler and William W. Nazaroff. Environmental Science and Technology. https://doi.org/10.1021/acs.est.9b00272