Thursday, January 31, 2019

Hubble Fortuitously Discovers a New Galaxy in the Cosmic Neighborhood

Astronomers using the NASA/ESA Hubble Space Telescope to study some of the oldest and faintest stars in the globular cluster NGC 6752 have made an unexpected finding. They discovered a dwarf galaxy in our cosmic backyard, only 30 million light-years away. The finding is reported in the journal Monthly Notices of the Royal Astronomical Society: Letters.

An international team of astronomers recently used the NASA/ESA Hubble Space Telescope to study white dwarf stars within the globular cluster NGC 6752. The aim of their observations was to use these stars to measure the age of the globular cluster, but in the process they made an unexpected discovery.

This image, taken with Hubble's Advanced Camera for Surveys, shows part of the globular cluster NGC 6752. Behind the bright stars of the cluster, a denser collection of faint stars is visible -- a previously unknown dwarf spheroidal galaxy. This galaxy, nicknamed Bedin 1, is about 30 million light-years from Earth.

Credit: ESA/Hubble, NASA, Bedin et al.


In the outer fringes of the area observed with Hubble's Advanced Camera for Surveys, a compact collection of stars was visible. After a careful analysis of their brightnesses and temperatures, the astronomers concluded that these stars did not belong to the cluster -- which is part of the Milky Way -- but rather lie millions of light-years more distant.

Our newly discovered cosmic neighbour, nicknamed Bedin 1 by the astronomers, is a modestly sized, elongated galaxy. It measures only around 3000 light-years at its greatest extent -- a fraction of the size of the Milky Way. Not only is it tiny, but it is also incredibly faint. These properties led astronomers to classify it as a dwarf spheroidal galaxy.

Dwarf spheroidal galaxies are defined by their small size, low luminosity, lack of dust, and old stellar populations. Thirty-six galaxies of this type are already known to exist in the Local Group, 22 of which are satellite galaxies of the Milky Way.

While dwarf spheroidal galaxies are not uncommon, Bedin 1 has some notable features. Not only is it one of just a few dwarf spheroidals with a well-established distance, but it is also extremely isolated. It lies about 30 million light-years from the Milky Way and 2 million light-years from the nearest plausible large galaxy host, NGC 6744. This makes it possibly the most isolated small dwarf galaxy discovered to date.

From the properties of its stars, astronomers were able to infer that the galaxy is around 13 billion years old -- nearly as old as the Universe itself. Because of its isolation -- which resulted in hardly any interaction with other galaxies -- and its age, Bedin 1 is the astronomical equivalent of a living fossil from the early Universe.

The discovery of Bedin 1 was a truly serendipitous find. Very few Hubble images allow such faint objects to be seen, and they cover only a small area of the sky. Future telescopes with a large field of view, such as the WFIRST telescope, will have cameras covering a much larger area of the sky and may find many more of these galactic neighbours.


Contacts and sources:
L. R. Bedin
INAF-Osservatorio Astronomico di Padova 

Mathias Jäger
ESA/Hubble,


Citation: The HST Large Programme on NGC 6752. I. Serendipitous discovery of a dwarf Galaxy in background. http://www.spacetelescope.org/static/archives/releases/science_papers/heic1903/heic1903a.pdf

Bacteria Promote Lung Tumor Development, Study Suggests

Antibiotics or anti-inflammatory drugs may help combat lung cancer.

MIT cancer biologists have discovered a new mechanism that lung tumors exploit to promote their own survival: These tumors alter bacterial populations within the lung, provoking the immune system to create an inflammatory environment that in turn helps the tumor cells to thrive.

In mice that were genetically programmed to develop lung cancer, those raised in a bacteria-free environment developed much smaller tumors than mice raised under normal conditions, the researchers found. Furthermore, the researchers were able to greatly reduce the number and size of the lung tumors by treating the mice with antibiotics or blocking the immune cells stimulated by the bacteria.

MIT researchers found that lung tumors in mice treated with antibiotics (right, purple stain) were much smaller than untreated lung tumors (left).
Image: Chengcheng Jin

The findings suggest several possible strategies for developing new lung cancer treatments, the researchers say.

“This research directly links bacterial burden in the lung to lung cancer development and opens up multiple potential avenues toward lung cancer interception and treatment,” says Tyler Jacks, director of MIT’s Koch Institute for Integrative Cancer Research and the senior author of the paper.

Chengcheng Jin, a Koch Institute postdoc, is the lead author of the study, which appears in the Jan. 31 online edition of Cell.

Linking bacteria and cancer

Lung cancer, the leading cause of cancer-related deaths, kills more than 1 million people worldwide per year. Up to 70 percent of lung cancer patients also suffer complications from bacterial infections of the lung. In this study, the MIT team wanted to see whether there was any link between the bacterial populations found in the lungs and the development of lung tumors.

To explore this potential link, the researchers studied genetically engineered mice that express the oncogene Kras and lack the tumor suppressor gene p53. These mice usually develop a type of lung cancer called adenocarcinoma within several weeks.

Mice (and humans) typically have many harmless bacteria growing in their lungs. However, the MIT team found that in the mice engineered to develop lung tumors, the bacterial populations in their lungs changed dramatically. The overall population grew significantly, but the number of different bacterial species went down. The researchers are not sure exactly how the lung cancers bring about these changes, but they suspect one possibility is that tumors may obstruct the airway and prevent bacteria from being cleared from the lungs.

This bacterial population expansion induced immune cells called gamma delta T cells to proliferate and begin secreting inflammatory molecules called cytokines. These molecules, especially IL-17 and IL-22, create a progrowth, prosurvival environment for the tumor cells. They also stimulate activation of neutrophils, another kind of immune cell that releases proinflammatory chemicals, further enhancing the favorable environment for the tumors.

“You can think of it as a feed-forward loop that forms a vicious cycle to further promote tumor growth,” Jin says. “The developing tumors hijack existing immune cells in the lungs, using them to their own advantage through a mechanism that’s dependent on local bacteria.”

However, in mice that were born and raised in a germ-free environment, this immune reaction did not occur and the tumors the mice developed were much smaller.

Blocking tumor growth

The researchers found that when they treated the mice with antibiotics either two or seven weeks after the tumors began to grow, the tumors shrank by about 50 percent. The tumors also shrank if the researchers gave the mice drugs that block gamma delta T cells or that block IL-17.

The researchers believe that such drugs may be worth testing in humans, because when they analyzed human lung tumors, they found altered bacterial signals similar to those seen in the mice that developed cancer. The human lung tumor samples also had unusually high numbers of gamma delta T cells.

“If we can come up with ways to selectively block the bacteria that are causing all of these effects, or if we can block the cytokines that activate the gamma delta T cells or neutralize their downstream pathogenic factors, these could all be potential new ways to treat lung cancer,” Jin says.

Many such drugs already exist, and the researchers are testing some of them in their mouse model in hopes of eventually testing them in humans. The researchers are also working on determining which strains of bacteria are elevated in lung tumors, so they can try to find antibiotics that would selectively kill those bacteria.

The research was funded, in part, by a Lung Cancer Concept Award from the Department of Defense, a Cancer Center Support (core) grant from the National Cancer Institute, the Howard Hughes Medical Institute, and a Margaret A. Cunningham Immune Mechanisms in Cancer Research Fellowship Award.


Contacts and sources:
Anne Trafton
Massachusetts Institute of Technology (MIT)


The 210-Million-Year-Old Smok Was Crushing Bones Like A Hyena

Coprolites, or fossil droppings, of the dinosaur-like archosaur Smok wawelski contain lots of chewed-up bone fragments. This led researchers at Uppsala University to conclude that this top predator was exploiting bones for salt and marrow, a behavior often linked to mammals but seldom to archosaurs.

Credit: Martin Qvarnström


Most predatory dinosaurs used their blade-like teeth to feed on the flesh of their prey, but they are not commonly thought of as bone crushers. The major exception is the large tyrannosaurids, such as Tyrannosaurus rex, which roamed North America toward the end of the age of dinosaurs. The tyrannosaurids are thought to have been osteophagous (voluntarily exploiting bone) based on findings of bone-rich coprolites, bite-marked bones, and their robust, commonly worn teeth.

In a study published in Scientific Reports, researchers from Uppsala University were able to link ten large coprolites to Smok wawelski, a top predator of a Late Triassic (210-million-year-old) assemblage unearthed in Poland. This bipedal, 5-6-meter-long animal lived some 140 million years before the tyrannosaurids of North America and had a T. rex-like appearance, although it is not fully clear whether it was a true dinosaur or a dinosaur-like precursor.

The researchers found several crushed teeth in the fossil droppings, probably belonging to Smok wawelski itself. The teeth were crushed against hard food items and involuntarily ingested.

Credit: Gerard Gierlinski

Three of the coprolites were scanned using synchrotron microtomography. This method has only recently been applied to coprolites; it works somewhat like a hospital CT scanner, except that the energy of the X-ray beams is much higher. This makes it possible to visualize internal structures in fossils in three dimensions.

The coprolites were shown to contain up to 50 percent bones from prey animals such as large amphibians and juvenile dicynodonts. Several crushed serrated teeth, probably belonging to the coprolite producer itself, were also found in the coprolites. This means that the teeth were repeatedly crushed against the hard food items (and involuntarily ingested) and replaced by new ones.

Another of the bone-filled coprolites attributed to Smok wawelski.

Credit: Jakub Kowalski

Further evidence for a bone-crushing behaviour can also be found in the fossils from the same bone beds in Poland. These include worn teeth and bone-rich fossil regurgitates from Smok wawelski, as well as numerous crushed or bite-marked bones.

Several of the anatomical characters related to osteophagy, such as a massive head and robust body, seem to be shared by S. wawelski and the tyrannosaurids, despite them being distantly related and living 140 million years apart. These large predators therefore seem to provide evidence of similar feeding adaptations being independently acquired at the beginning and end of the age of dinosaurs.

Contacts and sources:
Grzegorz Niedzwiedzki
Uppsala University

'Antarctic King:' Iguana-Sized Dinosaur Cousin Discovered in Antarctica

'Antarctic king' shows how life at the South Pole bounced back after mass extinction.

Antarctica wasn't always a frozen wasteland--250 million years ago, it was covered in forests and rivers, and the temperature rarely dipped below freezing. It was also home to diverse wildlife, including early relatives of the dinosaurs. Scientists have just discovered the newest member of that family--an iguana-sized reptile whose name means "Antarctic king."

"The midnight sun over Early Triassic Antarctica." Along the banks of a river, three archosaur inhabitants of the dense Voltzia conifer forest cross paths: Antarctanax shackletoni sneaks up on an early titanopetran insect, Prolacerta lazes on a log, and an enigmatic large archosaur pursues two unsuspecting dicynodonts, Lystrosaurus maccaigi.

Credit: (c) Adrienne Stroup, Field Museum

"This new animal was an archosaur, an early relative of crocodiles and dinosaurs," says Brandon Peecook, a Field Museum researcher and lead author of a paper in the Journal of Vertebrate Paleontology describing the new species. "On its own, it just looks a little like a lizard, but evolutionarily, it's one of the first members of that big group. It tells us how dinosaurs and their closest relatives evolved and spread."

The fossil skeleton is incomplete, but paleontologists still have a good feel for the animal, named Antarctanax shackletoni (the former means "Antarctic king," the latter is a nod to polar explorer Ernest Shackleton). Based on its similarities to other fossil animals, Peecook and his coauthors (Roger Smith of the University of Witwatersrand and the Iziko South African Museum and Christian Sidor of the Burke Museum and University of Washington) surmise that Antarctanax was a carnivore that hunted bugs, early mammal relatives, and amphibians.

A slab containing fossils of Antarctanax.
Credit: (c) Brandon Peecook, Field Museum


The most interesting thing about Antarctanax, though, is where it lived, and when. "The more we find out about prehistoric Antarctica, the weirder it is," says Peecook, who is also affiliated with the Burke Museum. "We thought that Antarctic animals would be similar to the ones that were living in southern Africa, since those landmasses were joined back then. But we're finding that Antarctica's wildlife is surprisingly unique."

About two million years before Antarctanax lived--the blink of an eye in geologic time--Earth underwent its biggest-ever mass extinction. Climate change, caused by volcanic eruptions, killed 90% of all animal life. The years immediately after that extinction event were an evolutionary free-for-all--with the slate wiped clean by the mass extinction, new groups of animals vied to fill the gaps. The archosaurs, including dinosaurs, were one of the groups that experienced enormous growth. 

Lead author Dr. Brandon Peecook prospecting for Triassic vertebrate fossils at Coalsack Bluff, a famous site in Antarctic paleontology.

Credit: (c) Adam Huttenlocker, Field Museum

"Before the mass extinction, archosaurs were only found around the Equator, but after it, they were everywhere," says Peecook. "And Antarctica had a combination of these brand-new animals and stragglers of animals that were already extinct in most places--what paleontologists call 'dead clades walking.' You've got tomorrow's animals and yesterday's animals, cohabiting in a cool place."

The fact that scientists have found Antarctanax helps bolster the idea that Antarctica was a place of rapid evolution and diversification after the mass extinction. "The more different kinds of animals we find, the more we learn about the pattern of archosaurs taking over after the mass extinction," notes Peecook.

"Antarctica is one of those places on Earth, like the bottom of the sea, where we're still in the very early stages of exploration," says Peecook. "Antarctanax is our little part of discovering the history of Antarctica."


Contacts and sources:
Kate Golembiewski
Field Museum
Citation: A novel archosauromorph from Antarctica and an updated review of a high-latitude vertebrate assemblage in the wake of the end-Permian mass extinction. Brandon R. Peecook, Roger M. H. Smith & Christian A. Sidor. Journal of Vertebrate Paleontology. http://dx.doi.org/10.1080/02724634.2018.1536664

Super-Resolution Technique Could Improve Tissue Imaging Tenfold

An approach developed by MIT engineers surmounts longstanding problem of light scattering within biological tissue and other complex materials.

Imaging deep inside biological tissue has long been a significant challenge. That is because light tends to be scattered by complex media such as biological tissue, bouncing around inside until it comes out again at a variety of different angles. This distorts the focus of optical microscopes, reducing both their resolution and imaging depth. Using light of a longer wavelength can help to avoid this scattering, but it also reduces imaging resolution.

MIT researchers have developed a new technique using quantum reference beacons for superresolution optical focusing in complex media.
Image: Donggyu Kim and Dirk R. Englund


Now, instead of attempting to avoid scattering, researchers at MIT have developed a technique to use the effect to their advantage. The new technique, which they describe in a paper published in the journal Science, allows them to use light scattering to improve imaging resolution by up to 10 times that of existing systems.

Indeed, while conventional microscopes are limited by what is known as the diffraction barrier, which prevents them from focusing beyond a given resolution, the new technique allows imaging at "optical super-resolution," beyond this diffraction limit.

The technique could be used to improve biomedical imaging, for example, by allowing more precise targeting of cancer cells within tissue. It could also be combined with optogenetic techniques, to excite particular brain cells. It could even be used in quantum computing, according to Donggyu Kim, a graduate student in mechanical engineering at MIT and first author of the paper.

In 2007, researchers first proposed that by shaping a wave of light before sending it into the tissue, it is possible to reverse the scattering process, focusing the light at a single point. However, taking advantage of this effect has long been hampered by the difficulty of gaining sufficient information about how light is scattered within complex media such as biological tissue.

To obtain this information, researchers have developed numerous techniques for creating “guide stars,” or feedback signals from points within the tissue that allow the light to be focused correctly. But these approaches have so far resulted in imaging resolution well above the diffraction limit, Kim says.

In order to improve the resolution, Kim and co-author Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, developed something they call quantum reference beacons (QRBs).

These QRBs are made using nitrogen-vacancy (N-V) centers in diamonds. These tiny defects in the diamond crystal lattice are naturally fluorescent, meaning they emit light when excited by a laser beam.

What’s more, when a magnetic field is applied to the QRBs, they each resonate at their own specific frequency. By targeting the tissue sample with a microwave signal of the same resonant frequency as a particular QRB, the researchers can selectively alter its fluorescence.

“Imagine a navigator trying to get their vessel to its destination at night,” Kim says. “If they see three beacons, all of which are emitting light, they will be confused. But, if one of the beacons deliberately twinkles to generate a signal, they will know where their destination is,” he says.

In this way the N-V centers act as beacons, each emitting fluorescent light. By modulating a particular beacon’s fluorescence to create an on/off signal, the researchers can determine the beacon’s location within the tissue.

“We can read out where this light is coming from, and from that we can also understand how the light scatters inside the complex media,” Kim says.

The researchers then combine this information from each of the QRBs to create a precise profile of the scattering pattern within the tissue.

By displaying this pattern with a spatial light modulator — a device used to produce holograms by manipulating light — the laser beam can be shaped in advance to compensate for the scattering that will take place inside the tissue. The laser is then able to focus with super resolution on a point inside the tissue.
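The wavefront-shaping idea described above can be illustrated with a toy numerical model (an illustrative sketch only, not the authors' method or code; all quantities are made up). The scattering medium is reduced to one random complex transmission coefficient per controllable input mode, and pre-setting each input phase to cancel the phase the medium will add makes every contribution arrive at the target in step:

```python
import cmath
import random

random.seed(0)

N = 1000  # number of controllable input modes (e.g., SLM pixels)

# Toy scattering medium: each input mode reaches the target point with
# a random amplitude and a random phase.
t = [cmath.rect(random.random(), random.uniform(0.0, 2.0 * cmath.pi))
     for _ in range(N)]

def intensity_at_target(phases):
    """Intensity at the desired focus for a given set of input phases."""
    field = sum(tj * cmath.exp(1j * p) for tj, p in zip(t, phases))
    return abs(field) ** 2

# Unshaped beam: a flat wavefront, scrambled by the medium into speckle.
i_unshaped = intensity_at_target([0.0] * N)

# Shaped beam: each mode's phase is pre-set to cancel the phase the
# medium will add (phase conjugation), so all contributions add coherently.
i_shaped = intensity_at_target([-cmath.phase(tj) for tj in t])

print(i_shaped / i_unshaped)  # focus-enhancement factor
```

With the phases conjugated, the shaped focus is far brighter than the unshaped speckle; in a real system the per-mode phases are estimated from guide-star feedback rather than read off a known transmission matrix.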

In biological applications, the researchers envision that a suspension of nanodiamonds could be injected into the tissue, much as a contrast agent is already used in some existing imaging systems. Alternatively, molecular tags attached to the diamond nanoparticles could guide them to specific types of cells.

The QRBs could also be used as qubits for quantum sensing and quantum information processing, Kim says. “The QRBs can be used as quantum bits to store quantum information, and with this we can do quantum computing,” he says.

Super-resolution imaging within complex scattering media has been hampered by the deficiency of guide stars that report their positions with subdiffraction precision, according to Wonshik Choi, a professor of physics at Korea University, who was not involved in the research.

“The researchers have developed an elegant method of exploiting quantum reference beacons made of the nitrogen vacancy center in nanodiamonds as such guide stars,” he says. “This work opens up new avenues for deep-tissue super-resolution imaging and quantum information processing within subwavelength nanodevices.”

The researchers now hope to explore the use of quantum entanglement and other types of semiconductors for use as QRBs, Kim says.

Contacts and sources:
Helen Knight
Massachusetts Institute of Technology (MIT)


 

Robot Combines Vision and Touch to Learn the Game of Jenga

A machine-learning approach could help robots assemble cellphones and other small parts in a manufacturing line.

The Jenga-playing robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.
Courtesy of the researchers

In the basement of MIT’s Building 3, a robot is carefully contemplating its next move. It gently pokes at a tower of blocks, looking for the best block to extract without toppling the tower, in a solitary, slow-moving, yet surprisingly agile game of Jenga.

The robot, developed by MIT engineers, is equipped with a soft-pronged gripper, a force-sensing wrist cuff, and an external camera, all of which it uses to see and feel the tower and its individual blocks.

As the robot carefully pushes against a block, a computer takes in visual and tactile feedback from its camera and cuff, and compares these measurements to moves the robot has previously made. It also considers the outcomes of those moves — specifically, whether a block, in a certain configuration and pushed with a certain amount of force, was successfully extracted. In real time, the robot then “learns” whether to keep pushing or move to a new block, in order to keep the tower from falling.

Details of the Jenga-playing robot are published today in the journal Science Robotics. Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT, says the robot demonstrates something that’s been tricky to attain in previous systems: the ability to quickly learn the best way to carry out a task, not just from visual cues, as it is commonly studied today, but also from tactile, physical interactions.

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” Rodriguez says. “This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics.”

He says the tactile learning system the researchers have developed can be used in applications beyond Jenga, especially in tasks that need careful physical interaction, including separating recyclable objects from landfill trash and assembling consumer products.

“In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” Rodriguez says. “Learning models for those actions is prime real-estate for this kind of technology.”

The paper’s lead author is MIT graduate student Nima Fazeli. The team also includes Miquel Oller, Jiajun Wu, Zheng Wu, and Joshua Tenenbaum, professor of brain and cognitive sciences at MIT.





Push and pull

In the game of Jenga — Swahili for “build” — 54 rectangular blocks are stacked in 18 layers of three blocks each, with the blocks in each layer oriented perpendicular to the blocks below. The aim of the game is to carefully extract a block and place it at the top of the tower, thus building a new level, without toppling the entire structure.

To program a robot to play Jenga, traditional machine-learning schemes might require capturing everything that could possibly happen between a block, the robot, and the tower — an expensive computational task requiring data from thousands if not tens of thousands of block-extraction attempts.

Instead, Rodriguez and his colleagues looked for a more data-efficient way for a robot to learn to play Jenga, inspired by human cognition and the way we ourselves might approach the game.

The team customized an industry-standard ABB IRB 120 robotic arm, then set up a Jenga tower within the robot’s reach, and began a training period in which the robot first chose a random block and a location on the block against which to push. It then exerted a small amount of force in an attempt to push the block out of the tower.

For each block attempt, a computer recorded the associated visual and force measurements, and labeled whether each attempt was a success.

Rather than carry out tens of thousands of such attempts (which would involve reconstructing the tower almost as many times), the robot trained on just about 300, with attempts of similar measurements and outcomes grouped in clusters representing certain block behaviors. For instance, one cluster of data might represent attempts on a block that was hard to move, versus one that was easier to move, or that toppled the tower when moved. For each data cluster, the robot developed a simple model to predict a block’s behavior given its current visual and tactile measurements.
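The cluster-then-model idea can be sketched in a few lines of code (a self-contained illustration with synthetic numbers, not the team's actual pipeline): simulated push attempts are grouped by their force and displacement measurements, and each cluster gets a simple model, here just its empirical success rate.

```python
import random

random.seed(1)

# Synthetic push attempts: (force, block displacement, success flag).
# "Stuck" blocks barely move and fail; "free" blocks slide out and succeed.
def synthetic_attempt():
    if random.random() < 0.5:  # stuck block: high force, tiny displacement
        return (random.uniform(2.0, 4.0), random.uniform(0.0, 0.2), 0)
    return (random.uniform(0.5, 1.5), random.uniform(0.8, 1.2), 1)  # free block

attempts = [synthetic_attempt() for _ in range(300)]
features = [(f, d) for f, d, _ in attempts]

# Minimal 2-means clustering on (force, displacement); the lowest- and
# highest-force attempts seed the two cluster centers.
def kmeans2(points, iters=20):
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[dists.index(min(dists))].append(p)
        # Recompute each center as the mean of its assigned points.
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) for g in groups if g]
    return centers

centers = kmeans2(features)

def cluster_of(p):
    dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
    return dists.index(min(dists))

# Per-cluster "model": the empirical success rate of attempts in that cluster.
outcomes = {}
for f, d, ok in attempts:
    outcomes.setdefault(cluster_of((f, d)), []).append(ok)
success_rate = {k: sum(v) / len(v) for k, v in outcomes.items()}

# A new attempt is judged by the model of its nearest cluster.
new_attempt = (1.0, 1.0)  # low force, large displacement -> "free" cluster
print(success_rate[cluster_of(new_attempt)])  # → 1.0 on this synthetic data
```

Judging a new attempt by the model of its nearest cluster is far cheaper than fitting one model to every possible block-tower interaction, which is the data-efficiency point the researchers make.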

Fazeli says this clustering technique dramatically increases the efficiency with which the robot can learn to play the game, and is inspired by the natural way in which humans cluster similar behavior: “The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen.”

Stacking up

The researchers tested their approach against other state-of-the-art machine learning algorithms, in a computer simulation of the game using the simulator MuJoCo. The lessons learned in the simulator informed the researchers of the way the robot would learn in the real world.

“We provide to these algorithms the same information our system gets, to see how they learn to play Jenga at a similar level,” Oller says. “Compared with our approach, these algorithms need to explore orders of magnitude more towers to learn the game.”

Curious as to how their machine-learning approach stacks up against actual human players, the team carried out a few informal trials with several volunteers.

“We saw how many blocks a human was able to extract before the tower fell, and the difference was not that much,” Oller says.

But there is still a way to go if the researchers want to competitively pit their robot against a human player. In addition to physical interactions, Jenga requires strategy, such as extracting just the right block that will make it difficult for an opponent to pull out the next block without toppling the tower.

For now, the team is less interested in developing a robotic Jenga champion, and more focused on applying the robot’s new skills to other application domains.

“There are many tasks that we do with our hands where the feeling of doing it ‘the right way’ comes in the language of forces and tactile cues,” Rodriguez says. “For tasks like these, a similar approach to ours could figure it out.”

This research was supported, in part, by the National Science Foundation through the National Robotics Initiative.


Contacts and sources:
Abby Abazorius / Jennifer Chu
Massachusetts Institute of Technology (MIT)



Citation: See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion.
N. Fazeli, M. Oller, J. Wu, Z. Wu, J. B. Tenenbaum, A. Rodriguez. Science Robotics, 2019; 4 (26): eaav3123 DOI: 10.1126/scirobotics.aav3123

Learning New Vocabulary During Deep Sleep Possible



Researchers at the University of Bern, Switzerland, have shown that we can acquire the vocabulary of a new language during distinct phases of slow-wave sleep, and that the sleep-learned vocabulary can be retrieved unconsciously after waking. Memory formation appeared to be mediated by the same brain structures that mediate vocabulary learning during wakefulness.

Credit: Liquid 2003 / Wikimedia Commons

Sleeping time is sometimes considered unproductive. This raises the question of whether the time spent asleep could be used more productively -- for example, to learn a new language. To date, sleep research has focused on the stabilization and strengthening (consolidation) of memories formed during preceding wakefulness; learning during sleep itself has rarely been examined. There is considerable evidence that wake-learned information undergoes recapitulation by replay in the sleeping brain. This replay strengthens the still-fragile memory traces and embeds the newly acquired information in the preexisting store of knowledge.

If re-play during sleep improves the storage of wake-learned information, then first-play -- i.e., the initial processing of new information -- should also be feasible during sleep, potentially carving out a memory trace that lasts into wakefulness. This was the research question of Katharina Henke, Marc Züst, and Simon Ruch of the Institute of Psychology and of the Interfaculty Research Cooperation "Decoding Sleep" at the University of Bern, Switzerland.

These investigators have now shown for the first time that new foreign words and their translations can be associated during a midday nap, with the associations persisting into wakefulness. Following waking, participants could reactivate the sleep-formed associations to access word meanings when re-presented with the formerly sleep-played foreign words. The hippocampus, a brain structure essential for associative learning during wakefulness, also supported the retrieval of sleep-formed associations. The results of this experiment are published open access in the scientific journal Current Biology.

The brain cells’ active states are central for sleep-learning

The research group of Katharina Henke examined whether a sleeping person is able to form new semantic associations between played foreign words and translation words during the brain cells’ active states, the so-called “Up-states”. When we reach deep sleep stages, our brain cells progressively coordinate their activity. During deep sleep, the brain cells are commonly active for a brief period of time before they jointly enter into a state of brief inactivity. The active state is called “Up-state” and the inactive state “Down-state”. The two states alternate about every half-second.

Semantic associations between sleep-played words of an artificial language and their German translation words were encoded and stored only if the second word of a pair was played repeatedly (2, 3 or 4 times) during an Up-state. For example, when a sleeping person heard the word pairs “tofer = key” and “guga = elephant”, then after waking they were able to categorize with better-than-chance accuracy whether the sleep-played foreign words denoted something large (“guga”) or small (“tofer”). “It was interesting that language areas of the brain and the hippocampus – the brain’s essential memory hub – were activated during the wake retrieval of sleep-learned vocabulary, because these brain structures normally mediate wake learning of new vocabulary,” says Marc Züst, co-first author of the paper. “These brain structures appear to mediate memory formation independently of the prevailing state of consciousness – unconscious during deep sleep, conscious during wakefulness.”

Memory formation does not require consciousness

Besides its practical relevance, this new evidence for sleep-learning challenges current theories of sleep and of memory. The notion of sleep as an encapsulated mental state, in which we are detached from the physical environment, is no longer tenable. “We were able to disprove the claim that sophisticated learning is impossible during deep sleep,” says Simon Ruch, co-first author. The current results underscore a theoretical notion of the relationship between memory and consciousness that Katharina Henke published in 2010 (Nature Reviews Neuroscience). “To what extent, and with what consequences, deep sleep can be used for the acquisition of new information will be a topic of research in upcoming years,” says Katharina Henke.

Decoding sleep

The research group of Katharina Henke is part of the Interfaculty Research Cooperation “Decoding Sleep: From Neurons to Health & Mind” (IRC). Decoding Sleep is a large, interdisciplinary research project that is financed by the University of Bern, Switzerland. Thirteen research groups in medicine, biology, psychology, and informatics are part of the IRC. The aim of these research groups is to gain a better understanding of the mechanisms involved in sleep, consciousness, and cognition.

The reported study was carried out in collaboration with Roland Wiest who is affiliated with the Support Center for Advanced Neuroimaging (SCAN) at the Institute of Diagnostic and Interventional Neuroradiology, Inselspital, University of Bern.

Both research groups also belong to the BENESCO consortium, which consists of 22 interdisciplinary research groups specialized in sleep medicine, epilepsy and research on altered states of consciousness.
Contacts and sources:
University of Bern
Citation:
Implicit Vocabulary Learning during Sleep Is Bound to Slow-Wave Peaks. Marc Alain Züst, Simon Ruch, Roland Wiest, and Katharina Henke. Current Biology, 2019. DOI: 10.1016/j.cub.2018.12.038

A model for memory systems based on processing modes rather than consciousness. Katharina Henke. Nature Reviews Neuroscience, 11, June 9, 2010. https://doi.org/10.1038/nrn2850



The Great Dying: Earth's Greatest Extinction Event Killed Plant Life First


A view of Coalcliff in New South Wales, Australia, where Nebraska researchers Christopher Fielding and Tracy Frank discovered evidence that Earth's largest extinction may have extinguished plant life nearly 400,000 years before marine animal species disappeared.
Credit: Christopher Fielding

Little life could endure the Earth-spanning cataclysm known as the Great Dying, but plants may have suffered its wrath long before many animal counterparts, says new research led by the University of Nebraska-Lincoln.

About 252 million years ago, with the planet’s continental crust mashed into the supercontinent called Pangaea, volcanoes in modern-day Siberia began erupting. Spewing carbon and methane into the atmosphere for roughly 2 million years, the eruption helped extinguish about 96 percent of oceanic life and 70 percent of land-based vertebrates — the largest extinction event in Earth’s history.

Yet the new study suggests that a byproduct of the eruption — nickel — may have driven some Australian plant life to extinction nearly 400,000 years before most marine species perished.

Christopher Fielding
Credit: University of Nebraska-Lincoln

“That’s big news,” said Christopher Fielding, lead author and professor of Earth and atmospheric sciences. “People have hinted at that, but nobody’s previously pinned it down. Now we have a timeline.”

The researchers reached the conclusion by studying fossilized pollen, the chemical composition and age of rock, and the layering of sediment on the southeastern cliffsides of Australia. There they discovered surprisingly high concentrations of nickel in the Sydney Basin’s mud-rock – surprising because there are no local sources of the element.

Tracy Frank

Credit: University of Nebraska-Lincoln


Tracy Frank, professor and chair of Earth and atmospheric sciences, said the finding points to the eruption of lava through nickel deposits in Siberia. That volcanism could have converted the nickel into an aerosol that drifted thousands of miles southward before descending on, and poisoning, much of the plant life there. Similar spikes in nickel have been recorded in other parts of the world, she said.

“So it was a combination of circumstances,” Fielding said. “And that’s a recurring theme through all five of the major mass extinctions in Earth’s history.”

If true, the phenomenon may have triggered a series of others: herbivores dying from the lack of plants, carnivores dying from a lack of herbivores, and toxic sediment eventually flushing into seas already reeling from rising carbon dioxide, acidification and temperatures.
‘It lets us see what’s possible’

One of three married couples on the research team, Fielding and Frank also found evidence for another surprise. Much of the previous research into the Great Dying — often conducted at sites now near the equator — has unearthed abrupt coloration changes in sediment deposited during that span.

Shifts from grey to red sediment generally indicate that the volcanism’s ejection of ash and greenhouse gases altered the world’s climate in major ways, the researchers said. Yet that grey-red gradient is much more gradual at the Sydney Basin, Fielding said, suggesting that its distance from the eruption initially helped buffer it against the intense rises in temperature and aridity found elsewhere.



Credit: Christopher Fielding


Though the time scale and magnitude of the Great Dying exceeded the planet’s current ecological crises, Frank said the emerging similarities — especially the spikes in greenhouse gases and continuous disappearance of species — make it a lesson worth studying.

“Looking back at these events in Earth's history is useful because it lets us see what's possible,” she said. “How has the Earth's system been perturbed in the past? What happened where? How fast were the changes? It gives us a foundation to work from — a context for what's happening now.”

The researchers detailed their findings in the journal Nature Communications. Fielding and Frank authored the study with Allen Tevyaw, graduate student in geosciences at Nebraska; Stephen McLoughlin, Vivi Vajda and Chris Mays from the Swedish Museum of Natural History; Arne Winguth and Cornelia Winguth from the University of Texas at Arlington; Robert Nicoll of Geoscience Australia; Malcolm Bocking of Bocking Associates; and James Crowley of Boise State University.

The National Science Foundation and the Swedish Research Council funded the team’s work.



Contacts and sources:
Scott Schrage
University of Nebraska-Lincoln
Citation: Age and pattern of the southern high-latitude continental end-Permian extinction constrained by multiproxy analysis. Christopher R. Fielding, Tracy D. Frank, Stephen McLoughlin, Vivi Vajda, Chris Mays, Allen P. Tevyaw, Arne Winguth, Cornelia Winguth, Robert S. Nicoll, Malcolm Bocking, James L. Crowley. Nature Communications, 2019; 10 (1). DOI: 10.1038/s41467-018-07934-z



NASA’s NICER Mission Maps ‘Light Echoes’ of New Black Hole

Scientists have charted the environment surrounding a stellar-mass black hole that is 10 times the mass of the Sun using NASA’s Neutron star Interior Composition Explorer (NICER) payload aboard the International Space Station. NICER detected X-ray light from the recently discovered black hole, called MAXI J1820+070 (J1820 for short), as it consumed material from a companion star. Waves of X-rays formed “light echoes” that reflected off the swirling gas near the black hole and revealed changes in the environment’s size and shape.


illustration of black hole MAXI J1820+070
Credit: Aurore Simonnet and NASA's Goddard Space Flight Center

“NICER has allowed us to measure light echoes closer to a stellar-mass black hole than ever before,” said Erin Kara, an astrophysicist at the University of Maryland, College Park and NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who presented the findings at the 233rd American Astronomical Society meeting in Seattle. “Previously, these light echoes off the inner accretion disk were only seen in supermassive black holes, which are millions to billions of solar masses and undergo changes slowly. Stellar black holes like J1820 have much lower masses and evolve much faster, so we can see changes play out on human time scales.”

A paper describing the findings, led by Kara, appeared in the Jan. 10 issue of Nature and is available online.

Watch how X-ray echoes, mapped by NASA’s Neutron star Interior Composition Explorer (NICER), revealed changes to the corona of black hole MAXI J1820+070.

Credits: NASA’s Goddard Space Flight Center

J1820 is located about 10,000 light-years away toward the constellation Leo. The companion star in the system was identified in a survey by ESA’s (European Space Agency) Gaia mission, which allowed researchers to estimate its distance. Astronomers were unaware of the black hole’s presence until March 11, 2018, when an outburst was spotted by the Japan Aerospace Exploration Agency’s Monitor of All-sky X-ray Image (MAXI), also aboard the space station. J1820 went from a totally unknown black hole to one of the brightest sources in the X-ray sky over a few days. NICER moved quickly to capture this dramatic transition and continues to follow the fading tail of the eruption.

“NICER was designed to be sensitive enough to study faint, incredibly dense objects called neutron stars,” said Zaven Arzoumanian, the NICER science lead at Goddard and a co-author of the paper. “We’re pleased at how useful it’s also proven in studying these very X-ray-bright stellar-mass black holes.”

A black hole can siphon gas from a nearby companion star into a ring of material called an accretion disk. Gravitational and magnetic forces heat the disk to millions of degrees, making it hot enough to produce X-rays at the inner parts of the disk, near the black hole. Outbursts occur when an instability in the disk causes a flood of gas to move inward, toward the black hole, like an avalanche. The causes of disk instabilities are poorly understood.

Above the disk is the corona, a region of subatomic particles around 1 billion degrees Celsius (1.8 billion degrees Fahrenheit) that glows in higher-energy X-rays. Many mysteries remain about the origin and evolution of the corona. Some theories suggest the structure could represent an early form of the high-speed particle jets these types of systems often emit.

Astrophysicists want to better understand how the inner edge of the accretion disk and the corona above it change in size and shape as a black hole accretes material from its companion star. If they can understand how and why these changes occur in stellar-mass black holes over a period of weeks, scientists could shed light on how supermassive black holes evolve over millions of years and how they affect the galaxies in which they reside.

One method used to chart those changes is called X-ray reverberation mapping, which uses X-ray reflections in much the same way sonar uses sound waves to map undersea terrain. Some X-rays from the corona travel straight toward us, while others light up the disk and reflect back at different energies and angles.

X-ray reverberation mapping of supermassive black holes has shown that the inner edge of the accretion disk is very close to the event horizon, the point of no return. The corona is also compact, lying closer to the black hole rather than over much of the accretion disk. Previous observations of X-ray echoes from stellar black holes, however, suggested the inner edge of the accretion disk could be quite distant, up to hundreds of times the size of the event horizon. The stellar-mass J1820, however, behaved more like its supermassive cousins.

As they examined NICER’s observations of J1820, Kara’s team saw a decrease in the delay, or lag time, between the initial flare of X-rays coming directly from the corona and the flare’s echo off the disk, indicating that the X-rays traveled shorter and shorter distances before they were reflected. From 10,000 light-years away, they estimated that the corona contracted vertically from roughly 100 to 10 miles — that’s like seeing something the size of a blueberry shrink to something the size of a poppy seed at the distance of Pluto.
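The blueberry-to-poppy-seed analogy can be sanity-checked with small-angle arithmetic. The snippet below is an illustration added here; the sizes assumed for a blueberry (~1 cm), a poppy seed (~1 mm), and Pluto's distance (~5.9 billion km) are round numbers, not figures from the article:

```python
# Compare the angular size of the corona (seen from Earth) with household
# objects seen at the distance of Pluto, using the small-angle approximation.
MILE_M = 1609.34          # metres per mile
LY_M = 9.4607e15          # metres per light-year
PLUTO_M = 5.9e12          # assumed Earth-Pluto distance in metres

def angular_size(size_m, distance_m):
    """Small-angle approximation: angle in radians ~ size / distance."""
    return size_m / distance_m

corona_before = angular_size(100 * MILE_M, 10_000 * LY_M)  # corona at ~100 miles
corona_after = angular_size(10 * MILE_M, 10_000 * LY_M)    # corona at ~10 miles
blueberry = angular_size(0.01, PLUTO_M)                    # 1 cm at Pluto
poppy_seed = angular_size(0.001, PLUTO_M)                  # 1 mm at Pluto

print(f"corona before: {corona_before:.1e} rad, blueberry:  {blueberry:.1e} rad")
print(f"corona after:  {corona_after:.1e} rad, poppy seed: {poppy_seed:.1e} rad")
```

Both pairs come out at roughly the same angular size (a few times 10⁻¹⁵ and 10⁻¹⁶ radians respectively), so the comparison in the text holds up.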

“This is the first time that we’ve seen this kind of evidence that it’s the corona shrinking during this particular phase of outburst evolution,” said co-author Jack Steiner, an astrophysicist at the Massachusetts Institute of Technology’s Kavli Institute for Astrophysics and Space Research in Cambridge. “The corona is still pretty mysterious, and we still have a loose understanding of what it is. But we now have evidence that the thing that’s evolving in the system is the structure of the corona itself.”

To confirm the decreased lag time was due to a change in the corona and not the disk, the researchers used a signal called the iron K line created when X-rays from the corona collide with iron atoms in the disk, causing them to fluoresce. Time runs slower in stronger gravitational fields and at higher velocities, as stated in Einstein’s theory of relativity. When the iron atoms closest to the black hole are bombarded by light from the core of the corona, the X-ray wavelengths they emit get stretched because time is moving slower for them than for the observer (in this case, NICER).
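In textbook form (added here for context, not taken from the article), the gravitational part of that stretching for a non-rotating black hole of mass $M$ is

```latex
\frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}} = \left(1 - \frac{r_s}{r}\right)^{-1/2},
\qquad r_s = \frac{2GM}{c^2},
```

so iron K photons emitted at smaller radii $r$, deeper in the gravitational field, arrive more stretched. A constant amount of stretching therefore points to a constant emission radius.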

Kara’s team discovered that J1820’s stretched iron K line remained constant, which means the inner edge of the disk remained close to the black hole — similar to a supermassive black hole. If the decreased lag time was caused by the inner edge of the disk moving even further inward, then the iron K line would have stretched even more.

These observations give scientists new insights into how material funnels in to the black hole and how energy is released in this process.

The NICER instrument installed on the International Space Station, as captured by a high-definition external camera on Oct. 22, 2018.


Credits: NASA

“NICER’s observations of J1820 have taught us something new about stellar-mass black holes and about how we might use them as analogs for studying supermassive black holes and their effects on galaxy formation,” said co-author Philip Uttley, an astrophysicist at the University of Amsterdam. “We’ve seen four similar events in NICER’s first year, and it’s remarkable. It feels like we’re on the edge of a huge breakthrough in X-ray astronomy.”

NICER is an Astrophysics Mission of Opportunity within NASA's Explorer program, which provides frequent flight opportunities for world-class scientific investigations from space utilizing innovative, streamlined and efficient management approaches within the heliophysics and astrophysics science areas. NASA's Space Technology Mission Directorate supports the SEXTANT component of the mission, demonstrating pulsar-based spacecraft navigation.

Banner image: In this illustration of a newly discovered black hole named MAXI J1820+070, a black hole pulls material off a neighboring star and into an accretion disk. Above the disk is a region of subatomic particles called the corona. Credit: Aurore Simonnet and NASA’s Goddard Space Flight Center




Contacts and sources:
Jeanette Kazmierczak
NASA’s Goddard Space Flight Center


Citation: The corona contracts in a black-hole transient
E. Kara, J. F. Steiner, A. C. Fabian, E. M. Cackett, P. Uttley, R. A. Remillard, K. C. Gendreau, Z. Arzoumanian, D. Altamirano, S. Eikenberry, T. Enoto, J. Homan, J. Neilsen & A. L. Stevens
 http://dx.doi.org/10.1038/s41586-018-0803-x





Conundrum: Earth's Inner Core Is Much Younger Than the Planet

One enduring mystery about Earth is the age of its solid inner core.

Researchers have long recognized that Earth’s core plays a vital role in generating the magnetic shield that protects our planet from harmful solar wind—streams of radiation from the Sun—and makes Earth habitable. They differ, however, on estimates of when the inner core actually formed. Now, research from the University of Rochester indicates that Earth’s inner core is younger than scientists previously thought, offering new insight into the history of Earth’s magnetic shielding and planetary habitability.

In a paper published in Nature Geoscience, the researchers report that the inner core is only about 565 million years old—relatively young compared to the age of our 4.5-billion-year-old planet. “Until this data, the age of the inner core was uncertain,” says John Tarduno, a professor and chair of earth and environmental sciences at Rochester. “There’s this huge range of 2 billion years where scientists think the inner core could’ve formed. These are the first field-strength data from the younger part of the range of possibilities suggesting that the inner core is really young.”

INNER CORE, THEN AND NOW: Earth’s magnetic field is generated in its liquid iron core via a geodynamo. Researchers believe a weak geodynamo—and a weak magnetic shield—formed early in Earth’s history, but decreased for the next several billion years until a critical point 565 million years ago (left image). The researchers conjecture it was at this point in the geological time scale that the inner core began to form, increasing the strength of the geodynamo and the magnetic field (right image).

Credit: University of Rochester illustration / Michael Osadciw 

The geodynamo

Earth’s magnetic field is generated in its liquid iron core via a geodynamo—a process by which the kinetic energy of moving, electrically conducting fluid is converted to magnetic energy. Researchers believe a weak geodynamo—and a magnetic shield—formed fairly early in Earth’s history, shortly after the event that created Earth’s moon. For the next several billion years, the energy available to drive the dynamo decreased until a critical point 565 million years ago, when “the dynamo was on the point of collapse,” Tarduno says. Despite its drastically weakened state, however, the dynamo did not go away. The researchers conjecture that it was at this point in the geological time scale—or sometime shortly after—that the inner core began to form, giving strength to the geodynamo.

“This is a critical point in the evolution of the planet,” Tarduno says. “The field did not collapse because the inner core started to grow and provided a new energy source for the formation of the geodynamo.”

Unlocking the ancient magnetic field

In order to learn about the evolution of the geodynamo, the researchers measured the strength of the ancient magnetic field locked within single crystals of the mineral feldspar. The samples were collected from the Sept-Îles Complex in northern Quebec and contain tiny magnetic needles with “ideal recording properties,” Tarduno says. “The feldspar protects those needles from later alteration on geological time scales, so we get a much higher resolution record of the ancient strains in the magnetic field by measuring these single crystals.”

By studying the magnetism locked in ancient crystals—a field known as paleomagnetism—the researchers found that the intensity of the magnetic field was extremely low 565 million years ago, “lower than anything we’ve ever seen before,” Tarduno says. This indicates that the inner core may have formed around this time to restore strength to the dynamo and, in turn, to the magnetic field.
The inner core and planet habitability

Today, the geodynamo is powered by the growth of the inner core and is essential to the habitability of our planet, says Richard Bono, a former post-doctoral research associate in Tarduno’s lab, and now a post-doctoral researcher at the University of Liverpool. “Our magnetic field is part of what makes Earth a special planet, and, so far, the only one that has life. The evolution of Earth’s interior and the resulting geodynamo generated within plays a critical role in the preservation of life.”

An improved understanding of this evolution of Earth’s interior may provide researchers key clues, not only for planet formation and habitability on Earth, but in the search for life on exoplanets that resemble Earth.

“The same factors that drive dynamos on Earth might affect the magnetic shielding on exoplanets,” Tarduno says. “It could be the case that some planets don’t have long-lived dynamos and those planets would not have the magnetic shielding we have, meaning that their atmosphere and water might be removed.”

Besides being a critical point in the evolution of Earth, 565 million years ago was also a critical time for the major diversification of life on Earth, Tarduno says. “This is a time of Ediacaran fauna, the first large complex organisms we see in the geologic record. These are a fundamental change from the microbial life preserved in older rocks.”

Is there then some type of causal link between a stronger dynamo and a burst of life?

“It’s true that if we have lower magnetic shielding, we’d have more harmful radiation coming in to Earth,” Tarduno says. “That radiation might be harmful for DNA, for example, and there has been speculation that this could stimulate mutations.” Tarduno cautions, however, that there isn’t strong evidence of this correlation in the geological record, although the new data “will certainly stimulate more thought on this issue.”






Contacts and sources:
Lindsey Valich
University of Rochester


Citation:



Wednesday, January 30, 2019

Taking a Step Closer to Self-Aware Machines



Robots that are self-aware have been science fiction fodder for decades, and now we may finally be getting closer. Humans are unique in being able to imagine themselves--to picture themselves in future scenarios, such as walking along the beach on a warm sunny day. Humans can also learn by revisiting past experiences and reflecting on what went right or wrong. While humans and animals acquire and adapt their self-image over their lifetime, most robots still learn using human-provided simulators and models, or by laborious, time-consuming trial and error. Robots have not learned to simulate themselves the way humans do.

An image of the deformed robotic arm in multiple poses as it was collecting data through random motion.
Credit: Robert Kwiatkowski/Columbia Engineering

Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm--it has no clue what its shape is. After a brief period of "babbling," and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body. The work is published today in Science Robotics.

Video of Columbia Engineering robot that learns what it is, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot has no clue what its shape is. After a brief period of "babbling," and within about a day of intensive computing, the robot creates a self-simulation, which it can then use to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its body.
Credit: Robert Kwiatkowski/Columbia Engineering

To date, robots have operated by having a human explicitly model the robot. "But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it's essential that they learn to simulate themselves," says Hod Lipson, professor of mechanical engineering, and director of the Creative Machines lab, where the research was done.

For the study, Lipson and his PhD student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The self-model performed a pick-and-place task in a closed loop system that enabled the robot to recalibrate its original position between each step along the trajectory based entirely on the internal self-model. With the closed loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success.
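As a toy illustration of the idea--and only that: this hypothetical sketch uses a 2-link planar arm and linear regression as stand-ins for the real robot and its deep network--a self-model can be learned purely from randomly collected motion data:

```python
# Toy self-modeling sketch: learn joint angles -> end-effector position
# from random "babbling" data, with no prior kinematic model supplied.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # assumed link lengths of the toy arm (illustrative)

def forward_kinematics(q):
    """Ground-truth 'physical robot': joint angles (N, 2) -> positions (N, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# "Babbling" phase: collect random trajectories, as in the study's first step.
Q = rng.uniform(-np.pi, np.pi, size=(1000, 2))
X = forward_kinematics(Q)

# Self-model: least-squares regression on simple trigonometric features
# (a stand-in for the deep network used in the actual study).
def features(q):
    return np.column_stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                            np.cos(q.sum(axis=1)), np.sin(q.sum(axis=1))])

W, *_ = np.linalg.lstsq(features(Q), X, rcond=None)

# Evaluate the learned self-model on unseen poses.
Q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.abs(features(Q_test) @ W - forward_kinematics(Q_test)).max()
print(f"max self-model error: {err:.2e}")
```

On this toy arm the learned self-model matches the "physical" kinematics almost exactly, because the chosen features happen to span the true model; the study's deep network instead had to discover the robot's structure from scratch, converging to roughly four centimeters of error.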

Even in an open-loop system, which involves performing a task based entirely on the internal self-model, without any external feedback, the robot was able to complete the pick-and-place task with a 44 percent success rate. "That's like trying to pick up a glass of water with your eyes closed, a process difficult even for humans," observed the study's lead author Kwiatkowski, a PhD student in the computer science department who works in Lipson's lab.

The self-modeling robot was also used for other tasks, such as writing text using a marker. To test whether the self-model could detect damage to itself, the researchers 3D-printed a deformed part to simulate damage and the robot was able to detect the change and re-train its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.

An image of the intact robotic arm used to perform all of the tasks


Credit: Robert Kwiatkowski/Columbia Engineering

Lipson, who is also a member of the Data Science Institute, notes that self-imaging is key to enabling robots to move away from the confinements of so-called "narrow-AI" towards more general abilities. "This is perhaps what a newborn child does in its crib, as it learns what it is," he says. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness."

Lipson believes that robotics and AI may offer a fresh window into the age-old puzzle of consciousness. "Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress," he observes. "We still cloak our lack of understanding with subjective terms like 'canvas of reality,' but robots now force us to translate these vague notions into concrete algorithms and mechanisms."

Lipson and Kwiatkowski are aware of the ethical implications. "Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," they warn. "It's a powerful technology, but it should be handled with care."

The researchers are now exploring whether robots can model not just their own bodies, but also their own minds--whether robots can think about thinking.


Contacts and sources:
Holly Evarts
Columbia University School of Engineering and Applied Science


Citation: “Task-Agnostic Self-Modeling Machines.” Authors: Robert Kwiatkowski, Department of Computer Science, and Hod Lipson, Department of Mechanical Engineering, Columbia Engineering, and Data Science Institute, Columbia University. http://robotics.sciencemag.org/lookup/doi/10.1126/scirobotics.aau9354

The study was supported by the Defense Advanced Research Projects Agency (DARPA MTO HR0011-18-2-0020).



A 'Batman' for Hydrogen Fuel Cells

Scientists found a way to help fuel cells work better and stay clean in the cold

Hydrogen is considered one of the most promising clean energy sources of the future. Hydrogen fuel cell vehicles use hydrogen as fuel, offering high energy conversion efficiency and zero emissions. But the development of hydrogen fuel cells faces many challenges, including carbon monoxide (CO) poisoning of the fuel cell electrodes.

Currently, hydrogen is mainly derived from processes such as steam reforming of hydrocarbons (e.g., methanol and natural gas) and the water-gas shift reaction. The resulting hydrogen usually contains 0.5% to 2% of trace CO. As the "heart" of hydrogen fuel cell vehicles, fuel cell electrodes are easily "poisoned" by CO impurities, resulting in reduced cell performance and shortened life, which severely hampers the application of fuel cells in vehicles.

In a study published in Nature on January 31st, researchers at the University of Science and Technology of China (USTC) report advances in the development of hydrogen fuel cells that could expand their use in vehicles, especially at extreme temperatures such as those of cold winters.

The catalyst developed here shows great potential to thoroughly guard the fuel cell, not only during continuous operation but also during frequent cold-start periods, even under extremely cold conditions.

Credit: Junling Lu's research group

Earlier research identified a method called preferential oxidation of CO in hydrogen (PROX) as a promising way to remove trace amounts of CO from hydrogen on board by using catalysts. However, existing PROX catalysts work only at high temperatures (above room temperature) and within a narrow temperature range, making them impractical for civil applications, such as fuel cell vehicles, that must be reliable even in winter months (Fig. 1).

Now, a USTC team led by Junling Lu, professor at the Hefei National Laboratory for Physical Sciences at the Microscale, has designed a new structure of atomically dispersed iron hydroxide on platinum nanoparticles (Fig. 2) that efficiently purifies hydrogen fuel over a broad temperature range of 198-380 kelvin (approximately -75 °C to 107 °C, or -103 °F to 224 °F). They also found that the material provided thorough protection of fuel cells against CO poisoning during both frequent cold starts and continuous operation at extremely cold temperatures.
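For reference, the unit conversions behind that temperature range are straightforward to check:

```python
# Convert kelvin to Celsius and Fahrenheit to verify the quoted range.
def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

for k in (198, 380):
    print(f"{k} K = {kelvin_to_celsius(k):.0f} °C = {kelvin_to_fahrenheit(k):.0f} °F")
```

This reproduces the endpoints of the range: 198 K is about -75 °C (-103 °F) and 380 K is about 107 °C (224 °F).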

"These findings might greatly accelerate the arrival of the hydrogen fuel cell vehicle era," said Prof. Lu.

"Our ultimate goal is to develop a cost-effective catalyst with high activity and selectivity that provides continuous on-board fuel cell protection and one that enables complete and 100% selective CO removal in a fuel cell that can be used for broader purposes," Prof. Lu adds.

One referee of the article commented: "When comparing with other catalyst systems reported in the literature, this reverse single-atom catalyst appears the best in terms of activity, selectivity, and stability in CO2-containing streams."



Contacts and sources:
Jane Fan Qiong
University of Science and Technology of China



Ancient Mongolian Skull Is the Earliest Modern Human Yet Found in the Region



A much debated ancient human skull from Mongolia has been dated and genetically analysed, showing that it is the earliest modern human yet found in the region, according to new research from the University of Oxford. Radiocarbon dating and DNA analysis have revealed that the only Pleistocene hominin fossil discovered in Mongolia, initially called Mongolanthropus, is in reality a modern human who lived approximately 34,000 to 35,000 years ago.

The skullcap, found in the Salkhit Valley in northeast Mongolia, is, to date, the only Pleistocene hominin fossil found in the country.

The Salkhit skullcap.

Credit: © Maud Dahlem, Muséum de Toulouse (France)

The skullcap is mostly complete and includes the brow ridges and nasal bones. The presence of archaic features has led in the past to the specimen being linked with archaic hominin species such as Homo erectus and Neanderthals. Previous research suggested ages for the specimen ranging from the Early Middle Pleistocene to the terminal Late Pleistocene.

The Oxford team re-dated the specimen to 34,950 - 33,900 years ago. This is around 8,000 years older than the initial radiocarbon dates obtained on the same specimen.

To make this discovery, the Oxford team employed a new optimized technique for radiocarbon dating of heavily contaminated bones. This method relies on extracting just one of the amino acids from the collagen present in the bone. The researchers targeted the amino acid hydroxyproline (HYP), which accounts for 13% of the carbon in mammalian collagen. Dating this amino acid allows for a drastic improvement in the removal of modern contaminants from the specimens.
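For background (this is the standard convention, not a calculation from the paper itself), a conventional radiocarbon age is derived from the fraction of ¹⁴C surviving in the dated material, using the Libby mean-life of 8033 years; a minimal sketch:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; derived from the conventional Libby half-life of 5568 years

def radiocarbon_age(f14c):
    """Conventional radiocarbon age (years BP) from the surviving 14C fraction f14c."""
    return LIBBY_MEAN_LIFE * math.log(1.0 / f14c)

# A sample retaining half its original 14C dates to one Libby half-life:
print(round(radiocarbon_age(0.5)))   # 5568
print(round(radiocarbon_age(0.25)))  # 11136 (two half-lives)
```

This also shows why contamination matters: modern carbon raises the measured ¹⁴C fraction, which makes a sample appear younger; removing it, as the hydroxyproline method does, can push a date substantially older, as happened here.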

The new and reliable radiocarbon date obtained for the specimen shows that this individual dates to the same period as the Early Upper Paleolithic stone tool industry in Mongolia, which is usually associated with modern humans. The age is later than the earliest evidence for anatomically modern humans in greater Eurasia, which could be in excess of 100,000 years in China according to some researchers.

This is a view of the find spot in the Salkhit Valley, Mongolia.

Credit: © Institute of History and Archaeology & Academy of Sciences (Mongolia)

This new result also suggests that there was still a significant amount of unremoved contamination in the sample during the original radiocarbon measurements. Additional analyses performed in collaboration with scientists at the University of Pisa (Italy) confirmed that the sample was heavily contaminated by the resin that had been used to cast the specimen after its discovery.

"The research we have conducted shows again the great benefits of developing improved chemical methods for dating prehistoric material that has been contaminated, either in the site after burial, or in the museum or laboratory for conservation purposes," said Dr Thibaut Devièse, first author on the new paper, who led the method developments in compound-specific analysis at the University of Oxford. "Robust sample pretreatment is crucial in order to build reliable chronologies in archaeology."

DNA analyses were also performed on the hominin bones by Professor Svante Pääbo's team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Diyendo Massilani and colleagues reconstructed the complete mitochondrial genome of the specimen. It falls within a group of modern human mtDNAs (haplogroup N) that is widespread in Eurasia today, confirming the view of some researchers that the cranium is indeed a modern human. Further nuclear DNA work is underway to shed further light on the genetics of the cranium.

"This enigmatic cranium has puzzled researchers for some time," said Professor Tom Higham, who leads the PalaeoChron research group at the University of Oxford. "A combination of cutting-edge science, including radiocarbon dating and genetics, has now shown that these are the remains of a modern human, and the results fit perfectly within the archaeological record of Mongolia, which links modern humans to the Early Upper Paleolithic industry in this part of the world."


Contacts and sources:
Lanisha Butterfield
University of Oxford
Citation:


Compound-specific radiocarbon dating and mitochondrial DNA analysis of the Pleistocene hominin from Salkhit Mongolia
Thibaut Devièse, Diyendo Massilani, Seonbok Yi, Daniel Comeskey, Sarah Nagel, Birgit Nickel, Erika Ribechini, Jungeun Lee, Damdinsuren Tseveendorj, Byambaa Gunchinsuren, Matthias Meyer, Svante Pääbo & Tom Higham. Nature Communications, volume 10, Article number: 274 (2019).
https://www.nature.com/articles/s41467-018-08018-8





Edible, Expanding Pill Monitors the Stomach for Up to a Month



A soft, squishy device could potentially track ulcers, cancers, and other GI conditions over the long term.

MIT engineers have designed an ingestible, Jell-O-like pill that, upon reaching the stomach, quickly swells to the size of a soft, squishy ping-pong ball big enough to stay in the stomach for an extended period of time.

The ingestible hydrogel device as a small pill can swell to a large soft sphere, and deswell to a floppy membrane.
Image: Xinyue Liu, Shaoting Lin

The inflatable pill is embedded with a sensor that continuously tracks the stomach’s temperature for up to 30 days. If the pill needs to be removed from the stomach, a patient can drink a solution of calcium that triggers the pill to quickly shrink to its original size and pass safely out of the body.

The new pill is made from two types of hydrogels — mixtures of polymers and water that resemble the consistency of Jell-O. The combination enables the pill to quickly swell in the stomach while remaining impervious to the stomach’s churning acidic environment.

The ingestible hydrogel device swells in water with high speed and high ratio.
Image: Xinyue Liu

The hydrogel-based design is softer, more biocompatible, and longer-lasting than current ingestible sensors, which either can only remain in the stomach for a few days, or are made from hard plastics or metals that are orders of magnitude stiffer than the gastrointestinal tract.

“The dream is to have a Jell-O-like smart pill, that once swallowed stays in the stomach and monitors the patient’s health for a long time such as a month,” says Xuanhe Zhao, associate professor of mechanical engineering at MIT.

Zhao and senior collaborator Giovanni Traverso, a visiting scientist who will join the MIT faculty in 2019, along with lead authors Xinyue Liu, Christoph Steiger, and Shaoting Lin, have published their results today in Nature Communications.



Pills, ping-pongs, and pufferfish

The design for the new inflatable pill is inspired by the defense mechanisms of the pufferfish, or blowfish. Normally a slow-moving species, the pufferfish will quickly inflate when threatened, like a spiky balloon. It does so by sucking in a large amount of water, fast.

The puffer’s tough, fast-inflating body was exactly what Zhao was looking to replicate in hydrogel form. The team had been looking for ways to design a hydrogel-based pill to carry sensors into the stomach and stay there to monitor, for example, vital signs or disease states for a relatively long period of time.

They realized that if a pill were small enough to be swallowed and passed down the esophagus, it would also be small enough to pass out of the stomach, through an opening known as the pylorus. To keep it from exiting the stomach, the group would have to design the pill to quickly swell to the size of a ping-pong ball.

“Currently, when people try to design these highly swellable gels, they usually use diffusion, letting water gradually diffuse into the hydrogel network,” Liu says. “But to swell to the size of a ping-pong ball takes hours, or even days. It’s longer than the emptying time of the stomach.”

The researchers instead looked for ways to design a hydrogel pill that could inflate much more quickly, at a rate comparable to that of a startled pufferfish.



A new hydrogel device swells to more than twice its size in just a few minutes in water.

An ingestible tracker

The design they ultimately landed on resembles a small, Jell-O-like capsule, made from two hydrogel materials. The inner material contains sodium polyacrylate — superabsorbent particles that are used in commercial products such as diapers for their ability to rapidly soak up liquid and inflate.

The researchers realized, however, that if the pill were made only from these particles, it would immediately break apart and pass out of the stomach as individual beads. So they designed a second, protective hydrogel layer to encapsulate the fast-swelling particles. This outer membrane is made from a multitude of nanoscopic, crystalline chains, each folded over another, in a nearly impenetrable, gridlock pattern — an “anti-fatigue” feature that the researchers reported in an earlier paper.

“You would have to crack through many crystalline domains to break this membrane,” Lin says. “That’s what makes this hydrogel extremely robust, and at the same time, soft.”

In the lab, the researchers dunked the pill in various solutions of water and fluid resembling gastric juices, and found the pill inflated to 100 times its original size in about 15 minutes — much faster than existing swellable hydrogels. Once inflated, Zhao says the pill is about the softness of tofu or Jell-O, yet surprisingly strong.
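If the 100-fold figure refers to volume (an assumption; the swelling metric isn't specified here), the corresponding change in linear size follows from the cube root, since diameter scales as the cube root of volume:

```python
volume_ratio = 100.0

# Diameter scales as the cube root of volume for an isotropically swelling sphere.
linear_ratio = volume_ratio ** (1.0 / 3.0)

print(f"A 100x volume increase is roughly a {linear_ratio:.1f}x increase in diameter")
# A 100x volume increase is roughly a 4.6x increase in diameter
```

That is consistent with a pill-sized capsule growing to roughly ping-pong-ball size.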

To test the pill’s toughness, the researchers mechanically squeezed it thousands of times, at forces even greater than what the pill would experience from regular contractions in the stomach.

“The stomach applies thousands to millions of cycles of load to grind food down,” Lin explains. “And we found that even when we make a small cut in the membrane, and then stretch and squeeze it thousands of times, the cut does not grow larger. Our design is very robust.”

The researchers further determined that a solution of calcium ions, at a concentration higher than what’s in milk, can shrink the swollen particles. This triggers the pill to deflate and pass out of the stomach.

Finally, Steiger and Traverso embedded small, commercial temperature sensors into several pills, and fed the pills to pigs, whose stomachs and gastrointestinal tracts are very similar to those of humans. The team later retrieved the temperature sensors from the pigs' stool and plotted the sensors' temperature measurements over time. They found that the sensor was able to accurately track the animals' daily activity patterns for up to 30 days.

“Ingestible electronics is an emerging area to monitor important physiological conditions and biomarkers,” says Hanqing Jiang, a professor of mechanical and aerospace engineering at Arizona State University, who was not involved in the work. “Conventional ingestible electronics are made of non-bio-friendly materials. Professor Zhao’s group is making a big leap on the development of biocompatible and soft but tough gel-based ingestible devices, which significantly extends the horizon of ingestible electronics. It also represents a new application of tough hydrogels that the group has been devoted to for years.”

Down the road, the researchers envision the pill may safely deliver a number of different sensors to the stomach to monitor, for instance, pH levels, or signs of certain bacteria or viruses. Tiny cameras may also be embedded into the pills to image the progress of tumors or ulcers, over the course of several weeks. Zhao says the pill might also be used as a safer, more comfortable alternative to the gastric balloon diet, a form of diet control in which a balloon is threaded through a patient’s esophagus and into the stomach, using an endoscope.

“With our design, you wouldn’t need to go through a painful process to implant a rigid balloon,” Zhao says. “Maybe you can take a few of these pills instead, to help fill out your stomach, and lose weight. We see many possibilities for this hydrogel device.”

This research was supported, in part, by the National Science Foundation, National Institutes of Health, and the Bill and Melinda Gates Foundation.
Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology

Citation: Ingestible hydrogel device.
Xinyue Liu, Christoph Steiger, Shaoting Lin, German Alberto Parada, Ji Liu, Hon Fai Chan, Hyunwoo Yuk, Nhi V. Phan, Joy Collins, Siddartha Tamang, Giovanni Traverso, Xuanhe Zhao. Nature Communications, 2019; 10 (1) DOI: 10.1038/s41467-019-08355-2


Hubble Sees Plunging Galaxy Bleeding Its Gas

The spiral galaxy D100, on the far right of this Hubble Space Telescope image, is being stripped of its gas as it plunges toward the center of the giant Coma galaxy cluster. The dark brown streaks near D100's central region are silhouettes of dust escaping from the galaxy. The dust is part of a long, thin tail, also composed of hydrogen gas, that stretches like taffy from the galaxy's core. Hubble, however, sees only the dust. The telescope's sharp vision also uncovered the blue glow of clumps of young stars in the tail. The brightest clump in the middle of the tail (the blue feature) contains at least 200,000 stars, fueled by the ongoing loss of hydrogen gas from D100. The gas-loss process occurs when D100, due to the pull of gravity, begins falling toward the dense center of the massive Coma cluster, consisting of thousands of galaxies. 

During its plunge, D100 plows through intergalactic material like a boat plowing through water. This material pushes gas and dust out of the galaxy. Once D100 loses all of its hydrogen gas, its star-making fuel, it can no longer create new stars. The gas-stripping process in the beleaguered galaxy began roughly 300 million years ago. The reddish galaxies in the image contain older stars between 500 million and 13 billion years old. One of those galaxies is D99, just below and to the left of D100. It was stripped of its gas by the same process as the one that is siphoning gas from D100. The blue galaxies contain a mixture of young and old stars. Some of the stars are less than 500 million years old. The Coma cluster is located 330 million light-years from Earth. The Hubble image is a blend of several exposures taken in visible light between May 10 and July 10, 2016, and November 2017 to January 2018, by the Advanced Camera for Surveys.
Credit: NASA, ESA, and M. Sun (University of Alabama), and W. Cramer and J. Kenney (Yale University)



The rough-and-tumble environment near the center of the massive Coma galaxy cluster is no match for a wayward spiral galaxy. New images from NASA's Hubble Space Telescope show a spiral galaxy being stripped of its gas as it plunges toward the cluster's center. A long, thin streamer of gas and dust stretches like taffy from the galaxy's core and on into space. Eventually, the galaxy, named D100, will lose all of its gas and become a dead relic, deprived of the material to create new stars and shining only by the feeble glow of old, red stars.

"This galaxy stands out as a particularly extreme example of processes common in massive clusters, where a galaxy goes from being a healthy spiral full of star formation to a 'red and dead galaxy,'" said William Cramer of Yale University in New Haven, Connecticut, leader of the team using the Hubble observations. "The spiral arms disappear, and the galaxy is left with no gas and only old stars. This phenomenon has been known about for several decades, but Hubble provides the best imagery of galaxies undergoing this process."

Called "ram pressure stripping," the process occurs when a galaxy, due to the pull of gravity, falls toward the dense center of a massive cluster of thousands of galaxies, which swarm around like a hive of bees. During its plunge, the galaxy plows through intergalactic material, like a boat moving through water. The material pushes gas and dust from the galaxy. Once the galaxy loses all of its hydrogen gas — fuel for starbirth — it meets an untimely death because it can no longer create new stars. The gas-stripping process in D100 began roughly 300 million years ago.

In the massive Coma cluster this violent gas-loss process occurs in many galaxies. But D100 is unique in several ways. Its long, thin tail is its most unusual feature. The tail, a mixture of dust and hydrogen gas, extends nearly 200,000 light-years, about the width of two Milky Way galaxies. But the pencil-like structure is comparatively narrow, only 7,000 light-years wide.
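As a quick check on the quoted proportions (taking roughly 100,000 light-years as a Milky Way disk diameter, a common round figure assumed here), the tail's aspect ratio can be computed directly:

```python
tail_length_ly = 200_000
tail_width_ly = 7_000
milky_way_diameter_ly = 100_000  # rough, commonly quoted figure (assumption)

print(f"Aspect ratio: about {tail_length_ly / tail_width_ly:.0f} to 1")
print(f"Length in Milky Way diameters: {tail_length_ly / milky_way_diameter_ly:.0f}")
# Aspect ratio: about 29 to 1
# Length in Milky Way diameters: 2
```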

"The tail is remarkably well-defined, straight and smooth, and has clear edges," explained team member Jeffrey Kenney, also of Yale University. "This is a surprise because a tail like this is not seen in most computer simulations. Most galaxies undergoing this process are more of a mess. The clean edges and filamentary structures of the tail suggest that magnetic fields play a prominent role in shaping it. Computer simulations show that magnetic fields form filaments in the tail's gas. With no magnetic fields, the tail is more clumpy than filamentary."

The researchers' main goal was to study star formation along the tail. Hubble's sharp vision uncovered the blue glow of clumps of young stars. The brightest clump in the middle of the tail contains at least 200,000 stars, triggered by the ongoing gas loss from the galaxy. However, based on the amount of glowing hydrogen gas contained in the tail, the team had expected Hubble to uncover three times more stars than it detected.

The Subaru Telescope in Hawaii observed the glowing tail in 2007 during a survey of the Coma cluster's galaxies. But the astronomers needed Hubble observations to confirm that the hot hydrogen gas contained in the tail was a signature of star formation.

"Without the depth and resolution of Hubble, it's hard to say if the glowing hydrogen-gas emission is coming from stars in the tail or if it's just from the gas being heated," Cramer said. "These Hubble visible-light observations are the first and best follow-up of the Subaru survey."

The Hubble data show that the gas-stripping process began on the outskirts of the galaxy and is moving in towards the center, which is typical in this type of mass loss. Based on the Hubble images, the gas has been cleared out all the way down to the central 6,400 light-years.

Within that central region, there is still a lot of gas, as seen in a burst of star formation. "This region is the only place in the galaxy where gas exists and star formation is taking place," Cramer said. "But now that gas is being stripped out of the center, forming the long tail."

A long streamer of hydrogen gas is being stripped from the spiral galaxy D100 as it plunges toward the center of the giant Coma galaxy cluster. This wide view is a composite of the Hubble Space Telescope's visible-light view of the galaxy combined with a photo of a glowing red streamer of hydrogen gas taken by the Subaru Telescope in Hawaii. The narrow funnel-shaped feature emanating from the galaxy's center is the red glow of hydrogen gas. This glowing tail extends for nearly 200,000 light-years, but the pencil-like structure is comparatively narrow, only 7,000 light-years wide. The tail's clean edges and smooth structure suggest that magnetic fields play a prominent role in shaping it. Hubble's sharp vision uncovered the blue, glowing clumps of young stars in the tail. The brightest clump, near the middle of the tail [the blue feature], contains at least 200,000 stars, triggered by the ongoing gas loss from the galaxy. The gas-loss process occurs when a galaxy, due to the pull of gravity, falls toward the dense center of a massive cluster of thousands of galaxies. During its plunge, the galaxy plows through intergalactic material, like a boat moving through water. This material pushes gas and dust out of the galaxy.

Once the galaxy loses all of its gas, its star-making fuel, it can no longer create new stars. The gas-stripping process in D100 began roughly 300 million years ago. The reddish galaxies in the image contain older stars between 500 million and 13 billion years old. One of those galaxies is D99, just below and to the left of D100. It was stripped of its gas by the same process as the one that is siphoning gas from D100. The blue galaxies contain a mixture of young and old stars. Some of the stars are less than 500 million years old. The Coma cluster is located 330 million light-years from Earth. This image is a blend of several exposures taken in visible light between May 10 and July 10, 2016, and November 2017 to January 2018, by Hubble's Advanced Camera for Surveys. Researchers overlaid an image of the glowing, red, hydrogen tail, taken in visible light between April 28 and May 3, 2006, by the Subaru Telescope's Subaru Prime Focus Camera (Suprime-Cam) in Hawaii.
Credit: Hubble: NASA, ESA, M. Sun (University of Alabama), and W. Cramer and J. Kenney (Yale University); Subaru: M. Yagi (National Astronomical Observatory of Japan)

Adding to this compelling narrative is another galaxy in the image that foreshadows D100's fate. The object, named D99, began as a spiral galaxy similar in mass to D100. It underwent the same violent gas-loss process as D100 is now undergoing, and is now a dead relic. All of the gas was siphoned from D99 between 500 million and 1 billion years ago. Its spiral structure has mostly faded away, and its stellar inhabitants consist of old, red stars. "D100 will look like D99 in a few hundred million years," Kenney said.

The Coma cluster is located 330 million light-years from Earth.

The team's results appear online in the January 8, 2019, issue of The Astrophysical Journal.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.


Contacts and sources:
Donna Weaver / Ray Villard
 Space Telescope Science Institute (STScI)