Wednesday, March 24, 2021

Reading Minds with Ultrasound: A Less-Invasive Technique to Decode the Brain's Intentions



What is happening in your brain as you are scrolling through this page? In other words, which areas of your brain are active, which neurons are talking to which others, and what signals are they sending to your muscles?

Mapping neural activity to corresponding behaviors is a major goal for neuroscientists developing brain–machine interfaces (BMIs): devices that read and interpret brain activity and transmit instructions to a computer or machine. Though this may seem like science fiction, existing BMIs can, for example, connect a paralyzed person with a robotic arm; the device interprets the person's neural activity and intentions and moves the robotic arm correspondingly.

Details of the vasculature in the non-human primate brain, imaged using functional ultrasound.
Credit: S. Norman


A major limitation for the development of BMIs is that the devices require invasive brain surgery to read out neural activity. But now, a collaboration at Caltech has developed a new type of minimally invasive BMI to read out brain activity corresponding to the planning of movement. Using functional ultrasound (fUS) technology, it can accurately map brain activity from precise regions deep within the brain at a resolution of 100 micrometers (the size of a single neuron is approximately 10 micrometers).


A diagram illustrating how a new type of ultrasound is used to image a motor-planning region of the brain in non-human primates. The neural activity shown in those brain images was then decoded to correspond with movements. This process was shown to accurately predict movements even before they happened.

Credit: S. Norman

The new fUS technology is a major step in creating less invasive, yet still highly capable, BMIs.

"Invasive forms of brain–machine interfaces can already give movement back to those who have lost it due to neurological injury or disease," says Sumner Norman, postdoctoral fellow in the Andersen lab and co-first author on the new study. "Unfortunately, only a select few with the most severe paralysis are eligible and willing to have electrodes implanted into their brain. Functional ultrasound is an incredibly exciting new method to record detailed brain activity without damaging brain tissue. We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement. What's most exciting is that fUS is a young technique with huge potential—this is just our first step in bringing high performance, less invasive BMI to more people."

The new study is a collaboration between the laboratories of Richard Andersen, James G. Boswell Professor of Neuroscience and Leadership Chair and director of the Tianqiao and Chrissy Chen Brain–Machine Interface Center in the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech; and of Mikhail Shapiro, professor of chemical engineering and Heritage Medical Research Institute Investigator. Shapiro is an affiliated faculty member with the Chen Institute.

A paper describing the work appears in the journal Neuron on March 22.

In general, all tools for measuring brain activity have drawbacks. Implanted electrodes (electrophysiology) can very precisely measure activity at the level of single neurons, but, of course, require the implantation of those electrodes into the brain. Non-invasive techniques like functional magnetic resonance imaging (fMRI) can image the entire brain but require bulky and expensive machinery. Electroencephalography (EEG) does not require surgery but can only measure activity at low spatial resolution.

Ultrasound works by emitting pulses of high-frequency sound and measuring how those sound vibrations echo throughout a substance, such as the various tissues of the human body. Sound travels at different speeds through these tissue types and reflects at the boundaries between them. This technique is commonly used to take images of a fetus in utero, and for other diagnostic imaging.

Ultrasound can also "hear" the internal motion of organs. For example, red blood cells, like a passing ambulance, will increase in pitch as they approach the source of the ultrasound waves, and decrease as they flow away. Measuring this phenomenon allowed the researchers to record tiny changes in the brain's blood flow down to 100 micrometers (on the scale of the width of a human hair).
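For readers who want the quantitative version of the ambulance analogy, the pitch change is the standard pulsed-Doppler shift. The numbers below (transmit frequency, blood speed, beam angle) are assumed, typical values for illustration, not parameters from the study.

```python
# Doppler shift of ultrasound echoing off moving blood (illustrative values only).
import math

f0 = 15e6      # transmit frequency in Hz (15 MHz; an assumed, plausible fUS value)
c = 1540.0     # speed of sound in soft tissue, m/s
v = 0.005      # blood velocity in a small vessel, m/s (5 mm/s; assumed)
theta = 0.0    # angle between the beam and the flow, radians (best case)

# Pulsed-Doppler relation: the factor of 2 appears because the wave
# travels to the moving scatterer and back.
f_shift = 2 * v * f0 * math.cos(theta) / c
print(f"Doppler shift: {f_shift:.1f} Hz")  # ~97 Hz for these values
```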

"When a part of the brain becomes more active, there's an increase in blood flow to the area. A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain's blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?" Shapiro says. "The answer is yes. This technique produced detailed images of the dynamics of neural signals in our target region that could not be seen with other non-invasive techniques like fMRI. We produced a level of detail approaching electrophysiology, but with a far less invasive procedure."

The collaboration began when Shapiro invited Mickael Tanter, a pioneer in functional ultrasound and director of Physics for Medicine Paris (ESPCI Paris Sciences et Lettres University, Inserm, CNRS), to give a seminar at Caltech in 2015. Vasileios Christopoulos, a former Andersen lab postdoctoral scholar (now an assistant professor at UC Riverside), attended the talk and proposed a collaboration. Shapiro, Andersen, and Tanter then received an NIH BRAIN Initiative grant to pursue the research. The work at Caltech was led by Norman, former Shapiro lab postdoctoral fellow David Maresca (now assistant professor at Delft University of Technology), and Christopoulos. Along with Norman, Maresca and Christopoulos are co-first authors on the new study.

The technology was developed with the aid of non-human primates, who were taught to do simple tasks that involved moving their eyes or arms in certain directions when presented with certain cues. As the primates completed the tasks, the fUS measured brain activity in the posterior parietal cortex (PPC), a region of the brain involved in planning movement. The Andersen lab has studied the PPC for decades and has previously created maps of brain activity in the region using electrophysiology. To validate the accuracy of fUS, the researchers compared brain imaging activity from fUS to previously obtained detailed electrophysiology data.

Next, through the support of the T&C Chen Brain–Machine Interface Center at Caltech, the team aimed to see if the activity-dependent changes in the fUS images could be used to decode the intentions of the non-human primate, even before it initiated a movement. The ultrasound imaging data and the corresponding tasks were then processed by a machine-learning algorithm, which learned what patterns of brain activity correlated with which tasks. Once the algorithm was trained, it was presented with ultrasound data collected in real time from the non-human primates.

The algorithm predicted, within a few seconds, what behavior the non-human primate was going to carry out (an eye movement or a reach), the direction of the movement (left or right), and when it planned to make the movement.
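The article does not spell out the decoding pipeline, so the sketch below is purely illustrative: a generic dimensionality-reduction-plus-linear-classifier recipe common in single-trial neural decoding. The data shapes, the PCA size, and the classifier choice are assumptions, not the authors' implementation.

```python
# Minimal sketch of decoding movement intention from imaging frames (assumed pipeline).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 200 trials, each a flattened 64x64 activity map,
# labeled 0 = leftward movement plan, 1 = rightward movement plan.
X = rng.normal(size=(200, 64 * 64))
y = rng.integers(0, 2, size=200)

# Reduce dimensionality, then apply a linear classifier.
decoder = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())

# Cross-validated accuracy; with real data, above-chance accuracy on frames
# recorded before movement onset would mean intention is being decoded.
print(cross_val_score(decoder, X, y, cv=5).mean())
```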

"The first milestone was to show that ultrasound could capture brain signals related to the thought of planning a physical movement," says Maresca, who has expertise in ultrasound imaging. "Functional ultrasound imaging manages to record these signals with 10 times more sensitivity and better resolution than functional MRI. This finding is at the core of the success of brain–machine interfacing based on functional ultrasound."

"Current high-resolution brain–machine interfaces use electrode arrays that require brain surgery, which includes opening the dura, the strong fibrous membrane between the skull and the brain, and implanting the electrodes directly into the brain. But ultrasound signals can pass through the dura and brain non-invasively. Only a small, ultrasound-transparent window needs to be implanted in the skull; this surgery is significantly less invasive than that required for implanting electrodes," says Andersen.

Though this research was carried out in non-human primates, a collaboration is in the works with Dr. Charles Liu, a neurosurgeon at USC, to study the technology with human volunteers who, because of traumatic brain injuries, have had a piece of skull removed. Because ultrasound waves can pass unaffected through these "acoustic windows," it will be possible to study how well functional ultrasound can measure and decode brain activity in these individuals.

The paper is titled "Single-trial decoding of movement intentions using functional ultrasound neuroimaging." Additional co-authors are Caltech graduate student Whitney Griggs and Charlie Demene of Paris Sciences et Lettres University and INSERM Technology Research Accelerator in Biomedical Ultrasound in Paris, France. Funding was provided by a Della Martin Postdoctoral Fellowship, a Human Frontiers Science Program Cross-Disciplinary Postdoctoral Fellowship, the UCLA–Caltech Medical Science Training Program, the National Institutes of Health BRAIN Initiative, the Tianqiao and Chrissy Chen Brain–Machine Interface Center, the Boswell Foundation, and the Heritage Medical Research Institute.



Contacts and sources:
Lori Dajose
California Institute of Technology


Publication: Single-trial decoding of movement intentions using functional ultrasound neuroimaging.
Sumner L. Norman, David Maresca, Vassilios N. Christopoulos, Whitney S. Griggs, Charlie Demene, Mickael Tanter, Mikhail G. Shapiro, Richard A. Andersen. Neuron, 2021; DOI: 10.1016/j.neuron.2021.03.003



‘Zombie’ Cells Come to Life after the Death of the Human Brain

In the hours after we die, certain cells in the human brain are still active. Some cells even increase their activity and grow to gargantuan proportions, according to new research from the University of Illinois Chicago.

In a newly published study in the journal Scientific Reports, the UIC researchers analyzed gene expression in fresh brain tissue — which was collected during routine brain surgery — at multiple times after removal to simulate the post-mortem interval and death. They found that gene expression in some cells actually increased after death.

‘Zombie’ cells come to life after the death of the human brain.
 (Image: Dr. Jeffrey Loeb/UIC).


These ‘zombie genes’ — those that increased expression after the post-mortem interval — were specific to one type of cell: inflammatory cells called glial cells. The researchers observed that glial cells grow and sprout long arm-like appendages for many hours after death.

“That glial cells enlarge after death isn’t too surprising given that they are inflammatory and their job is to clean things up after brain injuries like oxygen deprivation or stroke,” said Dr. Jeffrey Loeb, the John S. Garvin Professor and head of neurology and rehabilitation at the UIC College of Medicine and corresponding author on the paper.

What’s significant, Loeb said, is the implications of this discovery — most research studies that use postmortem human brain tissues to find treatments and potential cures for disorders such as autism, schizophrenia and Alzheimer’s disease, do not account for the post-mortem gene expression or cell activity.

“Most studies assume that everything in the brain stops when the heart stops beating, but this is not so,” Loeb said. “Our findings will be needed to interpret research on human brain tissues. We just haven’t quantified these changes until now.”

Loeb and his team noticed that the global pattern of gene expression in fresh human brain tissue didn’t match any of the published reports of postmortem brain gene expression from people without neurological disorders or from people with a wide variety of neurological disorders, ranging from autism to Alzheimer’s.

“We decided to run a simulated death experiment by looking at the expression of all human genes, at time points from 0 to 24 hours, from a large block of recently collected brain tissues, which were allowed to sit at room temperature to replicate the postmortem interval,” Loeb said.

 Jeffrey Loeb  
Photo: Jenny Fontaine/UIC

Loeb and colleagues are at a particular advantage when it comes to studying brain tissue. Loeb is director of the UI NeuroRepository, a bank of human brain tissues from patients with neurological disorders who have consented to having tissue collected and stored for research either after they die, or during standard of care surgery to treat disorders such as epilepsy. For example, during certain surgeries to treat epilepsy, epileptic brain tissue is removed to help eliminate seizures. Not all of the tissue is needed for pathological diagnosis, so some can be used for research. This is the tissue that Loeb and colleagues analyzed in their research.

They found that about 80% of the genes analyzed remained relatively stable for 24 hours — their expression didn’t change much. These included genes often referred to as housekeeping genes that provide basic cellular functions and are commonly used in research studies to show the quality of the tissue. Another group of genes, known to be present in neurons and shown to be intricately involved in human brain activity such as memory, thinking and seizure activity, rapidly degraded in the hours after death. These genes are important to researchers studying disorders like schizophrenia and Alzheimer’s disease, Loeb said.

A third group of genes — the ‘zombie genes’ — increased their activity at the same time the neuronal genes were ramping down. The pattern of post-mortem changes peaked at about 12 hours.

“Our findings don’t mean that we should throw away human tissue research programs, it just means that researchers need to take into account these genetic and cellular changes, and reduce the post-mortem interval as much as possible to reduce the magnitude of these changes,” Loeb said. “The good news from our findings is that we now know which genes and cell types are stable, which degrade, and which increase over time so that results from postmortem brain studies can be better understood.”

Fabien Dachet, Tibor Valyi-Nagy, Kunwar Narayan, Anna Serafini and Gayatry Mohapatra of UIC; James Brown and Susan Celniker of Lawrence Berkeley National Laboratory; Nathan Boley of the University of California, Berkeley; and Thomas Gingeras of Cold Spring Harbor Laboratory are co-authors on the paper.

This research was funded by grants from the National Institutes of Health (R01NS109515, R56NS083527, and UL1TR002003).


Contacts and sources:
Sharon Parmet
University of Illinois at Chicago




Publication: Selective time-dependent changes in activity and cell-specific gene expression in human postmortem brain.
Fabien Dachet, James B. Brown, Tibor Valyi-Nagy, Kunwar D. Narayan, Anna Serafini, Nathan Boley, Thomas R. Gingeras, Susan E. Celniker, Gayatry Mohapatra, Jeffrey A. Loeb. Scientific Reports, 2021; 11 (1) DOI: 10.1038/s41598-021-85801-6


Big Breakthrough for ’Massless’ Energy Storage: Load-Bearing Batteries Are Coming



Researchers from Chalmers University of Technology have produced a structural battery that performs ten times better than all previous versions. It contains carbon fibre that serves simultaneously as an electrode, conductor, and load-bearing material. Their latest research breakthrough paves the way for essentially ’massless’ energy storage in vehicles and other technology.

Credit: Chalmers University of Technology

The batteries in today's electric cars constitute a large part of the vehicles' weight, without fulfilling any load-bearing function. A structural battery, on the other hand, is one that works as both a power source and as part of the structure – for example, in a car body. This is termed ‘massless’ energy storage, because in essence the battery’s weight vanishes when it becomes part of the load-bearing structure. Calculations show that this type of multifunctional battery could greatly reduce the weight of an electric vehicle.

The development of structural batteries at Chalmers University of Technology has proceeded through many years of research, including previous discoveries involving certain types of carbon fibre. In addition to being stiff and strong, they also have a good ability to store electrical energy chemically. This work was named by Physics World as one of 2018’s ten biggest scientific breakthroughs.

The first attempt to make a structural battery was made as early as 2007, but it has so far proven difficult to manufacture batteries with both good electrical and mechanical properties.

But now the development has taken a real step forward, with researchers from Chalmers, in collaboration with KTH Royal Institute of Technology in Stockholm, presenting a structural battery with properties that far exceed anything yet seen, in terms of electrical energy storage, stiffness and strength. Its multifunctional performance is ten times higher than previous structural battery prototypes.

The battery has an energy density of 24 Wh/kg, approximately 20 percent of the capacity of comparable lithium-ion batteries currently available. But since the weight of the vehicle can be greatly reduced, less energy will be required to drive an electric car, for example, and the lower energy density also results in increased safety. And with a stiffness of 25 GPa, the structural battery can really compete with many other commonly used construction materials.
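The "massless" logic can be made concrete with a back-of-the-envelope comparison. Only the 24 Wh/kg figure comes from the article; the conventional pack density and the energy target below are assumed round numbers for illustration.

```python
# Why "massless" storage can pay off despite a lower energy density (illustrative).
target_energy_wh = 10_000   # energy the vehicle needs on board: 10 kWh (assumed)

structural_density = 24     # Wh/kg, reported for the new structural battery
liion_pack_density = 120    # Wh/kg, assumed figure for a conventional pack

conventional_added_mass = target_energy_wh / liion_pack_density  # ~83 kg of dead weight
structural_battery_mass = target_energy_wh / structural_density  # ~417 kg

# If the structural battery replaces load-bearing panels of equivalent mechanical
# function, those ~417 kg displace structure the vehicle needed anyway, so the
# mass added purely for energy storage approaches zero.
print(f"Conventional pack adds ~{conventional_added_mass:.0f} kg")
print(f"Structural battery: ~{structural_battery_mass:.0f} kg, largely offset by removed structure")
```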

“Previous attempts to make structural batteries have resulted in cells with either good mechanical properties, or good electrical properties. But here, using carbon fibre, we have succeeded in designing a structural battery with both competitive energy storage capacity and rigidity,” explains Leif Asp, Professor at Chalmers and leader of the project.

Super light electric bikes and consumer electronics could soon be a reality

The new battery has a negative electrode made of carbon fibre, and a positive electrode made of a lithium iron phosphate-coated aluminium foil. They are separated by a fibreglass fabric, in an electrolyte matrix. Despite their success in creating a structural battery ten times better than all previous ones, the researchers did not choose the materials to try and break records – rather, they wanted to investigate and understand the effects of material architecture and separator thickness.

Now, a new project, financed by the Swedish National Space Agency, is underway, where the performance of the structural battery will be increased yet further. The aluminium foil will be replaced with carbon fibre as a load-bearing material in the positive electrode, providing both increased stiffness and energy density. The fibreglass separator will be replaced with an ultra-thin variant, which will give much higher power as well as faster charging cycles. The new project is expected to be completed within two years.

Leif Asp, who is leading this project too, estimates that such a battery could reach an energy density of 75 Wh/kg and a stiffness of 75 GPa. This would make the battery about as strong as aluminium, but with a comparatively much lower weight.

Read the article in the scientific journal Advanced Energy & Sustainability Research:
A Structural Battery and its Multifunctional Performance


Credit: Chalmers University of Technology

“The next generation structural battery has fantastic potential. If you look at consumer technology, it could be quite possible within a few years to manufacture smartphones, laptops or electric bicycles that weigh half as much as today and are much more compact”, says Leif Asp.

And in the longer term, it is absolutely conceivable that electric cars, electric planes and satellites will be designed with and powered by structural batteries.

“We are really only limited by our imaginations here. We have received a lot of attention from many different types of companies in connection with the publication of our scientific articles in the field. There is understandably a great amount of interest in these lightweight, multifunctional materials,” says Leif Asp.

Watch a YouTube video, "Structural battery with record performance": https://www.youtube.com/embed/yikwX_ehKAQ

More about: The research on structural batteries
The structural battery uses carbon fibre as a negative electrode, and a lithium iron phosphate-coated aluminium foil as the positive electrode. The carbon fibre acts as a host for the lithium and thus stores the energy. Since the carbon fibre also conducts electrons, the need for copper and silver conductors is also avoided – reducing the weight even further. Both the carbon fibre and the aluminium foil contribute to the mechanical properties of the structural battery. The two electrode materials are kept separated by a fibreglass fabric in a structural electrolyte matrix. The task of the electrolyte is to transport the lithium ions between the two electrodes of the battery, but also to transfer mechanical loads between carbon fibres and other parts.

The project is run in collaboration between Chalmers University of Technology and KTH Royal Institute of Technology, Sweden's two largest technical universities. The battery electrolyte has been developed at KTH. The project involves researchers from five different disciplines: material mechanics, materials engineering, lightweight structures, applied electrochemistry, and fibre and polymer technology. Funding has come from the European Commission's research program Clean Sky II, as well as the US Air Force.

  

Contacts and sources:
Christian Borg, Chalmers University of Technology
Leif Asp, Professor at the Department of Industrial and Materials Sciences, Chalmers University of Technology 

Publication: A Structural Battery and its Multifunctional Performance.
Leif E. Asp, Karl Bouton, David Carlstedt, Shanghong Duan, Ross Harnden, Wilhelm Johannisson, Marcus Johansen, Mats K. G. Johansson, Göran Lindbergh, Fang Liu, Kevin Peuvot, Lynn M. Schneider, Johanna Xu, Dan Zenkert. Advanced Energy and Sustainability Research, 2021; 2 (3): 2000093 DOI: 10.1002/aesr.202000093



The "Spiders of Mars" What Causes The Strange Shapes Investigated

Researchers at Trinity College Dublin have been shedding light on the enigmatic “spiders from Mars”, providing the first physical evidence that these unique features on the planet’s surface can be formed by the sublimation of CO2 ice.

Spiders, more formally referred to as araneiforms, are strange-looking negative topography radial systems of dendritic troughs; patterns that resemble branches of a tree or fork lightning. These features, which are not found on Earth, are believed to be carved into the Martian surface by dry ice changing directly from solid to gas (sublimating) in the spring. Unlike Earth, Mars’ atmosphere comprises mainly CO2 and as temperatures decrease in winter, this deposits onto the surface as CO2 frost and ice.

Image from NASA’s Mars Reconnaissance Orbiter, acquired May 13, 2018 during winter at the South Pole of Mars, shows a carbon dioxide ice cap covering the region. As the sun returns in the spring, “spiders” begin to emerge from the landscape.
 
Credit: NASA

The Trinity team, along with colleagues at Durham University and the Open University, conducted a series of experiments funded by the Irish Research Council and Europlanet at the Open University Mars Simulation Chamber (pictured below), under Martian atmospheric pressure, in order to investigate whether patterns similar to Martian spiders could form by dry ice sublimation.


Credit: Trinity College Dublin

Its findings are detailed in a paper published today in the Nature journal Scientific Reports: “The Formation of Araneiforms by Carbon Dioxide Venting and Vigorous Sublimation Dynamics Under Martian Atmospheric Pressure”.

Dr Lauren McKeown drilling holes in the ice blocks for the project
 
Credit: Trinity College Dublin

Dr Lauren McKeown, who led this work during her PhD at Trinity and is now at the Open University, said: “This research presents the first set of empirical evidence for a surface process that is thought to modify the polar landscape on Mars. Kieffer’s hypothesis [explained below] has been well-accepted for over a decade, but until now, it has been framed in a purely theoretical context. … The experiments show directly that the spider patterns we observe on Mars from orbit can be carved by the direct conversion of dry ice from solid to gas. It is exciting because we are beginning to understand more about how the surface of Mars is changing seasonally today.”

The research team drilled holes in the centres of CO2 ice blocks and suspended them, with a claw similar to those found in arcades, above granular beds of different grain sizes. They lowered the pressure inside a vacuum chamber to Martian atmospheric pressure (6 mbar) and then used a lever system to place the CO2 ice block on the surface.

They made use of the Leidenfrost effect, whereby a substance that comes into contact with a surface much hotter than its sublimation point forms a gaseous layer around itself. When the block reached the sandy surface, CO2 turned directly from solid to gas, and material was seen escaping through the central hole in the form of a plume.

In each case, once the block was lifted, a spider pattern had been eroded by the escaping gas. The spider patterns were more branched when finer grain sizes were used and less branched when coarser grain sizes were used.

This is the first set of empirical evidence for this extant surface process.

Dr Mary Bourke, of Trinity’s Department of Geography, who supervised the Ph.D research, said:
“This innovative work supports the emergent theme that the current climate and weather on Mars has an important influence not only on dynamic surface processes, but also for any future robotic and/or human exploration of the planet.”

The main hypothesis proposed for spider formation (Kieffer’s hypothesis) suggests that in spring, sunlight penetrates the translucent CO2 ice and heats the terrain beneath it. The ice sublimates from its base, causing pressure to build up until the ice ruptures, allowing pressurised gas to escape through a crack. The paths of the escaping gas leave behind the dendritic patterns observed on Mars today, and the sandy/dusty material is deposited on top of the ice in the form of a plume.

However, until now, it has not been known if such a theoretical process is possible and this process has never been directly observed on Mars.

Additionally, the researchers observed that when CO2 blocks were released and allowed to sublimate within the sand bed, sublimation was much more vigorous than expected and material was thrown all over the chamber. This observation will be useful in understanding models of other CO2 sublimation-related processes on Mars, such as the formation of lateral Recurring Diffusive Flows surrounding linear dune gullies on Mars.

The methodology used can be refocused to study the geomorphic role of CO2 sublimation on other active Martian surface feature formation – and indeed, can pave the way for further research on sublimation processes on other planetary bodies with no/scant atmospheres like Europa or Enceladus.

 




Contacts and sources:
Catherine O’Mahony
Trinity College Dublin



Publication: The formation of araneiforms by carbon dioxide venting and vigorous sublimation dynamics under martian atmospheric pressure
Lauren Mc Keown, J. N. McElwaine, M. C. Bourke, M. E. Sylvest, M. R. Patel. Scientific Reports, 2021; 11 (1) DOI: 10.1038/s41598-021-82763-7



Tuesday, March 23, 2021

When Zero Was Not Zero: Finally, a Sea Level for All

Maps generally indicate elevation in meters above sea level. But sea level is not the same everywhere. A group of experts headed by the Technical University of Munich (TUM), has developed an International Height Reference System (IHRS) that will unify geodetic measurements worldwide.

How high is Mount Everest? 8848 meters? 8844 meters? Or 8850 meters? For years, China and Nepal could not agree. In 2019, Nepal sent a team of geodesists to measure the world’s highest mountain. A year later a team from China climbed the peak. Last December the two governments jointly announced the outcome of the new measurement: 8848.86 meters.

With the help of satellite data, a hypothetical sea level can be calculated.


Image: Curioso Photography / Pexels


The fact that both China and Nepal recognize this result must be seen as a diplomatic success. It was made possible by the new International Height Reference System (IHRS), used for the first time by the geodetic specialists conducting the new measurement. Scientists from TUM played a leading role in developing the new system. It establishes a generally agreed zero level as a basis for all future measurements. It thus replaces the mean sea level, which has traditionally served as the zero level for surveyors and thus for all topographical maps. A paper in the Journal of Geodesy, jointly authored by TUM scientists and international research groups, outlines the scientific background and theoretical concept of the IHRS as well as the strategy for implementing it.

When zero is not always zero

The standard used until now – the mean sea level – was flawed from the outset: There was never a fixed definition. Every country could use arbitrary tide gauges to define its own zero level. As a result, Germany’s official sea level is 31 centimeters higher than Italy’s, 50 cm higher than that used in Spain and actually 2.33 m higher than in Belgium, where the zero height is based on low water in Ostend.

When topographical maps are only used for hiking, no one is bothered by such differences. But for geodesists trying to arrive at a universally agreed height – for Mount Everest, for example, half in Nepal and half in China – the inconsistent zero levels are a bigger problem. And it can be very costly when planners of cross-border structures such as bridges and tunnels forget to check the different coordinates used by the teams and convert them as needed. On the Hochrheinbrücke, a bridge connecting Germany and Switzerland, a discrepancy of this kind was noticed just in time.

Surveys from orbit

“The introduction of an internationally valid height reference system was long overdue,” says TUM researcher Dr. Laura Sánchez of the Deutsches Geodätisches Forschungsinstitut (DGFI-TUM), who has headed working groups studying theoretical aspects and implementing the new global height reference system at the International Association of Geodesy for several years.

What is needed is obvious: a universally accepted zero level. The new International Height Reference System (IHRS) defines how it can be calculated: It takes into account the shape of the Earth – which is close to spherical, but flattened at the poles and bulging slightly at the equator due to its rotation – and the uneven distribution of masses in the interior and on the surface. The resulting irregularities in the gravity field are the basis for calculating the height system because the strength and direction of the force determine the distribution of water in the oceans. If we assume that the Earth’s surface is completely covered with water, the height of a hypothetical sea level and thus the zero level for the entire globe can be calculated precisely.
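In more formal terms (added here for context; the relations below are the standard geodetic definitions, and the reference value is the conventional one adopted by the International Association of Geodesy in 2015), the IHRS fixes a reference potential W0 and expresses heights through geopotential numbers:

```latex
% Conventional reference potential (IAG Resolution No. 1, 2015):
W_0 = 62\,636\,853.4~\mathrm{m^2\,s^{-2}}

% Geopotential number of a point P with gravity potential W_P:
C_P = W_0 - W_P

% A physical height follows by dividing by mean gravity \bar{g} along the plumb line:
H_P = \frac{C_P}{\bar{g}}
```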

In construction projects, even the smallest deviations can be crucial

“It became possible to realize the IHRS only with the availability of global data from satellite missions such as the ESA earth observation satellite GOCE (Gravity field and steady-state Ocean Circulation Explorer),” says Prof. Roland Pail of the TUM Chair of Astronomical and Physical Geodesy (APG). His team played an integral role in analyzing the GOCE measurements and using them to calculate global models of the Earth’s gravity field. “The information gained in this way provides the basis to calculate the mean sea level for every point on Earth with the new International Height Reference System, regardless of whether it is on a continent or in an ocean, and thus to compute the internationally accepted zero level,” explains Sánchez.

Does every map have to be redrawn? “It won’t be that dramatic,” says Sánchez. “In the industrial countries, where they have been making gravity measurements for decades, the deviations are quite small – only in the decimeter range.” But with construction projects, for example, even small deviations can cause serious troubles. Consequently, the scientist is confident that the new reference system will gain acceptance quickly.

 

Contacts and sources:
Technical University of Munich (TUM)

 

Publication:  
Sánchez L., J. Ågren, J. Huang, Y. Ming Wang, J. Mäkinen, R. Pail, R. Barzaghi, G. Vergos, K. Ahlgren, Q. Liu: Strategy for the realisation of the International Height Reference System (IHRS). Journal of Geodesy, 95(33), doi: 10.1007/s00190-021-01481-0, 2021.



Oldest Cephalopods Emerged from Seas of a Lost Continent 522 Million Years Ago

Possibly the oldest cephalopods in Earth’s history stem from the Avalon Peninsula in Newfoundland (Canada). They were discovered by earth scientists from Heidelberg University. The 522 million-year-old fossils could turn out to be the first known form of these highly evolved invertebrate organisms, whose living descendants today include species such as the cuttlefish, octopus and nautilus. In that case, the find would indicate that cephalopods evolved about 30 million years earlier than has been assumed.

Longitudinal and cross section of fossils that could turn out to be the first known form of a cephalopod.
Credit:  © Gregor Austermann / Communications Biology

“If they should actually be cephalopods, we would have to backdate the origin of cephalopods into the early Cambrian period,” says Dr Anne Hildenbrand from the Institute of Earth Sciences. Together with Dr Gregor Austermann, she headed the research projects carried out in cooperation with the Bavarian Natural History Collections. “That would mean that cephalopods emerged at the very beginning of the evolution of multicellular organisms during the Cambrian explosion.”
 
The chalky shells of the fossils found on the eastern Avalon Peninsula are shaped like a longish cone and subdivided into individual chambers, which are connected by a tube called the siphuncle. The cephalopods were thus the first organisms able to move actively up and down in the water and so settle in the open ocean as their habitat. The fossils are distant relatives of the spiral-shaped nautilus, but clearly differ in shape from early finds and from the still existing representatives of that class.

Credit: University of Heidelberg

“This find is extraordinary,” says Dr Austermann. “In scientific circles it was long suspected that the evolution of these highly developed organisms had begun much earlier than hitherto assumed. But there was a lack of fossil evidence to back up this theory.” According to the Heidelberg scientists, the fossils from the Avalon Peninsula might supply this evidence, as on the one hand, they resemble other known early cephalopods but, on the other, differ so much from them that they might conceivably form a link leading to the early Cambrian.

Credit: University of Heidelberg

The little-explored former microcontinent of Avalonia, which – besides the east coast of Newfoundland – comprises parts of Europe, is particularly suited to paleontological research, since many different creatures from the Cambrian period are still preserved in its rocks. The researchers hope that other, better preserved finds will confirm the classification of their discoveries as early cephalopods.

Credit: University of Heidelberg

The research results about the 522 million-year-old fossils were published in the Nature journal Communications Biology. Logistical support was given by the province of Newfoundland and the Manuels River Natural Heritage Society located there. Publication in open-access format was enabled in the context of Project DEAL.
  
Contacts and sources:
University of Heidelberg


Publication: A potential cephalopod from the early Cambrian of eastern Newfoundland, Canada.
Anne Hildenbrand, Gregor Austermann, Dirk Fuchs, Peter Bengtson, Wolfgang Stinnesbeck. Communications Biology, 2021; 4 (1) DOI: 10.1038/s42003-021-01885-w



Denisovans Interbred with Modern Humans in Southeast Asia 50,000–60,000 Years Ago

An international group of researchers led by the University of Adelaide has conducted a comprehensive genetic analysis and found no evidence of interbreeding between modern humans and the ancient humans known from fossil records in Island Southeast Asia. They did find further DNA evidence of our mysterious ancient cousins, the Denisovans, which could mean there are major discoveries to come in the region.

In the study published in Nature Ecology and Evolution, the researchers examined the genomes of more than 400 modern humans to investigate the interbreeding events between ancient humans and the modern human populations who arrived at Island Southeast Asia 50,000–60,000 years ago.

Replica Homo erectus skull from Java - supplied by Trustees of Natural History Museum.

Credit: University of Adelaide

In particular, they focused on detecting signatures that suggest interbreeding from deeply divergent species known from the fossil record of the area.

The region contains one of the richest fossil records (from at least 1.6 million years) documenting human evolution in the world. Currently there are three distinct ancient humans recognised from the fossil record in the area: Homo erectus, Homo floresiensis (known as Flores Island hobbits) and Homo luzonensis.

These species are known to have survived until approximately 50,000–60,000 years ago in the cases of Homo floresiensis and Homo luzonensis, and until approximately 108,000 years ago for Homo erectus, which means they may have overlapped with the arrival of modern human populations.

The results of the study showed no evidence of interbreeding. Nevertheless, the team were able to confirm previous results showing high levels of Denisovan ancestry in the region.

Lead author and ARC Research Associate from the University of Adelaide Dr João Teixeira, said: “In contrast to our other cousins the Neanderthals, which have an extensive fossil record in Europe, the Denisovans are known almost solely from the DNA record. The only physical evidence of Denisovan existence has been a finger bone and some other fragments found in a cave in Siberia and, more recently, a piece of jaw found in the Tibetan Plateau.”

“We know from our own genetic records that the Denisovans mixed with modern humans who came out of Africa 50,000–60,000 years ago both in Asia, and as the modern humans moved through Island Southeast Asia on their way to Australia.

“The levels of Denisovan DNA in contemporary populations indicate that significant interbreeding happened in Island Southeast Asia.

“The mystery then remains: why haven’t we found their fossils alongside the other ancient humans in the region? Do we need to re-examine the existing fossil record to consider other possibilities?”

Co-author Chris Stringer of the Natural History Museum in London added:

“While the known fossils of Homo erectus, Homo floresiensis and Homo luzonensis might seem to be in the right place and time to represent the mysterious ‘southern Denisovans’, their ancestors were likely to have been in Island Southeast Asia at least 700,000 years ago. Meaning their lineages are too ancient to represent the Denisovans who, from their DNA, were more closely related to the Neanderthals and modern humans.”

Co-author Prof Kris Helgen, Chief Scientist and Director of the Australian Museum Research Institute, said: “These analyses provide an important window into human evolution in a fascinating region, and demonstrate the need for more archaeological research in the region between mainland Asia and Australia.”

Helgen added: “This research also illuminates a pattern of ‘megafaunal’ survival which coincides with known areas of pre-modern human occupation in this part of the world. Large animals that survive today in the region include the Komodo Dragon, the Babirusa (a pig with remarkable upturned tusks), and the Tamaraw and Anoas (small wild buffalos).

“This hints that long-term exposure to hunting pressure by ancient humans might have facilitated the survival of the megafaunal species in subsequent contacts with modern humans. Areas without documented pre-modern human occurrence, like Australia and New Guinea, saw complete extinction of land animals larger than humans over the past 50,000 years.”

Dr Teixeira said: “The research corroborates previous studies that the Denisovans were in Island Southeast Asia, and that modern humans did not interbreed with more divergent human groups in the region. This opens two equally exciting possibilities: either a major discovery is on the way, or we need to re-evaluate the current fossil record of Island Southeast Asia.”

“Whichever way you choose to look at it, exciting times lie ahead in palaeoanthropology.”




Contacts and sources:
Kelly Brown.
University of Adelaide



Publication: Widespread Denisovan ancestry in Island Southeast Asia but no evidence of substantial super-archaic hominin admixture.
João C. Teixeira, Guy S. Jacobs, Chris Stringer, Jonathan Tuke, Georgi Hudjashov, Gludhug A. Purnomo, Herawati Sudoyo, Murray P. Cox, Raymond Tobler, Chris S. M. Turney, Alan Cooper, Kristofer M. Helgen. Nature Ecology & Evolution, 2021; DOI: 10.1038/s41559-021-01408-0


When Volcanoes Go Metal

What would a volcano – and its lava flows – look like on a planetary body made primarily of metal? A pilot study from North Carolina State University offers insights into ferrovolcanism that could help scientists interpret landscape features on other worlds.

Volcanoes form when magma, which consists of the partially molten solids beneath a planet’s surface, erupts. On Earth, that magma is mostly molten rock, composed largely of silica. But not every planetary body is made of rock – some can be primarily icy or even metallic.
Credit: Photo by Piermanuele Sberni on Unsplash

“Cryovolcanism is volcanic activity on icy worlds, and we’ve seen it happen on Saturn’s moon Enceladus,” says Arianna Soldati, assistant professor of marine, earth and atmospheric sciences at NC State and lead author of a paper describing the work. “But ferrovolcanism, volcanic activity on metallic worlds, hasn’t been observed yet.”

Enter 16 Psyche, a 140-mile diameter asteroid situated in the asteroid belt between Mars and Jupiter. Its surface, according to infrared and radar observations, is mainly iron and nickel. 16 Psyche is the subject of an upcoming NASA mission, and the asteroid inspired Soldati to think about what volcanic activity might look like on a metallic world.

“When we look at images of worlds unlike ours, we still use what happens on Earth – like evidence of volcanic eruptions – to interpret them,” Soldati says. “However, we don’t have widespread metallic volcanism on Earth, so we must imagine what those volcanic processes might look like on other worlds so that we can interpret images correctly.”

Soldati defines two possible types of ferrovolcanism: Type 1, or pure ferrovolcanism, occurring on entirely metallic bodies; and Type 2, spurious ferrovolcanism, occurring on hybrid rocky-metallic bodies.

In a pilot study, Soldati and colleagues from the Syracuse Lava Project produced Type 2 ferrovolcanism, in which metal separates from rock as the magma forms.

Metallic lava flow emerging from rocky lava.
 Image: Arianna Soldati

“The Lava Project’s furnace is configured for melting rock, so we were working with the metals (mainly iron) that naturally occur within them,” Soldati says. “When you melt rock under the extreme conditions of the furnace, some of the iron will separate out and sink to the bottom since it’s heavier. By completely emptying the furnace, we were able to see how that metal magma behaved compared to the rock one.”

The metallic lava flows traveled 10 times faster and spread more thinly than the rock flows, breaking into a myriad of braided channels. The metal also traveled largely beneath the rock flow, emerging from the leading edge of the rocky lava.

The smooth, thin, braided, widely spread layers of metallic lava would leave a very different impression on a planet’s surface than the often thick, rough, rocky flows we find on Earth, according to Soldati.

“Although this is a pilot project, there are still some things we can say,” Soldati says. “If there were volcanoes on 16 Psyche – or on another metallic body – they definitely wouldn’t look like the steep-sided Mt. Fuji, an iconic terrestrial volcano. Instead, they would probably have gentle slopes and broad cones. That’s how an iron volcano would be built – thin flows that expand over longer distances.”

The work appears in Nature Communications. James Farrell, Bob Wysocki, and Jeff Karson of Syracuse University’s Syracuse Lava Project are coauthors of the work.

 


Contacts and sources:
Tracey Peake 
North Carolina State University




Publication:  “Imagining and constraining ferrovolcanic eruptions and landscapes through large-scale experiments”

DOI: 10.1038/s41467-021-21582-w

Authors: A. Soldati, North Carolina State University; J.A. Farrell, R. Wysocki, J.A. Karson, Syracuse University
Published: Nature Communications



Let Cows Eat Weed: Seaweed in Feed Cuts Beef Cattle Methane by 80%

A bit of seaweed in cattle feed could reduce methane emissions from beef cattle as much as 82 percent, according to new findings from researchers at the University of California, Davis. The results, published today (March 17) in the journal PLOS ONE, could pave the way for the sustainable production of livestock throughout the world.


Credit: University of California - Davis

“We now have sound evidence that seaweed in cattle diet is effective at reducing greenhouse gases and that the efficacy does not diminish over time,” said Ermias Kebreab, professor and Sesnon Endowed Chair of the Department of Animal Science and director of the World Food Center. Kebreab conducted the study along with his Ph.D. graduate student Breanna Roque.

“This could help farmers sustainably produce the beef and dairy products we need to feed the world,” Roque added.

Over the course of five months last summer, Kebreab and Roque added scant amounts of seaweed to the diet of 21 beef cattle and tracked their weight gain and methane emissions. Cattle that consumed doses of about 80 grams (3 ounces) of seaweed gained as much weight as their herd mates while burping out 82 percent less methane into the atmosphere. Kebreab and Roque are building on their earlier work with dairy cattle, which was the world’s first reported experiment using seaweed in cattle feed.
 
Less gassy, more sustainable

Greenhouse gases are a major cause of climate change, and methane is a potent greenhouse gas. Agriculture is responsible for 10 percent of greenhouse gas emissions in the U.S., and half of those come from cows and other ruminant animals that belch methane and other gases throughout the day as they digest forages like grass and hay.

Since cattle are the top agricultural source of greenhouse gases, many have suggested people eat less meat to help address climate change. Kebreab looks to cattle nutrition instead.

“Only a tiny fraction of the earth is fit for crop production,” Kebreab explained. “Much more land is suitable only for grazing, so livestock plays a vital role in feeding the 10 billion people who will soon inhabit the planet. Since much of livestock’s methane emissions come from the animal itself, nutrition plays a big role in finding solutions.”

In 2018, Kebreab and Roque were able to reduce methane emissions from dairy cows by over 50 percent by supplementing their diet with seaweed for two weeks. The seaweed inhibits an enzyme in the cow’s digestive system that contributes to methane production.

In the new study, Kebreab and Roque tested whether those reductions were sustainable over time by feeding cows a touch of seaweed every day for five months, from the time they were young on the range through their later days on the feed lot.

Four times a day, the cows ate a snack from an open-air contraption that measured the methane in their breath. The results were clear. Cattle that consumed seaweed emitted much less methane, and there was no drop-off in efficacy over time.

Next steps

Results from a taste-test panel found no differences in the flavor of the beef from seaweed-fed steers compared with a control group. Similar tests with dairy cattle showed that seaweed had no impact on the taste of milk.

Also, scientists are studying ways to farm the type of seaweed — Asparagopsis taxiformis — that Kebreab’s team used in the tests. There is not enough of it in the wild for broad application.

Another challenge: How do ranchers provide seaweed supplements to grazing cattle on the open range? That’s the subject of Kebreab’s next study.

Kebreab and Roque collaborated with a federal scientific agency in Australia called the Commonwealth Scientific and Industrial Research Organization, James Cook University in Australia, Meat and Livestock Australia, and Blue Ocean Barns, a startup company that sources, processes, markets and certifies seaweed-based additives to cattle feed. Kebreab is a scientific adviser to Blue Ocean Barns.

“There is more work to be done, but we are very encouraged by these results,” Roque said. “We now have a clear answer to the question of whether seaweed supplements can sustainably reduce livestock methane emissions, and of their long-term effectiveness.”

Support for the research comes from Blue Ocean Barns, the David and Lucile Packard Foundation and the Grantham Foundation.


Contacts and sources:
Diane Nelson
University of California - Davis

Publication: Red seaweed (Asparagopsis taxiformis) supplementation reduces enteric methane by over 80 percent in beef steers.
Breanna M. Roque, Marielena Venegas, Robert D. Kinley, Rocky de Nys, Toni L. Duarte, Xiang Yang, Ermias Kebreab. PLOS ONE, 2021; 16 (3): e0247820 DOI: 10.1371/journal.pone.0247820



The First Documented Record of Salt as an Ancient Maya Commodity

The first documented record of salt as an ancient Maya commodity at a marketplace is depicted in a mural painted more than 1,000 years ago at Calakmul, a UNESCO World Heritage site in the Yucatan Peninsula in Mexico. 

The earliest known record of salt being sold in a marketplace in the Maya region depicted in a mural at Calakmul, a UNESCO World Heritage site in the Yucatan Peninsula in Mexico.

Photo Credit: Rogelio Valencia, Proyecto Arqueológico Calakmul

In the mural that portrays daily life, a salt vendor shows what appears to be a salt cake wrapped in leaves to another person, who holds a large spoon over a basket, presumably of loose, granular salt. This is the earliest known record of salt being sold at a marketplace in the Maya region. Salt is a basic biological necessity and is also useful for preserving food. Salt also was valued in the Maya area because of its restricted distribution.

Salt cakes could have been easily transported in canoes along the coast and up rivers in southern Belize, writes LSU archaeologist Heather McKillop in a new paper published in the Journal of Anthropological Archaeology. In 2004, she discovered the first remnants of ancient Maya salt kitchen buildings made of pole and thatch that had been submerged and preserved in a saltwater lagoon in a mangrove forest in Belize. Since then, she and her team of LSU graduate and undergraduate students and colleagues have mapped 70 sites that comprise an extensive network of rooms and buildings of the Paynes Creek Salt Works.

“It’s like a blueprint for what happened in the past,” McKillop said. “They were boiling brine in pots over fires to make salt.”

A pot 3D printed in the LSU Digital Imaging & Visualization in Archaeology Lab by archaeology students based on scans collected at the ancient Maya salt works field site.

 Photo Credit: LSU


Her research team has discovered at the Paynes Creek Salt Works 4,042 submerged architectural wooden posts, a canoe, an oar, a high-quality jadeite tool, stone tools used to salt fish and meat, and hundreds of pieces of pottery.

“I think the ancient Maya who worked here were producer-vendors and they would take the salt by canoe up the river. They were making large quantities of salt, much more than they needed for their immediate families. This was their living,” said McKillop, who is the Thomas & Lillian Landrum Alumni Professor in the LSU Department of Geography & Anthropology.

She investigated hundreds of pieces of pottery including 449 rims of ceramic vessels used to make salt. Two of her graduate students were able to replicate the pottery on a 3D printer in McKillop’s Digital Imaging Visualization in Archaeology lab at LSU based on scans taken in Belize at the study site. She discovered that the ceramic jars used to boil the brine were standardized in volume; thus, the salt producers were making standardized units of salt.

“Produced as homogeneous units, salt may have been used as money in exchanges,” McKillop said.

An ethnographic interview with a modern-day salt producer in Sacapulas, Guatemala, collected in 1981, supports the idea that the ancient Maya also may have viewed salt as a valuable commodity:

“The kitchen is a bank with money for us…So when we need money at any time during the year we come to the kitchen and make money, salt.”




Contacts and sources:
Alison Satake
LSU






Publication:  Salt as a commodity or money in the Classic Maya economy, Journal of Anthropological Archaeology: https://www.sciencedirect.com/science/article/abs/pii/S0278416521000106?dgcid=author



Friday, March 19, 2021

How Life on Land Recovered after "The Great Dying" and What It Teaches

Over the course of Earth’s history, several mass extinction events have destroyed ecosystems, including one that famously wiped out the dinosaurs. But none were as devastating as “The Great Dying,” which took place 252 million years ago during the end of the Permian period. By characterizing how ancient life responded to environmental stressors, researchers gain insights into how modern species might fare.

The plant-eating pareiasaurs were preyed upon by sabre-toothed gorgonopsians. Both groups went extinct during The Great Dying.
Credit: © Xiaochong Guo

 A new study, published today in Proceedings of the Royal Society B, shows in detail how life recovered in comparison to two smaller extinction events. The international study team—composed of researchers from the China University of Geosciences, the California Academy of Sciences, the University of Bristol, Missouri University of Science and Technology, and the Chinese Academy of Sciences—showed for the first time that the end-Permian mass extinction was harsher than other events due to a major collapse in diversity.


To better characterize “The Great Dying,” the team sought to understand why communities didn’t recover as quickly as other mass extinctions. The main reason was that the end-Permian crisis was much more severe than any other mass extinction, wiping out 19 out of every 20 species. With survival of only 5% of species, ecosystems had been destroyed, and this meant that ecological communities had to reassemble from scratch.

After The Great Dying, the ecosystem changed drastically, and included many Lystrosaurus. 
Credit: © Xiaochong Guo


To investigate, lead author and Academy researcher Yuangeng Huang, now at the China University of Geosciences, Wuhan, reconstructed food webs for a series of 14 life assemblages spanning the Permian and Triassic periods. These assemblages, sampled from north China, offered a snapshot of how a single region on Earth responded to the crises. “By studying the fossils and evidence from their teeth, stomach contents, and excrement, I was able to identify who ate whom,” says Huang. “It’s important to build an accurate food web if we want to understand these ancient ecosystems.”

The food webs are made up of plants, molluscs, and insects living in ponds and rivers, as well as the fishes, amphibians, and reptiles that eat them. The reptiles range in size from that of modern lizards to half-ton herbivores with tiny heads, massive barrel-like bodies, and a protective covering of thick bony scales. Sabre-toothed gorgonopsians also roamed, some as large and powerful as lions and with long canine teeth for piercing thick skins. When these animals died out during the end-Permian mass extinction, nothing took their place, leaving unbalanced ecosystems for ten million years. Then, the first dinosaurs and mammals began to evolve in the Triassic. The first dinosaurs were small—bipedal insect-eaters about one meter long—but they soon became larger and diversified as flesh- and plant-eaters.

“Yuangeng Huang spent a year in my lab,” says Peter Roopnarine, Academy Curator of Geology. “He applied ecological modelling methods that allow us to look at ancient food webs and determine how stable or unstable they are. Essentially, the model disrupts the food web, knocking out species and testing for overall stability.”
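
For readers curious what such a knockout test looks like in practice, the following is a minimal sketch in Python. The toy food web, the cascade rule, and the stability score are all invented for this illustration; the study’s actual model is far more sophisticated.

```python
import random

# Toy food web for illustration only: each consumer lists what it eats.
# These taxa and links are hypothetical, not the study's Permian data.
food_web = {
    "plants": [],
    "insects": ["plants"],
    "molluscs": ["plants"],
    "fish": ["insects", "molluscs"],
    "amphibians": ["insects"],
    "pareiasaurs": ["plants"],
    "gorgonopsians": ["pareiasaurs", "amphibians"],
}

def secondary_extinctions(web, knocked_out):
    """Remove one species, then let extinctions cascade: any consumer
    whose food sources are all gone dies too (producers have no prey)."""
    alive = set(web) - {knocked_out}
    changed = True
    while changed:
        changed = False
        for species in list(alive):
            prey = web[species]
            if prey and not any(p in alive for p in prey):
                alive.discard(species)
                changed = True
    return set(web) - alive - {knocked_out}

def stability_score(web, trials=1000):
    """Crude stability proxy: mean number of secondary extinctions when
    one randomly chosen species is knocked out. Lower is more stable."""
    losses = [len(secondary_extinctions(web, random.choice(sorted(web))))
              for _ in range(trials)]
    return sum(losses) / trials

print(f"average secondary extinctions per knockout: {stability_score(food_web):.2f}")
```

Knocking out a producer such as “plants” collapses everything above it, while removing a top predator causes no secondary losses; in a stable community, most knockouts stay contained in this way.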


By the end of the Permian, pareiasaurs had become large and armored for self-protection. This complex ecosystem collapsed during The Great Dying.
Credit: © Xiaochong Guo


“We found that the end-Permian event was exceptional in two ways,” says Professor Mike Benton from the University of Bristol. “First, the collapse in diversity was much more severe, whereas in the other two mass extinctions there had been low-stability ecosystems before the final collapse. And second, it took a very long time for ecosystems to recover, maybe 10 million years or more, whereas recovery was rapid after the other two crises.”

Ultimately, characterizing communities—especially those that recovered successfully—provides valuable insights into how modern species might fare as humans push the planet to the brink.

“This is an amazing new result,” says Professor Zhong-Qiang Chen of the China University of Geosciences, Wuhan. “Until now, we could describe the food webs, but we couldn’t test their stability. The combination of great new data from long rock sections in north China with cutting-edge computational methods allows us to get inside these ancient examples in the same way we can study food webs in the modern world.”




Contacts and sources:
Katie Jewett
California Academy of Sciences

  



Supernova Debris Found at Unusual Location


In the first all-sky survey by the eROSITA X-ray telescope onboard SRG, astronomers at the Max Planck Institute for Extraterrestrial Physics have identified a previously unknown supernova remnant, dubbed “Hoinga”. The finding was confirmed in archival radio data and marks the first discovery of a joint Australian-eROSITA partnership established to explore our Galaxy using multiple wavelengths, from low-frequency radio waves to energetic X-rays. The Hoinga supernova remnant is very large and located far from the galactic plane – a surprising first finding – implying that the coming years might bring many more discoveries.


Composite X-ray and radio image of Hoinga. The X-rays discovered by eROSITA are emitted by the hot debris of the exploded progenitor, whereas the radio antennae detect synchrotron emission from relativistic electrons, which are decelerated at the outer remnant layer.

© eROSITA/MPE (X-ray), CHIPASS / SPASS / N. Hurley-Walker, ICRAR-Curtin (Radio)

Massive stars end their lives in gigantic supernova explosions when the fusion processes in their interiors no longer produce enough energy to counter their gravitational collapse. But even with hundreds of billions of stars in a galaxy, these events are quite rare. In our Milky Way, astronomers estimate that a supernova should happen on average every 30 to 50 years. While the supernova itself is only observable on a timescale of months, its remnant can be detected for about 100,000 years. These remnants are composed of material ejected by the exploding star at high velocities, which forms shocks when it hits the surrounding interstellar medium.

About 300 such supernova remnants are known today – far fewer than the estimated 1,200 that should be observable throughout our home Galaxy. So either astrophysicists have misunderstood the supernova rate, or a large majority of remnants have been overlooked so far. An international team of astronomers is now using the all-sky scans of the eROSITA X-ray telescope to look for previously unknown supernova remnants. With temperatures of millions of degrees, the debris of such supernovae emits high-energy radiation, meaning it should show up in the high-quality X-ray survey data.
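
The mismatch is easy to see with back-of-the-envelope arithmetic: in a steady state, the number of remnants existing at any moment is roughly the supernova rate multiplied by the remnant’s visible lifetime. A quick sketch using only the round numbers quoted above:

```python
# Steady-state remnant count ~ (supernova rate) x (visible lifetime).
# Round numbers from the article; the ~1,200 "observable" figure also
# folds in survey sensitivity, which this crude product ignores.
visible_lifetime_yr = 100_000
for interval_yr in (30, 50):
    n_existing = visible_lifetime_yr // interval_yr
    print(f"one supernova per {interval_yr} yr -> ~{n_existing} remnants existing")
# ~2,000-3,300 remnants should exist, ~1,200 of them observable -
# yet only ~300 are known, hence the hunt for the missing ones.
```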

“We were very surprised that the first supernova remnant popped up straight away,” says Werner Becker at the Max Planck Institute for Extraterrestrial Physics. Named after the Roman name of the first author’s hometown, “Hoinga” is the largest supernova remnant ever discovered in X-rays. With a diameter of about 4.4 degrees, it covers an area about 90 times larger than that of the full Moon. “Moreover, it lies very far off the galactic plane, which is very unusual,” he adds. Most previous searches for supernova remnants have concentrated on the disk of our galaxy, where star-formation activity is highest and stellar remnants should therefore be more numerous, but it seems that many supernova remnants have been overlooked by this search strategy.


Cutout of the first SRG/eROSITA all-sky survey. The Hoinga supernova remnant is marked. The large bright source in the lower quadrant of the image is the supernova remnant “Vela” with “Puppis-A”. The image colours correspond to the energies of the detected X-ray photons: red represents the 0.3-0.6 keV energy range, green the 0.6-1.0 keV range, and blue the 1.0-2.3 keV band.

© SRG / eROSITA

After the astronomers found the object in the eROSITA all-sky data, they turned to other resources to confirm its nature. Hoinga is also visible – although barely – in data taken by the ROSAT X-ray telescope 30 years ago, but nobody had noticed it before due to its faintness and its location at high galactic latitude. The real confirmation, however, came from radio data, the spectral band in which 90% of all known supernova remnants have been found so far.

“We went through archival radio data and it had been sitting there, just waiting to be discovered,” marvels Natasha Hurley-Walker, from the Curtin University node of the International Centre for Radio Astronomy Research in Australia. “The radio emission in 10-year-old surveys clearly confirmed that Hoinga is a supernova remnant, so there may be even more of these out there waiting for keen eyes.”

This animation shows how the X-ray telescope eROSITA scans the entire sky in the X-ray range on its orbit far from Earth.
[Video: https://www.youtube.com/embed/JbDJUoikec0]
Credit: Max Planck Institute for Extraterrestrial Physics  / eROSITA


The eROSITA X-ray telescope will perform a total of eight all-sky surveys and is about 25 times more sensitive than its predecessor ROSAT. Both observatories were designed, built, and are operated by the Max Planck Institute for Extraterrestrial Physics. The astronomers expected to discover new supernova remnants in the X-ray data over the next few years, but they were surprised to identify one so early in the programme. Combined with the fact that the signal is already present in decades-old data, this implies that many supernova remnants may have been overlooked in the past due to low surface brightness, unusual locations, or nearby emission from brighter sources. Together with upcoming radio surveys, the eROSITA X-ray survey shows great promise for finding many of the missing supernova remnants, helping to solve this long-standing astrophysical mystery.



Contacts and sources:
Dr. Hannelore Hämmerle
Max Planck Institute for Extraterrestrial Physics, Garching


Publication:  Hoinga: A Supernova Remnant Discovered in the SRG/eROSITA All-Sky Survey eRASS1
W. Becker, N. Hurley-Walker, Ch. Weinberger, L. Nicastro, M. G. F. Mayer, A. Merloni, J. Sanders
Astronomy & Astrophysics, accepted 12 February 2021
DOI: https://www.aanda.org/component/article?access=doi&doi=10.1051/0004-6361/202040156






Melting Glaciers and Rebounding Land Contribute to Alaska Earthquakes

In 1958, a magnitude 7.8 earthquake triggered a rockslide into Southeast Alaska’s Lituya Bay, creating a tsunami that ran 1,700 feet up a mountainside before racing out to sea.

Researchers now think the region’s widespread loss of glacier ice helped set the stage for the quake.

Glaciers such as the Yakutat in Southeast Alaska, shown here, have been melting since the end of the Little Ice Age, influencing earthquakes in the region.
Photo by Sam Herreid

In a recently published research article, scientists with the University of Alaska Fairbanks Geophysical Institute found that ice loss near Glacier Bay National Park has influenced the timing and location of earthquakes with a magnitude of 5.0 or greater in the area during the past century.

Scientists have known for decades that melting glaciers have caused earthquakes in otherwise tectonically stable regions, such as Canada’s interior and Scandinavia. In Alaska, this pattern has been harder to detect, as earthquakes are common in the southern part of the state.

Alaska has some of the world’s largest glaciers, which can be thousands of feet thick and cover hundreds of square miles. The ice’s weight causes the land beneath it to sink, and, when a glacier melts, the ground springs back like a sponge.

“There are two components to the uplift,” said Chris Rollins, the study’s lead author who conducted the research while at the Geophysical Institute. “There’s what’s called the ‘elastic effect,’ which is when the earth instantly springs back up after an ice mass is removed. Then there’s the prolonged effect from the mantle flowing back upwards under the vacated space.”
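
As a rough illustration of these two components, the uplift can be written as an instantaneous elastic step plus a slow viscous term that relaxes as the mantle flows back. The functional form and every number below are simplifying assumptions for illustration, not values from the study:

```python
import math

# Total uplift = instantaneous elastic step + slow viscous (mantle-flow)
# term relaxing on a timescale tau. All values here are hypothetical.
def uplift_m(t_yr, elastic_m=0.2, viscous_m=1.0, tau_yr=1500.0):
    return elastic_m + viscous_m * (1.0 - math.exp(-t_yr / tau_yr))

for t in (0, 100, 1000, 5000):
    print(f"{t:>5} yr after ice loss: {uplift_m(t):.2f} m of cumulative uplift")
```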

In the study, researchers link this continuing upward flow of the mantle with large earthquakes across Southeast Alaska, where glaciers have been melting for over 200 years. More than 1,200 cubic miles of ice have been lost.

Southern Alaska sits at the boundary between the continental North American Plate and the Pacific Plate, which grind past each other at about two inches per year — roughly twice the rate of the San Andreas Fault in California — resulting in frequent earthquakes.

The disappearance of glaciers, however, has also caused Southeast Alaska’s land to rise at about 1.5 inches per year.

Rollins ran models of earth movement and ice loss since 1770, finding a subtle but unmistakable correlation between earthquakes and earth rebound.

When the researchers combined their maps of ice loss and shear stress with seismic records back to 1920, they found that most large quakes were correlated with the stress from long-term earth rebound.

An earthquake-triggered tsunami stripped vegetation from the hills and mountains above Lituya Bay in 1958. The treeless areas are visible as lighter ground surrounding the bay in this photograph taken shortly after the event.
Photo by Donald Miller, U.S. Geological Survey


Unexpectedly, the greatest amount of stress from ice loss occurred near the exact epicenter of the 1958 quake that caused the Lituya Bay tsunami.

While the melting of glaciers is not the direct cause of earthquakes, it likely modulates both the timing and severity of seismic events.

When the earth rebounds following a glacier’s retreat, it does so much like bread rising in an oven, spreading in all directions. This effectively unclamps strike-slip faults, such as the Fairweather in Southeast Alaska, and makes it easier for the two sides to slip past one another.

In the case of the 1958 quake, the postglacial rebound torqued the crust around the fault in a way that increased stress near the epicenter as well. Both this and the unclamping effect brought the fault closer to failure.
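
Seismologists commonly quantify this combination of torquing and unclamping with the Coulomb failure stress change, ΔCFS = Δτ + μ·Δσn, where Δτ is the shear stress change in the fault’s slip direction, Δσn is the normal stress change (positive when the fault is unclamped), and μ is the friction coefficient. The sketch below uses made-up numbers purely to show how both terms push a fault toward failure; it is not the study’s actual calculation:

```python
# Coulomb failure stress change: dCFS = d_tau + mu * d_sigma_n.
# Convention: d_shear > 0 loads the fault in its slip direction, and
# d_normal > 0 means unclamping. Positive dCFS brings failure closer.
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, friction=0.4):
    return d_shear_mpa + friction * d_normal_mpa

# Hypothetical rebound contributions: both the added shear stress
# ("torquing") and the unclamping term are positive here, so rebound
# nudges the fault toward failure.
print(f"dCFS = {coulomb_stress_change(0.05, 0.08):.3f} MPa")
```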

“The movement of plates is the main driver of seismicity, uplift and deformation in the area,” said Rollins. “But postglacial rebound adds to it, sort of like the de-icing on the cake. It makes it more likely for faults that are in the red zone to hit their stress limit and slip in an earthquake.”



Contacts and sources:
Jerald Pinson
University of Alaska Fairbanks



 



Record-Breaking Gold Q-Factor Debunks Metal Myths

Researchers at the University of Ottawa have debunked the decade-old myth that metals are useless in photonics – the science and technology of light. Their findings, recently published in Nature Communications, are expected to lead to many applications in the field of nanophotonics.

“We broke the record for the resonance quality factor (Q-factor) of a periodic array of metal nanoparticles by one order of magnitude compared to previous reports,” said senior author Dr. Ksenia Dolgaleva, Canada Research Chair in Integrated Photonics (Tier 2) and Associate Professor in the School of Electrical Engineering and Computer Science (EECS) at the University of Ottawa.

An artist's view of a metasurface consisting of a rectangular array of rectangular gold nanostructures generating plasmonic surface lattice resonances. 

Illustration: Yaryna Mamchur, co-author and Mitacs Summer Student from the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute,” who worked in Professor Ksenia Dolgaleva's lab in the summer of 2019 at uOttawa.


“It is a well-known fact that metals are very lossy when they interact with light, which means they cause the dissipation of electrical energy. The high losses compromise their use in optics and photonics. We demonstrated ultra-high-Q resonances in a metasurface (an artificially structured surface) composed of an array of metal nanoparticles embedded inside a flat glass substrate. These resonances can be used for efficient light manipulation and enhanced light-matter interaction, showing metals are useful in photonics.”

“In previous works, researchers attempted to mitigate the adverse effect of losses to access favorable properties of metal nanoparticle arrays,” observed the co-lead author of the study Md Saad Bin-Alam, a uOttawa doctoral student in EECS.

“However, their attempts did not provide a significant improvement in the quality factors of the resonances of the arrays. We implemented a combination of techniques rather than a single approach and obtained an order-of-magnitude improvement demonstrating a metal nanoparticle array (metasurface) with a record-high quality factor.”

According to the researchers, structured surfaces – also called metasurfaces – have very promising prospects in a variety of nanophotonic applications that can never be explored using traditional natural bulk materials. Sensors, nanolasers, light beam shaping and steering are just a few examples of the many applications.

“Metasurfaces made of noble metal nanoparticles – gold or silver for instance – possess some unique benefits over non-metallic nanoparticles. They can confine and control light in a nanoscale volume that is less than one quarter of the wavelength of light (less than 100 nm, while the width of a hair is over 10 000 nm),” explained Md Saad Bin-Alam.

“Interestingly, unlike in non-metallic nanoparticles, the light is not confined or trapped inside the metal nanoparticles but is concentrated close to their surface. This phenomenon is scientifically called 'localized surface plasmon resonances (LSPRs)'. This feature gives a great superiority to metal nanoparticles compared to their dielectric counterparts, because one could exploit such surface resonances to detect bio-organisms or molecules in medicine or chemistry. Also, such surface resonances could be used as the feedback mechanism necessary for laser gain. In such a way, one can realize a tiny nanoscale laser that can be adopted in many future nanophotonic applications, like light detection and ranging (LiDAR) for far-field object detection.”

According to the researchers, the efficiency of these applications depends on the resonant Q-factors.
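
For context, the quality factor of an optical resonance is commonly estimated as the resonance wavelength (or frequency) divided by its linewidth, Q = λ₀/Δλ. The wavelength and linewidth below are hypothetical, chosen only to show how a Q in the thousands arises from a very narrow spectral feature:

```python
# Q-factor from a resonance spectrum: Q = lambda_0 / FWHM.
# Both numbers are illustrative, not taken from the paper.
def q_factor(resonance_nm, fwhm_nm):
    return resonance_nm / fwhm_nm

print(f"Q ~ {q_factor(resonance_nm=1550.0, fwhm_nm=0.65):.0f}")  # ~2385
```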

“Unfortunately, due to the high 'absorptive' and 'radiative' loss in metal nanoparticles, the LSPRs Q-factors are very low,” said co-lead author Dr. Orad Reshef, a postdoctoral fellow in the Department of Physics at the University of Ottawa.

“More than a decade ago, researchers found a way to mitigate the dissipative loss by carefully arranging the nanoparticles in a lattice. From such 'surface lattice' manipulation, a new 'surface lattice resonance (SLR)' emerges with suppressed losses. Until our work, the maximum Q-factors reported for SLRs were around a few hundred. Although these early SLRs were better than the low-Q LSPRs, they were still not impressive enough for efficient applications. This led to the myth that metals are not useful for practical applications.”

A myth that the group was able to deconstruct during its work at the University of Ottawa's Advanced Research Complex between 2017 and 2020.


From left to right: Dr. Orad Reshef, Md Saad Bin-Alam and Yaryna Mamchur.

Credit: University of Ottawa

“At first, we performed numerical modelling of a gold nanoparticle metasurface and were surprised to obtain quality factors of several thousand,” said Md Saad Bin-Alam, who primarily designed the metasurface structure.

“This value had never been reported experimentally, and we decided to analyze why, and to attempt an experimental demonstration of such a high Q. We observed a very high-Q SLR of nearly 2,400, at least 10 times larger than the largest SLR Q-factor reported earlier.”

A discovery that made them realize that there’s still a lot to learn about metals.

“Our research proved that we are still far from knowing all the hidden mysteries of metal (plasmonic) nanostructures,” concluded Dr. Orad Reshef, who fabricated the metasurface sample. “Our work has debunked a decade-long myth that such structures are not suitable for real-life optical applications due to the high losses. We demonstrated that, by properly engineering the nanostructure and carefully conducting an experiment, one can improve the result significantly.”

The paper “Ultra-high-Q resonances in plasmonic metasurfaces” is published in Nature Communications. Md Saad Bin-Alam and Dr. Orad Reshef primarily conducted the research. They were supported by Yaryna Mamchur and Dr. Mikko Huttunen in the experiment and the numerical modelling, respectively. Professors Ksenia Dolgaleva and Robert W. Boyd jointly supervised the research in collaboration with Professor Jean-Michel Ménard and Iridian Spectral Inc. The other co-authors, Dr. Zahirul Alam and Dr. Jeremy Upham, took part in preparing the manuscript. Dr. Alam also helped with the experimental setup.

 
 


Contacts and sources:
Justine Boutet
University of Ottawa



 




Space Jellyfish Spotted

A radio telescope located in outback Western Australia has observed a cosmic phenomenon with a striking resemblance to a jellyfish.

In a study published today in The Astrophysical Journal, an Australian-Italian team used the Murchison Widefield Array (MWA) telescope to observe a cluster of galaxies known as Abell 2877.

Lead author and PhD candidate Torrance Hodgson, from the Curtin University node of the International Centre for Radio Astronomy Research (ICRAR) in Perth, said the team observed the cluster for 12 hours at five radio frequencies between 87.5 and 215.5 megahertz.

“We looked at the data, and as we turned down the frequency, we saw a ghostly jellyfish-like structure begin to emerge,” he said.

A composite image of the USS Jellyfish in Abell 2877 showing the optical Digitised Sky Survey (background) with XMM X-ray data (magenta overlay) and MWA 118 MHz radio data (red-yellow overlay). 

Credit: Torrance Hodgson, ICRAR/Curtin University.

“This radio jellyfish holds a world record of sorts. Whilst it’s bright at regular FM radio frequencies, at 200 MHz the emission all but disappears.

“No other extragalactic emission like this has been observed to disappear anywhere near so rapidly.”
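
Radio astronomers describe how quickly a source fades with frequency using the spectral index α, defined through the power law S(ν) ∝ ν^α, so that α = ln(S₂/S₁)/ln(ν₂/ν₁) for fluxes measured at two frequencies. The flux densities below are invented purely to illustrate how an ultra-steep (strongly negative) α arises when a source that is bright near FM frequencies has all but vanished by 200 MHz:

```python
import math

# Spectral index alpha from two flux measurements, S(nu) ~ nu**alpha.
# The flux densities here are made up for illustration only.
def spectral_index(s1_jy, nu1_mhz, s2_jy, nu2_mhz):
    return math.log(s2_jy / s1_jy) / math.log(nu2_mhz / nu1_mhz)

alpha = spectral_index(s1_jy=1.0, nu1_mhz=100.0, s2_jy=0.05, nu2_mhz=200.0)
print(f"alpha ~ {alpha:.1f}")  # ~ -4.3; typical synchrotron sources sit near -0.8
```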

[Video: https://player.vimeo.com/video/522634130]
Credit: ICRAR

This uniquely steep spectrum has been challenging to explain. “We’ve had to undertake some cosmic archaeology to understand the ancient background story of the jellyfish,” said Hodgson.

“Our working theory is that around 2 billion years ago, a handful of supermassive black holes from multiple galaxies spewed out powerful jets of plasma. This plasma faded, went quiet, and lay dormant.

“Then quite recently, two things happened—the plasma started mixing at the same time as very gentle shock waves passed through the system.

“This has briefly reignited the plasma, lighting up the jellyfish and its tentacles for us to see.”
The jellyfish is over a third of the Moon’s diameter when observed from Earth, but can only be seen with low-frequency radio telescopes.

“Most radio telescopes can’t achieve observations this low due to their design or location,” said Hodgson.

The MWA—a precursor to the Square Kilometre Array (SKA)—is located at CSIRO’s Murchison Radio-astronomy Observatory in remote Western Australia.

The site has been chosen to host the low-frequency antennas for the SKA, with construction scheduled to begin in less than a year.

Tile 107, or “the Outlier” as it is known, is one of 256 tiles of the MWA, located 1.5 km from the core of the telescope. The MWA is a precursor instrument to the SKA.

 Photographed by Pete Wheeler, ICRAR.

Professor Melanie Johnston-Hollitt, Mr Hodgson’s supervisor and co-author, said the SKA will give us an unparalleled view of the low-frequency Universe.

“The SKA will be thousands of times more sensitive and have much better resolution than the MWA, so there may be many other mysterious radio jellyfish waiting to be discovered once it’s operational.

Composite image of the SKA-Low telescope in Western Australia. The image blends a real photo (on the left) of the SKA-Low prototype station AAVS2.0, which is already on site, with an artist’s impression of the future SKA-Low stations as they will look when constructed. These dipole antennas, which will number in the hundreds of thousands, will survey the radio sky at frequencies as low as 50 MHz.

Credit: ICRAR and SKAO.


“We’re about to build an instrument to make a high resolution, fast frame-rate movie of the evolving radio Universe. It will show us from the first stars and galaxies through to the present day,” she said.

“Discoveries like the jellyfish only hint at what’s to come; it’s an exciting time for anyone seeking answers to fundamental questions about the cosmos.”


 
Contacts and sources:
Pete Wheeler. The International Centre for Radio Astronomy Research (ICRAR)
Vanessa Beasley,  Curtin University
Professor Melanie Johnston-Hollitt (ICRAR / Curtin University)

Publication:  ‘Ultra-Steep Spectrum Radio Jellyfish Uncovered in Abell 2877’, published in The Astrophysical Journal on March 18th, 2021.



The Ancient Winged "Eagle Sharks" of Mexico

Like a monster straight out of a science fiction film, a strange winged sea creature from millions of years ago has been discovered.

The fossil of an unusual shark specimen reminiscent of manta rays sheds light on morphological diversity in Cretaceous sharks. This plankton feeder was discovered in Mexico and analyzed by an international team of palaeontologists led by a CNRS researcher from Géosciences Rennes (CNRS/University of Rennes 1). The study was published in Science on 19 March 2021.

Some 93 million years ago, bizarre winged sharks swam in the waters of the Gulf of Mexico. This newly described fossil species, called Aquilolamna milarcae, has allowed its discoverers to erect a new family. Like manta rays, these ‘eagle sharks’ are characterized by extremely long and thin pectoral fins reminiscent of wings. The specimen studied was 1.65 meters long and had a span of 1.90 meters.

Aquilolamna milarcae had a caudal fin with a well-developed superior lobe, typical of most pelagic sharks such as whale sharks and tiger sharks. Its anatomical features thus give it a chimeric appearance that combines both sharks and rays.

Artist’s impression of an eagle shark.


© Oscar Sanisidro


With its large mouth and presumably very small teeth, it must have fed on plankton, according to the international research team led by Romain Vullo of the CNRS.

Until now, scientists had identified only one category of large plankton feeders in Cretaceous seas: a group of large bony fish (pachycormids), which is now extinct. Thanks to this discovery, they now know that a second group, the eagle sharks, was also present in the Cretaceous oceans.


Fossil of the Aquilolamna milarcae shark found in the limestone of Vallecillo (Mexico).
© Wolfgang Stinnesbeck


The complete specimen was found in 2012 in Vallecillo (Mexico), a locality yielding remarkably well-preserved fossils. This site, already famous for its many fossils of ammonites, bony fish, and marine reptiles, is most useful for documenting the evolution of oceanic animals.

As well as shedding light on the structure of Cretaceous marine ecosystems, the discovery of eagle sharks reveals a new, hitherto unsuspected, facet of sharks’ evolutionary history.



Contacts and sources:
Romain Vullo
CNRS  
 
Elie Stecyna
CNRS  


Publication:  Manta-like planktivorous sharks in Late Cretaceous oceans. Vullo R, Frey E, Ifrim C, González González MA, Stinnesbeck ES, Stinnesbeck W. 2021. Science, 19 March 2021. DOI: 10.1126/science.abc1490