Friday, August 31, 2018

Solar Eruptions May Not Have Slinky-like Shapes After All



As the saying goes, everything old is new again. While the common phrase often refers to fashion, design, or technology, scientists at the University of New Hampshire have found there is some truth to this mantra even when it comes to research. Revisiting some older data, the researchers discovered new information about the shape of coronal mass ejections (CMEs) – large-scale eruptions of plasma and magnetic field from the sun – that could one day help protect satellites in space as well as the electrical grid on Earth.

“Since the late 1970s, coronal mass ejections have been assumed to resemble a large Slinky – one of those spring toys – with both ends anchored at the sun, even when they reach Earth about one to three days after they erupt,” said Noe Lugaz, research associate professor in the UNH Space Science Center. “But our research suggests their shapes are possibly different.”

An image from NASA's Solar Dynamics Observatory (SDO) satellite shows an example of a commonly assumed Slinky-like coronal mass ejection (CME): on August 31, 2012, a long filament of solar material that had been hovering in the sun's atmosphere, or corona, erupted out into space at 4:36 p.m. EDT. The CME traveled at over 900 miles per second. It did not travel directly toward Earth, but did connect with Earth's magnetic environment, or magnetosphere, causing aurora to appear on the night of Monday, September 3.

This is a lighten blended version of the 304 and 171 angstrom wavelengths.

Credit: NASA/GSFC/SDO

<b><a href="http://www.nasa.gov/audience/formedia/features/MP_Photo_Guidelines.html" rel="nofollow">NASA image use policy.</a></b>

<b><a href="http://www.nasa.gov/centers/goddard/home/index.html" rel="nofollow">NASA Goddard Space Flight Center</a></b> enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

<b>Follow us on <a href="http://twitter.com/NASAGoddardPix" rel="nofollow">Twitter</a></b>

<b>Like us on <a href="http://www.facebook.com/pages/Greenbelt-MD/NASA-Goddard/395013845897?ref=tsd" rel="nofollow">Facebook</a></b>

<b>Find us on <a href="http://instagrid.me/nasagoddard/?vm=grid" rel="nofollow">Instagram</a></b>
Credit: NASA/GSFC/SDO

Knowing the shape and size of CMEs is important because it can help better forecast when and how they will impact Earth. While they are one of the main sources of beautiful and intense auroras, like the Northern and Southern Lights, they can also damage satellites, disrupt radio communications and wreak havoc on the electrical transmission system, causing massive and long-lasting power outages. Right now, only single-point measurements exist for CMEs, making it hard for scientists to judge their shapes. But these measurements have been helpful to space forecasters, allowing them a 30- to 60-minute warning before impact. The goal is to lengthen that notice to hours – ideally 24 hours – to allow more informed decisions on whether to power down satellites or the grid.
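
For a rough sense of where that 30- to 60-minute figure comes from: Wind and ACE monitor the solar wind from near the L1 point, about 1.5 million kilometers sunward of Earth, so the warning time is roughly that distance divided by the solar-wind speed. A minimal sketch in Python, using typical illustrative speeds rather than numbers from the study:

L1_DISTANCE_KM = 1_500_000  # approximate distance from the L1 point to Earth

for speed_km_s in (400, 800):  # typical slow and fast solar-wind speeds
    warning_min = L1_DISTANCE_KM / speed_km_s / 60
    print(f"{speed_km_s} km/s -> ~{warning_min:.0f} minutes of warning")

# 400 km/s gives ~62 minutes and 800 km/s gives ~31 minutes,
# which brackets the 30- to 60-minute range quoted above.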

In their study, published in Astrophysical Journal Letters, the researchers took a closer look at data from two NASA spacecraft, Wind and ACE, which typically orbit upstream of Earth. They analyzed data from 21 CMEs over a two-year period, between 2000 and 2002, when Wind had separated from ACE by about one percent of one astronomical unit (AU), the distance from the sun to the Earth (93,000,000 miles). So instead of sitting in front of Earth alongside ACE, Wind was offset to the side, perpendicular to the Sun-Earth line.
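
To put that separation in familiar units, a quick back-of-the-envelope conversion (ours, for illustration; the moon comparison is not from the study):

AU_MILES = 93_000_000               # mean sun-Earth distance in miles
separation_miles = 0.01 * AU_MILES  # one percent of an AU
print(f"{separation_miles:,.0f} miles")       # 930,000 miles
print(f"{separation_miles / 239_000:.1f}")    # ~3.9 times the Earth-moon distance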

“Because they are usually so close to one another, very few people compare the data from both Wind and ACE,” said Lugaz. “But 15 years ago, they were apart and in the right place for us to go back and notice the difference in measurements, and the differences became larger with increasing separations, making us question the Slinky shape.”

The data points toward a few other possibilities: either CMEs are not simple Slinky shapes (they might be deformed ones, or something else entirely), or they are Slinky-shaped but on a much smaller scale, roughly four times smaller than previously thought.

While the researchers say more studies are needed, Lugaz says this information could be important for future space weather forecasting. With other missions being considered by NASA and NOAA, the researchers say this study shows that future spacecraft may first need to establish how close to the Sun-Earth line they must remain to produce useful, longer-lead forecasts.

This research was supported by NASA and the National Science Foundation.







Contacts and sources:
Robbin Ray
The University of New Hampshire


Citation: On the Spatial Coherence of Magnetic Ejecta: Measurements of Coronal Mass Ejections by Multiple Spacecraft Longitudinally Separated by 0.01 au
Noé Lugaz, Charles J. Farrugia, Reka M. Winslow, Nada Al-Haddad, Antoinette B. Galvin, Teresa Nieves-Chinchilla, Christina O. Lee, Miho Janvier. The Astrophysical Journal Letters, 2018; 864 (1): L7. DOI: 10.3847/2041-8213/aad9f4

Protein Identified That May Have Existed When Life Began



How did life arise on Earth? Rutgers researchers have found among the first and perhaps only hard evidence that simple protein catalysts – essential for cells, the building blocks of life, to function – may have existed when life began.

Their study of a primordial peptide, or short protein, is published in the Journal of the American Chemical Society.

In the late 1980s and early 1990s, the chemist Günter Wächtershäuser postulated that life began on iron- and sulfur-containing rocks in the ocean. Wächtershäuser and others predicted that short peptides would have bound metals and served as catalysts of life-producing chemistry, according to study co-author Vikas Nanda, an associate professor at Rutgers’ Robert Wood Johnson Medical School.

Researchers have designed a synthetic small protein that wraps around a metal core composed of iron and sulfur. This protein can be repeatedly charged and discharged, allowing it to shuttle electrons within a cell. Such peptides may have existed at the dawn of life, moving electrons in early metabolic cycles.

Image: Vikas Nanda

Human DNA consists of genes that code for proteins that are a few hundred to a few thousand amino acids long. These complex proteins – needed to make all living things function properly – are the result of billions of years of evolution. When life began, proteins were likely much simpler, perhaps just 10 to 20 amino acids long. With computer modeling, Rutgers scientists have been exploring what early peptides may have looked like and their possible chemical functions, according to Nanda.

The scientists used computers to model a short, 12-amino-acid protein and tested it in the laboratory. This peptide has several impressive and important features. It contains only two types of amino acids (rather than the full set of 20 amino acids from which the millions of different proteins needed for specific body functions are built), it is very short, and it could have emerged spontaneously on the early Earth in the right conditions. The metal cluster at the core of this peptide resembles the structure and chemistry of iron-sulfur minerals that were abundant in early Earth oceans. The peptide can also charge and discharge electrons repeatedly without falling apart, according to Nanda, a resident faculty member at the Center for Advanced Biotechnology and Medicine.
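
One way to appreciate why such a minimal peptide could plausibly arise by chance is to count the possible sequences. This back-of-the-envelope arithmetic is ours, not a calculation from the paper:

# Distinct 12-residue sequences available to random prebiotic chemistry
two_amino_acids = 2 ** 12   # 4,096 sequences with only two amino acid types
all_twenty = 20 ** 12       # ~4.1e15 sequences with the full modern alphabet
print(two_amino_acids, f"{all_twenty:.1e}")
# The two-letter space is about a trillion times smaller, so random
# chemistry could sample it far more thoroughly.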

“Modern proteins called ferredoxins do this, shuttling electrons around the cell to promote metabolism,” said senior author Professor Paul G. Falkowski, who leads Rutgers’ Environmental Biophysics and Molecular Ecology Laboratory. “A primordial peptide like the one we studied may have served a similar function in the origins of life.”

Falkowski is the principal investigator for a NASA-funded ENIGMA project led by Rutgers scientists that aims to understand how protein catalysts evolved at the start of life. Nanda leads one team that will characterize the full potential of the primordial peptide and continue to develop other molecules that may have played key roles in the origins of life.

With computers, Rutgers scientists have smashed and dissected nearly 10,000 proteins and pinpointed four “Legos of life” – core chemical structures that can be stacked to form the innumerable proteins inside all organisms. The small primordial peptide may be a precursor to the longer Legos of life, and scientists can now run experiments on how such peptides may have functioned in early-life chemistry.

Study co-lead authors are John Dongun Kim, postdoctoral researcher, and graduate student Douglas H. Pike. Other authors include Alexei M. Tyryshkin and G.V.T. Swapna, staff scientists; Hagai Raanan, postdoctoral researcher; and Gaetano T. Montelione, Jerome and Lorraine Aresty Chair and distinguished professor in the Department of Molecular Biology and Biochemistry, who is also a resident faculty member at the Center for Advanced Biotechnology and Medicine.


Contacts and sources:
Todd Bates
Rutgers University
Citation: Minimal Heterochiral de Novo Designed 4Fe–4S Binding Peptide Capable of Robust Electron Transfer.
J. Dongun Kim, Douglas H. Pike, Alexei M. Tyryshkin, G. V. T. Swapna, Hagai Raanan, Gaetano T. Montelione, Vikas Nanda, Paul G. Falkowski. Journal of the American Chemical Society, 2018; DOI: 10.1021/jacs.8b07553

Thursday, August 30, 2018

The Time You Eat: Time-Restricted Feeding Helps with Fat Loss

Modest changes to breakfast and dinner times can reduce body fat, a new pilot study in the Journal of Nutritional Science reports.

During a 10-week study on ‘time-restricted feeding’ (a form of intermittent fasting), researchers led by Dr Jonathan Johnston from the University of Surrey investigated the impact changing meal times has on dietary intake, body composition and blood risk markers for diabetes and heart disease.

Participants were split into two groups – those who were required to delay their breakfast by 90 minutes and have their dinner 90 minutes earlier, and those who ate meals as they would normally (the controls). Participants were required to provide blood samples and complete diet diaries before and during the 10-week intervention and complete a feedback questionnaire immediately after the study.
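
The net effect is a daily eating window that is three hours shorter. With hypothetical meal times (the study does not report each participant's actual schedule), the shift looks like this:

# Hypothetical baseline, in hours: breakfast at 8:00, dinner at 19:30
breakfast, dinner = 8.0, 19.5
baseline_window = dinner - breakfast                    # 11.5 hours
restricted_window = (dinner - 1.5) - (breakfast + 1.5)  # 8.5 hours
print(baseline_window, restricted_window)  # eating window shrinks by 3 hours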

Unlike previous studies in this area, participants were not asked to stick to a strict diet and could eat freely, provided it was within a certain eating window. This helped researchers assess whether this type of diet was easy to follow in everyday life.

Credit: Jorge Barrios / Wikimedia Commons

Researchers found that those who changed their mealtimes lost on average more than twice as much body fat as those in the control group, who ate their meals as normal. If these pilot data can be repeated in larger studies, there is potential for time-restricted feeding to have broad health benefits.

Although there were no restrictions on what participants could eat, researchers found that those who changed their mealtimes ate less food overall than the control group. This result was supported by questionnaire responses, which found that 57 percent of participants noted a reduction in food intake, whether due to reduced appetite, fewer eating opportunities or a cutback in snacking (particularly in the evenings). It is currently uncertain whether the longer fasting period undertaken by this group also contributed to the reduction in body fat.

As part of the study, researchers also examined whether fasting diets are compatible with everyday life and long-term commitment. When questioned, 57 percent of participants felt they could not have maintained the new meal times beyond the prescribed 10 weeks because of their incompatibility with family and social life. However, 43 percent of participants would consider continuing if eating times were more flexible.

Dr Jonathan Johnston, Reader in Chronobiology and Integrative Physiology at the University of Surrey, said: “Although this study is small, it has provided us with invaluable insight into how slight alterations to our meal times can have benefits to our bodies. Reduction in body fat lessens our chances of developing obesity and related diseases, so is vital in improving our overall health.

“However, as we have seen with these participants, fasting diets are difficult to follow and may not always be compatible with family and social life. We therefore need to make sure they are flexible and conducive to real life, as the potential benefits of such diets are clear to see.

“We are now going to use these preliminary findings to design larger, more comprehensive studies of time-restricted feeding."



Contacts and sources:
University of Surrey


Citation: A pilot feasibility study exploring the effects of a moderate time-restricted feeding intervention on energy intake, adiposity and metabolic physiology in free-living human subjects.
Rona Antoni, Tracey M. Robertson, M. Denise Robertson, Jonathan D. Johnston. Journal of Nutritional Science, 2018; 7 DOI: 10.1017/jns.2018.13

Most Land-Based Ecosystems Risk ‘Major Transformation’ Due to Climate Change



Without dramatic reductions in greenhouse-gas emissions, most of the planet’s land-based ecosystems—from its forests and grasslands to the deserts and tundra—are at high risk of “major transformation” due to climate change, according to a new study from an international research team.

The researchers used fossil records of global vegetation change that occurred during a period of post-glacial warming to project the magnitude of ecosystem transformations likely in the future under various greenhouse gas emissions scenarios.

They found that under a “business as usual” emissions scenario, in which little is done to rein in heat-trapping greenhouse-gas emissions, vegetation changes across the planet’s wild landscapes will likely be more far-reaching and disruptive than earlier studies suggested.

Researchers compiled and evaluated pollen and plant-fossil records from nearly 600 sites worldwide for their study of vegetation change.
Map reprinted with permission from Nolan et al., Science, 2018 (10.1126/science.aan5360).


The changes would threaten global biodiversity and derail vital services that nature provides to humanity, such as water security, carbon storage and recreation, according to study co-author Jonathan Overpeck, dean of the School for Environment and Sustainability at the University of Michigan.

“If we allow climate change to go unchecked, the vegetation of this planet is going to look completely different than it does today, and that means a huge risk to the diversity of the planet,” said Overpeck, who conceived the idea for the study with corresponding author Stephen T. Jackson of the U.S. Geological Survey.

The findings are scheduled for publication in the Aug. 31 edition of the journal Science. Forty-two researchers from around the world contributed to the paper. The first author is geosciences graduate student Connor Nolan of the University of Arizona.

Overpeck stressed that the team’s results are not merely hypothetical. Some of the expected vegetational changes are already underway in places like the American West and Southwest, where forest dieback and massive wildfires are transforming landscapes.

“We’re talking about global landscape change that is ubiquitous and dramatic,” Overpeck said. “And we’re already starting to see it in the United States, as well as around the globe.”

Previous studies based largely on computer modeling and present-day observations also predicted sweeping vegetational changes in response to climate warming due to the ongoing buildup of carbon dioxide and other greenhouse gases.

But the new study, which took five years to complete, is the first to use paleoecological data—the records of past vegetation change present in ancient pollen grains and plant fossils from hundreds of sites worldwide—to project the magnitude of future ecosystem changes on a global scale.

The team focused on vegetation changes that occurred during Earth’s last deglaciation, a period of warming that began 21,000 years ago and that was roughly comparable in magnitude (4 to 7 degrees Celsius, or 7 to 13 degrees Fahrenheit) to the warming expected in the next 100 to 150 years if greenhouse gas emissions are not reduced significantly.

Because the amount of warming in the two periods is similar, a post-glacial to modern comparison provides “a conservative estimate of the extent of ecological transformation to which the planet will be committed under future climate scenarios,” the authors wrote.

The estimate is considered conservative in part because the rate of projected future global warming is at least an order of magnitude greater than that of the last deglaciation and is therefore potentially far more disruptive.
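
The order-of-magnitude claim is easy to check with midpoint numbers (illustrative values of ours, not figures from the paper):

warming_c = 5.5                  # midpoint of the 4-7 degrees C post-glacial warming
past_rate = warming_c / 10_000   # spread over roughly 10,000 years of deglaciation
future_rate = warming_c / 125    # projected over roughly 100-150 years
print(f"{future_rate / past_rate:.0f}x faster")  # ~80x, well over an order of magnitude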

“We’re talking about the same amount of change in 10-to-20 thousand years that’s going to be crammed into a century or two,” said Jackson, director of the U.S. Geological Survey’s Southwest Climate Adaptation Center. “Ecosystems are going to be scrambling to catch up.”

To determine the extent of the vegetation change following the last glacial peak, the researchers first compiled and evaluated pollen and plant-fossil records from 594 sites worldwide—from every continent except Antarctica. All of the sites in their global database of ecological change had been reliably radiocarbon-dated to the period between 21,000 and 14,000 years before present.

Then they used paleoclimatic data from a number of sources to infer the corresponding temperature increases responsible for the vegetation changes seen in the fossils. That, in turn, enabled them to calculate how various levels of future warming will likely affect the planet’s terrestrial vegetation and ecosystems.

“We used the results from the past to look at the risk of future ecosystem change,” said the University of Arizona’s Nolan. “We find that as temperatures rise, there are bigger and bigger risks for more ecosystem change.”

Under a business as usual emissions scenario, the probability of large-scale vegetation change is greater than 60 percent, they concluded. In contrast, if greenhouse-gas emissions are reduced to levels targeted in the 2015 Paris Agreement, the probability of large-scale vegetation change is less than 45 percent.

Much of the change could occur during the 21st century, especially where vegetation disturbance is amplified by other factors, such as climate extremes, widespread plant mortality events, habitat fragmentation, invasive species and natural resource harvesting. The changes will likely continue into the 22nd century or beyond, the researchers concluded.

The ecosystem services that will be significantly impacted include carbon storage—currently, vast amounts of carbon are stored in the plants and soils of land-based ecosystems.

“A lot of the carbon now locked up by vegetation around the planet could be released to the atmosphere, further amplifying the magnitude of the climate change,” Overpeck said.

The authors say their empirically based, paleoecological approach provides an independent perspective on climate-driven vegetation change that complements previous studies based on modeling and present-day observations.

The fact that predictions from these diverse approaches are converging “strengthens the inference that projected climate changes will drive major ecosystem transformations,” the authors wrote.

“It’s a huge challenge we as a nation and global community need to take more seriously,” Overpeck said.

The paper is titled “Past and future global transformation of terrestrial ecosystems under climate change.” The research was supported by the National Science Foundation, U.S. Department of the Interior’s Southwest Climate Science Center, Russian Academy of Sciences and Russian Foundation for Fundamental Research.


Contacts and sources:
Jim Erickson
University of Michigan


Citation: Past and future global transformation of terrestrial ecosystems under climate change.
Connor Nolan, Jonathan T. Overpeck, Judy R. M. Allen, Patricia M. Anderson, Julio L. Betancourt, Heather A. Binney, Simon Brewer, Mark B. Bush, Brian M. Chase, Rachid Cheddadi, Morteza Djamali, John Dodson, Mary E. Edwards, William D. Gosling, Simon Haberle, Sara C. Hotchkiss, Brian Huntley, Sarah J. Ivory, A. Peter Kershaw, Soo-Hyun Kim, Claudio Latorre, Michelle Leydet, Anne-Marie Lézine, Kam-Biu Liu, Yao Liu, A. V. Lozhkin, Matt S. McGlone, Robert A. Marchant, Arata Momohara, Patricio I. Moreno, Stefanie Müller, Bette L. Otto-Bliesner, Caiming Shen, Janelle Stevenson, Hikaru Takahara, Pavel E. Tarasov, John Tipton, Annie Vincens, Chengyu Weng, Qinghai Xu, Zhuo Zheng, Stephen T. Jackson. Science, 2018; 361 (6405): 920 DOI: 10.1126/science.aan5360

How Your Brain Experiences Time

Researchers at the Kavli Institute for Systems Neuroscience have discovered a network of brain cells that expresses our sense of time within experiences and memories. The area of the brain where time is experienced is located right next to the area that codes for space.

Clocks are devices created by humans to measure time. By social contract, we agree to coordinate our own activities according to clock time. Nevertheless, your brain does not perceive duration with the standardized units of minutes and hours on your wristwatch. The signature of time in our experiences and memories belongs to a different kind of temporality altogether.

The illustration shows the episodic time from the experience of a 4-hour-long ski trip up and down a steep mountain, including events that alter the skier’s perception of time. The idea is that experienced time is event-dependent and may be perceived as faster or slower than clock time. The newly discovered neural record of experienced time is in the lateral entorhinal cortex (LEC) in green. Next to the LEC is the MEC, the brain’s seat for space (not depicted). Next to the MEC is the hippocampus, the structure in which information from the time and space networks come together to form episodic memories. 
Infographic: Kolbjørn Skarpnes & Rita Elmkvist Nilsen

Over the course of evolution, living organisms, including humans, have developed multiple biological clocks to help us keep track of time. What separates the brain’s various timekeepers is not only the scale of time that is measured, but also the phenomena the neural clocks are tuned to.

Some timekeepers are set by external processes, like the circadian clock that is tuned to the rise and fall of daylight. This clock helps organisms adapt to the rhythms of a day.

Other timekeepers are set by phenomena of more intrinsic origin, like the hippocampal time cells that form a domino-like chain of signals precisely tracking time spans of up to 10 seconds. Today we know a great deal about the brain’s mechanisms for measuring small timescales like seconds. Little is known, however, about the timescale the brain uses to record our experiences and memories, which can last anywhere from seconds to minutes to hours.

A Neural Clock for Experienced Time

A neural clock that keeps track of time during experiences is precisely what Albert Tsao and his colleagues at the Norwegian University of Science and Technology’s Kavli Institute for Systems Neuroscience believe they have discovered. By recording from a population of brain cells, the researchers identified a strong time-coding signal deep inside the brain.

Albert Tsao took his PhD at NTNU’s Kavli Institute and was supervised by the Mosers. Tsao is now a postdoc at Stanford University.

 Photo: Private

“Our study reveals how the brain makes sense of time as an event is experienced,” says Tsao. “The network does not explicitly encode time. What we measure is rather a subjective time derived from the ongoing flow of experience.”

“This network provides timestamps to events and keeps track of the order of events within an experience,” says Professor Moser.

The neural clock operates by organizing the flow of our experiences into an orderly sequence of events. This activity gives rise to the brain’s clock for subjective time. Experience, and the succession of events within experience, are thus the substance from which the brain generates and measures subjective time.

Time, Space and Memory in the Brain

“Today, we have a fairly good understanding of the way our brains process space, whereas our knowledge of time is less coherent,” Professor Moser says.

“Space in the brain is relatively easy to investigate. It consists of specialized cell types that are dedicated to specific functions. Together they constitute the nuts and bolts of the system,” he says.

In 2005, May-Britt and Edvard Moser discovered grid cells, which map our environment at different scales by dividing space into hexagonal units. In 2014, the Mosers shared the Nobel Prize in Physiology or Medicine with their colleague and mentor John O’Keefe at University College London for their discoveries of cells that constitute the brain’s positioning system.

In 2007, inspired by the Mosers’ discovery of spatially coding grid cells, then-Kavli Institute PhD candidate Albert Tsao set out to crack the code of what was happening in the enigmatic lateral entorhinal cortex (LEC). This area of the brain is right next to the medial entorhinal cortex (MEC), where his supervisors, the Mosers, had discovered grid cells.

“I was hoping to find a similar key operating cell that would reveal the functional identity of this neural network,” Tsao says. The task proved to be a time-consuming project.

A brief primer on the brain and time

What is episodic memory?

Your ability to recall and mentally relive specific episodes from your past is called episodic memory. This is the type of memory that you can visualize and talk about. Episodic memory is explicit in that its content is always anchored to a time and a place. Simply stated, episodic memories are a composition of what (content), where (position) and when (time). The brain area called the medial entorhinal cortex is particularly important for mapping positions in space. This study suggests that the lateral entorhinal cortex may be important for putting experience into a temporal context. Information from both of these structures comes together in the hippocampus to form episodic memories.

Where is the brain’s subjective clock located?

The researchers recorded the time signal from a neural network in the lateral entorhinal cortex (LEC). LEC, the medial entorhinal cortex (MEC) and hippocampus (Hipp) are components of the hippocampal formation, which are located in the cortices of the left and right temporal lobes of the brain.

What is experienced time?

Subjective experience is the very substrate from which our concept of time arises. Time as we perceive it. Subjective time. Psychological time. Experienced time. Mind time. Episodic time. That time which flies when you’re having fun, which stretches when you are waiting, and which nearly comes to a standstill in the split seconds of an unfolding catastrophe, is in its essence relational and relative to the multiple aspects of experience it is woven into.

“There didn’t seem to be a pattern to the activity of these cells. The signal changed all the time,” says Professor Moser.

It was only in the last couple of years that the researchers began to suspect that the signal was indeed changing with time. Suddenly the recorded data started to make sense.

“Time is a non-equilibrial process. It is always unique and changing,” Professor Moser says. “If this network was indeed coding for time, the signal would have to change with time in order to record experiences as unique memories.”

Technological advancements

The Mosers needed only to decode the signal of one single grid cell to discover how space is encoded in the medial entorhinal cortex. Decoding time in the lateral entorhinal cortex proved to be a more complex task. It was only when looking at activity from hundreds of cells that Tsao and his colleagues were able to see that the signal encoded time.

Marco the rat chasing bits of chocolate during a test.

Photo: Erlend Lånke Solbu/Norwegian Broadcasting Corporation, NRK

“The activity in these neural networks is so distributed that the mechanism itself probably lies in the structure of connectivity within the networks. The fact that it can be shaped into various unique patterns implies a high level of plasticity,” Professor Moser says. “I believe distributed networks and the combination of structures of activity may deserve more attention in the future. With this work, we have found an area with activity so strongly relating to the time of an event or experience, it may open up a whole new research field.”


The Shape of Time

The structure of time has long been disputed by philosophers and physicists alike. What can the brain’s newly discovered mechanism for episodic time tell us about how we perceive time? Is our perception of time linear, resembling a flowing river, or cyclical, like a wheel or a helix? Data from the Kavli study suggest both are correct, and that the signal in the time-coding network can take on many forms depending on the experience.

Professor Edvard Moser, Jørgen Sugar, a postdoc at the Kavli Institute, and Professor May-Britt Moser. The Kavli scientists believe this discovery will bring us one leap closer to solving the challenge of brain diseases such as Alzheimer’s. The neural clock for subjective time serves a critical function in our ability to organize experiences as a succession of events, to form memories and learn, and in the shaping of who we are.

 Photo: Erlend Lånke Solbu/Norwegian Broadcasting Corporation, NRK

In 2016, PhD candidate Jørgen Sugar joined the Kavli project to perform a new set of experiments that would test the hypothesis that the LEC network coded for episodic time. In one experiment a rat was introduced to a wide range of experiences and options for action. It was free to run around, investigate and chase bits of chocolate while visiting a series of open space environments.

“The uniqueness of the time signal during this experiment suggests that the rat had a very good record of time and temporal sequence of events throughout the two hours the experiment lasted,” Sugar says. “We were able to use the signal from the time-coding network to track exactly when in the experiment various events had occurred.”

In the second experiment, the task was more structured with a narrower range of experiences and options for action. The rat was trained to chase after bits of chocolate while turning left or right in a figure-8 maze.

The recorded data from the repetitive tasks of the figure-8 maze show that the rat’s encoding of time within each lap (left or right turn) improved, while time coding across laps overlapped and was thus reduced.


 Illustration: Albert Tsao

“With this activity, we saw the time-coding signal change character from unique sequences in time to a repetitive and partly overlapping pattern,” Tsao says. “On the other hand, the time signal became more precise and predictable during the repetitive task. The data suggest that the rat had a refined understanding of temporality during each lap, but a poor understanding of time from lap to lap and from the start to end throughout the experiment.”

Professor Moser says the study shows that by changing the activities you engage in, the content of your experience, you can actually change the course of the time-signal in LEC and thus the way you perceive time.





Contacts and sources:
Rita Elmkvist Nilsen
The Norwegian University of Science and Technology (NTNU)
Citation: Episodic time coding in lateral entorhinal cortex. Nature, August 30, 2018. Albert Tsao, Jørgen Sugar, Li Lu, Cheng Wang, James J. Knierim, May-Britt Moser, Edvard I. Moser. Kavli Institute for Systems Neuroscience and Centre for Neural Computation, NTNU, Trondheim, Norway.

Stars Versus Dust in the Carina Nebula

The Carina Nebula, one of the largest and brightest nebulae in the night sky, has been beautifully imaged by ESO’s VISTA telescope at the Paranal Observatory in Chile. By observing in infrared light, VISTA has peered through the hot gas and dark dust enshrouding the nebula to show us myriad stars, both newborn and in their death throes.

Credit: ESO 

About 7500 light-years away, in the constellation of Carina, lies a nebula within which stars form and perish side-by-side. Shaped by these dramatic events, the Carina Nebula is a dynamic, evolving cloud of thinly spread interstellar gas and dust.

This spectacular image of the Carina Nebula reveals the dynamic cloud of interstellar matter and thinly spread gas and dust as never before.
The Carina Nebula in infrared light
Credit: ESO/J. Emerson/M. Irwin/J. Lewis

The massive stars in the interior of this cosmic bubble emit intense radiation that causes the surrounding gas to glow. By contrast, other regions of the nebula contain dark pillars of dust cloaking newborn stars. There’s a battle raging between stars and dust in the Carina Nebula, and the newly formed stars are winning — they produce high-energy radiation and stellar winds which evaporate and disperse the dusty stellar nurseries in which they formed.

Spanning over 300 light-years, the Carina Nebula is one of the Milky Way's largest star-forming regions and is easily visible to the unaided eye under dark skies. Unfortunately for those of us living in the north, it lies 60 degrees below the celestial equator, so is visible only from the Southern Hemisphere.

This zoom video starts with a wide view of the Milky Way and ends with a close-up look at the Carina Nebula and its surroundings in the constellation of Carina.

Credit: ESO

Within this intriguing nebula, Eta Carinae takes pride of place as the most peculiar star system. This stellar behemoth — a curious form of stellar binary — is the most energetic star system in this region and was one of the brightest objects in the sky in the 1830s. It has since faded dramatically and is reaching the end of its life, but remains one of the most massive and luminous star systems in the Milky Way.

This image is a colour composite made from exposures from the Digitized Sky Survey 2 (DSS2). The field of view is approximately 4.7 x 4.9 degrees.
Digitized Sky Survey image of Eta Carinae Nebula
Credit:  ESO/Digitized Sky Survey 2. Acknowledgement: Davide De Martin.

Eta Carinae can be seen in this image as part of the bright patch of light just above the point of the “V” shape made by the dust clouds. Directly to the right of Eta Carinae is the relatively small Keyhole Nebula — a small, dense cloud of cold molecules and gas within the Carina Nebula — which hosts several massive stars, and whose appearance has also changed drastically over recent centuries.

The Carina Nebula was discovered from the Cape of Good Hope by Nicolas Louis de Lacaille in the 1750s and a huge number of images have been taken of it since then. But VISTA — the Visible and Infrared Survey Telescope for Astronomy — adds an unprecedentedly detailed view over a large area; its infrared vision is perfect for revealing the agglomerations of young stars hidden within the dusty material snaking through the Carina Nebula. In 2014, VISTA was used to pinpoint nearly five million individual sources of infrared light within this nebula, revealing the vast extent of this stellar breeding ground. VISTA is the world’s largest infrared telescope dedicated to surveys, and its large mirror, wide field of view and exquisitely sensitive detectors enable astronomers [1] to unveil a completely new view of the southern sky.
Notes

[1] The Principal Investigator of the observing proposal which led to this spectacular image was Jim Emerson (School of Physics & Astronomy, Queen Mary University of London, UK). His collaborators were Simon Hodgkin and Mike Irwin (Cambridge Astronomical Survey Unit, Cambridge University, UK). The data reduction was performed by Mike Irwin and Jim Lewis (Cambridge Astronomical Survey Unit, Cambridge University, UK).


Contacts and sources:
Calum Turner
ESO

Jim Emerson
School of Physics & Astronomy, Queen Mary University of London

 

Wednesday, August 29, 2018

The Fate of Plastic in the Oceans: Experiment Shows Microplastics Aggregate with Natural Particles

The oceans contain large numbers of particles of biological origin, including, for example, living and dead plankton organisms and their faecal material. These so-called biogenic particles interact with each other and often form lumps, or, in scientific terms, aggregates, many of which sink down through the water column. In addition to these natural particles, large amounts of plastic particles smaller than five millimetres, i.e. microplastics, have been present in the oceans for some time.

Although new microplastics are constantly entering the oceans, and some types of plastic have a relatively low density and therefore drift at the water surface, microplastic concentrations at the surface of the oceans are often lower than expected.

In addition, microplastics have repeatedly been found in deep-sea sediments in recent years. What happens to the microplastics in the ocean surface layer? How do they get to great water depths? "Our hypothesis was that microplastics, together with the biogenic particles in the seawater, form aggregates that possibly sink into deeper water layers," explains Dr. Jan Michels, member of the Cluster of Excellence 'The Future Ocean' and lead author of the study, which was published in the international journal Proceedings of the Royal Society B today.

Biofilm formed by bacteria and microalgae on a plastic surface in water from the Kiel Fjord, visualized with confocal laser scanning microscopy.

Credit: Jan Michels/Future Ocean

To test this hypothesis, the researchers conducted laboratory experiments with polystyrene beads with a size of 700 to 900 micrometres. The aggregation behaviour of the beads was compared in the presence and in the absence of biogenic particles. The experiments provided a clear result: "The presence of biogenic particles was decisive for the formation of aggregates. While microplastic particles alone hardly aggregated at all, they formed quite pronounced and stable aggregates together with biogenic particles within a few days," describes Prof. Dr. Anja Engel, head of the GEOMAR research group in which the study was carried out. After twelve days, an average of 73 percent of the microplastics were included in the aggregates.

"In addition, we assumed that biofilms that are present on the surface of the microplastics play a role in the formation of aggregates," explains Michels, who led the investigations during his time at GEOMAR and now works at Kiel University. Such biofilms are formed by microorganisms, typically bacteria and unicellular algae, and are relatively sticky. To investigate their influence on the aggregation, comparative experiments were conducted with plastic beads that were either purified or coated with a biofilm. 

"Together with biogenic particles, the biofilm-coated microplastics formed the first aggregates after only a few hours, much earlier and faster than the microplastics that were purified at the beginning of the experiments," says Michels. On average, 91 percent of the microplastics coated with biofilm were included in aggregates after three days.

Photographs showing typical aggregates formed by polystyrene beads and biogenic particles during the laboratory experiments.

Credit: Jan Michels/Future Ocean

"If microplastics are coated with a biofilm and biogenic particles are simultaneously present, stable aggregates of microplastics and biogenic particles are formed very quickly in the laboratory," summarises Michels. In many regions of the oceans, the presence of both numerous biogenic particles and biofilms on the microplastics is probably a typical situation.

 "This is why the aggregation processes that we observed in our laboratory experiments very likely also take place in the oceans and have a great influence on the transport and distribution of microplastics," explains Prof. Dr. Kai Wirtz, who works at the Helmholtz-Zentrum Geesthacht and was involved in the project. This could be further investigated in the future through a targeted collection of aggregates in the oceans and subsequent systematic analyses for the presence of microplastic


Contacts and sources:
Lisa Wolf
Helmholtz Centre For Ocean Research Kiel (GEOMAR)


Citation: Rapid aggregation of biofilm-covered microplastics with marine biogenic particles. Michels J, Stippkugel A, Lenz M, Wirtz K, Engel A. 2018. Proc. R. Soc. B 20181203. http://dx.doi.org/10.1098/rspb.2018.1203

Diplomat's Mystery Illness and Pulsed Radiofrequency / Microwave Radiation

Writing in advance of the September 15 issue of Neural Computation, Beatrice Golomb, MD, PhD, professor of medicine at University of California San Diego School of Medicine, says publicly reported symptoms and experiences of a "mystery illness" afflicting American and Canadian diplomats in Cuba and China strongly match known effects of pulsed radiofrequency/microwave electromagnetic (RF/MW) radiation.

Her conclusions, she said, may aid in the treatment of the diplomats (and affected family members) and assist U.S. government agencies seeking to determine the precise cause. More broadly, Golomb said her research draws attention to a larger population of people who are affected by similar health problems.

"I looked at what's known about pulsed RF/MW in relation to diplomats' experiences," said Golomb. "Everything fits. The specifics of the varied sounds that the diplomats reported hearing during the apparent inciting episodes, such as chirping, ringing and buzzing, cohere in detail with known properties of so-called 'microwave hearing,' also known as the Frey effect.

This is Beatrice Golomb, MD, PhD, professor of medicine at UC San Diego School of Medicine.
Credit: UC San Diego Health

"And the symptoms that emerged fit, including the dominance of sleep problems, headaches and cognitive issues, as well as the distinctive prominence of auditory symptoms. Even objective findings reported on brain imaging fit with what has been reported for persons affected by RF/MW radiation."

Beginning in 2016, personnel at the U.S. Embassy in Havana, Cuba (as well as Canadian diplomats and family members) described hearing strange sounds, followed by development of an array of symptoms. The source of the health problems has not been determined. Though some officials and media have described the events as "sonic attacks," some experts on sound have rejected this explanation. In May of this year, the State Department reported that U.S. government employees in Guangzhou, China had also experienced similar sounds and health problems.

Affected diplomats and family members from both locations were medically evacuated to the U.S. for treatment, but despite multiple government investigations, an official explanation of events and subsequent illnesses has not been announced. At least two early published studies examining available data were inconclusive.

In her paper, scheduled to be published September 15 in Neural Computation, Golomb compared rates of described symptoms among diplomats with a published 2012 study of symptoms reported by people affected by electromagnetic radiation in Japan. By and large, she said the cited symptoms -- headache, cognitive problems, sleep issues, irritability, nervousness or anxiety, dizziness and tinnitus (ringing in the ears) -- occurred at strikingly similar rates.

Some diplomats reported hearing loss. That symptom was not assessed in both studies so rates could not be compared, but Golomb said it is widely reported in both conditions. She also noted that previous brain imaging research in persons affected by RF/EMR "showed evidence of traumatic brain injury, paralleling reports in diplomats."

David O. Carpenter, MD, is director of the Institute for Health and the Environment at the University of Albany, part of the State University of New York. He was not involved in Golomb's study. He said evidence cited by Golomb illustrates "microwave hearing," which results "from heating induced in tissue, which causes 'waves' in the ear and results in clicks and other sounds." Reported symptoms, he said, characterize the syndrome of electrohypersensitivity (EHS), in which unusual exposure to radiofrequency radiation can trigger symptoms in vulnerable persons that may be permanent and disabling.

"We have seen this before when the Soviets irradiated the U.S. Embassy in Moscow in the days of the Cold War," he said.

Golomb, whose undergraduate degree was in physics, conducts research investigating the relationship of oxidative stress and mitochondrial function -- mechanisms shown to be involved with RF/EMR injury -- to health, aging, behavior and illness. Her work is wide-ranging, with published studies on Gulf War illness, statins, antibiotic toxicity, ALS, autism and the health effects of chocolate and trans fats, with a secondary interest in research methods, including placebos.

Golomb said an analysis of 100 studies examining whether low-level RF produced oxidative injury found that 93 studies concluded that it did. Oxidative injury or stress arises when there is an imbalance between the production of reactive oxygen species (free radicals) and the body's detoxifying antioxidant defenses. Oxidative stress has been linked to a range of diseases and conditions, from Alzheimer's disease, autism and depression to cancer and chronic fatigue syndrome, as well as toxic effects linked to certain drugs and chemicals. More to the point, Golomb said, oxidative injury has been linked to the symptoms and conditions reported in diplomats.

The health consequences of RF/MW exposure are a matter of ongoing debate. Some government agencies, such as the National Institute of Environmental Health Sciences and the National Cancer Institute, publicly assert that low- to mid-frequency, non-ionizing radiation like that from microwaves and RF is generally harmless. They cite studies that have found no conclusive link between exposure and harm.

But others, including researchers like Golomb, dispute that conclusion, noting that many of the no-harm studies were funded by vested industries or had other conflicts of interest. She said independent studies over decades have reported biological effects and harms to health from nonionizing radiation, specifically RF/MW radiation, including via oxidative stress and downstream mechanisms, such as inflammation, autoimmune activation and mitochondrial injury.

Golomb compared the situation to persons with peanut allergies: Most people do not experience any adverse effect from peanut exposure, but for a vulnerable subgroup, exposure produces negative, even life-threatening, consequences.

In her analysis, Golomb concludes that "of hypotheses tendered to date, (RF/MW exposure) alone fits the facts, including the peculiar ones" regarding events in Cuba and China. She said her findings advocate for more robust attention to pulsed RF/MW and associated adverse health effects.

"The focus must be on research by parties free from ties to vested interests. Such research is needed not only to explain and address the symptoms in diplomats, but also for the benefit of the small fraction - but large number -- of persons outside the diplomatic corps, who are beset by similar problems."


Contacts and sources:
Scott LaFee
University of California San Diego

How Brown Adipose Tissue Reacts to a Carbohydrate-Rich Meal: Food Activates Brown Fat

Brown fat consumes energy, which is the reason why it could be important for preventing obesity and diabetes. Working together with an international team, researchers at the Technical University of Munich (TUM) were able to demonstrate that food also increases the thermogenesis of brown fat, and not just cold, as previously assumed.

Brown adipose tissue in humans has been the subject of numerous studies, as it has the exact opposite function of white adipose tissue, which stores energy in the form of storage fats called triacylglycerides. Specifically, brown fat burns the energy of the triacylglycerides (thermogenesis).

For the tests of the study, subjects consumed a high-carbohydrate meal, such as a vegetable lasagna.
Picture: iStockphoto/ gbh007

However, the activity of this physiologically highly favorable adipose tissue changes over time: It decreases with age, just as it does in obese individuals and diabetics. Hence, ways to heat up thermogenesis in brown fat are being sought which can be used to prevent obesity and diabetes.

Brown adipose tissue can be trained

To date, only one option has been acknowledged in this context: Cold-induced thermogenesis. "Studies showed that participants who spent hours in the cold chamber daily not only experienced an increase in the heat output of brown fat in the cold as they got used to the lower temperatures, but also an improvement in the control of blood sugar via insulin," reports Professor Martin Klingenspor, head of the Chair for Molecular Nutritional Medicine at the Else Kröner-Fresenius Center at TU Munich.

Carbohydrate-rich meal as effective as cold stimuli

The current study, led by the University of Turku in collaboration with international experts, among them Professor Martin Klingenspor and his team from the Else Kröner-Fresenius Center of TUM, investigated how a carbohydrate-rich meal affects the activity of brown adipose tissue. "For the first time, it could be demonstrated that heat generation in brown adipose tissue could be activated by a test meal just as it would be by exposure to cold," said Klingenspor, summarizing the findings.

For the study, the same subjects were investigated twice: once after exposure to a cold stimulus, and a second time after ingestion of a carbohydrate-rich meal. In addition, a control group was included. Important markers for thermogenesis were measured before and after, which not only included the absorption of glucose and fatty acids, but also the oxygen consumption in brown fat. To do so, the researchers employed indirect calorimetry in combination with positron emission tomography and computer tomography (PET/CT).

"Ten percent of daily energy input is lost due to the thermogenic effect of the food," says Prof. Martin Klingenspor. This postprandial thermogenesis after eating comes not only from the obligatory heat generation due to muscle activity in the intestines, secretion, and digestive processes. There is apparently also a facultative component to which brown fat contributes.

The next step of the experiments will now be to find out whether this is energy that is simply "lost" or whether this phenomenon has another function. "We now know that the activation of brown adipose tissue could be linked to a feeling of being full," reports Klingenspor. Further studies will now be conducted to prove this.


Contacts and sources:
 Professor Martin Klingenspor
Technical University of Munich

Citation: Postprandial Oxidative Metabolism of Human Brown Fat Indicates Thermogenesis. U. Din et al. Cell Metabolism 07/2018. https://doi.org/10.1016/j.cmet.2018.05.020

Researchers Unearth Secret Tunnels Between the Skull and the Brain

Bone marrow, the spongy tissue inside most of our bones, produces red blood cells as well as immune cells that help fight off infections and heal injuries. According to a new study of mice and humans, tiny tunnels run from skull bone marrow to the lining of the brain and may provide a direct route for immune cells responding to injuries caused by stroke and other brain disorders. The study was funded in part by the National Institutes of Health and published in Nature Neuroscience.

“We always thought that immune cells from our arms and legs traveled via blood to damaged brain tissue. These findings suggest that immune cells may instead be taking a shortcut to rapidly arrive at areas of inflammation,” said Francesca Bosetti, Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS), which provided funding for the study. “Inflammation plays a critical role in many brain disorders and it is possible that the newly described channels may be important in a number of conditions. The discovery of these channels opens up many new avenues of research.”
Credit: Grey Geezer / Wikimedia Commons

Using state-of-the-art tools and cell-specific dyes in mice, Matthias Nahrendorf, M.D., Ph.D., professor at Harvard Medical School and Massachusetts General Hospital in Boston, and his colleagues were able to distinguish whether immune cells traveling to brain tissue damaged by stroke or meningitis came from bone marrow in the skull or the tibia, a large leg bone. In this study, the researchers focused on neutrophils, a particular type of immune cell, which are among the first to arrive at an injury site.

Results in mouse brains showed that during stroke, the skull is more likely to supply neutrophils to the injured tissue than the tibia. In contrast, following a heart attack, the skull and tibia provided similar numbers of neutrophils to the heart, which is far from both of those areas.

Dr. Nahrendorf’s group also observed that six hours after stroke, there were fewer neutrophils in the skull bone marrow than in the tibia bone marrow, suggesting that the skull marrow released many more cells to the injury site. These findings indicate that bone marrow throughout the body does not uniformly contribute immune cells to help injured or infected tissue and suggests that the injured brain and skull bone marrow may “communicate” in some way that results in a direct response from adjacent leukocytes.

Dr. Nahrendorf’s team found that differences in bone marrow activity during inflammation may be determined by stromal cell-derived factor-1 (SDF-1), a molecule that keeps immune cells in the bone marrow. When levels of SDF-1 decrease, neutrophils are released from marrow. The researchers observed levels of SDF-1 decreasing six hours after stroke, but only in the skull marrow, and not in the tibia. The results suggest that the decrease in levels of SDF-1 may be a response to local tissue damage and alert and mobilize only the bone marrow that is closest to the site of inflammation.

Next, Dr. Nahrendorf and his colleagues wanted to see how the neutrophils were arriving at the injured tissue.

“We started examining the skull very carefully, looking at it from all angles, trying to figure out how neutrophils are getting to the brain,” said Dr. Nahrendorf. “Unexpectedly, we discovered tiny channels that connected the marrow directly with the outer lining of the brain.”

With the help of advanced imaging techniques, the researchers watched neutrophils moving through the channels. Blood normally flowed through the channels from the skull’s interior to the bone marrow, but after a stroke, neutrophils were seen moving in the opposite direction to get to damaged tissue.

Dr. Nahrendorf’s team detected the channels throughout the skull as well as in the tibia, which led them to search for similar features in the human skull. Detailed imaging of human skull samples obtained from surgery uncovered the presence of the channels. The channels in the human skull were five times larger in diameter than those found in mice. In both human and mouse skulls, the channels were found in the inner and outer layers of bone.

Future research will seek to identify the other types of cells that travel through the newly discovered tunnels and the role these structures play in health and disease.

This study was supported by NINDS (NS084863) and the NIH’s National Heart, Lung and Blood Institute (HL139598).

The NINDS is the nation’s leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.




Contacts and sources:
Barbara McMakin
NIH/National Institute of Neurological Disorders and Stroke


Citation: Direct vascular channels connect skull bone marrow and the brain surface enabling myeloid cell migration.
Fanny Herisson, Vanessa Frodermann, Gabriel Courties, David Rohde, Yuan Sun, Katrien Vandoorne, Gregory R. Wojtkiewicz, Gustavo Santos Masson, Claudio Vinegoni, Jiwon Kim, Dong-Eog Kim, Ralph Weissleder, Filip K. Swirski, Michael A. Moskowitz, Matthias Nahrendorf. Nature Neuroscience, 2018; DOI: 10.1038/s41593-018-0213-2

How We Judge Personality from Faces Depends on Our Beliefs about How Personality Works



We make snap judgments of others based not only on their facial appearance, but also on our pre-existing beliefs about how others’ personalities work, finds a new study by a team of psychology researchers.

Credit: NYU

The team’s work, reported in the journal Proceedings of the National Academy of Sciences, underscores how we interpret others’ facial features to form impressions of their personalities.

“People form personality impressions from others’ facial appearance within only a few hundred milliseconds,” observes Jonathan Freeman, the paper’s senior author and an associate professor in NYU’s Department of Psychology and Center for Neural Science. “Our findings suggest that face impressions are shaped not only by a face’s specific features but also by our own beliefs about personality—for instance, the cues that make a face look competent and make a face look friendly are physically more similar for those who believe competence and friendliness co-occur in other people’s personalities.”

“Although these impressions are highly reliable, they are often quite inaccurate,” Freeman adds. “And yet they are consequential, as previous research has found face impressions to predict a range of real-world outcomes, from political elections to hiring decisions, criminal sentencing, and dating. Initial impressions of faces can bias how we interact and make critical decisions about people, and so understanding the mechanisms behind these impressions is important for developing techniques to reduce biases based on facial features that typically operate outside of awareness.”

The paper’s other authors included Ryan Stolier, lead author of the paper and doctoral candidate in NYU’s Department of Psychology, Eric Hehman of McGill University, and Matthias Keller and Mirella Walker of the University of Basel in Switzerland.

We have long known that people form some personality impressions of others based merely on their facial appearance. For instance, we see people with babyish features as agreeable and harmless, and those with faces resembling angry expressions as dishonest and unfriendly.

What’s less clear is how widespread this process is and how, precisely, it transpires.

In their PNAS study, the researchers explored these questions through a series of experiments, specifically seeking to determine whether our own pre-existing beliefs about how personality works affect the way we “see” it on others’ faces.


The experiments’ 920 subjects indicated how much they believed different traits co-occur in other people’s personalities. For example, they would indicate how much they believe competence co-occurs with friendliness in others. Each subject was then shown dozens of faces on a computer screen and quickly judged those faces on competence and friendliness, allowing the researchers to see whether the faces a subject judged as competent were also judged as friendly—or as unfriendly. In all, subjects were asked about several personality traits, including the following: “agreeable,” “aggressive,” “assertive,” “caring,” “competent,” “conscientious,” “confident,” “creative,” “dominant,” “egotistic,” “emotionally stable,” “extroverted,” “intelligent,” “mean,” “neurotic,” “open to experience,” “responsible,” “self-disciplined,” “sociable,” “trustworthy,” “unhappy,” and “weird.”
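
To make that design concrete, here is a minimal sketch in Python of the kind of analysis it implies. It is not the authors’ code; the face count and all data below are hypothetical stand-ins. For each subject, correlate competence and friendliness ratings across faces, then test whether that similarity tracks the subject’s stated belief that the two traits co-occur.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_faces = 920, 60   # 920 subjects as reported; 60 faces is assumed

    # belief[i]: how much subject i believes competence and friendliness
    # co-occur in other people's personalities (random placeholder values)
    belief = rng.uniform(-1.0, 1.0, n_subjects)

    # Each subject's snap judgments of every face on the two traits
    competence = rng.normal(size=(n_subjects, n_faces))
    friendliness = rng.normal(size=(n_subjects, n_faces))

    def rowwise_r(a, b):
        # Pearson correlation between each subject's two rating vectors
        a = a - a.mean(axis=1, keepdims=True)
        b = b - b.mean(axis=1, keepdims=True)
        return (a * b).sum(axis=1) / np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))

    # How similar are each subject's competence and friendliness impressions?
    impression_similarity = rowwise_r(competence, friendliness)

    # The key test: do subjects who believe the traits co-occur also
    # judge them more similarly on faces?
    print(np.corrcoef(belief, impression_similarity)[0, 1])

With random placeholder data the final correlation hovers near zero; the study’s finding is that, with real subjects, it is reliably positive.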

NYU researchers tested how much we believe different traits co-occur in other people’s personalities—for instance, how much we think competence co-occurs with friendliness in others. They then used a method that can visualize a subject’s mental image of a personality trait, allowing them to see whether subjects who believe competent people also tend to be friendly hold mental images of a competent face and a friendly face that are physically more similar.

 Image courtesy of Ryan Stolier and Jonathan Freeman, NYU.

Overall, the findings confirmed what the researchers predicted: the more strongly subjects believed that any two traits, such as competence and friendliness, co-occur in others, the more similar their impressions of those two traits were when judging faces.

In a final experiment, the researchers measured the exact facial features used to make personality impressions, using a cutting-edge method that can visualize a subject’s mental image of a personality trait in the mind’s eye. They found that the facial features used to judge personality indeed change based on our beliefs. For instance, people who believe that competent others also tend to be friendly hold mental images of what makes a face look competent and what makes a face look friendly that are physically more similar.

“Generally, the results suggest that beliefs about personality drive face impressions, such that people who believe any set of personality traits are related tend to see those traits similarly in faces,” says Stolier. “This may explain how humans can make any set of impressions from a face.”

The results lend support to the researchers’ view that most traits perceived in others’ faces are not unique but are derived from one another, with a few core traits driving the process.

“For instance, while a face may not appear right away to be conscientious, it may appear to be agreeable, intelligent, and emotional—personality traits a perceiver may believe underlie creativity, resulting in them seeing a face as conscientious,” adds Stolier.

The results also provide an explanation for how people can make so many different impressions of someone just from a handful of features present on a face.

“We may only see cues in a face that directly elicit several personality impressions, such as ‘submissiveness’ for those who have ‘baby faces,’ ” observes Stolier. “However, the perceptual system may take these few impressions and add them together, such that we see a face as conscientious or religious, to the extent we think the personality judgment is related to those impressions we initially make from a face—such as agreeableness and submissiveness.”

The research was supported, in part, by grants from the National Institutes of Health (F31-MH114505) and the National Science Foundation (BCS-1654731).

DOI: 10.1073/pnas.1807222115





Contacts and sources:
James Devitt
New York University

Citation: The conceptual structure of face impressions.
Ryan M. Stolier, Eric Hehman, Matthias D. Keller, Mirella Walker, Jonathan B. Freeman. Proceedings of the National Academy of Sciences, 2018; 201807222 DOI: 10.1073/pnas.1807222115

From Shrews to Elephants, Animal Reflexes Surprisingly Slow

While speediness is a priority for any animal trying to escape a predator or avoid a fall, a new study by Simon Fraser University researchers suggests that even the fastest reflexes among all animals are remarkably slow.

"Animals as small as shrews and as large as elephants are built out of the same building blocks of nerve and muscle,” says Max Donelan, a professor of Biomedical Physiology and Kinesiology (BPK) and director of SFU’s Locomotion Lab. “We sought to understand how these building blocks are configured in different sized animals, and how this limits their performance.”

The study is published today in the Proceedings of the Royal Society B.

Since an animal’s life can hinge on how quickly it can sense and respond to stimuli, the team set out to quantify the speed of the fastest reflex involved in the locomotion of terrestrial mammals, in animals ranging in size from minuscule shrews to massive elephants.

“Not surprisingly, we found that reflexes take a lot longer in large animals, about 17 times longer than in their smallest counterparts,” says SFU postdoctoral researcher Heather More. “What was more interesting to us is that these delays are mostly offset by movement times that also increase with size—relative delay is only twice as long in an elephant as in a shrew, putting large animals at only a slight disadvantage.”
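
A quick back-of-envelope calculation, using only the ratios quoted above (an illustration, not a figure from the paper), shows how those two numbers fit together:

    # Ratios quoted in the article, elephant vs. shrew
    delay_ratio = 17          # absolute reflex delay is about 17x longer
    relative_delay_ratio = 2  # delay as a fraction of movement time is only ~2x

    # Since relative delay = delay / movement time, movement times must
    # themselves stretch with body size by roughly:
    movement_time_ratio = delay_ratio / relative_delay_ratio
    print(movement_time_ratio)  # 8.5: an elephant's movements take ~8.5x longer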

SFU postdoctoral researcher Heather More, who works with professor Max Donelan as part of his team in SFU's Locomotion Lab

Credit: SFU

More says their findings have implications for all animals, no matter their size. “When running quickly, all animals are challenged by their lengthy response times, which comprise nearly all of their available movement time—even the fastest reflex for the control of running is remarkably slow.” She adds: “If a small animal puts its foot in a hole when sprinting, there is barely enough time for it to adjust its motion while the foot is on the ground, and a large animal has no time at all—it has to wait until the next step.”

More puts these delays in context: “One component of response time, nerve conduction delay, is particularly long in large animals. To compare to engineered systems, it takes less time for an orbiting satellite to send a signal to Earth than for an elephant’s spinal cord to send a signal to its lower leg.”
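
The physics behind that comparison is easy to check with rough, illustrative numbers (these values are assumptions for the sketch, not figures from the paper):

    C = 3.0e8        # radio signals travel at the speed of light, m/s
    ORBIT = 5.0e5    # a typical low-Earth-orbit altitude, ~500 km, in m
    NERVE_V = 70.0   # fast myelinated axons conduct at very roughly 70 m/s
    LEG_PATH = 2.5   # assumed spinal-cord-to-lower-leg distance in an elephant, m

    print(ORBIT / C)           # ~0.002 s for the satellite signal to reach the ground
    print(LEG_PATH / NERVE_V)  # ~0.036 s for the nerve impulse to reach the leg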

A different component of delay—the time for a nerve impulse to cross a single synapse in the spinal cord—is relatively long for small animals and relatively short for large animals. “This synaptic delay is one measure of the time to think—so large animals have lots of time to think about how to respond to a disturbance, whereas small animals don’t."

The researchers say this means small and large animals likely compensate for their relatively slow reflexes in different ways. “We suspect that small animals rely on pre-flexive control, where their bodies are built in such a way that they can reject disturbances like stepping in a hole without intervention from their nervous system,” says Donelan.

“Large animals, on the other hand, may rely more on prediction to think ahead about the consequences of their movements and adjust accordingly.”

Donelan’s lab has carried out previous locomotion studies involving elephants, giraffes and even kangaroos. A founder of Bionic Power and one of the original inventors of the bionic energy harvester, Donelan has garnered international attention for his research over the years.


Contacts and sources:
Marianne Meadahl
Simon Fraser University
Citation: Scaling of sensorimotor delays in terrestrial mammals.
Heather L. More, J. Maxwell Donelan. Proceedings of the Royal Society B: Biological Sciences, 2018; 285 (1885): 20180613 DOI: 10.1098/rspb.2018.0613

Mammal Forerunner that Reproduced Like a Reptile Sheds Light on Brain Evolution

Researchers from The University of Texas at Austin found a fossil of an extinct mammal relative with a clutch of 38 babies that were near miniatures of their mother. 
Multiple Kayentatherium skeletons
Credit: Eva Hoffman / The University of Texas at Austin.


Compared with the rest of the animal kingdom, mammals have the biggest brains and produce some of the smallest litters of offspring. A newly described fossil of an extinct mammal relative — and her 38 babies — is among the best evidence that a key development in the evolution of mammals was trading brood power for brain power.

A figure representing the 38 Kayentatherium babies found with an adult specimen. They are the only known fossils of babies from an extinct mammal relative that lived during the Early Jurassic.

Credit:  Eva Hoffman / The University of Texas at Austin.

The find is among the rarest of the rare because it contains the only known fossils of babies from any mammal precursor, said researchers from The University of Texas at Austin who discovered and studied the fossilized family. But the presence of so many babies — more than twice the average litter size of any living mammal — revealed that it reproduced in a manner akin to reptiles. Researchers think the babies were probably developing inside eggs or had just recently hatched when they died.

The study, published in the journal Nature on Aug. 29, describes specimens that researchers say may help reveal how mammals evolved a different approach to reproduction than their ancestors, which produced large numbers of offspring.

“These babies are from a really important point in the evolutionary tree,” said Eva Hoffman, who led research on the fossil as a graduate student at the UT Jackson School of Geosciences. “They had a lot of features similar to modern mammals, features that are relevant in understanding mammalian evolution.”

Hoffman co-authored the study with her graduate adviser, Jackson School Professor Timothy Rowe.

The skull of a baby Kayentatherium. It is about 1 centimeter long.

Credit:  Eva Hoffman / The University of Texas at Austin.

The mammal relative belonged to an extinct species of beagle-size plant-eaters called Kayentatherium wellesi that lived alongside dinosaurs about 185 million years ago. Like mammals, Kayentatherium probably had hair.

When Rowe collected the fossil more than 18 years ago from a rock formation in Arizona, he thought that he was bringing a single specimen back with him. He had no idea about the dozens of babies it contained.

Sebastian Egberts, a former graduate student and fossil preparator at the Jackson School, spotted the first sign of the babies in 2009, when a grain-sized speck of tooth enamel caught his eye as he was unpacking the fossil.

“It didn’t look like a pointy fish tooth or a small tooth from a primitive reptile,” said Egberts, who is now an instructor of anatomy at the Philadelphia College of Osteopathic Medicine. “It looked more like a molariform tooth (molar-like tooth) — and that got me very excited.”

A CT scan of the fossil revealed a handful of bones inside the rock. However, it took advances in CT-imaging technology during the next seven years, the expertise of technicians at UT Austin’s High-Resolution X-ray Computed Tomography Facility (UTCT), and extensive digital processing by Hoffman to reveal the rest of the babies — not only jaws and teeth, but complete skulls and partial skeletons.

The 3D visualizations Hoffman produced allowed her to conduct an in-depth analysis of the fossil that verified that the tiny bones belonged to babies and were the same species as the adult. Her analysis also revealed that the skulls of the babies were like scaled-down replicas of the adult, with skulls a tenth the size but otherwise proportional. This finding is in contrast to mammals, which have babies that are born with shortened faces and bulbous heads to account for big brains.

The fossils discovered by the researchers belong to Kayentatherium, an extinct mammal relative that lived during the Early Jurassic.

Credit: Eva Hoffman / The University of Texas at Austin.

The brain is an energy-intensive organ, and pregnancy — not to mention childrearing — is an energy-intensive process. The discovery that Kayentatherium had a tiny brain and many babies, despite otherwise having much in common with mammals, suggests that a critical step in the evolution of mammals was trading big litters for big brains, and that this step happened later in mammalian evolution.

“Just a few million years later, in mammals, they unquestionably had big brains, and they unquestionably had a small litter size,” Rowe said.

The mammalian approach to reproduction directly relates to human development — including the development of our own brains. By looking back at our early mammalian ancestors, humans can learn more about the evolutionary process that helped shape who we are as a species, he said.

“There are additional deep stories on the evolution of development, and the evolution of mammalian intelligence and behavior and physiology that can be squeezed out of a remarkable fossil like this now that we have the technology to study it,” Rowe said.

Funding for the research was provided by the National Science Foundation, The University of Texas Geology Foundation and the Jackson School of Geosciences.



Contacts and sources:
Monica Kortsha
University of Texas at Austin

Citation: Jurassic stem-mammal perinates and the origin of mammalian reproduction and growth.
Eva A. Hoffman, Timothy B. Rowe. Nature, 2018; DOI: 10.1038/s41586-018-0441-3

'Archived' Heat Could Melt Entire Arctic Sea-Ice Pack

Arctic sea ice isn’t just threatened by the melting of ice around its edges, a new study has found: Warmer water that originated hundreds of miles away has penetrated deep into the interior of the Arctic.

That “archived” heat, currently trapped below the surface, has the potential to melt the region’s entire sea-ice pack if it reaches the surface, researchers say.

The study appears online Aug. 29 in the journal Science Advances.

“We document a striking ocean warming in one of the main basins of the interior Arctic Ocean, the Canadian Basin,” said lead author Mary-Louise Timmermans, a professor of geology and geophysics at Yale University.

Heat trapped below the surface has the potential to melt the Arctic region's entire sea-ice pack if it reaches the surface.
A heat map of the Arctic interior.
Credit: Yale University

The upper ocean in the Canadian Basin has seen a two-fold increase in heat content over the past 30 years, the researchers said. They traced the source to waters hundreds of miles to the south, where reduced sea ice has left the surface ocean more exposed to summer solar warming. In turn, Arctic winds are driving the warmer water north, but below the surface waters.

“This means the effects of sea-ice loss are not limited to the ice-free regions themselves, but also lead to increased heat accumulation in the interior of the Arctic Ocean that can have climate effects well beyond the summer season,” Timmermans said. “Presently this heat is trapped below the surface layer. Should it be mixed up to the surface, there is enough heat to entirely melt the sea-ice pack that covers this region for most of the year.”
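
To get a feel for why this subsurface heat matters, a rough, illustrative calculation (with assumed values, not figures from the study) compares the heat stored in a warm layer with the energy needed to melt sea ice:

    RHO_W, CP = 1025.0, 3990.0   # seawater density (kg/m^3) and heat capacity (J/kg/K)
    RHO_ICE, LF = 917.0, 3.34e5  # ice density (kg/m^3) and latent heat of fusion (J/kg)

    dT = 1.0  # assumed warm-layer temperature excess above freezing, K
    h = 80.0  # assumed warm-layer thickness, m

    heat_per_area = RHO_W * CP * dT * h          # stored heat, J per m^2 of ocean
    ice_melted = heat_per_area / (RHO_ICE * LF)  # ice thickness that heat could melt, m
    print(ice_melted)  # ~1.1 m, comparable to typical Arctic sea-ice thickness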

The co-authors of the study are John Toole and Richard Krishfield of the Woods Hole Oceanographic Institution.

The National Science Foundation Division of Polar Programs provided support for the research.


Contacts and sources:
Jim Shelton
Yale University
Citation: Warming of the interior Arctic Ocean linked to sea ice losses at the basin margins.
Mary-Louise Timmermans, John Toole, Richard Krishfield. Science Advances, 2018; 4 (8): eaat6773 DOI: 10.1126/sciadv.aat6773

Scientist Looks in the Depths of the Great Red Spot to Find Water on Jupiter

For centuries, scientists have worked to understand the makeup of Jupiter. It’s no wonder: this mysterious planet is the biggest one in our solar system by far, and chemically, the closest relative to the Sun. Understanding Jupiter is key to learning more about how our solar system formed, and even about how other solar systems develop.

But one critical question has bedeviled astronomers for generations: Is there water deep in Jupiter's atmosphere, and if so, how much?

Gordon L. Bjoraker, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, reported in a recent paper in the Astronomical Journal that he and his team have brought the Jovian research community closer to the answer.

By looking from ground-based telescopes at wavelengths sensitive to thermal radiation leaking from the depths of Jupiter's persistent storm, the Great Red Spot, they detected the chemical signatures of water above the planet’s deepest clouds. The pressure of the water, the researchers concluded, combined with their measurements of another oxygen-bearing gas, carbon monoxide, implies that Jupiter has 2 to 9 times more oxygen than the Sun. This finding supports theoretical and computer-simulation models that have predicted abundant water (H2O) on Jupiter, made of oxygen (O) tied up with molecular hydrogen (H2).

This animation takes the viewer on a simulated flight into, and then out of, Jupiter’s upper atmosphere at the location of the Great Red Spot. It was created by combining an image from the JunoCam imager on NASA's Juno spacecraft with a computer-generated animation. The perspective begins about 2,000 miles (3,000 kilometers) above the cloud tops of the planet's southern hemisphere. The bar at far left indicates altitude during the quick descent; a second gauge next to that depicts the dramatic increase in temperature that occurs as the perspective dives deeper down. The clouds turn crimson as the perspective passes through the Great Red Spot. Finally, the view ascends out of the spot.

Credits: NASA/JPL

The revelation was stirring given that the team’s experiment could have easily failed. The Great Red Spot is full of dense clouds, which makes it hard for electromagnetic energy to escape and teach astronomers anything about the chemistry within.

“It turns out they're not so thick that they block our ability to see deeply,” said Bjoraker. “That’s been a pleasant surprise.”

New spectroscopic technology and sheer curiosity gave the team a boost in peering deep inside Jupiter, which has an atmosphere thousands of miles deep, Bjoraker said: “We thought, well, let’s just see what’s out there.”

The data Bjoraker and his team collected will supplement the information NASA’s Juno spacecraft is gathering as it circles the planet from north to south once every 53 days.

Among other things, Juno is looking for water with its own infrared spectrometer and with a microwave radiometer that can probe deeper than anyone has seen — to 100 bars, or 100 times the atmospheric pressure at Earth’s surface. (Altitude on Jupiter is measured in bars, which represent atmospheric pressure, since the planet does not have a surface like Earth's from which to measure elevation.)
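
The unit conversion is straightforward; a quick illustration (the 100-bar figure is from the article, the rest is standard physics):

    BAR = 1.0e5             # 1 bar = 100,000 pascals
    EARTH_SURFACE = 101325  # Earth's mean sea-level pressure, Pa (~1.013 bar)

    print(100 * BAR)                  # 1e7 Pa: the depth Juno's radiometer can probe
    print(100 * BAR / EARTH_SURFACE)  # ~99 times Earth's surface pressure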

If Juno returns similar water findings, thereby backing Bjoraker’s ground-based technique, it could open a new window into solving the water problem, said Goddard’s Amy Simon, a planetary atmospheres expert.

“If it works, then maybe we can apply it elsewhere, like Saturn, Uranus or Neptune, where we don’t have a Juno,” she said.

Juno is the latest spacecraft tasked with finding water, likely in gas form, on this giant gaseous planet.

Water is a significant and abundant molecule in our solar system. It spawned life on Earth and now lubricates many of its most essential processes, including weather. It’s a critical factor in Jupiter’s turbulent weather, too, and in determining whether the planet has a core made of rock and ice.

Jupiter is thought to be the first planet to have formed, siphoning up the elements left over from the formation of the Sun as our star coalesced from an amorphous nebula into the fiery ball of gases we see today. A widely accepted theory until several decades ago was that Jupiter was identical in composition to the Sun: a ball of hydrogen with a hint of helium — all gas, no core.

But evidence is mounting that Jupiter has a core, possibly 10 times Earth’s mass. Spacecraft that previously visited the planet found chemical evidence that it formed a core of rock and water ice before it mixed with gases from the solar nebula to make its atmosphere. The way Jupiter’s gravity tugs on Juno also supports this theory. There’s even lightning and thunder on the planet, phenomena fueled by moisture.

“The moons that orbit Jupiter are mostly water ice, so the whole neighborhood has plenty of water,” said Bjoraker. “Why wouldn't the planet — which is this huge gravity well, where everything falls into it — be water rich, too?”

The water question has stumped planetary scientists; virtually every time evidence of H2O materializes, something happens to put them off the scent. A favorite example among Jupiter experts is NASA’s Galileo spacecraft, which dropped a probe into the atmosphere in 1995 that wound up in an unusually dry region. "It's like sending a probe to Earth, landing in the Mojave Desert, and concluding the Earth is dry,” pointed out Bjoraker.

In their search for water, Bjoraker and his team used radiation data collected from the summit of Maunakea in Hawaii in 2017. They relied on the most sensitive infrared telescope on Earth at the W.M. Keck Observatory, and also on a new instrument that can detect a wider range of gases at the NASA Infrared Telescope Facility.

The Great Red Spot is the dark patch in the middle of this infrared image of Jupiter. It is dark due to the thick clouds that block thermal radiation. The yellow strip denotes the portion of the Great Red Spot used in astrophysicist Gordon L. Bjoraker’s analysis.

Credits: NASA's Goddard Space Flight Center/Gordon Bjoraker

The idea was to analyze the light energy emitted through Jupiter’s clouds in order to identify the altitudes of its cloud layers. This would help the scientists determine temperature and other conditions that influence the types of gases that can survive in those regions.

Planetary atmosphere experts expect that there are three cloud layers on Jupiter: a lower layer made of water ice and liquid water, a middle one made of ammonia and sulfur, and an upper layer made of ammonia.

To confirm this through ground-based observations, Bjoraker’s team looked at wavelengths in the infrared range of light where most gases don’t absorb heat, allowing chemical signatures to leak out. Specifically, they analyzed the absorption patterns of a form of methane gas. Because Jupiter is too warm for methane to freeze, its abundance should not change from one place to another on the planet.

“If you see that the strength of methane lines vary from inside to outside of the Great Red Spot, it's not because there's more methane here than there,” said Bjoraker, “it's because there are thicker, deep clouds that are blocking the radiation in the Great Red Spot.”

Bjoraker’s team found evidence for the three cloud layers in the Great Red Spot, supporting earlier models. The deepest cloud layer is at 5 bars, the team concluded, right where the temperature reaches the freezing point for water, said Bjoraker, “so I say that we very likely found a water cloud.” The location of the water cloud, plus the amount of carbon monoxide that the researchers identified on Jupiter, confirms that Jupiter is rich in oxygen and, thus, water.

Bjoraker’s technique now needs to be tested on other parts of Jupiter to get a full picture of global water abundance, and his data squared with Juno’s findings.

“Jupiter’s water abundance will tell us a lot about how the giant planet formed, but only if we can figure out how much water there is in the entire planet,” said Steven M. Levin, a Juno project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California.




Contacts and sources:
Lonnie Shekhtman
NASA's Goddard Space Flight Center

Citation: The Gas Composition and Deep Cloud Structure of Jupiter's Great Red Spot. The Astronomical Journal, Volume 156, Number 3.
http://dx.doi.org/10.3847/1538-3881/aad186

Cold Climates Contributed to the Extinction of the Neanderthals



Climate change may have played a more important role in the extinction of Neanderthals than previously believed, according to a new study published in the journal Proceedings of the National Academy of Sciences.

A team of researchers from a number of European and American research institutions, including Northumbria University, Newcastle, has produced detailed new natural records from stalagmites that highlight changes in the European climate more than 40,000 years ago.

They found that several cold periods coincide with a near-complete absence of archaeological artefacts from the Neanderthals, suggesting that changes in climate affected the Neanderthals' long-term survival.


Credit: Northumbria University

Stalagmites grow in thin layers each year and any change in temperature alters their chemical composition. The layers therefore preserve a natural archive of climate change over many thousands of years.
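
Such records are conventionally read as stable-isotope ratios expressed in per-mil "delta" notation; the sketch below shows the standard formula with illustrative numbers (the specific values are hypothetical, not taken from this study):

    # Delta notation: the per-mil deviation of a sample's isotope ratio
    # (here 18O/16O) from a reference standard's ratio
    def delta_per_mil(r_sample, r_standard):
        return (r_sample / r_standard - 1.0) * 1000.0

    R_VPDB = 0.0020672  # 18O/16O ratio of the VPDB reference standard
    print(delta_per_mil(0.0020572, R_VPDB))  # about -4.8 per mil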

The researchers examined stalagmites in two Romanian caves, which revealed more detailed records of climate change in continental Europe than had previously been available.

The layers of the stalagmites showed a series of prolonged extreme cold and excessively dry conditions in Europe between 44,000 and 40,000 years ago. They highlight a cycle of temperatures gradually cooling, staying very cold for centuries to millennia and then warming again very abruptly.

The researchers compared these palaeoclimate records with archaeological records of Neanderthal artefacts and found a correlation between the cold periods - known as stadials - and an absence of Neanderthal tools.

This indicates the Neanderthal population greatly reduced during the cold periods, suggesting that climate change played a role in their decline.

Dr Vasile Ersek is co-author of the study and a senior lecturer in physical geography in Northumbria University's Department of Geography and Environmental Sciences. He explained: "The Neanderthals were the human species closest to ours and lived in Eurasia for some 350,000 years. However, around 40,000 years ago - during the last Ice Age and shortly after the arrival of anatomically modern humans in Europe - they became extinct.

Credit: Northumbria University

"For many years we have wondered what could have caused their demise. Were they pushed 'over the edge' by the arrival of modern humans, or were other factors involved? Our study suggests that climate change may have had an important role in the Neanderthal extinction."

The researchers believe that modern humans survived these cold stadial periods because they were better adapted to their environment than the Neanderthals.

Neanderthals were skilled hunters and had learned how to control fire, but they had a less diverse diet than modern humans, living largely on meat from the animals they had successfully pursued. These food sources would naturally become scarce during colder periods, making the Neanderthals more vulnerable to rapid environmental change.

In comparison, modern humans had incorporated fish and plants into their diet alongside meat, which supplemented their food intake and potentially enabled their survival.

Dr Ersek said the research team's findings indicated that this cycle of "hostile climate intervals" over thousands of years, in which the climate varied abruptly and was characterised by extreme cold temperatures, was responsible for shaping the future demographic character of Europe.

"Before now, we did not have climate records from the region where Neanderthals lived which had the necessary age accuracy and resolution to establish a link between when Neanderthals died out and the timing of these extreme cold periods," he said, "But our findings indicate that the Neanderthal populations successively decreased during the repeated cold stadials.

"When temperatures warmed again, their smaller populations could not expand as their habitat was also being occupied by modern humans and this facilitated a staggered expansion of modern humans into Europe.

"The comparable timing of stadials and population changes seen in the archaeologic and genetic record suggests that millennial-scale hostile climate intervals may have been the pacesetter of multiple depopulation-repopulation cycles. These cycles ultimately drew the demographic map of Europe's Middle-Upper Paleolithic transition."

The study, "Impact of climate change on the transition of Neanderthals to modern humans in Europe," involved academics from the universities of Northumbria (UK), Cologne (Germany) and South Florida, Tampa (USA), together with experts from the Institute of Speleology (Romania), the International Atomic Energy Agency (Austria), and the Max Planck Institute for Evolutionary Anthropology (Germany).


Contacts and sources:
Andrea Slowey
Northumbria University


Citation: Impact of climate change on the transition of Neanderthals to modern humans in Europe.
Michael Staubwasser, Virgil Drăgușin, Bogdan P. Onac, Sergey Assonov, Vasile Ersek, Dirk L. Hoffmann, and Daniel Veres. PNAS, published ahead of print August 27, 2018. DOI: 10.1073/pnas.1808647115