Unseen Is Free


Tuesday, February 28, 2017

Missing Link In Planet Formation: Astronomers Discover Spontaneous 'Dust Traps'


Planets are thought to form in the disks of dust and gas found around young stars. But astronomers have struggled to assemble a complete theory of their origin that explains how the initial dust develops into planetary systems. A French-UK-Australian team now think they have the answer, with their simulations showing the formation of 'dust traps' where pebble-sized fragments collect and stick together, to grow into the building blocks of planets. They publish their results in Monthly Notices of the Royal Astronomical Society.

Our Solar System, like other planetary systems, began life as a disk of gas and dust grains around a young star. The processes that convert these tiny grains, each a few millionths of a metre (a micron) across, into aggregates a few centimetres in size, and the mechanism for making kilometre-sized 'planetesimals' into planetary cores, are both well understood.


An image of a protoplanetary disk, made using results from the new model, after the formation of a spontaneous dust trap, visible as a bright dust ring. Gas is depicted in blue and dust in red.

Credit: Jean-Francois Gonzalez



The intermediate stage, taking pebbles and joining them together into objects the size of asteroids, is less clear, but with more than 3,500 planets already found around other stars, the whole process must be ubiquitous.

Dr Jean-Francois Gonzalez, of the Centre de Recherche Astrophysique de Lyon, in France, led the new work. He comments: "Until now we have struggled to explain how pebbles can come together to form planets, and yet we've now discovered huge numbers of planets in orbit around other stars. That set us thinking about how to solve this mystery."

There are two main barriers that need to be overcome for pebbles to become planetesimals. Firstly the drag of gas on dust grains in a disk makes them drift rapidly towards the central star, where they are destroyed, leaving no material to form planets. The second challenge is that growing grains can be broken up in high-speed collisions, breaking them into a large number of smaller pieces and reversing the aggregation process.


This cartoon illustrates the stages of the formation mechanism for dust traps. The central star is depicted as yellow, surrounded by the protoplanetary disk, here shown in blue. The dust grains make up the band running through the disk.

In the first stage, the dust grains grow in size and move inwards towards the central star. The now pebble-sized larger grains (in the second panel) then pile up and slow down, and in the third stage the gas is pushed outwards by the back-reaction, creating regions where dust accumulates, the so-called dust traps. The traps then allow the pebbles to aggregate to form planetesimals, and eventually planet-sized worlds.
Credit: © Volker Schurbert


The only locations in planet forming disks where these problems can be overcome are so-called 'dust traps'. In these high-pressure regions, the drift motion slows, allowing dust grains to accumulate. With their reduced velocity, the grains can also avoid fragmentation when they collide.

Until now, astronomers thought that dust traps could only exist in very specific environments, but the computer simulations run by the team indicate that they are very common. Their model pays particular attention to the way the dust in a disk drags on the gas component. In most astronomical simulations, gas causes the dust to move, but sometimes, in the dustiest settings, the dust acts more strongly on the gas.

This effect, known as aerodynamic drag back-reaction, is usually negligible, so up to now has been ignored in studies of growing and fragmenting grains. But its effects become important in dust rich environments, like those found where planets are forming.

The effect of the back-reaction is to slow the inward drift of the grains, which gives them time to grow in size. Once large enough, the grains are their own masters, and the gas can no longer govern their motion. The gas, under the influence of this back-reaction, will be pushed outwards and form a high-pressure region: the dust trap. These spontaneous traps then concentrate the grains coming from the outer disk regions, creating a very dense ring of solids, and giving a helping hand to the formation of planets.
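To make the scale of this effect concrete, the standard two-fluid drift formulae of Nakagawa, Sekiya and Hayashi (1986) already contain the back-reaction through the local dust-to-gas ratio. The short Python sketch below evaluates them for pebble-sized grains; it illustrates the underlying physics only, with assumed numbers, and is not the team's simulation code.

# Minimal sketch: steady-state radial drift of dust and gas in a
# protoplanetary disk, including the aerodynamic back-reaction
# (Nakagawa, Sekiya & Hayashi 1986). Illustrative only -- the study
# itself uses full two-fluid simulations with grain growth and fragmentation.

def drift_velocities(stokes, eps, eta_vk):
    """Return (v_dust, v_gas) in the same units as eta_vk.

    stokes : Stokes number of the grains (dimensionless stopping time)
    eps    : local dust-to-gas mass ratio
    eta_vk : sub-Keplerian headwind speed, eta * v_Kepler (assumed ~50 m/s here)
    """
    denom = (1.0 + eps) ** 2 + stokes ** 2
    v_dust = -2.0 * eta_vk * stokes / denom        # inward drift (negative)
    v_gas = 2.0 * eta_vk * eps * stokes / denom    # outward push on the gas
    return v_dust, v_gas

# Pebbles with Stokes number ~1 drift fastest. Raising the dust-to-gas
# ratio from 0.01 to 1 cuts their inward drift and pushes the gas
# outwards -- the seed of a self-induced dust trap.
for eps in (0.01, 0.1, 1.0):
    v_dust, v_gas = drift_velocities(stokes=1.0, eps=eps, eta_vk=50.0)
    print(f"dust-to-gas {eps:4.2f}: v_dust = {v_dust:6.1f} m/s, v_gas = {v_gas:5.1f} m/s")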

Gonzalez concludes: "We were thrilled to discover that, with the right ingredients in place, dust traps can form spontaneously, in a wide range of environments. This is a simple and robust solution to a long standing problem in planet formation."

Observatories like ALMA in Chile already see bright and dark rings in forming planetary systems that are thought to be dust traps. Gonzalez and his team, and other research groups around the world, now plan to extend the trap model all the way to the formation of planetesimals.




Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

Dr Jean-Francois Gonzalez
Centre de Recherche Astrophysique de Lyon
Observatoire de Lyon

The new work appears in "Self-induced dust traps: overcoming planet formation barriers", J.-F. Gonzalez, G. Laibe, and S. T. Maddison, Monthly Notices of the Royal Astronomical Society, in press. After the embargo expires the final paper will be available from OUP via http://doi.org/10.1093/mnras/stx016

Monday, February 27, 2017

Resurrecting Extinct Species: What Could Go Wrong?


Bringing back extinct species could lead to biodiversity loss rather than gain, according to work featuring University of Queensland researchers.

UQ scientist Professor Hugh Possingham said the research suggested further stretching already-strained conservation budgets to cover the costs of de-extinction could endanger extant species (species still in existence).

"If the risk of failure and the costs associated with establishing viable populations could also be calculated, estimates of potential net losses or missed opportunities would probably be considerably higher," Professor Possingham said.

"De-extinction could be useful for inspiring new science and could be beneficial for conservation if we ensure it doesn't reduce existing conservation resources.


This image shows a Lord Howe Island woodhen Gallirallus sylvestris.

Credit: Toby Hudson


"However, in general it is best if we focus on the many species that need our help now."

"Given the considerable potential for missed opportunity, and risks inherent in assuming a resurrected species would fulfil its role as an ecosystem engineer or flagship species, it is unlikely that de-extinction could be justified on grounds of biodiversity conservation."

The study was led by Dr Joseph Bennett, formerly of the ARC Centre for Environmental Decisions at UQ and now of Carleton University, Canada.

It analysed the number of species governments in New Zealand and New South Wales could afford to conserve.

"We based cost estimates on recently extinct species and similar extant species," Dr Bennett said.

The Lord Howe pigeon, eastern bettong, bush moa and Waitomo frog were among the extinct species included in calculations.

The researchers found reintroducing some recently extinct species to their old habitats might improve biodiversity locally, but government-funded conservation for 11 focal extinct species in New Zealand would sacrifice conservation for nearly three times as many (31) extant species.

External funding for conservation of the five focal extinct NSW species could instead be used to conserve more than eight times as many (42) extant species.

Although the technology for de-extinction is still some way off, the research found that careful thought would be required about what species to reintroduce, and where.

Professor Possingham is Chief Scientist with The Nature Conservancy, the world's largest conservation organisation, and a scientist with UQ's School of Biological Sciences, The Centre for Biodiversity and Conservation Science at UQ, the ARC Centre of Excellence for Environmental Decisions (CEED) and the Australian Government's National Environmental Science Program Threatened Species Recovery Hub.



Contacts and sources:
Hugh Possingham
University of Queensland 

The research is published in Nature Ecology and Evolution (DOI: 10.1038/s41559-016-0053). http://dx.doi.org/10.1038/s41559-016-0053

Getting Cyborg Cockroaches to Stay on Track: One Day They Will Explore Disaster Areas


New research from North Carolina State University offers insights into how far and how fast cyborg cockroaches - or biobots - move when exploring new spaces. The work moves researchers closer to their goal of using biobots to explore collapsed buildings and other spaces in order to identify survivors.

NC State researchers have developed cockroach biobots that can be remotely controlled and carry technology that may be used to map disaster areas and identify survivors in the wake of a calamity.

For this technology to become viable, the researchers needed to answer fundamental questions about how and where the biobots move in unfamiliar territory. Two forthcoming papers address those questions.

NC State researchers have found that by sending cockroach biobots random commands, the biobots spent more time moving, moved more quickly and were at least five times more likely to move away from walls and into open space. The finding is a significant advance for developing biobots that can search collapsed buildings and other disaster areas for survivors.

Credit: Edgar Lobaton


The first paper answers questions about whether biobot technology can accurately determine how and whether biobots are moving.

The researchers followed biobot movements visually and compared their actual motion to the motion being reported by the biobot's inertial measurement units. The study found that the biobot technology was a reliable indicator of how the biobots were moving.

The second paper addresses bigger questions: How far will the biobots travel? How fast? Are biobots more efficient at exploring space when allowed to move without guidance? Or can remote-control commands expedite the process?

These questions are important because the answers could help researchers determine how many biobots they may need to introduce to an area in order to explore it effectively in a given amount of time.

For this study, researchers introduced biobots into a circular structure. Some biobots were allowed to move at will, while others were given random commands to move forward, left or right.

Credit: ARoS Lab

The researchers found that unguided biobots preferred to hug the wall of the circle. But by sending the biobots random commands, the biobots spent more time moving, moved more quickly and were at least five times more likely to move away from the wall and into open space.

"Our earlier studies had shown that we can use neural stimulation to control the direction of a roach and make it go from one point to another," says Alper Bozkurt, an associate professor of electrical and computer engineering at NC State and co-author of the two papers. "This [second] study shows that by randomly stimulating the roaches we can benefit from their natural walking and instincts to search an unknown area. Their electronic backpacks can initiate these pulses without us seeing where the roaches are and let them autonomously scan a region."

"This is practical information we can use to get biobots to explore a space more quickly," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and co-author on the two papers. "That's especially important when you consider that time is of the essence when you are trying to save lives after a disaster."

Lead author of the first paper, "A Study on Motion Mode Identification for Cyborg Roaches," is NC State Ph.D. student Jeremy Cole. The paper was co-authored by Ph.D. student Farrokh Mohammadzadeh, undergraduate Christopher Bollinger, former Ph.D. student Tahmid Latif, Bozkurt and Lobaton.

Lead author of the second paper, "Biobotic Motion and Behavior Analysis in Response to Directional Neurostimulation," is former NC State Ph.D. student Alireza Dirafzoon. The paper was co-authored by Latif, former Ph.D. student Fengyuan Gong, professor of electrical and computer engineering Mihail Sichitiu, Bozkurt and Lobaton.

Both papers will be presented at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing, being held March 5-9 in New Orleans.



Contacts and sources:
Matt Shipman
North Carolina State University

Earth Likely Began with a Solid Shell: Plate Tectonics Came Later in Earth's History Says New Research


Today's Earth is a dynamic planet with an outer layer composed of giant plates that grind together, sliding past or dipping beneath one another, giving rise to earthquakes and volcanoes. Others separate at undersea mountain ridges, where molten rock spreads out from the centers of major ocean basins.

But new research suggests that this was not always the case. Instead, shortly after Earth formed and began to cool, the planet's first outer layer was a single, solid but deformable shell. Later, this shell began to fold and crack more widely, giving rise to modern plate tectonics.


The outer layer of modern Earth is a collection of interlocking rigid plates, as seen in this illustration. These plates grind together, sliding past or dipping beneath one another, giving rise to earthquakes and volcanoes. But new research suggests that plate tectonics did not begin until much later in Earth's history.

Credit: USGS


The research, described in a paper published February 27, 2017 in the journal Nature, is the latest salvo in a long-standing debate in the geological research community: did plate tectonics start right away--a theory known as uniformitarianism--or did Earth first go through a long phase with a solid shell covering the entire planet? The new results suggest the solid shell model is closest to what really happened.

"Models for how the first continental crust formed generally fall into two groups: those that invoke modern-style plate tectonics and those that do not," said Michael Brown, a professor of geology at the University of Maryland and a co-author of the study. "Our research supports the latter--a 'stagnant lid' forming the planet's outer shell early in Earth's history."

To reach these conclusions, Brown and his colleagues from Curtin University and the Geological Survey of Western Australia studied rocks collected from the East Pilbara Terrane, a large area of ancient granitic crust located in the state of Western Australia. Rocks here are among the oldest known, ranging from 3.5 to about 2.5 billion years of age. (Earth is roughly 4.5 billion years old.) The researchers specifically selected granites with a chemical composition usually associated with volcanic arcs--a telltale sign of plate tectonic activity.

Brown and his colleagues also looked at basalt rocks from the associated Coucal formation. Basalt is the rock produced when volcanoes erupt, but it also forms the ocean floor, as molten basalt erupts at spreading ridges in the center of ocean basins. In modern-day plate tectonics, when ocean floor basalt reaches the continents, it dips--or subducts--beneath the Earth's surface, where it generates fluids that allow the overlying mantle to melt and eventually create large masses of granite beneath the surface.

Previous research suggested that the Coucal basalts could be the source rocks for the granites in the Pilbara Terrane, because of the similarities in their chemical composition. Brown and his collaborators set out to verify this, but also to test another long-held assumption: could the Coucal basalts have melted to form granite in some way other than subduction of the basalt beneath Earth's surface? If so, perhaps plate tectonics was not yet happening when the Pilbara granites formed.

To address this question, the researchers performed thermodynamic calculations to determine the phase equilibria of average Coucal basalt. Phase equilibria are precise descriptions of how a substance behaves under various temperature and pressure conditions, including the temperature at which melting begins, the amount of melt produced and its chemical composition.

For example, one of the simplest phase equilibria diagrams describes the behavior of water: at low temperatures and/or high pressures, water forms solid ice, while at high temperatures and/or low pressures, water forms gaseous steam. Phase equilibria get more involved for rocks, which have complex chemical compositions that can take on very different mineral combinations and physical characteristics depending on temperature and pressure.
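For readers who want the governing principle behind such calculations (general thermodynamic background, not a formula taken from the paper), at each pressure P and temperature T the stable assemblage of mineral and melt phases is the one that minimizes the total Gibbs energy for the fixed bulk composition:

G(P, T, \{n_i\}) = \sum_i n_i \, \mu_i(P, T) \;\rightarrow\; \min, \qquad \text{subject to} \; \sum_i n_i \, c_{ij} = C_j \; \text{for every chemical component } j,

where n_i are the amounts of the candidate phases, \mu_i their chemical potentials, and c_{ij} the amount of component j each phase contains. Scanning this minimization over P and T yields the temperature at which melting begins, the melt fraction and the melt composition for a rock such as the average Coucal basalt.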

"If you take a rock off the shelf and melt it, you can get a phase diagram. But you're stuck with a fixed chemical composition," Brown said. "With thermodynamic modeling, you can change the composition, pressure and temperature independently. It's much more flexible and helps us to answer some questions we can't address with experiments on rocks."

Using the Coucal basalts and Pilbara granites as a starting point, Brown and his colleagues constructed a series of modeling experiments to reflect what might have transpired in an ancient Earth without plate tectonics. Their results suggest that, indeed, the Pilbara granites could have formed from the Coucal basalts.

More to the point, this transformation could have occurred in a pressure and temperature scenario consistent with a "stagnant lid," or a single shell covering the entire planet.

Plate tectonics substantially affects the temperature and pressure of rocks within Earth's interior. When a slab of rock subducts under the Earth's surface, the rock starts off relatively cool and takes time to gain heat. By the time it reaches a higher temperature, the rock has also reached a significant depth, which corresponds to high pressure--in the same way a diver experiences higher pressure at greater water depth.

In contrast, a "stagnant lid" regime would be very hot at relatively shallow depths and low pressures. Geologists refer to this as a "high thermal gradient."

"Our results suggest the Pilbara granites were produced by melting of the Coucal basalts or similar materials in a high thermal gradient environment," Brown said. "Additionally, the composition of the Coucal basalts indicates that they, too, came from an earlier generation of source rocks. We conclude that a multi-stage process produced Earth's first continents in a 'stagnant lid' scenario before plate tectonics began."






Contacts and sources:
Matthew Wright
University of Maryland
College of Computer, Mathematical, and Natural Sciences


Citation: The research paper, "Earth's first stable continents did not form by subduction," Tim Johnson, Michael Brown, Nicholas Gardiner, Christopher Kirkland and Hugh Smithies, was published February 27, 2017 in the journal Nature.

This work was supported by The Institute of Geoscience Research at Curtin University, Perth, Australia. The content of this article does not necessarily reflect the views of this organization.

A Rose Is a Rose Is a Transistor: Bionic Flowers Deliver Electric Power

Flower power is taking on a whole new meaning: life can be electric, rechargeable rose bouquets are possible, and a bionic future is drawing nearer.

In November 2015, researchers at Linköping University's Laboratory of Organic Electronics presented results showing that they had caused roses to absorb a conducting polymer solution. Conducting hydrogel formed in the rose's stem in the form of wires. With an electrode at each end and a gate in the middle, a fully functional transistor was created. The results were presented in Science Advances and aroused considerable interest all over the world.

One member of the group, Assistant Professor Roger Gabrielsson, has now developed a material specially designed for this application. The material polymerizes inside the rose without any external trigger. The fluid that flows naturally inside the rose contributes to creating long, conducting threads, not only in the stem but also throughout the plant, out into the leaves and petals.

This is a supercapacitor rose from the Laboratory of Organic Electronics, Linköping University.

 Credit: Thor Balkhed


"We have been able to charge the rose repeatedly, for hundreds of times without any loss on the performance of the device. The levels of energy storage we have achieved are of the same order of magnitude as those in supercapacitors. The plant can, without any form of optimization of the system, potentially power our ion pump, for example, and various types of sensors," says Eleni Stavrinidou, Assistant Professor at the Laboratory of Organic Electronics.

The results are now to be published in the prestigious scientific journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

"This research is in a very early stage, and what the future will bring is an open question," says Eleni Stavrinidou.

Some examples are autonomous energy systems, the possibility of harvesting energy from plants to power sensors and various types of switches, and the possibility of creating fuel cells inside plants.

"A few years ago, we demonstrated that it is possible to create electronic plants, 'power plants', but we have now shown that the research has practical applications. We have not only shown that energy storage is possible, but also that we can deliver systems with excellent performance," says Professor Magnus Berggren, head of the Laboratory of Organic Electronics, Linköping University, Campus Norrköping.

The research into electronic plants has been funded by unrestricted research grants from the Knut and Alice Wallenberg Foundation. The foundation appointed Professor Magnus Berggren a Wallenberg Scholar in 2012.




Contacts and sources:
Eleni Stavrinidou
Magnus Berggren
Linköping University

Citation: In vivo polymerization and manufacturing of wires and supercapacitors in plants,
Eleni Stavrinidou, Roger Gabrielsson, K Peter R Nilsson, Sandeep Kumar Singh, Juan Felipe Franco-Gonzalez, Anton V Volkov, Magnus P Jonsson, Andrea Grimoldi, Mathias Elgland, Igor V Zozoulenko, Daniel T Simon and Magnus Berggren, Linköping University, PNAS 2017, DOI 10.1073/pnas.1616456114 http://dx.doi.org/10.1073/pnas.1616456114

Caution, Subconscious at Work: How Your Brain Makes Articles Go Viral

The subconscious is at work when posts go viral on the internet, according to new research.

Activity in the self-related, mentalizing, and positive valuation regions of the brain combines unconsciously in our thoughts to determine what we want to read and share, such that brain scans from a small group of people can predict large-scale virality.

It is a question that has mystified countless people: Why does one article spread like wildfire through social media and another -- seemingly similar -- doesn't? How does your brain decide what is valuable enough to read and share?

Christin Scholz and Elisa Baek, both students in the Ph.D. program at the Annenberg School for Communication at the University of Pennsylvania, are the lead authors behind two new research papers that document for the first time the specific brain activity that leads us to read or share articles -- in this case, health articles from the New York Times. And by looking at this specific pattern of brain activity in 80 people, they also were able to predict the virality of these articles among real New York Times readers around the world.
FMRI scan during working memory tasks
Credit: Wikimedia Commons

Fundamentally, explains Emily Falk, Ph.D., senior author on both papers and the director of Penn's Communication Neuroscience Lab, specific regions of the brain determine how valuable it would be to share information, and that value translates to its likelihood of going viral.

"People are interested in reading or sharing content that connects to their own experiences, or to their sense of who they are or who they want to be," she says. "They share things that might improve their relationships, make them look smart or

empathic or cast them in a positive light."

By using fMRI, the researchers were able to measure people's brain activity in real time as they viewed the headlines and abstracts of 80 New York Times health articles and rated how likely they were to read and share them. The articles were chosen for their similarity of subject matter -- nutrition, fitness, healthy living -- and number of words.

The researchers homed in on regions of the brain associated with self-related thinking, regions associated with mentalizing -- imagining what others might think -- and regions associated with overall value.

Although it might be intuitive to expect people would think about themselves in deciding what to read personally and think about others in deciding what to share, the researchers found something else: Whether they were choosing to read for themselves or deciding what to recommend to others, the neural data suggest that people think about both themselves and others.

In fact, the researchers report in an upcoming issue of Psychological Science that thinking about what to share brought out the highest levels of activity in both of these neural systems.

"When you're thinking about what to read yourself and about what to share, both are inherently social, and when you're thinking socially, you're often thinking about yourself and your relationships to others," says Baek. "Your self-concept and understanding of the social world are intertwined."

A second study, to be published next week in the Proceedings of the National Academy of Sciences (PNAS), shows how these brain signals can be used to predict virality of the same news articles around the world.

When stories go viral through the 4 billion Facebook messages, 500 million tweets and 200 billion emails shared daily, they can have real impact on our health, politics and society. But not all articles are shared equally. Why do some articles get shared while others don't?

By looking at brain activity as 80 test subjects considered sharing the same New York Times health articles, researchers predicted an article's virality among the actual New York Times readership, which shared this group of articles a combined total of 117,611 times.

They found that activity in the self-related and mentalizing regions of the brain combine unconsciously in our minds to produce an overall signal about an article's value. That value signal then predicts whether or not we want to share.

Even though the pool of test subjects -- 18-to-24-year-olds, many of them university students, living around Philadelphia -- represented different demographics than the overall New York Times readership, brain activity in key brain regions that track value accurately scaled with the global popularity of the articles.
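As a rough illustration of this brain-as-predictor logic, the sketch below averages a value-related signal over a small scanned sample and correlates it with population-level sharing. All array names and numbers are hypothetical placeholders; the actual study used a full fMRI preprocessing and modeling pipeline.

# Schematic "brain-as-predictor" analysis, loosely following the approach
# described above. Data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_articles = 80, 80

# roi_activity[s, a]: mean signal in a value-related region while subject s
# considered sharing article a (fabricated here)
true_signal = rng.normal(size=n_articles)
roi_activity = rng.normal(size=(n_subjects, n_articles)) + 0.5 * true_signal

# shares[a]: how often article a was shared by the wider readership (fabricated)
shares = np.exp(2.0 + 1.5 * true_signal + rng.normal(scale=0.5, size=n_articles))

# 1. Average the neural value signal over the small scanned sample.
group_signal = roi_activity.mean(axis=0)

# 2. Correlate it with log share counts in the full population.
r = np.corrcoef(group_signal, np.log(shares))[0, 1]
print(f"Correlation between group neural value signal and log shares: r = {r:.2f}")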

"If we can use a small number of brains to predict what large numbers of people who read the New York Times are doing, it means that similar things are happening across people," Scholz says. "The fact that the articles strike the same chord in different brains suggests that similar motivations and similar norms may be driving these behaviors. Similar things have value in our broader society."

Scholz acknowledges that exactly how we're thinking about ourselves and others varies from person to person. For example, one person may think that an article will make her friends laugh, while another may think that sharing it will help his friend solve a particular problem. But neural activity in regions associated with the self and with social considerations serves as a type of common denominator for various types of social and self-related thinking.

"In practice, if you craft a message in a way that makes the reader understand how it's going to make them look positive, or how it could enhance a relationship," Scholz says, "then we predict it would increase the likelihood of sharing that message."



Contacts and sources:
Julie Sloane
University of Pennsylvania

Publications:
"The Value of Sharing Information: A Neural Account of Information Transmission," will be published in an upcoming issue of Psychological Science. In addition to Baek and Scholz, other co-authors include Matthew Brook O'Donnell, Ph.D., and Emily Falk, Ph.D.

"A Neural Model of Valuation and Information Virality" will be published next week in the Proceedings of the National Academy of Sciences (PNAS). In addition to Scholz and Baek, its co-authors include Matthew Brook O'Donnell, Ph.D., Hyun Suk Kim, Ph.D., Joseph N. Cappella, Ph.D., and Emily B. Falk, Ph.D.

Both studies were supported by The Defense Advanced Research Projects Agency (D14AP00048; PI Falk) and NIH (1DP2DA03515601; PI Falk).

Brain Sync: Headband Measures How Our Minds Align When We Communicate


Great ideas so often get lost in translation -- from the math teacher who can't get through to his students, to a stand-up comedian who bombs during an open mic night.

But how can we measure whether our audiences understand what we're trying to convey? And better yet, how can we improve that exchange?

Drexel University biomedical engineers, in collaboration with Princeton University psychologists, are using a wearable brain-imaging device to see just how brains sync up when humans interact. It is one of many applications for this functional near-infrared spectroscopy (or fNIRS) system, which uses light to measure neural activity during real-life situations and can be worn like a headband.  

This is a cartoon image of brain 'coupling' during communication

Credit: Drexel University


Published in Scientific Reports on February 27th, 2017, a new study shows that the fNIRS device can successfully measure brain synchronization during conversation. The technology can now be used to study everything from doctor-patient communication, to how people consume cable news.

"Being able to look at how multiple brains interact is an emerging context in social neuroscience," said Hasan Ayaz, PhD, an associate research professor in Drexel's School of Biomedical Engineering, Science and Health Systems, who led the research team. "We live in a social world where everybody is interacting. And we now have a tool that can give us richer information about the brain during everyday tasks -- such as natural communication -- that we could not receive in artificial lab settings or from single brain studies."

The current study is based on previous research from Uri Hasson, PhD, associate professor at Princeton University, who has used functional magnetic resonance imaging (fMRI) to study the brain mechanisms underlying the production and comprehension of language. Hasson has found that a listener's brain activity actually mirrors the speaker's brain when he or she is telling a story about a real-life experience. And higher coupling is associated with better understanding.

However, traditional brain imaging methods have certain limitations. In particular, fMRI requires subjects to lie down motionlessly in a noisy scanning environment. With this kind of set-up, it is not possible to simultaneously scan the brains of multiple individuals who are speaking face-to-face.

This is why the Drexel researchers sought to investigate whether the portable fNIRS system could be a more effective approach to probe the brain-to-brain coupling question in natural settings.

For their study, a native English speaker and two native Turkish speakers told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recording, in addition to a story that was recorded at a live storytelling event.

The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher order areas that are involved in a person's capacity to discern beliefs, desires and goals of others. They hypothesized that a listener's brain activity would correlate with the speaker's only when listening to a story they understood (the English version). A second objective of the study was to compare the fNIRS results with data from a similar study that had used fMRI, in order to compare the two methods.

They found that when the fNIRS device measured the oxygenation and deoxygenation of blood in the test subjects' brains, the listeners' brain activity matched only that of the English speakers. These results also correlated with the previous fMRI study.
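For intuition, speaker-listener coupling of this kind can be summarized as a lagged correlation between oxygenation time series. The sketch below is a toy version with simulated signals and hypothetical names; the published analysis works channel by channel and uses statistical testing.

# Toy sketch of speaker-listener "coupling" as a lagged correlation between
# fNIRS oxygenation time series. Signals are simulated; this is not the
# study's analysis pipeline.
import numpy as np

def coupling(speaker_hbo, listener_hbo, max_lag=10):
    """Maximum Pearson correlation over lags of 0..max_lag samples,
    with the listener allowed to trail the speaker."""
    best = 0.0
    for lag in range(max_lag + 1):
        a = speaker_hbo[: len(speaker_hbo) - lag]
        b = listener_hbo[lag:]
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

rng = np.random.default_rng(1)
t = np.arange(600)                                         # one channel, 600 samples
speaker = np.sin(t / 15.0) + 0.3 * rng.normal(size=t.size)
listener_understands = np.roll(speaker, 5) + 0.3 * rng.normal(size=t.size)  # tracks with a lag
listener_no_language = rng.normal(size=t.size)                              # does not track

print("English story coupling:", round(coupling(speaker, listener_understands), 2))
print("Turkish story coupling:", round(coupling(speaker, listener_no_language), 2))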

This new research supports fNIRS as a viable future tool to study brain-to-brain coupling during social interaction. The system can be used to offer important information about how to better communicate in many different environments, including classrooms, business meetings, political rallies and doctors' offices.

"This would not be feasible with fMRI. There are too many challenges," said Banu Onaral, PhD, the H. H. Sun Professor in the School of Biomedical Engineering, Science and Health Systems. "Now that we know fNIRS is a feasible tool, we are moving into an exciting era when we can know so much more about how the brain works as people engage in everyday tasks."

This study was conducted at the Cognitive Neuroengineering and Quantitative Experimental Research (CONQUER) Collaborative, a multi-disciplinary brain observatory at Drexel University.



Contacts and sources:
Lauren Ingeno
Drexel University

Citation: Measuring speaker–listener neural coupling with functional near infrared spectroscopy
Scientific Reports 7, Article number: 43293 (2017) http://dx.doi.org/10.1038/srep43293

Materials That Emit Rainbows

Mechanochromic luminescent (MCL) materials change their color in response to changes in their environment, such as pressure and temperature. To date, most MCL materials only change between two colors, limiting their applications.

An international research team comprising chemists at Osaka University and physicists at Durham University has developed tricolor-changing MCL materials. Not only that, the developed materials exhibited efficient thermally activated delayed fluorescence (TADF) and enabled high-performance organic light-emitting diode (OLED) devices. The findings can be read about in Chemical Science.


Fig.1. Illustrative summary of the developed organic luminescent material


"Most MCL materials generate two colors by switching between a stable state and one metastable state. To realize multi-color MCL, more metastable states are necessary," explain Professors Youhei Takeda and Satoshi Minakata at the Department of Applied Chemistry, Graduate School of Engineering of Osaka University. To create these states, the chemist team led by Takeda and Minakata designed a new molecule by applying a conformationally-switchable phenothiazine (PTZ) as the donor.


Fig.2. Image of the change in luminescence colors of MCL materials


"By making the use of a promising and unique acceptor, dibenzophenazine (DBPHZ), which we previously developed, we made a PTZ-DBPHZ-PTZ triad," said Takeda. "In this structure, the PTZ moiety could take two distinct conformers, which therefore in principle creates in total four metastable states as a whole molecule."

In response to heating, fuming, and grinding, the molecule switched its color between yellow, red and orange. The team found that the three colors derive from different conformers in which each PTZ takes either an equatorial or axial conformation relative to the DBPHZ core.


Fig.3. Comparison of a) the previously reported and b) the developed molecular materials



"For red, both of PTZ units take an equatorial-equatorial conformer, for orange, PTZ had an equatorial-axial conformer, and for yellow, PTZ had an axial-axial conformer."

Most OLED devices with high energy conversion efficiencies depend on expensive precious metals. TADF light-emitting devices, on the other hand, can achieve equal or better efficiency at much lower cost, which is why they have gained popularity for the design of displays in everyday electronics like smartphones.

In collaboration with the physics team at Durham University in the United Kingdom, led by Dr Data and Professor Monkman, the researchers successfully made highly efficient OLED devices by applying the newly developed MCL-TADF molecule as an emissive material. Incorporating the PTZ-DBPHZ-PTZ triad into a light-emitting device resulted in an efficiency three times higher than the theoretical maximum of conventional fluorescent materials.

Takeda says that, "Our molecule could become a basis for efficient light-emitting devices and pressure- and temperature-responsive sensors in the future."



Contacts and sources:

Citation: Thermally activated delayed fluorescent phenothiazine–dibenzo[a,j]phenazine–phenothiazine triads exhibiting tricolor-changing mechanochromic luminescence  Journal: Chemical Science
Authors:Masato Okazaki, Youhei Takeda, Przemyslaw Data, Piotr Pander, Heather Higginbotham, Andrew P. Monkman, and Satoshi Minakata
DOI: 10.1039/C6SC04863C
Funded by: Ministry of Education, Culture, Sports, Science and Technology, Japan, Japan Society for the Promotion of Science, Japan Prize Foundation

Cancer Killing Molecule Destroys Tumor Cells and Leaves Normal Cells Unaffected

Scientists at the University of Huddersfield are the first to arrive at a deep understanding of a molecule that destroys cancerous tumours without harming healthy cell tissue. The discovery opens up the potential for highly effective new cancer treatments that are free of serious side effects.

A new journal article describes the science behind the breakthrough. Now the research team headed by Dr Nikolaos Georgopoulos has developed and patented a cancer treatment regime that exploits the unique properties of the molecule – a protein named Cluster of Differentiation 40 (CD40). The next phase is to secure funding for clinical trials.

 Pictured above are tumour cells before and after the treatment with the protein named Cluster of Differentiation 40 (CD40). After the treatment the tumour cells are significantly reduced and will soon disappear altogether.
Credit:  University of Huddersfield

Dr Georgopoulos is a specialist in cancer research and he has been investigating CD40 for almost 16 years.

“In 2002, we first reported that this particular member of the TNF receptor family is unique,” he said. “A lot of members of this family are very good at triggering cell death. But the molecule CD40 is special. It seems to specifically kill tumour cells, but when you activate it on normal cells, they don’t die.”

It was vital to understand these remarkable properties of CD40, with their immense potential for cancer therapy. Years of investigation began to unlock the mystery.

“Cancer therapies, such as chemotherapy and radiotherapy, are ‘hit with a hammer’ approaches. Hit as hard as you can and kill the tumours as well as you can. But there is usually some collateral damage. There are side effects,” said Dr Georgopoulos.

“We knew this CD40 molecule seemed to be very good at killing tumour cells. So we decided to observe what it does at the molecular level. If we understand what it does and what’s so special about it, we can design our own way to kill tumours. We have now identified exactly why this molecule can kill tumour cells and why it leaves normal cells unaffected.”

Tumour cells proliferate by continuously dividing. This places them under considerable stress, but they have developed protective properties that enable them to cope. CD40 removes this protection so that the tumour cells die, but because normal cells are not placed under “oxidative stress” they are unharmed by the protein. 

Cluster of Differentiation CD40
Credit:  University of Huddersfield

Dr Georgopoulos and his co-researchers at the University of Huddersfield made this discovery because instead of working purely with tumour cells, they were able to make comparisons with the effects of CD40 on normal cells as well as engineered – para-malignant – cells that allowed them to mimic the process of carcinogenesis – cancer development.

The team has also worked on a method of using CD40 in targeted, intravenous bio-therapy by discovering the best way to deploy the molecule – using its ligand to activate it. The discovery has been patented, and the University is exploring commercialisation through a spin-out company – provisionally called ThanatoCure™ – Thanatos is the Greek word for ‘death’, referring here to cell death.

Advanced discussions are being held with a company that specialises in early-stage development of innovative cancer therapies. It is hoped that the company will secure funding in the region of £900,000 for clinical trials that would see colorectal cancer patients receiving the new treatment. The trials could start as early as the end of 2017.

A big proportion of research leading to the breakthrough was conducted by Dr Chris Dunnill, during and beyond his PhD, supervised by Dr Georgopoulos. Also part of the research team – and co-contributors to the new article in a leading journal – were PhD students Khalidah Ibraheem and Albashir Mohamed, supervised by Dr Georgopoulos, and Professor Jenny Southgate from the Department of Biology, University of York.

· The article A redox state-dictated signalling pathway deciphers the malignant cell specificity of CD40-mediated apoptosis is published in the journal Oncogene.



Contacts and sources:

 University of Huddersfield

Citation: A redox state-dictated signalling pathway deciphers the malignant cell specificity of CD40-mediated apoptosis. Authors: C J Dunnill (1), K Ibraheem (1), A Mohamed (1), J Southgate (2) and N T Georgopoulos (1). (1) Department of Biological Sciences, School of Applied Sciences, University of Huddersfield, Huddersfield, UK; (2) Jack Birch Unit of Molecular Carcinogenesis, Department of Biology, University of York, York, UK
Received 25 January 2016; Revised 8 September 2016; Accepted 16 September 2016
Advance online publication 21 November 2016 Oncogene.
 

New Super-Material Shapes and Focuses Sound

A super-material that bends, shapes and focuses sound waves that pass through it has been invented by scientists.

The creation pushes the boundaries of metamaterials – a new class of finely-engineered surfaces that perform nature-defying tasks.

These materials have already shown remarkable results with light manipulation, allowing scientists to create a real-life version of Harry Potter’s invisibility cloak, for example.

But a research team from the Universities of Sussex and Bristol have now shown that they also work with sound waves, which could transform medical imaging and personal audio.

Finely shaped sound fields are used in medical imaging and therapy as well as in a wide range of consumer products such as audio spotlights and ultrasonic haptics. The research published today (Monday 27 February 2017) in Nature Communications shows a simple and cheap way of creating these shaped sound waves using acoustic metamaterials.

Metamaterial bricks are assembled into a layer to produce a meta-surface, which could have applications across healthcare and entertainment

These space coiling bricks act to slow down sound waves, meaning that they can be transformed into any required sound field.


The collaborative research team assembled a metamaterial layer out of many small bricks that each coil up space. The space-coiling bricks act to slow down the sound, meaning that incoming sound waves can be transformed into any required sound field.

The new metamaterial layers could be used in many applications. Large versions could be used to direct or focus sound to a particular location and form an audio hotspot. Much smaller versions could be used to focus high intensity ultrasound to destroy tumours deep within the body. Here, a metamaterial layer could be tailor-made to fit the body of a patient and tuned to focus the ultrasound waves where they are needed most. In both cases the layer could be fitted to existing loudspeaker technology and be made rapidly and cheaply.
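To make the focusing idea concrete, the sketch below computes the extra time delay each element of a flat layer must add so that sound arrives at a chosen focal point in phase. The geometry and element count are invented for illustration; in the real device the bricks encode these delays passively by coiling space rather than electronically.

# Rough sketch of delay-and-focus: each element of a flat acoustic layer
# delays the wavefront so that all paths to the focal point line up in time.
# Geometry here is hypothetical; real metamaterial bricks realize the delays
# passively through coiled internal channels.
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focusing_delays(element_positions, focus, speed=SPEED_OF_SOUND):
    """Per-element time delays (seconds) that focus a flat source at `focus`."""
    distances = [math.dist(p, focus) for p in element_positions]
    farthest = max(distances)
    # Elements closer to the focus must be delayed more so every path arrives together.
    return [(farthest - d) / speed for d in distances]

# A 16-element strip with 1 cm pitch, focusing 20 cm in front of its centre.
elements = [(0.01 * i - 0.075, 0.0) for i in range(16)]
delays = focusing_delays(elements, focus=(0.0, 0.20))
print([f"{d * 1e6:.1f} us" for d in delays])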



Dr Gianluca Memoli, from the Interact Lab at the University of Sussex, who led the study, said: “Our metamaterial bricks can be 3D printed and then assembled together to form any sound field you can imagine. We also showed how this can be achieved with only a small number of different bricks. You can think of a box of our metamaterial bricks as a do-it-yourself acoustics kit.”

Professor Sriram Subramanian, Head of the Interact Lab at the University of Sussex, added: “We want to create acoustic devices that manipulate sound with the same ease and flexibility with which LCDs and projectors do to light. Our research opens the door to new acoustic devices combining diffraction, scattering and refraction, and enables the future development of fully digital spatial sound modulators, which can be controlled in real time with minimal resources.”

Bruce Drinkwater, Professor of Ultrasonics at the University of Bristol, explained: “In the future I think there will be many exciting applications of this technology. We are now working on making the metamaterial layers dynamically reconfigurable. This will mean we can make cheap imaging systems which could be used either for medical diagnostics or crack detection.”




Contacts and sources:
By: James Hakner
University of Sussex

Sunday, February 26, 2017

Humans Take The Path of Least Resistance: It's in Our Nature

The amount of effort required to do something influences what we think we see, finds a new University College London (UCL) study suggesting we’re biased towards perceiving anything challenging to be less appealing.

 “Our brain tricks us into believing the low-hanging fruit really is the ripest,” says Dr Nobuhiro Hagura, who led the UCL team before moving to NICT in Japan. “We found that not only does the cost to act influence people’s behaviour, but it even changes what we think we see.”

 Credit: UCL


For the study, published in eLife, a total of 52 participants took part in a series of tests where they had to judge whether a cloud of dots on a screen was moving to the left or to the right. They expressed their decisions by moving a handle held in the left or right hand respectively. When the researchers gradually added a load to one of the handles, making it more difficult to move, the volunteers’ judgements about what they saw became biased, and they started to avoid the effortful response. If weight was added to the left handle, participants were more likely to judge the dots to be moving rightwards as that decision was slightly easier for them to express. Crucially, the participants did not become aware of the increasing load on the handle: their motor system automatically adapted, triggering a change in their perception.
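One common way to quantify such a bias (illustrative only; the data and variable names below are simulated, not the study's) is to fit a psychometric curve to the proportion of "rightward" judgements as a function of signed motion strength and compare the point of subjective equality between load conditions, as in the sketch below.

# Illustrative psychometric-function analysis of a response-cost bias.
# Simulated data: loading the left handle shifts the point of subjective
# equality (PSE) so that ambiguous motion is judged "rightward" more often.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

coherence = np.linspace(-0.5, 0.5, 11)   # negative values = leftward motion
rng = np.random.default_rng(2)

def simulate(pse_shift, n_trials=100):
    p_right = logistic(coherence, pse_shift, 0.1)
    return rng.binomial(n_trials, p_right) / n_trials

for label, shift in [("no load", 0.0), ("left handle loaded", -0.05)]:
    observed = simulate(shift)
    (pse, _), _ = curve_fit(logistic, coherence, observed, p0=[0.0, 0.1])
    print(f"{label:>18}: fitted PSE = {pse:+.3f}")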

“The tendency to avoid the effortful decision remained even when we asked people to switch to expressing their decision verbally, instead of pushing on the handles,” Dr Hagura said. “The gradual change in the effort of responding caused a change in how the brain interpreted the visual input. Importantly, this change happened automatically, without any awareness or deliberate strategy.”

“Traditionally, scientists have assumed the visual system gives us perceptual information, and the motor system is a mere downstream output channel, which expresses our decision based on what we saw, without actually influencing the decision itself. Our experiments suggest an alternative view: the motor response that we use to report our decisions can actually influence the decision about what we have seen,” he said.

The researchers believe that our daily decisions could be modified not just through deliberate cognitive strategies, but also by designing the environment to make these decisions slightly more effortful. “The idea of ‘implicit nudge’ is currently popular with governments and advertisers,” said co-author Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience). “Our results suggest these methods could go beyond changing how people behave, and actually change the way the world looks. Most behaviour change focuses on promoting a desired behaviour, but our results suggest you could also make it less likely that people see the world a certain way, by making a behaviour more or less effortful. Perhaps the parent who places the jar of biscuits on a high shelf actually makes them look less tasty to the toddler playing on the floor.”

The study was performed under an international collaboration between UCL, NICT (Japan) and Western University (Canada). The researchers were funded by the European Research Council, the Japan Society for the Promotion of Science, and the James S. McDonnell Foundation.




Contacts and sources:
Chris Lane
University College London (UCL)


Mars Mantle More Earth-like than Moon-like


New Mars research shows evidence of a complex mantle beneath the Elysium volcanic province.

Mars' mantle may be more complicated than previously thought. In a new study published today in the Nature-affiliated journal Scientific Reports, researchers at Louisiana State University (LSU) document geochemical changes over time in the lava flows of Elysium, a major martian volcanic province.

LSU Geology and Geophysics graduate researcher David Susko led the study with colleagues at LSU including his advisor Suniti Karunatillake, the University of Ruhuna in Sri Lanka, the SETI Institute, Georgia Institute of Technology, NASA Ames, and the Institut de Recherche en Astrophysique et Planétologie in France.

They found that the unusual chemistry of lava flows around Elysium is consistent with primary magmatic processes, such as a heterogeneous mantle beneath Mars' surface or the weight of the overlying volcanic mountain causing different layers of the mantle to melt at different temperatures as they rise to the surface over time.


This is a solidified lava flow over the side of a crater rim of Elysium.

Credit: NASA HiRISE image, David Susko, LSU.


Elysium is a giant volcanic complex on Mars, the second largest behind Olympus Mons. For scale, it rises to twice the height of Earth's Mount Everest, or approximately 16 kilometers. Geologically, however, Elysium is more like Earth's Tibesti Mountains in Chad, the Emi Koussi in particular, than Everest. This comparison is based on images of the region from the Mars Orbiter Camera, or MOC, aboard the Mars Global Surveyor, or MGS, Mission.

Elysium is also unique among martian volcanoes. It's isolated in the northern lowlands of the planet, whereas most other volcanic complexes on Mars cluster in the ancient southern highlands. Elysium also has patches of lava flows that are remarkably young for a planet often considered geologically silent.

"Most of the volcanic features we look at on Mars are in the range of 3-4 billion years old," Susko said. "There are some patches of lava flows on Elysium that we estimate to be 3-4 million years old, so three orders of magnitude younger. In geologic timescales, 3 million years ago is like yesterday."

In fact, Elysium's volcanoes hypothetically could still erupt, Susko said, although further research is needed to confirm this. "At least, we can't yet rule out active volcanoes on Mars," Susko said. "Which is very exciting."

Susko's work in particular reveals that the composition of volcanoes on Mars may evolve over their eruptive history. In earlier research led by Karunatillake, assistant professor in LSU's Department of Geology and Geophysics, researchers in LSU's Planetary Science Lab, or PSL, found that particular regions of Elysium and the surrounding shallow subsurface of Mars are geochemically anomalous, strange even relative to other volcanic regions on Mars. They are depleted in the radioactive elements thorium and potassium. Elysium is one of only two igneous provinces on Mars where researchers have found such low levels of these elements so far.

"Because thorium and potassium are radioactive, they are some of the most reliable geochemical signatures that we have on Mars," Susko said. "They act like beacons emitting their own gamma photons. These elements also often couple in volcanic settings on Earth."

In their new paper, Susko and colleagues started to piece together the geologic history of Elysium, an expansive volcanic region on Mars characterized by strange chemistry. They sought to uncover why some of Elysium's lava flows are so geochemically unusual, or why they have such low levels of thorium and potassium. Is it because, as other researchers have suspected, glaciers located in this region long ago altered the surface chemistry through aqueous processes? Or is it because these lava flows arose from different parts of Mars' mantle than other volcanic eruptions on Mars?

Perhaps the mantle has changed over time, meaning that more recent volcanic eruption flows differ chemically from older ones. If so, Susko could use Elysium's geochemical properties to study how Mars' bulk mantle has evolved over geologic time, with important insights for future missions to Mars. Understanding the evolutionary history of Mars' mantle could help researchers gain a better understanding of what kinds of valuable ores and other materials could be found in the crust, as well as whether volcanic hazards could unexpectedly threaten human missions to Mars in the near future. Mars' mantle likely has a very different history than Earth's mantle because the plate tectonics on Earth are absent on Mars as far as researchers know. The history of the bulk interior of the red planet also remains a mystery.

Susko and colleagues at LSU analyzed geochemical and surface morphology data from Elysium using instruments on board NASA's Mars Odyssey Orbiter (2001) and Mars Reconnaissance Orbiter (2006). They had to account for the dust that blankets Mars' surface in the aftermath of strong dust storms, to make sure that the shallow subsurface chemistry actually reflected Elysium's igneous material and not the overlying dust.

Through crater counting, the researchers found differences in age between the northwest and the southeast regions of Elysium -- about 850 million years of difference. They also found that the younger southeast regions are geochemically different from the older regions, and that these differences in fact relate to igneous processes, not secondary processes like the interaction of water or ice with the surface of Elysium in the past.

"We determined that while there might have been water in this area in the past, the geochemical properties in the top meter throughout this volcanic province are indicative of igneous processes," Susko said. "We think levels of thorium and potassium here were depleted over time because of volcanic eruptions over billions of years. The radioactive elements were the first to go in the early eruptions. We are seeing changes in the mantle chemistry over time."

"Long-lived volcanic systems with changing magma compositions are common on Earth, but an emerging story on Mars," said James Wray, study co-author and associate professor in the School of Earth and Atmospheric Sciences at Georgia Tech.

Wray led a 2013 study that showed evidence for magma evolution at a different martian volcano, Syrtis Major, in the form of unusual minerals. But such minerals could be originating at the surface of Mars, and are visible only on rare dust-free volcanoes.

"At Elysium we are truly seeing the bulk chemistry change over time, using a technique that could potentially unlock the magmatic history of many more regions across Mars," he said.

Susko speculates that the very weight of Elysium's lava flows, which make up a volcanic province six times higher and almost four times wider than its morphological sister on Earth, Emi Koussi, has caused different depths of Mars' mantle to melt at different temperatures. In different regions of Elysium, lava flows may have come from different parts of the mantle. Seeing chemical differences in different regions of Elysium, Susko and colleagues concluded that Mars' mantle might be heterogeneous, with different compositions in different areas, or that it may be stratified beneath Elysium.

Overall, Susko's findings indicate that Mars is a much more geologically complex body than originally thought, perhaps due to various loading effects on the mantle caused by the weight of giant volcanoes.

"It's more Earth-like than moon-like," Susko said. "The moon is cut and dry. It often lacks the secondary minerals that occur on Earth due to weathering and igneous-water interactions. For decades, that's also how we envisioned Mars, as a lifeless rock, full of craters with a number of long inactive volcanoes. We had a very simple view of the red planet. But the more we look at Mars, the less moon-like it becomes. We're discovering more variety in rock types and geochemical compositions, as seen across the Curiosity Rover's traverse in Gale Crater, and more potential for viable resource utilization and capacity to sustain a human population on Mars. It's much easier to survive on a complex planetary body bearing the mineral products of complex geology than on a simpler body like the moon or asteroids."

Susko plans to continue clarifying the geologic processes that cause the strange chemistry found around Elysium. In the future, he will study these chemical anomalies through computational simulations, to determine if recreating the pressures in Mars' mantle caused by the weight of giant volcanoes could affect mantle melting to yield the type of chemistry observed within Elysium.



Contacts and sources:
Alison Satake
Louisiana State University (LSU)

Do You or Don't You Want To Know Your Future? Most People Don't, Says New Study

Learning what the future holds, good or bad, not appealing to most, study says

Given the chance to see into the future, most people would rather not know what life has in store for them, even if the news is positive, according to new research conducted by scientists at the Max Planck Institute for Human Development and the University of Granada, which has been published by the American Psychological Association.


In Greek mythology, the princess and seeress Cassandra was cursed so that no one believed her words and prophecies. In light of recent research findings, which show that most people prefer not to know what the future has in store for them, this is hardly surprising.
Credit: © Flickr/Internet Archive Book Images/public domain


“In Greek mythology, Cassandra, daughter of the king of Troy, had the power to foresee the future. But she was also cursed, so that no one believed her prophecies,” said the study’s lead author, Gerd Gigerenzer of the Max Planck Institute for Human Development. “In our study, we found that people would not want the powers that made Cassandra famous, in order to avoid the suffering and regret that knowing the future may cause and also to maintain the enjoyment of suspense that pleasurable events provide.”

Two nationally representative studies involving more than 2,000 adults in Germany and Spain found that 86 to 90 percent of people would not want to know about upcoming negative events, and 40 to 77 percent preferred to remain ignorant of upcoming positive events. Only 1 percent of participants consistently wanted to know what the future held. The findings are published in the APA journal Psychological Review.

The researchers also found that people who prefer not to know the future are more risk averse and more frequently buy life and legal insurance than those who want to know the future. This suggests that those who choose to be ignorant anticipate regret, Gigerenzer said. The time frame also played a role: Deliberate ignorance was more likely the nearer the event was expected to take place. For example, older adults were less likely than younger adults to want to know the date and cause of their or their partner's death.

Participants were asked about a large range of potential events, both positive and negative. For example, they were asked if they wanted to know who won a soccer game they had planned to watch later, what they were getting for Christmas, whether there is life after death, and if their marriage would eventually end in divorce. Finding out the sex of their unborn child was the only item in the survey where more people wanted to know than not, with only 37 percent of participants saying they wouldn’t want to know.

Although the people living in Germany and Spain varied in age, education and other important aspects, the pattern of deliberate ignorance was highly consistent across the two countries.

“Wanting to know is assumed to be the norm for humans, and in no need of justification. People are not just invited but also often expected to participate in early detection for cancer screening or in regular health check-ups, to subject their unborn babies to dozens of prenatal genetic tests, or to use self-tracking health devices,” said Gigerenzer. “Not wanting to know appears counterintuitive and may raise eyebrows, but deliberate ignorance, as we’ve shown here, doesn’t just exist; it is a widespread state of mind.”




Contacts and sources:
Prof. Dr. Dr. h.c. Gerd Gigerenzer
Max Planck Institute for Human Development, Berlin

Citation: Gigerenzer, G., & García-Retamero, R. (2017). Cassandra's regret: The psychology of not wanting to know. Psychological Review, 124(2), 179-196.

Musical and Speech Melodies May Be the “Social Glue” or the “Lowest Common Denominator in Human Evolution”


Daniela Sammler conducts research into the structures of the brain that process speech and music, and finds many commonalities

A mother sings a lullaby to her baby. When she talks to her child she modifies the pitch of her voice. What the baby “understands” is the melody and the emotions that this expresses.

Daniela Sammler, a neuropsychologist at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, considers both musical melodies and speech melodies to be the “social glue” or the “lowest common denominator in human evolution”.

“Both obey a grammar – naturally culture-specific – that we learn early on in life. Speech is clearly governed by the order of clauses in a sentence,” explains the 38-year-old, who has led her own Research Group in Leipzig since the summer of 2013. But how individual words and parts of a sentence are stressed can also fundamentally change the meaning of a sentence. Take the sentence "Mary has given a book to John", where the meaning depends on whether "Mary" or "John" is stressed.

Credit: Max Planck Society

Music, similarly, follows a sequence of tones and harmonies – its “musical grammar.” If a pianist, for example, breaks these rules, brain regions activate that are astonishingly similar to those that fire when grammatical mistakes are made in a sentence.


Music and speech: two channels of communication available only to humans

Daniela Sammler doesn't consider it chance that we humans, alone among animals, possess both speech and music as channels of communication. She is convinced that over the course of evolution the human brain has been shaped to process both, and she has set out to uncover the underlying brain structures.

One part of the Research Group she leads investigates the role of speech melodies – word stress, the sequence of pitches in a sentence, and the cadence of speech. The other part researches how melodies are perceived in music. To do this she has had a special piano constructed by the Julius Blüthner piano manufacturing company in Leipzig that can be played while in a magnetic resonance imaging (MRI) scanner. With its help scientists can measure the brain activity of pianists while playing the piano. 


What's really fascinating is how our sense of the rules of music governs how we interpret it. Both lines of investigation suggest that similar regions of the brain are employed to process melodies in speech and in music, and colleagues in the field are taking note: “Thanks to the intensive research that Daniela Sammler has undertaken, we now know that the neuronal substrates of music and speech are more similar than we ever suspected,” says Angela Friederici, Director at the same Institute. “It’s her work that has demonstrated the central role of speech melody in our interpersonal communication.”


Daniela Sammler

Credit: © Amac Garbe

“Our brains don’t have separate specialized regions for speech and for music,” stresses Daniela Sammler. Music, like speech, activates a number of brain regions that are often also responsible for other functions. “Take hearing for example, and also motor function – like tapping your foot. Not to forget the emotional centres, like those used to store memories,” adds Sammler. In the brain, different highly interconnected regions all work together. In the process, similar tasks are bundled together in specialized regions. How this happens in detail is what Daniela is hoping to understand.

What unites and what separates individual cultures?

For this reason she is investigating both the “universals” – the commonalities in our understanding of music and speech across many cultures – and the culturally learned differences. Do speakers of Arabic who understand no German have the same experience of German sentence melodies that a native German speaker might have? Is the reverse also true? Do we recognize a critical tone in the cadence of speech whether or not we speak the language?

Daniela Sammler is fascinated by this and many other new projects, and her students are often astonished at how analogous the results of speech and music research are. She supervises four doctoral students in her Group, as well as an ever-changing number of undergraduates. What are her further plans? “What I’m interested in could go on forever,” says Sammler. She recently submitted her German Habilitation (extended postdoctoral qualification) and is now applying for vacant professorships. Her scientific journey, in other words, is ongoing. She hopes to stay in Germany, or at least in Europe.



Contacts and sources:
Daniela Sammler
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
Text: Mechthild Zimmermann / Barbara Abrell

The Ancient Art of Kirigami Is Inspiring a New Class of Materials

Origami-inspired materials use folds in materials to embed powerful functionality. However, all that folding can be pretty labor intensive. Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are drawing material inspiration from another ancient Japanese paper craft — kirigami.

Kirigami relies on cuts, rather than folds, to change the structure and function of materials.

The buckling-induced cubic patterned kirigami sheet can be folded flat 
Image courtesy of Ahmad Rafsanjani/Harvard SEAS

In a new paper published in Physical Review Letters, SEAS researchers demonstrate how a thin, perforated sheet can be transformed into a foldable 3D structure by simply stretching the cut material.

“We find that applying sufficiently large amounts of stretching, buckling is triggered and results in the formation of a 3D structure comprising a well-organized pattern of mountains and valleys, very similar to popular origami folds such as the Miura-ori,” said Ahmad Rafsanjani, a postdoctoral fellow at SEAS and first author of the paper.



The team found that if the material is stretched more, the temporary deformations become permanent folds. The team also found that the pop-up pattern and resulting mechanical properties of the material can be controlled by varying the orientation of the cuts.
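
As a rough illustration of what "varying the orientation of the cuts" can mean in practice, the sketch below lays out a periodic pattern of straight cuts on a square lattice, alternating the cut direction from row to row. The lattice spacing, cut length, and base angle are arbitrary placeholder parameters; the actual perforation geometry used by the SEAS team is defined in their Physical Review Letters paper, not here.

```python
import numpy as np

def cut_pattern(n_rows=6, n_cols=6, spacing=1.0, cut_len=0.7, angle_deg=0.0):
    """Return a list of ((x0, y0), (x1, y1)) endpoints for straight cuts."""
    cuts = []
    for i in range(n_rows):
        # Alternate the cut orientation by 90 degrees on successive rows.
        theta = np.deg2rad(angle_deg + 90.0 * (i % 2))
        dx = 0.5 * cut_len * np.cos(theta)
        dy = 0.5 * cut_len * np.sin(theta)
        for j in range(n_cols):
            cx, cy = j * spacing, i * spacing  # cut centre on the square lattice
            cuts.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return cuts

# Print the first few cuts of a small pattern with the base angle rotated by 30 degrees.
for p0, p1 in cut_pattern(n_rows=2, n_cols=3, angle_deg=30.0)[:6]:
    print(f"cut from ({p0[0]:.2f}, {p0[1]:.2f}) to ({p1[0]:.2f}, {p1[1]:.2f})")
```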

“This study shows a robust pop-up strategy to manufacture complex morphable structures out of completely flat perforated sheets,” said Katia Bertoldi, the John L. Loeb Associate Professor of the Natural Sciences at SEAS and senior author of the paper.



Contacts and sources:
Leah Burrows
Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) 

Even Shredded to Pieces They Live: How Nearly Immortal Hydras Know Where to Regrow Lost Body Parts

Hydras: the almost immortal animals.

Few animals can match the humble hydra’s resilience. The small, tentacled freshwater animals can be literally shredded into pieces and regrow into healthy animals.

Hydras are a genus of the Cnidaria phylum. All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Hydras are simple, freshwater animals possessing radial symmetry and no post-mitotic cells. All hydra cells continually divide. It has been suggested that hydras do not undergo senescence, and, as such, are biologically immortal.

A study published February 7 in Cell Reports suggests that pieces of hydras have structural memory that helps them shape their new body plan according to the pattern inherited by the animal’s “skeleton.” Previously, scientists thought that only chemical signals told a hydra where its heads and/or feet should form.
Credit: Technion

Regenerating hydras use a network of tough, stringy protein fibers, called the cytoskeleton, to align their cells. When pieces are cut or torn from hydras, the cytoskeletal pattern survives and becomes part of the new animal. The pattern generates a small but potent amount of mechanical force that shows cells where to line up. This mechanical force can serve as a form of “memory” that stores information about the layout of animal bodies. “You have to think of it as part of the process of defining the pattern and not just an outcome,” says senior author Kinneret Keren, a biophysicist at the Technion – Israel Institute of Technology.

When pieces of hydra begin the regeneration process, the scraps of hydra fold into little balls, and the cytoskeleton has to find a balance between maintaining its old shape and adapting to the new conditions. “If you take a strip or a square fragment and turn it into a sphere, the fibers have to change or stretch a lot to do that,” explains Keren. However, some portions retain their pattern. As the little hydra tissue ball stretches into a tube and grows a tentacle-ringed mouth, the new body parts follow the template set by the cytoskeleton in fragments from the original hydra. 


Hydras
Credit: Stephen Friedt/Wikipedia

The main cytoskeletal structure in adult hydra is an array of aligned fibers that span the entire organism. Tampering with the cytoskeleton is enough to disrupt the formation of new hydras, the researchers found. In many ways, the cytoskeleton is like a system of taut wires that helps the hydra keep its shape and function. In one experiment, the researchers cut the original hydra into rings, which folded into balls containing multiple domains of aligned fibers. Those ring-shaped pieces grew into two-headed hydras. However, anchoring the hydra rings to stiff wires resulted in healthy one-headed hydras, suggesting that mechanical feedback promotes order in the developing animal.

Hydras are much simpler than most of their cousins in the animal kingdom, but the basic pattern of aligned cytoskeletal fibers is common in many organs, including human muscles, heart, and guts. Studying hydra regeneration may lead to a better understanding of how mechanics integrate with biochemical signals to shape tissues and organs in other species. “The actomyosin cytoskeleton is the main force generator across the animal kingdom,” says Keren. “This is very universal.”



Contacts and sources:
Technion – Israel Institute of Technology.

Citation: "Structural Inheritance of the Actin Cytoskeletal Organization Determines the Body Axis in Regenerating Hydra"   Cell Reports

‘Eye-Opening’ Study Shows Rural U.S. Loses Forests Faster Than Cities

Americans are spending their lives farther from forests than they did at the end of the 20th century and, contrary to popular wisdom, the change is more pronounced in rural areas than in urban settings.

A study published today (Feb. 22) in the journal PLOS ONE says that between 1990 and 2000, the average distance from any point in the United States to the nearest forest increased by 14 percent - or about a third of a mile. And while the distance isn't insurmountable for humans in search of a nature fix, it can present challenges for wildlife and have broad effects on ecosystems.

Dr. Giorgos Mountrakis, an associate professor in the Department of Environmental Resources at the SUNY College of Environmental Science and Forestry (ESF) and a co-author of the study, called the results "eye opening."

"Our study analyzed geographic distribution of forest losses across the continental U.S. While we focused on forests, the implications of our results go beyond forestry," Mountrakis said.

Figure: Forest cover change (FCC) and forest attrition distance change (FADC) in level III ecoregions. While the southeastern U.S. is experiencing high forest loss, the highest forest attrition is concentrated in other parts of the country.

Credit: Sheng Yang and Giorgos Mountrakis, "Forest dynamics in the U.S. indicate disproportionate attrition in western forests, rural areas and public lands," PLOS ONE

The study overturned conventional wisdom about forest loss, the researcher noted. The amount of forest attrition - the complete removal of forest patches - is considerably higher in rural areas and in public lands. "The public perceives the urbanized and private lands as more vulnerable," said Mountrakis, "but that's not what our study showed. Rural areas are at a higher risk of losing these forested patches.

"Patches of forests are important to study because they serve a lot of unique ecoservices," Mountrakis said, citing bird migration as one example. "You can think of the forests as little islands that the birds are hopping from one to the next."


Illustration shows a female spirit labeled "Public Spirit" warning two men cutting logs of the consequences of deforestation.
Credit: Wikimedia Commons/Joseph Keppler - Library of Congress. Illustration from Puck, v. 14, no. 357 (1884 January 9), centerfold


"Typically we concentrate more on urban forest," said Sheng Yang, an ESF graduate student and co-author of the study, "but we may need to start paying more attention - let's say for biodiversity reasons - in rural rather than urban areas. Because the urban forests tend to receive much more attention, they are better protected."

Forest dynamics are an integral part of larger ecosystems and have the potential to significantly affect water chemistry, soil erosion, carbon sequestration patterns, local climate, biodiversity distribution and human quality of life, Mountrakis said.

Using forest maps over the entire continental United States, researchers compared satellite data from the 1990s with data from 2000. "We did a statistical analysis starting with forest maps from 1990 and compared it to forests in 2000," said Mountrakis.

The study looked at the loss of forest by calculating the distance to the nearest forest from every area in the landscape, Mountrakis said. The loss of a smaller isolated forest could have a greater environmental impact than losing acreage within a larger forest.
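
One way to make the "distance to the nearest forest" metric concrete is to treat the landscape as a binary forest/non-forest raster, apply a Euclidean distance transform, and average the resulting distances over all cells. The minimal sketch below does exactly that on a toy random map with an assumed 1 km cell size; it illustrates the general technique, not a reproduction of the study's processing of the 1990 and 2000 forest maps.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_distance_to_forest(forest_mask, cell_size_km=1.0):
    """Average distance from every cell to the nearest forest cell, in km."""
    # distance_transform_edt measures the distance to the nearest zero-valued
    # element, so invert the mask: forest cells become 0, non-forest become 1.
    dist_cells = distance_transform_edt(~forest_mask)
    return float(dist_cells.mean()) * cell_size_km

rng = np.random.default_rng(42)
forest_1990 = rng.random((200, 200)) < 0.30                  # toy map, ~30% forest
forest_2000 = forest_1990 & (rng.random((200, 200)) < 0.95)  # simulate attrition

d90 = mean_distance_to_forest(forest_1990)
d00 = mean_distance_to_forest(forest_2000)
print(f"mean distance to forest, 1990: {d90:.2f} km, 2000: {d00:.2f} km "
      f"({100 * (d00 - d90) / d90:+.1f}% change)")
```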

Credit: William B. Greeley, US Forest Service

The study also found distance to the nearest forest is considerably greater in western forests than eastern forests.

"So if you are in the western U.S. or you are in a rural area or you are in land owned by a public entity, it could be federal, state or local, your distance to the forest is increasing much faster than the other areas," he said. "The forests are getting further away from you."

"Distances to nearest forest are also increasing much faster in less forested landscapes. This indicates that the most spatially isolated - and therefore important - forests are the ones under the most pressure," said Yang.

Credit: William B. Greeley, US Forest Service

The loss of these unique forests poses a different set of side effects, Mountrakis said, "for local climate, for biodiversity, for soil erosion. This is the major driver - we can link the loss of the isolated patches to all these environmental degradations."

Along with research into the drivers behind the loss of forests, Mountrakis expects the differing geographic distributions and differences in land ownership and urbanization levels will initiate new research and policy across forestry, ecology, social science and geography.

This work was supported by the National Urban and Community Forestry Advisory Council and the McIntire-Stennis program, U.S. Forest Service.






Contacts and sources:
 SUNY College of Environmental Science and Forestry

Citation: "Forest dynamics in the U.S. indicate disproportionate attrition in western forests, rural areas and public lands." Authors: Sheng Yang, Giorgos Mountrakis.
Published: February 22, 2017. http://dx.doi.org/10.1371/journal.pone.0171383

Nanoconfinement: A Boon for Hydrogen Vehicles?

Lawrence Livermore scientists have collaborated with an interdisciplinary team of researchers including colleagues from Sandia National Laboratories to develop an efficient hydrogen storage system that could be a boon for hydrogen powered vehicles.

Hydrogen is an excellent energy carrier, but the development of lightweight solid-state materials for compact, low-pressure storage is a huge challenge.

Complex metal hydrides are a promising class of hydrogen storage materials, but their viability is usually limited by slow hydrogen uptake and release. Nanoconfinement — infiltrating the metal hydride within a matrix of another material such as carbon — can, in certain instances, help make this process faster by shortening diffusion pathways for hydrogen or by changing the thermodynamic stability of the material.


Hydrogenation forms a mixture of lithium amide and hydride (light blue) as an outer shell around a lithium nitride particle (dark blue) nanoconfined in carbon. Nanoconfinement suppresses all other intermediate phases to prevent interface formation, which has the effect of dramatically improving the hydrogen storage performance.
Credit: LLNL


However, the Livermore-Sandia team, in conjunction with collaborators from Mahidol University in Thailand and the National Institute of Standards and Technology, showed that nanoconfinement can have another, potentially more important consequence. They found that the presence of internal “nano-interfaces” within nanoconfined hydrides can alter which phases appear when the material is cycled.

The researchers examined the high-capacity lithium nitride (Li3N) hydrogen storage system under nanoconfinement. Using a combination of theoretical and experimental techniques, they showed that the pathways for the uptake and release of hydrogen were fundamentally changed by the presence of nano-interfaces, leading to dramatically faster performance and reversibility. The research appears on the cover of the Feb. 23 edition of the journal Advanced Materials Interfaces.

“The key is to get rid of the undesirable intermediate phases, which slow down the material’s performance as they are formed or consumed. If you can do that, then the storage capacity kinetics dramatically improve and the thermodynamic requirements to achieve full recharge become far more reasonable,” said Brandon Wood, an LLNL materials scientist and lead author of the paper. “In this material, the nano-interfaces do just that, as long as the nanoconfined particles are small enough. It’s really a new paradigm for hydrogen storage, since it means that the reactions can be changed by engineering internal microstructures.”

The Livermore researchers used a thermodynamic modeling method that goes beyond conventional descriptions to consider the contributions from the evolving solid phase boundaries as the material is hydrogenated and dehydrogenated. They showed that accounting for these contributions eliminates intermediates in nanoconfined lithium nitride, which was confirmed spectroscopically.
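
A schematic way to see why interfacial contributions matter at the nanoscale is to add a surface/interface free-energy term, which scales inversely with particle radius, to the bulk reaction free energy. The sketch below uses placeholder numbers (an assumed bulk free energy, interface energy, and molar volume) purely to show the trend: below a critical particle size the interface term can flip the sign of the free energy and suppress a phase that would otherwise form. This is an illustration of the general thermodynamic argument, not the LLNL team's actual model.

```python
def phase_free_energy(r_nm, g_bulk_kj_mol, gamma_j_m2, molar_volume_cm3_mol):
    """Free energy (kJ/mol) of forming a phase as a spherical particle of radius r_nm."""
    r_m = r_nm * 1e-9
    v_m3_mol = molar_volume_cm3_mol * 1e-6
    # The surface-to-volume ratio of a sphere is 3/r, so the interfacial penalty
    # per mole is gamma * (3 * V_m / r), converted from J/mol to kJ/mol below.
    surface_term_kj_mol = 3.0 * gamma_j_m2 * v_m3_mol / r_m / 1000.0
    return g_bulk_kj_mol + surface_term_kj_mol

# Placeholder parameters (assumptions, not measured values for the Li3N system).
for r in (50.0, 10.0, 3.0, 1.5):
    g = phase_free_energy(r, g_bulk_kj_mol=-5.0, gamma_j_m2=1.0, molar_volume_cm3_mol=20.0)
    status = "still favourable" if g < 0 else "suppressed by the interface term"
    print(f"r = {r:4.1f} nm: G_formation = {g:+6.2f} kJ/mol ({status})")
```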

Beyond demonstrating nanoconfined lithium nitride as a rechargeable, high-performing hydrogen-storage material, the work establishes that proper consideration of solid–solid nanointerfaces and particle microstructure is necessary for understanding hydrogen-induced phase transitions in complex metal hydrides.

“There is a direct analogy between hydrogen storage reactions and solid-state reactions in battery electrode materials,” said Tae Wook Heo, another LLNL co-author on the study. “People have been thinking about the role of interfaces in batteries for some time, and our work suggests that some of the same strategies being pursued in the battery community could also be applied to hydrogen storage. Tailoring morphology and internal microstructure could be the best way forward for engineering materials that could meet performance targets.”

Other Livermore researchers on the study include Keith Ray and Jonathan Lee.

The research is supported through the Hydrogen Storage Materials Advanced Research Consortium of the Department of Energy Office of Energy Efficiency and Renewable Energy, Fuel Cell Technologies Office.


Contacts and sources: 
Anne M Stark
Lawrence Livermore National Laboratory