Unseen Is Free


Sunday, February 26, 2017

Humans Take The Path of Least Resistance: It's in Our Nature

The amount of effort required to do something influences what we think we see, finds a new University College London (UCL) study suggesting we’re biased towards perceiving anything challenging to be less appealing.

 “Our brain tricks us into believing the low-hanging fruit really is the ripest,” says Dr Nobuhiro Hagura, who led the UCL team before moving to NICT in Japan. “We found that not only does the cost to act influence people’s behaviour, but it even changes what we think we see.”

 Credit: UCL


For the study, published in eLife, a total of 52 participants took part in a series of tests where they had to judge whether a cloud of dots on a screen was moving to the left or to the right. They expressed their decisions by moving a handle held in the left or right hand respectively. When the researchers gradually added a load to one of the handles, making it more difficult to move, the volunteers’ judgements about what they saw became biased, and they started to avoid the effortful response. If weight was added to the left handle, participants were more likely to judge the dots to be moving rightwards as that decision was slightly easier for them to express. Crucially, the participants did not become aware of the increasing load on the handle: their motor system automatically adapted, triggering a change in their perception.
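One way to picture the reported bias is as a shift of the decision criterion: loading one handle effectively raises the cost of that response, so ambiguous motion gets reported more often as the direction that is cheaper to act on. The toy simulation below is purely illustrative and is not the authors' model or analysis; the sensitivity, noise and criterion values, and the function names, are assumptions chosen for demonstration.

```python
# Illustrative toy model only (not the authors' analysis): a signal-detection
# sketch of how adding a cost to one response can bias left/right judgements.
# All parameter values and function names here are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def prop_rightward(coherence, criterion_shift, sensitivity=8.0, noise=1.0, n_trials=2000):
    """Fraction of 'rightward' judgements for a given signed motion coherence.

    coherence       : signed motion strength (-1 strongly left, +1 strongly right)
    criterion_shift : offset of the decision criterion; > 0 mimics a loaded
                      (more effortful) left handle biasing reports rightward
    """
    evidence = sensitivity * coherence + noise * rng.standard_normal(n_trials)
    return float(np.mean(evidence > -criterion_shift))

coherences = np.linspace(-0.5, 0.5, 11)
baseline = [prop_rightward(c, criterion_shift=0.0) for c in coherences]
left_loaded = [prop_rightward(c, criterion_shift=0.8) for c in coherences]

for c, b, l in zip(coherences, baseline, left_loaded):
    print(f"coherence {c:+.2f}: P(right) unloaded={b:.2f}, left handle loaded={l:.2f}")
```

With the criterion shifted, the simulated observer calls weak or ambiguous motion "rightward" more often, while strong motion is largely unaffected, which mirrors the pattern of a perceptual bias rather than a change in sensitivity.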

“The tendency to avoid the effortful decision remained even when we asked people to switch to expressing their decision verbally, instead of pushing on the handles,” Dr Hagura said. “The gradual change in the effort of responding caused a change in how the brain interpreted the visual input. Importantly, this change happened automatically, without any awareness or deliberate strategy.”

“Traditionally, scientists have assumed the visual system gives us perceptual information, and the motor system is a mere downstream output channel, which expresses our decision based on what we saw, without actually influencing the decision itself. Our experiments suggest an alternative view: the motor response that we use to report our decisions can actually influence the decision about what we have seen,” he said.

The researchers believe that our daily decisions could be modified not just through deliberate cognitive strategies, but also by designing the environment to make these decisions slightly more effortful. “The idea of ‘implicit nudge’ is currently popular with governments and advertisers,” said co-author Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience). “Our results suggest these methods could go beyond changing how people behave, and actually change the way the world looks. Most behaviour change focuses on promoting a desired behaviour, but our results suggest you could also make it less likely that people see the world a certain way, by making a behaviour more or less effortful. Perhaps the parent who places the jar of biscuits on a high shelf actually makes them look less tasty to the toddler playing on the floor.”

The study was performed under an international collaboration between UCL, NICT (Japan) and Western University (Canada). The researchers were funded by the European Research Council, the Japan Society for the Promotion of Science, and the James S. McDonnell Foundation.




Contacts and sources:
Chris Lane
University College London (UCL)


Mars Mantle More Earth-like than Moon-like


New Mars research shows evidence of a complex mantle beneath the Elysium volcanic province.

Mars' mantle may be more complicated than previously thought. In a new study published today in the Nature-affiliated journal Scientific Reports, researchers at Louisiana State University (LSU) document geochemical changes over time in the lava flows of Elysium, a major martian volcanic province.

LSU Geology and Geophysics graduate researcher David Susko led the study with his advisor Suniti Karunatillake and other LSU colleagues, along with collaborators at the University of Ruhuna in Sri Lanka, the SETI Institute, the Georgia Institute of Technology, NASA Ames, and the Institut de Recherche en Astrophysique et Planétologie in France.

They found that the unusual chemistry of lava flows around Elysium is consistent with primary magmatic processes, such as a heterogeneous mantle beneath Mars' surface or the weight of the overlying volcanic mountain causing different layers of the mantle to melt at different temperatures as they rise to the surface over time.


This is a solidified lava flow over the side of a crater rim of Elysium.

Credit: NASA HiRISE image, David Susko, LSU.


Elysium is a giant volcanic complex on Mars, the second largest behind Olympus Mons. For scale, it rises to twice the height of Earth's Mount Everest, or approximately 16 kilometers. Geologically, however, Elysium is more like Earth's Tibesti Mountains in Chad, the Emi Koussi in particular, than Everest. This comparison is based on images of the region from the Mars Orbiter Camera, or MOC, aboard the Mars Global Surveyor, or MGS, Mission.

Elysium is also unique among martian volcanoes. It's isolated in the northern lowlands of the planet, whereas most other volcanic complexes on Mars cluster in the ancient southern highlands. Elysium also has patches of lava flows that are remarkably young for a planet often considered geologically silent.

"Most of the volcanic features we look at on Mars are in the range of 3-4 billion years old," Susko said. "There are some patches of lava flows on Elysium that we estimate to be 3-4 million years old, so three orders of magnitude younger. In geologic timescales, 3 million years ago is like yesterday."

In fact, Elysium's volcanoes hypothetically could still erupt, Susko said, although further research is needed to confirm this. "At least, we can't yet rule out active volcanoes on Mars," Susko said. "Which is very exciting."

Susko's work in particular reveals that the composition of volcanoes on Mars may evolve over their eruptive history. In earlier research led by Karunatillake, assistant professor in LSU's Department of Geology and Geophysics, researchers in LSU's Planetary Science Lab, or PSL, found that particular regions of Elysium and the surrounding shallow subsurface of Mars are geochemically anomalous, strange even relative to other volcanic regions on Mars. They are depleted in the radioactive elements thorium and potassium. Elysium is one of only two igneous provinces on Mars where researchers have found such low levels of these elements so far.

"Because thorium and potassium are radioactive, they are some of the most reliable geochemical signatures that we have on Mars," Susko said. "They act like beacons emitting their own gamma photons. These elements also often couple in volcanic settings on Earth."

In their new paper, Susko and colleagues started to piece together the geologic history of Elysium, an expansive volcanic region on Mars characterized by strange chemistry. They sought to uncover why some of Elysium's lava flows are so geochemically unusual, or why they have such low levels of thorium and potassium. Is it because, as other researchers have suspected, glaciers located in this region long ago altered the surface chemistry through aqueous processes? Or is it because these lava flows arose from different parts of Mars' mantle than other volcanic eruptions on Mars?

Perhaps the mantle has changed over time, meaning that more recent volcanic eruption flows differ chemically from older ones. If so, Susko could use Elysium's geochemical properties to study how Mars' bulk mantle has evolved over geologic time, with important insights for future missions to Mars. Understanding the evolutionary history of Mars' mantle could help researchers gain a better understanding of what kinds of valuable ores and other materials could be found in the crust, as well as whether volcanic hazards could unexpectedly threaten human missions to Mars in the near future. Mars' mantle likely has a very different history than Earth's mantle because the plate tectonics on Earth are absent on Mars as far as researchers know. The history of the bulk interior of the red planet also remains a mystery.

Susko and colleagues at LSU analyzed geochemical and surface morphology data from Elysium using instruments on board NASA's Mars Odyssey Orbiter (2001) and Mars Reconnaissance Orbiter (2006). They had to account for the dust that blankets Mars' surface in the aftermath of strong dust storms, to make sure that the shallow subsurface chemistry actually reflected Elysium's igneous material and not the overlying dust.

Through crater counting, the researchers found differences in age between the northwest and the southeast regions of Elysium -- about 850 million years of difference. They also found that the younger southeast regions are geochemically different from the older regions, and that these differences in fact relate to igneous processes, not secondary processes like the interaction of water or ice with the surface of Elysium in the past.

"We determined that while there might have been water in this area in the past, the geochemical properties in the top meter throughout this volcanic province are indicative of igneous processes," Susko said. "We think levels of thorium and potassium here were depleted over time because of volcanic eruptions over billions of years. The radioactive elements were the first to go in the early eruptions. We are seeing changes in the mantle chemistry over time."

"Long-lived volcanic systems with changing magma compositions are common on Earth, but an emerging story on Mars," said James Wray, study co-author and associate professor in the School of Earth and Atmospheric Sciences at Georgia Tech.

Wray led a 2013 study that showed evidence for magma evolution at a different martian volcano, Syrtis Major, in the form of unusual minerals. But such minerals could be originating at the surface of Mars, and are visible only on rare dust-free volcanoes.

"At Elysium we are truly seeing the bulk chemistry change over time, using a technique that could potentially unlock the magmatic history of many more regions across Mars," he said.

Susko speculates that the very weight of Elysium's lava flows, which make up a volcanic province six times higher and almost four times wider than its morphological sister on Earth, Emi Koussi, has caused different depths of Mars' mantle to melt at different temperatures. In different regions of Elysium, lava flows may have come from different parts of the mantle. Seeing chemical differences in different regions of Elysium, Susko and colleagues concluded that Mars' mantle might be heterogeneous, with different compositions in different areas, or that it may be stratified beneath Elysium.

Overall, Susko's findings indicate that Mars is a much more geologically complex body than originally thought, perhaps due to various loading effects on the mantle caused by the weight of giant volcanoes.

"It's more Earth-like than moon-like," Susko said. "The moon is cut and dry. It often lacks the secondary minerals that occur on Earth due to weathering and igneous-water interactions. For decades, that's also how we envisioned Mars, as a lifeless rock, full of craters with a number of long inactive volcanoes. We had a very simple view of the red planet. But the more we look at Mars, the less moon-like it becomes. We're discovering more variety in rock types and geochemical compositions, as seen across the Curiosity Rover's traverse in Gale Crater, and more potential for viable resource utilization and capacity to sustain a human population on Mars. It's much easier to survive on a complex planetary body bearing the mineral products of complex geology than on a simpler body like the moon or asteroids."

Susko plans to continue clarifying the geologic processes that cause the strange chemistry found around Elysium. In the future, he will study these chemical anomalies through computational simulations, to determine if recreating the pressures in Mars' mantle caused by the weight of giant volcanoes could affect mantle melting to yield the type of chemistry observed within Elysium.



Contacts and sources:
Alison Satake
Louisiana State University (LSU)

Do You or Don't You Want To Know Your Future? Most People Don't Says New Study

Learning what the future holds, good or bad, not appealing to most, study says

Given the chance to see into the future, most people would rather not know what life has in store for them, even if the news is positive, according to new research conducted by scientists at the Max Planck Institute for Human Development and the University of Granada, which has been published by the American Psychological Association.


In Greek mythology, the princess and seeress Cassandra was cursed so that no one believed her words and prophecies. In light of recent research findings, which show that most people prefer not to know what the future has in store for them, this is hardly surprising.
Credit: © Flickr/Internet Archive Book Images/public domain


“In Greek mythology, Cassandra, daughter of the king of Troy, had the power to foresee the future. But she was also cursed, so that no one believed her prophecies,” said the study’s lead author, Gerd Gigerenzer of the Max Planck Institute for Human Development. “In our study, we found that people would not want the powers that made Cassandra famous, in order to avoid the suffering and regret that knowing the future may cause and also to maintain the enjoyment of suspense that pleasurable events provide.”

Two nationally representative studies involving more than 2,000 adults in Germany and Spain found that 86 to 90 percent of people would not want to know about upcoming negative events, and 40 to 77 percent preferred to remain ignorant of upcoming positive events. Only 1 percent of participants consistently wanted to know what the future held. The findings are published in the APA journal Psychological Review.

The researchers also found that people who prefer not to know the future are more risk averse and more frequently buy life and legal insurance than those who want to know the future. This suggests that those who choose to be ignorant anticipate regret, Gigerenzer said. The time frame also played a role: Deliberate ignorance was more likely the nearer the event was expected to take place. For example, older adults were less likely than younger adults to want to know the date and cause of their or their partner's death.

Participants were asked about a large range of potential events, both positive and negative. For example, they were asked if they wanted to know who won a soccer game they had planned to watch later, what they were getting for Christmas, whether there is life after death, and if their marriage would eventually end in divorce. Finding out the sex of their unborn child was the only item in the survey where more people wanted to know, with only 37 percent of participants saying they wouldn’t want to know.

Although the people living in Germany and Spain varied in age, education and other important aspects, the pattern of deliberate ignorance was highly consistent across the two countries.

“Wanting to know is assumed to be the norm for humans, and in no need of justification. People are not just invited but also often expected to participate in early detection for cancer screening or in regular health check-ups, to subject their unborn babies to dozens of prenatal genetic tests, or to use self-tracking health devices,” said Gigerenzer. “Not wanting to know appears counterintuitive and may raise eyebrows, but deliberate ignorance, as we’ve shown here, doesn’t just exist; it is a widespread state of mind.”




Contacts and sources:
Prof. Dr. Dr. h.c. Gerd Gigerenzer
Max Planck Institute for Human Development, Berlin

Citation: Gigerenzer, G., & García-Retamero, R.
Cassandra's regret: The psychology of not wanting to know.
Psychological Review, Vol 124(2), Mar 2017, 179-196.

Musical and Speech Melodies May Be the “Social Glue” or the “Lowest Common Denominator in Human Evolution”


Daniela Sammler conducts research into the structures of the brain that process speech and music, and finds many commonalities

A mother sings a lullaby to her baby. When she talks to her child she modifies the pitch of her voice. What the baby “understands” is the melody and the emotions that this expresses.

Daniela Sammler, a neuropsychologist at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, considers both musical melodies and speech melodies to be the “social glue” or the “lowest common denominator in human evolution”.

“Both obey a grammar – naturally culture-specific – that we learn early on in life. Speech is clearly governed by the order of clauses in a sentence,” explains the 38-year-old, who has led her own Research Group in Leipzig since the summer of 2013. But how individual words and parts of a sentence are stressed can also fundamentally change the meaning of a sentence. Take the sentence “Mary has given a book to John”: its meaning changes depending on whether “Mary” or “John” is stressed.

Credit: Max Planck Society

Music, similarly, follows a sequence of tones and harmonies – its “musical grammar.” If a pianist, for example, breaks these rules, brain regions activate that are astonishingly similar to those that fire when grammatical mistakes are made in a sentence.


Music and speech: two channels of communication only available to humans

Daniela Sammler doesn't consider it chance that we humans alone, among all other animals, possess both speech and music as channels of communication. She is convinced that over the course of evolution the human brain has evolved to process both. And she has set out to uncover the underlying structures of the brain. 

One part of the Research Group she leads investigates the role of speech melodies – word stress, the sequence of pitches in a sentence, and the cadence of speech. The other part researches how melodies are perceived in music. To do this she has had a special piano constructed by the Julius Blüthner piano manufacturing company in Leipzig that can be played while in a magnetic resonance imaging (MRI) scanner. With its help scientists can measure the brain activity of pianists while playing the piano. 


What's really fascinating is how our sense for the rules of music governs how we interpret it. Both of these investigations suggest that similar regions of the brain are employed to process melodies in both speech and music, and colleagues in the same scientific circles are taking note: “Thanks to the intensive research that Daniela Sammler has undertaken, we now know that the neuronal substrates of music and speech are more similar than we ever suspected,” says Angela Friederici, Director at the same Institute. “It’s her work that has demonstrated the central role of speech melody in our interpersonal communication.”


Daniela Sammler

Credit: © Amac Garbe

“Our brains don’t have separate specialized regions for speech and for music,” stresses Daniela Sammler. Music, like speech, activates a number of brain regions that are often also responsible for other functions. “Take hearing for example, and also motor function – like tapping your foot. Not to forget the emotional centres, like those used to store memories,” adds Sammler. In the brain, different highly interconnected regions all work together. In the process, similar tasks are bundled together in specialized regions. How this happens in detail is what Daniela is hoping to understand.

What unites and what separates individual cultures?

For this reason she is investigating both the “universals” – the commonalities that exist in our understanding of music and speech across many cultures – as well as the culturally-learned differences. Do speakers of Arabic who understand no German have the same experience of German sentence melodies that a native German speaker might have? Is the reverse also true? Do we recognize a critical tone in the cadence of speech whether or not we speak the language?

Daniela Sammler is fascinated by this and many other new projects, and her students are often astonished at how analogous the results of speech and music research are. She supervises four doctoral students in her Group, as well as an ever-changing number of undergraduates. What are her further plans? “What I’m interested in could go on forever,” says Sammler. She recently submitted her German Habilitation (extended postdoctoral qualification), and she is now in the process of applying for vacant posts as a professor. Her scientific journey is ongoing in other words. She hopes to stay in Germany, or at least in Europe.



Contacts and sources:
Daniela Sammler
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
Text: Mechthild Zimmermann / Barbara Abrell

The Ancient Art of Kirigami Is Inspiring a New Class of Materials

Origami-inspired materials use folds in materials to embed powerful functionality. However, all that folding can be pretty labor intensive. Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are drawing material inspiration from another ancient Japanese paper craft — kirigami.

Kirigami relies on cuts, rather than folds, to change the structure and function of materials.

The buckling-induced cubic patterned kirigami sheet can be folded flat 
Image courtesy of Ahmad Rafsanjani/Harvard SEAS

In a new paper published in Physical Review Letters, SEAS researchers demonstrate how a thin, perforated sheet can be transformed into a foldable 3D structure by simply stretching the cut material.

“We find that by applying sufficiently large amounts of stretching, buckling is triggered and results in the formation of a 3D structure comprising a well-organized pattern of mountains and valleys, very similar to popular origami folds such as the Miura-ori,” said Ahmad Rafsanjani, a postdoctoral fellow at SEAS and first author of the paper.



The team found that if the material is stretched more, the temporary deformations become permanent folds. The team also found that the pop-up pattern and resulting mechanical properties of the material can be controlled by varying the orientation of the cuts.

“This study shows a robust pop-up strategy to manufacture complex morphable structures out of completely flat perforated sheets,” said Katia Bertoldi, the John L. Loeb Associate Professor of the Natural Sciences at SEAS and senior author of the paper.



Contacts and sources:
Leah Burrows
Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) 

Even Shredded to Pieces They Live: How Nearly Immortal Hydras Know Where to Regrow Lost Body Parts

Hydras: the almost immortal animals.

Few animals can match the humble hydra’s resilience. The small, tentacled freshwater animals can be literally shredded into pieces and regrow into healthy animals.

Hydras are a genus of the Cnidaria phylum. All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Hydras are simple, freshwater animals possessing radial symmetry and no post-mitotic cells. All hydra cells continually divide. It has been suggested that hydras do not undergo senescence, and, as such, are biologically immortal.

A study published February 7 in Cell Reports suggests that pieces of hydras have structural memory that helps them shape their new body plan according to the pattern inherited by the animal’s “skeleton.” Previously, scientists thought that only chemical signals told a hydra where its heads and/or feet should form.
Credit: Technion

Regenerating hydras use a network of tough, stringy protein fibers, called the cytoskeleton, to align their cells. When pieces are cut or torn from hydras, the cytoskeletal pattern survives and becomes part of the new animal. The pattern generates a small but potent amount of mechanical force that shows cells where to line up. This mechanical force can serve as a form of “memory” that stores information about the layout of animal bodies. “You have to think of it as part of the process of defining the pattern and not just an outcome,” says senior author Kinneret Keren, a biophysicist at the Technion – Israel Institute of Technology.

When pieces of hydra begin the regeneration process, the scraps of hydra fold into little balls, and the cytoskeleton has to find a balance between maintaining its old shape and adapting to the new conditions. “If you take a strip or a square fragment and turn it into a sphere, the fibers have to change or stretch a lot to do that,” explains Keren. However, some portions retain their pattern. As the little hydra tissue ball stretches into a tube and grows a tentacle-ringed mouth, the new body parts follow the template set by the cytoskeleton in fragments from the original hydra. 


Hydras
Credit: Stephen Friedt/Wikipedia

The main cytoskeletal structure in adult hydra is an array of aligned fibers that span the entire organism. Tampering with the cytoskeleton is enough to disrupt the formation of new hydras, the researchers found. In many ways, the cytoskeleton is like a system of taut wires that helps the hydra keep its shape and function. In one experiment, the researchers cut the original hydra into rings which folded into balls that contained multiple domains of aligned fibers. Those ring-shaped pieces grew into two-headed hydras. However, anchoring the hydra rings to stiff wires resulted in healthy one-headed hydras, suggesting that mechanical feedbacks promote order in the developing animal.

Hydras are much simpler than most of their cousins in the animal kingdom, but the basic pattern of aligned cytoskeletal fibers is common in many organs, including human muscles, heart, and guts. Studying hydra regeneration may lead to a better understanding of how mechanics integrate with biochemical signals to shape tissues and organs in other species. “The actomyosin cytoskeleton is the main force generator across the animal kingdom,” says Keren. “This is very universal.”



Contacts and sources:
Technion – Israel Institute of Technology.

Citation: "Structural Inheritance of the Actin Cytoskeletal Organization Determines the Body Axis in Regenerating Hydra"   Cell Reports

‘Eye-Opening’ Study Shows Rural U.S. Loses Forests Faster Than Cities

Americans are spending their lives farther from forests than they did at the end of the 20th century and, contrary to popular wisdom, the change is more pronounced in rural areas than in urban settings.

A study published today (Feb. 22) in the journal PLOS ONE says that between 1990 and 2000, the average distance from any point in the United States to the nearest forest increased by 14 percent - or about a third of a mile. And while the distance isn't insurmountable for humans in search of a nature fix, it can present challenges for wildlife and have broad effects on ecosystems.

Dr. Giorgos Mountrakis, an associate professor in the Department of Environmental Resources at the SUNY College of Environmental Science and Forestry (ESF) and co-author of the study, called the results "eye opening."

"Our study analyzed geographic distribution of forest losses across the continental U.S. While we focused on forests, the implications of our results go beyond forestry," Mountrakis said.

Figure: Forest cover change (FCC) and forest attrition distance change (FADC) in level III ecoregions. While the southeastern U.S. is experiencing high forest loss, the highest forest attrition is concentrated in other parts of the country.

Credit: Sheng Yang and Giorgos Mountrakis, "Forest dynamics in the U.S. indicate disproportionate attrition in western forests, rural areas and public lands," PLOS ONE

The study overturned conventional wisdom about forest loss, the researcher noted. The amount of forest attrition - the complete removal of forest patches - is considerably higher in rural areas and in public lands. "The public perceives the urbanized and private lands as more vulnerable," said Mountrakis, "but that's not what our study showed. Rural areas are at a higher risk of losing these forested patches.

"Patches of forests are important to study because they serve a lot of unique ecoservices," Mountrakis said, citing bird migration as one example. "You can think of the forests as little islands that the birds are hopping from one to the next."


Illustration shows a female spirit labeled "Public Spirit" warning two men cutting logs, of the consequences of deforestation.
Credit: Wikimedia Commons/Joseph Keppler - Library of Congress  Illus. from Puck, v. 14, no. 357, (1884 January 9), centerfold


"Typically we concentrate more on urban forest," said Sheng Yang, an ESF graduate student and co-author of the study, "but we may need to start paying more attention - let's say for biodiversity reasons - in rural rather than urban areas. Because the urban forests tend to receive much more attention, they are better protected."

Forest dynamics are an integral part of larger ecosystems and have the potential to significantly affect water chemistry, soil erosion, carbon sequestration patterns, local climate, biodiversity distribution and human quality of life, Mountrakis said.

Using forest maps over the entire continental United States, researchers compared satellite data from the 1990s with data from 2000. "We did a statistical analysis starting with forest maps from 1990 and compared it to forests in 2000," said Mountrakis.

The study looked at the loss of forest by calculating the distance to the nearest forest from every area in the landscape, Mountrakis said. The loss of a smaller isolated forest could have a greater environmental impact than losing acreage within a larger forest.
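The measurement behind these distance figures is, at heart, a distance transform over a binary forest map: for every cell in the landscape, find the distance to the nearest forested cell, then compare the averages between the two dates. The sketch below illustrates that general technique on a tiny made-up grid with an assumed cell size; it is not the study's actual processing pipeline.

```python
# Illustrative sketch (not the study's processing pipeline): distance from every
# landscape cell to the nearest forested cell, computed on a tiny made-up grid.
import numpy as np
from scipy.ndimage import distance_transform_edt

# 1 = forest, 0 = non-forest; the real study used national land-cover maps
# for 1990 and 2000.
forest_1990 = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
])
forest_2000 = forest_1990.copy()
forest_2000[2:, 3:] = 0          # an isolated patch disappears (attrition)

cell_size_km = 1.0               # assumed grid resolution

def mean_distance_to_forest(forest):
    # distance_transform_edt gives each nonzero cell its distance to the nearest
    # zero cell, so pass the complement: non-forest cells are measured against
    # the nearest remaining forest cell.
    return distance_transform_edt(forest == 0).mean() * cell_size_km

d90 = mean_distance_to_forest(forest_1990)
d00 = mean_distance_to_forest(forest_2000)
print(f"mean distance to forest: 1990 = {d90:.2f} km, 2000 = {d00:.2f} km "
      f"({100 * (d00 - d90) / d90:+.0f}% change)")
```

Removing an isolated patch, as in the toy example, raises the mean distance far more than thinning the edge of a large forest would, which is why the loss of small, remote patches dominates the attrition-distance statistic.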

Credit: William B. Greeley, US Forest Service

The study also found distance to the nearest forest is considerably greater in western forests than eastern forests.

"So if you are in the western U.S. or you are in a rural area or you are in land owned by a public entity, it could be federal, state or local, your distance to the forest is increasing much faster than the other areas," he said. "The forests are getting further away from you."

"Distances to nearest forest are also increasing much faster in less forested landscapes. This indicates that the most spatially isolated - and therefore important - forests are the ones under the most pressure," said Yang.

Credit: William B. Greeley, US Forest Service

The loss of these unique forests poses a different set of side effects, Mountrakis said, "for local climate, for biodiversity, for soil erosion. This is the major driver - we can link the loss of the isolated patches to all these environmental degradations."

Along with research into the drivers behind the loss of forests, Mountrakis expects the differing geographic distributions and differences in land ownership and urbanization levels will initiate new research and policy across forestry, ecology, social science and geography.

This work was supported by the National Urban and Community Forestry Advisory Council and the McIntire-Stennis program, U.S. Forest Service.






Contacts and sources:
 SUNY College of Environmental Science and Forestry

Citation: "Forest dynamics in the U.S. indicate disproportionate attrition in western forests, rural areas and public lands." Authors: Sheng Yang, Giorgos Mountrakis
Published: February 22, 2017 http://dx.doi.org/10.1371/journal.pone.0171383

Nanoconfinement: A Boon for Hydrogen Vehicles?

Lawrence Livermore scientists have collaborated with an interdisciplinary team of researchers including colleagues from Sandia National Laboratories to develop an efficient hydrogen storage system that could be a boon for hydrogen powered vehicles.

Hydrogen is an excellent energy carrier, but the development of lightweight solid-state materials for compact, low-pressure storage is a huge challenge.

Complex metal hydrides are a promising class of hydrogen storage materials, but their viability is usually limited by slow hydrogen uptake and release. Nanoconfinement — infiltrating the metal hydride within a matrix of another material such as carbon — can, in certain instances, help make this process faster by shortening diffusion pathways for hydrogen or by changing the thermodynamic stability of the material.


Hydrogenation forms a mixture of lithium amide and hydride (light blue) as an outer shell around a lithium nitride particle (dark blue) nanoconfined in carbon. Nanoconfinement suppresses all other intermediate phases to prevent interface formation, which has the effect of dramatically improving the hydrogen storage performance.
Credit: LLNL


However, the Livermore-Sandia team, in conjunction with collaborators from Mahidol University in Thailand and the National Institute of Standards and Technology, showed that nanoconfinement can have another, potentially more important consequence. They found that the presence of internal “nano-interfaces” within nanoconfined hydrides can alter which phases appear when the material is cycled.

The researchers examined the high-capacity lithium nitride (Li3N) hydrogen storage system under nanoconfinement. Using a combination of theoretical and experimental techniques, they showed that the pathways for the uptake and release of hydrogen were fundamentally changed by the presence of nano-interfaces, leading to dramatically faster performance and reversibility. The research appears on the cover of the Feb. 23 edition of the journal Advanced Materials Interfaces.
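For background, bulk lithium nitride is usually described in the hydrogen-storage literature as taking up hydrogen in two steps, passing through lithium imide (Li2NH), the kind of intermediate phase that the nanoconfined system is reported to avoid. The reactions below are drawn from that broader literature and are given only for orientation; they are not taken from the paper itself.

```latex
% Conventional two-step hydrogenation of bulk Li3N (background from the
% hydrogen-storage literature, not from the paper itself):
\mathrm{Li_3N + H_2 \;\rightleftharpoons\; Li_2NH + LiH}
\qquad
\mathrm{Li_2NH + H_2 \;\rightleftharpoons\; LiNH_2 + LiH}
```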

“The key is to get rid of the undesirable intermediate phases, which slow down the material’s performance as they are formed or consumed. If you can do that, then the storage capacity kinetics dramatically improve and the thermodynamic requirements to achieve full recharge become far more reasonable,” said Brandon Wood, an LLNL materials scientist and lead author of the paper. “In this material, the nano-interfaces do just that, as long as the nanoconfined particles are small enough. It’s really a new paradigm for hydrogen storage, since it means that the reactions can be changed by engineering internal microstructures.”

The Livermore researchers used a thermodynamic modeling method that goes beyond conventional descriptions to consider the contributions from the evolving solid phase boundaries as the material is hydrogenated and dehydrogenated. They showed that accounting for these contributions eliminates intermediates in nanoconfined lithium nitride, which was confirmed spectroscopically.

Beyond demonstrating nanoconfined lithium nitride as a rechargeable, high-performing hydrogen-storage material, the work establishes that proper consideration of solid–solid nanointerfaces and particle microstructure are necessary for understanding hydrogen-induced phase transitions in complex metal hydrides.

“There is a direct analogy between hydrogen storage reactions and solid-state reactions in battery electrode materials,” said Tae Wook Heo, another LLNL co-author on the study. “People have been thinking about the role of interfaces in batteries for some time, and our work suggests that some of the same strategies being pursued in the battery community could also be applied to hydrogen storage. Tailoring morphology and internal microstructure could be the best way forward for engineering materials that could meet performance targets.”

Other Livermore researchers on the study include Keith Ray and Jonathan Lee.

The research is supported through the Hydrogen Storage Materials Advanced Research Consortium of the Department of Energy Office of Energy Efficiency and Renewable Energy, Fuel Cell Technologies Office.


Contacts and sources: 
Anne M Stark
Lawrence Livermore National Laboratory 

Saturday, February 25, 2017

Is the Kid More Like Mom or Dad: Brain Cells Prefer One Parent’s Gene Over the Other’s


Many kids say they love their mom and dad equally, but there are times when even the best of them prefer one parent over the other. The same can be said for how the body’s cells treat our DNA instructions. It has long been thought that each copy - one inherited from mom and one from dad - is treated the same. A new study from scientists at the University of Utah School of Medicine shows that it is not uncommon for cells in the brain to preferentially activate one copy over the other. The finding breaks basic tenets of classic genetics and suggests new ways in which genetic mutations might cause brain disorders.

In at least one region of the newborn mouse brain, the new research shows, inequality seems to be the norm. About 85 percent of genes in the dorsal raphe nucleus, known for secreting the mood-controlling chemical serotonin, differentially activate their maternal and paternal gene copies. Ten days later in the juvenile brain, the landscape shifts, with both copies being activated equally for all but 10 percent of genes.

More than an oddity of the brain, the disparity also takes place at other sites in the body, including liver and muscle. It also occurs in humans.


Many cells in the brain express two copies of each gene, one inherited from mom and one from dad. Others express just one copy. If the single copy happens to carry a genetic mutation, it may cause the cell to become sick. The discovery from the University of Utah offers a previously undescribed nuanced view of genetics that has consequences at the cellular level.
Credit: Christopher Gregg


“We usually think of traits in terms of a whole person, or animal. We’re finding that when we look at the level of cells, genetics is much more complicated than we thought,” says Christopher Gregg, Ph.D., assistant professor of neurobiology and anatomy and senior author of the study which publishes online in Neuron on Feb. 23. “This new picture may help us understand brain disorders,” he continues.

Among genes regulated in this unorthodox way are risk factors for mental illness. In humans, a gene called DEAF1, implicated in autism and intellectual disability, shows preferential expression of one gene copy in multiple regions of the brain. A more comprehensive survey in primates, which serve as a proxy for humans, indicates the same is true for many other genes, including some linked to Huntington’s Disease, schizophrenia, attention deficit disorder, and bipolar disorder.

What the genetic imbalance could mean for our health remains to be determined, but preliminary results suggest that it could shape vulnerabilities to disease, explains Gregg. Normally, having two copies of a gene acts as a protective buffer in case one is defective. Activating a gene copy that is mutated and silencing the healthy copy - even temporarily - could be disruptive enough to cause trouble in specific cells.

Supporting the idea, Gregg’s lab found that some brain cells in transgenic mice preferentially activate mutated gene copies over healthy ones. “It has generally been assumed that there is correlation between both copies of a gene,” says Elliott Ferris, a computer scientist who co-led the study with graduate student Wei-Chao Huang. Instead, they found something unexpected. “We developed novel methods for mining big data, and discovered something new,” Huang explains.

The investigators screened thousands of genes in their study, quantifying the relative levels of activation for each maternal and paternal gene copy, and discovered that expression of the two differs for many genes. Surprised by what they saw, they developed statistical methods to rigorously test the findings’ validity and determined that the imbalances were not due to technical artifacts or genetic noise. Following up on their findings, they examined a subset of genes more closely and directly visualized imbalances between gene copies at the cellular level in the mouse and human brain.
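A common, generic way to screen for this kind of parent-of-origin imbalance is to compare the sequencing reads assigned to the maternal and paternal copies of each gene against a 50:50 expectation, for example with a binomial test. The sketch below illustrates that idea with invented counts; it is not the specific statistical pipeline used in the study.

```python
# Generic allelic-imbalance screen on invented read counts; a minimal sketch of
# the idea, not the specific statistical methods used in the study.
from scipy.stats import binomtest  # requires SciPy >= 1.7

# gene -> (reads assigned to the maternal copy, reads assigned to the paternal copy)
allele_counts = {
    "GeneA": (480, 520),   # roughly balanced expression of both copies
    "GeneB": (700, 300),   # maternal copy preferentially activated
    "GeneC": (120, 880),   # paternal copy preferentially activated
}

for gene, (maternal, paternal) in allele_counts.items():
    total = maternal + paternal
    test = binomtest(maternal, n=total, p=0.5)   # deviation from a 50:50 split
    print(f"{gene}: maternal share = {maternal / total:.2f}, p = {test.pvalue:.3g}")
```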

Results from Gregg and colleagues build on previous research, expanding on scenarios in which genes play favorites. Imprinted genes and X-linked genes are specific gene categories that differentially activate their maternal and paternal gene copies. Studies in cultured cells had also determined that some genes vary which copy they express. The results from this study, however, suggest that silencing one gene copy may be a way in which cells fine-tune their genetic program at specific times during the lifecycle of the animal, or in discrete places.

“Our new findings reveal a new landscape of diverse effects that shape the expression of maternal and paternal gene copies in the brain according to age, brain region, and tissue type,” explains Gregg. “The implication is a new view of genetics, one that starts up close.”



Contacts and sources:

Fracking Wastewater Spills Alter Microbes in West Virginia Waters

Wastewater from oil and gas operations – including fracking for shale gas – at a West Virginia site altered microbes downstream, according to a Rutgers-led study.

The study, published recently in Science of the Total Environment, showed that wastewater releases, including briny water that contained petroleum and other pollutants, altered the diversity, numbers and functions of microbes. The shifts in the microbial community indicated changes in their respiration and nutrient cycling, along with signs of stress.

The study also documented changes in antibiotic resistance in downstream sediments, but did not uncover hot spots, or areas with high levels of resistance. The findings point to the need to understand the impacts on microbial ecosystems from accidental releases or improper treatment of fracking-related wastewater. Moreover, microbial changes in sediments may have implications for the treatment and beneficial reuse of wastewater, the researchers say.


The hydraulic fracturing (fracking) water cycle includes withdrawing water, adding chemicals, injecting fracking fluids through a well to a rock formation, and pumping wastewater to the surface for disposal or reuse.

Credit: U.S. Environmental Protection Agency


“My hope is that the study could be used to start making hypotheses about the impacts of wastewater,” said Nicole Fahrenfeld, lead author of the study and assistant professor in Rutgers’ Department of Civil and Environmental Engineering. Much remains unknown about the impacts of wastewater from fracking, she added.

“I do think we’re at the beginning of seeing what the impacts could be,” said Fahrenfeld, who works in the School of Engineering. “I want to learn about the real risks and focus our efforts on what matters in the environment.”

Underground reservoirs of oil and natural gas contain water that is naturally occurring or injected to boost production, according to the U.S. Geological Survey (USGS), whose scientists contributed to the study. During fracking, a fracturing fluid and a solid material are injected into an underground reservoir under very high pressure, creating fractures to increase the porosity and permeability of rocks.


Nicole Fahrenfeld, assistant professor in the Department of Civil and Environmental Engineering. 
Photo: Nick Romanenko


Liquid pumped to the surface is usually a mixture of the injected fluids with briny water from the reservoir. It can contain dissolved salt, petroleum and other organic compounds, suspended solids, trace elements, bacteria, naturally occurring radioactive materials and anything injected into wells, the USGS says. Such water is recycled, treated and discharged; spread on roads, evaporated or infiltrated; or injected into deep wells.

Fracking for natural gas and oil and its wastewater has increased dramatically in recent years. And that could overwhelm local infrastructure and strain many parts of the post-fracking water cycle, including the storage, treatment, reuse, transportation or disposal of the wastewater, according to the USGS.

For the Rutgers-USGS study, water and sediment samples were collected from tributaries of Wolf Creek in West Virginia in June 2014, including an unnamed tributary that runs through an underground injection control facility.

The facility includes a disposal well, which injects wastewater to 2,600 feet below the surface, brine storage tanks, an access road and two lined ponds (now-closed) that were used to temporarily store wastewater to allow particles to settle before injection.

Water samples were shipped to Rutgers, where they were analyzed. Sediment samples were analyzed at the Waksman Genomics Core Facility at Rutgers. The study generated a rich dataset from metagenomic sequencing, which pinpoints the genes in entire microbial communities, Fahrenfeld noted.

“The results showed shifts in the microbial community and antibiotic resistance, but this site doesn’t appear to be a new hot spot for antibiotic resistance,” she said. The use of biocides in some fracturing fluids raised the question of whether this type of wastewater could serve as an environment that is favorable for increasing antimicrobial resistance. Antimicrobial resistance detected in these sediments did not rise to the levels found in municipal wastewater – an important environmental source of antimicrobial resistance along with agricultural sites.

Antibiotics and similar drugs have been used so widely and for so long that the microbes the antibiotics are designed to kill have adapted to them, making the drugs less effective, according to the U.S. Centers for Disease Control and Prevention. At least 2 million people become infected with antibiotic-resistant bacteria each year in the U.S., with at least 23,000 of them dying from the infections.

“We have this really nice dataset with all the genes and all the microbes that were at the site,” Fahrenfeld said. “We hope to apply some of these techniques to other environmental systems.”

Study authors include Rutgers undergraduate Hannah Delos Reyes and Rutgers doctoral candidate Alessia Eramo. Other authors include Denise M. Akob, Adam C. Mumford and Isabelle M. Cozzarelli of the U.S. Geological Survey’s National Research Program. Mumford earned a doctorate in microbiology at Rutgers.


Contacts and sources:
Todd B. Bates
Rutgers University

700% Surge in Infections Caused by Antibiotic Resistant Bacteria: A Fast Growing Problem for Kids Too

The adage that kids are growing up too fast these days has yet another locus of applicability.

In a new, first-of-its-kind study, researchers from Case Western Reserve University School of Medicine have found a 700-percent surge in infections caused by bacteria from the Enterobacteriaceae family resistant to multiple kinds of antibiotics among children in the US. These antibiotic resistant infections are in turn linked to longer hospital stays and potentially greater risk of death.

The research, published in the March issue of the Journal of the Pediatric Infectious Diseases Society, is the first known effort to comprehensively examine the problem of multi-drug resistant infections among patients under 18 admitted to US children’s hospitals with Enterobacteriaceae infections. Earlier studies focused mainly on adults, while some looked at young people in more limited geographical areas, such as individual hospitals or cities, or used more limited surveillance data.

Credit: Penn State

“There is a clear and alarming upswing throughout this country of antibiotic resistant Enterobacteriaceae infections in kids and teens,” said lead author Sharon B. Meropol, MD, PhD, a pediatrician and epidemiologist at Case Western Reserve University School of Medicine and Rainbow Babies and Children’s Hospital in Cleveland. “This makes it harder to effectively treat our patients’ infections. The problem is compounded because there are fewer antibiotics approved for young people than adults to begin with. Health care providers have to make sure we only prescribe antibiotics when they’re really needed. It’s also essential to stop using antibiotics in healthy agricultural animals.”

In the retrospective study, Meropol and co-authors Allison A. Haupt, MSPH, and Sara M. Debanne, PhD, both from Case Western Reserve University School of Medicine, analyzed medical data from nearly 94,000 patients under the age of 18 years diagnosed with Enterobacteriaceae-associated infections at 48 children’s hospitals throughout the US. The average age was 4.1 years. Enterobacteriaceae are a family of bacteria; some types are harmless, but they also include such pathogens as Salmonella and Escherichia coli; Enterobacteriaceae are responsible for a rising proportion of serious bacterial infections in children.



The researchers found that the share of these infections resistant to multiple antibiotics rose from 0.2 percent in 2007 to 1.5 percent in 2015, a seven-fold-plus increase in a short, eight-year span. Children with other health problems were more likely to have the infections while there were no overall differences based on sex or insurance coverage. The yearly number of discharges with Enterobacteriaceae-associated infections remained relatively stable over the course of the study years.

In a key finding, more than 75 percent of the antibiotic-resistant infections were already present when the young people were admitted to the hospital, upending previous findings that the infections were mostly picked up in the hospital. “This suggests that the resistant bacteria are now more common in many communities,” said Meropol. For reasons that are unclear, older children and those living in the Western US were more likely to have the infections.

The investigators also found that young people with antibiotic-resistant infections stayed in the hospital 20 percent longer than those whose infections could be addressed by antibiotics. Additionally, there was a greater—but not statistically significant—risk of death among pediatric patients infected with the resistant bacterial strains.

Previous studies have shown that the problem is even worse elsewhere in the world, with an 11.4 percent global rate of antibiotic-resistant Enterobacteriaceae infections among young people, including 27 percent in Asia and the Pacific, 8.8 percent in Latin America, and 2.5 percent in Europe.

“Escalating antibiotic resistance limits our treatment options, worsens clinical results, and is a growing global public health crisis,” said Meropol. “What’s more, the development of new antibacterial drugs, especially ones appropriate for children, remains essentially stagnant. We need to stop over-using antibiotics in animals and humans and develop new ones if we want to stop a bad problem from getting worse.”

This work was supported by the National Institute for Allergy and Infectious Diseases at the National Institutes of Health [K23AI097284-01A1].



Contacts and sources: 

Human Brains Could Evolve to Require Very Little Sleep, Just Like The Cavefish


We all do it; we all need it – humans and animals alike. Sleep is an essential behavior shared by nearly all animals and disruption of this process is associated with an array of physiological and behavioral deficits. Although there are so many factors contributing to sleep loss, very little is known about the neural basis for interactions between sleep and sensory processing.

Neuroscientists at Florida Atlantic University have been studying Mexican cavefish to provide insight into the evolutionary mechanisms regulating sleep loss and the relationship between sensory processing and sleep. They are investigating how sleep evolves and using this species as a model to understand how human brains could evolve to require very little sleep, just like the cavefish.

The Pachón cavefish live in deep, dark caves in central Mexico, with little food, oxygen or light, and have lost their eyes completely. Because of their harsh environment, they have had to get creative to survive, evolving adaptations that include the suppression of sleep. They are able to find their way around by means of their lateral lines, which are highly sensitive to fluctuating water pressure.

Credit: Pavel Masek


In their latest study, just published in the Journal of Experimental Biology, findings suggest that an inability to block out your environment is one of the ways to lose sleep. The study also provides a model for understanding how the brain’s sensory systems modulate sleep and sheds light into the evolution of the significant differences in sleep duration observed throughout the animal kingdom.

“Animals have dramatic differences in sleep with some sleeping as much as 20 hours and others as little as two hours and no one knows why these dramatic differences in sleep exist,” said Alex C. Keene, Ph.D., corresponding author of the study and an associate professor in the Department of Biological Sciences in FAU’s Charles E. Schmidt College of Science. “Our study suggests that differences in sensory systems may contribute to this sleep variability. It is possible that evolution drives sensory changes and changes in sleep are a secondary consequence, or that evolution selects for changes in sensory processing in order to change sleep.”


Credit: FAU Science Jupiter

Because the cave environment differs dramatically from the rivers inhabited by surface fish, cavefish have evolved robust differences in foraging and feeding behavior, raising the possibility that differences in nutrient availability contribute to the evolution of sleep loss in cave populations. Furthermore, multiple cave populations have evolved substantial reductions in sleep duration and enhanced sensory systems, suggesting that sleep loss is evolutionary and functionally associated with sensory and metabolic changes.

Key findings of the study show that the evolution of enhanced sensory capabilities contributes to sleep loss in cavefish, and that sleep in cavefish is plastic and may be regulated by seasonal changes in food availability.

There are more than 29 different populations of cavefish and many of them evolved independently. This enabled the researchers to determine whether evolution occurs through the same or different mechanisms. The Pachón cavefish, the population they studied, appear to have lost sleep due to increased sensory input, but not the other populations.

“We were surprised to find that there are multiple independent mechanisms regulating sleep loss in different cave populations and this can be a significant strength moving forward,” said James Jaggard, first author and a graduate student at FAU working with Keene. “This means that there are many different ways to lose sleep or evolve a brain that sleeps less and we are going to search to identify these mechanisms.”

Keene, Jaggard and their colleagues use Mexican cavefish because they are a powerful system for examining trait evolution. In earlier research studies, they observed the evolutionary convergence on sleep loss in these fish. However, the neural mechanisms underlying this dramatic behavioral shift remained elusive. Since they already knew that cavefish had evolved a highly sensitive lateral line (the groups of sensory neurons that line the body of the fish), they wondered whether an increase in sensory input from these neurons contributes to sleep loss.

For the study, the researchers recorded the cavefish under infrared light set up in individual tanks. They used automated video-tracking software that told them when the fish were inactive, and they defined sleep as one minute of immobility because it correlated with changes in arousal threshold.
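The one-minute immobility criterion is simple to apply to a tracking trace. The sketch below scores total sleep from a synthetic per-second immobility record using that rule; the data, sampling rate and function names are assumptions for illustration, not the lab's actual tracking software.

```python
# Illustrative sketch: score sleep from a per-second immobility trace using the
# "at least one minute of continuous immobility" rule described above.
# The trace is synthetic; the real data come from automated video tracking.
import numpy as np

def total_sleep_seconds(immobile, min_bout_s=60):
    """immobile: 1D array of 0/1 flags sampled once per second."""
    sleep, run = 0, 0
    for flag in immobile:
        if flag:
            run += 1
        else:
            if run >= min_bout_s:
                sleep += run
            run = 0
    if run >= min_bout_s:      # close out a bout that runs to the end of the trace
        sleep += run
    return sleep

# One synthetic hour: mostly active, with quiescent bouts of 120 s, 30 s and 300 s.
trace = np.zeros(3600, dtype=int)
trace[300:420] = 1     # 120 s  -> counted as sleep
trace[1000:1030] = 1   # 30 s   -> too short, ignored
trace[2000:2300] = 1   # 300 s  -> counted as sleep

print(f"sleep scored in this hour: {total_sleep_seconds(trace)} s")   # 420 s
```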

“Humans block out sensory cues when we enter a sleep-like state,” said Keene. “For example, we close our eyes and there are mechanisms in the brain to reduce auditory input. This is one of the reasons why a sensory stimuli like someone entering a room is less likely to get your attention if you are asleep. Our thinking was that cavefish have to some degree lost this ability and this drives sleep loss.”

The researchers recently generated transgenic fish lines and they will be able to image brain activity and genetically map anatomical differences between the Mexican cavefish populations.

This study is supported by a grant from the National Science Foundation (1601004).


Contacts and sources:
Gisele Galoustian
Florida Atlantic University

Cat Ownership Not Linked to Mental Health Problems

New UCL research has found no link between cat ownership and psychotic symptoms, casting doubt on previous suggestions that people who grew up with cats are at higher risk of mental illness.

Recent research has suggested that cat ownership might contribute to some mental disorders, because cats are the primary host of the common parasite Toxoplasma gondii (T. gondii), itself linked to mental health problems such as schizophrenia.

"The message for cat owners is clear: there is no evidence that cats pose a risk to children's mental health," says lead author Dr Francesca Solmi (UCL Psychiatry). "In our study, initial unadjusted analyses suggested a small link between cat ownership and psychotic symptoms at age 13, but this turned out to be due to other factors. Once we controlled for factors such as household over-crowding and socioeconomic status, the data showed that cats were not to blame. Previous studies reporting links between cat ownership and psychosis simply failed to adequately control for other possible explanations."

Credit: UCL


The new study, published in Psychological Medicine, suggests that cat ownership in pregnancy and childhood does not play a role in the development of psychotic symptoms during adolescence. The study looked at nearly 5,000 people born in 1991 or 1992 who were followed up until the age of 18. The researchers had data on whether the household had cats while the mother was pregnant and while the children were growing up.

The new study was significantly more reliable than previous research in this area because the team followed families regularly for almost 20 years. This is much more dependable than the methods used in previous studies, which asked people with and without mental health problems to remember details about their childhood; such retrospective accounts are more vulnerable to errors in recall, which can lead to spurious findings.

Previous studies were also relatively small and had significant gaps in the data, whereas the new study looked at a large population and was able to account for missing data. The new study was not able to measure T. gondii exposure directly, but the results suggest that if the parasite does cause psychiatric problems, then cat ownership does not significantly increase exposure.
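
The kind of confounder adjustment the authors describe can be sketched with a toy example. The code below is purely illustrative: the data are synthetic and the variable names (cat, overcrowding, ses) are hypothetical stand-ins rather than the study's actual measures. It simply shows how an apparent "cat effect" in an unadjusted logistic regression can shrink once plausible confounders are included.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
overcrowding = rng.binomial(1, 0.3, n)                     # hypothetical confounder
ses = rng.normal(size=n)                                   # hypothetical socioeconomic score
cat = rng.binomial(1, 0.4 + 0.1 * overcrowding)            # ownership correlates with the confounder
risk = 1 / (1 + np.exp(-(-2.0 + 0.5 * overcrowding - 0.3 * ses)))
symptoms = rng.binomial(1, risk)                           # outcome driven by confounders, not by cats

unadjusted = sm.Logit(symptoms, sm.add_constant(cat)).fit(disp=0)
adjusted = sm.Logit(symptoms, sm.add_constant(np.column_stack([cat, overcrowding, ses]))).fit(disp=0)

print(unadjusted.params[1])   # apparent 'cat' coefficient, inflated by the confounding
print(adjusted.params[1])     # shrinks toward zero once the confounders are controlled for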


Six paintings of cats by Louis Wain, showing an increasing degree of abstraction attributed by some to his schizophrenia.
Credit: Louis Wain


"Our study suggests that cat ownership during pregnancy or in early childhood does not pose a direct risk for later psychotic symptoms," explains senior author Dr James Kirkbride (UCL Psychiatry). "However, there is good evidence that T. Gondii exposure during pregnancy can lead to serious birth defects and other health problems in children. As such, we recommend that pregnant women should continue to follow advice not to handle soiled cat litter in case it contains T. Gondii."



Contacts and sources:
Harry Dayantis
University College London

Simple Rule Predicts an Ice Age’s End

A simple rule can accurately predict when Earth’s climate warms out of an ice age, according to new research led by UCL.

In a new study published in Nature, researchers from UCL, University of Cambridge and University of Louvain have combined existing ideas to solve the problem of which solar energy peaks in the last 2.6 million years led to the melting of the ice sheets and the start of a warm period.

During this interval, Earth’s climate has alternated between cold (glacial) and warm (interglacial) periods. In the cold times, ice sheets advanced over large parts of North America and northern Europe. In the warm periods like today, the ice sheets retreated completely.


The Antarctic ice sheet 

Credit: Stephen Hudson via Wikimedia Commons


It has long been realised that these cycles were paced by astronomical changes in the Earth’s orbit around the Sun and in the tilt of its axis, which change the amount of solar energy available to melt ice at high northern latitudes in summer.

However, of the 110 incoming solar energy peaks (one occurring about every 21,000 years), only 50 led to complete melting of the ice sheets. Finding a way to translate the astronomical changes into the sequence of interglacials had previously proved elusive.

Professor Chronis Tzedakis (UCL Geography) said: “The basic idea is that there is a threshold for the amount of energy reaching high northern latitudes in summer. Above that threshold, the ice retreats completely and we enter an interglacial.”

From 2.6 to 1 million years ago, the threshold was reached roughly every 41,000 years, and this predicts almost perfectly when interglacials started and the ice sheets disappeared. Professor Eric Wolff (University of Cambridge) said: “Simply put, every second solar energy peak occurs when the Earth’s axis is more inclined, boosting the total energy at high latitudes above the threshold.”

Somewhere around a million years ago, the threshold rose, so that the ice sheets kept growing for longer than 41,000 years. However, as a glacial period lengthens, ice sheets become larger, but also more unstable.

The researchers combined these observations into a simple model, based only on solar energy and the waiting time since the previous interglacial, which was able to predict all the interglacial onsets of the last million years, occurring roughly every 100,000 years.
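
As a rough illustration of the kind of rule described here, the sketch below encodes a threshold that relaxes with the time elapsed since the previous interglacial. The parameter values and function names are placeholders chosen for illustration, not the calibrated values from the Nature paper.

def deglaciates(peak_energy, kyr_since_last_interglacial,
                base_threshold=60.0, relaxation_per_kyr=0.1):
    """Return True if an insolation peak should trigger complete deglaciation."""
    # The longer the ice sheets have been growing, the lower the bar becomes,
    # reflecting their increasing size and instability.
    effective_threshold = base_threshold - relaxation_per_kyr * kyr_since_last_interglacial
    return peak_energy >= effective_threshold

# The same modest peak fails early in a glacial period but succeeds late in one.
print(deglaciates(peak_energy=55.0, kyr_since_last_interglacial=20))    # False
print(deglaciates(peak_energy=55.0, kyr_since_last_interglacial=100))   # True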

Dr Takahito Mitsui (University of Louvain) said: “The next step is to understand why the energy threshold rose around a million years ago – one idea is that this was due to a decline in the concentration of CO2, and this needs to be tested.”

The results explain why we have been in a warm period for the last 11,000 years: despite the weak increase in solar energy, ice sheets retreated completely during our current interglacial because of the very long waiting time since the previous interglacial and the accumulated instability of ice sheets.

Intriguingly, the researchers found that sometimes the amount of energy was very close to the threshold, so that some interglacials were aborted while others only just made it. “The threshold was only just missed 50,000 years ago. If it hadn’t been missed, then we wouldn’t have had an interglacial in the last 11,000 years,” added Professor Michel Crucifix (University of Louvain).

However, statistical analysis shows that the succession of interglacials is not chaotic: the sequence that has occurred is one among a very small set of possibilities. “Finding order among what can look like unpredictable swings in climate is aesthetically rather pleasing,” said Professor Tzedakis.




Contacts and sources:
Ruth Howells
University College London 

FuturaCorp: A.I. Will Make Us More Human by Eliminating Workplace Drudgery, Says New Research

The arrival of Artificial Intelligence (AI) in the workplace could triple productivity by automating more than 80 per cent of repetitive, process-oriented tasks - freeing human minds from tedium and enabling them to focus on creating and innovating, according to research from Goldsmiths, University of London and IPsoft.

The result will be a revolutionary shift in workplace productivity and a fundamental restructuring of work as we know it, as humans are redeployed in higher-skill roles.


The study, FuturaCorp: Artificial Intelligence & The Freedom To Be Human, paints a vision of ‘FuturaCorp’ – an idealised man + machine workplace of tomorrow.
  
Credit: IPSoft 


The research describes job roles as comprising a series of tasks. Some are repetitive and process-oriented (deterministic). Some require a human working in concert with machines (probabilistic). And some rely on the types of connections that can only be made by the human brain, from idea generation to complex problem solving (cross-functional reasoning).

The Goldsmiths team predicts that, in the near future:

• More than 80% of deterministic tasks will be done by machines
• Probabilistic tasks will be shared 50:50 by machines and humans
• But humans will still carry out 80% of all cross-functional reasoning tasks

Dr. Chris Brauer, Director of Innovation and a Senior Lecturer at Goldsmiths, University of London, says: "AI will do far more than automate existing processes. It will free our minds from process-oriented repetition, enabling a refocusing of time and capital for our most human of pursuits: innovation and creativity. So the arrival of AI in workplaces will engender entirely new, unknown possibilities for humans and what they can achieve."



The study paints an optimistic picture of the future for individuals, pointing out that previous waves of automation have led to low-skill work being replaced by new, higher-skill jobs. It predicts that the arrival of the robots in the workplace will make us more human, pointing to crucial human skills that we will need to nurture to complement our digital colleagues.

Chetan Dube, CEO and President of IPsoft, said: “AI engenders emergent individual qualities which push us to access the more complex parts of our minds. When routine work is automated, we will be able – and required – to flex our most human of skills. To do what the machines can’t, and likely never will be able to do. The future of society relies on individuals accessing higher reasoning, critical thinking and complex problem solving skills.” 
Credit: IPSoft


However, the need for rapid skill transformation could lead to a near-term skills shortage, according to the research.

The Goldsmiths team found little evidence that businesses, universities or training institutions are preparing individuals to manage these looming shifts.

Finally, the research team, in liaison with IPsoft, developed a first-of-its-kind ‘organisational readiness equation’ for business leaders to assess how well equipped their company is to take its first brave steps into an AI future. The equation scores an organisation against the utopian vision of FuturaCorp and helps leaders determine what changes need to be made to push the business model towards this ideal.

Chetan Dube concludes: “CEOs must be prepared to redefine their business in order to capitalise on the productivity potential of AI. That journey begins with fundamental change to organization structure, who they hire for which roles, and how they use the new relationship between humans and machines to maximize efficiency and innovation.”



Contacts and sources:
Oliver Fry, Goldsmiths, University of London
IPSoft

Citation: FuturaCorp: Artificial Intelligence & the Freedom to Be Human

New “Tougher-Than-Metal” Fiber-Reinforced Hydrogels

A team of Hokkaido University scientists has succeeded in creating “fiber-reinforced soft composites,” or tough hydrogels combined with woven fiber fabric. These composites are highly flexible, tougher than metals, and have a wide range of potential applications.

Efforts are currently underway around the world to create materials that are friendly to both society and the environment. Among them are composites made of different materials, which combine the merits of each component.


The newly developed fiber-reinforced hydrogel consists of polyampholyte (PA) gels and glass fiber fabric. The team theorizes that toughness is increased by dynamic ionic bonds between the fiber and hydrogels, and within the hydrogels. 
Credit: Hokkaido University


Hokkaido University researchers, led by Professor Jian Ping Gong, have focused on creating a reinforced material using hydrogels. Although such a substance has potential as a structural biomaterial, until now no hydrogel-based material reliable and strong enough for long-term use had been produced. This study was conducted as a part of the Cabinet Office’s Impulsing Paradigm Change through Disruptive Technologies Program (ImPACT).

To address the problem, the team combined hydrogels containing high levels of water with glass fiber fabric to create bendable, yet tough materials, employing the same method used to produce reinforced plastics. The team found that a combination of polyampholyte (PA) gels, a type of hydrogel they developed earlier, and glass fiber fabric with a single fiber measuring around 10μm in diameter produced a strong, tensile material. The procedure to make the material is simply to immerse the fabric in PA precursor solutions for polymerization.

In terms of the energy required to destroy them, the fiber-reinforced hydrogels developed by the team are 25 times tougher than glass fiber fabric alone and 100 times tougher than hydrogels alone. Combining the two materials produces a synergistic toughening. The team theorizes that toughness is increased by dynamic ionic bonds between the fiber and the hydrogels, and within the hydrogels, as the fiber’s toughness increases in relation to that of the hydrogels. Consequently, the newly developed hydrogels are five times tougher than carbon steel.

“The fiber-reinforced hydrogels, with a 40 percent water content, are environmentally friendly,” says Professor Gong. “The material has multiple potential applications because of its reliability, durability and flexibility. For example, in addition to fashion and manufacturing uses, it could be used for artificial ligaments and tendons, which are subject to strong load-bearing tension.” The principles used to create the toughness in the present study can also be applied to other soft components, such as rubber.



Contacts and sources:
Professor Jian Ping Gong
Graduate School of Life Science
Hokkaido University
 

Measuring the True Size of Gods and Giants

Archeological artefacts, such as the Jupiter Column of Ladenburg, a town with an impressive Roman history, hold many as yet undiscovered secrets. Although the column was discovered in 1973, the history of the more than 1,800-year-old monument is still unclear.

The HEiKA MUSIEKE project is aimed at uncovering some of these secrets and making the cultural heritage of Ladenburg visible and perceptible. For this purpose, modern digitization techniques of Karlsruhe Institute of Technology (KIT) are used.

“Contact-free digitization of objects opens up new approaches to research,” Dr. Thomas Vögtle of KIT’s Institute of Photogrammetry and Remote Sensing says. The Jupiter Column is about four meters high and combines Roman and Germanic symbols and conceptions. The figures on the column represent the battle between the Roman god Jupiter and a giant. The texture of the column and the equestrian figure, however, appear to follow Celtic tradition. “The digital model allows archeologists and laymen to experience the artefact in an entirely new way.”


Digitization of the Jupiter Column makes this cultural heritage perceptible by both archaeologists and laymen. 
Photo: KIT/IPEF


To model the three-dimensional structure of the column on the computer, the KIT team uses a professional, commercially available digital single-lens reflex camera with a resolution of 36 megapixels and conventional illumination technology. “Our hardware is robust and mobile, so we can collect our data easily, rapidly, and at low cost anywhere,” Vögtle explains. 

On a single working day, the team took about 800 photos of the column from all perspectives. On the computer, characteristic features of the column were identified and linked across the different images, and the information from the two-dimensional photos was processed to yield a photorealistic, three-dimensional model. Using this model, structures that are hardly visible to the naked eye can be examined. “The computer model is then the basis for the archeologists’ further work.” 
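
As an illustration of the feature-matching step behind this kind of reconstruction, the sketch below uses the OpenCV library to find corresponding points between two photographs. The file names are placeholders, and the KIT team's actual software pipeline is not specified in the article.

import cv2

img1 = cv2.imread("column_view_001.jpg", cv2.IMREAD_GRAYSCALE)   # two overlapping photos of the column
img2 = cv2.imread("column_view_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)                  # detect characteristic features in each image
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A structure-from-motion solver would triangulate such correspondences,
# accumulated across hundreds of photos, into the photorealistic 3D model.
print(f"{len(matches)} candidate correspondences between the two photos")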

“Digital objects may also provide laymen with a new experience of cultural heritage,” Dr. Ralf Schneider of ZAK | Center for Cultural and General Studies of KIT says. He coordinates the HEiKA-MUSIEKE – Multidimensional Perceptibility of Cultural Heritage project. Large parts of our cultural heritage have long since faded from public attention. With the help of digital methods, cultural heritage can be acquired, analyzed, and presented to a broader public in a new way, in a context that laymen can also understand. 


The Jupiter Column from Ladenburg. 
Photo: KIT/IPF


The MUSIEKE project combines archeology, remote sensing, forensic computer science, geoinformatics, and applied cultural science to make cultural heritage perceptible. Apart from the digitization of artefacts, it also covers the generation of databases with geoinformation or production of digital maps of various historic stages of settlements and cities.

Vögtle normally uses photogrammetry and digitization methods for technical purposes. Based on aerial photos, he determines the orientation of roofs in cities to find out whether they are suitable for the installation of solar facilities. In industrial production, camera images are used to check whether a part was manufactured to the required accuracy and can be used in the next production stage, or needs to be adjusted. Or the progress of construction of an underground station can be compared with the plan. “In production and in the construction sector in particular, objects have to be measured in a contact-free, automatic, and rapid way. Cameras and digitization are very valuable tools for this purpose,” Vögtle says.



Contacts and sources:
Kosta Schinarakis

Darwin’s “Abominable Mystery”: Where Do Flowers Come From? Researchers Find Clues

The mystery of the origin of flowering plants has been partially solved thanks to a team of French researchers.

Their discovery, published in the journal New Phytologist on February 24, 2017, sheds light on a question that much intrigued Darwin: the appearance of a structure as complex as the flower over the course of evolution.  The team was made up of researchers from the Laboratoire de Physiologie Cellulaire et Végétale (CNRS/Inra/CEA/Université Grenoble Alpes), in collaboration with the Reproduction et Développement des Plantes laboratory (CNRS/ENS Lyon/Inra/Université Claude Bernard Lyon 1) and Kew Gardens (UK).

Terrestrial flora is today dominated by flowering plants. They provide our food and contribute color to the plant world. But they have not always existed. While plants colonized the land over 400 million years ago, flowering plants appeared only 150 million years ago. They were directly preceded by a group known as the gymnosperms, whose mode of reproduction is more rudimentary and whose modern-day representatives include conifers.


Detail of a Welwitschia mirabilis plant showing its two leaves and male cones. 
Credit: Michael W. Frohlich 


Darwin long pondered the origin and rapid diversification of flowering plants, describing them as an “abominable mystery”. In comparison with gymnosperms, which possess rather rudimentary male and female cones (like the pine cone), flowering plants present several innovations: the flower contains the male organs (stamens) and the female organs (pistil), surrounded by petals and sepals, while the ovules, instead of being naked, are protected within the pistil.


A female Welwitschia mirabilis plant in its natural environment in the desert of Namibia. 
Credit: Stephen G. Weller & Ann K. Sakai


How was nature able to invent the flower, a structure so different from that of cones? The team led by François Parcy, a CNRS senior researcher at the Cell and Plant Physiology Laboratory (CNRS/Inra/CEA/Université Grenoble Alpes), has just provided part of the answer. To do so, the researchers studied a rather original gymnosperm called Welwitschia mirabilis. This plant, which can live for more than a millennium, grows in the extreme conditions of the deserts of Namibia and Angola, and, like other gymnosperms, possesses separate male and female cones.


Close-up on male cones, on which pollen can be seen. 

Credit: Michael W. Frohlich


What is exceptional is that the male cones possess a few sterile ovules and nectar, which indicates a failed attempt to invent the bisexual flower. Yet in this plant (as well as in certain conifers), the researchers found genes similar to those responsible for the formation of flowers, organized according to the same hierarchy (with the activation of one gene triggering the next, and so on).

The fact that a similar gene cascade has been found in flowering plants and their gymnosperm cousins indicates that it was inherited from their common ancestor. This mechanism did not have to be invented at the time of the origin of the flower: it was simply inherited and reused by the plant, a process that is often at work in evolution.
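
A toy sketch can make the idea of such a hierarchy concrete. The snippet below simulates an ordered chain in which switching on one regulator activates everything downstream of it; the gene names are simplified labels inspired by the study's title (LEAFY and B-gene homologs), and the model is purely illustrative, not a description of the actual Welwitschia network.

# Ordered chain of regulators, upstream first (illustrative labels only).
cascade = ["LEAFY-like", "B-gene homolog", "downstream target"]

def run_cascade(genes, first_active):
    """Return the set of genes switched on once `first_active` is triggered."""
    active, triggered = set(), False
    for gene in genes:
        if gene == first_active:
            triggered = True
        if triggered:
            active.add(gene)       # each activated gene turns on the next in line
    return active

print(run_cascade(cascade, "LEAFY-like"))       # the whole chain switches on
print(run_cascade(cascade, "B-gene homolog"))   # only the downstream part does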

The study of the current biodiversity of plants thus enables us to go back in time and gradually sketch the genetic portrait of the common ancestor of a large proportion of modern-day flowers. The team is continuing to study other traits to better understand how the first flower emerged.



Contacts and sources:
CNRS (Délégation Paris Michel-Ange)


Citation: A link between LEAFY and B-gene homologs in Welwitschia mirabilis sheds light on ancestral mechanisms prefiguring floral development, Edwige Moyroud, Marie Monniaux, Emmanuel Thévenon, Renaud Dumas, Charles P. Scutt, Michael W. Frohlich, François Parcy. New Phytologist, 24 February 2017. DOI: 10.1111/nph.14483