Unseen Is Free


Saturday, December 31, 2016

Gut Microorganisms Affect Our Physiology

Researchers have found evidence that could shed new light on the complex community of trillions of microorganisms living in all our guts, and how they interact with our bodies.

Scientists at the University of Exeter Medical School and University of Zaragoza in Spain studied a protein known as TLR2, a critical detector of the microbiota found in the intestine. They found that it regulates levels of serotonin - a neurotransmitter which carries messages to the brain, and is also found in the gut, where it regulates our bowel routines.

The research, carried out in cell cultures and verified in mice, provides strong evidence that the microbiota can influence human physiology by modulating serotonin transporter activity. The serotonin transporter is a target in numerous diseases, and it appears that the microbiota living in our guts can act on this transporter and thereby control our serotonin levels.

Serotonin transporter expression (marked in brown) in human colon.
Credit: University of Exeter 


The finding, published in PLOS ONE, comes as scientists across the world are working to understand the complicated interactions between the "invisible world" of the microbiota in our bodies and the impact they have on our health and even our moods. Recently, scientists in California found evidence that the bacteria in the gut play a role in causing Parkinson's Disease.

It may also help explain how the microbiota in our guts affect our physiology. Inflammatory bowel disease is thought to be triggered when TLR2 is not functioning properly, but so far the mechanisms behind this have not been fully understood. This study aimed to further that understanding, and was supported by the Foundation for the Study of Inflammatory Bowel Diseases in Aragón (ARAINF), in Spain.

Dr Eva Latorre, a postdoctoral researcher at the University of Exeter Medical School, said the new finding helped to further understanding in a fast-growing research area. She said: "This paper has concluded that the protein TLR2 alters the availability of serotonin, which is important in a range of conditions from depression to inflammatory bowel disease. It is early days in this research though. We need to understand much more about the relationship between the microbiota in our guts and how they interact, before we can hope to harness effective new treatments."

The research team examined human cells in a model of the intestine in the laboratory, looking at how they express proteins and RNA - activities which regulate how they behave. They found that TLR2 controls serotonin transporter - obtaining the same result in studies on mice.

Principal investigator of this study, Professor José E Mesonero, at the University of Zaragoza, said: "This paper opens our minds about the complex universe of this forgotten organ: the microbiome. We have concluded that TLR2 not only can detect microbiota, but also modulate serotonin transport, one of the crucial mechanisms in neurological and inflammatory diseases. Much has yet to be studied, but this work can improve our understanding of the connection between gut and brain through the microbiota."





Contacts and sources:
Louise Vennells
University of Exeter

The paper, called ‘Intestinal serotonin transporter inhibition by Toll-like receptor 2 activation. A feedback modulation’, is published in PLOS ONE, by Eva Latorre, Elena Layunta, Laura Grasa, Marta Castro, Julián Pardo, Fernando Gomollón, Ana I. Alcalde and José E. Mesonero.

Millions of Tons of Food Could Be Saved with Better Logistics

Each year, around 88 million tonnes of food is discarded in the EU. This is something that Kristina Liljestrand, researcher at Chalmers University of Technology, wants to do something about. She is now giving companies in the food supply chain specific tools that can reduce both food waste and the environmental impact of food transport.

It is hard to grasp the true scale of food waste in Europe. In 2012, the costs associated with food waste in the EU were estimated at around 143 billion euros.

“The amount of food that is thrown away nowadays is incredible. Most food waste comes from consumers, but the amount lost in the logistics systems comes in a close second. By tweaking the logistics systems, we can ensure that the food maintains good quality and lasts as long as possible when it reaches the store,” says Kristina Liljestrand.

Kristina Liljestrand's research area is green food logistics. 

Photo: Caroline Örmgård

This is where Kristina Liljestrand's research comes into play. She holds a PhD from Chalmers University of Technology in Sweden. In recent years, she has figured out how companies in the food supply chain can work to reduce their environmental impact in terms of both food waste and emissions from transport.

Her work is unique in many ways, since using logistics improvement actions to combat the waste problem is a relatively unexplored area. There is no overview of the ways companies in the supply chain can reduce waste – but this is something that Liljestrand delivers in her doctoral thesis.

“The logistics systems are what bind everything together, from production of the food products to the products sitting on the store shelves. We need to understand how to work here to reduce food waste,” she says.

Through an extensive study among Swedish producers, wholesalers and retailers, she has identified nine improvement actions.

“I describe the improvement actions, the logistics activities, and what players are involved. The compilation can be seen as a buffet for those who want to work to reduce food waste,” she says.

An important conclusion is that collaboration throughout the food supply chain is crucial.

“Several stages of the food chain are involved when it comes to waste, making it hard for a single company working alone to reduce it. Collaboration is necessary to create effective systems that span from beginning to end so that the food products reach the stores in time,” she says.

In the second part of her research, Liljestrand reviewed how the environmental impact of transport in the food logistics system can be reduced. By looking at aspects such as load factor (how well the space in/on pallets, crates and trucks is utilized) and the proportion of intermodal transports (where road transport is combined with rail or sea transport), she identified which shipments are most effective to work with, and the best way of doing this.
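As a rough illustration of the quantities involved, the short Python sketch below computes a load factor and an approximate emission figure for a few shipments. The shipment data and emission factors are made-up assumptions for illustration only, not values from Liljestrand's thesis.

    # Hedged, illustrative calculation: "load factor" is taken here as used
    # capacity divided by available capacity per shipment; the data and the
    # emission factors are invented for the example.
    shipments = [
        # (shipment id, pallets loaded, pallet capacity, tonne-km, intermodal?)
        ("A", 18, 33, 12_000, False),
        ("B", 30, 33,  8_500, True),
        ("C", 11, 33, 20_000, False),
    ]

    ROAD_G_CO2_PER_TKM = 62        # assumed emission factor (g CO2 per tonne-km)
    INTERMODAL_G_CO2_PER_TKM = 22  # assumed emission factor for rail/sea legs

    for sid, loaded, capacity, tkm, intermodal in shipments:
        load_factor = loaded / capacity
        factor = INTERMODAL_G_CO2_PER_TKM if intermodal else ROAD_G_CO2_PER_TKM
        emissions_kg = factor * tkm / 1000
        print(f"shipment {sid}: load factor {load_factor:.0%}, "
              f"~{emissions_kg:.0f} kg CO2")

In this toy example, the half-empty road shipments stand out both on load factor and on emissions, which is the kind of prioritization the frameworks are meant to support.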

This resulted in two frameworks that provide great help in the quest to reduce transport emissions.

“Many logistics systems are extremely large and complex, and it can be hard to know where to begin. The frameworks that I developed give companies tools that enable them to see what factors in their logistics systems affect transport emissions,” she says.

Liljestrand has also incorporated an economic perspective: her research shows what savings can be made through the various measures. One thing is clear – there is money to be made by increasing the load factor and focusing more on intermodal transport.

“If you work to reduce environmental impact, you often also reduce your costs,” she says.

Facts: Food waste in EU:

In 2012, the estimated amount of food waste in the EU was 88 million tonnes (including both edible food and inedible parts associated with food). This equates to 173 kilograms of food waste per person, and it means that we are wasting about 20 percent of the total food produced.

The costs associated with food waste in the EU in 2012 were estimated at around 143 billion euros.
Households contribute the most to food waste. In 2012, households accounted for 47 million tonnes – or 53 percent – of the total amount of food waste. But there is also a lot of food waste in the logistics chain on the way to consumers.

The processing sector, together with the wholesale and retail sector, accounted for 24 percent of the total amount of food waste in 2012.

Source: The report Estimates of European food waste levels (2016), within the EU Fusions project (Food Use for Social Innovation by Optimizing Waste Prevention Strategies).



Contacts and sources:
Chalmers University of Technology

Fossil Fuel Formation: Key to Atmosphere’s Oxygen?

For the development of animals, nothing — with the exception of DNA — may be more important than oxygen in the atmosphere.

Oxygen enables the chemical reactions that animals use to get energy from stored carbohydrates — from food. So it may be no coincidence that animals appeared and evolved during the “Cambrian explosion,” which coincided with a spike in atmospheric oxygen roughly 500 million years ago.

It was during the Cambrian explosion that most of the current animal designs appeared.

In green plants, photosynthesis separates carbon dioxide into molecular oxygen (which is released to the atmosphere), and carbon (which is stored in carbohydrates).

But photosynthesis had already been around for at least 2.5 billion years. So what accounted for the sudden spike in oxygen during the Cambrian?

This black shale, formed 450 million years ago, contains fossils of trilobites and other organic material that, by removing carbon from Earth's surface, helped support increases in oxygen in the atmosphere.

Credit: Jon Husson and Shanan Peters/UW-Madison


A study now online in the February issue of Earth and Planetary Science Letters links the rise in oxygen to a rapid increase in the burial of sediment containing large amounts of carbon-rich organic matter. The key, says study co-author Shanan Peters, a professor of geoscience at the University of Wisconsin-Madison, is to recognize that sediment storage blocks the oxidation of carbon.

Without burial, this oxidation reaction causes dead plant material on Earth’s surface to burn. That causes the carbon it contains, which originated in the atmosphere, to bond with oxygen to form carbon dioxide. And for oxygen to build up in our atmosphere, plant organic matter must be protected from oxidation.

And that’s exactly what happens when organic matter — the raw material of coal, oil and natural gas — is buried through geologic processes.

To make this case, Peters and his postdoctoral fellow Jon Husson mined a unique data set called Macrostrat, an accumulation of geologic information on North America whose construction Peters has masterminded for 10 years.

The parallel graphs of oxygen in the atmosphere and sediment burial, based on the formation of sedimentary rock, indicate a relationship between oxygen and sediment. Both graphs show a smaller peak at 2.3 billion years ago and a larger one about 500 million years ago.

“It’s a correlation, but our argument is that there are mechanistic connections between geology and the history of atmospheric oxygen,” Husson says. “When you store sediment, it contains organic matter that was formed by photosynthesis, which converted carbon dioxide into biomass and released oxygen into the atmosphere. Burial removes the carbon from Earth’s surface, preventing it from bonding with molecular oxygen pulled from the atmosphere.”
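For a sense of the stoichiometry behind this argument, the short sketch below assumes the textbook photosynthesis reaction (CO2 + H2O -> CH2O + O2, one molecule of O2 released per carbon atom fixed) and a made-up burial figure; it simply converts a mass of buried organic carbon into the mass of oxygen spared from re-oxidation. The numbers are illustrative, not from the study.

    # Back-of-the-envelope sketch: one mole of O2 is left in the atmosphere for
    # every mole of organic carbon buried and protected from oxidation.
    MOLAR_MASS_C = 12.01    # g/mol
    MOLAR_MASS_O2 = 32.00   # g/mol

    buried_carbon_gt = 1.0  # hypothetical: gigatonnes of organic carbon buried
    moles_c = buried_carbon_gt * 1e15 / MOLAR_MASS_C    # Gt -> grams -> moles
    o2_left_gt = moles_c * MOLAR_MASS_O2 / 1e15         # moles O2 -> gigatonnes

    print(f"~{o2_left_gt:.2f} Gt of O2 spared from oxidation per Gt of buried carbon")

Roughly 2.7 tonnes of oxygen remain un-consumed for every tonne of organic carbon locked away in sediment, which is why surges in burial can translate into surges in atmospheric oxygen.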

Some of the surges in sediment burial that Husson and Peters identified coincided with the formation of vast fields of fossil fuel that are still mined today, including the oil-rich Permian Basin in Texas and the Pennsylvania coal fields of Appalachia.

“Burying the sediments that became fossil fuels was the key to advanced animal life on Earth,” Peters says, noting that multicellular life is largely a creation of the Cambrian.

Today, burning billions of tons of stored carbon in fossil fuels is removing large amounts of oxygen from the atmosphere, reversing the pattern that drove the rise in oxygen. And so the oxygen level in the atmosphere falls as the concentration of carbon dioxide rises.

Jon Husson points to a fossilized tree stump at Joggins Fossil Cliffs, Nova Scotia, a famous fossil site visited by Charles Darwin. These rocks contain large amounts of organic carbon, part of the carbon sequestration process studied by Husson and Shanan Peters at the University of Wisconsin-Madison.
Credit: Courtesy of Jon Husson/UW-Madison

The data about North America in Macrostrat reflects the work of thousands of geoscientists over more than a century. The current study only concerns North America, since comprehensive databases concerning the other 80 percent of Earth’s continental surface do not yet exist.

The ultimate geological cause for the accelerated sediment storage that promoted the two surges in oxygen remains murky. “There are many ideas to explain the different phases of oxygen concentration," Husson concedes. "We suspect that deep-rooted changes in the movement of tectonic plates or conduction of heat or circulation in the mantle may be in play, but we don’t have an explanation at this point.”

Holding a chunk of trilobite-studded Ordovician shale that formed approximately 450 million years ago, Peters asks, “Why is there oxygen in the atmosphere? The high school explanation is 'photosynthesis.' But we’ve known for a long time, going all the way back to Wisconsin geologist (and University of Wisconsin president) Thomas Chrowder Chamberlin, that building up oxygen requires the formation of rocks like this black shale, which can be rich enough in carbon to actually burn. The organic carbon in this shale was fixed from the atmosphere by photosynthesis, and its burial and preservation in this rock liberated molecular oxygen.”

What's new in the current study, Husson says, is the ability to document this relationship in a broad database that covers 20 percent of Earth's land surface.

Continual burial of carbon is needed to keep the atmosphere pumped up with oxygen. Many pathways on Earth’s surface, Husson notes, like oxidation of iron — rust — consume free oxygen. “The secret to having oxygen in the atmosphere is to remove a tiny portion of the present biomass and sequester it in sedimentary deposits. That’s what happened when fossil fuels were deposited.”







Contacts and sources:
David Tenenbaum
University of Wisconsin-Madison

Researchers Urge Caution Around Psilocybin Use


In a survey of almost 2,000 people who said they had had a past negative experience when taking psilocybin-containing "magic mushrooms," Johns Hopkins researchers say that more than 10 percent believed their worst "bad trip" had put themselves or others in harm's way, and a substantial majority called their most distressing episode one of the top 10 biggest challenges of their lives. Despite the difficulty, however, most of the respondents still reported the experience to be "meaningful" or "worthwhile," with half of these positive responses rating it among the most valuable experiences of their life.

The results of the survey were published in the Dec. 1 print issue of the Journal of Psychopharmacology.

The researchers caution that their survey results don't apply to all psilocybin mushroom use, since the questionnaire wasn't designed to assess "good trip" experiences. And, the survey wasn't designed to determine how often bad trips occur.

Mushroom trip hallucination.
Credit: iStock


"Considering both the negative effects and the positive outcomes that respondents sometimes reported, the survey results confirm our view that neither users nor researchers can be cavalier about the risks associated with psilocybin," says Roland Griffiths, Ph.D., a psychopharmacologist and professor of psychiatry and behavioral sciences and neurosciences at the Johns Hopkins University School of Medicine. Griffiths has spent more than 15 years conducting studies of psilocybin's capacity to produce profound, mystical-type experiences, treat psychological anxiety and depression and to aid in smoking cessation.

Psilocybin and use of other hallucinogens became popular in the U.S. in the 1960s due to charismatic proponents, who suggested anecdotally that users would experience profound psychological insights and benefits. But drugs such as psilocybin and LSD were banned for supposed safety reasons shortly thereafter, in the 1970s, without much scientific evidence about risks or benefits.

In recent years, Griffiths and his team have conducted more than a dozen studies confirming some of those benefits. The current study was designed, he said, to shed light on the impact of so-called "bad trips."

For the new survey, Griffiths' team used advertisements on social media platforms and email invitations to recruit people who self-reported a difficult or challenging experience while taking psilocybin mushrooms. The survey took about an hour to complete and included three questionnaires: the Hallucinogen Rating Scale, the Mystical Experience Questionnaire, developed by Griffiths and colleagues in 2006, and parts of the 5D-Altered States of Consciousness Questionnaire.

Participants were asked in the survey to focus only on their worst bad trip experience, and then to report about the dose of psilocybin they took, the environment in which the experience occurred, how long it lasted, and strategies available and used to stop this negative experience and any unwanted consequences.

Of 1,993 completed surveys, 78 percent of respondents were men, 89 percent were white, and 51 percent had college or graduate degrees. Sixty-six percent were from the U.S. On average, the survey participants were 30 years old at the time of the survey and 23 years old at the time of their bad trips, with 93 percent responding that they used psilocybin more than two times.

Based on the survey data that assessed each respondent's absolute worst bad trip, 10.7 percent of the respondents said they put themselves or others at risk of physical harm during their bad trip. Some 2.6 percent said they acted aggressively or violently, and 2.7 percent said they sought medical help. Five of the participants with self-reported pre-existing anxiety, depression or suicidal thoughts attempted suicide while on the drug during their worst bad trip, which the researchers say underscores the need for a supportive and safe environment during use, like the conditions used in ongoing research studies. However, six people reported that their suicidal thoughts disappeared after their experience on their worst bad trip -- the latter result coinciding with a recent study published by Griffiths showing the antidepressive properties of psilocybin in cancer patients.

Still, Griffiths said, a third of the participants also said their experience was among the top five most meaningful, and a third ranked it in the top five most spiritually significant experiences of their lives. Sixty-two percent of participants said the experience was among the top 10 most difficult ones in their lifetime; 39 percent listed it in their top five most difficult experiences; and 11 percent listed it as their single most difficult experience.

"The counterintuitive finding that extremely difficult experiences can sometimes also be very meaningful experiences is consistent with what we see in our studies with psilocybin -- that resolution of a difficult experience, sometimes described as catharsis, often results in positive personal meaning or spiritual significance," Griffiths says.

In all of Griffiths' clinical research, people given psilocybin are provided a safe, comfortable space with trained experts to offer support to participants. "Throughout these carefully managed studies, the incidence of risky behaviors or enduring psychological problems has been extremely low," Griffiths says. "We are vigilant in screening out volunteers who may not be suited to receive psilocybin, and we mentally prepare study participants before their psilocybin sessions."

"Cultures that have long used psilocybin mushrooms for healing or religious purposes have recognized their potential dangers and have developed corresponding safeguards," says Griffiths. "They don't give the mushrooms to just anyone, anytime, without a contained setting and supportive, skillful monitoring."

The researchers say that survey studies like this one rely on self-reporting that cannot be objectively substantiated, and that additional scientifically rigorous studies are needed to better understand the risks and potential benefits of using hallucinogenic drugs.

According to the Substance Abuse and Mental Health Services Administration's National Survey on Drug Use and Health, about 22.9 million people or 8.7 percent of Americans reported prior use of psilocybin. While not without behavioral and psychological risks, psilocybin is not regarded as addictive or as toxic to the brain, liver or other organs.

Please see the Q&A with Griffiths for more information on the study: http://bit.ly/2ivcRJ7

Additional authors included Theresa Carbonaro, Matthew Bradstreet, Frederick Barrett, Katherine MacLean, Robert Jesse and Matthew Johnson, of The Johns Hopkins University.

The study was funded by grants from the National Institute on Drug Abuse (R01 DA03889 and 5T32 DA007209), the Council on Spiritual Practices and the Heffter Research Institute.




Contacts and sources:
Johns Hopkins Medicine

Divide and Conquer Pattern Searching


A new data-mining strategy that offers unprecedented pattern search speed could glean new insights from massive datasets.

Searching for recurring patterns in network systems has become a fundamental part of research and discovery in fields as diverse as biology and social media. King Abdullah University of Science & Technology (KAUST) researchers have developed a pattern or graph-mining framework that promises to significantly speed up searches on massive network data sets.

"A graph is a data structure that models complex relationships among objects," explained Panagiotis Kalnis, leader of the research team from the KAUST Extreme Computing Research Center. "Graphs are widely used in many modern applications, including social networks, biological networks like protein-to-protein interactions, and communication networks like the internet."

A new data mining strategy that offers unprecedented pattern search speed could lead to new insights from massive data.

Credit: © Mopic / Alamy Stock Photo DTFTEM

In these applications, one of the most important operations is the process of finding recurring graphs that reveal how objects tend to connect to each other. The process, which is called frequent subgraph mining (FSM), is an essential building block of many knowledge extraction techniques in social studies, bioinformatics and image processing, as well as in security and fraud detection. However, graphs may contain hundreds of millions of objects and billions of relationships, which means that extracting recurring patterns places huge demands on time and computing resources.

"In essence, if we can provide a better algorithm, all the applications that depend on FSM will be able to perform deeper analysis on larger data in less time," Kalnis noted.

Kalnis and his colleagues developed a system called ScaleMine that offers a ten-fold acceleration compared with existing methods.

"FSM involves a vast number of graph operations, each of which is computationally expensive, so the only practical way to support FSM in large graphs is by massively parallel computation," he said.

In parallel computing, the graph search is divided into multiple tasks and each is run simultaneously on its own processor. If the tasks are too large, the entire search is held up by waiting for the slowest task to complete; if the tasks are too small, the extra communication needed to coordinate the parallelization becomes a significant additional computational load.

Kalnis' team overcame this limitation by performing the search in two steps: a first approximation step to determine the search space and the optimal division of tasks, and a second computational step in which large tasks are split dynamically into the optimal number of subtasks. This resulted in search speeds up to ten times faster than previously possible.
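The sketch below is not ScaleMine's implementation (the real system targets massively parallel machines); it is a minimal Python illustration of the two-phase idea: cheaply estimate task costs first, then split oversized tasks before handing them to a pool of workers so that no single slow task stalls the whole search. The function names and the cost model are hypothetical.

    # Two-phase task scheduling sketch, assuming a stand-in cost model.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def approximate_cost(task, samples=50):
        """Phase 1: cheap sampled estimate of a task's cost (stand-in for the
        approximation of the search space)."""
        random.seed(task)
        return sum(random.random() for _ in range(samples)) * task

    def split(task, n_parts):
        """Phase 2: split a large task into smaller subtasks (hypothetical)."""
        return [(task, part) for part in range(n_parts)]

    def run_subtask(subtask):
        """Stand-in for the expensive subgraph-matching work."""
        task, part = subtask
        return (task + sum(i * i for i in range(1000 * (part + 1)))) % 97

    if __name__ == "__main__":
        tasks = list(range(1, 9))                            # candidate patterns
        budget = max(approximate_cost(t) for t in tasks) / 4 # target granularity

        subtasks = []
        for t in tasks:
            cost = approximate_cost(t)
            n_parts = max(1, int(cost // budget))            # bigger task, more splits
            subtasks.extend(split(t, n_parts))

        with ProcessPoolExecutor() as pool:                  # run subtasks in parallel
            results = list(pool.map(run_subtask, subtasks))
        print(len(subtasks), "subtasks,", sum(results))

The point of the first pass is exactly the trade-off described above: it lets the scheduler pick a task size that is neither so coarse that one straggler dominates, nor so fine that coordination overhead swamps the useful work.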

"Hopefully this performance improvement will enable deeper and more accurate analysis of large graph data and the extraction of new knowledge," Kalnis said.



Contacts and sources:
Michelle D'Antoni
King Abdullah University of Science & Technology (KAUST)

Omega-3 Supplements Can Prevent Childhood Asthma

Taking certain omega-3 fatty acid supplements during pregnancy can reduce the risk of childhood asthma by almost one third, according to a new study from the Copenhagen Prospective Studies on Asthma in Childhood (COPSAC) and the University of Waterloo.

The study, published in the New England Journal of Medicine, found that women who were prescribed 2.4 grams of long-chain omega-3 supplements during the third trimester of pregnancy reduced their children's risk of asthma by 31 per cent. Long-chain omega-3 fatty acids, which include eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), are found in cold-water fish and are key to regulating the human immune response.

Professor Ken Stark taking a sample of blood in Waterloo's Laboratory of Nutritional and Nutraceutical Research to determine the levels of omega-3 fatty acids.

Credit: Light Imaging

"We've long suspected there was a link between the anti-inflammatory properties of long-chain omega-3 fats, the low intakes of omega-3 in Western diets and the rising rates of childhood asthma," said Professor Hans Bisgaard of COPSAC at the Copenhagen University Hospital. "This study proves that they are definitively and significantly related."

The study used rapid analytical techniques developed and performed at the University of Waterloo to measure levels of EPA and DHA in pregnant women's blood. The University of Waterloo houses one of only a few laboratories in the world equipped to run such tests.

"Measuring the levels of omega-3 fatty acids in blood provides an accurate and precise assessment of nutrient status," said Professor Ken Stark, Canada Research Chair in Nutritional Lipidomics and professor in the Faculty of Applied Health Sciences at Waterloo, who led the testing. "Our labs are uniquely equipped to measure fatty acids quickly, extremely precisely, and in a cost-efficient manner."

The testing also revealed that women with low blood levels of EPA and DHA at the beginning of the study benefitted the most from the supplements. For these women, it reduced their children's relative risk of developing asthma by 54 per cent.
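For readers unfamiliar with the term, the short sketch below shows how a relative risk reduction such as the 31 and 54 per cent figures above is calculated from incidence data. The case counts are hypothetical, chosen only so the arithmetic lands near 31 per cent; they are not the trial's data.

    # Hypothetical worked example of a relative risk reduction.
    cases_treated, n_treated = 95, 350      # children with asthma / supplement arm
    cases_control, n_control = 130, 330     # children with asthma / control arm

    risk_treated = cases_treated / n_treated
    risk_control = cases_control / n_control
    relative_risk = risk_treated / risk_control
    reduction = 1 - relative_risk
    print(f"relative risk {relative_risk:.2f} -> {reduction:.0%} reduction")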

"The proportion of women with low EPA and DHA in their blood is even higher in Canada and the United States as compared with Denmark. So we would expect an even greater reduction in risk among North American populations," said Professor Stark. "Identifying these women and providing them with supplements should be considered a front-line defense to reduce and prevent childhood asthma."

Researchers analyzed blood samples of 695 Danish women at 24 weeks' gestation and one week after delivery. They then monitored the health status of each participating child for five years, which is the age at which asthma symptoms can be clinically established.

"Asthma and wheezing disorders have more than doubled in Western countries in recent decades," said Professor Bisgaard. "We now have a preventative measure to help bring those numbers down."

Currently, one in five young children suffers from asthma or a related disorder before school age.




Contacts and sources:
Pamela Smyth
University of Waterloo

Friday, December 30, 2016

The Rhythm That Makes Memories Permanent

Scientists at the Institute of Science and Technology Austria (IST Austria) identify mechanism that regulates rhythmic brain waves -- inhibition at synapses is the key to make memories permanent.

Every time we learn something new, the memory does not only need to be acquired, it also needs to be stabilized in a process called memory consolidation. Brain waves are considered to play an important role in this process, but the underlying mechanism that dictates their shape and rhythm was still unknown. A study now published in Neuron shows that one of the brain waves important for consolidating memory is dominated by synaptic inhibition.

So-called sharp wave ripples (SWRs) are one of three major brain waves coming from the hippocampus. The new study, a cooperation between the research groups of Professors Peter Jonas and Jozsef Csicsvari at IST Austria, found the mechanism that generates this oscillation of neuronal activity in mice. "Our results shed light on the mechanisms underlying this high-frequency network oscillation. As our experiments provide information both about the phase and the location of the underlying conductance, we were able to show that precisely timed synaptic inhibition is the current generator for sharp wave ripples," explains author Professor Peter Jonas.

During sharp wave ripples (shown on the top) the inhibitory conductance (blue curve) has a much higher amplitude than the excitatory conductance (red curve). This shows that inhibition is the underlying mechanism that creates the brain wave.
Credit: IST Austria

When neurons oscillate in synchrony, their electrical activity adds together so that measurements of field potential can pick them up. SWRs are one of the most synchronous oscillations in the brain. Their name derives from their characteristic trace when measuring local field potential: the slow sharp waves have a triangular shape with ripples, or fast field oscillations, added on. SWRs have been suggested to play a key role in making memories permanent. 

In this study, the researchers wanted to identify whether ripples are caused by a temporal modulation of excitation or of inhibition at the synapse, the connection between neurons. For Professor Jozsef Csicsvari, a pooling of expertise was crucial in answering this question: "SWRs play an important role in the brain, but the mechanism generating them has not been identified so far - probably partly because of technical limitations in the experiments. We combined the Jonas group's experience in recording under voltage-clamp conditions with my group's expertise in analyzing electrical signals while animals are behaving. This collaborative effort made unprecedented measurements possible and we could achieve the first high resolution recordings of synaptic currents during SWR in behaving mice."

Preamplifiers and amplifiers used for the in vivo recording experiments described in the study.

Credit: MotionManager

The neuroscientists found that the frequency of both excitatory and inhibitory events at the synapse increased during SWRs. But quantitatively, synaptic inhibition dominated over excitation during the generation of SWRs. Furthermore, the magnitude of inhibitory events positively correlated with SWR amplitude, indicating that the inhibitory events are the driver of the oscillation. Inhibitory events were phase-locked to individual cycles of ripple oscillations. Finally, the researchers showed that so-called PV+ interneurons - neurons that provide inhibitory output onto other neurons - are mainly responsible for generating SWRs.
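The kind of analysis described here can be illustrated with a short sketch on synthetic data (this is not the study's code): it computes a correlation between inhibitory event amplitude and SWR amplitude, and a phase-locking index (the mean resultant vector length) for event phases relative to the ripple cycle. All values are simulated.

    # Illustrative analysis on synthetic data, not the published pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    n_events = 500
    swr_amplitude = rng.gamma(2.0, 1.0, n_events)                   # synthetic ripples
    ipsc_amplitude = 0.8 * swr_amplitude + rng.normal(0, 0.3, n_events)
    event_phase = rng.vonmises(mu=np.pi, kappa=2.0, size=n_events)  # radians

    # Correlation between inhibitory event size and ripple amplitude
    r = np.corrcoef(ipsc_amplitude, swr_amplitude)[0, 1]

    # Phase locking: mean resultant vector length (1 = perfectly locked, 0 = none)
    vector_strength = np.abs(np.mean(np.exp(1j * event_phase)))

    print(f"correlation r = {r:.2f}, vector strength = {vector_strength:.2f}")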

The authors propose a model involving two specific regions in the hippocampus, CA1 and CA3. In their model SWRs are generated by a combination of tonic excitation from the CA3 region and phasic inhibition within the CA1 region. Jian Gan, first author and postdoc in the group of Peter Jonas, explains the implications for temporal coding of information in the CA1 region: "In our ripple model, inhibition ensures the precise timing of neuronal firing. This could be critically important for preplay or replay of neuronal activity sequences, and the consolidation of memory. Inhibition may be the crucial player to make memories permanent."



Contacts and sources:
Peter Jonas
Institute of Science and Technology Austria (IST Austria) 

Hubble Gazes at a Cosmic Megamaser

This galaxy has a far more exciting and futuristic classification than most -- it hosts a megamaser. Megamasers are intensely bright, around 100 million times brighter than the masers found in galaxies like the Milky Way. The entire galaxy essentially acts as an astronomical laser that beams out microwave emission rather than visible light (hence the 'm' replacing the 'l').

A megamaser is a process in which some components within a galaxy (such as gas clouds) are in the right physical conditions to produce intense stimulated emission (in this case, microwaves).



This megamaser galaxy is named IRAS 16399-0937 and is located over 370 million light-years from Earth. This NASA/ESA Hubble Space Telescope image belies the galaxy's energetic nature, instead painting it as a beautiful and serene cosmic rosebud. The image comprises observations captured across various wavelengths by two of Hubble's instruments: the Advanced Camera for Surveys (ACS), and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS).

NICMOS's superb sensitivity, resolution, and field of view gave astronomers the unique opportunity to observe the structure of IRAS 16399-0937 in detail. They found it hosts a double nucleus -- the galaxy's core is thought to be formed of two separate cores in the process of merging. The two components, named IRAS 16399N and IRAS 16399S for the northern and southern parts respectively, sit over 11,000 light-years apart. However, they are both buried deep within the same swirl of cosmic gas and dust and are interacting, giving the galaxy its peculiar structure.

The nuclei are very different. IRAS 16399S appears to be a starburst region, where new stars are forming at an incredible rate. IRAS 16399N, however, is something known as a LINER nucleus (Low Ionization Nuclear Emission Region), which is a region whose emission mostly stems from weakly-ionized or neutral atoms of particular gases. The northern nucleus also hosts a black hole with some 100 million times the mass of the sun!


Contacts and sources:
Rob Gutro
Goddard Space Flight Center

Diamonds Are Technologists' Best Friends: Researchers Grow and Study Needle- And Thread-Like Diamonds


Physicists from the Lomonosov Moscow State University have obtained diamond crystals in the form of regular pyramids of micrometer size. In cooperation with colleagues from other Russian and foreign research centers, they have also studied the luminescence and electron emission properties of the obtained crystals. The results have been presented in a series of articles published in leading peer-reviewed journals, the most recent of which appeared in Scientific Reports.

Researchers from the Faculty of Physics of the Lomonosov Moscow State University have described the structural peculiarities of micrometer-sized diamond crystals with needle- and thread-like shapes, and their relation to luminescence features and the efficiency of field electron emission. The luminescence properties of such thread-like diamond crystals could be used in various sensors and quantum optical devices, in the element base for quantum computers, and in other areas of science and technology.

Examples of diamond crystallites of different shapes, obtained with the technology developed at the Lomonosov Moscow State University. The electron microscopy images show fragments of diamond films after oxidation in air; the material left after oxidation consists of needle-like diamond monocrystals of pyramidal shape.
Credit: Alexander Obraztsov


The best friends of girls and technologists

Brilliants are polished diamond crystals, glorified as "a girl's best friend". The wide use of diamonds in industrial processes is far less familiar to the public. However, the technological application of diamonds significantly outweighs their use in jewelry and is constantly growing, both in quantity and in the diversity of areas of application. This practical significance is a constant motivation for researchers working on new methods of diamond synthesis and processing, and on endowing diamonds with the necessary properties.

One of the problems that must be solved for a number of technology developments is the production of needle- and thread-like diamond crystals. Such shaping of natural and synthetic diamonds is possible by mechanical processing (polishing), much as in brilliant production. Other approaches use lithography and ion beam technologies, which can carve fragments of the necessary shape out of larger crystals. However, such "cutting" techniques are quite expensive and not always acceptable.

A team of researchers working at the Faculty of Physics of the Lomonosov Moscow State University under the guidance of Professor Alexander Obraztsov has suggested a technology that makes mass production of small diamond crystals (crystallites) of needle- and thread-like shapes possible. The first results obtained in this direction were published seven years ago in the journal Diamond & Related Materials.

Alexander Obraztsov, Professor at the Department of Polymer and Crystal Physics at the Faculty of Physics of the Lomonosov Moscow State University, Doctor of Science in Physics and Mathematics, and the main author of the research, comments: "The proposed technique makes use of a well-known regularity that governs the formation of polycrystalline films from crystallites of elongated ('columnar') shape. For instance, ice on the surface of a lake often consists of such crystallites, which can be observed while it is melting. Usually, during the production of polycrystalline diamond films, one strives to provide conditions that allow the columnar crystallites composing the film to connect tightly with each other, creating a dense, homogeneous structure."

Everything except diamonds is gasified

Researchers from the Lomonosov Moscow State University have shown that diamond films previously regarded as "bad quality" because they consist of separate crystallites that do not connect with each other can now be used to produce diamonds in the form of needle- or thread-like crystals of regular pyramidal shape. To achieve this, such a film is heated to a definite temperature in air or another oxygen-containing environment. When heated, part of the film material begins to oxidize and is gasified. Because the oxidation temperature depends on the features of the carbon material, and diamond crystallites need the highest temperature to oxidize, it is possible to adjust the temperature so that all of the material except the diamond crystallites is gasified. This relatively simple technology combines the production of polycrystalline diamond films with specified structural characteristics with their heating in air. It makes mass production of diamond crystallites of various shapes (needle-like, thread-like and so on) possible. Some idea of such crystallites can be obtained from the electron microscopy images.


The crystallites could be used, for instance, as high-hardness elements: cutters for high-precision processing, indenters, or probes for scanning microscopes. Such applications were described in an article published earlier by the team in the journal Review of Scientific Instruments. Probes produced using this technology are now commercially available.

It's possible to control the useful properties of a diamond

During follow-up research and development conducted at the Faculty of Physics of the Lomonosov Moscow State University, the initial technology has been significantly improved, which has made it possible to diversify the shapes and sizes of the needle-like crystallites and extend their prospective fields of application. The researchers have also turned their attention to the optical properties of diamond, which are of significant fundamental and applied interest. The results of these studies are presented in a series of articles in the Journal of Luminescence, Nanotechnology, and Scientific Reports.

These recent publications describe the structural peculiarities of such diamond crystallites and their relation to luminescence features and the efficiency of field electron emission. According to the researchers, the latter is probably the first realization of a genuine diamond field-emission (cold) cathode, something researchers have been trying to obtain and study for the last two decades. The luminescence properties of the needle-like diamond crystals could be applied in different types of sensors and quantum optical devices, in creating the element base for quantum computers, and in other areas of science and technology.

Alexander Obraztsov adds: "I'd especially like to highlight the significant input of the young researchers Viktor Kleshch and Rinat Ismagilov into these studies. Their enthusiasm and intense work have made it possible to obtain the results described above, which are truly new and of fundamental scientific and applied importance."

The studies have been conducted with support of the Russian Science Foundation.



Contacts and sources:
Vladimir Koryagin
Lomonosov Moscow State University 

Thursday, December 29, 2016

Investigation into New Molecules That Could Potentially Treat Alzheimer's

Scientists have not yet succeeded in finding an effective cure for Alzheimer's. Pharmaceutical studies are still being conducted to be able to reduce the symptoms of the disease.

Alzheimer’s is one of the most widespread diseases in elderly people. People over the age of 60 are at the greatest risk of developing the disease, but it can also occur at a younger age. Patients suffer from loss of memory and cognitive functions; they become socially detached and lose their independence, and the body can no longer function properly, which inevitably leads to death. According to medical statistics, Alzheimer’s is the cause of two out of every three cases of dementia in the elderly and it is a huge economic problem in developed countries - the financial impact in the US, for example, is higher than for cancer or cardiovascular diseases.

This year, results have been published of two significant research studies about molecules that could potentially treat Alzheimer's disease. The chief researcher in both studies was Yan Ivanenkov, head of the Laboratory of Medical Chemistry and Bioinformatics at the Moscow Institute of Physics and Technology (MIPT). Papers on the two new molecules were published in Molecular Pharmaceutics and Current Alzheimer Research. Mark Veselov, another MIPT employee, also participated in the second study.

Both papers cover the study of neuroprotectors - antagonists to the 5-HT6R receptor. The latest research confirms that this target has a high therapeutic potential in the treatment of Alzheimer's disease. Preclinical studies on lab animals have shown that the compounds have a high selectivity.



Credit: MIPT Press Office, by Lion on helium

Scientists have not yet succeeded in finding an effective cure for Alzheimer's. Despite the fact that we know how the disease develops, we cannot say that we are even close to a solution. Pharmaceutical studies are still being conducted in order to be able to reduce the symptoms of the disease.

In the first paper, specialists Alexander Ivashenko and Yan Lavrovsky from Alla Chem LLC, Avineuro Pharmaceuticals Inc. and R-Pharm Overseas Inc. (all US companies), in collaboration with MIPT’s Yan Ivanenkov, worked on a 5-HT6R activity blocking compound. A similar task was investigated in Yan and Alexander’s second study with another MIPT employee, Mark Veselov. 5-HT6R receptors were chosen because they are integrated into nerve cell membranes and are capable of reacting to certain external signals, which is why scientists consider them as targets for AD treatment. The antagonists to the receptor are able to ease the symptoms of the disease in a clinical environment.

Studying AVN-211

Scientists studied the pharmacokinetic features, activity, efficacy, and toxicity profile of AVN-211. First, a screening test was performed using recombinant human cells containing 5-HT6R to make sure that AVN-211 really is an antagonist. Another series of experiments with cell cultures demonstrated its ability to spread in a tissue and provided preliminary data about its behavior in the human body - metabolism, biochemical interactions, etc.

Tests were then performed on lab animals - mice, rats and monkeys - to obtain the pharmacokinetic profile of the drug candidate in a real body. Observing concentration changes in the animals' blood after intake provided information about the compound's pharmacokinetics.

Memory disorder stress tests have shown that AVN-211 might be able to improve memory function. Rats and mice were taught to find the exit from a maze while their cognition was impaired by drugs provoking memory loss. Animals that were given the drug demonstrated better results. In addition, healthy animals that received the new drug were better learners and could be trained more efficiently.

These results led the researchers to believe that AVN-211 will be able to combat cognitive dysfunction caused by AD.

Scientists also think that this compound can be used to treat certain mental disorders. Tests with chemicals that produce the same symptoms as psychosis have shown a possible antipsychotic and anxiolytic (reducing anxiety) effect. Such effects are used in treating schizophrenia and depression. It was also noticed that AVN has a comparable effect to haloperidol - a common antipsychotic drug.

In vitro studies revealed that this compound affects the 5-HT6R receptor more effectively and selectively compared to all other drugs, including those currently in clinical trials. Studies on lab animals showed that AVN-211 has low toxicity.

Studying AVN-322

The same tests were performed for AVN-322.

Screening with the 5-HT6R receptor on human cell culture proved that the molecule is a highly effective antagonist.

In vivo tests were performed on mice: the animals were taught how to get out of a maze and had to remember that a section of the floor was electrified. The results showed that mice that received low doses of AVN-322 performed better than mice given any existing neuroleptic drug.

The pharmacokinetics of AVN-322 were analyzed in mice, rats, dogs and monkeys. During a 30-day course, monkeys did not show any toxic after-effects. A possible danger was noticed after a 180-day course in rats - the substance can cause bradycardia and hypotension. However, these after-effects are less serious than those of all other existing drugs.

Pre-clinical data show that AVN-322 also has a good pharmacokinetic profile - it is well absorbed and passes readily through the blood-brain barrier.

In conclusion, we can say that both compounds have a high pharmaceutical potential and low toxicity. The positive results of the studies mean that researchers can move on to clinical trials in order to verify the safety and effectiveness of a drug that could potentially treat one of the most serious diseases of our time.


Contacts and sources:
Ilyana Shaybakova
Moscow Institute of Physics and Technology (MIPT).

Wednesday, December 28, 2016

The Late Effects of Stress: New Insights into How the Brain Responds to Trauma

Mrs. M would never forget that day. She was walking along a busy road next to the vegetable market when two goons zipped past on a bike. One man's hand shot out and grabbed the chain around her neck. The next instant, she had stumbled to her knees, and was dragged along in the wake of the bike. Thankfully, the chain snapped, and she got away with a mildly bruised neck. Though dazed by the incident, Mrs. M was fine until a week after the incident.

Then, the nightmares began.

She would struggle and yell and fight in her sleep every night with phantom chain snatchers. Every bout left her charged with anger and often left her depressed. The episodes continued for several months until they finally stopped. How could a single stressful event have such extended consequences?

A new study by Indian scientists has gained insights into how a single instance of severe stress can lead to delayed and long-term psychological trauma. The work pinpoints key molecular and physiological processes that could be driving changes in brain architecture.

This is a pyramidal neuron.

Credit: Chattarji laboratory 

The team, led by Sumantra Chattarji from the National Centre for Biological Sciences (NCBS) and the Institute for Stem Cell Biology and Regenerative Medicine (inStem), Bangalore, have shown that a single stressful incident can lead to increased electrical activity in a brain region known as the amygdala. This activity sets in late, occurring ten days after a single stressful episode, and is dependent on a molecule known as the N-Methyl-D-Aspartate Receptor (NMDA-R), an ion channel protein on nerve cells known to be crucial for memory functions.

The amygdala is a small, almond-shaped group of nerve cells located deep within the temporal lobe of the brain. This region of the brain is known to play key roles in emotional reactions, memory and making decisions. Changes in the amygdala are linked to the development of Post-Traumatic Stress Disorder (PTSD), a mental condition that develops in a delayed fashion after a harrowing experience.

Previously, Chattarji's group had shown that a single instance of acute stress had no immediate effects on the amygdala of rats. But ten days later, these animals began to show increased anxiety, and delayed changes in the architecture of their brains, especially the amygdala. "We showed that our study system is applicable to PTSD. This delayed effect after a single episode of stress was reminiscent of what happens in PTSD patients," says Chattarji. "We know that the amygdala is hyperactive in PTSD patients. But no one knows as of now, what is going on in there," he adds.

Investigations revealed major changes in the microscopic structure of the nerve cells in the amygdala. Stress seems to have caused the formation of new nerve connections called synapses in this region of the brain. However, until now, the physiological effects of these new connections were unknown.

In their recent study, Chattarji's team has established that the new nerve connections in the amygdala lead to heightened electrical activity in this region of the brain.

"Most studies on stress are done on a chronic stress paradigm with repeated stress, or with a single stress episode where changes are looked at immediately afterwards - like a day after the stress," says Farhana Yasmin, one of the Chattarji's students. "So, our work is unique in that we show a reaction to a single instance of stress, but at a delayed time point," she adds.

Furthermore, a well-known protein involved in memory and learning, called NMDA-R has been recognised as one of the agents that bring about these changes. Blocking the NMDA-R during the stressful period not only stopped the formation of new synapses, it also blocked the increase in electrical activity at these synapses. "So we have for the first time, a molecular mechanism that shows what is required for the culmination of events ten days after a single stress," says Chattarji. "In this study, we have blocked the NMDA Receptor during stress. But we would like to know if blocking the molecule after stress can also block the delayed effects of the stress. And if so, how long after the stress can we block the receptor to define a window for therapy," he adds.

Chattarji's group first began their investigations into how stress affects the amygdala and other regions of the brain around ten years ago. The work has required the team to employ an array of highly specialised and diverse procedures that range from observing behaviour to recording electrical signals from single brain cells and using an assortment of microscopy techniques. "To do this, we have needed to use a variety of techniques, for which we required collaborations with people who have expertise in such techniques," says Chattarji. "And the glue for such collaborations especially in terms of training is vital. We are very grateful to the Wadhwani Foundation that supports our collaborative efforts and to the DBT and DAE for funding this work," he adds.



Contacts and sources:
Sumantra Chattarji
National Centre for Biological Sciences (NCBS)

A Genetic Key for Sperm Production?

Sperm are constantly replenished in the adult male body. Understanding the workings of stem cells responsible for this replenishment is expected to shed light on why male fertility diminishes with age, and possibly lead to new treatments for infertility.

"So-called Myc genes play an important role in stem cells' ability to self-renew," explains Kyoto University's Takashi Shinohara, who is interested specifically in spermatogonial stem cells (SSCs), which are responsible for producing sperm. Shinohara adds that SSCs are unique, because they are "the only stem cells that transmit genetic information to offspring."

In a new report in Genes & Development, the Shinohara lab demonstrates how the Myc gene regulates the self-renewal of mouse SSCs, via a process of glycolysis control. Glycolysis is a key part of cells' energy-making mechanism.

Finding a genetic key for sperm-producing stem cells in mice.

Credit: Kyoto University

The scientists injected two types of SSCs into mouse testes: normal cells in some, and Myc gene-suppressed cells in others. Two months later, they found that the total number of abnormal SSCs was far lower than that of normal ones. Gene analysis showed that the capacity for self-renewal had been compromised, with possibly important implications for sperm production in these mice.

"We found changes in the expression of genes that would slow the cell cycle," says Shinohara.

In other words, suppressed SSCs could self-renew, but at a slower than normal rate. Further study showed that this diminished rate was accompanied by impaired glycolysis, suggesting that the cells were not generating sufficient energy.

"A difference in glycolysis could explain natural differences in SSC self-renewal between mice," elaborates Mito Kanatsu-Shinohara, first-author of the paper. "DBA/2 and B6 are two mouse types in which SSCs are know to self-renew at different rates."

Further experiments confirmed that glycolysis was more active in the cells of DBA/2 mice. Moreover, isolating cells from B6 mice and treating them with certain chemicals that enhanced glycolysis could increase the proliferation rate to levels comparable with DBA/2.

"These findings could have important implications for infertility research in the future," says Shinohara. "Stimulating the metabolism of SSCs could improve their proliferation. However, more careful study of the molecular pathways is necessary."




Contacts and sources:
David Kornhauser
Kyoto University

The paper "Myc/Mycn-mediated glycolysis enhances mouse spermatogonial stem cell self-renewal" appeared 22 December 2016 in Genes & Development, with doi: 10.1101/gad.287045.116 http://dx.doi.org/10.1101/gad.287045.116

New Study Sets Oxygen-Breathing Limit for Ocean’s Hardiest Organisms


Around the world, wide swaths of open ocean are nearly depleted of oxygen. Not quite dead zones, they are “oxygen minimum zones,” where a confluence of natural processes has led to extremely low concentrations of oxygen.

Only the hardiest of organisms can survive in such severe conditions, and now MIT oceanographers have found that these tough little life-forms — mostly bacteria — have a surprisingly low limit to the amount of oxygen they need to breathe.

In a paper published in the journal Limnology and Oceanography, the team reports that ocean bacteria can survive at oxygen concentrations as low as approximately 1 nanomolar (a billionth of a mole of oxygen per liter of seawater). To put this in perspective, that’s about 1/10,000th the minimum amount of oxygen that most small fish can tolerate and about 1/1,000th the level that scientists previously suspected for marine bacteria.

MIT oceanographers have found that some small marine organisms — mostly bacteria — have a surprisingly low limit to the amount of oxygen they need to breathe.

Image: MIT News

The researchers have found that below this critical limit, microbes either die off or switch to less common, anaerobic forms of respiration, using nitrogen (in the form of nitrate) instead of oxygen to breathe.

With climate change, the oceans are projected to undergo a widespread loss of oxygen, potentially increasing the spread of oxygen minimum zones around the world. The MIT team says that knowing the minimum oxygen requirements for ocean bacteria can help scientists better predict how future deoxygenation will change the ocean’s balance of nutrients and the marine ecosystems that depend on them.

“There’s a question, as circulation and oxygen change in the ocean: Are these oxygen minimum zones going to shoal and become more shallow, and decrease the habitat for those fish near the surface?” says Emily Zakem, the paper’s lead author and a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Knowing this biological control on the process is really necessary to making those sorts of predictions.”

Zakem’s co-author is EAPS Associate Professor Mick Follows.

How low does oxygen go?

Oxygen minimum zones, sometimes referred to as “shadow zones,” are typically found at depths of 200 to 1,000 meters. Interestingly, these oxygen-depleted regions often sit just below a layer of high oxygen flux and primary productivity, where surface waters in contact with the oxygen-rich atmosphere support abundant fish. Such surface layers generate a huge amount of organic matter that sinks to deeper layers of the ocean, where bacteria use oxygen — far less abundant than at the surface — to consume the detritus. Without a source to replenish the oxygen supply at such depths, these zones quickly become depleted.

Other groups have recently measured oxygen concentrations in depleted zones using a highly sensitive instrument and observed, to their surprise, levels as low as a few nanomoles per liter — about 1/1,000th of what many others had previously measured — across hundreds of meters of deep ocean.

Zakem and Follows sought to identify an explanation for such low oxygen concentrations, and looked to bacteria for the answer.

“We’re trying to understand what controls big fluxes in the Earth system, like concentrations of carbon dioxide and oxygen, which set the parameters of life,” Zakem says. “Bacteria are among the organisms on Earth that are integral to setting large-scale nutrient distributions. So we came into this wanting to develop how we think of bacteria at the climate scale.”

Setting a limit

The researchers developed a simple model to simulate how a bacterial cell grows. They focused on particularly resourceful strains that can switch between aerobic, oxygen-breathing respiration, and anaerobic, nonoxygen-based respiration. Zakem and Follows assumed that when oxygen is present, such microbes should use oxygen to breathe, as they would expend less energy to do so. When oxygen concentrations dip below a certain level, bacteria should switch over to other forms of respiration, such as using nitrogen instead of oxygen to fuel their metabolic processes.

The team used the model to identify the critical limit at which this switch occurs. If that critical oxygen concentration is the same as the lowest concentrations recently observed in the ocean, it would suggest that bacteria regulate the ocean’s lowest oxygen zones.

To identify bacteria’s critical oxygen limit, the team included in its model several key parameters that regulate a bacterial population: the size of an individual bacterial cell; the temperature of the surrounding environment; and the turnover rate of the population, or the rate at which cells grow and die. They modeled a single bacterial cell’s oxygen intake with changing parameter values and found that, regardless of the varying conditions, bacteria’s critical limit for oxygen intake centered around vanishingly small values.

“What’s interesting is, we found that across all this parameter space, the critical limit was always centered at about 1 to 10 nanomolar,” Zakem says. “This is the minimum concentration for most of the realistic space you would see in the ocean. This is useful because we now think we have a good handle on how low oxygen gets in the ocean, and [we propose] that bacteria control that process.”
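
The authors' model itself is not reproduced here, but the flavor of the calculation can be sketched in a few lines of Python. The sketch below assumes a Michaelis-Menten form for oxygen uptake and a simple balance between aerobic growth and population turnover; the function name critical_oxygen, the parameter ranges, and the half-saturation constant are illustrative assumptions, not values from the paper.

```python
import itertools

def critical_oxygen(vmax, half_sat, yield_per_o2, loss_rate):
    """Oxygen concentration (nM) at which aerobic growth just balances losses.

    Assumes Michaelis-Menten uptake, uptake = vmax * O2 / (half_sat + O2),
    growth = yield_per_o2 * uptake, and a constant population loss_rate.
    Setting growth equal to loss and solving for O2 gives the critical level.
    """
    max_growth = yield_per_o2 * vmax
    if max_growth <= loss_rate:
        return float("inf")  # the cell can never grow fast enough aerobically
    return half_sat * loss_rate / (max_growth - loss_rate)

# Illustrative sweep over cell size (via vmax), temperature (via yield),
# and population turnover rate; all made-up values, not the paper's.
vmaxes     = [1.0, 2.0, 4.0]     # relative maximum uptake capacity
yields     = [0.3, 0.5, 0.8]     # growth per unit of oxygen consumed
loss_rates = [0.02, 0.05, 0.1]   # per-day turnover
half_sat   = 100.0               # nM, assumed half-saturation constant

limits = [critical_oxygen(v, half_sat, y, d)
          for v, y, d in itertools.product(vmaxes, yields, loss_rates)]
print(f"critical O2 spans {min(limits):.1f} to {max(limits):.1f} nM across the sweep")
```

Across this made-up parameter space the limit stays at nanomolar levels, which is the qualitative behavior described above.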

Ocean fertility

Looking forward, Zakem says the team’s simple bacterial model can be folded into global models of atmospheric and ocean circulation. This added nuance, she says, can help scientists better predict how changes to the world’s climate, such as widespread warming and ocean deoxygenation, may affect bacteria.

While they are the smallest organisms, bacteria can potentially have global effects, Zakem says. For instance, as more bacteria switch over to anaerobic forms of respiration in deoxygenated zones, they may consume more nitrogen and give off nitrous oxide as a byproduct, which can be released back into the atmosphere as a potent greenhouse gas.

“We can think of this switch in bacteria as setting the ocean’s fertility,” Zakem says. “When nitrogen is lost from the ocean, you’re losing accessible nutrients back into the atmosphere. To know how much denitrification and nitrous oxide flux will change in the future, we absolutely need to know what controls that switch from using oxygen to using nitrogen. In that regard, this work is very fundamental.”

This research was supported, in part, by the Gordon and Betty Moore Foundation, the Simons Foundation, NASA, and the National Science Foundation.



Contacts and sources:
Jennifer Chu
MIT

Diabetes, Heart Disease, and Back Pain Dominate US Health Care Spending

Just 20 conditions make up more than half of all spending on health care in the United States, according to a new comprehensive financial analysis that examines spending by diseases and injuries.

Spending on the most expensive condition, diabetes, totaled $101 billion in diagnoses and treatments and grew 36 times faster over the past 18 years than spending on ischemic heart disease, the number-one cause of death. While these two conditions typically affect individuals 65 and older, low back and neck pain, the third-most expensive condition, primarily strikes adults of working age.

These three top spending categories, along with hypertension and injuries from falls, accounted for 18% of all personal health spending and totaled $437 billion in 2013.

Link to the data visualization tool: http://vizhub.healthdata.org/dex

This study, published today in JAMA, distinguishes spending on public health programs from personal health spending, including both individual out-of-pocket costs and spending by private and government insurance programs. It covers 155 conditions.

"While it is well known that the US spends more than any other nation on health care, very little is known about what diseases drive that spending." said Dr. Joseph Dieleman, lead author of the paper and Assistant Professor at the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. "IHME is trying to fill the information gap so that decision-makers in the public and private sectors can understand the spending landscape, and plan and allocate health resources more effectively."



In addition to the $2.1 trillion spent on the 155 conditions examined in the study, Dr. Dieleman estimates that approximately $300 billion in costs, such as those of over-the-counter medications and privately funded home health care, remain unaccounted for, indicating total personal health care costs in the US reached $2.4 trillion in 2013.

Other expensive conditions among the top 20 include musculoskeletal disorders, such as tendinitis, carpal tunnel syndrome, and rheumatoid arthritis; well-care associated with dental visits; and pregnancy and postpartum care.

The paper, "US Spending on Personal Health Care and Public Health, 1996-2013," tracks a total of $30.1 trillion in personal health care spending over 18 years. While the majority of those costs were associated with non-communicable diseases, the top infectious disease category was respiratory infections, such as bronchitis and pneumonia.

Other key findings from the paper include: 

* Women ages 85 and older spent the most per person in 2013, at more than $31,000 per person. More than half of this spending (58%) occurred in nursing facilities, while 40% was expended on cardiovascular diseases, Alzheimer's disease, and falls. 

* Men ages 85 and older spent $24,000 per person in 2013, with only 37% on nursing facilities, largely because women live longer and men more often have a spouse at home to provide care. 

* Less than 10% of personal health care spending is on nursing care facilities, and less than 5% of spending is on emergency department care. The conditions leading to the most spending in nursing care facilities are Alzheimer's and stroke, while the condition leading to the most spending in emergency departments is falls. 

* Public health education and advocacy initiatives, such as anti-tobacco and cancer awareness campaigns, totaled an estimated $77.9 billion in 2013, less than 3% of total health spending. 

* Only 6% of personal health care spending was on well-care, which is all care unrelated to the diagnosis and treatment of illnesses or injuries. Of this, nearly a third of the spending was on pregnancy and postpartum care, which was the 10th-largest category of spending.

"This paper offers private insurers, physicians, health policy experts, and government leaders a comprehensive review," said IHME's Director, Dr. Christopher Murray. "As the United States explores ways to deliver services more effectively and efficiently, our findings provide important metrics to influence the future, both in short- and long-term planning."

The top 10 most costly health expenses in 2013 were:

1. Diabetes - $101.4 billion

2. Ischemic heart disease - $88.1 billion

3. Low back and neck pain - $87.6 billion

4. Hypertension - $83.9 billion

5. Injuries from falls - $76.3 billion

6. Depressive disorders - $71.1 billion

7. Oral-related problems - $66.4 billion

8. Vision and hearing problems - $59 billion

9. Skin-related problems, such as cellulitis and acne - $55.7 billion

10. Pregnancy and postpartum care - $55.6 billion
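
As a quick consistency check on these figures, the top five categories in the list add up to roughly the $437 billion cited earlier, or about 18% of the roughly $2.4 trillion in total personal health care spending for 2013. A few lines of arithmetic make that explicit; the values are taken from the list above and the article's totals.

```python
top_five = {
    "Diabetes": 101.4,
    "Ischemic heart disease": 88.1,
    "Low back and neck pain": 87.6,
    "Hypertension": 83.9,
    "Injuries from falls": 76.3,
}  # billions of US dollars, 2013, from the list above

total_top_five = sum(top_five.values())   # roughly 437.3
personal_health_total = 2_400.0           # ~$2.4 trillion, the article's 2013 total

print(f"Top five conditions: ${total_top_five:.1f} billion")
print(f"Share of personal health spending: {total_top_five / personal_health_total:.0%}")
```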





Contacts and sources:
Kayla Albrecht, MPH,
Institute for Health Metrics and Evaluation (IHME)

New Atom Interferometer Could Measure Inertial Forces With Record-Setting Accuracy: Could Yield Hyper-Precise Gravitational Measurements

Atom interferometry is the most sensitive known technique for measuring gravitational forces and inertial forces such as acceleration and rotation. It’s a mainstay of scientific research and is being commercialized as a means of location-tracking in environments where GPS is unavailable. It’s also extremely sensitive to electric fields and has been used to make minute measurements of elements’ fundamental electrical properties.

The most sensitive atom interferometers use exotic states of matter called Bose-Einstein condensates. In the latest issue of Physical Review Letters, MIT researchers present a way to make atom interferometry with Bose-Einstein condensates even more precise, by eliminating a source of error endemic to earlier designs.

Interferometers using the new design could help resolve some fundamental questions in physics, such as the nature of the intermediate states between the quantum description of matter, which prevails at very small scales, and the Newtonian description that everyday engineering depends on.

MIT researchers describe a way to make atom interferometry with Bose-Einstein condensates even more precise by eliminating a source of error endemic to earlier designs.

Credit: MIT

“The idea here is that Bose-Einstein condensates are actually pretty big,” says William Burton, an MIT graduate student in physics and first author on the paper. “We know that very small things act quantum, but then big things like you and me don’t act very quantum. So we can see how far apart we can stretch a quantum system and still have it act coherently when we bring it back together. It’s an interesting question.”

Joining Burton on the paper are his advisor, professor of physics Wolfgang Ketterle, who won the 2001 Nobel Prize in physics for his pioneering work on Bose-Einstein condensates, and four other members of the MIT-Harvard Center for Ultracold Atoms, which Ketterle directs.

Carving up condensates

Bose-Einstein condensates are clusters of atoms that, when cooled almost to absolute zero, all inhabit exactly the same quantum state. This gives them a number of unusual properties, among them extreme sensitivity to perturbation by outside forces.

A common approach to building a Bose-Einstein condensate interferometer involves suspending a cloud of atoms — the condensate — in a chamber and then firing a laser beam into it to produce a “standing wave.” If a wave is thought of as a squiggle with regular troughs and crests, then a standing wave is produced when a wave is exactly aligned with its reflection. The zero points — the points of transition between trough and crest — of the wave and its reflection are identical.

The standing wave divides the condensate into approximately equal-sized clusters of atoms, each its own condensate. In the MIT researchers’ experiment, for instance, the standing wave divides about 20,000 rubidium atoms into 10 groups of about 2,000, each suspended in a “well” between two zero points of the standing wave.

When outside forces act on the condensates, the laser trap keeps them from moving. But when the laser is turned off, the condensates expand, and their energy reflects the forces they were subjected to. Shining a light through the cloud of atoms produces an interference pattern from which that energy, and thus the force the condensates experienced, can be calculated.

This technique has yielded the most accurate measurements of gravitational and inertial forces on record. But it has one problem: The division of the condensate into separate clusters is not perfectly even. One well of the standing wave might contain, say, 1,950 atoms, and the one next to it 2,050. This imbalance yields differences in energy between wells that introduce errors into the final energy measurement, limiting its precision.
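
To get a sense of the scale of that imbalance, the short simulation below randomly assigns 20,000 atoms to 10 wells and reports the typical well-to-well spread. It is purely an illustration of counting statistics, not a model of the actual loading physics in the experiment.

```python
import random
from statistics import pstdev

random.seed(1)

def load_wells(n_atoms=20_000, n_wells=10):
    """Randomly assign each atom to a well and return the per-well counts."""
    counts = [0] * n_wells
    for _ in range(n_atoms):
        counts[random.randrange(n_wells)] += 1
    return counts

counts = load_wells()
print("per-well counts:", counts)
print(f"mean {sum(counts) / len(counts):.0f} atoms, spread (std dev) {pstdev(counts):.0f} atoms")
# With purely random loading the spread is about sqrt(20000 * 0.1 * 0.9) ~ 42 atoms,
# the same order of magnitude as the 1,950-versus-2,050 imbalance described above.
```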

Balancing act

To solve this problem, Burton, Ketterle, and their colleagues use not one but two condensates as the starting point for their interferometer. In addition to trapping the condensates with a laser, they also subject them to a magnetic field.

Both condensates consist of rubidium atoms, but they have different “spins,” a quantum property that describes their magnetic alignment. The standing wave segregates both groups of atoms, but only one of them — the spin-down atoms — feels the magnetic field. That means that the atoms in the other group — the spin-up atoms — are free to move from well to well of the standing wave.

Since a relative excess of spin-down atoms in one well gives it a slight boost in energy, it will knock some of its spin-up atoms into the neighboring wells. The spin-up atoms shuffle themselves around the standing wave until every well has the exact same number of atoms. At the end of the process, when the energies of the atoms are read out, the spin-up atoms correct the imbalances between spin-down atoms.

Bose-Einstein condensates are interesting because they exhibit relatively large-scale quantum effects, and quantum descriptions of physical systems generally reflect wave-particle duality — the fact that, at small enough scales, matter will exhibit behaviors characteristic of both particles and waves. The condensates in the MIT researchers’ experiments can thus be thought of as waves, with their own wavelengths, amplitudes, and phases.

To do atom interferometry, the clusters of atoms trapped by the laser must all be in phase, meaning that the troughs and crests of their waves are aligned. The researchers showed that their “shielding” method kept the condensates in phase much longer than was previously possible, which should improve the accuracy of atom interferometry.

“One of the great expectations for Bose-Einstein condensates [BECs], which was highlighted in the Nobel citation, was that they would lead to applications,” says Dominik Schneble, an associate professor of physics at Stony Brook University. “And one of those applications is atom interferometry.”

“But interactions between BECs basically give rise to de-phasing, which cannot be very well-controlled,” Schneble says. “One approach has been to turn the interactions off. In certain elements, one can do this very well. But it’s not a universal property. What they are doing in this paper is they’re saying, ‘We accept the fact that the interactions are there, but we are using interactions such that it’s not only not a problem but also solves other problems.’ It’s very elegant and very clever. It fits the situation like a natural glove.”


Contacts and sources:
Larry Hardesty
MIT

Driverless Platoons: Autonomous Trucks Traveling In Packs Could Save Time And Fuel

As driverless cars merge into our transportation system in the coming years, some researchers believe autonomous vehicles may save fuel by trailing each other in large platoons. Like birds and fighter jets flying in formation, or bikers and race car drivers drafting in packs, vehicles experience less aerodynamic drag when they drive close together.

But assembling a vehicle platoon to deliver packages between distribution centers, or to transport passengers between stations, requires time. The first vehicle to arrive at a station must wait for others to show up before they can all leave as a platoon, creating inevitable delays.

Now MIT engineers have studied a simple vehicle-platooning scenario and determined the best ways to deploy vehicles in order to save fuel and minimize delays. Their analysis, presented this week at the International Workshop on the Algorithmic Foundations of Robotics, shows that relatively simple, straightforward schedules may be the optimal approach for saving fuel and minimizing delays for autonomous vehicle fleets. The findings may also apply to conventional long-distance trucking and even ride-sharing services.

MIT engineers have studied a simple vehicle-platooning scenario and determined the best ways to deploy vehicles in order to save fuel and minimize delays.

Credit: MIT

“Ride-sharing and truck platooning, and even flocking birds and formation flight, are similar problems from a systems point of view,” says Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT. “People who study these systems only look at efficiency metrics like delay and throughput. We look at those same metrics, versus sustainability such as cost, energy, and environmental impact. This line of research might really turn transportation on its head.”

Karaman is a co-author of the paper, along with Aviv Adler, a graduate student in the Department of Electrical Engineering and Computer Science, and David Miculescu, a graduate student in the Department of Aeronautics and Astronautics.

Pushing through drag

Karaman says that for truck-driving — particularly over long distances — most of a truck’s fuel is spent on trying to overcome aerodynamic drag, that is, to push the truck through the surrounding air. Scientists have previously calculated that if several trucks were to drive just a few meters apart, one behind the other, those in the middle should experience less drag, saving fuel by as much as 20 percent, while the last truck should save 15 percent — slightly less, due to air currents that drag behind.

If more vehicles are added to a platoon, more energy can collectively be saved. But there is a cost in terms of the time it takes to assemble a platoon.
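
Using only the drag figures quoted above, roughly 20 percent savings for trucks in the middle of a platoon and 15 percent for the last truck, a small helper shows how the average per-truck saving grows with platoon size. The 20 and 15 percent numbers come from the estimates mentioned in the article; the assumption that the lead truck saves nothing is an illustrative simplification, not a value from the researchers' model.

```python
def average_saving(n_trucks, middle=0.20, last=0.15, lead=0.0):
    """Average fraction of fuel saved per truck in a platoon of n_trucks.

    Uses the drag figures quoted in the article (about 20% savings for
    middle trucks, 15% for the last truck) and assumes, for illustration,
    that the lead truck saves nothing. A lone truck saves nothing.
    """
    if n_trucks < 2:
        return 0.0
    savings = [lead] + [middle] * (n_trucks - 2) + [last]
    return sum(savings) / n_trucks

for n in (1, 2, 3, 5, 10):
    print(f"{n:2d} trucks -> average saving {average_saving(n):.1%}")
```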

Karaman and his colleagues developed a mathematical model to study the effects of different scheduling policies on fuel consumption and delays. They modeled a simple scenario in which multiple trucks travel between two stations, arriving at each station at random times. The model includes two main components: a formula to represent vehicle arrival times, and another to predict the energy consumption of a vehicle platoon.

The group looked at how arrival times and energy consumption changed under two general scheduling policies: a time-table policy, in which vehicles assemble and leave as a platoon at set times; and a feedback policy, in which vehicles assemble and leave as a platoon only when a certain number of vehicles are present — a policy that Karaman first experienced in Turkey.

“I grew up in Turkey, where there are two types of public transportation buses: normal buses that go out at certain time units, and another set where the driver will sit there until the bus is full, and then will go,” Karaman says.
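
The researchers' mathematical model is not reproduced here, but a toy simulation of the two policies gives a feel for the delay side of the trade-off: trucks arrive at random (Poisson) times, and platoons depart either on a fixed timetable or as soon as a fixed number of trucks has gathered. The arrival rate, dispatch interval, and platoon size below are made-up parameters, not values from the study.

```python
import math
import random

random.seed(0)

def poisson_arrivals(rate_per_min, horizon_min):
    """Generate random (Poisson-process) truck arrival times, in minutes."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_min)
        if t > horizon_min:
            return times
        times.append(t)

def timetable_delay(arrivals, interval_min):
    """Average wait when platoons depart at fixed multiples of interval_min."""
    waits = [math.ceil(t / interval_min) * interval_min - t for t in arrivals]
    return sum(waits) / len(waits)

def feedback_delay(arrivals, platoon_size):
    """Average wait when a platoon departs as soon as platoon_size trucks have gathered."""
    waits = []
    for i in range(0, len(arrivals) - len(arrivals) % platoon_size, platoon_size):
        group = arrivals[i:i + platoon_size]
        departure = group[-1]  # the last truck to arrive triggers departure
        waits.extend(departure - t for t in group)
    return sum(waits) / len(waits)

arrivals = poisson_arrivals(rate_per_min=0.5, horizon_min=8 * 60)  # ~1 truck every 2 min for 8 h
print(f"timetable (every 10 min): average wait {timetable_delay(arrivals, 10):.1f} min")
print(f"feedback (groups of 5):   average wait {feedback_delay(arrivals, 5):.1f} min")
```

The printed waits vary from run to run; the point is only to show how the two kinds of policy can be compared on the same stream of arrivals.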

When to stay, when to go

In their modeling of vehicle platooning, the researchers analyzed many different scenarios under the two main scheduling policies. For example, to evaluate the effects of time-table scheduling, they modeled scenarios in which platoons were sent out at regular intervals — for instance, every five minutes — versus over more staggered intervals, such as every three and seven minutes. Under the feedback policy, they compared scenarios in which platoons were deployed once a certain number of trucks reached a station, versus sending three trucks out one time, then five trucks out the next time.

Ultimately, the team found that the simplest policies incurred the least delay while saving the most fuel. That is, time tables set to deploy platoons at regular intervals were more sustainable and efficient than those that deployed at more staggered times. Similarly, feedback scenarios that waited for the same number of trucks before deploying every time outperformed those that varied the number of trucks in a platoon.

Overall, feedback policies were just slightly more sustainable than time-table policies, saving only 5 percent more fuel.

“You’d think a more complicated scheme would save more energy and time,” Karaman says. “But we show in a formal proof that in the long run, it’s the simpler policies that help you.”

Ahead of the game

Karaman is currently working with trucking companies in Brazil that are interested in using the group’s model to determine how to deploy truck platoons to save fuel. He hopes to use data from these companies on when trucks enter highways to compute delay and energy tradeoffs with his mathematical model.

Eventually, he says, the model may suggest that trucks follow each other at very close range, within 3 to 4 meters, which is difficult for a driver to maintain. Ultimately, Karaman says, truck platoons may require autonomous driving systems to kick in during long stretches of driving, to keep the platoon close enough together to save the most fuel.

“There are already experimental trials testing autonomous trucks [in Europe],” Karaman says. “I imagine truck platooning is something we might see early in the [autonomous transportation] game.”

The researchers are also applying their simulations to autonomous ride-sharing services. Karaman envisions a system of driverless shuttles that transport passengers between stations, at rates and times that depend on the overall system’s energy capacity and schedule requirements. The team’s simulations could determine, for instance, the optimal number of passengers per shuttle in order to save fuel or prevent gridlock.

“We believe that ultimately this thinking will allow us to build new transportation systems in which the cost of transportation will be reduced substantially,” Karaman says.

This research was funded, in part, by the National Science Foundation.



Contacts and sources:
Jennifer Chu
 MIT

For Geriatric Falls, ‘Brain Speed’ May Matter More Than Lower Limb Strength


“Why does a 30-year-old hit their foot against the curb in the parking lot and take a half step and recover, whereas a 71-year-old falls and an 82-year-old falls awkwardly and fractures their hip?” asks James Richardson, M.D., professor of physical medicine and rehabilitation at the University of Michigan Comprehensive Musculoskeletal Center.

For the last several years, Richardson and his team have worked to answer these questions, attempting to find which specific factors determine whether, and why, an older person successfully recovers from a trip or stumble, all in an effort to help prevent the serious injuries, disability and even death that too often follow accidental falls.

Credit: University of Michigan Health System

“Falls research has been sort of stuck, with investigators re-massaging over 100 identified fall ‘risk factors,’ many of which are repetitive and circular,” Richardson explains. “For example, a 2014 review lists the following three leading risk factors for falls: poor gait/balance, taking a large number of prescription medications and having a history of a fall in the prior year.”

Richardson continues, “If engineers were asked why a specific class of boat sank frequently and the answer came back: poor flotation and navigational ability, history of sinking in the prior year and the captain took drugs, we would fire the engineers! Our goal has been to develop an understanding of the specific, discrete characteristics that are responsible for success after a trip or stumble while walking, and to make those characteristics measurable in the clinic.”

Richardson’s latest research finds that it’s not only risk factors like lower limb strength and precise perception of the limb’s position that determine if a geriatric patient will recover from a perturbation, but also complex and simple reaction times, or as he prefers to refer to it, a person’s “brain speed.” The work is published in the January 2017 edition of the American Journal of Physical Medicine & Rehabilitation.

“Our study wanted to identify relationships between complex and simple clinical measures of reaction time and indicators of balance in elderly subjects with diabetic peripheral neuropathy, nerve damage that can occur in those with diabetes,” Richardson says.

“These patients fall twice as often as people their age typically do, so we wanted to examine each person’s ability to make a decision in less than half a second, or around 400 milliseconds. Importantly, this is also about the length of time the foot is in the air before landing while walking, and about the time available to recover from a stumble or trip.”

He realized they needed a new, easy way to measure that rapid decision-making ability.

Measuring simple and complex reaction time

Simple reaction time is measured using a device developed with U-M co-inventors James T. Eckner, Hogene Kim and James A. Ashton-Miller. The measurement works much like the drop-ruler test used in many school science classes, but is a bit more standardized.

“The clinical reaction time assessment device consists of a long, lightweight stick attached to a rectangular box at one end. The box serves as a finger spacer to standardize initial hand position and finger closure distance, as well as a housing for the electronic components of the device,” Richardson says.


Clinical reaction time assessment device

Credit: University of Michigan Health System

To measure simple reaction time, the patient or subject sits with the forearm resting on a desk with the hand off the edge of the surface. The examiner stands and suspends the device with the box hanging between the subject’s thumb and other fingers and lets the device drop at varying intervals. The subject catches it as quickly as possible and the device provides a display of the elapsed time between drop and catch, which serves as a measurement of simple reaction time.
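
The device reports the elapsed time directly, but the physics behind the classroom drop-ruler comparison is simple free fall: a catch distance d corresponds to a reaction time t = sqrt(2d/g). The short conversion below illustrates that relationship only; it is not part of the device itself.

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def reaction_time_from_drop(distance_m):
    """Reaction time (s) implied by how far a freely falling object drops
    before being caught: d = 0.5 * g * t**2, so t = sqrt(2 * d / g)."""
    return sqrt(2 * distance_m / G)

# Example: catching the object after about 20 cm of free fall
# corresponds to a simple reaction time of roughly 200 milliseconds.
print(f"{reaction_time_from_drop(0.20) * 1000:.0f} ms")
```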

Although measuring simple reaction time is useful, Richardson says that complex reaction time accuracy has been more revealing. The initial setup of the device and subject is the same. However, in this instance, the subject’s task is to catch the falling device only during the random 50 percent of trials in which lights attached to the box illuminate at the moment the device is dropped, and to resist catching it when the lights do not illuminate.

“Resisting catching when the lights don’t go off is the hard part,” Richardson says. “We all want to catch something that is falling. The subject must perceive light illumination status and then act very quickly to withhold the natural tendency to catch a falling object.”

In the study, Richardson and team used the device with a sample of 42 subjects, 26 with diabetic neuropathy and 16 without, with an average age of 69.1 years old, to examine their complex reaction time accuracy and their simple reaction time latency, in addition to the usual measures of leg strength and perception of motion.

They then looked to see how well these measures predicted one-legged balance time, the ability to control step width when walking on a hazardous uneven surface in the research lab and major fall-related injuries over the next 12 months.

Examining the results

In the subjects with diabetic peripheral neuropathy, good complex reaction time accuracy and quick simple reaction time were strongly associated with a longer one-legged balance time, and were the only predictors of good control of step width on the uneven surface. In addition, they appeared to identify those who sustained major fall-related injury during the one-year follow up. Surprisingly, the measures of leg strength and motion perception had no influence on step width control on the hazardous surface and did not appear to predict major injury.

“Essentially we found that those who were able to grab the device quickly, or quickly make the decision to let it drop, had quick brains that were somehow helping them stay balanced and avoid aberrant steps on the uneven surface,” Richardson says.

He explains that the ability to avoid aberrant steps after hitting a bump while walking, and to stay balanced while performing the trials, was likely based on the participant’s brain processing speed. In particular, the ability to quickly withhold, or inhibit, a planned movement is required both for good complex reaction accuracy and for responding to a perturbation while walking. In both cases, the original plan of action must be aborted and a new one substituted within a time interval of approximately 400 milliseconds.

“With this in mind, it makes perfect sense that brains fast enough to have good complex reaction time accuracy were also fast enough to quickly pay attention to the perturbation while walking, inhibit the step that was planned and quickly execute a safer alternative,” Richardson says. “The faster your brain can oscillate between various external stimuli, or events, and your own internal thinking clutter, the better off you are. When an elderly person falls, it seems likely that their brain is not keeping up with what is happening and so it is not able to quickly, and selectively, attend to a particular stimulus, such as hitting a curb.”

Richardson says this assessment, which cannot be produced from a computer or pen/pencil tests, could be valuable to other health care providers, such as primary care physicians, neurologists, geriatricians and a variety of rehabilitation professionals.



Contacts and sources:
University of Michigan Health System

Citation: Complex and Simple Clinical Reaction Times Are Associated with Gait, Balance, and Major Fall Injury in Older Subjects with Diabetic Peripheral Neuropathy.  Authors: Richardson, James K. MD; Eckner, James T. MD; Allet, Lara PhD; Kim, Hogene PhD; Ashton-Miller, James A. American Journal of Physical Medicine & Rehabilitation: January 2017 - Volume 96 - Issue 1 - p 8–16 doi: 10.1097/PHM.0000000000000604