Unseen Is Free

Monday, November 30, 2015

Dopamine Affects Learning: Abrupt Dopamine Increases Occur When a Person Perceives Stimuli That Predict Rewards

If you've ever felt too lackadaisical to start a new project, try focusing on the joy of finishing it, say University of Michigan researchers.

Both the motivation to start and the satisfaction of finishing work are functions of dopamine, they say.

Credit: University of Michigan 

In a new study, U-M researchers Arif Hamid and Joshua Berke, professor of psychology and biomedical engineering, argue that dopamine levels continuously signal how good or valuable the current situation is with regard to obtaining a reward. This message helps people decide how vigorously to work toward a goal while also allowing them to learn from mistakes.

"We provide a new theoretical account for how dopamine affects learning (what to do later) and motivation (getting fired up to go now) simultaneously," said study lead author Hamid, U-M neuroscience doctoral student.

For many years, researchers have known that dopamine is important for arousal, movement, mood and executing activities with haste and vigor. Aspects of these normal dopamine functions are highlighted in disorders, such as Parkinson's disease and depression. Drugs that elevate brain dopamine levels, like cocaine or amphetamines, produce euphoric feelings of well-being, in addition to heightened arousal and attention.

Aside from affecting immediate mood and behavior, dopamine also produces changes in the brain that are persistent, sometimes lasting a lifetime.

"This is basically how we stamp in memories of what the smell of cookies or the McDonald's sign mean: predictors of delicious, calorie rich rewards," Hamid said.

Abrupt dopamine increases when a person perceives stimuli that predict rewards are a dominant mechanism of reward learning within the brain—a concept similar to Russian physiologist Ivan Pavlov's dogs hearing a bell and salivating in response to the stimulus, he said.

Hamid said the precise mechanism by which a single neurotransmitter can achieve both invigorating and learning functions is counterintuitive, and many decades of neuropsychological research have attempted to resolve exactly how it does so.

One theory, spearheaded by U-M psychologists Kent Berridge and Terry Robinson, suggests that dopamine invigorates actions toward desired goals. For example, rats with almost no brain dopamine will not retrieve food a few inches away while they're starving.

Another theory suggests dopamine is a "teaching signal," like a coach who tells his player "good job" or "bad job" to encourage a future reward. In the current study, U-M researchers describe those dopamine fluctuations as a continuous cheer to motivate, with brief moments of criticism.
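
The "teaching signal" idea has a well-known computational counterpart in reinforcement learning: the reward prediction error of temporal-difference (TD) learning, which many neuroscientists compare to phasic dopamine. The toy Python sketch below is not the U-M model; it is a minimal, self-contained illustration (with made-up states and reward) of how one error signal can both act as a momentary "cheer" or "criticism" and leave behind lasting value estimates.

# Toy temporal-difference (TD) learning sketch -- an illustration of a
# "teaching signal" (reward prediction error), NOT the model in the paper.
# States: 0 = cue (e.g., a bell), 1 = delay, 2 = reward delivery.

values = [0.0, 0.0, 0.0]   # learned value of each state
alpha, gamma = 0.1, 0.9    # learning rate, discount factor

for trial in range(500):
    for state in range(3):
        reward = 1.0 if state == 2 else 0.0
        next_value = values[state + 1] if state < 2 else 0.0
        # Prediction error: positive = "better than expected" (a cheer),
        # negative = "worse than expected" (criticism).
        delta = reward + gamma * next_value - values[state]
        values[state] += alpha * delta

print("learned state values:", [round(v, 2) for v in values])
# After training, the cue state already carries most of the reward's value
# (about gamma squared, ~0.81), which is the textbook TD account of how a
# reward-predicting stimulus can itself come to drive anticipatory responses.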

They measured dopamine levels in rats while they performed a decision-making task, and compared it with how motivated the rats were and how much they learned. They also increased dopamine levels to artificially motivate the rats and repeatedly made them learn to perform actions that did not produce rewards.

The findings appear in the current issue of Nature Neuroscience.

The study's other authors include Jeffrey Pettibone, Omar Mabrouk, Vaughn Hetrick, Robert Schmidt, Caitlin Vander Weele, Robert Kennedy and Brandon Aragona.

The work was supported by grants from the National Institute on Drug Abuse (DA032259, DA007281), National Institute of Mental Health (MH093888, MH101697), National Institute of Neurological Disorders and Stroke (NS078435, NS076401) and National Institute of Biomedical Imaging and Bioengineering (EB003320). R.S. was supported by the BrainLinks-BrainTools Cluster of Excellence funded by the German Research Foundation (DFG grant number EXC1086).

Contacts and sources:
Jared Wadley
University of Michigan

Rice Basket Study Rethinks Roots Of Human Culture

A new study from the University of Exeter has found that teaching is not essential for people to learn to make effective tools.

The results counter established views about how human tools and technologies come to improve from generation to generation and point to an explanation for the extraordinary success of humans as a species. The study reveals that although teaching is useful, it is not essential for cultural progress because people can use reasoning and reverse engineering of existing items to work out how to make tools.

Although teaching is useful, it is not essential for cultural progress. 
  Rice in basket
Credit: Wikipedia

The capacity to improve the efficacy of tools and technologies from generation to generation, known as cumulative culture, is unique to humans and has driven our ecological success. It has enabled us to inhabit the coldest and most remote regions on Earth and even have a permanent base in space. How our cumulative culture has boomed compared with that of other species, however, remains a mystery.

It had long been thought that the human capacity for cumulative culture was down to special methods of learning from others - such as teaching and imitation - that enable information to be transmitted with high fidelity.

To test this idea, the researchers recreated conditions encountered during human evolution by asking groups of people to build rice baskets from everyday materials. Some people made baskets alone, while others were in ’transmission chain’ groups, where each group member could learn from the previous person in the chain either by imitating their actions, receiving teaching or simply examining the baskets made by previous participants.

Teaching produced the most robust baskets but after six attempts all groups showed incremental improvements in the amount of rice their baskets could carry.

Dr Alex Thornton from the Centre for Ecology and Conservation at the University of Exeter’s Penryn Campus in Cornwall said: “Our study helps uncover the process of incremental improvements seen in the tools that humans have used for millennia. While a knowledgeable teacher clearly brings important advantages, our study shows that this is not a limiting factor to cultural progress. Humans do much more than learn socially, we have the ability to think independently and use reason to develop new ways of doing things. This could be the secret to our success as a species.”

The results of the study shed light on ancient human society and help to bridge the cultural chasm between humans and other species. The researchers say that to fully understand those elements that make us different from other animals, future work should focus on the mental abilities of individuals and not solely mechanisms of social learning.

'Cognitive requirements of cumulative culture: teaching is useful but not essential' by Elena Zwirner and Alex Thornton is published in Scientific Reports

Contacts and sources:
University of Exeter
Louise Vennells

Prescription for Bullying: Kids on ADHD Meds Twice As Likely To Face Bullies

Kids and teens who take medications like Ritalin to treat attention-deficit hyperactivity disorder are twice as likely to be physically or emotionally bullied by peers as those who don't have ADHD, a new University of Michigan study found.

At even higher risk were middle and high school students who sold or shared their medications--those kids were four-and-a-half times likelier to be victimized by peers than kids without ADHD.

The main findings are the same for both sexes, said the study's first author, Quyen Epstein-Ngo, research assistant professor at the U-M Institute for Research on Women and Gender and a fellow at the U-M Injury Center. Carol Boyd, professor of nursing, is the principal investigator.

Credit: The Simpsons

It's long been known that kids with ADHD have a harder time making and keeping friends and are bullied and victimized more. This study is believed to be the first to look at how stimulant medications affect their relationships with peers.

"Many youth with ADHD are prescribed stimulant medications to treat their ADHD and we know that these medications are the most frequently shared or sold among adolescents," said Epstein-Ngo, a licensed clinical psychologist.

The U-M researchers surveyed nearly 5,000 middle and high school students over four years. About 15 percent were diagnosed with ADHD and roughly 4 percent were prescribed stimulants within the past 12 months.

Of those who took ADHD meds, 20 percent reported being approached to sell or share them, and about half of them did.

Credit: nobullying.com 

When looking at the overall figures, relatively few students were asked to divert their medications or did. However, Epstein-Ngo said the numbers don't tell the entire story.

"Having a diagnosis of ADHD has lifelong consequences," she said. "These youth aren't living in isolation. As they transition into adulthood, the social effects of their ADHD diagnosis will impact a broad range of people with whom they come into contact."

From 2003 to 2011, there was a 42 percent increase in ADHD cases diagnosed in the U.S., and between 2007 and 2011, there was a 27 percent increase in stimulant-treated ADHD.

Epstein-Ngo said the findings shouldn't scare parents away from considering a stimulant medication. Rather, the study reinforces why parents must talk to kids about never sharing their medications.

"For some children stimulant medications are immensely helpful in getting through school," Epstein-Ngo said. "This study doesn't say 'don't give your child medication.' It suggests that it's really important to talk to your children about who they tell."

It's unclear why kids with prescriptions for stimulant medications are more at risk for bullying and victimization, but Epstein-Ngo said it's probably several factors.

"Is it a function of the fact that they are in riskier situations, or are they being coerced and forced to give up their medications? Probably a little bit of both," she said.

Epstein-Ngo believes the biggest takeaway is to have compassion for kids with ADHD.

"I think the biggest misconception about ADHD is that these kids aren't trying hard enough, and that's just not the case," she said. "If these kids could do better they would. With the proper support and treatment they can overcome this." 

The study, "Diversion of ADHD stimulants and peer victimization among adolescents," is scheduled to appear in the Journal of Pediatric Psychology. It was funded by the National Institute on Drug Abuse

Contacts and sources: 
Laura Bailey
University of Michigan

Earth's First Ecosystems Were More Complex Than Previously Thought, Study Finds

Computer simulations have allowed scientists to work out how a puzzling 555-million-year-old organism with no known modern relatives fed, revealing that some of the first large, complex organisms on Earth formed ecosystems that were much more complex than previously thought.

The international team of researchers from Canada, the UK and the USA, including Dr Imran Rahman from the University of Bristol, UK, studied fossils of an extinct organism called Tribrachidium, which lived in the oceans some 555 million years ago. Using a computer modelling approach called computational fluid dynamics, they were able to show that Tribrachidium fed by collecting particles suspended in water. This is called suspension feeding and it had not previously been documented in organisms from this period of time.

Computer simulations have allowed scientists, led by Dr Imran Rahman of the University of Bristol, UK to work out how this 555-million-year-old organism with no known modern relatives fed. Their research reveals that some of the first large, complex organisms on Earth formed ecosystems that were much more complex than previously thought.
Credit: M. Laflamme

Tribrachidium lived during a period of time called the Ediacaran, which ranged from 635 million to 541 million years ago. This period was characterised by a variety of large, complex organisms, most of which are difficult to link to any modern species. It was previously thought that these organisms formed simple ecosystems characterised by only a few feeding modes, but the new study suggests they were capable of more types of feeding than previously appreciated.

Dr Simon Darroch, an Assistant Professor at Vanderbilt University, said: "For many years, scientists have assumed that Earth's oldest complex organisms, which lived over half a billion years ago, fed in only one or two different ways. Our study has shown this to be untrue; Tribrachidium and perhaps other species were capable of suspension feeding. This demonstrates that, contrary to our expectations, some of the first ecosystems were actually quite complex."

Co-author Dr Marc Laflamme, an Assistant Professor at the University of Toronto Mississauga, added: "Tribrachidium doesn't look like any modern species, and so it has been really hard to work out what it was like when it was alive. The application of cutting-edge techniques, such as CT scanning and computational fluid dynamics, allowed us to determine, for the first time, how this long-extinct organism fed."

Such simulations have allowed scientists, led by Dr Imran Rahman of the University of Bristol, UK to work out how this 555-million-year-old organism with no known modern relatives fed. Their research reveals that some of the first large, complex organisms on Earth formed ecosystems that were much more complex than previously thought.
Credit: I.A. Rahman

Computational fluid dynamics is a method for simulating fluid flows that is commonly used in engineering, for example in aircraft design, but this is one of the first applications of the technique in palaeontology (following up previous research carried out at Bristol).
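
Computational fluid dynamics packages solve the full equations of fluid motion numerically, which is far beyond what a short snippet can show, but the toy Python sketch below gives the general flavour: a one-dimensional finite-difference solver for the advection and diffusion of suspended particles in a steady current. Every number in it is made up for illustration; it is emphatically not the model used in the Tribrachidium study.

# Toy 1-D advection-diffusion solver (explicit finite differences).
# Illustrates the *kind* of numerical simulation CFD performs in far greater
# generality; geometry and parameters are purely illustrative.
import numpy as np

nx, L = 200, 1.0                              # grid points, domain length (m)
dx = L / nx
u, D = 0.05, 1e-4                             # current speed (m/s), diffusivity (m^2/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))       # time step chosen for stability

x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 0.2) / 0.05) ** 2)          # initial blob of suspended particles

for _ in range(500):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # diffusion
    c = c + dt * (adv + dif)                                     # periodic boundaries

print("peak particle concentration now near x =", round(float(x[np.argmax(c)]), 3))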

Dr Rahman, a Research Fellow in Bristol's School of Earth Sciences, said: "The computer simulations we ran allowed us to test competing theories for feeding in Tribrachidium. This approach has great potential for improving our understanding of many extinct organisms."

Co-author Dr Rachel Racicot, a postdoctoral researcher at the Natural History Museum of Los Angeles County added: "Methods for digitally analysing fossils in 3D have become increasingly widespread and accessible over the last 20 years. We can now use these data to address any number of questions about the biology and ecology of ancient and modern organisms."

The research was funded by the UK's Royal Commission for the Exhibition of 1851.

The study is published today in the journal Science Advances.

 Contacts and sources:
Hannah Johnson
University of Bristol

Citation: 'Suspension feeding in the enigmatic Ediacaran organism Tribrachidium demonstrates complexity of Neoproterozoic ecosystems' by Imran A. Rahman, Simon A. F. Darroch, Rachel A. Racicot and Marc Laflamme in Science Advances

Friday, November 27, 2015

Vitamin B May Counter Negative Effects of Pesticide on Fertility

Women exposed to DDT who have adequate B vitamin intake are more likely to get and stay pregnant than those who are deficient

Women who have adequate levels of B vitamins in their bodies are more likely to get and stay pregnant even when they also have high levels of a common pesticide known to have detrimental reproductive effects, according to new Johns Hopkins Bloomberg School of Public Health research.

Credit: Johns Hopkins Bloomberg School of Public Health

The findings, published in the December issue of the American Journal of Clinical Nutrition, suggest that B vitamins may have a protective effect that counteracts the levels of DDT in their bodies. DDT, a known endocrine disruptor, is still used to kill mosquitoes in many countries where malaria remains a serious public health concern. The United States banned the pesticide in 1972; China, where the study was conducted, followed suit in 1984. DDT, however, can remain in the body and environment for decades.

“Our previous work has shown that high levels of DDT in the body can increase the risk of early miscarriage,” says study leader Xiaobin Wang, MD, ScD, MPH, the Zanvyl Krieger Professor and Director of the Center on the Early Life Origins of Disease at the Johns Hopkins Bloomberg School of Public Health. “This study tells us that improved nutrition may modify the toxic effects of DDT, by better preparing the body to cope with environmental toxins and stressors. We have shown that women with high levels of DDT who also had high levels of B vitamins had a better chance of getting and staying pregnant than those who were deficient in those vitamins.”

The findings suggest that looking at toxins and nutrition in isolation doesn’t paint a full picture of how these different factors intersect. Wang says studies like this may provide a model for how future research groups can examine the relationship between other toxins and nutrients.

For the study, conducted between 1996 and 1998, Wang and her colleagues recruited and followed a group of female Chinese textile workers who were trying to get pregnant. Every day for up to a year, researchers tested the urine of the women in the study, detecting their levels of hCG, the hormone that signals conception. This approach allowed researchers to determine whether a woman was pregnant much earlier than she might normally find out, in the days or even weeks before she realized she’d missed a menstrual period. It also allowed the researchers to determine whether women had an early pregnancy loss (miscarried before six weeks of pregnancy). Levels of DDT, DDE (a major breakdown product of DDT) and B vitamins were measured in the women before conception.

A 2003 study by Wang and her colleagues showed that one-third of all conceptions end before women even know they are pregnant. “This is a very vulnerable period,” she says.

Among the 291 women ultimately included in the study, there were 385 conceptions, 31 percent of which were lost before six weeks. Women with high DDT levels and sufficient levels of vitamin B had a 42 percent greater chance of early miscarriage than women with lower DDT levels. But women with high DDT levels and vitamin B deficiencies were twice as likely to suffer a miscarriage before six weeks of gestation. The researchers looked at three types of B vitamins – B-6, B-12 and folic acid – and determined that the risk to a pregnancy was higher with B-12 and folic acid deficiencies and with deficiencies in more than one type of B vitamin.

The researchers also found that women with high DDT and low B vitamin levels took nearly twice as long to conceive in the first place.

The standard of care in many nations is to give an iron-folate supplement to women once they seek prenatal health care, which typically occurs between eight and 12 weeks of gestation, if at all. But that supplement is rarely taken prior to conception, meaning it likely comes too late to prevent early pregnancy loss. And unlike in the United States, where many foods are fortified with folic acid, that is not the norm around the world.

Difficulty conceiving is prevalent in both developed and developing countries. In the United States, the percentage of married women between the ages of 15 and 44 who had difficulty achieving and maintaining pregnancy increased from 8 percent in 1982 to 11.8 percent in 2002, an increase that cannot be completely explained by the age of the women.

Better nutrition – including fortifying foods with B vitamins – in countries where DDT is still in wide use could improve pregnancy outcomes, Wang says. She says there may also be implications in the United States, particularly among women who immigrate from countries where DDT is still common and among low-income women whose diets may not include foods high in B vitamins such as leafy green vegetables and beans. Even women from the United States may have DDT in their systems, which may come from imported foods or even from local food grown in soil still contaminated with DDT.

“Health care providers need to make sure women get adequate micronutrients including B vitamins in their diets not only during pregnancy but before they even conceive,” she says. “Otherwise, we may miss that critical window.”

“Preconception serum 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane and B-vitamin status: independent and joint effects on women’s reproductive outcomes” was written by Fengxiu Ouyang, Matthew P Longnecker, Scott A Venners, Sara Johnson, Susan Korrick, Jun Zhang, Xiping Xu, Parul Christian, Mei-Cheng Wang and Xiaobin Wang.

The research was supported in part by grants from the National Institutes of Health’s National Institute of Child Health and Human Development (R01 HD32505) and the National Institute of Environmental Health Sciences (R01 ES11682 and R03 ES022790), and by the Intramural Research Program of the National Institute of Environmental Health Sciences.

Contacts and sources:
 Johns Hopkins Bloomberg School of Public Health 

Researchers Find Link Between Air Pollution and Heart Disease

Researchers from the Johns Hopkins Bloomberg School of Public Health have found a link between higher levels of a specific kind of air pollution in major urban areas and an increase in cardiovascular-related hospitalizations such as for heart attacks in people 65 and older.

The findings, published in the November issue of Environmental Health Perspectives, are the strongest evidence to date that coarse particulate matter – airborne pollutants that range in size from 2.5 to 10 microns in diameter and can be released into the air from farming, construction projects or even wind in the desert – impacts public health. It has long been understood that particles smaller in size, which typically come from automobile exhaust or power plants, can damage the lungs and even enter the bloodstream. This is believed to be the first study that clearly implicates larger particles, which are smaller in diameter than a human hair.

Schematic drawing, causes and effects of air pollution: (1) greenhouse effect, (2) particulate contamination, (3) increased UV radiation, (4) acid rain, (5) increased ground level ozone concentration, (6) increased levels of nitrogen oxides.
Credit: Wikimedia Commons

“We suspected that there was an association between coarse particles and health outcomes, but we didn’t have the research to back that up before,” says study leader Roger D. Peng, PhD, an associate professor of biostatistics at the Bloomberg School. “This work provides the evidence, at least for cardiovascular disease outcomes. I don’t feel like we need another study to convince us. Now it’s time for action.”

The researchers also studied respiratory diseases but did not find a correlation between high levels of coarse particles and hospitalizations for those illnesses.

For the national study, Peng and his colleagues studied data from an air monitoring network set up by the U.S. Environmental Protection Agency (EPA) in 110 large urban counties in the United States and linked it to Medicare data on hospitalizations in those same areas from 1999 to 2010. The hospitalizations covered people ages 65 and older.

Counties were included in the study if they had more than 20,000 Medicare enrollees in 2010 and had equipment that monitored fine and coarse particles for at least 200 days of the study. Over that time period, there were 6.37 million cardiovascular and 2.51 million respiratory emergency hospitalizations across the 110 counties.

 Smog in New York City
Credit: Wikimedia Commons

The researchers found that on days when coarse particle levels were higher, cardiovascular hospitalizations were also higher that same day. They did not find a correlation in the following days.

As part of the Clean Air Act, the EPA more closely regulates finer particles, which are more likely to come from manmade sources. States work to reduce those levels through various mechanisms, including stronger car emissions standards or adding scrubbers to coal-fired power plants. In some areas, coarse particles may be more difficult to reduce, as they can come from natural sources.

The coarse particles enter the respiratory tract and can trigger systemic health problems, though the mechanism is not fully understood.

The findings varied by geographic region. While there were higher concentrations of coarse particles found in the western United States, there were more cardiovascular events requiring hospitalization in the eastern United States.

“Just because the particles are the same size doesn’t mean they are made of the same material,” Peng says. “It’s possible that the chemical composition of the particles in the east could make them more toxic.”

Peng says that the EPA’s monitoring network is not designed to measure coarse particles, and that a national monitoring network for particles of that size may be needed. In the past, he says, the EPA has proposed tighter regulations on coarse particles, but they were never finalized, in part because there wasn’t enough evidence.

“It’s worth revisiting given this new data,” he says.

Contacts and sources: 
Stephanie Desmon
Johns Hopkins Bloomberg School of Public Health

Citation: “Ambient Coarse Particulate Matter and Hospital Admissions in the Medicare Cohort Air Pollution Study, 1999-2010” was written by Helen Powell, Jenna R. Krall, Yun Wang, Michelle L. Bell and Roger D. Peng.

The research was supported by grants from the National Institutes of Health’s National Institute of Environmental Health Sciences (R01ES019560, R01ES019587 and R21ES021427); the National Institute on Aging (T32AG000247) and the EPA.

Thursday, November 26, 2015

Rapid Plankton Growth in Ocean Seen As Sign of Carbon Dioxide Loading

A microscopic marine alga is thriving in the North Atlantic to an extent that defies scientific predictions, suggesting swift environmental change as a result of increased carbon dioxide in the ocean, a study led by a Johns Hopkins University scientist has found.

What these findings mean remains to be seen, as does whether the rapid growth in the tiny plankton's population is good or bad news for the planet.

A scanning electron microscope image of a coccolithophore, which can measure from 5 to 15 microns across, less than a fifth the width of a human hair.  

Image: Amy Wyeth, Bigelow Laboratory for Ocean Sciences

Published recently in the journal Science, the study details a tenfold increase in the abundance of single-cell coccolithophores between 1965 and 2010, and a particularly sharp spike since the late 1990s in the population of these pale-shelled floating phytoplankton.

"Something strange is happening here, and it's happening much more quickly than we thought it should," saidAnand Gnanadesikan, associate professor in the Morton K. Blaustein Department of Earth and Planetary Sciences at Johns Hopkins and one of the study's five authors.

Gnanadesikan said the Science report certainly is good news for creatures that eat coccolithophores, but it's not clear what those are. "What is worrisome," he said, "is that our result points out how little we know about how complex ecosystems function." The result highlights the possibility of rapid ecosystem change, suggesting that prevalent models of how these systems respond to climate change may be too conservative, he said.

The team's analysis of Continuous Plankton Recorder survey data from the North Atlantic Ocean and North Sea since the mid-1960s suggests rising carbon dioxide in the ocean is causing the coccolithophore population spike, said Sara Rivero-Calle, a Johns Hopkins doctoral student and lead author of the study. A stack of laboratory studies supports the hypothesis, she said. Carbon dioxide is a greenhouse gas already fingered by scientific consensus as one of the triggers of global warming.

"Our statistical analyses on field data from the CPR point to carbon dioxide as the best predictor of the increase" in coccolithophores, Rivero-Calle said. "The consequences of releasing tons of CO2 over the years are already here and this is just the tip of the iceberg."

The CPR survey is a continuing study of plankton, floating organisms that form a vital part of the marine food chain. The project was launched by a British marine biologist in the North Atlantic and North Sea in the early 1930s. It is conducted by commercial ships trailing mechanical plankton-gathering contraptions through the water as they sail their regular routes.

William M. Balch of the Bigelow Laboratory for Ocean Sciences in Maine, a co-author of the study, said scientists might have expected that ocean acidity due to higher carbon dioxide would suppress these chalk-shelled organisms. It didn't. On the other hand, their increasing abundance is consistent with a history as a marker of environmental change.

"Coccolithophores have been typically more abundant during Earth's warm interglacial and high CO2 periods," said Balch, an authority on the algae. "The results presented here are consistent with this and may portend, like the 'canary in the coal mine,' where we are headed climatologically."

Coccolithophores are single-cell algae that cloak themselves in a distinctive cluster of pale disks made of calcium carbonate, or chalk. They play a role in cycling calcium carbonate, a factor in atmospheric carbon dioxide levels. In the short term they make it more difficult to remove carbon dioxide from the atmosphere, but in the long term—tens and hundreds of thousands of years—they help remove carbon dioxide from the atmosphere and oceans and confine it in the deep ocean.

In vast numbers and over eons, coccolithophores have left their mark on the planet, helping to show significant environmental shifts. The White Cliffs of Dover are white because of massive deposits of coccolithophores. But closer examination shows the white deposits interrupted by slender, dark bands of flint, a product of organisms that have glassy shells made of silicon, Gnanadesikan said.

"These clearly represent major shifts in ecosystem type," Gnanadesikan said. "But unless we understand what drives coccolithophore abundance, we can't understand what is driving such shifts. Is it carbon dioxide?"

The study was supported by the Sir Alister Hardy Foundation for Ocean Science, which now runs the CPR, and by the Johns Hopkins Applied Physics Laboratory. Other co-authors are Carlos del Castillo, a former biological oceanographer at APL who now leads NASA's Ocean Ecology Laboratory, and Seth Guikema, a former Johns Hopkins faculty member now at the University of Michigan.

Contacts and sources:
Arthur Hirsch
Johns Hopkins University

Diamonds May Be Pervasive, Not Rare

Diamonds may not be as rare as once believed, but this finding in a new Johns Hopkins University research report won’t mean deep discounts at local jewelry stores.

“Diamond formation in the deep Earth, the very deep Earth, may be a more common process than we thought,” said Johns Hopkins geochemist Dimitri A. Sverjensky, whose article co-written with doctoral student Fang Huang appears today in the online journal Nature Communications. The report says the results “constitute a new quantitative theory of diamond formation,” but that does not mean it will be easier to find gem-quality diamonds and bring them to market.

Raw diamonds
Credit: www.indus-global.com

For one thing, the prevalence of diamonds near the Earth’s surface – where they can be mined – still depends on relatively rare volcanic magma eruptions that raise them from the depths where they form. For another, the diamonds being considered in these studies are not necessarily the stuff of engagement rings, unless the recipient is equipped with a microscope. Most are only a few microns across and are not visible to the unaided eye.

Using a chemical model, Sverjensky and Huang found that these precious stones could be born in a natural chemical reaction that is simpler than the two main processes that up to now have been understood to produce diamonds. Specifically, their model – yet to be tested with actual materials – shows that diamonds can form with an increase in acidity during interaction between water and rock.

The common understanding up to now has been that diamonds are formed in the movement of fluid by the oxidation of methane or the chemical reduction of carbon dioxide. Oxidation results in a higher oxidation state, or a loss of electrons. Reduction means a lower oxidation state, and collectively the two are known as “redox” reactions.

“It was always hard to explain why the redox reactions took place,” said Sverjensky, a professor in the Morton K. Blaustein Department of Earth and Planetary Sciences in the university’s Krieger School of Arts and Sciences. The reactions require different types of fluids to be moving through the rocks encountering environments with different oxidation states.

The new research showed that water could produce diamonds as its pH falls naturally – that is, as it becomes more acidic – while moving from one type of rock to another, Sverjensky said.

The finding is one of many in about the last 25 years that expands scientists’ understanding of how pervasive diamonds may be, Sverjensky said.

“The more people look, the more they’re finding diamonds in different rock types now,” Sverjensky said. “I think everybody would agree there’s more and more environments of diamond formation being discovered.”

Nobody has yet put a number on the greater abundance of diamonds, but Sverjensky said scientists are working on that with chemical models. It’s impossible to physically explore the great depths at which diamonds are created: roughly 90 to 120 miles below the Earth’s surface at intense pressure and at temperatures about 1,650 to 2,000 degrees Fahrenheit.

The deepest drilling exploration ever made was about 8 or 9 miles below the surface, he said.

If the study doesn’t shake the diamond markets, it promises to help shed light on fluid movement in the deep Earth, which helps account for the carbon cycle on which all life on the planet depends.

“Fluids are the key link between the shallow and the deep Earth,” Sverjensky said. “That’s why it’s important.”

This research was supported by grants from the Sloan Foundation through the Deep Carbon Observatory (Reservoirs and Fluxes and Extreme Physics and Chemistry programs) and by a U.S. Energy Department grant, DE-FG-02-96ER-14616.

Contacts and sources:
Arthur Hirsch
Johns Hopkins University 

Massive 'Development Corridors' In Africa Could Spell Environmental Disaster

In sub-Saharan Africa, dozens of major 'development corridors,' including roads, railroads, pipelines, and port facilities, are in the works to increase agricultural production, mineral exports, and economic integration. And, if all goes according to plan, it's going to be a disaster, say researchers reporting in the Cell Press journal Current Biology on Nov. 25. They assessed the potential environmental impacts of 33 planned or existing corridors -- spanning 53,000 kilometers -- that would crisscross much of the African continent, opening up vast areas of sparsely populated land.

This photo shows a savannah elephant, one of many animals expected to be affected by poaching with a dramatic expansion of African roads.
Credit: William Laurance

"In terms of development pressures, these corridors would be the biggest thing to hit Africa -- ever," says William Laurance of the Centre for Tropical Environmental and Sustainability Science at James Cook University in Cairns, Australia.

Earlier this year, Laurance and his colleagues issued a warning that this unprecedented expansion of infrastructure would come at a great cost. In the new study, he and his colleagues sought to quantify those costs by mapping each corridor along with the estimated human occupancy and the environmental values, including endangered and endemic vertebrates, plant diversity, critical habitats, carbon storage, and climate-regulation services, inside a 50-kilometer-wide band overlaid onto each corridor. They also assessed the potential for each corridor to facilitate increases in agricultural production.

Over 53,000 kilometers of development corridors have been proposed or are underway in sub-Saharan Africa. These corridors are intended to rapidly accelerate agriculture, mining, and land colonization, often in sparsely populated areas. In this Current Biology video abstract, Distinguished Professor William Laurance describes how the corridors were found to vary greatly in their potential socioeconomic benefits and likely environmental costs, with some corridors posing severe risks for key African ecosystems.

Credit: Laurance et al./Current Biology 2015

"We found striking variability in the likely environmental costs and economic benefits of the 33 'development corridors' that we assessed,' Laurance says. "Some of the corridors seem like a good idea, and some appear to be a really bad idea. Several of the corridors could be environmentally disastrous, in our view."

Based on the findings, Laurance says he thinks some of the planned development corridors should be cancelled altogether. His biggest concerns fall in areas near the equator, such as the Congo Basin, West Africa, and the rich tropical savannahs. Other corridors should proceed only with the most stringent safeguards and mitigation measures in place.

Change won't come easily. "The proponents of these projects include some very powerful economic interests, and no one can dispute the dire need to increase food security and economic development in Africa, where populations are growing very rapidly," Laurance says. "The trick -- and it's going to be a great challenge -- will be to amp up African food production without creating an environmental crisis in the process."

However, he and his colleagues were surprised to find that many of the proposed corridors are planned for places where the agricultural potential appears to be very limited, because of poor soils or climates or the remoteness of the area in question.

"One of the key justifications for these corridors is to ramp up farm and food production, but in fact it appears that massive mining investments--securing access to high-volume minerals such as iron ore and coal--are actually a key driver for a number of the corridors," Laurance says.

This photo shows deforestation in Africa, one of the expected effects of poorly planned road expansion.
Credit: William Laurance

The researchers are calling for key stakeholders -- African governments, international lenders and donors, private investors, and others -- to carefully scrutinize the development corridors. Laurance and his team hope to advance a research agenda aimed at better environmental assessment and planning for those corridors that do move forward. They also plan to follow up this comprehensive survey with more detailed studies of key corridors and to develop more local partnerships to advance this work.

"Africa is now facing a 'decade of decision,'" Laurance says. "The stakes are enormous. Once any particular development corridor is established, Pandora's Box will be opened and there won't be very much that one can do to control the onslaught of hunting, habitat disruption, and legal and illegal mining activities. The only real chance to manage this situation is to stop those corridors that are most likely to cause truly profound environmental damage and to create stringent land-use planning around those corridors that do proceed."

This work was supported by the Australian Research Council, the Australian Council for International Agricultural Research, and James Cook University.

Contacts and sources:
Joseph Caputo
Cell Press

Current Biology, Laurance et al.: "Estimating the Environmental Costs of Africa's Massive 'Development Corridors'" http://www.cell.com/current-biology


Aging Star’s Weight Loss Secret Revealed

VY Canis Majoris is a stellar goliath, a red hypergiant, one of the largest known stars in the Milky Way. It is 30-40 times the mass of the Sun and 300 000 times more luminous. In its current state, the star would encompass the orbit of Jupiter, having expanded tremendously as it enters the final stages of its life.

The new observations of the star used the SPHERE instrument on the VLT. The adaptive optics system of this instrument corrects images to a higher degree than earlier adaptive optics systems. This allows features very close to bright sources of light to be seen in great detail [1]. SPHERE clearly revealed how the brilliant light of VY Canis Majoris was lighting up clouds of material surrounding it.

In this very close-up view from SPHERE the star itself is hidden behind an obscuring disc. The crosses are artefacts due to features in the instrument.
VLT image of the surroundings of VY Canis Majoris seen with SPHERE

Credit: ESO

And by using the ZIMPOL mode of SPHERE, the team could not only peer deeper into the heart of this cloud of gas and dust around the star, but they could also see how the starlight was scattered and polarised by the surrounding material. These measurements were key to discovering the elusive properties of the dust.

Careful analysis of the polarisation results revealed these grains of dust to be comparatively large particles, 0.5 micrometres across, which may seem small, but grains of this size are about 50 times larger than the dust normally found in interstellar space.

This wide-field view shows the sky around the very brilliant red hypergiant star VY Canis Majoris, one of the largest stars known in the Milky Way. The star itself appears at the centre of the picture, which also includes clouds of glowing red hydrogen gas, dust clouds and the bright star cluster around the bright star Tau Canis Majoris towards the upper right. This picture was created from images forming part of the Digitized Sky Survey 2.
Credit:ESO/Digitized Sky Survey 2. Acknowledgement: Davide De Martin

Throughout their expansion, massive stars shed large amounts of material -- every year, VY Canis Majoris sees 30 times the mass of the Earth expelled from its surface in the form of dust and gas. This cloud of material is pushed outwards before the star explodes, at which point some of the dust is destroyed, and the rest cast out into interstellar space. This material is then used, along with the heavier elements created during the supernova explosion, by the next generation of stars, which may make use of the material for planets.

This chart shows the location of the very brilliant red hypergiant star VY Canis Majoris, one of the largest stars known in the Milky Way. Most of the stars visible to the naked eye on a clear and dark night are shown and the location of VY Canis Majoris is marked with a red circle. This star is visible in a small telescope and has a strikingly red colour.

Credit: ESO, IAU and Sky & Telescope

Until now, it had remained mysterious how the material in these giant stars' upper atmospheres is pushed away into space before the host star explodes. The most likely driver has always seemed to be radiation pressure, the force that starlight exerts. As this pressure is very weak, the process relies on large grains of dust, to ensure a broad enough surface area to have an appreciable effect [2].

This video sequence takes you on a voyage from a broad vista of the sky into a close-up look at one of the biggest stars in the Milky Way, VY Canis Majoris. The final image comes from the SPHERE instrument on ESO’s Very Large Telescope in Chile.

Credit: ESO/Digitized Sky Survey 2/N. Risinger (skysurvey.org)
Music: Johan B. Monell

"Massive stars live short lives," says lead author of the paper, Peter Scicluna, of the Academia Sinica Institute for Astronomy and Astrophysics, Taiwan. "When they near their final days, they lose alot of mass. In the past, we could only theorise about how this happened. But now, with the new SPHERE data, we have found large grains of dust around this hypergiant. These are big enough to be pushed away by the star's intense radiation pressure, which explains the star's rapid mass loss."

The large grains of dust observed so close to the star mean that the cloud can effectively scatter the star's visible light and be pushed by the radiation pressure from the star. The size of the dust grains also means much of it is likely to survive the radiation produced by VY Canis Majoris' inevitable dramatic demise as a supernova [3]. This dust then contributes to the surrounding interstellar medium, feeding future generations of stars and encouraging them to form planets.


[1] SPHERE/ZIMPOL uses extreme adaptive optics to create diffraction-limited images, which come a lot closer than previous adaptive optics instruments to achieving the theoretical limit of the telescope if there were no atmosphere. Extreme adaptive optics also allows much fainter objects to be seen very close to a bright star.

The images in the new study are also taken in visible light -- shorter wavelengths than the near-infrared regime, where most earlier adaptive optics imaging was performed. These two factors result in significantly sharper images than earlier VLT images. Even higher spatial resolution has been achieved with the VLTI, but the interferometer does not create images directly.

[2] The dust particles must be large enough for the starlight to push them, but not so large that they are simply too heavy to move. Too small and the starlight would effectively pass through the dust; too large and the dust would be too heavy to push. The dust the team observed around VY Canis Majoris was precisely the right size to be most effectively propelled outwards by the starlight.
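
For a rough sense of scale (a standard back-of-envelope estimate, not a calculation from the paper), the ratio of radiation pressure to gravity on a spherical grain, β = 3 L Q_pr / (16 π G M c ρ a), is independent of distance from the star. The sketch below evaluates it using the luminosity and mass quoted earlier in the article and assumed values for the grain density ρ and radiation-pressure efficiency Q_pr.

# Back-of-envelope ratio of radiation pressure to gravity on a dust grain
# around VY Canis Majoris. Grain density and Q_pr are assumptions made for
# illustration; this is not a result from the Scicluna et al. paper.
import math

G, c = 6.674e-11, 2.998e8           # gravitational constant, speed of light (SI)
L_sun, M_sun = 3.828e26, 1.989e30   # solar luminosity (W), solar mass (kg)

L = 3.0e5 * L_sun                   # ~300,000 times solar luminosity (from the article)
M = 35 * M_sun                      # ~30-40 solar masses (from the article)
a = 0.5e-6                          # grain radius, 0.5 micrometres (from the article)
rho = 2000.0                        # assumed grain density, kg/m^3
Q_pr = 1.0                          # assumed radiation-pressure efficiency

beta = 3 * L * Q_pr / (16 * math.pi * G * M * c * rho * a)
print(f"radiation pressure exceeds gravity by a factor of ~{beta:.0f}")
# Of order a few thousand: more than enough to drive such grains outward.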

[3] The explosion will be soon by astronomical standards, but there is no cause for alarm, as this dramatic event is not likely for hundreds of thousands of years. It will be spectacular as seen from Earth -- perhaps as bright as the Moon -- but not a hazard to life here.

Contacts and sources:
Peter Scicluna
Academia Sinica Institute for Astronomy and Astrophysics, Taiwan

Richard Hook
ESO Public Information Officer
Garching bei München, Germany

This research was presented in a paper entitled "Large dust grains in the wind of VY Canis Majoris", by P. Scicluna et al., to appear in the journal Astronomy & Astrophysics. Research paper - http://www.eso.org/public/archives/releases/sciencepapers/eso1546/eso1546a.pdf

The team is composed of P. Scicluna (Academia Sinica Institute for Astronomy and Astrophysics, Taiwan), R. Siebenmorgen (ESO, Garching, Germany), J. Blommaert (Vrije Universiteit, Brussels, Belgium), M. Kasper (ESO, Garching, Germany), N.V. Voshchinnikov (St. Petersburg University, St. Petersburg, Russia), R. Wesson (ESO, Santiago, Chile) and S. Wolf (Kiel University, Kiel, Germany).

Brain Training: Researchers at Johns Hopkins Solve Puzzle of How We Learn

A new study sheds light on relationship between stimuli and delayed rewards, explaining how Pavlov's dogs were conditioned to drool.

More than a century ago, Pavlov figured out that dogs fed after hearing a bell eventually began to salivate when they heard the ring. A Johns Hopkins University-led research team has now figured out a key aspect of why.

In an article published in the journal Neuron, Johns Hopkins neuroscientist Alfredo Kirkwood settles a mystery of neurology that has stumped scientists for years: Precisely what happens in the brain when we learn, or how Pavlov's dogs managed to associate an action with a delayed reward to create knowledge. For decades scientists had a working theory of how it happened, but Kirkwood's team is now the first to prove it.

"If you're trying to train a dog to sit, the initial neural stimuli, the command, is gone almost instantly—it lasts as long as the word sit," said Kirkwood, a professor with the university's Zanvyl Krieger Mind/Brain Institute. "Before the reward comes, the dog's brain has already turned to other things. The mystery was, 'How does the brain link an action that's over in a fraction of a second with a reward that doesn't come until much later?'"

Credit: Johns Hopkins University

The working theory—which Kirkwood's team has validated—is that invisible "eligibility traces" effectively tag the synapses activated by a stimulus so that the association can be cemented as true learning when a reward arrives.

In the case of a dog learning to sit, when the dog gets a treat or a reward, neuromodulators like dopamine flood the dog's brain with "good feelings." Though the brain has long since processed the sit command, eligibility traces respond to the neuromodulators, prompting a lasting synaptic change.

The team was able to prove the theory by isolating cells in the visual cortex of a mouse. When they stimulated the axon of one cell with an electrical impulse, they sparked a response in another cell. By doing this repeatedly, they mimicked the synaptic response between two cells as they process a stimulus and create an eligibility trace. When the researchers later flooded the cells with neuromodulators, simulating the arrival of a delayed reward, the response between the cells strengthened or weakened, showing the cells had "learned" and were able to do so because of the eligibility trace.
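
The logic of an eligibility trace can be captured in a few lines. The Python sketch below is purely illustrative and is not a model of the cortical slice experiments: a brief pairing of activity tags a synapse with a decaying trace, and a later neuromodulator pulse converts whatever trace remains into a lasting weight change.

# Toy three-factor plasticity rule with an eligibility trace.
# Illustrative only -- not the protocol used in the Neuron study.

weight = 1.0        # synaptic strength (arbitrary units)
trace = 0.0         # eligibility trace left by paired pre/post activity
tau = 20.0          # trace decay time constant (in time steps)
rate = 0.5          # learning rate applied when the neuromodulator arrives

for t in range(100):
    paired_activity = 1.0 if t == 10 else 0.0    # brief stimulus ("sit")
    neuromodulator = 1.0 if t == 40 else 0.0     # delayed reward signal (e.g., dopamine)

    trace += paired_activity                     # stimulus tags the synapse
    trace *= 1.0 - 1.0 / tau                     # tag decays exponentially
    weight += rate * neuromodulator * trace      # change only if a trace remains

print(f"synaptic weight after the delayed reward: {weight:.3f}")
# The synapse tagged at t=10 is still weakly eligible at t=40, so the delayed
# reward leaves a lasting change; without the trace, the reward would arrive
# after the stimulus had left no mark on the synapse.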

"This is the basis of how we learn things through reward," Kirkwood said, "a fundamental aspect of learning."

In addition to a greater understanding of the mechanics of learning, these findings could enhance teaching methods and lead to treatments for cognitive problems.

Researchers included Johns Hopkins postdoctoral fellow Su Hong; Johns Hopkins graduate student Xiaoxiu Tie; former Johns Hopkins research associate Kaiwen He; along with Marco Huertas and Harel Shouval, neurobiology researchers at the University of Texas at Houston; and Johannes W. Hell, a professor of pharmacology at University of California, Davis. The research was supported by grants from JHU's Science of Learning Institute and National Institutes of Health.

Contacts and sources:
Jill Rosen
 Johns Hopkins University

Scientists Get First Glimpse of Black Hole Eating Star, Ejecting High-Speed Flare

An international team of astrophysicists led by a Johns Hopkins University scientist has for the first time witnessed a star being swallowed by a black hole and ejecting a flare of matter moving at nearly the speed of light.

The finding reported Thursday in the journal Science tracks the star -- about the size of our sun -- as it shifts from its customary path, slips into the gravitational pull of a supermassive black hole and is sucked in, said Sjoert van Velzen, a Hubble fellow at Johns Hopkins.

"These events are extremely rare," van Velzen said. "It's the first time we see everything from the stellar destruction followed by the launch of a conical outflow, also called a jet, and we watched it unfold over several months."

Artist’s conception of a star being drawn toward a black hole and destroyed (left), and the black hole later emitting a “jet” of plasma composed of debris left from the star’s destruction. 
Modified from an original image by Amadeo Bachar.

Black holes are areas of space so dense that irresistible gravitational force stops the escape of matter, gas and even light, rendering them invisible and creating the effect of a void in the fabric of space. Astrophysicists had predicted that when a black hole is force-fed a large amount of gas, in this case a whole star, then a fast-moving jet of plasma - elementary particles in a magnetic field - can escape from near the black hole rim, or "event horizon." This study suggests this prediction was correct, the scientists said.

"Previous efforts to find evidence for these jets, including my own, were late to the game," said van Velzen, who led the analysis and coordinated the efforts of 13 other scientists in the United States, the Netherlands, Great Britain and Australia.

Supermassive black holes, the largest of black holes, are believed to exist at the center of most massive galaxies. This particular one lies at the lighter end of the supermassive black hole spectrum, at only about a million times the mass of our sun, but still packing the force to gobble a star.
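
For a sense of scale (a textbook calculation, not a number from the Science paper): a non-rotating black hole of about a million solar masses has an event horizon radius r_s = 2GM/c^2 of roughly three million kilometres, only a few times the radius of the Sun.

# Schwarzschild radius of a ~1-million-solar-mass black hole (textbook formula;
# illustrative, not a figure from the study).
G, c, M_sun, R_sun = 6.674e-11, 2.998e8, 1.989e30, 6.96e8   # SI units

M = 1.0e6 * M_sun
r_s = 2 * G * M / c**2
print(f"event horizon radius ~ {r_s / 1e9:.1f} million km (~{r_s / R_sun:.1f} solar radii)")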

The first observation of the star being destroyed was made by a team at the Ohio State University, using an optical telescope in Hawaii. That team announced its discovery on Twitter in early December 2014.

After reading about the event, van Velzen contacted an astrophysics team led by Rob Fender at the University of Oxford in Great Britain. That group used radio telescopes to follow up as fast as possible. They were just in time to catch the action.

By the time it was done, the international team had data from satellites and ground-based telescopes that gathered X-ray, radio and optical signals, providing a stunning "multi-wavelength" portrait of this event.

It helped that the galaxy in question is closer to Earth than those studied previously in hopes of tracking a jet emerging after the destruction of a star. This galaxy is about 300 million light years away, while the others were at least three times farther away. One light year is 5.88 trillion miles.

The first step for the international team was to rule out the possibility that the light was from a pre-existing expansive swirling mass called an "accretion disk" that forms when a black hole is sucking in matter from space. That helped to confirm that the sudden increase of light from the galaxy was due to a newly trapped star.

"The destruction of a star by a black hole is beautifully complicated, and far from understood," van Velzen said. "From our observations, we learn the streams of stellar debris can organize and make a jet rather quickly, which is valuable input for constructing a complete theory of these events."

Van Velzen last year completed his doctoral dissertation at Radboud University in the Netherlands, where he studied jets from supermassive black holes. In the last line of the dissertation, he expressed his hope to discover these events within four years. It turned out to take only a few months after the ceremony for his dissertation defense.

Van Velzen and his team were not the only ones to hunt for radio signals from this particular unlucky star. A group at Harvard observed the same source with radio telescopes in New Mexico and announced its results online. Both teams presented results at a workshop in Jerusalem in early November. It was the first time the two competing teams had met face to face.

"The meeting was an intense, yet very productive exchange of ideas about this source," van Velzen said. "We still get along very well; I actually went for a long hike near the Dead Sea with the leader of the competing group."

Support for this study came from sources including NASA, the Netherlands Organisation for Scientific Research (NWO), the European Research Council, the International Centre for Radio Astronomy Research, the Alfred P. Sloan Foundation and the Australian Research Council.

Contacts and sources:
Arthur Hirsch
Johns Hopkins University

Study Shows White Matter Brain Damage Caused By 'Skunk-Like' Cannabis

Smoking high potency 'skunk-like' cannabis can damage a crucial part of the brain responsible for communication between the two brain hemispheres, according to a new study by scientists from King's College London and Sapienza University of Rome.

Researchers have known for some time that long-term cannabis use increases the risk of psychosis, and recent evidence suggests that alterations in brain function and structure may be responsible for this greater vulnerability. However, this new research, published today in Psychological Medicine, is the first to examine the effect of cannabis potency on brain structure.

The corpus callosum
Credit: King's College London

Exploring the impact of cannabis potency is particularly important since today's high potency 'skunk-like' products have been shown to contain higher proportions of Δ9-tetrahydrocannabinol (THC) than they did around a decade ago. In experimental studies THC has been shown to induce psychotic symptoms, and 'skunk-like' products high in THC are now thought to be the most commonly used form of cannabis in the UK.

Dr Paola Dazzan, Reader in Neurobiology of Psychosis from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) at King's College London, and senior researcher on the study, said: 'We found that frequent use of high potency cannabis significantly affects the structure of white matter fibres in the brain, whether you have psychosis or not.

High potency cannabis
Credit: Wikimedia Commons

'This reflects a sliding scale where the more cannabis you smoke and the higher the potency, the worse the damage will be.'

Diffusion Tensor Imaging (DTI), a Magnetic Resonance Imaging (MRI) technique, was used to examine white matter in the brains of 56 patients who had reported a first episode of psychosis at the South London and Maudsley NHS Foundation Trust (SLaM), as well as 43 healthy participants from the local community.

The researchers specifically examined the corpus callosum, the largest white matter structure in the brain, which is responsible for communication between the left and right hemispheres. White matter consists of large bundles of nerve cell projections (called axons), which connect different regions of the brain, enabling communication between them.

The corpus callosum is particularly rich in cannabinoid receptors, on which the THC content of cannabis acts.

The study found that frequent use of high potency cannabis was linked to significantly higher mean diffusivity (MD), a marker of damage to white matter structure.
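For readers unfamiliar with the measure, mean diffusivity is the average of the three eigenvalues of the diffusion tensor that DTI fits at each image voxel; higher values indicate less restricted water diffusion and are read as a sign of compromised white matter microstructure. The short sketch below, using purely illustrative numbers rather than data from the study, shows the calculation.

```python
# Minimal sketch of what "mean diffusivity" (MD) measures in DTI: the average
# of the three eigenvalues of the diffusion tensor fitted at each voxel.
# The tensor values below are purely illustrative, not data from the study.

import numpy as np

# Hypothetical diffusion tensor for a single white-matter voxel
# (units: 10^-3 mm^2/s)
D = np.array([
    [1.10, 0.05, 0.02],
    [0.05, 0.40, 0.03],
    [0.02, 0.03, 0.35],
])

eigenvalues = np.linalg.eigvalsh(D)    # the three principal diffusivities
mean_diffusivity = eigenvalues.mean()  # MD = (l1 + l2 + l3) / 3

print(f"MD = {mean_diffusivity:.3f} x 10^-3 mm^2/s")
```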

Dr Tiago Reis Marques, a senior research fellow from the IoPPN at King's College London, said: 'This white matter damage was significantly greater among heavy users of high potency cannabis than in occasional or low potency users, and was also independent of the presence of a psychotic disorder.'

Dr Dazzan added: 'There is an urgent need to educate health professionals, the public and policymakers about the risks involved with cannabis use.

'As we have suggested previously, when assessing cannabis use it is extremely important to gather information on how often and what type of cannabis is being used. These details can help quantify the risk of mental health problems and increase awareness on the type of damage these substances can do to the brain.'

This research was funded primarily by the NIHR Biomedical Research Centre at the South London and Maudsley (SLaM) NHS Foundation Trust and King's College London.

The study was led by Dr Tiago Reis Marques and Dr Paola Dazzan of the IoPPN at King's, and Dr Silvia Rigucci of Sapienza University of Rome.

Contacts and sources:
Jack Stonebridge
King's College London

Loss of Carbon in Martian Atmosphere Explained

Mars is blanketed by a thin, mostly carbon dioxide atmosphere -- one that is far too thin to keep water from freezing or quickly evaporating. However, geological evidence has led scientists to conclude that ancient Mars was once a warmer, wetter place than it is today. To produce a more temperate climate, several researchers have suggested that the planet was once shrouded in a much thicker carbon dioxide atmosphere. For decades that left the question, "Where did all the carbon go?"

The solar wind stripped away much of Mars' ancient atmosphere and is still removing tons of it every day. But scientists have been puzzled by why they haven't found more carbon -- in the form of carbonate -- captured into Martian rocks. They have also sought to explain the ratio of heavier and lighter carbons in the modern Martian atmosphere.

This graphic depicts paths by which carbon has been exchanged among Martian interior, surface rocks, polar caps, waters and atmosphere, and also depicts a mechanism by which it is lost from the atmosphere with a strong effect on isotope ratio. 
Image Credit: Lance Hayashida/Caltech 

Now a team of scientists from the California Institute of Technology and NASA's Jet Propulsion Laboratory, both in Pasadena, offer an explanation of the "missing" carbon, in a paper published today by the journal Nature Communications.

They suggest that 3.8 billion years ago, Mars might have had a moderately dense atmosphere. Such an atmosphere -- with a surface pressure equal to or less than that found on Earth -- could have evolved into the current thin one, not only without leaving a "missing" carbon problem, but also in a way consistent with the observed ratio of carbon-13 to carbon-12, two isotopes that differ only in how many neutrons are in each nucleus.

"Our paper shows that transitioning from a moderately dense atmosphere to the current thin one is entirely possible," says Caltech postdoctoral fellow Renyu Hu, the lead author. "It is exciting that what we know about the Martian atmosphere can now be pieced together into a consistent picture of its evolution -- and this does not require a massive undetected carbon reservoir."

When considering how the early Martian atmosphere might have transitioned to its current state, there are two possible mechanisms for the removal of the excess carbon dioxide. Either the carbon dioxide was incorporated into minerals in rocks called carbonates or it was lost to space.

An August 2015 study used data from several Mars-orbiting spacecraft to inventory carbonates, showing there are nowhere near enough in the upper half mile (about one kilometer) of the crust to contain the missing carbon from a thick early atmosphere during the time when networks of ancient river channels were active, about 3.8 billion years ago.

The escaped-to-space scenario has also been problematic. Because various processes can change the relative amounts of carbon-13 to carbon-12 isotopes in the atmosphere, "we can use these measurements of the ratio at different points in time as a fingerprint to infer exactly what happened to the Martian atmosphere in the past," says Hu. The first constraint is set by measurements of the ratio in meteorites that contain gases released volcanically from deep inside Mars, providing insight into the starting isotopic ratio of the original Martian atmosphere. The modern ratio comes from measurements by the SAM (Sample Analysis at Mars) instrument on NASA's Curiosity rover.

One way carbon dioxide escapes to space from Mars' atmosphere is called sputtering, which involves interactions between the solar wind and the upper atmosphere. NASA's MAVEN (Mars Atmosphere and Volatile Evolution) mission has yielded recent results indicating that about a quarter pound (about 100 grams) of particles is stripped from today's Martian atmosphere every second via this process, likely the main driver of atmospheric loss. Sputtering slightly favors loss of carbon-12 over carbon-13, but this effect is small. The Curiosity measurement shows that today's Martian atmosphere is far more enriched in carbon-13 -- in proportion to carbon-12 -- than it should be as a result of sputtering alone, so a different process must also be at work.
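As a rough check on that figure, a steady loss of about 100 grams per second adds up to several metric tons of atmosphere stripped away each day, in line with the earlier statement that the solar wind is still removing tons of it every day. The sketch below assumes a constant rate, which is a simplification; the actual rate varies with solar activity.

```python
# Back-of-the-envelope check on the MAVEN sputtering figure quoted above.
# Assumes a steady loss of roughly 100 grams per second, which is a
# simplification; the real rate varies with solar activity.

loss_rate_g_per_s = 100.0      # grams per second (figure quoted in the text)
seconds_per_day = 86_400
grams_per_metric_ton = 1e6

loss_per_day_tons = loss_rate_g_per_s * seconds_per_day / grams_per_metric_ton
loss_per_year_tons = loss_per_day_tons * 365.25

print(f"~{loss_per_day_tons:.1f} metric tons of atmosphere lost per day")
print(f"~{loss_per_year_tons:,.0f} metric tons per year")
```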

Hu and his co-authors identify a mechanism that could have significantly contributed to the carbon-13 enrichment. The process begins with ultraviolet (UV) light from the sun striking a molecule of carbon dioxide in the upper atmosphere, splitting it into carbon monoxide and oxygen. Then, UV light hits the carbon monoxide and splits it into carbon and oxygen. Some carbon atoms produced this way have enough energy to escape from the atmosphere, and the new study shows that carbon-12 is far more likely to escape than carbon-13.

Modeling the long-term effects of this "ultraviolet photodissociation" mechanism, the researchers found that a small amount of escape by this process leaves a large fingerprint in the carbon isotopic ratio. That, in turn, allowed them to calculate that the atmosphere 3.8 billion years ago might have had a surface pressure slightly lower than that of Earth's atmosphere today.
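The logic of that fingerprint can be illustrated with a standard Rayleigh-distillation model: if the escaping carbon carries a lower carbon-13 to carbon-12 ratio than the atmosphere it leaves behind, the reservoir that remains becomes progressively enriched in carbon-13. The sketch below uses an arbitrary fractionation factor, not the value derived in the paper, simply to show how a modest amount of escape can shift the ratio noticeably.

```python
# Rayleigh-distillation sketch of how escape that favors carbon-12 enriches
# the remaining atmosphere in carbon-13. The fractionation factor 'alpha'
# below is arbitrary and illustrative, not the value derived in the paper.

def enrichment(f_remaining: float, alpha: float) -> float:
    """Ratio of the current 13C/12C ratio to the starting ratio.

    f_remaining : fraction of the original carbon still in the atmosphere
    alpha       : (13C/12C) of the escaping gas divided by (13C/12C) of the
                  atmosphere; values below 1 mean carbon-12 escapes more easily
    """
    return f_remaining ** (alpha - 1.0)

alpha = 0.6  # hypothetical strong preference for losing carbon-12
for f in (0.9, 0.5, 0.1):
    shift_permil = (enrichment(f, alpha) - 1.0) * 1000.0
    print(f"{f:.0%} of the carbon left -> 13C/12C up by ~{shift_permil:.0f} per mil")
```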

"This solves a long-standing paradox," said Bethany Ehlmann of Caltech and JPL, a co-author of both today's publication and the August one about carbonates. "The supposed very thick atmosphere seemed to imply that you needed this big surface carbon reservoir, but the efficiency of the UV photodissociation process means that there actually is no paradox. You can use normal loss processes as we understand them, with detected amounts of carbonate, and find an evolutionary scenario for Mars that makes sense."

Contacts and sources:
Deborah Williams-Hedges
California Institute of Technology, Pasadena

Guy Webster
Jet Propulsion Laboratory, Pasadena, Calif.

NEOWISE Observes Carbon Gases in Comets

After its launch in 2009, NASA's NEOWISE spacecraft observed 163 comets during the WISE/NEOWISE prime mission. This sample from the space telescope represents the largest infrared survey of comets to date. Data from the survey are giving new insights into the dust, comet nucleus sizes, and production rates for difficult-to-observe gases like carbon dioxide and carbon monoxide. Results of the NEOWISE census of comets were recently published in the Astrophysical Journal.
An expanded view of comet C/2006 W3 (Christensen) is shown here. The WISE spacecraft observed this comet on April 20th, 2010 as it traveled through the constellation Sagittarius. 
Image credit: NASA/JPL-Caltech

Carbon monoxide (CO) and carbon dioxide (CO2) are common molecules found in the environment of the early solar system, and in comets. In most circumstances, water-ice sublimation likely drives the activity in comets when they come nearest to the sun, but at larger distances and colder temperatures, other common molecules like CO and CO2 may be the main drivers. Carbon dioxide and carbon monoxide in comets are difficult to detect directly from the ground because their abundance in Earth's own atmosphere obscures the signal. The NEOWISE spacecraft soars high above Earth's atmosphere, making these measurements of a comet's gas emissions possible.

"This is the first time we've seen such large statistical evidence of carbon monoxide taking over as a comet's gas of choice when they are farther out from the sun," said James Bauer, deputy principal investigator of the NEOWISE mission from NASA's Jet Propulsion Laboratory in Pasadena, California, and author of a paper on the subject. "By emitting what is likely mostly carbon monoxide beyond four astronomical units (4 times the Earth-Sun distance; about 370 million miles, 600 million kilometers) it shows us that comets may have stored most of the gases when they formed, and secured them over billions of years. Most of the comets that we observed as active beyond 4 AU are long-period comets, comets with orbital periods greater than 200 years that spend most of their time beyond Neptune's orbit."

While the amount of carbon monoxide and dioxide increases relative to ejected dust as a comet gets closer to the sun, the percentage of these two gases, when compared to other volatile gases, decreases.

"As they get closer to the sun, these comets seem to produce a prodigious amount of carbon dioxide," said Bauer. "Your average comet sampled by NEOWISE would expel enough carbon dioxide to provide the bubble power for thousands of cans of soda per second."

The pre-print version of this paper is available at: http://arxiv.org/abs/1509.08446

The NEOWISE mission hunts for near-Earth objects using the Wide-field Infrared Survey Explorer (WISE) spacecraft. Funded by NASA's Planetary Science division, the NEOWISE project uses images taken by the spacecraft to look for asteroids and comets, providing a rich source of measurements of solar system objects at infrared wavelengths. These measurements include emission lines that are difficult or impossible to detect directly from the ground.

Contacts and sources:
DC Agle
Jet Propulsion Laboratory

Strange Star Likely Swarmed by Comets Not Alien Structures

Was it a catastrophic collision in the star's asteroid belt? A giant impact that disrupted a nearby planet? A dusty cloud of rock and debris? A family of comets breaking apart? Or was it alien megastructures built to harvest the star's energy?

Just what caused the mysterious dimming of star KIC 8462852?

This illustration shows a star behind a shattered comet. Observations of the star KIC 8462852 by NASA's Kepler and Spitzer space telescopes suggest that its unusual light signals are likely from dusty comet fragments, which blocked the light of the star as they passed in front of it in 2011 and 2013. The comets are thought to be traveling around the star in a very long, eccentric orbit.
Illustration by NASA/JPL-Caltech.

Massimo Marengo, an Iowa State University associate professor of physics and astronomy, wondered about all the buzz surrounding the mysterious star found by citizen scientists on the Planet Hunters website.

Those citizen scientists were highlighting measurements of star brightness recorded by NASA's Kepler spacecraft. Tiny dips in a star's brightness can indicate a planet is passing in front of the star. That's how Kepler astronomers - and citizen scientists using the internet to help analyze the light curves of stars - are looking for planets.

But this star had deep dips in brightness - up to 22 percent. The star's brightness also changed irregularly, sometimes for days and even months at a time. A search of the 150,000-plus stars in Kepler's database found nothing like this.

So Marengo and two other astronomers decided to take a close look at the star using data taken with the Infrared Array Camera of NASA's Spitzer Space Telescope. They report their findings in a paper recently published online by The Astrophysical Journal Letters.

Their conclusion?

"The scenario in which the dimming in the KIC 8462852 light curve were caused by the destruction of a family of comets remains the preferred explanation ...," wrote the three - Marengo; Alan Hulsebus, an Iowa State doctoral student; and Sarah Willis, a former Iowa State graduate student now with the Massachusetts Institute of Technology's Lincoln Laboratory.

Questions about the star were raised last month when a research team led by Tabetha Boyajian of Yale University reported on the star in the Monthly Notices of the Royal Astronomical Society. The astronomers reported how citizen scientists tagged the star's deep and irregular dips in brightness as "bizarre" and "interesting."

Boyajian and the other researchers looked at the data and investigated several possible causes. They wrote the "most promising theory" was a barrage of crumbling comets passing in front of the star.

In a subsequent paper submitted to The Astrophysical Journal, Jason Wright and colleagues at Penn State University speculated about other causes, including alien megastructures built to harvest energy while orbiting the star.

When the Iowa State astronomers studied the star with Spitzer infrared data from January 2015 - two years after the Kepler measurements - Marengo said they didn't see much. If there had been some kind of catastrophe near the star, he said there would be a lot of dust and debris. And that would show up as extra infrared emissions.

Marengo said the study looked at two different infrared wavelengths: the shorter was consistent with a typical star and the longer showed some infrared emissions, but not enough to reach a detection threshold. The astronomers concluded there were no excess infrared emissions and therefore no sign of an asteroid belt collision, a giant impact on a planet or a dusty cloud of rock and debris.

So Marengo and his colleagues say the destruction of a family of comets near the star is the most likely explanation for the mysterious dimming. Comet fragments coming in rapidly on a steep, elliptical orbit could create a big debris cloud that could dim the star. Then the cloud would move off, restoring the star's brightness and leaving no trace of excess infrared light.

And the alien megastructure theory?

"We didn't look for that," Marengo said. "We can't really say it is, or is not. But what the star is doing is very strange. It's interesting when you have phenomena like that - typically it means there's some new physical explanation or a new concept to be discovered."

Contacts and sources: 
Massimo Marengo
 Iowa State University

Wednesday, November 25, 2015

Volcanic Rocks Hold Clues To Earth's Interior

The journey for volcanic rocks found on many volcanic islands began deep within the Earth.

Brought to the Earth's surface in eruptions of deep volcanic material, these rocks hold clues as to what is going on deep beneath Earth's surface.

Studies of rocks found on certain volcanic islands, known as ocean island basalts, revealed that although these erupted rocks originate from Earth's interior, they are not the same chemically.

A group of former and current Arizona State University researchers say chemical differences found between rock samples at volcanic hotspots around the world can be explained by a model of mantle dynamics in which plumes, upwellings of abnormally hot rock within the Earth's mantle, originate in the lower mantle and physically interact with chemically distinct piles of material.

Credit:  NASA/Jeff Schmaltz/LANCE/EOSDIS MODIS Rapid Response Team/GSFC


The team revealed that this theoretical model of material transport can easily produce the chemical variability observed at hotspot volcanoes (such as Hawaii) around the world.

"This model provides a platform for understanding links between the physics and chemistry that formed our modern world as well as habitable planets elsewhere," says Curtis Williams, lead author of the study whose results are published in the Nov. 24 issue of the journalNature Communications.

Basalts collected from ocean islands such as Hawaii and those collected from mid-ocean ridges (that erupt at spreading centers deep below oceans) may look similar to the naked eye; however, in detail their trace elements and isotopic compositions can be quite distinct. These differences provide valuable insight into the chemical structure and temporal evolution of Earth's interior.

"In particular, it means that the Earth's mantle - the hot rock below Earth's crust but above the planet's iron core - is compositionally heterogeneous. Understanding when and where these heterogeneities are formed and how they are transported through the mantle directly relates to the initial composition of the Earth and how it has evolved to its current, habitable state," said Williams, a postdoc at UC Davis.

While a graduate student in ASU's School of Earth and Space Exploration, Williams and faculty members Allen McNamara and Ed Garnero conceived a study to further understand how chemical complexities that exist deep inside the Earth are transported to the surface and erupt as intraplate volcanism (such as that which formed the Hawaiian islands). Along with fellow graduate student Mingming Li and Professional Research Associate Matthijs van Soest, the researchers depict a model Earth in whose interior reside distinct reservoirs of mantle material that may have formed during the earliest stages of Earth's evolution.

Incorporating such reservoirs into their models is supported by geophysical observations of two continent-sized regions - one below the Pacific Ocean and one below parts of the Atlantic Ocean and Africa - sitting atop the core-mantle boundary.

"In the last several years, we have witnessed a sharpening of the focus knob on seismic imaging of Earth's deep interior. We have learned that the two large anomalous structures at the base of the mantle behave as if they are compositionally distinct. That is, we are talking about different stuff compared to the surrounding mantle. These represent the largest internal anomalies in Earth of unknown chemistry and origin," said Garnero.

These chemically distinct regions also underlie a majority of hotspot volcanism, with hot mantle plumes rising from the tops of the piles to Earth's surface, suggesting a potential link between these ancient, chemically distinct regions and the chemistry of hotspot volcanism.

To test the validity of their model, Williams and coauthors compare their predictions of the variability of the ratios of helium isotopes (helium-3 and helium-4) in plumes to that observed in ocean island basalts.

3He is a so-called primordial isotope found in the Earth's mantle. It was created before the Earth was formed and is thought to have become entrapped within the Earth during planetary formation. Today, it is not being added to Earth's inventory at a significant rate, unlike 4He, which accumulates over time.

Williams explained: "The ratio of helium-3 to helium-4 in mid-ocean ridge basalts are globally characterized by a narrow range of small values and are thought to sample a relatively homogenous upper mantle. On the other hand, ocean island basalts display a much wider range, from small to very large, providing evidence that they are derived from different source regions and are thought to sample the lower mantle either partially or in its entirety."

The variability of 3He to 4He in ocean island basalts is not only observed between different hotspots, but temporally within the different-aged lavas of a single hotspot track.

"The reservoirs and dynamics associated with this variability had remained unclear and was the primary motivation behind the study presented here," said Williams.

Contacts and sources:
Karin Valentine
Arizona State University

Mars' Moon Phobos Will Break into Rings Like Saturn

Mars' largest moon, Phobos, is slowly falling toward the planet, but rather than smash into the surface, it likely will be shredded and the pieces strewn about the planet in a ring like the rings encircling Saturn, Jupiter, Uranus and Neptune.

Though inevitable, the demise of Phobos is not imminent. It will probably happen in 20 to 40 million years, leaving a ring that will persist for anywhere from one million to 100 million years, according to two young earth scientists at the University of California, Berkeley.

Mars could gain a ring in 10-20 million years when its moon Phobos is torn to shreds by tidal forces due to Mars' gravitational pull.

Image by Tushar Mittal using Celestia 2001-2010, Celestia Development Team.

In a paper appearing online this week in Nature Geoscience, UC Berkeley postdoctoral fellow Benjamin Black and graduate student Tushar Mittal estimate the cohesiveness of Phobos and conclude that it is insufficient to resist the tidal forces that will pull it apart when it gets closer to Mars.

Just as Earth's moon pulls on our planet in different directions, raising tides in the oceans, for example, so too Mars tugs differently on different parts of Phobos. As Phobos gets closer to the planet, the tugs become strong enough to actually pull the moon apart, the scientists say. This is because Phobos is highly fractured, with lots of pores and rubble. Dismembering it is analogous to pulling apart a granola bar, Black said, scattering crumbs and chunks everywhere.
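A rough sense of where that happens comes from the classical rigid-body Roche limit, which balances a moon's self-gravity against the planet's tides. The sketch below uses commonly cited approximate values for the radius of Mars and the densities of Mars and Phobos, not numbers from the paper, whose strength-based analysis considers breakup scenarios as close in as about 1.2 Mars radii, quoted later in this article.

```python
# Illustrative estimate of the distance inside which Mars' tides overwhelm
# Phobos' self-gravity, using the classical rigid-body Roche limit. The radius
# and densities are commonly cited approximate values, not numbers from the
# paper, which also folds in the strength of Phobos' fractured interior.

R_MARS_KM = 3390.0    # mean radius of Mars (approximate)
RHO_MARS = 3930.0     # bulk density of Mars, kg/m^3 (approximate)
RHO_PHOBOS = 1860.0   # bulk density of Phobos, kg/m^3 (approximate)

# Rigid-body Roche limit: d = R_planet * (2 * rho_planet / rho_moon)^(1/3)
d_roche_km = R_MARS_KM * (2.0 * RHO_MARS / RHO_PHOBOS) ** (1.0 / 3.0)

print(f"Gravity-only breakup distance: ~{d_roche_km:,.0f} km from Mars' center "
      f"(~{d_roche_km / R_MARS_KM:.1f} Mars radii)")
# The strength-based analysis in the paper considers breakup scenarios down to
# about 1.2 Mars radii (roughly 680 km above the surface), as quoted below.
```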

The resulting rubble from Phobos - rocks of various sizes and a lot of dust - would continue to orbit Mars and quickly distribute themselves around the planet in a ring.

While the largest chunks would eventually spiral into the planet and collide at a grazing angle to produce egg-shaped craters, the majority of the debris would circle the planet for millions of years until these pieces, too, drop onto the planet in 'moon' showers, like meteor showers. Only Mars' other moon, Deimos, would remain.

Different moons, different fates

Black and Mittal, both in UC Berkeley's Department of Earth and Planetary Science, were drawn to the question of what might happen to Phobos because its fate is expected to be so different from that of most other moons in our solar system.

"While our moon is moving away from earth at a few centimeters per year, Phobos is moving toward Mars at a few centimeters per year, so it is almost inevitable that it will either crash into Mars or break apart," Black said. "One of our motivations for studying Phobos was as a test case to develop ideas of what processes a moon might undergo as it moves inward toward a planet."

Only one other moon in the solar system, Neptune's largest moon, Triton, is known to be moving closer to its planet.

Studying such moons is relevant to conditions in our early solar system, Mittal said, when it's likely there were many more moons around the planets that have since disintegrated into rings - the suspected origins of the rings of the outer planets. Some studies estimate that during planet formation, 20-30 percent of planets acquire moons moving inward and destined for destruction, though they would have long since disappeared. Some of Mars' several thousand elliptical craters may even have been formed by remnants of such moonlets crashing to the surface at a grazing angle.

When tidal stresses overcome rock strength

To estimate the strength of Phobos, Black and Mittal looked at data from similarly fractured rocks on Earth and from meteorites that struck Earth and have a density and composition similar to Phobos. They also constrained the strength of Phobos based on results from simulations of the 10-kilometer-diameter Stickney impact crater, which formed in the past when a rock rammed into Phobos without quite smashing the moon apart. That crater spans about one-sixth the circumference of Phobos and looks as if someone took a scoop out of the moon.

Once they determined when and how they expected tidal forces to tear Phobos apart, Mittal modeled the evolution of the ring, adapting techniques developed to understand Saturn's rings.

"If the moon broke apart at 1.2 Mars radii, about 680 kilometers above the surface, it would form a really narrow ring comparable in density to that of one of Saturn's most massive rings," Mittal said. "Over time it would spread out and get wider, reaching the top of the Martian atmosphere in a few million years, when it would start losing material because stuff would keep raining down on Mars."

If the moon breaks up farther from Mars, the ring could persist for 100 million years before raining down on Mars, they found.

Mittal said it's not clear whether the dust and debris rings would be visible from Earth, since dust does not reflect much sunlight, whereas ice in the rings of the outer planets makes them easily visible. But Mars' ring may reflect enough light to make the planet slightly brighter as seen from Earth, he said, and through a telescope the shadows of the rings might also be visible on the surface.

"Standing on the surface of Mars a few tens of millions of years from now, it would be pretty spectacular to watch," Black said.

Contacts and sources:
Robert Sanders
 University of California, Berkeley

Earth's Magnetic Field Is Not About To Flip

The intensity of Earth's magnetic field has been weakening over the last couple of hundred years, leading some scientists to think that its polarity might be about to flip. But the field's intensity may simply be coming down from an abnormal high rather than approaching a reversal, scientists write in a new paper in the Proceedings of the National Academy of Sciences.

Humans have lived through dips in the field's intensity before, and there are debates about whether reversals in the more distant past had any connection to species extinctions. Today, however, we have something else that would be affected by a weakening of the field alone: technology. The magnetic field deflects the solar wind and cosmic rays. When the field is weaker, more radiation gets through, which can disrupt power grids and satellite communications.

This is an artistic impression of how auroras could be more widespread under a geomagnetic field much weaker than today's.

Credit: Huapei Wang, with source files courtesy of NASA's Earth Observatory/NOAA/DOD

"The field may be decreasing rapidly, but we're not yet down to the long-term average. In 100 years, the field may even go back the other direction [in intensity]," said Dennis Kent, an expert in paleomagnetism at Columbia University's Lamont-Doherty Earth Observatory and co-author of the study with his former student, Huapel Wang, now a post-doctoral research associate at MIT, and Pierre Rochette of Aix-Marseille Université.

The scientists used a new technique to measure changes in the magnetic field's strength in the past and found that its long-term average intensity over the past five million years was much weaker than the global database of paleointensity suggests - only about 60 percent of the field's strength today. The findings raise questions both about claims that the magnetic field may be nearing a reversal and about the database itself.

Geomagnetic polarity timescale: Filled and open blocks represent intervals of normal and reverse geomagnetic field polarity over the past 40 million years.

Credit: Gee and Kent, 2007

The study's results fit expectations that the magnetic field's intensity at the poles should be twice its intensity at the equator. In contrast, the time-averaged intensity calculated from the PINT paleointensity database doesn't meet the two-to-one, poles-to-equator dipole hypothesis, and the database calculation suggests that the long-term average intensity over the past 5 million years is similar to the field's intensity today.

The authors believe the difference lies in how the samples are analyzed. They say the database, which catalogs paleointensity data from published papers, mixes a variety of methods and doesn't clearly delineate data from two different types of magnetized mineral samples: tiny single-domain grains, which come from sites that cooled quickly, like basalt glass on the outer edges of lava flows; and the more common, larger multi-domain grains found deeper inside lava, whose magnetic behavior is more complex and requires a different type of analysis.

Earth's magnetic poles have reversed several hundred times over the past 100 million years, most recently about 780,000 years ago. Some scientists believe a dip in the magnetic field's intensity 41,000 years ago was also a brief reversal. When scientists recently began noticing a decline in the magnetic field - about 10 percent over the past two centuries - it led to speculation that another reversal could be coming. That doesn't mean it would happen quickly, if it happens at all. The magnetic field's intensity rises and dips without a clear pattern, only sometimes dipping far enough to become unstable and possibly reverse. During a reversal, geomagnetic intensity declines during a transition period that typically lasts hundreds to thousands of years, then rebuilds.

For the new study, the scientists used ancient lava flows from sites near the equator and compared the paleointensity data with what had been regarded as an anomalously low intensity obtained by others from lavas near the South Pole. As lava cools, iron-bearing minerals form inside and act like tiny magnets, aligning with the Earth's magnetic field. Scientists can analyze ancient lava to determine both the direction and the intensity of the magnetic field at the time the lava formed.

The scientists used a new technique for analyzing multi-domain samples. They worked with a representative range from the past 5 million years using 27 lavas from the Galápagos Islands, about 1 degree of latitude from the equator. The results were then compared to those from 38 lavas with single-domain properties from a volcanic area near McMurdo Station in Antarctica, about 12 degrees from the South Pole.

When they averaged the geomagnetic intensity of each set, it revealed close to a two-to-one intensity difference between the polar site and the equatorial site, fitting the geocentric axial dipole (GAD) hypothesis, on which most paleogeographic reconstructions rely.
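That two-to-one expectation follows directly from dipole geometry: for a geocentric axial dipole, field intensity grows with latitude in proportion to sqrt(1 + 3 sin^2(latitude)), reaching exactly twice the equatorial value at the poles. A minimal sketch of the arithmetic, using the approximate site latitudes given above:

```python
# Sketch of the two-to-one expectation for a geocentric axial dipole (GAD):
# dipole field intensity varies with latitude as sqrt(1 + 3 * sin(lat)^2),
# which reaches exactly twice the equatorial value at the poles. Site
# latitudes are the approximate values quoted in the text.

import math

def gad_intensity_factor(latitude_deg: float) -> float:
    """Dipole field intensity at a given latitude, relative to the equator."""
    lat = math.radians(latitude_deg)
    return math.sqrt(1.0 + 3.0 * math.sin(lat) ** 2)

galapagos = gad_intensity_factor(1.0)   # ~1 degree from the equator
mcmurdo = gad_intensity_factor(78.0)    # ~12 degrees from the South Pole

print(f"Galapagos site: {galapagos:.2f} x equatorial intensity")
print(f"McMurdo site:   {mcmurdo:.2f} x equatorial intensity")
print(f"Predicted ratio: {mcmurdo / galapagos:.2f}")  # close to 2
```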

The results show that the time-averaged geomagnetic field intensity over the past 5 million years is about 60 percent of the field's intensity today and aligns with the GAD hypothesis, both in direction and intensity. Other studies using only single-domain basalt glass from the ocean floor have found a similar time-averaged intensity, but they did not have samples to test the polar-equator ratio. The agreement helps to validate the new multiple-domain analysis technique, Kent said.

The lower time-averaged paleointensity also suggests a shorter average magnetopause standoff distance--the distance at which the Earth's magnetic field repels the solar wind. The average is about 9 times the Earth's radius compared to nearly 11 times the Earth's radius today, according to the paper. A shorter standoff distance results in stronger radiation at Earth's surface and in the atmosphere, causing more frequent low-latitude auroras.
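Those two distances are consistent with a simple pressure-balance argument: for a fixed solar wind, the standoff distance scales roughly as the cube root of the field strength, so a field averaging 60 percent of today's value implies a standoff distance about 84 percent as large. A minimal sketch of that arithmetic, assuming the solar-wind pressure itself is unchanged:

```python
# Consistency check on the standoff distances quoted above, assuming the
# solar-wind pressure is unchanged so the magnetopause standoff distance
# scales roughly as the cube root of the dipole field strength.

standoff_today_re = 11.0   # "nearly 11" Earth radii today (from the text)
field_fraction = 0.60      # past average field ~60% of today's (from the text)

standoff_past_re = standoff_today_re * field_fraction ** (1.0 / 3.0)
print(f"Implied past standoff distance: ~{standoff_past_re:.1f} Earth radii")
# Prints ~9.3, in line with the paper's "about 9 times the Earth's radius."
```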

Contacts and sources:
 Dennis Kent
 Kevin Krajick, Senior editor, science news
The Earth Institute, Columbia University