Unseen Is Free


Monday, April 30, 2012

Old Maps And Dead Clams Help Solve Coastal Boulder Mystery

Perched atop the sheer coastal cliffs of Ireland's Aran Islands, ridges of giant boulders have puzzled geologists for years. What forces could have torn these rocks from the cliff edges high above sea level and deposited them far inland?

This is the boulder ridge around the coastline of the Aran Islands. New research finds that storm waves have formed these ridges, despite the contention of some researchers that only a tsunami would have enough power to do this.
 
Credit: Ronadh Cox

While some researchers contend that only a tsunami could push these stones, new research in The Journal of Geology finds that plain old ocean waves, with the help of some strong storms, did the job.

And they're still doing it.

The three tiny Aran Islands are just off the western coast of Ireland. The elongated rock ridges form a collar along extended stretches of the islands' Atlantic coasts. The sizes of the boulders in the formations range "from merely impressive to mind-bogglingly stupendous," writes Dr. Rónadh Cox, who led the research with her Williams College students. One block the team studied weighs an estimated 78 tons, yet was still cut free from its position 36 feet above sea level and shoved further inland.


Armed with equations that model the forces generated by waves, some researchers have concluded that no ordinary ocean waves could muster the force necessary to move the largest of the boulders this high above the ocean surface and so far inland. The math suggests the rocks in the ridges could only have been put there by a tsunami.
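
The equations in question boil down to a force balance: a wave moves a boulder only if the drag and lift it exerts exceed the friction holding the block in place. A back-of-envelope sketch in Python, with illustrative dimensions, densities, and coefficients (assumptions for this example, not values from the paper):

    import math

    # Illustrative block roughly matching the 78-ton boulder described above
    # (dimensions, friction, and drag values are assumptions, not data from the study)
    rho_rock, rho_sea, g = 2600.0, 1025.0, 9.81   # kg/m^3, kg/m^3, m/s^2
    a, b, c = 4.5, 3.0, 2.0                       # length, width, height in meters
    volume = a * b * c                            # ~27 m^3, about 70 metric tons of limestone
    face_area = b * c                             # face presented to the flow, m^2
    mu, Cd = 0.7, 2.0                             # friction and drag coefficients (assumed)

    # Sliding starts when drag exceeds friction on the submerged weight:
    # 0.5 * rho_sea * Cd * A * u^2 > mu * (rho_rock - rho_sea) * V * g
    restraint = mu * (rho_rock - rho_sea) * volume * g
    u_min = math.sqrt(restraint / (0.5 * rho_sea * Cd * face_area))
    print(f"flow speed needed to slide the block: {u_min:.1f} m/s")  # ~6.9 m/s

The point of contention is not this balance itself but the wave height needed to drive such flows at the top of a cliff, which is where Cox argues the standard equations fall short.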

The equations tell one story. The islands' residents tell another. According to some locals, enormous rocks have moved in their lifetimes, despite the fact that there hasn't been a tsunami to hit the islands since 1755.

"Unless you have little green men from Mars doing this on the quiet, it must be storm waves," Cox said.

While the anecdotes from residents are interesting, Cox and her team went in search of more concrete evidence. The clincher came when the team compared modern high-altitude photos of the coastline to a set of meticulous maps surveyed in 1839. The 19th-century surveyors, whom Cox describes as "possibly the most anal men on the planet," carefully mapped not only the boulder ridges, but all of the criss-crossing stone walls that farmers built between fields. The researchers digitized the maps and overlaid them on the modern images, using the walls to line the two up accurately.

"Not only did they map every wall, they did it right. The maps aren't off by even a meter."

The overlay of the new photos with the old maps shows definitively that sections of the ridges have moved substantially since 1839—nearly 100 years after the most recent tsunami. Some sections moved inland at an average rate of nearly 10 feet per decade. In some places, the ridge had run over and demolished field walls noted on the old maps.

This image shows year-to-year movement of a large boulder on the Aran Islands off the coast of Ireland. Skid marks can be seen where this rock was recently pushed inland by storm waves.

Credit: The Journal of Geology/University of Chicago Press

Other lines of evidence corroborate residents' accounts of recent movement. When the boulders were ripped from the bedrock, tiny clams that live in cracks and crevices sometimes came along for the ride. Using radiocarbon dating, Cox and her team found that some of the rocks have been pulled from the coastline within the last 60 years. What's more, the researchers have been photographing sections of the ridge during each field season since 2006, and they've documented movement from year to year.
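
Conventional radiocarbon ages follow from simple exponential decay: measure how much carbon-14 remains in a shell relative to the modern standard, and the age falls out of the Libby mean life. A sketch of the arithmetic (the fraction is hypothetical, and real shell dates also need a marine-reservoir correction, omitted here):

    import math

    LIBBY_MEAN_LIFE = 8033.0  # years; defines conventional radiocarbon ages

    def radiocarbon_age(fraction_modern):
        """Conventional 14C age in years before present."""
        return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

    # A shell retaining 99.4% of the modern 14C level dates to only ~48 years BP,
    # the kind of result that ties a boulder to movement within living memory.
    print(round(radiocarbon_age(0.994)))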

So what of the equations that point to tsunami as the only possible earth mover?

"We've eliminated tsunami and I think we can rule out little green men," Cox said. "What that says is our equations aren't good enough."

Cox thinks the characteristics of the Aran Island shoreline are throwing off the calculations. The Aran cliffs rise nearly vertically out of the Atlantic, leaving very deep water close to the shore. As waves slam into the sheer cliff, that water is abruptly deflected back out toward the oncoming waves. This backflow may amplify subsequent waves. The result is an occasional storm wave that is much larger than one would expect.

"In this kind of environment, these would be less rare," Cox said. "You only need a couple of them to move these rocks around. The radiocarbon data show that not only are some boulders moving in recent years, but also that some of them have been in the ridges for hundreds and even a couple of thousand years. Accumulated activity of rare large-wave events over that time could certainly build these structures."

Cox plans to add a physicist to her research team in the near future to try to shed some light on the wave dynamics on the islands, but it's clear from the evidence the team has already gathered that storm waves can do more than some researchers thought.

Following the devastating Indonesian tsunami in 2004, there has been renewed interest in learning about how a tsunami can change the landscape. Cox's findings have important implications for that research.

"There's a tendency to attribute the movement of large objects to tsunami," she said. "We're saying hold the phone. Big boulders are getting moved by storm waves."

Rónadh Cox, Danielle B. Zentner, Brian J. Kirchner, and Mea S. Cook, "Boulder Ridges on the Aran Islands (Ireland): Recent Movements Caused by Storm Waves, Not Tsunami." The Journal of Geology 120:3 (May 2012). The issue will publish online Tuesday, May 1.

One of the oldest journals in geology, The Journal of Geology has since 1893 promoted the systematic philosophical and fundamental study of geology. The journal publishes original research across a broad range of subfields in geology, including geophysics, geochemistry, sedimentology, geomorphology, petrology, plate tectonics, volcanology, structural geology, mineralogy, and planetary sciences.


Contacts and sources:
Kevin Stacey
University of Chicago Press Journals

About 1 Baby Born Each Hour Addicted To Opiate Drugs In U.S., U-M Study Shows

More mothers using drugs like OxyContin, Vicodin, giving birth to babies in drug withdrawal, results of study published in JAMA

About one baby is born every hour addicted to opiate drugs in the United States, according to new research from University of Michigan physicians.

In the research published April 30 in the Journal of the American Medical Association, U-M physicians found that diagnosis of neonatal abstinence syndrome, a drug withdrawal syndrome among newborns, almost tripled between 2000 and 2009.

By 2009, the estimated number of newborns with the syndrome was 13,539 – or about one baby born each hour, according to the study that U-M researchers believe is the first to assess national trends in neonatal abstinence syndrome and mothers using opiate drugs.

"Recently, the Centers for Disease Control and Prevention released a report which found that over the last decade sales for opiate pain relievers like OxyContin and Vicodin have quadrupled," says Stephen W. Patrick, M.D., M.P.H., M.S., lead author of the study and a fellow in the University of Michigan's Division of Neonatal-Perinatal Medicine.

"Although our study was not able to distinguish the exact opiate used during pregnancy, we do know that the overall use of this class of drugs grew five-fold over the last decade and this appears to correspond with much higher rates of withdrawal in their infants."

Patrick, a Robert Wood Johnson Clinical Scholar at the University of Michigan, says multiple factors are likely to blame for the dramatic spike in use of opiate pain relievers, from their potential overuse for chronic pain to illegal sales of these drugs on the street. Overall, the U-M study showed that the number of mothers using opiate drugs increased five times over the last decade.

"Opiate use in our country is becoming an epidemic. Too often our health system reacts to problems; instead, we must address opiate use as a public health issue. To do this, we must limit opiate pain reliever use through healthcare provider education and statewide systems that watch for abuses, like people going to multiple doctors to get opiate prescriptions," Patrick says.

Neonatal abstinence syndrome causes a wide array of symptoms, including increased irritability, hypertonia (heightened muscle tone), tremors, feeding intolerance, seizures, and respiratory distress. In addition, babies with the syndrome are more likely to be born with a low birthweight.

"You can often stand in the hallway and know which babies are experiencing withdrawal. They are irritable, their cries are different, and they appear uncomfortable," Patrick says.

The majority of the mothers of babies born with the syndrome were covered by Medicaid for health care costs. The average hospital bill for babies with the syndrome increased from $39,400 in 2000 to $53,400 in 2009, a 35 percent increase. By 2009, 77.6 percent of charges for babies with the syndrome were charged to Medicaid.

In Florida, where opiate pain relievers now cause four times as many deaths as illicit drugs, the number of newborns diagnosed with the syndrome has increased five-fold in the last six years. The Florida state House and Senate recently passed legislation to form a task force to evaluate the issue.

"Given that newborns with neonatal abstinence syndrome experience longer, often medically complex and costly initial hospitalizations, this study highlights the need for increased public health measures to reduce the number of babies exposed to opiate drugs," says Matthew M. Davis, M.D., M.A.P.P., associate professor in the Child Health Evaluation and Research Unit at the U-M Medical School, and associate professor of Public Policy at the Gerald R. Ford School of Public Policy. Davis is senior author on the paper and co-director of the Robert Wood Johnson Clinical Scholar Program at U-M.

"We hope that state leaders will call for more research into the data we've provided because the majority of hospital expenditures for this condition are shouldered by state Medicaid programs."

The study is being released early to coincide with its presentation at the Pediatric Academic Societies Annual Meeting.

Journal citation: doi:10.1001/jama.2012.3951. Available to the media at www.jamamedia.org

Contacts and sources:
Mary F. Masson
University of Michigan Health System

A New Way To Track Molecular Changes In Living Mammalian Cells

Knowing how a living cell works means knowing how the chemistry inside the cell changes as the functions of the cell change. Protein phosphorylation, for example, controls everything from cell proliferation to differentiation to metabolism to signaling, and even programmed cell death (apoptosis), in cells from bacteria to humans. It’s a chemical process that has long been intensively studied, not least in hopes of treating or eliminating a wide range of diseases. But until now the close-up view – watching phosphorylation work at the molecular level as individual cells change over time – has been impossible without damaging the cells or interfering with the very processes that are being examined.

Berkeley Lab scientists observed phosphorylation in living PC12 cells stimulated by nerve growth factor as they differentiated and sent out neuron-like neurites. The researchers imaged individual cells and simultaneously obtained absorption spectra using synchrotron radiation from the Advanced Light Source. Cells not stimulated with nerve growth factor did not differentiate and showed different infrared absorption spectra.
Credit: Berkeley Lab

“To look into phosphorylation, researchers have labeled specific phosphorylated proteins with antibodies that carry fluorescent dyes,” says Hoi-Ying Holman of the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). “That gives you a great image, but you have to know exactly what to label before you can even begin.”

Holman and her coworkers worked with colleagues from the San Diego and Berkeley campuses of the University of California to develop a new technique for monitoring protein phosphorylation inside single living cells, tracking them over a week’s time as they underwent a series of major changes.

“Now we can follow cellular chemical changes without preconceived notions of what they might be,” says Holman, a pioneer in infrared (IR) studies of living cells who is director of the Berkeley Synchrotron Infrared Structural Biology program at Berkeley Lab’s Advanced Light Source (ALS) and head of the Chemical Ecology Research group in the Earth Sciences Division. “We’ve monitored unlabeled living cells by studying the nonperturbing absorption of a wide spectrum of bright synchrotron infrared radiation from the ALS.”

The researchers report their results in the American Chemical Society journal Analytical Chemistry.

Phosphorylation fundamentals

Phosphorylating enzymes add one or more phosphate groups to three amino-acid residues common in proteins – serine, threonine, or tyrosine – which activates the proteins; removing the phosphate reverses the process. The research goal is to learn exactly when proteins such as enzymes and receptors are switched on and off by phosphorylation, and which cells within a population are responding to cause specific changes – for example, during differentiation of a progenitor cell into its functional form.

To avoid killing cells or introducing modified proteins or foreign bodies that may alter their behavior, scientists can use a method called Fourier-transform infrared (FTIR) spectromicroscopy; because infrared light has lower photon energy than x-rays, it can peer inside living cells without damaging them. Different components and different states of the cell absorb different wavelengths of the broad infrared spectrum; applying the Fourier-transform algorithm allows signals of all frequencies to be recorded simultaneously, pinpointing when, where, and what chemical changes are occurring.
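
The Fourier-transform step can be made concrete: the instrument records an interferogram as a function of optical path difference, and a fast Fourier transform converts it into a spectrum versus wavenumber. A toy NumPy version with two invented bands (illustrative mid-infrared values, not the study's phosphorylation bands):

    import numpy as np

    n, dx = 4096, 1.0e-4          # samples; step in cm of optical path difference
    x = np.arange(n) * dx

    # Synthetic interferogram with bands at 1650 and 2900 cm^-1
    # (illustrative values, not the study's phosphorylation bands)
    interferogram = np.cos(2 * np.pi * 1650 * x) + 0.5 * np.cos(2 * np.pi * 2900 * x)

    # Apodize and transform: one measurement yields the whole spectrum at once
    spectrum = np.abs(np.fft.rfft(interferogram * np.hanning(n)))
    wavenumbers = np.fft.rfftfreq(n, d=dx)      # cycles per cm, i.e. cm^-1

    for k in (1650, 2900):
        i = np.argmin(np.abs(wavenumbers - k))
        print(f"peak near {wavenumbers[i]:.0f} cm^-1, amplitude {spectrum[i]:.0f}")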

Most infrared sources are dim, however, so the information from typical IR set-ups is limited in resolution and has a low signal-to-noise ratio. Infrared from the ALS’s synchrotron light source is a hundred to a thousand times brighter.


PC12 cells treated with nerve growth factor underwent a series of changes due to phosphorylation. Beginning at Day 3 they sent out neurites, resembling the growth of nerve cells. Spectromicroscopy at beamline 1.4.3 of the Advanced Light Source tracked specific local chemical changes in the living cells.

Credit: Berkeley Lab

Previously Holman and her colleagues have used IR beamline 1.4.3, managed by Berkeley Lab’s Michael Martin and Hans Bechtel, to obtain spectra from living organisms in rock, soil, and water. They have monitored ongoing biochemistry within living bacteria adapting to stress, and more recently within individual skin connective tissue cells (fibroblasts) from patients with mitochondrial disorders. (Mitochondria are the cellular organelles commonly known as the “power-plants” of the cell.)

The present study was done with a line of cultured cells called PC12. When nerve growth factor, a small protein, is introduced into a PC12 cell, the cell begins to send out neurites resembling the projections from nerve cell bodies. Although originally derived from a tumor of the rat’s adrenal gland, PC12 has become, rather counterintuitively, a valuable model of how nerve cells differentiate from their unspecialized progenitors.

Berkeley Lab postdoctoral fellow Liang Chen began the current experiments by introducing nerve growth factor to groups of PC12 cells to induce them to differentiate; one group of cells was left untreated as a control. The cells were cultured on gold-coated slides in chambers maintained at body temperature in a humidified environment and supplied with nutrients. Individual cells of a group were positioned under the infrared beam at the beamline 1.4.3 endstation.

FTIR spectra were collected before and after the nerve growth factor was introduced. After stimulation, the spectra were taken first at short intervals, from two to sixty minutes apart. Additional spectra were collected of cells in other groups on the third, fifth, and seventh day of continued stimulation.

The first day’s spectra revealed spikes in phosphorylation activity within minutes after the addition of the nerve growth factor, in concert with changes in the ratios of such important chemical contents of the cell as proteins, carbohydrates, and lipids. Phosphorylation subsequently waned, then picked up again in another burst of activity on Day 3, just as the cells began to extend neurites.

By comparing results with quantum chemistry simulations by Berkeley Lab’s Zhao Hao — predicting what should be observed from first principles — as well as with results from partial studies using other methods, the researchers confirmed the monitoring of phosphorylation phases, their timing, and their target proteins, along with associated changes in other substances in the cell.

In the top panel, different modes of imaging of the same cell show the differences between visible light microscopy and fluorescence imaging, and in the lower panel, the images resulting from Fourier-transform infrared spectromicroscopy. Infrared absorption at different frequencies pinpoints different cell components at specific locations in the living cell.  

Credit: Berkeley Lab

A new technique takes off

“This experiment was a proof of the concept,” says Liang Chen. “We demonstrated the dynamics of protein phosphorylation in controlling differentiation in this biological system using synchrotron infrared spectromicroscopy, and we pointed the way to answering the many questions a biologist has to ask about measuring the coordination of specific processes in real time.”

Although in this first experiment the team was not able to follow individual cells continuously, they were able to monitor differentiation in groups of cultured PC12 cells in real time, without labeling or any other invasive procedure. It was the first step in an ambitious inquiry into the real-time biochemistry of living mammalian cells over the long term.

At beamline 1.4.3, with the help of new team members Kevin Loutherback and Rafael Gomez-Sjoberg, the team is designing equipment to maintain mammalian cells in a thin layer of culture media that will keep them healthy yet not interfere with the infrared beam, while automatically monitoring and adjusting temperature, humidity, and nutrient ratios, and removing waste products. This will allow data on individual cells to be gathered continuously throughout the entire phosphorylation process.

Meanwhile the Berkeley Synchrotron Infrared Structural Biology program at ALS beamline 5.4 is building multimodal facilities that will monitor cell development in human cells, bacteria, and plants, within soils, minerals, and other environments, via “hyperspectromicroscopy” – from the ultraviolet through visible light and deep into the infrared. Researchers will be able to choose the frequency window (or combination of windows) best suited to the sample and the conditions – in Holman’s words, “to watch almost everything at once.”

Says Holman, “Many researchers from the medical communities are interested in using the technology, and we are particularly interested in collaborating with university centers and private firms that are seeking a broad view of how promising drugs act within specific cells.”

Some of the projects will target Alzheimer’s disease, macular degeneration of the retina in diabetes, and mitochondrial diseases in children. In addition, specific processes like protein glycation can also be identified. Since different cells and different organisms respond differently, the eventual goal is to develop specific ways to screen the mechanisms of individual medicines.


Contacts and sources:
Paul Preuss
DOE/Lawrence Berkeley National Laboratory

A 100-Gigabit Highway For Science

Climate researchers are producing some of the fastest-growing datasets in science. Five years ago, the amount of information generated for the Nobel Prize-winning United Nations Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report was 35 terabytes—equivalent to the amount of text in 35 million books, occupying a bookshelf 248 miles (399 km) long. By 2014, when the next IPCC report is published, experts predict that 2 petabytes of data will have been generated for it—roughly a 57-fold increase in data production.
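
The ratio is easy to check (decimal units, treating 1 petabyte as 1,000 terabytes):

    # Back-of-envelope: IPCC AR4 vs. projected AR5 data volumes
    ar4_tb = 35.0            # terabytes
    ar5_tb = 2000.0          # 2 petabytes, in terabytes
    print(f"growth factor: {ar5_tb / ar4_tb:.0f}x")  # ~57x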

Because thousands of researchers around the world contribute to the generation and analysis of this data, a reliable, high-speed network is needed to transport the torrent of information. Fortunately, the Department of Energy's (DOE) ESnet (Energy Sciences Network) has laid the foundation for such a network—not just for climate research, but for all data-intensive science.

"There is a data revolution occurring in science," says Greg Bell, acting director of ESnet, which is managed by Lawrence Berkeley National Laboratory. "Over the last decade, the amount of scientific data transferred over our network has increased at a rate of about 72 percent per year, and we see that trend potentially accelerating."

In an effort to spur U.S. scientific competitiveness, as well as accelerate development and widespread deployment of 100-gigabit technology, the Advanced Networking Initiative (ANI) was created with $62 million in funding from the American Recovery and Reinvestment Act (ARRA) and implemented by ESnet. ANI was established to build a 100 Gbps national prototype network and a wide-area network testbed.

To cost-effectively deploy ANI, ESnet partnered with Internet2—a consortium that provides high-performance network connections to universities across America—which also received a stimulus grant from the Department of Commerce's Broadband Technologies Opportunities Program.

Researchers Take a "Test Drive" on ANI

So far more than 25 groups have taken advantage of ESnet's wide-area testbed, which is open to researchers from government agencies and private industry to test new, potentially disruptive technologies without interfering with production science network traffic. The testbed currently connects three unclassified DOE supercomputing facilities: the National Energy Research Scientific Computing Center (NERSC) in Oakland, Calif., the Argonne Leadership Computing Facility (ALCF) in Argonne, Ill., and the Oak Ridge Leadership Computing Facility (OLCF) in Oak Ridge, Tenn.

"No other networking organization has a 100-gigabit network testbed that is available to researchers in this way," says Brian Tierney, who heads ESnet's Advanced Networking Technologies Group. "Our 100G testbed has been about 80 percent booked since it became available in January, which just goes to show that there are a lot of researchers hungry for a resource like this."

Climate 100

To ensure that researchers will be able to use future 100-gigabit networks effectively, another ARRA-funded project called Climate 100 brought together middleware and network engineers to develop tools and techniques for moving unprecedentedly massive amounts of climate data.

"Increasing network bandwidth is an important step toward tackling ever-growing scientific datasets, but it is not sufficient by itself; next-generation high-bandwidth networks need to be evaluated carefully from the applications perspective as well," says Mehmet Balman of Berkeley Lab's Scientific Data Management group, a member of the Climate 100 collaboration.

According to Balman, climate simulation data consists of a mix of relatively small and large files with irregular file size distribution in each dataset. This requires advanced middleware tools to move data efficiently on long-distance high-bandwidth networks.

"The ANI testbed essentially allowed us to 'test drive' on a 100-gigabit network to determine what kind of middleware tools we needed to build to transport climate data," says Balman. "Once the development was done, we used the testbed to optimize and tune."

At the 2011 Supercomputing Conference in Seattle, Wash., the Climate 100 team used their tool and the ANI testbed to transport 35 terabytes of climate data from NERSC's data storage to compute nodes at ALCF and OLCF.

"It took us approximately 30 minutes to move 35 terabytes of climate data over a wide-area 100 Gbps network. This is a great accomplishment," says Balman. "On a 10 Gbps network, it would have taken five hours to move this much data across the country."

Space Exploration

In 2024, the most powerful radio telescope ever constructed will go online. Comprising 3,000 satellite dishes spread over 250 acres, this instrument will generate more data in a single day than the entire Internet carries today. Optical fibers will connect each of these 15-meter-wide (50 ft.) satellite dishes to a central high performance computing system, which will combine all of the signals to create a detailed "big picture."

"Given the immense sensor payload, optical fiber interconnects are critical both at the central site and from remote stations to a single correlation facility," says William Ivancic, a senior research engineer at NASA's Glenn Research Center. "Future radio astronomy networks need to incorporate next generation network technologies like 100 Gbps long-range Ethernet links, or better, into their designs."

In anticipation of these future networks, Ivancic and his colleagues are utilizing a popular high-speed transfer protocol, called Saratoga, to effectively carry data over 100-gigabit long-range Ethernet links. But because it was cost-prohibitive to upgrade their local network with 100-gigabit hardware, the team could not determine how their software would perform in a real-world scenario—that is, until they got access to the ANI testbed.

"Quite frankly, we would not be doing these speed tests without the ANI testbed," says David Stewart, an engineer at Verizon Federal Systems and Ivancic's colleague. "We are currently in the development and debugging phase, and have several implementations of our code. With the ANI testbed, we were able to optimize and scale our basic Perl implementation to far higher speeds than our NASA testbed."

End-to-End Delivery

Meanwhile, Dantong Yu, who leads the Computer Science Group at Brookhaven National Laboratory, used the ANI testbed to design an ultra-high-speed, end-to-end file transfer protocol tool to move science data at 100 gigabits per second across a national network.

"A network like ANI may be able to move data at 100 Gbps, but at each end of that connection there is a host server that either uploads or downloads data from the network," says Yu. "While the host servers may be capable of feeding data into the network and downloading it at 100 Gbps, the current software running on these systems is a bottleneck."

According to Yu, the bottlenecks are primarily caused by the number of times the current software forces the computer to make copies of the data before uploading it to the network.
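
Yu's software itself isn't shown here, but the bottleneck he describes is the classic one: a naive sender read()s file data into user-space memory and then write()s it back to the kernel's socket buffer, copying every byte through the application. Kernel-side zero-copy primitives skip that round trip. A minimal sketch contrasting the two approaches (Linux-oriented; the file path and connected socket are assumed to exist):

    import os
    import socket

    CHUNK = 4 * 1024 * 1024  # 4 MiB per call

    def send_with_copies(sock: socket.socket, path: str) -> None:
        """Naive loop: each chunk is copied kernel -> user space -> kernel."""
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                sock.sendall(chunk)

    def send_zero_copy(sock: socket.socket, path: str) -> None:
        """os.sendfile moves the bytes inside the kernel, skipping the user-space copy."""
        with open(path, "rb") as f:
            size, offset = os.fstat(f.fileno()).st_size, 0
            while offset < size:
                offset += os.sendfile(sock.fileno(), f.fileno(), offset, CHUNK)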

"Initially I was testing this protocol at a very local lab level. In this scenario transfers happen in a split-second, which is far from reality," says Yu. "ANI allowed me to see how long it really takes to move data across the country, from the East Coast to the West Coast, with my software, which in turn helped me optimize the code."

The Next Steps

Within the next few months, the official ANI project will be coming to an end, but the community will continue to benefit from its investments for decades to come. The 100-gigabit prototype network will be converted into ESnet's fifth-generation production infrastructure, one that will scale to 44 times its current capacity. ESnet will also seek new sources of funding for the 100-gigabit testbed to ensure that it will be available to network researchers on a sustained basis.

"Since its inception, ESnet has delivered the advanced capabilities required by DOE science. Many of these capabilities are cost-prohibitive, or simply unavailable, on the commercial market," says Bell. "Because our network is optimized for the needs of DOE science, we're always looking for efficient ways to manage our large science flows. ESnet's new 100-gigabit network will allow us to do that more flexibly and more cost-effectively than ever."

About ESnet

ESnet provides the high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions, enabling them to work together on some of the world's most important scientific challenges including energy, climate science, and the origins of the universe. Funded by the U.S. Department of Energy's Office of Science, and managed and operated by the ESnet team at Lawrence Berkeley National Laboratory (Berkeley Lab), ESnet provides scientists with access to unique DOE research facilities and computing resources, as well as to scientific collaborators including research and education networks around the world.

Contacts and sources:
Linda Vu
DOE/Lawrence Berkeley National Laboratory

From Decade To Decade: What's The Status Of Our Groundwater Quality?

This report, Methods for Evaluating Temporal Groundwater Quality Data and Results of Decadal-Scale Changes in Chloride, Dissolved Solids, and Nitrate Concentrations in Groundwater in the United States, 1988-2010, as well as a series of interactive maps showing long-term groundwater trends, can be found online.


There was no change in concentrations of chloride, dissolved solids, or nitrate in groundwater for more than 50 percent of well networks sampled in a new analysis by the USGS that compared samples from 1988-2000 to samples from 2001-2010. For those networks that did have a change, seven times more networks saw increases as opposed to decreases.

The analysis was done by the USGS National Water Quality Assessment Program (NAWQA) to determine if concentrations of these constituents have increased or decreased significantly from the 1990s to the early 2000s nationwide.

"By providing a nation-wide, long-term, uniformly consistent analysis of trends in groundwater quality, communities can see whether they belong in the group of more than 50 percent which are maintaining their water quality, or within the group of more than 40 percent for which water quality is backsliding," said USGS Director Marcia McNutt. "Communities in the latter group can decide whether and what action may be warranted to address quality issues so they do not cause concern to human health."


Though chloride, nitrate, and dissolved solids occur naturally in the environment, human activities can cause concentrations to exceed levels that would be found naturally. At high concentrations, these chemicals can have adverse effects on human and environmental health.

High levels of chloride and dissolved solids in water don't present a risk to human health, but are considered nuisance chemicals that can cause the water to become unusable without treatment because of taste or hardness. Additionally, these chemicals can have adverse effects on ecosystems in streams and rivers when they discharge from the groundwater to these water bodies.

Excessive nitrate concentrations in groundwater have the potential to affect its suitability for drinking water. Also, when nitrate-laden water is discharged from groundwater to streams, the nitrate can end up in downstream water bodies, such as the Gulf of Mexico, and cause algal blooms. These algal blooms lead to low oxygen zones, which can be deadly to aquatic life.

Chloride, dissolved solids, and nitrate have many sources, including agricultural fertilizers, wastewater disposal, and runoff from salt used for deicing or other chemicals. Understanding changes in groundwater quality may help assess the effectiveness of management practices that have been implemented to control these sources.

"This type of long-term trend analysis is crucial for assessing whether the nation's groundwater is adequately protected from excessive concentrations of these potential contaminants," said Bruce Lindsey, lead scientist on the report. "USGS is uniquely positioned to provide this type of nationally consistent, scientific information to managers at the federal, state, and local level, so that they can make decisions that protect people and the environment."

Though a majority of the well networks tested saw no change, chloride concentrations increased in 43 percent of the well networks from the first decade to the second decade of study. Dissolved solids concentrations increased in 41 percent, and nitrate concentrations in 23 percent of well networks.

Although concentrations of these three constituents generally meet their respective EPA drinking water standards or guidelines, the proportion of samples exceeding the limits for nitrate and dissolved solids increased significantly over the decadal period at the national level.

Other important findings include:
  • The largest increases in chloride concentrations were in urban areas in the Northeastern and Upper Midwestern United States, including suburban Boston, Chicago, Detroit and Milwaukee.
  • Dissolved solids concentrations increased throughout the nation, including areas of Florida, Illinois, and the Rio Grande region.
  • The largest increases in nitrate concentrations were in key agricultural areas, including the Great Plains, areas east of Lake Michigan, and in California.
  • The magnitudes of increases in concentrations in deeper groundwater used as a source of drinking-water supply were generally less than in shallow groundwater. However, the proportions of networks with increases for both deep and shallow groundwater were similar. 
The analysis consists of samples from 1,236 wells in 56 well networks, representing major aquifers and urban and agricultural land-use areas. Samples for chloride, dissolved solids, and nitrate collected from 1988-2000 were compared to corresponding samples taken from the same well between 2001 and 2010.
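
The report spells out the exact statistics used, but the underlying design is a paired comparison: each well contributes one early-period and one later-period measurement, so a nonparametric paired test can flag a network-wide shift. A sketch of that idea with invented chloride values (not USGS data):

    from scipy.stats import wilcoxon

    # Paired chloride concentrations (mg/L) for the same wells in two periods
    # (values invented for illustration; not USGS data)
    period_1 = [12.0, 35.1, 8.4, 22.7, 40.3, 15.8, 9.9, 27.5]   # 1988-2000
    period_2 = [14.2, 36.0, 9.1, 25.9, 44.8, 18.4, 9.7, 30.2]   # 2001-2010

    stat, p = wilcoxon(period_1, period_2)
    print(f"Wilcoxon signed-rank p-value: {p:.3f}")  # small p suggests a real shift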

The NAWQA program continues to conduct studies on long-term groundwater trends. This analysis, which provides an overview of current water quality conditions and trends over time, is an important foundation for future NAWQA studies that examine the causes of changing concentrations and generate water-quality forecasts.

This report, "Methods for Evaluating Temporal Groundwater Quality Data and Results of Decadal-Scale Changes in Chloride, Dissolved Solids, and Nitrate Concentrations in Groundwater in the United States, 1988-2010" as well as links to a series of interactive maps showing long-term groundwater trends, can be found online.

This information will be presented at the eighth National Monitoring Conference, which features the latest information about the nation’s water quality from governmental and tribal organizations, academia, environmental groups, and the private sector, held in Portland, Ore. from April 30th – May 4th, 2012.



Contacts and sources:
Kara Capelli
United States Geological Survey

NASA's Chandra Sees Remarkable Outburst From Old Black Hole

An extraordinary outburst produced by a black hole in a nearby galaxy has provided direct evidence for a population of old, volatile stellar black holes. The discovery, made by astronomers using NASA's Chandra X-ray Observatory, provides new insight into the nature of a mysterious class of black holes that can produce as much energy in X-rays as a million suns radiate at all wavelengths.

An extraordinary outburst from a black hole -- where its X-ray output increased at least 3,000 times -- has been seen with NASA's Chandra X-ray Observatory in the galaxy M83. Chandra observed what is called a ULX, or ultraluminous X-ray source. The panel on the left features an optical view of the full M83 galaxy, while the right panel shows a close up of the region where the ULX was found with data from Chandra (pink) and Hubble (blue and yellow). The remarkable behavior of this ULX in M83 provides direct evidence for a population of older, volatile, stellar-mass black holes.

Credit: Left image - Optical: ESO/VLT; Close-up - X-ray: NASA/CXC/Curtin University/R. Soria et al., Optical: NASA/STScI/Middlebury College/F. Winkler et al.

Researchers used Chandra to discover a new ultraluminous X-ray source, or ULX. These objects give off more X-rays than most binary systems, in which a companion star orbits the remains of a collapsed star. These collapsed stars form either a dense core called a neutron star or a black hole. The extra X-ray emission suggests ULXs contain black holes that might be much more massive than the ones found elsewhere in our galaxy.

The companion stars to ULXs, when identified, are usually young, massive stars, implying their black holes are also young. The latest research, however, provides direct evidence that ULXs can contain much older black holes and some sources may have been misidentified as young ones.

The intriguing new ULX, discovered with Chandra in 2010, is located in M83, a spiral galaxy about 15 million light years from Earth. Astronomers compared the 2010 data with Chandra images from 2000 and 2001, and found that the source had increased in X-ray brightness by at least 3,000 times; it has since become the brightest X-ray source in M83.

The sudden brightening of the M83 ULX is one of the largest changes in X-rays ever seen for objects of this type, which do not usually show dormant periods. No sign of the ULX was found in historical X-ray images made with the Einstein Observatory in 1980, ROSAT in 1994, the European Space Agency's XMM-Newton in 2003 and 2008, or NASA's Swift observatory in 2005.

In Chandra observations that spanned several years, the ULX in M83 increased in X-ray brightness by at least 3,000 times. This sudden brightening is one of the largest changes in X-rays ever seen for objects of this type, which do not usually show dormant periods. This can be seen in the difference between the top left and top right panels. The bottom two panels show changes in optical light during that time.

Credit: Optical: ESO/VLT; Close-up - X-ray: NASA/CXC/Curtin University/R. Soria et al., Optical: NASA/STScI/Middlebury College/F. Winkler et al.

"The flaring up of this ULX took us by surprise and was a sure sign we had discovered something new about the way black holes grow," said Roberto Soria of Curtin University in Australia, who led the new study. The dramatic jump in X-ray brightness, according to the researchers, likely occurred because of a sudden increase in the amount of material falling into the black hole.

In 2011, Soria and his colleagues used optical images from the Gemini Observatory and NASA's Hubble Space Telescope to discover a bright blue source at the position of the X-ray source. The object had not been previously observed in a Magellan Telescope image taken in April 2009 or a Hubble image obtained in August 2009. The lack of a blue source in the earlier images indicates the black hole's companion star is fainter, redder and has a much lower mass than most of the companions that previously have been directly linked to ULXs. The bright, blue optical emission seen in 2011 must have been caused by a dramatic accumulation of more material from the companion star.

"If the ULX only had been observed during its peak of X-ray emission in 2010, the system easily could have been mistaken for a black hole with a massive, much younger stellar companion, about 10 to 20 million years old," said co-author William Blair of Johns Hopkins University in Baltimore.

The companion to the black hole in M83 is likely a red giant star at least 500 million years old, with a mass less than four times the sun's. Theoretical models for the evolution of stars suggest the black hole should be almost as old as its companion.

Another ULX containing a volatile, old black hole recently was discovered in the Andromeda galaxy by Amanpreet Kaur, from Clemson University, and colleagues and published in the February 2012 issue of Astronomy and Astrophysics. Matthew Middleton and colleagues from the University of Durham reported more information in the March 2012 issue of the Monthly Notices of the Royal Astronomical Society. They used data from Chandra, XMM-Newton and HST to show the ULX is highly variable and its companion is an old, red star.

"With these two objects, it's becoming clear there are two classes of ULX, one containing young, persistently growing black holes and the other containing old black holes that grow erratically," said Kip Kuntz, a co-author of the new M83 paper, also of Johns Hopkins University. "We were very fortunate to observe the M83 object at just the right time to make the before and after comparison."


A paper describing these results will appear in the May 10th issue of The Astrophysical Journal.

NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.
Contacts and sources:
Megan Watzke
Chandra X-ray Center

"Love thy neighbor" - Highly Religious People Are Less Motivated By Compassion Than Are Non-Believers

UC Berkeley study finds atheists, agnostics and less religious people are more driven by compassion to be generous

"Love thy neighbor" is preached from many a pulpit. But new research from the University of California, Berkeley, suggests that the highly religious are less motivated by compassion when helping a stranger than are atheists, agnostics and less religious people.

In three experiments, social scientists found that compassion consistently drove less religious people to be more generous. For highly religious people, however, compassion was largely unrelated to how generous they were, according to the findings which are published in the July issue of the journal Social Psychological and Personality Science.
Credit: UCLA

The results challenge a widespread assumption that acts of generosity and charity are largely driven by feelings of empathy and compassion, researchers said. In the study, the link between compassion and generosity was found to be stronger for those who identified as being non-religious or less religious.

"Overall, we find that for less religious people, the strength of their emotional connection to another person is critical to whether they will help that person or not," said UC Berkeley social psychologist Robb Willer, a co-author of the study. "The more religious, on the other hand, may ground their generosity less in emotion, and more in other factors such as doctrine, a communal identity, or reputational concerns."

Compassion is defined in the study as an emotion felt when people see the suffering of others which then motivates them to help, often at a personal risk or cost.

While the study examined the link between religion, compassion and generosity, it did not directly examine the reasons for why highly religious people are less compelled by compassion to help others. However, researchers hypothesize that deeply religious people may be more strongly guided by a sense of moral obligation than their more non-religious counterparts.

"We hypothesized that religion would change how compassion impacts generous behavior," said study lead author Laura Saslow, who conducted the research as a doctoral student at UC Berkeley.

Saslow, who is now a postdoctoral scholar at UC San Francisco, said she was inspired to examine this question after an altruistic, nonreligious friend lamented that he had only donated to earthquake recovery efforts in Haiti after watching an emotionally stirring video of a woman being saved from the rubble, not because of a logical understanding that help was needed.

"I was interested to find that this experience – an atheist being strongly influenced by his emotions to show generosity to strangers – was replicated in three large, systematic studies," Saslow said.

In the first experiment, researchers analyzed data from a 2004 national survey of more than 1,300 American adults. Those who agreed with such statements as "When I see someone being taken advantage of, I feel kind of protective towards them" were also more inclined to show generosity in random acts of kindness, such as loaning out belongings and offering a seat on a crowded bus or train, researchers found.

When they looked into how much compassion motivated participants to be charitable in such ways as giving money or food to a homeless person, non-believers and those who rated low in religiosity came out ahead: "These findings indicate that although compassion is associated with pro-sociality among both less religious and more religious individuals, this relationship is particularly robust for less religious individuals," the study found.

In the second experiment, 101 American adults watched one of two brief videos, a neutral video or a heartrending one, which showed portraits of children afflicted by poverty. Next, they were each given 10 "lab dollars" and directed to give any amount of that money to a stranger. The least religious participants appeared to be motivated by the emotionally charged video to give more of their money to a stranger.

"The compassion-inducing video had a big effect on their generosity," Willer said. "But it did not significantly change the generosity of more religious participants."

In the final experiment, more than 200 college students were asked to report how compassionate they felt at that moment. They then played "economic trust games" in which they were given money to share – or not – with a stranger. In one round, they were told that another person playing the game had given a portion of their money to them, and that they were free to reward them by giving back some of the money, which had since doubled in amount.

Those who scored low on the religiosity scale, and high on momentary compassion, were more inclined to share their winnings with strangers than other participants in the study.

"Overall, this research suggests that although less religious people tend to be less trusted in the U.S., when feeling compassionate, they may actually be more inclined to help their fellow citizens than more religious people," Willer said.



In addition to Saslow and Willer, other co-authors of the study are UC Berkeley psychologists Dacher Keltner, Matthew Feinberg and Paul Piff; Katharine Clark at the University of Colorado, Boulder; and Sarina Saturn at Oregon State University.

The study was funded by grants from UC Berkeley's Greater Good Science Center, UC Berkeley's Center for the Economics and Demography of Aging, and the Metanexus Institute.





Contacts and sources:
Yasmin Anwar
University of California - Berkeley

700 Rogue Stars Ejected From The Galaxy Are Found In Intergalactic Space

It's very difficult to kick a star out of the galaxy.

In fact, the primary mechanism that astronomers have come up with that can give a star the two-million-plus mile-per-hour kick it takes requires a close encounter with the supermassive black hole at the galaxy's core.

So far astronomers have found 16 of these "hypervelocity" stars. Although they are traveling fast enough to eventually escape the galaxy's gravitational grasp, they have been discovered while they are still inside the galaxy.

Vanderbilt astronomers have identified nearly 700 rogue stars that appear to have been ejected from the Milky Way galaxy. When these stars received the powerful kick that knocked them out of the galaxy, they were small, yellow stars like the sun. But in the multi-million-year journey they evolved into red giant stars.

Credit: Michael Smelzer, Vanderbilt University

Now, Vanderbilt astronomers report in the May issue of the Astronomical Journal that they have identified a group of more than 675 stars on the outskirts of the Milky Way that they argue are hypervelocity stars that have been ejected from the galactic core. They selected these stars based on their location in intergalactic space between the Milky Way and the nearby Andromeda galaxy and by their peculiar red coloration.

"These stars really stand out. They are red giant stars with high metallicity, which gives them an unusual color," says Assistant Professor Kelly Holley-Bockelmann, who conducted the study with graduate student Lauren Palladino.

In astronomy and cosmology, "metallicity" is a measure of the proportion of chemical elements other than hydrogen and helium that a star contains. In this case, high metallicity is a signature that indicates an inner galactic origin: Older stars and stars from the galactic fringes tend to have lower metallicities.
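
Metallicity is conventionally quoted logarithmically relative to the Sun, as [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_Sun, so [Fe/H] = 0 is solar and +0.3 is about twice the solar iron fraction. A quick illustration (the abundances and the solar reference value are approximate textbook numbers):

    import math

    def fe_h(n_fe_star, n_h_star, n_fe_sun=3.16e-5, n_h_sun=1.0):
        """[Fe/H] in dex; the solar iron fraction here is an approximate textbook value."""
        return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

    # Twice the solar iron fraction comes out at about +0.3 dex
    print(f"[Fe/H] = {fe_h(6.31e-5, 1.0):+.2f}")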

The researchers identified these candidates by analyzing the millions of stars catalogued in the Sloan Digital Sky Survey.

"We figured that these rogue stars must be there, outside the galaxy, but no one had ever looked for them. So we decided to give it a try," said Holley-Bockelmann, who is studying the behavior of the black hole at the center of the Milky Way galaxy.

Astronomers have now found evidence for giant black holes at the centers of many galaxies. They estimate that the Milky Way's central black hole has a mass of four million solar masses. They calculate that the gravitational field surrounding such a supermassive black hole is strong enough to accelerate stars to hypervelocities.

The typical scenario involves a binary pair of stars that get caught in the black hole's grip. As one of the stars spirals in towards the black hole, its companion is flung outward at a tremendous velocity.

A second scenario takes place during periods when the central black hole is in the process of ingesting a smaller black hole. Any star that ventures too close to the circling pair can also get a hypervelocity kick.

Red giant stars are the end stage in the evolution of small, yellow stars like the Sun. So, the stars in Holley-Bockelmann's rogues' gallery should have been small stars like the Sun when they tangled with the central black hole. As they traveled outward, they continued to age until they reached the red giant stage. Even traveling at hypervelocities, it would take a star about 10 million years to travel from the central hub to the spiral's edge, 50,000 light years away.
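
That travel time is straightforward unit arithmetic: distance divided by speed. At the two-to-three-million-mph speeds quoted earlier, the 50,000-light-year trip takes on the order of ten to twenty million years, consistent with the ballpark above:

    LY_KM = 9.461e12           # kilometers per light year
    MPH_TO_KMH = 1.609344

    def trip_myr(distance_ly, speed_mph):
        """Travel time in millions of years."""
        hours = distance_ly * LY_KM / (speed_mph * MPH_TO_KMH)
        return hours / (24 * 365.25) / 1e6

    for mph in (2e6, 3e6):
        print(f"{mph / 1e6:.0f} million mph -> {trip_myr(50_000, mph):.0f} Myr")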

"Studying these rogue stars can provide us with new insights into the history and evolution of our home galaxy," said Holley-Bockelmann. The researchers' next step is to determine whether any of their candidates are unusually red brown dwarfs instead of red giants. Because brown dwarfs produce a lot less light than red giants, they would have to be much closer to appear equally bright.

Heather Morrison at Case Western Reserve University, Patrick Durrell and John Feldmeier at Youngstown State University, Robin Ciardullo and Richard Wade at Pennsylvania State University, and J. Davy Kirkpatrick and Patrick Lowrance at the California Institute of Technology also contributed to the research, which was funded by grants from the National Science Foundation and the Department of Education's Graduate Assistance in Areas of National Need fellowship.





Contacts and sources:
David F Salisbury
Vanderbilt University

Yellowstone 'Super-Eruption' Less Super, More Frequent Than Thought

The Yellowstone "super-volcano" is a little less super—but more active—than previously thought.

Researchers at Washington State University and the Scottish Universities Environmental Research Centre say the biggest Yellowstone eruption, which created the 2-million-year-old Huckleberry Ridge deposit, was actually two different eruptions at least 6,000 years apart.


Their results paint a new picture of a more active volcano than previously thought and can help recalibrate the likelihood of another big eruption in the future. Before the researchers split the one eruption into two, it was the fourth largest known to science.

"The Yellowstone volcano's previous behavior is the best guide of what it will do in the future," says Ben Ellis, co-author and post-doctoral researcher at Washington State University's School of the Environment. "This research suggests explosive volcanism from Yellowstone is more frequent than previously thought."

The new ages for each Huckleberry Ridge eruption reduce the volume of the first event to 2,200 cubic kilometers, roughly 12 percent less than previously thought. A second eruption of 290 cubic kilometers took place more than 6,000 years later.

That first eruption still deserves to be called "super," as it is one of the largest known to have occurred on Earth and darkened the skies with ash from southern California to the Mississippi River. By comparison, the 1980 eruption of Mount St. Helens produced 1 cubic kilometer of ash. The larger blast of Oregon's Mount Mazama 6,850 years ago produced 116 cubic kilometers of ash.

The study, funded by the National Science Foundation and published in the June issue of Quaternary Geochronology, used high-precision argon isotope dating to make the new calculations. The radioactive decay of potassium-40 to argon-40 serves as a "rock clock" for dating samples and has a precision of 0.2 percent. Darren Mark, co-author and a post-doctoral research fellow at SUERC, recently helped fine-tune the technique and improve it by 1.2 percent—a small-sounding difference that can become huge across geologic time.
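
Argon-argon dating rests on the standard potassium-argon age relation: the age follows from the measured ratio of radiogenic argon-40 to remaining potassium-40. A sketch using textbook decay constants (the generic equation, not the authors' recalibrated values):

    import math

    LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, 1/yr (textbook value)
    LAMBDA_EC = 0.581e-10      # branch producing 40Ar via electron capture, 1/yr

    def k_ar_age(ar40_over_k40):
        """Age in years from the measured radiogenic 40Ar*/40K ratio."""
        return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

    # A ratio near 1.16e-4 reproduces the ~2-million-year Huckleberry Ridge age
    print(f"{k_ar_age(1.16e-4) / 1e6:.2f} Myr")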

"Improved precision for greater temporal resolution is not just about adding another decimal place to a number," says Mark. "It's far more exciting. It's like getting a sharper lens on a camera. It allows us to see the world more clearly."

The project asks the question: might super-eruptions actually be products of multiple, closely spaced eruptions through time? With improved temporal resolution, future work may show that other super-eruptions were not quite so super either.

Reference: http://dx.doi.org/10.1016/j.quageo.2012.01.006


Contacts and sources:
Ben Ellis
Washington State University

Vitamin D Supplements May Protect Against Viral Infections During The Winter

New research published in the Journal of Leukocyte Biology suggests that the older population could benefit from vitamin D supplementation in autumn and winter to protect against viral infections

Vitamin D may be known as the sunshine vitamin, but a new research report appearing in the Journal of Leukocyte Biology shows that it is more than that. According to the report, insufficient levels of vitamin D are related to a deficiency in our innate immune defenses that protect us from infections, neoplasias or autoimmune diseases. Since vitamin D levels decrease during autumn and winter when days are shorter and sunlight is relatively weak, this may explain why people are more prone to viral infection during these times. It also suggests that vitamin D supplementation, especially in older populations, could strengthen people's innate immunity against viral infections.

"There are numerous studies showing the benefits of maintaining adequate Vitamin D levels. As more and more research into Vitamin D is conducted, we are learning that it is extremely important for human health. Our study is no different, and vitamin D supplements should be considered one of many tools that might help when conventional therapies are not enough," said Victor Manuel Martinez-Taboada, M.D., a researcher involved in the work from the Division of Rheumatology at the Hospital Universitario "Marqués de Valdecilla," Facultad de Medicina at the Universidad de Cantabria, in Santander, Spain.

To make this discovery, the researchers compared changes in blood levels of vitamin D among three groups of healthy subjects: young (ages 20-30), middle-aged (ages 31-59), and elderly (ages 60-86). They found that vitamin D levels decreased with age, and then asked whether those changes were related to toll-like receptor (TLR) expression, measured on lymphocytes and monocytes, and to TLR function, assessed after in vitro stimulation with specific ligands for each of the nine human TLRs and measurement of effector molecules such as proinflammatory cytokines. They found that the TLR most affected by vitamin D insufficiency is TLR7, which regulates the immune response against viruses. Finally, the scientists examined whether the three age groups differed by season of the year, since limited sun exposure during the darker winter months is known to be associated with vitamin D deficiency.

"Any school teacher will tell you that people tend to be sicker during the winter than any other time of the year," said John Wherry, Ph.D., Deputy Editor of the Journal of Leukocyte Biology. "There have been numerous studies showing several environmental factors during winter months may allow viruses to spread easier. This study shows that sunlight, or more precisely the lack of vitamin D, could have a role in the seasonally higher rates of infection. More extensive studies must be conducted for this link to be conclusive, but since vitamin D supplements are inexpensive and generally safe, this is a really exciting discovery."

Contacts and sources:
Cody Mooneyhan
Federation of American Societies for Experimental Biology

India Designs Its Own Image As Global Power

India's image has changed dramatically in the last decade, from aid recipient to global power. The shift is not a result of economic growth figures alone: it has been driven by a carefully managed "image campaign" in which the Indian government has invested. New research by Ravinder Kaur from the University of Copenhagen shows the processes through which a brand-new identity for the Indian nation is manufactured. On 2 May, a large international conference in Copenhagen spotlights Asia.

Brand India is the name of the organisation that, since the late 1990s, has been tasked with convincing international corporations and heads of state that they should invest in the new, modern India. In return for their investment they get, among other things, access to cheap, well-educated labour.

Branding campaign in Davos 
Credit:  University of Copenhagen

"The branding campaigns produce seductive images of 'new' India: In the beginning the ads placed traditional Indian motifs together with images of mobile phones, computers, and motor ways to show the investors how India has developed," says Ravinder Kaur, who is director of Centre of Global South-Asian Studies at the University of Copenhagen.

In her article "Nation's Two Bodies: rethinking the 'new' India and its other", which will be published in a forthcoming issue of Third World Quarterly and will be available online on 7 May, she analyses the Indian branding campaigns. She points out that the campaigns are continuously adjusted to attract new kinds of investors and corporations:

"The most recent campaigns employ a different strategy in which large, multinational corporations, such as the luxury car manufacturer Bugatti, encourage other large, multinational corporations to invest in India and see the opportunities in India as a market for luxury goods that can be sold to the growing Indian middle-classes."

"Old" India rears its face

Not all Indians, however, recognise themselves in the campaigns' glitzy images, and according to Ravinder Kaur the discrepancy between image and reality inevitably causes internal conflicts and antagonisms that the Indian government will find increasingly difficult to suppress.

"Corruption is a huge problem in India, which was amply demonstrated when the country hosted the 2010 Commonwealth Games. The Games should have been a triumph for 'new' India, but degenerated into a corruption scandal that government and industry desperately tried to talk down in order to assuage potential investors' concerns."

Get up-to-date on modern Asia

Ravinder Kaur stresses that Western nations need to look beyond the clichés about the new Asian powers that are constantly repeated in the media; a country or corporation that wants to cooperate with, for example, India must have in-depth knowledge of recent developments in that particular nation.

"India is shrouded in myth and misunderstandings. If you uncritically buy into all the stories about modern India, much crucial knowledge will evade you. And that is unfortunate if you consider India important. That is why I recommend anyone who is interested in modern Asia to tune in when University of Copenhagen hosts the international conference Rising Asia – Anxious Europe 2 and 3 May," Ravinder Kaur points out.

During the conference, which will be streamed live on the conference website, a number of internationally recognised researchers will discuss the future relationship between Asia and Europe.

Contacts and sources:
Ravinder Kaur
University of Copenhagen

How Human Cells 'Hold Hands'

UI researchers explore how one cell binds itself to another, shedding light on neurodevelopmental disorders
A gamma-protocadherin mutant cortical neuron (green) superimposed on a field of wild type cortical neurons (white). 
Credit: Andrew M. Garrett

University of Iowa biologists have advanced the knowledge of human neurodevelopmental disorders by finding that a lack of a particular group of cell adhesion molecules in the cerebral cortex—the outermost layer of the brain, where language, thought, and other higher functions take place—disrupts the formation of neural circuitry.

Andrew Garrett, former neuroscience graduate student and current postdoctoral fellow at the Jackson Laboratory, Bar Harbor, Maine; Dietmar Schreiner, former postdoctoral fellow currently at the University of Basel, Switzerland; Mark Lobas, current neuroscience graduate student; and Joshua A. Weiner, associate professor in the UI College of Liberal Arts and Sciences Department of Biology, published their findings in the April 26 issue of the journal Neuron.

Cell adhesion is the way in which cells “hold hands”—how one cell binds itself to another cell using specific molecules that protrude from cell membranes and bind each other together. The process is necessary to form all body tissues. The UI researchers studied a clustered family of 22 genes (gamma-protocadherins) that make such cellular hand-holding possible by encoding cell adhesion molecules.

Reconstructions of single wildtype (left) and gamma-protocadherin mutant (right) cortical neurons are superimposed upon a low magnification view of fluorescently-labeled neurons in the corresponding animals.
Credit: UI

In their previous work, they had found that mice lacking the molecules exhibited neuronal death and loss of synapses in the spinal cord. They knew, then, that the gamma-protocadherins were important for spinal cord neurons, but not whether the same was true in the cortex. In the current study, they found that the absence of the cell adhesion molecules had a significant but very different effect.

“We found that mice lacking the gamma-protocadherins in the cortex do not exhibit the severe loss of synapses and increased neuronal death that we observed in the spinal cord,” says Weiner. “Instead, we found that the cortical neurons had severely reduced development of their dendrites, tree-like branched structures that receive input from other neurons.

“We discovered the reason for this: gamma-protocadherins normally inhibit a key signaling pathway within neurons that acts to reduce dendrite branching. In the absence of the gamma-protocadherins, this signaling pathway was hyperactive, leading to defective branching of cortical neuron dendrites,” says Weiner.

In their previous work, the researchers showed that these molecules—the 22 distinct adhesion molecules, the gamma-protocadherins—are critical for the development of the animal, because when all of the genes are deleted from mice, they die shortly after birth with a variety of neurological defects including loss of connections (synapses) and excessive neuronal cell death in the spinal cord—an early-developing part of the nervous system.

Because those mutants die so young, and because the cortex develops largely after birth, the researchers could not assess the role of the gamma-protocadherins in the cerebral cortex. They therefore used new genetic technologies to remove the gamma-protocadherins only from the cerebral cortex, which allowed the animals to survive to adulthood.

Weiner says that the latest research findings may help researchers to better understand the causes of various human developmental disorders.

“Human neurodevelopmental disorders such as autism, mental retardation, and schizophrenia all involve dysregulation of dendrite branching and synaptogenesis,” he says. “Our identification of a large family of 22 cell adhesion molecules—which we previously showed interact with each other in very complex and specific ways—as new regulators of dendrite branching raises the question of whether specific interactions between distinct neuronal groups during development are important for the spreading of dendritic branches. If so, the gamma-protocadherins and/or the signaling pathways they regulate might be disrupted in a variety of human brain disorders.”

Now that the researchers have shown that the gamma-protocadherin family as a whole is critical for dendrite branching, they plan to narrow their focus, asking whether specific interactions between individual members of the family are important for instructing neurons on the location and size of dendrite growth.

Their work was funded in part by a grant from the National Institutes of Health.

Contacts and sources:
Jennifer Brown
University of Iowa Health Care

Multitasking May Hurt Your Performance, But It Makes You Feel Better

People aren’t very good at media multitasking - like reading a book while watching TV - but do it anyway because it makes them feel good, a new study suggests.

The findings provide clues as to why multitasking is so popular, even though many studies show it is not productive.

Researchers had college students record all of their media use and other activities for 28 days, including why they used various media sources and what they got out of it.

The findings showed that multitasking often gave the students an emotional boost, even when it hurt their cognitive functions, such as studying.

“There’s this myth among some people that multitasking makes them more productive,” said Zheng Wang, lead author of the study and assistant professor of communication at Ohio State University.

“But they seem to be misperceiving the positive feelings they get from multitasking. They are not being more productive - they just feel more emotionally satisfied from their work.”

Take, for example, students who watched TV while reading a book. They reported feeling more emotionally satisfied than those who studied without watching TV, but also reported that they didn’t achieve their cognitive goals as well, Wang said.

“They felt satisfied not because they were effective at studying, but because the addition of TV made the studying entertaining. The combination of the activities accounts for the good feelings obtained,” Wang said.

Wang conducted the study with John Tchernev, a graduate student in communication at Ohio State. Their results appear online in the Journal of Communication and will be published in a future print edition.

Wang said many studies done in laboratory settings have found that people show poorer performance on a variety of tasks when they try to juggle multiple media sources at the same time: for example, going from texting a friend, to reading a book, to watching an online video.

But surveys show that media multitasking is only becoming more popular. The question, Wang said, is why people do so much multitasking if it actually impairs their performance.

To answer that question, Wang said they had to move out of the laboratory and into real life. They recruited 32 college students who agreed to carry a cellphone-like device and report on their activities three times each day for four weeks.

The participants reported on each media use (such as computer, radio, print, or television) and its subtypes (for computer use, whether they were web browsing, using social networking, etc.). They reported the type of activity, its duration, and whether any other activities were performed simultaneously (in other words, whether they were multitasking).

They also provided their motivations for each activity or combination of activities from a list of seven potential needs, including social, fun/entertainment, study/work, and habits/background noise. For each need, they reported the strength of the need on a 10-point scale, and whether those needs were met on a 4-point scale.
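
For concreteness, here is a minimal sketch of how one such experience-sampling record could be represented in code. The structure and the example values are hypothetical, not the study's actual instrument, and only needs named in the article appear here (the study tracked seven):

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class MediaReport:
        """One self-report of a single media activity (hypothetical structure)."""
        medium: str                    # e.g. "television", "computer", "print"
        subtype: Optional[str]         # e.g. "social networking" for computer use
        duration_min: float            # how long the activity lasted, in minutes
        simultaneous_with: List[str] = field(default_factory=list)
        need_strength: Dict[str, int] = field(default_factory=dict)   # 1-10 scale
        need_gratified: Dict[str, int] = field(default_factory=dict)  # 1-4 scale

        @property
        def is_multitasking(self) -> bool:
            # Multitasking = at least one other activity reported at the same time
            return bool(self.simultaneous_with)

    # A hypothetical report: watching TV while reading for class
    report = MediaReport(
        medium="television", subtype=None, duration_min=45.0,
        simultaneous_with=["print"],
        need_strength={"study/work": 8, "fun/entertainment": 3},
        need_gratified={"study/work": 2, "fun/entertainment": 4},
    )
    print(report.is_multitasking)  # True

Records like these make the study's central contrast easy to compute: for each report, compare the strength of a need with how well it was gratified, split by whether the report involved multitasking.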

The results showed that participants were more likely to multitask when they reported an increase in cognitive needs (such as study or work) or habitual needs or both.

That means, for example, that the students were more likely to multitask when they needed to study (a cognitive need).

But one of the key findings of the study is that multitasking didn’t do a very good job of satisfying the cognitive needs that motivated it in the first place, Wang said. That’s probably because the other media use distracted students from the job of studying. However, the students reported that multitasking was very good at meeting their emotional needs (fun/entertainment/relaxing) - interestingly, a need they weren’t even seeking to fulfill.

In addition, the results showed that habits played an important role in the use of media multitasking.

“Our findings showed that habitual needs increase media multitasking and are also gratified from multitasking,” she said.

This suggests that people get used to multitasking, which makes them more likely to continue.

“We found what we call a dynamical feedback loop. If you multitask today, you’re likely to do so again tomorrow, further strengthening the behavior over time,” she said.

“This is worrisome because students begin to feel like they need to have the TV on or they need to continually check their text messages or computer while they do their homework. It’s not helping them, but they get an emotional reward that keeps them doing it.

“It is critical that we carefully examine the long-term influence of media multitasking on how we perform on cognitive tasks.”

The study was supported by a grant from the National Science Foundation.

Contacts and sources:
Zheng Wang
Ohio State University

The Pit-Chains Of Mars – A Possible Place For Life?

The latest images released from ESA’s Mars Express reveal a series of ‘pit-chains’ on the flanks of one of the largest volcanoes in the Solar System. Depending on their origin, they might be tempting targets in the search for microbial life on the Red Planet.

Tractus Catena is shown here in a computer generated perspective view. The image was created using data obtained from the High-Resolution Stereo Camera (HRSC) on ESA’s Mars Express spacecraft. The pits seen in the background show hints of layered bedrock in the upper walls of each depression.

Credits: ESA/DLR/FU Berlin (G. Neukum)

The images, taken on 22 June 2011, cover Tractus Catena in the Arcadia quadrangle, part of the vast Tharsis region on Mars. This region boasts a number of huge volcanoes, including the three collectively known as Tharsis Montes. To their north sits Alba Mons, also known as Alba Patera, one of the largest volcanoes in the Solar System by area and volume.

Tractus Catena sits on the southeastern flank of Alba Mons, and the pit-chains in that region are series of circular depressions that formed along fracture points in the martian crust.

Pit-chains can have a volcanic origin. Lava streaming from a volcano solidifies on the surface, leaving a molten tube of lava running below.

A wider contextual image of the Tractus Catena region showing the surrounding fossae and the large shield volcano Ascraeus Mons, discovered by Mariner 9 in 1971.
 
Credits: ESA/DLR/FU Berlin (G. Neukum)

Once volcanic activity ceases, the tube empties, leaving behind a subterranean cavity. Over time, parts of the roof over the cavity may collapse, leaving circular depressions on the surface. On Earth, recent examples can be seen on the flanks of Kilauea volcano in Hawaii, while on the Moon, Hadley Rille, visited by Apollo 15 in 1971, is believed to have formed in the same way billions of years ago.

Tractus Catena in monochrome from the nadir channel on the HRSC camera on Mars Express. The resolution in this image is 20 m/pixel.

Credits: ESA/DLR/FU Berlin (G. Neukum)

Pit-chains can also be caused by strains in the martian crust, which translate into series of parallel elongated depressions known as grabens, in which pits can also form.

But the most dramatic scenario involves groundwater. On Earth, there are clear examples of similar structures in ‘Karst’ regions – after the German name for a region extending from Slovenia to Italy, where this phenomenon was first studied.

Among Earth’s most famous examples is the network of ‘cenotes’ on the Yucatán peninsula of Mexico. These deep natural pits form when the surface limestone collapses, exposing the groundwater underneath.

This origin is the most interesting in the context of the search for microbial life on Mars. If there are any cave-like structures associated with the pits, microorganisms could have survived, protected from the harsh surface environment.

Tractus Catena in a colour-coded plan view based on a digital terrain model of the region, from which the topography of the landscape can be derived.

Credits: ESA/DLR/FU Berlin (G. Neukum)

Mars landers have measured surface radiation around 250 times higher than that found on the Earth, and more than double that experienced by astronauts on board the International Space Station. Any caves associated with the pit-chains may in future provide a possible refuge for astronauts from the harsh surface radiation.

However they formed, these pit-chains show again just how similar many of the geological processes on Mars are to those on the Earth, and provide interesting targets for future missions.

Tractus Catena from the nadir and colour channels, combined to form a natural colour view of the region. The pit-chains have analogues on both Earth and the Moon, with the Mexican Cenotes being one of the most famous and dramatic examples on Earth. On the Moon, Hadley Rille, visited by Apollo 15 in 1971, is believed to have formed from the collapse of a lava tube billions of years ago.

Credits: ESA/DLR/FU Berlin (G. Neukum)


Tractus Catena was imaged during orbit 9538 of Mars Express by the HRSC camera. This 3D image, centred at around 23°N and 103°W, has a ground resolution of about 22 m per pixel and combines data from HRSC’s nadir channel and one stereo channel. Stereoscopic glasses with red-green or red-blue filters are required to see the 3D effect.

Credits: ESA/DLR/FU Berlin (G. Neukum)

Contacts and sources:
ESA

What Lies Beneath The Red Planet’s Largest Volcanoes

Five years of Mars Express gravity mapping data are providing unique insights into what lies beneath the Red Planet’s largest volcanoes. The results show that the lava grew denser over time and that the thickness of the planet's rigid outer layers varies across the Tharsis region.

As the spacecraft flies over a volcano, it experiences a slight extra attraction towards the elevated mass. These perturbations in the spacecraft’s velocity induce a detectable Doppler shift on the radio signal between the spacecraft and tracking stations on Earth. The measurements can be translated into high-precision values for subsurface density variations, providing insight into the thermal history of Mars.
Credits: Royal Observatory of Belgium

The measurements were made while Mars Express was at altitudes of 275–330 km above the Tharsis volcanic ‘bulge’ during the closest points of its eccentric orbit, and were combined with data from NASA’s Mars Reconnaissance Orbiter.

The Tharsis bulge includes Olympus Mons – the tallest volcano in the Solar System, at 21 km – and the three smaller Tharsis Montes that are evenly spaced in a row.

The region is thought to have been volcanically active until 100-250 million years ago, relatively recent on a geological timescale.

The large mass of the volcanoes caused tiny ‘wobbles’ in the trajectory of Mars Express as it flew overhead; these were measured from Earth via radio tracking and translated into measurements of density variations below the surface.
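
The conversion from a radio measurement to a velocity is simple to illustrate. The sketch below shows the two-way Doppler relation for a small line-of-sight velocity change; the X-band frequency and the 0.1 mm/s perturbation are assumptions chosen for illustration, not values from the paper:

    C = 2.998e8        # speed of light, m/s
    F_XBAND = 8.4e9    # assumed X-band tracking frequency, Hz

    def two_way_doppler_shift(delta_v):
        """Frequency shift (Hz) for a line-of-sight velocity change delta_v (m/s),
        doubled because the signal travels to the spacecraft and back."""
        return 2.0 * delta_v * F_XBAND / C

    # An assumed 0.1 mm/s 'wobble' produces a shift of a few millihertz:
    print(f"{two_way_doppler_shift(1e-4) * 1e3:.2f} mHz")

Mapping many such velocity residuals along the orbit, and comparing them with the pull predicted from the surface topography alone, is what reveals whether extra or missing mass lies below each volcano.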

Shaded relief image of Tharsis Montes and Olympus Mons derived from data gathered by the Mars Orbiter Laser Altimeter on board NASA’s Mars Global Surveyor. The new data suggest that the Tharsis Montes formed one by one, starting with Arsia Mons, possibly built by a single mantle plume moving under the surface.
 
Credits: NASA

Overall, the high density of the volcanoes corresponds to a basaltic composition, in agreement with the composition of the many martian meteorites that have fallen to Earth.

The new data also reveal how the lava density changed during the construction of the three Tharsis Montes volcanoes. They started with a lighter andesitic lava that can form in the presence of water, and were then overlaid with heavier basaltic lava that makes up the visible surface of the martian crust.

“Combined with the varying height of the volcanoes, we can say that Arsia Mons is the oldest, then Pavonis Mons formed and finally Ascraeus Mons,” says Mikael Beuthe of the Royal Observatory of Belgium and lead author of the paper published in the Journal of Geophysical Research.

"At Ascraeus Mons, however, the density of the lava decreased at a later stage, so that the top of the volcano is of lower density."

The transition could reflect changes in heating beneath the surface in the form of a single mantle plume – an upwelling of abnormally hot rock from deeper within the viscous mantle, created in a process that can be likened to a lava lamp, but on a gigantic scale – that slowly moved sideways to create each of the three Tharsis Montes in turn. This is the exact opposite of Earth, where ‘plates’ of crust move above a stationary plume to form chains of volcanoes, such as the Hawaiian islands.

Olympus Mons colour-coded according to height from white (highest) to blue (lowest), based on images captured by the High Resolution Stereo Camera (HRSC) on board ESA’s Mars Express. The new data find that Olympus Mons is built on a rigid lithosphere whereas the nearby Tharsis Montes partially sank into a less rigid lithosphere, suggesting that there were large spatial variations in the heat flux from the mantle at the time of their formation.
 
Credits: ESA/DLR/FU Berlin (G. Neukum)

The data also describe the thickness of the lithosphere – the outermost shell of the planet, including the upper portion of the mantle – and reveal surprising lateral variations between Olympus Mons and the Tharsis Montes, with the three smaller volcanoes having a much denser underground ‘root’ than Olympus Mons.

These roots could be dense pockets of solidified lava or an ancient network of underground magma chambers.

“The lack of a high-density root below Olympus Mons indicates it was built on a lithosphere of high rigidity, while the other volcanoes partially sank into a less rigid lithosphere,” says co-author Veronique Dehant, also of the Royal Observatory of Belgium. “This tells us that there were large spatial variations in the heat flux from the mantle at the time of their formation.”

Since the three Tharsis Montes sit on top of the Tharsis bulge, whereas Olympus Mons stands on the edge, the greater crustal thickness at the centre may have acted as an insulating lid to increase the temperature, creating a less rigid lithosphere. Here rising magma interacted with the pre-existing bulge, whereas the magma forming Olympus Mons ascended through the older crust that is supporting the Tharsis bulge, perhaps creating the observed density differences between the volcanoes.

"These results show that data on the Mars interior are key to understanding the evolution of the Red Planet," says Olivier Witasse, ESA Mars Express Project Scientist. "One option for a future mission to Mars would be a network of small landers, simultaneously measuring seismic activity in order to probe the interior."

Source: ESA