Unseen Is Free

Monday, October 31, 2011

The New Old Age – More Sex: Today's Pensioners Are Very Different From Yesterday's


Old people today have more sex, are more likely to be divorced, are cleverer and feel better, reveals a long-term research project comparing what it is like to be old today with 30 years ago. "It's time to start talking about the 'new old age'," says researcher Ingmar Skoog.

The number of elderly is rising worldwide, and it is estimated that average life expectancy in Europe will reach 100 by the end of the century.

At the same time, old age and what we expect from it are changing. An extensive research project at the University of Gothenburg's Sahlgrenska Academy has spent a number of years comparing the elderly of the 1970s with those of today. The project, known as the H70 study, reveals that old age has changed drastically in a number of ways.

For example, the proportion of elderly with schooling beyond secondary level has risen from 14% to almost 40% for both genders. This is reflected in a better performance in intelligence tests by today's 70-year-olds than their counterparts back in the 1970s.

The proportion of married people has increased, as has the proportion of divorcees. The elderly are also now more sexually active, and the number with sexual problems such as impotence has fallen.

The results of the long-term study can also be contradictory, not least when it comes to social networking:

"The H70 study shows that the elderly are more outgoing today than they were in the 1970s – they talk more to their neighbours, for example – yet the percentage of elderly who feel lonely has increased significantly," says professor Ingmar Skoog from the University of Gothenburg's Sahlgrenska Academy, who leads the study.

Old people's mental health does not seem to have changed, however. Dementia disorders are no more prevalent today than they were 30 years ago, and while more old people consider themselves to be mildly depressed, more severe forms of depression have not become more common. Meanwhile the elderly are coping better with everyday life: the number needing help with cleaning has fallen from 25% to 12%, and only 4% need help taking a bath, down from 14% in the 1970s.

"Our conclusion is that pensioners are generally healthier and perkier today than they were 30 years ago," says Skoog. "This may be of interest both in the debate about where to set the retirement age and in terms of the baby boomers now hitting retirement age."

The H70 study in Gothenburg began back in 1971. More than 1,000 70-year-old men and women born in 1901-02 were examined by doctors and interviewed about their lives to obtain a picture of diseases in elderly populations, risk factors and their functional capacity and social networks. The participants were examined again at the age of 75 and then at regular intervals until the final participant died at the age of 105. The year 2000 brought the start of a new study of 70-year-olds born in 1930, who were examined using the same methods, making it possible to follow a specific generation through life and compare different generations.


Mathematically Detecting Financial Bubbles Before They Burst

Predicting economic crashes for more stable economies.

From the dotcom bust in the late 1990s to the housing crash in the run-up to the 2008 crisis, financial bubbles have been a topic of major concern. Identifying bubbles is important in order to prevent collapses that can severely impact nations and economies.

A paper published this month in the SIAM Journal on Financial Mathematics addresses just this issue. Opening fittingly with a quote from New York Federal Reserve President William Dudley emphasizing the importance of developing tools to identify and address bubbles in real time, authors Robert Jarrow, Younes Kchia, and Philip Protter propose a mathematical model to detect financial bubbles.

A financial bubble occurs when prices for assets, such as stocks, rise far above their actual value. Such an economic cycle is usually characterized by rapid expansion followed by a contraction, or sharp decline in prices.

“It has been hard not to notice that financial bubbles play an important role in our economy, and speculation as to whether a given risky asset is undergoing bubble pricing has approached the level of an armchair sport. But bubbles can have real and often negative consequences,” explains Protter, who has spent many years studying and analyzing financial markets.

“The ability to tell when an asset is or is not in a bubble could have important ramifications in the regulation of the capital reserves of banks as well as for individual investors and retirement funds holding assets for the long term. For banks, if their capital reserve holdings include large investments with unrealistic values due to bubbles, a shock to the bank could occur when the bubbles burst, potentially causing a run on the bank, as infamously happened with Lehman Brothers, and is currently happening with Dexia, a major European bank,” he goes on to explain, citing the significance of such inflated prices.

Using sophisticated mathematical methods, Protter and his co-authors answer the question of whether the price increase of a particular asset represents a bubble in real time. “[In this paper] we show that by using tick data and some statistical techniques, one is able to tell with a large degree of certainty, whether or not a given financial asset (or group of assets) is undergoing bubble pricing,” says Protter.

This question is answered by estimating an asset’s price volatility, which is stochastic, or randomly determined. The authors define an asset’s price process in terms of a standard stochastic differential equation driven by Brownian motion. Brownian motion, based on a natural process involving the erratic, random movement of small particles suspended in gas or liquid, has been widely used in mathematical finance. The concept is specifically used to model instances where the future change in the value of a variable is unrelated to past changes.
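
In symbols, the kind of model the authors describe can be written as a local-volatility diffusion (the notation here is generic, not copied from the paper):

\[ dS_t = \sigma(S_t)\, dW_t \]

where \(S_t\) is the asset price, \(W_t\) is a standard Brownian motion, and \(\sigma(\cdot)\) is the volatility function that must be estimated from the data.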

The key characteristic in determining a bubble is the volatility of an asset’s price, which, in the case of bubbles, is very high. The authors estimate the volatility by applying state-of-the-art estimators to real-time tick price data for a given stock. They then obtain the best possible extension of this estimated volatility function to large values of the price using reproducing kernel Hilbert spaces (RKHS), a technique widely used in statistical learning.

“First, one uses tick price data to estimate the volatility of the asset in question for various levels of the asset’s price,” Protter explains. “Then, a special technique (RKHS with an optimization addition) is employed to extrapolate this estimated volatility function to large values for the asset’s price, where this information is not (and cannot be) available from tick data. Using this extrapolation, one can check the rate of increase of the volatility function as the asset price gets arbitrarily large. Whether or not there is a bubble depends on how fast this increase occurs (its asymptotic rate of increase).”
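
To make that pipeline concrete, here is a minimal Python sketch of the estimate-then-extrapolate idea. It replaces the paper's RKHS extrapolation with a simple power-law fit to the upper tail of the estimated volatility curve, and every name and tuning choice in it is an illustrative assumption, not the authors' code:

import numpy as np

def tail_volatility_exponent(prices, n_bins=50, tail_frac=0.5):
    """Estimate how fast volatility sigma(x) grows with the price level x.

    Sketch only: bins tick prices, uses the mean squared price increment
    in each bin as a crude volatility proxy, then fits log(sigma) against
    log(price) over the upper tail of the observed price range. The paper
    itself uses kernel volatility estimators plus an RKHS extrapolation;
    this log-log slope is a simple stand-in for that step.
    """
    prices = np.asarray(prices, dtype=float)
    dx2 = np.diff(prices) ** 2          # squared tick-to-tick increments
    levels = prices[:-1]                # price level at each increment
    edges = np.linspace(levels.min(), levels.max(), n_bins + 1)
    idx = np.digitize(levels, edges) - 1
    centers, sigma = [], []
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.sum() > 10:           # require enough ticks per bin
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            sigma.append(np.sqrt(dx2[in_bin].mean()))
    centers, sigma = np.array(centers), np.array(sigma)
    tail = centers >= np.quantile(centers, 1.0 - tail_frac)
    alpha, _ = np.polyfit(np.log(centers[tail]), np.log(sigma[tail]), 1)
    return alpha                        # sigma(x) ~ x ** alpha

# If sigma(x) ~ x**alpha, then alpha > 1 is the "fast enough" growth that
# signals a bubble under the criterion given below; alpha <= 1 is
# consistent with no bubble.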

If it does not increase fast enough, there is no bubble within the model’s framework.
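
For readers who want the threshold spelled out: with the price modeled as the diffusion above, the standard strict-local-martingale test used in this line of work says the asset is undergoing bubble pricing exactly when

\[ \int_{\epsilon}^{\infty} \frac{x}{\sigma(x)^{2}}\, dx < \infty . \]

If, for instance, \(\sigma(x)\) grows like \(c\,x^{\alpha}\) for large prices, the integral is finite precisely when \(\alpha > 1\); linear or sub-linear growth of volatility in the price means no bubble, which is the sense of "fast enough" in Protter's description above.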

The authors test their methodology by applying the model to several stocks from the dot-com bubble of the nineties. They find fairly successful rates in their predictions, with higher accuracies in cases where market volatilities can be modeled more efficiently. This helps establish the strengths and weaknesses of the method.

The authors have also used the model to test more recent price increases to detect bubbles. “We have found, for example, that the IPO [initial public offering] of LinkedIn underwent bubble pricing at its debut, and that the recent rise in gold prices was not a bubble, according to our models,” Protter says.

It is encouraging to see that mathematical analysis can play a role in the diagnosis and detection of bubbles, which have contributed significantly to the economic upheavals of the past few decades.

More about the authors:

Robert Jarrow is a professor at the Johnson Graduate School of Management at Cornell University in Ithaca, NY, and managing director of the Kamakura Corporation. Younes Kchia is a graduate student at Ecole Polytechnique in Paris, and Philip Protter is a professor in the Statistics Department at Columbia University in New York.

Professor Protter’s work was supported in part by NSF grant DMS-0906995.

Source article:

How to Detect an Asset Bubble

Robert Jarrow, Younes Kchia, and Philip Protter, SIAM Journal on Financial Mathematics 2 (2011), pp. 839-865 (online publication date: October 12, 2011)



Contacts and sources:
Karthika Muthukumaraswamy
Society for Industrial and Applied Mathematics

Technical Aptitude: Do Women Score Lower Because They Just Aren't Interested?

Boys do better on tests of technical aptitude (for example, mechanical aptitude tests) than girls. The same is true for adults. A new study published in Perspectives on Psychological Science, a journal of the Association for Psychological Science, describes a theory explaining how the difference comes about: the root cause is that boys are just more interested in technical things, like taking apart a bike, than girls are.

Aptitude tests are used to predict how well people will do in school and on jobs. These tests focus on particular skills or kinds of specific aptitude, like verbal or technical aptitude. But the last few decades of research have found that what really matters is general intelligence, not specific aptitudes, says Frank Schmidt of the University of Iowa, author of the new paper. “The factors that are measured by the specific aptitude tests independent of the general intelligence component in these tests don’t make any contribution to job performance.” Smart people, researchers have found, are able to learn the requirements of any job if they are motivated to. And research shows that men and women do not differ, on average, in general intelligence.

Technical aptitude measures are often used as a component of general intelligence measures, so Schmidt wanted to know why women and men score differently on technical aptitude in particular. He analyzed data from the 10-subtest Armed Services Vocational Aptitude Battery, or ASVAB, to look at how men and women differed on the tests, including those on technical aptitude. He found that, at every level of general intelligence, women score lower on technical aptitude than men of the same intelligence level. Also, at all levels of technical aptitude, women had higher levels of general intelligence. So if technical aptitude tests are used as part of a measure of general intelligence, women could receive intelligence scores that are too low. That is, technical aptitude tests may be biased indicators of general intelligence for girls and women.
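
To see why such a composite can be biased, consider a deliberately invented example (the numbers are hypothetical, not Schmidt's data). Suppose a man and a woman have the same general intelligence and both score 100 on a verbal subtest, but, in line with the pattern Schmidt reports, the woman scores 90 on the technical subtest while the man scores 100:

composite score = (verbal + technical) / 2
man:   (100 + 100) / 2 = 100
woman: (100 + 90) / 2  = 95

An equally intelligent woman ends up with a composite five points lower, which is the sense in which a technical component can make the composite a biased indicator of general intelligence.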

Schmidt presented a theory that posits that this difference stems from sex differences in interest in technical pursuits, which leads people to acquire technical experience, which in turn increases technical aptitude scores. He presented evidence that, among men, technical experience does lead to better scores on technical aptitude tests. To find out for sure, someone would have to do a long-term study looking at whether early interests develop into later aptitudes, rather than the reverse, with aptitudes causing interests. If his theory is right, it might be possible to narrow the gap in technical aptitude by getting girls more interested in technical areas. Interest should lead to aptitude. But that may not work, Schmidt says. “The research shows it’s very hard to change people’s interests,” he says. “They’re pretty stable and they form pretty early in life.”

It’s more important, he says, to make sure that the tests used to measure general intelligence aren’t using biased indicators. “That is quite possible today. You can either not use technical aptitude tests or you can use them and counterbalance them,” he says, with tests that women tend to do better on, like perceptual speed or some verbal tests.

Navy Electromagnetic Railgun Fires 1,000th Shot From Laboratory

Scientists at the Naval Research Laboratory (NRL) hit a materials research milestone in the Office of Naval Research’s (ONR) Electromagnetic Railgun program when they fired a laboratory-scale system for the 1,000th time Oct. 31.

“A significant amount of development has been coming out of NRL to support the program,” said Roger Ellis, ONR’s Electromagnetic Railgun (EMRG) program officer. “It’s a key piece of making railgun successful.”


Credit: U.S. Navy

The EMRG is a long-range weapon that launches projectiles using electricity instead of chemical propellants. Under development by the Department of the Navy (DON) for use aboard ships, the system will provide Sailors with multi-mission capability, allowing them to conduct precise naval surface fire support, or land strikes; cruise missile and ballistic missile defense; and surface warfare to deter enemy vessels.

“The weapon does all its damage because of its speed,” said Dr. Roger McGinnis, program executive for ONR’s Naval Air Warfare and Weapons Department, which oversees EMRG. Launched at 2 to 2.5 kilometers per second (4,500 to 5,600 mph) without using explosives, the projectile needs only a small charge, similar to that found in automobile airbags, to dispense its payload at the target; the damage is done by the projectile’s kinetic energy alone.

“EMRG will provide the Department of Defense with an advantage in future conflicts by giving troops the ability to fire weapons inexpensively against targets,” McGinnis said.

As part of the EMRG development program, ONR and NRL co-funded scientists at NRL to build and operate a 6-meter-long, 50-mm-diameter railgun as a subscale experimental launcher at the Materials Testing Facility (MTF). Researchers fired the first shot in March 2007. After improving the gun’s sliding armature and rails, the lab has fired an average of 300 shots per year since 2008.

A railgun launches projectiles by generating magnetic fields created by high electrical currents that accelerate a sliding metal conductor, or armature, between two rails.
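
For reference, the standard textbook expression for the accelerating force on the armature (a general railgun result, not a figure supplied by ONR or NRL) is:

\[ F = \tfrac{1}{2}\, L' I^{2} \]

where \(I\) is the current flowing through the rails and armature and \(L'\) is the inductance gradient of the rail pair, in henries per meter. Because the force grows with the square of the current, launch-grade forces require very large pulsed currents, which is also why rail and armature surfaces erode and motivate the materials studies described here.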

“The 1,000th shot is testing new ideas of how the armature interacts with the rails,” said Dr. Robert Meger, head of NRL’s charged particle physics branch, which conducts about 30 experiments annually on the railgun. Following each test firing, researchers dismantle the gun to examine all the components. They slice up the rails for further analysis under a microscope to reveal surface damage.

During the course of firing all 1,000 shots, NRL scientists have experimented with a variety of materials and geometries to determine which ones can withstand the metal-melting temperatures and pressures of shooting a 1.5-megajoule energy weapon. One megajoule of energy is equivalent to a 1-ton car traveling at 100 miles per hour.
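
That car analogy checks out against the basic kinetic-energy formula, taking the ton as a metric tonne (1,000 kg) and 100 mph as roughly 44.7 m/s:

\[ E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2} \times 1000\,\mathrm{kg} \times (44.7\,\mathrm{m/s})^{2} \approx 1.0\,\mathrm{MJ} \]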

“We’ve really explored a lot of territory,” ONR’s Ellis said. “When you couple what we’re seeing in testing with what we’re seeing in modeling and simulation, it results in some interesting barrel shapes that you wouldn’t intuitively think about. Railgun barrels don’t necessarily have to be round as in most conventional gun designs.”

Since 2005, scientists have been working to increase the railgun’s barrel life, muzzle energy and size. Ultimately, their work will help to produce a 64-megajoule railgun with a range of about 220 nautical miles.

“You really have to look at the course of our understanding from the first day they shot to the 1,000th shot today, and how much our understanding of the rail life has dramatically increased, and how much science we have applied to ensure that we’re on the path toward a future fieldable system,” Ellis said.

Materials science breakthroughs resulting from the test firings have given researchers confidence to transition new technologies to a scaled-up experimental launcher at Naval Surface Warfare Center Dahlgren, Va., which fired a world-record-setting 33-megajoule shot in December 2010.



Contacts and sources:
Story By Grace Jean, Office of Naval Research

Don't Worry, Be Happy – Understanding Mindfulness Meditation

In times of stress, we’re often encouraged to pause for a moment and simply be in the ‘now.’ This kind of mindfulness, an essential part of Buddhist and Indian Yoga traditions, has entered the mainstream as people try to find ways to combat stress and improve their quality of life. And research suggests that mindfulness meditation can have benefits for health and performance, including improved immune function, reduced blood pressure, and enhanced cognitive function.

But how is it that a single practice can have such wide-ranging effects on well-being? A new article published in the latest issue of Perspectives on Psychological Science, a journal of the Association for Psychological Science, draws on the existing scientific literature to build a framework that can explain these positive effects.

The goal of this work, according to author Britta Hölzel, of Justus Liebig University and Harvard Medical School, is to “unveil the conceptual and mechanistic complexity of mindfulness, providing the ‘big picture’ by arranging many findings like the pieces of a mosaic.” By using a framework approach to understand the mechanisms of mindfulness, Hölzel and her co-authors point out that what we think of as mindfulness is not actually a single skill. Rather, it is a multi-faceted mental practice that encompasses several mechanisms.

The authors specifically identify four key components of mindfulness that may account for its effects: attention regulation, body awareness, emotion regulation, and sense of self. Together, these components help us attend to and deal with the mental and physiological effects of stress in ways that are non-judgmental.

Although these components are theoretically distinct, they are closely intertwined. Improvement in attention regulation, for example, may directly facilitate our awareness of our physiological state. Body awareness, in turn, helps us to recognize the emotions we are experiencing. Understanding the relationships between these components, and the brain mechanisms that underlie them, will allow clinicians to better tailor mindfulness interventions for their patients, says Hölzel.

On the most fundamental level, this framework underscores the point that mindfulness is not a vague cure-all. Effective mindfulness meditation requires training and practice and it has distinct measurable effects on our subjective experiences, our behavior, and our brain function. The authors hope that further research on this topic will “enable a much broader spectrum of individuals to utilize mindfulness meditation as a versatile tool to facilitate change – both in psychotherapy and in everyday life.”

Live Longer With Fewer Calories, Say Scientists

By consuming fewer calories, ageing can be slowed down and the development of age-related diseases such as cancer and type 2 diabetes can be delayed. The earlier calorie intake is reduced, the greater the effect. Researchers at the University of Gothenburg have now identified one of the enzymes that hold the key to the ageing process.

"We are able to show that caloric restriction slows down ageing by preventing an enzyme, peroxiredoxin, from being inactivated. This enzyme is also extremely important in counteracting damage to our genetic material," says Mikael Molin of the Department of Cell and Molecular Biology.

Mikael Molin of the University of Gothenburg.
Credit: University of Gothenburg

By gradually reducing the intake of sugar and proteins, without reducing vitamins and minerals, researchers have previously shown that monkeys can live several years longer than expected. The method has also been tested on everything from fish and rats to fungi, flies and yeast, with favourable results. Caloric restriction also has favourable effects on our health and delays the development of age-related diseases. Despite this, researchers in the field have found it difficult to explain exactly how caloric restriction produces these favourable effects.

Using yeast cells as a model, the research team at the University of Gothenburg has successfully identified one of the enzymes required. They are able to show that active peroxiredoxin 1, Prx1, an enzyme that breaks down harmful hydrogen peroxide in the cells, is required for caloric restriction to work effectively.

Yeast cells.
Credit: Mikael Molin

The results, which have been published in the scientific journal Molecular Cell, show that Prx1 is damaged during ageing and loses its activity. Caloric restriction counteracts this by increasing the production of another enzyme, Srx1, which repairs Prx1. Interestingly, the study also shows that ageing can be delayed without caloric restriction by only increasing the quantity of Srx1 in the cell. Repair of the peroxiredoxin Prx1 consequently emerges as a key process in ageing.

"Impaired Prx1 function leads to various types of genetic defects and cancer. Conversely, we can now speculate whether increased repair of Prx1 during ageing can counteract, or at least delay, the development of cancer."

Peroxiredoxins have also been shown to be capable of preventing proteins from being damaged and aggregating, a process that has been linked to several age-related disorders affecting the nervous system, such as Alzheimer's and Parkinson's. The researchers are accordingly also considering whether stimulation of Prx1 can reduce and delay such disease processes.


Contacts and sources:
Mikael Molin
University of Gothenburg

The article 'Life Span Extension and H2O2 Resistance Elicited by Caloric Restriction Require the Peroxiredoxin Tsa1 in Saccharomyces cerevisiae' has been published in the journal Molecular Cell.
Link to the article: http://www.cell.com/molecular-cell/abstract/S1097-2765%2811%2900626-5?switch=standard

Expert comment in the same issue: Translating a Low-Sugar Diet into a Longer Life by Maintaining Thioredoxin Peroxidase Activity of a Peroxiredoxin
Link to the article: http://www.cell.com/molecular-cell/fulltext/S1097-2765(11)00634-4

Forests Not Keeping Pace With Climate Change

More than half of eastern U.S. tree species examined in a massive new Duke University-led study aren't adapting to climate change as quickly or consistently as predicted.

"Many models have suggested that trees will migrate rapidly to higher latitudes and elevations in response to warming temperatures, but evidence for a consistent, climate-driven northward migration is essentially absent in this large analysis," says James S. Clark, H.L. Blomquist Professor of Environment at Duke's Nicholas School of the Environment.

Nearly 59 percent of the species examined by Clark and his colleagues showed signs that their geographic ranges are contracting from both the north and south.

Fewer species -- only about 21 percent -- appeared to be shifting northward as predicted. About 16 percent seemed to be advancing southward, and around 4 percent appeared to be expanding in both directions.

The scientists analyzed data on 92 species in more than 43,000 forest plots in 31 states. They published their findings this month in the journal Global Change Biology.

The study found no consistent evidence that population spread is greatest in areas where climate has changed the most; nor do the species' response patterns appear to be related to seed size or dispersal characteristics.

"Warm zones have shifted northward by up to 100 kilometers in some parts of the eastern United States, but our results do not inspire confidence that tree populations are tracking those changes," says Clark, who also holds appointments at Duke as a professor of biology and statistics. "This increases the risk of serious lags in tree migrations."

The concept of climate-driven migration is based on the assumption that as temperatures warm, the southern edge of some tree species' ranges could begin to erode as adult trees die and the seeds they leave behind in the soil can no longer sprout. At the same time, the species could spread to higher latitudes as seedlings dispersed on their northern boundaries are able to take root in newly favorable climates there.

To test whether this predicted response was occurring in real life, Clark and his colleagues pored through decades of data compiled by the U.S. Forest Service's Forest Inventory and Analysis Program. They compared the relative distributions of seedlings, saplings and adult trees of 92 widely distributed eastern U.S. species at 43,334 plots in 30 different longitudinal bands, and factored in things like seed characteristics, and changes in climate and precipitation.
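
As a toy illustration of the seedling-versus-tree logic (not the authors' statistical model; the function name and the quantile choices below are our assumptions), one can compare the latitudinal extremes of seedlings and adults of a species:

import numpy as np

def classify_range_shift(seedling_lat, adult_lat, q=0.95):
    """Toy classification of a species' apparent range shift.

    Compares the northern (95th percentile) and southern (5th percentile)
    latitudes of seedling vs. adult occurrences. Illustrative logic only:
    the published analysis models occurrence data statistically rather
    than comparing raw quantiles.
    """
    s_north, s_south = np.quantile(seedling_lat, [q, 1.0 - q])
    a_north, a_south = np.quantile(adult_lat, [q, 1.0 - q])
    north_advancing = s_north > a_north   # seedlings beyond adults to the north
    south_advancing = s_south < a_south   # seedlings beyond adults to the south
    if not north_advancing and not south_advancing:
        return "contraction"              # pulling in from both edges
    if north_advancing and not south_advancing:
        return "northward shift"
    if south_advancing and not north_advancing:
        return "southward shift"
    return "expansion"                    # spreading in both directions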

"The patterns of tree responses we were able to document using this seedling-versus-tree analysis are more consistent with range contraction than with northward migration, although there are signs some species are shifting to higher elevations," Clark says.

The fact that, for most species, the northernmost latitudes documented for seedlings were lower than those for adult trees indicates "a lack of evidence for climate-mediated migration, and should increase concern for the risks posed by climate change," he says.


Contacts and sources:
James S. Clark
Duke University

Kai Zhu, a doctoral student of Clark's at Duke, was lead author of the study. Christopher W. Woodall, research forester at the U.S. Forest Service's Northern Research Station in St. Paul, Minn., was a co-author.

The study was funded by the National Science Foundation.

Citation: "Failure to migrate: lack of tree range expansion in response to climate change," Kai Zhu, Christopher W. Woodall, James S. Clark. Global Change Biology, accepted article online. DOI: 10.1111/j.1365-2486.2011.02571.x

Piezo-Phototronic Effect: Zinc Oxide Microwires Improve The Performance Of Light-Emitting Diodes

Researchers have used zinc oxide microwires to significantly improve the efficiency at which gallium nitride light-emitting diodes (LEDs) convert electricity to ultraviolet light. The devices are believed to be the first LEDs whose performance has been enhanced by the creation of an electrical charge in a piezoelectric material using the piezo-phototronic effect.

By applying mechanical strain to the microwires, researchers at the Georgia Institute of Technology created a piezoelectric potential in the wires, and that potential was used to tune the charge transport and enhance carrier injection in the LEDs. This control of an optoelectronic device with piezoelectric potential, known as piezo-phototronics, represents another example of how materials that have both piezoelectric and semiconducting properties can be controlled mechanically.

"By utilizing this effect, we can enhance the external efficiency of these devices by a factor of more than four times, up to eight percent," said Zhong Lin Wang, a Regents professor in the Georgia Tech School of Materials Science and Engineering. "From a practical standpoint, this new effect could have many impacts for electro-optical processes – including improvements in the energy efficiency of lighting devices."

Details of the research were reported in the Sept. 14 issue of the journal Nano Letters. The research was sponsored by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Department of Energy (DOE). In addition to Wang, the research team mainly included Qing Yang, a visiting scientist at Georgia Tech from the Department of Optical Engineering at Zhejiang University in China.

A light-emitting diode (LED) whose performance has been enhanced through the piezo-phototronic effect is studied in the laboratory of Regents professor Zhong Lin Wang.
Credit: Georgia Tech Photo: Gary Meek

Because of the polarization of ions in the crystals of piezoelectric materials such as zinc oxide, mechanically compressing or otherwise straining structures made from the materials creates a piezoelectric potential – an electrical charge. In the gallium nitride LEDs, the researchers used the local piezoelectric potential to tune the charge transport at the p-n junction.

The effect was to increase the rate at which electrons and holes recombined to generate photons, enhancing the external efficiency of the device through improved light emission and higher injection current. "The effect of the piezopotential on the transport behavior of charge carriers is significant due to its modification of the band structure at the junction," Wang explained.

The zinc oxide wires form the "n" component of a p-n junction, with the gallium nitride thin film providing the "p" component. Free carriers were trapped at this interface region in a channel created by the piezoelectric charge formed by compressing the wires.

Traditional LED designs use structures such as quantum wells to trap electrons and holes, which must remain close together long enough to recombine. The longer that electrons and holes can be retained in proximity to one another, the higher the efficiency of the LED device will ultimately be.

The devices produced by the Georgia Tech team increased their emission intensity by a factor of 17 and boosted injection current by a factor of four when compressive strain of 0.093 percent was applied to the zinc oxide wire. That improved conversion efficiency by as much as a factor of 4.25.
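
The factor of 4.25 is consistent with reading external efficiency as light output per unit of injection current; on that reading (the paper may define efficiency differently), the two measured factors combine as:

\[ \frac{\eta_{\text{strained}}}{\eta_{\text{unstrained}}} = \frac{17}{4} = 4.25 \]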

Georgia Tech Regents professor Zhong Lin Wang (right) and graduate research assistant Ying Liu study light-emitting diodes whose performance has been enhanced through the piezo-phototronic effect.
Credit: Georgia Tech Photo: Gary Meek

The LEDs fabricated by the research team produced emissions at ultraviolet frequencies (about 390 nanometers), but Wang believes the frequencies can be extended into the visible light range for a variety of optoelectronic devices. "These devices are important for today's focus on green and renewable energy technology," he said.

In the experimental devices, a single zinc oxide micro/nanowire LED was fabricated by manipulating a wire on a trenched substrate. A magnesium-doped gallium nitride film was grown epitaxially on a sapphire substrate by metalorganic chemical vapor deposition, and was used to form a p-n junction with the zinc oxide wire.

A sapphire substrate was used as the cathode that was placed side-by-side with the gallium nitride substrate with a well-controlled gap. The wire was placed across the gap in close contact with the gallium nitride. Transparent polystyrene tape was used to cover the nanowire. A force was then applied to the tape by an alumina rod connected to a piezo nanopositioning stage, creating the strain in the wire.

The researchers then studied the change in light emission produced by varying the amount of strain in 20 different devices. Half of the devices showed enhanced efficiency, while the others – fabricated with the opposite orientation of the microwires – showed a decrease. This difference was due to the reversal in the sign of the piezopotential because of the switch of the microwire orientation from +c to –c.

High-efficiency ultraviolet emitters are needed for applications in chemical, biological, aerospace, military and medical technologies. Although the internal quantum efficiencies of these LEDs can be as high as 80 percent, the external efficiency for a conventional single p-n junction thin-film LED is currently only about three percent.

Beyond LEDs, Wang believes the approach pioneered in this study can be applied to other optical devices that are controlled by electrical fields.

"This opens up a new field of using the piezoelectric effect to tune opto-electronic devices," Wang said. "Improving the efficiency of LED lighting could ultimately be very important, bringing about significant energy savings because so much of the world's energy is used for lighting."



Bigger Birds In Central California, Courtesy Of Global Climate Change

Birds are getting bigger in central California, and that was a big surprise for Rae Goodman and her colleagues.

Goodman uncovered the trend while working as a graduate student for San Francisco State University biologist Gretchen LeBuhn, analyzing data from thousands of birds caught and released each year at two sites near San Francisco Bay and the Point Reyes National Seashore.

The SF State scientists, working with researchers from PRBO Conservation Science and the San Francisco Bay Bird Observatory who collected the data, found that birds' wings have grown longer and birds are increasing in mass over the last 27 to 40 years.

What's making the birds bigger? The researchers think that the trend is due to climate change, but their findings put a twist in the usual thinking about climate change and body size. A well-known ecological rule, called Bergmann's Rule, states that animals tend to be larger at higher latitudes. One reason for this rule might be that larger animals conserve body heat better, allowing them to thrive in the generally colder climate of higher latitudes.

Under this reasoning, some scientists have predicted that animals would get smaller as the Earth has warmed up over the past 100 years. But the study, published in the journal Global Change Biology, suggests that the connection may not be so simple.

Climate change may affect body size in a variety of ways, they note in their paper. For instance, birds might get bigger as they store more fat to ride out severe weather events, which are expected to be more common under global climate change. Climate change could also alter a region's plant growth, which may eventually lead to changes in a bird's diet that affect its size.

LeBuhn, an assistant professor of biology, said she was "completely surprised" to find that the central California birds were growing larger over time. "It's one of those moments where you ask, 'what's happening here?'" The results were so unexpected, she said, that they made the team step back and look more closely at how climate change could influence body size.

The bird data come from two long-term "banding stations" in central California, where a wide variety of birds are captured, banded about the leg with an identification tag, and weighed and measured before being released. Many of the same birds were captured each year, allowing the researchers at the sites to build up a unique database that could be used to track changes among the birds over several decades.

The researchers used data from 14,735 individual birds collected from 1971 to 2010 at the Palomarin Field Station, near the southern end of the Point Reyes National Seashore, by researchers from PRBO Conservation Science. Their study also included data on 18,052 birds collected between 1983 and 2009, from the Coyote Creek Field Station at the southern end of the San Francisco Bay by the San Francisco Bay Bird Observatory.

"At the time I started my research, a few studies had looked at body size changes in a few species in Europe and the Middle East, but no one had examined bird body size changes in North America," said Goodman, who now teaches Biology and Environmental Science at San Francisco's Jewish Community High School of the Bay.

"We had the good fortune to find an unexpected result -- a gem in research science," she added. "But we were then left with the puzzle of figuring out what was going on."

After testing and discarding a number of other explanations, Goodman and her colleagues were confident that climate change was behind the longer wings and bigger bodies in most of the birds. The birds may be responding to climate-related changes in plant growth or increased climate variability in central California, the researchers suggest in the paper.

"The fingerprint of climate change is showing up in many of our ecosystems," explains Nat Seavy, research director for the Central Coast at PRBO Conservation Science. "The challenge is to use the long-term data we've been collecting to understand how, where and why these changes are occurring."

The findings offer a glimpse at the potent effects of climate change across a wide range of species, LeBuhn said. "Even over a pretty short period of time, we've documented changes in important traits like body size, where we don't expect to see much flexibility."

"But in some ways," she added, "it gave me a little more hope that these birds are able to respond -- hopefully in time -- to changes in climate."

"Although it is encouraging that species are changing in response to climate change," said Seavy, "it is also troubling that environmental stressors are pushing and pulling on species in diverse ways...What will happen to our ecosystems as some species get larger and others get smaller? We need long-term monitoring to help us understand the impact of these changes."

Contacts and sources:
Citation: "Avian body size changes and climate change: warming or increasing variability?" appeared online Oct. 12, 2011, published by Global Change Biology.http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2486.2011.02538.x/full

About San Francisco State University:

SF State is the only master's-level public university serving the counties of San Francisco, San Mateo and Marin. The University enrolls nearly 30,000 students each year and graduates about 8,000 annually. With nationally acclaimed programs in a range of fields -- from creative writing, cinema and biology to history, broadcast and electronic communication arts, theatre arts and ethnic studies -- the University's more than 205,000 graduates have contributed to the economic, cultural and civic fabric of San Francisco and beyond.

About PRBO Conservation Science:

PRBO Conservation Science, founded as Point Reyes Bird Observatory in 1965, works to conserve birds, other wildlife and ecosystems through innovative scientific research and outreach. We partner with hundreds of governmental and non-governmental agencies as well as private interests to ensure that every dollar invested in conservation yields the most for biodiversity -- benefiting our environment, our economy and our communities. Visit PRBO on the web at http://www.prbo.org.

About SFBBO

The San Francisco Bay Bird Observatory (SFBBO) is dedicated to the conservation of birds and their habitats through science and outreach. Since its beginning in 1981, SFBBO has been an authority on the birds that rely on the San Francisco Bay and its surrounding environments. SFBBO biologists collect crucial data that shape our understanding of the population ecology and conservation of Bay Area birds and guide management and restoration of Bay Area habitats. Learn more about SFBBO at http://www.sfbbo.org.

1 In 6 People Will Have A Stroke, But Most Strokes Can Be Prevented

There are now more than one million strokes per year in Europe, and stroke, along with heart disease, cancers, diabetes and chronic respiratory diseases, is a non-communicable disease whose risk is increased by cigarette smoking, an unhealthy diet, physical inactivity and excessive alcohol consumption. Global predictions indicate that the incidence of fatal stroke (along with heart disease and cancers) will continue to rise, from around 6 million per year in 2010 to almost 8 million per year by 2030.

CT scan slice of the brain showing a right-hemispheric ischemic stroke (left side of image).
Credit: Wikipedia

The European Society of Cardiology emphasises that most of these same risks for stroke are also the major risks for coronary heart disease. Moreover, atrial fibrillation, the most common disorder of heart rhythm, has also been clearly associated with an increased risk of stroke.

Professor Freek Verheugt, from the Onze Lieve Vrouwe Gasthuis in Amsterdam and speaking on behalf of the ESC, says: "Stroke is not an inevitable consequence of ageing, so by identifying and modifying risk factors there are opportunities to reduce the incidence and mortality rate of this devastating condition."

According to the World Stroke Organization, there are six steps anyone can take to reduce their risk of stroke:
  • Know your personal risk factors: high blood pressure, diabetes, and high blood cholesterol.
  • Be physically active and exercise regularly.
  • Avoid obesity by keeping to a healthy diet.
  • Limit your alcohol consumption.
  • Avoid cigarette smoke. If you smoke, seek help to stop.
  • Learn to recognise the warning signs of a stroke.
The WSO also emphasises the importance of time if any of the warning signs are apparent: a sudden numbness, especially on one side of the body; sudden trouble speaking or seeing; loss of balance or sudden vertigo; and a sudden severe headache with no apparent cause. Any of these signs is cause for alarm because, says Professor Verheugt, stroke is a medical emergency and every minute saved can make a big difference to survival. "Time lost is brain function lost," says the WSO. It is also now clear that stroke survivors do better if they are admitted to dedicated stroke units staffed by multidisciplinary teams. Hospital care, medication, vascular surgery and rehabilitation are the cornerstones of treatment.

Stroke (which is also known as cerebrovascular disease) occurs when a blood vessel carrying oxygen to the brain is either blocked by a clot (ischaemic stroke) or bursts (haemorrhagic stroke). Without oxygen and nutrients, brain cells begin to die, and it is the extent and location of this damage that determines the severity of the stroke. In 1976 the World Health Organization defined stroke as "a neurological deficit of cerebrovascular cause that persists beyond 24 hours or is interrupted by death within 24 hours".

ESC Clinical Practice Guidelines on Cardiovascular Disease Prevention, which were updated in 2007, list stroke alongside coronary artery disease, heart failure and peripheral artery disease as cardiovascular diseases within the scope of prevention initiatives. The ESC Clinical Practice Guidelines distinguish between haemorrhagic stroke (around 15% of cases) and ischaemic stroke, but add that the cause of many strokes remains undetermined. The Guidelines note that "antihypertensive treatment reduces risk of both ischaemic and haemorrhagic stroke", and that "stroke prevention is still the most important effect of antihypertensive treatment".


Contacts and sources:
European Society of Cardiology

The Interstroke study (http://www.lancet.com/journals/lancet/article/PIIS0140-6736%2810%2960834-3/abstract), reported in 2010 following an analysis of stroke data from 22 countries, indicates that ten risk factors are associated with 90% of the risk of stroke.(1) The highest attributable effect among individual risk factors was 35% for hypertension, 26.5% for waist-to-hip ratio, and 19% for current smoking.

Moreover, a study of activity in the reduction of stroke risk in almost 50,000 people in Finland (and followed up for almost 20 years) found that "high physical activity" was associated with a lower risk of stroke than low physical activity. Similar reductions in risk were found with "daily active commuting".(2)

As defined in the European Heart Health Charter (http://www.escardio.org/about/what/advocacy/Pages/health-charter.aspx), the ESC's declared targets for cardiovascular health throughout the European Union are:
  • zero smoking
  • three kilometres daily walking
  • five portions of fruit and vegetables per day
  • <140/90 mmHg blood pressure
  • <5 mmol/l total cholesterol
  • <3 mmol/l LDL-cholesterol
  • zero diabetes
Professor Verheugt also emphasises that some patients with irregular heart beat run a significant risk of stroke. He says: "All individuals with irregular heart beat should see a doctor, who can diagnose whether this heart rhythm disorder is likely to lead to stroke. If so, blood thinning medication can reduce the risk of stroke by up to 70%."

Citations: 1. O'Donnell MJ, Xavier D, Liu L, et al. Risk factors for ischaemic and intracerebral haemorrhagic stroke in 22 countries (the INTERSTROKE study): a case-control study. Lancet 2010; 376: 112-123. 2. Hu G, Sarti C, Jousilahti P, et al. Leisure time, occupational, and commuting physical activity and the risk of stroke. Stroke 2005; 36: 1994.


One Step Closer To Dark Matter In The Universe

Scientists all over the world are working feverishly to find the dark matter in the universe. Now researchers at Stockholm University have taken one step closer to solving the enigma with a new method.

The universe is still a mystery. We know what about 5 percent of the universe consists of. The rest is simply unknown. Researchers have gotten as far as knowing that a major portion, about 23 percent, of the universe consists of a new kind of matter. No one has seen this matter, and no one knows what it consists of. The remaining roughly 72 percent of the universe is made up of something even more enigmatic, called dark energy. Jan Conrad and Maja Llena Garde are scientists at Fysikum, Stockholm University and the Oskar Klein Center for Cosmoparticle Physics, and they are part of the international research team that has taken a giant step toward finding dark matter with the help of a new method.

Estimated distribution of dark matter and dark energy in the universe
Credit: Wikipedia
“With our new method, for the first time we have been able to exclude models regarded by many as the most natural ones. Previous attempts did not achieve the same sensitivity. What’s more, our results are especially reliable,” says Jan Conrad.

“We can’t see dark matter because it doesn’t interact with the matter we know about. Nor does it emit any light. It’s virtually invisible. But we can determine that it affects the matter we’re familiar with.”


Strong gravitational lensing, as observed by the Hubble Space Telescope in Abell 1689, indicates the presence of dark matter; note the lensing arcs around the cluster.
Credit: NASA

“We see how the rotation of galaxies is affected by something that weighs a lot but is invisible. We also see how the gas in galaxy clusters doesn’t move as it would if there were only visible matter present. So we know it’s there. The question is simply what it is. Many theoretical models have been developed to predict particles that meet the requirements for being identified as dark matter. But experiments are needed if we are to determine whether any of these models are correct,” says Jan Conrad.

Since dark matter is invisible, we can only see traces of it, and one way to do this is to look at light with extremely high energy, so-called gamma radiation. With the help of the satellite-borne Fermi Large Area Telescope, scientists can study gamma radiation and look for traces of dark matter.

“We’ve looked at gamma radiation from dwarf galaxies. These galaxies are small and dim, but extremely massive, so they seem to consist largely of dark matter. Unfortunately we still haven’t detected a gamma signal from the dark matter in these objects, but we are definitely getting closer. Our new method involves looking at several dwarf galaxies at the same time and combining the observations in a new way, which yields excellent results. This is an exciting time for dark matter research, because we’re getting closer and closer,” says Maja Llena Garde.
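
The "combining" step can be made concrete. In a joint likelihood analysis of this kind, the likelihoods of the individual dwarf galaxies are multiplied so that every galaxy simultaneously constrains the shared dark-matter parameters (the notation below is ours, a sketch of the general technique rather than the team's exact formulation):

\[ \mathcal{L}(\theta) = \prod_{i=1}^{N} \mathcal{L}_i\big(\theta \mid \mathcal{D}_i\big) \]

Here \(\theta\) collects parameters common to all targets, such as the dark-matter particle mass and annihilation cross-section, and \(\mathcal{D}_i\) is the gamma-ray data for the i-th dwarf galaxy; quantities specific to one galaxy, such as the uncertain amount of dark matter it contains, can be treated as nuisance parameters and profiled out galaxy by galaxy.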

“This is truly a giant step forward in our pursuit of dark matter,” says the director of the Oskar Klein Center, Lars Bergström. “With my colleague Joakim Edsjö, I’ve studied these processes theoretically for more than ten years, but this is the first time important experimental breakthroughs are being seen. Now we just hope that Jan, Maja, and the Fermi team will continue this exciting quest using their new method.”

The research team’s findings are being published in the journal Physical Review Letters under the title “Constraining dark matter models from a combined analysis of Milky Way satellites with the Fermi Large Area Telescope.”


Contacts and sources:
Expertanswer (Expertsvar in Swedish)

Citation: “Constraining dark matter models from a combined analysis of Milky Way satellites with the Fermi Large Area Telescope,” Physical Review Letters (2011).


Researchers Strive To Identify The Atomic Origins Of Wear

To slide; perchance to fatigue. "Wear is so common in sliding systems that it has acquired this air of inevitability," says Greg Sawyer, a professor in mechanical engineering at the University of Florida who leads a team of researchers hoping to overturn this assumption. Sawyer and his collaborators have succeeded in modifying polytetrafluoroethylene (PTFE), the ubiquitous, already low-friction material also known as Teflon, to make it "nearly a million times more wear-resistant."

 By applying the lessons learned from this and other such success stories, the researchers are attempting to identify, and then eliminate, the atomic and molecular origins of wear. If they reach their goal, moving assemblies such as joint replacements might last, if not forever, then at least until their owners "have shuffled off this mortal coil."

Any device that has moving parts – be it a lawn mower, a dishwasher, or a drive train – experiences friction. "Friction is a beautiful, complex thing" that steals energy and efficiency from a system, but doesn't, by default, result in wear, says Sawyer. The characteristics of an entire system, as opposed to any inherent properties of the sliding materials, determine how much wear will result when two surfaces move past one another. 

Sawyer and his team have come up with a number of hypotheses to explain how frictional forces might rip off or grind away bits of material in particular sliding systems. A surface could erode through a slow rearrangement of the atoms and molecules; through small, discrete breaking events that add up over time; through rare, but catastrophic, cleaving events; or through other, unknown methods. "We don't have near all the answers yet," Sawyer says.

To test their hypotheses, the scientists use atomic force microscopes to create atomic-scale images of surfaces and use finely tuned instruments to measure the minute forces that occur as materials slide against each other. Once the researchers identify a factor that contributes to system wear, they try to design a way to stop it. In the instance of the ultra-low-wear PTFE, the researchers embedded nanoparticles made of alumina in the polymer, which dramatically reduced wear. And this effect isn't limited to PTFE. 

Other nanoparticle-filled plastic composites have been shown to display a decreased sliding coefficient of friction, although scientists are still investigating the precise mechanisms that result in the reduced wear. At the AVS Symposium in Nashville, Tenn., held Oct. 30 – Nov. 4, Sawyer will present results from a number of ultra-low-wear systems studied in his lab, including polymers, metals, and ceramics.

Other than obsolescence, wear is the number one cause of end-of-product life, Sawyer notes. Scientists and engineers from da Vinci onward have been exploring ways to minimize it, he says, and his team is continuing the quest. Asked about the future, Sawyer envisions a world where myriad products might never wear out: "Can you imagine only ever owning one car? Ultra-low-wear systems could change everything."


Contacts and sources:
Catherine Meyers
American Institute of Physics

The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center.  Presentation TR-WeA7, "Going No Wear?," is at 4 p.m. on Wednesday, Nov. 2.

Main meeting website: http://www2.avs.org/symposium/AVS58/pages/greetings.html
Technical Program: http://www2.avs.org/symposium




Old, Cold Chemistry: Icy Dust Specks May Form Complex Organic Molecules In Interstellar Clouds

Icy dust specks could provide an interstellar staging ground for chemical reactions that form complex organic molecules

The creation of the Universe was a messy business, and billions of years after the Big Bang, material still litters the dark space between stars. In these cold interstellar regions, gas and dust specks swirl together, sometimes coalescing to form new stars, sometimes expanding as dying stars spew forth new material into the void. Much of the chemistry that happens in interstellar clouds remains a mystery, but recent work by astrochemists from Heriot-Watt University in Edinburgh sheds new light on this dark part of the Universe, demonstrating the key role that icy dust specks can play in facilitating the formation of a type of organic molecule that could be a precursor to the building blocks of life. The researchers presented their work at the AVS Symposium, held Oct. 30 – Nov. 4, in Nashville, Tenn.

By some estimates molecules make up less than 1 percent of the matter of the Universe, but they can still significantly influence the evolution of stars and planetary systems. Scientists suspect, based on infrared observations, that many of the dust specks within interstellar clouds are covered in a frosty coating of ice. The ice acts as a coolant during star formation, leading to smaller, longer-lived stars such as our own Sun. "Small stars give evolution on planets time to work," says Martin McCoustra, an astrochemist who studies interstellar ice grains. "Basically we wouldn't be here if the Universe was clean and dust free." In addition to slowing down star evolution, icy dust specks may also influence interstellar organic chemistry, speeding up chemical reactions or shielding molecules from the full energy of incoming cosmic rays.

It is this chemical catalyst behavior of interstellar dust that McCoustra and his colleagues are currently investigating. Using silica and water ice surfaces, the scientists created models of both bare and icy dust grains in the laboratory, and then bombarded the grains with low-energy electrons to mimic an influx of cosmic rays. The researchers were specifically looking for the effect that the rays would have on acetonitrile (CH3CN), a simple organic compound that has been observed in the interstellar medium. They found that for films of bulk CH3CN, the incoming electrons rapidly dislodged the molecules, but for CH3CN molecules isolated on icy surfaces, a chemical reaction took place. 

CH3CN is believed to be a precursor to amino acids, McCoustra says, and the product of the reaction, which the scientists are still working to precisely identify, is probably part of an intermediate step in the process that makes large organic molecules. "The key point is that the water is crucial for this chemistry," McCoustra notes, since the chemical reaction did not take place in bulk CH3CN.

The Scottish research team, part of a large European network studying solid state and surface astrochemistry (LASSIE), is now working with computational chemists to further investigate, from an energy point of view, how water might promote chemistry on icy grains. "Astronomers and astrochemists are working to try and understand the origin of chemical complexity," says McCoustra. "If that chemistry is the same wherever we look in our galaxy, and if we can reproduce it in the laboratory, then that chemistry can seed our galaxy and others with the chemical potential for life."


Contacts and sources:
Catherine Meyers
American Institute of Physics

The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center.  Presentation SS1-MoM1, "Surface Science of Acetonitrile on Model Interstellar Ices and Grains," was at 8:20 a.m. on Monday, Oct. 31.

Antarctica Rocks!

Researchers brave bone chilling cold to solve geologic mysteries

Geologist John Goodge looks for clues about Antarctica's past in the 2 percent of the continent that is not covered in ice!

The University of Minnesota-Duluth professor has been visiting Antarctica since 1985, finding and studying rocks that help tell the story of how this desolate continent has formed and changed over time.

In late 2010 and early 2011, he spent several weeks in the field, with other scientists, visiting a dozen sites along 1,200 miles of mountains.

"What we're doing is finding places along the Transantarctic Mountains and we are sampling those to pick up pieces of rock that can, hopefully, give us some examples of what's further under the ice sheet," he says.

Goodge and his colleagues are supported by the U.S. Antarctic Program, which is managed by the National Science Foundation (NSF).

The team first spent about three weeks preparing for its fieldwork at McMurdo Station, the main U.S. research facility in Antarctica.

Goodge says the "Antarctic experience" is a combination of the starkness of the terrain and the camaraderie of co-workers.

"In Antarctica, you have chances to sit down and socialize in a setting that you might not otherwise have so it's really fun. I meet people that are working on meteorology, sending weather balloons up, and people that study Antarctic fish that make their own antifreezes, and just all kinds of neat things," he says.

Long before touching down on the ice, scientists spend months determining what would likely be the most productive sites. They use a combination of satellite imagery and topographic maps. U.S. Geological Survey photos taken in the 1950s and 60s are now digitally available, helping pinpoint study sites. Goodge says getting the right experts together is also critical.

"I learned a long time ago that it is rewarding to pick a team of people who bring different expertise to what you're doing, and so I see myself as kind of the hub in a spoked wheel of people doing different things," he says.

His team included a certified guide and mountaineer, a graduate student from South Africa, an isotope geochemist, and a geochronologist.

They traveled by helicopter to sites within a couple hundred miles of McMurdo and by plane to sites farther away.

The rocks they collect hold clues about our planet and what it was like before the seven continents we know today took shape.

"When we think about Earth's history and tectonics, we come to understand that there have been several periods in history where we think there have been supercontinents; amalgamations of most, if not all the continents we know today. Pangea is the latest one; it was around 250 million years ago. Before Pangea assembled, there was another supercontinent that most people agree was in existence 500 million years ago, Gondwana," explains Goodge.

And another piece of very ancient history: "It turns out there are rocks and glacial deposits that we've been finding that actually seem to confirm the idea that Antarctica and North America would have been neighbors at one time!" says Goodge.

The team collected about 2,500 pounds of rock material, filling 30 sturdy wooden boxes.

Opening them up to analyze them "is like having Christmas time back here," Goodge says jokingly.

"This particular rock, this granite, is loaded with the mineral zircon, which is good for us because zircon's structure has the ability to take up small amounts of uranium, which then decays naturally to isotopes of lead, which we can use as a clock," he explains, while holding up a rock from his latest trip. "We can analyze the zircon to tell how old this rock was when it formed. And also the isotope composition can tell us about the ancestry of the granite itself."

According to Goodge, Antarctica may not seem like it is very active, but in fact it is a dynamic environment. And it is an important place to study the health of the planet--including the impact of global climate change on ice sheet stability.

"And, if we can understand what's happened in the past, we have at least a way or a framework to be able to say what might happen in the future and then, of course, with respect to climate change, the question is what extra role do humans and their activity play in tweaking the natural system off of whatever cycles it may have already been on?" notes Goodge.

While he has traveled there many times, Goodge says the Antarctic environment is still exciting and unpredictable.

"One of the interesting things about being in Antarctica is that you have no sense of scale. If you're out hiking in the mountains in the western United States, you can see trees and you can see roads, and so you have a sense of how far things are and how big things really are. But in Antarctica, it's just an expanse of white, and it's rolling, and it might even be mountainous and there might be a lot of glaciers. It's beautiful scenery and yet, the distances are so deceiving. You feel very small, and it's fun to just witness the things that are going on around you," he says.

For Goodge, this frozen landscape is full of ancient clues just waiting to be interpreted.

Contacts and sources:
National Science Foundation
Miles O'Brien, Science Nation Correspondent
Marsha Walton, Science Nation Producer

Live Chat With Astronomers On Near Earth Objects And Asteroid 2005 YU55

Ask scientists about a large near-Earth asteroid that will pass between the Earth and the moon in a close encounter with our planet.

Asteroid 2005 YU55 observed by Arecibo Telescope--a National Science Foundation facility.
Credit: NASA

People are invited to participate in a live online chat this Thursday from 3 p.m. to 4 p.m. EDT with two eminent astronomers from the National Science Foundation (NSF) and NASA, who will discuss a large asteroid that will pass close to Earth on Nov. 8, 2011, and the science of near-Earth asteroids.

This rare flyby of the asteroid--which is known as 2005 YU55--will likely draw significant public interest because of the asteroid's large size of about 396 meters (1,300 feet) in diameter, the nature of its close encounter with Earth and the public's fascination with near-Earth objects.

The chat is sponsored by ScienceNOW, the daily news site of the journal Science. To participate, go to ScienceNOW's website on November 3 at 3 p.m. EDT and submit questions to:

Scott Fisher: A program director in the Division of Astronomical Sciences at the National Science Foundation and a staff scientist at the Gemini Observatory--a large international observatory with eight-meter telescopes located in Hawaii and Chile--where he researches planet formation.

Donald K. Yeomans: A scientific investigator on NASA's Deep Impact mission that successfully impacted comet Tempel 1 in July 2005, and a senior research scientist at the Jet Propulsion Laboratory, where he contributes to predictions of future close Earth approaches and impacts by comets and asteroids.

Asteroid 2005 YU55 will pass within 0.85 lunar distances of Earth on November 8, 2011. The screen capture shows the asteroid's position as it passes closer to Earth than the moon.
Credit: NASA
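
For scale, here is a quick back-of-the-envelope conversion in Python (our own arithmetic, using the mean Earth-moon distance; not from NASA's materials):

# 0.85 lunar distances, using a mean Earth-moon distance of ~384,400 km:
MEAN_EARTH_MOON_KM = 384_400
closest_approach_km = 0.85 * MEAN_EARTH_MOON_KM
print(f"{closest_approach_km:,.0f} km")  # roughly 327,000 km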

The November 3 chat about asteroid 2005 YU55 and other near-Earth objects provides an ideal opportunity to ask experts about these and other topics:
What is currently known about 2005 YU55's size, shape, orbit and origins?
What are the real dangers potentially posed by asteroids and comets vs. threats hyped by doomsayers?
What is the likelihood that 2005 YU55 will ever crash into the Earth?
When and where is the best place to view 2005 YU55?
How are scientists currently tracking comets and asteroids that may impact the Earth?
What types of additional information are scientists likely to learn about 2005 YU55 and asteroids in general via radar, visual and infrared monitoring of the asteroid during its close approach to Earth?

The live online chat on 2005 YU55 is part of the journal Science's weekly series of chats on the hottest topics in science, held every Thursday at 3 p.m. EDT.

Redefining 'Clean': A Whole New Level Of Sterilization For Surgical Instruments And Medical Devices

Aiming to take "clean" to a whole new level, researchers at the University of California at Berkeley and the University of Maryland at College Park have teamed up to study how low-temperature plasmas can deactivate potentially dangerous biomolecules left behind by conventional sterilization methods. Using low-temperature plasmas is a promising technique for sterilization and deactivation of surgical instruments and medical devices, but the researchers say its effectiveness isn't fully understood yet. The researchers will present their findings at the AVS Symposium, held Oct. 30 – Nov. 4, in Nashville, Tenn.

"Bacteria are known to create virulence factors – biomolecules expressed and secreted by pathogens – even if they have been killed," says David Graves, a professor working on the research at UC Berkeley's Department of Chemical and Biomolecular Engineering. These molecules are not always inactivated by conventional sterilization methods, such as heating surgical equipment in an autoclave, and can cause severe medical problems.

The misfolded proteins called "prions" that are thought to cause mad cow disease are one well-known example of harmful biomolecules, Graves says. "These molecules may not be inactivated by conventional autoclaves or other methods of disinfection or sterilization," he says. "In some cases, expensive endoscopes used in the brain must be discarded after a single use because the only way to reliably decontaminate them would destroy them."

Another harmful biomolecule is lipopolysaccharide (LPS), which is found in the membranes of E. coli bacteria. In humans, LPS can initiate an immune response that includes fever, hypotension, and respiratory dysfunction, and may even lead to multiple organ failure and death.

Graves' research team, in conjunction with a group led by Gottlieb Oehrlein at the University of Maryland, College Park, has focused its attention on Lipid A, the major immune-stimulating region of LPS. The researchers exposed Lipid A to the effects of low-temperature plasmas using a vacuum-beam system.

"Low-temperature plasma generates vacuum ultraviolet photons, ions/electrons, and radicals that are known to be able to deactivate these molecules even at low temperature," notes Graves. "However, the mechanisms by which they do this [are] poorly understood, so we can't be sure when they work and when they don't. Our measurements and calculations are designed to reveal this information."

One of the biggest challenges, Oehrlein says, was producing samples of lipopolysaccharide and Lipid A that were compatible with the equipment typically used to study plasma-surface interactions. "The collaboration of Professor Joonil Seog, who is an expert on biological assay methodologies and characterization, has been crucial in this respect," Oehrlein notes.

The scientists' results suggest that plasma-generated vacuum ultraviolet light can reduce the toxicity of Lipid A. "We have been surprised by the high sensitivity of endotoxins to UV or vacuum UV irradiation," says Oehrlein. The results mean that the ability of plasma to sterilize equipment might depend strongly on what the plasma is made of, since plasma optical emissions vary with composition. As a next step, Oehrlein says that his group plans to focus its efforts on understanding the influence of plasma-generated radicals on the deactivation of biomolecules.

Both groups' results are a good indication that "clean" can indeed be redefined.


Contacts and sources:
Catherine Meyers
American Institute of Physics

The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center.  Presentation PS+BI-MoA7, "Deactivation of Lipopolysaccharide and Lipid A by Ar/H2 Inductively Coupled Plasma," will be presented by Oehrlein's doctoral student, Elliot Bartis, at 4 p.m. on Monday, Oct. 31.

Presentation PS+BI-MoA-10, "Plasma Deactivation of Pyrogenic Biomolecules: Vacuum Ultraviolet Photon and Radical Beam Effects on Lipid A," will be presented by Graves's doctoral student, Ting-Ling Chung, at 5 p.m. on Monday, Oct. 31.

Main meeting website: http://www2.avs.org/symposium/AVS58/pages/greetings.html
Technical Program: http://www2.avs.org/symposium



Hey, Bacteria, Get Off Of My Boat! New Nano-Surfaces Repel Bacteria And Barnacles

Submerge it and they will come. Opportunistic seaweed, barnacles, and bacterial films can quickly befoul almost any underwater surface, but researchers are now using advances in nanotechnology and materials science to design environmentally friendly underwater coatings that repel these biological stowaways.

"Sea water is a very aggressive biological system," says Gabriel Lopez, whose lab at Duke University studies the interface of marine bacterial films with submerged surfaces. While the teeming abundance of ocean life makes coral reefs and tide pools attractive tourist destinations, for ships whose hulls become covered with slime, all this life can, quite literally, be a big drag. On just one class of U.S. Navy destroyer, biological build-up is estimated to cost more than $50 million a year, mostly in extra fuel, according to a 2010 study performed by researchers from the U.S. Naval Academy and Naval Surface Warfare Center in Maryland. Marine biofouling can also disrupt the operation of ocean sensors, heat-exchangers that suck in water to cool mechanical systems, and other underwater equipment.

Traditionally, a ship's manufacturer could apply biocide-containing paint, designed to poison any colonizing organisms, to the underside of the hull. However, these paints often contain heavy metals or other toxic chemicals that might accumulate in the environment and unintentionally harm fish or other marine organisms. To replace toxic paints, scientists and engineers are now looking for ways to manipulate the physical properties of surface coatings to discourage biological colonization. "Our end goal is to develop greener technology," Lopez says.

Lopez and his group focus on a class of materials called stimuli-responsive surfaces. As the name implies, the materials will alter their physical or chemical properties in response to a stimulus, such as a temperature change. The coatings being tested in Lopez's lab wrinkle on the micro- or nano-scale, shaking off slimy colonies of marine bacteria in a manner similar to how a horse might twitch its skin to shoo away flies. The researchers also consider how a stimulus might alter the chemical properties of a surface in a way that could decrease a marine organism's ability to stick.

At the AVS Symposium, held Oct. 30 – Nov. 4 in Nashville, Tenn., Lopez will present results from experiments on two different types of stimuli-responsive surfaces: one that changes its texture in response to temperature and the other in response to an applied voltage. The voltage-responsive surfaces are being developed in collaboration with the laboratory of Xuanhe Zhao, also a Duke researcher, who found that insulating cables can fail if they deform under voltages. "Surprisingly, the same failure mechanism can be made useful in deforming surfaces of coatings and detaching biofouling," Zhao said.

"The idea of an active surface is inspired by nature," adds Lopez, who remembers being intrigued by the question of how a sea anemone's waving tentacles are able to clean themselves. Other biological surfaces, such as shark skin, have already been copied by engineers seeking to learn from nature's own successful anti-fouling systems.

The model surfaces that Lopez and his team study are not yet in forms suitable for commercial applications, but they help the scientists understand the mechanisms behind effective texture or chemical changes. Understanding these mechanisms will also help the team develop materials and methods for controlling biofouling in a wide range of additional contexts, including on medical implants and industrial surfaces. As a next step, the team will test how the surfaces are able to shake off other forms of marine life. Eventually the team hopes to submerge coated test panels in coastal waters and wait for the marine life to come, but hopefully not get too cozy.


Contacts and sources:
Catherine Meyers
American Institute of Physics

The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center. Presentation MB-MoM-9, "Micro to Nanostructured Stimuli-Responsive Surfaces for Study and Control of Bioadhesion," was presented at 11 a.m. on Monday, Oct. 31.

Main meeting website: http://www2.avs.org/symposium/AVS58/pages/greetings.html
Technical Program: http://www2.avs.org/symposium

DNA Origami For Synthetic Biology Building Blocks


Researchers fabricate DNA strands on a reusable chip, fold them into novel nanostructures

In the emerging field of synthetic biology, engineers use biological building blocks, such as snippets of DNA, to construct novel technologies. One of the key challenges in the field is finding a way to quickly and economically synthesize the desired DNA strands. Now scientists from Duke University have fabricated a reusable DNA chip that may help address this problem by acting as a template from which multiple batches of DNA building blocks can be photocopied. The researchers have used the device to create strands of DNA which they then folded into unique nanoscale structures. They will present their findings at the AVS Symposium, held Oct. 30 – Nov. 4, in Nashville, Tennessee.

Many different methods of DNA synthesis have been developed, but each method has its drawbacks. Bulk DNA synthesis, which makes use of separate columns to house the reactions, can produce large amounts of material, but is costly and limited in the number of different DNA sequences it can create. The Duke researchers, by contrast, used an inkjet printer head to deposit small droplets of chemicals on top of a plastic chip, gradually constructing DNA strands of mixed length and composition on the surface. 

The team then used a biological photocopying process to harvest the DNA from the chip. To the researchers' surprise, they found they could reuse the chip to harvest multiple batches of DNA. "We found that we had an 'immortal' DNA chip in our hands," says Ishtiaq Saaem, a biomedical engineering researcher at Duke and member of the team. "Essentially, we were able to do the biological copying process to release material off the chip tens of times. The process seems to work even using a chip that we made, used, stored at -20 C for a while, and brought out and used again."

After releasing the DNA from the chip, the team "cooked" it together with a piece of long viral DNA. "In the cooking process, the viral DNA is stapled into a desired shape by the smaller chip-derived DNA," explains Saaem. One of the team's first examples of DNA origami was a rectangle shape with a triangle attached on one side, which the researchers dubbed a "nano-house." The structure could be used to spatially orient organic and inorganic materials, serve as a scaffold for drug delivery, or act as a nanoscale ruler, Saaem says.

Going forward, the team intends to produce larger DNA structures, while also testing the limit of how often their chip can be reused. In the near-term, the research has applications in the spatial positioning of biomolecules, such as proteins, for research purposes. Long-term, it might even transform information technology: "I would not be surprised if this methodology is used to fabricate the next generation of microprocessors that can push Moore's law even further," Saaem says.


Contacts and sources:
Catherine Meyers
American Institute of Physics

The AVS 58th International Symposium & Exhibition will be held Oct. 30 – Nov. 4 at the Nashville Convention Center.  Presentation BI-MoM10, "DNA Origami from Inkjet Synthesis Produced Strands," was presented at 11:20 a.m. Monday, Oct. 31.

Main meeting website: http://www2.avs.org/symposium/AVS58/pages/greetings.html
Technical Program: http://www2.avs.org/symposium



The 'Freshman 15' Is Just A Myth, Nationwide Study Reveals; Average Weight Gain Closer To 3 Lbs

Contrary to popular belief, most college students don’t gain anywhere near 15 pounds during their freshman year, according to a new nationwide study.

Rather than adding “the freshman 15,” as it is commonly called, the average student gains between about 2.5 and 3.5 pounds during the first year of college.

And college has little to do with the weight gain, the study revealed. The typical freshman gains only about a half-pound more than a same-age person who didn’t go to college.

Jay Zagorsky
Credit: USA

“The ‘freshman 15’ is a media myth,” said Jay Zagorsky, co-author of the study and research scientist at Ohio State University’s Center for Human Resource Research.

“Most students don’t gain large amounts of weight. And it is not college that leads to weight gain – it is becoming a young adult.”

The results suggest that media reporting of the freshman 15 myth may have serious implications.

“Repeated use of the phrase ‘the freshman 15,’ even if it is being used just as a catchy, alliterative figure of speech, may contribute to the perception of being overweight, especially among young women,” Zagorsky said.

“Weight gain should not be a primary concern for students going off to college.”

Zagorsky conducted the study with Patricia Smith of the University of Michigan-Dearborn. The study will appear in the December 2011 issue of the journal Social Science Quarterly.

The study uses data from 7,418 young people from around the country who participated in the National Longitudinal Survey of Youth 1997. The NLSY97 first interviewed people between the ages of 13 and 17 in 1997 and has interviewed the same people each year since. The NLSY is conducted by Ohio State’s Center for Human Resource Research for the U.S. Bureau of Labor Statistics.

Among many other questions, respondents were asked their weight and college status each year.

Other studies have shown that college students tend to underestimate their weight by half a pound to 3 pounds. But if people are consistent in underestimating their weight from year to year, it would not impact these results, Zagorsky said.

The study found that women gained an average of 2.4 pounds during their freshman year, while men gained an average of 3.4 pounds. No more than 10 percent of college freshmen gained 15 pounds or more -- and a quarter of freshmen reported actually losing weight during their first year.

“It’s worth noting that while there’s this focus on weight gain among freshmen, we found that one in four actually lost weight,” Zagorsky said.

The researchers examined a variety of factors that may be associated with freshman weight gain, including whether students lived in a dormitory, went to school full or part time, pursued a two-year or four-year degree, attended a private or public institution, or drank heavily (six or more drinks on at least four days per month).

None of these factors made a significant difference in weight gain, except heavy drinking. Even then, heavy drinkers gained less than a pound more than students who did not drink at that level.

Zagorsky said it was particularly significant that dorm living did not add to weight gain, since one hypothesis has been that the dorm environment encourages weight gain during the freshman year.

“There has been concern that access to all-you-can-eat cafeterias and abundant fast food choices, with no parental oversight, may lead to weight gain, but that doesn’t seem to hold true for most students,” he said.

The results do show, however, that students gain weight steadily over their college years.

The typical woman gains between seven and nine pounds, while men gain between 12 and 13 pounds.

“Not only is there not a ‘freshman 15,’ there doesn’t appear to be even a ‘college 15’ for most students,” Zagorsky said.

Over the course of the entire college career, students who both worked and attended college gained an extra one-fifth of a pound for each week they worked.

The researchers also examined what happened to college students’ weight after they graduated. They found that in the first four years after college, the typical respondent gained another 1.5 pounds per year.

“College students don’t face an elevated risk of obesity because they gain a large amount of weight during their freshman year,” Zagorsky said.

“Instead, they have moderate but steady weight gain throughout early adulthood. Anyone who gains 1.5 pounds every year will become obese over time, no matter their initial weight.”
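
That last claim is simple arithmetic. Here is a minimal sketch in Python (our own illustration, not the study's model) of how long a steady 1.5-pound-per-year gain takes to push a person past the standard obesity threshold of a body mass index (BMI) of 30:

LB_TO_KG = 0.453592  # pounds to kilograms

def years_until_obese(weight_lb, height_m, gain_lb_per_year=1.5):
    """Years of steady gain until BMI (kg / m^2) reaches 30."""
    weight_at_bmi30_lb = 30 * height_m ** 2 / LB_TO_KG
    return max(0.0, (weight_at_bmi30_lb - weight_lb) / gain_lb_per_year)

# Example: a 150-pound person who is 1.75 m tall crosses BMI 30 in ~35 years.
print(round(years_until_obese(150, 1.75)), "years")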

Although most students don’t need to worry about large weight gains their freshman year, Zagorsky said they still should focus on a healthy lifestyle.

“Students should begin developing the habit of eating healthy foods and exercising regularly. Those habits will help them throughout their lives.”


Written by Jeff Grabmeier 

Deficits In Brain Cannabinoids May Contribute To Eating Disorders

A new report in Biological Psychiatry suggests that deficits in endocannabinoid function may contribute to anorexia nervosa and bulimia. 

Endocannabinoids are substances made by the brain that affect brain function and chemistry in ways that resemble the effects of cannabis derivatives, including marijuana and hashish. These commonly abused drugs are well known to increase appetite, i.e., to cause the “munchies.” Thus, it makes sense that deficits in this brain system would be associated with reduced appetite.

Researchers measured the status of the endocannabinoid system indirectly, using positron emission tomography (PET) imaging to determine the density of an endocannabinoid receptor, the CB1 receptor, in several brain regions. They compared these densities in women with anorexia or bulimia with those of healthy women.

They found global increases in ligand binding to CB1 receptors in the brains of women with anorexia nervosa. This finding is consistent with a compensatory process engaged by deficits in endocannabinoid levels or reduced CB1 receptor function.

CB1 receptor availability was also increased in the insula in both anorexia and bulimia patients. The insula “is a region that integrates body perception, gustatory information, reward and emotion, functions known to be disturbed in these patients,” explained Dr. Koen Van Laere, the study’s lead author.

“The role of endocannabinoids in appetite control is clearly important. These new data point to important connections between this system and eating disorders,” added Dr. John Krystal, Editor of Biological Psychiatry.

Additional research is now needed to establish whether the observed changes are caused by the disease or whether these are neurochemical alterations that serve as risk factors for developing an eating disorder.

Furthermore, since very few effective treatments exist for these disorders, these data indicate that the endocannabinoid system may be a potential new target for developing drugs to treat eating disorders. Such new therapies are currently being investigated in animal models.

Contacts and sources:
Donna Santaromita
Elsevier

The article is “Brain Type 1 Cannabinoid Receptor Availability in Patients with Anorexia and Bulimia Nervosa” (DOI 10.1016/j.biolpsych.2011.05.010) by Nathalie Gérard, Guido Pieters, Karolien Goffin, Guy Bormans, and Koen Van Laere. Gérard, Goffin, and Van Laere are affiliated with the Division of Nuclear Medicine, University Hospital and Katholieke Universiteit Leuven, Leuven, Belgium. Pieters is affiliated with the University Psychiatric Centre, Katholieke Universiteit Leuven, Eating Disorder Clinic Kortenberg, Kortenberg, Belgium. Bormans is with the Laboratory for Radiopharmacy, Katholieke Universiteit Leuven, Leuven, Belgium. The article appears in Biological Psychiatry, Volume 70, Number 8 (October 15, 2011), published by Elsevier. The authors’ disclosures of financial interests and potential conflicts of interest are available in the article. John H. Krystal, M.D., is Chairman of the Department of Psychiatry at the Yale University School of Medicine and a research psychiatrist at the VA Connecticut Healthcare System. His disclosures of financial interests and potential conflicts of interest are also available.


Before The G20 Summit In Cannes: IZA Researchers Propose Concept For A Global Debt Brake

A few days before the G20 summit in Cannes, economists from the Institute for the Study of Labor (IZA) in Bonn, Germany, have proposed a global strategy for consolidating public finances.

In a new IZA Policy Paper entitled "A challenge for the G20: Globally stipulated debt brakes and transnational independent fiscal supervisory councils," Mathias Dolls, Andreas Peichl and Klaus F. Zimmermann recommend that political leaders of the G20 countries implement a global debt brake to drive the consolidation of public finances and make it binding. To ensure this, the debt brakes should be fixed in national constitutions and enforced by transnational independent fiscal supervisory councils. The researchers point out that the debt brake is an important instrument for a lasting solution to the sovereign debt crisis.

The fiscal councils could be located at the European Stability Mechanism (ESM) and at the International Monetary Fund (IMF). They should conduct a regular evaluation of national budget plans in order to ensure that they meet the requirements stipulated by the debt brake. Through this global monitoring process, an early warning system could be developed to avoid sovereign debt crises and the resulting contagion risks among highly indebted countries in the future.

Klaus F. Zimmermann, Director of IZA: "The dramatic developments of the last weeks and months make clear that besides the emergency measures taken in the Eurozone and the debt deal in the United States, which only provide short-term relief, structural reforms to overcome the sovereign debt crisis are desperately needed. The G20 is the right place for the negotiation of global reform measures." The G20 includes not only representatives of the countries with the highest debt levels (EU, US, Japan), but also the BRIC countries Brazil, Russia, India and China, as well as large developing economies whose growth is particularly threatened by the sovereign debt crisis.

"It is not sufficient to negotiate at the G20 meeting in Cannes only emergency measures against the current crisis. A master plan is needed which puts the focus on the long-term challenges of the global sovereign debt crisis; if no convincing answers are found, financial market uncertainty will grow even further," said Zimmermann.

The sovereign debt crisis was already on the agenda of the last G20 summits in Toronto and Seoul. Political leaders agreed that budget deficits would be halved by 2013. However, these commitments are not binding, and the economy is much weaker today than was hoped at the time. The same is true for the decisions made at the EU summit last week: they call for debt brakes fixed in national constitutions, but fail to stipulate politically independent supervision. IZA Director Zimmermann: "This structure of fiscal supervision already failed in the Maastricht Treaty. We need a global solution which includes independent fiscal supervisory councils with the right to impose sanctions."

High sovereign debt is a problem not only in Europe but also in other important world regions. At 233 percent, Japan has the highest debt-to-GDP ratio among industrialized economies; almost half of its budget is financed by new borrowing. The United States will likely see a new record level of government debt this budget year: by the end of June 2011, its debt-to-GDP ratio already amounted to 98.6 percent.
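
To see why such numbers alarm economists, consider textbook debt dynamics. The sketch below (our own Python illustration, not part of the IZA paper) projects a debt-to-GDP ratio forward: the ratio compounds with the gap between the interest rate and nominal growth, plus any new primary deficit:

def project_debt_ratio(debt_to_gdp, interest_rate, growth_rate,
                       primary_deficit, years):
    """Project debt-to-GDP forward: d_next = d * (1 + r) / (1 + g) + deficit."""
    d = debt_to_gdp
    for _ in range(years):
        d = d * (1 + interest_rate) / (1 + growth_rate) + primary_deficit
    return d

# Example (assumed numbers, not IZA's): starting at 98.6% of GDP with 3%
# interest, 2% nominal growth and a 4%-of-GDP primary deficit, debt tops
# 150% of GDP within a decade.
print(f"{project_debt_ratio(0.986, 0.03, 0.02, 0.04, 10):.1%}")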

The emerging markets are no uninterested bystanders in this debate. These countries and their vast populations feel that this is their time, and that a shadow that lay over them for centuries has at long last been lifted. If the Western countries now fail to get their fiscal acts together, there is a real danger of a new form of "colonialism," which would manifest itself as a very serious growth tax imposed on the developing nations.

This "tax" would take the form of a global economic collapse and a decline in development aid related to an unresolved Western debt overhang that may very soon prove unsustainable. Or it may take the form of high inflation, which would have the same effect on these countries, which are not only hoping for, but depend on, macroeconomic stability to execute their plan to move out from under the shackles of centuries of underdevelopment. For these reasons, imposing an irrevocable debt brake is not just a fair, but a very necessary quid pro quo if we want to have any chance to work off the public funds we are currently injecting into our economies.


Contacts and sources:
Mark Fallak
Institute for the Study of Labor (IZA), Bonn, Germany

The complete study is downloadable from the IZA homepage:
"A challenge for the G20: Globally stipulated debt brakes and transnational independent fiscal supervisory councils"
IZA Policy Paper No. 33
http://ftp.iza.org/pp33.pdf