Tuesday, November 30, 2010

Astronomers Use Moon In Effort To Corral Elusive Cosmic Particles

Radio telescopes normally can't detect neutrinos, but astronomers aimed Very Large Array antennas at the Moon in an innovative effort to detect radio "flashes" caused by the interaction of cosmic neutrinos with lunar material.

Search for ultra-high-energy neutrinos from space turns the Moon into part of the 'detector'

Seeking to detect mysterious, ultra-high-energy neutrinos from distant regions of space, a team of astronomers used the Moon as part of an innovative telescope system for the search. Their work gave new insight into the possible origin of the elusive subatomic particles and points the way to opening a new view of the Universe in the future.

The team used special-purpose electronic equipment brought to the National Science Foundation's Very Large Array (VLA) radio telescope, and took advantage of new, more-sensitive radio receivers installed as part of the Expanded VLA (EVLA) project. Prior to their observations, they tested their system by flying a small, specialized transmitter over the VLA in a helium balloon.

In 200 hours of observations, Ted Jaeger of the University of Iowa and the Naval Research Laboratory, and Robert Mutel and Kenneth Gayley of the University of Iowa did not detect any of the ultra-high-energy neutrinos they sought. This lack of detection placed a new limit on the number of such particles arriving from space, and cast doubt on some theoretical models for how those neutrinos are produced.

Neutrinos are fast-moving subatomic particles with no electrical charge that readily pass unimpeded through ordinary matter. Though plentiful in the Universe, they are notoriously difficult to detect. Experiments to detect neutrinos from the Sun and supernova explosions have used large volumes of material such as water or chlorine to capture the rare interactions of the particles with ordinary matter.

The ultra-high-energy neutrinos the astronomers sought are postulated to be produced by the energetic, black-hole-powered cores of distant galaxies; massive stellar explosions; annihilation of dark matter; cosmic-ray particles interacting with photons of the Cosmic Microwave Background; tears in the fabric of space-time; and collisions of the ultra-high-energy neutrinos with lower-energy neutrinos left over from the Big Bang.

Radio telescopes can't detect neutrinos, but the scientists pointed sets of VLA antennas around the edge of the Moon in hopes of seeing brief bursts of radio waves emitted when the neutrinos they sought passed through the Moon and interacted with lunar material. Such interactions, they calculated, should send the radio bursts toward Earth. This technique was first used in 1995 and has been used several times since then, with no detections recorded. The latest VLA observations have been the most sensitive yet done.

"Our observations have set a new upper limit -- the lowest yet -- for the amount of the type of neutrinos we sought," Mutel said. "This limit eliminates some models that proposed bursts of these neutrinos coming from the halo of the Milky Way Galaxy," he added. To test other models, the scientists said, will require observations with more sensitivity.

"Some of the techniques we developed for these observations can be adapted to the next generation of radio telescopes and assist in more-sensitive searches later," Mutel said. "When we develop the ability to detect these particles, we will open a new window for observing the Universe and advancing our understanding of basic astrophysics," he said.

The scientists reported their work in the December edition of the journal Astroparticle Physics.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Contacts and sources:
Astroparticle Physics
National Science Foundation

A New Paradigm: Project Pioneers Use Of Silicon-Germanium For Space Electronics Applications

A five-year project led by the Georgia Institute of Technology has developed a novel approach to space electronics that could change how space vehicles and instruments are designed. The new capabilities are based on silicon-germanium (SiGe) technology, which can produce electronics that are highly resistant to both wide temperature variations and space radiation.

Titled "SiGe Integrated Electronics for Extreme Environments," the $12 million, 63-month project was funded by the National Aeronautics and Space Administration (NASA). In addition to Georgia Tech, the 11-member team included academic researchers from the University of Arkansas, Auburn University, University of Maryland, University of Tennessee and Vanderbilt University. Also involved in the project were BAE Systems, Boeing Co., IBM Corp., Lynguent Inc. and NASA's Jet Propulsion Laboratory.

This close-up image shows a remote electronics unit 16-channel sensor interface, developed for NASA using silicon-germanium microchips by an 11-member team led by Georgia Tech.
Credit: Gary Meek

"The team's overall task was to develop an end-to-end solution for NASA – a tested infrastructure that includes everything needed to design and build extreme-environment electronics for space missions," said John Cressler, who is a Ken Byers Professor in Georgia Tech's School of Electrical and Computer Engineering. Cressler served as principal investigator and overall team leader for the project.

A paper on the project findings will appear in the December 2010 issue of IEEE Transactions on Device and Materials Reliability. During the past five years, work done under the project has resulted in some 125 peer-reviewed publications.

Unique Capabilities

SiGe alloys combine silicon, the most common microchip material, with germanium at nanoscale dimensions. The result is a robust material that offers important gains in toughness, speed and flexibility.

That robustness is crucial to silicon-germanium's ability to function in space without bulky radiation shields or large, power-hungry temperature control devices. Compared to conventional approaches, SiGe electronics can provide major reductions in weight, size, complexity, power and cost, as well as increased reliability and adaptability.

"Our team used a mature silicon-germanium technology – IBM's 0.5 micron SiGe technology – that was not intended to withstand deep-space conditions," Cressler said. "Without changing the composition of the underlying silicon-germanium transistors, we leveraged SiGe's natural merits to develop new circuit designs – as well as new approaches to packaging the final circuits – to produce an electronic system that could reliably withstand the extreme conditions of space."

At the end of the project, the researchers supplied NASA with a suite of modeling tools, circuit designs, packaging technologies and system/subsystem designs, along with guidelines for qualifying those parts for use in space. In addition, the team furnished NASA with a functional prototype – called a silicon-germanium remote electronics unit (REU) 16-channel general purpose sensor interface. The device was fabricated using silicon-germanium microchips and has been tested successfully in simulated space environments.

Georgia Tech Professor John Cressler displays a functional prototype device developed for NASA using silicon-germanium microchips. The device, a 16-channel sensor interface, has been tested successfully in simulated space environments.
Testing SiGe devices
Credit: Gary Meek

A New Paradigm

Andrew S. Keys, center chief technologist at the Marshall Space Flight Center and NASA program manager, said the now-completed project has moved the task of understanding and modeling silicon-germanium technology to a point where NASA engineers can start using it on actual vehicle designs.

"The silicon-germanium extreme environments team was very successful in doing what it set out to do," Keys said. "They advanced the state-of-the-art in analog silicon-germanium technology for space use – a crucial step in developing a new paradigm leading to lighter weight and more capable space vehicle designs."

Keys explained that, at best, most electronics conform to military specifications, meaning they function across a temperature range of minus 55 to plus 125 degrees Celsius. But electronics in deep space are typically exposed to far greater temperature ranges, as well as to damaging radiation. The Moon's surface cycles between plus 120 degrees Celsius during the lunar day and minus 180 degrees Celsius at night.

The silicon-germanium electronics developed by the extreme environments team have been shown to function reliably throughout that entire plus 120 to minus 180 degree Celsius range. They are also highly resistant or immune to various types of radiation.
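The gap between the two operating ranges can be made concrete with a quick illustrative check, using only the temperature figures quoted above (the helper function and sample values are our own, not from the project):

```python
# Temperature figures from the article, in degrees Celsius.
MIL_SPEC_RANGE = (-55, 125)       # military-specification operating range
LUNAR_SURFACE_RANGE = (-180, 120) # lunar night to lunar day
SIGE_DEMO_RANGE = (-180, 120)     # range demonstrated by the SiGe team

def covers(rated, required):
    """True if the rated (min, max) range fully contains the required range."""
    return rated[0] <= required[0] and rated[1] >= required[1]

print(covers(MIL_SPEC_RANGE, LUNAR_SURFACE_RANGE))  # False: mil-spec parts fall short
print(covers(SIGE_DEMO_RANGE, LUNAR_SURFACE_RANGE)) # True
```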

The conventional approach to protecting space electronics, developed in the 1960s, involves bulky metal boxes that shield devices from radiation and temperature extremes, Keys explained. Designers must place most electronics in a protected, temperature controlled central location and then connect them via long and heavy cables to sensors or other external devices.

By eliminating the need for most shielding and special cables, silicon-germanium technology helps reduce the single biggest problem in space launches – weight. Moreover, robust SiGe circuits can be placed wherever designers want, which helps eliminate data errors caused by impedance variations in lengthy wiring schemes.

"For instance, the Mars Exploration Rovers, which are no bigger than a golf cart, use several kilometers of cable that lead into a warm box," Keys said. "If we can move most of those electronics out to where the sensors are on the robot's extremities, that will reduce cabling, weight, complexity and energy use significantly."

A Collaborative Effort

NASA currently rates the new SiGe electronics at a technology readiness level of six, which means the circuits have been integrated into a subsystem and tested in a relevant environment. The next step, level seven, involves integrating the SiGe circuits into a vehicle for space flight testing. At level eight, a new technology is mature enough to be integrated into a full mission vehicle, and at level nine the technology is used by missions on a regular basis.

Successful collaboration was an important part of the silicon-germanium team's effectiveness, Keys said. He remarked that he had "never seen such a diverse team work together so well."

Professor Alan Mantooth, who led a large University of Arkansas contingent involved in modeling and circuit-design tasks, agreed. He called the project "the most successful collaboration that I've been a part of."

Mantooth termed the extreme-electronics project highly useful in the education mission of the participating universities. He noted that a total of 82 students from six universities worked on the project over five years.

Richard W. Berger, a BAE Systems senior systems architect who collaborated on the project, also praised the student contributions.

"To be working both in analog and digital, miniaturizing, and developing extreme-temperature and radiation tolerance all at the same time – that's not what you'd call the average student design project," Berger said.

Miniaturizing an Architecture

BAE Systems' contribution to the project included providing the basic architecture for the remote electronics unit (REU) sensor interface prototype developed by the team. That architecture came from a previous electronics generation: the now-cancelled Lockheed Martin X-33 spaceplane, initially designed in the 1990s.

In the original X-33 design, Berger explained, each sensor interface used an assortment of sizeable analog parts for the front-end signal-receiving section. That section was supported by a digital microprocessor, memory chips and an optical bus interface – all housed in a protective five-pound box.

The extreme environments team transformed the bulky X-33 design into a miniaturized sensor interface, utilizing silicon-germanium. The resulting SiGe device weighs about 200 grams and requires no temperature or radiation shielding. Large numbers of these robust, lightweight REU units could be mounted on spacecraft or data-gathering devices close to sensors, reducing size, weight and power while improving reliability.

Berger said that BAE Systems is interested in manufacturing a sensor interface device based on the extreme environment team's discoveries.

Other space-oriented companies are also pursuing the new silicon-germanium technology, Cressler said. NASA, he explained, wants the intellectual-property barriers to the technology to be low so that it can be used widely.

"The idea is to make this infrastructure available to all interested parties," he said. "That way it could be used for any electronics assembly – an instrument, a spacecraft, an orbital platform, lunar-surface applications, Titan missions – wherever it can be helpful. In fact, the process of defining such a NASA mission-insertion road map is currently in progress."

Contacts and sources:
IEEE Transactions on Device and Materials Reliability

Discovering The Secrets Of Stonehenge, Revolutionary Method Of Moving Giant Stones

A revolutionary new idea on the movement of big monument stones like those at Stonehenge has been put forward by an archaeology student at the University of Exeter.

Whilst an undergraduate, Andrew Young saw a correlation between standing stone circles in Aberdeenshire, Scotland and a concentration of carved stone balls, which may have been used to help transport the big stones by functioning like ball bearings.

Archaeology students working on stone moving experiment
Credit: University of Exeter

Young discovered that many of the late Neolithic stone balls had a diameter within a millimetre of each other, which he felt indicated they would have been used together in some way rather than individually. By plotting on a map where the carved balls were found, he realised they were all within the vicinity of Neolithic monuments known as recumbent stone circles. These stone circle monuments in Aberdeenshire share an equivalent form to Stonehenge, yet with some much larger stones.

To test his theory, Young built a model using small wooden balls placed in grooved pieces of wood moulding, similar to a railway track but with a groove rather than a rail. The balls were spread apart and a mirror image of the track was placed on top, supporting a wood platform. He then placed concrete slabs on the tracks to replicate a heavy weight.

Young said, “I then sat on top of the slabs to add extra weight. The true test was when a colleague used his index finger to move me forward, a mere push and the slabs and I shot forward with great ease. This proved the balls could move large heavy objects and could be a viable explanation of how giant stones were moved, especially in relation to where the stone balls were originally found.”

A further experiment on a much larger scale was arranged with the financial assistance of Gemini Productions and WGBH, Boston for NOVA, an American documentary TV programme. They were focusing on Stonehenge and wanted to see if a team of archaeology students, directed by Professor Bruce Bradley, a lead archaeologist at the University of Exeter, could build and test a life-size wooden model that might reflect how massive stones could have been moved across the landscape. Previous experiments by others to move large stones had not been particularly effective: building a hardened surface on which to roll logs, or dragging stones through trenches, moved the stones only with great effort. Moreover, if stones had been moved in those ways, the hardened surface or trench would show up in the archaeological record, and none has been found.

In the large scale experiment, green wood was used for cost purposes. Neolithic people would have had access to much better materials, such as cured oak, which is extremely tough and was in abundance due to the great forests at the time. They also had the technical ability to cut long timber planks, known through archaeological evidence of planks used as a way of creating tracks for people to walk on through bogs. The experiment used hand shaped granite spheres as well as wooden spheres.

Professor Bradley said, “Our experiment had to go for the much cheaper option of green wood, which is relatively soft; however, we successfully moved extremely heavy weights at a pace. The demonstration indicated that big stones could have been moved using this ball-bearing system with roughly ten oxen, and may have been able to transport stones up to ten miles per day. This method also has no lasting impact on the landscape, as the tracks with the ball bearings leapfrog each other as they are moved up the line.”
He added, “It demonstrates that the concept works. It does not prove that Neolithic people used this method, but it was and is possible. This is a radical new departure, because previous ideas were not particularly effective in transporting large stones and left unanswered questions about the archaeological record they would have left behind.”

The next stage in the project is to collaborate with the engineering experts at the University, who can calculate the loads which could be transported using various combinations of variables such as hard wood and U-shaped grooves. This will provide the mathematical evidence to see how much force would be needed to get the stone moving and to keep it moving. This will enable the project team to gain an even greater understanding of how stones may have been transported across huge distances and even up hills. The ultimate goal is a full-scale experiment in Aberdeenshire using more authentic materials, stone balls and a team of oxen.

Source: University of Exeter

CERN Presents Matter As It Would Have Existed In The Very First Instants Of The Universe’s Life

After less than three weeks of heavy-ion running, the three experiments studying lead ion collisions at the LHC have already brought new insight into matter as it would have existed in the very first instants of the Universe’s life. The ALICE experiment, which is optimised for the study of heavy ions, published two papers just a few days after the start of lead-ion running. Animation:  http://www.atlas.ch/multimedia/html-nc/animation-heavy-ion-event.html

Now, the first direct observation of a phenomenon known as jet quenching has been made by both the ATLAS and CMS collaborations. This result is reported in a paper from the ATLAS collaboration accepted for publication yesterday in the scientific journal Physical Review Letters. A CMS paper will follow shortly, and results from all of the experiments will be presented at a seminar on Thursday 2 December at CERN. Data taking with ions continues to 6 December.

Snapshot of two colliding lead ions just after impact (simulation).


“It is impressive how fast the experiments have arrived at these results, which deal with very complex physics,” said CERN’s Research Director Sergio Bertolucci. “The experiments are competing with each other to publish first, but then working together to assemble the full picture and cross check their results. It’s a beautiful example of how competition and collaboration is a key feature of this field of research.”

This and other images are offline reconstructed events from the GRID, showing tracks from the Inner Tracking System and the Time Projection Chamber of ALICE. All are credited to CERN.
ALICE event display
One of the primary goals of the lead-ion programme at CERN is to create matter as it would have been at the birth of the Universe. Back then, the ordinary nuclear matter of which we and the visible Universe are made could not have existed: conditions would have been too hot and turbulent for quarks to be bound up by gluons into protons and neutrons, the building blocks of the elements. Instead, these elementary particles would have roamed freely in a sort of quark gluon plasma. Showing beyond doubt that we can produce and study quark gluon plasma will bring important insights into the evolution of the early Universe, and the nature of the strong force that binds quarks and gluons together into protons, neutrons and ultimately all the nuclei of the periodic table of the elements.

When lead ions collide in the LHC, they can concentrate enough energy in a tiny volume to produce tiny droplets of this primordial state of matter, which signal their presence by a wide range of measurable signals. The ALICE papers point to a large increase in the number of particles produced in the collisions compared to previous experiments, and confirm that the much hotter plasma produced at the LHC behaves as a very low viscosity liquid (a perfect fluid), in keeping with earlier observations from Brookhaven’s RHIC collider. Taken together, these results have already ruled out some theories about how the primordial Universe behaved.

“With nuclear collisions, the LHC has become a fantastic 'Big Bang' machine,” said ALICE spokesperson Jürgen Schukraft. “In some respects, the quark-gluon matter looks familiar, still the ideal liquid seen at RHIC, but we’re also starting to see glimpses of something new.”
The ATLAS and CMS experiments play to the strength of their detectors, which both have very powerful and hermetic energy measuring capability. This allows them to measure jets of particles that emerge from collisions. Jets are formed as the basic constituents of nuclear matter, quarks and gluons, fly away from the collision point. In proton collisions, jets usually appear in pairs, emerging back to back. However, in heavy ion collisions the jets interact in the tumultuous conditions of the hot dense medium. This leads to a very characteristic signal, known as jet quenching, in which the energy of the jets can be severely degraded, signalling interactions with the medium more intense than ever seen before. Jet quenching is a powerful tool for studying the behaviour of the plasma in detail.
“ATLAS is the first experiment to report direct observation of jet quenching,” said ATLAS Spokesperson Fabiola Gianotti. “The excellent capabilities of ATLAS to determine jet energies enabled us to observe a striking imbalance in energies of pairs of jets, where one jet is almost completely absorbed by the medium. It’s a very exciting result of which the Collaboration is proud, obtained in a very short time thanks in particular to the dedication and enthusiasm of young scientists.”
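The "striking imbalance in energies of pairs of jets" is commonly quantified with a dijet asymmetry of the form (E1 − E2)/(E1 + E2), where E1 and E2 are the energies of the leading and subleading jets. The sketch below is our own illustration of that quantity, with invented energy values; values near 0 indicate balanced back-to-back jets, while values near 1 indicate one jet almost completely absorbed by the medium:

```python
def dijet_asymmetry(e1, e2):
    """Energy imbalance of a jet pair: 0 = balanced, -> 1 = one jet absorbed."""
    lead, sub = max(e1, e2), min(e1, e2)
    return (lead - sub) / (lead + sub)

# Hypothetical jet energies (arbitrary units) for illustration only.
print(dijet_asymmetry(100.0, 95.0))  # nearly balanced pair, ~0.026
print(dijet_asymmetry(100.0, 30.0))  # strongly quenched pair, ~0.54
```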

“It is truly amazing to be looking, albeit on a microscopic scale, at the conditions and state of matter that existed at the dawn of time,” said CMS Spokesperson Guido Tonelli. “Since the very first days of lead-ion collisions the quenching of jets appeared in our data while other striking features, like the observation of Z particles, never seen before in heavy-ion collisions, are under investigation. The challenge is now to put together all possible studies that could lead us to a much better understanding of the properties of this new, extraordinary state of matter."

The ATLAS and CMS measurements herald a new era in the use of jets to probe the quark gluon plasma. Future jet quenching and other measurements from the three LHC experiments will provide powerful insight into the properties of the primordial plasma and the interactions among its quarks and gluons.
With data taking continuing for over one more week, and the LHC already having delivered the programmed amount of data for 2010, the heavy-ion community at the LHC is looking forward to further analysing their data, which will greatly contribute to the emergence of a more complete model of quark gluon plasma, and consequently the very early Universe.

Source: CERN, all images are an offline reconstructed event from the GRID, showing tracks from the Inner Tracking System and the Time Projection Chamber of ALICE. All are credited to CERN.

Snakes On A Rope: Researchers Take A Unique Look At The Climbing Abilities Of Boa Constrictors

In the wild, how does a snake climb a vertical surface without slipping? University of Cincinnati researchers have published a study involving boa constrictors that examines the question.

In a unique study involving young boa constrictors, University of Cincinnati researchers put snakes to work on varying diameters and flexibility of vertical rope to examine how they might move around on branches and vines to gather food and escape enemies in their natural habitat.

The findings by Greg Byrnes, a University of Cincinnati postdoctoral fellow in the department of biological sciences, and Bruce C. Jayne, a UC professor of biology, are published in the December issue of The Journal of Experimental Biology.

For many Americans, it was the most dreaded moment in gym class: the challenge to wrap oneself around a vertical rope and climb as high as possible. Some of us couldn’t even get off the floor. But for other animals – even with no arms, no hands, no legs and no feet – that climbing ability is a necessity to survive.

The UC researchers sent the snakes climbing up varying widths and tensions of ropes as they explored snake movement in relation to their musculoskeletal design and variation in their environment.

They found that regardless of diameter or flexibility of the rope, the snakes alternated curving between left and right as they climbed the ropes. On the thicker ropes, they were able to move greater portions of their bodies forward as they climbed. As the ropes became thinner and more flimsy, the snakes used more of their bodies – including their back, sides and belly – to manipulate the rope for climbing.

“Despite the likely physical and energetic challenges, the benefits of the ability to move on narrow and compliant substrates might have large ecological implications for animals,” write the authors. “Arboreal organisms must often feed or hunt in the terminal branch niche, which requires the ability to move safely on narrow and compliant substrates.”
 boa constrictor

Jayne points out that although the large muscles of boa constrictors make them fairly stocky and heavy compared to other snakes, this anatomy probably increases their strength. All of the snakes gripped the ropes using a concertina mode of locomotion, which is defined by some regions of the body periodically stopping while other regions of the body extend forward. “It turns out boa constrictors are strong enough so that they can support their weight with a modest number of gripping regions,” adds Jayne.

The researchers say theirs is the first study to explicitly examine the combined effects of diameter and compliance on how an animal gets around. Future research is underway to compare differing muscular anatomies in snakes and relate them to their function in terms of behavior and environment.
The research was supported by a grant from the National Science Foundation.
Contacts and sources:
Journal of Experimental Biology
National Science Foundation

Scientists Find Plant Cell Genes That Manage Arsenic Accumulation, A Problem for 150 Million

Researchers from Europe, Asia and the US have identified the two essential genes that control the accumulation and detoxification of arsenic in plant cells. The findings will help scientists reduce the accumulation of the toxic metalloid in crops. The study was funded in part by the PHIME ('Public health impact of long-term, low-level mixed element exposure in susceptible population strata') project, which was backed under the 'Food quality and safety' Thematic area of the EU's Sixth Framework Programme (FP6) to the tune of EUR 13.43 million.

The sinking of tube wells (low-cost, shallow water wells) in Southeast Asia, as well as mining in various regions of China, Thailand and the US, often boosts arsenic concentrations in water beyond the World Health Organization (WHO) limit of 10 micrograms per litre (mcg/l), the value above which health problems start to occur.
Tens of millions of people are exposed to the risks associated with high levels of arsenic by drinking contaminated water or by ingesting cereal crops cultivated in polluted soils. Long-lasting exposure to this highly toxic metalloid can have a disastrous effect on human organs including the gastrointestinal tract, kidneys, liver, lungs and skin, and it increases the risk of cancer. In Bangladesh alone, it is estimated that 25 million people drink water that contains more than 50 mcg/l of arsenic and that 2 million of them risk dying from cancer caused by this toxic substance.
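The two thresholds quoted above can be applied as a simple screening check. The sample wells and concentrations below are invented for illustration; only the 10 mcg/l WHO guideline and the 50 mcg/l figure cited for Bangladesh come from the text:

```python
WHO_LIMIT = 10.0        # WHO guideline value, micrograms per litre
BANGLADESH_LEVEL = 50.0 # concentration cited for 25 million people in Bangladesh

def exceeds(concentration, limit=WHO_LIMIT):
    """True if a measured arsenic concentration (mcg/l) exceeds the limit."""
    return concentration > limit

# Hypothetical well measurements in mcg/l.
samples = {"well A": 4.0, "well B": 32.0, "well C": 71.0}
flagged = [name for name, c in samples.items() if exceeds(c)]
print(flagged)           # wells over the WHO guideline
print([name for name, c in samples.items() if exceeds(c, BANGLADESH_LEVEL)])
```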

Scientists from laboratories in Switzerland, South Korea and the US, and from the Swiss National Centre of Competence in Research (NCCR) Plant Survival believe that by identifying the key genes responsible for the accumulation of arsenic in plant cells they have made the first step towards tackling these problems. They explained their findings in the journal Proceedings of the National Academy of Sciences (PNAS).

Plants offer a way for toxic metals to enter the food chain. For example, arsenic is stored within rice grains, which, in regions polluted with this toxic metalloid, constitutes a danger for populations whose diet depends to a great extent on this cereal. Arsenic or cadmium in soils is taken up by plant cells and stored in compartments called vacuoles.

Within the cell, the translocation of arsenic into vacuoles is ensured by a category of peptides, the phytochelatins, which are important for the detoxification of heavy metals: they bind to the toxic metalloid, and the resulting complex is transported into the vacuole for detoxification.

The researchers said the process was similar to hooking up a trailer to a truck with the 'truck and trailer' complex being stored in the vacuole.

'By identifying the genes responsible for the vacuolar phytochelatin transport and storage, we have found the missing link that the scientific community has been searching for over the past 25 years,' said Enrico Martinoia, a professor in plant physiology at the University of Zurich in Switzerland. He and his team pointed out that controlling these genes will make it possible to develop plants capable of preventing the transfer of toxic metals and metalloids from the roots to the leaves and grains, thereby limiting the entry of arsenic into the food chain.

'By focusing on these genes, we could avoid the accumulation of these heavy metals in edible portions of the plant such as grains or fruits,' said Youngsook Lee from the Pohang University of Science and Technology (POSTECH) in South Korea.

Source: Cordis
Citation: Song, W-Y., et al. (2010) Arsenic tolerance in Arabidopsis is mediated by two ABCC-type phytochelatin transporters. PNAS. DOI:10.1073/pnas.1013964107.
For more information, please visit:
PHIME: http://www.york.ac.uk/res/phime/
PNAS: http://www.pnas.org/

Space Science and Renaissance Tombs

A group of Renaissance Tomb-Monuments in Suffolk is being analysed with tools developed in Space Science, to unlock their mysterious past and offer new insights into the Tudor Reformation.

Led by the University of Leicester, this innovative Heritage Science project draws together space scientists, art-historians, archaeologists and museologists from Leicester, with historians at Oxford and Yale, and archaeologists and scientists from English Heritage.

An interdisciplinary research programme in Cultural Heritage, it is funded by a major award from the Science and Heritage Programme of the Arts and Humanities Research Council (AHRC) and the Engineering and Physical Sciences Research Council (EPSRC). The award is for £497,000 and an additional three fully-funded PhD studentships.

Principal Investigator Dr Phillip Lindley, from the University of Leicester Department of History of Art and Film, said: “Key to this programme is the innovative employment of techniques borrowed from Space Science, principally three-dimensional scanning and non-destructive materials analysis, to solve a complex set of historical, archaeological and art-historical problems.”

The researchers will first analyse the great Renaissance monuments of Thomas Howard, third Duke of Norfolk (d. 1554) and of Henry Fitzroy (d. 1539), Duke of Richmond, Henry VIII’s illegitimate son.

Dr Lindley said: “Both monuments seem to have been dramatically altered when they were moved in the middle of the sixteenth century from their original locations in Thetford Priory to Framlingham Parish Church, where they now stand.

Dr Phillip Lindley examining the Renaissance sculpted tomb-monument of Thomas Howard, Duke of Norfolk.
Credit: University of Leicester.

“Puzzlingly, pieces excavated at Thetford in the 1930s seem to have originally belonged to these monuments and this suggests that they used to look very different from what we now see.

“We shall virtually disassemble the monuments and reconstruct their original forms for the first time in half a millennium, trying to integrate the excavated fragments in our virtual reconstructions. It is as if we have two (or more) three-dimensional jigsaws: we need first to sort the pieces out and then put them back together.”

With scanning and analytical techniques borrowed from Space Science, all this can be done virtually, without even touching the monuments. Materials analysis (using XRF, Raman spectroscopy and other non-destructive techniques), again developed for Space Science applications, will provide information about the original painted surfaces.

The project will function as a case study, adapting techniques for analysis, interpretation and display, to make them widely transferable, and to further the innovative deployment of science in the Cultural Heritage Sector.

The research project led by Dr Phillip Lindley comprises work teams in the University of Leicester departments of History of Art & Film (Dr Lindley, with Dr Jackie Hall), the Space Science Centre (Prof George Fraser), Museum Studies (Dr Ross Parry) and Computer Science (Dr Effie Law), with collaborating groups at Oxford (Dr Steve Gunn) and Yale universities (Dr Lisa Ford), and in English Heritage (Jan Summerfield, Dr Paul Bryan).

Source: University of Leicester

Caffeinated Alcoholic Beverages – A Growing Public Health Problem

In the wake of multiple state bans on caffeinated alcoholic beverages (CABs) and an FDA warning to four companies to remove their products from the marketplace, an article published online today in the American Journal of Preventive Medicine delineates the scope of the public health problem and suggests areas of research that might help address it.

“Although several manufacturers of caffeinated beer have withdrawn their products from the market, there is no sign that young people have decreased the practice of combining alcohol and energy drinks,” commented lead author Jonathan Howland, PhD, Department of Community Health Sciences and Department of Emergency Medicine, Boston University. “Critically, CABs may increase alcohol-related risks in a number of different domains, but have been subject to very little systematic research.”

The article provides 44 references gathered from newspapers, magazines, and the scientific literature showing the current understanding of the effects of stimulants combined with alcohol. One study found that bar patrons who consumed CABs had a threefold risk of leaving the bar highly intoxicated, compared to those who consumed alcohol without caffeine, and a fourfold risk of intending to drive after leaving the bar. Another compelling study concluded that students who consumed CABs had approximately double the risk of experiencing or committing sexual assault, riding with an intoxicated driver, having an alcohol-related accident, or requiring medical treatment.

The root of the problem may have started with so-called energy drinks. Depending on the brand, these beverages contain several stimulants, primarily caffeine, but also guarana, taurine, and sugar derivatives. Of the 577 caffeinated beverages listed on the Energy Fiend website in 2008, at least 130 contained more than the 0.02% caffeine limit for soft drinks imposed by the U.S. Food and Drug Administration (FDA).
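The 0.02% cap translates into a familiar milligram figure with a little arithmetic. A quick sketch (the 12 oz can size and water-like density are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope check of the FDA's 0.02% caffeine cap for soft drinks.
# The 355 ml (12 oz) serving size and water-like density (1 g/ml) are
# illustrative assumptions, not figures from the article.

can_ml = 355.0
beverage_g = can_ml * 1.0                 # ~355 g of beverage
caffeine_mg = beverage_g * 0.0002 * 1000  # 0.02% by weight, in milligrams
```

This works out to roughly 71 mg per can, comparable to a cup of coffee; many of the listed energy drinks exceed it.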

Combining these energy drinks with alcohol became popular when marketers promoted the perception that energy drinks counteract the sedating effects of alcohol and related impairment and suggested that caffeine will increase enjoyment by allowing one to party for a longer time. According to a 2006 survey, 24% of college students reported mixing energy drinks with alcohol in the past month.

The FDA issued warning letters on November 17, 2010 to the following companies, indicating that further actions, including seizure of their products, are possible under federal law.

Charge Beverages Corp.: Core High Gravity HG Green, Core High Gravity HG Orange, and Lemon Lime Core Spiked (http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm233990.htm)
New Century Brewing Co., LLC: Moonshot (http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm234028.htm)
Phusion Projects, LLC (doing business as Drink Four Brewing Co.): Four Loko (http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm234023.htm)
United Brands Company Inc.: Joose and Max (http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm234002.htm)
States with previously announced bans are New York, Washington, Iowa, Kansas, Massachusetts, Oregon, and Michigan. The FDA announcement will likely pre-empt further state bans.

The article is “Caffeinated Alcoholic Beverages: An Emerging Public Health Problem” by Jonathan Howland, PhD, Damaris J. Rohsenow, PhD, Tamara Vehige Calise, DrPH, James MacKillop, PhD, and Jane Metrik, PhD. It has been published online in advance of publication in the American Journal of Preventive Medicine, Volume 40, Issue 2 (February 2011) published by Elsevier. doi: 10.1016/j.amepre.2010.10.026

Source: Elsevier 

Water Molecules' Mystery Dance: A Key To Understanding How All of Life Works

Water (H2O) is a unique molecule with remarkable properties. Scientists have a good grasp of the structure and chemistry of individual water molecules. But understanding how large numbers of these molecules move and interact--within bulk liquid water, or at the interface between water and air--is much more complicated.

Theoretical chemist James Skinner, at the University of Wisconsin in Madison, has been researching water for over a decade. According to Skinner, understanding the dance of water molecules is key to understanding how all of life works.

Geraldine Richmond and her team at the University of Oregon were surprised to find that molecules of sulfur dioxide (SO2) in the atmosphere tend to form a weak bond with water molecules at the surface of the liquid before finally submerging, whereas carbon dioxide (CO2) molecules dive right in.

"This is the first time that anyone has ever measured, with this level of molecular detail, a gas-surface complex at the surface of liquid water," Richmond said. "We are now investigating a whole series of important environmental gases, ions and solutes at the water surface."

Sulfur dioxide molecules in the atmosphere form a weak bond with water molecules.
Credit: Nicolle Rager-Fuller

"We now appreciate that it is virtually impossible to grasp almost anything at the molecular level about biology, without understanding how water molecules interact with biomolecules within cells," said Skinner, who is supported by a grant from the National Science Foundation (NSF) Division of Chemistry. "This includes fundamental processes like protein folding, photosynthesis and the biology of vision."

But how do scientists search out the secrets of water's choreography? Skinner, along with three other scientists--Krzysztof Szalewicz, professor of physics and astronomy at the University of Delaware; Martin Gruebele and his research team at the University of Illinois; and Geraldine Richmond and her team at the University of Oregon--use different approaches to get at water's unique movements and interactions.

Making molecular movies

Skinner and his research group at the University of Wisconsin are analyzing and interpreting the results of experiments that use infrared light to probe the motions of molecules in liquid water.

The vibrations between oxygen and hydrogen atoms within a molecule absorb infrared light. By varying the frequency of the light and measuring how much is absorbed at each frequency, the researchers record a graph called an "absorption spectrum." More modern techniques use ultrashort pulses of infrared laser light.

"Data from these experiments contain information about local molecular environments," Skinner said. "But this information is often hard to extract. We use first principles calculations, molecular dynamics simulations, statistical mechanics, and basically any theoretical approach that will enable us to further our understanding."

One method in particular, molecular dynamics simulation, allows the researchers to "see" water molecules in action.

Like designing the computer graphics for a movie, they start with an initial configuration of the molecules and then advance it forward in time, frame by frame. To do this, the team created a model of the potential energy for a large collection of water molecules in terms of interactions between groups of two or three molecules.

"We can show the trajectories (in time) of all the atoms in a system," Skinner said. "The simulations--and hence the movies--always involve approximations (for example, using classical mechanics instead of quantum mechanics for the motion of the atomic nuclei)." Even so, the simulations have proved to be quite accurate.
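The frame-by-frame time stepping described above can be sketched in a few lines. This is a generic toy example using the standard velocity-Verlet integrator, with a single particle in a harmonic well standing in for the intermolecular forces; it is not the group's actual water model or potential:

```python
# Minimal sketch of molecular-dynamics time stepping (velocity Verlet).
# A single particle in a harmonic well stands in for the forces between
# molecules; this is NOT the water potential described in the article.

def force(x, k=1.0):
    """Force from a harmonic potential V(x) = k * x**2 / 2."""
    return -k * x

def velocity_verlet(x, v, dt, n_steps, m=1.0):
    """Advance the configuration forward in time, frame by frame."""
    trajectory = [x]
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt ** 2  # new position
        f_new = force(x)
        v = v + 0.5 * ((f + f_new) / m) * dt      # new velocity
        f = f_new
        trajectory.append(x)
    return trajectory, v

# 1000 "movie frames" starting from rest at x = 1.
traj, v_final = velocity_verlet(x=1.0, v=0.0, dt=0.01, n_steps=1000)
```

Each entry in `traj` is one frame of the "movie"; real simulations do the same bookkeeping for thousands of interacting molecules in three dimensions.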

Up next for Skinner's team is a study of the ways biomolecules such as peptides, proteins, nucleic acids, and membrane lipids dissolve in water. "Experiments, coupled with new theoretical and computational techniques, will surely shed new light on the critical problem of water dynamics around biomolecules, and its effect on how they function," he said.

In addition to boosting biomedical research, Skinner believes that understanding the dance of water is also crucial to climate science. "To model the chemical and physical processes in the atmosphere one needs to understand reactivity of aerosol particles, which is often controlled by water dynamics at the surface of the aerosols," he said.

First principles findings

Krzysztof Szalewicz, professor of physics and astronomy at the University of Delaware, uses the ab initio, or "first principles," approach to study water molecules in motion.

With support from NSF's Quantum Calculations Program in the Division of Chemistry, Szalewicz and his team start with what is known about each type of atom within the molecule--in this case hydrogen and oxygen. Then they plug that information into the Schrödinger wave equation.

"By solving Schrödinger's equation, we can predict the properties of lighter atoms and molecules almost exactly," Szalewicz said. Physicist Erwin Schrödinger developed his Nobel Prize-winning equation in 1926 as a way to explain the wave-like nature of particles at the nanometer (billionth of a meter) scale. His equation is central to the theory of quantum mechanics.
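For reference, the time-independent form of the equation that such first-principles calculations solve can be written as:

```latex
% Time-independent Schrodinger equation: the Hamiltonian operator acting
% on the wavefunction returns the energy of that state.
\hat{H}\,\Psi(\mathbf{r}) = E\,\Psi(\mathbf{r}),
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
```

For a molecule such as H2O, the Hamiltonian contains kinetic and Coulomb terms for every electron and nucleus, which is why essentially exact solutions are feasible only for small systems.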

Once Szalewicz's team solved the equation for individual water molecules, they used the results to create a model of two molecules interacting. Then they increased the number of molecules to three. By combining several of these models, the team was able to simulate a large number of molecules interacting in bulk liquid water.

"We can model water very accurately and we did systems as large as 40 atoms fairly accurately," Szalewicz said. "However, that is about the limit at the present time."

Following the first principles method, Szalewicz and his team used no data from lab experiments to develop their model. "So-called empirical approaches use experimental data to adjust their predictions," he said. "Thus, if experimental data are wrong, the predictions will be wrong."

But in this case, the model's predictions were gratifying. "The agreement of our predictions with experiment for water was excellent," he said.

"If you look at the whole range of properties, our predictions are better than any published ones, even including results obtained by empirical approaches," Szalewicz said. "Because liquid water is a rather complicated system despite the simplicity of the water molecule itself, this was a demanding test for our approach."

Movers and shapers

Meanwhile, Martin Gruebele and his research team at the University of Illinois, sponsored by NSF's Division of Molecular and Cellular Biosciences, follow a different method. The team uses pulses of terahertz (THz) radiation, which falls between infrared and microwave radiation on the electromagnetic spectrum, to directly measure the movements of water molecules around proteins. Because water molecules consist of a single oxygen atom and two hydrogen atoms, they are dwarfed by the complex, folded protein molecules, made up of hundreds or thousands of individual atoms.

"Proteins influence the dynamics of up to thousands of water molecules surrounding them out to a distance comparable to the size of the protein," Gruebele said.

These affected molecules form a thick "solvation shell" around the unwieldy protein. In the process, the two-way interaction between water and protein causes hydrogen bonds and other weak bonds in the molecules to change and rearrange.

Good vibrations

To measure just how far this influence stretches, Gruebele's group flashes laser light pulses in the THz range through a mixed sample of protein and water. When this radiation hits a molecule, if its wavelength matches the natural vibration frequency of the atomic bonds that join the molecule together, it absorbs that energy. This extra energy causes the bonds between atoms to vibrate with regular, repetitive motions.

"Pulsed THz light oscillates about once every picosecond (one-trillionth of a second)," Gruebele said. And since water molecules and proteins both vibrate on a similar time scale, they can absorb light with wavelengths in that range.
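The quoted time scale fixes the frequency and wavelength of the light. A quick back-of-the-envelope check (an illustrative calculation, not one from the article):

```python
# Light that oscillates once per picosecond has a frequency of 1 THz,
# which sits between microwave and infrared on the spectrum.

period_s = 1e-12                   # one picosecond
frequency_hz = 1.0 / period_s      # 1 THz
c = 299_792_458.0                  # speed of light in vacuum, m/s
wavelength_m = c / frequency_hz    # ~0.3 mm
```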

In order to "see" what is taking place inside the molecules, the researchers used a spectrometer to measure the specific wavelengths absorbed. But in the process, they discovered something totally unexpected.

"We knew that dry protein powder absorbs less THz light than water does," Gruebele explained. "So we expected that adding protein powder to water would decrease the amount of light absorbed." But instead it increased, as long as protein concentrations remained low.

"We concluded that this happens because the solvation shell actually absorbs more light than bulk water, which more than makes up for the smaller absorption of light by the protein," Gruebele said.

"As you add more protein molecules, the solvation shells around each one begin to overlap and the absorption stops increasing," he added. "This actually allows you to measure the size of the shells: We found they are almost three nanometers (three billionths of a meter) in diameter. This is small in human terms, but gigantic in terms of the size of a water molecule."

In addition to this THz absorption spectrum method, Gruebele's team used three other measurement techniques to illuminate different aspects of the protein folding process.

"The combination of these four techniques shows that water adopts a 'folded' structure and dynamics very early during the protein folding process," he explained. "Thus, water is an early driver and an integral part of that process."

In their next phase of research, Gruebele and his team will look at how water molecules mediate the binding of two biologically active molecules, such as two proteins, or a drug and a protein.

"Eventually, this would give us a much better estimate of how strongly drugs bind and why they are specific," he said. "Such information would spare a great deal of live-animal testing, and reduce the cost of pharmaceutical screening in the search for drugs."

Surface surprises

Another NSF-funded research effort, headed by Geraldine Richmond and her team at the University of Oregon, is probing the interactions between surface water molecules and atmospheric gases.

"I've always been fascinated by water--the way it flows, how it controls our body temperature and the temperature on this planet, how things can float on it," said Richmond.

"And the surface of water to me is the most fascinating," she said. "It plays such an important role in our environment and our bodies." Richmond and her group are supported by an award from NSF's Electrochemistry and Surface Chemistry Program.

"We want to understand how environmentally important gases such as sulfur dioxide (SO2) and carbon dioxide (CO2) interact with water surfaces," Richmond said. "We know a lot about how such gases behave once they are in water, but not much at all about what happens when they first make contact."

Richmond's team was surprised to find that SO2 tends to hang out at the surface before submerging, whereas CO2 dives right in.

"This is the first time that anyone has ever measured, with this level of molecular detail, a gas-surface complex at the surface of liquid water," she said. "We are now investigating a whole series of important environmental gases, ions and solutes at the water surface."

The group is also looking at the differences between surface chemistry and chemistry in bulk water. "What we do is to try to understand the underlying principles behind the chemistry that occurs in the atmosphere," Richmond said.

"For example, nitric acid accumulates in aerosols and clouds," she explained. "Since a lot of the reactions that happen in the atmosphere occur on the surface of aerosol particles that contain these strong acids, we wanted to understand if nitric acid continues to act as a strong acid when it sits at the surface of the aerosol."

Because aerosol interactions are so complex, the scientists simplified their study to focus on how individual molecules of nitric acid behave at the surface of a solution of nitric acid in water.

"What we find is quite remarkable," Richmond said. "When nitric acid sits at the surface of water, it acts as a weak acid -- not a strong one. This comes from the fact that at the surface, it is surrounded by fewer water molecules. Consequently it doesn't dissociate, or get pulled apart, and therefore is far less reactive there than inside the bulk liquid."

Mixing oil and water

Richmond's team is also fascinated by interactions between oil and water. "Normally people think that oil and water don't mix; but there is a weak attraction between them," she said. "That leads to some interesting consequences."

Richmond describes the human body as "one big oil/water interface, with water continually flowing past oily membranes"--for example, ions being transported across cell membranes.

"We find at this oil-water interface, water is highly oriented and creates conditions that can facilitate some of the most important chemistry in our bodies," Richmond said.

"Related to this we try then to understand how the unique properties of the junction between water and oil influences the adsorption of surfactants such as soaps, dispersants and polymers."

Why is this important? "Everything from mayonnaise to oil-spill dispersants relies on surfactants such as these adsorbing at the interface and in some cases, working to keep the oil and water separated," she explained.

"Our laboratory work provides unique insights into the molecular properties of the interface but this information is somewhat two-dimensional," Richmond said. "There are missing pieces."

To fill in the three-dimensional picture, the scientists count on computer simulations. But there is a downside. "Often you don't know how realistic the results are and whether the models you are using in your simulations are correct," she pointed out.

"However, in our case we can check their accuracy by seeing how closely the computer results match our experimental results," Richmond added. "Then we dig into the calculations to see what kind of molecular interactions gave us that match. So they really work hand-in-hand to give us a robust three-dimensional picture of what is happening at the surface of water."

Contacts and sources:
 Holly Bigelow Martin
National Science Foundation
James Skinner
Martin Gruebele
Geraldine Richmond
Krzysztof Szalewicz
Related Institutions/Organizations
University of Delaware
University of Oregon Eugene
University of Wisconsin-Madison
University of Illinois at Urbana-Champaign

Hebrew University Researchers, Disputing Old Laws Of Physics, Reveal Way In Which Possible Severe Earthquakes Can Be Predicted

Researchers at the Hebrew University of Jerusalem who have been examining what happens in a “model earthquake” in their laboratory have discovered that basic assumptions about friction that have been accepted for hundreds of years are just wrong. Their findings provide a new means for replicating how earth ruptures develop and possibly enabling prediction of coming severe earthquakes.

“The findings have a wide variety of implications for materials science and engineering and could help researchers understand how earthquakes occur and how severely they may develop along a fault line,” said Jay Fineberg, the Max Born Professor of Natural Philosophy at the Racah Institute of Physics at the Hebrew University.

Prof. Jay Fineberg of the Hebrew University of Jerusalem

Credit: Hebrew University of Jerusalem

The work by Fineberg, his graduate student Oded Ben-David and fellow researcher Gil Cohen, has been published in an article in Science magazine. An article based on their work also has been published online in Wired magazine.

For centuries, physicists have thought that the amount of force needed to push an object so that it slides across a surface is determined by a number called the coefficient of friction, which is the ratio between the forces pushing sideways and pushing down (basically, how much the object weighs). First described by Leonardo da Vinci in the 15th century and redefined a few hundred years later, these laws are so widely accepted that they consistently appear in introductory physics textbooks.
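The classical law just described amounts to a one-line rule: an object begins to slide only when the sideways force exceeds the coefficient of friction times the normal force. A minimal sketch, with a purely illustrative coefficient rather than a value from the study:

```python
# The classical (Amontons-Coulomb) friction law that the experiments call
# into question: slip begins once the sideways force exceeds mu * N, where
# N is the normal force. The coefficient mu = 0.5 is purely illustrative,
# not a value from the Hebrew University study.

def starts_to_slide(sideways_force_n, normal_force_n, mu=0.5):
    """Classical prediction for the onset of sliding."""
    return sideways_force_n > mu * normal_force_n

normal_n = 10.0 * 9.81        # normal force on a 10 kg block, in newtons
threshold_n = 0.5 * normal_n  # classical sliding threshold, ~49 N
```

The Hebrew University measurements showed individual contact points sustaining several times this classical threshold before rupturing.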

But, when Ben-David tried to check whether these “laws” work at different points along a block’s contact surface, the laws fell apart. In carefully controlled lab experiments, Ben-David found that points along the contact surface could sustain up to five times as much sideways force as the coefficient of friction predicted, and the object still wouldn’t budge.

The experiments actually studied two contacting blocks as they just start to slide apart. Although the blocks look like they are smoothly touching, in reality they are connected only by numerous, discrete, tiny contact points, whose total area is hundreds of times less than the blocks' apparent contact area. Performing sensitive measurements of the stresses at the contact points, the researchers found that the strength at each point along the contact surface could be much larger than the coefficient of friction allows before the contacts ruptured and the block began to slide.

Furthermore, the contacts don't all break at the same time. Instead, they rupture one after another in a cadence that sets the rupture speed. These rapidly moving ruptures are close cousins of earthquakes, Fineberg said. The blocks in effect represent two tectonic plates pushing against each other, and when the force between them is enough to disengage the plates, the resulting rupture of the contact surface sends shock waves through the blocks, exactly as in an earthquake.

The team found that ruptures come in three distinct modes: slow ruptures that move at speeds well below the speed of sound; ruptures that travel at sound speed; and “supershear” ruptures that surpass sound speeds. Which type of wave one gets is determined by the stresses at the contact points, which provide a measure of how much energy would be released if an actual earthquake were to occur. These different types of earthquakes have all been seen in the earth, but these experiments provide the first clue of how the earth “chooses” how to let go.

“An earthquake is the same system as in the Hebrew University experiments, just scaled up by factors of thousands,” Fineberg said. “We can watch how these things unfold in the lab and measure all of the variables that might be actually relevant in a way that you could never observe under the earth.”

How an earthquake "chooses" to rupture is not simply an academic question. Each rupture mode determines how the earth releases the enormous pressures that are locking tectonic motion, and is intimately related to the potential hazards embodied within an earthquake. Whereas sonic earthquakes are destructive, their supershear cousins are potentially much more dangerous, as they release the enormous stored energy within the earth as a shock wave. In contrast, slow ruptures create negligible damage for the same amount of stress release.

And while it is still impossible to make detailed measurements of the stresses along a real fault, the Hebrew University results suggest a method by which stresses can be tracked as an earthquake is under way, and how one earthquake can set the stage for the initial conditions for the next one. This new understanding has the potential to provide unprecedented predictive power, estimating both the rupture mode and extent of a future earthquake.

Source: Hebrew University of Jerusalem

How DNA Components Resist Damage From UV Exposure

Enormous computing power and quantum chemistry reveal how.

The genetic material DNA contains shielding mechanisms to protect itself from exposure to the UV light emitted by the sun. This is of crucial importance: without photostability – i.e. without "programmed" defense mechanisms against UV irradiation – rapid degradation of DNA and RNA would be the consequence.

As part of a project funded by the Austrian Science Fund (FWF), a group of researchers led by Hans Lischka, a quantum chemist at the University of Vienna, Austria, has for the first time comprehensively unraveled these ultrafast processes underlying the photostability of the nucleobases. Their findings appear in the current issue of the prestigious journal "Proceedings of the National Academy of Sciences of the United States of America" (PNAS).

The figure shows the special structures of DNA nucleobases which – after exposure to solar radiation – are responsible for the ultrafast radiationless deactivation to the electronic ground state.
Credit: Felix Plasser, University of Vienna

The effect of sunlight on our skin not only produces the tanning that enhances pleasant holiday feelings; it also initiates processes that can lead to serious health damage. A research team led by Hans Lischka, professor at the Institute for Theoretical Chemistry, University of Vienna, Austria, investigated the shielding mechanisms that nature has provided to protect against such harmful effects. The strategy is simple, yet highly complex: as soon as UV light excites electrons into a higher energy level, ultrafast decay brings them back to their original state. In this way electronic energy is converted into heat. This process occurs on an incredibly short time scale, as short as a quadrillionth of a second.

Computer simulations on the properties of light-active DNA components

In the group of Hans Lischka (Institute for Theoretical Chemistry, University of Vienna), together with Mario Barbatti (now at the Max Planck Institute for Coal Research, Mülheim an der Ruhr, Germany) and in collaboration with colleagues at the Czech Academy of Sciences in Prague, Czech Republic, a vivid dynamic picture of the photostability of the nucleobases was obtained using innovative computer-simulation techniques. The work showed how the DNA components – the nucleobases that are responsible in DNA and RNA for the formation of base pairs – protect themselves against decomposition under UV irradiation.

New Quantum Chemical approaches for photophysical studies

The principal innovation of this work is the detailed calculation of the coupling of the electronic dynamics with that of the atomic nuclei. This goal was achieved with the help of quantum chemical methods, unique worldwide, developed at the Institute for Theoretical Chemistry, University of Vienna. The calculated states of motion of the nucleobases show remarkable dynamic behavior on time scales spanning several orders of magnitude, from the picosecond (trillionth of a second) down to the femtosecond (quadrillionth of a second) range.

The newly developed methods are suitable not only for elucidation of the above-described dynamics in DNA nucleobases, but they are also applicable to studies of photo-physical processes in DNA itself and in the area of photovoltaics which is of high technological interest. The new methods allow a better understanding of the fundamental processes of transport of electronic excitation energy and of charge separation for production of electricity.

High computing power available from joint resources

The computational effort of these studies was enormous and could be only completed successfully with extensive use of computer resources of the University of Vienna and the Vienna Scientific Cluster of the University, the Technical University and the University of Natural Resources and Life Sciences (all Vienna).

Contacts and sources:
Felix Plasser, University of Vienna

Drug Pricing: British Reforms Based on Clinical Outcome, “Value-Based” Pricing

Parliamentary Office of Science and Technology (POST)

The government intends to reform the way in which drugs purchased by the NHS are priced. It aims to ensure that drug costs more fully reflect clinical benefit and to improve patient access to new treatments. This POSTnote outlines current pricing policy and examines other options to evaluate drug pricing, including “value-based” pricing.

Industry requires reassurance from the Government that there will be a return on investment in new medicines. Evidence suggests that the UK lags behind other European countries in taking up new medicines. For example, the DH's Cancer Reform Strategy reports that the UK's uptake rate of new cancer medicines is 60% of that in other European countries. Clinical trials compare a new drug against the best standard of care currently available (for example, the latest available drug), since it is considered unethical to perform a clinical trial against older, less effective drugs. Due to a number of factors, such as the poor uptake of new medicines in the UK, companies sometimes carry out more of their clinical trials abroad.

There are two elements in evaluating a drug. Licensing bodies assess the efficacy (success in providing a desired result) and safety of a new drug, while NICE measures its clinical and cost effectiveness. Cost-effectiveness can be measured fully only once a drug has been licensed and is in clinical use, allowing assessment over time through comparison with other treatments; this can take a long time. NICE can instead perform an assessment at launch based on an estimate of cost-effectiveness derived from the pre-launch trials combined with modelling; information from industry is usually, but not always, relevant to these analyses.
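NICE's cost-effectiveness comparisons are commonly summarised as an incremental cost-effectiveness ratio (ICER): the extra cost per additional quality-adjusted life year (QALY) gained relative to the existing treatment. As a minimal sketch (the figures below are hypothetical, not taken from the POSTnote):

```python
# Incremental cost-effectiveness ratio (ICER): the standard summary used
# when comparing a new drug against current care. All figures here are
# purely illustrative.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Extra cost per additional quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: the new drug costs £12,000 against £4,000 for
# current care, and yields 0.4 extra QALYs per patient.
ratio = icer(12_000, 4_000, 1.4, 1.0)
print(f"£{ratio:,.0f} per QALY gained")  # £20,000 per QALY gained
```

NICE has historically judged treatments against a threshold of roughly £20,000-£30,000 per QALY, which is why estimates of this ratio at launch matter so much to pricing.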

For instance, one of the first patient access schemes (PAS) was for beta-interferon, used to treat multiple sclerosis (MS). NICE initially rejected the drug as not being cost-effective, a decision criticised by patient groups, which claimed that the appraisal had not looked at the long-term benefits of treatment. The DH implemented a PAS to make the treatment more cost-effective. Ten years on, there is still no consensus as to whether this is a cost-effective treatment for MS.

Manufacturing “Made To Measure” Atomic-Scale Electrodes

Thanks to collaborative work between scientists in Donostia-San Sebastián and the University of Kiel (Germany), it has been shown that it is possible to determine and control the number of atoms in contact between a molecule and a copper metal electrode while simultaneously recording the electric current passing through the junction. These results were published in the journal Nature Nanotechnology.

One of the key problems in nanotechnology is the formation of electrical contacts at the atomic scale. This demands the detailed characterisation of the current flowing through extremely small circuits – so small that their components can be individual atoms or molecules. It is precisely the miniature nature of the system, of typically nanometric dimensions (1 metre = a thousand million nanometres), that makes this still-unresolved problem so difficult. In particular, in junctions formed by a single molecule, it has been shown that the number of individual atoms making up the contact, and their positions, are crucial in determining the electric current that flows. To date, no experiment had been able to control these parameters with sufficient precision.

In the research published in Nature Nanotechnology, however, these scientists have revealed and explained the changes that the electric current flowing through a molecular junction (metal/molecule/metal) undergoes, depending on the contact area joining the molecule to the metallic electrodes. By changing the number of atoms in contact with the molecule one by one, the junction switches from a low-conduction state (poor contact) to a higher-conduction one (good contact). With poor contact the current is limited by the contact area, while with good contact it is limited by the intrinsic properties of the molecule.

Taking part in this collaboration project were scientists from the Donostia International Physics Center (DIPC), from the Physics of Materials Centre at the CSIC-University of the Basque Country (UPV/EHU) Mixed Centre and from the Department of the Physics of Materials at the Chemistry Faculty of the UPV/EHU.

Source: Elhuyar Fundazioa

Full bibliographic information
Atomic-scale engineering of electrodes for single-molecule contacts, G. Schull, Th. Frederiksen, A. Arnau, D. Sanchez-Portal, R. Berndt, Nature Nanotechnology

Researchers Use Patient's Own Blood To Treat Hamstring Injury

Researchers in London say they have found an effective two-part treatment for microtears in the hamstring: injections of the patient's own blood and a steroid along with "dry-needling," in which repeated needle punctures cause controlled internal bleeding in the injured area. Results of the study were presented today at the annual meeting of the Radiological Society of North America (RSNA).

"By injecting the patient's own blood where it is needed at the site of a damaged tendon, we help the patient heal themselves," said lead researcher Waseem A. Bashir, M.D., a radiologist at Royal National Orthopaedic Hospital and Ealing Hospital in London. "Blood contains many growth factors, and the injections have been shown to promote faster healing of certain injuries."

Hamstring tendinopathy is a common sports injury that occurs in soccer, gymnastics, karate or any sport that requires quick acceleration. It may be caused by an improper warm-up or, in an elite athlete, as the result of repetitive strain. Unlike a torn or ruptured tendon that can be surgically repaired, the tiny microtears that characterize chronic tendinopathy are not easily diagnosed, are difficult to heal and often sideline athletes for long periods, if not permanently.

"Patients with hamstring tendinopathy will experience pain walking or climbing stairs and even while sitting or riding in a car," Dr. Bashir said. "The condition is literally a pain in the butt."

In the study, 42 patients with suspected hamstring microtearing underwent ultrasound and MRI to confirm the tendinopathy and then were randomly assigned to one of three treatment groups. The first group received an injection of both a long-lasting anesthetic and steroid on the surface of the tendon, as well as the dry-needling procedure at the site of microtears.

The second group received an injection of the anesthetic along with two to three milliliters of their own blood, called an autologous blood injection (ABI), and dry-needling. The third group received a local anesthetic, a steroid and ABI along with dry-needling.

"The injections were all performed with ultrasound and color Doppler, which allows us to watch in real-time where the needle is going," Dr. Bashir said. "During the dry-needling, we can see blood flow increase in the area."

Following their treatments, all patients in the study participated in a structured six-week physiotherapy program. The patients were then evaluated at various intervals over a one-year period to assess their levels of pain and functioning.

Patients treated solely with an injection of a steroid and dry-needling reported improved functionality for only three to 12 weeks after treatment. A year later, patients in this group reported being at pre-treatment levels of pain and functionality. Patients who received their own blood plus dry-needling reported significant improvements in functionality even one year after the treatment.

Patients who received both their own blood and a steroid along with dry-needling at the site of tendon damage experienced the most significant reduction in pain levels and the most sustained functional improvement one year following treatment.

"Ultrasound-guided ABI in the hamstring, in combination with a local steroid and dry-needling, appears to be a more clinically effective alternative to the current standard, steroid therapy," Dr. Bashir said. "A few of our soccer-playing patients had been told their condition was untreatable and they had basically given up all hope of playing again. They were amazed to be able to play again after our treatment and physical therapy."

He added that ABI therapy has also been an effective treatment for microtears in other tendons, including the elbow, the patellar tendon and those in the rotator cuff within the shoulder.

Dr. Bashir's coauthor is David A. Connell, M.D.

Soil Microbes Define Dangerous Rates Of Climate Change

Scientists at the University of Exeter have studied a potentially significant feedback to rapid climate change. Runaway reactions in peatlands could give off large amounts of carbon and considerable heat. Researchers are now investigating possible links between this reaction and peatland wildfires, such as those in Russia earlier this year.

A sufficiently fast rate of global warming could trigger a rapid release of carbon from peatlands that would further accelerate global warming.

Two recent studies published by the Mathematics Research Institute at the University of Exeter highlight the risk that this 'compost bomb' instability could pose, and calculate the conditions under which it could occur.

The same Exeter team is now exploring a possible link between the theories described in the studies and last summer's devastating peatland fires in Russia.

The first paper is published in the European Journal of Soil Science and the second in Proceedings of the Royal Society A.

The first paper by Catherine Luke and Professor Peter Cox describes the basic phenomenon. When soil microbes decompose organic matter they release heat – this is why compost heaps are often warmer than the air around them.

The compost bomb instability is a runaway feedback that occurs when the heat is generated by microbes more quickly than it can escape to the atmosphere. This in turn requires that the active decomposing soil layer is thermally-insulated from the atmosphere.
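The runaway condition described above can be caricatured with a one-line energy balance: soil temperature climbs when microbial heat generation, which grows roughly exponentially with temperature, outpaces heat loss to the atmosphere. A toy sketch with purely illustrative parameters (not taken from the Exeter papers), comparing a well-ventilated soil with a thermally insulated one:

```python
# Toy energy balance for the "compost bomb" feedback: microbial heat
# release grows roughly exponentially with soil temperature (a Q10-type
# response), while heat escapes to the atmosphere at a rate proportional
# to the temperature difference. All parameter values are illustrative.
import math

def simulate(insulation_k, t_end=200.0, dt=0.01, t_atm=0.0, blowup=50.0):
    """Euler-integrate dT/dt = heat_generated(T) - k*(T - T_atm).
    Returns the final soil temperature, capped at `blowup` on runaway."""
    T = t_atm
    for _ in range(int(t_end / dt)):
        heat_gen = 0.1 * math.exp(0.2 * T)       # microbial heating
        heat_loss = insulation_k * (T - t_atm)   # escape to atmosphere
        T += dt * (heat_gen - heat_loss)
        if T > blowup:                           # runaway: "compost bomb"
            return blowup
    return T

print(simulate(insulation_k=1.0))   # efficient heat loss: settles near ~0.1
print(simulate(insulation_k=0.01))  # strong insulation: temperature runs away
```

With efficient heat loss the temperature settles at a stable equilibrium; with strong insulation (the moss or lichen layer mentioned below) no equilibrium exists and the temperature runs away, which is the essence of the instability.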

Catherine Luke explains: "The compost bomb instability is most likely to occur in drying organic soils covered by an insulating lichen or moss layer".

The second paper led by Dr Sebastian Wieczorek and Professor Peter Ashwin, also of the University of Exeter, proves there is a dangerous rate of global warming beyond which the compost bomb instability occurs.

This is in contrast to the general belief that tipping points correspond to dangerous levels of global warming.

Sebastian Wieczorek explains: "The compost bomb instability is a novel type of rate-dependent climate tipping point".

The Exeter team is now modelling the potential impact of the compost bomb instability on future climate change, including the potential link to the Russian peatland fires. It is also working to identify other rate-dependent tipping points.

You can view the two papers online:

Soil carbon and climate change: from the Jenkinson effect to the compost-bomb instability http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2389.2010.01312.x/abstract

For more information, please contact:
College of Engineering, Mathematics and Physical Sciences - University of Exeter

Contacts and sources:
Proceedings of the Royal Society A

Risk From Deadly 'Honey Fungus'

An international team of scientists has developed a new technique to aid crops at risk from a devastating agricultural parasite commonly known as the 'honey fungus', one of the most serious diseases of trees and shrubs across the northern hemisphere. The development allows crops to be screened for natural resistance by adding DNA carrying fluorescent genes to the fungus before plants are set out.

The research, a collaboration between the United States Department of Agriculture's Agricultural Research Service and the University of Bristol, demonstrated the ability to transform and genetically manipulate the plant-pathogenic fungus Armillaria mellea by artificially introducing DNA into it. The DNA was introduced using Agrobacterium, a bacterium that is commonly used to genetically modify plants.

This powerful technique has been immensely important in the study of other pathogens in the laboratory, in terms of pinpointing how the pathogens enter and spread through plants and, then, developing control practices that prevent or minimise infection.

The honey fungus is a devastating disease of many hundreds of species of trees and shrubs, ranging from those important for timber production, through various orchard and vine fruits, to numerous ornamental shrubs, and so is of importance in forestry, agriculture and gardening.

Infections result in a reduced growth rate of the host plant, premature wilting of the foliage, lower harvests and eventually death of the plant. This is often accompanied by the growth of honey-coloured toadstools – the fruiting bodies of the fungus, which are responsible for its common name. The fungus attacks the roots of the plant and can spread through the soil, meaning that once infections are established they can move to nearby plants and spread throughout a wide area. Indeed the term "humungous fungus" has been coined for this group of Armillaria species as some colonies have been found where one individual fungus covers an area in excess of 15 hectares.

Controlling the fungus is very difficult. The most effective pesticide to prevent Armillaria root disease, methyl bromide, is being phased out (except for quarantine and critical uses) due to its role in depleting the stratospheric ozone layer. There are few alternatives for preventing or curing infections. The disease is particularly damaging to orchard or vine crops as the infection could wipe out production long before the costs of establishing the orchard have been recouped. Because the fungus persists on dead roots buried in the soil, it is likely that any new plants put into the same area would also be vulnerable to attack.

In a garden setting there are no chemical control measures, meaning that the use of naturally resistant plant varieties is important. This work will allow the fungus to be genetically modified so that it can be easily identified, tracked and studied in the laboratory.

Professor Gary Foster, a plant pathologist from the University of Bristol's School of Biological Sciences, said: "The ability to track and visualise the fungus as it enters and grows through a plant will help us understand how this fungus behaves in the field. Such knowledge is an important step towards helping us to devise better control strategies in the future."

Dr Kendra Baumgartner, a specialist in vine and tree crop diseases from the U.S. Department of Agriculture, said: "Efforts are already under way to identify rootstocks of grapes, walnuts, and stone fruits that are naturally resistant to infection. The improved screening that is enabled by using transformed strains of Armillaria should allow more rapid identification of resistant plant materials."

Being able to transform the fungus also helps with investigations into its population structure. These species are unusual as they can produce new genetic types without going through a conventional sexual cycle. When two individuals meet, there is the chance that nuclei from one strain can invade and recombine with nuclei in the other fungus, giving rise to new genotypes with new and novel traits.

Dr Andy Bailey, a mycologist at the University of Bristol, said: "This can be studied in far more detail now that we are able to introduce genes that are easy to follow."

The work was funded with support from the Organisation for Economic Cooperation and Development's Cooperative Research Program in Biological Resource Management for Sustainable Agricultural Systems, and will appear in the December issue (24) of Applied and Environmental Microbiology.

Contacts and sources:
Applied and Environmental Microbiology