Unseen Is Free


Thursday, May 29, 2014

Did Dogs Help Man Kill Off The Mammoths?

A new analysis of European archaeological sites containing large numbers of dead mammoths and dwellings built with mammoth bones has led Penn State Professor Emerita Pat Shipman to formulate a new interpretation of how these sites were formed.

A fragment of a large bone, probably from a mammoth, Pat Shipman reports, was placed in this dog's mouth shortly after death. This finding suggests the animal was accorded special mortuary treatment, perhaps acknowledging its role in mammoth hunting. The fossil comes from the site of Předmostí in the Czech Republic and dates to about 27,000 years B.P. This object is one of three canid skulls from Předmostí that were identified as dogs based on analysis of their morphology.

Photo credit: Anthropos Museum, Brno, the Czech Republic, courtesy of Mietje Germonpre.

She suggests that the abrupt appearance of these sites may have been due to early modern humans working with the earliest domestic dogs to kill mammoths -- now-extinct animals distantly related to the modern-day elephant. Shipman's analysis also provides a way to test the predictions of her new hypothesis. Advance publication of her article "How do you kill 86 mammoths?" is available online through Quaternary International.

Spectacular archaeological sites yielding stone tools and extraordinary numbers of dead mammoths -- some containing the remains of hundreds of individuals -- suddenly became common in central and eastern Eurasia between about 45,000 and 15,000 years ago, although mammoths previously had been hunted by humans and their extinct relatives and ancestors for at least a million years. Some of these mysterious sites have huts built of mammoth bones in complex, geometric patterns as well as piles of butchered mammoth bones.

"One of the greatest puzzles about these sites is how such large numbers of mammoths could have been killed with the weapons available during that time," Shipman said. Many earlier studies of the age distribution of the mammoths at these sites found similarities with modern elephants killed by hunting or natural disasters, but Shipman's new analysis of the earlier studies found that they lacked the statistical evaluations necessary for concluding with any certainty how these animals were killed.

Surprisingly, Shipman said, she found that "few of the mortality patterns from these mammoth deaths matched either those from natural deaths among modern elephants killed by droughts or by culling operations with modern weapons that kill entire family herds of modern elephants at once." This discovery suggested to Shipman that a successful new technique for killing such large animals had been developed and its repeated use over time could explain the mysterious, massive collections of mammoth bones in Europe.

These maps show the locations of collections of mammoth bones at the archaeological sites that Pat Shipman analyzed in her paper that will be published in the journal Quaternary International.

Credit: Jeffrey Mathison

The key to Shipman's new hypothesis is recent work by a team led by Mietje Germonpré of the Royal Belgian Institute of Natural Sciences, which has uncovered evidence that some of the large carnivores at these sites were early domesticated dogs, not wolves as generally had been assumed. Then, with this evidence as a clue, Shipman used information about how humans hunt with dogs to formulate a series of testable predictions about these mammoth sites.

"Dogs help hunters find prey faster and more often, and dogs also can surround a large animal and hold it in place by growling and charging while hunters move in. Both of these effects would increase hunting success," Shipman said. "Furthermore, large dogs like those identified by Germonpré either can help carry the prey home or, by guarding the carcass from other carnivores, can make it possible for the hunters to camp at the kill sites." Shipman said that these predictions already have been confirmed by other analyses. In addition, she said, "if hunters working with dogs catch more prey, have a higher intake of protein and fat, and have a lower expenditure of energy, their reproductive rate is likely to rise."

Another unusual feature of these large mammoth kill sites is the presence of extraordinary numbers of other predators, particularly wolves and foxes. "Both dogs and wolves are very alert to the presence of other related carnivores -- the canids -- and they defend their territories and food fiercely," Shipman explained. "If humans were working and living with domesticated dogs or even semi-domesticated wolves at these archaeological sites, we would expect to find the new focus on killing the wild wolves that we see there."

Two other types of studies have yielded data that support Shipman's hypothesis. Hervé Bocherens and Dorothée Drucker of the University of Tübingen in Germany carried out an isotopic analysis of the bones of wolves and purported dogs from the Czech site of Předmostí. They found that the individuals identified as dogs had different diets from those identified as wolves, possibly indicating feeding by humans. Also, analysis of mitochondrial DNA by Olaf Thalmann of the University of Turku in Finland, and others, showed that the individuals identified as dogs have a distinctive genetic signature that is not known from any other canid. "Since mitochondrial DNA is carried only by females, this finding may indicate that these odd canids did not give rise to modern domesticated dogs and were simply a peculiar, extinct group of wolves," Shipman said. "Alternatively, it may indicate that early humans did domesticate wolves into dogs or a doglike group, but the female canids interbred with wild wolf males and so the distinctive female mitochondrial DNA lineage was lost."

As more information is gathered on fossil canids dated to between 45,000 and 15,000 years ago, Shipman's hunting-dog hypothesis will be supported "if more of these distinctive doglike canids are found at large, long-term sites with unusually high numbers of dead mammoths and wolves; if the canids are consistently large, strong individuals; and if their diets differ from those of wolves," Shipman said. "Dogs may indeed be man's best friend."

Contacts and sources:
Barbara K. Kennedy
Penn State

Wednesday, May 28, 2014

Birdsnap App Is A Digital Field Guide To Identify 500 Most Common North American Bird Species

Columbia Engineering researchers use computer vision + machine learning techniques to launch electronic field guide featuring 500 of the most common North American bird species.

Researchers at Columbia Engineering, led by Computer Science Professor Peter Belhumeur, have taken bird-watching to a new level. Using computer vision and machine learning techniques, they have developed Birdsnap, a new iPhone app that is an electronic field guide featuring 500 of the most common North American bird species. The free app, which enables users to identify bird species through uploaded photos, accompanies a visually beautiful, comprehensive website that includes some 50,000 images. 

Digital technology is about to add big data to the bird enthusiast's traditional tools of binoculars and a field guide. Peter Belhumeur, a Columbia Engineering computer science professor whose app for recognizing leaves was launched in 2011, has now created Birdsnap, an electronic guide for identifying birds. Birdsnap uses the computer technology that can recognize human faces to identify 500 common birds in North America.

Credit: Columbia University, Office of Communications and Public Affairs

Birdsnap, which also features birdcalls for each species, offers users numerous ways to organize species—alphabetically, by their relationship in the Tree of Life, and by the frequency with which they are sighted at a particular place and season. The researchers, who collaborated with colleagues at the University of Maryland, are presenting their work at the IEEE Conference on Computer Vision and Pattern Recognition in Columbus, OH, June 24 to 27.

"Our goal is to use computer vision and artificial intelligence to create a digital field guide that will help people learn to recognize birds," says Belhumeur, who launched Leafsnap, a similar electronic field guide for trees, with colleagues two years ago. "We've been able to take an incredible collection of data—thousands of photos of birds—and use technology to organize the data in a useful and fun way."

Belhumeur and his colleague, Computer Science Professor David Jacobs of the University of Maryland, realized that many of the techniques they have developed for face recognition, in work spanning more than a decade, could also be applied to automatic species identification. State-of-the-art face recognition algorithms rely on methods that find correspondences between comparable parts of different faces, so that, for example, a nose is compared to a nose, and an eye to an eye. Birdsnap works the same way, detecting the parts of a bird so that it can examine the visual similarity of its comparable parts (each species is labeled through the location of 17 parts). It automatically discovers visually similar species and makes visual suggestions for how they can be distinguished.

This is a screenshot of comparison between Canada Warbler and Magnolia Warbler.
Credit: Columbia Engineering

"Categorization is one of the fundamental problems of computer vision," says Thomas Berg, a Columbia Engineering computer science PhD candidate who works closely with Belhumeur. "Recently, there's been a lot of progress in fine-grained visual categorization, the recognition of—and distinguishing between—categories that look very similar. What's really exciting about Birdsnap is that not only does it do well at identifying species, but it can also identify which parts of the bird the algorithm uses to identify each species. Birdsnap then automatically annotates images of the bird to show these distinctive parts—birders call them 'field marks'—so the user can learn what to look for."

The team designed what they call "part-based one-vs-one features," or POOFs, each of which classifies birds of just two species, based on a small part of the body of the bird. The system builds hundreds of POOFs for each pair of species, each based on a different part of the bird, and chooses the parts used by the most accurate POOFs as field marks. Birdsnap also uses POOFs for identification of uploaded images.
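The part-based one-vs-one idea can be sketched with a toy example. The code below is not the authors' implementation: it uses invented data and a simple nearest-centroid classifier to show how training one small classifier per body part for a pair of species, then ranking parts by held-out accuracy, surfaces the most discriminative part as a candidate "field mark."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two species, 17 body parts, a small feature vector per part.
# The species differ mainly at part 3 (say, a wing bar), mimicking a field mark.
N_PARTS, N_FEAT, N_BIRDS = 17, 8, 200

def make_species(shift_part, shift):
    X = rng.normal(size=(N_BIRDS, N_PARTS, N_FEAT))
    X[:, shift_part, :] += shift
    return X

a, b = make_species(3, 1.5), make_species(3, -1.5)

def part_accuracy(part):
    """POOF-like classifier for this species pair using only one part's features."""
    Xa, Xb = a[:, part, :], b[:, part, :]
    # Train (compute centroids) on the first half, test on the second half.
    ca, cb = Xa[:100].mean(0), Xb[:100].mean(0)
    test = np.vstack([Xa[100:], Xb[100:]])
    labels = np.array([0] * 100 + [1] * 100)
    pred = (np.linalg.norm(test - cb, axis=1) <
            np.linalg.norm(test - ca, axis=1)).astype(int)
    return (pred == labels).mean()

accs = [part_accuracy(p) for p in range(N_PARTS)]
field_mark = int(np.argmax(accs))   # most discriminative part for this pair
```

In this sketch the classifier trained on part 3 is near-perfect while the others hover around chance, so part 3 is selected as the field mark; the real system repeats this over hundreds of part-based classifiers per species pair.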

The team also took advantage of the fact that modern cameras, especially those on phones, embed the date and location in their images and used that information to improve classification accuracy. Not only did they come up with a fully automatic method to teach users how to identify visually similar species, but they also designed a system that can pinpoint which birds are arriving, departing, or migrating. "You can ID birds in the U.S. wherever you are at any time of year," Berg notes.

The Leafsnap app, which involved costly time and resources spent in collecting and photographing thousands of leaves, took almost 10 years to develop and now has more than a million users. Belhumeur got Birdsnap going in about six months, thanks to the proliferation of online data sources and advances in computer vision and mobile computing. Photos were downloaded from the Internet, with species labels confirmed by workers on Amazon Mechanical Turk, who also labeled the parts. Descriptions were sourced through Wikipedia. The maps were based on data from eBird, a joint venture of Cornell University's Lab of Ornithology and the National Audubon Society, and BirdLife, an international network of conservation groups.

Belhumeur hopes next to work with Columbia Engineering colleagues on adding the ability to recognize bird songs, bringing audio and visual recognition together. He also wants to create "smart" binoculars that use this artificial intelligence technology to identify and tag species within the field of view.

"Biological domains—whether trees, dogs, or birds—where taxonomy dictates a clear set of subcategories, are wonderfully well-suited to the problem of fine-grained visual categorization," Belhumeur observes. "With all the advances in computer vision and information collection, it's an exciting time to be immersed in visual recognition and big data."

Contacts and sources:
Columbia University

This research was funded by the National Science Foundation, the Gordon and Betty Moore Foundation, and the Office of Naval Research.

Tai Chi Health Benefits And Anti-Aging Effects Proven In Study

Tai Chi, a traditional Chinese martial art and sport, has been found to raise the numbers of an important type of cell. In a study comparing three groups of young people who practiced Tai Chi, took brisk walks, or did no exercise, the group performing Tai Chi saw a rise in their cluster of differentiation 34-expressing (CD34+) cells, a stem cell important to a number of the body's functions and structures.

Credit: Wikipedia 

"To evaluate the potential life-lengthening effect of Tai Chi, we conducted a year-long, retrospective cross-sectional study comparing the rejuvenating and anti-aging effects among three groups of volunteers under the age of 25 who engaged in either Tai Chi (TCC), brisk walking (BW), or no exercise habit (NEH)," said study corresponding author Dr. Shinn-Zong Lin of the Center for Neuropsychiatry, China Medical University Hospital, Taichung, Taiwan. 

"We used young volunteers because they have better cell-renewing abilities than the old population and we also wanted to avoid having chronic diseases and medications as interfering factors."

According to the authors, Tai Chi "has been confirmed to benefit" patients with mild to moderate Parkinson's disease and fibromyalgia. In addition, they cite possible advantages of Tai Chi in pain reduction, fall prevention and balance improvement, aerobic capacity, blood pressure, quality of life and stress reduction.

"Compared with the NEH group, the TCC group had a significantly higher number of CD34+ cells," wrote the authors. "We found that the CD34+ cell count of the TCC group was significantly higher than the BW group."

CD34+ cells, they explained, express the CD34 protein and are "cluster markers" for hematopoietic stem cells (blood stem cells) involved in cell self-renewal, differentiation and proliferation.

"It is possible that Tai Chi may prompt vasodilation and increase blood flow," said Lin. "Considering that BW may require a larger space or more equipment, Tai Chi seems to be an easier and more convenient choice of anti-aging exercise." 

"This study provides the first step into providing scientific evidence for the possible health benefits of Tai Chi," said Dr. Paul R. Sanberg, distinguished professor at the Center of Excellence for Aging and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, FL. "Further study of how Tai Chi can elicit benefit in different populations and on different parameters of aging is necessary to determine its full impact."

The study was published in issue 23(4/5) of Cell Transplantation and is freely available on-line at: http://www.ingentaconnect.com/content/cog/ct/2014/00000023/F0020004/art00020.

Contacts and sources: 
Contact: Dr. Shinn-Zong Lin Center for Neuropsychiatry, China Medical University Hospital, Taichung, Taiwan, ROC.
Robert Miranda
Cell Transplantation Center of Excellence for Aging and Brain Repair

Citation: Ho, T-J.; Ho, L-I.; Hsueh, K-W.; Chan, T-M.; Huang, S-L.; Lin, J-G.; Liang, W-M.; Hsu, W-H.; Harn, H-J.; Lin, S-Z. Tai Chi Intervention Increases Progenitor CD34+ Cells in Young Adults. Cell Transplant. 23(4-5):613-620; 2014.

Cats Found To Eat More In The Winter And Less In Summer

Cats eat more during the winter and owners should give their pet more food during this time, University of Liverpool research has found.

Researchers from the University's School of Veterinary Science, in collaboration with colleagues at the Royal Canin Research Centre in France, spent four years monitoring how much cats chose to eat, and found that food intake increased in colder months and decreased during the summer.

Credit: Wikipedia

The 38 cats studied had microchips on their collars that allowed them to take as much food as they wanted from a dispenser that only opened for them. At the same time, the microchip recorded how much each cat had eaten and when.

Veterinarian and study author, Dr Alex German, said: "Cats, like many humans, are more inclined to comfort eat when it's cold outside but, in their case, it's likely to be due to the extra energy they need to keep warm when out and about."

The study found that cats ate approximately 15% less food during summer, and the vets have concluded that the extra effort to keep warm in winter and the temptation to rest during hot summer days contributed to the swing in activity levels during the year.

The cats were all inhabitants of a centre in southern France where they were allowed to play and exercise outside all year round. The cats were of mixed breeds, ages and genders. Data on food was compared to the climate in the area using computer modelling to provide information about how the temperature changed over the year.
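As a rough illustration of this kind of modelling (not the study's actual method), a sinusoid with a one-year period can be fitted to monthly intake data by least squares, and the size and timing of the seasonal swing read off the fitted coefficients. All the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mean daily intake per month (g/day) with a winter peak, loosely
# mimicking the reported seasonal pattern; values are invented.
months = np.arange(12)                 # 0 = January ... 11 = December
theta = 2 * np.pi * months / 12
intake = 60 + 5 * np.cos(theta) + rng.normal(scale=0.5, size=12)

# Fit intake ~ a + b*cos(theta) + c*sin(theta) by ordinary least squares.
X = np.column_stack([np.ones(12), np.cos(theta), np.sin(theta)])
(a, b, c), *_ = np.linalg.lstsq(X, intake, rcond=None)

amplitude = np.hypot(b, c)             # size of the seasonal oscillation
swing = 2 * amplitude / a              # peak-to-trough, as a fraction of the mean
peak_month = int(np.argmax(X @ np.array([a, b, c])))   # 0 means a midwinter peak
```

With these toy numbers the fit recovers a midwinter peak and a peak-to-trough swing of roughly 15-20 percent of mean intake, comparable in scale to the summer reduction the study reports.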

Seasonal food intake has also been examined in the past in farm animals, such as dairy cows, to establish new ways of increasing milk production, but this is the largest such study yet conducted with domestic cats.

Dr German said: "People should consider the amount of food their cats need at different times of year as this can be part of helping them to maintain a healthy weight."

Contacts and sources:
Jamie Brown
University of Liverpool

The paper was published in the journal PLOS One. http://dx.plos.org/10.1371/journal.pone.0096071

Marine Fish Use Red Biofluorescence To Communicate

Tübingen University biologists show marine fish use red biofluorescence to communicate

The ocean is blue because red light is swiftly absorbed by the water. That’s why even a few meters below the surface, the sea and its creatures appear a dull blue color. Evolutionary biologists at the University of Tübingen are carrying out research into marine fish which have developed their own biofluorescence - producing bright red colors in the blue depths of the sea.

The red-eye wrasse produces deep red biofluorescence to communicate and defend its territory.
Photo: Nico Michiels, University of Tübingen

In the latest Proceedings of the Royal Society B, they demonstrate for the first time that fish are able to perceive their own biofluorescence and to use it to communicate with members of their own species.

Dr Tobias Gerlach, Dr Dennis Sprenger and Professor Nico Michiels from Tübingen’s Institute of Evolution and Ecology made use of the fact that the male of the red-eye wrasse, Cirrhilabrus solorensis, reacts aggressively towards his own reflection in a mirror. 

In one experiment, a filter placed in front of the mirror blocked the fluorescent part of the fish’s coloration - with the result that males were far less interested in their mirror image. The researchers say this shows that the red-eye wrasse not only recognizes its species’ special coloration - it also uses it as a territorial marker and displays it in conflicts with other males.

One of the most exciting discoveries, the researchers say, is that the fluorescence is a deep red in a part of the spectrum which, it was previously believed, fish could not see or make use of. It could be that red-eye wrasses use their fluorescence as a private frequency to communicate amongst themselves.

Contacts and sources:
Tübingen University

Citation: Tobias Gerlach, Dennis Sprenger, Nico K. Michiels: “Fairy wrasses perceive and respond to their red fluorescent colouration”, Proceedings of the Royal Society B (2014)
http://dx.doi.org/10.1098/rspb.2014.0787

Mesopotamia Far Wetter And More Fertile 12,000 Years Ago: Birthplace Of Western Agriculture

A study co-headed by Josep Lluís Araus, professor from the University of Barcelona (UB), Juan Pedro Ferrio, Ramón y Cajal researcher at Agrotecnio of the University of Lleida (UdL), and Jordi Voltas, professor from Agrotecnio, describes the characteristics of agriculture at its beginnings by comparing kernel and wood samples from ancient Near East sites —the birthplace of Western agriculture— with present-day samples.

The study is co-headed by the University of Barcelona and the University of Lleida, together with the Archaeological Museum of Catalonia.
Photo: Josep Lluís Araus, UB

It is the first time that direct evidence has revealed the humidity and fertility conditions under which crops grew, as well as the process of cereal domestication carried out by humans from the Neolithic (12,000 years ago) to early Roman times (around 2,000 years ago).

The study has been published in the journal Nature Communications. Researchers Ramon Buxó, archaeologist and director of the Archaeological Museum of Catalonia-Girona, and Mònica Aguilera, UdL researcher who is now working at the Paris Natural History Museum, participated in the study too.

Researchers used crop physiology techniques to analyse archaeobotanical remains. In total, they analysed 367 kernels —of barley and wheat, for instance— and 362 wood samples obtained from eleven archaeological sites across the Near East, including Upper Mesopotamia, which covers present-day south-eastern Turkey and northern Syria. For comparison, they also studied kernels from present-day crops of wheat and barley species similar to the archaeological remains found in the region.

Progressive domestication

Researchers compared the size of kernel remains with present-day samples to determine the evolution of crop domestication. "The methodology used to date does not reproduce real size; it measures the width and length of charred kernels", explains Josep Lluís Araus, professor from the Department of Plant Biology of the UB. "We have reconstructed cereal kernel weight —adds the expert— and seen that it increased for a longer period of time than was previously thought, probably over several millennia". According to the researcher, the initial selection of kernels was "unconscious": in other words, the first farmers selected the biggest kernels, so size increased progressively.

Wetter and more fertile soils

Sample analysis of carbon and nitrogen isotope compositions —a technique used in crop physiology and improvement— was a key factor in describing the conditions of the area. On the one hand, "carbon isotope composition makes it possible to evaluate water availability for crops. It reached its maximum level 9,000 years ago, and then decreased progressively until the beginning of our era", points out Araus. In any case, researchers have not found conclusive evidence that irrigation was a common practice. "This information, together with cereal kernel weight, allows us to assess the productivity of ancient crops", highlights Josep Lluís Araus.
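For background, water availability in such studies is usually inferred from carbon isotope discrimination, Δ13C, computed from the isotopic compositions of atmospheric CO2 and the plant material. The standard crop-physiology formula is shown below as general context; it is not necessarily the paper's exact formulation:

```latex
\Delta^{13}\mathrm{C} \;=\;
\frac{\delta^{13}\mathrm{C}_{\mathrm{air}} - \delta^{13}\mathrm{C}_{\mathrm{plant}}}
     {1 + \delta^{13}\mathrm{C}_{\mathrm{plant}}/1000}
```

Well-watered plants keep their stomata open longer and discriminate more strongly against the heavier 13C isotope, giving higher Δ13C values; drought-stressed plants show lower values, which is what allows charred kernels to record past water availability.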

On the other hand, nitrogen isotope composition provides information about the soil's organic matter and fertility. Juan Pedro Ferrio (Agrotecnio-UdL) affirms that "although these were dryland crops, it can be affirmed that nitrogen was much more available than today: undoubtedly, soils were much more fertile than they are nowadays". Moreover, a progressive decrease in soil fertility can be observed, probably due to over-exploitation or the use of less fertile soils, but also to more extreme climate conditions.

These data make it possible to describe more precisely the agronomic conditions and the evolution of the human populations linked to agricultural practices. "The study relates conditions like water availability or soil fertility to crop yields", states Josep Lluís Araus. Past yields, compared with the average calorie needs of one person, give a rough idea, for example, of the crop area needed to feed a population. "This information —adds Araus— can be used to establish more precisely the borders of past settlements and the evolution of human communities. The aim is to include all this information in models in order to better understand the past", concludes the researcher.

Contacts and sources:
University of Barcelona 

Citation: José L. Araus, Juan P. Ferrio, Jordi Voltas, Mònica Aguilera, Ramón Buxó. "Agronomic conditions and crop evolution in ancient Near East agriculture." Nature Communications, Volume 5, Article number 3953. DOI: 10.1038/ncomms4953. Published 23 May 2014.

Seafloor Experts Publish New View Of Zone Where Malaysia Airlines Flight 370 Might Lie

A new illustration of the seafloor, created by two of the world’s leading ocean floor mapping experts that details underwater terrain where the missing Malaysia Airlines flight might be located, could shed additional light on what type of underwater vehicles might be used to find the missing airplane and where any debris from the crash might lie.

The seafloor topography map illustrates jagged plateaus, ridges and other underwater features of a large area underneath the Indian Ocean where search efforts have focused since contact with Malaysia Airlines flight MH370 was lost on March 8. The image was published today in Eos, the weekly newspaper of the Earth and space sciences, published by the American Geophysical Union.

Seafloor topography in the Malaysia Airlines flight MH370 search area. Dashed lines approximate the search zone for sonar pings emitted by the flight data recorder and cockpit voice recorder popularly called black boxes. The first sonar contact (black circle) was reportedly made by a Chinese vessel on the east flank of Batavia Plateau (B), where the shallowest point in the area (S) is at an estimated depth of 1637 meters. The next reported sonar contact (red circle) was made by an Australian vessel on the north flank of Zenith Plateau (Z). 

The deepest point in the area (D) lies in the Wallaby-Zenith Fracture Zone at an estimated depth of 7883 meters. The Wallaby Plateau (W) lies to the east of the Zenith Plateau. The shallowest point in the entire area shown here is on Broken Ridge (BR). Deep Sea Drilling Project (DSDP) site 256 is marked by a gray dot. The inset in the top left shows the area’s location to the west of Australia. Seafloor depths are from the General Bathymetric Chart of the Oceans [2010].

Credit: Walter H.F. Smith and Karen M. Marks

The new illustration of a 2,000 kilometer by 1,400 kilometer (1,243 miles by 870 miles) area where the plane might be shows locations on the seafloor corresponding to where acoustic signals from the airplane’s black boxes were reportedly detected at the surface by two vessels in the area. It also shows the two plateaus near where these “pings” were heard.

It points out the deepest point in the area: 7,883 meters (about five miles) underneath the sea in the Wallaby-Zenith Fracture Zone – about as deep as 20 Empire State buildings stacked top to bottom. Undersea mountains and plateaus rise nearly 5,000 meters (about three miles) above the deep seafloor, according to the map.

The illustration, designated as Figure 1 of the Eos article, was created by Walter H.F. Smith and Karen M. Marks, both of the National Oceanic and Atmospheric Administration’s Laboratory for Satellite Altimetry in College Park, Maryland, and the former and current chairs, respectively, of the Technical Sub-Committee on Ocean Mapping of the General Bathymetric Chart of the Oceans, or GEBCO. GEBCO is an international organization that aims to provide the most authoritative publicly available maps of the depths and shapes of the terrain underneath the world’s oceans.

Satellite altimetry has made it possible to depict the topography of vast regions of the seafloor that would otherwise have remained unmapped, Smith said. To illustrate the topography of the search area, Smith and Marks used publicly available data from GEBCO and other bathymetric models and data banks, along with information culled from news reports.

Smith said the terrain and depths shown in the map could help searchers choose the appropriate underwater robotic vehicles they might use to look for the missing plane. Knowing the roughness and shape of the ocean floor could also help inform models predicting where floating debris from the airplane might turn up.

Smith cautions that the new illustration is not a roadmap to find the missing airplane. Nor does the map define the official search area for the aircraft, he added.

“It is not ‘x marks the spot’,” Smith said of their map. “We are painting with a very, very broad brush.”

Search efforts for the missing airplane have focused on an area of the southern Indian Ocean west of Australia where officials suspect that the plane crashed after it veered off course. After an initial air and underwater search failed to find any trace of the airplane, authorities announced this month that they will expand the search area and also map the seabed in the area.

Smith pointed out that the search for the missing plane is made more difficult because so little is understood about the seafloor in this part of the Indian Ocean. In the southeast Indian Ocean, only 5 percent of the ocean bottom has been measured by ships with echo soundings. Knowledge of the rest of the area comes from satellite altimetry, which provides relatively low-resolution mapping compared to ship-borne methods.

“It is a very complex part of the world that is very poorly known,” Smith said.

A lack of good data about Earth’s seafloors not only hinders search efforts, it also makes it harder for scientists to accurately model the world’s environment and climate, Smith noted. Today, our knowledge of our planet’s undersea topography is “vastly poorer than our knowledge of the topographies of Earth’s Moon, Mars and Venus,” Smith and Marks write in Eos. This is because these other planetary bodies have no oceans, making their surfaces relatively easy to sense from space.

Smith said he hoped that “the data collected during the search for MH370 will be contributed to public data banks and will be a start of greater efforts to map Earth’s ocean floor.”

Contacts and sources: 
Nanci Bompey
American Geophysical Union

Tuesday, May 27, 2014

DARPA Brain Implant For PTSD, Brain Injuries And Other Neurological And Psychiatric Disorders Researched

Investigators at Massachusetts General Hospital (MGH) today announced a new research initiative designed to treat post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), and other neurological and psychiatric disorders.

Conceptual model of brain implant for PTSD and TBI

Courtesy of MGH and Draper Labs

The goal of the project, which is made possible by a $30 million grant from the Defense Advanced Research Projects Agency (DARPA), is to design and build a first-of-its-kind implantable deep brain stimulation (DBS) device which will monitor signals across multiple brain structures in real time. Based on the monitored activity, it will then deliver stimulation to key areas to alleviate symptoms related to neuropsychiatric disorders such as PTSD, severe depression, drug addiction, and TBI.

“Deep brain stimulation has been shown to be an effective treatment for a variety of brain diseases, especially those involving movement like Parkinson’s disease,” says Emad Eskandar MD, director of functional neurosurgery at MGH and the project’s principal investigator. 

“Our goal is to take DBS to the next level and create an implantable device to treat disorders like PTSD and TBI. Together with our partners we’re committed to developing this technology, which we hope will be a bold new step toward treating those suffering from these debilitating disorders,” says Eskandar.

The initiative, called Transdiagnostic Restoration of Affective Networks by System Identification and Function-Oriented Real-Time Modeling and Deep Brain Stimulation (TRANSFORM DBS), involves cross-hospital collaborations along with partners from the Massachusetts Institute of Technology (MIT) and Draper Labs. 

The MGH-based team will include the departments of Neurosurgery, Psychiatry, Neurology, Anesthesia and Critical Care, and the Martinos Center for Biomedical Imaging. The TRANSFORM DBS team will also work closely with scientists at Draper Laboratories, who will be responsible for the engineering portions of the project.

“We’re strongly encouraged by the previous data connected with this approach,” says Eskandar. “Our hope is that this project will not only restore quality of life for those affected, both military and civilian, but dramatically change the way we approach the treatment of neuropsychiatric disorders.”

Contacts and sources:
Mike Morrison
Massachusetts General Hospital

Skin Grafts From Genetically Modified Pigs May Offer Alternative For Burn Treatment

A specially bred strain of miniature swine lacking the molecule responsible for the rapid rejection of pig-to-primate organ transplants may provide a new source of skin grafts to treat seriously burned patients. A team of investigators from Massachusetts General Hospital (MGH) reports that skin grafts from pigs lacking the Gal sugar molecule were as effective in covering burn-like injuries on the backs of baboons as skin taken from other baboons, a finding that could double the length of time burns can be protected while healing. The report has been published online in the journal Transplantation.

Skin graft illustration 

Credit: Wikipedia

"This exciting work suggests that these GalT-knockout porcine skin grafts would be a useful addition to the burn-management armamentarium," says Curtis Cetrulo, MD, of the MGH Transplantation Biology Research Center (TBRC) and the Division of Plastic and Reconstructive Surgery, corresponding author of the Transplantation paper. "We are actively exploring options for establishing clinical-grade production of these grafts and hope to begin a clinical trial in due course."

A key component in the treatment of major burns, particularly those involving more than 30 percent of the body surface, is removing the damaged skin and covering the injury, preferably with a graft of the patient's own tissue. When insufficient undamaged skin is available for grafting, tissue from deceased donors is used as a temporary covering. But deceased-donor skin grafts are in short supply and expensive – disadvantages that also apply to artificial skin grafts – and they must be carefully tested for pathogens and are eventually rejected by the patient's immune system. Once a deceased-donor graft has been rejected, the patient's immune system will reject any subsequent deceased-donor grafts almost immediately.

The current study was designed to investigate whether a resource already available at the MGH might help expand options for protecting burned areas following removal of damaged skin. For more than 30 years David H. Sachs, MD, founder and scientific director of the TBRC, has been investigating ways to allow the human body to accept organ and tissue transplants from animals. Sachs and his team developed a strain of inbred miniature swine with organs that are close in size to those of adult humans.

 Since pig organs implanted into primates are rapidly rejected due to the presence of the Gal (alpha-1,3-galactose) molecule, Sachs and his collaborators used the strain that he developed to generate miniature swine in which both copies of the gene encoding GalT (galactosyltransferase), the enzyme responsible for placing the Gal molecule on the cell surface, were knocked out.

When Cetrulo's team used skin from these Gal-free pigs to provide grafts covering burn-like injuries on the backs of baboons – injuries made while the animals were under anesthesia – the grafts adhered and developed a vascular system within 4 days of implantation. Signs of rejection began to appear on day 10, and rejection was complete by day 12 – a time frame similar to what is seen with deceased-donor grafts and identical to that observed when the team used skin grafts from other baboons. 

As with the use of second deceased-donor grafts to treat burned patients, a second pig-to-baboon graft was rapidly rejected. But if a pig-to-baboon graft was followed by a graft using baboon skin, the second graft adhered to the wound and remained in place for around 12 days before rejection. The researchers also showed that acceptance of a second graft was similar regardless of whether a pig xenograft or a baboon skin graft was used first.

"These results raise the possibility not only of providing an alternative to deceased-donor skin for many patients but also that, in patients whose burns are particularly extensive and require prolonged coverage, sequential use of GalT-knockout and deceased-donor skin could provide extended, high-quality wound coverage," says co-author David Leonard, MBChB, of the TBRC and Division of Plastic and Reconstructive Surgery. "A high-quality alternative to deceased-donor skin that could be produced from a specially maintained, pathogen-free herd of GalT-knockout miniature swine would be an important resource for burn management in both civilian and military settings."

Contacts and sources:
Mike Morrison
Massachusetts General Hospital

Giant Mars Volcano May Provide Habitat For Humans

Heat from a volcano erupting beneath an immense glacier would have created large lakes of liquid water on Mars in the relatively recent past. And where there’s water, there is also the possibility of life. A recent paper by Brown University researchers calculates how much water may have been present near the Arsia Mons volcano and how long it may have remained.

 The slopes of a giant Martian volcano, once covered in glacial ice, may have been home to one of the most recent habitable environments yet found on the Red Planet, according to new research led by Brown University geologists.

Possibly habitable environs: Braided fluvial channels (inset) emerge from the edge of glacial deposits roughly 210 million years old on the martian volcano Arsia Mons, nearly twice as high as Mount Everest. (Colors indicate elevation.)
Credit: NASA/Goddard Space Flight Center/Arizona State University/Brown University

Nearly twice as tall as Mount Everest, Arsia Mons is the third tallest volcano on Mars and one of the largest mountains in the solar system. This new analysis of the landforms surrounding Arsia Mons shows that eruptions along the volcano’s northwest flank happened at the same time that a glacier covered the region around 210 million years ago. The heat from those eruptions would have melted massive amounts of ice to form englacial lakes — bodies of water that form within glaciers like liquid bubbles in a half-frozen ice cube.

The ice-covered lakes of Arsia Mons would have held hundreds of cubic kilometers of meltwater, according to calculations by Kat Scanlon, a graduate student at Brown who led the work. And where there’s water, there’s the possibility of a habitable environment.

“This is interesting because it’s a way to get a lot of liquid water very recently on Mars,” Scanlon said.

While 210 million years ago might not sound terribly recent, the Arsia Mons site is much younger than the habitable environments turned up by Curiosity and other Mars rovers. Those sites are all likely older than 2.5 billion years. The fact that the Arsia Mons site is relatively young makes it an interesting target for possible future exploration.

“If signs of past life are ever found at those older sites, then Arsia Mons would be the next place I would want to go,” Scanlon said.

A paper describing Scanlon’s work is published in the journal Icarus.

Scientists have speculated since the 1970s that the northwest flank of Arsia Mons may once have been covered by glacial ice. That view got a big boost in 2003 when Brown geologist Jim Head and Boston University’s David Marchant showed that terrain around Arsia Mons looks strikingly similar to landforms left by receding glaciers in the Dry Valleys of Antarctica. Parallel ridges toward the bottom of the mountain appear to be drop moraines — piles of rubble deposited at the edges of a receding glacier. An assemblage of small hills in the region also appears to be debris left behind by slowly flowing glacial ice.

The glacier idea got another boost with recently developed climate models for Mars that take into account changes in the planet’s axis tilt. The models suggested that during periods of increased tilt, ice now found at the poles would have migrated toward the equator. That would make Mars’s giant mid-latitude mountains — Ascraeus Mons, Pavonis Mons and Arsia Mons — prime locations for glaciation around 210 million years ago.

Fire and ice

Working with Head, Marchant, and Lionel Wilson from the Lancaster Environment Centre in the U.K., Scanlon looked for evidence that hot volcanic lava may have flowed in the region at the same time that the glacier was present. She found plenty.

Using data from NASA’s Mars Reconnaissance Orbiter, Scanlon found pillow lava formations, similar to those that form on Earth when lava erupts at the bottom of an ocean. She also found the kinds of ridges and mounds that form on Earth when a lava flow is constrained by glacial ice. The pressure of the ice sheet constrains the lava flow, and glacial meltwater chills the erupting lava into fragments of volcanic glass, forming mounds and ridges with steep sides and flat tops. The analysis also turned up evidence of a river formed in a jökulhlaup, a massive flood that occurs when water trapped in a glacier breaks free.

Based on the sizes of the formations, Scanlon could estimate how much lava would have interacted with the glacier. Using basic thermodynamics, she could then calculate how much meltwater that lava would produce. She found that two of the deposits would have created lakes containing around 40 cubic kilometers of water each. That’s almost a third of the volume of Lake Tahoe in each lake. Another of the formations would have created around 20 cubic kilometers of water.
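The melt estimate described above can be sketched with a simple heat balance: the energy released as lava cools and crystallizes goes into melting glacial ice. The parameter values below (basalt density, eruption temperature, latent heats) are generic textbook figures chosen for illustration, not the actual inputs of Scanlon's calculation.

```python
# Back-of-the-envelope estimate of glacial meltwater produced by lava,
# in the spirit of the calculation described above. All parameter values
# are illustrative assumptions, not the paper's actual inputs.

RHO_LAVA = 3000.0   # basalt density, kg/m^3
CP_LAVA = 1000.0    # specific heat of lava, J/(kg K)
L_CRYST = 4.0e5     # latent heat of crystallization of lava, J/kg
DT_LAVA = 1100.0    # cooling from eruption temperature to ~0 C, K

L_FUSION = 3.34e5   # latent heat of fusion of ice, J/kg

def meltwater_volume_km3(lava_volume_km3: float) -> float:
    """Water volume (km^3) melted by a given lava volume (km^3),
    assuming all of the lava's heat goes into melting ice."""
    heat = lava_volume_km3 * 1e9 * RHO_LAVA * (CP_LAVA * DT_LAVA + L_CRYST)
    ice_mass = heat / L_FUSION            # kg of ice melted
    water_volume_m3 = ice_mass / 1000.0   # liquid water at 1000 kg/m^3
    return water_volume_m3 / 1e9

# Roughly how much water do a few cubic kilometers of lava yield?
for lava in (1.0, 3.0, 10.0):
    print(f"{lava:5.1f} km^3 lava -> {meltwater_volume_km3(lava):6.1f} km^3 water")
```

With these assumed values, each cubic kilometer of lava melts on the order of ten cubic kilometers of ice, so a 40-cubic-kilometer lake requires only a modest eruption.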

Even in the frigid conditions of Mars, that much ice-covered water would have remained liquid for a substantial period of time. Scanlon’s back-of-the-envelope calculation suggests the lakes could have persisted for hundreds or even a few thousand years.

That may have been long enough for the lakes to be colonized by microbial life forms, if in fact such creatures ever inhabited Mars.

“There’s been a lot of work on Earth — though not as much as we would like — on the types of microbes that live in these englacial lakes,” Scanlon said. “They’ve been studied mainly as an analog to [Saturn’s moon] Europa, where you’ve got an entire planet that’s an ice covered lake.”

In light of this research, it seems possible that those same kinds of environs existed on Mars at this site in the relatively recent past.

There’s also the possibility, Head points out, that some of that glacial ice may still be there. “Remnant craters and ridges strongly suggest that some of the glacial ice remains buried below rock and soil debris,” he said. “That’s interesting from a scientific point of view because it likely preserves in tiny bubbles a record of the atmosphere of Mars hundreds of millions of years ago. But an existing ice deposit might also be an exploitable water source for future human exploration.”

Contacts and sources:
Kevin Stacey
Brown University

New Design Of A System That "Interrogates" And Shows How The Brain Relearns

The challenge of monitoring the rehabilitation of patients with neurological damage caused by stroke has led Mexican scientists to design and build a functional near-infrared spectroscopy (fNIRS-FD) instrument capable of identifying the affected areas of the brain and the sites activated during therapy, by analyzing the oxygen content of the blood flow.

Credit: Investigación y Desarrollo

"It's a device consisting of a headband or helmet equipped with light emitters and detectors, an oximeter (to measure oxygen levels), a monitor and software. Its operation is based on infrared light, which passes through the scalp and skull and “interrogates” brain activity in order to obtain information on cell metabolism, alterations in blood flow and the amount of oxygen," explains Carlos Gerardo Treviño Palacios, researcher at the National Institute of Astrophysics, Optics and Electronics (INAOE) in Mexico.

He highlights that they are finishing development of the oximeter and the image-display software. The team is also analyzing the information that will feed the base hardware and detectors, and is working on construction of the helmet. The system will not only help rehabilitate patients, but will also create a map of the brain to detect which regions are taking over for areas of the motor cortex that died after a stroke, and to watch how the body relearns with the help of rehabilitation.

Credit: Investigación y Desarrollo

"The aim is to build a non-invasive imaging system that avoids confining the patient inside a scanner while the brain “photograph” is taken, with all the limitations of that procedure, as happens with an MRI," says Treviño Palacios.

He notes that although MRI also measures oxygen concentration, infrared spectroscopy, despite its lower resolution, does not require the patient to lie still and needs only the helmet, allowing the physician to observe brain activity and progress while the patient continues rehabilitation therapy. Additional advantages are the system's portability and low cost.

"In parallel, we are looking for a fast optical signal, i.e., a series of changes in the images that occur a few milliseconds before a neuron becomes active, revealing the action potential of the nerve cell," says the INAOE researcher.

Credit: Investigación y Desarrollo

The project is jointly implemented by INAOE and the National Institute of Neurology and Neurosurgery of the Mexican Ministry of Health. The collaboration, which follows an earlier joint project that developed a rehabilitation therapy system, now pursues an imaging modality based on the interaction of light with matter.

"The particular characteristics of the optical imaging system make it a unique tool for problems where in vivo, in situ neuroimaging is required noninvasively and continuously for long periods of time. This is the case in the study of brain plasticity in patients undergoing motor rehabilitation, who should be monitored while practicing neuro-rehabilitation exercises during therapy sessions that can last from 45 minutes to an hour," says Treviño Palacios. (Agencia ID)

Contacts and sources:
Investigación y Desarrollo

Astronomers Create First Realistic Virtual Universe

Tracking 13 billion years of cosmic evolution, astronomers have created the first realistic virtual simulation of the Universe.

Credit: Dr Debora Sijacki, Cambridge University

A newly-developed computer simulation has created the first realistic version of the Universe, enabling researchers to understand how galaxies, black holes and other cosmic phenomena developed from shortly after the Big Bang to the present day.

The simulation, known as Illustris, follows the complex development of both normal and dark matter over 13 billion years, matching many of the features observed in the real Universe for the first time.

Developed by an international team of researchers, Illustris tracks the development of the Universe from 12 million years after the Big Bang up to the present, identifying more than 41,000 galaxies in a cube of simulated space 350 million light-years on each side. The results are reported in the May 8th issue of the journal Nature.

Over the past two decades, researchers have been attempting to build accurate computer simulations of the development of the Universe, using computer programs which are capable of encapsulating all the relevant laws of physics governing the formation of galaxies.

Previous attempts to simulate the universe were hampered by a lack of computing power and the complexities of the underlying physics. As a result, those programs were either limited in resolution or forced to focus on a small portion of the universe. Earlier simulations also had trouble modelling complex feedback from star formation, supernova explosions, and supermassive black holes.

Illustris employs a sophisticated computer program to recreate the evolution of the universe in high fidelity. It includes both normal matter and dark matter using 12 billion 3D “pixels,” or resolution elements.

Illustris yields a realistic mix of spiral galaxies like the Milky Way and giant elliptical galaxies. It also recreated large-scale structures like galaxy clusters and the bubbles and voids of the cosmic web.

The team dedicated five years to developing the Illustris project. The actual calculations took three months of run time, using a total of 8,000 CPUs running in parallel. In comparison, the same calculations would have taken an average desktop computer more than 2,000 years to complete.
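The two run-time figures quoted above are mutually consistent, assuming the work parallelizes perfectly across CPUs, as this trivial check shows:

```python
# Sanity check on the quoted run-time figures: 8,000 CPUs running for
# 3 months corresponds to how many years on a single-CPU desktop?
# (Assumes perfect parallel scaling, which real codes only approximate.)
cpus = 8000
months = 3

cpu_months = cpus * months          # total serial work: 24,000 CPU-months
desktop_years = cpu_months / 12     # one CPU, 12 months per year
print(desktop_years)                # 2000.0 -- matching "more than 2,000 years"
```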

“Until now, no single simulation was able to reproduce the Universe on both large and small scales simultaneously,” says lead author Dr Mark Vogelsberger of the Massachusetts Institute of Technology and Harvard University, who conducted the work in collaboration with researchers at the University of Cambridge, the Harvard-Smithsonian Center for Astrophysics and the Heidelberg Institute for Theoretical Studies.

“The Illustris simulation is a remarkable technical achievement,” said Dr Debora Sijacki of Cambridge’s Institute of Astronomy, one of the paper’s co-authors. “It shows us for the first time how the bewildering variety of galaxies and the supermassive black holes at their centres have formed.”

Since light travels at a fixed speed, the farther away astronomers look, the farther back in time they can see. A galaxy one billion light-years away is seen as it was a billion years ago. Telescopes like Hubble can give us views of the early Universe by looking to greater distances. However, astronomers can’t use Hubble to follow the evolution of a single galaxy over time.

“Illustris is like a time machine. We can go forward and backward in time. We can pause the simulation and zoom into a single galaxy or galaxy cluster to see what’s really going on,” said co-author Dr Shy Genel of Harvard University.

A selection of videos and imagery from the project is available online at www.illustris-project.org

Contacts and sources:
Dr Debora Sijacki,
Institute of Astronomy,
Cambridge University

Exposing ‘Evil Twins’ With Twisted Light

A combination of nanotechnology and a unique twisting property of light could lead to new methods for ensuring the purity and safety of pharmaceuticals. 

A direct relationship between the way in which light is twisted by nanoscale structures and the nonlinear way in which it interacts with matter could be used to ensure greater purity for pharmaceuticals, allowing for ‘evil twins’ of drugs to be identified with much greater sensitivity.

When twisted light matches the twist of nanostructures, strong interactions with chiral molecules could arise

Credit: Ventsislav Valev

Researchers from the University of Cambridge have used this relationship, in combination with powerful lasers and nanopatterned gold surfaces, to propose a sensing mechanism that could be used to identify the right-handed and left-handed versions of molecules.

Some molecules are symmetrical, so their mirror image is an exact copy. However, most molecules in nature have a mirror image that differs – try putting a left-handed glove on your right hand and you’ll see that your two hands are not superimposable. Molecules whose mirror images display this sort of “handedness” are known as chiral.

The chirality of a molecule affects how it interacts with its surroundings, and different chiral forms of the same molecule can have completely different effects. Perhaps the best-known instance of this is Thalidomide, which was prescribed to pregnant women in the 1950s and 1960s. One chiral form of Thalidomide worked as an effective treatment for morning sickness in early pregnancy, while the other form, like an ‘evil twin’, prevented proper growth of the foetus. The drug that was prescribed to patients however, was a mix of both forms, resulting in more than 10,000 children worldwide being born with serious birth defects, such as shortened or missing limbs.

When developing new pharmaceuticals, identifying the correct chiral form is crucial. Specific molecules bind to specific receptors, so ensuring the correct chiral form is present determines the purity and effectiveness of the end product. However, the difficulty with achieving chiral purity is that usually both forms are synthesised in equal quantities.

Researchers from the University of Cambridge have designed a new type of sensing mechanism, combining a unique twisting property of light with frequency doubling to identify different chiral forms of molecules with extremely high sensitivity, which could be useful in the development of new drugs. The results are published in the journal Advanced Materials.

The sensing mechanism, designed by Dr Ventsislav Valev and Professor Jeremy Baumberg from the Cavendish Laboratory, in collaboration with colleagues from the UK and abroad, uses a nanopatterned gold surface in combination with powerful lasers.

Currently, differing chiral forms of molecules are detected by using beams of polarised light. The way in which the light is twisted by the molecules results in chiroptical effects, which are typically very weak. By using powerful lasers however, second harmonic generation (SHG) chiroptical effects emerge, which are typically three orders of magnitude stronger. SHG is a quantum mechanical process whereby two red photons can be annihilated to create a blue photon, creating blue light from red.
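Frequency doubling means each SHG photon carries exactly twice the energy, and hence half the wavelength, of a pump photon. The sketch below illustrates this with an 800 nm pump, a common Ti:sapphire laser wavelength chosen purely for illustration, not a parameter taken from the study:

```python
# Second harmonic generation combines two pump photons into one photon of
# twice the energy, i.e. half the wavelength. The 800 nm pump value is an
# illustrative assumption, not a figure from the paper.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_nm: float) -> float:
    """Photon energy in joules for a wavelength given in nanometres."""
    return H * C / (wavelength_nm * 1e-9)

pump_nm = 800.0         # red pump light
shg_nm = pump_nm / 2    # 400 nm: the blue second harmonic

# Energy conservation: two pump photons equal one SHG photon.
assert abs(2 * photon_energy(pump_nm) - photon_energy(shg_nm)) < 1e-25

print(f"pump {pump_nm:.0f} nm -> SHG {shg_nm:.0f} nm")
```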

Recently, another major step towards increasing chiroptical effects came from the development of superchiral light – a super twisty form of light.

The researchers identified a direct link between the fundamental equations for superchiral light and SHG, which would make even stronger chiroptical effects possible. Combining superchiral light and SHG could yield record-breaking effects, which would result in very high sensitivity for measuring the chiral purity of drugs.

The researchers also used tiny gold structures, known as plasmonic nanostructures, to focus the beams of light. Just as a glass lens can be used to focus sunlight to a certain spot, these plasmonic nanostructures concentrate incoming light into hotspots on their surface, where the optical fields become huge. Due to the presence of optical field variations, it is in these hotspots that superchiral light and SHG combine their effects.

“By using nanostructures, lasers and this unique twisting property of light, we could selectively destroy the unwanted form of the molecule, while leaving the desired form unaffected,” said Dr Valev. “Together, these technologies could help ensure that new drugs are safe and pure.” 

Contacts and sources:

Artificial Lung The Size Of A Sugar Cube Developed

What medications can be used to treat lung cancer, and how effective are they? Until now, drug companies have had to rely on animal testing to find out. But in the future, a new 3D model lung is set to achieve more precise results and ultimately minimize – or even completely replace – animal testing.

  The 3D-Lung tumor model helps researchers to test medications.

Credit: © Fraunhofer IGB

Lung cancer is a serious condition. Once patients are diagnosed with it, chemotherapy is often their only hope. But nobody can accurately predict whether or not this treatment will help. To start with, not all patients respond to a course of chemotherapy in exactly the same way. And then there’s the fact that the systems drug companies use to test new medications leave a lot to be desired. 

“Animal models may be the best we have at the moment, but all the same, 75 percent of the drugs deemed beneficial when tested on animals fail when used to treat humans,” explains Prof. Dr. Heike Walles, head of the Würzburg-based “Regenerative Technologies for Oncology” project group belonging to the Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB.

These tests are set to achieve better results in the future: “We’ve developed an innovative 3D test system that allows us to simulate what happens in the human body remarkably well. Our plan is for this system to replace animal tests in the future,” says Walles. Essentially, what the researchers have done is to recreate the human lung in miniature – with a volume of half a cubic centimeter, each model is no bigger than a sugar cube. 

In a parallel effort, scientists at the Department of Bioinformatics at the University of Würzburg are developing computer simulation models for different patient groups. These are necessary because patients may have genetic variations that prevent therapies from having the desired effect. Comparing the theoretical and biological models allows each research group to optimize its results.

The biological model is based on human lung cancer cells growing on tissue; the result is an artificial lung. A bioreactor makes it breathe and pumps a nutrient medium through its blood vessels, in the same way our bodies supply our lungs with blood. The reactor also makes it possible to regulate factors such as how fast and how deeply the model lung breathes.

Now that the scientists have managed to construct the lung tissue, Walles is delighted to report that “treatments that generate resistance in clinics do the same in our model.” Researchers are now planning to explore the extent to which their artificial lung can be used to test new therapeutic agents. Should resistance crop up during testing, doctors can opt to treat the patient with a combination therapy from the outset and thus side-step the problem. 

Thinking long-term, there is even the possibility of creating an individual model lung for each patient. This would make it possible to accurately predict which of the various treatment options will work. The required lung cells are collected as part of the biopsy performed to allow doctors to analyze the patient’s tumor.

On the trail of metastases

Testing new medications is by no means the only thing the model lung can be used for. It is also designed to help researchers to better understand the formation of metastases; it is these that often make a cancer fatal. “As metastases can’t be examined in animals – or in 2D models where cells grow only on a flat surface – we’ve only ever had a rough understanding of how they form. Now for the first time, our 3D lung tissue makes it possible to perform metastases analysis,” explains Walles. “In the long term, this may enable us to protect patients from metastases altogether.” 

In order to travel through the body, tumor cells alter their surface markers – in other words, the molecules that bind them to a particular area of the body. Cancer cells are then free to spread throughout the body via the body’s circulatory system before taking up residence somewhere else by expressing their original surface markers. The scientists plan to use their model lung’s artificial circulatory system to research exactly how this transformation occurs. And in doing so, they may someday succeed in developing medication that will stop metastases from forming in the first place.

From June 23-26, researchers will be presenting their new model at the BIO International Convention in San Diego, California (Germany Pavilion, Booth 4513-03).

Contacts and sources:
Prof. Dr. Heike Walles
Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB

Sunday, May 25, 2014

Mind Alteration Device Makes Flies Sing And Dance

In a joint effort with partners at the Vienna University of Technology and a lab in the USA, the team of Andrew Straw at the IMP developed a special device for the thermogenetic control of flies. This tool, called FlyMAD, enabled the scientists to target light or heat to specific body regions of flies in motion and to analyse the animals' brain cells.

Compared to other techniques, FlyMAD allows highly improved temporal resolution. Using the new technology, Straw and his colleagues got new insight into the role of two neuronal cell types in courtship behavior of flies. The results of the study will be published online in Nature Methods on May 25 (doi 10.1038/nmeth.2973).

This composite image shows a laser being aimed at a walking fly using the FlyMAD system. 
Credit: Matt Staley and Dan Bath, JFRC, HHMI 

The fruit fly Drosophila melanogaster represents an ideal experimental system for analysing the circuit functions of brain cells (neurons). In the past, it was not possible to specifically control the activity of neurons in moving flies. Andrew Straw and his team have now overcome this barrier.

Rapid mind alteration in moving flies

Straw and his co-workers are interested in the mechanisms underlying cell circuits in the fly brain. Straw’s group concentrates on the control of complex behaviors such as courtship. In order to better understand how different neuronal circuits work together, Straw and his team developed FlyMAD (“Fly Mind Altering Device”), an apparatus using a video camera to track the flies' motion in a box. FlyMAD allows simultaneous observation of several flies and targeted irradiation of specific body regions of these animals. By combining the sensitive methods of optogenetics and thermogenetics, the researchers were able to specifically alter neural pathways in the fly brain with FlyMAD.

The novel technology of thermogenetics uses genetically modified, temperature-sensitive flies. Upon irradiation with infrared light and the concomitant rise in temperature to 30 degrees Celsius, these animals change certain aspects of their behavior. This does not happen at a control temperature of 24 degrees Celsius. Compared to other commonly used methods, FlyMAD offers highly improved temporal resolution: infrared-induced activation or repression of specific neurons, and the ensuing change in the animals' behavior, occurs within a fraction of a second.

A male Drosophila raises a wing and ‘sings’ due to neuronal activation of song neurons. FlyMAD enables precisely aiming an infrared laser at a fly in which a warmth-sensitive ion channel was genetically introduced into previously identified neurons. 

Image credit: Dan Bath, JFRC, HHMI

The application of visible light to certain genetically engineered flies can also induce alterations in their brain activity. FlyMAD thus represents an absolute novelty for fly research, as optogenetics in freely moving animals has so far been largely restricted to mice.

New insight into courtship behavior of flies

Straw and his co-workers tested FlyMAD by analyzing already known reactions of genetically modified flies to light and heat. As this proof of principle showed that FlyMAD worked reliably – for example, by making the flies “moonwalk” – the researchers went on to use their method to tackle new scientific questions. 

In a thermogenetic setup, they investigated a type of neuron that had been linked to the flies’ courtship song in earlier experiments. Taking advantage of FlyMAD's better temporal resolution, the scientists were able to characterize the role of two neuronal cell types in the brain in more detail. 

They showed that the activity of one type of neuron correlated with a persistent state of courtship, whereas the other cell type was important for the act of "singing". In the experiment this became obvious when, after stimulation with the laser beam, males tried to mate with a ball of wax, circling it and vibrating their wings.

FlyMAD allows combination of optogenetics and thermogenetics

In the future, Straw wants to combine activation by light and by heat in a single experiment, something that is feasible with FlyMAD. This would allow the activation or repression of different genetic elements in one fly.

"FlyMAD offers the fantastic opportunity to address many of our questions. We could, for example, analyze how single neurons function in a cascade within the neuronal circuit," says Straw, emphasizing the potential of his work. Ultimately, new insight into the function of the fly brain can also be applied to the network of cells in the mammalian brain.

Contacts and sources: 
Research Institute of Molecular Pathology

Citation: Daniel E. Bath, John R. Stowers, Dorothea Hörmann, Andreas Poehlmann, Barry J. Dickson and Andrew D. Straw. FlyMAD: Rapid thermogenetic control of neuronal activity in freely-walking Drosophila. Nature Methods, doi 10.1038/nmeth.2973, 2014

Inspecting Letters With Terahertz Waves

Is it a harmless parcel or a bomb, an innocent letter or a drug shipment? A new terahertz scanner is capable of detecting illicit drugs and explosives sent by post without having to open suspicious packages or envelopes.

The prize-winning team presents the terahertz scanner for secure identification: Professor René Beigang and Thorsten Sprenger (from left to right).
Credit: © Dirk Mahler/Fraunhofer

Alert at Schloss Bellevue: a suspicious letter addressed to German President Joachim Gauck had been detected, and it might contain a bomb. Not willing to take any risks, officials called out the bomb squad to destroy the package. Later investigation revealed that the envelope did not contain any explosives, but better safe than sorry. A year ago, this event created turmoil in the mail sorting office in Berlin, because at the time there was no safe and simple way of reliably detecting explosives or drugs in letters and small packets.

A new solution is offered by the terahertz scanner developed by researchers at the Fraunhofer Institute for Physical Measurement Techniques IPM in Kaiserslautern in collaboration with Hübner GmbH & Co. KG in Kassel. Their T-COGNITION system is capable of detecting and identifying the hidden content of suspicious packages or envelopes without having to open them. One of this year’s Joseph von Fraunhofer prizes was awarded to Prof. Dr. René Beigang of Fraunhofer IPM and Dipl.-Ing. Thorsten Sprenger, Head of Public Security and Photonics at Hübner, for their work on the terahertz scanner for the secure identification of hazardous materials and illicit drugs in postal consignments.

But why did the scientists choose to use terahertz waves for this application? Professor René Beigang explains: “The terahertz range lies midway between microwave and infrared in the electromagnetic spectrum, and thus combines the advantages of both.” 

Like microwaves, these low-energy frequencies can easily penetrate paper, wood, lightweight fabrics, plastics, and ceramics. Moreover, terahertz waves generate characteristic spectra depending on the type of material they travel through, which can be analyzed quickly using intelligent software. A further significant advantage is that terahertz waves are non-ionizing and therefore safe to use in an unprotected environment, unlike X-rays. This makes the technology an interesting option for use in mail scanners.

Scaling up terahertz technology for industrial applications

Terahertz technology is still in its infancy, and until now it has found relatively few applications. The department of Material Characterization and Testing at the University of Kaiserslautern, sponsored jointly by Fraunhofer IPM and the Land of Rheinland-Pfalz, hopes to change this situation. “Our goal is to scale up terahertz technology and extend its range of use to include security applications,” says Beigang. 

The engineers at Hübner were among the first to recognize the potential of the Fraunhofer researchers’ work. The company’s traditional line of business is manufacturing key components for the transportation industry (e.g. rail vehicles, buses, airport technology, automotive). A new division for public security was added in 2006, when the company first started to look for cooperation partners. The mail scanner project was launched four years later, based on previous joint development projects. In the meantime, the company has brought its T-COGNITION solution onto the market.

This is how the mail scanner works. Suspicious envelopes and packages are fed into the scanner on a retractable tray. They are then exposed to terahertz waves which are absorbed at different frequencies within the spectral range depending on the substance they travel through (characteristic absorption properties). 

Detectors at the output of the scanner record the transmitted wavelengths. “Within a few seconds, T-COGNITION produces a spectroscopic fingerprint that allows the detected hazardous material to be compared with database samples and definitively identified,” says Thorsten Sprenger.

The system triggers an alarm if the consignment contains explosives or illicit drugs. It can examine the content of postal items up to C4 format with a thickness of up to two centimeters. Sprenger says: "It is the ideal mailroom solution for prisons, customs offices, government agencies, company headquarters, and embassies or consulates, because it helps to improve security and protect human lives."
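The matching step described above, comparing a measured absorption spectrum against a database of reference fingerprints, can be sketched as a nearest-spectrum search. The substances, spectra, and threshold below are entirely invented; this only illustrates the principle, not T-COGNITION's actual algorithm:

```python
# Toy spectral-fingerprint matching: find the reference spectrum closest
# to the measured one, and report it only if the match is good enough.

def match_fingerprint(measured, references, threshold=0.25):
    """Return the best-matching substance name, or None if nothing is close.

    Spectra are lists of absorption values sampled on a shared frequency grid."""
    def distance(a, b):
        # Euclidean distance between two spectra
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_name, best_d = None, float("inf")
    for name, ref in references.items():
        d = distance(measured, ref)
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= threshold else None

# Invented reference "fingerprints" for two substances.
references = {
    "substance A": [0.1, 0.8, 0.2, 0.1],
    "substance B": [0.5, 0.1, 0.7, 0.3],
}
print(match_fingerprint([0.12, 0.78, 0.22, 0.10], references))  # substance A
```

A real system would work with far finer frequency grids, account for the thickness and packaging of the item, and only raise an alarm for matches against hazardous substances.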

T-COGNITION recently received the Prism Award, the photonics world's equivalent of an Oscar, at the Photonics West 2014 international congress in San Francisco.

Contacts and sources:
Fraunhofer Institute 

Whole Beams Of Light, Not Just Particles, Can Be Entangled

Sending entangled beams through fast-light materials

Michael Lewis's bestselling book "Flash Boys" describes how some brokers, engaging in high-frequency trading, exploit fast telecommunications to gain a fraction-of-a-second advantage in the buying and selling of stocks. But you don't need to have billions of dollars riding on split-second securities transactions to appreciate the importance of fast signal processing. From internet searches to video streaming, we want things fast.

This image depicts the experimental setup for studying fast light. Pump beams (purple) create correlated probe (turquoise) and conjugate (gold) beams. Each of these beams is aimed at a beam splitter (yellow disks). A local oscillator (LO) also sends a laser beam into each of the beam splitters. The resulting interference pattern -- registered in a spectrum analyzer, SA -- for the probe and conjugate arms are compared.

Credit: NIST

Paul Lett and his colleagues at the Joint Quantum Institute (1) specialize in producing modulated beams of light for encoding information. They haven't found a way to move data faster than c, the speed of light in a vacuum, but in a new experiment they have looked at how light traveling through so-called "fast-light" materials does seem to advance faster than c, at least in one limited sense. They report their results (online as of 25 May 2014) in the journal Nature Photonics (2).

Seeing how light can be manipulated in this way requires a look at several key concepts, such as entanglement, mutual information, and anomalous dispersion. At the end we'll arrive at a forefront result.


Much research at JQI is devoted to the processing of quantum information, information coded in the form of qubits. Qubits, in turn, are tiny quantum systems (sometimes electrons trapped in a semiconductor, sometimes atoms or ions held in a trap) maintained in a superposition of states. The utility of qubits increases when two or more of them can be yoked into a larger quantum arrangement, a process called entanglement. Two entangled photons are not really sovereign particles but parts of a single quantum entity.

The basis of entanglement is often a discrete variable, such as electron spin (whose value can be up or down) or photon polarization (say, horizontal or vertical). The essence of entanglement is this: while the polarization of each photon is indeterminate until a measurement is made, once you measure the polarization of one of the pair of entangled photons, you automatically know the other photon's polarization too.

But the mode of entanglement can also be vested in a continuous variable. In Lett's lab, for instance, two whole light beams can be entangled. Here the operative variable is not polarization but phase (how far along in the cycle of the wave you are) or intensity (how many photons are in the beam). For a light beam, phase and intensity are not discrete (up or down) but continuous in variability.


Biologists examining the separated strands of DNA can, courtesy of the complementary pairing of its bases, deduce the sequence of bases along one strand by examining the sequence of the other. So it is with entangled beams: a slight fluctuation of the instantaneous intensity of one beam (such fluctuations are inevitable because of the Heisenberg uncertainty principle) will be matched by a comparable fluctuation in the other beam.

Lett and his colleagues make entangled beams in a process called four-wave mixing. A laser beam (the pump beam) enters a vapor-filled cell, where two photons from the pump are converted into two daughter photons proceeding onwards with different energies and directions. These photons constitute beams in their own right, one called the probe beam, the other the conjugate beam. Both are too weak to measure directly. Instead, each beam enters a beam splitter (the yellow disks in the setup image), where its light is combined with light from a local oscillator (which also serves as a phase reference). The ensuing interference patterns provide aggregate phase or intensity information for the two beams.

When the beam entanglement is perfect, the mutual correlation is 1; when studying the intensity fluctuations of one beam tells you nothing about those of the other beam, the mutual correlation is 0.
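The idea behind the delay measurement can be illustrated with a toy calculation (this is not the JQI analysis, just the principle): if one record of intensity fluctuations is a delayed, slightly noisy copy of the other, their correlation peaks at the true delay.

```python
# Toy model: the "conjugate" record is the "probe" record delayed by 5
# samples, plus a little noise. Scanning the correlation over candidate
# lags recovers the delay as the location of the peak.
import random

random.seed(0)
n, delay = 200, 5
probe = [random.gauss(0, 1) for _ in range(n)]
conjugate = [0.0] * delay + probe[:n - delay]          # delayed copy
conjugate = [c + random.gauss(0, 0.1) for c in conjugate]  # add noise

def corr_at_lag(a, b, lag):
    """Normalized correlation of a[t] with b[t + lag]."""
    pairs = [(a[t], b[t + lag]) for t in range(len(a) - lag)]
    mean_a = sum(x for x, _ in pairs) / len(pairs)
    mean_b = sum(y for _, y in pairs) / len(pairs)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in pairs)
    var_a = sum((x - mean_a) ** 2 for x, _ in pairs)
    var_b = sum((y - mean_b) ** 2 for _, y in pairs)
    return cov / (var_a * var_b) ** 0.5

# The lag with the highest correlation is the delay between the records.
best_lag = max(range(20), key=lambda k: corr_at_lag(probe, conjugate, k))
print(best_lag)  # 5: the correlation peaks at the true delay
```

In the experiment, the analogous quantity is the mutual information between the probe and conjugate records, and the question is how the peak shifts when the conjugate beam passes through a slow-light or fast-light medium.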

The mutual information of the two beams (how much we know about one beam if we know the fluctuation of the other beam) peaks at different times depending on whether the conjugate beam passes through a fast-light medium (red), a slow-light medium (green), or no medium at all (black).

Credit: NIST


In a famous experiment, Isaac Newton showed how incoming sunlight splits apart into a spectrum of colors when it passes through a prism. The quantity that governs this splitting is the material's index of refraction; its value depends on the wavelength of the light, and that wavelength dependence is called dispersion.

In most materials the index is larger than 1. For plain window glass it is about 1.5; for water it is 1.33 for visible light, and it gradually increases as the frequency of the light goes up. At much higher frequencies (equivalently, shorter wavelengths), though, the index can change its value abruptly and go down. For glass this occurs at ultraviolet wavelengths, so you don't ordinarily see this "anomalous dispersion" effect. In a warm vapor of rubidium atoms, however (especially when modified with laser light), the effect can occur at infrared wavelengths, and that is where the JQI experiment looks.
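The connection between a falling index and "fast light" can be made concrete with the group index, n_g = n + ν·dn/dν, which sets the speed of a pulse envelope as c/n_g. The toy index profiles below are invented purely to show the sign of the effect:

```python
# Numerical sketch: when the index of refraction falls with frequency
# (anomalous dispersion), the group index n_g = n + nu * dn/dnu can drop
# below 1, meaning pulse peaks advance faster than c.
# The linear index profiles here are made up for illustration.

def group_index(n_of_nu, nu, dnu=1e-6):
    """Finite-difference estimate of n_g = n + nu * dn/dnu."""
    dn_dnu = (n_of_nu(nu + dnu) - n_of_nu(nu - dnu)) / (2 * dnu)
    return n_of_nu(nu) + nu * dn_dnu

def normal(nu):      # index rising with frequency: ordinary (slow) dispersion
    return 1.33 + 0.01 * nu

def anomalous(nu):   # index falling with frequency: anomalous dispersion
    return 1.33 - 0.01 * nu

nu = 50.0  # arbitrary frequency units
print(group_index(normal, nu))     # greater than 1: pulses slower than c
print(group_index(anomalous, nu))  # less than 1: "fast light"
```

As the article goes on to explain, this advance of the pulse peak does not let any information outrun c; the amplification needed to achieve anomalous dispersion adds noise that spoils the early signal.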

In the figure above notice that the conjugate beam is sent through a second cell, filled with rubidium vapor. Here the beam is subject to dispersion. The JQI experiment aims to study how the entanglement of this conjugate beam with the probe beam (subject to no dispersion) holds up.

When the refraction is "normal", that is, when the index of refraction causes ordinary dispersion, the light signal is slowed in comparison with a beam that doesn't undergo dispersion. For this set of conditions the cell is referred to as a "slow-light" material. When, however, the frequency is just right, the conjugate beam will undergo anomalous dispersion: when the different frequency components that constitute a pulse or intensity fluctuation reassemble as they emerge from the cell, they will be just slightly ahead of a pulse that hadn't gone through the cell. (To make a proper measurement of delay one needs two entangled beams, beams whose fluctuations are related.)


No, the JQI researchers are not saying that any information is traveling faster than c. The figure above shows that the peak of the mutual information for the fast-light material is indeed ahead of the comparable peaks for an unscattered beam or for a beam emerging from a slow-light material. It turns out that the cost of achieving anomalous dispersion at all is that additional gain (amplification) is needed, and this amplification imposes noise onto the signal.

This inherent limitation in extracting useful information from an incoming light beam is even more pronounced with beams containing (on average) one photon or fewer. Such dilute beams are desirable in many quantum experiments where measurement control or the storage or delay of quantum information is important.

"We did these experiments not to try to violate causality," said Paul Lett, "but because we wanted to see the fundamental way that quantum noise 'enforces' causality, and working near the limits of quantum noise also lets us examine the somewhat surprising differences between slow- and fast-light materials when it comes to the transport of information."

Saturday, May 24, 2014

‘E-Waste Pollution’ Threat To Human Health

In addition to its damaging effect on the environment and its illegal smuggling into developing countries, researchers have now linked e-waste to adverse effects on human health, such as inflammation and oxidative stress – precursors to cardiovascular disease, DNA damage and possibly cancer.

Credit: IOP

In a study published May 31st in IOP Publishing's journal Environmental Research Letters, researchers took air samples from one of the largest e-waste dismantling areas in China and examined their effects on human lung epithelial cells.

E-waste, or electronic waste, describes end-of-life electrical goods such as computers, televisions, printers, and mobile phones. Each year between 20 and 50 million tons of e-waste are generated worldwide, 100,000 tons of which are exported from UK shores, according to a recent BBC Panorama programme. A large proportion of worldwide e-waste is exported to China.

Due to the crude recycling process, many pollutants, such as persistent organic pollutants and heavy metals, are released from e-waste, which can easily accumulate in the human body through the inhalation of contaminated air.

After exposing the cultured lung cells to the organic-soluble and water-soluble constituents of the samples, the researchers tested for the level of Interleukin-8 (IL-8), a key mediator of inflammatory response, and Reactive Oxygen Species (ROS), chemically reactive molecules that can cause extensive damage in excess.

The samples were also tested for the expression of the p53 gene – a tumour suppressor gene that produces a protein to help counteract cell damage. If there is evidence of this gene being expressed it can be seen as a marker that cell damage is taking place.

The results showed that the samples of pollutants caused significant increases in both IL-8 and ROS levels – indicators of an inflammatory response and oxidative stress respectively. Significant increases were also observed in the levels of the p53 protein with the risk of organic-soluble pollutants being much higher than water-soluble pollutants.

Co-author of the study Dr Fangxing Yang, of Zhejiang University, said, “Both inflammatory response and oxidative stress may lead to DNA damage, which could induce oncogenesis, or even cancer. Of course, inflammatory response and oxidative stress are also associated with other diseases, such as cardiovascular diseases.”

In this study, the researchers took samples of the air from Taizhou, in Zhejiang province, a dismantling complex that involves more than 60,000 people and processes more than two million tons of e-waste each year to recycle metals.

To obtain the samples, the researchers used two sampling sites that were located downwind of a dismantling industrial park in Taizhou, set up by the local government in 2004.

It is well known that inflammatory response and oxidative stress can lead to DNA damage and therefore activate the p53 gene to counteract this damage. The study did not find any significant correlation between IL-8 or ROS levels and p53 expression; however, the researchers suggest that this may be due to various other endpoints, not examined in this study, that can damage DNA.

A further study will attempt to characterise the components present in the polluted air and identify the key contributors to these adverse effects.

Dr Yang continued, "From these results it is clear that the 'open' dismantlement of e-waste must be forbidden and the primitive techniques in use improved. As the results show potential adverse effects on human health, workers at these sites must also be given proper protection.

“Furthermore, one must consider the initial manufacturing process of electrical goods and look to utilise more environmentally and human friendly materials in their production.”

Contacts and sources:
Institute of Physics

Deadly Jellyfish Blooms Now Easier To Predict

The sting of the Irukandji jellyfish, part of what is commonly called the 'box jellyfish' family, can be fatal. At best, being stung causes painful cramping, and people may need hospitalisation, suffering from what is known as Irukandji Syndrome. Unfortunately the jellyfish are tiny and transparent, and when they arrive near the coastline they come in blooms numbering hundreds of thousands. Until now, the first sign that Irukandji are in the water nearby has been the cries of people being stung.

A report just published in Interface, the Journal of the Royal Society, sets out a study into how blooms can be predicted. Lead author Lisa-Ann Gershwin said, 'They travel in very, very large numbers. It's not uncommon to have dozens of stings on a beach in a day. The jellyfish bodies and tentacles are invisible in water - it's like a diamond dropped into a glass of water - you just can't see them.'

Irukandji jellyfish
Credit: http://www.irukandjijellyfish.com/

Although the blooms can occur anywhere, from Wales to Melbourne, the researchers in this study focused on the Great Barrier Reef. They compared a sting database containing records collected between 1985 and 2012 with medium-range weather forecasts and found that when ocean-blowing trade winds dropped, the jellyfish arrived.

Researchers are confident that the modelling will apply across the jellyfish's range. 'You will probably need a bit of tweaking from place to place, but the general principle will hold true', explained Dr Gershwin. By anticipating the arrival of these hazardous blooms, coastal authorities will be able to put in place management interventions to prevent people from being injured.
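The forecasting principle, as described, can be caricatured with a simple warning rule: flag days whose recent average wind speed has dropped below some threshold. The threshold, window, and wind data below are invented; the published model is considerably more sophisticated:

```python
# Toy bloom-warning rule: Irukandji stings cluster after the trade winds
# drop, so flag any day whose mean wind over the previous `window` days
# falls below `threshold` (all values in arbitrary, made-up units).

def bloom_warnings(wind_speeds, threshold=10.0, window=3):
    """Return the indices of days that warrant a bloom warning."""
    flagged = []
    for day in range(window, len(wind_speeds)):
        recent = wind_speeds[day - window:day]
        if sum(recent) / window < threshold:
            flagged.append(day)
    return flagged

# A made-up wind record in which the winds drop mid-series, then recover.
winds = [15, 14, 16, 13, 6, 5, 4, 12, 15, 16]
print(bloom_warnings(winds))  # [6, 7, 8]: warnings follow the lull
```

Coupling a rule like this to medium-range weather forecasts is what would let coastal authorities act before the stings begin, rather than after.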

But are box jellyfish really a problem in Europe? Other studies suggest that the shipping industry has artificially distributed jellyfish into non-native habitats, where they then colonise. Juvenile jellyfish (polyps) attach to ship hulls and travel with them. Ships also take on ballast water in originating harbours and then dump the water (along with jellyfish and other organisms) when they arrive in a new location. Billions of gallons of ballast water are transported annually around the globe.

Jellyfish are aggressive colonisers. One such example is the comb jelly, an American species that has invaded the Black and Caspian seas. The MEMO project (Mnemiopsis leidyi: Ecology, Modelling and Observation) is examining their biology and physiology in an attempt to monitor the impact the jellyfish are having on commercial fish and shellfish stocks. The project started in January 2011 and is funded by the Interreg IVa 2 Seas Programme. In total, EUR 3.5 million is allocated over three years, involving 20 scientists from the UK, France, Belgium and the Netherlands.

Changing temperatures of the seas and oceans and greater cross-transference of species due to shipping are having an impact. Whether it is a case of keeping swimmers safe, or trying to prevent invasive species of jellyfish from spreading into commercially sensitive waters, ongoing research is much needed.

For more information, please visit:

Interface, the Journal of the Royal Society