Unseen Is Free

Saturday, November 22, 2014

Revolutionizing The Interaction Between Plants And Bacteria

In laboratory trials, plants grew 25 to 35 percent more when the microorganism was added.

Legumes such as lentils, beans, peas and chickpeas, which are important for human nutrition, could increase their production thanks to the contributions of a scientific group at the University of Salamanca in Spain, led by Martha Trujillo Toledo, that has revolutionized the study of interactions between plants and microorganisms.

Credit:   University of Salamanca

After isolating and studying a bacterium of the genus Micromonospora in 2003, the group discovered that this microorganism improved grain production. However, because this is a new line of research and only two laboratories in the world are studying the interaction of legumes with the microorganism, the way the bacteria reach the plant is currently unknown.

"What we do know is that it is able to penetrate the tissues of the plant and promote its growth, which increases between 25 and 35 percent. Moreover, the organism belongs to the actinobacteria group, which is one of the largest producers of antibiotics and other substances. Here, we discovered that one of our strains produced antitumor molecules, so it might have an important biotechnological application," highlights the researcher, who is part of the Network of Mexican Talent, Chapter Spain.

 Martha Trujillo Toledo


She adds that a significant percentage of her research has been devoted to describing new species of Micromonospora. "Hence, we set the goal of trying to understand the relationship between the bacteria and the plant, for which we conducted trials in a climatic chamber, where we grew the plant with all the nutrients it needs to develop; when we added the bacteria, growth increased."

She points out that they are still studying this interaction, because it is not yet known what draws the bacteria to the plant. "We also observed that the number of nodules of the legumes, where the nitrogen they need gets fixed, almost doubled."

Another important aspect of the work was to conduct molecular studies to identify bacterial genes important for the interaction with its host. In this regard, sequencing revealed a big surprise: almost 200 genes encode enzymes that destroy plant tissue, which is ironic in a bacterium found inside plants that favors their protection and growth, as demonstrated by previous studies.

Trujillo Toledo notes that with a deeper understanding of the plant-microorganism relationship, the bacteria could be applied in the future as a growth enhancer for legumes, improving production for farmers.

Notably, Trujillo Toledo's research group has sampled about 30 different plant species, not only in Spain but also in other European countries, Nicaragua and Australia. In Mexico she has a joint project with Maria Valdes Ramirez of the National School of Biological Sciences at the IPN, who has also found Micromonospora in other plants that produce nodules and fix nitrogen like legumes. (Agencia ID)


Contacts and sources: 
Investigación y Desarrollo

Possibilities For Personalized Cancer Vaccines Revealed At ESMO Symposium


The possibilities for personalised vaccines in all types of cancer are revealed today in a lecture from Dr Harpreet Singh at the ESMO Symposium on Immuno-Oncology 2014 in Geneva, Switzerland.

“One of the biggest hurdles in cancer immunotherapy is the discovery of appropriate cancer targets that can be recognised by T-cells,” said Singh, who is scientific coordinator of the EU-funded GAPVAC phase I trial which is testing personalised vaccines in glioblastoma, the most common and aggressive brain cancer. “In the GAPVAC trial we will treat glioblastoma patients with vaccines that are ideal for each patient because they contain personalised antigens.”1

For all patients in the GAPVAC study, researchers will identify genes expressed in the tumour, peptides presented on the human leukocyte antigen (HLA) receptor (i.e. peptides which will be seen by T-cells), cancer specific mutations, and the ability of the immune system to mount a response to certain antigens. Based on this information, two vaccines, called actively personalised vaccines (APVACs), will be constructed and administered following conventional surgery.

 Glioblastoma 
Credit: Wikipedia

The first vaccine will be prepared from a warehouse of 72 targets previously identified by the researchers as relevant for treatment in glioblastoma. These peptides have been manufactured and put on the shelf, ready for use in patient vaccines. Patients will be given a cocktail of the peptides they express and to which their immune system can mount a response.

Singh said: “A patient may express 20 of these 72 targets on their tumour, for example. If we find that the patient’s immune system can mount responses to 5 of the 20 targets, we mix the 5 peptides and give them to the patient. We mix the peptides off the shelf but the cocktail is changed for each patient because it is matched to their biomarkers.”
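The selection logic Singh describes can be sketched in a few lines. The code below is purely illustrative and not the GAPVAC protocol: the warehouse contents, peptide names and patient data are all hypothetical, standing in for the two-step matching he outlines (filter the 72 warehouse targets to those the tumour expresses, then to those the patient's immune system responds to).

```python
# Hypothetical sketch of the APVAC-1 cocktail selection described above.
# All names and data are illustrative, not taken from the GAPVAC trial.

WAREHOUSE = {f"PEP{i:02d}" for i in range(1, 73)}  # 72 off-the-shelf targets

def select_cocktail(expressed, immunogenic):
    """Return the peptides to mix for one patient."""
    candidates = WAREHOUSE & set(expressed)       # e.g. 20 of 72 expressed
    return sorted(candidates & set(immunogenic))  # e.g. 5 elicit a response

# Illustrative patient biomarkers
expressed = {f"PEP{i:02d}" for i in range(1, 21)}            # tumour expresses 20
immunogenic = {"PEP03", "PEP07", "PEP11", "PEP15", "PEP19"}  # responds to 5

cocktail = select_cocktail(expressed, immunogenic)
print(cocktail)  # the matched peptides mixed for this patient
```

The point of the sketch is that the ingredients come off the shelf, but the intersection, and hence the cocktail, differs per patient.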

The second vaccine is synthesised de novo based on a mutated peptide expressed in the tumour of the patient. Singh said: “That peptide is not in our warehouse because it just occurs in this one single patient. The patient receives APVAC-1 and APVAC-2 in a highly personalised fashion in a way that I think has never been done for any patient.”

He added: “GAPVAC has two major goals. One is to show that personalised vaccines are feasible, since this is one of the most complicated trials ever done in cancer immunotherapy. The second is to show that we can mount far better biological responses in these patients compared to vaccination with non-personalised antigens.”

Singh’s previous research has shown that vaccination with non-personalised antigens leads to better disease control and longer overall survival in phase I and phase II clinical studies in patients with renal cell cancer.2

Singh said: “For the non-personalised vaccines we used off-the-shelf peptide targets that were shared by many patients with a particular cancer. Using this approach we have successfully vaccinated patients with renal cell cancer, colorectal cancer and glioblastoma.”

He added: “During this research we identified other targets that appeared in very few patients or even, in extreme cases, in a single patient. Often these rarer peptides are of better quality, meaning they are more specifically seen in cancer cells and occur at higher levels. This led us to start developing personalised cancer vaccines which contain the ideal set of targets for one particular patient. We hope they will be even more effective than the off-the-shelf vaccines.”

Singh continued: “A very simple example from something established is trastuzumab in breast cancer. Trastuzumab was originally given to every breast cancer patient and the efficacy was just seen in a subset. Now only about 20% of breast cancer patients receive trastuzumab and the personalised aspect is just based on the low abundance of Her2, the target.”

Singh believes that personalised vaccines hold promise for all types of cancer, and that personalisation could also be applied to adoptive cell therapy.

He concluded: “Personalisation is not limited to vaccines but is a general principle that could be applied to cancer immunotherapy more broadly. We are starting with vaccines but we are also thinking about how to use personalised antigens in adoptive cell therapy.”


Contacts and sources:
European Society for Medical Oncology (ESMO)

References
1 Glioma Actively Personalised Vaccine Consortium (GAPVAC): www.gapvac.eu
2 Walter S, et al. Multipeptide immune response to cancer vaccine IMA901 after single-dose cyclophosphamide associates with longer patient survival. Nat Med. 2012;18(8):1254-1261. doi: 10.1038/nm.2883. Epub 2012 Jul 29.


Citation: Annals of Oncology
Volume 25, 2014, Supplement 6
http://annonc.oxfordjournals.org/content/25/suppl_6.toc

Little Ice Age Was Global: Research Rekindles Debate Of Sun's Role

A team of UK researchers has shed new light on the climate of the Little Ice Age, and rekindled debate over the role of the sun in climate change. The new study, which involved detailed scientific examination of a peat bog in southern South America, indicates that the most extreme climate episodes of the Little Ice Age were felt not just in Europe and North America, which is well known, but apparently globally. The research has implications for current concerns over ‘Global Warming’.

"February" from the calendar of Les Très Riches Heures du duc de Berry, 1412-1416

Climate sceptics and believers in Global Warming have long argued about whether the Little Ice Age (from c. early 15th to 19th Centuries) was global, its causes, and how much influence the sun has had on global climate, both during the Little Ice Age and in recent decades. This new study helps clarify those debates.

The team of researchers, from the Universities of Gloucestershire, Aberdeen and Plymouth, conducted studies on past climate through detailed laboratory examination of peat from a bog near Ushuaia, Tierra del Fuego. They used exactly the same laboratory methods as have been developed for peat bogs in Europe. 

Two principal techniques were used to reconstruct past climates over the past 3000 years: at close intervals throughout a vertical column of peat, the researchers investigated the degree of peat decomposition, which is directly related to climate, and also examined the peat matrix to reveal the changing amounts of different plants that previously grew on the bog.

The data show that the most extreme cold phases of the Little Ice Age—in the mid-15th and then again in the early 18th centuries—were synchronous in Europe and South America. There is one stark difference: while in continental north-west Europe, bogs became wetter, in Tierra del Fuego, the bog became drier—in both cases probably a result of a dramatic equator-ward shift of moisture-bearing winds.

These extreme times coincide with periods when it is known that the sun was unusually quiet. In the late 17th to mid-18th centuries it had very few sunspots—fewer even than during the run of recent cold winters in Europe, which other UK scientists have linked to a relatively quiet sun.

Professor Frank Chambers, Head of the University of Gloucestershire’s Centre for Environmental Change and Quaternary Research, who led the writing of the Fast-Track Research Report, said:

“Both skeptics and adherents of Global Warming might draw succor from this work. Our study is significant because, while there are various different estimates for the start and end of the Little Ice Age in different regions of the world, our data show that the most extreme phases occurred at the same time in both the Northern and Southern Hemispheres. These extreme episodes were abrupt global events. They were probably related to sudden, equator-ward shifts of the Westerlies in the Southern Hemisphere, and the Atlantic depression tracks in the Northern Hemisphere. The same shifts seem to have happened abruptly before, such as c. 2800 years ago, when the same synchronous but opposite response is shown in bogs in Northwest Europe compared with southern South America.

“It seems that the sun’s quiescence was responsible for the most extreme phases of the Little Ice Age, implying that solar variability sometimes plays a significant role in climate change. A change in solar activity may also, for example, have contributed to the post Little Ice Age rise in global temperatures in the first half of the 20th Century. However, solar variability alone cannot explain the post-1970 global temperature trends, especially the global temperature rise in the last three decades of the 20th Century, which has been attributed by the Inter-Governmental Panel on Climate Change (IPCC) to increased concentrations of greenhouse gases in the atmosphere.”

Professor Chambers concluded: “I must stress that our research findings are only interpretable for the period from 3000 years ago to the end of the Little Ice Age. That is the period upon which our research is focused. However, in light of our substantiation of the effects of ‘grand solar minima’ upon past global climates, it could be speculated that the current pausing of ‘Global Warming’, which is frequently referenced by those skeptical of climate projections by the IPCC, might relate at least in part to a countervailing effect of reduced solar activity, as shown in the recent sunspot cycle.”



Contacts and sources: 
University of Gloucestershire

Friday, November 21, 2014

Sun’s Rotating ‘Magnet’ Pulls Lightning Towards UK


The Sun may be playing a part in the generation of lightning strikes on Earth by temporarily ‘bending’ the Earth’s magnetic field and allowing a shower of energetic particles to enter the upper atmosphere.


This is according to researchers at the University of Reading who have found that over a five-year period the UK experienced around 50% more lightning strikes when the Earth’s magnetic field was skewed by the Sun’s own magnetic field.

The Earth’s magnetic field usually functions as an in-built force-field to shield against a bombardment of particles from space, known as galactic cosmic rays, which have previously been found to prompt a chain-reaction of events in thunderclouds that trigger lightning bolts.

It is hoped these new insights, which have been published today, 19 November, in IOP Publishing’s journal Environmental Research Letters, could lead to a reliable lightning forecast system that could provide warnings of hazardous events many weeks in advance.

To do so, weather forecasters would need to combine conventional forecasts with accurate predictions of the Sun’s spiral-shaped magnetic field known as the heliospheric magnetic field (HMF), which is spewed out as the Sun rotates and is dragged through the solar system by the solar wind.

Lead author of the research Dr Matt Owens said: “We’ve discovered that the Sun’s powerful magnetic field is having a big influence on UK lightning rates.

“The Sun’s magnetic field is like a bar magnet, so as the Sun rotates its magnetic field alternately points toward and away from the Earth, pulling the Earth’s own magnetic field one way and then another.”

In their study, the researchers used satellite and Met Office data to show that between 2001 and 2006, the UK experienced a 50% increase in thunderstorms when the HMF pointed towards the Sun and away from Earth.

This change of direction can skew or ‘bend’ the Earth’s own magnetic field and the researchers believe that this could expose some regions of the upper atmosphere to more galactic cosmic rays—tiny particles from across the Universe accelerated to close to the speed of light by exploding stars.

“From our results, we propose that galactic cosmic rays are channelled to different locations around the globe, which can trigger lightning in already charged-up thunderclouds. The changes to our magnetic field could also make thunderstorms more likely by acting like an extra battery in the atmospheric electric circuit, helping to further ‘charge up’ clouds,” Dr Owens continued.

The results build on a previous study by University of Reading researchers, also published in Environmental Research Letters, which found an unexpected link between energetic particles from the Sun and lightning rates on Earth.

Professor Giles Harrison, head of Reading’s Department of Meteorology and co-author of both studies, said: “This latest finding is an important step forward in our knowledge of how the weather on Earth is influenced by what goes on in space. The University of Reading’s continuing success in this area shows that new insights follow from atmospheric and space scientists working together.”

Dr Owens continued: “Scientists have been reliably predicting the solar magnetic field polarity since the 1970s by watching the surface of the Sun. We just never knew it had any implications on the weather on Earth. We now plan to combine regular weather forecasts, which predict when and where thunderclouds will form, with solar magnetic field predictions. This means a reliable lightning forecast could now be a genuine possibility.”


Contacts and sources:
Institute of Physics 

From Wednesday 19 November, this paper can be downloaded from http://iopscience.iop.org/1748-9326/9/11/115009/article

Thursday, November 20, 2014

The Riddle Of The Missing Stars


Thanks to the NASA/ESA Hubble Space Telescope, some of the most mysterious cosmic residents have just become even more puzzling.

This NASA/ESA Hubble Space Telescope image shows four globular clusters in the dwarf galaxy Fornax.

Credit: NASA, ESA, S. Larsen (Radboud University, the Netherlands)

New observations of globular clusters in a small galaxy show they are very similar to those found in the Milky Way, and so must have formed in a similar way. One of the leading theories on how these clusters form predicts that globular clusters should only be found nestled in among large quantities of old stars. But these old stars, though rife in the Milky Way, are not present in this small galaxy, and so, the mystery deepens.

Globular clusters -- large balls of stars that orbit the centres of galaxies, but can lie very far from them -- remain one of the biggest cosmic mysteries. They were once thought to consist of a single population of stars that all formed together. However, research has since shown that many of the Milky Way's globular clusters had far more complex formation histories and are made up of at least two distinct populations of stars.

Of these populations, around half the stars are a single generation of normal stars that were thought to form first, and the other half form a second generation of stars, which are polluted with different chemical elements. In particular, the polluted stars contain up to 50-100 times more nitrogen than the first generation of stars.

The proportion of polluted stars found in the Milky Way's globular clusters is much higher than astronomers expected, suggesting that a large chunk of the first generation star population is missing. A leading explanation for this is that the clusters once contained many more stars but a large fraction of the first generation stars were ejected from the cluster at some time in its past.

This explanation makes sense for globular clusters in the Milky Way, where the ejected stars could easily hide among the many similar, old stars in the vast halo, but the new observations, which look at this type of cluster in a much smaller galaxy, call this theory into question.

Astronomers used Hubble's Wide Field Camera 3 (WFC3) to observe four globular clusters in a small nearby galaxy known as the Fornax Dwarf Spheroidal galaxy [1].

"We knew that the Milky Way's clusters were more complex than was originally thought, and there are theories to explain why. But to really test our theories about how these clusters form we needed to know what happened in other environments," says Søren Larsen of Radboud University in Nijmegen, the Netherlands, lead author of the new paper. "Before now we didn't know whether globular clusters in smaller galaxies had multiple generations or not, but our observations show clearly that they do!"

The astronomers' detailed observations of the four Fornax clusters show that they also contain a second polluted population of stars [2] and indicate that not only did they form in a similar way to one another, their formation process is also similar to clusters in the Milky Way. Specifically, the astronomers used the Hubble observations to measure the amount of nitrogen in the cluster stars, and found that about half of the stars in each cluster are polluted at the same level that is seen in Milky Way's globular clusters.

This high proportion of polluted second generation stars means that the Fornax globular clusters' formation should be covered by the same theory as those in the Milky Way.

Based on the number of polluted stars in these clusters they would have to have been up to ten times more massive in the past, before kicking out huge numbers of their first generation stars and reducing to their current size. But, unlike the Milky Way, the galaxy that hosts these clusters doesn't have enough old stars to account for the huge number that were supposedly banished from the clusters.

"If these kicked-out stars were there, we would see them -- but we don't!" explains Frank Grundahl of Aarhus University in Denmark, co-author on the paper. "Our leading formation theory just can't be right. There's nowhere that Fornax could have hidden these ejected stars, so it appears that the clusters couldn't have been so much larger in the past."

This finding means that a leading theory on how these mixed generation globular clusters formed cannot be correct and astronomers will have to think once more about how these mysterious objects, in the Milky Way and further afield, came to exist.

The new work is detailed in a paper published today, 20 November 2014, in The Astrophysical Journal.


Contacts and sources:
Georgia Bladon
ESA/Hubble Information Centre

Deep-Earth Carbon Offers Clues About Origin of Life on Earth

New findings by a Johns Hopkins University-led team reveal long unknown details about carbon deep beneath the Earth’s surface and suggest ways this subterranean carbon might have influenced the history of life on the planet.

The team also developed a new, related theory about how diamonds form in the Earth’s mantle.

For decades scientists had little understanding of how carbon behaved deep below the Earth’s surface, even as they learned more and more about the element’s vital role at the planet’s crust. Using a model created by Johns Hopkins geochemist Dimitri Sverjensky, a team comprising Sverjensky, Vincenzo Stagno of the Carnegie Institution of Washington and Fang Huang, a Johns Hopkins graduate student, became the first to calculate how much carbon, and what types of carbon, exist in fluids 100 miles below the Earth’s surface at temperatures up to 2,100 degrees F.

Dimitri Sverjensky
Credit: Johns Hopkins University

In an article published this week in the journal Nature Geoscience, Sverjensky and his team demonstrate that in addition to the carbon dioxide and methane already documented deep in subduction zones, there exists a rich variety of organic carbon species that could spark the formation of diamonds and perhaps even become food for microbial life.

“It is a very exciting possibility that these deep fluids might transport building blocks for life into the shallow Earth,” said Sverjensky, a professor in the Department of Earth and Planetary Sciences. “This may be a key to the origin of life itself.”

Sverjensky’s theoretical model, called the Deep Earth Water model, allowed the team to determine the chemical makeup of fluids in the Earth’s mantle, expelled from descending tectonic plates. Some of the fluids, those in equilibrium with mantle peridotite minerals, contained the expected carbon dioxide and methane. But others, those in equilibrium with diamonds and eclogitic minerals, contained dissolved organic carbon species including a vinegar-like acetic acid.

These high concentrations of dissolved carbon species, previously unknown at great depth in the Earth, suggest they are helping to ferry large amounts of carbon from the subduction zone into the overlying mantle wedge where they are likely to alter the mantle and affect the cycling of elements back into the Earth’s atmosphere.

The team also suggested that these mantle fluids with dissolved organic carbon species could be creating diamonds in a previously unknown way. Scientists have long believed diamond formation resulted through chemical reactions starting with either carbon dioxide or methane. The organic species offer a range of different starting materials, and an entirely new take on the creation of the gemstones.

The research is part of a 10-year global project to further understanding of carbon on Earth called the Deep Carbon Observatory. The work is funded by the Alfred P. Sloan Foundation.



Contacts and sources:
Jill Rosen
Johns Hopkins University

Imagination, Reality Flow In Opposite Directions In The Brain

As real as that daydream may seem, its path through your brain runs opposite reality.

Aiming to discern discrete neural circuits, researchers at the University of Wisconsin-Madison have tracked electrical activity in the brains of people who alternately imagined scenes or watched videos.

Credit: galleryhip.com

"A really important problem in brain research is understanding how different parts of the brain are functionally connected. What areas are interacting? What is the direction of communication?" says Barry Van Veen, a UW-Madison professor of electrical and computer engineering. "We know that the brain does not function as a set of independent areas, but as a network of specialized areas that collaborate."

Van Veen, along with Giulio Tononi, a UW-Madison psychiatry professor and neuroscientist, Daniela Dentico, a scientist at UW-Madison's Waisman Center, and collaborators from the University of Liege in Belgium, published results recently in the journal NeuroImage. Their work could lead to the development of new tools to help Tononi untangle what happens in the brain during sleep and dreaming, while Van Veen hopes to apply the study's new methods to understand how the brain uses networks to encode short-term memory.

Electrical and computer engineering professor Barry Van Veen wears an electrode net used to monitor brain activity via EEG signals. His research with psychiatry professor and neuroscientist Giulio Tononi could help untangle what happens in the brain during sleep and dreaming.
Credit: Nick Berard

During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe -- from a higher-order region that combines inputs from several of the senses out to a lower-order region.

In contrast, visual information taken in by the eyes tends to flow from the occipital lobe -- which makes up much of the brain's visual cortex -- "up" to the parietal lobe.

"There seems to be a lot in our brains and animal brains that is directional, that neural signals move in a particular direction, then stop, and start somewhere else," says Van Veen. "I think this is really a new theme that had not been explored."

The researchers approached the study as an opportunity to test the power of electroencephalography (EEG) -- which uses sensors on the scalp to measure underlying electrical activity -- to discriminate between different parts of the brain's network.

Brains are rarely quiet, though, and EEG tends to record plenty of activity not necessarily related to a particular process researchers want to study.

To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle -- focusing on the details of shapes, colors and textures -- before watching a short video of silent nature scenes.

Using an algorithm Van Veen developed to parse the detailed EEG data, the researchers were able to compile strong evidence of the directional flow of information.

"We were very interested in seeing if our signal-processing methods were sensitive enough to discriminate between these conditions," says Van Veen, whose work is supported by the National Institute of Biomedical Imaging and Bioengineering. "These types of demonstrations are important for gaining confidence in new tools."


Contacts and sources:
Barry Van Veen
University of Wisconsin-Madison

New Zealand's Moa Were Exterminated By An Extremely Low-Density Human Population

A new study suggests that the flightless birds known as moa were completely extinct by the time New Zealand's human population had grown to, at most, two and a half thousand people.

The new findings, which appear in the prestigious journal Nature Communications, incorporate results of research by international teams involved in two major projects led by Professor Richard Holdaway (Palaecol Research Ltd and University of Canterbury) and Mr Chris Jacomb (University of Otago), respectively.

This is a restoration of an upland moa, Megalapteryx didinus.
Credit: George Edward Lodge


The researchers calculate that the Polynesians whose activities caused moa extinction in little more than a century had amongst the lowest human population densities on record. They found that during the peak period of moa hunting, there were fewer than 1500 Polynesian settlers in New Zealand, or about 1 person per 100 square kilometres, one of the lowest population densities recorded for any pre-industrial society.

They found that the human population could have reached about 2500 by the time moa went extinct. For several decades before then moa would have been rare luxuries.

Estimates of the human population during the moa hunting period are more sensitive to how long it took to exterminate the birds through hunting and habitat destruction than to the size of the founding population.

To better define the critical period of moa hunting, the research was aimed at "book-ending" the moa hunter period with new estimates for when people started eating moa, and when there were no more moa to eat.

Starting with the latest estimate for a founding population of about 400 people (including 170-230 women), and applying population growth rates in the range achieved by past and present populations, the researchers modelled the human population size through the moa hunter period and beyond. When moa and seals were still available, the better diet enjoyed by the settlers likely fuelled higher population growth, and the analyses took this into account.
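The population modelling described above can be illustrated with a minimal sketch. This is not the authors' model, which accounted for diet-driven changes in growth rate; it simply shows how compound annual growth connects the figures quoted: a founding population of about 400 reaching roughly 2500 over the century or so between first settlement and moa extinction. The 1.7% growth rate is an assumed value chosen for illustration.

```python
# Illustrative sketch (not the study's actual model) of exponential
# population growth from a founding group of ~400 settlers.

def population(founders, annual_rate, years):
    """Simple compound growth: N(t) = N0 * (1 + r)^t."""
    return founders * (1 + annual_rate) ** years

# With ~400 founders and an assumed ~1.7% annual growth rate, the
# population passes roughly 2500 after about 110 years, spanning the
# "book-ended" moa-hunter period of c. 1314-1425 CE.
n = population(400, 0.017, 110)
print(round(n))
```

Because growth compounds, the modelled population size is far more sensitive to the assumed duration of the moa-hunting period than to the exact founding number, which is the sensitivity the researchers note.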

The first "book-end" - first evidence for moa hunting - was set by statistical analyses of 93 new high-precision radiocarbon dates on genetically identified moa eggshell pieces. These had been excavated from first settlement era archaeological sites in the eastern South Island, and showed that moa were still breeding nearby.

Chris Jacomb explains: "The analyses showed that the sites were all first occupied - and the people began eating moa - after the major Kaharoa eruption of Mt Tarawera of about 1314 CE."

Ash from this eruption is an important time marker because no uncontested archaeological evidence for settlement has ever been found beneath it, Mr Jacomb says.

The other "book-end" was derived from statistical analyses of 270 high-precision radiocarbon dates on moa from non-archaeological sites. Analysis of 210 of the ages showed that moa were exterminated first in the more accessible eastern lowlands of the South Island, at the end of the 14th century, just 70-80 years after the first evidence for moa consumption.

Analysis of all 270 dates, on all South Island moa species from throughout the South Island, showed that moa survived for only about another 20 years after that.

Their total extinction most probably occurred within a decade either side of 1425 CE, barely a century after the earliest well-dated site, at Wairau Bar near Blenheim, was settled by people from tropical East Polynesia. The last known birds lived in the mountains of north-west Nelson. Professor Holdaway adds that "the results provide further support for the rapid extinction model for moa that Chris Jacomb and I published 14 years ago in [the US journal] Science."

The researchers note that it is often suggested that people could not have caused the extinction of megafauna such as the mammoths and giant sloths of North America and the giant marsupials of Australia, because the human populations when the extinctions happened were too small.

Prof Holdaway and Mr Jacomb say that the extinction of the New Zealand terrestrial megafauna of moa, giant eagle, and giant geese, accomplished by the direct and indirect activities of a very low-density human population, shows that population size can no longer be used as an argument against human involvement in extinctions elsewhere.


Contacts and sources: 
Richard N. Holdaway
University of Otago

Dinosaur Air Conditioning

Sweating, panting, moving to the shade, or taking a dip are all time-honored methods used by animals to cool down. The implicit goal of these adaptations is always to keep the brain from overheating. Now a new study shows that armor-plated dinosaurs (ankylosaurs) had the capacity to modify the temperature of the air they breathed in an exceptional way: by using their long, winding nasal passages as heat transfer devices.


Led by paleontologist Jason Bourke, a team of scientists at Ohio University used CT scans to document the anatomy of nasal passages in two different ankylosaur species. The team then modeled airflow through 3D reconstructions of these tubes. Bourke found that the convoluted passageways would have given the inhaled air more time and more surface area to warm up to body temperature by drawing heat away from nearby blood vessels. As a result, the blood would be cooled, and shunted to the brain to keep its temperature stable.
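The heat-exchange effect of a longer airway can be sketched with a one-parameter duct model based on Newton's law of cooling. The lengths and the lumped transfer coefficient `k` below are invented for illustration; they are not values from the team's CT-based airflow simulations.

```python
import math

def exit_air_temp(t_in, t_body, length, k=0.05):
    """Air warming along a duct toward body temperature.

    dT/dx = -k * (T - T_body) gives an exponential approach to
    body temperature; k lumps together surface area, airflow
    speed, and wall heat conductance (all illustrative here).
    """
    return t_body + (t_in - t_body) * math.exp(-k * length)

# A longer, winding passage leaves inhaled air closer to body temperature,
# which in turn draws more heat out of the adjacent blood vessels.
short = exit_air_temp(10.0, 38.0, length=20)  # arbitrary units
long_ = exit_air_temp(10.0, 38.0, length=80)
print(f"short duct exit: {short:.1f} C, long duct exit: {long_:.1f} C")
```

The exponential form captures the study's qualitative point: stretching the passage gives the air more time and surface area to equilibrate, so a "crazy-straw" airway can substitute for the conchae that mammals and birds use.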

Modern mammals and birds use scroll-shaped bones called conchae or turbinates to warm inhaled air. But ankylosaurs seem to have accomplished the same result with a completely different anatomical construction.

"There are two ways that animal noses transfer heat while breathing," says Bourke. "One is to pack a bunch of conchae into the air field, like most mammals and birds do--it's spatially efficient. The other option is to do what lizards and crocodiles do and simply make the nasal airway much longer. Ankylosaurs took the second approach to the extreme."

Lawrence Witmer, who was also involved with the study, said, "Our team discovered these 'crazy-straw' airways several years ago, but only recently have we been able to scientifically test hypotheses on how they functioned. By simulating airflow through these noses, we found that these stretched airways were effective heat exchangers. They would have allowed these multi-tonne beasts to keep their multi-ounce brains from overheating."

Like our own noses, ankylosaur noses likely served more than one function. Even as they conditioned the air the animal breathed, the convoluted passageways may have added resonance to the low-pitched sounds it uttered, allowing it to be heard over greater distances.


Contacts and sources:
Anthony Friscia
Society of Vertebrate Paleontology

Climate Change Was Not To Blame For The Collapse Of The Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change - commonly assumed to be responsible - could not have been the culprit.

Bronze Age site
Credit: University of Bradford

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen's University Belfast have shown that the changes in climate that scientists believed to coincide with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity starts to decline after 900 BC, and falls rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions didn't occur until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint the precise dates that Europe's Bronze Age population collapse occurred.

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

"Our evidence shows definitively that the population decline in this period cannot have been caused by climate change," says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, "We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750 BC using statistical methods."

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. "Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought," she says.

"Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming," adds Professor Armit. "This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries."

The findings have significance for modern-day climate change debates, which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

"The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally," says Professor Armit.

"Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past."

Contacts and sources: 

Study Shows Marijuana’s Long-Term Effects On The Brain

The effects of chronic marijuana use on the brain may depend on age of first use and duration of use, according to researchers at the Center for BrainHealth at The University of Texas at Dallas.

Credit: Vanderbilt University

In a paper published today in Proceedings of the National Academy of Sciences (PNAS), researchers for the first time comprehensively describe existing abnormalities in brain function and structure of long-term marijuana users with multiple magnetic resonance imaging (MRI) techniques. Findings show chronic marijuana users have smaller brain volume in the orbitofrontal cortex (OFC), a part of the brain commonly associated with addiction, but also increased brain connectivity.

“We have seen a steady increase in the incidence of marijuana use since 2007,” said Dr. Francesca Filbey, Associate Professor in the School of Behavioral and Brain Sciences at the University of Texas at Dallas and Director of the Cognitive Neuroscience Research in Addictive Disorders at the Center for BrainHealth. “However, research on its long-term effects remains scarce despite the changes in legislation surrounding marijuana and the continuing conversation surrounding this relevant public health topic.”

The research team studied 48 adult marijuana users and 62 gender- and age-matched non-users, accounting for potential biases such as gender, age and ethnicity. The authors also controlled for tobacco and alcohol use. On average, the marijuana users who participated in the study consumed the drug three times per day. Cognitive tests show that chronic marijuana users had lower IQs than age- and gender-matched controls, but the differences do not appear to be related to the brain abnormalities, as no direct correlation could be drawn between IQ deficits and decreased OFC volume.

“What’s unique about this work is that it combines three different MRI techniques to evaluate different brain characteristics,” said Dr. Sina Aslan, founder and president of Advance MRI, LLC and adjunct assistant professor at The University of Texas at Dallas. “The results suggest increases in connectivity, both structural and functional, that may be compensating for gray matter losses. Eventually, however, the structural connectivity or ‘wiring’ of the brain starts degrading with prolonged marijuana use.”

Tests reveal that earlier onset of regular marijuana use induces greater structural and functional connectivity. Greatest increases in connectivity appear as an individual begins using marijuana. Findings show severity of use is directly correlated to greater connectivity.

Although increased structural wiring declines after six to eight years of continued chronic use, marijuana users continue to display more intense connectivity than healthy non-users, which may explain why chronic, long-term users “seem to be doing just fine” despite smaller OFC brain volumes, Filbey explained.

“To date, existing studies on the long-term effects of marijuana on brain structures have been largely inconclusive due to limitations in methodologies,” said Dr. Filbey. “While our study does not conclusively address whether any or all of the brain changes are a direct consequence of marijuana use, these effects do suggest that these changes are related to age of onset and duration of use.”

The study offers a preliminary indication that gray matter in the OFC may be more vulnerable than white matter to the effects of delta-9-tetrahydrocannabinol (THC), the main psychoactive ingredient in the cannabis plant. According to the authors, the study provides evidence that chronic marijuana use initiates a complex process that allows neurons to adapt and compensate for smaller gray matter volume, but further studies are needed to determine whether these changes revert back to normal with discontinued marijuana use, whether similar effects are present in occasional marijuana users versus chronic users and whether these effects are indeed a direct result of marijuana use or a predisposing factor.

The research was funded by the National Institute on Drug Abuse to Dr. Filbey (R01 DA030344, K01 DA021632).
 

Contacts and sources:
Jessica Baine, B.S.,  Study Coordinator
The Center for BrainHealth

Mega-Landslide Covers 1,300 Square Miles

A catastrophic landslide, one of the largest known on the surface of the Earth, took place within minutes in southwestern Utah more than 21 million years ago, reports a Kent State University geologist in a paper published in the November issue of the journal Geology.



David Hacker, Ph.D., Kent State University associate professor of geology, points to pseudotachylyte layers and veins within the Markagunt gravity slide.

The Markagunt gravity slide, the size of three Ohio counties, is one of the two largest known continental landslides (larger slides exist on the ocean floors). David Hacker, Ph.D., associate professor of geology at Kent State University at Trumbull, and two colleagues discovered and mapped the scope of the Markagunt slide over the past two summers.



His colleagues and co-authors are Robert F. Biek of the Utah Geological Survey and Peter D. Rowley of Geologic Mapping Inc. of New Harmony, Utah.



Geologists had known about smaller portions of the Markagunt slide before the recent mapping showed its enormous extent. Hiking through the wilderness areas of the Dixie National Forest and Bureau of Land Management land, Hacker identified features showing that the Markagunt landslide was much bigger than previously known.



The landslide took place in an area between what is now Bryce Canyon National Park and the town of Beaver, Utah. It covered about 1,300 square miles, an area as big as Ohio’s Cuyahoga, Portage and Summit counties combined.



Its rival in size, the “Heart Mountain slide,” which took place around 50 million years ago in northwest Wyoming, was discovered in the 1940s and is a classic feature in geology textbooks.



The Markagunt could prove to be much larger than the Heart Mountain slide, once it is mapped in greater detail.



“Large-scale catastrophic collapses of volcanic fields such as these are rare but represent the largest known landslides on the surface of the Earth,” the authors wrote.

The length of the landslide – over 55 miles – also shows that it was as fast moving as it was massive, Hacker said.

Evidence showing that the slide was catastrophic – occurring within minutes – included the presence of pseudotachylytes, rocks that were melted into glass by the immense friction. Any animals living in its path would have been quickly overrun.

Evidence of the slide is not readily apparent to visitors today.

“Looking at it, you wouldn’t even recognize it as a landslide,” Hacker said.

But internal features of the slide, exposed in outcrops, yielded evidence such as jigsaw puzzle rock fractures and shear zones, along with the pseudotachylytes. 



Hacker, who studies catastrophic geological events, said the slide originated when a volcanic field made up of many stratovolcanoes - the same type as Mount St. Helens in the Cascade Mountains, which erupted in 1980 - collapsed and produced the massive landslide.



The collapse may have been caused by the vertical inflation of deeper magma chambers that fed the volcanoes. Hacker has spent many summers in Utah mapping geologic features of the Pine Valley Mountains south of the Markagunt where he has found evidence of similar, but smaller slides from magma intrusions called laccoliths.



What is learned about the mega-landslide could help geologists better understand these extreme types of events. The Markagunt and the Heart Mountain slides document for the first time how large portions of ancient volcanic fields have collapsed, Hacker said, representing “a new class of hazards in volcanic fields.”



While the Markagunt landslide was a rare event, it shows the magnitude of what could happen in modern volcanic fields like the Cascades.



“We study events from the geologic past to better understand what could happen in the future,” he said.

The next steps in the research, conducted with his co-authors on the Geology paper, will be to continue mapping the slide, collect samples from the base for structural analysis and date the pseudotachylytes.



Hacker, who earned his Ph.D. in geology at Kent State, joined the faculty in 2000 after working for an environmental consulting company. He is co-author of the book Earth’s Natural Hazards: Understanding Natural Disasters and Catastrophes, published in 2010.




Contacts and sources:
Emily Vincent
Kent State University

View the abstract of the Geology paper, available online now.

Too Many People, Not Enough Water: Now And 2,700 Years Ago

The Assyrian Empire once dominated the ancient Near East. At the start of the 7th century BC, it was a mighty military machine and the largest empire the Old World had yet seen. But then, before the century was out, it had collapsed. Why? An international study now offers two new factors as possible contributors to the empire's sudden demise - overpopulation and drought.

Assyrian Attack on a Town
Credit: Wikipedia

Adam Schneider of the University of California, San Diego and Selim Adalı of Koç University in Istanbul, Turkey, have just published evidence for their novel claim.

Map of traditional Assyrian heartland and cities mentioned in ancient text.

Credit: Adam Schneider

"As far as we know, ours is the first study to put forward the hypothesis that climate change - specifically drought - helped to destroy the Assyrian Empire," said Schneider, doctoral candidate in anthropology at UC San Diego and first author on the paper in the Springer journal Climatic Change.

The researchers' work connects recently published climate data to text found on a clay tablet. The text is a letter to the king, written by a court astrologer, reporting (almost incidentally) that "no harvest was reaped" in 657 BC.

Paleoclimatic records back up the courtier's statement. Further, analysis of the region's weather patterns, in what is now Northern Iraq and Syria, suggests that the drought was not a one-off event but part of a series of arid years.

Add to that the strain of overpopulation, especially in places like the Assyrian capital of Nineveh (near present-day Mosul) - which had grown unsustainably large during the reign of King Sennacherib - and Assyria was fatally weakened, the researchers argue.


 This image shows UC San Diego anthropologist Adam Schneider in Damascus, 2010.
Credit: Adam Schneider

Within five years of the no-harvest report, Assyria was racked by a series of civil wars. Then joint Babylonian and Median forces attacked and destroyed Nineveh in 612 BC. The empire never recovered.

"We're not saying that the Assyrians suddenly starved to death or were forced to wander off into the desert en masse, abandoning their cities," Schneider said. "Rather, we're saying that drought and overpopulation affected the economy and destabilized the political system to a point where the empire couldn't withstand unrest and the onslaught of other peoples."

Schneider and Adalı draw parallels in their paper between the collapse of the ancient superpower and what is happening in the same area now. They point out, for instance, that the 7th-century story they outline bears a striking resemblance to the severe drought and subsequent political conflict in today's Syria and northern Iraq.

Schneider also sees an eerie similarity between Nineveh and Southern California. Though people weren't forcibly relocated to Los Angeles or San Diego to help an emperor grow himself a "great city," still, the populations of these contemporary metropolitan areas are probably also too large for their environments.

 
On a more global scale, Schneider and Adalı conclude, modern societies should pay attention to what can happen when immediate gains are prioritized over considerations of the long term.

"The Assyrians can be 'excused' to some extent," they write, "for focusing on short-term economic or political goals which increased their risk of being negatively impacted by climate change, given their technological capacity and their level of scientific understanding about how the natural world works. We, however, have no such excuses, and we also possess the additional benefit of hindsight, which allows us to piece together from the past what can go wrong if we choose not to enact policies that promote longer-term sustainability."


Contacts and sources:
Inga Kiderra
University of California - San Diego

Were Neanderthals A Sub-Species Of Modern Humans?

New Research Led By SUNY Downstate’s Dr. Samuel Márquez Says No

Disappearance of Neanderthals Likely the Result of Competition from Homo sapiens, and Not from Poor Adaptation to Cold

In an extensive, multi-institution study led by SUNY Downstate Medical Center, researchers have identified new evidence supporting the growing belief that Neanderthals were a distinct species separate from modern humans (Homo sapiens), and not a subspecies of modern humans.

Neanderthal man from the National Museum of Nature and Science.
Credit: Wikipedia
 
The study looked at the entire nasal complex of Neanderthals and involved researchers with diverse academic backgrounds. Supported by funding from the National Science Foundation and the National Institutes of Health, the research also indicates that the Neanderthal nasal complex was not adaptively inferior to that of modern humans, and that the Neanderthals’ extinction was likely due to competition from modern humans and not an inability of the Neanderthal nose to process a colder and drier climate.

Samuel Márquez, PhD, associate professor and co-discipline director of gross anatomy in SUNY Downstate’s Department of Cell Biology, and his team of specialists published their findings on the Neanderthal nasal complex in the November issue of The Anatomical Record, which is part of a special issue on The Vertebrate Nose: Evolution, Structure, and Function (now online).

They argue that studies of the Neanderthal nose, which have spanned over a century and a half, have been approaching this anatomical enigma from the wrong perspective. Previous work has compared Neanderthal nasal dimensions to modern human populations such as the Inuit and modern Europeans, whose nasal complexes are adapted to cold and temperate climates.

However, the current study joins a growing body of evidence that the upper respiratory tracts of this extinct group functioned via a different set of rules as a result of a separate evolutionary history and overall cranial bauplan (body plan), resulting in a mosaic of features not found among any population of Homo sapiens. Thus Dr. Márquez and his team of paleoanthropologists, comparative anatomists, and an otolaryngologist have contributed to the understanding of two of the most controversial topics in paleoanthropology - were Neanderthals a different species from modern humans, and which aspects of their cranial morphology evolved as adaptations to cold stress.

“The strategy was to have a comprehensive examination of the nasal region of diverse modern human population groups and then compare the data with the fossil evidence. We used traditional morphometrics, geometric morphometric methodology based on 3D coordinate data, and CT imaging,” Dr. Márquez explained.

Anthony S. Pagano, PhD, anatomy instructor at NYU Langone Medical Center, a co-author, traveled to many European museums carrying a microscribe digitizer, the instrument used to collect 3D coordinate data from the fossils studied in this work, as spatial information may be missed using traditional morphometric methods. “We interpreted our findings using the different strengths of the team members,” Dr. Márquez said, “so that we can have a ‘feel’ for where these Neanderthals may lie along the modern human spectrum.”

Co-author William Lawson, MD, DDS, vice-chair and the Eugen Grabscheid research professor of otolaryngology and director of the Paleorhinology Laboratory of the Icahn School of Medicine at Mount Sinai, notes that the external nasal aperture of the Neanderthals approximates some modern human populations but that their midfacial prognathism (protrusion of the midface) is startlingly different. That difference is one of a number of Neanderthal nasal traits suggesting an evolutionary development distinct from that of modern humans. Dr. Lawson’s conclusion is predicated upon nearly four decades of clinical practice, in which he has seen over 7,000 patients representing a rich diversity of human nasal anatomy.

Distinguished Professor Jeffrey T. Laitman, PhD, also of the Icahn School of Medicine and director of the Center for Anatomy and Functional Morphology, and Eric Delson, PhD, director of the New York Consortium in Evolutionary Primatology or NYCEP, are also co-authors and are seasoned paleoanthropologists, each approaching their fifth decade of studying Neanderthals. Dr. Delson has published on various aspects of human evolution since the early 1970s.

Dr. Laitman states that this article is a significant contribution to the question of Neanderthal cold adaptation in the nasal region, especially in its identification of a different mosaic of features than those of cold-adapted modern humans. Dr. Laitman’s body of work has shown that there are clear differences in the vocal tract proportions of these fossil humans when compared to modern humans. This current contribution has now identified potentially species-level differences in nasal structure and function.

Dr. Laitman said, “The strength of this new research lies in its taking the totality of the Neanderthal nasal complex into account, rather than looking at a single feature. By looking at the complete morphological pattern, we can conclude that Neanderthals are our close relatives, but they are not us.”

Ian Tattersall, PhD, emeritus curator of the Division of Anthropology at the American Museum of Natural History, an expert on Neanderthal anatomy and functional morphology who did not participate in this study, stated, “Márquez and colleagues have carried out a most provocative and intriguing investigation of a very significant complex in the Neanderthal skull that has all too frequently been overlooked.” Dr. Tattersall hopes that “with luck, this research will stimulate future research demonstrating once and for all that Homo neanderthalensis deserves a distinctive identity of its own.”


Contacts and sources:
SUNY Downstate Medical Center

The article in The Anatomical Record is entitled, “The Nasal Complex of Neanderthals: An Entry Portal to their Place in Human Ancestry.” It is available online at: http://onlinelibrary.wiley.com/doi/10.1002/ar.23040/full.

This research was supported by the following grants, awarded to Mount Sinai: NSF-SBR9634519 and NSFBCS -1128901 from the National Science Foundation; and NIH 1 F31DC00255-01 from the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH and NSF. Analysis and additional data collection were performed at SUNY Downstate.

Supercomputing Beyond Genealogy Reveals Surprising European Ancestors

NSF XSEDE Stampede supercomputer compares modern and ancient DNA

Left:The Stuttgart skull, from a 7,000-year-old skeleton found in Germany among artifacts from the first widespread farming culture of central Europe. Right: Blue eyes and dark skin, that's how the European hunter-gatherer appeared 7,000 years ago. Artist depiction based on La Braña 1, whose remains were recovered at La Braña-Arintero site in León, Spain. 
Courtesy of the Consejo Superior de Investigaciones Cientificas

What if you researched your family's genealogy, and a mysterious stranger turned out to be an ancestor?

That's the surprising feeling had by a team of scientists who peered back into Europe's murky prehistoric past thousands of years ago. With sophisticated genetic tools, supercomputing simulations and modeling, they traced the origins of modern Europeans to three distinct populations.

The international research team published their September 2014 results in the journal Nature.

The 8,000-year-old skull discovered in Loschbour, Luxembourg, provided DNA evidence for the study of European ancestry. Scientists really only have a handful of ancient remains well-preserved enough for their genomes to be sequenced.


The remains studied were found at the caves of Loschbour, La Braña, Stuttgart, a ritual site at Motala, and at Mal'ta.

XSEDE, the Extreme Science and Engineering Discovery Environment, provided the computational resources used in the study. It's a single virtual system that scientists use to interactively share computing resources, data and expertise.

Genomic analysis code ran on Stampede, the nearly 10 petaflop Dell/Intel Linux supercomputer at the Texas Advanced Computing Center (TACC). The research was funded in part by the National Cancer Institute of the National Institutes of Health.

"The main finding was that modern Europeans seem to be a mixture of three different ancestral populations," said study co-author Joshua Schraiber, a National Science Foundation Postdoctoral fellow at the University of Washington.

Schraiber said these results surprised him because the prevailing view among scientists held that only two distinct groups mixed between 7,000 and 8,000 years ago in Europe, as humans first started to adopt agriculture.

Hunter-gatherers with olive skin and mainly blue eyes first expanded upon the continent about 12,000 years ago, moving north with the retreat of glaciers at the end of the last Ice Age. Later, early European farmers from the Near East migrated west and mixed with the hunter-gatherers. Genetic evidence revealed these farmers had light-colored skin and brown eyes.

The third mystery group that emerged from the data is ancient northern Eurasians. "People from the Siberia area is how I conceptualize it. We don't know too much anthropologically about who these people are. But the genetic evidence is relatively strong, because we do have ancient DNA from an individual that's very closely related to that population, too," Schraiber said.

That individual is a three-year-old boy whose remains were found near Lake Baikal in Siberia at a site called Mal'ta. Scientists determined his arm bone to be 24,000 years old. They then sequenced his genome, making it the second oldest one yet sequenced of a modern human. Interestingly enough, in late 2013 scientists used the Mal'ta genome to find that about one-third of Native American ancestry originated through gene flow from these ancient North Eurasians.

"I think there was a little bit of luck in this," Schraiber said, referring to the Mal'ta genome. "We knew the models weren't fitting. But we didn't know what was wrong. Luckily, this new ancient DNA had come out."

What scientists did was take the genomes from these ancient humans and compare them to those from 2,345 modern-day Europeans. "I used the POPRES data set, which had been used before to ask similar questions just looking at modern Europeans," Schraiber said. "And then I used a piece of software called Beagle, which was written by Brian Browning and Sharon Browning at the University of Washington, which computationally detects these regions of identity by descent."
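Beagle's actual identity-by-descent detection works on phased genotype data with hidden Markov models; the toy function below only captures the core intuition, that an unusually long stretch of identical alleles between two sequences hints at inheritance from a shared ancestor. The allele strings are invented for illustration.

```python
def longest_shared_run(hap_a, hap_b):
    """Length of the longest run of identical alleles between two
    haplotypes -- a toy proxy for identity-by-descent detection.
    (Real tools like Beagle use HMMs over phased genotypes,
    not a simple run-length scan like this.)
    """
    best = cur = 0
    for a, b in zip(hap_a, hap_b):
        cur = cur + 1 if a == b else 0
        best = max(best, cur)
    return best

# Hypothetical 0/1 allele strings: a long matching segment at the
# start suggests the two carriers share a recent common ancestor.
ancient = "0110100111010011"
modern  = "0110100111000110"
print(longest_shared_run(ancient, modern))
```

Scaled up to whole genomes and thousands of individuals, this kind of segment-matching is what made the analysis so memory- and CPU-hungry, as Schraiber describes below.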

The heavy demand on CPU time and RAM caused a bottleneck in the analysis.

"Having access to the Stampede supercomputer at TACC was essential for me because at some point I was using a hundred gigabytes of RAM to do something. It took days, even spreading it across multiple processors. It takes a lot of effort to do this identity by descent detection," Schraiber said.

Working on the hunch that the Mal'ta genome might fill in some missing blanks that the modeling pointed out, Schraiber saw a lot more identity by descent between the ancient individuals and modern individuals than he expected. "It made us happy in a lot of ways to find that these are people who share ancestors with modern Europeans."

Schraiber looks to East Asia and Africa as the next hot spots to study human history as scientists push forward to discover and analyze new sources of ancient DNA.

"Using archeological evidence tells you a lot. Modern DNA tells you a lot. But it's by combining the two and getting ancient DNA, which is anthropological evidence and genetic evidence at the same time, you're able to unravel these things. You're able to find complexity that you just didn't know was there before," he said.


Contacts and sources:
Faith Singer
University of Texas at Austin, Texas Advanced Computing Center

Horses And Rhinos Originated On The Island Of India

Fossils suggest ancestor of horses and rhinos originated on the Asian subcontinent while it was still an island.

 
Working at the edge of a coal mine in India, a team of Johns Hopkins researchers and colleagues has filled in a major gap in science's understanding of the evolution of a group of animals that includes horses and rhinos. That group likely originated on the subcontinent while it was still an island headed swiftly for collision with Asia, the researchers report Nov. 20 in the online journal Nature Communications.

This is an artist's depiction of Cambaytherium thewissi.

Credit: Elaine Kasmer

Modern horses, rhinos and tapirs belong to a biological group, or order, called Perissodactyla. Also known as "odd-toed ungulates," animals in the order have, as their name implies, an uneven number of toes on their hind feet and a distinctive digestive system. 

Though paleontologists had found remains of Perissodactyla from as far back as the beginnings of the Eocene epoch, about 56 million years ago, their earlier evolution remained a mystery, says Ken Rose, Ph.D., a professor of functional anatomy and evolution at the Johns Hopkins University School of Medicine.

Rose and his research team have for years been excavating mammal fossils in the Bighorn Basin of Wyoming, but in 2001 he and Indian colleagues began exploring Eocene sediments in Western India because it had been proposed that perissodactyls and some other mammal groups might have originated there. In an open-pit coal mine northeast of Mumbai, they uncovered a rich vein of ancient bones. Rose says he and his collaborators obtained funding from the National Geographic Society to send a research team to the mine site at Gujarat in the far Western part of India for two weeks at a time once every year or two over the last decade.

The mine yielded what Rose says was a treasure trove of teeth and bones for the researchers to comb through back in their home laboratories. Of these, more than 200 fossils turned out to belong to an animal dubbed Cambaytherium thewissi, about which little had been known. 

The researchers dated the fossils to about 54.5 million years old, making them slightly younger than the oldest known Perissodactyla remains, but, Rose says, they provide a window into what a common ancestor of all Perissodactyla would have looked like. "Many of Cambaytherium's features, like the teeth, the number of sacral vertebrae, and the bones of the hands and feet, are intermediate between Perissodactyla and more primitive animals," Rose says. "This is the closest thing we've found to a common ancestor of the Perissodactyla order."

Cambaytherium and other finds from the Gujarat coal mine also provide tantalizing clues about India's separation from Madagascar, lonely migration, and eventual collision with the continent of Asia as the Earth's plates shifted, Rose says. In 1990, two researchers, David Krause and Mary Maas of Stony Brook University, published a paper suggesting that several groups of mammals that appear at the beginning of the Eocene, including primates and odd- and even-toed ungulates, might have evolved in India while it was isolated. Cambaytherium is the first concrete evidence to support that idea, Rose says. But, he adds, "It's not a simple story."

"Around Cambaytherium's time, we think India was an island, but it also had primates and a rodent similar to those living in Europe at the time," he says. "One possible explanation is that India passed close by the Arabian Peninsula or the Horn of Africa, and there was a land bridge that allowed the animals to migrate. But Cambaytherium is unique and suggests that India was indeed isolated for a while."

Rose said his team was "very fortunate that we discovered the site and that the mining company allowed us to work there," although he added, "it was frustrating to know that countless fossils were being chewed up by heavy mining equipment." When coal extraction was finished, the miners covered the site, he says. His team has now found other mines in the area to continue digging.


Contacts and sources: 
Shawna Williams
Johns Hopkins Medicine


Other authors on the study were Luke T. Holbrook of Rowan University, Rajendra S. Rana of Garhwal University, Kishor Kumar of the Wadia Institute of Himalayan Geology, Katrina E. Jones and Heather E. Ahrens of Johns Hopkins University, Pieter Missiaen of Ghent University, Ashok Sahni of Panjab University and Thierry Smith of the Royal Belgian Institute of Natural Sciences.

This study was funded by the National Geographic Society (grants 6868-00, 7938-05, 8356-07, 8710-09, 8958-11 and 9240-12), the Belgian Science Policy Office (project BR/121/A3/PALEURAFRICA), the National Science Foundation (grant number DEB-0211976) and the Wadia Institute of Himalayan Geology.

Unravelling The Mystery Of Gamma-Ray Bursts

A team of scientists hope to trace the origins of gamma-ray bursts with the aid of giant space 'microphones'.

Researchers at Cardiff University are trying to work out the possible sounds scientists might expect to hear when the ultra-sensitive LIGO and Virgo detectors are switched on in 2015.

It's hoped the kilometre-scale microphones will detect gravitational waves created by black holes, and shed light on the origins of the Universe.

Researchers Dr Francesco Pannarale and Dr Frank Ohme, in Cardiff University's School of Physics and Astronomy, are exploring the potential of seeing and hearing events that astronomers know as short gamma-ray bursts.

These highly energetic bursts of hard radiation have been seen by gamma-ray satellites such as Fermi and Swift, but the exact origin of these quickly disappearing flashes of gamma-rays remains unknown.

"By picking up the gravitational waves associated with these events, we will be able to access precious information that was previously hidden, such as whether the collision of a star and a black hole has ignited the burst and roughly how massive these objects were before the impact," explained Dr Ohme, who has focused his research on predicting the exact shape of the gravitational wave signals scientists are expecting to see.

Dr Pannarale added: "A possible scenario that could produce gamma-ray bursts involves a neutron star, the most compact star in the Universe, being ripped apart by a black hole while orbiting it. The remaining matter would be accelerated so much it could cause the energy bursts we are observing today.

"In some cases, by observing both electro-magnetic and gravitational wave signatures of the same event, we will be able to better understand the behaviour of material in the highest density region we know in our Universe, so that we will start to rule out various theoretical models that have been proposed but cannot be tested otherwise."


Contacts and sources:
Dr. Frank Ohme
Cardiff University

The results of Pannarale and Ohme have been published in Physical Review Letters: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.151101

Exotic Object: 'Kicked' Black Hole or Mega Star?

In his general theory of relativity, Albert Einstein predicted that there are such things as gravitational waves. In fact, the very existence of these waves is the linchpin of the entire theory. Despite the great lengths that physicists have gone to in recent decades, however, they still have not managed to detect them directly with a measurement. This could largely be due to the fact that this requires a level of precision that it is practically impossible to achieve with today's measuring devices.

Zoom into Markarian 177 and SDSS1133 and see how they compare with a simulated galaxy collision. When the central black holes in these galaxies combine, a "kick" launches the merged black hole on a wide orbit taking it far from the galaxy's core.
Credit: NASA's Goddard Space Flight Center/L. Blecha (UMD)

Ultimately, it is all about measuring the tiniest compressions and extensions of space which, according to Einstein's theory, arise when gravitational waves pass through it. And even with the high-precision measuring equipment of the future, only sufficiently intense waves are likely to be detectable, such as those produced when two black holes merge.

If two galaxies head towards each other in space and eventually collide, they merge into one. The two supermassive black holes at the centres of the two galaxies also fuse. In this process, if the general theory of relativity holds true, gravitational waves are formed and spread out in space.

A simulation of two colliding galaxies (left) shows how their coalescing supermassive black holes can launch the resulting larger black hole (dot, lower left) on a wide orbit. Right: Compare the simulation with this Keck II near-infrared image of Markarian 177 and SDSS1133 (lower left).

Credit: Simulation, L. Blecha (UMD); image, W. M. Keck Observatory/M. Koss (ETH Zurich) et al.

If the black holes have unequal masses or are spinning at different speeds, the gravitational waves will be emitted asymmetrically - giving the fused black hole a "kick" that propels it in the opposite direction. In some cases, this recoil kick is relatively weak and the fused black hole drifts back into the centre. In other cases, however, the kick is strong enough to propel the black hole out of the galaxy entirely, where it will forever wander through the universe.
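Whether the merged black hole escapes comes down to comparing the kick velocity with the galaxy's escape speed. A minimal sketch of that comparison, under the crude assumption that the galaxy can be treated as a point mass (the masses, radii, and function names below are illustrative, not taken from the study):

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
PARSEC = 3.086e16   # parsec, m

def escape_speed(mass_solar, radius_pc):
    """Escape speed in km/s at radius_pc from a point mass of
    mass_solar solar masses: v_esc = sqrt(2 G M / r).
    A real galaxy's extended mass distribution would modify this."""
    m = mass_solar * M_SUN
    r = radius_pc * PARSEC
    return math.sqrt(2.0 * G * m / r) / 1e3

def is_ejected(kick_km_s, mass_solar, radius_pc):
    """True if the recoil kick exceeds the local escape speed."""
    return kick_km_s > escape_speed(mass_solar, radius_pc)
```

For a dwarf galaxy of order 10^9 solar masses within a kiloparsec (illustrative numbers), the escape speed is of order 100 km/s, so gravitational-wave recoil kicks, which simulations find can reach hundreds of km/s, can plausibly eject the merged black hole; a weak kick well below that threshold leaves it to drift back towards the centre.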

Remnant of a Collision Between Two Galaxies...

Astronomers have been searching for such recoiling black holes, but have not found any strong candidates yet. An international team of scientists including Kevin Schawinski, a professor at the Institute for Astronomy at ETH Zurich, and Michael Koss, a Swiss National Science Foundation Ambizione Fellow working with the Schawinski group, discovered an object that may in fact be a recoiling black hole. 

SDSS1133 (bright spot, lower left) has been a persistent source for more than 60 years. This sequence of archival astronomical imagery, taken through different instruments and filters, shows that the source is detectable in 1950 and brightest in 2001.

Credit: NASA's Goddard Space Flight Center/M. Koss (ETH Zurich)

The object, named SDSS1133, lies around 90 million light years from Earth, which is nearby in astronomical terms. Researchers from the University of Hawaii, the University of Maryland, the Jet Propulsion Laboratory in Pasadena, California, the University of Arizona, the University of Copenhagen, the University of California, Berkeley, and the Ohio State University have also worked on the discovery.

The researchers first realized that SDSS1133 was a unique object last year, while observing it with a reflecting telescope at the Keck Observatory in Hawaii. Comparisons with an astronomical map from 2001 showed that it was already ten times weaker last year than in 2001 - and although the object was visible on maps from the 1950s and 1990s, it could only be seen very weakly. SDSS1133 shone very brightly in 2001 but did not go completely dark afterwards, which showed that it cannot be a normal supernova - the life-ending explosion of a star - because supernovae tend to be detectable for only a few months before fading significantly.

From a comparison of the wavelength spectrum of the light emitted by SDSS1133 with that of a nearby dwarf galaxy, the scientists concluded that the object might be a black hole that belonged to this dwarf galaxy at one stage and was jettisoned out of it.

... Or One of the Longest-lived Supernovae?

And yet the researchers are far from certain, mainly because there is a second, more exotic possibility: SDSS1133 could be a new type of long-duration outburst from a giant star preceding a supernova. This giant star would have lost much of its mass in a series of eruptions over the course of at least 50 years before its final explosion.

The dwarf galaxy Markarian 177 (center) and its unusual source SDSS1133 (blue) lie 90 million light-years away. The galaxies are located in the bowl of the Big Dipper, a well-known star pattern in the constellation Ursa Major.

Credit: Sloan Digital Sky Survey

Scientists have already observed stars changing in this fashion: Eta Carinae, one of the most massive stars in our own galaxy, briefly became the second-brightest star in the sky in 1843. If this type of activity were also the explanation for SDSS1133, that would make it the longest continuous outburst ever observed before a supernova.

Answers on the Horizon

ETH scientists will have the opportunity to search for answers to these questions next year. Black holes and supernovae both emit ultraviolet light, but with differing wavelengths. The researchers have been allocated observation time with the Hubble Space Telescope in October 2015 in order to measure this spectrum more precisely.

Using the Keck II telescope in Hawaii, researchers obtained high-resolution images of Markarian 177 and SDSS1133 using a near-infrared filter. Twin bright spots in the galaxy's central region are consistent with recent star formation, a disturbance that hints this galaxy may have merged with another.

Credit: W. M. Keck Observatory/M. Koss (ETH Zurich) et al.

Changes in the object's brightness in the coming years will also give scientists clues as to whether they are dealing with a jettisoned black hole or an exploding mega-star: for a recoiling black hole they expect to see variable brightness, whereas the brightness of a supernova explosion should generally decrease over time. "Whether SDSS1133 is a recoiling black hole or an exploding mega-star, we are observing something that has never before been seen in the universe", says Michael Koss.

A computer simulation of colliding galaxies shows how merged supermassive black holes can be placed in an elongated orbit. The simulation begins with disks of stars, gas and dark matter representing two galaxies of comparable mass; only the stars are shown. The galaxies make a close pass that results in strong tidal distortions that warp their disks. These disturbances trigger instabilities that drive gas to the galaxy centers, where it forms new stars and fuels the supermassive black holes.

Before the galaxies completely merge, their black holes spiral together and coalesce. Gravitational waves emitted in the merger create a "recoil kick" that ejects the black hole from the galaxy's center; in this case, the kick is 90 percent of the escape speed. The simulation stops just as the black hole reaches the farthest point of its orbit, which is why it slows down in the last few frames. The video represents an elapsed time of about 2.7 billion years. The simulation took five days to complete on the Odyssey computing cluster at Harvard University.

Credit: L. Blecha (UMD)

And should they discover that the object is in fact a recoiling black hole, that would considerably increase the odds of one day being able to detect gravitational waves. The scientists estimate that the recoil, if confirmed, occurred around ten million years ago. Consequently, it is not this object in itself that would be important for the concrete measurement of gravitational waves, but rather the fact that it exists. "Dwarf galaxies are very common," says Koss. "Therefore it would be highly probable that other recoil events would appear before too long. The hope is that we would be able to observe one near Earth and measure the gravitational waves."

ESA missions to search for gravitational waves

The European Space Agency (ESA) will use space probes and laser interferometers to detect gravitational waves in space during one of its next large-scale missions, "eLISA". The launch of the probes has been scheduled for 2034. However, the preparatory mission LISA Pathfinder is already due to blast off next year with a view to testing key technologies for eLISA. ETH Zurich is also involved in LISA Pathfinder.


Contacts and sources:
Michael Koss
ETH Zurich

Citation: Koss M, Blecha L, Mushotzky R, Hung CL, Veilleux S, Trakhtenbrot B, Schawinski K, Stern D, Smith N, Li Y, Man A, Filippenko AV, Mauerhan JC, Stanek K, Sanders D: An Unusually Persistent Transient in a Nearby Dwarf Galaxy. Monthly Notices of the Royal Astronomical Society, 2014, 445: 515.