Unseen Is Free


Sunday, November 30, 2014

Nanoflares: First Genetic-Based Tool To Detect Circulating Cancer Cells In Blood

Metastasis is bad news for cancer patients. Northwestern University scientists now have demonstrated a simple but powerful tool that can detect live cancer cells in the bloodstream, potentially long before the cells could settle somewhere in the body and form a dangerous tumor.

NanoFlares: specially designed nanoparticle probes developed to detect blood-borne cancer cells. When the probes come in contact with cancerous cells, they emit light.
Courtesy of the International Institute for Nanotechnology at Northwestern University

The NanoFlare technology is the first genetic-based approach that is able to detect live circulating tumor cells out of the complex matrix that is human blood -- no easy feat. In a breast cancer study, the NanoFlares easily entered cells and lit up the cell if a biomarker target was present, even if only a trace amount. The NanoFlares are tiny spherical nucleic acids with gold nanoparticle cores outfitted with single-stranded DNA “flares.”

“This technology has the potential to profoundly change the way breast cancer in particular and cancers in general are both studied and treated,” said Chad A. Mirkin, a nanomedicine expert and a corresponding author of the study.

Mirkin’s colleagues Dr. C. Shad Thaxton and Dr. Chonghui Cheng, both of Northwestern University Feinberg School of Medicine, are also corresponding authors.

The research team, in a paper to be published the week of Nov. 17 by the Proceedings of the National Academy of Sciences (PNAS), reports two key innovations:

• The ability to track tumor cells in the bloodstream based on genetic content located within the cell itself, as opposed to using proteins located on the cell’s surface (current technology)

• The ability to collect the cells in live form, so they may be studied and used to inform researchers and clinicians as to how to treat a disease -- an important step toward personalized medicine

“Cancers are very genetically diverse, and it’s important to know what cancer subtype a patient has,” Mirkin said. “Now you can think about collecting a patient’s cells and studying how those cells respond to different therapies. The way a patient responds to treatment depends on the genetic makeup of the cancer.”

Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering.

A NanoFlare is designed to recognize a specific genetic code snippet associated with a cancer. The core nanoparticle, only 13 nanometers in diameter, enters cells, and the NanoFlare seeks its target. If the genetic target is present in the cell, the NanoFlare binds to it and the reporter “flare” is released to produce a fluorescent signal. The researchers then can isolate those cells.

“The NanoFlare turns on a light in the cancer cells you are looking for,” said Thaxton, an assistant professor of urology at Feinberg. “That the NanoFlares are effective in the complex matrix of human blood is a great technical advance. We can find small numbers of cancer cells in blood, which really is like searching for a needle in a haystack.”

Once they identified the cancer cells, the researchers were able to separate them from normal cells. This ability to isolate, culture and grow the cancer cells will allow researchers to zero in on the cancer cells that matter to the health of the patient. Most circulating tumor cells may not metastasize, and analysis of the cancer cells could identify those that will.

“This could lead to personalized therapy where we can look at how an individual’s cells respond to different therapeutic cocktails,” said Mirkin, whose lab developed NanoFlares in 2007.

NanoFlares light up individual cells (red clouds) if a cancer biomarker (here, a breast cancer messenger RNA, blue) is detected by recognition DNA molecules (green) coated on gold nanospheres and carrying a fluorescent chemical reporter flare (red).
Credit: Tiffany L. Halo et al./PNAS

In the study, the genetic targets were messenger RNA (mRNA) that code for certain proteins known to be biomarkers for aggressive breast cancer cells.

The research team first used the blood of healthy individuals, spiking some of the blood with living breast cancer cells to see if the NanoFlares could detect them. (Unspiked blood was used as a control.)

Cheng, an assistant professor of medicine in hematology/oncology at Feinberg, provided the cell lines and NanoFlare targets the researchers used to model blood samples taken from breast cancer patients.

The research team tested four different NanoFlares, each with a different genetic target relevant to breast cancer metastasis. The technology successfully detected the cancer cells with less than 1 percent incidence of false-negative results.

Currently, in another study, the researchers are focused on detecting circulating tumor cells in the blood of patients with a diagnosis of breast cancer.

“When it comes to detecting and treating cancer, the mantra is the earlier, the better,” Thaxton said. “This technology may enable us to better detect circulating cancer cells and provides another tool to add to the toolkit of cancer diagnosis.”

Mirkin, Thaxton and Cheng are members of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

The National Cancer Institute, the National Institutes of Health, the American Cancer Society, the Air Force Office of Scientific Research and the Howard Hughes Medical Institute supported the research.

Contacts and sources:
by Megan Fellman
Northwestern University

Citation: “NanoFlares for the detection, isolation, and culture of live tumor cells from human blood.”

Wi-Fi Drug Delivery: Wireless Electronic Implants Stop Staph, Then Harmlessly Dissolve

Researchers at Tufts University, in collaboration with a team at the University of Illinois at Urbana-Champaign, have demonstrated a resorbable electronic implant that eliminated bacterial infection in mice by delivering heat to infected tissue when triggered by a remote wireless signal. The silk and magnesium devices then harmlessly dissolved in the test animals. The technique had previously been demonstrated only in vitro.
Credit: Tufts University

 The research is published online in the Proceedings of the National Academy of Sciences Early Edition the week of November 24-28, 2014.

"This is an important demonstration step forward for the development of on-demand medical devices that can be turned on remotely to perform a therapeutic function in a patient and then safely disappear after their use, requiring no retrieval," said senior author Fiorenzo Omenetto, professor of biomedical engineering and Frank C. Doble professor at Tufts School of Engineering. "These wireless strategies could help manage post-surgical infection, for example, or pave the way for eventual 'wi-fi' drug delivery."

Optical (and corresponding infrared) images of the dissolution of the implant device (top row: the power-receiving induction coil with resistor/heater)

Credit: Tufts University

Implantable medical devices typically use non-degradable materials that have limited operational lifetimes and must eventually be removed or replaced. The new wireless therapy devices are robust enough to survive mechanical handling during surgery but designed to harmlessly dissolve within minutes or weeks depending on how the silk protein was processed, noted the paper's first author, Hu Tao, Ph.D., a former Tufts post-doctoral associate who is now on the faculty of the Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences.

Each fully dissolvable wireless heating device consisted of a serpentine resistor and a power-receiving coil made of magnesium deposited onto a silk protein layer. The magnesium heater was encapsulated in a silk "pocket" that protected the electronics and controlled its dissolution time.

Devices were implanted in vivo in S. aureus-infected tissue and activated by a wireless transmitter for two sets of 10-minute heat treatments. Tissue collected from the mice 24 hours after treatment showed no sign of infection, and surrounding tissues were found to be normal. Devices completely dissolved after 15 days, and magnesium levels at the implant site and surrounding areas were comparable to levels typically found in the body.

The researchers also conducted in vitro experiments in which similar remotely controlled devices released the antibiotic ampicillin to kill E. coli and S. aureus bacteria. The wireless activation of the devices was found to enhance antibiotic release without reducing antibiotic activity.

Omenetto holds an adjunct appointment in the Department of Physics in the School of Arts and Sciences at Tufts as well as appointments in the Departments of Biomedical Engineering and Chemical and Biological Engineering in the School of Engineering.

Contacts and sources:
Kim Thurler
Tufts University

In addition to Omenetto and Tao, authors on the paper were co-first author Suk-Won Hwang, formerly of the Department of Materials Science and Engineering, Beckman Institute for Advanced Science and Technology, and Frederick Seitz Materials Research Laboratory, University of Illinois at Urbana-Champaign, and now at KU-KIST Graduate School of Converging Science and Technology, Korea University; Benedetto Marelli, Bo An, Jodie E. Moreau, Miaomiao Yang, and Mark A. Brenckle, Department of Biomedical Engineering, Tufts University; Stanley Kim, Department of Materials Science and Engineering, Beckman Institute for Advanced Science and Technology, and Frederick Seitz Materials Research Laboratory, University of Illinois at Urbana-Champaign; David L. Kaplan, Department of Biomedical Engineering and Department of Chemical and Biomedical Engineering, Tufts University; and co-corresponding author John A. Rogers, Department of Materials Science and Engineering, Beckman Institute for Advanced Science and Technology, Frederick Seitz Materials Research Laboratory, Department of Chemistry, and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

Research reported in this paper was supported by the National Institutes of Health under award number P41-EB002520 and by the National Science Foundation under grant number DMR-1242240.

Citation: "Silk-based resorbable electronic devices for remotely controlled therapy and in vivo infection abatement," http://www.pnas.org/cgi/doi/10.1073/pnas.1407743111

Water To Gasoline Demonstrated By Sunfire

A Power-to-Liquids (PtL) demonstration rig, the first of its kind in the world, was officially inaugurated by Dresden-based Sunfire GmbH on November 14th in the presence of German Federal Minister of Education and Research Johanna Wanka, Bilfinger Board Member Pieter Koolen and a number of other high-ranking representatives from the worlds of politics, industry and research.

The rig uses Sunfire’s PtL technology to transform water and CO2 into high-purity synthetic fuels (petrol, diesel, kerosene) with the aid of renewable electricity. So-called PtL fuels – also known as “e-fuels” – can be used in pure form or as an admixture with conventional fuels, and are recognized as an environmentally friendly, resource-saving alternative which contributes to the fulfillment of greenhouse gas quotas.

Preparations for the commissioning of the PtL rig are currently in full swing at Sunfire’s Gasanstaltstraße premises.

High-temperature steam electrolysis: The PtL technology is built around the solid oxide electrolysis cells (SOECs) developed by the cleantech firm as part of the eponymous BMBF research project SUNFIRE.

Step 1 of the PtL process sees the SOECs used to convert electrical energy into chemical energy. Hydrogen is generated from steam rather than liquid water, since part of the energy required to split steam can be supplied as heat rather than electricity.

Step 2 – the reverse water-gas shift reaction – is again innovative, and involves using the hydrogen (H2) yielded by the steam electrolysis step to reduce carbon dioxide (CO2) to carbon monoxide (CO) for the third and final step: Fischer-Tropsch synthesis. This step sees the carbon monoxide and additional hydrogen (together forming a renewable synthesis gas) converted into petrol, diesel, kerosene and other base products for the chemicals industry (e.g. waxes). Feeding the heat released during synthesis back into the process ensures a high degree of system efficiency (70 percent).
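As a rough plausibility check on the reported 70 percent figure, the three-step chain can be sketched as a back-of-envelope energy balance. The reaction enthalpies and fuel heating value below are generic textbook numbers, not Sunfire's own data:

```python
# Back-of-envelope energy balance for the three-step Power-to-Liquids chain.
# All figures are textbook approximations, not Sunfire's proprietary numbers.

DH_ELECTROLYSIS = 242.0   # kJ/mol H2: steam electrolysis, H2O(g) -> H2 + 1/2 O2
DH_RWGS         = 41.0    # kJ/mol: CO2 + H2 -> CO + H2O (mildly endothermic)
DH_FT           = -152.0  # kJ/mol CH2: CO + 2 H2 -> -CH2- + H2O (exothermic)
LHV_CH2         = 604.0   # kJ per mol of -CH2- fuel unit (~43 MJ/kg * 14 g/mol)

# Producing one -CH2- unit consumes 3 mol H2:
#   1 mol in the reverse water-gas shift, 2 mol in Fischer-Tropsch synthesis.
h2_per_ch2  = 3
electric_in = h2_per_ch2 * DH_ELECTROLYSIS   # 726 kJ if all input is electric

# Idealised electricity-to-fuel efficiency, assuming all process heat
# (e.g. the exothermic Fischer-Tropsch step) is recycled internally:
eta = LHV_CH2 / electric_in
print(f"Idealised efficiency: {eta:.0%}")    # ~83% upper bound
```

Real losses pull this idealised ~83 percent upper bound down; the heat integration described above is what keeps the actual rig near the stated 70 percent.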

Proof of technical feasibility at industrial scale: The cost of building the PtL demonstration rig was in the seven-digit range, with development costs also incurred by the various consortium members. Half of the overall sum invested was covered by public funding from the Federal Ministry of Education and Research.

The rig’s capacity for CO2 recycling stands at 3.2 tonnes per day, and once commissioned it will produce up to a barrel of fuel per day. Commercialization depends on further technological development and regulatory factors, and is scheduled for 2016.
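For scale, a rough mass balance shows how much CO2 a single barrel of synthetic hydrocarbon fuel binds. The barrel volume and fuel density below are generic assumptions (a standard oil barrel and a diesel-like fuel), not Sunfire's figures:

```python
# Rough mass balance: how much CO2 does one barrel of synthetic fuel bind?
# Barrel size and density are generic assumptions, not Sunfire's figures.

M_CO2, M_CH2 = 44.0, 14.0     # g/mol: CO2 and a -CH2- fuel unit
BARREL_L     = 159.0          # litres per standard oil barrel
DENSITY      = 0.83           # kg/L, typical for diesel-range fuel

fuel_kg = BARREL_L * DENSITY              # ~132 kg of fuel per barrel
co2_kg  = fuel_kg * (M_CO2 / M_CH2)       # one CO2 fixed per carbon in the chain
print(f"~{co2_kg:.0f} kg CO2 per barrel")
```

That works out to roughly 0.4 tonnes of CO2 per barrel, comfortably within the rig's stated 3.2-tonne-per-day CO2 recycling capacity.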

Contacts and sources:
Sunfire GmbH

Friday, November 28, 2014

40,000-Year-Old Rock Art Discovery Across Asia

The latest research on the oldest surviving rock art of Southeast Asia shows that the region’s first people, hunter-gatherers who arrived over 50,000 years ago, brought with them a rich art practice.

A naturalistic painting of a deer at a rock art site near Siem Reap, Cambodia, is the oldest painting in the region.
Photo: Paul Taçon

Published this week in the archaeological journal Antiquity, the research shows that these earliest people skilfully produced paintings of animals in rock shelters from southwest China to Indonesia. Besides these countries, early sites were also recorded in Thailand, Cambodia and Malaysia.

Griffith University Chair in Rock Art Professor Paul Taçon led the research which involved field work with collaborative international teams in rugged locations of several countries.

The oldest paintings were identified by analysing overlapping superimpositions of art in various styles as well as by numerical dating. It was found that the oldest art mainly consists of naturalistic images of wild animals and, in some locations, hand stencils.

The research shows that the 35,000–40,000-year-old dates for some rock art in Sulawesi, Indonesia, announced in October by Griffith University Senior Research Fellow Maxime Aubert, are not an anomaly. Instead, the practice was widespread across the region.

A bull from the Xianrendong rock art site (Yunnan, China) is a natural projection of stone that resembles a profile animal. It was painted with red ochre to highlight the head, front legs and side of the body. The head has a natural hole for an eye. The image was enhanced with DStretch

Photo: Paul Taçon

Professor Taçon said that, “As with the early art of Europe, the oldest Southeast Asian images often incorporated or were placed in relation to natural features of rock surfaces.

“This shows a purposeful engagement with the new places early peoples arrived in for both symbolic and practical reasons.

“Essentially, they humanised landscapes wherever they went, transforming them from wild places to cultural landscapes. This was the beginning of a process that continues to this day.”

But unlike in Europe, the oldest surviving rock art of Southeast Asia is more often found in rock shelters than in deep caves, suggesting that experiences in deep caves cannot have been its inspiration, as has long been argued for Europe.

“This significantly shifts debates about the origins of art-making and supports ideas that this fundamental human behaviour began with our most ancient ancestors in Africa rather than Europe.

“The research supports the idea suggested by the early Indonesian rock art dates that modern humans brought the practice of making semi-permanent images in rocky landscapes to Europe and Asia from Africa,’’ Professor Taçon said.

Hand stencils like these were recently shown to have been made up to 40,000 years ago in Sulawesi, Indonesia, but are also found at the earliest surviving rock art sites of northern Australia

Photo: Paul Taçon

These results have implications not only for our understanding of Southeast Asian and European rock art but also Australian, because in Kakadu-Arnhem Land and other parts of northern Australia the oldest surviving rock art also consists of naturalistic animals and stencils.

Thus the practice of making these sorts of designs may have been brought to Australia at the time of initial colonisation, but it may alternatively have been independently invented or resulted from as yet unknown forms of culture contact.

All three possibilities are equally intriguing. New investigations in both northern Australia and Southeast Asia are currently being planned.

Contacts and sources:
Griffith University

Saturday, November 22, 2014

Revolutionizing The Interaction Between Plants And Bacteria

In laboratory trials, it was observed that plants grew 25 to 35 percent more when the microorganism was added.

Legumes such as lentils, beans, peas and chickpeas, important for human nutrition, could see increased production thanks to the contributions of a scientific group at the University of Salamanca in Spain, led by Martha Trujillo Toledo, which has revolutionized the study of interactions between plants and microorganisms.

Credit:   University of Salamanca

After isolating and studying a bacterium of the genus Micromonospora in 2003, the group discovered that this microorganism improved grain production. However, because this is a new line of investigation and only two laboratories in the world are studying the interaction of legumes with the microorganism, the way the bacterium reaches the plant is currently unknown.

"What we do know is that it is able to penetrate the tissues of the plant and promote its growth, which increases by between 25 and 35 percent. Moreover, the organism belongs to the actinobacteria, a group that is among the largest producers of antibiotics and other substances. Here, we discovered that one of our strains produced antitumor molecules, so it might have an important biotechnological application," highlights the researcher, who is part of the Network of Mexican Talent, Chapter Spain.

 Martha Trujillo Toledo

She adds that a significant percentage of her research has been devoted to describing new species of Micromonospora. "Hence, we set the goal of trying to understand the relationship between the bacteria and the plant, for which we conducted trials in a climatic chamber, where we grew the plant with all the nutrients it needs to develop; when we added the bacteria, growth increased."

She points out that they are still studying this interaction, because it is not yet known what brings the bacteria to the plant. "We also observed that the number of nodules of the legumes, where the nitrogen they need gets fixed, almost doubled."

Another important aspect of the work was conducting molecular studies to identify bacterial genes important for the interaction with the host. In this regard, sequencing revealed a big surprise: almost 200 genes encode enzymes that destroy plant tissue, which is ironic in a bacterium found inside plants that, as previous studies have demonstrated, favors their protection and growth.

Trujillo Toledo notes that with a deeper understanding of the plant-microorganism relationship, the bacteria could in future be applied as a growth enhancer for legumes, improving production for farmers.

Notably, Trujillo Toledo's research group has sampled about 30 different plant species, not only in Spain but also in other European countries, Nicaragua and Australia. In Mexico she also has a joint project with Maria Valdes Ramirez of the National School of Biological Sciences at the IPN, who has found Micromonospora bacteria in other plants that, like legumes, produce nodules and fix nitrogen. (Agencia ID)

Contacts and sources: 
Investigación y Desarrollo

Possibilities For Personalized Cancer Vaccines Revealed At ESMO Symposium

The possibilities for personalised vaccines in all types of cancer are revealed today in a lecture from Dr Harpreet Singh at the ESMO Symposium on Immuno-Oncology 2014 in Geneva, Switzerland.

“One of the biggest hurdles in cancer immunotherapy is the discovery of appropriate cancer targets that can be recognised by T-cells,” said Singh, who is scientific coordinator of the EU-funded GAPVAC phase I trial which is testing personalised vaccines in glioblastoma, the most common and aggressive brain cancer. “In the GAPVAC trial we will treat glioblastoma patients with vaccines that are ideal for each patient because they contain personalised antigens.”1

For all patients in the GAPVAC study, researchers will identify genes expressed in the tumour, peptides presented on the human leukocyte antigen (HLA) receptor (i.e. peptides which will be seen by T-cells), cancer specific mutations, and the ability of the immune system to mount a response to certain antigens. Based on this information, two vaccines, called actively personalised vaccines (APVACs), will be constructed and administered following conventional surgery.

Credit: Wikipedia

The first vaccine will be prepared from a warehouse of 72 targets previously identified by the researchers as relevant for treatment in glioblastoma. These peptides have been manufactured and put on the shelf, ready to be used in patient vaccines. Patients will be given a cocktail of the peptides they express and to which their immune system can mount a response.

Singh said: “A patient may express 20 of these 72 targets on their tumour, for example. If we find that the patient’s immune system can mount responses to 5 of the 20 targets, we mix the 5 peptides and give them to the patient. We mix the peptides off the shelf but the cocktail is changed for each patient because it is matched to their biomarkers.”

The second vaccine is synthesised de novo based on a mutated peptide expressed in the tumour of the patient. Singh said: “That peptide is not in our warehouse because it just occurs in this one single patient. The patient receives APVAC-1 and APVAC-2 in a highly personalised fashion in a way that I think has never been done for any patient.”

He added: “GAPVAC has two major goals. One is to show that personalised vaccines are feasible, since this is one of the most complicated trials ever done in cancer immunotherapy. The second is to show that we can mount far better biological responses in these patients compared to vaccination with non-personalised antigens.”

Singh’s previous research has shown that vaccination with non-personalised antigens leads to better disease control and longer overall survival in phase I and phase II clinical studies in patients with renal cell cancer.2

Singh said: “For the non-personalised vaccines we used off-the-shelf peptide targets that were shared by many patients with a particular cancer. Using this approach we have successfully vaccinated patients with renal cell cancer, colorectal cancer and glioblastoma.”

He added: “During this research we identified other targets that appeared in very few patients or even, in extreme cases, in a single patient. Often these rarer peptides are of better quality, meaning they are more specifically seen in cancer cells and occur at higher levels. This led us to start developing personalised cancer vaccines which contain the ideal set of targets for one particular patient. We hope they will be even more effective than the off-the-shelf vaccines.”

Singh continued: “A very simple example from something established is trastuzumab in breast cancer. Trastuzumab was originally given to every breast cancer patient and the efficacy was just seen in a subset. Now only about 20% of breast cancer patients receive trastuzumab and the personalised aspect is just based on the low abundance of Her2, the target.”

Singh believes that personalised vaccines hold promise for all types of cancer, and that personalisation could also be applied to adoptive cell therapy.

He concluded: “Personalisation is not limited to vaccines but is a general principle that could be applied to cancer immunotherapy more broadly. We are starting with vaccines but we are also thinking about how to use personalised antigens in adoptive cell therapy.”

Contacts and sources:
European Society for Medical Oncology (ESMO)

1 Glioma Actively Personalised Vaccine Consortium (GAPVAC): www.gapvac.eu
2 Walter S, et al. Multipeptide immune response to cancer vaccine IMA901 after single-dose cyclophosphamide associates with longer patient survival. Nat Med. 2012;18(8):1254-1261. doi: 10.1038/nm.2883. Epub 2012 Jul 29.

Citation: Annals of Oncology
Volume 25, 2014, Supplement 6

Little Ice Age Was Global: Research Rekindles Debate Of Sun's Role

A team of UK researchers has shed new light on the climate of the Little Ice Age, and rekindled debate over the role of the sun in climate change. The new study, which involved detailed scientific examination of a peat bog in southern South America, indicates that the most extreme climate episodes of the Little Ice Age were felt not just in Europe and North America, which is well known, but apparently globally. The research has implications for current concerns over ‘Global Warming’.

"February" from the calendar of Les Très Riches Heures du duc de Berry, 1412-1416

Climate sceptics and believers in Global Warming have long argued about whether the Little Ice Age (from c. the early 15th to the 19th century) was global, about its causes, and about how much influence the sun has had on global climate, both during the Little Ice Age and in recent decades. This new study helps clarify those debates.

The team of researchers, from the Universities of Gloucestershire, Aberdeen and Plymouth, conducted studies on past climate through detailed laboratory examination of peat from a bog near Ushuaia, Tierra del Fuego. They used exactly the same laboratory methods as have been developed for peat bogs in Europe. 

Two principal techniques were used to reconstruct past climates over the past 3000 years: at close intervals throughout a vertical column of peat, the researchers investigated the degree of peat decomposition, which is directly related to climate, and also examined the peat matrix to reveal the changing amounts of different plants that previously grew on the bog.

The data show that the most extreme cold phases of the Little Ice Age—in the mid-15th and then again in the early 18th centuries—were synchronous in Europe and South America. There is one stark difference: while in continental north-west Europe, bogs became wetter, in Tierra del Fuego, the bog became drier—in both cases probably a result of a dramatic equator-ward shift of moisture-bearing winds.

These extreme times coincide with periods when it is known that the sun was unusually quiet. In the late 17th to mid-18th centuries it had very few sunspots—fewer even than during the run of recent cold winters in Europe, which other UK scientists have linked to a relatively quiet sun.

Professor Frank Chambers, Head of the University of Gloucestershire’s Centre for Environmental Change and Quaternary Research, who led the writing of the Fast-Track Research Report, said:

“Both skeptics and adherents of Global Warming might draw succor from this work. Our study is significant because, while there are various different estimates for the start and end of the Little Ice Age in different regions of the world, our data show that the most extreme phases occurred at the same time in both the Northern and Southern Hemispheres. These extreme episodes were abrupt global events. They were probably related to sudden, equator-ward shifts of the Westerlies in the Southern Hemisphere, and the Atlantic depression tracks in the Northern Hemisphere. The same shifts seem to have happened abruptly before, such as c. 2800 years ago, when the same synchronous but opposite response is shown in bogs in Northwest Europe compared with southern South America.

“It seems that the sun’s quiescence was responsible for the most extreme phases of the Little Ice Age, implying that solar variability sometimes plays a significant role in climate change. A change in solar activity may also, for example, have contributed to the post Little Ice Age rise in global temperatures in the first half of the 20th Century. However, solar variability alone cannot explain the post-1970 global temperature trends, especially the global temperature rise in the last three decades of the 20th Century, which has been attributed by the Inter-Governmental Panel on Climate Change (IPCC) to increased concentrations of greenhouse gases in the atmosphere.”

Professor Chambers concluded: “I must stress that our research findings are only interpretable for the period from 3000 years ago to the end of the Little Ice Age. That is the period upon which our research is focused. However, in light of our substantiation of the effects of ‘grand solar minima’ upon past global climates, it could be speculated that the current pausing of ‘Global Warming’, which is frequently referenced by those skeptical of climate projections by the IPCC, might relate at least in part to a countervailing effect of reduced solar activity, as shown in the recent sunspot cycle.”

Contacts and sources: 
University of Gloucestershire

Friday, November 21, 2014

Sun’s Rotating ‘Magnet’ Pulls Lightning Towards UK

The Sun may be playing a part in the generation of lightning strikes on Earth by temporarily ‘bending’ the Earth’s magnetic field and allowing a shower of energetic particles to enter the upper atmosphere.

This is according to researchers at the University of Reading who have found that over a five year period the UK experienced around 50% more lightning strikes when the Earth’s magnetic field was skewed by the Sun’s own magnetic field.

The Earth’s magnetic field usually functions as an in-built force-field to shield against a bombardment of particles from space, known as galactic cosmic rays, which have previously been found to prompt a chain-reaction of events in thunderclouds that trigger lightning bolts.

It is hoped these new insights, which have been published today, 19 November, in IOP Publishing’s journal Environmental Research Letters, could lead to a reliable lightning forecast system that could provide warnings of hazardous events many weeks in advance.

To do so, weather forecasters would need to combine conventional forecasts with accurate predictions of the Sun’s spiral-shaped magnetic field known as the heliospheric magnetic field (HMF), which is spewed out as the Sun rotates and is dragged through the solar system by the solar wind.

Lead author of the research Dr Matt Owens said: “We’ve discovered that the Sun’s powerful magnetic field is having a big influence on UK lightning rates.

“The Sun’s magnetic field is like a bar magnet, so as the Sun rotates its magnetic field alternately points toward and away from the Earth, pulling the Earth’s own magnetic field one way and then another.”

In their study, the researchers used satellite and Met Office data to show that between 2001 and 2006, the UK experienced a 50% increase in thunderstorms when the HMF pointed towards the Sun and away from Earth.

This change of direction can skew or ‘bend’ the Earth’s own magnetic field and the researchers believe that this could expose some regions of the upper atmosphere to more galactic cosmic rays—tiny particles from across the Universe accelerated to close to the speed of light by exploding stars.

“From our results, we propose that galactic cosmic rays are channelled to different locations around the globe, which can trigger lightning in already charged-up thunderclouds. The changes to our magnetic field could also make thunderstorms more likely by acting like an extra battery in the atmospheric electric circuit, helping to further ‘charge up’ clouds,” Dr Owens continued.

The results build on a previous study by University of Reading researchers, also published in Environmental Research Letters, which found an unexpected link between energetic particles from the Sun and lightning rates on Earth.

Professor Giles Harrison, head of Reading’s Department of Meteorology and co-author of both studies, said: “This latest finding is an important step forward in our knowledge of how the weather on Earth is influenced by what goes on in space. The University of Reading’s continuing success in this area shows that new insights follow from atmospheric and space scientists working together.”

Dr Owens continued: “Scientists have been reliably predicting the solar magnetic field polarity since the 1970s by watching the surface of the Sun. We just never knew it had any implications for the weather on Earth. We now plan to combine regular weather forecasts, which predict when and where thunderclouds will form, with solar magnetic field predictions. This means a reliable lightning forecast could now be a genuine possibility.”

Contacts and sources:
Institute of Physics 

From Wednesday 19 November, this paper can be downloaded from http://iopscience.iop.org/1748-9326/9/11/115009/article

Thursday, November 20, 2014

The Riddle Of The Missing Stars

Thanks to the NASA/ESA Hubble Space Telescope, some of the most mysterious cosmic residents have just become even more puzzling.

This NASA/ESA Hubble Space Telescope image shows four globular clusters in the dwarf galaxy Fornax.

Credit: NASA, ESA, S. Larsen (Radboud University, the Netherlands)

New observations of globular clusters in a small galaxy show they are very similar to those found in the Milky Way, and so must have formed in a similar way. One of the leading theories on how these clusters form predicts that globular clusters should only be found nestled in among large quantities of old stars. But these old stars, though rife in the Milky Way, are not present in this small galaxy, and so, the mystery deepens.

Globular clusters -- large balls of stars that orbit the centres of galaxies, but can lie very far from them -- remain one of the biggest cosmic mysteries. They were once thought to consist of a single population of stars that all formed together. However, research has since shown that many of the Milky Way's globular clusters had far more complex formation histories and are made up of at least two distinct populations of stars.

Around half of a cluster’s stars are a single generation of normal stars, thought to have formed first; the other half form a second generation of stars, polluted with different chemical elements. In particular, the polluted stars contain up to 50-100 times more nitrogen than the first generation of stars.

The proportion of polluted stars found in the Milky Way's globular clusters is much higher than astronomers expected, suggesting that a large chunk of the first generation star population is missing. A leading explanation for this is that the clusters once contained many more stars but a large fraction of the first generation stars were ejected from the cluster at some time in its past.

This explanation makes sense for globular clusters in the Milky Way, where the ejected stars could easily hide among the many similar, old stars in the vast halo, but the new observations, which look at this type of cluster in a much smaller galaxy, call this theory into question.

Astronomers used Hubble's Wide Field Camera 3 (WFC3) to observe four globular clusters in a small nearby galaxy known as the Fornax Dwarf Spheroidal galaxy [1].

"We knew that the Milky Way's clusters were more complex than was originally thought, and there are theories to explain why. But to really test our theories about how these clusters form we needed to know what happened in other environments," says Søren Larsen of Radboud University in Nijmegen, the Netherlands, lead author of the new paper. "Before now we didn't know whether globular clusters in smaller galaxies had multiple generations or not, but our observations show clearly that they do!"

The astronomers' detailed observations of the four Fornax clusters show that they also contain a second polluted population of stars [2] and indicate that not only did they form in a similar way to one another, their formation process is also similar to clusters in the Milky Way. Specifically, the astronomers used the Hubble observations to measure the amount of nitrogen in the cluster stars, and found that about half of the stars in each cluster are polluted at the same level that is seen in Milky Way's globular clusters.

This high proportion of polluted second generation stars means that the Fornax globular clusters' formation should be covered by the same theory as those in the Milky Way.

Based on the number of polluted stars in these clusters, they would have to have been up to ten times more massive in the past, before kicking out huge numbers of their first generation stars and shrinking to their current size. But, unlike the Milky Way, the galaxy that hosts these clusters doesn’t have enough old stars to account for the huge number that were supposedly banished from the clusters.
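The factor of ten follows from a simple mass budget: if polluted stars form from material shed by the first generation, only a small fraction of the original first-generation mass is recycled, so today's roughly equal mix implies a much larger initial cluster. A sketch of that arithmetic; the 5% recycling fraction is an illustrative assumption, not a value from the paper:

```python
# Mass-budget sketch behind the 'clusters were once much larger' claim.
# The 5% recycling fraction is an illustrative assumption.

recycling_fraction = 0.05  # share of first-generation mass recycled into
                           # second-generation (polluted) stars

m_second_gen = 1.0       # mass in polluted stars today (arbitrary units)
m_first_gen_today = 1.0  # the two populations are roughly equal today

# Second-generation mass comes from first-generation ejecta:
# m_second_gen = recycling_fraction * m_first_gen_initial
m_first_gen_initial = m_second_gen / recycling_fraction  # 20.0

initial_mass = m_first_gen_initial  # initially dominated by the first generation
current_mass = m_first_gen_today + m_second_gen
print(f"initial/current mass ratio: {initial_mass / current_mass:.0f}x")  # 10x
```

With a smaller recycling fraction the implied initial mass grows further, which is why the ejection scenario needs somewhere for all those lost first-generation stars to hide.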

"If these kicked-out stars were there, we would see them -- but we don't!" explains Frank Grundahl of Aarhus University in Denmark, co-author on the paper. "Our leading formation theory just can't be right. There's nowhere that Fornax could have hidden these ejected stars, so it appears that the clusters couldn't have been so much larger in the past."

This finding means that a leading theory on how these mixed generation globular clusters formed cannot be correct and astronomers will have to think once more about how these mysterious objects, in the Milky Way and further afield, came to exist.

The new work is detailed in a paper published today, 20 November 2014, in The Astrophysical Journal.

Contacts and sources:
Georgia Bladon
ESA/Hubble Information Centre

Deep-Earth Carbon Offers Clues About Origin of Life on Earth

New findings by a Johns Hopkins University-led team reveal long unknown details about carbon deep beneath the Earth’s surface and suggest ways this subterranean carbon might have influenced the history of life on the planet.

The team also developed a new, related theory about how diamonds form in the Earth’s mantle.

For decades, scientists had little understanding of how carbon behaved deep below the Earth’s surface, even as they learned more and more about the element’s vital role in the planet’s crust. Using a model created by Johns Hopkins geochemist Dimitri Sverjensky, he, Vincenzo Stagno of the Carnegie Institution of Washington, and Fang Huang, a Johns Hopkins graduate student, became the first to calculate how much carbon, and what types of carbon, exist in fluids 100 miles below the Earth’s surface at temperatures up to 2,100 degrees F.

Dimitri Sverjensky
Credit: Johns Hopkins University

In an article published this week in the journal Nature Geoscience, Sverjensky and his team demonstrate that in addition to the carbon dioxide and methane already documented deep in subduction zones, there exists a rich variety of organic carbon species that could spark the formation of diamonds and perhaps even become food for microbial life.

“It is a very exciting possibility that these deep fluids might transport building blocks for life into the shallow Earth,” said Sverjensky, a professor in the Department of Earth and Planetary Sciences. “This may be a key to the origin of life itself.”

Sverjensky’s theoretical model, called the Deep Earth Water model, allowed the team to determine the chemical makeup of fluids in the Earth’s mantle, expelled from descending tectonic plates. Some of the fluids, those in equilibrium with mantle peridotite minerals, contained the expected carbon dioxide and methane. But others, those in equilibrium with diamonds and eclogitic minerals, contained dissolved organic carbon species including a vinegar-like acetic acid.

These high concentrations of dissolved carbon species, previously unknown at great depth in the Earth, suggest they are helping to ferry large amounts of carbon from the subduction zone into the overlying mantle wedge where they are likely to alter the mantle and affect the cycling of elements back into the Earth’s atmosphere.

The team also suggested that these mantle fluids with dissolved organic carbon species could be creating diamonds in a previously unknown way. Scientists have long believed diamond formation resulted through chemical reactions starting with either carbon dioxide or methane. The organic species offer a range of different starting materials, and an entirely new take on the creation of the gemstones.

The research is part of a 10-year global project to further understanding of carbon on Earth called the Deep Carbon Observatory. The work is funded by the Alfred P. Sloan Foundation.

Contacts and sources:
Jill Rosen
Johns Hopkins University

Imagination, Reality Flow In Opposite Directions In The Brain

As real as that daydream may seem, its path through your brain runs opposite to that of reality.

Aiming to discern discrete neural circuits, researchers at the University of Wisconsin-Madison have tracked electrical activity in the brains of people who alternately imagined scenes or watched videos.

Credit: galleryhip.com

"A really important problem in brain research is understanding how different parts of the brain are functionally connected. What areas are interacting? What is the direction of communication?" says Barry Van Veen, a UW-Madison professor of electrical and computer engineering. "We know that the brain does not function as a set of independent areas, but as a network of specialized areas that collaborate."

Van Veen, along with Giulio Tononi, a UW-Madison psychiatry professor and neuroscientist, Daniela Dentico, a scientist at UW-Madison's Waisman Center, and collaborators from the University of Liege in Belgium, published results recently in the journal NeuroImage. Their work could lead to the development of new tools to help Tononi untangle what happens in the brain during sleep and dreaming, while Van Veen hopes to apply the study's new methods to understand how the brain uses networks to encode short-term memory.

Electrical and computer engineering professor Barry Van Veen wears an electrode net used to monitor brain activity via EEG signals. His research with psychiatry professor and neuroscientist Giulio Tononi could help untangle what happens in the brain during sleep and dreaming.
Credit: Nick Berard

During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe -- from a higher-order region that combines inputs from several of the senses out to a lower-order region.

In contrast, visual information taken in by the eyes tends to flow from the occipital lobe -- which makes up much of the brain's visual cortex -- "up" to the parietal lobe.

"There seems to be a lot in our brains and animal brains that is directional, that neural signals move in a particular direction, then stop, and start somewhere else," says Van Veen. "I think this is really a new theme that had not been explored."

The researchers approached the study as an opportunity to test the power of electroencephalography (EEG) -- which uses sensors on the scalp to measure underlying electrical activity -- to discriminate between different parts of the brain's network.

Brains are rarely quiet, though, and EEG tends to record plenty of activity not necessarily related to a particular process researchers want to study.

To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle -- focusing on the details of shapes, colors and textures -- before watching a short video of silent nature scenes.

Using an algorithm Van Veen developed to parse the detailed EEG data, the researchers were able to compile strong evidence of the directional flow of information.

"We were very interested in seeing if our signal-processing methods were sensitive enough to discriminate between these conditions," says Van Veen, whose work is supported by the National Institute of Biomedical Imaging and Bioengineering. "These types of demonstrations are important for gaining confidence in new tools."

Contacts and sources:
Barry Van Veen
University of Wisconsin-Madison

New Zealand's Moa Were Exterminated By An Extremely Low-Density Human Population

A new study suggests that the flightless birds known as moa were completely extinct by the time New Zealand's human population had grown to, at most, two and a half thousand people.

The new findings, which appear in the prestigious journal Nature Communications, incorporate results of research by international teams involved in two major projects led by Professor Richard Holdaway (Palaecol Research Ltd and University of Canterbury) and Mr Chris Jacomb (University of Otago), respectively.

This is a restoration of an upland moa, Megalapteryx didinus.
Credit: George Edward Lodge

The researchers calculate that the Polynesians whose activities caused moa extinction in little more than a century had amongst the lowest human population densities on record. They found that during the peak period of moa hunting, there were fewer than 1500 Polynesian settlers in New Zealand, or about 1 person per 100 square kilometres, one of the lowest population densities recorded for any pre-industrial society.

They found that the human population could have reached about 2500 by the time moa went extinct. For several decades before then moa would have been rare luxuries.

Estimates of the human population during the moa hunting period are more sensitive to how long it took to exterminate the birds through hunting and habitat destruction than to the size of the founding population.

To better define the critical period of moa hunting, the research was aimed at "book-ending" the moa hunter period with new estimates for when people started eating moa, and when there were no more moa to eat.

Starting with the latest estimate for a founding population of about 400 people (including 170-230 women), and applying population growth rates in the range achieved by past and present populations, the researchers modelled the human population size through the moa hunter period and beyond. When moa and seals were still available, the better diet enjoyed by the settlers likely fuelled higher population growth, and the analyses took this into account.
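That modelling can be sketched as simple exponential growth. The founding size (about 400) and the size at extinction (about 2500) come from the article; the exponential form and the exact 1314 to 1425 CE span are simplifying assumptions for illustration (the actual study allowed growth rates to vary, for example with diet):

```python
import math

# Figures taken from the article; the exponential form and the exact
# 1314-1425 CE span are simplifying assumptions for illustration.
founding_population = 400        # initial settlers (article's latest estimate)
population_at_extinction = 2500  # approximate population when moa vanished
years = 1425 - 1314              # roughly first moa hunting to extinction

# N(t) = N0 * exp(r * t)  =>  r = ln(N(t)/N0) / t
r = math.log(population_at_extinction / founding_population) / years
print(f"implied average growth rate: {r:.2%} per year")

# Population roughly 80 years in, near the end of peak moa hunting:
pop_80 = founding_population * math.exp(r * 80)
print(f"population after ~80 years: {pop_80:.0f}")
```

Under these assumptions the implied growth rate is under 2% per year, and the population some 80 years in comes out close to the "fewer than 1500" figure quoted for the peak hunting period.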

The first "book-end" - first evidence for moa hunting - was set by statistical analyses of 93 new high-precision radiocarbon dates on genetically identified moa eggshell pieces. These had been excavated from first settlement era archaeological sites in the eastern South Island, and showed that moa were still breeding nearby.

Chris Jacomb explains: "The analyses showed that the sites were all first occupied - and the people began eating moa - after the major Kaharoa eruption of Mt Tarawera in about 1314 CE."

Ash from this eruption is an important time marker because no uncontested archaeological evidence for settlement has ever been found beneath it, Mr Jacomb says.

The other "book-end" was derived from statistical analyses of 270 high-precision radiocarbon dates on moa from non-archaeological sites. Analysis of 210 of the ages showed that moa were exterminated first in the more accessible eastern lowlands of the South Island, at the end of the 14th century, just 70-80 years after the first evidence for moa consumption.

Analysis of all 270 dates, on all South Island moa species from throughout the South Island, showed that moa survived for only about another 20 years after that.

Their total extinction most probably occurred within a decade either side of 1425 CE, barely a century after the earliest well-dated site, at Wairau Bar near Blenheim, was settled by people from tropical East Polynesia. The last known birds lived in the mountains of north-west Nelson. Professor Holdaway adds that "the results provide further support for the rapid extinction model for moa that Chris Jacomb and I published 14 years ago in [the US journal] Science."

The researchers note that it is often suggested that people could not have caused the extinction of megafauna such as the mammoths and giant sloths of North America and the giant marsupials of Australia, because the human populations when the extinctions happened were too small.

Prof Holdaway and Mr Jacomb say that the extinction of the New Zealand terrestrial megafauna of moa, giant eagle, and giant geese, accomplished by the direct and indirect activities of a very low-density human population, shows that population size can no longer be used as an argument against human involvement in extinctions elsewhere.

Contacts and sources: 
Richard N. Holdaway
University of Otago

Dinosaur Air Conditioning

Sweating, panting, moving to the shade, or taking a dip are all time-honored methods used by animals to cool down. The implicit goal of these adaptations is always to keep the brain from overheating. Now a new study shows that armor-plated dinosaurs (ankylosaurs) had the capacity to modify the temperature of the air they breathed in an exceptional way: by using their long, winding nasal passages as heat transfer devices.

Led by paleontologist Jason Bourke, a team of scientists at Ohio University used CT scans to document the anatomy of nasal passages in two different ankylosaur species. The team then modeled airflow through 3D reconstructions of these tubes. Bourke found that the convoluted passageways would have given the inhaled air more time and more surface area to warm up to body temperature by drawing heat away from nearby blood vessels. As a result, the blood would be cooled, and shunted to the brain to keep its temperature stable.
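The advantage of a longer airway can be illustrated with a toy heat-exchange model in which inhaled air approaches body temperature exponentially along the passage. The temperatures, lengths and length constant below are invented for illustration; the study itself simulated airflow through CT-based 3D reconstructions rather than using this formula:

```python
import math

BODY_TEMP = 38.0        # degrees C, assumed body temperature
AIR_TEMP = 15.0         # degrees C, assumed inhaled air temperature
LENGTH_CONSTANT = 20.0  # cm for the temperature gap to fall ~63% (invented)

def air_temp_at(length_cm):
    """Air temperature after travelling length_cm along the passage,
    assuming exponential approach to body temperature."""
    gap = BODY_TEMP - AIR_TEMP
    return BODY_TEMP - gap * math.exp(-length_cm / LENGTH_CONSTANT)

# A short, direct airway vs. progressively longer 'crazy-straw' airways:
for length in (10, 30, 60):
    print(f"{length:3d} cm passage -> air at {air_temp_at(length):.1f} C")
```

The longer the passage, the closer the inhaled air gets to body temperature, and correspondingly more heat is drawn out of the blood running alongside the airway, which is the cooling effect the team proposes.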

Modern mammals and birds use scroll-shaped bones called conchae or turbinates to warm inhaled air. But ankylosaurs seem to have accomplished the same result with a completely different anatomical construction.

"There are two ways that animal noses transfer heat while breathing," says Bourke. "One is to pack a bunch of conchae into the air field, like most mammals and birds do--it's spatially efficient. The other option is to do what lizards and crocodiles do and simply make the nasal airway much longer. Ankylosaurs took the second approach to the extreme."

Lawrence Witmer, who was also involved with the study, said, "Our team discovered these 'crazy-straw' airways several years ago, but only recently have we been able to scientifically test hypotheses on how they functioned. By simulating airflow through these noses, we found that these stretched airways were effective heat exchangers. They would have allowed these multi-tonne beasts to keep their multi-ounce brains from overheating."

Like our own noses, ankylosaur noses likely served more than one function. Even as they conditioned the air the animal breathed, the convoluted passageways may have added resonance to the low-pitched sounds it uttered, allowing it to be heard over greater distances.

Contacts and sources:
Anthony Friscia
Society of Vertebrate Paleontology

Climate Change Was Not To Blame For The Collapse Of The Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change - commonly assumed to be responsible - could not have been the culprit.

Bronze Age site
Credit: University of Bradford

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen's University Belfast have shown that the changes in climate that scientists believed to coincide with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity starts to decline after 900 BC, and falls rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions didn't occur until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint the precise dates that Europe's Bronze Age population collapse occurred.
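The "dates as data" idea can be sketched very simply: bin radiocarbon dates by century and read the count in each bin as a proxy for activity. The dates below are invented stand-ins; the study analysed more than 2000 real dates with formal statistical techniques rather than a raw histogram:

```python
from collections import Counter

# Invented calibrated dates (years BC) standing in for the Irish record.
dates_bc = [1150, 1120, 1080, 1050, 1020, 990, 960, 940, 930, 910,
            880, 860, 840, 790, 770, 720]

def century_bin(year_bc):
    """Map a BC year to the start of its century, e.g. 990 -> 1000."""
    return ((year_bc - 1) // 100 + 1) * 100

activity = Counter(century_bin(y) for y in dates_bc)
for start in sorted(activity, reverse=True):
    print(f"{start}-{start - 99} BC: {activity[start]} dates")
```

In this invented series the counts peak and then thin out in the later centuries; it is that kind of decline that the team dated precisely with their statistical methods.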

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

"Our evidence shows definitively that the population decline in this period cannot have been caused by climate change," says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, "We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750 BC using statistical methods."

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. "Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought," she says.

"Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming," adds Professor Armit. "This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries."

The findings have significance for modern day climate change debates which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

"The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally," says Professor Armit.

"Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past."

Contacts and sources: 

Study Shows Marijuana’s Long-Term Effects On The Brain

The effects of chronic marijuana use on the brain may depend on age of first use and duration of use, according to researchers at the Center for BrainHealth at The University of Texas at Dallas.

Credit: Vanderbilt University

In a paper published today in Proceedings of the National Academy of Sciences (PNAS), researchers for the first time comprehensively describe existing abnormalities in brain function and structure of long-term marijuana users with multiple magnetic resonance imaging (MRI) techniques. Findings show chronic marijuana users have smaller brain volume in the orbitofrontal cortex (OFC), a part of the brain commonly associated with addiction, but also increased brain connectivity.

“We have seen a steady increase in the incidence of marijuana use since 2007,” said Dr. Francesca Filbey, Associate Professor in the School of Behavioral and Brain Sciences at the University of Texas at Dallas and Director of the Cognitive Neuroscience Research in Addictive Disorders at the Center for BrainHealth. “However, research on its long-term effects remains scarce despite the changes in legislation surrounding marijuana and the continuing conversation surrounding this relevant public health topic.”

The research team studied 48 adult marijuana users and 62 gender- and age-matched non-users, accounting for potential biases such as gender, age and ethnicity. The authors also controlled for tobacco and alcohol use. On average, the marijuana users who participated in the study consumed the drug three times per day. Cognitive tests show that chronic marijuana users had lower IQ compared to age- and gender-matched controls, but the differences do not seem to be related to the brain abnormalities, as no direct correlation can be drawn between IQ deficits and OFC volume decrease.

“What’s unique about this work is that it combines three different MRI techniques to evaluate different brain characteristics,” said Dr. Sina Aslan, founder and president of Advance MRI, LLC and adjunct assistant professor at The University of Texas at Dallas. “The results suggest increases in connectivity, both structural and functional, that may be compensating for gray matter losses. Eventually, however, the structural connectivity or ‘wiring’ of the brain starts degrading with prolonged marijuana use.”

Tests reveal that earlier onset of regular marijuana use induces greater structural and functional connectivity. Greatest increases in connectivity appear as an individual begins using marijuana. Findings show severity of use is directly correlated to greater connectivity.

Although increased structural wiring declines after six to eight years of continued chronic use, marijuana users continue to display more intense connectivity than healthy non-users, which may explain why chronic, long-term users “seem to be doing just fine” despite smaller OFC brain volumes, Filbey explained.

“To date, existing studies on the long-term effects of marijuana on brain structures have been largely inconclusive due to limitations in methodologies,” said Dr. Filbey. “While our study does not conclusively address whether any or all of the brain changes are a direct consequence of marijuana use, these effects do suggest that these changes are related to age of onset and duration of use.”

The study offers a preliminary indication that gray matter in the OFC may be more vulnerable than white matter to the effects of delta-9-tetrahydrocannabinol (THC), the main psychoactive ingredient in the cannabis plant. According to the authors, the study provides evidence that chronic marijuana use initiates a complex process that allows neurons to adapt and compensate for smaller gray matter volume, but further studies are needed to determine whether these changes revert back to normal with discontinued marijuana use, whether similar effects are present in occasional marijuana users versus chronic users and whether these effects are indeed a direct result of marijuana use or a predisposing factor.

The research was funded by the National Institute on Drug Abuse to Dr. Filbey (R01 DA030344, K01 DA021632).

Contacts and sources:
Jessica Baine, B.S., Study Coordinator
The Center for BrainHealth

Mega-Landslide Covers 1,300 Square Miles

A catastrophic landslide, one of the largest known on the surface of the Earth, took place within minutes in southwestern Utah more than 21 million years ago, reports a Kent State University geologist in a paper published in the November issue of the journal Geology.

David Hacker, Ph.D., Kent State University associate professor of geology, points to pseudotachylyte layers and veins within the Markagunt gravity slide.

The Markagunt gravity slide, the size of three Ohio counties, is one of the two largest known continental landslides (larger slides exist on the ocean floors). David Hacker, Ph.D., associate professor of geology at Kent State University at Trumbull, and two colleagues discovered and mapped the scope of the Markagunt slide over the past two summers.

His colleagues and co-authors are Robert F. Biek of the Utah Geological Survey and Peter D. Rowley of Geologic Mapping Inc. of New Harmony, Utah.

Geologists had known about smaller portions of the Markagunt slide before the recent mapping showed its enormous extent. Hiking through the wilderness areas of the Dixie National Forest and Bureau of Land Management land, Hacker identified features showing that the Markagunt landslide was much bigger than previously known.

The landslide took place in an area between what is now Bryce Canyon National Park and the town of Beaver, Utah. It covered about 1,300 square miles, an area as big as Ohio’s Cuyahoga, Portage and Summit counties combined.

Its rival in size, the “Heart Mountain slide,” which took place around 50 million years ago in northwest Wyoming, was discovered in the 1940s and is a classic feature in geology textbooks.

The Markagunt could prove to be much larger than the Heart Mountain slide, once it is mapped in greater detail.

“Large-scale catastrophic collapses of volcanic fields such as these are rare but represent the largest known landslides on the surface of the Earth,” the authors wrote.

The length of the landslide – over 55 miles – also shows that it was as fast-moving as it was massive, Hacker said.

Evidence showing that the slide was catastrophic – occurring within minutes – included the presence of pseudotachylytes, rocks that were melted into glass by the immense friction. Any animals living in its path would have been quickly overrun.

Evidence of the slide is not readily apparent to visitors today.

“Looking at it, you wouldn’t even recognize it as a landslide,” Hacker said.

But internal features of the slide, exposed in outcrops, yielded evidence such as jigsaw puzzle rock fractures and shear zones, along with the pseudotachylytes. 

Hacker, who studies catastrophic geological events, said the slide originated when a volcanic field consisting of many stratovolcanoes (a type similar to Mount St. Helens in the Cascade Mountains, which erupted in 1980) collapsed and produced the massive landslide.

The collapse may have been caused by the vertical inflation of deeper magma chambers that fed the volcanoes. Hacker has spent many summers in Utah mapping geologic features of the Pine Valley Mountains south of the Markagunt where he has found evidence of similar, but smaller slides from magma intrusions called laccoliths.

What is learned about the mega-landslide could help geologists better understand these extreme types of events. The Markagunt and the Heart Mountain slides document for the first time how large portions of ancient volcanic fields have collapsed, Hacker said, representing “a new class of hazards in volcanic fields.”

While the Markagunt landslide was a rare event, it shows the magnitude of what could happen in modern volcanic fields like the Cascades.

“We study events from the geologic past to better understand what could happen in the future,” he said.

The next steps in the research, conducted with his co-authors on the Geology paper, will be to continue mapping the slide, collect samples from the base for structural analysis and date the pseudotachylytes.

Hacker, who earned his Ph.D. in geology at Kent State, joined the faculty in 2000 after working for an environmental consulting company. He is co-author of the book Earth’s Natural Hazards: Understanding Natural Disasters and Catastrophes, published in 2010.

Contacts and sources:
Emily Vincent
Kent State University

View the abstract of the Geology paper, available online now.

Too Many People, Not Enough Water: Now And 2,700 Years Ago

The Assyrian Empire once dominated the ancient Near East. At the start of the 7th century BC, it was a mighty military machine and the largest empire the Old World had yet seen. But then, before the century was out, it had collapsed. Why? An international study now offers two new factors as possible contributors to the empire's sudden demise - overpopulation and drought.

Assyrian Attack on a Town
Credit: Wikipedia

Adam Schneider of the University of California, San Diego and Selim Adalı of Koç University in Istanbul, Turkey, have just published evidence for their novel claim.

Map of traditional Assyrian heartland and cities mentioned in ancient text.

Credit: Adam Schneider

"As far as we know, ours is the first study to put forward the hypothesis that climate change - specifically drought - helped to destroy the Assyrian Empire," said Schneider, doctoral candidate in anthropology at UC San Diego and first author on the paper in the Springer journal Climatic Change.

The researchers' work connects recently published climate data to text found on a clay tablet. The text is a letter to the king, written by a court astrologer, reporting (almost incidentally) that "no harvest was reaped" in 657 BC.

Paleoclimatic records back up the courtier's statement. Further, analysis of the region's weather patterns, in what is now Northern Iraq and Syria, suggests that the drought was not a one-off event but part of a series of arid years.

Add to that the strain of overpopulation, especially in places like the Assyrian capital of Nineveh (near present-day Mosul) - which had grown unsustainably large during the reign of King Sennacherib - and Assyria was fatally weakened, the researchers argue.

UC San Diego anthropologist Adam Schneider in Damascus, 2010.
Credit: Adam Schneider

Within five years of the no-harvest report, Assyria was racked by a series of civil wars. Then joint Babylonian and Median forces attacked and destroyed Nineveh in 612 BC. The empire never recovered.

"We're not saying that the Assyrians suddenly starved to death or were forced to wander off into the desert en masse, abandoning their cities," Schneider said. "Rather, we're saying that drought and overpopulation affected the economy and destabilized the political system to a point where the empire couldn't withstand unrest and the onslaught of other peoples."

Schneider and Adalı draw parallels in their paper between the collapse of the ancient superpower and what is happening in the same area now. They point out, for instance, that the 7th-century story they outline bears a striking resemblance to the severe drought and subsequent political conflict in today's Syria and northern Iraq.

Schneider also sees an eerie similarity between Nineveh and Southern California. Though people weren't forcibly relocated to Los Angeles or San Diego to help an emperor grow himself a "great city," still, the populations of these contemporary metropolitan areas are probably also too large for their environments.

On a more global scale, Schneider and Adalı conclude, modern societies should pay attention to what can happen when immediate gains are prioritized over considerations of the long term.

"The Assyrians can be 'excused' to some extent," they write, "for focusing on short-term economic or political goals which increased their risk of being negatively impacted by climate change, given their technological capacity and their level of scientific understanding about how the natural world works. We, however, have no such excuses, and we also possess the additional benefit of hindsight, which allows us to piece together from the past what can go wrong if we choose not to enact policies that promote longer-term sustainability."

Contacts and sources:
Inga Kiderra
University of California - San Diego

Were Neanderthals A Sub-Species Of Modern Humans?

New Research Led By SUNY Downstate’s Dr. Samuel Márquez Says No

Disappearance of Neanderthals Likely the Result of Competition from Homo sapiens, and Not from Poor Adaptation to Cold

In an extensive, multi-institution study led by SUNY Downstate Medical Center, researchers have identified new evidence supporting the growing belief that Neanderthals were a distinct species separate from modern humans (Homo sapiens), and not a subspecies of modern humans.

Neanderthal man from the National Museum of Nature and Science.
Credit: Wikipedia
The study looked at the entire nasal complex of Neanderthals and involved researchers with diverse academic backgrounds. Supported by funding from the National Science Foundation and the National Institutes of Health, the research also indicates that the Neanderthal nasal complex was not adaptively inferior to that of modern humans, and that the Neanderthals’ extinction was likely due to competition from modern humans and not an inability of the Neanderthal nose to process a colder and drier climate.

Samuel Márquez, PhD, associate professor and co-discipline director of gross anatomy in SUNY Downstate’s Department of Cell Biology, and his team of specialists published their findings on the Neanderthal nasal complex in the November issue of The Anatomical Record, which is part of a special issue on The Vertebrate Nose: Evolution, Structure, and Function (now online).

They argue that studies of the Neanderthal nose, which have spanned over a century and a half, have been approaching this anatomical enigma from the wrong perspective. Previous work has compared Neanderthal nasal dimensions to modern human populations such as the Inuit and modern Europeans, whose nasal complexes are adapted to cold and temperate climates.

However, the current study joins a growing body of evidence that the upper respiratory tracts of this extinct group functioned via a different set of rules as a result of a separate evolutionary history and overall cranial bauplan (body plan), resulting in a mosaic of features not found among any population of Homo sapiens. Thus Dr. Márquez and his team of paleoanthropologists, comparative anatomists, and an otolaryngologist have contributed to the understanding of two of the most controversial topics in paleoanthropology: whether Neanderthals were a different species from modern humans, and which aspects of their cranial morphology evolved as adaptations to cold stress.

“The strategy was to have a comprehensive examination of the nasal region of diverse modern human population groups and then compare the data with the fossil evidence. We used traditional morphometrics, geometric morphometric methodology based on 3D coordinate data, and CT imaging,” Dr. Márquez explained.
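Geometric morphometrics of the kind Dr. Márquez describes compares shapes via landmark coordinates, and a standard first step is Procrustes superimposition, which strips away position, scale, and orientation so that only shape differences remain. The paper's actual pipeline is not published here, so the following is only a minimal NumPy sketch of orthogonal Procrustes alignment between two landmark configurations; all names and the toy data are illustrative.

```python
import numpy as np

def procrustes_align(X, Y):
    """Superimpose landmark set Y onto X (both n_landmarks x n_dims):
    remove translation and scale, then find the best-fit rotation."""
    Xc = X - X.mean(axis=0)           # center each configuration at the origin
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)      # scale to unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    # Orthogonal Procrustes: rotation R minimizing ||Xc - Yc @ R||
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    Y_fit = Yc @ R
    return Y_fit, np.linalg.norm(Xc - Y_fit)  # Procrustes distance

# A rotated, shifted, scaled copy of a shape should align almost perfectly:
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0], [2.0, 1.0]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Y = 3.0 * X @ rot.T + np.array([5.0, -2.0])
_, dist = procrustes_align(X, Y)
print(round(dist, 6))  # ~0: same shape despite different pose
```

After such alignment, the remaining coordinate differences are pure shape variation, which is what lets a fossil be placed "along the modern human spectrum."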

Anthony S. Pagano, PhD, anatomy instructor at NYU Langone Medical Center, a co-author, traveled to many European museums carrying a microscribe digitizer, the instrument used to collect 3D coordinate data from the fossils studied in this work, as spatial information may be missed using traditional morphometric methods. “We interpreted our findings using the different strengths of the team members,” Dr. Márquez said, “so that we can have a ‘feel’ for where these Neanderthals may lie along the modern human spectrum.”

Co-author William Lawson, MD, DDS, vice-chair and the Eugen Grabscheid research professor of otolaryngology and director of the Paleorhinology Laboratory of the Icahn School of Medicine at Mount Sinai, notes that the external nasal aperture of the Neanderthals approximates some modern human populations but that their midfacial prognathism (protrusion of the midface) is startlingly different. That difference is one of a number of Neanderthal nasal traits suggesting an evolutionary development distinct from that of modern humans. Dr. Lawson’s conclusion is predicated upon nearly four decades of clinical practice, in which he has seen over 7,000 patients representing a rich diversity of human nasal anatomy.

Distinguished Professor Jeffrey T. Laitman, PhD, also of the Icahn School of Medicine and director of the Center for Anatomy and Functional Morphology, and Eric Delson, PhD, director of the New York Consortium in Evolutionary Primatology or NYCEP, are also co-authors and are seasoned paleoanthropologists, each approaching their fifth decade of studying Neanderthals. Dr. Delson has published on various aspects of human evolution since the early 1970s.

Dr. Laitman states that this article is a significant contribution to the question of Neanderthal cold adaptation in the nasal region, especially in its identification of a different mosaic of features than those of cold-adapted modern humans. Dr. Laitman’s body of work has shown that there are clear differences in the vocal tract proportions of these fossil humans when compared to modern humans. This current contribution has now identified potentially species-level differences in nasal structure and function.

Dr. Laitman said, “The strength of this new research lies in its taking the totality of the Neanderthal nasal complex into account, rather than looking at a single feature. By looking at the complete morphological pattern, we can conclude that Neanderthals are our close relatives, but they are not us.”

Ian Tattersall, PhD, emeritus curator of the Division of Anthropology at the American Museum of Natural History, an expert on Neanderthal anatomy and functional morphology who did not participate in this study, stated, “Márquez and colleagues have carried out a most provocative and intriguing investigation of a very significant complex in the Neanderthal skull that has all too frequently been overlooked.” Dr. Tattersall hopes that “with luck, this research will stimulate future research demonstrating once and for all that Homo neanderthalensis deserves a distinctive identity of its own.”

Contacts and sources:
SUNY Downstate Medical Center

The article in The Anatomical Record is entitled, “The Nasal Complex of Neanderthals: An Entry Portal to their Place in Human Ancestry.” It is available online at http://onlinelibrary.wiley.com/doi/10.1002/ar.23040/full.

This research was supported by the following grants, awarded to Mount Sinai: NSF-SBR9634519 and NSF BCS-1128901 from the National Science Foundation; and NIH 1 F31DC00255-01 from the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH and NSF. Analysis and additional data collection were performed at SUNY Downstate.

Supercomputing Beyond Genealogy Reveals Surprising European Ancestors

NSF XSEDE Stampede supercomputer compares modern and ancient DNA

Left: The Stuttgart skull, from a 7,000-year-old skeleton found in Germany among artifacts from the first widespread farming culture of central Europe. Right: blue eyes and dark skin, as the European hunter-gatherer appeared 7,000 years ago. Artist's depiction based on La Braña 1, whose remains were recovered at the La Braña-Arintero site in León, Spain.
Courtesy of the Consejo Superior de Investigaciones Cientificas

What if you researched your family's genealogy, and a mysterious stranger turned out to be an ancestor?

That's the surprise a team of scientists encountered when they peered back thousands of years into Europe's murky prehistoric past. With sophisticated genetic tools, supercomputing simulations and modeling, they traced the origins of modern Europeans to three distinct populations.

The international research team published their September 2014 results in the journal Nature.

The 8,000-year-old skull discovered in Loschbour, Luxembourg, provided DNA evidence for the study of European ancestry. Scientists have only a handful of ancient remains well enough preserved for their genomes to be sequenced.

The remains studied were found at the caves of Loschbour, La Braña and Stuttgart, at a ritual site at Motala, and at Mal'ta.

XSEDE, the Extreme Science and Engineering Discovery Environment, provided the computational resources used in the study. It's a single virtual system that scientists use to interactively share computing resources, data and expertise.

Genomic analysis code ran on Stampede, the nearly 10 petaflop Dell/Intel Linux supercomputer at the Texas Advanced Computing Center (TACC). The research was funded in part by the National Cancer Institute of the National Institutes of Health.

"The main finding was that modern Europeans seem to be a mixture of three different ancestral populations," said study co-author Joshua Schraiber, a National Science Foundation Postdoctoral fellow at the University of Washington.

Schraiber said these results surprised him because the prevailing view among scientists held that only two distinct groups mixed between 7,000 and 8,000 years ago in Europe, as humans first started to adopt agriculture.

Hunter-gatherers with olive skin and mainly blue eyes first expanded across the continent about 12,000 years ago, moving north with the retreat of glaciers at the end of the last Ice Age. Later, early European farmers from the Near East migrated west and mixed with the hunter-gatherers. Genetic evidence revealed these farmers had light-colored skin and brown eyes.

The third mystery group that emerged from the data is ancient northern Eurasians. "People from the Siberia area is how I conceptualize it. We don't know too much anthropologically about who these people are. But the genetic evidence is relatively strong, because we do have ancient DNA from an individual that's very closely related to that population, too," Schraiber said.

That individual is a three-year-old boy whose remains were found near Lake Baikal in Siberia at a site called Mal'ta. Scientists determined his arm bone to be 24,000 years old. They then sequenced his genome, making it the second oldest yet sequenced from a modern human. Interestingly enough, in late 2013 scientists used the Mal'ta genome to find that about one-third of Native American ancestry originated through gene flow from these ancient North Eurasians.

"I think there was a little bit of luck in this," Schraiber said, referring to the Mal'ta genome. "We knew the models weren't fitting. But we didn't know what was wrong. Luckily, this new ancient DNA had come out."

The scientists took the genomes from these ancient humans and compared them to those of 2,345 modern-day Europeans. "I used the POPRES data set, which had been used before to ask similar questions just looking at modern Europeans," Schraiber said. "And then I used a piece of software called Beagle, which was written by Brian Browning and Sharon Browning at the University of Washington, which computationally detects these regions of identity by descent."
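Beagle itself is a standalone program built on probabilistic haplotype models, so the following is emphatically not its algorithm; it is only a toy Python sketch of the intuition behind identity-by-descent detection. Two sequences that share a long unbroken run of identical alleles are unlikely to match by chance, so such a run suggests the segment was inherited from a common ancestor. The sequences and the length threshold below are illustrative.

```python
def shared_segments(hap1, hap2, min_len=5):
    """Return (start, end) index pairs where two haplotypes match
    for at least min_len consecutive sites (a crude IBD proxy)."""
    segments, start = [], None
    for i, (a, b) in enumerate(zip(hap1, hap2)):
        if a == b:
            if start is None:
                start = i          # a matching run begins
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None           # the run is broken
    if start is not None and len(hap1) - start >= min_len:
        segments.append((start, len(hap1)))  # run extends to the end
    return segments

ancient = "AACGTTACGGATCCGATTAG"
modern  = "AACGTTACGGTTCCGATTAG"
print(shared_segments(ancient, modern))  # [(0, 10), (11, 20)]
```

Real IBD detection must also handle genotyping error, phasing uncertainty, and allele frequencies, which is why tools like Beagle are so computationally demanding at the scale of thousands of genomes.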

Joshua Schraiber, National Science Foundation Postdoctoral fellow, University of Washington.

The heavy demand on CPU time and RAM caused a bottleneck in the analysis.

"Having access to the Stampede supercomputer at TACC was essential for me because at some point I was using a hundred gigabytes of RAM to do something. It took days, even spreading it across multiple processors. It takes a lot of effort to do this identity by descent detection," Schraiber said.

Working on the hunch that the Mal'ta genome might fill in some of the blanks the modeling had pointed to, Schraiber saw far more identity by descent between the ancient and modern individuals than he expected. "It made us happy in a lot of ways to find that these are people who share ancestors with modern Europeans."

Schraiber looks to East Asia and Africa as the next hot spots to study human history as scientists push forward to discover and analyze new sources of ancient DNA.

"Using archeological evidence tells you a lot. Modern DNA tells you a lot. But it's by combining the two and getting ancient DNA, which is anthropological evidence and genetic evidence at the same time, you're able to unravel these things. You're able to find complexity that you just didn't know was there before," he said.

Contacts and sources:
Faith Singer
University of Texas at Austin, Texas Advanced Computing Center