Tuesday, October 31, 2017

'Darwin's Aliens': Extraterrestrials May Be More Like Us Than We Think

Hollywood films and science fiction literature fuel the belief that aliens are other-worldly, monster-like beings, very different from humans. But new research suggests that we could have more in common with our extra-terrestrial neighbours than initially thought.

In a new study published in the International Journal of Astrobiology, scientists from the University of Oxford show for the first time how evolutionary theory can be used to make predictions about aliens and better understand their behaviour. They show that aliens are potentially shaped by the same processes and mechanisms that shaped humans, such as natural selection.

The theory supports the argument that foreign life forms undergo natural selection and, like us, evolve to be fitter and stronger over time.

Picture an alien. These illustrations represent different levels of adaptive complexity we might imagine when thinking about aliens. (a) A simple replicating molecule, with no apparent design. This may or may not undergo natural selection. (b) An incredibly simple, cell-like entity. Even something this simple has sufficient contrivance of parts that it must undergo natural selection. (c) An alien with many intricate parts working together is likely to have undergone major transitions.
Credit:  International Journal of Astrobiology

Sam Levin, a researcher in Oxford's Department of Zoology, said: 'A fundamental task for astrobiologists (those who study life in the cosmos) is thinking about what extra-terrestrial life might be like. But making predictions about aliens is hard. We only have one example of life - life on Earth - to extrapolate from. Past approaches in the field of astrobiology have been largely mechanistic, taking what we see on Earth, and what we know about chemistry, geology, and physics to make predictions about aliens.

'In our paper, we offer an alternative approach, which is to use evolutionary theory to make predictions that are independent of Earth's details. This is a useful approach, because theoretical predictions will apply to aliens that are silicon based, do not have DNA, and breathe nitrogen, for example.'

Using this idea of alien natural selection as a framework, the team addressed extra-terrestrial evolution, and how complexity will arise in space.

Species complexity has increased on the Earth as a result of a handful of events, known as major transitions. These transitions occur when a group of separate organisms evolve into a higher-level organism - when cells become multi-cellular organisms, for example. Both theory and empirical data suggest that extreme conditions are required for major transitions to occur.

The paper also makes specific predictions about the biological make-up of complex aliens, and offers a degree of insight as to what they might look like.

Major transitions in space: 'The Octomite'. A complex alien that comprises a hierarchy of entities, where each lower level collection of entities has aligned evolutionary interests such that conflict is effectively eliminated. These entities engage in division of labour, with various parts specialising on various tasks, such that the parts are mutually dependent.

Credit:  International Journal of Astrobiology

Sam Levin added: 'We still can't say whether aliens will walk on two legs or have big green eyes. But we believe evolutionary theory offers a unique additional tool for trying to understand what aliens will be like, and we have shown some examples of the kinds of strong predictions we can make with it.

'By predicting that aliens have undergone major transitions - which is how complexity has arisen in species on Earth - we can say that there is a level of predictability to evolution that would cause them to look like us.

'Like humans, we predict that they are made up of a hierarchy of entities, which all cooperate to produce an alien. At each level of the organism there will be mechanisms in place to eliminate conflict, maintain cooperation, and keep the organism functioning. We can even offer some examples of what these mechanisms will be.

'There are potentially hundreds of thousands of habitable planets in our galaxy alone. We can't say whether or not we're alone on Earth, but we have taken a small step forward in answering, if we're not alone, what our neighbours are like.'



Contacts and sources:
Lanisha Butterfield
University of Oxford

Citation: 'Darwin's aliens' appears in the International Journal of Astrobiology.
 

Graphene Enables High-Speed Electronics on Flexible Materials

A flexible detector for terahertz frequencies (1 terahertz = 1,000 gigahertz) has been developed by Chalmers researchers using graphene transistors on plastic substrates. It is the first of its kind, and can extend the use of terahertz technology to applications that require flexible electronics, such as wireless sensor networks and wearable technology. The results are published in the scientific journal Applied Physics Letters.

Terahertz radiation has a wide range of uses, from radio astronomy to medicine. The term refers to electromagnetic waves whose frequencies range from 100 gigahertz to 10 terahertz. Demand for higher bandwidth in wireless communications and for imaging in security applications has led to intensified research on systems and components intended for terahertz frequencies.
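For readers who think in wavelengths rather than frequencies, the band quoted above can be translated with λ = c/f. A minimal sketch, using only the 100 gigahertz and 10 terahertz limits from the text:

```python
# Map the terahertz band quoted above (100 GHz to 10 THz) to
# free-space wavelengths using lambda = c / f.
C = 299_792_458.0  # speed of light in m/s

def wavelength_mm(freq_hz):
    """Free-space wavelength in millimeters for a frequency in hertz."""
    return C / freq_hz * 1e3

print(f"100 GHz -> {wavelength_mm(100e9):.1f} mm")                 # ~3.0 mm
print(f"10 THz  -> {wavelength_mm(10e12) * 1e3:.0f} micrometers")  # ~30 micrometers
```

That is, terahertz waves sit between microwaves and the far infrared, which is why they are attractive for both imaging and sensing.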

With the help of the two-dimensional material graphene, the first flexible terahertz detector has been developed by researchers at Chalmers. The opportunities are great within health and Internet of Things, and for new types of sensors. 
Illustration: Boid – Product Design Studio, Gothenburg.

One challenge has long been to enable low weight and cheap applications. However, advances in polymer technology have promoted the development of flexible electronics and enabled the production of high frequency units on flexible substrates.

Now, Chalmers researchers Xinxin Yang, Andrei Vorobiev, Andrey Generalov, Michael A. Andersson and Jan Stake have developed the first mechanically flexible, graphene-based terahertz detector of its kind, paving the way for flexible terahertz electronics.

The detector has unique features. At room temperature, it detects signals in the frequency range 330 to 500 gigahertz. It is translucent and flexible, and opens the door to a variety of applications. The technique can be used for imaging in the terahertz range (THz camera), but also for identifying different substances (sensing). It may also be of potential benefit in health care, where terahertz waves can be used to detect cancer. Other areas where the detector could be used are imaging sensors for vehicles and wireless communications.

The unique electronic features of graphene, combined with its flexible nature, make it a promising material to integrate into plastics and fabrics, important building blocks in a future interconnected world. Graphene electronics enables new applications for everyday objects, commonly referred to as the Internet of Things.

Flexible terahertz detector 
Credit: Chalmers University of Technology

The detector shows the concrete possibilities of graphene, a material that conducts electric current extremely well. This feature makes graphene an attractive building block in fast electronics. The Chalmers researchers' work is therefore an important step forward for graphene in the terahertz area, and a breakthrough for high-performance, cheap, flexible terahertz technology.

The detector drew attention at the recent EU Tallinn Digital Summit, where several important technological innovations made possible by graphene and related materials were on display. At the summit, EU Heads of State and Government gathered to discuss digital innovation and Europe's digital future. The Graphene Flagship's focus was to show what role graphene can play.

The research is also part of Xinxin Yang's licentiate seminar, which will be presented at Chalmers on 22 November 2017.

The research on the terahertz detector has been funded by the EU Graphene Flagship, the Swedish Foundation for Strategic Research (SSF), and the Knut and Alice Wallenberg Foundation (KAW).






Contacts and sources:
Chalmers University of Technology

Citation: "A flexible graphene terahertz detector"
Xinxin Yang, Andrei Vorobiev, Andrey Generalov, Michael A. Andersson and Jan Stake, Chalmers University of Technology
Applied Physics Letters, Volume 111, Issue 2, 10.1063/1.4993434
http://dx.doi.org/10.1063/1.4993434

Intricate Beauty of a Cracked Glass Revealed by Novel Technique

Physics, math and special gels explain the formation of fracture patterns in brittle materials

Researchers have long pondered the origin of the delicate criss-cross faceted patterns that are commonly found on the surfaces of broken material. Typical crack speeds in glass easily surpass a kilometer per second, and broken surface features may be far smaller than a millimeter. Since the formation of surface structure lasts a tiny fraction of a second, the processes generating these patterns have been largely a mystery.

Now there is a way around this problem. Replacing hard glass with soft but brittle gels makes it possible to slow down the cracks that precipitate fracture to mere meters per second. This novel technique has enabled researchers Itamar Kolvin, Gil Cohen and Prof. Jay Fineberg, at the Hebrew University of Jerusalem’s Racah Institute of Physics, to unravel the complex physical processes that take place during fracture in microscopic detail and in real time.

This photograph shows a typical faceted fracture surface formed by the fracture of a brittle gel. 
Credit: Hebrew University of Jerusalem

Their work sheds new light on how broken surface patterns are formed. Surface facets bounded by steps are formed due to a special “topological” arrangement of the crack that cannot easily be undone, much as a knot along a string cannot be unraveled without pulling the whole length of the string through it.

These “crack knots” increase the surface area formed by a crack, creating a new venue for dissipating the energy required for material failure and thereby making materials harder to break.

“The complex surfaces that are commonly formed on any fractured object have never been entirely understood,” said Prof. Jay Fineberg. “While a crack could form perfectly flat, mirror-like fracture surfaces (and sometimes does), generally complex faceted surfaces are the rule, even though they require much more energy to form. This study illuminates both how such beautiful and intricate patterns emerge in the fracture process, and why the crack cannot divest itself of them once they are formed.”

This physically important process provides an aesthetic example of how physics and mathematics intertwine to create intricate and often unexpected beauty. The research appears in Nature Materials.


Contacts and sources:
The Hebrew University of Jerusalem

Citation: Itamar Kolvin, Gil Cohen, Jay Fineberg. Topological defects govern crack front motion and facet formation on broken surfaces. Nature Materials, Advance Online Publication October 16, 2017. doi:10.1038/nmat5008. Link: http://dx.doi.org/10.1038/nmat5008

Gene Therapy Protects against Age-Related Cognitive and Memory Deficits

Researchers from the Institute of Neurosciences at the Universitat Autònoma de Barcelona (INc-UAB) and the Vall d'Hebron Research Institute (VHIR) are the first to demonstrate that regulation of the brain's Klotho gene using gene therapy protects against age-related learning and memory problems in mice.

The study, published in Molecular Psychiatry (Nature group), opens the door to advancing in the research and development of therapies based on this neuroprotective gene.

Researchers from the UAB demonstrated in a previous study that Klotho regulates age-associated processes, increasing life expectancy when over-expressed and accelerating the development of learning and memory deficiencies when inhibited.

From left to right: researchers of the UAB Institute of Neurosciences Miguel Chillón, Assumpció Bosch, Lydia Giménez-Llort and Àngela Sánchez.

Credit: Universitat Autònoma de Barcelona

Now they have demonstrated in vivo for the first time that one dose of this gene injected into the central nervous system prevents the cognitive decline associated with ageing in old animals which were treated at a younger age.

The results, which form part of the PhD thesis of Anna Massó, first author of the article, are part of a study led by INc-UAB researchers Dr Miguel Chillón, ICREA researcher at the Department of Biochemistry and Molecular Biology of the UAB and the VHIR; Dr Lydia Giménez-Llort from the Department of Psychiatry and Legal Medicine of the UAB; and with the collaboration of Dr Assumpció Bosch, also from the Department of Biochemistry and Molecular Biology.

“The therapy is based on an increase in the levels of this protein in the brain using an adeno-associated viral vector (AAV). Taking into account that the study was conducted with animals which aged naturally, we believe this could have the therapeutic ability to treat dementia and neurodegenerative disorders such as Alzheimer's or multiple sclerosis, among others”, Miguel Chillón points out.

The researchers patented their therapy and have licensed it to Kogenix Therapeutics. The company includes UAB participation and is based in the United States. It was launched by Dr Miguel Chillón and Dr Assumpció Bosch, together with the entrepreneur Menachem Abraham and Dr Carmela Abraham, professor of Biochemistry and Pharmacology at the Boston University School of Medicine, a pioneering centre in the study of Klotho in the central nervous system for more than a decade.

The objective of Kogenix is to raise the initial capital needed to advance the pre-clinical trials already being conducted with animal models of Alzheimer's disease. This will pave the way for the development of a gene-therapy drug against neurodegenerative diseases, based on small molecules which enhance the expression of the gene and/or on fragments of the Klotho protein itself.

“In basic research studies and clinical trials, AAVs have been shown to be safe and effective for delivering gene therapy to the central nervous system. In fact, the Food and Drug Administration approved the first gene therapy in the United States in August, and additional approvals are expected”, Dr Assumpció Bosch states.




Contacts and sources:
Universitat Autònoma de Barcelona (INc-UAB)

Article: A Massó, Angela Sánchez, A Bosch, L Gimenez-Llort, M Chillon. “Secreted αKlotho isoform protects against age-dependent memory deficits.” Molecular Psychiatry (2017) 00, 1–11. DOI:10.1038/mp.2017.211 https://www.nature.com/mp/journal/vaop/ncurrent/full/mp2017211a.html

New Way to Detect the Risk of Dyslexia Before Learning to Read

Researchers are developing an early diagnosis of dyslexia through hearing capacity.

Almost 10% of the world population suffers from dyslexia. Establishing an early diagnosis would allow the development of training programs to mitigate this disorder. We may now be nearer to reaching this goal thanks to a study carried out by the Basque Centre on Cognition, Brain and Language (BCBL), associating auditory processing in children with their reading skills. The results offer a new approach for detecting the risk before children learn to read.

Difficulty recognising words, decoding and writing problems, limited reading comprehension... These are the main consequences of dyslexia, a cognitive disorder of neurological origin for which late diagnosis is the main handicap.

A study led by investigators of the Basque Centre on Cognition, Brain and Language (BCBL) has demonstrated a relationship between the capacity of children to learn how to read and their hearing ability.

This new study reveals the relationship between the capacity of children to learn how to read and their hearing ability. /
Credit:  BCBL


This breakthrough, published in Frontiers in Psychology, casts light upon the detection of the disorder and could help establish the risk of dyslexia from an early stage, as well as develop training programmes to palliate reading limitations on a preemptive basis.

"The capacity of children to listen and process language is a decisive factor in learning to read”, explains Paula Ríos-López, the supervisor of the study and a researcher at the BCBL. At present, we must wait for children to reach 9 years of age in order to diagnose dyslexia.

However, the results of the San Sebastián centre imply that measurement of hearing capacity in children from a very early age may allow us to identify those individuals that might have reading problems and therefore be more susceptible to dyslexia.

Furthermore, different training activities could be implemented before 9 years of age, based on prosody (accents, tone and intonation) and language rhythms, together with programmes designed to palliate reading difficulties.

The aim is to improve reading skills and avoid future disorders. "For example, an activity as simple as playing a drum could improve the rhythmic skills of the child, with the purpose of gradually improving language perception and avoiding future disorders”, underscores Ríos-López.

The study was carried out with 40 children in the second and fifth primary school grades. To demonstrate the relationship between the ability to learn how to read and hearing capacity, the subjects were exposed to a pseudo-word (an invented word without meaning), which the children were required to repeat verbally when asked about it.

The experiment showed that this word was better understood when preceded by phrases produced only with prosodic information, i.e., in which the information only consisted of rhythms and intonations, with no phonemes of any kind.

As explained by the expert, those children that yielded poorer scores in the reading skill test were those that received most help from the phrase with prosodic information to successfully understand and repeat the pseudo-word.

In this sense, children that do not optimally process low frequency sounds (tones, accents and intonations of speech) have greater difficulty correctly decoding phonemes and words - and this in turn is directly correlated to reading capacity and its possible disorders.

“Rhythm offers the brain the key to focalising auditory attention in moments when information relevant to speech perception appears”, explains Ríos-López. “When the brain predicts the appearance of such information, an excitable state is produced, with the recruitment of neurons destined to adapt to it”, she concludes.


Contacts and sources:
Plataforma SINC

Citation: Paula Ríos-López, Monika T. Molnar, Mikel Lizarazu and Marie Lallier. 'The Role of Slow Speech Amplitude Envelope for Speech Processing and Reading Development'. Front. Psychol., 31 August 2017. https://doi.org/10.3389/fpsyg.2017.01497

Accelerated Evolution Technique Begins New Age in Natural Drug Discovery

A research collaboration has discovered a new way of rapidly generating a swathe of medically significant natural products after discovering a ground-breaking technique that turns the marathon of evolution into a sprint.

The surprise discovery came when the research team inadvertently replicated a process that bacteria use to evolve their machinery for making natural products.

Now the team, which includes scientists at the John Innes Centre, plan to harness this process to generate "libraries" of valuable compounds created from the technique which they have named Accelerated Evolution.

This is Professor Barrie Wilkinson.
Credit: John Innes Centre

"For 20 years we have been using rational bioengineering to modify the chemical structures of clinically important natural products - using genetics to make a new molecule in a process that parallels medicinal chemistry - and that's what we were doing when we stumbled upon this," said Professor Barrie Wilkinson from the John Innes Centre.

"We have discovered a completely new way of doing things, one that will also teach us how to better bioengineer systems in a rational manner."

The collaboration was led by Isomerase Therapeutics Ltd, and included the University of Cambridge, Pfizer, Roche and DSTL.

The team were involved in lab work to produce new versions of rapamycin, a commercially successful natural compound produced by bacteria, used to prevent organ transplant rejection and treat certain cancers.

Rapamycin belongs to a medically and agriculturally important class of compounds called polyketides. Fungi and bacteria produce these compounds to give them a survival advantage, for example to defend against pathogen attack and secure resources in the environment.

As part of their experiment, the team inserted a temperature sensitive replicon into the genome of the host strain, the soil bacteria Streptomyces rapamycinicus.

But instead of the expected result - a new Streptomyces rapamycinicus strain producing a specific new version of rapamycin - they isolated a wide range of new strains that each produced unexpected new molecules. These strains could be further modified leading to hundreds of new, structurally diverse compounds.

The team believe that, by inserting the replicon into the genes responsible for making rapamycin, they inadvertently introduced a genetic instability which activated a DNA repair process called homologous recombination.

This caused the host bacteria to "spit" out the replicon from the genome, causing a rearrangement of the rapamycin biosynthetic genes.

Professor Wilkinson explains: "We think this process mimics and accelerates the processes that are prevalent during natural polyketide evolution."

By setting up a drug discovery platform to harness Accelerated Evolution, the team believes it is at the beginning of a new age in natural product drug discovery.

Dr Matthew Gregory, a corresponding author of the paper and CEO of Isomerase Therapeutics Ltd, said: "The work described in this paper has important ramifications for the natural products field and synthetic biology generally."




Contacts and sources:
Adrian Galvin  
John Innes Centre

The discovery is outlined in the journal Nature Communications in a paper titled 'Diversity oriented biosynthesis via accelerated evolution of modular gene clusters'. http://www.nature.com/articles/s41467-017-01344-3

Dinosaur-Killing Asteroid Impact May Have Cooled Earth’s Climate More Than Previously Thought

The Chicxulub asteroid impact that wiped out the dinosaurs likely released far more climate-altering sulfur gas into the atmosphere than originally thought, according to new research.

A new study makes a more refined estimate of how much sulfur and carbon dioxide gas were ejected into Earth’s atmosphere from vaporized rocks immediately after the Chicxulub event. The study’s authors estimate more than three times as much sulfur may have entered the air compared to what previous models assumed, implying the ensuing period of cool weather may have been colder than previously thought.

The new study lends support to the hypothesis that the impact played a significant role in the Cretaceous-Paleogene extinction event that eradicated nearly three-quarters of Earth’s plant and animal species, according to Joanna Morgan, a geophysicist at Imperial College London in the United Kingdom and co-author of the new study published in Geophysical Research Letters, a journal of the American Geophysical Union.

An artist’s rendering of the Chicxulub asteroid impact that killed off most of the dinosaurs.

Credit: Donald E. Davis/NASA/JPL.

“Many climate models can’t currently capture all of the consequences of the Chicxulub impact due to uncertainty in how much gas was initially released,” Morgan said. “We wanted to revisit this significant event and refine our collision model to better capture its immediate effects on the atmosphere.”

The new findings could ultimately help scientists better understand how Earth’s climate radically changed in the aftermath of the asteroid collision, according to Georg Feulner, a climate scientist at the Potsdam Institute for Climate Impact Research in Potsdam, Germany who was not involved with the new research. The research could help give new insights into how Earth’s climate and ecosystem can significantly change due to impact events, he said.

“The key finding of the study is that they get a larger amount of sulfur and a smaller amount of carbon dioxide ejected than in other studies,” he said. “These improved estimates have big implications for the climatic consequences of the impact, which could have been even more dramatic than what previous studies have found.”

A titanic collision

The Chicxulub impact occurred 66 million years ago when an asteroid approximately 12 kilometers (7 miles) wide slammed into Earth. The collision took place near what is now the Yucatán peninsula in the Gulf of Mexico. The asteroid is often cited as a potential cause of the Cretaceous-Paleogene extinction event, a mass extinction that erased up to 75 percent of all plant and animal species, including the dinosaurs.

The asteroid collision had global consequences because it threw massive amounts of dust, sulfur and carbon dioxide into the atmosphere. The dust and sulfur formed a cloud that reflected sunlight and dramatically reduced Earth’s temperature. Based on earlier estimates of the amount of sulfur and carbon dioxide released by the impact, a recent study published in Geophysical Research Letters showed Earth’s average surface air temperature may have dropped by as much as 26 degrees Celsius (47 degrees Fahrenheit) and that sub-freezing temperatures persisted for at least three years after the impact.
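The paired figures above check out: a temperature difference converts between Celsius and Fahrenheit with the 9/5 factor alone (the +32 offset applies only to absolute temperatures), so a 26-degree-Celsius drop is indeed about 47 degrees Fahrenheit:

```python
# A temperature *difference* scales by 9/5; the +32 offset in the usual
# Celsius-to-Fahrenheit formula applies only to absolute temperatures.
def delta_c_to_f(delta_c):
    return delta_c * 9 / 5

print(round(delta_c_to_f(26)))  # 47, matching the figure quoted above
```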

In the new research, the authors used a computer code that simulates the pressure of the shock waves created by the impact to estimate the amounts of gases released in different impact scenarios. They changed variables such as the angle of the impact and the composition of the vaporized rocks to reduce the uncertainty of their calculations.

A simulation of the crater and impact plume formed eight seconds after the Chicxulub impact at 45 degrees. Chart A shows the density of different materials created in the impact. The colors show the atmosphere (blue), sediment (yellow), asteroid (gray) and basement (red), with darker colors reflecting higher densities. SW is the shock wave formed by the impact. Chart B shows the temperature in Kelvin at different locations in the impact.



Credit: Pierazzo and Artemieva (2012).

The new results show the impact likely released approximately 325 gigatons of sulfur and 425 gigatons of carbon dioxide into the atmosphere, more than 10 times global human emissions of carbon dioxide in 2014. In contrast, the previous study in Geophysical Research Letters that modeled Earth’s climate after the collision had assumed 100 gigatons of sulfur and 1,400 gigatons of carbon dioxide were ejected as a result of the impact.
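The "more than 10 times" comparison can be sanity-checked with a rough calculation. Note that the ~36-gigaton figure for 2014 global fossil-fuel CO2 emissions is an assumption not given in the article; only the 425-gigaton figure comes from the study:

```python
# Rough check of the "more than 10 times 2014 human CO2 emissions" claim.
ejected_co2_gt = 425     # gigatons of CO2 ejected, from the new study
human_2014_co2_gt = 36   # assumed ~36 Gt global fossil CO2 emissions in 2014
ratio = ejected_co2_gt / human_2014_co2_gt
print(f"{ratio:.0f}x")   # roughly 12x, consistent with "more than 10 times"
```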

Improving the impact model

The new study’s methods stand out because they ensured only gases that were ejected upwards with a minimum velocity of 1 kilometer per second (2,200 miles per hour) were included in the calculations. Gases ejected at slower speeds didn’t reach a high enough altitude to stay in the atmosphere and influence the climate, according to Natalia Artemieva, a senior scientist at the Planetary Science Institute in Tucson, Arizona and co-author of the new study.

Older models of the impact didn’t have as much computing power and were forced to assume all the ejected gas entered the atmosphere, limiting their accuracy, Artemieva said.

Joanna Morgan, center, co-author of the new study, was previously a leading member of expedition 364. The project drilled into the Chicxulub crater in April of 2016 and recovered rock core samples.

Credit: Joanna Morgan.

The study authors also based their model on updated estimates of the impact's angle. An older study assumed the asteroid hit the surface at an angle of 90 degrees, but newer research shows it hit at an angle of approximately 60 degrees. Using this revised angle of impact led to a larger amount of sulfur being ejected into the atmosphere, Morgan said.

The study’s authors did not model how much cooler Earth would have been as a result of their revised estimates of how much gas was ejected. Judging from the cooling seen in the previous study, which assumed a smaller amount of sulfur was released by the impact, the release of so much sulfur gas likely played a key role in the extinction event. The sulfur gas would have blocked out a significant amount of sunlight, likely leading to years of extremely cold weather, potentially colder than the previous study found. The lack of sunlight and changes in ocean circulation would have devastated Earth’s plant life and marine biosphere, according to Feulner.

The release of carbon dioxide likely led to some long-term climate warming, but its influence was minor compared to the cooling effect of the sulfur cloud, Feulner said.

Along with gaining a better understanding of the Chicxulub impact, researchers can also use the new study’s methods to estimate the amount of gas released during other large impacts in Earth’s history. For example, the authors calculated that the Ries crater in Bavaria, Germany was formed by an impact that ejected 1.3 gigatons of carbon dioxide into the atmosphere. This amount of gas likely had little effect on Earth’s climate, but the approach could be applied to help understand the climatic effects of larger impacts.

Contacts and sources:
Lauren Lipuma
The American Geophysical Union


Citation: “Quantifying the Release of Climate-Active Gases by Large Meteorite Impacts With a Case Study of Chicxulub” http://onlinelibrary.wiley.com/doi/10.1002/2017GL074879/pdf.

Minor Merger Kicks Supermassive Black Hole into High Gear

The galaxy Messier 77 (M77) is famous for its super-active nucleus that releases enormous energy across the electromagnetic spectrum, ranging from X-ray to radio wavelengths. Yet, despite its highly active core, the galaxy looks like any normal quiet spiral. There's no visual sign of what is causing its central region to radiate so extensively. It has long been a mystery why only the center of M77 is so active. Astronomers suspect a long-ago event involving a sinking black hole, which could have kicked the core into high gear.

To test their ideas about why the central region of M77 beams massive amounts of radiation, a team of researchers at the National Astronomical Observatory of Japan and the Open University of Japan used the Subaru Telescope to study M77. The unprecedented deep image of the galaxy reveals evidence of a hidden minor merger billions of years ago. The discovery gives crucial evidence for the minor merger origin of active galactic nuclei.

Figure 1: The deep image of Messier 77 taken with the Hyper Suprime-Cam (HSC) mounted at the Subaru Telescope. The picture is created by adding the color information from the Sloan Digital Sky Survey (Note 1) to the monochromatic image acquired by the HSC.
Credit: NAOJ/SDSS/David Hogg/Michael Blanton. Image Processing: Ichi Tanaka

The Mystery of Seyfert Galaxies
The galaxy Messier 77 (NGC 1068) is famous for harboring an active nucleus at its core that releases an enormous amount of energy. The existence of such active galaxies in the nearby universe was first noted by the American astronomer Carl Seyfert more than 70 years ago. Nowadays they are called the Seyfert galaxies (Note 2). Astronomers think that the source of such powerful activity is the gravitational energy released from superheated matter falling onto a supermassive black hole (SMBH) that resides in the center of the host galaxy. The estimated mass of such a SMBH for M77 is about 10 million times that of the Sun.

It takes a massive amount of gas falling onto the galaxy's central black hole to produce so much energy. That may sound like an easy task, but it is actually very difficult. Gas in the galactic disk circulates faster and faster as it spirals in toward the SMBH, until at some point the centrifugal force balances the gravitational pull of the SMBH, which prevents the gas from falling any farther. The situation is similar to water draining from a bathtub: rapidly rotating water does not drain quickly, because the centrifugal force holds it out. So how can angular momentum be removed from the gas circling near an active galactic nucleus? Answering that question is one of the big challenges for researchers today.

A Prediction Posed 18 Years Ago
In 1999, Professor Yoshiaki Taniguchi (currently at the Open University of Japan), the team leader of the current Subaru study, published a paper on the driving mechanism of the active nuclei of Seyfert galaxies such as M77. He pointed out that a past event – a "minor merger" in which the host galaxy ate up a "satellite" galaxy (a small, low-mass galaxy orbiting it) – would be the key to activating the Seyfert nucleus (Note 3).

Usually, a minor merger simply breaks up the low-mass satellite galaxy, and the resulting debris is absorbed into the disk of the more massive host before it reaches the center. Minor mergers were therefore not considered a main driver of nuclear activity. "However, the situation could be totally different if the satellite galaxy has a (smaller) SMBH at its center (Note 4)," Professor Taniguchi suggests, "because a black hole can never be broken apart. If it exists, it should eventually sink into the center of the host galaxy."

The SMBH sinking in from the satellite galaxy would disturb the rotating gas disk around the main galaxy's SMBH, and the disturbed gas would then rush onto the central SMBH, releasing enormous gravitational energy. "This must be the main ignition mechanism of active Seyfert nuclei," Taniguchi argued. "The idea can naturally explain the mystery of the morphology of Seyfert galaxies," he added, pointing out that the model accounts for normal-looking galaxies that are nonetheless very active at their cores (Note 5).

Probing the Theory Using the Subaru Telescope
Recent advances in observational technique allow the detection of extremely faint structures around galaxies, such as loops or debris that are likely produced by dynamical interactions with satellite galaxies. The outermost parts of galaxies are often considered relatively "quiet," with a longer dynamical timescale than anywhere inside. Simulations show that the faint signature of a past minor merger can persist for several billion years after the event. "Such a signature can be a key test for our minor merger hypothesis for Seyfert galaxies. Now it is time to revisit M77," said Taniguchi.

The team's tool of choice for uncovering this past event was, of course, the Subaru Telescope and its powerful imaging camera, Hyper Suprime-Cam. The observing proposal was accepted and executed on Christmas night 2016. "The data were just amazing," said Dr. Ichi Tanaka, the principal investigator of the project. "Luckily, we could also retrieve older data that had just been released from the Subaru Telescope's data archive. The combined data set is unprecedentedly deep."

Figure 2 shows the result. The team identified several notable features outside the bright disk seen in Figure 1, most of which were unknown prior to the observation. There is a faint one-arm structure outside the disk to the west, while the opposite side of the disk shows a ripple-like structure clearly different from the spiral pattern. The detected signatures match remarkably well the results of minor-merger simulations published by other research teams. What is more, the team discovered three extremely diffuse, large, blobby structures farther outside the disk. Intriguingly, two of these diffuse blobs appear to form a gigantic loop around M77 with a diameter of 250,000 light-years. These structures are compelling evidence that M77 ate up a satellite galaxy at least several billion years ago.

Subaru's great photon-collecting power and the superb performance of the Hyper Suprime-Cam were crucial to the discovery of these extremely faint structures in M77, which reveal the normal-looking galaxy's hidden violent past. "People may sometimes lie, but galaxies never do. The important thing is to listen to their small voices to understand them," said Professor Taniguchi.

Figure 2: (Left) The newly discovered, extremely diffuse structures around M77. The innermost color part of the picture shows the bright part of the galaxy (from SDSS; see the center of Figure 1). The middle part, in red-brown, is a contrast-enhanced rendering of the faint one-arm structure (labeled "Banana") to the right, as well as the ripple structure (labeled "Ripple") to the left. All foreground and background objects unrelated to M77 were removed during processing. The outermost monochrome part shows the faint ultra-diffuse structures in yellow circles (labeled "UDO-SE", "UDO-NE", "UDO-SW"). A deep look indicates that the latter two ("UDO-NE", "UDO-SW") form part of the large loop-like structure. (Credit: NAOJ)
(Right) Artist's impression of M77, created and copyrighted by Mr. Akihiro Ikeshita. (Credit: Akihiro Ikeshita)

The team will expand its study to more Seyfert galaxies using the Subaru Telescope. Dr. Masafumi Yagi, who leads the next phase of the project, said, "We will discover more and more evidence of satellite mergers around Seyfert host galaxies. We expect the project to provide a critical piece of the unified picture of the triggering mechanism for active galactic nuclei."

The result is published in Volume 69, Issue 5 of the Publications of the Astronomical Society of Japan (I. Tanaka, M. Yagi & Y. Taniguchi 2017, "Morphological evidence for a past minor merger in the Seyfert galaxy NGC 1068"). The research was financially supported by Grant-in-Aid for Scientific Research (A) grant JP16H02166.


Note 1: The color image from the Sloan Digital Sky Survey used in Figure 1 is copyrighted by David W. Hogg and Michael R. Blanton.

Note 2: Seyfert galaxies are a subclass of active galactic nuclei. There are even more powerful active galactic nuclei, called quasars, in the universe; quasars are usually found much farther away than M77.

Note 3: Satellite galaxies are common around large galaxies. For example, two bright satellite galaxies, the Large and Small Magellanic Clouds, are associated with our Milky Way. The Andromeda galaxy, our nearest large neighbor, also has two bright satellites, Messier 32 and NGC 205.

Note 4: Astronomers believe that most galaxies have an SMBH in their central region, with its mass mysteriously scaled to the mass of the host galaxy. Some satellite galaxies are also known to have smaller SMBHs. For example, Messier 32 (a satellite of the Andromeda galaxy) likely hosts an SMBH heavier than a million times the mass of our Sun. It is, however, not easy to directly prove the existence of SMBHs in satellite galaxies because of their low mass.

Note 5: For the reference, see Y. Taniguchi 1999, ApJ, 524, 65.




Contacts and sources:
National Astronomical Observatory of Japan  


Citation: I. Tanaka, M.Yagi & Y. Taniguchi 2017, "Morphological evidence for a past minor merger in the Seyfert galaxy NGC 1068", Publ. Astron. Soc. Japan, 69, psx100, 2017 Oct. 26.

The Red Sea Is Warming Faster Than Global Average

The world’s warmest sea is heating up faster than the global average, which could challenge the ability of the Red Sea’s organisms to cope.

“The global rate of ocean warming has many consequences for life on this planet. Now we are learning that the Red Sea is warming even faster than the global average,” says KAUST PhD student of marine science, Veronica Chaidez.

The analyses, conducted by a multidisciplinary team spanning all three divisions at KAUST, provide vital data that could help predict the future of the Red Sea’s marine biodiversity when supplemented by evidence to be gathered on the thermal limits of local organisms.

The mean maximum annual temperatures increase gradually from the north of the Red Sea to its south.
Credit: Reproduced with permission from reference 1 © 2017 Nature Publishing Group


Analyses of satellite sensing data from 1982 to 2015 show that the Red Sea's maximum surface temperatures have increased at a rate of 0.17 ± 0.07°C per decade, exceeding the global ocean warming rate of 0.11°C per decade. Maximum sea-surface temperatures were found to increase from north to south along the Red Sea basin, with the coolest temperatures in the gulfs of Suez and Aqaba in the far north. These two gulfs, however, show the highest rates of change in the basin, 0.40–0.45°C per decade, about four times the mean global ocean warming rate.
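A quick back-of-envelope check of those comparisons, using only the rates quoted in the article:

```python
# Rough check of the warming-rate comparisons quoted in the article.
# All numbers come from the article; units are degrees C per decade.
red_sea_rate = 0.17                          # basin-wide maximum SST trend
global_rate = 0.11                           # mean global ocean warming rate
gulf_rate_low, gulf_rate_high = 0.40, 0.45   # Gulfs of Suez and Aqaba

print(f"Red Sea vs global: {red_sea_rate / global_rate:.1f}x")
print(f"Gulfs vs global:   {gulf_rate_low / global_rate:.1f}x to {gulf_rate_high / global_rate:.1f}x")
```

The gulf rates work out to roughly 3.6–4.1 times the global mean, consistent with the "four times faster" figure.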

The Northern Red Sea experiences maximum temperatures throughout July, while the Southern Red Sea is warmest from late July to mid-August. Interestingly, sea-surface temperatures reached their maximum in an area on the eastern coast of the Red Sea, about 200 km south of Jeddah, from mid-August to early September. This anomaly may be caused by the unique wind patterns in this region.

The annual temperature maximum is also arriving earlier, by about a quarter of a day per decade.

Systematic monitoring efforts are needed to assess the impacts of these rapid warming rates on coral bleaching and mass marine organism mortality events, adds Chaidez. Currently, no such monitoring exists in the Red Sea, but Chaidez is testing the thermal capacities of some of the basin’s plants and animals in her laboratory. A model that incorporates data on temperatures, organism thermal limits, and other relevant biological data could help predict impacts of warming on the local ecosystem.

Evidence suggests that warm temperatures in the Red Sea are already challenging the capacity of its marine organisms to adapt and survive. Marine organisms generally adapt to rising ocean temperatures by migrating toward the poles. This is not an easy migration in the Red Sea since it is a semi-enclosed space, rendering its organisms vulnerable.



Contacts and sources:
KAUST - King Abdullah University of Science and Technology 


Citation: Chaidez, V. Dreano, D., Agusti, S., Duarte, C.M. & Hoteit, I. Decadal trends in Red Sea maximum surface temperature. Scientific Reports 7, 8144 (2017)

‘Monster’ Planet Discovery Challenges Formation Theory

A giant planet – the existence of which was previously thought extremely unlikely – has been discovered by an international collaboration of astronomers, with the University of Warwick taking a leading role.

New research, led by Dr Daniel Bayliss and Professor Peter Wheatley from the University of Warwick’s Astronomy and Astrophysics Group, has identified the unusual planet NGTS-1b - the largest planet relative to the size of its companion star ever discovered.

NGTS-1b is a gas giant six hundred light years away, the size of Jupiter, and orbits a small star with a radius and mass half that of our sun.

Artist's impression of sunrise on planet NGTS-1b

Credit University of Warwick/Mark Garlick


Its existence challenges theories of planet formation which state that a planet of this size could not be formed by such a small star. According to these theories, small stars can readily form rocky planets but do not gather enough material together to form Jupiter-sized planets.

The planet is a hot Jupiter, at least as large as the Jupiter in our solar system, but with around 20% less mass. It is very close to its star – just 3% of the distance between Earth and the Sun – and circles it every 2.6 days, so a year on NGTS-1b lasts just 2.6 Earth days.
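Those two numbers are consistent with each other: Kepler's third law ties the orbital period to the orbital distance for a star of given mass. A rough check, using the article's rounded values (half a solar mass, 2.6-day period):

```python
import math

# Back-of-envelope check via Kepler's third law that a 2.6-day orbit
# around a half-solar-mass star sits at roughly 3% of the Earth-Sun
# distance, as the article states. Inputs are the article's rounded
# values, so expect agreement only to a few percent.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
AU = 1.496e11        # m

M_star = 0.5 * M_sun # NGTS-1 has about half the Sun's mass
P = 2.6 * 86400      # orbital period in seconds

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"semi-major axis: {a / AU:.3f} AU")   # roughly 0.03 AU
```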

The temperature on the gassy planet is approximately 530°C, or 800 kelvin.

Dr Daniel Bayliss, the lead author of the research, commented:

"The discovery of NGTS-1b was a complete surprise to us - such massive planets were not thought to exist around such small stars. This is the first exoplanet we have found with our new NGTS facility and we are already challenging the received wisdom of how planets form.

“Our challenge is to now find out how common these types of planets are in the Galaxy, and with the new NGTS facility we are well-placed to do just that.”

The researchers spotted the planet using the state-of-the-art Next-Generation Transit Survey (NGTS) - a wide-field observing facility made of a compact ensemble of telescopes, designed to search for transiting planets on bright stars - run by the Universities of Warwick, Leicester, Cambridge, Queen’s University Belfast, Observatoire de Genève, DLR Berlin and Universidad de Chile.

Artist's impression of planet NGTS-1b with its neighbouring sun


The planet orbits a red M-dwarf – the most common type of star in the universe, leading to the possibility that there could be more of these planets waiting to be found by the NGTS survey.

NGTS-1b is the first planet outside our solar system to have been discovered by the NGTS facility, which is situated at the European Southern Observatory’s Paranal Observatory in Northern Chile.

Professor Peter Wheatley, who is from the University of Warwick and leads NGTS, commented:

“NGTS-1b was difficult to find, despite being a monster of a planet, because its parent star is small and faint. Small stars are actually the most common in the universe, so it is possible that there are many of these giant planets waiting to be found.

“Having worked for almost a decade to develop the NGTS telescope array, it is thrilling to see it picking out new and unexpected types of planets. I'm looking forward to seeing what other kinds of exciting new planets we can turn up.”

The researchers made their discovery by monitoring patches of the night sky over many months, and detecting red light from the star with innovative red-sensitive cameras. They noticed dips in the light from the star every 2.6 days, implying that a planet was orbiting and periodically blocking starlight.

Using these data, they then tracked the planet’s orbit around its star and calculated the size, position and mass of NGTS-1b by measuring the radial velocity of the star – finding out how much the star ‘wobbles’ during orbit due to the gravitational tug from the planet, which depends on the planet’s mass.
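The size of that wobble can be sketched with the standard radial-velocity semi-amplitude formula. The masses below are the article's rounded values and the orbit is assumed circular and edge-on, so this is an order-of-magnitude illustration, not the published measurement:

```python
import math

# Illustrative radial-velocity "wobble" estimate for an NGTS-1b-like system.
# Assumes a circular, edge-on orbit (sin i = 1); inputs are the article's
# rounded values, so treat the result as order-of-magnitude only.
G = 6.674e-11
M_sun = 1.989e30
M_jup = 1.898e27

M_star = 0.5 * M_sun    # host star, about half a solar mass
M_planet = 0.8 * M_jup  # roughly 20% less massive than Jupiter
P = 2.6 * 86400         # orbital period, seconds

# Semi-amplitude: K = (2*pi*G / P)^(1/3) * M_p / (M_star + M_p)^(2/3)
K = (2 * math.pi * G / P) ** (1 / 3) * M_planet / (M_star + M_planet) ** (2 / 3)
print(f"stellar wobble amplitude: roughly {K:.0f} m/s")
```

A wobble of order a hundred meters per second is large by exoplanet standards, which is why the radial-velocity confirmation was feasible despite the star's faintness.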

The research, ‘NGTS-1b: a hot Jupiter transiting an M-dwarf’, will be published in the Monthly Notices of the Royal Astronomical Society.



Contacts and sources:
Luke Walton
University of Warwick

Monday, October 30, 2017

How Do We Know The Age Of The Earth? (Video)

The Earth is 4.565 billion years old, give or take a few million years. How do scientists know that? Since there’s no “established in” plaque stuck in a cliff somewhere, geologists deduced the age of the Earth thanks to a handful of radioactive elements. With radiometric dating, scientists can put an age on really old rocks — and even good old Mother Earth. For the 30th anniversary of National Chemistry Week, this edition of Reactions describes how scientists date rocks: 
Credit:  ACSReactions

Radiometric dating or radioactive dating is a technique used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay.
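The arithmetic behind the method is compact: from the measured ratio of daughter to parent isotope and the parent's half-life, the decay law can be solved for the sample's age. A minimal sketch, using a hypothetical sample ratio chosen to illustrate the calculation:

```python
import math

# Minimal sketch of the radiometric age calculation. Assumes the sample
# started with no daughter isotope and stayed a closed system, so the
# decay law N(t) = N0 * 2^(-t / t_half) can be inverted for t.
def radiometric_age(daughter_to_parent_ratio, half_life):
    """Age implied by a measured daughter/parent isotope ratio."""
    # D/P = 2^(t / t_half) - 1  =>  t = t_half * log2(1 + D/P)
    return half_life * math.log2(1 + daughter_to_parent_ratio)

# Uranium-238 decays (through a chain) to lead-206 with a half-life
# of about 4.47 billion years. A hypothetical rock with equal parts
# Pb-206 and U-238 has been closed for exactly one half-life.
u238_half_life = 4.47e9  # years
age = radiometric_age(daughter_to_parent_ratio=1.0, half_life=u238_half_life)
print(f"equal parts Pb-206 and U-238 imply an age of {age:.2e} years")
```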

The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of the Earth itself, and can also be used to date a wide range of natural and man-made materials.

Example of a radioactive decay chain from lead-212 (212Pb) to lead-208 (208Pb). Each parent nuclide spontaneously decays into a daughter nuclide (the decay product) via an α decay or a β decay. The final decay product, lead-208 (208Pb), is stable and can no longer undergo spontaneous radioactive decay.
Credit: Eugene Alvin Villar (seav) / Wikimedia Commons

Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale.  Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts.

Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.




Contacts and sources:
Katie Cottingham, Ph.D.
American Chemical Society

Monster Colliding Black Holes May Lurk on the Edge of Spiral Galaxies

If you are looking for colliding black holes, look to the edges of spiral galaxies, say researchers.

The outskirts of spiral galaxies like our own could be crowded with colliding black holes of massive proportions and a prime location for scientists hunting the sources of gravitational waves, said researchers at Rochester Institute of Technology in an upcoming paper in Astrophysical Journal Letters.

The RIT study identifies an overlooked region that may prove to be rife with orbiting black holes and the origin of gravitational-wave chirps heard by observatories in the United States and Italy. Identifying the host galaxies of merging massive black holes could help explain how orbiting pairs of black holes form.

RIT researchers propose that the outer gas disk of spiral galaxies could be teeming with black holes that emit gravitational waves as they collide. Shown here is the Southern Pinwheel galaxy seen in ultraviolet light and radio wavelengths. The radio data, colored here in red, reveal the boondocks of the galaxy where orbiting black holes might exist.

Credit: NASA/JPL-Caltech/VLA/MPIA

Conditions favorable for black-hole mergers exist in the outer gas disks of big spiral galaxies, according to Sukanya Chakrabarti, assistant professor of physics at RIT and lead author of "The Contribution of Outer HI Disks to the Merging Binary Black Hole Populations," available online at https://arxiv.org/abs/1710.09407.

Until now, small satellite or dwarf galaxies were thought to have the most suitable environment for hosting black-hole populations: a sparse population of stars, unpolluted with heavy metals like iron, gold and platinum -- elements spewed in supernovae explosions -- and inefficient winds that leave massive stars intact.

Chakrabarti realized the edges of galaxies like the Milky Way have similar environments to dwarf galaxies but with a major advantage -- big galaxies are easier to find.

"The metal content in the outer disks of spiral galaxies is also quite low, so this large area should be rife with black holes," Chakrabarti said.

A co-author on the paper, Richard O'Shaughnessy, assistant professor of mathematical sciences at RIT and a member of the LIGO Scientific Collaboration, said: "This study shows that, when predicting or interpreting observations of black holes, we need to account not only for differences between different types of galaxies but also the range of environments that occur inside of them."

A deeper understanding of the universe is possible now that scientists can combine gravitational-wave astronomy with traditional measurements across the bands of light. Existing research shows that even mergers of black holes, from which light itself cannot escape, can have both a gravitational-wave and an optical counterpart, produced by remnant matter from the stellar collapse that formed them.

"If you can see the light from a black-hole merger, you can pinpoint where it is in the sky," Chakrabarti said. "Then you can infer the parameters that drive the life cycle of the universe as a whole and that's the holy grail for cosmology. The reason this is important is because gravitational waves give you a completely independent way of doing it so it doesn't rely on astrophysical approximations."


Contacts and sources:
Susan Gawlowicz
Rochester Institute of Technology

Nature’s Very Own Mind-Boggling Death Star Beams

They are nature’s very own Death Star beams – ultra-powerful jets of energy that shoot out from the vicinity of black holes like deadly rays from the Star Wars super-weapon.

Now a team of scientists led by the University of Southampton has moved a step closer to understanding these mysterious cosmic phenomena – known as relativistic jets – by measuring how quickly they ‘switch on’ and start shining brightly once they are launched.

How these jets form is still a puzzle. One theory suggests that they develop within the ‘accretion disc’ – the matter sucked into the orbit of a growing black hole. Extreme gravity within the disc twists and stretches magnetic fields, squeezing hot, magnetised disc material called plasma until it erupts in the form of oppositely directed magnetic pillars along the black hole’s rotational axis.

This is an artist's impression of astrophysical jets emitting from the binary system V404 Cygni.

Credit: G Pérez Díaz (IAC)

Plasma travels along these focused jets and gains tremendous speed, shooting across vast stretches of space. At some point, the plasma begins to shine brightly, but how and where this occurs in the jet has been debated by scientists.

In a new study published today [Monday, October 30] in Nature Astronomy, an international team of scientists led by Dr Poshak Gandhi show how they used precise multi-wavelength observations of a binary system called V404 Cygni – consisting of a star and a black hole closely orbiting each other, with the black hole feeding off matter from the star that falls through the disc – to throw light on this hotly debated phenomenon.

V404 Cygni is located about 7,800 light years away in the constellation of Cygnus, and weighs as much as about nine of our Suns put together. Dr Gandhi and his collaborators captured the data in June 2015, when V404 Cygni was observed radiating one of the brightest ‘outbursts’ of light from a black hole ever seen – bright enough to be visible to small telescopes used by amateur astronomers, and energetic enough to tear apart an Earth-like planet if properly focused.

Using telescopes on Earth and in space observing at exactly the same time, they captured a 0.1-second delay between X-ray flares emitted from near the black hole, where the jet forms, and the appearance of visible light flashes, marking the moment when accelerated jet plasma begins to shine.

This ‘blink of an eye’ delay was calculated to represent a maximum distance of 19,000 miles (30,000 km), impossible to resolve at the distance of V404 with any current telescope.
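That distance follows directly from the light-travel time: in 0.1 seconds, light (and hence any causal signal) can cover at most about 30,000 km, which bounds how far the optical emission site can be from the X-ray source:

```python
# The distance quoted in the article is just the light-travel distance
# corresponding to the observed 0.1-second X-ray-to-optical lag.
c_km_per_s = 299_792.458   # speed of light, km/s
delay_s = 0.1              # observed delay

max_distance_km = c_km_per_s * delay_s
max_distance_miles = max_distance_km / 1.609344

print(f"{max_distance_km:,.0f} km (about {max_distance_miles:,.0f} miles)")
```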

Dr Gandhi, of the University of Southampton, said: “Scientists have been observing jets for decades, but are far from understanding how nature creates these mind-bogglingly vast and energetic structures.



“Now, for the first time, we have captured the time delay between the appearance of X-rays and the appearance of optical light in a stellar-mass black hole at the moment jet plasma is activated. This lays to rest the controversy regarding the origin of the optical flashes, and also gives us a critical distance over which jet plasma must have been strongly accelerated to speeds approaching that of light."

In Star Wars terms, the key measurement of this study can roughly be likened to measuring the distance between the surface of the Death Star, where multiple rays of light shoot out, and the point where they converge into a single bright beam.

“But the physics of black hole jets has nothing to do with lasers or the fictional Kyber crystals that power the Death Star. Nature has found other ways to power jets,” said Dr Gandhi. “Gravity and magnetic fields play the key roles here, and this is the mechanism we are trying to unravel.”

The study also creates a link between V404 Cygni and supermassive black holes, which lie at the centre of massive galaxies and which weigh billions of times more than stellar-mass black holes. Similar jet physics may apply to all black holes.

Dr Gandhi said: “This is an exciting and important discovery which can be fed back into theory about relativistic jets, and contributes to our ever-growing understanding of black holes.”

The X-ray emission, representing the accretion disc ‘feeding’ the jet at its base, was captured from Earth orbit by NASA’s NuSTAR telescope, while the moment the jet became visible as optical light was caught by the ULTRACAM high-speed camera, mounted on the William Herschel Telescope on La Palma, in the Canary Islands.

Professor Vik Dhillon, of the University of Sheffield, the principal investigator behind ULTRACAM, commented: “This discovery was made possible thanks to our camera gathering 28 frames per second. It demonstrates the untapped potential of studying astrophysical phenomena at high speeds.”

At the same time, radio waves from the extended portions of the jet plasma were observed by a team of Professor Rob Fender, of the University of Oxford, using the AMI-LA radio telescope, in Cambridge, UK.

Professor Fender said: “These observations are another major step towards understanding exactly how relativistic jets are formed by black holes. Radio detections come from the outer jet and are the key unambiguous indicator of ongoing jet activity. The optical, X-rays and radio were also crucial for that discovery.”

As well as Southampton, the research involved the universities of Sheffield, Oxford, Cambridge and Warwick, in the UK, as well as universities in Italy, Spain, France, USA, Canada, Netherlands, Switzerland, India, Germany and the United Arab Emirates.

It was supported by the Science and Technology Facilities Council, the Spanish Ministry of Economy, Industry and Competitiveness, the Leverhulme Trust, the French National Support Agency, the Royal Society, NWO, and UK-India UKIERI-UGC Thematic Partnerships.


Contacts and sources: 

University of Southampton 

Spider Power a “Game-Changer” For Microphones In Phones And Hearing Aids

Would you want a spider web inside your ear? Probably not. But if you're able to put aside the creepy factor, new research from Binghamton University, State University of New York shows that fine fibers like spider silk actually improve the quality of microphones for hearing aids.

Binghamton University distinguished professor Ron Miles and graduate student Jian Zhou recently published a study titled "Sensing fluctuating airflow with spider silk" that should lead to better microphones for hearing aids than traditional pressure-based systems.

Miles has done a number of studies looking at what we can learn from insects when it comes to hearing. He explained, "We use our eardrums, which pick up the direction of sound based on pressure, but most insects actually hear with their hairs." The spider silk is able to pick up the velocity of the air instead of the pressure of the air.

Mosquitoes, flies and spiders all have fine hairs on their bodies that move with the sound waves traveling through the air. Miles wanted to recreate this type of hearing inside a microphone.

New research from Binghamton University, State University of New York shows that fine fibers like spider silk actually improve the quality of microphones for hearing aids.

Credit: Vinayaraj / Wikimedia Commons

Their microphone improves directional sensing across a wide range of frequencies, including sounds that are often too faint for conventional microphones to pick up. For someone with a hearing aid, that means being able to cancel out background noise when having a conversation in a crowded area. The same concept could be applied to the microphone inside cell phones.

Spider silk is thin enough that it also moves with the air when hit by sound waves. "This can even happen with infrasound at frequencies as low as 3 hertz," said Miles. Sound at that frequency is typically inaccessible; it would be equivalent to hearing the tectonic plates moving in an earthquake.

The study used spider silk, but Miles explained that any fiber that is thin enough could be used in the same way.

This is Binghamton University distinguished professor Ron Miles

Credit: Binghamton University, State University of New York

While the spider silk picks up the direction of airflow with great accuracy, that information has to be translated into an electronic signal to be of use.

"We coated the spider silk with gold and put it in a magnetic field to obtain an electronic signal," said Miles. "It's actually a fairly simple way to make an extremely effective microphone that has better directional capabilities across a wide range of frequencies."
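The transduction Miles describes is electromagnetic induction: a conductor of length L moving at velocity v through a magnetic field B develops a motional EMF of roughly B·L·v. The numbers below are purely illustrative assumptions, not values from the study:

```python
# Hypothetical order-of-magnitude sketch of the transduction step:
# a gold-coated silk fiber moving in a magnetic field develops a
# motional EMF of about B * L * v. All values below are illustrative
# assumptions, not measurements from the study.
B = 0.35     # tesla, a small permanent magnet (assumed)
L = 0.003    # meters, a few millimeters of coated silk (assumed)
v = 1e-4     # m/s, tiny air-driven motion of the fiber (assumed)

emf = B * L * v
print(f"induced signal: about {emf * 1e9:.0f} nanovolts")
```

Even such a tiny signal, on the order of a hundred nanovolts, is readily amplified by modern electronics, which is what makes this simple scheme practical.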

The study is a game-changer for microphones but may also tell us something unique about spiders, said Miles. He and Zhou speculate that because spider silk is so good at sensing air flow, it's possible spiders can hear through their own web on top of what they are already known to hear through the small hairs on their bodies.



Contacts and sources:
Ron Miles / Rachael Flores
Binghamton University, State University of New York

Stoners Have More Sex Says Study, Stanford Findings "Unambiguous"

The jury's still out on rock 'n' roll. But the link between sex and at least one drug, marijuana, has been confirmed.

A study by investigators at the Stanford University School of Medicine indicates that, despite concerns among physicians and scientists that frequent marijuana use may impair sexual desire or performance, the opposite appears more likely to be the case.

The findings, to be published online Oct. 27 in the Journal of Sexual Medicine, are based on an analysis of more than 50,000 Americans ages 25-45. And they're unambiguous.

"Frequent marijuana use doesn't seem to impair sexual motivation or performance. If anything, it's associated with increased coital frequency," said the study's senior author, Michael Eisenberg, MD, assistant professor of urology. The lead author is Andrew Sun, MD, a resident in urology.

File:Orange Cookies.png
Credit: Lab Tested / Wikimedia Commons

Hint of a causal connection

The study does not establish a causal connection between marijuana use and sexual activity, Eisenberg noted. But the results hint at it, he added. "The overall trend we saw applied to people of both sexes and all races, ages, education levels, income groups and religions, every health status, whether they were married or single and whether or not they had kids."

The study is the first to examine the relationship between marijuana use and frequency of sexual intercourse at the population level in the United States.

"Marijuana use is very common, but its large-scale use and association with sexual frequency hasn't been studied much in a scientific way," Eisenberg said.

According to the National Institute on Drug Abuse, more than 20 million adult Americans are current marijuana users. With the drug's legalization for medical or recreational use in 29 states, that number is climbing. But despite marijuana's growing status as a recreational drug, its status as a procreational drug remains ambiguous: On one hand, there are reports of erectile dysfunction in heavy users, and rigorous studies have found reduced sperm counts in men who smoke it; on the other hand, experiments conducted in animal models and humans indicate that marijuana stimulates activity in brain regions involved in sexual arousal and activity.

Looking at survey responses

To arrive at an accurate determination of marijuana's effect on intercourse frequency, Eisenberg and Sun turned to the National Survey of Family Growth, sponsored by the federal Centers for Disease Control and Prevention. The survey, which provides data pertaining to family structures, sexual practices and childbearing, reflects the overall demographic features of the U.S. population. Originally conducted at regular intervals, the survey is now carried out on an annual basis. It explicitly queries respondents on how many times they've had intercourse with a member of the opposite sex in the past four weeks, and how frequently they've smoked marijuana over the past 12 months.

The investigators compiled answers to those questions for all years since 2002, when the survey first began collecting data on men as well as women. They included data from respondents ages 25-45 and excluded a small percentage (fewer than 3 percent) of respondents who had failed to answer one or more relevant questions.

In all, Eisenberg and Sun obtained data on 28,176 women averaging 29.9 years of age and 22,943 men whose average age was 29.5. They assessed these individuals' self-reported patterns of marijuana use over the previous year and their self-reported frequency of heterosexual intercourse over the previous four weeks.

Some 24.5 percent of men and 14.5 percent of women in the analysis reported having used marijuana, and there was a positive association between the frequency of marijuana use and the frequency of sexual intercourse. This relationship applied to both sexes: women who reported no marijuana use in the past year, for example, had sex on average 6.0 times during the previous four weeks, whereas daily users reported 7.1 times. Among men, the corresponding figures were 5.6 for nonusers and 6.9 for daily users.

In other words, pot users are having about 20 percent more sex than pot abstainers, Eisenberg noted.
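The roughly 20 percent figure can be checked directly from the group means quoted above (a quick back-of-the-envelope calculation, not part of the study's own analysis):

```python
# Mean reported coital frequency over the previous four weeks,
# from the study's group averages quoted above.
nonusers = {"women": 6.0, "men": 5.6}     # no marijuana use in past year
daily_users = {"women": 7.1, "men": 6.9}  # daily marijuana users

for group in ("women", "men"):
    increase = 100 * (daily_users[group] - nonusers[group]) / nonusers[group]
    print(f"{group}: {increase:.0f}% more frequent")  # women: 18%, men: 23%
```

Averaged across the two sexes, that works out to roughly 20 percent, matching Eisenberg's summary.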

Positive association is universal

Moreover, Eisenberg said, the positive association between marijuana use and coital frequency was independent of demographic, health, marital or parental status.

In addition, the trend remained even after accounting for subjects' use of other drugs, such as cocaine or alcohol. This, Eisenberg said, suggests that marijuana's positive correlation with sexual activity doesn't merely reflect some general tendency of less-inhibited types, who may be more inclined to use drugs, to also be more likely to have sex. In addition, coital frequency rose steadily with increasing marijuana use, a dose-dependent relationship supporting a possible active role for marijuana in fostering sexual activity.

Nevertheless, Eisenberg cautioned, the study shouldn't be misinterpreted as having proven a causal link. "It doesn't say if you smoke more marijuana, you'll have more sex," he said.



Contacts and sources:
Bruce Goldman
Stanford University Medical Center

Bandit Masked Feathered Dinosaur Discovered

Researchers from the University of Bristol have revealed how a small feathered dinosaur used its colour patterning, including a bandit mask-like stripe across its eyes, to avoid being detected by its predators and prey.

By reconstructing the likely colour patterning of the Chinese dinosaur Sinosauropteryx, researchers have shown that it had multiple types of camouflage which likely helped it to avoid being eaten in a world full of larger meat-eating dinosaurs, including relatives of the infamous Tyrannosaurus rex, as well as potentially allowing it to sneak up more easily on its own prey.

Sinosauropteryx in the likely open habitat in which it lived 130 million years ago in the Early Cretaceous

Credit: Robert Nicholls

Fiann Smithwick from the University's School of Earth Sciences led the work, which has been published today in the journal Current Biology.

He said: "Far from all being the lumbering prehistoric grey beasts of past children's books, at least some dinosaurs showed sophisticated colour patterns to hide from and confuse predators, just like today’s animals.

"Vision was likely very important in dinosaurs, just like today’s birds, and so it is not surprising that they evolved elaborate colour patterns."

The colour patterns also allowed the team to identify the likely habitat in which the dinosaur lived 130 million years ago.

The work involved mapping out how dark pigmented feathers were distributed across the body and revealed some distinctive color patterns.

The best-preserved fossil specimen of Sinosauropteryx from the Early Cretaceous Jehol Biota of China and an interpretive drawing of the bones, stomach contents and darkly pigmented feathers. Scale bar represents 50 mm.
Credit: University of Bristol

These colour patterns can also be seen in modern animals where they serve as different types of camouflage.

The patterns include a dark stripe around the eye, or 'bandit mask', which in modern birds helps to hide the eye from would-be predators, and a striped tail that may have been used to confuse both predators and prey.

Senior author, Dr Jakob Vinther, added: "Dinosaurs might be weird in our eyes, but their colour patterns very much resemble modern counterparts.

"They had excellent vision, were fierce predators and would have evolved camouflage patterns like we see in living mammals and birds."

The small dinosaur also showed a 'counter-shaded' pattern with a dark back and light belly; a pattern used by many modern animals to make the body look flatter and less 3D.

This stops animals standing out against their background, making them harder for both would-be predators and potential prey to spot.

Previous work on modern animals, carried out by one of the authors, Bristol's Professor Innes Cuthill, has shown that the precise pattern of countershading relates to the specific environments in which animals live.

Animals living in open habitats, such as savannahs, often have a counter-shaded pattern that goes from dark to light sharply and high on the side of the body, while those living in more closed habitats, like forests, usually change from dark to light much lower and more gradually.
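As a toy illustration of that rule of thumb (the function, variable names and thresholds here are invented for the sketch; the study's actual analysis is quantitative and far more careful):

```python
def infer_habitat(transition_height, transition_sharpness):
    """Guess habitat from countershading, per the rule of thumb above.

    transition_height: where on the flank the dark-to-light change sits,
        from 0.0 (belly) to 1.0 (back).
    transition_sharpness: how abrupt the change is, from 0.0 (gradual)
        to 1.0 (sharp). Thresholds are illustrative only.
    """
    if transition_height > 0.5 and transition_sharpness > 0.5:
        return "open"    # sharp, high transition: strong direct overhead light
    return "closed"      # low, gradual transition: diffuse light under canopy

# A Sinosauropteryx-like pattern: countershading transitions high on the body
print(infer_habitat(transition_height=0.8, transition_sharpness=0.7))  # open
```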



This principle was applied to Sinosauropteryx, allowing its habitat of 130 million years ago to be reconstructed. The countershading on Sinosauropteryx went from dark to light high on the body, suggesting that it was more likely to live in open habitats with minimal vegetation.

Behavioural ecologist Professor Cuthill, who was also a co-author of this study, said: “We’ve shown before that countershading can act as effective camouflage against living predators. It’s exciting that we can now use the colours of extinct animals to predict the sort of environment they lived in.”

Fiann Smithwick added: "By reconstructing the color of these long-extinct dinosaurs, we have gained a better understanding of not only how they behaved and possible predator-prey dynamics, but also the environments in which they lived.

"This highlights how palaeocolour reconstructions can tell us things not possible from looking at just the bones of these animals."


Contacts and sources:
Jakob Vinther 
University of Bristol 


Citation: ‘Countershading and stripes in the theropod dinosaur Sinosauropteryx reveal heterogeneous habitats in the Early Cretaceous Jehol Biota’ by F. Smithwick, J. Vinther, R. Nicholls and I. Cuthill in Current Biology

Sleepwalkers Are Better at Automatic Walking

Sleepwalkers who are awake may have a multi-tasking advantage over non-sleepwalkers, according to recent research that uses virtual reality.

Try counting backwards from 200 in steps of 7 while walking en route to your favourite café. Chances are, you will slow down or even freeze mid-stride, unless you are a sleepwalker.

Breakthrough research using virtual reality has revealed significant differences in how the brains of sleepwalkers and non-sleepwalkers control and perceive body movement – a first in cognitive science. Sleepwalkers' movements are more automated than those of non-sleepwalkers. The results were published in Current Biology on October 23, 2017.



Wearing a full-body motion capture suit in a room full of IR-tracking cameras at EPFL (Ecole polytechnique fédérale de Lausanne), sleepwalkers and non-sleepwalkers were asked to walk towards a target object, in this case a virtual cylinder. The subject was shown a life-size avatar that could truthfully replicate or deviate from the subject's actual trajectory in real time. Participants could therefore be tricked into walking along a modified trajectory to compensate for the avatar's deviation. Their walking speed and accuracy of movement, along with their movement awareness, were then recorded and analysed.

There was no difference between sleepwalkers and non-sleepwalkers while performing this first task – just as previous research would have suggested. When the researchers added a layer of complexity, however, a clear distinction emerged between the two groups.

Subjects were asked to count backwards in steps of 7, starting from 200. Non-sleepwalkers slowed down significantly when counting backwards while walking, yet sleepwalkers maintained a similar walking velocity in both conditions. This shows a strong link between sleepwalking and automatic control of locomotion – not during nocturnal episodes, but during full wakefulness. Furthermore, sleepwalkers were more accurate at detecting changes in the virtual reality feedback when faced with the mental arithmetic task.
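The serial-sevens dual task used in the experiment is simply the arithmetic sequence 200, 193, 186, …; as a one-liner:

```python
# Serial sevens: count down from 200 in steps of 7.
serial_sevens = list(range(200, 0, -7))
print(serial_sevens[:5])  # [200, 193, 186, 179, 172]
```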

“We found that sleepwalkers continued to walk at the same speed, with the same precision as before and were more aware of their movements than non-sleepwalkers,” says EPFL neuroscientist Olaf Blanke. “The research is also a first in the field of action-monitoring, providing important biomarkers for sleepwalkers – while they are awake.”

The Somnambulist, 1871 by John Everett Millais 
Somnambulism
Credit: Wikimedia Commons

Sleepwalkers are known to perform complex movements, such as walking, in the absence of full consciousness. This ability may translate into a multi-tasking advantage for sleepwalkers while awake. Somnambulism, or sleepwalking, affects between 2 and 4 percent of adults and more than 10 percent of children. The condition can involve movements ranging from small gestures to complex actions such as walking, and even behaviours like getting dressed, driving a car or playing a musical instrument – all while asleep.

Sleepwalking is caused by a partial arousal from slow-wave, or deep, sleep; however, it is not known which functional brain mechanisms are affected by this pathophysiology. The newly identified relationship between sleepwalking and conscious movement control offers new insights into the brain mechanisms of sleepwalking, and could potentially be used to diagnose the condition while the subject is awake, rather than requiring an overnight stay in a sleep laboratory.

“Traditionally, little has been known about daytime markers of sleepwalking, mostly because of the difficulty in investigating this condition experimentally,” explains Oliver Kannape from the University of Central Lancashire (UCLan) and lead author of the study. “Our research offers novel insight into this common sleep disorder and provides a clear scientific link between action monitoring, consciousness, and sleepwalking.”




Contacts and sources:
Hillary Sanctuary
École polytechnique fédérale de Lausanne (EPFL)

Icebergs Scar Antarctic Seafloor

Thousands of marks on the Antarctic seafloor, caused by icebergs which broke free from glaciers more than ten thousand years ago, show how part of the Antarctic Ice Sheet retreated rapidly at the end of the last ice age as it balanced precariously on sloping ground and became unstable. Today, as the global climate continues to warm, rapid and sustained retreat may be close to happening again. That could trigger runaway ice retreat into the interior of the continent, which in turn would cause sea levels to rise even faster than currently projected.

Researchers from the University of Cambridge, the British Antarctic Survey and Stockholm University imaged the seafloor of Pine Island Bay, in West Antarctica. They found that, as seas warmed at the end of the last ice age, Pine Island Glacier retreated to a point where its grounding line - the point where it enters the ocean and starts to float - was perched precariously at the end of a slope.

Break up of a floating 'ice shelf' in front of the glacier left tall ice 'cliffs' at its edge. The height of these cliffs made them unstable, triggering the release of thousands of icebergs into Pine Island Bay, and causing the glacier to retreat rapidly until its grounding line reached a restabilising point in shallower water.

Ice cliffs in Pine Island Bay, taken from the IB Oden.

Credit: Martin Jakobsson

Today, as warming waters caused by climate change flow underneath the floating ice shelves in Pine Island Bay, the Antarctic Ice Sheet is once again at risk of losing mass from rapidly retreating glaciers. Significantly, if ice retreat is triggered, there are no relatively shallow points in the ice sheet bed along the course of Pine Island and Thwaites glaciers to prevent possible runaway ice retreat into the interior of West Antarctica. The results are published in the journal Nature.

"Today, the Pine Island and Thwaites glaciers are grounded in a very precarious position, and major retreat may already be happening, caused primarily by warm waters melting from below the ice shelves that jut out from each glacier into the sea," said Matthew Wise of Cambridge's Scott Polar Research Institute, and the study's first author. "If we remove these buttressing ice shelves, unstable ice thicknesses would cause the grounded West Antarctic Ice Sheet to retreat rapidly again in the future. Since there are no potential restabilising points further upstream to stop any retreat from extending deep into the West Antarctic hinterland, this could cause sea-levels to rise faster than previously projected."

Pine Island Glacier and the neighbouring Thwaites Glacier are responsible for nearly a third of total ice loss from the West Antarctic Ice Sheet, and this contribution has increased greatly over the past 25 years. In addition to basal melt, the two glaciers also lose ice by breaking off, or calving, icebergs into Pine Island Bay.

Icebergs in Pine Island Bay, West Antarctica.

Credit: Robert Larter

Today, the icebergs that break off from Pine Island and Thwaites glaciers are mostly large table-like blocks, which cause characteristic 'comb-like' ploughmarks as these large multi-keeled icebergs grind along the sea floor. By contrast, during the last ice age, hundreds of comparatively smaller icebergs broke free of the Antarctic Ice Sheet and drifted into Pine Island Bay. These smaller icebergs had a v-shaped structure like the keel of a ship, and left long and deep single scars in the sea floor.

High-resolution imaging techniques, used to investigate the shape and distribution of ploughmarks on the sea floor in Pine Island Bay, allowed the researchers to determine the relative size and drift direction of icebergs in the past. Their analysis showed that these smaller icebergs were released due to a process called marine ice-cliff instability (MICI). More than 12,000 years ago, Pine Island and Thwaites glaciers were grounded on top of a large wedge of sediment, and were buttressed by a floating ice shelf, making them relatively stable even though they rested below sea level.

Eventually, the floating ice shelf in front of the glaciers 'broke up', which caused them to retreat onto land sloping downward from the grounding lines to the interior of the ice sheet. This exposed tall ice 'cliffs' at their margin with an unstable height, and resulted in rapid retreat of the glaciers from marine ice cliff instability between 12,000 and 11,000 years ago. This occurred under climate conditions that were relatively similar to those of today.

"Ice-cliff collapse has been debated as a theoretical process that might cause West Antarctic Ice Sheet retreat to accelerate in the future," said co-author Dr Robert Larter, from the British Antarctic Survey. "Our observations confirm that this process is real and that it occurred about 12,000 years ago, resulting in rapid retreat of the ice sheet into Pine Island Bay."

Today, the two glaciers are getting ever closer to the point where they may become unstable, resulting once again in rapid ice retreat.

The research was funded in part by the UK Natural Environment Research Council (NERC).


Contacts and sources:
Sarah Collins
University of Cambridge

Smart Access to Homes and Cars Using Finger Vibration-Based Security System

"Good, good, good, good vibrations" goes the catchy Beach Boys song, a big hit in 1966 and beyond.

Now Rutgers engineers have created VibWrite, a smart access system that senses finger vibrations to verify users. The low-cost security system could eventually be used to gain access to homes, apartment buildings, cars, appliances - anything with a solid surface.

"Everyone's finger bone structure is unique, and their fingers apply different pressures on surfaces, so sensors that detect subtle physiological and behavioral differences can identify and authenticate a person," said Yingying (Jennifer) Chen, a professor in the Department of Electrical and Computer Engineering at Rutgers University-New Brunswick.

This is an illustration of a finger touching a solid surface with a sensor to detect vibrations. The VibWrite system can be linked to three types of pass codes for user authentication.

Credit: The DAISY Lab

Chen is senior author of a peer-reviewed paper on VibWrite that was published online today at the ACM Conference on Computer and Communications Security, a flagship annual event of the Association for Computing Machinery (ACM). The international conference in Dallas, Texas, convenes information security researchers, practitioners, developers and users who explore cutting-edge research. VibWrite paper co-authors include Jian Liu and Chen Wang, doctoral students who work with Chen, and a researcher at the University of Alabama at Birmingham.

The market for smart security access systems is expected to grow rapidly, reaching nearly $10 billion by 2022, the paper says. Today's smart security access systems mainly rely on traditional techniques that use intercoms, cameras, cards or fingerprints to authenticate users. But these systems require costly equipment, complex hardware installation and diverse maintenance needs.

The goal of VibWrite is to allow user verification when fingers touch any solid surface, the paper says. VibWrite integrates passcode, behavioral and physiological characteristics, building on a touch-sensing technique that uses vibration signals. It differs from traditional password-based approaches, which validate passwords rather than legitimate users, and from behavioral biometrics-based solutions, which typically require touch screens, fingerprint readers or other costly hardware, raise privacy concerns, and are vulnerable to "smudge attacks" that trace the oily residues fingers leave on surfaces.
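The article does not describe VibWrite's actual signal-processing pipeline, but the general idea of verifying a live reading against an enrolled physiological signature can be sketched with a simple similarity check. Everything below – the feature vectors, the cosine-similarity measure and the threshold – is invented for illustration and is not the authors' algorithm:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(features, template, threshold=0.9):
    """Accept the user only if the live vibration features closely match
    the enrolled template. Illustrative only; VibWrite's real pipeline
    is not described in this article."""
    return cosine_similarity(features, template) >= threshold

template = [0.9, 0.1, 0.4, 0.8]      # enrolled vibration signature (made up)
live_ok = [0.88, 0.12, 0.41, 0.79]   # same user, slight natural variation
live_bad = [0.1, 0.9, 0.8, 0.1]      # different user

print(verify(live_ok, template), verify(live_bad, template))  # True False
```

The threshold controls the trade-off the article alludes to: raising it lowers the false positive rate but means legitimate users may need a few attempts to pass.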

This is an experimental setup of VibWrite on a wooden table and door panel.

Credit: The DAISY Lab

"Smart access systems that use fingerprinting and iris-recognition are very secure, but they're probably more than 10 times as expensive as our VibWrite system, especially when you want to widely deploy them," said Chen, who works in the School of Engineering and is a member of the Wireless Information Network Laboratory (WINLAB) at Rutgers University-New Brunswick.

VibWrite allows users to choose from PINs, lock patterns or gestures to gain secure access, the paper says. The authentication process can be performed on any solid surface beyond touch screens and on any screen size. It is resilient to "side-channel attacks" - when someone places a hidden vibration receiver on the surface or uses a nearby microphone to capture vibration signals. It also resists several other types of attacks, including when an attacker learns passcodes after observing a user multiple times.

A great benefit is that a VibWrite system is low-cost and uses minimal power. It includes an inexpensive vibration motor and receiver, and it can turn any solid surface into an authentication surface. Both hardware installation and maintenance are easy, and "VibWrite probably could be commercialized in a couple of years," Chen said.

During two trials, VibWrite verified legitimate users with more than 95 percent accuracy and the false positive rate was less than 3 percent. But the current VibWrite system needs improvements because users may need a few attempts to pass the system. To improve performance, the Rutgers-led team will deploy multiple sensor pairs, refine the hardware and upgrade authentication algorithms. They also need to further test the system outdoors to account for varying temperatures, humidity, winds, wetness, dust, dirt and other conditions.


Contacts and sources:
Todd B. Bates
Rutgers University