Unseen Is Free


Friday, February 24, 2017

Historic Cultural Records Inform Scientific Perspectives on Woodland Uses


Scientists at the University of York and University College Cork have investigated how cultural records dating back 300 years could help improve understanding of the ways in which science interprets the many uses of woodland areas.

The researchers hope that the work will not only give a cultural narrative to environmental data collected over time, but also offer new insight into how woodland management systems can be adapted to increase a sense of ownership amongst communities living near woodland areas.

Historical evidence, gathered and analysed by the Shrawley Lime Group, a group of experts investigating the cultural and ecological history of Shrawley Woods, provided the team with new thinking on how documented woodland uses can be coloured by cultural perceptions of woods as a home for various tree species, as working woods, or as spaces of leisure.

These are the Shrawley Woods.

Credit: Dr Suzi Richer

By analysing pollen grains preserved in a waterlogged area of Shrawley Woods, the researchers were able to reconstruct environmental data dating back to the 11th century. This was then compared with oral history records from the 18th century, which revealed how differently the same type of tree is referenced in environmental and cultural records over time.

The researchers showed that the name given to a tree related more closely to how woodland dwellers used it than to its species, with naming trees by species only becoming common from the Industrial Revolution onwards.

The team found that the scientific data referenced both oak and lime species in the woods, but the historical information refers primarily to the products of the woods, such as 'poles' used for hop growing, and does not reference the species name at all. It is only when the local oral history evidence is included that historical and scientific data can be linked together and the evolution of wooded areas fully understood.

Dr Suzi Richer, from the University of York's Archaeology and Environment Departments, said: "We find that many books, television programmes, films, and artworks position woodlands as 'dangerous' or 'alien' places where cultural norms can be broken, but archaeological and historical evidence shows that these were often working and living spaces with evidence of charcoal burning, brick kilns, and water-powered mills, which bring people and wooded areas much closer together in a working, living harmony.

"Scientific data by itself, particularly if it spans over many years, can miss out the cultural and social context of the period it represents and therefore the relationship between the environment and the people who lived there in the past. This can be crucial to help us interpret environmental records more fully."

Records show that from around the 1800s, woodlands become far less 'personal' in the way in which they are documented, but the oral history accounts demonstrate that this 'other way' of seeing trees persisted and still persists in areas of the West Midlands today.

The need to standardise resources was also consistent with the Enlightenment way of seeing the world at that time - one which saw the natural world as 'civilised'. It is from this point onwards that names, like 'oak', are used more commonly.

Dr Ben Gearey, from the Department of Archaeology, University College Cork, said: "We often think of environmental data as giving us information on the adverse effects that human activity can have on the environment, but our research shows that it can also demonstrate how cultural perceptions of a landscape or species can shape conservation efforts.

"We hope that this work demonstrates the importance of combining information from scientific and cultural approaches, and also accounts from the local communities in which these types of studies are undertaken.

"The next stage is to look more closely at the archaeological record and how we can present combined records so that they are meaningful for policy makers and woodland managers."



Contacts and sources:
Samantha Martin
University of York

The research is published in Environmental Archaeology: The Journal of Human Palaeoecology. http://dx.doi.org/10.1080/14614103.2017.1283765

The Hole in the Universe

The events surrounding the Big Bang were so cataclysmic that they left an indelible imprint on the fabric of the cosmos. We can detect these scars today by observing the oldest light in the Universe. As it was created nearly 14 billion years ago, this light — which exists now as weak microwave radiation and is thus named the cosmic microwave background (CMB) — has now expanded to permeate the entire cosmos, filling it with detectable photons.

The CMB can be used to probe the cosmos via something known as the Sunyaev-Zel’dovich (SZ) effect, which was first observed over 30 years ago. We detect the CMB here on Earth when its constituent microwave photons travel to us through space. On their journey to us, they can pass through galaxy clusters that contain high-energy electrons. These electrons give the photons a tiny boost of energy. Detecting these boosted photons through our telescopes is challenging but important — they can help astronomers to understand some of the fundamental properties of the Universe, such as the location and distribution of dense galaxy clusters.
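For readers who want the quantitative version, the strength of this thermal SZ distortion is conventionally summarised by the Compton y-parameter; the expressions below are the standard textbook form, quoted here for context rather than taken from the ESO release:

    % Compton y-parameter: line-of-sight integral of the electron pressure
    y = \int \sigma_T \, n_e \, \frac{k_B T_e}{m_e c^2} \, \mathrm{d}l
    % At low (Rayleigh-Jeans) frequencies the CMB temperature change is a decrement
    \frac{\Delta T}{T_{\mathrm{CMB}}} \approx -2\,y

Here n_e and T_e are the electron density and temperature of the hot cluster gas and sigma_T is the Thomson cross-section, which is why a massive cluster shows up as a dark patch at the frequencies ALMA observes.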

Credit: ALMA (ESO/NAOJ/NRAO)/T. Kitayama (Toho University, Japan)/ESA/Hubble & NASA

This image shows the first measurements of the thermal Sunyaev-Zel’dovich effect from the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile (in blue). Astronomers combined data from ALMA’s 7- and 12-metre antennas to produce the sharpest possible image. The target was one of the most massive known galaxy clusters, RX J1347.5–1145, the centre of which shows up here in the dark “hole” in the ALMA observations. The energy distribution of the CMB photons shifts and appears as a temperature decrease at the wavelength observed by ALMA, hence a dark patch is observed in this image at the location of the cluster.



Contacts and sources:
Richard Hook
ESO

Thursday, February 23, 2017

The First Trace of Differences between Matter and 'Common' Baryonic Antimatter


The world around us is constructed mainly of baryons, particles composed of three quarks. Why are there no antibaryons, given that just after the Big Bang matter and antimatter came into being in exactly the same amounts? Much suggests that, after many decades of research, physicists are closer to answering this question. In the Large Hadron Collider beauty (LHCb) experiment, the first trace of a difference between baryons and antibaryons has just been encountered.

In data collected during the first phase of operation of the Large Hadron Collider, the LHCb collaboration has discovered an interesting asymmetry. The most recent analysis of decays of the beauty baryon Lambda b, a particle six times more massive than a proton, suggests that it decays slightly differently than its antimatter counterpart. If this result is confirmed, it will be possible to talk about having observed the first difference between baryons and antibaryons, the family of particles that makes up most of our everyday world.

The first trace of differences between matter and 'common' baryonic antimatter has just been encountered in decays of the beauty baryon Lambda b. Pictured above: the LHCb Collaboration in front of the LHCb detector.

Source: CERN, The LHCb Collaboration

Certain differences between matter and antimatter have been observed before. In 1964, it was noticed that kaons - that is, K mesons, particles made up of a strange quark and an up or down antiquark - sometimes decay somewhat differently than antikaons (the Nobel Prize was awarded for this discovery in 1980). More recently, there have been reports of slightly clearer differences in the decays of B mesons and anti-B mesons of various types (the B meson consists of a beauty quark and an up, down, strange or charm quark).

Mesons are quark-antiquark pairs with short lifetimes; they appear in today's Universe only in small quantities and are produced on Earth mainly in high-energy collisions in particle accelerators. The matter making up the macroscopic structures of our world, by contrast, consists of leptons (which include electrons) and, to a much greater degree, baryons - clusters of three quarks (the proton is a baryon containing two up quarks and one down quark, while the neutron is composed of two down quarks and one up quark). The most recent analysis of data from the LHCb collaboration, published in the journal Nature Physics and concerning the decays of Lambda b particles composed of down, up and beauty quarks, is thus the first indication of possible differences between baryonic matter and its antimatter reflection.

"We cannot yet talk about a discovery. Nevertheless, we are dealing with something that seems to be an increasingly promising observational clue, taken from the data from the first stage of operation of the LHC accelerator. We will, however, have to wait for the final confirmation - or denial... - of the current result another dozen or so months until the official end of the analysis of data from the second run," stresses Prof. Marcin Kucharczyk from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, one of the participants of the LHCb collaboration.

Modern particle physics and cosmological models suggest that antimatter came into being in exactly the same amounts as matter. This fact has spectacular consequences: when a particle encounters its antiparticle, there is a great likelihood of mutual annihilation, a process in which both particles completely transform into energy. This mechanism is extremely efficient. The amount of energy generated by the annihilation of a kilogram of antimatter corresponds, to a good approximation, to the amount of energy that would be released by burning the annual petrol production of all the refineries in Poland.
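A rough back-of-the-envelope check (our arithmetic, using only E = mc^2; the petrol comparison itself is the Institute's): annihilating one kilogram of antimatter with one kilogram of matter converts two kilograms of mass entirely into energy,

    E = mc^2 \approx 2\,\mathrm{kg} \times \left(3\times10^{8}\,\mathrm{m/s}\right)^2 \approx 1.8\times10^{17}\,\mathrm{J},

which is on the order of the energy released by tens of megatons of TNT.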

If in the contemporary Universe there were planets, stars or galaxies made of antimatter, they should emit large amounts of radiation with very characteristic energies. This would arise due to the inevitable interactions with matter of the opposite type, leading to annihilation. Meanwhile, astronomers only observe annihilation radiation here and there and in residual amounts, well explained by physical phenomena which are also today responsible for the formation of small amounts of antimatter. Thus the fundamentally important question arises: since originally matter and antimatter filled the Universe in exactly equal amounts, why have they not completely disappeared? Why has a small portion of matter managed to survive the era of annihilation?

In the living world, mass extinctions that wipe out species unfold over tens or hundreds of thousands of years. Meanwhile, everything points to the fact that the antimatter annihilated by matter disappeared from our Universe within fractions of a second after the Big Bang. For every few billion particles of matter, just one particle survived the giant cataclysm. If a similar scale of destruction struck the human species, within seconds the Earth's population would be down to a single living individual. The question of why only that individual survived would certainly be most apt.

"In modern physics, it is assumed that the existence of matter should be due to some minor differences between the properties of particles and antiparticles. In equations, to convert a particle into an antiparticle, you have to change the sign of the corresponding quantum characteristics - in the case of electrons or the quarks making up protons or neutrons it is the electrical charge - and change the character of the spatial coordinates, i.e. form a mirror image. The combination of these two operations is called CP symmetry, that is, charge and parity symmetry. Thus, attempts to detect differences between matter and antimatter boil down to tracking events in which CP symmetry is not preserved," explains Prof. Kucharczyk.

Looking for signs of CP violation, the LHCb researchers selected, from a huge number of collisions and their decay products, approx. 6,000 cases in which Lambda b particles decayed to a proton and three pi mesons (pions), and approx. 1,000 cases with a decay path leading to a proton, a pion and two kaons. Detailed analysis revealed that the angles at which the decay products diverge are sometimes somewhat different for Lambda b baryons than for their antimatter partners. The result has a statistical significance of 3.3 standard deviations (sigma), corresponding to a probability of approx. 99.9% that it is not a random fluctuation. In particle physics, however, it is assumed that one can speak of a discovery only at a statistical significance of over 5 sigma, that is, when the probability of a random fluctuation is less than about one in three and a half million.
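For readers curious where such probabilities come from, here is a minimal sketch (our illustration using SciPy's Gaussian tail function; the collaboration's actual statistical treatment is considerably more involved):

    # Convert a significance in standard deviations (sigma) into the one-sided
    # Gaussian tail probability that the result is a mere statistical fluctuation.
    from scipy.stats import norm

    def fluctuation_probability(sigma):
        """One-sided tail probability of a standard normal at `sigma`."""
        return norm.sf(sigma)

    for sigma in (3.3, 5.0):
        p = fluctuation_probability(sigma)
        print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1/p:,.0f})")

    # 3.3 sigma -> p ~ 4.8e-04, i.e. roughly 1 in 2,000 (about 99.95% confidence)
    # 5.0 sigma -> p ~ 2.9e-07, i.e. roughly 1 in 3,500,000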

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of nuclear physics methods in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly output of IFJ PAN comprises more than 500 scientific papers in journals listed in the Journal Citation Reports published by Thomson Reuters. Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Krakow Research Consortium "Matter-Energy-Future", which holds the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. The Institute holds A+ Category (the leading level in Poland) in the field of sciences and engineering.




Contacts and sources:
Prof. Marcin Kucharczyk
The Institute of Nuclear Physics of the Polish Academy of Sciences

Citation: "Measurement of matter-antimatter differences in beauty baryon decays"
The LHCb collaboration  Nature Physics (2017) DOI: 10.1038/nphys4021

New Model Predicts More Gas-Giants Will Be Found Orbiting Sun-Like Stars


New planetary formation models from Carnegie's Alan Boss indicate that there may be an undiscovered population of gas giant planets orbiting around Sun-like stars at distances similar to those of Jupiter and Saturn. His work is published by The Astrophysical Journal.

The population of exoplanets discovered by ongoing planet-hunting projects continues to increase. These discoveries can improve models that predict where to look for more of them.

The planets predicted by Boss in this study could hold the key to solving a longstanding debate about the formation of our Solar System's giant planets out of the disk of gas and dust that surrounded the Sun in its youth.

Boss' model of a planet-forming disk, which demonstrates that gas giant planets could be found orbiting Sun-like stars at distances similar to Jupiter and Saturn. The disk extends from 4 to 20 times the distance of the Earth from the Sun. You can see the spiral arms forming in the midplane of the disk. The disk instability theory suggests that gas giant planets can form from the clumps seen in the densest regions of the spiral arms.

Credit: Alan Boss

One theory holds that gas giants form just like terrestrial planets do--by the slow accretion of rocky material from the rotating disk--until the object contains enough material to gravitationally attract a very large envelope of gas around a solid core. The other theory states that gas giant planets form rapidly when the disk gas forms spiral arms, which increase in mass and density until distinct clumps form that coalesce into baby gas giant planets.

One problem with the first option, called core accretion, is that it can't explain how gas giant planets form beyond a certain orbital distance from their host stars--a phenomenon that is increasingly found by intrepid planet hunters. However, models of the second theory, called disk instability, have indicated the formation of planets with orbits between about 20 and 50 times the distance between the Earth and the Sun.

"Given the existence of gas giant planets on such wide orbits, disk instability or something similar must be involved in the creation of at least some exoplanets," Boss said. "However, whether or not this method could create closer-orbiting gas giant planets remains unanswered."

Boss set out to use his modeling tools to learn if gas giant planets can form closer to their host stars by taking a new look at the disk-cooling process. His simulations indicate that there may be a largely unseen population of gas giant planets orbiting Sun-like stars at distances between 6 and 16 times that separating the Earth and the Sun. (For context, Jupiter is just over five times as distant from the Sun as Earth is, and Saturn is over nine times as distant.)

"NASA's upcoming Wide Field Infrared Survey Telescope may be ideally suited to test my predictions here," Boss added.



Contacts and sources:
Alan Boss
Carnegie Institution for Science

Cameras Can Steal Data from Blinking Computer Hard Drive LED Lights

Researchers at the Ben-Gurion University of the Negev (BGU) Cyber Security Research Center have demonstrated that data can be stolen from an isolated "air-gapped" computer by reading the pulses of light from the hard drive's LED with various types of cameras and light sensors.

In the new paper, the researchers demonstrated how the data can be received by a quadcopter drone flying outside a window with a line of sight to the transmitting computer.



Air-gapped computers are isolated -- separated both logically and physically from public networks -- ostensibly so that they cannot be hacked over the Internet or within company networks. These computers typically contain an organization's most sensitive and confidential information.

Led by Dr. Mordechai Guri, head of R&D at the Cyber Security Research Center, the research team utilized the hard-drive (HDD) activity LED lights that are found on most desktop PCs and laptops. The researchers found that once malware is on a computer, it can indirectly control the HDD LED, turning it on and off rapidly (thousands of flickers per second) -- a rate that exceeds human visual perception. As a result, highly sensitive information can be encoded and leaked over the fast LED signals, which are received and recorded by remote cameras or light sensors.
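Purely as an illustration of the idea (hypothetical values and names, not the researchers' code), simple on-off keying can turn bytes into a schedule of LED on/off intervals like this:

    # Illustrative on-off keying (OOK) encoder: turns bytes into LED on/off intervals.
    # Timing is a hypothetical placeholder; the BGU malware drives the real HDD LED
    # indirectly, via disk operations, at thousands of flickers per second.
    BIT_TIME = 0.0005  # seconds per bit (hypothetical)

    def to_bits(data: bytes):
        """Yield the bits of `data`, most significant bit first."""
        for byte in data:
            for i in range(7, -1, -1):
                yield (byte >> i) & 1

    def ook_schedule(data: bytes):
        """Return a list of (led_on, duration_seconds) pairs encoding the message."""
        return [(bit == 1, BIT_TIME) for bit in to_bits(data)]

    if __name__ == "__main__":
        schedule = ook_schedule(b"secret")
        print(len(schedule), "intervals; first ten:", schedule[:10])

A camera or light sensor watching the LED would then sample at least twice per bit interval and reverse the mapping to recover the bytes.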

Credit: Pixabay

"Our method compared to other LED exfiltration is unique, because it is also covert," Dr. Guri says. "The hard drive LED flickers frequently, and therefore the user won't be suspicious about changes in its activity."

Dr. Guri and the Cyber Security Research Center have conducted a number of studies to demonstrate how malware can infiltrate air-gapped computers and transmit data. Previously, they determined that computer speakers and fans, FM waves and heat are all methods that can be used to obtain data.

In addition to Dr. Guri, the other BGU researchers include Boris Zadov, who received his M.Sc. degree from the BGU Department of Electrical and Computer Engineering and Prof. Yuval Elovici, director of the BGU Cyber Security Research Center. Prof. Elovici is also a member of Ben-Gurion University's Department of Software and Information Systems Engineering and director of Deutsche Telekom Laboratories at BGU.
 


Contacts and sources:
Andrew Lavin
American Associates, Ben-Gurion University of the Negev (AABGU)

Mainly Men Migrated from the Pontic Steppe To Europe 5,000 Years Ago According to Genetic Data

A new study, looking at the sex-specifically inherited X chromosome of prehistoric human remains, shows that hardly any women took part in the extensive migration from the Pontic-Caspian Steppe approximately 5,000 years ago. The great migration that brought farming practices to Europe 4,000 years earlier, on the other hand, consisted of both women and men. The difference in sex bias suggests that different social and cultural processes drove the two migrations.

Genetic data suggest that modern European ancestry represents a mosaic of ancestral contributions from multiple waves of prehistoric migration events. Recent studies of genomic variation in prehistoric human remains have demonstrated that two mass migration events are particularly important to understanding European prehistory: the Neolithic spread of agriculture from Anatolia starting around 9,000 years ago, and migration from the Pontic-Caspian Steppe around 5,000 years ago. These migrations are coincident with large social, cultural, and linguistic changes, and each has been inferred to have replaced more than half of the contemporaneous gene pool of resident Central Europeans.

The Pontic–Caspian steppe, or Ukrainian steppe, is the vast steppeland stretching from the northern shores of the Black Sea (called Euxeinos Pontos in antiquity) as far east as the Caspian Sea, from Moldova and western Ukraine across the Southern Federal District and the Volga Federal District of Russia to western Kazakhstan, forming part of the larger Eurasian steppe, adjacent to the Kazakh steppe to the east. It is a part of the Palearctic temperate grasslands, savannas, and shrublands ecoregion of the temperate grasslands, savannas, and shrublands biome.

The area corresponds to Cimmeria, Scythia, and Sarmatia of classical antiquity. Across several millennia the steppe was used by numerous tribes of nomadic horsemen, many of which went on to conquer lands in the settled regions of Europe and in western and southern Asia.

The steppe extends roughly from the Dniepr to the Ural or 30° to 55° east longitude, and from the Black Sea and the Caucasus in the south to the temperate forest and taiga in the north, or 45° to 55° north latitude.

Credit: Wikipedia

Dramatic events in human prehistory can be investigated using patterns of genetic variation among the people that lived in those times. In particular, studies of differing female and male demographic histories on the basis of ancient genomes can provide information about complexities of social structures and cultural interactions in prehistoric populations.

Researchers from Uppsala and Stanford University investigated the genetic ancestry on the sex-specifically inherited X chromosome and the autosomes in 20 early Neolithic and 16 Late Neolithic/Bronze Age human remains. Contrary to previous hypotheses suggesting patrilocality (social system in which a family resides near the man's parents) of many agricultural populations, they found no evidence of sex-biased admixture during the migration that spread farming across Europe during the early Neolithic.

"For later migrations from the Pontic steppe during the early Bronze Age, however, we find a dramatic male bias. There are simply too few X-chromosomes from the migrants, which points to around ten migrating males for every migrating female," says Mattias Jakobsson, professor of Genetics at the Department of Organismal Biology, Uppsala University.
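The reasoning behind that ten-to-one figure can be sketched with a commonly used population-genetic approximation (ours, not necessarily the exact model in the paper): if s_f and s_m are the fractions of incoming ancestry contributed by female and male migrants, then, because females carry two X chromosomes and males one,

    H_{\mathrm{autosomal}} \approx \frac{s_f + s_m}{2}, \qquad
    H_{X} \approx \frac{2\,s_f + s_m}{3},

so finding far less migrant ancestry on the X chromosome than on the autosomes implies that s_m is much greater than s_f.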

The research group found evidence of ongoing, primarily male, migration from the steppe to central Europe over a period of multiple generations, with a level of sex bias that excludes a pulse migration during a single generation.

The contrasting patterns of sex-specific migration during these two migrations suggest a view of differing cultural histories in which the Neolithic transition was driven by mass migration of both males and females in roughly equal numbers -- perhaps whole families -- whereas the later Bronze Age migration and cultural shift were instead driven by male migration.




Contacts and sources:
Mattias Jakobsson
Uppsala University

Tiny Fibers' Three-In-One Design Allows Genetic, Chemical, Optical, and Electrical Inputs and Outputs for the Brain

For the first time ever, a single flexible fiber no bigger than a human hair has successfully delivered a combination of optical, electrical, and chemical signals back and forth into the brain, putting into practice an idea first proposed two years ago. With some tweaking to further improve its biocompatibility, the new approach could provide a dramatically improved way to learn about the functions and interconnections of different brain regions.

The fibers are designed to mimic the softness and flexibility of brain tissue. This could make it possible to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers, thus enabling much more extensive data collection. For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins, which sensitize neurons to light, through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber just 200 micrometers across — comparable to the width of a human hair.

Graduate student Seongjun Park holds an example of a new flexible fiber, which is no bigger than a human hair and has successfully delivered a combination of optical, electrical, and chemical signals back and forth into the brain.

Photo: Young Gyu Yoon

The new fibers were developed through a collaboration among material scientists, chemists, biologists, and other specialists. The results are reported in the journal Nature Neuroscience, in a paper by Seongjun Park, an MIT graduate student; Polina Anikeeva, the Class of 1942 Career Development Professor in the Department of Materials Science and Engineering; Yoel Fink, a professor in the departments of Materials Science and Engineering, and Electrical Engineering and Computer Science; Gloria Choi, the Samuel A. Goldblith Career Development Professor in the Department of Brain and Cognitive Sciences, and 10 others at MIT and elsewhere.

Previous research efforts in neuroscience have generally relied on separate devices: needles to inject viral vectors for optogenetics, optical fibers for light delivery, and arrays of electrodes for recording, adding a great deal of complication and the need for tricky alignments among the different devices. Getting that alignment right in practice was “somewhat probabilistic,” Anikeeva says. “We said, wouldn’t it be nice if we had a device that could just do it all.”

After years of effort, that’s what the team has now successfully demonstrated. “It can deliver the virus [containing the opsins] straight to the cell, and then stimulate the response and record the activity — and [the fiber] is sufficiently small and biocompatible so it can be kept in for a long time,” Anikeeva says.

Since each fiber is so small, “potentially, we could use many of them to observe different regions of activity,” she says. In their initial tests, the researchers placed probes in two different brain regions at once, varying which regions they used from one experiment to the next, and measuring how long it took for responses to travel between them.

The key ingredient that made this multifunctional fiber possible was the development of conductive “wires” that maintained the needed flexibility while also carrying electrical signals well. After much work, the team was able to engineer a composite of conductive polyethylene doped with graphite flakes. The polyethylene was initially formed into layers, sprinkled with graphite flakes, then compressed; then another pair of layers was added and compressed, and then another, and so on. A member of the team, Benjamin Grena, a recent graduate in materials science and engineering, referred to it as making “mille feuille,” (literally, “a thousand leaves,” the French name for a Napoleon pastry). That method increased the conductivity of the polymer by a factor of four or five, Park says. “That allowed us to reduce the size of the electrodes by the same amount.”

One immediate question that could be addressed through such fibers is that of exactly how long it takes for the neurons to become light-sensitized after injection of the genetic material. Such determinations could only be made by crude approximations before, but now could be pinpointed more clearly, the team says. The specific sensitizing agent used in their initial tests turned out to produce effects after about 11 days.

The team aims to reduce the width of the fibers further, to make their properties even closer to those of the neural tissue. “The next engineering challenge is to use material that is even softer, to really match” the adjacent tissue, Park says. Already, though, dozens of research teams around the world have been requesting samples of the new fibers to test in their own research.

The research team included members of MIT’s Research Laboratory of Electronics, Department of Electrical Engineering and Computer Science, McGovern Institute for Brain Research, Department of Chemical Engineering, and Department of Mechanical Engineering, as well as researchers at Tohoku University in Japan and Virginia Polytechnic Institute. It was supported by the National Institute of Neurological Disorders and Stroke, the National Science Foundation, the MIT Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, and the McGovern Institute for Brain Research.



Contacts and sources:
David L. Chandler 
Massachusetts Institute of Technology (MIT)

Ultracool Dwarf and the Seven Earth-like Planets


A total of seven Earth-like, potentially habitable worlds have been discovered orbiting a nearby star known as TRAPPIST-1. Just 40 light-years away, the star’s diminutive size and dim light output mean it is known as an ultracool dwarf.

ESOcast 96 explores this important discovery, from how the astronomers made the incredibly intricate measurements required to find and study the planets — including observations with ESO’s Very Large Telescope — to each world’s potential to support life as we know it. Excitingly, three of the planets in the system orbit in the habitable zone around TRAPPIST-1, and could harbour oceans of water on their surfaces.



Dwarf stars like TRAPPIST-1 are very common in our galaxy, making rich planetary systems like this some of the best targets in humanity’s search for life elsewhere in the Universe. This ESOcast takes you on a journey through one such system, which contains both the largest number of Earth-sized planets and the largest number of potentially habitable worlds ever discovered.

This infographic displays some artist's illustrations of how the seven planets orbiting TRAPPIST-1 might appear — including the possible presence of water oceans — alongside some images of the rocky planets in our Solar System. Information about the size and orbital periods of all the planets is also provided for comparison; the TRAPPIST-1 planets are all approximately Earth-sized.

Credit: NASA

Astronomers have found a system of seven Earth-sized planets just 40 light-years away. Using ground and space telescopes, including ESO’s Very Large Telescope, the planets were all detected as they passed in front of their parent star, the ultracool dwarf star known as TRAPPIST-1. According to the paper appearing today in the journal Nature, three of the planets lie in the habitable zone and could harbour oceans of water on their surfaces, increasing the possibility that the star system could play host to life. This system has both the largest number of Earth-sized planets yet found and the largest number of worlds that could support liquid water on their surfaces.

Astronomers using the TRAPPIST–South telescope at ESO’s La Silla Observatory, the Very Large Telescope (VLT) at Paranal and the NASA Spitzer Space Telescope, as well as other telescopes around the world [1], have now confirmed the existence of at least seven small planets orbiting the cool red dwarf star TRAPPIST-1 [2]. All the planets, labelled TRAPPIST-1b, c, d, e, f, g and h in order of increasing distance from their parent star, have sizes similar to Earth [3].

Dips in the star’s light output caused by each of the seven planets passing in front of it — events known as transits — allowed the astronomers to infer information about their sizes, compositions and orbits [4]. They found that at least the inner six planets are comparable in both size and temperature to the Earth.
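The size estimates rest on a textbook relation (stated here for context rather than quoted from the Nature paper): the fractional dip in starlight during a transit is roughly the square of the planet-to-star radius ratio,

    \frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^2,

which is why Earth-sized planets produce readily measurable dips around a star as small as TRAPPIST-1.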

Lead author Michaël Gillon of the STAR Institute at the University of Liège in Belgium is delighted by the findings: “This is an amazing planetary system — not only because we have found so many planets, but because they are all surprisingly similar in size to the Earth!”

With just 8% of the mass of the Sun, TRAPPIST-1 is very small in stellar terms — only marginally bigger than the planet Jupiter — and though nearby in the constellation Aquarius (The Water Carrier), it appears very dim. Astronomers expected that such dwarf stars might host many Earth-sized planets in tight orbits, making them promising targets in the hunt for extraterrestrial life, but TRAPPIST-1 is the first such system to be found.

Co-author Amaury Triaud expands: “The energy output from dwarf stars like TRAPPIST-1 is much weaker than that of our Sun. Planets would need to be in far closer orbits than we see in the Solar System if there is to be surface water. Fortunately, it seems that this kind of compact configuration is just what we see around TRAPPIST-1!”

This diagram compares the orbits of the newly-discovered planets around the faint red star TRAPPIST-1 with the Galilean moons of Jupiter and the inner Solar System. All the planets found around TRAPPIST-1 orbit much closer to their star than Mercury is to the Sun, but as their star is far fainter, they are exposed to similar levels of irradiation as Venus, Earth and Mars in the Solar System.
Credit: ESO/O. Furtak

The team determined that all the planets in the system are similar in size to Earth and Venus in the Solar System, or slightly smaller. The density measurements suggest that at least the innermost six are probably rocky in composition.

The planetary orbits are not much larger than that of Jupiter’s Galilean moon system, and much smaller than the orbit of Mercury in the Solar System. However, TRAPPIST-1’s small size and low temperature mean that the energy input to its planets is similar to that received by the inner planets in our Solar System; TRAPPIST-1c, d and f receive similar amounts of energy to Venus, Earth and Mars, respectively.
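The scaling at work here is the standard inverse-square law (a general relation, not a result from the paper): a planet at orbital distance d from a star of luminosity L receives a flux

    F = \frac{L}{4\pi d^{2}},

so matching the irradiation of Venus, Earth or Mars around a much fainter star requires an orbit smaller by roughly the square root of the luminosity ratio.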

This diagram compares the sizes of the newly-discovered planets around the faint red star TRAPPIST-1 with the Galilean moons of Jupiter and the inner Solar System. All the planets found around TRAPPIST-1 are of similar size to the Earth.
Credit: ESO/O. Furtak

All seven planets discovered in the system could potentially have liquid water on their surfaces, though their orbital distances make some of them more likely candidates than others. Climate models suggest the innermost planets, TRAPPIST-1b, c and d, are probably too hot to support liquid water, except maybe on a small fraction of their surfaces. The orbital distance of the system’s outermost planet, TRAPPIST-1h, is unconfirmed, though it is likely to be too distant and cold to harbour liquid water — assuming no alternative heating processes are occurring [5]. TRAPPIST-1e, f, and g, however, represent the holy grail for planet-hunting astronomers, as they orbit in the star’s habitable zone and could host oceans of surface water [6].

These new discoveries make the TRAPPIST-1 system a very important target for future study. The NASA/ESA Hubble Space Telescope is already being used to search for atmospheres around the planets and team member Emmanuël Jehin is excited about the future possibilities: “With the upcoming generation of telescopes, such as ESO’s European Extremely Large Telescope and the NASA/ESA/CSA James Webb Space Telescope, we will soon be able to search for water and perhaps even evidence of life on these worlds.”



Contacts and sources: 
Richard Hook
ESO

Neural Networks Promise Sharpest Ever Images of Deep Space


Telescopes, the workhorse instruments of astronomy, are limited by the size of the mirror or lens they use. Using 'neural nets', a form of artificial intelligence, a group of Swiss researchers now have a way to push past that limit, offering scientists the prospect of the sharpest ever images in optical astronomy. The new work appears in a paper in Monthly Notices of the Royal Astronomical Society.

The diameter of its lens or mirror, the so-called aperture, fundamentally limits any telescope. In simple terms, the bigger the mirror or lens, the more light it gathers, allowing astronomers to detect fainter objects, and to observe them more clearly. A statistical concept known as 'Nyquist sampling theorem' describes the resolution limit, and hence how much detail can be seen.
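The aperture limit the article alludes to is often quoted via the Rayleigh diffraction criterion (the standard formula, added here for context): for light of wavelength lambda passing through an aperture of diameter D, the smallest resolvable angle is roughly

    \theta \approx 1.22\,\frac{\lambda}{D},

and the Nyquist sampling theorem then dictates how finely that resolution element must be sampled before no further detail can be recovered by conventional means.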

The Swiss study, led by Prof Kevin Schawinski of ETH Zurich, uses the latest in machine learning technology to challenge this limit. The researchers teach a neural network, a computational approach that simulates the neurons in a brain, what galaxies look like, and then ask it to automatically recover a blurred image and turn it into a sharp one. Just like a human, the neural net needs examples - in this case a blurred and a sharp image of the same galaxy - to learn the technique.

The frames here show an example of an original galaxy image (left), the same image deliberately degraded (second from left), the image after recovery with the neural net (second from right), and the image processed with deconvolution, the best existing technique (right).


Credit: K. Schawinski / C. Zhang / ETH Zurich.

Their system uses two neural nets competing with each other, an emerging approach popular with the machine learning research community called a "generative adversarial network", or GAN. The whole teaching programme took just a few hours on a high performance computer.
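As a minimal sketch of that adversarial set-up (a toy illustration in PyTorch, with random tensors standing in for blurred/sharp galaxy image pairs; the space.ml code uses a different architecture and training recipe):

    # Toy image-to-image GAN: a generator maps a degraded image to a candidate
    # sharp image, while a discriminator learns to tell its outputs from real
    # sharp images. Illustrative only; not the space.ml implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Generator(nn.Module):          # degraded image -> candidate sharp image
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):      # image -> "is this a real sharp image?" score
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
            )

        def forward(self, x):
            return self.net(x)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    # Stand-ins for paired training data: (degraded, sharp) 64x64 images.
    degraded = torch.rand(8, 1, 64, 64)
    sharp = torch.rand(8, 1, 64, 64)

    for step in range(100):
        # Discriminator step: push real sharp images toward 1, generated ones toward 0.
        fake = G(degraded).detach()
        loss_d = bce(D(sharp), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: fool the discriminator while staying close to the true sharp image.
        fake = G(degraded)
        loss_g = bce(D(fake), torch.ones(8, 1)) + F.l1_loss(fake, sharp)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In the real application the trained generator is then run on genuinely blurred telescope images, which is the recovery step described above.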

The trained neural nets were able to recognise and reconstruct features that the telescope could not resolve - such as star-forming regions, bars and dust lanes in galaxies. The scientists checked it against the original high-resolution image to test its performance, finding it better able to recover features than anything used to date, including the 'deconvolution' approach used to improve the images made in the early years of the Hubble Space Telescope.

Schawinski sees this as a big step forward: "We can start by going back to sky surveys made with telescopes over many years, see more detail than ever before, and for example learn more about the structure of galaxies. There is no reason why we can't then apply this technique to the deepest images from Hubble, and the coming James Webb Space Telescope, to learn more about the earliest structures in the Universe."

Professor Ce Zhang, the collaborator from computer science, also sees great potential: "The massive amount of astronomical data is always fascinating to computer scientists. But, when techniques such as machine learning emerge, astrophysics also provides a great test bed for tackling a fundamental computational question - how do we integrate and take advantage of the knowledge that humans have accumulated over thousands of years, using a machine learning system? We hope our collaboration with Kevin can also shed light on this question."

The success of the project points to a more "data-driven" future for astrophysics in which information is learned automatically from data, instead of manually crafted physics models. ETH Zurich is hosting this work on the space.ml cross-disciplinary astrophysics/computer-science initiative, where the code is available to the general public.






Contacts and sources:
Robert Massey
The Royal Astronomical Society

Citation:  "Generative Adversarial Networks recover features in astrophysical images of galaxies beyond the deconvolution limit", Kevin Schawinski, Ce Zhang, Hantian Zhang, Lucas Fowler, and Gokula Krishnan Santhanam, Monthly Notices of the Royal Astronomical Society, in press. After the embargo expires, a copy of the paper will be available at no cost from http://doi.org/10.1093/mnrasl/slx008

A preprint is available at http://www.ras.org.uk/images/stories/press/Computation/Schawinski_et_al.pdf

Caught in the Act: First-Ever Global View of Transshipment in Commercial Fishing Industry



Analysis of satellite data broadcast from ships at sea enables automatic identification and monitoring of transshipments, a practice associated with illegal, unregulated, and unreported fishing

Transshipment, the transfer of goods from one boat to another, is a major pathway for illegally caught and unreported fish to enter the global seafood market. It has also been associated with drug smuggling and slave labor. Illegal in many cases, transshipment has been largely invisible and nearly impossible to manage, because it often occurs far from shore and out of sight. Until now.

Today, with the release of our report, The Global View of Transshipment: Preliminary Findings, we present the first-ever global footprint of transshipment in the fishing industry. The report explains how data scientists from SkyTruth and Global Fishing Watch (a partnership of Oceana, SkyTruth and Google) analyzed Automatic Identification System (AIS) signals from ships at sea to develop a tool to identify and track 90 percent of the world's large refrigerated cargo vessels, ships that collect catch from multiple fishing boats at sea and carry it to port.

In the Indian Ocean, off the remote Saya de Malha bank, the refrigerated cargo vessel (reefer) Leelawadee was seen with two unidentified likely fishing vessels tied alongside. Image Captured by DigitalGlobe on Nov. 30, 2016.

Imagery by DigitalGlobe © 2017

According to the analysis, from 2012 through 2016, refrigerated cargo vessels, known as "reefers," participated in more than 5,000 likely transshipments (instances in which they rendezvoused with an AIS-broadcasting fishing vessel and drifted long enough to receive a catch). In addition, the data revealed more than 86,000 potential transshipments in which reefers exhibited transshipment-like behavior, but there were no corresponding AIS signals from fishing vessels. Brian Sullivan, Google's lead for Global Fishing Watch, will present the findings at the Economist World Ocean Summit in Indonesia today. The report, along with the underlying data and our list of likely and suspected transshipments, will be freely available on our website, globalfishingwatch.org.

The global scale of transshipment and its ability to facilitate suspicious activity, such as illegal fishing and human rights abuses, is exposed in a complementary report being issued today by our partners at Oceana. The opportunity for mixing legal and illegal catch during the collection of fish from multiple fishing boats provides an easy route for illegal players to get their product to market. This obscures the seafood supply chain from hook to port and hobbles efforts at sustainability because it prevents an accurate measurement of the amount of marine life being taken from the sea.

Among the many findings, Global Fishing Watch data documents that transshipment in offshore coastal waters is more common in regions with a high proportion of Illegal, Unregulated and Unreported (IUU) fishing than in regions where management is strong such as in North America and Europe. The data also revealed clusters of transshipment along the Exclusive Economic Zones (EEZs) of some countries, and inside those zones of nations rated strongly for corruption and having limited monitoring capabilities. "These correlations do not provide any proof of specific illegal behavior," says Global Fishing Watch Research Program Director, David Kroodsma, and lead author on the report, "but they raise important questions and can lead to more informed international efforts by fisheries management organizations to prevent or better regulate transshipment."

According to Oceana's report, three of the top eight countries visited by reefers have not yet ratified an international treaty meant to eliminate illegal, unregulated and unreported fishing, and therefore may have weaker regulations that would make it easier for illegally caught fish to enter the global marketplace. The report calls for the banning of transshipment at sea and expanded mandates for unique identifiers and vessel tracking for fishing vessels.


The Hai Feng 648 is with an unidentified fishing vessel off the coast of Argentina. There is a large mostly Chinese squid fleet just beyond the EEZ boundary. The Hai Feng 648 was previously with the squid fleet at the edge of the Peruvian EEZ and in 2014 took illegally processed catch from the Lafayette into port in Peru. This image was acquired on Nov. 30, 2016.

Imagery by DigitalGlobe © 2017


The new analytical tools SkyTruth and Global Fishing Watch have developed using public domain AIS data can enable fisheries managers to identify and monitor transshipment anywhere in the world, permanently lifting the veil from the previously invisible practice of transshipment.

The results were obtained through an analysis of over 21 billion satellite signals from Automatic Identification System messages broadcast by ocean-going vessels between 2012 and 2016. Using an artificial intelligence system developed by Global Fishing Watch, Kroodsma's team identified refrigerated cargo vessels based on their movement patterns. Verifying their results with confirmed fishery registries and open source online resources, they identified 794 reefers. That represents 90 percent of the world's reefer vessels identified in 2010 according to the US Central Intelligence Agency World Factbook. 

Through further analysis, they mapped 5,065 instances in which a reefer and a fishing vessel were moving at a certain speed within a certain proximity to one another for a certain length of time. Our algorithm was verified by matching a subset of these "likely transshipments" to known transshipments recorded by fishing registries. The data also revealed 86,490 potential transshipments, instances in which reefers that appeared to be alone traveled in a pattern and at a speed consistent with transshipment. Their activity cannot be verified, but given that many fishing vessels turn off their AIS devices when they do not want to be detected, and some fishing vessels do not have AIS, these events must be considered potential transshipments.
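A greatly simplified version of that rendezvous test might look like the sketch below (our illustration with hypothetical thresholds; Global Fishing Watch's actual pipeline applies machine learning to billions of AIS messages):

    # Flag a "likely transshipment": a reefer and a fishing vessel stay within a
    # distance threshold while both move slowly, for at least a minimum duration.
    # All thresholds are hypothetical placeholders, not Global Fishing Watch's values.
    from math import radians, sin, cos, asin, sqrt

    MAX_KM, MAX_KNOTS, MIN_HOURS = 0.5, 2.0, 3.0  # hypothetical thresholds

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def likely_transshipment(reefer_track, fisher_track):
        """Tracks are time-aligned lists of (lat, lon, speed_knots) sampled hourly."""
        consecutive_hours = 0
        for (lat_r, lon_r, spd_r), (lat_f, lon_f, spd_f) in zip(reefer_track, fisher_track):
            near = haversine_km(lat_r, lon_r, lat_f, lon_f) <= MAX_KM
            slow = spd_r <= MAX_KNOTS and spd_f <= MAX_KNOTS
            consecutive_hours = consecutive_hours + 1 if (near and slow) else 0
            if consecutive_hours >= MIN_HOURS:
                return True
        return False

The "potential transshipments" in the report are cases where only the reefer's half of such a pattern is visible in the AIS data.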



Contacts and sources:
Kimbra Cutlip 
Global Fishing Watch

Using Dogs To Find and Save Big Cats


Investigators are using specially-trained detection dogs to determine the numbers and distribution of cheetah in a region of Western Zambia. The research represents the first demonstration of this strategy for wide-ranging species that are often threatened.

While traditional survey methods failed to detect any cheetah, using dogs specially trained to locate scat and other signs allowed the team to detect cheetah presence throughout the survey area. The researchers estimated a density of 5.9 to 6.6 cheetah per 1,000 km².

This is a detection dog searching for cheetah scat.

Credit:  Dave Hamman
 
"With the alarming global decline of cheetah, we need new methods to be able to monitor and evaluate the remaining populations, many of which are in very remote ecosystems where traditional survey methods are challenging at best," said Dr. Matthew Becker, lead author of the Journal of Zoology study. "With this study, detection dogs once again demonstrate they are a powerful conservation tool and an important ally for threatened African carnivores like cheetah."

"Rapid global large carnivore declines make evaluations of remaining populations critical. Yet landscape-scale evaluations of presence, abundance and distribution are difficult, as many species are wide-ranging, occur only at low densities and are elusive." say the authors of "Using dogs to find cats: detection dogs as a survey method for wide-ranging cheetah."


Contacts and sources:
Lauren Elkins
Wiley

Citation: Using dogs to find cats: detection dogs as a survey method for wide-ranging cheetah http://dx.doi.org/10.1111/jzo.12445

Ancient Rocks in Colorado Give Evidence of a 'Chaotic Solar System'


Plumbing a 90 million-year-old layer cake of sedimentary rock in Colorado, a team of scientists from the University of Wisconsin-Madison and Northwestern University has found evidence confirming a critical theory of how the planets in our solar system behave in their orbits around the sun.

The finding, published Feb. 23, 2017 in the journal Nature, is important because it provides the first hard proof for what scientists call the "chaotic solar system," a theory proposed in 1989 to account for small variations in the present conditions of the solar system. The variations, playing out over many millions of years, produce big changes in our planet's climate -- changes that can be reflected in the rocks that record Earth's history.

The discovery promises not only a better understanding of the mechanics of the solar system, but also a more precise measuring stick for geologic time. Moreover, it offers a better understanding of the link between orbital variations and climate change over geologic time scales.

The layer cake of sedimentary rock near Big Bend, Texas, shows the alternating layers of shale and limestone characteristic of the rock laid down at the bottom of a shallow ocean during the late Cretaceous period. The rock holds the 87 million-year-old signature of a 'resonance transition' in the orbits of Mars and Earth, definitive geologic evidence that the orbits of the planets in our solar system behave differently than prevailing theory, which held that the planets orbit like clockwork in a quasiperiodic manner.

Credit: Bradley Sageman, Northwestern University

Using evidence from alternating layers of limestone and shale laid down over millions of years in a shallow North American seaway at the time dinosaurs held sway on Earth, the team led by UW-Madison Professor of Geoscience Stephen Meyers and Northwestern University Professor of Earth and Planetary Sciences Brad Sageman discovered the 87 million-year-old signature of a "resonance transition" between Mars and Earth. A resonance transition is the consequence of the "butterfly effect" in chaos theory. It plays on the idea that small changes in the initial conditions of a nonlinear system can have large effects over time.

In the context of the solar system, the phenomenon occurs when two orbiting bodies periodically tug at one another, as occurs when a planet in its track around the sun passes in relative proximity to another planet in its own orbit. These small but regular ticks in a planet's orbit can exert big changes on the location and orientation of a planet on its axis relative to the sun and, accordingly, change the amount of solar radiation a planet receives over a given area. Where and how much solar radiation a planet gets is a key driver of climate.

"The impact of astronomical cycles on climate can be quite large," explains Meyers, noting as an example the pacing of the Earth's ice ages, which have been reliably matched to periodic changes in the shape of Earth's orbit, and the tilt of our planet on its axis. "Astronomical theory permits a very detailed evaluation of past climate events that may provide an analog for future climate."

To find the signature of a resonance transition, Meyers, Sageman and UW-Madison graduate student Chao Ma, whose dissertation work this comprises, looked to the geologic record in what is known as the Niobrara Formation in Colorado. The formation was laid down layer by layer over tens of millions of years as sediment was deposited on the bottom of a vast seaway known as the Cretaceous Western Interior Seaway. The shallow ocean stretched from what is now the Arctic Ocean to the Gulf of Mexico, separating the eastern and western portions of North America.

"The Niobrara Formation exhibits pronounced rhythmic rock layering due to changes in the relative abundance of clay and calcium carbonate," notes Meyers, an authority on astrochronology, which utilizes astronomical cycles to measure geologic time. "The source of the clay (laid down as shale) is from weathering of the land surface and the influx of clay to the seaway via rivers. The source of the calcium carbonate (limestone) is the shells of organisms, mostly microscopic, that lived in the water column."

 Meyers explains that while the link between climate change and sedimentation can be complex, the basic idea is simple: "Climate change influences the relative delivery of clay versus calcium carbonate, recording the astronomical signal in the process. For example, imagine a very warm and wet climate state that pumps clay into the seaway via rivers, producing a clay-rich rock or shale, alternating with a drier and cooler climate state which pumps less clay into the seaway and produces a calcium carbonate-rich rock or limestone."

The new study was supported by grants from the National Science Foundation. It builds on a meticulous stratigraphic record and important astrochronologic studies of the Niobrara Formation, the latter conducted in the dissertation work of Robert Locklair, a former student of Sageman's at Northwestern.

Dating of the Mars-Earth resonance transition found by Ma, Meyers and Sageman was confirmed by radioisotopic dating, a method for dating the absolute ages of rocks using known rates of radioactive decay of elements in the rocks. In recent years, major advances in the accuracy and precision of radioisotopic dating, devised by UW-Madison geoscience Professor Bradley Singer and others, have been introduced and contribute to the dating of the resonance transition.
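The principle behind those radioisotopic dates can be stated compactly (the standard age equation, given here for context; the actual Niobrara chronology involves far more careful analysis): for a parent isotope with decay constant lambda and a measured daughter-to-parent ratio D/P in a mineral that has remained a closed system,

    t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right),

which converts isotope ratios measured in the rock into an absolute age in years.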

The motions of the planets around the sun have been a subject of deep scientific interest since the advent of the heliocentric theory -- the idea that the Earth and planets revolve around the sun -- in the 16th century. From the 18th century, the dominant view of the solar system was that the planets orbited the sun like clockwork, with quasiperiodic and highly predictable orbits. In 1988, however, numerical calculations of the outer planets showed Pluto's orbit to be "chaotic", and the idea of a chaotic solar system was proposed in 1989 by astronomer Jacques Laskar, now at the Paris Observatory.

Following Laskar's proposal of a chaotic solar system, scientists have been looking in earnest for definitive evidence that would support the idea, says Meyers.

"Other studies have suggested the presence of chaos based on geologic data," says Meyers. "But this is the first unambiguous evidence, made possible by the availability of high-quality, radioisotopic dates and the strong astronomical signal preserved in the rocks."



Contacts and sources:
Stephen Meyers
University of Wisconsin-Madison 

Popular Heartburn Drugs Linked to Gradual Yet 'Silent' Kidney Damage

Taking popular heartburn drugs for prolonged periods has been linked to serious kidney problems, including kidney failure. The sudden onset of kidney problems often serves as a red flag for doctors to discontinue their patients' use of so-called proton pump inhibitors (PPIs), which are sold under the brand names Prevacid, Prilosec, Nexium and Protonix, among others.

But a new study evaluating the use of PPIs in 125,000 patients indicates that more than half of patients who develop chronic kidney damage while taking the drugs don't experience acute kidney problems beforehand, meaning patients may not be aware of a decline in kidney function, according to researchers at Washington University School of Medicine in St. Louis and the Veterans Affairs St. Louis Health Care System. Therefore, people who take PPIs, and their doctors, should be more vigilant in monitoring use of these medications.

Taking popular heartburn medication for prolonged periods may lead to serious kidney damage, even in people who show no signs of kidney problems, according to researchers at Washington University School of Medicine in St. Louis and the Veterans Affairs St. Louis Health Care System
Credit:  Michael Worful/Washington University School of Medicine in St. Louis

The study is published Feb. 22 in Kidney International.

The onset of acute kidney problems is not a reliable warning sign for clinicians to detect a decline in kidney function among patients taking proton pump inhibitors, said Ziyad Al-Aly, MD, the study's senior author and an assistant professor of medicine at Washington University School of Medicine. "Our results indicate kidney problems can develop silently and gradually over time, eroding kidney function and leading to long-term kidney damage or even renal failure. Patients should be cautioned to tell their doctors if they're taking PPIs and only use the drugs when necessary."

More than 15 million Americans suffering from heartburn, ulcers and acid reflux have prescriptions for PPIs, which bring relief by reducing gastric acid. Many millions more purchase the drugs over the counter and take them without being under a doctor's care.

The researchers -- including first author Yan Xie, a biostatistician at the St. Louis VA -- analyzed data from the Department of Veterans Affairs databases on 125,596 new users of PPIs and 18,436 new users of other heartburn drugs referred to as H2 blockers. The latter are much less likely to cause kidney problems but often aren't as effective.

Over five years of follow-up, the researchers found that more than 80 percent of PPI users did not develop acute kidney problems, which often are reversible and are characterized by too little urine leaving the body, fatigue and swelling in the legs and ankles.

However, more than half of the cases of chronic kidney damage and end-stage renal disease associated with PPI use occurred in people without acute kidney problems.

In contrast, among new users of H2 blockers, 7.67 percent developed chronic kidney disease in the absence of acute kidney problems, and 1.27 percent developed end-stage renal disease.
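
To make the comparison concrete, the core calculation is simply the share of each cohort (or of each group of chronic-damage cases) that had no acute kidney episode beforehand. The sketch below is a toy illustration of that arithmetic; the cohort sizes come from the article, but the event counts are hypothetical placeholders, not the study's data.

```python
# Toy illustration of the cohort comparison described above.
# Cohort sizes are from the article; the event counts below are
# hypothetical placeholders, NOT the study's actual data.

def pct(events, cohort):
    """Percentage of a cohort experiencing an outcome."""
    return 100.0 * events / cohort

ppi_cohort = 125_596          # new PPI users (from the article)
h2_cohort = 18_436            # new H2-blocker users (from the article)

# Hypothetical event counts, for illustration only:
ppi_ckd_without_aki = 10_000  # chronic kidney damage with no prior acute injury
ppi_ckd_total = 18_000        # all chronic kidney damage cases among PPI users
h2_ckd_without_aki = 1_414    # about 7.67% of the H2 cohort, matching the article
h2_esrd_without_aki = 234     # about 1.27% of the H2 cohort, matching the article

# Share of PPI-associated chronic damage that arose "silently",
# i.e. without an acute warning episode (more than 50% in the study):
silent_share = pct(ppi_ckd_without_aki, ppi_ckd_total)

print(f"Silent share of PPI chronic damage: {silent_share:.1f}%")
print(f"H2 blockers, CKD without acute injury:  {pct(h2_ckd_without_aki, h2_cohort):.2f}%")
print(f"H2 blockers, ESRD without acute injury: {pct(h2_esrd_without_aki, h2_cohort):.2f}%")
```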

End-stage renal disease occurs when the kidneys can no longer effectively remove waste from the body. In such cases, dialysis or a kidney transplant is needed to keep patients alive.

"Doctors must pay careful attention to kidney function in their patients who use PPIs, even when there are no signs of problems," cautioned Al-Aly, who also is the VA's associate chief of staff for research and education and co-director of the VA's Clinical Epidemiology Center. "In general, we always advise clinicians to evaluate whether PPI use is medically necessary in the first place because the drugs carry significant risks, including a deterioration of kidney function."



Contacts and sources:
Diane Duke Williams
Washington University School of Medicine in St. Louis

Achieving a Strong, Lasting ‘Blue Economy’ Is Possible, Says Marine Ecologist

Incentive-based solutions offer significant hope for addressing the myriad environmental challenges facing the world’s oceans – that’s the central message a leading marine ecologist delivered today during a presentation at the annual meeting of the American Association for the Advancement of Science.

Jane Lubchenco, a distinguished professor in the Oregon State University College of Science, shared lessons from around the world about ways “to use the ocean without using it up” as nations look to the ocean for new economic opportunities, food security or poverty alleviation.

Elizabeth Cerny-Chipman, a former postdoctoral scholar under Lubchenco who’s now a Knauss Fellow at the National Oceanic and Atmospheric Administration, co-authored the presentation, titled “Getting Incentives Right for Sustained Blue Growth: Science and Opportunities.”

Credit: OSU


In her presentation, Lubchenco pointed out that achieving the long-term potential of blue growth will require aligning short- and long-term economic incentives to achieve a diverse mix of benefits. Blue growth refers to long-term strategies for supporting sustainable growth in the marine and maritime sectors as a whole.

“If we harness human ingenuity and recognize that a healthy ocean is essential for long-term prosperity, we can tackle the enormous threats facing the ocean,” Lubchenco says, “and we can make a transition from vicious cycles to virtuous cycles.”

Lubchenco and her collaborators note that the world’s oceans are the main source of protein production for 3 billion people; are directly or indirectly responsible for the employment of more than 200 million people; and contribute $270 billion to the planet’s gross domestic product.

“The right incentives can drive behavior that aligns with both desired environmental outcomes and desirable social outcomes,” Lubchenco says.

The first step in building increased support for truly sustainable blue growth, she says, is highlighting its potential. That means working with decision-makers to promote win-win solutions with clear short-term environmental and economic benefits. Governments, industry and communities all have important roles to play, Lubchenco notes.

“Another key step is transforming the social norms that drive the behavior of the different actors, particularly in industry,” Lubchenco says. “Finally, it will be critical to take a cross-sector approach.

“Some nations, like the Seychelles, Belize and South Africa, are doing integrated, smart planning to deconflict use by different sectors while also growing their economies in ways that value the health of the ocean, which is essential to jobs and food security. They are figuring out how to be smarter about ocean uses, not just to use the ocean more intensively.”

Prior to her presentation, Lubchenco gave a related press briefing on how to create the right incentives for sustainable uses of the ocean.

In November 2016, Lubchenco, Cerny-Chipman, OSU graduate student Jessica Reimer and Simon Levin, the distinguished university professor in ecology and evolutionary biology at Princeton University, co-authored a paper on a related topic for the Proceedings of the National Academy of Sciences.





Contacts and sources:
By Steve Lundeberg,
Jane Lubchenco,
Oregon State University

Wednesday, February 22, 2017

Impacts of Mass Coral Die-Off on Indian Ocean Reefs Revealed

Warming seawaters, caused by climate change and extreme climatic events, threaten the stability of tropical coral reefs, with potentially devastating implications for many reef species and the human communities that reefs support.

New research by the University of Exeter shows that increased surface ocean temperatures during the strong 2016 El Niño led to a major coral die-off event in the Maldives, and that this has caused reef growth rates to collapse. The researchers also found that the rates at which some reef species, in particular parrotfish, erode the reefs had increased following this coral die-off event.

Similar magnitudes of coral death have been reported on many other reefs in the region, including on the northern Great Barrier Reef, suggesting similar impacts may be very widespread.

This picture was taken in September 2016 along the shallow (2-3 m depth) fore-reef slope habitat around Kandahalagala, showing the extent of bleaching-driven coral mortality, which has preferentially affected Acropora sp.
Credit:   University of Exeter

Professor Chris Perry and Dr Kyle Morgan, of the University of Exeter's Geography department, studied the impact of the 2016 El Niño event at sites in the southern Maldives and found that the event had not only caused widespread coral bleaching, a phenomenon whereby corals expel their photosynthesising algae when stressed by high temperatures, but that this had also led to extensive coral death in all shallow water reef habitats examined.

"A very major concern now is how quickly these reefs might recover. Recovery from similar past disturbances in the Maldives have taken 10-15 years, but major bleaching events are predicted to become far more frequent than this. If this is the case it could lead to long-term loss of reef growth and so limit the coastal protection and habitat services these reefs presently provide," Professor Perry said.

"The most alarming aspect of this coral die-off event is that it has led to a rapid and very large decline in the growth rate of the reefs. This in turn has major implications not only for the capacity of these reefs to match any increases in sea-level, but is also likely to lead to a loss of the surface structure of the reefs that is so critical for supporting fish species diversity and abundance."

Coral reefs are formed by the accumulation of coral skeletons (made of calcium carbonate) that builds up over hundreds to thousands of years, forming the complex structures that support a huge diversity of marine life. The so-called 'carbonate budget' of a reef, which represents the balance between the rate at which this carbonate is produced by corals and the rate at which it is removed (by biological or physical erosion or chemical dissolution), influences the development of these structures and how fast a reef can grow.


The effect of these combined factors was a major decline in the carbonate budgets of these reefs, with an average reduction of 157%. Before the warming event, the reefs had been in a period of rapid growth, but after the period of higher sea temperatures a negative carbonate budget was recorded at all sites. Put simply, the structure of these reefs is now eroding faster than it is growing. Based on past studies, the researchers suggest that, given the severity of the bleaching impacts, it may take 10 to 15 years for full recovery to occur.
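
To see why a decline of more than 100 percent matters, note that a carbonate budget is simply gross carbonate production minus gross removal, so any drop larger than 100 percent flips a reef from net growth to net loss. The sketch below illustrates this with made-up rates; only the roughly 157% decline is taken from the study.

```python
# Minimal sketch of a reef carbonate budget (kg CaCO3 per m^2 per year).
# The rates below are illustrative placeholders, not the values measured
# in the Maldives study; only the ~157% decline comes from the article.

def net_budget(production, erosion):
    """Net carbonate budget: framework gained (+) or lost (-) per year."""
    return production - erosion

# Before the 2016 bleaching: rapid growth (hypothetical rates)
pre_budget = net_budget(production=6.0, erosion=2.0)    # +4.0 kg/m^2/yr

# A 157% reduction drives the budget below zero:
post_budget = pre_budget * (1 - 1.57)                   # about -2.3 kg/m^2/yr

print(f"Pre-bleaching budget:  {pre_budget:+.1f} kg CaCO3/m^2/yr")
print(f"Post-bleaching budget: {post_budget:+.1f} kg CaCO3/m^2/yr (net erosion)")
```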

The extent of the 2016 bleaching, which also affected reefs in other parts of the Indian Ocean and Pacific, was so severe that it was subsequently named the 'Third Global Coral Bleaching Event'.

Dr Kyle Morgan said: "Coral reefs provide a wealth of benefits. They are vital habitats, essential for a vast number of species, and they are also important for tourism and food provision. The reduction in carbonate budget threatens these benefits and may well also lead to the structural collapse of reefs. The key issue to consider now is whether, and when, these reefs will recover, both ecologically and in terms of their growth. Based on past trajectories, we predict recovery will take at least a decade; however, it all depends on the extent of future warming events and climate change."

University of Exeter scientists warned there could be further rises in sea temperatures owing to global warming with potentially devastating effects on coral reefs.

Professor Mat Collins, an expert in climate modelling at the University of Exeter, said:

"We expect El Niño variability to continue into the future which, when combined with rising temperatures due to global warming, means we will see unprecedented sea temperatures and increasing incidence of coral bleaching."

The study, "Bleaching drives collapse in reef carbonate budgets and reef growth potential on southern Maldives reefs," is published in Scientific Reports.


Contacts and sources:
Marie Woolf
University of Exeter

Single-Payer Reform Is 'The Only Way to Fulfill the President's Pledge' on Health Care

In Annals of Internal Medicine, researchers say a single-payer health reform would save an estimated $504 billion annually in administrative costs, allowing for universal coverage, full benefits and lower costs

Proposals floated by Republican leaders won't achieve President Trump's campaign promises of more coverage, better benefits, and lower costs, but a single-payer reform would, according to a commentary published today [Tuesday, Feb. 21] in Annals of Internal Medicine, one of the nation's most prestigious and widely cited medical journals.

Republicans promised to repeal the Affordable Care Act on the first day of the Trump presidency. But the health reform effort has stalled because Republicans in Congress have been unable to come up with a better replacement and fear a backlash against plans that would deprive millions of coverage and raise deductibles.

Credit: Wikimedia Commons

In today's Annals commentary, longtime health policy experts Drs. Steffie Woolhandler and David Himmelstein warn that the proposals by Speaker Paul Ryan, R-Wis., and Secretary of HHS Tom Price would slash Medicaid spending for the poor, shift the ACA's subsidies from the near-poor to wealthier Americans, and replace Medicare with a voucher program, even as they would cut Medicare's funding and raise the program's eligibility age.

Woolhandler and Himmelstein review evidence that, in contrast, single-payer reform could provide comprehensive first-dollar coverage to all Americans within the current budgetary envelope because of vast savings on health care bureaucracy and profits. The authors estimate that a streamlined, publicly financed single-payer program would save $504 billion annually on health care paperwork and profits, including $220 billion on insurance overhead, $150 billion on hospital billing and administration and $75 billion on doctors' billing and paperwork. They estimate that an additional $113 billion could be saved each year by hard bargaining with drug companies over prices. The data supporting their estimates are summarized in a table.

The savings would cover the cost of expanding insurance to the 26 million who remain uninsured despite the ACA, as well as "plugging the gaps in existing coverage -- abolishing copayments and deductibles, covering such services as dental and long-term care that many policies exclude."

The lead author of the commentary, Dr. Steffie Woolhandler, is an internist, distinguished professor of public health and health policy at CUNY's Hunter College, and lecturer in medicine at Harvard Medical School. She said: "We're wasting hundreds of billions of health care dollars on insurance paperwork and profits. Private insurers take more than 12 cents of every premium dollar for their overhead and profit, as compared to just over 2 cents in Medicare. Meanwhile, 26 million are still uninsured and millions more with coverage can't afford care. It's time we make our health care system cater to patients instead of bending over backward to help insurance companies."

Dr. David Himmelstein, the senior author, is a primary care doctor and, like Woolhandler, a distinguished professor at CUNY's Hunter College and lecturer at Harvard Medical School. He noted: "We urgently need reform that moves forward from the ACA, but the Price and Ryan plans would replace Obamacare with something much worse. Polls show that most Americans -- including most people who want the ACA repealed, and even a strong minority of Republicans -- want single-payer reform. And doctors are crying out for such reform. The Annals of Internal Medicine is one of the most respected and traditional medical journals. Their willingness to publish a call for single payer signals that it's a mainstream idea in our profession."





Contacts and sources:
Mark Almberg
Physicians For A National Health Program

The Annals of Internal Medicine is the flagship journal of the American College of Physicians (ACP), the nation's largest medical specialty organization with 148,000 internal medicine physicians, related subspecialists, and medical students. In 2007, the Annals published a lengthy policy article in which the ACP said a single-payer system was one pathway to achieving universal coverage. In early 2008, it published a study showing 59 percent of U.S. physicians support "government legislation to establish national health insurance," a leap of 10 percentage points from five years before.

The commentary is believed to be the first full-length, direct call for single payer, or national health insurance, that the journal has published in its 90-year history.

"Single-Payer Reform: The Only Way to Fulfill the President's Pledge of More Coverage, Better Benefits, and Lower Costs," by Steffie Woolhandler, M.D., M.P.H., and David U. Himmelstein, M.D. Annals of Internal Medicine. Published online first, Feb. 21, 2017. doi:10.7326/M17-0302.

Disclosures: Drs. Woolhandler and Himmelstein co-founded Physicians for a National Health Program, a nonprofit educational and research organization that supports a single-payer national health plan; they also served as advisers to Sen. Bernie Sanders' presidential campaign. Neither the Sanders campaign nor PNHP played any role in funding or otherwise supporting the commentary.

High-Sensitivity Cameras Show the Atomic Structure of Metal-Organic Frameworks

Highly sensitive electron cameras allow researchers to see the atomic structure of metal-organic frameworks.

Researchers at KAUST have developed a method for fine-scale imaging of metal-organic frameworks (MOFs), three-dimensional structures made up of metal ions connected by organic ligands. MOFs are useful for gas storage and separation because they can be designed to have precise pore sizes of molecular dimensions and large void spaces (porosity) within their frameworks.

Symmetry-imposed and lattice-averaged HRTEM image of the metal-organic framework ZIF-8 (black and white) with a structural model overlaid to show the position of the zinc ions and organic ligands (in color).

Credit: (c) 2017 KAUST Ivan D. Gromicho

Typically, high-resolution transmission electron microscopy (HRTEM) is used to visualize structures with atomic resolution; however, this method is unsuitable for observing MOFs because the electron beams destroy their structures.

"To thoroughly understand the performance of metal-organic frameworks in various applications, we need to know their structures at the atomic level because their macroscopic behavior is determined by their microscopic structure," explained KAUST Associate Professor of Chemical Science Yu Han. By visualizing these structures, researchers can uncover important clues about how these materials self-assemble to create their trademark pores.

Several members of the University's Advanced Membranes and Porous Materials Center, including Han's research scientist and first author of the paper, Yihan Zhu, Associate Professor of Chemical and Biological Engineering Zhiping Lai and Professor of Chemical and Biological Engineering and Director of the Center Ingo Pinnau, joined forces with the University's Imaging and Characterization Core Lab and with colleagues from Gatan, Lawrence Berkeley National Laboratory and others in China. Their collaboration resulted in an adaptation of HRTEM using state-of-the-art direct-detection electron-counting cameras.

The high sensitivity of these detectors enabled them to acquire images with an electron dose low enough that it does not damage the structure of MOFs, allowing the group to produce high-resolution images of their atomic structures.
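
The underlying constraint can be thought of as a dose budget: a beam-sensitive framework tolerates only a limited cumulative number of electrons per unit area before it degrades, and a detector that counts individual electrons lets a usable image be formed within that budget. The numbers in the back-of-the-envelope sketch below are illustrative assumptions, not values reported by the KAUST team.

```python
# Back-of-the-envelope dose budgeting for low-dose TEM of a beam-sensitive
# material. All numbers are illustrative assumptions, not values from the study.

damage_threshold = 10.0   # electrons per square angstrom the specimen tolerates (assumed)
dose_rate = 2.0           # electrons per square angstrom per second from the beam (assumed)
exposure = 4.0            # exposure time in seconds (assumed)

cumulative_dose = dose_rate * exposure   # total dose actually delivered
status = "within" if cumulative_dose <= damage_threshold else "exceeds"
print(f"Cumulative dose: {cumulative_dose:.1f} e-/A^2 ({status} the damage budget)")

# A counting detector wastes fewer of these electrons as readout noise,
# which is what makes a usable image possible at such a small dose.
```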

The team applied their method to ZIF-8, a MOF comprising zinc ions connected by organic 2-methylimidazole linkers. They were able to image its structure with a resolution of 0.21 nanometers (one nanometer is one billionth of a meter), high enough to resolve the individual columns of zinc atoms and organic linkers.

This helped the researchers to reveal the surface and interfacial structures of ZIF-8 crystals. "The results unraveled that porosity generated at the interfaces of ZIF-8 crystals is different from the intrinsic porosity of ZIF-8, which influences how gas molecules transport in ZIF-8 crystals," explained Han.



Contacts and sources:
KAUST - King Abdullah University of Science and Technology

Citation: Zhu, Y., Ciston, J., Zheng, B., ... & Han, Y. Unravelling surface and interfacial structures of a metal-organic framework by transmission electron microscopy. Nature Materials advance online publication, 20 February 2017. http://dx.doi.org/10.1038/nmat4852

What’s Next for Plant Breeders? Drones Are

Crop breeders grow thousands of potential varieties at a time; until now, observations of key traits were made by hand. In a new study, unmanned aerial vehicles, or drones, were used successfully to remotely evaluate and predict soybean maturity timing in tests of potential varieties. The use of drones for this purpose could substantially reduce the man-hours needed to evaluate new crops.

When plant breeders develop new crop varieties, they grow a lot of plants, and they all need to be checked. Repeatedly.

“Farmers might have a 100-acre field planted with one soybean variety, whereas breeders may have 10,000 potential varieties planted on one 10-acre field. The farmer can fairly quickly determine whether the single variety in a field is ready to be harvested. However, breeders have to walk through research fields several times in the fall to determine the date when each potential variety matures,” explains University of Illinois soybean breeder Brian Diers.

Drones are increasingly being used in agriculture. A new study demonstrates their benefits for soybean breeders.
Credit: University of Illinois

“We have to check every three days,” master's student Nathan Schmitz adds. “It takes a good amount of time during a busy part of the year. Sometimes it’s really hot, sometimes really muddy.”

To make things easier, an interdisciplinary team including breeders, computer scientists, engineers, and geographic information specialists turned to unmanned aerial vehicles – commonly known as UAVs or drones.

“When drones became available, we asked ourselves how we could apply this new technology to breeding. For this first attempt, we tried to do a couple simple things,” Diers says.

One goal was to predict the timing of pod maturity using images from a camera attached to the drone, along with sophisticated data and image analysis techniques. “We used multi-spectral images,” Schmitz explains. “We set up an equation in the program to pick up changes in the light frequency reflected off the plant. That color change is how we differentiate a mature plant from an immature one.”

The researchers developed an algorithm to compare images from the drone with pod maturity data measured the old-fashioned way, by walking the fields. “Our maturity predictions with the drone were very close to what we recorded while walking through the fields,” Diers notes.
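
In outline, this kind of workflow tracks a spectral index for each plot across repeated flights and flags the first date on which the index falls below a maturity threshold, which can then be checked against the field notes. The sketch below is a minimal illustration of that idea; the index, threshold, band values and dates are assumptions, not the model published in the study.

```python
# Minimal sketch: estimate the maturity date of one breeding plot from a
# time series of multispectral reflectance. The index, threshold, and data
# are illustrative assumptions, not the algorithm published in the study.

from datetime import date

def greenness(nir, red):
    """NDVI-style index: high for a green canopy, low once pods dry down."""
    return (nir - red) / (nir + red)

# Hypothetical reflectance observations for one plot: (flight date, NIR, red)
flights = [
    (date(2016, 9, 1), 0.45, 0.05),
    (date(2016, 9, 7), 0.42, 0.08),
    (date(2016, 9, 13), 0.35, 0.15),
    (date(2016, 9, 19), 0.28, 0.24),   # canopy turning: index drops sharply
    (date(2016, 9, 25), 0.25, 0.27),
]

MATURITY_THRESHOLD = 0.20  # assumed index value marking pod maturity

def predict_maturity(observations, threshold=MATURITY_THRESHOLD):
    """Return the first flight date whose index falls below the threshold."""
    for day, nir, red in observations:
        if greenness(nir, red) < threshold:
            return day
    return None  # plot not yet mature by the last flight

predicted = predict_maturity(flights)
print(f"Predicted maturity date: {predicted}")  # compared against field notes
```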

Predictions made by the model achieved 93 percent accuracy, but Diers says they might have done even better without some of the inherent limitations of flying drones. For example, they could only fly it and obtain good images on sunny days with little wind.

Drones are increasingly recognized for their potential to improve efficiency and precision in agriculture—especially after new FAA rules went into effect in August 2016—but this is one of the first studies to use drones to optimize breeding practices. Diers notes that the application could be particularly useful to large breeding companies, which test hundreds of thousands of potential varieties annually. If breeders can save time and effort using this technology, new varieties could potentially be developed and made available to farmers on a faster timeline—a welcome improvement.

The article, “Development of methods to improve soybean yield estimation and predict plant maturity with an unmanned aerial vehicle based platform,” is published in Remote Sensing of Environment. In addition to Diers and Schmitz, Neil Yu, Liujun Li, Lei Tian, and Jonathan Greenberg, all from the University of Illinois, are co-authors.



Contacts and sources:
University of Illinois College of Agricultural, Consumer and Environmental Sciences

New Evidence That E-Cigarettes May Harm Your Heart

It’s been more than 50 years since the U.S. surgeon general warned the public about the lethal dangers of cigarette smoking.

Last year, Surgeon General Dr. Vivek Murthy issued the first-ever report on electronic cigarettes, warning that their use posed a significant and avoidable risk to young people in the United States. E-cigarettes, or e-cigs, arrived on the U.S. market about 10 years ago. Since then, their popularity has exploded, especially among teenagers.

E-cigs are not actually cigarettes. There is no combustion or tobacco. Instead, these electronic, handheld devices deliver nicotine with flavoring and other chemicals in a vapor instead of smoke. Although traditional cigarettes are widely known as the most common preventable cause of heart disease, not much is known about the cardiovascular risks of e-cigarettes.

Credit: www.ecigclick.co.uk, CC BY-SA 2.0/Wikimedia Commons

To shed light on that issue, UCLA researchers decided to see if two health indicators that promote heart disease in tobacco users were also prevalent in people who use e-cigarettes.

The 42-person study, whose findings were published online Feb. 1 in the journal JAMA Cardiology, found that 23 study participants who were habitual users of e-cigarettes were more likely to have signs of two heart risk factors than 19 other participants who did not use e-cigarettes. The risk factors were oxidative stress, which hampers the body’s ability to defend itself against free radicals — a type of particle that has been associated with heart disease — and higher levels of adrenaline in the heart, which can lead to an increased heart rate and high blood pressure.

“The results were a bit surprising because it is widely believed that e-cigarettes are less harmful than tobacco cigarettes,” the study’s co-author, Dr. Holly Middlekauff, a professor of medicine in the division of cardiology at UCLA, told HealthDay. “Instead, we found the same types of abnormalities in our e-cigarette users that are reported in tobacco cigarette users, and these abnormalities are associated with increased cardiac risk.”

Cardiac risk factors the same as those of smokers

The study's authors noted that the findings only show an association, not a cause-and-effect link between e-cigarettes and the heart risks.

“We do not know if a tobacco cigarette smoker is better off switching to e-cigarettes. Most studies show that carcinogens are present at much lower levels in e-cigarettes compared to tobacco cigarettes,” said Middlekauff. “So it is conceivable that the risk for heart disease is similar for e-cigarettes and tobacco cigarettes, but that the risk for cancer is much greater with tobacco cigarettes."

To further their research, the team is now comparing the heart effects of tobacco cigarettes with those of e-cigarettes.

"The key finding from our study is that e-cigarettes have real, adverse physiologic effects that have been associated with heart disease,” added Middlekauff. “My advice is, if you don't already smoke tobacco cigarettes, don't start using e-cigarettes — they are not harmless.”

Other authors on the study include Roya Moheimani, May Bhetraratana, Fen Yin, Kacey Peters, Jeffrey Gornbein and Jesus Araujo. All are from UCLA.



Contacts and sources:
Amy Albin
UCLA