Sunday, September 30, 2018

PCB Pollution Killing Killer Whales: PCBs Threaten the Top of the Food Chain

A new study, just published in the journal Science, shows that more than 40 years after the first initiatives were taken to ban the use of PCBs, the chemical pollutants remain a deadly threat to animals at the top of the food chain.

When hazardous substances enter the marine environment, they are taken up by the first link in the food chain, phytoplankton. The phytoplankton is consumed by zooplankton, which in turn is consumed by smaller fish, and so on. The chemicals accumulate at each link of the food chain, which means that killer whales feeding on large animals in contaminated areas may carry concentrations of PCBs so high that the survival of the species is threatened. Killer whales that primarily feed on smaller fish are not threatened in the same way.

Credit Graphic: Aarhus University

More than forty years after the first initiatives were taken to ban the use of PCBs, the chemical pollutants remain a deadly threat to animals at the top of the food chain. A new study, just published in the journal Science, shows that the current concentrations of PCBs can lead to the disappearance of half of the world's populations of killer whales from the most heavily contaminated areas within a period of just 30-50 years.

Killer whales (Orcinus orca) form the last link in a long food chain and are among the mammals with the highest level of PCBs (polychlorinated biphenyls) in their tissue. Researchers have measured values as high as 1300 milligrams per kilo in the fatty tissue (blubber) of killer whales. For comparison, a large number of studies show that animals with PCB levels as low as 50 milligrams per kilo of tissue may show signs of infertility and severe impacts on the immune system.
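
The arithmetic of biomagnification explains how such extreme concentrations arise. A minimal sketch, assuming a constant, purely hypothetical magnification factor at each trophic step:

```python
# Toy model of PCB biomagnification along a food chain. The
# magnification factor per trophic step is a hypothetical
# illustration, not a value measured in the study.

def biomagnify(base_conc_mg_per_kg, factor_per_step, steps):
    """Concentration after `steps` trophic transfers."""
    return base_conc_mg_per_kg * factor_per_step ** steps

# phytoplankton -> zooplankton -> small fish -> large fish -> killer whale
levels = ["phytoplankton", "zooplankton", "small fish", "large fish", "killer whale"]
for step, label in enumerate(levels):
    print(f"{label:>13}: {biomagnify(0.01, 10, step):8.2f} mg/kg")
```

With these made-up numbers, four trophic transfers turn 0.01 mg/kg in phytoplankton into 100 mg/kg in a top predator, already twice the 50 mg/kg level at which harmful effects have been reported.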

In some areas, killer whales feed primarily on sea mammals and big fish like tuna and sharks and are then threatened by PCBs. In areas where the killer whales primarily feed on small fish like herring, they are less threatened.

Credit: Audun Rikardsen


Together with colleagues from a wide range of international universities and research institutions, researchers from Aarhus University have documented that the number of killer whales is rapidly declining in 10 out of the 19 killer whale populations investigated and that the species may disappear entirely from several areas within a few decades.

Killer whales are particularly threatened in heavily contaminated areas like the waters near Brazil, the Strait of Gibraltar and around the UK. Around the British Isles, the researchers estimate that the remaining population numbers fewer than 10 killer whales. Along the east coast of Greenland, killer whales are also affected due to their high consumption of sea mammals like seals.

PCBs accumulate in the food chain

The killer whale is one of the most widespread mammals on Earth, found in all of the world's oceans from pole to pole. But today, only the populations living in the least polluted areas retain large numbers of individuals.

Overfishing and man-made noise may also affect the health of the animals, but PCBs in particular can have a dramatic effect on the reproduction and immune system of the killer whales.

By collecting data from around the world and loading them into population models, the researchers can see that 10 out of 19 populations of killer whales are affected by high levels of PCBs in their bodies. PCBs particularly affect the reproduction and immune system of the whales. The situation is worst in the oceans around Brazil and the UK, where the model predicts that populations were cut in half over the first decades after the use of PCBs became widespread. Here, the models predict a high risk that the species will disappear within a 30-40-year period. The line indicates median values, while the shaded field shows the variation.
Credit: Aarhus University

Killer whales whose diet includes seals and large fish such as tuna and sharks accumulate critical levels of PCBs and other pollutants that are stored at successive levels of the food chain. It is these populations of killer whales that have the highest PCB concentrations, and it is these populations that are at the highest risk of population collapse. Killer whales that primarily feed on small fish such as herring and mackerel have a significantly lower PCB content and are thus at lower risk of effects.

PCBs have been used around the world since the 1930s. More than one million tonnes of PCBs were produced and used in, among other things, electrical components and plastics. Together with DDT and other organic pesticides, PCBs have spread throughout the global oceans.

Through the 1970s and 1980s, PCBs were banned in several countries, and in 2004, through the Stockholm Convention, more than 90 countries committed themselves to phasing out and disposing of the large stocks of PCBs.

PCBs decompose only slowly in the environment. Moreover, PCBs are passed down from the mother orca to her offspring through the mother's fat-rich milk. This means that the hazardous substances remain in the bodies of the animals instead of being released into the environment, where they would eventually settle or degrade.

Global investigation of killer whales

"We know that PCBs deform the reproductive organs of animals such as polar bears. It was therefore only natural to examine the impact of PCBs on the scarce populations of killer whales around the world," says Professor Rune Dietz from the Department of Bioscience and Arctic Research Centre, Aarhus University, who initiated the killer whale studies and is co-author of the article.

The research group, which includes participants from the United States, Canada, England, Greenland, Iceland and Denmark, reviewed all the existing literature and compared all data with their own most recent results. This provided information about PCB levels in more than 350 individual killer whales around the globe - the largest number of killer whales ever studied.

Applying models, the researchers then predicted the effects of PCBs on the number of offspring as well as on the immune system and mortality of the killer whale over a period of 100 years.
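
The study's population models are far more detailed, but the basic logic of projecting a PCB effect forward in time can be sketched as follows. All rates here are illustrative placeholders, not the study's parameters:

```python
# Minimal sketch of a population projection in which PCB exposure
# suppresses reproduction. All rates are hypothetical placeholders,
# not parameters from Desforges et al.

def project(pop0, years, birth_rate, death_rate, pcb_penalty):
    """Project population with a PCB-induced reduction in births."""
    pop = pop0
    for _ in range(years):
        births = pop * birth_rate * (1 - pcb_penalty)
        deaths = pop * death_rate
        pop = max(pop + births - deaths, 0.0)
    return pop

unexposed = project(100, 100, birth_rate=0.04, death_rate=0.035, pcb_penalty=0.0)
exposed = project(100, 100, birth_rate=0.04, death_rate=0.035, pcb_penalty=0.5)
print(f"after 100 years: unexposed ~{unexposed:.0f}, exposed ~{exposed:.0f}")
```

With these placeholder rates, the exposed population halves roughly every 46 years, the same order of decline the study reports for the most contaminated regions.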

More than 50% of the populations under threat

"The findings are surprising. We see that over half of the studied killer whales populations around the globe are severely affected by PCBs" says postdoc Jean-Pierre Desforges from Aarhus University, who led the investigations.

The effects result in fewer and fewer animals over time in these populations. The situation is worst in the oceans around Brazil, the Strait of Gibraltar, the northeast Pacific and around the UK. Here, the models show that the populations have virtually been halved during the half-century in which PCBs have been present.

"In these areas, we rarely observe newborn killer whales," says Ailsa Hall, who together with Bernie McConnell developed the models used by Sea Mammal Research Unit in Scotland.

"As the effects have been recognized for more than 50 years, it is frightening to see that the models predict a high risk of population collapse in these areas within a period of 30-40 years," says Jean-Pierre Desforges.

A female killer whale may live for 60-70 years, and although the world took its first steps to phase out PCBs more than 40 years ago, killer whales still have high levels of PCBs in their bodies.

"This suggests that the efforts have not been effective enough to avoid the accumulation of PCBs in high trophic level species that live as long as the killer whale does. There is therefore an urgent need for further initiatives than those under the Stockholm Convention," concludes Paul D. Jepson, Institute of Zoology, Zoological Society of London, England, who is another killer whale expert and co-author of the article.

In the oceans around the Faroe Islands, Iceland, Norway, Alaska and the Antarctic, the prospects are not so gloomy. Here, killer whale populations grow and the models predict that they will continue to do so throughout the next century.


Contacts and sources:
Postdoc Jean-Pierre Desforges
Aarhus University


Citation: Predicting global killer whale population collapse from PCB pollution
Jean-Pierre Desforges, Ailsa Hall, Bernie McConnell, Aqqalu Rosing-Asvid, Jonathan L. Barber, Andrew Brownlow, Sylvain De Guise, Igor Eulaers, Paul D. Jepson, Robert J. Letcher, Milton Levin, Peter S. Ross, Filipa Samarra, Gísli Víkingson, Christian Sonne, Rune Dietz. Science, 2018. DOI: 10.1126/science.aat1953

Plate Tectonics Likely Active on Earth Since the Very Beginning



A new study suggests that plate tectonics—a scientific theory that divides the earth into large chunks of crust that move slowly over hot viscous mantle rock—could have been active from the planet’s very beginning. The new findings defy previous beliefs that tectonic plates were developed over the course of billions of years.

The paper, published in Earth and Planetary Science Letters, has important implications in the fields of geochemistry and geophysics. For example, a better understanding of plate tectonics could help predict whether planets beyond our solar system could be hospitable to life.

A combined image of Earth’s plates and their boundaries. 

Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio


“Plate tectonics set up the conditions for life,” said Nick Dygert, assistant professor of petrology and geochemistry in UT’s Department of Earth and Planetary Sciences and coauthor of the study. “The more we know about ancient plate tectonics, the better we can understand how Earth got to be the way it is now.”

For the research, Dygert and his team looked into the distribution of two very specific noble gas isotopes: Helium-3 and Neon-22. Noble gases are those that do not react with other chemical elements.

Previous models have explained the Earth's current Helium-3/Neon-22 ratio by arguing that a series of large-scale impacts (like the one that produced our moon) resulted in massive magma oceans, which degassed and incrementally increased the Earth's ratio each time.

However, Dygert believes the scenario is unlikely.

“While there is no conclusive evidence that this didn’t happen,” he said, “it could have only raised the Earth’s Helium-3/Neon-22 ratio under very specific conditions.”

Instead, Dygert and his team believe the Helium-3/Neon-22 ratio rose in a different way.

As the Earth’s crust is continuously formed, the ratio of helium to neon in the mantle beneath the crust increases. By calculating this ratio in the mantle beneath the crust, and considering how this process would affect the bulk Earth over long periods of time, a rough timeline of Earth’s tectonic plate cycling can be established.
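
A toy version of that reasoning, with placeholder numbers rather than values from the study: if each episode of crust formation multiplies the mantle's ratio by a fixed factor, the number of cycles follows from the initial and present-day ratios.

```python
import math

# Toy sketch: if each crust-forming cycle multiplies the mantle's
# He-3/Ne-22 ratio by a fixed factor f, going from an initial ratio
# r0 to the present ratio r takes n = log(r / r0) / log(f) cycles.
# All three numbers below are hypothetical placeholders, not values
# from Dygert et al.

r0, r, f = 1.5, 4.5, 1.05      # initial ratio, present ratio, per-cycle factor
n = math.log(r / r0) / math.log(f)
print(f"~{n:.0f} crust-forming cycles")   # ~23 with these placeholders
```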

“Helium-3 and Neon-22 were produced during the formation of the solar system and not by other means,” Dygert said. “As such, they provide valuable insight into Earth’s earliest conditions and subsequent geologic activity.”


Contacts and sources:
Andrea Schneibel / Will Wells
University of Tennessee at Knoxville


Citation: Plate tectonic cycling modulates Earth's 3He/22Ne ratio
Nick Dygert, Colin R.M. Jackson, Marc A. Hesse, Marissa M. Tremblay, David L. Shuster, Jesse T. Gu. Earth and Planetary Science Letters, 2018; 498: 309. DOI: 10.1016/j.epsl.2018.06.044

North Korea's 2017 A-Bomb Test Set off Later Earthquakes, New Analysis Finds



Using newly refined analysis methods, scientists have discovered that a North Korean nuclear bomb test last fall set off aftershocks over a period of eight months. The shocks, which occurred on a previously unmapped nearby fault, are a window into both the physics of nuclear explosions, and how natural earthquakes can be triggered. The findings are described in two papers just published online in the journal Seismological Research Letters.

The September 3, 2017 underground test was North Korea’s sixth, and by far its largest yet, yielding some 250 kilotons, or about 17 times the size of the bomb that destroyed Hiroshima. Many experts believe the device was a hydrogen bomb–if true, a significant advance from the cruder atomic devices the regime previously exploded. The explosion itself produced a magnitude 6.3 earthquake. This was followed 8.5 minutes later by a magnitude 4 quake, apparently created when an area above the test site on the country’s Mt. Mantap collapsed into an underground cavity occupied by the bomb.
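
The "about 17 times" comparison follows from the commonly cited yield of roughly 15 kilotons for the Hiroshima bomb: 250 kt / 15 kt ≈ 17.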

A 2017 nuclear bomb test at North Korea’s Mt. Mantap (star) set off subsequent earthquakes northwest of the test site over a period of 8 months. Seismic stations in Russia, China and South Korea picked up the tremors.

(Courtesy Won-Young Kim/Lamont-Doherty Earth Observatory)

The test and collapse were picked up by seismometers around the world and widely reported at the time. But later, without fanfare, seismic stations run by China, South Korea and the United States picked up 10 smaller shocks, all apparently scattered within 5 or 10 kilometers around the test site. The first two came on Sept. 23, 2017; the most recent was April 22, 2018. Scientists assumed the bomb had shaken up the earth, and it was taking a while to settle back down. “It’s not likely that there would be so many events in that small area over a small period of time,” said the lead author of one of the studies, Won-Young Kim, a seismologist at Columbia University’s Lamont-Doherty Earth Observatory. “These are probably triggered due to the explosion.”

After looking at the series of aftershock reports, Kim’s group sifted more closely through the data and spotted three other aftershocks that had not previously been recognized, for a total of 13. The tremors were all modest, all between magnitude 2.1 and 3.4, and almost certainly harmless. In the past they would have been hard to pick out using far-off seismometers, he said. However, under new international cooperation agreements, he and colleagues obtained recordings from relatively nearby instruments including ones in Ussuriysk, Russia, a borehole in South Korea, and Mudanjiang, northeast China.

The group then used a new analysis method, developed in part by Lamont seismologist David Schaff, that looks at seismic waves of much lower frequency, which travel more slowly than those used in conventional earthquake analyses. These slow-moving waves allowed Schaff and the rest of the team to pinpoint the locations of the quakes with far greater precision than with conventional recordings. Instead of the random scatter initially seen, the quake locations lined up in a neat 700-meter-long row about 5 kilometers northwest of the blast - an indication of a hidden fracture.
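
The principle behind such precise relative locations is to measure tiny differences in arrival times between similar events by cross-correlating their waveforms. A minimal sketch on synthetic data, illustrating the idea rather than the authors' actual processing:

```python
import numpy as np

# Sketch of the waveform cross-correlation idea behind precise
# relative earthquake location: estimate the time shift between two
# similar seismograms. Synthetic data; an illustration of the
# principle, not the authors' actual processing pipeline.

fs = 100.0                            # samples per second
t = np.arange(0, 10, 1 / fs)
wavelet = np.exp(-(t - 5) ** 2 / 0.05) * np.sin(2 * np.pi * 2 * (t - 5))

true_shift = 37                       # samples; second event arrives later
a = wavelet
b = np.roll(wavelet, true_shift)

xcorr = np.correlate(b, a, mode="full")
lag = int(np.argmax(xcorr)) - (len(a) - 1)
print(f"estimated delay: {lag / fs:.2f} s (true: {true_shift / fs:.2f} s)")
```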

Seismometers have long been routinely used to verify nuclear test treaties, and scientists have become increasingly confident that they can detect even small tests and distinguish them from natural earthquakes. But the link between explosions and subsequent quakes is less studied. Seismologists documented a handful of apparent aftershocks near a Nevada test site in the 1970s, and near a Soviet test site in Kazakhstan in 1989. However, they were not able to pinpoint the locations of these quakes with the technology then available. With more instruments and the new analysis method, "now we can see everything," said Paul Richards, a Lamont seismologist who coauthored the papers. "It's a radical improvement in cataloging even tiny, tiny earthquakes. It shows not just what we can do with natural earthquakes, but that we can monitor what the North Koreans are doing. North Korea can't do anything at all now [in secret] and expect to get away with it."

Richards said the exact location of tiny quakes could also help in the so far largely fruitless quest by some seismologists to predict bigger quakes. Richards did not assert that quakes could eventually be predicted, but said, “If you’re ever going to do this, you have to understand locations, and how one earthquake affects its neighbors.”

This spring, the North Koreans made a show of blowing up part of the Mt. Mantap site, though it may already have become largely unusable due to the destruction caused by previous explosions. And no nuclear tests have been detected since North Korean leader Kim Jong Un and U.S. president Donald Trump met in June to discuss ending North Korea’s tests. However, despite boasts by Trump that North Korea’s program has been neutralized, U.S. diplomats have noted evidence suggesting that the North continues to quietly develop its weapons.

Lamont scientists have studied previous North Korean tests, including ones in 2013 and 2009; they concluded that a reported test in 2010 was a false alarm. The current studies were coauthored by Eunyoung Jo and Yonggyu Ryoo of the Korea Meteorological Administration.



Contacts and sources: 
Kevin Krajick
Lamont-Doherty Earth Observatory, Columbia University


Citation: Moment Tensor Source‐Type Analysis for the Democratic People’s Republic of Korea–Declared Nuclear Explosions (2006–2017) and 3 September 2017 Collapse Event.
Andrea Chiang, Gene A. Ichinose, Doug S. Dreger, Sean R. Ford, Eric M. Matzel, Steve C. Myers, W. R. Walter. Seismological Research Letters, 2018. DOI: 10.1785/0220180130

Saturday, September 29, 2018

Ancient Lowland Maya Complexity Revealed



Tulane University researchers, documenting the discovery of dozens of ancient cities in northern Guatemala through the use of jungle-penetrating Lidar (light detection and ranging) technology, have published their results in the prestigious journal Science.

The article includes the work of Marcello Canuto, director of the Middle American Research Institute at Tulane, and Francisco Estrada-Belli, a research assistant professor at Tulane and director of the Holmul Archaeological Project since 2000. They worked with assistant professor of anthropology Thomas Garrison of Ithaca College as well as other scholars to make their discoveries in the Petén forest of Guatemala.

Tulane University researchers Marcello Canuto and Francisco Estrada-Belli led the discovery of dozens of ancient cities in northern Guatemala through the use of jungle-penetrating LiDAR (light detection and ranging) technology.
Credit: American Association for the Advancement of Science


A consortium of 18 scholars from U.S., European and Guatemalan institutions, including the Ministry of Culture and Sports, was enabled by the Fundación PACUNAM (Patrimonio Cultural y Natural Maya) to analyze lidar data covering over 2,100 square kilometers of the Maya Biosphere Reserve.

"Since LiDAR technology is able to pierce through thick forest canopy and map features on the earth’s surface, it can be used to produce ground maps that enable us to identify human-made features on the ground, such as walls, roads or buildings,” Canuto said.

The PACUNAM LiDAR Initiative (PLI) is the largest single lidar survey in the history of Mesoamerican archaeology. The collaborative scientific effort has provided fine-grained quantitative data of unprecedented scope to refine long-standing debates regarding the nature of ancient lowland Maya urbanism. Specifically, the key findings of this study are:

• 61,480 ancient structures in the survey region, resulting in an estimated population of 7 to 11 million at the height of the Late Classic period (650-800 CE) (see the back-of-the-envelope sketch after this list). The structures include isolated houses, large palaces, ceremonial centers and pyramids.

• 362 square kilometers of terraces or otherwise modified agricultural terrain and another 952 square kilometers of viable farmland, demonstrating a landscape heavily modified for the intensive agriculture necessary to sustainably support massive populations for many centuries.

• 106 square kilometers of causeways within and between urban centers and numerous, sizeable defensive earthworks. This substantial infrastructure investment highlights the interconnectivity of cities and hinterlands as well as the scale of Maya warfare.
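
A quick back-of-the-envelope check on those headline numbers (the persons-per-structure multiplier is purely illustrative; the study's 7-11 million estimate extrapolates observed densities well beyond the surveyed area itself):

```python
# Average structure density across the lidar coverage, from the
# survey's headline numbers, plus a purely illustrative population
# multiplier. The study's 7-11 million figure extrapolates observed
# densities far beyond the surveyed tiles.
structures = 61_480
area_km2 = 2_100
print(f"{structures / area_km2:.0f} structures per km^2")   # ~29

people_per_structure = 5   # hypothetical multiplier for illustration
print(f"~{structures * people_per_structure:,} people within the surveyed area")
```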

Both Canuto and Estrada-Belli noted that discoveries were made in a matter of minutes, compared to what would have taken years of fieldwork without the LiDAR technology.

“Seen as a whole, terraces and irrigation channels, reservoirs, fortifications and causeways reveal an astonishing amount of land modification done by the Maya over their entire landscape on a scale previously unimaginable,” Estrada-Belli said.


Contacts and sources:
Tulane University

Citation: Ancient lowland Maya complexity as revealed by airborne laser scanning of northern Guatemala. Marcello A. Canuto et al. Science, 28 Sep 2018: Vol. 361, Issue 6409, eaau0137. http://science.sciencemag.org/content/361/6409/eaau0137
DOI: 10.1126/science.aau0137

Amazonian Plant Found to Kill Liver Cancer Cells

A piece of research by the UPV/EHU-University of the Basque Country reveals that the Vismia baccifera plant causes oxidative stress and death in hepatic tumour cells.

The plant Vismia baccifera produces a toxic response in tumour cells

Credit: Daniel A. Monsalve Ortiz

A piece of research conducted by the Free Radicals and Oxidative Stress Group at the UPV/EHU’s Faculty of Medicine and Nursing has deciphered the antitumour mechanism exerted by the plant Vismia baccifera, originally from the Amazonian region of Colombia, in human liver cancer cells.

The journal Heliyon has published the results of the study in which this plant was found to induce oxidative stress in cells, which eventually leads to cell death.

Vismia baccifera plant
Credit: Franz Xaver / Wikimedia Commons. GFDL, cc-by-sa

Products derived from plants are receiving increasing attention from the scientific community owing to their anti-oxidant, anti-inflammatory and antitumour activity. “Right now, there is huge interest in identifying compounds derived from plants that could be used as chemotherapeutic agents with the capacity to prevent tumours from growing, or to treat metastasis, for example,” explained Dr Jenifer Trepiana, member of the Free Radicals and Oxidative Stress research group at the UPV/EHU’s Faculty of Medicine and Nursing, and one of the authors of the study.

For its research the group chose the plant Vismia baccifera, which was picked in the Amazonian region of Colombia. “Indigenous populations use it for its anti-inflammatory properties or for urinary tract disorders or skin diseases, but we chose it because in previous studies we had seen that it is the one with the greatest antitumour capability in liver cancer cells that we have used,” said the researcher.

The study was conducted in vitro using a model of human liver tumour cells, and the cells were treated with an aqueous extract of Vismia baccifera leaves prepared as an infusion, just as it is used in traditional indigenous medicine. Healthy human hepatic cells were also treated with this same extract “to see whether or not healthy cells are also affected”, said Dr Trepiana.

Toxicity for tumour cells but not for healthy ones

As they were able to confirm, the extract of Vismia baccifera produces a toxic response in tumour cells. It causes an increase in free radicals, in particular hydrogen peroxide, which ends up bringing about the death of the tumour cells. Among the effects caused by the increase in hydrogen peroxide, "the blocking of the cell cycle (in which the cells stop dividing), damage to genetic material, and the activation of a cell death process known as apoptosis were observed", specified the researcher.

When comparing the cytotoxic action of Vismia baccifera in tumour cells and healthy cells, they saw that “only the cancer cells were affected; we found that these effects do not take place in healthy human liver cells and, previously, in rat cells”, she pointed out. “This is of huge interest because the most important thing is that healthy cells should remain unaffected."

The researcher regards these results, in other words knowing the effect of the plant inside the cells, as “tremendously positive. The ideal thing would be to take the research further and move towards doing in vivo studies using animal models, to go on passing milestones until it can be used as a therapy against cancer. Although we are well aware that it will be a very long road”, she concluded.


Contacts and sources:
University of the Basque Country

Citation:  Unraveling the in vitro antitumor activity of Vismia baccifera against HepG2: role of hydrogen peroxide
Jenifer Trepiana, M. Begoña Ruiz-Larrea, José Ignacio Ruiz-Sanz
Heliyon (2018)  DOI: 10.1016/j.heliyon.2018.e00675

How Much Seafood Does the Whole World Eat?


Global seafood consumption has more than doubled in the past 50 years, putting stress on the sustainability of fishing

Net importing nations must consider the sustainability of their trading partners' fishing practices, not just their domestic ones

New analysis on international supply chains makes the case for international collaboration on long-term sustainability of all seafood production

Omul fish, endemic to Lake Baikal (Russia). Smoked and on sale at Listvyanka market.
Credit: Jan van der Crabben / Wikimedia Commons

Taking into consideration both food that humans consume and seafood processed for feed production, seafood consumption in EU member states equals 27 kg per head.

The highest consumption at EU level is observed in Portugal (61.5 kg per head) while outside the EU, the top consumers are Korea (78.5 kg per head) followed by Norway (66.6 kg per head).

The global per head consumption is estimated at 22.3 kg.

Global demand for seafood is growing

Global seafood consumption has more than doubled in the past 50 years, to over 20 kg per capita per year in 2014.

As demand for seafood rises, the sustainability of fish stocks becomes an ever more pressing issue.

Compared to other commodities, the share of globally produced seafood products that are traded internationally is very high and growing, mostly due to globalisation and the geographical discrepancy between aquaculture production happening mostly in Asia, and seafood demand mostly in Europe, North America and Asia.

Given that many nations rely on imports to meet national demands, assessments of the sustainability of seafood need to consider both domestic production and net imports, and whether imported seafood comes from sustainable sources.

Why measure global seafood consumption footprint?

In a recent article "Global seafood consumption footprint", JRC scientists use a new methodology to examine the impact of seafood supply chains across national boundaries - the global seafood consumption footprint.

This is the first ever measure of national footprints based on seafood consumption rather than production, broken down by sector to quantify the dependencies between capture fisheries and aquaculture through fishmeal production and trade by country.

The seafood consumption footprint provides policy-makers with evidence to encourage international collaboration and promote policies to ensure long-term sustainability of all seafood production.

What is our global seafood consumption footprint?

Representation of the interactions between the different sectors, showing the flow of seafood products (in million tonnes) and the share of supply with domestic (blue) or international (grey) origin, for 2011.

©EU 2018

According to calculations using baseline data from 2011, global demand for seafood destined for human consumption is 143.8 million tonnes per year, and the overall consumption footprint, which also includes other uses of seafood, is 154 million tonnes.

China has by far the largest seafood consumption footprint (65 million tonnes), followed by the European Union (13 million tonnes), Japan (7.4 million tonnes), Indonesia (7.3 million tonnes) and the United States (7.1 million tonnes).

Production (light blue) and consumption (dark blue) footprints for the top 20 countries, ranked by consumption (in million tonnes), for 2011 (note: freshwater and marine aquaculture production are combined).

©EU 2018

In terms of consumption footprint per capita, the Republic of Korea scored highest (78.5 kg per capita), followed by Norway (66.6 kg), Portugal (61.5 kg), Myanmar (59.9 kg), Malaysia (58.6 kg) and Japan (58 kg) – China comes in seventh at 48.3 kg per capita.
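
As a quick consistency check, a national footprint divided by population should reproduce the per-capita figure. A minimal sketch; the population number is an approximate 2011 value used only for illustration, not taken from the JRC article:

```python
# Consistency check: national footprint / population = per-capita figure.
# The population value is an approximate 2011 estimate, used here only
# for illustration.
china_footprint_tonnes = 65e6
china_population = 1.34e9
kg_per_capita = china_footprint_tonnes / china_population * 1000
print(f"{kg_per_capita:.1f} kg per capita")   # ~48.5, close to the reported 48.3
```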

How to measure global seafood consumption footprint

JRC scientists developed a model (Multi-Region Input-Output, MRIO) for the world seafood supply chain to investigate the impact of seafood consumption across national boundaries.

The model explores the interactions between capture fisheries and aquaculture, fishmeal and trade at the global level, and accounts for trade flows and interdependencies between different countries along the international supply chain, linking the extraction of raw materials, inter-industry flow, trade and final consumption.
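
At its core, an input-output model of this kind traces final consumption back through inter-industry and trade flows via the Leontief inverse, x = (I - A)^(-1) d. A minimal two-region sketch with made-up coefficients; the JRC model is far richer:

```python
import numpy as np

# Minimal two-region Multi-Region Input-Output (MRIO) sketch: total
# output x needed to satisfy final demand d solves the Leontief
# system x = (I - A)^(-1) d, where A holds inter-industry and
# inter-regional input coefficients. All numbers are made up for
# illustration; the JRC model is far more detailed.

A = np.array([[0.10, 0.03],    # inputs from region 1 per unit of output
              [0.05, 0.20]])   # inputs from region 2 per unit of output
d = np.array([100.0, 50.0])    # final (consumption) demand by region

x = np.linalg.solve(np.eye(2) - A, d)
print("total production required by region:", x.round(1))
```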

Results from the model can provide policy-makers and consumers with information on the extent of reliance on producer nations for their seafood supplies.

They can support the assessment of whether seafood sources are exploited in accordance with the applicable or desired sustainability standards and objectives.

This information can help encourage international collaboration and promote policies to ensure long-term sustainability of all seafood production.

EU policies to preserve fish stocks

The Common Fisheries Policy (CFP) is a set of rules for managing European fishing fleets and for conserving fish stocks.

Stocks may be renewable, but they are finite. Some of these fishing stocks, however, are being overfished.

As a result, EU countries have taken action to ensure the European fishing industry is sustainable and does not threaten the fish population size and productivity over the long term.

As a major fishing power, and the largest single market for fisheries products in the world, the EU also plays an important role in promoting better governance through a number of international organisations.

This involves developing and implementing policy on fisheries management and – more generally – the Law of the Sea.

The EU works closely with its partners from around the globe through the United Nations system, including the Food and Agriculture Organisation (FAO), as well as in other bodies, such as the Organisation for Economic Co-operation and Development (OECD).



Contacts and sources:
 European Commission, Joint Research Centre (JRC)

Citation: "Global seafood consumption footprint"
Guillen, J., Natale, F., Carvalho, N. et al. Ambio (2018). DOI: 10.1007/s13280-018-1060-9. Springer Netherlands. Print ISSN 0044-7447, Online ISSN 1654-7209. https://link.springer.com/article/10.1007/s13280-018-1060-9

Cancer Risk Due to Smoking, Unhealthy Diet, Lack of Physical Activity, and Infections Is Preventable

Almost four in every 10 new cases of cancer in Germany are attributable to lifestyle and environmental factors. These include primarily smoking, low physical activity, overweight, and infections. Hermann Brenner and his group of authors from the German Cancer Research Center (DKFZ) report on how these risk factors affect the number of cancer cases in Germany in concrete terms, in this themed issue of Deutsches Ärzteblatt International (Dtsch Arztebl Int; 115: 571–593).

In their series of articles, the Heidelberg-based authors determined what proportion of all new cancer cases expected for 2018 is attributable to each risk factor. The calculation is based on population projections, published relative risks, and cancer incidence and exposure data for 35-84 year olds in Germany.
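
Attributable-burden estimates of this kind are typically built on the population attributable fraction (Levin's formula), which combines exposure prevalence with relative risk. A minimal sketch with illustrative numbers, not the study's actual inputs:

```python
# Sketch of the population attributable fraction (PAF, Levin's
# formula) that typically underlies attributable-burden estimates:
# PAF = p*(RR - 1) / (1 + p*(RR - 1)), where p is the exposure
# prevalence and RR the relative risk in the exposed. The example
# numbers are illustrative, not the study's actual inputs.

def paf(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# e.g. 25% smoking prevalence and a relative risk of 15 for lung cancer
fraction = paf(0.25, 15)
print(f"PAF = {fraction:.0%}")                           # ~78% of cases
print(f"{fraction * 50_000:,.0f} of 50,000 expected cases attributable")
```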


In the current year, an estimated 85,072 cases of cancer will have been caused by smoking. This corresponds to 19% of all new cases. In men, the proportion of lung cancers due to tobacco consumption is 89% and in women, 83%. Overweight and a lack of physical activity/exercise account for 7% and 6% of the expected cancer burden, respectively, and constitute the main risk for uterine and renal cancers. In overweight persons, the risk of liver cancer is also raised, while a lack of physical exercise also contributes to lung cancer.

Bacterial or viral infections cause 17,600 incident cases, or 4% of expected new cancer cases. Infections with Helicobacter pylori and human papillomaviruses play a major role in this setting. A lower but still important proportion of new malignancies is due to high alcohol consumption, high intake of processed meat, or low intake of dietary fiber, fruits, and vegetables. Additional risk factors include indoor radon, particulate matter, and sunbed use.

The authors make a plea for more stringent prevention measures in terms of tobacco and alcohol consumption, overweight, unhealthy diet, and lack of physical exercise. They also call for targeted preventive measures regarding infections and environmental factors. But they highlight that further research is needed in order to identify and quantify environmental risks more comprehensively.


Contacts and sources:
Deutsches Aerzteblatt International

Citation: Cancers due to smoking and high alcohol consumption—estimation of the attributable cancer burden in Germany. Mons U, Gredner T, Behrens G, Stock C, Brenner H. Dtsch Arztebl Int 2018; 115: 571–7. DOI: 10.3238/arztebl.2018.0571
URL: https://www.aerzteblatt.de/pdf.asp?id=199685 
https://www.aerzteblatt.de/int/archive/issue?heftid=6245

Cancers due to excess weight, low physical activity and unhealthy diet—estimation of the attributable cancer burden in Germany.
Behrens G, Gredner T, Stock C, Leitzmann MF, Brenner H, Mons U. Dtsch Arztebl Int 2018; 115: 578–85. DOI: 10.3238/arztebl.2018.0578
URL: https://www.aerzteblatt.de/pdf.asp?id=199686 

Cancers due to infection and selected environmental factors—estimation of the attributable cancer burden in Germany.
Gredner T, Behrens G, Stock C, Brenner H, Mons U. Dtsch Arztebl Int 2018; 115: 586–93. DOI: 10.3238/arztebl.2018.0586
URL: https://www.aerzteblatt.de/pdf.asp?id=199692

Researchers Challenge Assumptions on the Effects of Planetary Rotation


The earth’s rotation causes the Coriolis effect, which deflects massive air and water flows toward the right in the Northern Hemisphere and toward the left in the Southern Hemisphere. This phenomenon greatly impacts global wind patterns and ocean currents, and is only significant for large-scale and long-duration geophysical phenomena such as hurricanes. The magnitude of inertial forces relative to the magnitude of the Coriolis effect is expressed by the Rossby number. For over 100 years, scientists have believed that the higher this number, the less likely it is that the Coriolis effect influences oceanic or atmospheric events.
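
The Rossby number is conventionally defined as Ro = U / (fL), the ratio of inertial to Coriolis forces. A minimal sketch contrasting a hurricane-scale flow with a wake-scale vortex; the scales are rough illustrations, not values from the paper:

```python
import math

# The Rossby number Ro = U / (f * L) compares inertial to Coriolis
# forces: U is a flow speed, L a length scale, and f = 2*Omega*sin(lat)
# the Coriolis parameter. The scales below are rough illustrations,
# not values taken from the paper.

OMEGA = 7.2921e-5              # Earth's rotation rate, rad/s

def rossby(speed_m_s, length_m, latitude_deg):
    f = 2 * OMEGA * math.sin(math.radians(latitude_deg))
    return speed_m_s / (f * length_m)

print(f"hurricane-scale flow: Ro ~ {rossby(40, 500_000, 20):.1f}")   # order 1
print(f"wake-scale vortex:    Ro ~ {rossby(1, 100, 45):.0f}")        # order 100
```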

Recently, researchers at the Naval Postgraduate School in California found that even smaller ocean disturbances with high Rossby numbers, like vortices within submarine wakes, are influenced by the Coriolis effect. Their discovery challenges assumptions at the very foundation of theoretical oceanography and geophysical fluid dynamics. The team reports their findings in Physics of Fluids, from AIP Publishing.

A 2D image of the velocity in an internal jet with a Rossby number of 100, showing how planetary rotation leads to the destabilization and dispersion of an initially coherent flow pattern.

Credit: Timour Radko and David Lorfeld


“We have discovered some major -- and largely overlooked -- phenomena in fundamental fluid dynamics that pertain to the way the Earth’s rotation influences various geophysical flows,” Timour Radko, an oceanography professor and author on the paper, said.

Radko and Lt. Cmdr. David Lorfeld originally focused on developing novel submarine detection systems. They approached this issue by investigating pancake vortices, or flattened, elongated mini-eddies located in the wakes of submerged vehicles. Eddies form when turbulence in a flow creates swirling water and reverse currents.

Last year, a team led by Radko published a paper in the same AIP journal on the rotational control of pancake vortices, the first paper that challenged the famous “Rossby rule.” In this most recent paper, the researchers showed, through numerical simulations, that internal jets of the wake can be directly controlled by rotation. They also demonstrated that the evolution of a disorganized fine-scale eddy field is determined by planetary rotation.

“Here is where our discovery could be critical,” Radko said. “We find that cyclones persist, but that anticyclones unravel relatively quickly. If the anticyclones in the wake are as strong as the cyclones, this means that the wake is fresh -- the enemy passed through not too long ago. If the cyclones are much stronger than the anticyclones, then the sub is probably long gone.”

The algorithm that the researchers developed is based on the dissimilar evolution of cyclones and anticyclones, which is a consequence of planetary rotation. “Therefore,” Radko concluded, “such effects must be considered in the numerical and theoretical models of finescale oceanic processes in the range of 10-100 meters.”



Contacts and sources:
Jason Bardi
American Institute of Physics (AIP)

Citation: "Effects of weak planetary rotation on the stability and dynamics of internal stratified jets" is authored by Timour Radko and David Lorfeld. The article appeared in the journal Physics of Fluids Sept. 25, 2018 (DOI: 10.1063/1.5049598) and can be accessed at
http://aip.scitation.org/doi/full/10.1063/1.5049598


Friday, September 28, 2018

New, Highly Stable Catalyst May Help Turn Water into Fuel



Breaking the bonds between oxygen and hydrogen in water could be a key to the creation of hydrogen in a sustainable manner, but finding an economically viable technique for this has proved difficult. Researchers report a new hydrogen-generating catalyst that clears many of the obstacles – abundance, stability in acid conditions and efficiency.

In the journal Angewandte Chemie, researchers from the University of Illinois at Urbana-Champaign report on an electrocatalytic material made by mixing metal compounds with a substance called perchloric acid.

Electrolyzers use electricity to break water molecules into oxygen and hydrogen. The most efficient of these devices use corrosive acids and electrode materials made of the metal compounds iridium oxide or ruthenium oxide. Iridium oxide is the more stable of the two, but iridium is one of the least abundant elements on Earth, so researchers are in search of an alternative material.
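
In acid electrolyzers of this kind, the overall water-splitting reaction, 2H2O → 2H2 + O2, proceeds as two half-reactions: the oxygen evolution reaction at the anode (2H2O → O2 + 4H+ + 4e−), which is the step these oxide catalysts accelerate, and the hydrogen evolution reaction at the cathode (4H+ + 4e− → 2H2).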

Postdoctoral researcher Jaemin Kim, left, professor of chemical and biomolecular engineering Hong Yang and graduate student Pei-Chieh (Jack) Shih are part of a team that developed a new material that helps split water molecules for hydrogen fuel production.
Photo by L. Brian Stauffer

“Much of the previous work was performed with electrolyzers made from just two elements – one metal and oxygen,” said Hong Yang, a co-author and professor of chemical and biomolecular engineering at Illinois. “In a recent study, we found if a compound has two metal elements – yttrium and ruthenium – and oxygen, the rate of water-splitting reaction increased.”

Yao Qin, a co-author and former member of Yang’s group, first experimented with the procedure for making this new material by using different acids and heating temperatures to increase the rate of the water-splitting reaction.

The researchers found that when they used perchloric acid as a catalyst and let the mixture react under heat, the physical nature of the yttrium ruthenate product changed.

“The material became more porous and also had a new crystalline structure, different from all the solid catalysts we made before,” said Jaemin Kim, the lead author and a postdoctoral researcher. The new porous material the team developed – a pyrochlore oxide of yttrium ruthenate – can split water molecules at a higher rate than the current industry standard.

“Because of the increased activity it promotes, a porous structure is highly desirable when it comes to electrocatalysts,” Yang said. “These pores can be produced synthetically with nanometer-sized templates and substances for making ceramics; however, those can’t hold up under the high-temperature conditions needed for making high-quality solid catalysts.”

Yang and his team looked at the structure of their new material with an electron microscope and found that it is four times more porous than the original yttrium ruthenate they developed in a previous study, and three times that of the iridium and ruthenium oxides used commercially.

“It was surprising to find that the acid we chose as a catalyst for this reaction turned out to improve the structure of the material used for the electrodes,” Yang said. “This realization was fortuitous and quite valuable for us.”

The next steps for the group are to fabricate a laboratory-scale device for further testing and to continue to improve the porous electrode stability in acidic environments, Yang said.

“Stability of the electrodes in acid will always be a problem, but we feel that we have come up with something new and different when compared with other work in this area,” Yang said. “This type of research will be quite impactful regarding hydrogen generation for sustainable energy in the future.”

Graduate student Pei-Chieh Shih, Zaid Al-Bardan and Argonne National Laboratory researcher Cheng-Jun Sun also contributed to this research.



Contacts and sources:
Hong Yang /  Lois Yoksoulian
 University of Illinois at Urbana-Champaign

Citation: “A porous pyrochlore Y2[Ru1.6Y0.4]O7–δ electrocatalyst for enhanced performance towards the oxygen evolution reaction in acidic media” is available online and from the U. of I. News Bureau.

Quantum Mechanics Can Let the Oil Industry Know the Promise of Recovery Experiments

With their current approach, energy companies can extract about 35 percent of the oil in each well. Every 1 percent above that, compounded across thousands of wells, can mean billions of dollars in additional revenue for the companies and supply for consumers.

Extra oil can be pushed out of wells by forced water - often inexpensive seawater - but scientists doing experiments in the lab found that sodium in water impedes its ability to push oil out, while other trace elements help. Scientists experiment with various combinations of calcium, magnesium, sulfates and other additives, or "wettability modifiers," in the laboratory first, using the same calcite as is present in the well. The goal is to determine which lead to the most oil recovery from the rock.
Clockwise from top left: a schematic diagram of the calcite/brine/oil system, a simulation supercell (color scheme: Ca-indigo, C-brown, O-red, H-white) with ions in brine shown schematically, and the oil-in-water contact angle assuming an initial mixed-wet state and difference (relative to calcite-water) in the effective charge of the surface.

Credit: Sokrates Pantelides

Vanderbilt University physicist Sokrates Pantelides and postdoctoral fellow in physics Jian Liu developed detailed quantum mechanical simulations on the atomic scale that accurately predict the outcomes of various additive combinations in the water.

They found that calcium, magnesium and sulfates settle farther from the calcite surface, rendering it more water-wet by modifying the effective charge on the surface, enhancing oil recovery. Their predictions have been backed by experiments carried out by their collaborators at Khalifa University in Abu Dhabi: Saeed Alhassan, associate professor of chemical engineering and director of the Gas Research Center, and his research associate, Omar Wani.

"Now, scientists in the lab will have a procedure by which they can make intelligent decisions on experiments instead of just trying different things," said Pantelides, University Distinguished Professor of Physics and Engineering, William A. & Nancy F. McMinn Professor of Physics, and professor of electrical engineering. "The discoveries also set the stage for future work that can optimize choices for candidate ions."

The team's paper, ­­­­­"Wettability alteration and enhanced oil recovery induced by proximal adsorption of Na+, Cl-, Ca2+, Mg2+, and SO42- ions on calcite," appears today in the journal Physical Review Applied. It builds on Pantelides' previous work on wettability, released earlier this year.

His co-investigators in Abu Dhabi said the work will have a significant impact on the oil industry.

"We are excited to shed light on combining molecular simulations and experimentation in the field of enhanced oil recovery to allow for more concrete conclusions on the main phenomenon governing the process," Alhassan said. "This work showcases a classic approach in materials science and implements it in the oil and gas industry: the combination of modeling and experiment to provide understanding and solutions to underlying problems."



Contacts and sources:
Heidi Nieland Hall
Vanderbilt University

UCF Selling Experimental Martian Dirt — $20 a Kilogram, Plus Shipping

The University of Central Florida is selling Martian dirt, $20 a kilogram plus shipping.

This is not fake news. A team of UCF astrophysicists has developed a scientifically based, standardized method for creating Martian and asteroid soils, known as simulants.

The team published its findings this month in the journal Icarus.

“The simulant is useful for research as we look to go to Mars,” says Physics Professor Dan Britt, a member of UCF’s Planetary Sciences Group. “If we are going to go, we’ll need food, water and other essentials. As we are developing solutions, we need a way to test how these ideas will fare.”

Two hands wearing blue gloves hold a metal bowl containing red-brown dirt to pour into another metal container
Credit: University of Central Florida, Karen Norum

For example, scientists looking for ways to grow food on Mars — cue the 2015 film The Martian — need to test their techniques on soil that most closely resembles the stuff on Mars.

“You wouldn’t want to discover that your method didn’t work when we are actually there,” Britt says. “What would you do then? It takes years to get there.”

UCF’s formula is based on the chemical signature of the soils on Mars collected by the Curiosity rover. Britt built two calibration targets that were part of the Curiosity rover.

Researchers currently use simulants that aren’t standardized, so experiments can’t be compared with one another in an apples-to-apples kind of way, Britt says.

As a geologist and a physicist, he knows his dirt. Like a recipe, the ingredients can be mixed in different ways to mimic soil from various objects, including asteroids and planets. And because the formula is based on scientific methods and is published for all to use, even those not ordering through UCF can create dirt that can be used for experiments, which reduces the uncertainty level.
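
Formulating a simulant recipe of this kind is essentially a constrained mixing problem: choose non-negative mineral fractions whose combined chemistry best matches a target signature. A minimal sketch using non-negative least squares; the compositions and mineral set are made up for illustration and are not the published formula:

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the "recipe" idea: pick non-negative mineral fractions
# whose combined oxide composition best matches a target signature
# (e.g., a Curiosity-measured soil). All compositions below are
# made up for illustration, not the published simulant formula.

#                     SiO2  FeO   MgO   (weight fractions per mineral)
minerals = np.array([[0.50, 0.10, 0.25],   # olivine-like component
                     [0.55, 0.15, 0.10],   # pyroxene-like component
                     [0.45, 0.25, 0.05]]   # basaltic-glass-like component
                    ).T                    # rows = oxides, cols = minerals

target = np.array([0.49, 0.18, 0.09])      # hypothetical target soil

fractions, misfit = nnls(minerals, target)
fractions /= fractions.sum()               # normalize to a recipe
print("mix fractions:", fractions.round(2), "misfit:", round(misfit, 3))
```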

Kevin Cannon, the paper’s lead author and a post-doctoral researcher who works with Britt at UCF, says there are different types of soil on Mars and on asteroids. On Earth, for example, we have black sand, white sand, clay and topsoil to name a few. On other worlds, you might find carbon-rich soils, clay-rich soils and salt-rich soils, he added.

“With this technique, we can produce many variations,” Cannon says. “Most of the minerals we need are found on Earth although some are very difficult to obtain.”

A team of UCF astrophysicists has developed a scientifically based, standardized method for creating Martian and asteroid soil known as simulants.

Credit: University of Central Florida, Karen Norum

Cannon is in Montana to collect ingredients for a moon simulant this week. Moon and asteroid materials are rare and expensive on Earth since they arrived via meteorites in small amounts. That’s why asteroid and moon simulants are also on the list of items that can be ordered. The UCF team can mimic most ingredients and will substitute for any potentially harmful materials. All simulants produced in the lab meet NASA’s safety standards.

Britt and Cannon believe there is a market for the simulant. At $20 a kilogram, plus shipping, it may be easier to send UCF an order than to try to make it in labs across the nation.

The team already has about 30 pending orders, including one from Kennedy Space Center for half a ton.

“I expect we will see significant learning happening from access to this material,” Britt says.

Cannon believes it will help accelerate the drive to explore our solar system as demonstrated by investments already being made by Space X, Blue Origin and other private companies.

For Cody Schultz, a mechanical engineering senior, getting to work on the experimental soil has been “very cool.”

“For someone who has always loved space science, this is the ultimate cool,” he says. “And the experience is fantastic in terms of the real world … out-of-this-world experience.”

Contacts and sources:
Zenaida Gonzalez Kotala
University of Central Florida

Ice-Free Corridor Sustained Arctic Marine Life during Last Ice Age

Scientists from Norway and the UK have shown that, 20,000 years ago, Arctic sea ice in the winter covered more than twice the area it does today. Yet there was a small ice-free oasis between the ice-covered continents and the frozen ocean. There, marine life prevailed.

"When we were looking for evidence of biological life in sediments at the bottom of the ocean, we found that between the sea ice covered oceans, and the ice sheets on land, there must have been a narrow ice-free corridor that extended over hundreds of kilometres into the Arctic. 

Such ice-free regions are often called "polynyas" - a Russian expression for an area of open water that is surrounded by sea ice and/or ice sheets", says research scientist Jochen Knies from the Centre for Arctic Gas Hydrate, Environment and Climate at UiT The Arctic University of Norway, and the Geological Survey of Norway.

Sea ice marginal zone in front of the West Antarctic ice sheet. Polynyas, ice-free corridors between the sea ice and land-based ice sheets, are common in Antarctica today.
Photo: M. Forwick


The new findings, which were published recently in Nature Communications, also reveal that the polynya was sustained for at least 5000 years, when the surroundings were largely covered by ice, and global ocean circulation was at a minimum.

Common today in Antarctica and Greenland

Today, polynyas are common around Antarctica and Greenland. They form through a combination of offshore winds blowing from nearby ice sheets and warm water rising from the deep ocean. In areas of extreme cold and little access to food, polynyas provide an oasis for marine mammals to survive and they are also critical for global ocean circulation.

"Polynyas in the polar regions are common nowadays, but it's difficult to confirm their existence in the past. However, by finding chemical fossils of algae that live in the open ocean and in sea ice, we have shown that polynyas must have existed during the last Ice Age" says co-author Simon Belt, professor of chemistry at Plymouth University.

Arctic oasis in front of the Eurasian ice sheet during the last Ice Age, 20,000 years ago (from Knies et al. 2018, Nature Communications).
 
Illustration: J. Knies

During a subsequent period of abrupt climate change around 17,500 years ago, cold freshwater from the melting ice caps caused the entire northern oceans to be covered by thick sea ice, and the polynya disappeared. This resulted in a dramatic decline in marine life. It took up to 2,000 years for life to recover.

The research is of international importance since it shows the vulnerability of marine ecosystems in the northern oceans to periods of rapid climate change, as well as their adaptability to various extreme climate states.


Contacts and sources:
Jochen Knies
Centre for Arctic Gas Hydrate, Environment and Climate at UiT


Citation: Nordic Seas polynyas and their role in preconditioning marine productivity during the Last Glacial Maximum. Knies, J., Köseoğlu, D., Rise, L., Baeten, N., Bellec, V.K., Bøe, R., Klug, M., Panieri, G., Jernas, P.T., Belt, S.T, Nature Communications, 2018. doi: 10.1038/s41467-018-06252-8. http://dx.doi.org/10.1038/s41467-018-06252-8

Thursday, September 27, 2018

The Notorious Luminous Blue Variable Star


New, three-dimensional simulations reveal the inner workings of one of the universe’s most mysterious stars

Sparkling with an exceptional blue-toned brilliance and exhibiting wild variations in both brightness and spectrum, the luminous blue variable (LBV) is a relatively rare and still somewhat mysterious type of star.

Its appearance tends to fluctuate radically over time, and that has piqued the curiosity of astrophysicists who wonder what processes may be at play.

“The luminous blue variable is a supermassive, unstable star,” said Yan-Fei Jiang, a researcher at UC Santa Barbara’s Kavli Institute for Theoretical Physics (KITP). Unlike our own comparatively smaller and steady-burning Sun, he explained, LBVs have been shown to burn bright and hot, then cool and fade so as to be almost indistinguishable from other stars, only to flare up again. Because of these changes, Jiang added, conventional one-dimensional models have been less than adequate at explaining the special physics of these stars.

However, thanks to special, data-intensive supercomputer modeling conducted at Argonne National Laboratory’s Argonne Leadership Computing Facility (ALCF) for its INCITE program, Jiang and colleagues — Matteo Cantiello of the Flatiron Institute, Lars Bildsten of KITP, Eliot Quataert at UC Berkeley, Omer Blaes of UCSB, and James Stone of Princeton — have now developed a three-dimensional simulation. It not only shows the stages of an LBV as it becomes progressively more luminous, then erupts, but also depicts the physical forces that contribute to that behavior. The simulation was developed also with computational resources from NASA and the National Energy Research Scientific Computing Center, while the overall effort was supported by the National Science Foundation and the Gordon and Betty Moore Foundation.

The researchers’ paper, “Luminous Blue Variable Outbursts from the Variations of Helium Opacity,” is published in the journal Nature.

Of particular interest to the researchers are the stars’ mass loss rates, which are significant compared to those of less massive stars. Understanding how these stellar bodies lose mass, Jiang said, could lead to greater insights into just how they end their lives as bright supernovae.

Among the physical processes never before seen with one-dimensional models are the supersonic turbulent motions — the ripples and wrinkles radiating from the star’s deep envelope as it prepares for a series of outbursts.

“These stars can have a surface temperature of about 9,000 degrees Kelvin during these outbursts,” Jiang said. That translates to 15,740 degrees Fahrenheit or 8,726 degrees Celsius.
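
(Those figures follow from the standard conversions: °C = K − 273.15 and °F = °C × 9/5 + 32.)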

A Luminous Blue Variable 3D simulation

Credit:  UCSB

Also seen for the first time in three dimensions is the tremendous expansion of the star immediately before and during the outbursts — phenomena not captured with previous one-dimensional models. The three-dimensional simulations show that it is the opacity of the helium that sets the observed temperature during the outburst.

According to Jiang, in a one-dimensional stellar evolution code, helium opacity — the extent to which helium atoms prevent photons (light) from escaping — is not very important in the outer envelope because the gas density at the cooler outer envelope is far too low.

The paper’s co-author and KITP Director Lars Bildsten explained that the three-dimensional model demonstrates that “the region deep within the star has such vigorous convection that the layers above that location get pushed out to much larger radii, allowing the material at the location where helium recombines to be much denser.” The radiation escaping from the star’s hot core pushes on the cooler, opaque outer region to trigger dramatic outbursts during which the star loses large amounts of mass. Hence convection — the same phenomenon responsible for thundercloud formation — causes not only variations in the star’s radius but also in the amount of mass leaving in the form of a stellar wind.

Additional work is underway on more simulations, according to Jiang, including models of the same stars but with different parameters such as metallicity, rotation and magnetic fields.

“We are trying to understand how these parameters will affect the properties of the stars,” Jiang said. “We are also working on different types of massive stars — the so-called Wolf-Rayet stars — which also show strong mass loss.”




Contacts and sources:
Sonia Fernandez
University of California - Santa Barbara

Citation: Outbursts of luminous blue variable stars from variations in the helium opacity
Yan-Fei Jiang, Matteo Cantiello, Lars Bildsten, Eliot Quataert, Omer Blaes, James Stone. Nature, 2018; 561 (7724): 498. DOI: 10.1038/s41586-018-0525-0

VLA Discovers Powerful Jet Coming from “Wrong” Kind of Star



Astronomers using the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) have discovered a fast-moving jet of material propelled outward from a type of neutron star previously thought incapable of launching such a jet. The discovery, the scientists said, requires them to fundamentally revise their ideas about how such jets originate.

Neutron stars are superdense objects, the remnants of massive stars that exploded as supernovas. When in binary pairs with “normal” stars, their powerful gravity can pull material away from their companions. That material forms a disk, called an accretion disk, rotating around the neutron star. Jets of material are propelled at nearly the speed of light, perpendicular to the disk.

Artist's conception shows magnetic field lines around neutron star, accretion disk of material orbiting the neutron star, and jets of material propelled outward.

Credit: ICRAR/Universiteit van Amsterdam


“We’ve seen jets coming from all types of neutron stars that are pulling material from their companions, with a single exception. Never before have we seen a jet coming from a neutron star with a very strong magnetic field,” said Jakob van den Eijnden of the University of Amsterdam. “That led to a theory that strong magnetic fields prevent jets from forming,” he added.

The new discovery contradicts that theory.

The scientists studied an object called Swift J0243.6+6124 (Sw J0243), discovered on October 3, 2017, by NASA’s orbiting Neil Gehrels Swift Observatory, when the object emitted a burst of X-rays. The object is a slowly spinning neutron star pulling material from a companion star that is likely significantly more massive than the Sun. The VLA observations began a week after the Swift discovery and continued until January 2018.

Both the fact that the object’s emission at X-ray and radio wavelengths weakened together over time and the characteristics of the radio emission itself convinced the astronomers that they were seeing radio waves produced by a jet.

“This combination is what we see in other jet-producing systems. Alternative mechanisms just don’t explain it,” van den Eijnden said.
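
The logic here is essentially a joint-fading test: if the radio emission traces a jet fed by the same accretion flow that produces the X-rays, the two bands should decline together. A minimal sketch of that kind of check (Python with NumPy; the light curves below are invented placeholders, not the team’s actual measurements):

```python
import numpy as np

# Hypothetical light curves sampled at the same epochs; real values
# would come from the Swift X-ray and VLA radio monitoring.
days       = np.array([7, 14, 30, 60, 90, 120])        # days since the X-ray burst
xray_flux  = np.array([9.0, 7.5, 5.0, 2.8, 1.5, 0.9])  # arbitrary units
radio_flux = np.array([0.80, 0.65, 0.42, 0.25, 0.14, 0.08])

# A jet origin predicts the two bands fade together, i.e. a strong
# positive correlation between simultaneous X-ray and radio fluxes.
r = np.corrcoef(xray_flux, radio_flux)[0, 1]
print(f"Pearson correlation between bands: r = {r:.3f}")
```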

Common theories for jet formation in systems like Sw J0243 say the jets are launched by magnetic field lines anchored in the inner parts of the accretion disks. In this scenario, if the neutron star has a very strong magnetic field, that field is overpowering and prevents the jet from forming.
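
The “overpowering field” intuition can be made concrete with the standard magnetospheric-radius estimate, which sets where a neutron star’s field truncates the accretion disk. A back-of-envelope sketch (Python, CGS units; the mass, radius, accretion rate and field strengths are generic textbook values, not measurements of Sw J0243):

```python
# Magnetospheric (Alfven) radius: r_m ~ (mu^4 / (2 G M Mdot^2))^(1/7).
# Illustrative textbook inputs only, in CGS units.
G    = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M    = 1.4 * 1.989e33  # neutron star mass [g]
R    = 1.0e6           # neutron star radius [cm]
Mdot = 1.0e17          # accretion rate [g/s]

for B in (1e8, 1e12):  # weak vs. very strong surface field [gauss]
    mu  = B * R**3     # magnetic dipole moment [G cm^3]
    r_m = (mu**4 / (2 * G * M * Mdot**2)) ** (1 / 7)
    print(f"B = {B:.0e} G -> disk truncated at ~{r_m / 1e5:,.0f} km")
# A ~1e12 G field pushes the inner disk edge out to thousands of km,
# removing the innermost disk region where such jets were thought to launch.
```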

Artist's conception illustrates superdense neutron star, right, drawing material off its "normal" companion. Material forms an accretion disk rotating around the neutron star. Jets of material are launched perpendicular to the disk.

Credit: ICRAR/Universiteit van Amsterdam

“Our clear discovery of a jet in Sw J0243 disproves that longstanding idea,” van den Eijnden said.

Alternatively, the scientists suggest that Sw J0243’s jet-launching region of the accretion disk could be much farther out than in other types of systems, where the star’s magnetic field is weaker. Another idea, they said, is that the jets may be powered by the neutron star’s rotation, instead of being launched by magnetic field lines in the inner accretion disk.

“Interestingly, the rotation-powered idea predicts that the jet will be significantly weaker from more slowly rotating neutron stars, which is exactly what we see in Sw J0243,” Nathalie Degenaar, also of the University of Amsterdam, said.

The new discovery also implies that Sw J0243 may represent a large group of objects whose radio emission was too weak to detect until the new capabilities provided by the VLA’s major upgrade, completed in 2012, became available. If more such objects are found, the scientists said, they could test the idea that jets are produced by the neutron star’s spin.

The astronomers added that a jet from Sw J0243 may mean that another category of objects, called ultra-luminous X-ray pulsars, which are also highly magnetized, might produce jets.

“This discovery not only means we have to revise our ideas about jets from such systems, but also opens up exciting new areas of research,” Degenaar said.

Van den Eijnden, Degenaar, and their colleagues are reporting their discovery in the journal Nature.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.



Contacts and sources:
Dave Finley
National Radio Astronomy Observatory
Citation:

Plasma Thruster: New Space Debris Removal Technology

The Earth is currently surrounded by debris launched into space over several decades. This space junk can collide with satellites, not only damaging the spacecraft but also creating further debris.

To preserve a secure space environment, the active removal or de-orbiting of space debris is an emerging technological challenge; if remedial action is not taken in the near future, it will be difficult to sustain human space activities. Several methods for the removal and de-orbiting of debris have been proposed, classified as either contact methods (e.g., robotic arm, tether net, electrodynamic tether) or contactless methods (e.g., laser, ion beam shepherd), with the contactless methods proving to be more secure.

A concept for space debris removal by bi-directional momentum ejection from a satellite. (Figure 1)

Credit: Kazunori Takahashi.

The ion beam shepherd contactless method uses a plasma beam ejected from the satellite to impart a force to the debris thereby decelerating it, which results in it falling to a lower altitude, re-entering the Earth's atmosphere and burning up naturally. However, ejecting the plasma beam toward the debris accelerates the satellite in the opposite direction, which makes it difficult to maintain a consistent distance between debris and the satellite.

To safely and effectively remove debris, two propulsion systems have to be mounted on the satellite to eject bi-directional plasma beams (Figure 1). This complicates satellite system integration, which demands reductions in a satellite's weight and size.

Schematic of a magnetic nozzle rf plasma thruster (helicon plasma thruster) having two open source exits, and photographs of the three operation modes in the laboratory test. (Figure 2)


Credit:Kazunori Takahashi.



"If the debris removal can be performed by a single high-power propulsion system, it will be of significant use for future space activity," said Associate Professor Kazunori Takahashi from Tohoku University in Japan, who is leading research on new technology to remove space debris in collaboration with colleagues at the Australian National University.

The Japanese and Australian research group has demonstrated that a helicon plasma thruster can perform the space debris removal operation using a single propulsion system (Figure 2). In the laboratory experiment, the bi-directional ejection of plasma plumes from the single thruster was precisely controlled with a magnetic field and gas injection; the decelerating force imparted to an object simulating debris was then measured while a zero net force on the thruster (and satellite) was maintained. The system, with its single plasma thruster, can operate in three modes: acceleration of the satellite, deceleration of the satellite, and debris removal.
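
The key bookkeeping in that demonstration is a momentum balance: the plume aimed at the debris does the decelerating, while the oppositely directed plume cancels the recoil on the satellite. A toy sketch of that balance (Python; the thrust values are illustrative assumptions, not the forces measured in the experiment):

```python
# Toy momentum bookkeeping for a bi-directional plasma thruster.
# All numbers are illustrative assumptions, not measured values.

def net_force(thrust_toward_debris: float, thrust_away: float) -> float:
    """Net force on the satellite from two oppositely directed plumes."""
    return thrust_away - thrust_toward_debris

# Debris-removal mode: one plume decelerates the debris, the other
# cancels the recoil so the satellite holds its standoff distance.
f_to_debris = 1.0e-3  # N, plume directed at the debris (hypothetical)
f_recoil    = 1.0e-3  # N, plume ejected the opposite way (hypothetical)

print(f"Force imparted to debris: {f_to_debris:.1e} N")
print(f"Net force on satellite:   {net_force(f_to_debris, f_recoil):.1e} N")
# Zero net force on the satellite, nonzero decelerating force on the
# debris: exactly the condition the single-thruster experiment maintained.
```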

"The helicon plasma thruster is an electrodeless system, which allows it to undertake long operations performed at a high-power level." says Takahashi, "This discovery is considerably different to existing solutions and will make a substantial contribution to future sustainable human activity in space."


Contacts and sources:
Kazunori Takahashi
Tohoku University

Citation: Demonstrating a new technology for space debris removal using a bi-directional plasma thruster
Kazunori Takahashi, Christine Charles, Rod W. Boswell & Akira Ando
Scientific Reports volume 8, Article number: 14417 (2018) http://dx.doi.org/10.1038/s41598-018-32697-4

The Brain Diet



Scientists at the Medical University of South Carolina (MUSC) have uncovered mechanisms by which high levels of a hormone called FGF23 can reduce brain health.

In results published in the journal PLOS ONE on September 7, 2018, high levels of fibroblast growth factor 23 (FGF23) were associated with structural changes in the brain's frontal lobes. High FGF23 levels are thought to lead to the vascular calcification seen in patients with chronic kidney disease. The study showed that such a process may also affect the brain in patients without chronic kidney disease but with elevated cardiovascular risk factors, according to Leonardo Bonilha, M.D., Ph.D., associate professor of neurology in the MUSC Department of Medicine and director of the study.

"We found that there is a relationship between high levels of FGF23 and a form of structural compromise in the brain," said Bonilha.

This is a whole brain reconstruction of the human structural connectome showing white matter connections between different brain regions.

Credit: Barbara Marebwa and Leonardo Bonilha of the Medical University of South Carolina.

FGF23 is produced in the bone. Normally, FGF23 works in the kidneys and the gut to regulate levels of calcium and phosphate in the body. It is thought to be increased in people who eat a diet high in phosphates, which are often found in foods with preservatives. In people with chronic kidney disease, or in those who consume a diet high in phosphates, the arteries can calcify, which can cause heart attack or stroke. FGF23 may be the reason.

Bonilha and graduate student Barbara Marebwa were interested in knowing if FGF23 could cause brain problems in people who had elevated cardiovascular risk factors, such as high blood pressure, diabetes, or high cholesterol. The idea was to determine if a high FGF23 level, present in people who did not have chronic kidney disease, was an indicator of problems in the brain.

Bonilha and Marebwa tested the idea that FGF23 and cardiovascular risk factors taken together were an indicator of problems with communication between different parts of the brain. They recruited 50 patients for the study, about half of whom had elevated cardiovascular risk factors and about half of whom did not. All of the patients had normal kidney function. The researchers used magnetic resonance imaging to examine the connectomes in patients' brains, a way to see how different regions of their brains were connected. The method allows researchers to examine the white matter of the brain, which is more vulnerable to the type of stress that can occur when vessels become calcified. The frontal lobes, which control learning and complex cognitive functions, have a particularly high density of white matter, and thus may be most vulnerable to this type of stress.

The team looked at a feature of the connectome called modularity, which can reveal how well different parts of the brain are organized. People with abnormally high modularity have higher levels of disconnection in the brain, which may indicate problems with brain health in those areas. The researchers found that, in patients with high levels of FGF23 and cardiovascular risk factors, modularity was also high. In patients without cardiovascular risk factors, FGF23 levels were not associated with increased modularity. These results mean that FGF23 is associated with problems with brain health in people who already have high blood pressure, diabetes, or high cholesterol. As a result, elevated FGF23 levels may lead to structural damage in parts of the brain that may put people at a higher risk of stroke or problems with stroke recovery.
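
For the graph-minded, modularity is a standard network measure: partition the brain regions into modules, then score how much connection weight falls within modules versus between them. A minimal sketch with a hand-built toy graph (Python with networkx; a real connectome would come from diffusion MRI tractography, not from this example):

```python
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for a connectome: nodes are brain regions, weighted
# edges are white-matter connection strengths.
G = nx.Graph()
G.add_weighted_edges_from([
    ("frontal_L",  "frontal_R",  0.9),  # strongly connected frontal pair
    ("frontal_L",  "motor_L",    0.2),  # weak link between modules
    ("motor_L",    "motor_R",    0.8),
    ("motor_R",    "parietal_R", 0.3),
    ("parietal_L", "parietal_R", 0.7),
])

# Find a community partition, then score it with modularity Q.
parts = community.greedy_modularity_communities(G, weight="weight")
Q = community.modularity(G, parts, weight="weight")
print(f"{len(parts)} modules, modularity Q = {Q:.3f}")
# Abnormally high Q means modules are unusually isolated from one
# another -- the "disconnection" the study links to high FGF23.
```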

"It is important to understand the factors that relate to brain health, because brain health is associated with aging and resilience to injury. For example, if you get a stroke and you already have compromised brain health, the stroke may be more severe and you may not recover as well," explained Bonilha.

The work was part of a strategically focused research network (SFRN) grant funded by the American Heart Association to MUSC to examine disparities in stroke recovery. Myles Wolf, holder of an SFRN grant in cardiac and kidney research at Duke, contributed to the work. Together, the research team may have found a potential disparity in stroke recovery by highlighting vulnerability in the brains of patients with high FGF23 levels. For example, people without access to fresh foods may have high levels of FGF23 and thus an increased risk of stroke.

The next step, according to Bonilha, is to determine if lowering FGF23 levels in patients with cardiovascular risk factors can lead to better brain health or even to better outcomes following stroke. Previous work in other laboratories has revealed that FGF23 levels are elevated in people with cardiovascular risk factors and who consume a diet high in phosphates. The new results build on this finding and highlight the importance of a healthy diet in protecting the brain.

"This study is an important first step to lead to strategies to improve dietary habits and improve brain health," said Bonilha.



Contacts and sources:
Heather Woolwine
Medical University of South Carolina (MUSC)

Citation:  Fibroblast growth factor23 is associated with axonal integrity and neural network architecture in the human frontal lobes
Barbara K. Marebwa, Robert J. Adams, Gayenell S. Magwood, Mark Kindy, Janina Wilmskoetter, Myles Wolf, Leonardo Bonilha
PLOS ONE, published September 7, 2018
https://doi.org/10.1371/journal.pone.0203460
 

In the Battle of Cats Vs. Rats, The Rats Are Winning

The first study to document interactions between feral cats and a wild rat colony finds that contrary to popular opinion, cats are not good predators of rats. 

In a novel approach, researchers monitored the behavior and movement of microchipped rats in the presence of cats living in the same area. The rats actively avoided the cats, and only two rat kills were recorded in 79 days. Published as part of a special "rodent issue" in Frontiers in Ecology and Evolution, the findings add to growing evidence that any benefit of using cats to control city rats is outweighed by the threat they pose to birds and other urban wildlife.

"Like any prey, rats overestimate the risks of predation. In the presence of cats, they adjust their behavior to make themselves less apparent and spend more time in burrows," says the study's lead researcher Dr. Michael H. Parsons, a visiting scholar at Fordham University. "This raises questions about whether releasing cats in the city to control rats is worth the risks cats pose to wildlife."

Illustration: Le Chat et les rats
Credit: Codex / Wikimedia Commons

People have long regarded cats as the natural enemy of rats. However, Australian and US researchers say cats prefer smaller, defenseless prey such as birds and smaller native wildlife -- which makes cats a threat to urban ecosystems.

"New Yorkers often boast their rats 'aren't afraid of anything' and are the 'size of a cat'," says Parsons. "Yet cats are commonly released to control this relatively large, defensive and potentially dangerous prey."

"Until now, no one has provided good data on the number of city rats killed by cats," adds co-author Michael A. Deutsch, from Arrow Exterminating Company Inc. "But the data have been very clear as to the effect of cats on native wildlife."

When feral cats invaded a New York City waste recycling center, the researchers took the opportunity to correct the record. Their team was already studying a colony of more than 100 rats living inside the center, by microchipping and releasing the animals to study their life history. When the cats entered the research area, the researchers set up motion-capture video cameras to quantify the effect of the cats on the rats -- the first time this has been studied in a natural setting.

"We wanted to know whether the number of cats present would influence the number of rats observed, and vice versa," says Parsons. "We were also interested whether the presence of cats had any effect on eight common rat behaviors or their direction of movement."

The researchers examined 306 videos taken over 79 days. Although up to three cats were active beside the rat colony each day, only 20 stalking events, three kill attempts and two successful kills were recorded in this time. Both kills took place when cats found rats in hiding; the third attempt was an open-floor chase where the cat lost interest.
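
Put as plain arithmetic, the predation pressure those counts imply is tiny. A quick sketch using the study's reported figures (Python):

```python
# Counts reported from the 79-day video study.
days, videos = 79, 306
stalks, attempts, kills = 20, 3, 2

print(f"Stalking events per day: {stalks / days:.2f}")   # ~0.25
print(f"Kills per day:           {kills / days:.3f}")    # ~0.025
print(f"Kills per stalk:         {kills / stalks:.0%}")  # 10%
# Roughly one kill every 40 days, despite up to three cats active
# beside a colony of more than 100 rats.
```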

The videos also revealed that in the presence of cats, the rats spent less time in the open and more time moving to shelter.

"The presence of cats resulted in fewer rat sightings on the same or following day, while the presence of humans did not affect rat sightings," says Parsons. In contrast, the number of rats seen on a given day did not predict the number of cats seen on the following day.

"We already knew the average weight of the rats was 330 g, much more than a typical 15 g bird or 30 g mouse," says Parsons. "As such, we expected a low predation rate on the rats -- and our study confirmed this."

"We are not saying that cats will not predate city rats, only that conditions must be right for it to happen," adds Deutsch. "The cat must be hungry, have no alternative less-risky food source, and usually needs the element of surprise."

The findings could explain why people continue to release cats as "natural" rat control tools. "People see fewer rats and assume it's because the cats have killed them -- whereas it's actually due to the rats changing their behavior," says Parsons.

"The results of our study suggest the benefits of releasing cats are far outweighed by the risks to wildlife," he adds.

The research team plans to continue collecting data as part of their long-term study and will update their findings as new information becomes available.

"Much more research is needed to better understand the city rat problem, we hope our successes will compel others to perform similar studies in other venues," says Parsons.

But for now, in the battle between New York City's cats and rats, it appears the rats are winning.






Contacts and sources:
Emma Duncan
Frontiers


Citation: Temporal and Space-Use Changes by Rats in Response to Predation by Feral Cats in an Urban Ecosystem
Michael H. Parsons, Peter B. Banks, Michael A. Deutsch and Jason Munshi-South
Front. Ecol. Evol., 27 September 2018 | https://doi.org/10.3389/fevo.2018.00146
https://www.frontiersin.org/articles/10.3389/fevo.2018.00146/full

Did Key Building Blocks for Life Come from Deep Space?

Phosphates, a key building block for life, were found to be generated in outer space and delivered to the early Earth by meteorites or comets.

All living beings need cells and energy to replicate. Without these fundamental building blocks, living organisms on Earth would not be able to reproduce and would simply not exist.

Little was known about a key ingredient of these building blocks, phosphates, until now. University of Hawaii at Manoa researchers, in collaboration with colleagues in France and Taiwan, provide compelling new evidence that this component of life was generated in outer space and delivered to Earth in its first one billion years by meteorites or comets. The phosphorus compounds were then incorporated into biomolecules found in the cells of living beings on Earth.

Churyumov-Gerasimenko
Credit: ESA/Rosetta/NAVCAM

The breakthrough research is outlined in "An Interstellar Synthesis of Phosphorus Oxoacids," authored by UH Manoa graduate student Andrew Turner, now assistant professor at the University of Pikeville, and UH Manoa chemistry Professor Ralf Kaiser in the September issue of Nature Communications.

According to the study, phosphates and diphosphoric acid are two major compounds essential for these building blocks in molecular biology. They are the main constituents of chromosomes, the carriers of genetic information in which DNA is found. Together with phospholipids in cell membranes and adenosine triphosphate, which functions as an energy carrier in cells, they form self-replicating material present in all living organisms.

In an ultra-high vacuum chamber cooled down to 5 K (-450°F) in the W.M. Keck Research Laboratory in Astrochemistry at UH Manoa, the Hawaii team replicated interstellar icy grains coated with carbon dioxide and water, which are ubiquitous in cold molecular clouds, and phosphine. When exposed to ionizing radiation in the form of high-energy electrons to simulate the cosmic rays in space, multiple phosphorus oxoacids like phosphoric acid and diphosphoric acid were synthesized via non-equilibrium reactions.

"On Earth, phosphine is lethal to living beings," said Turner, lead author. "But in the interstellar medium, an exotic phosphine chemistry can promote rare chemical reaction pathways to initiate the formation of biorelevant molecules such as oxoacids of phosphorus, which eventually might spark the molecular evolution of life as we know it."

Kaiser added, "The phosphorus oxoacids detected in our experiments by combination of sophisticated analytics involving lasers, coupled to mass spectrometers along with gas chromatographs, might have also been formed within the ices of comets such as 67P/Churyumov-Gerasimenko, which contains a phosphorus source believed to derive from phosphine." Kaiser says these techniques can also be used to detect trace amounts of explosives and drugs.

Comet 67P/Churyumov-Gerasimenko

Credit: ESA/Rosetta/NAVCAM

"Since comets contain at least partially the remnants of the material of the protoplanetary disk that formed our solar system, these compounds might be traced back to the interstellar medium wherever sufficient phosphine in interstellar ices is available," said Cornelia Meinert of the University of Nice (France).

Upon delivery to Earth by meteorites or comets, these phosphorus oxoacids might have been available for Earth's prebiotic phosphorus chemistry. Hence an understanding of the facile synthesis of these oxoacids is essential to untangle the origin of water-soluble prebiotic phosphorus compounds and how they might have been incorporated into organisms not only on Earth, but potentially in our universe as well.

Turner and Kaiser worked with Meinert and Agnes Chang of National Dong Hwa University (Taiwan) on this project.


Contacts and sources:
Ralf Kaiser
University of Hawaiʻi at Mānoa

Where Are the Extraterrestrials?


“Are we alone in the universe?” The question has fascinated, tantalized and even disconcerted humans for as long as we can remember.

University of California Santa Barbara (UCSB) experimental cosmologist Philip Lubin and his group are using photonics to search Andromeda for signs of alien life.

So far, it would seem that intelligent extraterrestrial life — at least as fits our narrow definition of it — is nowhere to be found. Theories and assumptions abound as to why we have neither made contact with nor seen evidence of advanced extraterrestrial civilizations despite decades-long efforts to make our presence known and to communicate with them.

Hubble Ultra Deep Field
Credit: Hubble Space Telescope, NASA and the European Space Agency.

Meanwhile, a steady stream of discoveries is demonstrating the presence of Earth analogues — planets that, like our own, exist at a “Goldilocks zone” distance from their own respective stars, in which conditions are “just right” for liquid water (and thus life) to exist. Perhaps even more mind-blowing is the idea that there are, on average, as many planets as there are stars.

“That is, I think, one of the amazing discoveries of the last century or so — that planets are common,” said Philip Lubin, an experimental cosmologist and professor of physics at UC Santa Barbara. Given that, and the assumption that planets provide the conditions for life, the question for Lubin’s group has become: Are we looking hard enough for these extraterrestrials?

That is the driver behind the Trillion Planet Survey, a project of Lubin’s student researchers. The ambitious experiment, run almost entirely by students, uses a suite of telescopes near and far aimed at the nearby galaxy of Andromeda as well as other galaxies including our own, a “pipeline” of software to process images and a little bit of game theory.

“First and foremost, we are assuming there is a civilization out there of similar or higher class than ours trying to broadcast their presence using an optical beam, perhaps of the ‘directed energy’ arrayed-type currently being developed here on Earth,” said lead researcher Andrew Stewart, a student at Emory University and a member of Lubin’s group. “Second, we assume the transmission wavelength of this beam to be one that we can detect. Lastly, we assume that this beacon has been left on long enough for the light to be detected by us. If these requirements are met and the extraterrestrial intelligence’s beam power and diameter are consistent with an Earth-type civilization class, our system will detect this signal.”

From Radio Waves to Light Waves
For the last half-century, the dominant broadcast from Earth has taken the form of radio, TV and radar signals, and seekers of alien life, such as the scientists at the Search for Extraterrestrial Intelligence (SETI) Institute, have been using powerful radio telescopes to look for those signals from other civilizations. Recently, however, thanks to the exponentially accelerating progress of photonic technology, optical and infrared wavelengths have begun to offer opportunities to search via optical signals, which allow vastly longer-range detection for comparable systems.

In a 2016 paper titled “The Search for Directed Intelligence” (or SDI), Lubin outlined the fundamental detection and game theory of a “blind-blind” system in which neither we, nor the extraterrestrial civilization are aware of each other but wish to find one another. That paper was based on the application of photonics developed at UC Santa Barbara in Lubin’s group for the propulsion of small spacecraft through space at relativistic speeds (i.e. a significant fraction of the speed of light) to enable the first interstellar missions. That ongoing project is funded by NASA’s Starlight and billionaire Yuri Milner’s Breakthrough Starshot programs, both of which use the technology developed at UCSB. The 2016 paper shows that the technology we are developing today would be the brightest light in the universe and thus capable of being seen across the entire universe.

Of course, not everyone is comfortable with advertising our presence to other, potentially advanced, extraterrestrial civilizations.

“Broadcasting our presence to the universe, believe it or not, turns out to be a very controversial topic,” Stewart said, citing bureaucratic issues that arise whenever beaconing is discussed, as well as the difficulty in obtaining the necessary technology of the scale required. Consequently, only a few, tentative signals have ever been sent in a directed fashion, including the famous Voyager 1 probe with its message-in-a-bottle-like golden record.

Tipping the concept on its head, the researchers asked, ‘What if there are other civilizations out there that are less shy about broadcasting their presence?’

“At the moment, we’re assuming that they’re not using gravity waves or neutrinos or something that’s very difficult for us to detect,” Lubin said. But optical signals could be detected by small (meter class) diameter telescopes such as those at the Las Cumbres Observatory’s robotically controlled global network.

“In no way are we suggesting that radio SETI should be abandoned in favor of optical SETI,” Stewart added. “We just think the optical bands should be explored as well.”

Searching the Stars
“We’re in the process of surveying (Andromeda) right now and getting what’s called ‘the pipeline’ up and running,” said researcher Alex Polanski, a UC Santa Barbara undergraduate in Lubin’s group. A set of photos taken by the telescopes, each of which takes a 1/30th slice of Andromeda, will be knit together to create a single image, he explained. That one photograph will then be compared to a more pristine image in which there are no known transient signals — interfering signals from, say, satellites or spacecraft — in addition to the optical signals emanating from the stellar systems themselves. The survey photo would be expected to have the same signal values as the pristine “control” photo, leading to a difference of zero. But a difference greater than zero could indicate a transient signal source, Polanski explained. Those transient signals would then be further processed in the software pipeline developed by Stewart to kick out false positives. In the future the team plans to use simultaneous multiple color imaging to help remove false positives as well.
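
At its core, the pipeline’s test is classic difference imaging: subtract the control image from the survey image and flag pixels that stand far above the noise. A minimal sketch of the idea (Python with NumPy; the images are synthetic stand-ins with one artificial transient injected, not survey data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a "pristine" control image of the field, and a
# survey image that differs only by noise plus one injected transient.
control = rng.normal(100.0, 1.0, size=(512, 512))  # baseline sky + stars
survey  = control + rng.normal(0.0, 1.0, size=control.shape)
survey[200, 300] += 50.0                           # injected transient

# Difference imaging: subtract, then flag pixels well above the noise.
diff  = survey - control
sigma = diff.std()
hits  = np.argwhere(diff > 5 * sigma)
print(f"{len(hits)} candidate transient pixel(s): {hits.tolist()}")
# A real pipeline then vets candidates (satellite trails, cosmic rays,
# variable stars) before treating anything as a possible beacon.
```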

“One of the things the software checks for is, say, a satellite that did go through our image,” said Kyle Friedman, a senior from Granada Hills High School in Los Angeles, who is conducting research in Lubin’s group. “It wouldn’t be small; it would be pretty big, and if that were to happen the software would immediately recognize it and throw out that image before we actually even process it.”

Other vagaries, according to the researchers, include sky conditions, which is why it’s important to have several telescopes monitoring Andromeda during their data run.

Thanks to the efforts of Santa Barbara-based computer engineer Kelley Winters and the guidance of Lubin group project scientist Jatila van der Veen, the data is in good hands. Winters’ cloud-based Linux server provides a flexible, highly connected platform for the data pipeline software to perform its image analysis, while van der Veen will apply her digital image processing expertise to bring this project to future experimental cosmologists.

For Laguna Blanca School senior and future physicist Caitlin Gainey, who joins the UC Santa Barbara physics freshman class this year, the project is a unique opportunity.

“In the Trillion Planet Survey especially, we experience something very inspiring: We have the opportunity to look out of our earthly bubble at entire galaxies, which could potentially have other beings looking right back at us,” she said. “The mere possibility of extraterrestrial intelligence is something very new and incredibly intriguing, so I’m excited to really delve into the search this coming year.”

The search, for any SETI-watcher, is an exercise in patience and optimism. Andromeda is 2.5 million light-years away, van der Veen pointed out, so any signal detected now would have been sent at least 2.5 million years ago — more than long enough for the civilization that sent it to have died out by the time the light reaches us.

“That does not mean we should not look,” van der Veen said. “After all, we look for archaeological relics and fossils, which tell us about the history of Earth. Finding ancient signals will definitely give us information about the history of evolution of life in the cosmos, and that would be amazing.”

While the data run and processing time for this particular project could occur in a span of weeks, according to the researchers this sequence could be repeated indefinitely. Theoretically, like all the sunrise and sunset watchers, and stargazers before us, we could look at the sky forever.

“I think if you were to take someone outside and you were to point at some random star in the night sky and see that is where life is, I think you would be hard pressed to find anyone who would not look at that star and just feel something very deep within themselves,” Polanski said. “Some very deep connection to whatever is up there or some kind of solace, I think, knowing that we’re not alone.”

The latest UCSB data and game theory of the “blind-blind” detection strategy used is being presented at the NASA Technosignatures workshop in Houston on September 28.



Contacts and sources:
Sonia Fernandez
University of California Santa Barbara