Unseen Is Free

Friday, May 26, 2017

Would You Eat Shape-Shifting Food? Programmable Pasta Transforms When Wet

"Don't play with your food" is a saying that MIT researchers are taking with a grain or two of salt. The team is finding ways to make the dining experience interactive and fun, with food that can transform its shape when water is added.

The researchers, from MIT's Tangible Media Group, have concocted something akin to edible origami, in the form of flat sheets of gelatin and starch that, when submerged in water, instantly sprout into three-dimensional structures, including common pasta shapes such as macaroni and rotini.

These pasta shapes formed when a flat 2-D film was immersed in water.
Image: Michael Indresano Production

The edible films can also be engineered to fold into the shape of a flower as well as other unconventional configurations. Playing with the films' culinary potential, the researchers created flat discs that wrap around beads of caviar, similar to cannoli, as well as spaghetti that spontaneously divides into smaller noodles when dunked in hot broth.



The researchers presented their work in a paper this month at the Association for Computing Machinery's 2017 Computer-Human Interaction Conference on Human Factors in Computing Systems. They describe their shape-morphing creations as not only culinary performance art, but also a practical way to reduce food-shipping costs. For instance, the edible films could be stacked together and shipped to consumers, then morph into their final shape later, when immersed in water.

"We did some simple calculations, such as for macaroni pasta, and even if you pack it perfectly, you still will end up with 67 percent of the volume as air," says Wen Wang, a co-author on the paper and a former graduate student and research scientist in MIT's Media Lab. "We thought maybe in the future our shape-changing food could be packed flat and save space."

Wang's co-authors are Lining Yao, lead author and former graduate student; Chin-Yi Cheng, a former graduate student; Daniel Levine, a current graduate student; Teng Zhang of Syracuse University; and Hiroshi Ishii, the Jerome B. Wiesner Professor in media arts and sciences.

"This project is the one of the latest to materialize our vision of 'radical atoms' -- combining human interactions with dynamic physical materials, which are transformable, conformable, and informable," Ishii says.

Programmable pasta

At MIT, Wang and Yao had been investigating the response of various materials to moisture. They were working mostly with a certain bacterium that can transform its shape, shrinking and expanding in response to humidity. Coincidentally, that same bacterium is used to ferment soybeans to make a common Japanese dish known as natto. Yao and Wang wondered whether other edible materials could be designed to change their shape when exposed to water.

Phytoplankton pasta salad with heirloom tomatoes and wild sorrel, by Matthew Delisle
Image: Michael Indresano Production

They started playing around with gelatin, a substance that naturally expands when it absorbs water. Gelatin can expand to varying degrees depending on its density -- a characteristic that the team exploited in creating their shape-transforming structures.

Yao and Wang engineered a flat, two-layer film made from gelatin of two different densities. The top layer is more densely packed, and thus able to absorb more water, than the bottom. When the entire structure is immersed in water, the top layer curls over the bottom layer, forming a slowly rising arch.

The researchers looked for ways to control where and to what degree the structure bends, so that they might create different three-dimensional shapes from the gelatin sheet. They eventually settled on 3-D printing strips of edible cellulose over the top gelatin layer. The cellulose strips naturally absorb very little water, and they found that the strips could act as a water barrier, controlling the amount of water that the top gelatin layer is exposed to. By printing cellulose in various patterns onto gelatin, they could predictably control the structure's response to water and the shapes that it ultimately assumed.

"This way you can have programmability," Yao says. "You ultimately start to control the degree of bending and the total geometry of the structure."

Designing for a noodle democracy

Wang and Yao created a number of different shapes from the gelatin films, from macaroni- and rigatoni-like configurations to shapes that resembled flowers and horse saddles.

Curious as to how their designs might be implemented in a professional kitchen, the team showed their engineered edibles to the head chef of a high-end Boston restaurant. The scientists and chef struck up a short collaboration, during which they designed two culinary creations: transparent discs of gelatin flavored with plankton and squid ink that instantly wrap around small beads of caviar; and long fettuccine-like strips, made from two gelatins that melt at different temperatures, causing the noodles to spontaneously divide when hot broth melts away certain sections.

"They had great texture and tasted pretty good," Yao says.

The team recorded the cellulose patterns and the dimensions of all of the structures they were able to produce, and also tested mechanical properties such as toughness, organizing all this data into a database. Co-authors Zhang and Cheng then built computational models of the material's transformations, which they used to design an online interface for users to design their own edible, shape-transforming structures.

Helix noodle with Point Judith squid, confit egg yolk, and white hoisin, by Matthew Delisle
Image: Michael Indresano Production

"We did many lab tests and collected a database, within which you can pick different shapes, with fabrication instructions," Wang says. "Reversibly, you can also select a basic pattern from the database and adjust the distribution or thickness, and can see how the final transformation will look."

The researchers used a laboratory 3-D printer to pattern cellulose onto films of gelatin, but they have outlined ways in which users can reproduce similar effects with more common techniques, such as screen printing.

"We envision that the online software can provide design instructions, and a startup company can ship the materials to your home," Yao says. "With this tool, we want to democratize the design of noodles."

This research was funded, in part, by the MIT Media Lab and Food + Future, a Cambridge-based startup accelerator sponsored by Target Corporation.


Contacts and sources:
Jennifer Chu
Massachusetts Institute of Technology (MIT)


Paper: Transformative Appetite: Shape-changing food transforms from 2D to 3D by water interaction through cooking.
http://dl.acm.org/citation.cfm?id=3026019

Radioactive Emissions Greater Than 2011 Fukushima Accident Possible: US Nuclear Regulators Greatly Underestimate Potential for Nuclear Disaster

The U.S. Nuclear Regulatory Commission (NRC) relied on faulty analysis to justify its refusal to adopt a critical measure for protecting Americans from the occurrence of a catastrophic nuclear-waste fire at any one of dozens of reactor sites around the country, according to an article in the May 26 issue of Science magazine. Fallout from such a fire could be considerably larger than the radioactive emissions from the 2011 Fukushima accident in Japan.

This image captures the spread of radioactivity from a hypothetical fire in a high-density spent-fuel pool at the Peach Bottom Nuclear Power Plant in Pennsylvania. Based on the guidance from the US Environmental Protection Agency and the experience from the Chernobyl and Fukushima accidents, populations in the red and orange areas would have to be relocated for many years, and many in the yellow area would relocate voluntarily. In this scenario, which is based on real weather patterns that occurred in July 2015, four major cities would be contaminated (New York City, Philadelphia, Baltimore and Washington, D.C.), resulting in the displacement of millions of people.

Photo courtesy of Michael Schoeppner, Princeton University, Program on Science and Global Security


Published by researchers from Princeton University and the Union of Concerned Scientists, the article argues that NRC inaction leaves the public at high risk from fires in spent-nuclear-fuel cooling pools at reactor sites. The pools -- water-filled basins that store and cool used radioactive fuel rods -- are so densely packed with nuclear waste that a fire could release enough radioactive material to contaminate an area twice the size of New Jersey. On average, radioactivity from such an accident could force approximately 8 million people to relocate and result in $2 trillion in damages.

These catastrophic consequences, which could be triggered by a large earthquake or a terrorist attack, could be largely avoided by regulatory measures that the NRC refuses to implement. Using a biased regulatory analysis, the agency excluded the possibility of an act of terrorism as well as the potential for damage from a fire beyond 50 miles of a plant. Failing to account for these and other factors led the NRC to significantly underestimate the destruction such a disaster could cause.

"The NRC has been pressured by the nuclear industry, directly and through Congress, to low-ball the potential consequences of a fire because of concerns that increased costs could result in shutting down more nuclear power plants," said paper co-author Frank von Hippel, a senior research physicist at Princeton's Program on Science and Global Security (SGS), based at the Woodrow Wilson School of Public and International Affairs. "Unfortunately, if there is no public outcry about this dangerous situation, the NRC will continue to bend to the industry's wishes."

Von Hippel's co-authors are Michael Schoeppner, a former postdoctoral researcher at Princeton's SGS, and Edwin Lyman, a senior scientist at the Union of Concerned Scientists.

Spent-fuel pools were brought into the spotlight following the March 2011 nuclear disaster in Fukushima, Japan. A 9.0-magnitude earthquake caused a tsunami that struck the Fukushima Daiichi nuclear power plant, disabling the electrical systems necessary for cooling the reactor cores. This led to core meltdowns at three of the six reactors at the facility, hydrogen explosions, and a release of radioactive material.

"The Fukushima accident could have been a hundred times worse had there been a loss of the water covering the spent fuel in pools associated with each reactor," von Hippel said. "That almost happened at Fukushima in Unit 4."

In the aftermath of the Fukushima disaster, the NRC considered proposals for new safety requirements at U.S. plants. One was a measure prohibiting plant owners from densely packing spent-fuel pools, requiring them to expedite transfer of all spent fuel that has cooled in pools for at least five years to dry storage casks, which are inherently safer. Densely packed pools are highly vulnerable to catching fire and releasing huge amounts of radioactive material into the atmosphere.

The NRC analysis found that a fire in a spent-fuel pool at an average nuclear reactor site would cause $125 billion in damages, while expedited transfer of spent fuel to dry casks could reduce radioactive releases from pool fires by 99 percent. However, the agency decided the possibility of such a fire is so unlikely that it could not justify requiring plant owners to pay the estimated cost of $50 million per pool.

The NRC cost-benefit analysis assumed there would be no consequences from radioactive contamination beyond 50 miles from a fire. It also assumed that all contaminated areas could be effectively cleaned up within a year. Both of these assumptions are inconsistent with experience after the Chernobyl and Fukushima accidents.

In two previous articles, von Hippel and Schoeppner released figures that correct for these and other errors and omissions. They found that millions of residents in surrounding communities would have to relocate for years, resulting in total damages of $2 trillion -- nearly 20 times the NRC's result. Because the nuclear industry is legally liable for only $13.6 billion under the Price-Anderson Act of 1957, U.S. taxpayers would have to cover the remaining costs.
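The disagreement is, at bottom, a cost-benefit calculation. The sketch below reruns the arithmetic with the dollar figures quoted in this article; the annual fire probability is an assumed placeholder, since neither the NRC nor the authors' number is given here.

```python
# Illustrative expected-value comparison using the figures quoted above.
# The annual fire probability is an assumed placeholder, not a number
# from the NRC or the Science article.

mitigation_cost_per_pool = 50e6    # expedited dry-cask transfer, $ (from article)
nrc_damage_estimate      = 125e9   # NRC estimate of fire damages, $ (from article)
authors_damage_estimate  = 2e12    # von Hippel/Schoeppner estimate, $ (from article)
release_reduction        = 0.99    # release reduction from expedited transfer (from article)

assumed_fire_probability = 1e-5    # per pool per year, purely illustrative

def expected_benefit(damages, years=30):
    """Expected damages avoided over a pool's remaining operating life."""
    return assumed_fire_probability * years * damages * release_reduction

for label, damages in [("NRC estimate", nrc_damage_estimate),
                       ("Authors' estimate", authors_damage_estimate)]:
    benefit = expected_benefit(damages)
    print(f"{label}: expected benefit ${benefit/1e6:.0f}M "
          f"vs. mitigation cost ${mitigation_cost_per_pool/1e6:.0f}M")
```

With this placeholder probability the measure narrowly fails under the NRC's damage estimate but passes the authors' estimate by an order of magnitude, which is why the size of the damage estimate, and not only the assumed probability, drives the regulatory conclusion.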

The authors point out that if the NRC does not take action to reduce this danger, Congress has the authority to fix the problem. Moreover, the authors suggest that states that provide subsidies to uneconomical nuclear reactors within their borders could also play a constructive role by making those subsidies available only for plants that agreed to carry out expedited transfer of spent fuel.

"In far too many instances, the NRC has used flawed analysis to justify inaction, leaving millions of Americans at risk of a radiological release that could contaminate their homes and destroy their livelihoods," said Lyman. "It is time for the NRC to employ sound science and common-sense policy judgments in its decision-making process."

The paper, "Nuclear safety regulation in the post-Fukushima era," was published May 26 in Science. For more information, see von Hippel and Schoeppner's previous papers, "Reducing the Danger from Fires in Spent Fuel Pools" and "Economic Losses From a Fire in a Dense-Packed U.S. Spent Fuel Pool," which were published in Science & Global Security in 2016 and 2017 respectively. The Science article builds upon the findings of a Congressionally-mandated review by the National Academy of Sciences, on which von Hippel served.



Contacts and sources:
B. Rose Kelly 
Princeton University, Woodrow Wilson School of Public and International Affairs

Journey to a Metal World: Psyche to Launch in 2022

Psyche, NASA's Discovery Mission to a unique metal asteroid, has been moved up one year, with launch in the summer of 2022 and a planned arrival at the main belt asteroid in 2026 -- four years earlier than the original timeline.

"We challenged the mission design team to explore if an earlier launch date could provide a more efficient trajectory to the asteroid Psyche, and they came through in a big way," said Jim Green, director of the Planetary Science Division at NASA Headquarters in Washington. "This will enable us to fulfill our science objectives sooner and at a reduced cost."


Credit: ASU

Psyche follows an orbit in the outer part of the main asteroid belt, at an average distance from the Sun of 3 astronomical units (AU); Earth orbits at 1 AU. As asteroids go, Psyche is large (about 150 miles in diameter), dense (7,000 kg/m³), and made almost entirely of nickel-iron metal. It is the only known place in our solar system where we can examine directly what is almost certainly a metallic core.

Discovered in 1852, Psyche is about three times farther from the Sun than Earth. The asteroid measures about 130 miles (210 kilometers) in diameter and is thought to be made mostly of metallic iron and nickel, similar to Earth's core.

Scientists think Psyche may be the core of an early planet stripped of its rocky exterior by collisions, and could give insight into the violent collisions that created Earth and other terrestrial planets. The mission will help scientists understand how planets and other bodies separated into their layers – including cores, mantles and crusts – early in their histories.

Artist's concept of the Psyche spacecraft, which will conduct a direct exploration of an asteroid thought to be a stripped planetary core.
Image credit: SSL/ASU/P. Rubin/NASA/JPL-Caltech

The Discovery program announcement of opportunity had directed teams to propose missions for launch in either 2021 or 2023. The Lucy mission was selected for the first launch opportunity in 2021, and Psyche was to follow in 2023. Shortly after selection in January, NASA gave the direction to the Psyche team to research earlier opportunities.

"The biggest advantage is the excellent trajectory, which gets us there about twice as fast and is more cost effective," said Principal Investigator Lindy Elkins-Tanton of Arizona State University in Tempe. "We are all extremely excited that NASA was able to accommodate this earlier launch date. The world will see this amazing metal world so much sooner."



The revised trajectory is more efficient, as it eliminates the need for an Earth gravity assist, which ultimately shortens the cruise time. In addition, the new trajectory stays farther from the sun, reducing the amount of heat protection needed for the spacecraft. The trajectory will still include a Mars gravity assist in 2023.

"The change in plans is a great boost for the team and the mission," said Psyche Project Manager Henry Stone at NASA's Jet Propulsion Laboratory, Pasadena, California. "Our mission design team did a fantastic job coming up with this ideal launch opportunity."


Credit: NASA/ASU

The Psyche spacecraft is being built by Space Systems Loral (SSL), Palo Alto, California. In order to support the new mission trajectory, SSL redesigned the solar array system from a four-panel array in a straight row on either side of the spacecraft to a more powerful five-panel x-shaped design, commonly used for missions requiring more capability. Much as a sports car pairs a small body with a powerful engine, the combination of a relatively small spacecraft body with a very high-power solar array will let Psyche speed to its destination at a faster pace than is typical for a larger spacecraft.

"By increasing the size of the solar arrays, the spacecraft will have the power it needs to support the higher velocity requirements of the updated mission," said SSL Psyche Program Manager Steve Scott.

The Psyche Mission

Psyche, an asteroid orbiting the sun between Mars and Jupiter, is made almost entirely of nickel-iron metal. As such, it offers a unique look into the violent collisions that created Earth and the terrestrial planets.


Credit: ASU

The Psyche Mission was selected for flight earlier this year under NASA's Discovery Program, a series of lower-cost, highly focused robotic space missions that are exploring the solar system.

The scientific goals of the Psyche mission are to understand the building blocks of planet formation and explore firsthand a wholly new and unexplored type of world. The mission team seeks to determine whether Psyche is the core of an early planet, how old it is, whether it formed in similar ways to Earth's core, and what its surface is like. The spacecraft's instrument payload will include magnetometers, multispectral imagers, and a gamma ray and neutron spectrometer.


Contacts and sources:
D.C. Agle, Jet Propulsion Laboratory, Pasadena, Calif.
Karin Valentine, Arizona State University School of Earth and Space Exploration, Tempe

Fast-Growing Galaxies from Early Universe Discovered

A team of astronomers including Carnegie's Eduardo Bañados and led by Roberto Decarli of the Max Planck Institute for Astronomy has discovered a new kind of galaxy which, although extremely old--formed less than a billion years after the Big Bang--creates stars more than a hundred times faster than our own Milky Way.

The team's discovery could help solve a cosmic puzzle--a mysterious population of surprisingly massive galaxies from when the universe was only about 10 percent of its current age.

This is an artist's impression of a quasar and neighboring merging galaxy. The galaxies observed by the team are so distant that no detailed images are possible at present. This combination of images of nearby counterparts gives an impression of how they might look in more detail.

Credit: The image was created by the Max Planck Institute for Astronomy using material from the NASA/ESA Hubble Space Telescope.

After first observing these galaxies a few years ago, astronomers proposed that they must have been created from hyper-productive precursor galaxies, which is the only way so many stars could have formed so quickly. But astronomers had never seen anything that fit the bill for these precursors until now.

This newly discovered population could solve the mystery of how these extremely large galaxies came to have hundreds of billions of stars in them when they formed only 1.5 billion years after the Big Bang, requiring very rapid star formation.

The team made this discovery by accident when investigating quasars, which are supermassive black holes that sit at the center of enormous galaxies, accreting matter. They were trying to study star formation in the galaxies that host these quasars.

"But what we found, in four separate cases, were neighboring galaxies that were forming stars at a furious pace, producing a hundred solar masses' worth of new stars per year," Decarli explained.

"Very likely it is not a coincidence to find these productive galaxies close to bright quasars. Quasars are thought to form in regions of the universe where the large-scale density of matter is much higher than average. Those same conditions should also be conducive to galaxies forming new stars at a greatly increased rate," added Fabian Walter, also of Max Planck.

"Whether or not the fast-growing galaxies we discovered are indeed precursors of the massive galaxies first seen a few years back will require more work to see how common they actually are," Bañados explained.

Decarli's team already has follow-up investigations planned to explore this question.

The team also found what appears to be the earliest known example of two galaxies undergoing a merger, which is another major mechanism of galaxy growth. The new observations provide the first direct evidence that such mergers have been taking place even at the earliest stages of galaxy evolution, less than a billion years after the Big Bang.

Their findings are published in Nature.



Contacts and sources:
Eduardo Bañados
The Carnegie Institution for Science

Thursday, May 25, 2017

FAU Archaeologist Involved in Groundbreaking Discovery of Early Human Life in Ancient Peru

A-tisket, A-tasket. You can tell a lot from a basket. Especially if it comes from the ruins of an ancient civilization inhabited by humans nearly 15,000 years ago during the Late Pleistocene and Early Holocene ages.

An archaeologist from Florida Atlantic University's Harbor Branch Oceanographic Institute is among a team of scientists who made a groundbreaking discovery in Huaca Prieta in coastal Peru - home to one of the earliest and largest pyramids in South America.

 Hundreds of thousands of artifacts, including intricate and elaborate hand-woven baskets excavated between 2007 and 2013 in Huaca Prieta, reveal that early humans in that region were a lot more advanced than originally thought and had very complex social networks.

James M. Adovasio, Ph.D., D.Sc., is co-author of the study and a world acclaimed archaeologist at FAU's Harbor Branch, who is the foremost authority on ancient textiles and materials such as those used in basketry.
Credit: Florida Atlantic University's Harbor Branch Oceanographic Institute

For decades, archaeologists have argued about the origins and emergence of complex society in Peru. Did it first happen in the highlands with groups who were dependent on agriculture, or did it happen along the coast with groups who were dependent on seafood? Evidence from the site, published in Science Advances, indicates a more rapid development of cultural complexity along the Pacific coast than previously thought.

"The mounds of artifacts retrieved from Huaca Prieta include food remains, stone tools and other cultural features such as ornate baskets and textiles, which really raise questions about the pace of the development of early humans in that region and their level of knowledge and the technology they used to exploit resources from both the land and the sea," said James M. Adovasio, Ph.D., D.Sc., co-author of the study and a world acclaimed archaeologist at FAU's Harbor Branch, who is the foremost authority on ancient textiles and materials such as those used in basketry.

Among the artifacts excavated are tools used to capture deep-sea fish such as herring. The variety of hooks they used indicates the diversity of fishing that took place at that time and almost certainly the use of boats that could withstand rough waters. These ancient peoples managed to develop a very efficient means of extracting seaside resources and devised complex techniques to collect those resources. They also combined their exploitation of maritime economy with growing crops like chili pepper, squash, avocado and some form of a medicinal plant on land in a way that produced a large economic surplus.

James M. Adovasio, Ph.D., D.Sc., co-author of the study and a world acclaimed archaeologist at FAU’s Harbor Branch, is the foremost authority on ancient textiles and materials such as those used in basketry.
Credit: FAU

"These strings of events that we have uncovered demonstrate that these people had a remarkable capacity to utilize different types of food resources, which led to a larger society size and everything that goes along with it such as the emergence of bureaucracy and highly organized religion," said Adovasio.

Adovasio's focus during the excavation was the extensive collection of basket remnants retrieved from the site, which were made from diverse materials including a local reed that is still used today by modern basket makers. Some of the utilitarian baskets discovered may date back as far as 11,000 years, while the more elaborate baskets made from domesticated cotton using some of the oldest dyes known in the New World are approximately 4,000 years old.

"To make these complicated textiles and baskets indicates that there was a standardized or organized manufacturing process in place and that all of these artifacts were much fancier than they needed to be for that time period," said Adovasio. "Like so many of the materials that were excavated, even the baskets reflect a level of complexity that signals a more sophisticated society as well as the desire for and a means for showing social stature. All of these things together tell us that these early humans were engaged in very complicated social relationships with each other and that these fancy objects all bespeak that kind of social messaging."

The late archaeologist Junius B. Bird was the first to excavate Huaca Prieta in the late 1940s after World War II and his original collection is housed in the American Museum of Natural History in New York. This latest excavation is only the second one to take place at this site, but this time using state-of-the-art archaeological technology. This recent excavation took approximately six years to complete and included a total of 32 excavation units and trenches, 32 test pits, and 80 geological cores that were placed on, around and between the Huaca Prieta and Paredones mounds as well as other sites. These artifacts are now housed in a museum in Lima, Peru.

Leading the team of scientists is Tom D. Dillehay, Ph.D., principal investigator and an anthropologist from Vanderbilt University. The final report of this excavation will be published in a book by the University of Texas Press later this summer. Adovasio and Dillehay plan to go back to Peru within a year to further examine some of the as-yet-unstudied basket specimens, especially the very earliest ones, which are among the oldest in the New World.




Contacts and sources:
Gisele Galoustian
Florida Atlantic University 

Zika Reached Miami at Least Four Times, Caribbean Travel Likely Responsible

With mosquito season looming in the Northern Hemisphere, doctors and researchers are poised to take on a new round of Zika virus infections.

Now a new study by a large group of international researchers led by scientists at The Scripps Research Institute (TSRI) explains how Zika virus entered the United States via Florida in 2016 -- and how it might re-enter the country this year.

By sequencing the virus's genome at different points in the outbreak, the researchers created a family tree showing where cases originated and how quickly they spread. They discovered that transmission of Zika virus began in Florida at least four -- and potentially up to forty -- times last year. The researchers also traced most of the Zika lineages back to strains of the virus in the Caribbean.


"Without these genomes, we wouldn't be able to reconstruct the history of how the virus moved around," said TSRI infectious disease researcher and senior author of the study, Kristian G. Andersen, who also serves as director of infectious disease genomics at the Scripps Translational Science Institute (STSI). "Rapid viral genome sequencing during ongoing outbreaks is a new development that has only been made possible over the last couple of years."

The research was published May 24, 2017, in the journal Nature. This was one of three related studies, published simultaneously in Nature journals, exploring the transmission and evolution of Zika virus. A fourth study was also published in Nature Protocols providing details of the technologies used by the researchers.

Why Miami?

By sequencing Zika virus genomes from humans and mosquitoes -- and analyzing travel and mosquito abundance data -- the researchers found that several factors created what TSRI Research Associate Nathan D. Grubaugh called a "perfect storm" for the spread of Zika virus in Miami.

"This study shows why Miami is special," said Grubaugh, the lead author of the study.

First, Grubaugh explained, Miami is home to year-round populations of Aedes aegypti mosquitoes, the main species that transmits Zika virus. The area is also a significant travel hub, bringing in more international air and sea traffic than any other city in the continental United States in 2016. Finally, Miami is an especially popular destination for travelers who have visited Zika-afflicted areas.

The researchers found that travel from the Caribbean Islands may have significantly contributed to cases of Zika reaching the city. Of the 5.7 million international travelers entering Miami by flights and cruise ships between January and June of 2016, more than half arrived from the Caribbean.

Killing Mosquitos Shows Results

The researchers believe Zika virus may have started transmission in Miami up to 40 times, but most travel-related cases did not lead to any secondary infections locally. The virus was more likely to reach a dead end than keep spreading.

The researchers found that one reason for the dead-ends was a direct connection between mosquito control efforts and disease prevention. "We show that if you decrease the mosquito population in an area, the number of Zika infections goes down proportionally," said Andersen. "This means we can significantly limit the risk of Zika virus by focusing on mosquito control. This is not too surprising, but it's important to show that there is an almost perfect correlation between the number of mosquitos and the number of human infections."

TSRI Research Associate Nathan D. Grubaugh works with TSRI Graduate Student Karthik Gangavarapu to map the spread of Zika virus.
Photo by Faith Hark

Based on data from the outbreak, Andersen sees potential in stopping the virus through mosquito control efforts in both Florida and other infected countries, instead of, for example, through travel restrictions. "Given how many times the introductions happened, trying to restrict traffic or movement of people obviously isn't a solution. Focusing on disease prevention and mosquito control in endemic areas is likely to be a much more successful strategy," he said.

When the virus did spread, the researchers found that splitting Miami into designated Zika zones -- often done by neighborhood or city block -- didn't accurately represent how the virus was moving. Within each Zika zone, the researchers discovered a mixing of multiple Zika lineages, suggesting the virus wasn't well-confined, likely moving around with infected people.

Andersen and Grubaugh hope these lessons from the 2016 epidemic will help scientists and health officials respond even faster to prevent Zika's spread in 2017.

Behind the Data

Understanding Zika's timeline required a large international team of scientists and partnerships with several health agencies. In fact, the study was a collaboration of more than 60 researchers from nearly 20 institutions, including study co-leaders at the U.S. Army Medical Research Institute of Infectious Diseases, Florida Gulf Coast University, the University of Oxford, the Fred Hutchinson Cancer Research Center, the Florida Department of Health and the Broad Institute of MIT and Harvard.

The scientists also designed a new method of genomic sequencing just to study the virus. Because Zika virus is hard to collect in the blood of those infected, it was a challenge for the researchers to isolate enough of its genetic material for sequencing. To solve this problem, the team, together with Joshua Quick and Nick Loman at the University of Birmingham in the UK, developed two different protocols to break apart the genetic material they could find and reassemble it in a useful way for analysis.

With these new protocols, the researchers sequenced the virus from 28 of the reported 256 Zika cases in Florida, as well as seven mosquito pools, to model what happened in the larger patient group. As they worked, the scientists released their data immediately publicly to help other scientists. They hope to release more data -- and analysis -- in real time as cases mount in 2017.

The new study was published with three companion papers, also in Nature journals, that explore Zika's spread in other parts of the Americas.




Contacts and sources:
Madeline McCurry-Schmidt
The Scripps Research Institute (TSRI)

The new study, "Genomic epidemiology reveals multiple introductions of Zika virus into the United States," also included authors from the University of Miami, the University of Birmingham, Colorado State University, St. Michael's Hospital (Toronto), the University of Toronto, the University of Washington, Tulane University, Miami-Dade County Mosquito Control, the University of Florida, the University of Edinburgh and the National Institutes of Health.

Sediment from Himalayas May Have Made 2004 Indian Ocean Earthquake More Severe

Sediment that eroded from the Himalayas and Tibetan Plateau over millions of years was transported thousands of kilometers by rivers and deposited in the Indian Ocean, where it became sufficiently thick over time to generate temperatures warm enough to strengthen the sediment and increase the severity of the catastrophic 2004 Sumatra earthquake.

The magnitude 9.2 earthquake on Dec. 26, 2004, generated a massive tsunami that devastated coastal regions of the Indian Ocean. The earthquake and tsunami together killed more than 250,000 people, making it one of the deadliest natural disasters in history.

Researchers carry a sediment core.

Credit: Tim Fulton, IODP-JRSO

An international team of scientists that outlined the process of sediment warming says the same mechanism could be in place in the Cascadia Subduction Zone off the Pacific Northwest coast of North America, as well as off Iran, Pakistan and in the Caribbean.

Results of the research, which was conducted as part of the International Ocean Discovery Program, are being published this week in the journal Science.

"The 2004 Indian Ocean tsunami was triggered by an unusually strong earthquake with an extensive rupture area," said expedition co-leader Lisa McNeill, an Oregon State University graduate now at the University of Southampton. "We wanted to find out what caused such a large earthquake and tsunami, and what it might mean for other regions with similar geological properties."

The research team sampled for the first time sediment and rocks from the tectonic plate that feeds the Sumatra subduction zone. From the research vessel JOIDES Resolution, the team drilled down 1.5 kilometers below the seabed, measured different properties of the sediments, and ran simulations to calculate how the sediment and rock behaves as it piles up and travels eastward 250 kilometers toward the subduction zone.

"We discovered that in some areas where the sediments are especially thick, dehydration of the sediments occurred before they were subducted," noted Marta Torres, an Oregon State University geochemist and co-author on the study. "Previous earthquake models assumed that dehydration occurred after the material was subducted, but we had suspected that it might be happening earlier in some margins.

"The earlier dehydration creates stronger, more rigid material prior to subduction, resulting in a very large fault area that is prone to rupture and can lead to a bigger and more dangerous earthquake."

Torres explained that when the scientists examined the sediments, they found water between the sediment grains that was less salty than seawater only within a zone where the plate boundary fault develops, some 1.2 to 1.4 kilometers below the seafloor.

"This along with some other chemical changes are clear signals that it was an increase in temperature from the thick accumulation of sediment that was dehydrating the minerals," Torres said.

Lead author Andre Hüpers of the University of Bremen in Germany said that the discovery will generate new interest in other subduction zone sites that also have thick, hot sediment and rock, especially those areas where the hazard potential is unknown.

The Cascadia Subduction Zone is one of the most widely studied sites in the world and experts say it may have experienced as many as two dozen major earthquakes over the past 10,000 years.

The sediment at the Cascadia deformation front is between 2.5 and 4.0 kilometers thick, which is somewhat less than the 4-5 kilometer thickness of the Sumatra region. However, because the subducting plate at Cascadia is younger when the plate arrives at the subduction zone, the estimated temperatures at the fault surface are about the same in both regions.




Contacts and sources:
Marta Torres
Oregon State University 

Middle Stone Age Humans' Use of Ochre in Porc-Epic Cave Persisted Over Thousands of Years

Middle Stone Age humans in the Porc-Epic cave likely used ochre over at least 4,500 years, according to a study published May 24, 2017 in the open-access journal PLOS ONE by Daniela Rosso from the University of Barcelona, Spain, and the University of Bordeaux, France, and colleagues.

Ochre, an iron-rich rock characterized by a red or yellow color, is found at many Middle Stone Age sites. The largest known East African collection of Middle Stone Age ochre, found at Porc-Epic Cave in Ethiopia, weighs around 40 kg and is thought to date to ca. 40,000 years ago.

Images of ochre, the iron-rich rock characterized by a red or yellow color, that was found at many Middle Stone Age sites.

Credit: Rosso et al (2017)

The authors of the present study conducted a detailed analysis of 3792 pieces of ochre, using microscopy and experimental reproduction of grinding techniques to assess how the ochre was processed and used over a 4,500-year timespan.

The researchers found that the cave inhabitants appeared to have persistently acquired, processed, and used the same types of ochre during this period.

Overall the inhabitants of the cave seem to have processed almost half of the ochre pieces, although the proportion of ochre which had been modified decreased progressively over the period. Whilst flaking and scraping of ochre pieces appeared to have become more common over time, the authors noted a reduction in the proportion of pieces which underwent grinding. The gradual nature of shifts in preferred processing techniques may indicate that they resulted from cultural drift within this practice.

Intensively modified ochre pieces show ground facets likely produced with different types of grindstones, at different times. According to the authors, these pieces were probably curated and processed for the production of small amounts of ochre powder. This is consistent with use in symbolic activities, such as the production of patterns or body painting, although a use for utilitarian activities cannot be ruled out.

Whilst the increase in ochre use in certain layers might be better understood by refining the dating of the sequence and acquiring environmental data, the authors state that their analysis of ochre treatment seems to reflect a "cohesive behavioral system shared by all community members and consistently transmitted through time."




Contacts and sources:
Beth Jones
PLOS ONE

Fossil Beetles Indicate LA Climate Has Been Relatively Stable for 50,000 Years

Research based on more than 180 fossil insects preserved in the La Brea Tar Pits of Los Angeles indicates that the climate in what is now southern California has been relatively stable over the past 50,000 years.

The La Brea Tar Pits, which form one of the world's richest Ice Age fossil sites, are famous for specimens of saber-toothed cats, mammoths, and giant sloths, but their insect collection is even larger and offers a relatively untapped treasure trove of information. The new study, published today in the journal Quaternary Science Reviews, is based on an analysis of seven species of beetles and offers the most robust environmental analysis for southern California to date.

Researchers dated beetle species from the La Brea Tar Pits that are still alive today, such as the darkling beetle shown in this photo.

Credit: Joyce Gross


"Despite La Brea's significance as one of North America's premier Late Pleistocene fossil localities, there remain large gaps in our understanding of its ecological history," said lead author Anna Holden, a graduate student at the American Museum of Natural History's Richard Gilder Graduate School and a research associate at the La Brea Tar Pits and Museum. "Recent advances are now allowing us to reconstruct the region's paleoenvironment by analyzing a vast and previously under-studied collection from the tar pits: insects."

The new study focuses on ground beetles and darkling beetles, which are still present in and around the Los Angeles Basin today. Insects adapt to highly specific environmental conditions, with most capable of migrating when they or their habitats get too hot, too cold, too wet, or too dry. This is especially true for ground and darkling beetles, which are restricted to well-known habitats and climate ranges.

The researchers used radiocarbon dating to estimate the ages of the beetle fossils and discovered they could be grouped into three semi-continuous ranges: 28,000-50,000 years old, 7,500-16,000 years old, and 4,000 years old. Because the beetles stayed put for such a sustained period of time, evidently content with their environmental conditions, the study suggests that pre-historic Los Angeles was warmer and drier than previously inferred--very similar to today's climate. In addition, insects that thrive in cooler environments, such as forested and canopied habitats, and are just as likely as the beetles to be preserved in the tar pits, have not been discovered at La Brea.

This is a photo of a darkling beetle fossil from the La Brea Tar Pits.

Credit: CP image #0000 2222 9825 2094 provided by the Berkeley Fossil Insect PEN, photography by Rosemary Romero.


"With the exception of the peak of the last glaciers during the late Ice Age about 24,000 years ago, our data show that these highly responsive and mobile beetles were staples in Los Angeles for at least the last 50,000 years, suggesting that the climate in the area has been surprisingly similar." Holden said. "We hope that insects will be used as climate proxies for future studies, in combination with other methods, to give us a complete picture of the paleoenvironment of Earth."


Contacts and sources:
Kendra Snyder
The American Museum of Natural History

Quaternary Science Reviews paper: A 50,000 year insect record from Rancho La Brea, Southern California: Insights into past climate and fossil deposition

Tree-Climbing Goats Disperse Seeds by Spitting

In dry southern Morocco, domesticated goats climb to the precarious tippy tops of native argan trees to find fresh forage. Local herders occasionally prune the bushy, thorny trees for easier climbing and even help goat kids learn to climb. During the bare autumn season, goats spend three quarters of their foraging time "treetop grazing."

Spanish ecologists have observed an unusual way in which the goats may be benefiting the trees: the goats spit the trees' seeds. Miguel Delibes, Irene Castañeda, and José M Fedriani reported their discovery in the latest Natural History Note in the May issue of the Ecological Society of America's journal Frontiers in Ecology and the Environment. The paper is open access.

Goats graze on an argan tree in southwestern Morocco. In the fruiting season, many clean argan nuts are spat out by the goats while chewing their cud.

Credit: H Garrido/EBD-CSIC

Argan may be familiar from popular beauty products that feature argan oil, made from the tree's nuts. The nut is surrounded by a pulpy fruit that looks a bit like a giant green olive. For goats, the fruits are a tasty treat worth climbing up to 30 feet into the branches to obtain.

But the goats don't like the large seeds. Like cows, sheep, and deer, goats re-chew their food after fermenting it for a while in a specialized stomach. While ruminating over their cud, the goats spit out the argan nuts, delivering clean seeds to new ground, wherever the goat has wandered. Gaining some distance from the parent tree gives the seedling a better chance of survival.

This novel seed dispersal effect is a variation on the mechanism ecologists call endozoochory, in which seeds more commonly pass all the way through the animal's digestive system and out the other end (or sometimes through two digestive systems). The authors suspected that reports of goats dispersing argan seeds by this more common mechanism were mistaken, because goats do not usually poop large seeds.

The researchers have witnessed sheep, captive red deer, and fallow deer spitting seeds while chewing their cud, and suspect this spitting variation on endozoochory may actually be common - and perhaps an essential route of seed spread for some plant species.


Contacts and sources:
Liza Lester
The Ecological Society of America

Citation: Miguel Delibes, Irene Castañeda, José M Fedriani. (2017) Tree-climbing goats disperse seeds during rumination. Front Ecol Environ 15(4): 222-223, doi:10.1002/fee.1488



A Whole New Jupiter: First Science Results from NASA’s Juno Mission

Early science results from NASA’s Juno mission to Jupiter portray the largest planet in our solar system as a complex, gigantic, turbulent world, with Earth-sized polar cyclones, plunging storm systems that travel deep into the heart of the gas giant, and a mammoth, lumpy magnetic field that may indicate it was generated closer to the planet’s surface than previously thought.

“We are excited to share these early discoveries, which help us better understand what makes Jupiter so fascinating,” said Diane Brown, Juno program executive at NASA Headquarters in Washington, D.C. “It was a long trip to get to Jupiter, but these first results already demonstrate it was well worth the journey.”

Early science results from NASA’s Juno mission to Jupiter portray the largest planet in our solar system as a complex, gigantic, turbulent world.

Credit: NASA/JPL-CalTech/USGS.


Juno launched on Aug. 5, 2011, entering Jupiter’s orbit on July 4, 2016. The findings from the first data-collection pass, which flew within about 2,600 miles (4,200 kilometers) of Jupiter’s swirling cloud tops on Aug. 27, are being published this week in two papers in the journal Science, as well as a 44-paper special collection in Geophysical Research Letters, a journal of the American Geophysical Union.

“We knew, going in, that Jupiter would throw us some curves,” said Scott Bolton, Juno principal investigator from the Southwest Research Institute in San Antonio. “But now that we are here we are finding that Jupiter can throw the heat, as well as knuckleballs and sliders. There is so much going on here that we didn’t expect that we have had to take a step back and begin to rethink of this as a whole new Jupiter.”

Among the findings that challenge assumptions are those provided by Juno’s imager, JunoCam. The images show both of Jupiter’s poles are covered in Earth-sized swirling storms that are densely clustered and rubbing together.

“We’re puzzled as to how they could be formed, how stable the configuration is, and why Jupiter’s north pole doesn’t look like the south pole,” said Bolton. “We’re questioning whether this is a dynamic system, and are we seeing just one stage, and over the next year, we’re going to watch it disappear, or is this a stable configuration and these storms are circulating around one another?”

Another surprise comes from Juno’s Microwave Radiometer (MWR), which samples the thermal microwave radiation from Jupiter’s atmosphere, from the top of the ammonia clouds to deep within its atmosphere. The MWR data indicates that Jupiter’s iconic belts and zones are mysterious, with the belt near the equator penetrating all the way down, while the belts and zones at other latitudes seem to evolve to other structures. The data suggest the ammonia is quite variable and continues to increase as far down as we can see with MWR, which is a few hundred miles or kilometers.

Prior to the Juno mission, it was known that Jupiter had the most intense magnetic field in the solar system. Measurements of the massive planet’s magnetosphere, from Juno’s magnetometer investigation (MAG), indicate that Jupiter’s magnetic field is even stronger than models expected, and more irregular in shape. MAG data indicates the magnetic field greatly exceeded expectations at 7.766 Gauss, about 10 times stronger than the strongest magnetic field found on Earth.


Juno's Microwave Radiometer (MWR) passively observes beneath Jupiter's cloud tops. This artist's conception shows real data from the six MWR channels, arranged by wavelength.

Credit: NASA/SwRI/JPL.

“Juno is giving us a view of the magnetic field close to Jupiter that we’ve never had before,” said Jack Connerney, Juno deputy principal investigator and the lead for the mission’s magnetic field investigation at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “Already we see that the magnetic field looks lumpy: it is stronger in some places and weaker in others. This uneven distribution suggests that the field might be generated by dynamo action closer to the surface, above the layer of metallic hydrogen. Every flyby we execute gets us closer to determining where and how Jupiter’s dynamo works.”

Juno also is designed to study the polar magnetosphere and the origin of Jupiter’s powerful auroras—its northern and southern lights. These auroral emissions are caused by particles that pick up energy, slamming into atmospheric molecules. Juno’s initial observations indicate that the process seems to work differently at Jupiter than at Earth.

Juno is in a polar orbit around Jupiter, and the majority of each orbit is spent well away from the gas giant. But, once every 53 days, its trajectory approaches Jupiter from above its north pole, where it begins a two-hour transit (from pole to pole) flying north to south with its eight science instruments collecting data and its JunoCam public outreach camera snapping pictures. The download of six megabytes of data collected during the transit can take 1.5 days.
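The quoted data volume and download time imply a very modest effective downlink rate; a quick check using only the round numbers given above is sketched below.

```python
# Effective downlink rate implied by the figures quoted above
# (6 megabytes collected per transit, about 1.5 days to download).
data_bits = 6e6 * 8              # 6 MB expressed in bits
download_seconds = 1.5 * 86400   # 1.5 days in seconds
rate_bps = data_bits / download_seconds
print(f"Average effective rate: {rate_bps:.0f} bits per second")  # roughly 370 bps
```

A few hundred bits per second, averaged over the downlink, helps explain why each flyby's harvest takes about a day and a half to reach Earth.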

“Every 53 days, we go screaming by Jupiter, get doused by a fire hose of Jovian science, and there is always something new,” said Bolton. “On our next flyby on July 11, we will fly directly over one of the most iconic features in the entire solar system — one that every school kid knows — Jupiter’s Great Red Spot. If anybody is going to get to the bottom of what is going on below those mammoth swirling crimson cloud tops, it’s Juno and her cloud-piercing science instruments.”




Contacts and sources:
Lauren Lipuma, The American Geophysical Union
David C. Agle, NASA Jet Propulsion Laboratory

Astronomers Witness the Birth of a Black Hole for the First Time as Star Winks Out of Existence

Astronomers have watched as a massive, dying star was likely reborn as a black hole. It took the combined power of the Large Binocular Telescope (LBT) and NASA's Hubble and Spitzer space telescopes to go looking for remnants of the vanquished star, only to find that it had disappeared from sight.

Every second a star somewhere out in the universe explodes as a supernova. But some super-massive stars go out with a whimper instead of a bang. When they do, they can collapse under the crushing tug of gravity and vanish out of sight, only to leave behind a black hole. 

The doomed star, named N6946-BH1, was 25 times as massive as our sun. It began to brighten weakly in 2009. But, by 2015, it appeared to have winked out of existence. By a careful process of elimination, based on observations by the Large Binocular Telescope and the Hubble and Spitzer space telescopes, the researchers eventually concluded that the star must have become a black hole. This may be the fate for extremely massive stars in the universe.

N6946-BH1 Failed Supernova (Artist's Illustration)
Collapsing Star Gives Birth to a Black Hole
Credit: NASA, ESA, and P. Jeffries (STScI)

It went out with a whimper instead of a bang.

The star, which was 25 times as massive as our sun, should have exploded in a very bright supernova. Instead, it fizzled out—and then left behind a black hole.

"Massive fails" like this one in a nearby galaxy could explain why astronomers rarely see supernovae from the most massive stars, said Christopher Kochanek, professor of astronomy at The Ohio State University and the Ohio Eminent Scholar in Observational Cosmology.

As many as 30 percent of such stars, it seems, may quietly collapse into black holes — no supernova required.

"The typical view is that a star can form a black hole only after it goes supernova," Kochanek explained. "If a star can fall short of a supernova and still make a black hole, that would help to explain why we don’t see supernovae from the most massive stars."

He leads a team of astronomers who published their latest results in the Monthly Notices of the Royal Astronomical Society; a preprint is available on arXiv (astro-ph).

Among the galaxies they've been watching is NGC 6946, a spiral galaxy 22 million light-years away that is nicknamed the "Fireworks Galaxy" because supernovae frequently happen there — indeed, SN 2017eaw, discovered on May 14th, is shining near maximum brightness now. Starting in 2009, one particular star, named N6946-BH1, began to brighten weakly. By 2015, it appeared to have winked out of existence.

A team of astronomers at The Ohio State University watched a star disappear and possibly become a black hole. Instead of becoming a black hole through the expected process of a supernova, the black hole candidate formed through a “failed supernova.” The team used NASA’s Hubble and Spitzer Space Telescopes and the Large Binocular Telescope to observe and monitor the star throughout the past decade. If confirmed, this would be the first time anyone has witnessed the birth of a black hole and the first discovery of a failed supernova.
Credits: Video: NASA, ESA, and K. Jackson (GSFC); Music: “High Heelz” by Donn Wilkerson [BMI] and Lance Sumner [BMI]; Killer Tracks BMI; Killer Tracks Production Music

After the LBT survey for failed supernovas turned up the star, astronomers aimed the Hubble and Spitzer space telescopes to see if it was still there but merely dimmed. They also used Spitzer to search for any infrared radiation emanating from the spot. That would have been a sign that the star was still present, but perhaps just hidden behind a dust cloud.

All the tests came up negative. The star was no longer there. By a careful process of elimination, the researchers eventually concluded that the star must have become a black hole.

It's too early in the project to know for sure how often stars experience massive fails, but Scott Adams, a former Ohio State student who recently earned his Ph.D. doing this work, was able to make a preliminary estimate.

"N6946-BH1 is the only likely failed supernova that we found in the first seven years of our survey. During this period, six normal supernovae have occurred within the galaxies we've been monitoring, suggesting that 10 to 30 percent of massive stars die as failed supernovae," he said.

"This is just the fraction that would explain the very problem that motivated us to start the survey, that is, that there are fewer observed supernovae than should be occurring if all massive stars die that way."

For study co-author Krzysztof Stanek, the really interesting part of the discovery is what it implies about the origins of very massive black holes, the kind that the LIGO experiment detected via gravitational waves. (LIGO is the Laser Interferometer Gravitational-Wave Observatory.)

It doesn't necessarily make sense, said Stanek, professor of astronomy at Ohio State, that a massive star could undergo a supernova — a process which entails blowing off much of its outer layers — and still have enough mass left over to form a massive black hole on the scale of those that LIGO detected.

"I suspect it's much easier to make a very massive black hole if there is no supernova," he concluded.

Adams is now an astrophysicist at Caltech. Other co-authors were Ohio State doctoral student Jill Gerke and University of Oklahoma astronomer Xinyu Dai. Their research was supported by the National Science Foundation.

NASA's Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission for NASA's Science Mission Directorate, Washington, D.C. Science operations are conducted at the Spitzer Science Center at Caltech in Pasadena, California. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado. Data are archived at the Infrared Science Archive housed at the Infrared Processing and Analysis Center at Caltech. Caltech manages JPL for NASA.

The Large Binocular Telescope is an international collaboration among institutions in the United States, Italy and Germany.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.




Contacts and sources: 
The Ohio State University / NASA

Wednesday, May 24, 2017

Your Mobile Phone Can Reveal Whether You Have Been Exposed to Radiation

In accidents or terror attacks which are suspected to involve radioactive substances, it can be difficult to determine whether people nearby have been exposed to radiation. But by analysing mobile phones and other objects which come in close contact with the body, it is possible to retrieve important information on radiation exposure. This has been shown by a new thesis from Lund University in Sweden.

The nuclear power plant disasters in Chernobyl and Fukushima are two examples of accidents which have exposed the population to ionising radiation. Many people fear that, for example, dirty bombs will be used in future terror attacks.

“Being able to quickly determine whether someone has been exposed to radiation is a major advantage. In case of a nuclear power plant disaster, many people are worried, even when only a small number of people have been exposed to harmful levels of radiation”, explains Therése Geber-Bergstrand, medical physicist and doctoral student at Lund University.

Providing information several years after an accident

Together with her colleagues, Therése Geber-Bergstrand examined a number of objects and materials that come in close contact with the body and could provide information on whether the carrier has been exposed to radiation. Among the objects examined were

· mobile phones

· teeth and dental fillings

· drying agents (found in, for example, small pouches in new briefcases and purses)

The study showed that several of the materials had very promising properties, not least mobile phones. Phones contain resistors made from aluminium oxide, which can yield information about radiation exposure as long as six years afterwards. For analysis, the phone is dismantled and the resistor is examined using a light-sensitive measuring technique known as optically stimulated luminescence (OSL).
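
Retrospective OSL dosimetry of this kind generally works by comparing the luminescence signal stored in a component against the signal produced by known calibration doses delivered in the laboratory. Below is a minimal, schematic sketch of that conversion assuming a simple linear dose response; the numbers, variable names and calibration procedure are illustrative assumptions, not the method or data from the thesis.

```python
# Schematic sketch of retrospective OSL dosimetry (illustrative only).
# Idea: the OSL signal from an aluminium-oxide resistor grows with absorbed
# dose, so a calibration curve built from known laboratory doses can turn a
# signal read from a phone component into an estimated dose.
import numpy as np

# Hypothetical calibration measurements: known doses (gray) vs. OSL counts
calibration_doses = np.array([0.0, 0.1, 0.5, 1.0, 2.0])       # Gy
calibration_signal = np.array([120, 950, 4300, 8600, 17000])  # counts

# Fit a linear response: signal = slope * dose + background
slope, background = np.polyfit(calibration_doses, calibration_signal, 1)

def estimate_dose(measured_counts):
    """Convert a measured OSL signal (counts) into an absorbed dose (Gy)."""
    return (measured_counts - background) / slope

# Hypothetical reading from a resistor removed from a phone
print(f"estimated dose: {estimate_dose(6500):.2f} Gy")
```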

“The results from the mobile phones were very promising. Even though further studies are required, the phones can be used right away. We have an agreement with the Swedish Radiation Safety Authority about analysing a number of mobile phones in our emergency preparedness lab when needed”, says Therése Geber-Bergstrand.

Analyses of mobile phones and other tested objects can also be performed on a large scale and relatively quickly. It may be possible to receive a result within one or two hours, compared with the couple of days it can take to receive test results from a medical exam. According to Therése Geber-Bergstrand, an initial check of a mobile phone can therefore be a valuable tool for determining who needs to undergo more time-consuming and resource-intensive tests.

Salt capsules could complement dosemeters

In her thesis, she also continued to develop the research group’s previous findings with regard to the use of table salt as a cheap and effective indicator of ionising radiation. Her results confirm the benefits of the salt. In the event of a major accident involving radioactive substances, it could therefore be an option to supply some of the emergency staff with special salt capsules as an effective and cheap alternative to dosemeters.


Contacts and sources:
Lund University

Thesis: Optically Stimulated Luminescence for Retrospective Radiation Dosimetry. The Use of Materials Close to Man in Emergency Situations

The Brain Detects Disease in Others Even before It Breaks Out

The human brain is much better than previously thought at discovering and avoiding disease, a new study led by researchers at Karolinska Institutet in Sweden reports. Our senses of vision and smell alone are enough to make us aware that someone has a disease even before it breaks out. And not only aware – we also act upon the information and avoid sick people. The study is published in the scientific journal Proceedings of the National Academy of Sciences (PNAS).

The human immune system is effective at combating disease, but since it entails a great deal of energy expenditure, disease avoidance should be part of our survival instinct. A new study now shows that this is indeed the case: the human brain is better than previously thought at discovering early-stage disease in others. Moreover, we also have a tendency to act upon the signals by liking infected people less than healthy ones.

Mats Olsson
Photo: Martin Asperholm


“The study shows us that the human brain is actually very good at discovering this and that this discovery motivates avoidance behaviour,” says principal investigator Professor Mats Olsson at Karolinska Institutet’s Department of Clinical Neuroscience.

By injecting harmless fragments of bacteria, the researchers activated the immune response in participants, who developed the classic symptoms of disease – tiredness, pain and fever – for a few hours, during which time smell samples were taken from them and they were photographed and filmed. The injected substance then disappeared from their bodies, and with it the symptoms.

Another group of participants were then exposed to these smells and images as well as those of healthy controls, and asked to rate how much they liked the people, while their brain activities were measured in an MR scanner.

They were then asked to state, just by looking at the photographs, which of the participants looked sick, which they considered attractive and which they might consider socialising with.

“Our study shows a significant difference in how people tend to prefer and be more willing to socialise with healthy people than those who are sick and whose immune system we artificially activated,” says Professor Olsson. “We can also see that the brain is good at adding weak signals from multiple senses relating to a person's state of health”.

This he sees as biological confirmation of the argument that survival naturally entails avoiding infection.

“Common sense tells us that there should be a basic behavioural repertoire that assists the immune system. Avoidance, however, does not necessarily apply if you have a close relationship with the person who is ill,” says Professor Olsson. “For instance, there are few people other than your children who you’d kiss when they have a runny nose. In other words, a disease signal can enhance caring behaviour in close relationships. With this study, we demonstrate that the brain is more sensitive to those signals than we once thought.”

The research has been carried out in collaboration with several parties, especially with the Stress Research Institute at Stockholm University.

The study was supported by the Swedish Research Council, the Swedish Foundation for Humanities and Social Sciences, the Knut and Alice Wallenberg Foundation and Deutscher Akademischer Austauschdienst.



Contacts and sources:
Karolinska Institutet

Citation:  “Behavioral and neural correlates to multisensory detection of sick humans”, Christina Regenbogen, John Axelsson, Julie Lasselin, Danja K. Porada, Tina Sundelin, Moa G. Peter, Mats Lekander, Johan N. Lundström, Mats J. Olsson. PNAS, Online 22 May 2017. doi: 10.1073/pnas.1617357114.  
http://www.pnas.org/content/early/2017/05/16/1617357114

Biggest Ever Simulations Help Uncover The History Of The Galaxy

Thousands of processors, terabytes of data, and months of computing time have helped a group of researchers in Germany create some of the largest and highest resolution simulations ever made of galaxies like our Milky Way.

Led by Dr Robert Grand of the Heidelberger Institut fuer Theoretische Studien, the work of the Auriga Project appears in the journal Monthly Notices of the Royal Astronomical Society.

A composite of images from the simulation. (Left) Projected gas density of the galaxy environment about 10 billion years ago. Depicted are filamentary gas structures that feed the main galaxy at the centre. (Middle) Bird’s eye view of the gas disc in the present day. The fine detailed spiral pattern is clearly visible. (Right) Side-on view of the same gas disc in the present day. Cold gas is shown as blue, warm gas as green and hot gas as red.

Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

Astronomers study our own and other galaxies with telescopes and simulations, in an effort to piece together their structure and history.

Spiral galaxies like the Milky Way are thought to contain several hundred thousand million stars, as well as copious amounts of gas and dust.

The spiral shape is commonplace, with a massive black hole at the centre, surrounded by a bulge of old stars, and arms winding outwards where relatively young stars like the Sun are found.

A composite image of the stellar distribution shown face-on (top) and edge-on (bottom) at present day. Older stars are redder and younger stars are bluer. Note the concentration of old stars that make up the bulge in the centre. Younger stars make up a thin disc with clear spiral structure, and extend far beyond the central bulge. 
Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

However, understanding how systems like our galaxy came into being remains a key question in the history of the cosmos.

The enormous range of scales (stars, the building blocks of galaxies, are each about one trillion times smaller in mass than the galaxy they make up), as well as the complex physics involved, presents a formidable challenge for any computer model.

Using the Hornet and SuperMUC supercomputers in Germany and a state-of-the-art code, the team ran 30 simulations at high resolution, and 6 at very high resolution, for several months.

The dark matter density 500 million years after the Big Bang, centred on what would become the Milky Way. Red, blue and yellow colours indicate low, intermediate and high density regions.

Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

The code incorporates one of the most comprehensive physics models to date, covering phenomena such as gravity, star formation, the hydrodynamics of gas, supernova explosions and, for the first time, the magnetic fields that permeate the interstellar medium (the gas and dust between the stars).

Black holes also grew in the simulation, feeding on the gas around them, and releasing energy into the wider galaxy.

Dr Grand and his team were delighted by the results of the simulation. "The outcome of the Auriga Project is that astronomers will now be able to use our work to access a wealth of information, such as the properties of the satellite galaxies and the very old stars found in the halo that surrounds the galaxy."

The gas column density 800 million years after the Big Bang. Lighter colours indicate higher density. The image is centred on the Milky Way. Many individual galaxies line the 'cosmic web' as they move toward each other.

Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

The team also saw the effect of those smaller galaxies, some of which spiralled into the larger galaxy early in its history, a process that could have created large spiral discs.

Dr Grand adds: "For a spiral galaxy to grow in size, it needs a substantial supply of fresh star-forming gas around its edges - smaller gas-rich galaxies that spiral gently into ours can provide exactly that."

The gas column density 2.4 billion years after the Big Bang. Lighter colours indicate higher density. The image is centred on the Milky Way. Galaxies have merged over the preceding 2 billion years to form a few larger galaxies, the progenitors of the Milky Way.
Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

The scientists will now combine the results of the Auriga Project with data from surveys such as the Gaia mission, to better understand how mergers and collisions shaped galaxies like our own.

The magnetic field strength in the present day. Streamlines indicate the direction of the magnetic field lines. 
Credit: Robert J. J. Grand, Facundo A. Gomez, Federico Marinacci, Ruediger Pakmor, Volker Springel, David J. R. Campbell, Carlos S. Frenk, Adrian Jenkins and Simon D. M. White.

Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

Dr Morgan Hollis
Royal Astronomical Society
 
Dr Robert Grand
Heidelberger Institut fuer Theoretische Studien, Germany


Citation:  “The Auriga Project: the properties and formation mechanisms of disc galaxies across cosmic time”, Monthly Notices of the Royal Astronomical Society, vol. 467, pp. 179-207. A copy of the paper is available at no cost from https://academic.oup.com/mnras/article-lookup/doi/10.1093/mnras/stx071

The Birth and Death of a Tectonic Plate

Several hundred miles off the Pacific Northwest coast, a small tectonic plate called the Juan de Fuca is slowly sliding under the North American continent. This subduction has created a collision zone with the potential to generate huge earthquakes and accompanying tsunamis, which happen when faulted rock abruptly shoves the ocean out of its way.

In fact, this region represents the single greatest geophysical hazard to the continental United States; quakes centered here could be hundreds of times more damaging than even a big temblor on the San Andreas Fault. Not surprisingly, scientists are interested in understanding as much as they can about the Juan de Fuca Plate.

These are the attenuation values recorded at ocean-bottom stations. Radial spokes show individual arrivals at their incoming azimuth; central circles show averages at each station.

Credit: UCSB

This microplate is "born" just 300 miles off the coast, at a long range of underwater volcanoes that produce new crust from melt generated deep below. Part of the global mid-ocean ridge system that encircles the planet, these regions generate 70 percent of the Earth's tectonic plates. However, because the chains of volcanoes lie more than a mile beneath the sea surface, scientists know surprisingly little about them.

University of California Santa Barbara (UCSB) geophysicist Zachary Eilon and his co-author Geoff Abers at Cornell University have conducted new research -- using a novel measurement technique -- that has revealed a strong signal of seismic attenuation or energy loss at the mid-ocean ridge where the Juan de Fuca Plate is created. The researchers' attenuation data imply that molten rock here is found even deeper within the Earth than scientists had previously thought. This in turn helps scientists understand the processes by which Earth's tectonic plates are built, as well as the deep plumbing of volcanic systems. The results of the work appear in the journal Science Advances.

"We've never had the ability to measure attenuation this way at a mid-ocean ridge before, and the magnitude of the signal tells us that it can't be explained by shallow structure," said Eilon, an assistant professor in UCSB's Department of Earth Science. "Whatever is down there causing all this seismic energy to be lost extends really deep, at least 200 kilometers beneath the surface. That's unexpected, because we think of the processes that give rise to this -- particularly the effect of melting beneath the surface -- as being shallow, confined to 60 km or less."

According to Eilon's calculations, the narrow strip underneath the mid-ocean ridge, where hot rock wells up to generate the Juan de Fuca Plate, has very high attenuation. In fact, its levels are as high as scientists have seen anywhere on the planet. His findings also suggest that the plate is cooling faster than expected, which affects the friction at the collision zone and the resulting size of any potential megaquake.

Seismic waves begin at an earthquake and radiate away from it. As they disperse, they lose energy. Some of that loss is simply due to spreading out, but another parameter also affects energy loss. Called the quality factor, it essentially describes how squishy the Earth is, Eilon said. He used the analogy of a bell to explain how the quality factor works.

Ocean-bottom seismometers aboard the R/V Wecoma were deployed in the first year of the Cascadia Initiative.

Credit: Dave O'Gorman

"If I were to give you a well-made bell and you were to strike it once, it would ring for a long time," he explained. "That's because very little of the energy is actually being lost with each oscillation as the bell rings. That's very low attenuation, very high quality. But if I give you a poorly made bell and you strike it once, the oscillations will die out very quickly. That's high attenuation, low quality."

Eilon looked at the way different frequencies of seismic waves attenuated at different rates. "We looked not only at how much energy is lost but also at the different amounts by which various frequencies are delayed," he explained. "This new, more robust way of measuring attenuation is a breakthrough that can be applied in other systems around the world.

"Attenuation is a very hard thing to measure, which is why a lot of people ignore it," Eilon added. "But it gives us a huge amount of new information about the Earth's interior that we wouldn't have otherwise."

Next year, Eilon will be part of an international effort to instrument large unexplored swaths of the Pacific with ocean bottom seismometers. Once that data has been collected, he will apply the techniques he developed on the Juan de Fuca in the hope of learning more about what lies beneath the seafloor in the old oceans, where mysterious undulations in the Earth's gravity field have been measured.

"These new ocean bottom data, which are really coming out of technological advances in the instrumentation community, will give us new abilities to see through the ocean floor," Eilon said. "This is huge because 70 percent of the Earth's surface is covered by water and we've largely been blind to it -- until now.

"The Pacific Northwest project was an incredibly ambitious community experiment," he said. "Just imagine the sort of things we'll find out once we start to put these instruments in other places."



Contacts and sources:
Julie Cohen
University of California Santa Barbara (UCSB)

Going with the Flow: The Forces That Affect Species' Movements in a Changing Climate

Ocean currents affect how climate change impacts movements of species to cooler regions.

A new study published in Scientific Reports provides novel insight into how species' distributions change from the interaction between climate change and ocean currents.

As the climate gets warmer, species migrate to regions where conditions are more tolerable, such as higher latitudes, deeper waters or higher terrain. This shifts their geographical ranges, which can produce significant changes to ecosystems and have serious socioeconomic and human health implications. Predicting these shifts is difficult, however, because of the complex interactions between changes in climate and other human, environmental and biological factors.

Red colors represent good directional agreement whereas green colors represent directional mismatch (1979-2012).

Credit: García Molinos et al., Scientific Reports, May 2, 2017

"External directional forces, such as water and air currents, are one of those important but overlooked processes that act as conveyor belts facilitating or hindering the dispersion of species" says Dr. Jorge García Molinos, the lead author at the Arctic Research Center of Hokkaido University. "How the movement of climate relates to the movement of water can offer valuable insight to better understand how species track a shifting climate."

García Molinos and his collaborators in the UK and Germany have developed a simple metric to capture the directional agreement between surface ocean currents and warming. They used it in combination with other parameters to build an explanatory model for 270 range shifts in marine biota reported around the globe.
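
One natural way to express that directional agreement is the cosine of the angle between the local current vector and the direction in which the climate is shifting (the climate-velocity direction): values near +1 mean the current pushes dispersal the same way the isotherms are moving, values near -1 mean it pushes against them. The sketch below shows the idea on made-up vectors; the function name and exact formulation are assumptions for illustration, not the metric's published definition.

```python
# Illustrative sketch: directional agreement between an ocean-current vector
# and the local direction of climate shift, expressed as the cosine of the
# angle between them (+1 = fully aligned, -1 = directly opposed).
import numpy as np

def directional_agreement(current_uv, climate_velocity_uv):
    """Cosine of the angle between two 2-D vectors (east, north components)."""
    current = np.asarray(current_uv, dtype=float)
    climate = np.asarray(climate_velocity_uv, dtype=float)
    denom = np.linalg.norm(current) * np.linalg.norm(climate)
    return float(np.dot(current, climate) / denom) if denom else float("nan")

# Hypothetical grid-cell values; only the directions matter for this index.
print(directional_agreement((0.20, 0.05), (30.0, 10.0)))   # ~ +1, aligned
print(directional_agreement((0.20, 0.05), (-25.0, -5.0)))  # ~ -1, opposed
```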

They found that species expanded their range faster and kept track of climate better when ocean currents matched the direction of warming. "We were expecting ocean currents to be most influential at the leading 'cold' edge of a species' range, where warming represents an opportunity for the expansion of its range," comments García Molinos. "In those situations it's a little bit like a conveyor belt at an airport terminal. If you want to get to your boarding gate and you walk with the belt, you approach the gate faster than if you just stand on it passively. If you take the belt that goes in the opposite direction you will need to walk fast or even run to make progress."

However, matching ocean currents and warming unexpectedly slowed down range contractions, or the speed of withdrawal at the "warm" edges. "This was somehow a surprise because we were expecting contraction rates to be mainly driven by the rate of warming," says co-author Prof. Michael T. Burrows. The authors hypothesized this effect to be related to how currents link local populations within a species' range.

Populations of the same species living in warmer waters are naturally adapted to higher temperatures than those inhabiting colder waters. Where currents go in the same direction as warming, populations adapted to warmer conditions would seed individuals into those thriving in cooler waters, which could result in increased genetic variation and adaptation to warming, therefore slowing contraction rates.

"Our study suggests how directional forces such as ocean or air currents can influence the coupling between climate change and biogeographical shifts. Our simple metric can be used to improve predictions of distribution shifts and help explain differences in expansion and contraction rates among species," concludes García Molinos.



Contacts and sources:
Naoki Namba 
Hokkaido University




Citation: "Ocean currents modify the coupling between climate change and biogeographical shifts." J. García Molinos, M. T. Burrows & E. S. Poloczanska. Scientific Reports 7, Article number: 1332 (2017). doi:10.1038/s41598-017-01309-y

Whales Only Recently Evolved into Giants When Changing Ice, Oceans Concentrated Prey

The blue whale, which uses baleen to filter its prey from ocean water and can reach lengths of over 100 feet, is the largest vertebrate animal that has ever lived.

On the list of the planet's most massive living creatures, the blue whale shares the top ranks with most other species of baleen whales alive today. According to new research from scientists at the Smithsonian's National Museum of Natural History, however, it was only recently in whales' evolutionary past that they became so enormous.

A blue whale, the largest vertebrate animal ever in the history of life, engulfs krill off the coast of California. Photograph authorized under National Marine Fisheries Service permit #16111 for the BBC program 'The Hunt,' courtesy of Hugh Pearson and David Reichert.

Copyright Silverback Films/BBC.


In a study reported May 24 in Proceedings of the Royal Society B, Nicholas Pyenson, the museum's curator of fossil marine mammals, and collaborators Graham Slater at the University of Chicago and Jeremy Goldbogen at Stanford University, traced the evolution of whale size through more than 30 million years of history and found that very large whales appeared along several branches of the family tree about 2 to 3 million years ago. Increasing ice sheets in the Northern Hemisphere during this period likely altered the way whales' food was distributed in the oceans and enhanced the benefits of a large body size, the scientists say.

How and why whales got so big has remained a mystery until now, in part because of the challenges of interpreting an incomplete fossil record. "We haven't had the right data," Pyenson said. "How do you measure the total length of a whale that's represented by a chunk of fossil?" Recently, however, Pyenson established that the width of a whale's skull is a good indicator of its overall body size. With that advance, the time was right to address the long-standing question.
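
Proxies like this are usually fitted as allometric (log-log) regressions on species whose skulls and total lengths are both known, and the fitted relationship is then applied to fossil skulls. The sketch below shows that general approach on invented numbers; the data and coefficients are purely illustrative and are not the regression published by Pyenson and colleagues.

```python
# Illustrative allometric proxy: fit total body length as a power law of
# skull width on species with known sizes, then apply the fit to a fossil.
# All numbers here are invented for illustration only.
import numpy as np

# Hypothetical training data: skull width and total length, both in metres
skull_width = np.array([0.6, 1.0, 1.6, 2.4, 3.0])
body_length = np.array([5.0, 8.5, 13.0, 20.0, 26.0])

# The power law length = a * width**b is linear in log-log space
b, log_a = np.polyfit(np.log(skull_width), np.log(body_length), 1)

def estimate_length(width_m):
    """Estimate total body length (m) from skull width (m)."""
    return np.exp(log_a) * width_m ** b

# Apply the fitted proxy to a hypothetical fossil skull 1.2 m across
print(f"estimated body length: {estimate_length(1.2):.1f} m")
```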

The Smithsonian holds the largest and richest skull collections for both living and extinct baleen whales, and the museum was one of the few places that housed a collection that could provide the raw data needed to examine the evolutionary relationships between whales of different sizes. Pyenson and his colleagues measured a wide range of fossil skulls from the National Museum of Natural History's collections and used those measurements, along with published data on additional specimens, to estimate the length of 63 extinct whale species.

 The fossils included in the analysis represented species dating back to the earliest baleen whales, which lived more than 30 million years ago. The team used the fossil data, together with data on 13 species of modern whales, to examine the evolutionary relationships between whales of different sizes. Their data clearly showed that the large whales that exist today were not present for most of whales' history. "We live in a time of giants," Goldbogen said. "Baleen whales have never been this big, ever."

The research team traced the discrepancy back to a shift in the way body size evolved that occurred about 4.5 million years ago. Not only did whales with bodies longer than 10 meters (approximately 33 feet) begin to evolve around this time, but smaller species of whales also began to disappear. Pyenson notes that larger whales appeared in several different lineages around the same time, suggesting that massive size was somehow advantageous during that timeframe.


"We might imagine that whales just gradually got bigger over time, as if by chance, and perhaps that could explain how these whales became so massive," said Slater, a former Peter Buck postdoctoral fellow at the museum. "But our analyses show that this idea doesn't hold up -- the only way that you can explain baleen whales becoming the giants they are today is if something changed in the recent past that created an incentive to be a giant and made it disadvantageous to be small."

This evolutionary shift, which took place at the beginning of the Ice Ages, corresponds to climatic changes that would have reshaped whales' food supply in the world's oceans. Before ice sheets began to cover the Northern Hemisphere, food resources would have been fairly evenly distributed throughout the oceans, Pyenson said. But when glaciation began, runoff from the new ice caps would have washed nutrients into coastal waters at certain times of the year, seasonally boosting food supplies.

At the time of this transition, baleen whales, which filter small prey, like krill, out of seawater, were well equipped to take advantage of these dense patches of food. Goldbogen, whose studies of modern whale foraging behavior have demonstrated that filter-feeding is particularly efficient when whales have access to very dense aggregations of prey, said the foraging strategy becomes even more efficient as body size increases.


What's more, large whales can migrate thousands of miles to take advantage of seasonally abundant food supplies. So, the scientists said, baleen whales' filter-feeding systems, which evolved about 30 million years ago, appear to have set the stage for major size increases once rich sources of prey became concentrated in particular locations and times of year.

"An animal's size determines so much about its ecological role," Pyenson said. "Our research sheds light on why today's oceans and climate can support Earth's most massive vertebrates. But today's oceans and climate are changing at geological scales in the course of human lifetimes. With these rapid changes, does the ocean have the capacity to sustain several billion people and the world's largest whales? The clues to answer this question lie in our ability to learn from Earth's deep past -- the crucible of our present world -- embedded in the fossil record."




Contacts and sources:
Ryan Lavery
Smithsonian




Citation: "Independent evolution of baleen whale gigantism linked to Plio-Pleistocene ocean dynamics." Graham J. Slater, Jeremy A. Goldbogen, Nicholas D. Pyenson. Proceedings of the Royal Society B, published 24 May 2017. DOI: 10.1098/rspb.2017.0546. http://rspb.royalsocietypublishing.org/content/284/1855/20170546

Funding for this study was provided by the Smithsonian's Remington Kellogg Fund and with support from the Basis Foundation.