Unseen Is Free

Tuesday, August 30, 2016

Peculiar Age-Defying Star Probed

For years, astronomers have puzzled over a massive star lodged deep in the Milky Way that shows conflicting signs of being extremely old and extremely young.

An age-defying star designated as IRAS 19312+1950 (arrow) exhibits features characteristic of a very young star and a very old star. The object stands out as extremely bright inside a large, chemically rich cloud of material, as shown in this image from NASA’s Spitzer Space Telescope. 

A NASA-led team of scientists thinks the star – which is about 10 times as massive as our sun and emits about 20,000 times as much energy – is a newly forming protostar. That was a big surprise because the region had not been known as a stellar nursery before. But a nearby interstellar bubble, which indicates the presence of a recently formed massive star, also supports this idea.

IRAS 19312+1950 

Credits: NASA/JPL-Caltech

Researchers initially classified the star as elderly, perhaps a red supergiant. But a new study by a NASA-led team of researchers suggests that the object, labeled IRAS 19312+1950, might be something quite different – a protostar, a star still in the making.

“Astronomers recognized this object as noteworthy around the year 2000 and have been trying ever since to decide how far along its development is,” said Martin Cordiner, an astrochemist working at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. He is the lead author of a paper in the Astrophysical Journal describing the team’s findings, from observations made using NASA’s Spitzer Space Telescope and ESA’s Herschel Space Observatory.

Located more than 12,000 light-years from Earth, the object first stood out as peculiar when it was observed at particular radio frequencies. Several teams of astronomers studied it using ground-based telescopes and concluded that it is an oxygen-rich star about 10 times as massive as the sun. The question was: What kind of star?

Some researchers favor the idea that the star is evolved – past the peak of its life cycle and on the decline. For most of their lives, stars obtain their energy by fusing hydrogen in their cores, as the sun does now. But older stars have used up most of their hydrogen and must rely on heavier fuels that don't last as long, leading to rapid deterioration.


IRAS 19312+1950
Image Credit: NASA/JPL-Caltech

Two early clues – intense radio sources called masers – suggested the star was old. In astronomy, masers occur when the molecules in certain kinds of gases get revved up and emit a lot of radiation over a very limited range of frequencies. The result is a powerful radio beacon – the microwave equivalent of a laser.

One maser observed with IRAS 19312+1950 is almost exclusively associated with late-stage stars. This is the silicon oxide maser, produced by molecules made of one silicon atom and one oxygen atom. Researchers don’t know why this maser is nearly always restricted to elderly stars, but of thousands of known silicon oxide masers, only a few exceptions to this rule have been noted.

Also spotted with the star was a hydroxyl maser, produced by molecules comprised of one oxygen atom and one hydrogen atom. Hydroxyl masers can occur in various kinds of astronomical objects, but when one occurs with an elderly star, the radio signal has a distinctive pattern – it’s especially strong at a frequency of 1612 megahertz. That’s the pattern researchers found in this case.

Even so, the object didn’t entirely fit with evolved stars. Especially puzzling was the smorgasbord of chemicals found in the large cloud of material surrounding the star. A chemical-rich cloud like this is typical of the regions where new stars are born, but no such stellar nursery had been identified near this star.

Scientists initially proposed that the object was an old star surrounded by a surprising cloud typical of the kind that usually accompanies young stars. Another idea was that the observations might somehow be capturing two objects: a very old star and an embryonic cloud of star-making material in the same field.

Cordiner and his colleagues began to reconsider the object, conducting observations using ESA’s Herschel Space Observatory and analyzing data gathered earlier with NASA’s Spitzer Space Telescope. Both telescopes operate at infrared wavelengths, which gave the team new insight into the gases, dust and ices in the cloud surrounding the star.

The additional information leads Cordiner and colleagues to think the star is in a very early stage of formation. The object is much brighter than it first appeared, they say, emitting about 20,000 times the energy of our sun. The team found large quantities of ices made from water and carbon dioxide in the cloud around the object. These ices are located on dust grains relatively close to the star, and all this dust and ice blocks out starlight, making the star seem dimmer than it really is.

In addition, the dense cloud around the object appears to be collapsing, which happens when a growing star pulls in material. In contrast, the material around an evolved star is expanding and is in the process of escaping to the interstellar medium. The entire envelope of material has an estimated mass of 500 to 700 suns, which is much more than could have been produced by an elderly or dying star.

“We think the star is probably in an embryonic stage, getting near the end of its accretion stage – the period when it pulls in new material to fuel its growth,” said Cordiner.

Also supporting the idea of a young star are the very fast wind speeds measured in two jets of gas streaming away from opposite poles of the star. Such jets of material, known as a bipolar outflow, can be seen emanating from young or old stars. However, fast, narrowly focused jets are rarely observed in evolved stars. In this case, the team measured winds at the breakneck speed of at least 200,000 miles per hour (90 kilometers per second) – a common characteristic of a protostar.

Still, the researchers acknowledge that the object is not a typical protostar. For reasons they can’t explain yet, the star has spectacular features of both a very young and a very old star.

“No matter how one looks at this object, it’s fascinating, and it has something new to tell us about the life cycles of stars,” said Steven Charnley, a Goddard astrochemist and co-author of the paper.

NASA's Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission, whose science operations are conducted at the Spitzer Science Center. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado.

Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA.



Contacts and sources:
Elizabeth Landau
Jet Propulsion Laboratory,

For more information, visit: www.nasa.gov/spitzer

Monday, August 29, 2016

Ice Age Inhabitants of Interior Alaska Relied Heavily on Salmon



Ice age inhabitants of Interior Alaska relied more heavily on salmon and freshwater fish in their diets than previously thought, according to a newly published study.

A team of researchers from the University of Alaska Fairbanks made the discovery by taking samples from 17 prehistoric hearths along the Tanana River, then analyzing stable isotopes and lipid residues to identify fish remains at multiple locations. The results offer a more complex picture of Alaska’s ice age residents, who were previously thought to have a diet dominated by terrestrial mammals such as mammoths, bison and elk.


Members of an excavation team work in a trench at the Upward Sun River archaeological site. Salmon remains from the site were dated to 11,800 years old using isotope analysis at the University of Alaska Fairbanks.
Credit: Ben Potter  


The project also found the earliest evidence of human use of anadromous salmon in the Americas, dating back at least 11,800 years.

The results of the study were published today in the Proceedings of the National Academy of Sciences.

DNA analysis of chum salmon bones from the same site on the Tanana River had previously confirmed that fish were part of the local indigenous diet as far back as 11,500 years ago. But fragile fish bones rarely survive for scientists to analyze, so the team used sophisticated geochemistry analyses to estimate the amount of salmon, freshwater and terrestrial resources ancient people ate.

University of Alaska Fairbanks researcher Kyungcheol Choy loads an autosampler in UAF’s Alaska Stable Isotope Facility.
Matthew Wooller photo


A team led by UAF postdoctoral researcher Kyungcheol Choy analyzed stable isotopes and lipid residues, searching for signatures specific to anadromous fish. The effort demonstrated that dietary practices of hunter-gatherers could be recorded at sites where animal remains hadn’t been preserved.
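
As a rough illustration of how such isotope signatures translate into dietary estimates, the sketch below applies a simple two-endmember mixing calculation. All numbers are hypothetical placeholders chosen for illustration, not values from the UAF study, which used more sophisticated multi-source analyses.

```python
# Minimal two-source stable isotope mixing sketch (hypothetical values).
# Estimate the fraction of dietary protein from salmon vs. terrestrial game
# using a single nitrogen isotope ratio measured in a hearth residue.

d15n_salmon = 12.0       # assumed endmember for anadromous salmon (per mil)
d15n_terrestrial = 4.0   # assumed endmember for terrestrial herbivores (per mil)
d15n_sample = 9.0        # hypothetical value measured in a hearth residue

# Linear mixing: sample = f * salmon + (1 - f) * terrestrial
f_salmon = (d15n_sample - d15n_terrestrial) / (d15n_salmon - d15n_terrestrial)
print(f"Estimated salmon fraction of dietary protein: {f_salmon:.0%}")
```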

“It’s quite new in the archaeology field,” Choy said. “There’s a lot in these mixtures that’s hard to detect in other ways.”

Ben Potter, a professor of anthropology at UAF and co-author of the study, said the findings suggest a more systematic use of salmon than DNA testing alone could confirm.

“This is a different kind of strategy,” Potter said. “It fleshes out our understanding of these people in a way that we didn’t have before.”

The study required cooperation between UAF’s Department of Anthropology and the Institute of Northern Engineering’s Alaska Stable Isotope Facility to locate and interpret the presence of salmon remains at the sites. Potter said the process could be a template for how a diverse team of researchers can work together to overcome a scientific obstacle.

“It’s an awesome look at how we can merge disciplines to answer a question,” he said.

Other participants in the study included UAF researchers Matthew Wooller, Holly McKinney, Joshua Reuther and Shiway Wang.




Contacts and sources:
Jeff Richardson
University of Alaska Fairbanks

Dental Plaque Sheds New Light On the Diet of Mesolithic Foragers in The Balkans


The study of dental calculus from Late Mesolithic individuals from the site of Vlasac in the Danube Gorges of the central Balkans has provided direct evidence that Mesolithic foragers of this region consumed domestic cereals by c. 6600 BC, almost half a millennium earlier than previously thought.

The team of researchers led by Emanuela Cristiani from The McDonald Institute for Archaeological Research, University of Cambridge, used polarised microscopy to study micro-fossils trapped in the dental calculus (ancient calcified dental plaque) of 9 individuals dated to the Late Mesolithic (c. 6600-6450 BC) and the Mesolithic-Neolithic transition phase (c. 6200-5900 BC) from the site of Vlasac in the Danube Gorges. The remains were recovered from this site during excavations from 2006 to 2009 by Dušan Borić of Cardiff University.


Recovery of human remains at Vlasac, Serbia.

Credit: Dušan Borić


"The deposition of mineralized plaque ends with the death of the individual, therefore, dental calculus has sealed unique human biographic information about Mesolithic dietary preferences and lifestyle," said Cristiani.

"What we happened to discover has a tremendous significance as it challenges the established view of the Neolithization in Europe," she said.

"Microfossils trapped in dental calculus are a direct evidence that plant foods were an important source of energy within Mesolithic forager diet. More significantly, though, they reveal that domesticated plants were introduced to the Balkans independently from the rest of Neolithic novelties such as domesticated animals and artefacts, which accompanied the arrival of farming communities in the region".

These results suggest that the hitherto held notion of the "Neolithic package" may have to be reconsidered. Archaeologists use the concept of "Neolithic package" to refer to the group of elements that appear in the Early Neolithic settlements of Southeast Europe: pottery, domesticates and cultigens, polished axes, ground stones and timber houses.


Close-up of human remains from Vlasac, Serbia.

Credit: Dušan Borić


This region of the central Balkans has yielded data unmatched by other areas of Europe with a known Mesolithic forager presence. Dental tartar samples were also taken from three Early Neolithic (c. 5900-5700 BC) female burials from the site of Lepenski Vir, located around 3 km upstream from Vlasac.

Although researchers agree that the Mesolithic diet in the Danube Gorges was largely based on terrestrial or riverine protein-rich resources, the team also found that starch granules preserved in the dental calculus from Vlasac were consistent with domestic species such as wheat (Triticum monococcum, Triticum dicoccum) and barley (Hordeum distichon), which were also the main crops found among Early Neolithic communities of southeast Europe.

Domestic species were consumed together with other wild species of the Aveneae tribe (oats), Fabaeae tribe (peas and beans) and grasses of the Paniceae tribe.

These preserved starch granules provide the first direct evidence that Neolithic domestic cereals had already reached inland foragers deep in the Balkan hinterland by c. 6600 BC. Their introduction in the Mesolithic societies was likely eased by social networks between local foragers and the first Neolithic communities.

Archaeological starch grains were interpreted using a large collection of microremains from modern plants native to the central Balkans and the Mediterranean region.

"Most of the starch granules that we identified in the Late Mesolithic calculus of the central Balkans are consistent with plants that became key staple domestic foods with the start of the Neolithic in this region" said Cristiani.

Anita Radini, University of York added, "In the central Balkans, foragers' familiarity with domestic Cerealia grasses from c. 6500 BC, if not earlier, might have eased the later quick adoption of agricultural practices."

The findings are published in the journal Proceedings of the National Academy of Sciences.









Contacts and sources:
Emanuela Cristiani
University of Cambridge

Hunt For Planet X Reveals Strange Never-Seen-Before Objects and Orbits


In the race to discover a proposed ninth planet in our Solar System, Carnegie's Scott Sheppard and Chadwick Trujillo of Northern Arizona University have observed several never-before-seen objects at extreme distances from the Sun in our Solar System. Sheppard and Trujillo have now submitted their latest discoveries to the International Astronomical Union's Minor Planet Center for official designations. A paper about the discoveries has also been accepted to The Astronomical Journal.

The more objects that are found at extreme distances, the better the chance of constraining the location of the ninth planet that Sheppard and Trujillo first predicted to exist far beyond Pluto (itself no longer classified as a planet) in 2014. The placement and orbits of small, so-called extreme trans-Neptunian objects can help narrow down the size and distance from the Sun of the predicted ninth planet, because that planet's gravity influences the movements of the smaller objects that are far beyond Neptune. The objects are called trans-Neptunian because their orbits around the Sun are larger than Neptune's.


An illustration of the orbits of the new and previously known extremely distant solar system objects. The clustering of most of their orbits indicates that they are likely to be influenced by something massive and very distant, the proposed Planet X.

Credit: Robin Dienel.


In 2014, Sheppard and Trujillo announced the discovery of 2012 VP113 (nicknamed "Biden"), which has the most-distant known orbit in our Solar System. At that time, Sheppard and Trujillo also noticed that the handful of known extreme trans-Neptunian objects all cluster with similar orbital angles. This led them to predict that there is a planet at more than 200 times our distance from the Sun. Its mass, possibly ranging from several Earth masses to the equivalent of Neptune, would be shepherding these smaller objects into similar types of orbits.

Some have called this Planet X or Planet 9. Further work since 2014 has strengthened the case that this massive ninth planet exists by further constraining its possible properties. Analysis of "neighboring" small body orbits suggests that it is several times more massive than the Earth, possibly as much as 15 times more so, and that at the closest point of its extremely stretched, oblong orbit it is at least 200 times farther away from the Sun than Earth. (This is over 5 times more distant than Pluto.)

"Objects found far beyond Neptune hold the key to unlocking our Solar System's origins and evolution," Sheppard explained. "Though we believe there are thousands of these small objects, we haven't found very many of them yet, because they are so far away. The smaller objects can lead us to the much bigger planet we think exists out there. The more we discover, the better we will be able to understand what is going on in the outer Solar System."

Sheppard and Trujillo, along with David Tholen of the University of Hawaii, are conducting the largest, deepest survey for objects beyond Neptune and the Kuiper Belt and have covered nearly 10 percent of the sky to date using some of the largest and most advanced telescopes and cameras in the world, such as the Dark Energy Camera on the NOAO 4-meter Blanco telescope in Chile and the Japanese Hyper Suprime Camera on the 8-meter Subaru telescope in Hawaii. As they find and confirm extremely distant objects, they analyze whether their discoveries fit into the larger theories about how interactions with a massive distant planet could have shaped the outer Solar System.

"Right now we are dealing with very low-number statistics, so we don't really understand what is happening in the outer Solar System," Sheppard said. "Greater numbers of extreme trans-Neptunian objects must be found to fully determine the structure of our outer Solar System."


An artist's conception of Planet X
Courtesy of Robin Dienel.


According to Sheppard, "we are now in a similar situation as in the mid-19th century when Alexis Bouvard noticed Uranus' orbital motion was peculiar, which eventually led to the discovery of Neptune."

The new objects they have submitted to the Minor Planet Center for designation include 2014 SR349, which adds to the class of the rare extreme trans-Neptunian objects. It exhibits similar orbital characteristics to the previously known extreme bodies whose positions and movements led Sheppard and Trujillo to initially propose the influence of Planet X.

Another new extreme object they found, 2013 FT28, has some characteristics similar to the other extreme objects but also some differences. The orbit of an object is defined by six parameters. The clustering of several of these parameters is the main argument for a ninth planet to exist in the outer solar system. 2013 FT28 shows similar clustering in some of these parameters (its semi-major axis, eccentricity, inclination, and argument of perihelion angle, for angle enthusiasts out there) but one of these parameters, an angle called the longitude of perihelion, is different from that of the other extreme objects, which makes that particular clustering trend less strong.
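
To make the idea of "clustering" concrete, here is a minimal sketch of how agreement among one of those angles might be quantified with circular statistics. The angle values below are invented purely for illustration and are not the measured orbital elements of the real objects.

```python
import math

# Hypothetical longitude-of-perihelion angles (degrees) for a handful of
# extreme trans-Neptunian objects -- invented values for illustration only.
angles_deg = [310, 325, 318, 340, 295]

# Circular mean and resultant length: a resultant length near 1 means the
# angles are tightly clustered; near 0 means they are spread evenly around
# the circle (i.e. no clustering).
x = sum(math.cos(math.radians(a)) for a in angles_deg) / len(angles_deg)
y = sum(math.sin(math.radians(a)) for a in angles_deg) / len(angles_deg)

mean_angle = math.degrees(math.atan2(y, x)) % 360
clustering_strength = math.hypot(x, y)

print(f"Circular mean: {mean_angle:.1f} deg, "
      f"clustering strength: {clustering_strength:.2f}")
```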

Another discovery, 2014 FE72, is the first distant Oort Cloud object found with an orbit entirely beyond Neptune. It has an orbit that takes the object so far away from the Sun (some 3000 times farther than Earth) that it is likely being influenced by forces of gravity from beyond our Solar System such as other stars and the galactic tide. It is the first object observed at such a large distance.



Contacts and sources: 
Scott Sheppard
Carnegie Institution for Science

Milky Way Had a Blowout 6 Million Years Ago


The center of the Milky Way galaxy is currently a quiet place where a supermassive black hole slumbers, only occasionally slurping small sips of hydrogen gas. But it wasn't always this way. A new study shows that 6 million years ago, when the first human ancestors known as hominins walked the Earth, our galaxy's core blazed forth furiously. The evidence for this active phase came from a search for the galaxy's missing mass.

Measurements show that the Milky Way galaxy weighs about 1-2 trillion times as much as our Sun. About five-sixths of that is in the form of invisible and mysterious dark matter. The remaining one-sixth of our galaxy's heft, or 150-300 billion solar masses, is normal matter. However, if you count up all the stars, gas and dust we can see, you only find about 65 billion solar masses. The rest of the normal matter - stuff made of neutrons, protons, and electrons - seems to be missing.
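
The missing-mass bookkeeping in that paragraph is simple arithmetic; here is a minimal sketch using the approximate figures quoted above (rounded values, not precise measurements):

```python
# Back-of-the-envelope budget for the Milky Way's normal (baryonic) matter,
# using the rounded figures quoted in the article.

total_mass_low, total_mass_high = 1e12, 2e12   # total galaxy mass, solar masses
normal_fraction = 1.0 / 6.0                    # ~one-sixth is normal matter

normal_low = total_mass_low * normal_fraction    # ~1.7e11 solar masses
normal_high = total_mass_high * normal_fraction  # ~3.3e11 solar masses

observed = 65e9   # stars, gas and dust actually counted, in solar masses

print(f"Expected normal matter: {normal_low:.1e} - {normal_high:.1e} Msun")
print(f"Observed normal matter: {observed:.1e} Msun")
print(f"Apparently missing:     {normal_low - observed:.1e} - "
      f"{normal_high - observed:.1e} Msun")
```

That shortfall, very roughly 100-270 billion solar masses on these rounded figures, is the "missing" normal matter that the hot gas described below could account for.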


This artist's impression shows the Milky Way as it may have appeared 6 million years ago during a "quasar" phase of activity. A wispy orange bubble extends from the galactic center out to a radius of about 20,000 light-years. Outside of that bubble, a pervasive "fog" of million-degree gas might account for the galaxy's missing matter of 130 billion solar masses.

Credit: Mark A. Garlick/CfA


"We played a cosmic game of hide-and-seek. And we asked ourselves, where could the missing mass be hiding?" says lead author Fabrizio Nicastro, a research associate at the Harvard-Smithsonian Center for Astrophysics (CfA) and astrophysicist at the Italian National Institute of Astrophysics (INAF).

"We analyzed archival X-ray observations from the XMM-Newton spacecraft and found that the missing mass is in the form of a million-degree gaseous fog permeating our galaxy. That fog absorbs X-rays from more distant background sources," Nicastro continues.

The astronomers used the amount of absorption to calculate how much normal matter was there, and how it was distributed. They applied computer models but learned that they couldn't match the observations with a smooth, uniform distribution of gas. Instead, they found that there is a "bubble" in the center of our galaxy that extends two-thirds of the way to Earth.

Clearing out that bubble required a tremendous amount of energy. That energy, the authors surmise, came from the feeding black hole. While some infalling gas was swallowed by the black hole, other gas was pumped out at speeds of 2 million miles per hour (1,000 km/sec).

Six million years later, the shock wave created by that phase of activity has crossed 20,000 light-years of space. Meanwhile, the black hole has run out of nearby food and gone into hibernation.

This timeline is corroborated by the presence of 6-million-year-old stars near the galactic center. Those stars formed from some of the same material that once flowed toward the black hole.

"The different lines of evidence all tie together very well," says Smithsonian co-author Martin Elvis (CfA). "This active phase lasted for 4 to 8 million years, which is reasonable for a quasar."

The observations and associated computer models also show that the hot, million-degree gas can account for up to 130 billion solar masses of material. Thus, it just might explain where all of the galaxy's missing matter was hiding: it was too hot to be seen.

More answers may come from the proposed next-generation space mission known as X-ray Surveyor. It would be able to map out the bubble by observing fainter sources, and see finer detail to tease out more information about the elusive missing mass. The European Space Agency's Athena X-ray Observatory, planned for launch in 2028, offers similar promise.

These results have been accepted for publication in The Astrophysical Journal and are available online.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.



Contacts and sources:
Christine Pulliam
Harvard-Smithsonian Center for Astrophysics (CfA) 

Nanoscale Wireless Communication

The pursuit of next-generation technologies places a premium on producing increased speed and efficiency with components built at scales small enough to function on a computer chip.

One of the barriers to advances in "on-chip" communications is the size of the electromagnetic waves at radio and microwave frequencies, which form the backbone of modern wireless technology. The relatively large waves handcuff further miniaturization.



Scientists trying to surpass these limitations are exploring the potential of optical conveyance that exploits the properties of much smaller wavelengths, such as those in the terahertz, infrared and visible frequencies.

A team of researchers at Boston College has developed the first nanoscale wireless communication system that operates at visible wavelengths, using antennas that send and receive surface plasmons with an unprecedented degree of control, according to a report in the latest edition of the Nature journal Scientific Reports.

Furthermore, the device affords an "in-plane" configuration, a prized class of two-way information transmission and recovery in a single path, according to the study, conducted by a team in the lab of Evelyn J. and Robert A. Ferris Professor of Physics Michael J. Naughton.

Surface plasmons possess unique subwavelength capabilities. Researchers trying to exploit those features have developed metallic structures, including plasmonic antennas. But a persistent problem has been the inability to achieve 'in-line' containment of the emission and collection of the electromagnetic radiation.

A Boston College team has developed a device with a three-step conversion process that changes a surface plasmon to a photon on transmission and then converts that elemental electromagnetic particle back to a surface plasmon as the receiver picks it up. The device, illustrated in this video, offers an unprecedented degree of control in this approach to faster, more efficient communications to power computers and optical technologies.

Credit: Michael J. Burns, Juan M. Merlo

The findings mark an important first step toward a nanoscale version - and visible frequency equivalent - of existing wireless communication systems, according to the researchers. Such on-chip systems could be used for high-speed communication, high efficiency plasmonic waveguiding and in-plane circuit switching - a process that is currently used in liquid crystal displays.

The device achieved communication across several wavelengths in tests using near-field scanning optical microscopy, according to lead co-author Juan M. Merlo, a post-doctoral researcher who initiated the project.

"Juan was able to push it beyond the near field - at least to four times the width of a wavelength. That is true far-field transmission and nearly every device we use on a daily basis - from our cell phones to our cars - relies on far-field transmission," said Naughton.

The device could speed the transmission of information by as much as 60 percent compared to earlier plasmonic waveguiding techniques and up to 50 percent faster than plasmonic nanowire waveguides, the team reports.

Surface plasmons are the oscillations of electrons coupled to the interface of an electromagnetic field and a metal. Among their unique abilities, surface plasmons can confine energy on that interface by fitting into spaces smaller than the waves themselves.

Researchers trying to exploit these subwavelength capabilities of surface plasmons have developed metallic structures, including plasmonic antennas. But a persistent problem has been the inability to achieve "in-line" containment of the emission and collection of the electromagnetic radiation.

The BC team developed a device with a three-step conversion process that changes a surface plasmon to a photon on transmission and then converts that elemental electromagnetic particle back to a surface plasmon as the receiver picks it up.

"We have developed a device where plasmonic antennas communicate with each other with photons transmitting between them," said Naughton. "This is done with high efficiency, with energy loss reduced by 50 percent between one antenna and the next, which is a significant enhancement over comparable architectures."

Central to the newfound control of the surface plasmons was the creation of a small gap of air between the waves and the silver surface of the device, said Merlo, who earned his PhD at Mexico's National Institute of Astrophysics, Optics and Electronics. By removing a portion of the glass substrate, the team reduced the disruptive pull of the material on the photons in transmission. Expanding and narrowing that gap proved crucial to tuning the device.

With traditional silicon waveguides, dispersion reduces information transmission speed. Without that impediment, the new device capitalizes on the ability of surface plasmons to travel at 90 to 95 percent of the speed of light along the silver surface, while the photons traveling between the antennas move at the full speed of light, Merlo said.

"Silicon-based optical technology has been around for years," said Merlo. "What we are doing is improving it to make it faster. We're developing a tool to make silicon photonics faster and greatly enhance rates of communication."



Contacts and sources:
Ed Hayward
Boston College 


In addition to Naughton and Merlo, the paper was co-authored by Professor Krzysztof Kempa, Senior Research Associate Michael J. Burns, and graduate students Nathan T. Nesbitt, Yitzi M. Calm, Aaron H. Rose, Luke D'Imperio, Chaobin Yang and Jeffrey R. Naughton.

The full report can be found at: http://www.nature.com/articles/srep31710




Sunday, August 28, 2016

3-D Printed Structures Remember, Touch and Grip; Can Also Deliver Drugs

Engineers from MIT and Singapore University of Technology and Design (SUTD) are using light to print three-dimensional structures that "remember" their original shapes. Even after being stretched, twisted, and bent at extreme angles, the structures -- from small coils and multimaterial flowers, to an inch-tall replica of the Eiffel tower -- sprang back to their original forms within seconds of being heated to a certain temperature "sweet spot."

Credit: MIT

For some structures, the researchers were able to print micron-scale features as small as the diameter of a human hair -- dimensions that are at least one-tenth as big as what others have been able to achieve with printable shape-memory materials. The team's results were published earlier this month in the online journal Scientific Reports.


In this series, a 3-D printed multimaterial shape-memory minigripper, consisting of shape-memory hinges and adaptive touching tips, grasps a cap screw.

Photo courtesy of Qi (Kevin) Ge


Nicholas X. Fang, associate professor of mechanical engineering at MIT, says shape-memory polymers that can predictably morph in response to temperature can be useful for a number of applications, from soft actuators that turn solar panels toward the sun, to tiny drug capsules that open upon early signs of infection.

"We ultimately want to use body temperature as a trigger," Fang says. "If we can design these polymers properly, we may be able to form a drug delivery device that will only release medicine at the sign of a fever."

Fang's coauthors include former MIT-SUTD research fellow Qi "Kevin" Ge, now an assistant professor at SUTD; former MIT research associate Howon Lee, now an assistant professor at Rutgers University; and others from SUTD and Georgia Institute of Technology.

Ge says the process of 3-D printing shape-memory materials can also be thought of as 4-D printing, as the structures are designed to change over the fourth dimension -- time.

"Our method not only enables 4-D printing at the micron-scale, but also suggests recipes to print shape-memory polymers that can be stretched 10 times larger than those printed by commercial 3-D printers," Ge says. "This will advance 4-D printing into a wide variety of practical applications, including biomedical devices, deployable aerospace structures, and shape-changing photovoltaic solar cells."

Need for speed

Fang and others have been exploring the use of soft, active materials as reliable, pliable tools. These new and emerging materials, which include shape-memory polymers, can stretch and deform dramatically in response to environmental stimuli such as heat, light, and electricity -- properties that researchers have been investigating for use in biomedical devices, soft robotics, wearable sensors, and artificial muscles.

A shape-memory Eiffel tower was 3-D printed using projection microstereolithography. It is shown recovering from being bent after warming on a heated Singapore dollar coin.

Photo courtesy of Qi (Kevin) Ge

Shape-memory polymers are particularly intriguing: These materials can switch between two states -- a harder, low-temperature, amorphous state, and a soft, high-temperature, rubbery state. The bent and stretched shapes can be "frozen" at room temperature, and when heated the materials will "remember" and snap back to their original sturdy form.

To fabricate shape-memory structures, some researchers have looked to 3-D printing, as the technology allows them to custom-design structures with relatively fine detail. However, using conventional 3-D printers, researchers have only been able to design structures with details no smaller than a few millimeters. Fang says this size restriction also limits how fast the material can recover its original shape.

"The reality is that, if you're able to make it to much smaller dimensions, these materials can actually respond very quickly, within seconds," Fang says. "For example, a flower can release pollen in milliseconds. It can only do that because its actuation mechanisms are at the micron scale."

Printing with light

To print shape-memory structures with even finer details, Fang and his colleagues used a 3-D printing process they have pioneered, called microstereolithography, in which they use light from a projector to print patterns on successive layers of resin.

The researchers first create a model of a structure using computer-aided design (CAD) software, then divide the model into hundreds of slices, each of which they send through the projector as a bitmap -- an image file format that represents each layer as an arrangement of very fine pixels. The projector then shines light in the pattern of the bitmap, onto a liquid resin, or polymer solution, etching the pattern into the resin, which then solidifies.
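
In essence, that slicing step turns a 3-D occupancy model into a stack of black-and-white bitmaps, one per layer of resin. Below is a minimal, purely illustrative sketch of the idea, using a voxelized sphere as a stand-in for a real CAD model; it is not the team's actual software.

```python
import numpy as np

# Voxelize a simple shape -- a sphere -- as a stand-in for a CAD model.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
model = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 <= (n / 3) ** 2

# "Slice" the model: each horizontal layer becomes a binary bitmap that a
# projector could display to cure one layer of liquid resin.
layers = [model[k].astype(np.uint8) for k in range(n)]

print(f"{len(layers)} layers of {layers[0].shape[0]}x{layers[0].shape[1]} pixels; "
      f"middle layer cures {layers[n // 2].sum()} pixels")
```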

"We're printing with light, layer by layer," Fang says. "It's almost like how dentists form replicas of teeth and fill cavities, except that we're doing it with high-resolution lenses that come from the semiconductor industry, which give us intricate parts, with dimensions comparable to the diameter of a human hair."

The researchers then looked through the scientific literature to identify an ideal mix of polymers to create a shape-memory material on which to print their light patterns. They picked two polymers, one composed of long-chain polymers, or spaghetti-like strands, and the other resembling more of a stiff scaffold. When mixed together and cured, the material can be stretched and twisted dramatically without breaking.

What's more, the material can bounce back to its original printed form, within a specific temperature range -- in this case, between 40 and 180 degrees Celsius (104 to 356 degrees Fahrenheit).

The team printed a variety of structures, including coils, flowers, and the miniature Eiffel tower, whose full-size counterpart is known for its intricate steel and beam patterns. Fang found that the structures could be stretched to three times their original length without breaking. When they were exposed to heat within the range of 40 C to 180 C, they snapped back to their original shapes within seconds.

"Because we're using our own printers that offer much smaller pixel size, we're seeing much faster response, on the order of seconds," Fang says. "If we can push to even smaller dimensions, we may also be able to push their response time, to milliseconds."

Soft grip

To demonstrate a simple application for the shape-memory structures, Fang and his colleagues printed a small, rubbery, claw-like gripper. They attached a thin handle to the base of the gripper, then stretched the gripper's claws open. When they cranked the temperature of the surrounding air to at least 40 C, the gripper closed around whatever the engineers placed beneath it.

"The grippers are a nice example of how manipulation can be done with soft materials," Fang says. "We showed that it is possible to pick up a small bolt, and also even fish eggs and soft tofu. That type of soft grip is probably very unique and beneficial."

Going forward, he hopes to find combinations of polymers to make shape-memory materials that react to slightly lower temperatures, approaching the range of human body temperatures, to design soft, active, controllable drug delivery capsules. He says the material may also be printed as soft, responsive hinges to help solar panels track the sun.

"Very often, excessive heat will build up on the back side of the solar cell, so you could use [shape-memory materials] as an actuation mechanism to tune the inclination angle of the solar cell," Fang says. "So we think there will probably be more applications that we can demonstrate."


Contacts and sources:
Abby Abazorius
MIT


This research is supported in part by the SUTD Digital Manufacturing and Design Center (DManD) and the SUTD-MIT joint postdoctoral program.

Spherical Tokamaks Could Provide Path to Limitless Fusion Energy

Among the top puzzles in the development of fusion energy is the best shape for the magnetic facility — or “bottle” — that will provide the next steps in the development of fusion reactors. Leading candidates include spherical tokamaks, compact machines that are shaped like cored apples, compared with the doughnut-like shape of conventional tokamaks. The spherical design produces high-pressure plasmas — essential ingredients for fusion reactions — with relatively low and cost-effective magnetic fields.

Creating "a star in a jar" - replicating on Earth the way the sun and stars create energy through fusion - requires a "jar" that can contain superhot plasma and is low-cost enough to be built around the world. Such a device would provide humankind with near limitless energy, ending dependence on fossil fuels for generating electricity.

This image shows a test cell for National Spherical Torus Experiment-Upgrade with tokamak in the center.

Credit: Elle Starkman/PPPL

Physicists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) say that a model for such a "jar," or fusion device, already exists in experimental form - the compact spherical tokamaks at PPPL and Culham, England. These tokamaks, or fusion reactors, could provide the design for possible next steps in fusion energy - a Fusion Nuclear Science Facility (FNSF) that would develop reactor components and also produce electricity as a pilot plant for a commercial fusion power station.


"New options for future plants"

The detailed proposal for such a "jar" is described in a paper published in August 2016 in the journal Nuclear Fusion. "We are opening up new options for future plants," said lead author Jonathan Menard, program director for the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at PPPL. The $94-million upgrade of the NSTX, financed by the U.S. Department of Energy's Office of Science, began operating last year.

Spherical tokamaks are compact devices that are shaped like cored apples, compared with the bulkier doughnut-like shape of conventional tokamaks. The increased power of the upgraded PPPL machine and the soon-to-be completed MAST Upgrade device moves them closer to commercial fusion plants that will create safe, clean and virtually limitless energy without contributing greenhouse gases that warm the Earth and with no long-term radioactive waste.

Credit: Princeton Plasma Physics Laboratory

The NSTX-U and MAST facilities "will push the physics frontier, expand our knowledge of high temperature plasmas, and, if successful, lay the scientific foundation for fusion development paths based on more compact designs," said PPPL Director Stewart Prager.

The devices face a number of physics challenges. For example, they must control the turbulence that arises when superhot plasma particles are subjected to powerful electromagnetic fields. They must also carefully control how the plasma particles interact with the surrounding walls to avoid possible disruptions that can halt fusion reactions if the plasma becomes too dense or impure. Researchers at PPPL, Culham, and elsewhere are looking at ways of solving these challenges for the next generation of fusion devices.

The fourth state of matter

The spherical design produces high-pressure plasmas - the superhot charged gas also known as the fourth state of matter that fuels fusion reactions - with relatively low and inexpensive magnetic fields. This unique capability points the way to a possible next generation of fusion experiments to complement ITER, the international tokamak that 35 nations including the United States are building in France to demonstrate the feasibility of fusion power. ITER is a doughnut-shaped tokamak that will be the largest in the world when completed within the next decade.

"The main reason we research spherical tokamaks is to find a way to produce fusion at much less cost than conventional tokamaks require," said Ian Chapman, the newly appointed chief executive of the United Kingdom Atomic Energy Authority and leader of the UK's magnetic confinement fusion research programme at the Culham Science Centre.

Center stack of the NSTX-U.
Photo courtesy of Culham Centre for Fusion Energy.

The 43-page Nuclear Fusion paper describes how the spherical design can provide the next steps in fusion energy. A key issue is the size of the hole in the center of the tokamak that holds and shapes the plasma. In spherical tokamaks, this hole can be half the size of the hole in conventional tokamaks, enabling control of the plasma with relatively low magnetic fields.

The smaller hole could be compatible with a blanket system for the FNSF that would breed tritium, a rare isotope - or form - of hydrogen. Tritium will fuse with deuterium, another isotope of hydrogen, to produce fusion reactions in next-step tokamaks.

Superconducting magnets for pilot plants

For pilot plants, the authors call for superconducting magnets to replace the primary copper magnets in the FNSF. Superconducting magnets can be operated far more efficiently than copper magnets but require thicker shielding. However, recent advances in high-temperature superconductors could lead to much thinner superconducting magnets that would require less space and reduce considerably the size and cost of the machine.

Mega Ampere Spherical Tokamak. 
Photo courtesy of Culham Centre for Fusion Energy.

Included in the paper is a description of a device called a "neutral beam injector" that will start and sustain plasma current without relying on a heating coil in the center of the tokamak. Such a coil is not suitable for continuous long-term operation. The neutral beam injector will pump fast-moving neutral atoms into the plasma and will help optimize the magnetic field that confines and controls the superhot gas.

Taken together, the paper describes concepts that strongly support a spherical facility to develop fusion components and create on Earth "a star in a jar"; the upgraded NSTX and MAST facilities will provide crucial data for determining the best path for ultimately generating electricity from fusion.



Contacts and sources:
John Greenwald
DOE/Princeton Plasma Physics Laboratory

Fusion nuclear science facilities and pilot plants based on the spherical tokamak  http://dx.doi.org/10.1088/0029-5515/56/10/106023

Saturday, August 27, 2016

Fracking Earthquake Risks Can Be Lessened




New research from the U.S. Geological Survey and the University of Colorado shows actions taken by drillers and regulators can lessen risk in the case of earthquakes likely caused by the injection of industrial wastewater deep underground.

While the earthquake that rumbled below Colorado’s eastern plains May 31, 2014, did no major damage, its occurrence surprised both Greeley residents and local seismologists. To some Greeley residents, the magnitude 3.2 earthquake felt like a large truck hitting the house.

The earthquake happened in an area that had seen no seismic activity in at least four decades, according to a new analysis by a team of Colorado researchers. It was likely caused by the injection of industrial wastewater deep underground—and, the team concluded, quick action taken by scientists, regulators and industry may have reduced the risk of larger quakes in the area.



“We were surprised to observe an earthquake right in our backyard, and we knew we needed to know more, so we quickly mobilized seismic monitoring equipment," said Will Yeck, lead author of the study. "As it turned out, our findings were not just scientifically interesting. By sharing our observations with others in real time, we were able to help inform the decisions made to mitigate these earthquakes. It was extremely rewarding to see our scientific observations have a direct and immediate impact.”

Yeck, then finishing up his Ph.D. in geophysics at the University of Colorado Boulder, and now a researcher with the USGS, worked with a team of researchers that included his Ph.D. advisor Anne Sheehan, a professor and CIRES Fellow, two other graduate students and a USGS colleague. Their work appears in the July-August issue of Seismological Research Letters.

A few minutes after 10 p.m. the night of the earthquake, Sheehan also received an email from a neighbor who had felt an earthquake at her home in Boulder. The neighbor quickly looked up the initial details through a USGS website, and relayed them to Sheehan. It looked like the earthquake was centered in the heart of oil and gas country in Weld County, where drillers sometimes disposed of wastewater deep underground—an activity now known to sometimes trigger earthquakes.

In many homes near the earthquake’s epicenter, furniture shifted in rooms. Bricks fell off at least one chimney.

“The next day was very busy,” said Sheehan.

She requested seismometers from a consortium that rapidly supplies equipment for earthquake aftershock monitoring. She began talking with her graduate students, colleagues from the USGS and the oil and gas industry, and regulators about where to deploy the equipment.

The first week of June, Yeck and fellow graduate students Jenny Nakai and Matthew Weingarten deployed six seismometers in an array around the earthquake’s epicenter to monitor further seismic activity. As data flowed in, they analyzed it in detail to pinpoint the locations and the timing of smaller earthquakes following that first one.

The geophysicists communicated their findings with state oil and gas regulators and wastewater disposal company staff, and helped those staff learn to read and understand real-time seismic data themselves.

It quickly became clear that the earthquakes were centered under one specific well: the wastewater disposal well closest to the Greeley earthquake epicenter, which happened to be the highest-rate injection well in northeast Colorado in 2013, according to data compiled by the state. The well had been pumping an average of 250,000 barrels per month since August 2013, more than a mile deep.

“Soon after the magnitude 3.2 earthquake, when the seismic network was in place, we shared earthquake locations and earthquake magnitude frequency with the Colorado Oil and Gas Conservation Commission and local energy companies to better inform them of seismic activity occurring around the wells,” said Jenny Nakai, a co-author of the new study and a graduate student in geophysics at CU Boulder. “We could see fluctuations in seismic activity as the well was shut down and cemented.”

Injection stopped on June 24 for a month, and the company that drilled the disposal well took two actions to reduce seismicity: They reduced injection rates and used cement to plug the bottom of the well, impeding fluid interaction with deeper, subsurface faults.

Injection resumed a month later at reduced rates, starting at just 5,000 barrels a day mid-July. The injection rates were slowly increased over time.

Seismicity dropped, the team found. Following mitigation, between August 13, 2014, and December 29, 2015, no earthquakes larger than magnitude 1.5 occurred near Greeley.
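
A statement like that one can be checked directly against an earthquake catalog. Here is a minimal sketch, assuming a simple list of (date, magnitude) records rather than any particular catalog format or the team's actual data:

```python
from datetime import date

# Hypothetical catalog entries: (event date, magnitude). A real analysis
# would pull these from a regional or USGS earthquake catalog.
catalog = [
    (date(2014, 6, 10), 2.6),
    (date(2014, 7, 2), 1.8),
    (date(2014, 9, 15), 1.2),
    (date(2015, 3, 4), 1.4),
]

start, end = date(2014, 8, 13), date(2015, 12, 29)
threshold = 1.5

post_mitigation = [(d, m) for d, m in catalog
                   if start <= d <= end and m > threshold]
print(f"Events above M{threshold} in the post-mitigation window: "
      f"{len(post_mitigation)}")
```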

The research team also used data from more distant seismometers, deployed well before the 2014 earthquake, to detect past seismic activity in the area. They found the Greeley earthquake sequence began roughly four months after the start of high-rate wastewater injection in 2013. Their analysis suggested that the largest observed earthquakes in the area were growing bigger over time, an observation also made at other sites of injection-induced earthquakes.

State regulators with the Colorado Oil and Gas Conservation Commission modified requirements as a result of the seismology team’s findings, Yeck and his colleagues reported in the paper. Regulators began requiring seismic monitoring at recently permitted commercial disposal wells pumping more than 10,000 barrels per day.

Greeley-area seismicity continues to be monitored both by the CU Boulder team and by an independent contractor.

Authors of “Rapid Response, Monitoring, and Mitigation of Induced Seismicity near Greeley, Colorado” in Seismological Research Letters are William Yeck and Harley Benz (U.S. Geological Survey), Anne Sheehan and Jenny Nakai (CIRES and University of Colorado Boulder), and Matthew Weingarten (Stanford University).




Contacts and sources:
Heidi Koontz
United States Geological Survey

Survey Finds Vast Majority of Americans Think USA Is Divided Over Values and Politics

Americans see their country as deeply divided over values and politics -- a gap they do not expect to diminish any time soon, according to a new survey conducted by The Associated Press-NORC Center for Public Affairs Research. But the survey also finds that most Americans report agreement on important values among members of their local communities. Seven in 10 say the news media places too much emphasis on these differences.

While few Americans say they have much in common with people of different religions or ethnic backgrounds, most of the public believes the racial, ethnic, and religious diversity of the United States makes the country stronger. Consensus and disagreement over American exceptionalism, the media's role in accentuating the country's divisions, and future levels of conflict are also explored in the survey.

Credit: Wikimedia

"Political campaigns, especially the presidential campaigns, raise both the extent and intensity of public debate," said Trevor Tompson, director of The AP-NORC Center. "Surveys like the one we have done can reveal important insights that help explain the underlying causes of recent political events."

The survey is part of AP's Divided America series, which explores the issues dividing American voters in this tumultuous presidential election year and what's driving them toward the decision they will make on November 8.

The nationwide poll of 1,008 adults utilized NORC's AmeriSpeak® Omnibus, a monthly multi-client survey using NORC's probability-based panel designed to be representative of the U.S. household population. Interviews were conducted between June 23 and 27, 2016, online and using landlines and cell phones. The AmeriSpeak panel is notable for its representativeness and high rates of participation.

Some of the poll's key findings are:
  • Eighty percent of Americans say the country is greatly divided when it comes to the most important values, and 85 percent say the United States is increasingly divided by politics.
  • While few people think the country as a whole agrees on values, most say their neighbors do share important values. Six in 10 (62 percent) say members of their local community are in agreement about values.
  • Nearly three-quarters (72 percent) say the news media puts too much focus on disagreements, and 63 percent say the same about politicians and elected officials. The entertainment industry is seen by 43 percent as overemphasizing splits within the country.
  • Most Americans regard the country's diverse population as advantageous to the nation. More than half (56 percent) say diversity makes the country stronger, while 16 percent say it weakens the country. Twenty-eight percent say diversity has no effect one way or the other. Democrats, urbanites, and Hispanics are particularly inclined to see the variety of people in the country as a plus for the United States.
  • Is the United States the best country on earth? Only 26 percent of the public agree that the United States "stands above all other countries in the world," while 55 percent of the public say the United States is "one of the greatest countries in the world along with some others." Just 19 percent think there are other countries that are better.
  • The public is closely divided over whether the good times for the country have been left behind or are yet to come. Fifty-two percent say the country's best days are in the past, while 46 percent say they are ahead of us. Blacks and Hispanics tend to have a positive outlook about the future of the country, while most whites say the good times are in the past.
  • While most people say they have a lot in common with other members of their community, few feel they share much in common with wealthier people or those with different political views.
  • Neither the Democratic nor the Republican candidate for President is regarded as particularly capable of uniting the country. However, while 43 percent say Hillary Clinton's election would lead to a more divided nation, many more, 73 percent, say the country will be more separated if Donald Trump prevails in November.


Contacts and sources:
Eric Young
NORC at The University of Chicago

Fracking Chemicals Adversely Affecting Fertility, More Than 15 Million Americans within a Mile of Fracking Operations

More than 15 million Americans live within a one-mile radius of unconventional oil and gas (UOG) operations. UOGs combine directional drilling and hydraulic fracturing, or "fracking," to release natural gas from underground rock. Scientific studies, while ongoing, are still inconclusive on the potential long-term effects fracturing has on human development.

Researchers at the University of Missouri released a study that is the first of its kind to link exposure to chemicals released during hydraulic fracturing to adverse reproductive and developmental outcomes in mice. Scientists believe that exposure to these chemicals also could pose a threat to human development.


2011-2014 Hydraulic Fracturing Water Use (cubic meters/well)
Credit: USGS

“Researchers have previously found that endocrine-disrupting chemicals (EDCs) mimic or block hormones — the chemical messengers that regulate respiration, reproduction, metabolism, growth and other biological functions,” said Susan C. Nagel, an associate professor of obstetrics, gynecology and women’s health in the School of Medicine. “Evidence from this study indicates that developmental exposure to fracking and drilling chemicals may pose a threat to fertility in animals and potentially people. Negative outcomes were observed even in mice exposed to the lowest dose of chemicals, which was lower than the concentrations found in groundwater at some locations with past oil and gas wastewater spills.”

Researchers mixed 23 oil and gas chemicals in four different concentrations to reflect concentrations ranging from those found in drinking water and groundwater to concentrations found in industry wastewater. The mixtures were added to drinking water given to pregnant mice in the laboratory until they gave birth. The female offspring of the mice that drank the chemical mixtures were compared to female offspring of mice in a control group that were not exposed. Mice exposed to drilling chemicals had lower levels of key hormones related to reproductive health compared to the control group.

Susan Nagel and her team released a study that is the first of its kind to link exposure to chemicals released during fracking to adverse reproductive and developmental outcomes in mice. Scientists believe that exposure to these chemicals also could pose a threat to human development.


“Female mice that were exposed to commonly used fracking chemicals in utero showed signs of reduced fertility, including alterations in the development of the ovarian follicles and pituitary and reproductive hormone concentrations,” said Nagel, who also serves as an adjunct associate professor of biological sciences in the MU College of Arts and Science. “These findings build on our previous research, which found exposure to the same chemicals was tied to reduced sperm counts in male mice. Our studies suggest adverse developmental and reproductive health outcomes might be expected in humans and animals exposed to chemicals in regions with oil and gas drilling activity.”

The study, “Adverse Reproductive and Developmental Health Outcomes Following Prenatal Exposure to a Hydraulic Fracturing Chemical Mixture in Female C57BL/6 Mice,” was published in the journal Endocrinology. Authors of the study include: Christopher D. Kassotis of Duke University in Durham, N.C.; John J. Bromfield of the University of Florida in Gainesville, FL; Kara C. Klemp, Chun-Xia Meng, Victoria D. Balise and Chiamaka J. Isiguzo of the University of Missouri; Andrew Wolfe of the Johns Hopkins University School of Medicine in Baltimore, MD; R. Thomas Zoeller of the University of Massachusetts Amherst in Amherst, MA; and Donald E. Tillitt of the U.S. Geological Survey’s Columbia Environmental Research Center in Columbia, MO.

The research was funded by the University of Missouri Research Council, Mizzou Advantage, a crowd-funding campaign on Experiment.com, and the U.S. Environmental Protection Agency’s STAR Fellowship Assistance Agreement awarded to Christopher D. Kassotis. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.


Contacts and sources:
Jeff Sossamon
University of Missouri - Columbia


Scientists Solve Puzzle of Converting Gaseous Carbon Dioxide to Fuel with Nanotechnology

Every year, humans advance climate change and global warming - and quite likely our own eventual extinction - by injecting about 30 billion tonnes of carbon dioxide into the atmosphere.

A team of scientists from the University of Toronto (U of T) believes they've found a way to convert all these emissions into energy-rich fuel in a carbon-neutral cycle that uses a very abundant natural resource: silicon. Silicon, readily available in sand, is the seventh most-abundant element in the universe and the second most-abundant element in the earth's crust.



The idea of converting carbon dioxide emissions to energy isn't new: for decades there has been a global race to discover a material that can efficiently convert sunlight, carbon dioxide and water or hydrogen into fuel. However, the chemical stability of carbon dioxide has made it difficult to find a practical solution.

"A chemistry solution to climate change requires a material that is a highly active and selective catalyst to enable the conversion of carbon dioxide to fuel. It also needs to be made of elements that are low cost, non-toxic and readily available," said Geoffrey Ozin, a chemistry professor in U of T's Faculty of Arts & Science, the Canada Research Chair in Materials Chemistry and lead of U of T's Solar Fuels Research Cluster.

Geoffrey Ozin and his colleagues believe they have found a way to convert CO₂ emissions into energy-rich fuel 
Credit: Brian Summers

In an article in Nature Communications published August 23, Ozin and colleagues report silicon nanocrystals that meet all the criteria. The hydride-terminated silicon nanocrystals – nanostructured hydrides for short – have an average diameter of 3.5 nanometres, and their surface area and optical absorption strength are sufficient to efficiently harvest the near-infrared, visible and ultraviolet wavelengths of sunlight. A powerful chemical-reducing agent on their surface then efficiently and selectively converts gaseous carbon dioxide to gaseous carbon monoxide.

The potential result: energy without harmful emissions.

"Making use of the reducing power of nanostructured hydrides is a conceptually distinct and commercially interesting strategy for making fuels directly from sunlight," said Ozin.

The U of T Solar Fuels Research Cluster is working to increase the activity of the process, scale it up, and boost its rate of fuel production. The goal is a laboratory demonstration unit and, if successful, a pilot solar refinery.



Contacts and sources:
Sean Bettam
Kim Luke
University of Toronto 

Hurricanes Are Worse, But Experience, Gender and Politics Determine Who Believes It


Objective measurements of storm intensity show that North Atlantic hurricanes have grown more destructive in recent decades. But coastal residents' views on the matter depend less on scientific fact and more on their gender, belief in climate change and recent experience with hurricanes, according to a new study by researchers at Princeton University, Auburn University-Montgomery, Louisiana State University and Texas A&M University.

The researchers plumbed data from a survey of Gulf Coast residents and found that the severity of the most recent storm a person weathered tended to play the largest role in determining whether they believed storms were getting worse over time, according to the study published in the International Journal of Climatology. The survey was conducted in 2012 before Hurricane Sandy, the second-most expensive hurricane in history, caused $68 billion in damage.

Princeton University-led research found that people's view of future storm threat is based on their hurricane experience, gender and political affiliation, despite ample evidence that Atlantic hurricanes are getting stronger. This could affect how policymakers and scientists communicate the increasing deadliness of hurricanes as a result of climate change. The figure shows the wind speed of the latest hurricane landfall (left) on the U.S. Gulf Coast by county up to 2012, with red indicating the strongest winds. The data on the right show, for the same area by county, public agreement with the statement that storms have been strengthening in recent years, which was posed during a 2012 survey. Blue indicates the strongest agreement, while red indicates the least agreement.

Image courtesy of Ning Lin, Department of Civil and Environmental Engineering

Respondents' opinions also strongly differed depending on whether they were male or female, whether they believed in climate change and whether they were a Democrat or a Republican. For instance, people who believe in climate change were far more likely to perceive the increasing violence of storms than those who did not. The researchers noted that because climate change has become a politically polarizing issue, party affiliation also was an indicator of belief in strengthening storms.

"Understanding how people in coastal regions perceive the threat is important because it influences whether they will take the necessary actions to address that threat," said Ning Lin, the senior researcher on the study and a Princeton assistant professor of civil and environmental engineering.

"What you see is that there is often a gap between the reality of the storm trends and how people interpret those trends," said Siyuan Xian, a doctoral candidate in Lin's lab and co-lead author of the new paper.

While scientists continue to debate the impact of climate change on the frequency and strength of hurricanes, numerous studies of objective measures -- such as wind speed, storm-surge height and economic damage -- show that hurricanes are stronger than they were even a few decades ago.

For instance, eight of the 10 most economically damaging hurricanes since 1980 have occurred since 2004, according to the National Oceanic and Atmospheric Administration (NOAA). In constant dollars, Hurricanes Katrina (2005) and Sandy caused nearly $154 billion and $68 billion in damage, respectively, according to NOAA.

In comparison, the costliest storms of the 1990s, Hurricanes Andrew (1992) and Floyd (1999), caused $46 billion and $9 billion in damage (adjusted for inflation), respectively. Hurricane Patricia in 2015 was the strongest Western Hemisphere storm in recorded history with maximum sustained winds of 215 miles per hour.
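
Putting damage figures from different decades on a common footing is a simple price-index calculation, sketched below in Python. The CPI values and nominal damage figures are rough placeholders rather than official NOAA numbers; only the arithmetic is the point.

# Minimal sketch of converting nominal storm damage into constant (2016) dollars
# by scaling with a consumer-price-index ratio. All values are rough placeholders.
cpi = {1992: 140.3, 1999: 166.6, 2005: 195.3, 2012: 229.6, 2016: 240.0}

def to_constant_dollars(nominal_billions, damage_year, base_year=2016):
    """Scale a nominal damage figure by the ratio of price levels."""
    return nominal_billions * cpi[base_year] / cpi[damage_year]

storms = {
    "Andrew (1992)":  (26.5, 1992),   # nominal damage in billions, illustrative
    "Floyd (1999)":   (6.5, 1999),
    "Katrina (2005)": (125.0, 2005),
    "Sandy (2012)":   (65.0, 2012),
}

for name, (nominal, year) in storms.items():
    print(f"{name}: about ${to_constant_dollars(nominal, year):.0f} billion in 2016 dollars")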

As the intensity of storms has increased, government agencies and coastal residents must grapple with preparing for the next landfall. Residents must decide, for example, whether to invest in storm shutters, roof and wall fortifications, flood-proof flooring and other structural buffers. On a larger scale, coastal planners need voter support to implement land-use policies that take the threat into account and to invest taxpayer dollars into protection measures such as seawalls or sand dunes.

Understanding how people perceive the threat of hurricanes is crucial for predicting whether they will take them seriously, Xian said. Six hurricanes form each year in the North Atlantic on average, although as many as 15 have developed in a single hurricane season.

"If you perceive a higher risk, you will be more likely to support policies and take action to ameliorate the impacts," Xian said. "We wanted to know how people perceive the threat of hurricanes and what influences their perceptions. This information will help guide how agencies communicate the risk, and what policies and actions are proposed to make communities resilient to these storms."

Lin and Xian worked with co-authors Wanyun Shao, assistant professor of geography at Auburn University-Montgomery; Barry Keim, professor of climatology at Louisiana State University; and Kirby Goidel, a Texas A&M professor of communication.

To explore what influences perceptions of hurricane threat, the researchers analyzed data from the 2012 Gulf Coast Climate Change Survey, which gauged Gulf Coast residents' beliefs about hurricane trends from 1992 to 2011. Louisiana State University and NOAA conducted the survey.

The survey focused on residents of Texas, Louisiana, Mississippi, Alabama and Florida, who lived in areas of the Gulf Coast that experienced at least one hurricane landfall over the 20-year period from 1992 to 2011.

In addition to probing beliefs about hurricane trends, the survey gathered information on respondents' gender, political affiliations, opinions on climate change and other characteristics that might influence their perspective on hurricane trends.
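
As a rough illustration of the kind of analysis such a survey supports, the sketch below fits a logistic regression predicting agreement that storms are strengthening from gender, party affiliation, climate-change belief and the wind speed of the most recently experienced landfall. The variables and data are simulated; this is not the authors' model.

# Illustrative sketch: a logistic regression relating belief that hurricanes are
# strengthening to respondent characteristics. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: female (0/1), republican (0/1),
# believes_cc (belief in climate change, 0/1), wind speed (mph) of last landfall.
female = rng.integers(0, 2, n)
republican = rng.integers(0, 2, n)
believes_cc = rng.integers(0, 2, n)
last_wind = rng.uniform(75, 160, n)

# Simulated outcome loosely echoing the paper's qualitative findings.
logit = -2.0 + 0.5 * female - 0.8 * republican + 1.2 * believes_cc + 0.02 * last_wind
believes_stronger = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([female, republican, believes_cc, last_wind])
model = LogisticRegression(max_iter=1000).fit(X, believes_stronger)

for name, coef in zip(["female", "republican", "believes_cc", "last_wind"], model.coef_[0]):
    print(f"{name}: coefficient = {coef:+.3f}")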

The researchers' results mirrored a trend seen in other studies of extreme climate events, Lin said.

"The increasing power of Atlantic hurricanes is often connected to climate change, but studies have shown that Republicans and males tend to be more skeptical of climate change," Lin said. "We found a strong link between disbelief in climate change and disbelief that storms are getting worse -- they tend to come as a package."

The researchers were also able to tease out which elements of the storms respondents had experienced left the biggest impression on them. For instance, while storm surges tend to cause the most property damage, gale-force winds were more likely to convince people that hurricanes are getting stronger.

Behavioral scientists have long hypothesized that the storm a person experienced most recently has an outsized influence on their perception of long-term climate trends, said Sander van der Linden, a postdoctoral researcher and lecturer in Princeton's Woodrow Wilson School of Public and International Affairs and director of the Social and Environmental Decision-Making (SED) Lab. Van der Linden is familiar with the research but had no role in it.

"This study provides strong empirical evidence of this phenomenon," said van der Linden, who studies public policy from a behavioral-science perspective. "This finding is important because it suggests that people may not be thinking about long-term changes in climate patterns but rather are paying attention to more salient variations in and impacts of short-term local weather."

The study's authors said this information could help governments communicate hurricane risk more effectively to the public. Taking into account that people are more likely to respond to the threat of high winds, for instance, could help agencies such as the Federal Emergency Management Agency motivate the public to adequately prepare for storms. The researchers also recommended that public agencies work to further educate the public about the risk posed by storm surge.

"Public opinion can make or break policies intended to address climate change and ameliorate damage from storms," Lin said. "Tapping into the state of current perceptions and what drives them will be critical for governments around the world as the impacts of climate change are increasingly felt."

The researchers are currently conducting other studies related to climate-change perception, including research on flood adaptation and insurance-purchasing behavior in counties along the Gulf Coast, as well as worldwide perceptions of climate change and the willingness to adopt green-energy technologies.




Contacts and sources:
Steven Schultz
Princeton University


The paper, "Understanding perceptions of changing hurricane strength along the US Gulf coast," was published online June 20 by the International Journal of Climatology. Support for the research was provided in part by NOAA's Gulf of Mexico Coastal Storm Program, Texas Sea Grant, Louisiana Sea Grant, Florida Sea Grant and Mississippi-Alabama Sea Grant Consortium.

Friday, August 26, 2016

Acoustic Prism Invented, Can Split a Sound into Its Constituent Frequencies

Ecole Polytechnique Fédérale de Lausanne (EPFL) scientists have invented a new type of “acoustic prism” that can split a sound into its constituent frequencies. Their acoustic prism has applications in sound detection.

Some 350 years ago, Newton showed that a prism could split white light into the colors of the rainbow, with each color corresponding to a different wave frequency. Such an “optical prism” relies on a physical phenomenon (refraction) to split light into its constituent frequencies.

Now, a prism exists for sound. Hervé Lissek and his team at EPFL have invented an "acoustic prism" that splits sound into its constituent frequencies using physical properties alone. Its applications in sound detection are published in the Journal of the Acoustical Society of America.



The acoustic prism is entirely man-made, unlike optical prisms, which occur naturally in the form of water droplets. Decomposing sound into its constituent frequencies relies on the physical interaction between a sound wave and the structure of the prism. The acoustic prism modifies the propagation of each individual frequency of the sound wave, without any need for computations or electronic components.

The acoustic prism

The acoustic prism looks like a rectangular tube made of aluminum, complete with ten perfectly aligned holes along one side. Each hole leads to an air-filled cavity inside the tube, and a membrane is placed between two consecutive cavities.

When sound is directed into the tube at one end, high-frequency components of the sound escape out of the tube through the holes near the source, while low frequencies escape through the holes that are further away, towards the other end of the tube. Like light through an optical prism, the sound is dispersed, with the dispersion angle depending on the wave’s frequency.

The membranes are key, since they vibrate and transmit the sound to the neighboring cavities with a delay that depends on frequency. The delayed sound then leaks through the holes and towards the exterior, dispersing the sound.
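
One simple way to picture the dispersion, not taken from the EPFL paper, is to treat the tube as a leaky waveguide in which each cell adds a frequency-dependent delay; the exit angle then follows from the ratio of the speed of sound in air to the phase speed along the tube. In the Python sketch below, the cell spacing, membrane resonance and delay model are all invented placeholders.

# Back-of-the-envelope leaky-waveguide picture of the acoustic prism, NOT the
# EPFL design equations. Each cell of length d is assumed to add a delay tau(f)
# that grows with frequency, so the phase speed along the tube v_p = d / tau(f)
# is frequency dependent, and the wave leaking from the row of holes leaves at
# an angle theta from the tube axis with cos(theta) = c / v_p.
import numpy as np

c = 343.0    # speed of sound in air, m/s
d = 0.02     # assumed cell spacing, m (placeholder)
f0 = 2500.0  # assumed membrane resonance, Hz (placeholder)

def cell_delay(f):
    """Hypothetical frequency-dependent delay per cell, in seconds."""
    return 2e-5 * (1.0 + 1.5 * (f / f0) ** 2)  # invented functional form

for f in (500, 1000, 1500, 2000):
    v_p = d / cell_delay(f)  # phase speed along the tube
    theta = np.degrees(np.arccos(np.clip(c / v_p, -1.0, 1.0)))
    print(f"{f:5d} Hz -> exits about {theta:5.1f} degrees from the tube axis")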

Angular detection by frequency

Credit:  EPFL


To take the concept a step further, the researchers realized that they could use the acoustic prism as an antenna to locate the direction of a distant sound by simply measuring its frequency. Since each dispersion angle corresponds to a particular frequency, it’s enough to measure the main frequency component of an incoming sound to determine where it is coming from, without actually moving the prism.
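
Continuing the same placeholder picture, the sketch below shows how a bearing could be estimated in software: take the spectrum of a recording, find the dominant frequency, and invert the assumed frequency-to-angle relationship. None of the constants come from the EPFL design.

# Estimate a bearing from the dominant frequency of a recording, using the same
# invented per-cell delay model as in the previous sketch.
import numpy as np

c, d, f0 = 343.0, 0.02, 2500.0  # same placeholder constants as above

def cell_delay(f):
    return 2e-5 * (1.0 + 1.5 * (f / f0) ** 2)  # invented per-cell delay, seconds

# Synthetic "recording": a 1200 Hz tone plus a little noise.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 1200 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spectrum)]  # main frequency component

v_p = d / cell_delay(dominant)
bearing = np.degrees(np.arccos(np.clip(c / v_p, -1.0, 1.0)))
print(f"dominant frequency ~ {dominant:.0f} Hz -> bearing ~ {bearing:.1f} degrees from the tube axis")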

The principle of the acoustic prism relies on the design of cavities, ducts and membranes, which can be easily fabricated and even miniaturized, possibly leading to cost-effective angular sound detection without resorting to expensive microphone arrays or moving antennas.




Contacts and sources:
Hillary Sanctuary
Ecole Polytechnique Fédérale de Lausanne

HMS Bounty Mutineers' Pigtails To Undergo Forensic DNA Analysis

Ten pigtails of hair thought to be from seven mutineers of “Mutiny on the Bounty” fame and three of their female Polynesian companions will be analysed in a new collaboration between the Pitcairn Islands Study Centre at Pacific Union College (California, US) and the forensic DNA group at King’s College London (UK).

The forensic DNA group at King’s has been sent hair strands from the ten pigtails, which are currently on display in the California-based centre, to help establish as much information as possible on their origins.

As the pigtails purportedly date back to the pre-1800s, the King’s team will first attempt to extract DNA from the historical hair samples after cleaning the outside and then digesting the hair matrix using a chemical process. Nuclear DNA is not found in hair shafts, only in the roots, which are not available here; however, mitochondrial DNA may be present. If sufficient mitochondrial DNA can be collected, the first step will be to investigate the ancestral origins of the owners of the pigtails.


Hair from the collection of pigtails donated to the Pitcairn Islands Study Centre will be analyzed by forensics experts at King's College London.
Credit: Pitcairn Islands Study Centre

Unlike nuclear DNA, mitochondrial DNA does not discriminate between all individuals, since people sharing a common maternal ancestor will also share a similar profile. However, this type of DNA can provide some indication of maternal geographic origin, e.g. whether someone is likely to be of European descent, so the team will aim to establish whether the hair samples do indeed come from seven European and three Polynesian individuals, as the documentation accompanying the samples suggests.
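
As a loose illustration of how maternal geographic origin can be read from mitochondrial DNA, the Python sketch below scores a sample's control-region variants against a small table of haplogroup-associated motifs. Both the variant calls and the motif table are simplified, hypothetical stand-ins, not the markers or methods the King's team will actually use.

# Loose illustration only: score a sample's mitochondrial control-region variants
# against a tiny, hypothetical table of haplogroup-associated motifs. Real
# haplogroup assignment relies on curated references and many more sites.
reference_motifs = {
    "European-associated motif (illustrative)":  {"16126C", "16294T"},
    "Polynesian-associated motif (illustrative)": {"16189C", "16217C", "16261T"},
}

def score_sample(observed_variants):
    """Return the fraction of each motif's sites seen in the sample."""
    observed = set(observed_variants)
    return {name: len(motif & observed) / len(motif)
            for name, motif in reference_motifs.items()}

# Hypothetical variant calls from one pigtail's hair-shaft mtDNA.
sample = ["16189C", "16217C", "16261T", "73G"]

for name, fraction in score_sample(sample).items():
    print(f"{name}: {fraction:.0%} of motif sites matched")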

Further, more detailed identification will require genealogical methods to trace the ancestors of the pigtail owners, to be able to link samples to names from historical records and other sources of information. A lot has been written about the possible descendants of the mutineers, but this information will not be helpful with regard to the male mutineers; instead, their maternal line will need to be traced. The study will therefore try to identify their maternal ancestors, such as their respective mothers and maternal grandmothers, and trace other direct female descendants down to individuals living today.

Dr Denise Syndercombe-Court, project lead from the Analytical and Environmental Sciences Division at King’s College London, said: “First, we will have to determine whether we can recover mitochondrial DNA of appropriate quality to be analysed. The hairs, if from the mutineers, are over two hundred years old and we have no idea what environments they might have been exposed to in the intervening time.”

“Potentially as problematic will be the genealogical research as civil registration in the UK did not start until 1837, some 50 years after the mutiny and so, at best, the death of the mother may be listed in these records but other processes would need to be used to gather more information. Because of the patrilineal transmission of surnames we would not even expect to find someone who believes they may be linked to the mutineers and so we will have to depend on this research and hope for the agreed consent from any identified living descendant to act as a modern day reference. We do not anticipate that this will be easy and it will require other interested parties to get involved in this part of the study.”

Credit: Pitcairn Islands Study Centre

Herbert Ford, Director of the Pitcairn Islands Study Centre, said: “This hair is a gift from Joy Allward, wife of the late Maurice Allward of Hatfield, UK, who successfully bid for the hair at a Sotheby’s auction in London in 2000.”

“If the tests and genealogical studies of this hair authenticate that it is of seven of the nine mutineers who hid out from British justice on Pitcairn Island in 1790, it will be the only tangible physical evidence of their having existed. There is only one known mutineer grave on Pitcairn, that of John Adams. Of the whereabouts of the remains of the eight others, we can only speculate.”

The pigtails on display in the US were housed in a nineteenth-century cylindrical tobacco tin. Also with the locks of hair was a handkerchief said to have belonged to Sarah, the daughter of William McCoy, one of the Bounty mutineers.

A worn, faded label with the pigtails notes that it is attached to the hair of William McCoy. The mutineer McCoy died on Pitcairn Island in 1800. Notes written on the label also state that the pigtails are of seven of the mutineers of H.M.S. Bounty and “also that of three of the Tahitian women,” who accompanied the mutineers to Pitcairn in 1789.

Further information on the label notes that “The holders of the hair have been (1) Teio, wife of McCoy. (2) Mrs. Sarah Christian. (3) F. G. Mitchell. Given to F. G. Mitchell, 22nd June 1849 (Jubilee Day) by Mrs. Sarah Nobbs.”

The story of the mutiny that took place on the ship H.M.S. Bounty in the South Pacific Ocean in 1789 was made famous by a trilogy of books published in the 1930s. Over the following four decades, a number of Hollywood motion pictures about the Bounty mutiny were shown worldwide.



Contacts and sources:
Jenny Gimpel
King’s College London (UK)
Pitcairn Islands Study Centre