Friday, April 20, 2018

New Way of Exploring the Afterglow from the Big Bang Found

Researchers have developed a new way to improve our knowledge of the Big Bang by measuring radiation from its afterglow, called the cosmic microwave background radiation. The new results predict the maximum bandwidth of the universe, that is, the maximum rate at which any change can occur in the universe.

The cosmic microwave background (CMB) is a reverberation or afterglow left from when the universe was about 300,000 years old. It was first discovered in 1964 as a ubiquitous faint noise in radio antennas. In the past two decades, satellite-based telescopes have started to measure it with great accuracy, revolutionizing our understanding of the Big Bang.

The detailed, all-sky picture of the infant universe created from nine years of WMAP data. The image reveals 13.77-billion-year-old temperature fluctuations in the cosmic microwave background (shown as color differences) that correspond to the seeds that grew to become the galaxies. The signal from our galaxy was subtracted using the multi-frequency data. This image shows a temperature range of ±200 microKelvin.
Credit: NASA / WMAP Science Team

Achim Kempf, a professor of applied mathematics at the University of Waterloo and Canada Research Chair in the Physics of Information, led the work to develop the new calculation, jointly with Aidan Chatwin-Davies and Robert Martin, his former graduate students at Waterloo.

“It’s like video on the Internet,” said Kempf. “If you can measure the CMB with very high resolution, this can tell you about the bandwidth of the universe, in a similar way to how the sharpness of the video image on your Skype call tells you about the bandwidth of your internet connection.”

The study appears in a special issue of Foundations of Physics dedicated to the material Kempf presented at the Vatican Observatory in Rome last year. The international workshop, entitled Black Holes, Gravitational Waves and Spacetime Singularities, gathered 25 leading physicists from around the world to present, collaborate and inform on the latest theoretical progress and experimental data on the Big Bang. Kempf’s invitation was the result of an earlier paper of his in Physical Review Letters, a leading journal in the field.

“This kind of work is highly collaborative,” said Kempf, also an affiliate at the Perimeter Institute for Theoretical Physics. “It was great to see at the conference how experimentalists and theoreticians inspire each other’s work.”

While at the Vatican, Kempf and other researchers in attendance also shared their work with the Pope.

“The Pope has a great sense of humor and had a good laugh with us on the subject of dark matter,” said Kempf.

Teams of astronomers are currently working on even more accurate measurements of the cosmic microwave background. By using the new calculations, these upcoming measurements might reveal the value of the universe’s fundamental bandwidth, thereby telling us also about the fastest thing that ever happened, the Big Bang.

Contacts and sources:
Pamela Smyth
University of Waterloo

Thursday, April 19, 2018

Atoms May Hum a Tune from Grand Cosmic Symphony: "Striking Resemblance to Universe in a Microcosm"

Researchers playing with a cloud of ultracold atoms uncovered behavior that bears a striking resemblance to the universe in microcosm. Their work, which forges new connections between atomic physics and the sudden expansion of the early universe, was published April 19 in Physical Review X and featured in Physics.

"From the atomic physics perspective, the experiment is beautifully described by existing theory," says Stephen Eckel, an atomic physicist at the National Institute of Standards and Technology (NIST) and the lead author of the new paper. "But even more striking is how that theory connects with cosmology."

In several sets of experiments, Eckel and his colleagues rapidly expanded the size of a doughnut-shaped cloud of atoms, taking snapshots during the process. The growth happens so fast that the cloud is left humming, and a related hum may have appeared on cosmic scales during the rapid expansion of the early universe--an epoch that cosmologists refer to as the period of inflation.

An expanding, ring-shaped cloud of atoms shares several striking features with the early universe. 
Credit: E. Edwards/JQI

The work brought together experts in atomic physics and gravity, and the authors say it is a testament to the versatility of the Bose-Einstein condensate (BEC)--an ultracold cloud of atoms that can be described as a single quantum object--as a platform for testing ideas from other areas of physics.

"Maybe this will one day inform future models of cosmology," Eckel says. "Or vice versa. Maybe there will be a model of cosmology that's difficult to solve but that you could simulate using a cold atomic gas."

It's not the first time that researchers have connected BECs and cosmology. Prior studies mimicked black holes and searched for analogs of the radiation predicted to pour forth from their shadowy boundaries. The new experiments focus instead on the BEC's response to a rapid expansion, a process that suggests several analogies to what may have happened during the period of inflation.

The first and most direct analogy involves the way that waves travel through an expanding medium. Such a situation doesn't arise often in physics, but it happened during inflation on a grand scale. During that expansion, space itself stretched any waves to much larger sizes and stole energy from them through a process known as Hubble friction.

In one set of experiments, researchers spotted analogous features in their cloud of atoms. They imprinted a sound wave onto their cloud--alternating regions of more atoms and fewer atoms around the ring, like a wave in the early universe--and watched it disperse during expansion. Unsurprisingly, the sound wave stretched out, but its amplitude also decreased. The math revealed that this damping looked just like Hubble friction, and the behavior was captured well by calculations and numerical simulations.
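The damping the team observed can be illustrated with a toy model. The sketch below is not the paper's actual simulation; it simply treats one sound-wave mode on the ring as an oscillator whose frequency falls as the ring radius R(t) grows and whose amplitude is damped by a friction term proportional to the expansion rate, the analog of Hubble friction. All parameter values are arbitrary illustrative choices.

```python
import numpy as np

def simulate(r0=1.0, r1=3.0, t_expand=2.0, omega0=20.0, dt=1e-4, t_end=4.0):
    """Toy analog: mode amplitude a(t) obeys a'' + 2(R'/R)a' + w(t)^2 a = 0,
    where the 2(R'/R) term plays the role of Hubble friction."""
    n = int(t_end / dt)
    a, adot = 1.0, 0.0            # mode amplitude and its time derivative
    amps = []
    for i in range(n):
        t = i * dt
        if t < t_expand:          # linear ramp of the ring radius, then hold
            r, rdot = r0 + (r1 - r0) * t / t_expand, (r1 - r0) / t_expand
        else:
            r, rdot = r1, 0.0
        omega = omega0 * r0 / r   # wavelength stretches as the ring grows
        hubble = rdot / r         # expansion-rate "friction" coefficient
        addot = -2.0 * hubble * adot - omega**2 * a
        adot += addot * dt        # semi-implicit Euler step (stable for oscillators)
        a += adot * dt
        amps.append(a)
    return np.array(amps)

amps = simulate()
# The oscillation amplitude after the expansion is smaller than before it:
print(abs(amps[:5000]).max(), abs(amps[-10000:]).max())
```

In this toy, the wave both stretches (its frequency drops with the ring radius) and loses amplitude while the ring is growing, mirroring the two effects the experiment saw.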

"It's like we're hitting the BEC with a hammer," says Gretchen Campbell, the NIST co-director of the Joint Quantum Institute (JQI) and a coauthor of the paper, "and it's sort of shocking to me that these simulations so nicely replicate what's going on."

In a second set of experiments, the team uncovered another, more speculative analogy. For these tests they left the BEC free of any sound waves but provoked the same expansion, watching the BEC slosh back and forth until it relaxed.

In a way, that relaxation also resembled inflation. Some of the energy that drove the expansion of the universe ultimately ended up creating all of the matter and light around us. And although there are many theories for how this happened, cosmologists aren't exactly sure how that leftover energy got converted into all the stuff we see today.

In the BEC, the energy of the expansion was quickly transferred to things like sound waves traveling around the ring. Some early guesses for why this was happening looked promising, but they fell short of predicting the energy transfer accurately. So the team turned to numerical simulations that could capture a more complete picture of the physics.

What emerged was a complicated account of the energy conversion: After the expansion stopped, atoms at the outer edge of the ring hit their new, expanded boundary and got reflected back toward the center of the cloud. There, they interfered with atoms still traveling outward, creating a zone in the middle where almost no atoms could live. Atoms on either side of this inhospitable area had mismatched quantum properties, like two neighboring clocks that are out of sync.

The situation was highly unstable and eventually collapsed, leading to the creation of vortices throughout the cloud. These vortices, or little quantum whirlpools, would break apart and generate sound waves that ran around the ring, like the particles and radiation left over after inflation. Some vortices even escaped from the edge of the BEC, creating an imbalance that left the cloud rotating.

Unlike the analogy to Hubble friction, the complicated story of how sloshing atoms can create dozens of quantum whirlpools may bear no resemblance to what goes on during and after inflation. But Ted Jacobson, a coauthor of the new paper and a physics professor at the University of Maryland specializing in black holes, says that his interaction with atomic physicists yielded benefits outside these technical results.

"What I learned from them, and from thinking so much about an experiment like that, are new ways to think about the cosmology problem," Jacobson says. "And they learned to think about aspects of the BEC that they would never have thought about before. Whether those are useful or important remains to be seen, but it was certainly stimulating."

Eckel echoes the same thought. "Ted got me to think about the processes in BECs differently," he says, "and any time you approach a problem and you can see it from a different perspective, it gives you a better chance of actually solving that problem."

Future experiments may study the complicated transfer of energy during expansion more closely, or even search for further cosmological analogies. "The nice thing is that from these results, we now know how to design experiments in the future to target the different effects that we hope to see," Campbell says. "And as theorists come up with models, it does give us a testbed where we could actually study those models and see what happens."

Contacts and sources:
Chris Cesare
University of Maryland

Citation: A Rapidly Expanding Bose-Einstein Condensate: An Expanding Universe in the Lab
S. Eckel, A. Kumar, T. Jacobson, I. B. Spielman, and G. K. Campbell
Phys. Rev. X 8, 021021 – Published 19 April 2018

Ancient Humans Linked to Unprecedented Wave of Large-Mammal Extinctions

Homo sapiens, Neanderthals and other recent human relatives may have begun hunting large mammal species down to size — by way of extinction — at least 90,000 years earlier than previously thought, says a new study published in the journal Science.

Elephant-dwarfing woolly mammoths, elephant-sized ground sloths and various saber-toothed cats highlighted the array of massive mammals roaming Earth between 2.6 million and 12,000 years ago. Prior research suggested that such large mammals began disappearing faster than their smaller counterparts — a phenomenon known as size-biased extinction — in Australia around 35,000 years ago.

With the help of emerging data from older fossil and geologic records, the new study estimated that this size-biased extinction started at least 125,000 years ago in Africa. By that point, the average African mammal was already 50 percent smaller than those on other continents, the study reported, despite the fact that larger landmasses can typically support larger mammals.

A herd of Columbian mammoths move across the plains in this Morrill Hall mural by Mark Marcuson.

Troy Fedderson | University Communication

But as humans migrated out of Africa, other size-biased extinctions began occurring in regions and on timelines that coincide with known human migration patterns, the researchers found. Over time, the average body size of mammals on those other continents approached and then fell well below Africa’s. Mammals that survived during the span were generally far smaller than those that went extinct.

The magnitude and scale of the recent size-biased extinction surpassed any other recorded during the last 66 million years, according to the study, which was led by the University of New Mexico’s Felisa Smith.

“It wasn’t until human impacts started becoming a factor that large body sizes made mammals more vulnerable to extinction,” said the University of Nebraska-Lincoln’s Kate Lyons, who authored the study with Smith and colleagues from Stanford University and the University of California, San Diego. “The anthropological record indicates that Homo sapiens are identified as a species around 200,000 years ago, so this occurred not very long after the birth of us as a species. It just seems to be something that we do.

Kate Lyons, assistant professor of biological sciences
Credit: UNL

“From a life-history standpoint, it makes some sense. If you kill a rabbit, you’re going to feed your family for a night. If you can kill a large mammal, you’re going to feed your village.”

By contrast, the research team found little support for the idea that climate change drove size-biased extinctions during the last 66 million years. Large and small mammals seemed equally vulnerable to temperature shifts throughout that span, the authors reported.

A life-sized display of Archie, a Columbian mammoth, is on display at the University of Nebraska State Museum in Morrill Hall. A new study suggests that such massive mammals were much more likely than their smaller counterparts to go extinct in regions occupied by ancient humans.
Troy Fedderson | University Communication

“If climate were causing this, we would expect to see these extinction events either sometimes (diverging from) human migration across the globe or always lining up with clear climate events in the record. And they don’t do either of those things.”

Off the face of the Earth

The team also looked ahead to examine how potential mammal extinctions could affect the world’s biodiversity. To do so, it posed a question: What would happen if the mammals currently listed as vulnerable or endangered were to go extinct within the next 200 years?

In that scenario, Lyons said, the largest remaining mammal would be the domestic cow. The average body mass would plummet to less than six pounds — roughly the size of a Yorkshire terrier.

“If this trend continues, and all the currently threatened (mammals) are lost, then energy flow and taxonomic composition will be entirely restructured,” said Smith, professor of biology at New Mexico. “In fact, mammalian body size around the globe will revert to what the world looked like 40 million years ago.”

The University of Nebraska State Museum's Elephant Hall highlights the differences between current elephants (left) and mammoths (middle and right). Pictured (from left) are an African elephant; an Asian elephant with a juvenile; a dwarf mammoth; Archie, a Columbian mammoth; and a Jefferson mammoth.

Troy Fedderson | University Communication

Lyons said that restructuring could have “profound implications” for the world’s ecosystems. Large mammals tend to be herbivores, devouring large quantities of vegetation and effectively transporting the associated nutrients around an ecosystem. If they continue to disappear, she said, the remaining mammals would prove poor stand-ins for important ecological roles.

“The kinds of ecosystem services that are provided by large mammals are very different than what you get from small mammals,” Lyons said. “Ecosystems are going to be very, very different in the future. The last time mammal communities looked like that and had a mean body size that small was after the extinction of the dinosaurs.

“What we’re doing is potentially erasing 40 to 45 million years of mammal body-size evolution in a very short period of time.”

Smith and Lyons authored the study with Jon Payne of Stanford University and Rosemary Elliott Smith from the University of California, San Diego. The team received support from the National Science Foundation.

Contacts and sources:
Scott Schrage
University of Nebraska-Lincoln

Searching for Our Sun's Lost Siblings among 340,000 Stars

A Sydney-led international group of astronomers has revealed the "DNA" of more than 340,000 stars in the first major data release from the Galactic Archaeology survey GALAH, which searches for clues about how galaxies formed and evolved.

An Australian-led group of astronomers working with European collaborators has revealed the “DNA” of more than 340,000 stars in the Milky Way, which should help them find the siblings of the Sun, now scattered across the sky.

Credit: Australian Astronomical Observatory

This is a major announcement from an ambitious Galactic Archaeology survey, called GALAH, launched in late 2013 as part of a quest to uncover the formation and evolution of galaxies. When complete, GALAH will investigate more than a million stars.

A schematic of the HERMES instrument showing the light path of how star light from the AAT telescope is split into four different channels. Credit: AAO. Image top of page: HERMES, the new spectrograph built at the AAO, uses volume phase holographic (VPH) gratings to provide optimised spectra in blue, green and red light and a fourth band in infra-red light. Credit: N.A. Sharp, NOAO/NSO/Kitt Peak FTS/AURA/NSF.

The GALAH survey used the HERMES spectrograph at the Australian Astronomical Observatory’s (AAO) 3.9-metre Anglo-Australian Telescope near Coonabarabran, NSW, to collect spectra for the 340,000 stars.

The GALAH Survey today makes its first major public data release.

The ‘DNA’ collected traces the ancestry of stars, showing astronomers how the Universe went from having only hydrogen and helium - just after the Big Bang - to being filled today with all the elements we have here on Earth that are necessary for life.

“No other survey has been able to measure as many elements for as many stars as GALAH,” said Dr Gayandhi De Silva, of the University of Sydney and AAO, the HERMES instrument scientist who oversaw the groups working on today’s major data release.

“This data will enable such discoveries as the original star clusters of the Galaxy, including the Sun's birth cluster and solar siblings - there is no other dataset like this ever collected anywhere else in the world,” said Dr De Silva, from the University's School of Physics.

Dr. Sarah Martell from the UNSW Sydney, who leads GALAH survey observations, explained that the Sun, like all stars, was born in a group or cluster of thousands of stars.

“Every star in that cluster will have the same chemical composition, or DNA – these clusters are quickly pulled apart by our Milky Way Galaxy and are now scattered across the sky,” Dr Martell said.

“The GALAH team’s aim is to make DNA matches between stars to find their long-lost sisters and brothers.”

For each star, this ‘DNA’ is the amount it contains of each of nearly two dozen chemical elements, such as oxygen, aluminium and iron.

Unfortunately, astronomers cannot collect the DNA of a star with a mouth swab but instead use the starlight, with a technique called spectroscopy.

The light from the star is collected by the telescope and then passed through an instrument called a spectrograph, which splits the light into detailed rainbows, or spectra.

Associate Professor Daniel Zucker, from Macquarie University and the AAO, said astronomers measured the locations and sizes of dark lines in the spectra to work out the amount of each element in a star.

“Each chemical element leaves a unique pattern of dark bands at specific wavelengths in these spectra, like fingerprints,” he said.
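The fingerprint idea can be sketched in a few lines of code. This is a toy illustration with invented wavelengths and two hypothetical elements, not the survey's pipeline: build a synthetic spectrum containing Gaussian absorption lines, then recover each line's depth, the quantity that scales with how much of that element the star contains.

```python
import numpy as np

wav = np.linspace(4000.0, 7000.0, 3000)   # wavelength grid, Angstroms

# hypothetical line lists: element -> rest wavelengths of its dark lines
LINES = {"element_A": [4300.0, 5200.0], "element_B": [6100.0]}

def absorption(depth, center, width=2.0):
    """Gaussian absorption dip of the given depth at the given wavelength."""
    return depth * np.exp(-0.5 * ((wav - center) / width) ** 2)

# a star containing both elements, with different line strengths
flux = np.ones_like(wav)                  # continuum normalised to 1
for c in LINES["element_A"]:
    flux -= absorption(0.4, c)
for c in LINES["element_B"]:
    flux -= absorption(0.1, c)

def line_depth(flux, center):
    """Depth of the absorption line nearest `center` (continuum = 1)."""
    window = np.abs(wav - center) < 10.0
    return 1.0 - flux[window].min()

for element, centers in LINES.items():
    print(element, [round(line_depth(flux, c), 2) for c in centers])
```

The measured depths come back larger for the element that absorbs more strongly, which is the sense in which line sizes encode elemental abundances.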

Dr Jeffrey Simpson of the AAO said it takes about an hour to collect enough photons of light for each star, but “Thankfully, we can observe 360 stars at the same time using fibre optics,” he added.

The GALAH team has spent more than 280 nights at the telescope since 2014 to collect all the data.

The GALAH survey is the brainchild of Professor Joss Bland-Hawthorn from the University of Sydney and the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) and Professor Ken Freeman of the Australian National University (ANU). It was conceived more than a decade ago as a way to unravel the history of our Milky Way galaxy; the HERMES instrument was designed and built by the AAO specifically for the GALAH survey.

Measuring the abundance of each chemical in so many stars is an enormous challenge. To do this, GALAH has developed sophisticated analysis techniques.

PhD student Sven Buder of the Max Planck Institute for Astronomy, Germany, who is lead author of the scientific article describing the GALAH data release, is part of the analysis effort of the project, working with PhD student Ly Duong and Professor Martin Asplund of ANU and ASTRO 3D.

Mr. Buder said: “We train [our computer code] The Cannon to recognize patterns in the spectra of a subset of stars that we have analysed very carefully, and then use The Cannon’s machine learning algorithms to determine the amount of each element for all of the 340,000 stars.”

Ms. Duong noted that “The Cannon is named for Annie Jump Cannon, a pioneering American astronomer who classified the spectra of around 340,000 stars by eye over several decades a century ago – our code analyses that many stars in far greater detail in less than a day.”
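The train-then-apply idea behind The Cannon can be sketched with a minimal stand-in. This is not the real code: the spectra below are synthetic and single-element, the line positions are invented, and The Cannon's actual per-pixel model is richer than the ordinary least-squares fit used here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 200
centers = rng.uniform(0, n_pix, 5)        # pixels where this element absorbs
grid = np.arange(n_pix)

def make_spectrum(abundance):
    """Synthetic spectrum: continuum minus lines that deepen with abundance."""
    flux = np.ones(n_pix)
    for c in centers:
        flux -= abundance * np.exp(-0.5 * ((grid - c) / 2.0) ** 2)
    return flux + rng.normal(0, 0.002, n_pix)   # photon noise

# small, carefully labelled "training set"
train_labels = np.linspace(0.05, 0.5, 20)
X_train = np.array([make_spectrum(a) for a in train_labels])

# fit a linear model per pixel: flux ~ theta0 + theta1 * label
A = np.column_stack([np.ones_like(train_labels), train_labels])
coeffs, *_ = np.linalg.lstsq(A, X_train, rcond=None)   # shape (2, n_pix)

def infer_label(flux):
    """Least-squares inversion of the per-pixel linear model."""
    theta0, theta1 = coeffs
    return np.dot(theta1, flux - theta0) / np.dot(theta1, theta1)

true = 0.3
est = infer_label(make_spectrum(true))
print(round(est, 2))   # close to the true abundance
```

Once the per-pixel coefficients are learned from the small labelled subset, applying them to each new spectrum is a single dot product, which is why a code like this can label hundreds of thousands of stars in under a day.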

The GALAH survey’s data release is timed to coincide with the huge release of data on 25 April from the European Gaia satellite, which has mapped more than 1.6 billion stars in the Milky Way, making it by far the biggest and most accurate atlas of the night sky to date.

In combination with velocities from GALAH, Gaia data will give not just the positions and distances of the stars, but also their motions within the Galaxy.

Professor Tomaz Zwitter (University of Ljubljana, Slovenia) said today’s results from the GALAH survey would be crucial to interpreting the results from Gaia: "The accuracy of the velocities that we are achieving with GALAH is unprecedented for such a large survey."

Dr Sanjib Sharma from the University of Sydney concluded: “For the first time we’ll be able to get a detailed understanding of the history of the Galaxy.”

Eleven science papers accompanying this release are simultaneously being published in Monthly Notices of the Royal Astronomical Society and Astronomy and Astrophysics.

The ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) is a $40m Research Centre of Excellence funded by the Australian Research Council (ARC) and six collaborating Australian universities - The Australian National University, The University of Sydney, The University of Melbourne, Swinburne University of Technology, The University of Western Australia and Curtin University.

Contacts and sources:
Vivienne Reiner
University of Sydney

Papers published today as part of the data release are:
Buder et al. 2018a: The GALAH Survey: Second Data Release
Quillen et al. 2018: The GALAH Survey: Stellar streams
Duong et al. 2018: The GALAH Survey: properties of the Galactic disks
Kos et al. 2018: The GALAH Survey: chemical tagging the Pleiades
Buder et al. 2018b: The GALAH Survey: Chemo-dynamics of the Galaxy
Simpson et al. 2018a: The GALAH Survey: co-orbiting stars and chemical tagging
Simpson et al. 2018b: The GALAH Survey: GALAH and the Magellanic clouds
Khanna et al. 2018: The GALAH Survey: Velocity fluctuations in the Milky Way
Gao et al. 2018: The GALAH Survey: NLTE trends in the open cluster M67
Zwitter et al. 2018: The GALAH Survey: Radial Velocity Library
Kos et al. 2018: The GALAH Survey: Holistic spectroscopy

Mars' Moons Formed from Giant Impact of Ceres-Sized Dwarf Planet or Vesta-Sized Asteroid

Southwest Research Institute scientists posit a violent birth of the tiny Martian moons Phobos and Deimos, but on a much smaller scale than the giant impact thought to have resulted in the Earth-Moon system. Their work shows that an impact between proto-Mars and a dwarf-planet-sized object likely produced the two moons, as detailed in a paper published today in Science Advances.

The origin of the Red Planet’s small moons has been debated for decades. The question is whether the bodies were asteroids captured intact by Mars’ gravity or whether the tiny satellites formed from an equatorial disk of debris, as is most consistent with their nearly circular and co-planar orbits. The production of a disk by an impact with Mars seemed promising, but prior models of this process were limited by low numerical resolution and overly simplified modeling techniques.

Credit:  Southwest Research Institute 

“Ours is the first self-consistent model to identify the type of impact needed to lead to the formation of Mars’ two small moons,” said lead author Dr. Robin Canup, an associate vice president in the SwRI Space Science and Engineering Division. Canup is one of the leading scientists using large-scale hydrodynamical simulations to model planet-scale collisions, including the prevailing Earth-Moon formation model.

SwRI scientists modeled a Ceres-sized object crashing into Mars at an oblique angle. These four frames from the 3-D simulation show that the impact initially produces a disk of orbiting debris primarily derived from Mars (bottom right frame). The outer portions of the disk later accumulate into Mars’ small moons, Phobos and Deimos. The inner portions of the disk accumulate into larger moons that eventually spiral inward and are assimilated into Mars.
Four frames from the Mars impact 3-D simulation
Credit:  Southwest Research Institute 

“A key result of the new work is the size of the impactor; we find that a large impactor — similar in size to the largest asteroids Vesta and Ceres — is needed, rather than a giant impactor,” Canup said. “The model also predicts that the two moons are derived primarily from material originating in Mars, so their bulk compositions should be similar to that of Mars for most elements. However, heating of the ejecta and the low escape velocity from Mars suggests that water vapor would have been lost, implying that the moons will be dry if they formed by impact.”

The new Mars model invokes a much smaller impactor than considered previously. Our Moon may have formed when a Mars-sized object crashed into the nascent Earth 4.5 billion years ago, and the resulting debris coalesced into the Earth-Moon system. The Earth’s diameter is about 8,000 miles, while Mars’ diameter is just over 4,200 miles. The Moon is just over 2,100 miles in diameter, about one-fourth the size of Earth.

While they formed in the same timeframe, Deimos and Phobos are very small, with diameters of only 7.5 miles and 14 miles respectively, and orbit very close to Mars. The proposed Phobos-Deimos forming impactor would be between the size of the asteroid Vesta, which has a diameter of 326 miles, and the dwarf planet Ceres, which is 587 miles wide.

This composite image compares how big the moons of Mars appear, as seen from the surface of the Red Planet, in relation to the size that our Moon appears from Earth’s surface. While Earth’s Moon is 100 times bigger than the larger Martian moon Phobos, the Martian moons orbit much closer to their planet, making them appear relatively larger in the sky. Deimos, at far left, and Phobos, beside it, are shown together as photographed by NASA’s Mars rover Curiosity on Aug. 1, 2013.
Image Courtesy of NASA/JPL-Caltech/Malin Space Science Systems/Texas A&M Univ.

“We used state-of-the-art models to show that a Vesta-to-Ceres-sized impactor can produce a disk consistent with the formation of Mars’ small moons,” said the paper’s second author, Dr. Julien Salmon, an SwRI research scientist. “The outer portions of the disk accumulate into Phobos and Deimos, while the inner portions of the disk accumulate into larger moons that eventually spiral inward and are assimilated into Mars. Larger impacts advocated in prior works produce massive disks and more massive inner moons that prevent the survival of tiny moons like Phobos and Deimos.”

These findings are important for the Japan Aerospace Exploration Agency (JAXA) Mars Moons eXploration (MMX) mission, which is planned to launch in 2024 and will include a NASA-provided instrument. The MMX spacecraft will visit the two Martian moons, land on the surface of Phobos and collect a surface sample to be returned to Earth in 2029.

“A primary objective of the MMX mission is to determine the origin of Mars’ moons, and having a model that predicts what the moons compositions would be if they formed by impact provides a key constraint for achieving that goal,” Canup said.

The paper, “Origin of Phobos and Deimos by the impact of a Vesta-to-Ceres-sized body with Mars,” is published in the April 18, 2018, issue of Science Advances. The research was funded by NASA’s Solar System Exploration Research Virtual Institute (SSERVI) in Silicon Valley, and by NASA’s Emerging Worlds program. The research was conducted as part of the Institute for the Science of Exploration Targets (ISET), a SSERVI team from SwRI’s Boulder, Colorado, office.

Contacts and sources:
 Deb Schmid
Southwest Research Institute

Tuesday, April 17, 2018

Alga? Squid? Rare Ancient Fossil Finally Identified as a Ray after 70 Years

Paleontologists are often working with just a small part of the puzzle. And sometimes, the piece fits differently according to who’s doing the puzzling.

That seems to be the case with a rare fossil specimen first discovered 70 years ago in Kansas’ Niobrara Formation—the only one of its kind. Since its discovery, the incomplete fossil has been twice misidentified, first as a plant and then as a cephalopod. Researchers at the American Museum of Natural History recently reinterpreted it as a 70–85-million-year-old cartilaginous fish from the group that includes sharks and rays.

A manta ray swimming in the Flower Garden Banks National Marine Sanctuary in the Gulf of Mexico.
Courtesy of G.P. Schmahl/NOAA

In the new study, led by Allison Bronson, a comparative biology Ph.D.-degree student in the Museum’s Richard Gilder Graduate School, Museum researchers re-examined the hard tissue of Platylithophycus cretaceum, debunking previous theories that the fossil slab was either a green alga or a cephalopod.

“There are many examples of temporarily misplaced taxa in paleontological history, including ferns that were once thought to be sponges and lungfish teeth thought to be fungi,” said Bronson. “In this case, the misidentification didn’t happen because of a lack of technology at the time—scientists familiar with cartilage structure could easily see this was a chondrichthyan fish. The researchers used reasonable arguments for their interpretations, but didn’t look outside of their own fields.”

Platylithophycus cretaceum was first described as an alga and later as squid. New research has found the specimen is most likely a cartilaginous fish.

Credit: © M. Eklund

The fossil specimen, which measures approximately 1.5 feet long by 10 inches wide, was first described by two paleobotanists from the Colorado School of Mines and Princeton University in 1948. The researchers identified the hexagonal plates that cover its surface as “fronds” from the calcium carbonate-covered filaments of a green alga. Twenty years later, in 1968, researchers from Fort Hays Kansas State College re-classified the specimen, this time as a cephalopod with similarities to a cuttlefish.

Bronson and co-author John Maisey, a curator in the Museum’s Division of Paleontology, tested the specimen with a small amount of dilute organic acid and determined that it was not composed of calcium carbonate. Rather, its hard tissue was calcium phosphate, the same material found in the fossilized skeletons of sharks and rays. Then, using an electron microscope, the researchers examined the specimen’s hexagonal plates, which they reinterpreted as tessellated calcified cartilage, most likely belonging to the fish’s supportive gill arches.

A high-magnification photo of the surface of the Platylithophycus cretaceum specimen shows the tessellated calcified cartilage.

Credit: © AMNH/A. Bronson

“We think this was a rather large cartilaginous fish, possibly related to living filter-feeding rays such as Manta and Mobula,” said Maisey. “This potentially expands the range of diversity in the Niobrara fauna.”

The Niobrara Formation, situated along what was once the Western Interior Seaway, is considered one of the most diverse fossil-fish sites in North America, especially for animals from the late Cretaceous.

But while Platylithophycus hails from this fossil-rich region, it remains the only specimen of its kind. Since the slab contains only preserved remnants of the animal’s gills, it will continue to be classified under its plant name until a more complete specimen is found.

Contacts and sources:
American Museum of Natural History

Citation: Resolving the identity of Platylithophycus, an enigmatic fossil from the Niobrara Chalk (Upper Cretaceous, Coniacian–Campanian). Allison W. Bronson and John G. Maisey. Journal of Paleontology, 2018. DOI: 10.1017/jpa.2018.14

The Tropical ‘Bread Basket’: Study Explores Grain Production in the Tropics

It wasn’t until the late-1990s that the tropics began to emerge as a possible region for growing grain crops, particularly soybean. But, today, farmers in central Brazil are running productive farm businesses, largely due to a new tropical system of production known as safrinha, or succession farming, which results in two large crops—soybean and maize—per year.

Agricultural economists at the University of Illinois wanted to learn more about the productivity of grain production in this tropical area. In a study published in the International Journal of Agricultural Management, they examine input and output factors for several large-scale farms located in the state of Mato Grosso, Brazil.

Credit: ACES

“Mato Grosso, where this research is set, is by far the largest geographical state producing soybean in the world,” says Peter Goldsmith, a professor in the Department of Agricultural and Consumer Economics at U of I and lead author of the study. “They far surpass Illinois or Iowa as a state, and the yields are the same as in the U.S. But nobody 20 years ago thought you could produce soybean in the tropics.”

Historically, the tropics—defined as the band between approximately 20 degrees north and 20 degrees south latitude—have been one of the poorest parts of the world, with the lowest agricultural productivity and some of the highest incidences of malnutrition, Goldsmith says. “The thought is that the ‘bread basket’ was outside that region and these regions would forever be food-importing regions. And, up until the late-1990s—not that long ago—nobody thought of the potential for the tropical world.”

But because of the phenomenon of these large-scale, high-producing farms in central Brazil, Goldsmith says it was important to look at these “new-age” farmers and how they behave in terms of producing soybean and maize. If, after all, tropical Brazil now produces 64 percent of the nation’s 114 million metric ton soybean crop, according to a 2018 study, how do they do it?

“It’s a very different sort of agriculture, which we describe in the paper, but the real point is that they have proven that you can engage broad-land production, just like we see in the Midwest or Argentina—large farms, large combines, mechanized, high yields, productive farms—in the tropics. That opens up a lot of new land. But it also opens up a lot of sensitive biomes, such as rainforests,” Goldsmith says, a prospect that can be controversial and requires serious environmental stewardship.

So one of the questions the researchers asked is whether these farmers are producing more by simply using more land. The answer is not necessarily. “The safrinha system allows you to double crop—succession crop—so you can double the output, in essence, without doubling agricultural land,” Goldsmith says. “That’s a real strength of tropical production, and some farmers are even producing a third crop on the same land in the same year.”

They also looked at the types and quantities of inputs the farmers use.

Analyzing data from 43 soybean/maize-producing farms in Mato Grosso—including costs, revenues, input quantities, and inventory values, as well as agronomic data and farm characteristics—the researchers determined whether the farms are factor-productive, that is, whether they are producing more with fewer inputs.

“We wanted to know, for every unit of chemicals they use, how much output of grain do they produce. Or for every unit of fertilizer that they use, how much output do they get. We also asked that of fuel, labor, and machinery inputs,” Goldsmith says. “We show in the paper that, globally, an increase of outputs has occurred; we are much more factor-productive on average around the world due to an increase in the use of technology. This is not necessarily the case in the tropics of Brazil, except with respect to land.”

Overall, Goldsmith says agricultural output growth due to total factor productivity in Mato Grosso is low at 9 percent. Most of the growth, 64 percent, comes from greater usage of inputs such as chemical and fertilizer applications. This is not surprising for the tropics, where the pest pressure is much higher than in other parts of the world, such as the Midwest in the United States.
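The productivity measures discussed above are simple ratios, and a short sketch makes them concrete. The farm figures below are invented for illustration (they are not data from the study); only the arithmetic, output per unit of each input plus a residual for total factor productivity, mirrors the kind of accounting described.

```python
# Hypothetical illustration of partial factor productivity and a crude
# growth decomposition. All farm-level numbers here are made up.

def partial_factor_productivity(output_tonnes, input_units):
    """Output of grain per unit of a single input (e.g. fertilizer)."""
    return output_tonnes / input_units

# One hypothetical Mato Grosso farm, two seasons apart.
year1 = {"grain_t": 6000, "fertilizer_t": 300, "chemicals_l": 9000, "land_ha": 1000}
year2 = {"grain_t": 7800, "fertilizer_t": 380, "chemicals_l": 11500, "land_ha": 1200}

for inp in ("fertilizer_t", "chemicals_l", "land_ha"):
    pfp1 = partial_factor_productivity(year1["grain_t"], year1[inp])
    pfp2 = partial_factor_productivity(year2["grain_t"], year2[inp])
    print(f"{inp}: {pfp1:.2f} -> {pfp2:.2f} tonnes of grain per unit")

# Crude decomposition: how much output growth comes from simply using
# more inputs (proxied here by land), and how much remains as a
# "total factor productivity" residual?
output_growth = year2["grain_t"] / year1["grain_t"] - 1
input_growth = year2["land_ha"] / year1["land_ha"] - 1
tfp_residual = (1 + output_growth) / (1 + input_growth) - 1
print(f"output growth {output_growth:.1%}, input growth {input_growth:.1%}, "
      f"TFP residual {tfp_residual:.1%}")
```

With these invented figures most of the 30% output growth is explained by the 20% expansion in land, leaving a residual of about 8%, which is the same qualitative pattern the study reports for Mato Grosso: growth driven mainly by input use rather than by technology.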

The results also show that farmers in Mato Grosso rely mostly on traditional inputs and extensification rather than the adoption of new technologies. But extensification through the safrinha system is unique to the tropics: more cropped area without a larger farming footprint.

“Tropical production is different, and we should be using different metrics to measure productivity,” Goldsmith says. “Technology usage is slower for a number of reasons, but low technology adoption rates don’t seem to affect growth, though. Successive cropping systems, which don’t require high technology for success, dramatically improve the productivity of the land.”

The results of the study lay some groundwork for policymakers, producers, and those in the agribusiness industry when thinking about agricultural expansion in the tropics.

“We make the case that if the tropical region is this new, high-growth area with potential for grain production—and large-scale production is very possible there—and if it’s the type of sustainable business we see in the Mato Grosso state of Brazil and other regions of South America, then these are important people and businesses for the future of the food system.

“There are significant positive implications for the future of global grain supplies and our ability to feed the large global population of 2050, as well as bringing rural economic development and the resulting poverty alleviation to many of the world’s poorest regions. This new business model has proven to be very successful and is here to stay,” Goldsmith says.

The study, “The productivity of tropical grain production,” is published in the International Journal of Agricultural Management. Co-authors include Peter Goldsmith and Krystal Montesdeoca. Goldsmith is a professor in the Department of Agricultural and Consumer Economics at U of I, director of the Food and Agribusiness Program at U of I, and is the principal investigator of USAID’s Soybean Innovation Lab centered at U of I.

Contacts and sources:
University of Illinois College of Agricultural, Consumer and Environmental Sciences (ACES)


Giant Group of Octopus Moms Discovered in the Deep Sea

We know more about the surface of the moon than we do about the bottom of the ocean. The sea floor is an alien landscape, with crushing pressure, near-total darkness, and fluids wafting from cracks in the Earth's crust. It's also home to some weird animals that scientists are only just getting to know. Case in point: deep-sea expeditions and drones have revealed a giant group of octopuses and their eggs in a place where they shouldn't be able to survive.

"When I first saw the photos, I was like 'No, they shouldn't be there! Not that deep and not that many of them,'" says Janet Voight, associate curator of zoology at the Field Museum and an author of a new study on the octopuses published in Deep Sea Research Part I.

Group of brooding octopuses on the ocean floor.

Credit: Phil Torres and Geoff Wheat

Nearly two miles deep in the ocean, a hundred miles off the Pacific coast of Costa Rica, scientists on two cruises a year apart used subsea vehicles to explore the Dorado Outcrop, a rocky patch of sea floor made of cooled and hardened lava from an underwater volcano. Geochemists explored the outcrop in a tiny submersible vehicle, hoping to collect samples of the warm fluids that emerge from cracks in the rocks; they didn't count on finding dozens of octopuses huddled around the cracks.

The octopuses were an unknown species of the genus Muusoctopus--pink, dinner-plate-sized creatures with enormous eyes. Up to a hundred of them seemed to occupy every available rock in a small area. That in itself was strange--Muusoctopus are normally loners. Stranger still was that nearly all of the octopuses seemed to be mothers, each guarding a clutch of eggs. And this nursery was situated alongside the warm fluid issuing from the cracks in the outcrop.

One of the Muusoctopus octopuses studied by the researchers.

Credit: Phil Torres and Geoff Wheat

It doesn't make sense for deep-sea octopuses to brood eggs in warm water like this: it's suicide. Deep-sea octopuses live in cold, nearly invariant temperatures. Exposure to higher temperatures jump-starts their metabolism, making them need more oxygen than the warm water can provide. The octopuses that the scientists observed (both in person and via hours of video footage from an ROV) showed evidence of severe stress, and the scientists could only guess that the 186 eggs attached to the rocks leaking warm, low-oxygen fluid had it worse. None had any sign of a developing embryo. All in all, not a great place to start an octopus family.

However, the sheer number of what the scientists think are doomed octopuses and their eggs suggests that there's a better, healthier habitat nearby. The team suspects there must be more octopuses living inside crevices in the rocks, where the water is cool and rich in oxygen. The crevices could be such a good environment for egg-brooding that the booming population has to spill over into the dangerously warm region outside, like a gentrifying neighborhood expanding into a rougher part of town.

"Octopus females only produce one clutch of eggs in their lives. In order for this huge population to be sustained, there must be even more octopuses to replace the dying mothers and eggs that we can see," says Voight. "My coauthors, Geoff Wheat and Anne Hartwell, know about basalt and how an outcrop like this is made. Odds are it has hollow areas where other females nurture their eggs to hatching. They are analogous to the boomers who have all the good jobs, while the millennials wait, seeking just one little piece of the cool rock." Voight notes that there's evidence for the unseen population: the scientists observed octopus arms emerging once in a while from cracks in the rock.

The study doesn't just shed light upon deep-sea biology; it also illustrates the collaborative nature of science. "This project was a cohesive dynamic of three scientists from different research backgrounds coming together to investigate a fascinating observation," says Hartwell, the paper's lead author and an oceanographer affiliated with the University of Akron and the University of Alaska Fairbanks.

"The focus of [our] expeditions to Dorado Outcrop was to study a cool hydrothermal system. In doing so, we discovered this fascinating congregation of brooding octopuses," says Wheat, a geochemist at the University of Alaska Fairbanks. "To maximize the scientific return of the expeditions, we shared the video with deep-sea biologists, whose research led to this publication. This is only the third hydrothermal system of its type that has been sampled, yet millions of similar environments exist in the deep sea. What other remarkable discoveries are waiting for us?"

"These surprising observations show us how a deep-sea animal reproduces," says Barbara Ransom, a program director in the National Science Foundation's Division of Ocean Sciences, which funded the research. "The findings were serendipitous. The researchers saw something unusual and stopped to find out what was happening. Unexpected discoveries like this one can dramatically change our understanding of how the oceans work."

"To my knowledge there had been no reports of octopuses at this or comparable depths between southern California and Peru. Never would I have anticipated such a dense cluster of these animals at 3,000 meters depth, and we argue that the numbers of octopuses we see are simply the surplus population," says Voight. "What else is down there that we can't even imagine? I want to find out."

Contacts and sources:
Kate Golembiewski, with Sarah Lawhun
Field Museum

Citation: Clusters of deep-sea egg-brooding octopods associated with warm fluid discharge: an ill-fated fragment of a larger, discrete population? Anne M. Hartwell, Janet R. Voight, C. Geoffrey Wheat. Deep Sea Research Part I.

Polymer-Graphene Nanocarpets Can Electrify Smart Fabrics

Researchers from Tomsk Polytechnic University, together with their international colleagues, have discovered a method to modify and use graphene, the one-atom-thick conductor of electricity and heat, without destroying it.

Thanks to the novel method, the researchers were able to synthesize, on single-layer graphene, a well-structured polymer bound by strong covalent bonds, which they called 'polymer carpets'. The entire structure is highly stable and less prone to degradation over time, which makes the approach promising for the development of flexible organic electronics. Moreover, if a layer of molybdenum disulfide is added over the 'nanocarpet', the resulting structure generates current under exposure to light. The study results were published in the Journal of Materials Chemistry C.

This is the scheme for obtaining a hybrid structure of 'graphene-polymer'.
Credit:  Tomsk Polytechnic University

Graphene is simultaneously the most durable, lightest, and most electrically conductive carbon material. It can be used for manufacturing solar batteries, smartphone screens, and thin, flexible electronics, and even in water filters, since graphene films pass water molecules and block all other compounds. To be used successfully, however, graphene must be integrated into more complex structures, which is a challenge. According to the scientists, graphene itself is quite stable and reacts poorly with other compounds. To make it react with other elements, i.e. to modify it, graphene is usually at least partially destroyed, and this modification degrades the properties of the resulting materials.

Professor Raul D. Rodriguez from the Research School for Chemistry & Applied Biomedical Sciences says: 'When functionalizing graphene, you should be very careful. If you overdo it, the unique properties of graphene are lost. Therefore, we decided to follow a different path.

In graphene, there are inevitable nanodefects, for example, at the edges of graphene and wrinkles in the plane. Hydrogen atoms are often attached to such defects. It is this hydrogen that can interact with other chemicals.'

To modify graphene, the authors use a thin metal substrate on which a graphene single layer is placed. The graphene is then covered with a solution of bromine-polystyrene molecules, which interact with the hydrogen and attach to the existing defects. Under further exposure to light, photocatalysis causes a polymer, poly(3-hexylthiophene) (P3HT), to 'grow' from these sites.

'As a result, we obtained samples whose structure resembles 'polymer carpets', as we call them in the paper. Above such a 'polymer carpet' we place molybdenum disulfide. Due to this unique combination of materials, we obtain a 'sandwich' structure that functions like a solar battery. That is, it generates current when exposed to light. In our experiments a strong covalent bond is established between the molecules of the polymer and graphene, which is critical for the stability of the material obtained,' notes Rodriguez.

According to the researcher, the method for graphene modification enables, on the one hand, a very sturdy compound to be obtained; on the other hand, it is rather simple and cheap, as affordable materials are used. The method is also versatile: it makes it possible to grow very different polymers directly on graphene.

'The strength of the hybrid material is further enhanced because we do not destroy graphene itself but use its pre-existing defects and a strong covalent bond to the polymer molecules. This allows us to consider the study as promising for the development of thin, flexible electronics, in which solar batteries can be attached to clothing and will not break when deformed,' the professor explains.

Contacts and sources:
Raul D. Rodriguez
Tomsk Polytechnic University

Citation: Bottom-up Fabrication of Graphene-based Conductive Polymer Carpets for Optoelectronics. Tao Zhang, Raul D. Rodriguez, Ihsan Amin, Jacek Gasiorowski, Mahfujur Rahaman, Wenbo Sheng, Jana Kalbacova, Evgeniya Sheremet, Dietrich R. T. Zahn and Rainer Jordan. Journal of Materials Chemistry C.

Pink Russian Rocks Mark One of the Greatest Events in the History of Earth

The Great Oxidation Event is one of the major scientific mysteries. Oxygen first appeared in Earth’s atmosphere about 2.3 billion years ago. A new Science study is making scientists rethink what they thought they knew about the event.

No oxygen, no complex life as we know it

The geological history of Earth is characterized by many large-scale changes that have had a significant impact on Earth’s environments. The most important of these events for the evolution of life as we know it was the Great Oxidation Event, some 2.3 billion years ago. For the first time ever, oxygen concentrations in the atmosphere rose to about 1–10% of today’s.

Russian geologists stumbled upon thick salt beds that had been buried 2-3 km below the surface. 

Photo: Kärt Paiste 

“It has long been debated if this was only a temporary increase in oxygen that lasted for about 300-400 million years, or if it was a permanent transition. Until now robust evidence to constrain the Great Oxidation Event has been lacking, partly due to the incomplete and poorly preserved rock record from this time period,” says Kärt Paiste, PhD candidate at CAGE Centre for Arctic Gas Hydrate, Environment and Climate at UiT The Arctic University of Norway.

She is one of the co-authors of a study in Science that for the first time shows the magnitude of this event.

“Instead of a trickle, it was more like a firehose,” said Clara Blättler, a postdoctoral research fellow in the Department of Geosciences at Princeton University (US) and first author on the study. “It was a major change in the production of oxygen.”

Russian scientists stumbled upon the clues

During deep drilling in 2007-2009 on the shores of the Lake Onega in Karelia, Russian geologists stumbled upon thick salt beds that had been buried 2-3 km below the surface.

Salt crystals were found in the pink rocks from the shore of Lake Onega, Russia. These tell a story of weathering continents. 

Photo: Kärt Paiste

These beds contained extraordinarily well preserved evaporite minerals – salt deposits of halite and anhydrite left behind by evaporation of the restricted marine basin. The salt deposits contained a surprisingly large amount of sulfate minerals, which are highly soluble and rarely preserved in the rock record. This makes the finding even more unique.

“This is the strongest ever evidence that the ancient seawater from which those minerals precipitated had high sulfate concentrations, reaching at least 30% of present-day oceanic sulfate, as our estimations indicate,” says Aivo Lepland, researcher at the Geological Survey of Norway and CAGE and co-author of the study.

“This is an amount that is significantly larger than previously thought. Studies of these extraordinary rocks have provided the first solid evidence that around 2 billion years ago ancient seawater was similar to today’s,” says Kärt Paiste.

Changed the composition of the oceans

More specifically, in order to form thick anhydrite deposits a significant amount of sulfate has to be transported into the oceans, which is only possible through so called oxidative weathering.

This means that Earth’s atmosphere had to contain abundant oxygen to drive the chemical reactions that weather continents. The buildup of oxygen was caused by the growth of cyanobacteria capable of photosynthesis. This activity must therefore have lasted longer, and been more intense, than previously believed.

“The Great Oxidation Event lasted long enough to intensively weather landmasses and thereby change the composition of the oceans,” says Paiste.

According to Lepland the findings in the study require considerable rethinking of the magnitude of oxygenation of Earth’s atmosphere-ocean system.

Facts: The Geological Survey of Norway (NGU), with Aivo Lepland at the helm, has over the past decade coordinated research that focuses on understanding Earth’s early oxygenation. This has been done in close collaboration with the Karelian Science Centre (KSC) in Petrozavodsk, Russia. In 2007, an international consortium, the ICDP-sponsored Fennoscandian Arctic Russia – Drilling Early Earth project (FAR-DEEP), drilled 15 cores across the Kola-Karelia region of Russia and, in 2007-2009, the KSC was involved in the drilling of the 3.5 km deep Onega Parametric Hole (OPH) at the western shores of Lake Onega. This core recovered the unique succession of evaporites described above, and the established NGU-KSC collaboration enabled an international team of researchers to study and sample the OPH core in Petrozavodsk in 2014 and 2016.


Contacts and sources: 
University of Tromsø - The Arctic University of Norway

Citation: Blättler, C.L., et al. Two-billion-year-old evaporites capture Earth’s great oxidation. Science, 22 Mar 2018. DOI: 10.1126/science.aar2687

Fieldwork on the pink rocks on the shores of Lake Onega, Russia. 
Photo: Kärt Paiste

October 7, 2008: The Day It Rained Diamonds from a Doomed Planet over the Nubian Desert

Using transmission electron microscopy, EPFL scientists have examined a slice from a meteorite that contains large diamonds formed at high pressure. The study shows that the parent body from which the meteorite came was a planetary embryo between Mercury and Mars in size. The discovery is published in Nature Communications.

On October 7, 2008, an asteroid entered Earth’s atmosphere and exploded 37 km above the Nubian Desert in Sudan. The asteroid, now known as “2008 TC3”, was just over four meters in diameter. When it exploded in the atmosphere, it scattered multiple fragments across the desert. Only fifty fragments, ranging from 1 to 10 cm in size, were collected, for a total mass of 4.5 kg. Over time, the fragments were gathered and catalogued for study into a collection named Almahata Sitta (Arabic for “Station Six”, after a nearby train station between Wadi Halfa and Khartoum).

Credit: © 2018 EPFL / Hillary Sanctuary

The Almahata Sitta meteorites are mostly ureilites, a rare type of stony meteorite that often contains clusters of nano-sized diamonds. Current thinking is that these tiny diamonds can form in three ways: enormous pressure shockwaves from high-energy collisions between the meteorite “parent body” and other space objects; deposition by chemical vapor; or, finally, the “normal” static pressure inside the parent body, like most diamonds on Earth.

The unanswered question, so far, has been the planetary origin of 2008 TC3 ureilites. Now, scientists at Philippe Gillet’s lab at EPFL, with colleagues in France and Germany, have studied large diamonds (100 microns in diameter) in some of the Almahata Sitta meteorites and discovered that the asteroid came from a planetary “embryo” between Mercury and Mars in size.

The researchers studied the diamond samples using a combination of advanced transmission electron microscopy techniques at EPFL’s Interdisciplinary Centre for Electron Microscopy. The analysis showed that the diamonds had chromite, phosphate, and iron-nickel sulfides embedded in them – what scientists refer to as “inclusions”. These have long been known to exist inside Earth’s diamonds, but are now described for the first time in an extraterrestrial body.

The particular composition and morphology of these materials can only be explained if the pressure under which the diamonds formed was higher than 20 GPa (gigapascals). Such internal pressure, in turn, requires a Mercury- to Mars-sized planetary “embryo” as the parent body, depending on the layer in which the diamonds were formed.
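A rough back-of-envelope check makes the size argument concrete. Assuming a uniform-density body (a crude lower-bound estimate, not the authors' actual modeling), the central pressure of a self-gravitating sphere is P = 3GM²/(8πR⁴). Plugging in published masses and radii shows why an ordinary asteroid cannot supply 20 GPa while embryo-scale bodies can:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_pressure(mass_kg, radius_m):
    """Central pressure of a uniform-density, self-gravitating sphere:
    P_c = 3 G M^2 / (8 pi R^4). Real, centrally condensed bodies have
    higher central pressures, so this is a conservative estimate."""
    return 3 * G * mass_kg**2 / (8 * math.pi * radius_m**4)

# Published masses (kg) and mean radii (m)
bodies = {
    "Vesta (large asteroid)": (2.59e20, 2.63e5),
    "Mercury":                (3.30e23, 2.44e6),
    "Mars":                   (6.42e23, 3.39e6),
}
for name, (m, r) in bodies.items():
    print(f"{name}: ~{central_pressure(m, r) / 1e9:.1f} GPa at the centre")
```

Even one of the largest asteroids comes out well under 1 GPa, while Mercury- and Mars-sized bodies exceed 20 GPa in their interiors, which is the essence of the paper's argument for an embryo-sized parent body.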

Many planetary formation models have predicted that such planetary embryos existed in the first million years of our solar system, and the study offers compelling evidence for their existence. Many planetary embryos were Mars-sized bodies, such as the one that collided with Earth to give rise to the Moon. Others went on to form larger planets, collided with the Sun, or were ejected from the solar system altogether. The authors write: “This study provides convincing evidence that the ureilite parent body was one such large ‘lost’ planet before it was destroyed by collisions some 4.5 billion years ago.”

Contacts and sources:
Nik Papageorgiou
EPFL | École polytechnique fédérale de Lausanne

BlackHoleCam To Test If Einsteinian Black Holes Can Be Distinguished

Can we tell black holes apart? Astrophysicists at Goethe University Frankfurt answer this question by computing images of feeding non-Einsteinian black holes: at present it is hard to tell them apart from standard black holes.

One of the most fundamental predictions of Einstein's theory of relativity is the existence of black holes. In spite of the recent detection of gravitational waves from binary black holes by LIGO, direct evidence using electromagnetic waves remains elusive and astronomers are searching for it with radio telescopes. 

Astrophysicists at Goethe University Frankfurt, and collaborators in the ERC-funded project BlackHoleCam in Bonn and Nijmegen have created and compared self-consistent and realistic images of the shadow of an accreting supermassive black hole - such as the black-hole candidate Sagittarius A* (Sgr A*) in the heart of our galaxy - both in general relativity and in a different theory of gravity. The goal was to test if Einsteinian black holes can be distinguished from those in alternative theories of gravity.

Can we tell black holes apart? Astrophysicists at Goethe University Frankfurt answer this question by computing images of feeding non-Einsteinian black holes: at present it is hard to tell them apart from standard black holes.

Credit:  Fromm/Younsi/Mizuno/Rezzolla (Frankfurt)

Not all of the light rays (or photons) produced by matter falling into a black hole are trapped by the event horizon, a region of spacetime from which nothing can escape. Some of these photons will reach distant observers, so that when a black hole is observed directly a "shadow" is expected against the background sky. The size and shape of this shadow will depend on the black hole's properties but also on the theory of gravity.
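For the simplest case, a non-rotating (Schwarzschild) black hole in general relativity, the shadow size follows from the photon capture impact parameter b = 3√3·GM/c². A short sketch, using approximate published values for Sgr A*'s mass and distance (assumptions, not figures from this study), gives an angular diameter of a few tens of microarcseconds, which is why a telescope spanning the whole Earth is needed:

```python
import math

# Approximate SI constants
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
PARSEC = 3.086e16   # parsec, m

def shadow_angular_diameter_uas(mass_msun, distance_pc):
    """Angular diameter, in microarcseconds, of a Schwarzschild black
    hole's shadow. Photons with impact parameter below
    b = 3*sqrt(3)*G*M/c^2 are captured, so the shadow diameter is 2b."""
    b = 3 * math.sqrt(3) * G * mass_msun * M_SUN / c**2
    theta_rad = 2 * b / (distance_pc * PARSEC)
    return theta_rad * (180 / math.pi) * 3600 * 1e6  # rad -> microarcsec

# Sgr A*: roughly 4.1 million solar masses at about 8.2 kpc
print(f"~{shadow_angular_diameter_uas(4.1e6, 8.2e3):.0f} microarcseconds")
```

A spinning (Kerr) black hole, or a dilaton black hole from an alternative theory, shifts and distorts this shadow only slightly, which is exactly the kind of degeneracy the Frankfurt team investigated.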

Because the largest deviations from Einstein's theory of relativity are expected very close to the event horizon and since alternative theories of gravity make different predictions on the properties of the shadow, direct observations of Sgr A* represent a very promising approach for testing gravity in the strongest regime. Making such images of the black-hole shadow is the primary goal of the international Event Horizon Telescope Collaboration (EHTC), which combines radio data from telescopes around the world.

Scientists from the BlackHoleCam team in Europe, who are part of the EHTC, have now gone a step further and investigated whether it is possible to distinguish between a "Kerr" black hole from Einstein's gravity and a "dilaton" black hole, which is a possible solution of an alternative theory of gravity.

The researchers studied the evolution of matter falling into the two very different types of black holes and calculated the radiation emitted to construct the images. Furthermore, real-life physical conditions in the telescopes and interstellar medium were used to create physically realistic images. "To capture the effects of different black holes we used realistic simulations of accretion disks with near-identical initial setups. These expensive numerical simulations used state-of-the-art codes and took several months on the Institute's supercomputer LOEWE," says Dr. Yosuke Mizuno, lead author of the study.

Moreover, real radio images inevitably have limited resolution and image fidelity. When using realistic image resolutions, the scientists found, to their surprise, that even highly non-Einsteinian black holes could disguise themselves as normal black holes.

"Our results show that there are theories of gravity in which black holes can masquerade as Einsteinian, so new techniques of analyzing EHT data may be needed to tell them apart," remarks Luciano Rezzolla, professor at Goethe University and leader of the Frankfurt team. "While we believe general relativity is correct, as scientists we need to be open-minded. Luckily, future observations and more advanced techniques will eventually settle these doubts," concludes Rezzolla.

"Indeed, independent information from an orbiting pulsar, which we are actively searching for, will help eliminate these ambiguities," says Michael Kramer, director at the MPI for Radio Astronomy in Bonn. Heino Falcke (professor at Radboud University), who 20 years ago proposed using radio telescopes to image the shadow of black holes, is optimistic. "There is little doubt that the EHT will eventually obtain strong evidence of a black-hole shadow. These results encourage us to refine our techniques beyond the current state of the art and thus make even sharper images in the future."

Contacts and sources:
Dr. Luciano Rezzolla
Goethe University Frankfurt 

Citation: The current ability to test theories of gravity with black hole shadows. Yosuke Mizuno, Ziri Younsi, Christian M. Fromm, Oliver Porth, Mariafelicia De Laurentis, Hector Olivares, Heino Falcke, Michael Kramer & Luciano Rezzolla. Nature Astronomy (2018). DOI: 10.1038/s41550-018-0449-5

Monday, April 16, 2018

Holey Silicon May Be the Holy Grail of Electronics

Electronics miniaturization has put high-powered computing capability into the hands of ordinary people, but the ongoing downsizing of integrated circuits is challenging engineers to come up with new ways to thwart component overheating.

Scientists at the University of California, Irvine made a breakthrough recently in verifying a new material configuration to facilitate cooling. In a study in the journal Nanotechnology, members of UCI’s Nano Thermal Energy Research Group highlight the attributes of holey silicon, a computer chip wafer with tiny, vertically etched orifices that work to shuttle heat to desired locations.

“We found that heat prefers to travel vertically through but not laterally across holey silicon, which means the material can effectively move the heat from local hot spots to on-chip cooling systems in the vertical direction while sustaining the necessary temperature gradient for thermoelectric junctions in the lateral direction,” said corresponding author Jaeho Lee, UCI assistant professor of mechanical & aerospace engineering.

Jaeho Lee, UCI assistant professor of mechanical & aerospace engineering, believes that holey silicon – microchip material vertically etched with nanoscale orifices – might be a breakthrough in the quest to keep modern electronics cool. 

Credit: Steve Zylius / UCI

“This innovation could potentially be ideal for keeping electronic devices such as smartphones cool during operation,” said lead author Zongqing Ren, a graduate student researcher in the NTERG.

He said that lab simulations demonstrated that the cooling effectiveness of holey silicon is at least 400 percent better than chalcogenides, compounds commonly used in thermoelectric cooling devices.

The lab’s holey silicon research is a follow-on to a study published in Nature Communications in early 2017 in which Lee, as lead author, and his UC Berkeley-based collaborators employed nanometer-scale silicon mesh material to investigate properties of phonons, quasiparticles that give scientists insight into thermal transport mechanisms.

“We know that phonons can show wave-like as well as particle-like behavior during thermal transport,” Lee said. “Using meshes with different hole sizes and spacing, we were able to clarify complex thermal transport mechanisms at the nanoscale.”

Knowledge gained from the earlier study helped his team understand how the small, neck-shaped constrictions created by the etched holes in holey silicon cause phonon backscattering, a particle effect that leads to low in-plane thermal conductivity. The high cross-plane thermal conductivity, by contrast, arises from long-wavelength phonons that carry heat away vertically.
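This directional preference can be pictured with Fourier's law, q = -k dT/dx, applied with a much larger cross-plane than in-plane conductivity. A minimal sketch, in which the conductivity values are illustrative placeholders rather than measured figures from the UCI study:

```python
# Illustrative sketch of anisotropic heat conduction in holey silicon.
# The conductivity values below are placeholder assumptions for
# illustration only, not measured properties from the study.

def heat_flux(k, grad_T):
    """Fourier's law in 1D: q = -k * dT/dx, in W/m^2."""
    return -k * grad_T

# Assumed (hypothetical) conductivities, W/(m*K):
k_cross_plane = 50.0   # vertical, along the etched holes
k_in_plane = 2.0       # lateral, across the neck-shaped constrictions

grad_T = 1.0e6  # temperature gradient, K/m (about 1 K across 1 micron)

q_vertical = heat_flux(k_cross_plane, grad_T)
q_lateral = heat_flux(k_in_plane, grad_T)

# In this sketch heat moves 25x more readily vertically than laterally,
# funneling hot-spot heat toward an on-chip cooler while preserving the
# lateral gradient needed by thermoelectric junctions.
print(abs(q_vertical) / abs(q_lateral))  # 25.0
```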

Lee said the temperature problem in electronics has grown in the past few years as microchip designers seem to have reached a size boundary. With larger components, manufacturers can use heat sinks, fins and even fans to funnel warmth away from critical hardware. On today’s densely packed chips with billions of nanoscale transistors – often sandwiched in slim, pocketable consumer products – there’s no room for such cooling technologies.

Other key issues are longevity and reliability. Semiconductor chips are being embedded in many new places – acting as sensors and actuators in cars and appliances and as nodes along the internet of things. These devices are expected to run continuously for years and even decades. Prolonged exposure to heat could cause the failure of such infrastructure.

“On one hand, nanotechnology has opened up a whole new world of possibilities, but on the other, it’s created a host of challenges,” Lee said. “It’s important that we continue to develop a better understanding of the fundamentals of thermal transport and find ways to control heat transfer at the nanoscale.”

Contacts and sources:
Brian Bell
University of California, Irvine

How a Hummingbird Uses Its Tail to Serenade the Ladies

In the world of Costa’s hummingbirds, it’s not size that matters—it’s sound. During breeding season, male Costa’s perform a high-speed dive during which they “sing” to potential mates using their tail feathers.

Unlike related hummingbird species, Costa’s perform their dives to the side of females rather than in front of them. In a paper published in Current Biology, researchers at the University of California, Riverside show that this trajectory minimizes the audible Doppler shift that occurs as the birds dive. The Doppler effect is familiar to most people as the change in pitch of an ambulance siren as the vehicle passes by.
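The geometry follows from the standard Doppler formula for a moving sound source, f_obs = f c / (c - v cos θ), where θ is the angle between the dive direction and the line to the listener. A minimal sketch; the tail-feather frequency and dive speed below are illustrative assumptions, not measurements from the study:

```python
import math

# Sketch of why a sideways dive minimizes the audible Doppler shift.
# The frequency and speed values are illustrative assumptions.

def doppler_shift(f_src, v, theta_deg, c_sound=343.0):
    """Observed frequency for a source moving at speed v (m/s), where
    theta is the angle between the velocity and the line to the
    listener (0 degrees = diving straight at her)."""
    v_radial = v * math.cos(math.radians(theta_deg))
    return f_src * c_sound / (c_sound - v_radial)

f_tail = 7000.0   # assumed tail-feather tone, Hz
v_dive = 25.0     # assumed dive speed, m/s

head_on = doppler_shift(f_tail, v_dive, 0.0)    # dive straight at female
sideways = doppler_shift(f_tail, v_dive, 90.0)  # dive to her side

# Head-on, the tone shifts upward by hundreds of Hz, betraying the
# dive speed; to the side, the radial velocity (and hence the shift)
# is essentially zero.
print(round(head_on - f_tail), round(sideways - f_tail))
```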

Male Costa’s hummingbirds court females using a high-speed dive in which they sing with their tail feathers.
Credit: Christopher Clark, UC Riverside.

The findings suggest that males can strategically manipulate the way females perceive their displays by minimizing the Doppler sound. This deprives females of an acoustic indicator that would otherwise reveal the speed of their dives.

“Recent studies in birds and other animals suggest that females prefer higher speeds during male athletic displays. By concealing their speed, males are not necessarily cheating; instead, this dive trajectory appears to have evolved in response to female choice,” said Christopher Clark, who led the study. Clark is an assistant professor of biology in UCR’s College of Natural and Agricultural Sciences.

Clark and co-author Emily Mistick, a former research assistant at UCR, showed that males aim their sound toward potential mates by twisting their tail vertically by up to 90 degrees. In previous research, Clark has shown that the tail song is made by the flutter of the outer tail feathers.

“We don’t know why males twist only half of their tails toward the females, but it may be due to anatomical limitations that prevent them from twisting their whole tail around,” Clark said.

Credit: Univ. of California, Riverside

Clark and Mistick used a device called an acoustic camera to record Costa’s dives. They also conducted experiments in a wind tunnel to examine how the birds’ speed and direction influence the sounds they make. Curiously, they found it was difficult to measure the velocity of the Costa’s dive from the sound produced.

“Once I realized it wasn’t trivial for a scientist to measure, I realized it wouldn’t be trivial for a female to measure either,” Clark said.

Clark said the findings add to the literature about how male animals use athletic displays to attract females.

“Most research has focused on static male attributes, such as bright colors or elongated tails, but our research shows that dynamic displays may be just as important, and males strategically control these performances to show themselves in the best possible light,” Clark said.

Contacts and sources:
University of California, Riverside

Citation: Strategic Acoustic Control of a Hummingbird Courtship Dive. Christopher J. Clark, Emily A. Mistick. Current Biology.

Resistance Isn't Futile: Artificial Antimicrobial Peptides Could Help Overcome Drug-Resistant Bacteria

With aid of computer algorithm, researchers develop peptides more powerful than those found in nature.

During the past several years, many strains of bacteria have become resistant to existing antibiotics, and very few new drugs have been added to the antibiotic arsenal.

To help combat this growing public health problem, some scientists are exploring antimicrobial peptides — naturally occurring peptides found in most organisms. Most of these are not powerful enough to fight off infections in humans, so researchers are trying to come up with new, more potent versions.

Image: Ella Marushchenko

Researchers at MIT and the Catholic University of Brasilia have now developed a streamlined approach to developing such drugs. Their new strategy, which relies on a computer algorithm that mimics the natural process of evolution, has already yielded one potential drug candidate that successfully killed bacteria in mice.

“We can use computers to do a lot of the work for us, as a discovery tool of new antimicrobial peptide sequences,” says Cesar de la Fuente-Nunez, an MIT postdoc and Areces Foundation Fellow. “This computational approach is much more cost-effective and much more time-effective.”

De la Fuente-Nunez and Octavio Franco of the Catholic University of Brasilia and the Dom Bosco Catholic University are the corresponding authors of the paper, which appears in the April 16 issue of Nature Communications. Timothy Lu, an MIT associate professor of electrical engineering and computer science, and of biological engineering, is also an author.

Artificial peptides

Antimicrobial peptides kill microbes in many different ways. They enter microbial cells by damaging their membranes, and once inside, they can disrupt cellular targets such as DNA, RNA, and proteins.

In their search for more powerful, artificial antimicrobial peptides, scientists typically synthesize and test hundreds of new variants against different types of bacteria, a laborious and time-consuming process.

De la Fuente-Nunez and his colleagues wanted to find a way to make computers do most of the design work. To achieve that, the researchers created a computer algorithm that incorporates the same principles as Darwin’s theory of natural selection. The algorithm can start with any peptide sequence, generate thousands of variants, and test them for the desired traits that the researchers have specified.

“By using this approach, we were able to explore many, many more peptides than if we had done this manually. Then we only had to screen a tiny fraction of the entirety of the sequences that the computer was able to browse through,” de la Fuente-Nunez says.

In this study, the researchers began with an antimicrobial peptide found in the seeds of the guava plant. This peptide, known as Pg-AMP1, has only weak antimicrobial activity. The researchers told the algorithm to come up with peptide sequences with two features that help peptides to penetrate bacterial membranes: a tendency to form alpha helices and a certain level of hydrophobicity.
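The selection loop can be sketched as a toy genetic algorithm. Everything below (the fitness function, which combines a mean Kyte-Doolittle hydropathy score with a crude helix-former bonus, plus the mutation rate and population sizes) is an illustrative assumption, not the scoring or parameters used in the actual study:

```python
import random

# Toy genetic-algorithm sketch of the in-silico peptide evolution
# described above. The fitness function is a stand-in assumption,
# not the study's actual scoring.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Kyte-Doolittle hydropathy values per residue
KD = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4,
      "H": -3.2, "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5,
      "P": -1.6, "Q": -3.5, "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2,
      "W": -0.9, "Y": -1.3}

HELIX_FORMERS = set("AELM")  # residues with high alpha-helix propensity

def fitness(peptide):
    """Reward hydrophobicity and helix-forming residues (toy score)."""
    hydropathy = sum(KD[aa] for aa in peptide) / len(peptide)
    helix = sum(aa in HELIX_FORMERS for aa in peptide) / len(peptide)
    return hydropathy + 5.0 * helix

def mutate(peptide, rate=0.1):
    """Randomly swap each residue with probability `rate`."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate
                   else aa for aa in peptide)

def evolve(seed, generations=50, pop_size=200, survivors=20):
    """Generate mutants each generation; keep the fittest (elitism)."""
    population = [seed]
    for _ in range(generations):
        offspring = [mutate(p) for p in population
                     for _ in range(pop_size // len(population))]
        population = sorted(offspring + population,
                            key=fitness, reverse=True)[:survivors]
    return population[0]

random.seed(0)
seed = "GWQDMGGHIKLRSTNEQPAV"  # arbitrary 20-residue starting sequence
best = evolve(seed)
print(fitness(seed), fitness(best))  # fitness improves over generations
```

Because parents survive alongside their offspring, the best score never decreases from one generation to the next; only the top few candidates would then go on to synthesis and lab testing.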

After the algorithm generated and evaluated tens of thousands of peptide sequences, the researchers synthesized the 100 most promising candidates to test against bacteria grown in lab dishes. The top performer, known as guavanin 2, contains 20 amino acids. Unlike the original Pg-AMP1 peptide, which is rich in the amino acid glycine, guavanin 2 is rich in arginine and contains only a single glycine residue.

More powerful

These differences make guavanin 2 much more potent, especially against a type of bacteria known as Gram-negative. Gram-negative bacteria include many species responsible for the most common hospital-acquired infections, including pneumonia and urinary tract infections.

The researchers tested guavanin 2 in mice with a skin infection caused by a type of Gram-negative bacteria known as Pseudomonas aeruginosa, and found that it cleared the infections much more effectively than the original Pg-AMP1 peptide.

“This work is important because new types of antibiotics are needed to overcome the growing problem of antibiotic resistance,” says Mikhail Shapiro, an assistant professor of chemical engineering at Caltech, who was not involved in the study. “The authors take an innovative approach to this problem by computationally designing antimicrobial peptides using an ‘in silico’ evolutionary algorithm, which scores new peptides based on a set of properties known to be correlated with effectiveness. They also include an impressive array of experiments to show that the resulting peptides indeed have the properties needed to serve as antibiotics, and that they work in at least one mouse model of infections.”

De la Fuente-Nunez and his colleagues now plan to further develop guavanin 2 for potential human use, and they also plan to use their algorithm to seek other potent antimicrobial peptides. There are currently no artificial antimicrobial peptides approved for use in human patients.

“A report commissioned by the British government estimates that antibiotic-resistant bacteria will kill 10 million people per year by the year 2050, so coming up with new methods to generate antimicrobials is of huge interest, both from a scientific perspective and also from a global health perspective,” de la Fuente-Nunez says.

The research was funded by the Ramón Areces Foundation and the Defense Threat Reduction Agency (DTRA).

Contacts and sources:
Anne Trafton
Massachusetts Institute of Technology (MIT)

Dense Stellar Clusters Foster Black Hole Mergers

Black holes in dense stellar clusters could combine repeatedly to form objects bigger than anything a single star could produce.

When LIGO’s twin detectors first picked up faint wobbles in their respective, identical mirrors, the signal didn’t just provide the first direct detection of gravitational waves; it also confirmed the existence of stellar binary black holes, which gave rise to the signal in the first place.

Stellar binary black holes are formed when two black holes, created out of the remnants of massive stars, begin to orbit each other. Eventually, the black holes merge in a spectacular collision that, according to Einstein’s theory of general relativity, should release a huge amount of energy in the form of gravitational waves.
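The scale of that energy release can be ballparked as E = f M c², treating the radiated fraction f as a few percent of the total mass; the 5 percent used below is a typical order-of-magnitude assumption for comparable-mass mergers, not a figure from this study:

```python
# Back-of-envelope sketch of the energy released as gravitational
# waves in a binary black hole merger. The ~5% radiated fraction is
# an order-of-magnitude assumption for comparable-mass pairs.

C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def gw_energy(m1_msun, m2_msun, radiated_fraction=0.05):
    """Energy radiated in joules, assuming a fixed radiated fraction."""
    total_mass = (m1_msun + m2_msun) * M_SUN
    return radiated_fraction * total_mass * C**2

# Two ~30-solar-mass black holes (roughly GW150914-like):
e = gw_energy(30, 30)
print(f"{e:.2e} J")  # a few times 10^47 joules
```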

A snapshot of a simulation showing a binary black hole formed in the center of a dense star cluster.
Image: Northwestern Visualization/Carl Rodriguez

Now, an international team led by MIT astrophysicist Carl Rodriguez suggests that black holes may partner up and merge multiple times, producing black holes more massive than those that form from single stars. These “second-generation mergers” should come from globular clusters — small regions of space, usually at the edges of a galaxy, that are packed with hundreds of thousands to millions of stars.

A simulation showing the dynamics of 50 black holes in the center of a star cluster, where two single black holes eventually form a binary black hole.

Video: Northwestern Visualization/Carl Rodriguez

“We think these clusters formed with hundreds to thousands of black holes that rapidly sank down in the center,” says Carl Rodriguez, a Pappalardo fellow in MIT’s Department of Physics and the Kavli Institute for Astrophysics and Space Research. “These kinds of clusters are essentially factories for black hole binaries, where you’ve got so many black holes hanging out in a small region of space that two black holes could merge and produce a more massive black hole. Then that new black hole can find another companion and merge again.”

A simulation showing an encounter between a binary black hole (in orange) and a single black hole (in blue) with relativistic effects. Eventually two black holes emit a burst of gravitational waves and merge, creating a new black hole (in red).
Image: Carl Rodriguez

If LIGO detects a binary with a black hole component whose mass is greater than around 50 solar masses, then according to the group’s results, there’s a good chance that object arose not from individual stars, but from a dense stellar cluster.

“If we wait long enough, then eventually LIGO will see something that could only have come from these star clusters, because it would be bigger than anything you could get from a single star,” Rodriguez says.

He and his colleagues report their results in a paper appearing in Physical Review Letters.

For the past several years, Rodriguez has investigated the behavior of black holes within globular clusters and whether their interactions differ from black holes occupying less populated regions in space.

Globular clusters can be found in most galaxies, and their number scales with a galaxy’s size. Huge, elliptical galaxies, for instance, host tens of thousands of these stellar conglomerations, while our own Milky Way holds about 200, with the closest cluster residing about 7,000 light years from Earth.

In their new paper, Rodriguez and his colleagues report using a supercomputer called Quest, at Northwestern University, to simulate the complex, dynamical interactions within 24 stellar clusters, ranging in size from 200,000 to 2 million stars, and covering a range of different densities and metallic compositions. The simulations model the evolution of individual stars within these clusters over 12 billion years, following their interactions with other stars and, ultimately, the formation and evolution of the black holes. The simulations also model the trajectories of black holes once they form.

“The neat thing is, because black holes are the most massive objects in these clusters, they sink to the center, where you get a high enough density of black holes to form binaries,” Rodriguez says. “Binary black holes are basically like giant targets hanging out in the cluster, and as you throw other black holes or stars at them, they undergo these crazy chaotic encounters.”

It’s all relative

When running their simulations, the researchers added a key ingredient that was missing in previous efforts to simulate globular clusters.

“What people had done in the past was to treat this as a purely Newtonian problem,” Rodriguez says. “Newton’s theory of gravity works in 99.9 percent of all cases. The few cases in which it doesn’t work might be when you have two black holes whizzing by each other very closely, which normally doesn’t happen in most galaxies.”

Newtonian gravity assumes that, if the black holes were unbound to begin with, neither one would affect the other, and they would simply pass each other by, unchanged. This line of reasoning breaks down because Newton’s theory does not account for gravitational waves, which Einstein much later predicted would arise from massive orbiting objects, such as two black holes in close proximity.

“In Einstein’s theory of general relativity, where I can emit gravitational waves, then when one black hole passes near another, it can actually emit a tiny pulse of gravitational waves,” Rodriguez explains. “This can subtract enough energy from the system that the two black holes actually become bound, and then they will rapidly merge.”
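This capture mechanism can be put in rough numbers: two unbound black holes become a binary when the gravitational-wave energy radiated during the flyby exceeds their relative kinetic energy. A back-of-envelope sketch using the standard close-encounter energy-loss estimate (Peters 1964; Quinlan & Shapiro 1989), with illustrative masses, speed, and periapsis distances:

```python
import math

# Order-of-magnitude sketch of gravitational-wave capture in a close
# flyby: an unbound pair becomes a binary if the energy radiated in
# the encounter exceeds their relative kinetic energy. Masses, speed,
# and periapsis values are illustrative, not from the study.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def gw_energy_flyby(m1, m2, r_p):
    """Energy (J) radiated in a close parabolic flyby, periapsis r_p (m)."""
    return (85 * math.pi / (12 * math.sqrt(2))
            * G**3.5 * m1**2 * m2**2 * math.sqrt(m1 + m2)
            / (C**5 * r_p**3.5))

def captured(m1, m2, r_p, v_inf):
    """True if GW losses exceed the pair's relative kinetic energy."""
    mu = m1 * m2 / (m1 + m2)          # reduced mass
    return gw_energy_flyby(m1, m2, r_p) > 0.5 * mu * v_inf**2

m = 30 * M_SUN
v = 10e3  # ~10 km/s, a typical cluster velocity dispersion

print(captured(m, m, 1e7, v))   # very close pass: True, a binary forms
print(captured(m, m, 1e10, v))  # distant pass: False, they fly apart
```

The steep r_p dependence (the radiated energy falls off as the 7/2 power of the periapsis) is why such captures essentially only happen in dense environments where very close passes are common.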

The team decided to add Einstein’s relativistic effects into their simulations of globular clusters. After running the simulations, they observed black holes merging with each other to create new black holes, inside the stellar clusters themselves. Without relativistic effects, Newtonian gravity predicts that most binary black holes would be kicked out of the cluster by other black holes before they could merge. But by taking relativistic effects into account, Rodriguez and his colleagues found that nearly half of the binary black holes merged inside their stellar clusters, creating a new generation of black holes more massive than those formed from the stars. What happens to those new black holes inside the cluster is a matter of spin.

“If the two black holes are spinning when they merge, the black hole they create will emit gravitational waves in a single preferred direction, like a rocket, creating a new black hole that can shoot out as fast as 5,000 kilometers per second — so, insanely fast,” Rodriguez says. “It only takes a kick of maybe a few tens to a hundred kilometers per second to escape one of these clusters.”

Because of this effect, scientists had largely assumed that the product of any black hole merger would be kicked out of the cluster, since most black holes were presumed to be rapidly spinning.

This assumption, however, seems to contradict the measurements from LIGO, which has so far only detected binary black holes with low spins. To test the implications of this, Rodriguez dialed down the spins of the black holes in his simulations and found that in this scenario, nearly 20 percent of binary black holes from clusters had at least one black hole that was formed in a previous merger. Because they were formed from other black holes, some of these second-generation black holes can be in the range of 50 to 130 solar masses. Scientists believe black holes of this mass cannot form from a single star.
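The mass arithmetic behind those second-generation figures is simple to sketch. The roughly 5 percent of mass radiated away per merger is a typical assumption for comparable-mass pairs, and the component masses below are illustrative:

```python
# Sketch of how repeated "second-generation" mergers build black holes
# above the single-star limit. The ~5% mass lost to gravitational
# waves per merger is an assumed typical figure; component masses
# are illustrative.

def merge(m1, m2, radiated_fraction=0.05):
    """Remnant mass (solar masses) after radiating a fixed fraction."""
    return (1 - radiated_fraction) * (m1 + m2)

# First generation: black holes born from stars (at most a few tens
# of solar masses each).
gen1 = merge(35, 30)    # ~62 Msun, already above the ~50 Msun line
gen2 = merge(gen1, 38)  # ~95 Msun, deep inside the 50-130 Msun range

print(round(gen1, 1), round(gen2, 1))
```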

Rodriguez says that if gravitational-wave telescopes such as LIGO detect an object with a mass within this range, there is a good chance that it came not from a single collapsing star, but from a dense stellar cluster.

“My co-authors and I have a bet against a couple people studying binary star formation that within the first 100 LIGO detections, LIGO will detect something within this upper mass gap,” Rodriguez says. “I get a nice bottle of wine if that happens to be true.”

This research was supported in part by the MIT Pappalardo Fellowship in Physics, NASA, the National Science Foundation, the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) at Northwestern University, the Institute of Space Sciences (ICE, CSIC) and Institut d'Estudis Espacials de Catalunya (IEEC), and the Tata Institute of Fundamental Research in Mumbai, India.

Contacts and sources:
Jennifer Chu 
Massachusetts Institute of Technology (MIT)