Wednesday, August 31, 2016

Discovery One-ups Tatooine: Twin Stars Found Hosting Three Giant Exoplanets

A team of Carnegie scientists has discovered three giant planets in a binary star system composed of stellar “twins” that are also effectively siblings of our Sun. One star hosts two planets and the other hosts the third. The system represents the smallest-separation binary in which both stars host planets that has ever been observed. The findings, which may help explain the influence that giant planets like Jupiter have over a solar system’s architecture, have been accepted for publication in The Astronomical Journal.

New discoveries coming from the study of exoplanetary systems will show us where on the continuum of ordinary to unique our own Solar System’s layout falls. So far, planet hunters have revealed populations of planets that are very different from what we see in our Solar System. The most-common exoplanets detected are so-called super-Earths, which are larger than our planet but smaller than Neptune or Uranus. Given current statistics, Jupiter-sized planets seem fairly rare—having been detected only around a small percentage of stars.

 Artist’s conception of the binary system with three giant planets discovered in this study. One star hosts two planets and the other hosts the third. The system represents the smallest-separation binary in which both stars host planets that has ever been observed.

  Image is courtesy of Robin Dienel.

This is of interest because Jupiter’s gravitational pull was likely a huge influence on our Solar System’s architecture during its formative period. So the scarcity of Jupiter-like planets could explain why our home system is different from all the others found to date.

The new discovery from the Carnegie team is the first exoplanet detection made based solely on data from the Planet Finder Spectrograph—developed by Carnegie scientists and mounted on the Magellan Clay Telescopes at Carnegie’s Las Campanas Observatory. PFS is able to find large planets with long-duration orbits or orbits that are very elliptical rather than circular, including the new trio of planets discovered in this “twin” star study. This special capability comes from the long observing baseline of PFS; it has been taking observations for six years.

Led by Johanna Teske, the team included a number of Carnegie scientists from both the Department of Terrestrial Magnetism in Washington, DC, and the Carnegie Observatories in Pasadena, CA, as well as Steve Vogt of the University of California Santa Cruz.

“We are trying to figure out if giant planets like Jupiter often have long and/or eccentric orbits,” Teske explained. “If this is the case, it would be an important clue to figuring out the process by which our Solar System formed, and might help us understand where habitable planets are likely to be found.”

  An illustration of this highly unusual system, which features the smallest-separation binary yet discovered in which both stars host planets. Only six other metal-poor binary star systems with exoplanets have ever been found.
 Illustration is courtesy of Timothy Rodigas.

The twin stars studied by the group are called HD 133131A and HD 133131B. The former hosts two moderately eccentric planets, with minimum masses of about one and a half times and just over half of Jupiter’s mass, respectively. The latter hosts one moderately eccentric planet with a mass at least 2.5 times Jupiter’s.

The two stars themselves are separated by only 360 astronomical units (AU). One AU is the distance between the Earth and the Sun. This is extremely close for twin stars with detected planets orbiting the individual stars. The next-closest binary system that hosts planets comprises two stars about 1,000 AU apart.

The system is even more unusual because both stars are “metal poor,” meaning that most of their mass is hydrogen and helium, as opposed to other elements like iron or oxygen. Most stars that host giant planets are “metal rich.” Only six other metal-poor binary star systems with exoplanets have ever been found, making this discovery especially intriguing.

Adding to the intrigue, Teske used very precise analysis to reveal that the stars are not actually identical “twins” as previously thought, but have slightly different chemical compositions, making them more like the stellar equivalent of fraternal twins.

This could indicate that one star swallowed some baby planets early in its life, changing its composition slightly. Alternatively, the gravitational forces of the detected giant planets that remained may have had a strong effect on fully-formed small planets, flinging them in towards the star or out into space.

“The probability of finding a system with all these components was extremely small, so these results will serve as an important benchmark for understanding planet formation, especially in binary systems,” Teske explained.

The other members of Teske’s team were Carnegie’s Stephen Shectman, Matías Díaz, Paul Butler, Jeffrey Crane, and Pamela Arriagada.

Contacts and sources:
Johanna Teske
Carnegie Institution for Science

Powerful Ring Airy Beams: THz Waves Look Behind Walls 100 Feet Away

Science fiction's sensor beams are becoming a reality. 

Terahertz (THz) waves, which fall between the microwave and infrared bands on the electromagnetic spectrum, can penetrate certain solid objects that are opaque to visible light to create images of what is hidden from view. Unlike traditional x-rays, the waves do so without damaging human tissue. All that makes THz waves a promising tool for Homeland Security and other law enforcement agencies. But before THz waves can be widely used, a number of obstacles need to be overcome, including how to make them more effective over greater distances.

Here’s the scene: a suspicious package is found in a public place. The police are called in and clear the area. Forced to work from a distance and unable to peer inside, they fear the worst and decide to detonate the package.

New research at the University of Rochester might help authorities in the not-too-distant future be better informed in tackling such situations and do so more safely. Working with a special type of electromagnetic wave—called terahertz (THz)—that’s capable of sensing and/or imaging objects behind barriers, the team demonstrated that they can detect a THz wave at a distance of up to 100 feet. The THz wave created by the researchers is more than five times stronger than what is generated by more conventional means, leading them to believe that a THz wave—and the image of a hidden object—can be detected at much greater distances in the future.

 Imaging the contents of a teapot using THz waves. Shown are the original teapot, the teapot empty, and the teapot half-full.

Photos by the Lab of Xi-Cheng Zhang/University of Rochester

The research project was led by Kang Liu, a PhD student in optics, and Xi-Cheng Zhang, the M. Parker Givens Professor of Optics and the director of the Institute of Optics, in collaboration with a group from Greece led by Stelios Tzortzakis. The results have been published in the journal Optica.

“The use of an unconventional laser beam in our project goes beyond a scientific curiosity,” said Zhang. “It makes possible the remote sensing of chemical, biological, and explosive materials from a standoff distance.”

Creating a more robust terahertz wave from an Airy beam. 

Graphic by Michael Osadciw/University of Rochester

One drawback of THz waves is that they are absorbed by water molecules in the air and weaken significantly over longer distances, making them generally ineffective at range. One solution is to generate the THz waves near the target, so that they have only a short distance to travel. It’s also important that the waves be intense, because, as Liu points out, “The stronger the terahertz wave, the more work it can do.”

The key to their results was the use of a specific exotic laser beam—called a ring-Airy beam—to generate a THz wave that has 5.3 times the pulse energy of THz waves created with standard Gaussian beams.

Ordinary beams of light spread out as they travel, but that’s not the case with ring-Airy beams, which curve toward the center from all points.

To begin the process, Liu directed a laser beam onto a spatial light modulator (SLM), which formed the ring-Airy beam. As the name indicates, the beam is circular with a hollow center. Instead of spreading out as it travels, the beam collapsed inward, creating an intensely excited region of free electrons—called a plasma. Those electrons, in turn, generated the THz wave, which would be capable of penetrating a nearby target and reflecting images or providing vital chemical information about what is hidden.

“When the target is a suspected explosive device, it’s important to get the work done at a safe distance,” said Liu. “We believe our method could help THz remote sensing from more than 100 feet away by providing a more robust and flexible way to generate THz remotely.”

The modulator allowed the researchers to change the size of the ring-Airy beam and fine-tune the dimensions of the plasma that is created. The next step, as Liu sees it, is to manipulate ring-Airy beams to create stronger THz waves over greater distances.

Funding for the research project was provided by the US Army Research Office, “Laserlab-Europe”, and the General Secretariat for Research and Technology Aristeia project “FTERA.”


Contacts and sources:
Peter Iglinski
 University of Rochester

Your Brain on Sentences: What Does the Meaning of a Word Look Like?

Researchers at the University of Rochester have, for the first time, decoded the brain activity patterns of word meanings within sentences and successfully predicted what those patterns would be for new sentences.

The study used functional magnetic resonance imaging (fMRI) to measure human brain activation. “Using fMRI data, we wanted to know if given a whole sentence, can we filter out what the brain’s representation of a word is—that is to say, can we break the sentence apart into its word components, then take the components and predict what they would look like in a new sentence,” said Andrew Anderson, a research fellow who led the study as a member of the lab of Rajeev Raizada, assistant professor of brain and cognitive sciences at Rochester.

“We found that we can predict brain activity patterns—not perfectly [on average 70% correct], but significantly better than chance,” said Anderson. The study is published in the journal Cerebral Cortex.

These brain maps show how accurately it was possible to predict neural activation patterns for new, previously unseen sentences, in different regions of the brain. The brighter the area, the higher the accuracy. The most accurate area, which can be seen as the bright yellow strip, is a region in the left side of the brain known as the Superior Temporal Sulcus. This region achieved statistically significant sentence predictions in 11 out of the 14 people whose brains were scanned. Although that was the most accurate region, several other regions, broadly distributed across the brain, also produced significantly accurate sentence predictions.

University of Rochester graphic / Andrew Anderson and Xixi Wang

Anderson and his colleagues say the study makes key advances toward understanding how information is represented throughout the brain. “First, we introduced a method for predicting the neural patterns of words within sentences—which is a more complex problem than has been addressed by previous studies, which have almost all focused on single words,” Anderson said. “And second, we devised a novel approach to map semantic characteristics of words that we then correlated to neural activity patterns.”

Finding a word in a sentence

To predict the patterns of particular words within sentences, the researchers used a broad set of sentences, with many words shared between them. For example: “The green car crossed the bridge,” “The magazine was in the car,” and “The accident damaged the yellow car.” fMRI data was collected from 14 participants as they silently read 240 unique sentences.

“We estimate the representation of a word, ‘car’ in this case, by taking the neural activity patterns associated with all of the sentences in which that word occurred, and decomposing those sentence-level brain activity patterns to build an estimate of the word’s representation,” explained Anderson.
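The decomposition Anderson describes can be sketched as a least-squares problem, under the simplifying assumption that a sentence's activation pattern is approximately the sum of its words' patterns. Everything below, the vocabulary, the sentences, and the "fMRI" data, is invented for illustration:

```python
import numpy as np

# Toy vocabulary and sentences (invented for illustration).
vocab = ["car", "bridge", "magazine", "accident"]
sentences = [
    ["car", "bridge"],       # e.g. "The green car crossed the bridge"
    ["magazine", "car"],     # "The magazine was in the car"
    ["accident", "car"],     # "The accident damaged the yellow car"
    ["magazine", "bridge"],
]

rng = np.random.default_rng(0)
n_voxels = 50

# Simulate "true" word patterns and noisy sentence-level data,
# under the additive assumption: sentence pattern = sum of word patterns.
true_words = rng.normal(size=(len(vocab), n_voxels))
X = np.array([[1 if w in s else 0 for w in vocab] for s in sentences], float)
Y = X @ true_words + 0.1 * rng.normal(size=(len(sentences), n_voxels))

# Decompose sentence patterns into estimated word patterns (least squares).
est_words, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the pattern for a *new* sentence from its word components.
new_sentence = ["accident", "bridge"]
pred = sum(est_words[vocab.index(w)] for w in new_sentence)
truth = sum(true_words[vocab.index(w)] for w in new_sentence)
r = np.corrcoef(pred, truth)[0, 1]
print(f"correlation with true pattern: {r:.2f}")
```

The sketch also shows the recombination step the study reports: a sentence the model never saw is predicted from word patterns estimated out of other sentences.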

Brain activation patterns for different sensory and emotional aspects of the word “play.” The numbers to the left of each brain pattern show how strongly the word is associated with each feature. For example, “play” is positively associated with “Biomotion”, because playing often involves people moving their bodies. But it is negatively associated with “Unpleasant”, because play is rarely an unpleasant activity.
University of Rochester graphic / Andrew Anderson

What does the meaning of a word look like?

“Coffee has a color, smell, you can drink it—coffee makes you feel good—it has sensory, emotional, and social aspects,” said senior author Raizada. “So we built upon a model created by Jeffrey Binder at the Medical College of Wisconsin, a coauthor on the paper, and surveyed people to tell us about the sensory, emotional, social and other aspects for a set of words. Together, we then took that approach in a new direction, by going beyond individual words to entire sentences.”

The new semantic model employs 65 attributes—such as “color,” “pleasant,” “loud,” and “time.” Participants in the survey rated, on a scale of 0-6, the degree to which a given root concept was associated with a particular experience. For example, “To what degree do you think of ‘coffee’ as having a characteristic or defining temperature?” In total, 242 unique words were rated on each of the 65 attributes.
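In such a model, each word becomes a vector of attribute ratings, and similarity of meaning becomes similarity of vectors. A minimal sketch, with a handful of invented attributes and made-up 0-6 ratings standing in for the 65 surveyed attributes:

```python
import numpy as np

# Invented subset of attributes and 0-6 ratings, for illustration only;
# the published model uses 65 attributes rated by survey participants.
attributes = ["color", "pleasant", "loud", "temperature", "biomotion"]
ratings = {
    "coffee": [4, 5, 1, 6, 0],   # strong color/pleasantness/temperature
    "play":   [2, 5, 3, 1, 6],   # strong body-motion association
    "storm":  [3, 1, 6, 2, 1],
}

def semantic_vector(word):
    """Represent a word as its attribute-rating vector, unit-normalized."""
    v = np.array(ratings[word], float)
    return v / np.linalg.norm(v)

# Words with similar experiential profiles have high cosine similarity.
sim = semantic_vector("coffee") @ semantic_vector("play")
print(f"cosine(coffee, play) = {sim:.2f}")
```

A linear mapping from such attribute vectors to voxel activations is one common way to relate a semantic model like this to fMRI data.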

“The strength of association of each word and its attributes allowed us to estimate how its meanings would be represented across the brain using fMRI,” said Raizada.

The model captures a wider breadth of experience than previous semantic models, said Anderson, “which made it easier to interpret the relationship between the predictive model and brain activity patterns.”

The team was then able to recombine activity patterns for individual words, in order to predict brain patterns for entire sentences built up out of new combinations of those words. For example, the computer model could predict the brain pattern for a sentence such as, “The family played at the beach,” even though it had never seen that specific sentence before. Instead, it had only seen other sentences containing those words in different contexts, such as “The beach was empty” and “The young girl played soccer.”

The researchers said the study opens a new set of questions toward understanding how meaning is represented in the brain. “Not now, not next year, but this kind of research may eventually help individuals who have problems with producing language, including those who suffer from traumatic brain injuries or stroke,” said Anderson.

The Intelligence Advanced Research Projects Activity and the National Science Foundation supported the research.

Contacts and sources:
Monique Patenaude
University of Rochester

Businesses Spent $341 Billion on R&D Performed in US in 2014

Businesses spent $341 billion on research and development (R&D) performed in the United States in 2014, a 5.6 percent increase over the previous year, according to a new report from the National Center for Science and Engineering Statistics (NCSES).

Development accounted for the greatest share, 78 percent, of 2014 R&D spending. Applied research accounted for 16 percent, while basic research accounted for 6 percent. The NCSES InfoBrief focuses on business-sector R&D spending. Other sectors, including higher education and federally funded research and development centers (FFRDCs), also contribute to total U.S. R&D spending.

Development accounted for the greatest share of business R&D performance in 2014.

Credit: NSF

Funding from companies' own sources rose by 6.7 percent from 2013 to 2014, totaling $283 billion. Funding from other sources totaled $58 billion. The federal government was the largest of those other sources, accounting for $27 billion, $19 billion of which came from the Department of Defense. Of the federal funding, 92 percent went toward aerospace products and parts; professional, scientific and technical services; and computer and electronic products.

Small- and medium-sized companies performed 16 percent of the nation's business R&D in 2014, while companies with 500 to 24,999 domestic employees performed 48 percent. Companies with 25,000 or more employees accounted for the remaining 36 percent. Businesses that performed or funded R&D employed 21.5 million people in the U.S., 1.5 million of whom were R&D employees.

Credit: NSF

Business R&D is concentrated in a relatively small number of states. California alone accounted for 30 percent of the $283 billion in R&D funded by companies' own sources in 2014. Other states with high amounts in the business R&D category were: Massachusetts (6 percent), Michigan (5 percent), Washington (5 percent), Texas (5 percent), Illinois (4 percent), New Jersey (4 percent), New York (4 percent), and Pennsylvania (3 percent).

Companies that performed R&D in the United States in 2014 spent $638 billion on assets with expected useful lives of more than 1 year (table 5). Of this amount, $28 billion (4.4%) was spent on structures, equipment, software, and other assets used for R&D: $17 billion by manufacturers and $10 billion by companies in nonmanufacturing industries. 

Manufacturing industry groups with high levels of capital expenditures on assets used for R&D in 2014 were semiconductor and other electronic products (NAICS 3344) ($3.5 billion), pharmaceuticals and medicines (NAICS 3254) ($2.8 billion), automobiles, bodies, trailers, and parts (NAICS 3361–63) ($1.2 billion), and aerospace products and parts (NAICS 3364) ($1.2 billion). Among the nonmanufacturing industries were software publishers (NAICS 5112) ($1.8 billion), telecommunications services (NAICS 517) ($1.5 billion), and computer systems design and related services (NAICS 5415) ($1.2 billion).

For more information, including R&D performance numbers for all states and a breakdown of spending by different business sectors, read the full InfoBrief.

Contacts and sources:
Rob Margetta

Cyclops Beetles’ Solution to the Chicken-and-Egg Conundrum: Answers at the Genetic Level

Beetles with cyclops eyes have given Indiana University scientists insight into how new traits may evolve through the recruitment of existing genes -- even if these genes are already carrying out critical functions.

The study, reported in the Proceedings of the Royal Society B, was led by Eduardo Zattara, a postdoctoral researcher in the IU Bloomington College of Arts and Sciences' Department of Biology. It was published in tandem with another study led by Hannah Busey, an undergraduate student researcher at IU Bloomington and 2016 Goldwater fellow, which appeared in the Journal of Experimental Zoology.

The discovery was made after switching off orthodenticle genes in horned beetles of the genus Onthophagus, also known as dung beetles. Knocking out these genes caused drastic changes in the insects' head structure, including the loss of horns -- a recently evolved structure used for male combat over access to females -- as well as the growth of compound eyes in a completely unexpected place: the top center of the head.

The results were specific to Onthophagus; the same changes did not produce the same effects in Tribolium, or flour beetles, which do not have horns.

Heads of horned and cyclopic beetles of the genus Onthophagus. After knocking out the gene otd1, the cyclopic beetle (right) lost the horn but gained a pair of small compound eyes in the center of the head. 
Photo by Eduardo Zattara

"We were amazed that shutting down a gene could not only turn off development of horns and major regions of the head, but also turn on the development of very complex structures such as compound eyes in a new location," Zattara said. "The fact that this doesn’t happen in Tribolium is equally significant, as it suggests that orthodenticle genes have acquired a new function: to direct head and horn formation only in the highly modified head of horned beetles."

The use of Onthophagus as a model system for the evolution of novel traits has been pioneered by Armin Moczek, professor in the IU Bloomington Department of Biology, who is senior author on the papers. Work on Tribolium was conducted by David Linz and Yoshi Tomoyasu at Miami University.

Beetle embryos hatch as larvae, which grow and metamorphose into adult beetles. Many genes crucial to making the head of larvae during embryonic development are known from studies in Tribolium, but whether they were involved in making adult heads during metamorphosis was largely unknown.

In her study, Busey removed small patches of skin from the heads of larval Onthophagus and then traced where the adult heads were missing tissue.

"Using this microsurgical technique, we created a map showing which region of the larval head made each part of the adult head," she said. "This allowed us to apply knowledge about Tribolium embryonic development to Onthophagus, because even though adult heads are very different between horned and flour beetles, the larval heads are quite similar."

Zattara's study used these results to select genes needed by embryos to build larval heads and switched them off to test whether they had any roles in building the head of adults.

Eduardo Zattara
Photo by Indiana University

Among the genes they selected was orthodenticle, or otd, which contributes to head development in animals ranging from simple invertebrates to complex mammals. If otd is deleted, most animal embryos will not develop a head or brain. Similarly, beetle embryos need otd to properly develop heads, but no larval or adult function was known.

But when Zattara and colleagues switched off otd genes in the larvae of two species of Onthophagus, they found otd had acquired a new function: reorganizing the head during metamorphosis, integrating the horns in the process.

They also found that switching off these genes shrank or eliminated the beetles' horns and associated head regions and, strikingly, induced development of "cyclopic" compound eyes at the top center of the head, where they aren't normally found in insects.

Although the same manipulations in Tribolium flour beetles did not affect head development or grow extra eyes, the IU scientists were surprised to find that otd genes were still expressed in the same locations as in larval and adult Onthophagus.

The results suggest that the lingering expression of genes in specific tissues or life stages where they no longer have a function may comprise a "stepping stone" in recruiting those genes into making new traits.

“These studies provide a solution to an important 'chicken-and-egg problem' of modern evolutionary developmental biology," Zattara said. “For a gene to carry out a new function, it needs to find a way to be activated at the right time and location. But it is hard to come up with a good reason why a gene would become active in a new context without already carrying out some important function."

"Here we have a situation where a gene is already in the right place -- the head -- just not at the right time -- the embryo instead of the adult," Moczek added. "By allowing the gene's availability to linger into later stages of development, it becomes easier to envision how it could then be eventually captured by evolution and used for a new function, such as the positioning of horns."

Hannah Busey
Photo by Indiana University

These studies were supported in part by the National Science Foundation.

Contacts and sources:
Kevin Fryling 
Indiana University

Microchip Design Senses Sabotage, Detects Malicious Circuitry in Hardware, Spots Built-in Trojans

With the outsourcing of microchip design and fabrication a worldwide, $350 billion business, bad actors along the supply chain have many opportunities to install malicious circuitry in chips. These “Trojan horses” look harmless but can allow attackers to sabotage healthcare devices; public infrastructure; and financial, military, or government electronics.

Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, and fellow researchers are developing a unique solution: a chip with both an embedded module that proves that its calculations are correct and an external module that validates the first module’s proofs.

While software viruses are easy to spot and fix with downloadable patches, deliberately inserted hardware defects are invisible and act surreptitiously. For example, a secretly inserted “back door” function could allow attackers to alter or take over a device or system at a specific time. Garg’s configuration, an example of an approach called “verifiable computing” (VC), keeps tabs on a chip’s performance and can spot telltale signs of Trojans.

The ability to verify has become vital in an electronics age without trust: Gone are the days when a company could design, prototype, and manufacture its own chips. Manufacturing costs are now so high that designs are sent to offshore foundries, where security cannot always be assured.

But under the system proposed by Garg and his colleagues, the verifying processor can be fabricated separately from the chip. “Employing an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations, but also a module that presents proofs of correctness,” said Garg.

The chip designer then turns to a trusted foundry to build a separate, less complex module: an ASIC (application-specific integrated circuit), whose sole job is to validate the proofs of correctness generated by the internal module of the untrusted chip.

A chip designed to flag malicious circuitry
Credit: NYU Tandon School of Engineering

Garg said that this arrangement provides a safety net for the chip maker and the end user. “Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to the customer,” said Garg. “But two years down the line it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip because every time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”

An added advantage is that the chip built by the external foundry is smaller, faster, and more power-efficient than the trusted ASIC, sometimes by orders of magnitude. The VC setup can therefore potentially reduce the time, energy, and chip area needed to generate proofs.

“For certain types of computations, it can even outperform the alternative: performing the computation directly on a trusted chip,” Garg said.
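The team's protocol is far more sophisticated, but the core economics of verifiable computing, checking an answer much more cheaply than recomputing it, can be illustrated with Freivalds' classic probabilistic check for matrix multiplication. This is an illustrative stand-in, not the NYU scheme:

```python
import numpy as np

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that C == A @ B.

    Each trial costs O(n^2) (three matrix-vector products), versus
    O(n^3) to recompute A @ B outright; a wrong C escapes detection
    with probability at most 2**-trials.
    """
    n = C.shape[1]
    rng = np.random.default_rng()
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)           # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                         # caught a bad result
    return True

rng = np.random.default_rng(1)
A = rng.integers(0, 10, (50, 50))
B = rng.integers(0, 10, (50, 50))
C = A @ B

tampered = C.copy()
tampered[3, 7] += 1                              # a single hidden error

print(freivalds_check(A, B, C))        # honest result always passes
print(freivalds_check(A, B, tampered)) # sabotage caught with overwhelming probability
```

The same asymmetry is what makes the trusted verifier ASIC viable: it only needs to check proofs, not redo the untrusted chip's work.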

Siddharth Garg, assistant professor of electrical and computer engineering
Credit: NYU Tandon School of Engineering

The researchers next plan to investigate techniques to reduce both the overhead that generating and verifying proofs imposes on a system and the bandwidth required between the prover and verifier chips. “And because with hardware, the proof is always in the pudding, we plan to prototype our ideas with real silicon chips,” said Garg.

To pursue the promise of verifiable ASICs, Garg, abhi shelat* of the University of Virginia, Rosario Gennaro of the City University of New York, Mariana Raykova of Yale University, and Michael Taylor of the University of California, San Diego, will share a five-year National Science Foundation Large Grant of $3 million.  *abhi shelat prefers lower-case spelling

Contacts and sources:
Siddharth Garg
NYU Tandon School of Engineering

Citation: “Verifiable ASICs,” by Riad S. Wahby of Stanford University, Max Howald of The Cooper Union, Garg, shelat, and Michael Walfish of the NYU Courant Institute of Mathematical Sciences, earned a Distinguished Student Paper Award at the IEEE Symposium on Security and Privacy, one of the leading global conferences for computer security research, held in May in Oakland, California. The authors were supported by grants from the NSF, the Air Force Office of Scientific Research, the Office of Naval Research, a Microsoft Faculty Fellowship, and a Google Faculty Research Award.

Synthetic Life Does Math in a Test Tube, Are DNA Computers Next?

Often described as the blueprint of life, DNA contains the instructions for making every living thing from a human to a house fly.

But in recent decades, some researchers have been putting the letters of the genetic code to a different use: making tiny nanoscale computers.

In a new study, a Duke University team led by professor John Reif created strands of synthetic DNA that, when mixed together in a test tube in the right concentrations, form an analog circuit that can add, subtract, and multiply as the strands form and break bonds.

Duke graduate student Tianqi Song and computer science professor John Reif have created an analog DNA circuit that can add, subtract and multiply as the molecules form and break bonds. 
Photo by John Joyner.

Rather than voltage, DNA circuits use the concentrations of specific DNA strands as signals.

Other teams have designed DNA-based circuits that can solve problems ranging from calculating square roots to playing tic-tac-toe. But most DNA circuits are digital, where information is encoded as a sequence of zeroes and ones.

Instead, the new Duke device performs calculations in an analog fashion by measuring the varying concentrations of specific DNA molecules directly, without requiring special circuitry to convert them to zeroes and ones first.

The researchers describe their approach in the August issue of the journal ACS Synthetic Biology.

Unlike the silicon-based circuits used in most modern-day electronics, DNA circuits are still a long way from commercial application, Reif said.

For one, the test tube calculations are slow. It can take hours to get an answer.

“We can do some limited computing, but we can’t even begin to think of competing with modern-day PCs or other conventional computing devices,” Reif said.

But DNA circuits can be far tinier than those made of silicon. And unlike electronic circuits, DNA circuits work in wet environments, which might make them useful for computing inside the bloodstream or the soupy, cramped quarters of the cell.

The technology takes advantage of DNA’s natural ability to zip and unzip to perform computations. Just like Velcro and magnets have complementary hooks or poles, the nucleotide bases of DNA pair up and bind in a predictable way.

The researchers first create short pieces of synthetic DNA, some single-stranded and some double-stranded with single-stranded ends, and mix them in a test tube.

When a single strand encounters a perfect match at the end of one of the partially double-stranded ones, it latches on and binds, displacing the previously bound strand and causing it to detach, like someone cutting in on a dancing couple.

The newly released strand can in turn pair up with other complementary DNA molecules downstream in the circuit, creating a domino effect.

The researchers solve math problems by measuring the concentrations of specific outgoing strands as the reaction reaches equilibrium.
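Under the idealized assumption that each input strand ultimately releases one output strand (gate complexes in excess, reactions run to completion), analog addition is just the equilibrium output concentration. A toy mass-action sketch follows; it is not the actual chemistry of the Duke circuits:

```python
def simulate_addition(x1_0, x2_0, k=1.0, gate_0=100.0, dt=1e-3, steps=20000):
    """Toy mass-action model of analog DNA addition.

    Two inputs X1, X2 each displace the output strand Y from an
    excess gate complex G:  X1 + G -> Y  and  X2 + G -> Y.
    Run to completion, [Y] approaches [X1]_0 + [X2]_0.
    """
    x1, x2, g, y = x1_0, x2_0, gate_0, 0.0
    for _ in range(steps):           # forward-Euler integration
        r1 = k * x1 * g * dt
        r2 = k * x2 * g * dt
        x1, x2 = x1 - r1, x2 - r2
        g -= r1 + r2
        y += r1 + r2
    return y

print(simulate_addition(2.0, 3.0))   # output concentration approaches 5.0
```

The point of the sketch is only that the "answer" is read out as a concentration at equilibrium, rather than as a digital bit pattern.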

To see how their circuit would perform over time as the reactions proceeded, Reif and Duke graduate student Tianqi Song used computer software to simulate the reactions over a range of input concentrations. They have also been testing the circuit experimentally in the lab.

Besides addition, subtraction and multiplication, the researchers are also designing more sophisticated analog DNA circuits that can do a wider range of calculations, such as logarithms and exponentials.

Conventional computers went digital decades ago. But for DNA computing, the analog approach has its advantages, the researchers say. For one, analog DNA circuits require fewer strands of DNA than digital ones, Song said.

Analog circuits are also better suited for sensing signals that don’t lend themselves to simple on-off, all-or-none values, such as vital signs and other physiological measurements involved in diagnosing and treating disease.

The hope is that, in the distant future, such devices could be programmed to sense whether particular blood chemicals lie inside or outside the range of values considered normal, and release a specific DNA or RNA -- DNA’s chemical cousin -- that has a drug-like effect.

Reif’s lab is also beginning to work on DNA-based devices that could detect molecular signatures of particular types of cancer cells, and release substances that spur the immune system to fight back.

"Even very simple DNA computing could still have huge impacts in medicine or science," Reif said.

This research was supported by grants from the National Science Foundation (CCF-1320360, CCF-1217457 and CCF-1617791).

Contacts and sources:
by Robin Smith
Duke University

Citation: "Analog Computation by DNA Strand Displacement Circuits," Tianqi Song, Sudhanshu Garg, Reem Mokhtar, Hieu Bui and John Reif. ACS Synthetic Biology, August 19, 2016. DOI:10.1021/acssynbio.6b00144.

3.18 Million Year Old Cold Case Solved: Human Ancestor Lucy Died Falling From Tree (Video)

Maybe she was pushed?

Lucy, the most famous fossil of a human ancestor, probably died after falling from a tree, according to a study appearing in Nature led by researchers at The University of Texas at Austin.

Lucy, a 3.18-million-year-old specimen of Australopithecus afarensis — or “southern ape of Afar” — is among the oldest, most complete skeletons of any adult, erect-walking human ancestor. Since her discovery in the Afar region of Ethiopia in 1974 by Arizona State University anthropologist Donald Johanson and graduate student Tom Gray, Lucy — a terrestrial biped — has been at the center of a vigorous debate about whether this ancient species also spent time in the trees.

Lucy, a 3.18 million year old fossil specimen of Australopithecus afarensis. 
Image provided by John Kappelman, UT Austin.

“It is ironic that the fossil at the center of a debate about the role of arborealism in human evolution likely died from injuries suffered from a fall out of a tree,” said lead author John Kappelman, a UT Austin anthropology and geological sciences professor.

UT Austin professor John Kappelman with 3D printouts of Lucy’s skeleton illustrating the compressive fractures in her right humerus that she suffered at the time of her death 3.18 million years ago.

Photo by Marsha Miller, UT Austin.

Kappelman first studied Lucy during her U.S. museum tour in 2008, when the fossil detoured to the High-Resolution X-ray Computed Tomography Facility (UTCT) in the UT Jackson School of Geosciences — a machine designed to scan through materials as solid as a rock and at a higher resolution than medical CT. For 10 days, Kappelman and geological sciences professor Richard Ketcham carefully scanned all of her 40-percent-complete skeleton to create a digital archive of more than 35,000 CT slices.

“Lucy is precious. There’s only one Lucy, and you want to study her as much as possible,” Ketcham said. “CT is nondestructive. So you can see what is inside, the internal details and arrangement of the internal bones.”

UT Austin professors John Kappelman and Richard Ketcham examine casts of Lucy while scanning the original fossil (background).

 Photo by Marsha Miller, UT Austin.

Studying Lucy and her scans, Kappelman noticed something unusual: The end of the right humerus was fractured in a manner not normally seen in fossils, preserving a series of sharp, clean breaks with tiny bone fragments and slivers still in place.

“This compressive fracture results when the hand hits the ground during a fall, impacting the elements of the shoulder against one another to create a unique signature on the humerus,” said Kappelman, who consulted Dr. Stephen Pearce, an orthopedic surgeon at Austin Bone and Joint Clinic, using a modern human-scale, 3-D printed model of Lucy.

Pearce confirmed: The injury was consistent with a four-part proximal humerus fracture, caused by a fall from considerable height when the conscious victim stretched out an arm in an attempt to break the fall.

UT Austin professor John Kappelman studies Lucy’s skeleton in the National Museum in Addis Ababa, Ethiopia

 Photo by Lawrence Todd.

Kappelman observed similar but less severe fractures at the left shoulder and other compressive fractures throughout Lucy’s skeleton including a pilon fracture of the right ankle, a fractured left knee and pelvis, and even more subtle evidence such as a fractured first rib — “a hallmark of severe trauma” — all consistent with fractures caused by a fall. Without any evidence of healing, Kappelman concluded the breaks occurred perimortem, or near the time of death.

The question remained: How could Lucy have achieved the height necessary to produce such a high velocity fall and forceful impact? Kappelman argued that because of her small size — about 3 feet 6 inches and 60 pounds — Lucy probably foraged and sought nightly refuge in trees.

In comparing her with chimpanzees, Kappelman suggested Lucy probably fell from a height of more than 40 feet, hitting the ground at more than 35 miles per hour. Based on the pattern of breaks, Kappelman hypothesized that she landed feet-first before bracing herself with her arms when falling forward, and “death followed swiftly.”
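The quoted figures are consistent with a back-of-envelope check: neglecting air resistance, an object falling from height h hits the ground at v = sqrt(2gh).

```python
import math

# Back-of-envelope check of the fall described above: impact speed from a
# height h under gravity alone, air resistance neglected.
def impact_speed_mph(height_feet):
    g = 9.81                      # gravitational acceleration, m/s^2
    h = height_feet * 0.3048      # feet -> meters
    v = math.sqrt(2 * g * h)      # impact speed, m/s
    return v * 2.23694            # m/s -> miles per hour

print(round(impact_speed_mph(40), 1))  # ~34.6 mph for a 40-foot fall
```

So a fall from "more than 40 feet" does indeed yield an impact speed approaching 35 miles per hour, matching Kappelman's estimate.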

UT Austin professor John Kappelman studies Lucy’s humerus in the National Museum in Addis Ababa, Ethiopia.

Photo by Sissi Janet Mattox.

“When the extent of Lucy’s multiple injuries first came into focus, her image popped into my mind’s eye, and I felt a jump of empathy across time and space,” Kappelman said. “Lucy was no longer simply a box of bones but in death became a real individual: a small, broken body lying helpless at the bottom of a tree.”

Kappelman conjectured that because Lucy was both terrestrial and arboreal, features that permitted her to move efficiently on the ground may have compromised her ability to climb trees, predisposing her species to more frequent falls. Using fracture patterns when present, future research may tell a more complete story of how ancient species lived and died.

In addition to the study, the Ethiopian National Museum provided access to a set of 3-D files of Lucy’s shoulder and knee for the public to download and print so that they can evaluate the hypothesis for themselves. “This is the first time 3-D files have been released for any Ethiopian fossil hominin, and the Ethiopian officials are to be commended,” Kappelman said. “Lucy is leading the charge for the open sharing of digital data.”

Other scholastic materials and the 3-D files are available online. Permissions to scan, study and photograph Lucy were granted by the Authority for Research and Conservation of Cultural Heritage and the National Museum of Ethiopia of the Ministry of Tourism and Culture. The UTCT was supported by three grants from the U.S. National Science Foundation.

Contacts and sources:
David Ochsner
The University of Texas at Austin

Tuesday, August 30, 2016

Like Herding Gnats: Theorists Solve a Long-Standing Fundamental Problem Involving Atoms

Trying to understand a system of atoms is like herding gnats - the individual atoms are never at rest and are constantly moving and interacting. When it comes to trying to model the properties and behavior of these kinds of systems, scientists use two fundamentally different pictures of reality, one of which is called "statistical" and the other "dynamical."

The two approaches have at times been at odds, but scientists from the U.S. Department of Energy's Argonne National Laboratory announced a way to reconcile the two pictures.

In the statistical approach, which scientists call statistical mechanics, a given system realizes all of its possible states, which means that the atoms explore every possible location and velocity for a given value of either energy or temperature. In statistical mechanics, scientists are not concerned with the order in which the states happen and are not concerned with how long they take to occur. Time is not a player.

Scientists model systems of constantly moving, interacting atoms with two fundamentally different pictures of reality, the "statistical" and the "dynamical"; Argonne scientists recently announced a way to reconcile the two.
Credit: Argonne National Laboratory

In contrast, the dynamical approach provides a detailed account of how and to what degree these states are explored over time. In dynamics, a system may not experience all of the states that are in principle available to it, because the energy may not be high enough to surmount the energy barriers or because of the time window being too short. "When a system cannot 'see' states beyond an energy barrier in dynamics, it's like a hiker being unable to see the next valley behind a mountain range," said Argonne theorist Julius Jellinek.

When choosing one approach over the other, scientists are forced to take a conceptual fork in the road, because the two approaches do not always agree. Under certain conditions - for example, at sufficiently high energies and long time scales - the statistical and the dynamical portraits of the physical world do in fact sync up. However, in many other cases statistical mechanics and dynamics yield pictures that differ markedly.

"When the two approaches disagree, the correct choice is dynamics because the states actually experienced by a system may depend on the energy, the initial state and on the window of time for observation or measurement," Jellinek said. However, not having the statistical picture is "kind of a loss," he added, because of the power of its tools and concepts to analyze and characterize the properties and behavior of systems.

The fundamental characteristic that lies at the foundation of all statistical mechanics is the "density of states," which is the total number of states a system can assume at a given energy. Knowledge of the density of states allows researchers to establish additional physical properties such as entropy, free energy and others, which form the powerful arsenal of statistical mechanical analysis and characterization tools. The accuracy of all these, however, hinges on the accuracy of the density of states.

The problem is that when it comes to the vibrational motion of systems, scientists had an exact solution for the density of states for only two idealized cases, which are sets of so-called harmonic or Morse oscillators. Though real systems are neither of the two, the ubiquitous practice was to use the harmonic approximation, which hinges on the assumption that real systems behave not too differently from harmonic ones.

This assumption is not bad at low energies, but it becomes inadequate as the energy is increased. Considerable effort has been invested over the last eight decades into attempts to provide a solution for systems that do not behave harmonically, Jellinek said, and until now, the result has been a multitude of approximate solutions, which are all limited to only weak departures from harmonicity or suffer from other limitations. A general and exact solution for vibrational density of states for systems with any degree of anharmonicity remained an unsolved problem.
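For the idealized harmonic case mentioned above, the exact density of states can be obtained by direct counting, most famously with the Beyer-Swinehart algorithm. The sketch below is a standard illustration of that harmonic-only counting, not the new anharmonic method of the Argonne work; the frequencies used are arbitrary.

```python
# Beyer-Swinehart direct count of harmonic vibrational states: the number
# of quantum states of a set of harmonic oscillators in each energy bin.
def beyer_swinehart(freqs, n_bins):
    """freqs: oscillator frequencies expressed in units of the energy grain."""
    counts = [0] * n_bins
    counts[0] = 1                      # the ground state
    for f in freqs:
        for e in range(f, n_bins):     # convolve this oscillator's ladder in
            counts[e] += counts[e - f]
    return counts

# Two identical oscillators: E quanta can be partitioned in E + 1 ways.
print(beyer_swinehart([1, 1], 5))  # [1, 2, 3, 4, 5]
```

The Argonne result goes beyond such idealized counts by extracting densities of states from the actual dynamics, at any degree of anharmonicity.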

In a major recent development, Jellinek, in collaboration with Darya Aleinikava, then an Argonne postdoc and now an assistant professor at Benedictine University, provided the missing solution. The methodology they formulated furnishes a general and exact solution for any system at any energy.

"This long-standing fundamental problem is finally solved," said Jellinek. "The solution will benefit many areas of physics, chemistry, materials science, nanoscience and biology."

The new solution also resolves another problem: it reconciles the statistical and dynamical pictures of the world even under those conditions in which they previously disagreed. Since the solution is based on following the actual dynamics of a system at relevant energies and time scales, the resulting densities of states are fully dynamically informed and may be sensitive to time. As such, these densities of states lay the foundation for new statistical mechanical frameworks that incorporate time and reflect the actual dynamical behavior of systems.

"This leads to a profound change in our view of the relationship between statistical mechanics and dynamics," said Jellinek. "It brings statistical mechanics into harmony with the dynamics irrespective of how specific or peculiar the dynamical behavior of a system may be."

A paper based on the research, "Anharmonic densities of states: A general dynamics-based solution," was published in the June 2 edition of The Journal of Chemical Physics.

The work was supported by the DOE Office of Science and the Alexander von Humboldt Foundation and made use of the National Energy Research Scientific Computing center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory.

Contacts and sources: 
Jared Sagoff
Argonne National Laboratory

Smarter Brains Are Blood-Thirsty Brains

A University of Adelaide-led project has overturned the theory that the evolution of human intelligence was driven simply by the size of the brain, showing instead that intelligence was more closely linked to the supply of blood to the brain.

The international collaboration between Australia and South Africa showed that the human brain evolved to become not only larger, but more energetically costly and blood thirsty than previously believed.

The research team calculated how blood flowing to the brain of human ancestors changed over time, using the size of two holes at the base of the skull that allow arteries to pass to the brain. The findings, published in the Royal Society journal Open Science, allowed the researchers to track the increase in human intelligence across evolutionary time.

These are skull casts from human evolution. Left to right: Australopithecus afarensis, Homo habilis, Homo ergaster, Homo erectus and Homo neanderthalensis.

Photo credit: Roger Seymour. Casts photographed in the South Australian Museum.

"Brain size has increased about 350% over human evolution, but we found that blood flow to the brain increased an amazing 600%," says project leader Professor Emeritus Roger Seymour, from the University of Adelaide. "We believe this is possibly related to the brain's need to satisfy increasingly energetic connections between nerve cells that allowed the evolution of complex thinking and learning.

"To allow our brain to be so intelligent, it must be constantly fed oxygen and nutrients from the blood.

"The more metabolically active the brain is, the more blood it requires, so the supply arteries are larger. The holes in fossil skulls are accurate gauges of arterial size."
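The quoted percentages imply that perfusion, blood flow per unit of brain, rose faster than the brain itself grew. The arithmetic below is purely illustrative: it reads "350%" and "600%" as multiplicative factors of 3.5x and 6x, and the cube-law relation between flow and arterial radius is a common simplifying assumption (constant wall shear stress), not a figure taken from the study.

```python
# Illustrative arithmetic: treat the quoted percentages as factors of
# 3.5x (brain size) and 6x (blood flow) over human evolution.
size_factor = 3.5
flow_factor = 6.0

# Blood flow per unit of brain volume rose faster than the brain grew.
perfusion_factor = flow_factor / size_factor
print(round(perfusion_factor, 2))  # ~1.71x more blood per unit of brain

# If flow scaled with the cube of arterial radius (an assumption here,
# not a result from the paper), a 6x flow needs only ~1.8x wider arteries.
radius_factor = flow_factor ** (1 / 3)
print(round(radius_factor, 2))  # ~1.82
```

This is why modest changes in the skull openings can record large changes in the brain's metabolic demand.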

The study was a new collaboration between the Cardiovascular Physiology team in the School of Biological Sciences at the University of Adelaide and the Brain Function Research Group and Evolutionary Studies Institute at the University of the Witwatersrand.

These are human skulls, showing the location of two openings for the internal carotid arteries that supply the cerebrum of the brain almost entirely. The sizes of these openings reveal the rate of blood flow, which is related to brain metabolic rate and cognitive ability.

Photo credit: Edward Snelling. Sourced from the Raymond Dart Collection of Human Skeletons, School of Anatomical Sciences, Faculty of Health Sciences, University of the Witwatersrand.

Co-author Dr Edward Snelling, University of the Witwatersrand, says: "Ancient fossil skulls from Africa reveal holes where the arteries supplying the brain passed through. The size of these holes show how blood flow increased from three million-year-old Australopithecus to modern humans. The intensity of brain activity was, before now, believed to have been taken to the grave with our ancestors."

Honours student and co-author Vanya Bosiocic had the opportunity to travel to South Africa and work with world renowned anthropologists on the oldest hominin skull collection, including the newly-discovered Homo naledi.

"Throughout evolution, the advance in our brain function appears to be related to the longer time it takes for us to grow out of childhood. It is also connected to family cooperation in hunting, defending territory and looking after our young," Ms Bosiocic says.

"The emergence of these traits seems to nicely follow the increase in the brain's need for blood and energy."

Contacts and sources:
Professor Roger Seymour, Project leader 
University of Adelaide

The Rise and Fall of Galaxy Formation

An international team of astronomers, including Carnegie’s Eric Persson, has charted the rise and fall of galaxies over 90 percent of cosmic history. Their work, which includes some of the most sensitive astronomical measurements made to date, is published by The Astrophysical Journal.

The FourStar Galaxy Evolution Survey (ZFOURGE) team has built a multicolored photo album of galaxies as they grow from their faint beginnings into mature and majestic giants. They did so by measuring distances and brightnesses for more than 70,000 galaxies spanning more than 12 billion years of cosmic time, revealing the breadth of galactic diversity.

 A movie version of this comparison between optical wavelengths and ZFOURGE   
Courtesy of Texas A&M University.

The team assembled the colorful photo album by using a new set of filters that are sensitive to infrared light and taking images with them with the FourStar camera at Carnegie's 6.5-meter Baade Telescope at Las Campanas Observatory in Chile. They took the images over a period of 45 nights. The team made a 3-D map by collecting light from over 70,000 galaxies, peering all the way into the distant universe, and by using this light to measure how far these galaxies are from our own Milky Way.

The deep 3-D map also revealed young galaxies that existed as early as 12.5 billion years ago (at less than 10 percent of the current universe age), only a handful of which had previously been found. This should help astronomers better understand the universe’s earliest days. 
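The "12.5 billion years ago" figure can be sanity-checked with a standard lookback-time integral. The sketch below assumes illustrative flat-LambdaCDM parameters (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_L = 0.7), not values taken from the paper.

```python
import math

# Rough flat-LambdaCDM lookback time: t = (1/H0) * integral of
# dz / ((1+z) * E(z)) from 0 to z, with E(z) = sqrt(Om*(1+z)^3 + OL).
def lookback_time_gyr(z, h0=70.0, om=0.3, ol=0.7, n=10000):
    hubble_time = 977.8 / h0          # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)
    dz = z / n
    total = 0.0
    for i in range(n):                # midpoint-rule numerical integration
        zi = (i + 0.5) * dz
        e = math.sqrt(om * (1 + zi) ** 3 + ol)
        total += dz / ((1 + zi) * e)
    return hubble_time * total

# A galaxy at redshift z ~ 4 is seen roughly 12 billion years in the past.
print(round(lookback_time_gyr(4.0), 1))
```

So the faint, infrared-selected galaxies at the survey's depth are indeed being observed when the universe was only a small fraction of its current age.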

A comparison of visualizing galaxies with and without ZFOURGE.
Credit: Texas A&M University. 

"Perhaps the most surprising result is that galaxies in the young universe appear as diverse as they are today,” when the universe is older and much more evolved, said lead author Caroline Straatman, a recent graduate of Leiden University. “The fact that we see young galaxies in the distant universe that have already shut down star formation is remarkable.”

But it’s not just about distant galaxies; the information gathered by ZFOURGE is also giving the scientists the best-yet view of what our own galaxy was like in its youth.

“Ten billion years ago, galaxies like our Milky Way were much smaller, but they were forming stars 30 times faster than they are today,” said Casey Papovich of Texas A&M University.

“ZFOURGE is providing us with a highly complete and reliable census of the evolving galaxy population, and is already helping us to address questions like: How did galaxies grow with time? When did they form their stars and develop into the spectacular structures that we see in the present-day universe?” added Ryan Quadri, also of Texas A&M.

In the study’s first images, the team found one of the earliest examples of a galaxy cluster, a so-called “galaxy city” made up of a dense concentration of galaxies, which formed when the universe was only three billion years old, as compared to the nearly 14 billion years it is today.

“The combination of FourStar, the special filters, Magellan, and the conditions at Las Campanas led to the detection of the cluster,” said Persson, who built the FourStar instrument at the Carnegie Observatories in Pasadena. “It was in a very well-studied region of the sky—‘hiding in plain sight.’”

The paper marks the completion of the ZFOURGE survey and the public release of its dataset.

This work was supported by the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, the National Science Foundation, the Australian Research Council, an Australian Research Council Future Fellowship, and a NASA Hubble Fellowship awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA. Australian access to the Magellan Telescopes was supported through the National Collaborative Research Infrastructure Strategy of the Australian Federal Government.

The ZFOURGE survey was conducted with the FourStar camera on the Magellan 6.5-meter telescope in Chile and further involved data collected by many of the world’s most powerful observatories, including the Hubble Space Telescope, the Very Large Telescope, the Spitzer Space Telescope, and the Herschel Space Observatory.

Contacts and sources: 
Eric Persson
Carnegie Institution for Science  

Anomalous Grooves on Martian Moon Phobos Explained

Some of the mysterious grooves on the surface of Mars' moon Phobos are the result of debris ejected by impacts eventually falling back onto the surface to form linear chains of craters, according to a new study.

One set of grooves on Phobos is thought to be stress fractures resulting from the tidal pull of Mars. The new study, published August 19 in Nature Communications, addresses another set of grooves that do not fit that explanation.

"These grooves cut across the tidal fields, so they require another mechanism. If we put the two together, we can explain most if not all of the grooves on Phobos," said first author Michael Nayak, a graduate student in Earth and planetary sciences at UC Santa Cruz.

In this spacecraft image of Phobos, red arrows indicate a chain of small craters whose origin researchers were able to trace back to a primary impact at the large crater known as Grildrig.

Credit: ESA/Mars Express, modified by Nayak & Asphaug

Phobos is an unusual satellite, orbiting closer to its planet than any other moon in the solar system, with an orbital period of just under 8 hours. Small and heavily cratered, with a lumpy nonspherical shape, it is only about 9,000 kilometers from the center of Mars (the distance from San Francisco to New York and back) and is slowly spiraling inward toward the planet. Phobos appears to have a weak interior structure covered by an elastic shell, allowing it to be deformed by tidal forces without breaking apart.

Coauthor Erik Asphaug, a planetary scientist at Arizona State University and professor emeritus at UC Santa Cruz, has been studying Phobos for many years. Recent computer simulations by him and NASA planetary scientist Terry Hurford showed how tidal stresses can cause fracturing and linear grooves in the surface layer. Although this idea was first proposed in the 1970s, the existence of so many grooves with the wrong orientation for such stress fractures had remained unexplained.

Nayak developed computer simulations showing how those anomalous grooves could result from impacts. Material ejected from the surface by an impact easily escapes the weak gravity of Phobos. But the debris remains in orbit around Mars, most of it moving either just slower or just faster than the orbital velocity of Phobos, and within a few orbits it gets recaptured and falls back onto the surface of the moon.
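The key point, that debris leaves Phobos easily but stays bound to Mars, follows from two very different velocity scales. The comparison below uses textbook values for Phobos and Mars, not numbers from the paper.

```python
import math

# Debris ejected from Phobos easily exceeds the tiny moon's escape speed,
# yet remains far below the speed of its orbit around Mars, so it stays
# in a nearby Mars orbit until Phobos sweeps it up again.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_PHOBOS = 1.066e16        # mass of Phobos, kg
R_PHOBOS = 11.1e3          # mean radius of Phobos, m
GM_MARS = 4.283e13         # gravitational parameter of Mars, m^3 s^-2
A_ORBIT = 9.376e6          # Phobos orbital radius from Mars's center, m

v_escape_phobos = math.sqrt(2 * G * M_PHOBOS / R_PHOBOS)   # ~11 m/s
v_orbit_mars = math.sqrt(GM_MARS / A_ORBIT)                # ~2.1 km/s

print(round(v_escape_phobos, 1), round(v_orbit_mars))
```

Ejecta moving at tens of meters per second relative to Phobos therefore ends up on an orbit around Mars only slightly faster or slower than the moon's own, which is why it is recaptured within a few orbits.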

Nayak's simulations enabled him to track in precise detail the fate of the ejected debris. He found that recaptured debris creates distinctive linear impact patterns that match the characteristics of the anomalous grooves and chains of craters that cut across the tidal stress fractures on Phobos.

"A lot of stuff gets kicked up, floats for a couple of orbits, and then gets recollected and falls back in a linear chain before it has a chance to be pulled apart and disassociated by Mars' gravity," Nayak said. "The controlling factor is where the impact occurs, and that determines where the debris falls back."

The researchers used their model to match a linear chain of small craters on Phobos to its primary source crater. They simulated an impact at the 2.6-kilometer crater called Grildrig, near the moon's north pole, and found that the pattern resulting from ejected debris falling back onto the surface in the model was a very close match to the actual crater chain observed on Phobos.

With its low mass and close orbit around Mars, Phobos is so unusual that it may be the only place in the solar system where this phenomenon occurs, Nayak said.

Contacts and sources:
Tim Stephens
UC Santa Cruz

How 'Planet Nine' Could Doom the Solar System: Dr Dimitri Veras

The solar system could be thrown into disaster when the sun dies if the mysterious 'Planet Nine' exists, according to research from the University of Warwick.

Dr Dimitri Veras in the Department of Physics has discovered that the presence of Planet Nine - the hypothetical planet which may exist in the outer Solar System - could cause the elimination of at least one of the giant planets after the sun dies, hurling them out into interstellar space through a sort of 'pinball' effect.

When the sun starts to die in around seven billion years, it will blow away half of its own mass and inflate itself -- swallowing the Earth -- before fading into an ember known as a white dwarf. This mass ejection will push Jupiter, Saturn, Uranus and Neptune out to what was assumed a safe distance.
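The "safe distance" follows from a standard result for slow (adiabatic) stellar mass loss: a planet's semi-major axis grows as a ∝ 1/M_star. The sketch below is that textbook scaling, not the paper's full simulation, and the orbital radii are present-day approximate values.

```python
# Adiabatic mass-loss scaling: for mass shed slowly, a planet's orbit
# expands as a ~ 1/M_star, so halving the Sun's mass roughly doubles
# every surviving orbit (textbook result, not the paper's simulation).
def expanded_orbit_au(a_initial_au, m_initial, m_final):
    return a_initial_au * m_initial / m_final

for name, a in [("Jupiter", 5.2), ("Saturn", 9.6), ("Neptune", 30.1)]:
    print(name, expanded_orbit_au(a, 1.0, 0.5))  # orbits double: 10.4, 19.2, 60.2 AU
```

A distant Planet Nine, however, could scatter off these widened orbits rather than simply expanding along with them, which is the "pinball" danger Dr. Veras identifies.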

Artist's impression showing Planet Nine causing other planets in the solar system to be hurled into interstellar space.

Credit: University of Warwick

However, Dr. Veras has discovered that the existence of Planet Nine could rewrite this happy ending. He found that Planet Nine might not be pushed out in the same way, and in fact might instead be thrust inward into a death dance with the solar system's four known giant planets -- most notably Uranus and Neptune. The most likely result is ejection from the solar system, forever.

Using a unique code that can simulate the death of planetary systems, Dr. Veras has mapped numerous different positions where a 'Planet Nine' could change the fate of the solar system. The further away and the more massive the planet is, the higher the chance that the solar system will experience a violent future.

This discovery could shed light on planetary architectures in different solar systems. Almost half of existing white dwarfs contain rock, a potential signature of the debris generated from a similarly calamitous fate in other systems with distant "Planet Nines" of their own.

In effect, the future death of our sun could explain the evolution of other planetary systems.

Dr. Veras explains the danger that Planet Nine could create: "The existence of a distant massive planet could fundamentally change the fate of the solar system. Uranus and Neptune in particular may no longer be safe from the death throes of the Sun. The fate of the solar system would depend on the mass and orbital properties of Planet Nine, if it exists."

"The future of the Sun may be foreshadowed by white dwarfs that are 'polluted' by rocky debris. Planet Nine could act as a catalyst for the pollution. The Sun's future identity as a white dwarf that could be 'polluted' by rocky debris may reflect current observations of other white dwarfs throughout the Milky Way," Dr Veras adds.

The paper 'The fates of solar system analogues with one additional distant planet' will be published in the Monthly Notices of the Royal Astronomical Society.

Contacts and sources:
Luke Walton
University of Warwick.

Solar Cycles and Climate Changes Measured by TOSCA Scientists

The Sun’s impact on our planet’s climate has recently been a hotly debated topic in the context of climate change. The controversy around this issue has led scientists across Europe to dig deeper into the claim that solar activity could be a major cause of global warming.

In the 1980s, research showed that the Sun’s radiation levels varied, which naturally invited the question – does solar variability affect our climate? Despite new evidence that solar variability does have a small impact, scattered scientific studies have not helped improve how the Sun’s variations were assessed.

In 2011, European researchers set up TOSCA, a COST-funded international network aiming to offer a better understanding of the Sun's effect on climate, against the backdrop of global warming. Over 100 specialists in solar physics, geomagnetism, climate modelling and atmospheric chemistry got together to explore this topic in a new way.

Previously, analyses of the Sun-Earth relationship focused on measuring the Sun's total solar irradiance, or variations in solar radiation. "It's like measuring the wealth of a country only by looking at its GDP," Dr Thierry Dudok de Wit (University of Orléans, France) points out. Climate studies have long focused on such mechanisms individually, which is why TOSCA opted for a global approach, bringing on board experts from different research communities.

“Our biggest achievement was changing the way we interacted, by looking at Earth-solar connections as a whole, not individually,” adds Dr Thierry Dudok de Wit, leading the Action.

The group set out to get a better idea of the physical and chemical mechanisms driving such variations, and how impactful they were. Understanding their mechanisms also helps paint a better picture of the link between solar variability and climate change.

By comparing recent measurements with results from new models, the network challenged the long-debated assumption that the Sun’s slight change in radiation could cause the Earth’s climate to change.

They found mechanisms by which solar variation can alter climate variability regionally, but none that would trigger global warming. On time scales longer than a century, the impact of solar variability on climate change is evident, but in the short run the effect of greenhouse gases has proven much more considerable.

However, there are still many questions behind the Sun-Earth connection, some of which TOSCA helped answer.

By examining the different phenomena defining the solar impact on climate in general, the team showed several subtle phenomena could have a significant impact, often locally. For instance, UV radiation amounts to a mere 7% of solar energy, but its variation produces changes in the stratosphere near the Equator, all the way to the polar regions, which govern climate. This means that winters in Europe would become wetter and milder or, on the contrary, drier and cooler, depending on the Sun's state.

They also found that streams of electrons and protons known as the solar wind, affecting the Earth’s global electric field, lead to changes in aerosol formation, which ultimately impact rainfall. These effects, largely ignored so far, will now be incorporated into several climate models in order to build a more complete picture. 

TOSCA is a European COST action linking scientists working on the influence of the Sun on the Earth’s climate. Based on present understanding, solar variability has a role in the observed climate change. This is a multidisciplinary topic of considerable scientific and societal importance. However, the mechanisms that link solar activity and climate change are not yet fully understood. TOSCA’s aim is to shed more light on the mechanisms involved.

The TOSCA handbook presents all the scientific facts behind the network’s findings. It also shows the network’s efforts to engage with a general audience by presenting the facts, which are now open to public scrutiny.

The Action also showcased the essential contribution of young researchers: "If I were to lead another COST Action, I would get even more early career researchers involved - it was bright, young minds who made the difference in our group", Dr Dudok de Wit added.

Dr Benjamin Laken had a leading role in one of TOSCA’s training schools: “I demonstrated the use of Python for data analytics, and also guided a small team of students through an independent research project. This helped expose the students – many for the first time – to critical tools and methods relevant to their development as researchers. TOSCA enabled me to identify the most pressing knowledge gaps, which I could personally contribute to, and see how to effectively communicate my findings back to an interdisciplinary community. Thanks to the network, I was able to grow as a researcher at a critical time in my career.”

Dr Dudok de Wit’s team at the International Space Science Institute in Bern, and the Coupled Model Intercomparison Project, have been using the datasets identified through the network to describe the Sun’s influence on climate from 1850 up to the present day, as well as a forecast up to the year 2300. The findings will shape the next report prepared by the Intergovernmental Panel on Climate Change. The panel is tasked with providing a scientific, objective view of climate change and its socio-economic effects.

Other projects spinning off from the network, such as SOLID and VarSITI, will continue research on the Sun’s terrestrial impact, placing European experts at the forefront of climate research.

Contacts and sources:

TOSCA handbook: Conclusions of the Study


A Billion Jupiter-Like Worlds in The Milky Way

Our galaxy is home to a bewildering variety of Jupiter-like worlds: hot ones, cold ones, giant versions of our own giant, pint-sized pretenders only half as big around.

Astronomers say that in our galaxy alone, a billion or more such Jupiter-like worlds could be orbiting stars other than our sun. And we can use them to gain a better understanding of our solar system and our galactic environment, including the prospects for finding life.

It turns out the inverse is also true -- we can turn our instruments and probes to our own backyard, and view Jupiter as if it were an exoplanet to learn more about those far-off worlds. The best-ever chance to do this is now, with Juno, a NASA probe the size of a basketball court, which arrived at Jupiter in July to begin a series of long, looping orbits around our solar system's largest planet. Juno is expected to capture the most detailed images of the gas giant ever seen. And with a suite of science instruments, Juno will plumb the secrets beneath Jupiter's roiling atmosphere.

Comparing Jupiter with Jupiter-like planets that orbit other stars can teach us about those distant worlds, and reveal new insights about our own solar system's formation and evolution. (Illustration)

Credits: NASA/JPL-Caltech

It will be a very long time, if ever, before scientists who study exoplanets -- planets orbiting other stars -- get the chance to watch an interstellar probe coast into orbit around an exo-Jupiter, dozens or hundreds of light-years away. But if they ever do, it's a safe bet the scene will summon echoes of Juno.

"The only way we're going to ever be able to understand what we see in those extrasolar planets is by actually understanding our system, our Jupiter itself," said David Ciardi, an astronomer with NASA's Exoplanet Science Institute (NExSci) at Caltech.

Not all Jupiters are created equal

Juno's detailed examination of Jupiter could provide insights into the history, and future, of our solar system. The tally of confirmed exoplanets so far includes hundreds in Jupiter's size-range, and many more that are larger or smaller.

Credit: NASA

The so-called hot Jupiters acquired their name for a reason: They are in tight orbits around their stars that make them sizzling-hot, completing a full revolution -- the planet's entire year -- in what would be a few days on Earth. And they're charbroiled along the way.
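How tight are those orbits? Kepler's third law ties a planet's orbital period to its distance from its star, so a sketch like the following (the function name is ours, purely for illustration) shows what a few-day year implies:

```python
def orbital_radius_au(period_days, star_mass_suns=1.0):
    """Kepler's third law: a^3 = M * P^2, with the semi-major
    axis a in AU, period P in years, and star mass M in suns."""
    period_years = period_days / 365.25
    return (star_mass_suns * period_years ** 2) ** (1.0 / 3.0)

# A "hot Jupiter" with a 3-day year orbits at roughly 0.04 AU,
# about a tenth of Mercury's distance from the sun.
print(round(orbital_radius_au(3.0), 3))
```

At a few hundredths of an astronomical unit, the star's glare dominates the sky, which is why these planets are "charbroiled."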

But why does our solar system lack a "hot Jupiter"? Or is this, perhaps, the fate awaiting our own Jupiter billions of years from now -- could it gradually spiral toward the sun, or might the swollen future sun expand to engulf it?

Not likely, Ciardi says; such planetary migrations probably occur early in the life of a solar system.

"In order for migration to occur, there needs to be dusty material within the system," he said. "Enough to produce drag. That phase of migration is long since over for our solar system."

Jupiter itself might already have migrated from farther out in the solar system, although no one really knows, he said.

Looking back in time

If Juno's measurements can help settle the question, they could take us a long way toward understanding Jupiter's influence on the formation of Earth -- and, by extension, the formation of other "Earths" that might be scattered among the stars.

"Juno is measuring water vapor in the Jovian atmosphere," said Elisa Quintana, a research scientist at the NASA Ames Research Center in Moffett Field, California. "This allows the mission to measure the abundance of oxygen on Jupiter. Oxygen is thought to be correlated with the initial position from which Jupiter originated."

If Jupiter's formation started with large chunks of ice in its present position, then it would have taken a lot of water ice to carry in the heavier elements which we find in Jupiter. But a Jupiter that formed farther out in the solar system, then migrated inward, could have formed from much colder ice, which would carry in the observed heavier elements with a smaller amount of water. If Jupiter formed more directly from the solar nebula, without ice chunks as a starter, then it should contain less water still. Measuring the water is a key step in understanding how and where Jupiter formed.

That's how Juno's microwave radiometer, which will measure water vapor, could reveal Jupiter's ancient history.

"If Juno detects a high abundance of oxygen, it could suggest that the planet formed farther out," Quintana said.

A probe dropped into Jupiter by NASA’s Galileo spacecraft in 1995 found high winds and turbulence, but the expected water seemed to be absent. Scientists think Galileo's one-shot probe just happened to drop into a dry area of the atmosphere, but Juno will survey the entire planet from orbit.

The chaotic early years

Where Jupiter formed, and when, also could answer questions about the solar system's "giant impact phase," a time of crashes and collisions among early planet-forming bodies that eventually led to the solar system we have today.

Our solar system was extremely accident-prone in its early history -- perhaps not quite like billiard balls caroming around, but with plenty of pileups and fender-benders.

"It definitely was a violent time," Quintana said. "There were collisions going on for tens of millions of years. For example, the idea of how the moon formed is that a proto-Earth and another body collided; the disk of debris from this collision formed the moon. And some people think Mercury, because it has such a huge iron core, was hit by something big that stripped off its mantle; it was left with a large core in proportion to its size."

Part of Quintana's research involves computer modeling of the formation of planets and solar systems. Teasing out Jupiter's structure and composition could greatly enhance such models, she said. Quintana already has modeled our solar system's formation, with Jupiter and without, yielding some surprising findings.

Credit: NASA

"For a long time, people thought Jupiter was essential to habitability because it might have shielded Earth from the constant influx of impacts [during the solar system's early days] which could have been damaging to habitability," she said. "What we've found in our simulations is that it's almost the opposite. When you add Jupiter, the accretion times are faster and the impacts onto Earth are far more energetic. Planets formed within about 100 million years; the solar system was done growing by that point," Quintana said.

"If you take Jupiter out, you still form Earth, but on timescales of billions of years rather than hundreds of millions. Earth still receives giant impacts, but they're less frequent and have lower impact energies," she said.

Getting to the core

Another critical Juno measurement that could shed new light on the dark history of planetary formation is the mission's gravity science experiment. Changes in the frequency of radio transmissions from Juno to NASA's Deep Space Network will help map the giant planet's gravitational field.
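The underlying relation is simple: a line-of-sight velocity change shifts the round-trip radio frequency by a fraction 2v/c. As a rough illustration (the function name is ours, and we assume an X-band link near 8.4 GHz, a common Deep Space Network frequency):

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_doppler_shift_hz(f_transmit_hz, line_of_sight_velocity_mps):
    """Two-way (round-trip) Doppler shift: df = 2 * v * f / c.
    An unmodeled acceleration from a gravity anomaly shows up
    as a drift in this shift over time."""
    return 2.0 * line_of_sight_velocity_mps * f_transmit_hz / C

# At ~8.4 GHz, a velocity change of just 1 mm/s shifts the
# received frequency by roughly 0.056 Hz.
print(round(two_way_doppler_shift_hz(8.4e9, 1e-3), 3))
```

Tracking such sub-hertz drifts as the spacecraft swings past the planet is what lets mission scientists map fine structure in the gravitational field.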

Knowing the nature of Jupiter's core could reveal how quickly the planet formed, with implications for how Jupiter might have affected Earth's formation.

Jupiter Infra-red Glow

Credit: NASA

And the spacecraft's magnetometers could yield more insight into the deep internal structure of Jupiter by measuring its magnetic field.

"We don't understand a lot about Jupiter's magnetic field," Ciardi said. "We think it's produced by metallic hydrogen in the deep interior. Jupiter has an incredibly strong magnetic field, much stronger than Earth's."

Mapping Jupiter's magnetic field also might help pin down the plausibility of proposed scenarios for alien life beyond our solar system.

Earth's magnetic field is thought to be important to life because it acts like a protective shield, channeling potentially harmful charged particles and cosmic rays away from the surface.

"If a Jupiter-like planet orbits its star at a distance where liquid water could exist, the Jupiter-like planet itself might not have life, but it might have moons which could potentially harbor life," he said.

An exo-Jupiter’s intense magnetic field could protect such life forms, he said. That conjures visions of Pandora, the moon in the movie "Avatar" inhabited by 10-foot-tall humanoids who ride massive, flying predators through an exotic alien ecosystem.

Juno's findings will be important not only to understanding how exo-Jupiters might influence the formation of exo-Earths, or other kinds of habitable planets. They'll also be essential to the next generation of space telescopes that will hunt for alien worlds. The Transiting Exoplanet Survey Satellite (TESS) will conduct a survey of nearby bright stars for exoplanets beginning in June 2018, or earlier. The James Webb Space Telescope, expected to launch in 2018, and WFIRST (Wide-Field Infrared Survey Telescope), with launch anticipated in the mid-2020s, will attempt to take direct images of giant planets orbiting other stars.

"We're going to be able to image planets and get spectra," or light profiles from exoplanets that will reveal atmospheric gases, Ciardi said. Juno's revelations about Jupiter will help scientists to make sense of these data from distant worlds.

"Studying our solar system is about studying exoplanets," he said. "And studying exoplanets is about studying our solar system. They go together."

Contacts and sources:
Preston Dyches
Jet Propulsion Laboratory, Pasadena, Calif.

Written by Pat Brennan
NASA Exoplanet Program


Peculiar Age-Defying Star Probed

For years, astronomers have puzzled over a massive star lodged deep in the Milky Way that shows conflicting signs of being extremely old and extremely young.

An age-defying star designated as IRAS 19312+1950 (arrow) exhibits features characteristic of a very young star and a very old star. The object stands out as extremely bright inside a large, chemically rich cloud of material, as shown in this image from NASA’s Spitzer Space Telescope. 

A NASA-led team of scientists thinks the star – which is about 10 times as massive as our sun and emits about 20,000 times as much energy – is a newly forming protostar. That was a big surprise because the region had not been known as a stellar nursery before. But a nearby interstellar bubble, which signals a recently formed massive star, also supports this idea.

IRAS 19312+1950 

Credits: NASA/JPL-Caltech


Researchers initially classified the star as elderly, perhaps a red supergiant. But a new study by a NASA-led team of researchers suggests that the object, labeled IRAS 19312+1950, might be something quite different – a protostar, a star still in the making.

“Astronomers recognized this object as noteworthy around the year 2000 and have been trying ever since to decide how far along its development is,” said Martin Cordiner, an astrochemist working at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. He is the lead author of a paper in the Astrophysical Journal describing the team’s findings, from observations made using NASA’s Spitzer Space Telescope and ESA’s Herschel Space Observatory.

Located more than 12,000 light-years from Earth, the object first stood out as peculiar when it was observed at particular radio frequencies. Several teams of astronomers studied it using ground-based telescopes and concluded that it is an oxygen-rich star about 10 times as massive as the sun. The question was: What kind of star?

Some researchers favor the idea that the star is evolved – past the peak of its life cycle and on the decline. For most of their lives, stars obtain their energy by fusing hydrogen in their cores, as the sun does now. But older stars have used up most of their hydrogen and must rely on heavier fuels that don't last as long, leading to rapid deterioration.

IRAS 19312+1950
Image Credit: NASA/JPL-Caltech

Two early clues – intense radio sources called masers – suggested the star was old. In astronomy, masers occur when the molecules in certain kinds of gases get revved up and emit a lot of radiation over a very limited range of frequencies. The result is a powerful radio beacon – the microwave equivalent of a laser.

One maser observed with IRAS 19312+1950 is almost exclusively associated with late-stage stars. This is the silicon oxide maser, produced by molecules made of one silicon atom and one oxygen atom. Researchers don’t know why this maser is nearly always restricted to elderly stars, but of thousands of known silicon oxide masers, only a few exceptions to this rule have been noted.

Also spotted with the star was a hydroxyl maser, produced by molecules comprised of one oxygen atom and one hydrogen atom. Hydroxyl masers can occur in various kinds of astronomical objects, but when one occurs with an elderly star, the radio signal has a distinctive pattern – it’s especially strong at a frequency of 1612 megahertz. That’s the pattern researchers found in this case.

Even so, the object didn’t entirely fit with evolved stars. Especially puzzling was the smorgasbord of chemicals found in the large cloud of material surrounding the star. A chemical-rich cloud like this is typical of the regions where new stars are born, but no such stellar nursery had been identified near this star.

Scientists initially proposed that the object was an old star surrounded by a surprising cloud typical of the kind that usually accompanies young stars. Another idea was that the observations might somehow be capturing two objects: a very old star and an embryonic cloud of star-making material in the same field.

Cordiner and his colleagues began to reconsider the object, conducting observations using ESA’s Herschel Space Observatory and analyzing data gathered earlier with NASA’s Spitzer Space Telescope. Both telescopes operate at infrared wavelengths, which gave the team new insight into the gases, dust and ices in the cloud surrounding the star.

The additional information leads Cordiner and colleagues to think the star is in a very early stage of formation. The object is much brighter than it first appeared, they say, emitting about 20,000 times the energy of our sun. The team found large quantities of ices made from water and carbon dioxide in the cloud around the object. These ices sit on dust grains relatively close to the star, and all this dust and ice blocks out starlight, making the star seem dimmer than it really is.

In addition, the dense cloud around the object appears to be collapsing, which happens when a growing star pulls in material. In contrast, the material around an evolved star is expanding and is in the process of escaping to the interstellar medium. The entire envelope of material has an estimated mass of 500 to 700 suns, which is much more than could have been produced by an elderly or dying star.

“We think the star is probably in an embryonic stage, getting near the end of its accretion stage – the period when it pulls in new material to fuel its growth,” said Cordiner.

Also supporting the idea of a young star are the very fast wind speeds measured in two jets of gas streaming away from opposite poles of the star. Such jets of material, known as a bipolar outflow, can be seen emanating from young or old stars. However, fast, narrowly focused jets are rarely observed in evolved stars. In this case, the team measured winds at the breakneck speed of at least 200,000 miles per hour (90 kilometers per second) – a common characteristic of a protostar.
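The two quoted figures are consistent: a quick unit conversion (the function name is ours) confirms that 90 kilometers per second works out to just over 200,000 miles per hour.

```python
def kms_to_mph(speed_km_s):
    """Convert a speed from kilometers per second to miles per hour."""
    KM_PER_MILE = 1.609344
    return speed_km_s / KM_PER_MILE * 3600.0

# 90 km/s is roughly 201,000 mph, matching the outflow speed
# quoted for the bipolar jets of IRAS 19312+1950.
print(round(kms_to_mph(90.0)))
```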

Still, the researchers acknowledge that the object is not a typical protostar. For reasons they can’t explain yet, the star has spectacular features of both a very young and a very old star.

“No matter how one looks at this object, it’s fascinating, and it has something new to tell us about the life cycles of stars,” said Steven Charnley, a Goddard astrochemist and co-author of the paper.

NASA's Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission, whose science operations are conducted at the Spitzer Science Center. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado.

Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA.

Contacts and sources:
Elizabeth Landau
Jet Propulsion Laboratory,
