Unseen Is Free


Wednesday, August 31, 2016

Businesses Spent $341 Billion on R&D Performed in US in 2014


Businesses spent $341 billion on research and development (R&D) performed in the United States in 2014, a 5.6 percent increase over the previous year, according to a new report from the National Center for Science and Engineering Statistics (NCSES).

Development accounted for the greatest share, 78 percent, of 2014 R&D spending. Applied research accounted for 16 percent, while basic research accounted for 6 percent. The NCSES InfoBrief focuses on business-sector R&D spending. Other sectors, including higher education and federally funded research and development centers (FFRDCs), also contribute to total U.S. R&D spending.

Development accounted for the greatest share of business R&D performance in 2014.

Credit: NSF

Funding from companies' own sources rose by 6.7 percent from 2013 to 2014, totaling $283 billion. Funding from other sources totaled $58 billion. The federal government was the largest of those other sources, accounting for $27 billion, $19 billion of which came from the Department of Defense. Of the federal funding, 92 percent went toward aerospace products and parts; professional, scientific and technical services; and computer and electronic products.

Small- and medium-sized companies performed 16 percent of the nation's business R&D in 2014, while companies with 500 to 24,999 domestic employees performed 48 percent. Companies with 25,000 or more employees accounted for the remaining 36 percent. Businesses that performed or funded R&D employed 21.5 million people in the U.S., 1.5 million of whom were R&D employees.
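As a quick sanity check on the figures above, here is a minimal Python sketch that confirms the quoted funding sources and shares add up; every number in it is taken directly from the report as cited in this article.

```python
# Consistency check on the NCSES figures quoted above (business R&D performed in the US, 2014).
total_rd = 341e9       # total business R&D
own_funds = 283e9      # funded from companies' own sources
other_funds = 58e9     # funded from other sources (federal government and others)

assert abs((own_funds + other_funds) - total_rd) < 1e9   # 283 + 58 = 341 (billion dollars)

# Character-of-work shares of the $341 billion
shares = {"development": 0.78, "applied research": 0.16, "basic research": 0.06}
print({k: round(v * total_rd / 1e9) for k, v in shares.items()})
# -> roughly {'development': 266, 'applied research': 55, 'basic research': 20} billion dollars

# Company-size shares of R&D performance
size_shares = {"small and medium": 0.16, "500-24,999 employees": 0.48, "25,000 or more": 0.36}
assert abs(sum(size_shares.values()) - 1.0) < 1e-9
```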


Credit: NSF

Business R&D is concentrated in a relatively small number of states. California alone accounted for 30 percent of the $283 billion in R&D funded by companies' own sources in 2014. Other states with high amounts in the business R&D category were: Massachusetts (6 percent), Michigan (5 percent), Washington (5 percent), Texas (5 percent), Illinois (4 percent), New Jersey (4 percent), New York (4 percent), and Pennsylvania (3 percent).

Companies that performed R&D in the United States in 2014 spent $638 billion on assets with expected useful lives of more than 1 year. Of this amount, $28 billion (4.4 percent) was spent on structures, equipment, software, and other assets used for R&D: $17 billion by manufacturers and $10 billion by companies in nonmanufacturing industries. 

Manufacturing industry groups with high levels of capital expenditures on assets used for R&D in 2014 were semiconductor and other electronic products (NAICS 3344) ($3.5 billion), pharmaceuticals and medicines (NAICS 3254) ($2.8 billion), automobiles, bodies, trailers, and parts (NAICS 3361–63) ($1.2 billion), and aerospace products and parts (NAICS 3364) ($1.2 billion). Among the nonmanufacturing industries were software publishers (NAICS 5112) ($1.8 billion), telecommunications services (NAICS 517) ($1.5 billion), and computer systems design and related services (NAICS 5415) ($1.2 billion).

For more information, including R&D performance numbers for all states and a breakdown of spending by different business sectors, read the full InfoBrief.




Contacts and sources:
Rob Margetta
NSF

Cyclops Beetles' Solution to the Chicken-and-Egg Conundrum: Genetic-Level Answers

Beetles with cyclops eyes have given Indiana University scientists insight into how new traits may evolve through the recruitment of existing genes -- even if these genes are already carrying out critical functions.

The study, reported in the Proceedings of the Royal Society B, was led by Eduardo Zattara, a postdoctoral researcher in the IU Bloomington College of Arts and Sciences' Department of Biology. It was published in tandem with another study led by Hannah Busey, an undergraduate student researcher at IU Bloomington and 2016 Goldwater fellow, which appeared in the Journal of Experimental Zoology.

The discovery was made after switching off orthodenticle genes in horned beetles of the genus Onthophagus, also known as dung beetles. Knocking out these genes caused drastic changes in the insects' head structure, including the loss of horns -- a recently evolved structure used for male combat over access to females -- as well as the growth of compound eyes in a completely unexpected place: the top center of the head.

The results were specific to Onthophagus; the same changes did not produce the same effects in Tribolium, or flour beetles, which do not have horns.

Heads of horned and cyclopic beetles of the genus Onthophagus. After knocking out the gene otd1, the cyclopic beetle (right) lost the horn but gained a pair of small compound eyes in the center of the head. 
Photo by Eduardo Zattara

"We were amazed that shutting down a gene could not only turn off development of horns and major regions of the head, but also turn on the development of very complex structures such as compound eyes in a new location," Zattara said. "The fact that this doesn’t happen in Tribolium is equally significant, as it suggests that orthodenticle genes have acquired a new function: to direct head and horn formation only in the highly modified head of horned beetles."

The use of Onthophagus as a model system for the evolution of novel traits has been pioneered by Armin Moczek, professor in the IU Bloomington Department of Biology, who is senior author on the papers. Work on Tribolium was conducted by David Linz and Yoshi Tomoyasu at Miami University.

Beetle embryos hatch as larvae, which grow and metamorphose into adult beetles. Many genes crucial to making the head of larvae during embryonic development are known from studies in Tribolium, but whether they were involved in making adult heads during metamorphosis was largely unknown.

In her study, Busey removed small patches of skin from the heads of larval Onthophagus and then traced where the adult heads were missing tissue.

"Using this microsurgical technique, we created a map showing which region of the larval head made each part of the adult head," she said. "This allowed us to apply knowledge about Tribolium embryonic development to Onthophagus, because even though adult heads are very different between horned and flour beetles, the larval heads are quite similar."

Zattara's study used these results to select genes needed by embryos to build larval heads and switched them off to test whether they had any roles in building the head of adults.

Eduardo Zattara
Photo by Indiana University

Among the genes they selected was orthodenticle, or otd, which contributes to head development in animals ranging from simple invertebrates to complex mammals. If otd is deleted, most animal embryos will not develop a head or brain. Similarly, beetle embryos need otd to properly develop heads, but no larval or adult function was known.

But when Zattara and colleagues switched off otd genes in the larvae of two species of Onthophagus, they found otd had acquired a new function: reorganizing the head during metamorphosis, integrating the horns in the process.

They also found that switching off these genes shrank or eliminated the beetles' horns and associated head regions and, strikingly, induced development of "cyclopic" compound eyes at the top center of the head, where they aren't normally found in insects.

Although the same manipulations in Tribolium flour beetles did not affect head development or grow extra eyes, the IU scientists were surprised to find that otd genes were still expressed in the same head regions of larval and adult Tribolium as in Onthophagus.

The results suggest that the lingering expression of genes in specific tissues or life stages where they no longer have a function may comprise a "stepping stone" in recruiting those genes into making new traits.

“These studies provide a solution to an important 'chicken-and-egg problem' of modern evolutionary developmental biology," Zattara said. “For a gene to carry out a new function, it needs to find a way to be activated at the right time and location. But it is hard to come up with a good reason why a gene would become active in a new context without already carrying out some important function."

"Here we have a situation where a gene is already in the right place -- the head -- just not at the right time -- the embryo instead of the adult," Moczek added. "By allowing the gene's availability to linger into later stages of development, it becomes easier to envision how it could then be eventually captured by evolution and used for a new function, such as the positioning of horns."

Hannah Busey
Photo by Indiana University

These studies were supported in part by the National Science Foundation.


Contacts and sources:
Kevin Fryling 
Indiana University

Microchip Design Senses Sabotage, Detects Malicious Circuitry in Hardware, Spots Built-in Trojans

With the outsourcing of microchip design and fabrication a worldwide, $350 billion business, bad actors along the supply chain have many opportunities to install malicious circuitry in chips. These “Trojan horses” look harmless but can allow attackers to sabotage healthcare devices; public infrastructure; and financial, military, or government electronics.

Siddharth Garg, an assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, and fellow researchers are developing a unique solution: a chip with both an embedded module that proves that its calculations are correct and an external module that validates the first module’s proofs.

While software viruses are easy to spot and fix with downloadable patches, deliberately inserted hardware defects are invisible and act surreptitiously. For example, a secretly inserted “back door” function could allow attackers to alter or take over a device or system at a specific time. Garg’s configuration, an example of an approach called “verifiable computing” (VC), keeps tabs on a chip’s performance and can spot telltale signs of Trojans.

The ability to verify has become vital in an electronics age without trust: Gone are the days when a company could design, prototype, and manufacture its own chips. Manufacturing costs are now so high that designs are sent to offshore foundries, where security cannot always be assured.

But under the system proposed by Garg and his colleagues, the verifying processor can be fabricated separately from the chip. “Employing an external verification unit made by a trusted fabricator means that I can go to an untrusted foundry to produce a chip that has not only the circuitry performing computations, but also a module that presents proofs of correctness,” said Garg.

The chip designer then turns to a trusted foundry to build a separate, less complex module: an ASIC (application-specific integrated circuit), whose sole job is to validate the proofs of correctness generated by the internal module of the untrusted chip.

A chip designed to flag malicious circuitry
Credit: NYU Tandon School of Engineering


Garg said that this arrangement provides a safety net for the chip maker and the end user. “Under the current system, I can get a chip back from a foundry with an embedded Trojan. It might not show up during post-fabrication testing, so I’ll send it to the customer,” said Garg. “But two years down the line it could begin misbehaving. The nice thing about our solution is that I don’t have to trust the chip because every time I give it a new input, it produces the output and the proofs of correctness, and the external module lets me continuously validate those proofs.”
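The article does not spell out the proof protocol itself, so the following is only a conceptual Python sketch of the division of labor it describes: an untrusted "prover" chip returns a result together with a proof, and a small trusted "verifier" checks that proof for every input. Real verifiable-computation systems, including the Verifiable ASICs design discussed here, use far more sophisticated cryptographic proofs; in this toy the "proof" is just a redundant trace the verifier can spot-check cheaply, which illustrates the architecture rather than the actual mathematics.

```python
import random

def untrusted_prover(xs):
    """Untrusted chip: computes a sum of squares and emits a step-by-step
    trace of partial sums as its 'proof of correctness' (toy illustration)."""
    trace, acc = [], 0
    for x in xs:
        acc += x * x
        trace.append(acc)
    return acc, trace                      # (claimed output, proof)

def trusted_verifier(xs, claimed, trace, checks=8):
    """Small trusted module: spot-checks random steps of the trace instead of
    redoing the whole computation."""
    if len(trace) != len(xs) or trace[-1] != claimed:
        return False
    for i in random.sample(range(len(xs)), min(checks, len(xs))):
        prev = trace[i - 1] if i > 0 else 0
        if trace[i] != prev + xs[i] * xs[i]:
            return False
    return True

xs = list(range(1, 101))
y, proof = untrusted_prover(xs)
print(trusted_verifier(xs, y, proof))      # True for an honest chip

proof[40] += 1                             # a hidden flaw quietly corrupts one step
print(trusted_verifier(xs, y, proof))      # False whenever a corrupted step is among the sampled checks
```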

An added advantage is that the chip built by the untrusted foundry is smaller, faster, and more power-efficient than the trusted ASIC, sometimes by orders of magnitude. The VC setup can therefore potentially reduce the time, energy, and chip area needed to generate proofs.

“For certain types of computations, it can even outperform the alternative: performing the computation directly on a trusted chip,” Garg said.

Siddharth Garg, assistant professor of electrical and computer engineering
Credit: NYU Tandon School of Engineering

The researchers next plan to investigate techniques to reduce both the overhead that generating and verifying proofs imposes on a system and the bandwidth required between the prover and verifier chips. “And because with hardware, the proof is always in the pudding, we plan to prototype our ideas with real silicon chips,” said Garg.

To pursue the promise of verifiable ASICs, Garg, abhi shelat* of the University of Virginia, Rosario Gennaro of the City University of New York, Mariana Raykova of Yale University, and Michael Taylor of the University of California, San Diego, will share a five-year National Science Foundation Large Grant of $3 million.  *abhi shelat prefers the lower-case spelling of his name



Contacts and sources:
Siddharth Garg
NYU Tandon School of Engineering

Citation: "Verifiable ASICs," by Riad S. Wahby of Stanford University, Max Howald of The Cooper Union, Garg, shelat, and Michael Walfish of the NYU Courant Institute of Mathematical Sciences, earned a Distinguished Student Paper Award at the IEEE Symposium on Security and Privacy, one of the leading global conferences for computer security research, held in May in Oakland, California. The authors were supported by grants from the NSF, the Air Force Office of Scientific Research, the Office of Naval Research, a Microsoft Faculty Fellowship, and a Google Faculty Research Award.

Synthetic Life Does Math in a Test Tube, Are DNA Computers Next?

Often described as the blueprint of life, DNA contains the instructions for making every living thing from a human to a house fly.

But in recent decades, some researchers have been putting the letters of the genetic code to a different use: making tiny nanoscale computers.

In a new study, a Duke University team led by professor John Reif created strands of synthetic DNA that, when mixed together in a test tube in the right concentrations, form an analog circuit that can add, subtract and multiply as they form and break bonds.

Duke graduate student Tianqi Song and computer science professor John Reif have created an analog DNA circuit that can add, subtract and multiply as the molecules form and break bonds. 
Photo by John Joyner.

Rather than voltage, DNA circuits use the concentrations of specific DNA strands as signals.

Other teams have designed DNA-based circuits that can solve problems ranging from calculating square roots to playing tic-tac-toe. But most DNA circuits are digital, where information is encoded as a sequence of zeroes and ones.

Instead, the new Duke device performs calculations in an analog fashion by measuring the varying concentrations of specific DNA molecules directly, without requiring special circuitry to convert them to zeroes and ones first.

The researchers describe their approach in the August issue of the journal ACS Synthetic Biology.

Unlike the silicon-based circuits used in most modern-day electronics, DNA circuits are still a long way from commercial application, Reif said.

For one, the test tube calculations are slow. It can take hours to get an answer.

“We can do some limited computing, but we can’t even begin to think of competing with modern-day PCs or other conventional computing devices,” Reif said.

But DNA circuits can be far tinier than those made of silicon. And unlike electronic circuits, DNA circuits work in wet environments, which might make them useful for computing inside the bloodstream or the soupy, cramped quarters of the cell.

The technology takes advantage of DNA’s natural ability to zip and unzip to perform computations. Just like Velcro and magnets have complementary hooks or poles, the nucleotide bases of DNA pair up and bind in a predictable way.

The researchers first create short pieces of synthetic DNA, some single-stranded and some double-stranded with single-stranded ends, and mix them in a test tube.

When a single strand encounters a perfect match at the end of one of the partially double-stranded ones, it latches on and binds, displacing the previously bound strand and causing it to detach, like someone cutting in on a dancing couple.

The newly released strand can in turn pair up with other complementary DNA molecules downstream in the circuit, creating a domino effect.

The researchers solve math problems by measuring the concentrations of specific outgoing strands as the reaction reaches equilibrium.
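The article gives only a qualitative picture of the circuit, so the Python sketch below is a toy abstraction rather than the published strand-displacement design: it simply treats each signal as a concentration (a non-negative number) and models an idealized "adder" gate in which incoming strands each displace one output strand, so the output concentration at equilibrium equals the sum of the inputs.

```python
def displacement_gate(inputs, gate_concentration):
    """Toy model of an analog DNA gate: each input strand displaces one output
    strand from a double-stranded gate complex, so the released output
    concentration is the sum of the inputs, capped by the amount of gate
    complex available (units are arbitrary, e.g. nanomolar)."""
    released = sum(inputs)
    return min(released, gate_concentration)

# 'Add' two signals encoded as strand concentrations
a, b = 12.5, 30.0
total = displacement_gate([a, b], gate_concentration=100.0)
print(total)                  # 42.5 units of output strand at idealized equilibrium

# Subtraction can be sketched the same way, with a 'sink' complex that consumes
# output strands in proportion to a third input; concentrations cannot go negative.
def subtract(x, y):
    return max(x - y, 0.0)

print(subtract(total, 10.0))  # 32.5
```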

To see how their circuit would perform over time as the reactions proceeded, Reif and Duke graduate student Tianqi Song used computer software to simulate the reactions over a range of input concentrations. They have also been testing the circuit experimentally in the lab.

Besides addition, subtraction and multiplication, the researchers are also designing more sophisticated analog DNA circuits that can do a wider range of calculations, such as logarithms and exponentials.

Conventional computers went digital decades ago. But for DNA computing, the analog approach has its advantages, the researchers say. For one, analog DNA circuits require fewer strands of DNA than digital ones, Song said.

Analog circuits are also better suited for sensing signals that don’t lend themselves to simple on-off, all-or-none values, such as vital signs and other physiological measurements involved in diagnosing and treating disease.

The hope is that, in the distant future, such devices could be programmed to sense whether particular blood chemicals lie inside or outside the range of values considered normal, and release a specific DNA or RNA -- DNA’s chemical cousin -- that has a drug-like effect.

Reif’s lab is also beginning to work on DNA-based devices that could detect molecular signatures of particular types of cancer cells, and release substances that spur the immune system to fight back.

“Even very simple DNA computing could still have huge impacts in medicine or science,” Reif said.

This research was supported by grants from the National Science Foundation (CCF-1320360, CCF-1217457 and CCF-1617791).



Contacts and sources:
by Robin Smith
Duke University


Citation: "Analog Computation by DNA Strand Displacement Circuits," Tianqi Song, Sudhanshu Garg, Reem Mokhtar, Hieu Bui and John Reif. ACS Synthetic Biology, August 19, 2016. DOI:10.1021/acssynbio.6b00144.

3.18 Million Year Old Cold Case Solved: Human Ancestor Lucy Died Falling From Tree (Video)

Maybe she was pushed?

Lucy, the most famous fossil of a human ancestor, probably died after falling from a tree, according to a study appearing in Nature led by researchers at The University of Texas at Austin.



Lucy, a 3.18-million-year-old specimen of Australopithecus afarensis — or “southern ape of Afar” — is among the oldest, most complete skeletons of any adult, erect-walking human ancestor. Since her discovery in the Afar region of Ethiopia in 1974 by Arizona State University anthropologist Donald Johanson and graduate student Tom Gray, Lucy — a terrestrial biped — has been at the center of a vigorous debate about whether this ancient species also spent time in the trees.

Lucy, a 3.18 million year old fossil specimen of Australopithecus afarensis. 
Image provided by John Kappelman, UT Austin.

“It is ironic that the fossil at the center of a debate about the role of arborealism in human evolution likely died from injuries suffered from a fall out of a tree,” said lead author John Kappelman, a UT Austin anthropology and geological sciences professor.

UT Austin professor John Kappelman with 3D printouts of Lucy’s skeleton illustrating the compressive fractures in her right humerus that she suffered at the time of her death 3.18 million years ago.

Photo by Marsha Miller, UT Austin.

Kappelman first studied Lucy during her U.S. museum tour in 2008, when the fossil detoured to the High-Resolution X-ray Computed Tomography Facility (UTCT) in the UT Jackson School of Geosciences — a machine designed to scan through materials as solid as a rock and at a higher resolution than medical CT. For 10 days, Kappelman and geological sciences professor Richard Ketcham carefully scanned all of her 40-percent-complete skeleton to create a digital archive of more than 35,000 CT slices.

“Lucy is precious. There’s only one Lucy, and you want to study her as much as possible,” Ketcham said. “CT is nondestructive. So you can see what is inside, the internal details and arrangement of the internal bones.”


UT Austin professors John Kappelman and Richard Ketcham examine casts of Lucy while scanning the original fossil (background).

 Photo by Marsha Miller, UT Austin.

Studying Lucy and her scans, Kappelman noticed something unusual: The end of the right humerus was fractured in a manner not normally seen in fossils, preserving a series of sharp, clean breaks with tiny bone fragments and slivers still in place.

“This compressive fracture results when the hand hits the ground during a fall, impacting the elements of the shoulder against one another to create a unique signature on the humerus,” said Kappelman, who consulted Dr. Stephen Pearce, an orthopedic surgeon at Austin Bone and Joint Clinic, using a modern human-scale, 3-D printed model of Lucy.

Pearce confirmed: The injury was consistent with a four-part proximal humerus fracture, caused by a fall from considerable height when the conscious victim stretched out an arm in an attempt to break the fall.

UT Austin professor John Kappelman studies Lucy’s skeleton in the National Museum in Addis Ababa, Ethiopia

 Photo by Lawrence Todd.

Kappelman observed similar but less severe fractures at the left shoulder and other compressive fractures throughout Lucy’s skeleton including a pilon fracture of the right ankle, a fractured left knee and pelvis, and even more subtle evidence such as a fractured first rib — “a hallmark of severe trauma” — all consistent with fractures caused by a fall. Without any evidence of healing, Kappelman concluded the breaks occurred perimortem, or near the time of death.

The question remained: How could Lucy have achieved the height necessary to produce such a high velocity fall and forceful impact? Kappelman argued that because of her small size — about 3 feet 6 inches and 60 pounds — Lucy probably foraged and sought nightly refuge in trees.

In comparing her with chimpanzees, Kappelman suggested Lucy probably fell from a height of more than 40 feet, hitting the ground at more than 35 miles per hour. Based on the pattern of breaks, Kappelman hypothesized that she landed feet-first before bracing herself with her arms when falling forward, and “death followed swiftly.”
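Those two numbers are consistent with simple free-fall kinematics. Neglecting air resistance, a drop from a height h of roughly 12 meters (about 40 feet) gives an impact speed of

```latex
v \;=\; \sqrt{2 g h} \;\approx\; \sqrt{2 \times 9.8\ \mathrm{m\,s^{-2}} \times 12\ \mathrm{m}} \;\approx\; 15\ \mathrm{m\,s^{-1}} \;\approx\; 34\ \mathrm{mph},
```

so a fall from somewhat more than 40 feet lands in the "more than 35 miles per hour" range quoted above.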

UT Austin professor John Kappelman studies Lucy’s humerus in the National Museum in Addis Ababa, Ethiopia.

Photo by Sissi Janet Mattox.



“When the extent of Lucy’s multiple injuries first came into focus, her image popped into my mind’s eye, and I felt a jump of empathy across time and space,” Kappelman said. “Lucy was no longer simply a box of bones but in death became a real individual: a small, broken body lying helpless at the bottom of a tree.”

Kappelman conjectured that because Lucy was both terrestrial and arboreal, features that permitted her to move efficiently on the ground may have compromised her ability to climb trees, predisposing her species to more frequent falls. Using fracture patterns when present, future research may tell a more complete story of how ancient species lived and died.

In addition to the study, the Ethiopian National Museum provided access to a set of 3-D files of Lucy’s shoulder and knee for the public to download and print so that they can evaluate the hypothesis for themselves. “This is the first time 3-D files have been released for any Ethiopian fossil hominin, and the Ethiopian officials are to be commended,” Kappelman said. “Lucy is leading the charge for the open sharing of digital data.”

Other scholastic materials and the 3-D files are available on eLucy.org. Permissions to scan, study and photograph Lucy were granted by the Authority for Research and Conservation of Cultural Heritage and the National Museum of Ethiopia of the Ministry of Tourism and Culture. The UTCT was supported by three grants from the U.S. National Science Foundation.


Contacts and sources:
David Ochsner
The University of Texas at Austin

Tuesday, August 30, 2016

Like Herding Gnats: Theorists Solve a Long-Standing Fundamental Problem Involving Atoms

Trying to understand a system of atoms is like herding gnats - the individual atoms are never at rest and are constantly moving and interacting. When it comes to trying to model the properties and behavior of these kinds of systems, scientists use two fundamentally different pictures of reality, one of which is called "statistical" and the other "dynamical."

The two approaches have at times been at odds, but scientists from the U.S. Department of Energy's Argonne National Laboratory announced a way to reconcile the two pictures.

In the statistical approach, which scientists call statistical mechanics, a given system realizes all of its possible states, which means that the atoms explore every possible location and velocity for a given value of either energy or temperature. In statistical mechanics, scientists are not concerned with the order in which the states happen and are not concerned with how long they take to occur. Time is not a player.

Credit: Argonne National Laboratory

In contrast, the dynamical approach provides a detailed account of how and to what degree these states are explored over time. In dynamics, a system may not experience all of the states that are in principle available to it, because the energy may not be high enough to surmount the energy barriers or because the time window is too short. "When a system cannot 'see' states beyond an energy barrier in dynamics, it's like a hiker being unable to see the next valley behind a mountain range," said Argonne theorist Julius Jellinek.

When choosing one approach over the other, scientists are forced to take a conceptual fork in the road, because the two approaches do not always agree. Under certain conditions - for example, at sufficiently high energies and long time scales - the statistical and the dynamical portraits of the physical world do in fact sync up. However, in many other cases statistical mechanics and dynamics yield pictures that differ markedly.

"When the two approaches disagree, the correct choice is dynamics because the states actually experienced by a system may depend on the energy, the initial state and on the window of time for observation or measurement," Jellinek said. However, not having the statistical picture is "kind of a loss," he added, because of the power of its tools and concepts to analyze and characterize the properties and behavior of systems.

The fundamental characteristic that lies at the foundation of all statistical mechanics is the "density of states," which is the total number of states a system can assume at a given energy. Knowledge of the density of states allows researchers to establish additional physical properties such as entropy, free energy and others, which form the powerful arsenal of statistical mechanical analysis and characterization tools. The accuracy of all these, however, hinges on the accuracy of the density of states.

The problem is that when it comes to the vibrational motion of systems, scientists had an exact solution for the density of states for only two idealized cases, which are sets of so-called harmonic or Morse oscillators. Though real systems are neither of the two, the ubiquitous practice was to use the harmonic approximation, which hinges on the assumption that real systems behave not too differently from harmonic ones.
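For orientation, the harmonic case is one of the two idealized systems for which an exact answer was known. In the classical (continuous) limit, a system of s harmonic oscillators with frequencies ω_i has the vibrational density of states

```latex
\rho_{\mathrm{harm}}(E) \;=\; \frac{E^{\,s-1}}{(s-1)!\;\prod_{i=1}^{s}\hbar\omega_{i}},
```

an expression that is exact only when the potential really is harmonic.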

This assumption is not bad at low energies, but it becomes inadequate as the energy is increased. Considerable effort has been invested over the last eight decades into attempts to provide a solution for systems that do not behave harmonically, Jellinek said, and until now, the result has been a multitude of approximate solutions, which are all limited to only weak departures from harmonicity or suffer from other limitations. A general and exact solution for vibrational density of states for systems with any degree of anharmonicity remained an unsolved problem.

In a major recent development, Jellinek, in collaboration with Darya Aleinikava, then an Argonne postdoc and now an assistant professor at Benedictine University, provided the missing solution. The methodology they formulated furnishes a general and exact solution for any system at any energy.

"This long-standing fundamental problem is finally solved," said Jellinek. "The solution will benefit many areas of physics, chemistry, materials science, nanoscience and biology."

The solution provided solves yet another problem - it reconciles the statistical and dynamical pictures of the world for even those conditions in which they previously may have disagreed. Since the solution is based on following the actual dynamics of a system at relevant energies and time scales, the resulting densities of states are fully dynamically informed and may be sensitive to time. As such, these densities of states lay the foundation for formulation of new statistical mechanical frameworks that incorporate time and reflect the actual dynamical behavior of systems.

"This leads to a profound change in our view of the relationship between statistical mechanics and dynamics," said Jellinek. "It brings statistical mechanics into harmony with the dynamics irrespective of how specific or peculiar the dynamical behavior of a system may be."

A paper based on the research, "Anharmonic densities of states: A general dynamics-based solution," was published in the June 2 edition of The Journal of Chemical Physics.

The work was supported by the DOE Office of Science and the Alexander von Humboldt Foundation and made use of the National Energy Research Scientific Computing center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory.



Contacts and sources: 
Jared Sagoff
Argonne National Laboratory

Smarter Brains Are Blood-Thirsty Brains


A University of Adelaide-led project has overturned the theory that the evolution of human intelligence was simply related to the size of the brain, showing instead that it was more closely linked to the supply of blood to the brain.

The international collaboration between Australia and South Africa showed that the human brain evolved to become not only larger, but more energetically costly and blood thirsty than previously believed.

The research team calculated how blood flowing to the brain of human ancestors changed over time, using the size of two holes at the base of the skull that allow arteries to pass to the brain. The findings, published in the Royal Society journal Open Science, allowed the researchers to track the increase in human intelligence across evolutionary time.

These are skull casts from human evolution. Left to right: Australopithecus afarensis, Homo habilis, Homo ergaster, Homo erectus and Homo neanderthalensis.

Photo credit: Roger Seymour. Casts photographed in the South Australian Museum.

"Brain size has increased about 350% over human evolution, but we found that blood flow to the brain increased an amazing 600%," says project leader Professor Emeritus Roger Seymour, from the University of Adelaide. "We believe this is possibly related to the brain's need to satisfy increasingly energetic connections between nerve cells that allowed the evolution of complex thinking and learning.

"To allow our brain to be so intelligent, it must be constantly fed oxygen and nutrients from the blood.

"The more metabolically active the brain is, the more blood it requires, so the supply arteries are larger. The holes in fossil skulls are accurate gauges of arterial size."

The study was a new collaboration between the Cardiovascular Physiology team in the School of Biological Sciences at the University of Adelaide and the Brain Function Research Group and Evolutionary Studies Institute at the University of the Witwatersrand.

These are human skulls, showing the location of two openings for the internal carotid arteries that supply the cerebrum of the brain almost entirely. The sizes of these openings reveal the rate of blood flow, which is related to brain metabolic rate and cognitive ability.

Photo credit: Edward Snelling. Sourced from the Raymond Dart Collection of Human Skeletons, School of Anatomical Sciences, Faculty of Health Sciences, University of the Witwatersrand.

Co-author Dr Edward Snelling, University of the Witwatersrand, says: "Ancient fossil skulls from Africa reveal holes where the arteries supplying the brain passed through. The size of these holes show how blood flow increased from three million-year-old Australopithecus to modern humans. The intensity of brain activity was, before now, believed to have been taken to the grave with our ancestors."

Honours student and co-author Vanya Bosiocic had the opportunity to travel to South Africa and work with world renowned anthropologists on the oldest hominin skull collection, including the newly-discovered Homo naledi.

"Throughout evolution, the advance in our brain function appears to be related to the longer time it takes for us to grow out of childhood. It is also connected to family cooperation in hunting, defending territory and looking after our young," Ms Bosiocic says.

"The emergence of these traits seems to nicely follow the increase in the brain's need for blood and energy."



Contacts and sources:
Professor Roger Seymour, Project leader 
University of Adelaide

The Rise and Fall of Galaxy Formation

An international team of astronomers, including Carnegie’s Eric Persson, has charted the rise and fall of galaxies over 90 percent of cosmic history. Their work, which includes some of the most sensitive astronomical measurements made to date, is published by The Astrophysical Journal.

The FourStar Galaxy Evolution Survey (ZFOURGE) has built a multicolored photo album of galaxies as they grow from their faint beginnings into mature and majestic giants. The team did so by measuring distances and brightnesses for more than 70,000 galaxies spanning more than 12 billion years of cosmic time, revealing the breadth of galactic diversity.

 A movie version of this comparison between optical wavelengths and ZFOURGE   
Courtesy of Texas A&M University.

The team assembled the colorful photo album by using a new set of filters that are sensitive to infrared light, taking images with the FourStar camera at Carnegie's 6.5-meter Baade Telescope at its Las Campanas Observatory in Chile. They took the images over a period of 45 nights. The team made a 3-D map by collecting light from over 70,000 galaxies, peering all the way into the distant universe, and by using this light to measure how far these galaxies are from our own Milky Way.

The deep 3-D map also revealed young galaxies that existed as early as 12.5 billion years ago (at less than 10 percent of the current universe age), only a handful of which had previously been found. This should help astronomers better understand the universe’s earliest days. 
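As a rough guide to what "12.5 billion years ago" means in terms of the quantity actually measured (redshift), the conversion can be sketched with astropy's standard cosmology tools. The cosmological parameters adopted by ZFOURGE are not stated in this article, so the Planck15 parameter set is used here purely as an illustrative assumption.

```python
# Rough redshift <-> lookback-time conversion; Planck15 parameters are an
# illustrative assumption, not necessarily the cosmology adopted by ZFOURGE.
from astropy.cosmology import Planck15, z_at_value
import astropy.units as u

print(Planck15.age(0))               # current age of the universe, ~13.8 Gyr
print(Planck15.lookback_time(4.0))   # light from redshift z = 4 left ~12.2 Gyr ago

# Redshift corresponding to a lookback time of 12.5 Gyr (roughly z ~ 4.5-5)
z = z_at_value(Planck15.lookback_time, 12.5 * u.Gyr)
print(z)
```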

A comparison of visualizing galaxies with and without ZFOURGE.
 
Credit: Texas A&M University. 


"Perhaps the most surprising result is that galaxies in the young universe appear as diverse as they are today,” when the universe is older and much more evolved, said lead author Caroline Straatman, a recent graduate of Leiden University. “The fact that we see young galaxies in the distant universe that have already shut down star formation is remarkable.”

But it’s not just about distant galaxies; the information gathered by ZFOURGE is also giving the scientists the best-yet view of what our own galaxy was like in its youth.

“Ten billion years ago, galaxies like our Milky Way were much smaller, but they were forming stars 30 times faster than they are today,” said Casey Papovich of Texas A&M University.

“ZFOURGE is providing us with a highly complete and reliable census of the evolving galaxy population, and is already helping us to address questions like: How did galaxies grow with time? When did they form their stars and develop into the spectacular structures that we see in the present-day universe?” added Ryan Quadri, also of Texas A&M.

In the study’s first images, the team found one of the earliest examples of a galaxy cluster, a so-called “galaxy city” made up of a dense concentration of galaxies, which formed when the universe was only three billion years old, as compared to the nearly 14 billion years it is today.

“The combination of FourStar, the special filters, Magellan, and the conditions at Las Campanas led to the detection of the cluster,” said Persson, who built the FourStar instrument at the Carnegie Observatories in Pasadena. “It was in a very well-studied region of the sky—‘hiding in plain sight.’”

The paper marks the completion of the ZFOURGE survey and the public release of the dataset, which can be found at http://zfourge.tamu.edu/DR2016/data.html.

This work was supported by the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, the National Science Foundation, the Australian Research Council, an Australian Research Council Future Fellowship, and a NASA Hubble Fellowship awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA. Australian access to the Magellan Telescopes was supported through the National Collaborative Research Infrastructure Strategy of the Australian Federal Government.

The ZFOURGE survey was conducted with the FourStar camera on the Magellan 6.5-meter telescope in Chile and further involved data collected by many of the world’s most powerful observatories, including the Hubble Space Telescope, the Very Large Telescope, the Spitzer Space Telescope, and the Herschel Space Observatory.




Contacts and sources: 
Eric Persson
Carnegie Institution for Science  

Anomalous Grooves on Martian Moon Phobos Explained

Some of the mysterious grooves on the surface of Mars' moon Phobos are the result of debris ejected by impacts eventually falling back onto the surface to form linear chains of craters, according to a new study.

One set of grooves on Phobos is thought to be stress fractures resulting from the tidal pull of Mars. The new study, published August 19 in Nature Communications, addresses another set of grooves that do not fit that explanation.

"These grooves cut across the tidal fields, so they require another mechanism. If we put the two together, we can explain most if not all of the grooves on Phobos," said first author Michael Nayak, a graduate student in Earth and planetary sciences at UC Santa Cruz.

In this spacecraft image of Phobos, red arrows indicate a chain of small craters whose origin researchers were able to trace back to a primary impact at the large crater known as Grildrig.

Credit: ESA/Mars Express, modified by Nayak & Asphaug


Phobos is an unusual satellite, orbiting closer to its planet than any other moon in the solar system, with an orbital period of just 7 hours. Small and heavily cratered, with a lumpy nonspherical shape, it is only 9,000 kilometers from the surface of Mars (the distance from San Francisco to New York and back) and is slowly spiraling inward toward the planet. Phobos appears to have a weak interior structure covered by an elastic shell, allowing it to be deformed by tidal forces without breaking apart.

Coauthor Erik Asphaug, a planetary scientist at Arizona State University and professor emeritus at UC Santa Cruz, has been studying Phobos for many years. Recent computer simulations by him and NASA planetary scientist Terry Hurford showed how tidal stresses can cause fracturing and linear grooves in the surface layer. Although this idea was first proposed in the 1970s, the existence of so many grooves with the wrong orientation for such stress fractures had remained unexplained.

Nayak developed computer simulations showing how those anomalous grooves could result from impacts. Material ejected from the surface by an impact easily escapes the weak gravity of Phobos. But the debris remains in orbit around Mars, most of it moving either just slower or just faster than the orbital velocity of Phobos, and within a few orbits it gets recaptured and falls back onto the surface of the moon.
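A quick back-of-the-envelope number shows just how weak that gravity is; the mass and mean radius below are approximate published values for Phobos, used here only for illustration.

```python
import math

# Approximate published values for Phobos (illustrative only)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_phobos = 1.07e16   # mass, kg
R_phobos = 11.1e3    # mean radius, m

v_escape = math.sqrt(2 * G * M_phobos / R_phobos)
print(round(v_escape, 1), "m/s")   # ~11 m/s -- a briskly thrown rock can leave Phobos,
                                   # so impact ejecta easily ends up orbiting Mars instead
```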

Nayak's simulations enabled him to track in precise detail the fate of the ejected debris. He found that recaptured debris creates distinctive linear impact patterns that match the characteristics of the anomalous grooves and chains of craters that cut across the tidal stress fractures on Phobos.

"A lot of stuff gets kicked up, floats for a couple of orbits, and then gets recollected and falls back in a linear chain before it has a chance to be pulled apart and disassociated by Mars' gravity," Nayak said. "The controlling factor is where the impact occurs, and that determines where the debris falls back."

The researchers used their model to match a linear chain of small craters on Phobos to its primary source crater. They simulated an impact at the 2.6-kilometer crater called Grildrig, near the moon's north pole, and found that the pattern resulting from ejected debris falling back onto the surface in the model was a very close match to the actual crater chain observed on Phobos.

With its low mass and close orbit around Mars, Phobos is so unusual that it may be the only place in the solar system where this phenomenon occurs, Nayak said.



Contacts and sources:
Tim Stephens
UC Santa Cruz

How 'Planet Nine' Could Doom the Solar System: Dr Dimitri Veras



The solar system could be thrown into disaster when the sun dies if the mysterious 'Planet Nine' exists, according to research from the University of Warwick.

Dr Dimitri Veras in the Department of Physics has discovered that the presence of Planet Nine - the hypothetical planet which may exist in the outer Solar System - could cause the elimination of at least one of the giant planets after the sun dies, hurling them out into interstellar space through a sort of 'pinball' effect.

When the sun starts to die in around seven billion years, it will blow away half of its own mass and inflate itself -- swallowing the Earth -- before fading into an ember known as a white dwarf. This mass ejection will push Jupiter, Saturn, Uranus and Neptune out to what was assumed a safe distance.
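That "safe distance" expectation rests on a standard result for slow (adiabatic) stellar mass loss: a planet's semi-major axis expands in inverse proportion to the star's remaining mass,

```latex
a_{\mathrm{final}} \;=\; a_{\mathrm{initial}}\,\frac{M_{\mathrm{initial}}}{M_{\mathrm{final}}},
```

so when the Sun sheds roughly half of its mass, the orbits of the surviving giant planets roughly double in size.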

Artist's impression showing Planet Nine causing other planets in the solar system to be hurled into interstellar space.

Credit: University of Warwick

However, Dr. Veras has discovered that the existence of Planet Nine could rewrite this happy ending. He found that Planet Nine might not be pushed out in the same way, and in fact might instead be thrust inward into a death dance with the solar system's four known giant planets -- most notably Uranus and Neptune. The most likely result is ejection from the solar system, forever.

Using a unique code that can simulate the death of planetary systems, Dr. Veras has mapped numerous different positions where a 'Planet Nine' could change the fate of the solar system. The further away and the more massive the planet is, the higher the chance that the solar system will experience a violent future.

This discovery could shed light on planetary architectures in different solar systems. Almost half of existing white dwarfs contain rock, a potential signature of the debris generated from a similarly calamitous fate in other systems with distant "Planet Nines" of their own.

In effect, the future death of our sun could explain the evolution of other planetary systems.

Dr. Veras explains the danger that Planet Nine could create: "The existence of a distant massive planet could fundamentally change the fate of the solar system. Uranus and Neptune in particular may no longer be safe from the death throes of the Sun. The fate of the solar system would depend on the mass and orbital properties of Planet Nine, if it exists."

"The future of the Sun may be foreshadowed by white dwarfs that are 'polluted' by rocky debris. Planet Nine could act as a catalyst for the pollution. The Sun's future identity as a white dwarf that could be 'polluted' by rocky debris may reflect current observations of other white dwarfs throughout the Milky Way," Dr Veras adds.

The paper 'The fates of solar system analogues with one additional distant planet' will be published in the Monthly Notices of the Royal Astronomical Society.



Contacts and sources:
Luke Walton
University of Warwick.

Solar Cycles and Climate Changes Measured by TOSCA Scientists

The Sun’s impact on our planet’s climate has recently been a hotly debated topic in the context of climate change. The controversy around this issue has led scientists across Europe to dig deeper into the claim that solar activity could be a major cause of global warming.

In the 1980s, research showed that the Sun’s radiation levels varied, which naturally invited the question – does solar variability affect our climate? Despite new evidence that solar variability does have a small impact, scattered scientific studies have not helped improve how the Sun’s variations were assessed.

In 2011, European researchers set up TOSCA, a COST-funded international network aiming to offer a better understanding of the Sun's effect on climate, against the backdrop of global warming. Over 100 specialists in solar physics, geomagnetism, climate modelling and atmospheric chemistry got together to explore this topic in a new way. 



Previously, analyses of the Sun-Earth relationship have focused on measuring the Sun's total solar irradiance, or variations in solar radiation. “It's like measuring the wealth of a country only by looking at its GDP”, Dr Thierry Dudok de Wit (University of Orléans, France) points out. Climate studies have long been focusing on similar mechanisms individually, which is why TOSCA opted for a global approach, by bringing on board experts from different research communities.

“Our biggest achievement was changing the way we interacted, by looking at Earth-solar connections as a whole, not individually,” adds Dr Thierry Dudok de Wit, who led the Action.

The group set out to get a better idea of the physical and chemical mechanisms driving such variations, and how impactful they were. Understanding their mechanisms also helps paint a better picture of the link between solar variability and climate change.

By comparing recent measurements with results from new models, the network challenged the long-debated assumption that the Sun’s slight change in radiation could cause the Earth’s climate to change.

They found mechanisms by which solar variation can alter climate variability regionally, but none that would trigger global warming. Looking at time scales longer than a century, the impact of solar variability on climate change is evident, but the effect of greenhouse gases has been shown to be much more considerable in the short run.

However, there are still many questions behind the Sun-Earth connection, some of which TOSCA helped answer.

By examining the different phenomena defining the solar impact on climate in general, the team showed several subtle phenomena could have a significant impact, often locally. For instance, UV radiation amounts to a mere 7% of solar energy, but its variation produces changes in the stratosphere, from the Equator all the way to the polar regions, that help govern climate. This means that winters in Europe would become wetter and milder or, on the contrary, drier and cooler, depending on the Sun's state.

They also found that streams of electrons and protons known as the solar wind, affecting the Earth’s global electric field, lead to changes in aerosol formation, which ultimately impact rainfall. These effects, largely ignored so far, will now be incorporated into several climate models in order to build a more complete picture. 

TOSCA is a European COST action linking scientists working on the influence of the Sun on the Earth’s climate. Based on present understanding, solar variability has a role in the observed climate change. This is a multidisciplinary topic of considerable scientific and societal importance. However, the mechanisms that link solar activity and climate change are not yet fully understood. TOSCA’s aim is to shed more light on the mechanisms involved.



The TOSCA handbook presents all the scientific facts behind the network’s findings. It also shows the network’s efforts to engage with a general audience by presenting the facts, which are now open to public scrutiny.

The Action was another example of young researchers’ essential contribution: “If I were to lead another COST Action, I would get even more early career researchers involved – it was bright, young minds who made the difference in our group”, Dr Dudok de Wit added.

Dr Benjamin Laken had a leading role in one of TOSCA's training schools: “I demonstrated the use of Python for data analytics, and also guided a small team of students through an independent research project. This helped expose the students – many for the first time – to critical tools and methods relevant to their development as researchers. TOSCA enabled me to identify the most pressing knowledge gaps, which I could personally contribute to, and see how to effectively communicate my findings back to an interdisciplinary community. Thanks to the network, I was able to grow as a researcher at a critical time in my career.”

Dr Dudok de Wit’s team at the International Space Science Institute in Bern, and the Coupled Model Intercomparison Project, have been using the datasets identified through the network to describe the Sun’s influence on climate from 1850 up to the present day, as well as a forecast up to the year 2300. The findings will shape the next report prepared by the Intergovernmental Panel on Climate Change. The panel is tasked with providing a scientific, objective view of climate change and its socio-economic effects.

Other projects spinning off from the network, such as SOLID and VarSITI, will continue research on the Sun's terrestrial impact, placing European experts at the forefront of climate studies research.




Contacts and sources:

TOSCA handbook: http://www.cost.eu/media/publications/Earth-s-climate-response-to-a-changing-Sun
Conclusions of the Study
http://lpc2e.cnrs-orleans.fr/~ddwit/TOSCA/TOSCA/Research.html
 

 

A Billion Jupiter-Like Worlds in The Milky Way

Our galaxy is home to a bewildering variety of Jupiter-like worlds: hot ones, cold ones, giant versions of our own giant, pint-sized pretenders only half as big around.

Astronomers say that in our galaxy alone, a billion or more such Jupiter-like worlds could be orbiting stars other than our sun. And we can use them to gain a better understanding of our solar system and our galactic environment, including the prospects for finding life.

It turns out the inverse is also true -- we can turn our instruments and probes to our own backyard, and view Jupiter as if it were an exoplanet to learn more about those far-off worlds. The best-ever chance to do this is now, with Juno, a NASA probe the size of a basketball court, which arrived at Jupiter in July to begin a series of long, looping orbits around our solar system's largest planet. Juno is expected to capture the most detailed images of the gas giant ever seen. And with a suite of science instruments, Juno will plumb the secrets beneath Jupiter's roiling atmosphere.

Comparing Jupiter with Jupiter-like planets that orbit other stars can teach us about those distant worlds, and reveal new insights about our own solar system's formation and evolution. (Illustration)

Credits: NASA/JPL-Caltech

It will be a very long time, if ever, before scientists who study exoplanets -- planets orbiting other stars -- get the chance to watch an interstellar probe coast into orbit around an exo-Jupiter, dozens or hundreds of light-years away. But if they ever do, it's a safe bet the scene will summon echoes of Juno.

"The only way we're going to ever be able to understand what we see in those extrasolar planets is by actually understanding our system, our Jupiter itself," said David Ciardi, an astronomer with NASA's Exoplanet Science Institute (NExSci) at Caltech.

Not all Jupiters are created equal

Juno's detailed examination of Jupiter could provide insights into the history, and future, of our solar system. The tally of confirmed exoplanets so far includes hundreds in Jupiter's size-range, and many more that are larger or smaller.


Credit: NASA


The so-called hot Jupiters acquired their name for a reason: They are in tight orbits around their stars that make them sizzling-hot, completing a full revolution -- the planet's entire year -- in what would be a few days on Earth. And they're charbroiled along the way.
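Kepler's third law makes that "year of a few days" concrete. With the semi-major axis a in astronomical units and the stellar mass M* in solar masses, the orbital period in years is

```latex
P \;=\; \sqrt{\frac{a^{3}}{M_{\star}}}\,,\qquad
P\bigl(a = 0.05\ \mathrm{AU},\ M_{\star} = 1\ M_{\odot}\bigr) \;=\; \sqrt{0.05^{3}} \;\approx\; 0.011\ \mathrm{yr} \;\approx\; 4\ \mathrm{days}.
```

The 0.05 AU separation is an illustrative value typical of hot Jupiters, not a figure from this article.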

But why does our solar system lack a "hot Jupiter?" Or is this, perhaps, the fate awaiting our own Jupiter billions of years from now -- could it gradually spiral toward the sun, or might the swollen future sun expand to engulf it?

Not likely, Ciardi says; such planetary migrations probably occur early in the life of a solar system.

"In order for migration to occur, there needs to be dusty material within the system," he said. "Enough to produce drag. That phase of migration is long since over for our solar system."

Jupiter itself might already have migrated from farther out in the solar system, although no one really knows, he said.

Looking back in time

If Juno's measurements can help settle the question, they could take us a long way toward understanding Jupiter's influence on the formation of Earth -- and, by extension, the formation of other "Earths" that might be scattered among the stars.

"Juno is measuring water vapor in the Jovian atmosphere," said Elisa Quintana, a research scientist at the NASA Ames Research Center in Moffett Field, California. "This allows the mission to measure the abundance of oxygen on Jupiter. Oxygen is thought to be correlated with the initial position from which Jupiter originated."



If Jupiter's formation started with large chunks of ice in its present position, then it would have taken a lot of water ice to carry in the heavier elements which we find in Jupiter. But a Jupiter that formed farther out in the solar system, then migrated inward, could have formed from much colder ice, which would carry in the observed heavier elements with a smaller amount of water. If Jupiter formed more directly from the solar nebula, without ice chunks as a starter, then it should contain less water still. Measuring the water is a key step in understanding how and where Jupiter formed.

That's how Juno's microwave radiometer, which will measure water vapor, could reveal Jupiter's ancient history.

"If Juno detects a high abundance of oxygen, it could suggest that the planet formed farther out," Quintana said.

A probe dropped into Jupiter by NASA’s Galileo spacecraft in 1995 found high winds and turbulence, but the expected water seemed to be absent. Scientists think Galileo's one-shot probe just happened to drop into a dry area of the atmosphere, but Juno will survey the entire planet from orbit.

The chaotic early years

Where Jupiter formed, and when, also could answer questions about the solar system's "giant impact phase," a time of crashes and collisions among early planet-forming bodies that eventually led to the solar system we have today.

Our solar system was extremely accident-prone in its early history -- perhaps not quite like billiard balls caroming around, but with plenty of pileups and fender-benders.

"It definitely was a violent time," Quintana said. "There were collisions going on for tens of millions of years. For example, the idea of how the moon formed is that a proto-Earth and another body collided; the disk of debris from this collision formed the moon. And some people think Mercury, because it has such a huge iron core, was hit by something big that stripped off its mantle; it was left with a large core in proportion to its size."

Part of Quintana's research involves computer modeling of the formation of planets and solar systems. Teasing out Jupiter's structure and composition could greatly enhance such models, she said. Quintana already has modeled our solar system's formation, with Jupiter and without, yielding some surprising findings.

Credit: NASA

"For a long time, people thought Jupiter was essential to habitability because it might have shielded Earth from the constant influx of impacts [during the solar system's early days] which could have been damaging to habitability," she said. "What we've found in our simulations is that it's almost the opposite. When you add Jupiter, the accretion times are faster and the impacts onto Earth are far more energetic. Planets formed within about 100 million years; the solar system was done growing by that point," Quintana said.

"If you take Jupiter out, you still form Earth, but on timescales of billions of years rather than hundreds of millions. Earth still receives giant impacts, but they're less frequent and have lower impact energies," she said.

Getting to the core

Another critical Juno measurement that could shed new light on the dark history of planetary formation is the mission's gravity science experiment. Changes in the frequency of radio transmissions from Juno to NASA's Deep Space Network will help map the giant planet's gravitational field.
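The underlying relationship is the ordinary Doppler effect: a change in the spacecraft's line-of-sight velocity v shifts a radio carrier of frequency f by roughly f x v / c. A minimal sketch of that arithmetic is below; the 8.4 GHz carrier is an assumed, X-band-like value used only for illustration, not a stated Juno specification.

# Toy Doppler-shift estimate for a spacecraft radio-tracking experiment.
# The 8.4 GHz carrier is an assumed, X-band-like value, not a stated Juno parameter.

C = 299_792_458.0        # speed of light, m/s
CARRIER_HZ = 8.4e9       # assumed downlink carrier frequency, Hz

def doppler_shift_hz(velocity_m_per_s):
    """Frequency shift from a line-of-sight velocity change (non-relativistic)."""
    return CARRIER_HZ * velocity_m_per_s / C

# A gravity tug that changes the line-of-sight speed by 0.1 mm/s:
print(round(doppler_shift_hz(1e-4), 4), "Hz")   # about 0.0028 Hz

Shifts of this size, tracked consistently over many close passes, are the kind of signal that lets the gravity field be mapped.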

Knowing the nature of Jupiter's core could reveal how quickly the planet formed, with implications for how Jupiter might have affected Earth's formation.

Jupiter Infra-red Glow

Credit: NASA

And the spacecraft's magnetometers could yield more insight into the deep internal structure of Jupiter by measuring its magnetic field.

"We don't understand a lot about Jupiter's magnetic field," Ciardi said. "We think it's produced by metallic hydrogen in the deep interior. Jupiter has an incredibly strong magnetic field, much stronger than Earth's."

Mapping Jupiter's magnetic field also might help pin down the plausibility of proposed scenarios for alien life beyond our solar system.

Earth's magnetic field is thought to be important to life because it acts like a protective shield, channeling potentially harmful charged particles and cosmic rays away from the surface.

"If a Jupiter-like planet orbits its star at a distance where liquid water could exist, the Jupiter-like planet itself might not have life, but it might have moons which could potentially harbor life," he said.



An exo-Jupiter’s intense magnetic field could protect such life forms, he said. That conjures visions of Pandora, the moon in the movie "Avatar" inhabited by 10-foot-tall humanoids who ride massive, flying predators through an exotic alien ecosystem.

Juno's findings will be important not only for understanding how exo-Jupiters might influence the formation of exo-Earths and other kinds of habitable planets; they'll also be essential to the next generation of space telescopes that will hunt for alien worlds. The Transiting Exoplanet Survey Satellite (TESS) will survey nearby bright stars for exoplanets beginning no later than June 2018. The James Webb Space Telescope, expected to launch in 2018, and WFIRST (the Wide-Field Infrared Survey Telescope), with launch anticipated in the mid-2020s, will attempt to take direct images of giant planets orbiting other stars.
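TESS, as its name suggests, will look for the tiny, periodic dip in a star's brightness when an orbiting planet crosses its face; the fractional dip is roughly the square of the planet-to-star radius ratio. As a rough, illustrative calculation (not tied to any particular TESS target), a Jupiter-sized planet crossing a Sun-like star blocks about one percent of its light:

# Approximate transit depth: fraction of starlight blocked when a planet
# crosses its star, roughly (planet radius / star radius) squared.
# Radii are round textbook values used purely for illustration.
R_JUPITER_KM = 71_492.0    # equatorial radius of Jupiter
R_SUN_KM = 696_000.0       # radius of the Sun

depth = (R_JUPITER_KM / R_SUN_KM) ** 2
print(f"fractional dip: {depth:.4f} (about {depth * 100:.1f} percent)")

That roughly one-percent signal is why transit surveys favor bright, nearby stars, where such small brightness changes are easiest to measure.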

"We're going to be able to image planets and get spectra," or light profiles from exoplanets that will reveal atmospheric gases, Ciardi said. Juno's revelations about Jupiter will help scientists to make sense of these data from distant worlds.

"Studying our solar system is about studying exoplanets," he said. "And studying exoplanets is about studying our solar system. They go together."









Contacts and sources:
Preston Dyches
Jet Propulsion Laboratory, Pasadena, Calif.

Written by Pat Brennan
NASA Exoplanet Program

To learn more about a few of the known exo-Jupiters, visit:
https://exoplanets.nasa.gov/alien-worlds/strange-new-worlds

Peculiar Age-Defying Star Probed

For years, astronomers have puzzled over a massive star lodged deep in the Milky Way that shows conflicting signs of being extremely old and extremely young.

An age-defying star designated as IRAS 19312+1950 (arrow) exhibits features characteristic of a very young star and a very old star. The object stands out as extremely bright inside a large, chemically rich cloud of material, as shown in this image from NASA’s Spitzer Space Telescope. 

A NASA-led team of scientists thinks the star – which is about 10 times as massive as our sun and emits about 20,000 times as much energy – is a newly forming protostar. That was a big surprise, because the region had not previously been known as a stellar nursery. But a nearby interstellar bubble, a sign that a massive star formed there recently, also supports this idea.

IRAS 19312+1950 

Credits: NASA/JPL-Caltech


Researchers initially classified the star as elderly, perhaps a red supergiant. But a new study by a NASA-led team of researchers suggests that the object, labeled IRAS 19312+1950, might be something quite different – a protostar, a star still in the making.

“Astronomers recognized this object as noteworthy around the year 2000 and have been trying ever since to decide how far along its development is,” said Martin Cordiner, an astrochemist working at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. He is the lead author of a paper in the Astrophysical Journal describing the team’s findings, from observations made using NASA’s Spitzer Space Telescope and ESA’s Herschel Space Observatory.

Located more than 12,000 light-years from Earth, the object first stood out as peculiar when it was observed at particular radio frequencies. Several teams of astronomers studied it using ground-based telescopes and concluded that it is an oxygen-rich star about 10 times as massive as the sun. The question was: What kind of star?

Some researchers favor the idea that the star is evolved – past the peak of its life cycle and on the decline. For most of their lives, stars obtain their energy by fusing hydrogen in their cores, as the sun does now. But older stars have used up most of their hydrogen and must rely on heavier fuels that don't last as long, leading to rapid deterioration.


IRAS 19312+1950
Image Credit: NASA/JPL-Caltech

Two early clues – intense radio sources called masers – suggested the star was old. In astronomy, masers occur when the molecules in certain kinds of gases get revved up and emit a lot of radiation over a very limited range of frequencies. The result is a powerful radio beacon – the microwave equivalent of a laser.

One maser observed with IRAS 19312+1950 is almost exclusively associated with late-stage stars. This is the silicon oxide maser, produced by molecules made of one silicon atom and one oxygen atom. Researchers don’t know why this maser is nearly always restricted to elderly stars, but of thousands of known silicon oxide masers, only a few exceptions to this rule have been noted.

Also spotted with the star was a hydroxyl maser, produced by molecules comprised of one oxygen atom and one hydrogen atom. Hydroxyl masers can occur in various kinds of astronomical objects, but when one occurs with an elderly star, the radio signal has a distinctive pattern – it’s especially strong at a frequency of 1612 megahertz. That’s the pattern researchers found in this case.
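For a sense of scale, that 1612 megahertz line sits in the microwave band; converting the frequency to a wavelength is a one-line calculation (wavelength = c / frequency):

# Convert the 1612 MHz hydroxyl-maser frequency quoted above to a wavelength.
C = 299_792_458.0    # speed of light, m/s
FREQ_HZ = 1612e6     # 1612 megahertz

wavelength_cm = C / FREQ_HZ * 100
print(f"{wavelength_cm:.1f} cm")   # about 18.6 cm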

Even so, the object didn’t entirely fit with evolved stars. Especially puzzling was the smorgasbord of chemicals found in the large cloud of material surrounding the star. A chemical-rich cloud like this is typical of the regions where new stars are born, but no such stellar nursery had been identified near this star.

Scientists initially proposed that the object was an old star surrounded by a surprising cloud typical of the kind that usually accompanies young stars. Another idea was that the observations might somehow be capturing two objects: a very old star and an embryonic cloud of star-making material in the same field.

Cordiner and his colleagues began to reconsider the object, conducting observations using ESA’s Herschel Space Observatory and analyzing data gathered earlier with NASA’s Spitzer Space Telescope. Both telescopes operate at infrared wavelengths, which gave the team new insight into the gases, dust and ices in the cloud surrounding the star.

The additional information leads Cordiner and colleagues to think the star is in a very early stage of formation. The object is much brighter than it first appeared, they say, emitting about 20,000 times the energy of our sun. The team found large quantities of ices made from water and carbon dioxide in the cloud around the object. These ices are located on dust grains relatively close to the star, and all this dust and ice blocks out starlight, making the star seem dimmer than it really is.

In addition, the dense cloud around the object appears to be collapsing, which happens when a growing star pulls in material. In contrast, the material around an evolved star is expanding and is in the process of escaping to the interstellar medium. The entire envelope of material has an estimated mass of 500 to 700 suns, which is much more than could have been produced by an elderly or dying star.

“We think the star is probably in an embryonic stage, getting near the end of its accretion stage – the period when it pulls in new material to fuel its growth,” said Cordiner.

Also supporting the idea of a young star are the very fast wind speeds measured in two jets of gas streaming away from opposite poles of the star. Such jets of material, known as a bipolar outflow, can be seen emanating from young or old stars. However, fast, narrowly focused jets are rarely observed in evolved stars. In this case, the team measured winds at the breakneck speed of at least 200,000 miles per hour (90 kilometers per second) – a common characteristic of a protostar.
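The two quoted speeds are the same measurement in different units; a quick arithmetic check confirms that 90 kilometers per second is roughly 200,000 miles per hour:

# Check the quoted outflow speed: 90 kilometers per second in miles per hour.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

speed_m_per_s = 90_000.0
speed_mph = speed_m_per_s * SECONDS_PER_HOUR / METERS_PER_MILE
print(f"{speed_mph:,.0f} mph")   # roughly 201,000 mph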

Still, the researchers acknowledge that the object is not a typical protostar. For reasons they can’t explain yet, the star has spectacular features of both a very young and a very old star.

“No matter how one looks at this object, it’s fascinating, and it has something new to tell us about the life cycles of stars,” said Steven Charnley, a Goddard astrochemist and co-author of the paper.

NASA's Jet Propulsion Laboratory in Pasadena, California, manages the Spitzer Space Telescope mission, whose science operations are conducted at the Spitzer Science Center. Spacecraft operations are based at Lockheed Martin Space Systems Company, Littleton, Colorado.

Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA.



Contacts and sources:
Elizabeth Landau
Jet Propulsion Laboratory

For more information, visit: www.nasa.gov/spitzer

Monday, August 29, 2016

Ice Age Inhabitants of Interior Alaska Relied Heavily on Salmon



Ice age inhabitants of Interior Alaska relied more heavily on salmon and freshwater fish in their diets than previously thought, according to a newly published study.

A team of researchers from the University of Alaska Fairbanks made the discovery by taking samples from 17 prehistoric hearths along the Tanana River and analyzing stable isotopes and lipid residues to identify fish remains at multiple locations. The results offer a more complex picture of Alaska’s ice age residents, who were previously thought to have a diet dominated by terrestrial mammals such as mammoths, bison and elk.


Members of an excavation team work in a trench at the Upward Sun River archaeological site. Salmon remains from the site were dated to 11,800 years old using isotope analysis at the University of Alaska Fairbanks.
Credit: Ben Potter  


The project also found the earliest evidence of human use of anadromous salmon in the Americas, dating back at least 11,800 years.

The results of the study were published today in the Proceedings of the National Academy of Sciences.

DNA analysis of chum salmon bones from the same site on the Tanana River had previously confirmed that fish were part of the local indigenous diet as far back as 11,500 years ago. But fragile fish bones rarely survive for scientists to analyze, so the team used sophisticated geochemistry analyses to estimate the amount of salmon, freshwater and terrestrial resources ancient people ate.

University of Alaska Fairbanks researcher Kyungcheol Choy loads an autosampler in UAF’s Alaska Stable Isotope Facility.
Credit: Matthew Wooller


A team led by UAF postdoctoral researcher Kyungcheol Choy analyzed stable isotopes and lipid residues, searching for signatures specific to anadromous fish. The effort demonstrated that dietary practices of hunter-gatherers could be recorded at sites where animal remains hadn’t been preserved.
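The study's specific statistical methods aren't described here, but the core idea behind turning isotope measurements into diet estimates can be sketched with a simple linear mixing model: each food source has a characteristic isotopic signature, and the measured signature is treated as a weighted average of the sources. The endmember and measured values below are invented for illustration only, not data from the study.

# Toy three-source stable-isotope mixing model (illustrative only).
# All delta values below are invented endmembers, not data from the study.
import numpy as np

# Columns: salmon, freshwater fish, terrestrial mammals
sources_d13C = [-18.0, -25.0, -21.0]
sources_d15N = [14.0, 9.0, 4.0]

measured_d13C, measured_d15N = -21.5, 9.5

# Require the source fractions to reproduce both measurements and sum to 1.
A = np.array([sources_d13C,
              sources_d15N,
              [1.0, 1.0, 1.0]])
b = np.array([measured_d13C, measured_d15N, 1.0])
fractions = np.linalg.solve(A, b)

for name, f in zip(["salmon", "freshwater fish", "terrestrial mammals"], fractions):
    print(f"{name}: {f:.2f}")

Real analyses also account for measurement uncertainty and for isotopic fractionation between diet and tissue, which is why more sophisticated, often Bayesian, mixing models are typically used in practice.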

“It’s quite new in the archaeology field,” Choy said. “There’s a lot in these mixtures that’s hard to detect in other ways.”

Ben Potter, a professor of anthropology at UAF and co-author of the study, said the findings suggest a more systematic use of salmon than DNA testing alone could confirm.

“This is a different kind of strategy,” Potter said. “It fleshes out our understanding of these people in a way that we didn’t have before.”

The study required cooperation between UAF’s Department of Anthropology and the Institute of Northern Engineering’s Alaska Stable Isotope Facility to locate and interpret the presence of salmon remains at the sites. Potter said the process could be a template for how a diverse team of researchers can work together to overcome a scientific obstacle.

“It’s an awesome look at how we can merge disciplines to answer a question,” he said.

Other participants in the study included UAF researchers Matthew Wooller, Holly McKinney, Joshua Reuther and Shiway Wang.




Contacts and sources:
Jeff Richardson
University of Alaska Fairbanks

Dental Plaque Sheds New Light On the Diet of Mesolithic Foragers in The Balkans


The study of dental calculus from Late Mesolithic individuals from the site of Vlasac in the Danube Gorges of the central Balkans has provided direct evidence that Mesolithic foragers of this region consumed domestic cereals as early as c. 6600 BC, almost half a millennium earlier than previously thought.

The team of researchers, led by Emanuela Cristiani of The McDonald Institute for Archaeological Research, University of Cambridge, used polarised microscopy to study micro-fossils trapped in the dental calculus (ancient calcified dental plaque) of 9 individuals dated to the Late Mesolithic (c. 6600-6450 BC) and the Mesolithic-Neolithic transition phase (c. 6200-5900 BC) from the site of Vlasac in the Danube Gorges. The remains were recovered from the site during excavations conducted from 2006 to 2009 by Dušan Borić of Cardiff University.


Recovery of human remains at Vlasac, Serbia.

Credit: Dušan Boric


"The deposition of mineralized plaque ends with the death of the individual, therefore, dental calculus has sealed unique human biographic information about Mesolithic dietary preferences and lifestyle," said Cristiani.

"What we happened to discover has a tremendous significance as it challenges the established view of the Neolithization in Europe," she said.

"Microfossils trapped in dental calculus are a direct evidence that plant foods were an important source of energy within Mesolithic forager diet. More significantly, though, they reveal that domesticated plants were introduced to the Balkans independently from the rest of Neolithic novelties such as domesticated animals and artefacts, which accompanied the arrival of farming communities in the region".

These results suggest that the hitherto held notion of the "Neolithic package" may have to be reconsidered. Archaeologists use the concept of "Neolithic package" to refer to the group of elements that appear in the Early Neolithic settlements of Southeast Europe: pottery, domesticates and cultigens, polished axes, ground stones and timber houses.


Close-up of human remains from Vlasac, Serbia.

Credit: Dušan Boric


This region of the central Balkans has yielded data unmatched by other areas of Europe with a known Mesolithic forager presence. Dental tartar samples were also taken from three Early Neolithic (c. 5900-5700 BC) female burials from the site of Lepenski Vir, located around 3 km upstream from Vlasac.

Although researchers agree that the Mesolithic diet in the Danube Gorges was largely based on terrestrial or riverine protein-rich resources, the team also found that starch granules preserved in the dental calculus from Vlasac were consistent with domestic species such as wheat (Triticum monococcum, Triticum dicoccum) and barley (Hordeum distichon), which were also the main crops found among Early Neolithic communities of southeast Europe.

Domestic species were consumed together with other wild species of the Aveneae tribe (oats), Fabaeae tribe (peas and beans) and grasses of the Paniceae tribe.

These preserved starch granules provide the first direct evidence that Neolithic domestic cereals had already reached inland foragers deep in the Balkan hinterland by c. 6600 BC. Their introduction into Mesolithic societies was likely eased by social networks between local foragers and the first Neolithic communities.

Archaeological starch grains were interpreted using a large collection of microremains from modern plants native to the central Balkans and the Mediterranean region.

"Most of the starch granules that we identified in the Late Mesolithic calculus of the central Balkans are consistent with plants that became key staple domestic foods with the start of the Neolithic in this region" said Cristiani.

Anita Radini, University of York added, "In the central Balkans, foragers' familiarity with domestic Cerealia grasses from c. 6500 BC, if not earlier, might have eased the later quick adoption of agricultural practices."

The findings are published in the journal Proceedings of the National Academy of Sciences.









Contacts and sources:
Emanuela Cristiani
University of Cambridge

Hunt For Planet X Reveals Strange Never-Seen-Before Objects and Orbits


In the race to discover a proposed ninth planet in our Solar System, Carnegie's Scott Sheppard and Chadwick Trujillo of Northern Arizona University have observed several never-before-seen objects at extreme distances from the Sun in our Solar System. Sheppard and Trujillo have now submitted their latest discoveries to the International Astronomical Union's Minor Planet Center for official designations. A paper about the discoveries has also been accepted to The Astronomical Journal.

The more objects that are found at extreme distances, the better the chance of constraining the location of the ninth planet that Sheppard and Trujillo first predicted to exist far beyond Pluto (itself no longer classified as a planet) in 2014. The placement and orbits of small, so-called extreme trans-Neptunian objects can help narrow down the size and distance from the Sun of the predicted ninth planet, because that planet's gravity influences the movements of the smaller objects far beyond Neptune. The objects are called trans-Neptunian because they orbit the Sun at greater distances than Neptune does.


An illustration of the orbits of the new and previously known extremely distant solar system objects. The clustering of most of their orbits indicates that they are likely to be influenced by something massive and very distant, the proposed Planet X.

Credit: Robin Dienel.


In 2014, Sheppard and Trujillo announced the discovery of 2012 VP113 (nicknamed "Biden"), which has the most-distant known orbit in our Solar System. At that time, Sheppard and Trujillo also noticed that the handful of known extreme trans-Neptunian objects all cluster with similar orbital angles. This led them to predict that a planet exists at more than 200 times our distance from the Sun, with a mass possibly ranging from several Earths to a Neptune equivalent, shepherding these smaller objects into similar types of orbits.

Some have called this Planet X or Planet 9. Further work since 2014 has strengthened the case that this massive ninth planet exists by further constraining its possible properties. Analysis of the orbits of "neighboring" small bodies suggests that it is several times more massive than Earth, possibly as much as 15 times more so, and that at the closest point of its extremely stretched, oblong orbit it is at least 200 times farther from the Sun than Earth. (This is over 5 times more distant than Pluto.)

"Objects found far beyond Neptune hold the key to unlocking our Solar System's origins and evolution," Sheppard explained. "Though we believe there are thousands of these small objects, we haven't found very many of them yet, because they are so far away. The smaller objects can lead us to the much bigger planet we think exists out there. The more we discover, the better we will be able to understand what is going on in the outer Solar System."

Sheppard and Trujillo, along with David Tholen of the University of Hawaii, are conducting the largest, deepest survey for objects beyond Neptune and the Kuiper Belt and have covered nearly 10 percent of the sky to date using some of the largest and most advanced telescopes and cameras in the world, such as the Dark Energy Camera on the NOAO 4-meter Blanco telescope in Chile and the Japanese Hyper Suprime Camera on the 8-meter Subaru telescope in Hawaii. As they find and confirm extremely distant objects, they analyze whether their discoveries fit into the larger theories about how interactions with a massive distant planet could have shaped the outer Solar System.

"Right now we are dealing with very low-number statistics, so we don't really understand what is happening in the outer Solar System," Sheppard said. "Greater numbers of extreme trans-Neptunian objects must be found to fully determine the structure of our outer Solar System."


An artist's conception of Planet X
Courtesy of Robin Dienel.


According to Sheppard, "we are now in a similar situation as in the mid-19th century when Alexis Bouvard noticed Uranus' orbital motion was peculiar, which eventually led to the discovery of Neptune."

The new objects they have submitted to the Minor Planet Center for designation include 2014 SR349, which adds to the class of the rare extreme trans-Neptunian objects. It exhibits similar orbital characteristics to the previously known extreme bodies whose positions and movements led Sheppard and Trujillo to initially propose the influence of Planet X.

Another new extreme object they found, 2013 FT28, has some characteristics similar to the other extreme objects but also some differences. The orbit of an object is defined by six parameters. The clustering of several of these parameters is the main argument for a ninth planet to exist in the outer solar system. 2013 FT28 shows similar clustering in some of these parameters (its semi-major axis, eccentricity, inclination, and argument of perihelion angle, for angle enthusiasts out there) but one of these parameters, an angle called the longitude of perihelion, is different from that of the other extreme objects, which makes that particular clustering trend less strong.
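To make the clustering argument concrete: one simple way to ask whether a set of angles (such as arguments of perihelion) cluster is to average them as unit vectors and measure the length of the resultant, which approaches 1 for tightly grouped angles and 0 for angles scattered at random. The sketch below uses made-up angles, not the actual orbital elements of these objects.

# Simple circular-statistics check for clustering of orbital angles.
# The angle lists are invented for illustration, not measured orbital elements.
import math

def clustering_strength(angles_deg):
    """Mean resultant length: near 1 for tightly clustered angles, near 0 for scattered ones."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg) / len(angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg) / len(angles_deg)
    return math.hypot(x, y)

clustered = [310, 318, 327, 340, 295]   # hypothetical arguments of perihelion
scattered = [12, 95, 170, 260, 330]

print(f"clustered set: {clustering_strength(clustered):.2f}")   # close to 1
print(f"scattered set: {clustering_strength(scattered):.2f}")   # much closer to 0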

Another discovery, 2014 FE72, is the first distant Oort Cloud object found with an orbit entirely beyond Neptune. Its orbit takes it so far away from the Sun (some 3000 times farther than Earth) that it is likely being influenced by gravitational forces from beyond our Solar System, such as other stars and the galactic tide. It is the first object observed at such a large distance.
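For orbits around the Sun, Kepler's third law ties an orbit's size to its period: with the semi-major axis expressed in astronomical units, the period in years is approximately the axis raised to the power 3/2. The text gives only the object's farthest point (about 3000 times the Earth-Sun distance), not its semi-major axis, so the values below are purely illustrative of how long such distant orbits take.

# Kepler's third law for orbits around the Sun: period [years] ~ a [AU] ** 1.5.
# These semi-major axes are illustrative, not the measured orbit of 2014 FE72.
for a_au in (100, 500, 1500):
    period_years = a_au ** 1.5
    print(f"a = {a_au:>4} AU  ->  period of roughly {period_years:,.0f} years")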



Contacts and sources: 
Scott Sheppard
Carnegie Institution for Science