Unseen Is Free

Saturday, January 30, 2016

Does Time Jiggle? Quantum Asymmetry Suggests a Deeper Origin of Time Evolution in a Difference Between the Past and Future

New research from Griffith University's Centre for Quantum Dynamics is broadening perspectives on time and space.

In a paper published in the prestigious journal Proceedings of the Royal Society A, Associate Professor Joan Vaccaro challenges the long-held presumption that time evolution -- the incessant unfolding of the universe over time -- is an elemental part of Nature.

In the paper, entitled Quantum asymmetry between time and space, she suggests there may be a deeper origin due to a difference between the two directions of time: to the future and to the past.

"If you want to know where the universe came from and where it's going, you need to know about time," says Associate Professor Vaccaro.

Associate Professor Joan Vaccaro, of Griffith University's Centre for Quantum Dynamics
Credit: Griffith University

"Experiments on subatomic particles over the past 50 years show that Nature doesn't treat both directions of time equally.

"In particular, subatomic particles called K and B mesons behave slightly differently depending on the direction of time.

"When this subtle behavior is included in a model of the universe, what we see is the universe changing from being fixed at one moment in time to continuously evolving.

"In other words, the subtle behavior appears to be responsible for making the universe move forwards in time.

"Understanding how time evolution comes about in this way opens up a whole new view on the fundamental nature of time itself.

"It may even help us to better understand bizarre ideas such as travelling back in time."

According to the paper, an asymmetry exists between time and space in the sense that physical systems inevitably evolve over time whereas there is no corresponding ubiquitous translation over space.

This asymmetry, long presumed to be elemental, is represented by equations of motion and conservation laws that operate differently over time and space.

However, Associate Professor Vaccaro used a "sum-over-paths formalism" to demonstrate the possibility of a time and space symmetry, meaning the conventional view of time evolution would need to be revisited.

"In the connection between time and space, space is easier to understand because it's simply there. But time is forever forcing us towards the future," says Associate Professor Vaccaro.

"Yet while we are indeed moving forward in time, there is also always some movement backwards, a kind of jiggling effect, and it is this movement I want to measure using these K and B mesons."

Associate Professor Vaccaro says the research provides a solution to the origin of dynamics, an issue that has long perplexed science.



Contacts and sources:
Michael Jacobson

Phase of The Moon Affects Amount of Rainfall


When the moon is high in the sky, it creates bulges in the planet's atmosphere that create imperceptible changes in the amount of rain that falls below.

New University of Washington research to be published in Geophysical Research Letters shows that the lunar forces affect the amount of rain - though very slightly.

"As far as I know, this is the first study to convincingly connect the tidal force of the moon with rainfall," said corresponding author Tsubasa Kohyama, a UW doctoral student in atmospheric sciences.

Satellite data over the tropics, between 10 degrees S and 10 degrees N, shows a slight dip in rainfall when the moon is directly overhead or underfoot. The top panel shows the air pressure, the middle shows the rate of change in air pressure, and the bottom shows the rainfall difference from the average. The change is 0.78 micrometers, or less than one ten thousandth of an inch, per hour.

Credit: Tsubasa Kohyama/University of Washington

Kohyama was studying atmospheric waves when he noticed a slight oscillation in the air pressure. He and co-author John (Michael) Wallace, a UW professor of atmospheric sciences, spent two years tracking down the phenomenon.

Air pressure changes linked to the phases of the moon were first detected in 1847, and temperature changes in 1932, in ground-based observations. An earlier paper by the UW researchers used a global grid of data to confirm that air pressure on the surface definitely varies with the phases of the moon.

"When the moon is overhead or underfoot, the air pressure is higher," Kohyama said.

Their new paper is the first to show that the moon's gravitational tug also puts a slight damper on the rain.

When the moon is overhead, its gravity causes Earth's atmosphere to bulge toward it, so the pressure or weight of the atmosphere on that side of the planet goes up. Higher pressure increases the temperature of air parcels below. Since warmer air can hold more moisture, the same air parcels are now farther from their moisture capacity.

"It's like the container becomes larger at higher pressure," Kohyama said. The relative humidity affects rain, he said, because "lower humidity is less favorable for precipitation."
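Kohyama's chain of reasoning - higher pressure warms the air, and warmer air sits farther below its moisture capacity - can be sketched numerically. The following is a minimal illustration, not the study's calculation: the parcel values and the size of the tidal pressure bump are assumed for illustration, and the Magnus formula is a standard approximation for saturation vapor pressure.

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure, in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Illustrative tropical air parcel (assumed values, not from the study)
p = 1010.0   # surface pressure, hPa
t = 27.0     # temperature, deg C
e = 28.0     # actual vapor pressure of the parcel, hPa

# A tiny lunar-tide pressure bump compresses the parcel adiabatically,
# warming it by dT/T = (R/cp) * dp/p, with dry-air R/cp = 0.286.
dp = 0.1     # assumed tidal pressure perturbation, hPa
dt = 0.286 * (dp / p) * (t + 273.15)

rh_before = e / saturation_vapor_pressure(t)
rh_after = e / saturation_vapor_pressure(t + dt)

print(f"warming: {dt * 1000:.1f} millikelvin")
print(f"relative humidity: {rh_before:.5f} -> {rh_after:.5f}")
```

Even a pressure perturbation of a tenth of a hectopascal warms the parcel by only a few thousandths of a degree, which is consistent with the paper's point that the effect on rainfall is real but far too small to notice.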

Kohyama and Wallace used 15 years of data collected by NASA and the Japan Aerospace Exploration Agency's Tropical Rainfall Measuring Mission satellite from 1998 to 2012 to show that the rain is indeed slightly lighter when the moon is high. The change is only about 1 percent of the total rainfall variation, though, so not enough to affect other aspects of the weather or for people to notice the difference.

"No one should carry an umbrella just because the moon is rising," Kohyama said. Instead, this effect could be used to test climate models, he said, to check if their physics is good enough to reproduce how the pull of the moon eventually leads to less rain.

Wallace plans to continue exploring the topic to see whether certain categories of rain, like heavy downpours, are more susceptible to the phases of the moon, and whether the frequency of rainstorms shows any lunar connection.


Contacts and sources:
Hannah Hickey
 University of Washington 

NASA: Understanding the Magnetic Sun

The surface of the sun writhes and dances. Far from the still, whitish-yellow disk it appears to be from the ground, the sun sports twisting, towering loops and swirling cyclones that reach into the solar upper atmosphere, the million-degree corona - but these cannot be seen in visible light. Then, in the 1950s, we got our first glimpse of this balletic solar material, which emits light only in wavelengths invisible to our eyes.

This comparison shows the relative complexity of the solar magnetic field between January 2011 (left) and July 2014. In January 2011, three years after solar minimum, the field is still relatively simple, with open field lines concentrated near the poles. At solar maximum, in July 2014, the structure is much more complex, with closed and open field lines poking out all over – ideal conditions for solar explosions.
Credits: NASA's Goddard Space Flight Center/Bridgman

Once this dynamic system was spotted, the next step was to understand what caused it. For this, scientists have turned to a combination of real time observations and computer simulations to best analyze how material courses through the corona. We know that the answers lie in the fact that the sun is a giant magnetic star, made of material that moves in concert with the laws of electromagnetism.

NASA Goddard solar scientist Holly Gilbert explains a computer model of the sun’s magnetic field.
Credits: NASA's Goddard Space Flight Center/Duberstein
"We're not sure exactly where in the sun the magnetic field is created," said Dean Pesnell, a space scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "It could be close to the solar surface or deep inside the sun - or over a wide range of depths."

Getting a handle on what drives that magnetic system is crucial for understanding the nature of space throughout the solar system: The sun's magnetic field is responsible for everything from the solar explosions that cause space weather on Earth - such as auroras - to the interplanetary magnetic field and radiation through which our spacecraft journeying around the solar system must travel.

So how do we even see these invisible fields? First, we observe the material on the sun. The sun is made of plasma, a gas-like state of matter in which electrons and ions have separated, creating a super-hot mix of charged particles. When charged particles move, they naturally create magnetic fields, which in turn have an additional effect on how the particles move. The plasma in the sun therefore sets up a complicated system of cause and effect in which plasma flowing inside the sun - churned up by the enormous heat produced by nuclear fusion at the center of the sun - creates the sun's magnetic fields. This system is known as the solar dynamo.

We can observe the shape of the magnetic fields above the sun's surface because they guide the motion of that plasma - the loops and towers of material in the corona glow brightly in EUV images. Additionally, the footpoints on the sun's surface, or photosphere, of these magnetic loops can be more precisely measured using an instrument called a magnetograph, which measures the strength and direction of magnetic fields.

This video was captured in extreme ultraviolet wavelengths of 171 angstroms. Though typically invisible to our eyes, the extreme ultraviolet images are colorized here in gold.


Next, scientists turn to models. They combine their observations - measurements of the magnetic field strength and direction on the solar surface - with an understanding of how solar material moves and magnetism to fill in the gaps. Simulations such as the Potential Field Source Surface, or PFSS, model - shown in the accompanying video - can help illustrate exactly how magnetic fields undulate around the sun. Models like PFSS can give us a good idea of what the solar magnetic field looks like in the sun's corona and even on the sun's far side.
...
A complete understanding of the sun's magnetic field - including exactly how it's generated and how it's structured deep inside the sun - has not yet been achieved, but scientists do know quite a bit. For one thing, the solar magnetic system is known to drive the approximately 11-year activity cycle on the sun.

With every eruption, the sun's magnetic field smooths out slightly until it reaches its simplest state. At that point the sun experiences what's known as solar minimum, when solar explosions are least frequent. From that point, the sun's magnetic field grows more complicated over time until it peaks at solar maximum, some 11 years after the previous solar maximum.

"At solar maximum, the magnetic field has a very complicated shape with lots of small structures throughout - these are the active regions we see," said Pesnell. "At solar minimum, the field is weaker and concentrated at the poles. It's a very smooth structure that doesn't form sunspots."

Take a look at the side-by-side comparison to see how the magnetic fields changed, grew and subsided from January 2011 to July 2014. You can see that the magnetic field is much more concentrated near the poles in 2011, three years after solar minimum. By 2014, the magnetic field had become more tangled and disorderly, making conditions ripe for solar events like flares and coronal mass ejections.


Contacts and sources: 
Karen Fox
NASA's Goddard Space Flight Center

Rare Dinosaur from Appalachia Found by Amateur Fossil Hunters

An international team of researchers has identified and named a new species of dinosaur that is the most complete, primitive duck-billed dinosaur to ever be discovered in the eastern United States.

This new discovery also shows that duck-billed dinosaurs originated in the eastern United States - part of a landmass then broadly referred to as Appalachia - before dispersing to other parts of the world. The research team outlined its findings in the Journal of Vertebrate Paleontology.

"This is a really important animal in telling us how they came to be and how they spread all over the world," said Florida State University Professor of Biological Science Gregory Erickson, one of the researchers on the team.

The remains of the dinosaur are on display in McWane Science Center.
Photo courtesy of Jun Ebersole, McWane Science Center

They named the new dinosaur Eotrachodon orientalis, which means "dawn rough tooth from the east." The name pays homage to "Trachodon," the first duck-billed dinosaur ever named, in 1856.

This duck-billed dinosaur -- also known as a hadrosaurid -- was probably 20 to 30 feet long as an adult, mostly walked on its hind legs though it could come down on all fours to graze on plants with its grinding teeth, and had a scaly exterior. But what set it apart is that it had a large crest on its nose.

"This thing had a big ugly nose," Erickson said.

That large crest on the nose, plus indentations found in the skull and its unique teeth alerted Erickson and his colleagues from McWane Science Center in Birmingham, Ala., and the University of Bristol in the United Kingdom that the skeleton they had was something special.



The skeletal remains of this 83-million-year-old dinosaur were originally found in marine sediment alongside a creek in Montgomery County, Alabama, by a team of amateur fossil enthusiasts. Dinosaurs from the South are extremely rare. A set with a complete skull is an even more extraordinary find. The dinosaur likely was washed out to sea by river or stream sediments after it died. When the group realized they had potentially discovered something of scientific importance, they contacted McWane Science Center in Birmingham, which dispatched a team to the site to carefully remove the remains from the surrounding rock.

After the bones were prepared and cleaned at McWane Science Center and the University of West Alabama, they were studied by a team of paleontologists including Erickson, former FSU doctoral student Albert Prieto-Marquez who is now at the University of Bristol, and Jun Ebersole, director of collections at McWane Science Center. Among the recovered remains of this new dinosaur are a complete skull, dozens of backbones, a partial hip bone and a few bones from the limbs.

It is one of the most complete dinosaur skeletons ever found in the eastern United States. Its teeth, which show this dinosaur's remarkable ability to grind up plants much as cows and horses do, were present in early hadrosaurids, allowing them to consume a wide variety of plants as the group radiated around the world.

During the late Cretaceous Period, roughly 85 million years ago, North America was divided in half by a seaway some 1,000 miles wide that connected the Gulf of Mexico to the Arctic Ocean. This body of water created two North American landmasses: Laramidia to the west and Appalachia to the east.


The area of what was considered Appalachia is a bit wider than what we call Appalachia today. It began roughly in Georgia and Alabama and stretched all the way north into Canada.

The remains were found by amateur fossil hunters who contacted McWane Science Center. They are on display in Jun Ebersole's lab at the center.

Photo courtesy of Jun Ebersole, McWane Science Center.

"For roughly 100 million years, the dinosaurs were not able to cross this barrier," Ebersole said. "The discovery of Eotrachodon suggests that duck-billed dinosaurs originated in Appalachia and dispersed to other parts of the world at some point after the seaway lowered, opening a land corridor to western North America."

Added Erickson: "They just needed to get off the island. From there, they became the cows of the Cretaceous."

Erickson brought some bone samples and teeth back to his lab at Florida State for further analysis. He found it difficult to pinpoint the exact age of the dinosaur because no growth lines appeared in the bone samples. However, the highly vascularized bones show that it was growing very rapidly at the time of death, akin to a teenager, and stood to get much larger -- perhaps 20-30 feet in length, which is typical of duck-billed dinosaurs found elsewhere.

The remains of Eotrachodon are housed at McWane Science Center in Birmingham and are currently on display in Ebersole's laboratory for the general public to view.



Contacts and sources:
Kathleen Haughney,
Florida State University  

Descendants of Black Death Confirmed as Source of Repeated European Plague Outbreaks

An international team of researchers has uncovered new information about the Black Death in Europe and its descendants, suggesting it persisted on the continent over four centuries, re-emerging to kill hundreds of thousands in Europe in separate, devastating waves.

The Triumph of Death 

Credit: painting by Pieter Bruegel the Elder 

The findings address the longstanding debate among scientists about whether or not the bacterium Yersinia pestis -- responsible for the Black Death -- remained within Europe for hundreds of years and was the principal cause of some of the worst re-emergences and subsequent plague epidemics in human history.

Until now, some researchers believed repeated outbreaks were the result of the bacterium being re-introduced through major trade with China, a widely-known reservoir of the plague. Instead, it turns out the plague may never have left.

"The more plague genomes we have from these disparate time periods, the better we are able to reconstruct the evolutionary history of this pathogen," says evolutionary geneticist Hendrik Poinar, director of McMaster University's Ancient DNA Centre and a principal investigator at the Michael G. DeGroote Institute for Infectious Disease Research.

Poinar collaborated with Edward Holmes at the University of Sydney, Olivier Dutour of the École Pratique des Hautes Études in France, Kirsten Bos and Johannes Krause at the University of Tübingen, and others, to map the complete genomes of Y. pestis harvested from five adult male victims of the 1722 Plague of Provence.

To do so, they analyzed the dental pulp taken from the five bodies, originally buried in Marseille, France. Researchers were able to extract, purify and enrich specifically for the pathogen's DNA, and then compare the samples with over 150 plague genomes representing a worldwide distribution as well as other points in time, both modern and ancient.

Inspired by the Black Death, the Dance of Death, or Danse Macabre, an allegory on the universality of death, was a common painting motif in the late medieval period.

Credit:  History Today 

By comparing and contrasting the samples, researchers determined the Marseille strain is a direct descendant of the Black Death that devastated Europe nearly 400 years earlier, and not a divergent strain that arrived through a separate emergence from Asia, as the earlier Justinian and Black Death pandemic strains had.

More extensive sampling of modern rodent populations, in addition to ancient human and rodent remains from various regions in Asia, the Caucasus and Europe, may yield additional clues about past ecological niches for plague.

"There are many unresolved questions that need to be answered: why did the plague erupt in these devastating waves and then lay dormant? Did it linger in the soil or did it re-emerge in rats? And ultimately why did it suddenly disappear and never come back? Sadly, we don't have the answer to this yet," says Poinar.

"Understanding the evolution of the plague will be critically important as antibiotic resistance becomes a greater threat, particularly since we treat modern-day plague with standard antibiotics. Without methods of treatment, easily treatable infections can become devastating again," he says.

The research was published online in the journal eLife.
 


Contacts and sources:
Michelle Donovan
McMaster University
 

Ancient Extinction of Giant Australian Bird Points to Humans

The first direct evidence that humans played a substantial role in the extinction of the huge, wondrous beasts inhabiting Australia some 50,000 years ago -- in this case a 500-pound bird -- has been discovered by a University of Colorado Boulder-led team.

The flightless bird, known as Genyornis newtoni, was nearly 7 feet tall and appears to have lived in much of Australia prior to the establishment of humans on the continent 50,000 years ago, said CU-Boulder Professor Gifford Miller. 

The evidence consists of diagnostic burn patterns on Genyornis eggshell fragments that indicate humans were collecting and cooking its eggs, thereby reducing the birds' reproductive success.

An illustration of a giant flightless bird known as Genyornis newtoni, surprised on her nest by a 1-ton predatory lizard named Megalania prisca, in Australia roughly 50,000 years ago.
Credit: Illustration by Peter Trusler, Monash University

"We consider this the first and only secure evidence that humans were directly preying on now-extinct Australian megafauna," said Miller, associate director of CU-Boulder's Institute of Arctic and Alpine Research. "We have documented these characteristically burned Genyornis eggshells at more than 200 sites across the continent."

A paper on the subject appears online Jan. 29 in Nature Communications.

Analyzing unburned Genyornis eggshells from more than 2,000 localities across Australia, primarily from sand dunes where the ancient birds nested, researchers used several dating methods to determine that none were younger than about 45,000 years old. Burned eggshell fragments from more than 200 of those sites, some only partially blackened, suggest pieces were exposed to a wide range of temperatures, said Miller, a professor in CU-Boulder's Department of Geological Sciences.

Optically stimulated luminescence dating, a method used to determine when quartz grains enclosing the eggshells were last exposed to sunlight, limits the time range of burned Genyornis eggshell to between 54,000 and 44,000 years ago. Radiocarbon dating indicated the burnt eggshell was no younger than about 47,000 years old.

The blackened fragments were likely burned in transient, human fires -- presumably to cook the eggs -- rather than in wildfires, he said.

Amino acids -- the building blocks of proteins -- decompose in a predictable fashion inside eggshells over time. In eggshell fragments burned at one end but not the other, there is a tell-tale "gradient" from total amino acid decomposition to minimal amino acid decomposition, he said. Such a gradient could only be produced by a localized heat source, likely an ember, and not from the sustained high heat produced regularly by wildfires on the continent both in the distant past and today.

Miller also said the researchers found many of the burnt Genyornis eggshell fragments in tight clusters less than 10 feet in diameter, with no other eggshell fragments nearby. Some individual fragments from the same clusters had heat gradient differences of nearly 1,000 degrees Fahrenheit, conditions virtually impossible to reproduce with natural wildfires there, he said.

"We can't come up with a scenario that a wildfire could produce those tremendous gradients in heat," Miller said. "We instead argue that the conditions are consistent with early humans harvesting Genyornis eggs, cooking them over fires, and then randomly discarding the eggshell fragments in and around their cooking fires."

Another line of evidence for early human predation on Genyornis eggs is the presence of ancient, burned eggshells of emus -- flightless birds weighing only about 100 pounds and which still exist in Australia today -- in the sand dunes. Emu eggshells exhibiting burn patterns similar to Genyornis eggshells first appear on the landscape about 50,000 years ago, signaling they most likely were scorched after humans arrived in Australia, and are found fairly consistently to modern times, Miller said.

The Genyornis eggs are thought to have been roughly the size of a cantaloupe and weighed about 3.5 pounds, Miller said.

Genyornis roamed the Australian outback with an astonishing menagerie of other now-extinct megafauna that included a 1,000-pound kangaroo, a 2-ton wombat, a 25-foot-long lizard, a 300-pound marsupial lion and a Volkswagen-sized tortoise. More than 85 percent of Australia's mammals, birds and reptiles weighing over 100 pounds went extinct shortly after the arrival of the first humans.

The demise of the ancient megafauna in Australia (and on other continents, including North America) has been hotly debated for more than a century, swaying between human predation, climate change and a combination of both, said Miller. While some still hold fast to the climate change scenario -- specifically the continental drying in Australia from about 60,000 to 40,000 years ago -- neither the rate nor magnitude of that change was as severe as earlier climate shifts in Australia during the Pleistocene epoch, which lacked the punch required to knock off the megafauna, said Miller.

Miller and others suspect Australia's first inhabitants traveled to the northern coast of the continent on rafts launched from Indonesian islands several hundred miles away. "We will never know the exact time window humans arrived on the continent," he said. "But there is reliable evidence they were widely dispersed across the continent before 47,000 years ago."

Evidence of megafauna hunting in Australia is very difficult to find, in part because the megafauna there are so much older than New World megafauna and in part because fossil bones are easily destroyed by the chemistry of Australian soils, said Miller.

"In the Americas, early human predation on the giant animals is clear -- stone spear heads are found embedded in mammoth bones, for example," said Miller. "The lack of clear evidence regarding human predation on the Australia megafauna had, until now, been used to suggest no human-megafauna interactions occurred, despite evidence that most of the giant animals still roamed Australia when humans colonized the continent."


Contacts and sources:
Gifford Miller
University of Colorado - Boulder

Moon Formed By Head-On Collision Between Earth and Theia 4.5 Billion Years Ago

The moon was formed by a violent, head-on collision between the early Earth and a "planetary embryo" called Theia approximately 100 million years after the Earth formed, UCLA geochemists and colleagues report. The UCLA-led research reconstructs the massive crash, which took place 4.5 billion years ago.

The extremely similar chemical composition of rocks on the Earth and moon helped scientists determine that a head-on collision, not a glancing blow, took place between Earth and Theia.
Credit: NASA/JPL-Caltech

Scientists had already known about this high-speed crash, which occurred almost 4.5 billion years ago, but many thought the Earth collided with Theia (pronounced THAY-eh) at an angle of 45 degrees or more -- a powerful side-swipe (simulated in this 2012 YouTube video). 

This is a simulation of a larger impactor, about the size of Mars, striking the early Earth. The different colors correspond to different materials and layers within the Earth and the impactor. It shows how the impact threw a great deal of one particular layer (the yellow color in this picture) into space, which explains the homogeneous nature of the moon in comparison to Earth. The material in space toward the end of the animation would then coalesce to form a single body, a fairly well understood process.

New evidence reported Jan. 29 in the journal Science substantially strengthens the case for a head-on assault.

The researchers analyzed seven rocks brought to the Earth from the moon by the Apollo 12, 15 and 17 missions, as well as six volcanic rocks from the Earth's mantle -- five from Hawaii and one from Arizona.

The key to reconstructing the giant impact was a chemical signature revealed in the rocks' oxygen atoms. (Oxygen makes up 90 percent of rocks' volume and 50 percent of their weight.) More than 99.9 percent of Earth's oxygen is O-16, so called because each atom contains eight protons and eight neutrons. But there also are small quantities of heavier oxygen isotopes: O-17, which has one extra neutron, and O-18, which has two extra neutrons. Earth, Mars and other planetary bodies in our solar system each have a unique ratio of O-17 to O-16 -- each one a distinctive "fingerprint."
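In practice, geochemists report such fingerprints in "delta" notation: the deviation of a sample's O-17/O-16 ratio from a reference standard, expressed in parts per thousand (per mil). A minimal sketch of the comparison being made here - the ratios below are invented for illustration, not measured values from the study:

```python
# Delta notation: deviation of a sample's O-17/O-16 ratio from a reference
# standard, in parts per thousand (per mil). All ratios here are invented
# illustrative numbers, not measurements from the study.
R_REFERENCE = 0.0003799          # assumed reference O-17/O-16 ratio

def delta17(r_sample):
    """delta-17O of a sample relative to the reference, in per mil."""
    return (r_sample / R_REFERENCE - 1.0) * 1000.0

earth_rock = 0.00037998          # hypothetical terrestrial basalt
moon_rock = 0.00037998           # hypothetical Apollo sample

diff = abs(delta17(earth_rock) - delta17(moon_rock))
# "Indistinguishable" amounts to saying this difference is smaller
# than the measurement uncertainty of the mass spectrometer.
print(f"Earth-moon delta-17O difference: {diff:.4f} per mil")
```

A glancing-blow moon made mostly of Theia would be expected to show a nonzero difference here; the head-on, well-mixed scenario predicts essentially zero, which is what the new measurements found.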

In 2014, a team of German scientists reported in Science that the moon also has its own unique ratio of oxygen isotopes, different from Earth's. The new research finds that is not the case.

This image shows from left Paul Warren, Edward Young and Issaku Kohl. Young is holding a sample of a rock from the moon.

Credit: Christelle Snow/UCLA

"We don't see any difference between the Earth's and the moon's oxygen isotopes; they're indistinguishable," said Edward Young, lead author of the new study and a UCLA professor of geochemistry and cosmochemistry.

Young's research team used state-of-the-art technology and techniques to make extraordinarily precise and careful measurements, and verified them with UCLA's new mass spectrometer.

The fact that oxygen in rocks on the Earth and our moon share chemical signatures was very telling, Young said. Had Earth and Theia collided in a glancing side blow, the vast majority of the moon would have been made mainly of Theia, and the Earth and moon should have different oxygen isotopes. A head-on collision, however, likely would have resulted in similar chemical composition of both Earth and the moon.

Lunar rock found in northwest Africa
Credit: Christelle Snow/UCLA
 
"Theia was thoroughly mixed into both the Earth and the moon, and evenly dispersed between them," Young said. "This explains why we don't see a different signature of Theia in the moon versus the Earth."

Theia, which did not survive the collision (except that it now makes up large parts of Earth and the moon), was growing and probably would have become a planet if the crash had not occurred, Young said. Young and some other scientists believe the planet was approximately the same size as the Earth; others believe it was smaller, perhaps more similar in size to Mars.

Another interesting question is whether the collision with Theia removed any water that the early Earth may have contained. After the collision -- perhaps tens of millions of years later -- small asteroids likely hit the Earth, including ones that may have been rich in water, Young said. Collisions of growing bodies occurred very frequently back then, he said, although Mars avoided large collisions.

A head-on collision was initially proposed in 2012 by Matija Cuk, now a research scientist with the SETI Institute, and Sarah Stewart, now a professor at UC Davis; and, separately during the same year by Robin Canup of the Southwest Research Institute.


Contacts and sources:  
Stuart Wolpert
UCLA

Thursday, January 28, 2016

Babylonian Astronomers Used Geometry To Plot Jupiter's Position 1400 Years Before Europeans

A scientist of the Excellence Cluster TOPOI discovered that Babylonian astronomers computed the position of Jupiter with geometric methods.

This is revealed by an analysis of three published and two unpublished cuneiform tablets from the British Museum by Prof. Mathieu Ossendrijver, historian of science of the Humboldt-Universität zu Berlin. The tablets date from the period between 350 and 50 BCE.

Historians of science had thus far assumed that geometrical computations of the kind found on these tablets were first carried out in the 14th century. Moreover, it was assumed that Babylonian astronomers used only arithmetical methods. "The new interpretation reveals that Babylonian astronomers also used geometrical methods", says Mathieu Ossendrijver. His results are published in the current issue of the journal Science.

On four of these tablets the distance covered by Jupiter is computed as the area of a figure that represents how its velocity changes with time. None of the tablets contains drawings but, as Mathieu Ossendrijver explains, the texts describe the figure of which the area is computed as a trapezoid. Two of these so-called trapezoid texts had been known since 1955, but their meaning remained unclear, even after two further tablets with these operations were discovered in recent years.

New interpretation was now prompted by a newly discovered fifth tablet

One reason for this was the damaged state of the tablets, which were excavated unscientifically in Babylon, near its main temple Esagila, in the 19th century. Another reason was that the calculations could not be connected to a particular planet. The new interpretation of the trapezoid texts was prompted by a newly discovered, almost completely preserved fifth tablet. Hermann Hunger, a retired Professor of Assyriology from Vienna who visited the Excellence Cluster TOPOI in 2014, drew Mathieu Ossendrijver's attention to this tablet and presented him with an old photograph of it made in the British Museum.

The new tablet does not mention a trapezoid figure, but it does contain a computation that is mathematically equivalent to the other ones. This computation can be uniquely assigned to the planet Jupiter. With this new insight, the other, thus far incomprehensible tablets could also be deciphered.

Left: Cuneiform tablet with calculations involving a trapezoid. Right: A visualization of trapezoid procedure on the tablet: The distance traveled by Jupiter after 60 days, 10º45', is computed as the area of the trapezoid. The trapezoid is then divided into two smaller ones in order to find the time (tc) in which Jupiter covers half this distance.



Figure: Mathieu Ossendrijver (HU)

In all five tablets, Jupiter's daily displacement and its total displacement along its orbit, both expressed in degrees, are described for the first 60 days after Jupiter becomes visible as a morning star. Mathieu Ossendrijver explains: "The crucial new insight provided by the new tablet without the geometrical figure is that Jupiter's velocity decreases linearly within the 60 days. Because of the linear decrease a trapezoidal figure emerges if one draws the velocity against time."

"It is this trapezoidal figure of which the area is computed on the other four tablets", says the historian of science. The area of this figure is explicitly declared to be the distance travelled by Jupiter after 60 days. Moreover, the time when Jupiter covers half this distance is also calculated, by dividing the trapezoid into two smaller ones of equal area.
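The tablet's procedure translates directly into modern arithmetic. In the sketch below, the endpoint velocities are illustrative assumptions chosen so that the trapezoid's area matches the 10º45' (645 arcminutes) quoted in the figure caption; they are not a transcription of the tablet's sexagesimal values.

```python
import math

# Assumed endpoint velocities in arcminutes/day over T = 60 days,
# chosen so the trapezoid area equals 645' = 10º45'.
v0, v1, T = 12.0, 9.5, 60.0

# Distance travelled = area of the velocity-time trapezoid.
total = T * (v0 + v1) / 2.0  # 645.0 arcminutes, i.e. 10º45'

# Time tc at which half the distance is covered: with linearly
# decreasing velocity v(t) = v0 + (v1 - v0) * t / T, the distance is
# s(t) = v0*t + (v1 - v0) * t**2 / (2*T).  Solving s(tc) = total/2
# splits the trapezoid into two smaller trapezoids of equal area.
a = (v1 - v0) / (2.0 * T)                            # quadratic coefficient
tc = (-v0 + math.sqrt(v0**2 + 2.0 * a * total)) / (2.0 * a)

print(total)  # 645.0
print(tc)     # about 28.26 days
```

Because the planet decelerates, the half-distance time falls before day 30, the midpoint of the interval; that is exactly why the two smaller equal-area trapezoids have different widths.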

European scholars used similar techniques

"These computations anticipate the use of similar techniques by European scholars, but they were carried out at least 14 centuries earlier", says Ossendrijver. The so-called Oxford Calculators, a group of scholastic mathematicians who worked at Merton College, Oxford, in the 14th century, are credited with the "Mertonian mean speed theorem". This theorem yields the distance travelled by a uniformly decelerating body, corresponding to the modern formula S = t•(u+v)/2, where u and v are the initial and final velocities.

In the same century Nicole Oresme, a bishop and scholastic philosopher in Paris, devised graphical methods that enabled him to prove this relation. He computed S as the area of a trapezoid of width t and heights u and v. The Babylonian trapezoid procedures can be viewed as concrete examples of the same computation.

Babylonian trapezoid figures exist in an abstract mathematical space

Furthermore, it was hitherto assumed that the astronomers in Babylon used arithmetical methods but no geometrical ones, even though such methods had been common in Babylonian mathematics since 1800 BCE. Ancient Greek astronomers from the period between 350 BCE and 150 CE are also known for their use of geometrical methods. However, the Babylonian trapezoid texts are distinct from the geometrical calculations of their Greek colleagues. The trapezoid figures do not describe configurations in real space; they come about by plotting the velocity of the planet against time. As opposed to the geometrical constructions of the Greek astronomers, the Babylonian trapezoid figures exist in an abstract mathematical space, defined by time on the x-axis and velocity on the y-axis.



Contacts and sources:
Ibou Diop,  Humboldt-Universität zu Berlin
Dr. Nina Diezemann, Press officer, Cluster of Excellence Topoi

Citation: Mathieu Ossendrijver: "Ancient Babylonian astronomers calculated Jupiter's position from the area under a time-velocity graph", in: Science, January 29, 2016.

Weather 3000 Kilometers Below Earth's Surface More Varied Than Expected

The temperature 3,000 kilometers below the surface of the Earth is much more varied than previously thought, scientists have found.

The discovery of the regional variations in the lower mantle where it meets the core, which are up to three times greater than expected, will help scientists explain the structure of the Earth and how it formed.

Tomogram of the lowermost mantle (on top of the core-mantle boundary) centred on the equatorial region north of Australia. Green dots are seismic stations and red dots are earthquakes near the Earth's surface; the mantle is rendered transparent, and ray paths through the interior are shown by solid lines. The stations and earthquakes used in the tomographic inversion are not uniformly distributed across the surface. Blue regions have high seismic velocity and red regions low velocity. The image was made at the ANU NCI Vizlab facility from the team's data and tomographic model of the lowermost mantle.
Credit: ANU

"Where the mantle meets the core is a more dramatic boundary than the surface of the Earth," said the lead researcher, Associate Professor Hrvoje Tkalčic, from The Australian National University (ANU).

"The contrast between the solid mantle and the liquid core is greater than the contrast between the ground and the air. The core is like a planet within a planet," said Associate Professor Tkalčic, a geophysicist in the ANU Research School of Earth Sciences.

"The center of the earth is harder to study than the center of the sun."

Temperatures in the lower mantle reach around 3,000-3,500 degrees Celsius, and pressures reach about 125 gigapascals, roughly one and a quarter million times atmospheric pressure.


Variations in these temperatures and other material properties such as density and chemical composition affect the speed at which waves travel through the Earth.

The team examined measurements of earthquakes from more than 4,000 seismometers around the world.

In a process similar to a CT scan, the team then ran a complex mathematical process to unravel the data and build the most detailed global map of the lower mantle, showing features ranging from as large as an entire hemisphere down to 400 kilometers across.

The team used the TerraWulf high-end computing cluster to generate their map.

Credit: Stuart Hay, ANU

The map showed the seismic speeds varied more than expected over these distances and were probably driven by heat transfer across the core-mantle boundary and radioactivity.

"These images will help us understand how convection connects the Earth's surface with the bottom of the mantle," said Associate Professor Tkalčic.

"These thermal variations also have profound implications for the geodynamo in the core, which creates the Earth's magnetic field."



Contacts and sources:
Dr. Hrvoje Tkalcic
The Australian National University (ANU)

How Queen Bees Control the Princesses By Altering DNA

Queen bees and ants emit a chemical that alters the DNA of their daughters and keeps them as sterile and industrious workers, scientists have found.

"When deprived of the pheromone that queens emit, worker bees and ants become more self-centred and lazy, and they begin to lay eggs," said lead researcher Dr Luke Holman from The Australian National University (ANU).


Credit:  purebeeworks.com
"Amazingly, it looks like the queen pheromone works by chemically altering workers' genes," said Dr Holman, a biologist in the ANU Research School of Biology.

Queen bees and ants can have hundreds of thousands of offspring and live for many years, while workers are short-lived and mostly sterile, even though they have the same DNA as the queen.

Recent research suggests that a chemical modification to a baby bee or ant's DNA, called DNA methylation, helps determine whether the baby develops into a queen or a worker.

Dr Holman collaborated with biologists from the University of Helsinki to investigate whether the queen's pheromone altered DNA methylation in workers.

Dr Luke Holman 
Credit: L. Holman

The team found evidence that indeed, workers exposed to pheromones tag their DNA with methylation differently, which might suppress queenly characteristics in the workers.

Surprisingly, the queen pheromone of honeybees seemed to lower methylation, while the queen pheromone of ants seemed to increase it, suggesting things work differently in bees and ants.

"Bees and ants evolved their two-tier societies independently. It would be confusing but cool if they had evolved different means to the same end," Dr Holman said.

Dr Holman said he was looking forward to studying Australian bees next, which evolved sociality independently from the European species in this study.

"It brings us one step closer to understanding how these animals evolved their amazing cooperative behaviour, which in many ways is a step beyond human evolution," he said.

The research is published in Biology Letters.




Contacts and sources:  
Dr Luke Holman
The Australian National University (ANU).

Largest Solar System Yet Found Features Giant Planet 1 Trillion Kilometers From Mother Star

Astronomers studying a lonely planet drifting through space have found its mum: a star a trillion kilometres away.

The planet, known as 2MASS J2126−8140, has an orbit around its host star that takes nearly a million Earth years and is more than 140 times wider than Pluto's. This makes it easily the largest solar system ever found.

"We were very surprised to find such a low-mass object so far from its parent star," said Dr Simon Murphy of ANU Research School of Astronomy and Astrophysics.

"There is no way it formed in the same way as our solar system did, from a large disc of dust and gas."

An artist's impression of 2MASS J2126. 
Credit: University of Hertfordshire / Neil Cook.

Only a handful of extremely wide pairs of this kind have been found in recent years. The separation of the new pair is 6,900 Astronomical Units (AU) - 1,000,000,000,000 kilometres or about 0.1 light years - nearly three times the previous widest pair, at 2,500 AU (370,000,000,000 km).
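The quoted figures are easy to sanity-check. The conversion below is a sketch, not part of the study, and assumes the standard constants 1 AU ≈ 1.496 × 10⁸ km and 1 light year ≈ 9.461 × 10¹² km:

```python
AU_KM = 1.495978707e8   # kilometres per astronomical unit
LY_KM = 9.4607e12       # kilometres per light year

sep_au = 6900
sep_km = sep_au * AU_KM   # about 1.03e12 km, i.e. roughly a trillion kilometres
sep_ly = sep_km / LY_KM   # about 0.11 light years

ratio = sep_au / 2500     # 2.76, hence "nearly three times" the previous record
```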

2MASS J2126−8140's parent is a red dwarf star called TYC 9486-927-1. At that distance, it would appear as only a moderately bright star in the sky, and light would take about a month to reach the planet.

Dr Murphy is part of an international team of scientists that studied 2MASS J2126−8140, a gas giant planet around 12 to 15 times the mass of Jupiter, as part of a survey of several thousand young stars and brown dwarfs close to our solar system.

False colour infrared image of TYC 9486-927-1 and 2MASS J2126. The arrows show the projected movement of the star and planet on the sky over 1000 years. The scale indicates a distance of 4000 Astronomical Units (AU), where 1 AU is the average distance between the Earth and the Sun. 
Credit: 2MASS/S. Murphy/ANU.

Once they realised 2MASS J2126−8140 and TYC 9486-927-1 were a similar distance from the Earth - about 100 light years - they compared the motion of the two through space and realised they were moving together.

"We can speculate they formed 10 million to 45 million years ago from a filament of gas that pushed them together in the same direction," Dr Murphy said.

"They must not have lived their lives in a very dense environment. They are so tenuously bound together that any nearby star would have disrupted their orbit completely."

In the last five years a number of free-floating planets have been found. These are gas giant worlds like Jupiter that lack the mass for the nuclear reactions that make stars shine, and so they cool and fade over time. Measuring the temperatures of these objects is relatively straightforward, but temperature depends on both mass and age. This means astronomers need to find out how old they are before they can tell whether they are lightweight enough to be planets or are heavier 'failed stars' known as brown dwarfs.

US-based researchers found 2MASS J2126 in an infrared sky survey, flagging it as a possible young and hence low mass object. In 2014 Canadian researchers identified 2MASS J2126 as a possible member of a 45 million year old group of stars and brown dwarfs known as the Tucana Horologium Association. This made it young and low enough in mass to be classified as a free-floating planet.

In the same region of the sky, TYC 9486-927-1 is a star that had been identified as being young, but not as a member of any known group of young stars. Until now no one had suggested that TYC 9486-927-1 and 2MASS J2126 were in some way linked.

Lead author Dr Niall Deacon of the University of Hertfordshire has spent the last few years searching for young stars with companions in wide orbits. As part of the work, his team looked through lists of known young stars, brown dwarfs and free-floating planets to see if any of them could be related. They found that TYC 9486-927-1 and 2MASS J2126 are moving through space together and are both about 104 light years from the Sun, implying that they are associated.

"This is the widest planet system found so far and both the members of it have been known for eight years," said Dr Deacon, "but nobody had made the link between the objects before. The planet is not quite as lonely as we first thought, but it's certainly in a very long distance relationship."

The team then looked at the spectrum – the dispersed light – of the star to measure the strength of a feature caused by the element lithium. This is destroyed early on in a star's life so the more lithium it has, the younger it is. TYC 9486-927-1 has stronger signatures of lithium than a group of 45 million year old stars (the Tucana Horologium Association) but weaker signatures than a group of 10 million year old stars, implying an age between the two.

Based on this age the team was able to estimate the mass of 2MASS J2126, finding it to be between 11.6 to 15 times the mass of Jupiter. This placed it on the boundary between planets and brown dwarfs. It means that 2MASS J2126 has a similar mass, age and temperature to one of the first planets directly imaged around another star, beta Pictoris b.

"Compared to beta Pictoris b, 2MASS J2126 is more than 700 times further away from its host star," said Dr Simon Murphy of the Australian National University, also a study co-author, "but how such a wide planetary system forms and survives remains an open question."

2MASS J2126 is around 7000 Earth-Sun distances or 1 trillion kilometres away from its parent star, giving it the widest orbit of any planet found around another star. At such an enormous distance it takes roughly 900,000 years to complete one orbit, meaning it has completed less than fifty orbits over its lifetime. There is little prospect of any life on an exotic world like this, but any inhabitants would see their 'Sun' as no more than a bright star, and might not even imagine they were connected to it at all.
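The roughly 900,000-year orbit can be recovered from Kepler's third law, P² = a³/M, with P in years, a in AU and M in solar masses. The article does not give the host star's mass, so the red-dwarf mass below is an assumed, illustrative value:

```python
import math

a_au = 6900    # orbital separation in AU, from the article
m_sun = 0.4    # assumed red-dwarf mass in solar masses (illustrative)

# Kepler's third law in years / AU / solar masses: P**2 = a**3 / M
period_yr = math.sqrt(a_au**3 / m_sun)   # about 9e5 years

# Orbits completed over an assumed 45-million-year lifetime
orbits = 45e6 / period_yr                # just under 50 orbits
```

With these assumptions the period comes out near 900,000 years and the orbit count just under fifty, consistent with the figures in the article.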

The research, which will be published in the Monthly Notices of The Royal Astronomical Society, was led by Dr Niall Deacon from University of Hertfordshire and included Dr Joshua Schlieder from the NASA Ames Research Center.



Contacts and sources: 
Dr Phil Dooley

Spanish Missions Triggered Native American Population Collapse

New evidence shows severe and rapid collapse of Pueblo populations occurred in the 17th century and triggered a cascade of ecological effects that ultimately had consequences for global climates.

New interdisciplinary research in the Southwest United States has resolved long-standing debates on the timing and magnitude of American Indian population collapse in the region.

The severe and rapid collapse of Native American populations in what is now the modern state of New Mexico didn't happen upon first contact with Spanish conquistadors in the 1500s, as some scholars thought. Nor was it as gradual as others had contended.

Rather than being triggered by first contact in the 1500s, rapid population loss likely began after Catholic Franciscan missions were built in the midst of native pueblos, resulting in sustained daily interaction with Europeans.


The indirect effects of this demographic impact rippled through the surrounding forests and, perhaps, into our atmosphere.

Those are the conclusions of a new study by a team of scientists looking for the first time at high resolution reconstructions of human population size, tree growth and fire history from the Jemez Mountains of New Mexico.

"Scholars increasingly recognize the magnitude of human impacts on planet Earth; some are even ready to define a new geological epoch called the Anthropocene," said anthropologist and fire expert Christopher Roos, an associate professor at Southern Methodist University, Dallas, and a co-author on the research.

"But it is an open question as to when that epoch began," said Roos. "One argument suggests that indigenous population collapse in the Americas resulted in a reduction of carbon dioxide in the atmosphere because of forest regrowth in the early colonial period. Until now the evidence has been fairly ambiguous. Our results indicate that high-resolution chronologies of human populations, forests and fires are needed to evaluate these claims."

A 2012 photo of standing walls at the ruins of an Ancestral Jemez village that was part of the published study.

Credit: Christopher Roos, SMU

The death toll is a contentious issue in American Indian history; scientists and historians have debated for decades how many Native Americans died and when. With awareness of global warming and interdisciplinary interest in the possible antiquity of the Anthropocene, resolution of that debate may now be relevant for contemporary human-caused environmental problems, Roos said.

Findings of the new study were published Jan. 25, 2016 in the Proceedings of the National Academy of Sciences, "Native American Depopulation, Reforestation, and Fire Regimes in the Southwest U.S., 1492-1900 C.E."

The researchers offer the first absolute population estimate from the archaeology of the Jemez Province -- an area west of Santa Fe and Los Alamos National Laboratory in northern New Mexico. Using airborne remote sensing LiDAR technology to establish the size and shape of rubble mounds from the collapsed architecture of ancestral villages, the researchers were able to quantify population sizes in the 16th century independently of historical documents.

To identify the timing of the population collapse and its impact on forest fires, the scientists also collected tree-ring data sets from locations adjacent to the Ancestral Jemez villages and throughout the forested mountain range. This sampling framework allowed them to refine the timing of depopulation and of fire regime changes across the Jemez Province.

Pueblo of Jemez.

Their findings indicate that large-scale depopulation only occurred after missions were established in their midst by Franciscan priests in the 1620s. Daily sustained interaction resulted in epidemic diseases, violence and famine, the researchers said. From a population of roughly 6,500 in the 1620s, fewer than 900 remained by the 1690s - a loss of more than 85 percent of the population in a few generations.

"The loss of life is staggering," said anthropologist Matthew Liebmann, an associate professor at Harvard University and lead author on the PNAS article.

"Imagine that in a room with 10 people, only one person was left at the end of the day," Liebmann said. "This had devastating effects on the social and economic lives of the survivors. Our research suggests that the effects were felt in the ecology of the forests too."

Other scientists on the team include Josh Farella and Thomas Swetnam, University of Arizona; and Adam Stack and Sarah Martini, Harvard University.

The researchers studied a 100,000-acre area that includes the ancestral pueblo villages of the Jemez (HEY-mehz) people. Located in the Jemez Mountains of north central New Mexico, it's a region in the Santa Fe National Forest of deep canyons, towering flat-topped mesas, as well as rivers, streams and creeks.

Today about 2,000 Jemez tribal members live at the Pueblo of Jemez.

The authors note in their article that "archaeological evidence from the Jemez Province supports the notion that the European colonization of the Americas unleashed forces that ultimately destroyed a staggering number of human lives." However, they note, it fails to support the notion that sweeping pandemics uniformly depopulated North America.

A 2013 photo of Ponderosa pine forests within the study area reported on in the published study.

Credit:  Christopher Roos, SMU

"To better understand the role of the indigenous population collapse on ecological and climate changes, we need this kind of high-resolution paired archaeological and paleoecological data," said Roos. "Until then, a human-caused start to Little Ice Age cooling will remain uncertain. Our results suggest this scenario is plausible, but the nature of European and American Indian relationships, population collapse, and ecological consequences are probably much more complicated and variable than many people had previously understood them to be."


Contacts and sources: 
Margaret Allen
Southern Methodist University

The Connection Between Excess Iron and Parkinson's Disease


It's long been known that excess iron is found in the brains of patients with Parkinson's disease (PD), an incurable neurodegenerative condition that affects motor function. The mechanism by which the iron wreaks damage on neurons involved in PD has not been clear. Research from the Andersen lab at the Buck Institute suggests that the damage stems from an impairment in the lysosome, the organelle that acts as a cellular recycling center for damaged proteins.

Dopaminergic neurons in the human substantia nigra, the cells preferentially lost in Parkinson's disease. The yellow staining represents iron-dependent staining of the neurons.

Credit: Subramanian Rajagopalan, MSc. Buck Institute for Research on Aging

Scientists report the impairment allows excess iron to escape into the neurons where it causes toxic oxidative stress. The research will be published online in The Journal of Neuroscience on Jan. 27, 2016.

Lysosomes are key to a process called autophagy, whereby damaged proteins are broken down into building blocks that are used to make new proteins to take their place. It's the cellular equivalent of recycling. With age, the ability of the lysosome to participate in autophagy slows, resulting in the build-up of non-protein "garbage" within the cells. Less-than-optimal autophagy has been associated with several age-related diseases, including PD.

"It's recently been realized that one of the most important functions of the lysosome is to store iron in a place in the cell where it is not accessible to participate in toxic oxidative stress-producing reactions," said Julie K. Andersen, PhD, senior scientist and Buck Institute faculty. "Now we have demonstrated that a mutation in a lysosomal gene results in the toxic release of iron into the cell resulting in neuronal cell death."

Spearheaded by staff scientist Shankar J. Chinta, PhD, the work (done in both mice and cultured human dopaminergic cells) involved a mutation in a gene (ATP13A2) associated with a rare early onset form of PD called Kufor-Rakeb syndrome. When researchers knocked out ATP13A2, the lysosome was unable to maintain the balance of iron within the cell.

The mutation responsible for Kufor-Rakeb was identified in 2010. Those suffering from the condition, which is named for the village in Jordan where the syndrome was first described, experience disease onset in adolescence. "Mutations in this same gene have also been recently linked to sporadic forms of PD," said Andersen. "This suggests that age-related impairments in lysosomal function that impact the ability of neurons to maintain a healthy balance of iron are part of what underlies the presentation of PD in the general population."

Andersen has a long-standing interest in the role of excess iron in PD and this current work provides an example of the value of basic research in drug discovery. In 2003 her lab showed that tying up excess iron with a metal chelator (derived from the Greek word for claw) protected mice from the ravaging effects of the well-known Parkinson's inducing toxin, MPTP. The study provided an important link between the observed excessive iron in the brains of PD patients and oxidative stress associated with neurodegeneration.

 "The issue with iron chelation is that it's a sledge hammer -- it pulls iron from the cells indiscriminately and iron is needed throughout the body for many biological functions," said Andersen. "Now we have a more specific target that we can hit with a smaller hammer, which could allow us to selectively impact iron toxicity within the affected neurons."

Other Buck scientists involved in the study include Subramanian Rajagopalan and Anand Rane. This work was supported by the National Institutes of Health (RO1 NS047198, NS047198, NS041264, and AG012141).




Contacts and sources:
Kris Rebillot
Buck Institute for Research on Aging


Citation: "Regulation of ATP13A2 via PHD2-HIF1a Signaling is Critical for Cellular Iron Homeostasis: Implications for Parkinson's Disease" DOI: 10.1523/JNEUROSCI.3117-15.2016


Wearable Sweat Sensor To Monitor Your Health

When engineers at the University of California, Berkeley, say they are going to make you sweat, it is all in the name of science.

Specifically, it is for a flexible sensor system that can measure metabolites and electrolytes in sweat, calibrate the data based upon skin temperature and sync the results in real time to a smartphone.

UC Berkeley engineers put their wearable sweat sensors to the test.
UC Berkeley video produced by Roxanne Makasdjian and Stephen McNally, UC Berkeley

While health monitors have exploded onto the consumer electronics scene over the past decade, researchers say this device, reported in the Jan. 28 issue of the journal Nature, is the first fully integrated electronic system that can provide continuous, non-invasive monitoring of multiple biochemicals in sweat.

The advance opens doors to wearable devices that alert users to health problems such as fatigue, dehydration and dangerously high body temperatures.

Users wearing the flexible sensor array can run and move freely while the chemicals in their sweat are measured and analyzed. The resulting data, which is transmitted wirelessly to a mobile device, can be used to help assess and monitor a user's state of health.
Image by Der-Hsien Lien and Hiroki Ota, UC Berkeley

"Human sweat contains physiologically rich information, thus making it an attractive body fluid for non-invasive wearable sensors," said study principal investigator Ali Javey, a UC Berkeley professor of electrical engineering and computer sciences. "However, sweat is complex and it is necessary to measure multiple targets to extract meaningful information about your state of health. In this regard, we have developed a fully integrated system that simultaneously and selectively measures multiple sweat analytes, and wirelessly transmits the processed data to a smartphone. Our work presents a technology platform for sweat-based health monitors."

Javey worked with study co-lead authors Wei Gao and Sam Emaminejad, both of whom are postdoctoral fellows in his lab. Emaminejad also has a joint appointment at the Stanford School of Medicine, and all three have affiliations with the Berkeley Sensor and Actuator Center and the Materials Sciences Division at Lawrence Berkeley National Laboratory.

Chemical clues to a person's physical condition

To help design the sweat sensor system, Javey and his team consulted exercise physiologist George Brooks, a UC Berkeley professor of integrative biology. Brooks said he was impressed when Javey and his team first approached him about the sensor.

The new sensor developed at UC Berkeley can be made into "smart" wristbands or headbands that provide continuous, real-time analysis of the chemicals in sweat.
Credit: UC Berkeley photo by Wei Gao

"Having a wearable sweat sensor is really incredible because the metabolites and electrolytes measured by the Javey device are vitally important for the health and well-being of an individual," said Brooks, a co-author on the study. "When studying the effects of exercise on human physiology, we typically take blood samples. With this non-invasive technology, someday it may be possible to know what's going on physiologically without needle sticks or attaching little, disposable cups on you."

The prototype developed by Javey and his research team packs five sensors onto a flexible circuit board. The sensors measure the metabolites glucose and lactate, the electrolytes sodium and potassium, and skin temperature.

"The integrated system allows us to use the measured skin temperature to calibrate and adjust the readings of other sensors in real time," said Gao. "This is important because the response of glucose and lactate sensors can be greatly influenced by temperature."

Developing smart wristbands and headbands

Adjacent to the sensor array is the wireless printed circuit board with off-the-shelf silicon components. The researchers used more than 10 integrated circuit chips responsible for taking the measurements from the sensors, amplifying the signals, adjusting for temperature changes and wirelessly transmitting the data. The researchers developed an app to sync the data from the sensors to mobile phones, and fitted the device onto "smart" wristbands and headbands.

Wearable sensors measure skin temperature in addition to glucose, lactate, sodium and potassium in sweat. Integrated circuits analyze the data and transmit the information wirelessly to a mobile phone.
Image by Der-Hsien Lien and Hiroki Ota, UC Berkeley

They put the device - and dozens of volunteers - through various indoor and outdoor exercises. Study subjects cycled on stationary bikes or ran outdoors on tracks and trails from a few minutes to more than an hour.

"We can easily shrink this device by integrating all the circuit functionalities into a single chip," said Emaminejad. "The number of biochemicals we target can also be ramped up so we can measure a lot of things at once. That makes large-scale clinical studies possible, which will help us better understand athletic performance and physiological responses to exercise."

Javey noted that a long-term goal is to use the device in population-level studies for medical applications.

Brooks also noted the potential for the device to be used to measure more than perspiration.

"While Professor Javey's wearable, non-invasive technology works well on sweating athletes, there are likely to be many other applications of the technology for measuring vital metabolite and electrolyte levels of healthy persons in daily life," said Brooks. "It can also be adapted to monitor other body fluids for those suffering from illness and injury."



Contacts and sources:
Sarah Yang
University of California, Berkeley

Tuesday, January 26, 2016

Scientists Discover How Pangea Helped Make Coal

The same geologic forces that helped stitch the supercontinent Pangea together also helped form the ancient coal beds that powered the Industrial Revolution.

The consolidation of the ancient supercontinent Pangea 300 million years ago played a key role in the formation of the coal that powered the Industrial Revolution and that is still burned for energy in many parts of the world today, Stanford scientists say.

The finding, published in this week's issue of the journal Proceedings of the National Academy of Sciences, contradicts a popular hypothesis, first formally proposed in the 1990s, that attributes the formation of Carboniferous coal to a 60-million-year gap between the appearance of the first forests and the wood-eating microbes and bacteria that could break them down.

A section of Devonian-era (approximately 360 million-year-old) coal shows fungally mediated degradation of wood older than the Carboniferous.

Credit:  Ker Than

"Much of the scientific community was really enamored with this simple, straightforward explanation," said geobiologist Kevin Boyce, associate professor of geological sciences at Stanford School of Earth, Energy & Environmental Sciences. "So, it has not only refused to die, it has become a conventional wisdom."

In the new study, Boyce and his colleagues took a closer look at this "evolutionary lag" hypothesis, examining the idea from various biochemical and geological perspectives. "Our analysis demonstrates that an evolutionary lag explanation for the creation of ancient coal is inconsistent with geochemistry, sedimentology, paleontology and biology," said Matthew Nelsen, a postdoctoral researcher in Boyce's lab and first author on the new paper.

The scientists examined ancient, organic-rich sediments from North America and showed that not all of the plants that existed during the Carboniferous period, which began about 360 million years ago, possessed high concentrations of lignin, a cell wall polymer that helps give plant tissues their rigidity. Lignin is the biochemical component that, according to the evolutionary lag hypothesis, ancient bacteria and fungi were unable to break down.

The researchers also showed that shifts in lignin abundance in ancient plant fossils had no obvious impact on coal formation. In fact, many Carboniferous coal layers were dominated by the remains of lycopsids, an ancient group of largely low-lignin plants.

"Central to the evolutionary lag model is the assumption that lignin is the dominant biochemical constituent of coal," Nelsen said. "However, much of the plant matter that went into forming these coals contained low amounts of lignin."

Perfect conditions for coal

The scientists instead argue that the waxing and waning of coal deposits during the Carboniferous period was closely tied to a unique combination of tectonics and climate conditions that existed during the assembly of Pangea. Synthesizing findings from across various scientific fields, the scientists argue that during the Carboniferous, massive amounts of organic debris accumulated in warm, humid equatorial wetlands.

Stanford Earth scientists Kevin Boyce (left) and Matt Nelsen (right) examine Carboniferous-era petrified wood fossils.
Credit:  Ker Than

"If you want to generate coal, you need a productive environment where you're making lots of plant matter and you also need some way to prevent that plant matter from decaying," Boyce said. "That happens in wet environments."

The other key element that is required to form large coal deposits is an "accommodation space" – essentially a large hole – where organic matter can accumulate over long periods without being eroded away.

"So you need both a wet tropics and a hole to fill. We have an ever-wet tropics now, but we don't have a hole to fill," Boyce said. "There's only a narrow band in time in Earth's history where you had both a wet tropics and widespread holes to fill in the tropics, and that's the Carboniferous."

During the Carboniferous, amphibian-like creatures were still adjusting to life on land, and hawk-size insects flitted through forests very different from what exists today.

"In the modern world, all trees are seed plants more or less," Boyce said. "Back then, the trees resembled giant versions of ferns and other groups of plants that are now only small herbs. Conifers were just beginning to appear."

The Carboniferous was also a time when geologic forces were herding several large land masses together into what would eventually become the massive supercontinent Pangea. Along geologic fault lines where tectonic plates ground against one another, mountain ranges developed, and deep basins formed alongside the new peaks.

The ponderous pace at which the basins were created meant there was plenty of time for organic matter to accumulate, and as the mountains rose, the basins deepened and even more plant material could pile up.

"With enough time," Boyce said, "that plant matter was eventually transformed into the coal that powered the Industrial Revolution and helped usher in the modern age. Coal, as dead plant matter, is obviously based in short-term biological processes. And yet, as an important part of the long-term carbon cycle, coal accumulation is largely dictated by geological processes that operate on timescales of many millions of years that are entirely independent of the biology."

In addition to Boyce and Nelsen, other co-authors on the study, "Delayed fungal evolution did not cause the Paleozoic peak in coal production," are William DiMichele, the curator of fossil plants at the Smithsonian National Museum of Natural History, and Shanan Peters, a geoscientist at the University of Wisconsin-Madison.



Contacts and sources:
Kevin Boyce, School of Earth, Energy & Environmental Sciences
by Ker Than, School of Earth, Energy & Environmental Sciences
Bjorn Carey, Stanford News Service 

Monday, January 25, 2016

In Galaxy Clustering, Mass May Not Be the Only Thing That Matters


An international team of researchers, including Carnegie Mellon University's Rachel Mandelbaum, has shown that the relationship between galaxy clusters and their surrounding dark matter halo is more complex than previously thought. The researchers' findings, published in Physical Review Letters today (Jan. 25), are the first to use observational data to show that, in addition to mass, a galaxy cluster's formation history plays a role in how it interacts with its environment.

Density maps of the galaxy cluster distribution.

Credit: Kavli IPMU

There is a connection between galaxy clusters and their dark matter halos that holds a great deal of information about the universe's content of dark matter and accelerating expansion due to dark energy. Galaxy clusters are groupings of hundreds to thousands of galaxies bound together by gravity, and are the most massive structures found in the universe. These clusters are embedded in a halo of invisible dark matter. Traditionally, cosmologists have predicted and interpreted clustering by calculating just the masses of the clusters and their halos. However, theoretical studies and cosmological simulations suggested that mass is not the only element at play -- something called assembly bias, which takes into account when and how a galaxy cluster formed, also could impact clustering.

"Simulations have shown us that assembly bias should be part of our picture," said Mandelbaum, a member of Carnegie Mellon's McWilliams Center for Cosmology. "Confirming this observationally is an important piece of understanding galaxy and galaxy cluster formation and evolution."

In the current study, the research team, led by Hironao Miyatake, Surhud More and Masahiro Takada of the Kavli Institute for the Physics and Mathematics of the Universe, analyzed observational data from the Sloan Digital Sky Survey's DR8 galaxy catalog. Using this data, they demonstrated that when and where galaxies group together within a cluster impacts the cluster's relationship with its dark matter environment.

The researchers divided close to 9,000 galaxy clusters into two groups based on the spatial distribution of the galaxies in each cluster. One group consisted of clusters with galaxies aggregated at the center and the other consisted of clusters in which the galaxies were more diffuse. They then used a technique called gravitational lensing to show that, while the two groups of clusters had the same mass, they interacted with their environment very differently. The group of clusters with diffuse galaxies was much more clumpy than the group of clusters that had their galaxies close to the center.
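The grouping step described above can be sketched in a few lines. This is an illustrative toy, not the authors' actual pipeline: it scores each cluster by the average projected offset of its member galaxies from the cluster centre and splits the sample at the median score. The cluster names and offsets are made up.

```python
# Toy illustration (not the study's method): split a cluster sample into
# "concentrated" and "diffuse" halves by the mean projected offset of
# member galaxies from the cluster centre.

from statistics import median

def mean_member_offset(offsets):
    """Average projected galaxy-to-centre offset for one cluster (e.g. Mpc/h)."""
    return sum(offsets) / len(offsets)

def split_by_concentration(clusters):
    """Partition clusters at the median of the mean member offset."""
    scores = {name: mean_member_offset(offs) for name, offs in clusters.items()}
    cut = median(scores.values())
    concentrated = [n for n, s in scores.items() if s <= cut]
    diffuse = [n for n, s in scores.items() if s > cut]
    return concentrated, diffuse

clusters = {
    "A": [0.1, 0.2, 0.15],  # galaxies huddled near the centre
    "B": [0.6, 0.8, 0.7],   # more spread-out member distribution
}
concentrated, diffuse = split_by_concentration(clusters)
```

The point of the study is that two such subsamples, even when lensing shows they have equal mass, cluster differently on large scales; a real analysis would measure offsets from a catalog such as SDSS DR8.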

"Measuring the way galaxy clusters clump together on large scales is a linchpin of modern cosmology. We can go forward knowing that mass might not be the only factor in clustering," Mandelbaum said.



Contacts and sources:
Jocelyn Duffy
Carnegie Mellon University

New Theory Aids Search for Universe's Origin

In a new study, scientists from The University of Texas at Dallas and their colleagues suggest a novel way for probing the beginning of space and time, potentially revealing secrets about the conditions that gave rise to the universe.

The prevailing model of the birth of the universe is the big bang theory, which describes the rapid expansion of the universe from a highly compressed primordial state. While the big bang is a successful genesis model, it requires special initial conditions.

In this diagram, time passes from left to right, so at any given time, the Universe is represented by a disk-shaped "slice" of the diagram.
Credit:  NASA/WMAP Science Team - modified by Ryan Kaldar

Determining what produced those initial conditions is a major challenge in cosmology and astrophysics, said Dr. Xingang Chen, assistant professor of physics at UT Dallas and a visiting scholar at the Harvard-Smithsonian Center for Astrophysics.

"Several different scenarios have been proposed for the origin of the big bang and to explain its pre-existing, initial conditions," Chen said.

The leading explanation among theorists is the inflation scenario, which posits that the universe went through an exponential expansion in the first fleeting fraction of a second of its existence. Another scenario suggests that a universe preceded ours and contracted in a "big crunch" before transitioning into our big bang.

In a study appearing in an upcoming issue of the Journal of Cosmology and Astroparticle Physics, Chen and his colleagues, Dr. Mohammad Hossein Namjoo, a postdoctoral researcher at UT Dallas and the Center for Astrophysics, and Dr. Yi Wang of the Hong Kong University of Science and Technology, describe a new theory to determine which scenario is correct.

"Each scenario can have many details in its theoretical models that result in various astrophysical signals that can be observed today," Wang said. "Most of these signals may be shared by the different scenarios, but there are some signals that are unique fingerprints of each scenario. Although these signals are very rare, the latter can be used to distinguish inflation from other scenarios."

Astrophysical observations already have revealed information about the origins of the universe some 13.8 billion years ago, specifically about properties of initial fluctuations that took place in the early universe. For example, researchers have mapped patterns of tiny fluctuations in temperature in the otherwise smooth cosmic microwave background (CMB), which is the heat left over from the explosion of the big bang. Those tiny, "seed" irregularities became magnified as the universe expanded after the big bang, eventually forming all the large-scale structures we see in the universe today, such as stars and galaxies.

From those fluctuations scientists have learned a lot about the spatial variations of the primordial universe, but they have yet to determine the passage of time, Chen said. The phenomenon he and his colleagues discovered would allow that by putting "time stamps" on the evolutionary history of the primordial universe, shedding light on which scenario -- inflation or contraction -- produced the big bang's initial conditions.

"The information we currently have is akin to showing an audience many still pictures from a movie stacked on top of each other, but they lack proper time labeling for the correct sequence," Chen said. "As a result, we do not know for sure if the primordial universe was expanding or contracting."

New research suggests that oscillating heavy particles generated "clocks" in the primordial universe that could be used to determine what produced the initial conditions that gave rise to the universe.

Credit:  Yi Wang and Xingang Chen

Chen and his group devised a way to put the individual snapshots in order. They realized that heavy particles would be present before the big bang in both scenarios.

"These heavy particles have a simple but important property that can be used to resolve the competing scenarios. They oscillate just like a pendulum. They do so classically due to some kind of 'push,' or quantum-mechanically without having to be pushed initially," Chen said. "We call these heavy particles 'primordial standard clocks'."

The researchers found that in both the inflation and contraction scenarios, the oscillating particles generated time "ticks" on the seed fluctuations that the universe was experiencing at the same time.

"With the help of these time labels, we can turn the stacks of stills into a coherent movie and directly reveal the evolutionary history of the primordial universe," Chen said. "This should allow us to distinguish an inflationary universe from other scenarios, including one that previously contracted."

"The clock signals we are searching for are fine oscillatory structures that would manifest in measurements of the cosmic microwave background," Wang said. "Each primordial universe scenario predicts a unique signal pattern."
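Schematically, a clock signal of this kind is a small oscillatory modulation riding on the nearly scale-invariant primordial power spectrum. The sketch below is only illustrative: the modulation amplitude, frequency and phase are invented numbers, not the paper's templates, and the logarithmic-in-k form is just one commonly assumed shape.

```python
import math

# Schematic illustration only: a small oscillatory "clock" wiggle on top
# of a smooth power-law primordial spectrum. Parameters are made up.

def primordial_power(k, amplitude=2.1e-9, n_s=0.96, k_pivot=0.05):
    """Smooth power-law spectrum P(k) ~ A * (k / k_pivot)**(n_s - 1)."""
    return amplitude * (k / k_pivot) ** (n_s - 1.0)

def with_clock_signal(k, osc_amp=0.01, omega=30.0, phase=0.0):
    """Superimpose fine oscillations (here logarithmic in k) on P(k)."""
    wiggle = 1.0 + osc_amp * math.sin(omega * math.log(k) + phase)
    return primordial_power(k) * wiggle

# The modulation stays within ~1% of the smooth spectrum at any scale:
k = 0.05
ratio = with_clock_signal(k) / primordial_power(k)
```

In practice the scenario-dependent fingerprint lies in the detailed frequency pattern of such wiggles across scales, which is why exquisitely precise CMB data are needed to spot it.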

Namjoo said that detecting clock signals shouldn't require the design of new experiments. While current data is not accurate enough to spot such small variations, ongoing experiments worldwide are expected to gather extremely precise CMB data.

"Our theoretical proposal makes use of the same precision data that many experiments will be gathering in the next decade or so, but analyzes the data from a different angle to dig out a new type of signal," Namjoo said.

If the oscillations from the heavy particles are strong enough, experiments should find them in the next decade, Chen said. Supporting evidence could also come from other lines of investigation, such as maps of the large-scale structure of the universe, including galaxies and cosmic hydrogen.

The research was supported by UT Dallas, Harvard, the Hong Kong University of Science and Technology, and the National Science Foundation.



Contacts and sources:
Amanda Siegfried
The University of Texas at Dallas

Theorists Propose A New Method To Probe The Beginning Of The Universe


How did the universe begin? And what came before the Big Bang? Cosmologists have asked these questions ever since discovering that our universe is expanding. The answers aren't easy to determine. The beginning of the cosmos is cloaked and hidden from the view of our most powerful telescopes. Yet observations we make today can give clues to the universe's origin. New research suggests a novel way of probing the beginning of space and time to determine which of the competing theories is correct.

New research suggests that oscillating heavy particles generated "clocks" in the primordial universe that could be used to determine what produced the initial conditions that gave rise to the universe.

Credit: Yi Wang and Xingang Chen

The most widely accepted theoretical scenario for the beginning of the universe is inflation, which predicts that the universe expanded at an exponential rate in the first fleeting fraction of a second. However, a number of alternative scenarios have been suggested, some predicting a Big Crunch preceding the Big Bang. The trick is to find measurements that can distinguish between these scenarios.

One promising source of information about the universe's beginning is the cosmic microwave background (CMB) - the remnant glow of the Big Bang that pervades all of space. This glow appears smooth and uniform at first, but upon closer inspection varies by small amounts. Those variations come from quantum fluctuations present at the birth of the universe that have been stretched as the universe expanded.

The conventional approach to distinguish different scenarios searches for possible traces of gravitational waves, generated during the primordial universe, in the CMB. "Here we are proposing a new approach that could allow us to directly reveal the evolutionary history of the primordial universe from astrophysical signals. This history is unique to each scenario," says coauthor Xingang Chen of the Harvard-Smithsonian Center for Astrophysics (CfA) and the University of Texas at Dallas.

While previous experimental and theoretical studies give clues to spatial variations in the primordial universe, they lack the key element of time. Without a ticking clock to measure the passage of time, the evolutionary history of the primordial universe can't be determined unambiguously.

"Imagine you took the frames of a movie and stacked them all randomly on top of each other. If those frames aren't labeled with a time, you can't put them in order. Did the primordial universe crunch or bang? If you don't know whether the movie is running forward or in reverse, you can't tell the difference," explains Chen.

This new research suggests that such "clocks" exist, and can be used to measure the passage of time at the universe's birth. These clocks take the form of heavy particles, which are an expected product of the "theory of everything" that will unite quantum mechanics and general relativity. They are named the "primordial standard clocks."

Subatomic heavy particles will behave like a pendulum, oscillating back and forth in a universal and standard way. They can even do so quantum-mechanically without being pushed initially. Those oscillations or quantum wiggles would act as clock ticks, and add time labels to the stack of movie frames in our analogy.

"Ticks of these primordial standard clocks would create corresponding wiggles in measurements of the cosmic microwave background, whose pattern is unique for each scenario," says coauthor Yi Wang of The Hong Kong University of Science and Technology. However, current data isn't accurate enough to spot such small variations.

Ongoing experiments should greatly improve the situation. Projects like CfA's BICEP3 and Keck Array, and many other related experiments worldwide, will gather exquisitely precise CMB data at the same time as they are searching for gravitational waves. If the wiggles from the primordial standard clocks are strong enough, experiments should find them in the next decade. Supporting evidence could come from other lines of investigation, like maps of the large-scale structure of the universe including galaxies and cosmic hydrogen.

And since the primordial standard clocks would be a component of the "theory of everything," finding them would also provide evidence for physics beyond the Standard Model at an energy scale inaccessible to colliders on the ground.



Contacts and sources:
Christine Pulliam
Harvard-Smithsonian Center for Astrophysics (CfA)