Unseen Is Free

Saturday, March 31, 2012

Fossil Planets: Astronomers Discover A Solar System From The Early Days Of Our Universe

The newly discovered planetary system consists of the star HIP 11952 and two planets, which have orbital periods of 290 and 7 days, respectively. In itself, this would not be particularly remarkable, since the discovery of exoplanets has become quite a regular occurrence in the world of astronomy. But HIP 11952 is different: the star is about 13 billion years old and contains very little other than hydrogen and helium. Usually planets form within clouds that include heavier chemical elements. The system could thus shed light on planet formation in the early universe – under conditions quite different from those of later planetary systems, such as our own.

A glimpse into alien worlds: Artist’s impression of HIP 11952 and its two Jupiter-like planets. 
 
© Timotheos Samartzidis

It is widely accepted that planets are formed in disks of gas and dust that swirl around young stars. But look into the details, and many open questions remain – including the question of what it actually takes to make a planet. With a sample of, by now, more than 750 confirmed planets orbiting stars other than the Sun, astronomers have some idea of the diversity among planetary systems.

Certain trends have also emerged: statistically, a star that contains more “metals” (in astronomical parlance, the term includes all chemical elements other than hydrogen and helium) is more likely to have planets.

Originally, the universe contained almost no chemical elements other than hydrogen and helium. Almost all heavier elements have been produced over time inside stars and then flung into space as massive stars end their lives in giant explosions (supernovae).

This gives rise to new questions: What about planet formation under conditions like those of the very early universe, say 13 billion years ago? If metal-rich stars are more likely to form planets, are there, conversely, stars with a metal content so low that they cannot form planets at all? And if the answer is yes, then when, throughout cosmic history, should we expect the very first planets to form?

Now a group of astronomers, including researchers from the Max Planck Institute for Astronomy in Heidelberg, Germany, has discovered a planetary system that could help provide answers to those questions. As part of a survey targeting especially metal-poor stars, they identified two giant planets around a star known by its catalogue number as HIP 11952, a star in the constellation Cetus (“the whale” or “the sea monster”) at a distance of about 375 light-years from Earth. By themselves, these planets, HIP 11952b and HIP 11952c, are not unusual. What is unusual is the fact that they orbit such an extremely metal-poor and, in particular, such a very old star.

According to classical models of planet formation, which favour metal-rich stars when it comes to forming planets, planets around such a star should be extremely rare. Veronica Roccatagliata (University Observatory Munich), the principal investigator of the planet survey around metal-poor stars that led to the discovery, explains: “In 2010 we found the first example of such a metal-poor system, HIP 13044. Back then, we thought it might be a unique case; now, it seems as if there might be more planets around metal-poor stars than expected.”

HIP 13044 became famous as the “exoplanet from another galaxy” – the star is very likely part of a so-called stellar stream, the remnant of another galaxy swallowed by our own billions of years ago.

Compared to other exoplanetary systems, HIP 11952 is not only one that is extremely metal-poor, but, at an estimated age of 12.8 billion years, also one of the oldest systems known so far. “This is an archaeological find in our own backyard,” adds Johny Setiawan of the Max Planck Institute for Astronomy, who led the study of HIP 11952: “These planets probably formed when our Galaxy itself was still a baby.”

“We would like to discover and study more planetary systems of this kind. That would allow us to refine our theories of planet formation. The discovery of the planets of HIP 11952 shows that planets have been forming throughout the life of our Universe”, adds Anna Pasquali from the Center for Astronomy at Heidelberg University (ZAH), a co-author of the paper.


Contacts and sources: 
Dr. Markus Pössel
Max Planck Institute for Astronomy,

Citation:
Setiawan, J. et al.
Planetary companions around the metal-poor star HIP 11952
Astronomy & Astrophysics

Galaxy Distribution When The Universe Was Half Its Current Age

At the UK-Germany National Astronomy Meeting NAM2012, the Baryon Oscillation Spectroscopic Survey (BOSS) team announced the most accurate measurement yet of the distribution of galaxies between five and six billion years ago. 

A map of the galaxies in a thin slice of the BOSS catalogue. We are at the centre of the arc, outside the bottom part of the figure, and each black point is a galaxy. The red circle shows the approximate size of the BAO feature. 

CREDIT: Francesco Montesano/Max Planck Institute for Extraterrestrial Physics, Sloan Digital Sky Survey III

This epoch, between five and six billion years ago, was the key 'pivot' moment at which the expansion of the universe stopped slowing down due to gravity and started to accelerate instead, due to a mysterious force dubbed “dark energy”. The nature of this “dark energy” is one of the big mysteries in cosmology today, and scientists need precise measurements of the expansion history of the universe to unravel it – BOSS provides this kind of data. In a set of six joint papers presented at the meeting, the BOSS team, an international group of scientists with the participation of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, used these data together with previous measurements to place tight constraints on various cosmological models.

The BOSS survey, which is a part of the Sloan Digital Sky Survey (SDSS-III), was started in 2009 to probe the universe at a time when dark energy started to dominate. The survey will continue until 2014, collecting data for 1.35 million galaxies with a custom-designed new spectrograph on the 2.5-metre Sloan Telescope at the Apache Point Observatory in New Mexico, USA. In the first year-and-a-half, it has already mapped the three-dimensional positions of more than a quarter of a million galaxies spread across about one tenth of the sky, yielding the most accurate and complete map of the galaxy distribution up to a distance of about 6 billion light years.

Galaxies form a “cosmic web” with a variety of structures which encode valuable information about our universe. One particular feature, the so-called “Baryonic Acoustic Oscillations” (BAO), has been the subject of much interest from scientists as it provides them with a “standard ruler”. BAO are a relic of the early phases of the universe, when it was a hot and dense “soup” of particles. Small variations of density travelled through this “soup” as pressure-driven (sound) waves. As the universe expanded and cooled, the pressure dropped, causing these waves to stall after they had travelled about 500 million light years. These frozen waves imprinted a particular signature on the matter distribution and are visible in the galaxy map today: it is in fact slightly more probable to find pairs of galaxies separated by this scale than at smaller or larger distances.

Measurements of the apparent size of the BAO scale in the galaxy distribution therefore provide information about cosmic distances. Combined with the measurement of the galaxies’ redshift – a measure of how fast they move away as a result of the cosmic expansion – scientists can then reconstruct the expansion history of the universe.
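
As a rough illustration of how a "standard ruler" turns into an expansion-history measurement, the sketch below computes the comoving distance to a BOSS-like redshift in a flat ΛCDM model and the angle a roughly 150 Mpc BAO scale would subtend there. The parameter values are generic textbook numbers, not the BOSS team's fitted results.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat-LCDM parameters (not the BOSS best-fit values)
H0 = 70.0         # Hubble constant, km/s/Mpc
Om = 0.27         # matter density parameter
OL = 1.0 - Om     # dark-energy density (flatness assumed)
c = 299792.458    # speed of light, km/s
r_bao = 150.0     # approximate comoving BAO scale, Mpc

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat LCDM."""
    return np.sqrt(Om * (1.0 + z) ** 3 + OL)

def comoving_distance(z):
    """Comoving distance to redshift z in Mpc (line-of-sight integral)."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * integral

z = 0.57  # a typical BOSS galaxy redshift
D = comoving_distance(z)
print(f"comoving distance to z={z}: {D:.0f} Mpc")
print(f"BAO scale subtends roughly {np.degrees(r_bao / D):.1f} degrees on the sky")
```

Comparing the measured angular size of the BAO feature at several redshifts with predictions of this kind is what constrains the expansion history and the dark-energy models discussed below.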

The record of baryon acoustic oscillations (white rings) in galaxy maps helps astronomers retrace the history of the expanding universe. These schematic images show the universe at three different times. The false-colour image on the right shows the "cosmic microwave background," a record of what the very young universe looked like, 13.7 billion years ago. The small density variations present then have grown into the clusters, walls, and filaments of galaxies that we see today. These variations included the signal of the original baryon acoustic oscillations (white ring, right). As the universe has expanded (middle and left), evidence of the baryon oscillations has remained, visible in a "peak separation" between galaxies (the larger white rings). The SDSS-III results announced today (middle) are for galaxies 5.5 billion light-years distant, at the time when dark energy turned on. Comparing them with previous results from galaxies 3.8 billion light-years away (left) measures how the universe has expanded with time. 

Credit: E. M. Huff, the SDSS-III team, and the South Pole Telescope team. Graphic by Zosia Rostomian.

The new BOSS data, combined with previous analyses, can now constrain the parameters of the standard cosmological model to an accuracy of better than five per cent. "All the different lines of evidence point to the same explanation," says Dr. Ariel Sanchez, scientist at the Max Planck Institute for Extraterrestrial Physics and lead author of one of the six papers released today. "The dark energy is consistent with Einstein's cosmological constant: a small but irreducible energy continually stretching space itself, driving the accelerated expansion of the universe."

Besides dark energy, the information encoded in the large-scale distribution of galaxies can be used to obtain robust constraints on other important physical parameters such as the curvature of the universe, the neutrino mass, or the phase of inflation in the very early universe. “Current observations show that the universe has to be flat, to an accuracy better than 0.5 per cent,” says Ariel Sanchez. “And at the same time as we measure such a global parameter on a cosmic scale, we can also get information about neutrinos on the smallest scales in the cosmos.”

Neutrinos are tiny, subatomic particles. Even though a number of experiments have shown that they must have mass, scientists do not know how much they actually weigh, as this is difficult to measure in a laboratory. However, as an additional component in the hot, early phase of the universe, neutrinos affect the growth of structures. The galaxy distribution probed by BOSS provides information about the maximum mass that these neutrinos are allowed to have. “This is really the connection of two extreme worlds, the very large and the very small”, adds Ariel Sanchez.

The quality of the new data even provided the BOSS team with new clues about cosmic inflation, a period of time shortly after the Big Bang during which the universe expanded at an incredible rate. During cosmic inflation, small regions of space were blown out to form our entire observable universe. At the same time, tiny quantum fluctuations also expanded and became the seeds of the structures that the BOSS data show us today. "There is a real zoo of alternative models of inflation. With BOSS we get important new clues about the inflationary phase of the universe, which allows us to pare down the market of available models", remarks Ariel Sanchez.

So far, all measurements are highly consistent with the standard cosmological model, which is made up of a few per cent ordinary matter, about a quarter of dark matter, and the rest dark energy. But Ariel Sanchez is cautious: “This is just the beginning. We can expect much tighter constraints once we have the full five years of BOSS data. There are also a number of future projects, such as EUCLID, that will provide even better measurements, bringing us one step closer to finding answers to the big open questions in cosmology.”

Contacts and sources: 
Max-Planck-Institut für extraterrestrische Physik


Original publications :
The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: baryon acoustic oscillations in the data release 9 spectroscopic galaxy sample
The BOSS team
http://arxiv.org/abs/1203.6594


The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological implications of the large-scale two-point correlation function
Ariel G. Sánchez et al.
http://arxiv.org/abs/1203.6616


The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: measurements of the growth of structure and expansion rate at z=0.57 from anisotropic clustering
Beth Reid et al.
http://arxiv.org/abs/1203.6641


The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: analysis of potential systematics
Ashley J. Ross et al.
http://arxiv.org/abs/1203.6699


The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: measuring structure growth using passive galaxies
Rita Tojeiro et al.
http://arxiv.org/abs/1203.6565


The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: a large sample of mock galaxy catalogues
Marc Manera et al.
http://arxiv.org/abs/1203.6609

Moving Microfluidics From The Lab Bench To The Market

In the not-too-distant future, plastic chips the size of flash cards may quickly and accurately diagnose diseases such as AIDS and cancer, as well as detect toxins and pathogens in the environment. Such lab-on-a-chip technology — known as microfluidics — works by flowing fluid such as blood through microscopic channels etched into a polymer’s surface. Scientists have devised ways to manipulate the flow at micro- and nanoscales to detect certain molecules or markers that signal disease. 

The Center for Polymer Microfabrication is designing processes for manufacturing microfluidic chips. Pictured here is a chip fabricated by the center's tailor-made production machines. 
Photo: Melinda Hale 

Microfluidic devices have the potential to be fast, cheap and portable diagnostic tools. But for the most part, the technology hasn’t yet made it to the marketplace. While scientists have made successful prototypes in the laboratory, microfluidic devices — particularly for clinical use — have yet to be manufactured on a wider scale.

MIT's David Hardt is working to move microfluidics from the lab to the factory. Hardt heads the Center for Polymer Microfabrication — a multidisciplinary research group funded by the Singapore-MIT Alliance — which is designing manufacturing processes for microfluidics from the ground up. The group is analyzing the behavior of polymers under factory conditions, building new tools and machines to make polymer-based chips at production levels, and designing quality-control processes to check a chip’s integrity at submicron scales — all while minimizing the cost of manufacturing.

“These are devices that people want to make by the millions, for a few pennies each,” says Hardt, the Ralph E. and Eloise F. Cross Professor of Mechanical Engineering at MIT. “The material cost is close to zero, there’s not enough plastic here to send a bill for. So you have to get the manufacturing cost down.”

Micromachines

Hardt and his colleagues found that in making microfluidic chips, many research groups and startups have adopted equipment mainly from the semiconductor industry. Hardt says this equipment — such as nano-indenting and bonding machines — is incredibly expensive, and was never designed to work on polymer-based materials. Instead, Hardt's team looked for ways to design cheaper equipment that’s better suited to work with polymers.

The group focused on an imprinting technique called microembossing, in which a polymer is heated, then stamped with a pattern of tiny channels. In experiments with existing machines, the researchers discovered a flaw in the embossing process: When they tried to disengage the stamping tool from the cooled chip, much of the plastic ripped out with it.

To prevent embossing failures in a manufacturing setting, the team studied the interactions between the cooling polymer and the embossing tool, measuring the mechanical forces between the two. The researchers then used the measurements to build embossing machines specifically designed to minimize polymer “stickiness.” In experiments, the group found that the machines fabricated chips quickly and accurately, “at very low cost,” Hardt says. “In many cases it makes sense to build your own equipment for the task at hand,” he adds.

In addition to building microfluidic equipment, Hardt and his team are coming up with innovative quality-control techniques. Unlike automobile parts on an assembly line that can be quickly inspected with the naked eye, microfluidic chips carry tiny features, some of which can only be seen with a high-resolution microscope. Checking every feature on even one chip is a time-intensive exercise.

Hardt and his colleagues came up with a fast and reliable way to gauge the “health” of a chip’s production process. Instead of checking whether every channel on a chip has been embossed, the group added an extra feature — a tiny X — to the chip pattern. They designed the feature to be more difficult to emboss than the rest of the chip. Hardt says how sharply the X is stamped is a good indication of whether the rest of the chip has been rendered accurately.
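
A minimal sketch of how such a sacrificial test feature might be scored automatically; the depth-based sharpness metric and the pass threshold below are hypothetical illustrations, not the center's actual procedure.

```python
def embossing_health(x_depths, threshold=0.85):
    """Score the sacrificial 'X' feature from measured embossing depths.

    x_depths: depths sampled along the X, normalised so 1.0 is the nominal
    full depth. Both the mean-depth metric and the threshold are
    hypothetical, for illustration only.
    """
    score = sum(x_depths) / len(x_depths)
    return score, score >= threshold

# Example: a sharply embossed X reaches close to full depth everywhere
score, healthy = embossing_health([0.95, 0.91, 0.88, 0.93])
print(f"sharpness score {score:.2f} -> process {'healthy' if healthy else 'suspect'}")
```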

Jumpstarting an industry

The group’s ultimate goal is to change how manufacturing is done. Typically, an industry builds up its production processes gradually, making adjustments and improvements over time. Hardt says the semiconductor industry is a prime example of manufacturing’s iterative process.

“Now what they do in manufacturing is impossibly difficult, but it’s been a series of small incremental improvements over years,” Hardt says. “We’re trying to jumpstart that and not wait until industry identifies all these problems when they’re trying to make a product.”

The group is now investigating ways to design a “self-correcting factory” in which products are automatically tested. If the product doesn’t work, Hardt envisions the manufacturing process changing in response, adjusting settings on machines to correct the process. For example, the team is looking for ways to evaluate how fluid flows through a manufactured chip. The point at which two fluids mix within a chip should be exactly the same in every chip produced. If that mixing point drifts from chip to chip, algorithms developed by Hardt and his colleagues adjust the equipment to correct the drift.
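
The article does not describe the group's correction algorithms in detail; as a rough sketch of the general idea, a single proportional feedback step on a drifting mixing-point measurement might look like this (the setting, target and gain are all hypothetical).

```python
def correct_drift(target_mm, measured_mm, setting, gain=0.5):
    """One step of a proportional correction: nudge a machine setting so the
    measured mixing point moves back toward the target.

    target_mm, measured_mm: desired and observed mixing-point position (mm)
    setting: current machine setting (e.g. an embossing temperature offset)
    gain: how aggressively to correct (hypothetical value)
    """
    error = target_mm - measured_mm
    return setting + gain * error

# Example: successive chips drift, and the setting is nudged back each time
setting = 0.0
for measured in [10.0, 10.4, 10.7, 10.3]:
    setting = correct_drift(target_mm=10.0, measured_mm=measured, setting=setting)
    print(f"measured {measured:.1f} mm -> new setting {setting:+.2f}")
```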

Holger Becker, co-founder of Microfluidic ChipShop, a lab-on-a-chip production company in Jena, Germany, says the center's research plays an important role in understanding the different processes involved in large-scale production of microfluidics.

"Most of the academic work in microfluidics concentrates on applications, and unfortunately only very few concentrate on the actual manufacturing technologies suited for industrialization," Becker says. "David Hardt's team takes a very holistic approach looking into all different process steps and the complete manufacturing process instead of individual technologies."

“We’re at the stage where we’d like industry to know what we’re doing,” Hardt says. “We’ve been sort of laboring in the vineyard for years, and now we have this base, and it could get to the point where we’re ahead of the group.”

Contacts and sources: 
Jennifer Chu, MIT News Office

Solar Power Doubles With New 3-D Design

Intensive research around the world has focused on improving the performance of solar photovoltaic cells and bringing down their cost. But very little attention has been paid to the best ways of arranging those cells, which are typically placed flat on a rooftop or other surface, or sometimes attached to motorized structures that keep the cells pointed toward the sun as it crosses the sky.

Innovative 3-D designs from an MIT team can more than double the solar power generated from a given area.

Two small-scale versions of three-dimensional photovoltaic arrays were among those tested by Jeffrey Grossman and his team on an MIT rooftop to measure their actual electrical output throughout the day. 
Photo: Allegra Boverman 

Now, a team of MIT researchers has come up with a very different approach: building cubes or towers that extend the solar cells upward in three-dimensional configurations. Amazingly, the results from the structures they’ve tested show power output ranging from double to more than 20 times that of fixed flat panels with the same base area.

The biggest boosts in power were seen in the situations where improvements are most needed: in locations far from the equator, in winter months and on cloudier days. The new findings, based on both computer modeling and outdoor testing of real modules, have been published in the journal Energy & Environmental Science.

“I think this concept could become an important part of the future of photovoltaics,” says the paper’s senior author, Jeffrey Grossman, the Carl Richard Soderberg Career Development Associate Professor of Power Engineering at MIT.

The MIT team initially used a computer algorithm to explore an enormous variety of possible configurations, and developed analytic software that can test any given configuration under a whole range of latitudes, seasons and weather. Then, to confirm their model’s predictions, they built and tested three different arrangements of solar cells on the roof of an MIT laboratory building for several weeks.

While the cost of a given amount of energy generated by such 3-D modules exceeds that of ordinary flat panels, the expense is partially balanced by a much higher energy output for a given footprint, as well as much more uniform power output over the course of a day, over the seasons of the year, and in the face of blockage from clouds or shadows. These improvements make power output more predictable and uniform, which could make integration with the power grid easier than with conventional systems, the authors say.

The basic physical reason for the improvement in power output — and for the more uniform output over time — is that the 3-D structures’ vertical surfaces can collect much more sunlight during mornings, evenings and winters, when the sun is closer to the horizon, says co-author Marco Bernardi, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE).
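
A rough way to see the geometry behind this in numbers, as a minimal sketch: it considers only the direct beam hitting a sun-facing panel, ignoring diffuse light, shading and cell efficiency, and is not the MIT team's model.

```python
import math

def cosine_factor(sun_elevation_deg, panel_tilt_deg):
    """Fraction of the direct solar beam intercepted by a flat panel tilted
    panel_tilt_deg from horizontal and facing the sun's azimuth (idealised
    geometry only: no diffuse light, shading or panel efficiency)."""
    return max(0.0, math.sin(math.radians(sun_elevation_deg + panel_tilt_deg)))

# Low winter sun, 15 degrees above the horizon:
flat = cosine_factor(15, 0)    # horizontal rooftop panel
wall = cosine_factor(15, 90)   # vertical face of a cube or tower
print(f"horizontal panel: {flat:.2f}, vertical panel: {wall:.2f}")
# horizontal ~0.26 vs vertical ~0.97: the vertical face catches far more
# of the low sun, which is why 3-D structures gain most in winter.
```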

The time is ripe for such an innovation, Grossman adds, because solar cells have become less expensive than accompanying support structures, wiring and installation. As the cost of the cells themselves continues to decline more quickly than these other costs, they say, the advantages of 3-D systems will grow accordingly.

“Even 10 years ago, this idea wouldn’t have been economically justified because the modules cost so much,” Grossman says. But now, he adds, “the cost for silicon cells is a fraction of the total cost, a trend that will continue downward in the near future.” Currently, up to 65 percent of the cost of photovoltaic (PV) energy is associated with installation, permission for use of land and other components besides the cells themselves.

Although computer modeling by Grossman and his colleagues showed that the biggest advantage would come from complex shapes — such as a cube where each face is dimpled inward — these would be difficult to manufacture, says co-author Nicola Ferralis, a research scientist in DMSE. The algorithms can also be used to optimize and simplify shapes with little loss of energy. It turns out the difference in power output between such optimized shapes and a simpler cube is only about 10 to 15 percent — a difference that is dwarfed by the greatly improved performance of 3-D shapes in general, he says. The team analyzed both simpler cubic and more complex accordion-like shapes in their rooftop experimental tests.

At first, the researchers were distressed when almost two weeks went by without a clear, sunny day for their tests. But then, looking at the data, they realized they had learned important lessons from the cloudy days, which showed a huge improvement in power output over conventional flat panels.

For an accordion-like tower — the tallest structure the team tested — the idea was to simulate a tower that “you could ship flat, and then could unfold at the site,” Grossman says. Such a tower could be installed in a parking lot to provide a charging station for electric vehicles, he says.

So far, the team has modeled individual 3-D modules. A next step is to study a collection of such towers, accounting for the shadows that one tower would cast on others at different times of day. In general, 3-D shapes could have a big advantage in any location where space is limited, such as flat-rooftop installations or in urban environments, they say. Such shapes could also be used in larger-scale applications, such as solar farms, once shading effects between towers are carefully minimized.

A few other efforts — including even a middle-school science-fair project last year — have attempted 3-D arrangements of solar cells. But, Grossman says, “our study is different in nature, since it is the first to approach the problem with a systematic and predictive analysis.”

David Gracias, an associate professor of chemical and biomolecular engineering at Johns Hopkins University who was not involved in this research, says that Grossman and his team “have demonstrated theoretical and proof-of-concept evidence that 3-D photovoltaic elements could provide significant benefits in terms of capturing light at different angles. The challenge, however, is to mass produce these elements in a cost-effective manner.”

Contacts and sources: 
David L. Chandler, MIT News Office

How Interstellar Beacons Could Help Future Astronauts Find Their Way Across The Universe

The use of stars, planets and stellar constellations for navigation was of fundamental importance for mankind for thousands of years. Now a group of scientists at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, has developed a new technique using a special population of stars to navigate not on Earth, but on voyages across the universe. Team member Prof. Werner Becker presented their work at the National Astronomy Meeting in Manchester on Friday 30 March.

Artist’s impression of pulsar-based navigation in deep space. The characteristic time signatures of strongly magnetised and fast spinning neutron stars, called pulsars, are used as natural navigation beacons to determine the position and velocity of a spacecraft.
 
 Courtesy of ESA 

Have you ever asked yourself how the starship Enterprise in the TV series Star Trek found its way through the depths of space? Cosmic lighthouses called pulsars might be the key to this interstellar navigation - not only in science fiction but also in the near future of space flight.

When stars much more massive than our Sun reach the end of their lives, their final demise is marked by a dramatic supernova explosion that destroys most of the star. But many leave behind compact, incredibly dense remnants known as neutron stars. Those detected have strong magnetic fields that focus emission into two highly directional beams. The neutron star rotates rapidly and if the beam points in the direction of the Earth we see a pulse of radiation at extremely regular intervals – hence the name pulsar.

Prof. Becker and his team are developing a novel navigation technology for spacecraft based on the regular emission of X-ray light from pulsars. Their periodic signals have timing stabilities comparable to atomic clocks and provide characteristic time signatures that can be used as natural navigation beacons, similar to the use of GPS satellites for navigation on Earth.

Artist’s impression of spaceship Enterprise, navigating in deep space using pulsar signals 
 Compilation by MPE.

By comparing the arrival times of the pulses measured on board the navigator spacecraft with those predicted at a reference location, the spacecraft position can be determined with an accuracy of a few kilometres, everywhere in the solar system and far beyond.
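
A back-of-the-envelope check of that accuracy figure (an illustration only, ignoring pulsar timing noise, on-board clock errors and geometry): light travels roughly 300 metres per microsecond, so timing a pulse arrival to a few microseconds pins down the position along that pulsar's line of sight to about a kilometre, and combining several pulsars in different directions yields a three-dimensional fix, much as GPS does.

```python
C_KM_PER_S = 299_792.458  # speed of light, km/s

def range_uncertainty_km(timing_error_us):
    """Position uncertainty along one pulsar's line of sight implied by a
    pulse-arrival timing error, in km (illustrative: ignores pulsar timing
    noise, clock errors and geometry factors)."""
    return C_KM_PER_S * timing_error_us * 1e-6

for err_us in (1, 5, 10):
    print(f"{err_us:>2} microsecond(s) of timing error -> "
          f"{range_uncertainty_km(err_us):.1f} km along the line of sight")
```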

At the moment even the fastest spacecraft would take thousands of years to travel to the nearest star and far longer to explore the wider Galaxy so we are unlikely to see journeys like this happen for many centuries. Nonetheless, the pulsar-based navigation system could be in use in the near future.

Professor Becker gives two examples: “These X-ray beacons could augment the existing GPS/Galileo satellite navigation systems and provide autonomous navigation for interplanetary space probes and future manned missions to Mars.”

He adds: “Looking forward, it’s incredibly exciting to think that we have now the technology to chart our route to other stars and may even be able to help our descendants take their first steps into interstellar space.”


Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

Solar ‘Climate Change’ Could Cause Rougher Space Weather

Recent research shows that the space age has coincided with a period of unusually high solar activity, called a grand maximum. Isotopes in ice sheets and tree rings tell us that this grand solar maximum is one of 24 during the last 9300 years and suggest the high levels of solar magnetic field seen over the space age will reduce in future. This decline will cause a reduction in sunspot numbers and explosive solar events, but those events that do take place could be more damaging. Graduate student Luke Barnard of the University of Reading presented the new results on ‘solar climate change’ in his paper at the National Astronomy Meeting in Manchester.

Image of a coronal mass ejection (CME) on June 7, 2011, recorded in ultraviolet light by the Solar Dynamics Observatory (SDO) satellite. The shock front that forms ahead of these huge expulsions of material from the solar atmosphere (the event shown moved at 1400 km/s) can generate large fluxes of highly energetic particles at Earth, which can be a considerable hazard to space-based electronic systems and, with repeated exposure, a health risk for crew on board high-altitude aircraft.
Credit: NASA / SDO

The level of radiation in the space environment is of great interest to scientists and engineers as it poses various threats to man-made systems including damage to electronics on satellites. It can also be a health hazard to astronauts and to a lesser extent the crew of high-altitude aircraft.

The main sources of radiation are galactic cosmic rays (GCRs), which are a continuous flow of highly energetic particles from outside our solar system and solar energetic particles (SEPs), which are accelerated to high energies in short bursts by explosive events on the sun. The amount of radiation in the near-Earth environment from these two sources is partly controlled in a complicated way by the strength of the Sun's magnetic field.

There are theoretical predictions supported by observational evidence that a decline in the average strength of the Sun's magnetic field would lead to an increase in the amount of GCRs reaching near-Earth space. Furthermore there are predictions that, although a decline in solar activity would mean less frequent bursts of SEPs, the bursts that do occur would be larger and more harmful.

Currently spacecraft and aircraft are only designed and operated to offer suitable protection from the levels of radiation that have been observed over the course of the space age. A decline in solar activity would result in increased amounts of radiation in near-Earth space and therefore increased risk of harm to spacecraft and aircraft and the astronauts and aircraft crews that operate them.

By comparing this grand maximum with 24 previous examples, Mr Barnard predicts that there is an 8% chance that solar activity will fall to the very low levels seen in the so-called ‘Maunder minimum’, a period during the seventeenth century when very few sunspots were seen. In this instance, the flux of GCRs would probably increase by a factor of 2.5 from present-day values, and the rate of large SEP events would fall from the 5 per century seen at present to 2 per century.

However, the more probable scenario is that solar activity will decline to approximately half its current value in the next 40 years, in which case the flux of GCRs will increase by a factor of 1.5 and the rate of large SEP events will rise from the current value to 8 events per century. As a result, the near-Earth space radiation environment will probably become more hazardous in the next 40 years.

In presenting his results, Mr Barnard comments: “Radiation in space can be a serious issue for both people and the delicate electronic systems that society depends on. Our research shows that this problem is likely to get worse over the coming decades – and that engineers will need to work even harder to mitigate its impact.”

Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

Astronomers Detect Vast Amounts Of Gas And Dust Around Black Hole In Early Universe

Using the IRAM array of millimetre-wave telescopes in the French Alps, a team of European astronomers from Germany, the UK and France have discovered a large reservoir of gas and dust in a galaxy that surrounds the most distant supermassive black hole known. Light from the galaxy, called J1120+0641, has taken so long to reach us that the galaxy is seen as it was only 740 million years after the Big Bang, when the universe was only 1/18th of its current age. Team leader Dr. Bram Venemans of the Max-Planck Institute for Astronomy in Heidelberg, Germany presented the new discovery on Wednesday 28 March at the National Astronomy Meeting in Manchester.

This image shows the bright emission from carbon and dust in a galaxy surrounding the most distant supermassive black hole known. At a distance corresponding to 740 million years after the Big Bang, the carbon line, which is emitted by the galaxy at infrared wavelengths that are unobservable from the ground, is redshifted by the expansion of the Universe to millimetre wavelengths, where it can be observed using facilities such as the IRAM Plateau de Bure Interferometer.
   Credit: ESO/UKIDSS/SDSS 

The Institut de Radioastronomie Millimetrique (IRAM) array is made up of six 15-m size telescopes that detect emission at millimetre wavelengths (about ten thousand times as long as visible light) sited on the 2550-m high Plateau de Bure in the French Alps. The IRAM telescopes work together to simulate a single much larger telescope in a so-called interferometer that can study objects in fine detail.
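
As the image caption notes, the carbon line is emitted in the far infrared and arrives in the millimetre band because of cosmic expansion. A minimal worked example of that shift, assuming the [CII] 158-micrometre line and a redshift of about 7.1 for this galaxy (values consistent with the paper linked at the end of this article, but not quoted in the text above):

```python
# Observed wavelength of a redshifted line: lambda_obs = (1 + z) * lambda_rest
# Assumed values: the [CII] fine-structure line at 158 micrometres and a
# redshift of roughly 7.1 for J1120+0641 (neither number is quoted above).
lambda_rest_um = 158.0   # rest-frame wavelength, micrometres
z = 7.1                  # approximate redshift of the galaxy

lambda_obs_um = (1 + z) * lambda_rest_um
print(f"observed wavelength: {lambda_obs_um / 1000:.2f} mm")  # about 1.3 mm
```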

A recent upgrade to IRAM allowed the scientists to detect the newly discovered gas and dust that includes significant quantities of carbon. This is quite unexpected, as the chemical element carbon is created via nuclear fusion of helium in the centres of massive stars and ejected into the galaxy when these stars end their lives in dramatic supernova explosions.

Dr Venemans comments: "It’s really puzzling that such an enormous amount of carbon-enriched gas could have formed at these early times in the universe. The presence of so much carbon confirms that massive star formation must have occurred in the short period between the Big Bang and the time we are now observing the galaxy.”

This image of J1120+0641 (red dot in the center) was created by combining survey data in visual and infrared light of the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey.  
 
Credit: ESO/UKIDSS/SDSS  

From the emission from the dust, Venemans and his team are able to show that the galaxy is still forming stars at a rate that is 100 times higher than in our Milky Way.

They give credit to the IRAM upgrade that made the new discovery possible. "Indeed, we would not have been able to detect this emission only a couple of years ago," says team member Dr Pierre Cox, director of IRAM.

The astronomers are excited about the fact that this source is also visible from the southern hemisphere where the Atacama Large Millimeter/submillimeter Array (ALMA), which will be the world's most advanced sub-millimetre / millimetre telescope array, is currently under construction in Chile. Observations with ALMA will enable a detailed study of the structure of this galaxy, including the way the gas and dust moves within it.

Dr Richard McMahon, a member of the team from the University of Cambridge in the UK is looking forward to when ALMA is fully operational later this year. “The current observations only provide a glimpse of what ALMA will be capable of when we use it to study the formation of the first generation of galaxies."

Contacts and sources:
Dr Robert Massey
Royal Astronomical Society

The related research paper can be found at http://arxiv.org/abs/1203.5844

Brain Wiring A No-Brainer? Scans Reveal Astonishingly Simple 3D Grid Structure -- NIH-Funded Study

The brain appears to be wired more like the checkerboard streets of New York City than the curvy lanes of Columbia, Md., suggests a new brain imaging study. The most detailed images, to date, reveal a pervasive 3D grid structure with no diagonals, say scientists funded by the National Institutes of Health.

This detail from a DSI scan shows a fabric-like 3-D grid structure of connections in monkey brain.
 
Credit: Van Wedeen, M.D., Martinos Center and Dept. of Radiology, Massachusetts General Hospital and Harvard University Medical School

"Far from being just a tangle of wires, the brain's connections turn out to be more like ribbon cables -- folding 2D sheets of parallel neuronal fibers that cross paths at right angles, like the warp and weft of a fabric," explained Van Wedeen, M.D., of Massachusetts General Hospital (MGH), A.A. Martinos Center for Biomedical Imaging and the Harvard Medical School. "This grid structure is continuous and consistent at all scales and across humans and other primate species."

Wedeen and colleagues report new evidence of the brain's elegant simplicity March 30, 2012 in the journal Science. The study was funded, in part, by the NIH's National Institute of Mental Health (NIMH), the Human Connectome Project of the NIH Blueprint for Neuroscience Research, and other NIH components.

"Getting a high resolution wiring diagram of our brains is a landmark in human neuroanatomy," said NIMH Director Thomas R. Insel, M.D. "This new technology may reveal individual differences in brain connections that could aid diagnosis and treatment of brain disorders."

Knowledge gained from the study helped shape design specifications for the most powerful brain scanner of its kind, which was installed at MGH's Martinos Center last fall. The new Connectom diffusion magnetic resonance imaging (MRI) scanner can visualize the networks of crisscrossing fibers – by which different parts of the brain communicate with each other – in 10-fold higher detail than conventional scanners, said Wedeen.

"This one-of-a-kind instrument is bringing into sharper focus an astonishingly simple architecture that makes sense in light of how the brain grows," he explained. "The wiring of the mature brain appears to mirror three primal pathways established in embryonic development."

As the brain gets wired up in early development, its connections form along perpendicular pathways, running horizontally, vertically and transversely. This grid structure appears to guide connectivity like lane markers on a highway, limiting the options for growing nerve fibers to change direction during development. If fibers can turn in just four directions – left, right, up or down – this may enforce a more efficient, orderly way for them to find their proper connections, and for the structure to adapt through evolution, the researchers suggest.

Curvature in this DSI image of a whole human brain turns out to be folding of 2-D sheets of parallel neuronal fibers that cross paths at right angles. This picture came from the new Connectom scanner.
 
Credit: Van Wedeen, M.D., Martinos Center and Dept. of Radiology, Massachusetts General Hospital and Harvard University Medical School

Obtaining detailed images of these pathways in human brain has long eluded researchers, in part, because the human cortex, or outer mantle, develops many folds, nooks and crannies that obscure the structure of its connections. Although studies using chemical tracers in neural tracts of animal brains yielded hints of a grid structure, such invasive techniques could not be used in humans.

Wedeen's team is part of a Human Connectome Project Harvard/MGH-UCLA consortium that is optimizing MRI technology to image the pathways more accurately. In diffusion imaging, the scanner detects movement of water inside the fibers to reveal their locations. A high resolution technique called diffusion spectrum imaging (DSI) makes it possible to see the different orientations of multiple fibers that cross at a single location – the key to seeing the grid structure.

In the current study, researchers performed DSI scans on postmortem brains of four types of monkeys – rhesus, owl, marmoset and galago – and in living humans. They saw the same 2D sheet structure containing parallel fibers crossing paths everywhere in all of the brains – even in local path neighborhoods. The grid structure of cortex pathways was continuous with those of lower brain structures, including memory and emotion centers. The more complex human and rhesus brains showed more differentiation between pathways than simpler species.

Among immediate implications, the findings suggest a simplifying framework for understanding the brain's structure, pathways and connectivity.

The technology used in the current study was able to see only about 25 percent of the grid structure in human brain. It was only apparent in large central circuitry, not in outlying areas where the folding obscures it. But lessons learned were incorporated into the design of the newly installed Connectom scanner, which can see 75 percent of it, according to Wedeen.

Much as a telescope with a larger mirror or lens provides a clearer image, the new scanner markedly boosts resolving power by magnifying magnetic fields with magnetically stronger copper coils, called gradients. Gradients make it possible to vary the magnetic field and get a precise fix on locations in the brain. The Connectom scanner's gradients are seven times stronger than those of conventional scanners. Scans that would have previously taken hours – and, thus would have been impractical with living human subjects – can now be performed in minutes.

"Before, we had just driving directions. Now, we have a map showing how all the highways and byways are interconnected," said Wedeen. "Brain wiring is not like the wiring in your basement, where it just needs to connect the right endpoints. Rather, the grid is the language of the brain and wiring and re-wiring work by modifying it."

Contacts and sources:
Jules Asher  
NIH/National Institute of Mental Health

Reference:  Wedeen VJ, Rosene DL, Ruopeng W, Guangping D, Mortazavi F, Hagmann P, Kass JH, Tseng W-YI. The Geometric Structure of the Brain Fiber Pathways: A Continuous Orthogonal Grid. March 30, 2012 Science. 

Friday, March 30, 2012

The "UFO Galaxy" Imaged By Hubble Space Telescope

The NASA/ESA Hubble Space Telescope has spotted the "UFO Galaxy." NGC 2683 is a spiral galaxy seen almost edge-on, giving it the shape of a classic science fiction spaceship. This is why the astronomers at the Astronaut Memorial Planetarium and Observatory, Cocoa, Fla., gave it this attention-grabbing nickname.

Credit: ESA/Hubble & NASA

While a bird's eye view lets us see the detailed structure of a galaxy (such as this Hubble image of a barred spiral), a side-on view has its own perks. In particular, it gives astronomers a great opportunity to see the delicate dusty lanes of the spiral arms silhouetted against the golden haze of the galaxy’s core. In addition, brilliant clusters of young blue stars shine scattered throughout the disc, mapping the galaxy’s star-forming regions.

Perhaps surprisingly, side-on views of galaxies like this one do not prevent astronomers from deducing their structures. Studies of the properties of the light coming from NGC 2683 suggest that this is a barred spiral galaxy, even though the angle we see it at does not let us see this directly.

NGC 2683, discovered on Feb. 5, 1788, by the famous astronomer William Herschel, lies in the northern constellation of Lynx, a constellation named not because of its resemblance to the feline animal, but because it is fairly faint, requiring the "sensitive eyes of a cat" to discern it. And when you manage to get a look at it, you’ll find treasures like this, making it well worth the effort.

This image is produced from two adjacent fields observed in visible and infrared light by Hubble’s Advanced Camera for Surveys. A narrow strip which appears slightly blurred and crosses most of the image horizontally is a result of a gap between Hubble’s detectors. This strip has been patched using images from observations of the galaxy made by ground-based telescopes, which show significantly less detail. The field of view is approximately 6.5 by 3.3 arcminutes.

For different resolutions, visit:  http://www.spacetelescope.org/images/potw1213a/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+hubble_potw+%28Hubble+Picture+of+The


Contacts and sources:
NASA

Cities Expand By Area Equal To France, Germany And Spain Combined In Less Than 20 Years

Urbanization choices to be fundamental to environmental sustainability, say experts; equivalent of a city of 1 million needed weekly given population growth trend.

Unless development patterns change, by 2030 humanity’s urban footprint will occupy an additional 1.5 million square kilometres - comparable to the combined territories of France, Germany and Spain, say experts at a major international science meeting underway in London.

UN estimates show human population growing from 7 billion today to 9 billion by 2050, translating into some 1 million more people expected on average each week for the next 38 years, with most of that increase anticipated in urban centres. And ongoing migration from rural to urban living could see world cities receive yet another 1 billion additional people. Total forecast urban population in 2050: 6.3 billion (up from 3.5 billion today).
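
The weekly figure follows from simple arithmetic; a quick check of the release's own numbers (not an independent projection):

```python
# Rough check of the "1 million more people per week" figure quoted above.
growth = 9e9 - 7e9   # projected population increase from today to 2050
weeks = 38 * 52      # roughly 38 years, in weeks
print(f"{growth / weeks:,.0f} additional people per week on average")
# -> about 1,012,000 per week, i.e. roughly a new city of 1 million every week
```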

The question isn’t whether to urbanize but how, says Dr. Michail Fragkias of Arizona State University, one of nearly 3000 participants at the conference, entitled “Planet Under Pressure”. Unfortunately, he adds, today’s ongoing pattern of urban sprawl puts humanity at severe risk due to environmental problems. Dense cities designed for efficiency offer one of the most promising paths to sustainability, and urbanization specialists will share a wealth of knowledge available to drive solutions.

How best to urbanize is one among many “options and opportunities” under discussion by global environmental change specialists today, Day 2 of the four-day conference March 26-29, convened to help address a wide range of global sustainability challenges and offer recommendations to June’s UN “Rio+20” Earth Summit.

Other leading options and opportunities being addressed include green economic development (Yvo de Boer, former Executive Secretary, UN Framework Convention on Climate Change), securing food and water for the world's poorest (Bina Agarwal, Director, Institute of Economic Growth, Delhi University, India), and planetary stewardship: risks, obstacles and opportunities (Georgina Mace, Professor, Imperial College, London). For a full list of “options and opportunities” conference sessions and topics, see the conference website.

Cities responsible for 70% of CO2 emissions

Shobhakar Dhakal, Executive Director of the Tokyo-based Global Carbon Project, says reforms in existing cities and better planning of new ones offer disproportionately large environmental benefits compared with other options.

“Re-engineering cities is urgently needed for global sustainability,” says Dr. Dhakal, adding that emerging urban areas “have a latecomer’s advantage in terms of knowledge, sustainability thinking, and technology to better manage such fundamentals as trash and transportation.”

Over 70% of CO2 emissions today relate to city needs. In billions of metric tonnes, urban-area CO2 emissions were estimated at about 15 in 1990 and 25 in 2010, with forecasts of growth to 36.5 by 2030, assuming business as usual.

Addressing climate change therefore demands focusing on urban efficiencies, like using weather conditions and time of day-adjusted toll systems to reduce traffic congestion, for example: Congestion worldwide costs economies an estimated 1 to 3% of GDP – a problem that not only wastes fuel and causes pollution, but time – an estimated 4.2 billion hours in the USA alone in 2005. Estimated cost of New York City’s congestion: US$4 billion a year in lost productivity.

An “Internet of things” is forming, he notes – a fast-growing number of high-tech, artificially intelligent, Internet-connected cars, appliances, cameras, roadways, pipelines and more – in total about one trillion in use worldwide today.

High-tech ways to improve the efficiency of urban operations and human health and well-being include:
• Rapid patient screening and diagnostics with digitalised health records;
• Utility meters and sensors that monitor the capacity of the power generation network and continually gather data on supply and demand of electricity;
• Integrated traveller information services and toll road pricing based on traffic, weather and other data;
• Data gathering and feedback from citizens using mobile phones;

“Our focus should be on enhancing the quality of urbanization – from urban space, infrastructure, form and function, to lifestyle, energy choices and efficiency,” says Dr. Dhakal.

Care is needed, he adds, to avoid unwelcome potential problems of dense urbanization, including congestion, pollution, crime, the rapid spread of infectious disease and other societal problems – the focus of social and health scientists who will feature prominently at the conference.

Says Prof. Karen Seto of Yale University, who with colleagues is organizing four of the 160 conference sessions at Planet Under Pressure: “The way cities have grown since World War II is neither socially nor environmentally sustainable, and the environmental cost of ongoing urban sprawl is too great to continue.”

For these reasons, “the planet can’t afford not to urbanize,” says Seto. “People everywhere, however, have increasingly embraced Western styles of architecture and urbanization, which are resource-intense and often not adapted to local climates. The North American suburb has gone global, and car-dependent urban developments are more and more the norm.”

How humanity urbanizes to define the decades ahead

Fragkias notes that while there were fewer than 20 cities of 1 million or more a century ago, there are 450 today. While urban areas cover less than five per cent of Earth’s land surface, “the enlarged urban footprint forecast is far more significant proportionally when vast uninhabitable polar, desert and mountain regions, world breadbasket plains and other prime agricultural land and protected areas are subtracted from the calculation.”

“We have a unique opportunity now to plan for a coming explosion of urbanization in order to decrease pressure on ecosystems, improve the livelihoods of billions of people and avoid the occurrence of major global environmental problems and disasters. That process cannot wait,” says Roberto Sánchez-Rodríguez, Professor Emeritus of Environmental Sciences at the University of California, Riverside.

“It is also important to stress that differences exist in the urbanization process in high-, low- and middle-income countries and reflect them in our strategies. We need to move beyond traditional approaches to planning and be responsive to informal urban growth, to the value of ecosystem services, and to the need of multidimensional perspectives (social, economic, cultural, environmental, political, biophysical).”

Ultimately, the researchers say, solutions include:
• Planning and investments in public infrastructure that encourage transit and accessibility
• Better land-use zoning and building standards that increase efficiency and multiple uses
• Reversing the trend to ever larger homes
• Ending subsidies that promote low density and leapfrog development and discourage compact development, or favour cars at the expense of public transit
• Improving the quality of inner city schools and addressing other growing urban challenges, such as growing income inequality, segregation and social polarization, crime rates and heightened health threats including stress
• Through social marketing, fostering demand for efficient styles of living

Beyond city limits

Professor Sybil Seitzinger, Executive Director of the International Geosphere-Biosphere Programme, said, “A truly sustainable planet will require cities to think beyond city limits.”

“Everything brought into the city from outside – food, water, products and energy – needs to be sourced sustainably. We need to rethink the resource flow to cities.”

Says Dr. Mark Stafford Smith, Planet Under Pressure co-chair: “A more general theme of the conference is underlined by the urbanization issue – that much of the planet’s future is tied up in interconnected issues – climate change and city design, city resource demands and impacts on rural areas, rural food and water productivity and the ability of cities to continue functioning. The deep intensity of interconnectedness of these issues requires an integrated approach, tackling challenges together rather than each individually, one at a time.”

Professor Elinor Ostrom of Indiana University, the 2009 Nobel laureate in economics and opening day plenary speaker at the Planet Under Pressure conference, underlines the importance of cities in giving effect to globally-developed policies to achieve environmental sustainability.

Indeed, through initiatives such as C40, a consortium of cities committed to emissions reductions, cities are showing strong leadership. This approach can help ensure a move to a more sustainable pathway should global policies fail to deliver.




Contacts and sources:
Planet Under Pressure

Images Capture Split Personality Of Dense Suspensions

Stir lots of small particles into water, and the resulting thick mixture appears highly viscous. When this dense suspension slips through a nozzle and forms a droplet, however, its behavior momentarily reveals a decidedly non-viscous side. University of Chicago physicists recorded this surprising behavior in laboratory experiments using high-speed photography, which can capture action taking place in a hundred-thousandth of a second or less.

UChicago graduate student Marc Miskin and Heinrich Jaeger, the William J. Friedman and Alicia Townsend Friedman Professor in Physics, expected that the dense suspensions in their experiments would behave strictly like viscous liquids, which tend to flow less freely than non-viscous liquids. Viscosity certainly does matter as the particle-laden liquid begins to exit the nozzle, but not at the moment where the drop’s thinning neck breaks in two. 

In this image, lighted from the front, water containing zirconium dioxide particles measuring 850 microns in diameter detaches from a nozzle. The suspension neck maintains a symmetric profile until the neck gradually narrows to a width of only one particle, when the liquid surrounding the particles ruptures.
 
Courtesy of Heinrich Jaeger, Marc Miskin

New behavior appears to arise from feedback between the tendencies of the liquid and what the particles within the liquid can allow. “While the liquid deforms and becomes thinner and thinner at a certain spot, the particles also have to move with that liquid. They are trapped inside the liquid,” Jaeger explained. As deformation continues, the particles get in each other’s way.

“Oil, honey, also would form a long thread, and this thread would become thinner and break in a way characteristic of a viscous liquid,” Jaeger said. “The particles in a dense suspension conspire to interact with the liquid in a way that, when it’s all said and done, a neck forms that shows signs of a split personality: It thins in a non-viscous fashion, like water, all the while exhibiting a shape more resembling that of its viscous cousins.”

It took Miskin and Jaeger six months to become convinced that the viscosity of the suspending liquid was a minor player in their experiments. “It is a somewhat heretical view that this viscosity should not matter,” Jaeger said. “Who would have thought that?”

Miskin and Jaeger presented their results in the March 5 online early edition and the March 20 print edition of the Proceedings of the National Academy of Sciences.

In their experiments, Miskin and Jaeger compared a variety of pure liquids to mixtures in which particles occupy more than half the volume.

“The results indicate that what we know about drop breakup from pure liquids does not allow us to predict phenomena observed in their experiments,” said Jeffrey Morris, professor of chemical engineering at City College of New York. “The most striking and interesting result is the fact that, despite these being very viscous mixtures, the viscosity plays little role in the way a drop forms.”

Few studies have examined droplet formation in dense suspensions. As Morris noted, such work could greatly impact applications such as inkjet printing, combustion of slurries involving coal in oil, and the drop-by-drop deposition of cells in DNA microarrays.
Scientific defiance

In these applications particles often are so densely packed that their behavior defies a simple scientific description, one that might only take into account average particle size and the fraction of the liquid that the particles occupy, Morris explained. The UChicago study showed that particles cause deformations and often protrude through the liquid, rendering any such description incomplete until fundamental questions about the interface between a liquid mixture and its surroundings are properly addressed.

“Miskin and Jaeger provide arguments for the importance of these protrusions in their work and suggest that the issue is of broader relevance to any flow where a particle-laden liquid has an interface with another fluid,” Morris said.

Miskin and Jaeger verified their results by systematically evaluating different viscosities, particle sizes and suspending liquids, and developed a mathematical model to explain how the droplet necks evolve over time until they break apart.

One initially counter-intuitive prediction of this model was that larger particles should produce behavior resembling that in pure water without any particles. “If you want to make it behave more like a pure non-viscous liquid, you want to make the particles large,” said Jaeger, who finds himself intrigued by nature’s seemingly endless store of surprises.

Miskin and Jaeger indeed observed this when the particle size approached a significant fraction of the nozzle diameter, making the particles visible to the naked eye.

“You think you have a pretty good idea of what should happen, and instead there’s a surprise at every corner. Honestly, finding surprises is what I love about this work,” Jaeger said.

Contacts and sources:
Steve Koppes
University of Chicago 

Citation: “Droplet formation and scaling in dense suspensions,” by Marc Z. Miskin and Heinrich M. Jaeger, Proceedings of the National Academy of Sciences, March 20, 2012, Vol. 109, No. 12, pp. 4389-4394

Funding: National Science Foundation and the Keck Initiative for Ultrafast Imaging at the University of Chicago 

Civilization At Risk, Says First "State Of The Planet" Declaration

Scientists issued the first "State of the Planet" declaration at a major gathering of experts on global environmental and social issues, held in advance of the UN Rio+20 Summit in June.

The declaration opens: "Research now demonstrates that the continued functioning of the Earth system as it has supported the well-being of human civilization in recent centuries is at risk." It states that consensus is growing that we have driven the planet into a new epoch, the Anthropocene, where many planetary-scale processes are dominated by human activities. It concludes society must not delay taking urgent and large-scale action.

"This is a declaration to our globally interconnected society," said Dr Lidia Brito, director of science policy, natural sciences, UNESCO, and conference co-chair.

"Time is the natural resource in shortest supply. We need to change course in some fundamental way this decade," she added.

Watch a 3-minute journey through the last 250 years of our history, from the start of the Industrial Revolution to the Rio+20 Summit. The film charts the growth of humanity into a global force on a scale equivalent to major geological processes. The film is part of the world's first educational web portal on the Anthropocene, commissioned by the Planet Under Pressure conference and developed and sponsored by anthropocene.info
   


Over 3,000 experts in climate change, environmental geo-engineering, international governance, the future of the oceans and biodiversity, global trade, development, poverty alleviation, food security and more discussed the intricate connections between all the different systems and cycles governing our ocean, air, land and the human and animal life dependent on those environments.

Dr Mark Stafford Smith, Planet Under Pressure conference co-chair, said, "In the last decade we have become a highly interconnected society. We are beginning to realise this new state of humanity can be harnessed for rapid innovation."

"But we need to provide more open access to knowledge, we need to move away from GDP as the only measure of progress, and we need a new way of working internationally that is fit for the 21st century," he added. "This conference has provided new ideas and practical solutions for the way forward."

The declaration concludes that, "a highly interconnected global society has the potential to innovate rapidly. The Planet Under Pressure conference has taken advantage of this potential to explore new pathways."

But, say Brito and Stafford Smith, effective planetary stewardship also requires: "More ways of participation at all levels; stronger leadership in all sectors of society; greater connectivity between those generating new knowledge and the rest of society; and rethinking the roles of science, policy, industry and civil society."

The conference presented new initiatives as recommendations for the Rio+20 Summit:
  • Going beyond GDP by taking into account the value of natural capital when measuring progress.
  • A new framework for developing a set of goals for global sustainability for all nations.
  • Creating a UN Sustainable Development Council to integrate social, economic and environmental policy at the global level.
  • Launching a new international research programme, Future Earth, that will focus on solutions.
  • Initiating regular global sustainability analyses.
The conference also previewed the first Inclusive Wealth Report, developed by UN University's International Human Dimensions Programme (UNU-IHDP) and the UN Environment Programme.

Based on a new economic indicator that measures natural, human and produced capital, the tool goes beyond GDP and can provide guidance for economic development towards sustainability.

Says Professor Anantha Duraiappah, Executive Director of UNU-IHDP: "Until the yardsticks which society uses to evaluate progress are changed to capture elements of long-term sustainability, the planet and its people will continue to suffer under the weight of short-term growth policies."

The report, scheduled to be published at Rio+20, will describe the capital base of 20 nations: Australia, Brazil, Canada, Chile, China, Colombia, Ecuador, France, Germany, India, Japan, Kenya, Nigeria, Norway, the Russian Federation, Saudi Arabia, South Africa, USA, United Kingdom and Venezuela.

Off the back of the declaration and recognizing the interconnectedness of the current challenges, the four major international research programmes under ICSU that direct global environmental change science (the International Geosphere-Biosphere Programme; DIVERSITAS; the International Human Dimensions Programme; and the World Climate Research Programme) aim to rapidly reorganize to focus on global sustainability solutions.

Additionally, the programmes are proposing to develop platforms that facilitate cooperation with all sectors of society to develop a new strategy for creating and rapidly translating knowledge into action. "Such interactions should be designed to bring societal relevance and trust to science-policy interfaces, and more effectively inform decision-making to keep pace with rapid global change," reads the declaration. This strategy will form part of "a new contract between science and society" and includes the launch of a new international research programme, Future Earth.

The Planet Under Pressure conference marked the beginning of this new shift in direction, according to the conference co-chairs.

Delegates in London were joined by almost 8,000 people online worldwide and reached more than a million people through social media in the first three conference days.

Dr Brito said, "We have a positive message: strong leadership from all sectors and harnessing the increased connectivity offers some hope that the risk of long-term environmental crises can be minimized."

"This new connectivity is the beginning of how the scientific community needs to operate. We need a powerful network of innovation, North and South. This approach needs to be part of our DNA from now on," she added.

In recorded remarks, UN Secretary-General Ban Ki-moon said that "climate change, the financial crisis and food, water and energy insecurity threaten human wellbeing and civilization as we know it."

"My High-level Panel on Global Sustainability has just recommended that I consider naming a chief scientific advisor or establishing a scientific board to advise me and other organs of the United Nations.

"I also intend to engage the scientific community on other projects, such as the Global Sustainable Development Outlook report," he added, "I am also ready to work with the scientific community on the launch of a large-scale scientific initiative."

UN Rio+20 Executive Coordinator Elizabeth Thompson said, "Politician or public servant, scientist or citizen, community or company, we are the shareholders of Earth Incorporated and have a joint responsibility to protect our common patrimony."

"The scientific community can help us make sense of these complex and interconnected challenges."

Conference delegates also heard how research advances in the previous decade have shown humanity's impact on Earth's life support system has become comparable to planetary scale geological processes such as ice ages. "Consensus is growing we have driven the planet into a new epoch, the Anthropocene, in which many Earth system processes are now dominated by human activities," the declaration states.

This new force risks pushing parts of the Earth system – the sum of our planet's interacting physical, chemical, and biological processes including life and society – past so-called tipping points.

Tipping points include the disappearance of summer sea ice in the Arctic, the release of large quantities of greenhouse gases from thawing Arctic permafrost, and the drying out of the Amazon rainforest. If these tipping points are crossed, they can increase the likelihood of crossing other thresholds, generating unacceptable and often irreversible environmental change on global and regional scales, with serious consequences for humans and all other forms of life on the planet.

The declaration stated that existing international arrangements are failing to deal with long-term development challenges such as climate change and biodiversity loss in an interconnected way, indicating that it would be a mistake to rely on single international agreements. Research indicated that comprehensive sustainability policies at local, sub-national, national, and regional levels should be encouraged to provide "essential safety nets should singular global policies fail."

An animated time-lapse map of anthropogenic carbon dioxide emissions, spanning from the 18th century to the first decade of the 21st century, shows emissions beginning in England and radiating out to Europe, the US and then Asia.

The video makes it easy to visualize the geographical distribution of, and trends in, anthropogenic carbon dioxide emissions over the 256 years since the start of the Industrial Revolution.

Contacts and sources:
Owen Gaffney
Earth System Science Partnership

More information on the web: www.planetunderpressure2012.net/
Follow the conference via RSS: www.planetunderpressure2012.net/xml/news.xml
Live webstreaming, daily news show and live audio feeds: http://c3379912.workcast.net/planetunderpressure.html


* The State of the Planet Declaration is by the Co-Chairs of the Planet Under Pressure conference, Dr Lidia Brito and Dr Mark Stafford Smith, supported by the conference Scientific Organizing Committee.

The statement in full is available online at http://www.planetunderpressure2012.net/

The research discussed in the press releases, the conclusions drawn and the opinions offered are those of individual speakers or research teams at the Planet Under Pressure conference. 

US Stockpile Security And International Monitoring Capabilities Strengthened, Says New Report On Technical Issues Behind The Comprehensive Nuclear Test Ban Treaty

The United States is now in a better position than at any time in the past to maintain a safe and effective nuclear weapons stockpile without testing and to monitor clandestine nuclear testing abroad, says a new report from the National Research Council. The report, requested by the Office of the Vice President and the White House Office of Science and Technology Policy, reviews and updates a 2002 study that examined the technical concerns raised about the Comprehensive Nuclear Test Ban Treaty (CTBT). The report does not take a position on whether the U.S. should ratify the treaty.

The first atomic test, "Trinity", took place on July 16, 1945.
Credit: Wikipedia

"So long as the nation is fully committed to securing its weapons stockpile and provides sufficient resources for doing so, the U.S. has the technical capabilities to maintain safe, reliable nuclear weapons into the foreseeable future without the need for underground weapons testing," said Ellen D. Williams, chief scientist at BP and chair of the committee that wrote the report. "In addition, U.S. and international technologies to monitor weapons testing by other countries are significantly better now than they were a decade ago."

U.S. verification of compliance with the CTBT would be accomplished through a combination of information gathered by the U.S. military and intelligence agencies, the International Monitoring System (IMS), which is now more than 80 percent complete, and other publicly available geophysical data. U.S. global monitoring capabilities are superior to those of the IMS and can focus on countries of national concern, the report says. However, the IMS provides valuable data to the U.S., both as a common baseline for international assessment and as a way of disclosing potential violations when the U.S. needs to keep its own data classified. Therefore, the U.S. should support both the completion of the IMS and its operations regardless of whether CTBT enters into force, the report says.

The improvements in the IMS and the U.S. monitoring network reduce the likelihood of undetected nuclear explosion testing and inhibit development of new types of strategic nuclear weapons, the report says. Technologies for detecting clandestine testing in four environments -- underground, underwater, in the atmosphere, and in space -- have improved significantly in the past decade. In particular, seismology, the most effective approach for monitoring underground nuclear explosion testing, can now detect underground explosions well below 1 kiloton in most regions. A kiloton is equivalent to 1,000 tons of chemical high explosive. The nuclear weapons that were used in Japan in World War II had yields in the range of 10 to 20 kilotons.
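
As a rough back-of-the-envelope illustration of why sub-kiloton detection is plausible, one commonly cited empirical relation for well-coupled underground explosions in hard rock ties seismic body-wave magnitude to yield as roughly mb ≈ 4.45 + 0.75 log10(yield in kilotons). That relation is introduced here only for illustration; it is an assumption, not a figure taken from the report.

# Assumed empirical magnitude-yield relation, for illustration only;
# it is not taken from the NRC report.
import math

def body_wave_magnitude(yield_kt):
    # Approximate mb for a well-coupled hard-rock explosion (assumed formula).
    return 4.45 + 0.75 * math.log10(yield_kt)

for y in (0.1, 1.0, 10.0, 20.0):
    print(f"{y:5.1f} kt  ->  mb ~ {body_wave_magnitude(y):.2f}")

# A fully coupled 1-kiloton test comes out near mb 4.5, and even 0.1 kt is
# near mb 3.7, consistent with the report's statement that explosions well
# below 1 kiloton can now be detected in most regions.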

The report acknowledges that weapons threats could arise without being detected even if a test ban existed. For example, a rudimentary nuclear weapon could be built and deployed without testing by a nation with access to sufficient material, or a previously tested weapon design might be manufactured without further testing by a country that obtained the design. Such weapons could pose local or regional threats "of great concern," but they would not require the U.S. to return to weapons testing in order to respond, the report says. The U.S. already has or could produce weapons of equal or greater capability based on its own nuclear explosion test history. In addition, if the U.S. determined that there was a technical need to develop a new type of nuclear weapon that has not been tested previously, it could invoke the "supreme national interest clause" and withdraw from the CTBT.

The administration and Congress should develop and implement a comprehensive plan with a clear strategy for maintaining the nation's nuclear deterrence capabilities and competencies, the report says. The nation's Stockpile Stewardship Program, which collectively maintains the safety and reliability of the U.S. nuclear weapons arsenal in the absence of testing, has had "significant advances" in technical knowledge and capability since the 2002 report, such as completion and operation of new major research facilities. The report calls for a strong national commitment to recruiting and sustaining high-quality workers, repairing aging infrastructure, and strengthening the science, engineering, and technology base for both stockpile maintenance and monitoring capabilities.

Initiated nearly 50 years ago under President Eisenhower, negotiations for the CTBT were completed in 1996. The treaty has been signed by 182 nations, including the United States. The pact, which would prohibit nuclear weapons testing in all environments, would establish a network of monitoring stations to help track compliance and provide for inspections of suspected tests. It would permit research, development, and design activities by the nuclear-weapon states, but experiments producing a nuclear yield would be forbidden.

The treaty would enter into force after ratification by the 44 countries that either already possessed nuclear weapons or had nuclear reactors in 1996. To date, 36 have done so, including Russia, the United Kingdom, and France. The U.S. Senate failed to give its advice and consent to ratification in 1999. The Obama administration has indicated that it will seek ratification. The U.S. has observed a testing moratorium since October 1992.

The study was sponsored by the U.S. Department of Energy, the U.S. Department of State, the Carnegie Corporation of New York, and the National Academy of Sciences. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies. They are private, nonprofit institutions that provide science, technology, and health policy advice under a congressional charter. The Research Council is the principal operating agency of the National Academy of Sciences and the National Academy of Engineering. For more information, visit http://national-academies.org. A committee roster follows.
Contacts and sources:
Molly Galvin
National Academy of Sciences

Newly Discovered Foot Points To A New Kid On The Hominin Block

It seems that “Lucy” was not the only hominin on the block in eastern Africa about 3 million years ago.

A team of researchers that included Johns Hopkins University geologist Naomi Levin has announced the discovery of a partial foot skeleton with characteristics (such as an opposable big toe bone) that don’t match those of Lucy, the human ancestor (or hominin) known to inhabit that region and considered by many to be the ancestor of all modern humans.

The discovery is important because it provides the first evidence that at least two pre-human ancestor species lived in the Afar region of Ethiopia between 3 million and 4 million years ago, and that they had different ways of moving around the landscape.

“The foot belonged to a hominin species—not yet named—that overlaps in age with Lucy (Australopithecus afarensis). Although it was found in a neighboring project area that is relatively close to the Lucy fossil site, it does not look like an A. afarensis foot,” explains Levin, an assistant professor in the Morton K. Blaustein Department of Earth and Planetary Sciences in the Krieger School of Arts and Sciences.

Naomi Levin 
Credit: Johns Hopkins University

A paper in the March 29 issue of Nature describes this foot, which is similar in some ways to the remains of another hominin fossil, called Ardipithecus ramidus, but which has different features.

Its discovery could shed light on how our ancestors learned to walk upright, according to Levin.

“What is clear is that the foot of the Burtele hominin was able to grasp items much better than its contemporary, A. afarensis, would have been able to do, which suggests that it was adept at moving around in trees,” says Levin, who was part of the team led by Yohannes Haile-Selassie of the Cleveland Museum of Natural History that also included researchers from Case Western Reserve University and the Berkeley Geochronology Center.

The finding is important, Levin says, because it shows that there is much more to learn about the role of locomotion in human evolution.

“This fossil makes the story of locomotion more complex, and it shows that we have a lot more to learn about how humans transitioned from moving around in trees to moving around on the ground—on two legs. This fossil shows that some hominins may have been capable of doing both,” she says.

The fossil, dated to approximately 3.4 million years ago, was discovered in 2009 in sediments along the Burtele drainage in the Afar region of Ethiopia. The area is now very hot and dry, but the researchers view it as having been wetter and more wooded when the Burtele hominin lived, based on the deltaic sedimentary context, results from isotopic studies and the range of fossil animals found near the site.

“We’re just at the beginning of understanding the environmental context for this important fossil. It will be a critical part of understanding this hominin, its habitat and the role that the environment played in its evolution,” she says.

Related websites
Naomi Levin:  http://eps.jhu.edu/bios/naomi-levin/index.html

Contacts and sources:
Lisa DeNike
Johns Hopkins University

Crocodiles Trump T. Rex As Heavyweight Bite-Force Champions

Paul M. Gignac, Ph.D., Instructor of Research, Department of Anatomical Sciences, Stony Brook University School of Medicine, and colleagues at Florida State University and in California and Australia, found in a study of all 23 living crocodilian species that crocodiles can kill with the strongest bite force measured for any living animal. The study also revealed that the bite forces of the largest extinct crocodilians exceeded 23,000 pounds, roughly twice the estimated bite force of the mighty Tyrannosaurus rex. Their data, reported online in PLoS ONE, contribute to the understanding of performance in animals from the past and provide unprecedented insight into how evolution has shaped that performance.

Dr. Paul M. Gignac works with a 12-foot American alligator.
Credit: Stony Brook University

In “Insights into the Ecology and Evolutionary Success of Crocodilians Revealed through Bite-Force and Tooth-Pressure Experimentation,” the researchers detail their examination of the bite force and tooth pressure of every species of alligator, crocodile, caiman, and gharial. Led by Project Director Gregory Erickson, Ph.D., Professor of Biological Science at Florida State University, the study took more than a decade to complete and required a diverse team of croc handlers and scientists.

“Crocodiles and alligators are the largest, most successful reptile hunters alive today, and our research illustrates one of the key ways they have maintained that crown,” says Dr. Gignac.

The team roped 83 adult alligators and crocodiles and placed a force meter between their back teeth and recorded the bite force. They found that gators and crocs have pound-for-pound comparable maximal bite forces, despite different snouts and teeth. Contrary to previous evolutionary thinking, they determined that bite force was correlated with body size but showed surprisingly little correlation with tooth form, diet, jaw shape, or jaw strength.
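
The body-size relationship described here is the kind of pattern usually tested with a simple log-log (allometric) regression of bite force against body mass. The minimal sketch below shows one way such a test could be run; the numbers are invented placeholders, not data from the study, and the variable names are hypothetical.

# Minimal sketch of a log-log regression of bite force on body mass.
# All values are invented placeholders, not data from the paper.
import numpy as np

body_mass_kg = np.array([25.0, 60.0, 120.0, 250.0, 400.0])           # hypothetical
bite_force_n = np.array([1200.0, 3100.0, 6500.0, 13000.0, 16500.0])  # hypothetical

log_m, log_f = np.log10(body_mass_kg), np.log10(bite_force_n)
slope, intercept = np.polyfit(log_m, log_f, 1)   # allometric exponent and offset
r = np.corrcoef(log_m, log_f)[0, 1]              # strength of the correlation

print(f"allometric slope ~ {slope:.2f}, correlation r ~ {r:.2f}")

# A strong correlation with body mass, alongside weak correlations for diet,
# snout shape or tooth form, is the pattern the authors report.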

Dr. Gignac emphasizes that the study results suggest that once crocodilians evolved their remarkable capacity for force-generation, further adaptive modifications involved changes in body size and the dentition to modify forces and pressures for different diets.

The findings are unique, to the point that the team has been contacted by editors of the “Guinness Book of World Records” inquiring about the data.

Among living crocodilians, the bite-force champion is a 17-foot saltwater croc, with a force measured at 3,700 pounds.

“This kind of bite is like being pinned beneath the entire roster of the New York Knicks,” says Dr. Gignac, illustrating the tremendous force displayed by the living creatures. “But with bone-crushing teeth.”
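
The analogy is easy to check with rough, assumed numbers (not figures from the study): an NBA roster of about 15 players averaging roughly 250 pounds each works out to about 3,750 pounds, close to the 3,700-pound bite force measured for the 17-foot saltwater crocodile.

# Rough sanity check of the analogy; roster size and average weight are assumptions.
players = 15           # approximate NBA roster size (assumed)
avg_weight_lb = 250    # rough average player weight in pounds (assumed)
print(players * avg_weight_lb)   # 3750, versus the 3,700-lb measured bite force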

The research was funded by the National Geographic Society and the Florida State University College of Arts and Sciences.

The Department of Anatomical Sciences is one of 25 departments within the Stony Brook University School of Medicine. The department includes graduate and doctoral programs in Anatomical Sciences. The faculty consists of prominent and internationally recognized researchers in the fields of Anthropology, Vertebrate Paleontology and Systematics, and Functional Morphology.
Contacts and sources:
Stony Brook University

 Citation: Insights into the Ecology and Evolutionary Success of Crocodilians Revealed through Bite-Force and Tooth-Pressure Experimentation

Gregory M. Erickson (1), Paul M. Gignac (1), Scott J. Steppan (1), A. Kristopher Lappin (2), Kent A. Vliet (3), John D. Brueggen (4), Brian D. Inouye (1), David Kledzik (4), Grahame J. W. Webb (5)

(1) Department of Biological Science, Florida State University, Tallahassee, Florida, United States of America; (2) Biological Sciences Department, California State Polytechnic University, Pomona, California, United States of America; (3) Department of Biology, University of Florida, Gainesville, Florida, United States of America; (4) St. Augustine Alligator Farm Zoological Park, St. Augustine, Florida, United States of America; (5) Wildlife Management International, Karama, and School of Environmental Research, Charles Darwin University, Darwin, Northern Territory, Australia