Unseen Is Free

Thursday, May 31, 2012

When Life Began On Earth: NASA Astrobiologists Find Iron's Role In Life On Early Earth

On the periodic table of the elements, iron and magnesium are far apart. But new evidence discovered by the NASA Astrobiology Institute (NAI) team at the Georgia Institute of Technology suggests that three billion years ago, iron did the job magnesium does today in helping Ribonucleic acid (RNA), a molecule essential for life, assume the molecular shapes necessary for biology. 

The shape of an RNA molecule remains the same with either magnesium (Mg) or iron (Fe), as shown in these ball-and-stick models of RNA.
Credit: NAI

The results of the study are scheduled to be published online on May 31, 2012 in the journal PLoS ONE.

There is considerable evidence that the evolution of life passed through an early stage when RNA played a more central role, doing the jobs of DNA and protein before they appeared. During that time, more than three billion years ago, the environment lacked oxygen but had lots of available iron.

"One of the greatest challenges in astrobiology is understanding how life began on Earth billions of years ago when the environment was very different than it is today," said Carl Pilcher, director of the Astrobiology Institute at NASA's Ames Research Center, Moffett Field, Calif. "This study shows us how conditions on early Earth may have been conducive to the development of life."

In the new study, researchers from the Georgia Institute of Technology, Atlanta, used experiments and numerical calculations to show that under early Earth conditions, with little oxygen around, iron can substitute for magnesium in RNA, enabling it to assume the shapes it needs to catalyze life's chemical reactions. In fact, RNA catalyzed those reactions better with iron than with magnesium.

"The primary motivation of this work was to understand RNA under plausible early Earth conditions," said Loren Williams, a professor in the School of Chemistry and Biochemistry at Georgia Tech and leader of the NAI team. "Our hypothesis is that RNA evolved in the presence of iron and is optimized to work with iron."

Free oxygen gas was almost nonexistent more than three billion years ago in early Earth’s atmosphere. When oxygen began entering the environment as a product of photosynthesis, it turned Earth’s available iron to rust, forming massive banded iron deposits that are still mined today. When all that iron got tied up in those deposits, it was no longer available. The current study indicates that RNA then began using magnesium, resulting in life as we know it today.

In future studies, the researchers plan to investigate what unique functions RNA can perform with iron that are not possible with magnesium.

In addition to Williams, the contributors included Georgia Tech School of Biology postdoctoral fellow Shreyas Athavale, research scientist Anton Petrov, and professors Roger Wartell and Stephen Harvey, as well as Georgia Tech School of Chemistry and Biochemistry postdoctoral fellow Chiaolong Hsiao and professor Nicholas Hud.

This study was funded by the NASA Astrobiology Institute, a virtual institute located and managed at NASA Ames Research Center, Moffett Field, Calif.

For more information about the NASA Astrobiology Institute, visit:
http://www.astrobiology.nasa.gov/nai

For more information about NASA's Ames Research Center, visit:
http://www.nasa.gov/ames

Contacts and sources:
Karen Jenvey
NASA Ames Research Center

Where Stars Are Born: Astrophysicists Explore Vast Molecular Clouds Of Orion Nebula

A University of Delaware-led research team reports an advance in the June 1 issue of Science that may help astrophysicists more accurately analyze the vast molecular clouds of gas and dust where stars are born.

Krzysztof Szalewicz, professor of physics and astronomy at UD, was the principal investigator on the National Science Foundation-funded research project, which solved equations of quantum mechanics to more precisely describe the interactions between molecules of hydrogen and carbon monoxide, the two most abundant gases in space.

The vast Orion Nebula is a celestial maternity ward crowded with newborn stars, some 1,500 light years away. 
Photo by NASA/ESA

Such calculations are important to spectroscopy, the science that identifies atoms or molecules by the color of light they absorb or emit. Sir Isaac Newton discovered that sunlight shining through a prism would separate into a rainbow of colors. Today, spectroscopy is essential to fields ranging from medical diagnostics to airport security.

In astrophysics, spectrometers attached to telescopes orbiting in space measure light across the visible, infrared, and microwave spectrum to detect and quantify the abundance of chemical elements and molecules, as well as their temperatures and densities, in places such as the vast Orion Nebula, a celestial maternity ward crowded with newborn stars, some 1,500 light years away.

Whereas carbon monoxide — the second-most abundant molecule in space — is easily detected by spectrometers, such is not the case for hydrogen. Despite ranking as the most abundant molecule in space, hydrogen emits and absorbs very little light in the spectral ranges that can be observed. Thus, researchers must deduce information about molecular hydrogen from its weak interactions with carbon monoxide in the interstellar medium (the stuff between the stars).

“The hydrogen spectra get lost on the way, but carbon monoxide is like a lighthouse — its spectra are observed more often than those of any other molecule,” Szalewicz says. “You can indirectly tell what the density of hydrogen is from the carbon monoxide spectra.”

Szalewicz and co-authors Piotr Jankowski, a former UD postdoctoral researcher who is now on the chemistry faculty at Nicolaus Copernicus University in Torun, Poland, and A. Robert W. McKellar, from the National Research Council in Ottawa, Canada, wanted to revisit the spectra of the hydrogen and carbon monoxide complex. The first time such a calculation was done was 14 years ago by Szalewicz and Jankowski, parallel to an accurate measurement by McKellar.

In their computational model, the scientists needed to determine first how electrons move around nuclei. To this end, they included simultaneous excitations of up to four electrons at a time. The energy levels produced by the rotations and vibrations of the nuclei then were computed and used to build a theoretical spectrum that could be compared with the measured one.

The team’s calculations, accomplished with the high-powered kolos computing cluster at UD, have resulted in theoretical spectra 100 times more accurate than those published 14 years ago. The theoretical and experimental spectra are now in near-perfect agreement, which allowed the team to “assign” the spectrum, that is, to determine how each spectral feature is related to the underlying motion of the nuclei, Szalewicz says.

The combined theoretical and experimental knowledge about this molecular complex now can be used to analyze recent results from satellite observatories to search for its direct spectral signal. Even more importantly, this knowledge can be used to get better information about the hydrogen molecule in space from indirect observations, Szalewicz notes.

“Spectroscopy provides the most precise information about matter that is available,” he says. “I am pleased that our computations have untangled such a complex problem.”

Szalewicz's expertise lies in numerically solving the equations for the motions of electrons that make molecules attract or repel each other, and then using these interactions to study the properties of clusters and condensed phases of matter.

His research has unveiled hidden properties of water and found a missing state in the beryllium dimer, both results previously reported in Science, and his findings about helium may lead to more accurate standards for measuring temperature and pressure.

Szalewicz was elected to the International Academy of Quantum Molecular Science in 2010 and is a fellow of the American Physical Society.

Contacts and sources:
Article by Tracey Bryant

57% Fuel Efficiency: New Small Solid Oxide Fuel Cell Reaches Record Efficiency

Individual homes and entire neighborhoods could be powered with a new, small-scale solid oxide fuel cell system that achieves up to 57 percent efficiency, significantly higher than the 30 to 50 percent efficiencies previously reported for other solid oxide fuel cell systems of its size, according to a study published in this month's issue of Journal of Power Sources.

Pacific Northwest National Laboratory developed this highly efficient, small-scale solid oxide fuel cell system, which features PNNL-developed microchannel technology and two unusual processes called external steam reforming and fuel recycling.
Credit: PNNL

The smaller system, developed at the Department of Energy's Pacific Northwest National Laboratory, uses methane, the primary component of natural gas, as its fuel. The entire system was streamlined to make it more efficient and scalable by using PNNL-developed microchannel technology in combination with processes called external steam reforming and fuel recycling. PNNL's system includes fuel cell stacks developed earlier with the support of DOE's Solid State Energy Conversion Alliance.

"Solid oxide fuel cells are a promising technology for providing clean, efficient energy. But, until now, most people have focused on larger systems that produce 1 megawatt of power or more and can replace traditional power plants," said Vincent Sprenkle, a co-author on the paper and chief engineer of PNNL's solid oxide fuel cell development program. "However, this research shows that smaller solid oxide fuel cells that generate between 1 and 100 kilowatts of power are a viable option for highly efficient, localized power generation."

Sprenkle and his co-authors had community-sized power generation in mind when they started working on their solid oxide fuel cell, also known as an SOFC. The pilot system they built generates about 2 kW of electricity, roughly the amount of power a typical American home consumes. The PNNL team designed its system so it can be scaled up to produce between 100 and 250 kW, which could provide power for about 50 to 100 American homes.

What is an SOFC?

Fuel cells are a lot like batteries in that they use anodes, cathodes and electrolytes to produce electricity. But unlike most batteries, which stop working when they use up their reactive materials, fuel cells can continuously make electricity if they have a constant fuel supply.

SOFCs are one type of fuel cell that operates at higher temperatures - between about 1100 and 1800 degrees Fahrenheit - and can run on a wide variety of fuels, including natural gas, biogas, hydrogen and liquid fuels such as diesel and gasoline that have been reformed and cleaned. Each SOFC is made of ceramic materials, which form three layers: the anode, the cathode and the electrolyte. Air is pumped up against an outer layer, the cathode. Oxygen from the air becomes a negatively charged ion, O2-, where the cathode and the inner electrolyte layer meet. The ion moves through the electrolyte to reach the final layer, the anode. There, the oxygen ion reacts with a fuel. This reaction creates electricity, as well as the byproducts steam and carbon dioxide. That electricity can be used to power homes, neighborhoods, cities and more.
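
For reference, the chemistry just described can be written as standard textbook half-reactions. This is a general sketch assuming the fuel reaches the anode as hydrogen and carbon monoxide (as it does after the reforming step described later in this article), not a PNNL-specific detail:

```latex
% Cathode (air side): oxygen gains electrons and becomes oxide ions
% Anode (fuel side): oxide ions oxidize the fuel, releasing electrons
% plus the steam and carbon dioxide byproducts named in the text
\begin{align*}
  \mathrm{O_2 + 4\,e^-} &\longrightarrow \mathrm{2\,O^{2-}} \\
  \mathrm{H_2 + O^{2-}} &\longrightarrow \mathrm{H_2O + 2\,e^-} \\
  \mathrm{CO + O^{2-}} &\longrightarrow \mathrm{CO_2 + 2\,e^-}
\end{align*}
```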

The big advantage to fuel cells is that they're more efficient than traditional power generation. For example, the combustion engines of portable generators only convert about 18 percent of the chemical energy in fuel into electricity. In contrast, some SOFCs can achieve up to 60 percent efficiency. Being more efficient means that SOFCs consume less fuel and create less pollution for the amount of electricity produced than traditional power generation, including coal power plants.
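
To make those percentages concrete, here is a back-of-envelope comparison (mine, not the study's) of fuel burned per kilowatt-hour at each efficiency, assuming methane's standard lower heating value of about 50 MJ/kg:

```python
# Back-of-envelope fuel use per kWh of electricity at a given efficiency.
# The 50 MJ/kg lower heating value of methane is a textbook figure assumed
# here; the efficiencies are the ones quoted in the article.

LHV_METHANE_MJ_PER_KG = 50.0   # assumed lower heating value of methane
MJ_PER_KWH = 3.6               # 1 kWh = 3.6 MJ

def methane_per_kwh_kg(efficiency):
    """Kilograms of methane consumed to deliver 1 kWh of electricity."""
    return MJ_PER_KWH / (efficiency * LHV_METHANE_MJ_PER_KG)

for label, eff in [("portable generator", 0.18), ("SOFC", 0.57)]:
    print(f"{label} at {eff:.0%}: "
          f"{methane_per_kwh_kg(eff) * 1000:.0f} g of methane per kWh")
# Roughly 400 g/kWh at 18 percent versus about 126 g/kWh at 57 percent.
```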

Sprenkle and his PNNL colleagues are interested in smaller systems because of the advantages they have over larger ones. Large systems generate more power than can be consumed in their immediate area, so a lot of their electricity has to be sent to other places through transmission lines, and some power is lost in the process. Smaller systems, on the other hand, can be placed closer to power users, so the electricity they produce doesn't have to be sent as far. This makes smaller systems ideal for what's called distributed generation, or generating electricity in relatively small amounts for local use, such as in individual homes or neighborhoods.

Goal: Small and efficient

Knowing the advantages of smaller SOFC systems, the PNNL team wanted to design a small system that could be both more than 50 percent efficient and easily scaled up for distributed generation. To do this, the team first used a process called external steam reforming. In general, steam reforming mixes steam with the fuel, leading the two to react and create intermediate products. The intermediates, carbon monoxide and hydrogen, then react with oxygen at the fuel cell's anode. Just as described before, this reaction generates electricity, as well as the byproducts steam and carbon dioxide.
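
The steam-reforming step has standard stoichiometry. As a sketch (textbook chemistry, not a detail taken from the paper), the reaction producing the carbon monoxide and hydrogen intermediates is:

```latex
% Methane steam reforming (endothermic): steam and methane react to give
% the carbon monoxide and hydrogen intermediates named above
\begin{equation*}
  \mathrm{CH_4 + H_2O \longrightarrow CO + 3\,H_2}
\end{equation*}
```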

Steam reforming has been used with fuel cells before, but the approach requires heat that, when directly exposed to the fuel cell, causes uneven temperatures on the ceramic layers that can potentially weaken and break the fuel cell. So the PNNL team opted for external steam reforming, which completes the initial reactions between steam and the fuel outside of the fuel cell.

The external steam reforming process requires a device called a heat exchanger, where a wall made of a conductive material like metal separates two gases. On one side of the wall is the hot exhaust that is expelled as a byproduct of the reaction inside the fuel cell. On the other side is a cooler gas that is heading toward the fuel cell. Heat moves from the hot gas, through the wall and into the cool incoming gas, warming it to the temperatures needed for the reaction to take place inside the fuel cell.

Efficiency with micro technology

The key to the efficiency of this small SOFC system is the use of a PNNL-developed microchannel technology in the system's multiple heat exchangers. Instead of having just one wall that separates the two gases, PNNL's microchannel heat exchangers have multiple walls created by a series of tiny looping channels that are narrower than a paper clip. This increases the surface area, allowing more heat to be transferred and making the system more efficient. PNNL's microchannel heat exchanger was designed so that very little additional pressure is needed to move the gas through the turns and curves of the looping channels.
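
The benefit of that extra surface area follows from the basic heat-exchanger rate equation, Q = U x A x dT. The sketch below uses made-up numbers purely to show the scaling; none of them are PNNL design values:

```python
# Illustrative only: heat flow across an exchanger wall, Q = U * A * dT.
# All values are invented to show how transferred heat scales with area.

def heat_flow_w(u_w_per_m2k: float, area_m2: float, delta_t_k: float) -> float:
    """Heat flow in watts across a wall: Q = U * A * dT."""
    return u_w_per_m2k * area_m2 * delta_t_k

U = 50.0    # assumed overall heat-transfer coefficient, W/(m^2 K)
DT = 100.0  # assumed mean temperature difference between the gas streams, K

for area in (0.05, 0.50):  # single flat wall vs. microchannel-multiplied area
    print(f"A = {area:4.2f} m^2 -> Q = {heat_flow_w(U, area, DT):6.0f} W")
# Ten times the area moves ten times the heat at the same U and dT, which
# is why packing many narrow channels into the exchanger pays off.
```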

The second unique aspect of the system is that it recycles. Specifically, the system uses the exhaust, made up of steam and heat byproducts, coming from the anode to maintain the steam reforming process. This recycling means the system doesn't need an electric device that heats water to create steam. Reusing the steam, which is mixed with fuel, also means the system is able to use up some of the leftover fuel it wasn't able to consume when the fuel first moved through the fuel cell.

The combination of external steam reforming and steam recycling with the PNNL-developed microchannel heat exchangers made the team's small SOFC system extremely efficient. Together, these characteristics help the system use as little energy as possible and allow more net electricity to be produced in the end. Lab tests showed the system's net efficiency ranged from 48.2 percent at 2.2 kW to a high of 56.6 percent at 1.7 kW. The team calculates it could raise the system's efficiency to 60 percent with a few more adjustments.
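
Those net-efficiency figures are simply net electrical output divided by the chemical energy arriving as fuel, so the reported operating points imply a definite fuel input. A quick check (my arithmetic, not numbers from the paper):

```python
# Net efficiency = net electrical output / chemical energy input as fuel.
# Backing out the implied fuel input at the article's operating points.

for power_kw, reported_eff in [(2.2, 0.482), (1.7, 0.566)]:
    fuel_kw = power_kw / reported_eff
    print(f"{power_kw} kW out at {reported_eff:.1%} efficiency -> "
          f"about {fuel_kw:.1f} kW of fuel chemical energy in")
# 2.2 kW at 48.2% implies ~4.6 kW of fuel in; 1.7 kW at 56.6% implies ~3.0 kW.
```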

The PNNL team would like to see their research translated into an SOFC power system that's used by individual homeowners or utilities.

"There still are significant efforts required to reduce the overall cost to a point where it is economical for distributed generation applications," Sprenkle explained. "However, this demonstration does provide an excellent blueprint on how to build a system that could increase electricity generation while reducing carbon emissions."


Contacts and sources:
Franny White 
DOE/Pacific Northwest National Laboratory

The research was supported by DOE's Office of Fossil Energy.

REFERENCE: M Powell, K Meinhardt, V Sprenkle, L Chick and G McVay, "Demonstration of a highly efficient solid oxide fuel cell power system using adiabatic steam reforming and anode gas recirculation," Journal of Power Sources, Volume 205, 1 May 2012, Pages 377-384, http://www.sciencedirect.com/science/article/pii/S0378775312001991

Catching Solar Particles Infiltrating Earth's Atmosphere

On May 17, 2012, an M-class flare exploded from the sun. The eruption also shot out a burst of solar particles traveling at nearly the speed of light that reached Earth about 20 minutes after the light from the flare. An M-class flare is considered a "moderate" flare, at least ten times less powerful than the largest X-class flares, but the particles sent out on May 17 were so fast and energetic that when they collided with atoms in Earth's atmosphere, they caused a shower of particles to cascade down toward Earth's surface. The shower created what's called a ground level enhancement (GLE).
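
A rough timing check makes that sequence concrete. Light from the sun takes about 8.3 minutes to reach Earth, so particles arriving 20 minutes after the flare's light spent roughly 28 minutes in transit. This back-of-envelope estimate is mine, not NASA's:

```python
# Implied average particle speed from the arrival times in the article.

LIGHT_TRAVEL_MIN = 8.3        # Sun-to-Earth light travel time, minutes
DELAY_AFTER_LIGHT_MIN = 20.0  # particle arrival delay from the article

particle_travel_min = LIGHT_TRAVEL_MIN + DELAY_AFTER_LIGHT_MIN
speed_fraction_of_c = LIGHT_TRAVEL_MIN / particle_travel_min  # same distance

print(f"Implied straight-line speed: {speed_fraction_of_c:.2f} c")
# Prints ~0.29 c. Charged particles actually spiral along the interplanetary
# magnetic field, a path longer than the straight line, so their true speed
# is higher than this lower-bound estimate.
```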

This graph shows the neutrons detected by a neutron detector at the University of Oulu in Finland from May 16 through May 18, 2012. The peak on May 17 represents an increase in the number of neutrons detected, a phenomenon dubbed a ground level enhancement or GLE. This was the first GLE since December of 2006.
Credit: University of Oulu/NASA's Integrated Space Weather Analysis System
 
GLEs are quite rare – fewer than 100 events have been observed in the last 70 years, since instruments were first able to detect them. Moreover, this was the first GLE of the current solar cycle--a sure sign that the sun's regular 11-year cycle is ramping up toward solar maximum.

This GLE has scientists excited for another reason, too. The joint Russian/Italian mission PAMELA, short for Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics, simultaneously measured the particles from the sun that caused the GLE. Solar particles have been measured before, but PAMELA is sensitive to the very high-energy particles that reach ground level at Earth. The data may help scientists understand the details of what causes this space weather phenomenon, and help them tease out why a relatively small flare was capable of producing the high-speed particles needed to cause a GLE.

"Usually we would expect this kind of ground level enhancement from a giant coronal mass ejection or a big X-class flare," says Georgia de Nolfo, a space scientist who studies high speed solar particles at NASA's Goddard Space Flight Center in Greenbelt, Md. "So not only are we really excited that we were able to observe these particularly high energy particles from space, but we also have a scientific puzzle to solve."

The path to this observation began on Saturday, May 5, when a large sunspot rotated into view on the left side of the sun. The sunspot was as big as about 15 Earths, a fairly sizable active region, though by no means as big as some of the largest sunspots that have been observed on the sun. Dubbed Active Region 1476, the region had already shown activity on the far side of the sun, as seen by a NASA mission called the Solar Terrestrial Relations Observatory (STEREO), so scientists were on alert for more. Scientists who study high-energy particles from the sun had been keeping an eye out for just such an active region because they had seen no GLEs since December of 2006.

An artist's concept of the shower of particles produced when Earth's atmosphere is struck by ultra-high-energy cosmic rays. 
Credit: Simon Swordy/University of Chicago, NASA 

In addition, they had high hopes that the PAMELA mission, which had focused on cosmic rays from outside our galaxy, could now be used to observe solar particles. Such "solar cosmic rays" are the most energetic particles that can be accelerated at or near the sun.

But there was a hitch: the PAMELA instruments were not usable because they were in calibration mode. Scientists including de Nolfo and another Goddard researcher, Eric Christian, let the PAMELA collaboration know that this might be the chance they had been waiting for, and they convinced the Russian team in charge of the mission to switch the instruments back to science mode.

"And then the active region pretty much did nothing for two weeks," says Christian. "But just before it disappeared over the right side of the sun, it finally erupted with an M-class flare."

Bingo. Neutron monitors all over the world detected the shower of neutrons that represents a GLE. The showers detected at ground level are not the solar energetic particles themselves, but the resulting debris of super-fast particles slamming into atoms in Earth's atmosphere. The elevated levels of neutrons lasted for an hour.

Simultaneously, PAMELA recorded the incoming solar particles up in space, providing one of the first in-situ measurements of the stream of particles that initiated a GLE. Only the early data has been seen so far, but scientists have high hopes that as more observations are relayed down to Earth, they will be able to learn more about the May 17 onslaught of solar protons, and figure out why this event triggered a GLE when earlier bursts of solar protons in January and March, 2012 didn't.

PAMELA is a space-borne experiment of the WiZard collaboration, which is an international collaboration between Italian (I.N.F.N. – Istituto Nazionale di Fisica Nucleare), Russian, German and Swedish institutes, realized with the main support of the Italian (ASI) and Russian (Roscosmos) Space Agencies.


Contacts and sources:
Susan Hendrix
NASA/Goddard Space Flight Center

Everything You Ever Wanted to Know About Drones


by Cora Currier, ProPublica


Everyone is talking about drones. Also known as Unmanned Aerial Vehicles, or UAVs, remotely piloted aircraft have become a controversial centerpiece of the Obama administration's counter-terrorism strategy. Domestically, their surveillance power is being hyped for everything from fighting crime to monitoring hurricanes or spawning salmon. Meanwhile, concerns are cropping up about privacy, ethics and safety. We've rounded up some of the best coverage of drones to get you oriented. Did we miss anything? Let us know.

Credit: Wikipedia

A Little History

The idea of unmanned flight had been around for decades, but it was in the 1990s, thanks to advances in GPS and computing, that the possibilities for drones really took off, as the New Yorker recently recounted. While hobbyists and researchers looked for uses for automated, airborne cameras, the military became the driving force behind drone developments. (This history from the Washington Post has more details.) According to the Congressional Research Service, the military's cache of UAVs has grown from just a handful in 2001 to more than 7,000 today. This New York Times graphic shows the variety of drones currently employed by the military — from the famous missile-launching Predator to tiny prototypes shaped like hummingbirds.

This February, Congress cleared the way for far more widespread use of drones by businesses, scientists, police and still unknown others. The Federal Aviation Administration will release a comprehensive set of rules on drones by 2015.

The Shadow Drone War: Obama's Open Secret

As the ground wars in Iraq and Afghanistan wind down, the Obama administration has escalated a mostly covert air war through clandestine bases in the U.S. and other countries. Just this week, the administration's drone-driven national security policy was documented in this book excerpt by Newsweek reporter Daniel Klaidman and a New York Times article.

Both the CIA and military use drones for "targeted killings" of terrorist leaders. The strikes have been an awkward open secret, remaining officially classified while government officials mention them repeatedly. Obama admitted the program's existence in an online chat in February, and his counterterrorism advisor, John Brennan, gave a speech last month laying out the administration's legal and ethical case for drone strikes.

The crux of it is that they are a precise and efficient form of warfare. Piloted from thousands of miles away (here's an account from a base outside Las Vegas), they don't put U.S. troops at risk, and, by the government's count, harm few civilians.

How Many Civilians Do Drone Strikes Kill?

Updated 5/31

Statistics are hard to nail down. The Long War Journal and the New America Foundation track strikes and militant and civilian deaths, drawing mainly on media reports with the caveat that they can't always be verified. The Long War Journal tallied 30 civilian deaths in Pakistan in 2011. The London-based Bureau of Investigative Journalism, which also tracks drone strikes, consistently documents higher numbers of civilian deaths — for Pakistan in 2011, at least 75. Obama administration officials, the New York Times reported this week, have said that such deaths are few or in the "single digits."

But the Times, citing "counterterrorism officials," also reported that the U.S. classifies all military-age men in a drone strike zone to be militants, unless their innocence is proven after the attack. If that's true, it raises questions about the government statistics on civilian casualties. One State Department official told the Times that the CIA might be overzealous in defining strike targets — he told them that "the joke was that when the C.I.A. sees 'three guys doing jumping jacks,' the agency thinks it is a terrorist training camp."

What About the Political Fallout?

The U.S. has also used airstrikes to side-step legal arguments about the boundaries of the campaign against al Qaeda. Both Bush and Obama administration officials have argued that Congress' September 2001 Authorization for Use of Military Force extends to al Qaeda operatives in any country, with or without the consent of local governments.

Drone strikes are extremely unpopular in the countries where they're deployed. They've led to tense diplomatic maneuvers with Pakistan, and protests and radicalization in Yemen. Iraqis have also protested the State Department's use of surveillance drones in their country.

Domestic concerns about civil liberties and due process in the secret air war were inflamed last fall, when a drone strike in Yemen killed Anwar al Awlaki, an al Qaeda member and a U.S. citizen. Weeks later, Awlaki's 16-year-old American son was also killed by a drone.

Costs and Crashes

Drones are cheap relative to most military manned planes, and they were a central feature of the Pentagon's scaled-back budget this year. But drones aren't immune from cost overruns. The latest version of the Global Hawk surveillance drone was put on the back-burner this January after years of expensive setbacks and questions about whether they were really better than the old U-2 spy planes they were slated to replace.

And while drones may not carry pilots, they can still crash. Wired has also reported on drones' susceptibility to viruses.

Another problem? The Air Force is playing catch-up trying to train people to fly drones and analyze the mountains of data they produce, forcing it to sometimes rely on civilian contractors for sensitive missions, according to the LA Times. The New York Times reported that in 2011, the Air Force processed 1,500 hours of video and 1,500 still images daily, much of it from surveillance drones. An Air Force commander admitted this spring that it would take "years" to catch up on the data they've collected.

Drones, Coming to America...

There are already a number of non-military entities that the FAA has authorized to fly drones, including a handful of local police departments. How drones might change police work is still to be determined (the Seattle police department, for example, showed off a 3.5-pound camera-equipped drone with a battery life of a whopping 10 minutes.)

Police drones may soon be more widespread, as the FAA released temporary rules this month making it easier for police departments to get approval for UAVs weighing up to 25 pounds, and for emergency responders to use smaller drones. The Department of Homeland Security also announced a program to help local agencies integrate the technology — principally as cheaper and safer alternatives to helicopters for reconnaissance. The Border Patrol already has a small fleet of Predators for border surveillance.

Law enforcement officials are staving off a backlash from privacy advocates. The ACLU and other civil rights groups have raised concerns about privacy and Fourth Amendment rights from unprecedented surveillance capability — not to mention the potential of police drones armed with tear gas and rubber bullets, which some departments have proposed. Congressmen Ed Markey, D-Mass., and Joe Barton, R-Texas, co-chairs of the Congressional Privacy Caucus, have asked the FAA to address privacy concerns in their new guidelines.

One of the first drone-assisted arrests by a local police department took place in North Dakota this year, with the help of a borrowed DHS Predator. It was deployed, as the New Yorker detailed, to catch a group of renegade ranchers in a conflict that originated over a bale of hay.

Scholarly drones

Universities actually have the most permits to fly drones at this point, for research on everything from pesticide distribution to disaster preparation. As Salon points out, the Pentagon and military contractors are also big funders of university drone research.

The Electronic Frontier Foundation, an advocacy group that has been outspoken about privacy concerns related to drones, put together a map of the entities authorized by the FAA to fly drones.

Get Your Own Drone!

Could you, too, become the proud owner of a drone? At the low end, a drone can be a glorified model helicopter, and there's a dedicated community of DIY drone builders. This fall, a group from Occupy Wall Street tried to use the "Occucopter" to do its own surveillance of police movements.

Contacts and sources:
Cora Currier, ProPublica

Why Did We Stop Being Promiscuous And Decide To Settle Down To Start Families?

It is a question that has puzzled evolutionary biologists for years: Why did we stop being promiscuous and decide to settle down to start families?

Sergey Gavrilets, professor of ecology and evolutionary biology at the University of Tennessee, Knoxville, may have found the answer, and it lies in the power of female choice. The study reveals that how females chose their mates played a critical role in human evolution by leading to monogamous relationships, which laid the foundation for the institution of the modern family.

Using mathematical modeling, the associate director for scientific activities at the National Institute for Mathematical and Biological Synthesis (NIMBioS) at UT has discovered that the transformation may have occurred when early-hominid females started choosing males who were good providers.

Gavrilets' findings are published in the "Proceedings of the National Academy of Sciences."

The "sexual revolution" entailed males first competing with other males for dominance as a way to obtain matings. However, low-ranked males—and eventually all males except those with the highest societal stature—began supplying females with provisions in what is called "food-for-mating" to get a leg up on the competition. Females showed preference for the "provisioning" males, so males' energy shifted toward providing for females and females became increasingly faithful. This spurred self-domestication and the modern family as we know it today.

"This change has confounded scientists for a long time because many species would be much better off evolutionarily if the effort spent on males competing for mates was redirected towards increasing female fertility or survivorship of their offspring," said Gavrilets.

The study demonstrates mathematically that the most commonly proposed theories for the transition to human pair bonding—or coupling—are not biologically feasible.

However, the study advances a new model showing that the transition to pair-bonding can occur when female choice and faithfulness, among other factors, are included. The result is an increased emphasis on males provisioning females over male competition for mating.
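
As a purely illustrative toy, and emphatically not Gavrilets' actual published model, the flavor of such a transition can be captured with two male strategies in replicator dynamics, where female faithfulness sets the payoff of provisioning (all payoffs below are invented):

```python
# Toy replicator dynamics -- an illustration of the idea, NOT the published
# model. "Provisioners" feed their mates; "competitors" fight for matings.
# Higher female faithfulness makes provisioning pay, because provisions
# reliably turn into offspring with a faithful mate.

def provisioner_share(faithfulness: float, generations: int = 500) -> float:
    """Final frequency of the provisioning strategy, starting rare."""
    p = 0.01  # provisioners begin as a rare variant
    for _ in range(generations):
        w_provision = 1.0 + 2.0 * faithfulness  # assumed payoff
        w_compete = 1.0 + (1.0 - p)             # competing pays less as
                                                # rivals switch to provisioning
        mean_w = p * w_provision + (1 - p) * w_compete
        p *= w_provision / mean_w               # replicator update
    return p

for f in (0.1, 0.5, 0.9):
    print(f"faithfulness {f}: provisioner share -> {provisioner_share(f):.2f}")
# Provisioning takes over only once faithfulness makes it outpay competing.
```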

"The study reveals that female choice played a crucial role in human evolution," said Gavrilets.

According to Gavrilets, the transition to coupling has opened the path to intensified male parental investment, which was a breakthrough adaptation with multiple anatomical, behavioral and physiological consequences for early hominids and for all of their descendants. It shifted the dynamic away from males competing with each other for sex to males competing with each other to see who is a better provider to get better mates.

"Pair bonding laid the foundation for a later emergence of the institution of the modern family," said Gavrilets.

NIMBioS brings together researchers from around the world to collaborate across disciplinary boundaries to investigate solutions to basic and applied problems in the life sciences. It is sponsored by the National Science Foundation, the U.S. Department of Homeland Security, and the U.S. Department of Agriculture with additional support from the University of Tennessee, Knoxville. For more information, visit www.nimbios.org.

Contacts and sources:
University of Tennessee, Knoxville

Milky Way Destined For Head-on Collision With Andromeda Says NASA

The Milky Way and Andromeda are speeding toward a head-on collision at 250,000 miles per hour, pulled together by gravity, and the speed will increase as the two get closer. The two galaxies will merge into a new galaxy, and our solar system will likely survive intact to become part of it. The most significant change expected is that our solar system will be farther from the galactic center.

NASA astronomers announced Thursday they can now predict with certainty the next major cosmic event to affect our galaxy, sun, and solar system: the titanic collision of our Milky Way galaxy with the neighboring Andromeda galaxy.

The Milky Way is destined to get a major makeover during the encounter, which is predicted to happen four billion years from now. It is likely the sun will be flung into a new region of our galaxy, but our Earth and solar system are in no danger of being destroyed.

This illustration shows a stage in the predicted merger between our Milky Way galaxy and the neighboring Andromeda galaxy, as it will unfold over the next several billion years. In this image, representing Earth's night sky in 3.75 billion years, Andromeda (left) fills the field of view and begins to distort the Milky Way with tidal pull.
(Credit: NASA; ESA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger)

"Our findings are statistically consistent with a head-on collision between the Andromeda galaxy and our Milky Way galaxy," said Roeland van der Marel of the Space Telescope Science Institute (STScI) in Baltimore.

The solution came through painstaking NASA Hubble Space Telescope measurements of the motion of Andromeda, which also is known as M31. The galaxy is now 2.5 million light-years away, but it is inexorably falling toward the Milky Way under the mutual pull of gravity between the two galaxies and the invisible dark matter that surrounds them both.

This animation depicts the collision between our Milky Way galaxy and the Andromeda galaxy. Hubble Space Telescope observations indicate that the two galaxies, pulled together by their mutual gravity, will crash together about 4 billion years from now. Around 6 billion years from now, the two galaxies will merge to form a single galaxy. The video also shows the Triangulum galaxy, which will join in the collision and perhaps later merge with the Andromeda/Milky Way pair.


 (Visualization Credit: NASA; ESA; and F. Summers, STScI | Simulation Credit: NASA; ESA; G. Besla, Columbia University; and R. van der Marel, STScI)

"After nearly a century of speculation about the future destiny of Andromeda and our Milky Way, we at last have a clear picture of how events will unfold over the coming billions of years," said Sangmo Tony Sohn of STScI.

The scenario is like a baseball batter watching an oncoming fastball. Although Andromeda is approaching us more than 2,000 times faster than a pitched fastball, it will take 4 billion years before the strike.
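
That 4-billion-year figure is consistent with simple arithmetic. At a constant 250,000 mph, crossing 2.5 million light-years would actually take longer; a quick check of my own:

```python
# Time for Andromeda to cover 2.5 million light-years at constant speed.

MILES_PER_LIGHT_YEAR = 5.88e12  # standard conversion
HOURS_PER_YEAR = 24 * 365.25

distance_mi = 2.5e6 * MILES_PER_LIGHT_YEAR
speed_mph = 250_000

years = distance_mi / speed_mph / HOURS_PER_YEAR
print(f"~{years / 1e9:.1f} billion years at constant speed")
# Prints ~6.7 billion years. The predicted 4 billion is shorter because
# mutual gravity keeps accelerating the galaxies as they close in.
```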

Computer simulations derived from Hubble's data show that it will take an additional two billion years after the encounter for the interacting galaxies to completely merge under the tug of gravity and reshape into a single elliptical galaxy similar to the kind commonly seen in the local universe.

Although the galaxies will plow into each other, stars inside each galaxy are so far apart that they will not collide with other stars during the encounter. However, the stars will be thrown into different orbits around the new galactic center. Simulations show that our solar system will probably be tossed much farther from the galactic core than it is today.

To make matters more complicated, M31's small companion, the Triangulum galaxy, M33, will join in the collision and perhaps later merge with the M31/Milky Way pair. There is a small chance that M33 will hit the Milky Way first.

This illustration shows the collision paths of our Milky Way galaxy and the Andromeda galaxy. The galaxies are moving toward each other under the inexorable pull of gravity between them. Also shown is a smaller galaxy, Triangulum, which may be part of the smashup.
(Credit: NASA; ESA; A. Feild and R. van der Marel, STScI)

The universe is expanding and accelerating, and collisions between galaxies in close proximity to each other still happen because they are bound by the gravity of the dark matter surrounding them. The Hubble Space Telescope's deep views of the universe show such encounters between galaxies were more common in the past when the universe was smaller.

A century ago astronomers did not realize that M31 was a separate galaxy far beyond the stars of the Milky Way. Edwin Hubble measured its vast distance by uncovering a variable star that served as a "milepost marker."

Hubble went on to discover the expanding universe where galaxies are rushing away from us, but it has long been known that M31 is moving toward the Milky Way at about 250,000 miles per hour. That is fast enough to travel from here to the moon in one hour. The measurement was made using the Doppler effect, which is a change in frequency and wavelength of waves produced by a moving source relative to an observer, to measure how starlight in the galaxy has been compressed by Andromeda's motion toward us.
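
Both numbers in that paragraph are easy to verify. The Doppler relation for a slowly moving source is a fractional wavelength shift of v/c, and 250,000 mph does indeed cover the average Earth-moon distance in about an hour (the ~239,000-mile figure below is a standard value, not from the article):

```python
# Checking the Doppler shift and the moon analogy for M31's approach speed.

C_KM_PER_S = 299_792.458                  # speed of light
v_km_per_s = 250_000 * 1.609344 / 3600    # 250,000 mph -> ~112 km/s

print(f"Fractional blueshift v/c = {v_km_per_s / C_KM_PER_S:.1e}")
# ~3.7e-4: a small but measurable compression of M31's starlight.

print(f"Time to the moon: {239_000 / 250_000:.2f} hours")
# ~0.96 hours, matching "from here to the moon in one hour."
```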

Previously, it was unknown whether the far-future encounter would be a miss, a glancing blow, or a head-on smashup; the outcome depends on M31's tangential motion. Until now, astronomers had not been able to measure M31's sideways motion in the sky, despite attempts dating back more than a century. The Hubble Space Telescope team, led by van der Marel, conducted extraordinarily precise observations of the sideways motion of M31 that remove any doubt that it is destined to collide and merge with the Milky Way.

"This was accomplished by repeatedly observing select regions of the galaxy over a five- to seven-year period," said Jay Anderson of STScI.

"In the worst-case-scenario simulation, M31 slams into the Milky Way head-on and the stars are all scattered into different orbits," said Gurtina Besla of Columbia University in New York, N.Y. "The stellar populations of both galaxies are jostled, and the Milky Way loses its flattened pancake shape with most of the stars on nearly circular orbits. The galaxies' cores merge, and the stars settle into randomized orbits to create an elliptical-shaped galaxy."

This series of photo illustrations depicts the predicted merger between our Milky Way galaxy (right) and the neighboring Andromeda galaxy.
Credit: NASA; ESA; Z. Levay and R. van der Marel, STScI; T. Hallas; and A. Mellinger

First Row, Left: Present day.
First Row, Right: In 2 billion years the disk of the approaching Andromeda galaxy is noticeably larger.
Second Row, Left: In 3.75 billion years Andromeda fills the field of view.
Second Row, Right: In 3.85 billion years the sky is ablaze with new star formation.
Third Row, Left: In 3.9 billion years, star formation continues.
Third Row, Right: In 4 billion years Andromeda is tidally stretched and the Milky Way becomes warped.
Fourth Row, Left: In 5.1 billion years the cores of the Milky Way and Andromeda appear as a pair of bright lobes.
Fourth Row, Right: In 7 billion years the merged galaxies form a huge elliptical galaxy, its bright core dominating the nighttime sky.


The space shuttle servicing missions to Hubble upgraded it with ever more powerful cameras, which have given astronomers a long enough time baseline to make the critical measurements needed to nail down M31's motion. The Hubble observations and the consequences of the merger are reported in three papers that will appear in an upcoming issue of the Astrophysical Journal.

For more images, video and information about M31's collision with the Milky Way, visit: http://hubblesite.org/news/2012/20



Contacts and sources:
NASA

X-ray 'Echoes' Map A Supermassive Black Hole's Environs

An international team of astronomers using data from the European Space Agency's (ESA) XMM-Newton satellite has identified a long-sought X-ray "echo" that promises a new way to probe supersized black holes in distant galaxies.


Astronomers using data from the European Space Agency's XMM-Newton satellite have found a long-sought X-ray signal from NGC 4151, a galaxy that contains a supermassive black hole. When the black hole's X-ray source flares, its accretion disk brightens about half an hour later. The discovery promises a new way to unravel what's happening in the neighborhood of these powerful objects.
Credit: NASA's Goddard Space Flight Center

Most big galaxies host a big central black hole containing millions of times the sun's mass. When matter streams toward one of these supermassive black holes, the galaxy's center lights up, emitting billions of times more energy than the sun. For years, astronomers have been monitoring such "active galactic nuclei" (AGN) to better understand what happens on the brink of a monster black hole. 

"Our analysis allows us to probe black holes through a different window. It confirms some long-held ideas about AGN and gives us a sense of what we can expect when a new generation of space-based X-ray telescopes eventually becomes available," said Abderahmen Zoghbi, a postdoctoral research associate at the University of Maryland at College Park (UMCP) and the study's lead author. 

One of the most important tools for astronomers studying AGN is an X-ray feature known as the broad iron line, now regarded as the signature of a rotating black hole. Excited iron atoms produce characteristic X-rays with energies around 6,000 to 7,000 electron volts -- several thousand times the energy in visible light -- and this emission is known as the iron K line.
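
For scale, photon energy and wavelength are related by lambda(nm) = 1239.84 / E(eV), a standard conversion; applying it to the numbers above (my arithmetic, not the article's):

```python
# Converting the quoted iron K-line energies to wavelengths and comparing
# with a typical visible-light photon.

def ev_to_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometers from energy in electron volts."""
    return 1239.84 / energy_ev

for energy in (6000.0, 7000.0, 2.25):  # iron K-line range vs. ~550 nm light
    print(f"{energy:>7.2f} eV -> {ev_to_nm(energy):8.3f} nm")
# A ~6,500 eV X-ray photon carries about 6500 / 2.25 ~ 2,900 times the
# energy of a green visible photon, matching "several thousand times."
```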

The galaxy NGC 4151 is located about 45 million light-years away toward the constellation Canes Venatici. Activity powered by its central black hole makes NGC 4151 one of the brightest active galaxies in X-rays.
 Credit: David W. Hogg, Michael R. Blanton, and the Sloan Digital Sky Survey Collaboration

Matter falling toward a black hole collects into a rotating accretion disk, where it becomes compressed and heated before eventually spilling over the black hole's event horizon, the point beyond which nothing can escape and astronomers cannot observe. A mysterious and intense X-ray source near the black hole shines onto the disk's surface layers, causing iron atoms to radiate K-line emission. The inner part of the disk is orbiting the black hole so rapidly that the effects of Einstein's relativity come into play -- most notably, how time slows down close to the black hole. These relativistic effects skew or broaden the signal in a distinctive way. 

Astronomers predicted that when the X-ray source near the black hole flared, the broad iron K line would brighten after a delay corresponding to how long the X-rays took to reach and illuminate the accretion disk. Astronomers call the process relativistic reverberation. With each flare from the X-ray source, a light echo sweeps across the disk and the iron line brightens accordingly.

Unfortunately, neither ESA's XMM-Newton satellite nor NASA's Chandra X-ray Observatory possesses a telescope powerful enough to spot reverberations from individual flares.

This illustration compares the environment around NGC 4151's supermassive black hole with the orbits of the planets in our solar system; the planets themselves are not shown to scale. Echoes of X-ray flares detected in XMM-Newton data demonstrate that the X-ray source (blue sphere, center) is located above the black hole's accretion disk. The time lag between flares in the source and their reflection in the accretion disk places the X-ray source about four times Earth's distance from the sun.  
Credit: NASA's Goddard Space Flight Center

The team reasoned that detecting the combined echoes from multiple flares might be possible if a sufficiently large amount of data from the right object could be analyzed. The object turned out to be the galaxy NGC 4151, which is located about 45 million light-years away in the constellation Canes Venatici. As one of the brightest AGN in X-rays, NGC 4151 has been observed extensively by XMM-Newton. Astronomers think that the galaxy's active nucleus is powered by a black hole weighing 50 million solar masses, which suggested the presence of a large accretion disk capable of producing especially long-lived and easily detectable echoes. 

Since 2000, XMM-Newton has observed the galaxy with an accumulated exposure of about four days. By analyzing this data, the researchers uncovered numerous X-ray echoes, demonstrating for the first time the reality of relativistic reverberation. The findings appear in the May 8 issue of Monthly Notices of the Royal Astronomical Society.

The team found that the echoes lagged behind the AGN flares by a little more than 30 minutes. Moving at the speed of light, the X-rays associated with the echo must have traveled about 400 million miles farther -- equivalent to about four times Earth's average distance from the sun -- than those that came to us directly from the flare.
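
The conversion from time lag to extra path length is simply lag times the speed of light. A quick check (the precise lag is assumed here; the article says only "a little more than 30 minutes"):

```python
# Extra light-travel path implied by the reverberation lag.

C_MILES_PER_SECOND = 186_282
lag_minutes = 35.8  # assumed value consistent with "a little more than 30"

extra_path_mi = lag_minutes * 60 * C_MILES_PER_SECOND
print(f"Extra path: {extra_path_mi / 1e6:.0f} million miles")
# ~400 million miles, about four times the ~93-million-mile Earth-sun
# distance, as the article states.
```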

"This tells us that the mysterious X-ray source in AGN hovers at some height above the accretion disk," said co-author Chris Reynolds, a professor of astronomy at UMCP and Zoghbi's adviser. Jets of accelerated particles often are associated with AGN, and this finding meshes with recent suggestions that the X-ray source may be located near the bases of these jets. 

"The data show that the earliest echo comes from the most broadened iron line emission. This originates from closest to the black hole and fits well with expectations," said co-author Andy Fabian, an astrophysicist at the University of Cambridge in England.

Amazingly, the extreme environment at the heart of NGC 4151 is built on a scale comparable to our own solar system. If we replaced the sun with the black hole, the event horizon would extend less than halfway to Earth if the black hole spins rapidly; slower spin would result in a larger horizon. The X-ray source would hover above the black hole and its accretion disk at a distance similar to that between the sun and the middle of the asteroid belt.
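
The solar-system comparison follows from the Schwarzschild radius, r_s = 2GM/c^2, about 2.95 km per solar mass, with a maximally spinning hole's horizon at half that. Plugging in the 50-million-solar-mass figure quoted earlier (my arithmetic, not the article's):

```python
# Event-horizon scale for NGC 4151's black hole.

KM_PER_SOLAR_MASS = 2.95  # Schwarzschild radius per solar mass, km
AU_IN_KM = 1.496e8        # astronomical unit
mass_in_suns = 50e6       # from the article

r_s_au = KM_PER_SOLAR_MASS * mass_in_suns / AU_IN_KM
print(f"Non-spinning horizon: {r_s_au:.2f} AU")      # ~0.99 AU, near Earth's orbit
print(f"Maximal-spin horizon: {r_s_au / 2:.2f} AU")  # ~0.49 AU, under halfway
# A rapidly spinning hole's horizon reaches less than halfway to Earth,
# and slower spin gives a larger horizon, as described above.
```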

"Teasing out the echo of X-ray light in NGC 4151 is a remarkable achievement. This work propels the science of AGN into a fundamental new area of mapping the neighborhoods of supermassive black holes," said Kimberly Weaver, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Md., who was not involved in the study. NASA Goddard hosts the XMM-Newton Guest Observer Facility, which supports U.S. astronomers who request observing time on the satellite.

The detection of X-ray echoes in AGN provides a new way of studying black holes and their accretion disks. Astronomers envision the next generation of X-ray telescopes with collecting areas large enough to detect the echo of a single AGN flare in many different objects, thereby providing astronomers with a new tool for testing relativity and probing the immediate surroundings of massive black holes.



Contacts and sources:
Francis Reddy
NASA's Goddard Space Flight Center, Greenbelt, Md.

Related Links:
Additional graphics from the SVS
Paper: Relativistic iron K X-ray reverberation in NGC 4151
More about XMM-Newton
The XMM-Newton Guest Observer Facility

Oil Rationing Within A Decade For Consumers? Peeking At Peak Oil

A new book explains the reality of "peak oil" and its far-reaching implications for our global future.

What happens when a handful of the world's largest oil fields, accounting for two-thirds of the world's oil, run dry? What are the implications of such a prospect for food production, economic growth and, ultimately, global security? In his new book, Peeking at Peak Oil (Springer, 2012), physicist Kjell Aleklett explores the science and consequences behind the sobering reality that the world's oil production is entering terminal decline with no satisfactory alternatives.

Peeking at Peak Oil explains how oil is formed, discovered and "produced," using science to reveal the errors and deceit of public and private organizations with a vested interest in promoting business as usual. As President of the Association for the Study of Peak Oil & Gas (ASPO) and head of the world's leading research group on peak oil, Aleklett presented the data and major conclusions of his book at the 10th International ASPO Conference in Vienna, Austria, on 30 May 2012 (http://www.aspo2012.at/). The conference serves as a gathering of international top experts on fossil fuel depletion and its implications, providing attendees with the latest updates on a wide range of energy and related economic issues.

The term "peak oil" was born in January 2001 when Colin Campbell founded ASPO, dedicated to the scientific exploration of the idea that our primary energy supply is finite and limited. Originally regarded as a fringe theory, peak oil has arrived. One telling sign is a 2012 International Monetary Fund working paper, "The Future of Oil: Geology versus Technology," which incorporates Campbell's depletion model, and concludes that oil prices can be understood only by considering supply constraints, i.e., peak oil.

Using simple language and engaging illustrations, Aleklett's Peeking at Peak Oil leaves readers with a clear and comprehensive understanding of the emerging issue of our time. The term is now used thousands of times a day by journalists, politicians, industry leaders, economists, scientists and countless others around the globe. Peak oil is not the end of oil, but it tells us the end is in sight.


Contacts and sources:
 Springer Science+Business Media 

Publication title: Peeking at Peak Oil
Author: Aleklett, Kjell
Publication type: Book (Hardback)
Publication date: 28 June 2012
Number of pages: 325
Price: GBP 24.99

Synthetic Cells Stand In For The Real Thing

What makes one cell stick to its home base and another cell detach and migrate? How do cells “sense” their physical environment and respond? These questions go to the core of what it means to be a living cell, but the answers are anything but simple. Indeed, some of the most complex cellular systems – from yeast to human cells – are those for adhesion and environmental sensing.

“Add to that the complexity of a dynamic environment,” says Prof. Benjamin Geiger of the Weizmann Institute's Molecular Cell Biology Department, “and you have something that is very difficult to even define, much less describe in any useful way.”

(l-r) Profs. Benjamin Geiger and Joachim Spatz
Credit: Weizmann Institute

That is why Geiger and Prof. Joachim Spatz of the Max Planck Institute for Intelligent Systems, Germany, have launched a project that presents a new approach to understanding the ways in which cells reach out to their surroundings. Simply put, they plan to conduct experiments with a man-made system in which synthetic cells – in the form of vesicles consisting of a simple lipid membrane and a handful of protein molecules – sit on synthetic substrates. Experimenting with these simplified models, in which the researchers can control every aspect of their design, will hopefully yield new insights into how living cells work. While the plan is admittedly ambitious, the payoff could be great: Adhesion and sensing are crucial to everything, from growth and development, to cell migration and tissue architecture, to – when the process goes awry – cancer metastasis.

The idea falls within the new field of synthetic biology, in which scientists take an engineering approach to the cell and its components. Spatz is a materials scientist and Geiger, a biologist. For the past several years, the two have worked closely together to create unique substrates, and used them to test living cells’ sensing abilities. Synthetic cells are the logical next step in the research process.

The scientists’ method for creating artificial cells begins with blood platelets – simple cells that have the ability to adhere to biological as well as artificial surfaces. The researchers remove everything but the cell’s outer “skin” and adhesion-mediating proteins, called integrins, which perform the actual sticking. Then, these proteins are extracted and inserted into synthetic vesicles, and additional components of the adhesion site are gradually added, a few at a time, so that the researchers can test as they go. In parallel, they will experiment with the substrate – controlling its properties down to the placement and spacing of individual molecules. As they analyze the results obtained from the synthetic system, Spatz and Geiger plan to recheck their findings in living cells, to see how well their progressing model reflects the considerably more complex reality.

Prof. Geiger explains: Synthetic cells and substrates will be checked against their natural counterparts in all permutations and combinations

Credit: Weizmann Institute

Even such comparatively simple synthetic cell models are quite complicated. “If we manage to find the right combination to get these cells to respond to environmental cues,” says Spatz, “we will consider that a great success.” Eventually, the scientists intend to move past the present understanding – a “grocery list” of hundreds of individual molecules that participate in the molecular cross-talk underlying a cell’s adhesion and sensing mechanisms – toward understanding how those individual bits and pieces come together to make functional components.

This new undertaking has already demonstrated one considerable success: The European Research Council (ERC) recently awarded the project a grant of 3.5 million euros. Such ERC Advanced Grants are specifically “aimed to promote substantial advances at the frontiers of knowledge and to encourage new productive lines of enquiry, including unconventional approaches and investigations at the interface between established disciplines.”

In addition to advancing the understanding of how living cells sense and respond to their environment, the scientists think the project may yield some interesting insights into the origins of living cells. Even before cells started to stick together to form multicellular organisms, they probably had rudimentary adhesion mechanisms for sensing and grabbing onto food – the most basic need of all.

Prof. Benjamin Geiger's research is supported by the Leona M. and Harry B. Helmsley Charitable Trust; the Adelis Foundation; the Mario Negri Institute for Pharmacological Research; the estate of Alice Schwarz-Gardos; IIMI, Inc.; and the European Research Council. Prof. Geiger is the incumbent of the Professor Erwin Neter Professorial Chair of Cell and Tumor Biology.

Contacts and sources:
Weizmann Institute

SpaceX Dragon Capsule Returns To Earth After First Commercial Flight To Space Station

SpaceX's Dragon capsule splashed down in the Pacific Ocean at 11:42 a.m. EDT a few hundred miles west of Baja California, Mexico, marking a successful end to the first mission by a commercial company to resupply the International Space Station. 

The SpaceX Dragon capsule pauses near the International Space Station so the robotic arm can grapple and berth it to a port on the station. 
Photo credit: NASA

"Congratulations to the teams at SpaceX and NASA who worked hard to make this first commercial mission to the International Space Station an overwhelming success," NASA Administrator Charles Bolden said. "This successful splashdown and the many other achievements of this mission herald a new era in U.S. commercial spaceflight. American innovation and inspiration have once again shown their great strength in the design and operation of a new generation of vehicles to carry cargo to our laboratory in space. Now more than ever we're counting on the inventiveness of American companies and American workers to make the International Space Station and other low Earth orbit destinations accessible to any and all who have dreams of space travel."

The Dragon capsule will be taken by boat to a port near Los Angeles, where it will be prepared for a return journey to SpaceX's test facility in McGregor, Texas, for processing. Some cargo will be removed at the port in California and returned to NASA within 48 hours. The remainder will be returned to Texas with the capsule.

The capsule delivered to the station 1,014 pounds of supplies including experiments, food, clothing and technology. On its return trip to Earth, the capsule carried science experiments that will be returned to researchers hoping to gain new insights provided by the unique microgravity environment in the station's laboratories. In addition to the experiments, Dragon returned a total of 1,367 pounds of hardware and cargo no longer needed aboard the station.

Dragon's journey to the space station was SpaceX's second demonstration mission under NASA's Commercial Orbital Transportation Services (COTS) Program, which provides investments to stimulate the commercial space industry in America. The mission began May 22 as the capsule launched from Cape Canaveral Air Force Station in Florida aboard a SpaceX Falcon 9 rocket. Following a series of tests of its maneuverability and abort systems, the capsule was grappled and berthed to the space station by the crew members of Expedition 31 aboard the orbiting complex.

In the next several weeks, NASA will evaluate the Dragon capsule's mission performance to close out remaining COTS milestones. Once that work is completed, NASA and SpaceX will set the target date for the company's first full cargo mission.

In addition to fostering the development of new American cargo vehicles, NASA also is helping spur commercial development of new spacecraft and launch vehicles that can carry astronauts safely, reliably and cost-effectively to low Earth orbit and the space station.

NASA also is developing the Orion spacecraft and Space Launch System (SLS), a crew capsule and heavy-lift rocket that will provide an entirely new capability for human exploration beyond low Earth orbit. Designed to be flexible for launching spacecraft for crew and cargo missions, SLS and Orion will expand human presence beyond low Earth orbit and enable new missions of exploration across the solar system. 

Contacts and sources: 

For SpaceX mission information and a schedule of NASA TV coverage, visit:
http://www.nasa.gov/spacex

For more information about the International Space Station, visit:
http://www.nasa.gov/station

For more information about NASA's commercial space programs, visit:
http://www.nasa.gov/exploration/commercial

First Time In 100 Years: Daredevil To Cross Niagara Falls On Tightrope

On June 15, high-wire artist Nik Wallenda will attempt to cross Niagara Falls on a tightrope -- the first such attempt in more than 100 years.

Charles Blondin owed his celebrity and fortune chiefly to his idea of crossing the gorge below Niagara Falls on a tightrope, 1,100 feet (335 m) long, 3¼ inches in diameter, 160 feet (50 m) above the water. He first accomplished this on 30 June 1859, and repeated it a number of times, always with different theatrical variations: blindfolded, in a sack, trundling a wheelbarrow, on stilts, carrying a man (his manager, Harry Colcord) on his back, sitting down midway while he cooked and ate an omelet, and standing on a chair with only one chair leg on the rope.
Credit: Wikipedia

He will use an 1,800-foot-long, custom-made, two-inch wire that will stretch from Goat Island on the American side of the falls to a site just below the falls on the Canadian side. The wire will be strung about 200 feet above the base of the Niagara Gorge.

The walk poses considerable danger to Wallenda from such things as the falls’ mist plume, changeable winds, possible attack by peregrine falcons as he traverses their flight path, and clamps on the safety harness he is being forced to wear by ABC, which is televising the event.

This event has generated much excitement and controversy, and University at Buffalo experts are available to discuss the nature of such spectacles, their role in popular culture, the Niagara mist plume, crowd psychology and the kinds of risks involved in this venture.

Maria Spelterini crossing Niagara Falls on July 4, 1876
Credit: Wikipedia


The public loves a spectacle that involves possible violence

David Schmid, PhD, Associate Professor and Associate Chair of English, University at Buffalo College of Arts and Sciences says, "From the popularity of reality TV to our tendency to slow down to rubberneck at car accidents on the highway, our society loves a spectacle, particularly if it includes the possibility of violence. This is especially true for those who want to be famous or are attracted to celebrity. More than ever before, fame means to be visible, to do something that grabs public attention and keeps it.

"Because the tightrope walk by Nik Wallenda touches upon all these themes, public interest in it extends far beyond morbid curiosity. The Wallenda event also has that X-factor that sets it apart from so much on the contemporary media landscape: unpredictability.

"Despite the ubiquity and popularity of so-called 'reality TV,' the vast majority of it is so safe, scripted and managed that any element of risk or unpredictability has been entirely removed. This falls walk, on the other hand is real 'reality' television that presents a genuinely chancy, dangerous spectacle on live TV, a performance that could actually lead to the actor’s death. That fact makes us nervous. It also compels us to watch.

"Suzanne Collins, author of the best-selling 'Hunger Games' books, presents the premise in which, in a future United States, children kill each other as part of an immensely popular televised game. At first glance, that seems completely unbelievable. But consider the success of the books and the movie adaptation and the excitement the Wallenda walk is generating. Collins has struck a nerve."

Niagara Falls Water Plume and Wind Could Affect Wallenda's Safety

Marcus Bursik, PhD, Professor of Geology, University at Buffalo College of Arts and Sciences, says "Wallenda will be walking over a portion of the Niagara Gorge directly below Niagara Falls on the Canadian side. It is not the safest route he could have chosen. In fact, it is probably the worst place to cross. There is a plume of mist that rises from the pool of water in the Niagara Gorge just below the falls that produces a moist updraft. It will be a little like walking a tightrope in a mini-thunderstorm. If he is used to moist updrafts, of course, it may not be a problem for him.

"The size and height of the plume will depend on meteorological conditions that day. It's all super-sensitive to small temperature differences. If the water is colder than the air, as it is in the summer months, there is no or little plume, and the air will be blowing down-gorge instead. At certain times of the year, the water is warmer than the air, which results in a plume that can rise up to 3,500 feet. The warmer the water is than the air, the bigger the plume will be.

"The wind will be a factor as well. The wind can be, and often is stronger than the up or downdrafts associated with the falls. From our measurements, we found that the wind is often blowing across the gorge, so he could get a head or tail wind, or even shear. The plume adds an extra vertical component to a wind that in most places is much weaker than the horizontal wind.

"Wallenda will have factored possible conditions into his consideration of the walk, of course. He must be used to dealing with different kinds of wind conditions. The plume will be relatively small at this time of year, possibly nonexistent, because air temperatures are warmer than the water. It presents a possible risk, however. Since his wire will be only 220 feet above the gorge basin, even a small plume could increase risk.


Wire walk is 'opportunity of a lifetime' for regional tourism

Arun Jain, Samuel P. Capen Professor of Marketing Research, University at Buffalo School of Management, says, "This is an opportunity of a lifetime. We will be a window to the world as this feat is performed at one of the Wonders of the World. Whether he succeeds (I hope he does) or fails, Niagara Falls will be etched in the minds of people across the globe.

"The challenge is to claim it as a Buffalo Niagara event and do things to put a glow on our fair city, attractions (history), and make us an inviting place to visit. Media will be looking for events and activities to show before, during and after the event. Either it will be filled with stories about us or Niagara Falls, Canada. My question is, are our local leaders savvy enough to take advantage of this opportunity handed to them on a silver platter?"
Public Appeal of Wallenda's Walk Has Psychological Underpinnings

Megan E. Pailler, PhD, Director of the Psychological Services Center, University at Buffalo College of Arts and Sciences, Department of Psychology, says, "Sensation seeking, vicarious thrills, danger and uncertainty -- because of our psychological makeup, Nik Wallenda's planned walk across Niagara Falls fascinates us on a variety of levels.

"In many ways, its appeal is similar to that of arousal-inducing action or horror media. As in those cases, the terms applied to the walk are 'dramatic,' 'thrilling' and 'exciting.' But why is this particular event so electrifying? I think for several reasons:

· Many people enjoy the physiological arousal they experience when watching dangerous, exciting and novel events, and the Wallenda walk offers danger, excitement and novelty.

· There is some evidence that people who rate higher on sensation seeking scales prefer arousing media. In general, men tend to rate higher in sensation seeking and may be more likely to enjoy watching a thrilling or dangerous event like the Wallenda walk. The walk also involves voyeurism, and we know how people enjoy watching other people's lives and activities (in fact, this accounts for some of the popularity of reality TV).

· The danger and uncertainty of the outcome intensify the experience, and, if the walk is successful, there is also a vicarious sense of relief and accomplishment -- similar to the experience of watching your favorite team win a game.

· The Wallenda walk is outside the realm of day-to-day experience. It provides an appealing escape from the mundane. Many of us have detailed memories of watching significant and novel events like the moon landing. And as I said, the potential for danger may heighten this feeling.

· Finally, there is often a satisfying sense of shared experience that accompanies watching such things with others -- things that may collectively be recalled and recounted."


Possible Peregrine Falcon Attacks on Wallenda a Safety Risk
Christopher Hollister, MLS, Associate Librarian, University at Buffalo University Libraries, is an avid conservationist and ornithologist and a contributor to the Breeding Bird Atlas published by the New York State Department of Environmental Conservation. He says:

"The Canadian Peregrine Foundation has raised alarms over the possibility of attacks on Nik Wallenda by a pair of peregrine falcons nesting in a nearby decommissioned Ontario Power Generation plant. They say the birds could feel threatened by Wallenda, since his Niagara Falls walk will take him through their flight path.

"Peregrines are aggressively protective of their territory, particularly when they are caring for their young, and in this region, this is the time of year when they are doing just that. There are countless stories of peregrine falcons diving-bombing people.

"I do think that possible falcon attacks should be a serious consideration in terms of his safety. Peregrines reach speeds of 200 miles per hour during a hunting swoop, making them the fastest animal on the planet. Being hit by a bird moving at that speed would have quite an impact on a man trying to balance on a high wire."


Wallenda Falls Walk Entails Different Kinds of Risks

Cristian-Ioan Tiu, PhD, Assistant Professor of Finance, University at Buffalo School of Management, says, "The upcoming walk by Nik Wallenda over Niagara Falls entails a variety of risks.

"The first kind is enterprise risk. This is the risk that the whole 'adventure' won’t happen for reasons such as regulations, last minute concern from authorities, etc. I assume that there has been work dedicated to ensure that this event will happen, so the risk here is low.

"Second is specific risk. In this case, danger could be increased by weather, mechanical complications or health issues. I am no professional wire walker, but I assume that the training and accurate weather reports predict most of these risks and pose appropriate responses.

"Finally, there is uncertainty or unquantifiable risks. For example (and I hope not to jinx the guy), the walker cannot prepare for such things as a kid flashing a laser pointer, a helicopter flying too close or someone falling in the water, but they might increase his risk. In fact, however, these are poor examples. What I mean by uncertainty is something for which it is completely impossible to plan.

"I have described these risks from the perspective of the walker. From the perspective of the viewer, the risk will appear greater, partly because what is quantified above as specific risk, that is, a risk that can be managed, will be perceived by the viewer, who is not a rope walker, as uncertainty.

"The greater the difference in perception, the more interesting the show will be."

Contacts and sources:

New Research Shows Runners Can Improve Health And Performance With Less Training

The new 10-20-30 training concept can improve both a person's running performance and health, despite a significant reduction in the total amount of training. This is the conclusion of a study from University of Copenhagen researchers just published in the renowned scientific Journal of Applied Physiology.

Eadweard Muybridge photo sequence
Credit: Wikipedia

Over the course of seven weeks, runners improved their performance by 23 seconds on a 1,500-metre run and by almost a minute on a 5-km run – and this despite a 50 per cent reduction in their total amount of training. These are just some of the results from a research project involving 18 moderately trained runners following the 10-20-30 training concept developed by researchers from the Department of Exercise and Sport Sciences at the University of Copenhagen.

See the interview with Professor Jens Bangsbo in which he introduces the 10-20-30 training concept and explains the results of the recently published study.

Credit: Carsten Lundager

In addition to enhancing running performance, the runners from the project also had a significant decrease in blood pressure and a reduction in cholesterol in the blood.

"We were very surprised to see such an improvement in the health profile considering that the participants have been running for several years," says Professor Jens Bangsbo, Department of Exercise and Sport Sciences, who heads the project.

"The results show that the very intense training has a great potential for improving health status of already trained individuals," says Professor Bangsbo.

PhD student Thomas Gunnarsson adds that the emotional well-being of the participants also improved over the span of the project.

"We found a reduction in emotional stress when compared to control subjects continuing their normal training based on a recovery-stress questionnaire administered before and after the 7-week training period," explains Gunnarsson.

The 10-20-30 training concept

The 10-20-30 training concept consists of a 1-km warm-up at a low intensity followed by 3-4 blocks of 5 minutes of running separated by 2 minutes of rest. Each block consists of 5 consecutive 1-minute intervals divided into 30, 20 and 10 seconds of running at a low, moderate and near-maximal intensity, respectively.
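
To make the session structure concrete, here is a minimal Python sketch that lays one workout out as a timed schedule. It is an illustration only, not software from the study: the three-block default, the six-minute warm-up duration and the instruction wording are assumptions added for the example (the protocol itself specifies only a 1-km low-intensity warm-up and 3-4 blocks).

```python
# Minimal sketch of a 10-20-30 session as a timed schedule.
# Assumptions (not from the study): 3 blocks by default, and a ~6-minute
# warm-up standing in for the protocol's 1-km low-intensity warm-up.

def ten_twenty_thirty_session(blocks=3, warmup_seconds=6 * 60):
    """Return a list of (elapsed_seconds, instruction) pairs for one session."""
    schedule = [(0, "Warm-up: ~1 km at low intensity")]
    t = warmup_seconds
    for block in range(1, blocks + 1):
        # Each block: 5 consecutive 1-minute intervals of 30-20-10 running.
        for interval in range(1, 6):
            schedule.append((t, f"Block {block}.{interval}: 30 s low intensity"))
            t += 30
            schedule.append((t, f"Block {block}.{interval}: 20 s moderate intensity"))
            t += 20
            schedule.append((t, f"Block {block}.{interval}: 10 s near-maximal effort"))
            t += 10
        if block < blocks:
            schedule.append((t, "Rest: 2 minutes"))
            t += 2 * 60
    schedule.append((t, "Session complete"))
    return schedule

if __name__ == "__main__":
    for seconds, step in ten_twenty_thirty_session():
        print(f"{seconds // 60:2d}:{seconds % 60:02d}  {step}")
```

With three blocks, the arithmetic works out to 15 minutes of interval running plus 4 minutes of rest, about 25 minutes in total once the warm-up is included, which is consistent with the 20-30 minutes cited in the next section.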

30 minutes is all you need

According to Professor Bangsbo, the 10-20-30 training concept fits easily into a busy daily schedule, as the time needed for training is low. A total of 20-30 minutes including warm-up is all that is needed. Since the 10-20-30 concept deals with relative speeds and includes low-speed running and 2-minute rest periods, individuals with different fitness levels and training backgrounds can perform the 10-20-30 training together.

"The training was very inspiring. I could not wait to get out and run together with the others. Today, I am running much faster than I ever thought possible," says Katrine Dahl, one of the participants in the study.  

The study was supported by the Nordea-fonden, Copenhagen, Denmark, and the results are published in the Journal of Applied Physiology.

Contacts and sources:
Jens Bangsbo
University of Copenhagen

Hiding True Self At Work Can Result In Less Job Satisfaction, Greater Turnover, According To New Rice U. Study

Hiding your true social identity — race and ethnicity, gender, age, religion, sexual orientation or a disability — at work can result in decreased job satisfaction and increased turnover, according to a new study from Rice University, the University of Houston and George Mason University.

“The workplace is becoming a much more diverse place, but there are still some individuals who have difficulty embracing what makes them different, especially while on the job,” said Michelle Hebl, Rice professor of psychology and co-author of “Bringing Social Identity to Work: The Influence of Manifestation and Suppression on Perceived Discrimination, Job Satisfaction and Turnover Intentions.” The paper appears in the Cultural Diversity and Ethnic Minority Psychology journal.

“Previous research suggests that employees who perceive discrimination or are afraid of experiencing it are more likely to fall into this category of individuals who feel the need to suppress or conceal their identity,” Hebl said.

The study examined the behavior of 211 working adults in an online survey and measured factors such as identity, perceived discrimination, job satisfaction and turnover intentions.

“This research highlights the fact that people make decisions every day about whether it is safe to be themselves at work, and that there are real consequences of these decisions,” said Rice alumna Eden King, study co-author and associate professor of psychology at George Mason University.

The study also showed that suppressing one’s true identity might result in exposure to co-workers’ discriminatory behavior, as people are less likely to care about appearing prejudiced when they are not in the presence of an “out” group member. Conversely, the research finds that expressing one’s true identity in the workplace can have a positive impact on interpersonal relationships.

“When individuals embrace their social identity in the workplace, other co-workers might be more sensitive to their behavior and treatment of individuals like them,” said Juan Madera, a University of Houston professor, Rice alumnus and lead study author. “And quite often, what’s good for the worker is good for the workplace. The employees feel accepted and have better experiences with co-workers, which creates a positive working environment that may lead to decreased turnover and greater profits.”

The authors hope their research will encourage the general public to accept people from diverse backgrounds and become allies to them, and will encourage employers to implement policies that foster a positive organizational culture.

“I think this study really demonstrates that everyone can have a role in making the workplace more inclusive,” Hebl said. “Individuals tell co-workers, who can act as allies and react positively, and organizations can institute protective and inclusive organizational policies. All of these measures will continue to change the landscape and diversity of our workforce.”

This study was funded by Rice University, the University of Houston and George Mason University.
 
Contacts and sources: