Unseen Is Free

Friday, September 30, 2011

Would You Kill An Innocent To Save Five Others? Antisocial Personality Traits Predict Utilitarian Responses To Moral Dilemmas


A new study questions the widely used methods by which lay moral judgments are evaluated; it found that the individuals these methods rate as least prone to moral errors also possess a set of prototypically immoral psychological traits.

The study, conducted by Daniel Bartels (Columbia Business School, Marketing) and David Pizarro (Cornell University, Psychology), found that people who endorse actions consistent with an ethic of utilitarianism—the view that the morally right action is whatever produces the best overall consequences—tend to possess psychopathic and Machiavellian personality traits.

In the study, Bartels and Pizarro gave participants a set of moral dilemmas widely used by behavioral scientists who study morality, like the following: "A runaway trolley is about to run over and kill five people, and you are standing on a footbridge next to a large stranger; your body is too light to stop the trolley, but if you push the stranger onto the tracks, killing him, you will save the five people. Would you push the man?" Participants also completed a set of three personality scales: one assessing psychopathic traits in a non-clinical sample, one assessing Machiavellian traits, and one assessing whether participants believed that life was meaningful. Bartels and Pizarro found a strong link between utilitarian responses to these dilemmas (e.g., approving the killing of an innocent person to save the others) and personality styles that were psychopathic or Machiavellian, or that tended to view life as meaningless.

These results (which recently appeared in the journal Cognition) raise questions for psychological theories of moral judgment that equate utilitarian responses with optimal morality, and treat non-utilitarian responses as moral "mistakes". The issue, for these theories, is that these results would lead to the counterintuitive conclusion that those who are "optimal" moral decision makers (i.e., who are likely to favor utilitarian solutions) are also those who possess a set of traits that many would consider prototypically immoral (e.g., the emotional callousness and manipulative nature of psychopathy and Machiavellianism).

While some might be tempted to conclude that these findings undermine utilitarianism as an ethical theory, Prof. Bartels explained that he and his co-author have a different interpretation: "Although the study does not resolve the ethical debate, it points to a flaw in the widely-adopted use of sacrificial dilemmas to identify optimal moral judgment. These methods fail to distinguish between people who endorse utilitarian moral choices because of underlying emotional deficits (like those captured by our measures of psychopathy and Machiavellianism) and those who endorse them out of genuine concern for the welfare of others." In short, if scientists' methods cannot identify a difference between the morality of a utilitarian philosopher who sacrifices her own interest for the sake of others, and a manipulative con artist who cares little about the feelings and welfare of anyone but himself, then perhaps better methods are needed.

Contacts and sources: 
Sona Rai
Columbia Business School

The Moon's Shadow, Like A Ship, Creates Waves

Lunar Orbiter 1 view of the Moon and crescent Earth. This is the first good image of the Earth taken from the vicinity of the Moon, 380,000 km away. The Earth sunset terminator runs through Odessa, Istanbul, and slightly west of Cape Town. The center of the lunar surface corresponds to the location of the crater Pasteur, just on the eastern farside at 10°S, 105°E, but the high sun angle makes it hard to see the craters. The horizon covers about 550 km, and north is to the right in this west-facing image.
Credit: NASA  (Lunar Orbiter 1, frame 102; H1, H2, and H3)

During a solar eclipse, the Moon's passage overhead blocks out the majority of the Sun's light and casts a wide swath of the Earth into darkness. The land under the Moon's shadow receives less incoming energy than the surrounding regions, causing it to cool. In the early 1970s, researchers proposed that this temperature difference could set off slow-moving waves in the upper atmosphere.

They hypothesized that the waves, moving more slowly than the traveling temperature disparity from which they spawned, would pile up along the leading edge of the Moon's path, like slow-moving waves breaking on a ship's bow. The dynamic was shown theoretically and in early computer simulations, but it was not until a total solar eclipse on 22 July 2009 that researchers were able to observe the behavior.

Using a dense network of ground-based global positioning satellite receivers, Liu et al. tracked the influence of the 2009 eclipse as it passed over Taiwan and Japan. The researchers looked for changes in the total electron content of the ionosphere and found acoustic waves with periods between 3 and 5 minutes traveling at around 100 meters per second (328 feet per second) that originated from the leading and trailing edges of the shadow, analogous to the bow waves and stern wakes common in maritime activity.

They found a 30-minute difference between the arrival of the bow and stern waves, suggesting that, were the Moon's shadow a ship, it would be 1,712 kilometers (1,064 miles) long. The researchers indicate that this would correspond to the part of the Moon's shadow that produced at least an 80 percent obscuration of the Sun's light.
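As a rough consistency check (a back-of-the-envelope sketch using the figures quoted in this article, not a calculation from the paper itself), dividing the "ship" length by the 30-minute arrival gap gives the implied ground speed of the shadow:

```python
# Back-of-the-envelope check using the article's figures: the "shadow
# ship" length and the bow-to-stern arrival gap imply the shadow's
# ground speed past a fixed receiver.
dt_s = 30 * 60          # arrival-time difference, seconds
length_m = 1_712_000    # reported "ship" length, meters

shadow_speed = length_m / dt_s
print(f"Implied shadow ground speed: {shadow_speed:.0f} m/s")
# ~951 m/s -- roughly ten times the ~100 m/s speed of the ionospheric
# waves themselves, consistent with a supersonic lunar shadow sweeping
# across the ground.
```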

Source: Geophysical Research Letters, doi:10.1029/2011GL048805, 2011. http://dx.doi.org/10.1029/2011GL048805

Title: Bow and stern waves triggered by the Moon's shadow boat

Authors: J. Y. Liu: Institute of Space Science, National Central University, Chung-Li, Taiwan, Center for Space and Remote Sensing Research, Chung-Li, Taiwan, and National Space Organization, Hsin-Chu, Taiwan;

Y. Y. Sun: Institute of Space Science, National Central University, Chung-Li, Taiwan;

Y. Kakinami: Institute of Seismology and Volcanology, Hokkaido University, Sapporo, Japan;

C. H. Chen: Department of Geophysics, Graduate School of Science, Kyoto University, Kyoto, Japan;

C. H. Lin: Department of Earth Science, National Cheng Kung University, Tainan, Taiwan;

H. F. Tsai: Central Weather Bureau, Taipei, Taiwan.

Johns Hopkins Scientists Discover 'Fickle' DNA Changes In Brain

Johns Hopkins scientists investigating chemical modifications across the genomes of adult mice have discovered that DNA modifications in non-dividing brain cells, thought to be inherently stable, instead underwent large-scale dynamic changes as a result of stimulated brain activity. Their report, in the October issue of Nature Neuroscience, has major implications for treating psychiatric and neurodegenerative disorders and for better understanding learning, memory and mood regulation.

Specifically, the researchers, who include a husband-and-wife team, found evidence of an epigenetic change called demethylation — the loss of a methyl group from specific locations — in the non-dividing brain cells’ DNA, challenging the scientific dogma that even if the DNA in non-dividing adult neurons does occasionally change from a methylated to a demethylated state, it does so very infrequently.

“We provide definitive evidence suggesting that DNA demethylation happens in non-dividing neurons, and it happens on a large scale,” says Hongjun Song, Ph.D., professor of neurology and neuroscience and director of the Stem Cell Program in the Institute for Cell Engineering of the Johns Hopkins University School of Medicine. “Scientists have previously underestimated how important this epigenetic mechanism can be in the adult brain, and the scope of change is dramatic.”

DNA comprises the fixed chemical building blocks of each person or animal’s genome, but the addition or removal of a methyl group at specific locations chemically alters DNA and regulates gene expression, enabling cells with the same genetic code to acquire and activate separate functions.

In previously published work, the same Hopkins researchers reported that electrical brain stimulation, such as that used in electroconvulsive therapy (ECT) for patients with drug resistant depression, resulted in increased brain cell growth in mice, due likely to changes in DNA methylation status.

This time, they again used electric shock to stimulate the brains of live mice. A few hours after administering the brain stimulation, the scientists analyzed two million of the same type of neurons from the brains of stimulated mice, focusing on what happens to one building block of DNA — cytosine — at 219,991 sites. These sites represented about one percent of all cytosines in the whole mouse genome.

In collaboration with genomic biologist Yuan Gao, now at the Lieber Institute for Brain Development, the scientists used the latest DNA sequencing technology and compared neurons in mice with or without brain stimulation. About 1.4 percent of the cytosines measured showed rapid active demethylation or became newly methylated.
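A quick arithmetic check (ours, using the article's own numbers rather than the paper's) connects this percentage to the count of sites Song describes below:

```python
# Rough consistency check using the figures quoted in this article.
sites_assayed = 219_991   # cytosine sites measured (~1% of the genome)
frac_changed = 0.014      # ~1.4% changed methylation status

print(f"~{sites_assayed * frac_changed:,.0f} sites changed")
# ~3,080 -- the "thousands of sites" quoted below.
```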

“It was mind-boggling to see that so many methylation sites — thousands of sites — had changed in status as a result of brain activity,” Song says. “We used to think that the brain’s epigenetic DNA methylation landscape was as stable as mountains and more recently realized that maybe it was a bit more subject to change, perhaps like trees occasionally bent in a storm. But now we show it is most of all like a river that reacts to storms of activity by moving and changing fast.”

The majority of the sites where the methylation status of the cytosine changed as a result of the brain activity were not in the expected areas of the genome that are traditionally believed to control gene expression, Song notes. Rather, they were in regions of the genome where cytosines are low in density and where the function of DNA methylation is not well understood.

Because DNA demethylation can occur passively during cell division, the scientists targeted radiation to the sections of mouse brains they were studying, permanently preventing passive cell division, and still found evidence of DNA demethylation. This confirms, they say, that the DNA methylation changes they measured occurred independently of cell division.

“Our finding opens up new opportunities to figure out if these epigenetic modifications are potential drug targets for treating depression and promoting regeneration, for instance,” says Guo-li Ming, M.D., Ph.D., professor of neurology and neuroscience.

This research was supported by the National Institutes of Health, a McKnight Scholar Award, the Brain and Behavior Research Foundation, the Adelson Medical Research Foundation, and the Johns Hopkins Brain Science Institute.

Authors of the paper from Johns Hopkins in addition to Song and Ming are Junjie U. Guo, Dengke K. Ma, Eric Ford, Mi-Hyeon Jang, Michael A Bonaguidi and Yuan Gao.

Other authors are Huan Mo and Hugh L. Eaves of the Virginia Commonwealth University; Madeleine P. Ball, Harvard Medical School; Jacob A Balazer, Proofpoint Inc.; Bin Xie, Lieber Institute for Brain Development; and Kun Zhang, University of California at San Diego.

Contacts and sources: 
Related Stories:
Guo-li Ming on how neurons make connections in the brain

Nervous System Stem Cells Can Replace Themselves, Give Rise to Variety of Cell Types, Even Amplify

Hopkins Team Discovers How DNA Changes

Growth of New Brain Cells Requires 'Epigenetic' Switch

New "Schizophrenia Gene" Prompts Researchers To Test Potential Drug Target

On the Web:
Song lab: http://www.hopkinsmedicine.org/institute_cell_engineering/experts/hongjun_song.html
Ming lab: http://www.hopkinsmedicine.org/institute_cell_engineering/experts/guo_ming.html
Nature Neuroscience: http://www.nature.com/neuro/index.html

New Shrimp Technology Could Speed Up Race To Feed The World

They may look like bunk beds on steroids, but a new shrimp production technology developed by a Texas AgriLife Research scientist near Corpus Christi promises to revolutionize how shrimp make it to our tables.

The patent-pending technology, known as super-intensive stacked raceways, was created by Dr. Addison Lawrence at the Texas AgriLife Research Mariculture Laboratory at Port Aransas, who says the system is able to produce record-setting amounts of shrimp.

Dr. Addison Lawrence, left, points to the lower section of his super-intensive stacked raceway shrimp production system to Dr. Maurice Kemp, president of Royal Caridea. 
AgriLife Research photo by Patty Waits Beasley

“We’re able to produce jumbo-size shrimp, each weighing 1.1 ounces, known as U15 shrimp, which gives us world-record production of up to 25 kilograms of shrimp per cubic meter of water using zero water exchange and/or recirculating water,” he said.

At this rate of production, Lawrence said commercial shrimp producers will have the potential to vastly increase their profit margins.

A world-wide license for the new technology has been awarded to Royal Caridea, headed by Dr. Maurice Kemp, president. Sub-licenses are being considered for other countries, including Ecuador, Chile, Colombia, Mexico, Canada, People’s Republic of China, Germany, Czech Republic and Russia.

Lawrence is convinced the indoor system will decrease this country’s dependence on foreign shrimp and could even help alleviate world hunger.

“Order a plate of shrimp at any U.S. restaurant, even along the coast, and chances are you’ll be served shrimp farmed in Southeast Asia and frozen two to four times before it got to your table,” Lawrence said. “That’s because the U.S. imports about 90 percent of the shrimp it consumes, with a value of about $4 billion annually.”

In addition to contributing to a foreign trade deficit, imported shrimp also bring with them environmental and quality control issues, he said.

“They are grown in open ponds and treated sometimes with antibiotics banned in this country, creating a negative impact on wetlands and human consumption,” Lawrence said. “About 90 percent of sea life in the world spends some portion of their life in the wetlands, thus making wetlands essential for the sustainability of food from the oceans. Uncontrolled use of antibiotics creates its own problems for the wetlands and consumers. But because Thailand, India, Vietnam and other countries in the tropics can grow two or three crops of shrimp per year compared to just one crop in the U.S., it’s hard to compete.”

Until now, Lawrence added.

A prototype of the new system has been constructed in a darkened room just feet from its creator’s office. The shrimp grow in four columns of raceways stacked four high. These raceways are long tubs with circulating water averaging only 5 to 7 inches in depth. As the shrimp develop, they are transferred to a raceway below. Baby shrimp are added to the top raceway, while the more mature shrimp in the lower raceways are harvested.

“Simplicity is the key here,” said Lawrence. “Some of history’s most creative, innovative inventions are based on very simple logic. Keep it simple.”

But the results of these simple tanks — the amount of shrimp that can be harvested — are astounding, Lawrence said.

“These tanks require stringent control and supervision, 24/7 monitoring with computers tracking the shrimp,” he said. “But properly run, these systems can produce up to 1 million pounds of shrimp per acre of water, or two acres of land, per year. That’s far superior to traditional shrimp farms in the U.S. that can produce only up to 20,000 pounds of shrimp per acre of water per year. In tropical countries that have year-round growing seasons, they can produce up to 60,000 pounds of shrimp per year.”
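Taking those quoted yields at face value (a rough comparison of the article's numbers, not independent data), the stacked raceways out-produce conventional ponds by a wide margin:

```python
# Compare the per-acre-of-water annual yields quoted above (lb/acre/yr).
stacked_raceway = 1_000_000   # new system, properly run
us_pond = 20_000              # traditional U.S. shrimp farm
tropical_pond = 60_000        # tropical farm with 2-3 crops per year

print(f"vs. U.S. ponds:     {stacked_raceway / us_pond:.0f}x")        # 50x
print(f"vs. tropical ponds: {stacked_raceway / tropical_pond:.0f}x")  # ~17x
```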

Developing the stacked raceway system is the culmination of Lawrence’s 50-year career in aquaculture, he said. Along the way he has developed various components of the new system, including the patent-pending feed for growing the shrimp (as co-inventor), the closed water system using zero exchange and recirculation, a unique raceway bottom design and aeration system, and other technologies.

The vision for his creation includes stacked raceway facilities near major metropolitan areas throughout the country, producing live, fresh, never-frozen or fresh-frozen shrimp to be available every day of the year.

“Most Americans have never tasted fresh shrimp,” he said. “There is a huge demand for high-quality shrimp. At a nearby IGA supermarket, we test-marketed shrimp produced in these raceways and they sold out in a matter of hours. They would surely bring premium prices at supermarkets and restaurants in New York, Chicago, Las Vegas and other large cities. But more importantly, these systems could provide the protein that a booming world population desperately needs.”

Lawrence said that the world’s population is expected to increase significantly in the next 20 to 30 years.

“Where will the necessary protein come from?” he asked. “The only way to feed the world, I think, is with aquaculture. We can’t catch more fish or shrimp; we’re at a maximum sustained yield, so these systems would not compete with fishermen.”

China, currently a shrimp exporter, will soon become a net importer of shrimp, which will push shrimp prices upward, said Kemp, president of Royal Caridea.

The world’s first commercial application of Lawrence’s stacked raceways will break ground just miles from his office in Port Aransas, according to Kemp. His company will own and operate the project.

“We’ll construct a facility of about 70,000 square feet, hire 15 to 20 people, some of them with advanced degrees, and produce shrimp year-round. We expect to produce some 835,000 pounds of shrimp per year,” he said.

“Also of significance is that this technology will allow shrimp farms to be built inland in proximity to major metropolitan areas and provide live, fresh-dead and fresh-frozen shrimp on a daily basis,” Kemp added.

Lawrence said that, based on high growth rates and high survival and production levels, economic data show an estimated rate of return of 25 percent to 60 percent.

“There are no disease problems; it’s biosecure. So, with predictable high internal rates of return, the system is economically viable. But the best part is, it’s totally organic with high-quality protein available every day of the year.”

Contacts and sources:
 Story by Rod Santa Ana
Texas A&M

Texas Drought Could Last Until 2020 Says Texas A&M Climatologist

Texas’ historic and lingering drought has already worn out its welcome, but it could easily stay around for years and there is a chance it might last another five years or even until 2020, says a Texas A&M University weather expert.

John Nielsen-Gammon, who serves as Texas State Climatologist and professor of atmospheric sciences at Texas A&M, says the culprit is the likely establishment of a new La Niña in the central Pacific Ocean. A La Niña forms when colder than usual ocean temperatures develop in the central Pacific; these tend to create wetter than normal conditions in the Pacific Northwest and drier than normal conditions in the Southwest. A La Niña has been blamed for starting the current drought, and the new one, which began developing several weeks ago, is likely to extend drought conditions for Texas and much of the Southwest.

Currently, about 95 percent of Texas is in either a severe or exceptional drought status and the past year has been the worst one-year drought in the state’s history, Nielsen-Gammon adds.

“This is looking more and more like a multi-year drought,” explains the Texas A&M professor.
Credit: Texas A&M

“September is already proving to be an exceptionally dry month and overall, little more than an inch of rain on average has occurred over Texas, compared to about three inches in a normal year. So a very dry state has become even drier.”

Many parts of Texas are from 10 to 20 inches behind in rainfall.

“We know that Texas has experienced droughts that lasted several years,” adds Nielsen-Gammon. “Many residents remember the drought of the 1950s, and tree ring records show that drought conditions occasionally last for a decade or even longer. I’m concerned because the same ocean conditions that seem to have contributed to the 1950s drought have been back for several years now and may last another five to 15 years.”

The drought has devastated farmers and ranchers, and officials have estimated agriculture losses at more than $5.2 billion. This summer, hundreds of wildfires erupted in Texas and burned more than 127,000 acres, the most ever, and lake levels are down as much as 50 feet in some lakes while several West Texas lakes have completely dried up.

Numerous Texas cities set heat records this summer, such as Wichita Falls, which recorded 100 days of 100-degree heat, the most ever for that city. Dallas also set a record with 70 days of 100-degree heat, and the city had to close down 25 sports fields because large cracks in the ground were deemed unsafe for athletic competition.

“Our best chance to weaken the drought would have been a tropical system coming in from the gulf, but that never happened and hurricane season is just about over for us,” Nielsen-Gammon reports. “There’s still hope for significant rain through the end of October while tropical moisture is still hanging around, but that’s all it is – a hope.”

“In the next few months, the outlook is not all that promising for rain. Parts of Texas, such as the Panhandle and far Northeast Texas, have a better chance than the rest of the state,” he adds.

“Because Texas needs substantially above-normal rain to recover, and it’s not likely to get it, I expect that most of the state will still be in major drought through next summer.”

Contacts and sources: 

Fruity Aromas: An Aphrodisiac For Flies

The smell of food acts as an aphrodisiac for Drosophila (vinegar flies). A European team headed by CNRS researchers from the Centre des Sciences du Goût et de l'Alimentation (CNRS/Université de Bourgogne/INRA) has brought to light a novel olfactory perception mechanism: male flies use a scent derived from the fruit that they eat to stimulate their sexual appetite. This work was published online on 28 September 2011 in the journal Nature.

Drosophila on a fruit 
© C. Everaerts, CSGA (CNRS/UB/INRA)

An unexpected olfactory perception mechanism in male vinegar flies (Drosophila melanogaster) leading to their sexual stimulation has been identified and analyzed by CNRS researchers from the Centre des Sciences du Goût et de l'Alimentation (CNRS/Université de Bourgogne/INRA) in Dijon, in collaboration with a Swiss laboratory at Lausanne University and a British team in Cambridge. The scientists have shown that phenylacetic acid, a molecule associated with food-derived odors (present in flowers, fruit, honey, etc.), binds to a specific olfactory molecular receptor (IR84a) situated on male flies' antennae. Detection of this particular scent by this specific receptor triggers the significant activation of some thirty specific neurons, which sets off a defined neuronal circuit resulting in increased sexual arousal of the male fly.

Described here for the first time, the olfactory molecular receptor IR84a keeps the sensory neurons permanently active, even without odor, so as to keep the male flies ready to attract a potential partner. In this way, the more "perfumed" (with phenylacetic acid) the partner, the more attractive it will be, thereby greatly increasing the insect's sexual arousal. This was proved by genetically deleting the expression of the receptor, which considerably reduces the sexual activity of the male flies (both with and without "perfume").

This olfactory perception mechanism is especially important in "fruit fly" species in the wider sense: the advantage of mating near food sources is obvious for the offspring. Additional work could help to discover similar mechanisms in other animal species.

Contacts and sources:
CNRS (Délégation Paris Michel-Ange)

Citation: An olfactory receptor for food-derived odours promotes male courtship in Drosophila; Yael Grosjean, Raphael Rytz, Jean-Pierre Farine, Liliane Abuin, Jérôme Cortot, Gregory S. X. E. Jefferis and Richard Benton. Nature, 28 September 2011.


MESSENGER Data Paints New Picture of Mercury's Magnetic Field: UBC Researcher

A University of British Columbia geophysicist is part of a NASA mission that is analyzing the first sets of data being collected by MESSENGER as it orbits Mercury. The spacecraft is capturing new evidence that challenges many previous assumptions about our innermost planet.

Analyses of new data from the spacecraft reveal a host of firsts: evidence of widespread flood volcanism on the planet’s surface, close-up views of Mercury’s crater-like depressions, direct measurements of the chemical composition of its surface, and observations of the planet’s global magnetic field.

Schematic View of Mercury’s Magnetosphere and Heavy Plasma Ion Flux:  Mercury’s planetary magnetic field largely shields the surface from the supersonic solar wind emanating continuously from the Sun. MESSENGER has been in a near-polar, highly eccentric orbit (dashed red line) since 18 March 2011. Maxima in heavy ion fluxes observed from orbit are indicated in light blue.
Images from Mercury
Credit: Courtesy of Science/AAAS

The results are reported in a set of seven papers published in a special section of Science magazine on September 30, 2011.

UBC’s Catherine Johnson, an expert in planetary magnetic and gravity fields, is part of the MESSENGER Mission’s geophysics group.

Johnson, along with colleagues at Johns Hopkins University’s Applied Physics Lab, Goddard Space Flight Centre, University of Michigan and the Carnegie Institution, analyzed the data collected by the spacecraft’s magnetometer to detect Mercury’s magnetic equator and paint a never-before-seen picture of Mercury’s magnetic field.

Mercury is the only other planet in the inner solar system besides Earth whose global magnetic field has an internal origin.

“The MESSENGER data has allowed us to establish the large-scale structure of Mercury’s magnetic field,” says Johnson. “Mercury’s field is weak compared to Earth’s. But until now, figuring out exactly how much weaker has been a challenge.”

“Knowing more about the planet’s field geometry might allow us to explain why Mercury’s field differs so much from ours. The magnetometer’s observations allow us to separate the internal and external magnetic field contributions so we can properly estimate the strength of Mercury’s internal dipole field.”

The strength, position and orientation of the dipole field determine how the solar wind interacts with the planet. A stronger magnetic field holds the solar wind at bay. Mercury’s weak magnetic field allows solar wind to reach the planet’s surface, creating a ‘sputter’ effect, an ejection of atoms from individual mineral crystals, which are then carried away from the planet’s surface.

“Because Mercury has no atmosphere, the solar wind can reach the planet’s surface at high northern and southern latitudes, causing that ‘sputtering’ effect, which creates a very tenuous exosphere.”

“In contrast, the solar wind doesn’t reach Earth’s surface,” explains Johnson. “Instead, it interacts with our upper atmosphere, causing what we know as the Northern Lights.”

Mercury, the solar system’s smallest and densest planet, is in many ways the most extreme.

“The weak magnetic field makes for a very dynamic environment. We want to know what effect that has on Mercury’s interactions with the solar wind. The data being returned from MESSENGER will allow us to do that.”

“MESSENGER’s instruments are capturing data that can be obtained only from orbit,” says MESSENGER Principal Investigator Sean Solomon, of the Carnegie Institution of Washington. “We have imaged many areas of the surface at unprecedented resolution, we have viewed the polar regions clearly for the first time, we have built up global coverage with our images and other data sets, we are mapping the elemental composition of Mercury’s surface, we are conducting a continuous inventory of the planet’s neutral and ionized exosphere, and we are sorting out the geometry of Mercury’s magnetic field and magnetosphere. And we’ve only just begun. Mercury has many more surprises in store for us as our mission progresses.”

MESSENGER’s primary mission is to collect data on the composition and structure of Mercury’s crust, topography and geologic history, thin atmosphere and active magnetosphere, and makeup of core and polar materials.

Contacts and sources: 

New Cardiac Patch Uses Gold Nanowires To Enhance Electrical Signaling Between Cells

A team of researchers at MIT and Children’s Hospital Boston has built cardiac patches studded with tiny gold wires that could be used to create pieces of tissue whose cells all beat in time, mimicking the dynamics of natural heart muscle. The development could someday help people who have suffered heart attacks.

The study, reported this week in Nature Nanotechnology, promises to improve on existing cardiac patches, which have difficulty achieving the level of conductivity necessary to ensure a smooth, continuous “beat” throughout a large piece of tissue.

A scanning electron microscope (SEM) image of nanowire-alginate composite scaffolds. Star-shaped clusters of nanowires can be seen in these images. 
Image courtesy of the Disease Biophysics Group, Harvard University

“The heart is an electrically quite sophisticated piece of machinery,” says Daniel Kohane, a professor in the Harvard-MIT Division of Health Sciences and Technology (HST) and senior author of the paper. “It is important that the cells beat together, or the tissue won’t function properly.”

The unique new approach uses gold nanowires scattered among cardiac cells as they’re grown in vitro, a technique that “markedly enhances the performance of the cardiac patch,” Kohane says. The researchers believe the technology may eventually result in implantable patches to replace tissue that’s been damaged in a heart attack.

Co-first authors of the study are MIT postdoc Brian Timko and former MIT postdoc Tal Dvir, now at Tel Aviv University in Israel; other authors are their colleagues from HST, Children’s Hospital Boston and MIT’s Department of Chemical Engineering, including Robert Langer, the David H. Koch Institute Professor.

Ka-thump, ka-thump

To build new tissue, biological engineers typically use miniature scaffolds resembling porous sponges to organize cells into functional shapes as they grow. Traditionally, however, these scaffolds have been made from materials with poor electrical conductivity — and for cardiac cells, which rely on electrical signals to coordinate their contraction, that’s a big problem.

“In the case of cardiac myocytes in particular, you need a good junction between the cells to get signal conduction,” Timko says. But the scaffold acts as an insulator, blocking signals from traveling much beyond a cell’s immediate neighbors, and making it nearly impossible to get all the cells in the tissue to beat together as a unit.


Video courtesy of the Disease Biophysics Group, Harvard University

To solve the problem, Timko and Dvir took advantage of their complementary backgrounds — Timko’s in semiconducting nanowires, Dvir’s in cardiac-tissue engineering — to design a brand-new scaffold material that would allow electrical signals to pass through.

“We started brainstorming, and it occurred to me that it’s actually fairly easy to grow gold nanoconductors, which of course are very conductive,” Timko says. “You can grow them to be a couple microns long, which is more than enough to pass through the walls of the scaffold.”

From micrometers to millimeters

The team took as their base material alginate, an organic gum-like substance that is often used for tissue scaffolds. They mixed the alginate with a solution containing gold nanowires to create a composite scaffold with billions of the tiny metal structures running through it.

Then, they seeded cardiac cells onto the gold-alginate composite, testing the conductivity of tissue grown on the composite compared to tissue grown on pure alginate. Because signals are conducted by calcium ions in and among the cells, the researchers could check how far signals travel by observing the amount of calcium present in different areas of the tissue.

“Basically, calcium is how cardiac cells talk to each other, so we labeled the cells with a calcium indicator and put the scaffold under the microscope,” Timko says. There, they observed a dramatic improvement among cells grown on the composite scaffold: The range of signal conduction improved by about three orders of magnitude.


A wider SEM image of the nanowire-alginate composite scaffolds.
Image courtesy of the Disease Biophysics Group, Harvard University

“In healthy, native heart tissue, you’re talking about conduction over centimeters,” Timko says. Previously, tissue grown on pure alginate showed conduction over only a few hundred micrometers, or thousandths of a millimeter. But the combination of alginate and gold nanowires achieved signal conduction over a scale of “many millimeters,” Timko says.

“It’s really night and day. The performance that the scaffolds have with these nanomaterials is just much, much better,” Kohane says.

“It’s very beautiful work,” says Charles Lieber, a professor of chemistry at Harvard University. “I think the results are quite unambiguous, and very exciting — both in showing fundamentally that they’ve improved the conductivity of these scaffolds, and then how that clearly makes a difference in enhancing the collective firing of the cardiac tissue.”

The researchers plan to pursue studies in vivo to determine how the composite-grown tissue functions when implanted into live hearts. Aside from implications for heart-attack patients, Kohane adds that the successful experiment “opens up a bunch of doors” for engineering other types of tissues; Lieber agrees.

“I think other people can take advantage of this idea for other systems: In other muscle cells, other vascular constructs, perhaps even in neural systems, this is a simple way to have a big impact on the collective communication of cells,” Lieber says. “A lot of people are going to be jumping on this.”

Contacts and sources: 
Emily Finn, MIT News Office

‘Artificial Leaf’ Makes Fuel From Sunlight

Researchers led by MIT professor Daniel Nocera have produced something they’re calling an “artificial leaf”: Like living leaves, the device can turn the energy of sunlight directly into a chemical fuel that can be stored and used later as an energy source.


The 'artificial leaf,' a device that can harness sunlight to split water into hydrogen and oxygen without needing any external connections, is seen with some real leaves, which also convert the energy of sunlight directly into storable chemical form. 

Photo: Dominick Reuter

The artificial leaf — a silicon solar cell with different catalytic materials bonded onto its two sides — needs no external wires or control circuits to operate. Simply placed in a container of water and exposed to sunlight, it quickly begins to generate streams of bubbles: oxygen bubbles from one side and hydrogen bubbles from the other. If placed in a container that has a barrier to separate the two sides, the two streams of bubbles can be collected and stored, and used later to deliver power: for example, by feeding them into a fuel cell that combines them once again into water while delivering an electric current.
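For reference, the chemistry at work is ordinary water splitting and its reverse in the fuel cell (textbook stoichiometry, not specific to this paper); two hydrogen molecules are produced for every oxygen molecule, which is why the device's two streams of bubbles are unequal:

```latex
% Splitting at the leaf, driven by sunlight (photon energy h\nu):
2\,\mathrm{H_2O} \;\xrightarrow{h\nu}\; 2\,\mathrm{H_2} + \mathrm{O_2}
% Recombination in a fuel cell, releasing electrical energy:
2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \text{electrical energy}
```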

The creation of the device is described in a paper published Sept. 30 in the journal Science. Nocera, the Henry Dreyfus Professor of Energy and professor of chemistry at MIT, is the senior author; the paper was co-authored by his former student Steven Reece PhD ’07 (who now works at Sun Catalytix, a company started by Nocera to commercialize his solar-energy inventions), along with five other researchers from Sun Catalytix and MIT.

The device, Nocera explains, is made entirely of earth-abundant, inexpensive materials — mostly silicon, cobalt and nickel — and works in ordinary water. Other attempts to produce devices that could use sunlight to split water have relied on corrosive solutions or on relatively rare and expensive materials such as platinum.

The artificial leaf is a thin sheet of semiconducting silicon — the material most solar cells are made of — which turns the energy of sunlight into a flow of wireless electricity within the sheet. Bound onto one side of the silicon is a layer of a cobalt-based catalyst that releases oxygen; Nocera and his co-authors discovered this material's potential for generating fuel from sunlight in 2008. The other side of the silicon sheet is coated with a layer of a nickel-molybdenum-zinc alloy, which releases hydrogen from the water molecules.


An 'artificial leaf' made by Daniel Nocera and his team, using a silicon solar cell with novel catalyst materials bonded to its two sides, is shown in a container of water with light (simulating sunlight) shining on it. The light generates a flow of electricity that causes the water molecules, with the help of the catalysts, to split into oxygen and hydrogen, which bubble up from the two surfaces.

Video courtesy of the Nocera Lab/Sun Catalytix


“I think there’s going to be real opportunities for this idea,” Nocera says. “You can’t get more portable — you don’t need wires, it’s lightweight,” and it doesn’t require much in the way of additional equipment, other than a way of catching and storing the gases that bubble off. “You just drop it in a glass of water, and it starts splitting it,” he says.

Now that the “leaf” has been demonstrated, Nocera suggests one possible further development: tiny particles made of these materials that can split water molecules when placed in sunlight — making them more like photosynthetic algae than leaves. The advantage of that, he says, is that the small particles would have much more surface area exposed to sunlight and the water, allowing them to harness the sun’s energy more efficiently. (On the other hand, engineering a system to separate and collect the two gases would be more complicated in such a setup.)

The new device is not yet ready for commercial production, since systems to collect, store and use the gases remain to be developed. “It’s a step,” Nocera says. “It’s heading in the right direction.”

Ultimately, he sees a future in which individual homes could be equipped with solar-collection systems based on this principle: Panels on the roof could use sunlight to produce hydrogen and oxygen that would be stored in tanks, and then fed to a fuel cell whenever electricity is needed. Such systems, Nocera hopes, could be made simple and inexpensive enough so that they could be widely adopted throughout the world, including many areas that do not presently have access to reliable sources of electricity.

Professor James Barber, a biochemist from Imperial College London who was not involved in this research, says Nocera’s 2008 finding of the cobalt-based catalyst was a “major discovery,” and these latest findings “are equally as important, since now the water-splitting reaction is powered entirely by visible light using tightly coupled systems comparable with that used in natural photosynthesis. This is a major achievement, which is one more step toward developing cheap and robust technology to harvest solar energy as chemical fuel.”

Barber cautions that “there will be much work required to optimize the system, particularly in relation to the basic problem of efficiently using protons generated from the water-splitting reaction for hydrogen production.” But, he says, “there is no doubt that their achievement is a major breakthrough which will have a significant impact on the work of others dedicated to constructing light-driven catalytic systems to produce hydrogen and other solar fuels from water. This technology will advance side by side with new initiatives to improve and lower the cost of photovoltaics.”

Nocera’s ongoing research with the artificial leaf is directed toward “driving costs lower and lower,” he says, and looking at ways of improving the system’s efficiency. At present, the leaf can redirect about 2.5 percent of the energy of sunlight into hydrogen production in its wireless form; a variation using wires to connect the catalysts to the solar cell rather than bonding them together has attained 4.7 percent efficiency. (Typical commercial solar cells today have efficiencies of more than 10 percent). One question Nocera and his colleagues will be addressing is which of these configurations will be more efficient and cost-effective in the long run.

Another line of research is to explore the use of photovoltaic (solar cell) materials other than silicon — such as iron oxide, which might be even cheaper to produce. “It’s all about providing options for how you go about this,” Nocera says.

Contacts and sources:
David L. Chandler, MIT News Office

Researchers Realize High-Power, Narrowband Terahertz Source At Room Temperature

Researchers at Northwestern University have developed a simpler way to generate terahertz radiation from a single chip, a discovery that could soon allow for more rapid security screening, border protection, high-sensitivity biological/chemical analysis, agricultural inspection, and astronomical applications.

The work, headed by Manijeh Razeghi, Walter P. Murphy Professor of Electrical Engineering and Computer Science in the McCormick School of Engineering and Applied Science, was published Monday in the journal Applied Physics Letters and was presented in August at the SPIE Optics + Photonics conference in San Diego.

Terahertz radiation (wavelength range of 30 – 300 microns) can be used to see through paper, clothing, cardboard, plastic, and many other materials, without any of the health risks posed by current x-ray based techniques. This property has become extremely valuable for security screening, as it is safe to use on people and can detect metals and ceramics that might be used as weapons.
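To put that wavelength range in frequency terms (a simple f = c/λ conversion, our note rather than the article's):

```python
# Convert the quoted 30-300 micron band to frequency: f = c / wavelength.
c = 3.0e8  # speed of light, m/s

for wl_um in (30, 300):
    f_thz = c / (wl_um * 1e-6) / 1e12
    print(f"{wl_um} um -> {f_thz:.0f} THz")
# 30 um -> 10 THz and 300 um -> 1 THz, so the quoted band spans roughly
# 1-10 THz, bracketing the ~4 THz emission reported below.
```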

In addition, a scanning terahertz source can identify many types of biological or chemical compounds due to their characteristic absorption spectra in this wavelength range. Sensitivity to water content can also be utilized to study agricultural quality. Finally, through mixing with a compact coherent terahertz source, very weak terahertz signals from deep space can be detected, which may help scientists understand the formation of the universe.

Coherent terahertz radiation has historically been very difficult to generate, and the search for an easy-to-use, compact source continues today. Current terahertz sources are large, multi-component systems that may require complex vacuum electronics, external pump lasers, and/or cryogenic cooling. A single component solution without any of these limitations is highly desirable to enable next generation terahertz systems.

One possible avenue toward this goal is to create and mix two mid-infrared laser beams within a single semiconductor chip in the presence of a giant nonlinearity. This nonlinearity allows for new terahertz photons to be created within the same chip with an energy equal to the difference of the mid-infrared lasers’ energies. As mid-infrared lasers based on quantum cascade laser technology are operable at room temperature, the terahertz emission can also be demonstrated at room temperature.
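A sketch of that difference-frequency idea in numbers (the pump wavelengths below are illustrative assumptions, not values from the paper):

```python
# Difference-frequency generation: the terahertz photon energy equals the
# difference of the two mid-infrared pump photon energies, f_THz = f1 - f2.
c = 3.0e8  # speed of light, m/s

# Hypothetical mid-IR pump wavelengths, chosen only to land near the
# ~4 THz emission reported below (not taken from the paper):
lam1, lam2 = 9.3e-6, 10.5e-6  # meters

f_thz = (c / lam1 - c / lam2) / 1e12
print(f"Difference frequency: {f_thz:.1f} THz")  # ~3.7 THz
```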

Razeghi and her group at the Center for Quantum Devices have taken this basic approach a step further by addressing two key issues that have limited the usefulness of initial demonstrations. Razeghi’s group currently leads the world in high-power quantum cascade laser technology; by increasing the power and beam quality of the mid-infrared pumps, the terahertz power has been significantly increased by more than a factor of 30 to ~10 microwatts.

Additionally, the researchers have incorporated a novel dual-wavelength diffraction grating within the laser cavity to create single mode (narrow spectrum) mid-infrared sources, which in turn has led to very narrow linewidth terahertz emission near 4 terahertz. Further, due to the novel generation mechanism, the terahertz spectrum is extremely stable with respect to current and/or temperature. This could make it valuable as a local oscillator, which can be used for low light level receivers like those needed for astronomical applications.

Razeghi said her group will continue this work in hopes of reaching higher power levels.

“Our goal is to reach milliwatt power levels and incorporate tuning within the device,” Razeghi said. “Theory says that it is possible, and we have all of the tools necessary to realize this potential.”

Razeghi’s work in this area is partially supported by the Defense Advanced Research Projects Agency (DARPA), and she would like to acknowledge the interest and support of Dr. Scott Rodgers of DARPA and Dr. Tariq Manzur of the Naval Undersea Warfare Center.


Contacts and sources:

How Your Brain Reacts To Mistakes Depends On Your Mindset

“Whether you think you can or think you can’t—you’re right,” said Henry Ford. A new study, to be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, finds that people who think they can learn from their mistakes have a different brain reaction to mistakes than people who think intelligence is fixed.

“One big difference between people who think intelligence is malleable and those who think intelligence is fixed is how they respond to mistakes,” says Jason S. Moser, of Michigan State University, who collaborated on the new study with Hans S. Schroder, Carrie Heeter, Tim P. Moran, and Yu-Hao Lee. Studies have found that people who think intelligence is malleable say things like, “When the going gets tough, I put in more effort” or “If I make a mistake, I try to learn and figure it out.” On the other hand, people who think that they can’t get smarter will not take opportunities to learn from their mistakes. This can be a problem in school, for example; a student who thinks her intelligence is fixed will think it’s not worth bothering to try harder after she fails a test.

For this study, Moser and his colleagues gave participants a task that is easy to make a mistake on. They were supposed to identify the middle letter of a five-letter series like “MMMMM” or “NNMNN.” Sometimes the middle letter was the same as the other four, and sometimes it was different. “It’s pretty simple, doing the same thing over and over, but the mind can’t help it; it just kind of zones out from time to time,” Moser says. That’s when people make mistakes—and they notice it immediately, and feel stupid.

While doing the task, each participant wore a cap that records electrical activity in the brain. When someone makes a mistake, the brain produces two quick signals: an initial response that indicates something has gone awry—Moser calls it the “oh crap” response—and a second that indicates the person is consciously aware of the mistake and is trying to right the wrong. Both signals occur within a quarter of a second of the mistake. After the experiment, the researchers found out whether participants believed they could learn from their mistakes or not.

People who think they can learn from their mistakes did better after making a mistake – in other words, they successfully bounced back after an error. Their brains also reacted differently, producing a bigger second signal, the one that says, “I see that I’ve made a mistake, so I should pay more attention,” Moser says.

The research shows that these people are different on a fundamental level, Moser says. “This might help us understand why exactly the two types of individuals show different behaviors after mistakes.” People who think they can learn from their mistakes have brains that are tuned to pay more attention to mistakes, he says. This research could help in training people to believe that they can work harder and learn more, by showing how their brain is reacting to mistakes.


Contacts and sources:

Humans And Sharks Share Immune System Feature

A central element of the immune system has remained constant through more than 400 million years of evolution, according to new research at National Jewish Health. In the September 29, 2011, online version of the journal Immunity, the researchers report that T-cell receptors from mice continue to function even when pieces of shark, frog and trout receptors are substituted in. The function of the chimeric receptors depends on a few crucial amino acids, found also in humans, that help the T-cell receptor bind to MHC molecules presenting antigens.

"These findings prove a hypothesis first proposed 40 years ago,” said senior authorLaurent Gapin, PhD, associate professor of immunology in the Integrated Department of Immunology at National Jewish Health and the University of Colorado Denver. “Even though mammals, amphibians and cartilaginous fish last shared a common ancestor more than 400 million years ago, they continue to share an element of their T-cell receptors, indicating that the T cell-MHC interaction arose early in the evolution of the immune system, and is central to its function.”

Evolution of the adaptive immune system
Credit: National Jewish Health

The T cell serves as the sentinel, manager and enforcer of the adaptive immune response. It relies on its receptor, the T-cell receptor, to recognize foreign material and identify the target of the immune-system attack. When the receptor binds to small fragments of foreign organisms, called antigens, the T cell becomes activated, proliferates and initiates an attack against any molecule or organism containing that antigen.

T cells, however, cannot recognize free-floating antigens. They recognize antigens only when they are held by MHC molecules on the surfaces of other cells, much as a hotdog bun (MHC molecule) holds a hotdog (antigen). This interaction between the T cell and MHC molecules is crucial for immune defense and organ transplants. Compatibility of transplanted organs is determined by the similarity of different people’s MHC molecules. Nonetheless, this interaction has long mystified scientists and is poorly understood.

In 1971 future Nobel Laureate Niels K. Jerne proposed that evolution might have selected for genes that specifically recognize MHC molecules. Evidence discovered later suggested T cells’ affinity for MHC molecules might instead be the product of development that occurs as T cells mature in the thymus. The question remained unanswered for 40 years.

The T-cell receptor is constructed by piecing together several peptides among dozens that are available, plus a few random amino acid sequences. This combination is what allows the immune system to generate an almost infinite variety of receptors capable of recognizing almost any potential invader. The receptor has six loops that are the primary binding points for the antigen-MHC complex. One of those loops, known as CDR2, frequently binds the MHC molecule.

Searching for possible similarities in T-cell receptors of different animals, the researchers compared the amino acid sequences of one segment of the T-cell receptor containing the CDR2 loop. Although the segments contained less than 30 percent of the same amino acids, two specific amino acids were the same in human, mouse, frog, trout and shark T-cell receptors. Those appeared to be amino acids specifically involved in binding to the MHC molecule.

“The evolutionary inheritance of this pattern goes all the way from sharks to humans, which last shared a common ancestor 450 million years ago,” said co-author Philippa Marrack, PhD.

The researchers then inserted segments containing the CDR2 loop from frog, trout and shark T-cell receptors into mouse cells. These chimeric T-cell receptors recognized antigen bound to a mouse MHC molecule.

Since sections of frog, trout and shark T-cell receptors functioned perfectly well in mouse T-cell receptors, the experiments suggested that the T-cell’s ability to see an antigen only when complexed with an MHC molecule first arose more than 400 million years ago, when all four animals shared a common ancestor.

Sexting Driven By Peer Pressure

Both young men and women experience peer pressure to share sexual images via the new phenomenon of ‘sexting’, preliminary findings from a University of Melbourne study have found.

‘Sexting’ is the practice of sending and receiving sexual images on a mobile phone.

Credit: Wikipedia

The study is one of the first academic investigations into ‘sexting’ from a young person’s perspective in Australia. The findings were presented to the 2011 Australasian Sexual Health Conference in Canberra.

Ms Shelley Walker from the Primary Care Research Unit in the Department of General Practice at the University of Melbourne said the study not only highlighted the pressure young people experienced to engage in sexting, it also revealed the importance of their voice in understanding and developing responses to prevent and deal with the problem.

“The phenomenon has become a focus of much media reporting; however research regarding the issue is in its infancy, and the voice of young people is missing from this discussion and debate,” she said.

The qualitative study involved individual interviews with 33 young people (15 male and 18 female) aged 15 – 20 years.

Preliminary findings revealed young people believed a highly sexualized media culture bombarded them with sexualized images and created pressure to engage in sexting.

Young people discussed the pressure boys place on each other to have girls’ photos on their phones and computers. They said if boys refrained from engaging in the activity they were labeled ‘gay’ or could be ostracized from the peer group.

Both genders talked about the pressure girls experienced from boyfriends or strangers to reciprocate in exchanging sexual images.

Some young women talked about the expectation (or more subtle pressure) to be involved in sexting, simply as a result of having viewed images of girls they know.

Both young men and women talked about being sent or shown images or videos, sometimes of people they knew or of pornography, without actually having agreed to look at them first.

Ms Walker said ‘sexting’ is a rapidly changing problem as young people keep up with new technologies such as video and Internet access on mobile phones.

The Australian Communication & Media Authority reported in 2010 that around 90 percent of young people aged 15-17 owned mobile phones.

“Our study reveals how complex and ever-changing the phenomenon of ‘sexting’ is and that continued meaningful dialogue is needed to address and prevent the negative consequences of sexting for young people,” she said.


Contacts and sources:

Reefs Recovered Faster After Mass Extinction Than First Thought

Harsh living conditions, caused by major fluctuations in carbon content and sea levels, overacidification and oxygen deficiency in the seas, triggered the largest mass extinction of all time at the end of the Permian period 252 million years ago. Life on Earth was also anything but easy after the obliteration of over 90 percent of all species.

These are reef-forming sponges from the early Triassic era.
Credit: UZH

Throughout the entire Early Triassic era, metazoan-dominated reefs were replaced by microbial deposits. Researchers had always assumed it took the Earth as long as five million years to recover from this species collapse. Now, however, an international team, including the paleontologist Hugo Bucher from the University of Zurich and his team of researchers, has proven that reefs already existed again in the southwest of what is now the USA 1.5 million years after the mass extinction. These were dominated by metazoan organisms such as sponges, serpulids and other living creatures, the researchers report in Nature Geoscience.

Growth thanks to new reef-forming metazoan organisms

Metazoan-dominated reefs already developed during the Early Triassic, much earlier than was previously assumed. As soon as the environmental conditions more or less returned to normal, reefs began to grow again, built by metazoan organisms that had played only a secondary role in reefs up to then. "This shows that, after the extinction of the dominant reef builders, metazoans were able to form reef ecosystems much sooner than was previously thought," says Hugo Bucher, summing up the new discovery.

Contacts and sources:
Citation: Arnaud Brayard, Emmanuelle Vennin, Nicolas Olivier, Kevin G. Bylund, Jim Jenks, Daniel A. Stephen, Hugo Bucher, Richard Hofmann, Nicolas Goudemand and Gilles Escarguel: Transient metazoan reefs in the aftermath of the end-Permian mass extinction, in: Nature Geoscience, 18 September 2011, DOI: 10.1038/NGEO1264


Glucosamine-Like Supplement Suppresses Multiple Sclerosis Attacks

A glucosamine-like dietary supplement suppresses the damaging autoimmune response seen in multiple sclerosis, according to a UC Irvine study.

UCI’s Dr. Michael Demetriou, Ani Grigorian and others found that oral N-acetylglucosamine (GlcNAc), which is similar to but more effective than the widely available glucosamine, inhibited the growth and function of the abnormal T-cells that, in MS, incorrectly direct the immune system to attack and break down the central nervous system tissue that insulates nerves.

Study results appear online in The Journal of Biological Chemistry.

Earlier this year, Demetriou and colleagues discovered that environmental and inherited risk factors associated with MS, previously poorly understood and not known to be connected, converge to affect how specific sugars are added to the proteins regulating the disease.

“This sugar-based supplement corrects a genetic defect that induces cells to attack the body in MS,” said Demetriou, associate professor of neurology and microbiology & molecular genetics, “making metabolic therapy a rational approach that differs significantly from currently available treatments.”

Virtually all proteins on the surface of cells, including immune cells such as T-cells, are modified by complex sugar molecules of variable sizes and composition. Recent studies have linked changes in these sugars to T-cell hyperactivity and autoimmune disease.

In mouse models of MS-like autoimmune disease, Demetriou and his team found that GlcNAc given orally to mice with leg weakness suppressed T-cell hyperactivity and the autoimmune response by increasing sugar modifications to the T-cell proteins, thereby reversing the progression to paralysis.

The study comes on the heels of others showing the potential of GlcNAc in humans. One reported that eight of 12 children with treatment-resistant autoimmune inflammatory bowel disease improved significantly after two years of GlcNAc therapy. No serious adverse side effects were noted.

“Together, these findings identify metabolic therapy using dietary supplements such as GlcNAc as a possible treatment for autoimmune diseases,” said Demetriou, associate director of UCI’s Multiple Sclerosis Research Center. “Excitement about this strategy stems from the novel mechanism for affecting T-cell function and autoimmunity — the targeting of a molecular defect promoting disease — and its availability and simplicity.”

He cautioned that more human studies are required to assess the full potential of the approach. GlcNAc supplements are available over the counter and differ from commercially popular glucosamine. People who purchase GlcNAc should consult with their doctors before use.

Lindsey Araujo and Dylan Place of UCI and Nandita N. Naidu and Biswa Choudhury of UC San Diego also participated in the research, which was funded by the National Institutes of Health and the National Multiple Sclerosis Society.

Contacts and sources:
Tom Vasich
University of California - Irvine

10 Reasons I Love to Hate My Cell Phone


Story by Selena Routh

My cell phone has virtually become part of my personal attire. It is with me nearly twenty-four hours a day. Most of the time it is attached to my hip; other times it is on my bedside table, next to my computer, or riding on the console of my pickup truck. I love having it available, but I don’t love everything about it. Here are 10 reasons I love to hate my cell phone.

  1. Dishonest Reception Bars. I look at the little monitor screen, and I can see a full set of bars glowing in the upper left-hand corner. When I try to send a text message or make a call, however, nothing will go through. Later, when the phone rings and I pick it up to see who it is, I can see one faintly glowing bar in that corner. What’s that all about? Are the bars meaningless, or does the phone just try harder when it knows my ex is calling me?
  2. Dropped Calls. Of course, everyone’s favorite reason to complain about cell phones is the infamous dropped call. There is little to say about it that hasn’t been said already.
  3. Dropped Call Indicator-Screech. Okay, dropped calls are bad enough in themselves, but I hate being informed of them by what amounts to the sonic version of an ice pick shoved in my ear. It’s almost made worse by the fact that it is inconsistent. About seventy percent of the time, there is no indication other than dead silence on the line. That’s just enough to lull me into forgetfulness, setting me up for the next surprise screech.
  4. Battery Charge Games. My cell phone also likes to play little games with the battery charge indicator. On the way out the door in the morning, the battery appears fully charged. By the time I arrive at work, five minutes later, it shows a half-charge. What happened? Did I absorb all of that energy through my hip? Why don’t I feel more energetic?
  5. Delayed Text Messages. So, what happened to the text message that my wife sent on Thursday afternoon, between then and three a.m. Sunday when it announced its arrival in my phone? Did it stop for the text message version of a coffee break on a communications satellite? What do text messages discuss when they’re hanging around waiting for an inconvenient moment to finish their journeys?
  6. Nappus Interruptus. At my last calculation, approximately eighty-three percent of my naps are interrupted by a cell phone. I know darn well that it has figured out how to send random texts requesting a call, and to set its own alarm.
  7. Hide & Seek. After careful consideration, I’ve decided that my cell phone has a self-propulsion unit that was not mentioned in the owner’s manual. In addition to the games it plays with reception and battery-charge indicators, it loves to play hide and seek. When it is not attached to my hip, I always take care to place it in an easy-to-remember location. Yet each time it rings, a panicky search ensues, until I find that it has crawled under a pile of mail, again.
  8. Promises Broken. When I acquired my first cell phone, I was promised that it would make me both more efficient and more productive in my work. I’m still waiting.
  9. The Bill. Have you ever tried to read and interpret a cell phone monthly bill? All I want to know is how much to pay and how it got to be that much. As best I can tell, it got to be that much because somebody threw darts at a numbers grid.
  10. The Contract. Is there anything more one-sided than a cell phone service contract? It tells you what you are obligated to do for the service provider, and what they are NOT obligated to do for you. It also tells you that they can change the terms in a moment and on a whim, while any changes that you want to make require personal counseling and a new two-year contract.
You may have one or two, or even ten completely different reasons to love to hate your cell phone. These are mine, and I love/hate them dearly.

Contacts and sources:

Astronomers Reveal Supernova Factory

A team led by astronomers at Chalmers and Onsala Space Observatory has detected seven previously unknown supernovae in a galaxy 250 million light-years away. Never before have so many supernovae been discovered at the same time in the same galaxy. The results have been accepted for publication in the Astrophysical Journal.

The discovery proves what astronomers have long believed: that the galaxies which are the universe's most efficient star-factories are also supernova factories.

The astronomers used a worldwide network of radio telescopes in five countries, including Sweden, to create extremely sharp images of the galaxy Arp 220. They observed around 40 radio sources in the galaxy's center. These radio sources are hidden behind thick layers of dust and gas and are invisible to ordinary telescopes. To uncover the nature of these radio sources, the team made measurements at different radio wavelengths and watched how the sources changed over several years.

“With all the data in place, we can now be certain that all seven of these sources are supernovae: stars that exploded in the last 60 years,” says Fabien Batejat, main author of the article about the discovery.

The tally is unprecedented for a single galaxy, yet it is consistent with how fast stars are forming in Arp 220.

Galaxy Arp 220 (main image, taken with the Hubble Space Telescope) with some of its newly discovered supernovae (inset, taken with Global VLBI). The inset image is 250 light years across.

Credit: NASA, ESA, Hubble Heritage Team, Chalmers

“In Arp 220, we see far more supernovae than in our galaxy. We estimate that a star explodes in Arp 220 once every quarter. In the Milky Way, there is only one supernova per century,” says Rodrigo Parra, astronomer at the European Southern Observatory in Chile and member of the team.
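
Taking the quoted rates at face value, a quick back-of-the-envelope comparison (ours, not a figure from the study) shows the contrast: one supernova per quarter is 4 per year, and one per century is 0.01 per year, so

\[ \frac{4\ \mathrm{yr^{-1}}}{0.01\ \mathrm{yr^{-1}}} = 400 , \]

meaning Arp 220 produces supernovae at roughly 400 times the Milky Way's rate.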

John Conway is professor of observational radio astronomy at Chalmers and deputy director of Onsala Space Observatory.

“Arp 220 is well-known as a place where star formation is very efficient. Now we have been able to show that star factories like this are also supernova factories,” he says.

The radio measurements have also given researchers insight into how radio waves are generated in supernovae and their remnants.

“Our measurements show that a supernova’s own magnetic field is what gives rise to its radio emission, not the magnetic fields in the galaxy around it,” says Fabien Batejat.

Contacts and sources:
Chalmers University of Technology

Citation:
The results will be published in the October 20 issue of the Astrophysical Journal.
Title: Resolution of the Compact Radio Continuum Sources in Arp220.
Authors: Fabien Batejat, John E. Conway, Rossa Hurley, Rodrigo Parra, Philip J. Diamond, Colin J. Lonsdale, Carol J. Lonsdale.

The article is already available at http://arxiv.org/abs/1109.6443.

Cosmic Weight Watching Reveals Black Hole-Galaxy History

Using state-of-the-art technology and sophisticated data analysis tools, a team of astronomers from the Max Planck Institute for Astronomy has developed a new and powerful technique to directly determine the mass of an active galaxy at a distance of nearly 9 billion light-years from Earth.

Now the team, led by Dr Katherine Inskip, has for the first time succeeded in directly "weighing" both a galaxy and its central black hole at such a great distance, using a sophisticated and novel method. The galaxy, known to astronomers by the number J090543.56+043347.3 (which encodes the galaxy's position in the sky), lies 8.8 billion light-years from Earth (redshift z = 1.3).

Colours in this image of the galaxy J090543.56+043347.3 indicate whether gas is moving towards us or away from us, and at what speed; from this information, the researchers reconstructed the galaxy's dynamical mass. The star shape marks the position of the galaxy's active nucleus; the surrounding contour lines indicate brightness levels of the light emitted by the nucleus. Dark blue pixels indicate gas moving towards us at a speed of 250 km/s, dark red pixels gas moving away from us at 350 km/s.
Credit: K. J. Inskip/MPIA

The researchers' pioneering method promises a new approach for studying the co-evolution of galaxies and their central black holes. First results indicate that, for galaxies, most of cosmic history was not a time of sweeping changes.

One of the most intriguing developments in astronomy over the last few decades is the realization that not only do most galaxies contain central black holes of gigantic size, but the mass of these central black holes is also directly related to the mass of their host galaxies. This correlation is predicted by the current standard model of galaxy evolution, the so-called hierarchical model, as astronomers from the Max Planck Institute for Astronomy have recently shown.

When astronomers look out to greater and greater distances, they look further and further into the past. Investigating this black hole-galaxy mass correlation at different distances, and thus at different times in cosmic history, allows astronomers to study galaxy and black hole evolution in action.

For galaxies further away than 5 billion light-years (corresponding to a redshift z > 0.5), such studies face considerable difficulties. The typical objects of study are so-called active galaxies, and there are well-established methods for estimating the mass of such a galaxy's central black hole. It is the mass of the galaxy itself that poses the challenge: at such distances, standard methods of estimating a galaxy's mass become exceedingly uncertain or fail altogether.

Measuring a distant galaxy's mass directly is much easier said than done. In order to secure their measurement, the cosmic weight-watchers had to pull out all the stops of observational astronomy before finally obtaining a reliable value for the dynamical mass of J090543.56+043347.3. Combining this result with the mass of the galaxy's central black hole, which the researchers measured from the same dataset, yields exactly what would be expected for a present-day galaxy. Apparently, nothing major has changed between then and now: at least out to this distance, 9 billion years into the past, the correlation between galaxies and their black holes appears to be the same as for their modern-day counterparts.

The astronomers succeeded in directly measuring the so-called dynamical mass of this active galaxy. The key idea is the following: a galaxy's stars and gas clouds orbit the galactic centre; our Sun, for instance, orbits the centre of the Milky Way galaxy once every 250 million years. The stars' orbital speeds are a direct function of the galaxy's mass distribution: determine the orbital speeds and you can determine the galaxy's total mass.

In ordinary images, such as this one from the Sloan Digital Sky Survey, J090543.56+043347.3 appears as a featureless blob of light.
Credit: Sloan Digital Sky Survey
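
The press release does not spell out the formula, but the idea can be sketched with the standard relation for circular orbits (a back-of-the-envelope illustration; the team's actual analysis models the full two-dimensional velocity field). Equating the centripetal acceleration of orbiting gas with the gravitational pull of the mass M(<r) enclosed within the orbital radius r gives

\[ \frac{v^2}{r} = \frac{G\,M(<r)}{r^2} \quad\Longrightarrow\quad M(<r) = \frac{v^2\,r}{G} . \]

For gas orbiting at the observed v ≈ 300 km/s at an assumed radius of r ≈ 5 kpc (an illustrative value, not one quoted in the article), this yields M ≈ 10^11 solar masses, comparable to a large present-day galaxy.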

Inskip and her colleagues are already hard at work expanding their novel kind of analysis to a further 15 galaxies. If those results confirm the conclusions drawn from J090543.56+043347.3, it would indicate that, over the past 9 billion years – more than half of the age of our Universe – most galaxies have lived comparatively quiet lives, subject to only very limited and slow change.

Source: Max-Planck-Gesellschaft


Creating Bright Ideas For Lasting Innovations: National Labs Leading Charge On Building Better Batteries

Teams at two of the Energy Department's laboratories are making headway on projects that could enable a new lithium battery that charges faster, lasts longer, runs more safely, and might arrive on the market in the not-too-distant future. Lithium batteries are used in a variety of everyday products, from laptops to cell phones, but an improved battery could also significantly increase the charge capacity of hybrid electric vehicles and of the energy storage systems used with wind and solar power generators.

At left, the traditional approach to composite anodes using silicon (blue spheres) for higher energy capacity has a polymer binder such as PVDF (light brown) plus added particles of carbon to conduct electricity (dark brown spheres). Silicon swells and shrinks while acquiring and releasing lithium ions, and repeated swelling and shrinking eventually break contacts among the conducting carbon particles. At right, the new Berkeley Lab polymer (purple) is itself conductive and continues to bind tightly to the silicon particles despite repeated swelling and shrinking.
Image courtesy of Lawrence Berkeley National Laboratory

Researchers at Oak Ridge National Laboratory (ORNL) and Lawrence Berkeley National Laboratory (Berkeley Lab) focused on a key battery component: the anode, the electrode through which electrons leave the cell during discharge. Most current commercial lithium batteries have anodes made of graphite, a form of carbon. Scientists at ORNL instead incorporated a special form of the compound titanium dioxide into the anode, and they found significant improvements.

At the same level of current, it takes the new ORNL battery just six minutes to be 50 percent charged, while a graphite-based lithium-ion battery would see a mere 10 percent increase in the same timeframe. The new ORNL battery also outperforms faster-charging lithium titanate batteries (which use tiny particles of lithium titanate in the anode in place of graphite to speed charging) and, unlike such batteries, has a sloping discharge voltage that is good for controlling the state of charge.
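
In battery shorthand, those figures can be translated into approximate average charging rates (a rough calculation of ours, not one from the laboratories). A rate of 1C is defined as the current that charges the cell's full capacity in one hour; reaching 50 percent of capacity in six minutes (0.1 hour) therefore corresponds to an average of

\[ \frac{0.5\,C_{\mathrm{full}}}{0.1\ \mathrm{h}} = 5C , \]

while the graphite cell's 10 percent in the same six minutes works out to an average of about 1C, a factor of five slower.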

ORNL's new battery has the potential to be used in a wide range of heavy-duty applications, especially where increased strength and safety are at a premium, such as hybrid electric vehicles, power grids and the energy storage systems of wind and solar power generators. Additional research needs to be performed, but ORNL scientists believe that if titanium dioxide proves scalable in batteries, such batteries could be on the market within five years.

Berkeley Lab researchers are taking a different approach. They designed a new anode made from a tailored polymer, a material made of millions of repeating units, that itself conducts electricity. The polymer embeds silicon particles, which bind far more lithium ions than graphite anodes can. These improvements give the battery a much greater capacity (the ability to store much more energy) than current designs; even better, the anode maintains that increased capacity after hundreds of charge-discharge cycles.

The better anode built by Berkeley Lab could help lower the cost and extend the range of electric cars. Researchers say the anode can be built at a comparatively low cost, and in a way that is already compatible with established manufacturing technologies. And they suggest that the tailored polymer that makes the battery better could see use in a wide range of other products, too.

Thanks to the Energy Department-supported research, bright ideas are becoming better batteries…and hopefully, a brilliant future.

The research at ORNL was supported by the Energy Department's Office of Science and ORNL's Laboratory Directed Research and Development program. The research at Berkeley Lab was supported by the Department's Office of Energy Efficiency and Renewable Energy, with additional research and facilities support from the Energy Department's Office of Science.

Contacts and sources:
Story by Charles Rousseaux, Senior Writer

Old School Networking 101: Before Social Media

Let’s begin with some definitions.

What is networking? Well, it’s pretty simple, actually. Networking is nothing more than making and nurturing connections between people. We all do it, though some are more adept at it than others. Usually, a network is defined as a group of connections through which people socialize, share resources, and enhance productivity.

Now, define “Old School.” The last three words of the title imply a time before the advent of web sites created for the purpose of networking. But how far back should I go? I’ll go all the way back, and use my own personal history to give a series of little snapshots.

1960s–70s

As a child, I carried my network in my head. It consisted of the people in my family and extended family, neighbors, and some people in our small town with whom I had regular contact. Memory and close interaction were sufficient to maintain my network up until high school. That was when I started keeping a notebook with the names, addresses, and phone numbers of people I wanted to stay in touch with in the future. I graduated from high school in the mid-1970s, and my network continued to grow.

1980s

By 1980 I had begun my career, and my social network began to acquire professional connections. Most of the connections were still stored only in my memory, but those that were useful yet not used regularly were kept in a “little black book” and on a Rolodex. The black book was simply a notebook with sections based on an alphabetical listing of people’s last names. A Rolodex, for those of you who are unfamiliar with the term, was a storage system for business cards that could be organized by name or business category.

Actual business and social connections were maintained through personal contact, telephone conversations, and the occasional letter sent or received by US mail. Much effort went into scheduling coffee and lunch meetings and other social activities that nurtured face-to-face time and opportunities for personal discussion. There were then, as there are now, secondary and tertiary connections in my network: people I had never met or spoken with, connected to me through my primary contacts. If I wanted to reach one of these, I would contact the person through whom I knew of them and ask for an introduction.

1990s

This was pretty much standard throughout the 1980s and 1990s, with the introduction of personal computers making it slightly easier to keep track of network connections. Computers brought databases that could store and correlate network connections, meaning that even those of us who are not naturally adept at organizing information could keep track of larger networks with a little effort. Through the 1990s, both cell phones and email became common and made networks even more accessible and useful. Because we carried our phones with us, we could reach each other more easily, and email's ability to deliver the same information to many people at once brought us right to the edge of today's proliferation of social media and personal technology.

Networking has been made easier, though sometimes a little less personal, by the technologies and media that have grown up in the last few decades, but it has always been with us. From the beginning of civilization to now and beyond, networking is simply part of who we are and how we get things done. Nowadays we just happen to use LinkedIn, Facebook, and Twitter to make it easier than ever to create, extend, and keep in touch with our networks.

Contacts and sources:
Story by Melanie Slaugh