Unseen Is Free

Friday, January 31, 2014

Advanced Autonomous Convoy Demonstrated By U.S. Army and Lockheed Martin

The U.S. Army Tank-Automotive Research, Development and Engineering Center (TARDEC) and Lockheed Martin [NYSE: LMT] have demonstrated the ability of fully autonomous convoys to operate in urban environments with multiple vehicles of different models.

The demonstration earlier this month at Fort Hood, Texas, was part of the Army and Marine Corps’ Autonomous Mobility Appliqué System (AMAS) program, and marked the completion of the program’s Capabilities Advancement Demonstration (CAD). The test involved driverless tactical vehicles navigating hazards and obstacles such as road intersections, oncoming traffic, stalled and passing vehicles, pedestrians and traffic circles in both urban and rural test areas.

“The AMAS CAD hardware and software performed exactly as designed, and dealt successfully with all of the real-world obstacles that a real-world convoy would encounter,” said David Simon, AMAS program manager for Lockheed Martin Missiles and Fire Control.

The AMAS hardware and software are designed to automate the driving task on current tactical vehicles. The Unmanned Mission Module part of AMAS, which includes a high performance LIDAR sensor, a second GPS receiver and additional algorithms, is installed as a kit and can be used on virtually any military vehicle. In the CAD demonstration, the kit was integrated onto the Army’s M915 trucks and the Palletized Loading System (PLS) vehicle.

“It was very important that we had representation from the technology, acquisition and user bases, along with our industry partners, here at the CAD,” said TARDEC technical manager Bernard Theisen. “We are very pleased with the results of the demonstration, because it adds substantial weight to the Army’s determination to get robotic systems into the hands of the warfighter.” 


Credit: Lockheed Martin

Senior Army leaders representing the Army Materiel Command (AMC), the Army Capabilities Integration Center (ARCIC), the Combined Arms Support Command (CASCOM) and TARDEC were present to witness the demonstration. The AMAS CAD was jointly funded by ARCIC and Lockheed Martin. While the AMAS Joint Capability Technology Demonstration (JCTD) is aimed at augmenting the safety and security of human drivers in a convoy mission, the CAD was aimed at completely removing the Soldier from the cab.

For more than three decades, Lockheed Martin has applied its systems-integration expertise to a wide range of successful ground vehicles for U.S. and allied forces worldwide. The company’s products include the combat-proven Multiple Launch Rocket System (MLRS) M270-series and High Mobility Artillery Rocket System (HIMARS) mobile launchers, Havoc 8x8, Common Vehicle, Light Armored Vehicle-Command and Control, Warrior Capability Sustainment Programme, Joint Light Tactical Vehicle (JLTV) and pioneering unmanned platforms such as the Squad Mission Support System (SMSS).

Headquartered in Bethesda, Md., Lockheed Martin is a global security and aerospace company that employs approximately 115,000 people worldwide and is principally engaged in the research, design, development, manufacture, integration and sustainment of advanced technology systems, products and services. The Corporation’s net sales for 2013 were $45.4 billion.


Contacts and sources:
Lockheed Martin 

Study Finds More Than A Third Of Women Have Hot Flashes 10 Years After Menopause

A team of researchers from the Perelman School of Medicine at the University of Pennsylvania has found that moderate to severe hot flashes continue, on average, for nearly 5 years after menopause, and more than a third of women experience moderate/severe hot flashes for 10 years or more after menopause. Current guidelines recommend that hormone therapy, the primary medical treatment for hot flashes, not continue for more than 5 years. However, in the new study published online this week in the journal Menopause, the authors write that “empirical evidence supporting the recommended 3- to 5-year hormone therapy for management of hot flashes is lacking.”

File: PikiWiki Israel 15323 "Menopause", Kibbutz Hagoshrim
Credit: Wikimedia Commons 

Hot flashes are episodes of intense radiating heat experienced by many women around the time of menopause. They can result in discomfort, embarrassment, and disruption of sleep. Changing hormone levels are believed to cause hot flashes and other menopausal symptoms such as insomnia, fatigue, memory and concentration problems, anxiety, irritability, and joint and muscle pain. In hormone therapy, medications containing female hormones replace the ones the body stops making during menopause. While hormone replacement therapy (HRT) is considered the most effective treatment for hot flashes, it is not appropriate for all women. In addition, concerns about health hazards linked to HRT have made some doctors less likely to prescribe it, or to adhere strictly to recommended duration guidelines.

“Our findings point to the importance of individualized treatments that take into account each woman’s risks and benefits when selecting hormone or non-hormone therapy for menopausal symptoms,” said the study’s lead author, Ellen W. Freeman, PhD, research professor in the department of Obstetrics and Gynecology at Penn Medicine. “While leading non-hormone therapies such as Paxil or escitalopram may provide some relief of menopausal symptoms for some women, for others, they may not be as effective as hormone-based therapy.”

The study evaluated 255 women in the Penn Ovarian Aging Study who reached natural menopause over a 16-year period (1996-2012). The results indicate that 80 percent (203) reported moderate/severe hot flashes, 17 percent (44) had only mild hot flashes, and three percent (8) reported no hot flashes.

In addition, obese white women and African American women (both obese and non-obese) had the greatest risk of moderate/severe hot flashes during the period studied, whereas non-obese white women had the lowest risk. The increased risk of hot flashes in obese women has previously been associated with lower levels of estradiol (the most potent estrogen produced by women’s bodies) before menopause, but the new finding that non-obese African-American women also have a greater risk of hot flashes remains unexplained. An earlier report from the Study of Women’s Health Across the Nation indicated that African-American women may be more likely to report hot flashes and also have greater symptom sensitivity, suggesting that cultural differences may affect hot flash reporting, but further evidence is needed.

The Penn study also found a 34 percent lower risk of hot flashes among women with education beyond high school, a finding that researchers say also calls for additional study.

In addition to Freeman, other Penn co-authors are Mary D. Sammel, ScD, from the Center for Clinical Epidemiology and Biostatistics, and Richard J. Sanders.

This study was supported by National Institutes of Health grants R01 AG12745 and UL1TR000003.





Contacts and sources:
Katie Delach
University of Pennsylvania School of Medicine

Thursday, January 30, 2014

Mysteries In The Childhood Of The Universe

It has long puzzled scientists that enormously massive galaxies, which were already old and no longer forming new stars, existed in the very early universe, approx. 3 billion years after the Big Bang. Now new research from the Niels Bohr Institute, among others, shows that these massive galaxies were formed by explosive star formation that was set in motion by the collision of galaxies a few billion years after the Big Bang. The results are published in the scientific journal Astrophysical Journal.

This graphic compares the size of the extremely compact dead galaxies in the early universe with the size of our own galaxy, the Milky Way. The two galaxy types have approximately equal amounts of stars, which means that the density of stars in the compact galaxies is more than 10 times higher than in the Milky Way. The researchers have now discovered how these extreme galaxies formed.

Credit: Graphic courtesy of NASA, European Space Agency, and S. Toft and A. Feild

Galaxies are giant collections of stars, gas and dark matter. The smallest galaxies contain a few million stars, while the largest can contain several hundred billion stars. The first stars emerged in the very early universe, approx. 200 million years after the Big Bang, from the gases hydrogen and helium. Gas is the raw material used to form stars. Giant clouds of gas and dust contract until the gas is so compact that the pressure heats the matter and glowing gas balls form: new stars are born. The stars are collected in galaxies, the first of which were a kind of baby galaxies. As long as there is gas in the galaxy, new stars are being formed.

Mysteries in the childhood of the universe

The astronomers' theory is therefore that the structure of the universe was built by baby galaxies gradually growing larger and more massive by constantly forming new stars and by colliding with neighbouring galaxies to form new, larger galaxies. The largest galaxies in today's universe were therefore believed to have been under construction throughout the history of the universe.

This graphic shows the evolutionary sequence in the growth of massive elliptical galaxies over 13 billion years, as gleaned from space-based and ground-based telescopic observations. The growth of this class of galaxies is driven by rapid star formation in the so-called SMG galaxies (submillimetre galaxies) and mergers with other galaxies.

Credit: Graphic courtesy of NASA, European Space Agency, and S. Toft and A. Feild

"That is why it surprised us that we already when the universe was only 3 billion years old, found galaxies that were just as massive as today's large spiral galaxies and the largest elliptical galaxies, which are the giants in the local universe. Even more surprisingly, the stars in these early galaxies were squeezed into a very small area, so the size of the galaxies were three times smaller than similar mass galaxies today. This means that the density of stars was 10 times greater. Furthermore, the galaxies were already dead, so they were no longer forming new stars. It was a great mystery," explains Sune Toft, Dark Cosmology Centre at the Niels Bohr Institute at the University of Copenhagen.

The extremely massive and compact galaxies were not flattened spiral galaxies where stars and gas rotate around the centre. Rather, they resembled elliptical galaxies where stars move more hither and thither and where the gas for new star formation has been used up. But how could the galaxies become so massive and so burnt out so early? How were they formed?

Solving the mystery

To find out what happened, Sune Toft had to look even further back in time. Based on the ages of the galaxies, he knew that they had to have formed very early in the history of the universe, but at that point there was simply not enough time for the galaxies to have grown so massive through normal star formation. He had a theory that the massive galaxies were formed by the fusion of smaller galaxies, but that alone could not explain how they had become so massive so quickly and were already dead. The theory was therefore, that there must have been some especially extreme galaxies in the formation process.

"We studied the galaxies that existed when the universe was between 1 and 2 billion years old. My theory that it must have been some galaxies with very specific properties that were part of the formation process made me focus on the special SMG galaxies, which are dominated by intense stare formation hidden under a thick blanket of dust," explains Sune Toft.

He explains that when such gas-rich galaxies merge, all of the gas is driven into the centre of the system where it ignites an explosion of new star formation. A lot of stars are formed in the centre and the galaxy quickly becomes very compact. But with the explosive star formation, the gas to form new stars is also used up extremely quickly and then you get a dead galaxy.

"I discovered that there was a direct evolutionary link between two of the most extreme galaxy types we have in the universe – the most distant and most intense star forming galaxies which are formed shortly after the Big Bang – and the extremely compact dead galaxies we see 1-2 billion years later," says Sune Toft.

The new research is a breakthrough in discovering the formation process of the enormously massive and dead galaxies in the early universe.









Contacts and sources:
Gertie Skaarup
University of Copenhagen - Niels Bohr Institute

Nearest Brown Dwarf Clouds Charted For First Ever Weather Map

ESO's Very Large Telescope has been used to create the first ever map of the weather on the surface of the nearest brown dwarf to Earth. An international team has made a chart of the dark and light features on WISE J104915.57-531906.1B, which is informally known as Luhman 16B and is one of two recently discovered brown dwarfs forming a pair only six light-years from the Sun. The new results are being published in the 30 January 2014 issue of the journal Nature.

ESO's Very Large Telescope has been used to create the first ever map of the weather on the surface of the nearest brown dwarf to Earth. An international team has made a chart of the dark and light features on WISE J104915.57-531906.1B, which is informally known as Luhman 16B and is one of two recently discovered brown dwarfs forming a pair only six light-years from the Sun. The figure shows the object at six equally spaced times as it rotates once on its axis.

Credit: Image courtesy of ESO/I. Crossfield

Brown dwarfs fill the gap between giant gas planets, such as Jupiter and Saturn, and faint cool stars. They do not contain enough mass to initiate nuclear fusion in their cores and can only glow feebly at infrared wavelengths of light. The first confirmed brown dwarf was only found twenty years ago and only a few hundred of these elusive objects are known.

The closest brown dwarfs to the Solar System form a pair called Luhman 16AB [1] that lies just six light-years from Earth in the southern constellation of Vela (The Sail). This pair is the third closest system to the Earth, after Alpha Centauri and Barnard's Star, but it was only discovered in early 2013. The fainter component, Luhman 16B, had already been found to be changing slightly in brightness every few hours as it rotated -- a clue that it might have marked surface features.

Now astronomers have used the power of ESO's Very Large Telescope (VLT) not just to image these brown dwarfs, but to map out dark and light features on the surface of Luhman 16B.

Ian Crossfield (Max Planck Institute for Astronomy, Heidelberg, Germany), the lead author of the new paper, sums up the results: "Previous observations suggested that brown dwarfs might have mottled surfaces, but now we can actually map them. Soon, we will be able to watch cloud patterns form, evolve, and dissipate on this brown dwarf -- eventually, exometeorologists may be able to predict whether a visitor to Luhman 16B could expect clear or cloudy skies."

To map the surface the astronomers used a clever technique. They observed the brown dwarfs using the CRIRES instrument on the VLT. This allowed them not just to see the changing brightness as Luhman 16B rotated, but also to see whether dark and light features were moving away from, or towards the observer. By combining all this information they could recreate a map of the dark and light patches of the surface.
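For readers who want a feel for how a surface feature turns into a measurable signal, the short Python sketch below is a toy forward model only: it is not the team's CRIRES Doppler-imaging pipeline, and the spot position, size and contrast are invented for illustration. It simply shows how a single dark patch on a rotating body produces a periodic dip in the disc-integrated brightness.

```python
# Toy forward model (NOT the published analysis): one dark spot on a rotating
# sphere, viewed equator-on, dims the total brightness once per rotation.
# All spot parameters below are invented illustrative values.
import numpy as np

def toy_light_curve(spot_lon_deg=90.0, spot_radius_deg=20.0, contrast=0.7,
                    n_phase=100, n_grid=181):
    """Relative disc-integrated brightness over one full rotation."""
    phases = np.linspace(0.0, 1.0, n_phase)
    lon = np.radians(np.linspace(-90.0, 90.0, n_grid))  # visible longitudes (observer frame)
    lat = np.radians(np.linspace(-90.0, 90.0, n_grid))
    LON, LAT = np.meshgrid(lon, lat)
    mu = np.cos(LON) * np.cos(LAT)        # foreshortening of each surface element
    area = np.cos(LAT)                    # spherical surface-area element
    flux = []
    for p in phases:
        # body-frame longitude of each visible element at this rotation phase
        body_lon = np.degrees(LON) + 360.0 * p
        d_lon = np.abs((body_lon - spot_lon_deg + 180.0) % 360.0 - 180.0)
        in_spot = (d_lon < spot_radius_deg) & (np.abs(np.degrees(LAT)) < spot_radius_deg)
        intensity = np.where(in_spot, contrast, 1.0)   # spot is darker than its surroundings
        flux.append(np.sum(intensity * mu * area))
    flux = np.asarray(flux)
    return phases, flux / flux.max()

phases, flux = toy_light_curve()
print(f"The spot produces a ~{100 * (1 - flux.min()):.1f}% dip once per rotation")
```

The real map additionally uses the Doppler shifts of spectral lines to tell whether a feature sits on the approaching or the receding limb; this sketch captures only the brightness half of that information.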

The atmospheres of brown dwarfs are very similar to those of hot gas giant exoplanets, so by studying comparatively easy-to-observe brown dwarfs [2] astronomers can also learn more about the atmospheres of young, giant planets -- many of which will be found in the near future with the new SPHERE instrument that will be installed on the VLT in 2014.

Crossfield ends on a personal note: "Our brown dwarf map helps bring us one step closer to the goal of understanding weather patterns in other solar systems. From an early age I was brought up to appreciate the beauty and utility of maps. It's exciting that we're starting to map objects out beyond the Solar System!"
 

Contacts and sources:
Richard Hook
ESO

Notes:

[1] This pair was discovered by the American astronomer Kevin Luhman on images from the WISE infrared survey satellite. It is formally known as WISE J104915.57-531906.1, but a shorter form was suggested as being much more convenient. As Luhman had already discovered fifteen double stars, the name Luhman 16 was adopted. Following the usual conventions for naming double stars, Luhman 16A is the brighter of the two components, the secondary is named Luhman 16B and the pair is referred to as Luhman 16AB.

[2] Hot Jupiter exoplanets lie very close to their parent stars, which are much brighter. This makes it almost impossible to observe the faint glow from the planet, which is swamped by starlight. But in the case of brown dwarfs there is nothing to overwhelm the dim glow from the object itself, so it is much easier to make sensitive measurements.



Citation:  "A Global Cloud Map of the Nearest Known Brown Dwarf", by Ian Crossfield et al. to appear in the journal Nature.

The team is composed of I. J. M. Crossfield (Max Planck Institute for Astronomy [MPIA], Heidelberg, Germany), B. Biller (MPIA; Institute for Astronomy, University of Edinburgh, United Kingdom), J. Schlieder (MPIA), N. R. Deacon (MPIA), M. Bonnefoy (MPIA; IPAG, Grenoble, France), D. Homeier (CRAL-ENS, Lyon, France), F. Allard (CRAL-ENS), E. Buenzli (MPIA), Th. Henning (MPIA), W. Brandner (MPIA), B. Goldman (MPIA) and T. Kopytova (MPIA; International Max-Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg, Germany).



Rogue Asteroids May Be The Norm, Solar System A Virtual Snow Globe Of Asteroids

To get an idea of how the early solar system may have formed, scientists often look to asteroids. These relics of rock and dust represent what today's planets may have been before they differentiated into bodies of core, mantle, and crust.

‘Rogue’ asteroids may be the norm
Credit:  European Southern Observatory

In the 1980s, scientists' view of the solar system's asteroids was essentially static: Asteroids that formed near the sun remained near the sun; those that formed farther out stayed on the outskirts. But in the last decade, astronomers have detected asteroids with compositions unexpected for their locations in space: Those that looked like they formed in warmer environments were found further out in the solar system, and vice versa. Scientists considered these objects to be anomalous "rogue" asteroids.

But now, a new map developed by researchers from MIT and the Paris Observatory charts the size, composition, and location of more than 100,000 asteroids throughout the solar system, and shows that rogue asteroids are actually more common than previously thought. Particularly in the solar system's main asteroid belt — between Mars and Jupiter — the researchers found a compositionally diverse mix of asteroids.

The new asteroid map suggests that the early solar system may have undergone dramatic changes before the planets assumed their current alignment. For instance, Jupiter may have drifted closer to the sun, dragging with it a host of asteroids that originally formed in the colder edges of the solar system, before moving back out to its current position. Jupiter's migration may have simultaneously knocked around more close-in asteroids, scattering them outward.

"It's like Jupiter bowled a strike through the asteroid belt," says Francesca DeMeo, who did much of the mapping as a postdoc in MIT's Department of Earth, Atmospheric and Planetary Sciences. "Everything that was there moves, so you have this melting pot of material coming from all over the solar system."

DeMeo says the new map will help theorists flesh out such theories of how the solar system evolved early in its history. She and Benoit Carry of the Paris Observatory have published details of the map in Nature.

From a trickle to a river

To create a comprehensive asteroid map, the researchers first analyzed data from the Sloan Digital Sky Survey, which uses a large telescope in New Mexico to take in spectral images of hundreds of thousands of galaxies. Included in the survey is data from more than 100,000 asteroids in the solar system. DeMeo grouped these asteroids by size, location, and composition. She defined this last category by asteroids' origins — whether in a warmer or colder environment — a characteristic that can be determined by whether an asteroid's surface is more reflective at redder or bluer wavelengths.

The team then had to account for any observational biases. While the survey includes more than 100,000 asteroids, these are the brightest such objects in the sky. Asteroids that are smaller and less reflective are much harder to pick out, meaning that an asteroid map based on observations may unintentionally leave out an entire population of asteroids.

early asteroid belt
Credit: Harvard-Smithsonian Center for Astrophysics

To avoid any bias in their mapping, the researchers determined that the survey most likely includes every asteroid down to a diameter of five kilometers. At this size limit, they were able to produce an accurate picture of the asteroid belt. The researchers grouped the asteroids by size and composition, and mapped them into distinct regions of the solar system where the asteroids were observed.
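As a rough illustration of that bookkeeping, the sketch below applies an assumed 5 km completeness cut and then groups a hypothetical asteroid table by orbital region and by a redder-versus-bluer colour proxy. The column names, region boundaries and demo values are invented for illustration; the published analysis is considerably more detailed.

```python
# Minimal sketch of the grouping described above, on a hypothetical catalogue.
# Column names (diameter_km, a_au, colour_slope), the 5 km cut and the region
# boundaries are illustrative assumptions, not the paper's actual pipeline.
import pandas as pd

def group_asteroids(catalogue: pd.DataFrame) -> pd.DataFrame:
    # keep only objects above the assumed completeness limit
    complete = catalogue[catalogue["diameter_km"] >= 5.0].copy()

    # crude dynamical regions by semi-major axis (boundaries are approximate)
    bins = [1.8, 2.5, 2.82, 3.3, 5.0]
    labels = ["inner belt", "middle belt", "outer belt", "beyond the main belt"]
    complete["region"] = pd.cut(complete["a_au"], bins=bins, labels=labels)

    # "warm-looking" vs "cold-looking" origin proxied by whether the surface
    # reflects more strongly at redder or bluer wavelengths (sign of a colour slope)
    complete["origin"] = complete["colour_slope"].apply(
        lambda s: "warm-looking" if s > 0 else "cold-looking")

    return (complete.groupby(["region", "origin"], observed=True)
                    .size().unstack(fill_value=0))

# tiny fabricated example table
demo = pd.DataFrame({"diameter_km": [12, 3, 40, 7],
                     "a_au": [2.2, 2.6, 3.1, 2.9],
                     "colour_slope": [0.4, -0.1, -0.3, 0.2]})
print(group_asteroids(demo))
```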

From their map, they observed that for larger asteroids, the traditional pattern holds true: The further one gets from the sun, the colder the asteroids appear. But for smaller asteroids, this trend seems to break down. Those that look to have formed in warmer environments can be found not just close to the sun, but throughout the solar system — and asteroids that resemble colder bodies beyond Jupiter can also be found in the inner asteroid belt, closer to Mars.

As the team writes in its paper, "the trickle of asteroids discovered in unexpected locations has turned into a river. We now see that all asteroid types exist in every region of the main belt."

A shifting solar system

The compositional diversity seen in this new asteroid map may add weight to a theory of planetary migration called the Grand Tack model. This model lays out a scenario in which Jupiter, within the first few million years of the solar system's creation, migrated as close to the sun as Mars is today. During its migration, Jupiter may have moved right through the asteroid belt, scattering its contents and repopulating it with asteroids from both the inner and outer solar system before moving back out to its current position — a picture that is very different from the traditional, static view of a solar system that formed and stayed essentially in place for the past 4.5 billion years.

"That [theory] has been completely turned on its head," DeMeo says. "Today we think the absolute opposite: Everything's been moved around a lot and the solar system has been very dynamic."

DeMeo adds that the early pinballing of asteroids around the solar system may have had big impacts — literally — on Earth. For instance, colder asteroids that formed further out likely contained ice. When they were brought closer in by planetary migrations, they may have collided with Earth, leaving remnants of ice that eventually melted into water.

"The story of what the asteroid belt is telling us also relates to how Earth developed water, and how it stayed in this Goldilocks region of habitability today," DeMeo says.

Snow Globe Solar System

Our solar system seems like a neat and orderly place, with small, rocky worlds near the Sun and big, gaseous worlds farther out, all eight planets following orbital paths unchanged since they formed.

However, the true history of the solar system is more riotous. Giant planets migrated in and out, tossing interplanetary flotsam and jetsam far and wide. New clues to this tumultuous past come from the asteroid belt.

"We found that the giant planets shook up the asteroids like flakes in a snow globe," says lead author Francesca DeMeo, a Hubble postdoctoral fellow at the Harvard-Smithsonian Center for Astrophysics.

Millions of asteroids circle the Sun between the orbits of Mars and Jupiter, in a region known as the main asteroid belt. Traditionally, they were viewed as the pieces of a failed planet that was prevented from forming by the influence of Jupiter's powerful gravity. Their compositions seemed to vary methodically from drier to wetter, due to the drop in temperature as you move away from the Sun.

Credit: Harvard-Smithsonian Center for Astrophysics

That traditional view changed as astronomers recognized that the current residents of the main asteroid belt weren't all there from the start. In the early history of our solar system the giant planets ran amok, migrating inward and outward substantially. Jupiter may have moved as close to the Sun as Mars is now. In the process, it swept the asteroid belt nearly clean, leaving only a tenth of one percent of its original population.

As the planets migrated, they stirred the contents of the solar system. Objects from as close to the Sun as Mercury, and as far out as Neptune, all collected in the main asteroid belt.

"The asteroid belt is a melting pot of objects arriving from diverse locations and backgrounds," explains DeMeo.

Using data from the Sloan Digital Sky Survey, DeMeo and co-author Benoit Carry (Paris Observatory) examined the compositions of thousands of asteroids within the main belt. They found that the asteroid belt is more diverse than previously realized, especially when you look at the smaller asteroids.

This finding has interesting implications for the history of Earth. Astronomers have theorized that long-ago asteroid impacts delivered much of the water now filling Earth's oceans. If true, the stirring provided by migrating planets may have been essential to bringing those asteroids.

This raises the question of whether an Earth-like exoplanet would also require a rain of asteroids to bring water and make it habitable. If so, then Earth-like worlds might be rarer than we thought.

The paper describing these findings appears in the January 30, 2014 issue of Nature.

Headquartered in Cambridge, Mass., the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.  

Contacts and sources:
Partially Written by Jennifer Chu, MIT News Office
Sarah McDonnell
Massachusetts Institute of Technology

David A. Aguilar
Director of Public Affairs
Harvard-Smithsonian Center for Astrophysics

New Titanosaur, Yongjinglong datangi

A team led by University of Pennsylvania paleontologists has characterized a new dinosaur based on fossil remains found in northwestern China. The species, a plant-eating sauropod named Yongjinglong datangi, roamed during the Early Cretaceous period, more than 100 million years ago. This sauropod belonged to a group known as Titanosauria, members of which were among the largest living creatures to ever walk the earth.

A team led by University of Pennsylvania paleontologists has characterized a new dinosaur based on fossil remains found in northwestern China. The species, a plant-eating sauropod named Yongjinglong datangi, roamed during the Early Cretaceous period, more than 100 million years ago. This sauropod belonged to a group known as Titanosauria, members of which were among the largest living creatures to ever walk the earth. At roughly 50-60 feet long, the Yongjinglong individual discovered was a medium-sized Titanosaur. Anatomical evidence, however, points to it being a juvenile; adults may have been larger.

Credit: University of Pennsylvania

At roughly 50-60 feet long, the Yongjinglong individual discovered was a medium-sized Titanosaur. Anatomical evidence, however, points to it being a juvenile; adults may have been larger.

The find, reported in the journal PLOS ONE, helps clarify relationships among several sauropod species that have been found in the last few decades in China and elsewhere. Its features suggest that Yongjinglong is among the most derived, or evolutionarily advanced, of the Titanosaurs yet discovered from Asia.

Doctoral student Liguo Li and professor Peter Dodson, who have affiliations in both the School of Veterinary Medicine's Department of Animal Biology and the School of Arts and Sciences' Department of Earth and Environmental Science, led the work. They partnered with Hailu You, a former student of Dodson's, who now works at the Chinese Academy of Sciences' Institute of Vertebrate Paleontology and Paleoanthropology, and Daqing Li of the Gansu Geological Museum in Lanzhou, China.

Until very recently, the United States was the epicenter for dinosaur diversity, but China surpassed the U.S. in 2007 in terms of species found. This latest discovery was made in the southeastern Lanzhou-Minhe Basin of China's Gansu Province, about an hour's drive from the province's capital, Lanzhou. Two other Titanosaurs from the same period, Huanghetitan liujiaxiaensis and Daxiatitan binglingi, were discovered within the last decade in a valley one kilometer from the Yongjinglong fossils.

"As recently as 1997 only a handful of dinosaurs were known from Gansu," Dodson said. "Now it's one of the leading areas of China. This dinosaur is one more of the treasures of Gansu."

During a trip to Gansu, Liguo Li was invited to study the remains, which had been in storage since being unearthed in 2008. They consisted of three teeth, eight vertebrae, the left shoulder blade, and the right radius and ulna.

The anatomical features of the bones bear some resemblance to another Titanosaur that had been discovered by paleontologists in China in 1929, named Euhelopus zdanskyi. But the team was able to identify a number of unique characteristics.

"The shoulder blade was very long, nearly 2 meters, with sides that were nearly parallel, unlike many other Titanosaurs whose scapulae bow outward," Li said.

The scapula was so long, indeed, that it did not appear to fit in the animal's body cavity if placed in a horizontal or vertical orientation, as is the case with other dinosaurs. Instead, Li and colleagues suggest the bone must have been oriented at an angle of 50 degrees from the horizontal.

In addition, an unfused portion of the shoulder blade indicated to the researchers that the animal under investigation was a juvenile or subadult.

"The scapula and coracoid aren't fused here," Li said. "It is open, leaving potential for growth."

Thus, a full-grown adult might be larger than this 50-60 foot long individual. Future finds may help elucidate just how much larger, the researchers noted.

The ulna and radius were well preserved, enough so that the researchers could identify grooves and ridges they believe correspond with the locations of muscle attachments in the dinosaur's forelimb.

The researchers were also able to draw evidence about the dinosaur's relationship to other species from the vertebrae, one of which was from the neck and the other seven from the trunk. Notably, the vertebrae had large cavities in the interior that the team believes provided space for air sacs in the dinosaur's body.

"These spaces are unusually large in this species," Dodson said. "It's believed that dinosaurs, like birds, had air sacs in their trunk, abdominal cavity and neck as a way of lightening the body."

In addition, the longest tooth they found was nearly 15 centimeters long. Another shorter tooth contained unique characteristics, including two "buttresses," or bony ridges, on the internal side, while Euhelopus had only one buttress on its teeth.

To gain a sense of where Yongjinglong sits on the family tree of sauropods, the researchers were able to compare its characteristics with specimens from elsewhere in China, as well as from Africa, South America and the U.S.

"We used standard paleontological techniques to compare it with phylogenies based on other specimens," Dodson said. "It is definitely much more derived than Euhelopus and shows close similarities to derived species from South America."

Not only does the discovery point to the fact that Titanosaurs encompass a diverse group of dinosaurs, but it also supports the growing consensus that sauropods were a dominant group in the Early Cretaceous — a view that U.S. specimens alone could not confirm.

"Based on U.S. fossils, it was once thought that sauropods dominated herbivorous dinosaur fauna during the Jurassic but became almost extinct during the Cretaceous," Dodson said. "We now realize that, in other parts of the world, particularly in South America and Asia, sauropod dinosaurs continued to flourish in the Cretaceous, so the thought that they were minor components is no longer a tenable view."

Funding for the research was provided by the National Natural Science Foundation of China, the Hundred Talents Project of the Chinese Academy of Sciences, the Gansu Bureau of Geology and Mineral Resources and the National Science Foundation.


Contacts and sources:
Katherine Unger Baillie
University of Pennsylvania

An Electronic Tongue Can Identify Brands Of Beer

Spanish researchers have managed to distinguish between different varieties of beer using an electronic tongue. The discovery, published in the journal 'Food Chemistry', is accurate in almost 82% of cases.

Beer is the oldest and most widely consumed alcoholic drink in the world. Now, scientists at the Autonomous University of Barcelona have led a study which analysed several brands of beer by applying a new concept in analysis systems, known as an electronic tongue, the idea for which is based on the human sense of taste.

Spanish researchers have managed to distinguish between different varieties of beer using an electronic tongue. The discovery, published in the journal 'Food Chemistry', is accurate in almost 82 percent of cases.
Credit: Manel del Valle

As Manel del Valle, the main author of the study, explains to SINC: "The concept of the electronic tongue consists in using a generic array of sensors, in other words with generic response to the various chemical compounds involved, which generate a varied spectrum of information with advanced tools for processing, pattern recognition and even artificial neural networks."

In this case, the array of sensors was formed of 21 ion-selective electrodes, including some with response to cations (ammonium, sodium), others with response to anions (nitrate, chloride, etc.), as well as electrodes with generic (unspecified) response to the varieties considered.

The authors recorded the multidimensional response generated by the array of sensors and how it was influenced by the type of beer considered. An initial, unsupervised analysis allowed them to re-project the data onto new coordinates that showed the groupings more clearly, although it was not effective for classifying the beers.

"Using more powerful tools – supervised learning – and linear discriminant analysis did enable us to distinguish between the main categories of beer we studied: Schwarzbier, lager, double malt, Pilsen, Alsatian and low-alcohol," Del Valle continues, "and with a success rate of 81.9%."

Furthermore, it is worth noting that varieties of beer that the tongue was not trained to recognise, such as beer/soft drink mixes or foreign makes, were not identified (discrepant samples). According to the experts, this validates the system, as it does not recognise brands for which it was not trained.

Robots with the sense of taste

In view of the ordering of the varieties, which followed their declared alcohol content, the scientists estimated this content with a numerical model developed with an artificial neural network.

"This application could be considered a sensor by software, as the ethanol present does not respond directly to the sensors used, which only respond to the ions present in the solution," outlines the researcher.

The study concludes that these tools could one day give robots a sense of taste, and even supplant panels of tasters in the food industry to improve the quality and reliability of products for consumption.



Contacts and sources:
SINC Agency
FECYT - Spanish Foundation for Science and Technology
Manel del Valle 
Departamento de Química - Unidad de Química Analítica
Universidad Autónoma de Barcelona
 
Citation: Xavier Cetó, Manuel Gutiérrez-Capitán, Daniel Calvo, Manel del Valle. "Beer classification by means of a potentiometric electronic tongue". Food Chemistry

Mysterious Ocean Circles In The Baltic Sea Explained


Are they bomb craters from World War II? Are they landing marks for aliens? Since the first images of the mysterious ocean circles off the Baltic coast of Denmark were taken in 2008, people have tried to find an explanation. Now researchers from the University of Southern Denmark and University of Copenhagen finally present a scientific explanation.

The circles in the shallow water off the coast
Photo: Jacob T. Johansen, journalist

The first pictures appeared in 2008, taken by a tourist and showing some strange circular formations in the shallow waters off the famous white cliffs of chalk on the island Møn in Denmark. In 2011, the circles came back, and this time there were so many that they made it to the media.

Investigating biologists then concluded that the circles consisted of eelgrass plants growing on the bottom of the shallow water. But only now can scientists explain why the eelgrass grows in circles here – eelgrass usually grows as continuous meadows on the seabed.

"It has nothing to do with either bomb craters or landing marks for aliens. Nor with fairies, who in the old days got the blame for similar phenomena on land, the fairy rings in lawns being a well known example", say biologists Marianne Holmer from University of Southern Denmark and Jens Borum from University of Copenhagen.

The circles of eelgrass can be up to 15 meters in diameter, and their rim consists of lush green eelgrass plants. Inside the circle, only very weak eelgrass plants, or none at all, can be seen.

"We have studied the mud that accumulates among the eelgrass plants and we can see that the mud contains a substance that is toxic to eelgrass", explain Holmer and Borum.

The poison is sulfide, a substance that accumulates in the seabed off the island of Møn, because it is very calcareous and iron-deficient.

"Most mud gets washed away from the barren, chalky seabed, but like trees trap soil on an exposed hillside, eelgrass plants trap the mud. And therefore there will be a high concentrations of sulfide-rich mud among the eelgrass plants," explain the researchers.

Sulfide is toxic enough to weaken old and new eelgrass plants, but not toxic enough to harm strong adult plants. And since eelgrass spreads radially from the inside out, the oldest and weakest plants are located in the center of the growth circle.

Eelgrass growing in circles in the shallow water off the coast
Credit: Ole Pedersen

Jens Borum and Marianne Holmer say: "Eelgrass populations grow vegetatively by stolons which spread radially in all directions and therefore each plant creates a circular growth pattern. When the sulfide begins to work, it starts with the oldest and thus the inner part of the population because here is an increased release of toxic sulfide and uptake by plants due to accumulation of mud. The result is an exceptional circular shape, where only the rim of the circle survives – like fairy rings in a lawn".
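To make the geometry concrete, the toy calculation below encodes just that mechanism: a patch that expands radially at a fixed rate, with plants dying once they have sat in the sulfide-rich mud longer than some threshold. The growth rate and threshold are arbitrary numbers chosen only to show that the survivors form a rim, not values from the study.

```python
# Toy illustration of the ring mechanism: radial vegetative spread plus
# die-off of the oldest (innermost) plants. All numbers are arbitrary.
import numpy as np

def eelgrass_ring(years=10, growth_m_per_yr=0.7, toxic_after_yr=4, res=200):
    # age of the plant at each point = years since the expanding front passed it
    x = np.linspace(-10, 10, res)
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    radius = growth_m_per_yr * years                 # current edge of the patch
    age = np.where(r <= radius, years - r / growth_m_per_yr, np.nan)
    # plants older than the threshold have sat in sulfide-rich mud too long
    alive = (age >= 0) & (age < toxic_after_yr)
    ring_width = growth_m_per_yr * toxic_after_yr
    return alive, radius, ring_width

alive, radius, width = eelgrass_ring()
print(f"patch radius ~{radius:.1f} m, surviving rim ~{width:.1f} m wide")
```

With these made-up numbers the patch reaches roughly 14 m across while only the outer couple of metres stay alive, qualitatively the ring seen in the photographs.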

The waters off Møn’s chalk cliffs are not the only place where sulfide destroys eelgrass. Sulfide poisoning of eelgrass is a major problem worldwide. Sulfide is often created where oxygen disappears from the seabed. This can happen when the seabed is fed nutrients from agriculture.

Underwater meadows of eelgrass and other seagrasses grow in many parts of the world where they serve as home to a variety of small animals, filter the water and trap carbon and nutrients. But the meadows are threatened in almost all regions of the world, and in several places, including Denmark, researchers and authorities work to prevent seagrasses from disappearing.

Facts about seagrass

Seagrass is not seaweed, but a plant with flowers, leaves and roots just like plants on land. Seagrass also produces seeds that can be sown in the seabed and grow to new plants. There are approx. 60 seagrass species in the world with eelgrass (Zostera marina) in temperate areas and Halophila ovalis in tropical and subtropical areas as common species. Seagrass needs light and only grows where at least 10% of the sun's light can reach the plants.

The University of Southern Denmark is a partner and coordinator of NOVA GRASS, an international five-year research project focused on the restoration of eelgrass meadows.


Contacts and sources:
Birgitte Svennevig

Citation: "Eelgrass fairy rings: sulfide as inhibiting agent." Borum, Holmer, et al. Mar. Biol. Published online 12 October 2013.

Wednesday, January 29, 2014

"Chameleon Of The Sea" Secrets Revealed

Scientists at Harvard University and the Marine Biological Laboratory (MBL) hope new understanding of the natural nanoscale photonic device that enables a small marine animal to dynamically change its colors will inspire improved protective camouflage for soldiers on the battlefield.

The cuttlefish, known as the "chameleon of the sea," can rapidly alter both the color and pattern of its skin, helping it blend in with its surroundings and avoid predators. Researchers at Harvard and MBL now understand the biology and physics behind this process.  
Photo courtesy of Brian Gratwicke/Flickr, Creative Commons BY 2.0.

 In a paper published January 29 in the Journal of the Royal Society Interface, the Harvard-MBL team reports new details on the sophisticated biomolecular nanophotonic system underlying the cuttlefish’s color-changing ways.

Left: Cuttlefish chromatophores change from a punctate to an expanded state in response to visual cues. The scale bar measures one millimeter. Right: This illustrated cross-section of the skin shows the layering of three types of chromatophores. Iridophores and leucophores would be positioned beneath the chromatophores.  
Images courtesy of Lydia Mathger

"Nature solved the riddle of adaptive camouflage a long time ago," said Kevin Kit Parker, Tarr Family Professor of Bioengineering and Applied Physics at the Harvard School of Engineering and Applied Sciences (SEAS) and core faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard. “Now the challenge is to reverse-engineer this system in a cost-efficient, synthetic system that is amenable to mass manufacturing."

In addition to textiles for military camouflage, the findings could also have applications in materials for paints, cosmetics, and consumer electronics.

The cuttlefish (Sepia officinalis) is a cephalopod, like squid and octopuses. Neurally controlled, pigmented organs called chromatophores allow it to change its appearance in response to visual cues, but scientists have had an incomplete understanding of the biological, chemical, and optical functions that make this adaptive coloration possible.

Chromatophores were previously thought to be simply sacs of pigment that acted as filters; scientists have now discovered that nanostructures (labeled here as "granules") within the cells are capable of fluorescing. 
Images courtesy of George Bell

To regulate its color, the cuttlefish relies on a vertically arranged assembly of three optical components: the leucophore, a near-perfect light scatterer that reflects light uniformly over the entire visible spectrum; the iridophore, a reflector comprising a stack of thin films; and the chromatophore. This layering enables the skin of the animal to selectively absorb or reflect light of different colors, said coauthor Leila F. Deravi, a research associate in bioengineering at Harvard SEAS.

"Chromatophores were previously considered to be pigmentary organs that acted simply as selective color filters,” Deravi said. “But our results suggest that they play a more complex role; they contain luminescent protein nanostructures that enable the cuttlefish to make quick and elaborate changes in its skin pigmentation."

When the cuttlefish actuates its coloration system, each chromatophore expands; the surface area can change as much as 500 percent. The Harvard-MBL team showed that within the chromatophore, tethered pigment granules regulate light through absorbance, reflection, and fluorescence, in effect functioning as nanoscale photonic elements, even as the chromatophore changes in size.

"The cuttlefish uses an ingenious approach to materials composition and structure, one that we have never employed in our engineered displays," said coauthor Evelyn Hu, Tarr-Coyne Professor of Applied Physics and of Electrical Engineering at SEAS. "It is extremely challenging for us to replicate the mechanisms that the cuttlefish uses. For example, we cannot yet engineer materials that have the elasticity to expand 500 times in surface area. And were we able to do so, the richness of color of the expanded and unexpanded material would be dramatically different—think of stretching and shrinking a balloon. The cuttlefish may have found a way to compensate for this change in richness of color by being an 'active' light emitter (fluorescent), not simply modulating light through passive reflection." 
 
The team also included Roger Hanlon and his colleagues at the Marine Biological Laboratory in Woods Hole, Mass. Hanlon’s lab has examined adaptive coloration in the cuttlefish and other invertebrates for many years.

"Cuttlefish skin is unique for its dynamic patterning and speed of change," Hanlon said. "Deciphering the relative roles of pigments and reflectors in soft, flexible skin is a key step to translating the principles of actuation to materials science and engineering. This collaborative project expanded our breadth of inquiry and uncovered several useful surprises, such as the tether system that connects the individual pigment granules."

For Parker, an Army reservist who completed two tours of duty in Afghanistan, using the cuttlefish to find a biologically inspired design for new types of military camouflage is more than an academic pursuit. He understands first-hand that poor camouflage patterns can cost lives on the battlefield.

"Throughout history, people have dreamed of having an 'invisible suit,'" Parker said. "Nature solved that problem, and now it’s up to us to replicate this genius so, like the cuttlefish, we can avoid our predators."

In addition to Parker, Hu, Hanlon, and Deravi, the coauthors of the Interface paper are: Andrew P. Magyar, a former postdoctoral student in Hu’s group; Sean P. Sheehy, a graduate student in Parker’s group; and George R. R. Bell, Lydia M. Mäthger, Stephen L. Senft, Trevor J. Wardill, and Alan M. Kuzirian, who all work with Hanlon in the Program in Sensory Physiology and Behavior at the Marine Biological Laboratory.


Contacts and sources:
Harvard School of Engineering and Applied Sciences

Tuesday, January 28, 2014

Volatility In Oil Prices Stymies Global Economic Growth, Says Oxford Study

The volatility of oil prices is a 'fundamental barrier to stability and economic growth', according to a new study by the University of Oxford published in Frontiers in Energy.

It recommends a raft of measures to bring prices under control, saying the amount of speculative trading taking place in the oil derivatives market is a large part of the problem. It suggests the 'behaviour of speculators compounds existing volatility' and previously unrelated volatility is spilling over from the stock market to the oil market and vice versa. This has changed the nature of the oil derivatives market, driving it away from its original purpose of 'hedging' – a means by which businesses could protect themselves against price fluctuations, say the researchers from the Smith School of Enterprise and the Environment.


Credit: Shutterstock

While the authors welcome the European Commission's proposed Financial Transactions Tax on transactions in the oil derivatives market, they say the amount (pegged at 0.01%) is too small and therefore unlikely to deter speculators. It could even carry the risk of curbing hedging as an unintended consequence, says the report. They conclude that the planned EU tax for financial transactions is a good first step, but unlikely to be sufficient to cut out unnecessary trading.

In the study, Sir David King, Dr Oliver Inderwildi and Zoheir Ebrahim recommend a combination of policies to tackle both the supply and demand side of the oil industry. They highlight the importance of collective action, such as the strategic oil reserve administered by the International Energy Agency (IEA) that can effectively be used to reduce price volatility in times of crisis. The study says given the IEA's projections of oil prices reaching at least $215 a barrel by 2035, global cooperation is fundamental to the management and reduction of future price volatility.

'In this regard, the IEA collective action framework, which mandates the maintenance of strategic oil reserves, has been highly effective on several occasions in reducing the extent of price volatility in the context of oil-supply disruptions,' says the report. It recommends strengthening and expanding such frameworks to improve future market stability.

The study also suggests the introduction of new regulation to make it mandatory for major oil-reliant industries to maintain their own oil stocks, thereby insulating oil prices from sudden spikes during crises in oil-rich parts of the world.

Governments should provide incentives to businesses and companies investing in infrastructure that promotes alternative fuel and energy sources or that develop new greener energy provision, says the study.

The authors note that additional unconventional fossil fuel resources obtained through processes such as fracking are due to come online over the next decade, suggesting this is 'highly likely' to keep resource prices at a relatively low level. It describes this development as 'terrible news for the environment', but 'excellent news for the economy' which will 'buy us time for decarbonisation endeavours'.

To reduce the volatility of oil prices in the long term, the study says governments need to take charge of the 'politically challenging task' of removing subsidies on fuel, particularly in non-OECD countries where fuel subsidies are institutionalised. It says policies aimed at improving energy efficiency, such as the adoption of fuel-economy standards and government requirements for greater energy efficiency, provide a significant opportunity to reduce demand for oil.

Dr Oliver Inderwildi said: 'Unconventional fossil fuel resources are a blessing at the moment as cheap fuel will support the global economic recovery. In the long term, however, we have to reduce our reliance on fossil fuels because of the great damage they are causing to the environment and the toxic economic effect of price volatility. This will require many measures, from extending the strategic oil reserves to the restructuring of derivative markets and large-scale investments in modern, energy efficient infrastructure.

'Reducing our dependence on fossil resource should also increase the energy security of OECD countries and in the long term, as a knock-on effect, this would also be likely to improve relations in areas where there are geopolitical tensions. However, this can only be achieved if countries that depend on imports of resources like oil and gas take collective action.'


Contacts and sources:
Oxford University

Macroeconomic impacts of oil price volatility

Bio Argo Robots Splash Into The Indian Ocean

Robotic floats armed with revolutionary new sensors will be launched in the Indian Ocean, as part of a new India-Australia research partnership to find out what makes the world's third largest ocean tick - and how both nations can benefit from it.


Credit: CSIRO

The Indian Ocean contains vast fisheries and mineral resources that are of strategic importance to both Australia and India. It also plays a direct role in driving the climates of its surrounding regions - home to more than 16 per cent of the world's population.



The new 'Bio Argo' floats, to be launched in mid 2014, will enhance the already successful Argo float technology to measure large-scale changes in the chemistry and biology of marine ecosystems below the Indian Ocean's surface.

The Argo floats are a network of 3600 free-floating sensors, operating in open ocean areas that provide real-time data on ocean temperature and salinity.

The 'Bio Argo' floats will include additional sensors for dissolved oxygen, nitrate, chlorophyll, dissolved organic matter, and particle scattering. They will target specific gaps in our understanding of Indian Ocean ecosystems of immediate concern to India and Australia, such as the Bay of Bengal and the waters of north Western Australia.

CSIRO's Dr Nick Hardman-Mountford said the pilot project, led by CSIRO in collaboration with the Indian National Institute of Oceanography (CSIR-NIO) and the Indian National Centre for Ocean Information Services, will improve our understanding of cause and effect in the Indian Ocean's climate and ecosystems.

"By studying the Indian Ocean in this detail, we can investigate the origin and impact of marine heatwaves like the one that devastated the coral reefs and fisheries off north Western Australian in 2011 - and improve our prediction of them in the future," Dr Hardman-Mountford said.

CSIR-NIO Director, Dr Wajih Naqvi, said the novel technological innovation will give researchers from both countries a new understanding of the Indian Ocean.

"We expect the technology being utilised in this project to provide new insights into the biogeochemistry of the Indian Ocean and how it is being impacted by human activities," Dr Naqvi said.

The proposed advances in ocean observation, ecosystem understanding and resources management, which will benefit the entire Indian Ocean Rim, can only occur through collaboration between India and Australia.

Dr Nick D'Adamo, Head of the Perth Programme Office supporting UNESCO's Intergovernmental Oceanographic Commission (IOC) - a partner in the project - praised the collaborative nature of the project.

"By combining the research capabilities of India and Australia we will see an improved ability to predict and prepare for global climate change, as well as better conservation of marine biodiversity," Dr D'Adamo said.

The $1 million project was funded in part by the Australian Government under the Australia-India Strategic Research Fund.


Contacts and sources:
CSIRO

Study Analyzes Content Of Nightmares And Bad Dreams

According to a new study by researchers at the University of Montreal, nightmares have greater emotional impact than bad dreams do, and fear is not always a factor. In fact, it is mostly absent in bad dreams and in a third of nightmares. What is felt, instead, is sadness, confusion, guilt, disgust, etc. For their analysis of 253 nightmares and 431 bad dreams, researchers obtained the narratives of nearly 10,000 dreams.

John Henry Fuseli - The Nightmare
Credit: Wikipedia

"Physical aggression is the most frequently reported theme in nightmares. Moreover, nightmares become so intense they will wake you up. Bad dreams, on the other hand, are especially haunted by interpersonal conflicts," write Geneviève Robert and Antonio Zadra, psychology researchers at the Université de Montréal, in the last issue of Sleep.

"Death, health concerns and threats are common themes in nightmares," says Geneviève Robert, first author of the article, which formed part of her doctoral thesis. "But it would be wrong to think that they characterize all nightmares. "Sometimes, it is the feeling of a threat or a ominous atmosphere that causes the person to awaken. I'm thinking of one narrative, in which the person saw an owl on a branch and was absolutely terrified."

Nightmares in men were also more likely than those of women to contain themes of disasters and calamities such as floods, earthquakes and war while themes involving interpersonal conflicts were twice as frequent in the nightmares of women.

Why do we dream? What are nightmares? These questions are still unanswered, says Professor Zadra, who has focused on sleep disorders for 20 years (he is notably a specialist in sleepwalking). One hypothesis is that dreams are a catharsis for the vicissitudes of daily life; another is that they reflect a disruption of the nervous system. Whatever they are, the scientific community generally agrees that everyone dreams, usually during the stage of sleep called REM sleep, which most people go through three to five times a night. Most sleepers forget their dreams right away; heavy dreamers remember them more easily. Five to six percent of the population report having nightmares.

Treatable

"Nightmares are not a disease in themselves but can be a problem for the individual who anticipates them or who is greatly distressed by their nightmares. People who have frequent nightmares may fear falling asleep – and being plunged into their worst dreams. Some nightmares are repeated every night. People who are awakened by their nightmares cannot get back to sleep, which creates artificial insomnia," says Zadra.

The source of a recurring nightmare may be a traumatic event. Returning soldiers sometimes, in their dreams, see the scenes that marked them. Consumption or withdrawal of alcohol or psychotropic drugs may also explain the frequency or intensity of nightmares. The Diagnostic and Statistical Manual of Mental Disorders classifies nightmares in the category "parasomnias usually associated with REM sleep."

The good news is that nightmares are treatable. Through visualization techniques, patients learn to change the scenario of one or more of their dreams and repeat the new scenario using a mental imagery technique. It can be through a life-saving act (the dreamer confronts the attacker) or a supernatural intervention (Superman comes to the rescue). All in mid-dream!

The dream files

One of the research aims of Robert and Zadra, who were funded by the Social Sciences and Humanities Research Council of Canada, was to better understand the difference between bad dreams and nightmares, which seem to be in a continuum with "ordinary" dreams, along a sort of intensity scale.

For this first large-scale comparative study on the topic, the researchers asked 572 respondents to write a dream journal over two to five weeks instead of simply ticking off themes listed in a questionnaire, which is a quicker but less valid method. Some of these journals, stored in a large "dream repository" at the UdeM Department of Psychology, are quite rich.

One example: "I'm in a closet. A strip of white cloth is forcing me to crouch. Instead of clothes hanging, there are large and grotesquely shaped stuffed animals like cats and dogs with grimacing teeth and bulging eyes. They're hanging and wiggling towards me. I feel trapped and frightened."

Not all the narratives are as detailed, says Geneviève Robert, taking several folders from the filing cabinet. While some narratives are written on more than one page (the average is 144 words), some are briefer: one or two lines. Since the participants were asked to write their descriptions as soon as possible after awakening, some of the writing is almost stream-of-consciousness. One can only imagine the work of the research team who transcribed these thousands of narratives before classifying and analyzing them.
 
What more can we understand from dreams? "Almost everything," says Zadra. Through this research, we can better assert that dreams, bad dreams and nightmares are part of the same emotional and neurocognitive process. Exactly how, and by what mechanism, remains to be determined.


Contacts and sources:
Julie Gazaille
University of Montreal

NASA Spacecraft Take Aim At Nearby Supernova

An exceptionally close stellar explosion discovered on Jan. 21 has become the focus of observatories around and above the globe, including several NASA spacecraft. The blast, designated SN 2014J, occurred in the galaxy M82 and lies only about 12 million light-years away. This makes it the nearest optical supernova in two decades and potentially the closest type Ia supernova to occur during the life of currently operating space missions.

To make the most of the event, astronomers have planned observations with the NASA/ESA Hubble Space Telescope and NASA's Chandra X-ray Observatory, Nuclear Spectroscopic Telescope Array (NuSTAR), Fermi Gamma-ray Space Telescope, and Swift missions.

As befits its moniker, Swift was the first to take a look. On Jan. 22, just a day after the explosion was discovered, Swift's Ultraviolet/Optical Telescope (UVOT) captured the supernova and its host galaxy.

Before and After Images:  These Swift UVOT images show M82 before (left) and after the new supernova (right). The pre-explosion view combines data taken between 2007 and 2013. The view showing SN 2014J (arrow) merges three exposures taken on Jan. 22, 2014. Mid-ultraviolet light is shown in blue, near-UV light in green, and visible light in red. The image is 17 arcminutes across, or slightly more than half the apparent diameter of a full moon.

Image Credit: NASA/Swift/P. Brown, TAMU

Remarkably, SN 2014J can be seen on images taken up to a week before anyone noticed its presence. It was only when Steve Fossey and his students at the University of London Observatory imaged the galaxy during a brief workshop that the supernova came to light.

"Finding and publicizing new supernova discoveries is often the weak link in obtaining rapid observations, but once we know about it, Swift frequently can observe a new object within hours," said Neil Gehrels, the mission's principal investigator at NASA's Goddard Space Flight Center in Greenbelt, Md.

Although the explosion is unusually close, the supernova's light is attenuated by thick dust clouds in its galaxy, which may slightly reduce its apparent peak brightness.

"Interstellar dust preferentially scatters blue light, which is why Swift's UVOT sees SN 2014J brightly in visible and near-ultraviolet light but barely at all at mid-ultraviolet wavelengths," said Peter Brown, an astrophysicist at Texas A&M University who leads a team using Swift to obtain ultraviolet observations of supernovae.

However, this super-close supernova provides astronomers with an important opportunity to study how interstellar dust affects its light. As a class, type Ia supernovae explode with remarkably similar intrinsic brightness, a property that makes them useful "standard candles" -- some say "standard bombs" -- for exploring the distant universe.
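
As an illustrative aside (the relation below is textbook astronomy rather than something derived in the article): if every type Ia supernova peaks at roughly the same absolute magnitude $M$, then measuring the apparent peak magnitude $m$ fixes the distance $d$ through the distance modulus

$$ m - M = 5\log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right) + A, $$

where $A$ is the dimming caused by any intervening dust. For SN 2014J the dust in M82 makes $A$ significant, which is why characterizing how that dust dims and reddens the light matters before the supernova can serve as a well-calibrated candle.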

Brown notes that X-rays have never been conclusively observed from a type Ia supernova, so a detection by Swift's X-ray Telescope, Chandra or NuSTAR would be significant, as would a Fermi detection of high-energy gamma rays.

A type Ia supernova represents the total destruction of a white dwarf star by one of two possible scenarios. In one, the white dwarf orbits a normal star, pulls a stream of matter from it, and gains mass until it reaches a critical threshold and explodes. In the other, the blast arises when two white dwarfs in a binary system eventually spiral inward and collide.

Either way, the explosion produces a superheated shell of plasma that expands outward into space at tens of millions of miles an hour. Short-lived radioactive elements formed during the blast keep the shell hot as it expands. The interplay between the shell's size, transparency and radioactive heating determines when the supernova reaches peak brightness. Astronomers expect SN 2014J to continue brightening into the first week of February, by which time it may be visible in binoculars.

M82, also known as the Cigar Galaxy, is located in the constellation Ursa Major and is a popular target for small telescopes. M82 is undergoing a powerful episode of star formation that makes it many times brighter than our own Milky Way galaxy and accounts for its unusual and photogenic appearance.


Contacts and sources:
NASA

Caffeine Use Disorder: This Widespread Health Problem Needs More Attention

"I'm a zombie without my morning coffee." "My blood type is Diet Coke." "Caffeine isn't a drug, it's a vitamin." Most people make jokes like these about needing a daily boost from their favorite caffeinated beverage—whether first thing in the morning or to prevent the after-lunch slump.

Credit: Wikipedia

But a recent study coauthored by American University psychology professor Laura Juliano indicates that more people are dependent on caffeine to the point that they suffer withdrawal symptoms and are unable to reduce caffeine consumption even if they have another condition that may be impacted by caffeine—such as a pregnancy, a heart condition, or a bleeding disorder.

These symptoms combined are a condition called "Caffeine Use Disorder." And according to the study Juliano coauthored, even though caffeine is the most commonly used drug in the world—and is found in everything from coffee, tea, and soda, to OTC pain relievers, chocolate, and now a whole host of food and beverage products branded with some form of the word "energy"—health professionals have been slow to characterize problematic caffeine use and acknowledge that some cases may call for treatment.

"The negative effects of caffeine are often not recognized as such because it is a socially acceptable and widely consumed drug that is well integrated into our customs and routines," Juliano said. "And while many people can consume caffeine without harm, for some it produces negative effects, physical dependence, interferes with daily functioning, and can be difficult to give up, which are signs of problematic use."

"Caffeine Use Disorder: A Comprehensive Review and Research Agenda," which Juliano coauthored with Steven Meredith and Roland Griffiths of the Johns Hopkins University School of Medicine and John Hughes from the University of Vermont, published last fall in the Journal of Caffeine Research.

Grounds for More Research

The study summarizes the results of previously published caffeine research to present the biological evidence for caffeine dependence, data that shows how widespread dependence is, and the significant physical and psychological symptoms experienced by habitual caffeine users. Juliano and her coauthors also address the diagnostic criteria for Caffeine Use Disorder and outline an agenda to help direct future caffeine dependence research.

As for heeding the call for more research, the scientific community is beginning to wake up and smell the coffee. Last spring, the American Psychiatric Association officially recognized Caffeine Use Disorder as a health concern in need of additional research in the Diagnostic and Statistical Manual of Mental Disorders—the standard classification of mental disorders, now in its fifth edition (DSM-5), used by mental health professionals in the United States.

"There is misconception among professionals and lay people alike that caffeine is not difficult to give up. However, in population-based studies, more than 50 percent of regular caffeine consumers report that they have had difficulty quitting or reducing caffeine use," said Juliano, who served as an appointed advisor to the DSM-5 Substance Use Disorders work group and helped outline the symptoms for the Caffeine Use Disorder inclusion.

"Furthermore, genetics research may help us to better understand the effects of caffeine on health and pregnancy as well as individual differences in caffeine consumption and sensitivity," she added.

A Lack of Labelling

Based on current research, Juliano advises that healthy adults should limit caffeine consumption to no more than 400 mg per day—the equivalent of about two to three 8-oz cups of coffee. Pregnant women should consume less than 200 mg per day and people who regularly experience anxiety or insomnia—as well as those with high blood pressure, heart problems, or urinary incontinence—should also limit caffeine.
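
For readers who want to sanity-check their own habit, a few lines of Python can tally an estimated daily intake against the 400 mg and 200 mg figures above. The per-serving caffeine contents below are rough, typical values assumed purely for illustration, not numbers from the study, and real products vary widely.

# Rough, typical caffeine contents in mg per serving (illustrative assumptions;
# not figures from the study, and actual products vary widely).
CAFFEINE_MG = {
    "8-oz brewed coffee": 140,
    "8-oz black tea": 50,
    "12-oz cola": 35,
    "8.4-oz energy drink": 80,
}

def daily_total(servings):
    """Sum caffeine in mg for a dict of {item: number of servings}."""
    return sum(CAFFEINE_MG[item] * count for item, count in servings.items())

day = {"8-oz brewed coffee": 2, "12-oz cola": 1}
total = daily_total(day)
print(f"Estimated intake: {total} mg")
print("Within the 400 mg adult guideline:", total <= 400)
print("Within the 200 mg pregnancy guideline:", total <= 200)

In practice, as Juliano notes below, the hard part is knowing those per-product numbers in the first place.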

But limiting one's caffeine intake is often easier said than done as most people don't know how much caffeine they consume daily.

"At this time, manufacturers are not required to label caffeine amounts and some products such as energy drinks do not have regulated limits on caffeine," Juliano said, adding that if this changed, people could perhaps better limit their consumption and ideally, avoid caffeine's possible negative effects.

But in a nation where a stop at Starbucks is a daily ritual for many people, is there really a market for caffeine cessation? Juliano says yes.



"Through our research, we have observed that people who have been unable to quit or cut back on caffeine on their own would be interested in receiving formal treatment—similar to the outside assistance people can turn to if they want to quit smoking or tobacco use."

Contacts and sources:
Rebecca Basu
American University

What Makes Us Human? Unique Brain Area Linked To Higher Cognitive Powers

Oxford University researchers have identified an area of the human brain that appears unlike anything in the brains of some of our closest relatives.

The brain area pinpointed is known to be intimately involved in some of the most advanced planning and decision-making processes that we think of as being especially human.

An area of the brain that seems to be unique to humans (in red)
Credit: Oxford University

'We tend to think that being able to plan into the future, be flexible in our approach and learn from others are things that are particularly impressive about humans. We've identified an area of the brain that appears to be uniquely human and is likely to have something to do with these cognitive powers,' says senior researcher Professor Matthew Rushworth of Oxford University's Department of Experimental Psychology.

MRI scans of 25 adult volunteers were used to identify key components of the ventrolateral frontal cortex area of the human brain and to map how these components were connected with other brain areas. The results were then compared to equivalent MRI data from 25 macaque monkeys.

This ventrolateral frontal cortex area of the brain is involved in many of the highest aspects of cognition and language, and is only present in humans and other primates. Some parts are implicated in psychiatric conditions like ADHD, drug addiction or compulsive behaviour disorders. Language is affected when other parts are damaged after stroke or neurodegenerative disease. A better understanding of the neural connections and networks involved should help the understanding of changes in the brain that go along with these conditions.

The Oxford University researchers report their findings in the science journal Neuron. They were funded by the UK Medical Research Council.

Professor Rushworth explains: 'The brain is a mosaic of interlinked areas. We wanted to look at this very important region of the frontal part of the brain and see how many tiles there are and where they are placed.

'We also looked at the connections of each tile – how they are wired up to the rest of the brain – as it is these connections that determine the information that can reach that component part and the influence that part can have on other brain regions.'

From the MRI data, the researchers were able to divide the human ventrolateral frontal cortex into 12 areas that were consistent across all the individuals.

'Each of these 12 areas has its own pattern of connections with the rest of the brain, a sort of "neural fingerprint", telling us it is doing something unique,' says Professor Rushworth.
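
As a rough computational sketch of that idea (this is not the Oxford group's pipeline, just a minimal illustration with made-up data): each voxel in the region of interest can be described by its connection strengths to a set of other brain areas, and voxels with similar 'fingerprints' can then be grouped, here with k-means into 12 clusters to mirror the number of areas reported.

import numpy as np
from sklearn.cluster import KMeans

# Stand-in data: 5,000 voxels in a region of interest, each described by its
# connection strength to 60 target brain areas (its "connectivity fingerprint").
rng = np.random.default_rng(0)
fingerprints = rng.random((5000, 60))

# Group voxels whose fingerprints look alike; 12 clusters mirrors the 12 areas
# reported for the human ventrolateral frontal cortex.
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(fingerprints)

# The average fingerprint of each cluster is that area's characteristic
# pattern of connections with the rest of the brain.
area_profiles = np.array([fingerprints[labels == k].mean(axis=0) for k in range(12)])
print(area_profiles.shape)  # (12, 60)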

The researchers were then able to compare the 12 areas in the human brain region with the organisation of the monkey prefrontal cortex.

Overall, the two were very similar, with 11 of the 12 areas found in both species and connected to other brain areas in much the same way.

However, one area of the human ventrolateral frontal cortex had no equivalent in the macaque – an area called the lateral frontal pole prefrontal cortex.

'We have established an area in human frontal cortex which does not seem to have an equivalent in the monkey at all,' says first author Franz-Xaver Neubert of Oxford University. 'This area has been identified with strategic planning and decision making as well as "multi-tasking".'

The Oxford research group also found that the auditory parts of the brain were very well connected with the human prefrontal cortex, but much less so in the macaque. The researchers suggest this may be critical for our ability to understand and generate speech.

Contacts and sources:
University of Oxford press office
University of Oxford

Scientists Reveal Cause Of One Of The Most Devastating Pandemics In Human History

An international team of scientists has discovered that two of the world's most devastating plagues – the plague of Justinian and the Black Death, each responsible for killing as many as half the people in Europe—were caused by distinct strains of the same pathogen, one that faded out on its own, the other leading to worldwide spread and re-emergence in the late 1800s. These findings suggest a new strain of plague could emerge again in humans in the future.

"The research is both fascinating and perplexing, it generates new questions which need to be explored, for example why did this pandemic, which killed somewhere between 50 and 100 million people die out?" questions Hendrik Poinar, associate professor and director of the McMaster Ancient DNA Centre and an investigator with the Michael G. DeGroote Institute for Infectious Disease Research.

The findings are dramatic because little has been known about the origins or cause of the Justinian Plague– which helped bring an end to the Roman Empire – and its relationship to the Black Death, some 800 years later.


This photo shows a tooth from a victim of the plague.
Credit: McMaster University

Scientists hope this could lead to a better understanding of the dynamics of modern infectious disease, including a form of the plague that still kills thousands every year.

The Plague of Justinian struck in the sixth century and is estimated to have killed between 30 and 50 million people, virtually half the world's population, as it spread across Asia, North Africa, Arabia and Europe. The Black Death would strike some 800 years later with similar force, killing 50 million Europeans between 1347 and 1351 alone.


Using sophisticated methods, researchers from several universities, including McMaster University, Northern Arizona University and the University of Sydney, isolated minuscule DNA fragments from the 1,500-year-old teeth of two victims of the Justinian plague buried in Bavaria, Germany. These are the oldest pathogen genomes obtained to date.

Using these short fragments, they reconstructed the genome of the oldest Yersinia pestis, the bacterium responsible for the plague, and compared it to a database of genomes of more than a hundred contemporary strains.

The results are published in the online edition of The Lancet Infectious Diseases. They show the strain responsible for the Justinian outbreak was an evolutionary 'dead-end', distinct from strains involved later in the Black Death and the plague pandemics that followed.

The third pandemic, which spread from Hong Kong across the globe, is likely a descendant of the Black Death strain and was thus much more successful than the one responsible for the Justinian Plague.

This photo shows the skeletal remains of plague victims found in the Aschheim-Bajuwarenring cemetery in Bavaria, Germany.
Credit: Photo courtesy of M. Harbeck of the University of Munich

"We know the bacterium Y. pestis has jumped from rodents into humans throughout history and rodent reservoirs of plague still exist today in many parts of the world. If the Justinian plague could erupt in the human population, cause a massive pandemic, and then die out, it suggest it could happen again. Fortunately we now have antibiotics that could be used to effectively treat plague, which lessens the chances of another large scale human pandemic" says Dave Wagner, an associate professor in the Center for Microbial Genetics and Genomics at Northern Arizona University.

The samples used in the latest research were taken from two victims of the Justinian plague, buried in a gravesite in a small cemetery in the German town of Aschheim. Scientists believe the victims died in the latter stages of the epidemic when it had reached southern Bavaria, likely sometime between 541 and 543.

The skeletal remains yielded important clues and raised more questions.

Researchers now believe the Justinian Y. pestis strain originated in Asia, not in Africa as originally thought. But they could not establish a 'molecular clock' so its evolutionary time-scale remains elusive. This suggests that earlier epidemics, such as the Plague of Athens (430 BC) and the Antonine Plague (165 -180 AD), could also be separate, independent emergences of related Y. pestis strains into humans.



"The tick of the plague bacteria molecular clock is highly erratic. Determining why is an important goal for future research" says Edward Holmes, an NHMRC Australia Fellow at the University of Sydney.

Our response to modern infectious diseases is a direct outcome of lessons learned from ancestral pandemics, say the researchers.

"This study raises intriguing questions about why a pathogen that was both so successful and so deadly died out. One testable possibility is that human populations evolved to become less susceptible," says Holmes.

"Another possibility is that changes in the climate became less suitable for the plague bacterium to survive in the wild," says Wagner.

The research was funded in part by the Social Sciences and Humanities Research Council of Canada, Canada Research Chairs Program, U.S. Department of Homeland Security, U.S. National Institutes of Health and the Australian National Health and Medical Research Council.


Contacts and sources:
Michelle Donovan
McMaster University

Nanodiamonds Not Unique To Younger Dryas Sediments, May Not Be Evidence Of Comet Strike 11,000 Years Ago

In a University of Oklahoma-led study, researchers discovered an additional active process, not excluding an extraterrestrial event, that may have led to high concentrations of nanodiamonds in Younger Dryas-age sediments and in sediments less than 3,000 years old. Findings from quantifying sediments of different periods along the Bull Creek valley in the Oklahoma Panhandle suggest the distribution of nanodiamonds was not unique to the Younger Dryas sediments.

“Whatever process produced nanodiamond concentrations in the Younger Dryas sediments may have been active in recent millennia,” said OU scientist Leland Bement, Oklahoma Archeological Survey. Bement led the project with Andrew Madden, OU School of Geology and Geophysics, with collaborators Brian Carter, Oklahoma State University; Alexander Simms, University of California Santa Barbara; and Mourad Benamara, University of Arkansas.

The presence of nanodiamonds in the sedimentological record has been cited as evidence supporting a hypothesis that an ET impact, probably a comet, triggered the Younger Dryas period of global cooling around 11,000 years ago, contributed to the extinction of many animals and altered human adaptations. The OU-led study found no correlation between nanodiamond concentration and alternative processes, including soil formation, erosion, prehistoric human activity and other climate reversals, in Oklahoma Panhandle sediments.

The recent OU-led study, “Quantifying the distribution of nanodiamonds in pre-Younger Dryas to recent age deposits along Bull Creek, Oklahoma Panhandle, USA,” was published in the Proceedings of the National Academy of Sciences, Early Edition.

Nanodiamonds discovered in the Younger-Dryas boundary sediments in the Bull Creek valley of the Oklahoma Panhandle. Such diamonds may support a hypothesis that a comet impact or explosion above the earth’s surface ~11,000 years ago triggered climate change, large mammal extinctions, and altered human cultural trajectories.

Credit:  University of Oklahoma


Contacts and sources:
Jana Smith
University of Oklahoma

Revolutionary Electrical Current Sensors Harvest Wasted Electromagnetic Energy

Groundbreaking passive sensing and energy harvesting technologies safeguard electrical engineering assets

Electricity is the lifeblood of modern cities. It flows at every moment and everywhere, powering everything from the home appliances that improve our comfort and convenience to services such as transportation, building operations, communication and manufacturing that are essential to daily life. To ensure the reliable operation of power grids and the proper delivery of electricity to where it is needed, it is crucial to keep a constant watch on how that electricity moves. As technology advances, the safety, reliability and availability of electrical engineering assets and public utilities can now be guarded by a tiny chip of electrical current sensors.

These smart wireless sensors can now reach hard-to-access locations such as rails where conventional sensors are either impossible or not cost effective.
HKPolyU

Measuring about 1 mm in thickness, the chip is a masterpiece by Professor Derek Siu-wing Or and his research team in the Department of Electrical Engineering of The Hong Kong Polytechnic University. The chip can be placed at any sensing point of interest, such as electrical cables, conductors, junctions and bus bars, to detect electrical currents. What's more, it does not require the additional power supplies and signal conditioners generally needed by traditional current sensors such as Hall sensors and reluctance coils.

According to Professor Or, the chip is an amazing work of advanced functional materials. Made from rare earth multiferroics with giant magnetoelectric properties, the chip enables a direct detection of magnetic fields generated by electricity and a linear conversion of these magnetic fields into electrical voltage signals. The amplitude of the converted signals is linearly proportional to the magnetic fields, while their frequency exactly follows the magnetic fields. The “magnetoelectric smart material”, as called by the team, is then specially engineered into “self-sustainable magnetoelectric smart sensors” that recognize telltale changes of electrical currents within electrical equipment. It is as simple as using a thermometer to give temperatures.
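
Because the response described above is linear, the readout itself is conceptually simple. The sketch below is only a hedged illustration with a hypothetical sensitivity coefficient, not PolyU's implementation: the measured voltage waveform is scaled back to an estimated current, and its frequency is left untouched.

import numpy as np

# Hypothetical calibration constant, assumed for illustration only.
VOLTS_PER_AMP = 0.02       # assumed linear sensitivity of the sensor (V per A)
SAMPLE_RATE = 10_000       # samples per second

def voltage_to_current(voltage_samples, volts_per_amp=VOLTS_PER_AMP):
    """Invert the assumed linear response: current = voltage / sensitivity."""
    return np.asarray(voltage_samples) / volts_per_amp

# Simulated sensor output for a 50 Hz line current with 10 A amplitude.
t = np.arange(0, 0.1, 1 / SAMPLE_RATE)
measured_voltage = VOLTS_PER_AMP * 10 * np.sin(2 * np.pi * 50 * t)

estimated_current = voltage_to_current(measured_voltage)
print(round(float(estimated_current.max()), 2))  # ~10.0 A; the 50 Hz frequency is preserved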

The exciting part is that Professor Or and his team have done away with the power supplies and signal conditioners of traditional current sensors. With power and signal-conditioning requirements eliminated, the smart sensors need no power cords or active electronic components, and they can be used conveniently, safely and reliably for early fault detection in previously unreachable territory.

Professor Or explained, “Our smart sensors are essentially simple, totally passive and capable of producing large and clear output voltage signals which are 2,000 times higher than the traditional current sensors. This passive and self-sustainable nature allows real-time, nonstop monitoring of the ‘health’ of electrical equipment, including those carrying high voltages, heavy currents and/or strong electromagnetic fields.

“Besides, these smart sensors can be tailored to harvest electromagnetic radiations emitted by the electrical equipment being monitored and to turn them into useful electrical energy. The stored electrical energy can be used to power up microcontrollers, displays, wireless transmitters, etc., further advancing the smart sensor technology toward ‘energy-harvesting smart wireless sensors’.”

The smart wireless sensors are being tested in electrical traction systems on trains in both Hong Kong and Singapore to provide in-situ monitoring of traction conditions and to detect electrical faults that may bring train services to a halt.

The benefits of the smart wireless sensor innovation go well beyond these advantages. For example, smart wireless sensors can now reach hard-to-access locations such as rails, tunnels, high-rises, underground premises, meter rooms, etc., where hardwired power cords and signal cables are either impossible or not cost effective. Another example is that the patented technology allows quick detection of malfunctions of ventilation fans inside tunnels, reducing the need of tunnel services suspension.

The journey does not end here; in fact, the research team is working further to perfect the technology. Professor Or said, “We aim to enhance the energy harvesting capability while making the smart sensors even more sensitive and reliable in measurement.” Their research work has been supported by E-T-A Elektrotechnische Apparate GmbH (E-T-A) through a EUR500,000 fund. As a global leader in electrical circuit protection, the German company focuses on advancing electrical circuit protection technology. Professor Or and E-T-A are working together to embed the smart wireless sensor technology in new generation electrical circuit protection products that would meet the highest standards in terms of innovation, safety, reliability and efficiency.

A leading power company has engaged Professor Or and his research team in a large-scale project to supply, test and commission a significant number of smart sensors for use in substations. Imagine a power cable that would beep when it is sick and beep even louder when it is about to give out. In the near future, our power grids could be smarter than they are today.


Contacts and sources:

Monday, January 27, 2014

Solving A 30-Year-Old Problem In Massive Star Formation

An international group of astrophysicists has found evidence strongly supporting a solution to a long-standing puzzle about the birth of some of the most massive stars in the universe.

Young massive stars, which have more than 10 times the mass of the Sun, shine brightly in the ultraviolet, heating the gas around them, and it has long been a mystery why the hot gas doesn't explode outwards.

This false-color Very Large Array image of the ionized gas in the star forming region Sgr B2 Main was used to detect small but significant changes in brightness of several of the sources. The spots and filaments in this image are regions of ionized gas around massive stars. The changes in brightness detected support a model that could solve a 30-year-old question in high mass star formation.
Credit: NRAO/Agnes Scott College

Now, observations made by a team of researchers using the Jansky Very Large Array (VLA), a radio astronomy observatory in New Mexico, have confirmed predictions that as the gas cloud collapses, it forms dense filamentary structures that absorb the star's ultraviolet radiation when it passes through them. As a result, the surrounding heated nebula flickers like a candle.

The findings, made by scientists working at Agnes Scott College, Universität Zürich, the American Museum of Natural History, Harvard-Smithsonian Center for Astrophysics, National Radio Astronomy Observatory, European Southern Observatory, and Universität Heidelberg, were published recently in The Astrophysical Journal Letters.

"Massive stars dominate the lives of their host galaxies through their ionizing radiation and supernova explosions," said Mordecai-Mark Mac Low, a curator in the American Museum of Natural History's Department of Astrophysics and an author on the paper. "All the elements heavier than iron were formed in the supernova explosions occurring at the ends of their lives, so without them, life on Earth would be very different."

Stars form when huge clouds of gas collapse. Once the density and temperature are high enough, hydrogen fuses into helium, and the star starts shining. The most massive stars, though, begin to shine while the clouds are still collapsing. Their ultraviolet light ionizes the surrounding gas, forming a nebula with a temperature of 10,000 degrees Celsius. Simple models suggest that at this stage, the gas around massive stars will quickly expand. But observations from the VLA radio observatory show something different: a large number of regions of ionized hydrogen (so-called HII regions) that are very small. 
 
Observations of the massive star forming region Sgr B2 were made with the Karl G. Jansky Very Large Array (VLA) in 1989 and 2012. The VLA has been operational since 1980 and received a major upgrade that was completed in 2011.

Credit: NRAO/AUI

"In the old theoretical model, a high-mass star forms and the HII region lights up and begins to expand. Everything was neat and tidy," said lead author Chris De Pree, a professor of astronomy and director of the Bradley Observatory at Agnes Scott College. "But the group of theorists I am working with were running numerical models that showed accretion was continuing during star formation, and that material was continuing to fall in toward the star after the HII region had formed."

Recent modeling has shown that this is because the interstellar gas around massive stars does not fall evenly onto the star but instead forms filamentary concentrations because the amount of gas is so great that gravity causes it to collapse locally. The local areas of collapse form spiral filaments. When the massive star passes through the filaments, they absorb its ultraviolet radiation, shielding the surrounding gas. This shielding explains not only how the gas can continue falling in, but why the ionized nebulae observed with the VLA are so small: the nebulae shrink when they are no longer ionized, so that over thousands of years, they appear to flicker like a candle.

"These transitions from rarefied to dense gas and back again occur quickly compared to most astronomical events," said Dr. Mac Low, a curator in the Museum's Department of Astrophysics. "We predicted that measurable changes could occur over times as short as a few decades."

The new study tested this theory with a 23-year-long experiment. The researchers used VLA observations of the Sagittarius B2 region made in 1989 and again in 2012. This massive star-forming region located near the Galactic center contains many small regions of ionized gas around high-mass stars, providing a large number of candidates for flickering. During this time, four of the HII regions indeed significantly changed in brightness.
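
The kind of comparison involved can be sketched in a few lines of Python (a hedged illustration, not the authors' actual analysis): given flux measurements of the same HII regions at the two epochs, with their uncertainties, a source is flagged as variable when its change exceeds what the measurement errors alone would allow.

import numpy as np

def flag_variable_sources(flux_epoch1, flux_epoch2, err_epoch1, err_epoch2, n_sigma=3.0):
    """Flag sources whose flux change between two epochs exceeds n_sigma times
    the combined measurement uncertainty."""
    f1, f2 = np.asarray(flux_epoch1), np.asarray(flux_epoch2)
    sigma = np.sqrt(np.asarray(err_epoch1) ** 2 + np.asarray(err_epoch2) ** 2)
    return np.abs(f2 - f1) > n_sigma * sigma

# Made-up fluxes in arbitrary units, purely for illustration (not Sgr B2 measurements).
flux_1989 = [1.0, 2.5, 0.8, 3.1]
flux_2012 = [1.1, 1.6, 0.8, 4.0]
errors = [0.05, 0.05, 0.05, 0.05]
print(flag_variable_sources(flux_1989, flux_2012, errors, errors))
# -> [False  True False  True]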

"The long term trend is still the same, that HII regions expand with time," De Pree said. "But in detail, they get brighter or get fainter and then recover. Careful measurements over time can observe this more detailed process."



The publication can be viewed at: http://arxiv.org/abs/1312.7768





Contacts and sources:
Kendra Snyder
American Museum of Natural History