Unseen Is Free


Monday, October 31, 2016

Bio-Active Prosthetic Joints for More Successful Implants

An unsuccessful joint replacement causes pain, immobility and a progressively unstable hip or knee that needs repeat surgery. The EU-funded project BIOSTEM has developed a coating for artificial joints to improve the chances of stability and avoid these symptoms that threaten patient quality of life.

The current treatment for osteoarthritis (OA) is to replace the affected joint with an implant. More than 270,000 knee and hip replacements are carried out annually in the UK alone. Unfortunately, the device sometimes fails to integrate with the bone, which may already be compromised by diseases such as osteoarthritis.

Credit: © georgerudy - fotolia.com


The answer is to make the implant 'biologically active' so that after integration, the bone and the implant behave more like one entity. "Coating with a biological material that will promote bone formation should encourage ‘osseo’ or bone integration," says Mary Murphy, scientific project leader within BIOSTEM.

Surface treatment

Coating the implant is one way to improve implant success. Previous research has shown that surface properties strongly influence the fate of the implant and whether it is likely to become unstable. The BIOSTEM team has modified poly-ether-ether-ketone (PEEK) as a coating for metal implants. PEEK is a hard plastic with a successful track record in spinal, facial and skull surgery. Its drawback for joint replacement is that it is biologically inactive.

Modified collagen was therefore added to the PEEK; found naturally in bones, muscles and skin, collagen forms a scaffold to provide strength and structure. To complete the recipe, mesenchymal stem cells were added. These can produce bone cells to increase osseointegration.

The project has also tested a novel device to apply the coating that bombards the outer surface of PEEK with a gas containing the collagen. The molecules of collagen react with the destabilised outer area and become incorporated into the PEEK. The machine will fit on a lab bench and can therefore be used directly where it is needed – in the hospitals carrying out operations.

Promising trials

Preclinical trials on rabbits with PEEK implants are almost complete. The mechanical testing produced good results, indicating increased osseointegration.

Examination of the joints at molecular level after removal also proved encouraging. Protein modification indicated that the collagen component of the coating had been incorporated and there were positive effects on the potential of stem cells to form bone.

The team is optimistic that further optimisation of the coating with the addition of growth factors will add to these promising results, and expects additional improvements in implant quality. "Specific growth factors are required for generation of new bone and will increase osseointegration without potential side effects when localised to the area where they are needed," Murphy explains.

The future for total joint replacement

The incidence of OA and the associated need for joint replacements will rise as the average age of the population increases. More critical is the rise in the number of younger people receiving joint replacements, due in part to high-impact activities such as running. Murphy sums up the potential of the BIOSTEM research: “PEEK wears well, and producing biologically active PEEK will increase osseointegration of joint implants to decrease the need for costly revision surgeries."

Young researcher input

Jessica Hayes, the fellow involved in the BIOSTEM project, had just finished her PhD in surface modification for titanium implant removal in paediatrics when she was recruited for this Marie Curie Intra-European Fellowship for Career Development.

"Involvement in this successful programme has certainly contributed to my subsequent achievements, such as becoming a Lindau Fellow, and my future involvement in a Horizon 2020 project, AUTOSTEM," she says.




Contacts and sources:
EC Research and Innovation

Nano-Sized Protein Particles Promise Healthy Food Revolution

An EU-funded scientist has identified a promising method of encapsulating bio-active molecules in protein-based systems, which could enable food-makers to develop tastier and more nutritious products. Long term, this will contribute towards a healthier population and help reduce diet-related diseases.

Fortifying foods with vitamins and other bio-active compounds has become common practice in Europe; you just have to look at the ingredients panel on a packet of sliced bread or a cornflakes box. This is often achieved by coating – or encapsulating – certain molecules and compounds in order to conceal their bad taste or smell and protect them during processing.

© Nitr - fotolia.com

However, encapsulating certain unstable bio-active molecules has consistently been a major hurdle for the functional food sector. “What the industry needs is a way of treating certain compounds in a manner that ensures their stability, protects them during food processing and then releases these compounds at the right time, either during food processing or in the gastrointestinal tract,” explains project coordinator Iris Joye, who is currently building on her successful research as assistant professor at the University of Guelph in Canada.

Rising to the challenge

Joye was able to focus on this challenge thanks to an EU-funded Marie Curie International Outgoing Fellowship, which took her from the University of Leuven in Belgium to the University of Massachusetts in the US. Supported by world-leading cereal science and encapsulation experts, she began by investigating the potential of emulsions to encapsulate oxidising agents for bread-making, but soon discovered that protein nanoparticles were more promising.

“The grant was flexible enough that it enabled me to change track,” she says. “We found that emulsion-based systems were not that convenient to work with as they tended to be unstable during bread-making steps, such as mixing and heating. Protein nanoparticles, on the other hand, were an effective way of encapsulating bio-active molecules and protecting unstable compounds during food processing.”

A major challenge for food-makers has been how to effectively encapsulate hydrophilic molecules such as certain polyphenols and vitamin C – which dissolve in water – as these tend to break down easily during food processing. “Protein-based delivery systems are very versatile and we found them to be promising for encapsulating and protecting both hydrophobic [insoluble in water] and hydrophilic molecules,” says Joye. “Obtaining more insight into the interactions between proteins and these molecules is the key to the development of economically viable delivery systems.”

This new method of encapsulation promises to make it possible to add bio-active compounds to food products, such as cereal products, more effectively. The compounds are then released into the gut at the right moment for maximum quality and nutritional benefit.

A bright future

While the protein-based encapsulation systems developed by Joye have recorded positive results, no one really knows what is going on at the molecular level. Understanding what molecular changes are happening and how these chemical interactions can be controlled is one of the focuses of Joye’s current work at the University of Guelph. “The commercial side is still in its infancy,” she says. “If successful, patents will certainly be applied for, and collaborations with industrial partners sought. However, the incredibly small size of these particles means that a thorough toxicity assessment will be needed first.”

In addition to uncovering the encapsulating potential of nano-sized protein particles, the Marie Curie grant has also enabled Joye to build up an extensive international network of fellow young scientists who are also just launching their academic careers. “This has been invaluable to me,” she says. “Through working in another scientific environment and setting, this project has transformed me into a critical and more self-aware scientist and it certainly inspired me by exposing me to totally different research cultures.”

It is great that the outgoing researchers come back to the EU and share their knowledge and expertise, as this helps to build up international collaborations, she adds. “The grant has definitely served as a propelling force behind my selection for faculty positions both in Belgium and in Canada.”





Contacts and sources:
EC Research and Innovation

Project acronym: EMULSIFOOD

Bizarre Extinct Spike-Toothed Salmon Reached 400 Pounds


The ancient coastal waters of the Pacific, roughly 11 to 5 million years ago, were home to a bizarre and fascinating species of giant salmon with large spike-like teeth. This spike-toothed salmon reached sizes of 3 to 9 feet in length (1-3 meters), much larger than the typical salmon found in the Pacific today. These hefty spike-toothed fish would have made for a difficult catch at nearly 400 pounds (177 kg).

This is an illustration by Jacob Biewer.

Credit: Society of Vertebrate Paleontology

The spike-like teeth of the salmon could be over an inch long (3 cm), much longer than the teeth of modern Pacific salmon, even after compensating for their larger size. Researchers from California State University in Turlock, California, have been studying the strange teeth of these unusual fish and have discovered some tantalizing clues about their past behavior and life history.

Much like modern Pacific salmon, the giant salmon was likely primarily a filter-feeder, so the spike teeth were probably not part of catching prey. Modern salmon go through physical changes in their body, especially their skull, before migrating upriver to spawn where males will fight to defend the eggs they have fertilized. 

To see if these teeth played an important role in breeding of the giant fossil salmon, the team of researchers, led by Dr. Julia Sankey, compared 51 different fossils from ancient deposits of both freshwater and saltwater environments. The teeth of salmon found in past freshwater environments were consistently longer and more recurved, with much larger bases, and showed clear signs of wear. Fossil salmon teeth from saltwater deposits were much smaller and less worn. This indicates that the teeth changed prior to the migration upriver to spawn.

This is an illustration by Jacob Biewer
Credit: Society of Vertebrate Paleontology

These results help show that the impressive spike-like teeth of the giant salmon were indeed used as part of the breeding process in these extinct fish. Researchers think it is likely these hefty bruisers used their spike-like teeth for fighting and display against each other during the spawning season, up in the ancient rivers of California.

"These giant, spike-toothed salmon were amazing fish. You can picture them getting scooped out of the Proto-Tuolumne River [near Modesto, California] by large bears 5 million years ago," said Dr. Sankey. "Scientifically, our research on the giant salmon is filling in a gap in our knowledge about how these salmon lived, and specifically, whether they developmentally changed prior to migration upriver like modern salmon do today.

"This research is also helping paint the picture of this area 5 million years ago for the general public and my college students, and it excites them to think of this giant salmon swimming up our local rivers 5 million years ago!"

Dr. Sankey and colleagues presented their research at this year's meeting of the Society of Vertebrate Paleontology in Salt Lake City, Utah.
 



Contacts and sources:
Serena Weisman
Society of Vertebrate Paleontology

Julia T. Sankey
California State University, Stanislaus


How Frankenstein Saved Mankind from Probable Extinction

Frankenstein as we know him, the grotesque monster that was created through a weird science experiment, is actually a nameless Creature created by scientist Victor Frankenstein in Mary Shelley's 1818 novel, "Frankenstein."

Widely considered the first work of science fiction for its exploration of the destructive consequences of scientific and moral transgression, the novel is the subject of a new study published in "BioScience," which argues that the horror of Mary Shelley's gothic tale is rooted in a fundamental principle of biology. (A PDF of the study is available upon request.)

Steel engraving (993 x 71mm) for frontispiece to the revised edition of Frankenstein by Mary Shelley, published by Colburn and Bentley, London 1831. The novel was first published in 1818.
Credit: Wikimedia

The co-authors point to a pivotal scene in which the Creature encounters Victor Frankenstein and requests a female companion to mitigate his loneliness. The Creature distinguishes his dietary needs from those of humans and expresses a willingness to inhabit the "wilds of South America," suggesting distinct ecological requirements. Frankenstein concedes to this reasoning, given that humans would have few competitive interactions with a pair of isolated creatures, but he then reverses his decision after considering the creatures' reproductive potential and the resulting probability of human extinction through a process now termed competitive exclusion. In essence, Frankenstein was saving humankind.

"The principle of competitive exclusion was not formally defined until the 1930s," said Nathaniel J. Dominy, a professor of anthropology and biological sciences at Dartmouth. "Given Shelley's early command of this foundational concept, we used computational tools developed by ecologists to explore if, and how quickly, an expanding population of creatures would drive humans to extinction."

Poster from the 1931 film Frankenstein.
Credit: Universal Pictures/Wikimedia Commons

The authors developed a mathematical model based on human population densities in 1816, finding that the competitive advantages of creatures varied under different circumstances. The worst-case scenario for humans was a growing population of creatures in South America, as it was a region with fewer humans and therefore less competition for resources. "We calculated that a founding population of two creatures could drive us to extinction in as little as 4,000 years," said Dominy. Although the study is merely a thought experiment, it casts new light on the underlying horror of the novel: our own extinction. It also has real-world implications for how we understand the biology of invasive species.
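The study's actual model is not reproduced in this article, but the idea of competitive exclusion can be illustrated with a minimal Lotka-Volterra competition sketch like the one below. Every parameter value (growth rates, carrying capacities, competition coefficients) is a hypothetical stand-in chosen only to show how a small founding population can, in principle, exclude a much larger one; these are not the figures used by Dominy and Yeakel.

# Minimal Lotka-Volterra competition sketch illustrating competitive exclusion.
# All parameter values below are hypothetical and are NOT taken from the
# BioScience study; they only demonstrate the qualitative behavior.

def simulate(years=10000, dt=1.0):
    h, c = 1.0e9, 2.0          # humans (rough 1816 scale) and a founding pair of creatures
    rh, rc = 0.01, 0.03        # assumed per-year intrinsic growth rates
    Kh, Kc = 1.2e9, 1.2e9      # assumed carrying capacities
    a_hc, a_ch = 1.5, 0.5      # competition coefficients: creatures out-compete humans

    t = 0.0
    while t < years and h >= 1.0:
        dh = rh * h * (1.0 - (h + a_hc * c) / Kh)
        dc = rc * c * (1.0 - (c + a_ch * h) / Kc)
        h = max(h + dh * dt, 0.0)
        c = max(c + dc * dt, 0.0)
        t += dt
    return t, h, c

if __name__ == "__main__":
    t, h, c = simulate()
    print(f"stopped at year {t:.0f}: humans ~{h:.3g}, creatures ~{c:.3g}")

With exclusion conditions like these (the creatures' competition coefficient above 1, the humans' below 1), the human population declines toward zero; how quickly depends entirely on the assumed rates.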

"To date, most scholars have focused on Mary Shelley's knowledge of then-prevailing views on alchemy, physiology and resurrection; however, the genius of Mary Shelley lies in how she combined and repackaged existing scientific debates to invent the genre of science fiction," said Justin D. Yeakel, an Omidyar fel

low at the Santa Fe Institute and an assistant professor in the School of Natural Sciences at the University of California, Merced. "Our study adds to Mary Shelley's legacy, by showing that her science fiction accurately anticipated fundamental concepts in ecology and evolution by many decades."

Frankenstein (1931 film) Trailer

Credit: Universal Pictures/Wikimedia Commons




Contacts and sources:
by Amy Olson
Dartmouth College

See A Dead Star's Ghostly Glow

The eerie glow of a dead star, which exploded long ago as a supernova, reveals itself in this NASA Hubble Space Telescope image of the Crab Nebula. But don't be fooled. The ghoulish-looking object still has a pulse. Buried at its center is the star's tell-tale heart, which beats with rhythmic precision.

Astronomers discovered a real "tell-tale heart" in space, 6,500 light-years from Earth. The "heart" is the crushed core of a long-dead star, called a neutron star, which exploded as a supernova and is still beating with rhythmic precision. Evidence of its heartbeat comes as rapid-fire, lighthouse-like pulses of energy from the fast-spinning neutron star. The stellar relic is embedded in the center of the Crab Nebula, the expanding, tattered remains of the doomed star.

Credits: NASA and ESA, Acknowledgment: M. Weisskopf/Marshall Space Flight Center

The "heart" is the crushed core of the exploded star. Called a neutron star, it has about the same mass as the sun but is squeezed into an ultra-dense sphere that is only a few miles across and 100 billion times stronger than steel. The tiny powerhouse is the bright star-like object near the center of the image.

This time-lapse movie of the Crab Nebula, made from NASA Hubble Space Telescope observations, reveals wave-like structures expanding outward from the "heart" of an exploded star. The waves look like ripples in a pond. The heart is the crushed core of the exploded star, or supernova. Called a neutron star, it has about the same mass as the sun but is squeezed into an ultra-dense sphere that is only a few miles across and 100 billion times stronger than steel.  The movie is assembled from 10 Hubble exposures taken between September and November 2005 by the Advanced Camera for Surveys.

Credits: NASA and ESA, Acknowledgment: J. Hester (Arizona State University)

This surviving remnant is a tremendous dynamo, spinning 30 times a second. The wildly whirling object produces a deadly magnetic field that generates an electrifying 1 trillion volts. This energetic activity unleashes wisp-like waves that form an expanding ring, most easily seen to the upper right of the pulsar.  The bright object to the left of the neutron star is a foreground or background star.

The nebula's hot gas glows in radiation across the electromagnetic spectrum, from radio to X-rays. The Hubble exposures were taken in visible light as black-and-white exposures. The Advanced Camera for Surveys made the observations between January and September 2012. The green hue, which gives the nebula a Halloween theme, represents the color range of the filter used in the observations.

The Crab Nebula is one of the most historic and intensively studied supernova remnants. Observations of the nebula date back to 1054 A.D., when Chinese astronomers first recorded seeing a "guest star" during the daytime for 23 days. The star appeared six times brighter than Venus. Japanese, Arabic, and Native American stargazers also recorded seeing the mystery star. 

In 1758, while searching for a comet, French astronomer Charles Messier discovered a hazy nebula near the location of the long-vanished supernova. He later added the nebula to his celestial catalog as "Messier 1," marking it as a "fake comet." Nearly a century later, British astronomer William Parsons sketched the nebula. Its resemblance to a crustacean led to M1's other name, the Crab Nebula. In 1928, astronomer Edwin Hubble first proposed associating the Crab Nebula with the Chinese "guest star" of 1054.

The nebula, bright enough to be visible in amateur telescopes, is located 6,500 light-years away in the constellation Taurus.





Contacts and sources:
Ray Villard
Space Telescope Science Institute

Sunday, October 30, 2016

How Early Pacific Seafarers Populated One of Earth's Most Remote Regions


Seafaring settlers traveled hundreds — even thousands — of miles, navigating by stars and overcoming ocean currents and difficult weather for months to arrive in a region that includes present-day Tonga, Samoa, Hawaii, Micronesia and Fiji.

Now research by a team that includes Scott Fitzpatrick, a professor in the UO Department of Anthropology, provides insights into how these early travelers came to populate one of the most remote regions on Earth.

Map provides a synthesis of results from computer simulations and climatic data that were used to analyze ocean routes across the Pacific Ocean showing viable ocean crossings, weather patterns, seasonal variations and other factors in Remote Oceania.

Credit: Scott Fitzpatrick


“Where did these people come from? How did they get to these really remote places, and what were the factors, culturally, technologically and politically that led to these population dispersals?” Fitzpatrick said. “These are really big questions for Pacific archaeology and other related disciplines.”

Fitzpatrick and his team, which includes Ohio State University geographer Alvaro Montenegro and University of Calgary archaeologist Richard Callaghan, offer some potential answers in a paper published Oct. 24 in the online early edition of the Proceedings of the National Academy of Sciences.

The paper, “Using Seafaring Simulations and ‘Shortest Hop’ Trajectories to Model the Prehistoric Colonization of Remote Oceania,” details the team’s use of computer simulations and climatic data to analyze ocean routes across the Pacific. The simulations take into account high-resolution data for winds, ocean currents, land distribution and precipitation.



Replica of a Palauan outrigger canoe

Credit: Scott Fitzpatrick

“We synthesized a lot of new climatic data and ran a lot of new simulations that are exciting in terms of highlighting and pinpointing where some of these prehistoric populations might have come from,” Fitzpatrick said. “The simulation can assess, at any point in time, if somebody left point A, where would they end up if they drifted? We can also model directed voyages. If somebody knew where they were going, how long would it take them to get there?”

Fitzpatrick and his colleagues used their simulations to identify the most likely ports of departure for the settlers of five major regions in Remote Oceania. To account for course variations due to wind and currents, the team created shortest-hop trajectories to assess the likely paths of least resistance. This technique factors in the role that distance and remoteness may have played in facilitating voyaging from one island to another and can include changes in sea level at different points in time that may have made these trips easier or harder.
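The team's simulations ingest high-resolution wind, current and precipitation data that cannot be included here. As a rough illustration of the kind of calculation a drift simulation performs, the sketch below advects a canoe by the surface current plus a fraction of the wind (its "leeway"); the current_at and wind_at functions are hypothetical placeholders, not the study's data, and the starting point is an arbitrary example.

import math

# Toy drift-voyage step, illustrating the kind of calculation such simulations
# perform. A real model would interpolate gridded reanalysis fields; the
# placeholder functions below return constant, made-up values.

def current_at(lat, lon, day):
    # placeholder: weak eastward surface current (m/s eastward, m/s northward)
    return 0.1, 0.0

def wind_at(lat, lon, day):
    # placeholder: easterly trade wind (m/s eastward, m/s northward)
    return -5.0, 0.0

def drift(lat, lon, days, leeway=0.03, dt_hours=6):
    """Advect a drifting canoe by surface current plus a fraction of the wind."""
    steps_per_day = 24 // dt_hours
    for day in range(days):
        for _ in range(steps_per_day):
            cu, cv = current_at(lat, lon, day)
            wu, wv = wind_at(lat, lon, day)
            u = cu + leeway * wu            # east-west velocity (m/s)
            v = cv + leeway * wv            # north-south velocity (m/s)
            dt = dt_hours * 3600.0
            lat += (v * dt) / 111_000.0     # metres per degree of latitude
            lon += (u * dt) / (111_000.0 * math.cos(math.radians(lat)))
    return lat, lon

if __name__ == "__main__":
    # e.g. a 30-day drift starting from roughly Samoan waters (arbitrary example)
    print(drift(-13.8, -171.8, days=30))

A directed-voyage version of the same loop would add a paddling or sailing velocity toward a target island and record the crossing time, which is essentially what the "how long would it take them to get there" scenarios measure.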

Seafaring models have been developed and used by other researchers in the past, Fitzpatrick said, but they weren’t able to fully harness the high-resolution satellite data sets that have only recently been made available to scientists.

The research team’s simulations include El Nino Southern Oscillation patterns, which settlers most likely knew about and used to their advantage, Fitzpatrick said. Archaeological records reflect El Nino occurrences, which typically happen every three to seven years and can be seen in evidence marking droughts and fires. Because winds and associated precipitation shift from westerly to easterly during El Nino years, settlers would have found travel toward Remote Oceania more favorable when the pattern was occurring and may have timed their departures accordingly.

“What Pacific scholars have long surmised but never really been able to establish very well is that, through time, Pacific Islanders should have developed a great deal of knowledge of different climatic variations, different oscillations of wind and changes in environments that would have influenced their survivability and their ability to go to certain places,” Fitzpatrick said.


View of the reefs and islands around Airai Bay, Palau

Credit: Scott Fitzpatrick

The analysis provides insights about the origins of the early seafarers. The settlers of western Micronesia probably came from near the Maluku Islands, also known as the Spice Islands. Some of the team’s findings challenge current archaeological theories, while other data support existing lines of evidence. The research suggests Samoa was the most likely staging area for colonizing East Polynesia. It also indicates that Hawaii and New Zealand may have been settled from the Marquesas or Society Islands. Easter Island may have been settled from the Marquesas or Mangareva.

The new paper also highlights areas worthy of further research. It suggests that Samoa may have been an epicenter for colonization and challenges data that suggests the Philippines as a potential point of departure for the settlement of Micronesia.

The three team members brought complementary expertise to the study. Fitzpatrick contributed his knowledge of the archaeology of island and coastal regions in the Pacific and the Caribbean. Montenegro, the paper’s lead author, is a geographer and climatologist. Callaghan is an archaeologist specializing in seafaring simulations.

A question that may never be resolved is why seafaring settlers traveled such immense distances. Although evidence exists that some settlers were motivated by a desire to obtain new resources such as basalt or obsidian for making stone tools, Fitzpatrick said, there’s no easy way to explain the leap of faith it would take to set off on a colonizing mission of 400 to 2,500 miles.

“What drove the movement? That’s the big question," he said. "Was it political? Was it a result of population pressure? There were probably multiple reasons why people decided to leave one place and go to another.”





Contacts and sources:
Lewis Taylor
University of Oregon

Carcinogen Hexavalent Chromium Found in 90% of Tested Water Wells in North Carolina's Piedmont


Hexavalent chromium, a carcinogen made famous by the movie Erin Brockovich, is far more abundant in drinking water wells in North Carolina than previously thought, a new Duke University study finds.

The contamination doesn't, however, stem from leaking coal ash ponds as many people feared after state officials tested wells near coal plants last year and detected potentially harmful levels of hexavalent chromium in the water.

Instead, it's caused by the natural leaching of mostly volcanic rocks in aquifers across the Piedmont region.

"About 90 percent of the wells we sampled had detectable levels of hexavalent chromium, and in many cases the contamination is well above recommended levels for safe drinking water. But our analysis clearly shows it is derived from natural sources, not coal ash," said Avner Vengosh, professor of geochemistry and water quality at Duke's Nicholas School of the Environment.

Groundwater testing revealed that nine out of ten drinking water wells in North Carolina's Piedmont region contain detectable levels of the carcinogen hexavalent chromium, and that the contamination stems from natural sources.

Courtesy Avner Vengosh, Duke University

"This doesn't mean it poses less of a threat," Vengosh stressed. "If anything, because the contamination stems from water-rock interactions that are common across the Piedmont region, people in a much larger geographic area may be at risk. This is not limited only to wells near coal ash ponds.

"The bottom line is that we need to protect the health of North Carolinians from the naturally occurring threat of hexavalent chromium, while also protecting them from harmful contaminants such as arsenic and selenium, which our previous research has shown do derive from leaking coal ash ponds," Vengosh said. "The impact of leaking coal ash ponds on water resources is still a major environmental issue."

To conduct the new study, the researchers collected groundwater samples from 376 wells located both close to and far from coal ash ponds across the Piedmont region of central North Carolina. Using forensic geochemical tracers, they analyzed each sample for a wide range of inorganic chemicals, including hexavalent chromium.

The tracers, which were developed by Vengosh and his team, allowed the scientists to identify the geochemical fingerprints of contaminants in the groundwater and trace each contaminant back to its source.

"Our analysis showed that groundwater samples with high levels of hexavalent chromium have very different geochemical fingerprints than what we see in groundwater contaminated from leaking coal ash ponds," Vengosh said.

"This, combined with the wide geographic distribution of samples containing elevated hexavalent chromium - regardless of proximity to a coal ash pond - points to the natural leaching of chromium from aquifer rocks in certain Piedmont geological formations," he said.

Piedmont formations with volcanic rocks are common across the southeastern United States and other areas worldwide, Vengosh noted, so millions of people in regions outside North Carolina with similar aquifers may be exposed to hexavalent chromium without knowing it.

The Duke team published its findings October 26 in the peer-reviewed journal Environmental Science and Technology Letters.

In 2015, water-quality officials in North Carolina issued temporary "do not drink" recommendations to residents living near coal-burning plants after tests detected potentially harmful levels of hexavalent chromium in their well water samples. Because elevated levels of chromium typically occur in coal ash, many people assumed the contamination was linked to the coal ash ponds.

Vengosh's team's study is the first to show otherwise.

The current drinking water standard for chromium in the United States is 100 parts per billion. This is based on an assumption that most chromium contained in drinking water is composed of a less toxic form known as trivalent chromium. Only California has set a statewide standard of 10 parts per billion for the much more toxic hexavalent form.

Vengosh hopes his study's findings will lead more states to establish hexavalent chromium standards of their own. "One of the most striking outcomes of this study is that it shows the concentration of hexavalent chromium in groundwater is almost identical to the concentration of total dissolved chromium, measured by a totally different technique," he said. "That means that when you find chromium in groundwater, it is actually composed of the toxic hexavalent form, not the less toxic trivalent form."




Contacts and sources:
Tim Lucas
Duke University

Saturday, October 29, 2016

Inside A Mysterious Giant Space Blob A Galaxy Gestates


Scientists have witnessed galaxies forming inside a mysterious giant space blob, which will one day form the heart of a giant galaxy cluster.

Lyman-alpha Blobs (LABs) are gigantic clouds of hydrogen gas that can span hundreds of thousands of light years. Their structure looks relatively simple, but they glow far more brightly than might be expected.

Credit: © Imperial College London.


What causes the bright glow has been a mystery for 15 years, but now, scientists have confirmed that two galaxies are forming within the largest ever Lyman-alpha Blob yet discovered – LAB-1. Using an advanced telescope, the researchers have deduced that the blob is creating stars over 100 times faster than the Milky Way. It is this frenzy of star formation that lights up the surrounding blob.

Key: 1) Central starburst galaxies, detected with ALMA. 2) Surrounding satellite – Low mass companions. Most of these are too faint to detect directly. 3) Central galaxies are emitting Lyman-alpha (Ly-α) photons from star formation. 4) The photons scatter off clouds of cold gas in the circumgalactic medium. Most of the cold gas is around satellites. 5) Scattered Ly-α escapes to our line of sight, giving rise to extended blob.

 Credit: © Imperial College London.


LAB-1, or SSA22-LAB 1, was first seen in 2000. It was the first LAB to be discovered, and is located so far away that its light has taken 11.5 billion years to reach us. Measuring 300,000 light years across, LAB-1 is three times larger than the Milky Way, and this research shows for the first time that it is powered by elliptical galaxies at its centre.

A team led by the University of Hertfordshire, and including researchers from Imperial College London, used the Atacama Large Millimeter/Submillimeter Array (ALMA), an array of telescopes with unparalleled ability to observe light from dust clouds in distant galaxies, to peer deeply into LAB-1. This allowed them to pinpoint several sources of radiation and light within the space blob, where they spotted the two young, growing elliptical galaxies.

The team, part of the JCMT Legacy Survey, then combined the ALMA images with observations from the Multi Unit Spectroscopic Explorer (MUSE) instrument mounted on the European Southern Observatory (ESO)’s Very Large Telescope (VLT). This instrument maps the light emitted from the blob, known as Lyman-alpha light, and it showed that the sources of the light are the forming stars at the very heart of the Lyman-alpha Blob.

Co-author Dr Dave Clements from the Department of Physics at Imperial College London, said: “What’s exciting about this is that elliptical galaxies usually live in the centre of galaxy clusters, so what we’ve found here could eventually form the centre of a giant galaxy cluster. Deep imaging with the NASA/ESA Hubble Space Telescope and the W. M. Keck Observatory did indeed show that the young galaxies are surrounded by companion galaxies that could be providing the central galaxies with material, helping to drive their high star formation rates and lending weight to this theory.”

This video zoom sequence starts with a wide-field view of the dim constellation of Aquarius (The Water Carrier) and slowly closes in on one of the largest known single objects in the Universe, the Lyman-alpha blob LAB1. Observations with the ESO VLT show, for the first time, that the giant “blob” must be powered by galaxies embedded within the cloud.

Credit: ESO

LABs are bright but have a murky glow. The team used a galaxy formation simulator to show that the glow might come from ultraviolet light scattering off the surrounding hydrogen gas as a result of the star formation.

Lead author Dr James Geach from the Centre for Astrophysics Research at the University of Hertfordshire, explained: “Think of a streetlight on a foggy night — you see the diffuse glow because light is scattering off the tiny water droplets. A similar thing is happening here, except the streetlight is an intensely star-forming galaxy and the fog is a huge cloud of intergalactic gas. The galaxies are illuminating their surroundings.”

Dr Clements concluded: “These blobs have been a mystery for a long time, but thanks to this large collaboration between experts and a variety of telescopes, we think we have solved a 15-year-old mystery: Lyman-alpha Blob-1 is the site of formation of a massive elliptical galaxy that will one day be the heart of a giant cluster. We are seeing a snapshot of the assembly of that galaxy 11.5 billion years ago.” 




Contacts and sources:
by Caroline Brogan
Imperial College London

What Do I Have? Cold, Flu Or Seasonal Allergies? Your Foolproof Guide

Being sick can really put a damper on your day or week, and if you’re achy, sneezing and just downright miserable, you may not be able to tell if you have a cold, the flu or allergies. Although you may opt to try to fight the sickness with hot tea and bed rest, it’s best to know what ailment is plaguing you so you can treat it accordingly—especially if it’s contagious. Cindy Weston, DNP, RN, FNP-BC, assistant professor at the Texas A&M College of Nursing, helps break down these congesting conditions.


Credit: Texas A&M Health Science Center


The common cold, flu and allergies are extremely common, and many people will experience them throughout the year. Still, even though these conditions are so often seen, they can still be tricky to diagnose. “Diagnosis is based on symptoms and supportive diagnostic data,” Weston said. “Someone will come in and think they have a cold, and it may be the flu, and sometimes people think they have the flu and it is a common cold or allergies.”

A common cold

There are many different viruses that can lead to a common cold, and they can be difficult to treat because antibiotics are ineffective against viral infections. The only thing you can do for the common cold is treat your symptoms, drink fluids and get plenty of rest.

“The common cold is complicated to treat and can’t be cured, but rest and nutrition seem to be the best approach,” Weston said. “You can take medications to treat the symptoms and make yourself more comfortable.”

A cold can have a variety of symptoms, but the most common include:

•Mild fatigue
•Fever
•Cough
•Sore throat
•Congestion, runny nose, sneezing
•Watery eyes or nose
•Head, chest or nasal congestion

A cold will usually go away on its own within a week and typically doesn’t warrant a trip to your health care provider. If you’re still feeling bad after a week, however, or if your symptoms are severe or you have an underlying chronic condition like asthma, it might be time to seek help. The common cold can happen year-round; however, it seems to be more common in the colder months, when everyone migrates indoors and the virus is more communicable.

“A cold can be very tricky because some of the symptoms may linger,” Weston said. “Sometimes your cold may be gone, but your cough could persist for another month.”

The flu

Influenza is a year-round viral infection, with outbreaks peaking between fall and spring, and is one of the most common illnesses in the United States. According to the Centers for Disease Control and Prevention (CDC), there were an estimated 19 million medically attended cases of influenza during the 2014–2015 flu season.

The flu, which can be prevented with a yearly vaccine, can have very similar symptoms to a common cold, but with a few distinct differences.

“The flu typically comes on quick and strong as opposed to a nagging cold,” Weston said. “You may be feeling fine during the morning but can feel horrible, with a fever and aches, in the afternoon.”

Another difference between the flu and the common cold is the type of aches and pain. “Aches and pains are prevalent in both conditions, but with a cold, the aches are mild and generally associated with congestion,” Weston said. “The flu can present with deep muscle pains in your large muscles, including your legs and back.”

Common flu symptoms can include:

•Whole body aches
•High fever (over 101 degrees)
•Extreme exhaustion or fatigue
•Cough
•Sore throat
•Runny nose
•Head, chest or nasal congestion

When you have the flu, it’s best to get medical treatment—and fast. Anti-viral medications may be prescribed within 48 hours of the onset of symptoms to reduce the intensity of symptoms and lessen the chance of complications.

“Both the flu and cold can lead to further problems like pneumonia, bronchitis or sinusitis,” Weston said. “The flu is more likely to do so, but it’s best to treat the symptoms and stay well rested to lessen the chances of further problems.”

Seasonal Allergies

Another common explanation for itchy or runny eyes or nose is seasonal allergies. When pollen is blown around on a windy day, these allergens can trigger chemicals in your body to defend against them.

Seasonal allergies are typically easier to diagnose, mainly because of the lack of certain symptoms commonly found with the flu or a common cold.

“Allergies will not present with a fever,” Weston said. “With allergies, there will be itchiness and irritation around the nose or eyes, but the symptoms should be present only as long as the allergens remain.”

Seasonal allergies can develop at any age, so just because you didn’t have allergies during the last change of seasons, doesn’t mean you can’t develop them the next time. Also, allergies can be an asthma trigger, so getting a grasp on them can be important before they lead to further problems.

Common seasonal allergy symptoms include:

•Sneezing
•Runny, itchy nose
•Red, watery and itchy eyes
•Head, chest or nasal congestion
•Cough

What should I do?

The flu requires prescription medications to prevent complications, but apart from that, a cold, the flu and allergies can be treated with over-the-counter medications, such as antihistamines or decongestants, along with plenty of rest and hydration. Be sure to use medications as directed, and contact your health care provider or pharmacist to make sure you’re not double-dosing.

“If you think you might have a cold or the flu, avoid spreading the germs to others,” Weston added. “It’s best to stay home until you’ve been fever-free for 24 hours or have completed a day’s worth of prescribed medication.”

Also, if your nasal congestion becomes overwhelming, rinsing your sinuses with a nasal irrigation pot can help remove allergens and prevent infection in your sinuses.

“Nasal irrigation systems can work to help prevent infection in your sinuses,” Weston said. “Just be sure that you’re using it as directed and with properly filtered or previously-boiled water.”

Be sure to contact your health care provider if your fever doesn’t go away, or if you have trouble breathing or keeping food down. While complications are rare, they are a possibility and should be caught early.




Contacts and sources:
Texas A&M Health Science Center

Metabolism: What Is It And Can It Be Controlled?

Surprise! Your metabolism can be managed, and you have the power to do so!

“I have a fast metabolism; I can eat and eat and stay skinny.” Most of us have heard someone say this, and a majority of us have responded with annoyance and envy. But what is metabolism, and can we make ours run a bit faster? Taylor Newhouse, a registered dietitian with the Texas A&M School of Public Health, helps break down what you should know about your metabolism.

What is metabolism?

Your metabolism isn’t just what keeps your bragging friend lean, it’s the constant process that your body is using to keep everything functioning. Your metabolism is always running, even when you’re sleeping.

“Your metabolism is kind of the engine that keeps your body going,” Newhouse said. “It’s the drive that allows your body to utilize the food and nutrients you put into it.”

Some people do have a faster metabolism than others, and that is the work of genetics and lifestyle. Although there’s nothing you can do about your genetics, there are ways to influence the lifestyle side and give your metabolism a boost to keep it running in high gear.

How can you improve your metabolism?

Because the metabolism’s base rate is set by genetics, there’s no quick way to rev it up; it cannot be changed without making some long-term lifestyle changes.

“We can manipulate our metabolism to a degree,” Newhouse said. “It’s like a campfire: just like we need to give a fire tinder and pieces of wood in order to keep it from slowing down and burning out, we need to fuel our metabolism as well.”

Credit: Texas A&M Health Science Center

If you’re looking to boost your metabolism, then there are a few changes you can make throughout the day. Working out, hydrating and eating right can help with your overall health, but there are also specific habits you can foster in order to give it a boost.

“Eating your leafy vegetables and working out can definitely help your metabolism,” Newhouse said. “Muscle burns more energy than fat, so lifting weights or anything else that builds muscle—along with eating correctly—can play a large role in how our body processes nutrients.”

Apart from getting in more muscle-building workouts and eating better, another important habit to kick your metabolism into gear is not ignoring the most important meal of the day: breakfast.

“People tend to overlook how important breakfast is,” Newhouse said. “We go all night without food, and our body can approach a fasting state, an episode where our body will withhold calories, if we wait too long to eat after waking up.”

What can slow your metabolism?

If it’s possible to speed up your metabolism, then it’s equally possible—and far easier—to slow it down. There are many habits that are easy to fall into that can make your metabolism run at a slower pace. One of these happens in the late hours of night, and involves what you’re not doing: getting enough shut-eye.

Sleep deprivation is one of the biggest epidemics in American society, with more than one-third of adults getting less than the recommended seven to eight hours of sleep each night. Sleep is not only crucial for your metabolism, but skimping on sleep can also lead to long-term conditions such as heart disease and diabetes.

“Sleep is one of the biggest factors that people seem to forget about,” Newhouse said. “Even if someone eats well and exercises, if they don’t get adequate sleep, then their metabolism won’t run as efficiently.”

Although snacks often have a bad reputation for being unhealthy, they are very important to keep you fueled and nourish your body throughout the day. Snacks should have some protein, fiber and carbohydrates and should not have too much salt or sodium.

“Eating snacks won’t slow down your metabolism if you’re eating the right foods,” Newhouse said. “Healthy snacks—such as nuts, fruit or vegetables—have the nutrients to slow the rate of digestion, keep you feeling fuller longer and keep your body working to process the nutrients.”

Stress can also indirectly lead to problems with your metabolism. People with high amounts of cortisol, a stress hormone, tend to be overweight, and being overweight can slow your metabolism. Lowering your cortisol levels can start a chain-reaction that can help your metabolism run more efficiently.

What does your metabolism do over time?

Believe it or not, metabolism—just like the rest of our body—goes through the aging process. As your metabolism slows, your continuous diet and exercise choices become more important.

While the cause for this is unclear, women entering menopause will experience a slower metabolism and can find it more difficult to stay at a healthy weight, which makes diet and exercise vital to healthy aging.

“Nothing changes overnight,” Newhouse said. “It’s a matter of making the small choices that can add up to try and negate the effects that are naturally slowing down your metabolism.”

If you’re worried about how your metabolism is affecting your lifestyle, contact your health care provider or sit down with a registered dietitian to set up a plan for a healthier daily life.





Contacts and sources: 
Texas A&M University

How Multi-Ring Craters Form Revealed by New Research

The Moon’s Orientale basin is an archetype of “multi-ring” basins found throughout the solar system. New research reveals how those rings were formed.

The Moon's Orientale basin is surrounded by distinct ring structures. The image shows the basin's gravitational signature (red indicates excess mass, blue indicates mass deficits), which scientists used to reconstruct the formation of the basin and its rings.
Credit: Ernest Wright, NASA/GSFC Scientific Visualization Studio

Using data from NASA’s Gravity Recovery and Interior Laboratory (GRAIL) mission, scientists have shed new light on the formation of a huge bull’s-eye-shaped impact feature on the Moon. The findings, described in two papers published in the journal Science, could help scientists better understand how these kinds of giant impacts influenced the early evolution of the Moon, Mars and Earth.

Impact craters larger than about 180 miles (300 kilometers) in diameter are referred to as basins. With increasing size, craters tend to have increasingly complex structures, often with multiple concentric, raised rings. Orientale is about 580 miles (930 kilometers) wide and has three distinct rings, which form a bullseye-like pattern.

Formed about 3.8 billion years ago, the Orientale basin is located on the southwestern edge of the Moon’s nearside, just barely visible from Earth. The basin’s most prominent features are three concentric rings of rock, the outermost of which has a diameter of nearly 580 miles. Located along the moon's southwestern limb -- the left-hand edge as seen from Earth -- Orientale is the largest and best-preserved example of what's known as a "multi-ring basin."

Scientists have debated for years about how those rings formed. Thanks to targeted close passes over Orientale by the twin GRAIL spacecraft in 2012, mission scientists think they’ve finally figured it out. The GRAIL data revealed new details about the interior structure of Orientale. Scientists used that information to calibrate a computer model that, for the first time, was able to recreate the rings’ formation.

“Big impacts like the one that formed Orientale were the most important drivers of change on planetary crusts in the early solar system,” said Brandon Johnson, a geologist at Brown University, lead author of one of the papers and a co-author of the other. “Thanks to the tremendous data supplied by GRAIL, we have a much better idea of how these basins form, and we can apply that knowledge to big basins on other planets and moons.”

In one of the Science papers, a research team led by MIT’s Maria Zuber, a Brown Ph.D. graduate, performed a detailed examination of the data returned by GRAIL.

“In the past, our view of Orientale basin was largely related to its surface features, but we didn't know what the subsurface structure looked like in detail. It’s like trying to understand how the human body works by just looking at the surface,” said Jim Head, a geologist at Brown, GRAIL science team member and co-author of the research. “The beauty of the GRAIL data is that it is like putting Orientale in an x-ray machine and learning in great detail what the surface features correspond to in the subsurface.”

The Orientale basin is the youngest of the large lunar basins. The distinct outer ring is about 950 km from east to west; the full width of the LROC WAC mosaic is 1,350 km.
Credit: NASA/GSFC/University of Arizona

One of the key mysteries the data helped to solve involves the size and location of Orientale’s transient crater, the initial depression created when the impactor blasted material away from the surface. In smaller impacts, that initial crater is left behind. But in larger collisions, the rebound of the surface following the impact can sometimes obliterate any trace of that initial impact point.

Some researchers had thought that one of Orientale’s rings might represent the remains of the transient crater. But the GRAIL data showed that’s not the case. Instead Orientale’s gravity signature suggests the transient crater was somewhere between its two inner rings, measuring between 200 and 300 miles across. Any recognizable surface remnants of that crater were erased by the aftermath of the collision.

Constraining the size of the transient crater enabled the team to estimate how much material was blasted out of the surface during the collision. The team calculates that about 816,000 cubic miles of rock was blasted away. For Head, those findings helped to tie together years of research on Orientale.

“I wrote my first paper on the Orientale Basin in 1974, over forty years ago, and I have been studying it ever since," he said. “We now know what parts of the crust were removed, what parts of the mantle and deeper interior were uplifted, and how much ejecta was redistributed over the whole Moon.”

Modeling Orientale’s rings

For the other paper, Johnson led a team of researchers who used the GRAIL data to develop a computer model of the impact and its aftermath. The model that best fit the GRAIL data estimates that Orientale was formed by an object about 40 miles across traveling at about 9 miles per second.
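The paper's full impact simulation cannot be condensed into a few lines, but the quoted impactor figures do allow a quick back-of-envelope kinetic energy estimate, sketched below. The 2,500 kg/m^3 density is an assumed value for a rocky impactor, not a number from the study.

import math

# Back-of-envelope kinetic energy of the Orientale impactor from the figures
# quoted above (about 40 miles across, moving at about 9 miles per second).
# The density is an assumption for illustration only.

diameter_m = 40 * 1609.34           # ~40 miles in metres
speed_ms   = 9 * 1609.34            # ~9 miles per second in m/s
density    = 2500.0                 # kg/m^3, assumed rocky impactor

radius = diameter_m / 2.0
mass = density * (4.0 / 3.0) * math.pi * radius**3
energy_j = 0.5 * mass * speed_ms**2

print(f"mass ~{mass:.2e} kg, kinetic energy ~{energy_j:.2e} J")
# roughly 3.5e17 kg and 3.7e25 J with these assumptions

An energy on that order, delivered in minutes, is what lets the crust rebound and flow in the way the ring-formation model describes.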

 Orientale Basin gravity map (red=high, blue=low). Generated from GRAIL Colorized Gravity Anomalies (JPL) layer with LRO-WAC Shaded Relief at 30% opacity overlain.
Credit: NASA/JPL-Caltech/Arizona State University; generated by James Stuby

The model was able to recreate Orientale’s rings and explain how they formed. It showed that as the crust rebounded following the impact, warm and ductile rocks in the subsurface flowed inward toward the impact point. That inward flow caused the crust above to crack and slip, forming the cliffs, several kilometers high, that compose the outer two rings.

The innermost ring was formed by a different process. In smaller impacts, the rebound of the crust can form a mound of material in the center of a crater, called a central peak. But Orientale’s central peak was too large to be stable. That material flowed back outward, eventually mounding in a circular fashion, forming the inner ring.

“This was a really intense process,” Johnson said. “These several-kilometer cliffs and the central ring all formed within minutes of the initial impact.”

This color-coded map shows the strength of surface gravity around Orientale basin on the moon, derived from GRAIL data. (The color scale represents units of "gals" -- 1 gal is about 1/1000 of Earth's surface gravitational acceleration.) 

Image credit: NASA/JPL-Caltech


This is the first time a model has been able to reproduce these rings, Johnson said.

“GRAIL provided the data we needed to provide a foundation for the models,” he said. “That gives us confidence that we’re capturing the processes that actually formed these rings.”

Ring basins elsewhere

Orientale is the youngest and best-preserved example of a multi-ring basin anywhere in the solar system, but it’s certainly not the only one. Armed with an understanding of Orientale, scientists can investigate how these processes play out elsewhere.

“There are several basins of this kind on Mars,” Johnson said. “But compared to the Moon, there’s a lot more geology that happened after these impacts that degrades them. Now that we have a better understanding of how the basins formed, we can make better sense of the processes that came after.”

Head says that this research is yet another example of how our own Moon helps us understand the rest of the solar system.

“The Moon in some ways is a laboratory full of well-preserved features that we can analyze in great detail,” Head said. “Thanks to Maria Zuber’s leadership, GRAIL continues to help us understand how the Moon evolved and how those processes relate to other planets and moons.”

With a diameter of roughly 2000 km and a depth of over 7 km, the Hellas Basin is the largest impact feature on Mars. Hellas, near the Martian South Pole,  is thought to have formed between 3.8 and 4.1 billion years ago, when a large asteroid hit the surface of Mars. Since its formation, Hellas has been subject to modification by the action of wind, ice, water and volcanic activity.
Credit: NASA 

 The image was taken by the Mars Orbiter Laser Altimeter, or MOLA, which is an instrument on the Mars Global Surveyor (MGS), a spacecraft that was launched on November 7, 1996.




Contacts and sources:
Kevin Stacey
Brown University

NASA/Jet Propulsion Laboratory

Catalog of Known Near-Earth Asteroids Tops 15,000, 50% Increase Since 2013

The number of discovered near-Earth asteroids (NEAs) now tops 15,000, with an average of 30 new discoveries added each week. This milestone marks a 50 percent increase in the number of known NEAs since 2013, when discoveries reached 10,000 in August of that year.

Surveys funded by NASA's Near Earth Object (NEO) Observations Program (NEOs include both asteroids and comets) account for more than 95 percent of discoveries so far.

The 15,000th near-Earth asteroid is designated 2016 TB57. It was discovered on Oct. 13 by observers at the Mount Lemmon Survey, an element of the NASA-funded Catalina Sky Survey in Tucson, Arizona. 2016 TB57 is a rather small asteroid -- about 50 to 115 feet (16 to 36 meters) in size -- that will come closest to Earth on Oct. 31 at just beyond five times the distance of the moon. It will safely pass Earth.

The 15,000th near-Earth asteroid discovered is designated 2016 TB57. It was discovered on Oct. 13, 2016, by observers at the Mount Lemmon Survey, an element of the NASA-funded Catalina Sky Survey in Tucson, Arizona.
Credits: NASA/JPL-Caltech

A near-Earth asteroid is defined as one whose orbit periodically brings it within approximately 1.3 times Earth's average distance from the sun -- that is, within about 121 million miles (195 million kilometers) of the sun. (Earth's average distance from the sun is about 93 million miles, or 150 million kilometers.) This also brings the asteroid within roughly 30 million miles (50 million kilometers) of Earth's orbit. Observers have already discovered more than 90 percent of the estimated population of large NEOs -- those larger than 0.6 miles (one kilometer).
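In practice this definition reduces to a perihelion-distance test: an object qualifies as near-Earth if its closest approach to the sun, q = a(1 - e), is within about 1.3 astronomical units. A minimal sketch of that check follows; the example orbital elements are hypothetical, not those of 2016 TB57.

# The NEA definition above amounts to a perihelion-distance test:
# an object counts as near-Earth if its closest approach to the sun
# is within about 1.3 astronomical units.

def is_near_earth(semi_major_axis_au: float, eccentricity: float) -> bool:
    """Return True if perihelion q = a(1 - e) is within 1.3 AU."""
    perihelion_au = semi_major_axis_au * (1.0 - eccentricity)
    return perihelion_au <= 1.3

# hypothetical orbital elements, chosen only for illustration
print(is_near_earth(1.8, 0.45))   # q = 0.99 AU -> True
print(is_near_earth(2.7, 0.10))   # q = 2.43 AU -> False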

"The rising rate of discovery is due to dedicated NEO surveys and upgraded telescopes coming online in recent years," said NASA's NEO Observations Program Manager Kelly Fast. "But while we're making great progress, we still have a long way to go." It is estimated by astronomers that only about 27 percent of the NEAs that are 460 feet (140 meters) and larger have been found to date. Congress directed NASA to find over 90 percent of objects this size and larger by the end of 2020.

The following chart shows the cumulative number of known Near-Earth Asteroids (NEAs) versus time. Totals are shown for NEAs of all sizes, those NEAs larger than ~140m in size, and those larger than ~1km in size.
Credit: NASA


Currently, two NASA-funded NEO surveys -- the Catalina Sky Survey and the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS) in Hawaii -- account for about 90 percent of new NEO discoveries. Both projects upgraded their telescopes in 2015, improving their discovery rates.

Asteroid 2013 MZ5, the 10,000th NEO, as seen by the University of Hawaii's Pan-STARRS1 telescope. In the original animation, the asteroid moves relative to a fixed background of stars, starting near the upper right of the frame and drifting diagonally down and to the left.
Image credit: PS-1/UH

A recent upgrade to one of the Catalina Sky Survey's telescopes tripled its average monthly NEO discovery rate. When the Pan-STARRS system increased the observing time it devotes to NEO searching to 90 percent, its discovery rate also tripled. Pan-STARRS will add a second telescope to the hunt this fall. As more capable telescopes are deployed, the overall NEO survey effort will be able to find more objects 140 meters (460 feet) in size and smaller.

The NEO Observations Program is a primary element of NASA's Planetary Defense Coordination Office, which is responsible for finding, tracking and characterizing potentially hazardous NEOs, issuing warnings about possible impacts, and coordinating U.S. government planning for response to an actual impact threat.

The chart shows the cumulative totals for known NEAs of all sizes.
Credit: NASA

"While no known NEO currently poses a risk of impact with Earth over the next 100 years," says NASA Planetary Defense Officer Lindley Johnson, "we've found mostly the larger asteroids, and we have a lot more of the smaller but still potentially hazardous ones to find."





Contacts and sources:
NASA/Jet Propulsion Laboratory

Pumpkin Stars Spinning Super Fast Get Squashed

Astronomers using observations from NASA's Kepler and Swift missions have discovered a batch of rapidly spinning stars that produce X-rays at more than 100 times the peak levels ever seen from the sun. The stars, which spin so fast they've been squashed into pumpkin-like shapes, are thought to be the result of close binary systems where two sun-like stars merge.

"These 18 stars rotate in just a few days on average, while the sun takes nearly a month," said Steve Howell, a senior research scientist at NASA's Ames Research Center in Moffett Field, California, and leader of the team. "The rapid rotation amplifies the same kind of activity we see on the sun, such as sunspots and solar flares, and essentially sends it into overdrive."

The most extreme member of the group, a K-type orange giant dubbed KSw 71, is more than 10 times larger than the sun, rotates in just 5.5 days, and produces X-ray emission 4,000 times greater than the sun does at solar maximum.
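To put that spin in perspective, a back-of-the-envelope estimate of the star's equatorial rotation speed can be made from the figures quoted above (a radius of 10 solar radii and a 5.5-day period); the sun's roughly 25-day rotation is used for comparison. The short Python sketch below is illustrative and is not taken from the study.

import math

R_SUN_KM = 695_700   # solar radius in kilometers
DAY_S = 86_400       # seconds per day

def equatorial_speed_kms(radius_in_solar_radii, period_days):
    """Equatorial circumference divided by rotation period, in km/s."""
    circumference_km = 2 * math.pi * radius_in_solar_radii * R_SUN_KM
    return circumference_km / (period_days * DAY_S)

print(round(equatorial_speed_kms(10, 5.5)))    # ~92 km/s for a KSw 71-like star
print(round(equatorial_speed_kms(1, 25), 1))   # ~2.0 km/s for the sun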

This artist's concept illustrates how the most extreme "pumpkin star" found by Kepler and Swift compares with the sun. Both stars are shown to scale. KSw 71 is larger, cooler and redder than the sun and rotates four times faster. Rapid spin causes the star to flatten into a pumpkin shape, which results in brighter poles and a darker equator. Rapid rotation also drives increased levels of stellar activity such as starspots, flares and prominences, producing X-ray emission over 4,000 times more intense than the peak emission from the sun. KSw 71 is thought to have recently formed following the merger of two sun-like stars in a close binary system.

Credits: NASA's Goddard Space Flight Center/Francis Reddy

These rare stars were found as part of an X-ray survey of the original Kepler field of view, a patch of the sky comprising parts of the constellations Cygnus and Lyra. From May 2009 to May 2013, Kepler measured the brightness of more than 150,000 stars in this region to detect the regular dimming from planets passing in front of their host stars. The mission was immensely successful, netting more than 2,300 confirmed exoplanets and nearly 5,000 candidates to date. An ongoing extended mission, called K2, continues this work in areas of the sky located along the ecliptic, the plane of Earth's orbit around the sun.

Video: Dive into the Kepler field and learn more about the origins of these rapidly spinning stars.
Credits: NASA's Goddard Space Flight Center/Scott Wiessinger, producer

"A side benefit of the Kepler mission is that its initial field of view is now one of the best-studied parts of the sky," said team member Padi Boyd, a researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who designed the Swift survey. For example, the entire area was observed in infrared light by NASA's Wide-field Infrared Survey Explorer, and NASA's Galaxy Evolution Explorer observed many parts of it in the ultraviolet. "Our group was looking for variable X-ray sources with optical counterparts seen by Kepler, especially active galaxies, where a central black hole drives the emissions," she explained.

Using the X-ray and ultraviolet/optical telescopes aboard Swift, the researchers conducted the Kepler–Swift Active Galaxies and Stars Survey (KSwAGS), imaging about six square degrees, or 12 times the apparent size of a full moon, in the Kepler field.

"With KSwAGS we found 93 new X-ray sources, about evenly split between active galaxies and various types of X-ray stars," said team member Krista Lynne Smith, a graduate student at the University of Maryland, College Park who led the analysis of Swift data. "Many of these sources have never been observed before in X-rays or ultraviolet light."

For the brightest sources, the team obtained spectra using the 200-inch telescope at Palomar Observatory in California. These spectra provide detailed chemical portraits of the stars and show clear evidence of enhanced stellar activity, particularly strong diagnostic lines of calcium and hydrogen.

The researchers used Kepler measurements to determine the rotation periods and sizes for 10 of the stars, which range from 2.9 to 10.5 times larger than the sun. Their surface temperatures range from somewhat hotter to slightly cooler than the sun, mostly spanning spectral types F through K. Astronomers classify the stars as subgiants and giants, which are more advanced evolutionary phases than the sun's caused by greater depletion of their primary fuel source, hydrogen. All of them eventually will become much larger red giant stars.

A paper detailing the findings will be published in the Nov. 1 edition of the Astrophysical Journal and is now available online.

Forty years ago, Ronald Webbink at the University of Illinois, Urbana-Champaign noted that close binary systems cannot survive once the fuel supply of one star dwindles and it starts to enlarge. The stars coalesce to form a single rapidly spinning star initially residing in a so-called "excretion" disk formed by gas thrown out during the merger. The disk dissipates over the next 100 million years, leaving behind a very active, rapidly spinning star.

Howell and his colleagues suggest that their 18 KSwAGS stars formed by this scenario and have only recently dissipated their disks. To identify so many stars passing through such a cosmically brief phase of development is a real boon to stellar astronomers.

"Webbink's model suggests we should find about 160 of these stars in the entire Kepler field," said co-author Elena Mason, a researcher at the Italian National Institute for Astrophysics Astronomical Observatory of Trieste. "What we have found is in line with theoretical expectations when we account for the small portion of the field we observed with Swift."

The team has already extended their Swift observations to additional fields mapped by the K2 mission.

Ames manages the Kepler and K2 missions for NASA’s Science Mission Directorate. NASA's Jet Propulsion Laboratory in Pasadena, California, managed Kepler mission development. Ball Aerospace & Technologies Corp. operates the flight system with support from the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder.

Goddard manages the Swift mission in collaboration with Pennsylvania State University in University Park, the Los Alamos National Laboratory in New Mexico and Orbital Sciences Corp. in Dulles, Virginia. Other partners include the University of Leicester and Mullard Space Science Laboratory in the United Kingdom, Brera Observatory and the Italian Space Agency in Italy, with additional collaborators in Germany and Japan.




Contacts and sources:
By Francis Reddy
NASA's Goddard Space Flight Center,


‘Perfect’ Soap Molecule Invented: Cleans In All Conditions, Better For The Environment And Billion Dollar Potential

The discovery of a perfect soap molecule could have a major impact on the multibillion-dollar cleaning products industry.

A team of researchers, led by the University of Minnesota, has invented a new soap molecule made from renewable sources that could dramatically reduce the number of chemicals in cleaning products and their impact on the environment.

The soap molecules also worked better than some conventional soaps in challenging conditions such as cold water and hard water. The technology has been patented by the University of Minnesota and is licensed to the new Minnesota-based startup company Sironix Renewables.

The OFS soap molecules made from renewable sources could dramatically reduce the number of chemicals in cleaning products and their impact on the environment.
Photo credit: Paul J. Dauenhauer, University of Minnesota

The new study is now online and will be published in the next issue of the American Chemical Society’s ACS Central Science, a leading journal in the chemical sciences. Authors of the study include researchers from the University of Minnesota, University of Delaware, University of Massachusetts Amherst, Sironix Renewables, and the U.S. Department of Energy’s Catalysis Center for Energy Innovation and Argonne National Laboratory.

“Our team created a soap molecule made from natural products, like soybeans, coconut and corn, that works better than regular soaps and is better for the environment,” said Paul Dauenhauer, a University of Minnesota associate professor of chemical engineering and materials science and a co-author of the study. “This research could have a major impact on the multibillion-dollar cleaning products industry.”

Conventional soaps and detergents are viewed as environmentally unfriendly because they are made from fossil fuels. When formulated into shampoos, hand soaps, or dishwashing detergents, these soaps are mixed with many additional difficult-to-pronounce and harmful chemicals that are washed down the drain.

Funded by the U.S. Department of Energy, researchers from the Catalysis Center for Energy Innovation developed a new chemical process that combines fatty acids from soybeans or coconut with sugar-derived rings from corn to make a renewable soap molecule called Oleo-Furan-Surfactant (OFS). They found that OFS worked well in cold water, where conventional soaps become cloudy and gooey, rendering them unusable. Additionally, OFS soaps were shown to form the soap particles (called micelles) necessary for cleaning at low concentrations, which significantly reduces their environmental impact on rivers and lakes.

The new renewable OFS soap was also engineered to work in extremely hard water conditions. For many locations around the world, minerals in the water bind with conventional soaps and turn them into solid goo.

“I think everybody has had the problem of trying to get shampoo out of their hair in hard water—it just doesn’t come out,” said Dauenhauer.

To combat this problem, most existing soaps and detergents add an array of additional chemicals, called chelants, to grab these minerals and prevent them from interfering with soap molecules. This problem has led to a long list of extra chemical ingredients in most conventional cleaning products, many of which are harmful to the environment.

The new OFS soap eliminates the hard water problem by using a naturally derived source that does not bind strongly to minerals in water. OFS molecules were shown to form soap particles (micelles) even at 100 times conventional hard-water conditions. As a result, a cleaning product's ingredient list could be significantly simplified.

“The impact of OFS soaps will be greater than their detergent performance,” said University of Minnesota chemical engineering and materials science graduate student Kristeen Joseph. “OFS is made from straight carbon chains derived from soybeans or coconut which can readily biodegrade. These are really the perfect soap molecules.”

The researchers also used nanoparticle catalysts to optimize the soap structure for foaming ability and other cleaning capabilities. In addition to biodegradability and cleaning performance, OFS was shown to foam with the consistency of conventional detergents, which means it could directly replace soaps in existing equipment such as washing machines, dishwashers, and consumer products.

The invention of new soap technology is part of a larger mission of the Catalysis Center for Energy Innovation (CCEI), a U.S. Department of Energy – Energy Frontier Research Center led by the University of Delaware. Initiated in 2009, the CCEI has focused on transformational catalytic technology to produce renewable chemicals and biofuels from natural biomass sources.

In addition to Dauenhauer and Joseph, researchers who were part of the study from the University of Minnesota were professor Michael Tsapatsis, postdoctoral researcher Dae Sung Park, and current and former students Limin Ren, Meera H. Shete, Han Seung Lee, and Jonathan N. Damen. Researchers from the University of Delaware were professors Dionisios G. Vlachos, Raul F. Lobo, and graduate student Maura Koehle. Others included University of Massachusetts Amherst professor Wei Fan, Sironix Renewables founder and recent University of Minnesota graduate Christoph Krumm, and Argonne National Laboratory researchers Xiaobing Zuo and Byeongdu Lee.



Contacts and sources:
University of Minnesota College of Science and Engineering

Citation: “Tunable Oleo-Furan Surfactants by Acylation of Renewable Furans,” DOI: 10.1021/acscentsci.6b00208 at ACS Central Science website.

Overlooked Molecules Could Revolutionize Our Understanding of The Immune System

Thousands of new immune system signals have been uncovered with potential implications for immunotherapy, autoimmune diseases and vaccine development.

The researchers behind the finding say it is the biological equivalent of discovering a new continent.

Our cells regularly break down proteins from our own bodies and from foreign bodies, such as viruses and bacteria. Small fragments of these proteins, called epitopes, are displayed on the surface of the cells like little flags so that the immune system can scan them. If they are recognized as foreign, the immune system will destroy the cell to prevent the spread of infection.

Impression of the immune system attacking a virus. The Y-shaped stalks are the epitopes.
Credit: © Imperial College London.

In a new study, researchers have discovered that around one third of all the epitopes displayed for scanning by the immune system are a type known as ‘spliced’ epitopes.

These spliced epitopes were thought to be rare, but the scientists have now identified thousands of them by developing a new method that allowed them to map the surface of cells and identify a myriad of previously unknown epitopes.

The findings should help scientists to better understand the immune system, including autoimmune diseases, as well as provide potential new targets for immunotherapy and vaccine design.


DEEPER UNDERSTANDING

The research was led by Dr Juliane Liepe from Imperial College London and Dr Michele Mishto from Charité - Universitätsmedizin Berlin in Germany, in collaboration with the La Jolla Institute for Allergy and Immunology and Utrecht University, and it is published today in Science.

Co-author of the study Professor Michael Stumpf from the Department of Life Sciences at Imperial said: “It’s as if a geographer would tell you they had discovered a new continent, or an astronomer would say they had found a new planet in the solar system.

“And just as with those discoveries, we have a lot of exploring to do. This could lead to not only a deeper understanding of how the immune system operates, but also suggests new avenues for therapies and drug and vaccine development.”

Prior to the new study, scientists thought that the machinery in a cell created signalling peptides by cutting fragments out of proteins in sequence, and displaying these in order on the surface of the cell.

However, this cell machinery can also create ‘spliced’ peptides by cutting two fragments from different positions in the protein and then sticking them together out of order, creating a new sequence.
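A toy example may make the cut-and-paste idea concrete. The short Python sketch below builds a conventional contiguous peptide and a spliced one from the same protein; the sequence and cut positions are invented for illustration and have nothing to do with the study's actual data or analysis.

# Toy illustration (not the authors' pipeline): a conventional epitope is one
# contiguous slice of a protein, while a spliced epitope joins two separate
# slices, possibly out of order. The sequence below is made up.
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

def contiguous_peptide(seq, start, length):
    """Conventional epitope: one contiguous fragment."""
    return seq[start:start + length]

def spliced_peptide(seq, frag1, frag2):
    """Spliced epitope: two fragments cut from different positions and joined."""
    (s1, l1), (s2, l2) = frag1, frag2
    return seq[s1:s1 + l1] + seq[s2:s2 + l2]

print(contiguous_peptide(protein, 5, 9))          # IAKQRQISF
print(spliced_peptide(protein, (24, 5), (3, 4)))  # ERLGLAYIA, a sequence found nowhere in the original protein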

Video caption: A 3D-printed proteasome, shown creating a spliced epitope by cut-and-paste.
Credit: © Imperial College London/Used with permission


Scientists knew about the existence of the spliced epitopes, but they were thought to be rare. The new study suggests that spliced epitopes actually make up a large proportion of signalling epitopes: they make up around a quarter of the overall number of epitopes, and account for 30-40 per cent of the diversity - the number of different kinds of epitopes.

PROS AND CONS

These extra epitopes give the immune system more to scan, and more possibilities of detecting disease. However, as the spliced epitopes are mixed sequences, they also have the potential to overlap with the sequences of healthy signallers and be misidentified as harmful.

This could help scientists understand autoimmune diseases, where the immune system turns against normal body tissues, such as in Type 1 diabetes and multiple sclerosis.

The study’s lead author, Dr Juliane Liepe from the Department of Life Sciences at Imperial, said: “The discovery of the importance of spliced peptides could present pros and cons when researching the immune system.

“For example, the discovery could influence new immunotherapies and vaccines by providing new target epitopes for boosting the immune system, but it also means we need to screen for many more epitopes when designing personalised medicine approaches.”



Contacts and sources:
Hayley Dunning
Imperial College London


Citation: 'A large fraction of HLA class I ligands are proteasome-generated spliced peptides' by Juliane Liepe, Fabio Marino, John Sidney, Anita Jeko, Daniel E. Bunting, Alessandro Sette, Peter M. Kloetzel, Michael P.H. Stumpf, Albert J.R. Heck, and Michele Mishto is published in Science.

Can You Literally Be Scared To Death?

Halloween is around the corner, and with that comes haunted houses and corn mazes, mummies, ghosts and creatures of the night jumping out at you—all sure to give a harmless fright, or so we thought. Can that scary monster sneaking up behind you actually scare you to death? The answer may be as spooky as it gets.

“It is possible for someone to have health complications or die from fright,” said John P. Erwin III, MD, a cardiologist and professor at the Texas A&M College of Medicine. “It is more probable for people who have pre-existing conditions, but it is possible to suffer a cardiac-related death as a result of being scared.”

Credit: Frédéric DuPont

How can fear turn fatal?

Your body has an autonomic nervous system; one of its branches, the sympathetic nervous system, governs the fight-or-flight response, the body's natural protective mechanism. When faced with a life-threatening situation, the nervous system triggers the release of the hormone adrenaline into the blood, sending impulses to organs to create a specific response (typically increased heart rate, increased blood flow to muscles and dilated pupils). While the adrenaline rush can make people faster and stronger (hence the advantage to primitive humans), there is a downside to revving up your nervous system. In rare instances, if the adrenaline kick is too strong or lasts too long, the heart may overwork, causing tissue damage or constriction of blood vessels, which in turn raises blood pressure.

“This exaggerated response can actually damage the cardiovascular system in several ways,” Erwin added. In addition to raising the blood pressure and risking heart attack or stroke, it can cause more long-lasting damage to organs if these neuro-hormones are elevated over time or if there is an imbalance in the chemicals.

While it may be rare for a completely healthy individual to drop dead from fear, those with a predisposition to heart disease are at an increased risk of sudden death. “Some people with genetic heart abnormalities who get a sudden rush of adrenaline can have a cardiac arrhythmia,” Erwin said. “They can have an episode where their heart goes out of rhythm, and that can be fatal.” For example, if a woman with damaged heart tissue were held at gunpoint, she could experience fatal rhythm abnormalities, or her heart’s increased demand for oxygen might not be met because of blockages or abnormal responses in her blood vessels.

People who experience a great fear can also develop a condition called takotsubo syndrome, or broken-heart syndrome. Scientifically known as stress-induced cardiomyopathy, broken-heart syndrome can appear in healthy individuals with no prior cardiac problems. In rare cases, a rapid rise of stress hormones essentially ‘stuns’ the heart, and the suddenly weakened heart can’t pump enough blood to meet the body’s needs.

“We frequently run across this with psychological stress,” Erwin said. “People can develop a blood flow abnormality that can temporarily stun the heart or possibly leave the person with some degree of long-term damage to the heart.”


Credit: Marc Curran


What are some long-term effects of being scared?

It’s often said that what doesn’t kill you only makes you stronger, but that’s definitely not the case when it comes to repeated exposure to fear.

“Constant exposure to fear can be like a steady drip of water until it overflows,” Erwin said. “People who are chronically scared or anxious have a higher risk of developing high blood pressure or depression as well as many other physical ailments.”

Depression and fear can lie along the same emotional spectrum, as many people express fear rather than sadness as a sign of depression. And, unfortunately, depression and anxiety can also increase the odds of being scared to death.

“One symptom of depression, for example, is learned helplessness, or fear of things you can’t control,” Erwin said. “This fear and depression can exacerbate pre-existing medical problems or possibly make them more susceptible to other conditions by weakening their immune system.”

And while constant exposure to fear may lead to common heart problems or anxiety, there is a possibility that it can lead to even greater problems down the line.

“Research has shown that there is a higher risk of immunological problems such as cancer or other inflammatory problems,” Erwin said. “But either way, there are deleterious effects on the heart and other organs in a person with constant fear.”

Credit: Texas A&M Health Science Center

While working your heart muscle can be good for your health, constant exposure to fear does not have the same beneficial effects as a jog in the park.

“The chemical buildup that happens when you’re scared and when you’re exercising is different,” Erwin said. “The chemicals, such as adrenaline, are necessary, but when you exercise, you’re actually helping to maintain the healthy balance with other important chemicals. In a sense, you can ‘burn off’ some of the excess adrenaline as well.”

“There is no doubt that there is a small possibility of death or lasting complications from fear,” Erwin said. “Fear has its purpose in life, such as alerting you to danger, but in rare instances the scare is enough to be a danger in itself.”

While the odds of this happening are rare, it certainly puts a different spin on the famous line from Franklin D. Roosevelt: “The only thing we have to fear is fear itself.”




Contacts and sources:
Texas A&M Health Science Center