Unseen Is Free

Monday, February 20, 2017

Lithium-Sulfur Battery: The Next Big Leap in Portable Power


USC researchers may have just found a solution for one of the biggest stumbling blocks to the next wave of rechargeable batteries -- small enough for cellphones and powerful enough for cars.

In a paper published in the January issue of the Journal of the Electrochemical Society, Sri Narayan and Derek Moy of the USC Loker Hydrocarbon Research Institute outline how they developed an alteration to the lithium-sulfur battery that could make it more than competitive with the industry standard lithium-ion battery.

The lithium-sulfur battery, long thought to offer better energy storage capacity than its more popular lithium-ion counterpart, has been hampered by its short cycle life. Currently the lithium-sulfur battery can be recharged only 50 to 100 times -- impractical as an alternative energy source compared to the roughly 1,000 cycles of many rechargeable batteries on the market today.


This is a lithium-sulfur battery with a Mixed Conduction Membrane barrier to stop polysulfide shuttling.

Credit: Sri Narayan and Derek Moy


A small piece of material saves so much battery life

The solution devised by Narayan and lead author and research assistant Moy is something they call the "Mixed Conduction Membrane," or MCM, a small piece of non-porous, fabricated material sandwiched between two layers of porous separators, soaked in electrolytes and placed between the two electrodes.

The membrane works as a barrier in reducing the shuttling of dissolved polysulfides between anode and cathode, a process that increases the kind of cycle strain that has made the use of lithium-sulfur batteries for energy storage a challenge. The MCM still allows for the necessary movement of lithium ions, mimicking the process as it occurs in lithium-ion batteries. This novel membrane solution preserves the high-discharge rate capability and energy density without losing capacity over time.

At various rates of discharge, the researchers found that the lithium-sulfur batteries that made use of MCM led to 100 percent capacity retention and had up to four times longer life compared to batteries without the membrane.

"This advance removes one of the major technical barriers to the commercialization of the lithium-sulfur battery, allowing us to realize better options for energy efficiency," said Narayan, senior author and professor of chemistry at the USC Dornsife College of Letters, Arts and Sciences. "We can now focus our efforts on improving other parts of lithium-sulfur battery discharge and recharge that hurt the overall life cycle of the battery."

Cheap and abundant building blocks

Lithium-sulfur batteries have a host of advantages over lithium-ion batteries: They are made with abundant and cheap sulfur, and they are two to three times more energy-dense, which makes them both smaller and better at storing charge.

A lithium-sulfur battery would be ideal for saving space in mobile phones and computers, as well as allowing for weight reduction in future electric vehicles, including cars and even planes, further reducing reliance on fossil fuels, researchers said.

The actual MCM layer that Narayan and Moy devised is a thin film of lithiated cobalt oxide, though future alternative materials could produce even better results. According to Narayan and Moy, any substitute material used as an MCM must satisfy some fundamental criteria: The material must be non-porous, it should have mixed conduction properties and it must be electrochemically inert.



Contacts and sources:
Ian Chaffee
University of Southern California (USC)

Bee Decline Threatens US Crop Production: First US Wild Bee Map Reveals 139 'Trouble Zone' Counties

The first-ever study to map U.S. wild bees suggests they are disappearing in the country's most important farmlands -- from California's Central Valley to the Midwest's corn belt and the Mississippi River valley.

If wild bee declines continue, it could hurt U.S. crop production and farmers' costs, said Taylor Ricketts, a conservation ecologist at the University of Vermont, at the American Association for the Advancement of Science (AAAS) annual meeting panel, Plan Bee: Pollinators, Food Production and U.S. Policy on Feb. 19.

"This study provides the first national picture of wild bees and their impacts on pollination," said Ricketts, Director of UVM's Gund Institute for Ecological Economics, noting that each year $3 billion of the U.S. economy depends on pollination from native pollinators like wild bees.

The first national study to map US wild bees suggests they're disappearing in many of the country's most important farmlands. Relatively low abundances are shown here in yellow; higher abundances in blue.
Credit: PNAS

At AAAS, Ricketts briefed scholars, policy makers, and journalists on how the national bee map, first published in the Proceedings of the National Academy of Sciences in late 2015, can help to protect wild bees and pinpoint habitat restoration efforts.

At the event, Ricketts also introduced a new mobile app that he is co-developing to help farmers upgrade their farms to better support wild bees.

"Wild bees are a precious natural resource we should celebrate and protect," said Ricketts, Gund Professor in UVM's Rubenstein School of Environment and Natural Resources. "If managed with care, they can help us continue to produce billions of dollars in agricultural income and a wonderful diversity of nutritious food."

TROUBLE ZONES

The map identifies 139 counties in key agricultural regions of California, the Pacific Northwest, the upper Midwest and Great Plains, west Texas, and the Mississippi River valley that appear to have the most worrisome mismatch between falling wild bee supply and rising crop pollination demand.

These counties tend to be places that grow specialty crops -- like almonds, blueberries and apples -- that are highly dependent on pollinators. Or they are counties that grow less dependent crops -- like soybeans, canola and cotton -- in very large quantities.

Of particular concern, some crops most dependent on pollinators -- including pumpkins, watermelons, pears, peaches, plums, apples and blueberries -- appeared to have the strongest pollination mismatch, growing in areas with dropping wild bee supply and rising pollination demand.

Globally, more than two-thirds of the most important crops either benefit from or require pollinators, including coffee, cacao, and many fruits and vegetables.

Pesticides, climate change and diseases threaten wild bees -- but their decline may be caused by the conversion of bee habitat into cropland, the study suggests. In 11 key states where the map shows bees in decline, the amount of land tilled to grow corn spiked by 200 percent in five years -- replacing grasslands and pastures that once supported bee populations.

RISING DEMAND, FALLING SUPPLY

Over the last decade, honeybee keepers facing colony losses have struggled with rising demand for commercial pollination services, pushing up the cost of managed pollinators -- and the importance of wild bees.

A new study of wild bees identifies 139 counties in key agricultural regions of California, the Pacific Northwest, the Midwest, west Texas and the Mississippi River valley that face a worrisome mismatch between falling wild bee supply and rising crop pollination demand.
Credit: PNAS

"Most people can think of one or two types of bee, but there are 4,000 species in the U.S. alone," said Insu Koh, a UVM postdoctoral researcher who co-hosted the AAAS panel and led the study.

"When sufficient habitat exists, wild bees are already contributing the majority of pollination for some crops," Koh adds. "And even around managed pollinators, wild bees complement pollination in ways that can increase crop yields."

MAKING THE MAPS

A team of seven researchers -- from UVM, Franklin and Marshall College, University of California at Davis, and Michigan State University -- created the maps by first identifying 45 land-use types from two federal land databases, including croplands and natural habitats. Then they gathered detailed input from national and state bee experts about the suitability of each land-use type for providing wild bees with nesting and food resources.

The scientists built a bee habitat model that predicts the relative abundance of wild bees for every area of the contiguous United States, based on their quality for nesting and feeding from flowers. Finally, the team checked and validated their model against bee collections and field observations in many actual landscapes.
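
In schematic form, such a model scores each landscape cell by combining its nesting suitability with the floral resources available nearby. The Python sketch below is illustrative only: the land-use classes and 0-1 suitability scores are invented stand-ins, not the expert-derived values used in the study.

    # Toy sketch of a land-use-based wild bee habitat model.
    # Classes and scores are illustrative, not the study's values.
    land_use_suitability = {
        # class: (nesting quality, floral resources), each on a 0-1 scale
        "corn_cropland": (0.1, 0.2),
        "grassland": (0.8, 0.7),
        "orchard": (0.4, 0.9),
    }

    def relative_bee_abundance(cells):
        """Score a landscape, given as a list of land-use class names."""
        floral = [land_use_suitability[c][1] for c in cells]
        mean_floral = sum(floral) / len(floral)  # crude stand-in for a foraging-range average
        # A cell supports bees if they can nest in it and find food around it.
        nesting = [land_use_suitability[c][0] for c in cells]
        return sum(n * mean_floral for n in nesting) / len(cells)

    print(relative_bee_abundance(["grassland", "corn_cropland", "orchard"]))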

THE GOOD NEWS

"The good news about bees," said Ricketts, "is now that we know where to focus conservation efforts, paired with all we know about what bees need, habitat-wise, there is hope for preserving wild bees."



Contacts and sources:
Basil Waugh
University of Vermont

Touchless Gestures To Control Cellphones and Other Devices; Ambient Light Will Charge Them

Cellphones and other devices could soon be controlled with touchless gestures and charge themselves using ambient light, thanks to new LED arrays that can both emit and detect light.

Made of tiny nanorods arrayed in a thin film, the LEDs could enable new interactive functions and multitasking devices. Researchers at the University of Illinois at Urbana-Champaign and Dow Electronic Materials in Marlborough, Massachusetts, report the advance in the Feb. 10 issue of the journal Science.

“These LEDs are the beginning of enabling displays to do something completely different, moving well beyond just displaying information to be much more interactive devices,” said Moonsub Shim, a professor of materials science and engineering at the U. of I. and the leader of the study. “That can become the basis for new and interesting designs for a lot of electronics.”

A laser stylus writes on a small array of multifunction pixels made of dual-function LEDs that can both emit and respond to light.

Photo courtesy of Moonsub Shim

The tiny nanorods, each measuring less than 5 nanometers in diameter, are made of three types of semiconductor material. One type emits and absorbs visible light. The other two semiconductors control how charge flows through the first material. The combination is what allows the LEDs to emit, sense and respond to light.

The nanorod LEDs are able to perform both functions by quickly switching back and forth from emitting to detecting. They switch so fast that, to the human eye, the display appears to stay on continuously – in fact, it’s three orders of magnitude faster than standard display refresh rates. Yet the LEDs are also near-continuously detecting and absorbing light, and a display made of the LEDs can be programmed to respond to light signals in a number of ways.
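
For scale (assuming a typical 60 Hz display refresh rate as the baseline; the article does not give one), three orders of magnitude faster means roughly

\[
f_{\text{switch}} \approx 10^{3} \times 60\ \text{Hz} = 60\ \text{kHz},
\]

an emit/detect cycle of about 17 microseconds instead of a 16.7-millisecond frame, far too fast for the eye to follow.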

For example, a display could automatically adjust brightness in response to ambient light conditions – on a pixel-by-pixel basis.


Professor Moonsub Shim, postdoctoral researcher Seongyong Cho and collaborators developed dual-function nanorod LEDs that could be the basis for multifunctional device displays.
Photo by L. Brian Stauffer

“You can imagine sitting outside with your tablet, reading. Your tablet will detect the brightness and adjust it for individual pixels,” Shim said. “Where there’s a shadow falling across the screen it will be dimmer, and where it’s in the sun it will be brighter, so you can maintain steady contrast.”

The researchers demonstrated pixels that automatically adjust brightness, as well as pixels that respond to an approaching finger, which could be integrated into interactive displays that respond to touchless gestures or recognize objects.

They also demonstrated arrays that respond to a laser stylus, which could be the basis of smart whiteboards, tablets or other surfaces for writing or drawing with light. And the researchers found that the LEDs not only respond to light, but can convert it to electricity as well.

“The way it responds to light is like a solar cell. So not only can we enhance interaction between users and devices or displays, now we can actually use the displays to harvest light,” Shim said. “So imagine your cellphone just sitting there collecting the ambient light and charging. That’s a possibility without having to integrate separate solar cells. We still have a lot of development to do before a display can be completely self-powered, but we think that we can boost the power-harvesting properties without compromising LED performance, so that a significant amount of the display’s power is coming from the array itself.”

In addition to interacting with users and their environment, nanorod LED displays can interact with each other as large parallel communication arrays. It would be slower than device-to-device technologies like Bluetooth, Shim said, but those technologies are serial – they can only send one bit at a time. Two LED arrays facing each other could communicate with as many bits as there are pixels in the screen.
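
A back-of-the-envelope comparison shows why that parallelism matters. Every number in this Python sketch is hypothetical, since the article quotes no data rates:

    # Hypothetical throughput: serial link vs. pixel-parallel LED array link.
    serial_bits_per_transfer = 1     # a serial protocol sends one bit at a time
    pixels = 1920 * 1080             # a full-HD array sends one bit per pixel
    transfers_per_second = 1000      # assume both links cycle at the same rate

    serial_rate = serial_bits_per_transfer * transfers_per_second
    parallel_rate = pixels * transfers_per_second
    print(f"serial:   {serial_rate:,} bit/s")
    print(f"parallel: {parallel_rate:,} bit/s ({pixels:,}x more per transfer)")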

“We primarily interface with our electronic devices through their displays, and a display’s appeal resides in the user’s experience of viewing and manipulating information,” said study coauthor Peter Trefonas, a corporate fellow in Electronic Materials at The Dow Chemical Company. “The bidirectional capability of these new LED materials could enable devices to respond intelligently to external stimuli in new ways. The potential for touchless gesture control alone is intriguing, and we’re only scratching the surface of what could be possible.”

The researchers did all their demonstrations with arrays of red LEDs. They are now working on methods to pattern three-color displays with red, blue and green pixels, as well as working on ways to boost the light-harvesting capabilities by adjusting the composition of the nanorods.

This work was supported by a collaborative research effort between the Dow Chemical Company and the University of Illinois, with the aim of advancing technologies important to industry. The National Science Foundation also supported this work.



Contacts and sources:
Liz Ahlberg Touchstone
University of Illinois at Urbana-Champaign

Engine Produces Hydrogen from Methane and Captures CO2



When is an internal combustion engine not an internal combustion engine? When it’s been transformed into a modular reforming reactor that could make hydrogen available to power fuel cells wherever there’s a natural gas supply available.

By adding a catalyst, a hydrogen separating membrane and carbon dioxide sorbent to the century-old four-stroke engine cycle, researchers have demonstrated a laboratory-scale hydrogen reforming system that produces the green fuel at relatively low temperature in a process that can be scaled up or down to meet specific needs.

The process could provide hydrogen at the point of use for residential fuel cells or neighborhood power plants, electricity and power production in natural-gas powered vehicles, fueling of municipal buses or other hydrogen-based vehicles, and supplementing intermittent renewable energy sources such as photovoltaics.

Georgia Tech researchers have demonstrated a CHAMP reactor, which uses the four-stroke engine cycle to create hydrogen while simultaneously capturing carbon dioxide emissions.
Credit: Candler Hobbs, Georgia Tech

Known as the CO2/H2 Active Membrane Piston (CHAMP) reactor, the device operates at temperatures much lower than conventional steam reforming processes, consumes substantially less water and could also operate on other fuels such as methanol or bio-derived feedstock. It also captures and concentrates carbon dioxide emissions, a by-product that now lacks a secondary use – though that could change in the future.

Unlike conventional engines that run at thousands of revolutions per minute, the reactor operates at only a few cycles per minute – or more slowly – depending on the reactor scale and required rate of hydrogen production. And there are no spark plugs because there’s no fuel combusted.

“We already have a nationwide natural gas distribution infrastructure, so it’s much better to produce hydrogen at the point of use rather than trying to distribute it,” said Andrei Fedorov, a Georgia Institute of Technology professor who’s been working on CHAMP since 2008. “Our technology could produce this fuel of choice wherever natural gas is available, which could resolve one of the major challenges with the hydrogen economy.”

A paper published February 9 in the journal Industrial & Engineering Chemistry Research describes the operating model of the CHAMP process, including a critical step of internally adsorbing carbon dioxide, a byproduct of the methane reforming process, so it can be concentrated and expelled from the reactor for capture, storage or utilization.

Other implementations of the system have been reported as thesis work by three Georgia Tech Ph.D. graduates since the project began in 2008. The research was supported by the National Science Foundation, the Department of Defense through NDSEG fellowships, and the U.S. Civilian Research & Development Foundation (CRDF Global).

Key to the reaction process is the variable volume provided by a piston rising and falling in a cylinder. As with a conventional engine, a valve controls the flow of gases into and out of the reactor as the piston moves up and down. The four-stroke system works like this:
1. Natural gas (methane) and steam are drawn into the reaction cylinder through a valve as the piston inside is lowered. The valve closes once the piston reaches the bottom of the cylinder.
2. The piston rises into the cylinder, compressing the steam and methane as the reactor is heated. Once it reaches approximately 400 degrees Celsius, catalytic reactions take place inside the reactor, forming hydrogen and carbon dioxide. The hydrogen exits through a selective membrane, and the pressurized carbon dioxide is adsorbed by the sorbent material, which is mixed with the catalyst.
3. Once the hydrogen has exited the reactor and the carbon dioxide is tied up in the sorbent, the piston is lowered, reducing the volume (and pressure) in the cylinder. The carbon dioxide is released from the sorbent into the cylinder.
4. The piston is again moved up into the chamber and the valve opens, expelling the concentrated carbon dioxide and clearing the reactor for the start of a new cycle.

“All of the pieces of the puzzle have come together,” said Fedorov, a professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering. “The challenges ahead are primarily economic in nature. Our next step would be to build a pilot-scale CHAMP reactor.”

The project was begun to address some of the challenges to the use of hydrogen in fuel cells. Most hydrogen used today is produced in a high-temperature reforming process in which methane is combined with steam at about 900 degrees Celsius. The industrial-scale process requires as many as three water molecules for every molecule of methane, and the resulting low-density gas must be transported to where it will be used.

Fedorov’s lab first carried out thermodynamic calculations suggesting that the four-stroke process could be modified to produce hydrogen in relatively small amounts where it would be used. The goals of the research were to create a modular reforming process that could operate at between 400 and 500 degrees Celsius, use just two molecules of water for every molecule of methane to produce four hydrogen molecules, be able to scale down to meet the specific needs, and capture the resulting carbon dioxide for potential utilization or sequestration.
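
Those targets correspond to the overall sorption-enhanced steam methane reforming reaction (textbook stoichiometry combining reforming with the water-gas shift, not a formula quoted from the paper):

\[
\mathrm{CH_4} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CO_2} + 4\,\mathrm{H_2},
\]

with the carbon dioxide bound by the sorbent as it forms and the hydrogen drawn off through the membrane, which is what lets the reaction run to completion at 400 to 500 degrees Celsius instead of 900.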

“We wanted to completely rethink how we designed reactor systems,” said Fedorov. “To gain the kind of efficiency we needed, we realized we’d need to dynamically change the volume of the reactor vessel. We looked at existing mechanical systems that could do this, and realized that this capability could be found in a system that has had more than a century of improvements: the internal combustion engine.”

The CHAMP system could be scaled up or down to produce the hundreds of kilograms of hydrogen per day required for a typical automotive refueling station – or a few kilograms for an individual vehicle or residential fuel cell, Fedorov said. The volume and piston speed in the CHAMP reactor can be adjusted to meet hydrogen demands while matching the requirements for the carbon dioxide sorbent regeneration and separation efficiency of the hydrogen membrane. In practical use, multiple reactors would likely be operated together to produce a continuous stream of hydrogen at a desired production level.

“We took the conventional chemical processing plant and created an analog using the magnificent machinery of the internal combustion engine,” Fedorov said. “The reactor is scalable and modular, so you could have one module or a hundred modules depending on how much hydrogen you needed. The processes for reforming fuel, purifying hydrogen and capturing carbon dioxide emission are all combined into one compact system.”

This publication is based on work supported by the National Science Foundation (NSF) CBET award 0928716, which was funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5), and by award 61220 of the U.S. Civilian Research & Development Foundation (CRDF Global) and by the National Science Foundation under Cooperative Agreement OISE-9531011. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF or CRDF Global. Graduate work of David M. Anderson, the first author on the paper, was conducted with government support under an award by the DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.



Contacts and sources:
John Toon
Georgia Institute of Technology

Citation: David M. Anderson, Thomas M. Yun, Peter A. Kottke and Andrei G. Fedorov, “Comprehensive Analysis of Sorption Enhanced Steam Methane Reforming in a Variable Volume Membrane Reactor,” (Industrial & Engineering Chemistry Research, 2017). http://dx.doi.org/10.1021/acs.iecr.6b04392


How To Build a Bio-Bot


Creating tiny muscle-powered robots that can walk or swim by themselves -- or better yet, when prompted -- is more complicated than it looks.

Rashid Bashir, the head of the bioengineering department at the University of Illinois, and Taher Saif, a professor of mechanical science and engineering at Illinois, will speak in Boston on the design and development of walking and swimming bio-bots at the annual meeting of the American Association for the Advancement of Science.

Tiny walking "bio-bots" are powered by muscle cells and controlled by an electric field.

Graphic by Janet Sinn-Hanlon, Design Group@VetMed


The symposium "Integrated Cellular Systems: Building Machines with Cells" was held Feb. 18 in at the Hynes Convention Center.  

Through the National Science Foundation-funded Emergent Behavior of Integrated Cellular Systems center, Bashir, Saif and colleagues have developed small, soft biological robots, dubbed "bio-bots," that can walk and swim on their own or when triggered by electrical or light signals. The researchers make a soft 3-D printed scaffold measuring a centimeter or two in length, seed it with muscle cells, and the cells self-organize to form functional tissues that make the bio-bots move.

"These machines are now viewed as partially living, with the ability to form, the ability to age and the ability to heal if there's an injury," Saif said. "Now that we've got them working, we are beginning to look back and try to understand how the cells organize themselves and what language they use to communicate. This is the developmental biology of living machines."

Miniature "bio-bots" developed at the University of Illinois are made of hydrogel and heart cells, but can walk on their own.

Photo by Elise A. Corbin

In the talk "How to Engineer a Living System," Bashir will describe the methods that the group has used to build the bio-bots and to direct their behavior.

"As engineers, we usually build with materials like wood, steel or silicon. Our focus here is to forward-engineer biological or cell-based systems," Bashir said. "The design is inspired by the muscle-tendon-bone complex found in nature. There's a skeleton or backbone, but made out of soft polymers similar to the ones used in contact lenses, so it can bend instead of needing joints like the body does."

Credit: University of Illinois

Bashir's group developed multiple designs to make bio-bots walk in certain directions and to control their motion with light or electrical currents.

In the talk "Engineered Living Micro Swimmers," Saif will describe bio-bots that swim and the physical and biological interactions that cause the cells to come into alignment. They form a single muscle unit that contracts to beat a tail, propelling the bio-bot through liquid.

"They align themselves in a direction where the tail of the swimmer can be bent most. Which is exactly what we wanted, although we did not pattern or direct them to do it," Saif said. "Why do they behave this way? If each cell beat at its own time, we wouldn't have the swimmer. What made them synchronize into a single entity?"

Bashir and Saif will share insights learned from these questions and more.

"The objective is not to make a walker and a swimmer, but to lay the scientific foundation so we have principles for building biological machines in the future," Saif said.



Contacts and sources:
Liz Ahlberg Touchstone
University of Illinois

Magnet Triggers Drug Release from Implant

University of British Columbia researchers have developed a magnetic drug implant—the first of its kind in Canada—that could offer an alternative for patients struggling with numerous pills or intravenous injections.

The device, a silicone sponge with magnetic carbonyl iron particles wrapped in a round polymer layer, measures just six millimetres in diameter. The drug is injected into the device and then surgically implanted in the area being treated. Passing a magnet over the patient’s skin activates the device by deforming the sponge and triggering the release of the drug into surrounding tissue through a tiny opening.

Size of the magnetic implant compared to the Canadian one-dollar coin. 
Credit: UBC

“Drug implants can be safe and effective for treating many conditions, and magnetically controlled implants are particularly interesting because you can adjust the dose after implantation by using different magnet strengths. Many other implants lack that feature,” said study author Ali Shademani, a PhD student in the biomedical engineering program at UBC.

Actively controlling drug delivery is particularly relevant for conditions like diabetes, where the required dose and timing of insulin varies from patient to patient, said co-author John K. Jackson, a research scientist in UBC’s faculty of pharmaceutical sciences.



“This device lets you release the actual dose that the patient needs when they need it, and it’s sufficiently easy to use that patients could administer their own medication one day without having to go to a hospital,” said Jackson.

The researchers tested their device on animal tissue in the lab using the prostate cancer drug docetaxel. They found that it was able to deliver the drug on demand even after repeated use. The drug also produced an effect on cancer cells comparable to that of freshly administered docetaxel, proving that drugs stored in the device stay effective.

Mu Chiao, Shademani’s supervisor and a professor of mechanical engineering at UBC, said the team is working on refining the device and narrowing down the conditions for its use.

“This could one day be used for administering painkillers, hormones, chemotherapy drugs and other treatments for a wide range of health conditions. In the next few years we hope to be able to test it for long-term use and for viability in living models,” said Chiao.



Contacts and sources:
Lou Corpuz-Bosshart
University of British Columbia

Citation: “Active regulation of on-demand drug delivery by magnetically triggerable microspouters” was recently published online in the journal Advanced Functional Materials.

Sunday, February 19, 2017

Yeast in Babies' Guts Increases Risk of Asthma


University of British Columbia microbiologists have found a yeast in the gut of new babies in Ecuador that appears to be a strong predictor that they will develop asthma in childhood. The new research furthers our understanding of the role microscopic organisms play in our overall health.

"Children with this type of yeast called Pichia were much more at risk of asthma," said Brett Finlay, a microbiologist at UBC. "This is the first time anyone has shown any kind of association between yeast and asthma."

In previous research, Finlay and his colleagues identified four gut bacteria in Canadian children that, if present in the first 100 days of life, seem to prevent asthma. In a followup to this study, Finlay and his colleagues repeated the experiment using fecal samples and health information from 100 children in a rural village in Ecuador.

Canada and Ecuador both have high rates of asthma with about 10 per cent of the population suffering from the disease.

Yeast linked to asthma in Ecuador

Credit: ubcpublicaffairs

They found that while gut bacteria play a role in preventing asthma in Ecuador, it was the presence of a microscopic fungus or yeast known as Pichia that was more strongly linked to asthma. Instead of helping to prevent asthma, however, the presence of Pichia in those early days puts children at risk.

Finlay also suggests there could be a link between the risk of asthma and the cleanliness of the environment for Ecuadorian children. As part of the study, the researchers noted whether children had access to clean water.

"Those that had access to good, clean water had much higher asthma rates and we think it is because they were deprived of the beneficial microbes," said Finlay. "That was a surprise because we tend to think that clean is good but we realize that we actually need some dirt in the world to help protect you."

Now Finlay's colleagues will re-examine the Canadian samples and look for the presence of yeast in the gut of infants. This technology was not available to the researchers when they conducted their initial study.



Contacts and sources:
Heather Amos
University of British Columbia

New Levitation Method Uses Heat and Cold To Lift a Variety of Materials

Although scientists have been able to levitate specific types of material, a pair of UChicago undergraduate physics students helped take the science to a new level.

Third-year Frankie Fung and fourth-year Mykhaylo Usatyuk led a team of UChicago researchers who demonstrated how to levitate a variety of objects—ceramic and polyethylene spheres, glass bubbles, ice particles, lint strands and thistle seeds—between a warm plate and a cold plate in a vacuum chamber.

“They made lots of intriguing observations that blew my mind,” said Cheng Chin, professor of physics, whose ultracold lab in the Gordon Center for Integrative Science was home to the experiments.

Researchers achieved levitation of lint among other particles 
Courtesy of Chin Lab


In their work, researchers achieved a number of levitation breakthroughs, in terms of duration, orientation and method: The levitation lasted for more than an hour, as opposed to a few minutes; stability was achieved radially and vertically, as opposed to just vertically; and it used a temperature gradient rather than light or a magnetic field. Their findings appeared Jan. 20 in Applied Physics Letters.

“Magnetic levitation only works on magnetic particles, and optical levitation only works on objects that can be polarized by light, but with our first-of-its-kind method, we demonstrate a method to levitate generic objects,” said Chin.

In the experiment, the bottom copper plate was kept at room temperature while a stainless steel cylinder filled with liquid nitrogen kept at negative 300 degrees Fahrenheit served as the top plate. The upward flow of heat from the warm to the cold plate kept the particles suspended indefinitely.

UChicago researchers achieved levitation of macroscopic objects between warm and cold plates in a vacuum chamber.

Photo byJean Lachat

“The large temperature gradient leads to a force that balances gravity and results in stable levitation,” said Fung, the study’s lead author. “We managed to quantify the thermophoretic force and found reasonable agreement with what is predicted by theory. This will allow us to explore the possibilities of levitating different types of objects.” (Thermophoresis refers to the movement of particles by means of a temperature gradient.)
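
In schematic form (an illustrative relation, not the paper's detailed expression), stable levitation requires the upward thermophoretic force produced by the temperature gradient to cancel the particle's weight:

\[
F_{\text{th}}(\nabla T) = mg,
\]

so each particle settles where the local gradient supplies exactly mg, which is why the gradient's strength and alignment are so critical.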

“Our increased understanding of the thermophoretic force will help us investigate the interactions and binding affinities between the particles we observed,” said Usatyuk, a study co-author. “We are excited about the future research directions we can follow with our system.”

The key to obtaining high levitation stability is the geometrical design of the two plates. A proper ratio of their sizes and vertical spacing allows the warm air to flow around and efficiently capture the levitated objects when they drift away from the center. Another sensitivity factor is that the thermal gradient needs to be pointing upward—even a misalignment of one degree will greatly reduce the levitation stability.

“Only within a narrow range of pressure, temperature gradient and plate geometric factors can we reach stable and long levitation,” Chin said. “Different particles also require fine adjustment of the parameters.”



The apparatus offers a new ground-based platform to investigate the dynamics of astrophysical, chemical and biological systems in a microgravity environment, according to the researchers.

Fourth-year Mykhaylo Usatyuk (left) and third-year Frankie Fung. 
Photo by Jean Lachat

Levitation of macroscopic particles in a vacuum is of particular interest due to its wide applications in space, atmospheric and astro-chemical research. And thermophoresis has been utilized in aerosol thermal precipitators, nuclear reactor safety and the manufacturing of optical fibers through vacuum deposition processes, which apply progressive layers of atoms or molecules during fabrication.

The new method is significant because it offers a new approach to manipulating small objects without contacting or contaminating them, said Thomas Witten, the Homer J. Livingston Professor Emeritus of Physics. “It offers new avenues for mass assembly of tiny parts for micro-electro-mechanical systems, for example, and to measure small forces within such systems.

“Also, it forces us to re-examine how ‘driven gases,’ such as gases driven by heat flow, can differ from ordinary gases,” he added. “Driven gases hold promise to create new forms of interaction between suspended particles.”

Levitation of materials in ground-based experiments provides an ideal platform for the study of particle dynamics and interactions in a pristine isolated environment, the paper concluded. Chin’s lab is now looking at how to levitate macroscopic substances greater than a centimeter in size, as well as how these objects interact or aggregate in a weightless environment. “There are ample research opportunities to which our talented undergraduate students can contribute,” Chin said.



Contacts and sources: 

University of Chicago

Citation: “Stable thermophoretic trapping of generic particles at low pressures,” by Frankie Fung, Mykhaylo Usatyuk, B. J. DeSalvo and Cheng Chin in Applied Physics Letters, Jan. 20, 2017. DOI 10.1063/1.4974489

Funding: National Science Foundation, Grainger Foundation and Enrico Fermi Institute.


Why Do Meteors Make Spooky Sounds?

When a meteor is about to conk your neighborhood and gives fair warning by emitting sizzling, rustling and hissing sounds as it descends, you might think that the universe is being sporting.

Sandia National Laboratories researcher Richard Spalding, recently deceased, examines the sky through which meteors travel.

But these auditory warnings, which do occur, seem contrary to the laws of physics if they are caused by the friction of the fast-moving meteor or asteroid plunging into Earth’s atmosphere. Because sound travels far slower than light, the sounds should arrive several minutes after the meteor hits, rather than accompany or even precede it.
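
A rough worked example makes the discrepancy concrete (the 30 km altitude is assumed for illustration, not taken from the article). Light from the fireball arrives essentially instantly, while sound covers the same distance in

\[
t = \frac{d}{v_{\text{sound}}} \approx \frac{30{,}000\ \text{m}}{343\ \text{m/s}} \approx 87\ \text{s},
\]

so any noise heard at the same moment as the flash cannot be an ordinary acoustic wave from the meteor itself.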

So maybe atmospheric shock waves from the meteors are not the cause of the spooky noises.
Photo by Randy Montoya 

Another theory is that the sounds are created by radio frequency emissions. That seems unlikely without designated receivers.

But what if the sounds are caused by the brilliant, pulsating light emitted by the asteroid as it burns up in Earth’s atmosphere?

In an article published Feb. 1 in the journal Scientific Reports, the late Sandia National Laboratories researcher Richard Spalding reasoned that such intense light could suddenly heat the surface of objects many miles away, which in turn heats the surrounding air. This could create sounds near the observer. Colleagues John Tencer, William Sweatt, Ben Conley, Roy Hogan, Mark Boslough and Gigi Gonzales, along with Pavel Spurny from the Astronomical Institute of the Czech Republic, experimentally demonstrated and analyzed that effect.

They found that objects with low conductivity, such as leaves, grass, dark paint and even hair, could rapidly warm and transmit heat into nearby air and generate pressure waves by subtle oscillations that create a variety of sounds. The process is called photoacoustic coupling.

Sounds concurrent with a meteor’s arrival “must be associated with some form of electromagnetic energy generated by the meteor, propagated to the vicinity of the observer and transduced into acoustic waves,” according to the article. “A succession of light-pulse-produced pressure waves can then manifest as sound to a nearby observer.”

This bolide appeared over the Flinders Ranges in the South Australian desert on the evening of 24 April 2011.
Credit: Wikimedia Commons

The experimenters exposed several materials, including dark cloths and a wig, to intense pulsing light akin to that produced by a fireball. The process produced faint sounds similar to rustling leaves or faint whispers. Computer models bear out the results.

A less extreme version of the photoacoustic effect had been observed in 1880 by Alexander Graham Bell when, testing the possibilities of light for long-distance phone transmissions, he intermittently interrupted sunlight shining on a variety of materials and noted the sounds produced.

Sandia National Laboratories is a multimission laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies and economic competitiveness.


Contacts and sources:
Neal Singer,
Sandia National Laboratories

How an Ice Age Paradox Could Inform Sea Level Rise Predictions

New findings from the University of Michigan explain an Ice Age paradox and add to the mounting evidence that climate change could bring higher seas than most models predict.

The study, published in Nature, shows how small spikes in the temperature of the ocean, rather than the air, likely drove the rapid disintegration cycles of the expansive ice sheet that once covered much of North America.

 The behavior of this ancient ice sheet—called Laurentide—has puzzled scientists for decades because its periods of melting and splintering into the sea occurred at the coldest times in the last Ice Age. Ice should melt when the weather is warm, but that's not what happened.

Credit: University of Michigan

"We've shown that we don't really need atmospheric warming to trigger large-scale disintegration events if the ocean warms up and starts tickling the edges of the ice sheets," said Jeremy Bassis, U-M associate professor of climate and space sciences and engineering. "It is possible that modern-day glaciers, not just the parts that are floating but the parts that are just touching the ocean, are more sensitive to ocean warming than we previously thought."

This mechanism is likely at work today on the Greenland ice sheet and possibly Antarctica. Scientists know this in part due to Bassis' previous work. Several years ago, he came up with a new, more accurate way to mathematically describe how ice breaks and flows. His model has led to a deeper understanding of how the Earth's store of ice could react to changes in air or ocean temperatures, and how that might translate to sea level rise.

Last year, other researchers used it to predict that melting Antarctic ice could raise sea levels by more than three feet, as opposed to the previous estimate that Antarctica would only contribute centimeters by 2100.

In the new study, Bassis and his colleagues applied a version of this model to the climate of the last Ice Age, which ended about 10,000 years ago. They used ice core and ocean-floor sediment records to estimate water temperature and how it varied. Their aim was to see if what's happening in Greenland today could describe the behavior of the Laurentide Ice Sheet.

Scientists refer to these bygone periods of rapid ice disintegration as Heinrich events: Icebergs broke off the edges of Northern Hemisphere ice sheets and flowed into the ocean, raising sea level by more than 6 feet over the course of hundreds of years. As the icebergs drifted and melted, dirt they carried settled onto the ocean floor, forming thick layers that can be seen in sediment cores across the North Atlantic basin. These unusual sediment layers are what allowed researchers to first identify Heinrich events.
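
For scale (taking 300 years as a representative duration, since the record says only "hundreds of years"), 6 feet is about 1.8 meters, giving

\[
\frac{1.8\ \text{m}}{300\ \text{yr}} \approx 6\ \text{mm/yr},
\]

roughly twice the present-day global mean rate of sea level rise of about 3 mm per year.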


Credit: University of Michigan

"Decades of work looking at ocean sediment records has shown that these ice sheet collapse events happened periodically during the last Ice Age, but it has taken a lot longer to come up with a mechanism that can explain why the Laurentide ice sheet collapsed during the coldest periods only. This study has done that," said geochemist and co-author Sierra Petersen, U-M research fellow in earth and environmental sciences.

Bassis and his colleagues set out to understand the timing and size of the Heinrich events. Through their simulations, they were able to predict both, and also to explain why some ocean warming events triggered Heinrich events and some did not. They even identified an additional Heinrich event that had previously been missed.

Heinrich events were followed by brief periods of rapid warming. The Northern Hemisphere warmed repeatedly by as much as 15 degrees Fahrenheit in just a few decades. The area would stabilize, but then the ice would slowly grow to its breaking point over the next thousand years. Their model was able to simulate these events as well.

Bassis' model takes into account how the Earth's surface reacts to the weight of the ice on top of it. Heavy ice depresses the planet's surface, at times pushing it below sea level. That's when the ice sheets are most vulnerable to warmer seas. But as a glacier retreats, the solid Earth rebounds out of the water again, stabilizing the system. From that point the ice sheet can begin to expand again.

"There is currently large uncertainty about how much sea level will rise and much of this uncertainty is related to whether models incorporate the fact that ice sheets break," Bassis said. "What we are showing is that the models we have of this process seem to work for Greenland, as well as in the past so we should be able to more confidently predict sea level rise."

He added that portions of Antarctica have similar geography to the Laurentide: the Pine Island and Thwaites glaciers, for example.

"We're seeing ocean warming in those region and we're seeing these regions start to change. In that area, they're seeing ocean temperature changes of about 2.7 degrees Fahrenheit," Bassis said. "That's pretty similar magnitude as we believe occurred in the Laurentide events, and what we saw in our simulations is that just a small amount of ocean warming can destabilize a region if it's in the right configuration, and even in the absence of atmospheric warming."

The study is called "Heinrich events triggered by ocean forcing and modulated by isostatic adjustment." The research is supported by the National Science Foundation and the National Oceanic and Atmospheric Administration.



Contacts and sources:
Nicole Casal Moore
University of Michigan 

New Mechanical Metamaterials Block Motion One Way and Push the Other Way

Engineers and scientists at The University of Texas at Austin and the AMOLF institute in the Netherlands have invented the first mechanical metamaterials that transfer motion effortlessly in one direction while blocking it in the other, as described in a paper published on Feb. 13 in Nature. The material can be thought of as a mechanical one-way shield that blocks energy from coming in but easily transmits it going out the other side.

The researchers developed the first nonreciprocal mechanical materials using metamaterials, which are synthetic materials with properties that cannot be found in nature.

Breaking the symmetry of motion may enable greater control on mechanical systems and improved efficiency. These nonreciprocal metamaterials can potentially be used to realize new types of mechanical devices: for example, actuators (components of a machine that are responsible for moving or controlling a mechanism) and other devices that could improve energy absorption, conversion and harvesting, soft robotics and prosthetics.

Credit: The University of Texas at Austin

The researchers’ breakthrough lies in the ability to overcome reciprocity, a fundamental principle governing many physical systems, which ensures that we get the same response when we push an arbitrary structure from opposite directions. This principle governs how signals of various forms travel in space and explains why, if we can send a radio or an acoustic signal, we can also receive it. In mechanics, reciprocity implies that motion through an object is transmitted symmetrically: If by pushing on side A we move side B by a certain amount, we can expect the same motion at side A when pushing B.
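
Stated symbolically (a schematic of the principle, not notation from the paper): if a force \(F\) applied at side A produces a displacement \(u_B\) at side B, reciprocity demands

\[
u_B(F\ \text{applied at A}) = u_A(F\ \text{applied at B}),
\]

and a structure can violate this equality only if, as described below, it is both asymmetric and responds nonlinearly to the applied force.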

“The mechanical metamaterials we created provide new elements in the palette that material scientists can use in order to design mechanical structures,” said Andrea Alù, a professor in the Cockrell School of Engineering and co-author of the paper. “This can be of extreme interest for applications in which it is desirable to break the natural symmetry with which the displacement of molecules travels in the microstructure of a material.”

During the past couple of years, Alù, along with Cockrell School research scientist Dimitrios Sounas and other members of their research team, have made exciting breakthroughs in the area of nonreciprocal devices for electromagnetics and acoustics, including the realization of first-of-their-kind nonreciprocal devices for sound, radio waves and light. While visiting the institute AMOLF in the Netherlands, they started a fruitful collaboration with Corentin Coulais, an AMOLF researcher, who recently has been developing mechanical metamaterials. Their close interaction led to this breakthrough.

The researchers first created a rubber-made, centimeter-scale metamaterial with a specifically tailored fishbone skeleton design. They tailored its design to meet the main conditions to break reciprocity, namely asymmetry and a response that is not linearly proportional to the exerted force.

“This structure provided us inspiration for the design of a second metamaterial, with unusually strong nonreciprocal properties,” Coulais said. “By substituting the simple geometrical elements of the fishbone metamaterial with a more intricate architecture made of connected squares and diamonds, we found that we can break very strongly the conditions for reciprocity, and we can achieve a very large nonreciprocal response.”

The material’s structure is a lattice of squares and diamonds that is completely homogeneous throughout the sample, like an ordinary material. However, each unit of the lattice is slightly tilted in a certain way, and this subtle difference dramatically controls the way the metamaterial responds to external stimuli.

“The metamaterial as a whole reacts asymmetrically, with one very rigid side and one very soft side,” Sounas said. “The relation between the unit asymmetry and the soft side location can be predicted by a very generic mathematical framework called topology. Here, when the architectural units lean left, the right side of the metamaterial will be very soft, and vice-versa.”

When the researchers apply a force on the soft side of the metamaterial, it easily induces rotations of the squares and diamonds within the structure, but only in the near vicinity of the pressure point, and the effect on the other side is small. Conversely, when they apply the same force on the rigid side, the motion propagates and is amplified throughout the material, with a large effect at the other side. As a result, pushing from the left or from the right results in very different responses, yielding a large nonreciprocity even for small applied forces.

The team is looking forward to leveraging these topological mechanical metamaterials for various applications, optimizing them, and carving devices out of them for applications in soft robotics, prosthetics and energy harvesting.

This research received funding from the Air Force Office of Scientific Research, the Office of Naval Research, the National Science Foundation, the Simons Foundation and the Netherlands Organization for Scientific Research.



Contacts and sources:
Andrea Alù, Professor of Electrical & Computer Engineering
The University of Texas at Austin

Examining the DNA of Exploding Stars

Imagine being able to view microscopic aspects of a classical nova, a massive stellar explosion on the surface of a white dwarf star (about as big as Earth), in a laboratory rather than from afar via a telescope.

Cosmic detonations of this scale and larger created many of the atoms in our bodies, says Michigan State University's Christopher Wrede, who presented at the American Association for the Advancement of Science meeting. A safe way to study these events in laboratories on Earth is to investigate the exotic nuclei or "rare isotopes" that influence them.

"Astronomers observe exploding stars and astrophysicists model them on supercomputers," said Wrede, assistant professor of physics at MSU's National Superconducting Cyclotron Laboratory. "At NSCL and, in the future at the Facility for Rare Isotope Beams, we're able to measure the nuclear properties that drive stellar explosions and synthesize the chemical elements - essential input for the models. Rare isotopes are like the DNA of exploding stars."

Nova of the star V838 Mon in the constellation Monoceros
Credit: NASA/ESA Hubble Space Telescope

Wrede's presentation explained how rare isotopes are produced and studied at MSU's NSCL, and how they shed light on the evolution of visible matter in the universe.

"Rare isotopes will help us to understand how stars processed some of the hydrogen and helium gas from the Big Bang into elements that make up solid planets and life," Wrede said. "Experiments at rare isotope beam facilities are beginning to provide the detailed nuclear physics information needed to understand our origins."

In a recent experiment, Wrede's team investigated stellar production of the radioactive isotope aluminum-26 present in the Milky Way. An injection of aluminum-26 into the nebula that formed the solar system could have influenced the amount of water on Earth.

MSU's Chris Wrede explains what it's like to view microscopic aspects of a classical nova, a massive stellar explosion on the surface of a white dwarf star (about as big as Earth), in a laboratory rather than from afar via a telescope.
Credit: MSU

Using a rare isotope beam created at NSCL, the team determined the last unknown nuclear-reaction rate affecting the production of aluminum-26 in classical novae.

They concluded that up to 30 percent of the galaxy's aluminum-26 could be produced in novae, and the rest must be produced in other sources like supernovae.

Future research can now focus on counting the number of novae in the galaxy per year, modeling the hydrodynamics of novae and investigating the other sources in complete nuclear detail.

To extend their reach to more extreme astrophysical events, nuclear scientists are continuing to improve their technology and techniques. Traditionally, stable ion beams have been used to measure nuclear reactions. For example, bombarding a piece of aluminum foil with a beam of protons can produce silicon atoms. However, exploding stars make radioactive isotopes of aluminum that would decay into other elements too quickly to make a foil target out of them.

"With FRIB, we will reverse the process; we'll create a beam of radioactive aluminum ions and use it to bombard a target of protons," Wrede said. "Once FRIB comes online, we will be able to measure many more of the nuclear reactions that affect exploding stars."



Contacts and sources:
Layne Cameron
Michigan State University

Drug SkQ1 Slows Aging, Works on Mice, May Hit Market in 2 to 3 Years, Says Russian Scientist


A group of Russian and Swedish scientists has just published a breakthrough paper reporting the results of a joint study by Lomonosov Moscow State University and Stockholm University. The article was published in the US journal Aging.

The major goal of the study was to investigate the role of intracellular power stations -- mitochondria -- in the ageing of an organism. Importantly, the scientists attempted to slow down ageing using a novel compound: the artificial antioxidant SkQ1, precisely targeted into mitochondria. The compound was developed at Moscow State University by Professor Vladimir Skulachev, the most cited Russian biologist.

A genetically-modified mouse participated in the experiment.

Credit: The A.N. Belozersky Institute Of Physico-Chemical Biology

Experiments involved a special strain of genetically-modified mice created and characterized in Sweden. A single mutation introduced into the genome of these mice results in substantially accelerated mutagenesis in mitochondria. This leads to accelerated ageing and early death of the mutant mice: they live less than one year, while a normal mouse lives more than two years. The mutation promotes the development of many age-related defects and diseases, indicating that the major defect of these mice is indeed ageing.

Starting from the age of 100 days, one group of mutant mice was treated with small doses of SkQ1 (approx. 12 micrograms) added to their drinking water. According to the scientists' hypothesis, the compound should protect animal cells from the toxic byproducts of mitochondria -- free radicals (reactive oxygen species). Another group of animals served as a control group, receiving pure water.

Differences between the two groups became obvious starting from the age of 200-250 days. Animals in the control group aged rapidly, as expected. They were losing weight, their body temperature decreased, severe curvature of the spine (a result of osteoporosis) and alopecia developed, their skin became thinner, and in females the estrus cycle was impaired. Finally, their mobility and oxygen consumption decreased. The development of all these typical traits of ageing was dramatically decelerated in the group treated with SkQ1. Some of the ageing traits did not appear in that group at all.

Professor Vladimir Skulachev, the creator of SkQ1 molecule design and co-author of this study, says: "This work is quite valuable from both theoretical and practical points of view. First, it clearly demonstrates the key role of mitochondrially produced reactive oxygen species in the process of ageing of mammals. At the same time our study opens the way to the treatment of ageing with mitochondrially targeted antioxidants. We are also very honored to cooperate within this project with such prominent Swedish scientists as prof. Barbara Cannon who has such title as the President of Royal Swedish Academy of Sciences in her CV and prof. Jan Nedergaard, Head of Wenner-Gren institute".

Prof. Skulachev's project is now developing a set of pharmaceuticals based on the SkQ1 molecule. The first drug -- Visomitin eye drops -- is already approved and marketed in Russia and has passed phase 2 clinical trials in the US. The next product in the project's pipeline is an oral form of SkQ1, similar to the one used in the experiments described above; it is now in clinical trials in Russia. If those trials succeed, such an "anti-ageing" drug could be approved for systemic indications in two to three years.



Contacts and sources: 
Vladimir Koryagin
Lomonosov Moscow State University 

Citation: Improved health-span and lifespan in mtDNA mutator mice treated with the mitochondrially targeted antioxidant SkQ1. Aging. http://dx.doi.org/10.18632/aging.101174

Researchers See DNA 'Blink' for the First Time

Many of the secrets of cancer and other diseases lie in the cell's nucleus. But getting way down to that level -- to see and investigate the important genetic material housed there -- requires creative thinking and extremely powerful imaging techniques.

Vadim Backman and Hao Zhang, nanoscale imaging experts at Northwestern University, have developed a new imaging technology that is the first to see DNA "blink," or fluoresce. The tool enables the researchers to study individual biomolecules as well as important global patterns of gene expression, which could yield insights into cancer.

A powerful Northwestern University imaging tool is the first to measure the structure of isolated chromosomes without the use of fluorescent labels.
Credit: Northwestern University

Backman discussed the tool and its applications -- including the new concept of macrogenomics, a technology that aims to regulate the global patterns of gene expression without gene editing -- Friday (Feb. 17) at the American Association for the Advancement of Science (AAAS) annual meeting in Boston.

The talk, "Label-Free Super-Resolution Imaging of Chromatin Structure and Dynamics," was part of the symposium "Optical Nanoscale Imaging: Unraveling the Chromatin Structure-Function Relationship."  

The Northwestern tool features six-nanometer resolution and is the first to break the 10-nanometer resolution threshold. It can image DNA, chromatin and proteins in cells in their native states, without the need for labels.

For decades, textbooks have stated that macromolecules within living cells, such as DNA, RNA and proteins, do not have visible fluorescence on their own.

"People have overlooked this natural effect because they didn't question conventional wisdom," said Backman, the Walter Dill Professor of Biomedical Engineering in the McCormick School of Engineering. "With our super-resolution imaging, we found that DNA and other biomolecules do fluoresce, but only for a very short time. Then they rest for a very long time, in a 'dark' state. The natural fluorescence was beautiful to see."

Backman, Zhang and collaborators now are using the label-free technique to study chromatin -- the bundle of genetic material in the cell nucleus -- to see how it is organized. Zhang is an associate professor of biomedical engineering at McCormick.

"Insights into the workings of the chromatin folding code, which regulates patterns of gene expression, will help us better understand cancer and its ability to adapt to changing environments," Backman said. "Cancer is not a single-gene disease."

Current technology for imaging DNA and other genetic material relies on special fluorescent dyes to enhance contrast when macromolecules are imaged. These dyes may perturb cell function, and some eventually kill the cells -- undesirable effects in scientific studies.

In contrast, the Northwestern technique, called spectroscopic intrinsic-contrast photon-localization optical nanoscopy (SICLON), allows researchers to study biomolecules in their natural environment, without the need for these fluorescent labels.

Backman, Zhang and Cheng Sun, an associate professor of mechanical engineering at McCormick, discovered that when illuminated with visible light, the biomolecules get excited and light up well enough to be imaged without fluorescent stains. When excited with the right wavelength, the biomolecules even light up better than they would with the best, most powerful fluorescent labels.

"Our technology will allow us and the broader research community to push the boundaries of nanoscopic imaging and molecular biology even further," Backman said.



Contacts and sources: 
Megan Fellman
Northwestern University


Join the Search for Planet Nine: Backyard Worlds Website Lets Public Search The Heavens

NASA is inviting the public to help search for possible undiscovered worlds in the outer reaches of our solar system and in neighboring interstellar space. 
A new website, called Backyard Worlds: Planet 9, lets everyone participate in the search by viewing brief movies made from images captured by NASA's Wide-field Infrared Survey Explorer (WISE) mission. The movies highlight objects that have gradually moved across the sky.
Caltech researchers have found evidence suggesting there may be a "Planet X" deep in the solar system. This hypothetical Neptune-sized planet orbits our sun in a highly elongated orbit far beyond Pluto. The object, which the researchers have nicknamed "Planet Nine," could have a mass about 10 times that of Earth and orbit about 20 times farther from the sun on average than Neptune. It may take between 10,000 and 20,000 Earth years to make one full orbit around the sun.
Credit: NASA
"There are just over four light-years between Neptune and Proxima Centauri, the nearest star, and much of this vast territory is unexplored," said lead researcher Marc Kuchner, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Because there's so little sunlight, even large objects in that region barely shine in visible light. But by looking in the infrared, WISE may have imaged objects we otherwise would have missed."
Join the search for new worlds in the outer reaches of our solar system and in nearby interstellar space at Backyard Worlds Planet 9.
Credit: NASA's Goddard Space Flight Center Conceptual Image Lab/Krystofer D.J. Kim
WISE scanned the entire sky between 2010 and 2011, producing the most comprehensive survey at mid-infrared wavelengths currently available. With the completion of its primary mission, WISE was shut down in 2011. It was then reactivated in 2013 and given a new mission assisting NASA's efforts to identify potentially hazardous near-Earth objects (NEOs), which are asteroids and comets on orbits that bring them into the vicinity of Earth’s orbit. The mission was renamed the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE).

The new website uses the data to search for unknown objects in and beyond our own solar system. In 2016, astronomers at Caltech in Pasadena, California, showed that several distant solar system objects possessed orbital features indicating they were affected by the gravity of an as-yet-undetected planet, which the researchers nicknamed "Planet Nine." If Planet Nine — also known as Planet X — exists and is as bright as some predictions, it could show up in WISE data.

Planet X has not yet been discovered, and there is debate in the scientific community about whether it exists. The prediction in the Jan. 20 issue of the Astronomical Journal is based on mathematical modeling.
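
Those orbital figures are straightforward to sanity-check: for anything orbiting the sun, Kepler's third law ties the period to the average distance. A minimal check in Python, assuming a semi-major axis of about 20 times Neptune's roughly 30 astronomical units (the article's figures, not new measurements):

    # Kepler's third law for a solar orbit: P [years] = a [AU] ** 1.5
    NEPTUNE_AU = 30.1            # Neptune's mean distance from the sun
    a = 20 * NEPTUNE_AU          # "about 20 times farther ... than Neptune"
    period = a ** 1.5
    print(f"a = {a:.0f} AU -> P = {period:,.0f} years")
    # Output: a = 602 AU -> P = 14,770 years, inside the quoted
    # 10,000-to-20,000-year range.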


The search also may discover more distant objects like brown dwarfs, sometimes called failed stars, in nearby interstellar space.

"Brown dwarfs form like stars but evolve like planets, and the coldest ones are much like Jupiter," said team member Jackie Faherty, an astronomer at the American Museum of Natural History in New York. "By using Backyard Worlds: Planet 9, the public can help us discover more of these strange rogue worlds."

A previously cataloged brown dwarf named WISE 0855−0714 shows up as a moving orange dot (upper left) in this loop of WISE images spanning five years. By viewing movies like this, anyone can help discover more of these objects.
Credit: NASA/WISE
Unlike more distant objects, those in or closer to the solar system appear to move across the sky at different rates. The best way to discover them is through a systematic search of moving objects in WISE images. While parts of this search can be done by computers, machines are often overwhelmed by image artifacts, especially in crowded parts of the sky. These include brightness spikes associated with star images and blurry blobs caused by light scattered inside WISE's instruments.
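
To give a flavor of the automated side of such a search, the short Python sketch below fits a line to a source's position across several epochs and flags fast apparent movers. It is an illustration, not the WISE pipeline; the positions, threshold, and function name are invented:

    import numpy as np

    def apparent_motion(epochs_yr, ra_deg, dec_deg):
        """Linear apparent motion in arcsec/year from per-epoch positions."""
        mu_ra = np.polyfit(epochs_yr, np.asarray(ra_deg) * 3600.0, 1)[0]
        mu_dec = np.polyfit(epochs_yr, np.asarray(dec_deg) * 3600.0, 1)[0]
        return np.hypot(mu_ra, mu_dec)

    # Five epochs of a hypothetical detection drifting across the sky.
    epochs = [2010.3, 2011.0, 2013.5, 2014.2, 2015.1]
    ra = [150.00000, 150.00004, 150.00018, 150.00022, 150.00027]
    dec = [2.00000, 2.00002, 2.00009, 2.00011, 2.00014]

    mu = apparent_motion(epochs, ra, dec)
    if mu > 0.1:   # flag anything faster than 0.1 arcsec/year
        print(f"candidate mover: {mu:.2f} arcsec/year")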

Backyard Worlds: Planet 9 relies on human eyes because we easily recognize the important moving objects while ignoring the artifacts. It's a 21st-century version of the technique astronomer Clyde Tombaugh used to find Pluto in 1930, a discovery made 87 years ago this week.

On the website, people around the world can work their way through millions of "flipbooks," which are brief animations showing how small patches of the sky changed over several years. Moving objects flagged by participants will be prioritized by the science team for follow-up observations by professional astronomers. Participants will share credit for their discoveries in any scientific publications that result from the project.

"Backyard Worlds: Planet 9 has the potential to unlock once-in-a-century discoveries, and it's exciting to think they could be spotted first by a citizen scientist," said team member Aaron Meisner, a postdoctoral researcher at the University of California, Berkeley, who specializes in analyzing WISE images.

Backyard Worlds: Planet 9 is a collaboration between NASA, UC Berkeley, the American Museum of Natural History in New York, Arizona State University, the Space Telescope Science Institute in Baltimore, and Zooniverse, a collaboration of scientists, software developers and educators who collectively develop and manage citizen science projects on the internet.

NASA's Jet Propulsion Laboratory in Pasadena, California, manages and operates WISE for NASA's Science Mission Directorate. The WISE mission was selected competitively under NASA's Explorers Program managed by the agency's Goddard Space Flight Center. The science instrument was built by the Space Dynamics Laboratory in Logan, Utah. The spacecraft was built by Ball Aerospace & Technologies Corp. in Boulder, Colorado. Science operations and data processing take place at the Infrared Processing and Analysis Center at Caltech, which manages JPL for NASA.

For more information about Backyard Worlds: Planet 9, visit: http://backyardworlds.org
For more information about NASA's WISE mission, visit: http://www.nasa.gov/wise


Contacts and sources:
Francis Reddy
NASA's Goddard Space Flight Center

Listen To Your Gut: 100 Trillion Bacteria Cross-Talking to the Immune System

The human gut is home to some 100 trillion bacteria, comprising between 250 and 500 species. This astounding array of organisms, collectively known as the gut microbiome, is a powerful regulator of disease and health and has been implicated in conditions ranging from inflammatory bowel disease to multiple sclerosis.

Gut microbes engage in an intricately choreographed conversation with the immune system, stimulating it just enough to keep disease-causing invaders at bay, while at the same time reining it in so it doesn’t mistakenly launch an attack on the body.

So far, scientists have been able to listen to bits and pieces of the conversation between bacteria and individual immune cells or a handful of genes.

Gut Bacteria
Credit: ChrisChrisW/Getty images

Now, for the first time, scientists from Harvard Medical School have managed to “listen in” on the crosstalk between individual microbes and the entire cast of immune cells and genes expressed in the gut.

The experiments, published Feb. 16 in Cell, provide a blueprint for identifying important microbial influencers of disease and health and can help scientists develop precision-targeted treatments.

Past research has looked at links between disease and the presence or absence of certain classes of bacteria in the gut. By contrast, the HMS team homed in on one microbe at a time and its effects on nearly all immune cells and intestinal genes, an approach that offers a more precise understanding of the interplay between individual gut microbes and their hosts. Beyond that, the team said, the approach could help scientists screen for molecules or bacterial strains that can be used therapeutically to fine-tune certain immune responses.

“We set out to map out interactions between bacteria and the immune system in the hope that this could eventually lead to the development of an apothecary of agents tailored to modulate the immune system selectively and precisely,” said senior investigator Dennis Kasper, professor of medicine and microbiology and immunobiology at HMS.

Such an apothecary, Kasper added, is years away, but the results set the stage for therapeutic discoveries.

For the work, Kasper’s microbiology team collaborated with immunologists from the HMS lab run by Diane Mathis and Christophe Benoist.

“This was an example of true cross-pollination of expertise and knowledge,” said Mathis, a professor of microbiology and immunobiology at HMS. “This research took place at the intersection of microbiology, immunology and genetics, which is illustrative of the complex and synergistic ways in which multiple organs and organ systems operate in the body.”

Harnessing naturally occurring microbes or molecules to modulate immune response holds the promise of better precision-targeted immune therapies. Such therapies, the researchers say, could deliver optimal benefits with minimal to no toxic side effects.

“Because we observed microbial effects mainly in the gut, we believe that a microbe-based therapy would avoid the collateral damage seen with drugs that wipe out classes of immune cells across the body,” said Benoist, a professor of microbiology and immunobiology at HMS.

Additionally, he said, modulating the gut immune system may also have broader beneficial effects because gut immunity has been linked to several autoimmune diseases, including rheumatoid arthritis, Crohn’s disease and diabetes.

For their experiments, the team collected 53 common bacterial species from human guts and seeded them in sterile mouse guts, one microbe at a time. Two weeks later, the scientists performed immune and genomic analyses, comparing the results with those of mice whose guts were completely microbe-free. Scientists assessed each microbe’s effects on 21 types of immune cells and on the activity of the entire cast of genes that regulate intestinal immunity.
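
In data terms, the core comparison is simple: for each monocolonized group, measure how far each immune readout moved from the germ-free baseline. A toy Python sketch with invented microbe names, cell types and counts (the real study covered 53 species and 21 cell types):

    import numpy as np

    cell_types = ["Treg", "pDC", "Th17"]
    germ_free = np.array([100.0, 80.0, 60.0])   # mean counts, germ-free mice

    colonized = {                               # means after monocolonization
        "microbe A": np.array([240.0, 95.0, 55.0]),
        "microbe B": np.array([110.0, 150.0, 140.0]),
    }

    for microbe, counts in colonized.items():
        folds = counts / germ_free              # >1 boosts, <1 dampens
        report = ", ".join(f"{c}: {f:.1f}x" for c, f in zip(cell_types, folds))
        print(f"{microbe} -> {report}")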
The experiments yielded a few surprises.

Spectrum effect
Each immune cell type was affected by bacteria in a range of ways, the team observed. Some bacteria exerted a powerful influence, while others had far more subtle effects. Very few microbes produced no effect at all.

Some bacteria boosted the activity of certain cells, while others dampened the activity of the very same cells. These oppositional effects, the researchers say, suggest an evolutionary checks-and-balances mechanism to ensure that no single bacterium can overpower the others in its effects on the immune system.
Similarly, some bacteria upregulated certain genes, while others downregulated them, indicating that microbes can have balancing effects on intestinal gene expression.

“We believe that some microbes may upregulate certain genes to create a more hospitable environment for themselves, while others may downregulate certain ones to create a more hostile one for harmful bacteria,” Kasper said.

When researchers analyzed bacterial effects on genes that regulate the activity of cytokines—signaling molecules responsible for inducing inflammation in response to infection, cancer and other diseases—they once again observed the same balancing dynamic at play: Some bacteria boosted the activity of these genes while others turned it down.

Redundancy
Also, contrary to expectations, the researchers said, bacteria that belonged to the same class did not necessarily have the same, or even similar, effects on immune cells. That observation, researchers say, suggests an evolutionary fail-safe mechanism to ensure the preservation of key immune functions even if whole classes of bacteria are lost.

A quarter of the 53 bacteria studied potently boosted the numbers of immune cells known as regulatory T cells, which are responsible for taming inflammation and maintaining immune self-tolerance to shield the body from self-inflicted immune assault.
Another interesting observation, the researchers said, was that a single, little-known microbe, Fusobacterium varium, had, overall, the most powerful effect on immune cells across the board. These effects included suppression of naturally secreted antimicrobials and the ability to turn on several genes that promote inflammation.

The most potently affected class of immune cells was plasmacytoid dendritic cells, known to affect the function of regulatory T-cells and the secretion of interferons, naturally occurring proteins that fend off viruses. Thirty-eight percent of microbes boosted the levels of these dendritic cells, while 8 percent lowered their levels.

The team is currently studying microbial-immune interaction in a more complex context, analyzing the additive effects of several bacterial species at a time.

Co-investigators included Naama Geva-Zatorsky, Esen Sefik, Lindsay Kua, Lesley Pasman, Tze Guan Tan, Adriana Ortiz-Lopez, Tsering Bakto Yanortsang and Liang Yang, all of Harvard, and Ray Jupp of UCB Pharma in the United Kingdom.

This work was funded in part by UCB Pharma; by HFSP (LT00079/2012) and EMBO (ALTF 251-2011) fellowships; a Fulbright Award; a UNESCO L’Oreal National and International Women in Science Award; the Weizmann Institute of Science–Revson National Postdoctoral Award Program for Advancing Women in Science; by a fellowship from the Boehringer Ingelheim Fonds; by the National Science Foundation; and by an A*STAR Graduate Scholarship fellowship.


Contacts and sources:
Harvard Medical School

Efficient Power Converter for IoT Devised: Design Reduces Resting Power Consumption By 50%

The “internet of things” is the idea that vehicles, appliances, civil structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

Those sensors will have to operate at very low powers, in order to extend battery life for months or make do with energy harvested from the environment. But that means that they’ll need to draw a wide range of electrical currents. A sensor might, for instance, wake up every so often, take a measurement, and perform a small calculation to see whether that measurement crosses some threshold. Those operations require relatively little current, but occasionally, the sensor might need to transmit an alert to a distant radio receiver. That requires much larger currents.
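
Back-of-envelope arithmetic makes the design problem concrete. With illustrative numbers (not from the paper), a sensor that idles at 500 picoamps and fires a 1-milliamp radio for one second a day averages only about 12 nanoamps, yet its converter must stay efficient at both extremes:

    # Illustrative duty-cycle arithmetic for an IoT sensor node.
    sleep_current = 500e-12     # 500 pA while idle
    radio_current = 1e-3        # 1 mA while transmitting
    tx_seconds = 1.0            # one second of radio time per day
    day = 86_400.0              # seconds per day

    avg = (radio_current * tx_seconds
           + sleep_current * (day - tx_seconds)) / day
    print(f"average current: {avg * 1e9:.1f} nA")   # ~12.1 nA
    # The load itself still swings across the full 500 pA to 1 mA range.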

Researchers from MIT’s Microsystems Technologies Laboratories (MTL) have designed a new power converter that maintains its efficiency at currents ranging from 100 picoamps to 1 milliamp, a span that encompasses a millionfold increase in current levels.

Credit: MIT 

Generally, power converters, which take an input voltage and convert it to a steady output voltage, are efficient only within a narrow range of currents. But at the International Solid-State Circuits Conference last week, researchers from MIT's Microsystems Technology Laboratories (MTL) presented a new power converter that maintains its efficiency at currents ranging from 500 picoamps to 1 milliamp, a span that encompasses a 2,000,000-fold increase.

“Typically, converters have a quiescent power, which is the power that they consume even when they’re not providing any current to the load,” says Arun Paidimarri, who was a postdoc at MTL when the work was done and is now at IBM Research. “So, for example, if the quiescent power is a microamp, then even if the load pulls only a nanoamp, it’s still going to consume a microamp of current. My converter is something that can maintain efficiency over a wide range of currents.”

Paidimarri, who also earned doctoral and master’s degrees from MIT, is first author on the conference paper. He’s joined by his thesis advisor, Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT.

Packet perspective

The researchers’ converter is a step-down converter, meaning that its output voltage is lower than its input voltage. In particular, it takes input voltages ranging from 1.2 to 3.3 volts and reduces them to between 0.7 and 0.9 volts.

“In the low-power regime, the way these power converters work, it’s not based on a continuous flow of energy,” Paidimarri says. “It’s based on these packets of energy. You have these switches, and an inductor, and a capacitor in the power converter, and you basically turn on and off these switches.”

The control circuitry for the switches includes a circuit that measures the output voltage of the converter. If the output voltage is below some threshold — in this case, 0.9 volts — the controllers throw a switch and release a packet of energy. Then they perform another measurement and, if necessary, release another packet.

If no device is drawing current from the converter, or if the current is going only to a simple, local circuit, the controllers might release between 1 and a couple hundred packets per second. But if the converter is feeding power to a radio, it might need to release a million packets a second.
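
A toy simulation of that measure-and-release loop, with invented component values rather than the MIT design's, shows the packet rate scaling with the load: heavier loads drain the output capacitor faster, so the comparator trips more often.

    # Toy simulation of packet-based regulation: release a fixed packet
    # of charge whenever the output voltage sags below the threshold.
    V_THRESHOLD = 0.9        # volts; target output level
    C_OUT = 1e-6             # farads; output capacitor
    PACKET_CHARGE = 50e-9    # coulombs delivered per packet
    DT = 1e-6                # seconds between voltage checks

    def packets_per_second(load_current, steps=1_000_000):
        v, packets = V_THRESHOLD, 0
        for _ in range(steps):
            v -= load_current * DT / C_OUT      # load drains the capacitor
            if v < V_THRESHOLD:                 # comparator trips
                v += PACKET_CHARGE / C_OUT      # release one packet
                packets += 1
        return packets / (steps * DT)

    for load in (1e-6, 1e-4, 1e-3):             # 1 uA to 1 mA loads
        print(f"load {load:.0e} A -> {packets_per_second(load):,.0f} packets/s")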

To accommodate that range of outputs, a typical converter — even a low-power one — will simply perform 1 million voltage measurements a second; on that basis, it will release anywhere from 1 to 1 million packets. Each measurement consumes energy, but for most existing applications, the power drain is negligible. For the internet of things, however, it’s intolerable.

Clocking down

Paidimarri and Chandrakasan’s converter thus features a variable clock, which can run the switch controllers at a wide range of rates. That, however, requires more complex control circuits. The circuit that monitors the converter’s output voltage, for instance, contains an element called a voltage divider, which siphons off a little current from the output for measurement. In a typical converter, the voltage divider is just another element in the circuit path; it is, in effect, always on.

But siphoning current lowers the converter’s efficiency, so in the MIT researchers’ chip, the divider is surrounded by a block of additional circuit elements, which grant access to the divider only for the fraction of a second that a measurement requires. The result is a 50 percent reduction in quiescent power over even the best previously reported experimental low-power, step-down converter and a tenfold expansion of the current-handling range.
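
Rough arithmetic with invented values shows why gating the divider pays: its average draw then scales with the measurement rate instead of staying fixed.

    # Average current of a duty-cycled voltage divider vs. an always-on one.
    # The bias current and on-time are invented for illustration.
    DIVIDER_CURRENT = 50e-9   # amps while the divider is connected
    ON_TIME = 1e-6            # seconds the divider stays on per measurement

    for rate in (10, 1_000, 1_000_000):          # measurements per second
        duty = min(rate * ON_TIME, 1.0)          # fraction of time it is on
        avg = DIVIDER_CURRENT * duty
        print(f"{rate:>9,} meas/s -> {avg * 1e12:,.1f} pA average")
    # An always-on divider burns the full 50,000 pA (50 nA) regardless
    # of how rarely the converter actually needs a measurement.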

“This opens up exciting new opportunities to operate these circuits from new types of energy-harvesting sources, such as body-powered electronics,” Chandrakasan says.

“This work pushes the boundaries of the state of the art in low-power DC-DC converters, how low you can go in terms of the quiescent current, and the efficiencies that you can achieve at these low current levels,” says Yogesh Ramadass, the director of power management research at Texas Instruments’ Kilby Labs. “You don’t want your converter to burn up more than what is being delivered, so it’s essential for the converter to have a very low quiescent power state.”

The work was funded by Shell and Texas Instruments, and the prototype chips were built by the Taiwan Semiconductor Manufacturing Company through its University Shuttle Program.


Contacts and sources:
Larry Hardesty
Massachusetts Institute of Technology (MIT)