Thursday, April 25, 2019

Microbe Made Artificial Mother of Pearl Fit for Building Houses on the Moon

Bacteria have been put to work manufacturing artificial mother of pearl, and they are doing a good job. The strongest synthetic materials are often those that intentionally mimic nature, and microbe-made nacre is fit for building on the Moon.

One natural substance scientists have looked to in creating synthetic materials is nacre, also known as mother-of-pearl. An exceptionally tough, stiff material produced by some mollusks and serving as their inner shell layer, it also comprises the outer layer of pearls, giving them their lustrous shine.

This abalone shell is a natural form of nacre—also known as mother-of-pearl—an exceptionally tough material found in shells and pearls. Rochester biologists have developed an innovative method for creating nacre in the lab—and maybe on the moon. 
Credit: University of Rochester photo / J. Adam Fenster

But while nacre’s unique properties make it an ideal inspiration in the creation of synthetic materials, most methods used to produce artificial nacre are complex and energy intensive.

Now, a biologist at the University of Rochester has invented an inexpensive and environmentally friendly method for making artificial nacre using an innovative component: bacteria. The artificial nacre created by Anne S. Meyer, an associate professor of biology at Rochester, and her colleagues is made of biologically produced materials and has the toughness of natural nacre, while also being stiff and, surprisingly, bendable.

The method used to create the novel material could lead to new applications in medicine, engineering—and even constructing buildings on the moon.

Impressive mechanical properties

The impressive mechanical properties of natural nacre arise from its hierarchical, layered structure, which allows energy to disperse evenly across the material. In a paper published in the journal Small, Meyer and her colleagues outline their method of using two strains of bacteria to replicate these layers. When they examined the samples under an electron microscope, the structure created by the bacteria was layered similarly to nacre produced naturally by mollusks.

Although nacre-inspired materials have been created synthetically before, the methods used to make them typically involve expensive equipment, extreme temperatures, high-pressure conditions, and toxic chemicals, Meyer says. “Many people creating artificial nacre use polymer layers that are only soluble in nonaqueous solutions, an organic solvent, and then they have this giant bucket of waste at the end of the procedure that has to be disposed of.”

To produce nacre in Meyer’s lab, however, all researchers have to do is grow the bacteria and let the culture sit in a warm place.

From bacteria to nacre

In order to make the artificial nacre, Meyer and her team create alternating thin layers of crystallized calcium carbonate—like cement—and sticky polymer. They first take a glass or plastic slide and place it in a beaker containing the bacterium Sporosarcina pasteurii, a calcium source, and urea (the waste product the human body excretes in urine). This combination triggers the crystallization of calcium carbonate. To make the polymer layer, they place the slide into a solution of the bacterium Bacillus licheniformis, then let the beaker sit in an incubator.
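The calcium carbonate step relies on well-characterized chemistry: the urease enzyme of S. pasteurii hydrolyzes urea, and the resulting carbonate precipitates with the supplied calcium. As a sketch, these are the standard microbially induced calcite precipitation reactions, not equations taken from the paper itself:

```latex
\mathrm{CO(NH_2)_2} + 2\,\mathrm{H_2O}
  \;\xrightarrow{\text{urease}}\; 2\,\mathrm{NH_4^+} + \mathrm{CO_3^{2-}}
```

```latex
\mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} \;\longrightarrow\; \mathrm{CaCO_3}\!\downarrow
```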

The combination of the bacteria Sporosarcina pasteurii, a calcium source, and urea triggers the crystallization of calcium carbonate, pictured above in extreme close up.

Credit: University of Rochester / J. Adam Fenster

Right now it takes about a day to build up a layer, approximately five micrometers thick, of calcium carbonate and polymer. Meyer and her team are currently looking at coating other materials like metal with the nacre, and “we’re trying new techniques to make thicker, nacre-like materials faster and that could be the entire material itself,” Meyer says.
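At the reported rate of roughly one five-micrometer layer per day, a quick back-of-the-envelope estimate (our own, with an illustrative target thickness) shows why faster techniques matter:

```python
# Rough growth-time estimate for the bacterial nacre process,
# using the ~5 micrometer/day rate reported in the article.
LAYER_THICKNESS_UM = 5.0  # one calcium-carbonate + polymer layer per day

def days_to_grow(target_thickness_mm: float) -> float:
    """Days of incubation to reach a target thickness at one layer per day."""
    return target_thickness_mm * 1000.0 / LAYER_THICKNESS_UM

# An illustrative 0.5 mm free-standing sheet would need ~100 days
# at the current rate.
print(days_to_grow(0.5))  # → 100.0
```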

In order to make artificial nacre, Anne S. Meyer and her team use bacteria to create alternating thin layers of crystallized calcium carbonate and sticky polymer. Each layer is approximately five micrometers thick.

 Credit: University of Rochester photo / J. Adam Fenster

Building houses on the moon

One of the most beneficial characteristics of the nacre produced in Meyer’s lab is that it is biocompatible—made of materials the human body produces or that humans can eat naturally anyway. This makes the nacre ideal for medical applications like artificial bones and implants, Meyer says. “If you break your arm, for example, you might put in a metal pin that has to be removed with a second surgery after your bone heals. A pin made out of our material would be stiff and tough, but you wouldn’t have to remove it.”

And while the material is tougher and stiffer than most plastics, it is very lightweight, a quality that is especially valuable for transportation vehicles like airplanes, boats, or rockets, where every extra pound means extra fuel. Because producing bacterial nacre requires no complex instruments, and the nacre coating protects against chemical degradation and weathering, it holds promise for civil engineering applications like crack prevention, protective coatings for erosion control, and conservation of cultural artifacts. It could also be useful in the food industry as a sustainable packaging material.

The nacre might also be an ideal material to build houses on the moon and other planets: the only necessary “ingredients” would be an astronaut and a small tube of bacteria, Meyer says. “The moon has a large amount of calcium in the moon dust, so the calcium’s already there. The astronaut brings the bacteria, and the astronaut makes the urea, which is the only other thing you need to start making calcium carbonate layers.”

Associate professor of biology Anne S. Meyer. Meyer and her colleagues are using bacteria to replicate the hierarchical, layered structure of nacre to produce a synthetic material with the strength and flexibility of natural mother-of-pearl.

Credit: University of Rochester photo / J. Adam Fenster

Even beyond its qualities as an ideal structural material, nacre itself—as any pearl jewelry owner knows—is “very beautiful,” Meyer says, owing to its stacked layers. Each stacked layer is approximately as thick as a wavelength of visible light. When light hits the nacre, “the wavelengths of light interact with these layers of the same height so it bounces back off in the same wavelength as visible light.” While the bacterial nacre does not interact with visible light in this way, because its layers are thicker than those of natural nacre, it could interact with infrared wavelengths and bounce infrared off itself, Meyer says, which “may offer unique optical properties.”
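Meyer’s point about layer thickness and wavelength is a scaling argument: layers interact most strongly with light whose wavelength is comparable to their thickness. A minimal sketch of that comparison, using standard band edges and approximate layer thicknesses (illustrative values, not measurements from the paper):

```python
# Which band of light has wavelengths comparable to a given layer thickness?
# Band edges are standard values; layer thicknesses are approximate.
BANDS_UM = {            # (min, max) wavelength in micrometers
    "visible":  (0.38, 0.75),
    "infrared": (0.75, 1000.0),
}

def comparable_band(layer_thickness_um: float) -> str:
    """Return the band whose wavelengths bracket the layer thickness."""
    for band, (lo, hi) in BANDS_UM.items():
        if lo <= layer_thickness_um <= hi:
            return band
    return "other"

print(comparable_band(0.5))  # natural nacre layer  → visible
print(comparable_band(5.0))  # bacterial nacre layer → infrared
```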

Contacts and sources:
Lindsey Valich
University of Rochester

Citation: Bacterially Produced, Nacre‐Inspired Composite Materials.
Ewa M. Spiesz, Dominik T. Schmieden, Antonio M. Grande, Kuang Liang, Jakob Schwiedrzik, Filipe Natalio, Johann Michler, Santiago J. Garcia, Marie‐Eve Aubin‐Tam, Anne S. Meyer. Small, 2019; 1805312 DOI: 10.1002/smll.201805312

Supersolidity's Paradoxical State 50 Years in the Making: Matter Is Both Crystallized and Superfluid

An exotic phase of matter has just been made: supersolids, created from quantum gases, a state in which matter is both a crystal and a superfluid. It took 50 years to get from idea to realization.

Researchers led by Francesca Ferlaino at the University of Innsbruck and Austrian Academy of Sciences report in Physical Review X on the observation of supersolid behavior in dipolar quantum gases of erbium and dysprosium. In the dysprosium gas these properties are unprecedentedly long-lived. This sets the stage for future investigations into the nature of this exotic phase of matter.

Image: Several tens of thousands of particles spontaneously organize in a self-determined crystalline structure while sharing the same macroscopic wavefunction - hallmarks of supersolidity. 

Credit: Uni Innsbruck

Supersolidity is a paradoxical state in which matter is both crystallized and superfluid. Predicted 50 years ago, this counterintuitive phase, which combines seemingly antithetical properties, has long been sought in superfluid helium. However, after decades of theoretical and experimental efforts, an unambiguous proof of supersolidity in these systems is still missing. Two research teams led by Francesca Ferlaino, one at the Institute for Experimental Physics at the University of Innsbruck and one at the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, now report the observation of hallmarks of this exotic state in ultracold atomic gases.

While most work so far has focused on helium, researchers have recently turned to atomic gases—in particular, those with strong dipolar interactions. The team of Francesca Ferlaino has long been investigating quantum gases made of atoms with a strong dipolar character. “Recent experiments have revealed that such gases exhibit fundamental similarities with superfluid helium”, says Lauriane Chomaz, referring to experimental achievements in Innsbruck and in Stuttgart over the last few years. “These features lay the groundwork for reaching a state where the several tens of thousands of particles of the gas spontaneously organize in a self-determined crystalline structure while sharing the same macroscopic wavefunction - hallmarks of supersolidity.”

The researchers in Innsbruck experimentally created states showing these characteristics of supersolidity by tuning the interaction strength between the particles, in both erbium and dysprosium quantum gases. “While in erbium the supersolid behavior is only transient, in line with recent beautiful experiments in Pisa and in Stuttgart, our dysprosium realization shows an unprecedented stability”, says Francesca Ferlaino. “Here, the supersolid behavior not only lives long but can also be directly achieved via evaporative cooling, starting from a thermal sample.” Like blowing over a cup of tea, the principle here is to remove the particles that carry the most energy, so that the gas becomes cooler and cooler and finally reaches a quantum-degenerate stationary state with supersolid properties at thermal equilibrium.

This offers exciting prospects for near-future experiments and theories as the supersolid state in this setting is little affected by dissipative dynamics or excitations, thus paving the way for probing its excitation spectrum and its superfluid behavior. The work was financially supported by the Austrian Science Fund FWF, the Austrian Academy of Sciences and the European Union.

Contacts and sources:
University of Innsbruck

Citation: Long-Lived and Transient Supersolid Behaviors in Dipolar Quantum Gases.
L. Chomaz, D. Petter, P. Ilzhöfer, G. Natale, A. Trautmann, C. Politi, G. Durastante, R. M. W. van Bijnen, A. Patscheider, M. Sohmen, M. J. Mark, F. Ferlaino. Physical Review X, 2019; 9 (2) DOI: 10.1103/PhysRevX.9.021012

First Laser Radio Transmitter Opens New Era of Hybrid Electronic–Photonic Devices and Ultra-High-Speed Wi-Fi

"Since the days of Hertz, radio transmitters have evolved from rudimentary circuits emitting around 50 MHz to modern ubiquitous Wi-Fi devices operating at gigahertz radio bands." A laser radio transmitter opens the door to a new type of hybrid electronic–photonic device and ultra-high-speed Wi-Fi.

This device uses a frequency comb laser to emit and modulate microwaves wirelessly. The laser uses different frequencies of light beating together to generate microwave radiation. The “beats” emitted from the laser are reminiscent of a painting (right) by Spanish artist Joan Miró titled “Bleu II.” The researchers used this phenomenon to send a song wirelessly to a receiver.
Image courtesy of Marco Piccardo/Harvard SEAS

In a paper published in the Proceedings of the National Academy of Sciences, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) demonstrated a laser that can emit microwaves wirelessly, modulate them, and receive external radio frequency signals.

“The research opens the door to new types of hybrid electronic-photonic devices and is the first step toward ultra-high-speed Wi-Fi,” said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, at SEAS and senior author of the study.

This research builds off previous work from the Capasso Lab. In 2017, the researchers discovered that an infrared frequency comb in a quantum cascade laser could be used to generate terahertz frequencies, the submillimeter wavelengths of the electromagnetic spectrum that could move data hundreds of times faster than today’s wireless. In 2018, the team found that quantum cascade laser frequency combs could also act as integrated transmitters or receivers to efficiently encode information.

Now, the researchers have figured out a way to extract and transmit wireless signals from laser frequency combs.

Unlike conventional lasers, which emit a single frequency of light, laser frequency combs emit multiple frequencies simultaneously, evenly spaced to resemble the teeth of a comb. In 2018, the researchers discovered that inside the laser, the different frequencies of light beat together to generate microwave radiation. The light inside the cavity of the laser caused electrons to oscillate at microwave frequencies — which are within the communications spectrum.
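The microwave frequency in this scheme is simply the spacing between adjacent comb lines, which is set by the round trip of light in the laser cavity: f_rep = c / (2nL). A minimal sketch with assumed, typical semiconductor-laser values (cavity length and group index are illustrative, not figures from the paper):

```python
# Beat-note (repetition) frequency of a frequency comb: adjacent comb
# lines are spaced by c / (2 * n * L), so their beating produces a
# microwave at that frequency.
C = 2.998e8  # speed of light, m/s

def beat_frequency_ghz(cavity_length_m: float, group_index: float) -> float:
    """Comb line spacing (= microwave beat note) in GHz."""
    return C / (2.0 * group_index * cavity_length_m) / 1e9

# A few-millimeter semiconductor cavity puts the beat note near 10 GHz,
# squarely in the communications bands the article mentions.
f = beat_frequency_ghz(cavity_length_m=4e-3, group_index=3.3)
print(round(f, 1))  # → 11.4
```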

“If you want to use this device for Wi-Fi, you need to be able to put useful information in the microwave signals and extract that information from the device,” said Marco Piccardo, a postdoctoral fellow at SEAS and first author of the paper.

The first thing the new device needed to transmit microwave signals was an antenna. So, the researchers etched a gap into the top electrode of the device, creating a dipole antenna (like the rabbit ears on the top of an old TV). Next, they modulated the frequency comb to encode information on the microwave radiation created by the beating light of the comb. The antenna then radiates the microwaves, carrying the encoded information, out from the device. The radio signal is received by a horn antenna, filtered, and sent to a computer.

The researchers also demonstrated that the laser radio could receive signals. The team was able to remote control the behavior of the laser using microwave signals from another device.

“This all-in-one, integrated device, holds great promise for wireless communication,” said Piccardo. “While the dream of terahertz wireless communication is still a ways away, this research provides a clear roadmap showing how to get there.”

You’ve never heard Dean Martin like this. This recording of Martin’s classic “Volare” was transmitted wirelessly via a semiconductor laser — the first time a laser has been used as a radio frequency transmitter.

The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.

This research was co-authored by Michele Tamagnone, Benedikt Schwarz, Paul Chevalier, Noah A. Rubin, Yongrui Wang, Christine A. Wang, Michael K. Connors, Daniel McNulty and Alexey Belyanin. It was supported in part by the National Science Foundation.

Contacts and sources:
Leah Burrows
Harvard John A. Paulson School of Engineering and Applied Sciences

Citation: Radio frequency transmitter based on a laser frequency comb
Marco Piccardo, Michele Tamagnone, Benedikt Schwarz, Paul Chevalier, Noah A. Rubin, Yongrui Wang, Christine A. Wang, Michael K. Connors, Daniel McNulty, Alexey Belyanin, Federico Capasso. Proceedings of the National Academy of Sciences, 2019; 201903534 DOI: 10.1073/pnas.1903534116

Causes of Celestial Phenomenon Sky-Glow STEVE Sought

Scientists think they have found the causes of STEVE.

The celestial phenomenon known as STEVE is likely caused by a combination of heating of charged particles in the atmosphere and energetic electrons like those that power the aurora, according to new research. In a new study, scientists found STEVE’s source region in space and identified two mechanisms that cause it.

Amateur astronomer’s photograph used in the new research. The photograph was taken on 8 May 2016 in Keller, Washington. The major structures are two bands of upper atmospheric emissions 160 kilometers (100 miles) above the ground: a mauve arc and a green picket fence. The black objects at the bottom are trees. The background star constellations include Gemini and Ursa Major.

Credit: Rocky Raybell.

Last year, the obscure atmospheric lights became an internet sensation. Typical auroras, the northern and southern lights, are usually seen as swirling green ribbons spreading across the sky. But STEVE is a thin ribbon of pinkish-red or mauve-colored light stretching from east to west, farther south than where auroras usually appear. Even more strange, STEVE is sometimes joined by green vertical columns of light nicknamed the “picket fence.”

Auroras are produced by glowing oxygen and nitrogen atoms in Earth’s upper atmosphere, excited by charged particles streaming in from the near-Earth magnetic environment called the magnetosphere. Scientists didn’t know if STEVE was a kind of aurora, but a 2018 study found its glow is not due to charged particles raining down into Earth’s upper atmosphere.

The authors of the 2018 study dubbed STEVE a kind of “sky-glow” that is distinct from the aurora, but were unsure exactly what was causing it. Complicating the matter was the fact that STEVE can appear during solar-induced magnetic storms around Earth that power the brightest auroral lights.

Authors of a new study published in AGU’s journal Geophysical Research Letters analyzed satellite data and ground images of STEVE events and concluded that the reddish arc and green picket fence are two distinct phenomena arising from different processes. The picket fence is caused by a mechanism similar to typical auroras, but STEVE’s mauve streaks are caused by heating of charged particles higher up in the atmosphere, similar to what causes light bulbs to glow.

“Aurora is defined by particle precipitation, electrons and protons actually falling into our atmosphere, whereas the STEVE atmospheric glow comes from heating without particle precipitation,” said Bea Gallardo-Lacourt, a space physicist at the University of Calgary and co-author of the new study. “The precipitating electrons that cause the green picket fence are thus aurora, though this occurs outside the auroral zone, so it’s indeed unique.”

Alberta Aurora Chasers capture STEVE, the new-to-science upper atmospheric phenomenon, on the evening of April 10, 2018 in Prince George, British Columbia, Canada. Fellow Aurora Chaser Robert Downie kneels in the foreground while photographer Ryan Sault captures the narrow ribbon of white-purple hues overhead.

Credit: Ryan Sault.

Images of STEVE are beautiful in themselves, but they also provide a visible way to study the invisible, complex charged particle flows in Earth’s magnetosphere, according to the study’s authors. The new results help scientists better understand how particle flows develop in the ionosphere, an important goal because such disturbances can interfere with radio communications and affect GPS signals.

Where does STEVE come from?

In the new study, researchers wanted to find out what powers STEVE and if it occurs in both the Northern and Southern Hemispheres at the same time. They analyzed data from several satellites passing overhead during STEVE events in April 2008 and May 2016 to measure the electric and magnetic fields in Earth’s magnetosphere at the time.

The researchers then coupled the satellite data with photos of STEVE taken by amateur auroral photographers to figure out what causes the unusual glow. They found that during STEVE, charged particles in a flowing “river” in Earth’s ionosphere collide, creating friction that heats the particles and causes them to emit mauve light. Incandescent light bulbs work in much the same way: electricity heats a tungsten filament until it is hot enough to glow.

Artist’s rendition of the magnetosphere during the STEVE occurrence, depicting the plasma region which falls into the auroral zone (green), the plasmasphere (blue) and the boundary between them called the plasmapause (red). The THEMIS and SWARM satellites (left and top) observed waves (red squiggles) that power the STEVE atmospheric glow and picket fence (inset), while the DMSP satellite (bottom) detected electron precipitation and a conjugate glowing arc in the southern hemisphere.

Credit: Emmanuel Masongsong, UCLA, and Yukitoshi Nishimura, BU/UCLA.

Interestingly, the study found the picket fence is powered by energetic electrons streaming from space thousands of kilometers above Earth. While similar to the process that creates typical auroras, these electrons impact the atmosphere far south of usual auroral latitudes. The satellite data showed high-frequency waves moving from Earth’s magnetosphere to its ionosphere can energize electrons and knock them out of the magnetosphere to create the striped picket fence display.

The researchers also found the picket fence occurs in both hemispheres at the same time, supporting the conclusion that its source is high enough above Earth to feed energy to both hemispheres simultaneously.

Public involvement has been crucial for STEVE research by providing ground-based images and precise time and location data, according to Toshi Nishimura, a space physicist at Boston University and lead author of the new study.

“As commercial cameras become more sensitive and increased excitement about the aurora spreads via social media, citizen scientists can act as a ‘mobile sensor network,’ and we are grateful to them for giving us data to analyze,” Nishimura said.

Contacts and sources:
Lauren Lipuma
American Geophysical Union

Yukitoshi (Toshi) Nishimura
Boston University

Citation: Magnetospheric signatures of STEVE: Implication for the magnetospheric energy source and inter‐hemispheric conjugacy.
Y. Nishimura, B. Gallardo‐Lacourt, Y. Zou, E. Mishin, D.J. Knudsen, E.F. Donovan, V. Angelopoulos, Rocky Raybell. Geophysical Research Letters, 2019; DOI: 10.1029/2019GL082460

An Army of Micro-Robots Attacks and Wipes Out Dental Plaque

A visit to the dentist typically involves time-consuming and sometimes unpleasant scraping with mechanical tools to remove plaque from teeth. What if, instead, a dentist could deploy a small army of tiny robots to precisely and non-invasively remove that buildup?

With a precise, controlled movement, microrobots cleared a glass plate of a biofilm, as shown in this time-lapse image. 
Image: Geelsu Hwang and Edward Steager

A team of engineers, dentists, and biologists from the University of Pennsylvania developed a microscopic robotic cleaning crew. With two types of robotic systems—one designed to work on surfaces and the other to operate inside confined spaces—the scientists showed that robots with catalytic activity could ably destroy biofilms, sticky amalgamations of bacteria enmeshed in a protective scaffolding.

 Such robotic biofilm-removal systems could be valuable in a wide range of potential applications, from keeping water pipes and catheters clean to reducing the risk of tooth decay, endodontic infections, and implant contamination.

The robots' movement is directed by magnets. The researchers envision a variety of applications for the technology, from cleaning teeth to water pipes to catheters.
(Video: Geelsu Hwang and Edward Steager)

The work, published in Science Robotics, was led by Hyun (Michel) Koo of the School of Dental Medicine and Edward Steager of the School of Engineering and Applied Science.

“This was a truly synergistic and multidisciplinary interaction,” says Koo. “We’re leveraging the expertise of microbiologists and clinician-scientists as well as engineers to design the best microbial eradication system possible. This is important to other biomedical fields facing drug-resistant biofilms as we approach a post-antibiotic era.”

“Treating biofilms that occur on teeth requires a great deal of manual labor, both on the part of the consumer and the professional,” adds Steager. “We hope to improve treatment options as well as reduce the difficulty of care.”

Biofilms can arise on biological surfaces, such as on a tooth or in a joint or on objects, like water pipes, implants, or catheters. Wherever biofilms form, they are notoriously difficult to remove, as the sticky matrix that holds the bacteria provides protection from antimicrobial agents.

In previous work, Koo and colleagues have made headway at breaking down the biofilm matrix with a variety of outside-the-box methods. One strategy has been to employ iron-oxide-containing nanoparticles that work catalytically, activating hydrogen peroxide to release free radicals that can kill bacteria and destroy biofilms in a targeted fashion.

Serendipitously, the Penn Dental Medicine team found that groups at Penn Engineering led by Steager, Vijay Kumar, and Kathleen Stebe were working with a robotic platform that used very similar iron-oxide nanoparticles as building blocks for microrobots. The engineers control the movement of these robots using a magnetic field, allowing a tether-free way to steer them.

Together, the cross-school team designed, optimized, and tested two types of robotic systems, which the group calls catalytic antimicrobial robots, or CARs, capable of degrading and removing biofilms. The first involves suspending iron-oxide nanoparticles in a solution, which can then be directed by magnets to remove biofilms on a surface in a plow-like manner. The second platform entails embedding the nanoparticles into gel molds in three-dimensional shapes. These were used to target and destroy biofilms clogging enclosed tubes.

Both types of CARs effectively killed bacteria, broke down the matrix that surrounds them, and removed the debris with high precision. After testing the robots on biofilms growing on either a flat glass surface or enclosed glass tubes, the researchers tried out a more clinically relevant application: Removing biofilm from hard-to-reach parts of a human tooth.

The CARs were able to degrade and remove bacterial biofilms not just from a tooth surface but from one of the most difficult-to-access parts of a tooth, the isthmus, a narrow corridor between root canals where biofilms commonly grow.

“Existing treatments for biofilms are ineffective because they are incapable of simultaneously degrading the protective matrix, killing the embedded bacteria, and physically removing the biodegraded products,” says Koo. “These robots can do all three at once very effectively, leaving no trace of biofilm whatsoever.”

The team’s micro-robots were effective at cleaning hard-to-reach surfaces. This time-lapse image shows one of the molded, helicoid-shaped robots traveling inside a tooth canal.
Image: Alaa Babeer, Elizabeth E. Hunter, and Hyun Koo

By plowing away the degraded remains of the biofilm, Koo says, the chance of it taking hold and re-growing decreases substantially. The researchers envision precisely directing these robots to wherever they need to go to remove biofilms, be it the inside of a catheter, a water line, or difficult-to-reach tooth surfaces.

“We think about robots as automated systems that take actions based on actively gathered information,” says Steager. In this case, he says, “the motion of the robot can be informed by images of the biofilm gathered from microcameras or other modes of medical imaging.”

To move the innovation down the road to clinical application, the researchers are receiving support from the Penn Center for Health, Devices, and Technology, an initiative supported by Penn’s Perelman School of Medicine, Penn Engineering, and the Office of the Vice Provost for Research. Penn Health-Tech, as it’s known, awards select interdisciplinary groups with support to create new health technologies, and the robotic platforms project was one of those awarded support in 2018.

“The team has a great clinical background on the dental side and a great technical background on the engineering side,” says Victoria Berenholz, executive director of Penn Health-Tech. “We help to round them out by connecting them to business mentors and resources within the Penn community to translate their technology. They have really done a fantastic job on the project.”

In addition to Koo, Steager, Stebe, and Kumar, the study was coauthored by first author Geelsu Hwang, Amauri J. Paula, Yuan Liu, Alaa Babeer, and Bekir Karabucak, all of the School of Dental Medicine, and Elizabeth E. Hunter of the School of Engineering and Applied Science.

The study was supported in part by the National Institute for Dental and Craniofacial Research (grants DE025848 and DE018023) and National Science Foundation.

Hyun (Michel) Koo is a professor in the Penn Dental Medicine Department of Orthodontics and in the divisions of Pediatric Dentistry and Community Oral Health.

Edward Steager is a research investigator in the School of Engineering and Applied Science’s General Robotics, Automation, Sensing and Perception Laboratory (GRASP Lab).

Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering with appointments in the departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering.

Kathleen Stebe is the Richer and Elizabeth Goodwin Professor of Chemical and Biomolecular Engineering in the School of Engineering and Applied Science.

Contacts and sources:
Katherine Unger Baillie
University of Pennsylvania

Citation: Catalytic antimicrobial robots for biofilm eradication.
Geelsu Hwang, Amauri J. Paula, Elizabeth E. Hunter, Yuan Liu, Alaa Babeer, Bekir Karabucak, Kathleen Stebe, Vijay Kumar, Edward Steager, Hyun Koo. Science Robotics, 2019; 4 (29): eaaw2388 DOI: 10.1126/scirobotics.aaw2388

Americans Sit Too Much Risking Obesity, Diabetes, Heart Disease and Certain Cancers

A nation of couch potatoes is growing. A survey of some 51,000 people across multiple age, racial, and ethnic groups documents a troubling trend: Americans are too sedentary.

A new study led by Washington University School of Medicine in St. Louis shows that most Americans spend a lot of time sitting, despite public health messages that prolonged sitting is unhealthy. Such inactivity increases the risk of obesity, diabetes, heart disease and certain cancers.
Credit: LAIntern / Wikimedia Commons

Most Americans continue to sit for prolonged periods despite public health messages that such inactivity increases the risk of obesity, diabetes, heart disease and certain cancers, according to a major new study led by researchers at Washington University School of Medicine in St. Louis.

The research team analyzed surveys of 51,000 people from 2001 to 2016 to track sitting trends in front of TVs and computers and the total amount of time spent sitting on a daily basis. Unlike other studies that have looked at sedentary behaviors, the research is the first to document sitting in a nationally representative sample of the U.S. population across multiple age groups – from children to the elderly – and different racial and ethnic groups.

The research, led by Yin Cao, ScD, an epidemiologist and assistant professor of surgery in the Division of Public Health Sciences, is published April 23 in the Journal of the American Medical Association.


Credit: Washington University School of Medicine

“In almost none of the groups we analyzed are the numbers going in the right direction,” said Cao, the study’s senior author. “We want to raise awareness about this issue on multiple levels — from individuals and families to schools, employers and elected officials.”

Epidemiologist and co-senior author Graham A. Colditz, MD, DrPH, the Niess-Gain Professor of Surgery and director of the Division of Public Health Sciences, said: “We think a lot of these sedentary habits are formed early, so if we can make changes that help children be more active, it could pay off in the future, both for children as they grow to adulthood and for future health-care spending. Sedentary behavior is linked to poor health in many areas, and if we can reduce that across the board it could have a big impact.”

The new study fills a gap in knowledge on sedentary behavior, according to the researchers, putting specific numbers on the amount of time Americans actually spend sitting. For example, the most recent edition of the Physical Activity Guidelines for Americans, published in 2018 by the Department of Health and Human Services, recommends less sitting time but offers no guidance on how much.

The researchers analyzed data from more than 51,000 people who participated in the National Health and Nutrition Examination Survey between 2001 and 2016, looking at four age groups: children ages 5 to 11 (as reported by a parent or guardian), adolescents ages 12 to 19, adults ages 20 to 64, and adults ages 65 and older. Race and ethnicity were defined as non-Hispanic white, non-Hispanic black, Hispanic and other races, including multiracial.

Total daily sitting time increased among adolescents and adults from 2007 to 2016, from seven hours per day to just over eight for teenagers, and from 5.5 hours per day to almost 6.5 for adults, the researchers found.
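In miniature, the reported trend is a simple before-and-after comparison. The hours below echo the rounded averages in the article and are illustrative stand-ins, not the actual NHANES records:

```python
# A toy version of the before-and-after comparison behind the reported
# trend. Values approximate the article's rounded averages; the real
# analysis used survey records for more than 51,000 participants.
sitting_hours = {
    ("adolescents", 2007): 7.0, ("adolescents", 2016): 8.2,
    ("adults",      2007): 5.5, ("adults",      2016): 6.4,
}

for group in ("adolescents", "adults"):
    change = sitting_hours[(group, 2016)] - sitting_hours[(group, 2007)]
    print(f"{group}: +{change:.1f} hours of sitting per day")
```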

“Until now, we haven’t had data demonstrating the amount of time most Americans spend sitting watching TV or doing other sedentary activities,” Cao said. “Now that we have a baseline — on population level and for different age groups — we can look at trends over time and see whether different interventions or public health initiatives are effective in reducing the time spent sitting and nudging people toward more active behaviors.”

The researchers found that most Americans spend at least two hours per day sitting and watching television or videos. Among children ages 5-11, 62 percent spent at least that long in front of screens daily. For adolescents ages 12-19, that number was 59 percent. About 65 percent of adults ages 20 to 64 spent at least two hours watching television per day. Most recently, from 2015 to 2016, 84 percent of adults over age 65 spent at least that much time sitting watching television, a figure that held steady over the course of the study.

Across all age groups, 28 percent to 38 percent of those surveyed spent at least three hours per day watching television or videos, and 13 percent to 23 percent spent four hours or more engaged in watching television or videos.

Importantly, males of all age groups, non-Hispanic black individuals of all age groups, and participants who reported being obese or physically inactive were more likely to spend more time sitting to watch television or videos than their counterparts.

In addition, computer screen time outside of work and school increased over this period. At least half of individuals across all age groups used a computer during leisure time for more than one hour per day in the two most recent years of the study. And up to a quarter of the U.S. population used computers outside of work and school for three hours or more.

“How we create public policies or promote social change that supports less sitting is unclear and likely to be complicated,” Colditz said. “If a neighborhood in a disadvantaged community is unsafe, for example, parents can’t just send their kids outside to play. Our environments — the way our cities, our school days and working days are designed — play roles in this behavior that are difficult to change. But at least now, we have a baseline from which to measure whether specific changes are having an impact.”

Chao Cao, a recent graduate of the Brown School and a data analyst in Yin Cao’s lab, co-led the analyses. Washington University also collaborated with researchers at a number of other institutions, including Charles Matthews, PhD, at the National Cancer Institute (NCI); Lin Yang, PhD, at the Alberta Health Services, Calgary, Canada; the Harvard T.H. Chan School of Public Health; Memorial Sloan Kettering Cancer Center; and Massachusetts General Hospital and Harvard Medical School.

Contacts and sources:
Julia Evangelou Strait
Washington University School of Medicine

Citation: Trends in Sedentary Behavior Among the US Population, 2001-2016.
Lin Yang, Chao Cao, Elizabeth D. Kantor, Long H. Nguyen, Xiaobin Zheng, Yikyung Park, Edward L. Giovannucci, Charles E. Matthews, Graham A. Colditz, Yin Cao. JAMA, 2019; 321 (16): 1587 DOI: 10.1001/jama.2019.3636

Tissue Injection To Supercharge Healing, Once Science Fiction, Now a Reality

Researchers can now supercharge the body’s own processes to regrow and repair an injury: a new device encases delicate tissue cells in protective microgels, making it possible to heal wounds more quickly with an injection.

Doctoral student Mohamed Gamal uses a newly developed cell encapsulation device.
Credit: University of British Columbia Okanagan

A simple injection that can help regrow damaged tissue has long been the dream of physicians and patients alike. A new study from researchers at UBC Okanagan takes a big step towards making that dream a reality with a device that makes encapsulating cells much faster, cheaper and more effective.

“The idea of injecting different kinds of tissue cells is not a new one,” says Keekyoung Kim, assistant professor of engineering at UBC Okanagan and study co-author. “It’s an enticing concept because by introducing cells into damaged tissue, we can supercharge the body’s own processes to regrow and repair an injury.”

Kim says everything from broken bones to torn ligaments could benefit from this kind of approach and suggests even whole organs could be repaired as the technology improves.

The problem, he says, is that cells on their own are delicate and tend not to survive when injected directly into the body.

“It turns out that to ensure cell survival, they need to be encased in a coating that protects them from physical damage and from the body’s own immune system,” says Mohamed Gamal, doctoral student in biomedical engineering and study lead author. “But it has been extremely difficult to do that kind of cell encapsulation, which has until now been done in a very costly, time consuming and wasteful process.”

Kim and Gamal have solved that problem by developing an automated encapsulation device that encases many cells in a microgel using a specialized blue laser and purifies them to produce a clean useable sample in just a few minutes. The advantage of their system is that over 85 per cent of the cells survive and the process can be easily scaled up.

“Research in this area has been hampered by the cost and lack of availability of mass-produced cell encapsulated microgels,” says Kim. “We’ve solved that problem and our system could provide thousands or even tens of thousands of cell-encapsulated microgels rapidly, supercharging this field of bioengineering.”

In addition to developing a system that’s quick and efficient, Gamal says the equipment is made up of readily available and inexpensive components.

“Any lab doing this kind of work could set up a similar system anywhere from a few hundred to a couple of thousand dollars, which is pretty affordable for lab equipment,” says Gamal.

The team is already looking at the next step, which will be to embed different kinds of stem cells—cells that haven’t yet differentiated into specific tissue types—into the microgels alongside specialized proteins or hormones called growth factors. The idea would be to help the stem cells transform into the appropriate tissue type once they’re injected.

“I’m really excited to see where this technology goes next and what our encapsulated stem cells are capable of.”

The study was published in the journal Lab on a Chip with funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Foundation for Innovation.

Contacts and sources:
Nathan Skolski
University of British Columbia Okanagan campus

Citation: An integrated microfluidic flow-focusing platform for on-chip fabrication and filtration of cell-laden microgels.
Mohamed G. A. Mohamed, Sina Kheiri, Saidul Islam, Hitendra Kumar, Annie Yang, Keekyoung Kim. Lab on a Chip, 2019; 19 (9): 1621 DOI: 10.1039/C9LC00073A

Something Fundamentally Different: The Universe Is Expanding Faster Than Ever Thought

The Universe is expanding faster than anyone ever figured, according to new measurements from NASA’s Hubble Space Telescope. The math confirms that the Universe is expanding about 9% faster than expected based on its trajectory seen shortly after the big bang, astronomers say.

The new measurements, published April 25 in The Astrophysical Journal, reduce the chances that the disparity is an accident from 1 in 3,000 to only 1 in 100,000 and suggest that new physics may be needed to better understand the cosmos.

This is a ground-based telescope’s view of the Large Magellanic Cloud, a satellite galaxy of our Milky Way. The inset image, taken by the Hubble Space Telescope, reveals one of many star clusters scattered throughout the dwarf galaxy.

Credit: NASA, ESA, Adam Riess, and Palomar Digitized Sky Survey

“This mismatch has been growing and has now reached a point that is really impossible to dismiss as a fluke. This is not what we expected,” says Adam Riess, Bloomberg Distinguished Professor of Physics and Astronomy at The Johns Hopkins University, Nobel Laureate and the project’s leader.

In this study, Riess and his SH0ES (Supernovae, H0, for the Equation of State) Team analyzed light from 70 stars in our neighboring galaxy, the Large Magellanic Cloud, with a new method that allowed for capturing quick images of these stars. The stars, called Cepheid variables, brighten and dim at predictable rates that are used to measure nearby intergalactic distances.

The usual method for measuring the stars is incredibly time-consuming; Hubble can only observe one star per 90-minute orbit around Earth. Using their new method, called DASH (Drift And Shift), the researchers used Hubble as a “point-and-shoot” camera to look at groups of Cepheids, allowing the team to observe a dozen Cepheids in the same amount of time it would normally take to observe just one.

With this new data, Riess and the team were able to strengthen the foundation of the cosmic distance ladder, which is used to determine distances within the Universe, and calculate the Hubble constant, a value of how fast the cosmos expands over time.
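Concretely, the Hubble constant converts a galaxy's distance into the speed at which the expansion of space carries it away from us, via v = H0 × d. A toy calculation (the H0 value is the team's measurement, rounded; the galaxy distance is made up for illustration):

```python
# The Hubble constant H0 relates a galaxy's distance to its recession
# velocity: v = H0 * d. H0 below is the SH0ES measurement, rounded;
# the 100 Mpc galaxy distance is hypothetical, for illustration only.
H0 = 74.0             # km/s per megaparsec
distance_mpc = 100.0  # Mpc, made-up galaxy distance
velocity = H0 * distance_mpc
print(f"recession velocity: {velocity:.0f} km/s")  # 7400 km/s
```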

This illustration shows the three basic steps astronomers use to calculate how fast the universe expands over time, a value called the Hubble constant. All the steps involve building a strong “cosmic distance ladder,” by starting with measuring accurate distances to nearby galaxies and then moving to galaxies farther and farther away. This “ladder” is a series of measurements of different kinds of astronomical objects with an intrinsic brightness that researchers can use to calculate distances.

Credit: NASA, ESA, and A. Feild (STScI)

The team combined their Hubble measurements with another set of observations, made by the Araucaria Project, a collaboration between astronomers from institutions in Chile, the U.S., and Europe. This group made distance measurements to the Large Magellanic Cloud by observing the dimming of light as one star passes in front of its partner in eclipsing binary-star systems.

The combined measurements helped the SH0ES team refine the Cepheids’ true brightness. With this more accurate result, the team could then “tighten the bolts” of the rest of the distance ladder that uses exploding stars called supernovae to extend deeper into space.

As the team’s measurements have become more precise, their calculation of the Hubble constant has remained at odds with the expected value derived from the European Space Agency’s Planck satellite observations of the early universe’s expansion, based on conditions Planck observed 380,000 years after the Big Bang.

“This is not just two experiments disagreeing,” Riess explained. “We are measuring something fundamentally different. One is a measurement of how fast the universe is expanding today, as we see it. The other is a prediction based on the physics of the early universe and on measurements of how fast it ought to be expanding. If these values don’t agree, there becomes a very strong likelihood that we’re missing something in the cosmological model that connects the two eras.”

While Riess doesn’t have an answer as to exactly why the discrepancy exists, he and the SH0ES team will continue to fine-tune the Hubble constant, with the goal of reducing the uncertainty to 1%. These most recent measurements brought the uncertainty in the rate of expansion down from 10% in 2001 to 5% in 2009 and now to 1.9% in the present study.

Other authors on this paper include Stefano Casertano of the Space Telescope Science Institute; Wenlong Yuan of The Johns Hopkins University; Lucas M. Macri of Texas A&M University; and Dan Scolnic of Duke University. Adam Riess is also a senior member of the science staff at the Space Telescope Science Institute and an advisory board member for Space@Hopkins.

Funding for this research was provided by the National Aeronautics and Space Administration through programs GO-14648, 15146 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

Contacts and sources:
Chanapa Tantibanchachai
Johns Hopkins University

A Self Inflating Balloon for Your Stomach to Avoid Overeating

Scientists have developed an edible inflatable pill to keep people from overeating. The thought is that with a balloon in your stomach, you will eat less and lose weight.

Measuring about 3cm by 1cm, the EndoPil prototype capsule contains a balloon that can be self-inflated with a handheld magnet once it is in the stomach, thus inducing a sense of fullness. Its magnetically activated inflation mechanism causes a reaction between a harmless acid and a salt stored in the capsule, which produces carbon dioxide to fill up the balloon.
Credit: NTU Singapore

A team from Nanyang Technological University, Singapore (NTU Singapore) and the National University Health System (NUHS) has developed a self-inflating weight management capsule that could be used to treat obese patients.

Called the EndoPil, the prototype capsule contains a balloon that can be self-inflated with a handheld magnet once it is in the stomach, thus inducing a sense of fullness. Its magnetically-activated inflation mechanism causes a reaction between a harmless acid and a salt stored in the capsule, which produces carbon dioxide to fill up the balloon. The concept behind the capsule is for it to be ingested orally, though trials using this route for administration have not yet begun.


Designed by a team led by Professor Louis Phee, NTU Dean of Engineering, and Professor Lawrence Ho, a clinician-innovator at NUHS, such an orally administered self-inflating weight loss capsule could represent a non-invasive alternative for tackling the growing global obesity epidemic.

Today, moderately obese patients and those who are too ill to undergo surgery can opt for the intragastric balloon, an established weight loss intervention that has to be inserted into the stomach via endoscopy under sedation and removed six months later via the same procedure. Because of the endoscopy and sedation involved, not all patients are open to this option.

It is also common for patients who opt for the intragastric balloon to experience nausea and vomiting, with up to 20 per cent requiring early balloon removal due to intolerance. The stomach may also get used to the prolonged placement of the balloon, making it less effective for weight loss.

The novel made-in-Singapore weight loss capsule, designed to be taken with a glass of water, could overcome these limitations.

The EndoPil’s viability was first tested in a preclinical study, in which a larger prototype was inserted into a pig. The findings, published in a supplement of scientific journal Gastroenterology, showed that the pig with the inflated capsule in its stomach lost 1.5kg a week later, while a control group of five pigs gained weight.

Last year, the team trialled their capsule on a healthy patient volunteer in Singapore, with the capsule inserted into her stomach through an endoscope. The balloon was successfully inflated within her stomach, with no discomfort or injury from the inflation.


The latest findings will be presented next month as a plenary lecture during the Digestive Disease Week 2019 in San Diego, the world’s largest gathering of physicians and researchers in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery.

Currently, the capsule has to be deflated magnetically. The team is now working on a natural decompression mechanism for the capsule, as well as reducing its size.

Professor Louis Phee, who is also the Tan Chin Tuan Centennial Professor in Mechanical Engineering at NTU, said, “EndoPil’s main advantage is its simplicity of administration. All you would need is a glass of water to help it go down and a magnet to activate it. We are now trying to reduce the size of the prototype, and improve it with a natural decompression mechanism. We anticipate that such features will help the capsule gain widespread acceptance and benefit patients with obesity and metabolic diseases.”

Professor Lawrence Ho from the NUS Yong Loo Lin School of Medicine, who is also a Senior Consultant with the Division of Gastroenterology and Hepatology at the National University Hospital, said, “EndoPil’s compact size and simple activation using an external hand-held magnet could pave the way for an alternative that could be administered by doctors even within the outpatient, and primary care setting. This could translate to no hospital stay, and cost saving to the patients and health system.”

A simpler yet effective alternative

The Endopil could potentially remove the need to insert an endoscope or a tube trailing out of the oesophagus, nasal and oral cavities for balloon inflation.

Each capsule should be removed within a month, allowing for shorter treatment cycles that ensure that the stomach does not grow used to the balloon’s presence. As the space-occupying effect in the stomach is achieved gradually, side effects due to sudden inflation such as vomiting and discomfort can be avoided.

The team is now working on programming the capsule to biodegrade and deflate after a stipulated time frame, before being expelled by the body’s digestive system. This includes incorporating a deflation plug at the end of the inner capsule that can be dissolved by stomach acid, allowing carbon dioxide to leak out. In the case of an emergency, the balloon can be deflated on command with an external magnet.

How the new capsule works

Measuring around 3cm by 1cm, the EndoPil has an outer gelatine casing that contains a deflated balloon, an inflation valve with a magnet attached, and a harmless acid and a salt stored in separate compartments in an inner capsule.

Designed to be swallowed with a glass of water, the capsule enters the stomach, where the acid within breaks open the outer gelatine casing. Once the capsule’s location in the stomach is ascertained by a magnetic sensor, an external magnet measuring 5cm in diameter is used to attract the magnet attached to the inflation valve, opening the valve. This mechanism avoids premature inflation of the device while in the oesophagus, or delayed inflation after it enters the small intestine.

The opening of the valve allows the acid and the salt to mix and react, producing carbon dioxide to fill up the balloon. The kitchen-safe ingredients were chosen as a safety precaution to ensure that the capsule remains harmless upon leakage, said Prof Phee.

As the balloon expands with carbon dioxide, it floats to the top of the stomach, the portion that is more sensitive to fullness. Within three minutes, the balloon can be inflated to 120ml. It can be deflated magnetically to a size small enough to enter the small intestines.
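A back-of-the-envelope check via the ideal gas law suggests how little reagent the reaction needs. The source does not name the acid and salt, so the assumption below that the salt is sodium bicarbonate (a common kitchen-safe CO2 source) is purely illustrative:

```python
# How much gas-generating salt would 120 ml of CO2 at body temperature
# require? Uses the ideal gas law n = PV / RT. Assumes the salt is
# sodium bicarbonate, which the source does not confirm.
P = 101_325   # Pa, roughly 1 atm
V = 120e-6    # m^3, the balloon's 120 ml inflated volume
R = 8.314     # J/(mol K), gas constant
T = 310       # K, approximate body temperature

n_co2 = P * V / (R * T)     # mol of CO2 needed
m_salt = n_co2 * 84.01      # g; NaHCO3 (84.01 g/mol) gives 1 mol CO2 per mol

print(f"{n_co2 * 1000:.1f} mmol CO2 -> about {m_salt:.2f} g bicarbonate")
```

The answer, under a gram of reagent, is consistent with the reaction fitting inside a 3cm capsule.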

Further clinical trials

A US patent was granted in 2016 for the balloon inflation mechanism, which was described in the scientific journal PLoS ONE. A new US patent has been filed for the latest innovation.

After improving the prototype, the team hopes to conduct another round of human trials in a year’s time – first to ensure that the prototype can be naturally decompressed and expelled by the body, before testing the capsule for its treatment efficacy.

Prof Phee and Prof Ho will also spin off the technology into a start-up company called EndoPil. The two professors previously co-founded EndoMaster, one of Singapore’s most prominent deep tech start-ups in the field of medical robotics.

Contacts and sources:
Foo Jie Ying
Nanyang Technological University

Machine Speaks Your Recorded Thoughts

Now a machine can read your thoughts and speak them in a natural-sounding voice. It may be a boon for people who are unable to speak but can still think. The new technology is a stepping stone to a neural speech prosthesis, researchers say.

Credit: UCSF

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract – an anatomically detailed computer simulation including the lips, jaw, tongue and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100 to 150 words per minute of natural speech.

Study senior author Edward Chang, MD, has been studying how the brain produces and analyzes speech for over a decade and aims to develop a speech prosthesis to restore the voices of individuals who have lost speech to paralysis and other forms of neurological damage.
 Photo by Steve Babuljak

The new system being developed in the laboratory of Edward Chang, MD – described April 24, 2019, in Nature – demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, PhD, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech. 

Gopala Anumanchipalli, PhD, holding an example array of intracranial electrodes of the type used to record brain activity in the current study.
 Credit: UCSF

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center – patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery – to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
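The two-stage decoder-synthesizer pipeline can be sketched in miniature. The actual study trained recurrent neural networks per participant, so the plain linear maps, feature sizes, and random weights below are placeholders for illustration only:

```python
import numpy as np

# Toy sketch of the two-stage pipeline: brain activity -> vocal tract
# movements -> acoustic features. The real system used recurrent nets
# trained per participant; everything below is an illustrative stand-in.
rng = np.random.default_rng(0)
T, n_electrodes = 200, 256        # time steps, neural channels (made up)
n_kinematic, n_acoustic = 33, 32  # articulator / acoustic features (made up)

W_decoder = rng.standard_normal((n_electrodes, n_kinematic)) * 0.05
W_synth = rng.standard_normal((n_kinematic, n_acoustic)) * 0.05

neural = rng.standard_normal((T, n_electrodes))  # recorded brain activity
kinematics = neural @ W_decoder                  # stage 1: decode movements
acoustics = np.tanh(kinematics) @ W_synth        # stage 2: synthesize speech features

print(acoustics.shape)  # one acoustic feature vector per time step
```

The key design point the sketch preserves is the intermediate articulatory representation: the decoder never maps brain activity straight to sound.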

Credit: UCSF

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.

Josh Chartier, a bioengineering graduate student in the Chang lab.

Credit: UC San Francisco

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

Contacts and sources:
Nicholas Weiler
UC San Francisco

Authors: Anumanchipalli and Chartier are co-first authors of the new study. Chang, a Bowes Biomedical Investigator at UCSF, professor in the Department of Neurological Surgery and member of the UCSF Weill Institute for Neurosciences, is the senior and corresponding author.

Funding: This research was primarily funded by the National Institutes of Health (grants DP2 OD008627 and U01 NS098971-01). Chang is a New York Stem Cell Foundation Robertson Investigator. This research was also supported by the New York Stem Cell Foundation, the Howard Hughes Medical Institute, the McKnight Foundation, the Shurl and Kay Curci Foundation, and the William K. Bowes Foundation.

Red Planet Rumbles: First MarsQuake Recorded, Hear the Seismic Event

Mars is not a dead planet. The Red Planet rumbled, and NASA sensors have recorded their first likely marsquake, an event whose size and duration fit the profile of the moonquakes detected during the Apollo missions.

This 360-degree panorama is composed of 354 images taken by the Opportunity rover's Panoramic Camera (Pancam) from May 13 through June 10, 2018, or sols (Martian days) 5,084 through 5,111.

Credit: NASA

NASA's Mars InSight lander has measured and recorded for the first time ever a likely "marsquake."

The faint seismic signal, detected by the lander's Seismic Experiment for Interior Structure (SEIS) instrument, was recorded on April 6, the lander's 128th Martian day, or sol. This is the first recorded trembling that appears to have come from inside the planet, as opposed to being caused by forces above the surface, such as wind. Scientists still are examining the data to determine the exact cause of the signal.

This video and its audio illustrate a seismic event detected by NASA's InSight on April 6, 2019, the 128th Martian day, or sol, of the mission. Three distinct kinds of sounds can be heard, all of them detected as ground vibrations by the spacecraft's seismometer, called the Seismic Experiment for Interior Structure (SEIS): noise from Martian wind; the seismic event itself; and the spacecraft's robotic arm as it moves to take pictures.

"InSight's first readings carry on the science that began with NASA's Apollo missions," said InSight Principal Investigator Bruce Banerdt of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California. "We've been collecting background noise up until now, but this first event officially kicks off a new field: Martian seismology!"

The new seismic event was too small to provide solid data on the Martian interior, which is one of InSight's main objectives. The Martian surface is extremely quiet, allowing SEIS, InSight's specially designed seismometer, to pick up faint rumbles. In contrast, Earth's surface is quivering constantly from seismic noise created by oceans and weather. An event of this size in Southern California would be lost among dozens of tiny crackles that occur every day.

"The Martian Sol 128 event is exciting because its size and longer duration fit the profile of moonquakes detected on the lunar surface during the Apollo missions," said Lori Glaze, Planetary Science Division director at NASA Headquarters.

NASA's Apollo astronauts installed five seismometers that measured thousands of quakes while operating on the Moon between 1969 and 1977, revealing seismic activity on the Moon. Different materials can change the speed of seismic waves or reflect them, allowing scientists to use these waves to learn about the interior of the Moon and model its formation. NASA currently is planning to return astronauts to the Moon by 2024, laying the foundation that will eventually enable human exploration of Mars.

InSight's seismometer, which the lander placed on the planet's surface on Dec. 19, 2018, will enable scientists to gather similar data about Mars. By studying the deep interior of Mars, they hope to learn how other rocky worlds, including Earth and the Moon, formed.

This image shows InSight's domed Wind and Thermal Shield, which covers its seismometer. The image was taken on the 110th Martian day, or sol, of the mission. The seismometer is called Seismic Experiment for Interior Structure, or SEIS.
Credit: NASA

Three other seismic signals occurred on March 14 (Sol 105), April 10 (Sol 132) and April 11 (Sol 133). Detected by SEIS' more sensitive Very Broad Band sensors, these signals were even smaller than the Sol 128 event and more ambiguous in origin. The team will continue to study these events to try to determine their cause.

Regardless of its cause, the Sol 128 signal is an exciting milestone for the team.

"We've been waiting months for a signal like this," said Philippe Lognonné, SEIS team lead at the Institut de Physique du Globe de Paris (IPGP) in France. "It's so exciting to finally have proof that Mars is still seismically active. We're looking forward to sharing detailed results once we've had a chance to analyze them."

Most people are familiar with quakes on Earth, which occur on faults created by the motion of tectonic plates. Mars and the Moon do not have tectonic plates, but they still experience quakes - in their cases, caused by a continual process of cooling and contraction that creates stress. This stress builds over time, until it is strong enough to break the crust, causing a quake.

Detecting these tiny quakes required a huge feat of engineering. On Earth, high-quality seismometers often are sealed in underground vaults to isolate them from changes in temperature and weather. InSight's instrument has several ingenious insulating barriers, including a cover built by JPL called the Wind and Thermal Shield, to protect it from the planet's extreme temperature changes and high winds.

SEIS has surpassed the team's expectations in terms of its sensitivity. The instrument was provided for InSight by the French space agency, Centre National d'Études Spatiales (CNES), while these first seismic events were identified by InSight's Marsquake Service team, led by the Swiss Federal Institute of Technology.

"We are delighted about this first achievement and are eager to make many similar measurements with SEIS in the years to come," said Charles Yana, SEIS mission operations manager at CNES.

JPL manages InSight for NASA's Science Mission Directorate. InSight is part of NASA's Discovery Program, managed by the agency's Marshall Space Flight Center in Huntsville, Alabama. Lockheed Martin Space in Denver built the InSight spacecraft, including its cruise stage and lander, and supports spacecraft operations for the mission.

A number of European partners, including CNES and the German Aerospace Center (DLR), support the InSight mission. CNES provided the SEIS instrument to NASA, with the principal investigator at IPGP. Significant contributions for SEIS came from IPGP; the Max Planck Institute for Solar System Research in Germany; the Swiss Federal Institute of Technology (ETH Zurich) in Switzerland; Imperial College London and Oxford University in the United Kingdom; and JPL. DLR provided the Heat Flow and Physical Properties Package (HP3) instrument, with significant contributions from the Space Research Center of the Polish Academy of Sciences and Astronika in Poland. Spain's Centro de Astrobiología supplied the temperature and wind sensors.

Listen to audio of this likely marsquake at:

For more information about InSight, visit:

For more information about the agency's Moon to Mars activities, visit


Contacts and sources
Andrew Good
NASA/Jet Propulsion Laboratory

Wednesday, April 24, 2019

Introducing The Strangest Crustacean, the Platypus of Crabs: A Perplexing Chimera

The discovery of a Cretaceous-era creature dubbed the "platypus of crabs" has put science in a temporary tizzy. Meet Callichimaera perplexa, named for a beautiful and perplexing chimera: one of a group of newly described crab cousins, this strange 95-million-year-old species is making scientists rethink the definition of a crab, and perhaps the disparate ways animals evolve over time.

Artistic reconstruction of Callichimaera perplexa: The strangest crab that has ever lived.
Image credit: Elissa Martin, Yale Peabody Museum of Natural History

An international team of researchers led by Yale paleontologist Javier Luque announced the discovery of hundreds of exceptionally well-preserved specimens from rock formations in Colombia and the United States that date back to the mid-Cretaceous period of 90-95 million years ago. The cache includes hundreds of tiny comma shrimp fossils, with their telltale comma-esque curve; several carideans, which are the widely found “true” shrimp; and an entirely new branch of the evolutionary tree for crabs.

A study about the discovery appears in the April 24 online edition of the journal Science Advances.

Javier Luque poses with Callichimaera perplexa — a 95-million-year-old species that will force scientists to rethink the definition of a crab. 
(Photo credit: Daniel Ocampo R., Vencejo Films.)

The most intriguing discovery, according to the researchers, is Callichimaera perplexa, the earliest example of a swimming arthropod with paddle-like legs since the extinction of sea scorpions more than 250 million years ago. The name derives from the chimera, a mythological creature that has body features from more than one animal. Callichimaera’s full name translates into “perplexing beautiful chimera.”

Callichimaera is about the size of a quarter. Its "unusual and cute" features, Luque notes (large compound eyes with no sockets, bent claws, leg-like mouth parts, an exposed tail, and a long body), are typical of crab larvae from the open sea. This suggests that some ancient crabs may have retained a few of their larval traits into adulthood, amplified them, and developed a new body architecture, an evolutionary process called "heterochrony."

“Callichimaera perplexa is so unique and strange that it can be considered the platypus of the crab world,” said Luque. “It hints at how novel forms evolve and become so disparate through time. Usually we think of crabs as big animals with broad carapaces, strong claws, small eyes in long eyestalks, and a small tail tucked under the body. Well, Callichimaera defies all of these ‘crabby’ features and forces a re-think of our definition of what makes a crab a crab.”

Endless forms most beautiful: the diversity of body forms among crabs, including the enigmatic Callichimaera perplexa (center).
Image credit: Arthur Anker and Javier Luque [photos]; figure by Javier Luque, Yale University

Luque also noted the significance of making the discovery in a tropical region of the world. There are fewer researchers actively looking for fossils in the tropics, he said, and the amount of ground cover and thick vegetation of tropical forests make access to well-exposed rocks more challenging.

“It is very exciting that today we keep finding completely new branches in the tree of life from a distant past, especially from regions like the tropics, which despite being hotspots of diversity today, are places we know the least about in terms of their past diversity,” Luque said.

Luque’s team included researchers from the University of Alberta, Kent State University, the University of Montreal, the Smithsonian Tropical Research Institute in Panama, the Canadian Parks and Wilderness Society, the National Autonomous University of Mexico, the University of Nevada, and the College of Communication and Design in Boca Raton, Florida.
See Callichimaera perplexa in 3D

Watch the video below to see an animated, three-dimensional recreation of the puzzling mid-Cretaceous crab that forces a rethink of what a crab is.
Video credit: Daniel Ocampo R., Vencejo Films, and Javier Luque, Yale University [images]; animation and 3D reconstruction by Alex Duque

Contacts and sources:
Jim Shelton
Yale University

Citation: “Exceptional preservation of mid-Cretaceous marine arthropods and the evolution of novel forms via heterochrony,” Science Advances, 24 Apr 2019: Vol. 5, no. 4, eaav3875

DOI: 10.1126/sciadv.aav3875

Smelling with Your Tongue: On Odors, Taste Buds Open Flavor Doors

You can smell with your tongue, and that sensory pathway may offer a way to combat diet-related diseases such as obesity and diabetes through the development of odor-based taste modifiers.

Scientists from the Monell Center report that functional olfactory receptors, the sensors that detect odors in the nose, are also present in human taste cells found on the tongue. The findings suggest that interactions between the senses of smell and taste, the primary components of food flavor, may begin on the tongue and not in the brain, as previously thought.

"Taste buds contain the taste receptor cells, which are also known as gustatory cells. The taste receptors are located around the small structures known as papillae found on the upper surface of the tongue, soft palate, upper esophagus, the cheek, and epiglottis. These structures are involved in detecting the five elements of taste perception: salty, sour, bitter, sweet and umami. A popular myth assigns these different tastes to different regions of the tongue; in reality these tastes can be detected by any area of the tongue."
Credit: OpenStax / Wikimedia Commons

"Our research may help explain how odor molecules modulate taste perception," said study senior author Mehmet Hakan Ozdener, MD, PhD, MPH, a cell biologist at Monell. "This may lead to the development of odor-based taste modifiers that can help combat the excess salt, sugar, and fat intake associated with diet-related diseases such as obesity and diabetes."

While many people equate flavor with taste, the distinctive flavor of most foods and drinks comes more from smell than it does from taste. Taste, which detects sweet, salty, sour, bitter, and umami (savory) molecules on the tongue, evolved as a gatekeeper to evaluate the nutrient value and potential toxicity of what we put in our mouths. Smell provides detailed information about the quality of food flavor: for example, is that banana, licorice, or cherry? The brain combines input from taste, smell, and other senses to create the multi-modal sensation of flavor.

Until now, taste and smell were considered to be independent sensory systems that did not interact until their respective information reached the brain. Ozdener was prompted to challenge this belief when his 12-year-old son asked him if snakes extend their tongues so they can smell.

"Fungiform papillae - as the name suggests, these are slightly mushroom-shaped if looked at in longitudinal section. These are present mostly at the dorsal surface of the tongue, as well as at the sides. Innervated by facial nerve."

"Foliate papillae - these are ridges and grooves towards the posterior part of the tongue found at the lateral borders. Innervated by facial nerve (anterior papillae) and glossopharyngeal nerve (posterior papillae)."

Credit: Kieli.svg: Antimoni / Wikimedia Commons

"Circumvallate papillae - there are only about 10 to 14 of these papillae on most people, and they are present at the back of the oral part of the tongue. They are arranged in a circular-shaped row just in front of the sulcus terminalis of the tongue. They are associated with ducts of Von Ebner's glands, and are innervated by the glossopharyngeal nerve."

"The fourth type of papillae, the filiform papillae, are the most numerous but do not contain taste buds. They are characterized by increased keratinisation and are involved in the mechanical aspect of providing abrasion."

In the study, published online ahead of print in Chemical Senses, Ozdener and colleagues used methods developed at Monell to maintain living human taste cells in culture. Using genetic and biochemical methods to probe the taste cell cultures, the researchers found that the human taste cells contain many key molecules known to be present in olfactory receptors.

They next used a method known as calcium imaging to show that the cultured taste cells respond to odor molecules in a manner similar to olfactory receptor cells.

Dr. Ozdener is a cell biologist at the Monell Center

Credit: Monell Center

Together, the findings provide the first demonstration of functional olfactory receptors in human taste cells, suggesting that olfactory receptors may play a role in the taste system by interacting with taste receptor cells on the tongue. Supporting this possibility, other experiments by the Monell scientists demonstrated that a single taste cell can contain both taste and olfactory receptors.

"The presence of olfactory receptors and taste receptors in the same cell will provide us with exciting opportunities to study interactions between odor and taste stimuli on the tongue," said Ozdener.

In addition to providing insight into the nature and mechanisms of smell and taste interactions, the findings also may provide a tool to increase understanding of how the olfactory system detects odors. Scientists still do not know what molecules activate the vast majority of the 400 different types of functional human olfactory receptors. Because the cultured taste cells respond to odors, they potentially could be used as screening assays to help identify which molecules bind to specific human olfactory receptors.

Moving forward, the scientists will seek to determine whether olfactory receptors are preferentially located on a specific taste cell type, for example, sweet- or salt-detecting cells. Other studies will explore how odor molecules modify taste cell responses and, ultimately, human taste perception.

Also contributing to the research were lead author Bilal Malik, Nadia Elkaddi, and Jumanah Turkistani, all from Monell, and Andrew Spielman, from the New York University School of Medicine. The research was funded by institutional funds from the Monell Center and grant P30DC011735 from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Contacts and sources:
Leslie Stein
The Monell Chemical Senses Center

Wikipedia - Taste bud

Tuesday, April 23, 2019

Potential for Life On Other Planets In Milky Way Revealed in New Study

One of the conditions for the emergence and persistence of life on Earth is the existence of geological activity such as earthquakes and volcanoes.

Volcanic activity caused by the movement of tectonic plates over the mantle (plate tectonics) recycles gases such as carbon dioxide through the mantle, crust, atmosphere and oceans, helping to keep the planet habitable by maintaining temperatures at ideal levels for the survival of living beings, scientists explain.

A study conducted by Brazil’s National Space Research Institute (INPE) suggests our galaxy, the Milky Way, contains other rocky planets with a high probability of having plate tectonics, increasing the chances that they are habitable.

Researchers have found evidence of the existence of rocky exoplanets with a high probability of having plate tectonics, increasing the likelihood that they are habitable 

Credit: R. Hurt / NASA

The study was supported by FAPESP. The results have been published in Monthly Notices of the Royal Astronomical Society (MNRAS).

“We found that geological conditions favorable to the emergence and maintenance of life exist on rocky planets, that life may exist throughout the Milky Way and that it may have originated at any time during our galaxy’s evolution,” said Jorge Luis Melendez Moreno, a professor at the University of São Paulo’s Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG-USP) in Brazil and one of the authors of the study.

Scientists at other research institutions in Brazil and abroad also participated in the study.

They determined the surface parameters, masses and ages of 53 solar twins located at different points in the Milky Way. They also analyzed the chemical composition of these stars, called solar twins because their temperature, gravity and surface chemistry are similar to those of our Sun. The aim of the study was to discover whether potentially habitable rocky planets also orbit around the stars in question.

The analysis was performed using a spectrograph called HARPS (High Accuracy Radial velocity Planet Searcher) installed on the 3.6 m telescope operated by the European Southern Observatory (ESO) at the La Silla facility in Chile. The instrument measures the electromagnetic spectra of the “colors” emitted by celestial bodies, from shorter (ultraviolet) to longer (infrared) wavelengths.

The findings showed that the stars contain an abundance of thorium, a radioactive element with isotopes that split owing to atomic instability into smaller isotopes, emitting energy in a process called radioactive decay.

The energy released by the decay of unstable isotopes – not only thorium but also other radioactive elements such as uranium and potassium – gives rise to Earth’s mantle convection and tectonic activity. Part of the planet’s internal heat is a remnant of the primordial heat from its formation, but at least half is due to radioactive energy.

Thus, the initial levels of these radioactive elements in a rocky exoplanet contribute indirectly to the habitability of its surface, especially given the long time they take to decay, on the scale of billions of years, the researchers explained.
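The billion-year timescale the researchers invoke follows directly from the exponential decay law. A minimal Python sketch (an illustration, not part of the study; the half-lives are standard reference values) shows why thorium in particular can power mantle convection for so long:

```python
# A minimal sketch (not from the study) of the radioactive decay law
# N(t) = N0 * 2**(-t / t_half), which sets the billion-year timescale
# over which these elements can keep a planet's interior warm.
# Half-lives below are standard reference values, in billions of years.
HALF_LIVES_GYR = {
    "Th-232": 14.0,   # thorium: very long-lived, dominates late-time heating
    "U-238": 4.47,    # uranium
    "K-40": 1.25,     # potassium: mostly decayed after a few billion years
}

def remaining_fraction(t_gyr: float, t_half_gyr: float) -> float:
    """Fraction of the initial radioactive inventory left after t_gyr."""
    return 2.0 ** (-t_gyr / t_half_gyr)

# After ~4.5 billion years, roughly the age of Earth:
for isotope, t_half in HALF_LIVES_GYR.items():
    print(f"{isotope}: {remaining_fraction(4.5, t_half):.0%} remains")
```

Running the sketch shows that after 4.5 billion years roughly 80% of the original thorium-232 remains, versus about half of the uranium-238 and under a tenth of the potassium-40, which is why a planet's initial thorium endowment matters for sustaining plate tectonics over geological time.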

“The thorium levels we measured in these solar twins point to a sufficient amount of available energy from the decay of this radioactive element to maintain mantle convection and plate tectonics in any rocky planets that may be orbiting around them,” said Rafael Botelho, first author of the study. Botelho is studying for a PhD in astrophysics at INPE.

The initial thorium abundances in the solar twins were compared with the abundances of iron, silicon (an indicator of mantle thickness and mass in rocky planets) and two other heavy elements, neodymium and europium. The results showed that the thorium-silicon ratio in the solar twins increased over time and was equal to or higher than that of our Sun since the formation of the Milky Way.

“There are signs that thorium is also abundant in old solar twins. This means the Milky Way’s disk could be full of life,” said André Milone, a scientist at INPE and supervisor of Botelho’s PhD research.

Contacts and sources:
By Elton Alisson
Agência FAPESP

Citation: “Thorium in solar twins: implications for habitability in rocky planets” by R. B. Botelho, A. de C. Milone, J. Melendez, M. Bedell, L. Spina, M. Asplund, L. dos Santos, J. L. Bean, I. Ramirez, D. Yong, S. Dreizler, A. Alves-Brito and J. Yana Galarza can be retrieved from: