Saturday, November 30, 2019

Coated Seeds May Enable Agriculture on Marginal Lands

A specialized silk covering could protect seeds from salinity while also providing fertilizer-generating microbes.

Providing seeds with a protective coating that also supplies essential nutrients to the germinating plant could make it possible to grow crops in otherwise unproductive soils, according to new research at MIT.

Researchers have used silk derived from ordinary silkworm cocoons, like those seen here, mixed with bacteria and nutrients, to make a coating for seeds that can help them germinate and grow even in salty soil.
Image courtesy of the researchers

A team of engineers has coated seeds with silk that has been treated with a kind of bacteria that naturally produce a nitrogen fertilizer, to help the germinating plants develop. Tests have shown that these seeds can grow successfully in soils that are too salty to allow untreated seeds to develop normally. The researchers hope this process, which can be applied inexpensively and without the need for specialized equipment, could open up areas of land to farming that are now considered unsuitable for agriculture.

The findings are being published this week in the journal PNAS, in a paper by graduate students Augustine Zvinavashe ’16 and Hui Sun, postdoc Eugen Lim, and professor of civil and environmental engineering Benedetto Marelli.

The work grew out of Marelli’s previous research on using silk coatings as a way to extend the shelf life of seeds used as food crops. “When I was doing some research on that, I stumbled on biofertilizers that can be used to increase the amount of nutrients in the soil,” he says. These fertilizers use microbes that live symbiotically with certain plants and convert nitrogen from the air into a form that can be readily taken up by the plants.

Not only does this provide a natural fertilizer to the plant crops, but it avoids problems associated with other fertilizing approaches, he says: “One of the big problems with nitrogen fertilizers is they have a big environmental impact, because they are very energetically demanding to produce.” These artificial fertilizers may also have a negative impact on soil quality, according to Marelli.

Although these nitrogen-fixing bacteria occur naturally in soils around the world, with different local varieties found in different regions, they are very hard to preserve outside of their natural soil environment. But silk can preserve biological material, so Marelli and his team decided to try it out on these nitrogen-fixing bacteria, known as rhizobacteria.

“We came up with the idea to use them in our seed coating, and once the seed was in the soil, they would resuscitate,” he says. Preliminary tests did not turn out well, however; the bacteria weren’t preserved as well as expected.

Planted in identical pots of salty soil, untreated seeds (left) mostly fail to germinate, while the coated seeds (right) develop normally.
Courtesy of the researchers

That’s when Zvinavashe came up with the idea of adding a particular nutrient to the mix, a kind of sugar known as trehalose, which some organisms use to survive under low-water conditions. The silk, bacteria, and trehalose were all suspended in water, and the researchers simply soaked the seeds in the solution for a few seconds to produce an even coating. Then the seeds were tested at both MIT and a research facility operated by the Mohammed VI Polytechnic University in Ben Guerir, Morocco. “It showed the technique works very well,” Zvinavashe says.

The resulting plants, helped by ongoing fertilizer production by the bacteria, developed in better health than those from untreated seeds and grew successfully in soil from fields that are presently not productive for agriculture, Marelli says.

In practice, such coatings could be applied to the seeds by either dipping or spray coating, the researchers say. Either process can be done at ordinary ambient temperature and pressure. “The process is fast, easy, and it might be scalable” to allow for larger farms and unskilled growers to make use of it, Zvinavashe says. “The seeds can be simply dip-coated for a few seconds,” producing a coating that is just a few micrometers thick.

The ordinary silk they use “is water soluble, so as soon as it’s exposed to the soil, the bacteria are released,” Marelli says. But the coating nevertheless provides enough protection and nutrients to allow the seeds to germinate in soil with a salinity level that would ordinarily prevent their normal growth. “We do see plants that grow in soil where otherwise nothing grows,” he says.

These rhizobacteria normally provide fertilizer to legume crops such as common beans and chickpeas, and those have been the focus of the research so far, but it may be possible to adapt them to work with other kinds of crops as well, and that is part of the team’s ongoing research. “There is a big push to extend the use of rhizobacteria to nonlegume crops,” he says. One way to accomplish that might be to modify the DNA of the bacteria, plants, or both, he says, but that may not be necessary.

“Our approach is almost agnostic to the kind of plant and bacteria,” he says, and it may be feasible “to stabilize, encapsulate and deliver [the bacteria] to the soil, so it becomes more benign for germination” of other kinds of plants as well.

Even if limited to legume crops, the method could still make a significant difference to regions with large areas of saline soil. “Based on the excitement we saw with our collaboration in Morocco,” Marelli says, “this could be very impactful.”

As a next step, the researchers are working on developing new coatings that could not only protect seeds from saline soil, but also make them more resistant to drought, using coatings that absorb water from the soil. Meanwhile, next year they will begin test plantings out in open experimental fields in Morocco; their previous plantings have been done indoors under more controlled conditions.

The research was partly supported by the Université Mohammed VI Polytechnique-MIT Research Program, the Office of Naval Research, and the Office of the Dean for Graduate Fellowship and Research.

Contacts and sources:
David L. Chandler
MIT - Massachusetts Institute of Technology

Clear, Conductive Coating Could Protect Advanced Solar Cells, Touch Screens

A new coating material should be relatively easy to produce at an industrial scale, researchers say.

MIT researchers have improved on a transparent, conductive coating material, producing a tenfold gain in its electrical conductivity. When incorporated into a type of high-efficiency solar cell, the material increased the cell’s efficiency and stability.

Illustration shows the apparatus used to create a thin layer of a transparent, electrically conductive material, to protect solar cells or other devices. The chemicals used to produce the layer, shown in tubes at left, are introduced into a vacuum chamber where they deposit a layer on a substrate material at top of the chamber.
Illustration courtesy of the authors, edited by MIT News

The new findings are reported today in the journal Science Advances, in a paper by MIT postdoc Meysam Heydari Gharahcheshmeh, professors Karen Gleason and Jing Kong, and three others.

“The goal is to find a material that is electrically conductive as well as transparent,” Gleason explains, which would be “useful in a range of applications, including touch screens and solar cells.” The material most widely used today for such purposes is known as ITO, for indium tin oxide, but that material is quite brittle and can crack after a period of use, she says.

Gleason and her co-researchers improved a flexible version of a transparent, conductive material two years ago and published their findings, but this material still fell well short of matching ITO’s combination of high optical transparency and electrical conductivity. The new, more ordered material, she says, is more than 10 times better than the previous version.

The combined transparency and conductivity of such materials is measured in units of siemens per centimeter. ITO ranges from 6,000 to 10,000, and though nobody expected a new material to match those numbers, the goal of the research was to find a material that could reach at least a value of 35. The material in the earlier publication exceeded that goal, demonstrating a value of 50, and the new material has leapfrogged that result, now clocking in at 3,000; the team is still fine-tuning the process to raise that figure further.

The high-performing flexible material, an organic polymer known as PEDOT, is deposited in an ultrathin layer just a few nanometers thick, using a process called oxidative chemical vapor deposition (oCVD). This process results in a layer in which the tiny crystals that form the polymer are all aligned horizontally, giving the material its high conductivity. Additionally, the oCVD method can decrease the stacking distance between polymer chains within the crystallites, which further enhances electrical conductivity.

To demonstrate the material’s potential usefulness, the team incorporated a layer of the highly aligned PEDOT into a perovskite-based solar cell. Such cells are considered a very promising alternative to silicon because of their high efficiency and ease of manufacture, but their lack of durability has been a major drawback. With the new oCVD aligned PEDOT, the perovskite’s efficiency improved and its stability doubled.

In the initial tests, the oCVD layer was applied to substrates 6 inches in diameter, but the process could be adapted directly to large-scale, roll-to-roll industrial manufacturing, Heydari Gharahcheshmeh says. “It’s now easy to adapt for industrial scale-up,” he says. That’s facilitated by the fact that the coating can be processed at 140 degrees Celsius, a much lower temperature than alternative materials require.

The oCVD PEDOT is a mild, single-step process, enabling direct deposition onto plastic substrates, as desired for flexible solar cells and displays. In contrast, the aggressive growth conditions of many other transparent conductive materials require an initial deposition on a different, more robust substrate, followed by complex processes to lift off the layer and transfer it to plastic.

Because the material is made by a dry vapor deposition process, the thin layers produced can follow even the finest contours of a surface, coating them all evenly, which could be useful in some applications. For example, it could be coated onto fabric and cover each fiber but still allow the fabric to breathe.

The team still needs to demonstrate the system at larger scales and prove its stability over longer periods and under different conditions, so the research is ongoing. But “there’s no technical barrier to moving this forward. It’s really just a matter of who will invest to take it to market,” Gleason says.

The research team included MIT postdocs Mohammad Mahdi Tavakoli and Maxwell Robinson, and research affiliate Edward Gleason. The work was supported by Eni S.p.A. under the Eni-MIT Alliance Solar Frontiers Program.

Contacts and sources:
David L. Chandler
MIT - Massachusetts Institute of Technology

How to Design and Control Robots With Stretchy, Flexible Bodies

Optimizing soft robots to perform specific tasks is a huge computational problem, but a new model can help.

MIT researchers have invented a way to efficiently optimize the control and design of soft robots for target tasks, which has traditionally been a monumental undertaking in computation.

The MIT-invented model efficiently and simultaneously optimizes control and design of soft robots for target tasks, which has traditionally been a monumental undertaking in computation. The model, for instance, was significantly faster and more accurate than state-of-the-art methods at simulating how quadrupedal robots (pictured) should move to reach target destinations.
Image courtesy of the researchers

Soft robots have springy, flexible, stretchy bodies that can essentially move an infinite number of ways at any given moment. Computationally, this represents a highly complex “state representation,” which describes how each part of the robot is moving. State representations for soft robots can have potentially millions of dimensions, making it difficult to calculate the optimal way to make a robot complete complex tasks.

At the Conference on Neural Information Processing Systems next month, the MIT researchers will present a model that learns a compact, or “low-dimensional,” yet detailed state representation, based on the underlying physics of the robot and its environment, among other factors. This helps the model iteratively co-optimize movement control and material design parameters catered to specific tasks.

“Soft robots are infinite-dimensional creatures that bend in a billion different ways at any given moment,” says first author Andrew Spielberg, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “But, in truth, there are natural ways soft objects are likely to bend. We find the natural states of soft robots can be described very compactly in a low-dimensional description. We optimize control and design of soft robots by learning a good description of the likely states.”

In simulations, the model enabled 2D and 3D soft robots to complete tasks — such as moving certain distances or reaching a target spot —more quickly and accurately than current state-of-the-art methods. The researchers next plan to implement the model in real soft robots.

Joining Spielberg on the paper are CSAIL graduate students Allan Zhao, Tao Du, and Yuanming Hu; Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Soft robotics is a relatively new field of research, but it holds promise for advanced robotics. For instance, flexible bodies could offer safer interaction with humans, better object manipulation, and more maneuverability, among other benefits.

Control of robots in simulations relies on an “observer,” a program that computes variables describing how the soft robot is moving to complete a task. In previous work, the researchers decomposed the soft robot into hand-designed clusters of simulated particles. Particles contain important information that helps narrow down the robot’s possible movements. If a robot attempts to bend a certain way, for instance, actuators may resist that movement enough that it can be ignored. But for such complex robots, manually choosing which clusters to track during simulations can be tricky.

Building off that work, the researchers designed a “learning-in-the-loop optimization” method, where all optimized parameters are learned during a single feedback loop over many simulations. And, at the same time as learning optimization — or “in the loop” — the method also learns the state representation.

The model employs a technique called the material point method (MPM), which simulates the behavior of particles of continuum materials, such as foams and liquids, on a background grid. In doing so, it captures the particles of the robot and its observable environment as pixels or, in 3D, voxels, without the need for any additional computation.
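As a rough illustration of that particle-to-grid step, here is a minimal sketch. Real MPM transfers mass and momentum through smooth interpolation kernels; the simple binning and the 8x8 grid size below are simplifications for illustration.

```python
import numpy as np

def particles_to_grid(positions, grid_size=8):
    """Scatter particle mass onto a background grid by simple binning.
    Real MPM uses interpolation kernels and also transfers momentum;
    this only counts particles per cell."""
    grid = np.zeros((grid_size, grid_size))
    for x, y in positions:  # positions normalized to [0, 1)
        i = min(int(x * grid_size), grid_size - 1)
        j = min(int(y * grid_size), grid_size - 1)
        grid[i, j] += 1.0
    return grid

# Three particles of a toy 2D robot, scattered onto an 8x8 grid.
occupancy = particles_to_grid([(0.1, 0.2), (0.5, 0.5), (0.9, 0.9)])
```

The resulting grid is the fixed-size "image" of the robot's state that later stages of the pipeline consume, regardless of how many particles the robot has.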

In a learning phase, this raw particle grid information is fed into a machine-learning component that learns to compress an input image into a low-dimensional representation and then decompress that representation back into the input image. If this “autoencoder” retains enough detail while compressing, it can accurately recreate the input image from the compressed version.
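A linear stand-in for that autoencoder shows the shape of the idea. The 64x64 grid size, 32-dimensional latent state, and random orthonormal weights below are all illustrative assumptions; the actual model learns its weights from simulation data.

```python
import numpy as np

RAW_DIM, LATENT_DIM = 64 * 64, 32   # illustrative sizes
rng = np.random.default_rng(0)

# A random orthonormal basis stands in for trained encoder weights.
W = np.linalg.qr(rng.standard_normal((RAW_DIM, LATENT_DIM)))[0]

def encode(frame):    # raw grid -> compact state representation
    return frame @ W

def decode(latent):   # compact state -> reconstructed grid
    return latent @ W.T

# A frame lying in the learned subspace is reconstructed exactly,
# which is the property the compressed state needs to have.
frame = W @ rng.standard_normal(LATENT_DIM)
recon = decode(encode(frame))
```

The point of the sketch is the dimensionality: a 4,096-value frame round-trips through a 32-value code, and as long as the robot's likely states live near that low-dimensional subspace, little is lost.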

In the researchers’ work, the autoencoder’s learned compressed representations serve as the robot’s low-dimensional state representation. In an optimization phase, that compressed representation loops back into the controller, which outputs a calculated actuation for how each particle of the robot should move in the next MPM-simulated step.

Simultaneously, the controller uses that information to adjust the optimal stiffness for each particle to achieve its desired movement. In the future, that material information can be useful for 3D-printing soft robots, where each particle spot may be printed with slightly different stiffness. “This allows for creating robot designs catered to the robot motions that will be relevant to specific tasks,” Spielberg says. “By learning these parameters together, you keep everything as synchronized as much as possible to make that design process easier.”

Faster optimization

All optimization information is, in turn, fed back into the start of the loop to train the autoencoder. Over many simulations, the controller learns the optimal movement and material design, while the autoencoder learns the increasingly more detailed state representation. “The key is we want that low-dimensional state to be very descriptive,” Spielberg says.

After the robot gets to its simulated final state over a set period of time — say, as close as possible to the target destination — it updates a “loss function.” That’s a critical component of machine learning, which tries to minimize some error. In this case, it minimizes, say, how far away the robot stopped from the target. That loss function flows back to the controller, which uses the error signal to tune all the optimized parameters to best complete the task.
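In miniature, that optimization loop looks like the following. This is a toy 1-D stand-in in which the "simulation" is trivial and the gradient is written by hand; in the actual system the error signal flows back through the differentiable simulation.

```python
TARGET = 5.0  # hypothetical target destination

def final_position(theta):
    # Stand-in for a full simulated rollout under controller parameter theta.
    return theta

def loss(theta):
    # Squared distance between where the robot stopped and the target.
    return (final_position(theta) - TARGET) ** 2

def gradient_step(theta, lr=0.1):
    grad = 2 * (final_position(theta) - TARGET)  # d(loss)/d(theta)
    return theta - lr * grad

theta = 0.0
initial = loss(theta)
for _ in range(50):          # repeated simulate-and-update iterations
    theta = gradient_step(theta)
final = loss(theta)
```

Each pass plays the role of one simulation in the loop: evaluate the loss at the final state, then nudge the controller parameters downhill.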

If the researchers tried to directly feed all the raw particles of the simulation into the controller, without the compression step, “running and optimization time would explode,” Spielberg says. Using the compressed representation, the researchers were able to decrease the running time for each optimization iteration from several minutes down to about 10 seconds.

The researchers validated their model on simulations of various 2D and 3D biped and quadruped robots. They also found that, while robots using traditional methods can take up to 30,000 simulations to optimize these parameters, robots trained on their model took only about 400 simulations.

“Our goal is to enable quantum leaps in the way engineers go from specification to design, prototyping, and programming of soft robots. In this paper, we explore how co-optimizing the body and control system of a soft robot can lead to the rapid creation of soft-bodied robots customized to the tasks they have to do,” Rus says.

Deploying the model into real soft robots means tackling issues with real-world noise and uncertainty that may decrease the model’s efficiency and accuracy. But, in the future, the researchers hope to design a full pipeline, from simulation to fabrication, for soft robots.

Contacts and sources:
Rob Matheson
MIT - Massachusetts Institute of Technology

Bot Can Beat Humans In Multiplayer Hidden-Role Games

Using deductive reasoning, a bot identifies friend or foe to ensure victory over humans in certain online games.

MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.

DeepRole, an MIT-invented gaming bot equipped with “deductive reasoning,” can beat human players in tricky online multiplayer games where player roles and motives are kept secret.
Credit: MIT

Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world’s first bot that can beat professionals in multiplayer poker. DeepMind’s AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.

At the Conference on Neural Information Processing Systems next month, the researchers will present DeepRole, the first gaming bot that can win online multiplayer games in which the participants’ team allegiances are initially unclear. The bot is designed with novel “deductive reasoning” added into an AI algorithm commonly used for playing poker. This helps it reason about partially observable actions, to determine the probability that a given player is a teammate or opponent. In doing so, it quickly learns whom to ally with and which actions to take to ensure its team’s victory.

The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game “The Resistance: Avalon.” In this game, players try to deduce their peers’ secret roles as the game progresses, while simultaneously hiding their own roles. As both a teammate and an opponent, DeepRole consistently outperformed human players.

“If you replace a human teammate with a bot, you can expect a higher win rate for your team. Bots are better partners,” says first author Jack Serrino ’18, who majored in electrical engineering and computer science at MIT and is an avid online “Avalon” player.

The work is part of a broader project to better model how humans make socially informed decisions. Doing so could help build robots that better understand, learn from, and work with humans.

“Humans learn from and cooperate with others, and that enables us to achieve together things that none of us can achieve alone,” says co-author Max Kleiman-Weiner, a postdoc in the Center for Brains, Minds and Machines and the Department of Brain and Cognitive Sciences at MIT, and at Harvard University. “Games like ‘Avalon’ better mimic the dynamic social settings humans experience in everyday life. You have to figure out who’s on your team and will work with you, whether it’s your first day of kindergarten or another day in your office.”

Joining Serrino and Kleiman-Weiner on the paper are David C. Parkes of Harvard and Joshua B. Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds and Machines.

Deductive bot

In “Avalon,” three players are randomly and secretly assigned to a “resistance” team and two players to a “spy” team. Both spy players know all players’ roles. During each round, one player proposes a subset of two or three players to execute a mission. All players simultaneously and publicly vote to approve or disapprove the subset. If a majority approve, the selected players secretly determine the mission’s outcome: the mission succeeds only if every selected player chooses “succeed,” and fails if even one chooses “fail.” Resistance players must always choose to succeed, but spy players may choose either outcome. The resistance team wins after three successful missions; the spy team wins after three failed missions.
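The mission and victory rules can be sketched directly. This is a simplified model for illustration only: team proposal, voting, and hidden information are omitted.

```python
def mission_result(cards):
    """A mission succeeds only if every selected player plays 'succeed';
    a single 'fail' card sinks it."""
    return "fail" not in cards

def game_winner(mission_results):
    """First side to three missions wins (None if still undecided).
    mission_results is a list of booleans, True meaning success."""
    successes = sum(1 for r in mission_results if r)
    failures = len(mission_results) - successes
    if successes >= 3:
        return "resistance"
    if failures >= 3:
        return "spies"
    return None
```

For example, a two-player mission with cards `["succeed", "fail"]` fails, and a game whose first three missions all succeed is an immediate resistance win.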

Winning the game basically comes down to deducing who is resistance or spy, and voting for your collaborators. But that’s actually more computationally complex than playing chess and poker. “It’s a game of imperfect information,” Kleiman-Weiner says. “You’re not even sure who you’re against when you start, so there’s an additional discovery phase of finding whom to cooperate with.”

DeepRole uses a game-planning algorithm called “counterfactual regret minimization” (CFR) — which learns to play a game by repeatedly playing against itself — augmented with deductive reasoning. At each point in a game, CFR looks ahead to create a decision “game tree” of lines and nodes describing the potential future actions of each player. Game trees represent all possible actions (lines) each player can take at each future decision point. In playing out potentially billions of game simulations, CFR notes which actions had increased or decreased its chances of winning, and iteratively revises its strategy to include more good decisions. Eventually, it plans an optimal strategy that, at worst, ties against any opponent.
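The engine inside CFR is regret matching in self-play. Below is a minimal sketch on rock-paper-scissors, a far simpler game than “Avalon” and not the researchers’ actual implementation, but it shows the same loop: note which alternative actions would have done better, accumulate that regret, and shift the strategy toward high-regret actions until the average strategy approaches an unbeatable (equilibrium) one.

```python
import numpy as np

# Rock-paper-scissors payoff for the row player (zero-sum, antisymmetric).
PAYOFF = np.array([[0, -1, 1],
                   [1,  0, -1],
                   [-1, 1,  0]])
N = 3

def strategy_from_regret(regret):
    """Play actions in proportion to their positive accumulated regret."""
    pos = np.maximum(regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(N, 1.0 / N)

def train(iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    regret = np.zeros((2, N))
    strat_sum = np.zeros((2, N))
    for _ in range(iters):
        strats = [strategy_from_regret(regret[p]) for p in range(2)]
        acts = [rng.choice(N, p=strats[p]) for p in range(2)]
        for p in range(2):
            me, opp = acts[p], acts[1 - p]
            # Regret = payoff each alternative action would have earned,
            # minus the payoff actually received (the game is symmetric,
            # so PAYOFF serves both seats).
            regret[p] += PAYOFF[:, opp] - PAYOFF[me, opp]
            strat_sum[p] += strats[p]
    # The AVERAGE strategy over all iterations converges to equilibrium.
    return strat_sum / strat_sum.sum(axis=1, keepdims=True)
```

For rock-paper-scissors the equilibrium is to play each action one third of the time, and the averaged self-play strategy converges to roughly that mix.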

CFR works well for games like poker, with public actions — such as betting money and folding a hand — but it struggles when actions are secret. The researchers’ CFR combines public actions and consequences of private actions to determine if players are resistance or spy.

The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do. The game tree represents a strategy that gives each player the highest likelihood of winning in an assigned role. The tree’s nodes contain “counterfactual values,” which are essentially estimates of the payoff each player receives by playing that given strategy.

At each mission, the bot looks at how each person played in comparison to the game tree. If, throughout the game, a player makes enough decisions that are inconsistent with the bot’s expectations, then the player is probably playing as the other role. Eventually, the bot assigns a high probability for each player’s role. These probabilities are used to update the bot’s strategy to increase its chances of victory.
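That update is essentially Bayes’ rule. Here is a sketch with made-up numbers; in DeepRole the action likelihoods come from the game tree’s counterfactual values, not hand-picked constants.

```python
def update_spy_belief(prior_spy, p_action_if_spy, p_action_if_resistance):
    """Bayes' rule: revise the probability that a player is a spy after
    observing one action, given how likely each role is to take it."""
    joint_spy = prior_spy * p_action_if_spy
    joint_res = (1 - prior_spy) * p_action_if_resistance
    return joint_spy / (joint_spy + joint_res)

# Prior: 2 of the 5 players are spies.
belief = 2 / 5
# The player approves a team that (per the strategy) a spy would approve
# 70% of the time but a resistance player only 20% of the time.
belief = update_spy_belief(belief, 0.7, 0.2)
```

One inconsistent-looking action raises the spy probability from 0.4 to 0.7 in this toy example; repeating the update over many observed actions is what drives the belief toward certainty.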

Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions. “If it’s on a two-player mission that fails, the other players know one player is a spy. The bot probably won’t propose the same team on future missions, since it knows the other players think it’s bad,” Serrino says.

Language: The next frontier

Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. “Avalon” enables players to chat on a text module during the game. “But it turns out our bot was able to work well with a team of other humans while only observing player actions,” Kleiman-Weiner says. “This is interesting, because one might think games like this require complicated communication strategies.”

“I was thrilled to see this paper when it came out,” says Michael Bowling, a professor at the University of Alberta whose research focuses, in part, on training computers to play games. “It is really exciting seeing the ideas in DeepStack see broader application outside of poker. [DeepStack brought ideas that have] been so central to AI in chess and Go to situations of imperfect information. But I still wasn’t expecting to see it extended so quickly into the situation of a hidden-role game like Avalon. Being able to navigate a social deduction scenario, which feels so quintessentially human, is a really important step. There is still much work to be done, especially when the social interaction is more open-ended, but we keep seeing that many of the fundamental AI algorithms with self-play learning can go a long way.”

Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad. That would involve assigning text to the correlated probability that a player is resistance or spy, which the bot already uses to make its decisions. Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social-deduction games, such as the popular game “Werewolf,” which involve several minutes of arguing and persuading other players about who’s on the good and bad teams.

“Language is definitely the next frontier,” Serrino says. “But there are many challenges to attack in those games, where communication is so key.”

Contacts and sources:
Rob Matheson
MIT - Massachusetts Institute of Technology

Toward More Efficient Computing, With Magnetic Waves

A new circuit design offers a path to “spintronic” devices that use little electricity and generate practically no heat.

MIT researchers have devised a novel circuit design that enables precise control of computing with magnetic waves — with no electricity needed. The advance takes a step toward practical magnetic-based devices, which have the potential to compute far more efficiently than electronics.

An MIT-invented circuit uses only a nanometer-wide “magnetic domain wall” to modulate the phase and magnitude of a spin wave, which could enable practical magnetic-based computing — using little to no electricity.
Image courtesy of the researchers, edited by MIT News

Classical computers rely on massive amounts of electricity for computing and data storage, and generate a lot of wasted heat. In search of more efficient alternatives, researchers have started designing magnetic-based “spintronic” devices, which use relatively little electricity and generate practically no heat.

Spintronic devices leverage the “spin wave” — a quantum property of electrons — in magnetic materials with a lattice structure. This approach involves modulating the spin wave’s properties to produce some measurable output that can be correlated to computation. Until now, modulating spin waves has required injecting electrical currents through bulky components that can cause signal noise and effectively negate any inherent performance gains.

The MIT researchers developed a circuit architecture that uses only a nanometer-wide domain wall in layered nanofilms of magnetic material to modulate a passing spin wave, without any extra components or electrical current. In turn, the spin wave can be tuned to control the location of the wall, as needed. This provides precise control of two changing spin wave states, which correspond to the 1s and 0s used in classical computing.

In the future, pairs of spin waves could be fed into the circuit through dual channels, modulated for different properties, and combined to generate some measurable quantum interference — similar to how photon wave interference is used for quantum computing. Researchers hypothesize that such interference-based spintronic devices, like quantum computers, could execute highly complex tasks that conventional computers struggle with.

“People are beginning to look for computing beyond silicon. Wave computing is a promising alternative,” says Luqiao Liu, a professor in the Department of Electrical Engineering and Computer Science (EECS) and principal investigator of the Spintronic Material and Device Group in the Research Laboratory of Electronics. “By using this narrow domain wall, we can modulate the spin wave and create these two separate states, without any real energy costs. We just rely on spin waves and intrinsic magnetic material.”

Joining Liu on the paper are Jiahao Han, Pengxiang Zhang, and Justin T. Hou, three graduate students in the Spintronic Material and Device Group; and EECS postdoc Saima A. Siddiqui.

Flipping magnons

Spin waves are ripples of energy with small wavelengths. Chunks of the spin wave, which are essentially the collective spin of many electrons, are called magnons. While magnons are not true particles, like individual electrons, they can be measured similarly for computing applications.

In their work, the researchers utilized a customized “magnetic domain wall,” a nanometer-sized barrier between two neighboring magnetic structures. They layered a pattern of cobalt/nickel nanofilms — each a few atoms thick — with certain desirable magnetic properties that can handle a high volume of spin waves. Then they placed the wall in the middle of a magnetic material with a special lattice structure, and incorporated the system into a circuit.

On one side of the circuit, the researchers excited constant spin waves in the material. As the wave passes through the wall, its magnons immediately spin in the opposite direction: magnons in the first region spin north, while those in the second region — past the wall — spin south. This causes a dramatic shift in the wave’s phase (angle) and a slight decrease in its magnitude (power).

In experiments, the researchers placed a separate antenna on the opposite side of the circuit to detect and transmit the output signal. Results indicated that, at its output state, the phase of the input wave had flipped 180 degrees. The wave’s magnitude — measured from highest to lowest peak — had also decreased by a significant amount.
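As a rough numerical picture of that measurement (a toy model, not the team's device physics), the 180-degree flip and the attenuation can be recovered directly from sampled input and output waves:

```python
import numpy as np

# Toy sketch (not the researchers' model): a spin wave crossing the domain
# wall comes out phase-flipped by ~180 degrees and slightly attenuated.
# We mimic that with sampled sinusoids and recover the phase shift from
# the normalized inner product of the input and output signals.

t = np.linspace(0.0, 1.0, 1000)           # time samples (arbitrary units)
freq = 5.0                                # wave frequency (arbitrary units)
wave_in = np.sin(2 * np.pi * freq * t)    # spin wave entering the circuit

attenuation = 0.8                         # assumed magnitude loss at the wall
wave_out = attenuation * np.sin(2 * np.pi * freq * t + np.pi)  # flipped wave

# cos(delta_phi) = <in, out> / (|in| * |out|)
cos_dphi = np.dot(wave_in, wave_out) / (
    np.linalg.norm(wave_in) * np.linalg.norm(wave_out))
delta_phi = np.degrees(np.arccos(np.clip(cos_dphi, -1.0, 1.0)))

print(f"phase shift: {delta_phi:.0f} degrees")                   # 180
print(f"amplitude ratio: {wave_out.max() / wave_in.max():.2f}")  # 0.80
```

In the real device the output signal comes from the antenna rather than a synthesized sinusoid; the point is only that a 180-degree flip shows up directly in the correlation between the two signals.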

Adding some torque

Then, the researchers discovered a mutual interaction between spin wave and domain wall that enabled them to efficiently toggle between two states. Without the domain wall, the circuit would be uniformly magnetized; with the domain wall, the circuit has a split, modulated wave.

By controlling the spin wave, they found they could control the position of the domain wall. This relies on a phenomenon called “spin-transfer torque,” which is when spinning electrons essentially jolt a magnetic material to flip its magnetic orientation.

In the researchers’ work, they boosted the power of injected spin waves to induce a certain spin of the magnons. This actually draws the wall toward the boosted wave source. In doing so, the wall gets jammed under the antenna — effectively making it unable to modulate waves and ensuring uniform magnetization in this state.

Using a special magnetic microscope, they showed that this method causes a micrometer-size shift in the wall, which is enough to position it anywhere along the material block. Notably, the mechanism of magnon spin-transfer torque was proposed, but not demonstrated, a few years ago. “There was good reason to think this would happen,” Liu says. “But our experiments prove what will actually occur under these conditions.”

The whole circuit is like a water pipe, Liu says. The valve (domain wall) controls how the water (spin wave) flows through the pipe (material). “But you can also imagine making water pressure so high, it breaks the valve off and pushes it downstream,” Liu says. “If we apply a strong enough spin wave, we can move the position of the domain wall — except it moves slightly upstream, not downstream.”

Such innovations could enable practical wave-based computing for specific tasks, such as the signal-processing technique known as the fast Fourier transform. Next, the researchers hope to build a working wave circuit that can execute basic computations. Among other things, they have to optimize materials, reduce potential signal noise, and further study how fast they can switch between states by moving around the domain wall. “That’s next on our to-do list,” Liu says.

Contacts and sources:
Rob Matheson
MIT - Massachusetts Institute of Technology

Rapamycin May Slow Skin Aging, Drexel Study Reports

The search for youthfulness typically turns to lotions, supplements, serums and diets, but there may soon be a new option joining the fray. Rapamycin, an FDA-approved drug normally used to prevent organ rejection after transplant surgery, may also slow aging in human skin, according to a study from Drexel University College of Medicine researchers published in Geroscience.

Basic science studies have previously used the drug to slow aging in mice, flies, and worms, but the current study is the first to show an effect on aging in human tissue, specifically skin. When delivered topically, the drug reduced signs of aging, including decreased wrinkles, reduced sagging and more even skin tone.
Credit: Drexel University

“As researchers continue to seek out the elusive ‘fountain of youth’ and ways to live longer, we’re seeing growing potential for use of this drug,” said senior author Christian Sell, PhD, an associate professor of Biochemistry and Molecular Biology at the College of Medicine. “So, we said, let’s try skin. It’s a complex organ with immune, nerve cells, stem cells – you can learn a lot about the biology of a drug and the aging process by looking at skin.”

In the current Drexel-led study, 13 participants over age 40 applied rapamycin cream every 1-2 days to one hand and a placebo to the other hand for eight months. The researchers checked on subjects after two, four, six and eight months, including conducting a blood test and a biopsy at the six- or eight-month mark.

After eight months, the majority of the rapamycin hands showed increases in collagen protein and statistically significant decreases in p16 protein, a key marker of skin cell aging. Skin that has lower levels of p16 has fewer senescent cells, which are associated with skin wrinkles. Beyond cosmetic effects, higher levels of p16 can lead to dermal atrophy, a common condition in seniors, which is associated with fragile skin that tears easily, slow healing after cuts and increased risk of infection or complications after an injury.

So how does rapamycin work? Rapamycin blocks the appropriately named “target of rapamycin” (TOR), a protein that acts as a mediator in metabolism, growth and aging of human cells. The capability of rapamycin to improve human health beyond outward appearance becomes clearer on a closer look at p16, a protein involved in the stress response that human cells mount when damaged, a response that is also a way of preventing cancer. When cells have a mutation that would have otherwise created a tumor, this response helps prevent the tumor by slowing the cell cycle. Instead of creating a tumor, it contributes to the aging process.

“When cells age, they become detrimental and create inflammation,” said Sell. “That’s part of aging. These cells that have undergone stress are now pumping out inflammatory markers.”

In addition to its current use to prevent organ rejection, rapamycin is currently prescribed (in higher doses than used in the current study) for the rare lung disease lymphangioleiomyomatosis, and as an anti-cancer drug. The current Drexel study shows a second life for the drug in low doses, including new applications for studying rapamycin to increase human lifespan or improve human performance.

Rapamycin -- first discovered in the 1970s in bacteria found in the soil of Easter Island -- also reduces stress in the cell by attacking cancer-causing free radicals in the mitochondria.

In previous studies, the team used rapamycin in cell cultures, which reportedly improved cell function and slowed aging.

In a 1996 study in Cell, researchers used rapamycin to block TOR proteins in yeast cultures; the treated yeast cells were smaller but lived longer.

“If you ramp the pathway down you get a smaller phenotype,” said Sell. “When you slow growth, you seem to extend lifespan and help the body repair itself – at least in mice. This is similar to what is seen in calorie restriction.”

The researchers note that, as this is early research, many more questions remain about how to harness this drug. Future studies will look at how to apply the drug in clinical settings and find applications in other diseases. During the current study, the researchers confirmed that none of the rapamycin was absorbed into the bloodstream of participants.

There are two pending patents on this technology, both of which have been licensed to Boinca Therapeutics LLC, of which Sell and Ibiyonu Lawrence, MD, an associate professor of Internal Medicine in the College of Medicine, are shareholders.

In addition to Sell and Lawrence, additional authors on the research are Christina Lee Chung, MD, Melissa Hoffman, Dareen Elgindi, Kumar Nadhan, MD, Manali Potnis, Catlin Sershon, Annie Lee, and Rhonda Binnebose, who contributed to the research while at Drexel, and Antonello Lorenzini, PhD, from the University of Bologna (Italy).

Contacts and sources:
Greg Richter
Drexel University

Citation: Topical rapamycin reduces markers of senescence and aging in human skin: an exploratory, prospective, randomized trial. Christina Lee Chung, Ibiyonu Lawrence, Melissa Hoffman, Dareen Elgindi, Kumar Nadhan, Manali Potnis, Annie Jin, Catlin Sershon, Rhonda Binnebose, Antonello Lorenzini, Christian Sell. GeroScience, 2019; DOI: 10.1007/s11357-019-00113-y

Exactly What Happens During a Chemical Reaction Observed in the Coldest Reaction Known in the Universe

With ultracold chemistry, researchers get a first look at exactly what happens during a chemical reaction.

The coldest chemical reaction in the known universe took place in what appears to be a chaotic mess of lasers. The appearance deceives: Deep within that painstakingly organized chaos, in temperatures millions of times colder than interstellar space, Kang-Kuen Ni achieved a feat of precision. Forcing two ultracold molecules to meet and react, she broke and formed the coldest bonds in the history of molecular couplings.

“Probably in the next couple of years, we are the only lab that can do this,” said Ming-Guang Hu, a postdoctoral scholar in the Ni lab and first author on their paper published in Science. Five years ago, Ni, the Morris Kahn Associate Professor of Chemistry and Chemical Biology and a pioneer of ultracold chemistry, set out to build a new apparatus that could achieve the lowest temperature chemical reactions of any currently available technology. But she couldn’t be sure her intricate engineering would work.
Credit: Jon Chase/Harvard University

Now, she and her team not only performed the coldest reaction yet, they discovered their new apparatus can do something even they did not predict. In such intense cold—500 nanokelvin, just half a millionth of a degree above absolute zero—their molecules slowed to such glacial speeds that Ni and her team could see something no one has been able to see before: the moment when two molecules meet to form two new molecules. In essence, they captured a chemical reaction in its most critical and elusive act.

Chemical reactions are responsible for literally everything: breathing, cooking, digesting, creating energy, pharmaceuticals, and household products like soap. So, understanding how they work at a fundamental level could help researchers design combinations the world has never seen. With an almost infinite number of new combinations possible, these new molecules could have endless applications from more efficient energy production to new materials like mold-proof walls and even better building blocks for quantum computers.

In her previous work, Ni used colder and colder temperatures to work this chemical magic: forging molecules from atoms that would otherwise never react. Cooled to such extremes, atoms and molecules slow to a quantum crawl, their lowest possible energy state. There, Ni can manipulate molecular interactions with utmost precision. But even she could only see the start of her reactions: two molecules go in, but then what? What happened in the middle and the end was a black hole only theories could try to explain.

Chemical reactions occur in just a thousandth of a billionth of a second, better known in the scientific world as a picosecond. In the last twenty years, scientists have used ultra-fast lasers like fast-action cameras, snapping rapid images of reactions as they occur. But they can’t capture the whole picture. “Most of the time,” Ni said, “you just see that the reactants disappear and the products appear in a time that you can measure. There was no direct measurement of what actually happened in the middle.” Until now.

Chemical reactions transform reactants into products through an intermediate state where bonds break and form. Often too short-lived to observe, this phase has so far eluded intimate investigation. By "freezing out" the rotation, vibration, and motion of the reactants (here, potassium-rubidium molecules) to a temperature of 500 nanokelvin (barely above absolute zero), the number of energetically allowed exits for the products is limited. "Trapped" in the intermediate for far longer, researchers can then observe this phase directly with photoionization detection. This technique paves the way towards the quantum control of chemical reactions with ultracold molecules.

 Image credit: Ming-Guang Hu

Ni’s ultracold temperatures force reactions to a comparatively numbed speed. “Because [the molecules] are so cold,” Ni said, “now we kind of have a bottleneck effect.” When she and her team reacted two potassium rubidium molecules—chosen for their pliability—the ultracold temperatures forced the molecules to linger in the intermediate stage for microseconds. Microseconds—mere millionths of a second—may seem short, but that’s millions of times longer than usual and long enough for Ni and her team to investigate the phase when bonds break and form, in essence, how one molecule turns into another.

With this intimate vision, Ni said she and her team can test theories that predict what happens in a reaction’s black hole to confirm if they got it right. Then, her team can craft new theories, using actual data to more precisely predict what happens during other chemical reactions, even those that take place in the mysterious quantum realm.

Already, the team is exploring what else they can learn in their ultracold test bed. Next, for example, they could manipulate the reactants, exciting them before they react to see how their heightened energy impacts the outcome. Or, they could even influence the reaction as it occurs, nudging one molecule or the other. “With our controllability, this time window is long enough, we can probe,” Hu said. “Now, with this apparatus, we can think about this. Without this technique, without this paper, we cannot even think about this.”

This research was funded in part by the Department of Energy, the David and Lucile Packard Foundation, and the National Science Foundation. Additional authors on the paper include Yu Liu, David Grimes, Yen-Wei Lin, Andrei Gheorghe, R. Vexiau, N. Bouloufa-Maafa, O. Dulieu, and T. Rosenband.

Contacts and sources:
Caitlin McDermott-Murphy
Harvard University

Citation: Direct observation of bimolecular reactions of ultracold KRb molecules. M.-G. Hu, Y. Liu, D. D. Grimes, Y.-W. Lin, A. H. Gheorghe, R. Vexiau, N. Bouloufa-Maafa, O. Dulieu, T. Rosenband, K.-K. Ni. Science, 2019; 366 (6469): 1111 DOI: 10.1126/science.aay9531

Biodiversity and Wind Energy: How Stakeholders Evaluate the Green-Green Dilemma

Biodiversity and wind energy: How stakeholders evaluate the green-green dilemma – and what they think about possible solutions

The replacement of fossil and nuclear energy sources for electricity production by renewables such as wind, sun, water and biomass is a cornerstone of Germany’s energy policy.

Victim of a wind power plant

Photo: Christian Voigt, IZW

Among these, wind energy production is the most important component. However, energy production from wind is not necessarily ecologically sustainable. It requires relatively large spaces for installation and operation of turbines, and bats and birds die after collisions with rotors in significant numbers. For these reasons, the location and operation of wind energy plants are often in direct conflict with the legal protection of endangered species. The almost unanimous opinion of experts from local and central government authorities, environmental NGOs and expert offices is that the current mechanisms for the protection of bats in wind power projects are insufficient. This is one conclusion from a survey by the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) published in the "Journal of Renewable and Sustainable Energy".

More than 500 representatives of various stakeholders (expert groups) involved in the environmental impact assessment of wind turbines took part in the Leibniz-IZW survey. This group included employees of conservation agencies and government authorities, representatives of non-governmental organisations in the conservation sector, consultants, employees of wind energy companies and scientists conducting research on renewable energies or on biodiversity. The survey focused on assessments of the contribution of wind energy to the transformation of the energy system, on ecologically sustainable installation and operation of wind turbines, and on possible solutions for the trade-off between tackling climate change and protecting biological diversity.

"We found both significant discrepancies and broad consensus among participants," states Christian Voigt, department head at Leibniz-IZW and first author of the survey. "The overwhelming majority of respondents confirmed that there is a direct conflict between green electricity and bat protection. Most importantly, they considered the protection of biodiversity to be just as important as the contribution to protect the global climate through renewable energy production." Most stakeholders agreed that small to moderate losses in the electricity yield of wind power plants, and thus in revenue, resulting from the consistent application of conservation laws must become acceptable. Possible shutdown times in electricity production should be compensated. "We will probably have to accept a higher price of green electricity for the purpose of the effective protection of bats in order to compensate for the shutdown times of wind turbines," Voigt sums up. "This does not address the unsolved issue of how to deal with habitat loss, especially when wind turbines are placed in forests."

The conflict between wind power projects and the objectives of biological conservation intensified in recent years because the rapidly rising number of wind plants – there are now around 30,000 on mainland Germany – has made suitable locations scarce. As a result, new plants are increasingly erected in locations where conflicts with wildlife and the protection of wildlife are more likely, for example in forests. "According to members of conservation authorities, only about 25 % of wind turbines are operated under mitigation schemes such as temporary halt of wind turbine operation during periods of high bat activity (for example during the migration season), at relatively low wind speeds and at high air temperatures even though the legal framework that protects bats would require the enforcement of such measures," adds author Marcus Fritze of Leibniz-IZW. In addition, it became clear from the survey that members of the wind energy industry hold views on some aspects of the green-green dilemma that differ from those of other expert groups. "Representatives of the wind energy industry consider compliance with climate protection targets as more important than measures to protect species. All other expert groups disagree with this notion," said Fritze. "A consistent dialogue between all participants therefore seems particularly important in order to enable ecologically sustainable wind energy production."

The survey also showed that
• more than 95% of respondents consider the transformation of the energy system (“Energiewende”) to be important,
• all expert groups agreed on aiming for an ecologically sustainable energy transition,
• two thirds of stakeholders in the wind energy industry shared the view that wind energy production should be promoted more strongly than energy production from other renewable sources whereas 85% of representatives from the other stakeholders disagreed with this,
• 86% of the survey participants outside the wind energy sector gave green electricity no higher priority than the protection of wildlife whereas only 4% of representatives of the wind sector industry shared this opinion (almost half were undecided or consider wind power to be more important than biodiversity protection).

For the purpose of this survey, the authors selected bats as a representative group of species for all wildlife affected by wind turbines, because large numbers of bats die at turbines and they enjoy a high level of protection both nationally and internationally, and therefore play an important role in planning and approval procedures for wind turbines. Indeed, the high collision rates of bats at wind turbines may be relevant to entire bat populations. The common noctule is the most frequent victim of wind turbines; this species is rated as declining by the Federal Agency for Nature Conservation in Germany. Furthermore, the results of years of research in the department headed by Voigt at the Leibniz-IZW show that the losses affect both local bat populations and migrating individuals. Thus, fatalities at wind turbines in Germany affect bat populations in Germany as well as populations in other European regions from where these bats originate.

On the basis of the survey results, the authors argue in favour of a stronger consideration of nature conservation objectives in the wind energy industry and for an appreciation of the targets of the conservation of biological diversity. They suggest ways in which the cooperation of those involved in the planning and operation of wind power projects can be improved so that both wind energy production and the goals of biological conservation can be satisfied.

Contacts and sources:
Dr. Christian Voigt,  Marcus Fritze,  Jan Zwilling
Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) in the Forschungsverbund Berlin e.V.

Citation: Producing wind energy at the cost of biodiversity: A stakeholder view on a green-green dilemma. Christian C. Voigt, Tanja M. Straka, Marcus Fritze. Journal of Renewable and Sustainable Energy, 2019; 11 (6): 063303 DOI: 10.1063/1.5118784

Glass from a 3D Printer

ETH researchers used a 3D printing process to produce complex and highly porous glass objects. The basis for this is a special resin that can be cured with UV light.

Various glass objects created with a 3D printer.
Photo: Group for Complex Materials / ETH Zurich

Producing glass objects using 3D printing is not easy. Only a few groups of researchers around the world have attempted to produce glass using additive methods. Some have made objects by printing molten glass, but the disadvantage is that this requires extremely high temperatures and heat-resistant equipment. Others have used powdered ceramic particles that can be printed at room temperature and then sintered later to create glass; however, objects produced in this way are not very complex.

Researchers from ETH Zurich have now used a new technique to produce complex glass objects with 3D printing. The method is based on stereolithography, one of the first 3D printing techniques developed during the 1980s. David Moore, Lorenzo Barbera, and Kunal Masania in the Complex Materials group led by ETH professor André Studart have developed a special resin that contains a plastic and organic molecules to which glass precursors are bonded. The researchers reported their results in the latest issue of the journal Nature Materials.

Video: ETH Zurich

Light used to “grow” objects

The resin can be processed using commercially available Digital Light Processing technology. This involves irradiating the resin with UV light patterns. Wherever the light strikes the resin, it hardens because the light-sensitive components of the polymer resin cross-link at the exposed points. The plastic monomers combine to form a labyrinth-like structure, creating the polymer. The ceramic-bearing molecules fill the interstices of this labyrinth.

An object can thus be built up layer by layer. The researchers can change various parameters in each layer, including pore size: weak light intensity results in large pores; intense illumination produces small pores. “We discovered that by accident, but we can use this to directly influence the pore size of the printed object,” says Masania.

The researchers are also able to modify the microstructure, layer by layer, by mixing silica with borate or phosphate and adding it to the resin. With this technique, complex objects can be made from different types of glass, and different glasses can even be combined in the same object.

The researchers then fire the blank produced in this way at two different temperatures: at 600˚C to burn off the polymer framework and then at around 1000˚C to densify the ceramic structure into glass. During the firing process, the objects shrink significantly, but become transparent and hard like window glass.

The blank (left) is fired at 600 degrees to eliminate the plastic framework. In a second firing step, the object becomes glass (right). 
Image: Group for Complex Materials / ETH Zurich

Patent application submitted

These 3D-printed glass objects are still no bigger than a die. Large glass objects, such as bottles, drinking glasses or window panes, cannot be produced in this way – which was not actually the goal of the project, emphasises Masania.

3D-printed glass objects. The variety of geometries of 3D-printed glass objects is almost unlimited.

Credit: Complex Materials / ETH Zurich

The aim was rather to prove the feasibility of producing glass objects of complex geometry using a 3D printing process. However, the new technology is not just a gimmick. The researchers applied for a patent and are currently negotiating with a major Swiss glassware dealer who wants to use the technology in his company.

Contacts and sources:
Peter Rüegg
ETH Zurich

Citation: Three-dimensional printing of multicomponent glasses using phase-separating resins. David G. Moore, Lorenzo Barbera, Kunal Masania, André R. Studart. Nature Materials, 2019; DOI: 10.1038/s41563-019-0525-y

Astronomers Discover Unpredicted Stellar Black Hole

Our Milky Way Galaxy is estimated to contain 100 million stellar black holes - cosmic bodies formed by the collapse of massive stars and so dense even light can't escape. Until now, scientists had estimated the mass of an individual stellar black hole in our Galaxy at no more than 20 times that of the Sun. But the discovery of a huge black hole by a Chinese-led team of international scientists has toppled that assumption.

The team, headed by Prof. Liu Jifeng of the National Astronomical Observatory of China of the Chinese Academy of Sciences (NAOC), spotted a stellar black hole with a mass 70 times greater than the Sun. The monster black hole is located 15 thousand light-years from Earth and has been named LB-1 by the researchers. The discovery is reported in the latest issue of Nature.

Figure LB-1: Accretion of gas onto a stellar black hole from its blue companion star, through a truncated accretion disk (Artist impression).

Credit: Yu Jingchuan, Beijing Planetarium, 2019.

The discovery came as a big surprise. "Black holes of such mass should not even exist in our Galaxy, according to most of the current models of stellar evolution," said Prof. Liu. "We thought that very massive stars with the chemical composition typical of our Galaxy must shed most of their gas in powerful stellar winds, as they approach the end of their life. Therefore, they should not leave behind such a massive remnant. LB-1 is twice as massive as what we thought possible. Now theorists will have to take up the challenge of explaining its formation."

Until just a few years ago, stellar black holes could only be discovered when they gobbled up gas from a companion star. This process creates powerful X-ray emissions, detectable from Earth, that reveal the presence of the collapsed object.

The vast majority of stellar black holes in our Galaxy are not engaged in a cosmic banquet, though, and thus don't emit revealing X-rays. As a result, only about two dozen Galactic stellar black holes have been well identified and measured.

To counter this limitation, Prof. Liu and collaborators surveyed the sky with China's Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), looking for stars that orbit an invisible object, pulled by its gravity.

This observational technique was first proposed by the visionary English scientist John Michell in 1783, but it has only become feasible with recent technological improvements in telescopes and detectors.

Still, such a search is like looking for the proverbial needle in a haystack: only one star in a thousand may be circling a black hole.

After the initial discovery, the world's largest optical telescopes - Spain's 10.4-m Gran Telescopio Canarias and the 10-m Keck I telescope in the United States - were used to determine the system's physical parameters. The results were nothing short of fantastic: a star eight times heavier than the Sun was seen orbiting a 70-solar-mass black hole, every 79 days.
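Those reported values can be sanity-checked with Kepler's third law, which in solar units (semi-major axis in AU, period in years, masses in solar masses) reads a³/P² = M_total. The back-of-the-envelope estimate below is ours, not a figure from the paper:

```python
# Kepler's third law in solar units: a^3 / P^2 = M_total, with the
# semi-major axis a in AU, the period P in years, and the total mass
# in solar masses. All input values are the ones reported for LB-1.

m_companion = 8.0    # companion star mass, solar masses
m_black_hole = 70.0  # black hole mass, solar masses
period_days = 79.0   # orbital period

period_years = period_days / 365.25
a_au = ((m_companion + m_black_hole) * period_years**2) ** (1.0 / 3.0)

print(f"orbital separation: about {a_au:.1f} AU")  # roughly 1.5 AU
```

The separation comes out near 1.5 AU, comparable to Mars' distance from the Sun; the orbit is that compact despite the 79-day period because the system's combined mass is nearly 80 times that of the Sun.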

The discovery of LB-1 fits nicely with another breakthrough in astrophysics. Recently, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo gravitational wave detectors have begun to catch ripples in spacetime caused by collisions of black holes in distant galaxies. Intriguingly, the black holes involved in such collisions are also much bigger than what was previously considered typical.

The direct sighting of LB-1 proves that this population of over-massive stellar black holes exists even in our own backyard. "This discovery forces us to re-examine our models of how stellar-mass black holes form," said LIGO Director Prof. David Reitze from the University of Florida in the U.S.

"This remarkable result along with the LIGO-Virgo detections of binary black hole collisions during the past four years really points towards a renaissance in our understanding of black hole astrophysics," said Reitze.

This work was made possible by LAMOST (Xinglong, China), the Gran Telescopio Canarias (Canary Islands, Spain), the W. M. Keck Observatory (Hawaii, United States), and the Chandra X-ray Observatory (United States). The research team comprised scientists from China, the United States, Spain, Australia, Italy, Poland and the Netherlands.


Contacts and sources:
Xu Ang
Chinese Academy of Sciences Headquarters

Citation: A wide star–black-hole binary system from radial-velocity measurements
Jifeng Liu, Haotong Zhang, Andrew W. Howard, Zhongrui Bai, Youjun Lu, Roberto Soria, Stephen Justham, Xiangdong Li, Zheng Zheng, Tinggui Wang, Krzysztof Belczynski, Jorge Casares, Wei Zhang, Hailong Yuan, Yiqiao Dong, Yajuan Lei, Howard Isaacson, Song Wang, Yu Bai, Yong Shao, Qing Gao, Yilun Wang, Zexi Niu, Kaiming Cui, Chuanjie Zheng, Xiaoyong Mu, Lan Zhang, Wei Wang, Alexander Heger, Zhaoxiang Qi, Shilong Liao, Mario Lattanzi, Wei-Min Gu, Junfeng Wang, Jianfeng Wu, Lijing Shao, Rongfeng Shen, Xiaofeng Wang, Joel Bregman, Rosanne Di Stefano, Qingzhong Liu, Zhanwen Han, Tianmeng Zhang, Huijuan Wang, Juanjuan Ren, Junbo Zhang, Jujia Zhang, Xiaoli Wang, Antonio Cabrera-Lavers, Romano Corradi, Rafael Rebolo, Yongheng Zhao, Gang Zhao, Yaoquan Chu & Xiangqun Cui. Nature volume 575, pages 618–621 (2019)

Wednesday, November 27, 2019

Inbreeding and Population/Demographic Shifts Could Have Led to Neanderthal Extinction

Neanderthal extinction could have occurred without environmental pressure or competition with modern humans.

Small populations, inbreeding, and random demographic fluctuations could have been enough to cause Neanderthal extinction, according to a study published November 27, 2019 in the open-access journal PLOS ONE by Krist Vaesen from Eindhoven University of Technology, the Netherlands, and colleagues.

Credit: Petr Kratochvil (CC0)

Paleoanthropologists agree that Neanderthals disappeared around 40,000 years ago--about the same time that anatomically modern humans began migrating into the Near East and Europe. However, the role modern humans played in Neanderthal extinction is disputed. In this study, the authors used population modelling to explore whether Neanderthal populations could have vanished without external factors such as competition from modern humans.

Using data from extant hunter-gatherer populations as parameters, the authors developed population models for simulated Neanderthal populations of various initial sizes (50, 100, 500, 1,000, or 5,000 individuals). They then subjected these model populations to inbreeding, Allee effects (where reduced population size negatively impacts individuals' fitness), and annual random demographic fluctuations in births, deaths, and the sex ratio, to see whether these factors could bring about an extinction event over a 10,000-year period.
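The study's model is far richer (its parameters come from extant hunter-gatherer data), but the interplay of the three mechanisms can be illustrated with a toy stochastic simulation. Every function name and parameter value below is invented for illustration; this is not the authors' code.

```python
import random

def years_to_extinction(n0, birth_frac=0.25, mortality=0.1,
                        inbreeding_penalty=0.0, horizon=10_000,
                        cap=10_000, seed=42):
    """Toy birth-death model in the spirit of the study. Each year roughly
    half the population is female, a fraction `birth_frac` of them gives
    birth (an Allee-style limit on reproduction), every individual dies
    with probability `mortality`, and `inbreeding_penalty` adds extra
    mortality that grows as the population shrinks. Returns the extinction
    year, or None if the population survives the whole horizon."""
    rng = random.Random(seed)
    n = n0
    for year in range(horizon):
        females = n // 2  # assume an even sex ratio
        births = sum(rng.random() < birth_frac for _ in range(females))
        # extra death risk in small populations stands in for inbreeding depression
        p_death = min(1.0, mortality + inbreeding_penalty / max(n, 1))
        deaths = sum(rng.random() < p_death for _ in range(n))
        n = min(cap, n + births - deaths)  # cap stands in for carrying capacity
        if n <= 0:
            return year
    return None

# With reproduction shut off entirely, even a group of 50 vanishes quickly:
print(years_to_extinction(50, birth_frac=0.0))
```

In a real population-viability analysis, many such runs across random seeds and initial sizes would yield an extinction probability for each scenario, which is the kind of output the study reports.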

The population models show that inbreeding alone was unlikely to have led to extinction (this only occurred in the smallest model population). However, reproduction-related Allee effects where 25 percent or fewer Neanderthal females gave birth within a given year (as is common in extant hunter-gatherers) could have caused extinction in populations of up to 1,000 individuals. In conjunction with demographic fluctuations, Allee effects plus inbreeding could have caused extinction across all population sizes modelled within the 10,000 years allotted.

The population models are limited by their parameters, which are based on modern human hunter-gatherers and exclude the impact of the Allee effect on survival rates. It's also possible that modern humans could have impacted Neanderthal populations in ways which reinforced inbreeding and Allee effects, but are not reflected in the models.

However, by showing demographic issues alone could have led to Neanderthal extinction, the authors note these models may serve as a "null hypothesis" for future competing theories--including the impact of modern humans on Neanderthals.

The authors add: "Did Neanderthals disappear because of us? No, this study suggests. The species' demise might have been due merely to a stroke of bad, demographic luck."

Funding: Research by KV was supported by the Netherlands Organisation for Scientific Research (NWO) (grant 276-20-021).

Contacts and sources:
Krist Vaesen

Citation: Vaesen K, Scherjon F, Hemerik L, Verpoorte A (2019) Inbreeding, Allee effects and stochasticity might be sufficient to account for Neanderthal extinction. PLoS ONE 14(11): e0225117. The article is freely available in PLOS ONE:

Nine Climate Tipping Points Now “Active”, Warn Scientists

More than half of the climate tipping points identified a decade ago are now “active”, a group of leading scientists have warned.

This threatens the loss of the Amazon rainforest and the great ice sheets of Antarctica and Greenland, which are currently undergoing measurable and unprecedented changes much earlier than expected.

This “cascade” of changes sparked by global warming could threaten the existence of human civilizations.

The collapse of major ice sheets on Greenland, West Antarctica and part of East Antarctica would commit the world to around 10 meters of irreversible sea-level rise.

Evidence is mounting that these events are more likely and more interconnected than was previously thought, leading to a possible domino effect.

In an article in the journal Nature, the scientists call for urgent action to reduce greenhouse gas emissions to prevent key tipping points, warning of a worst-case scenario of a “hothouse”, less habitable planet.

“A decade ago we identified a suite of potential tipping points in the Earth system, now we see evidence that over half of them have been activated,” said lead author Professor Tim Lenton, director of the Global Systems Institute at the University of Exeter.

“The growing threat of rapid, irreversible changes means it is no longer responsible to wait and see. The situation is urgent and we need an emergency response.”

Co-author Johan Rockström, director of the Potsdam Institute for Climate Impact Research, said: “It is not only human pressures on Earth that continue rising to unprecedented levels.

“It is also that as science advances, we must admit that we have underestimated the risks of unleashing irreversible changes, where the planet self-amplifies global warming.

“This is what we now start seeing, already at 1°C global warming.

“Scientifically, this provides strong evidence for declaring a state of planetary emergency, to unleash world action that accelerates the path towards a world that can continue evolving on a stable planet.”

In the commentary, the authors propose a formal way to calculate a planetary emergency as risk multiplied by urgency.

Tipping point risks are now much higher than earlier estimates, while urgency reflects how quickly we must act to reduce that risk.
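As a back-of-envelope sketch of the proposed formalism: the decomposition of risk into probability times damage, and of urgency into reaction time over intervention time remaining, is our paraphrase of the commentary, and the numbers are invented.

```python
def emergency(probability, damage, reaction_time, intervention_time_left):
    """Planetary emergency as risk multiplied by urgency."""
    risk = probability * damage                        # R = p * D
    urgency = reaction_time / intervention_time_left   # U = tau / T
    return risk * urgency                              # E = R * U

# If reaching net zero takes ~30 years but only ~20 years of headroom remain,
# urgency alone exceeds 1, so any appreciable risk yields a large E:
print(emergency(0.5, 1.0, 30, 20))  # 0.75
```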

Exiting the fossil fuel economy is unlikely before 2050, but with warming already at 1.1°C above pre-industrial levels, it is likely Earth will cross the 1.5°C guardrail by 2040. The authors conclude this alone defines an emergency.

Nine active tipping points:
  1. Arctic sea ice
  2. Greenland ice sheet
  3. Boreal forests
  4. Permafrost
  5. Atlantic Meridional Overturning Circulation
  6. Amazon rainforest
  7. Warm-water corals
  8. West Antarctic Ice Sheet
  9. Parts of East Antarctica

Reducing emissions could slow ice-sheet loss, allowing more time for low-lying populations to move.

The rainforests, permafrost and boreal forests are examples of biosphere tipping points that if crossed result in the release of additional greenhouse gases amplifying warming.

Despite most countries having signed the Paris Agreement, pledging to keep global warming well below 2°C, current national emissions pledges – even if they are met – would lead to 3°C of warming.

Although future tipping points and the interplay between them is difficult to predict, the scientists argue: “If damaging tipping cascades can occur and a global tipping cannot be ruled out, then this is an existential threat to civilization.

“No amount of economic cost–benefit analysis is going to help us. We need to change our approach to the climate problem.”

Professor Lenton added: “We might already have crossed the threshold for a cascade of inter-related tipping points.

“However, the rate at which they progress, and therefore the risk they pose, can be reduced by cutting our emissions.”

Though global temperatures have fluctuated over millions of years, the authors say humans are now “forcing the system”, with atmospheric carbon dioxide concentration and global temperature increasing at rates that are an order of magnitude higher than at the end of the last ice age.

The latest UN Climate Change Conference will take place in Madrid from December 2-13.

The article is entitled: “Climate tipping points – too risky to bet against.”

Contacts and sources:
Alex Morrison
University of Exeter

Animal Embryos Evolved before Animals

Animals evolved from single-celled ancestors, before diversifying into 30 or 40 distinct anatomical designs. When and how animal ancestors made the transition from single-celled microbes to complex multicellular organisms has been the focus of intense debate.

Until now, this question could only be addressed by studying living animals and their relatives. The research team has now found evidence, in fossilised embryos that resemble multicellular stages in the life cycle of single-celled relatives of animals, that a key step in this major evolutionary transition occurred long before complex animals appear in the fossil record.

The team discovered the fossils, named Caveasphaera, in 609-million-year-old rocks in the Guizhou Province of South China. Individual Caveasphaera fossils are only about half a millimetre in diameter, but X-ray microscopy revealed that they were preserved all the way down to their component cells.

These are computer models based on X-ray tomographic microscopy of the fossils, showing the successive stages of development.

Credit: Philip Donoghue and Zongjun Yin

Kelly Vargas, from the University of Bristol's School of Earth Sciences, said: "X-Ray tomographic microscopy works like a medical CT scanner, but allows us to see features that are less than a thousandth of a millimeter in size. We were able to sort the fossils into growth stages, reconstructing the embryology of Caveasphaera."

Co-author Zongjun Yin, from Nanjing Institute of Geology and Palaeontology in China, added: "Our results show that Caveasphaera sorted its cells during embryo development, in just the same way as living animals, including humans, but we have no evidence that these embryos developed into more complex organisms."

Co-author Dr John Cunningham, also from University of Bristol, said: "Caveasphaera had a life cycle like the close living relatives of animals, which alternate between single-celled and multicellular stages. However, Caveasphaera goes one step further, reorganising those cells during embryology."

Co-author Stefan Bengtson, from the Swedish Museum of Natural History, said "Caveasphaera is the earliest evidence of this most important step in the evolution of animals, which allowed them to develop distinct tissue layers and organs".

An embryo of Caveasphaera showing its cellular structure and the growing tips where cells are increasing in number through division. This image was obtained using Scanning Electron Microscopy. The fossil specimen is less than half a millimetre in diameter.

Credit: Philip Donoghue and Zongjun Yin

Co-author Maoyan Zhu, also from Nanjing Institute of Geology and Palaeontology, said he is not totally convinced that Caveasphaera is an animal. He added: "Caveasphaera looks a lot like the embryos of some starfish and corals - we don't find the adult stages simply because they are harder to fossilise."

Co-author Dr Federica Marone, from the Paul Scherrer Institute in Switzerland, said: "This study shows the amazing detail that can be preserved in the fossil record, but also the power of X-ray microscopes in uncovering secrets preserved in stone without destroying the fossils."

Co-author Professor Philip Donoghue, also from the University of Bristol's School of Earth Sciences, said "Caveasphaera shows features that look both like microbial relatives of animals and early embryo stages of primitive animals. We're still searching for more fossils that may help us to decide.

"Either way, fossils of Caveasphaera tell us that animal-like embryonic development evolved long before the oldest definitive animals appear in the fossil record."

This research was funded through the Biosphere Evolution, Transitions and Resilience (BETR) programme, which is co-funded by the Natural Environment Research Council (NERC) and Natural Science Foundation of China (NSFC).

Contacts and sources:
Philip Donoghue
University of Bristol


Imaging Uncovers Secrets of Medicine's Mysterious Ivory Manikins

Little is known about the origins of manikins--small anatomical sculptures thought to have been used by doctors four centuries ago--but now advanced imaging techniques have offered a revealing glimpse inside these captivating ivory dolls. Researchers using micro-CT successfully identified the material composition and components of several of these centuries-old ivory manikins, according to a new study being presented next week at the annual meeting of the Radiological Society of North America (RSNA).

This is an ivory figurine reclining on its 'bed' with all organs placed inside.

Credit: Study author and RSNA

Ivory manikins are typically thought to have been carved in Germany in the late 17th century. They are reclining human figurines, 4-8 inches long, generally female, which open to reveal removable organs and sometimes a fetus attached with a fabric "umbilical" cord. The manikins have intricately carved features, and some even have pillows beneath their heads. It is believed that they were used for the study of medical anatomy or perhaps as a teaching aid for pregnancy and childbirth. By the 18th century, they had been replaced by more realistic teaching tools, such as wax models and cadavers. The manikins then became objects of curiosity and luxury status symbols in private collections.

Duke University in Durham, N.C., holds the world's largest collection of manikins (22 out of 180 known manikins worldwide). Most of the manikins in the Duke collection were purchased in the 1930s and 1940s by Duke thoracic surgeon Josiah Trent, M.D., and his wife Mary Duke Biddle Trent, prior to the 1989 ivory trade ban. The researchers noted that after being donated to the university by Trent's granddaughters, the manikins have spent most of their time in archival storage boxes or behind display glass, as they are too fragile for regular handling.

"They are usually stored in a library vault and occasionally rotated into a special display unit in the Duke Medical Library for visitors to appreciate," said Fides R. Schwartz, M.D., research fellow in the Department of Radiology at Duke.

Micro-CT initial scan data. The internal organs and fetus inside the uterus are visible, similar to a photograph.
Credit: Study author and RSNA

Non-destructive imaging with X-rays and CT has been used in the past to examine fragile artwork and ancient artifacts. Imaging of relics has been extremely beneficial to the fields of archaeology and paleopathology--the study of ancient diseases.

Micro-CT is an imaging technique with greatly increased resolution, compared to standard CT. It not only allows visualization of internal features, it noninvasively provides volumetric information about an object's microstructure.

Dr. Schwartz and colleagues hoped that through micro-CT imaging they could determine the ivory type used in the Duke manikins, discover any repairs or alterations that were not visible to the naked eye, and allow a more precise estimation of their age.

"The advantage of micro-CT in the evaluation of these manikins enables us to analyze the microstructure of the material used," she said. "Specifically, it allows us to distinguish between 'true' ivory obtained from elephants or mammoths and 'imitation' ivory, such as deer antler or whale bone."

The research team scanned all 22 manikins with micro-CT and found that 20 of them were composed of true ivory alone, even though materials like antler might have been less expensive at the time. They discovered that one manikin was made entirely of antler bone, and one contained both ivory and whale bone components.

This is an ivory manikin after removal of the abdominal and chest wall, ribs, and part of the uterus. Internal organs such as the lungs and intestines, as well as a fetus inside the uterus, are visible.
Credit: Study author and RSNA

Metallic components were found in four of the manikins, and fibers in two. Twelve manikins contained hinging mechanisms or internal repairs with ivory pins, and one manikin contained a long detachable pin disguised as a hairpiece.

The most established trade routes in the 17th and 18th centuries sourced ivory from Africa, leading the researchers to believe that, since nearly all of the manikins were made from true ivory, the material used to craft them likely came from Africa.

"This may assist in further narrowing down the most probable production period for the manikins," Dr. Schwartz said. "Once historical trade routes are more thoroughly understood, it might become clear that the German region of origin had access to elephant ivory only for a limited time during the 17th and 18th century, for example, from 1650 to 1700 A.D."

Additionally, identifying non-ivory components in the manikins may provide more accessibility to carbon dating, allowing the researchers to more accurately estimate the age of some of the manikins without damage to the fragile pieces.

The researchers also hope to acquire 3D scans to create digital renderings and enable subsequent 3D printed models.

"This is potentially valuable to scientific, historic and artistic communities, as it would allow display and further study of these objects while protecting the fragile originals," Dr. Schwartz said. "Digitizing and 3D printing them will give visitors more access and opportunity to interact with the manikins and may also allow investigators to learn more about their history."

Co-authors are Susan A. Churchill, B.S.R.T. (R)(N)(CT), Rachel Ingold, M.A., M.L.S., Sinan Goknur, M.S., Divakar Gupta, M.D., Justin T. Gladman, M.A., Mark Olson, Ph.D., and Tina D. Tailor, M.D.

RSNA is an association of over 53,400 radiologists, radiation oncologists, medical physicists and related scientists, promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois.

Contacts and sources:
Linda Brooks

Radiological Society of North America

First Measures of Earth's Ionosphere Found with the Largest Atmospheric Radar in the Antarctic

There's chaos in the night sky, about 60 to 600 miles above Earth's surface. Called the ionosphere, this layer of Earth's atmosphere is blasted by solar radiation that ionizes its atoms and molecules. Free electrons and heavy ions are left behind, constantly colliding.

This dance was previously measured through a method called incoherent scatter radar in the northern hemisphere, where researchers beam radio waves into the ionosphere. The electrons in the atmosphere scatter the radio waves "incoherently". The different ways they scatter tell researchers about the particles populating the layer.

The Program of the Antarctic Syowa Mesosphere-Stratosphere-Troposphere/Incoherent Scatter radar (PANSY radar) consisting of an active phased array of 1045 Yagi antennas.

Credit: Taishi Hashimoto (NIPR)

Now, researchers have used radar in Antarctica to make the first measurements from the Antarctic region. They published their preliminary results on September 17, 2019 in the Journal of Atmospheric and Oceanic Technology.

"Incoherent scatter radar is currently the most powerful tool available to investigate the ionosphere because it covers a wide altitudinal range and it observes essential ionospheric parameters such as electron density, ion velocity, ion and electron temperatures, as well as ion compositions," said Taishi Hashimoto, assistant professor at the National Institute of Polar Research in Japan. While these radars are powerful, they're also rare due to their size and power demand.

Using the Program of the Antarctic Syowa Mesosphere-Stratosphere-Troposphere/Incoherent Scatter (PANSY) radar, the largest and finest-resolution atmospheric radar in the Antarctic, researchers performed the first incoherent scatter radar observations in the southern hemisphere in 2015. They also made the first 24-hour observation in 2017. While analyzing these observations, Hashimoto and the team expected to see significant differences between the southern measurements and the northern measurements, as Earth's lower atmosphere has a strong asymmetry between hemispheres.

"Clearly, observations in the southern hemisphere are crucial to revealing global features of both the atmosphere and the ionosphere," Hashimoto said.

It's not as simple as taking the measurements, however. Consider the radar as a pebble skipped across a pond's surface. The researchers want to learn how the pebble vertically displaces the water as it skips and eventually sinks. They aren't interested in the concentric ripples created at each skip, but they're so similar that it's difficult to discern which measurements are the ones needed.

These ripples are known as field-aligned irregularities, and Hashimoto's team applied a computer program that recognizes the different signals and suppresses the irregularities that could obscure the data.

"Our next step will be the simultaneous observation of ionosphere incoherent scatter and field-aligned irregularities, since the suppression and extraction are using the same principle from different aspects," Hashimoto said. "We are also planning to apply the same technique to obtain other types of plasma parameters, such as the drift velocity and ion temperature, leading to a better understanding of auroras."

Other authors include Akinori Saito of the Division of Earth and Planetary Sciences at Kyoto University, Koji Nishimura and Masaki Tsutsumi of the National Institute of Polar Research, Kaoru Sato of the Department of Earth and Planetary Science at the University of Tokyo and Toru Sato of the Department of Communications and Computer Engineering at Kyoto University.

Contacts and sources:
Research Organization of Information and Systems
National Institute of Polar Research (NIPR)


Skipping Breakfast Linked To Lower Grades

Students who rarely ate breakfast on school days achieved lower GCSE grades than those who ate breakfast frequently, according to a new study in Yorkshire.

Researchers, from the University of Leeds, have for the first time demonstrated a link between eating breakfast and GCSE performance for secondary school students in the UK.

Students from Leeds City Academy enjoying their free breakfast, which is offered to all the academy’s students daily, supported by Magic Breakfast.

Credit: Ginger Pixie Photography/Magic Breakfast

Adding together all of a student’s exam results, they found that students who said they rarely ate breakfast achieved nearly two grades lower than those who rarely missed their morning meal.

The research is published today in the journal Frontiers in Public Health.

Lead researcher Dr Katie Adolphus, from the University of Leeds’ School of Psychology, said: “Our study suggests that secondary school students are at a disadvantage if they are not getting a morning meal to fuel their brains for the start of the school day.

“The UK has a growing problem of food poverty, with an estimated half a million children arriving at school each day too hungry to learn. Previously we have shown that eating breakfast has a positive impact on children’s cognition.

“This research suggests that poor nutrition is associated with worse results at school.”

The Government in England runs a national, means-tested free school lunch programme accessible to all students, but there is no equivalent for breakfast.

Charities Magic Breakfast and Family Action deliver a breakfast programme funded by the Department for Education, which provides free breakfasts for more than 1,800 schools located in the most socio-economically deprived parts of England.

Separately, Magic Breakfast supports breakfast provision in a further 480 UK schools. However, this leaves many of the 24,000 state-funded schools in England without free breakfast provision for children not getting breakfast at home.

Some schools compensate by offering breakfast clubs they have to fund themselves. In some instances companies such as Kellogg’s fund breakfast clubs in schools.

The Leeds researchers say their findings support calls to expand the current limited free school breakfast programme to include every state school in England. A policy proposal from Magic Breakfast to introduce school breakfast legislation, backed by Leeds academics, is currently being considered by politicians.

Alex Cunningham, CEO of Magic Breakfast, said: “This study is a valuable insight, reinforcing the importance of breakfast in boosting pupils' academic attainment and removing barriers to learning. Education is crucial to a child’s future life success and escaping poverty, therefore ensuring every child has access to a healthy start to the day must be a priority.

“We are grateful to the University of Leeds for highlighting this positive impact and welcome their findings, highlighting once again the importance of our work with schools.”

GCSE performance

The researchers surveyed 294 students from schools and colleges in West Yorkshire in 2011, and found that 29% rarely or never ate breakfast on school days, whilst 18% ate breakfast occasionally, and 53% frequently. Their figures are similar to the latest national data for England in 2019, which found that more than 16% of secondary school children miss breakfast.

GCSE grades were converted to point scores using the Department for Education’s 2012 system, where A* = 58, A = 52, B = 46, and so on. Adding up students’ scores across all subjects gave students an aggregated score.

Those who rarely ate breakfast scored on average 10.25 points lower than those who frequently ate breakfast, a difference of nearly two grades, after accounting for other important factors including socio-economic status, ethnicity, age, sex and BMI.

Looking at performance for each individual GCSE, they found that students who rarely ate breakfast scored on average 1.20 points lower than those who frequently ate breakfast, after accounting for other factors. Each grade equates to six points, so the difference accounted for a drop of a fifth of a grade for every GCSE an individual achieved.
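The conversion can be reproduced in a few lines; the point values below B simply extend the six-point pattern the article describes.

```python
# Grade-to-point values from the DfE's 2012 scale as cited in the article
# (A* = 58, A = 52, B = 46, "and so on" in six-point steps).
POINTS = {"A*": 58, "A": 52, "B": 46, "C": 40,
          "D": 34, "E": 28, "F": 22, "G": 16}

def aggregate_score(grades):
    """Sum point values across all of a student's GCSE subjects."""
    return sum(POINTS[g] for g in grades)

print(aggregate_score(["A*", "A", "B"]))  # 156
print(round(10.25 / 6, 2))  # the aggregate gap in grade steps: 1.71
print(round(1.20 / 6, 2))   # the per-subject gap: 0.2 of a grade
```

Dividing both reported gaps by the six points per grade recovers the article's "nearly two grades" (10.25 / 6 ≈ 1.7) and "a fifth of a grade" (1.20 / 6 = 0.2) figures.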

Nicola Dolton, Programme Manager for the National School Breakfast Programme, from Family Action, said: “The National School Breakfast Programme is delighted to see the publication of this thorough and compelling research, highlighting the impact that breakfast consumption has on a child’s GCSE attainment.

“This report provides impressive evidence that eating a healthy breakfast improves a child’s educational attainment, which supports our own findings of improvements in a child’s concentration in class, readiness to learn, behaviour and punctuality.”

The research was funded by the Economic and Social Research Council and The Schools Partnership Trust Academies.

The paper, published in the journal Frontiers in Public Health, is titled ‘Associations between habitual school-day breakfast consumption frequency and academic performance in British adolescents’ and is available online here:

Contacts and sources:
Simon Moore
University of Leeds