Friday, June 11, 2021

A Shocking Fish Story That Will Give You Pause

American writer and humorist Mark Twain, a master of language and noted lecturer, once offered, “The right word may be effective, but no word was ever as effective as a rightly timed pause.”




Photo credit: Tsunehiko Kohashi

Electric fish and today’s TED talk speakers take a page from Twain’s playbook. They pause before sharing something particularly meaningful. Pauses also prime the sensory systems to receive new and important information, according to research from Washington University in St. Louis.

Bruce Carlson



“There is an increased response in listeners to words — or in this case, electrical pulses — that happens right after a pause,” said Bruce Carlson, professor of biology in Arts & Sciences and corresponding author of the study published May 26 in Current Biology. “Fish are basically doing the same thing we do to communicate effectively.”

Beyond discovering interesting parallels between human language and electric communication in fish, the research reveals an underlying mechanism for how pauses allow neurons in the midbrain to recover from stimulation.

Carlson and collaborators, including first author Tsunehiko Kohashi, formerly a postdoctoral research associate at Washington University, conducted their study with electric fish called mormyrids. These fish use weak electric discharges, or pulses, to locate prey and to communicate with one another.

The scientists tracked the banter between fish housed under different conditions. They observed that electric fish housed alone in their tanks tended to hum along without stopping much, producing fewer and shorter pauses in electric output than fish housed in pairs. What's more, fish tended to produce high-frequency bursts of pulses right after they paused.

The scientists then tried an experiment where they inserted artificial pauses into ongoing communication between two fish. They found that the fish receiving a pause — the listeners — increased their own rates of electric signaling just after the artificially inserted pauses. This result indicates that pauses were meaningful to the listeners.

Other researchers have studied the behavioral significance of pauses in human speech. Human listeners tend to recognize words better after pauses, and effective speakers tend to insert pauses right before something that they want to have a significant impact.

“Human auditory systems respond more strongly to words that come right after a pause, and during normal, everyday conversations, we tend to pause just before speaking words with especially high-information content,” Carlson said. “We see parallels in our fish where they respond more strongly to electrosensory stimuli that come after a pause. We also find that fish tend to pause right before they produce a high-frequency burst of electric pulses, which carries a large amount of information.”

African fish called mormyrids communicate using pulses of electricity. (Photo: Tsunehiko Kohashi)

The scientists wanted to understand the underlying neural mechanism that causes these effects. They applied stimulation to electrosensory neurons in the midbrain of the electric fish and observed that continually stimulated neurons produced weaker and weaker responses. This progressive weakness is referred to as short-term synaptic depression.

Cue Mark Twain and his well-timed pauses.

The scientists inserted pauses into the continuous stimulation. They found that pauses as short as about one second allowed the synapses to recover from short-term depression and increased the response of the postsynaptic neurons to stimuli following the pause.
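For readers who want to see the mechanics, here is a minimal numerical sketch of short-term synaptic depression with recovery, in the spirit of the classic Tsodyks-Markram model. The release fraction and the roughly one-second recovery time constant are illustrative assumptions chosen to echo the result reported above, not the authors' fitted parameters.

```python
import numpy as np

# Minimal sketch of short-term synaptic depression with recovery.
# U (fraction of resources used per pulse) and TAU_REC are assumptions.
U = 0.4          # fraction of synaptic resources released per pulse
TAU_REC = 1.0    # recovery time constant (s), echoing the ~1 s pause effect

def response_train(pulse_times):
    """Return the relative synaptic response to each pulse in a train."""
    x = 1.0                      # available resources (1 = fully recovered)
    last_t = None
    responses = []
    for t in pulse_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / TAU_REC)  # exponential recovery
        responses.append(U * x)  # response scales with available resources
        x -= U * x               # depletion caused by this pulse
        last_t = t
    return responses

# A continuous 10 Hz train vs. the same train with a ~1 s pause inserted.
steady = response_train(np.arange(0, 2.0, 0.1))
paused = response_train(np.concatenate([np.arange(0, 1.0, 0.1),
                                        np.arange(2.0, 3.0, 0.1)]))
print(f"response late in the steady train:  {steady[-1]:.3f}")
print(f"first response after the pause:     {paused[10]:.3f}")
```

Running the sketch shows the post-pause response several times stronger than the depressed steady-state response, which is qualitatively the effect the researchers measured in the midbrain.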

“Pauses inserted in electric speech reset the sensitivity of the listener’s brain, which was depressed during the continuous part of the speech,” Kohashi said. “Pauses seem to make the following message as clear as possible for the listener.”

Similar to humans.

Synaptic depression and recovery are universal in the nervous system, the researchers noted.

“We expect the same mechanism, more or less, plays a role in pauses during communication in other animals, including humans,” Carlson said.




Contacts and sources:
Talia Ogliore
Washington University in St. Louis


Publication: Pauses during communication release behavioral habituation through recovery from synaptic depression.
Tsunehiko Kohashi, Adalee J. Lube, Jenny H. Yang, Prema S. Roberts-Gaddipati, Bruce A. Carlson. Current Biology, 2021; DOI: 10.1016/j.cub.2021.04.056




Invisible but Mighty Particles above the Earth Come into Focus

Tiny charged particles (electrons and protons) that can damage satellites and alter the ozone layer have revealed some of their mysteries to University of Otago scientists.

An artist's depiction with cutaway section of the two giant donuts of radiation, called the Van Allen Belts, that surround Earth. 



Credit: NASA/Goddard Space Flight Center/Scientific Visualization Studio.

In a study, published in Geophysical Research Letters, the group looked at charged particles interacting with a type of radio wave called ‘EMIC’ – a wave generated in Earth's radiation belts (invisible rings of charged particles orbiting the Earth).

Dr Aaron Hendry



Lead author Dr Aaron Hendry, of the Department of Physics, says it is important to understand how these waves affect the belts – which are filled with expensive and important satellites – and Earth’s climate.

“Much like the Earth's atmosphere, the Earth’s magnetosphere – the region around the Earth where our magnetic field is stronger than the Sun’s – sometimes experiences strong ‘storms’, or periods of high activity. These storms can cause significant changes to the number of particles in the radiation belts and can accelerate some of them to very high speeds, making them a danger to our satellites. Knowing how many of these particles there are, as well as how fast they're moving, is very important to us, so that we can make sure our satellites keep working.

“Activity within the radiation belts can sometimes cause the orbits of these particles to change. If these changes bring the particles low enough to reach the Earth's upper atmosphere, they can hit the dense air, lose all of their energy and fall out of orbit.

“EMIC waves are known to be able to cause these changes and drive the loss of particles from the radiation belts. As well as causing the beautiful light displays we call aurora, this rain of particles can also cause complex chemical changes to the upper atmosphere, which can in turn cause small, but important, changes to the amount of ozone present in the atmosphere.

“Although these changes are small, understanding them is very important to properly understanding how the chemistry of the atmosphere works, how it is changing over time, and the impact it is having on the climate,” Dr Hendry says.

For their latest study, the researchers used data from GPS satellites to look at how many electrons EMIC waves can knock into the Earth's atmosphere.

A general rule in the radiation belts is that at slower speeds, you have many more electrons. So, if the minimum speed of the EMIC wave interaction is lowered, there are a lot more electrons around to interact with waves.
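A rough back-of-the-envelope sketch makes the point. Radiation-belt electron spectra fall steeply with energy, and a power law is a common idealization; the spectral index and the energy bounds below are illustrative assumptions, not values from the paper.

```python
# Why lowering the minimum interaction energy matters: with a steeply
# falling power-law spectrum N(E) ~ E^-gamma, most electrons sit at
# low energies. gamma = 3 and the bounds are assumptions for illustration.
gamma = 3.0

def electrons_above(e_min, e_max=10.0):
    # integral of E^-gamma from e_min to e_max (arbitrary units, MeV)
    return (e_min**(1 - gamma) - e_max**(1 - gamma)) / (gamma - 1)

n_mev = electrons_above(1.0)   # electrons above the "accepted" ~1 MeV floor
n_sub = electrons_above(0.5)   # include sub-MeV electrons down to 0.5 MeV
print(f"relative population gain: {n_sub / n_mev:.1f}x")
```

Halving the minimum energy in this toy spectrum roughly quadruples the electron population available to interact with the waves, which is why sub-MeV dropouts imply a much larger effect than previously modelled.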

By looking at data from satellites that monitor how many electrons there are in the radiation belts and how fast they're going, the researchers have been able to show that you can see the number of electrons in the radiation belts go down significantly when EMIC waves are around.

“Excitingly, we have also seen changes in the number of electrons at speeds significantly lower than the current 'accepted' minimum speed. This means that EMIC waves can affect much larger numbers of electrons than we previously thought possible. Clearly, we need to rethink how we’re modelling this interaction, and the impact it has on the radiation belts. There are a lot of electrons in the radiation belts, so being able to knock enough of them into the atmosphere to make a noticeable change is quite remarkable.

“This has shown that we need to take these EMIC waves into account when we're thinking about how the radiation belts change over time, and how these changes in the radiation belt affect the climate on Earth.”

Dr Hendry says the impact of EMIC-driven electrons on atmospheric chemistry is not currently being included by major climate models, which try to predict how the Earth's climate will change over time, so making sure this process is understood and included in these models is very important.

“The changes are very small compared to things like the human impact on climate, but we need to understand the whole picture in order to properly understand how everything fits together.”


Contacts and sources:
Ellie Rowley
University of Otago



Publication: Evidence of Sub‐MeV EMIC‐Driven Trapped Electron Flux Dropouts From GPS Observations.
A. T. Hendry, C. J. Rodger, M. A. Clilverd, S. K. Morley. Geophysical Research Letters, 2021; 48 (9) DOI: 10.1029/2021GL092664




Are You Ready for Benevolent Artificial Intelligence?


Picture yourself driving on a narrow road in the near future when suddenly another car emerges from a bend ahead. It is a self-driving car with no passengers inside. Will you push forth and assert your right of way, or give way to let it pass? At present, most of us behave kindly in such situations involving other humans. Will we show that same kindness towards autonomous vehicles?

Using methods from behavioural game theory, an international team of researchers at LMU Munich and the University of London have conducted large-scale online studies to see whether people would behave as cooperatively with artificial intelligence (AI) systems as they do with fellow humans.

Autonomous bus in Monheim am Rhein

Credit: © IMAGO / Jochen Tack


Cooperation holds a society together. It often requires us to compromise with others and to accept the risk that they let us down. Traffic is a good example. We lose a bit of time when we let other people pass in front of us and are outraged when others fail to reciprocate our kindness. Will we do the same with machines?

The study, published in the journal iScience, found that, upon first encounter, people have the same level of trust toward AI as toward a fellow human: most expect to meet someone who is ready to cooperate. The difference comes afterwards. People are much less ready to reciprocate with AI, and instead exploit its benevolence to their own benefit. Going back to the traffic example, a human driver would give way to another human but not to a self-driving car. The study identifies this unwillingness to compromise with machines as a new challenge to the future of human-AI interactions.


Credit: Pixabay


“We put people in the shoes of someone who interacts with an artificial agent for the first time, as it could happen on the road,” explains Jurgis Karpus, Ph.D., a behavioural game theorist and a philosopher at LMU Munich and the first author of the study. “We modelled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans.”

With perspectives from game theory, cognitive science, and philosophy, the researchers found that ‘algorithm exploitation’ is a robust phenomenon. They replicated their findings across nine experiments with nearly 2,000 human participants. Each experiment examined a different kind of social interaction and allowed the human to decide whether to compromise and cooperate or act selfishly. The participants' expectations of the other players were also measured. In a well-known game, the Prisoner’s Dilemma, people must trust that the other player will not let them down. They embraced risk with humans and AI alike, but betrayed the trust of the AI much more often, to gain more money.
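For readers unfamiliar with the game, the sketch below shows the bare payoff structure of a one-shot Prisoner's Dilemma. The payoffs follow the textbook ordering (T > R > P > S); they are not the monetary stakes used in the experiments, which the article does not specify.

```python
# A minimal one-shot Prisoner's Dilemma, the kind of social encounter
# the study modelled. "C" = cooperate, "D" = defect.
PAYOFFS = {                      # (my move, partner's move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # I trust, partner defects (S): my trust is exploited
    ("D", "C"): 5,  # I defect on a cooperative partner (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def play(me, partner):
    """Return (my payoff, partner's payoff) for one round."""
    return PAYOFFS[(me, partner)], PAYOFFS[(partner, me)]

# The pattern reported in the paper: people expect both humans and AI
# to cooperate, but defect more often when the partner is an AI.
print(play("C", "C"))  # mutual cooperation: (3, 3)
print(play("D", "C"))  # exploiting a benevolent partner: (5, 0)
```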

“Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!” notes Dr. Bahador Bahrami, a social neuroscientist at the LMU, and one of the senior researchers in the study. “They are fine with letting the machine down, though, and that is the big difference. People do not even report much guilt when they do,” he adds.

Biased and unethical AI has made many headlines — from the 2020 exams fiasco in the United Kingdom to justice systems — but this new research raises a novel caution. Industry and legislators strive to ensure that artificial intelligence is benevolent. But benevolence may backfire. If people think that AI is programmed to be benevolent towards them, they will be less tempted to cooperate. Some of the accidents involving self-driving cars may already show real-life examples: drivers recognize an autonomous vehicle on the road and expect it to give way, while the self-driving vehicle expects the normal compromises between drivers to hold.

Algorithm exploitation has further consequences down the line. "If humans are reluctant to let a polite self-driving car join from a side road, should the self-driving car be less polite and more aggressive in order to be useful?” asks Jurgis Karpus.







“Benevolent and trustworthy AI is a buzzword that everyone is excited about. But fixing the AI is not the whole story. If we realize that the robot in front of us will be cooperative no matter what, we will use it to our selfish interest,” says Professor Ophelia Deroy, a philosopher and senior author on the study, who also works with Norway’s Peace Research Institute Oslo on the ethical implications of integrating autonomous robot soldiers along with human soldiers.

“Compromises are the oil that make society work. For each of us, it looks only like a small act of self-interest. For society as a whole, it could have much bigger repercussions. If no one lets autonomous cars join the traffic, they will create their own traffic jams on the side, and not make transport easier”.


Contacts and sources:
Ludwig-Maximilians-Universität München


Publication: Algorithm exploitation: humans are keen to exploit benevolent AI.
Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami, Ophelia Deroy. iScience, 2021; 102679 DOI: 10.1016/j.isci.2021.102679






New Way to 3D-Print Custom Medical Devices Boosts Performance and Bacterial Resistance

Using a new 3D printing process, University of Nottingham researchers have discovered how to tailor-make artificial body parts and other medical devices with built-in functionality that offers better shape and durability, while cutting the risk of bacterial infection at the same time.



Credit: University of Nottingham 

“Most mass-produced medical devices fail to completely meet the unique and complex needs of their users. Similarly, single-material 3D printing methods have design limitations that cannot produce a bespoke device with multiple biological or mechanical functions. But for the first time, using a computer-aided, multi-material 3D-print technique, we demonstrate it is possible to combine complex functions within one customised healthcare device to enhance patient wellbeing,” said study lead Dr Yinfeng He, from the Centre for Additive Manufacturing.

The hope is that the innovative design process can be applied to 3D-print any medical device that needs customisable shapes and functions. For example, the method could be adapted to create a highly bespoke one-piece prosthetic limb or joint to replace a lost finger or leg, fitting the patient perfectly to improve comfort and the prosthetic’s durability; or to print customised pills containing multiple drugs - known as polypills - optimised to release into the body in a pre-designed therapeutic sequence.

Meanwhile, the world's population is aging, which will lead to higher demand for medical devices in the future. Using this technique could improve the health and wellbeing of older people and ease the financial burden on governments.

How it works

For this study, the researchers applied a computer algorithm to design and manufacture - pixel by pixel - 3D-printed objects made up of two polymer materials of differing stiffness that also prevent the build-up of bacterial biofilm. By optimising the stiffness in this way, they successfully achieved custom-shaped and -sized parts that offer the required flexibility and strength.
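The team's actual generative-design algorithm is not reproduced here, but the sketch below illustrates the general idea of pixel-by-pixel two-material assignment: given a target stiffness map, each voxel is assigned the stiffer or the softer polymer so that local averages track the target. The moduli, grid size and dithering rule are assumptions for illustration only.

```python
import numpy as np

# Illustrative pixel-by-pixel two-material assignment. The moduli and
# the probabilistic dithering rule are hypothetical; the study's
# generative-design algorithm is more sophisticated.
E_A, E_B = 3.0, 0.5   # hypothetical moduli of the two polymers (GPa)

rng = np.random.default_rng(0)
y = np.linspace(0, 1, 64)
target = E_B + (E_A - E_B) * y[:, None] * np.ones((64, 64))  # stiff at top

# The chance of placing the stiff material A rises with the target
# stiffness, so local averages approximate the target map.
p_a = (target - E_B) / (E_A - E_B)
layout = rng.random(target.shape) < p_a          # True -> material A

effective = np.where(layout, E_A, E_B)
print(f"target mean {target.mean():.2f} GPa, "
      f"printed mean {effective.mean():.2f} GPa")
```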

Current artificial finger joint replacements, for example, use both silicone and metal parts that offer the wearer a standardised level of dexterity, while still being rigid enough to implant into bone. However, as a demonstrator for the study, the team were able to 3D-print a finger joint offering these dual requirements in one device, while also being able to customise its size and strength to meet individual patient requirements.

Excitingly, with an added level of design control, the team were able to perform their new style of 3D-printing with multi-materials that are intrinsically bacteria-resistant and bio-functional, allowing them to be implanted and combat infection (which can occur during and after surgery) without the use of added antibiotic drugs.

A bacteria-repelling artificial finger joint with customised strength distribution made with the multi-material 3D print process

Credit: University of Nottingham 

The team also used a new high-resolution characterisation technique (3D orbitSIMS) to 3D-map the chemistry of the print structures and to test the bonding between them throughout the part. This identified that, at very small scales, the two materials were intermingling at their interfaces; a sign of good bonding, which means the device is less likely to break.

The study was carried out by the Centre for Additive Manufacturing (CfAM) and funded by the Engineering and Physical Sciences Research Council. The complete findings are published in Advanced Science, in a paper entitled: ‘Exploiting generative design for 3D printing of bacterial biofilm resistant composite devices’.

Prior to commercializing the technique, the researchers plan to broaden its potential uses by testing it on more advanced materials with extra functionalities such as controlling immune responses and promoting stem cell attachment.

  

Contacts and sources:
Emma Lowry
University of Nottingham



Publication: Exploiting Generative Design for 3D Printing of Bacterial Biofilm Resistant Composite Devices.
Yinfeng He, Meisam Abdi, Gustavo F. Trindade, Belén Begines, Jean‐Frédéric Dubern, Elisabetta Prina, Andrew L. Hook, Gabriel Y. H. Choong, Javier Ledesma, Christopher J. Tuck, Felicity R. A. J. Rose, Richard J. M. Hague, Clive J. Roberts, Davide S. A. De Focatiis, Ian A. Ashcroft, Paul Williams, Derek J. Irvine, Morgan R. Alexander, Ricky D. Wildman. Advanced Science, 2021; 2100249 DOI: 10.1002/advs.202100249




Wednesday, June 9, 2021

A Temperate, Neptune-Sized Planet Discovered

An international group of collaborators, including scientists from NASA's Jet Propulsion Laboratory and The University of New Mexico, has discovered a new, temperate sub-Neptune-sized exoplanet with a 24-day orbital period around a nearby M dwarf star. The discovery offers exciting research opportunities thanks to the planet's substantial atmosphere, its small host star, and the speed at which the system is moving away from Earth.

An artist's impression shows an exoplanet orbiting a Sun-like star.


 Credit: ESO/M. Kornmesser

The research, titled TOI-1231 b: A Temperate, Neptune-Sized Planet Transiting the Nearby M3 Dwarf NLTT 24399, will be published in a future issue of The Astronomical Journal. The exoplanet, TOI-1231 b, was detected using photometric data from the Transiting Exoplanet Survey Satellite (TESS) and followed up with observations using the Planet Finder Spectrograph (PFS) on the Magellan Clay telescope at Las Campanas Observatory in Chile. The PFS is a sophisticated instrument that detects exoplanets through their gravitational influence on their host stars. As the planets orbit their hosts, the measured stellar velocities vary periodically, revealing the planetary presence and information about their mass and orbit.

The observing strategy adopted by NASA's TESS, which divides each hemisphere into 13 sectors that are surveyed for roughly 28 days, is producing the most comprehensive all-sky search for transiting planets. This approach has already proven its capability to detect both large and small planets around stars ranging from sun-like down to low-mass M dwarf stars. M dwarf stars, also known as red dwarfs, are the most common type of star in the Milky Way, making up some 70 percent of all stars in the galaxy.

M dwarfs are smaller than the sun, possess a fraction of its mass and have low luminosity. Because an M dwarf is smaller, when a planet of a given size transits the star, the amount of light blocked by the planet is larger, making the transit more easily detectable. Imagine an Earth-like planet passing in front of a star the size of the sun: it will block out only a tiny bit of light. But if it passes in front of a star that is a lot smaller, the proportion of light blocked will be larger. In a sense, this creates a larger shadow on the surface of the star, making planets around M dwarfs more easily detectable and easier to study.
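The arithmetic behind this "larger shadow" is simple: the transit depth is the squared ratio of planetary to stellar radius. In the sketch below, the M dwarf radius is an assumed, typical M3 value, not the paper's fitted parameter.

```python
# Transit depth = fraction of starlight blocked = (R_planet / R_star)^2.
R_SUN = 696_000.0          # km
R_NEPTUNE = 24_622.0       # km
R_M3_DWARF = 0.4 * R_SUN   # assumption: typical radius for an M3 dwarf

depth_sun = (R_NEPTUNE / R_SUN) ** 2
depth_m3 = (R_NEPTUNE / R_M3_DWARF) ** 2
print(f"around a Sun-like star: {depth_sun * 100:.3f}% dip")
print(f"around an M3 dwarf:     {depth_m3 * 100:.3f}% dip")
# The same planet blocks (1 / 0.4)^2 = 6.25x more light around the smaller star.
```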

Although it enables the detection of exoplanets across the sky, TESS's survey strategy also produces significant observational biases based on orbital period. Exoplanets must transit their host stars at least twice within TESS's observing span to be detected with the correct period by the Science Processing Operations Center (SPOC) pipeline and the Quick Look Pipeline (QLP), which search the 2-minute and 30-minute cadence TESS data, respectively. Because 74 percent of TESS's total sky coverage is only observed for 28 days, the majority of TESS exoplanets detected have periods of less than 14 days. TOI-1231 b's 24-day period therefore makes its discovery even more valuable.
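A quick sketch of that geometry (ignoring data gaps and sector overlap) shows why 24 days is an awkward period for a single roughly 28-day sector.

```python
# A planet must transit at least twice within one observing window to
# be detected with the correct period. With a ~28-day sector, only
# periods up to half the window get two transits regardless of phase.
SECTOR_DAYS = 28.0

max_guaranteed_period = SECTOR_DAYS / 2          # ~14 days
print(f"guaranteed two transits: P <= {max_guaranteed_period:.0f} days")

# A 24-day planet like TOI-1231 b shows two transits in one sector
# only if its first transit lands within the first 4 days:
period = 24.0
lucky_window = SECTOR_DAYS - period
print(f"24-day planet: first transit must fall in the first "
      f"{lucky_window:.0f} days ({lucky_window / period:.0%} of phases)")
```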

NASA JPL scientist Jennifer Burt, the lead author of the paper, along with her collaborators including Diana Dragomir, an assistant professor in UNM's Department of Physics and Astronomy, measured both the radius and mass of the planet.

"Working with a group of excellent astronomers spread across the globe, we were able to assemble the data necessary to characterize the host star and measure both the radius and mass of the planet," said Burt. "Those values in turn allowed us to calculate the planet's bulk density and hypothesize about what the planet is made out of. TOI-1231 b is pretty similar in size and density to Neptune, so we think it has a similarly large, gaseous atmosphere."

"Another advantage of exoplanets orbiting M dwarf hosts is that we can measure their masses easier because the ratio of the planet mass to the stellar mass is also larger. When the star is smaller and less massive, it makes detection methods work better because the planet suddenly plays a bigger role as it stands out more easily in relation to the star," explained Dragomir. "Like the shadow cast on the star. The smaller the star, the less massive the star, the more the effect of the planet can be detected.

"Even though TOI 1231b is eight times closer to its star than the Earth is to the Sun, its temperature is similar to that of Earth, thanks to its cooler and less bright host star," says Dragomir. "However, the planet itself is actually larger than earth and a little bit smaller than Neptune - we could call it a sub-Neptune."

Burt and Dragomir, who actually initiated this research while they were Fellows at MIT's Kavli Institute, worked with scientists specializing in observing and characterizing the atmospheres of small planets to figure out which current and future space-based missions might be able to peer into TOI-1231 b's outer layers to inform researchers exactly what kinds of gases are swirling around the planet. With a temperature around 330 Kelvin or 140 degrees Fahrenheit, TOI-1231b is one of the coolest, small exoplanets accessible for atmospheric studies discovered thus far.

Past research suggests planets this cool may have clouds high in their atmospheres, which makes it hard to determine what types of gases surround them. But new observations of another small, cool planet called K2-18 b broke this trend and showed evidence of water in its atmosphere, surprising many astronomers.

"TOI-1231 b is one of the only other planets we know of in a similar size and temperature range, so future observations of this new planet will let us determine just how common (or rare) it is for water clouds to form around these temperate worlds," said Burt.

Additionally, the host star's high near-infrared (NIR) brightness makes the planet an exciting target for future missions with the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST). The first set of these observations, led by one of the paper's co-authors, should take place later this month using the Hubble Space Telescope.

"The low density of TOI 1231b indicates that it is surrounded by a substantial atmosphere rather than being a rocky planet. But the composition and extent of this atmosphere are unknown!" said Dragomir. "TOI1231b could have a large hydrogen or hydrogen-helium atmosphere, or a denser water vapor atmosphere. Each of these would point to a different origin, allowing astronomers to understand whether and how planets form differently around M dwarfs when compared to the planets around our Sun, for example. Our upcoming HST observations will begin to answer these questions, and JWST promises an even more thorough look into the planet's atmosphere."

Another way to study the planet's atmosphere is to investigate whether gas is being blown away, by looking for evidence of atoms like hydrogen and helium surrounding the planet as it transits across the face of its host star. Generally, hydrogen atoms are almost impossible to detect because their presence is masked by interstellar gas. But this planet-star system offers a unique opportunity to apply this method because of how fast it's moving away from the Earth.

"One of the most intriguing results of the last two decades of exoplanet science is that, thus far, none of the new planetary systems we've discovered look anything like our own solar system," said Burt. "They're full of planets between the size of Earth and Neptune on orbits much shorter than Mercury's, so we don't have any local examples to compare them to. This new planet we've discovered is still weird - but it's one step closer to being somewhat like our neighborhood planets. Compared to most transiting planets detected thus far, which often have scorching temperatures in the many hundreds or thousands of degrees, TOI-1231 b is positively frigid."

In closing, Dragomir reflects that "this planet joins the ranks of just two or three other nearby small exoplanets that will be scrutinized with every chance we get and using a wide range of telescopes, for years to come, so keep an eye out for new TOI-1231 b developments!"
 

** This article is in press at The Astronomical Journal. A pre-print version can be found here: https://arxiv.org/abs/2105.08077.


Contacts and sources:
Steve Carr
University of New Mexico

 




New Size Estimate: Megalodons More Mega Than We Knew

A more reliable way of estimating the size of megalodon shows the extinct shark may have been bigger than previously thought, measuring up to 65 feet, nearly the length of two school buses. Earlier studies had ball-parked the massive predator at about 50 to 60 feet long.

The revised estimate is the result of new equations based on the width of megalodon’s teeth – and began with a high school lesson that went awry.

Victor Perez, then a doctoral student at the Florida Museum of Natural History, was guiding students through a math exercise that used 3D-printed replicas of fossil teeth from a real megalodon and a set of commonly used equations based on tooth height to estimate the shark’s size. But something was off: Students’ calculations ranged from about 40 to 148 feet for the same shark. Perez snapped into trouble-shooting mode.

“I was going around, checking, like, did you use the wrong equation? Did you forget to convert your units?” said Perez, the study’s lead author and now the assistant curator of paleontology at the Calvert Marine Museum in Maryland. “But it very quickly became clear that it was not the students that had made the error. It was simply that the equations were not as accurate as we had predicted.”

Although the equations have been widely used by scientists since their publication in 2002, the classroom exercise revealed they generate varying size estimates for a single shark, depending on which tooth is measured.

“I was really surprised,” Perez said. “I think a lot of people had seen that study and blindly accepted the equations.”

Sharks' jaws are made of cartilage, the same flexible tissue found in the noses and ears of humans. Cartilage breaks down quickly after death, but tooth enamel is extremely durable and preserves well, Perez said.

FLORIDA MUSEUM PHOTO BY KRISTEN GRACE


The most accepted methods for estimating the length of megalodon have used great white sharks as a modern proxy, relying on the relationship between tooth size and total body length. While great white sharks and megalodon belong to different families, they share similar predatory lifestyles and broad, triangular teeth serrated like steak knives – ideal adaptations for hunting large, fleshy marine mammals such as whales and dolphins, Perez said.

But these methods also present a challenge: To generate body length estimates, they require the researcher to correctly identify a fossil tooth’s former position in a megalodon jaw. As in humans, the size and shape of shark teeth vary depending on where they’re located in the mouth, and megalodon teeth are most often found as standalone fossils.

Megalodon teeth can be up to 7 inches long and were specialized for feeding on large, fleshy prey, such as whales and dolphins.


FLORIDA MUSEUM PHOTO BY KRISTEN GRACE

So, Perez was ecstatic when fossil collector Gordon Hubbell donated a nearly complete set of teeth from the same megalodon shark to the Florida Museum in 2015, reducing the guesswork. After museum researchers CT scanned the teeth and made them available online, Perez collaborated with teacher Megan Higbee Hendrickson on a plan to incorporate them into her middle school curriculum at the Academy of the Holy Names school in Tampa.

Vertebrate paleontologist Victor Perez began collecting fossils when he was 6 years old. After completing his Ph.D. at the Florida Museum of Natural History, he became assistant curator at the Calvert Marine Museum, where he had first become fascinated with megalodon as a child. "It definitely feels a little surreal," he said.

FLORIDA MUSEUM PHOTO BY KRISTEN GRACE

“We decided to have the kids 3D-print the teeth, determine the size of the shark and build a replica of its jaw for our art show,” Hendrickson said.

Perez and Hendrickson co-designed a lesson for students based on the then-most popular method for estimating shark size: Match the tooth to its position in the shark jaw, look up the corresponding equation, measure the tooth from the tip of the crown to the line where root and crown meet and plug the number into the equation.

After a successful pilot test of a few teeth with Hendrickson’s students, he expanded the lesson plan to include the whole set of megalodon teeth for high school students at Delta Charter High School in Aptos, California. Perez expected a slight variability of a couple millimeters in their results, but this time, variations in students’ estimates shot to more than 100 feet. The farther a tooth position was from the front of the jaw, the larger the size estimate.

After Perez detailed the lesson’s results in a fossil community newsletter, he received an email from Teddy Badaut, an avocational paleontologist in France. Badaut suggested a different approach. Why not measure tooth width instead of height? Previous research had suggested tooth width was limited by the size of a shark’s jaw, which would be proportional to its body length.

Ronny Maik Leder, then a postdoctoral researcher at the Florida Museum, worked with Perez to develop a new set of equations based on tooth width.

By measuring the set of teeth from Hubbell, “we could actually sum up the width of the teeth and get an even better approximation of the jaw width,” Perez said.

IMAGE COURTESY OF TIM SCHEIRER/CALVERT MARINE MUSEUM

When megalodon was first described as a species, scientists thought it was the direct ancestor of the great white shark. Although the two species have similar teeth and feeding habits, they last shared a common ancestor about 60 million years ago, Perez said.

The researchers analyzed sets of fossil teeth from 11 individual sharks, representing five species, including megalodon, its close relatives and modern great white sharks.

By measuring the combined width of each tooth in a row, they developed a model for how wide an individual tooth was in relation to the jaw for a given species. Now when a paleontologist unearths a lone megalodon tooth the size of their hand, they can compare its width to the average obtained in the study and get an accurate estimate of how big the shark was.
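In code terms, the width-based method boils down to a sum and a linear scaling, as in the sketch below. All numbers here are hypothetical stand-ins for illustration; the paper's regressions are species-specific and fitted to the 11 real tooth sets.

```python
import numpy as np

# Hypothetical crown widths (cm) for one side of an associated tooth
# set, front of the jaw to the back. These values are made up.
tooth_widths_cm = np.array([12.0, 11.2, 10.5, 9.1, 7.8, 6.0, 4.4])

summed_width_cm = tooth_widths_cm.sum()   # proxy for jaw width

# Hypothetical linear scaling from summed tooth width to body length;
# K is an illustrative coefficient, not the paper's fitted value.
K = 0.27  # metres of body length per cm of summed tooth width
body_length_m = K * summed_width_cm
print(f"summed tooth width: {summed_width_cm:.1f} cm "
      f"-> estimated length: {body_length_m:.1f} m")
```

The practical payoff of this approach is that a lone tooth's width can be compared against the species average, rather than requiring the researcher to guess the tooth's exact jaw position first.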

“I was quite surprised that indeed no one had thought of this before,” said Leder, now director of the Natural History Museum in Leipzig, Germany. “The simple beauty of this method must have been too obvious to be seen. Our model was much more stable than previous approaches. This collaboration was a wonderful example of why working with amateur and hobby paleontologists is so important.”

Perez cautioned that because individual sharks vary in size, the team’s methods still have a range of error of about 10 feet when applied to the largest individuals. It’s also unclear exactly how wide megalodon’s jaw was and difficult to guess based on teeth alone – some shark species have gaps between each tooth while the teeth in other species overlap.

“Even though this potentially advances our understanding, we haven’t really settled the question of how big megalodon was. There’s still more that could be done, but that would probably require finding a complete skeleton at this point,” he said.


Excavated from North Carolina, these 46 fossils comprise the most complete set of megalodon teeth found. Students printed 3D replicas of the teeth and used them to estimate megalodon’s length.
FLORIDA MUSEUM PHOTO BY JEFF GAGE

Perez continues to teach the megalodon tooth lesson, but its focus has changed.

“Since then, we’ve used the lesson to talk about the nature of science – the fact that we don’t know everything. There are still unanswered questions,” he said.

For Hendrickson, the lesson sparked her students’ enthusiasm for science in ways that textbooks could not.

“Victor was an amazing role model for the kids. He is the personification of a young scientist that followed his childhood interest and made a career out of it. So many of these kids had never worked with or spoken to a scientist who respected their point of view and was willing to answer their questions.”

The research was published in the open-access journal Palaeontologia Electronica.

Leder and Badaut co-authored the study.

The research was based on work supported by the Florida Education Fund McKnight Doctoral Fellowship, the National Science Foundation Graduate Research Fellowship program and the NSF Advancing Informal STEM Learning program.




Contacts and sources:
Natalie van Hoose
Florida Museum of Natural History


 




Women's Mental Health Has Higher Association with Dietary Factors



Women's mental health likely has a higher association with dietary factors than men's, according to new research from Binghamton University, State University of New York.

Lina Begdache, assistant professor of health and wellness studies at Binghamton University, had previously published research on diet and mood that suggests that a high-quality diet improves mental health. She wanted to test whether customization of diet improves mood among men and women ages 30 or older.
 
Exercise could reduce the negative association of certain foods with mental distress in mature women.
Credit: Binghamton University

Along with research assistant Cara M. Patrissy, Begdache dissected the different food groups that are associated with mental distress in men and women ages 30 years and older, as well as studied the different dietary patterns in relation to exercise frequency and mental distress. The results suggest that women's mental health has a higher association with dietary factors than that of men. Mental distress and exercise frequency were associated with different dietary and lifestyle patterns, which support the concept of customizing diet and lifestyle factors to improve mental wellbeing.

"We found a general relationship between eating healthy, following healthy dietary practices, exercise and mental well-being," said Begdache. "Interestingly, we found that for unhealthy dietary patterns, the level of mental distress was higher in women than in men, which confirmed that women are more susceptible to unhealthy eating than men."

Based on this study and others, diet and exercise may be the first line of defense against mental distress in mature women, said Begdache.

"Fast food, skipping breakfast, caffeine and high-glycemic (HG) food are all associated with mental distress in mature women," said Begdache. "Fruits and dark green leafy vegetables (DGLV) are associated with mental well-being. The extra information we learned from this study is that exercise significantly reduced the negative association of HG food and fast food with mental distress," said Begadache.

This research provides the framework healthcare professionals need to customize dietary plans that promote exercise and improve mental well-being in mature adults, said Begdache. It could also provide a new perspective for the research community when assessing the role of diet in mental distress.

The researchers are conducting a parallel study with young men and women, looking at diet quality in addition to sleep and seasonal change variables from a longitudinal perspective.

The paper, "Customization of Diet May Promote Exercise and Improve Mental Wellbeing in Mature Adults: The Role of Exercise as a Mediator," was published in the Journal of Personalized Medicine.
http://dx.doi.org/10.3390/jpm11050435


Contacts and sources:
John Brhel
Binghamton University, State University of New York 



Radicalized and Believing in Conspiracies: Can the Cycle Be Broken?

If your idea of conspiracy theories entails aliens, UFOs, governmental cover-ups at Roswell Air Force base, and the melody of The X-Files--you're not alone. That was, indeed, the classic notion, says Scott Tyson, an assistant professor of political science at the University of Rochester.

But over the course of the last five years, he noticed a watershed. For starters, the term "theory" no longer applied to the convoluted ideas spouted by today's conspiracist groups such as QAnon, the Proud Boys, and the Oath Keepers, all of whom Tyson calls largely "theoryless."

While radical assertions of a “deep state” and “stolen elections” have long bubbled quietly underneath public discourse, Rochester political scientist Scott Tyson says during the last five years, the ideas have moved into the mainstream discourse. 


Credit: Getty Images photo

For example, Tyson, a game theorist whose research focuses on authoritarian politics, conspiracies, and radicalization, points out that those who erroneously believe that former President Donald Trump's "victory was stolen" usually do not believe that votes cast on that same ballot for successful Republican congressional candidates were tampered with.

Yet, these conspiracies have entered the mainstream discourse and are driving the growing radicalization of average Americans that manifested itself most visibly in the storming of the US Capitol on January 6, he says.

In a recent study, "Sowing the Seeds: Radicalization as a Political Tool" published in the American Journal of Political Science, Tyson--together with University of Michigan coauthor Todd Lehmann--looks at two common policy interventions--economic and psychological--designed to counter the growing radicalization among the US population.

The duo finds that improving economic conditions reduces both radicalization efforts and dissent. However, trying to render people psychologically less susceptible to radicalization attempts can backfire and instead increase the efforts by radical leaders to influence and radicalize more followers.

While radical assertions of a "deep state" and "stolen elections" have long bubbled quietly beneath the public discourse, Tyson says those ideas have now moved into the mainstream. That shift--from fringe to center stage--Tyson argues, happened during the Trump presidency.

Q&A

What's the nutshell definition of "radicalization"?

"Radicalization" is used interchangeably with "indoctrination." Essentially, it's creating self-motivation among people to do certain things. You would call someone radicalized when those things that you would normally have to motivate someone to do--you don't have to do anymore because they've become self-motivated. That's where conspiracism comes in--it restructures the way that people perceive the social world around them. Radicalization involves an element of extremism and is fundamentally a political thought with an ecosystem to it: there needs to be a political group, or a set of political leaders who are trying to restructure people's beliefs or their values in such a way that it helps their own political goals or causes.

How can radicalization be countered?

The way to combat it is not to hope for the easy solution. It's a false idea that we can just take out the leaders and it'll all go away, akin to simply cutting the head off the snake. That doesn't actually work. You have to go from the bottom up to start trying to siphon off radicalized people, and treat the organization more as a terrorist group, in terms of any hearts and minds policies.

Does leadership "decapitation" work against a radical group such as QAnon?

We looked in our research at what happens when you threaten leadership decapitation and found that you actually provide an incentive for leaders to increase their efforts to radicalize others. The reason is very simple: if we think of radicalized people as having the self-motivation to do things against the government--whether it's protests, attacks, or bombings--then as more people become radicalized, the actual leaders become less important in these kinds of antigovernment actions. Our theory suggests that leaders are less important in the actual production of antigovernment actions, so the government is essentially forced to divert attention from the leaders and toward these other threats. The leaders intentionally trade away some of their own control.

Why were conspiracies able to enter the American mainstream so pervasively?

Trump was incredibly important in giving a megaphone to conspiracists who had been on the fringe beforehand until he became a political force and essentially weaponized a lot of those ideas. When Trump unleashed all these conspiracies on the public--many people didn't know that they were really fringe ideas. One other reason they were able to spread so quickly is our so-called "media ecosystem." We have media outlets like Fox News, OAN, and Newsmax who are perfectly willing to spout conspiracies. When it all started back in 2015, the mainstream media wasn't ready to deal with this kind of weaponization. That's why conspiracists were able to misuse the mainstream media to essentially launder their claims: the conspiracists would make a bunch of unfounded assertions and accusations, which the mainstream media would pick up in turn to report on. Part of the debunking, however, was retelling the untrue story. That way a lot of these conspiracy narratives ended up reaching a much larger audience.

What role did the pandemic play in the spread of conspiracies and the radicalization of US citizens?

QAnon was around before the pandemic, and the radicalization campaigns of far-right groups were already under way beforehand. But it certainly accelerated these efforts and made them more effective. Because of the pandemic people were more isolated, which means they were talking to fewer people, and the echo chamber became narrower. That in turn, made people more susceptible to becoming radicalized. It's very similar to how cults recruit people: they isolate them from their family and friends who are not involved in the cult. They keep new recruits in that echo chamber long enough until they've been able to radicalize them. The number of QAnon members and radicalized people through other far-right groups today would be much, much lower if the pandemic hadn't forced us all to isolate in the way that it did.


Contacts and sources:
Sandra Knispel
University of Rochester



 

In Warfare and the Wild, "Camouflage Breakers" Can Find a Sniper or Beast in Less Than a Second

This technology puts some truth into the old adage, "you can run but you can't hide." 

After looking for just one-twentieth of a second, experts in camouflage breaking can accurately detect not only that something is hidden in a scene, but precisely identify the camouflaged target, a skill set that can mean the difference between life and death in warfare and the wild, investigators report.

They can actually identify a camouflaged target as fast and as well as individuals identifying far more obvious "pop-out" targets, similar to the concept used at a shooting range, but in this case using easy-to-spot scenarios like a black O-shaped target among a crowd of black C shapes.

Dr. Jay Hegdé and first author Fallon Branch. 
Credit:  Michael Holahan, Augusta University

In fact, the relatively rapid method developed by Medical College of Georgia neuroscientist Dr. Jay Hegdé and his colleagues for training civilian novices to become expert camouflage breakers also enabled the trainees to sense that something was amiss even when there was no specific target to identify.

This intuitive sense that something is not quite right has also been found in experienced radiologists finding subtle changes in mammograms, sometimes years before there is a detectable lesion.

The MCG investigators who developed the camouflage breaking technique wanted to know if trainees could detect the actual camouflaged target or just sense that something is different, an issue that is highly significant in real world circumstances, where a sniper might be hiding in the desert sand or a dense forest landscape.

"Merely being able to judge, no matter how accurately, that the given combat scene contains a target is not very useful to a sniper under real-world combat conditions if he/she is unable to tell where the target is," Hegdé and his colleagues write in the journal Cognitive Research: Principles and Implications.

They already knew that they could train most nonmilitary individuals off the street to break camouflage in as little as an hour daily for two weeks as long as their vision is good, a finding they want to benefit military personnel.

"We want to hide our own personnel and military material from the enemy and we want to break the enemy's camouflage," says Hegdé, goals that summarize his research, which has been funded by the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command Army Research Laboratory, for nearly a decade. "What are the things we can tweak? What are the things we can do to make our snipers better at recognizing camouflage?"

A missed shot, after all, also tells the enemy the sniper's location. "You can't take shots at things that are not the target," Hegdé says.

"The potential for rapid training of novices in the camouflage-breaking paradigm is very promising as it highlights the potential for application to a wide variety of detection and localization tasks," says Dr. Frederick Gregory, program manager, U.S. Army Combat Capabilities Development Command Army Research Laboratory. "Results in experts highlight an opportunity to extend the training to real world visual search and visualization problems that would be of prime importance for the Army to solve."

For this newly published work, six adult volunteers with normal or corrected-to-normal vision were trained to break camouflage using Hegdé's deep-learning method, but received no specific training in how to pinpoint the target. Participants looked at digitally synthesized camouflage scenes, such as foliage or fruit, and each scene had a 50-50 chance of containing either no target or a camouflaged target such as a human head or a novel 3D digital image. Similar to computer scientists training self-driving cars, the idea was, and is, for viewers to come to know the lay of the land that is their focus. "If it turns out there is something that doesn't belong there, you can tell," he says.

Trainees could either look at the image for 50 milliseconds (0.05 seconds) or for as long as they wanted, then proceed to the next step, where they briefly viewed a random field of pixels, which works like a visual palate cleanser, before indicating whether the camouflage image contained a target and then using a mouse to show where the target was. "You have to work from memory to say where it was," he notes.

When the participants could look at the image for as long as they wanted, the reported location was essentially indistinguishable from the actual target location, and accuracy did not drop much when the viewing time was just 50 milliseconds, which allows little time even for moving the eyes around, Hegdé says.

The subjects again had no subsequent training on identifying precisely where the target was. And they found that even without that specific training, they could do both equally well. "This was not a given," Hegdé notes.

In a second experiment with seven different individuals, they used a much-abbreviated training process, which basically ensured participants knew which buttons to push when, and used instead a clearly more pronounced "pop-out" target without the traditional camouflage background: scenarios like that black O-shaped target among a crowd of black C shapes, or a blue S shape among a sea of green H shapes. Both the longer and shorter viewing times yielded essentially identical results from the more extensively trained camouflage-breakers, both in accuracy of localization and in reaction time.

Camouflage is used extensively by the military, from the deserts of the Middle East to the dense jungles of South America with the visual texture changing to blend with the natural environment. "You often are recognized by your outline, and you use these patterns to break up your outline, so the person trying to break your camouflage doesn't know where you leave off and the background begins," he says.

He notes that context is another important factor for recognition, referencing how you may not recognize a person whose face you have seen multiple times when you encounter them in a different environment. His current Army-funded studies include exploring more about the importance of context, and further exploring ramifications of "camouflage breaking" in identifying medical problems.

He notes that even with his training, some people are better at breaking camouflage than others -- he says he is really bad at it -- and why remains mostly a mystery and another learning point for Hegdé and his colleagues.


Coauthors Isabelle Noel Santana and Allison JoAnna Lewis were undergraduate apprentices of the U.S. Army in Hegdé's lab when the work was done. Lewis is now an MCG medical student. First author Fallon Branch is a U.S. Navy veteran.




Contacts and sources:
Toni Baker
Medical College of Georgia at Augusta University

 

 

Tuesday, June 8, 2021

Saving the Climate with Solar Fuel

Produced in a sustainable way, synthetic fuels contribute to switching mobility to renewable energy and to achieving the climate goals in road traffic. In Empa's mobility demonstrator, move, researchers are investigating the production of synthetic methane from an energy, technical and economic perspective – a project with global potential.

By 2030, the retailer Lidl Switzerland will switch from fossil natural gas to liquefied renewable gas to operate its trucks.

 Image: Lidl Schweiz

Mobility analyses show: Only a small proportion of all vehicles are responsible for the majority of the kilometers driven. We are talking above all about long-distance trucks that transport goods all over Europe. If these continue to be fueled with fossil energy, it will hardly be possible to sufficiently reduce CO2 emissions in road traffic. Synthetic fuels can make a significant contribution to such applications.

With electric mobility, hydrogen mobility and synthetic fuels, Empa's future mobility demonstrator, "move", is investigating three paths for CO2 reduction in road traffic against the background of a rapidly changing energy system. "All these concepts have advantages and drawbacks in terms of energy, operation and economics. In order to use them in a smart way, we need a deeper understanding of the overall system," says Christian Bach, Head of Empa's Automotive Powertrain Technologies lab. "Together with our 'move' partners, we are working to develop knowledge that can be put into practice."

The latest project focuses on the production of synthetic methane from hydrogen and CO2 – the so-called methanization. Such fuels, produced synthetically with renewable energy – thus called synfuel or syngas –, can be transported via conventional routes and made available through the existing infrastructure. This is of interest for Switzerland as well as globally, because it opens up an enormous potential for renewable energy.

A methanization process developed at Empa


The basic chemical process of methanization has been known for over 100 years as the Sabatier reaction. In "move", another process developed further at Empa will be used: the so-called sorption-enhanced methanization. Empa researchers hope that this novel process engineering concept will lead to simpler process control, higher efficiency and better suitability for dynamic operation.

Methanization works as follows: methane (CH4) and water (H2O) are produced by catalytic conversion of carbon dioxide (CO2) and hydrogen (H2), following the overall reaction CO2 + 4 H2 → CH4 + 2 H2O. The water, however, causes problems in conventional processes: to remove it, serial methanization stages are typically required, with condensation stages in between. Due to the high reaction temperatures, a proportion of the water is converted back into hydrogen by the so-called water-gas shift reaction. The gaseous product of the methanization reaction thus contains a few percent hydrogen, which prevents direct feeding into the gas grid; the hydrogen must first be removed.
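The stoichiometry makes the mass flows concrete. The sketch below uses standard molar masses and deliberately ignores losses and the hydrogen regenerated by the water-gas shift side reaction.

```python
# Sabatier reaction mass balance: CO2 + 4 H2 -> CH4 + 2 H2O.
# Molar masses are standard values (g/mol).
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.02

kg_ch4 = 1.0                           # one kilogram of synthetic methane
mol_ch4 = kg_ch4 * 1000 / M_CH4        # ~62.3 mol

print(f"H2 needed : {4 * mol_ch4 * M_H2 / 1000:.2f} kg")   # ~0.50 kg
print(f"CO2 needed: {1 * mol_ch4 * M_CO2 / 1000:.2f} kg")  # ~2.74 kg
print(f"H2O formed: {2 * mol_ch4 * M_H2O / 1000:.2f} kg")  # ~2.25 kg
```

Each kilogram of synthetic methane thus binds roughly 2.7 kg of CO2 captured from the air, which is the sense in which the fuel is climate-neutral when the hydrogen comes from renewable electricity.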

Carbon dioxide and water from the air


CO2 for the methanization as well as water for hydrogen production is taken directly from the atmosphere with a CO2 collector from the ETH spin-off Climeworks. The system sucks in ambient air and CO2 molecules remain attached to the filter. Using heat – around 100°C – the CO2 molecules can be released from the filter. Empa researchers see further potential for optimization in the heat required for this CO2 desorption. "Both hydrogen production and methanization continuously generate waste heat," says Bach. "By means of a clever heat management, we want to cover the heat requirements of the CO2 collector as much as possible with this waste heat". In addition to CO2, the Climeworks plant also extracts water from ambient air, which is used for hydrogen production in the electrolysis device. This means that such plants are also conceivable in regions without water supply, for example in deserts (see box).

In addition to new knowledge about technical and energetic aspects, insights about the economic efficiency of synthetic methane are one of the project's prime goals. "In order to ensure this holistic perspective, the project consortium consists of partners who cover the entire value chain – from Empa researchers to energy suppliers, filling station and fleet operators and industrial partners in the technology and plant sectors," says Brigitte Buchmann, member of Empa's Board of Directors and strategic head of "move". The project is supported by the Canton of Zurich, the ETH Board, Avenergy Suisse, Migros, Lidl Switzerland, Glattwerk, Armasuisse and Swisspower.

Currently, Christian Bach's team is concentrating on the investigation of water adsorption on porous materials and the process control of the catalytic reaction. Construction of the plant is planned for mid-2021. "About a year later, we want to refuel the first vehicle," says Buchmann. "With methane from solar energy."

Synthetic fuels from the desert?
Converting our energy system to renewable sources poses a major challenge: renewable sources such as sun and wind are not always available everywhere. In the northern hemisphere we have too little renewable energy in winter and too much in summer; in the southern hemisphere it is the other way round. But there are also areas with almost continuous sunshine – the so-called sun belt, home to the Earth's large deserts. "From a global perspective, we do not have too little renewable energy worldwide, but 'merely' an energy transport problem," says Christian Bach. Synthetic energy carriers could help solve this problem.

Smaller plants in Switzerland can make a valuable contribution to the national energy system by harnessing surplus summer electricity and connecting different energy sectors. Large plants, however, could exploit their full potential above all in the Earth's sun belt. A simple calculation illustrates this: to cover the part of Switzerland's winter energy demand not met by hydropower, as well as all domestic long-distance traffic, exclusively with (imported) synthetic energy carriers, a solar power plant in a desert with an area of approximately 700 km2 would be required – that is, about 27 x 27 km, or 0.008% of the area of the Sahara. The water and CO2 needed for production could be extracted locally from the atmosphere. "Existing trade mechanisms, transport infrastructures, standards and expertise could simply continue to be used," says Bach. So could the plant in "move" soon be a model for a gigawatt plant in the desert?
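
The quoted figures are easy to check. A quick calculation, taking the Sahara's area as roughly 9.2 million km2 (an approximate literature value):

```python
import math

plant_area_km2 = 700.0
sahara_area_km2 = 9.2e6   # approximate literature value

side_km = math.sqrt(plant_area_km2)        # ~26.5 km, i.e. roughly 27 x 27 km
share = plant_area_km2 / sahara_area_km2   # ~7.6e-5

print(f"square side ~{side_km:.1f} km, {share:.4%} of the Sahara")
# -> square side ~26.5 km, 0.0076% of the Sahara (rounds to 0.008%)
```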



Contacts and sources:
Stephan Kälin
Swiss Federal Laboratories for Materials Science and Technology (Empa) 



 


Machine Learning Reduces Microscope Data Processing Time from Months to Just Seconds

With a new method that combines high-powered scanning force microscopes and machine learning, IBEC researchers have drastically cut the processing time needed to obtain nanoscale biochemical composition maps from electrical images of eukaryotic cells: what could take months with earlier computational methods now takes just seconds.



This study can provide an invaluable tool to biologists conducting basic research and it also has the potential to be used in a host of biomedical applications.

Ever since the world's first microscope was invented in 1590 by Hans and Zacharias Janssen – a Dutch father and son – our curiosity about what goes on at the tiniest scales has driven the development of increasingly powerful devices. Fast forward to 2021: we not only have optical microscopy methods that resolve tiny structures better than ever before, we also have non-optical techniques, such as scanning force microscopes, with which researchers can construct detailed maps of a range of physical and chemical properties.

IBEC's Nanoscale bioelectrical characterization group, led by UB Professor Gabriel Gomila, in collaboration with members of IBEC's Nanoscopy for nanomedicine group, has been analysing cells using a special type of microscopy called Scanning Dielectric Force Volume Microscopy, an advanced technique developed in recent years that creates maps of an electrical property called the dielectric constant. Each of the biomolecules that make up cells – lipids, proteins and nucleic acids – has a different dielectric constant, so a map of this property is essentially a map of cell composition. The technique has an advantage over the current gold-standard optical method, which requires a fluorescent dye that can disrupt the cell being studied: it needs no potentially disruptive external agent.

However, applying the technique requires complex post-processing to convert the measured observables into physical quantities, which for eukaryotic cells involves huge amounts of computation time. In fact, it can take months to process a single image on a workstation computer, since the dielectric constant is analysed pixel by pixel using locally reconstructed geometrical models.

Months to seconds

In this new study, recently published in the journal Small Methods, the researchers adopted a new technique to speed up the processing of the microscope data: machine learning algorithms instead of conventional computational methods. The result was dramatic: once trained, the machine learning algorithm was able to produce a dielectric biochemical composition map of the cells in just seconds. As in the earlier work, no external substances were added to the sample – a long-sought goal in composition imaging in cell biology. The researchers achieved these rapid results by using neural networks, a powerful class of algorithms loosely modelled on the way neurons in the human brain operate.
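
The published architecture is not detailed here, but the general idea can be sketched as a small per-pixel regression network. Everything below – the layer sizes, the eight input features, the image dimensions – is an illustrative placeholder, not the IBEC model:

```python
# Minimal sketch of per-pixel dielectric-constant regression (illustrative).
import torch
import torch.nn as nn

class DielectricNet(nn.Module):
    """Maps a per-pixel feature vector (e.g. force-volume observables plus
    local topography) to an estimated dielectric constant."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),   # predicted dielectric constant
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Once trained (e.g. on pixels solved the slow way with reconstructed
# geometrical models), mapping a whole image is a single batched forward
# pass -- seconds rather than months.
model = DielectricNet()
features = torch.rand(512 * 512, 8)            # dummy 512 x 512 scan
with torch.no_grad():
    eps_map = model(features).reshape(512, 512)
print(eps_map.shape)                           # torch.Size([512, 512])
```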

The study was first-authored by Martí Checa, who carried out the work as part of his PhD in Gomila's group at IBEC; he is now a postdoctoral researcher at the Catalan Institute of Nanoscience and Nanotechnology (ICN2). "It is one of the first studies to provide such a rapid label-free biochemical composition map of dry eukaryotic cells", Checa explains. Indeed, in this proof-of-concept study the researchers used dried cells, to avoid the large effect that water, with its high dielectric constant, has on the measurements. In a recently published follow-up study, they also analysed fixed cells in their natural in-liquid state, comparing the values obtained in the dry and liquid conditions in order to render an accurate map of the biomolecules that make up eukaryotic cells – the multi-structured cells of which animals, plants, fungi and other organisms are composed. "The next step in this research is to apply the method to electrically excitable living cells, such as neurons, where intense electrical activity occurs. We are excited to see what can be obtained with our technique in these systems," Prof. Gomila adds.

Dielectric biochemical composition map of dry HeLa cells obtained with the newly developed microscopy method.




Biomedical applications

The researchers validated their methodology by comparing their findings to well-known facts about the composition of cells, such as the lipid-rich nature of the cell membrane or the high quantity of nucleic acids present in the nucleus. With this work, they have opened up the possibility of analysing large quantities of cells in record time.

This study is expected to provide an invaluable tool for biologists conducting basic research, as well as to open up potential medical applications. For example, changes in the dielectric properties of cells are currently being studied as possible biomarkers of illnesses such as cancer and neurodegenerative diseases.


Contacts and sources:
Guillermo Orts-Gil
Institute of Bioengineering of Catalonia (IBEC)


Publication:  Martí Checa, Ruben Millan-Solsona, Adrianna Glinkowska Mares, Silvia Pujals, and Gabriel Gomila. Fast Label-Free Nanoscale Composition Mapping of Eukaryotic Cells Via Scanning Dielectric Force Volume Microscopy and Machine Learning. Small Methods, 2021. Read the study.

Robots Rejoice: Researchers Create a Camera That Knows Exactly Where It Is

Researchers from the University of Bristol have demonstrated how a special new type of camera can build a pictorial map of where it has been and use this map to determine where it currently is – a capability that will be incredibly useful in the development of smart sensors, driverless cars and robotics.

Overview of the on-sensor mapping. The system moves around and, as it does, it builds a visual catalogue of what it observes. This is the map that is later used to know if it has been there before.
Credit: University of Bristol


Knowing where you are on a map is one of the most useful pieces of information when navigating journeys. It allows you to plan where to go next and also tracks where you have been before. This is essential for smart devices from robot vacuum cleaners to delivery drones to wearable sensors keeping an eye on our health.

But one important obstacle is that systems that need to build or use maps are very complex: they commonly rely on external signals such as GPS, which do not work indoors, or require a great deal of energy due to the large number of components involved.


Right: the system moves around the world. Left: a new image is seen and a decision is made whether or not to add it to the visual catalogue (top left) – the pictorial map that can later be used to localise the system.
Credit: University of Bristol

Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol’s Department of Computer Science, led the team that has been developing this new technology.

He said: “We often take for granted things like our impressive spatial abilities. Take bees or ants as an example. They have been shown to be able to use visual information to move around and achieve highly complex navigation, all without GPS or much energy consumption.

“In great part this is because their visual systems are extremely efficient and well-tuned to making and using maps, and robots can't compete there yet.”

During localisation the incoming image is compared to the visual catalogue (Descriptor database) and if a match is found, the system will tell where it is (Predicted node, small white rectangle at the top) relative to the catalogue. Note how the system is able to match images even if there are changes in illumination or objects like people moving.


Credit: University of Bristol

However, a new breed of sensor-processor device that the team calls a Pixel Processor Array (PPA) allows processing on-sensor. This means that as images are sensed, the device can decide what information to keep, what information to discard, and use only what it needs for the task at hand.

An example of such a PPA device is the SCAMP architecture, developed by the team's colleagues at the University of Manchester led by Piotr Dudek, Professor of Circuits and Systems. This PPA has one small processor for every pixel, which allows for massively parallel computation on the sensor itself.

The team at the University of Bristol has previously demonstrated how these new systems can recognise objects at thousands of frames per second but the new research shows how a sensor-processor device can make maps and use them, all at the time of image capture.

This work was part of the MSc dissertation of Hector Castillo-Elizalde, who did his MSc in Robotics at the University of Bristol. He was co-supervised by Yanan Liu who is also doing his PhD on the same topic and Dr Laurie Bose.

Hector Castillo-Elizalde and the team developed a mapping algorithm that runs all on-board the sensor-processor device.

The algorithm is deceptively simple: when a new image arrives, the algorithm decides whether it is sufficiently different from what it has seen before. If it is, it stores some of the image's data; if not, it discards it.

As the PPA device is moved around – by a person or a robot, for example – it collects a visual catalogue of views. This catalogue can then be used to match any new image when the device is in localisation mode.

Importantly, no images go out of the PPA, only the key data that indicates where it is with respect to the visual catalogue. This makes the system more energy efficient and also helps with privacy.
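
In conventional, off-sensor terms the idea might look like the sketch below; the descriptor, the distance metric and the threshold are placeholder assumptions, and the real system runs this logic directly on the PPA:

```python
# Sketch of visual-catalogue mapping and localisation (illustrative only).
from __future__ import annotations
import numpy as np

THRESHOLD = 0.35                    # "sufficiently different" cut-off (assumed)
catalogue: list[np.ndarray] = []    # the pictorial map: stored descriptors

def descriptor(image: np.ndarray) -> np.ndarray:
    """Stand-in descriptor: a coarse, normalised thumbnail of the frame."""
    small = image[::16, ::16].astype(float).ravel()
    return small / (np.linalg.norm(small) + 1e-9)

def mapping_step(image: np.ndarray) -> None:
    """Keep a frame only if it differs enough from everything seen so far."""
    d = descriptor(image)
    if not catalogue or min(np.linalg.norm(d - c) for c in catalogue) > THRESHOLD:
        catalogue.append(d)         # new view -> add it to the map

def localise(image: np.ndarray) -> int | None:
    """Return the index of the best-matching catalogue entry, if any."""
    if not catalogue:
        return None
    d = descriptor(image)
    dists = [np.linalg.norm(d - c) for c in catalogue]
    best = int(np.argmin(dists))
    return best if dists[best] <= THRESHOLD else None
```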

The team believes that this type of artificial visual system – developed for visual processing rather than to record images – is a first step towards more efficient smart systems that can use visual information to understand and move through the world. Tiny, energy-efficient robots or smart glasses doing useful things for the planet and for people will need spatial understanding, which will come from being able to make and use maps.

The research has been partially funded by the Engineering and Physical Sciences Research Council (EPSRC), by a CONACYT scholarship to Hector Castillo-Elizalde and a CSC scholarship to Yanan Liu.





Contacts and sources:
University of Bristol

 

Publication: This work is the subject of a paper accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2021, one of the main annual academic conferences covering advances in robotics.


Scientists Develop the 'Evotype' to Unlock Power of Evolution for Better Engineering Biology

A defining characteristic of all life is its ability to evolve. However, the fact that biologically engineered systems will evolve when used has, to date, mostly been ignored. This has resulted in biotechnologies with a limited functional shelf-life that fail to make use of the powerful evolutionary capabilities inherent to all biology.

Sim Castle, first author of the research, published in Nature Communications, and a PhD student in the School of Biological Sciences at Bristol, explained the motivation for the work: "The thing that has always fascinated me about biology is that it changes, it is chaotic, it adapts, it evolves. Bioengineers therefore do not just design static artefacts - they design living populations that continue to mutate, grow and undergo natural selection."

Realising that describing this change was key to harnessing evolution, the team developed the concept of the evotype to capture the evolutionary potential of a biosystem. Crucially, the evotype can be broken into three key parts: variation, function, and selection, with each of these offering a tuning knob for bioengineers to control the possible paths available to evolution.

Prof Claire Grierson, co-author and Head of the School of Biological Sciences at Bristol, added: "Learning how to effectively engineer with evolution is one of, if not the biggest, challenges facing bioengineers today. Our work provides a desperately needed framework to help describe the evolutionary potential of a biosystem and re-imagine biological engineering so that it works in harmony with life's ability to evolve."

Sim Castle further stated: "What was surprising was that many of the tools already available to bioengineers fitted nicely into our framework when considered from an evolutionary perspective. We therefore might not be too far from making evolution a core feature of future engineered biological systems."

Dr Thomas Gorochowski, senior author and a Royal Society University Research Fellow at Bristol, ended by saying: "Our concept of the evotype not only provides a means for developing biotechnologies that can harness evolution in new ways, but also opens exciting new avenues to think about and implement evolution in completely new contexts. Potentially, this could even lead to us designing new, self-adaptive technologies that evolve from scratch, rather than tinkering with biological ones that already do."

This work was funded by the Royal Society, BBSRC/EPSRC Bristol Centre for Synthetic Biology (BrisSynBio) and EPSRC/BBSRC Synthetic Biology Centre for Doctoral Training (SynBioCDT) with support from the Bristol BioDesign Institute (BBI).

Anima Techne

Credit: Simeon Castle

Contacts and sources:
Shona East
University of Bristol 


Publication: Simeon D. Castle, Claire S. Grierson, Thomas E. Gorochowski. Towards an engineering theory of evolution. Nature Communications, 2021; https://www.nature.com/articles/s41467-021-23573-3




Study Shows How Taking Short Breaks May Help Our Brains Learn New Skills

 

In a study of healthy volunteers, National Institutes of Health researchers have mapped out the brain activity that flows when we learn a new skill, such as playing a new song on the piano, and discovered why taking short breaks from practice is a key to learning. The researchers found that during rest the volunteers' brains rapidly and repeatedly replayed faster versions of the activity seen while they practiced typing a code. The more a volunteer replayed the activity, the better they performed during subsequent practice sessions, suggesting that rest strengthened memories.

In a study of healthy volunteers, NIH researchers discovered that our brains may replay compressed memories of learning new skills when we rest. Above is a map of the memory replay activity observed in the study. 
Credit: Courtesy of Cohen lab, NIH/NINDS.

"Our results support the idea that wakeful rest plays just as important a role as practice in learning a new skill. It appears to be the period when our brains compress and consolidate memories of what we just practiced," said Leonardo G. Cohen, M.D., senior investigator at the NIH's National Institute of Neurological Disorders and Stroke (NINDS) and the senior author of the study published in Cell Reports. "Understanding this role of neural replay may not only help shape how we learn new skills but also how we help patients recover skills lost after neurological injury like stroke."

The study was conducted at the NIH Clinical Center. Dr. Cohen's team used a highly sensitive scanning technique, called magnetoencephalography, to record the brain waves of 33 healthy, right-handed volunteers as they learned to type a five-digit test code with their left hands. The subjects sat in a chair under the scanner's long, cone-shaped cap. An experiment began when a subject was shown the code "41234" on a screen and asked to type it out as many times as possible for 10 seconds, then take a 10-second break. Subjects repeated this cycle of alternating practice and rest sessions a total of 35 times.

During the first few trials, the speed at which subjects correctly typed the code improved dramatically and then leveled off around the 11th cycle. In a previous study, led by former NIH postdoctoral fellow Marlene Bönstrup, M.D., Dr. Cohen's team showed that most of these gains happened during short rests, and not when the subjects were typing. Moreover, the gains were greater than those made after a night's sleep and were correlated with a decrease in the size of brain waves, called beta rhythms. In this new report, the researchers searched for something different in the subjects' brain waves.

"We wanted to explore the mechanisms behind memory strengthening seen during wakeful rest. Several forms of memory appear to rely on the replaying of neural activity, so we decided to test this idea out for procedural skill learning," said Ethan R. Buch, Ph.D., a staff scientist on Dr. Cohen's team and leader of the study.
To do this, Leonardo Claudino, Ph.D., a former postdoctoral fellow in Dr. Cohen's lab, helped Dr. Buch develop a computer program which allowed the team to decipher the brain wave activity associated with typing each number in the test code.

The program helped them discover that a much faster version – about 20 times faster – of the brain activity seen during typing was replayed during the rest periods. Over the course of the first eleven practice trials, these compressed versions of the activity were replayed about 25 times per rest period – two to three times more often than during later rest periods or after the experiments had ended.
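
To make "time-compressed replay" concrete, here is a toy illustration – emphatically not the NIH analysis pipeline – of counting replays of the practiced sequence in a stream of decoded keypress labels, where a replay only counts if the whole sequence fits in a window far shorter than real typing:

```python
# Toy replay counter (illustrative; the real analysis decodes MEG activity).
SEQUENCE = [4, 1, 2, 3, 4]          # the practiced code "41234"

def count_compressed_replays(decoded: list, max_span: int) -> int:
    """Count occurrences of SEQUENCE whose five events all fall within
    `max_span` samples -- i.e. much faster than actual typing."""
    events = [(t, k) for t, k in enumerate(decoded) if k != 0]  # 0 = no press
    keys = [k for _, k in events]
    times = [t for t, _ in events]
    n = len(SEQUENCE)
    hits = 0
    for i in range(len(keys) - n + 1):
        if keys[i:i + n] == SEQUENCE and times[i + n - 1] - times[i] <= max_span:
            hits += 1
    return hits

rest = [0, 4, 1, 2, 3, 4, 0, 0, 4, 0]               # decoded labels per sample
print(count_compressed_replays(rest, max_span=6))   # -> 1
```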

Interestingly, they found that the frequency of replay during rest predicted memory strengthening. In other words, the subjects whose brains replayed the typing activity more often showed greater jumps in performance after each trial than those who replayed it less often.

"During the early part of the learning curve we saw that wakeful rest replay was compressed in time, frequent, and a good predictor of variability in learning a new skill across individuals," said Dr. Buch. "This suggests that during wakeful rest the brain binds together the memories required to learn a new skill."

As expected, the team discovered that the replay activity often happened in the sensorimotor regions of the brain, which are responsible for controlling movements. However, they also saw activity in other brain regions, namely the hippocampus and entorhinal cortex.

"We were a bit surprised by these last results. Traditionally, it was thought that the hippocampus and entorhinal cortex may not play such a substantive role in procedural memory. In contrast, our results suggest that these regions are rapidly chattering with the sensorimotor cortex when learning these types of skills," said Dr. Cohen. "Overall, our results support the idea that manipulating replay activity during waking rest may be a powerful tool that researchers can use to help individuals learn new skills faster and possibly facilitate rehabilitation from stroke."

NINDS is the nation's leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.


Contacts and sources:
Christopher Thomas 
National Institute of Neurological Disorders and Stroke



Publication: Buch et al., Consolidation of human skill linked to waking hippocampo-neocortical replay, Cell Reports, June 8, 2021, DOI: 10.1016/j.celrep.2021.109193
This study was supported by the NIH Intramural Research Program at the NINDS.
For more information:
http://www.ninds.nih.gov/Disorders/All-Disorders/Stroke-Information-Page
http://www.stroke.nih.gov/materials/needtoknow.htm
http://www.ninds.nih.gov
irp.nih.gov/
clinicalcenter.nih.gov/
neuroscience.nih.gov/ninds/Home.aspx
dir.ninds.nih.gov/ninds/Home.html