Echoes can make speech harder to understand, and tuning out echoes in an audio recording is a notoriously difficult engineering problem. The human brain, however, appears to solve the problem successfully by separating the sound into direct speech and its echo, according to a study published February 15th in the open-access journal PLOS Biology by Jiaxin Gao from Zhejiang University, China, and colleagues.

The audio in online meetings and in poorly designed auditoriums often carries an echo lagging the original speech by at least 100 milliseconds. These echoes heavily distort speech, interfering with the slowly varying sound features most important for understanding conversations, yet people still reliably understand echoic speech. To better understand how the brain enables this, the authors used magnetoencephalography (MEG) to record neural activity while human participants listened to a story with and without an echo. They compared the neural signals to two computational models: one simulating the brain adapting to the echo, and another simulating the brain separating the echo from the original speech.
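The kind of echo described above can be illustrated with a minimal delay-and-attenuate sketch: the listener hears the original signal plus a quieter copy shifted by roughly 100 ms, which smears the slow amplitude fluctuations that carry intelligibility. The 0.1-second delay below mirrors the lag mentioned in the study; the attenuation factor and the amplitude-modulated test tone are illustrative assumptions, not the authors' actual stimuli.

```python
import numpy as np

def add_echo(signal, sr, delay_s=0.1, attenuation=0.5):
    """Mix a signal with a delayed, attenuated copy of itself.

    delay_s=0.1 matches the >=100 ms echo lag described in the article;
    attenuation=0.5 is an arbitrary illustrative value.
    """
    delay_samples = int(delay_s * sr)
    echo = np.zeros_like(signal)
    echo[delay_samples:] = signal[:-delay_samples] * attenuation
    return signal + echo

# A 1-second, 4 Hz amplitude-modulated tone stands in for the slow
# envelope fluctuations that make speech intelligible.
sr = 16000
t = np.arange(sr) / sr
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
echoic = add_echo(speech_like, sr)
```

Before the first 1,600 samples (100 ms) have elapsed, the echoic signal is identical to the original; after that point the delayed copy overlaps and distorts it, which is the interference the brain must undo.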


Participants understood the story with over 95% accuracy, regardless of echo. The researchers observed that cortical activity tracks energy changes related to direct speech, despite the strong interference of the echo. Simulating neural adaptation only partially captured the brain response they observed; neural activity was better explained by a model that split original speech and its echo into separate processing streams.

This remained true even when participants were told to direct their attention toward a silent film and ignore the story, suggesting that top-down attention isn't required to mentally separate direct speech and its echo. The researchers state that auditory stream segregation may be important both for singling out a specific speaker in a crowded environment, and for clearly understanding an individual speaker in a reverberant space.



The authors add, "Echoes strongly distort the sound features of speech and create a challenge for automatic speech recognition. The human brain, however, can segregate speech from its echo and achieve reliable recognition of echoic speech."

IMAGE CREDIT: Jiaxin Gao (CC-BY 4.0, https://creativecommons.org/licenses/by/4.0/)


