
An investigational brain-computer interface (BCI) allows the study participant to communicate through a computer. Credit: University of California, Davis
A new brain-computer interface (BCI) system has allowed a patient with amyotrophic lateral sclerosis (ALS) to “speak” with his family in real time.
ALS is a neurological disease that damages nerve cells in the brain and spinal cord, leading to a loss of muscle control.
According to its developers at the University of California, Davis, this new “investigational” BCI aims to facilitate “faster, more natural conversation.”
“Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,” said David Brandman, co-director of the UC Davis Neuroprosthetics Lab.
“The results of this research provide hope for people who want to talk but can’t. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis,” added Brandman, a neurosurgeon who performed the implant surgery on the study participant.
The researchers collected data while the participant was asked to try to speak sentences shown to him on a computer screen.
Real-time voice synthesis
Current speech neuroprostheses (BCIs that translate brain activity into speech) are limited by slow conversion speeds: it often takes several seconds for brain signals to become audible speech, a delay that hinders natural conversation.
“This new real-time voice synthesis is more like a voice call. With instantaneous voice synthesis, neuroprosthesis users will be able to be more included in a conversation. For example, they can interrupt, and people are less likely to interrupt them accidentally,” said Sergey Stavisky, senior author and an assistant professor in the UC Davis Department of Neurological Surgery.
The device decodes brain signals with remarkable precision. It relies on tiny microelectrode arrays surgically implanted into the brain’s speech-producing region.
The 256 electrodes capture the activity of hundreds of neurons and transmit these signals to computers, which interpret them and reconstruct the voice.
A model of a human brain showing a microelectrode array. The arrays are designed to record brain activity.
Reduced delay
The researchers enrolled a 45-year-old man with ALS in the BrainGate2 clinical trial, conducted at UC Davis Health.
To train the system, researchers presented the participant with sentences on a screen. He was instructed to attempt to speak these sentences aloud.
Some sentences were to be spoken with specific intonations (e.g., the same words, “How are you doing today?”, delivered as a question versus as a statement), while his brain activity was simultaneously recorded.
The BCI quickly translated the man’s brain signals into audible speech.
The speed is astonishing: a delay of just one-fortieth of a second (about 25 milliseconds), similar to the delay with which we hear our own voice. This makes true, spontaneous conversation possible.
He was even able to adjust the pitch of his voice by singing basic melodies.
The researchers noted that the BCI-synthesized voice was largely understandable, with listeners correctly identifying nearly 60% of the words.
Advanced artificial intelligence (AI) algorithms are key to this real-time speech generation. These algorithms were trained by matching the participant’s neural firing patterns with the speech sounds he intended to make.
“The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak,” said Maitreyee Wairagkar, first author and project scientist.
“Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,” Wairagkar added.
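The moment-by-moment mapping Wairagkar describes can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not the study’s actual algorithm: the linear weights, frame sizes, and function names below are all illustrative assumptions. The key idea it shows is that each short frame of neural activity is converted to a sound feature immediately, so latency is one frame rather than one sentence.

```python
# Illustrative sketch of frame-by-frame ("streaming") neural-to-voice decoding.
# All numbers and names are hypothetical; the real system uses trained AI models.

def decode_frame(neural_frame, weights):
    """Map one frame of neural features to one acoustic feature (toy linear model)."""
    return sum(n * w for n, w in zip(neural_frame, weights))

def stream_decode(neural_frames, weights):
    """Emit an acoustic value per frame as it arrives.

    Because each frame is decoded on its own, output is available after a
    single frame (tens of milliseconds), not after a whole utterance.
    """
    audio_features = []
    for frame in neural_frames:
        audio_features.append(decode_frame(frame, weights))  # available immediately
    return audio_features

# Toy usage: three frames of four-channel "firing rates" and fixed weights.
frames = [[0.1, 0.0, 0.5, 0.2],
          [0.3, 0.1, 0.4, 0.0],
          [0.0, 0.2, 0.1, 0.6]]
weights = [1.0, -0.5, 2.0, 0.5]
print(stream_decode(frames, weights))
```

In the real system, the per-frame model is far richer (trained on the participant’s own attempted speech), but the streaming structure, decoding each instant as it happens, is what removes the multi-second delay of earlier neuroprostheses.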
The researchers highlight that brain-to-voice neuroprostheses are still in early stages. The next steps involve replicating these incredible results with more participants, including those with speech loss from other causes, like stroke.
The findings were published in the journal Nature.
ABOUT THE AUTHOR
Mrigakshi Dixit is a science journalist who enjoys writing about space exploration, biology, and technological innovations. Her work has been featured in well-known publications including Nature India, Supercluster, The Weather Channel, and Astronomy magazine.