A glimmer of hope… How do brain implants help paralyzed people "speak" again?

Dr. Jaimie Henderson had one wish throughout his childhood: that his father could talk to him. Now, as a scientist and neurosurgeon at Stanford University, Henderson and his colleagues are developing brain implants that might be able to fulfill similar desires for other paralyzed or speech-impaired people.
Two studies, published in the journal Nature on Wednesday, show how brain implants, called neural prostheses, can record the neural activity of a person as they attempt to speak normally.
That brain activity can then be decoded into words on a computer screen, into synthesized sound, or even into the speech of an animated avatar.
“When I was five years old, my dad got into a devastating car accident that left him barely able to move or talk,” Henderson, an author of one of the studies and a professor at Stanford University, said at a press conference about the research. “His ability to speak was so weak that we couldn't understand his jokes.”
Henderson and colleagues at Stanford University and other US institutions studied the use of brain sensors implanted in 68-year-old Pat Bennett.
Bennett was diagnosed with amyotrophic lateral sclerosis in 2012, a disease that affected her ability to speak.
In their study, the researchers write that Bennett could still make some limited facial movements and articulate sounds, but could not produce clear words because of amyotrophic lateral sclerosis (ALS), a rare neurological disease that attacks nerve cells in the brain and spinal cord.
In March 2022, Henderson performed surgery to implant arrays of electrodes in two areas of Bennett's brain.
The implants recorded neural activity when Bennett tried to make facial movements, make sounds or speak single words.
The arrays were attached to wires emerging from the skull and connected to a computer.
The software decoded the neural activity and converted the activity into words displayed on a computer screen in real time.
When she finished speaking, Bennett pressed a button to complete the decoding.
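The record-decode-display loop described above can be sketched in miniature. The following is a hypothetical toy, not the Stanford team's actual system: the real decoder is a neural network trained on hours of attempted speech recorded from implanted electrode arrays, while here invented "neural signatures" and a nearest-neighbor lookup stand in for both.

```python
# Toy sketch of the pipeline: record neural features -> decode -> show text.
# All names, the 5-word vocabulary, and the nearest-centroid "decoder" are
# illustrative assumptions, not the published method.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each word has a characteristic neural-activity pattern.
VOCAB = ["hello", "water", "help", "yes", "no"]
signatures = {w: rng.normal(size=16) for w in VOCAB}

def decode_frame(features: np.ndarray) -> str:
    """Map one window of recorded neural features to the closest known word."""
    return min(VOCAB, key=lambda w: np.linalg.norm(features - signatures[w]))

def run_session(attempted_words):
    """Simulate a session: noisy neural activity in, decoded text out."""
    screen = []
    for word in attempted_words:
        # Recording adds noise; real implants are far noisier than this.
        features = signatures[word] + rng.normal(scale=0.1, size=16)
        screen.append(decode_frame(features))
    # Text accumulates on screen until the user signals they are done.
    return " ".join(screen)

print(run_session(["hello", "water", "yes"]))
```

With such low simulated noise the toy decodes perfectly; the studies' point is achieving usable accuracy on real, noisy neural data with vocabularies thousands of times larger.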
The researchers evaluated the system in two conditions: when Bennett attempted to speak aloud, and when she only mouthed the words, moving her mouth without vocalizing.
The researchers found that with a vocabulary of 50 words, the decoding error rate was 9.1% on days when Bennett vocalized and 11.2% on days when she only mouthed the words.
With a vocabulary of 125,000 words, the word error rate was 23.8% on the vocalized days and 24.7% on the mouthed days.
"In our work, we show that we can decode attempted speech with an error rate of 23% when using a large vocabulary of 125,000 possible words. That means about three out of four words are decoded correctly," said Frank Willett, an author of the Stanford study.
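The word error rate both teams report is the standard speech-recognition metric: the word-level edit distance (substitutions, insertions, deletions) between the intended sentence and the decoded one, divided by the number of intended words. A minimal implementation of that standard definition (not the authors' own scoring code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match/substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four gives 25% WER, i.e. 3 of 4 words correct.
print(word_error_rate("i want some water", "i want some coffee"))  # 0.25
```

By this measure, the reported 23.8% error on a 125,000-word vocabulary corresponds to roughly three of every four words decoded correctly.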
"With these new studies, it is now possible to envision a future in which we can provide the ability to speak fluently again to someone with paralysis, enabling them to say what they want to say with freedom and precision high enough to be reliably understood," Willett added.
"For those who are nonverbal, this means they can stay connected to the larger world, perhaps continue to work, and maintain friendships and family relationships," Bennett wrote in a press release.
For now, however, the researchers describe their findings as a "proof of concept" that decoding attempted speech with large vocabularies is possible, and say the approach must be tested in more people before it can be considered for clinical use.
The other study, published Wednesday, included a woman who was unable to speak clearly due to paralysis after suffering a stroke in 2005, at the age of 30.
In September 2022, an electrode device was implanted in her brain at UCSF Medical Center in San Francisco, without surgical complications.
The implant recorded neural activity, which was decoded into text on the screen.
The researchers wrote in the study that the system achieved "accurate and rapid decoding of large vocabulary," at an average rate of 78 words per minute with a word error rate of 25%.
In addition, when the patient attempted to speak silently, her neural activity was translated into synthesized speech sounds.
The researchers also developed an animated facial avatar to accompany the synthesized speech, driven by the facial movements the patient attempted to make.
At the press conference, Dr. Edward Chang, a neurosurgeon and study author from the University of California, San Francisco, noted the "overlapping" results of the two new studies on brain implants, as well as their "long-term goal" of restoring the ability to communicate in paralyzed people.
Although the devices described in the new papers are being studied as proofs of concept and are not commercially available, they could pave the way for future research and, possibly, for future commercial devices.

