In a first, medical researchers harnessed the brain waves of a paralyzed man unable to speak and turned what he intended to say into sentences on a computer screen.
It will take years of additional research, but the study marks an important step toward one day restoring more natural communication for people who can’t talk because of injury or illness.
“Most of us take for granted how easily we communicate through speech,” said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work.
“It’s exciting to think we’re at the very beginning of a new chapter, a new field,” he said, to ease the devastation of patients who have lost that ability.
Today, people who can’t speak or write because of paralysis have very limited ways of communicating.
For example, the man in the experiment, who was not identified, uses a pointer attached to a baseball cap that lets him move his head to touch words or letters on a screen. Other devices can pick up patients’ eye movements. But these are frustratingly slow and limited substitutes for speech.
Tapping brain signals to work around a disability is a hot field. In recent years, experiments with mind-controlled prosthetics have allowed paralyzed people to shake hands or take a drink using a robotic arm.
Chang’s team built on that work to develop a “speech neuroprosthetic,” decoding the brain waves that normally control the vocal tract — the tiny muscle movements of the lips, jaw, tongue and larynx that form each consonant and vowel.
Volunteering to test the device was a man in his late 30s who 15 years ago suffered a brain-stem stroke that caused widespread paralysis and robbed him of speech. The researchers implanted electrodes on the surface of the man’s brain, over the area that controls speech.
A computer analyzed the patterns as he attempted to say common words such as “water” or “good,” eventually learning to differentiate between 50 words that could generate more than 1,000 sentences.
It takes about three to four seconds for the word to appear on the screen after the man tries to say it, said lead author David Moses, an engineer in Chang’s lab. That’s not nearly as fast as speaking but quicker than tapping out a response.
Next steps include ways to improve the device’s speed, accuracy and vocabulary size and maybe one day allow a computer-generated voice rather than text on a screen.