Scientists have developed a device that can translate thoughts about speech into spoken words in real time.
Although it’s still experimental, they hope the brain-computer interface could someday help give voice to those unable to speak.
A new study described testing the device on a 47-year-old woman with quadriplegia who hadn't been able to speak in the 18 years since a stroke. Doctors implanted it in her brain during surgery as part of a clinical trial.
It “converts her intent to speak into fluent sentences,” said Gopala Anumanchipalli, a co-author of the study published Monday in the journal Nature Neuroscience.
Other brain-computer interfaces, or BCIs, for speech typically have a slight delay between thoughts of sentences and computerized verbalization. Such delays can disrupt the natural flow of conversation, potentially leading to miscommunication and frustration, researchers said.
This is “a pretty big advance in our field,” said Jonathan Brumberg of the Speech and Applied Neuroscience Lab at the University of Kansas, who was not part of the study.
A team in California recorded the woman's brain activity with electrodes while she spoke sentences in her head. Using recordings of her voice from before her injury, the scientists built a synthesizer to re-create the speech sounds she would have made. They then trained an AI model to translate her neural activity into units of sound.
It works similarly to existing systems used to transcribe meetings or phone calls in real time, said Anumanchipalli, of the University of California, Berkeley.
The implant itself sits on the speech center of the brain so that it’s listening in, and those signals are translated to pieces of speech that make up sentences. It’s a “streaming approach,” Anumanchipalli said, with each 80-millisecond chunk of speech – about half a syllable – sent into a recorder.
“It’s not waiting for a sentence to finish,” Anumanchipalli said. “It’s processing it on the fly.”
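As a rough illustration of that streaming idea (not the study's actual pipeline), the sketch below processes a simulated stream of neural features in fixed 80-millisecond windows and emits a decoded sound unit for each window as it arrives, rather than waiting for a full sentence. The feature stream, the decoder, and the "sound units" are all hypothetical stand-ins.

```python
# Conceptual sketch of streaming decoding in fixed 80 ms windows.
# Everything here is a placeholder, not the researchers' actual system.

from typing import Iterator, List
import random

CHUNK_MS = 80          # window length described in the article (~half a syllable)
SAMPLE_RATE_HZ = 1000  # assumed neural feature rate: one sample per millisecond
CHUNK_SAMPLES = CHUNK_MS * SAMPLE_RATE_HZ // 1000

def decode_chunk(samples: List[float]) -> str:
    """Placeholder decoder: map one 80 ms window of features to a sound unit."""
    energy = sum(abs(s) for s in samples) / len(samples)
    return "unit_high" if energy > 0.5 else "unit_low"

def stream_decode(feature_stream: Iterator[float]) -> Iterator[str]:
    """Emit one decoded unit per chunk as data arrives, without waiting
    for the sentence to finish -- the 'processing on the fly' idea."""
    buffer: List[float] = []
    for sample in feature_stream:
        buffer.append(sample)
        if len(buffer) == CHUNK_SAMPLES:
            yield decode_chunk(buffer)
            buffer.clear()

if __name__ == "__main__":
    fake_stream = (random.random() for _ in range(400))  # ~0.4 s of fake features
    for unit in stream_decode(fake_stream):
        print(unit)
```

The point of the sketch is only the chunked, incremental loop: output is produced window by window, which is what lets a system of this kind keep pace with natural speech.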
Decoding speech that quickly has the potential to keep up with the fast pace of natural speech, said Brumberg. The use of voice samples, he added, “would be a significant advance in the naturalness of speech.”
Though the work was partially funded by the National Institutes of Health, Anumanchipalli said it wasn’t affected by recent NIH research cuts. More research is needed before the technology is ready for wide use, but with “sustained investments,” it could be available to patients within a decade, he said.
—Laura Ungar, AP science writer
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.