AI Helps a Stroke Patient Speak Again, a Milestone for Tech and Neuroscience

The brain activity of a paralyzed woman is being translated into words spoken by an avatar. This milestone could help others who have lost speech.

Pam Belluck has been reporting on brain science and neurological disorders for over a decade.
At Ann Johnson’s wedding reception 20 years ago, her gift for speech was vividly evident. In an ebullient 15-minute toast, she joked that she had run down the aisle, wondered if the ceremony program should have said “flutist” or “flautist” and acknowledged that she was “hogging the mic.”
Just two years later, Mrs. Johnson — then a 30-year-old teacher, volleyball coach and mother of an infant — had a cataclysmic stroke that paralyzed her and left her unable to talk.
On Wednesday, scientists reported a remarkable advance toward helping her, and other patients, speak again. In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions.
The research, published in the journal Nature, demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. Mrs. Johnson chose the avatar, a face resembling hers, and researchers used her wedding toast to develop the avatar’s voice.
“We’re just trying to restore who people are,” said the team’s leader, Dr. Edward Chang, the chairman of neurological surgery at the University of California, San Francisco.
“It let me feel like I was a whole person again,” Mrs. Johnson, now 48, wrote to me.
The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis. To work, Mrs. Johnson’s implant must be connected by cable from her head to a computer, but her team and others are developing wireless versions. Eventually, researchers hope, people who have lost speech may converse in real time through computerized pictures of themselves that convey tone, inflection and emotions like joy and anger.