Berkeley Voices: A stroke left her ‘locked in.’ With the help of AI, she heard her voice again.
Researchers at UC Berkeley and UC San Francisco restored Ann Johnson’s ability to speak using a brain-computer interface nearly two decades after she had a brainstem stroke at age 30.

April 28, 2025
Key takeaways
- Researchers at UC Berkeley and UC San Francisco have created a brain-computer interface that can restore the ability to speak to people who have lost it because of paralysis or another condition.
- The technology continues to evolve, and researchers expect rapid advancements, including photorealistic avatars and wireless, plug-and-play neuroprosthetic devices.
- This ongoing research has enormous potential to make the workforce and the world more accessible to people with disabilities.
Follow Berkeley Voices, a Berkeley News podcast about the people and research that make UC Berkeley the world-changing place that it is. Review us on Apple Podcasts.
When Ann Johnson had a rare brainstem stroke at age 30, she lost control of all of her muscles. One minute, she was playing volleyball with her friends. The next, she couldn’t move or speak.
Up until that moment, she’d been a talkative and outgoing person. She taught math and physical education, and coached volleyball and basketball at a high school in Saskatchewan, Canada. She’d just had a baby a year earlier with her new husband.
And the thing was, she still was that person, but no one could tell, because the connection between her brain and her body didn’t work anymore. She would try to speak, but her mouth wouldn’t move.
Eighteen years later, she finally heard her voice again.
It’s thanks to researchers at UC Berkeley and UC San Francisco who are working to restore people’s ability to communicate using a brain-computer interface. The technology, the researchers say, has enormous potential to make the workforce and the world more accessible to people like Ann.
This year on Berkeley Voices, we’re exploring the theme of transformation. In eight episodes, we explore how transformation — of ideas, of research, of perspective — shows up in the work that happens every day at UC Berkeley. New episodes come out on the last Monday of each month, from October through May.
See all episodes of the series.
Anne Brice (narration): This is Berkeley Voices, a UC Berkeley News podcast. I’m Anne Brice.
(Music: “Sticktop” by Blue Dot Sessions)
It had been 18 years since Ann Johnson heard her own voice.
She lost it at age 30, when she had a brainstem stroke while she was warming up for a volleyball game with friends. The stroke caused extreme paralysis, and Ann lost the ability to speak and move all the muscles in her body.
Up until that moment, she’d been a talkative and outgoing person. She taught math and physical education, and coached volleyball and basketball at a high school in Saskatchewan, Canada. She’d just had a baby a year earlier with her new husband, and had given a joyful 15-minute-long speech at their wedding.
(Music fades out)
And the thing is, she still was that person, at 30. It’s just that no one could tell. Because the connection between her brain and her body didn’t work anymore. She would try to speak, but her mouth wouldn’t move, and no sound would come out.
It’s what’s commonly known as locked-in syndrome, a rare condition in which someone has near-complete paralysis and no ability to communicate naturally. It’s most often caused by a stroke or the neurological disorder ALS.
(Music comes back up)
Then, in 2022, at the age of 48, Ann heard her voice again.
She was part of a clinical trial being conducted by researchers at UC Berkeley and UC San Francisco that’s trying to restore people’s ability to communicate using a brain-computer interface. The technology, the researchers say, has enormous potential to make the workforce and the world more accessible to people like Ann.
(Music comes up, then fades out)
Gopala Anumanchipalli: My involvement on this project, particularly, started maybe a decade ago at this point.
Anne Brice (narration): Gopala Anumanchipalli is an assistant professor of electrical engineering and computer sciences at UC Berkeley. As a postdoc in 2015, he began working with Edward Chang, a neurosurgeon at UC San Francisco, to understand how speech happens in the brain.
Gopala Anumanchipalli: Like, what’s enabling us from going into what we are thinking to what we’re actually saying out loud. As part of the postdoc, we were able to get a good sense of, like, here is the part of the brain that is actually responsible for speech production, and here is how we can computationally model this process so we can synthesize just from brain activity what someone might be saying.
Anne Brice (narration): Essentially, they figured out how to go to the source of knowledge — the brain — and then bypass what’s broken — the connection to the body — and restore what’s lost. In this case, they’re using a neuroprosthesis that’s reading from the part of the brain that processes speech.
They started the clinical trial in 2020, and Ann joined as the third participant in 2022.
Kaylo Littlejohn is a Ph.D. student in the Berkeley Speech Group, which is part of the Berkeley AI Research Lab. He was a co-lead on the study with Anumanchipalli and Chang, and he worked with researchers to create an AI model that would help restore Ann’s ability to communicate.
Kaylo Littlejohn: My role in the trial was primarily to lead the modeling efforts, so training the decoders and also working with the participant, working with clinical research coordinators to really try and train a good model that can translate Ann’s brain activity into the desired output, which is to restore her natural voice and the content of what she’s trying to say.
(Music: “Vegimaine” by Blue Dot Sessions)
Anne Brice (narration): Although the population of people who lose their ability to speak in this way is relatively small, the researchers say, they are among the most vulnerable in terms of quality of life.
Since her stroke, Ann has regained some muscle control. She now has full neck movement, and she can laugh and cry and smile.
She communicates mostly using an eye-tracking system that allows her to select each letter to spell words out on a computer screen. As you can imagine, it’s a slow process. She can only write about 14 words per minute, compared to conversational speech, which is more like 160 words per minute.
So when she finally heard her thoughts out loud for the first time in nearly two decades, it was emotional for her.
(Music comes up, then fades out)
Anne Brice (narration): To give Ann an embodied experience, researchers had her choose from a selection of avatars. And they used a recording of her wedding speech to recreate her voice.
An implant rested on top of the region of her brain that processes speech, acting as a kind of thought decoder. Then they showed her sentences and asked her to try to say them.
Kaylo Littlejohn: So basically, she’s sitting there, and she has the brain implant and a connector from that brain implant into the computer nearby, and she essentially tries to speak as best as she can.
Now, she can’t, because she has paralysis, but those signals, as Gopala had described, are still being invoked from her brain, and the neural recording device is sensing those signals. So she sees the screen, and then she’ll essentially try to speak a sentence.
And the neural decoding device will sense those signals, and then they will be sent to the computer where the AI model resides. And that model will translate. Just like how Siri translates your voice to text, this AI model translates the brain activity into the text or the audio or the facial animation.
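To make Littlejohn’s Siri analogy concrete, here is a minimal Python sketch of that kind of pipeline: a window of recorded neural activity is reduced to simple features and handed to a decoder that outputs a word. Every function, class and number below is invented for illustration; this is not the study’s code or model.

```python
# Hypothetical sketch of a neural decoding pipeline: recorded brain activity
# goes in, a decoded word comes out. All names and values are invented.
import numpy as np


def extract_features(neural_window: np.ndarray) -> np.ndarray:
    """Reduce a raw multichannel recording (channels x samples) to
    one feature per channel, here the mean signal power in the window."""
    return (neural_window ** 2).mean(axis=1)


class NeuralDecoder:
    """Stand-in for a trained model that maps neural features to words."""

    def __init__(self, vocabulary):
        self.vocabulary = vocabulary

    def predict(self, features: np.ndarray) -> str:
        # A real decoder would be a trained neural network producing text,
        # audio or facial-animation parameters; this toy version just picks
        # a word deterministically so the sketch runs end to end.
        index = int(features.sum() * 1000) % len(self.vocabulary)
        return self.vocabulary[index]


if __name__ == "__main__":
    decoder = NeuralDecoder(vocabulary=["hello", "help", "yes", "no"])
    window = np.random.default_rng(0).normal(size=(128, 200))  # channels x samples
    print("Decoded word:", decoder.predict(extract_features(window)))
```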
Gopala Anumanchipalli: The attempt to speak is such a strong signal that we don’t even need an AI model there. Just looking at the brain signal, we know that she’s trying to say it.
Just from the raster of the brain data we’re recording, the voltage is there that you can look at precisely, which is what enables us to easily sense the intent to speak, which is very different from thinking casually. It’s a very volitional system, because the intent to speak is very, very reliably detected.
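As a toy illustration of that point, a speech attempt could in principle be flagged with nothing more than an energy threshold on the recorded signal. The threshold and the synthetic signals below are invented; this is not how the study detects intent, just a sketch of why a strong, volitional signal is easy to spot.

```python
# Toy speech-intent detector: a strong attempt shows up as a jump in signal
# power, so a simple calibrated threshold can flag it. Values are invented.
import numpy as np


def detect_speech_attempt(signal: np.ndarray, threshold: float = 2.0) -> bool:
    """Return True if the windowed signal power exceeds the threshold."""
    return float(np.mean(signal ** 2)) > threshold


rng = np.random.default_rng(1)
resting = rng.normal(scale=1.0, size=500)     # quiet baseline activity
attempting = rng.normal(scale=2.5, size=500)  # stronger activity during an attempt

print(detect_speech_attempt(resting))     # False: no attempt detected
print(detect_speech_attempt(attempting))  # True: intent to speak flagged
```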
Kara Manke: So in a way, is she almost picturing herself, or, like, willing herself, to make the speech, even if her body currently is not capable of that?
Anne Brice (narration): That’s Kara Manke, my colleague at UC Berkeley News who co-produced this episode.
Gopala Anumanchipalli: Yes, yeah. The training part, which we didn’t talk about, is really for her to assume that if you are speaking a sentence, how would you go about it? And on she goes about speaking a sentence. We don’t have evidence of her speaking, but we believe she is cooperative and she is doing that, and on she goes for some trials. And that is our data, where we have a sense of, OK, here she’s trying to speak and this is what the data looks like.
That’s the training process, which lets the AI model learn: OK, this is her attempt to speak, and when the brain signal looks like this, she’s actually trying to say, “Hello.” When it looks like this, she’s trying to say, “Help,” and so on and so forth.
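To picture what that training data looks like, here is a toy Python sketch: each example pairs synthetic “brain activity” with the word Ann was asked to attempt, and a simple nearest-centroid classifier stands in for the much larger neural networks the team actually trains. All names and data below are invented for illustration.

```python
# Toy supervised training setup: pair the activity recorded during each
# attempt with the word being attempted, then learn one average pattern
# ("centroid") per word. The data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
words = ["hello", "help"]

# Synthetic "brain activity": attempts at the same word cluster around a
# shared underlying pattern, plus noise.
patterns = {w: rng.normal(size=16) for w in words}
train_X = np.array([patterns[w] + 0.3 * rng.normal(size=16)
                    for w in words for _ in range(20)])
train_y = np.array([w for w in words for _ in range(20)])

# "Training": average the activity recorded across attempts at each word.
centroids = {w: train_X[train_y == w].mean(axis=0) for w in words}


def decode(activity: np.ndarray) -> str:
    """Return the word whose learned pattern is closest to this activity."""
    return min(centroids, key=lambda w: np.linalg.norm(activity - centroids[w]))


# A new, unseen attempt at "help" should decode correctly.
new_attempt = patterns["help"] + 0.3 * rng.normal(size=16)
print(decode(new_attempt))  # expected: help
```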
Anne Brice: That makes a lot of sense. Because we were wondering, can it pick up your errant thoughts? But actually, you have to really be trying to say something.
Gopala Anumanchipalli: Exactly. We didn’t want to read her mind. We really wanted to give her the agency to do this. So we don’t want to randomly read. In some sessions where she’s doing nothing, we have the decoder running, and it does nothing because she’s not trying to say anything. Only when she’s attempting to say something do we hear a sound or action command.
(Music: “Syriah” by Blue Dot Sessions)
Anne Brice (narration): OK, so to reiterate, this brain-computer interface is not reading all thoughts in a person’s mind. Rather, it’s picking up strong signals when a person tries to talk.
But how realistic is it, really? Does it sound and look just like Ann? Or is it more rudimentary and robotic? The answer, at least at this point, is somewhere in between.
When you watch a video of Ann speaking with the brain-computer interface when she first joined the clinical trial, you can hear her voice piecing together words in singsongy tones, but it’s not seamless. Here she is speaking in the clinical trial in 2023.
Recording of Ann speaking using the brain-computer interface during the clinical trial in 2023 (courtesy of the Anumanchipalli Lab/UC Berkeley and the Chang Lab/UCSF): What do you think of my artificial voice? Tell me about yourself. I am doing well today.
Anne Brice (narration): You can’t hear it here, but there was about an eight-second delay between the prompt and when the avatar spoke.
But just last month, the team published new research in Nature Neuroscience that shows a dramatic decrease in this delayed response.
(Music fades out)
Gopala Anumanchipalli: The paper from 2023, it had this sequence-to-sequence architecture, meaning we had to wait for an entire attempt of a sentence and then convert that sentence to sound or movement and so on. Whereas now we are working with this streaming architecture, much like how we’re speaking right now. We are not speaking a full sentence and then producing it; we are speaking on the fly.
Anne Brice (narration): Now the models are listening in and translating, relaying all the information and converting it to sound in real time with only about a one-second delay.
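The difference between the two approaches can be sketched schematically in a few lines of Python. The functions below are placeholders, not the team’s models; the point is only the structural contrast: a sequence-to-sequence decoder waits for the whole attempted sentence before producing anything, while a streaming decoder emits output as each chunk of neural data arrives.

```python
# Schematic contrast between sentence-at-a-time and streaming decoding.
# The "decoding" here is a trivial placeholder (uppercasing); only the
# difference in when output becomes available is the point.
from typing import Iterable, Iterator, List


def seq2seq_decode(neural_chunks: List[str]) -> List[str]:
    """Wait for the entire attempt, then decode it in one pass."""
    full_recording = list(neural_chunks)                 # blocks until the attempt ends
    return [chunk.upper() for chunk in full_recording]   # stand-in for decoding


def streaming_decode(neural_chunks: Iterable[str]) -> Iterator[str]:
    """Decode each chunk as it arrives, so output starts almost immediately."""
    for chunk in neural_chunks:
        yield chunk.upper()                              # stand-in for incremental decoding


attempt = ["what", "do", "you", "think"]  # stand-in chunks of one attempted sentence

print(seq2seq_decode(attempt))            # output appears only after the whole attempt
for word in streaming_decode(attempt):    # output appears word by word
    print(word)
```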
In the 2023 study, the avatar moves its mouth when Ann’s talking, and makes little movements when she’s asked to make a face, like a smile or a frown. Although the avatar wasn’t used in the most recent study, researchers believe the streaming architecture will work with the avatar, too.
The avatar looks kind of like Ann, but it’s not a strong resemblance. In the near future, though, Anumanchipalli says it’s possible there could be 3D photorealistic avatars.
Gopala Anumanchipalli: We can imagine that we can completely create a digital clone that is very much plugged in. And of course, the user will have all the preferences, like how Zoom lets us have all these effects like, you know — airbrushing and all that, all of this is possible.
Anne Brice: Wow, when do you imagine … things are just advancing so fast, I’m curious, do you think that’d be possible in, like, five years?
Gopala Anumanchipalli: I definitely think so. I definitely think so. But, you know, that’s the thing, we need multiple avenues with which developments must happen. One is the breakneck speed of research that is happening in AI in enabling all these avatar kinds of worlds.
Also, the groundbreaking research that we have been involved in with respect to prosthesis development, and also the neuroscience of it. All of these three have to happen hand in hand.
It’s not something that we have off-the-shelf models that we can use now. So the development must happen in the science, in the technology, in the clinical translation, as well — all of them together to make this happen.
(Music: “Noe Noe” by Blue Dot Sessions)
Anne Brice (narration): In February 2024, Ann had her implant removed for a reason unrelated to the trial. But she continues to communicate with the research team.
Gopala Anumanchipalli: Ann is very eloquent. She sends us very elaborate emails using her current technology on what she felt, how it went and what she likes and what she prefers to see.
Anne Brice (narration): She enjoyed hearing her own voice, she told them, and the streaming synthesis approach made her feel in control. She also wants the implants to be wireless, instead of plugged into a computer — something the research team is working on.
(Music comes up, then fades down)
Anne Brice: Thinking further in the future, how do you imagine it maybe working? Do you imagine a person, in real time, communicating exactly what they want with people around them?
Gopala Anumanchipalli: Well, it’s hard to predict (laughs), but what I’m seeing are innovations that enable us to let people have the best quality of their lives. And if that means they have a digital version of themselves communicating for them, that’s what they need to be able to do.
What we are working on are much better models that are faster, low-footprint and don’t need a lot of training time to get there. Like these, you know, let’s say universal models. Take a robotic arm, for example: you have prosthetic hands that one could buy, and people can learn to use them and so on. We need neuroprostheses to be that plug-and-play, so that they become a standard of care and not, like, a research experiment. That’s where we need to be.
The whole metaverse is one way of thinking about it, where there is a digital avatar of yourself, where you can be more fully embodied, where you’re communicating and interacting. Like, you can participate in a Zoom call, for example, with your avatar, and actually be as able-bodied as the rest of us in a Zoom call can be.
(Music: “Hedgeliner” by Blue Dot Sessions)
Anne Brice (narration): When writing about her stroke, Ann has said, quote, “Overnight, everything was taken away from me.”
But she’s come to realize that she can use her own experiences to help others. She now wants to become a counselor in a physical rehabilitation facility, ideally one day using a neuroprosthesis to talk with her clients.
She’s written, quote, “I want patients there to see me and to know their lives are not over now. I want to show them that disabilities don’t need to stop us or slow us down.”
(Music)
I’m Anne Brice, and this is Berkeley Voices, a UC Berkeley News podcast from the Office of Communications and Public Affairs. This episode was produced by Kara Manke and me. Script editing by Tyler Trykowski and Gretchen Kell. Music by Blue Dot Sessions.
You can find Berkeley Voices wherever you listen to podcasts, including YouTube @BerkeleyNews.
To watch a video of Ann using the brain-computer interface, and to read a transcript of the episode, visit UC Berkeley News at news.berkeley.edu/podcasts. There’s a link to the story in our show notes.
This was the seventh episode of our series on transformation. In eight episodes, we’re exploring how transformation — of ideas, of research, of perspective — shows up in the work that happens every day at UC Berkeley. New episodes of the series come out on the last Monday of each month. Our last episode of the series will come out at the end of May.
We also have another show, Berkeley Talks, that features lectures and conversations at Berkeley. You can find all of our podcast episodes on UC Berkeley News at news.berkeley.edu/podcasts.
(Music fades out)