Berkeley Voices: One brain, two languages
UC Berkeley sociolinguist Justin Davidson is part of a research team that has discovered where people who are bilingual store language-specific sounds and sound sequences in their brains.
April 16, 2024
Follow Berkeley Voices, a Berkeley News podcast about the people and research that make UC Berkeley the world-changing place that it is. Review us on Apple Podcasts.
For the first three years of Justin Davidson’s childhood in Chicago, his mom spoke only Spanish to him. Although he never spoke the language as a young child, when Davidson began to learn Spanish in middle school, it came very quickly to him, and over the years, he became bilingual.
Now an associate professor in UC Berkeley’s Department of Spanish and Portuguese, Davidson is part of a research team that has discovered where in the brain bilinguals store language-specific sounds and sound sequences. The research project is ongoing.
This is the final episode of a three-part series with Davidson about language in the U.S. Listen to the first two episodes: “A linguist’s quest to legitimize U.S. Spanish” and “A language divided.”
Anne Brice: This is Berkeley Voices. I’m Anne Brice.
For UC Berkeley sociolinguist Justin Davidson, Spanish has always been a part of who he is.
Justin Davidson: My grandparents on my mother’s side enrolled my mom in Chicago’s first pilot program of Spanish-English bilingual education. I don’t know why, but they did, because there was no cultural, linguistic, any kind of connection there.
So my mom, from K-12, kindergarten to 12th grade, was part of a single cohort. So the second graders above her, no. And the kindergartners below her, no, when she was in first grade. She was in that one cohort, where from K-12, all of their curriculum was in English and in Spanish.
And so with me, my mom spoke to me in Spanish exclusively until I was about 3, and then she stopped and reverted to English.
Anne Brice: His dad didn’t speak Spanish, and few people in their community did, so it just got easier not to be the only one speaking a language other than English.
Justin Davidson: So did I speak Spanish as a child? No. I understood some things. But then, when I learned Spanish formally in school starting in, I think, sixth or seventh grade, everything came to me quite quickly.
Anne Brice: Wow. I wonder if it was stored in your brain…
Justin Davidson: So I have plenty to say about that. (Laughs)
Anne Brice: Yeah?
[Music fades out]
Justin Davidson: Now that we have such advanced technology to do fMRI, to do PET scans, to do all sorts of imaging of the brain, we can study so many facets of language, and where and how language is processed in the brain.
Anne Brice: Davidson is an associate professor in the Department of Spanish and Portuguese. When he teaches a course in psycholinguistics and neurolinguistics, he always shares with his students a 2014 study in which researchers looked at a group of people who were born in China and adopted by French-speaking families before the age of 3.
This group of international adoptees grew up in France as monolinguals, speaking only French. Unlike French, Chinese dialects have many sets of words that are distinguished in speech only by pitch, or tone.
Justin Davidson: So the question is, how do different speakers process tone, process pitch in the brain, if they’ve been exposed to a language where that’s very linguistically meaningful, relative to languages like French, where, you know — au revoir, au revoir, au revoir (said in different tone and pitch) — same word, no difference.
Anne Brice: When the adoptees were older — between 9 and 17 years old — linguists played pseudowords, or fake words, for them while they were in fMRI machines. These words contained tones that are present in Mandarin and Cantonese and in several other Chinese dialects.
And what researchers found was that for these adoptees, brain activity appeared in the left hemisphere, where most language is processed. So their brains recognized the tones as related to language.
But for French speakers with no exposure to Chinese, activity appeared only in the right hemisphere, which handles general, nonlinguistic acoustic processing. So their brains recognized the tones as just sound, not connected to language.
Justin Davidson: And so the idea is that, from a very early age, the brain’s primarily linguistic faculties are in the left hemisphere. And for speakers whose languages use tone or pitch in a linguistically meaningful way, those are the areas of the brain where pitch and tone are processed.
And for everyone else in the world (laughs) whose pitch and tone in their language isn’t linguistically meaningful in that way, other general, sort of, cognition and acoustic processing areas of the brain are what’s used.
Anne Brice: In fact, the pattern of brain activity in the adoptees was the same as in native Chinese speakers.
So even though the adoptees, after they moved to France, had no subsequent exposure to Chinese and no conscious recollection of that language, their brains had effectively stored it — for years, if not indefinitely.
Justin Davidson: So the brain is very, sort of, geographically organized according to the languages that we’re exposed to as children. It’s fascinating.
[Music comes up, then fades out]
Anne Brice: With advanced brain imaging technologies, researchers have learned a lot about where and how language is processed in the brain — and they continue to learn more.
Since 2020, Davidson has been part of a research team at UC San Francisco that’s working with bilinguals to map where exactly in the brain particular sound features of different languages are processed and stored.
To conduct their research, the team is studying the brains of people undergoing surgery to treat certain brain conditions, like seizure disorders or chronic migraines.
Justin Davidson: One of the medical solutions for that is to implant, physically touching the brain, a piece of sort of mesh electronic wiring that allows the person with a remote control to send electronic energy, shocks to the brain to calm it down.
Anne Brice: The first phase of the project began in 2011 with two collaborators — principal investigator Edward Chang, the chief of neurosurgery at UCSF, and Keith Johnson, then a professor of linguistics at Berkeley, who’s now emeritus.
Justin Davidson: So their original work, with a National Institutes of Health grant, was working with monolingual English speakers. Those were the patients they were getting that required the surgery.
So again, they were looking at what specific areas of the brain process certain sounds, i.e., where electrical energy is sent when you hear them.
I was brought into the picture because of the type of patients that Dr. Chang was getting — there was this big influx of bilingual patients that weren’t monolingual English speakers. They were English-Mandarin. They were English-Spanish.
And so the thinking was: Gosh, you know, we’ve already studied these really interesting components of how speech sounds are processed, where geographically they’re processed in the brain, in English.
Now we can look at bilinguals. Now we can see, are monolinguals different from bilinguals, in that way? Or for bilinguals, now that they have two languages, is the way that they process sounds in those two languages the same, or different? And what about a language that they’ve not heard before?
So all sorts of new, really interesting questions about where and how the brain processes speech sound with this group of bilinguals. The majority were Spanish-speaking, so that’s where I was invited to join the team. (Laughs)
Anne Brice: During certain types of brain surgery, a patient is awake so neurosurgeons and their teams can make informed decisions in real time about how a procedure will affect a patient’s cognitive ability and language. For clinical studies like this, medical professors and students are present during the operation to coordinate the surgery alongside language tasks.
And in this study, they play audio recordings of a set of speech stimuli that Davidson helped design. The set includes word pairs that differ in how acoustically similar they are across the two languages, and in whether they share a meaning.
Justin Davidson: So for example, we have ropa. And that is not a rope. That is clothes.
Anne Brice: Or soup and sopa, which sound similar and have the same meaning in both languages.
[Music fades out]
Some of the words in the set have sounds that are acoustically similar in both languages, like “esss.” But there are also sounds and sound sequences that appear in only one of the languages. In English, for example, a lot of words start with sp, like Sprite or Spain. But Spanish doesn’t allow words to begin with sp, so an e comes first: It’s Esprite or España.
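To make that design concrete, here’s a minimal sketch in Python of how such a stimulus inventory could be organized. The words, tags and field names are illustrative assumptions, not the team’s actual materials.

```python
# Hypothetical sketch of a bilingual stimulus inventory (illustrative words and
# tags, not the team's actual list). Each item pairs a recorded token with flags
# for cross-language acoustic similarity, shared meaning, and its onset.

from dataclasses import dataclass

@dataclass
class Stimulus:
    token: str            # orthographic form of the recorded word
    language: str         # "en" or "es" (unfamiliar-language fillers omitted here)
    acoustic_match: bool  # sounds similar to a word in the other language
    meaning_match: bool   # also shares that word's meaning (cognate vs. false friend)
    onset: str            # word-initial sound sequence, e.g., "sp-" or "es-"

stimuli = [
    Stimulus("soup",   "en", acoustic_match=True, meaning_match=True,  onset="s-"),
    Stimulus("sopa",   "es", acoustic_match=True, meaning_match=True,  onset="s-"),
    Stimulus("rope",   "en", acoustic_match=True, meaning_match=False, onset="r-"),
    Stimulus("ropa",   "es", acoustic_match=True, meaning_match=False, onset="r-"),
    Stimulus("Spain",  "en", acoustic_match=True, meaning_match=True,  onset="sp-"),  # English-only onset
    Stimulus("España", "es", acoustic_match=True, meaning_match=True,  onset="es-"),  # Spanish adds e before sp
]

# Pull out the items whose sound sequences exist in only one of the languages
english_only = [s.token for s in stimuli if s.onset == "sp-"]
spanish_only = [s.token for s in stimuli if s.onset == "es-"]
print(english_only, spanish_only)  # ['Spain'] ['España']
```

Tagging items this way captures the contrasts described above: cognates like soup/sopa, false friends like rope/ropa, and onsets that only one language allows.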
Justin Davidson: And so what’s been going on, then, at UCSF is that we have these patients that are, again, bilingual in English and Mandarin and English and Spanish, and we’re playing to them these words and these snippets of sounds from both Spanish, English, Mandarin and other languages that they’re not familiar with.
Anne Brice: In doing this, the researchers are trying to answer where bilinguals store the sound inventories of their two languages, among other research questions.
Although the research is ongoing, there are some new findings from the study that the team has documented and presented at conferences.
Justin Davidson: What they found was that when it came to the location of electrical activity for hearing the sounds across the two languages, that was the same. So the sounds in English and the sounds in Spanish were all, relatively speaking, overlapping on top of each other.
But there was a very striking language-specific difference when it came to the sequences of sounds.
So when Spanish-specific clusters of sounds were just listened to, right — these people are just sitting there, they’re hearing this, they don’t have to talk. They’re just listening.
This is sort of simplifying a bit, but there’s the Spanish location for the Spanish clusters. And then there’s the English location for the English clusters. They were teased apart very readily.
[Music fades out]
Anne Brice: So this discovery means that for bilinguals, the sound sequences unique to each language, so clusters specific to Spanish and clusters specific to English, are stored in separate areas of the brain, millimeters apart.
Justin Davidson: So that’s to say, when you learn a new language, acquiring new sounds might use the same sort of neural networks that were used to acquire all the sounds you’d learned before.
But when you have enough exposure to the language to learn the patterning of the sounds, which sounds tend to appear together, the brain is attuned to that information, recognizes the pattern, and stores it separately based on the different languages. That’s fascinating.
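For a rough sense of what “stored in separate areas, millimeters apart” could look like in electrode recordings, here’s a toy computation with made-up electrode positions and made-up response strengths. It illustrates the idea of measuring spatial separation; it is not the team’s actual analysis.

```python
# Toy illustration with made-up numbers: find the "center of mass" of electrode
# responses to Spanish-specific vs. English-specific sound sequences, then
# measure how far apart the two centers sit on the cortical surface, in mm.

import numpy as np

# Hypothetical electrode positions (mm) on a small patch of cortex
electrodes = np.array([
    [10.0, 42.0],
    [12.0, 44.0],
    [14.0, 41.0],
    [16.0, 45.0],
])

# Hypothetical mean response strength of each electrode to each stimulus type
resp_spanish = np.array([0.9, 0.7, 0.1, 0.2])  # e.g., "es-" onsets
resp_english = np.array([0.1, 0.2, 0.8, 0.9])  # e.g., "sp-" onsets

def response_centroid(positions: np.ndarray, responses: np.ndarray) -> np.ndarray:
    """Response-weighted average position: where activity is concentrated."""
    weights = responses / responses.sum()
    return weights @ positions

separation_mm = np.linalg.norm(
    response_centroid(electrodes, resp_spanish)
    - response_centroid(electrodes, resp_english)
)
print(f"separation between response centers: {separation_mm:.1f} mm")  # ~2.9 mm
```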
[Music: “Gondola Blue” by Blue Dot Sessions]
The brain is an incredible pattern detector, and one of the patterns that it very clearly detects and physically — electrophysiologically — responds to is not just sounds in language, but their co-occurrence.
Children that are exposed to two languages from birth will have different sound systems and grammatical systems than other kids that were exposed to one language at birth and another language later in life. And it’s not, again, to say that one is inherently better or worse than the other, but it’s that it’s different. It’s different.
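Those co-occurrence patterns are what linguists call phonotactics. Here’s a minimal sketch, using tiny made-up word lists, of how word-initial sound sequences differ between English and Spanish:

```python
# Minimal phonotactics sketch (toy word lists, not real corpora): count which
# word-initial letter sequences each language actually uses. English allows
# "sp-" onsets; Spanish does not, which is why Spain becomes España.

from collections import Counter

english_words = ["spain", "sprite", "speak", "spoon", "sun", "sea"]
spanish_words = ["españa", "espuma", "especial", "sopa", "sol", "seda"]

def initial_onsets(words: list[str], n: int = 2) -> Counter:
    """Count the first n letters of each word as a crude stand-in for its onset."""
    return Counter(word[:n] for word in words)

print(initial_onsets(english_words))  # 'sp' is common word-initially in English
print(initial_onsets(spanish_words))  # 'es' appears instead: e is added before sp
```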
Anne Brice: Our language and the way we speak it, he says, reflect our past and present linguistic and social environments. And neurologically, our brains, too, reflect our sociolinguistic experience.
I’m Anne Brice, and this is Berkeley Voices. This is the last episode of a three-part series with Davidson about language in the United States. You can find links to the episodes in our show notes.
Berkeley Voices is a Berkeley News podcast from the Office of Communications and Public Affairs at UC Berkeley. If you like what we do, please tell a friend and leave us a review on Apple Podcasts. We also have another show, Berkeley Talks, that features lectures and conversations at Berkeley. You can find all of our podcast episodes, with transcripts and photos, on Berkeley News at news.berkeley.edu/podcasts.
[Music fades down and ends]
Read a written version of “One brain, two languages”:
For UC Berkeley sociolinguist Justin Davidson, Spanish has always been a part of who he is.
“My grandparents on my mother’s side enrolled my mom in Chicago’s first pilot program of Spanish-English bilingual education,” Davidson said. “I don’t know why, but they did, because there was no cultural, linguistic, any kind of connection there.
“So my mom, from K-12, kindergarten to 12th grade, was part of a single cohort. So the second graders above her, no. And the kindergartners below her, no, when she was in first grade. She was in that one cohort, where from K-12, all of their curriculum was in English and in Spanish.
“And so with me, my mom spoke to me in Spanish exclusively until I was about 3, and then she stopped and reverted to English.”
His dad didn’t speak Spanish, and few people in their community did, so it just got easier not to be the only one speaking a language other than English.
“So did I speak Spanish as a child?” he asked. “No. I understood some things. But then, when I learned Spanish formally in school starting in, I think, sixth or seventh grade, everything came to me quite quickly.”
“Wow,” I said. “I wonder if it was stored in your brain…”
“I have plenty to say about that,” said Davidson, an associate professor in the Department of Spanish and Portuguese.
“Now that we have such advanced technology to do fMRI, to do PET scans, to do all sorts of imaging of the brain, we can study so many facets of language, and where and how language is processed in the brain.”
When Davidson teaches a course in psycholinguistics and neurolinguistics, he always shares with his students a 2014 study in which researchers looked at a group of people who were born in China and adopted by French-speaking families before the age of 3.
This group of international adoptees grew up in France as monolinguals, speaking only French. Unlike French, Chinese dialects have many sets of words that are distinguished in speech only by pitch, or tone.
“The question is: How do different speakers process tone, process pitch in the brain, if they’ve been exposed to a language where that’s very linguistically meaningful, relative to languages like French, where, you know — au revoir, au revoir, au revoir (said in different tone and pitch) — same word, no difference?”
When the adoptees were older — between 9 and 17 years old — linguists played pseudowords, or fake words, for them while they were in fMRI machines. These words contained tones that are present in Mandarin and Cantonese and in several other Chinese dialects.
And what researchers found was that for these adoptees, brain activity appeared in the left hemisphere, where most language is processed. So their brains recognized the tones as related to language.
But for French speakers with no exposure to Chinese, activity appeared only in the right hemisphere, which handles general, nonlinguistic acoustic processing. So their brains recognized the tones as just sound, not connected to language.
“The idea is that, from a very early age, the brain’s primarily linguistic faculties are in the left hemisphere,” Davidson said. “And for speakers whose languages use tone or pitch in a linguistically meaningful way, those are the areas of the brain where pitch and tone are processed.
“And for everyone else in the world whose pitch and tone in their language isn’t linguistically meaningful in that way, other general, sort of, cognition and acoustic processing areas of the brain are what’s used.”
In fact, the pattern of brain activity in the adoptees was the same as in native Chinese speakers.
So even though the adoptees, after they moved to France, had no subsequent exposure to Chinese and no conscious recollection of that language, their brains had effectively stored it — for years, if not indefinitely.
“The brain is very, sort of, geographically organized according to the languages that we’re exposed to as children,” Davidson said. “It’s fascinating.”
With advanced brain imaging technologies, researchers have learned a lot about where and how language is processed in the brain — and they continue to learn more.
Since 2020, Davidson has been part of a research team at UC San Francisco that’s working with bilinguals to map where exactly in the brain particular sound features of different languages are processed and stored.
To conduct their research, the team is studying the brains of people undergoing surgery to treat certain brain conditions, like seizure disorders or chronic migraines.
“One of the medical solutions for that is to implant, physically touching the brain, a piece of sort of mesh electronic wiring that allows the person with a remote control to send electronic energy, shocks to the brain to calm it down,” he said.
The first phase of the project began in 2011 with two collaborators — principal investigator Edward Chang, the chief of neurosurgery at UCSF, and Keith Johnson, then a professor of linguistics at Berkeley, who’s now emeritus.
“Their original work, with a National Institutes of Health grant, was working with monolingual English speakers. Those were the patients they were getting that required the surgery.
“So again, they were looking at what specific areas of the brain process certain sounds, i.e., where electrical energy is sent when you hear them.
“I was brought into the picture because of the type of patients that Dr. Chang was getting — there was this big influx of bilingual patients that weren’t monolingual English speakers. They were English-Mandarin. They were English-Spanish.
“And so the thinking was: Gosh, you know, we’ve already studied these really interesting components of how speech sounds are processed, where geographically they’re processed in the brain, in English.
“Now we can look at bilinguals. Now we can see, are monolinguals different from bilinguals, in that way? Or for bilinguals, now that they have two languages, is the way that they process sounds in those two languages the same, or different? And what about a language that they’ve not heard before?
“So all sorts of new, really interesting questions about where and how the brain processes speech sound with this group of bilinguals. The majority were Spanish-speaking, so that’s where I was invited to join the team.”
During certain types of brain surgery, a patient is awake so neurosurgeons and their teams can make informed decisions in real time about how a procedure will affect a patient’s cognitive ability and language. For clinical studies like this, medical professors and students are present during the operation to coordinate the surgery alongside language tasks.
And in this study, they play audio recordings of a set of speech stimuli that Davidson helped design. The set includes word pairs that differ in how acoustically similar they are across the two languages, and in whether they share a meaning.
For example, soup and sopa, which sound similar and have the same meaning in both languages. Or ropa, which means clothes in Spanish, but doesn’t have a meaning in English.
Some of the words in the set have sounds that are acoustically similar in both languages, like “esss.” But there are also sounds and sound sequences that appear in only one of the languages. In English, for example, a lot of words start with sp, like Sprite or Spain. But Spanish doesn’t allow words to begin with sp, so an e comes first: It’s Esprite or España.
“What’s been going on at UCSF is that we have these patients that are bilingual in English and Mandarin and English and Spanish, and we’re playing to them these words and these snippets of sounds from both Spanish, English, Mandarin and other languages that they’re not familiar with,” said Davidson.
In doing this, the researchers are trying to answer where bilinguals store the sound inventories of their two languages, among other research questions.
Although the research is ongoing, there are some new findings from the study that the team has documented and presented at conferences.
“What they found was that when it came to the location of electrical activity for hearing the sounds across the two languages, that was the same,” Davidson said. “So the sounds in English and the sounds in Spanish were all, relatively speaking, overlapping on top of each other.
“But there was a very striking language-specific difference when it came to the sequences of sounds.
“So when Spanish-specific clusters of sounds were just listened to, right — these people are just sitting there, they’re hearing this, they don’t have to talk. They’re just listening.
“This is sort of simplifying a bit, but there’s the Spanish location for the Spanish clusters. And then there’s the English location for the English clusters. They were teased apart very readily.”
This discovery means that for bilinguals, the sound sequences unique to each language, so clusters specific to Spanish and clusters specific to English, are stored in separate areas of the brain, millimeters apart.
“So that’s to say, when you learn a new language, acquiring new sounds might use the same sort of neural networks that were used to acquire all the sounds you’d learned before,” he said.
“But when you have enough exposure to the language to learn the patterning of the sounds, which sounds tend to appear together, the brain is attuned to that information, recognizes the pattern and stores it separately based on the different languages. That’s fascinating.”
The brain is an incredible pattern detector, and one of the patterns that it very clearly detects and physically — electrophysiologically — responds to is not just sounds in language, but their co-occurrence.
“Children that are exposed to two languages from birth will have different sound systems and grammatical systems than other kids that were exposed to one language at birth and another language later in life,” Davidson said. “And it’s not, again, to say that one is inherently better or worse than the other, but it’s that it’s different. It’s different.”
Our language and the way we speak it, he says, reflect our past and present linguistic and social environments. And neurologically, our brains, too, reflect our sociolinguistic experience.