
Research News Brief

By Sarah Yang


A digest of new and noteworthy research to complement UC Berkeley press releases. A complete archive of all campus research news is available online.

‘Nuff said: Humans get the gist of complex sounds

Berkeley — New research by neuroscientists at UC Berkeley suggests that the human brain is not detail-oriented, but opts for the big picture when it comes to hearing.

Listening to music

When faced with many different sounds, the human brain summarizes what it hears to get the gist, according to a new UC Berkeley study.

Researchers found that when faced with many different sounds, such as notes in a violin melody, the brain doesn’t bother processing every individual pitch, but instead quickly summarizes them to get an overall gist of what is being heard.

The study, published today (Wednesday, June 12) in the journal Psychological Science, could potentially improve the ability of hearing aids to help people tune into one conversation when multiple people are talking in the background, something people with normal hearing do effortlessly. Also, if speech recognition software programs could emulate the information compression that takes place in the human brain, they could represent a speaker’s words with less processing power and memory.

In the study, participants could accurately judge the average pitch of a brief sequence of tones. Surprisingly, however, they had difficulty recalling information about individual tones within the sequence, such as when in the sequence they had occurred.

“This research suggests that the brain automatically transforms a set of sounds into a more concise summary statistic — in this case, the average pitch,” said study lead author Elise Piazza, a UC Berkeley Ph.D. student in the Vision Science program. “This transformation is a more efficient strategy for representing information about complex auditory sequences than remembering the pitch of each individual component of those sequences.”
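The compression Piazza describes can be pictured with a toy sketch (not code from the study): instead of storing every pitch in a sequence, a listener keeps only one summary statistic, the average. The note values below are hypothetical.

```python
def summarize_pitches(pitches_hz):
    """Compress a sequence of tone pitches (in Hz) into a single
    summary statistic: the mean pitch."""
    return sum(pitches_hz) / len(pitches_hz)

# Hypothetical four-note violin melody (pitches in Hz)
melody = [440.0, 494.0, 523.0, 587.0]

# One number stands in for the whole sequence -- the "gist"
average_pitch = summarize_pitches(melody)
print(average_pitch)
```

In this picture, the average is easy to report, while details of the individual tones, such as their order, are discarded, mirroring what the study's participants could and could not recall.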

Other UC Berkeley co-authors on this study are Timothy Sweeny, postdoctoral researcher in psychology; David Wessel, professor of music; Michael Silver, associate professor of optometry and neuroscience; and David Whitney, associate professor of psychology.
