New study identifies neurons that respond to pitch changes in spoken language

Source: Xinhua | 2017-08-29 07:40:12 | Editor: Yurou

SAN FRANCISCO, Aug. 28 (Xinhua) -- Researchers at the University of California, San Francisco, or UCSF, have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

Changes in vocal pitch during speech are a fundamental part of human communication, nearly as central as melody is to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

The brain's ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style. Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences - with all of this happening on a millisecond scale.

Previous studies in both humans and non-human primates have identified areas of the brain's frontal and temporal cortices that are sensitive to vocal pitch and intonation, but none have answered the question of how neurons in these regions detect and represent changes in pitch to inform the brain's interpretation of a speaker's meaning. The new study was carried out at the lab of Edward Chang, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and led by Claire Tang, a fourth-year graduate student in the Chang lab.

Chang specializes in surgeries to remove brain tissue that causes seizures in patients with epilepsy. In some cases, to prepare for these operations, he places high-density arrays of tiny electrodes onto the surface of the patients' brains, both to help identify the location triggering the patients' seizures and to map out other important areas, such as those involved in language, to make sure the surgery avoids damaging them. In the study, Tang asked 10 volunteers awaiting surgery with these electrodes in place to listen to recordings of four sentences: "Humans value genuine behavior," "Movies demand minimal energy," "Reindeer are a visual animal" and "Lawyers give a relevant opinion."

Spoken by three different synthesized voices, the sentences were designed to have the same length and construction, and could be played with four different intonations: neutral, emphasizing the first word, emphasizing the third word, or as a question. These intonation changes alter the meaning of the sentence. Tang and her colleagues monitored the electrical activity of neurons in a part of the volunteers' auditory cortices called the superior temporal gyrus (STG), which previous research had suggested plays a role in processing speech prosody. They found that some neurons in the STG could distinguish between the three synthesized speakers, primarily based on differences in their average vocal pitch range. Other neurons could distinguish between the four sentences, no matter which speaker was saying them, based on the different kinds of sounds that made up the sentences.

Yet another group of neurons could distinguish between the four different intonation patterns. These neurons changed their activity depending on where the emphasis fell in the sentence, but didn't care which sentence it was or who was saying it.
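The stimulus set described above amounts to a fully crossed design: every sentence appears in every voice with every intonation. The short Python sketch below simply enumerates that combination; the speaker and intonation labels are placeholders, since the article does not name the synthesized voices or give their synthesis parameters.

    from itertools import product

    # The four sentence frames reported in the study.
    sentences = [
        "Humans value genuine behavior",
        "Movies demand minimal energy",
        "Reindeer are a visual animal",
        "Lawyers give a relevant opinion",
    ]

    # Illustrative labels: the article does not specify the voices' identities.
    speakers = ["speaker_1", "speaker_2", "speaker_3"]
    intonations = ["neutral", "emphasis_word_1", "emphasis_word_3", "question"]

    # Fully crossed design: 4 sentences x 3 speakers x 4 intonations = 48 stimuli.
    stimuli = [
        {"sentence": s, "speaker": v, "intonation": i}
        for s, v, i in product(sentences, speakers, intonations)
    ]
    print(len(stimuli))  # 48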

To prove to themselves that they had cracked the brain's system for pulling intonation information from sentences, the team designed an algorithm to predict how a neuron's response to any sentence should change based on speaker, phonetics, and intonation, and then used this model to predict how the volunteers' neurons would respond to hundreds of recorded sentences spoken by different speakers.
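The article does not specify the algorithm, but one common way to build this kind of encoding model is a regularized linear regression from stimulus features (speaker, phonetics, intonation) to each electrode's activity. The sketch below assumes that approach; the feature dimensions and the random data standing in for recordings are placeholders.

    import numpy as np

    # Hypothetical feature matrix: each row is a time point or trial; columns
    # encode speaker identity, phonetic content, and intonation contour.
    rng = np.random.default_rng(0)
    n_samples, n_speaker, n_phonetic, n_intonation = 480, 3, 10, 4
    X = np.hstack([
        rng.integers(0, 2, (n_samples, n_speaker)),     # speaker features
        rng.normal(size=(n_samples, n_phonetic)),       # phonetic features
        rng.integers(0, 2, (n_samples, n_intonation)),  # intonation features
    ])
    y = rng.normal(size=n_samples)  # stand-in for one electrode's activity

    # Ridge regression: w = (X^T X + alpha * I)^(-1) X^T y
    alpha = 1.0
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    # Predictions for new sentences would be X_new @ w; comparing prediction
    # accuracy with and without each feature group indicates which aspects of
    # the stimulus an electrode encodes.
    y_hat = X @ w

In practice, a model like this is fit per electrode and evaluated on held-out sentences, so the prediction test on new recordings described in the article follows naturally from the same framework.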

They showed that while the neurons responsive to the different speakers were focused on the absolute pitch of the speaker's voice, the ones responsive to intonation were more focused on relative pitch: how the pitch of the speaker's voice changed from moment to moment during the recording.
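One way to make the absolute-versus-relative distinction concrete is to normalize each speaker's pitch track by that speaker's own pitch statistics. The function below is an illustrative sketch, not the study's actual formula: two speakers producing the same rising contour have different absolute pitch tracks but nearly identical relative ones.

    import numpy as np

    def absolute_and_relative_pitch(f0_hz, speaker_mean_hz, speaker_std_log):
        # Absolute pitch: raw fundamental frequency on a log scale.
        # Relative pitch: the same track normalized by the speaker's own
        # mean and spread, so it reflects moment-to-moment change rather
        # than who is speaking.
        log_f0 = np.log2(f0_hz)
        absolute = log_f0
        relative = (log_f0 - np.log2(speaker_mean_hz)) / speaker_std_log
        return absolute, relative

    # The same rising contour from a low-pitched and a high-pitched speaker.
    contour = np.array([1.0, 1.05, 1.12, 1.25])
    low = absolute_and_relative_pitch(110 * contour, 110, 0.2)
    high = absolute_and_relative_pitch(220 * contour, 220, 0.2)
    print(np.allclose(low[1], high[1]))  # True: relative pitch matches across speakers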

The findings reveal how the brain begins to take apart the complex stream of sounds that make up speech and identify important cues about the meaning of what we're hearing, Tang was quoted as saying in a news release from UCSF. "We were able to show not just where prosody is encoded in the brain, but also how, by explaining the activity in terms of specific changes in vocal pitch."
