Using a cappella to explain speech and music specialization

Researchers at The Neuro (Montreal Neurological Institute-Hospital) of McGill University created 100 a cappella recordings, each of a soprano singing a sentence. They then distorted the recordings along two fundamental auditory dimensions, spectral and temporal dynamics, and had 49 participants distinguish either the words or the melody of each song. The experiment was run with two groups, one of English speakers and one of French speakers, to enhance reproducibility and generalizability.
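The paper's exact filtering procedure is not reproduced here, but the core idea of degrading one modulation dimension at a time can be sketched in Python. The function below is an illustrative assumption, not the authors' code: it low-pass filters a spectrogram's modulations along either the time axis (blurring the fast temporal changes that carry speech content) or the frequency axis (blurring the fine spectral detail that carries melody). The function name, cutoff parameter, and STFT settings are all hypothetical choices.

```python
# Illustrative sketch (not the study's actual pipeline) of degrading a
# recording along one spectro-temporal modulation dimension at a time.
import numpy as np
from scipy.signal import stft, istft

def degrade_modulations(audio, fs, cutoff, axis):
    """Low-pass filter the spectrogram's modulations along one dimension.

    axis=1 blurs temporal modulations (tends to degrade speech content);
    axis=0 blurs spectral modulations (tends to degrade melody).
    `cutoff` is the fraction (0..1) of the modulation band to keep.
    """
    f, t, Z = stft(audio, fs=fs, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)
    # Move to the modulation domain: FFT of the magnitude spectrogram
    # along the chosen axis (time frames or frequency bins).
    M = np.fft.fft(mag, axis=axis)
    n = M.shape[axis]
    keep = int(cutoff * n / 2)
    # Zero out modulation frequencies above the cutoff, preserving the
    # conjugate-symmetric low band so the inverse transform stays real.
    idx = [slice(None)] * M.ndim
    idx[axis] = slice(keep + 1, n - keep)
    M[tuple(idx)] = 0
    mag_filtered = np.abs(np.fft.ifft(M, axis=axis))
    # Recombine with the original phase and resynthesize the waveform.
    _, out = istft(mag_filtered * np.exp(1j * phase), fs=fs, nperseg=1024)
    return out
```

Calling this with a small `cutoff` and `axis=1` mimics the temporal degradation described above, while `axis=0` mimics the spectral degradation; the study's demonstration page (linked at the end of this article) shows what the actual manipulated stimuli sound like.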

They found that for both languages, when the temporal information was distorted, participants had trouble distinguishing the speech content, but not the melody. Conversely, when spectral information was distorted, they had trouble distinguishing the melody, but not the speech. This shows that speech and melody depend on different acoustical features.

To test how the brain responds to these different sound features, the participants were then scanned with functional magnetic resonance imaging (fMRI) while they distinguished the sounds. The researchers found that speech processing occurred in the left auditory cortex, while melodic processing occurred in the right auditory cortex.

Music and speech exploit different ends of the spectro-temporal continuum

Next, they set out to test how degradation in each acoustic dimension would affect brain activity. They found that degradation of the spectral dimension only affected activity in the right auditory cortex, and only during melody perception, while degradation of the temporal dimension affected only the left auditory cortex, and only during speech perception. This shows that the differential response in each hemisphere depends on the type of acoustical information in the stimulus.

Previous studies in animals have found that neurons in the auditory cortex respond to particular combinations of spectral and temporal energy, and are highly tuned to sounds that are relevant to the animal in its natural environment, such as communication sounds. For humans, both speech and music are important means of communication. This study shows that music and speech exploit different ends of the spectro-temporal continuum, and that hemispheric specialization may be the nervous system’s way of optimizing the processing of these two communication methods.
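The spectro-temporal tuning described in those animal studies is often formalized as a spectro-temporal receptive field (STRF). Below is a minimal, illustrative sketch, assuming a Gabor-shaped STRF; the function name, parameter values, and the random stand-in spectrogram are hypothetical, intended only to show how a model neuron tuned to one combination of temporal rate and spectral scale responds to a sound.

```python
# Minimal sketch (illustrative, not from the study) of a Gabor-shaped
# spectro-temporal receptive field: a model neuron tuned to one
# temporal modulation rate (Hz) and one spectral scale (cycles/octave).
import numpy as np

def gabor_strf(rate_hz, scale_cyc_oct, t, f_oct):
    """Build an STRF tuned to `rate_hz` (temporal) and `scale_cyc_oct` (spectral)."""
    T, F = np.meshgrid(t, f_oct, indexing="ij")
    # Gaussian envelope localizes the filter in time and frequency.
    envelope = np.exp(-(T - t.mean())**2 / 0.01 - (F - f_oct.mean())**2 / 0.5)
    # Oscillating carrier sets the preferred modulation rate and scale.
    carrier = np.cos(2 * np.pi * (rate_hz * T + scale_cyc_oct * F))
    return envelope * carrier

t = np.linspace(0, 0.25, 64)          # 250 ms of time
f_oct = np.linspace(0, 4, 64)         # 4 octaves of frequency
strf = gabor_strf(rate_hz=8, scale_cyc_oct=1, t=t, f_oct=f_oct)

# The model response is the correlation of the STRF with a stimulus
# spectrogram: large when the sound contains matching modulations.
spectrogram = np.random.rand(64, 64)  # stand-in for a real spectrogram
response = np.sum(strf * spectrogram)
```

In this framing, the study's result amounts to the two hemispheres being populated by filters weighted toward opposite corners of the rate-scale plane: fast temporal rates on the left for speech, fine spectral scales on the right for melody.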

Solving the mystery of hemispheric specialization

“It has been known for decades that the two hemispheres respond to speech and music differently, but the physiological basis for this difference remained a mystery,” says Philippe Albouy, the study’s first author. “Here we show that this hemispheric specialization is linked to basic acoustical features that are relevant for speech and music, thus tying the finding to basic knowledge of neural organization.”

Their results were published in the journal Science on Feb. 28, 2020. The research was funded by a Banting fellowship to Albouy and by grants to senior author Robert Zatorre from the Canadian Institutes of Health Research and from the Canadian Institute for Advanced Research. The a cappella recordings were made with the help of McGill University's Schulich School of Music.

The experiment is demonstrated here: https://www.zlab.mcgill.ca/spectro_temporal_modulations/
