Rogalsky et al. provide a much more detailed picture of which brain activations overlap and which differ during the processing of speech versus music. Here is their abstract, followed by a figure from the paper:
Language and music exhibit similar acoustic and structural properties, and both appear to be uniquely human. Several recent studies suggest that speech and music perception recruit shared computational systems, and a common substrate in Broca's area for hierarchical processing has recently been proposed. However, this claim has not been tested by directly comparing the spatial distribution of activations to speech and music processing within subjects. In the present study, participants listened to sentences, scrambled sentences, and novel melodies. As expected, large swaths of activation for both sentences and melodies were found bilaterally in the superior temporal lobe, overlapping in portions of auditory cortex. However, substantial nonoverlap was also found: sentences elicited more ventrolateral activation, whereas the melodies elicited a more dorsomedial pattern, extending into the parietal lobe. Multivariate pattern classification analyses indicate that even within the regions of blood oxygenation level-dependent response overlap, speech and music elicit distinguishable patterns of activation. Regions involved in processing hierarchical aspects of sentence perception were identified by contrasting sentences with scrambled sentences, revealing a bilateral temporal lobe network. Music perception showed no overlap whatsoever with this network. Broca's area was not robustly activated by any stimulus type. Overall, these findings suggest that basic hierarchical processing for music and speech recruits distinct cortical networks, neither of which involves Broca's area. We suggest that previous claims are based on data from tasks that tap higher-order cognitive processes, such as working memory and/or cognitive control, which can operate in both speech and music domains.
Figure - regions selective for speech versus music. Speech stimuli selectively activate more lateral regions in the superior temporal lobe bilaterally, while music stimuli selectively activate more medial anterior regions on the supratemporal plane, extending into the insula, primarily in the right hemisphere. (This apparently lateralized pattern for music does not mean that the right hemisphere preferentially processes music stimuli, as is often assumed. An analysis in the paper also shows that music activates both hemispheres rather symmetrically; the lateralization effect lies in the relative activation patterns for music versus speech.)
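The abstract's mention of multivariate pattern classification refers to decoding the stimulus condition from the spatial pattern of voxel responses within a region, rather than from its overall activation level. Below is a minimal, hypothetical sketch of that kind of analysis on simulated data using scikit-learn; the data shapes, variable names, and choice of a linear classifier are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of an MVPA-style decoding analysis: can a classifier tell
# speech trials from music trials using the pattern of voxel responses in an
# ROI where both conditions activate? Simulated data only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated per-trial activation estimates (trials x voxels) for an ROI in
# which speech and music both produce above-baseline BOLD responses.
n_trials_per_condition, n_voxels = 40, 200
speech_patterns = rng.normal(loc=0.5, scale=1.0, size=(n_trials_per_condition, n_voxels))
music_patterns = rng.normal(loc=0.3, scale=1.0, size=(n_trials_per_condition, n_voxels))

X = np.vstack([speech_patterns, music_patterns])
y = np.array([0] * n_trials_per_condition + [1] * n_trials_per_condition)  # 0 = speech, 1 = music

# Linear classifier with cross-validation: accuracy reliably above chance
# implies the two conditions evoke distinguishable spatial patterns even
# where their mean activation overlaps.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

In the paper's actual analysis the inputs would be estimated voxel responses from the overlap regions for each subject, but the logic is the same: successful decoding shows that overlap in where the brain responds does not mean overlap in how it responds.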