Monday, March 23, 2009

Musical training enhances detection of emotional components of speech.

As a follow-up to my Feb. 25 post, I pass on work by Kraus and collaborators at Northwestern Univ., who tested 30 young adults in three categories: those with no musical training, those who began learning a musical instrument before age 7, and those who started later but had at least 10 years of training. The scientists hooked them up to electrodes that recorded the auditory brainstem's response to a quarter-second of an emotion-laden sound: an infant's wail (see the figure below). The subjects with the most musical experience responded fastest to the sound, with those who had practiced since early childhood showing the strongest response to the parts of the cry in which timing, pitch, and timbre were most complex. Non-musicians did not pick up on this fine-grained information in the signal.
Here is their abstract, followed by a figure:
Musicians exhibit enhanced perception of emotion in speech, although the biological foundations for this advantage remain unconfirmed. In order to gain a better understanding of the influences of musical experience on neural processing of emotionally salient sounds, we recorded brainstem potentials to affective human vocal sounds. Musicians showed enhanced time-domain response magnitude to the most spectrally complex portion of the stimulus and decreased magnitude to the more periodic, less complex portion. Enhanced phase-locking to stimulus periodicity was likewise seen in musicians' responses to the complex portion. These results suggest that auditory expertise engenders both enhancement and efficiency of subcortical neural responses that are intricately connected with acoustic features important for the communication of emotional states. Our findings provide the first biological evidence for behavioral observations indicating that musical training enhances the perception of vocally expressed emotion, in addition to establishing a subcortical role in the auditory processing of emotional cues.
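To make the abstract's two measures concrete: "time-domain response magnitude" is typically computed as an RMS amplitude over a response window, and "phase-locking to stimulus periodicity" can be read as spectral energy in the response at the stimulus F0 (∼280 Hz here, per the figure caption below). The Python sketch that follows is my own illustration of those two quantities on toy signals; the function names, sampling rate, and stand-in waveforms are assumptions, not the authors' analysis pipeline, which the abstract does not specify.

import numpy as np

def phase_locking_at_f0(response, fs, f0=280.0):
    """Spectral amplitude of the response at the bin nearest the
    stimulus F0 -- a simple proxy for phase-locking to periodicity."""
    windowed = response * np.hanning(len(response))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - f0))])

def rms_magnitude(response):
    """Time-domain response magnitude as root-mean-square amplitude."""
    return float(np.sqrt(np.mean(np.square(response))))

# Toy comparison standing in for the two stimulus subsections.
fs = 20000                              # assumed sampling rate (Hz)
t = np.arange(0, 0.03, 1.0 / fs)
periodic = np.sin(2 * np.pi * 280 * t)  # stand-in for the periodic portion
complex_ = np.random.default_rng(0).normal(size=t.size)  # stand-in for the complex portion
for name, seg in (("periodic", periodic), ("complex", complex_)):
    print(name, round(phase_locking_at_f0(seg, fs), 2), round(rms_magnitude(seg), 3))

On these toy signals, the periodic segment shows a large amplitude at 280 Hz while the noise-like segment does not. The paper's actual comparison is between groups of listeners, but the sketch shows what the two measurements themselves capture.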


Fig. 1. Stimulus and grand average response waveforms. Response waveforms have been shifted back in time (∼7 ms) to align the stimulus and response onsets. Boxes delineate two stimulus subsections and the corresponding brainstem responses. The first subsection (112–142 ms) corresponds to the periodic portion and the second (145–212 ms) to the more complex portion. (A) Stimulus time-amplitude waveform. (B) Stimulus spectrogram. The stimulus F0 is superimposed as a highlighted line (∼280 Hz, left axis), with higher-frequency spectral components plotted between white dotted lines (right axis). Although the F0 is detectable during the first section, the greater acoustic complexity of the second section results in the inability of the sound-analyzing software (Praat) to track the F0. The harmonics are likewise more aperiodic. (C) The averaged responses of musicians (MusYrs) and non-musicians (NonMus). Major peaks (β, 1 and 2) are labeled above the waveform.
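A note on the Praat remark above: pitch trackers generally look for strong periodicity, e.g., a prominent peak in a segment's autocorrelation, and they fail by design when a segment is aperiodic. The toy estimator below is my own sketch, not Praat's actual (more robust) algorithm, but it shows the behavior.

import numpy as np

def estimate_f0_autocorr(segment, fs, fmin=100.0, fmax=500.0, threshold=0.5):
    """Crude autocorrelation pitch tracker: returns an F0 estimate in Hz,
    or None when no lag shows strong enough periodicity."""
    segment = segment - segment.mean()
    ac = np.correlate(segment, segment, mode="full")[len(segment) - 1:]
    ac = ac / ac[0]                      # normalize so lag 0 equals 1
    lag_min = int(fs / fmax)             # shortest candidate pitch period
    lag_max = int(fs / fmin)             # longest candidate pitch period
    best = int(np.argmax(ac[lag_min:lag_max])) + lag_min
    if ac[best] < threshold:             # too aperiodic: no reliable pitch
        return None
    return fs / best

fs = 20000
t = np.arange(0, 0.03, 1.0 / fs)
print(estimate_f0_autocorr(np.sin(2 * np.pi * 280 * t), fs))                   # ~280 Hz
print(estimate_f0_autocorr(np.random.default_rng(1).normal(size=t.size), fs))  # None

The periodic test signal returns roughly 280 Hz while the noise returns None, mirroring how Praat loses the F0 track in the aperiodic second subsection of the cry.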

