In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.
Monday, March 26, 2012
Emotion in Eastern and Western Music Mirrors Vocalization
Further interesting work from Purves and his colleagues:
With all due respect, this doesn't seem interesting at all, because there's nothing surprising about it. In fact, it's just a complex way of confirming something that I'd say most people already believe about the world. After all, we can listen to music from an African aboriginal tribe and pretty much tell whether a song is happy or sad.
Tim--
I see your point, and it is a common observation when talking about parallels between affect in music and language. The authors undersell their punchline by pointing out that we perhaps use the same mechanisms to pick out emotions in both music and speech. The real juice here is the further description of the exact "musical" events that might correlate with dynamic acoustical speech signals. In my view, the research bears on understanding how the acoustical properties of sounds may convey affective information. Moreover, the research also attempts to add speech and music "contexts" in order to better generalize the results.
Ultimately, such affective processing may tell us more about the higher-level mechanisms employed when decoding complex auditory environments. But this type of research needs a starting point, and that's the value of the article.
On a side note, I would like to point out the authors' heinous omission of David Huron's work on the subject.
I knew it! And apart from the important specifics that Brandon points out to Tim, I think the conclusion is much less obvious, especially for diehards on the Nurture side of the Nature/Nurture debate. It's not just cultural. As a trained musician, I came to believe strongly that as physical beings, our bodies respond to vibrations inseparably from our emotional and cognitive experiences of sound. The phenomena for which we use language like emotion, thought, or even soul and spirit are always embodied. The Mind/Body duality is not real.