Sievers et al. present a fascinating analysis. They designed an ingenious computer program in which slider bars controlled either a melody or an animated bouncing ball along five shared features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. Participants were instructed to take as much time as they needed to set the sliders to express five emotions: “angry,” “happy,” “peaceful,” “sad,” and “scared.” One group was told to express each emotion with the moving ball; the other group was told to express each emotion with music. One participant pool consisted of U.S. college students; the other came from the culturally isolated Kreung ethnic minority in northern Cambodia, whose music is formally dissimilar to Western music: it has no system of vertical pitch relations equivalent to Western tonal harmony, it is built on different scales and tunings, and it is performed on morphologically dissimilar instruments. Here is the authors' summary abstract:
Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures.
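The core of the method is that a single five-feature setting can drive either a melody or a bouncing ball. The sketch below is an illustrative reconstruction of that idea, not the authors' actual program: the class name `SliderState`, the function `render_events`, and all value ranges are assumptions I am introducing for the example. It shows how one slider configuration can yield a shared list of timed events that could be rendered as either pitches or ball positions.

```python
import random
from dataclasses import dataclass

@dataclass
class SliderState:
    """Hypothetical container for the five shared features named in the paper.
    Field names follow the abstract; the numeric ranges are my own assumptions."""
    rate: float        # events per second
    jitter: float      # 0 = perfectly regular timing, 1 = highly irregular
    direction: float   # -1 = downward drift, +1 = upward drift
    step_size: float   # magnitude of each pitch interval or ball hop
    spikiness: float   # dissonance (music) or visual spikiness (ball), 0..1

def render_events(s: SliderState, n: int = 8, seed: int = 0):
    """Turn one slider setting into a list of (onset_time, position) events.

    The same event list could be mapped to pitch (music) or to vertical
    position (bouncing ball), which is the shared-structure idea.
    """
    rng = random.Random(seed)
    t, pos, events = 0.0, 0.0, []
    for _ in range(n):
        events.append((round(t, 3), round(pos, 3)))
        base = 1.0 / s.rate                               # mean inter-onset interval
        t += base * (1 + s.jitter * rng.uniform(-0.5, 0.5))
        # Biased random walk: direction sets the drift, step_size the magnitude.
        pos += s.step_size * (s.direction + rng.uniform(-0.5, 0.5))
    return events

# An illustrative "happy" setting: fast, regular, rising, large steps, consonant.
happy = SliderState(rate=3.0, jitter=0.1, direction=1.0, step_size=2.0, spikiness=0.1)
print(render_events(happy, n=4))
```

The point of the design, as the abstract notes, is that each emotion corresponds to a distinctive region of this five-dimensional feature space, and that the same region reads as that emotion whether the output modality is sound or motion.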