Wednesday, January 28, 2009

Seeing who we hear and hearing who we see

In an article with the title of this post, Seyfarth and Cheney make the following comments on work by Proops et al.:
Imagine that you're working in your office and you hear two voices outside in the hallway. Both are familiar. You immediately picture the individuals involved. You walk out to join them and there they are, looking exactly as you'd imagined. Effortlessly and unconsciously you have just performed two actions of great interest to cognitive scientists: cross-modal perception (in this case, by using auditory information to create a visual image) and individual recognition (the identification of a specific person according to a rich, multimodal, and individually distinct set of cues, and the placement of that individual in a society of many others). Proops, McComb, and Reby show that horses do it, too, and just as routinely, without any special training. The result, although not surprising, is nonetheless the first clear demonstration that a non-human animal recognizes members of its own species across sensory modalities. It raises intriguing questions about the origins of conceptual knowledge and the extent to which brain mechanisms in many species—birds, mammals, as well as humans—are essentially multisensory.

According to a traditional view, multisensory integration takes place only after extensive unisensory processing has occurred. Multimodal (or amodal) integration is a higher-order process that occurs in different areas from unimodal sensory processing, and different species may or may not be capable of multisensory integration... An alternative view argues that, although different sensory systems can operate on their own, sensory integration is rapid, pervasive, and widely distributed across species. The result is a distributed circuit of modality-specific subsystems, linked together to form a multimodal percept... A third view argues that many neurons are multisensory, able to respond to stimuli in either the visual or the auditory domain (for example), and capable of integrating sensory information at the level of a single neuron as long as the two sorts of information are congruent. As a result, “much, if not all, of neocortex is multisensory”. By this account, perceptual development does not occur in one sensory modality at a time but is integrated from the start.
Here is the experiment as described in the Proops et al. abstract:
...we use a cross-modal expectancy violation paradigm to provide a clear and systematic demonstration of cross-modal individual recognition in a nonhuman animal: the domestic horse. Subjects watched a herd member being led past them before the individual went out of view, and a call from that or a different associate was played from a loudspeaker positioned close to the point of disappearance. When horses were shown one associate and then the call of a different associate was played, they responded more quickly and looked significantly longer in the direction of the call than when the call matched the herd member just seen, an indication that the incongruent combination violated their expectations. Thus, horses appear to possess a cross-modal representation of known individuals containing unique auditory and visual/olfactory information. Our paradigm could provide a powerful way to study individual recognition across a wide range of species.
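
As an aside, here is a minimal sketch (in Python) of how the key comparison in an expectancy-violation design like this might be quantified. The per-horse looking times and the paired t-test below are purely illustrative assumptions, not the authors' data or analysis:

from statistics import mean
from scipy.stats import ttest_rel  # paired comparison across subjects

# Hypothetical per-horse mean looking times (seconds) toward the loudspeaker.
congruent = [1.2, 0.8, 1.5, 1.0, 1.1, 0.9]    # call matches the horse just seen
incongruent = [2.1, 1.9, 2.4, 1.6, 2.0, 1.8]  # call mismatches the horse just seen

result = ttest_rel(incongruent, congruent)
print(f"mean congruent   looking time: {mean(congruent):.2f} s")
print(f"mean incongruent looking time: {mean(incongruent):.2f} s")
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# Longer looking in the incongruent condition is the expectancy-violation
# signature: the mismatched voice suggests the horse expected a different
# individual, i.e. it holds a cross-modal representation of that associate.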

1 comment:

  1. Anonymous 6:43 PM

    Not me. I don't have a "mind's eye". I hear the people outside the office, identify the voices, but I don't picture them. I just know their names, and of course I can retrieve a lot of information about them. If I walk into the hallway I will recognize them. But I have no consciously retrievable mental image of them. (I can dream in images, and see people I know in my dreams.)