Here are some fascinating observations from Dils and Boroditsky, who ask:
To what extent is hearing a story about something similar to really witnessing it? What is the nature of the representations that arise in the course of normal language processing? Do people spontaneously form visual mental images when understanding language, and if so how truly visual are these representations?
We test whether processing linguistic descriptions of motion produces sufficiently vivid mental images to cause direction-selective motion adaptation in the visual system (i.e., cause a motion aftereffect illusion). We tested for motion aftereffects (MAEs) following explicit motion imagery, and after processing literal or metaphorical motion language (without instructions to imagine). Intentionally imagining motion produced reliable MAEs. The aftereffect from processing motion language gained strength as people heard more and more of a story (participants heard motion stories in four installments, with a test after each). For the last two story installments, motion language produced reliable MAEs across participants. Individuals differed in how early in the story this effect appeared, and this difference was predicted by the strength of an individual’s MAE from imagining motion. Strong imagers (participants who showed the largest MAEs from imagining motion) were more likely to show an MAE in the course of understanding motion language than were weak imagers. The results demonstrate that processing language can spontaneously create sufficiently vivid mental images to produce direction-selective adaptation in the visual system. The timecourse of adaptation suggests that individuals may differ in how efficiently they recruit visual mechanisms in the service of language understanding. Further, the results reveal an intriguing link between the vividness of mental imagery and the nature of the processes and representations involved in language understanding.
Here is their description of motion aftereffects and their measurement.
The MAE arises when direction-selective neurons in the human visual area MT+ complex lower their firing rate as a function of adapting to motion in their preferred direction. The net difference in the firing rate of neurons selective for the direction of the adapting stimulus relative to those selective for the opposite direction of motion produces a motion illusion. For example, after adapting to upward motion, people are more likely to see a stationary stimulus or a field of randomly moving dots as moving downward, and vice versa. To quantify the size of the aftereffect, one can parametrically vary the degree of motion coherence in the test display of moving dots. The amount of coherence necessary to null the MAE provides a nice measure of the size of the aftereffect produced by the adapting stimulus.
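The nulling procedure they describe can be sketched as a simple adaptive staircase: add physical coherence in the adapting direction until it cancels the illusory motion, and take the convergence point as the MAE's size. Below is a minimal simulation of that idea. All the specifics here — the simulated observer, the logistic response function, the step size, and the parameter values — are illustrative assumptions, not the authors' actual method or data.

```python
import math
import random

def observer_response(coherence, mae_strength, slope=20.0, rng=random):
    """Hypothetical adapted observer (parameters are assumptions, not values
    from the paper). Returns True if the observer reports motion in the
    illusory (aftereffect) direction. Physical coherence in the adapting
    direction counteracts the aftereffect's bias."""
    p = 1.0 / (1.0 + math.exp(-slope * (mae_strength - coherence)))
    return rng.random() < p

def nulling_staircase(mae_strength, trials=200, step=0.02, seed=0):
    """Simple 1-up/1-down staircase: raise the test coherence when the
    illusion still dominates the report, lower it when it is overcome.
    The staircase settles near the coherence that nulls the MAE, which
    serves as the measure of aftereffect size."""
    rng = random.Random(seed)
    coherence, reversals, last = 0.0, [], None
    for _ in range(trials):
        saw_illusion = observer_response(coherence, mae_strength, rng=rng)
        coherence += step if saw_illusion else -step
        if last is not None and saw_illusion != last:
            reversals.append(coherence)
        last = saw_illusion
    # Average the later reversal points as the nulling-coherence estimate.
    tail = reversals[len(reversals) // 2:]
    return sum(tail) / len(tail)
```

Running `nulling_staircase(0.15)` returns an estimate close to the simulated aftereffect strength of 0.15, since a 1-up/1-down staircase converges on the point where the illusory and physical motion signals cancel.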