To convincingly show that new perceptual meanings for sensory signals can be learned automatically, one needs an “invisible visual signal,” that is, a signal that is sensed but that has no effect on visual appearance. The gradient of vertical binocular disparity, created by 2% vertical magnification of one eye's image (the eye of vertical magnification [EVM]), can be such a signal. In several control experiments, we ensured that EVM could not be seen by the participants.
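To see why a uniform vertical magnification produces a disparity *gradient*, note that a point at vertical eccentricity y in the magnified eye is displaced to 1.02·y, so the inter-ocular vertical disparity grows linearly with distance from the screen center. A minimal sketch (not the authors' code; the function name and units are illustrative assumptions):

```python
# Sketch: vertical disparity induced by 2% vertical magnification
# of one eye's image (the EVM). Units are degrees of visual angle.
MAGNIFICATION = 1.02  # 2% vertical magnification

def vertical_disparity(y_deg, magnification=MAGNIFICATION):
    """Vertical disparity between the eyes for a point at vertical
    eccentricity y_deg, when one eye's image is vertically scaled.
    The magnified eye sees the point at magnification * y_deg, so the
    disparity is (magnification - 1) * y_deg: zero at the center and
    growing linearly with eccentricity -- a constant gradient."""
    return (magnification - 1.0) * y_deg

# The gradient (disparity per degree of eccentricity) is constant:
gradient = vertical_disparity(1.0) - vertical_disparity(0.0)  # ~0.02
```

Because the disparity is zero at fixation and changes smoothly across the image, the signal is present everywhere yet produces no visible change in the stimulus, which is what makes it a candidate "invisible visual signal."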
The stimulus we used was a horizontal cylinder rotating either front side up or front side down. In its basic form, the cylinder was defined by horizontal lines with fading edges. The lines moved up and down on the screen, thereby creating the impression of a rotating cylinder with ambiguous rotation direction, so participants perceived it rotating sometimes as front side up and sometimes as front side down.
We tested whether the signal created by 2% vertical magnification could be recruited to control the perceived rotation direction of this ambiguously rotating cylinder. To do so, we exposed participants to a new contingency. We used a disambiguated version of the cylinder that contained additional depth cues: dots provided horizontal disparity, and a rectangle occluded part of the farther surface of the cylinder. These cues disambiguated the perceived rotation direction of the cylinder. In training trials, we exposed participants to cylinder stimuli in which EVM and the unambiguously perceived rotation direction were contingent upon one another. To test whether EVM had an effect on the perceived rotation direction of the cylinder, we interleaved these training trials with probe trials that had ambiguous rotation direction. If participants recruited EVM to the new use, then perceived rotation direction on probe trials would come to depend on EVM. If participants did not recruit EVM, then perceived rotation direction would be independent of EVM.
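The training/probe design described above can be sketched as a trial-sequence generator. This is a hypothetical illustration of the logic, not the authors' protocol; the trial counts, eye labels, and the particular EVM-to-rotation mapping are assumptions:

```python
# Sketch of the contingency design: training trials pair the EVM with
# a disambiguated rotation direction; interleaved probe trials are
# ambiguous, so any EVM-dependence of the reported rotation reveals
# that the signal has been recruited as a depth cue.
import random

def make_trial(evm_eye, ambiguous):
    """One cylinder trial. In training trials, disambiguating cues
    (dot disparity, occluder) fix the rotation direction, which is
    made contingent on which eye carries the magnification."""
    if ambiguous:
        return {"evm": evm_eye, "type": "probe", "rotation": "ambiguous"}
    # Assumed mapping for illustration: left-eye EVM -> front side up.
    rotation = "front-up" if evm_eye == "left" else "front-down"
    return {"evm": evm_eye, "type": "training", "rotation": rotation}

def make_block(n_training=80, n_probe=20, seed=0):
    """A block of training trials with probe trials interleaved."""
    rng = random.Random(seed)
    trials = [make_trial(rng.choice(["left", "right"]), ambiguous=False)
              for _ in range(n_training)]
    trials += [make_trial(rng.choice(["left", "right"]), ambiguous=True)
               for _ in range(n_probe)]
    rng.shuffle(trials)  # interleave probes among training trials
    return trials
```

The analysis then reduces to asking, on probe trials only, whether the reported rotation direction matches the direction that was contingent on that trial's EVM during training.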
Importantly, after exposure to the new contingency, all participants saw a majority of probe trials consistent with the rotation direction contingent with EVM during exposure—that is, the learning effect was highly significant.
Friday, November 19, 2010
Using invisible visual signals to see things.
Di Luca et al. have done an ingenious experiment demonstrating that an invisible signal can be recruited as a cue for perceptual appearance. Regularities between the 'invisible' (below perceptual threshold) signal and a perceived signal can be learned unconsciously: perception can rapidly undergo "structure learning," automatically picking up novel contingencies between sensory signals and thereby recruiting signals for novel uses during the construction of a percept. Their description of how the experiment works, quoted above, is worth stepping through.