Wednesday, October 24, 2007

Our visual system is tuned to animals.

New et al. argue that the human attention system evolved category-specific selection criteria to monitor animals (including humans) in the environment. Ohman gives a nice commentary that puts the work in perspective (PDF here). Below is the abstract and a figure from New et al. (PDF of article here):

Visual attention mechanisms are known to select information to process based on current goals, personal relevance, and lower-level features. Here we present evidence that human visual attention also includes a high-level category-specialized system that monitors animals in an ongoing manner. Exposed to alternations between complex natural scenes and duplicates with a single change (a change-detection paradigm), subjects are substantially faster and more accurate at detecting changes in animals relative to changes in all tested categories of inanimate objects, even vehicles, which they have been trained for years to monitor for sudden life-or-death changes in trajectory. This animate monitoring bias could not be accounted for by differences in lower-level visual characteristics, how interesting the target objects were, experience, or expertise, implicating mechanisms that evolved to direct attention differentially to objects by virtue of their membership in ancestrally important categories, regardless of their current utility.

Sample stimuli with targets circled. Although they are small (measured in pixels), peripheral, and blend into the background, the human (A) and elephant (E) were detected 100% of the time, and the hit rate for the tiny pigeon (B) was 91%. In contrast, average hit rates were 76% for the silo (C) and 67% for the high-contrast mug in the foreground (F), yet both are substantially larger in pixels than the elephant and pigeon. The simple comparison between the elephant and the minivan (D) is equally instructive. They occur in a similar visual background, yet changes to the high-contrast red minivan were detected only 72% of the time (compared with the smaller low-contrast elephant's 100% detection rate).
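For readers unfamiliar with the change-detection ("flicker") paradigm the abstract mentions, the trial structure can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the event names, function names, and the 250 ms scene/blank durations are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch of a flicker change-detection trial timeline.
# Assumed (not from the paper): 250 ms scene and blank durations,
# and the helper names flicker_sequence / reaction_time_ms.

def flicker_sequence(n_cycles, scene_ms=250, blank_ms=250):
    """Build the on-screen event sequence for one flicker trial:
    original scene, blank, changed scene, blank, repeated n_cycles times."""
    events = []
    for _ in range(n_cycles):
        events += [("original", scene_ms), ("blank", blank_ms),
                   ("changed", scene_ms), ("blank", blank_ms)]
    return events

def reaction_time_ms(cycles_to_detect, scene_ms=250, blank_ms=250):
    """Approximate reaction time if the subject notices the change
    at the end of cycle number `cycles_to_detect`."""
    return cycles_to_detect * 2 * (scene_ms + blank_ms)

print(len(flicker_sequence(3)))   # 12 events (4 per cycle)
print(reaction_time_ms(2))        # 2000 ms
```

Because the original and changed scenes are separated by blanks, the change produces no motion transient, so detection requires attending to the changed object itself; faster detection for animals is what the study interprets as category-specific monitoring.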