Journal of Vision (2019) 19(12):22, 1–14

Category selectivity for animals and man-made objects: Beyond low- and mid-level visual features

Chenxi He, Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
Olivia S. Cheung, Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates

Distinct concepts, such as animate and inanimate entities, comprise vast and systematic differences in visual features. While observers search for animals faster than man-made objects, it remains unclear to what extent visual or conceptual information contributes to such differences in visual search performance. Previous studies demonstrated that visual features are likely sufficient for distinguishing animals from man-made objects. Across four experiments, we examined whether low- or mid-level visual features solely contribute to the search advantage for animals by using images of comparable visual shape and gist statistics across the categories. Participants searched for either animal or man-made object targets on a multiple-item display with fruit/vegetable distractors. We consistently observed faster search performance for animal than man-made object targets. This advantage for animals was unlikely to be affected by differences in low- or mid-level visual properties, or by whether observers were explicitly told about the specific targets or were not explicitly told to search for either animals or man-made objects. Instead, the efficiency in categorizing animals over man-made objects appeared to contribute to the search advantage. We suggest that apart from low- or mid-level visual differences among categories, higher-order processes, such as categorization via interpreting visual inputs and mapping them onto distinct concepts, may be critical in shaping category-selective effects.

Introduction

We maintain stable representations of the world via interactions of incoming sensory input and existing conceptual knowledge about the world. Distinct concepts, such as animate and inanimate entities, often comprise vast and systematic differences in visual features. As the brain transforms percepts into concepts, to what extent do the mental representations that guide our behavior in searching for relevant items in an environment contain visual or conceptual information? While several recent studies have shown that the effects of visual properties may account for category effects such as animacy (e.g., Long, Störmer, & Alvarez, 2017; Zachariou, Del Giacco, Ungerleider, & Yue, 2018; see also Rice, Watson, Hartley, & Andrews, 2014), here we examine whether higher-level processes such as categorization may also influence visual search performance when low- and mid-level visual differences that have been previously shown to sufficiently distinguish between animals and man-made objects are minimized.

One animate–inanimate distinction in visual perception is that there is an advantage for the processing of animals compared with man-made objects. For instance, observers are faster at detecting animals, compared with man-made objects or other inanimate items such as plants, in complex natural scenes (New, Cosmides, & Tooby, 2007; Wang, Tsuchiya, New, Hurlemann, & Adolphs, 2015). Even when observers are not actively searching for animals, the presence of animals in a scene also appears to capture attention. For instance, change detection for inanimate targets (e.g., a leaf) was slowed when there were animal distractors in the display, compared with when there were no animal distractors (Altman, Khislavsky, Coverdale, & Gilger, 2016). The attentional advantage for animals has also been shown in other tasks examining visual search efficiency (Jackson & Calvillo, 2013; Lipp, Derakshan, Waters, & Logies, 2004; but see Levin, Takarae, Miner, & Keil, 2001), resistance to inattentional blindness (Calvillo & Hawkins, 2016; Calvillo & Jackson, 2014), and the attentional blink (Guerrero & Calvillo, 2016), using behavioral or eye-tracking measures (Yang et al., 2012).

Despite the wealth of evidence of the attentional advantage for animals compared with man-made objects, it remains unclear whether visual or conceptual aspects of the differences between animals and man-made objects may contribute to the advantage. For

Citation: He, C., & Cheung, O. S. (2019). Category selectivity for animals and man-made objects: Beyond low- and mid-level visual features. Journal of Vision, 19(12):22, 1–14, https://doi.org/10.1167/19.12.22.

Received June 13, 2019; published October 24, 2019. ISSN 1534-7362. Copyright 2019 The Authors.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

complex natural scenes, although systematic differences in low-level visual properties, such as power spectrum, luminance, and contrast, have been found between images with or without animals in the scenes (Torralba & Oliva, 2003; Wichmann, Drewes, Karl, & Gegenfurtner, 2010), such differences cannot account for human observers' rapid detection of animals in these images (Wichmann et al., 2010). Nonetheless, independent of any scene details, it is possible that visual features of the items alone are sufficient for human observers to distinguish between animals and man-made objects (e.g., Levin et al., 2001; LoBue, 2014). Specifically, animals often have curvilinear shapes, whereas man-made objects, especially tools, tend to be rectilinear or elongated in shape, which affords graspability (Almeida et al., 2014). Curvilinear and rectilinear visual features alone may be sufficient to support categorization between animals and man-made objects even for scrambled, texture-form images with or without recognizable global shape information (Long et al., 2017; Zachariou et al., 2018). Such mid-level visual features facilitate visual search performance if a target (e.g., an animal) is from a different category than the distractors (e.g., man-made objects; Long et al., 2017). Animals and man-made objects also differ in visual similarity among category members (Gerlach, Law, & Paulson, 2004), as animals tend to be more visually similar to each other (e.g., many animals have four legs and one head) than man-made objects (e.g., different tools may have few overlapping visual features; Humphreys, Riddoch, & Quinlan, 1988). Moreover, animals may also be more visually complex than man-made objects (e.g., a lion vs. a hammer; Gerlach, 2007; Moore & Price, 1999; Snodgrass & Vanderwart, 1980; Tyler et al., 2003). Taken together, it is possible that vast differences in the visual features of animals and man-made objects may drive the differential performance in visual search.

If the advantage for animals in visual search remains with images of comparable visual features between categories, it would suggest that higher-level processes such as categorization, which involves semantic processing or extracting meanings from any available visual features, may play an additional role. Here we tested this hypothesis by minimizing visual differences among the categories. Our findings would extend previous studies that found a search advantage for animals, furthering our understanding of the nature of the representations of animals and man-made objects that are critical for performance.

The current study used two approaches in an attempt to minimize visual differences between animals and man-made objects. First, we selected images that were either round or elongated in outline shape. Second, we measured gist statistics of the images and used only images with comparable gist statistics between the categories. Gist statistics describe the holistic shape and structure of an image by sampling spatial frequency and orientation information of image segments (Oliva & Torralba, 2001, 2006), which captures more complete and sophisticated visual shape properties than other low-level visual measures such as luminance, contrast, or pixel-wise measures. Gist statistics also provide information about the amount of visual detail in the images (Oliva & Torralba, 2006). More importantly, gist statistics have been shown to support behavioral performance in visual categorization, and to predict neural responses in the occipitotemporal cortex for various visual object and scene categories (e.g., Andrews, Watson, Rice, & Hartley, 2015; Bar, 2003; Loschky & Larson, 2010; Oliva & Torralba, 2001, 2007; Rice et al., 2014; Watson, Hartley, & Andrews, 2014). Here, we used images of comparable visual shape and gist statistics across different categories to examine whether the search advantage for animals would remain.

In Experiments 1A, 1B, and 2A, observers were asked to search for either an animal or a man-made object target on a multiple-item display with varied set sizes, with fruits/vegetables as additional fillers. In Experiment 2B, observers were told to search for any non-fruit/vegetable items, which could be either animals or man-made objects. Note that in addition to the comparable visual shape and gist statistics of the images we used across the three categories, fruits/vegetables were used as additional fillers because previous studies showed comparable semantic distance of fruits/vegetables to either animals or man-made objects (Bracci & Op de Beeck, 2016; Carota, Kriegeskorte, Nili, & Pulvermüller, 2017).

Our main focus was the effect of animacy on the search for a target; in other words, whether the search for animals is faster than the search for man-made objects among items with comparable visual shape and gist statistics. Apart from this main focus on the effect of the target category, in Experiments 1A and 1B we also examined whether task-irrelevant animals or man-made objects may attract attention away from the target, revealing a stimulus-driven effect (Langton, Law, Burton, & Schweinberger, 2008; Theeuwes, 1991, 1992; Yantis, 1993). Therefore, in half of the trials, an item from the non-target category (e.g., a man-made object when searching for an animal) also appeared as a distractor among the fruit/vegetable distractors. Additionally, while we do not expect animacy to be a basic feature that guides visual search (Wolfe & Horowitz, 2004; but see Levin et al., 2001), as would be indicated by a pop-out effect for animal search with a search slope of less than 10 ms/item with respect to the set size increase (Duncan & Humphreys, 1989; Theeuwes, 1993; Treisman & Gelade, 1980; but see Buetti, Cronin, Madison, Wang, & Lleras, 2016), we examined the search


performance for animal and man-made object targets across set sizes of 3, 6, and 9 in all experiments.

To anticipate the results, we found in Experiment 1A that the search was faster for animal than man-made object targets across set sizes. In Experiment 1B, we replicated the results of Experiment 1A while further ruling out the influence of low-level visual factors including luminance, contrast, and power spectrum, as these factors could affect selective attention (Moraglia, 1989; Parkhurst, Law, & Niebur, 2002; Sagi, 1988; Smith, 1962; Theeuwes, 1995; Wolfe & Horowitz, 2004, 2017). Experiments 2A and 2B further demonstrated that categorization is more efficient for animals than man-made objects, either when participants were shown the names of all target items prior to the search task to minimize the range of possible targets for both categories, or when participants were asked to detect any items that were not fruits/vegetables.

Experiments 1A and 1B

Figure 1. Sample images of animals, man-made objects, and fruits/vegetables used in Experiments 1A (A) and 1B (B). In all three categories, the overall shape of half of the items was elongated, whereas the overall shape of the remaining items was round.

Method

Participants

Fifty-six undergraduate students aged between 18 and 23 years (mean = 19.8, SD = 1.3; 34 women and 22 men) at New York University Abu Dhabi participated for either course credits or subsistence allowances; 28 participants completed Experiment 1A and 28 participants completed Experiment 1B. All participants had normal or corrected-to-normal vision. All experiments were approved by the New York University Abu Dhabi Institutional Review Board.

Stimuli

Figure 1A illustrates examples of the stimuli in Experiment 1A. A total of 52 items including 12 animals, 12 man-made objects, and 28 fruits/vegetables were used. Each item had 16 exemplars. All images were in grayscale. An exemplar of each item is shown in the Appendix, Figure A1. The animals and man-made objects were either target or distractor categories for each of the two participant groups. The fruits/vegetables were filler distractors. Half of the items from the three categories had round shapes (e.g., turtle, steering wheel, apple), while the other half had elongated shapes (e.g., wall lizard, butter knife, cucumber). All items had neutral valence, so any potential emotional effect on visual search should be minimized (e.g., LoBue, 2014; Öhman, Flykt, & Esteves, 2001).

Gist statistics were measured on the spatial frequency and orientation information of all segments of each image (Oliva & Torralba, 2001). Specifically, a series of Gabor filters across eight orientations and four spatial frequencies was applied to each image to generate 32 filtered images. Each filtered image was then segmented into a 4 × 4 grid within which the values were averaged per cell, resulting in 16 values. The final gist statistics for each image were made up of a vector of 512 (32 × 16) values (e.g., Oliva & Torralba, 2001; Rice et al., 2014). To compare gist statistics across shapes and categories, the values were first averaged across all exemplars for each item. Dissimilarity, indicated by the squared Euclidean distance between the gist statistics of each pair of items both within and across shapes and categories, was then calculated (Figure 2). We compared the dissimilarity between round versus elongated shapes separately for each category, and between each pair of categories separately for each shape. We found that the gist statistics were significantly different between elongated and round shapes in each category, but they were comparable across categories for either shape. Within-shape dissimilarity (e.g., a turtle and a squirrel) was significantly lower than cross-shape dissimilarity (e.g., a turtle and a wall lizard) for all categories: animals, t(64) = 5.9, p < 0.0001; man-made objects, t(64) = 12.7, p < 0.0001; fruits/vegetables, t(376) = 11.8, p < 0.0001. Conversely, there was no statistical difference between within-category dissimilarity (e.g., a turtle and a squirrel) and cross-category dissimilarity (e.g., a turtle and a steering wheel) for animals versus man-made objects: elongated,


Figure 2. Pair-wise dissimilarity (squared Euclidean distance) of gist statistics showed significant differences between elongated and
round shapes in each category, but no significant differences across animals, man-made objects, and fruits/vegetables for either
shape.
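The gist pipeline described in the Method (a Gabor filter bank at eight orientations and four spatial frequencies, energy averaged within a 4 × 4 grid, yielding a 512-value vector per image, compared via squared Euclidean distance) can be sketched in Python as follows. This is an illustrative reimplementation, not the authors' code (the original analyses follow Oliva & Torralba, 2001, presumably in MATLAB); the log-Gabor filter shape, the center frequencies, and the bandwidths used here are assumptions for demonstration only.

```python
import numpy as np

def gist_descriptor(img, n_orient=8, n_scales=4, grid=4):
    """GIST-like descriptor: Gabor-filter energy averaged on a grid.

    Returns a vector of n_scales * n_orient * grid**2 values
    (4 * 8 * 16 = 512 with the defaults, as in the paper).
    """
    h, w = img.shape
    F = np.fft.fft2(img)
    fx, fy = np.meshgrid(np.fft.fftfreq(w), np.fft.fftfreq(h))
    radius = np.hypot(fx, fy) + 1e-12          # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    feats = []
    for s in range(n_scales):
        f0 = 0.35 / (2 ** s)                   # assumed per-scale center frequency
        radial = np.exp(-np.log(radius / f0) ** 2 / (2 * 0.55 ** 2))
        for o in range(n_orient):
            t0 = np.pi * o / n_orient
            # angular distance modulo 180 degrees (orientation, not direction)
            dtheta = np.angle(np.exp(2j * (theta - t0))) / 2
            orient = np.exp(-dtheta ** 2 / (2 * (np.pi / (2 * n_orient)) ** 2))
            # filter in the frequency domain, take the energy map
            energy = np.abs(np.fft.ifft2(F * radial * orient))
            # average the energy within each cell of a grid x grid partition
            for rows in np.array_split(energy, grid, axis=0):
                for cell in np.array_split(rows, grid, axis=1):
                    feats.append(cell.mean())
    return np.asarray(feats)

def gist_dissimilarity(g1, g2):
    """Squared Euclidean distance between two gist vectors."""
    return float(np.sum((np.asarray(g1) - np.asarray(g2)) ** 2))
```

With item-level vectors averaged across exemplars, `gist_dissimilarity` applied to every pair of items gives the matrix summarized in Figure 2.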

t(64) = 0.2, p = 0.82; round, t(64) = 0.02, p = 0.99; for animals versus fruits/vegetables: elongated, t(188) = 0.4, p = 0.71; round, t(188) = 0.6, p = 0.52; and for man-made objects versus fruits/vegetables: elongated, t(188) = 0.6, p = 0.58; round, t(188) = 0.6, p = 0.58. These results indicated that for either elongated or round shapes, there was no systematic difference across categories in the visual properties quantified by gist statistics.

Low-level visual features, such as luminance, contrast, and power spectrum, could affect selective attention (Moraglia, 1989; Parkhurst et al., 2002; Sagi, 1988; Smith, 1962; Theeuwes, 1995; Wolfe & Horowitz, 2004). For the images used in Experiment 1A, the contrast and power spectrum were statistically comparable between animals and man-made objects (ps > 0.29); however, the comparisons between animals and fruits/vegetables, and between man-made objects and fruits/vegetables, were significantly different (ps < 0.01). In Experiment 1B, all images were further processed with the SHINE toolbox (Willenbockel et al., 2010; see Figure 1B) to balance low-level visual


properties including mean luminance, contrast, and power spectrum (averaged across orientations at each spatial frequency) across images of all categories.

Procedure

The experiments were run in MATLAB (MathWorks, Natick, MA) with Psychtoolbox (Brainard, 1997; Kleiner et al., 2007). On each trial, a fixation was presented at the center of the screen for 500 ms, followed by a display of 3, 6, or 9 items presented around an invisible circle (Figure 3). Participants were asked to determine whether a target was present or absent by pressing either of two keys on a keyboard as accurately and as fast as possible. Accuracy was emphasized over response speed. The stimulus display was shown until a response was made. The distance from the center of each item to the center of the screen was 5.5°, and the visual angle of each item was 3.6° × 3.6°. A chinrest was used to ensure that the viewing distance was fixed at 57 cm throughout the study.

Figure 3. Schematic sample displays with a set size of 6 for the target present–distractor absent, target present–distractor present, target absent–distractor absent, and target absent–distractor present conditions for Experiment 1A. Half of the participants were asked to search for animals, and the other half were asked to search for man-made objects.

Participants were randomly assigned to the animal or man-made object search group. All participants were explicitly instructed to search for any items from the category of interest (e.g., animals for the animal search group), but there was no mention of the identities of the other two categories (e.g., man-made objects and fruits/vegetables for the animal search group). In this way, we were able to examine the effect of attentional capture from either animal or man-made object distractors, which appeared in half of the trials, without explicitly reminding participants that three distinct categories of items would be shown in the study.

For each target-present trial, one item from the target category was presented (e.g., a squirrel for the animal search group). For each distractor-present trial, one item from the distractor category (e.g., a kitchen knife for the animal search group) was presented. The rest of the items on the displays were fruits/vegetables. Half of the trials showed image arrays of round stimuli, and the other half presented image arrays of elongated stimuli. There were 864 trials in total, with 72 trials in each Target Presence × Distractor Presence × Set Size condition.

The selection of items on each display and the trial presentation order were randomized across participants, and the randomization was matched between groups in each experiment so that participants searching for either animal or man-made object targets would see the same displays. The number of times that the targets appeared in each of the three, six, or nine possible locations on the displays was evenly distributed, while the locations of the other items were randomized. Each animal or man-made object target and distractor exemplar was presented only once for each set size during the experiment. Therefore, to minimize the possibility that the rarity of animals or tools as distractors, compared with fruits/vegetables, might account for any distractor effects, a subset of the fruit/vegetable items (6 out of 14 items from each of the elongated or round sets) was randomly selected to appear the same number of times as the animal/tool distractors. For these items, a total of 12 out of the 16 exemplars were randomly selected to maintain presentation frequencies identical to those of the animal or man-made object distractors for each participant, whereas all 16 exemplars were used for all remaining fruit/vegetable items.

Results

The mean correct response time (RT) for the target-present trials is illustrated in Figure 4, and sensitivity (d′) for all trials is shown in Table 1. Correct RT was the main measure here, as it reflected the performance for target-present trials, whereas d′ reflected the performance for both target-present and target-absent trials. For each experiment, trial outliers with RTs below 150 ms or above three standard deviations of the individual's average RT were excluded from the analyses (on average 2.1% of trials). Two three-way ANOVAs were conducted, on correct RT for target-present trials and on d′ for all trials, with a between-subjects factor of target category (animals vs. man-made objects) and two within-subjects factors of distractor presence (present vs. absent) and set size (three vs. six vs. nine).
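The two measures described above can be computed as in the following minimal Python sketch. This is not the authors' analysis code; in particular, the log-linear correction for extreme hit or false-alarm rates in `d_prime` is an assumption, since the paper does not state how rates of 0 or 1 were handled.

```python
from statistics import NormalDist

def exclude_outliers(rts_ms, floor=150.0, n_sd=3.0):
    """Drop trials with RT below 150 ms or more than 3 SDs above
    the participant's mean RT, mirroring the exclusion rule above."""
    m = sum(rts_ms) / len(rts_ms)
    sd = (sum((r - m) ** 2 for r in rts_ms) / (len(rts_ms) - 1)) ** 0.5
    return [r for r in rts_ms if floor <= r <= m + n_sd * sd]

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    The +0.5/+1 (log-linear) correction keeps z finite for rates of
    0 or 1; this choice is an assumption, not taken from the paper.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, a participant with 90 hits, 10 misses, 10 false alarms, and 90 correct rejections scores d′ of about 2.5, while chance-level responding gives d′ = 0.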


Experiment 1A

For correct RT on target-present trials, the main effect of target category was significant, F(1, 26) = 9.0, p = 0.006, ηp² = 0.26, with animal search faster than man-made object search, indicating an overall advantage for animal search. The significant main effect of distractor presence, F(1, 26) = 6.0, p = 0.021, ηp² = 0.19, revealed longer RTs when the distractors were present relative to when they were absent. The significant main effect of set size, F(2, 52) = 138.4, p < 0.0001, ηp² = 0.84, showed that RT increased with increasing set size (Bonferroni-corrected pairwise comparisons: ps < 0.0001). None of the interactions was significant (Target Category × Distractor Presence: F(1, 26) = 1.8, p = 0.19, ηp² = 0.07; Target Category × Set Size: F(2, 52) = 0.4, p = 0.70, ηp² = 0.01; Distractor Presence × Set Size: F(2, 52) = 1.3, p = 0.28, ηp² = 0.05; three-way interaction: F(2, 52) = 0.2, p = 0.82, ηp² = 0.01). The search slopes were comparable between animal search (23.8 ms/item and 20.6 ms/item with and without the presence of man-made object distractors) and man-made object search (25.4 ms/item and 23.7 ms/item with and without the presence of animal distractors).

To examine the overall sensitivity performance on the task, we also calculated d′ on all trials. Only the main effect of set size was significant, F(2, 52) = 8.2, p = 0.001, ηp² = 0.24, with lower d′ for set size 9 compared with set sizes 3 and 6 (Bonferroni-corrected ps < 0.01), and no statistical difference between set sizes 3 and 6 (p = 0.99). All other main effects and interactions were not significant (target category: F(1, 26) = 0.01, p = 0.92, ηp² < 0.0001; distractor presence: F(1, 26) = 0.05, p = 0.83, ηp² = 0.002; Target Category × Distractor Presence: F(1, 26) = 3.1, p = 0.092, ηp² = 0.11; Target Category × Set Size: F(2, 52) = 2.1, p = 0.13, ηp² = 0.08; Distractor Presence × Set Size: F(2, 52) = 0.5, p = 0.60, ηp² = 0.02; three-way interaction: F(2, 52) = 0.2, p = 0.79, ηp² = 0.01).

Figure 4. Mean correct response times on target-present trials averaged across items, as a function of target category, distractor presence, and set size for Experiments 1A and 1B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction between distractor presence and set size. Mean correct response times on target-present trials for individual items, averaged across distractor-present and distractor-absent trials, are also shown.

Experiment 1B

Replicating Experiment 1A, for correct RT on target-present trials, the main effect of target category was

                          Animal targets                          Object targets
Set size                  3            6            9             3            6            9
Experiment 1A
  Distractor present      3.85 (0.49)  3.65 (0.47)  3.47 (0.65)   3.83 (0.56)  3.78 (0.60)  3.66 (0.56)
  Distractor absent       3.87 (0.57)  3.88 (0.58)  3.51 (0.51)   3.68 (0.56)  3.73 (0.58)  3.63 (0.48)
Experiment 1B
  Distractor present      3.30 (0.79)  3.19 (0.58)  3.19 (0.76)   3.47 (0.61)  3.35 (0.45)  3.28 (0.63)
  Distractor absent       3.33 (0.70)  3.14 (0.56)  2.90 (0.80)   3.47 (0.44)  3.40 (0.57)  3.08 (0.56)

Table 1. Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category, distractor presence, and set size for Experiments 1A and 1B.
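The search slopes reported in ms/item are the least-squares slope of mean correct RT against set size; a minimal sketch (an illustration, not the authors' code):

```python
def search_slope(set_sizes, mean_rts_ms):
    """Least-squares slope (ms/item) of mean correct RT against set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den
```

For example, hypothetical mean RTs of 600, 672, and 744 ms at set sizes 3, 6, and 9 give a slope of 24 ms/item, in the range of the slopes observed for animal targets here; a slope below about 10 ms/item would instead indicate pop-out search.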


significant, F(1, 26) = 8.3, p = 0.008, ηp² = 0.24, with animal search faster than man-made object search. The main effect of set size was also significant, F(1.43, 37.07) = 91.4, p < 0.0001, ηp² = 0.78, showing longer search times with increasing set size (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction approached significance, F(2, 52) = 2.6, p = 0.08, ηp² = 0.09, but all other main effects and interactions were not significant (distractor presence: F(1, 26) = 0.5, p = 0.48, ηp² = 0.02; Target Category × Distractor Presence: F(1, 26) = 0.5, p = 0.48, ηp² = 0.02; Distractor Presence × Set Size: F(2, 52) = 0.1, p = 0.92, ηp² = 0.003; three-way interaction: F(2, 52) = 2.0, p = 0.15, ηp² = 0.07). The search slopes for animal targets were 24.0 ms/item with man-made object distractors and 24.1 ms/item without such distractors. The search slopes for man-made object targets were 35.0 ms/item with the presence of animal distractors and 32.8 ms/item without animal distractors.

For d′, only the main effect of set size was significant, F(2, 52) = 6.7, p = 0.002, ηp² = 0.21, with significantly lower d′ for set size 9 compared with set size 3 (Bonferroni-corrected p < 0.01), and no statistical difference between the other set sizes (ps > 0.10). All other main effects and interactions were not significant: target category, F(1, 26) = 0.7, p = 0.41, ηp² = 0.03; distractor presence, F(1, 26) = 2.0, p = 0.17, ηp² = 0.07; Target Category × Distractor Presence, F(1, 26) = 0.2, p = 0.65, ηp² = 0.01; Target Category × Set Size, F(2, 52) = 0.1, p = 0.87, ηp² = 0.01; Distractor Presence × Set Size, F(2, 52) = 2.8, p = 0.07, ηp² = 0.10; three-way interaction, F(2, 52) = 0.2, p = 0.84, ηp² = 0.01.

Discussion

With images of comparable visual shape and gist statistics, we found in both Experiments 1A and 1B that the search for animal targets remained significantly faster than that for man-made object targets across set sizes, suggesting that the search advantage for animals is not merely driven by visual shape and gist statistics. Moreover, Experiment 1B further showed that the search advantage for animals cannot be accounted for by low-level visual features.

The presence of distractors appeared to attract attention away from the targets in Experiment 1A, suggesting that the animal or tool distractors might be processed differently from the fruit/vegetable distractors. However, this effect was not found in Experiment 1B, when the low-level visual properties were equated among the categories. It is possible that the target search in Experiment 1A might have been slowed by the presence of an animal or man-made object distractor because of the low-level visual differences between the animal or man-made object distractors and the fruit/vegetable distractors. Alternatively, the effect of actively searching for animal versus man-made object targets, which was observed in both Experiments 1A and 1B, appeared to be highly robust and independent of low-level visual differences, compared with the stimulus-driven distractor effect.

While it is unlikely that the faster search performance for animal than man-made object targets is solely due to low- and mid-level visual differences among the categories, one possible account for this difference in performance is that the categorization process based on the visual features beyond low- and mid-level differences is more efficient for animals than man-made objects. In Experiments 2A and 2B, we aimed to extend the findings to conditions in which participants were either explicitly told the names of all the target items prior to the search task, to minimize the range of possible targets, or were only asked to detect any items that were not fruits/vegetables, without being explicitly asked to search for either animal or man-made object targets. Furthermore, while different groups of participants searched for either animal or man-made object targets in Experiments 1A and 1B, Experiments 2A and 2B further tested whether search performance would remain faster for animals than man-made objects by manipulating target category as a within-subjects factor.

Experiments 2A and 2B

Method

Participants

Forty undergraduate students aged between 18 and 23 years (mean = 19.8, SD = 1.1; 23 women and 17 men) from the same participant pool as in Experiments 1A and 1B took part in this study; 20 participants completed Experiment 2A and 20 participants completed Experiment 2B.

Stimuli and procedure

The stimuli and procedure of Experiments 2A and 2B were identical to those in Experiment 1B, except for the following changes: The distractor-present trials were removed. In Experiment 2A, each participant searched for both animal and man-made object targets in separate blocks, with the task order counterbalanced across participants (i.e., half of the participants searched for animal targets first, and the other half searched for man-made object targets first). Participants studied a list of target names (12 animals or 12 man-made objects) for one minute prior to each task. In Experiment 2B, participants were asked to


detect any non-fruit/vegetable item on each trial, and there was no mention of animals or man-made objects. The presentation order of animal target trials and man-made object target trials was randomized.

Results

The mean correct response time (RT) for the target-present trials is illustrated in Figure 5, and sensitivity (d′) for all trials is shown in Table 2. As in Experiments 1A and 1B, trial outliers with RTs below 150 ms or above three standard deviations of the individual's average RT for each participant were excluded from the analyses (on average 2.2% of trials). Two two-way ANOVAs were conducted, on correct RT for target-present trials and on d′ for all trials, with two within-subjects factors of target category (animals vs. man-made objects) and set size (three vs. six vs. nine).

Experiment 2A

For correct RT on target-present trials, the main effect of target category was significant, F(1, 19) = 8.0, p = 0.011, ηp² = 0.30, with animal search faster than man-made object search. The main effect of set size was also significant, F(2, 38) = 117.5, p < 0.0001, ηp² = 0.86, showing slower search with increasing set size (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction was not significant, F(2, 38) = 1.1, p = 0.35, ηp² = 0.05, with search slopes of 33.1 ms/item for animals and 33.3 ms/item for man-made objects.

For d′, the main effect of set size was significant, F(2, 38) = 4.1, p = 0.025, ηp² = 0.18, with significantly lower d′ for set size 9 compared with set size 3 (Bonferroni-corrected p < 0.05), and no statistical difference between the other set sizes (ps > 0.10). There was no significant main effect of target category, F(1, 19) = 2.3, p = 0.15, ηp² = 0.11, or interaction, F(2, 38) = 0.5, p = 0.59, ηp² = 0.03.

Experiment 2B

For correct RT on target-present trials, the main effect of target category was significant, F(1, 19) = 9.5, p = 0.006, ηp² = 0.33, with animal search faster than man-made object search. The main effect of set size was also significant, F(2, 38) = 51.5, p < 0.0001, ηp² = 0.73, showing longer search times with increasing set size (Bonferroni-corrected ps < 0.0001). The Target Category × Set Size interaction was not significant, F(2, 38) = 0.5, p = 0.60, ηp² = 0.03, revealing no statistical

Figure 5. Mean correct response times on target-present trials averaged across items, as a function of target category and set size for Experiments 2A and 2B (bold lines). Error bars represent the 95% confidence intervals of the within-subject interaction of target category and set size. Mean correct response times on target-present trials for individual items are also shown.

                  Animal targets                          Object targets
Set size          3            6            9             3            6            9
Experiment 2A     3.31 (0.55)  3.30 (0.58)  3.14 (0.64)   3.20 (0.55)  3.15 (0.68)  2.90 (0.81)
Experiment 2B     3.54 (0.57)  3.21 (0.52)  3.11 (0.60)   3.06 (0.47)  2.80 (0.65)  2.66 (0.58)

Table 2. Mean sensitivity (d′) and standard deviations (in parentheses) as a function of target category and set size for Experiments 2A and 2B.


difference in the search slopes between animals (34.3 ms/item) and man-made objects (40.1 ms/item).

For d′, the main effect of target category was significant, F(1, 19) = 22.7, p = 0.0001, ηp² = 0.54, with higher sensitivity for animal search than for man-made object search. The main effect of set size was also significant, F(2, 38) = 14.8, p < 0.0001, ηp² = 0.44, with significantly higher d′ for Set Size 3 than for both Set Size 6 and Set Size 9 (Bonferroni-corrected ps < 0.01), and no statistical difference between Set Size 6 and Set Size 9 (p = 0.33). There was no significant interaction, F(2, 38) = 0.1, p = 0.90, ηp² = 0.01.

Discussion

Experiments 2A and 2B replicated the results of Experiments 1A and 1B: searching for animal targets was faster than searching for man-made object targets, even when visual shape, gist statistics, and other low-level visual features were comparable across categories. Experiments 2A and 2B further demonstrated that this category effect held across conditions, whether explicit prior knowledge was given about the specific target items of the animal and man-made object categories, or the animal and man-made object categories were never explicitly mentioned.

General discussion

Across four experiments that used images of comparable visual shape and gist statistics across categories, our findings extend previous research that used a variety of animal and man-made object images with naturally varied visual features within and across categories (e.g., Jackson & Calvillo, 2013; Levin et al., 2001; Long et al., 2017), showing that the representations that guide visual search performance may not solely reflect low- or mid-level visual influences. Specifically, while visual differences such as curvilinearity and gist statistics clearly play an important role in supporting visual categorization (Long et al., 2017; Loschky & Larson, 2010; Rice et al., 2014; Zachariou et al., 2018), our findings suggest that this visual information, together with other low-level visual features such as luminance, contrast, and power spectrum, cannot entirely account for the faster search performance for animals than for man-made objects. Although it is likely impossible to rule out all visual differences between animals and man-made objects, our stimulus set was designed to rule out a large set of candidate visual features, including low-level features, curvilinearity, elongation, and gist statistics; the comparable gist statistics also suggest similar levels of visual detail in the images across the categories. The present work therefore highly constrains the scope of potential visual explanations for the differences between categories. Instead, higher-level cognitive processing, such as categorization, which involves the interpretation or semantic processing of the visual features of animals or man-made objects, may contribute to the category differences and facilitate visual search performance for animals over man-made objects. These possibilities are elaborated below.

It is possible that correct categorization of the animals and man-made objects reflects different interpretations of the visual input. Even though visual shape and gist statistics were comparable between categories in this study, other visual features remained available for observers to interpret the meanings of the visual input for categorization. Previous studies have shown that the interpretation of ambiguous visual stimuli can shape neural representations in the occipitotemporal cortex. For instance, in the absence of actual facial features, face-selective activations in the fusiform "face" area (FFA) were observed when non-face images were interpreted as faces (Cox, Meyers, & Sinha, 2004; Hadjikhani, Kveraga, Naik, & Ahlfors, 2009; Summerfield, Egner, Mangels, & Hirsch, 2006). Nonetheless, successful categorization of stimuli often relies on correct interpretations of diagnostic information from the visual input. What kinds of diagnostic information may lead us to interpret an image as an animal or a man-made object? One possibility is that the binding or configuration of certain features, which relies on prior knowledge about the categories, plays a critical role. For instance, while detecting curvilinear features alone may bias observers to interpret a visual input as an animal (Long et al., 2017; Zachariou et al., 2018), various man-made objects, including several of those used in the current study, also have curvilinear features. Beyond curvilinearity, detecting an ensemble of a head, a body, and four limbs may further facilitate categorization (Delorme, Richard, & Fabre-Thorpe, 2010).

It is also important to note that the current study used a relatively diverse set of animals and man-made objects (see Appendix); thus, the results were unlikely to reflect particular characteristics of a small subset of animals or objects, and more likely reflect general knowledge about animals and man-made objects (Diesendruck & Gelman, 1999; Humphreys et al., 1988; Laws & Neve, 1999; Patterson, Nestor, & Rogers, 2007; Taylor, Devereux, Acres, Randall, & Tyler, 2012). Specifically, the items used for each category differed substantially from one another in diagnostic visual features (e.g., a squirrel vs. a dolphin). Moreover, the exemplars for each item also varied in visual appearance. To successfully categorize the target items as either animals or man-made objects, participants would need to rely on information beyond any visual features that were specific to only a few items or exemplars of a category. The use of our stimulus set therefore provides further support that our findings are likely related to general knowledge about the possible visual features of the categories.

Although the current study was not designed to examine the effects of individual items, the results showed a range of item differences (Figures 4 and 5). For animals, a continuum of animacy representation, ranging from mammals to insects, has been observed from lateral to medial occipitotemporal cortex (Connolly et al., 2012; Sha et al., 2015). However, it appears unlikely that search performance among different animals was simply explained by the animacy continuum: a follow-up observation revealed that the mammals (e.g., squirrels, mice, dolphins) were not necessarily among the fastest searches, and the insects (e.g., snails, locusts) were not necessarily among the slowest. Similarly, as man-made objects also differ in many respects, such as manipulability, size, portability, and context (Bar, 2004; Mullally & Maguire, 2011; Peelen & Caramazza, 2012), it remains unclear which aspects determine search performance for these items. Future work should examine how different visual and conceptual features of individual items contribute to the category effects; our work provides a method to address these questions.

While the search advantage for animal targets was robust and replicated across four experiments, the extent to which the animal or man-made object distractors captured attention was less clear: the distractor effect was observed in Experiment 1A but diminished in Experiment 1B. We speculated that the animal or man-made object distractors might have captured attention because their image contrasts or power spectra differed from those of the neighboring fruit/vegetable items. The failure to observe reliable distractor effects in Experiment 1B suggests that such effects might be driven primarily by low-level visual properties, and that different attentional mechanisms may underlie the active search for a target and the involuntary interference from a distractor during active search (Langton et al., 2008; Theeuwes, 1991, 1992; Yantis, 1993).

In contrast with previous studies in which visual features were not strictly controlled (Jackson & Calvillo, 2013; Levin et al., 2001), we found relatively steep search slopes for both the animal and man-made object categories, and search efficiency was comparable for the two. These results suggest that the differential search performance between the two categories in the current study is unlikely to have arisen during the pre-attentive stage, which is typically indicated by relatively shallow search slopes (Duncan & Humphreys, 1989; Theeuwes, 1993; Treisman & Gelade, 1980; Wolfe & Horowitz, 2004). Instead, the advantage for animal search observed in this study may arise from a subsequent stage of recognition or categorization. Nonetheless, it is possible that natural variations in visual differences between animals and man-made objects contributed to search efficiency for the categories in the previous studies (Jackson & Calvillo, 2013; Levin et al., 2001). Future work should address how the processing of different sources of information, such as different levels of visual and semantic features of animals and man-made objects, contributes to the differential performance between the categories in visual search. Our work thus adds to a growing understanding that semantic information may play a role in visual perception (Coren & Enns, 1993; Lupyan, 2015; Lupyan & Ward, 2013).

To sum up, using images of comparable visual shape and gist statistics across categories, the current study provides convergent evidence of faster search for animals than for man-made objects. While category-selective effects between animals and man-made objects are likely influenced by the vast visual differences between the categories, as revealed by previous studies (Long et al., 2017; Zachariou et al., 2018), we suggest that a focus purely on visual differences is incomplete. Rather, a full understanding of category-selective effects in visual perception may need to incorporate the contributions of high-level processes, such as categorization or semantic influences. More broadly, the current findings shed light on the close interactions between perceptual and semantic systems in human cognition.

Keywords: category selectivity, visual search, animacy, gist statistics, categorization

Acknowledgments

We thank Emma Wei Chen for helpful discussions, Daryl Fougnie and Garry Kong for comments on the manuscript, and Anna Noer for assistance in data collection. Declarations of interest: none.

Commercial relationships: none.
Corresponding author: Olivia S. Cheung.
Email: olivia.cheung@nyu.edu.
Address: Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates.


References

Almeida, J., Mahon, B. Z., Zapater-Raberov, V., Dziuba, A., Cabaço, T., Marques, J. F., & Caramazza, A. (2014). Grasping with the eyes: The role of elongation in visual recognition of manipulable objects. Cognitive, Affective & Behavioral Neuroscience, 14(1), 319–335, https://doi.org/10.3758/s13415-013-0208-0.

Altman, M. N., Khislavsky, A. L., Coverdale, M. E., & Gilger, J. W. (2016). Adaptive attention: How preference for animacy impacts change detection. Evolution and Human Behavior, 37(4), 303–314, https://doi.org/10.1016/j.evolhumbehav.2016.01.006.

Andrews, T. J., Watson, D. M., Rice, G. E., & Hartley, T. (2015). Low-level properties of natural images predict topographic patterns of neural response in the ventral visual pathway. Journal of Vision, 15(7):3, 1–12, https://doi.org/10.1167/15.7.3.

Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience, 15, 600–609.

Bar, M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5(8), 617–629.

Bracci, S., & Op de Beeck, H. (2016). Dissociations and associations between shape and category representations in the two visual pathways. Journal of Neuroscience, 36(2), 432–444, https://doi.org/10.1523/JNEUROSCI.2314-15.2016.

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436, https://doi.org/10.1163/156856897X00357.

Buetti, S., Cronin, D. A., Madison, A. M., Wang, Z., & Lleras, A. (2016). Towards a better understanding of parallel visual processing in human vision: Evidence for exhaustive analysis of visual information. Journal of Experimental Psychology: General, 145(6), 672–707, https://doi.org/10.1037/xge0000163.

Calvillo, D. P., & Hawkins, W. C. (2016). Animate objects are detected more frequently than inanimate objects in inattentional blindness tasks independently of threat. The Journal of General Psychology, 143(2), 101–115, https://doi.org/10.1080/00221309.2016.1163249.

Calvillo, D. P., & Jackson, R. E. (2014). Animacy, perceptual load, and inattentional blindness. Psychonomic Bulletin & Review, 21(3), 670–675, https://doi.org/10.3758/s13423-013-0543-8.

Carota, F., Kriegeskorte, N., Nili, H., & Pulvermüller, F. (2017). Representational similarity mapping of distributional semantics in left inferior frontal, middle temporal, and motor cortex. Cerebral Cortex, 27(1), 294–309, https://doi.org/10.1093/cercor/bhw379.

Connolly, A. C., Guntupalli, J. S., Gors, J., Hanke, M., Halchenko, Y. O., Wu, Y.-C., … Haxby, J. V. (2012). The representation of biological classes in the human brain. The Journal of Neuroscience, 32(8), 2608–2618, https://doi.org/10.1523/JNEUROSCI.5547-11.2012.

Coren, S., & Enns, J. T. (1993). Size contrast as a function of conceptual similarity between test and inducers. Perception & Psychophysics, 54(5), 579–588.

Cox, D., Meyers, E., & Sinha, P. (2004, April 2). Contextually evoked object-specific responses in human visual cortex. Science, 304, 115–117, https://doi.org/10.1126/science.1093110.

Delorme, A., Richard, G., & Fabre-Thorpe, M. (2010). Key visual features for rapid categorization of animals in natural scenes. Frontiers in Psychology, 1(21), 1–13, https://doi.org/10.3389/fpsyg.2010.00021.

Diesendruck, G., & Gelman, S. A. (1999). Domain differences in absolute judgments of category membership: Evidence for an essentialist account of categorization. Psychonomic Bulletin & Review, 6(2), 338–346, https://doi.org/10.3758/BF03212339.

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96(3), 433–458, https://doi.org/10.1037/0033-295X.96.3.433.

Gerlach, C. (2007). A review of functional imaging studies on category specificity. Journal of Cognitive Neuroscience, 19(2), 296–314, https://doi.org/10.1162/jocn.2007.19.2.296.

Gerlach, C., Law, I., & Paulson, O. B. (2004). Structural similarity and category-specificity: A refined account. Neuropsychologia, 42(11), 1543–1553, https://doi.org/10.1016/j.neuropsychologia.2004.03.004.

Guerrero, G., & Calvillo, D. P. (2016). Animacy increases second target reporting in a rapid serial visual presentation task. Psychonomic Bulletin & Review, 23, 1832–1838, https://doi.org/10.3758/s13423-016-1040-7.

Hadjikhani, N., Kveraga, K., Naik, P., & Ahlfors, S. P. (2009). Early (N170) activation of face-specific cortex by face-like objects. NeuroReport, 20(4), 403–407, https://doi.org/10.1097/WNR.0b013e328325a8e1.

Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5(1), 67–104, https://doi.org/10.1080/02643298808252927.

Jackson, R. E., & Calvillo, D. P. (2013). Evolutionary relevance facilitates visual information processing. Evolutionary Psychology, 11(5), 1011–1026.

Kleiner, M., Brainard, D. H., Pelli, D. G., Broussard, C., Wolf, T., & Niehorster, D. (2007). What's new in Psychtoolbox-3? Perception, 36, S14, https://doi.org/10.1068/v070821.

Langton, S. R. H., Law, A. S., Burton, A. M., & Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107(1), 330–342, https://doi.org/10.1016/j.cognition.2007.07.012.

Laws, K. R., & Neve, C. (1999). A "normal" category-specific advantage for naming living things. Neuropsychologia, 37(11), 1263–1269, https://doi.org/10.1016/S0028-3932(99)00018-4.

Levin, D. T., Takarae, Y., Miner, A. G., & Keil, F. (2001). Efficient visual search by category: Specifying the features that mark the difference between artifacts and animals in preattentive vision. Perception & Psychophysics, 63(4), 676–697, https://doi.org/10.3758/BF03194429.

Lipp, O. V., Derakshan, N., Waters, A. M., & Logies, S. (2004). Snakes and cats in the flower bed: Fast detection is not specific to pictures of fear-relevant animals. Emotion, 4(3), 233–250, https://doi.org/10.1037/1528-3542.4.3.233.

LoBue, V. (2014). Deconstructing the snake: The relative roles of perception, cognition, and emotion on threat detection. Emotion, 14(4), 701–711, https://doi.org/10.1037/a0035898.

Long, B., Störmer, V. S., & Alvarez, G. A. (2017). Mid-level perceptual features contain early cues to animacy. Journal of Vision, 17(6):20, 1–20, https://doi.org/10.1167/17.6.20.

Loschky, L. C., & Larson, A. M. (2010). The natural/man-made distinction is made before basic-level distinctions in scene gist processing. Visual Cognition, 18(4), 513–536, https://doi.org/10.1080/13506280902937606.

Lupyan, G. (2015). Object knowledge changes visual appearance: Semantic effects on color afterimages. Acta Psychologica, 161, 117–130, https://doi.org/10.1016/j.actpsy.2015.08.006.

Lupyan, G., & Ward, E. J. (2013). Language can boost otherwise unseen objects into visual awareness. Proceedings of the National Academy of Sciences, USA, 110(35), 14196–14201, https://doi.org/10.1073/pnas.1303312110.

Moore, C. J., & Price, C. J. (1999). A functional neuroimaging study of the variables that generate category-specific object processing differences. Brain, 122, 943–962, https://doi.org/10.1093/brain/122.5.943.

Moraglia, G. (1989). Visual search: Spatial frequency and orientation. Perceptual and Motor Skills, 69(2), 675–689.

Mullally, S. L., & Maguire, E. A. (2011). A new role for the parahippocampal cortex in representing space. The Journal of Neuroscience, 31(20), 7441–7449, https://doi.org/10.1523/JNEUROSCI.0267-11.2011.

New, J., Cosmides, L., & Tooby, J. (2007). Category-specific attention for animals reflects ancestral priorities, not expertise. Proceedings of the National Academy of Sciences, USA, 104(42), 16598–16603, https://doi.org/10.1073/pnas.0703913104.

Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466–478, https://doi.org/10.1037/0096-3445.130.3.466.

Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3), 145–175.

Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36, https://doi.org/10.1016/S0079-6123(06)55002-2.

Oliva, A., & Torralba, A. (2007). The role of context in object recognition. Trends in Cognitive Sciences, 11(12), 520–527, https://doi.org/10.1016/j.tics.2007.09.009.

Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 107–123, https://doi.org/10.1016/S0042-6989(01)00250-4.

Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8(12), 976–987, https://doi.org/10.1038/nrn2277.

Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. The Journal of Neuroscience, 32(45), 15728–15736, https://doi.org/10.1523/JNEUROSCI.1953-12.2012.

Rice, G. E., Watson, D. M., Hartley, T., & Andrews, T. J. (2014). Low-level image properties of visual objects predict patterns of neural response across category-selective regions of the ventral visual pathway. Journal of Neuroscience, 34(26), 8837–8844, https://doi.org/10.1523/JNEUROSCI.5265-13.2014.

Sagi, D. (1988). The combination of spatial frequency and orientation is effortlessly perceived. Perception & Psychophysics, 43(6), 601–603, https://doi.org/10.3758/BF03207749.

Sha, L., Haxby, J. V., Abdi, H., Guntupalli, J. S., Oosterhof, N. N., Halchenko, Y. O., & Connolly, A. C. (2015). The animacy continuum in the human ventral vision pathway. Journal of Cognitive Neuroscience, 27(4), 665–678, https://doi.org/10.1162/jocn.

Smith, S. L. (1962). Color coding and visual search. Journal of Experimental Psychology, 64(5), 434–440, https://doi.org/10.1037/h0047634.

Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174–215.

Summerfield, C., Egner, T., Mangels, J., & Hirsch, J. (2006). Mistaking a house for a face: Neural correlates of misperception in healthy humans. Cerebral Cortex, 16(4), 500–508, https://doi.org/10.1093/cercor/bhi129.

Taylor, K. I., Devereux, B. J., Acres, K., Randall, B., & Tyler, L. K. (2012). Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects. Cognition, 122(3), 363–374, https://doi.org/10.1016/j.cognition.2011.11.001.

Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50(2), 184–193, https://doi.org/10.3758/BF03212219.

Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599–606, https://doi.org/10.3758/BF03211656.

Theeuwes, J. (1993). Visual selective attention: A theoretical analysis. Acta Psychologica, 83(2), 93–154, https://doi.org/10.1016/0001-6918(93)90042-P.

Theeuwes, J. (1995). Abrupt luminance change pops out; abrupt color change does not. Perception & Psychophysics, 57(5), 637–644, https://doi.org/10.3758/BF03213269.

Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network: Computation in Neural Systems, 14, 391–412, https://doi.org/10.1088/0954-898X.

Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136, https://doi.org/10.1016/0010-0285(80)90005-5.

Tyler, L. K., Bright, P., Dick, P., Tavares, P., Pilgrim, L., Fletcher, P., … Moss, H. (2003). Do semantic categories activate distinct cortical regions? Evidence for a distributed neural semantic system. Cognitive Neuropsychology, 20(3–6), 541–559, https://doi.org/10.1080/02643290244000211.

Wang, S., Tsuchiya, N., New, J., Hurlemann, R., & Adolphs, R. (2015). Preferential attention to animals and people is independent of the amygdala. Social Cognitive and Affective Neuroscience, 10(3), 371–380, https://doi.org/10.1093/scan/nsu065.

Watson, D. M., Hartley, T., & Andrews, T. J. (2014). Patterns of response to visual scenes are linked to the low-level properties of the image. NeuroImage, 99, 402–410, https://doi.org/10.1016/j.neuroimage.2014.05.045.

Wichmann, F. A., Drewes, J., Karl, P. R., & Gegenfurtner, K. R. (2010). Animal detection in natural scenes: Critical features revisited. Journal of Vision, 10(4):6, 1–27, https://doi.org/10.1167/10.4.6.

Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42(3), 671–684, https://doi.org/10.3758/BRM.42.3.671.

Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5(6), 495–501, https://doi.org/10.1038/nrn1411.

Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1:0058, https://doi.org/10.1038/s41562-017-0058.

Yang, J., Wang, A., Yan, M., Zhu, Z., Chen, C., & Wang, Y. (2012). Distinct processing for pictures of animals and objects: Evidence from eye movements. Emotion, 12(3), 540–551, https://doi.org/10.1037/a0026848.

Yantis, S. (1993). Stimulus-driven attentional capture and attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 19(3), 676–681, https://doi.org/10.1037/0096-1523.19.3.676.

Zachariou, V., Del Giacco, A. C., Ungerleider, L. G., & Yue, X. (2018). Bottom-up processing of curvilinear visual features is sufficient for animate/inanimate object categorization. Journal of Vision, 18(12):3, 1–12, https://doi.org/10.1167/18.12.3.


Appendix
An exemplar of each of the three categories (animals,
man-made objects, fruits/vegetables) is shown in Figure
A1.

Figure A1. An exemplar of each of the three categories: animals, man-made objects, and fruits/vegetables.
