
Vision Research 49 (2009) 2870–2880


Anchoring gaze when categorizing faces’ sex: Evidence from eye-tracking data
Line Sæther a,*, Werner Van Belle b, Bruno Laeng c, Tim Brennen c, Morten Øvervoll a

a University of Tromsø, Norway
b Swiss Federal Institute of Technology Zürich, Switzerland
c University of Oslo, Norway

* Corresponding author. Address: University Library, Department of Psychology and Law, University of Tromsø, N-9073 Tromsø, Norway. Fax: +47 77 64 67 89. E-mail address: line.sether@uit.no (L. Sæther).

Article info

Article history: Received 11 December 2008; received in revised form 13 August 2009.

Keywords: Face perception; Sex categorization; Eye fixation; Optimal viewing position

Abstract

Previous research has shown that during recognition of frontal views of faces, the preferred landing positions of eye fixations are either on the nose or the eye region. Can these findings generalize to other facial views and a simpler perceptual task? An eye-tracking experiment investigated categorization of the sex of faces seen in four views. The results revealed a strategy, preferred in all views, which consisted of focusing gaze within an ‘infraorbital region’ of the face. This region was fixated more in the first than in subsequent fixations. Males anchored gaze lower and more centrally than females.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction

Faces constitute a set of 3D objects which seem to be mentally represented in a holistic way (e.g., Biederman & Kalocsai, 1997; Dailey & Cottrell, 1999; Laeng & Caviness, 2001; O’Toole, Millward, & Anderson, 1988), that is, in a less part-based manner than most objects. Presumably, holistic facial representations develop alongside the human expertise in face recognition (Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999; Sæther & Laeng, 2008). Although visual expertise can result in expert gaze behavior where ‘diagnostic’ features are prioritized (e.g., Rehder & Hoffman, 2005; Underwood, Chapman, Brocklehurst, Underwood, & Crundall, 2003; cf. Biederman & Shiffrar, 1987), there is reason to believe that both the extreme familiarity with the object class, faces, and its dependence on a template-like representation could result in a gaze strategy where the goal is to get as much information as possible with just one fixation (Hsiao & Cottrell, 2008). That is, from the first glance, gaze might be anchored onto a position which provides a perceptual span that either covers the whole stimulus or that maximizes the area of the object (especially for large displays or very near objects) which is included within the region of high-resolution acuity (cf. Hsiao & Cottrell, 2008; Tyler & Chen, 2006).

Interestingly, studies on visual inspection of other shapes than faces have shown that when participants look at an object or a group of objects, the preferred landing position of their fixations occurs near the “center-of-gravity” (COG) of the display (Coren & Hoenig, 1972; Findlay, 1982; He & Kowler, 1991; Kowler & Blaser, 1995; McGowan, Kowler, Sharma, & Chubb, 1998; Melcher & Kowler, 1999; Vishwanath & Kowler, 2004; Vishwanath, Kowler, & Feldman, 2000). The COG of a shape can also be based on its 3D structure (by spatial weighting according to implied depth; Vishwanath & Kowler, 2004). Nevertheless, most of this work has studied saccades to targets with sudden onset. The COG preference is less obligatory with more voluntary saccadic movements (Findlay & Blythe, 2009; He & Kowler, 1991), as would be the case in face processing.


Studies of fixations on faces have focussed less on the possibility that a similar COG-anchoring mechanism might be operating. The available evidence suggests that overt attention appears to be drawn towards the internal region of the face, particularly towards the eyes (Althoff & Cohen, 1999; Fisher & Cox, 1975; Henderson, Williams, & Falk, 2005; Langdell, 1978; Minut, Mahadevan, Henderson, & Dyer, 2000; Schwarzer, Huber, & Dümmler, 2005; Walker-Smith, Gale, & Findlay, 1977; Yarbus, 1967). Thus, one could speculate that the eyes might be attended to independently of the task as a side effect of an initial centering of the gaze (Deaner & Platt, 2003; Grosbras, Laird, & Paus, 2005; Kingstone, Tipper, Ristic, & Ngan, 2004). In other words, fixations on the eye region in a particular task may incorrectly suggest a diagnostic role of this facial region (e.g., Schyns, Bonnar, & Gosselin, 2002) and might just as well reflect ‘automatic’ anchoring of the gaze (cf. Vinette, Gosselin, & Schyns, 2004). In fact, Tyler and Chen (2006), using a forced-choice face detection paradigm and aperture windows of varying extent, identified a region centered on the nose bridge in the full face as a zone of specialized perceptual processing. Although the other internal features of the face provided some improvement in sensitivity, there was no further advantage for the information around the edge of the face (i.e. hair, ears, and jaw).

Most interestingly, recent work by Hsiao and Cottrell (2008) indicates that during recognition of frontal faces, the preferred landing positions for the first two eye fixations are around the center of the nose, slightly biased to the left, and that fixations on the eyes do occur at a later stage (typically after the second fixation). Importantly, Hsiao and Cottrell showed that after these two fixations, performance does not improve, that is, no further information appears to be necessary to perform the identification task. Thus, in previous studies that used a central starting point for eye fixations, it would be unnecessary to fixate centrally during recordings, since recordings would start after the initial informative fixation was already accomplished and the subsequent gaze behavior (towards the eyes) would simply be redundant and would not add information.

One purpose of the present study was to investigate to what extent the findings in Hsiao and Cottrell (2008) can be generalized. On the one hand, central fixations on the nose in frontal views may occur because the nose happens to be in a central position in the facial image. Alternatively, one could argue that the nose (and the face region immediately around it) might be a diagnostic part for face recognition; in this view, anchoring gaze on the nose would constitute an ‘optimal viewing position’ for several specific face tasks (cf. O’Regan, Lévy-Schoen, Pynte, & Brugaillère, 1984). Such a confound between fixations on the nose as merely an anchor point versus as an informationally rich nexus can be resolved by exploring fixations with the use of other angles of view than the full-face view. Therefore, the face stimuli used in the present experiment were shown in four static views (from full-front to profile).

One additional point is that the findings of Hsiao and Cottrell (2008) might be restricted to the identification task. Face identification is a complex, visual categorization task involving only familiar faces, whereas other face processing tasks can be performed on unfamiliar faces as well. Schyns et al. (2002), using the “Bubbles technique”, found differing patterns of attended information between three perceptual tasks (identity, expression and sex categorization). In addition, Malcolm, Lanyon, Fugard, and Barton (2008) monitored eye fixations while participants made judgments about the identity or the expression of faces, and found that fixations correlated with regional variations of diagnostic information in the different processing tasks. Finally, we reason that the visual information which is most informative for a complex task might not be the same as the information used in a simpler and more basic task. To investigate this question we used a very simple face processing task, i.e. ‘sex categorization’, which can be performed also for unfamiliar faces, and which has been little explored with the eye monitoring technique.

Sex categorization can be performed more efficiently than other processing tasks (i.e., in about 613 ms, as opposed to 897 ms for familiarity decisions; Bruce, Ellis, Gibling, & Young, 1987). ERP studies have shown that the brain potential’s latency related to sex categorizations of faces is remarkably fast (i.e., 150 ms; Schendan, Ganis, & Kutas, 1998). Even with very fast and peripherally located exposures (26–75 ms) accuracy of sex decisions is surprisingly good (O’Toole, Peterson, & Deffenbacher, 1996; Reddy, Wilken, & Koch, 2004; Sergent & Hellige, 1986). In addition, neuroimaging studies reveal that areas of the brain activated by sex categorizations of faces are more posterior than those activated by identifying the same faces (Sergent, Ohta, & MacDonald, 1992), which in turn suggests that such information may be processed early within the visual pathways.

It might be that even in this simple sex categorization task, the most dimorphic anatomical difference would correspond to the part of the face attended to first. Current knowledge of sex differences in face morphology strongly suggests that the most sexually dimorphic facial trait is indeed the nose (Bruce & Young, 1998; Burton, Bruce, & Dench, 1993; Enlow, 1982, 1990; cf. O’Toole, Vetter, Troje, & Bülthoff, 1997). If so, we can draw some straightforward predictions about the locus of eye fixations and their priority in a sex categorization task: (1) if gaze anchors itself onto the most diagnostic part of the face, then the nose should be attended to in the first fixation regardless of facial pose (indeed, in the 3/4 and profile poses, one could expect more fixations on the nose, since visibility of its size and shape would seem optimal in these perspectives; Chronicle et al., 1995). This could be interpreted as a viewing position that optimally reveals information diagnostic to the task. (2) If gaze anchors itself onto a central position of the image, the nose should be fixated preferentially in frontal views and decreasingly so for views increasingly distant from frontal close-ups. This would indicate the existence of a preferred landing position that optimizes the visual inspection (foveally and parafoveally) of the face as a whole.

However, it is likely that the perceptual process may not be that straightforward. That is, several dimorphic parts (e.g., the zygomatic protrusion; Ikeda, Nakamura, & Itoh, 1999) or their segments might contribute in parallel, and perhaps equally, to the decision (cf. Bruce et al., 1993; Burton et al., 1993). Also, the diagnostic value of the parts may not depend on their dimorphic value alone, but may be influenced by perceptual variables (e.g., small parts, or ambiguous parts, like a feminine-looking male nose, might need foveal vision, and relative size might need to be computed in a context, by a scan path). Some of these secondary diagnostic cues might influence eye fixations subsequent to the first fixation. One might think that an ‘intelligent’ strategy could be to anchor gaze onto an intermediate position between two or more highly diagnostic parts (especially if the viewing conditions allow them to be included within a visual angle of maximal visual acuity). Moreover, it may be sufficient to view ‘large’ diagnostic parts outside of the foveal and the parafoveal area, where low frequencies might provide enough dimorphic information for sex categorizations (Schyns et al., 2002; Valentin, Abdi, Edelman, & O’Toole, 1997). In fact, if few or none of the dimorphic regions are in need of foveal or central vision, it would be possible and economical to gather information from more than one part in one glance.

Current theories of gaze control and oculomotor strategies have abandoned the idea that the choice of gaze location occurs at random (e.g., Kundel, Nodine, Thickman, & Toto, 1987) and recognize the influence of both stimulus factors (Itti & Koch, 2000) and expectations (Land & McLeod, 2000; Malcolm et al., 2008; Torralba, Oliva, Castelhano, & Henderson, 2006; Turano, Geruschat, & Baker, 2003; Wolfe, Cave, & Franzel, 1989). Thus, the present study provides an opportunity to establish whether scanning of a face during a sex categorization task follows a specific and replicable pattern, and also whether any of the sexually dimorphic facial parts are prioritized in the oculomotor behavior of the viewer. In particular, by presenting faces in different views we have an opportunity to assess whether the gaze targets dimorphic parts or perhaps segments that may not correspond to ‘parts’ that have verbal labels. Alternatively, gaze may always target the image’s COG so as to favor the holistic processing of the face. Thus, eye movements of a group of participants, performing a sex categorization task, on stimuli shown in four angles of view, were monitored by use of an infrared eye-tracker.

Finally, fixational analysis methods (on the basis of a priori and a posteriori defined parts) were compared in the present study so as to guide future eye-tracking investigations of face stimuli. Specifically, a common problem has been that some a priori knowledge about what portion of the facial surface counts as a part is assumed before analyses are completed (Burton et al., 1993; Valentin et al., 1997). Several computational studies have, however, derived macro parts a posteriori from faces learned by classification networks (e.g., Abdi, Valentin, Edelman, & O’Toole, 1995; Cottrell & Flemming, 1990; Golomb, Lawrence, & Sejnowski, 1991; Gray, Lawrence, Golomb, & Sejnowski, 1995; O’Toole et al., 1997; O’Toole et al., 1998). Thus, in the present experiment these two methods of parsing will be compared.
2. Methods

2.1. Participants

The participants were 49 naive students (25 females). All participants (mean age = 26.7, SD = 7.0) had normal vision, or vision corrected to normal by the use of contact lenses.

2.2. Stimulus and apparatus

The stimuli used in this experiment consisted of 96 color photos of 24 unfamiliar faces (12 female, 12 male) of young Caucasian students, randomly presented in four different viewing angles (frontal = 0°, intermediate = 22.5°, three-quarter = 45° and profile = 90°, as seen from the viewer). The 22.5° angle view was included as this has been shown to be more diagnostic than the 45° angle view (Laeng & Rouw, 2001; cf. Blanz, Tarr, & Bülthoff, 1999).

The models were all asked to assume a neutral expression when photographed. The images, including hair, were unaltered so as to keep the visual stimuli as natural as possible. Half of the images shown in angles other than the frontal were oriented towards the left, and the other half towards the right side of the screen. Each model’s head (including hair) subtended an average visual angle of 14° (vertical dimension; min 13°–max 15°). However, the face alone (i.e., the internal ‘mask’ from eyebrows to mouth) subtended an average visual angle of 6°.

The experimental editor was SuperLab® Pro 1.04, and eye movements were registered by the monocular “Remote Eye-Tracking Device” or RED, built by SensoMotoric Instruments® (SMI, Teltow, Germany). The initial analysis of the recordings was computed by the software iView© version 3.0 from SMI. The RED employs the contrast technique, which determines the center between two coordinates by tracking the position of the pupil and the reflection of the cornea. The tracking device can operate at a distance of 60–80 cm and the recording eye-tracking sample rate is 50 Hz, with a tracking resolution of 0.1° (as specified by the manufacturer; http://www.smivision.com/en/gaze-eye-tracking-systems/products/iview-x-red-red250.html).

2.3. Procedure

Participants were seated in front of the monitor, with their heads stabilized by a headstand to maintain a constant distance of 73 cm. A calibration procedure on a 3 × 3 regularly spaced matrix was performed for each participant. Participants were asked to discriminate between female and male faces by pressing one of two keys (labelled $ or #) as quickly as possible. Before starting the experiment a practice set of six pictures was presented. Before each stimulus, a fixation point was presented for 1500 ms. Each fixation cross was positioned 7.85° away from the stimulus’ center to force eye movements to start exploring the face from a position external to it. The cross had to be fixated before each recording started. Each stimulus remained on the monitor until the participant made a response.
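To relate the visual angles reported above (head ≈ 14°, internal face ≈ 6°, and the 2° foveal region used in the later analyses) to physical extents at the 73 cm viewing distance, the standard relation size = 2·d·tan(θ/2) can be applied. The snippet below is only an illustrative calculation based on those stated values; it is not part of the authors’ analysis software, and the function name is ours.

```python
import math

def visual_angle_to_size(theta_deg, distance_cm=73.0):
    """Physical extent (cm) subtending theta_deg at the given viewing distance."""
    return 2.0 * distance_cm * math.tan(math.radians(theta_deg) / 2.0)

# Extents implied by the values reported in Sections 2.2-2.3 (illustrative only):
print(round(visual_angle_to_size(14.0), 1))  # whole head: ~17.9 cm
print(round(visual_angle_to_size(6.0), 1))   # internal face 'mask': ~7.7 cm
print(round(visual_angle_to_size(2.0), 1))   # 2-degree foveal region: ~2.5 cm
```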
2.4. Analysis procedure

Fixations outside the face were not included in the ‘center-of-gravity’ or the ‘a priori’ analysis. A fixation was defined as focus on an area of 60 screen points (the whole screen being 800 × 600 points) for more than 150 ms. According to Reichle, Rayner, and Pollatsek (2003), fixation duration is related to the difficulty involved in the processing task and can vary between 100 and 400 ms. The constraint was chosen on the basis of sex categorization being an efficient task, and this definition resulted in only 1.97% of trials without fixations on the face.
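This definition is, in effect, a dispersion-and-duration criterion applied to the 50 Hz gaze samples. The sketch below is a minimal illustration of such a detector; reading ‘an area of 60 screen points’ as a dispersion window, as well as all function and variable names, are our assumptions rather than the vendor’s (SMI) actual algorithm.

```python
def detect_fixations(samples, max_dispersion=60.0, min_duration_ms=150.0, rate_hz=50.0):
    """Group consecutive gaze samples (x, y) into fixations.

    A fixation is reported when samples stay within `max_dispersion` screen
    points (horizontal plus vertical extent) for at least `min_duration_ms`.
    Returns a list of (mean_x, mean_y, duration_ms) tuples.
    """
    min_samples = int(round(min_duration_ms / 1000.0 * rate_hz))

    def commit(window, out):
        # Keep only windows that lasted long enough to count as a fixation.
        if len(window) >= min_samples:
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            out.append((sum(xs) / len(xs), sum(ys) / len(ys),
                        len(window) / rate_hz * 1000.0))

    fixations, window = [], []
    for x, y in samples:
        window.append((x, y))
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            commit(window[:-1], fixations)   # close the window that just broke
            window = [(x, y)]                # restart from the current sample
    commit(window, fixations)                # close the final window
    return fixations
```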
2.4.1. Computation of the ‘center-of-gravity’

The 2D center-of-gravity (COG) was computed through a MATLAB® program that computed the centroid of each stimulus face taken as a plane region of uniform density. The centroid was computed through the MATLAB® function regionprops.
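For a region of uniform density, the centroid returned by regionprops is simply the mean of the coordinates of the pixels belonging to the region. The following sketch shows the equivalent computation on a binary face mask; it uses NumPy rather than the original MATLAB® code, and the mask and names are illustrative.

```python
import numpy as np

def face_centroid(face_mask):
    """Centroid (x, y) of a binary mask (nonzero inside the face region),
    treated as a plane region of uniform density."""
    ys, xs = np.nonzero(face_mask)
    return float(xs.mean()), float(ys.mean())

# Illustrative example: an elliptical 'face' region on a 600 x 800 screen.
yy, xx = np.mgrid[0:600, 0:800]
mask = ((xx - 400) / 150.0) ** 2 + ((yy - 300) / 200.0) ** 2 <= 1.0
print(face_centroid(mask))  # approximately (400.0, 300.0)
```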
2.4.2. Natural parts ‘a priori’ parsing

For the a priori analyses of fixations, we counted each separate fixation on the major parts of the face as identified by colloquial speech (i.e.: hair, forehead, brows, eyes, nose, cheeks, mouth, chin/jaw and ears). Note that, instead of choosing a few specific areas of interest, we parsed the whole facial image into its major anatomical parts. This parsing was based on the major bone and muscle structures in the face (Putz & Pabst, 2001). The parts’ boundaries were then adjusted to each face and to all four views of the faces.

2.4.3. Coordinate grid ‘a priori’ parsing

The coordinate grid parsing consisted of 80 parts, based on a grid that was finer in the internal than in the external region of the face. The frontal internal region of the face was thereby divided into 68 approximately equally sized parts. In views other than the frontal, fewer than 80 segments were visible. The number of segments did change slightly from face to face, as some segments, in some faces, would be occluded by other segments or parts. However, the numbering system remained the same. A MATLAB® program recognized the parts in the grid in the analysis.

2.4.4. A posteriori analysis

For the a posteriori analysis of fixations we developed a Linux-based Interactive Image Spreadsheet program written in C++ (Stroustrup, 2000) and Qt (http://www.trolltech.com/products/qt/index.html). As facial parts differ in size, shape and position between faces, the program normalized each face by morphing it with all other faces (in the same angle) in order to create a prototype face, as well as a prototype female and a prototype male face. Because all of the face stimuli, except the full face, were oriented towards either the left or the right side of the screen, the faces were first turned towards the left side. All faces were then morphed to the prototypes, and these were then added, to make sure that no face was more heavily weighted than others. Sixty-six morphing points were used, and most points were positioned within the face itself. The program’s next step was to morph the eye movement data using the same technique and the same morphing points in order to create a heat map/spotlight image of the most frequently fixated facial areas. These images were created by representing each point in the eye-tracking path with a patch of the same size as the fovea (23 pixels in diameter) and plotting the resulting map onto the morphed facial images. The morphed prototypes were then normalized such that the maximum signal was 1 and the lowest signal was 0. A cut-off value of 5% was used to remove areas that were of no significance.
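The heat-map construction just described boils down to stamping a foveal-sized disk at every (morphed) gaze point, rescaling the accumulated image to the 0–1 range, and discarding values below the 5% cut-off. A compact sketch of that accumulation step is given below; the face morphing itself, performed by the authors’ C++/Qt program, is not reproduced, and the NumPy implementation and names are ours.

```python
import numpy as np

def gaze_heat_map(points, shape, fovea_radius=11.5, cutoff=0.05):
    """Accumulate gaze points (x, y) into a normalized heat map.

    Each point is painted as a disk of `fovea_radius` pixels (the paper used
    patches 23 pixels in diameter, matching the fovea), the map is scaled so
    that its maximum is 1.0, and values below `cutoff` are zeroed out.
    """
    heat = np.zeros(shape, dtype=float)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for x, y in points:
        heat += ((xx - x) ** 2 + (yy - y) ** 2) <= fovea_radius ** 2
    if heat.max() > 0:
        heat /= heat.max()
    heat[heat < cutoff] = 0.0
    return heat
```

Overlaying such a map on the morphed prototype faces yields displays of the kind shown in Figs. 4 and 5.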

3. Results

3.1. Accuracy

Participants’ sex categorizations were on average 98.75% correct. There were no significant differences between face views. Male and female participants did equally well in this task, but both sexes made more mistakes when categorizing male than female models (F(1, 47) = 4.91, p = 0.03). This difference was caused by one male model (judged as ‘low in masculinity’ by 10 independent viewers) that triggered 0.31 of the 1.25% mistakes. The remaining 0.93% of errors showed no pattern, and can most likely be considered erroneous key presses. Despite the near-to-ceiling performance, the few erroneous trials were removed from the data.

3.2. RTs

The mean response time (RT) was 715 ms, SD = 155 (frontal view: 726 ms, SD = 162; intermediate view: 700 ms, SD = 136; three-quarter view: 713 ms, SD = 152; profile view: 722 ms, SD = 153). A repeated-measures ANOVA with four Angles of View (0°, 22.5°, 45°, 90°) × two Sex of Model (female, male) as the within-participant factors, and Sex of Participant (female, male) as the between-participants factor, was run on participants’ response times. Results showed a main effect of View, F(3, 141) = 5.9, p = 0.0008. Post-hoc tests revealed that, when judging the sex of faces, participants were significantly faster with the intermediate view (22.5°) than when seeing frontal and profile views. There were no other significant effects, and there was no relationship between response time and the eye movement patterns.
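The repeated-measures ANOVAs reported here and in the following sections could, for instance, be set up from a long-format table with one value per participant and condition. The sketch below uses statsmodels’ AnovaRM with the two within-participant factors only; AnovaRM does not handle between-subjects factors, so the Sex of Participant factor from the original design is omitted here, and the file and column names are purely illustrative.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per trial, with columns
# subject, view (0, 22.5, 45, 90), model_sex (female/male) and rt_ms.
df = pd.read_csv("rt_long_format.csv")  # illustrative file name

result = AnovaRM(
    data=df,
    depvar="rt_ms",
    subject="subject",
    within=["view", "model_sex"],
    aggregate_func="mean",  # average repeated trials within each cell first
).fit()
print(result.anova_table)
```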
3.3. Percent dwell time in the ‘center-of-gravity’

An analysis of the time participants focused on the center-of-gravity (within 2° of visual angle, corresponding to the size of the fovea) was computed in order to check: (1) whether eye fixations simply have a tendency to select a central point of gaze, and (2) whether such centered focusing was as preferred when a dimorphic part was positioned in the center as when no such part was present in the center-of-gravity (e.g. the ear in profile view). The computation was done on the head, excluding the neck, as the head may be considered a natural part or independent object with respect to the rest of the body. The head is separated from the neck by geometrical properties based on concavities/convexities, as suggested by several models of object processing (e.g., Hoffman & Richards, 1985).

A repeated-measures ANOVA with four Angles of View (0°, 22.5°, 45°, 90°) × two Sex of Model (female, male) as the within-participant factors and Sex of Participant (female, male) as the between-participants factor was run on the percentage of time that participants’ gaze was within 2° of the COG during each trial. The results revealed that the gaze was on or near the COG 22.33% of the time, or 159.65 ms (SD = 27.02). A significant main effect of Angle of View (F(3, 138) = 102.17, p < 0.0001) demonstrated that the gaze dwelled longer near the COG when the face was presented in frontal view (41.12% or 298.53 ms, SD = 30.60) than in all other views (intermediate view: 28.20%/197.40 ms, SD = 28.27; three-quarter view: 13.72%/97.82 ms, SD = 19.01; profile view: 6.27%/45.26 ms, SD = 10.29). The difference between the views was confirmed by post-hoc tests (p < 0.0001).
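Per trial, this measure is simply the share of gaze samples falling within a 2° radius of the face’s COG. A minimal sketch of that computation is shown below, assuming the 2° criterion has already been converted to pixels (for example with the visual-angle helper sketched in Section 2.3); the names are illustrative.

```python
def percent_dwell_near_cog(gaze_xy, cog_xy, radius_px):
    """Percentage of gaze samples (equivalently, of trial time at a constant
    sampling rate) falling within `radius_px` of the center-of-gravity."""
    if not gaze_xy:
        return 0.0
    cx, cy = cog_xy
    near = sum(1 for x, y in gaze_xy
               if (x - cx) ** 2 + (y - cy) ** 2 <= radius_px ** 2)
    return 100.0 * near / len(gaze_xy)
```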

3.4. Analysis based on the natural parts’ a priori parsing

3.4.1. Distribution of fixations

First, the mean number of fixations during a trial (1.87; SD = 0.39) was calculated. In order to compare the face views directly, the two sides of the frontal face were collapsed, and the percentage of fixations within each view and for each of the a priori defined parts was calculated. Fixation frequency was then subjected to a repeated-measures ANOVA with nine Facial Parts (hair, forehead, brows, eyes, nose, cheeks, mouth, chin/jaw and ears) × four Views (0°, 22.5°, 45°, 90°) × two Sex of Model (female, male) as the within-participant factors, and Sex of Participant (female, male) as the between-subjects factor. The analysis revealed a highly significant main effect of Facial Parts (F(8, 376) = 33.99, p < 0.0001), as eyes (mean = 27.4%, SD = 18.7) and nose (mean = 23.8%, SD = 12.5), followed by cheeks (mean = 19.5%, SD = 13.6), elicited significantly more (70.7%) fixations than the other parts. Post-hoc comparisons (Fisher’s PLSD; p = 0.11) showed no significant differences between eyes, nose and cheeks. In other words, observers looked repeatedly at the central area of the face (eyes + nose + cheek).

However, an interaction with View (F(24, 1128) = 26.83, p = 0.0001; Fig. 1) revealed that the nose was most attended to in frontal views (nose to eye: p = 0.15; eye to cheek: p < 0.0001, t(48) = 4.3) and gradually less attended to as the head turned. The opposite pattern was found for attention towards the cheeks, with more attention in profile (cheeks vs. eyes: p = 0.0009, t(48) = 3.54). In other words, the central area of the face that was specific to each view was the preferentially attended target. Additionally, the eye region was highly attended, regardless of view.

Fig. 1. The mean percentage of fixations of the three most attended Facial Parts (eyes, nose and cheeks) for each of the four face views (0°, 22.5°, 45° and 90° view). The bars indicate 95% confidence intervals.

The effect of Parts was also influenced by sex of participant, F(8, 376) = 2.9, p = 0.003. Specifically, while both sexes attended more towards the eyes, women looked significantly more at the eyes than at other parts, and more at the brows than men did, whereas men attended more towards the nose and cheeks than women. There was also a three-way interaction between Facial Parts, Participants’ Sex and Angle of View, F(24, 1128) = 2.7, p < 0.0001. This revealed that women looked overall more at eyes, whereas men looked at noses in frontal view and cheeks in profile (male participants: frontal view: nose to eye: p = 0.018; nose to cheek: p < 0.0001; profile: cheek to eye: p < 0.0001; cheek to nose: p < 0.0001). In other words, males attended more towards the view-specific center-of-gravity, whereas females attended more towards the eyes, regardless of the view.

3.4.2. Order of fixations

In order to assess whether the initial fixations were positioned more centrally than the subsequent fixations, an analysis of Order of Fixations was performed. As above, a repeated-measures ANOVA was run, with the additional within-participant factor of Order of Fixations (first, second and third fixation). The results showed a trivial main effect of Order of Fixations (F(2, 94) = 877.1, p < 0.0001), with more first fixations and fewer third fixations.

There was a significant interaction between Order of Fixations and Facial Parts (F(16, 752) = 16.3, p < 0.0001) that revealed preferential looking towards eyes, nose and cheek in the first and second fixation (see Fig. 2). The pattern of the first and second fixation was almost identical except for gaze on cheeks (t = 9.2, p < 0.0001) and hair (t = 5.7, p < 0.0001), where the distance between first and second fixation was larger than for the other parts. By the third fixation, preferential fixating towards specific parts was not present to any notable extent.

Furthermore, there was a significant three-way interaction between Angle, Facial Parts and Order of Fixation (F(48, 2256) = 9.1, p < 0.0001), which showed that the central parts of the face in each view (e.g. nose in front, eyes/cheek in intermediate view, eyes/cheek in three-quarter view, and cheek/hair in profile) were fixated more in the first fixation than in the subsequent fixations (see Fig. 2).

Fig. 2. The percentage of first, second and third fixations on the listed Facial Parts within frontal, intermediate (22.5°), three-quarter (45°) and profile views. The bars indicate 95% confidence intervals. The results indicate that the central parts of the face in each view (e.g. nose in front, eyes/cheek in intermediate view, eyes/cheek in three-quarter view, and cheek/hair in profile) were fixated more in the first fixation than in the subsequent fixations.

3.5. Analysis based on coordinate grid a priori parsing

3.5.1. Percentage of dwell time

The natural, but coarse, part-based analysis could not provide information as to whether there were specific locations within each part that actually captured the participants’ attention. Also, the time span of the fixations and the size of the areas were not accounted for. We therefore measured the percentage of time the gaze dwelled within each small segment of a finer grid, in each of the two extreme views (front and profile), corrected for size of segment. A ‘heat diagram’ of this analysis is illustrated in Fig. 3. Dark blue represents 0% dwell time and dark red approximately 5% dwell time in the frontal view, and approximately 9.5% dwell time in profile view.

Fig. 3. An illustration of the percentage of time (corrected for segment size) spent in each pre-selected facial segment within front and profile view, where dark red represents the highest amount of dwell time. The most attended part in frontal view was the right lower nasal eye (4.99% dwell time), and in profile the lower central eye (9.52% dwell time). The green circle illustrates 2° of visual angle, corresponding to the fovea, and the bottom left image shows the numbers given to the central segments.

The analysis in terms of dwell time distributions principally resembled that of the fixation distribution analysis. Differences could be accounted for by the use of a normalized measure, as well as by the inclusion of fixation pauses in the dwell time distribution, but not in the distribution-of-fixations analysis. Specifically, the main effect of Facial Parts was confirmed (F(8, 368) = 340.58, p < 0.0001), but with more attention towards the nose (29.65%, SD = 13.5), followed by the eyes (24.32%, SD = 6.86) and the cheek (24.32%, SD = 13.04). The interaction between Facial Parts and View (F(8, 368) = 69.94, p < 0.0001) was also confirmed.

As the dwell time distribution depended upon the exact choice of segments included within each part, an analysis over all central segments was also run. A repeated-measures ANOVA with 25 segments (five nose segments, Nos. 1–5 in Fig. 3; nine eye segments, Nos. 6–14; and 11 cheek segments, Nos. 15–25) × two Views (frontal and profile) showed a main effect of Segments (F(24, 1104) = 33.50, p < 0.0001), with significantly more dwell time overall on the segment between the eye and the nose (infraorbital margin, segment No. 15) than on any other segment (mean = 7.14, SD = 3.84). This was mainly caused by the fact that this segment was one of the few highly attended segments that was visible in both views.

An interaction between Segments and View (F(24, 1104) = 54.28, p < 0.0001) was more informative as to the specific locations within each attended facial part. Table 1 presents the means of the most attended segments in both presented Angles of View. In frontal view the overall most attended segments were the two lower nasal eye segments (No. 12 and No. 9). In profile presentations the eye dwelled longer on the lower central eye segment (No. 13). These two most attended eye segments (No. 12 in the full face, and No. 13 in profile) did not differ significantly, and they actually constituted the same segment when the face was turned, as the lower central eye concealed the lower nasal eye in profile. Therefore a collapsed segment, representing the lower nasal eye in both views, was compared to the infraorbital margin (No. 15). A main effect (F(1, 47) = 9.67, p = 0.003) showed that the collapsed lower nasal eye segment was significantly more attended to than the infraorbital margin. To conclude, in frontal views participants paid most attention towards the lower nasal eye and an area between the eyes and the nose. In profile, attention was directed between the eye, the cheek and the nose. This would seem to constitute a preferred infraorbital region of the face, but biased towards the eye.

Table 1
The means and standard deviations of the most attended segments within the most attended Facial Parts.

Part            Segment   Overall (Mean, SD)   Frontal (Mean, SD)   Profile (Mean, SD)
Eyes            No. 6     2.85, 1.28           5.71, 2.56           0.00, 0.00
                No. 8     2.34, 1.33           0.50, 0.41           4.19, 2.26
                No. 9     7.50, 2.06           8.13*, 2.13          0.00, 0.00
                No. 10    3.75, 1.06           6.88, 2.00           —
                No. 11    4.32, 4.28           1.10, 1.03           7.55, 7.53
                No. 12    9.16, 2.99           8.80*, 2.53          0.00, 0.00
                No. 13    2.86, 1.07           —                    9.52*, 3.14
                No. 14    4.22, 1.86           0.79, 0.68           7.66, 3.05
Nose            No. 1     4.13, 1.62           6.01, 1.40           2.25, 1.85
                No. 2     5.43, 2.12           7.15, 1.68           3.70, 2.56
                No. 3     4.62, 1.90           6.58, 2.03           2.67, 1.78
                No. 4     3.76, 1.88           5.23, 2.06           2.29, 1.70
Infra-orbital   No. 15    7.14, 3.84           7.30, 3.05           6.97, 4.55
                No. 16    3.57, 1.78           3.19, 1.31           3.96, 2.26
                No. 17    3.82, 1.50           2.28, 1.03           5.37, 1.98
                No. 20    2.00, 1.18           0.54, 0.53           3.47, 1.83

* Attendance was significantly higher than attendance to any other segment in the same Angle of View (p < 0.0005).

3.6. A posteriori analysis of eye movements

The value of the previous analysis might depend upon the level of resolution, shape and number of segments used in the ‘a priori’ analysis design. As such, it may also, to some extent, be based on arbitrary decisions. In this section we present a novel way of analysing eye movement data which we label a posteriori eye movement analysis. A computer program, described in the method section (Analysis procedure), created a prototypical stimulus face (morph) which was overlaid by a ‘heat map’. In other words, the eye itself was used, metaphorically, as a ‘paintbrush’ (see Fig. 4) or alternatively as an ‘incremental spotlight’ (see Fig. 5), both illustrating the most frequently viewed locations of the direction of gaze over the whole facial area.

As can be appreciated by inspecting these figures, these displays are very consistent with the findings based on the fine-grained a priori part analysis described previously. However, the present a posteriori analysis gives a more veridical (point by point) picture of the participants’ direction of gaze. An even finer-grained analysis grid than the one we used might yet display the same results as an a posteriori analysis would. Fig. 4 presents the probability distribution function that we obtained by plotting the eye-track positions on a prototypical face. The scale ranges from the area most likely to be attended (red) to the area least likely to be attended (blue), and the maximum and minimum probability densities for each analysis representation are included in the heat scales of the figure. It can be determined that red is 20 times more probable than blue, two times more probable than green and 1.3 times as probable as yellow. Likewise, yellow is 15.2 times more probable than blue, green is 10.4 times more probable than blue, and cyan is 5.6 times more probable than blue.

As Fig. 4 illustrates, only the central (colored) area of the face actually corresponded to an attended area. In addition, the figure shows that the facial area that received most fixations (red) was located below the pupil, and in all cases towards the central meridian, specifically the infraorbital area of the face. Notably, the most attended area appears to be directed more towards the center of the face in the present analysis than in the fine-grained a priori analysis. The os nasale (the nasal bone) and lateral cartilage (nasal bridge) were captured by these high levels of attention in all views, whereas the zygomatic area was more attended to in profile view. The alar cartilage (lower part of the nose) and the upper regio oralis (beneath the nose) were attended to in frontal view. The surrounding central area was much less attended (blue). In the intermediate (22.5°) view, where participants were faster in responding, attention appeared to be mostly directed or biased towards one eye (the closest of the eyes), followed by the nasal bridge and zygomatic area.

Fig. 4. Heat diagrams of the most attended areas (a posteriori analysed) over a morphed face in four different views (0°, 22.5°, 45° and 90°). The left column shows where all participants placed their overt attention. The attention patterns of female participants (middle column) and male participants (right column) were significantly different from each other. The scale ranges from the area most likely to be attended (red) to the area least likely to be attended (blue). These maps can be viewed as probability density functions, and the maximum and minimum probability for each analysis is included in the heat scales. The green circle illustrates 2° of visual angle, corresponding to the fovea. The left-turned morphs serve only as a consistent illustration, as in the experiment the stimuli faced the left as often as the right.

The overall strategies of male and female participants differed slightly, as males focused their visual attention a bit lower, and more towards the center, than females did (p < 0.004). Specifically, males directed more attention towards the nasal area in all angles except in profile. Conversely, males attended more towards the cheek area (regio infraorbitalis and buccalis) than females in all angles except in frontal presentations. In other words, males attended more towards the view-specific center-of-gravity, whereas females attended more towards the eyes, regardless of the view.

Fig. 5 illustrates the same findings, but with undisguised facial features. In this illustration only the illuminated area of the face corresponds markedly to the direction of the gaze. The illustrations can be described as ‘spotlight’ versions of the probability density functions of Fig. 4. The maximum and minimum probability for each analysis, included in the heat scales of Fig. 4, corresponds to the brightest and darkest degree of illumination in Fig. 5. Here, the brightest areas represent the regions that were most attended (corresponding to dark red in Fig. 4) and the dark gray areas represent the least attended areas (corresponding to dark blue in Fig. 4). Black, on the other hand, represents areas with less than 0.05 likelihood of being attended, corresponding to non-colored areas in the heat maps. The results indicate that it could be sufficient to focus within these illuminated face areas to solve the sex categorization task efficiently, although the remaining face areas might also contribute peripherally.

Fig. 5. Facial areas that participants overtly attended when looking at female (left) or male (right) faces. These images can be described as “spotlight” versions of the probability density functions in Fig. 4. Here, black areas indicate regions of the image where gaze was directed with less than 0.05 probability. The brightest areas represent the regions most likely to be attended (analogous to dark red in the heat maps) and the darkest of the illuminated areas represent the minimum probability for each analysis (corresponding to dark blue in the heat maps). The left face is a morph of 12 females, and the right face is a morph of 12 males. The morphs serve only the present illustration and were never shown in the experiment.

4. Discussion

The internal parts of the face (eyes, nose and cheeks) were overtly attended to more than the rest of the face, and this occurred in all views. Both the distribution and the duration of the fixations showed that the relative importance of these parts depended upon Angle of View, indicating that visual attention predominantly targeted a preferred infraorbital region of the face in addition to the eyes. A finer-grid parsing, and an a posteriori analysis, revealed that the most attended segments within the central areas and the eye area were adjacent to each other. Thus, the center of the most attended area for frontal presentations was located between the nose and the eye (infraorbital margin), and in profile presentations between the eye and the zygomatic bone (infraorbital foramen). The three-quarter views (22.5° and 45°) confirmed the pattern that the most attended area in all views was infraorbital. This was particularly true for the first fixation, as central areas were progressively less attended to, resulting in less preferential fixating towards specific parts in the second and third fixations. Males used the ‘central’ strategy to a higher degree, and with less weight on the eyes, than females.

The results of Hsiao and Cottrell (2008) indicated that people fixate the center of the nose in frontal views when recognizing a face. Correspondingly, Tyler and Chen (2006) identified a region centered at the nose bridge in the full face as a zone of specialized perceptual processing in a forced-choice detection paradigm. One could expect the same to be true for sex categorizations of frontal faces, based on the consensual notion that faces are perceived holistically or at least with less parsing than other objects. Following this train of thought, fixations would be expected to land centrally in other perspectives as well, instead of landing consistently in a position close to the nose. The present results partially support these expectations, since a ‘central’ position was attended more than the nose in three-quarter views and even in profile views.

Previous research has emphasized the diagnostic value of the eye region (Althoff & Cohen, 1999; Henderson et al., 2005; Minut et al., 2000). However, attention towards the eyes may be interpreted as a strategy of bringing a small, diagnostic central region of the face into focus, while larger parts can be viewed parafoveally. The present study also found eye fixations in frontally viewed faces, but these were shown to be on the nasal eye segment, in agreement with Schyns et al. (2002) using the Bubbles technique. This finding may support an interpretation according to which attention to the eyes not only indicates the eyes’ diagnostic value, but also functions as a central landmark from which the larger dimorphic nasal area can be viewed parafoveally.

Hsiao and Cottrell (2008) observed that eyes were only fixated in the third fixation in full faces and that performance did not improve after the second fixation. In the present study eyes were attended as much as noses in the second, but not in the first, fixation in full faces. This could have been caused by the change of task, as sex categorization decisions are simpler and more efficiently performed than identity decisions (Bruce et al., 1987; Schendan et al., 1998). In addition, for intermediate and 3/4 angles of view, the most attended parts in the first fixation were eyes and cheeks, whereas cheeks were more attended to in the first fixation on profile faces. This finding again indicates the use of a central fixation strategy.

The main finding in the present study was the increased attention towards the infraorbital areas in all views. This can be interpreted as a gaze strategy that selects a central ‘anchor point’ which is biased towards an intermediate position between the eye and the nose in front, and gradually more between the eye, nose and cheek as the head turns. An important remark to make is that the observed center of overt attention (the infraorbital areas) in the present study was not a sexually dimorphic part in itself. In fact, it has been proposed that the largest anatomical sexual dimorphism in the human face is represented by the protuberance of the nose (Burton et al., 1993; Enlow, 1982; Enlow, 1990; O’Toole et al., 1997). Crucially, we assumed that sexually dimorphic information would be more visible in other views than the frontal, which however resulted in lower levels of fixation (e.g. the nasal bridge is more visible in profile, but was more attended to in frontal views; and the zygomatic protrusion is better visible in three-quarter views, but was more attended to in profile). The positioning of gaze on the infraorbital areas might play a role for the specific sex categorization task, so that the fixations might be considered a ‘perspective anchoring’ which directs the more dimorphic internal region of the face into parafoveal vision. However, the present results do not reject the possibility of holistic processing that is unspecific to the perceptual task.

Interestingly, the COG was intensively attended in frontal view but gradually less attended as the head turned, and least attended in profile view, where the COG did not correspond to the center of the face or to any dimorphic part. A possible explanation for this finding is that the head volume or outline is not used by the visual system to compute its COG, but rather another conceptualization of the COG may be employed. Previous eye monitoring research, in object recognition tasks, has shown that the observers’ gaze often lands near a COG of the luminance distribution in the display, instead of on individual elements (e.g., Findlay, 1982; Kowler & Blaser, 1995; McGowan, Kowler, Sharma, & Chubb, 1998). Specifically, when the stimuli are 3-D objects casting shadows (comparable to our photographic face stimuli), the COG of gaze has been shown to be biased towards the informative parts (Vishwanath & Kowler, 2004). It has also been shown that the tendency to fixate on the COG is much less marked in tasks allowing for more voluntary saccadic movements (Findlay & Blythe, 2009; He & Kowler, 1991), like the task in the present experiment. Considering that earlier studies have shown that low spatial frequency information is sufficient for the sex categorization task (Abdi et al., 1995; Schyns et al., 2002; Valentin et al., 1997), it seems consistent to conclude that in the present study fixations were directed to a gravity point that provided the best sampling of the internal regions of the face within foveal/parafoveal vision.

The weighting of a central point of gravity towards a position between the nose and the eye might also be understood as an optimal viewing strategy that brings the smaller parts (e.g. the eye) into the highest focus, while at the same time larger informative parts (e.g. the nose) can be viewed parafoveally. This is in line with the findings of Schyns et al. (2002) that the nose was more diagnostic at coarser frequency scales, and the eye at finer scales. Since the mean fixation position within the eye area in all views was underneath the eye itself, and fixations on the brows, previously shown to be more diagnostic than eyes (Brown & Perrett, 1993; Campbell, Benson, Wallace, Doesbergh, & Coleman, 1999; Campbell, Wallace, & Benson, 1996; Sadr, Jarudi, & Sinha, 2003), were absent, it is unlikely that the mean fixation position in the present study corresponds to the maximal informational nexus or diagnostic trait of faces. In fact, what is emerging from the present research is that the locus of eye fixations might generalize over tasks (cf. Carmel & Bentin, 2002; Deaner & Platt, 2003; Grosbras et al., 2005; Hsiao & Cottrell, 2008; Kingstone et al., 2004; Schyns, Jentzsch, Johnson, Schweinberger, & Gosselin, 2003; Smith, Gosselin, & Schyns, 2004) and that task-dependent effects will be revealed as subtle displacements from a central, general anchoring point.

Intuitively, the eyes, in particular the contrast between the Finally, the present experiment suggested that an a priori pars-
white sclera and the colored iris, provide a very salient low-level ing of the face might give a rather incomplete understanding of the
visual cue that could be used to pilot the initial ballistic eye move- eye’s scan paths during a perceptual task, since parsing strongly
ment towards the face (cf. Brunelli & Poggio, 1993; Langton, Watt, depends upon an arbitrarily chosen number, size and shape of
& Bruce, 2000). In other words, the eyes may form a horizontal the segments, though if the parsing is sufficiently fine-grained,
meridian that facilitates alignment of the facial image to internal such a method might still be functional.
templates, thus constituting a highly informative point for the rec-
ognition of the face and, perhaps, other perceptual judgments to
Acknowledgments
faces. Indeed, we observed a rather uniform distribution of atten-
tion towards the eye region regardless of the view, even though
We would like to thank Øyvind Sther for recruiting and test-
it would seem intuitive that the eyes would be less informative ing some of the participants, and Tore Morten Andreassen for his
in a profile view where most of its shape is occluded and only assistance with the initial data analysis.
one eye is visible. Additionally, the social significance of gaze
should also be considered a possible explanation for the eye bias.
Previous research has shown that direct gaze leads to longer RTs References
in a sex categorization task, especially for faces of opposite sex
Abdi, H., Valentin, D., Edelman, B., & O’Toole, J. A. (1995). More about the difference
from the observer (Vuilleumier, George, Lister, Armony, & Driver, between men and women: Evidence from linear, neural networks and the
2005). This could indicate that a social aspect is interfering with principal component approach. Perception, 24, 539–562.
the task, especially in frontally viewed faces where the gaze is Althoff, R. R., & Cohen, N. J. (1999). Eye-movement-based memory effect: A
reprocessing effect in face perception. Journal of Experimental Psychology:
more direct. Eye-tracking studies of monkeys have shown that Learning Memory and Cognition, 25(4), 997–1010.
gaze dwells on pictures of other monkeys’ faces longer than on Bem, S. L. (1981). Gender schemata theory: A cognitive account for sex typing.
other objects or scenes, and that the eye region receives most of Psychological Review, 88(4), 354–364.
Biederman, I., & Kalocsai, P. (1997). Neurocomputational basis of object and face
their fixations (e.g. Guo, Mahmoodi, Robertson, & Young, 2006; recognition. Philosophical Transactions of the Royal Society of London: B, 352,
Guo, Robertson, Mahmoodi, Tadmor, & Young, 2003). 1203–1219.
One should also add that categorization performance was high- Biederman, I., & Shiffrar, M. (1987). Sexing day-old chicks: A case study and expert
system analysis of a difficult perceptual-learning task. Journal of Experimental
er for the intermediate view than the other views. The superiority Psychology: Learning Memory and Cognition, 13, 640–645.
of this particular view seems well accounted for by the fact that Blanz, V., Tarr, M. J., & Bühltoff, H. H. (1999). What object attributes determine
both the shape and size of the nose are optimally visible in this canonical views? Perception, 28(5), 575–599.
Brown, E., & Perrett, D. I. (1993). What gives a face its gender? Perception, 22,
view (Chronicle et al., 1995) and that the size and gradient of cur-
829–840.
vature of the eyes and cheekbones are also easily appreciated. Thus Bruce, V., Burton, A. M., Hanna, E., Healey, P., Mason, O., Coombes, A., et al. (1993).
a strategy of anchoring fixations infraorbitally, between the nose, Sex discrimination: How do we tell the difference between male and female
the eye and the cheek, might be rather optimal for sex categoriza- faces? Perception, 22, 131–152.
Bruce, V., Ellis, H., Gibling, F., & Young, A. W. (1987). Parallel processing of the sex
tion and perhaps for other face perception tasks as well (e.g. iden- and familiarity of faces. Canadian Journal of Psychology, 41, 510–520.
tification; cf. Laeng & Caviness, 2001; Laeng & Rouw, 2001). Bruce, V., & Young, A. (1998). In the eye of the beholder. Oxford, UK: Oxford
The present findings also indicated that female and male partic- University Press.
Brunelli, R., & Poggio, T. (1993). Face recognition: Features versus templates. IEEE
ipants used slightly different strategies, because females preferably Transactions on Pattern Analysis and Machine Intelligence, 15, 1042–1052.
attended towards the eyes, whereas males displaced their atten- Burton, A. M., Bruce, V., & Dench, N. (1993). What’s the difference between men and
tion more towards a lower central location of the face. This may women? Perception, 22, 153–176.
Butler, S., Gilchrist, I. D., Burt, D. M., Perrett, D. I., Jones, E., & Harvey, M. (2005). Are
show that both strategies are useful, since males and females the perceptual biases found in chimeric face processing reflected in eye-
solved the task with the same accuracy and speed. We surmise that movement patterns? Neuropsychologia, 43, 52–59.
males use a more global gaze strategy than females. This might be understood in terms of gender-specific experiences. However, females have been found to show greater interest in social aspects than males (Kaplan, 1978), and this could be linked to feminine gender schemata (Bem, 1981), which often involve an interest in the "people dimension" (Lippa, 1998). The models' gaze may thus have been interpreted by female observers as a social cue (Vuilleumier et al., 2005). The present finding could therefore reflect a social mechanism external to the specific task demands. Future studies may clarify whether the two sexes use gaze in strategically different ways.

Compared to some other eye-tracking studies of face perception, our findings did not reveal any marked preference for either the left or the right side of the face (Butler et al., 2005; Leonards & Scott-Samuel, 2005), or for the left eye (Vinette et al., 2004). One possibility is that this null finding indicates that, in a sex categorization task, the right side of the face is as important as the left. Alternatively, since eye-tracking studies with face stimuli (including the present one) have used rather small sets of faces, random variation between faces might have produced spurious asymmetries. However, Butler et al. (2005) used a sex categorization task and found a left-sided bias even though their stimuli were chimeric faces in which the left and right sides of each face were identical. Another possibility is that individuals differ substantially in their perceptual biases during facial inspection and, consequently, that a specific directional bias is not a particularly robust effect.
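For readers who wish to construct comparable stimuli, a chimeric face of the kind used by Butler et al. (2005) can be generated by mirroring one half of a frontal face image about its vertical midline. The sketch below is purely illustrative and is not drawn from the present study or from Butler et al.'s materials; the file name face.png, the function name, and the assumption of a centred, upright frontal face are hypothetical.

```python
# Illustrative sketch only: build a "left-left" chimeric face by mirroring the
# left half of a frontal face image. Assumes a roughly centred, upright face;
# "face.png" is a hypothetical example file, not a stimulus from the study.
from PIL import Image, ImageOps

def make_left_chimera(path):
    img = Image.open(path)
    w, h = img.size
    half = w // 2
    left = img.crop((0, 0, half, h))          # left half of the image
    mirrored = ImageOps.mirror(left)          # that half flipped left-right
    chimera = Image.new(img.mode, (2 * half, h))
    chimera.paste(left, (0, 0))               # original half on the left
    chimera.paste(mirrored, (half, 0))        # its mirror image on the right
    return chimera

if __name__ == "__main__":
    make_left_chimera("face.png").save("face_left_chimera.png")
```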
Campbell, R., Benson, P. J., Wallace, S. B., Doesbergh, S., & Coleman, M. (1999). More about brows: How poses that change brow position affect perceptions of gender. Perception, 28, 489–504.
Campbell, R., Wallace, S., & Benson, P. J. (1996). Real men don't look down: Direction of gaze affects sex decisions on faces. Visual Cognition, 3(4), 393–412.
Carmel, D., & Bentin, S. (2002). Domain specificity versus expertise: Factors influencing distinct processing of faces. Cognition, 83, 1–29.
Chronicle, E. P., Chan, M.-Y., Hawkings, C., Mason, K., Smethurst, K., Stallybrass, K., et al. (1995). You can tell by the nose – Judging sex from an isolated facial feature. Perception, 24, 969–973.
Coren, S., & Hoenig, P. (1972). The effect of non-target stimuli on length of voluntary saccades. Perceptual and Motor Skills, 34, 499–508.
Cottrell, G., & Fleming, M. K. (1990). Face recognition using unsupervised feature extraction. In Proceedings of the international neural network conference, Paris (pp. 322–325). Dordrecht: Kluwer.
Dailey, M. N., & Cottrell, G. W. (1999). Organization of face and object recognition in modular neural network models. Neural Networks, 12(7–8), 1053–1173.
Deaner, R. O., & Platt, M. L. (2003). Reflexive social attention in monkeys and humans. Current Biology, 13, 1609–1613.
Enlow, D. H. (1982). Handbook of facial growth. Philadelphia, PA: W.B. Saunders.
Enlow, D. H. (1990). Facial growth (3rd ed.). Philadelphia, PA: W.B. Saunders.
Findlay, J. M. (1982). Global visual processing for saccadic eye movements. Vision Research, 22, 1033–1045.
Findlay, J. M., & Blythe, H. I. (2009). Saccade target selection: Do distractors affect saccade accuracy? Vision Research, 49(10), 1267–1274.
Fisher, G. H., & Cox, R. L. (1975). Recognizing human faces. Applied Ergonomics, 6(2), 104–109.
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568–573.
Golomb, B. A., Lawrence, D. T., & Sejnowski, T. J. (1991). Sexnet: A neural network identifies sex from human faces. In R. P. Lippman, J. Moody, & D. S. Touretzky (Eds.), Advances in neural information processing systems (Vol. 3, pp. 572–577). San Mateo, CA: Morgan Kaufmann.
Gray, M., Lawrence, D. T., Golomb, B. A., & Sejnowski, T. J. (1995). A perceptron reveals the face of sex. Neural Computation, 7, 1160–1164.
Grosbras, M.-H., Laird, A. R., & Paus, T. (2005). Cortical regions involved in eye movements, shifts of attention, and gaze perception. Human Brain Mapping, 25, 140–154.
Guo, K., Mahmoodi, S., Robertson, R. G., & Young, M. P. (2006). Longer fixation duration while viewing face images. Experimental Brain Research, 171(1), 91–98.
Guo, K., Robertson, R. G., Mahmoodi, S., Tadmor, Y., & Young, M. P. (2003). How do monkeys view faces? – A study of eye movements. Experimental Brain Research, 150(3), 363–374.
He, P., & Kowler, E. (1991). Saccadic localization of eccentric forms. Journal of the Optical Society of America A – Optics Image Science and Vision, 8(2), 440–449.
Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory and Cognition, 33(1), 98–106.
Hoffman, J., & Richards, W. A. (1985). Parts of recognition. Cognition, 18, 65–96.
Hsiao, J. H., & Cottrell, G. (2008). Two fixations suffice in face recognition. Psychological Science, 19(10), 998–1006.
Ikeda, T., Nakamura, M., & Itoh, M. (1999). Sex differences in the zygomatic angle in Japanese patients analyzed by MRI with reference to Moire fringe patterns. Aesthetic Plastic Surgery, 23, 349–353.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
Kaplan, H. B. (1978). Sex differences in social interest. Journal of Individual Psychology, 34(2), 206–209.
Kingstone, A., Tipper, C., Ristic, J., & Ngan, E. (2004). The eyes have it!: An fMRI investigation. Brain and Cognition, 55, 269–271.
Kowler, E., & Blaser, E. (1995). The accuracy and precision of saccades to small and large targets. Vision Research, 35, 1741–1754.
Kundel, H. L., Nodine, C. F., Thickman, D., & Toto, L. (1987). Searching for lung nodules. A comparison of human performance with random and systematic scanning models. Investigative Radiology, 22, 417–422.
Laeng, B., & Caviness, V. (2001). Prosopagnosia as a deficit in encoding curved surface. Journal of Cognitive Neuroscience, 13(5), 556–557.
Laeng, B., & Rouw, R. (2001). Canonical views of faces and the cerebral hemispheres. Laterality, 6(3), 193–224.
Land, M., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3, 1340–1345.
Langdell, T. (1978). Recognition of faces – Approach to study of autism. Journal of Child Psychology and Psychiatry, 19(3), 255–268.
Langton, S. R. H., Watt, R. J., & Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4, 50–59.
Leonards, U., & Scott-Samuel, N. E. (2005). Idiosyncratic initiation of saccadic face exploration in humans. Vision Research, 45(20), 2677–2684.
Lippa, R. (1998). Gender-related individual differences and the structure of vocational interest: The importance of the people-things dimension. Journal of Personality and Social Psychology, 74(4), 996–1009.
Malcolm, G. L., Lanyon, L. J., Fugard, A. J. B., & Barton, J. J. S. (2008). Scan patterns during the processing of facial expression versus identity: An exploration of task-driven and stimulus-driven effects. Journal of Vision, 8(8):2, 1–9.
McGowan, J. W., Kowler, E., Sharma, A., & Chubb, C. (1998). Saccadic localization of random dot targets. Vision Research, 38, 895–909.
Melcher, D., & Kowler, E. (1999). Shapes, surfaces and saccades. Vision Research, 39(17), 2929–2946.
Minut, S., Mahadevan, S., Henderson, J. M., & Dyer, F. C. (2000). Face recognition using foveal vision. Lecture Notes in Computer Science, 1811, 424–433.
O'Regan, J. K., Lévy-Schoen, A., Pynte, J., & Brugaillère, B. (1984). Convenient fixation location within isolated words of different length and structure. Journal of Experimental Psychology: Human Perception and Performance, 10(2), 250–257.
O'Toole, A. J., Deffenbacher, K. A., Valentin, D., McKee, K., Huff, D., & Abdi, H. (1998). The perception of face gender: The role of stimulus structure in recognition and classification. Memory and Cognition, 26, 146–160.
O'Toole, A. J., Millward, R. B., & Anderson, J. A. (1988). A physical system approach to recognition memory for spatially transformed faces. Neural Networks, 1, 179–199.
O'Toole, A. J., Peterson, J., & Deffenbacher, K. A. (1996). An "other race effect" for classifying faces by sex. Perception, 25, 669–676.
O'Toole, A. J., Vetter, T., Troje, N. F., & Bülthoff, H. H. (1997). Sex classification is better with three-dimensional head structure than with image intensity information. Perception, 26, 75–84.
Putz, R., & Pabst, R. (2001). Head, neck, upper limb. Sobotta, atlas of human anatomy (Vol. 1). Germany: Appl, Wemding, Urban & Fischer.
Reddy, L., Wilken, P., & Koch, C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of Vision, 4, 106–117.
Rehder, B., & Hoffman, A. B. (2005). Eyetracking and selective attention in category learning. Cognitive Psychology, 51, 1–41.
Reichle, E. D., Rayner, K., & Pollatsek, A. (2003). The E-Z Reader model of eye-movement control in reading: Comparisons to other models. Behavioral and Brain Sciences, 26, 445–526.
Sadr, J., Jarudi, I., & Sinha, P. (2003). The role of eyebrows in face recognition. Perception, 32(3), 285–293.
Sæther, L., & Laeng, B. (2008). On facial expertise: Processing strategies of twins' parents. Perception, 37, 1227–1240.
Schendan, H. E., Ganis, G., & Kutas, M. (1998). Neurophysiological evidence for visual perceptual categorization of words and faces within 150 ms. Psychophysiology, 35, 240–251.
Schwarzer, G., Huber, S., & Dümmler, T. (2005). Gaze behavior in analytical and holistic face processing. Memory and Cognition, 33, 344–354.
Schyns, P. G., Bonnar, L., & Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13(5), 402–409.
Schyns, P. G., Jentzsch, I., Johnson, M., Schweinberger, S. R., & Gosselin, F. (2003). A principled method for determining the functionality of ERP components. NeuroReport, 14, 1665–1669.
Sergent, J., & Hellige, J. B. (1986). Role of input factors in visual-field asymmetries. Brain and Cognition, 5, 174–199.
Sergent, J., Ohta, S., & MacDonald, B. (1992). Functional neuroanatomy of face and object processing. Brain, 115, 15–36.
Smith, M. L., Gosselin, F., & Schyns, P. G. (2004). Receptive fields for flexible face categorizations. Psychological Science, 15(11), 753–761.
Stroustrup, B. (2000). The C++ programming language (3rd ed.). Addison-Wesley.
The iView X™ RED system from SMI. <http://www.smivision.com/en/gaze-eye-tracking-systems/products/iview-x-red-red250.html>.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786.
The QT product by trolltech. <http://www.trolltech.com/products/qt/index.html>.
Turano, K. A., Geruschat, D. R., & Baker, F. H. (2003). Oculomotor strategies for the direction of gaze tested with real-world activity. Vision Research, 43, 333–346.
Tyler, C. W., & Chen, C. C. (2006). Spatial summation of face information. Journal of Vision, 6(10), 1117–1125.
Underwood, G., Chapman, P., Brocklehurst, N., Underwood, J., & Crundall, D. (2003). Visual attention while driving: Sequences of eye fixations made by experienced and novice drivers. Ergonomics, 46, 629–646.
Valentin, D., Abdi, H., Edelman, B., & O'Toole, A. J. (1997). Principal component and neural network analyses of face images: What can be generalized in gender classification? Journal of Mathematical Psychology, 41, 398–413.
Vinette, C., Gosselin, F., & Schyns, P. G. (2004). Spatio-temporal dynamics of face recognition in a flash: It's in the eyes. Cognitive Science, 28, 289–301.
Vishwanath, D., & Kowler, E. (2004). Saccadic localization in the presence of cues to three-dimensional shape. Journal of Vision, 4, 445–458.
Vishwanath, D., Kowler, E., & Feldman, J. (2000). Saccadic localization of occluded targets. Vision Research, 40(20), 2797–2811.
Vuilleumier, P., George, N., Lister, V., Armony, J., & Driver, J. (2005). Effects of perceived mutual gaze and gender on face processing and recognition memory. Visual Cognition, 12(1), 85–101.
Walker-Smith, G. J., Gale, A. G., & Findlay, J. M. (1977). Eye-movement strategies involved in face perception. Perception, 6(3), 313–326.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press.