
Eigenfaces for Recognition

Matthew Turk and Alex Pentland


Vision and Modeling Group
The Media Laboratory
Massachusetts Institute of Technology



Abstract

We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.

INTRODUCTION

The face is our primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. Although the ability to infer intelligence or character from facial appearance is suspect, the human ability to recognize faces is remarkable. We can recognize thousands of faces learned throughout our lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. As a consequence the visual processing of human faces has fascinated philosophers and scientists for centuries, including figures such as Aristotle and Darwin.

Computational models of face recognition, in particular, are interesting because they can contribute not only to theoretical insights but also to practical applications. Computers that recognize faces could be applied to a wide variety of problems, including criminal identification, security systems, image and film processing, and human-computer interaction. For example, the ability to model a particular face and distinguish it from a large number of stored face models would make it possible to vastly improve criminal identification. Even the ability to merely detect faces, as opposed to recognizing them, can be important. Detecting faces in photographs, for instance, is an important problem in automating color film development, since the effect of many enhancement and noise reduction techniques depends on the picture content (e.g., faces should not be tinted green, while perhaps grass should).

Unfortunately, developing a computational model of face recognition is quite difficult, because faces are complex, multidimensional, and meaningful visual stimuli. They are a natural class of objects, and stand in stark contrast to sine wave gratings, the "blocks world," and other artificial stimuli used in human and computer vision research (Davies, Ellis, & Shepherd, 1981). Thus unlike most early visual functions, for which we may construct detailed models of retinal or striate activity, face recognition is a very high level task for which computational approaches can currently only suggest broad constraints on the corresponding neural activity.

We therefore focused our research toward developing a sort of early, preattentive pattern recognition capability that does not depend on having three-dimensional information or detailed geometry. Our goal, which we believe we have reached, was to develop a computational model of face recognition that is fast, reasonably simple, and accurate in constrained environments such as an office or a household. In addition the approach is biologically implementable and is in concert with preliminary findings in the physiology and psychology of face recognition.

© 1991 Massachusetts Institute of Technology. Journal of Cognitive Neuroscience, Volume 3, Number 1.

The scheme is based on an information theory approach that decomposes face images into a small set of characteristic feature images called "eigenfaces," which may be thought of as the principal components of the initial training set of face images. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces ("face space") and then classifying the face by comparing its position in face space with the positions of known individuals.

Automatically learning and later recognizing new faces is practical within this framework. Recognition under widely varying conditions is achieved by training on a limited number of characteristic views (e.g., a "straight on" view, a 45° view, and a profile view). The approach has advantages over other face recognition schemes in its speed and simplicity, learning capacity, and insensitivity to small or gradual changes in the face image.

Background and Related Work

Much of the work in computer recognition of faces has focused on detecting individual features such as the eyes, nose, mouth, and head outline, and defining a face model by the position, size, and relationships among these features. Such approaches have proven difficult to extend to multiple views, and have often been quite fragile, requiring a good initial guess to guide them. Research in human strategies of face recognition, moreover, has shown that individual features and their immediate relationships comprise an insufficient representation to account for the performance of adult human face identification (Carey & Diamond, 1977). Nonetheless, this approach to face recognition remains the most popular one in the computer vision literature.

Bledsoe (1966a,b) was the first to attempt semiautomated face recognition with a hybrid human-computer system that classified faces on the basis of fiducial marks entered on photographs by hand. Parameters for the classification were normalized distances and ratios among points such as eye corners, mouth corners, nose tip, and chin point. Later work at Bell Labs (Goldstein, Harmon, & Lesk, 1971; Harmon, 1971) developed a vector of up to 21 features, and recognized faces using standard pattern classification techniques. The chosen features were largely subjective evaluations (e.g., shade of hair, length of ears, lip thickness) made by human subjects, each of which would be quite difficult to automate.

An early paper by Fischler and Elschlager (1973) attempted to measure similar features automatically. They described a linear embedding algorithm that used local feature template matching and a global measure of fit to find and measure facial features. This template matching approach has been continued and improved by the recent work of Yuille, Cohen, and Hallinan (1989) (see Yuille, this volume). Their strategy is based on "deformable templates," which are parameterized models of the face and its features in which the parameter values are determined by interactions with the image.

Connectionist approaches to face identification seek to capture the configurational, or gestalt-like, nature of the task. Kohonen (1989) and Kohonen and Lahtio (1981) describe an associative network with a simple learning algorithm that can recognize (classify) face images and recall a face image from an incomplete or noisy version input to the network. Fleming and Cottrell (1990) extend these ideas using nonlinear units, training the system by backpropagation. Stonham's WISARD system (1986) is a general-purpose pattern recognition device based on neural net principles. It has been applied with some success to binary face images, recognizing both identity and expression. Most connectionist systems dealing with faces (see also Midorikawa, 1988; O'Toole, Millward, & Anderson, 1988) treat the input image as a general 2-D pattern, and can make no explicit use of the configurational properties of a face. Moreover, some of these systems require an inordinate number of training examples to achieve a reasonable level of performance. Only very simple systems have been explored to date, and it is unclear how they will scale to larger problems.

Others have approached automated face recognition by characterizing a face by a set of geometric parameters and performing pattern recognition based on the parameters (e.g., Kaya & Kobayashi, 1972; Cannon, Jones, Campbell, & Morgan, 1986; Craw, Ellis, & Lishman, 1987; Wong, Law, & Tsang, 1989). Kanade's (1973) face identification system was the first (and still one of the few) systems in which all steps of the recognition process were automated, using a top-down control strategy directed by a generic model of expected feature characteristics. His system calculated a set of facial parameters from a single face image and used a pattern classification technique to match the face from a known set, a purely statistical approach depending primarily on local histogram analysis and absolute gray-scale values.

Recent work by Burt (1988a,b) uses a "smart sensing" approach based on multiresolution template matching. This coarse-to-fine strategy uses a special-purpose computer built to calculate multiresolution pyramid images quickly, and has been demonstrated identifying people in near-real-time. This system works well under limited circumstances, but should suffer from the typical problems of correlation-based matching, including sensitivity to image size and noise. The face models are built by hand from face images.

THE EIGENFACE APPROACH

Much of the previous work on automated face recognition has ignored the issue of just what aspects of the face stimulus are important for identification. This suggested to us that an information theory approach of coding and


decoding face images may give insight into the information content of face images, emphasizing the significant local and global "features." Such features may or may not be directly related to our intuitive notion of face features such as the eyes, nose, lips, and hair. This may have important implications for the use of identification tools such as Identikit and Photofit (Bruce, 1988).

In the language of information theory, we want to extract the relevant information in a face image, encode it as efficiently as possible, and compare one face encoding with a database of models encoded similarly. A simple approach to extracting the information contained in an image of a face is to somehow capture the variation in a collection of face images, independent of any judgment of features, and use this information to encode and compare individual face images.

In mathematical terms, we wish to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, treating an image as a point (or vector) in a very high dimensional space. The eigenvectors are ordered, each one accounting for a different amount of the variation among the face images.

These eigenvectors can be thought of as a set of features that together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so that we can display the eigenvector as a sort of ghostly face which we call an eigenface. Some of the faces we studied are illustrated in Figure 1, and the corresponding eigenfaces are shown in Figure 2. Each eigenface deviates from uniform gray where some facial feature differs among the set of training faces; they are a sort of map of the variations between faces.

Each individual face can be represented exactly in terms of a linear combination of the eigenfaces. Each face can also be approximated using only the "best" eigenfaces, those that have the largest eigenvalues, and which therefore account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace ("face space") of all possible images.

The idea of using eigenfaces was motivated by a technique developed by Sirovich and Kirby (1987) and Kirby and Sirovich (1990) for efficiently representing pictures of faces using principal component analysis. Starting with an ensemble of original face images, they calculated a best coordinate system for image compression, where each coordinate is actually an image that they termed an eigenpicture. They argued that, at least in principle, any collection of face images can be approximately reconstructed by storing a small collection of weights for each face and a small set of standard pictures (the eigenpictures). The weights describing each face are found by projecting the face image onto each eigenpicture.

It occurred to us that if a multitude of face images can be reconstructed by weighted sums of a small collection of characteristic features or eigenpictures, perhaps an efficient way to learn and recognize faces would be to build up the characteristic features by experience over time and recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with known individuals. Each individual, therefore, would be characterized by the small set of feature or eigenpicture weights needed to describe and reconstruct them, an extremely compact representation when compared with the images themselves.

This approach to face recognition involves the following initialization operations:

1. Acquire an initial set of face images (the training set).
2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.
3. Calculate the corresponding distribution in M-dimensional weight space for each known individual, by projecting their face images onto the "face space."

These operations can also be performed from time to time whenever there is free excess computational capacity.

Having initialized the system, the following steps are then used to recognize new face images:

1. Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
2. Determine if the image is a face at all (whether known or unknown) by checking to see if the image is sufficiently close to "face space."
3. If it is a face, classify the weight pattern as either a known person or as unknown.
4. (Optional) Update the eigenfaces and/or weight patterns.
5. (Optional) If the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate it into the known faces.

Calculating Eigenfaces

Let a face image I(x,y) be a two-dimensional N by N array of (8-bit) intensity values. An image may also be considered as a vector of dimension N², so that a typical image of size 256 by 256 becomes a vector of dimension 65,536, or, equivalently, a point in 65,536-dimensional space. An ensemble of images, then, maps to a collection of points in this huge space.

Images of faces, being similar in overall configuration, will not be randomly distributed in this huge image space and thus can be described by a relatively low dimensional subspace. The main idea of principal component analysis (or Karhunen-Loève expansion) is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call "face space." Each vector is of length N², describes an N by N image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as "eigenfaces." Some examples of eigenfaces are shown in Figure 2.

Let the training set of face images be Γ₁, Γ₂, Γ₃, …, Γ_M. The average face of the set is defined by Ψ = (1/M) Σ_{n=1}^{M} Γ_n. Each face differs from the average by the vector Φ_i = Γ_i − Ψ. An example training set is shown in Figure 1a, with the average face Ψ shown in Figure 1b.

Figure 1. (a) Face images used as the training set.
Figure 1. (b) The average face Ψ.

This set of very large vectors is then subject to principal component analysis, which seeks a set of M orthonormal vectors, u_n, which best describes the distribution of the data. The kth vector, u_k, is chosen such that

    λ_k = (1/M) Σ_{n=1}^{M} (u_k^T Φ_n)²    (1)

is a maximum, subject to the orthonormality constraint

    u_l^T u_k = δ_lk    (2)

where δ_lk = 1 if l = k, and 0 otherwise. The vectors u_k and scalars λ_k are the eigenvectors and eigenvalues, respectively, of the covariance matrix

    C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = A A^T    (3)

where the matrix A = [Φ₁ Φ₂ … Φ_M]. The matrix C, however, is N² by N², and determining the N² eigenvectors and eigenvalues is an intractable task for typical image sizes. We need a computationally feasible method to find these eigenvectors.

If the number of data points in the image space is less than the dimension of the space (M < N²), there will be only M − 1, rather than N², meaningful eigenvectors. (The remaining eigenvectors will have associated eigenvalues of zero.) Fortunately we can solve for the N²-dimensional eigenvectors in this case by first solving for the eigenvectors of an M by M matrix (e.g., solving a 16 × 16 matrix rather than a 16,384 × 16,384 matrix) and then taking appropriate linear combinations of the face images. Consider the eigenvectors v_l of A^T A such that

    A^T A v_l = μ_l v_l    (4)

Premultiplying both sides by A, we have

    A A^T A v_l = μ_l A v_l    (5)

from which we see that A v_l are the eigenvectors of C = A A^T.

Following this analysis, we construct the M by M matrix L = A^T A, where L_mn = Φ_m^T Φ_n, and find the M eigenvectors, v_l, of L. These vectors determine linear combinations of the M training set face images to form the eigenfaces u_l:

    u_l = Σ_{k=1}^{M} v_lk Φ_k,    l = 1, …, M    (6)

With this analysis the calculations are greatly reduced, from the order of the number of pixels in the images (N²) to the order of the number of images in the training set (M). In practice, the training set of face images will be relatively small (M ≪ N²), and the calculations become quite manageable. The associated eigenvalues allow us to rank the eigenvectors according to their usefulness in characterizing the variation among the images. Figure 2 shows the top seven eigenfaces derived from the input images of Figure 1.

Figure 2. Seven of the eigenfaces calculated from the input images of Figure 1.

Using Eigenfaces to Classify a Face Image

The eigenface images calculated from the eigenvectors of L span a basis set with which to describe face images. Sirovich and Kirby (1987) evaluated a limited version of this framework on an ensemble of M = 115 images of Caucasian males, digitized in a controlled manner, and found that about 40 eigenfaces were sufficient for a very good description of the set of face images. With M′ = 40 eigenfaces, RMS pixel-by-pixel errors in representing cropped versions of face images were about 2%.

Since the eigenfaces seem adequate for describing face images under very controlled conditions, we decided to investigate their usefulness as a tool for face identification. In practice, a smaller M′ is sufficient for identification, since accurate reconstruction of the image is not a requirement. In this framework, identification becomes a pattern recognition task. The eigenfaces span an M′-dimensional subspace of the original N² image space. The M′ significant eigenvectors of the L matrix are chosen as those with the largest associated eigenvalues. In many of our test cases, based on M = 16 face images, M′ = 7 eigenfaces were used.

A new face image Γ is transformed into its eigenface components (projected into "face space") by a simple operation,

    ω_k = u_k^T (Γ − Ψ)    (7)

for k = 1, …, M′. This describes a set of point-by-point image multiplications and summations, operations performed at approximately frame rate on current image processing hardware. Figure 3 shows an image and its projection into the seven-dimensional face space.

The weights form a vector

    Ω^T = [ω₁ ω₂ … ω_M′]    (8)

that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The vector may then be used


in a standard pattern recognition algorithm to find which of a number of predefined face classes, if any, best describes the face. The simplest method for determining which face class provides the best description of an input face image is to find the face class k that minimizes the Euclidean distance ε_k = ‖Ω − Ω_k‖², where Ω_k is a vector describing the kth face class. The face classes Ω_k are calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual. A face is classified as belonging to class k when the minimum ε_k is below some chosen threshold θ_ε. Otherwise the face is classified as "unknown," and optionally used to create a new face class.

Because creating the vector of weights is equivalent to projecting the original face image onto the low-dimensional face space, many images (most of them looking nothing like a face) will project onto a given pattern vector. This is not a problem for the system, however, since the distance ε between the image and the face space is simply the squared distance between the mean-adjusted input image Φ = Γ − Ψ and Φ_f = Σ_{i=1}^{M′} ω_i u_i, its projection onto face space:

    ε² = ‖Φ − Φ_f‖²    (9)

Thus there are four possibilities for an input image and its pattern vector: (1) near face space and near a face class, (2) near face space but not near a known face class, (3) distant from face space and near a face class, and (4) distant from face space and not near a known face class.

In the first case, an individual is recognized and identified. In the second case, an unknown individual is present. The last two cases indicate that the image is not a face image. Case three typically shows up as a false positive in most recognition systems; in our framework, however, the false recognition may be detected because of the significant distance between the image and the subspace of expected face images. Figure 4 shows some images and their projections into face space and gives a measure of distance from the face space for each.

Summary of Eigenface Recognition Procedure

To summarize, the eigenfaces approach to face recognition involves the following steps:

1. Collect a set of characteristic face images of the known individuals. This set should include a number of images for each person, with some variation in expression and in the lighting. (Say four images of ten people, so M = 40.)
2. Calculate the (40 × 40) matrix L, find its eigenvectors and eigenvalues, and choose the M′ eigenvectors with the highest associated eigenvalues. (Let M′ = 10 in this example.)
3. Combine the normalized training set of images according to Eq. (6) to produce the (M′ = 10) eigenfaces u_k.
4. For each known individual, calculate the class vector Ω_k by averaging the eigenface pattern vectors Ω [from Eq. (8)] calculated from the original (four) images of the individual. Choose a threshold θ_ε that defines the maximum allowable distance from any face class, and a threshold θ_δ that defines the maximum allowable distance from face space [according to Eq. (9)].
5. For each new face image to be identified, calculate its pattern vector Ω, the distances ε_k to each known class, and the distance ε to face space. If the minimum distance ε_k < θ_ε and the distance ε < θ_δ, classify the input face as the individual associated with class vector Ω_k. If the minimum distance ε_k > θ_ε but distance ε < θ_δ, then the image may be classified as "unknown," and optionally used to begin a new face class.
6. If the new image is classified as a known individual, this image may be added to the original set of familiar face images, and the eigenfaces may be recalculated (steps 1-4). This gives the opportunity to modify the face space as the system encounters more instances of known faces.

In our current system calculation of the eigenfaces is done offline as part of the training. The recognition currently takes about 400 msec running rather inefficiently in Lisp on a Sun4, using face images of size 128 × 128. With some special-purpose hardware, the current version could run at close to frame rate (33 msec).

Designing a practical system for face recognition within this framework requires assessing the tradeoffs between generality, required accuracy, and speed. If the face recognition task is restricted to a small set of people (such as the members of a family or a small company), a small set of eigenfaces is adequate to span the faces of interest. If the system is to learn new faces or represent many people, a larger basis set of eigenfaces will be required. The results of Sirovich and Kirby (1987) and Kirby and Sirovich (1990) for coding of face images give some evidence that even if it were necessary to represent a large segment of the population, the number of eigenfaces needed would still be relatively small.

Locating and Detecting Faces

The analysis in the preceding sections assumes we have a centered face image, the same size as the training images and the eigenfaces. We need some way, then, to locate a face in a scene to do the recognition. We have developed two schemes to locate and/or track faces, using motion detection and manipulation of the images in "face space."
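The training and classification procedure summarized above is compact enough to sketch directly in code. The following NumPy sketch is an illustration only, not the authors' implementation (their system ran in Lisp); all function names, array shapes, and thresholds are invented for the example. It builds the eigenfaces via the M × M matrix L of Eqs. (4)-(6) and applies the distance rules of Eqs. (8) and (9):

```python
import numpy as np

def train_eigenfaces(images, m_prime):
    """images: (M, N*N) array, one vectorized face Gamma_i per row.
    Returns the average face Psi and (m_prime, N*N) orthonormal eigenfaces,
    computed via the small M x M matrix L rather than the N^2 x N^2 matrix C."""
    psi = images.mean(axis=0)                   # average face (Psi)
    A = (images - psi).T                        # columns are Phi_i = Gamma_i - Psi
    L = A.T @ A                                 # M x M matrix L = A^T A
    eigvals, V = np.linalg.eigh(L)              # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m_prime] # keep the top m_prime
    U = A @ V[:, order]                         # eigenfaces u_l = A v_l (Eq. 6)
    U /= np.linalg.norm(U, axis=0)              # normalize to unit length
    return psi, U.T

def project(face, psi, eigenfaces):
    """Pattern vector Omega: weights omega_k = u_k^T (Gamma - Psi) (Eq. 7)."""
    return eigenfaces @ (face - psi)

def classify(face, psi, eigenfaces, class_vectors, theta_class, theta_space):
    """Apply the four-case rule: reject non-faces by distance from face
    space, then match the nearest face class or report 'unknown'."""
    omega = project(face, psi, eigenfaces)
    phi = face - psi
    phi_f = eigenfaces.T @ omega                # projection onto face space
    dist_space = np.linalg.norm(phi - phi_f)    # epsilon (Eq. 9)
    if dist_space >= theta_space:
        return "not a face"
    dists = [np.linalg.norm(omega - ck) for ck in class_vectors]  # epsilon_k
    k = int(np.argmin(dists))
    return k if dists[k] < theta_class else "unknown"
```

Because the eigenvectors of L are orthonormal and correspond to distinct eigenvalues, the resulting eigenfaces come out mutually orthogonal, so normalizing each one is enough to obtain an orthonormal basis for face space.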


Figure 3. An original face image and its projection onto the face space defined by the eigenfaces of Figure 2.

Motion Detecting and Head Tracking

People are constantly moving. Even while sitting, we fidget and adjust our body position, nod our heads, look around, and such. In the case of a single person moving in a static environment, a simple motion detection and tracking algorithm, depicted in Figure 5, will locate and track the position of the head. Simple spatiotemporal filtering (e.g., frame differencing) accentuates image locations that change with time, so a moving person "lights up" in the filtered image. If the image "lights up" at all, motion is detected and the presence of a person is postulated.

After thresholding the filtered image to produce a binary motion image, we analyze the "motion blobs" over time to decide if the motion is caused by a person moving and to determine head position. A few simple rules are applied, such as "the head is the small upper blob above a larger blob (the body)," and "head motion must be reasonably slow and contiguous" (heads are not expected to jump around the image erratically). Figure 6 shows an image with the head located, along with the path of the head in the preceding sequence of frames.

The motion image also allows for an estimate of scale. The size of the blob that is assumed to be the moving head determines the size of the subimage to send to the recognition stage. This subimage is rescaled to fit the dimensions of the eigenfaces.

Using "Face Space" to Locate the Face

We can also use knowledge of the face space to locate faces in single images, either as an alternative to locating faces from motion (e.g., if there is too little motion or many moving objects) or as a method of achieving more precision than is possible by use of motion tracking alone. This method allows us to recognize the presence of faces apart from the task of identifying them.

As seen in Figure 4, images of faces do not change radically when projected into the face space, while the projection of nonface images appears quite different. This basic idea is used to detect the presence of faces in a scene: at every location in the image, calculate the distance ε between the local subimage and face space. This distance from face space is used as a measure of "faceness," so the result of calculating the distance from face space at every point in the image is a "face map" ε(x,y). Figure 7 shows an image and its face map; low values (the dark area) indicate the presence of a face.

Unfortunately, direct application of Eq. (9) is rather expensive. We have therefore developed a simpler, more efficient method of calculating the face map ε(x,y), which is described as follows.

To calculate the face map at every pixel of an image I(x,y), we need to project the subimage centered at that pixel onto face space, then subtract the projection from the original. To project a subimage Γ onto face space, we must first subtract the mean image, resulting in Φ = Γ − Ψ. With Φ_f being the projection of Φ onto face space, the distance measure at a given image location is then

    ε² = ‖Φ − Φ_f‖²
       = (Φ − Φ_f)^T (Φ − Φ_f)    (10)
       = Φ^T (Φ − Φ_f) + Φ_f^T (Φ − Φ_f)
       = Φ^T Φ − Φ_f^T Φ_f


Figure 4 . Three images and
their projections onto the face
space defined by the eigen-
faces of Figure 2. The relative
measures of distance from face
Space are (a) 29.8, (b) 58.5,
( c ) 5217.4. Images (a) and (b)
are in the original training set.

Downloaded from http://mitprc.silverchair.com/jocn/article-pdf/3/1/71/1755727/jocn.1991.3.1.71.pdf


Downloaded from http://direct.mit.edu/jocn/article-pdf/3/1/71/1932018/jocn.1991.3.1.71.pdf by guest on 24 October
since Φ_f ⊥ (Φ − Φ_f). Because Φ_f is a linear combination
of the eigenfaces (Φ_f = Σ_{i=1}^{L} ω_i u_i) and the eigenfaces
are orthonormal vectors,

    Φ_f^T Φ_f = Σ_{i=1}^{L} ω_i²     (11)

and

    ε²(x,y) = Φ^T(x,y) Φ(x,y) − Σ_{i=1}^{L} ω_i²(x,y)     (12)

where ε(x,y) and ω_i(x,y) are scalar functions of image
location, and Φ(x,y) is a vector function of image location.

The second term of Eq. (12) is calculated in practice
by a correlation with the L eigenfaces:

    ω_i(x,y) = u_i^T Φ(x,y) = u_i^T Γ(x,y) − u_i^T Ψ     (13)
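As a concrete illustration of Eqs. (10)–(13), the face map computation can be sketched in NumPy as follows. This is only an illustrative sketch, not the authors' implementation; the array names and the explicit loop over pixel positions are our own assumptions (a practical system would instead compute the ω_i terms as L correlations over the whole image).

```python
import numpy as np

def face_map(image, eigenfaces, mean_face):
    """Distance-from-face-space map eps^2(x, y), following Eqs. (10)-(13).

    image:      2-D array (H, W)
    eigenfaces: L orthonormal basis images, shape (L, h, w)
    mean_face:  average face Psi, shape (h, w)
    Returns eps^2 at every valid subimage position.
    """
    L, h, w = eigenfaces.shape
    H, W = image.shape
    out_h, out_w = H - h + 1, W - w + 1
    eps2 = np.empty((out_h, out_w))
    # Terms that depend only on the fixed mean face and eigenfaces
    # may be computed ahead of time, as noted in the text.
    psi = mean_face.ravel()
    U = eigenfaces.reshape(L, -1)          # rows are the u_i
    u_dot_psi = U @ psi                    # u_i^T Psi
    psi_dot_psi = psi @ psi                # Psi^T Psi
    for y in range(out_h):
        for x in range(out_w):
            gamma = image[y:y + h, x:x + w].ravel()
            omega = U @ gamma - u_dot_psi  # w_i = u_i^T(Gamma - Psi), Eq. (13)
            phi_dot_phi = gamma @ gamma - 2 * (gamma @ psi) + psi_dot_psi
            eps2[y, x] = phi_dot_phi - omega @ omega  # Eq. (12)
    return eps2
```

A subimage that lies entirely within face space yields ε² ≈ 0; subimages far from face space yield large values, which is exactly the “faceness” measure used to build the face map.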
78 Journal of Cognitive Neuroscience Volume 3, Number 1


Figure 5. The head tracking and locating system.

Learning to Recognize New Faces

The concept of face space allows the ability to learn and
subsequently recognize new faces in an unsupervised
manner. When an image is sufficiently close to face space
but is not classified as one of the familiar faces, it is
initially labeled as “unknown.” The computer stores the
pattern vector and the corresponding unknown image.
If a collection of “unknown” pattern vectors cluster in
the pattern space, the presence of a new but unidentified
face is postulated.

The images corresponding to the pattern vectors in
the cluster are then checked for similarity by requiring
that the distance from each image to the mean of the
images is less than a predefined threshold. If the images
pass the similarity test, the average of the feature vectors
is added to the database of known faces. Occasionally,
the eigenfaces may be recalculated using these stored
images as part of the new training set.

Figure 6. The head has been located; the image in the box is sent
to the face recognition process. Also shown is the path of the head
tracked over several previous frames.

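The clustering step described above might be sketched as follows. This is a hedged illustration only; the cluster-radius and minimum-cluster-size parameters are our own hypothetical thresholds, not values from the paper.

```python
import numpy as np

def learn_new_face(unknown_omegas, cluster_radius, min_cluster_size=3):
    """Sketch of the unsupervised learning step described above.

    unknown_omegas: list of pattern vectors labeled "unknown", shape (L,) each.
    If enough of them lie within cluster_radius of their mean, a new
    face class (the average pattern vector) is postulated and returned;
    otherwise None.
    """
    if len(unknown_omegas) < min_cluster_size:
        return None
    omegas = np.stack(unknown_omegas)
    mean = omegas.mean(axis=0)
    # Similarity test: every member must be close to the cluster mean.
    dists = np.linalg.norm(omegas - mean, axis=1)
    if np.all(dists < cluster_radius):
        return mean          # new face class, to be added to the database
    return None
```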

Since the average face Ψ and the eigenfaces u_i are fixed,
the terms Ψ^T Ψ and u_i^T Ψ may be computed ahead
of time. Thus the computation of the face map involves
only L + 1 correlations over the input image and the
computation of the first term Γ^T(x,y) Γ(x,y). This is
computed by squaring the input image I(x,y) and, at each
image location, summing the squared values of the local
subimage. As discussed in the section on Neural Networks,
these computations can be implemented by a simple
neural network.

Other Issues

A number of other issues must be addressed to obtain a
robust working system. In this section we will briefly
mention these issues and indicate methods of solution.

Eliminating the Background

In the preceding analysis we have ignored the effect of
the background. In practice, the background can significantly
affect the recognition performance, since the eigenface
analysis as described above does not distinguish the face
from the rest of the image. In the experiments described
in the section on Experiments with Eigenfaces, the
background was a significant part of the image used to
classify the faces.

To deal with this problem without having to solve
other difficult vision problems (such as robust segmentation
of the head), we have multiplied the input face
image by a two-dimensional Gaussian window centered
on the face, thus diminishing the background and
accentuating the middle of the face. Experiments in human
strategies of face recognition (Hay & Young, 1982) cite
the importance of the internal facial features for recognition
of familiar faces. Deemphasizing the outside of the face
is also a practical consideration, since changing hairstyles
may otherwise negatively affect the recognition.

Figure 7. (a) Original image. (b) The corresponding face map, where
low values (dark areas) indicate the presence of a face.

Scale (Head Size) and Orientation Invariance

The experiments in the section on Database of Face
Images show that recognition performance decreases
quickly as the head size, or scale, is misjudged. The head
size in the input image must be close to that of the
eigenfaces for the system to work well. The motion
analysis gives an estimate of head size, from which the
face image is rescaled to the eigenface size.

Another approach to the scale problem, which may be
separate from or in addition to the motion estimate, is
to use multiscale eigenfaces, in which an input face image
is compared with eigenfaces at a number of scales. In
this case the image will appear to be near the face space
of only the closest scale eigenfaces. Equivalently, we can
scale the input image to multiple sizes and use the scale
that results in the smallest distance measure to face space.

Although the eigenfaces approach is not extremely
sensitive to head orientation (i.e., sideways tilt of the
head), a non-upright view will cause some performance
degradation. An accurate estimate of the head tilt will
certainly benefit the recognition. Again, two simple methods
have been considered and tested. The first is to
calculate the orientation of the motion blob of the head.
This is less reliable as the shape tends toward a circle,
however. Using the fact that faces are reasonably symmetric
patterns, at least for frontal views, we have used
simple symmetry operators to estimate head orientation.
Once the orientation is estimated, the image can be
rotated to align the head with the eigenfaces.

Distribution in Face Space

The nearest-neighbor classification previously described
assumes a Gaussian distribution in face space of an
individual’s feature vectors Ω. Since there is no a priori
reason to assume any particular distribution, we want to
characterize it rather than assume it is Gaussian. Nonlinear
networks such as described in Fleming and Cottrell
(1990) seem to be a promising way to learn the face
space distributions by example.

Multiple Views

We are currently extending the system to deal with other
than full frontal views by defining a limited number of
face classes for each known person corresponding to
characteristic views. For example, an individual may be
represented by face classes corresponding to a frontal
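The “try several scales and keep the one nearest face space” idea can be sketched as below. This is illustrative only; the helper names and the assumption that the candidate subimages are already mean-subtracted and flattened are our own.

```python
import numpy as np

def distance_from_face_space(phi, U):
    """eps^2 = Phi^T Phi - Phi_f^T Phi_f for one mean-subtracted image
    vector phi, with orthonormal eigenfaces as the rows of U."""
    omega = U @ phi
    return phi @ phi - omega @ omega

def best_scale(candidates, U):
    """Pick the rescaled candidate whose distance from face space is
    smallest. `candidates` maps a scale factor to its (mean-subtracted,
    flattened) image vector; names are illustrative."""
    return min(candidates, key=lambda s: distance_from_face_space(candidates[s], U))
```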


face view, side views at ±45°, and right and left profile
views. Under most viewing conditions these seem to be
sufficient to recognize a face anywhere from frontal to
profile view, because the real view can be approximated
by interpolation among the fixed views.

EXPERIMENTS WITH EIGENFACES

To assess the viability of this approach to face recognition,
we have performed experiments with stored face
images and built a system to locate and recognize faces
in a dynamic environment. We first created a large
database of face images collected under a wide range of
imaging conditions. Using this database we have conducted
several experiments to assess the performance
under known variations of lighting, scale, and orientation.
The results of these experiments and early experience
with the near-real-time system are reported in this
section.

Database of Face Images

The images from Figure 1a were taken from a database
of over 2500 face images digitized under controlled
conditions. Sixteen subjects were digitized at all
combinations of three head orientations, three head sizes
or scales, and three lighting conditions. A six-level Gaussian
pyramid was constructed for each image, resulting in
image resolutions from 512 × 512 pixels down to 16 ×
16 pixels. Figure 8 shows the images from one pyramid
level for one individual.

In the first experiment the effects of varying lighting,
size, and head orientation were investigated using the
complete database of 2500 images of the 16 individuals
shown in Figure 1a. Various groups of 16 images were
selected and used as the training set. Within each training
set there was one image of each person, all taken under
the same conditions of lighting, image size, and head
orientation. All images in the database were then classified
as being one of these sixteen individuals (i.e., the
threshold θ_ε was effectively infinite, so that no faces were
rejected as unknown). Seven eigenfaces were used in
the classification process.

Statistics were collected measuring the mean accuracy
as a function of the difference between the training
conditions and the test conditions. The independent
variables were difference in illumination, imaged head size,
head orientation, and combinations of illumination, size,
and orientation.

Figure 9 shows results of these experiments for the
case of infinite θ_ε. The graphs of the figure show the
number of correct classifications for varying conditions
of lighting, size, and head orientation, averaged over the
number of experiments. For this case where every face
image is classified as known, the system achieved
approximately 96% correct classification averaged over
lighting variation, 85% correct averaged over orientation
variation, and 64% correct averaged over size variation.
As can be seen from these graphs, changing lighting
conditions causes relatively few errors, while performance
drops dramatically with size change. This is not
surprising, since under lighting changes alone the
neighborhood pixel correlation remains high, but under size
changes the correlation from one image to another is
largely lost. It is clear that there is a need for a multiscale
approach, so that faces at a particular size are compared
with one another. One method of accomplishing this is
to make sure that each “face class” includes images of
the individual at several different sizes, as was discussed
in the section on Other Issues.

In a second experiment the same procedures were
followed, but the acceptance threshold θ_ε was also varied.
At low values of θ_ε, only images that project very
closely to the known face classes will be recognized, so
that there will be few errors but many of the images will
be rejected as unknown. At high values of θ_ε most images
will be classified, but there will be more errors. Adjusting
θ_ε to achieve 100% accurate recognition boosted the
unknown rates to 19% while varying lighting, 39% for
orientation, and 60% for size. Setting the unknown rate
arbitrarily to 20% resulted in correct recognition rates
of 100%, 94%, and 74%, respectively.

These experiments show an increase of performance
accuracy as the threshold decreases. This can be tuned
to achieve effectively perfect recognition as the threshold
tends to zero, but at the cost of many images being
rejected as unknown. The tradeoff between rejection rate
and recognition accuracy will be different for each of the
various face recognition applications. However, what
would be most desirable is to have a way of setting the
threshold high, so that few known face images are
rejected as unknown, while at the same time detecting the
incorrect classifications. That is, we would like to increase
the efficiency (the d-prime) of the recognition process.

One way of accomplishing this is to also examine the
(normalized) Euclidean distance between an image and
face space as a whole. Because the projection onto the
eigenface vectors is a many-to-one mapping, there is a
potentially unlimited number of images that can project
onto the eigenfaces in the same manner, i.e., produce
the same weights. Many of these will look nothing like a
face, as shown in Figure 4c. This approach was described
in the section on Using “Face Space” to Locate the Face
as a method of identifying likely face subimages.

Real-Time Recognition

We have used the techniques described above to build
a system that locates and recognizes faces in near-real-time
in a reasonably unstructured environment. Figure
10 shows a diagram of the system. A fixed camera,
monitoring part of a room, is connected to a Datacube image
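The second experiment amounts to sweeping the acceptance threshold θ_ε and recording, at each value, how many images are rejected as unknown and how accurate the accepted classifications are. A sketch of that tradeoff computation on hypothetical data (the arrays below are illustrative, not the paper's measurements):

```python
import numpy as np

def threshold_tradeoff(distances, correct, thresholds):
    """For each acceptance threshold theta, compute the fraction of test
    images rejected as unknown and the accuracy on the accepted ones.

    distances: distance of each test image to its nearest face class
    correct:   whether that nearest class is the right identity (bool)
    """
    distances = np.asarray(distances, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    rows = []
    for theta in thresholds:
        accepted = distances < theta          # images classified as known
        unknown_rate = 1.0 - accepted.mean()
        accuracy = correct[accepted].mean() if accepted.any() else 1.0
        rows.append((theta, unknown_rate, accuracy))
    return rows
```

Lowering the threshold trades a higher unknown rate for fewer misclassifications, which is exactly the behavior reported for the 19%/39%/60% unknown rates above.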

Figure 8. Variation of face images for one individual: three head
sizes, three lighting conditions, and three head orientations.

processing system, which resides on the bus of a Sun
3/160. The Datacube digitizes the video image and performs
spatiotemporal filtering, thresholding, and subsampling
at frame rate (30 frames/sec). (The images are
subsampled to speed up the motion analysis.)

The motion detection and analysis programs run on
the Sun 3/160, first detecting a moving object and then
tracking the motion and applying simple rules to determine
if it is tracking a head. When a head is found, the
subimage, centered on the head, is sent to another
computer (a Sun Sparcstation) that is running the face
recognition program (although it could be running on the
same computer as the motion program). Using the
distance-from-face-space measure, the image is either
rejected as not a face, recognized as one of a group of
familiar faces, or determined to be an unknown face.
Recognition occurs in this system at rates of up to two
or three times per second. Until motion is detected, or
as long as the image is not perceived to be a face, there
is no output. When a face is recognized, the image of
the identified individual is displayed on the Sun monitor.

RELATIONSHIP TO BIOLOGY AND
NEURAL NETWORKS

Biological Motivations

High-level recognition tasks are typically modeled as
requiring many stages of processing, e.g., the Marr (1982)


Figure 9. Results of experiments measuring recognition performance
using eigenfaces. Each graph shows averaged performance as the
lighting conditions, head size, and head orientation vary; the y-axis
depicts number of correct classifications (out of 16). The peak (16/16
correct) in each graph results from recognizing the particular training
set perfectly. The other two graph points reveal the decline in
performance as the following parameters are varied: (a) lighting,
(b) head size (scale), (c) orientation, (d) orientation and lighting,
(e) orientation and size (#1), (f) orientation and size (#2), (g) size
and lighting, (h) size and lighting (#2).

paradigm of progressing from images to surfaces to
three-dimensional models to matched models. However,
the early development and the extreme rapidity of face
recognition make it appear likely that there must also
be a recognition mechanism based on some fast, low-level,
two-dimensional image processing.

On strictly phenomenological grounds, such a face
recognition mechanism is plausible because faces are
typically seen in a limited range of views, and are a very
important stimulus for humans from birth. The existence
of such a mechanism is also supported by the results of
a number of physiological experiments in monkey cortex
claiming to isolate neurons that respond selectively to
faces (e.g., see Perrett, Rolls, & Caan, 1982; Perrett, Mistlin,
& Chitty, 1987; Bruce, Desimone, & Gross, 1981;
Desimone, Albright, Gross, & Bruce, 1984; Rolls, Baylis,
Hasselmo, & Nalwa, 1989). In these experiments, some
cells were sensitive to identity, some to “faceness,” and
some only to particular views (such as frontal or profile).

Although we do not claim that biological systems have
“eigenface cells” or process faces in the same way as the
eigenface approach, there are a number of qualitative
similarities between our approach and current understanding
of human face recognition. For instance, relatively
small changes cause the recognition to degrade gracefully,
so that partially occluded faces can be recognized, as has
been demonstrated in single-cell recording experiments.
Gradual changes due to aging are easily handled by the
occasional recalculation of the eigenfaces, so that the
system is quite tolerant to even large changes as long as
they occur over a long period of time. If, however, a large
change occurs quickly (e.g., addition of a disguise or
change of facial hair), then the eigenfaces approach will
be fooled, as are people in conditions of casual observation.

Neural Networks

Although we have presented the eigenfaces approach to
face recognition as an information-processing model, it
may be implemented using simple parallel computing
elements, as in a connectionist system or artificial neural
network. Figure 11 shows a three-layer, fully connected
linear network that implements a significant part of the
system. The input layer receives the input (centered and
normalized) face image, with one element per image
pixel, or N elements. The weights from the input layer
to the hidden layer correspond to the eigenfaces, so that
the value of each hidden unit is the dot product of the
input image and the corresponding eigenface: ω_i = Φ^T u_i.
The hidden units, then, form the pattern vector
Ω^T = [ω_1, ω_2, . . . , ω_L].

The output layer produces the face space projection
of the input image when the output weights also correspond
to the eigenfaces (mirroring the input weights).
Adding two nonlinear components, we construct Figure
12, which produces the pattern class Ω, face space
projection Φ_f, distance measure d (between the image and
its face space projection), and a classification vector. The
classification vector is comprised of a unit for each
known face defining the pattern space distances ε_i. The
unit with the smallest value, if below the specified
threshold θ_ε, reveals the identity of the input face image.

Parts of the network of Figure 12 are similar to the
associative networks of Kohonen (1989) and Kohonen
and Lehtio (1981). These networks implement a learned
stimulus-response mapping, in which the learning phase
modifies the connection weights. An autoassociative
network implements the projection onto face space.
Similarly, reconstruction using eigenfaces can be used to
recall a partially occluded face, as shown in Figure 13.

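In matrix form, the forward pass of the networks in Figures 11 and 12 can be sketched as below. This is an illustration under our own naming assumptions, not the authors' code.

```python
import numpy as np

def network_forward(gamma, U, psi, known_omegas, theta_eps):
    """Forward pass of the linear network sketched in Figures 11 and 12.

    gamma:        input face image vector, shape (N,)
    U:            (L, N) rows are the orthonormal eigenfaces u_i
    psi:          (N,) mean face
    known_omegas: (M, L) stored pattern vectors of the known individuals
    Returns (identity index or None, face space projection phi_f).
    """
    phi = gamma - psi
    omega = U @ phi                  # hidden units: w_i = u_i^T Phi
    phi_f = U.T @ omega              # output layer: projection onto face space
    eps = np.linalg.norm(known_omegas - omega, axis=1)  # pattern space distances
    best = int(np.argmin(eps))
    identity = best if eps[best] < theta_eps else None  # "unknown" otherwise
    return identity, phi_f
```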


Figure 10. System diagram of the face recognition system (Datacube
image processor; Sun 3/160 for motion analysis; Sparcstation for
recognition).

Figure 11. Three-layer linear network for eigenface calculation. The
symmetric weights u_i are the eigenfaces, and the hidden units reveal
the projection of the input image Φ onto the eigenfaces. The output
Φ_f is the face space projection of the input image.

CONCLUSION

Early attempts at making computers recognize faces were
limited by the use of impoverished face models and
feature descriptions (e.g., locating features from an edge
image and matching simple distances and ratios), assuming
that a face is no more than the sum of its parts, the
individual features. Recent attempts using parameterized
feature models and multiscale matching look more
promising, but still face severe problems before they are
generally applicable. Current connectionist approaches
tend to hide much of the pertinent information in the
weights, which makes it difficult to modify and evaluate
parts of the approach.

The eigenface approach to face recognition was motivated
by information theory, leading to the idea of basing face
recognition on a small set of image features that best
approximates the set of known face images, without
requiring that they correspond to our intuitive notions of
facial parts and features. Although it is not an elegant
solution to the general recognition problem, the eigenface
approach does provide a practical solution that is well
fitted to the problem of face recognition. It is fast,
relatively simple, and has been shown to work well in a
constrained environment. It can also be implemented
using modules of connectionist or neural networks.

It is important to note that many applications of face
recognition do not require perfect identification, although
most require a low false-positive rate. In searching a large
database of faces, for example, it may be preferable to
find a small set of likely matches to present to the user.
For applications such as security systems or
human-computer interaction, the system will normally
be able to “view” the subject for a few seconds or minutes,
and thus will have a number of chances to recognize
the person. Our experiments show that the eigenface
technique can be made to perform at very high accuracy,
although with a substantial “unknown” rejection rate, and
thus is potentially well suited to these applications.

We are currently investigating in more detail the issues
of robustness to changes in lighting, head size, and head
orientation, automatically learning new faces, incorporating


Figure 12. Collection of networks to implement computation of the
pattern vector, projection into face space, distance from face space
measure, and identification.

Figure 13. (a) Partially occluded face image and (b) its reconstruction
using the eigenfaces.

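The reconstruction shown in Figure 13 amounts to projecting the occluded image onto the eigenfaces and mapping back, adding the mean face. A minimal sketch (the function name is ours):

```python
import numpy as np

def reconstruct(gamma, U, psi):
    """Recall a face from its eigenface weights, as in Figure 13.

    U: (L, N) orthonormal eigenfaces (rows); psi: (N,) mean face.
    """
    omega = U @ (gamma - psi)     # pattern vector of the (occluded) input
    return psi + U.T @ omega      # face space projection Phi_f plus mean
```

Any corruption lying outside the span of the eigenfaces (such as an occluded region) is simply dropped by the projection, which is why the reconstruction can fill in the missing part.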

a limited number of characteristic views for each
individual, and the tradeoffs between the number of
people the system needs to recognize and the number
of eigenfaces necessary for unambiguous classification.
In addition to recognizing faces, we are also beginning
efforts to use eigenface analysis to determine the gender
of the subject and to interpret facial expressions, two
important face processing problems that complement the
task of face recognition.

REFERENCES

Bledsoe, W. W. (1966a). The model method in facial recognition.
  Panoramic Research Inc., Palo Alto, CA, Rep. PRI:15, August.
Bledsoe, W. W. (1966b). Man-machine facial recognition. Panoramic
  Research Inc., Palo Alto, CA, Rep. PRI:22, August.
Bruce, V. (1988). Recognising faces. Hillsdale, NJ: Erlbaum.
Bruce, C. J., Desimone, R., & Gross, C. G. (1981). Journal of
  Neurophysiology, 46, 369-384.
Burt, P. (1988a). Algorithms and architectures for smart sensing.
  Proceedings of the Image Understanding Workshop, April.
Burt, P. (1988b). Smart sensing within a Pyramid Vision Machine.
  Proceedings of IEEE, 76(8), 139-153.
Cannon, S. R., Jones, G. W., Campbell, R., & Morgan, N. W. (1986).
  A computer vision system for identification of individuals.
  Proceedings of IECON, 1.
Carey, S., & Diamond, R. (1977). From piecemeal to configurational
  representation of faces. Science, 195, 312-313.
Craw, Ellis, & Lishman (1987). Automatic extraction of face features.
  Pattern Recognition Letters, 5, 183-187.
Davies, Ellis, & Shepherd (Eds.) (1981). Perceiving and remembering
  faces. London: Academic Press.
Desimone, R., Albright, T. D., Gross, C. G., & Bruce, C. J. (1984).
  Stimulus-selective properties of inferior temporal neurons in the
  macaque. Neuroscience, 4, 2051-2068.
Fischler, M. A., & Elschlager, R. A. (1973). The representation and
  matching of pictorial structures. IEEE Transactions on Computers,
  C-22(1).
Fleming, M., & Cottrell, G. (1990). Categorization of faces using
  unsupervised feature extraction. Proceedings of IJCNN-90, 2.
Goldstein, Harmon, & Lesk (1971). Identification of human faces.
  Proceedings IEEE, 59, 748.
Harmon, L. D. (1971). Some aspects of recognition of human faces.
  In O. J. Grusser & R. Klinke (Eds.), Pattern recognition. Berlin:
  Springer-Verlag.
Hay, D. C., & Young, A. W. (1982). The human face. In A. W. Ellis
  (Ed.), Normality and pathology in cognitive functions. London:
  Academic Press.
Kanade, T. (1973). Picture processing system by computer complex
  and recognition of human faces. Dept. of Information Science,
  Kyoto University.
Kaya, Y., & Kobayashi, K. (1972). A basic study on human face
  recognition. In S. Watanabe (Ed.), Frontiers of pattern recognition.
  New York: Academic Press.
Kirby, M., & Sirovich, L. (1990). Application of the Karhunen-Loeve
  procedure for the characterization of human faces. IEEE
  Transactions on Pattern Analysis and Machine Intelligence, 12(1).
Kohonen, T. (1989). Self-organization and associative memory.
  Berlin: Springer-Verlag.
Kohonen, T., & Lehtio, P. (1981). Storage and processing of
  information in distributed associative memory systems. In G. E.
  Hinton & J. A. Anderson (Eds.), Parallel models of associative
  memory. Hillsdale, NJ: Erlbaum, pp. 105-143.
Marr, D. (1982). Vision. San Francisco: W. H. Freeman.
Midorikawa, H. (1988). The face pattern identification by
  back-propagation learning procedure. Abstracts of the First Annual
  INNS Meeting, Boston, p. 515.
O’Toole, Millward, & Anderson (1988). A physical system approach
  to recognition memory for spatially transformed faces. Neural
  Networks, 1, 179-199.
Perrett, D. I., Mistlin, A. J., & Chitty, A. J. (1987). Visual neurones
  responsive to faces. TINS, 10(9), 358-364.
Perrett, Rolls, & Caan (1982). Visual neurones responsive to faces in
  the monkey temporal cortex. Experimental Brain Research, 47,
  329-342.
Rolls, E. T., Baylis, G. C., Hasselmo, M. E., & Nalwa, V. (1989). The
  effect of learning on the face selective responses of neurons in
  the cortex in the superior temporal sulcus of the monkey.
  Experimental Brain Research, 76, 153-164.
Sirovich, L., & Kirby, M. (1987). Low-dimensional procedure for the
  characterization of human faces. Journal of the Optical Society of
  America A, 4(3), 519-524.
Stonham, T. J. (1986). Practical face recognition and verification with
  WISARD. In H. Ellis, M. Jeeves, F. Newcombe, & A. Young (Eds.),
  Aspects of face processing. Dordrecht: Martinus Nijhoff.
Wong, K., Law, H., & Tsang, P. (1989). A system for recognising
  human faces. Proceedings of ICASSP, May, 1638-1642.
Yuille, A. L., Cohen, D. S., & Hallinan, P. W. (1989). Feature
  extraction from faces using deformable templates. Proceedings of
  CVPR, San Diego, CA, June.