Face Recognition using PCA (Eigenfaces) and LDA (Fisherfaces)

Slides adapted from Pradeep Buddharaju

Principal Component Analysis

An N x N pixel image of a face, represented as a vector, occupies a single point in N²-dimensional image space.

Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space.

Therefore, they can be described by a low-dimensional subspace.

Main idea of PCA for faces:

• Find the vectors that best account for the variation of face images in the entire image space. These vectors are called eigenvectors.
• Construct a face space and project the images into this face space (eigenfaces).

Image Representation
• A training set of m images of size N x N is represented by vectors of size N²: x1, x2, …, xm.
• Example: the 3 x 3 image
      1 2 3
      3 1 2
      4 5 1
  is represented by the 9 x 1 vector [1 2 3 3 1 2 4 5 1]ᵀ.
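
A minimal NumPy sketch of this representation (the placeholder training images are illustrative, not from the slides):

```python
import numpy as np

img = np.array([[1, 2, 3],
                [3, 1, 2],
                [4, 5, 1]])                       # the 3 x 3 example image from the slide

x = img.reshape(-1)                               # row-wise flattening into a length-9 vector
print(x)                                          # [1 2 3 3 1 2 4 5 1]

# A training set of m such images becomes an N^2 x m matrix, one image per column.
training = [img, img + 1, 2 * img]                # placeholder "images" for illustration
X = np.stack([im.reshape(-1) for im in training], axis=1)
print(X.shape)                                    # (9, 3)
```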

Average Image and Difference Images
• The average face of the training set is defined by Ψ = (1/m) ∑_{i=1}^{m} xi.
• Each face differs from the average by the vector ri = xi − Ψ.
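
A short sketch of these two steps, using random toy vectors in place of real face images (the names psi and R are mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((9, 5))               # m = 5 toy "images", each an N^2 = 9 vector (columns)

psi = X.mean(axis=1)                 # average face: psi = (1/m) * sum_i x_i
R = X - psi[:, None]                 # difference vectors r_i = x_i - psi, stored as columns
print(R.mean(axis=1))                # ~0: the differences average out by construction
```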

Covariance Matrix
• The covariance matrix is constructed as C = A Aᵀ, where A = [r1, …, rm].
• The size of this matrix is N² x N².
• Finding the eigenvectors of an N² x N² matrix is intractable. Hence, use the matrix Aᵀ A of size m x m and find the eigenvectors of this small matrix.
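
The dimensionality trick can be sketched as follows (toy sizes; for real faces N² would be tens of thousands while m stays small):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((9, 5))               # columns are the difference vectors r_1, ..., r_m

# Instead of eigendecomposing the N^2 x N^2 matrix C = A A^T,
# eigendecompose the small m x m matrix A^T A.
small = A.T @ A                      # 5 x 5 instead of 9 x 9
eigvals, V = np.linalg.eigh(small)   # eigh: for symmetric matrices, eigenvalues ascending
eigvals, V = eigvals[::-1], V[:, ::-1]   # largest eigenvalues first
```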

Eigenvalues and Eigenvectors
• Definition: If v is a nonzero vector and λ is a number such that Av = λv, then v is said to be an eigenvector of A with eigenvalue λ.
• Example:
      [2 1] [1]       [1]
      [1 2] [1]  =  3 [1]
        A    v    (eigenvalue λ = 3, eigenvector v)
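
The example on this slide can be checked numerically (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

print(A @ v)                 # [3. 3.] == 3 * v, so v is an eigenvector with eigenvalue 3
vals, vecs = np.linalg.eig(A)
print(vals)                  # eigenvalues 3 and 1 (eigenvectors along [1, 1] and [1, -1])
```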

Eigenvectors of Covariance Matrix
• Consider the eigenvectors vi of Aᵀ A such that Aᵀ A vi = λi vi.
• Premultiplying both sides by A, we have A Aᵀ(A vi) = λi (A vi).
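
This identity is easy to verify numerically; a small sketch with random toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((9, 5))

lam, V = np.linalg.eigh(A.T @ A)          # A^T A v_i = lambda_i v_i
u = A @ V[:, -1]                          # A v_i for the largest eigenvalue
print(np.allclose((A @ A.T) @ u, lam[-1] * u))   # True: A A^T (A v_i) = lambda_i (A v_i)
```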

Face Space
• The eigenvectors of the covariance matrix are ui = A vi.
• The ui resemble ghostly facial images, hence they are called Eigenfaces.
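
Continuing the sketch: the eigenfaces are the columns ui = A vi (dropping near-zero eigenvalues and normalizing to unit length are my additions, not stated on the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((9, 5))                    # difference vectors as columns

eigvals, V = np.linalg.eigh(A.T @ A)      # eigenvectors v_i of the small matrix
V = V[:, eigvals > 1e-10]                 # drop directions with (near-)zero eigenvalue
U = A @ V                                 # u_i = A v_i: eigenvectors of C = A A^T
U /= np.linalg.norm(U, axis=0)            # normalize each eigenface to unit length
# Each column of U, reshaped back to N x N, can be displayed as an "eigenface" image.
```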

Projection into Face Space
• A face image can be projected into this face space by pk = Uᵀ(xk − Ψ), where k = 1, …, m.
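
A sketch of the projection step, reusing the construction from the previous sketches (toy data again):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((9, 5))                    # training images as columns
psi = X.mean(axis=1)
A = X - psi[:, None]

eigvals, V = np.linalg.eigh(A.T @ A)
V = V[:, eigvals > 1e-10]
U = A @ V
U /= np.linalg.norm(U, axis=0)            # face-space basis U (columns u_i)

P = U.T @ (X - psi[:, None])              # p_k = U^T (x_k - psi) for every training image
print(P.shape)                            # (4, 5) here: one coefficient vector per image
```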

Recognition
• The test image x is projected into the face space to obtain a vector p: p = Uᵀ(x − Ψ).
• The distance of p to each face class is defined by εk² = ||p − pk||², k = 1, …, m.
• A distance threshold θc is half the largest distance between any two face images: θc = ½ max_{j,k} ||pj − pk||, j, k = 1, …, m.

Recognition (cont.)
• Find the distance ε between the original image x and its reconstruction xf from the eigenface space: ε² = ||x − xf||², where xf = U p + Ψ.
• Recognition process:
  – IF ε ≥ θc THEN the input image is not a face image.
  – IF ε < θc AND εk ≥ θc for all k THEN the input image contains an unknown face.
  – IF ε < θc AND εk* = mink{εk} < θc THEN the input image contains the face of individual k*.
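
A sketch of this decision rule as a function (the function and variable names are mine; distances are compared unsquared, which is equivalent to the slide's squared form):

```python
import numpy as np

def recognize(x, U, psi, P_train, theta_c):
    """x: test image as an N^2 vector; U: eigenface basis; psi: average face;
    P_train: projected known faces p_k as columns; theta_c: distance threshold."""
    p = U.T @ (x - psi)                    # project into face space
    x_f = U @ p + psi                      # reconstruction from the face space
    eps = np.linalg.norm(x - x_f)          # distance to the face space
    eps_k = np.linalg.norm(P_train - p[:, None], axis=0)  # distance to each face class

    if eps >= theta_c:
        return "not a face"
    if eps_k.min() >= theta_c:
        return "unknown face"
    return f"face of individual {int(eps_k.argmin())}"
```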

Limitations of the Eigenfaces Approach
• Variations in lighting conditions
  – Different lighting conditions for enrolment and query.
  – Bright light causing image saturation.
• Differences in pose
  – Head orientation.
  – 2D feature distances appear to distort.
• Expression
  – Change in feature location and shape.

Linear Discriminant Analysis
• PCA does not use class information
  – PCA projections are optimal for reconstruction from a low-dimensional basis, but they may not be optimal from a discrimination standpoint.
• LDA is an enhancement to PCA
  – It constructs a discriminant subspace that minimizes the scatter between images of the same class and maximizes the scatter between images of different classes.

Mean Images
• Let X1, X2, …, Xc be the face classes in the database, and let each face class Xi, i = 1, 2, …, c, have k facial images xj, j = 1, 2, …, k.
• We compute the mean image μi of each class Xi as: μi = (1/k) ∑_{j=1}^{k} xj.
• Now, the mean image μ of all the classes in the database can be calculated as: μ = (1/c) ∑_{i=1}^{c} μi.
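
A sketch of the two means with toy data (class sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# classes[i] holds the k images of face class X_i as columns (toy values here)
classes = [rng.random((9, 4)) for _ in range(3)]        # c = 3 classes, k = 4 images each

mu_i = [Xi.mean(axis=1) for Xi in classes]              # per-class mean images mu_i
mu = np.mean(mu_i, axis=0)                              # overall mean mu = (1/c) * sum_i mu_i
```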

Scatter Matrices
• We calculate the within-class scatter matrix as: SW = ∑_{i=1}^{c} ∑_{xk ∈ Xi} (xk − μi)(xk − μi)ᵀ.
• We calculate the between-class scatter matrix as: SB = ∑_{i=1}^{c} Ni (μi − μ)(μi − μ)ᵀ.
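
A sketch of both scatter matrices, continuing the toy setup from the previous sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
classes = [rng.random((9, 4)) for _ in range(3)]        # toy face classes, images as columns
mu_i = [Xi.mean(axis=1) for Xi in classes]
mu = np.mean(mu_i, axis=0)

d = classes[0].shape[0]
S_W = np.zeros((d, d))                                  # within-class scatter
S_B = np.zeros((d, d))                                  # between-class scatter
for Xi, mi in zip(classes, mu_i):
    D = Xi - mi[:, None]                                # x_k - mu_i for every image of class i
    S_W += D @ D.T
    S_B += Xi.shape[1] * np.outer(mi - mu, mi - mu)     # N_i (mu_i - mu)(mu_i - mu)^T
```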

Multiple Discriminant Analysis
• We find the projection directions as the matrix W that maximizes
      Ŵ = argmax_W J(W) = |Wᵀ SB W| / |Wᵀ SW W|.
• This is a generalized eigenvalue problem where the columns of W are given by the vectors wi that solve SB wi = λi SW wi.
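
A sketch of solving the generalized eigenvalue problem with SciPy (assuming SciPy is available and SW is nonsingular; the toy matrices below stand in for the real scatter matrices):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M = rng.random((9, 9)); S_W = M @ M.T + np.eye(9)       # toy symmetric positive-definite S_W
B = rng.random((9, 3)); S_B = B @ B.T                   # toy between-class scatter

# Generalized eigenproblem S_B w = lambda S_W w; eigh returns eigenvalues in ascending order.
eigvals, W = eigh(S_B, S_W)
W = W[:, ::-1]                                          # columns with the largest eigenvalues first
```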

Fisherface Projection
• We find the product of SW⁻¹ and SB and then compute the eigenvectors of this product (SW⁻¹ SB), after reducing the dimension of the feature space.
• Use the same technique as in the Eigenfaces approach to reduce the dimensionality of the scatter matrix before computing the eigenvectors.
• Form a matrix W that represents all eigenvectors of SW⁻¹ SB by placing each eigenvector wi as a column in W.
• Each face image xj ∈ Xi can be projected into this face space by the operation pi = Wᵀ(xj − μ).
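
A compact end-to-end sketch of this pipeline (toy data; the m − c and c − 1 dimension choices and all variable names are my own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
classes = [rng.random((25, 6)) for _ in range(4)]       # c = 4 toy classes, 6 images of 25 pixels

# 1) PCA step (as in Eigenfaces) to reduce dimension so that S_W becomes invertible.
X = np.hstack(classes)                                  # all m = 24 images as columns
psi = X.mean(axis=1)
A = X - psi[:, None]
eigvals, V = np.linalg.eigh(A.T @ A)
U = A @ V[:, -(X.shape[1] - len(classes)):]             # keep m - c principal directions
U /= np.linalg.norm(U, axis=0)
Y = [U.T @ (Xi - psi[:, None]) for Xi in classes]       # PCA-projected classes

# 2) Scatter matrices in the reduced space.
mu_i = [Yi.mean(axis=1) for Yi in Y]
mu = np.mean(mu_i, axis=0)
d = Y[0].shape[0]
S_W, S_B = np.zeros((d, d)), np.zeros((d, d))
for Yi, mi in zip(Y, mu_i):
    D = Yi - mi[:, None]
    S_W += D @ D.T
    S_B += Yi.shape[1] * np.outer(mi - mu, mi - mu)

# 3) Eigenvectors of S_W^{-1} S_B give the discriminant directions; place them as columns of W.
vals, vecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
order = np.argsort(np.real(vals))[::-1][:len(classes) - 1]   # keep c - 1 directions
W = np.real(vecs[:, order])

# 4) Project a (PCA-reduced) face image: p = W^T (y - mu)
p = W.T @ (Y[0][:, 0] - mu)
```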


Testing
• Same as in the Eigenfaces approach.

References
• Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1991) 71–86.
• Belhumeur, P., Hespanha, J., Kriegman, D.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 711–720.