Face Recognition using PCA (Eigenfaces) and LDA (Fisherfaces)

Slides adapted from Pradeep Buddharaju

Principal Component Analysis

• An N x N pixel image of a face, represented as a vector, occupies a single point in N²-dimensional image space.

• Because images of faces are similar in overall configuration, they are not randomly distributed in this huge image space.

• Therefore, they can be described by a low-dimensional subspace.

Main idea of PCA for faces:

• Find the vectors that best account for the variation of face images in the entire image space. These vectors are called eigenvectors.

• Construct a face space from these eigenvectors (the eigenfaces) and project the face images into it.

Image Representation

• A training set of M images of size N x N is represented by vectors x_1, x_2, …, x_M, each of size N² x 1.

• Example: the 3 x 3 image

    1 2 3
    3 1 2
    4 5 1

  is represented by the 9 x 1 vector (1 2 3 3 1 2 4 5 1)ᵀ.
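
An illustrative sketch of this representation in NumPy (the sizes and the random stand-in data are hypothetical):

    import numpy as np

    M, N = 100, 32                       # hypothetical number/size of training images
    faces = np.random.rand(M, N, N)      # stand-in for real grayscale face images

    # Flatten each N x N image into an N^2-dimensional row vector
    X = faces.reshape(M, N * N)          # shape (M, N^2)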

Average Image and Difference Images

• The average face of the training set is defined by m = (1/M) Σ_{i=1}^{M} x_i.

• Each face differs from the average by the vector r_i = x_i − m.
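
Continuing the same sketch:

    # Average face m and difference images r_i = x_i - m
    mean_face = X.mean(axis=0)           # shape (N^2,)
    R = X - mean_face                    # row i is r_i, shape (M, N^2)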

Covariance Matrix

• The covariance matrix is constructed as C = A Aᵀ, where A = [r_1 … r_M].

• The size of this matrix is N² x N², and finding the eigenvectors of an N² x N² matrix is intractable.

• Hence, use the matrix AᵀA of size M x M and find the eigenvectors of this small matrix instead.

Eigenvalues and Eigenvectors: Definition

• If v is a nonzero vector and λ is a number such that Av = λv, then v is said to be an eigenvector of A with eigenvalue λ.

• Example:

    [2 1] [1]   [3]     [1]
    [1 2] [1] = [3] = 3 [1]
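
The example can be verified numerically, e.g.:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    vals, vecs = np.linalg.eigh(A)       # ascending eigenvalues for a symmetric matrix
    print(vals)                          # [1. 3.]
    print(vecs[:, -1])                   # eigenvector for λ = 3, proportional to [1, 1]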

Eigenvectors of Covariance Matrix

• Consider the eigenvectors v_i of AᵀA, such that AᵀA v_i = μ_i v_i.

• Premultiplying both sides by A, we have A Aᵀ(A v_i) = μ_i (A v_i).

• Hence the eigenvectors of C = A Aᵀ are the vectors A v_i.
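
A sketch of this trick, continuing the arrays above (here A = Rᵀ, so AᵀA = R Rᵀ is only M x M):

    small = R @ R.T                      # A^T A, shape (M, M) -- tractable
    mu, V = np.linalg.eigh(small)        # eigenvalues ascending; columns of V are v_i

    # Map back to the big space: u_i = A v_i are eigenvectors of C = A A^T
    U = R.T @ V                          # shape (N^2, M)
    U /= np.linalg.norm(U, axis=0)       # normalize each column; columns with
                                         # near-zero eigenvalue are discarded later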

Face Space

• The eigenvectors of the covariance matrix are u_i = A v_i.

• The u_i resemble ghostly facial images, and are hence called eigenfaces.

Projection into Face Space

• A face image can be projected into this face space by p_k = Uᵀ(x_k − m), where k = 1, …, M.
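
Keeping the top K eigenfaces (K is a hypothetical design choice), projection looks like:

    K = 20                               # hypothetical number of eigenfaces kept
    Uk = U[:, -K:]                       # eigenfaces for the K largest eigenvalues
    P = (X - mean_face) @ Uk             # row k is p_k = Uk^T (x_k - m)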

Recognition

• The test image x is projected into the face space to obtain a vector p: p = Uᵀ(x − m).

• The distance of p to each face class is defined by Є_k² = ||p − p_k||², k = 1, …, M.

• A distance threshold Ө_c is defined as half the largest distance between any two face images: Ө_c = ½ max_{j,k} ||p_j − p_k||, j, k = 1, …, M.
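
A sketch of the distance computations, continuing the example (SciPy's pdist gives the pairwise distances; x_test is a stand-in):

    from scipy.spatial.distance import pdist

    x_test = np.random.rand(N * N)            # stand-in for a real test image
    p = (x_test - mean_face) @ Uk             # project into face space
    eps_k = np.linalg.norm(P - p, axis=1)     # Є_k = ||p - p_k|| for each k

    theta_c = 0.5 * pdist(P).max()            # Ө_c = half the largest pairwise distance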

Recognition (continued)

• Find the distance Є between the original image x and its reconstruction from the eigenface space, xf: Є² = ||x − xf||², where xf = U p + m.

• Recognition process:
  – IF Є ≥ Ө_c, then the input image is not a face image.
  – IF Є < Ө_c AND Є_k ≥ Ө_c for all k, then the input image contains an unknown face.
  – IF Є < Ө_c AND Є_k* = min_k {Є_k} < Ө_c, then the input image contains the face of individual k*.
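
And the decision rules, continuing the sketch:

    x_rec = Uk @ p + mean_face                # reconstruction xf = U p + m
    eps = np.linalg.norm(x_test - x_rec)      # Є = ||x - xf||

    if eps >= theta_c:
        print("input is not a face image")
    elif eps_k.min() >= theta_c:
        print("input contains an unknown face")
    else:
        print("input contains the face of individual", eps_k.argmin())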

Limitations of Eigenfaces Approach

• Variations in lighting conditions
  – Different lighting conditions for enrolment and query.
  – Bright light causing image saturation.

• Differences in pose
  – Head orientation.
  – 2D feature distances appear to distort.

• Expression
  – Change in feature location and shape.

Linear Discriminant Analysis

• PCA does not use class information: PCA projections are optimal for reconstruction from a low-dimensional basis, but they may not be optimal from a discrimination standpoint.

• LDA is an enhancement to PCA: it constructs a discriminant subspace that minimizes the scatter between images of the same class and maximizes the scatter between images of different classes.

Mean Images

• Let X_1, X_2, …, X_c be the face classes in the database, and let each face class X_i, i = 1, …, c, have k facial images x_j, j = 1, …, k.

• We compute the mean image m_i of each class X_i as:

  m_i = (1/k) Σ_{j=1}^{k} x_j

• Now, the mean image m of all the classes in the database can be calculated as:

  m = (1/c) Σ_{i=1}^{c} m_i
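
A sketch of the class means, assuming hypothetical integer labels y for the rows of X:

    y = np.random.randint(0, 5, size=M)       # hypothetical labels; c = 5 classes
    c = y.max() + 1
    class_means = np.stack([X[y == i].mean(axis=0) for i in range(c)])  # m_i
    overall_mean = class_means.mean(axis=0)                             # m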

Scatter Matrices

• We calculate the within-class scatter matrix as:

  S_W = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k − m_i)(x_k − m_i)ᵀ

• We calculate the between-class scatter matrix as:

  S_B = Σ_{i=1}^{c} N_i (m_i − m)(m_i − m)ᵀ
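
Continuing the sketch (in practice these are computed on PCA-reduced features, as the later slides note, since N² x N² matrices are large):

    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for i in range(c):
        Di = X[y == i] - class_means[i]
        Sw += Di.T @ Di                       # Σ (x_k - m_i)(x_k - m_i)^T over class i
        dm = (class_means[i] - overall_mean)[:, None]
        Sb += len(Di) * (dm @ dm.T)           # N_i (m_i - m)(m_i - m)^T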

Multiple Discriminant Analysis

• We find the projection directions as the matrix Ŵ that maximizes

  Ŵ = argmax_W J(W) = |Wᵀ S_B W| / |Wᵀ S_W W|

• This is a generalized eigenvalue problem, where the columns of Ŵ are given by the vectors w_i that solve

  S_B w_i = λ_i S_W w_i
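
SciPy's generalized symmetric eigensolver handles this problem directly, provided S_W is positive definite (which is why the feature space is PCA-reduced first, as the next slide explains). A standalone toy illustration of the solver:

    import numpy as np
    from scipy.linalg import eigh

    # Toy symmetric S_B and positive-definite S_W, just to illustrate the solver
    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5)); Sb_toy = B @ B.T
    G = rng.standard_normal((5, 5)); Sw_toy = G @ G.T + 5 * np.eye(5)

    lam, Wmat = eigh(Sb_toy, Sw_toy)      # solves S_B w = λ S_W w, λ ascending
    w_best = Wmat[:, -1]                  # direction with the largest λ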

Fisherface Projection

• We find the product of S_W⁻¹ and S_B, and then compute the eigenvectors of this product (S_W⁻¹ S_B), after reducing the dimension of the feature space.

• Use the same technique as in the Eigenfaces approach to reduce the dimensionality of the scatter matrices before computing the eigenvectors.

• Form a matrix W that represents all eigenvectors of S_W⁻¹ S_B by placing each eigenvector w_i as a column in W.

• Each face image x_j ∈ X_i can be projected into this face space by the operation p_i = Wᵀ(x_j − m).
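
Putting the pieces together, a compact sketch of the Fisherface training pipeline (the choice M − c for the PCA dimension follows Belhumeur et al. and is an assumption here, as is all the toy data carried over from above):

    from scipy.linalg import eigh

    n_pca = M - c                             # PCA dimension so S_W becomes nonsingular
    U_pca = U[:, -n_pca:]                     # top PCA eigenvectors from earlier
    Z = (X - mean_face) @ U_pca               # PCA-reduced features, shape (M, n_pca)

    # Scatter matrices in the reduced space
    mz = np.stack([Z[y == i].mean(axis=0) for i in range(c)])
    Swz = np.zeros((n_pca, n_pca)); Sbz = np.zeros((n_pca, n_pca))
    for i in range(c):
        Di = Z[y == i] - mz[i]
        Swz += Di.T @ Di
        dv = (mz[i] - mz.mean(axis=0))[:, None]
        Sbz += len(Di) * (dv @ dv.T)

    lam, Wz = eigh(Sbz, Swz)                  # generalized eigenproblem, ascending λ
    W_fisher = U_pca @ Wz[:, -(c - 1):]       # combined projection, shape (N^2, c-1)
    P_fisher = (X - mean_face) @ W_fisher     # p = W^T (x - m) for every image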


Testing

• Same as in the Eigenfaces approach.

References

• Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1991) 71–86.

• Belhumeur, P., Hespanha, J., Kriegman, D.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997) 711–720.