
# Face Recognition

## A Report on Face Recognition Using Principal Component Analysis

Final Report

Prepared for:
Submission of Assignment Work (3) on Applied Mathematics

Prepared by:

Darshan Venkatrayappa
darsh.venkat@gmail.com

Sharib Ali
ali.sharib2002@gmail.com

Submitted to

Desire Sidebe
Dro-desire.sidebe@u-bourgogne.fr

17th November 2010
Contents

- Acronyms
- Chapter 1 Introduction to PCA in Face Recognition
  - 1.1 Background
  - 1.2 Objective
  - 1.3 Problem Statement
  - 1.4 Stages in Face Recognition
- Chapter 2 Normalization
  - 2.1 Why is Normalization of Images Required?
  - 2.1.1 Flow Chart and Algorithm of Normalization
  - 2.1.2 Algorithm for Mapping of the Image to a 64x64 Window
- Chapter 3 Eigenfaces and Eigenspace
  - 3.1 What Do They Mean?
  - 3.2 Algorithm
- Chapter 4 Recognition of the Face
  - 4.1 Analysis
  - 4.2 Algorithm
  - 4.3 Result
    - 4.3.1 Accuracy
    - 4.3.2 Limitations
    - 4.3.3 Scope of Improvement
- Chapter 5 Conclusions
- References

Acronyms

SVD = Singular Value Decomposition
PCA = Principal Component Analysis
EV = Eigenvector

## Chapter 1 Introduction to PCA in Face Recognition
1.1 Background
The face recognition system has various potential applications, such as:

- person identification
- human-computer interaction
- security systems

Its history goes back to the beginning of computer vision. Other biometric approaches, such as iris and fingerprint recognition, have also been used in these applications, but face recognition has been more widely researched. It has always remained a major focus of research because of its non-invasive nature and because it is people's primary method of person identification. The most famous early example of a face recognition system is due to Kohonen. Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization.

In Principal Component Analysis for face recognition, we train on the faces and create a database of the trained sample images. We then find the covariance matrix of this training set. The principal components (eigenvectors) of this matrix correspond to the eigenfaces, which are ghost-like in appearance. Each face in the training set is then a linear combination of these eigenvectors. When we take a test image for recognition, we follow the same normalization steps and project the test image into the same eigenspace containing the eigenfaces. Finally, we calculate the Euclidean distance between the eigenface weights of each trained image and those of the test image, and take the minimum. This is explained in the discussion that follows.

The main purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This works here because there is strong correlation between the observed variables.
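The whole pipeline described above can be sketched in a few lines of NumPy. This is an illustrative assumption, not the authors' MATLAB code: random arrays stand in for the normalized training faces, and all names, sizes and the choice to keep the leading components are made up for the sketch.

```python
import numpy as np

# Illustrative stand-ins: 57 flattened 64x64 "faces", one per row.
rng = np.random.default_rng(0)
train = rng.random((57, 4096))
test = train[12] + 0.01 * rng.random(4096)   # a noisy copy of face 12

mean_face = train.mean(axis=0)
D = train - mean_face                        # mean-subtracted data matrix

# Small-covariance trick: diagonalize D D^T (57x57), not D^T D (4096x4096)
Cp = D @ D.T / (len(D) - 1)
vals, vecs = np.linalg.eigh(Cp)              # eigenvalues in ascending order
eigenfaces = D.T @ vecs[:, -50:]             # keep the 50 leading eigenfaces
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# Project the training faces and the test face into the eigenspace
train_w = D @ eigenfaces
test_w = (test - mean_face) @ eigenfaces

# Nearest training face by Euclidean distance is the recognized identity
match = int(np.argmin(np.linalg.norm(train_w - test_w, axis=1)))
```

Each of these steps (normalization aside) is detailed in Chapters 3 and 4.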

1.2 Objective
i. To study and implement the concepts of SVD and PCA.
ii. To decrease the dimension of the feature space.
iii. To make the computation fast.
iv. To exploit the strong correlation between observed variables.
v. To see how the eigenvectors give the principal components of the image to be recognized from a trained set of images.
vi. To make an effort towards improving the result obtained.

## 1.3 Problem Statement

- Given an image, to identify it as a face and/or extract face images from it.
- To retrieve similar images (based on PCA) from the given database of trained face images.

1.4 Stages in Face Recognition

Fig. Stages in face recognition: training of the images (to create the database and eigenspace), test face location, and identification.

Chapter 2. Normalization
2.1 Why is Normalization of Images Required?
The normalization steps adjust for variations in scaling, orientation and location by aligning all the images to some predefined features. Basically, all the images are mapped to a 64x64 window (in our case) using some of the important facial features. In our case, we have taken: 1. Left eye center, 2. Right eye center, 3. Tip of nose, 4. Left mouth corner, and 5. Right mouth corner.

## 2.1.1 Flow chart and Algorithm for Normalization


## Fig. Flow Chart for Obtaining the Converged FBAR

*which gives the affine transformation 'A' and 'b' used to map the images to the 64x64 window

Algorithm:

1. We take the predefined coordinates f_p of the five features in the 64x64 window:

   f_p = [14 20; 50 20; 34 34; 16 50; 48 50]

   (left eye center, right eye center, tip of nose, left mouth corner, right mouth corner)

2. We take all the feature files f_i, starting from the first feature file, and compute the affine transformation that maps each one to f_p.

3. We update FBAR each time by using SVD:

   FBar = Singular_Value_Decomposition(FBar, fp);

4. We take the average of all the FBARs calculated and update FBAR again with this average value.
5. Now, we compare the previous result with the current one; if the difference is smaller than the threshold error 10^-6, we come out of the loop, and the final converged FBAR gives the affine transformation matrix A and vector b.
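As a sketch, the loop above might look as follows in NumPy. The MATLAB routine Singular_Value_Decomposition is replaced here by np.linalg.lstsq (which itself solves the least-squares fit via an SVD), and the function names and feature-array layout are assumptions.

```python
import numpy as np

def affine_fit(src, dst):
    """Least-squares affine transform (A, b) with A @ src_i + b ~= dst_i."""
    M = np.hstack([src, np.ones((len(src), 1))])       # 5x3 design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)   # SVD-based solve
    return params[:2].T, params[2]                     # A (2x2), b (2,)

def align_mean_shape(features, fbar, tol=1e-6, max_iter=100):
    """Iteratively re-estimate the mean feature shape FBAR until convergence."""
    for _ in range(max_iter):
        mapped = []
        for f in features:
            A, b = affine_fit(f, fbar)                 # map each face's features
            mapped.append(f @ A.T + b)                 # onto the current mean
        new_fbar = np.mean(mapped, axis=0)             # step 4: average
        if np.max(np.abs(new_fbar - fbar)) < tol:      # step 5: converged?
            return new_fbar
        fbar = new_fbar
    return fbar

# Predefined feature coordinates in the 64x64 window (from step 1 above)
fbar0 = np.array([[14, 20], [50, 20], [34, 34], [16, 50], [48, 50]], float)
```

Each element of `features` is assumed to be a 5x2 array of (x, y) feature coordinates read from one feature file.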

## 2.1.2 Algorithm for Mapping the 384x384 Image into the 64x64 Window

Since we have the matrix A and vector b, we can easily map the pixels of the 384x384 image into the 64x64 window, using the following algorithm.

1. We use FBAR to get the values of 'A' and 'b'. The first four values give the entries of matrix A and the last two give the vector b.

   x = (V*S*U') * F_BAR;
   a = x(1:4,1);
   b = x(5:6,1);
   A = zeros(2,2);
   A(:,1) = a(1:2,1);
   A(:,2) = a(3:4,1);

2. Since we know the transformation matrix, we plot each pixel of the 64x64 window into the 384x384 image using

   F_384x384 = A^-1 (F_64x64 - b)

3. We extract this window image, i.e., the transformed image of size 64x64.
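The backward mapping of step 2 can be sketched as below, under the assumption that A maps 384x384 coordinates into the 64x64 window; the function name is illustrative and nearest-neighbour lookup is used for simplicity.

```python
import numpy as np

def warp_to_window(img384, A, b, size=64):
    """For every pixel (x, y) of the 64x64 window, read the source pixel
    A^-1 ((x, y) - b) from the 384x384 image (nearest-neighbour lookup)."""
    out = np.zeros((size, size), dtype=img384.dtype)
    A_inv = np.linalg.inv(A)
    for y in range(size):
        for x in range(size):
            sx, sy = A_inv @ (np.array([x, y], float) - b)   # source coords
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < img384.shape[1] and 0 <= sy < img384.shape[0]:
                out[y, x] = img384[sy, sx]
    return out
```

Iterating over destination pixels and inverting the transform (rather than scattering source pixels forward) guarantees that every pixel of the 64x64 window gets a value.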

## Fig. Normalized and Mapped Images in the 64x64 Window (normalized image 1 and normalized image 2)

2.2 Limitations
- Non-uniform illumination.
- Images that do not show the correct positions of the features taken for the affine transform may result in non-convergence or poor results.

## Chapter 3 Eigenfaces and Eigenspace

3.1 What Do They Mean?

PCA computes the basis of the space represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors. Each eigenface can be viewed as a feature. Because these eigenvectors have a face-like appearance, they are called eigenfaces. Sometimes they are also called ghost images because of their weird appearance. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is expressed in the face space by its eigenface coefficients (or weights).

## 3.2 Algorithm for Creating the Eigenface Database

1. We take the mean of all 57 training faces and subtract it from each face. We concatenate all the mean-subtracted faces into a single matrix, which we call D:

   D = [ I_1(1,1)  ...  I_1(64,64)
            .                .
            .                .
         I_57(1,1) ... I_57(64,64) ]

   So D is of size 57x4096, with one flattened face per row.

2. We could use the covariance formula to compute the PCA,

   C = (1/(N-1)) D^T D,

   and compute the eigenvalues and eigenvectors, whose principal components give the eigenfaces and hence the eigenspace. But if the database has many images, the computation becomes very expensive: even in our case this gives a 4096x4096 matrix. Since the number of non-zero eigenvalues of the covariance matrix is limited to N (57), we calculate it the other way around, which reduces the dimension but still gives eigenvectors that correspond to the eigenvectors of this covariance matrix.

3. So we compute C' = (1/(N-1)) D D^T instead. This reduces the dimension to 57x57 in our case, and the eigenvector matrix computed from it is of size 57x57.
4. Now we find the eigenfaces, which are given by multiplying D^T by the eigenvectors obtained from C'.

5. Each face in the training set can then be represented as a linear combination of these eigenvectors, with weights

   Ω_i = X_i · U,

   where U denotes the principal components (eigenfaces) and X_i is the i-th (mean-subtracted) training image.
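A quick numerical check of the claim in steps 2-4 (that an eigenvector v of C' yields an eigenvector D^T v of C with the same eigenvalue) can be done at toy size; the 6x25 data here is an arbitrary stand-in for the real 57x4096 matrix.

```python
import numpy as np

# Toy-size check of the dimensionality trick: if v is an eigenvector of
# C' = D D^T / (N-1), then u = D^T v is an eigenvector of C = D^T D / (N-1)
# with the same eigenvalue. 6 "images" of 25 "pixels" are used here.
rng = np.random.default_rng(1)
D = rng.random((6, 25))
N = D.shape[0]

Cp = D @ D.T / (N - 1)            # 6x6: cheap to diagonalize
vals, vecs = np.linalg.eigh(Cp)
v, lam = vecs[:, -1], vals[-1]    # leading eigenpair of C'

u = D.T @ v                       # candidate eigenvector of the big C
C = D.T @ D / (N - 1)             # 25x25: built here only to verify
ok = np.allclose(C @ u, lam * u)  # C u = lam u, up to rounding
```

The identity follows directly: C u = D^T (D D^T / (N-1)) v = D^T (λ v) = λ u.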
Fig. Plot of the eigenvalues of the training set (y-axis scaled by 10^7, x-axis the component index from 0 to 60), followed by the 1st, 8th and 10th eigenfaces.


## Chapter 4 Recognition of the Face

4.1 Analysis
The face is expressed in the face space by its eigenface coefficients (or weights). We can therefore handle a large input vector, a facial image, by taking only its small weight vector in the face space. As seen in the previous chapter, we have already found the 57 eigenfaces of the trained images. Now, when the user enters an unknown face to be recognized, we treat it as a test face and follow the algorithm below to find the eigenface weights related to it.
4.2 Algorithm:
1. We normalize the incoming image and map it as done in the normalization chapter.
2. We project this normalized image onto the eigenspace to get a corresponding feature vector Ω_j as

   Ω_j = X_j · U

3. We finally find the Euclidean distance between Ω_j and each Ω_i.

   Euclidean distance: the Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as:

   ||x - y|| = sqrt( Σ_i (x_i - y_i)^2 )

4. The minimum-distance match gives the most nearly identical face in the eigenspace.
5. We read out the corresponding image, which should be identical to the test face.
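Steps 3-5 amount to a nearest-neighbour search in the weight space. A minimal sketch with made-up weight vectors (the Ω values here are toy numbers, not real projections):

```python
import numpy as np

# Toy weight vectors: three training faces (Omega_i) and one test face (Omega_j)
train_weights = np.array([[0.0, 1.0], [2.0, 2.0], [5.0, 1.0]])
test_weight = np.array([4.8, 1.1])

dists = np.linalg.norm(train_weights - test_weight, axis=1)  # ||x - y||
best = int(np.argmin(dists))      # index of the closest training face
```

Here `best` points at the third training face, whose stored image would be read out as the match.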

4.3 Result
The result with the 30 test images was not 100% accurate, but it gave good matches for almost 28 of the images.

4.3.1 Accuracy

%Accuracy = (number of matched images / count) × 100
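As a trivial helper, the formula can be coded as below; the names are illustrative, and with the numbers reported above (28 good matches out of 30 test images) it gives roughly 93.3%.

```python
# Accuracy formula from above; argument names are illustrative.
def accuracy_percent(num_matched_images, count):
    return num_matched_images / count * 100

acc = accuracy_percent(28, 30)   # about 93.3 for the reported result
```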

4.3.2 Limitations

- The images of a face, and in particular the faces in the training set, should lie near the face space.
- Each image should be highly correlated with itself.

## 4.3.3 Scope of Improvement

Further changes in the algorithm may lead to better accuracy. In addition, we can add some more distinct features to the training set of faces, like the length of the forehead, chin position, etc. Facial recognition is still an ongoing research topic for computer vision scientists.

## Fig. Showing One Sample of Matched Output (test image and matched image)

## Fig. Showing One Sample of Mismatched Output (test image and mismatched image)


Chapter 5 Conclusions
1. We must choose some features of the sample faces and create a database of the images. In our case, we have taken 57 face images.

2. The affine transform is used to find the variables responsible for bringing all the images to the same orientation, scale and feature positions.

3. The selected features should be mapped into the window in which we take the face, and the window should include most of the face rather than the body.

4. Trained images should be mapped to a smaller window.

5. Principal Component Analysis can be used both to decrease the computational complexity and to measure the covariance between the images.

6. PCA reduces the complexity of the computation when a large number of images is taken.

7. The principal components (eigenvectors) of this covariance matrix, when concatenated and reshaped, give the eigenfaces.

8. These eigenfaces are the ghost-like faces of the trained set and form a face space.

9. For each new (test) face, 30 in our case, we need to calculate its pattern vector.

10. Its distance to the eigenfaces in the eigenspace must be minimum.

11. This minimum distance gives the location of the image in the eigenspace, which is taken as the output matched image.

REFERENCES
[1] M. Turk and A. Pentland, "Eigenfaces for Recognition," The Media Laboratory, MIT.
[2] D. Sidebe, "Face Recognition Using PCA," Assignment 3 sheet, UB.
[3] Wikipedia.