
Onionesque Reality

..::A Random Walk::..


**Face Recognition using Eigenfaces and Distance Classifiers: A Tutorial**

February 11, 2009 by Shubhendu Trivedi

Why? Two reasons:

1. Eigenfaces is probably one of the simplest face recognition methods, and also rather old, so why/how is this post going to help AT ALL? Well, I have to give a presentation on face recognition using Support Vector Machines in a week's time, and I have to open the talk with a system based on Eigenfaces. I was preparing notes for my talk anyway, so I thought it wouldn't be entirely useless to put up my notes.

2. I am thinking of writing a post on face recognition in bees next, so this should serve as a basis for that post for people who are new to face recognition.

My intention is to give a very simple introduction to the topic.

_____

Introduction

Like almost everything associated with the human body (the brain, perceptive abilities, cognition and consciousness), face recognition in humans is a wonder. We are not yet even close to an understanding of how we manage to do it. What is known is that the temporal lobe in the brain is partly responsible for this ability. Damage to the temporal lobe can result in a condition in which the patient loses the ability to recognize faces. This specific condition, where an otherwise normal person who has suffered damage to a specific region of the temporal lobe loses the ability to recognize faces, is called prosopagnosia. It is a very interesting condition when it occurs, as the perception of faces remains normal (the visual pathways and perception are fine) and the person can recognize people by their voice but not by their faces. In case you are not aware of it, don't worry: in one of my previous posts, which had links to a series of lectures by Dr Vilayanur Ramachandran, I linked to one lecture in which he talks in detail about this condition.

_____

A Motivating Parallel

Eigenfaces has an interesting parallel in one of the most fundamental ideas in mathematics and signal processing: the Fourier series. This is something that most engineering and science students (with some mathematical subjects) should be aware of. Fourier series are named in honor of Jean Baptiste Joseph Fourier (generally pronounced "fore-yay", though the correct French pronunciation is "foor-yay"), who made important contributions to their development. Representation of a signal in the form of a linear combination of complex sinusoids is called the Fourier series. What this means is that you can not only split a periodic signal into simple sines and cosines, but can also approximately reconstruct that signal given information about how the sines and cosines that make it up are stacked.

More Formally: Put in more formal terms, suppose $f(x)$, defined in the interval $(c, c+2\pi)$, is a periodic function with period $2\pi$ and satisfies a set of conditions called Dirichlet's conditions:

1. $f(x)$ is finite, single-valued and its integral exists in the interval,
2. $f(x)$ has a finite number of discontinuities in the interval,
3. $f(x)$ has a finite number of extrema in that interval,

then $f(x)$ can be represented by the trigonometric series

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos nx + b_n \sin nx \right)$$

The above representation of $f(x)$ is called the Fourier series, and the coefficients $a_0$, $a_n$ and $b_n$ are called the Fourier coefficients. They are determined from $f(x)$ by Euler's formulae, which give the coefficients as:

$$a_0 = \frac{1}{\pi}\int_{c}^{c+2\pi} f(x)\,dx, \qquad a_n = \frac{1}{\pi}\int_{c}^{c+2\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{c}^{c+2\pi} f(x)\sin nx\,dx$$

Note: It is common to define the above using complex exponentials as well. An example that illustrates the Fourier series is the square wave.

This can be represented aptly in a figure:

[Image Source] A square wave (given in black) can be approximated by using a series of sines and cosines (the result of this summation is shown in blue). Clearly, in the limiting case, we could reconstruct the square wave exactly with simply sines and cosines.

_____

Though not exactly the same, the idea behind Eigenfaces is similar. The aim is to represent a face as a linear combination of a set of basis images. That is:

$$\Phi_i = \sum_{j=1}^{K} w_j u_j$$

where $\Phi_i$ represents the face with the mean subtracted from it, the $w_j$ represent weights and the $u_j$ the eigenvectors. If this makes somewhat sketchy sense, don't worry: this was just like mentioning at the start what we have to do, and it will be clear exactly what it is once you finish reading.
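As an aside, the square-wave approximation described above is easy to reproduce numerically. Here is a small sketch (the function name and toy parameters are my own, not from any library) that sums the odd-harmonic sine series of a square wave and watches the approximation error fall as more terms are added:

```python
import numpy as np

def square_wave_approx(x, n_terms):
    """Partial Fourier sum for a square wave of period 2*pi:
    f(x) = (4/pi) * sum over odd n of sin(n*x)/n."""
    total = np.zeros_like(x, dtype=float)
    for k in range(n_terms):
        n = 2 * k + 1                      # odd harmonics only
        total += np.sin(n * x) / n
    return (4.0 / np.pi) * total

# Evaluate away from the discontinuities, where the true value is +1
x = np.linspace(0.1, np.pi - 0.1, 200)
for n_terms in (1, 5, 50):
    err = np.max(np.abs(square_wave_approx(x, n_terms) - 1.0))
    print(n_terms, err)
```

In the limiting case of infinitely many terms the error (away from the jumps) goes to zero, which is the exact reconstruction mentioned above.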

Because of this. Please note that in the figure the ghost like basis images (also called as Eigenfaces. such as distance between eyes. This is particularly useful for reducing the computational effort. To understand this. detailed 3-D information about the face is not needed. which essentially are the principal components of the face images. faces will be mostly upright and frontal. It uses an Information Theory appraoch wherein the most relevant face information is encoded in a group of faces that will best distinguish the faces. Before the method for face recognition using Eigenfaces was introduced. They have just been picked randomly from a pool of 70 by me. we¶ll see why they are called so) are not in order of their importance. This reduces complexity by a significant bit. most of the face recognition literature dealt with local and intuitive features. This wasn¶t very effective. These Eigenfaces were prepared using images from the MIT-CBCL database (also I have adjusted the brightness of the Eigenfaces to make them clearer after obtaining them. This is illustrated by this figure: . ears and similar other features.Click to Enlarge In the above figure. Eigenfaces inspired by a method used in an earlier paper was a significant departure from the idea of using only intuitive features. this is based on the assumption that at the time of recognition. therefore the brightness of the reconstructed image looks different than those of the basis images). _____ An Information Theory Approach: First of all the idea of Eigenfaces considers face recognition as a 2-D recognition problem. out of these about 40 might be insignificant and only 20 might represent the variation in data significantly. The Principal Components basically seek directions in which it is more efficient to represent the data. It transforms the face images in to a set of basis faces. 
a face that was in the training database was reconstructed by taking a weighted summation of all the basis faces and then adding to them the mean face. so for calculations it would work quite well to only use the 20 and leave out the rest. suppose we get 60 such directions.

Such an information theory approach will encode not only the local features but also the global features. Such features may or may not be intuitively understandable. When we find the principal components, or the eigenvectors, of the image set, each eigenvector has some contribution from EACH face used in the training set. So the eigenvectors also have a face-like appearance. These look ghost-like and are called ghost images or Eigenfaces.

Every image in the training set can be represented as a weighted linear combination of these basis faces. The number of Eigenfaces that we would obtain is therefore equal to the number of images in the training set; let us take this number to be $M$. Like I mentioned one paragraph before, some of these Eigenfaces are more important in encoding the variation in face images, thus we could also approximate faces using only the $K$ most significant Eigenfaces. Needless to say, $K < M$.

_____

Assumptions:

1. There are $M$ images in the training set.
2. There are $K$ most significant Eigenfaces, using which we can satisfactorily approximate a face.
3. All images are $N \times N$ matrices, which can be represented as $N^2 \times 1$ dimensional vectors. The same logic would apply to images that are not of equal length and breadth. To take an example: an image of size 112 x 112 can be represented as a vector of dimension 12544, or simply as a point in a 12544-dimensional space.

_____

Algorithm for Finding Eigenfaces:

1. Obtain $M$ training images $I_1, I_2, \ldots, I_M$. It is very important that the images are centered.

2. Represent each image $I_i$ as a vector $\Gamma_i$, as discussed above.

3. Find the average face vector $\Psi$:

$$\Psi = \frac{1}{M}\sum_{i=1}^{M} \Gamma_i$$

4. Subtract the mean face from each face vector $\Gamma_i$ to get a set of vectors $\Phi_i$:

$$\Phi_i = \Gamma_i - \Psi$$

The purpose of subtracting the mean image from each image vector is to be left with only the distinguishing features of each face, "removing" in a way the information that is common.
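Steps 1 to 4 are one-liners in NumPy. The sketch below uses tiny random arrays as stand-in "images" (toy data I made up, not a real face set), flattens them, and subtracts the mean face:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 8                          # M training images, each N x N (toy sizes)
images = rng.random((M, N, N))       # placeholder for I_1 ... I_M

# Step 2: represent each image as an N^2-dimensional vector Gamma_i
gammas = images.reshape(M, N * N)

# Step 3: the average face Psi
psi = gammas.mean(axis=0)

# Step 4: subtract the mean face to get Phi_i = Gamma_i - Psi
phis = gammas - psi

# Only the distinguishing features remain: the per-pixel mean is now zero
print(np.abs(phis.mean(axis=0)).max())
```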

5. Find the covariance matrix $C$:

$$C = A A^T, \qquad \text{where } A = [\Phi_1\;\Phi_2\;\ldots\;\Phi_M]$$

Note that the covariance matrix has simply been made by putting one modified image vector, obtained earlier, in each column. Also note that $A$ is an $N^2 \times M$ matrix, so $C$ is an $N^2 \times N^2$ matrix.

6. We now need to calculate the eigenvectors $u_i$ of $C$. However, note that $C$ is an $N^2 \times N^2$ matrix; finding its eigenvectors would return $N^2$ eigenvectors, each of dimension $N^2 \times 1$. For an image this number is HUGE. The computations required would easily make your system run out of memory. How do we get around this problem?

7. Instead of the matrix $A A^T$, consider the matrix $L = A^T A$. Remember $A$ is an $N^2 \times M$ matrix, thus $L$ is an $M \times M$ matrix. If we find the eigenvectors of this matrix, it would return $M$ eigenvectors, each of dimension $M \times 1$; let's call these eigenvectors $v_i$. Now, from some properties of matrices, it follows that $u_i = A v_i$. We have thus found out that the $M$ largest eigenvectors of $C = A A^T$ can be calculated by first finding the eigenvectors of the much smaller $L = A^T A$ and then using the relation $u_i = A v_i$. Also keep in mind that $M \ll N^2$, as $M$ is simply the number of training images.

8. Find the best $M$ eigenvectors of $C = A A^T$ by using the relation $u_i = A v_i$ discussed above, and normalize them so that $\lVert u_i \rVert = 1$.
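The trick in steps 6 to 8 can be checked numerically. In this sketch (synthetic data, with $M = 6$ "images" of $D = 100$ pixels each, dimensions chosen purely for illustration), the eigenvectors of the small $M \times M$ matrix $A^T A$, pushed through $u_i = A v_i$, are verified to be eigenvectors of the large $A A^T$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, D = 6, 100                        # M training images, D = N^2 pixels, M << D
A = rng.random((D, M))               # columns are the mean-subtracted vectors Phi_i

# Small M x M eigenproblem instead of the huge D x D one
Lmat = A.T @ A
eigvals, V = np.linalg.eigh(Lmat)    # columns v_i, eigenvalues in ascending order

# Map back: u_i = A v_i, then normalize each eigenface to unit length
U = A @ V
U /= np.linalg.norm(U, axis=0)

# Verify against the full covariance C = A A^T for the largest eigenvalue
C = A @ A.T
i = np.argmax(eigvals)
print(np.allclose(C @ U[:, i], eigvals[i] * U[:, i]))   # prints True
```

The check works because $L v_i = \mu_i v_i$ implies $A A^T (A v_i) = A (A^T A) v_i = \mu_i (A v_i)$, and normalizing $u_i$ does not disturb the eigen-relation.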

9. Select the best $K$ eigenvectors; the selection of these eigenvectors is done heuristically.

[6 Eigenfaces for the training set chosen from the MIT-CBCL database; these are not in any order]

_____

Finding Weights:

The eigenvectors found at the end of the previous section, when converted back to matrices in a process that is the reverse of that in STEP 2, have a face-like appearance. Since these are eigenvectors and have a face-like appearance, they are called Eigenfaces. Sometimes they are also called ghost images because of their weird appearance.

Now each face in the training set (minus the mean), $\Phi_i$, can be represented as a linear combination of these $K$ eigenvectors:

$$\Phi_i = \sum_{j=1}^{K} w_j u_j$$

where the $u_j$'s are Eigenfaces. Each normalized training image is represented in this basis as a vector:

$$\Omega_i = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_K \end{bmatrix}, \qquad i = 1, 2, \ldots, M$$

These weights can be calculated as:

$$w_j = u_j^T \Phi_i$$
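In matrix form, the weight computation is a single projection. Here is a sketch with stand-in data (a random orthonormal basis plays the role of the $K$ kept Eigenfaces, just to show the shapes involved):

```python
import numpy as np

rng = np.random.default_rng(2)
D, K, M = 64, 4, 5                       # pixels, kept eigenfaces, training images
U = np.linalg.qr(rng.random((D, K)))[0]  # columns: orthonormal stand-in eigenfaces u_j
phis = rng.random((M, D))                # rows: mean-subtracted faces Phi_i

# w_j = u_j^T Phi_i for every face at once; row i is the template Omega_i
omegas = phis @ U

print(omegas.shape)                      # one K-dimensional weight vector per face
```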

This means we have to calculate such a vector corresponding to every image in the training set and store them as templates.

_____

Recognition Task:

Now consider that we have found the Eigenfaces for the training images and their associated weights (after selecting a set of the most relevant Eigenfaces), and have stored these vectors corresponding to each training image. If an unknown probe face $\Gamma$ is to be recognized, then:

1. We normalize the incoming probe $\Gamma$ as $\Phi = \Gamma - \Psi$.

2. We then project this normalized probe onto the Eigenspace (the collection of eigenvectors/faces) and find out the weights: $w_i = u_i^T \Phi$. The normalized probe can then simply be represented as:

$$\Omega = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_K \end{bmatrix}$$

3. After the feature vector (weight vector) for the probe has been found out, we simply need to classify it. For the classification task we could simply use some distance measures or use some classifier like Support Vector Machines (something that I will cover in an upcoming post). In case we use distance measures, classification is done as follows:
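Putting the pieces together, the recognition step is mostly projections and a nearest-neighbour search. The sketch below is entirely synthetic (random stand-ins for the eigenfaces, mean face and training faces; the "probe" is a lightly perturbed copy of training face number 2), and should recover the right identity:

```python
import numpy as np

rng = np.random.default_rng(3)
D, K, M = 64, 4, 5
U = np.linalg.qr(rng.random((D, K)))[0]  # stand-in for the K kept eigenfaces
psi = rng.random(D)                      # stand-in mean face
phis = rng.random((M, D))                # mean-subtracted training faces
omegas = phis @ U                        # stored templates Omega_1 ... Omega_M

# An unknown probe: a slightly noisy copy of training face 2
gamma = phis[2] + psi + 0.001 * rng.random(D)

# Steps 1-2: normalize, then project onto the eigenspace
omega = (gamma - psi) @ U

# Step 3: nearest stored template under the Euclidean distance
dists = np.linalg.norm(omegas - omega, axis=1)
print(int(np.argmin(dists)))
```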

Find $e_r = \min_{l} \lVert \Omega - \Omega_l \rVert$. If $e_r < \Theta$, where $\Theta$ is a threshold chosen heuristically, then we can say that the probe image is recognized as the training image with which it gives the lowest score. If however $e_r > \Theta$, then the probe does not belong to the database. I will come to the point of how the threshold should be chosen shortly; let's first take a brief digression and look at two simple distance measures, and then return to the task of choosing a threshold.

_____

Distance Measures:

Euclidean Distance: The Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as:

$$d(x, y) = \sqrt{\sum_{i} (x_i - y_i)^2}$$

The Mahalanobis Distance: The Mahalanobis distance is a better distance measure when it comes to pattern recognition problems. It takes into account the covariance between the variables and hence removes the problems related to scale and correlation that are inherent with the Euclidean distance. It is given as:

$$d(x, y) = \sqrt{(x - y)^T S^{-1} (x - y)}$$

where $S$ is the covariance matrix of the variables involved. The Mahalanobis distance generally gives superior performance.

_____

Deciding on the Threshold:

Why is the threshold $\Theta$ important? Consider for simplicity that we have ONLY 5 images in the training set, and a probe that is not in the training set comes up for the recognition task.
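Both measures are a couple of lines each. In this sketch the 3-4-5 example vectors are mine, and with an identity covariance matrix the Mahalanobis distance collapses to the Euclidean one, as the formulas suggest:

```python
import numpy as np

def euclidean(x, y):
    # d(x, y) = sqrt(sum_i (x_i - y_i)^2)
    return np.sqrt(np.sum((x - y) ** 2))

def mahalanobis(x, y, S):
    # d(x, y) = sqrt((x - y)^T S^{-1} (x - y)); S is the covariance matrix
    diff = x - y
    return np.sqrt(diff @ np.linalg.inv(S) @ diff)

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
print(euclidean(x, y))               # 3-4-5 triangle, so this is 5.0
print(mahalanobis(x, y, np.eye(2)))  # identity covariance: same as Euclidean
```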

This probe will be scored against each of the training images, so five scores $e_1, \ldots, e_5$ will be found for the incoming probe. Let's say $e_m$ is the lowest score of all; the system will then say the probe is recognized as training image $m$, the image with which its score is the lowest. But the probe image clearly does not belong to the database, so even though no image of the probe is in the database, it will still be "recognized" as some training image. Clearly this is an anomaly that we need to look at, and it is for this purpose that we decide on the threshold. The threshold is decided heuristically: to choose it, we take a large set of random images (both face and non-face), calculate the scores for images of people in the database and also for this random set, and set the threshold $\Theta$ (which I mentioned in the "recognition" part above) accordingly.

To illustrate what I just said, consider a Simpsons image as a non-face probe.
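The threshold-picking procedure can be sketched as follows. The score distributions here are entirely made up (two well-separated Gaussians standing in for "in-database" and "random image" scores), so only the procedure, not the numbers, should be taken seriously:

```python
import numpy as np

rng = np.random.default_rng(4)

# Nearest-template scores for probes that ARE in the database (synthetic)
in_db_scores = rng.normal(2.0, 0.2, 200)
# Scores for a large set of random face/non-face images NOT in the database
random_scores = rng.normal(10.0, 0.5, 200)

# One simple heuristic: put Theta midway between the two score populations
theta = (in_db_scores.max() + random_scores.min()) / 2.0

accepted = np.mean(in_db_scores < theta)    # known faces fall below the threshold
rejected = np.mean(random_scores >= theta)  # random images fall above it
print(theta, accepted, rejected)
```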

_____

More on the Face Space:

To conclude this post, here is a brief discussion on the face space. [Image Source [1]] Consider a simplified representation of the face space as shown in the figure above. The images of a known individual fall near some face class in the face space, and in general the faces in the training set should lie near the face space, which describes images that are face-like. As already seen, the projection distance should be under a threshold for a probe to be accepted.

There are four possible combinations of where an input image can lie:

1. Near a face class and near the face space: This is the case when the probe is nothing but the facial image of a known individual (known = an image of this person is already in the database).

2. Near the face space but away from any face class: This is the case when the probe image is of a person (i.e. a facial image), but does not belong to anybody in the database, i.e. it is away from any face class.

3. Distant from the face space but near a face class: This happens when the probe image is not of a face, however it still resembles a particular face class stored in the database. Out of the four, type 3 is responsible for most false positives. To avoid them, face detection is recommended as a part of such a system.

4. Distant from both the face space and any face class: When the probe is not a face image, i.e. it is away from the face space, and is nothing like any face class stored.

_____

References and Important Papers:

1. Matthew A. Turk and Alex P. Pentland, Face Recognition Using Eigenfaces, CVPR '91.
2. Matthew A. Turk and Alex P. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience '91. MIT Vision and Modeling Lab.
3. Belhumeur, Hespanha and Kriegman, Eigenfaces versus Fisherfaces: Recognition Using Class Specific Linear Projection, PAMI '97.
