
Eigenfaces
M. Turk & A. Pentland

Presentation by: Maria Uretskiy

We are all different...

An approach to the detection and identification of human faces: a near-real-time face recognition system which recognizes the person by comparing characteristics of the face to those of known individuals.

Approach

[Figure: a face image shown as its grid of pixel intensity values]

The face space is defined by the "eigenfaces", which are the eigenvectors of the set of faces.


Transform face images into a small set of characteristic feature images, "eigenfaces", which are the principal components of the initial training set of face images.



We can display each eigenvector as a sort of ghostly face, which we call an eigenface.

Each face in the training set can be represented exactly in terms of a linear combination of the eigenfaces.

The number of possible eigenfaces is equal to the number of face images in the training set.

PCA - a method for reducing the dimension of the problem.

Input: a set of vectors X_i
Output: a set of vectors Y_i which are the projections of the input onto a lower dimension.

Calculate the mean vector: m = E{X_i}
Calculate the covariance matrix: C_x
Find the eigenvectors of C_x
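The PCA steps above can be sketched in NumPy. This is a minimal illustration, not the presenter's code; the function name, toy data, and component count are my own choices:

```python
import numpy as np

def pca(X, num_components):
    """Project the row vectors of X onto their top principal components.

    X: (n_samples, n_features) array.
    Returns (projections, components).
    """
    m = X.mean(axis=0)                   # mean vector m = E{X_i}
    centered = X - m                     # subtract the mean
    C = np.cov(centered, rowvar=False)   # covariance matrix C_x
    eigvals, eigvecs = np.linalg.eigh(C) # eigh: C is symmetric, eigenvalues ascending
    order = np.argsort(eigvals)[::-1]    # sort eigenvalues in descending order
    components = eigvecs[:, order[:num_components]]
    Y = centered @ components            # projection onto the lower dimension
    return Y, components

# Example: 10 random 5-dimensional vectors reduced to 2 dimensions
X = np.random.default_rng(0).normal(size=(10, 5))
Y, comps = pca(X, 2)
print(Y.shape)  # (10, 2)
```

The returned components are orthonormal, since the covariance matrix is symmetric and `eigh` produces an orthonormal eigenbasis.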

The faces can be approximated using only the "best" eigenfaces - those that have the largest eigenvalues. The best M' eigenfaces span an M'-dimensional subspace, the "face space", of all possible face images.

Let a face image I(x,y) be a two-dimensional N by N array of intensity values. An image of size 100 by 100 describes a vector of dimension 10,000, or equivalently, a point in 10,000-dimensional space.


Similar face images will not be randomly distributed in this huge image space, and thus can be described by a relatively low-dimensional subspace. We will use principal component analysis to find the vectors which best account for the distribution of face images within the entire image space.

The face space is defined by the face images:

Γ_1, ..., Γ_M

For each M-dimensional sub-space S define the approximation error as:

Σ_{j=1} || Γ_j − P_S(Γ_j) ||²

where P_S(Γ_j) is the projection of Γ_j onto the sub-space S.

The best sub-space S is the one with the smallest approximation error.

For finding it we need to:

Calculate the average face - Ψ
Calculate for each face its difference from the average - Φ_i
Build the covariance matrix - C
Find the eigenvectors {v_i} and eigenvalues {λ_i}
Sort the eigenvalues in descending order:

λ_1 ≥ λ_2 ≥ ... ≥ λ_n

For each M, the best sub-space S is spanned by the eigenvectors:

{v_1, ..., v_M}

Let the training set of face images be:

Γ_1, ..., Γ_M

The average face of the set is defined by:

Ψ = (1/M) Σ_{i=1}^{M} Γ_i

Each

face differs from the average by the vector :

*i ! +i =

Calculate

**the covariance matrix:
**

M n n !1 T n

1 C! M

! AA

T

Where-

A ! [*1 ,K , * M ] ( N v M

2

)

and * i is the difference from the average
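In code, the mean face Ψ, the difference vectors Φ_i, and the matrix A can be formed like this. A minimal NumPy sketch with a randomly generated toy training set; the sizes (8×8 images, 5 faces) are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # image side; each face has N*N pixels
M = 5                                   # number of training faces
faces = rng.integers(0, 256, size=(M, N * N)).astype(float)  # rows are Gamma_1..Gamma_M

psi = faces.mean(axis=0)                # average face: Psi = (1/M) * sum_i Gamma_i
Phi = faces - psi                       # each row is Phi_i = Gamma_i - Psi
A = Phi.T                               # A = [Phi_1, ..., Phi_M], shape (N^2, M)
C = (A @ A.T) / M                       # covariance matrix C, shape (N^2, N^2)

print(A.shape, C.shape)  # (64, 5) (64, 64)
```

Note that the rows of Phi sum to zero by construction, since the mean was subtracted.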

Our aim is to find the eigenvectors of C = AAᵀ. These are very large vectors. PCA seeks a set of orthonormal vectors and their associated eigenvalues which best describe the distribution of the data.

Problem:

The dimension of C = AAᵀ is N² × N², where N² is the number of pixels. This matrix is too large - not practical.

Solution:

We will consider the matrix AᵀA. The dimension of AᵀA is M × M, where M is the number of images in the training set.

Let u_i, v_i be the eigenvectors of AAᵀ and AᵀA respectively.

(1)  AᵀA v_i = λ_i v_i   (λ_i are the eigenvalues)

Multiply (1) by A:

AAᵀA v_i = A λ_i v_i = λ_i A v_i

or:  C u_i = λ_i u_i,  where  u_i = A v_i

Thus, AAᵀ and AᵀA have the same eigenvalues, and their eigenvectors are related as follows:

u_i = A v_i

We reduced our dimension from N², the number of pixels, to M, the number of pictures in the training set.

It's obvious that:

N² >> M
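The trick above can be checked numerically. A minimal NumPy sketch with toy sizes and my own variable names: we diagonalize the small M × M matrix AᵀA, then map each eigenvector v_i to u_i = A v_i and verify it is an eigenvector of the huge AAᵀ:

```python
import numpy as np

rng = np.random.default_rng(1)
N2, M = 64, 5                     # N^2 pixels, M training images (toy sizes)
A = rng.normal(size=(N2, M))      # columns stand in for the difference faces Phi_i

# Eigenvectors of the small M x M matrix A^T A
lam, v = np.linalg.eigh(A.T @ A)  # eigenvalues in ascending order

# Map them to eigenvectors of the large N^2 x N^2 matrix A A^T
U = A @ v                         # u_i = A v_i, one eigenface per column
U /= np.linalg.norm(U, axis=0)    # normalize each eigenface

# Check the eigen relation (A A^T) u_i = lambda_i * u_i for the largest eigenvalue
lhs = (A @ A.T) @ U[:, -1]
rhs = lam[-1] * U[:, -1]
print(np.allclose(lhs, rhs))      # True
```

Normalizing the columns of U does not disturb the eigen relation; it only rescales each eigenvector.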

If it is a face, it should be a linear combination of the eigenfaces we have calculated.

Creating the vector of weights for an image I is equivalent to projecting the image onto the low-dimensional face space:

w_k = u_kᵀ (I − Ψ)

The distance between the image and its projection onto the face space is simply the distance between the mean-adjusted input image Φ = I − Ψ and its projection onto face space:

Φ_f = Σ_{k=1}^{M} w_k u_k

Given a sub-image q of an unknown image I:

Compute the mean-adjusted image:

Φ = q − Ψ

Compute the linear combination of eigenfaces:

w_k = u_kᵀ Φ
Φ_f = Σ_{k=1}^{M} w_k u_k

Compute the distance between those two:

ε_d = || Φ − Φ_f ||

Check the threshold: if ε_d < T, then we found that the sub-image q is a face.

The threshold has to be chosen manually, by trial and error.

This distance from face space is used as a measure of "faceness". The result of calculating the distance from face space at every point in the image is a "face map".

There is a distinct minimum in the face map corresponding to the location of the face in the image.

There are 4 possibilities for an input image, measured by the distance from face space || Φ − Φ_f || = || I − Ψ − Σ w_i u_i || and the distance from the nearest known face class:

Case (1): near face space and near a face class - we found a face, and we can recognize it.

Case (2): near face space but distant from any known face class - we found a face, but it is a new face.

Case (3): distant from face space but near a face class - a false positive; it isn't a face.

Case (4): distant from face space and distant from any face class - it isn't a face.

We want to extract the relevant information in a face image, encode it, and compare one face encoding with a database of models encoded similarly. Recognition is performed by projecting a new image into the sub-space spanned by the eigenfaces, then classifying the face by comparing its position in face space with the positions of known individuals.

Initialization:

Acquire the training set of face images
Calculate the eigenfaces
Define face class k

When a new face image is encountered:

Project the input image onto the face space: calculate a set of weights based on the input image and the M eigenfaces.
Classify the image: check if it's a face; if it is a face - is it a known one?

Define face class k:

For each person k define a vector which describes that person. This vector is the average weight vector of the projected training images of person k:

Ω_k = {v'_1, ..., v'_M},  where  v'_i = (1/n_k) Σ_{j=1}^{n_k} w_ij

and n_k is the number of training images of person k.

Given an input image I:

Compute the mean-adjusted image:

Φ = I − Ψ

Check if it is a face (detect the face). Compute the weights vector:

Ω = {w_1, ..., w_M},  where  w_i = u_iᵀ Φ

Check if it is one of the known faces. Calculate:

ε_r = || Ω − Ω_r ||²  for each known face class r

Choose the r-th class with MIN(ε_r).

The class is the most resembling face to the input image.
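A sketch of the classification step in NumPy, assuming the eigenfaces U have already been computed; the class vectors and all values here are toy illustrations:

```python
import numpy as np

def face_class(weights_of_person):
    """Omega_k: the average weight vector over one person's projected images."""
    return np.mean(weights_of_person, axis=0)

def classify(phi, U, classes):
    """Return the class index r minimizing eps_r = ||Omega - Omega_r||^2."""
    omega = U.T @ phi                        # weights of the input: w_i = u_i^T Phi
    eps = [np.sum((omega - omega_r) ** 2) for omega_r in classes]
    return int(np.argmin(eps))

# Toy example with 2 known classes in a 3-dimensional face space
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.normal(size=(16, 3)))        # orthonormal eigenfaces
class_a = face_class(np.array([[1.0, 0.0, 0.0], [1.2, 0.1, 0.0]]))
class_b = face_class(np.array([[0.0, 2.0, 0.0], [0.0, 1.8, 0.2]]))
phi = U @ np.array([1.1, 0.0, 0.0])                  # a mean-adjusted input near class A
print(classify(phi, U, [class_a, class_b]))          # 0
```

In a full system the chosen minimum ε_r would additionally be compared against a class threshold, so that a face far from every known class is reported as new (case 2 above).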

Advantages:

Almost not affected by lighting
Doesn't use complex geometry
Almost real-time classification

Disadvantages:

Sensitive to head position
Sensitive to variation of sizes

Dataset:

11 different people
5 face pictures for each one
Every picture - a different facial expression

Eigenfaces:

Input image:

Adjustments:

Transform to gray scale
Transform all images according to some standards
Same lighting conditions
Only frontal images
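The adjustments can be sketched with NumPy. A minimal illustration only; the luma weights and the target size are my own choices, not specified in the slides:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB image to grayscale with standard luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def standardize(img, size=(64, 64)):
    """Crude nearest-neighbor resize to a common size, so all faces align."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[np.ix_(rows, cols)]

# Toy input image
rgb = np.random.default_rng(4).integers(0, 256, size=(120, 100, 3)).astype(float)
gray = to_grayscale(rgb)
face = standardize(gray)
print(gray.shape, face.shape)  # (120, 100) (64, 64)
```

A real pipeline would typically also align the eyes and equalize the histogram to approximate the "same lighting conditions" requirement.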

Input image:

Reconstructed image:

Φ_f = Σ_{k=1}^{M} w_k u_k

Output images:

In M. Turk & A. Pentland's article, sixteen subjects were digitized at all combinations of three head orientations, three head sizes or scales, and three lighting conditions.

Statistics:

The system achieved approximately:
96% correct classification over lighting variation
85% correct over head orientation variation
64% correct over size variation

References:

http://www.slideshare.net/wolf/eigenfaces-andfisherfaces
http://cs.haifa.ac.il/hagit/courses/ist/Lectures/IST06_ImageFormation.pdf
http://www.cs.ucsb.edu/~mturk/Papers/mturkCVPR91.pdf
http://en.wikipedia.org/wiki/Eigenface
http://www.face-rec.org/algorithms/PCA/jcn.pdf
http://www.cse.unr.edu/~bebis/MathMethods/PCA/case_study_pca1.pdf

