
Abhishek Roy 2K3/EC/603

Anuj Jain 2K3/EC/613

Arpit Gupta 2K3/EC/614

Guide: Ms S Indu, ECE Dept, DCE

Objective –

To perform face detection using Principal Component Analysis (PCA).

**Why face detection?**

Various potential applications, such as:

• Biometrics
• Video surveillance
• Human-computer interaction
• Image database management systems

Principal Components Analysis –

• It is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences.
• Eigenvector approach
• Data compression

Some important terms –

• Standard Deviation – a measure of how spread out the data is.
• Variance – the square of the standard deviation. These two measures are purely 1-dimensional.
• Covariance – a measure of how much two random variables vary together; it is always measured between 2 dimensions. If the value is +ve, both dimensions increase together; if –ve, they are inversely proportional.

Covariance Matrix –

S = (s_jk), where S is the covariance matrix, s_jk is the covariance of variables X_j and X_k when j ≠ k, and the diagonal element s_jj is the variance of variable X_j when j = k.
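As a quick illustration of these definitions (a NumPy sketch with made-up numbers; the project itself used MATLAB), the covariance matrix of a small data set has the variances on its diagonal and is symmetric:

```python
import numpy as np

# Toy data: 3 variables (rows), 5 observations (columns) -- made-up numbers.
X = np.array([[2.0, 4.0, 6.0, 8.0, 10.0],
              [1.0, 3.0, 2.0, 5.0,  4.0],
              [9.0, 7.0, 5.0, 3.0,  1.0]])

S = np.cov(X)   # 3 x 3 covariance matrix; S[j, k] = covariance of X_j and X_k

# Diagonal elements s_jj are the variances of the individual variables.
assert np.allclose(np.diag(S), np.var(X, axis=1, ddof=1))
# S is symmetric: cov(X_j, X_k) = cov(X_k, X_j).
assert np.allclose(S, S.T)

print(S[0, 2])  # -10.0: variable 0 rises while variable 2 falls
```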

Method –

1. Obtain the data in the form of a matrix.
2. Obtain the covariance matrix for the data.
3. Calculate the eigenvectors and eigenvalues of the covariance matrix. The eigenvector with the highest eigenvalue is the principal component of the data set.
4. Select the eigenvectors with the highest eigenvalues and form a matrix which is smaller in size compared to the data matrix.
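The four steps can be sketched in NumPy (illustrative only; the random data and the variable names are ours, not from the project):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 3))            # 1. data matrix: 50 samples, 3 variables

S = np.cov(data, rowvar=False)             # 2. covariance matrix (3 x 3)

evals, evecs = np.linalg.eigh(S)           # 3. eigenvalues/eigenvectors (ascending)
order = np.argsort(evals)[::-1]            #    reorder by decreasing eigenvalue
evals, evecs = evals[order], evecs[:, order]

k = 2                                      # 4. keep the top-k eigenvectors
W = evecs[:, :k]                           #    (3 x 2, smaller than the data matrix)

centered = data - data.mean(axis=0)        # project the centered data onto them
projected = centered @ W
print(projected.shape)                     # (50, 2)
```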

• PCA projects the data along the directions where the data varies the most.
• These directions are determined by the eigenvectors of the covariance matrix corresponding to the largest eigenvalues.

EIGENFACES

• A set of eigenvectors
• Developed by Matthew Turk and Alex Pentland
• Eigenfaces can be extracted from the image data by means of the mathematical tool called Principal Component Analysis (PCA).

Face detection based on the eigenface approach –

• Acquire a set of training images of the same size.
• Calculate the covariance matrix for the training set.
• Calculate the eigenfaces from the covariance matrix, keeping only the best M images with the highest eigenvalues. These M images define the “face space”.

• Given an image, calculate a set of weights of the M eigenfaces by projecting it onto each of the eigenfaces.
• Determine if the image is a face at all by checking whether it is sufficiently close to the face space.

Image Representation –

• A square N by N image (N pixels high by N pixels wide) can be expressed as an N^2-dimensional vector.
• Say we have 20 images. Write each image as an image vector and put all the images together in one big image matrix.
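The flattening described above, sketched in NumPy with hypothetical 4 x 4 "images":

```python
import numpy as np

N = 4
# Three toy N x N "images" (hypothetical pixel values).
images = [np.arange(N * N, dtype=float).reshape(N, N) + i for i in range(3)]

# Each N x N image becomes an N^2-dimensional column vector...
vectors = [img.reshape(N * N, 1) for img in images]

# ...and the column vectors are stacked side by side into one image matrix.
image_matrix = np.hstack(vectors)
print(image_matrix.shape)   # (16, 3): N^2 rows, one column per image
```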

The Space of Faces –

• An image is a point in a high-dimensional space: an image x̂ of N pixels is a point in N-dimensional space, with one axis per pixel gray value (pixel 1 gray value, pixel 2 gray value, and so on).

Training Set – [training-set images shown on slide]

Experiment –

• JAFFE image database – 20 images (M), each 256 x 256 (N x N)
• Each image converted into a column vector of size N^2 x 1
• Column vectors stacked together to get a matrix of size N^2 x M
• Covariance matrix needed is of size N^2 x N^2 – over 4 billion entries
• Calculate the eigenvectors of the covariance matrix

Problem: Size of the Covariance Matrix C –

• Each data point is N^2-dimensional (N^2 pixels)
– The size of the covariance matrix C is N^2 x N^2
– The number of eigenfaces is N^2
– Example: for N^2 = 256 x 256 pixels, the size of C will be 65536 x 65536 and the number of eigenvectors will be 65536!
• Typically only a few eigenvectors suffice, so this method is very inefficient!

Efficient Computation of Eigenvectors –

If B is N^2 x M and M << N^2, then C = BB^T is N^2 x N^2, much larger than the M x M matrix B^T B (M = number of images, N^2 = number of pixels). So use B^T B instead: an eigenvector of B^T B is easily converted to an eigenvector of BB^T:

(B^T B) y = e y
=> B (B^T B) y = e (B y)
=> (B B^T)(B y) = e (B y)
=> B y is an eigenvector of B B^T
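The derivation above can be checked numerically. The following NumPy sketch (with small, made-up dimensions) diagonalizes the cheap M x M matrix B^T B and verifies that By is an eigenvector of the large matrix BB^T with the same eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
Npix, M = 100, 5                    # N^2 = 100 pixels, M = 5 images (M << N^2)
B = rng.normal(size=(Npix, M))

small = B.T @ B                     # M x M -- cheap to diagonalize
evals, Y = np.linalg.eigh(small)

y = Y[:, -1]                        # eigenvector of B^T B, largest eigenvalue
v = B @ y                           # By: candidate eigenvector of B B^T

big = B @ B.T                       # N^2 x N^2 -- built here only to verify
# (B^T B) y = e y  =>  (B B^T)(B y) = e (B y)
assert np.allclose(big @ v, evals[-1] * v)
```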

Eigenfaces –

• PCA extracts the eigenfaces of the set of images.
– Gives a set of vectors v1, v2, v3, …
– Each vector represents a dimension in the face space.
– What do they look like?

Projecting an image onto face space –

• The eigenfaces v1 … vk span the face space.
• A face is projected onto eigen coordinates by computing one weight per eigenface, so that the face is expressed as a1v1 + a2v2 + … + a9v9 + a10v10 (here k = 10).
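A minimal NumPy sketch of this projection, using randomly generated orthonormal "eigenfaces" and a hypothetical mean face (none of these come from the project's data):

```python
import numpy as np

rng = np.random.default_rng(2)
Npix, k = 64, 4
# k orthonormal columns standing in for eigenfaces v1..vk (hypothetical).
V = np.linalg.qr(rng.normal(size=(Npix, k)))[0]
mean_face = rng.normal(size=Npix)

# Build an image that lies exactly in the face space, with known weights.
true_weights = np.array([3.0, -1.0, 0.5, 2.0])
x = mean_face + V @ true_weights

a = V.T @ (x - mean_face)            # a_i = v_i . (x - mean): projection weights
reconstruction = mean_face + V @ a   # a1*v1 + ... + ak*vk (plus the mean)

assert np.allclose(a, true_weights)   # the weights are recovered exactly
assert np.allclose(reconstruction, x) # x already lay in the face space
```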

Projections – [projections of the training images shown on slide]

Procedure –

1. Process the image database (training set of images): run PCA to compute the eigenfaces.
2. Given a new image x (to be detected), calculate its K coefficients.
3. Detect whether x is a face.
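The three-step procedure can be sketched as follows (a NumPy illustration with a hypothetical two-dimensional face space; the function names and toy threshold are ours):

```python
import numpy as np

def face_space_distance(x, mean_face, V):
    """Distance between image x and its projection onto the face space
    spanned by the orthonormal eigenface columns of V."""
    a = V.T @ (x - mean_face)              # step 2: coefficients
    projection = mean_face + V @ a
    return np.linalg.norm(x - projection)

def is_face(x, mean_face, V, threshold):
    return face_space_distance(x, mean_face, V) < threshold  # step 3

# Hypothetical face space: two orthonormal directions in a 4-pixel image space.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
mean_face = np.zeros(4)

inside = np.array([2.0, -1.0, 0.0, 0.0])   # lies in the face space
outside = np.array([0.0, 0.0, 5.0, 0.0])   # far from it

print(is_face(inside, mean_face, V, threshold=1.0))   # True
print(is_face(outside, mean_face, V, threshold=1.0))  # False
```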

Detection Result – [detection output shown on slide]

Result – image name vs. distance from face space:

• Images part of the training set – KA.NE2.29, MK.HA1.115: distances on the order of 1.0430e+012 to 1.4622e+012.
• Images not part of the training set but belonging to people in the set – YM.NE3.27, YM.HA2.53: distances on the order of 3.4271e+012.

Images with no relation to the training set –

• Containing a face – lena
• Not containing a face – nens, wheel, grey, 2356R, CT_scan, Lone

Distances for these images ranged from roughly 2.9020e+012 to 8.8347e+012.

• Also. .YM.4745e+12 (corresponding to image – grey) – fails to detect a face in the image -.HA2. but they are not detected.53.Threshold value • Threshold value = 3. • If the threshold value is increased to the distance corresponding to the image -. then 100% detection takes place but accuracy goes down as the program will “detect” faces in images with no human face in them.HA2. the image – nens consists of a number of faces.53.YM.

Implementation – MATLAB

Functions used: imread, reshape, double, clear, eig, zeros, pdist, uint8.

Eigenfaces – summary in words –

• Eigenfaces are the eigenvectors of the covariance matrix obtained from the training set.
• Eigenfaces are the ‘standardized face ingredients’ derived from the statistical analysis of many pictures of human faces.
• A human face may be considered to be a combination of these standardized faces.

Conclusion –

• Project images onto a low-dimensional linear subspace, the ‘face space’, defined by the eigenfaces.
• The distance between an image and its projection in face space is compared to a threshold value determined experimentally.
• This approach was tested on a number of images, giving good detection results.

Advantages –

• Ease of implementation
• Simplicity of the maths behind the concept
• No knowledge of geometry or specific features of the face is required

Limitations –

• Applicable only to front views
• Input images should be of the same type (size, color, etc.) as the images in the training set

References –

1. M. Turk and A. Pentland, “Eigenfaces for recognition”
2. M. Turk and A. Pentland, “Face recognition using eigenfaces”
3. Harinder P Singh, Ravi K Gulati and Ranjan Gupta, “Stellar Spectral Classification using Principal Component Analysis and artificial neural networks”
4. http://en.wikipedia.org/wiki/Eigenface
5. http://en.wikipedia.org/wiki/Principal_Component_Analysis
6. Erwin Kreyszig, Advanced Engineering Mathematics
7. JAFFE image database – Michael J. Lyons, Shigeru Akamatsu, Miyuki Kamachi, Jiro Gyoba, “Coding Facial Expressions with Gabor Wavelets”
