
Seeing With OpenCV

Face Recognition With Eigenface


Eigenface is a simple face recognition algorithm that's easy to implement. It's often the first face recognition method that computer vision students learn, and it's a standard workhorse method in the computer vision field. The paper that describes the Eigenface method was published by Turk and Pentland in 1991 (Reference 3, below). Citeseer lists 223 citations for their paper, an average of 16 citations per year since publication!

Face recognition is the process of putting a name to a face. Once you've detected a face, face recognition means figuring out whose face it is. You won't see security-level recognition from eigenface. It works well enough, however, to make a fun enhancement to a hobbyist robotics project.

This month's article gives a detailed explanation of how eigenface works and the theory behind it. Next month, this series concludes by showing you how to use OpenCV's implementation of eigenface for face recognition, taking you through the programming steps to implement it.

The steps used in eigenface are also used in many advanced methods. In fact, if you're interested in learning computer vision fundamentals, I recommend you learn about and implement eigenface, even if you don't plan to incorporate face recognition into a project! One reason eigenface is so important is that its basic principles, PCA and distance-based matching, appear over and over in numerous computer vision and machine learning applications.


What is Eigenface?

Here's how recognition works: Given example face images for each of several people, plus an unknown face image to recognize,

1) Compute a "distance" between the new image and each of the example faces.
2) Select the example image that's closest to the new one as the most likely known person.
3) If the distance to that face image is below a threshold, "recognize" the image as that person; otherwise, classify the face as an "unknown" person.
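As a minimal sketch of those three steps, the matching loop might look like the Python function below. The gallery structure, the distance function, and the threshold are illustrative placeholders, not code from this article or from OpenCV.

def recognize(new_image, gallery, distance, threshold):
    """gallery: dict mapping a person's name to one example face image.
    distance: any function that measures how far apart two images are."""
    # Step 1: compute a distance between the new image and each example face.
    distances = {name: distance(new_image, example)
                 for name, example in gallery.items()}
    # Step 2: the closest example is the most likely known person.
    best_name = min(distances, key=distances.get)
    # Step 3: accept the match only if it's close enough; otherwise "unknown".
    return best_name if distances[best_name] <= threshold else "unknown"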

In eigenface, distance is measured as Euclidean distance. In two dimensions (2D), the Euclidean distance between points P1 and P2 is

d = sqrt(Δx² + Δy²),

where Δx = x2 - x1 and Δy = y2 - y1. In 3D, it's sqrt(Δx² + Δy² + Δz²). Figure 1 shows Euclidean distance in 2D.

FIGURE 1. Euclidean distance in 2D between two points, P1 and P2.

In a 2D plot such as Figure 1, the dimensions are the X and Y axes. To get 3D, throw in a Z axis. But what are the dimensions for a face image? The simple answer is that eigenface considers each pixel location to be a separate dimension. But there's a catch.

The catch is that we're first going to do something called dimensionality reduction. Before explaining what that is, let's look at why we need it.

How "Far Apart" Are These Images?

Even a small face image has a lot of pixels. A common image size for face recognition is 50 x 50. An image this size has 2,500 pixels. To compute the Euclidean distance between two of these images, using pixels as dimensions, you'd sum the square of the brightness difference at each of the 2,500 pixel locations, then take the square root of that sum.

There are several problems with this approach. Let's look at one of them: the signal-to-noise ratio.
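Here's a minimal sketch of that pixel-by-pixel computation with OpenCV and NumPy; the file names are placeholders, not files from this article.

import cv2
import numpy as np

# Two 50 x 50 grayscale face images (file names are hypothetical).
a = cv2.imread("face_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
b = cv2.imread("face_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Treat each of the 2,500 pixel locations as a separate dimension.
diff = (a - b).ravel()              # 2,500 brightness differences
dist = np.sqrt(np.sum(diff ** 2))   # sum the squares, then take the square root
print(dist)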

Noise Times 2,500 is a Lot of Noise

By computing a distance between face images this way, we've replaced 2,500 differences between pixel values with a single value. The question we want to consider is, "What effect does noise have on this value?"


FIGURE 2. Left: Fitting a line to three points is a special case of PCA. Right: Projecting 2D map points onto the 1D subspace assigns each point the location on the line that's closest to it.

Let's define noise as anything, other than an identity difference, that influences pixel brightness. Even when two images show exactly the same face, small, incidental variations in brightness will be present at many pixel locations. If each one of these pixels contributes even a small amount of noise, the sheer number of pixels means the total noise level will be very high. Amidst all these noise contributions, whatever information is useful for identifying individual faces is presumably some small contributing signal. But with 2,500 pixels each adding some amount of noise to the answer, that small signal is hard to measure.


Very often, the information of interest has a much lower dimensionality than the number of measurements. In the case of an image, each pixel's value is a measurement. Most likely, we can (somehow) represent the essential information that would allow us to distinguish the faces of different individuals with a much smaller number of parameters than 2,500. Maybe that number is 100; maybe it's 12. We don't claim to know in advance what it is, only that it's probably much smaller than the number of pixels.

If this assumption is correct, summing all the squared pixel differences creates a noise contribution that's extremely high compared to the useful information. One goal of dimensionality reduction is to tone down the noise level so that the important information can come through.
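To see the problem numerically, here's a small self-contained simulation; the numbers are made up for illustration and aren't from the article. A tiny "identity" signal lives in just 20 of the 2,500 pixels, while every pixel carries a little random noise.

import numpy as np

rng = np.random.default_rng(0)
n_pixels = 2500

# A small identity signal: only 20 pixels differ, each by 10 brightness levels.
signal = np.zeros(n_pixels)
signal[:20] = 10.0

# Incidental noise at every pixel location, about 2 brightness levels on average.
noise = rng.normal(0.0, 2.0, n_pixels)

signal_only = np.sqrt(np.sum(signal ** 2))          # distance from the signal alone
measured = np.sqrt(np.sum((signal + noise) ** 2))   # distance we actually compute

print(signal_only, measured)   # the accumulated noise outweighs the small signal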

Dimensionality Reduction by PCA

There are many methods for dimensionality reduction. The one that eigenface uses is called Principal Components Analysis, or PCA for short.

Line Fitting and PCA

To get an intuition for what PCA does, let's look at a special case of PCA: a "least squares" line fit. The lefthand side of Figure 2 shows an example of fitting a line to three points: the 2D map locations for Los Angeles, Chicago, and New York. (To keep the explanation simple, I've ignored 3D factors such as elevation and the curvature of the Earth.) These three points are almost, but not quite, on a single line. If we were planning a trip, a single line would already be useful, because it represents most of the relationship between these locations.

A line has only one dimension, so if we replace the 2D map locations with locations along a single line, we'll have reduced the dimensionality. Because the three cities are almost lined up already, a line can be fitted to them with little error. The error in the line fit is measured by adding together the square of the distance from each point to the line. The best-fit line is the one that has the smallest error.

Defining a Subspace

Although the line found above is a 1D object, it's located inside a larger, 2D space. The line is a subspace of the 2D space defined by the (x,y) coordinate system. If we position a rectangular (x,y) coordinate system so that its origin is somewhere on this line, we can write the line equation simply as y = mx, where m is the line's slope: Δy/Δx.

The PCA Subspace

The best-fit line is oriented in the direction that keeps the data points most separated from one another. This description emphasizes the aspect of the data we're interested in, namely the direction of maximum separation, which is called the first principal component. In that sense, the first principal component expresses something essential about a dataset. The direction with the next largest separation, perpendicular to the first, is the second principal component. In a 2D dataset, we can have at most two principal components. Since images have much higher dimensionality, we can have many more principal components.
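Here's a minimal NumPy sketch of that special case. The three 2D points are made up for illustration (they aren't real map coordinates), and the principal components come from a singular value decomposition of the centered data.

import numpy as np

# Three made-up 2D "map" points that are almost, but not quite, on a line.
points = np.array([[0.0, 0.0],
                   [5.0, 2.1],
                   [10.0, 3.9]])

# Center the data so the fitted line (the subspace) passes through the origin.
mean = points.mean(axis=0)
centered = points - mean

# The rows of vt are the principal directions of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
first_pc = vt[0]    # direction of maximum separation: the best-fit line
second_pc = vt[1]   # the perpendicular direction with the leftover variation

m = first_pc[1] / first_pc[0]   # the slope in y = m*x, in centered coordinates
print(first_pc, m)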

However, the number of principal components we can find is also limited by the number of data points. To see why that is, think of a dataset that consists of just one point. What's the direction of maximum separation for this dataset? There isn't one, because there's nothing to separate. Now consider a dataset with just two points. The line connecting these two points has an orientation (its slope), and that slope indicates the direction in which the points are spread out the most. It expresses something important about the dataset, so it is the first principal component. But there's no second principal component, because there's nothing more to separate: both points are already fully separated.

We can extend this idea indefinitely. Three points define a plane, which is a 2D object, so a dataset with three data points can never have more than two principal components, even if it's in a 3D, or higher, coordinate system. In eigenface, each 50 x 50 face image is treated as one data point (in a 2,500 dimensional "space"), so the number of principal components we can find will never be more than the number of face images.

Although it's important to have a conceptual understanding of what principal components are, you won't need to know the details of how to find them to implement eigenface. That part has been done for you already in OpenCV. I'll take you through the API for that in next month's article.

Projecting Data Onto a Subspace

Meanwhile, let's finish the description of dimensionality reduction by PCA. We're almost there! Going back to the map in Figure 2, now that we've found a 1D subspace, we need a way to convert 2D points to 1D points. The process for doing that is called projection. When you project a point onto a subspace, you assign it the subspace location that's closest to its location in the higher dimensional space. That sounds messy and complicated, but it's neither. To project a 2D map point onto the line in Figure 2, you'd find the point on the line that's closest to that 2D point. That's its projection.

There's a function in OpenCV for projecting points onto a subspace, so again, you only need a conceptual understanding. You can leave the algorithmic details to the library.

The blue tic marks in Figure 2 show the subspace locations of the three cities that defined the line. Other 2D points can also be projected onto this line. The righthand side of Figure 2 shows the projected locations for Phoenix, Albuquerque, and Boston.

Computing Distances Between Faces

In eigenface, the distance between two face images is the Euclidean distance between their projected points in the PCA subspace, rather than the distance in the original 2,500 dimensional image space.
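To make both ideas concrete, here's a minimal NumPy sketch with made-up numbers: it projects two 2D points onto a 1D line and then measures the distance between them inside that subspace.

import numpy as np

def project_onto_line(point, mean, direction):
    # Subspace coordinate: how far along the (unit-length) line the point lands.
    coord = np.dot(point - mean, direction)
    # The projection itself: the closest point on the line, back in 2D.
    closest = mean + coord * direction
    return coord, closest

# A line through the origin at 45 degrees (unit direction vector).
mean = np.zeros(2)
direction = np.array([1.0, 1.0]) / np.sqrt(2.0)

coord_a, _ = project_onto_line(np.array([2.0, 0.0]), mean, direction)
coord_b, _ = project_onto_line(np.array([0.0, 5.0]), mean, direction)

# Distance measured in the 1D subspace instead of the original 2D space.
print(abs(coord_a - coord_b))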

FIGURE 3. Left: Face images for 10 people. Right: The first six principal components, viewed as eigenfaces.

Computing the distance between faces in a lower dimensional subspace is the technique that eigenface uses to improve the signal-to-noise ratio.

Many advanced face recognition techniques are extensions of this basic concept. The main difference between eigenface and the advanced techniques is the method used for defining the subspace. Instead of using PCA, the subspace may be based on Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and so on.

As mentioned above, this idea, dimensionality reduction followed by a distance calculation in a subspace, is widely used in computer vision work. It's also used in other branches of AI. In fact, it's one of the primary tools for managing complexity and for finding the patterns hidden within massive amounts of real world data.
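Putting the pieces together, here's a hedged sketch of the whole eigenface distance calculation. It is not this article's code; it's an illustrative pipeline using the PCA functions in OpenCV's Python bindings (cv2.PCACompute and cv2.PCAProject, whose exact signatures can vary between OpenCV versions). The file names are placeholders for your own gallery.

import cv2
import numpy as np

# Training faces: one row per 50 x 50 grayscale image, flattened to 2,500 values.
files = ["person1_a.png", "person1_b.png", "person2_a.png", "person2_b.png"]
faces = np.array([cv2.imread(f, cv2.IMREAD_GRAYSCALE).ravel() for f in files],
                 dtype=np.float32)

# Build the PCA subspace; the rows of eigenvectors are the eigenfaces.
mean, eigenvectors = cv2.PCACompute(faces, mean=None, maxComponents=3)

# Project the training faces and a new, unknown face into the subspace.
projections = cv2.PCAProject(faces, mean, eigenvectors)
unknown = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE).ravel()
unknown_proj = cv2.PCAProject(unknown.reshape(1, -1).astype(np.float32),
                              mean, eigenvectors)

# Eigenface distance: Euclidean distance between the projected points.
dists = np.linalg.norm(projections - unknown_proj, axis=1)
best = int(np.argmin(dists))
print(files[best], float(dists[best]))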

Picturing the Principal Components


In our definition of a line as a 1D subspace, we used both x and y coordinates to define m, its 2D slope. When m is a principal component for a set of points, it has another name: it's an eigenvector. As you no doubt guessed, this is the basis for the name "eigenface."

Eigenvectors are a linear algebra concept. That concept is important to us here only as an alternative name for principal component. For face recognition on 50 x 50 images, each eigenvector represents the slope of a line in a 2,500 dimensional space. As in the 2D case, we need all 2,500 dimensions to define the slope of each line. While it's impossible to visualize a line in that many dimensions, we can view the eigenvectors in a different way. We can convert their 2,500 dimensional "slope" to an image simply by placing each value in its corresponding pixel location. When we do that, we get face-like images called, you guessed it, eigenfaces!
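Here's a short sketch of that conversion, assuming a 2,500-value eigenvector row such as the ones returned by cv2.PCACompute in the earlier sketch:

import cv2
import numpy as np

def eigenvector_to_eigenface(eigenvector, size=(50, 50)):
    # Put each of the 2,500 values back at its corresponding pixel location...
    img = eigenvector.reshape(size)
    # ...then stretch the values into the displayable 0-255 range.
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
    return img.astype(np.uint8)

# Example, using the eigenvectors array from the earlier sketch:
# cv2.imwrite("eigenface0.png", eigenvector_to_eigenface(eigenvectors[0]))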


FIGURE 4. Face images from two individuals. Each individual's face is displayed under four different lighting conditions. The variability due to lighting here is greater than the variability between individuals. Eigenface tends to confuse individuals when lighting varies this much.


Eigenfaces are interesting to look at and give us some intuition about the principal components for our dataset. The lefthand side of Figure 3 shows face images for 10 people. These face images are from the Yale Face Database B (References 4 and 5). It contains images of faces under a range of lighting conditions. I used seven images for each of these 10 people to create a PCA subspace.

The righthand side of Figure 3 shows the first six principal components of this dataset, displayed as eigenfaces. The eigenfaces often have a ghostly look, because they combine components from several faces. The brightest and the darkest pixels in each eigenface mark the face regions that contributed most to that principal component.

Limitations of Eigenface


The principal components PCA finds are the directions of greatest variation in the data. One of the underlying assumptions in eigenface is that variability in the images corresponds to differences between individual faces. This assumption is, unfortunately, not always valid.

Figure 4 shows faces from two individuals. Each individual's face is displayed under four different lighting conditions. These face images are also from the Yale Face Database B. In fact, they're face images for two of the 10 people shown in Figure 3. Can you tell which ones are which? When lighting is highly variable, eigenface often does no better than random guessing. Even highly-tuned commercial face recognition systems are subject to cases of mistaken identity.

Other factors that may "stretch" image variability in directions that tend to blur identity in PCA space include changes in expression, camera angle, and head pose. Part of the challenge of incorporating face recognition into any robotics application is finding ways to accommodate these.

Figure 5 shows how data distributions affect eigenface's performance. The best case for eigenface is at the top of Figure 5. Here, images from two individuals are clumped into tight clusters that are well separated from one another. That's what you hope will happen. The middle panel in Figure 5 shows what you hope won't happen. In this panel, the images for each individual contain a great deal of variability. So much variability has skewed the PCA subspace in a way that makes it impossible for eigenface to tell these two people apart: their face images are projecting onto the same places in the PCA subspace. In practice, you'll probably find that the data distributions for face images fall somewhere in between these extremes. The bottom panel in Figure 5 shows a realistic distribution for eigenface.

Since the eigenvectors are determined only by data variability, you're limited in what you can do to control how eigenface behaves. However, you can take steps to limit, manage, or otherwise compensate for environmental conditions that might confuse it. For example, placing the camera at face level will reduce variability in camera angle. Lighting conditions, such as side lighting from windows, are harder to control. But if your robot is a mobile robot, you might consider adding intelligence to face recognition to compensate for that. For example, if your robot knows roughly where it's located and which direction it's facing, it can compare the current face image only to ones it's seen previously in a similar situation.
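One way to picture that last suggestion (the data structure here is entirely hypothetical, not something from this article or from OpenCV) is to tag each stored face with the context it was captured in, and match only within the same context:

def recognize_in_context(new_image, gallery, context, distance, threshold):
    """gallery: list of (name, image, context) records, where context might be
    a room name or a heading bucket the robot was in when it saw the face."""
    # Compare only against faces previously seen in a similar situation.
    candidates = [(name, img) for name, img, ctx in gallery if ctx == context]
    if not candidates:
        return "unknown"
    best_name, best_img = min(candidates, key=lambda c: distance(new_image, c[1]))
    return best_name if distance(new_image, best_img) <= threshold else "unknown"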

Coming Up

This concludes this month's article. Next month, I'll finish the series by taking you step-by-step through a program that implements eigenface with OpenCV. Be seeing you! SV
