ECEN 448: Real-time DSP, Final Project, Texas A&M University (Fall 2010), Instructor: Dr. D. Kundur

Final Project: Face Recognition
Objectives of this Project
• This project introduces students to the practical capabilities and challenges of automated facial recognition. Good luck!

Instructions
• This project can be conducted in groups of at most two people. It is acceptable to discuss the project with other groups and to help other groups. However, the report has to be done separately for each group and no “copying” of code is allowed between groups. You should use the last lab session on December 2, 2010 to ask the TA any questions you may have and to attempt to conduct the project. You must attend the December 2, 2010 lab session for full points.

Deliverables
• Please provide a group report presenting the following:
  o A picture of the *.mdl and/or *.m file(s) that you used to generate the results. Please note that you can use either MATLAB or Simulink to conduct this project.
  o A picture of your test and reference image(s) that you used to generate the results.
  o Documentation and explanation of your results.
  o A discussion of the challenges you encountered and any changes in models you made to obtain good results.

You will be graded on the completeness, accuracy and presentation quality of your report.

Grading and Due Date
This project report is due at noon on Tuesday December 14, 2010. Please note that this is a strict deadline, as the grades must be reported shortly after this date.

Acknowledgment
  William Luh developed a preliminary version of this project description.  

© Dr. Deepa Kundur

Introduction and Background

Nowadays when you get your passport photo taken, instead of saying "smile!" the photographer will say "don't smile!" Why are these photographers so cranky? Hey, don't blame them; blame the DSP used in airport face recognition devices that can't handle a smile. If you haven't guessed what this project is about from the above introduction (and the title), then be prepared to smile, because you are actually going to implement a very rudimentary face recognition device! Most face recognition devices follow the same block diagram as in Figure 1.

[Figure 1: General Block Diagram for Face Recognition. A reference image of a face and a test image of a face each undergo feature extraction; the test features are registered to the reference features (image registration), producing X and Y, which are then compared.]

Before you dive into the details, note that most face recognition devices that search huge databases are implemented via software, so that the software can access the image face databases stored locally or over some network. For example, the face recognition software used in Las Vegas casinos can recognize the faces of patrons while they move across the gambling floor, or while standing at the craps table. Since software implementations allow greater computational power (compare your Pentium to the DSP you're using for your labs!), we must water down the algorithms used substantially, and thus your implementation will lose the efficacy enjoyed by commercial face recognition software. After learning the basics in this project, you can take these ideas and implement more complex and more effective algorithms.

For our algorithm, we must capture only the face, looking straight at the camera, with no make-up, and with the camera always at the same distance from the face – pretty much like taking a photo for your driver's license or passport, and of course no smile! This guarantees higher detectability and a lower false alarm rate. In addition, the background should be completely black; the reason we want a black background will become apparent soon. (Footnote 1: You may want to take the picture initially with a white background, and then use photo editing software such as Photoshop to make the background black. The reason is that lighting from various sources, including your flash, may give the background an unwanted luster.)

Images as Matrices

Before we begin describing each block used in this project, let's take a moment to review images. Recall that a sampled time-series signal can be considered as a 1-D vector. Naturally, a digital grayscale image can be considered as a matrix. In this project we'll deal only with grayscale images, and thus consider one plane. If you want to get fancier and include color images, then you would have a 2-D matrix for every color plane: red, green, blue (RGB). Different formats represent color images differently. For example, instead of using the RGB format, one may use the YCbCr format, which also represents color using 3 planes: luminance (describing the intensity) and chrominance for blue and red. An 8-bit grayscale image is thus a matrix whose values take on the integers 0 to 255, in other words 256 = 2^8 shades of gray. The minimum value 0 corresponds to black, since 0 intensity is darkness, while the maximum 255 corresponds to white, as the brightest scenario is all white.
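To make this concrete, here is a minimal MATLAB sketch of an image living in the workspace as a matrix ('face.jpg' is a hypothetical filename):

    A = imread('face.jpg');  % M x N x 3 uint8 array: one plane each for R, G, B
    G = rgb2gray(A);         % M x N uint8 matrix: a single grayscale plane
    size(G)                  % returns the matrix dimensions [M N]
    G(1,1)                   % a single pixel: an integer from 0 (black) to 255 (white)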

Extracting Facial Features

What are facial features? These turn out to be items such as your eyes, nose, and mouth. Most of us can see one another's eyes, nose and mouth because there are clear visual boundaries or contours that outline these items. We can exploit these contours using edge detection in order to identify facial features. In this project, instead of using Simulink's edge detection algorithm, we will use MATLAB's function edge. Recall that in edge detection, if you input an 8-bit grayscale image, the output will actually be a binary image (only 0's and 1's) highlighting the edge contours of the original image. Each pixel of the output will be either solid black or solid white; the white parts of the image correspond to the edges of the image.

Image Registration (Alignment)

The reason that facial feature extraction is so important is that it allows one to register, or align, two facial images together for more accurate comparison. Often the comparison is based on the geometry between the features. (Footnote 2: More advanced face recognition software will also compute the geometries between sets of features to test whether the two faces in question may be the same or completely different.) For example, your eyes and nose can be considered as the vertices of a triangle. Some triangles may be equilateral, while others will have different angles. Of course, two different faces may share the same triangle type, and thus this test is not the only test to be performed. During the registration process one image is matched to the other's geometry. There are many ways to do this; the details are beyond the scope of this project. In our algorithm the test image features are registered to the reference image features as shown in Figure 1, so registration is important in removing any differences that are due to the photographic angle on the face, but not the individual pixel values themselves. Registration may be conducted using MATLAB or Simulink, and we provide a *.m (MATLAB) file to help you with this. Feel free to look through the file if you want the details of registration.

Statistical Test for Comparison

After registration, the two feature images must be compared. Looking at two faces, you may see right away whether the two eye shapes and the distances between the eyes match. However, machines do not have the brilliance of our vision and brain, and need a more systematic way to compare images. Many comparison algorithms are based on correlation detectors; to enhance performance, you need to train your algorithm on several faces, and many face recognition software packages require a training face. We shall use a simple method of comparison, which is not so effective, but easy to explain, and easy to implement and test without training! Common sense tells you that if the two features are exactly the same, then subtracting one from the other (pixel-by-pixel) will yield a difference image that consists of all 0 valued pixels. Common sense also tells you that unless you are comparing exactly the same image file, the difference image will not be all 0 valued pixels even when the two photos are of the same face! Why? Each time you snap a shot of someone's face, lighting conditions, movement of the facial muscles, etc., will result in a photo that may look the same, but whose pixel values may be somewhat different. Hence in reality, the difference image of two faces that are the same should contain very few pixels that are nonzero. In other words, the mean of the difference pixels should be 0 (in a perfect world where lighting conditions are constant and facial expressions are mannequin-like).

If we simply take the sample average of all the difference pixels, it is unlikely that the resulting value will be identical to 0. Say we compute a sample average of 1.5. Do we decide same face or different face? Where does the threshold lie for the comparison stage? For this, we rely on statistics. Our approach is to model the image pixels as random variables and then apply a confidence interval test to give a probability measure that can be used for comparison of the facial similarities.

Let X denote the reference feature-extracted (i.e., edge detected) image, and Y denote the test feature-extracted (i.e., edge detected) image after registration, as shown in Figure 1. Let both images be M×N matrices. Assume the pixels of each image are denoted X_{i,j} and Y_{i,j}, and define the sample mean (a.k.a. average) and sample variance of the difference image X − Y as in Equations (1) and (2), respectively:

$$\mathrm{mean}(X-Y) = \frac{1}{M \cdot N}\sum_{i,j}\left(X_{i,j}-Y_{i,j}\right) \qquad (1)$$

$$\mathrm{var}(X-Y) = \frac{1}{M \cdot N - 1}\sum_{i,j}\left[\left(X_{i,j}-Y_{i,j}\right)-\mathrm{mean}(X-Y)\right]^{2} \qquad (2)$$

Notice that the minus one in the denominator of Equation (2) is not a typo. The minus one is used to make the sample variance an unbiased estimate of the true variance.

If we model the pixels X_{i,j} and Y_{i,j} as random variables that are independent, then the sample mean and sample variance, denoted mean(X − Y) and var(X − Y), are also random variables. By the Central Limit Theorem, we can assume that mean(X − Y) is a Gaussian random variable, with its own mean equal to the true mean of the difference image X − Y, denoted m, and its variance equal to the "true variance of (X − Y)"/(M·N) (don't worry about why this is true; the details are beyond the scope of this project, but we will use it in the next paragraph).

So now we have a random variable mean(X − Y) that has a Gaussian distribution with an estimate of its mean and variance. The question we ask during this comparison phase of the facial recognition is whether the sample mean of Equation (1) is close enough to zero that it seems the faces in the reference and test images are the same. We can use probability theory to test whether it is likely that 0 is the true mean. Without access to the true mean and true variance, we instead employ the sample mean of Equation (1) with the actual image pixel values substituted, which we denote mean(x − y) so as not to confuse it with the random variable version, and substitute the sample variance appropriately into "true variance of (X − Y)"/(M·N). We form a confidence interval and check whether 0 is inside this confidence interval. Specifically, we ask: Is 0 in the interval [mean(x − y) − b, mean(x − y) + b]? The parameter b is a number which satisfies, for a user-defined p:

$$\Pr\{\mathrm{mean}(X-Y) \in [\mathrm{mean}(x-y)-b,\ \mathrm{mean}(x-y)+b]\} = 1-p \qquad (3)$$
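Equations (1) and (2) translate directly into MATLAB. A minimal sketch, assuming X and Y are M×N double matrices already in the workspace:

    D = X - Y;                            % difference image
    [M, N] = size(D);
    m = sum(D(:)) / (M*N);                % sample mean, Equation (1)
    v = sum((D(:) - m).^2) / (M*N - 1);   % sample variance, Equation (2)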

For example, if we choose p = 0.05, then the corresponding value of b is set so that with probability 0.95 (or confidence of 95%), all occurrences of sample means should be in this interval. Thus if we get a sample mean outside of this interval, we know that this had only a 0.05 chance of happening, so it must be an anomaly, and we reject the notion that the true mean can be 0. Another way to look at this: if p = 0.05 and the corresponding parameter b is such that 0 is not in the interval [mean(x − y) − b, mean(x − y) + b], then we can say that with probability 0.95 the two faces are not the same.

Let us consider a pictorial representation as in Figure 2. Figure 2 shows a Gaussian probability density function for the estimated mean random variable, denoted mean(X − Y).

[Figure 2: Tail Probabilities as an Indicator of Whether 0 is an Anomaly or Not. A Gaussian density for mean(X − Y), estimated with mean(x − y); the interval [mean(x − y) − b, mean(x − y) + b] is marked on the horizontal axis, and the two tails outside it are shaded.]

Recall from probability theory that integrating under a density function over a portion of the horizontal axis, say [a, c], gives the probability of the random variable falling in that particular range of values. Recall also that integrating over the entire probability density function equals one. You can see in Figure 2 that the shaded region corresponds precisely to p (because the un-shaded region corresponds to 1 − p and the overall area under the function is one). The shaded region of Figure 2, called the "tail probabilities," corresponds to the event that mean(X − Y) does not fall in the region [mean(x − y) − b, mean(x − y) + b] (the region itself having probability 1 − p); this tail probability is commonly called the p-value. The smaller the p-value, the more likely 0 is an anomaly when it is not in that interval. Analytically, the p-value is given by Equation (4):

$$\text{p-value} = 1 - \mathrm{erf}\!\left(\frac{\left|\mathrm{mean}(X-Y)\right|}{\sqrt{2\,\mathrm{var}(X-Y)/(MN)}}\right) \qquad (4)$$

The erf, or error function, is used to find probabilities for the Gaussian distribution. It is calculated based on a look-up table.
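Using m and v from the sketch above, Equation (4) is one line of MATLAB:

    p = 1 - erf( abs(m) / sqrt(2 * v / (M*N)) );  % p-value, Equation (4)

A p-value near 1 means the sample mean is statistically indistinguishable from 0 (likely the same face); a small p-value flags 0 as an anomaly (likely different faces).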

Design and Implementation

Implementing a Simulink model for Figure 1 may look simple enough, but when you try to actually make it, the details of doing so are not inherently obvious. Even though you'll be performing the face recognition in Simulink, you'll have to write a couple of MATLAB function m-files to help you out. You'll also have to do some preliminary work in the MATLAB command window to prepare the images for the Simulink model. From here on, keep in mind some general guidelines for doing the face recognition.

Preparing the Simulink Model

One of the first problems you may recognize with face recognition is that it is not time based; in other words, face recognition is like a one-time calculation. It doesn't really matter how long that calculation takes (within reason, of course). On the other hand, Simulink models are usually time based (i.e., you specify a start time and a stop time). In order to work around this apparent incompatibility, we're going to "trick" Simulink by setting both the start and stop time to 0 seconds. This is equivalent to sending only one sample from the input, through the block diagram, and to the output, all at time 0. However, in our case, this "sample" at time 0 will not just be a scalar value; rather, it will be an entire image matrix (in fact, there will be two inputs and thus two such images). The requisite calculations will be performed on these two input "samples".

1. Open a new Simulink model and go to Simulation → Configuration Parameters. Select the Solver pane. Set the Start time and Stop time to 0. Under Solver Options, set Type to Fixed-step and Solver to Discrete. Press OK. Even though you haven't added any blocks to your model, go ahead and save it.

Acquiring the Test Images

You have been provided with some test images to get you started; however, in this project you and your partner should also use images of yourselves. It is important to keep in mind that there are some best practices for taking your test and reference images that can help immensely with the performance of your face recognition algorithm. As discussed, take passport-type photos. It is important that your face be against a black background. Some people in the past have taken a picture against a white background (say, against a whiteboard) and then changed it to black, but this can obscure your facial features. The best approach is to get a black screen behind you and take a picture. If you are stuck in the lab, try taking a picture of yourself under the lab table – yes, this works sometimes. Also, make sure that the images are the same size (in pixels); if not, crop them in a program such as Microsoft Office Picture Manager or Photoshop until they are the same size. Please note that you are also given sample images to help you get started on this project. If you crop them, please make sure they are all the same size.

Loading the Images to Simulink

First, before the images can be used in Simulink, you'll have to load them into the MATLAB workspace and change them from color to grayscale. You'll also have to put them in a structure-with-time format (we discuss this more specifically below). After this, you'll be ready to send the images to Simulink.

2. Save your reference and test images in the same directory as your Simulink model. Make sure that the MATLAB Current Directory is set to this same directory. In the MATLAB command window, type the following command, replacing 'name.jpg' with the name of one of your pictures:

    anchor = imread('name.jpg');

This command reads a picture from a file and stores it in an M×N×3 array, where M×N is the dimension of the image. The third dimension (3) represents the red, green, and blue intensities (the image is read as a color image). The array is stored in the variable called anchor. We call this picture the "anchor" because it is the reference face to which all others will be compared. If you type whos into the command window, note that the data type is an unsigned 8-bit integer.

3. Use the imread command again to read your second picture into the MATLAB workspace. Save it in a variable named target. The "target" picture will be compared to the "anchor" picture.

4. The first thing you need to do to the pictures is convert them to grayscale. This is quite easy. Type anchor = rgb2gray(anchor) into the command window. This is the "red-green-blue to grayscale" command. Do the same for the target. If you type whos into the command window, you will see that the third dimension of the arrays has been removed; anchor and target are now M×N matrices.

5. In anticipation of the edge detection that will be done later, you need to convert the image matrices from unsigned 8-bit integers to double-precision floating-point numbers (MATLAB's edge command only operates on double-precision numbers). Type anchor = double(anchor); and do the same for the target.

6. In order to load the image matrices into Simulink, they must be put into a structure format. In MATLAB, a structure is a data type that stores more elementary data types (i.e., double-precision floating point numbers, strings, etc.) in an organized fashion. If you've learned a programming language like C, you might have recognized by now that a MATLAB structure is much like a struct in C. Specifically, structures consist of fields and values. A field is like a category; values are the actual data within those categories. If you're confused, here's an example that should make more sense. Suppose you wanted to store the data for a sine wave that lasts over a period of ten seconds. You could make a structure called S that has two fields: the first field, called time, could store a vector of the corresponding time values from 0 to 10 seconds (let's call this vector t); the second field, called values, could store a vector of the actual values of the sine wave (let's call this vector x). If you wanted to create this structure, the syntax is pretty straightforward (don't actually type this command):

    S = struct('time', t, 'values', x)

7. The complexity of structures is almost limitless because you can even put structures within other structures (which is what we're about to do). In order to be compatible with Simulink, the structures you make must be exactly the way Simulink wants them. Here's Simulink's way. Within the structure are two fields. The first field is called time and contains, of course, a vector of time values; for you, the time vector is a single value: 0. The second field is called signals; it actually contains another structure. This "substructure" has two fields of its own: the first, called values, contains the actual data in question (in your case, the image matrix); the second, called dimensions, as its name suggests, specifies the dimensions of the data (in your case, this would be a vector [M N]). To make this a little clearer, here's a visual depiction of the structure's organization:

[Figure: the structure-with-time organization – a top-level structure with fields time and signals, where signals itself has fields values and dimensions.]
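Putting steps 2 through 7 together, the whole command-window preparation for one image looks something like this sketch ('name.jpg' is a placeholder; I1 is the structure name used in step 8 below):

    anchor = imread('name.jpg');           % step 2: M x N x 3 uint8 array
    anchor = rgb2gray(anchor);             % step 4: M x N grayscale matrix
    anchor = double(anchor);               % step 5: double precision, for edge()
    I1.time = 0;                           % the single "sample" occurs at t = 0
    I1.signals.values = anchor;            % the image matrix itself
    I1.signals.dimensions = size(anchor);  % the vector [M N]

Repeat with your second picture to build the target structure I2.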

8. Based on the description in step 7 and the discussion of structures in step 6, create a structure for the anchor and a structure for the target that are compatible with Simulink. Call the anchor's structure I1 and the target's structure I2.

9. Now you are ready to load the images into Simulink. In your Simulink model, add two From Workspace blocks from Simulink → Sources. In the Block Parameters window, change the Data parameter to I1 and I2, respectively.

Edge Detection

The first step in our face recognition process is to extract the facial features (represented by the blocks labeled "Extraction of Facial Features" in Figure 1). As stated in the Introduction, you're going to do this with MATLAB's edge function. In Simulink, the easiest way to use a function that you would normally use in the command window is an Fcn block from Simulink → User-Defined Functions. The Fcn block is a one-input, one-output block in which you can specify pretty much any one-input, one-output function that is defined in MATLAB. It can even be used for functions that you have written yourself in an m-file. In the Block Parameters of the Fcn block, the u stands for the input to the block. Here, all that you have to do for edge detection is to type double(edge(u)) into the MATLAB function parameter. The double() command is necessary because, due to some peculiarity in Simulink, the block will not support a binary output (and the edge command produces a binary output, as explained in the Introduction). Make two such Fcn blocks and place them in your Simulink model according to Figure 1.

Registering the Images

1. The next step in the facial recognition process is the registration of the images (represented by the block labeled "Alignment of Images" in Figure 1). We've provided you with a registration function in an m-file. Download im_reg_MI.m and save it to the same directory as your Simulink model. As you might have noticed, it is a multi-input, multi-output function. Namely, it has four inputs: image1 (the anchor), image2 (the target), angle (a vector of the possible angles of rotation allowed), and step (the maximum vertical or horizontal translation allowed, in pixels). It has five outputs: h, im_matched (the registered target image; this is the output you want), theta, I, and J. The Fcn block, however, supports only a one-input, one-output function, which is not compatible with im_reg_MI. In order to work around this problem, you're going to have to write your own function m-file (a one-input, one-output function) that calls on im_reg_MI and extracts only the necessary output. To use the function m-file in your Simulink model, you'll have to use an Fcn block again.

2. Write a one-input, one-output function called register, and save your function m-file as register.m in the same directory as your Simulink model. Here are some ideas on how to proceed:
a. Hardcode the step and angle inputs within your function. Inside your function, define angle = [-2:0.01:2] and step = 50. This gets rid of two of the four inputs to im_reg_MI.
b. In your Simulink model, concatenate the anchor and target images into one M×2N matrix (use the edge-detected images, NOT the original anchor and target). You'll have to do this with a Matrix Concatenate block (in the Block Parameters, set Number of inputs to 2, Mode to Multidimensional array, and Concatenate dimension to 2). This combined matrix will be the only input to your function.
c. Inside your function, extract the anchor and target images from the concatenated input of part b (i.e., split it into two separate matrices).
d. Inside your function, call on im_reg_MI, using the angle and step variables you defined in part a, as well as the anchor and target images you extracted in part c.
e. Make the single output of your function equal to the second output of im_reg_MI; you can, of course, ignore the other outputs.
3. In your Simulink model, place an Fcn block that calls on the register function you just created (a sketch of one possible register.m appears at the end of this section).

Comparison of Images

1. The last step in the face recognition process is to compare the anchor to the registered target image. First, you need to calculate the difference image. To do this, subtract the edge-detected, registered target image from the anchor (the edge-detected anchor, NOT the original anchor).
2. Next, you need to calculate the p-value of the difference image. Open a new m-file and write a one-input, one-output function m-file that calculates the p-value from a difference image (one possible shape is sketched at the end of this section). Here are some things to consider as you write your function:
a. The input to this function should be the difference image, and the output should be the p-value itself.
b. Notice that to calculate the p-value in Equation (4), you need the two-dimensional sample mean and sample variance, as defined in Equations (1) and (2). CAUTION: MATLAB's built-in functions mean and var are written for one-dimensional data and will not work as you want on your two-dimensional images. Instead, you'll have to write some code to implement those formulae before you can compute Equation (4). You may consider writing separate function m-files for Equations (1) and (2) and having your p-value function call on these functions.
c. The error function, erf, in Equation (4) can be computed in MATLAB using, you guessed it, the erf() command.
3. To implement your p-value function in your Simulink model, you'll once again need an Fcn block.
4. At the output of this Fcn block, attach a Display block from Simulink → Sinks to view the p-value.
5. If you haven't already done so, connect the blocks that you have so far to reflect the flow of data shown in Figure 1.
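Here is one possible shape for the two m-files described above, as a sketch only: the argument order and output order of im_reg_MI are assumed from the descriptions in this handout, so check them against the m-file you downloaded, and the name pvalue is a placeholder of our choosing.

    function im_matched = register(C)
    % C is the M x 2N concatenation of the edge-detected anchor and target.
    angle = [-2:0.01:2];                 % hardcoded rotation search range (part a)
    step  = 50;                          % hardcoded translation limit (part a)
    N = size(C, 2) / 2;
    anchor = C(:, 1:N);                  % split the combined input (part c)
    target = C(:, N+1:end);
    % Argument/output order assumed; im_matched is taken as the second output (part e).
    [h, im_matched, theta, I, J] = im_reg_MI(anchor, target, angle, step);
    end

    function p = pvalue(D)
    % D is the difference image: edge-detected anchor minus registered target.
    [M, N] = size(D);
    m = sum(D(:)) / (M*N);                        % 2-D sample mean, Equation (1)
    v = sum((D(:) - m).^2) / (M*N - 1);           % 2-D sample variance, Equation (2)
    p = 1 - erf( abs(m) / sqrt(2 * v / (M*N)) );  % Equation (4)
    end

Each function goes in its own file (register.m and pvalue.m); in the corresponding Fcn blocks you would then enter register(u) and pvalue(u), just as you entered double(edge(u)) earlier.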

Running the Model

1. Run your Simulink model. It may take a while to run (perhaps close to a minute), so be patient. Note that because this is a non-real-time application, your simulation runs in Normal mode (as opposed to External mode), and you don't have to build real-time code with Real-Time Workshop.
2. Test different sets of images in your model; specifically, use the test images provided as well as your own images. Remember, each time you want to test a new image, you'll have to load it into the MATLAB workspace, convert it to grayscale and double-precision floating point, and put it in a structure format. Then change the names of the variables in the From Workspace blocks in your model. Suggestion: don't delete or overwrite any of the structures you create in the MATLAB workspace.
3. Try adjusting the angle and step parameters in your register function to see how they affect both the speed of the calculation and the resulting p-value. If your range for the angle is too large, then it will take the program much longer to complete, so keep this to a minimum.

Viewing Images

Although your face recognition algorithm is now complete, it would be nice to view some of the images at various stages in the process. To do this, use a Matrix Viewer block from Signal Processing Blockset → Signal Processing Sinks. In the Block Parameters window, set the Colormap matrix to gray(256) and uncheck the box labeled "Display colorbar". Also:
1. If you want to view any image before edge detection, set the Minimum input value to 0 and the Maximum input value to 256. This is because before edge detection, the intensity values of the image are integers between 0 and 255.
2. If you want to view any image after edge detection, set the Minimum input value to 0 and the Maximum input value to 1. This is because edge detection produces a binary output.
When you run your Simulink model, one window will appear for each Matrix Viewer you have in your model.
