ARKA MAITRA (07/IT/36)
SAROJIT BERA (07/IT/06)
Guided by: Mrs. Modhumita Maity
Contents
- Abstract
- Introduction
- Why Real-Time Face Recognition?
- Real-Time Face Recognition System Requirements
- What Is Difficult About Real-Time Face Recognition
- General Face Recognition Steps
- PCA (Principal Component Analysis)
- Parameter Based Facial Recognition
- Artificial Neural Networks in Real-Time Face Recognition
- Elastic Graph Matching
- Other Face Recognition Algorithms
- Future of Face Recognition
- References and Links
Abstract

In recent years face recognition has received substantial attention from both research communities and the market, but it still remains very challenging in real applications. Many face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented here, categorized into appearance-based and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches to face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph Matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.
Introduction

In recent years face recognition has received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. The machine learning and computer graphics communities are also increasingly involved in face recognition. This common interest among researchers working in diverse fields is motivated by our remarkable ability to recognize people and by the fact that human activity is a primary concern both in everyday life and in cyberspace. Besides, there are a large number of commercial, security, and forensic applications requiring the use of face recognition technologies. These applications include automated crowd surveillance, access control, mug shot identification (e.g., for issuing driver licenses), face reconstruction, design of human-computer interfaces (HCI), multimedia communication (e.g., generation of synthetic faces), and content-based image database management. A number of commercial face recognition systems have been deployed, such as Cognitec, Eyematic, Viisage, and Identix. Facial scan is an effective biometric attribute/indicator. Different biometric indicators are suited for different kinds of identification applications due to their variations in intrusiveness, accuracy, cost, and ease of sensing. Among the six biometric indicators considered, facial features scored the highest compatibility in a machine readable travel documents (MRTD) system based on a number of evaluation factors.
Why Real-Time Face Recognition?
- Security
  ◦ Fight terrorism
  ◦ Find fugitives
- Personal information access
  ◦ ATM
  ◦ Sporting events
  ◦ Home access (no keys or passwords)
  ◦ Any other application that would want personal identification
- Improved human-machine interaction
- Personalized advertising
- Beauty search
Real-Time Face Recognition System Requirements
- The system should be inexpensive enough to use at many locations
- Match within seconds
  ◦ Before the person walks away from the advertisement
  ◦ Before the fugitive has a chance to run away
- Ability to handle a large database
- Ability to do recognition in varying environments
What Is Difficult About Real-Time Face Recognition
- Lighting variation
- Orientation variation (face angle)
- Size variation
- Large database
- Processor intensive
- Time requirements
General Face Recognition Steps
Face recognition scenarios can be classified into two types: (i) face verification (or authentication) and (ii) face identification (or recognition). In the Face Recognition Vendor Test, which was conducted by the National Institute of Standards and Technology (NIST), a third scenario is added, called the "watch list".
I. Face verification:
Face verification is a one-to-one match that compares a query face image against a template face image whose identity is being claimed. To evaluate verification performance, the verification rate (the rate at which legitimate users are granted access) is plotted against the false accept rate (the rate at which imposters are granted access); this plot is called the ROC curve. A good verification system should balance these two rates based on operational needs.
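The trade-off between these two rates can be illustrated with a short NumPy sketch; the similarity scores and threshold below are made-up toy values, not data from any real system:

```python
import numpy as np

def verification_rates(genuine_scores, impostor_scores, threshold):
    """At a given similarity threshold, compute the verification rate
    (fraction of legitimate users accepted) and the false accept rate
    (fraction of imposters accepted)."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    verification_rate = float(np.mean(genuine >= threshold))
    false_accept_rate = float(np.mean(impostor >= threshold))
    return verification_rate, false_accept_rate

# Toy similarity scores: genuine pairs should score high, impostor pairs low.
genuine = [0.9, 0.8, 0.85, 0.6]
impostor = [0.3, 0.8, 0.2, 0.1]
vr, far = verification_rates(genuine, impostor, 0.75)
print(vr, far)  # 0.75 0.25
```

Sweeping the threshold over all values and plotting one (verification rate, false accept rate) point per threshold traces out the ROC curve mentioned above.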
II. Face identiﬁcation:
Face identification ("Who am I?") is a one-to-many matching process that compares a query face image against all the template images in a face database to determine the identity of the query face (see Fig. 3). The identification of the test image is done by locating the image in the database that has the highest similarity with the test image. The identification process is a "closed" test, which means the sensor takes an observation of an individual who is known to be in the database. The test subject's (normalized) features are compared to the other features in the system's database, and a similarity score is found for each comparison. These similarity scores are then ranked in descending order. The percentage of times that the highest similarity score is the correct match, over all individuals, is referred to as the "top match score". If any of the top r similarity scores corresponds to the test subject, it is considered a correct match in terms of the cumulative match. The percentage of times one of those r similarity scores is the correct match, over all individuals, is referred to as the "cumulative match score". The cumulative match score curve plots the percentage of correct identifications against the rank n, where rank n is the number of top similarity scores reported.
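The ranking and cumulative match computation described above can be sketched as follows; the similarity scores and gallery are hypothetical toy values:

```python
import numpy as np

def rank_of_correct_match(similarities, correct_index):
    """Rank (1-based) of the correct identity when the similarity
    scores against the gallery are sorted in descending order."""
    order = np.argsort(similarities)[::-1]
    return int(np.where(order == correct_index)[0][0]) + 1

def cumulative_match_score(ranks, r):
    """Fraction of test subjects whose correct match appears within
    the top r similarity scores."""
    return float(np.mean(np.asarray(ranks) <= r))

# One probe scored against a 5-face gallery; the true identity is entry 2.
sims = [0.4, 0.9, 0.7, 0.1, 0.3]
rank = rank_of_correct_match(sims, 2)  # 0.7 is the 2nd highest score
print(rank)  # 2

# Ranks collected over 5 probes; r = 1 gives the top match score.
ranks = [1, 2, 1, 3, 1]
print(cumulative_match_score(ranks, 1))  # 0.6
```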
PCA (Principal Component Analysis)
The Eigenface algorithm uses Principal Component Analysis (PCA) for dimensionality reduction, finding the vectors that best account for the distribution of face images within the entire image space. These vectors define a subspace of face images, called the face space. All faces in the training set are projected onto the face space to find a set of weights that describes the contribution of each vector in the face space. To identify a test image, the test image is projected onto the face space to obtain the corresponding set of weights. By comparing the weights of the test image with the sets of weights of the faces in the training set, the face in the test image can be identified.
Eigenfaces Initialization Algorithm
Step 1: Acquire an initial set of face images (the training set).
Step 2: Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.
Step 3: Calculate the corresponding distribution in M-dimensional weight space for each known individual, by projecting their face images onto the "face space".
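A minimal NumPy sketch of this initialization, using the standard trick of diagonalizing the small n-by-n matrix A·Aᵀ instead of the huge pixel covariance; the random 64-pixel "images" stand in for a real training set:

```python
import numpy as np

def train_eigenfaces(faces, M):
    """faces: (n_images, n_pixels) matrix of flattened training faces.
    Returns the mean face, the top-M eigenfaces, and the weight
    vectors of the training set in the M-dimensional face space."""
    mean = faces.mean(axis=0)
    A = faces - mean                          # centred data
    # Eigenvectors of the small (n x n) matrix A A^T map back to
    # eigenfaces via A^T, avoiding the (pixels x pixels) covariance.
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:M]        # keep M largest eigenvalues
    eigenfaces = (A.T @ vecs[:, order]).T     # shape (M, n_pixels)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    weights = A @ eigenfaces.T                # Step 3: project training set
    return mean, eigenfaces, weights

def identify(test_face, mean, eigenfaces, weights):
    """Project a test face onto the face space and return the index
    of the nearest training face in weight space."""
    w = (test_face - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

rng = np.random.default_rng(0)
faces = rng.random((10, 64))                  # 10 toy 'images' of 64 pixels
mean, efaces, W = train_eigenfaces(faces, M=4)
print(identify(faces[3], mean, efaces, W))    # 3 -- recovers its own identity
```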
Eigenfaces Recognition Algorithm
Step 1: Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
Step 2: Determine whether the image is a face at all by checking to see if the image is sufficiently close to "face space".
Step 3: If it is a face, classify the weight pattern as either a known person or as unknown.
Step 4: (Optional) Update the eigenfaces and/or weight patterns.
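Steps 1-3 of the recognition stage can be sketched as follows; the orthonormal basis, the enrolled weights, and both thresholds below are toy values chosen purely for illustration:

```python
import numpy as np

def classify(input_image, mean, eigenfaces, known_weights,
             face_space_threshold, identity_threshold):
    """Project an image, test its closeness to face space, then
    classify it as a known person, an unknown face, or not a face."""
    phi = input_image - mean
    w = phi @ eigenfaces.T                      # Step 1: weight vector
    reconstruction = w @ eigenfaces             # back-projection from face space
    if np.linalg.norm(phi - reconstruction) > face_space_threshold:
        return "not a face"                     # Step 2: too far from face space
    dists = np.linalg.norm(known_weights - w, axis=1)
    if dists.min() > identity_threshold:        # Step 3: no enrolled match
        return "unknown face"
    return f"person {int(np.argmin(dists))}"

# Toy setup: a 2D face space inside 4-pixel 'images', one enrolled person.
mean = np.zeros(4)
eigenfaces = np.array([[1., 0., 0., 0.],
                       [0., 1., 0., 0.]])
known = np.array([[1., 0.]])
print(classify(np.array([1., 0., 0., 0.]), mean, eigenfaces, known, 0.5, 0.5))
# person 0
```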
Recognition performance decreases quickly as the head size, or scale, is misjudged. The head size in the input image must be close to that of the eigenfaces for the system to work well. In the case where every face image is classified as known, a sample system achieved approximately 96% correct classification averaged over lighting variation, 85% correct averaged over orientation variation, and 64% correct averaged over size variation.
Parameter Based Facial Recognition
- The facial image is analyzed and reduced to a small set of parameters describing prominent facial features
- Major features analyzed: eyes, nose, mouth, and cheekbone curvature
- These features are then matched against a database
- Advantage: the recognition task is not very expensive
- Disadvantage: the image processing required is very expensive, and parameter selection must be unambiguous to match an individual's face
Artificial Neural Networks in Real-Time Face Recognition
- Use many of the same algorithms described before, but with back-propagation ANNs
- Disadvantages:
  ◦ Complex and difficult to train
  ◦ Difficult to implement
  ◦ Sensitive to lighting variation
Elastic Graph Matching
- Each face is represented by a set of feature vectors positioned on the nodes of a coarse 2D grid placed over the face
- Each feature vector comprises a set of responses of 2D Gabor wavelets differing in orientation and scale
- Comparing two faces is accomplished by matching and adapting the grid of a test image to the grid of a reference image, where both grids have the same number of nodes; the test grid initially has the same structure as the reference grid
- The elasticity of the test grid allows accommodation of face distortions (e.g., due to expression change) and, to a lesser extent, changes in viewpoint
- The quality of the match is evaluated using a distance function
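A rough sketch of computing such a feature vector (a "jet" of Gabor responses) at one grid node, using NumPy only; the kernel size, wavelengths, and orientation count below are illustrative choices, not values from any particular system:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor wavelet: a plane wave of the given
    wavelength and orientation theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def jet(image, cx, cy, wavelengths=(4, 8), n_orient=4, size=9, sigma=3.0):
    """Feature vector at grid node (cx, cy): responses of Gabor
    wavelets differing in orientation and scale."""
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    responses = []
    for wavelength in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            responses.append(float(np.sum(patch * gabor_kernel(size, wavelength, theta, sigma))))
    return np.array(responses)

img = np.random.default_rng(1).random((32, 32))  # toy image
v = jet(img, 16, 16)
print(v.shape)  # (8,) -- 2 scales x 4 orientations
```

The distance function between two faces would then compare the jets at corresponding grid nodes, penalized by how much the elastic test grid had to deform.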
Other Face Recognition Algorithms:
- LDA (Linear Discriminant Analysis)
- Bayesian classifier
- Gabor wavelet algorithm
Future of Face Recognition
- Some consider the problem impossible
- No standard way of approaching the problem
- Advancements in hardware and software
- Slow integration into society in limited environments
- Very large potential market
References and Links
- Signal Processing Institute, Swiss Federal Institute of Technology: http://scgwww.epfl.ch/
- Biometric Systems Lab, University of Bologna: http://bias.csr.unibo.it/research/biolab/