National Conference on Emerging Trends and Applications in Computer Science - 2012

Kaveri Chetia and Kaustubh Bhattacharyya
Don Bosco College of Engineering & Technology, Assam Don Bosco University, Guwahati

Plan of Presentation
 Introduction
 Basic Theory
 Related Works
 Problem Formulation
 Experimental Model
 Results and Performance Analysis
 Conclusion
 References
 Acknowledgement

Introduction
 Biometrics
   Identifies or verifies a person based on one's physical characteristics by matching the test images against the enrolled ones.
 Human Computer Interface (HCI)
   The demand for reliable personal identification in computerized access control has resulted in an increased interest in biometrics to replace passwords and identification (ID) cards.
 Artificial Modeling of Human Natural Ability
 Surveillance System

Identification or Verification
 A face detection system processes the input face; Identification outputs an identity ("This is Mr. Y"), while Verification outputs Yes/No ("Is this Ms. J?").
 Three Key Steps/Sub-Tasks Involved:
  1. Face Detection: detect the face in a given image
  2. Feature Extraction: extract the essential (necessary) features
  3. Face Identification / Face Verification

Basic Blocks of a Face Recognition System
 Pre-processing Unit: Size Normalization, Illumination Normalization, Noise Removal
 Feature Extraction: Global Features, Local Features
 Training
 Testing

. such as  The differences of facial expression.  The variety of posture. the images taken in different surroundings may be unlike.  The light intensity and directions of imaging. size and angle.  Even to the same people.Complications Involved in Face Recognition Systems  Face recognition is influenced by many complications.

Problem Formulation
 An Artificial Neural Network is to be designed that recognizes and classifies a human face relative to the faces of the training database. Hence, the designed system classifies the face and verifies it as well.
 The system must be able to recognize that two images of the same person with different facial expressions are in fact the same person; the face recognition system therefore needs to solve the problem of differing facial expressions as well.
 Further issues the face recognition system needs to solve: makeup, posing positions, illumination conditions and the effect of noise.

Popular Approaches of Face Recognition Systems (FRS)
 (1) Holistic / Global matching methods: These methods use the whole face region as the raw input to the recognition system. E.g. PCA, LDA, SVD, Fisherfaces etc.
   Accurate only for frontal views of faces
   Less accurate for face tilts of more than 30 degrees
 (2) Feature-based (Local) matching methods: Typically, in these methods, local features such as the eyes, nose and mouth are first segmented and features are extracted from these. E.g. ANN, HMM etc.
   Requires a large database for training
   High computational complexity
 (3) Hybrid methods: This system uses both the local features and the whole face region to recognize a face, as a machine recognition system should.
   The trade-off between computational complexity and accuracy is easily overcome.

Artificial Neural Network
 ANN is a massively parallel dynamic system, out of which the MLP feed-forward network is best suited for face recognition.
 It resembles the brain as a processor in two respects:
   Knowledge is acquired by the network through a learning process.
   Interneuron connection strengths, known as synaptic weights, are used to store the knowledge.
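As a rough illustration of the feed-forward pass described above, here is a minimal NumPy sketch. The 752-506-8 layer sizes come from the experimental model later in the deck; the random weight initialization, the omission of biases, and the use of log-sigmoid at both layers are purely illustrative assumptions.

```python
import numpy as np

def logsig(x):
    # Log-sigmoid activation, one of the two activations used in the deck
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, w_hidden, w_out):
    """One forward pass of a 3-layer feed-forward MLP (biases omitted for brevity)."""
    h = logsig(w_hidden @ x)   # hidden-layer activations
    y = logsig(w_out @ h)      # output-layer activations, one unit per person
    return y

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 752, 506, 8            # sizes from the experimental model
w_hidden = rng.standard_normal((n_hidden, n_in)) * 0.01
w_out = rng.standard_normal((n_out, n_hidden)) * 0.01
y = mlp_forward(rng.standard_normal(n_in), w_hidden, w_out)
print(y.shape)   # (8,): one score per enrolled person
```

Training such a network with backpropagation adjusts `w_hidden` and `w_out` so the output matches the binary-encoded target for each person.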

Basic Blocks of the FRS
 Acquisition of the Database
 Pre-processing for Local Feature Extraction: Size Normalization, Intensity and Illumination Normalization, Segmentation of the landmark segments of the face
 Face Normalization for Global Feature Extraction: Size Normalization
 Feature Extraction
 Training the Neural Network
 Simulation of the Test Image
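The pre-processing stages above can be sketched as follows. The 76 x 75 target size is an assumption chosen to be consistent with the segment sizes reported later; the nearest-neighbour resampling and zero-mean/unit-variance intensity normalization are illustrative choices, as the deck does not specify the exact methods.

```python
import numpy as np

def preprocess(face, size=(76, 75)):
    """Size-normalize and intensity-normalize one grayscale face image."""
    # Size normalization via nearest-neighbour resampling (illustrative)
    rows = np.linspace(0, face.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, face.shape[1] - 1, size[1]).astype(int)
    resized = face[np.ix_(rows, cols)]
    # Intensity/illumination normalization to zero mean, unit variance
    return (resized - resized.mean()) / (resized.std() + 1e-8)

img = np.random.default_rng(1).integers(0, 256, (120, 100)).astype(float)
out = preprocess(img)
print(out.shape)   # (76, 75)
```

The normalized face then feeds the global (whole-face) branch directly, while the local branch further segments the landmark regions.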

Experimental Model
 1. Database Images
   Varying illumination, pose, facial expression and distance from the camera
   No. of Persons: 8
   No. of Images per Person: 25
   Total No. of Training Images: 160 (20 per person)
   The acquired database images show variation in expressions.

Experimental Model
 2. Pre-processing and Normalization

Experimental Model
 3. Normalization and Segmentation
   An individual face, the normalized mean face of the database faces, and the face resulting from subtracting the individual face from the mean face
   Segmentation of the landmark parts of the face for the geometrical approach

Experimental Model
 4. Dimensions of the Segmented Segments

Facial Region | Size in % of the Full Face (M x N) | Size in Pixels (M x N)
Right Eye     | 0.16 x 0.25                        | 12 x 19
Left Eye      | 0.16 x 0.25                        | 12 x 19
Nose          | 0.14 x 0.26                        | 18 x 20
Mouth         | 0.21 x 0.38                        | 14 x 28
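Cropping a landmark region from its fractional size can be sketched as below. The 76 x 75 full-face size and the crop offsets are assumptions for illustration; the deck gives only the region sizes, not their positions.

```python
import numpy as np

def crop_fraction(face, top, left, frac_rows, frac_cols):
    """Crop a landmark region whose size is given as a fraction of the full face.
    The offsets (top, left) are illustrative; the deck specifies only sizes."""
    h = round(frac_rows * face.shape[0])
    w = round(frac_cols * face.shape[1])
    return face[top:top + h, left:left + w]

face = np.zeros((76, 75))                      # assumed full-face size
eye = crop_fraction(face, 10, 8, 0.16, 0.25)   # 0.16 x 0.25 of the face
print(eye.shape)                               # (12, 19), matching the eye row
```

Note that 0.16 x 76 and 0.25 x 75 round to exactly the 12 x 19 pixel size reported for each eye.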

Experimental Model
 5. Feature Extraction: Length of the Extracted Features

Segment      | Central Moment | Eigen Vector  | Std. Dev. (Row) | Std. Dev. (Col) | Total Feature Length
Right Eye    | 19             | 12 x 4 = 48   | 12              | 19              | 98
Left Eye     | 19             | 12 x 4 = 48   | 12              | 19              | 98
Nose         | 20             | 18 x 4 = 72   | 18              | 20              | 130
Mouth        | 28             | 14 x 4 = 56   | 14              | 28              | 126
Global       |                | 75 x 4 = 300  |                 |                 | 300
Total Length |                |               |                 |                 | 752
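The per-segment lengths in the table can be reproduced by concatenating the four feature types. The exact moment definition and eigen-decomposition used in the deck are not specified, so the choices below (column-wise second central moments, leading eigenvectors of the row covariance) are assumptions; only the resulting lengths are taken from the table.

```python
import numpy as np

def segment_features(seg, n_eig=4):
    """Illustrative per-segment feature vector: column-wise second central
    moments, the 4 leading eigenvectors of the row covariance, and the
    row-wise and column-wise standard deviations."""
    central = ((seg - seg.mean(axis=0)) ** 2).mean(axis=0)   # one per column
    _, vecs = np.linalg.eigh(np.cov(seg))                    # rows as variables
    eig = vecs[:, -n_eig:].ravel()                           # M x 4 eigen features
    return np.concatenate([central, eig, seg.std(axis=1), seg.std(axis=0)])

rng = np.random.default_rng(2)
sizes = {"right_eye": (12, 19), "left_eye": (12, 19),
         "nose": (18, 20), "mouth": (14, 28)}
lengths = {k: segment_features(rng.random(s)).size for k, s in sizes.items()}
total = sum(lengths.values()) + 300        # plus the 75 x 4 global features
print(lengths, total)                      # per-segment 98/98/130/126; total 752
```

The grand total of 752 is exactly the number of input neurons in the network specification that follows.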

Experimental Model
Artificial Neural Network Specifications
Type: Feed-Forward Backpropagation

Network Parameters                     | Specifications
Number of Layers                       | 3 (Input Layer, Hidden Layer, Output Layer)
Number of Input Units                  | 1 (feature matrix consisting of the feature vectors of eight persons)
Number of Output Units                 | 1 (binary-encoded matrix consisting of eight target vectors for eight persons)
Number of Neurons in the Input Layer   | 752
Number of Neurons in the Output Layer  | 08 (to recognize 8 persons)
Number of Neurons in the Hidden Layer  | (752 + 8) * (2/3) = 506
Number of Iterations                   | 1000, 1500, 2000
Learning Rate                          | 0.5, 0.6, 0.7, 0.8, 0.9
Momentum                               | 0.5, 0.6, 0.7
Activation Functions                   | Log-Sigmoid and Tan-Sigmoid
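The hidden-layer size follows a common two-thirds rule of thumb; a one-line check (integer truncation assumed, since the deck reports 506):

```python
# Hidden-layer sizing: roughly two-thirds of input plus output neuron counts
n_input, n_output = 752, 8        # feature length and number of persons
n_hidden = (n_input + n_output) * 2 // 3
print(n_hidden)                   # 506, as in the specification table
```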

Experimental Model
Binary Encoded Output
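Since the output layer has one neuron per person, the binary-encoded target matrix can be sketched as a one-hot scheme; whether the deck uses exactly this encoding (rather than a denser binary code) is an assumption.

```python
import numpy as np

def encode_targets(n_persons=8):
    """One target vector per person: a single 1 at that person's output
    neuron, 0 elsewhere (the identity matrix)."""
    return np.eye(n_persons, dtype=int)

targets = encode_targets()
print(targets[2])   # target vector for the third person
```

At test time, the predicted identity is simply the index of the largest output activation.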

Results and Performance Analysis
Convergence of Mean Squared Error for Different Numbers of Iterations and Different Activation Functions

Activation Function (Hidden Layer) | Activation Function (Output Layer) | MSE at 1000 | MSE at 1500 | MSE at 2000
Tan-sigmoid                        | Tan-sigmoid                        | 1x10^-4     | 1.2x10^-4   | 1.4x10^-4
Log-sigmoid                        | Log-sigmoid                        | 1x10^-7     | 1x10^-12    | 1x10^-6

Results and Performance Analysis
MSE Convergence Curve

Results and Performance Analysis
Convergence of Mean Squared Error for Different Numbers of Iterations and Different Learning Rates
 Learning rates of 0.5 to 0.9 and momenta of 0.5 to 0.7 were tried over 1000, 1500 and 2000 iterations; the MSE ranged from about 1.4x10^-2 down to a best of 1x10^-12.

Results and Performance Analysis
Comparison of Recognition Rates of the Proposed System with the Other Systems with Variation in the Number of Training Images

Performance | No. of Training Images | Analytical System | Holistic System | Hybrid System
CRR (%)     | 5                      | 57                | 62              | 70
CRR (%)     | 10                     | 67                | 76              | 83
CRR (%)     | 20                     | 73                | 80              | 86
FAR (%)     | 5                      | 27                | 22              | 18
FAR (%)     | 10                     | 20                | 15              | 10
FAR (%)     | 20                     | 15                | 12              | 08
FRR (%)     | 5                      | 16                | 16              | 12
FRR (%)     | 10                     | 13                | 09              | 07
FRR (%)     | 20                     | 12                | 08              | 06

FALSE ACCEPTANCE RATE (FAR): a measure of the likelihood that the proposed system will incorrectly accept an access attempt by an unauthorized user.
FALSE REJECTION RATE (FRR): a measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user.
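The three performance measures follow directly from counts of trial outcomes. The trial counts below are illustrative, not the deck's actual test data; they are chosen only to reproduce one reported row.

```python
def rates(n_correct, n_false_accept, n_false_reject, n_genuine, n_impostor):
    """CRR / FAR / FRR as percentages, per the definitions on the slide."""
    crr = 100.0 * n_correct / n_genuine        # genuine attempts recognized
    far = 100.0 * n_false_accept / n_impostor  # unauthorized attempts accepted
    frr = 100.0 * n_false_reject / n_genuine   # authorized attempts rejected
    return crr, far, frr

# 50 genuine and 50 impostor attempts (hypothetical counts)
print(rates(43, 4, 3, 50, 50))   # (86.0, 8.0, 6.0)
```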

Results and Performance Analysis
Comparison of the Correct Recognition Rates of the Analytical, Holistic and Proposed Hybrid Systems

Results and Performance Analysis
Performance Parameters of the Proposed Hybrid System

Results and Performance Analysis
Plot of the CRR of the System Affected by Gaussian Noise

Results and Performance Analysis
Comparison of Recognition Rates of the Proposed System with the Database Affected by Various Types of Noise

Database            | CRR (%) | FAR (%) | FRR (%)
Ideal Images        | 86      | 8       | 6
Gaussian Noise      | 78      | 10      | 12
Salt & Pepper Noise | 73      | 15      | 12
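The two noise models used in this comparison can be sketched as below. The sigma, noise density, seed, and the 76 x 75 flat test image are illustrative assumptions; the deck does not state its noise parameters.

```python
import numpy as np

def add_gaussian(img, sigma=10.0, seed=0):
    # Additive Gaussian noise, clipped back to the valid intensity range
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def add_salt_pepper(img, density=0.05, seed=0):
    # Salt & pepper noise: drive a random fraction of pixels to 0 or 255
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = 0          # pepper
    out[mask > 1 - density / 2] = 255    # salt
    return out

img = np.full((76, 75), 128.0)   # flat grey test image (assumed face size)
noisy = add_salt_pepper(img)
print(noisy.shape)
```

Re-running the recognition pipeline on such corrupted copies of the database is what produces the CRR/FAR/FRR degradation shown in the table.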

Future Prospects
 The designed system has been tested only on the acquired database images. It could be tested on universally accepted databases such as ORL and Yale.
 The system is trained only for 8 persons. Its accuracy could be validated by training it to recognize more persons.
 A more effective system that is robust to the presence of noise could be designed.
 The most essential features could be extracted by different mathematical modelling, thus reducing the number of input features to the Neural Network.
 The Global Features could be extracted by methods other than PCA and the Recognition Rate could be estimated.

References
 Marcos Faundez-Zanuy, "Biometric security technology," Encyclopedia of Artificial Intelligence, 2008, pp. 262-264.
 A. Khashman, "Intelligent face recognition: local versus global pattern averaging," Lecture Notes in Artificial Intelligence, vol. 4304, Springer-Verlag, 2006, pp. 956-961.
 K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, "Feature extraction based face recognition, gender and age classification," International Journal on Computer Science and Engineering, vol. 02, no. 01S, 2010, pp. 14-23.
 K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, vol. 8, no. 5, 2011, pp. 115-120.
 A. Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, "Expression and illumination invariant preprocessing technique for face recognition," Proceedings of the International Conference on Computer Engineering and Systems, 2008, pp. 59-64.
 Albert Montillo and Haibin Ling, "Age regression from faces using random forests," Proceedings of the IEEE International Conference on Image Processing, 2009, pp. 2465-2468.
 S. Ranawade, "Face recognition and verification using artificial neural network," International Journal of Computer Applications, vol. 5, 2010, pp. 21-25.

References
 Bernd Heisele, Purdy Ho and Tomaso Poggio, "Face recognition with support vector machines: global versus component-based approach," Proceedings of the 8th International Conference on Computer Vision, 2001, pp. 688-694.
 M. Turk and A. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
 Peter Belhumeur, J. Hespanha and David Kriegman, "Eigenfaces versus Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 19, no. 7, 1997, pp. 711-720.
 H. Murase and S. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, 1995, pp. 5-24.
 J. Weng, N. Ahuja and T. Huang, "Learning recognition and segmentation of 3D objects from 2D images," Proceedings of the International Conference on Computer Vision, 1993, pp. 121-128.
 X. He and P. Niyogi, "Locality preserving projections," Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.

References
 Sundos A. Hameed Al_azawi, "Eyes recognition system using central moment features," Engineering & Technology Journal, vol. 29, no. 7, 2011, pp. 1400-1407.
 M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, 1962, pp. 179-187.
 Bob Bailey, "Moments in image processing," Nov. 2002, http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm
 K. Bonsor, "How Facial Recognition Systems Work," http://computer.howstuffworks.com/facial-recognition.htm
 Simon Haykin, Neural Networks: A Comprehensive Foundation, Pearson Education, 2003.
 J. A. Anderson, An Introduction to Neural Networks, Prentice Hall, 2003.
 Laurene Fausett, Fundamentals of Neural Networks, Pearson Education.

Acknowledgement
 At the very outset, we thank the Almighty for His gracious presence all through the days of this work.
 Due thanks to Mr. Kaustubh Bhattacharyya for his expertise and scholarly guidance; but for his masterly advice, this work would not have seen the light of the presentation.
 Heartfelt gratitude to the Don Bosco (DBCET) family, and especially the Department of ECE, for furnishing us with the necessary opportunities to explore our creative technical skills.
 Gratitude to the organizers of NCETACS 2012 and the chairpersons.
 We would like to place on record our sincere thanks and gratitude to all who were instrumental in the completion of this paper.
 Thanks, one and all.
