NCETACS Final Kaveri


National Conference on Emerging Trends and Applications in Computer Science - 2012

Kaveri Chetia and Kaustubh Bhattacharyya
Don Bosco College of Engineering & Technology, Assam Don Bosco University, Guwahati

Plan of Presentation
 Introduction
 Basic Theory
 Related Works
 Problem Formulation
 Experimental Model
 Results and Performance Analysis
 Conclusion
 References
 Acknowledgement

Introduction
 Biometrics
 Identifies or verifies a person based on one's physical characteristics by matching the test images against the enrolled ones.
 Human Computer Interface (HCI)
 The demand for reliable personal identification in computerized access control has resulted in an increased interest in biometrics to replace the password and identification (ID) card.
 Artificial Modeling of Human Natural Ability
 Surveillance System

Three Key Steps / Sub-Tasks Involved
 1. Face Detection - detect the face in a given image
 2. Feature Extraction - extract the essential (necessary) features
 3. Identification or Verification

Face Detection, Identification, Verification
[Diagram: the face detection system passes the detected face to the FR system, which performs either face identification ("This is Mr. J" / "This is Ms. Y") or face verification (Yes/No).]

Basic Blocks of a Face Recognition System
 Pre-processing Unit: size normalization, illumination normalization, noise removal
 Feature Extraction: global features and local features
 Training and Testing
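As a rough illustration of the size-normalization step in the pre-processing unit, the sketch below resamples a grayscale face crop to a fixed size using nearest-neighbour interpolation. The slides do not name the interpolation method actually used, so this is an assumed, minimal stand-in.

```python
# Size normalization: nearest-neighbour resampling of a grayscale image
# held as a list of equal-length rows. Illustrative only -- the deck does
# not specify the interpolation method used in the real pre-processing unit.

def resize_nearest(img, out_h, out_w):
    """Resample img (list of rows of pixel values) to out_h x out_w."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

# Bring every face crop to a common size before feature extraction.
face = [[(r * 10 + c) % 256 for c in range(100)] for r in range(120)]
normalized = resize_nearest(face, 60, 50)
```

Every image then enters feature extraction with identical dimensions, which is what allows a fixed-length feature vector later on.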

Complications Involved in Face Recognition Systems
 Face recognition is influenced by many complications, such as
 the differences of facial expression;
 the variety of posture, size and angle;
 the light intensity and directions of imaging.
 Even for the same person, images taken in different surroundings may be unlike.

Problem Formulation
 An Artificial Neural Network is to be designed that recognizes and classifies a human face relative to the faces of the training database. Hence, the designed system classifies the face and verifies it as well.
 The system must be able to recognize that two images of the same person with different facial expressions are actually of the same person. The face recognition system therefore needs to solve the problem of differing facial expressions as well.
 Further issues the face recognition system needs to solve: makeup, posing positions, illumination conditions and the effect of noise.

Popular Approaches of Face Recognition Systems (FRS)
 (1) Holistic / global matching methods: these methods use the whole face region as the raw input to the recognition system, e.g. PCA, LDA, SVD, Fisherface etc.
 Accurate only for frontal views of faces
 Less accurate for a face tilt of more than 30 degrees
 (2) Feature-based (local) matching methods: typically, local features such as the eyes, nose and mouth are first segmented, and features are extracted from these, e.g. ANN, HMM.
 Require a large database for training
 High computational complexity
 (3) Hybrid methods: this system uses both local features and the whole face region to recognize a face; a machine recognition system should use both.
 The trade-off between computational complexity and accuracy is easily overcome
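The holistic methods above (PCA/eigenfaces in particular) project the whole face onto the leading eigenvectors of the training data's covariance. As a toy sketch of the core computation, the snippet below finds the dominant eigenvector of a 2x2 covariance matrix by power iteration; a real eigenface system does the same thing on image-sized covariance matrices, and the data here is made up.

```python
import math

def dominant_eigenvector(cov, iters=100):
    """Power iteration: repeatedly apply the covariance matrix and
    renormalize; the vector converges to the eigenvector belonging to
    the largest eigenvalue (the first 'eigenface' direction)."""
    v = [1.0, 0.0]
    for _ in range(iters):
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
        v = [w[0] / norm, w[1] / norm]
    return v

# Toy covariance of data spread mostly along the (1, 1) direction.
cov = [[2.0, 1.9], [1.9, 2.0]]
v = dominant_eigenvector(cov)   # converges to ~(0.7071, 0.7071)
```

Projecting each face onto the first few such directions is what lets holistic methods compress the whole image into a short feature vector.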

Artificial Neural Network
 An ANN is a massively parallel dynamic system; among ANNs, the MLP feed-forward network is best suited for face recognition.
 It resembles the brain as a processor in two respects:
 Knowledge is acquired by the network through a learning process.
 Inter-neuron connection strengths, known as synaptic weights, are used to store the knowledge.
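A minimal sketch of one forward pass through such an MLP, using the log-sigmoid activation that appears later in the deck; the weights here are made-up placeholders, standing in for the "synaptic weights" that store the network's knowledge after training.

```python
import math

def logsig(x):
    """Log-sigmoid activation, one of the two functions used in the deck."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """One forward pass of a tiny feed-forward MLP. All learned knowledge
    lives in the weight matrices w_hidden and w_out."""
    h = [logsig(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return [logsig(sum(wi * hi for wi, hi in zip(row, h))) for row in w_out]

# 3 inputs -> 2 hidden neurons -> 1 output, with arbitrary weights.
y = forward([0.5, -0.2, 0.1],
            [[0.4, -0.6, 0.2], [0.1, 0.8, -0.3]],
            [[0.7, -0.5]])
```

Backpropagation training, used later in the deck, adjusts exactly these weight matrices to reduce the output error.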

Basic Blocks of the FRS
 Acquisition of the database
 Pre-processing: size normalization, intensity and illumination normalization
 For local feature extraction: segmentation of the landmark segments of the face, followed by size normalization
 For global feature extraction: face normalization
 Feature extraction
 Training the neural network
 Simulation of the test image

Experimental Model - 1. Database Images
 No. of persons: 8
 No. of images per person: 25
 Total no. of training images: 160 (20 per person)
 Variation in the acquired database images: illumination, pose, facial expression, distance from the camera

Experimental Model - 2. Pre-processing: Normalization

Experimental Model - 3. Normalization and Segmentation
 An individual face, the normalized face of the database faces, and the face resulting from the subtraction of the individual face from the mean face
 Segmentation of the landmark parts of the face for the geometrical approach

Experimental Model - 4. Dimensions of the Segmented Segments

Facial Region | Size in % of the Full Face (M x N) | Size in Pixels (M x N)
Right Eye     | 0.14 x 0.25                        | 12 x 19
Left Eye      | 0.14 x 0.25                        | 12 x 19
Nose          | 0.21 x 0.26                        | 18 x 20
Mouth         | 0.16 x 0.38                        | 14 x 28

[Figure: the segmented face]

Experimental Model - 4. Feature Extraction: Length of the Extracted Features

Segment   | Central Moment | Eigen Vector | Std. Dev. (Row) | Std. Dev. (Col.) | Total Length
Right Eye | 19             | 12 x 4       | 12              | 19               | 98
Left Eye  | 19             | 12 x 4       | 12              | 19               | 98
Nose      | 20             | 18 x 4       | 18              | 20               | 130
Mouth     | 28             | 14 x 4       | 14              | 28               | 126
Global    |                | 75 x 4 = 300 |                 |                  | 300

Total feature length: 98 + 98 + 130 + 126 + 300 = 752
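The per-segment totals in the table follow directly from its columns: each local segment contributes its central moments, four eigenvectors, and its row and column standard deviations. A quick arithmetic check:

```python
# Recomputing the feature-vector lengths from the table above.
segments = {
    # name: (central_moments, eigenvector_length, row_stds, col_stds)
    "right_eye": (19, 12, 12, 19),
    "left_eye":  (19, 12, 12, 19),
    "nose":      (20, 18, 18, 20),
    "mouth":     (28, 14, 14, 28),
}

def segment_length(cm, eig, rows, cols):
    return cm + 4 * eig + rows + cols   # four eigenvectors per segment

local = {name: segment_length(*dims) for name, dims in segments.items()}
total = sum(local.values()) + 75 * 4    # global features: 75 x 4 = 300
```

The grand total of 752 is exactly the number of input neurons specified for the network on the next slide.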

Experimental Model - Artificial Neural Network Specifications
Type: Feed-Forward Backpropagation Network

Parameter                             | Specification
Number of layers                      | 3 (input layer, hidden layer, output layer)
Number of input units                 | 1 feature matrix consisting of the feature vectors of eight persons
Number of output units                | 1 binary-encoded matrix consisting of eight target vectors for eight persons
Number of neurons in the input layer  | 752
Number of neurons in the output layer | 08 (to recognize 8 persons)
Number of neurons in the hidden layer | (752 + 8) * (2/3) = 506
Number of iterations                  | 1000, 1500, 2000
Learning rate                         | 0.5, 0.6, 0.7, 0.8, 0.9
Momentum                              | 0.5, 0.6, 0.7
Activation functions                  | Log-Sigmoid and Tan-Sigmoid
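The hidden-layer size uses a common rule of thumb: two-thirds of the sum of input and output neurons, truncated to an integer. Reproducing the slide's figure:

```python
# Two-thirds rule of thumb for sizing the hidden layer, as on the slide:
# hidden = (inputs + outputs) * 2/3, truncated to an integer.
def hidden_neurons(n_inputs, n_outputs):
    return int((n_inputs + n_outputs) * 2 / 3)

n_hidden = hidden_neurons(752, 8)   # 752 features in, 8 persons out
```

(752 + 8) * 2/3 = 506.67, truncated to the 506 hidden neurons given in the specification table.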

Experimental Model - Binary Encoded Output
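The slide shows the binary-encoded target matrix without spelling out the code used. With eight output neurons for eight persons, a natural choice is one-hot encoding: one active neuron per person, decoded by taking the most active output. The sketch below assumes that scheme.

```python
# One-hot targets for eight persons -- an assumed encoding consistent with
# "8 output neurons, eight target vectors for eight persons".
def one_hot(person_idx, n_persons=8):
    return [1 if i == person_idx else 0 for i in range(n_persons)]

def decode(outputs):
    """Index of the most strongly activated output neuron."""
    return max(range(len(outputs)), key=lambda i: outputs[i])

targets = [one_hot(p) for p in range(8)]     # the 8 x 8 target matrix
# A trained network's (made-up) outputs for a test face of person 2:
person = decode([0.1, 0.05, 0.92, 0.2, 0.1, 0.0, 0.3, 0.1])
```

Decoding by maximum activation tolerates outputs that are close to, but not exactly, 0 or 1.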

Results and Performance Analysis - Convergence of the Mean Squared Error for Different Numbers of Iterations and Different Activation Functions

Activation Function | Activation Function | MSE          | MSE          | MSE
(Hidden Layer)      | (Output Layer)      | (1000 iter.) | (1500 iter.) | (2000 iter.)
Tan-Sigmoid         | Tan-Sigmoid         | 1x10^-4      | 1.2x10^-4    | 1.4x10^-4
Log-Sigmoid         | Log-Sigmoid         | 1x10^-7      | 1x10^-12     | 1x10^-6
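The convergence measure tabulated here is the mean squared error: the average of the squared differences between the target vector and the network's actual outputs. A minimal computation (with made-up outputs):

```python
# Mean squared error, the convergence measure used in the tables.
def mse(targets, outputs):
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

# Target: person 0 (one-hot); outputs: an imperfect network response.
err = mse([1, 0, 0, 0], [0.9, 0.1, 0.0, 0.2])   # -> 0.015
```

Training stops once this error falls below a chosen threshold or the iteration budget is exhausted.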

Results and Performance Analysis - MSE Convergence Curve

Results and Performance Analysis - Convergence of the Mean Squared Error for Different Numbers of Iterations and Different Learning Rates

[Table: MSE after 1000, 1500 and 2000 iterations for learning-rate and momentum settings between 0.5 and 0.9; the tabulated MSE values range from about 1.4x10^-2 down to about 1x10^-12.]

Results and Performance Analysis - Comparison of the Recognition Rates of the Proposed System with Other Systems, with Variation in the Number of Training Images

Performance | No. of Training Images | Analytical System | Holistic System | Hybrid System
CRR (%)     | 5                      | 57                | 62              | 70
CRR (%)     | 10                     | 67                | 76              | 83
CRR (%)     | 20                     | 73                | 80              | 86
FAR (%)     | 5                      | 27                | 22              | 18
FAR (%)     | 10                     | 20                | 15              | 10
FAR (%)     | 20                     | 15                | 12              | 08
FRR (%)     | 5                      | 16                | 16              | 12
FRR (%)     | 10                     | 13                | 09              | 07
FRR (%)     | 20                     | 12                | 08              | 06

False Acceptance Rate (FAR): a measure of the likelihood that the proposed system will incorrectly accept an access attempt by an unauthorized user.
False Rejection Rate (FRR): a measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user.
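Following the FAR/FRR definitions above, these rates come from simple ratios over genuine and impostor trials. The trial counts below are made up for illustration, but are chosen so the result reproduces the hybrid system's ideal-image row (86 / 8 / 6).

```python
# CRR, FAR and FRR from raw trial counts, per the definitions on the slide.
# The counts here are illustrative, not the experiment's actual tallies.
def rates(correct, genuine_total, false_accepts, impostor_total,
          false_rejects):
    crr = 100.0 * correct / genuine_total        # correctly recognized
    far = 100.0 * false_accepts / impostor_total  # impostors accepted
    frr = 100.0 * false_rejects / genuine_total   # genuine users rejected
    return crr, far, frr

crr, far, frr = rates(correct=43, genuine_total=50,
                      false_accepts=4, impostor_total=50,
                      false_rejects=3)            # -> 86.0, 8.0, 6.0
```

Note that FAR needs impostor attempts in the test set, while CRR and FRR are computed over genuine attempts.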

Results and Performance Analysis - Comparison of the Correct Recognition Rates of the Analytical, Holistic and Proposed Hybrid Systems

Results and Performance Analysis - Performance Parameters of the Proposed Hybrid System

Results and Performance Analysis - Plot of the CRR of the System Affected by Gaussian Noise

Results and Performance Analysis - Comparison of the Recognition Rates of the Proposed System with the Database Affected by Various Types of Noise

Database with Noise  | CRR (%) | FAR (%) | FRR (%)
Ideal Images         | 86      | 8       | 6
Gaussian Noise       | 78      | 10      | 12
Salt & Pepper Noise  | 73      | 15      | 12
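The deck does not give the noise settings used to corrupt the test database, so the sketch below shows one plausible way such a salt-and-pepper test set could be generated: a fixed fraction of pixels is forced to pure black or pure white.

```python
import random

# Salt & pepper corruption at a fixed density -- an assumed generation
# scheme; the slides do not state the densities actually used.
def salt_and_pepper(pixels, density, seed=0):
    rng = random.Random(seed)                    # seeded for repeatability
    noisy = list(pixels)
    k = int(density * len(pixels))               # exact number of corrupted pixels
    for idx in rng.sample(range(len(pixels)), k):
        noisy[idx] = rng.choice((0, 255))        # pepper (0) or salt (255)
    return noisy

img = [128] * 1000                               # flat mid-grey test "image"
noisy = salt_and_pepper(img, density=0.05)
changed = sum(1 for a, b in zip(img, noisy) if a != b)
```

Re-running the trained recognizer on such corrupted copies of the test set is what produces the degraded CRR/FAR/FRR figures in the table above.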

Future Prospects
 The designed system has been tested only on the acquired database images. It could be tested on the universally accepted databases such as ORL and Yale.
 The system is trained for only 8 persons. The system's accuracy could be validated by training it to recognize more persons.
 A more effective system that is robust to the presence of noise could be designed.
 The most essential features could be extracted by different mathematical modeling, thus reducing the input features to the neural network.
 The global features could be extracted by methods other than PCA, and the recognition rate estimated.

5.G. Sneddon.” Proceedings of the International Conference on Computer Engineering and System. 262-264. gender and age classification. Eason. K. vol. .” International Journal of Computer Science Issues.VIII. “Expression and illumination invariant preprocessing technique for face recognition. 21-25. no. vol. pp. 2010. “Biometric security technology. 115-120. 2000. Trans. pp.  K.” International Journal on Computer Science and Engineering. no. “Dual transform based feature extraction for face recognition. “Intelligent face recognition: local versus global pattern averaging”. Abdel-Hay and H. 956 – 961. M. pp. Fahmy. Springer-Verlag. B. B. Khalil. 2011. N. 2010. I. no. S. pp.” International Journal of Computer Applications. I. 2006. 529–551. 2008. and I. Patnaik. Raja. K. “Feature extraction based face recognition. Ranawade.” Phil. London. Venugopal and L. Noble. Raja. April 1955. A247. M. “Face recognition and verification using artificial neural network. 14-23. vol. vol. Lecture Notes in Artificial Intelligence. 24652468.  Albert Montillo and Haibin Ling. pp. II. “On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Ramesha . “Age regression from faces using random forests.  K. Ramesha and K. Roy.” Encyclopedia of Artificial Intelligence. R.  S. pp. Soc.  Abbas.References  Marcos Faundez-Zanuy. 01S. (references)  Khashman. 4304. M. 14. 59-64. pp. 2009. pp. B.” Proceedings of the IEEE International Conference on Image Processing.

 Bernd Heisele, Purdy Ho, and Tomaso Poggio, "Face recognition with support vector machines: global versus component-based approach," in Proc. 8th International Conference on Computer Vision, pp. 688-694, 2001.
 Peter Belhumeur, J. Hespanha, and David Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 19, no. 7, pp. 711-720, 1997.
 M. Turk and A. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
 J. Weng, N. Ahuja, and T. S. Huang, "Learning recognition and segmentation of 3D objects from 2D images," Proceedings of the International Conference on Computer Vision, pp. 121-128, 1993.
 H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. 14, pp. 5-24, 1995.
 X. He and P. Niyogi, "Locality preserving projections," Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.

 Sundos A. Hameed Al_azawi, "Eyes recognition system using central moment features," Engineering & Technology Journal, vol. 29, pp. 1400-1407, 2011.
 M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, pp. 179-187, 1962.
 Simon Haykin, Neural Networks, Pearson Education, 2003.
 J. Anderson, An Introduction to Neural Networks, Prentice Hall, 2002.
 Fundamentals of Neural Networks, 2003.
 Bob Bailey, "Moments in image processing," Nov. 2002, http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm.
 K. Bonsor, "How facial recognition systems work," http://computer.howstuffworks.com/facial-recognition.htm.

Acknowledgement
 At the very outset, we thank the Almighty for his gracious presence all through the days of this work.
 We would like to place on record our sincere thanks and gratitude to all who were instrumental in the completion of this paper work.
 Due thanks to Mr. Kaustubh Bhattacharyya for his expertise and scholarly guidance. But for his masterly advice, this work would not have seen the light of the presentation.
 Heartfelt gratitude to the Don Bosco (DBCET) family, and especially the department of ECE, for furnishing us with the necessary opportunities to explore our creative technical skills.
 Gratitude to the organizers of NCETACS 2012 and the chairpersons.

Thanks one and all.
