(IJCSIS) International Journal of Computer Science and Information Security, Vol. 11, No. 7, July 2013

Scale Invariant Feature Transform Based Multimodal Biometric System with Face and Finger: A Review
Shubhangi Sapkal, Govt. College of Engg., Aurangabad, India
Dr. R. R. Deshmukh, Dr. BAM University, Aurangabad, India

Abstract - Biometrics has long been known as a robust approach to person authentication. The face is one of the most acceptable biometrics because it is among the most common cues humans use in their visual interactions, and face images can be acquired nonintrusively. It is very challenging, however, to develop face recognition techniques that tolerate the effects of aging, facial expressions, and variations in the pose of the face with respect to the camera. No single biometric is expected to meet the requirements of all applications effectively; in other words, no single biometric is optimal. To reduce the limitations of individual biometrics, multimodal biometrics can be used. This paper deals with a biometric authentication system that fuses face and fingerprint images. The Scale Invariant Feature Transform (SIFT) is used to extract invariant features from the images. Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD) are commonly used feature extraction methods for face recognition; in this paper, the focus is on a prospective biometric method based on facial and fingerprint images using SIFT. SIFT feature generation transforms an image into a large collection of feature vectors, each of which is invariant to image translation, scaling, and rotation. SIFT is useful due to its distinctiveness, which enables correct keypoint matches between subjects; these features can be used to find distinctive objects in different images.

Keywords - SIFT, Biometric, Keypoints, Security, Person Identification, Scale invariant.

I. INTRODUCTION

In order to overcome the shortcomings of unimodal biometric systems, two or more biometric modalities can be fused. Biometric fusion consolidates the output of multiple biometric classifiers to render a decision about the identity of an individual. Early research in this area dealt with decision-level fusion; only a few papers have appeared on feature-level and sensor-level fusion, while score-level fusion has received considerable attention in the literature. Evidence in a multibiometric system can be integrated at several different levels: sensor level, feature level, match-score level, and decision level. Biometrics is a rapidly evolving technology that is widely used in forensics, such as criminal identification and prison security, and that has the potential to be used in a large range of civilian application areas. It is used in environments that require high accuracy, robust security, and solid customer service. Multimodal biometric systems overcome some of the limitations associated with unimodal biometric systems, such as noise in sensed data, intra-class variations, lack of distinctiveness, non-universality, and spoof attacks, by combining the data from different biometrics using an effective fusion rule, thus achieving higher accuracy and better performance. Fusion at the sensor and feature levels is expected to perform better than fusion at the other two levels because more information about the person's identity is available[1],[2]; it has been observed that a biometric system that integrates information at an earlier stage of processing is expected to provide more accurate results than one that integrates information at a later stage. Feature-level fusion is difficult, however, because feature vectors from different modalities are often incompatible, and an efficient fusion method is required. In this paper, a method is proposed for combining face and fingerprint biometrics using SIFT and obtaining score densities for the fused images. SIFT has been shown to perform very well in object recognition and other machine vision applications[3],[4], and has also been used for face recognition[3]. Soyel et al. proposed a discriminative scale invariant feature transform (DSIFT) for facial expression recognition[5], and Cheung et al. used SIFT in medical imaging[6]. To ensure robustness, face recognition algorithms have often worked in conjunction with fingerprint, iris, gait, and voice recognition systems. This has led to the creation of a new research area: multimodal or


http://sites.google.com/site/ijcsis/ ISSN 1947-5500


multibiometric systems. A salient challenge in fusing biometric algorithms or systems is to devise efficient and robust fusion methods[7]. Faisal et al. showed that face recognition performance improves more with data-level fusion than with score-level fusion[8]. Section 2 of this paper deals with related work in this area. Section 3 discusses the multibiometric system. Section 4 describes the proposed method of feature extraction by SIFT. Sections 5 and 6 deal with image fusion and the conclusion, respectively.

II. RELATED WORK

In [9], SIFT descriptors are applied to 2-D matrices for 3-D face recognition. Rattani et al.[10] used feature-level fusion of face and fingerprint, extracting SIFT features for the face and minutiae points for the fingerprint. In [11], the unsupervised discriminant projection (UDP) technique is used to identify a person using face and palm biometrics. Zhang et al.[3] presented an application of the SIFT approach to face recognition and proposed a new method based on SIFT and a Support Vector Machine (SVM) for the face recognition problem. In [12], the authors worked on extracting distinctive invariant image features that can be used to find correspondences between different views of an object or a scene. SIFT has also been used in an automatic computer-assisted diagnostic system for renal cell carcinoma subtype classification[13]. Wang et al.[14] present a method for face-based human verification using PCA. Score-level fusion of face and voice is performed in [15],[16]. Asha et al.[17] proposed an authentication system with multiple biometrics to support various services in e-Learning, where user authentication is necessary.

III. MULTIBIOMETRIC SYSTEM

A multibiometric system reduces some of the limitations observed in unimodal biometric systems. Biometric methods of human identification have gained much attention recently, mainly because they easily deal with most problems of traditional identification. Biometric technology, the automated recognition of individuals using biological and behavioral traits, is a natural identity management tool that offers higher security and convenience than traditional methods of personal recognition. Among other applications, biometrics is used in access control systems, where it recognizes individuals already known to the system and allows them access to secured spaces. Many systems require authenticating a person's identity before granting access to resources. The key to multimodal biometrics is the fusion of data from various biometric modalities. In general, multimodal biometrics is based on the notion that the data sets obtained from different modalities are complementary to each other; an appropriate combination of such data sets can be more useful than the data from any single modality.

IV. SIFT FEATURE EXTRACTION

A feature detector seeks out points in an image that are structurally distinct, invariant to imaging conditions, and stable under geometric transformations. Lowe[18] presented a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. SIFT can be used for feature extraction from face and fingerprint images. The features are invariant to image scaling and rotation, and partially invariant to changes in illumination. The following are the major stages of computation used to generate the set of image features[18],[19]:

i) Scale-space extrema detection: The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation. The scale space of an image is defined as a function L(x, y, σ), produced by convolving a variable-scale Gaussian G(x, y, σ) with the input image I(x, y):

L(x, y, σ) = G(x, y, σ) * I(x, y)

where * is the convolution operation in x and y, and

G(x, y, σ) = (1 / 2πσ²) exp(−(x² + y²) / 2σ²)

The difference of two nearby scales separated by a constant multiplicative factor k gives

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)
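Stage (i) above can be sketched in a few lines of code. The following is a minimal illustration that assumes SciPy's `gaussian_filter` as the variable-scale Gaussian blur; it is not a reconstruction of Lowe's full octave-based implementation, and the parameter values (σ = 1.6, k = √2, four levels) are illustrative choices only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_images(image, sigma=1.6, k=2 ** 0.5, levels=4):
    """Compute L(x, y, sigma) at a series of scales and return the
    difference-of-Gaussian images D(x, y, sigma) = L(k*sigma) - L(sigma)."""
    image = image.astype(np.float64)
    # L(x, y, sigma) = G(x, y, sigma) * I(x, y): Gaussian blur at each scale
    L = [gaussian_filter(image, sigma * k ** i) for i in range(levels)]
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    return [L[i + 1] - L[i] for i in range(levels - 1)]

# Toy input: a single bright point; the DoG response concentrates around it.
img = np.zeros((16, 16))
img[8, 8] = 1.0
dogs = dog_images(img)
```

Candidate keypoints would then be taken as local extrema of each D(x, y, σ) across both space and scale, which is what the keypoint localization stage below refines.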
ii) Keypoint localization: At each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.

iii) Orientation assignment: One or more orientations are assigned to each keypoint location




based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location of each feature, thereby providing invariance to these transformations. For each image sample L(x, y) at this scale, the gradient magnitude m(x, y) and orientation θ(x, y) are precomputed using pixel differences:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )

θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )

iv) Keypoint descriptor: The local image gradients are measured at the selected scale in the region around each keypoint. They are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.

V. IMAGE FUSION

[Figure 1: Image fusion]

The data presented by multiple sensors can be integrated at various levels: data level, feature level, score level, and decision level. Two parameters, the fusion factor (FF) and fusion symmetry (FS), provide useful guidelines for selecting the best fusion algorithm[20]. Feature-level fusion is not generally adopted, as most commercial systems do not provide access to information at this level. It is also difficult to fuse data at the feature level because the feature spaces of different biometric traits are incompatible. At the decision level, only limited information is available for fusion[2]. Decision-level and score-level fusion are therefore the most commonly used levels of fusion.

VI. CONCLUSION

Nowadays, large-scale multibiometric systems have been either proposed or deployed: US-VISIT, the FBI database, the India UID card, etc. This shows that researchers' attention to multibiometrics is necessary. Multibiometric systems can address the problems of non-universality and spoofing in addition to improving matching performance. We discussed an approach that can be used for a multimodal biometric system. The major challenge in multimodal biometrics is finding the optimal way to combine different biometrics and algorithms; this challenge is expected to continue for the coming years.

REFERENCES

1. Gunawan Sugiarta, Riyanto Bambang, Hendrawan, and Suhardi, "Feature Level Fusion of Speech and Face Image based Person Identification", International Conference on Computer Engineering and Applications, IEEE, 2010, pp. 221-225.
2. Anil K. Jain and Arun Ross, "Multibiometric Systems", Communications of the ACM, Vol. 47, No. 1, January 2004, pp. 34-40.
3. Lichun Zhang, Junwei Chen, Yue Lu, and Patrick Wang, "Face Recognition Using Scale Invariant Feature Transform and Support Vector Machine", International Conference for Young Computer Scientists, IEEE, 2008.
4. Qiu Tu, Yiping Xu, and Manli Zhou, "Robust Vehicle Tracking based on Scale Invariant Feature Transform", IEEE International Conference on Information and Automation, Zhangjiajie, China, June 20-23, 2008.
5. H. Soyel and H. Demirel, "Facial expression recognition based on discriminative scale invariant feature transform", Electronics Letters, Vol. 46, No. 5, March 2010.
6. Warren Cheung and Ghassan Hamarneh, "N-SIFT: N-Dimensional Scale Invariant Feature Transform for Matching Medical Images", IEEE, 2007.
7. Rama Chellappa, Pawan Sinha, and P. Jonathon Phillips, "Face recognition by computers and humans", IEEE, 2010.
8. Faisal R. Al-Osaimi, Mohammed Bennamoun, and Ajmal Mian, "Spatially Optimized




Data-Level Fusion of Texture and Shape for Face Recognition", IEEE Transactions on Image Processing, Vol. 21, No. 2, February 2012.
9. Tolga Inan and Ugur Halici, "3-D face recognition with local shape descriptors", IEEE Transactions on Information Forensics and Security, Vol. 7, No. 2, April 2012, pp. 577-587.
10. A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, "Feature Level Fusion of Face and Fingerprint Biometrics", IEEE, 2007.
11. Jian Yang, David Zhang, Jing-yu Yang, and Ben Niu, "Globally Maximizing, Locally Minimizing: Unsupervised Discriminant Projection with Applications to Face and Palm Biometrics", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007.
12. Gul-e-Saman and S. Asif M. Gilani, "Object Recognition by Modified Scale Invariant Feature Transform", IEEE, 2008.
13. S. Hussain Raza, Yachna Sharma, Qaiser Chaudry, Andrew N. Young, and May D. Wang, "Automated Classification of Renal Cell Carcinoma Subtypes Using Scale Invariant Feature Transform", International Conference of the IEEE EMBS, Minneapolis, Minnesota, USA, September 2-6, 2009.
14. Yongjin Wang and K. N. Plataniotis, "Face Based Biometric Authentication with Changeable Privacy Preservable Templates", IEEE, 2007.
15. F. Alsaade, A. M. Ariyaeeinia, A. S. Malegaonkar, M. Pawlewski, and S. G. Pillay, "Enhancement of multimodal biometric segregation using unconstrained cohort normalization", Pattern Recognition, 2007, pp. 814-820.
16. Mingxing He, Shi-Jinn Horng, Pingzhi Fan, Ray-Shine Run, Rong-Jian Chen, Jui-Lin Lai, Muhammad Khurram Khan, and Kevin Octavius Sentosa, "Performance evaluation of score level fusion in multimodal biometric systems", Pattern Recognition, 2010, pp. 1789-1800.
17. S. Asha and C. Chellappan, "Authentication of E-Learners Using Multimodal Biometric Technology", IEEE, 2008.
18. David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 2004.
19. Leidy P. Dorado-Munoz, Miguel Velez-Reyes, Amit Mukherjee, and Badrinath Roysam, "A Vector SIFT Detector for Interest Point Detection in Hyperspectral Imagery", IEEE Transactions on Geoscience and Remote Sensing, Vol. 50, No. 11, November 2012, pp. 4521-4533.
20. Qiang Wang, Daren Yu, and Yi Shen, "An overview of image fusion metrics", I2MTC 2009 International Instrumentation and Measurement Technology Conference, Singapore, May 2009.


