Available online at www.sciencedirect.com

Procedia Computer Science 2 (2010) 164–172
www.elsevier.com/locate/procedia

Palmprint Authentication using Modified Legendre Moments


Lakshmi Deepika.C*, Dr.Kandaswamy A, Vimal C, Satish B

Department of Biomedical Engineering, PSG College of Technology, Coimbatore-4

Abstract

The resolution of the acquired biometric image is considered one of the primary factors affecting the performance of a biometric authentication system. For low-quality images, a powerful feature extraction method is essential. Orthogonal Legendre moments are used for feature extraction in several pattern recognition and image processing applications. Translation- and scale-invariant Legendre moments can be obtained directly from the Legendre polynomial; however, this approach does not yield a rotation-invariant form. In this paper we propose a palmprint verification system in which the 2D Legendre moments are represented as a linear combination of geometric moment invariants, which are invariant to translation, non-uniform scaling and rotation. The modified Legendre moments are used for feature extraction, and a weighted fusion technique is used to fuse the matching scores of the sub-images. The results obtained using a Bayes classifier indicate an impressive prediction accuracy of 98%, validating the choice of low-order Legendre moments for effective palmprint verification.
© 2010 Published by Elsevier Ltd. Open access under CC BY-NC-ND license.
Keywords: Palmprint; Region of Interest (ROI); Image Enhancement; Legendre moments; Bayesian Belief Networks

1. Introduction

Palmprint has emerged as one of the promising technologies for accurate identification of individuals. Palmprint identification is based on the lines, ridges and wrinkles of the palm, which are found to be unique even between identical twins. Further, the palmprint is a relatively large biometric identifier compared to other popular biometric traits such as the fingerprint and iris. This offers two advantages: first, the chance of duplicating the original palmprint data to gain unauthorised access is low, owing to the difficulty of faking the multitude of details available in a palmprint; secondly, the uniqueness of the palmprint template created is better, since the number of features available in a palmprint is larger.
The geometrical features considered in a palmprint for template creation are wrinkles, ridges and principal lines. In the literature, many approaches have been proposed for palmprint verification/identification based on the structural details available in the palm image. Zhang et al. [1] extracted palm lines using twelve templates. Han et al. [2] proposed principal line extraction using Sobel and morphological operations. Duta et al. [3] binarised the palm images directly to obtain the lines by setting a threshold. Kong et al. [4] used a texture-based approach employing the 2D Gabor filter, one of the most commonly used filters. Zhang et al. [5] used the Fourier transform to extract frequency-domain features of the palmprint and obtained improved results. Hu et al. [6] used Principal Component Analysis (PCA), PCA with Linear Discriminant Analysis (LDA), PCA with Locality Preserving Projection (LPP) and 2DLPP for palmprint recognition. Aykut et al. [7] used the discrete wavelet transform followed by kernel PCA (KPCA) for palmprint recognition and achieved improved performance compared with the Fourier representation. Shang et al. [8] proposed the use of FastICA on palmprint images. Li et al. [9] introduced the concept of texture energy to define both global and local features of a palmprint and used four masks to highlight the distribution of four directional lines. However, images obtained in the visible spectrum depend strongly on the orientation of the presented biometric and on the illumination conditions for sustaining their similarity across different acquisitions. The uniqueness of the palmprint image is largely decided by the wrinkles rather than the principal lines, and images obtained from a low-resolution camera may not contribute well to the structural information in the image. Hence, statistical feature-based approaches can be a good alternative for low-quality images. In this direction, a new approach based on a Bayes net classifier and statistical features is presented in this paper. The lower-order Legendre moments of the sub-images are used as prominent statistical features. These intramodal palmprint features can characterise palmprints effectively with high accuracy at a very low computational burden.

* Corresponding author. Tel.: +091-9952299454.
E-mail address: cldeepika@yahoo.com

1877-0509 © 2010 Published by Elsevier Ltd. Open access under CC BY-NC-ND license.
doi:10.1016/j.procs.2010.11.021

2. Proposed System

In the proposed system, the entire palmprint input image cannot be used for feature extraction, since it contains much unwanted information; hence the required region of interest (ROI) must first be extracted. The extracted ROI contains many prominent features, and image enhancement is performed so that these features can be used effectively. The modified Legendre moments, which are invariant to translation, scaling and rotation, are obtained from the enhanced image and are used as the unique features of this system. These features are then classified using a Bayes net classifier.

Fig. 1. Proposed System

2.1. Datasource

The palmprint images used to test the system are obtained from the publicly available PolyU Palmprint database [10]. The PolyU palmprint database contains 7752 grayscale images corresponding to 386 users. Each palm contributes twenty samples, of which ten are taken in a first session and the other ten in a second session. The average interval between the first and second sessions was two months.

2.2. ROI Extraction

The accuracy of any biometric system depends on the ROI extraction. The extracted ROI must be invariant to translation, scaling and rotation. In the proposed system the centre part of the palmprint image is extracted, as it contains prominent features such as wrinkles, ridges and principal lines. The following steps are involved in ROI extraction (a minimal sketch of these steps is given after the list):

• Apply a low-pass filter, such as Gaussian smoothing, to the original image.

• Convert the filtered image to a binary image using the threshold $t_f$, as shown in Fig. 2(b). Mathematically this transform can be represented by

$$
B(x, y) = \begin{cases} 1, & O(x, y) \ge t_f \\ 0, & O(x, y) < t_f \end{cases} \qquad (1)
$$

where B(x, y) and O(x, y) are the binary image and the original image, respectively.

• Compute the centroid of the palmprint and locate the point ‘I’ between the middle finger and ring finger, as shown in Fig. 2(c).

• Take a 3×3 8-connectivity matrix by placing the pointer ‘P’ of the matrix at ‘I’ [11] and trace the corner points ‘k1’ and ‘k2’, as shown in Fig. 2(d).

• Locate the midpoint ‘mid’ between ‘k1’ and ‘k2’, as shown in Fig. 2(e).

• Move from the point ‘mid’ a fixed number of pixels toward the centre of the palm, position a fixed-size square there, and crop the image to extract the sub-image (ROI), as shown in Fig. 2(f).
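The following is a minimal sketch of these ROI extraction steps using OpenCV and NumPy; the threshold value, the way the key point ‘mid’ is located, and the offset and crop size are illustrative assumptions rather than the exact parameters of the original implementation.

```python
import cv2
import numpy as np

def extract_roi(palm_gray, crop_size=128, offset=60, threshold=30):
    """Crude ROI extraction sketch: smooth, binarise, locate the gap
    between the middle and ring finger, then crop a fixed-size square."""
    # Step 1: low-pass (Gaussian) filtering of the original image
    smoothed = cv2.GaussianBlur(palm_gray, (5, 5), 0)

    # Step 2: binarise with a fixed threshold t_f (eq. 1)
    _, binary = cv2.threshold(smoothed, threshold, 1, cv2.THRESH_BINARY)

    # Step 3: centroid of the binary palm region
    m = cv2.moments(binary, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Steps 4-5: the paper traces the corner points k1, k2 with an
    # 8-connectivity matrix; here we only approximate the midpoint 'mid'
    # by the topmost palm pixel in the centroid column, for illustration.
    column = binary[:, int(cx)]
    mid_y = int(np.argmax(column > 0))

    # Step 6: move a fixed number of pixels towards the palm centre and
    # crop a fixed-size square as the ROI.
    top = mid_y + offset
    left = int(cx) - crop_size // 2
    return palm_gray[top:top + crop_size, left:left + crop_size]
```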

[Figure: six panels (a)–(f) illustrating the ROI extraction stages, with the corner points k1, k2 and the midpoint ‘mid’ marked.]

Fig. 2. ROI Extraction

2.3. Image Enhancement

A wavelet-based normalization method is applied to the ROI-extracted palmprint images to normalize illumination [12]. In this method the contrast and the edges of the palmprint images are enhanced simultaneously using the wavelet transform. Wavelet-based image analysis decomposes an image into approximation coefficients and detail coefficients. Contrast enhancement is performed by histogram equalization of the approximation coefficients, while edge enhancement is achieved by multiplying the detail coefficients by a scalar (>1). The normalized image is obtained from the modified coefficients by the inverse wavelet transform. In the proposed model we used a first-level db10 wavelet decomposition and a scalar value of 1.5 for the detail coefficients. A minimal sketch of this enhancement step is given below.
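The sketch below assumes PyWavelets is available; the histogram equalization helper is a simple illustrative implementation, not the exact routine of [12].

```python
import numpy as np
import pywt

def hist_equalize(channel):
    """Simple histogram equalization of a 2D float array (illustrative)."""
    flat = channel.ravel()
    hist, bin_edges = np.histogram(flat, bins=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    equalized = np.interp(flat, bin_edges[:-1], cdf)
    lo, hi = flat.min(), flat.max()                     # keep original range
    return (lo + equalized * (hi - lo)).reshape(channel.shape)

def enhance_roi(roi, wavelet="db10", detail_gain=1.5):
    """Wavelet-based illumination normalization: equalize the approximation
    band, amplify the detail bands, then reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(roi.astype(np.float64), wavelet)
    cA = hist_equalize(cA)                               # contrast enhancement
    cH, cV, cD = detail_gain * cH, detail_gain * cV, detail_gain * cD  # edges
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```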

[Block diagram: ROI image → wavelet transform → modification of the approximation and detail coefficients → reconstruction → enhanced image.]
Fig. 3. Image Enhancement

2.4. Feature Extraction

The main objective of the feature extraction process is to derive a set of parameters that best characterize the palmprint image. These parameters, in other words, should contain maximum information about the palmprint image. Hence the selection of these parameters is an important criterion for proper classification. The lower-order Legendre moments are found to be prominent statistical features for classification.

2.4.1 Legendre Moments:

The two-dimensional Legendre moments of order (p + q) for an image intensity function f(x, y) are defined as:

 
$$
\lambda_{pq} = \frac{(2p+1)(2q+1)}{4} \int_{-1}^{1}\!\int_{-1}^{1} P_p(x)\, P_q(y)\, f(x, y)\, dx\, dy \qquad (2)
$$

where $P_p(x)$ is the $p$th-order Legendre polynomial defined as [13]:



$$
P_p(x) = \sum_{k=0}^{\lfloor p/2 \rfloor} a_{k,p}\, x^{p-2k} \qquad (3)
$$

where

$$
a_{k,p} = \frac{(-1)^k\,(2p-2k)!}{2^p\, k!\,(p-k)!\,(p-2k)!} \qquad (4)
$$

The Legendre polynomial coefficients are represented by $a_{k,p}$. The Legendre polynomial $P_p(x)$ obeys the following recursive relation:

$$
P_{p+1}(x) = \frac{2p+1}{p+1}\, x\, P_p(x) - \frac{p}{p+1}\, P_{p-1}(x) \qquad (5)
$$

with $P_0(x) = 1$, $P_1(x) = x$ and $p \ge 1$. The set of Legendre polynomials $\{P_p(x)\}$ forms a complete orthogonal basis set on the interval $[-1, 1]$. The orthogonality property is defined as:

$$
\int_{-1}^{1} P_p(x)\, P_q(x)\, dx = \frac{2}{2p+1}\,\delta_{pq} \qquad (6)
$$

A digital image of size M × N is an array of pixels. The centres of these pixels are the points $(x_i, y_j)$, and the image intensity function is defined only for this discrete set of points. Since the Legendre polynomials are defined only on the square $[-1,1]\times[-1,1]$, the digital image must be mapped into this square. The mapping transformations are defined as follows:

$$
x_i = -1 + \left(i - \tfrac{1}{2}\right)\Delta x \qquad (7\text{-}1)
$$

$$
y_j = -1 + \left(j - \tfrac{1}{2}\right)\Delta y \qquad (7\text{-}2)
$$

with $i = 1, 2, 3, \ldots, M$ and $j = 1, 2, 3, \ldots, N$, where $\Delta x = x_{i+1} - x_i$ and $\Delta y = y_{j+1} - y_j$ are the sampling intervals in the x- and y-directions respectively. In the digital image processing literature, the intervals $\Delta x$ and $\Delta y$ are fixed at the constant values $\Delta x = 2/M$ and $\Delta y = 2/N$ respectively. For this discrete-space version of the image, (2) is usually approximated by the zeroth-order approximation (ZOA) as follows:

$$
\lambda_{pq} \approx \frac{(2p+1)(2q+1)}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} P_p(x_i)\, P_q(y_j)\, f(x_i, y_j) \qquad (8)
$$
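As an illustration, a minimal NumPy/SciPy sketch of the ZOA computation in (8) is given below; it is a direct implementation of the formula, not the authors' optimised code.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_moments(image, max_order):
    """Zeroth-order approximation (ZOA) of the 2D Legendre moments
    lambda_pq of a grayscale image, for all p + q <= max_order."""
    M, N = image.shape
    # pixel-centre coordinates mapped into [-1, 1] (eqs. 7-1, 7-2)
    x = -1.0 + (np.arange(1, M + 1) - 0.5) * (2.0 / M)
    y = -1.0 + (np.arange(1, N + 1) - 0.5) * (2.0 / N)

    moments = {}
    for p in range(max_order + 1):
        Pp = eval_legendre(p, x)                  # P_p evaluated at x_i
        for q in range(max_order + 1 - p):
            Pq = eval_legendre(q, y)              # P_q evaluated at y_j
            norm = (2 * p + 1) * (2 * q + 1) / (M * N)
            # double sum of P_p(x_i) P_q(y_j) f(x_i, y_j)  (eq. 8)
            moments[(p, q)] = norm * (Pp @ image @ Pq)
    return moments
```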

The lower-order Legendre moments are computationally less complex and invariant to translation and scaling, but they vary with rotation [14]. This is a significant disadvantage of the Legendre moment invariants, and it is solved by representing the 2D Legendre moments as a linear combination of geometric moment invariants or affine moment invariants. Geometric moment invariants are invariant to translation, non-uniform scaling and rotation.

Translation invariance is achieved by shifting the image so that the image centroid $(\bar{x}, \bar{y})$ coincides with the origin of the coordinate system. The centroid of the 2D image is:

$$
\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}} \qquad (9)
$$

The central moments are defined as

$$
\mu_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x - \bar{x})^p\, (y - \bar{y})^q\, f(x, y)\, dx\, dy \qquad (10)
$$

where the $(p+q)$-order geometric moments are defined as:

$$
m_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x^p\, y^q\, f(x, y)\, dx\, dy \qquad (11)
$$

By using the binomial theorem, the central geometric moments can simply be represented as:

$$
\mu_{pq} = \sum_{k=0}^{p} \sum_{j=0}^{q} \binom{p}{k}\binom{q}{j} (-\bar{x})^{p-k}\, (-\bar{y})^{q-j}\, m_{kj} \qquad (12)
$$

According to (12), exact computation of the geometric moments [12] results in exact values for the central moments.

With $S$ the scaling factor in both the x- and y-directions, the central moments after a uniform scaling are defined as:

$$
\mu'_{pq} = S^{\,p+q+2}\, \mu_{pq} \qquad (13)
$$

Scaling invariance can be achieved through cancellation of the scaling factors. By setting $\mu'_{00}$ equal to unity, the scale-normalized moments are:

$$
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \qquad \gamma = \frac{p+q+2}{2} \qquad (14)
$$
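A minimal NumPy sketch of the translation and scale normalization in (9)–(14) is shown below; the helper names are ours, not from the paper.

```python
import numpy as np

def normalized_central_moments(image, max_order):
    """Scale-normalized central moments eta_pq (eqs. 9-14) of a 2D image."""
    M, N = image.shape
    xs, ys = np.arange(M, dtype=np.float64), np.arange(N, dtype=np.float64)

    # geometric moments m_pq (eq. 11) via separable sums
    def m(p, q):
        return (xs ** p) @ image @ (ys ** q)

    # centroid (eq. 9)
    x_bar, y_bar = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)

    eta = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            # central moment mu_pq (eq. 10), computed directly
            mu = ((xs - x_bar) ** p) @ image @ ((ys - y_bar) ** q)
            gamma = (p + q + 2) / 2.0          # normalisation exponent (eq. 14)
            eta[(p, q)] = mu / (m(0, 0) ** gamma)
    return eta
```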


Rotation through an angle $\theta$ about the coordinate origin is represented by the following form:

$$
M_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x\cos\theta + y\sin\theta)^p\, (y\cos\theta - x\sin\theta)^q\, f(x, y)\, dx\, dy \qquad (15)
$$

By using the binomial theorem with (15), the moments of the normalized image with respect to rotation can be written as follows:

$$
M_{pq} = \sum_{k=0}^{p} \sum_{j=0}^{q} \binom{p}{k}\binom{q}{j} (-1)^{j}\, (\sin\theta)^{k+j}\, (\cos\theta)^{p+q-k-j}\, \eta_{\,p-k+j,\; q-j+k} \qquad (16)
$$

Rotation normalization can be achieved by the major principal axis method. The principal axis moments are obtained by rotating the axes of the central moments until $M_{11}$ is zero.
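The principal-axis angle can be obtained in the standard way from the second-order central moments; the following sketch (using scipy.ndimage for the rotation) is an illustrative assumption of how rotation normalization could be carried out, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def rotation_normalize(image):
    """Rotate the image so that its principal axis is aligned with the
    x-axis, i.e. so that the rotated mu_11 is (approximately) zero."""
    M, N = image.shape
    ys, xs = np.mgrid[0:M, 0:N].astype(np.float64)
    m00 = image.sum()
    x_bar, y_bar = (xs * image).sum() / m00, (ys * image).sum() / m00

    # second-order central moments
    mu11 = ((xs - x_bar) * (ys - y_bar) * image).sum()
    mu20 = ((xs - x_bar) ** 2 * image).sum()
    mu02 = ((ys - y_bar) ** 2 * image).sum()

    # principal-axis angle: theta = 0.5 * atan2(2*mu11, mu20 - mu02)
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    # rotate by the principal-axis angle; the sign may need flipping
    # depending on the image coordinate convention used
    return ndimage.rotate(image, np.degrees(theta), reshape=False, order=1)
```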

2.4.1.1 Relation with Geometric Moments


 
The 2D Legendre moments can be expressed in terms of the geometric moment invariants as:

$$
\lambda_{pq} = \frac{(2p+1)(2q+1)}{4} \sum_{i=0}^{\lfloor p/2 \rfloor} \sum_{j=0}^{\lfloor q/2 \rfloor} a_{i,p}\, a_{j,q}\, M_{\,p-2i,\; q-2j} \qquad (17)
$$

The geometric moment invariants $M_{p-2i,\,q-2j}$ are defined by using (10). Based on (4), the expensive factorial computations are avoided by implementing the following recurrence relations:

$$
a_{0,p} = \frac{2p-1}{p}\, a_{0,p-1} \qquad (18\text{-}1)
$$


$$
a_{k,p} = \frac{2p-2k-1}{p-2k}\, a_{k,p-1} \qquad (18\text{-}2)
$$

$$
a_{k,p} = -\frac{(p-2k+1)(p-2k+2)}{2k\,(2p-2k+1)}\, a_{k-1,p} \qquad (18\text{-}3)
$$

with $a_{0,0} = 1$ and $k \ge 1$.
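A small sketch of these recurrences, which fills a coefficient table a[(k, p)] for use in (17); the table layout is an implementation choice of ours.

```python
def legendre_coefficients(max_order):
    """Coefficients a_{k,p} of the Legendre polynomials, computed with the
    recurrences (18-1) and (18-3) instead of explicit factorials (eq. 4).
    (18-2) would be used when stepping p for a fixed k; it is not needed here."""
    a = {(0, 0): 1.0}
    for p in range(1, max_order + 1):
        a[(0, p)] = (2 * p - 1) / p * a[(0, p - 1)]                     # (18-1)
        for k in range(1, p // 2 + 1):
            a[(k, p)] = -((p - 2 * k + 1) * (p - 2 * k + 2)) \
                        / (2 * k * (2 * p - 2 * k + 1)) * a[(k - 1, p)]  # (18-3)
    return a
```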

To reduce the computational complexity, (17) can be rewritten in a separable form as follows:


$$
\lambda_{pq} = \frac{(2p+1)(2q+1)}{4} \sum_{i=0}^{\lfloor p/2 \rfloor} a_{i,p}\, Y_{q,i} \qquad (19\text{-}1)
$$

$$
Y_{q,i} = \sum_{j=0}^{\lfloor q/2 \rfloor} a_{j,q}\, M_{\,p-2i,\; q-2j} \qquad (19\text{-}2)
$$
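Putting the pieces together, the sketch below evaluates (19-1)–(19-2) from a table of normalized geometric moment invariants; `geometric_invariants` stands in for whatever produces the $M_{p-2i,q-2j}$ values (for example the $\eta_{pq}$ of (14) after rotation normalization) and is an assumption of ours.

```python
def modified_legendre_moment(p, q, geometric_invariants, coeffs):
    """Modified (invariant) Legendre moment lambda_pq via the separable
    form (19-1)-(19-2).  geometric_invariants[(r, s)] supplies M_rs and
    coeffs is the a_{k,p} table from legendre_coefficients()."""
    lam = 0.0
    for i in range(p // 2 + 1):
        # inner sum Y_{q,i} (eq. 19-2)
        y_qi = sum(coeffs[(j, q)] * geometric_invariants[(p - 2 * i, q - 2 * j)]
                   for j in range(q // 2 + 1))
        lam += coeffs[(i, p)] * y_qi              # outer sum (eq. 19-1)
    return (2 * p + 1) * (2 * q + 1) / 4.0 * lam
```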

2.5. Classifier

A Bayes net, or Bayesian Belief Network (BBN), is a special type of probabilistic graphical model. A BBN is a directed acyclic graph consisting of three elements: nodes representing random variables, arcs representing probabilistic dependencies among those variables, and a Conditional Probability Table (CPT) for each variable [15]. The nodes can be either evidence variables or latent variables; an evidence variable is one with a known value (i.e. it is measured). Arcs specify the causal relations between variables [16,17]. Finally, each node has a CPT which gives the probabilities of the outcomes of its variable given the values of its parents.

The joint probability encoded by the network factorises over the conditional probabilities as

$$
P(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{Parents}(X_i)\right) \qquad (20)
$$
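As a toy illustration of (20), the sketch below evaluates the joint probability of a full assignment from per-node CPTs; the two-node network and its probabilities are made up for the example and have nothing to do with the palmprint data.

```python
def joint_probability(assignment, parents, cpt):
    """Joint probability of a full assignment under a Bayesian network,
    factored as in (20): prod_i P(X_i | Parents(X_i))."""
    prob = 1.0
    for node, value in assignment.items():
        parent_values = tuple(assignment[p] for p in parents[node])
        prob *= cpt[node][parent_values][value]
    return prob

# toy two-node network: Quality -> Match (illustrative numbers only)
parents = {"Quality": (), "Match": ("Quality",)}
cpt = {
    "Quality": {(): {"good": 0.7, "poor": 0.3}},
    "Match":   {("good",): {"yes": 0.9, "no": 0.1},
                ("poor",): {"yes": 0.6, "no": 0.4}},
}
print(joint_probability({"Quality": "good", "Match": "yes"}, parents, cpt))  # 0.63
```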
3. Results and Discussion

The enhanced image is rotated at different angles as shown in Fig. 4. Third-order Legendre descriptors are used to test the invariance. The classical non-rotational Legendre moments [18] and the proposed rotational ones are evaluated. The absolute errors for the three different rotations are plotted in Figs. 5(a)–(c). It is clear that the absolute error of the non-rotational descriptors shows disturbed behaviour and increases with increasing order. In contrast, the absolute error of the proposed descriptors shows steady behaviour; its values decrease and approach zero as the descriptor order increases.


Fig. 4. (a) Original palmprint image; (b)–(d) rotated at different angles



[Figure: three panels (a)–(c) plotting absolute error against Legendre descriptor index (1–9) for the classical non-rotational descriptor and the proposed Legendre rotational descriptor.]
Fig. 5 Absolute invariance errors of the rotated palmprint images.

This section describes the experimental protocol employed in the present work. Ten images per palm are used for training and the remaining ten images are used for testing the proposed algorithm. The experiments are conducted for moment orders ranging from 1 to 3, partitioning the palmprint into (2×2), (3×3), (4×4), (5×5), (6×6), (7×7), (8×8), (9×9), (10×10), (11×11) and (12×12) sub-images. The experiment is also conducted for moments of order 1 to 5 on the entire palm image. Accuracy (ACC), True Positive rate (TP) and False Positive rate (FP) are used for validating the proposed system. The results for the entire palmprint image, for orders 1 to 5, are presented in Table 1.
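A minimal sketch of the sub-image feature extraction used in these experiments is shown below, assuming the `legendre_moments` helper sketched earlier; the exact partitioning and descriptor ordering of the original experiments are not specified in the paper, so these are illustrative choices.

```python
import numpy as np

def subimage_features(roi, grid=10, order=3):
    """Concatenate low-order Legendre moments of each (grid x grid)
    sub-image into one feature vector (illustrative partitioning)."""
    h, w = roi.shape
    bh, bw = h // grid, w // grid
    features = []
    for r in range(grid):
        for c in range(grid):
            block = roi[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            moments = legendre_moments(block, order)   # from the ZOA sketch
            features.extend(moments[k] for k in sorted(moments))
    return np.asarray(features)
```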
Table 1. Overall Result for Entire Palmprint Image
Order 1 Order 2 Order 3 Order 4 Order 5
Accuracy 65.5 74 86.4 88.7 92.5
TP 0.655 0.74 0.864 0.887 0.925
FP 0.032 0.022 0.015 0.013 0.008

Table 2. Order 1 for Sub-images

Order 1 (Legendre Descriptor =2)


Sub Image 2×2 3×3 4×4 5×5 6×6 7×7 8×8 9×9 10×10 11×11 12×12
Accuracy 85.2 83.1 75.4 72.5 90 91 88.4 90.5 92 87.5 86.2
TP 0.852 0.831 0.754 0.725 0.9 0.91 0.884 0.905 0.92 0.875 0.862
FP 0.013 0.015 0.016 0.017 0.006 0.01 0.010 0.07 0.009 0.011 0.012

Table 3. Order 2 for Sub-images

Order 2 (Legendre Descriptor =5)


Sub Image 2×2 3×3 4×4 5×5 6×6 7×7 8×8 9×9 10×10 11×11 12×12
Accuracy 89.4 88.1 86.5 87.2 82.4 83.5 87.6 91.8 95 90 88.6
TP 0.894 0.881 0.865 0.872 0.824 0.835 0.876 0.918 0.95 0.9 0.886
FP 0.012 0.014 0.017 0.016 0.021 0.020 0.016 0.008 0.003 0.011 0.015

Table 4. Order 3 for Sub-images

Order 3 (Legendre Descriptor =9)


Sub Image 2×2 3×3 4×4 5×5 6×6 7×7 8×8 9×9 10×10 11×11 12×12
Accuracy 90.1 89.2 88.5 89.4 91.6 88.7 94.7 95.2 97.74 93.3 92.1
TP 0.901 0.892 0.885 0.894 0.916 0.887 0.947 0.952 0.974 0.933 0.921
FP 0.007 0.008 0.009 0.007 0.006 0.009 0.004 0.004 0.003 0.005 0.006

From the results in Table 5 it can be seen that the proposed system, using low-order (order 3) Legendre moments as features from (10×10) sub-images, outperforms the conventional method of high-order (order 5) Legendre moment features computed on the whole palmprint.

Table 5 Overall Result Comparison

Order 5 Order 1 Order 2 Order 3


Sub Image Entire 10×10 10×10 10×10
Accuracy 92.5 92 95 97.74
TP 0.925 0.92 0.95 0.974
FP 0.008 0.009 0.003 0.003

4. Conclusion

This paper has proposed a palmprint verification system using low-order Legendre moments as features extracted from the sub-images of the palmprint image. This method performs better than the conventional method using high-order Legendre moments on the entire palmprint. The proposed method also overcomes the major disadvantage of the Legendre moments, making them invariant to translation, scaling and rotation.
Another important observation is that, due to practical considerations and limitations, when multimodal biometric data is not available for improved verification, one can use features of the various sub-images of a single unimodal biometric image to obtain improved verification performance. This also opens up an area of research in which multiple features obtained from the sub-images of an image may offer acceptable performance compared with multimodal biometrics.

References
[1] D.Zhang, W.Kong, J.You, and M.Wong, Online palmprint identification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1041–
1050, 2003.
[2] C. Han, H. Cheng, C. Lin, and K. Fan, Personal authentication using palmprint features, Pattern Recognition, 37(10):371– 381, 2003.
[3] N. Duta, A. Jain, and K. V. Mardia, Matching of Palmprints, Pattern Recognition Letters, 23:477–485, 2002.
[4] W. K. Kong, D. Zhang, and W. Li, Palmprint feature extraction using 2-D Gabor filter, Pattern Recognition, 36:2339–2347, 2003.
[5] D. Zhang and W. Shu, Two novel characteristics in palmprint verification: Datum point invariance and Line feature matching, Pattern Recognition, 32:691–
702, 1999.
[6] D. Hu, G. Feng, and Z. Zhou, Two-dimensional locality preserving projections (2dlpp) with its application to palmprint recognition, Pattern Recognition,
40:339–342, 2007.
[7] M. Ekinci and M. Aykut, Palmprint recognition by applying wavelet subband representation and kernel PCA, 5th International Conference on Machine
Learning and Data Mining in Pattern Recognition, Germany, pages 628–642, 2007.
[8] L. Shang, D. Huang, J. Du, and Z. Huang, Palmprint recognition using fast ICA and Radial Basis Probabilistic Neural Network, Neurocomputing, 69:1782–
1786, 2006.
[9] W. Li, J. You, and D. Zhang, Texture-based palmprint retrieval using a layered search scheme for personal identification, IEEE Transactions on
Multimedia, vol. 7, no. 5, pp. 891–898, 2005.
[10] http://www.comp.polyu.edu.hk/~biometrics/
[11] R. Raghavendra, Ashok Rao, G. Hemantha Kumar, A Novel Three Stage Process for Palmprint Verification, International Conference on Advances in
Computing, Control and Telecommunication Technologies, 2009.

[12] S. Du and R. Ward, Wavelet-based illumination normalization for face recognition, Proc. of the IEEE International Conference on Image Processing,
September 2005.
[13] K. M. Hosny, Efficient computation of Legendre moments for gray level images, Int. J. of Image and Graphics, vol. 7, no. 4, pp. 735-747, Oct. 2007.
[14] Khalid M. Hosny, New Set Of Rotationally Legendre Moment Invariants, International Journal of Electrical, Computer, and Systems Engineering 4:3 2010.
[15] Siavash Mirarab, Alaa Hassouna, Ladan Tahvildari, Using Bayesian Belief Networks to Predict Change Propagation in Software Systems, icpc, pp.177-
188, 15th IEEE International Conference on Program Comprehension (ICPC '07), 2007.
[16] A. T. T. Ying, G. C. Murphy, R. T. Ng, and M. Chu-Carroll. Predicting source code changes by mining change history. IEEE Transaction on Software
Engineering, 30(9):574–586, 2004.
[17] Genie/Smile, 2005-2006. http://genie.sis.pitt.edu/.
[18] C. H. Teh, and R.T. Chin, On image analysis by the method of moments, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 496–
513, April 1988.
[19] G. S.Badrinath, Naresh K. Kachi and Phalguni Gupta, Palmprint based Verification System Robust to Occlusion using Low-order Zernike Moments of Sub-
images , Special Issue of Biometrics Systems and Applications in the Journal of Telecommunication Systems, Springer Verlag, 2010.
