
2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA2011)

An Efficient Method for Finger-Knuckle-Print Recognition Based on Information Fusion
Zahra S. Shariatmadar, Karim Faez
EE Department, Amirkabir University of Technology (Tehran Polytechnic)
Tehran, Iran
{Zshariatmadar,kfaez}@aut.ac.ir

Abstract— Information fusion of various biometrics, owing to its high performance in biometric recognition systems, has recently attracted much attention. In this paper, the information of biometrics is therefore considered from different aspects. First, information fusion within a single modality is investigated: two different subsets of feature vectors are extracted from each image and serially combined, so that a new vector of higher dimension is used as the feature vector of each image. The biometric characteristic used in this paper is the Finger-Knuckle-Print (FKP), which is unique and has recently been used for personal identity authentication. The database used in our experiments contains images of four different fingers; therefore, at the second stage, the information fusion of the fingers at the feature level is investigated. In fact, this fusion works as a kind of multi-modal method with a single biometric characteristic but multiple units. The Poly-U FKP database was used to examine the performance of the proposed method, and the experimental results prove that combining the features of the four fingers increases the recognition rate compared to that produced by each finger separately.

I. INTRODUCTION

Biometrics-based methods, which rely on unique physical or behavioural characteristics of human beings, are effective for automatically recognizing a person's identity. Over the past years, many biometric characteristics have been investigated, including fingerprint, face, iris, retina, palmprint, hand geometry, gait, etc. [1]. Recently, it has been found that the Finger-Knuckle-Print (FKP), which is the texture pattern produced by bending the finger knuckle, has a high capability to discriminate different individuals. Several studies have already been carried out in this field. For example, Woodard and Flynn [2] used the 3D range image of the hand to extract a curvature-based surface representation of the index, middle and ring fingers. In [3], Ravikanth et al. used 2D finger-back surface images and applied subspace analysis methods to them for feature extraction. Although the foregoing methods proved that the biometric features on the outer surface of the finger are unique, they have some disadvantages: the former work mainly uses the 3D shape information of the finger-back surface rather than the texture information, while in the latter work the subspace analysis methods may not be able to effectively extract the distinctive features of the finger-back surface. In [4], Lin Zhang et al. developed an online personal recognition system using the FKP. With their design, the finger knuckle is slightly bent when being imaged, so the skin patterns can be clearly captured and the unique FKP features can be better exploited. Finally, in Lin Zhang's later work [5], the Gabor features and the Fourier transform coefficients are exploited as the local and global information of each image, respectively. In these two works, the researchers significantly improved the recognition accuracy in verification mode. In this paper, an FKP recognition algorithm is developed in identification mode, where the user identity is unknown. In other words, identification involves comparing the acquired biometric information against the templates corresponding to all users in the database; after all comparisons, the identity of the most similar template is taken as the unknown user's identity.

It has also been reported in the literature that different feature subsets of a sample carry complementary information. Therefore, a fusion scheme which utilizes various representations can improve the classification result. In other words, the outputs of various feature extractors can be integrated to obtain decisions that are more accurate than the decisions made by any individual feature representation.

So in this paper, first, the features extracted from the intensity and Gabor images of each FKP are combined; in other words, two different subsets of feature vectors are extracted from each image and serially combined. The experimental results at this stage demonstrate that, with this approach, the user identity can be identified with a high recognition rate. At the next stage, the feature vectors of different combinations of fingers are fused, i.e., information fusion at the feature level is examined. The experimental results at this stage show that the best recognition performance is achieved by combining the feature vectors of all fingers.

In the rest of this paper, section 2 introduces the details of the proposed method. Section 3 reports the experimental results of various implementations. Finally, conclusions and future work are presented in section 4.

II. FKP RECOGNITION

The FKP characteristic, like any other biometric characteristic, can be used in two modes: identification and verification. In this paper we focus on the identification mode; that is, to determine the user identity, we compare the input image with all templates in the database. The block diagram of the identification task is shown in Fig. 1, using the five main modules of a biometric system, i.e., sensor, feature extractor, matcher, decision making and system database.


In the following, each of these components, as used in our proposed algorithm, is explained in detail.

Fig. 1. The five main modules of a biometric system in identification mode

A. Sensor Module

This module captures the biometric data of an individual. Fig. 2 shows the FKP image acquisition device. In this paper, the Poly-U FKP database, which was collected with this device, is used for the various experiments. This database is intended to be a benchmark and is available at [6].

Fig. 2. The outlook of the FKP image acquisition device

Since variations in the spatial locations of the fingers may affect the final result, it is necessary to construct a local coordinate system for each FKP image. After building such a coordinate system, a Region of Interest (ROI) can be cropped from the original image for feature extraction and matching. We have used the steps given in [4] for constructing the coordinate system. Fig. 3(b) and 3(e) show two typical extracted ROI images. In our work, the size of these ROI images is 110*220 pixels.

Fig. 3. (a) and (d) are two FKP images; (b) and (e) are the ROI images of (a) and (d) [7].

B. Feature Extraction Module

In this module the acquired data is processed to extract feature values. In this work we fuse two subsets of feature vectors for each image; therefore we employ two FKP representations, which are the gray-level intensity and the Gabor transform of the gray-level intensity.

The main steps of feature extraction in our method are as follows:

Step 1) Dividing each ROI image into 22 segments (the number of segments has been selected arbitrarily), so that each segment contains 1100 pixels. (In fact, with this division, we expect the feature elements extracted from each segment to capture the local information contained in the corresponding sub-image.)

Step 2) Computing the Average Absolute Deviation (AAD) from the mean of the gray values in each segment. In this way, 22 features are obtained for each gray-scale image. The AAD parameter is given by:

$\mathrm{AAD}_j = \frac{1}{N} \sum_{(x,y)} \left| f_j(x,y) - M_j \right|$    (1)

where $f_j(x,y)$ is the gray level of pixel $(x,y)$, $M_j$ is the mean value, and $N$ is the total number of pixels in segment $j$. Although features based on the statistical properties of images are likely to degrade as the image quality deteriorates [8], in this study we claim that these features are still useful for FKP identification. As noted in section III, the results based on this simple statistical feature are good, but it is expected that better accuracies could be achieved with more discriminative attributes.
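To make Steps 1 and 2 concrete, the following is a minimal sketch of the segment-wise AAD computation. The paper only fixes 22 segments of 1100 pixels each for a 110*220 ROI, so the 2x11 block layout assumed here is an illustrative choice, not something stated in the text.

```python
import numpy as np

def aad_features(roi, grid=(2, 11)):
    """Segment-wise Average Absolute Deviation (AAD) features, per Eq. (1).

    roi  : 2-D gray-level array, assumed here to be 110x220 pixels.
    grid : how the ROI is split into segments; (2, 11) gives 22 blocks of
           55x20 = 1100 pixels each (an assumed layout, only the number of
           segments is fixed by the paper).
    """
    rows, cols = grid
    h, w = roi.shape[0] // rows, roi.shape[1] // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            seg = roi[r * h:(r + 1) * h, c * w:(c + 1) * w].astype(float)
            m = seg.mean()                        # M_j in Eq. (1)
            feats.append(np.abs(seg - m).mean())  # AAD_j
    return np.array(feats)                        # 22 features per image
```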
Step 3) Filtering each ROI image with a bank of Gabor filters in 10 orientations and 5 scales ($\theta_j = j\pi/10$, $j = 0, \dots, 9$). In this part of the algorithm we use a bank of Gabor filters as feature extractors. Gabor filters can simultaneously capture the spatial and frequency uncertainty information [9]. Since these filters can extract the three basic attributes of each image - magnitude, phase and orientation - they have been widely used for feature extraction. The general form of this filter, as usually used in the literature, is defined as follows [10]:

$G(x, y, \omega, \theta) = \frac{\omega}{\sqrt{2\pi}\,k}\, e^{-\frac{\omega^2}{8k^2}\left(4x'^2 + y'^2\right)} \left( e^{i\omega x'} - e^{-\frac{k^2}{2}} \right)$    (2)

where $x' = (x - x_0)\cos\theta + (y - y_0)\sin\theta$ and $y' = -(x - x_0)\sin\theta + (y - y_0)\cos\theta$, $(x_0, y_0)$ is the center of the function, $\omega$ is the radial frequency in radians per unit length and $\theta$ is the orientation of the Gabor function in radians. $k$ is defined by $k = \sqrt{2\ln 2}\,\frac{2^{\delta}+1}{2^{\delta}-1}$, where $\delta$ is the half-amplitude bandwidth of the frequency response, and $\omega$ can be determined by $\omega = k/\sigma$, where $\sigma$ is the standard deviation of the Gaussian envelope [7].
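As an illustration of Eq. (2), a minimal sketch of the complex Gabor kernel is given below; the kernel size and the bandwidth value are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def gabor_kernel(size, omega, theta, delta=1.0):
    """Complex 2-D Gabor kernel following Eq. (2).

    size  : kernel side length in pixels (illustrative choice).
    omega : radial frequency in radians per unit length.
    theta : orientation in radians.
    delta : half-amplitude bandwidth in octaves (illustrative choice).
    """
    k = np.sqrt(2 * np.log(2)) * (2 ** delta + 1) / (2 ** delta - 1)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)      # x' (kernel centered at 0,0)
    yp = -x * np.sin(theta) + y * np.cos(theta)     # y'
    envelope = np.exp(-(omega ** 2 / (8 * k ** 2)) * (4 * xp ** 2 + yp ** 2))
    carrier = np.exp(1j * omega * xp) - np.exp(-k ** 2 / 2)   # DC-compensated carrier
    return (omega / (np.sqrt(2 * np.pi) * k)) * envelope * carrier
```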
A disadvantage of Gabor filters, however, is that their maximum bandwidth is limited to approximately one octave, so if broad spectral information with maximal spatial localization is required, Gabor filters are not optimal. Therefore, Field [11] proposed in 1987 the Log-Gabor function, which can be constructed with arbitrary bandwidth, and the bandwidth can be optimized to produce a filter with minimal spatial extent.


The frequency response of a Log-Gabor filter is given as:

$G(f) = \exp\!\left( \frac{-\left(\log(f/f_0)\right)^2}{2\left(\log(\sigma/f_0)\right)^2} \right)$    (3)

where $f_0$ represents the center frequency and $\sigma$ controls the bandwidth of the filter. The details of the Log-Gabor filter are explained in [11]. The visualization of the gray-level intensity and its Log-Gabor transform is shown in Fig. 4. We choose five scales and ten orientations for the Log-Gabor transform.

Fig. 4. (a) Gray-level intensity (b) Log-Gabor transform of the gray-level intensity.

Step 4) Dividing each filtered image into 22 segments and then computing the AAD parameter of each segment. The 22 features obtained from each filtered image are serially combined. In this way, a new vector with 50*22 = 1100 features is obtained for each image. (By applying the bank of Log-Gabor filters to each image, 50 filtered images at different scales and orientations are obtained.)
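A minimal sketch of Steps 3 and 4 with a frequency-domain Log-Gabor bank (5 scales, 10 orientations) follows. The center frequencies, scale multiplier, sigma/f0 ratio and angular spread are illustrative assumptions, since the paper does not report them, and the Gaussian angular term is a common Log-Gabor construction rather than something specified in the text.

```python
import numpy as np

def log_gabor_bank(shape, n_scales=5, n_orients=10,
                   f_min=0.05, mult=2.0, sigma_ratio=0.65, d_theta=0.25):
    """Frequency-domain Log-Gabor filters per Eq. (3) with an angular Gaussian.

    All numeric defaults are illustrative; the paper only fixes 5 scales and
    10 orientations (theta_j = j*pi/10).
    """
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                        # avoid log(0) at the DC term
    angle = np.arctan2(fy, fx)
    bank = []
    for s in range(n_scales):
        f0 = f_min * (mult ** s)              # center frequency of scale s
        radial = np.exp(-(np.log(radius / f0) ** 2) /
                        (2 * np.log(sigma_ratio) ** 2))    # Eq. (3)
        radial[0, 0] = 0.0                    # zero DC response
        for j in range(n_orients):
            theta_j = j * np.pi / n_orients   # theta_j = j*pi/10
            dth = np.arctan2(np.sin(angle - theta_j), np.cos(angle - theta_j))
            angular = np.exp(-(dth ** 2) / (2 * d_theta ** 2))
            bank.append(radial * angular)
    return bank                               # 50 frequency-domain filters

def log_gabor_aad(roi):
    """Step 4: AAD features of the 50 Log-Gabor magnitude responses.

    Uses aad_features() from the earlier AAD sketch.
    """
    spectrum = np.fft.fft2(roi.astype(float))
    feats = [aad_features(np.abs(np.fft.ifft2(spectrum * g)))
             for g in log_gabor_bank(roi.shape)]
    return np.concatenate(feats)              # 50 * 22 = 1100 features
```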
Step 5) Applying the combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to the features obtained in the previous step (by applying these algorithms, the 164 most important features are selected). In the following, the PCA and LDA algorithms are briefly explained.

PCA [12] is a standard technique used to reduce the dimensionality of the original data. Using this method, a set of new orthogonal basis vectors is obtained that spans the subspace of the training samples. With PCA, the image $X$ is mapped to a low-dimensional vector $Y$ as follows:

$Y = W^T X$    (4)

Here $X \in R^N$, $Y \in R^M$ ($N \gg M$) and $W$ is the projection matrix. In this method, for the $N$-dimensional vectors $x_i$ ($i = 1, 2, \dots, n$), we use the sample mean $\bar{x}$ and covariance matrix $S$ defined as:

$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$    (5)

$S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T$    (6)

The columns of the projection matrix are the eigenvectors corresponding to the $M$ largest eigenvalues of the covariance matrix $S$; in our experiment we adopted the 980 largest eigenvalues and their eigenvectors. In fact, by this method the subspace composed of the orthogonal basis vectors is obtained. Then, by mapping the samples into this subspace, we obtain the projection coefficient vectors as the sample feature vectors. Similarly, mapping the test images into this subspace yields the feature vectors of the test samples.

Since the PCA algorithm does not consider the separability of the various classes, we apply the LDA algorithm [12] to the PCA weights for optimal separability of the feature subspaces. LDA is a subspace analysis method which searches for a group of basis vectors such that different class samples have the smallest within-class scatter $S_W$ and the largest between-class scatter $S_B$. $S_B$ and $S_W$ are defined as follows:

$S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$    (7)

$S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$    (8)

where the $x_k$ are $n$-dimensional vectors, $\mu_i$ is the mean of class $X_i$, $\mu$ is the overall mean, and $N_i$ is the number of samples in class $X_i$.

LDA obtains the optimal $W$ such that the ratio of $S_B$ to $S_W$ is maximized:

$W_{opt} = \arg\max_{W} \frac{W^T S_B W}{W^T S_W W}$    (9)

In fact, the LDA features are the projections of the training samples mapped by the projection matrix $W$.
spanned by the training samples. With PCA, the image X is Step6) Combining two sets of feature vectors extracted from
mapped to a low-dimensional space vectors Y as follows: steps 2 and 5 into one vector. (164 + 22 = 186)
The classical method of feature combination is to group
Y =WT X (4) two sets of feature vectors into union-vector (various feature
vectors are serially combined). On the other hand, suppose α
and β are two feature vectors of an arbitrary sample ξ which
Here X ∈ R N , Y ∈ R M ( N  M ) and W indicates the are n and m-dimensional respectively. Therefore the combined
projection matrix. feature vector of ξ is defined γ which is (n + m) dimensional
In this method for N-dimensional vector xi (i = 1, 2,..., n) , [13].
It is noted that when the dimensions of α and β are unequal,
we use the average ( x ) and covariance matrix(S) of samples as
it may be possible that the higher-dimensional one be still
following: more powerful than the lower-dimensional. For this reason,
n
1 we used the weighted combination form which is defined
x=
n ∑x
i =1
i (5) ⎛α ⎞
γ =⎜ ⎟ where the weight θ is called a combination coefficient.
⎝ θβ ⎠
n
1
S=
n ∑( x − x )( x − x )
i =1
i i
T
(6)
Ifn > m and δ=n/m, then θ
and δ2 [13].
value can be selected between δ

In this work we used the following method proposed in


[13] to combine the two feature vectors α and β: (in this paper
the dimension of α and β is 164 and 22, respectively.):


Fig. 5. The proposed method for feature extraction in a FKP identification system

In this work we used the following method, proposed in [13], to combine the two feature vectors $\alpha$ and $\beta$ (in this paper the dimensions of $\alpha$ and $\beta$ are 164 and 22, respectively):

• Obtaining the unit vectors of $\alpha$ and $\beta$ by dividing each vector by its norm.

• Selecting the combination coefficient $\theta$ and combining the unit vectors in the form $\gamma = \begin{pmatrix} \tilde{\alpha} \\ \theta\tilde{\beta} \end{pmatrix}$, where $\tilde{\alpha}$ and $\tilde{\beta}$ are the unit vectors of $\alpha$ and $\beta$, respectively.

We ran our experiments with different values of $\theta$; the best performance was achieved when $\theta = n^2/m^2$. The block diagram of the above steps is shown in Fig. 5, and a sketch of this combination step is given below.
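A minimal sketch of this weighted serial combination, assuming the 164-D PCA+LDA vector and the 22-D AAD vector from the previous steps:

```python
import numpy as np

def combine_features(alpha, beta):
    """Weighted serial combination gamma = [alpha_hat; theta * beta_hat].

    alpha : higher-dimensional feature vector (here the 164-D PCA+LDA features).
    beta  : lower-dimensional feature vector (here the 22-D AAD features).
    theta is set to (n/m)^2, the value reported as best in the paper.
    """
    a = alpha / np.linalg.norm(alpha)         # unit vectors
    b = beta / np.linalg.norm(beta)
    theta = (len(alpha) / len(beta)) ** 2     # theta = n^2 / m^2
    return np.concatenate([a, theta * b])     # 164 + 22 = 186 features
```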
C. Matching Module

In this module, the feature values are compared against the templates in the database to generate matching scores. In this work, the Euclidean distance is used to compare two feature vectors.

D. Decision-Making Module

This module establishes the user identity; that is, it finds the most similar template in the database for each input image. In this paper, the minimum distance between two images is taken as the similarity score.
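A minimal sketch of how the matching and decision-making modules fit together (nearest gallery template by Euclidean distance); this is our reading of the two modules, not code taken from the paper:

```python
import numpy as np

def identify(probe, gallery_feats, gallery_labels):
    """Closed-set identification: return the label of the nearest template.

    probe          : 186-D combined feature vector of the input image.
    gallery_feats  : (n_templates, 186) array of enrolled feature vectors.
    gallery_labels : identity label of each enrolled template.
    """
    dists = np.linalg.norm(gallery_feats - probe, axis=1)   # Euclidean distances
    return gallery_labels[int(np.argmin(dists))]            # minimum-distance rule
```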
E. System Database

In the enrolment mode of a biometric-based authentication system, a user's biometric data is acquired using a biometric reader. The generated templates, after being labeled with a user identity, are stored in a database and are used for generating the scores in the matching module.

III. EXPERIMENTAL RESULTS

For our identification experiments, a closed-set model was used; in this model, every subject in the probe set is also present in the gallery set. We used the Poly-U FKP database, collected from 165 volunteers, including 125 males and 40 females. Among them, 143 subjects were 20-30 years old and the others were 30-50 years old. The samples were collected in two separate sessions; in each session, 6 images were acquired for each of the left index, left middle, right index and right middle fingers. Therefore, in total, the database contains 7920 images from 660 different fingers. In our experiments, the images collected in the first session form the gallery set and the images collected in the second session form the probe set [4]. Two experiments were conducted: first, each finger is evaluated separately, and then different combinations of fingers are used to improve the accuracy. Our results, in terms of recognition rate, are reported as cumulative match characteristic (CMC) curves, which plot accuracy versus rank.
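For reference, a minimal sketch of how a CMC curve can be computed from the probe-gallery distance matrix; this is the standard construction rather than code described in the paper:

```python
import numpy as np

def cmc_curve(dist, probe_labels, gallery_labels, max_rank=200):
    """Cumulative match characteristic from an (n_probe, n_gallery) distance matrix.

    For each rank r, the curve gives the fraction of probes whose correct
    identity appears among the r closest gallery templates.
    """
    order = np.argsort(dist, axis=1)                   # closest templates first
    ranked = np.asarray(gallery_labels)[order]         # labels sorted by distance
    hits = ranked == np.asarray(probe_labels)[:, None]
    first_hit = hits.argmax(axis=1)                    # rank index of first correct match
    return np.array([(first_hit < r).mean() for r in range(1, max_rank + 1)])
```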
A. Experiment 1

The aim of the first experiment is to evaluate the performance of the proposed algorithm for personal identification on each finger type separately. In other words, for each type of FKP, the gallery and the probe sets each contain 165 classes and 990 (165*6) sample images. Therefore, for each input image, we obtain 990 scores by comparing the input image with all gallery templates.

In our experiments we compare our proposed algorithm with three other methods, which are as follows:

Method 1: The PCA algorithm is applied to each whole image without dividing it into segments. The reported result was obtained with the dimension of each feature vector set to 980.

Method 2: The combination of the PCA and LDA algorithms is used; that is, the PCA algorithm is applied first and then the LDA method is applied to the PCA weights. Using this method on our data, the 164 most important features are selected.

Method 3: The bank of Gabor filters is used in the feature extraction stage. Since the extracted features have a high dimension and are difficult to evaluate directly, the combination of PCA and LDA is then applied to reduce the dimension of each vector and select the most important features. With this, similarly to the previous method, we obtain feature vectors with 164 dimensions.


Fig. 6. CMC curves obtained in experiment 1 for FKPs from (a) left index fingers, (b) left middle fingers, (c) right index fingers and (d) right middle fingers. Each panel plots recognition rate (%) versus rank for Method 1, Method 2, Method 3 and the proposed method.

Fig. 6 shows the CMC curves for the different finger types generated by the four FKP recognition schemes. From these results it can be observed that our proposed method performs better than the three other schemes. The experimental results in terms of recognition rate at rank 1 are summarized in Table I.

TABLE I
Recognition Rates Obtained by the Methods in Experiment 1

Finger type  | Method 1 | Method 2 | Method 3 | Proposed method
Left index   | 23.23%   | 50.64%   | 68.48%   | 89.90%
Left middle  | 24.14%   | 47%      | 67.27%   | 88.59%
Right index  | 21.11%   | 51.08%   | 67.07%   | 89.49%
Right middle | 24.75%   | 54.68%   | 73.94%   | 88.48%

Also, to show that combining the two feature vectors of each image is better than using a single feature vector, we evaluated our method when one and when two feature vectors are extracted from each image. From the results summarized in Table II it can be seen that by using two feature vectors (Gabor + gray-level intensity) for each FKP, the results are slightly better than those for one feature vector (Log-Gabor features); in other words, the information fusion within each FKP can improve the recognition rate. The performance of this experiment for each finger, in terms of recognition rate at rank 1, is summarized in Table II.

TABLE II
Recognition Rates Comparing One and Two Extracted Feature Vectors for Each Finger

Fingers      | Gabor feature | Gabor + gray-level intensity
Left index   | 79.29%        | 89.90%
Left middle  | 80.71%        | 88.59%
Right index  | 81.11%        | 89.49%
Right middle | 81.11%        | 88.48%

B. Experiment 2

Information fusion of multi-modal biometrics has attracted much attention in recent years, so in this section we study the fusion problem in FKP recognition. The three possible levels of fusion are: (a) fusion at the feature extraction level, (b) fusion at the matching score level, and (c) fusion at the decision level [14]. In this paper we examine the results of fusion at the feature level.

Since the features extracted from one finger are independent of those extracted from the other fingers, we can fuse the features of the individual fingers and then compute the recognition rate.


In other words, the feature vectors of different fingers are serially combined, so that a new vector with higher dimension is obtained; this is known as fusion at the feature level. At this stage we combine the features of 2, 3 and 4 fingers, so the new feature vectors have 372 (2*186), 558 (3*186) and 744 (4*186) dimensions, respectively.
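A minimal sketch of this feature-level fusion, assuming each finger has already been reduced to its 186-D combined vector:

```python
import numpy as np

def fuse_fingers(finger_vectors):
    """Feature-level fusion: serially concatenate the per-finger vectors.

    finger_vectors : list of 186-D combined feature vectors, one per finger.
    Two, three or four fingers give 372-, 558- or 744-D fused vectors.
    """
    return np.concatenate(finger_vectors)
```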
In fact, the goal of this experiment is to investigate the algorithm's performance when the information from more than one finger of a person is fused. In this case, the algorithm works as a kind of multi-modal method with a single biometric trait but multiple units.

We tested several different combinations of fingers. The results of these fusions are presented in Table III, from which it can be observed that by integrating the information from more fingers, the recognition performance of the algorithm is improved, so that the best recognition accuracy is achieved by combining the features of all four fingers. Our method has also been compared with the three methods mentioned in the previous experiment; from the results in Table III, it can be observed that our method works better than the other methods.

From the results of the above experiments we can conclude that FKP recognition systems which use multi-modal biometrics work better than systems with a uni-modal biometric.

TABLE III
Recognition Rates Obtained by Integrating the Features of Different Combinations of Fingers (Experiment 2)

Fingers in fusion | Method 1 | Method 2 | Method 3 | Proposed method
LI + LM           | 48%      | 63.54%   | 83.64%   | 95.65%
LI + RI           | 51%      | 70.20%   | 85.35%   | 95.25%
LI + RM           | 53%      | 62%      | 89.70%   | 95.05%
LM + RI           | 52%      | 61.4%    | 86.73%   | 94.65%
LM + RM           | 54%      | 67.88%   | 87.07%   | 95.15%
RI + RM           | 56%      | 71.01%   | 88.89%   | 95.56%
LI + LM + RI      | 51%      | 75.86%   | 91.72%   | 95.66%
LI + LM + RM      | 53%      | 76.57%   | 93.33%   | 96.06%
LI + RI + RM      | 55%      | 79.49%   | 93.84%   | 95.74%
LM + RI + RM      | 57%      | 76.57%   | 93.33%   | 95.86%
Four fingers      | 59%      | 83.94%   | 95.9%    | 96.56%

LI: Left Index, LM: Left Middle, RI: Right Index, RM: Right Middle

IV. CONCLUSION

In this work we have presented an approach for personal identification based on FKP features. First, we extract two subsets of features from each finger (from the gray-level intensity and its Log-Gabor transform) to investigate information fusion within a single modality, i.e., the FKP of each finger. In this stage, the two feature vectors of each FKP are fused and the recognition performance is obtained from the combined feature vector. Then the information fusion of multi-modal biometrics is investigated; in other words, the features of different combinations of two, three and four fingers are fused. In all of the experiments, the Euclidean distance is used as the classifier, and in each case 6 sample images per training class are used. The experimental results presented in section III illustrate the suitability of the proposed method for recognizing each finger separately. In addition, based on the results obtained on the Poly-U dataset, we can conclude that information fusion of different fingers at the feature level increases the recognition rate compared to that achieved by each single finger; in this case, the recognition rate obtained by fusing the four fingers at the feature level is 96.56%. We believe that the recognition rate can be further improved by using more discriminative features.

In future work we will investigate several approaches which may improve the recognition rate:
• Fusion at the matching score and decision levels in verification mode
• Combining FKP features with other biometric characteristics

ACKNOWLEDGMENT

We acknowledge the Hong Kong Polytechnic University (Poly-U) for providing the FKP database; the authors would also like to thank the Iran Telecommunication Research Center (ITRC) for financially supporting this work.

REFERENCES

[1] D. Zhang, Automated Biometrics: Technologies and Systems, Kluwer Academic, 2000.
[2] D. L. Woodard and P. J. Flynn, "Finger surface as a biometric identifier," Computer Vision and Image Understanding (CVIU), vol. 100, pp. 357-384, 2005.
[3] C. Ravikanth and A. Kumar, "Biometric authentication using finger-back surface," CVPR'07, pp. 1-6, 2007.
[4] L. Zhang, L. Zhang, D. Zhang, H. Zhu, "Online finger-knuckle-print verification for personal authentication," Pattern Recognition, vol. 43, no. 7, pp. 2560-2571, 2010.
[5] L. Zhang, L. Zhang, D. Zhang, H. Zhu, "Ensemble of local and global information for finger-knuckle-print recognition," Pattern Recognition, vol. 44, no. 9, pp. 1990-1998, 2011.
[6] Poly-U Finger-Knuckle-Print Database. http://www.comp.polyu.edu.hk/~biometrics/FKP.htm
[7] L. Zhang, L. Zhang, D. Zhang, "Finger-knuckle-print: a new biometric identifier," in Proceedings of ICIP 2009, 2009.
[8] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti, "Filterbank-based fingerprint matching," IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 846-859, 2000.
[9] D. Gabor, "Theory of communication," Journal of the Institution of Electrical Engineers, vol. 93, pp. 429-457, 1946.
[10] T. S. Lee, "Image representation using 2D Gabor wavelets," IEEE TPAMI, vol. 18, no. 10, pp. 957-971, 1996.
[11] D. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America A, vol. 4, no. 12, 1987.
[12] M. I. Ahmad, W. L. Woo, S. S. Dlay, "Multiple biometric fusion at feature level: face and palmprint," Communication Systems Networks and Digital Signal Processing (CSNDSP), pp. 801-805, 2010.
[13] J. Yang, J. Yang, D. Zhang, J. Lu, "Feature fusion: parallel strategy vs. serial strategy," Pattern Recognition, vol. 36, no. 6, pp. 1369-1381, 2003.
[14] A. Ross, A. Jain, "Information fusion in biometrics," Pattern Recognition Letters, vol. 24, no. 13, pp. 2115-2125, 2003.
