
JOURNAL OF COMPUTER SCIENCE AND ENGINEERING, VOLUME 15, ISSUE 2, OCTOBER 2012


Fingerprint Core Point Detection using Two Conditional Filters
G. A. Bahgat, A. H. Khalil, N.S. Abdel Kader and S. Mashali
Abstract: The core point (CP) of a fingerprint image can be used for fast alignment between any two fingerprints that are to be matched. One fingerprint type, the plain arch, poses a challenging problem for the detection of this point. This paper presents a fast CP detection method, based on two conditional filters, that detects the CP in all fingerprint types. A modified CP conditional filter is combined with another, newly developed filter. An analysis is made of the design of the first filter and of the effect of adding the second filter. In addition, constraints on the ridge orientation consistency are added to increase the CP detection accuracy. The method is characterized by a simplicity of design that makes it feasible for hardware implementation. Its performance is tested on fingerprints taken from digital sensors (the FVC2002 DB2 and FVC2004 DB1 databases) and on inked fingerprints (the NIST 4 database), and compared with other CP detection techniques. The results show that the modified CP detection method is faster and simpler in design while maintaining good accuracy.
Index Terms: Core point detection, design analysis, filters, fingerprint images, orientation consistency, reference point detection.

1 INTRODUCTION
Fingerprint recognition systems are used in many applications around us: accessing laptops, employee recognition in companies, and customer verification in banking processes. Some of these applications are implemented on embedded systems, which require system modules that occupy a small area; this can be achieved by using simply designed algorithms. The most common digital modules used are FPGA modules ([1], [2]) and DSP modules [3]. Besides, many applications work in real-time environments that require a fast recognition process. Fingerprint recognition includes an alignment procedure, which can be implicit or explicit [4]. Explicit alignment can reduce the matching time per finger by 43% [5]. The fingerprint features are referenced relative to the CP in [6].
A fingerprint consists of line patterns called ridges, which tend to enter from one side of the fingerprint and exit from the opposite side. They may exhibit high-curvature patterns [7]. One of these patterns is the loop, which is the innermost recurving ridge. It can exist in two forms (Fig. 1): an upper loop and a lower loop. An upper core point is defined on the peak of, or at a point inside, the upper loop. It is usually located in the central area of the fingerprint [8]. The CP is defined as the upper core point in the loop and whorl fingerprint types [9]. In other types, which do not contain loops, it is defined as the center of the highest-curvature region [10]. The forensic definitions [11] partially coincide with these definitions, except that the CP is defined at the shoulder of the loop. Manually marked CPs are always defined on a ridge.
Fingerprint images are classified into five classes [7]. Two classes, the left loop and the right loop, each contain one core point. The whorl class contains two core points (Fig. 1). The plain arch class contains no core points (Fig. 2a); it is formed of ridges that tend to rise in the center of the pattern, forming a wave-like pattern. Finally, the tented arch class contains a high-curvature area (Fig. 2b, c, and d). It possesses either an angle, an up-thrust, or a semi-loop [11]. The angular type is formed by the intersection of two ridge endings. In the up-thrust type, there is an ending ridge. The third type of tented arch is the semi-loop, which lacks one of the three characteristics of the loop: (a) the existence of a sufficient recurve, (b) the existence of a delta point shape, and (c) a ridge count, between the core and the delta point, greater than zero.














Fig. 1. A fingerprint image of type whorl containing an upper and a lower
loop.



- G.A. Bahgat is with the Department of Computers and Systems, Electronic
Research Institute, Giza, Egypt.
- A.H. Khalil is with the Department of Electronics and Electrical Commu-
nications, Faculty of Engineering, Cairo University, Giza, Egypt.
- N.S. Abdel Kader is with the Department of Electronics and Electrical
Communications, Faculty of Engineering, Cairo University, Giza, Egypt.
- S. Mashali is with the Department of Computers and Systems, Electronic
Research Institute, Giza, Egypt.

Fig. 2. Arch types: (a) Plain arch. (b) Angular. (c) Up-thrust. (d) Insufficient
loop.

There are methods that can effectively detect the CP in the loop and whorl classes of fingerprints and in some tented arch types, such as the semi-loop type. In [12], the CP is detected at the average of the crossing points of lines normal to the ridges. This method is iterative and consumes a large execution time; besides, the CP is detected outside the boundary in the case of the plain arch. A multi-resolution method, based on the integration of the sine components in two adjacent regions, is used to capture the maximum curvature in concave ridges [9]; it fails when the CP is near the border. A hierarchical analysis of the orientation coherence detects the CP in [13]. A complex-filter-based method detects the points of symmetry in the complex-valued tensor orientation field [14], which is a squared function of the orientation field; however, its accuracy is low in detecting the CP in the arch type, and a second-order filter is required.
A recent category of CP detection methods is presented in ([15], [16], [17]). This category works on the line that appears in the ridge orientation map when it is presented as a gray scale over the angle range [0°, 180°). This line is generated by the discontinuity between the angle values 0° and 180°; thus, it is called the discontinuous line (DL). An edge-map based method locates the DL by applying an edge detection method to the ridge orientation map [15], [16]. The CP is extracted by analyzing the orientation consistency around the end points of the DL. This method can detect the CP with high speed. For the arch types, it locates the CP at the point of minimum orientation consistency on the DL. Another method locates the CP at the point with the highest curvature value along the DL [17].
In the direction of using a simple design that is more convenient for hardware implementation, a singular point detection method based on applying a masking technique directly to the orientation map is given in [18]. Its execution time is smaller than that of the Poincaré index method [7] by a factor of 0.12, with almost the same accuracy. A fast technique, based on a conditional filter, detects the CP in the loop, the whorl, and some tented arch types, while it defines no point in the plain arch type [19].
This paper increases the accuracy of the CP conditional filter presented in [19] by combining it with a newly developed DL detection filter and by placing constraints on the orientation consistency. An analysis is presented of the design of the CP filter and of the effect of adding the DL filter. The results show that the modified CP detection method achieves better accuracy with less detection time, using simpler arithmetic modules than the edge-map based method.
The rest of this paper is organized as follows: the definitions are given in Section 2. Section 3 describes the proposed CP detection method. Its performance, compared with other methods, is measured and discussed in Section 4. Finally, the conclusions are presented in Section 5.
2 ORIENTATION MAP AND CP CONDITIONAL FILTER
2.1 Smoothed Orientation Map
The CP calculation is based on the ridge orientation, which is defined as the angle o(x, y) made with the horizontal axis by the ridges crossing through a small neighborhood centered at a point (x, y) [7]. The orientation is calculated at discrete positions with a step w, where w is slightly greater than the ridge width. The image can be divided into blocks of size w x w. Then, a special averaging technique is used to calculate the orientation o(i, j) of each block (i, j); direct orientation averaging is not applicable here because of the discontinuity at 0° and 180° [7]. The conventional method for the orientation map calculation is the gradient-based method [7]. Its equations are given by


G_{xx}(x, y) = \sum_{h=-b}^{b} \sum_{k=-b}^{b} \nabla_x(x+h, y+k)^2 ,   (1)

G_{yy}(x, y) = \sum_{h=-b}^{b} \sum_{k=-b}^{b} \nabla_y(x+h, y+k)^2 ,   (2)

G_{xy}(x, y) = \sum_{h=-b}^{b} \sum_{k=-b}^{b} \nabla_x(x+h, y+k) \, \nabla_y(x+h, y+k) ,   (3)

o(i, j) = 90^{\circ} + \frac{1}{2} \tan^{-1}\!\left( \frac{2\, G_{xy}(x, y)}{G_{xx}(x, y) - G_{yy}(x, y)} \right) ,   (4)


where \nabla is the gradient, x and y are the pixel coordinates of the center of the block (i, j), and b = (w/2) - 1. The gradient components can be calculated using a simple gradient operator, the Sobel mask [20], which is characterized by its effectiveness in averaging since it gives more weight to the center pixel.
An adaptive smoothing technique [13], based on the orientation consistency, is used to increase the accuracy of the orientation map and, consequently, the accuracy of the CP detection. The orientation consistency describes how well the ridge orientations in a neighborhood agree with the dominant orientation of that neighborhood. The technique smoothes the orientation map without affecting the accuracy of the CP location; the adaptive window attenuates the noise of the orientation field while maintaining the detailed orientation information in the high-curvature regions. The smoothed orientation map is obtained by [13]



o_s(i, j) = \frac{1}{2} \tan^{-1}\!\left( \frac{\sum_{(k,l) \in \Omega(s)} \sin\big(2\, o(k, l)\big)}{\sum_{(k,l) \in \Omega(s)} \cos\big(2\, o(k, l)\big)} \right) ,   (5)

where o_s(i, j) is the smoothed orientation, \Omega(s) is the surrounding neighborhood of the block (i, j), defined by the (2s+1) x (2s+1) surrounding blocks, and s is the consistency level. The orientation consistency equation is given by




cons(i, j, s) = \frac{1}{M} \sqrt{ \left( \sum_{(k,l) \in \Omega(s)} \cos\big(2\, o(k, l)\big) \right)^{2} + \left( \sum_{(k,l) \in \Omega(s)} \sin\big(2\, o(k, l)\big) \right)^{2} } ,   (6)

where M is the number of the surrounding blocks. The ridge
orientation maps are shown, by a gray scale presentation, in
Fig. 3. The axis of the orientation values is the standard axis;

the x-axis points to the right and the y-axis points upwards.
The orientation values of the fingerprint images of Fig. 3 are
also displayed by short lines in Fig. 4.
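Equations (5) and (6) above can be sketched as below: doubled angles are vector-averaged over a (2s+1) x (2s+1) neighborhood, and the same sums give the consistency. This is a simplified, fixed-level version of the adaptive scheme of [13]; the names and the boundary handling are assumptions.

```python
import numpy as np

def smooth_orientation(orient_deg, s=1):
    """Sketch of orientation smoothing (5) and consistency (6).

    orient_deg : block orientation map in degrees, values in [0, 180).
    Returns (smoothed map in degrees, consistency map in [0, 1]).
    """
    rad2 = np.deg2rad(orient_deg) * 2.0
    sin2, cos2 = np.sin(rad2), np.cos(rad2)
    rows, cols = orient_deg.shape
    smoothed = np.zeros((rows, cols))
    cons = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - s), min(rows, i + s + 1)
            j0, j1 = max(0, j - s), min(cols, j + s + 1)
            ssum = np.sum(sin2[i0:i1, j0:j1])
            csum = np.sum(cos2[i0:i1, j0:j1])
            m = (i1 - i0) * (j1 - j0)   # number of blocks in the window
            smoothed[i, j] = np.rad2deg(0.5 * np.arctan2(ssum, csum)) % 180.0  # (5)
            cons[i, j] = np.sqrt(csum ** 2 + ssum ** 2) / m                    # (6)
    return smoothed, cons
```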
2.2 CP Conditional Filters
The operation of the n x n CP conditional filter [19] is shown in Fig. 5a. It operates on the surrounding blocks, arranged in an anti-clockwise direction from B1 to B4(n-1), and is applied at each segmented block B(i, j) in the smoothed orientation map. The orientation values (θ1 .. θ4(n-1)) of the blocks B1 to B4(n-1) are checked to determine whether each lies in its allowed range, according to the following equation

B_k : \; C_k = \begin{cases} 1, & L_k \le \theta_k < H_k \\ 0, & \text{otherwise} \end{cases} ,   (7)



where k = {1 .. 4(n-1)} is the index of the block in the filter, C_k is the output of the conditional operation, L_k is the lowest allowed orientation value for the block B_k, and H_k is the value below which the orientation value for the block is allowed. The number of the filter blocks that satisfy the required range is accumulated by


A(i, j) = \sum_{k=1}^{4(n-1)} C_k ,   (8)

where A(i, j) is the accumulation of the conditional filter for the
block B(i, j).
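One possible reading of the CP conditional filter, (7)-(8), is sketched below. The anti-clockwise ordering of the perimeter blocks and the representation of each range Rk as a list of [Lk, Hk) intervals (so that unions such as R1 and R16 in Section 4 can be expressed) are assumptions; Fig. 5a fixes the authors' exact arrangement.

```python
import numpy as np

def cp_filter_accumulation(orient_deg, i, j, ranges, n=5):
    """Accumulation A(i, j) of the n x n CP conditional filter, (7)-(8).

    ranges : list of 4(n-1) entries; each entry is a list of (Lk, Hk)
             pairs in degrees, so unions like R1 = [0,50] U [120,180]
             can be expressed.
    """
    r = n // 2
    ring = []                                   # perimeter blocks B1 .. B4(n-1)
    for dj in range(-r, r):
        ring.append((i + r, j + dj))            # bottom edge
    for di in range(r, -r, -1):
        ring.append((i + di, j + r))            # right edge
    for dj in range(r, -r, -1):
        ring.append((i - r, j + dj))            # top edge
    for di in range(-r, r):
        ring.append((i + di, j - r))            # left edge

    A = 0
    for k, (bi, bj) in enumerate(ring):
        if 0 <= bi < orient_deg.shape[0] and 0 <= bj < orient_deg.shape[1]:
            theta = orient_deg[bi, bj]
            if any(Lk <= theta < Hk for (Lk, Hk) in ranges[k]):   # (7): Ck = 1
                A += 1                                            # (8)
    return A
```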










Fig. 3. The orientation maps of the non arch types: (a) left loop, and (b) whorl, and
the orientation map of the arch types: (c) up-thrust tented arch, and (d) plain arch.

Fig. 4. The orientation values are presented by short lines on the equivalent
fingerprint images shown in Fig. 3: the non arch types: (a) left loop, and (b) whorl,
and the arch types: (c) up-thrust tented arch, and (d) plain arch.




















Fig. 5. The conditional filter operations: (a) the CP filter; (b) the DL filter.
3 PROPOSED CP DETECTION METHOD
In this section, the proposed DL conditional filter is presented
followed by the combined conditional filters based method.
Then, the whole procedure of the CP detection is given, with
the use of both CP and DL conditional filters.

3.1 Developed DL Conditional Filter
The discontinuous line (DL) in the orientation map is defined as a line generated, in the gray-scale presentation of the orientation map, by the discontinuity between the orientation values 0° and 180°. The presence of the DL depends on the presence of curved ridges with horizontal tangents. The DL can be presented by



DL(i, j) : \; \big( 0^{\circ} \le o_s(i, j) < 0^{\circ} + \varepsilon \big) \;\&\; \big( 180^{\circ} - \varepsilon < o_s(i, j+1) \le 180^{\circ} \big) ,   (9)

where ε is a small value. The DL conditional filter operates on
two adjacent blocks (Fig. 5b).
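A minimal sketch of the DL conditional filter (9), operating on a smoothed block orientation map in degrees, is given below. The tolerance eps stands for the small value ε, which the paper does not fix numerically; the 10° default is only illustrative.

```python
import numpy as np

def dl_filter(orient_deg, eps=10.0):
    """Marks block pairs on the discontinuous line (DL), following (9):
    a block whose smoothed orientation is close to 0 deg and whose
    right-hand neighbour is close to 180 deg."""
    left = orient_deg[:, :-1]
    right = orient_deg[:, 1:]
    hit = (left < eps) & (right > 180.0 - eps)
    dl = np.zeros(orient_deg.shape, dtype=bool)
    dl[:, :-1] |= hit      # mark B(i, j)
    dl[:, 1:] |= hit       # mark B(i, j+1)
    return dl
```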

3.2 Proposed Combined CP Detection Method
The CP conditional filter operation [19] is modified and
combined with the DL conditional filter. The operation of the
proposed method is as follows:
1. For each segmented smoothed orientation o_s(i, j), the CP conditional filter centered at the block B(i, j) is applied ((7) and (8)). Set a = 4(n-1).
2. If there exist blocks with accumulation A(i, j) = a, then the CP location is determined by

CP_{block} = \arg\min_{(i, j)} \big\{ cons(i, j) : A(i, j) = a \big\} ,   (10)

CP(x, y) = CP_{block}(i, j) \cdot w ,   (11)

and the procedure ends. Otherwise, a is reduced by one.
3. If a > D1, then go to step 2; otherwise, continue. D1 is a threshold value that separates the non-arch types from the arch types.
4. If there exist two adjacent blocks B(i, j) and B(i, j+1) that satisfy (9), and if a = D1, then the CP location is determined by



CP_{block} = \arg\min_{(i, j)} \big\{ cons(i, j) : A(i, j) = a \;\&\; B(i, j) \in DL \big\} ,   (12)

and (11), and the procedure ends. Otherwise, continue.
5. a is reduced by one; the CP location is then determined by (12) and (11), and the procedure ends.
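Taken together, steps 1-5 can be sketched as follows, reusing the helper sketches given earlier (cp_filter_accumulation, and the consistency and DL maps from (6) and (9)). The handling of the fallback levels in steps 4-5 reflects my reading of the text, not the authors' code; the block coordinates returned here are converted to pixel coordinates by (11).

```python
import numpy as np

def detect_cp(orient_deg, cons, ranges, dl_mask, n=5, D1=5):
    """Sketch of the combined CP detection procedure (steps 1-5).

    Returns the block coordinates (i, j) of the CP, or None; the pixel
    location follows from (11) as CP_block * w.
    """
    rows, cols = orient_deg.shape
    r = n // 2
    A = np.full((rows, cols), -1, dtype=int)
    for i in range(r, rows - r):                   # step 1: accumulate (7)-(8)
        for j in range(r, cols - r):
            A[i, j] = cp_filter_accumulation(orient_deg, i, j, ranges, n)

    a = 4 * (n - 1)
    while a > D1:                                  # steps 2-3
        cand = np.argwhere(A == a)
        if len(cand):                              # (10): most consistent candidate
            i, j = min(cand, key=lambda p: cons[p[0], p[1]])
            return i, j
        a -= 1

    for level in (D1, D1 - 1):                     # steps 4-5: DL-constrained fallback
        cand = [p for p in np.argwhere(A == level) if dl_mask[p[0], p[1]]]
        if cand:                                   # (12)
            i, j = min(cand, key=lambda p: cons[p[0], p[1]])
            return i, j
    return None
```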

3.3 The CP Detection System
The image is divided into blocks of size w x w, and segmentation is then applied to each block. Segmentation is defined as the separation of the fingerprint area (foreground) from the image background. It is applied to the orientation map to prevent false detections of the CP. The mean of each block is calculated relative to the global mean of the image, and the variance of each block is calculated relative to the difference between the global maximum and minimum variance values [21]. The block is segmented if the relative mean is less than an upper limit (mth) and the relative variance is smaller than a lower limit (vth). Morphological operations, including dilation and erosion, are then applied to fill the holes in the foreground and isolate the points in the background [22]. The structuring element size is (str).
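A block-wise segmentation in the spirit of the description above might look like the sketch below. The exact normalisation of the relative mean and variance in [21], the interpretation of mth and vth, and the use of binary closing/opening for the morphological clean-up are assumptions made only to give a runnable outline.

```python
import numpy as np
from scipy import ndimage

def segment_foreground(img, w=5, mth=25, vth=0.05, str_size=5):
    """Block-wise foreground mask (one flag per w x w block), indicative only."""
    img = img.astype(np.float64)
    rows, cols = img.shape[0] // w, img.shape[1] // w
    gmean = img.mean()
    blocks = img[:rows*w, :cols*w].reshape(rows, w, cols, w)
    bmean = blocks.mean(axis=(1, 3))
    bvar = blocks.var(axis=(1, 3))
    vspread = bvar.max() - bvar.min() + 1e-9

    rel_mean = 100.0 * np.abs(bmean - gmean) / gmean   # deviation from the global mean, in %
    rel_var = (bvar - bvar.min()) / vspread            # variance within the global spread
    background = (rel_mean < mth) & (rel_var < vth)    # flat, low-contrast blocks
    fg = ~background

    se = np.ones((str_size, str_size), dtype=bool)     # structuring element (str)
    fg = ndimage.binary_closing(fg, structure=se)      # fill holes in the foreground
    fg = ndimage.binary_opening(fg, structure=se)      # drop isolated background points
    return fg
```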
The gradient-based method [7] is applied on each block ((1)-
(4)) with averaging window size w x w. An adaptive smoothing
technique [13], using (5) and (6), is used to smooth the orienta-
tion map. Then, the proposed combined conditional filters scan
the segmented orientation map.
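Chaining the illustrative sketches from the previous subsections gives the overall detection pipeline of this section (all names, the foreground masking step, and the default parameters are assumptions):

```python
import numpy as np

def find_core_point(img, ranges, w=5, n=5, D1=5, s=1):
    """End-to-end sketch: segmentation, orientation map, smoothing, CP/DL filters."""
    fg = segment_foreground(img, w=w)                  # Section 3.3 mask (block level)
    orient = block_orientation_map(img, w=w)           # (1)-(4)
    orient_s, cons = smooth_orientation(orient, s=s)   # (5)-(6)
    cons = np.where(fg, cons, np.inf)                  # keep candidates inside the foreground
    dl = dl_filter(orient_s)                           # (9)
    hit = detect_cp(orient_s, cons, ranges, dl, n=n, D1=D1)
    return None if hit is None else (hit[1] * w, hit[0] * w)   # (11): block -> pixel coords
```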
4 EXPERIMENTAL RESULTS AND DISCUSSION
The FVC2002 DB2 [23], FVC2004 DB1 [24], and NIST 4 [25] databases are used to test the performance of the CP detection methods. The fingerprints in the FVC2002 DB2 database are taken by a capacitive sensor; the image size is 256 x 364 pixels. The fingerprints in the FVC2004 DB1 database are taken by an optical sensor; the image size is 640 x 480 pixels. The images of both databases have a resolution of 500 dpi, and each database contains 8 impressions per finger taken under different skin conditions: normal, dry, and wet. In the NIST 4 database, the fingerprint images are two rollings of the same finger; the image size is 512 x 512 pixels with a resolution of 19.7 pixels/mm. The CP detection training data set is taken from FVC2002 DB1 set (B), five arch fingerprints from FVC2004 DB2 set (A), and a sample from NIST 4.
The fingerprint images are divided into blocks of size w = 5 pixels [13] for the FVC images and w = 10 for the NIST 4 images. The segmentation parameter values used with the FVC databases are: mth = 25, vth = 0.05, and str = 5 pixels. The parameter values used with the NIST 4 database are: mth = 45, vth = 0.06, and str = 5 pixels. The CP filter size is taken as 5 x 5 blocks. The orientation value limits, Lk and Hk, are given in degrees as follows: R1 = {[0, 50] U [120, 180]}, R2 = [0, 80], R3 = [15, 90], R4 = [15, 95], R5 = [15, 100], R6 = [20, 120], R7 = [20, 115], R8 = [25, 125], R9 = [40, 140], R10 = [40, 150], R11 = [40, 160], R12 = [40, 165], R13 = [60, 175], R14 = [100, 180], R15 = [122, 180], and R16 = {[120, 180] U [0, 40]}; where Rk = [Lk, Hk].
The accuracy of the CP detection methods is measured relative to the CP detected manually. A CP is defined for each fingerprint unless the scanned fingerprint is shifted and its center is not shown. The CP detection rate is used as a measure; it is the ratio of the number of CPs detected correctly by the algorithm to the number of CPs detected manually. The locations of the detected CPs are compared with the manually inspected CPs using the Euclidean distance ([13], [10]). If the distance is between 10 and 20 pixels, it is considered a small error that can be caused by both human vision and the algorithm. If the distance is between 20 and 40 pixels, it is considered a significant error, which may affect the subsequent processing steps. If the distance is larger than 40 pixels, the CP is considered a falsely detected point [10].
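The error bands above translate into a small grading helper; the band labels and the use of 10 pixels as the boundary of a fully correct detection reflect my reading of the text rather than a stated rule.

```python
import math

def grade_detection(detected_xy, manual_xy):
    """Grades a detected CP against the manually marked CP using the
    Euclidean-distance bands described in the text ([13], [10])."""
    d = math.dist(detected_xy, manual_xy)
    if d <= 10:
        return "correct"
    if d <= 20:
        return "small error"        # attributable to human vision or the algorithm
    if d <= 40:
        return "significant error"  # may affect subsequent processing steps
    return "false detection"
```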
To demonstrate the performance of the proposed CP detection method, it is compared with the fast edge-map based method [15], which belongs to the same category as the proposed method. The edge-map based method is implemented with the same preprocessing as the proposed method. A detection is accepted when its distance to the manual CP is less than 40 pixels [10]. The testing images are: 800 images in FVC2002 DB2 set (A), the arch images (120 images) in FVC2004 set (A), and 100 images in NIST 4.
The accuracy is shown in Table 1 for different design values D1 of the CP filter, with and without the DL conditional filter. The consistency constraints are also applied in the CONSH case (Table 1); the constraint values on the consistency (cons) are given in Table 2. The accuracy results for FVC2002 DB2 set (A) (Table 1) show that the CP filter performs best for the CP detection without the DL filter, since there are only 3 plain arches in this database. For the FVC2004 DB1 set (A) database, the best results are obtained when the CP and DL filters are used with the consistency conditions, because all the tested images are of the arch type. As for the NIST 4 database, the best results are given by the CP detection with the DL filter and the consistency constraints; the sample tested from NIST 4 contains 40% arch images. The optimal design is therefore the one that depends on the combined filters with the orientation consistency constraints.
Moreover, the average execution time of the CP detection methods per fingerprint is shown in Table 3. It is reduced by the proposed method by an average factor of 3.2 compared to the edge-map based method. This can be explained by analyzing the minimum arithmetic operations used by both methods (Table 4): the edge-map based method uses complicated operations that consume more time in its processing. The average computational complexity of the proposed method is of order O(m), where m is the number of blocks in the orientation map. The main error source is the distorted areas in the fingerprint images, which cause errors in the equivalent areas of the orientation map.
TABLE 1
THE ACCURACY OF THE CP DETECTION METHODS (IN PERCENTAGE). (NA: NOT AVAILABLE)

Method                   FVC2002 DB2 set (A)   FVC2004 DB1           NIST 4
                         (800 images)          (120 arch images)     (100 images)
Edge map [15]            78.21                 59.66                 54
D1=2                     89.8                  50.42                 51
D1=3                     91.44                 57.98                 49
D1=4                     91.18                 65.55                 66
D1=5                     91.81                 66.39                 73
(D1=2) + DL              88.41                 81.51                 83
(D1=3) + DL              89.55                 82.35                 87
(D1=4) + DL              90.05                 85.71                 86
(D1=5) + DL              90.81                 86.55                 87
(D1=5) + DL + CONSH      90.3                  89.92                 88
Consistency [13]         NA                    72                    NA
Complex filter [14]      NA                    90                    NA


TABLE 2
THE ORIENTATION CONSISTENCY CONSTRAINTS

Response level     CONSH
a                  0.0906 < cons < 0.86
a-1                0.16 < cons < 0.7
a-2                0.09 < cons < 0.85
a-3                0.5 < cons < 0.87
TABLE 3
THE AVERAGE EXECUTION TIME OF THE CP DETECTION METHODS (IN MS)

Method (code language)            FVC2002 DB2 set (A)   FVC2004 DB1           NIST 4
                                  (800 images)          (120 arch images)     (100 images)
Edge map based [15] (MATLAB)      96                    105.8                 97.5
Proposed method (MATLAB)          36.7                  35.6                  19.4
TABLE 4
THE ARITHMETIC OPERATIONS USED BY THE CP DETECTION METHODS PER BLOCK

CP detection method      cond      add & sub   multi   trig
Edge map based [15]      8+2n^2    5+4n        n^2     2n^2
Proposed method          2n+39     4n-5        -       -
5 CONCLUSION
The CP conditional filter effectively detects the CP in the non-arch types; when combined with the developed DL conditional filter and with constraints applied on the orientation consistency, it detects the CP in the arch types as well. The conditional-filter-based method is characterized by a simplicity of design that would be appropriate for hardware implementation. In addition, the proposed method is faster than the edge-map based method by a factor of 3.2, which makes it suitable for use in real-time systems.
REFERENCES
[1] R. Arjona and I. Baturone, A digital Circuit for Extracting Singular
Points from Fingerprint Images, Proc. Eighteenth IEEE Int. Conf. on
Electronics, Circuits and Systems, Lebanon, pp. 627-630, Dec. 2011.
[2] N. Neji, A. Boudabous, W. Hahrrat and N. Masmoudi, Architecture
and FPGA Implementation of the CORDIC Algorithm for the Finger-
print Recognition Systems, Proc. Eighth Int. Multi Conf. on Systems,
Signals and Devices, Tunisia , Mar. 2011.
[3] X. H. Su, L. Q. Yin, L. Gao and Z.X. Zhang, The Design of
Fingerprint Identification System based on TMS320VC5402,
Advanced Materials Research J., vol.542-543, pp. 1339-1342, Jun. 2012.
[4] S. Chikkerur, A. N. Cartwright and V. Govindaraju, K-plet and
CBFS: A Graph based Fingerprint Representation and Matching
Algorithm, Lecture Notes in Computer Science, Adv. Biometrics, vol.
3832, pp. 309-315, 2005.
[5] S. Chikkerur and N. Ratha, Impact of Singular Point Detection on
Fingerprint Matching Performance, Proc. Workshop on Automatic
Identification Advanced Technologies, pp. 207-212, Oct. 2005.
[6] K.C. Chan, Y.S. Moon and P.S. Cheng, Fast Fingerprint Verification
using Subregions of Fingerprint Images, IEEE Trans. on Circuits and
Systems for Video Technology, vol. 14, no. 1, pp. 95-101, Jan. 2004.
[7] D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of
Fingerprint Recognition, second edition, Springer, chapter 3, pp. 97-166,
2009.
[8] A.M. Bazen and S.H. Gerez, Systematic Methods for the
Computation of the Directional Fields and Singular Points of
Fingerprints, IEEE Trans. on Pattern Analysis Machine Intelligence, vol.
24, no. 7, pp. 905-919, Jul. 2002.
[9] A.K. Jain, S. Prabhakar, L. Hong and S. Pankanti, Filterbank-based
fingerprint matching, IEEE Transactions on Image Processing, vol. 9,
pp. 846-859, 2000.
[10] T. Le and H. Van, Fingerprint Reference Point Detection for Image
Retrieval Based on Symmetry and Variation, Pattern Recognition, vol.
45, no. 9, pp. 3360-3372, Sept. 2012.
[11] M. R. Hawthorne, Fingerprints Analysis and Understanding, CRC Press,
Taylor and Francis group, 2009.
[12] V. Areekul, K. Suppasriwasuseth and S. Jirachawang, The New Focal
Point Localization Algorithm for Fingerprint Registration, Eighteenth
Proc. Int. Conf. on Pattern Recognition, vol. 4, pp. 497-500, 2006.
[13] M. Liu, X. Jiang and A.C. Kot, Fingerprint Reference Point
Detection, EURASIP J. on Applied Signal Processing, Hindawi
Publishing, vol. 4, pp. 498-509, Jan. 2005.
[14] K. Nilsson and J. Bigun, Localization of Corresponding Points in
Fingerprints by Complex Filtering, Pattern Recognition Letters, vol. 24,
no. 13, pp. 2135-2144, Sept. 2003.
[15] G. Cao, Z. Mao, Q.S. Sun, Core-Point Detection Based on Edge Maps
in Fingerprint Images, J. of Electronic Imaging, vol.18, no. 1, Feb. 2009.
[16] G. Cao, Q. S. Sun, Z. Mao, Y. Mei, Detection of Core Points in
Fingerprint Images Based on Edge Map, Proc. Int. Conf. on Electronic
Computer Technology, Feb. 2009.
[17] S. Mohammadi, A. Farajzadeh, Reference Point and Orientation
Detection of Fingerprints, Proc. Second Int. Conf. on Computer and
Electrical Engineering, United Arab Emirates, pp. 469-473, Dec. 2009.
[18] G.A. Bahgat, A. H. Khalil, S. Mashali, Singular Point Detection using
a Matching Candidate in Fingerprint Images, Proc. First Int. Conf. on
New Paradigms in Electronics and Information Technologies, Egypt, Oct.
2011.
[19] G.A. Bahgat, A.H. Khalil, N.S. Abdel Kader, and S. Mashali, Fast and
Accurate Algorithm for Core Point Detection in Fingerprint Images,
Egyptian Informatics Journal, submitted for publication.
[20] R.C. Gonzalez and R.E. Woods, Digital Image Processing, second
edition, chapter 3, 2001.
[21] X. Chen, J. Tian, J. Cheng, and X. Yang, Segmentation of Fingerprint
Images using Linear Classifier, EURASIP J. on Applied Signal
Processing, vol. 2004, no. 4, pp. 480-494, Jan. 2004.
[22] J. Zhou, F. Chen, and J. Gu, A Novel Algorithm for Detecting
Singular Points from Fingerprint Images, IEEE Trans. on Pattern
Analysis and Machine Intelligence, vol. 31, no. 7, pp. 1239-1250, July
2009.
[23] http://bias.csr.unibo.it/fvc2002/. 2009.
[24] http://bias.csr.unibo.it/fvc2004/ . 2012.
[25] http://www.nist.gov/srd/nistsd4.cfm. 2012.