
Liveness Detection on Touchless Fingerprint Devices Using Texture Descriptors

and Artificial Neural Networks

Cauê Zaghetto, Mateus Mendelson, Alexandre Zaghetto, Flávio de B. Vidal

University of Brasília - Department of Computer Science - Brasília, Brazil


{zaghettoc, msilva}@bitgroup.co, {zaghetto, fbvidal}@unb.br

Abstract

This paper presents a liveness detection method based on texture descriptors and artificial neural networks, whose objective is to identify potential spoofing attacks against touchless fingerprinting devices. First, a database was created. It comprises a set of 400 images, of which 200 represent real fingers and 200 represent fake fingers made of beeswax, corn flour play dough, latex, silicone and wood glue, 40 samples each. The artificial neural network classifier is trained and tested in 7 different scenarios. In Scenario 1, there are only two classes, "real finger" and "fake finger". From Scenarios 2 to 6, six classes are used, but classification is done considering the "real finger" class and each one of the five "fake finger" classes, separately. Finally, in Scenario 7, six classes are used and the classifier must indicate to which of the six classes the acquired sample belongs. Results show that the proposed method achieves its goal, since it correctly detects liveness in almost 100% of cases.

1. Introduction

Given the increasing necessity to restrict access to information and environments, to raise the efficiency of criminal investigations, and to offer better services in government programs, major efforts have been made towards the development of a variety of automatic recognition mechanisms, such as biometric systems. Among all existing biometrics, fingerprints are probably the most popular and widely used all over the world.

The evolution of biometric systems can be achieved through improvements in the algorithms designed to extract discriminant features, or in the acquisition hardware. Regarding fingerprint-based biometric technologies, one may mention two principal categories: (a) the traditional touchbased technology; and (b) the photographic touchless technology, which does not require the user to place his fingers on an acquisition surface. Regardless of the technology in use, the problem with biometric systems in general is that they do not perform perfect matches and, therefore, are subject to attacks by fraudulent agents whose objective is to be identified as valid users. Attacks on touchbased systems have been largely investigated; regarding touchless technology, much less has been done. Hence, in order to provide an understanding of malicious interactions with touchless devices, this work presents a method based on texture descriptors and artificial neural networks (ANN) that distinguishes whether a user is interacting with the scanner using a real finger or not.

The paper is organized as follows. In Section 2, basic concepts related to biometric systems are presented. Section 3 describes the types of attacks that may be performed against biometric systems. Section 4 presents the texture descriptors, while the proposed method is discussed in Section 5. The experimental results are presented in Section 6 and, finally, Section 7 presents the conclusions and future work.

2. Biometrics

Biometric authentication can be defined as the automatic recognition of an individual using physiological or behavioral characteristics [1]. Fingerprints, hand geometry, signature, speech, iris, face, keystroke dynamics and gait are some examples.

Behavioral characteristics are those related to how a person acts. Physiological characteristics, in turn, are those related to the physical structure of an individual. Examples of behavioral biometrics are signature, keystroke dynamics and gait. Fingerprints, iris and face are examples of physiological biometrics. Some may be considered a combination of behavior and physiology, such as speech.

A certain characteristic can be used as a biometric trait if it meets the following requirements [2]:

1. Universality: all individuals must have it;
2. Distinctiveness: must be different for each individual;

978-1-5386-1124-1/17/$31.00 ©2017 IEEE 2017 IEEE International Joint Conference on Biometrics (IJCB)

3. Permanence: must not vary with external parameters or over time.

If the above requirements are met, it is still necessary to evaluate the user interaction with the biometric system. Therefore, the following requirements must also be taken into account:

4. Collectability: its acquisition must be possible;
5. Acceptability: individuals must be willing to provide it;
6. Circumvention: it must be robust to attacks.

Finally, the evaluation of a biometric trait according to these six requirements will determine its performance. Fingerprints are among the most successful ones [3].

2.1. Fingerprints

Fingerprint is the name given to the impression left by the friction ridges and valleys located on the surface of each finger. It presents high levels of distinctiveness and permanence, and medium levels of universality, collectability, acceptability and circumvention [2]. Therefore, fingerprints result in efficient biometric systems.

2.2. Fingerprint Acquisition Paradigms

The input of fingerprint authentication systems is digital images representing the ridge-valley structure of real fingers. In general, fingerprint technologies deal with touchbased acquisitions; that is, they require users to press their fingers against an acquisition surface. However, solutions that do not demand contact (touchless) have increasingly been proposed [4, 5, 6, 7, 8] in order to overcome the intrinsic problems related to touchbased technologies. Next, these two paradigms are briefly discussed.

2.2.1 Touchbased acquisitions

The quality of acquired fingerprints clearly affects the overall performance of a fingerprint recognition system. Most of today's fingerprinting technology is touchbased. Major problems with this kind of technology are the uncontrollable distortions and inconsistencies that may be introduced due to skin elasticity. Fingerprint quality may also be seriously influenced by non-ideal contact caused by dirt, sweat, moisture, excessive dryness, air humidity, temperature and latent fingerprints [9]. In some scenarios, the above-mentioned drawbacks impose the need for several acquisition attempts per finger in order to ensure a high quality template, and the enrollment process may become very time-consuming if the number of users to be registered is large. Although over the past few years many algorithms have been proposed to compensate for the limitations of touchbased technology, this sensing paradigm may represent a bottleneck for further improvement of fingerprint image quality. Instead of generating a representation of the finger that tries to mimic ink-based samples, one can use a more faithful high-definition photographic image.

Figure 1. Set of images captured by a three-camera touchless fingerprint sensor: (a), (b), and (c) exemplify capture by the main camera and by cameras spaced 45 degrees counterclockwise and clockwise, respectively.

2.2.2 Touchless acquisitions

Touchless devices do not compel users to press their fingers on a platen and rely on photographic acquisitions. Among the proposed touchless solutions, TBS's devices [8] (http://www.tbs-biometrics.com/de/) use an interesting approach. It combines reflection-based touchless finger imaging with a three-camera multiview system. One camera is positioned to capture the portion of the finger where the core and deltas are normally located; taking this central camera as a reference, the other two are displaced by 45 degrees clockwise and counterclockwise. An example of acquisition is depicted in Figure 1.

3. Attacks on Biometric Systems

The attempt to circumvent a biometric system is called an attack. It can be performed in many different ways and is classified as an indirect attack (an action inside the system) or a direct attack (the attacker interacts with the biometric system directly through the acquisition sensor) [10]. In Figure 2 [11], (a) illustrates a direct attack, while (b) to (e) show indirect attacks. Although in (b) the attack is performed at the sensor level, it requires some knowledge about the internal architecture of the system and is thus considered an indirect attack. This paper is concerned only with direct synthesis attacks (fingerprints are synthesized and presented to the sensor).

Traditionally, a direct attack can be further classified into 3 subcategories: (a) obfuscation (the attacker changes his biometric traits so that the system is not able to recognize him); (b) zero-effort attack (the attacker presents himself as a valid user and simply provides his biometric traits unchanged); and (c) spoofing (the attacker mimics the biometric traits of a valid user) [10]. It is important to mention that direct attacks do not require an intruder to have any specific technological skills, representing a great risk to identification systems.

Figure 2. Points of vulnerability of a biometric system: (a) attack on the acquisition module, a replayed or synthesized finger is presented to the sensor; (b) attack on the acquisition module, insertion of a replayed or synthesized fingerprint image into the system, after the sensor; (c) attack on the feature detection module; (d) attack on the client model (template database); (e) attack on the accept/reject module.

Spoofing requires the use of some kind of artificial material in order to reproduce an authentic fingerprint captured from latent samples, which, for instance, are very likely to be found on mobile phone touch screens. Therefore, before allowing the system to perform feature extraction and posterior matching, one may insert a liveness detection module right after, or in parallel with, the acquisition step.

The design of liveness detection algorithms must consider three dimensions: (a) static or dynamic (operates on one or multiple frames); (b) dependent on user training (requires the user to perform a specific, previously trained procedure); and (c) binary or user specific (indistinctly classifies all presented samples as live or spoof; or is included as part of the user's biometric template) [12].

Liveness detection for traditional touchbased scanners has already been addressed [13, 14, 15, 16]. For touchless scanners, however, the topic remains largely unexplored. Considering the previous definitions, the main objective of this paper is to present a method based on artificial neural networks, texture descriptors and principal component analysis that implements static, user-independent, binary liveness detection for touchless biometric fingerprinting devices. It is assumed that in this kind of scanner, too, an efficient liveness detection module will decisively contribute to minimizing successful spoofing attacks.

Next, we describe the texture descriptors used as inputs to our classifier.

4. Texture Descriptors

Texture descriptors are techniques that extract features from an image and use them to provide information that differentiates it from other images. Three texture descriptors will be described: the Local Binary Pattern, the Improved Local Binary Pattern and the Gray Level Co-occurrence Matrix.

4.1. Improved Local Binary Pattern

The Local Binary Pattern (LBP) [17] is a simple computational technique that consists of analyzing the neighborhood around each pixel in grayscale images in order to generate codes that describe them.

The neighborhood of a pixel p can be defined in many different ways. The neighborhood of special interest in this paper is the one composed of the 8-connected closest pixels N8(p) that surround p in the vertical, horizontal and diagonal directions.

Since an 8-connected neighborhood is used, the LBP algorithm generates an 8-bit code for every pixel p in the image. The bits are set to 0 or 1 depending on the difference between p and each of its 8 neighbors N8^i(p), i = 0..7. If the difference is greater than or equal to 0, then bit 1 is assigned to that specific neighbor; otherwise, the bit is set to zero. Finally, in order to guarantee rotational invariance, 7 circular shifts are applied to the 8-bit binary code, and the minimum integer value observed is chosen as the final descriptor.

The Improved Local Binary Pattern (ILBP) [18] includes the main properties of LBP but presents some extra features which enable the detection of patterns that classic LBP is not able to detect. The first difference between them is that instead of using p to compute the binary codes, it uses the average (here denoted avg) of N8 including p. In other words, considering P9(p) = {N8^i(p), i = 0..7} ∪ {p}, if avg(P9(p)) − P9^i(p) ≤ 0, i = 0..8, then bit 1 is assigned to the i-th pixel of P9(p); otherwise, the bit is set to zero. It is important to notice that, since the central pixel is now included, i varies from 0 to 8 and the binary code has 9 bits. Here, rotational invariance is also achieved by successive shifts until the code with minimum value is found.

4.2. Gray-level Co-occurrence Matrix

The Gray-level Co-occurrence Matrix (GLCM) [19] is another technique used for image texture description. Its simplest configuration consists of an L × L matrix whose element at coordinate (i, j) represents the number of times two adjacent pixels have the values i and j in the original image. However, this method is not limited to a single specific direction, a single neighboring pixel or two dimensions; more complex uses may be employed.

Among the 14 measurements that can be derived from the GLCM, 4 are most commonly used [19, 20], namely,

contrast (f_1), correlation (f_2), energy (f_3) and homogeneity (f_4). These measurements, used to compose the texture descriptor, are defined according to Equations 1 to 4,

f_1 = \sum_{k=0}^{G-1} k^2 \Big\{ \sum_{n=1}^{G} \sum_{\substack{m=1 \\ |n-m|=k}}^{G} p(n,m) \Big\}   (1)

f_2 = \frac{\sum_{n=0}^{G-1} \sum_{m=0}^{G-1} (nm)\, p(n,m) - \mu_x \mu_y}{\sigma_x \sigma_y}   (2)

f_3 = \sum_{n=0}^{G-1} \sum_{m=0}^{G-1} p(n,m)^2   (3)

f_4 = \sum_{n=0}^{G-1} \sum_{m=0}^{G-1} \frac{p(n,m)}{1 + (n-m)^2}   (4)

where G is the number of gray levels in the image, p(n, m) is the normalized GLCM, and μx, μy, σx and σy are the means and standard deviations of the marginal-probability vectors p_x(n) and p_y(m), obtained by summing the rows and the columns of p(n, m), respectively.

In the next section, the proposed method is detailed.

5. Proposed Method

The proposed method aims to evaluate images captured by a multiview touchless fingerprinting device in order to detect liveness as a means of preventing spoofing attacks. Each step, presented in Figure 3, is detailed in the next sections.

Figure 3. Flowchart of the proposed method: (a) pre-processing (light gray); (b) feature extraction and dimensionality reduction (mid gray); (c) classification (dark gray).

5.1. Image Acquisition

The biometric device produces samples composed of three images captured by equidistant cameras placed along a 90-degree arch. Although all three images may be used, only the main view, shown in Figure 1 (a), is considered. As described in Section 6, satisfactory results are achieved without using the other two views, with the advantage of reducing the complexity of the training algorithm.

5.2. Pre-processing

The pre-processing steps prepare the image for the ILBP and GLCM texture extraction algorithms. The result is an enhanced image with the finger region segmented. The pre-processing steps are detailed below.

1. Circular averaging filter: reduces noise.
2. Binarization: binarizes the image with a global threshold.
3. Morphological operation: removes noise remaining after binarization.
4. Segmentation: defines a region of interest.
5. Histogram equalization: applies histogram equalization to the region of interest.

5.3. Feature Extraction

The proposed method uses a combination of the ILBP and GLCM texture descriptors. They are evaluated only for pixels belonging to the region of interest (foreground). From the GLCM, the contrast, correlation, energy and homogeneity measurements are used. The feature vector, v_c, is defined by Equation 5. It is noteworthy that 8 GLCM matrices are calculated, considering all the directions determined by the current pixel being analyzed, p, and the pixels belonging to N8(p). The final values for contrast, correlation, energy and homogeneity are defined as the averages of the 8 values calculated from the 8 GLCM matrices.

v_c = [a_1 ... a_512 b_1 b_2 b_3 b_4],   (5)

where the coefficients a_1 to a_512 and b_1 to b_4 represent the ILBP and GLCM descriptors, respectively.

5.4. Principal Component Analysis

Principal Component Analysis (PCA) [21] is a technique that may be applied to problems where the reduction of complexity is critically necessary. In classification problems, for instance, it accomplishes this goal by reducing the dimensionality of the feature vectors. PCA generates the so-called principal components, an orthogonal basis whose elements are new features resulting from linear combinations of those that make up the original vector.
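As an illustration of Equations 1 to 4 and of the direction averaging described in Section 5.3, the following is a minimal NumPy sketch. It is our own illustrative code, not the authors' implementation; the function name and the choice of 8 quantization levels for G are assumptions.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Average contrast, correlation, energy and homogeneity (Eqs. 1-4)
    over one GLCM per 8-connected direction, as in Section 5.3."""
    img = np.asarray(img, dtype=np.float64)
    # Quantize the image to `levels` gray levels (G in the text).
    q = (img * levels / (img.max() + 1.0)).astype(int)
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    n, m = np.indices((levels, levels))  # row/column level indices
    feats = np.zeros(4)
    h, w = q.shape
    for dy, dx in dirs:
        glcm = np.zeros((levels, levels))
        for y in range(h):
            for x in range(w):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    glcm[q[y, x], q[yy, xx]] += 1
        p = glcm / glcm.sum()                      # normalized GLCM p(n, m)
        mu_x, mu_y = (n * p).sum(), (m * p).sum()  # marginal means
        sd_x = np.sqrt((((n - mu_x) ** 2) * p).sum())
        sd_y = np.sqrt((((m - mu_y) ** 2) * p).sum())
        f1 = (((n - m) ** 2) * p).sum()                          # contrast
        f2 = ((n * m * p).sum() - mu_x * mu_y) / (sd_x * sd_y)   # correlation
        f3 = (p ** 2).sum()                                      # energy
        f4 = (p / (1 + (n - m) ** 2)).sum()                      # homogeneity
        feats += np.array([f1, f2, f3, f4])
    return feats / len(dirs)
```

Note that the contrast expression above uses the equivalent elementwise form of Equation 1, summing (n − m)² p(n, m) directly rather than grouping terms by k = |n − m|.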

The dimensionality reduction of the original problem occurs through the elimination of principal components that do not contribute significantly to the reconstruction of the original signal. In other words, PCA concentrates most of the energy contained in a signal in a small number of components. Our input vector, v_c, is composed of a significant number of features (516) and, therefore, a principal component analysis is performed in order to eliminate potentially irrelevant data, generating a new input vector, v_PCA.

5.5. Classification

Using the processed vector v_PCA as input, a feedforward artificial neural network (ANN) is used to classify the acquired samples. The possible classification scenarios are:

1. real fingers × all fake fingers;
2. real fingers × fake beeswax fingers;
3. real fingers × fake corn flour play dough fingers;
4. real fingers × fake latex fingers;
5. real fingers × fake silicone fingers;
6. real fingers × fake wood glue fingers;
7. classification returns one of six possible classes (real, beeswax, corn flour play dough, latex, silicone, wood glue).

The proposed ANN is composed of three layers: the input layer, one hidden layer and the output layer. The first layer is fed with the input vector v_PCA. The hidden layer is composed of a variable number of neurons defined experimentally for each scenario. Here it is assumed that the kind of neural network chosen as our classifier can approximate any function with a finite number of discontinuities arbitrarily well [22]. As for the output layer, there are two possibilities: for Scenarios 1 to 6 (see above), the output layer has only one neuron and generates a real number between −1 and +1; in this case, the expected target α is defined as shown in Eq. 6. For Scenario 7, the output layer is composed of 6 neurons and generates an output vector; here, classifications are made by comparing the output vector with the expected target vector Φ, as shown in Eq. 7.

\alpha = \begin{cases} +1, & \text{if real finger} \\ -1, & \text{if fake finger} \end{cases}   (6)

\Phi = \begin{cases} [+1\; -1\; -1\; -1\; -1\; -1]^t, & \text{if real} \\ [-1\; +1\; -1\; -1\; -1\; -1]^t, & \text{if beeswax} \\ [-1\; -1\; +1\; -1\; -1\; -1]^t, & \text{if dough} \\ [-1\; -1\; -1\; +1\; -1\; -1]^t, & \text{if latex} \\ [-1\; -1\; -1\; -1\; +1\; -1]^t, & \text{if silicone} \\ [-1\; -1\; -1\; -1\; -1\; +1]^t, & \text{if wood glue} \end{cases}   (7)

Except for the input layer, whose neurons use a linear transfer function, all neurons in all scenarios have hyperbolic tangent transfer functions [23]. The next section shows the experimental results obtained by the proposed method.

6. Experimental Results

As previously described, 7 different scenarios are assembled to evaluate the proposed method. Scenario 1 is designed with the objective of testing the efficiency of our method regarding its capacity to detect liveness. Scenarios 2 to 6 evaluate the distinctiveness between real fingers and fake fingers synthesized using 5 different materials, individually. In Scenario 7, the general performance of the proposed classifier is evaluated: in this last scenario, one of the six possible materials, including real fingers, must be detected.

A dataset of 400 images was constructed and used to train, validate and test the feedforward neural network classifier. This dataset is composed of 200 images of real fingers (20 individuals, 5 samples of the thumb and 5 samples of the index finger each) and 200 images of fake fingers (40 of each material). The real finger subset was constructed to be as diversified as possible, containing different skin tones, thus raising the classification challenge. Figure 4 shows an example image for each one of the possible materials that compose the dataset, including a sample of a real finger. The dataset was equally divided into training, validation and testing subsets.

Figure 4. One example of a capture obtained by the main camera for each one of the materials that compose our training and test sets: (a) real, (b) latex, (c) dough, (d) beeswax, (e) silicone, (f) wood glue.

To generate the v_PCA input vector, a principal component analysis is performed. In our experiments, PCA is used to reduce the dimensionality of the input vector from 516 features to 2, 4 or 8 principal components, as detailed in the results. Table 1 summarizes the results for Scenarios 1 to 6 using 2 and 4 principal components; for 8 principal components the hit rate for all scenarios is 100%, so presenting this result in table format is unnecessary. Note that the performance of the classifier is measured by its False Rejection Rate (FRR), False Acceptance Rate (FAR) and Hit Rate. Furthermore, the best number of neurons (#nrns) in the ANN hidden layer is also presented for each case. For Scenario 7, three confusion matrices are shown in Tables 2, 3 and 4, one for each number of principal components used (2, 4 and 8). The confusion matrices contain the six possible classes, identified as follows: (R) real finger; (B) beeswax; (D) play dough; (L) latex; (S) silicone; and (W) wood glue.

Table 1. Results: Scenarios 1 to 6.

Scn.   FAR (%)   FRR (%)   Hit Rate (%)   #nrns
2 Principal Components
1      2.21      2.21      95.59          11
2      0.00      0.00      100.00         18
3      0.00      0.00      100.00         15
4      0.00      3.70      96.30          14
5      3.70      3.70      92.59          14
6      0.00      3.70      96.30          17
4 Principal Components
1      0.00      1.47      98.53          15
2      0.00      0.00      100.00         16
3      0.00      0.00      100.00         10
4      0.00      0.00      100.00         13
5      0.00      0.00      100.00         16
6      0.00      0.00      100.00         16

Table 2. Scenario 7: 2 Principal Components. 20 neurons in hidden layer.

Table 3. Scenario 7: 4 Principal Components. 20 neurons in hidden layer.

Table 4. Scenario 7: 8 Principal Components. 11 neurons in hidden layer.

One may notice that the number of principal components used to generate the input vector v_PCA highly influences the overall results. Looking at the classifications performed in Scenarios 1 to 6 with 8 principal components, the proposed solution correctly detects liveness 100% of the time; in other words, no fake finger is confused with a real one. Besides that, for Scenario 7, it can be verified that the principal diagonal of Table 4 shows much better results than the principal diagonals of Tables 2 and 3. These principal diagonals show the hit rate, in percentage, when the actual classes match the predictions made by the classifier. More principal components could be used; however, the classifier already presents satisfactory performance with 8 of them. One final observation is that silicone seems to be the hardest material to distinguish from real fingers as well as from other fake fingers.
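The dimensionality reduction of Section 5.4 and the Scenario 7 target encoding of Eq. 7 can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function names and class labels are ours, not from the paper.

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto their first k principal components.

    features: (num_samples, 516) matrix of concatenated ILBP+GLCM
    descriptors (v_c). Returns the reduced vectors v_PCA with k columns.
    """
    centered = features - features.mean(axis=0)  # center the data
    # SVD of the centered data: rows of vt are the principal directions,
    # ordered by decreasing singular value (i.e., explained variance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def scenario7_target(material):
    """Expected target vector Phi of Eq. 7: +1 for the true class,
    -1 elsewhere."""
    classes = ["real", "beeswax", "dough", "latex", "silicone", "woodglue"]
    phi = -np.ones(6)
    phi[classes.index(material)] = +1.0
    return phi
```

For Scenarios 1 to 6, the scalar target of Eq. 6 is simply +1 for a real finger and −1 for a fake one; the six-neuron output of Scenario 7 is compared against the vector returned above.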

7. Conclusion

This paper presented a liveness detection method to identify potential spoofing attacks against touchless fingerprinting devices. The method takes nothing but a photographic image of the finger as input. The main idea is to take advantage of the fact that light reflects differently off each material, generating different texture patterns. To test our method, 200 images of real fingers from 20 different people were compared against 200 images of fake fingers made of beeswax, play dough, latex, silicone and wood glue. Results show that, using only 8 principal components obtained by applying PCA to the 516-dimensional texture descriptor, the classifier correctly detects liveness 100% of the time. When the method was applied to the specific scenario where the 6 classes (materials) are presented and the classifier must indicate to which of the 6 classes the acquired sample belongs, general performance was around 97.56%. Since, to the best of our knowledge, there is no definitive liveness detection method for touchless fingerprint acquisition processes, the main contribution of the present work is to address this problem successfully, as shown by the results. Future work may include the expansion of the dataset to include more images and more materials in an attempt to evaluate the robustness of the proposed solution. Other kinds of attacks may also be considered.

References

[1] J. Wayman. A definition of biometrics. In National Biometric Test Center Collected Works 1997-2000. San Jose State University, 2000.
[2] A. K. Jain, A. Ross, and S. Prabhakar. An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):4–20, Jan 2004.
[3] Arun Ross and Anil K. Jain. Human recognition using biometrics: an overview. Annales des Télécommunications, 62(1):11–35, 2007.
[4] R. D. Labati, V. Piuri, and F. Scotti. Neural-based quality measurement of fingerprint images in contactless biometric systems. In The 2010 International Joint Conference on Neural Networks (IJCNN), pages 1–8, July 2010.
[5] Ruggero Donida Labati, Vincenzo Piuri, and Fabio Scotti. A neural-based minutiae pair identification method for touchless fingerprint images. In 2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM), pages 96–102, April 2011.
[6] A. Genovese, E. Muñoz, V. Piuri, F. Scotti, and G. Sforza. Towards touchless pore fingerprint biometrics: A neural approach. In 2016 IEEE Congress on Evolutionary Computation (CEC), pages 4265–4272, July 2016.
[7] R. Donida Labati, A. Genovese, V. Piuri, and F. Scotti. Toward unconstrained fingerprint recognition: A fully touchless 3-D system based on two views on the move. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(2):202–219, Feb 2016.
[8] G. Parziale. Touchless fingerprinting technology. In Nalini K. Ratha and Venu Govindaraju, editors, Advances in Biometrics: Sensors, Algorithms and Systems, chapter 2. Springer, London, 2008.
[9] A. K. Jain and S. Pankanti. Automated fingerprint identification and imaging systems. In H. C. Lee and R. E. Gaensslen, editors, Advances in Fingerprint Technology, chapter 8. CRC Press, Boca Raton, 2nd edition, 2001.
[10] Sébastien Marcel, Mark S. Nixon, and Stan Z. Li. Handbook of Biometric Anti-Spoofing: Trusted Biometrics Under Spoofing Attacks. Springer Publishing Company, Incorporated, 2014.
[11] Michael Wagner and Girija Chetty. Liveness assurance in face authentication. In Stan Z. Li and Anil Jain, editors, Encyclopedia of Biometrics, pages 908–915. Springer, 2015.
[12] Stephanie A. C. Schuckers. Liveness detection: Fingerprint. In Stan Z. Li and Anil Jain, editors, Encyclopedia of Biometrics, pages 924–930. Springer, 2015.
[13] Z. Akhtar, C. Micheloni, and G. L. Foresti. Correlation based fingerprint liveness detection. In 2015 International Conference on Biometrics (ICB), pages 305–310, May 2015.
[14] W. Kim. Towards real biometrics: An overview of fingerprint liveness detection. In 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pages 1–3, Dec 2016.
[15] W. Kim. Fingerprint liveness detection using local coherence patterns. IEEE Signal Processing Letters, 24(1):51–55, Jan 2017.
[16] LiveDet: Liveness Detection Competition Series. Available at http://www.livedet.org (date last accessed 14 April 2017).
[17] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, Jul 2002.
[18] Hongliang Jin, Qingshan Liu, Hanqing Lu, and Xiaofeng Tong. Face detection using improved LBP under Bayesian framework. In Third International Conference on Image and Graphics (ICIG'04), pages 306–309, Dec 2004.
[19] R. M. Haralick, K. Shanmugam, and I. Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6):610–621, Nov 1973.
[20] L. K. Soh and C. Tsatsoulis. Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Transactions on Geoscience and Remote Sensing, 37(2):780–795, Mar 1999.
[21] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 2002.
[22] Howard Demuth and Mark Beale. Neural Network Toolbox for Use with MATLAB, 1993.
[23] Simon Haykin. Neural Networks: A Comprehensive Foundation. 2004.
