
Pattern Recognition Letters 82 (2016) 232–241


A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics✩

Abhijit Das a,∗, Umapada Pal b, Miguel Angel Ferrer c, Michael Blumenstein a

a Institute for Integrated and Intelligent Systems, Griffith University, Queensland, Australia
b Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, Kolkata, India
c IDeTIC, University of Las Palmas de Gran Canaria, Las Palmas, Spain

Article history: Available online 9 December 2015

Keywords: Biometrics; Sclera; Liveness; Iris; Visible spectrum

Abstract

In this research, a new framework for software-based liveness detection against direct attacks in multimodal ocular biometrics across the visible spectrum is proposed. The framework aims to provide a more realistic method for liveness detection than previous frameworks proposed in the literature. To fulfil the above-highlighted aims, intra-class level (i.e. user level) liveness detection is introduced in this framework. To detect liveness, a new set of image quality-based features is proposed for multimodal ocular biometrics in the visible spectrum. A variety of transformed domain (focus-related), aspect-related and contrast-related quality features are employed to design the framework. Furthermore, a new database is developed for liveness detection of multimodal ocular biometrics, which has a prominent presence of multimodal ocular traits (both sclera and iris). Moreover, this database comprises a larger variety of fake images, which were prepared by employing the versatile forging techniques that can be exhibited by imposters. Therefore, the proposed schema deals with versatile categories of spoofing methods which were not considered previously in the literature. The database contains a set of 500 fake and 500 genuine eye images acquired from 50 different eyes. An appreciable liveness detection result is achieved in the experiments. Furthermore, the experimental results show that this new framework is more efficient and competitive when compared to previous liveness detection schemes.
© 2015 Elsevier B.V. All rights reserved.

1. Introduction

The eye offers a wide variety of biometric traits.1 Among the eye traits, the iris is the most reliable and promising one [6]. Regardless of their usefulness, iris traits exhibit some limitations with regard to their universal applicability [8,9]. This is the case in uncontrolled environments, and specifically with changes in the gaze angle of the eye and in non-ideal eye images.

In order to mitigate this challenge, multimodal biometrics have been proposed in the visible spectrum, combining iris traits with ocular vessel patterns or sclera blood vessel patterns [10,11]. Traditional multimodal ocular biometric authentication systems are not equipped to discriminate impostors (those who can illegally duplicate genuine traits and obtain the privileges to access a system as a genuine user). Therefore, in order to enhance the security and reliability of these biometric systems, liveness detection is a necessary step to prevent threats from intruders.

Liveness detection refers to the different techniques used as a countermeasure against the physical spoofing of biometric samples. Liveness detection methods may scrutinise physical properties of a living body, such as density, elasticity, electrical capacitance, spectral reflection and absorbance, or visual properties (colour, etc.); they may analyse body fluids (DNA, etc.) or involuntary signals of a living body such as the pulse, blood pressure, etc. Bodily responses to external stimuli, for instance smiling or the blinking of an eye, can also be employed to establish liveness.

Most of the eye liveness detection work and the associated databases that were developed were aimed at iris liveness. In those datasets, iris images were acquired in the infra-red spectrum. However, for multimodal eye biometrics, the acquisition of iris and sclera traits is required in the visible spectrum (because the sclera vessel patterns are not prominent in infra-red images). Furthermore, in the well-known existing databases (Biosec [13], Clarkson [22], NotreDame [5], Warsaw [3] and MobBIO [24]), the sclera vessel patterns are not visible due to the image acquisition approach adopted while developing them. Moreover, the fake images included in these databases were developed from printed eye images and

✩ This paper has been recommended for acceptance by Maria De Marsico.
∗ Corresponding author at: School of Information and Communication Technology, G06, 1.17 Griffith University, Gold Coast 4222, Australia. Tel.: +61 755527090.
E-mail addresses: abhijit.das@griffithuni.edu, abhijitdas2048@gmail.com (A. Das).
1 http://clinfowiki.org/wiki/index.php/Ocular_biometrics.

http://dx.doi.org/10.1016/j.patrec.2015.11.016
0167-8655/© 2015 Elsevier B.V. All rights reserved.

artificial lens patterns. Nowadays, however, several other sophisticated techniques, such as portable screens and mobile devices, can be used by intruders to forge the highlighted multimodal ocular system in the visible spectrum.

Therefore, the way forward is the development of a new database acquired in the visible lighting range, which consists of original and fake images (populated from printed images and portable screen images). Additionally, in this research, we aim to propose an image quality measure framework for software-based liveness detection for ocular biometrics in the visible spectrum.

Furthermore, a class-level/user-level-based liveness framework is proposed in this work. In a real-life scenario, liveness detection is required only for those query images which are recognized correctly. Therefore, the liveness detection problem can be reduced to a two-class classification problem at the intra-class level/user level rather than the inter-class level/database level. A class-level liveness detector not only reduces the complexity of the schema, but also makes the quality features more useful for classifying the genuine and fake data. The reason behind the improvement is the fact that each class has an individual feature represented by a threshold of the quality feature score (different thresholds per user were obtained).

The organization of the rest of the paper is as follows. In Section 2, the existing literature on liveness detection is reviewed. Section 3 describes the proposed framework and the various new quality-based features employed. Section 4 contains the details of the supporting experimental results and describes the proposed database utilized for experimentation. Finally, Section 5 draws the overall conclusions and outlines the future scope of the work.

2. Literature review

Many approaches for identifying forgeries and establishing liveness have been investigated in the literature. In [21], the authors identified two types of attacks that can be adopted by intruders while penetrating a biometric system: direct and indirect.

Direct attacks: Direct attacks are committed at the sensor level. They relate to the generation of synthetic biometric samples (for instance, iris or face images or videos) in order to fraudulently access a system.

Indirect attacks: Indirect attacks are performed at the digital level by intercepting the data flow [21]; they attack the feature extractor, the matcher or the weak points in the communication channels.

The solutions and specific countermeasures proposed in the literature can be categorized into two groups of techniques [14], as follows.

Software-based techniques: In this schema, fake biometric traits are detected once the sample has been acquired with a standard sensor. Subsequently, image quality features, body movement features, motion features and physical properties are employed to realize liveness.

Hardware-based techniques: In this schema, specific devices are added to the sensor in order to detect particular properties. As an example, this may include the detection of live fingers in a fingerprint recognition system via blood pressure.

Furthermore, it can be noted in the literature that liveness detection methods can be classified into four categories. This categorization is based on the approach adopted to characterize the biometric trait and liveness, as well as the timing of measurement [25].

As mentioned previously, among the ocular biometrics, the iris is the most promising trait. Various examples of research on liveness detection for iris biometrics can be found in the literature. The potential approaches for forging iris-based systems that are highlighted in the literature are:

Eye image/video: Scanning an image/video from a portable screen.
Printed images: Scanning a high resolution printed image of an eye.
Lens: Glass/plastic artificial lenses, etc.

The feasibility of these attacks has been investigated, and some researchers have reported that it is actually possible to spoof iris recognition systems with printed iris images and well-made colour iris lenses [4,6,19].

In the context of iris liveness detection, the potential of quality assessment to identify genuine and fake iris samples acquired from high quality printed images has been explored as a way to detect spoofing attacks in [14]. In that work, a strategy based on the combination of several quality-related features was also used for spoofing detection in the context of iris liveness, and a framework for feature selection was proposed. Another approach using quality-based feature assessment for liveness detection in the context of iris images appears in [17]. One more example of assessing iris image quality-based liveness measures, such as occlusion, contrast, focus and angular deformation, can be found in [2].

The use of texture analysis for iris liveness is recorded in [16]. The analysis of frequency distribution rates of some specific regions of the iris can be found in [20]. Some significant developments in iris liveness detection can be found in the LivDet competition,2 organized to record the recent developments in this research domain; some well-established texture assessment-based measures were used by the participants to establish liveness detection. In previous research pertaining to liveness, manual segmentation was adopted to find the region of interest, which is quite unrealistic. A recent work [23] investigated this and concluded that automatic segmentation does not influence the liveness detection measure. Moreover, databases for iris liveness research have also been proposed in the literature, such as Biosec [13], Clarkson [22], NotreDame [5], Warsaw [3] and MobBIO [24]. In other research [12], multi-angle-based liveness is proposed for sclera biometrics; in that work, combinations of multi-angle sequences were employed for liveness detection. Frequently in the literature, most of the seminal and recent works concerning liveness detection [1] in fingerprint-based recognition systems [15] have employed image quality features for liveness detection. Another approach for software-based liveness detection against direct attacks, using a challenge-response framework embedded into the visual or audio-visual signal, is reported in [18]. Several properties of a living body, such as bodily responses to external stimuli, are reported in [18] for real-time face detection and motion analysis.

A similar idea can also be implemented for the ocular liveness scenario by asking the user to blink the left eye, open the mouth, rotate the head, etc. in an unknown sequence. However, this may increase the complexity of the model and may reduce the user-friendliness of the proposed system (as it requires user participation to respond to the challenges prompted by the system). Making this trade-off, we have developed a model that is simple. Therefore, by analysing the above facts, in our framework we have endeavoured to propose subject-specific image quality-based features and their combinations for liveness detection.

The specific contributions of the proposed liveness detection schema are as follows:

(1) A new realistic framework for liveness detection, as shown in Fig. 1(b), is proposed at the intra-class level/user level, which

2 LivDet series, available online at http://livdet.org/.

Fig. 1. (a) Inter-class level or database-level liveness framework, (b) intra-class/user-level liveness framework, (c) a graphical representation of the proposed framework for liveness detection.

makes the quality features more useful than the other (previous) liveness frameworks.
(2) A more realistic database for liveness detection is proposed, which is equipped with fake images that were prepared employing more versatile and sophisticated forging techniques that can be adopted by imposters.
(3) By analysing and observing the genuine and fake images, image quality-based features of various properties are proposed for liveness detection in ocular biometrics across the visible spectrum. These measures are based on transformed-domain, geometrical-relation and contrast-measure quality features.

3. Proposed liveness framework

It can be concluded from the literature that the biometric liveness detection problem is a two-class classification problem, i.e. it classifies the input image into two classes, either as a genuine sample or as a fake sample. By processing the input images through image processing, computer vision and signal processing-based quality assessment, this classification can be executed.

For this classification, various liveness frameworks have been proposed in the literature. These frameworks have been designed at the inter-class/database level. In contrast to the existing frameworks, the proposed framework deals with the liveness problem as a two-class

classifier problem at the intra-class/user level (fake and genuine classification at each user level; block diagrams of the inter-class and intra-class levels are provided in Fig. 1(a) and (b)).
The working principle of the proposed liveness framework is as
follows:
Enrolment: At the enrolment step, we obtain separate genuine and fake representations of each user. For enrolling the genuine data we need the individual's cooperation to scan the eye images from a fixed distance, whereas the fake samples are acquired in a user-independent way, i.e. we do not require the individual's cooperation and it is performed automatically, as shown in Fig. 1(c). In order to produce the fake images, the genuine images acquired from each user in the enrolment phase are displayed on a portable screen or as a printed image (without manual intervention) placed in front of a camera, which captures the fake images.
Identification: At the identification step, an input is scanned in front of the sensor from a fixed distance (the same distance as during enrolment). If an individual is identified correctly, then liveness is assessed by the framework. To detect liveness, various quality parameters are employed and compared with the genuine and fake representations of the recognized class. A flow diagram of the above-mentioned framework for liveness detection is given in Fig. 1(c).
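As a sketch of this identification-time decision, the per-user comparison can be implemented as follows. This is a minimal illustration assuming each enrolled user stores averaged quality-feature vectors for their genuine and fake samples; the function and variable names are hypothetical, not from the paper:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_live(query_features, genuine_template, fake_template):
    """Intra-class (user-level) liveness decision: the query image's
    quality-feature vector is compared only against the recognized
    user's own genuine and fake enrolment representations."""
    d_genuine = euclidean(query_features, genuine_template)
    d_fake = euclidean(query_features, fake_template)
    # Closer to the user's own genuine template => classified as live
    return d_genuine < d_fake

# Toy example: a 3-feature quality vector (e.g. scores for QF4, QF14, QF18)
genuine_t = [0.42, 120.0, 180.0]
fake_t = [0.55, 60.0, 90.0]
print(is_live([0.44, 115.0, 175.0], genuine_t, fake_t))  # True
print(is_live([0.54, 65.0, 95.0], genuine_t, fake_t))    # False
```

Because the comparison is restricted to the recognized user's own templates, the decision boundary effectively adapts per user, which is the motivation given above for the intra-class design.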
The various proposed quality-based features employed in the framework to detect liveness can be grouped into three main categories, namely: transformed domain (focus)-related quality features, aspect-related features and contrast-related features. These features are described in the following sub-sections.

3.1. Transform domain or focus-related quality features

Fig. 2. (a) Power spectrum of the genuine image, (b) power spectrum of the fake image, (c) genuine image and (d) fake image.

Various types of direct attacks that can be used for spoofing in ocular biometrics across the visible spectrum include scanning a high
resolution printed image, a high resolution digital image or high def-
inition video from a portable screen such as a mobile device or tablet
etc. However, in these scenarios, since the spoofing is performed by scanning images which are projections of a 2D object, it is expected that there will be a difference in focus properties relative to images captured from 'alive' data, which are 3D objects. Therefore, various transformed domain (focus)-related features are explored and discussed as follows.
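Before the individual features are described, the general idea of such focus measures can be sketched as follows. This is an illustrative approximation assuming grayscale images as 2-D NumPy arrays, not the paper's exact filters; in particular, the Discrete Meyer decomposition used for QF3 is not reproduced:

```python
import numpy as np

def power_spectrum_energy(img):
    """QF1-style measure: fraction of spectral energy in the
    high-frequency bins of the 2-D power spectrum (in-focus 3-D
    eyes and flat 2-D reproductions tend to differ here)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)
    high = spectrum[radius > min(h, w) / 4].sum()
    return high / spectrum.sum()

def laplacian_focus(img):
    """QF2-style measure: variance of a discrete Laplacian response,
    a common proxy for focus/sharpness."""
    img = img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

# A sharp random texture vs. its blurred (defocused) counterpart
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5
print(laplacian_focus(sharp) > laplacian_focus(blurred))  # True
```

A defocused capture of a flat reproduction is expected to score lower on both measures than an in-focus capture of a live eye, which is the intuition behind the features below.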

• Power spectrum (QF1): The power spectrum of an image is a unique feature for discriminating genuine and fake images [3]. The power spectrum splits a signal or an image into frequency bins, where the maximum observable frequency is the Nyquist frequency, i.e. half of the sampling frequency. The power spectra for a sample genuine and fake image are represented in Fig. 2(a) and (b) respectively.
• Discrete approximation of the Laplacian (QF2): The discrete Laplace operator is an analogue of the continuous Laplace operator, providing an accurate approximation for digital signals passing through this filter. The Discrete Approximation of the Laplacian (DAL) operator will therefore differ for genuine and fake images, as the signal quality of the two representations is different. The DAL for a sample genuine and fake image is represented in Fig. 3(a) and (b) respectively.
• High frequency 2D filter response (QF3): The response of a high frequency 2D filter is also a signal processing approach to assess the quality of an image. Here a Discrete Meyer decomposition filter was employed for the implementation of the high frequency filters. The high frequency representations for a sample genuine and fake image are shown in Fig. 4(a) and (b) respectively.

Fig. 3. (a) DAL of a genuine image, (b) DAL of a fake image.

3.2. Aspect-related features

Similar to the focus-related features, aspect ratio-based features can also play an important role for liveness detection in the visible spectrum. This is because the images scanned for a system have a unique camera specification; this constraint is quite similar for cross-sensor system scenarios, and it produces a sound difference in aspect ratios between the genuine and fake images. Hence the various aspect ratio-based features are assumed to work well in discriminating genuine and fake images. The set of aspect-related features employed in the proposed liveness system is described below.
• Ratio of the pupil and the iris radius (QF4): The first aspect-related feature employed in the framework is the ratio of the pupil radius and the iris radius. The ratios were calculated using the integro-differential operator proposed in [6].
• Ratio of the iris radius and image length (QF5): The ratio of the iris radius and the image length is also expected to be

different for a genuine image and a fake image. This is because the dimensions of the genuine and fake eyes in the image are different.
• Ratio of the iris radius and image width (QF6): Similar to the previous aspect feature, the aspect ratio of the iris radius and the image width is also expected to be different for a genuine image and a fake image.
• Ratio of the pupil radius and image width (QF7): As with the previous quality feature QF6, the aspect ratio of the pupil radius and the image width can play an important role as an aspect-based quality feature. This quality feature is assumed to be different for genuine and fake images, as pupillary dilation in a 2D medium (a printed image or video used to spoof) and in a 3D object (a genuine human eye) will always be different.
• Ratio of the pupil radius and image length (QF8): Similar to the previous aspect feature, in this feature the ratio of the pupil radius and the image length is introduced, which is also expected to be sufficiently capable of discriminating a live and a fake image.
• Ratio of the sclera length and image width (QF9): In this aspect ratio-based quality feature, the ratio of the sclera length and the image width is employed. The sclera length is estimated by calculating the distance between the two corners of the eye. At first, the sclera is segmented by the time-adaptive region growing algorithm proposed in [7]. Then the two corners are estimated by finding the first column of the binary segmented mask image having a white pixel and the corresponding last column having a white pixel. The estimation for one such image is shown in Fig. 5.
• Ratio of the sclera length and image length (QF10): Similar to QF9, the sclera length is estimated by calculating the distance between the two corners of the eye from the sclera segmented image, and then the ratio of the sclera length and the image length is calculated.
• Ratio of the sclera length and iris radius (QF11): The ratio of the sclera length and the iris radius is another quality feature that can be used for liveness detection. The sclera length is calculated in a similar way to QF9.
• Ratio of the sclera length and pupil radius (QF12): The ratio of the sclera length and the pupil radius is also a unique aspect for liveness detection, which is employed in this quality feature.
• Ratio of the sclera segmented image and binary image (QF13): The ratio of the count of white pixels in the segmented mask to that in the binary mask obtained by applying Otsu's binarization method is quite different for genuine and fake images, because of the intensity distribution difference present in the two categories of image. This aspect-related feature of the two binarization methods is employed for liveness detection.

Fig. 4. (a) High frequency filter response for a genuine image, (b) high frequency filter response for a fake image.

Fig. 5. Sclera length estimation technique, adopted by calculating the two corners from the segmented mask.

3.3. Contrast-related features

It can be easily visualized, looking at the fake and the genuine images, that there is a huge difference in the contrast distribution between the two categories of images. For this reason, contrast-related quality features can be a key attribute to distinguish the fake and the genuine images correctly. So, the following contrast-based quality features are proposed in this work.
• Global construct (QF14): The global construct is the first contrast-related quality feature employed in this work. The global contrast of the image is calculated for each image as a quality feature by the following equation. Very low and very high intensity values (<30 and >245) are normalized to 0, as they were considered noise.

Contrast = value of highest intensity − average intensity    (1)

The global contrast of the fake image and the genuine image varies greatly, which is reflected in the images, so this feature plays an important role in this framework.
• Local construct (QF15): It is also evident from the fake and the genuine images that there is a good discernible difference in local contrast between them. So, the images are divided into 10 × 10 patches; the intensities of each block are normalized to a scale of 1–5, and the contrast of each patch is calculated and summed to obtain the final representation. Similar to the global contrast, this feature also plays an important role in detecting liveness within this framework.
• Local construct at the channel level (QF16): A new image channel-based contrast-related quality feature is proposed in this work for liveness detection. The images are divided into 10 × 10 patches at each channel level (red, green and blue channels), and the contrasts of the patches of each channel representation are added to get the channel-level local feature. Finally, the three channel-level representations are concatenated to get the final representation of this proposed feature.
• Image red channel contrast (QF17): Among the contrast-related features, a few channel-based global features are also proposed. In this feature, the red channel global contrast of the image is calculated by the following equation. Very low and very high intensity values (<35 and >245) are normalized to 0, as they were considered noise.

RcC = RcHi − RcAi    (2)

where RcC is the red channel contrast, RcHi is the value of the highest intensity in the red channel of an image, and RcAi is the average intensity in the red channel of an image.
• Image green channel contrast (QF18): In this feature, the green channel contrast of the image is calculated in a similar way to Eq. (2), but replacing the red channel attributes with the green channel attributes. Very low and very high intensity values (<30 and >245) are normalized to 0; they were considered noise. This was the most effective feature among all the features.
• Image blue channel contrast (QF19): Similar to the red and green channels, the blue channel contrast is proposed and calculated by the same Eq. (2), but employing the blue channel attributes. Very low and very high intensity values (<25 and >240) are normalized to 0; they were considered noise. These quality-based features were also quite efficient at detecting liveness.

Fig. 6. Different varieties of genuine images in the database.

The results of liveness detection after employing the above-mentioned set of proposed quality features are reported in the next section.
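To make the contrast constructs concrete, Eqs. (1) and (2) and the patch-based local construct can be sketched as follows, assuming 8-bit images as NumPy arrays; the function names are illustrative, and the 1–5 block normalization described for QF15 is omitted for brevity:

```python
import numpy as np

def global_contrast(gray, low=30, high=245):
    """Eq. (1): highest intensity minus average intensity, after
    normalizing very low/high values (treated as noise) to 0."""
    g = gray.astype(float)
    g[(g < low) | (g > high)] = 0
    return g.max() - g.mean()

def channel_contrast(rgb, channel, low, high):
    """Eq. (2): the same construct applied to one colour channel
    (channel 0/1/2 = red/green/blue)."""
    return global_contrast(rgb[:, :, channel], low, high)

def local_contrast(gray, grid=10):
    """QF15-style local construct: split the image into grid x grid
    patches and sum the per-patch contrasts."""
    h, w = gray.shape
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            patch = gray[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            total += patch.max() - patch.mean()
    return total

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(100, 100, 3))
# Red, green and blue channel contrasts with the thresholds from the text
qf17 = channel_contrast(img, 0, 35, 245)
qf18 = channel_contrast(img, 1, 30, 245)
qf19 = channel_contrast(img, 2, 25, 240)
print(qf17 > 0 and qf18 > 0 and qf19 > 0)  # True
```

A perfectly flat reproduction (constant intensity) would score 0 under Eq. (1), while natural eye images retain a large spread, which is the discriminative property these features exploit.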

4. Experimental results

To justify the practical effectiveness of the proposed framework, an experimental setup was designed. The detailed results of the experiments, and the database used to implement the experiments, are explained in the following sub-sections.

Fig. 7. (a) Genuine image captured by a NIKON camera, (b) fake image prepared by scanning a portable screen displaying image (a), (c) image captured by a mobile camera, and (d) fake image prepared by scanning a printed eye image of (c).

4.1. Database

In order to implement the proposed liveness framework and to assess the effectiveness of the proposed quality features, a suitable database was developed. This database is more realistic for this subject of research than other well-known existing databases for iris liveness analysis (Biosec [13], Clarkson [22], NotreDame [5], Warsaw [3] and MobBIO [24]), as the sclera vessel patterns are not prominent in those databases due to their image acquisition process. It would therefore be quite unrealistic to conduct experiments examining the liveness of multimodal eye biometrics by employing the above-mentioned databases. Furthermore, the fake images included in those databases were developed from printed eye images and artificial lens patterns only.

The proposed database consists of 500 genuine RGB images from both eyes of 25 individuals (in other words, 50 different eyes). For each eye, 10 sample images were captured. The database contains blurred images and images with blinking eyes. The individuals comprised both males and females (12 males and 13 females) of different ages and different skin colours; 2 of them were wearing contact lenses, and the images were taken at different times of the day. Variations in image quality (blur, lighting conditions, etc.) and different acquisition conditions were included intentionally in the database to investigate the performance of the framework in non-ideal scenarios. High resolution images (3264 × 2448) at 96 dpi are included in the database. All the images are in JPEG format. We have used images of different quality, and some of the sample images are shown in Fig. 6. The images were captured using a mobile phone with an 8 mega-pixel rear camera. The dataset will be available upon request from the author via email at: abhijitdas2048@gmail.com.

In order to simulate our proposed user-level liveness, for each eye, 5 genuine images were used for enrolment and 5 for testing (so 500 images altogether for enrolment and testing). In order to create the forged attack experiment, the fake images for enrolment were prepared automatically by displaying/printing the acquired genuine enrolment images and subsequently generating a fake image by capturing the displayed image via a camera positioned in front of the screen/printed image. Varying types of display screens were used to reproduce a real-life cross-sensor scenario, and the same standard sensor was used to acquire them. The 250 fake images for enrolment were prepared by the above methodology, i.e. 5 samples per user (4 fake images were generated through acquisition from a digital screen and 1 from a printed image using a laser printer). These printed images can also be generated automatically by developing photometric models (generating a photometric model from a set of training images and applying the effect to generate automatic fake images). For testing, 5 fake images (4 from the screen and 1 from a printed image) were prepared for each eye, altogether comprising 250 fake images for testing (so altogether 500 fake images for enrolment and testing).

Accordingly, for each eye, 5 fake images were used for training and the remaining 5 for testing. Some fake test images were also prepared by scanning high quality images captured with a NIKON D800 camera and a 28–300 lens (as imposters can use very sophisticated cameras to acquire the original biometric trait). Printed spoofed images were prepared from printed (laser colour) images, employing the above-mentioned high quality (NIKON D800) and low quality (the mobile sensor employed to capture the genuine images) images and then scanning them in front of the sensor. Examples of a few fake images are shown in Fig. 7.

4.2. Classifier used and feature selection technique

As liveness detection is a two-class classification problem and the dimensions of the features used are not large, various pair-wise distance classifiers available in the literature, such as the Euclidean, Cityblock, Chebychev, Cosine, Spearman, Hamming and Jaccard distances, are employed here. Among them, the Euclidean distance produced the best result.

Furthermore, we combined the features to analyse their behaviour and performance. Moreover, it can be found in the literature that combinations of such features can enhance the result. So, we followed the same procedure to enhance the performance of our system and the effectiveness of the proposed quality features.

For feature selection, we first found the performance of the individual features; then the feature producing the best result is combined with the feature producing the second best result, and so on.
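The selection procedure just described can be sketched as a greedy forward search. This is a hypothetical harness: `evaluate` stands in for training and testing a subset of the quality features, and the toy scores below merely mimic the trend that combinations improve on individual features:

```python
def greedy_forward_selection(features, evaluate):
    """Rank features by individual accuracy, then grow the combination
    greedily (best feature first, then best + second best, etc.),
    keeping the subset with the highest accuracy seen so far."""
    ranked = sorted(features, key=lambda f: evaluate([f]), reverse=True)
    best_subset, best_acc = [ranked[0]], evaluate([ranked[0]])
    subset = [ranked[0]]
    for f in ranked[1:]:
        subset = subset + [f]
        acc = evaluate(subset)
        if acc > best_acc:
            best_subset, best_acc = list(subset), acc
    return best_subset, best_acc

# Toy evaluator: the strongest individual feature dominates, and each
# added feature contributes a small bonus (capped at 100%).
scores = {"QF4": 94.5, "QF14": 96.0, "QF17": 95.5, "QF18": 97.5, "QF19": 94.4}
def evaluate(subset):
    base = max(scores[f] for f in subset)
    return min(100.0, base + 0.5 * (len(subset) - 1))

subset, acc = greedy_forward_selection(list(scores), evaluate)
print(subset, acc)
```

In the paper's setting, `evaluate` would be the Euclidean-distance classifier's test accuracy on the chosen feature subset.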

Table 1
Liveness detection performance of the various individual and combinations of image quality-based features. Columns give the test averaged accuracy in % achieved for fake and genuine samples.

Feature group       Feature                      Fake    Genuine
Focus features      QF1                          81      79
                    QF2                          70      83
                    QF3                          76      80
Aspect features     QF4                          95      94
                    QF5                          86      82
                    QF6                          87      84
                    QF7                          84      86
                    QF8                          87      84
                    QF9                          85      78
                    QF10                         84      70
                    QF11                         80      70
                    QF12                         78      78
                    QF13                         85      86
Contrast measures   QF14                         96      96
                    QF15                         88      89
                    QF16                         89      90
                    QF17                         96      95
                    QF18                         98      97
                    QF19                         95      94
Combination         QF14+QF18                    98      98
                    QF14+QF17+QF18               98      99
                    QF14+QF17+QF18+QF19          99      99
                    QF4+QF14+QF17+QF18+QF19      100     100

4.3. Experimental result details

The liveness experimental results with the various quality-based features are explained in this subsection. For training, five samples from each class from the fake and the genuine categories are used to produce the training model (the feature values are averaged to obtain the feature representation of each class), and the remaining five samples from each class of fake and genuine images were used for testing. The performances of the above-mentioned features, individually and in combination, are presented in Table 1.³

³ All percentage accuracy values reported in the paper are rounded off to the unit place.

The table illustrates the satisfactory performance of the proposed quality features for both aspects of liveness detection (i.e. the ability to classify genuine and fake samples correctly). It can be seen from the table that the focus-related features worked for detecting liveness. Aspect-related features worked better than the focus-related features. Among the aspect-related features, the performance of QF4 (i.e. the ratio of iris and pupil radius) was the best. Furthermore, the group of contrast-related features produced the best results. Among them, QF18, i.e. the green channel contrast, outperformed the other quality features introduced here. As mentioned previously, we also combined these features to analyse their performance. We fused the features according to their performance rank, so we combined the 1st-ranked feature with the 2nd, and so on. It can be concluded from Table 1 that the combination of the contrast-based features and the aspect-based features produced the best result.

Moreover, it is also quite interesting to observe the correlations between the features. It can be seen from the results that the accuracies of the features are quite close. But the features are significant because, when they are combined, they boost the accuracy as opposed to when they are applied individually.

Although the contrast-related features have worked exceptionally well in our proposed schema, unfortunately such features can be attacked by intruders at the software level by using photometric normalization. However, for such a scenario of attack, the intruder would be required to have technical details about the inner functionality and a clear architecture of the liveness system for tonal correction of the fake images. Moreover, they would need to attack at the software level rather than the sensor level, which is more difficult than a direct attack.

The feature distributions of the best discriminative features are shown in the figures: the pupil and iris radius ratio QF4 in Fig. 9, and the contrast quality features (QF14, QF17, QF18 and QF19) in Fig. 8. Although the cumulative frequency or the probability distributions, i.e. the feature scores normalized at each class level, would have been a better measure, this feature distribution is used for better discernibility of the features. Moreover, the feature distribution is also a better measure to reflect the effectiveness of a feature at the class level and the database level. In each graph, the quality feature score distribution of the entire genuine sample set is represented by a green line and that of the fake samples by a blue line. Along the X-axis is the sample index and along the Y-axis is the feature value. In each of the lines in the graph, the first ten feature values represent the feature values of the ten fake/genuine samples of the first class, the next ten those of the second class, and so on. It is clear from the graphs that the feature values of the fake and genuine samples have a discernible difference within each class. Whereas, if we considered the database-level feature distribution, this feature value would have a certain overlapping region, which creates confusion in the classification of 'alive' and fake data at the database level.

4.4. Comparison with the state-of-the-art methods

The performance of the above-mentioned framework is compared with the most relevant previously proposed liveness framework in [15], using our proposed set of quality features, feature selection process and classification technique. The results producing the better performance were on the basis of the newly proposed liveness framework. The detailed comparison of the results of the two frameworks is given in Table 2.

It can be seen from Table 2 that the focus-related features worked moderately for detecting liveness in the previous framework of [15], whereas the aspect-related features worked less accurately than the focus-related features. Furthermore, the group of contrast-related features produced the best result. Similar to the results of Table 1, here also QF18, i.e. the green channel contrast, outperformed every other quality feature introduced here. The combination of the features was also performed, and the improvement in results was noticeable.

Although the proposed aspect-related features worked for the previous framework [15], it is clear from the above table that the concept of class-level liveness classification has outperformed the database-level liveness detection/classification by an appreciable margin. These results and comparisons demonstrate the effectiveness of the proposed framework in contrast to the previous framework.

For further analysis and comparisons, the best quality-feature combination in [15], i.e. the combination of occlusion (pupillary dilation), global contrast and local contrast, was applied to our database, using both the class-level liveness classification and the database-level classification. In the database-level classification these features were able to discriminate 94% of the genuine and 89% of the fake images, whereas in the class-level liveness framework the same features were able to correctly discriminate 95% of the genuine and 93% of the fake images.

From the above analysis, it is again demonstrated that the class-level classification is more efficient than the database-level liveness classification framework. Moreover, it is also clear that the best quality-feature combination of [15], even in the class-level classification, could not outperform the result of our proposed quality-related features.

Another comparison was performed with very recent and efficient work on ocular liveness detection proposed in [23,24]. The liveness algorithm of the above-mentioned research was employed
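The rank-ordered fusion used in the experiments (combine the 1st-ranked feature with the 2nd, then the 3rd, and so on, keeping the best-performing subset) can be sketched as below. This is a minimal sketch, assuming a hypothetical `evaluate` callback that returns the test accuracy of a feature subset:

```python
# Greedy rank-based feature fusion: sort features by their individual
# accuracy, grow the subset one feature at a time in rank order, and
# keep the subset with the highest combined accuracy seen so far.
def rank_based_fusion(individual_accuracy, evaluate):
    ranked = sorted(individual_accuracy,
                    key=individual_accuracy.get, reverse=True)
    best_subset, best_acc = [], float("-inf")
    subset = []
    for feat in ranked:
        subset.append(feat)
        acc = evaluate(subset)       # accuracy of the combined subset
        if acc > best_acc:
            best_subset, best_acc = list(subset), acc
    return best_subset, best_acc

# Toy example with made-up accuracies: the pair {QF18, QF14} happens
# to evaluate best when combined.
scores = {"QF18": 0.98, "QF14": 0.96, "QF1": 0.80}
combined_acc = {1: 0.98, 2: 0.99, 3: 0.97}   # accuracy by subset size (toy)
subset, acc = rank_based_fusion(scores, lambda s: combined_acc[len(s)])
print(subset, acc)  # → ['QF18', 'QF14'] 0.99
```

The procedure is greedy, so it evaluates only as many subsets as there are features, which keeps the selection cheap compared with an exhaustive search over all combinations.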
Fig. 8. Feature distributions of the best discriminative quality features for the genuine and fake samples of each class: (a) global contrast QF14, (b) red channel contrast QF17, (c) green channel contrast QF18, and (d) blue channel contrast QF19.
using our framework and the proposed database to perform the analysis. On the basis of the experiments performed, it is clear from the results that it can only efficiently classify 89% of the fake images and 94% of the real images. Whereas, employing their algorithm at the database-level liveness framework and using our database, the result was less effective. Thus, similar to the previous analysis, this analysis also demonstrates the superior effectiveness of the proposed framework and the feature set.

So, from the above analysis, the effectiveness and applicability of the proposed framework in a real-life scenario is demonstrated. Moreover, the effectiveness of the quality features is also justified on the basis of the above comparison.
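The channel-wise contrast measures that dominate both comparisons (QF17–QF19, the red, green and blue channel contrasts) could, for instance, be computed as an RMS (standard-deviation) contrast per channel. This definition is an assumption for illustration only; the paper's exact contrast formulation may differ:

```python
# Assumed RMS-contrast sketch for per-channel contrast features
# (illustrative stand-in for QF17/QF18/QF19; not the authors' code).
def channel_contrast(channel):
    """RMS contrast of one channel, given as a flat list of values in [0, 1]."""
    n = len(channel)
    mean = sum(channel) / n
    return (sum((p - mean) ** 2 for p in channel) / n) ** 0.5

def rgb_contrasts(pixels):
    """pixels: list of (r, g, b) tuples in [0, 1]; returns the red,
    green and blue channel contrasts (QF17-, QF18-, QF19-style)."""
    r, g, b = zip(*pixels)
    return channel_contrast(r), channel_contrast(g), channel_contrast(b)

# Toy usage: a two-pixel "image" with variation in red and blue only.
print(rgb_contrasts([(0.0, 0.2, 1.0), (1.0, 0.2, 0.0)]))  # → (0.5, 0.0, 0.5)
```

Under this definition, a re-captured image whose tonal range has been compressed by a display or printer would show a measurably lower per-channel contrast than a live capture, which is consistent with the discriminative behaviour reported for these features.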
Fig. 9. Feature distributions of QF4 (the ratio of the pupil and the iris radius).
Table 2
The detailed performance-based comparison of results between the proposed framework and the framework presented in [15] using our proposed set of quality features. Columns give the test averaged accuracy in %.

Feature group       Feature                      Proposed framework   Framework [15]
Focus features      QF1                          80                   70
                    QF2                          77                   73
                    QF3                          79                   74
Aspect features     QF4                          95                   86
                    QF5                          84                   83
                    QF6                          85                   80
                    QF7                          85                   80
                    QF8                          85                   78
                    QF9                          81                   75
                    QF10                         77                   74
                    QF11                         75                   74
                    QF12                         78                   70
                    QF13                         86                   80
Contrast measures   QF14                         96                   89
                    QF15                         88                   84
                    QF16                         90                   82
                    QF17                         96                   88
                    QF18                         97                   100
                    QF19                         95                   87
Combination         QF14+QF18                    98                   91
                    QF14+QF17+QF18               99                   93
                    QF14+QF17+QF18+QF19          99                   94
                    QF4+QF14+QF17+QF18+QF19      100                  94
                    All                          100                  94

5. Conclusions and future work

This paper proposes a new liveness detection approach for multimodal ocular biometrics in the visible spectrum based on image quality features. The novel framework proposes intra-class (i.e. class-level) liveness detection based on the combination of domain transforms, geometrical ratios and contrast measures.

Furthermore, this framework and the new quality features proposed can also be used to explore liveness detection of multimodal biometric traits across the visible spectrum in a more realistic and accurate way compared to previous liveness detection schemas. The success of the proposed framework and the proposed set of quality features for liveness is evident from the experimental results.

Moreover, a new database is proposed for liveness detection, consisting of fake images that were developed with the more versatile forging technology that can be used by intruders. In addition, the images in the database also contain sufficiently accurate biometric information for the aforementioned ocular traits.

Future work extending this research will concentrate on exploring this framework on other biometric traits for liveness detection. Incorporation of a challenge-response mechanism in our framework will be another extension of the present work.

Acknowledgments

The authors sincerely thank Prabir Mondal from CVPR, ISI Kolkata for assisting with the data collection process.

References

[1] F. Alonso-Fernandez, J. Fierrez, J. Ortega-Garcia, J.G. Rodriguez, H. Fronthaler, K. Kollreider, J. Bigun, A comparative study of fingerprint image quality estimation methods, IEEE Trans. Inf. Forensics Secur. 2 (4) (2008) 734–743.
[2] A. Abhyankar, S. Schuckers, Iris quality assessment and biorthogonal wavelet based encoding for recognition, Pattern Recognit. 42 (9) (2009) 1878–1894.
[3] A. Czajka, Database of iris printouts and its application: development of liveness detection method for iris recognition, in: 2013 18th International Conference on Methods and Models in Automation and Robotics (MMAR), 2013, pp. 28–33.
[4] J. Daugman, Recognizing people by their iris patterns, Inf. Secur. Tech. Rep. 3 (1) (1998) 33–39.
[5] J. Doyle, K.W. Bowyer, Notre Dame image dataset for contact lens detection in iris recognition, 2004.
[6] J. Daugman, Iris recognition and anti-spoofing countermeasures, in: 7th International Biometrics Conference, 2004.
[7] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, A new method for sclera vessel recognition using OLBP, in: Chinese Conference on Biometric Recognition, LNCS, 8232, 2013, pp. 370–377.
[8] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, Sclera recognition using D-SIFT, in: 13th International Conference on Intelligent Systems Design and Applications, 2013, pp. 74–79.
[9] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, Sclera recognition – a survey, in: 2013 2nd IAPR Asian Conference on Advancement in Computer Vision and Pattern Recognition, 2013, pp. 917–921.
[10] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, Fuzzy logic based sclera recognition, in: 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2014, pp. 561–568.
[11] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, A new efficient and adaptive sclera recognition system, in: IEEE Symposium Series on Computational Intelligence, 2014.
[12] A. Das, U. Pal, M.F.A. Ballester, M. Blumenstein, Multiangle based lively sclera biometrics at a distance, in: IEEE Symposium Series on Computational Intelligence, 2014.
[13] J. Fierrez, J. Ortega-Garcia, T.D. Toledano, J. Gonzalez-Rodriguez, Biosec baseline corpus: a multimodal biometric database, Pattern Recognit. 40 (4) (2007) 1389–1392.
[14] J. Galbally, J. Ortiz-Lopez, J. Fierrez, J. Ortega-Garcia, Iris liveness detection based on quality related features, in: Proceedings of the International Conference on Biometrics (ICB), India, 2012, pp. 271–276.
[15] J. Galbally, F. Alonso-Fernandez, J. Fierrez, J. Ortega-Garcia, A high performance fingerprint liveness detection method based on quality related features, Future Gener. Comput. Syst. 28 (2012) 311–321.
[16] X. He, S. An, P. Shi, Statistical texture analysis-based approach for fake iris detection using support vector machines, in: Advances in Biometrics, Springer, 2007, pp. 540–546.
[17] M. Kanematsu, H. Takano, K. Nakamura, Highly reliable liveness detection method for iris recognition, in: SICE Annual Conference, IEEE, 2007, pp. 361–364.
[18] K. Kollreider, H. Fronthaler, M.I. Faraj, J. Bigun, Real-time face detection and motion analysis with application in liveness assessment, IEEE Trans. Inf. Forensics Secur. 2 (3–2) (2007) 548–558.
[19] E. Lee, K. Park, J. Kim, Fake iris detection by using purkinje image, in: Advances in Biometrics, Lecture Notes in Computer Science, 3832, Springer, Berlin/Heidelberg, 2005, pp. 397–403.
[20] L. Ma, T. Tan, Y. Wang, D. Zhang, Personal identification based on iris texture analysis, IEEE Trans. Pattern Anal. Mach. Intell. 25 (12) (2003) 1519–1533.
[21] N. Ratha, J. Connell, R. Bolle, An analysis of minutiae matching strength, in: Proceedings of AVBPA, International Conference on Audio- and Video-Based Biometric Person Authentication III, 2001, pp. 223–228.
[22] S. Schuckers, K. Bowyer, D. Yambay, Liveness Detection – Iris Competition, 2013, http://people.clarkson.edu/projects/biosal/iris/.
[23] A.F. Sequeira, J. Murari, J.S. Cardoso, Iris liveness detection methods in the mobile biometrics scenario, in: International Joint Conference on Neural Networks, 2014, pp. 3002–3008.
[24] A.F. Sequeira, J. Murari, J.S. Cardoso, Iris liveness detection methods in mobile applications, in: International Conference on Computer Vision Theory and Applications (VISAPP), 2014.
[25] M. Une, Y. Tamura, Liveness detection techniques, IPSJ Mag. 47 (6) (2006) 605–608.