You are on page 1of 14

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO.

3, MARCH 2010 417

Non-Orthogonal View Iris Recognition System


Chia-Te Chou, Sheng-Wen Shih, Member, IEEE, Wen-Shiung Chen, Victor W. Cheng, and Duan-Yu Chen

Abstract—This paper proposes a non-orthogonal view iris Wildes et al. also proposed another iris recognition system
recognition system comprising a new iris imaging module, an at almost the same time [6]. The accuracy of their system
iris segmentation module, an iris feature extraction module and
is also very high. However, its computation cost is very high
a classification module. A dual-charge-coupled device camera was
developed to capture four-spectral (red, green, blue, and near- and, hence, it is suitable for identity verification. Traditionally,
infrared) iris images which contain useful information for sim- the iris recognition was developed based on the orthogonal-
plifying the iris segmentation task. An intelligent random sample view assumption (OVA), i.e., the optical axis of the camera is
consensus iris segmentation method is proposed to robustly detect approximately aligned with the normal vector of the pupillary
iris boundaries in a four-spectral iris image. In order to match
circle. Traditional iris recognition methods usually comprise
iris images acquired at different off-axis angles, we propose a
circle rectification method to reduce the off-axis iris distortion. the following five main steps.
The rectification parameters are estimated using the detected
elliptical pupillary boundary. Furthermore, we propose a novel 1) Iris Image Acquisition: Iris images are usually acquired
iris descriptor which characterizes an iris pattern with multiscale using a near-infrared (NIR) camera, because the re-
step/ridge edge-type maps. The edge-type maps are extracted flectance of human iris is high at the NIR band. How-
with the derivative of Gaussian and the Laplacian of Gaussian
filters. The iris pattern classification is accomplished by edge-type
ever, if the iris color is not too dark, its texture can
matching which can be understood intuitively with the concept be revealed when using a visible light imaging system
of classifier ensembles. Experimental results show that the equal [7], [8]. Existing iris image acquisition systems can be
error rate of our approach is only 0.04% when recognizing iris separated into two types, depending on the degrees of
images acquired at different off-axis angles within ±30°. user cooperation they require. The first type of systems
Index Terms—Iris feature extraction, iris imaging system, iris requires intensive user cooperation. Users are asked to
recognition, iris segmentation, non-ideal iris recognition, non- adjust their head pose for the system to acquire an iris
orthogonal view iris recognition. image [1], [7], [9], [10]. Kalka et al. have proposed a
method to assess the quality of the acquired iris image
I. Introduction [11]. The image acquisition process should be repeated
DENTIFYING or verifying one’s identity using biometrics until the image is qualified. Since the size of a human
I is attracting considerable attention in these years. Biomet-
rics authentication uses information specific to a person, such
iris is very small, an iris camera is usually equipped
with a high-magnification lens. Unfortunately, the fact
as a fingerprint, face, utterance, or iris pattern. Therefore, it is that the depth of field of such a lens is small makes
more convenient and securer than the traditional authentication the acquired iris image sometimes not clear enough.
methods. Among all the biometrics authentication methods, Narayanswamy et al. have proposed a wavefront coded
iris recognition appears to be a very attracting method because imaging technique that uses an aspheric optical element
of its high recognition rate. and a digital decoder to increase the depth of field [12].
The first automatic iris recognition system was developed Also, image restoration techniques have been utilized
by Daugman [1]. His algorithm is accurate and efficient. to alleviate the influence of image blurness [13]. The
Large scale tests have been conducted and the results show second type of systems is able to control the orientation
that his system can achieve almost zero error rate [2]–[5]. and/or the zoom and focus settings of the camera to
acquire a satisfactory iris image [14]–[21]. Therefore,
Manuscript received September 26, 2008; revised April 24, 2009 and August systems of the second type require less user cooperation
20, 2009. First version published November 3, 2009; current version published and are more convenient than systems of the first type.
March 5, 2010. This work was supported in part by the National Science
Council, Taiwan, under Grant NSC 94-2213-E-260-004, and by the Ministry 2) Iris Region Segmentation: The goal of iris region seg-
of Economic Affairs, Taiwan, under Grant 97-EC-17-A-02-S1-032. This paper mentation is to detect the boundaries of the iris pattern
was recommended by Associate Editor S. Pankanti. in an input image. The iris boundaries include the pupil
C.-T. Chou, S.-W. Shih, and V. W. Cheng are with the Department of
Computer Science and Information Engineering, National Chi Nan Uni- (inner) boundary, the limbus (outer) boundary, and the
versity, Puli, Nantou 54561, Taiwan (e-mail: ctchou@stu.csie.ncnu.edu.tw; boundaries of the upper and the lower eyelids. Based on
swshih@ncnu.edu.tw; vicwk@ncnu.edu.tw). the OVA, the inner and the outer boundaries of an iris
W.-S. Chen is with the Department of Electrical Engineering, National Chi
Nan University, Puli, Nantou 54561, Taiwan (e-mail: wschen@ncnu.edu.tw). pattern can be approximately modeled as non-concentric
D.-Y. Chen is with the Department of Electrical Engineering, Yuan Ze circles [1]. The upper and the lower eyelid boundaries
University, Chungli 320, Taiwan (e-mail: dychen@saturn.yzu.edu.tw). are usually modeled as parabolas [4]. Popular methods
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org. for detecting the circular and the parabolic bound-
Digital Object Identifier 10.1109/TCSVT.2009.2035849 aries can be classified into two categories. One is the
c 2010 IEEE
1051-8215/$26.00 

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
418 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO. 3, MARCH 2010

Fig. 1. Double-circular-arc model of the outer iris boundary.

integro-differential operator (IDO) approach proposed


by [1] and the other is the Hough transform (HT) ap-
proach proposed by [9]. Both categories of methods have Fig. 2. Alleviating eyelash occlusion using non-OVA camera placement.
been widely adopted in many iris recognition systems.
Additionally, the flickering-pupil (caused by blinking if the features have been binarized, the normalized
coaxial and off-axis light sources in turn) techniques Hamming distance [1] is the most efficient one.
[22], [23] can be used to detect the inner boundary
Most of the existing iris recognition systems are developed
robustly. The detection of the outer boundary is trickier
under the OVA and, thus, when the OVA is not valid, the
than the detection of the inner boundary because of the
aforementioned methods may not be applicable. Although the
eyelid occlusions. To overcome the occlusion problem,
OVA can simplify the iris segmentation problem, it is an
a double-circular-arc model as shown in Fig. 1 is used
unnecessary constraint which not only causes inconveniences
for the outer boundary detection [4], [9].
but also reduces the iris recognition accuracy because of eye-
3) Iris Pattern Normalization: After detecting the iris
lash occlusions and/or user noncooperation. Computer access
boundaries, the iris area is transformed into a rectan-
control is an example for which the OVA is not convenient
gular region using the rubber-sheet method proposed by
because the camera has to be placed at the border of the
Daugman [1]. This normalization process alleviates the
monitor but users usually do not look into the camera directly.
effects of pupil size variation upon the iris recognition
Emancipating the iris recognition systems from the OVA will
results and, thus, it is adopted in most of the iris
not only make them more convenient but also will broaden
recognition methods. However, if the recognition results
their application range. Significantly, if the OVA constraint is
are not satisfactory, then a nonlinear deformation model
relaxed, then a better camera placement can be achieved to
should be considered. Thornton et al. have shown that
alleviate the eyelash occlusion problem as shown in Fig. 2.
the equal error rate (EER) of iris recognition can be
halved when a nonlinear deformation model is used [24].
4) Feature Extraction: Iris patterns are usually represented A. Related Work
using low level features extracted from image pyramids This paper focuses on the non-OVA (NOVA) four-spectral
or band pass filters. Wildes et al. proposed to use (red, green, blue (RGB) and NIR) iris recognition prob-
Laplacian pyramids to represent an iris pattern [6], lems. Unlike the NOVA iris images contained in the
[9]. However, the complexity of computing the distance iris challenge evaluation (ICE) database [42] whose off-
between two Laplacian pyramids is very high and, thus, axis deviations are mainly in the horizontal direction, both
the band-pass filter approaches are favorable. Band- the horizontal and the vertical off-axis deviations are consid-
pass filters can minimize the influence of both the ered in this paper. At present, there are only a few papers
high frequency noise and the low frequency illumination about the NOVA iris recognition. Gaunt et al. presented a
non-uniformity on the extracted features. Popular band- NOVA iris image collection and segmentation system [43].
pass filters include the Gabor filters [1], [25]–[29], the Their NOVA iris images were acquired by having a subject
wavelet filters [30]–[32], and the ordinal filters [33], initially faces an NIR camera and then move her/his eye
[34]. Usually, the signs [1], [25]–[29], [33], [34], the to specific points corresponding to known off-axis angles.
local extrema [35]–[38], and the zero-crossings [30]– A method for detecting the inner and the outer iris bound-
[32] of the filtered results are used to construct a binary aries was proposed [43], [44]. However, their method was
feature vector to simplify the subsequent pattern match- developed based on the assumption that the upper and lower
ing computation. Additionally, Huang et al. [39] and edges of the outer iris boundary are observable [44], and,
Noh et al. [40] used independent component analysis hence, their method is not applicable when either of the
methods to reduce the size of the iris feature with- eyelids occludes the iris. Furthermore, no recognition result
out sacrificing the recognition accuracy. Monro et al. was reported in [43]. Abhyankar et al. proposed a NOVA
proposed an iris coding method using differences of dis- iris recognition system [45]. They showed that if the off-
crete cosine transform coefficients that achieves highly axis angle is known a priori, a NOVA iris image can be
accurate recognition results [41]. transformed into an approximate OVA iris image using either
5) Classification: Iris identification is usually computed affine or projective transformations and the recognition rate
using the nearest neighbor approach. Many different is still reasonable. However, they argued that assuming a
distance measures can be used in this task. However, known off-axis angle is impractical, and that both affine and

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
CHOU et al.: NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM 419

projective transformations of iris images degrade the image the detected elliptical boundary to make the iris segmentation
quality, which will eventually reduce the system performance. easier and more precise.
Therefore, they proposed to enroll multi-iris images acquired Notably, iris segmentation is a critical problem in NOVA iris
at different off-axis angles. Experimental results showed that recognition. Since the traditional iris boundary detection rules
the multiview-enrollment approach achieved high accuracy. are no longer valid with NOVA, it is necessary to incorporate
In fact, the multiview enrollment can be avoided because more information, e.g., multispectral iris images, to achieve ro-
the off-axis angle can be estimated. Dorairaj et al. proposed bust iris segmentation results. Multispectral iris images can be
two methods using the Hamming distance and Daugman’s acquired using either the sequential acquisition technique [53]
IDO, respectively, as objective functions for estimating the or three-charge-coupled device (3CCD) cameras [54], [55],
off-axis angle [46]. However, both of their methods involve and the latter is preferred since the images can be acquired
an image transformation in the optimization process. Hence, simultaneously. Park et al. proposed to use multispectral in-
the computation cost is very high. Besides, both of the two frared iris images and a gradient-based image fusion algorithm
objective functions proposed in [46] are not differentiable, to detect counterfeit attacks [53]. Boyce et al. have studied
which complicate the optimization problem. Zuo et al. also the four-spectral iris analysis problem using a 3CCD camera
modified the IDO method to incorporate an elliptical iris [54], [55]. Their goal is to explore how multispectral iris
boundary model for segmenting non-frontal irises [47]. How- images can be used to improve the iris recognition accuracy.
ever, their method is not very efficient due to the elliptical They proposed an OVA iris segmentation method to roughly
IDO computation. Also, because no extra RGB information locate the iris boundaries using a four-spectral iris image.
is available, the accuracy of their method is subject to the The four-spectral vectors of an iris pixel and a non-iris pixel
influence of non-iris noises. Yahya et al. proposed to model are modeled as two Gaussian distributions, respectively, so
the pupil boundary as an elliptical curve, but their purpose is the iris area can be classified using the Bayes classifier [55].
not to compensate the off-axis distortion because the outer iris Post-processing of the resulting iris map using morphological
boundary model is assumed circular in the same work [48]. operations and the median filter is required to remove miss-
Daugman has recently proposed a Fourier-based method de- classified pixels. The outer iris boundary is determined using
veloped under the weak-perspective assumption for correcting only two points selected from the border of the filtered iris
the NOVA iris image distortion [49]. Consider an elliptical area. Since each pixel of the entire image has to be classified,
curve {(A cos (t) , B sin (t)) |t ∈ [0, 2π] } centered at the origin the efficiency of their method is low. Furthermore, according
and lying along the x-axis, where A and B are the semimajor to their results, the main advantage of using the RGB iris
and the semiminor axes, respectively. When A/B is close to image is to improve the robustness of iris segmentation results
unity, the parameter t is approximately equal to the polar rather than the recognition accuracy. This result is expectable
angle θ = tan−1 (B tan(t)/A) and the coordinates of the elliptic since it is very difficult to improve the near 100% accuracy of
curve (A cos (t) , B sin (t)) can be approximately represented the state-of-the-art NIR iris recognition system. However, if
as (A cos (θ) , B sin (θ)). Since the x and the y-coordinates are the main purpose of using additional RGB channels is to
sinusoidal functions of θ, parameters A and B can be estimated improve the segmentation accuracy, then it is not necessary
by computing the amplitudes of the sinusoidal functions using to obtain high definition images in any of the RGB channels.
the discrete Fourier transform of the x and y-coordinates, Therefore, the number of CCD chips can be reduced and
respectively. This method is efficient and can be applied even they do not have to be precisely aligned, which implies
when the pupil is not in circular shape. However, when the that a four-spectral iris imaging device that is simpler and
aspect ratio A/B deviates from unity, the estimated aspect more cost-effective than the 3CCD solution can be devel-
ratio will become less accurate. Ross and Shah proposed a oped.
level-set method for segmenting the iris area [50]. The level-
set method can effectively deal with problems of local minima
B. Contributions of This Paper
in finding iris boundary but its computation cost is very
high. Pundlik et al. proposed a non-ideal iris segmentation This paper proposes a NOVA iris recognition method. The
method based on the graph cut algorithm [51]. Since the iris contributions of this paper are summarized in the following.
area segmented using the graph cut algorithm may have an 1) A dual-CCD (DCCD) camera is developed which cap-
irregular shape, they use a fix-orientation elliptical iris model tures four-spectral iris images to simplify the NOVA iris
to estimate the iris boundary. Thus, their method can deal segmentation problem.
with the NOVA iris images in the ICE database, but can not 2) A simple and efficient method to estimate the circle
handle the general NOVA images. Vatsa et al. proposed a rectification parameters is proposed for rectifying a
two stage iris segmentation method combining an IDO-like NOVA iris image into an OVA one using the detected
elliptical boundary detection stage followed by a level-set fine pupil boundary.
tuning stage which is able to detect non-circular iris boundaries 3) A new iris segmentation method using the four-spectral
[52]. They also proposed a method to improve the iris image iris images is proposed. Experimental results show that
quality and the recognition performance. However, in their the new segmentation method is more robust and more
method, the segmented iris image is directly converted into accurate than both the IDO approach [1] and the HT
a polar coordinate system without considering the off-axis approach [9], and improving the iris segmentation accu-
image distortion. In Section III-B, we will show how to use racy is crucial to NOVA iris recognition.

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
420 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO. 3, MARCH 2010

Fig. 5. Schematic diagram and a picture of the dual CCD camera.

pupil can be easily located. However, the NIR illumination also


reduces the contrast of the outer iris boundary (see Fig. 4).
Fig. 3. Flowchart of the NOVA iris recognition process.
Therefore, detecting a NOVA outer iris boundary becomes
error-prone using an NIR imaging device. This paper pro-
poses a DCCD imaging system to solve the above-mentioned
problem. Fig. 5 shows the schematic diagram of a DCCD
camera. A cold mirror is used to reflect the visible light to
an RGB CCD while allowing the NIR light to pass through
the cold mirror and to form an NIR image on the NIR CCD. In
practice, low cost board level cameras can be used to construct
the DCCD system as shown in Fig. 5. To acquire clear four-
spectral iris images, the two CCDs have to be aligned to have
Fig. 4. Two NIR iris images selected from the Connecticut Alarm and
Systems Integrators Association (CASIA) iris database with low outer iris
approximately the same focal plane. However, since the RGB
boundary contrast. image is mainly used to assist the iris segmentation process,
if the RGB image is not too blurry, the segmentation result
will be satisfactory. Therefore, only a rough alignment of the
4) A new multiresolution edge-type descriptor is proposed CCDs is required and the total cost of the DCCD imaging
to represent the signature of an iris pattern. The edge system can be made very low.
types considered in this paper include the ridge edge and The timings of the two CCDs are synchronized to ensure
the step edge which are detected using the Laplacian of that the color and the NIR iris images are temporally aligned.
Gaussian (LoG) and the derivative of Gaussian (DoG) Furthermore, a calibration process is used to spatially align
filters, respectively. A simple and efficient iris recogni- the color and the NIR iris images. Since the CCDs are located
tion method based on classifier ensembles is proposed. behind the lens, the nonlinear optical distortion of the lens can
The feature extraction method is very efficient and ex- be neglected. Therefore, the mappings from the NIR image to
ceptionally simple and can provide intuitive explanation the color image and vice versa are linear and one-to-one which
of the reasons why the proposed method can achieve a can be represented as a homography matrix. The homography
high recognition rate. can be easily estimated with a calibration target containing
The rest of this paper is organized as follows (refer to Fig. 3 some fiducial markers. A lookup table for the image coordinate
for the flowchart of the proposed method that depicts the mapping can be pre-calculated to speed the image alignment.
relationships among the methods described in the following In the subsequent description, the alignment of the RGB and
sections). Section II describes a new iris imaging system with NIR CCDs is assumed completed.
a DCCD camera and the properties of the acquired four- Fig. 6 shows the three sets of typical NOVA iris images
spectral iris image. Section III explains the iris segmentation acquired using the DCCD camera equipped with a Canon VF
method using four-spectral iris images. Section IV discusses 75-mm lens. The horizontal and the vertical fields of view of
the multiresolution edge-type descriptor for representing an the imaging system are about 4.8° and 3.6°, respectively. The
iris pattern. Section V describes the experimental results of lens is adjusted to focus at a distance of 300 mm and the depth
the proposed method. Conclusion is given in Section VI. of field is about 10 mm. Notably, in the images shown in the
second and the third rows, the orientations of some portions
of the upper eyelid boundary are almost parallel to the outer
II. Iris Imaging System With a DCCD Camera iris boundary, which can increase the iris segmentation error.
Most of the iris recognition systems adopt an NIR camera To solve this problem, a new method for detecting NOVA iris
and a non-coaxial NIR light source to acquire iris images. The boundaries using the four-spectral iris images is proposed in
non-coaxial NIR light source yields a dark pupil image so the the next section.

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
CHOU et al.: NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM 421

B. Circle Rectification
From the coefficients of (1), the following geometrical
parameters of the pupil ellipse can be determined:
1) the ellipse center (xe , ye );
2) the major axis defined by its length ra and a unit
orientation vector ax , ay ;
3) the minor axis defined by its length rb and a unit
orientation vector bx , by .
Based on the weak-perspective assumption, an affine trans-
formation determined by using the above elliptic parameters
can be applied to the iris image such that the rectified inner
Fig. 6. Left two columns show the original iris images. The right two boundary approaches a circle. We call this image mapping the
columns show the rectified four-spectral iris images.
circle rectification process. The circle rectification process is
useful for both OVA and NOVA iris images. For OVA iris
III. Iris Segmentation With Four-Spectral Iris images, the circle rectification process can unify the pixel
Images aspect ratio so iris images acquired from different cameras can
be correctly processed. For NOVA iris images, determining the
A. Pupil Boundary Detection
outer iris boundary in a circle-rectified image can be simplified
In general, the distance between the iris and the camera is as a circle detection problem with fewer parameters than the
much greater than the radius of the iris and thus the weak- original ellipse detection problem. One way to perform circle
perspective projection is a reasonable camera model. Since rectification is to rotate the input image such that the major and
the inner boundary is approximately circular, either NOVA or the minor axes align with the vertical and the horizontal axes,
non-square pixels will make the acquired pupil approximately respectively. The rotated image is then stretched horizontally
elliptic. Therefore, an inner iris boundary can be modeled as to make the minor axis as long as the major one. Finally, the
an elliptic curve. Notably, the difficulty of pupil detection is image is rotated back to its original orientation. The above
determined by the placement of light sources because the glints three-step circle rectification process can be represented as an
in the cornea area will usually complicate the pupil detection affine transformation matrix given by
problem. This paper uses an off-axis NIR light source (SONY
HVL-IRH2) placed below the camera to produce the dark T = Tr Ts T−1
r (2)
pupil effect, as is shown in the first column of Fig. 6. The light
where
source placement depresses the NIR glint (refer to Fig. 6) and ⎡ ⎤
greatly simplifies the pupil detection problem. Our method ax bx xe
for determining the pupil boundary is summarized in the Tr = ⎣ ay by ye ⎦ (3)
following. 0 0 1
1) Determine a threshold value for segmenting the dark and
pupil using a method modified from [56]. A moving ⎡
1 0 0

average operation is applied to the NIR image and the Ts = ⎣ 0 ra
0 ⎦. (4)
rb
lowest intensity value, denoted as µs , of the filtered
0 0 1
result is recorded which represents the average intensity
of the darkest sub-image block. The threshold value is The third and the fourth columns of Fig. 6 show the
determined as tp = µs +µ 2
w
, where µw is the average results of the circle rectification process, while the first two
intensity of the whole image. A binary image can be columns are the original images. The rectified results verify
computed from the NIR image using the threshold tp . the adequacy of the weak-perspective assumption.
2) Pupil area candidates are computed using the connected
component analysis technique. Dark areas which are too C. Intelligent Random Sample Consensus (RANSAC) Iris
large or too small are discarded. For the remaining dark Segmentation Method
areas, only the one closest to the image center is selected Most of the traditional iris segmentation methods use edge
as the final pupil candidate. orientations to classify an image edge pixel as limbus bound-
3) The ellipse fitting algorithm proposed by Fitzgibbon ary or eyelid boundary and, thus, they are not suitable for
et al. is used to estimate the coefficients of the following processing the NOVA iris images. In this paper, we propose to
quadratic equation using the boundary of the final pupil use four-spectral measurements rather than edge orientations
candidate [57] to classify different types of iris boundaries. In general, human
skin, iris, and sclera possess very different spectral-reflectance
ax2 + 2bxy + cy2 + 2dx + 2ey + f = 0. (1) properties. For example, the human skin (independent of
race) has relatively high reflectance for the red light when
If the ellipse fitting error is less than a threshold, then compared with the reflectance for the blue and the green light
the dark area is presumed as the pupil area. [58]. The human iris has high reflectance for near infrared

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
422 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO. 3, MARCH 2010

Fig. 7. Flowchart of the IRIS-4 algorithm.

light [59], while the reflectance of human sclera shows little


wavelength dependence in the range from 442 nm to 1064 nm
[60]. The distinct spectral-reflectance properties suggest that Fig. 8. Feature maps of an edge point with respect to eight orientation
classifying eyelid, iris, and sclera using a four-spectral image angles.
is very promising. However, the exact mathematical four-
spectral models to distinguish different parts of a human eye
are unknown and have to be learned from a set of training data.
The method of Boyce et al. [55] is an online learning approach.
The parameters of the color distributions have to be estimated
using different areas of the input image and then every pixel
in the image has to be classified, which makes their method
very time consuming. Also, when the corneal reflection of the
environment infects the online training data, the segmentation Fig. 9. Normalized iris images and mask images of an iris pattern acquired
result will be inaccurate. Therefore, a pre-trained classifier at two different views, 30° apart.
which only processes sparse image edge pixels will greatly
improve the segmentation efficiency and accuracy. sq are statistics of {p1 , ..., pn } and {q1 , ..., qn }, respectively.
This paper proposes an intelligent RANSAC iris segmenta- The dimension of the feature vector depends on what statistics,
tion (IRIS) method using four-spectral iris images for detecting such as mean, median, and/or variance, are adopted.
the outer iris and the eyelid boundaries, where RANSAC is A classifier is trained with a set of manually labeled feature
the abbreviation of the random sample consensus algorithm vectors to distinguish the boundary candidates into outer iris
[61]. For convenience, the proposed iris segmentation method boundary pixels and noises. Images in the training set should
will be referred to as the IRIS-4 method hereafter where the contain human eyes of different skin and iris colors for the
suffix “-4” denotes that this method uses a four-spectral iris method to work well. If the number of classified outer bound-
image. Fig. 7 shows the flowchart of the IRIS-4 method. ary pixels is greater than three, then the RANSAC algorithm
The basic idea of the IRIS-4 method is to design a classifier can be used to determine an outer boundary circle in the
to remove non-target edge pixels using four-spectral measure- rectified iris image. This method can also be applied to detect
ments before a curve model is fitted to the edge pixels using the eyelid boundaries by replacing the circular boundary model
RANSAC. Iris boundary candidates are computed based on the with the parabolic model. Additionally, another classifier is
Canny edge detector [62]. The edge pixels are determined as trained to classify the eyelid boundary pixels using the same
the union of the edge pixels computed in the NIR, red, green, local features. If the number of computed eyelid boundary
and blue channels. Edge pixels around glints are discarded pixels is not less than three, then the RANSAC algorithm can
because both the chromatic and the intensity values of a pixel be used to find a parabolic eyelid boundary curve.
around glints are distorted. Also, edge pixels around the pupil
boundary are discarded as the outer iris boundary should not D. Iris Normalization
overlap the inner boundary. After detecting the iris boundaries, the iris region can be
For each edge pixel, a feature vector is constructed based mapped to a normalized polar coordinate system using the
on the pixel values around the edge. The edge orientation is rubber-sheet method [1]. Let N (ρ, φ) and NM (ρ, φ) denote the
roughly grouped into eight directions. For each of the edge normalized image and mask, respectively, where NM (ρ, φ) is
orientations, a pre-computed mask as shown in Fig. 8 is used to a Boolean variable representing that N (ρ, φ) is not occluded
extract the four-spectral values of 2n feature pixels, n pixels in by any glint or eyelid. Fig. 9 shows two typical normalized
each side of the edge point. In Fig. 8, the edge pixel is marked images and their corresponding mask images of an iris pattern.
with a ‘×’. Feature pixels on the pupil side are marked with
a dark-gray color, whereas feature pixels on the other side are
marked with a light-gray color. Let the four-spectral values IV. Iris Feature Extraction
(4-D vectors) of the dark-gray pixels and the light-gray pixels A. Edge-Type Feature Descriptor
be denoted by pi and qi , i = 1, ..., n, respectively. The feature
  ⊤ Human iris patterns usually contain dense edges of different
vector of each edge pixel is defined as sp⊤ sq⊤ , where sp and types and most of them can be classified as either a step

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
CHOU et al.: NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM 423

edge or a ridge edge. Therefore, this paper proposes to two to three, and the other is to use a mask flag to ignore the
characterize an iris pattern using an edge-type descriptor [63]. computation results at this location. While the former approach
Since the uncertainty of the estimated outer iris boundary is is very attractive because it can encode the flat region of the
large due to noise and eyelid occlusion, the vertical location iris pattern, increasing the quantization levels will make the
of a horizontal edge detected in a normalized iris image is edge-type detection results sensitive to the contrast of the input
not stable. Therefore, only vertical edges are considered in iris image which, however, is undesirable. Hence, the second
this paper. The edge-type flag of each site on an iris can be solution is preferable and a mask flag is inducted for each of
determined with the DoG and the LoG filters given by the Boolean variables. Thus, the ℓ-th level

edge-type descriptor
2 2 can be expressed as Sℓ , MSℓ , Rℓ , MRℓ , where MSℓ and MRℓ
∂ −π[ 2αφ 2 + 2βρ 2 ] ℓ
DoG (ρ, φ) = e ℓ ℓ (5) are the validity maps of Sℓ and Rℓ , respectively. If too many
∂φ invalid pixels are involved in the convolution computation
and at (ρ, φ), then both MSℓ (ρ, φ) and MRℓ (ρ, φ) should also be
2
∂2 −π[ 2αφ 2 + 2βρ 2 ]
2 cleared.
LoGℓ (ρ, φ) = − e ℓ ℓ (6) Let nℓ denote the number of different scale levels. The com-
∂φ2 plete edge-type descriptor of an iris pattern can be represented
where αℓ and βℓ determine the width and the height of as  {F, M}, where the edge-type flag F and the mask M
the filters, respectively. The negative sign on the right-hand are defined as follows:
side of (6) is introduced such that the filtered response of a
F = Sℓ , Rℓ |ℓ = 1, ..., nℓ


positive ridge edge is also positive. To obtain multiscale edge (7)
information, αℓ is set to αℓ = 2ℓ α0 where ℓ is an integer and
representing the scale level, and α0 is a design parameter.
MSℓ , MRℓ |ℓ = 1, ..., nℓ .


Conversely, βℓ controls the smoothness of the filtered output M= (8)
in the radial direction. This parameter can be set to 2ℓ β0 , For convenience, the complete edge-type descriptor is referred
where β0 is another design parameter. The genetic algorithm to as the ETCode hereafter.
is used to design α0 and β0 using a set of training iris images
[63]. Since the LoG filter is separable, the 2-D filtering can B. Iris Recognition Using ETCodes
be decomposed into two 1-D filtering passes. In pass one, the
Each bit of the ETCode, Sℓ (ρ, φ) /Rℓ (ρ, φ), can be used
input image is smoothed along the radial direction, and in pass
to design a weak classifier which categorizes people in the
two, variation along the angular direction of the smoothed
world into two groups—one group in which the people have
image is calculated with a 1-D LoG (Mexican hat) filter.
a step/ridge edge at (ρ, φ) in their iris, while the people in
Notably, the multiscale 1-D LoG filters form a non-orthogonal
the other group do not. Although the recognition rate of the
continuous wavelet basis [64], which have been widely used in
weak classifier is not high, the accuracy can be improved by
many computer vision applications. However, due to memory
incorporating more and more such weak classifiers. This is
and computation time limitations, only a few levels of the LoG
a very popular technique known as classifier ensembles [65].
filters can be used to extract iris features. Since a set of finite
The use of edge features for iris recognition was first proposed
LoG filters cannot form a complete basis, the DoG filters are
by Daugman [1]. His IrisCode uses the phase response of
adopted to provide supplementary information.
Gabor filters which can encode the edge features. Also, Sun
The edge-type flag is calculated by convolving the input
et al. proposed to use the ordinal filters to encode the edge
image with the LoG/DoG filters. Let Dℓ (ρ, φ) = DoGℓ (ρ, φ)∗
features [33]. However, both the Gabor filter and the ordinal
N (ρ, φ) and Lℓ (ρ, φ) = LoGℓ (ρ, φ)∗N (ρ, φ). The superscript
filter contain more design parameters than the LoG/DoG
ℓ denotes the scale level of the filters. Because both DoG
filters proposed in this paper. Therefore, designing optimal
and LoG are band-pass filters, high frequency components of
parameters using either the Gabor filter or the ordinal filter
an image are filtered out. Thus, the processed image can be
is more difficult than designing optimal LoG/DoG filters.
sub-sampled without sacrificing any information due to the
The benefit of our approach is that the edge-type feature is
sampling theorem. If the dimensions of the sub-sampled first
very simple and it can achieve a recognition rate which is
level edge intensity maps, i.e., D1 and L1 , are H1 ×W1 , then the
comparable to that of the state-of-the-art methods.
sub-sampled ℓth level edge intensity maps will be H1 /βℓ /β1 ×
Assume that 1 is an ETCode enrolled to the system and
W1 /αℓ /α1 .
that 2 is an input ETCode for identification. The distance
When Dℓ (ρ, φ) > 0 Lℓ (ρ, φ) > 0 , the possibility that an
between 1 and 2 is defined as the following normalized
ℓth level step (ridge) edge located at (ρ, φ) is high. ℓ There-
Hamming distance:
fore, we use two binary variables S ℓ
(ρ, φ)  u D (ρ, φ)
|(F1 ⊕ F2 ) · (M1 · M2 )|
and Rℓ (ρ, φ)  u Lℓ (ρ, φ) to record the computed edge-

D( 1 , 2 )  (9)
type flags, where u (·) is the unit step function. When the |M1 · M2 |
image
ℓ intensity
around
(ρ, φ) is approximately constant, both where ⊕ and · are the bitwise-XOR and the bitwise-AND
D (ρ, φ) and Lℓ (ρ, φ) are close to zero. In this case, operation, respectively. When D( 1 , 2 ) is smaller than a
the computed edge-type flags will be sensitive to noise. Two threshold, 1 and 2 are classified as the same iris class.
approaches can be used to remedy this problem. One is to The normalized Hamming distance reflects the proportion of
increase the quantization levels of Dℓ (ρ, φ) and Lℓ (ρ, φ) from weak classifiers, indicating that the two ETCodes do not

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
424 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO. 3, MARCH 2010

belong to the same class. However, since (9) does not consider TABLE I
the effect of the in-plane rotation of the eye, the matching EER (%) of the CASIA-V3-Interval and UBIRIS-V1
result may be incorrect. When the in-plane rotation angle
is ψ, the normalized iris image and its corresponding mask LoG/DoG Gabor
image can be represented as N (ρ, φ + ψ) and NM (ρ, φ + ψ), CASIA-V3-Interval 0.031 0.087
˜ ψ. UBIRIS-V1 0.258 0.244
respectively. Let the resulting ETCode be denoted by
˜ ψ
Notably, can be predicted by rotating the edge-type de-
scriptors
1 1 of 1 horizontally such that if the level-one descriptor,

S , MS , R , MR1 , is rotated by k bits, then Sℓ , MSℓ , Rℓ , MRℓ



is rotated by k/2ℓ−1 bits. Rotating the ETCode is equivalent


to rotating small binary feature maps of the ETCode, which is
very efficient. If ψ is known, then k can be determined as the
Fig. 10. First row shows the LoG (left) and the DoG (right) filters and the
integer closest to ψ/δψ, where δψ = 2π/W1 denotes the angu- second row shows the real part and the imaginary part of the Gabor filter. For
lar resolution of the H1 × W1 level-one edge-type descriptor. comparison, the intensity of the imaginary part of the Gabor filter is inverted.
However, since ψ is usually unknown, an exhaustive search
for k within a pre-specified range has to be performed. Thus,
the distance measure is modified as follows:
k δψ
˜ 1 , 2 )
D̃( 1 , 2 ) = min D( (10)
−K≤k≤K

where K is an integer specifying the exhaustive search range.


Given the maximum in-plane rotation angle, ψmax , K can
be determined as ⌈ψmax /δψ⌉. Equation (10) is equivalent
to rotating the ETCodes in the horizontal direction until a
minimum distance is obtained. Fig. 11. DET curves of the LoG/DoG filters and the Gabor filters.

B. Evaluation Setup and Dataset


V. Experiments
Fig. 12 shows the iris imaging system used in our ex-
A. Recognition Accuracy Tested Using Two OVA Iris periments. For providing suitable illumination for the DCCD
Databases camera, both the NIR and the visible light sources are included
Fig. 10 shows the best filters designed using the genetic in this imaging system. Hot mirrors mounted in front of the
algorithm [63]. Interestingly, the real part and the imaginary visible light sources are used to prevent extra corneal glints
part of the best Gabor filter are very similar to the best in the NIR image. The visible light sources are positioned
LoG/DoG filters. The performance of the designed filters is to minimize the uncomfortableness of subjects. The monitor
assessed using the detection error trade-off (DET) curve [66] is mounted on a translation stage. Totally, 100 Asian iris
as shown in Fig. 11. The DET curves are computed using patterns are acquired for testing. Each subject participated in
the CASIA-V3-Interval (2651 images of 249 iris patterns the experiment is asked to keep staring at the center of a
tested) [67] and UBIRIS-V1 (835 images of 147 iris patterns specified area on the monitor where the acquired iris image is
tested) [68]. Since images of the two databases contain glints shown to form a visual feedback loop. The subject may need
in the pupil area, a pupil detection method which is more to adjust his/her head position to assist the system acquiring a
complicated than the one described in Section III-A is used in clearly focused iris image using the visual feedback loop. For
this experiment. each input image, the method described in Section III-A is
The EER of the LoG/DoG and the Gabor filter approach is used to detect the pupil boundary. When the pupil is detected,
shown in Table I. The results show that the proposed LoG/DoG a sharpness measure (sum of the edge strengths) computed
filter approach can achieve a recognition rate comparable to along the pupil boundary is used to assess the quality of the
that of the Gabor filter approach. The EER computed using iris image. If the sharpness measure is greater than a thresh-
UBIRIS-V1 is worse than that computed using CASIA-V3- old, then the iris image is considered as a high quality iris
Interval because the session-two iris images of UBIRIS-V1 image and is included in our iris database. By controlling the
contain non-negligible corneal reflection noises. If the session- monitor position, iris images with respect to seven horizontal
two data are excluded in the test (the remaining session-one orientations θh ∈ {−30°, − 20°, ..., 30°} (see Fig. 13) are
data still contain 704 iris images of 145 iris patterns), then the grabbed. The tilt angle of the DCCD camera, θv , is about 35°.
computed EER is zero. Since the dark irises are eliminated in For each eye of each subject, eight images are acquired for
the experiment because of pupil detection failure, the intensity each off-axis angle resulting an iris database of 5600 four-
contrasts of the tested iris patterns are all sufficiently high. The spectral iris images. The pupil detection algorithm described
zero EER of the UBIRIS-V1 session-one data confirms that the in Section III-A achieves zero error rate in the experiments,
iris recognition rates are irrelevant to the hues of iris patterns and all the images in the iris datasets are rectified as described
because iris hues are ignored in extracting the iris features. in Section III-B. The proposed NOVA iris recognition system

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
CHOU et al.: NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM 425

Let D̂ and D denote the circular disk binary functions


defined by the estimated and the manually labeled outer iris
boundaries, respectively. The estimation error of the outer iris
boundary is computed using the normalized Hamming distance
of the disk functions D̂ and D given by

2 D ⊕ D̂dxdy
e(D, D̂) = R  . (11)
R2 Ddxdy

Also, the inaccuracy of the estimated eyelid boundaries is com-


puted using the normalized Hamming distance between the
mask images computed with the estimated eyelid boundaries

Fig. 12. NOVA iris imaging system.
NM (ρ, φ) and manually labeled boundaries NM (ρ, φ), respec-
tively. Fig. 14 shows the normalized Hamming distances of
the iris segmentation results computed using IRIS-4, IDO, and
HT methods, respectively. For comparison, the segmentation
results of IDO and HT methods obtained using NIR-along
images are also included in Fig. 14, where IRIS-4, IDO-4,
and HT-4 denote the data computed using four-spectral iris
images whereas IDO-N and HT-N denote the data computed
by using NIR-along iris images. The IDO-4 method maximizes
the summation of the four IDOs of the RGB and NIR images,
and the HT-4 method computes the circular Hough transform
Fig. 13. Schematic of NOVA iris imaging system with seven different off- using the union of the edge pixels computed in the RGB and
axis angles. NIR images. The experimental results lead to the following
conclusions.
requires to enroll only one iris image per person acquired at 1) HT-4 and IDO-4 methods outperformed their counter-
θh = 0°. parts, HT-N and IDO-N, which means that the DCCD
The present implementation of the IRIS-4 method uses the camera does improve the outer boundary detection ac-
support vector machine (SVM) [69], [70] to detect the target curacy.
iris boundaries. The number of feature pixels on each side of 2) Using circular Hough transform to detect outer iris
an edge pixel, n, is set to six in the experiment. For simplicity, boundaries is less accurate because the outer boundaries
the feature vector of each edge pixel is constructed directly are generally not circular.
using the 12 four-spectral pixel values resulting a 48-D feature 3) When applying either IDO-4 or IDO-N to estimate the
vector (see Fig. 8). In order to train the SVMs, ten iris images outer iris boundary in an image with a small off-axis
are acquired with five new subjects. The iris patterns of the five angle, the vertical position error is usually larger than the
subjects are not included in the testing database. Each of the horizontal error because the double-circular-arc model
five subjects is asked to arbitrarily select two off-axis angles ignores the upper and the lower portions of the outer
and contributes two training images. The edge pixels of the boundary (see the first row of Fig. 15).
ten iris images are manually labeled as iris boundary, eyelid 4) When the off-axis angle is large, the orientation of
boundary, or noise, respectively. Therefore, labeled training the upper eyelid and the outer iris boundary become
data can be extracted from the training iris images. indistinguishable (see the second row the Fig. 15), which
Three experiments have been conducted to assess the effec- increases the estimation errors of the IDO method
tiveness of the proposed method. The first experiment tests whereas the IRIS-4 is insensitive to this kind of noise.
the performance of iris segmentation methods using NIR- 5) The proposed IRIS-4 algorithm reduces roughly 45%
along or four-spectral iris images. The second experiment tests mean error of the NOVA outer boundary when compared
the NOVA iris recognition error by enrolling and testing iris with the other four methods. It will be shown in the
images acquired at different view angles. The last experiment next subsection that this improvement in outer boundary
explores the EERs of different NOVA iris recognition methods. detection accuracy greatly meliorates the NOVA iris
recognition rate.
C. Performance of Iris Segmentation Methods 6) Eyelid detection using the four-spectral images does
In additional to the proposed IRIS-4 method, Daugman’s not improve both the HT and IDO methods. Specu-
IDO method, and Wildes’ HT method are implemented for lar corneal reflection of the eyelid, eyelash and/or the
comparison. In order to assess the performance of the three environment recorded in the RGB images introduce
different iris segmentation methods, the iris boundaries of 20 more image noise degrading the eyelid detection ac-
arbitrarily selected iris patterns with 56 images per iris pattern curacy. Conversely, because most of the image noise
collected at seven view angles are manually labeled, resulting can be eliminated by the iris boundary classifier, the
1120 iris images with reference boundary data. proposed IRIS-4 method can also reduce about 50%

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
426 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 20, NO. 3, MARCH 2010

TABLE II
EER (%) of the Ten Configurations Evaluated at θh = 0°

Segmentation Methods
Filter Banks IRIS-4 IDO-4 IDO-N HT-4 HT-N
LoG/DoG 0 0 0 0.00 0.00
Gabor 0 0 0.00 0.01 0

boundaries of 1200 iris images selected from the session one


of UBIRIS-V1 are manually labeled as reference data. Many
dark or out-of-focus iris patterns which were not used in
Section V-A are included in the segmentation performance
test. The segmentation results are shown in Fig. 16 where the
suffix number of the segmentation methods has been changed
to “3” because only RGB images were used in this experiment.
Notably, the iris colors in the UBIRIS-V1 database are diverse.
Training of the IRIS-3 method requires to use iris images of
different colors. Therefore, we manually label the iris patterns
with blue-or-green (280 images of 56 iris patterns), light-
brown (489 images of 99 iris patterns), and dark (431 images
Fig. 14. (a) Means and (b) standard deviations of the outer boundary errors, of 86 iris patterns). From each of the three iris color categories,
and (c) means and (d) standard deviations of the eyelid errors computed using five images are randomly selected to constitute a training set
IRIS-4 (∗), IDO-4 (◦), IDO-N (ⵧ), HT-4 (+), and HT-N (⋄), respectively.
of 15 images. This process is repeated three times to test
if the segmentation results are sensitive to the selection of
training sets. According to the data shown in Fig. 16, the
performance of IRIS-3 using the three random training sets
is better than that of the other methods. Therefore, the perfor-
mance of IRIS-3 is insensitive to the selection of the training
sets. The BOYCE-3 method slightly outperforms the HT-3
method in this test. However, the BOYCE method is not
suitable to our database because the non-negligible corneal
Fig. 15. Typical outer iris boundary estimation results computed using IRIS- reflection significantly degrades the segmentation results.
4, IDO-4, IDO-N, HT-4, and HT-N, respectively. The yellow circle is the
manually determined reference boundary.
D. Recognition of NOVA Iris Patterns
Before evaluating the NOVA iris recognition accuracy, a
simpler test using only iris images acquired at θh = 0° is
conducted to verify that the implementations of the five iris
segmentation methods are satisfactory. Table II shows the
evaluated EERs of the ten recognition configurations (five
segmentation methods and two filter banks). Note that 0 and
0.00 in Table II denote the exact zero and the rounded zero
values, respectively. The low EERs of the ten configurations
confirm that our implementations of the iris segmentation
algorithms are correct.
Fig. 16. (a) Means and (b) standard deviations of the outer boundary errors Evaluation of the ten NOVA iris recognition configurations
computed using IRIS-3 (with three random training sets, R1, R2, R3), IDO-3,
HT-3, and BOYCE-3, respectively. is accomplished by using five-fold cross-validation for which
20% of the iris patterns are selected as training samples and
the remaining 80% iris patterns are used as test samples. For
of the eyelid detection errors using the four-spectral each of the ten configurations, the threshold value yielding
images. the EER of classifying the training data is selected as the
In order to test the segmentation performance using non- decision boundary. Fig. 17 shows the false non-match rates
Asians iris patterns in the UBIRIS-V1, the four-spectral meth- (FNMRs) and the false match rates (FMRs) [66] of the ten
ods, IRIS-4, IDO-4, and HT-4, are all modified to omit the configurations. The data points shown in Fig. 17(a) and (b)
NIR channel. Furthermore, since the corneal reflection noise are computed using 11 200 and 25 600 genuine tests for
in the iris area of the session one images of the UBIRIS-V1 θh = 0° and θh
= 0°, respectively. Each FMR value shown
is negligible, the method of Boyce et al. [55] is suitable in Fig. 17(c) and (d) is computed using 1 011 200 imposter
for the data set. Therefore, we have also implemented the tests. This experiment shows that the proposed IRIS-4 method
BOYCE method for comparison in this experiment. The iris does improve the iris recognition rate in the NOVA situation.

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:4653 UTC from IE Xplore. Restricon aply.
CHOU et al.: NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM 427

Fig. 17. FNMRs the FMRs with respect to different off-axis angles com-
puted using five iris segmentation methods and two filter banks. (a) FNMR
and (c) FMR of the LoG/DoG filter bank. (b) FNMR and (d) FMR of the
Gabor filter bank.

Notably, when the off-axis angle is large, both the increase


of the iris segmentation errors and the non-coplanarity of
the iris pattern will degrade the recognition rate. Therefore,
the FNMR is proportional to the off-axis angle. However, the
maximal FNMR and the maximal FMR of the proposed
method using the edge-type descriptor are still less than 0.48%
and 0.08%, respectively. When the off-axis is within ±30°,
the configuration using the IRIS-4 segmentation method and
the LoG/DoG filter bank outperforms the other configurations.
This confirms that the proposed method is very promising.
Fig. 18 shows the DET maps (FNMR versus FMR with respect
to different off-axis angles) of the ten configurations. From
Fig. 18, it is obvious that the IRIS-4 segmentation method
yields the most accurate NOVA recognition results. The EERs Fig. 18. Off-axis angle dependent DET map of the NOVA recognition results
corresponding to different off-axis angles are marked with a computed using five segmentation methods (from top to bottom IRIS-4, IDO-
4, IDO-N, HT-4, and HT-N) with (a) LoG/DoG filter bank, and (b) Gabor
green ‘∗’. Note that the EERs are computed for a specific off- filter bank. The horizontal axes are the FMR (%) while the vertical axes are
axis angle, so their values are much lower than the error rates the off-axis angles. The FNMR (%) are represented using pseudo colors.
computed using a global threshold.
To compute the EERs of the ten NOVA iris recognition configurations, for each iris pattern, the eight iris images acquired at θh = 0° are enrolled one at a time in turn, and the other iris images not enrolled are divided into a genuine group and an impostor group for computing the FNMR and the FMR, respectively. Fig. 19 shows the DET curves with respect to the ten NOVA iris recognition configurations. Each data point shown on the DET curve is computed with 41 200 genuine tests and 2 217 600 impostor tests. The EER of each configuration is listed in Table III and is also marked on the corresponding DET curve in Fig. 19. The EERs also show that an accurate iris segmentation method is crucial in a NOVA iris recognition system. Furthermore, the proposed LoG/DoG filter banks outperform the Gabor filter banks designed in [63] in this test. The IRIS-4 method combined with the LoG/DoG filter banks achieves the lowest EER, i.e., 0.04%, which is much smaller than that of the other approaches.

Fig. 19. DET curves of the NOVA recognition results computed using five segmentation methods with (a) the LoG/DoG filter bank and (b) the Gabor filter bank.

TABLE III
EER (%) of the Ten NOVA Iris Recognition Configurations

                      Segmentation Methods
Filter Banks   IRIS-4   IDO-4   IDO-N   HT-4   HT-N
LoG/DoG          0.04    0.17    0.36   0.44   0.37
Gabor            0.09    0.21    0.55   0.60   0.52


E. Computation Time

Table IV shows the computation time of every step of the iris recognition process, computed on a PC running Windows XP with an Intel Core 2 Duo 4400 processor and 1 GB RAM. All methods are implemented in C/C++. The average iris segmentation time of the BOYCE-3 method is 1635 ms; it is not listed in Table IV because this method does not explicitly estimate the eyelid boundaries. Although the proposed IRIS-4 method requires more computation time, the total computation time for each input iris pattern is still less than 0.6 seconds. Therefore, the additional computation time will not reduce the applicability of the proposed method.

TABLE IV
Average Computation Time (in ms) of the Iris Recognition Process

Pupil detection                                       4
Circle rectification                                 18
Iris segmentation        IRIS-4   IDO-4   IDO-N   HT-4   HT-N
  Limbus detection          338     137      80    163    143
  Eyelid detection          140      29      15    101     93
Iris normalization                                    5
Feature extraction                                   53
Classification                                      0.5
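Per-step averages such as those in Table IV can be gathered with a simple timing harness; below is a generic std::chrono sketch (the stage function and repetition count are placeholders of our own, not the authors' instrumentation).

    #include <chrono>
    #include <cstdio>

    void run_stage() { /* one pipeline step, e.g., pupil detection */ }

    int main() {
        const int kRuns = 100;  // average over repetitions to smooth OS jitter
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < kRuns; ++i) run_stage();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("average stage time: %.3f ms\n", ms / kRuns);
        return 0;
    }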
VI. Conclusion and Future Work

Automatic iris recognition has been studied for more than a decade. Many recognition techniques have been developed, and some commercialized systems are available now. However, most of the existing iris recognition systems are developed under the OVA that the optical axis of the camera is approximately aligned with the normal vector of the pupillary circle. Therefore, when the OVA is not valid, those methods are not applicable. In this paper, we have proposed a new solution to the NOVA iris recognition problem. To increase the robustness of the iris segmentation results, a dual-CCD camera was developed to acquire four-spectral iris images. To utilize the enriched iris boundary information contained in a four-spectral iris image, a simple and efficient iris segmentation method was proposed for accurately detecting the iris boundaries. To simplify the outer iris boundary estimation problem, a circle rectification process was introduced to convert a NOVA iris image into an approximately OVA iris image so that a circular outer boundary model can be applied.
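The geometry of such a rectification can be pictured with a small sketch (our own simplification under assumed parameters, not the authors' exact formulation): if the detected pupillary boundary is an ellipse with semi-axes a ≥ b and orientation φ, the affine map R(φ)·diag(1, a/b)·R(−φ) stretches the image along the ellipse's minor axis so that the pupillary boundary becomes a circle of radius a, approximately undoing the off-axis foreshortening.

    #include <cmath>
    #include <cstdio>

    // Build the 2x2 affine matrix mapping an ellipse (semi-axes a >= b,
    // orientation phi, centered at the origin) onto a circle of radius a:
    // M = R(phi) * diag(1, a/b) * R(-phi).
    void rectification_matrix(double a, double b, double phi, double M[2][2]) {
        double c = std::cos(phi), s = std::sin(phi), k = a / b;
        M[0][0] = c * c + k * s * s;
        M[0][1] = (1.0 - k) * s * c;
        M[1][0] = (1.0 - k) * s * c;
        M[1][1] = s * s + k * c * c;
    }

    int main() {
        // Hypothetical pupil ellipse from an off-axis (NOVA) image.
        double M[2][2];
        rectification_matrix(120.0, 90.0, 0.3, M);
        // The minor-axis endpoint of the ellipse is pushed outward so that
        // the rectified pupillary boundary becomes circular (radius 120).
        double x = 90.0 * -std::sin(0.3), y = 90.0 * std::cos(0.3);
        std::printf("(%.1f, %.1f) -> (%.1f, %.1f)\n",
                    x, y, M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y);
        return 0;
    }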
Furthermore, we have proposed a new edge-type iris feature descriptor which is also very simple and efficient. The new iris descriptor characterizes an iris pattern with multiscale edge-type maps detected using the DoG and LoG filters. Matching of the edge-type maps of two iris patterns was formulated as an ensemble of weak classifiers. Real experiments have been conducted, and the results show that the proposed NOVA iris recognition system can achieve a high recognition rate. Specifically, the EER of the proposed system computed using iris images acquired at different off-axis angles ranging from −30° to +30° is 0.04%, which means that our method is very promising.
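The descriptor and the matcher can be pictured with a small 1-D, single-scale sketch (our own simplification with tiny stand-in kernels, not the paper's exact filters): the signs of the DoG (step-edge) and LoG (ridge-edge) responses give each sample an edge-type code, and matching counts the agreeing codes, with every position acting as one weak classifier in the ensemble.

    #include <cstdio>
    #include <vector>

    // Convolve a 1-D signal with a small kernel (zero-padded borders).
    std::vector<double> conv(const std::vector<double>& x,
                             const std::vector<double>& k) {
        int r = static_cast<int>(k.size()) / 2;
        std::vector<double> y(x.size(), 0.0);
        for (size_t i = 0; i < x.size(); ++i)
            for (size_t j = 0; j < k.size(); ++j) {
                int p = static_cast<int>(i) + static_cast<int>(j) - r;
                if (p >= 0 && p < static_cast<int>(x.size()))
                    y[i] += k[j] * x[p];
            }
        return y;
    }

    // Edge-type code per sample: bit 0 = sign of the step-edge (DoG-like)
    // response, bit 1 = sign of the ridge-edge (LoG-like) response.
    std::vector<int> edge_types(const std::vector<double>& x) {
        std::vector<double> dog  = {-1.0, 0.0, 1.0};   // crude derivative
        std::vector<double> log_ = {1.0, -2.0, 1.0};   // crude Laplacian
        std::vector<double> d = conv(x, dog), l = conv(x, log_);
        std::vector<int> code(x.size());
        for (size_t i = 0; i < x.size(); ++i)
            code[i] = (d[i] > 0 ? 1 : 0) | (l[i] > 0 ? 2 : 0);
        return code;
    }

    int main() {
        // Two hypothetical normalized iris rows from the same eye.
        std::vector<double> a = {3, 5, 9, 8, 4, 2, 6, 7};
        std::vector<double> b = {3, 6, 9, 7, 4, 2, 5, 7};
        std::vector<int> ca = edge_types(a), cb = edge_types(b);
        int votes = 0;  // each position is one weak classifier of the ensemble
        for (size_t i = 0; i < ca.size(); ++i) votes += (ca[i] == cb[i]);
        std::printf("matching score: %.2f\n", double(votes) / ca.size());
        return 0;
    }

In the paper the maps are two-dimensional and multiscale, but the principle is the same: many individually weak per-position agreements are pooled into one strong match decision.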
Although the proposed non-orthogonal view iris recognition system can provide satisfactory NOVA iris recognition results, it has a few disadvantages compared with traditional iris recognition systems. First, to acquire clear four-spectral iris images, a special camera and additional visible light sources are required, and the placement of the visible light sources has to be carefully designed to avoid harsh light. Second, processing a four-spectral image requires more computation time than processing a single-channel iris image, though the total computation time is only about 0.6 seconds with a modern personal computer. Furthermore, the current implementation of the DCCD iris imaging system has the following limitations. First, both the depth of field and the field of view of the DCCD camera are very limited; users are required to align their eyes with the capture area of the DCCD camera to acquire satisfactory iris images. Second, the system is designed to work in a controlled environment to minimize corneal reflections of the environmental infrared light sources. Third, the influence of eye shadow makeup is currently not considered in this paper. Therefore, the goal of the future work will mainly be the development of a less restrictive and more convenient iris recognition system using the DCCD camera, similar to the ones described in [20], [71], which involves three aspects: improving the automatic iris image acquisition ability with a pan-tilt unit and a motorized lens, reducing the corneal reflections using polarized filters, and enhancing the robustness of the iris segmentation in an uncontrolled environment.

Acknowledgment

The authors would like to thank the anonymous reviewers for their constructive suggestions that greatly improved the quality of this paper, as well as T.-J. Wu, C.-C. Lin, M.-C. Tsai, and K.-R. Chen for their help in constructing the acquisition systems and in collecting the iris images.

References

[1] J. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.
[2] J. Daugman and C. Downing, “Recognizing iris texture by phase demodulation,” in Proc. IEE Colloq. Image Process. Biometric Meas., vol. 2, 1994, pp. 1–8.
[3] J. Daugman, “Biometric personal identification system based on iris analysis,” U.S. Patent 5 291 560, Mar. 1, 1994.


[4] J. Daugman, “Statistical richness of visual phase information: Update on recognizing persons by iris patterns,” Int. J. Comput. Vision, vol. 45, no. 1, pp. 25–38, Oct. 2001.
[5] J. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, 2004.
[6] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, “A system for automated iris recognition,” in Proc. IEEE Workshop Applicat. Comput. Vision, 1994, pp. 121–128.
[7] M. Dobeš, L. Machala, P. Tichavský, and J. Pospíšil, “Human eye iris recognition using the mutual information,” Int. J. Light Electron Optics, vol. 115, no. 9, pp. 399–405, 2004.
[8] H. Proença and L. A. Alexandre, “UBIRIS: A noisy iris image database,” in Proc. Int. Conf. Image Anal. Process., 2005, pp. 970–977.
[9] R. P. Wildes, “Iris recognition: An emerging biometric technology,” Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[10] Y. He, Y. Wang, and T. Tan, “Iris image capture system design for personal identification,” in Proc. Sinobiometrics, LNCS 3338, 2004, pp. 539–545.
[11] N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, “Image quality assessment for iris biometric,” in Proc. 3rd Int. Soc. Optical Engineers Conf. Biometric Technol. Human Identification, vol. 6202, 2006, pp. 62 020D1–62 020D11.
[12] R. Narayanswamy, G. E. Johnson, P. E. X. Silveira, and H. B. Wach, “Extending the imaging volume for biometric iris recognition,” Appl. Optics, vol. 44, no. 5, pp. 701–712, 2005.
[13] B. J. Kang and K. R. Park, “Real-time image restoration for iris recognition systems,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 37, no. 6, pp. 1555–1566, 2007.
[14] K. R. Park, “New automated iris image acquisition method,” Appl. Optics, vol. 44, no. 5, pp. 713–734, 2005.
[15] K. R. Park and J. Kim, “A real-time focusing algorithm for iris recognition camera,” IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 35, no. 3, pp. 441–444, Aug. 2005.
[16] X. Liu, K. W. Bowyer, and P. J. Flynn, “Experiments with an improved iris segmentation algorithm,” in Proc. IEEE Workshop Autom. Identification Advanced Technol., 2005, pp. 118–123.
[17] LG Iris: Iris Recognition Technology [Online]. Available: http://www.lgiris.com
[18] Panasonic Security Products: Biometrics [Online]. Available: http://www.panasonic.com/business/security/products/biometrics.asp
[19] S. Itoda, M. Kikuchi, and S. Wada, “Fully automated image capturing-type iris recognition device IRISPASS®-M,” Oki Tech. Review, vol. 73, no. 205, pp. 48–51, 2006.
[20] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. LoIacono, S. Mangru, M. Tinker, T. M. Zappia, and W. Y. Zhao, “Iris on the move: Acquisition of images for iris recognition in less constrained environments,” Proc. IEEE, vol. 94, no. 11, pp. 1936–1947, Nov. 2006.
[21] X. He, J. Yan, G. Chen, and P. Shi, “Contactless autofeedback iris capture design,” IEEE Trans. Instrum. Meas., vol. 57, no. 7, pp. 1369–1375, Jul. 2008.
[22] C. H. Morimoto, D. Koons, A. Amir, and M. Flickner, “Pupil detection and tracking using multiple light sources,” Image Vision Computing, vol. 18, no. 4, pp. 331–335, 2000.
[23] C. H. Morimoto, T. T. Santos, and A. S. Muniz, “Automatic iris segmentation using active near infra red lighting,” in Proc. Brazilian Symp. Comput. Graph. Image Process., 2005, pp. 37–43.
[24] J. Thornton, M. Savvides, and B. V. Kumar, “A Bayesian approach to deformed pattern matching of iris images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 596–606, Apr. 2007.
[25] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on iris pattern,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 12, 2000, pp. 805–808.
[26] L. Ma, Y. Wang, and T. Tan, “Iris recognition based on multichannel Gabor filtering,” in Proc. Asian Conf. Comput. Vision, vol. 1, 2002, pp. 279–283.
[27] L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric filters,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 2, 2002, pp. 414–417.
[28] L. Masek and P. Kovesi, “Recognition of human iris patterns for biometric identification,” B.E. thesis, School Comput. Sci. Softw. Eng., Univ. Western Australia, Perth, Australia, 2003.
[29] J. Huang, L. Ma, Y. Wang, and T. Tan, “Iris model based on local orientation description,” in Proc. Asian Conf. Comput. Vision, 2004, pp. 954–959.
[30] W. W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.
[31] S. Lim, K. Lee, O. Byeon, and T. Kim, “Efficient iris recognition through improvement of feature vector and classifier,” Electron. Telecommun. Res. Inst. J., vol. 23, no. 2, pp. 61–70, 2001.
[32] C. Tisse, L. Martin, L. Torres, and M. Robert, “Person identification technique using human iris recognition,” in Proc. Int. Conf. Vision Interface, 2002, pp. 294–299.
[33] Z. Sun, T. Tan, and Y. Wang, “Robust encoding of local ordinal measures: A general framework of iris recognition,” in Proc. Eur. Conf. Comput. Vision Workshop Biometric Authentication, 2004, pp. 270–282.
[34] Z. He, Z. Sun, T. Tan, X. Qiu, C. Zhong, and W. Dong, “Boosting ordinal features for accurate and fast iris recognition,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition, 2008, pp. 1–8.
[35] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, 2004.
[36] L. Ma, T. Tan, D. Zhang, and Y. Wang, “Local intensity variation analysis for iris recognition,” Pattern Recognition, vol. 37, no. 6, pp. 1287–1298, 2004.
[37] Z. Sun, Y. Wang, T. Tan, and J. Cui, “Robust direction estimation of gradient vector field for iris recognition,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 2, 2004, pp. 783–786.
[38] Z. Sun, Y. Wang, T. Tan, and J. Cui, “Improving iris recognition accuracy via cascaded classifiers,” IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 35, no. 3, pp. 435–441, Aug. 2005.
[39] Y.-P. Huang, S.-W. Luo, and E.-Y. Chen, “An efficient iris recognition system,” in Proc. Int. Conf. Mach. Learning Cybern., vol. 1, 2002, pp. 450–454.
[40] S.-I. Noh, K. Bae, K. R. Park, and J. Kim, “A new iris recognition method using independent component analysis,” Inst. Electron. Inf. Commun. Engineers Trans. Inf. Syst., vol. E88-D, no. 11, pp. 2573–2581, 2005.
[41] D. M. Monro, S. Rakshit, and D. Zhang, “DCT-based iris recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 586–595, Apr. 2007.
[42] Iris Challenge Evaluation [Online]. Available: http://iris.nist.gov/ice
[43] R. M. Gaunt, B. L. Bonney, R. W. Ives, D. M. Etter, and R. C. Schultz, “Collection and segmentation of non-orthogonal irises for iris recognition,” in Proc. Biometric Consortium Conf., 2005 [Online]. Available: http://www.biometrics.org/bc2005/Presentations/Conference/1%20Monday%20September%2019/Poster%20Session/Gaunt−1568964530.pdf
[44] B. Bonney, R. Ives, D. Etter, and Y. Du, “Iris pattern extraction using bit-plane analysis,” in Proc. IEEE Annu. Asilomar Conf. Signals Syst. Comput., vol. 1, 2004, pp. 582–586.
[45] A. Abhyankar, L. Hornak, and S. Schuckers, “Off-angle iris recognition using bi-orthogonal wavelet network system,” in Proc. IEEE Workshop Autom. Identification Advanced Technol., 2005, pp. 239–244.
[46] V. Dorairaj, N. A. Schmid, and G. Fahmy, “Performance evaluation of non-ideal iris based recognition system implementing global ICA encoding,” in Proc. IEEE Int. Conf. Image Process., vol. 3, 2005, pp. 285–288.
[47] J. Zuo, N. D. Kalka, and N. A. Schmid, “A robust iris segmentation procedure for unconstrained subject presentation,” in Proc. Biometrics Symp. Special Session Res. Biometric Consortium Conf., 2006, pp. 1–6.
[48] A. E. Yahya and M. J. Nordin, “A new technique for iris localization in iris recognition systems,” Information Technol. J., vol. 7, no. 6, pp. 924–929, 2008.
[49] J. Daugman, “New methods in iris recognition,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 37, no. 5, pp. 1167–1175, Oct. 2007.
[50] A. Ross and S. Shah, “Segmenting non-ideal irises using geodesic active contours,” in Proc. Biometrics Symp. Special Session Res. Biometric Consortium Conf., 2006, pp. 1–6.
[51] S. J. Pundlik, D. L. Woodard, and S. T. Birchfield, “Non-ideal iris segmentation using graph cuts,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition Workshop, 2008, pp. 1–6.
[52] M. Vatsa, R. Singh, and A. Noore, “Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 38, no. 4, pp. 1021–1035, 2008.
[53] J. H. Park and M. G. Kang, “Iris recognition against counterfeit attack using gradient based fusion of multispectral images,” in Proc. Int. Workshop Biometric Recognition Syst., 2005, pp. 150–156.


[54] C. Boyce, A. Ross, M. Monaco, L. Hornak, and X. Li, “Multispectral iris analysis: A preliminary study,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition Workshop, 2006, pp. 51–59.
[55] C. K. Boyce, “Multispectral iris recognition analysis: Techniques and evaluation,” Master’s thesis, Dept. Comput. Sci. Electr. Eng., West Virginia Univ., Morgantown, WV, Dec. 2006.
[56] A. Basit and M. Y. Javed, “Localization of iris in gray scale images using intensity gradient,” Optics Lasers Eng., vol. 45, no. 12, pp. 1107–1114, Dec. 2007.
[57] A. W. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 5, pp. 476–480, May 1999.
[58] E. Angelopoulou, “The reflectance spectrum of human skin,” Dept. Comput. Information Sci., Univ. Pennsylvania, Philadelphia, PA, Tech. Rep. MS-CIS-99-29, 1999.
[59] M. Vilaseca, R. Mercadal, J. Pujol, M. Arjona, M. de Lasarte, R. Huertas, M. Melgosa, and F. H. Imai, “Characterization of the human iris spectral reflectance with a multispectral imaging system,” Appl. Optics, vol. 47, no. 30, pp. 5622–5630, 2008.
[60] A. Vogel, C. Dlugos, R. Nuffer, and R. Birngruber, “Optical properties of human sclera, and their consequences for transscleral laser applications,” Lasers Surgery Med., vol. 11, no. 4, pp. 313–394, 1991.
[61] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
[62] J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.
[63] C.-T. Chou, S.-W. Shih, W.-S. Chen, and V. W. Cheng, “Iris recognition with multiscale edge-type matching,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 4, 2006, pp. 545–548.
[64] I. Daubechies, “Discrete wavelet transforms: Frames,” in Ten Lectures on Wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 1992, ch. 3, p. 75.
[65] D. Opitz and R. Maclin, “Popular ensemble methods: An empirical study,” J. Artif. Intell. Res., vol. 11, pp. 169–198, Aug. 1999.
[66] A. J. Mansfield and J. L. Wayman, “Best practices in testing and reporting performance of biometric devices,” Center Math. Sci. Computing, Nat. Phys. Lab., Teddington, Middlesex, U.K., Tech. Rep. NPL CMSC 14/02, 2002.
[67] CASIA Iris Image Database, Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China [Online]. Available: http://www.sinobiometrics.com
[68] UBIRIS Noisy Visible Wavelength Iris Image Databases, Soft Computing and Image Analysis Group, Department of Computer Science, University of Beira Interior, Portugal [Online]. Available: http://iris.di.ubi.pt
[69] C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learning, vol. 20, no. 3, pp. 273–297, 1995.
[70] R.-E. Fan, P.-H. Chen, and C.-J. Lin, “Working set selection using second order information for training SVM,” J. Mach. Learning Res., vol. 6, pp. 1889–1918, Dec. 2005.
[71] H. Proença, S. Filipe, R. Santos, J. Oliveira, and L. A. Alexandre, “The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance,” IEEE Trans. Pattern Anal. Mach. Intell., 2009, submitted for publication.

Chia-Te Chou received the B.S. and M.S. degrees in computer engineering in 2003 and 2005, respectively, from the National Chi Nan University, Nantou, Taiwan, R.O.C., where he is currently pursuing the Ph.D. degree in computer science and engineering at the Department of Computer Science and Information Engineering.
His research interests include biometrics and human–computer interaction.

Sheng-Wen Shih (M’97) received the M.S. and Ph.D. degrees in electrical engineering from National Taiwan University, Taipei City, Taiwan, in 1990 and 1996, respectively.
From January to July 1997, he was a Postdoctoral Associate with the Image Formation and Processing Laboratory, Beckman Institute, University of Illinois at Urbana-Champaign. In August 1997, he joined the Department of Computer Science and Information Engineering, National Chi Nan University, Puli, Nantou, Taiwan, as an Assistant Professor. He became an Associate Professor in 2003. His research interests include computer vision, biometrics, and human–computer interaction.

Wen-Shiung Chen received the M.S. degree from National Taiwan University, Taipei City, Taiwan, in 1985, and the Ph.D. degree from the University of Southern California, Los Angeles, in 1994, both in electrical engineering.
He was with the Department of Electrical Engineering, Feng Chia University, Taichung, Taiwan, from 1994 to 2000. In 2000, he joined the Department of Electrical Engineering, National Chi Nan University, Puli, Nantou, Taiwan, where he is currently a Professor. His research interests include image processing, pattern recognition, computer vision, biometrics, mobile computing, and networking.

Victor W. Cheng received the B.S. degree in electrical engineering from National Taiwan University, Taipei City, Taiwan, in 1989, and the M.S. and Ph.D. degrees, both in electrical engineering and computer science, from the University of Michigan, Ann Arbor, in 1995 and 2000, respectively.
He is currently a Faculty Member with the Department of Computer Science and Information Engineering, National Chi Nan University, Puli, Nantou, Taiwan. His research interests include sensor networking, error-correcting codes, code division multiple access, and 3-D surface scanning.

Duan-Yu Chen received the B.S. degree in computer science and information engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1996, the M.S. degree in computer science from National Sun Yat-Sen University, Kaohsiung, Taiwan, in 1998, and the Ph.D. degree in computer science and information engineering from National Chiao-Tung University in 2004.
He was a Postdoctoral Research Fellow with Academia Sinica, Taipei, Taiwan, from 2004 to 2008. He is currently an Assistant Professor with the Department of Electrical Engineering, Yuan Ze University, Chungli, Taiwan. His research interests include computer vision, video signal processing, content-based video indexing and retrieval, and multimedia information systems.
