Iris Recognition
The concept of automated iris recognition was first proposed by Flom and Safir in 1987, though an attempt to use the iris for human identification can be traced back to as early as 1885. John Daugman developed the first iris recognition system in 1994. Later, similar work was investigated, such as that of Wildes, Boles, and Lim, which differed either in the feature extraction or the pattern matching methods [15].
Daugman [1], [2], [13], [15], [16] used multi-scale Gabor wavelets to demodulate the texture phase information. He filtered the iris image with a family of Gabor filters, generated a 2048-bit complex-valued iris code, and then compared the difference between two iris codes by computing their Hamming distance. His work is the most widely implemented in the systems available in the world. Boles and Boashash [4] generated one-dimensional (1-D) signals by calculating the zero-crossings of the wavelet transform at various resolution levels over concentric circles on the iris. Matching was done by comparing the 1-D signals with model features using different dissimilarity functions. Wildes et al. [3] represented the iris texture with a Laplacian pyramid constructed with four different resolution levels and used normalized correlation to determine whether the input image and the model image are from the same class. The method of Lim et al. [15] was based on key local variations. The local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. They used one-dimensional intensity levels to characterize the most important information, and the position sequences of the local sharp variation points are recorded using wavelets. The matching was performed using an exclusive OR operation to compute the similarity between a pair of position sequences.

The existing systems use ideal eye images with few occluding parts for their testing. An example of an ideal eye image is shown in Figure 2 (a). However, a real-time image of the eye may be captured with a lot of occluding parts, and the effect of such occluding parts may result in false non-matching. Some of the problems are outlined below:

Figure 2 (c) Oriented eye image

1. Eyelids and eyelashes bring about some edge noise and occlude the effective regions of the iris (Figure 2 (b) and 2 (c)). This may lead to inaccurate localisation of the iris, which results in false non-matching.
2. Corneal and specular reflections can occur on the pupil and iris regions. When a reflection occurs in the pupil near the iris/pupil border, the detection of the inner boundary of the iris fails.
3. The orientation of the head with respect to the camera may rotate the iris image to some extent.

The proposed system uses a technique to overcome these problems.

The Proposed System

The proposed system consists of five steps, as shown in Figure 3 below: image acquisition, segmentation of the iris region, normalisation of the iris region, extraction of the iris code, and matching of the iris code. Each of the five steps is briefly discussed below.

Image Acquisition

The iris is a relatively small organ of about 1 cm in diameter [3]. In order for an iris recognition system to work efficiently, a high-quality image of the iris has to be captured. The acquired image of the iris must have sufficient resolution and sharpness to support recognition. It is also important to have good contrast in the interior iris pattern [3], [18]. Eye images captured under normal lighting conditions result in poor-quality images with many specular reflections. However, images captured using an infrared camera have high contrast and low reflections [3], [18]. The images used in the proposed system (CASIA database) were taken with such cameras [17].
Figure 3: Steps of the proposed system: Image Acquisition → Segmentation (Pupil Detection, Iris Detection) → Normalization (unwrap iris region into polar coordinates) → Feature Extraction (Gabor filters) → Matching (Hamming distance)

Figure 4 (a) Eye image (b) Segmenting the pupil

iv. The hit-and-miss operator is applied to segment only the circle region in the image (Figure 4 (b)).
v. A bounding box is drawn by searching for the first black pixel from the top, the bottom, and the sides of the image.
vi. The length or breadth of the box gives the diameter of the pupil. The centre of the bounding box gives the centre of the pupil.
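The bounding-box steps v and vi can be sketched in code. The following is an illustrative sketch, not the paper's implementation: the function name and toy image are our own, and we assume the hit-and-miss step has already produced a binary image in which the pupil is black (0) on a white (255) background.

```python
import numpy as np

def pupil_from_mask(mask):
    """Locate the pupil in a segmented binary image: find the bounding
    box of the black pixels; its size gives the pupil diameter and its
    centre gives the pupil centre."""
    rows, cols = np.nonzero(mask == 0)      # coordinates of black pixels
    if rows.size == 0:
        raise ValueError("no black (pupil) pixels found")
    top, bottom = rows.min(), rows.max()    # first black pixel from top/bottom
    left, right = cols.min(), cols.max()    # first black pixel from the sides
    # The length or breadth of the box gives the diameter of the pupil.
    diameter = int(max(bottom - top, right - left)) + 1
    # The centre of the bounding box gives the centre of the pupil.
    centre = (int(top + bottom) // 2, int(left + right) // 2)
    return centre, diameter

# Toy example: a white 9x9 image with a black disc of radius 2 at (4, 4).
img = np.full((9, 9), 255, dtype=np.uint8)
yy, xx = np.mgrid[0:9, 0:9]
img[(yy - 4) ** 2 + (xx - 4) ** 2 <= 4] = 0
print(pupil_from_mask(img))  # -> ((4, 4), 5)
```

In practice the scan for the first black pixel from each side reduces to taking the minimum and maximum black-pixel coordinates, which is what the sketch does.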
Probabilistic Hough transform is used to reduce the computation time.

(x − a)² + (y − b)² = r² ……………………(1)

v. The CASIA database images have an iris radius (r) in the range of 100 to 120 pixels. Hence, in our work the radius parameter in the Hough transform is set between 95 and 125.
vi. The pupil and iris are not concentric. The centre of the iris lies within the pupil area, and hence the probability that the centre of the iris lies in this area is high. Therefore the centre parameters (a, b) are taken from the pupil area. This reduces the number of edge pixels to be computed.

Daugman's rubber sheet model is used to unwrap the iris region. The steps are as follows:

i. The centre of the pupil is considered as the reference point. The iris ring is mapped to a rectangular block in the anti-clockwise direction.
ii. Radial resolution is the number of data points in the radial direction. Angular resolution is the number of radial lines generated around the iris region.
iii. The radial direction is taken to be 64 data points and the horizontal (angular) direction is 360 data points.
iv. Using the following equation, the doughnut-shaped iris region is transformed to a 2D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution:

I(x(r, θ), y(r, θ)) → I(r, θ) ……………………(2)

where x(r, θ) and y(r, θ) are defined as linear combinations of the pupillary (inner) and iris (outer) boundary points. The following formulas perform the transformation:

x(r, θ) = (1 − r) xp(θ) + r xi(θ) ……………………(3)
y(r, θ) = (1 − r) yp(θ) + r yi(θ) ……………………(4)

xp(θ) = xp0(θ) + rp cos(θ) ……………………(5)
yp(θ) = yp0(θ) + rp sin(θ) ……………………(6)

xi(θ) = xi0(θ) + ri cos(θ) ……………………(7)
yi(θ) = yi0(θ) + ri sin(θ) ……………………(8)

Figure 8: Enhanced normalised image

vii. Figure 8 shows the enhanced normalised image after histogram equalization. It shows rich texture suitable for feature extraction.

Feature Extraction and Matching

The most discriminating features present in the iris region must be encoded in order to recognise individuals accurately. Wavelets are powerful in extracting features of textures. The template that is generated using wavelets will also need a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values for intra-class comparisons, that is, templates generated from the same eye, and another range of values for inter-class comparisons, that is, templates generated from different eyes [20], [21], [22].
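Daugman's rubber sheet mapping described above, with the 64 × 360 sampling of the normalisation step, can be sketched as follows. This is a minimal illustration under our own assumptions (nearest-neighbour sampling, images indexed as image[y, x], and our own function and parameter names), not the paper's exact implementation.

```python
import numpy as np

def unwrap_iris(image, pupil_centre, pupil_radius, iris_centre, iris_radius,
                radial_res=64, angular_res=360):
    """Unwrap the doughnut-shaped iris region into a radial_res x angular_res
    rectangular block using the rubber sheet model."""
    out = np.zeros((radial_res, angular_res), dtype=image.dtype)
    for j in range(angular_res):
        theta = 2 * np.pi * j / angular_res        # anti-clockwise sweep
        # Boundary points on the pupil and on the iris circles.
        xp = pupil_centre[0] + pupil_radius * np.cos(theta)
        yp = pupil_centre[1] + pupil_radius * np.sin(theta)
        xi = iris_centre[0] + iris_radius * np.cos(theta)
        yi = iris_centre[1] + iris_radius * np.sin(theta)
        for i in range(radial_res):
            r = i / (radial_res - 1)               # r runs from 0 (pupil) to 1 (iris)
            # Linear combination of the two boundary points.
            x = (1 - r) * xp + r * xi
            y = (1 - r) * yp + r * yi
            out[i, j] = image[int(round(y)), int(round(x))]
    return out

# Toy example: unwrapping a constant image yields a constant block.
eye = np.full((200, 200), 7, dtype=np.uint8)
block = unwrap_iris(eye, (100, 100), 30, (102, 101), 80)
print(block.shape)  # -> (64, 360)
```

Note how the pupil and iris circles need not be concentric: each radial line simply interpolates between its own pair of boundary points.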
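The matching metric discussed in the Feature Extraction and Matching section, a fractional Hamming distance between binary templates compared against a decision threshold, can be sketched as follows. This is a toy illustration with 8-bit codes rather than full-length templates; the function names and example codes are our own.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes:
    HD = (1/N) * sum over j of (Xj XOR Yj)."""
    a = np.asarray(code_a, dtype=np.uint8)
    b = np.asarray(code_b, dtype=np.uint8)
    return np.count_nonzero(a ^ b) / a.size

def same_eye(code_a, code_b, threshold=0.2):
    """Decision rule: a Hamming distance below the threshold is treated
    as two codes from the same eye."""
    return hamming_distance(code_a, code_b) < threshold

a = np.array([1, 1, 0, 0, 1, 0, 1, 0])
b = np.array([1, 1, 0, 1, 1, 0, 1, 0])   # differs in 1 bit out of 8
print(hamming_distance(a, b))  # -> 0.125
print(same_eye(a, b))          # -> True
```

Intra-class comparisons should fall in the low-HD range and inter-class comparisons in the high-HD range, which is what makes a single threshold usable.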
The normalised iris image is divided into multiple regions. Each local region is transformed into a complex number with 2-D Gabor filters. The real and imaginary parts of the complex number are encoded into 1 or 0 depending on their sign. A number of Gabor filters are used to generate 2400 bits.

i. The normalised iris image is divided into 8 parts at 45-degree intervals each.
ii. Each part is further subdivided into 8 regions to form sixty-four small channels.

Figure 9: Multi-channel feature extraction (channels arranged along r and θ)

iii. From Figure 9 it is apparent that the eyelids and eyelashes can occlude the iris in the regions between 45 and 135 degrees and between 270 and 315 degrees. Hence only the top two channels in these regions are considered for the feature extraction. Therefore the iris is divided into forty channels, which are mostly not occluded by eyelashes or eyelids.
iv. A bank of thirty Gabor filters is generated using the 2D Gabor wavelet equation:

G(x, y; θ, ω) = exp{−(1/2)[x′²/δx′² + y′²/δy′²]} exp(iω(x + y)) ……………………(9)

x′ = x cos θ + y sin θ ……………………(10)
y′ = y cos θ − x sin θ ……………………(11)

v. The frequency parameter is often chosen to be a power of 2. In our experiments, the central frequencies used are 2, 4, 8, 16, and 32. For each frequency the filtering is performed at orientations 0, 30, 60, 90, 120 and 150 degrees. So there are thirty filters with different frequencies and directions.
vi. The bank of thirty Gabor filters is applied over the forty channels to generate 2400 bits of information, as shown in the figure below.
vii. Figures 10 and 11 show typical iris codes generated using the above method.

Figure 10: Example of Iris Code

110010011001001001010110101101000101000
110100010110010001010010100100010001010

Figure 11: Typical Iris Code bits

HD = (1/N) ∑_{j=1}^{N} Xj ⊕ Yj ……………………(12)

If the Hamming distance is below 0.2, then the two irises belong to the same eye. If it is greater than 0.2, then the two irises belong to different persons.

Experimental results

The proposed system has been tested with the images from the CASIA database. First, the test was performed on all 756 images from the database to find the pupil. The proposed method found the pupil in all cases; pupil detection was 100% successful. Second, the test was performed to detect the iris region. The iris region was detected in almost all cases, but in some cases the exact iris circle was not detected. This was because only a small iris region was considered for the iris detection, and noise in the image may offset the detected iris centre. It was also found that the Hough transform used for iris detection was computationally expensive. Matching was done by computing the Hamming distance. The intra-class comparisons were made for all 108 samples, and the inter-class comparisons were made for only a few samples. The proposed system correctly identified the intra-class eye images, with false rejections in some samples. These may be due to blurred images, poor lighting conditions, and incorrect iris detection. From the few inter-class comparisons made, a few false acceptances of eye images were found.

Conclusion

The paper presented an iris recognition system using Gabor filters, which was tested using the CASIA eye images
database in order to verify the performance of the technology. Firstly, a novel segmentation method was used to localise the iris region in the eye image. Next, the localised iris image was normalised to eliminate dimensional inconsistencies between iris regions using Daugman's rubber sheet model. Finally, features of the iris region were encoded by convolving the normalised iris region with 2D Gabor wavelets and phase-quantising the output in order to produce a bitwise biometric template. The Hamming distance was chosen as a matching metric, which gave a measure of how many bits disagreed between two templates. It was found from the experiments that the Hamming distance between two irises that belong to the same eye is below 0.20. Nearly 82% of the intra-class classifications were made correctly by the system.

Testing more samples of iris images could extend the study. Since the images in the CASIA database are mainly of eyes of Chinese people, the testing could be done for other ethnic groups. The images were all captured under infrared cameras, so that the specular reflections were minimal. The study could be extended to test images taken with ordinary cameras. Eyes that have very low contrast between iris and pupil were not tested. "The process of defeating a biometric system through the introduction of fake biometric samples is known as spoofing." Biometric liveness detection has become an active research topic, and the project could be extended to determine whether a given iris image is taken from an original eye or is a spoofed iris image.

References

[1] John Daugman, "How Iris Recognition Works", Proceedings of the 2002 International Conference on Image Processing, Vol. 1, 2002.
[2] J. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, pp. 1148-1161, 1993.
[3] R. Wildes, J. Asmuth, et al., "A Machine-Vision System for Iris Recognition", Machine Vision and Applications, Vol. 9, pp. 1-8, 1996.
[4] Wikipedia, "Iris (anatomy)", http://en.wikipedia.org/wiki/Iris_%28anatomy%29 (Accessed on 30th June 2005).
[5] "Anatomy and physiology of the iris", http://www.cl.cam.ac.uk/users/jgd1000/anatomy.html (Accessed on 10th July).
[6] W. Boles and B. Boashah, "A Human Identification Technique Using Images of the Iris and Wavelet Transform", IEEE Trans. on Signal Processing, Vol. 46, pp. 1185-1188, 1998.
[7] S. Lim et al., "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, Vol. 23, No. 2, pp. 61-70, 2001.
[8] Paul Robichaux, "Motivation Behind Iris Detection", http://cnx.rice.edu/content/m12488/latest/ (Accessed on 29th June 2005).
[9] Center for Biometrics and Security Research (CBSR), http://www.sinobiometrics.com/ (Accessed on 1st July 2005).
[10] "Gross Anatomy of the Eye", http://webvision.med.utah.edu/anatomy.html (Accessed on 7th July 2005).
[11] Richard P. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, Vol. 85, No. 9, September 1997.
[12] "Anatomy and physiology of the iris", http://www.cl.cam.ac.uk/users/jgd1000/anatomy.html (Accessed on 10th July).
[13] J. Daugman, "The importance of being random: Statistical principles of iris recognition", Pattern Recognition, Vol. 36, No. 2, pp. 279-291, 2003.
[14] John Daugman and Cathryn Downing, "Epigenetic Randomness, Complexity and Singularity of Human Iris Patterns", University of Cambridge, Cambridge, UK.
[15] Li Ma, Tieniu Tan, Dexin Zhang and Yunhong Wang, "Local Intensity Variation Analysis for Iris Recognition", National Laboratory of Pattern Recognition.
[16] Jafar M. H. Ali and Aboul Ella Hassanien, "An Iris Recognition System to Enhance E-Security Environment Based on Wavelet Theory", Advanced Modelling and Optimisation, Vol. 5, No. 2, 2003.
[17] John Daugman, "Recognizing Persons by Their Iris Patterns", Cambridge University, Cambridge, UK.
[18] Chinese Academy of Sciences – Institute of Automation (CASIA), Database of 756 Greyscale Eye Images, http://www.sinobiometrics.com, Version 1.0, 2005.
[19] Ahmed Poursaberi and Babak N. Arabi, "A Novel Iris Recognition System Using Morphological Edge Detector and Wavelet Phase Features", http://www.icgst.com/gvip/v6/P1150517004.pdf (Accessed on 1st July 2005).
[20] Michael J. Lyons, Julien Budynek, Andre Plante and Shigeru Akamatsu, "Classifying Facial Attributes Using a 2-D Gabor Wavelet Representation and Discriminant Analysis", Proceedings of the 4th International Conference on Automatic Face and Gesture Recognition, 28-30 March 2000, Grenoble, France, IEEE Computer Society, pp. 202-207.
[21] Thomas P. Weldon and William E. Huggins, "Designing Multiple Gabor Filters for Multi-Texture Image Segmentation", Optical Engineering, Vol. 38, No. 9, pp. 1478-1489, September 1999.
[22] Simona E. Grigorescu, Nicolai Petkov and Peter Kruizinga, "Comparison of Texture Features Based on Gabor Filters", IEEE Transactions on Image Processing, Vol. 11, No. 10, October 2002.