
IRIS RECOGNITION BASED ON MULTI-CHANNEL FEATURE

EXTRACTION USING GABOR FILTERS

Dr. Qussay A. Salih Vinod Dhandapani


Associate professor MSc Information Technology
Division of CSIT Division of CSIT
University of Nottingham (UK) Malaysia Campus University of Nottingham (UK) Malaysia Campus
Malaysia Malaysia
Qussay.a@nottingham.edu.my kcx4vd@nottingham.edu.my

ABSTRACT

The interface of computer technologies and biology is having a huge impact on society. Human recognition technology research promises new life to many security consulting firms and personal identification system manufacturers. Iris recognition is considered to be the most reliable biometric authentication system, yet very few iris recognition algorithms have been commercialised. The method proposed in this paper differs from existing work in the iris segmentation and feature extraction phases. Digitised greyscale images from the Chinese Academy of Sciences – Institute of Automation (CASIA) database were used to determine the performance of the proposed system. The circular iris and pupil of the eye image were segmented using morphological operators and the Hough transform. The localised iris region was then normalised into a rectangular block to account for imaging inconsistencies. Finally, the iris code was generated using 2D Gabor wavelets: a bank of Gabor filters was used to capture both local and global iris characteristics and to generate 2400 bits of iris code. Matching was performed by measuring the Hamming distance between two iris codes; two codes were deemed to match if the Hamming distance was below 0.20. The experimental results show that the proposed method is effective and encouraging.

KEY WORDS
Bioinformatics, iris, Hough transform, morphological operators, Gabor filters, Hamming distance

Introduction

"The coloured part of the eye (iris) contains delicate patterns that vary randomly from person to person, offering a powerful means of identification" – John Daugman. The mathematical advantage of the iris is that its pattern variability among different persons is enormous [1], [2]. The iris surrounds a variable-sized hole called the pupil and works in the same way as a camera aperture, automatically adjusting the size of the pupil to control the amount of light entering the eye based on external conditions [1], [3]. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter [1], [4], [5].

Figure 1 Human Iris

From Figure 1 it is apparent that irises show complex random patterns, created by arching ligaments, furrows, ridges, crypts, rings, sometimes freckles, as well as a corona and a zigzag collarette. Muscles constrict and dilate the pupil to control the amount of light entering the eye, since the back of the iris is covered with dense black pigment. The spatial patterns apparent in the iris are unique to each individual [6], [7], [8]: the different anatomical structures that exist in the body of each individual result in the uniqueness of the iris pattern, which is as distinct as the pattern of retinal blood vessels [8], [9], [10]. The iris is a highly protected internal organ, so it is less vulnerable to external damage, and its patterns are apparently stable throughout life [6], [8], [9]. It is also the only internal organ that is visible externally; images of the iris can be acquired from distances of up to about 3 feet (1 metre) [9], [11], [12].

Iris patterns can have up to 249 independent degrees of freedom. This high randomness makes the technique robust, and it is very difficult to deceive an iris pattern [3], [13], [14]. Compared with other biometric technologies, iris recognition is accurate, non-invasive, and easy to use for secure authentication and positive identification [8], [15].
The concept of automated iris recognition was first proposed by Flom and Safir in 1987, though attempts to use the iris for human identification can be traced back as early as 1885. John Daugman developed the first iris recognition system in 1994. Later, similar work was investigated, such as that of Wildes, Boles, and Lim, which differed either in the feature extraction or the pattern matching method [15].

Daugman [1], [2], [13], [15], [16] used multi-scale Gabor wavelets to demodulate the texture phase information. He filtered the iris image with a family of Gabor filters, generated a 2048-bit complex-valued iris code, and then compared the difference between two iris codes by computing their Hamming distance. His work is the most widely implemented in the systems available in the world. Boles and Boashash [4] generated one-dimensional (1-D) signals by calculating the zero-crossings of the wavelet transform at various resolution levels over concentric circles on the iris; matching was done by comparing the 1-D signals with model features using different dissimilarity functions. Wildes et al. [3] represented the iris texture with a Laplacian pyramid constructed at four different resolution levels and used normalised correlation to determine whether the input image and the model image are from the same class. The method of Lim et al. [15] was based on key local variations: local sharp variation points, denoting the appearance or vanishing of an important image structure, are used to represent the characteristics of the iris. One-dimensional intensity levels characterise the most important information, and the position sequences of local sharp variation points are recorded using wavelets. Matching is performed using the exclusive-OR operation to compute the similarity between a pair of position sequences.

The existing systems use ideal eye images, with few occluding parts, for their testing. An example of an ideal eye image is shown in Figure 2 (a). However, a real-time image of the eye may be captured with many occluding parts, and the effect of such occlusions may result in false non-matching. Some of the problems are outlined below.

Figure 2 (a) Ideal image (b) Occluded eye image (c) Oriented eye image

1. Eyelids and eyelashes introduce edge noise and occlude the effective regions of the iris (Figure 2 (b) and 2 (c)). This may lead to inaccurate localisation of the iris, which results in false non-matching.
2. Corneal and specular reflections occur on the pupil and iris regions. When a reflection occurs in the pupil near the iris/pupil border, detection of the inner boundary of the iris fails.
3. The orientation of the head with respect to the camera may result in some rotation of the iris image.

The proposed system uses techniques to overcome these problems.

The Proposed System

The proposed system consists of five steps, as shown in Figure 3 below: image acquisition, segmentation of the iris region, normalisation of the iris region, extraction of the iris code, and matching of the iris codes. Each of the five steps is briefly discussed below.

Image Acquisition

The iris is a relatively small organ, about 1 cm in diameter [3]. For an iris recognition system to work efficiently, a high-quality image of the iris has to be captured. The acquired image must have sufficient resolution and sharpness to support recognition, and it is also important to have good contrast in the interior iris pattern [3], [18]. Eye images captured under normal lighting conditions are of poor quality, with many specular reflections, whereas images captured using an infrared camera have high contrast and low reflections [3], [18]. The images used in the proposed system (the CASIA database) were taken with such cameras [17].
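Of the five steps in the proposed system, the final one, matching, can already be sketched concretely: per the abstract, two 2400-bit iris codes match when the fraction of disagreeing bits (the Hamming distance) falls below 0.20. The following is a minimal sketch of that decision rule, not the authors' implementation:

```python
def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have equal length")
    return sum(a ^ b for a, b in zip(code_a, code_b)) / len(code_a)

def is_same_eye(code_a, code_b, threshold=0.20):
    """Declare a match when the Hamming distance is below the paper's
    0.20 threshold; otherwise the codes come from different eyes."""
    return hamming_distance(code_a, code_b) < threshold
```

For example, `hamming_distance([1, 0, 1, 1], [1, 0, 0, 1])` gives 0.25, just above the matching threshold.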
Figure 3. The proposed system (image acquisition → segmentation: pupil detection and iris detection → normalisation: unwrapping the iris region into polar coordinates → feature extraction: Gabor filters → matching: Hamming distance)

Iris Localisation

Localising the iris involves detecting the pupil/iris boundary and the iris/sclera boundary [1], [4], [5], [19]. The pupil is segmented from the other regions using morphological operators, and its centre and radius are used to detect the inner boundary of the iris [1], [2], [3]. The outer boundary of the iris is detected using the Hough transform [3].

Pupil Detection:

The pupil is a dark, black, circular region in an eye image, so pupil detection is equivalent to detecting this black circular region. The steps involved in detecting the pupil are as follows.

i. The eye image is converted to a binary image using a threshold value of 40: pixels with values less than 40 are set to 255 (white), and pixels with values greater than 40 are set to 0 (black).
ii. The resulting image contains the pupil along with some noise (other pixels whose values are less than 40, e.g. eyelashes).
iii. To remove this noise, morphological erosion and dilation are applied to the image.

Figure 4 (a) Eye image (b) Segmenting the pupil

iv. The hit-and-miss operator is applied to segment only the circular region in the image (Figure 4 (b)).
v. A bounding box is drawn by searching for the first black pixel from the top, the bottom, and the sides of the image.
vi. The length or breadth of the box gives the diameter of the pupil, and the centre of the bounding box gives the centre of the pupil.

Figure 5 Bounding box around pupil

vii. Because a bounding box is used, specular reflections near the pupil/iris border do not affect the detection of the pupil.

Iris Detection:

After detecting the pupil, the next task is to detect the iris/sclera border. The iris/sclera region is low in contrast and may be partly occluded by eyelids and eyelashes.

i. A bounding box is drawn with height equal to that of the pupil and width up to eight times the pupil width (since the pupil varies from 10% to 80% of the size of the iris). Only regions within this bounding box are used to detect the iris/sclera boundary.

Figure 6a) Bounding box around the iris

ii. The contrast of the iris region is increased by first applying Gaussian smoothing and then histogram equalisation.
iii. The Sobel edge detector is applied to find the edges of the enhanced image.
iv. The probabilistic circular Hough transform is applied to the edge-detected image to find the centre and radius of the iris using equation (1).
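The pupil-detection steps above (binarising at a threshold of 40, then taking the bounding box of the dark region) can be sketched as follows. This is a simplified illustration, not the authors' code: the morphological clean-up and hit-and-miss steps (ii–iv) are omitted, and a small synthetic image stands in for a real eye image.

```python
def detect_pupil(image, threshold=40):
    """Locate the pupil as the bounding box of all pixels darker than
    `threshold` (steps i and v-vi above). Returns ((cx, cy), diameter).
    Morphological noise removal (steps ii-iv) is omitted in this sketch."""
    rows = [y for y, row in enumerate(image) if any(p < threshold for p in row)]
    cols = [x for x in range(len(image[0])) if any(row[x] < threshold for row in image)]
    top, bottom, left, right = min(rows), max(rows), min(cols), max(cols)
    centre = ((left + right) // 2, (top + bottom) // 2)
    diameter = max(bottom - top, right - left) + 1
    return centre, diameter

# Synthetic 9x9 "eye": grey background (value 200) with a 3x3 dark pupil (value 10).
eye = [[200] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 6):
        eye[y][x] = 10
```

Here `detect_pupil(eye)` returns `((4, 4), 3)`: the pupil centre and its diameter, exactly the quantities the iris-detection stage needs next.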
The probabilistic Hough transform is used to reduce the computation time.

(x − a)² + (y − b)² = r² ……………………(1)

v. The CASIA database images have an iris radius (r) in the range of 100 to 120 pixels; hence, in our work, the radius parameter of the Hough transform is set between 95 and 125.
vi. The pupil and the iris are not concentric, but the centre of the iris lies within the pupil area, so the probability that the iris centre falls in this area is high. Therefore the centre parameters (a, b) are restricted to the pupil area, which reduces the number of edge pixels to be computed.

Figure 6b) Finding the iris through the Hough transform
Figure 6c) Iris and pupil regions in the image

vii. By applying the Hough transform, the iris centre and radius are found.
viii. Figures 6b and 6c show the pupil and iris circles detected by the proposed system.

Iris Normalisation

The localised iris region is transformed into a polar coordinate system to overcome imaging inconsistencies: the annular iris region is transformed into a rectangular region [1] – [5], [9], [11], [14], [19]. The Cartesian-to-polar transform of the iris region is based on Daugman's rubber sheet model [1], which is used to unwrap the iris region as follows.

i. The centre of the pupil is taken as the reference point, and the iris ring is mapped to a rectangular block in the anti-clockwise direction.
ii. The radial resolution is the number of data points in the radial direction; the angular resolution is the number of radial lines generated around the iris region.
iii. The radial direction is taken to be 64 data points and the angular (horizontal) direction 360 data points.
iv. Using equation (2), the doughnut-shaped iris region is transformed into a 2-D array whose horizontal dimension is the angular resolution and whose vertical dimension is the radial resolution,

I(x(r, θ), y(r, θ)) → I(r, θ) ………………….(2)

where x(r, θ) and y(r, θ) are defined as linear combinations of the pupillary and limbic boundary points. The following formulas perform the transformation:

x(r, θ) = (1 − r) xp(θ) + r xi(θ) ……………..(3)
y(r, θ) = (1 − r) yp(θ) + r yi(θ) ……………..(4)

xp(θ) = xp0(θ) + rp cos(θ) …………………...(5)
yp(θ) = yp0(θ) + rp sin(θ) …………………...(6)

xi(θ) = xi0(θ) + ri cos(θ) ……………………(7)
yi(θ) = yi0(θ) + ri sin(θ) ……………………(8)

where (xp, yp) and (xi, yi) are the coordinates on the pupil and limbus boundaries along the θ direction.

v. Normalisation produces an unwrapped iris image of size 360×64 (Figure 7).

Figure 7 Iris normalised into polar coordinates

vi. The unwrapped image has very low contrast and contains noise. For good feature extraction the texture must be clear and the contrast high, so a median filter is applied to remove the noise and histogram equalisation is used to increase the contrast of the normalised image.

Figure 8 Enhanced normalised image

vii. Figure 8 shows the enhanced normalised image after histogram equalisation; it shows rich texture suitable for feature extraction.

Feature Extraction and Matching

The most discriminating features present in the iris region must be encoded in order to recognise individuals accurately, and wavelets are powerful at extracting texture features. A template generated using wavelets also needs a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values for intra-class comparisons, that is, templates generated from the same eye, and another range of values for inter-class comparisons, that is, templates generated from different eyes [20], [21], [22].
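The rubber-sheet unwrapping of equations (2)–(8) can be sketched directly, assuming the pupil and iris circles have already been found by the segmentation stage. This is an illustrative sketch rather than the authors' code: it uses nearest-neighbour sampling, and the 64×360 resolution matches step v.

```python
import math

def unwrap_iris(image, pupil, iris, radial_res=64, angular_res=360):
    """Map the annular iris region to a rectangular radial_res x angular_res
    block (Daugman's rubber sheet model). `pupil` and `iris` are
    (cx, cy, radius) triples; `image` is indexed as image[y][x]."""
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    block = []
    for j in range(radial_res):
        r = j / (radial_res - 1)                 # r runs over [0, 1]
        row = []
        for k in range(angular_res):
            theta = 2 * math.pi * k / angular_res
            # Boundary points along direction theta (equations (5)-(8)).
            xp = pcx + pr * math.cos(theta)
            yp = pcy + pr * math.sin(theta)
            xi = icx + ir * math.cos(theta)
            yi = icy + ir * math.sin(theta)
            # Linear combination of the boundary points (equations (3)-(4)).
            x = (1 - r) * xp + r * xi
            y = (1 - r) * yp + r * yi
            row.append(image[int(round(y))][int(round(x))])
        block.append(row)
    return block

# Synthetic concentric example: pupil radius 5, iris radius 15, centred at (20, 20).
img = [[(x + y) % 251 for x in range(41)] for y in range(41)]
block = unwrap_iris(img, (20, 20, 5), (20, 20, 15))
```

The first row of `block` samples the pupillary boundary (r = 0) and the last row the limbic boundary (r = 1); a real implementation would interpolate rather than round to the nearest pixel.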
The normalised iris image is divided into multiple regions, and each local region is transformed into a complex number with 2-D Gabor filters. The real and imaginary parts of the complex number are encoded as 1 or 0 depending on their sign, and a number of Gabor filters are used to generate 2400 bits. The steps for feature extraction and matching are as follows.

i. The normalised iris image is divided into 8 parts at intervals of 45 degrees each.
ii. Each part is further subdivided into 8 regions to form sixty-four small channels.

Figure 9. Multi-channel feature extraction

iii. From Figure 9 it is apparent that eyelids and eyelashes can occlude the iris in the regions between 45 and 135 degrees and between 270 and 315 degrees; hence only the top two channels in these regions are considered for feature extraction. The iris is therefore divided into forty channels that are mostly not occluded by eyelashes or eyelids.
iv. A bank of thirty Gabor filters is generated using the 2-D Gabor wavelet equation,

G(x, y; θ, ω) = exp{−1/2 [x′²/δx′² + y′²/δy′²]} exp(iω(x + y)) …………(9)

x′ = x cos θ + y sin θ ………………………..(10)
y′ = y cos θ − x sin θ ………………………..(11)

v. The frequency parameter is often chosen to be a power of 2. In our experiments, the central frequencies used are 2, 4, 8, 16, and 32, and for each frequency the filtering is performed at orientations of 0, 30, 60, 90, 120, and 150 degrees. So there are thirty filters with different frequencies and directions.
vi. The bank of thirty Gabor filters is applied over the forty channels to generate 2400 bits of information.
vii. Figures 10 and 11 show typical iris codes generated using the above method.

Figure 10: Example of an iris code

110010011001001001010110101101000101000
110100010110010001010010100100010001010

Figure 11. Typical iris code bits

viii. Two iris codes are compared by computing the Hamming distance between them using the equation

HD = (1/N) ∑_(j=1)^N (Xj XOR Yj) …………..(12)

If the Hamming distance is below 0.2, the two iris codes belong to the same eye; if it is greater than 0.2, the two irises belong to different persons.

Experimental Results

The proposed system has been tested with images from the CASIA database. First, the test was performed on all 756 images in the database to find the pupil; the proposed method found the pupil in every case, so pupil detection was 100% successful. Second, the test was performed to detect the iris region. The iris region was detected in almost all cases, but in some cases the exact iris circle was not found. This is because only a small iris region was considered for iris detection, and noise in the image may offset the iris centre. It was also found that the Hough transform used for iris detection was computationally expensive. Matching was done by computing the Hamming distance. Intra-class comparisons were made for all 108 samples, and inter-class comparisons were made for only a few samples. The proposed system correctly identified the intra-class eye images, with false rejects in some samples; this may be due to blurred images, poor lighting conditions, or incorrect iris detection. The inter-class comparisons, made on only a very few samples, showed a few false acceptances.

Conclusion

The paper presented an iris recognition system using Gabor filters, which was tested using the CASIA eye images
database in order to verify the performance of the technology. First, a novel segmentation method was used to localise the iris region in the eye image. Next, the localised iris image was normalised using Daugman's rubber sheet model to eliminate dimensional inconsistencies between iris regions. Finally, the features of the iris region were encoded by convolving the normalised iris region with 2D Gabor wavelets and phase-quantising the output in order to produce a bitwise biometric template. The Hamming distance was chosen as the matching metric, giving a measure of how many bits disagree between two templates. It was found from the experiments that the Hamming distance between two irises belonging to the same eye is below 0.20, and nearly 82% of the intra-class comparisons were classified correctly by the system.

The study could be extended by testing more samples of iris images. Since the images in the CASIA database are mainly eyes of Chinese subjects, testing could be done for other ethnic groups. The images were all captured with infrared cameras, so specular reflections were minimal; the study could be extended to images taken with ordinary cameras. Eyes with very low contrast between iris and pupil were not tested. "The process of defeating a biometric system through the introduction of fake biometric samples is known as spoofing." Biometric liveness detection has become an active research topic, and the project could be extended to determine whether a given iris image was taken from an original eye or is a spoofed iris image.

References

[1] John Daugman, "How Iris Recognition Works", Proceedings of 2002 International Conference on Image Processing, Vol. 1, 2002.
[2] J. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, pp. 1148-1161, 1993.
[3] R. Wildes, J. Asmuth, et al., "A Machine-Vision System for Iris Recognition", Machine Vision and Applications, Vol. 9, pp. 1-8, 1996.
[4] Wikipedia, "Iris (anatomy)", http://en.wikipedia.org/wiki/Iris_%28anatomy%29 (Accessed on 30th June 2005).
[5] "Anatomy and physiology of the iris", http://www.cl.cam.ac.uk/users/jgd1000/anatomy.html (Accessed on 10th July).
[6] W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform", IEEE Trans. on Signal Processing, Vol. 46, pp. 1185-1188, 1998.
[7] S. Lim et al., "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, Vol. 23, No. 2, pp. 61-70, 2001.
[8] Paul Robichaux, "Motivation Behind Iris Detection", http://cnx.rice.edu/content/m12488/latest/ (Accessed on 29th June 2005).
[9] Center for Biometrics and Security Research (CBSR), http://www.sinobiometrics.com/ (Accessed on 1st July 2005).
[10] "Gross Anatomy of the Eye", http://webvision.med.utah.edu/anatomy.html (Accessed on 7th July 2005).
[11] Richard P. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, Vol. 85, No. 9, September 1997.
[12] "Anatomy and physiology of the iris", http://www.cl.cam.ac.uk/users/jgd1000/anatomy.html (Accessed on 10th July).
[13] J. Daugman, "The Importance of Being Random: Statistical Principles of Iris Recognition", Pattern Recognition, Vol. 36, No. 2, pp. 279-291, 2003.
[14] John Daugman and Cathryn Downing, "Epigenetic Randomness, Complexity and Singularity of Human Iris Patterns", University of Cambridge, Cambridge, UK.
[15] Li Ma, Tieniu Tan, Dexin Zhang and Yunhong Wang, "Local Intensity Variation Analysis for Iris Recognition", National Laboratory of Pattern Recognition.
[16] Jafar M. H. Ali and Aboul Ella Hassanien, "An Iris Recognition System to Enhance E-Security Environment Based on Wavelet Theory", Advanced Modelling and Optimisation, Vol. 5, No. 2, 2003.
[17] John Daugman, "Recognizing Persons by Their Iris Patterns", Cambridge University, Cambridge, UK.
[18] Chinese Academy of Sciences – Institute of Automation (CASIA), Database of 756 Greyscale Eye Images, http://www.sinobiometrics.com, Version 1.0, 2005.
[19] Ahmed Poursaberi and Babak N. Arabi, "A Novel Iris Recognition System Using Morphological Edge Detector and Wavelet Phase Features", http://www.icgst.com/gvip/v6/P1150517004.pdf (Accessed on 1st July 2005).
[20] Michael J. Lyons, Julien Budynek, Andre Plante and Shigeru Akamatsu, "Classifying Facial Attributes Using a 2-D Gabor Wavelet Representation and Discriminant Analysis", Proceedings of the 4th International Conference on Automatic Face and Gesture Recognition, 28-30 March 2000, Grenoble, France, IEEE Computer Society, pp. 202-207.
[21] Thomas P. Weldon and William E. Huggins, "Designing Multiple Gabor Filters for Multi-Texture Image Segmentation", Optical Engineering, Vol. 38, No. 9, pp. 1478-1489, Sept. 1999.
[22] Simona E. Grigorescu, Nicolai Petkov and Peter Kruizinga, "Comparison of Texture Features Based on Gabor Filters", IEEE Transactions on Image Processing, Vol. 11, No. 10, October 2002.