International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Volume 3, Issue 2, March-April 2014, ISSN 2278-6856

Abstract: Accurate iris segmentation is a key stage of iris recognition. The iris image may hold irrelevant parts (e.g., eyelid, eyelashes, boundary of the pupil) beside the iris. In this paper, a robust method for iris segmentation is introduced. The iris segmentation stage frequently fails when the iris image does not hold sufficient intensity contrast between the pupil/iris and iris/sclera areas. The proposed method therefore includes a preprocessing step that improves the contrast of the eye region using histogram stretching. This step improves the contrast between the different eye regions, which in turn facilitates assigning the optimal threshold value required for successful image binarization. A seed filling algorithm is used to locate the pupil as the darkest central segment. Later, a circle fitting method is used to locate the best pupil circle. Also, a leading edge detection mechanism is used to detect the outer iris boundary (iris/sclera boundary). A set of tests was conducted on the iris data sets CASIA v1.0 and CASIA v4.0-interval, and the results indicated that the proposed method was able to localize the iris with a 100% accuracy rate.

Keywords: Histogram stretching, Iris segmentation, Seed filling, Circle fitting.

1. Introduction
In recent years, biometric features have received great attention for many applications, such as face, voice, fingerprint, palmprint, retina, iris, and so on [1]. Among these biometrics, the iris has achieved the highest recognition accuracy, because it has many properties that make it a wonderful biometric identification technology: (i) the textures of the iris are unique to each subject all over the world; (ii) the textures of the iris are essentially stable and reliable throughout one's life; (iii) genetic independence: irises differ not only between identical twins, but also between the left and right eye [2][3]. After Flom and Safir presented the first relevant methodology in 1987, many other methods have been proposed [4]. In the segmentation stage, Daugman introduced an integro-differential operator in 1993 to find both the iris inner and outer borders; this process proved to be very effective on images with clear intensity separability between the iris, pupil, and sclera regions [5]. The integro-differential operator was proposed with some differences in 2004 by Nishino and Nayar [6].
Two stages of iris segmentation were proposed by Wildes [7]: a gradient-based binary edge map is first constructed from the intensity image, and next the inner/outer boundaries are detected using the Hough transform. Other famous iris localization algorithms are based on using the Hough transform in combination with Canny edge detection in [8][9], with the integro-differential operator in [10], and with the Haar wavelet transform in [11]. Liam et al. [12] proposed a simple method on the basis of thresholding and function maximization in order to obtain two ring parameters corresponding to the iris inner and outer borders. Although these methods have promising performance, they need to search the iris boundaries over a large parameter space exhaustively, which takes more computational time. Moreover, they may result in circle detection failure, because some chosen threshold values used for edge detection cause critical edge points to be removed. Du et al. proposed an iris detection method on the basis of prior pupil identification. The image is then transformed into polar coordinates, and the iris outer border is identified as the largest horizontal edge resulting from Sobel filtering. This approach may fail in the case of a non-concentric iris and pupil, as well as in very dark iris textures [13]. Ghassan et al. developed the angular integral projection function as a general function to perform integral projection along angular directions [14].
Image attributes (contrast, brightness, and existing noise) are highly sensitive to the specific characteristics of each image. This high sensitivity was the main motivation behind proposing an accurate and fast iris segmentation method that can handle less constrained image capture environments.

2. The Proposed Method
As presented in figure (1), the proposed method for accurate iris localization passes through three main stages: image enhancement, localization of the pupil (inner) boundary, and finally localization of the iris (outer) boundary.

Figure 1: Block diagram of the proposed iris localization
Robust and Fast Iris Localization Using Contrast Stretching and Leading Edge

Iman A. Saad, Loay E. George

Department of Mathematics, College of Science, University of Aleppo, Aleppo, Syria
Electronic Computer Center, Al-Mustansiriyah University, Baghdad, Iraq
Department of Computer Science, College of Science, University of Baghdad, Baghdad, Iraq

2.1 Normalization
A suitable process for enhancing the contrast and brightness of the iris image is normalization. Contrast stretching (or histogram stretching) is a sort of normalization process. The purpose of histogram stretching is usually to bring the iris image into an intensity range that is more normal or suitable to human vision. Through histogram stretching the image contrast is increased. Usually, the pupil, which represents the darkest portion of the human eye, is located almost near the iris image center; this helps to reduce the effect of the dark areas caused by eyelashes by considering a sub-image that contains the largest part of the region of interest, so the histogram analysis can be performed without the need to take the whole eye image.

The histogram of the sub-image, as shown in figure (2c), can be modified by a linear mapping function, which will stretch the histogram of the sub-image. The applied mapping function for histogram stretching is given by equation (1); it maps the minimum gray level G_min in the image to zero and the maximum gray level G_max to 255, while the other gray levels are remapped linearly between 0 and 255:

G'(x,y) = 255 \times \frac{G(x,y) - G_{min}}{G_{max} - G_{min}}    (1)

The accumulative histogram is a good way to gain the best values of G_min and G_max through using a predefined cut-off fraction parameter. If the gray-level histogram is given by H, then the accumulative histogram is determined using the following:

H_{acc}(i) = \sum_{j=0}^{i} H(j)

Then, the array H_acc is scanned upward from 0 until the first intensity value whose accumulative count exceeds (Cut-off Fraction x N) is met, where N is the total number of pixels; this value is considered G_min. Similarly, H_acc is scanned downward from 255 until the first intensity value whose accumulative count falls below ((1 - Cut-off Fraction) x N) is found; this intensity value defines G_max. Figures (2b, 2d) show the image after enhancement and its histogram.

Figure 2: Image histograms: (a) original image, (b) enhanced image, (c) histogram of the original image H, (d) histogram of the enhanced image H', (e) interpolated histogram H'', (f) smoothed histogram H_s.
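For concreteness, the stretching step can be sketched as follows (a minimal NumPy sketch; the function name, and the symmetric way the cut-off fraction is applied at both histogram ends, are our assumptions rather than the paper's exact procedure):

```python
import numpy as np

def stretch_contrast(img, cutoff=0.03):
    """Histogram stretching: pick G_min/G_max from the accumulative
    histogram using a cut-off fraction, then remap gray levels
    linearly so G_min -> 0 and G_max -> 255 (equation (1))."""
    hist = np.bincount(img.ravel(), minlength=256)
    acc = np.cumsum(hist)                    # accumulative histogram
    total = acc[-1]
    # scan upward: first level whose accumulative count exceeds
    # cutoff * total defines G_min
    g_min = int(np.argmax(acc > cutoff * total))
    # scan downward: first level reaching (1 - cutoff) * total
    # defines G_max (assumed symmetric treatment of both ends)
    g_max = int(np.argmax(acc >= (1.0 - cutoff) * total))
    if g_max <= g_min:
        return img.copy()
    out = (img.astype(np.float64) - g_min) * 255.0 / (g_max - g_min)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```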

2.2 Localization of Inner Boundary
The iris inner boundary is determined by finding the pupil; this step is accomplished by assuming that the pupil region is the darkest circular area in the iris image. The inner boundary localization process is accomplished through the following steps:

Step 1: Thresholding
The optimal threshold value is estimated using the image histogram; the latter is represented by a smooth curve that best fits the measured image histogram. This process is less susceptible to the noise that may exist in the raw data. As shown in figure (2d), due to histogram stretching many of the stretched histogram (H') elements appear with zero values. To manipulate this case, we replaced these zero-valued elements with interpolated values that depend on the previous and next non-zero neighbour histogram values, see figure (2e). Then, as a next step, a smoothing (averaging) process is applied on the interpolated histogram H'' to overcome the irregularities that may appear in its shape; this step leads to a new smoothed histogram H_s, see figure (2f). The goal of all the above operations is to determine the best threshold within a specific range of gray levels.

A reasonable threshold value (T) can be assessed depending on H_s; this assessment is done by scanning H_s to find the gray level value corresponding to the lowest histogram value within a specified range of gray levels.

Then, the assessed threshold value is used to convert the iris image into a black (0) and white (255) image using the following binarization criterion:

G'(x,y) = \begin{cases} 0 & G(x,y) \le T \\ 255 & \text{otherwise} \end{cases}

Where G(x,y) is the pixel intensity and G'(x,y) is the mapped pixel value. This process produces an image segmentation such that the largest semi-circular black segment represents the nominated pupil region, see figure (3).

Figure 3: Image after thresholding, (a) sample image
from CASIA V3.0, (b) sample image from CASIA V1.0.
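A compact sketch of the threshold-estimation step, assuming NumPy; the gray-level search range [20, 120] and the 5-bin smoothing window are illustrative placeholders, as the paper does not state its exact values:

```python
import numpy as np

def estimate_threshold(img, lo=20, hi=120, win=5):
    """Interpolate zero bins of the stretched histogram, smooth it,
    and return the gray level with the lowest smoothed count inside
    the range [lo, hi]."""
    h = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    nz = np.nonzero(h)[0]
    # replace zero bins by linear interpolation between non-zero
    # neighbours (histogram H' -> H'')
    h = np.interp(np.arange(256), nz, h[nz])
    # moving-average smoothing (H'' -> H_s)
    hs = np.convolve(h, np.ones(win) / win, mode='same')
    return lo + int(np.argmin(hs[lo:hi + 1]))

def binarize(img, t):
    """G'(x,y) = 0 if G(x,y) <= T, else 255."""
    return np.where(img <= t, 0, 255).astype(np.uint8)
```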

Step 2: Image Denoising
Reflection spots may be found near (or on) the pupil edge; in such cases the pupil boundary points cannot be allocated correctly, which of course degrades the accuracy of the detected approximate center and radius parameters of the pupil, see figure (4c). In order to overcome this problem, an averaging filter (3x3) was applied on the binarized image G', see figure (4d).

Figure 4: Image segmentation: (a) original image, (b) enhanced image, (c) image after thresholding, (d) smoothed threshold image, (e) detected maximal pupil segment, (f) pupil filling.

Step 3: Localization of Circular Pupil Boundary
In this step, the seed fill algorithm is used as a region growing tool for processing the binary iris image and detecting the pupil as the largest black segment. The region growing method consists of picking an arbitrary seed pixel from the set, investigating all 4-connected neighbours of this seed, and adding any found connected black neighbour to the collected region set. The seed is then removed from the search domain (i.e., G') and all merged neighbours are added to the collected region set. The region growing process continues scanning the neighbours of all pixels listed in the region set until all connected points are tabulated in the region set. Then, the collected seed set is checked; if it is the largest black segment, then the pixels collected in the set are saved in an array (denoted the SEG array) and considered as the most nominated pupil region, as shown in figure (4e). Then, the coordinates of the pupil's center point, P(X_c, Y_c), are determined as the center of the SEG pupil's segment. By scanning the coordinates of the pixels belonging to SEG, the system identifies the values X_min and X_max, which represent the minimum and maximum found values of the x-coordinate of the pixels belonging to SEG, see figure 5. Then, the mid value of the border points' coordinates (i.e., X_min and X_max) is determined and considered as the x-coordinate of the pupil center (X_c), that is:

X_c = (X_{min} + X_{max}) / 2

Similarly, we scan the collected segment (SEG) to identify the locations of the top and bottom sides, which are the Y_min and Y_max values, respectively. Then, the y-coordinate of the midpoint is determined and used as the y-coordinate of the pupil center point:

Y_c = (Y_{min} + Y_{max}) / 2

From this pupil center point, P(X_c, Y_c), we can obtain the horizontal radius (R_h), the vertical radius (R_v), and then calculate the average initial pupil radius (R_p) using the following equations:

R_h = (X_{max} - X_{min}) / 2
R_v = (Y_{max} - Y_{min}) / 2
R_p = (R_h + R_v) / 2

Figure 5: Pupil initial center and radius
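The seed-fill search for the largest black segment, together with the center and radius equations above, might be sketched like this (a 4-connected flood fill; the function name and data layout are our own choices):

```python
from collections import deque
import numpy as np

def largest_black_segment(bw):
    """Region growing over 4-connected black (0) pixels; returns the
    largest segment SEG, its estimated center (X_c, Y_c) and the
    average initial radius R_p."""
    h, w = bw.shape
    visited = np.zeros(bw.shape, dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if bw[sy, sx] != 0 or visited[sy, sx]:
                continue
            seg, queue = [], deque([(sy, sx)])
            visited[sy, sx] = True
            while queue:                      # grow the region
                y, x = queue.popleft()
                seg.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and bw[ny, nx] == 0 and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(seg) > len(best):
                best = seg
    if not best:
        return [], (0.0, 0.0), 0.0
    ys = [p[0] for p in best]
    xs = [p[1] for p in best]
    xc = (min(xs) + max(xs)) / 2.0            # X_c
    yc = (min(ys) + max(ys)) / 2.0            # Y_c
    rh = (max(xs) - min(xs)) / 2.0            # R_h
    rv = (max(ys) - min(ys)) / 2.0            # R_v
    return best, (xc, yc), (rh + rv) / 2.0    # R_p
```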

Step 4: Fill Pupil
Practically, the images belonging to the iris database CASIA v4.0 have eight white circular spots in the pupil. In order to remove the effect of specular spot reflections, the whole pupil area should be filled with black color. The scan should be applied from outside to inside; during the scan, the color of each found white pixel is converted to black. The scan starts from the segment (SEG) boundary points until reaching the center of the pupil. Figure 6 presents an example of the pupil filling process.

Figure 6: Pupil filling. (a) Pupil segment, (b, c) fill pupil
by horizontal scan and (d, e) fill pupil by vertical scan.
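The outside-to-inside filling of specular spots can be approximated with a row-then-column scan, as in Figure 6 (a sketch only; a real implementation would restrict the scans to the pupil's bounding box so unrelated black regions are not bridged):

```python
import numpy as np

def fill_pupil(bw):
    """Turn white pixels between the first and last black pixel of
    every row, then of every column, to black (outside-to-inside
    scan of the SEG segment, cf. Figure 6)."""
    out = bw.copy()
    for image_view in (out, out.T):           # horizontal, then vertical
        for line in image_view:
            idx = np.nonzero(line == 0)[0]
            if idx.size >= 2:
                line[idx[0]:idx[-1] + 1] = 0
    return out
```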

Step 5: Circle Fitting
It is easy to allocate the pupil boundary accurately after getting: (i) the approximate location of the pupil center, (X_c, Y_c), as the initial center, and (ii) R_p as the initial radius. The Circle Fitting Algorithm (CFA) is illustrated in figure 7. That is, for given values of (X_c, Y_c) and R_p, the objective of circle fitting is to assess, more accurately, the circle parameters that best fit the collected SEG pupil's segment. In order to reduce the search time, the CFA starts searching the pupil boundary at a point at 75% of the distance R_p from (X_c, Y_c). The algorithm tests the circle to ensure that most of its points lie in the pupil segment. If most of the circle points are found to be black, then the algorithm increases the radius by 1 and rechecks whether its points are black or not. This step is repeated until reaching the case that a significant ratio of the circle pixels are white; then the algorithm tries to move the circle center to the left, right, up, down, and to the other four diagonal directions. If any of these moves leads to a black circle (i.e., most points are inside the pupil area), then the algorithm continues to increase the radius; if not, the algorithm stops. The CFA returns the pupil's radius (R_p) and the adjusted center coordinates of the fitted circle.

Figure 7: Circle fitting.
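One possible reading of the CFA as code (the 10% white-pixel tolerance and the 64 circle sample points are our own choices; the paper does not specify them):

```python
import numpy as np

def fit_pupil_circle(bw, xc, yc, r0, white_ratio=0.1, n=64):
    """Grow a circle from 75% of the initial radius while most of its
    sampled points fall on black pixels; when too many are white, try
    the 8 neighbouring center positions and keep growing if any of
    them restores a mostly-black circle, otherwise stop."""
    h, w = bw.shape
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

    def mostly_black(cx, cy, r):
        xs = np.clip(np.round(cx + r * np.cos(ang)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + r * np.sin(ang)).astype(int), 0, h - 1)
        return np.mean(bw[ys, xs] == 0) >= 1.0 - white_ratio

    r = max(1.0, 0.75 * r0)
    while r < max(h, w):                      # safety bound
        if mostly_black(xc, yc, r + 1):
            r += 1
            continue
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            if mostly_black(xc + dx, yc + dy, r + 1):
                xc, yc, r = xc + dx, yc + dy, r + 1
                break
        else:                                 # no move helped: done
            return xc, yc, r
    return xc, yc, r
```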

Since the pupil's boundary can be approximated by a circular shape, we need to check whether the (SEG) pupil's segment, after applying the circle fitting, satisfies the circular shape or not, using the following criterion:

CS = \frac{(2\pi R)^2}{Area}

Where R is the pupil's radius R_p and Area is the total number of pixels of the SEG pupil's segment; for an ideal circle Area = \pi R^2, so CS = 4\pi, which is approximately 12.57.
So, if the result of this criterion is close to 12.5, this indicates that the SEG pupil segment has a circular shape, as shown in figures (8a, b). But in case the (CS) value is far from 12.5, this indicates that SEG is not circular. This may happen when the collected SEG segment points include points belonging to eyelashes intersecting the pupil; the collection of the extended eyelash black points (connected with the pupil) introduces an error in the determination of the pupil location, as shown in figure (8c). To handle this case we re-calculate the pupil radius by making a horizontal scan of the pupil's segment from bottom to top, to avoid the effects of eyelashes, comparing the number of black pixels in each line with those in the surrounding horizontal lines. The scan continues until reaching a horizontal line whose number of black pixels is equal to or greater than those of the neighbouring lines; this line is considered the diameter of the SEG pupil's segment, see figure (8c). The radius is then determined (as diameter/2), and the new pupil center is calculated according to this new radius. After that, the circle fitting algorithm is applied again to obtain the pupil boundary depending on the last calculated radius and center, as shown in figure (8d).

Figure 8: Pupil boundary localization: (a) SEG pupil's segment has a circular shape, (b) correct blue circle fitting boundary, (c) SEG pupil's segment has a non-circular shape induced by eyelashes, with the new green diameter of the max segment, (d) incorrect yellow circle fitting and correct blue circle fitting boundary.
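The circularity check (using the classic perimeter-squared-over-area measure, which equals 4*pi, about 12.57, for an ideal circle and matches the 12.5 target above) and the bottom-to-top diameter fallback could be sketched as:

```python
import math
import numpy as np

def circularity(area, r):
    """CS = (2*pi*R)^2 / Area; equals 4*pi (about 12.57) when the
    segment is an ideal circle of radius R."""
    return (2.0 * math.pi * r) ** 2 / max(area, 1)

def diameter_line_radius(seg_mask):
    """Fallback when CS is far from 12.5: scan the segment's rows from
    bottom to top and stop at the first row whose black-pixel count is
    equal to or greater than its neighbours'; that row is taken as the
    pupil diameter."""
    counts = seg_mask.sum(axis=1)             # black pixels per row
    rows = np.nonzero(counts)[0]
    for y in rows[::-1][1:-1]:                # bottom to top, skip ends
        if counts[y] >= counts[y - 1] and counts[y] >= counts[y + 1]:
            return int(y), counts[y] / 2.0    # diameter row, radius
    y = rows[len(rows) // 2]
    return int(y), counts[y] / 2.0
```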

2.3 Localization of Outer Boundary
The results of the localization process of the pupil (inner) boundary are used as guiding parameters for initiating the parameters of the detection process of the iris outer boundary. The detection is done by making a scan along an inclined line segment that lies slightly below the horizontal line. Following an inclined line instead of a horizontal line avoids possible occlusions by eyelashes. Through our checks of a large number of iris samples, we have noticed that the adoption of lines inclined down-left or down-right will ensure the transition from the iris to the sclera region.
During the scanning process along the inclined line, a leading edge detection algorithm is applied through the following steps:
Apply smoothing filtering on the image by using a mean filter.

Calculate the average value (Avg) of the points extended along the inclined line in the smoothed image, see figure (9b).
Compare each point (P) along the inclined line with the average value (Avg): if the value of (P) is below Avg, then set (P) to zero, else set (P) to one, as shown in figure (9c).
Remove the small gaps or pores (i.e., short runs of zeros or ones) that may be found in the binary sequence of (P); this step leads to a long run of zeros followed by a long run of ones, see figure (9d).
Search for the transition point (from zero to one) along the inclined scan line; this point is considered as a boundary point P_b(x,y) between the iris and sclera regions, see figure (9e).
Determine the distance between P_b(x,y) and the pupil center point P(X_c, Y_c), then subtract from it the pupil's radius (R_p). The resulting distance is considered the radius of the iris region (R_i).

Figure 9: Outer boundary localization: (a) iris image, (b) smoothed image, (c) graph of the series of zeros and ones, (d) graph of the smoothed series of zeros and ones, (e) detection of the iris outer boundary.
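The leading-edge scan over the steps above can be sketched as follows (the inclination angle, scan length, and the 5-sample majority filter used to remove pores are our assumptions; the mean-filter smoothing of the image is assumed to have been applied beforehand):

```python
import numpy as np

def leading_edge_radius(img, xc, yc, rp, angle_deg=20.0, max_len=120):
    """Sample the (already smoothed) image along a line inclined
    slightly below horizontal, binarize against the line's mean,
    remove short runs, and take the first 0 -> 1 transition as the
    iris/sclera boundary point."""
    h, w = img.shape
    th = np.deg2rad(angle_deg)                # down-right inclination
    ts = np.arange(rp + 2, max_len, dtype=float)
    xs = np.clip((xc + ts * np.cos(th)).astype(int), 0, w - 1)
    ys = np.clip((yc + ts * np.sin(th)).astype(int), 0, h - 1)
    vals = img[ys, xs].astype(float)
    binary = (vals >= vals.mean()).astype(int)
    # majority filter over 5 samples removes small gaps/pores
    k = 5
    smooth = np.convolve(binary, np.ones(k), mode='same') > k // 2
    idx = int(np.argmax(smooth))              # first 0 -> 1 transition
    if not smooth[idx]:
        return None                           # no transition found
    return ts[idx] - rp                       # R_i per the paper's step
```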

3. Experimental Results
The performance of the proposed method is tested using the CASIA v1.0 [15] database; it includes iris image samples belonging to 108 individuals, with a total of 756 images at (320x280) pixels resolution. Also, the iris database CASIA v4.0-interval [16] was used in our experiments; it consists of 2639 iris images captured from 249 individuals with (320x280) pixels resolution. The conducted tests were applied on the whole of both database sets' images. The test results indicated that the proposed system is capable of making an accurate and fast iris localization.
Because of the position of the light source, original iris images may have low contrast and non-uniform illumination, and this will impair the iris segmentation result. Therefore, we must enhance the images to get uniformly distributed illumination and better contrast by means of histogram stretching; the cut-off fraction parameter was set to a value in the range [0.025-0.04]. This enhancement step also helps to get a flexible threshold value automatically for applying image binarization and segmenting the pupil from the other image parts; see figures (10b) and (11b). The seed fill algorithm is used to collect the large central black segment found in the image and consider it the initial allocation of the pupil; then the specular spot reflection areas in the collected black segment are found and removed by filling the pupil with black color. As a next step, the center point and radius of the segment are determined and considered as the initial parameters of the pupil segment. Then the circle fitting algorithm is applied to get more accurate values for the pupil center point and radius. For some of the tested images, the applied method did not lead to collected segments (SEG) having a circular shape, because some of the eyelashes intersected with the pupil boundary; to handle such cases, a circular shape testing criterion is applied to check whether SEG has a circular shape or not. If it does not, an extra processing step is applied to overcome the effect of potential occlusion of the dark pupil's segment caused by eyelashes.
The leading edge detection algorithm was applied for detecting the iris outer boundary. The proposed method has shown good performance for the iris outer boundary detection, because the enhancement process applied to the iris images gives them good contrast between the iris and sclera regions. Figures 10 and 11 show the iris segmentation results for different iris images taken from the CASIA v1.0 and CASIA v4.0-interval database sets. Depending on visual evaluation, we have carefully checked the iris segmentation results for all tested images belonging to both database sets. Finally, the average processing times of iris segmentation for CASIA v1.0 and CASIA v4.0-interval were computed and found to be (0.23 ms, 0.25 ms) respectively. These results were obtained by running the iris segmentation process using the Visual Basic 6 programming language to develop the proposed system program, with the tests applied on a computer platform with a 2.4 GHz Core i5 processor and 2 GB RAM.
Table 1 shows the results of the proposed iris segmentation using the CASIA v1.0 database compared with other methods, and table 2 shows the results of iris detection of our proposed method using the CASIA v4.0-interval database compared with other methods, taking into consideration that the images belonging to the CASIA v4.0-interval database are the same images belonging to the CASIA v3.0-interval database.

Figure 10: Iris segmentation: (a) original images from CASIA V1.0, (b) binary enhanced image and segmented pupil, (c) enhanced image with inner and outer iris boundary detection.

Figure 11: Iris segmentation: (a) original images from CASIA V4.0, (b) binary enhanced image and segmented pupil, (c) enhanced image with inner and outer iris boundary detection.

Table 1: Performance of iris segmentation for some methods introduced by different researchers using the CASIA v1.0 database.
Iris Segmentation Method                    Accuracy rate
Guang-zhu XU et al. [17]                    98.42%
Weiqi Yuan et al. [18]                      99.45%
Muhammad Talal et al. [19]                  99.47%
Basit A. et al. [20]                        99.6%
Omran Safaa S. and Salih Maryam A. [21]
Proposed Method                             100%

Table 2: Performance of iris segmentation for some methods introduced by different researchers using the CASIA v4.0-interval database.
Iris Segmentation Method                    Accuracy rate
Sreecholpech C. and Thainimit S. [22]
Hong-Lin W. et al. [23]                     97.29%
Talebi S. et al. [24]                       98.20%
Ann A. et al. [25]                          98.85%
Basit A. et al. [20]                        99.21%
Proposed Method                             100%

4. Conclusion
The method introduced in this paper can potentially facilitate the iris segmentation task. The test results indicate that the proposed method gained a high correct iris segmentation rate with a low averaged segmentation time per database-set iris image.

We used CASIA v1.0 images, as a first test step, to incorporate a few types of noise, almost exclusively related to eyelid and eyelash obstruction. Secondly, we used CASIA v4.0-interval because it contains heterogeneous images, and its iris images contain several types of noise (regarding focus, contrast, brightness, poor image quality, and illumination).

The proposed method proceeds in three stages: image enhancement, inner boundary detection, and outer boundary detection. Experiments on all CASIA v1.0 and CASIA v4.0-interval database images show encouraging results for localizing both the inner and outer iris boundaries as circular shapes with a 100% accuracy rate.

References
[1] Wang Y., Y. Zhu, and T. Tan, Biometrics personal identification based on iris pattern, Acta Automatica Sinica, Vol. 28, Pp. 1-10, January 2002.
[2] Ritu J., Gagandeep K., Biometric Identification System Based On Iris, Palm And Fingerprints For Security Enhancement, International Journal of Engineering Research & Technology, Vol. 1, Pp. 1-4.
[3] Daugman J. G., How Iris Recognition Works, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, Pp. 21-30, 2004.
[4] Flom, L., and Safir, A., Iris recognition system, US patent 4,641,349, Patent and Trademark Office, Washington, D.C., 1987.
[5] Daugman, J. G., High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, Pp. 1148-1161, 1993.
[6] Nishino, Ko, and Nayar, Shree K., Eyes for relighting, ACM Trans. Graph., Vol. 23, No. 3, Pp. 704-711, 2004.

[7] Wildes, R.P., Iris recognition: an emerging biometric technology, Proceedings of the IEEE, Vol. 85, Pp. 1348-1363, 1997.
[8] Masek L., Recognition of human Iris patterns for
biometric identification, http://www.csse., 2003.
[9] Li Ma, Tieniu Tan, Yunhong Wang, Dexin Zhang,
Efficient iris recognition by characterizing key local
variations, IEEE Transaction on Image Processing,
Vol.13, Pp.739-750, 2004.
[10] Christel-Loc Tisse, Lionel Martin, Lionel Torres,
Michel Robert, Person identification technique
using human iris recognition, Proceedings of the
International Conference on Vision Interface, VI
'02, Calgary, Canada, Pp. 294-299, 2002.
[11] Jiali Cui, Yunhong Wang, Tieniu Tan, Li Ma, Sun,
An iris recognition algorithm using local extreme
points, Proceedings of the First International
Conference on Biometrics Authentication, ICBA 04,
Hong Kong, Pp. 442-449, 2004.
[12] Lye Wil Liam, Chekima, A., Liau Chung Fan, and Dargham, J.A., Iris recognition using self-organizing neural network, IEEE Student Conference on Research and Developing Systems, Malaysia, Pp. 169-172, 2002.
[13] Proenca H. and Alexandre L.A., Iris segmentation
methodology for non-cooperative recognition, IEE
Proc.Vis. Image Signal Processing, Vol. 153, No. 2,
Pp.199-205, 2006.
[14] Ghassan J. Mohammed, Bing-Rong Hong and Ann A. Jarjes, Accurate pupil features extraction based on new projection function, Computing and Informatics, Vol. 29, Pp. 663-680, 2009.
[15] Chinese Academy of Science - Institute of Automation, CASIA Iris Image Database (Ver. 1.0).
[16] Chinese Academy of Science - Institute of Automation, CASIA Iris Image Database (Ver. 4.0), available on: http://biometrics.
[17] Guang-zhu XU, Zai-Feng ZHANG and Yi-de MA, A novel and efficient method for iris automatic location, Journal of China University of Mining and Technology, Vol. 17, Pp. 441-446, 2007.
[18] Weiqi Yuan, Zhonghua Lin and Lu Xu, A Rapid
Iris Location Method Based on the Structure of
Human Eyes, Proceeding of the Annual
International Conference-IEEE, Engineering in
Medicine and Biology Society; Pp. 3020-3023, 2005.
[19] Muhammad Talal Ibrahim, Tariq Mehmood, M.
Aurangzeb Khan and Ling Guan, A Noval and
Efficient Feed Back Method for Pupil and Iris
Localization, Image Analysis and Recognition, 8th
International Conference, ICIAR, Burnaby, BC,
Canada, Proceedings, Part II, Vol. 6754, Pp. 79-88,
[20] Basit A., Javed M.Y. and Masood S., Non-circular pupil localization in iris images, International Conference on Emerging Technologies, IEEE-ICET, Rawalpindi, Pakistan, Pp. 228-231, 2008.
[21] Omran Safaa S. and Salih Maryam A., Iris Segmentation Using Statistical Measurements for the Intensity Values of the Eye Image, International Conference on Information Technology, Pp. 1, 2013.
[22] Sreecholpech C. and Thainimit S., A Robust Model-
based Iris Segmentation, International Symposium
on Intelligent Signal Processing and Communication
Systems, (ISPACS), Pp. 599-602, 2009.
[23] Hong-Lin Wan, Zhi-Cheng Li, Jian-Ping Qiao, Bao-
Sheng Li, Non-ideal iris segmentation using
anisotropic diffusion, The Institution of Engineering
and Technology, IET Image Processing,Vol.7, Pp.
111-120, 2013.
[24] Talebi S.M., Ayatollahi A., and Moosavi S.M.S., A Novel Iris Segmentation Method based on Balloon Active Contour, Iranian Conference on Machine Vision and Image Processing, Pp. 1-5, 2010.
[25] Ann A. Jarjes, Kuanquan Wang, Ghassan J. Mohammed, Iris Localization: Detecting Accurate Pupil Contour and Localizing Limbus Boundary, 2nd International Asia Conference on Informatics in Control, Automation and Robotics, Vol. 1, Pp. 349-352, 2010.

Iman A. Saad received the B.S. and M.S. degrees in computer science from Al-Mustansiriyah University, Iraq, in 1993 and 2006, respectively. She is working as a Lecturer in the Electronic Computer Center, Al-Mustansiriyah University, Baghdad, Iraq. She is currently pursuing the Ph.D. degree in computer science at the Department of Mathematics, College of Science, Aleppo University, Aleppo, Syria.

Dr. Loay Edwar George received the B.S. in Physics, M.S. in Theoretical Physics, and Ph.D. in Digital Image Processing from Baghdad University, Iraq, in 1979, 1983, and 1997, respectively. He is a member of the Arab Union of Physics and Mathematics and the Iraqi Association for Computers. He is currently the Head of the Computer Science Department, Baghdad University.