
AUTOMATIC SEGMENTATION OF IRIS IMAGES FOR THE PURPOSE OF IDENTIFICATION

Amjad Zaim
InfoMD Consultancy Group, Toledo, Ohio, USA
www.infomd.org, amjad@infomd.org


ABSTRACT

Automatic recognition of the human iris is essential for personal identification and verification from eye images. The human iris is known to possess structures that are distinct and unique to each individual. Accurate classification, however, depends on proper segmentation of the iris and the pupil. In this paper, we present a new method for automatically localizing and segmenting the iris with no operator intervention. Circular region growing is first used to localize the eye's centroid. We then use several geometrical features of the eye to constrain a model built in the polar image space. The model employs knowledge of anatomical attributes as well as gradient information to extract the iris boundaries. Applied to 352 images, the method achieved 92% segmentation accuracy. The algorithm has proven effective across various levels of illumination and for images with a large field of view containing other facial features.

1. INTRODUCTION

The problem of automatically identifying and verifying individuals to restrict access to secured sites has been tackled by a variety of biometric approaches based on fingerprints, voice, handwritten signatures and even facial features. The iris has recently been recognized as a highly distinct feature that is unique to each individual [1,2]. It is composed of several layers, which give it its unique appearance. This uniqueness is visually apparent in the rich, small details seen in high-resolution camera images under proper focus and illumination. The iris is the ring-shaped structure that encircles the pupil, the dark central portion of the eye, and stretches radially to the sclera, the white portion of the eye (Figure 1; left). Ideally, it shares high-contrast boundaries with the pupil but lower-contrast boundaries with the sclera. Diseases of the eye and exposure to certain environmental conditions can reverse this effect and drastically alter its appearance, although such conditions are rare [3]. Different levels of illumination have been shown to produce small changes in the diameter of the pupil but do not result in severe distortion of the iris. All these factors make the iris a strong candidate for personal identification.

An identification system typically consists of 1) image data acquisition, 2) iris localization and segmentation and 3) pattern matching [1,4]. The design of an image-capturing system requires that sufficient resolution be maintained to capture the small details of the iris, typically 1 cm in diameter, with a level of illumination high enough to provide good contrast in its interior portion without causing discomfort or irritation to the individual's eye. An 80-mm lens, f-stop 11 and a 1 cm depth of field have been reported to produce images of reasonable quality [5]. Iris localization and segmentation of camera images, however, pose a significant challenge for several reasons. The eyelids can obscure substantial portions of the eye. The shape and dimensions of the eye vary from one subject to another and depend on its position relative to the sensor. In addition, low-level illumination and certain diseases of the eye can greatly degrade the contrast of the pupil-iris boundary.

Previous studies have made use of the geometrical nature of the eye. The eye has been modeled with a circular contour fitted through gradient ascent to search for a global optimum using a deformable template [5]. An approach based on an analysis of the image gradient pattern corresponding to an eye, including motion information, has also been presented [6]. The pupil and iris boundaries have also been detected and localized via edge detection, and a Hough transform was used to

0-7803-9134-9/05/$20.00 2005 IEEE

capture the contours of interest [7]. Other methods have included circular symmetric filters and principal component analysis for feature extraction [8-10]. We have developed an accurate model-based segmentation scheme that is tolerant of low illumination levels and wider-field images, and is guarded against obstructive features such as eyelids. The system has shown good performance when applied to 352 images, with 92% accuracy.

2. IRIS SEGMENTATION

Our system models the iris as a ring-shaped object concentric with the disk-shaped pupil. Gray-scale camera images of 280x320 pixels containing one eye are processed in the following fashion. First, occluding eyelashes are removed by morphological closing. Next, the center of the pupil is localized by first applying a split-and-merge algorithm to detect connected regions of uniform intensity and then growing a circular template to distinguish the pupil from other potential circular or semicircular objects. A model-based algorithm is then applied in the polar-sampled space, and an edge-linking scheme is used to detect the horizontally mapped boundaries of the iris. The algorithm returns a set of model parameters including the radii of the circular contours of the pupil and the iris as well as the centroid they share. These steps are discussed in more detail in the next sections.

2.1. Morphological Closing

Eyelashes can severely occlude the iris and interfere with the segmentation process. They appear in the image as dark, long and narrow structures oriented randomly around the eye (Figure 1; left). Morphological closing is a well-known filter frequently used in image processing to fuse narrow breaks and long gulfs in binary and gray-scale images [8,9]. The closing of an image A by a structuring element B is defined as the dilation of A by B followed by erosion of the result by B, or:

A • B = (A ⊕ B) ⊖ B    (1)

The initial dilation removes the dark details and brightens the image. The subsequent erosion darkens the image without reintroducing the details removed by dilation. We used a flat linear structuring element of 15-pixel length. The closing was applied iteratively while the linear structuring element was incrementally rotated in 5-degree steps to account for the random orientation of the eyelashes. The closed image contains smoother features with the eyelashes smeared away (Figure 1; right).

2.2. Centroid Localization

Given the circular nature of the iris and the pupil, segmentation after eyelash removal begins with localizing the centroid shared by both the iris and the pupil. The darkness of the pupil stems from its absorption of light, although in some cases diseases or improper illumination may reverse this effect. We first apply a split-and-merge algorithm to detect connected regions of uniform intensity [11,12]. A region Rm is split unless 80% of the pixels contained in Rm have the property:

|I(x,y) − μ(m)| ≤ σ(m)    (2)

where I(x,y) is the intensity level and μ(m) and σ(m) are the gray-level mean and standard deviation in Rm, respectively. The shape of each extracted region is classified as a circle by growing a disk-shaped template centered at the first moment of the region. The template that produces the maximum normalized energy is identified as the one containing the centroid of the pupil. The centroid is then perturbed a few pixels upward, downward and sideways until an optimal centroid is reached.

2.3. Cartesian-to-Polar Mapping

We have used Cartesian-to-polar mapping in the past to reconstruct rectangular ultrasound images from fan-shaped images collected around a point source [13,14]. Samples were obtained along rays radiating outward from a central point. Resampling the Cartesian image space (x,y) into the polar image space (r,θ) is done according to:

r = [(x − xo)² + (y − yo)²]^(1/2)    (3)

θ = arctan[(y − yo)/(x − xo)]    (4)

For each grid point of the destination image, the polar coordinates (r,θ) are computed with respect to the centroid (xo,yo), and its grayscale value is interpolated from its nearest neighbors in the source image. Ideally, a circle centered on the point (xo,yo) is mapped onto a line stretching along the angular range (0-2π). For example, the pupil and iris boundary contours are mapped to near-horizontal edges, with their linearity depending on how concentric the contours are with respect to the centroid. Although this is not a one-to-one transformation, most of the image content is recovered in the polar space, except toward the periphery, where significant loss of resolution occurs as a result of interpolation. We used a small angle

of 0.5° as the sampling interval to minimize this effect, which becomes more prominent at some distance from the center. In general, objects in the transformed image appear much like flexible curved objects that have been unfolded or stretched into their flat form (Figure 2; left).

2.4. Gradient-Based Edge Detection and Linking

Our model exploits a few observations that can be made from the polar-sampled image. First, the intensity distribution of the eye changes as one crosses from pupil to iris and from iris to sclera; hence, taking the radial derivative of the image intensity in the direction of r reveals two sets of boundaries. Second, the two boundaries are horizontal or nearly horizontal and can be detected using the local maxima of the image gradient. We also utilize the anatomical criterion that the iris-to-pupil radius ratio lies between 1.75 and 4.0 [3]. This last observation is used as a constraint to prevent the iris contour from crossing over into the sclera region or toward the pupil in cases of severe noise or light reflection. The following steps summarize our edge detection scheme:

1) For horizontal edges, the gradient of the image has the following properties:

G_r = max, G_θ = 0    (5)

We extract horizontal edges by taking the first gradient in the θ- and r-directions using the following Sobel masks:

S_horizontal = | -1 -2 -1 |    S_vertical = | -1 0 1 |
               |  0  0  0 |                 | -2 0 2 |    (6)
               |  1  2  1 |                 | -1 0 1 |

Figure 1: Original camera image of the eye showing the iris (left). The iris image after morphological closing with eyelashes removed (right).

A typical gradient image in the r-direction includes strong edges in high-contrast dark-to-bright zones such as the pupil-iris interface, weaker and wider edges in lower-contrast areas along the iris-sclera interface, and other scattered edges (Figure 2; right). The gradient magnitude in the θ-direction is zero along horizontal edges.

2) We search for the edges that satisfy (5) by keeping the points whose gradient magnitude is a local maximum in the r-direction and zero in the θ-direction. For a central pixel P, the two neighbors in the direction closest to the direction of its gradient g_p are checked, and if g_p is the largest, it is incremented by half its magnitude while the others are eliminated [12]. The process is iterated, and the result is an edge map with only one best point across a given border at any position along the θ-axis.

3) Build 8-connected edge chains along the r-direction using forward and backward neighbors along θ. The horizontal direction, with a small tolerance, is chosen as the criterion direction to be kept consistent throughout the edges.

4) Merge horizontally connected edge chains by fitting line segments to their edge points. Only sets of line segments that differ by less than 3° are kept.

5) The distance between the two lines corresponding to the highest number of segments is tested against the two conditions (riris/rpupil < 4.0) AND (riris/rpupil > 1.75). If either condition fails, the pair is ignored and the next candidate segment sets are considered.

The result of the above algorithm is a map of two line segments overlaying the edges of the iris and the pupil (Figure 3). The average vertical distances to the first and second lines of edges are taken as rpupil and riris, respectively. When these parameters, along with the centroid (xo,yo), are mapped back to the original iris image, the resulting contours can be seen to accurately overlay the pupil and the iris (Figure 4).
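A minimal sketch of steps 1) and 5): a radial Sobel response on a polar-sampled strip, with the two near-horizontal boundaries picked as the strongest gradient rows. The synthetic intensity values and the simple row-profile peak picking are illustrative assumptions, not the paper's exact edge-linking procedure.

```python
import numpy as np

def radial_sobel(polar):
    """3x3 Sobel response along the r (row) axis of a polar-sampled image."""
    k = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]], dtype=float)
    h, w = polar.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = (k * polar[i - 1:i + 2, j - 1:j + 2]).sum()
    return out

def boundary_rows(polar, guard=5):
    """Rows of the two strongest near-horizontal edges: average the radial
    gradient over theta, take the peak, suppress its neighborhood, repeat."""
    profile = radial_sobel(polar).mean(axis=1)
    r1 = int(np.argmax(profile))
    profile[max(0, r1 - guard):r1 + guard + 1] = -np.inf
    r2 = int(np.argmax(profile))
    return sorted((r1, r2))

# Synthetic polar strip: pupil (rows < 30), iris (rows 30-69), sclera (70+).
strip = np.full((100, 64), 20.0)
strip[30:70] = 100.0
strip[70:] = 220.0
r_pupil, r_iris = boundary_rows(strip)
# Anatomical check from step 5: keep only ratios in (1.75, 4.0).
plausible = 1.75 < r_iris / r_pupil < 4.0
```

Averaging the gradient over θ before peak picking stands in for the paper's edge-chain linking: a genuinely horizontal boundary reinforces one row across all columns, while scattered noise edges do not.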


Figure 2: Result of polar mapping of the lower half of the original image with pupil and iris mapped into horizontal structures (left). Gradient image produced by horizontal Sobel operator (right).
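The Cartesian-to-polar resampling behind Figure 2 (Eqs. 3-4) can be sketched as follows; the image size, centroid and disk radius are illustrative values, with only the 0.5° sampling interval taken from the text.

```python
import numpy as np

def cartesian_to_polar(image, xo, yo, r_max, dtheta_deg=0.5):
    """Resample image into (r, theta) space about centroid (xo, yo), per
    r = [(x-xo)^2 + (y-yo)^2]^(1/2), theta = arctan[(y-yo)/(x-xo)],
    using bilinear interpolation from the four nearest source pixels."""
    thetas = np.deg2rad(np.arange(0.0, 360.0, dtheta_deg))
    rs = np.arange(0, r_max)
    # Destination grid: rows indexed by radius, columns by angle.
    r_grid, t_grid = np.meshgrid(rs, thetas, indexing="ij")
    x = xo + r_grid * np.cos(t_grid)
    y = yo + r_grid * np.sin(t_grid)
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bot = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot

# A dark disk of radius 40 (a stand-in pupil) maps to a horizontal band.
img = np.full((280, 320), 200.0)
yy, xx = np.mgrid[0:280, 0:320]
img[(xx - 160) ** 2 + (yy - 140) ** 2 <= 40 ** 2] = 20.0
polar = cartesian_to_polar(img, 160, 140, r_max=100)
```

With a 0.5° step the 0-360° sweep produces 720 columns, so a circular boundary concentric with (xo, yo) appears as one constant row, which is exactly what makes the later horizontal-edge search well posed.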

Figure 3: A map of line segments masking the iris and the pupil edges.
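The anatomical constraint applied in step 5) before accepting a segment pair such as the one in Figure 3 can be written as a small predicate; the candidate radii below are illustrative, not measurements from the paper.

```python
def plausible_radii(r_pupil, r_iris):
    """Accept a candidate (pupil, iris) radius pair only when the
    iris/pupil ratio lies in the anatomically plausible band (1.75, 4.0)."""
    if r_pupil <= 0 or r_iris <= r_pupil:
        return False
    ratio = r_iris / r_pupil
    return 1.75 < ratio < 4.0

# Hypothetical candidate pairs (pupil radius, iris radius) in pixels.
candidates = [(30, 45), (30, 70), (30, 130)]
kept = [c for c in candidates if plausible_radii(*c)]
```

The first pair fails because the contour would sit inside the iris (ratio 1.5), the last because it would have crossed into the sclera (ratio above 4.0); only the middle pair survives.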

5. REFERENCES
[1] R. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. IEEE, vol. 85, pp. 1348-1363, 1997.
[2] R. G. Johnson, "Can iris patterns be used to identify people?," Los Alamos National Laboratory, CA, Chemical and Laser Sciences Division, Rep. LA-12331-PR, 1991.
[3] D. Miller, Ophthalmology. Houghton Mifflin, MA, 1979.
[4] J. Daugman, "Statistical Richness of Visual Phase Information: Update on Recognizing Persons by Iris Patterns," Intl. J. of Computer Vision, vol. 45, pp. 25-38, 2001.
[5] J. Daugman, "High Confidence Visual Recognition by a Test of Statistical Independence," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.
[6] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "A Machine Vision System for Iris Recognition," Mach. Vision App., vol. 9, pp. 1-8, 1996.
[7] J. G. Daugman, "Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, pp. 1169-1179, 1988.
[8] L. Ma, Y. Wang, and T. Tan, "Iris Recognition Using Circular Symmetric Filters," Proc. 16th Intl. Conference on Pattern Recognition, vol. 2, pp. 414-417, 2002.
[9] K. Bae, S. Noh, and J. Kim, "Iris Feature Extraction Using Independent Component Analysis," AVBPA 2003, LNCS, vol. 2688, pp. 838-844, 2003.
[10] R. Kothari and J. Mitchell, "Detection of Eye Locations in Unconstrained Visual Images," Proc. IEEE ICIP, pp. 519-522, 1996.
[11] K. R. Castleman, Digital Image Processing, Prentice-Hall, Upper Saddle River, New Jersey, 1996.
[12] W. Niblack, An Introduction to Digital Image Processing, Prentice-Hall, Upper Saddle River, New Jersey, 1996.
[13] A. Zaim, R. Keck, R. S. Selman, and J. Jankun, "Three-Dimensional Ultrasound Image Matching System for Photodynamic Therapy," Proceedings of BIOS-SPIE, vol. 4244, pp. 327-337, 2001.
[14] J. Jankun and A. Zaim, "An Image-Guided Robotic System for Photodynamic Therapy of the Prostate," SPIE Proceedings, vol. 39, pp. 22-30, 1999.
[15] R. Gonzalez, Digital Image Processing, Addison-Wesley, MA, 1996.

Figure 4: Segmentation results with contours outlining the pupil and the iris.
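Mapping the recovered parameters (xo, yo, rpupil, riris) back to overlay contours, as in Figure 4, amounts to sampling two circles about the shared centroid; the centroid and radii used below are illustrative values.

```python
import numpy as np

def contour_points(xo, yo, radius, n=360):
    """Cartesian points of the circle of the given radius about (xo, yo)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack((xo + radius * np.cos(t),
                            yo + radius * np.sin(t)))

# Hypothetical model parameters returned by the segmentation stage.
xo, yo, r_pupil, r_iris = 160, 140, 30, 70
pupil = contour_points(xo, yo, r_pupil)
iris = contour_points(xo, yo, r_iris)
```

The two point sets can then be rasterized onto the original 280x320 image to produce an overlay like Figure 4.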

3. RESULTS

We obtained camera eye images of 100 human subjects with diverse shapes and orientations from an online database. Good localization was obtained in images with low contrast at the iris-sclera interface (12 to 20 gray levels of difference). Images with low contrast at the pupil-iris interface, however, were not available for testing. At the other extreme, excess illumination did not generally prevent accurate localization but caused errors in 9 localization attempts. Imperfections in the circular nature of the iris boundary caused another 18 mis-localizations. Overall, of a total of 352 eye images, 320 were segmented correctly based on visual assessment, for a performance accuracy of 92%. The average execution time for the entire segmentation process is about 6 seconds on a standard 789 MHz desktop computer. Table 1 compares the proposed method with other segmentation algorithms. While our method reports lower segmentation accuracy, it outperforms the others in terms of speed.

Method        Accuracy    Mean Time
Wildes [1]    98.6%       8.28 s
Daugman [5]   99.5%       6.56 s
Proposed      92.7%       5.83 s

Table 1: Comparison with other algorithms.

4. CONCLUSION

In this paper, we described a fast and effective algorithm for localizing and segmenting the iris and pupil boundaries of the eye from camera images. Our approach detects the center and the boundaries quickly and reliably, even in the presence of eyelashes, very low-contrast interfaces and excess illumination. Results demonstrated a 92% accuracy rate with relatively rapid execution time. This algorithm can serve as an essential component of iris recognition applications.
