
30th Annual International IEEE EMBS Conference Vancouver, British Columbia, Canada, August 20-24, 2008

Detection of the Optic Disc in Images of the Retina Using the Hough Transform
Xiaolu Zhu and Rangaraj M. Rangayyan Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, Calgary, Alberta, Canada T2N 1N4

Abstract: We propose a method to automatically locate the optic disc (OD) in fundus images of the retina. Based on the properties of the OD, our proposed method includes edge detection using the Sobel or the Canny method, and detection of circles using the Hough transform. The Hough transform assists in the detection of the center and radius of a circle that approximates the margin of the OD. Because the OD is one of the bright areas in a fundus image, potential circles detected by the Hough transform are analyzed using intensity. Forty images of the retina from the DRIVE database were used to evaluate the performance of the proposed method. The success rates, including both good and acceptable detections, were 92.50% using the Sobel operators and 80% using the Canny edge detector.

I. INTRODUCTION

The optic disc (OD) is one of the main features of a retinal fundus image [1]. Detection of the OD is a key preprocessing component in algorithms designed for the automatic extraction of the anatomical structures of the retina. The OD appears toward the left-hand or right-hand side of a fundus image as an approximately circular area, roughly one-sixth the width of the image in diameter, brighter than the surrounding area, and as the convergent area of the blood vessel network [2]. In an image of a healthy retina, all of the properties mentioned above (shape, color, size, and convergence) contribute to the identification of the OD.

A. Review of Methods for Detection of the Optic Disc

We present here a selective review of recently proposed methods and algorithms to locate the OD in images of the retina.

Direction-matched filter: The OD detection algorithm proposed by Youssif et al. [1] is based on matching the expected directional pattern of the retinal blood vessels in the vicinity of the OD. A direction map of the segmented retinal vessels is obtained using a two-dimensional (2D) Gaussian matched filter. The minimum difference between the matched filter and the vessel directions in the area surrounding each of the OD-center candidates is found. The OD center was detected correctly in 80 out of 81 images (98.77%) from a subset of the STARE dataset [3], and in all of the 40 images

(100%) of the DRIVE dataset [4], [5]. Similar methods have been implemented by ter Haar [6].

Property-based method: Based on the brightness and roundness of the OD, Park et al. [2] presented a method using algorithms that include thresholding, detection of object roundness, and circle detection. The success rate was 90.25% with the 40 images in the DRIVE dataset. Similar methods have been described by Barrett et al. [7], ter Haar [6], and Chrástek et al. [8], [9].

Geometrical model: The method proposed by Foracchia et al. [10] is based on a preliminary detection of the main retinal vessels. A geometrical parametric model, where two of the model parameters are the coordinates of the OD center, is proposed to describe the general direction of the retinal vessels at any given position. The model parameters are identified by means of a simulated annealing optimization technique. The estimated values provide the coordinates of the center of the OD. An evaluation of the proposed procedure was performed using a set of 81 images from the STARE project [3], containing both normal and pathological images. The position of the OD was correctly identified in 79 out of 81 images (97.53%).

Fractal-based method: Ying et al. [11] proposed an algorithm to differentiate the OD from other bright regions, such as hard exudates and artifacts, on the basis of the fractal dimension related to the converging pattern of the blood vessels. The OD was segmented by local histogram analysis. The scheme was tested with the DRIVE database and identified the OD in 39 out of the 40 images.

Warping and random sample consensus (RANSAC): A method was proposed by Kim et al. [12] to analyze images obtained by retinal nerve fiber layer photography. The center of the optic nerve head is selected as the brightest point and an imaginary circle is defined. The circle is then warped into a rectangle, and RANSAC is used to find the boundary of the optic nerve head. Then, the model is inversely warped into a circle. The images used to test the method included 43 normal images and 30 images with glaucomatous changes. The performance of the algorithm was reported as 91% sensitivity and 78% positive predictability.

978-1-4244-1815-2/08/$25.00 2008 IEEE.


II. METHODS

A. Dataset of Retinal Images and Preprocessing

Fundus images of the retina from the DRIVE database [4], [5], which contains 40 images, are used in the present work. After normalizing each component (dividing by 255), the result was converted to the luminance component Y, computed as Y = 0.299R + 0.587G + 0.114B, where R, G, and B are the red, green, and blue components, respectively, of the color image. The effective region of the image was thresholded using the normalized threshold of 0.1. The artifacts present at the edges of the DRIVE images were removed by applying morphological erosion [13] with a disc-shaped structuring element of diameter 10 pixels.

In order to avoid edge artifacts, each image was extended beyond the limits of its effective region [14], [15]. First, a four-pixel neighborhood was used to identify the pixels at the outer edge of the effective region. For each of the pixels identified, the mean gray level was computed over all pixels in a 21 × 21 neighborhood that were also within the effective region, and assigned to the corresponding pixel location. The effective region was merged with the outer edge pixels, forming an extended effective region. The procedure was repeated 50 times, extending the image by a ribbon of width 50 pixels.

After preprocessing, a 5 × 5 median filter was applied to the Y channel to remove outliers in the image. Then, the maximum intensity in the image was calculated to serve as a reference intensity for the selection of circles.

B. Detection of Edges

The Sobel operators [13], [16] for the horizontal and vertical gradients are shown in Fig. 1. The horizontal and vertical components of the gradient, Gx(x, y) and Gy(x, y), respectively, were obtained by convolving the preprocessed image with the operators shown in Fig. 1. The combined gradient magnitude was obtained as G(x, y) = [Gx^2(x, y) + Gy^2(x, y)]^(1/2). A threshold was applied to the gradient magnitude image to obtain a binary edge map.
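As a concrete illustration of the preprocessing in Section II-A, the sketch below (our illustration, not the authors' code; the morphological erosion and the 50-pixel ribbon extension are omitted, and all function names are ours) computes the luminance channel, thresholds the effective region at 0.1, and applies the 5 × 5 median filter before taking the reference intensity:

```python
import numpy as np

def luminance(rgb):
    """Luminance channel Y = 0.299 R + 0.587 G + 0.114 B of a normalized RGB image."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def median_filter_5x5(img):
    """5 x 5 median filter; borders are handled by reflection."""
    padded = np.pad(img, 2, mode="reflect")
    h, w = img.shape
    # Gather the 25 shifted views of the image and take their pointwise median.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(5) for j in range(5)])
    return np.median(windows, axis=0)

# Stand-in for a DRIVE image already divided by 255 (the real images are 584 x 565).
rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))

y = luminance(rgb)
effective = y > 0.1                     # effective region by the normalized threshold 0.1
y_smooth = median_filter_5x5(y)         # remove outliers, as in Section II-A
reference_intensity = y_smooth[effective].max()   # reference for circle selection
```

The reference intensity is taken only over the effective region, so the dark surround outside the camera aperture cannot influence the later brightness test.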
Canny [17] proposed an approach to edge detection based upon three criteria for good edge detection: multidirectional derivatives, multiscale analysis, and optimization procedures. The MATLAB [18] version of the Canny operator was used to obtain a binary edge map for comparative analysis.
Fig. 1. The Sobel operators for the horizontal gradient, [-1 0 1; -2 0 2; -1 0 1], and the vertical gradient, [-1 -2 -1; 0 0 0; 1 2 1].
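The gradient computation of Section II-B can be sketched as follows (a minimal NumPy version under our own naming; the convolution helper is written out for self-containment, and the 0.02 threshold matches the value reported in Section III):

```python
import numpy as np

# Sobel masks of Fig. 1: horizontal gradient Gx and vertical gradient Gy.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """'Same'-size 2D convolution (square kernel) with reflected borders."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    padded = np.pad(img, (kh // 2, kw // 2), mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            out += flipped[i, j] * padded[i:i + h, j:j + w]
    return out

def sobel_edge_map(img, threshold=0.02):
    """Binary edge map from the combined magnitude G = (Gx^2 + Gy^2)^(1/2)."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2) > threshold
```

A vertical step in the image produces a strong response in Gx, a horizontal step in Gy; thresholding the combined magnitude keeps only the pronounced boundaries, such as the OD margin.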

circles and other parameterized geometrical shapes [20], [16], [13]. The points lying on the circle

(x - a)^2 + (y - b)^2 = c^2    (1)
are represented by a single point in the three-dimensional (3D) parameter space (a, b, c), with accumulators of the form A(a, b, c), which is also known as the Hough space. Here, (a, b) is the center and c is the radius of the circle. The procedure to detect circles involves the following steps:

1) Obtain a binary edge map of the image.
2) Set values for a and b.
3) Solve for the value of c that satisfies Equation 1.
4) Update the accumulator that corresponds to (a, b, c).
5) Update the values of a and b within the range of interest and go back to Step 3.

D. Procedure for the Detection of the OD

Because the OD usually appears as a circular region, an algorithm for the detection of circles may be expected to solve the problem. The Hough accumulator is a 3D matrix, each cell of which is incremented for each nonzero pixel of the edge map that meets the stated condition. For example, the value of the cell (a, b, c) in the Hough accumulator is equal to the number of edge-map pixels of a potential circle in the image with the center at (a, b) and radius c.

In the case of the images in the DRIVE database [4], [5], the size of each image is 584 × 565 pixels. The spatial resolution of the images in the DRIVE database is about 20 µm per pixel. The physical diameter of the optic disc is about 1.5 mm on average [21]. Assuming the range of the radius of a circular approximation to the OD to be 600–1000 µm, the range for the radius c was determined to be 31–50 pixels. Hence, the size of the Hough accumulator was set to be 584 × 565 × 20.

The potential circles indicated by the Hough accumulator were ranked, and the top 30 were selected for further analysis. Because we know that the OD is one of the bright areas in the image, a threshold equal to 0.9 times the reference intensity (determined as described in Section II-A) was used to check the maximum intensity within a circular area with half of the radius of the potential circle.
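The accumulation and the brightness screening described above can be sketched as below. This is our illustration, not the authors' implementation: it uses the common dual formulation in which each edge pixel votes for candidate centers (rather than iterating over (a, b) and solving for c), and the function names, the angular discretization, and the test-image sizes are our choices.

```python
import numpy as np

def hough_circles(edge_map, r_min=31, r_max=50, n_angles=128):
    """Fill the 3D Hough accumulator A(a, b, c) for circles.

    Each edge pixel (x, y) votes for all candidate centers (a, b) lying at
    distance c from it; such (a, b, c) triples satisfy Equation 1.
    """
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, radii.size), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for k, c in enumerate(radii):
        da = np.rint(c * cos_t).astype(int)
        db = np.rint(c * sin_t).astype(int)
        for x, y in zip(xs, ys):
            a, b = x + da, y + db
            keep = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            np.add.at(acc[:, :, k], (b[keep], a[keep]), 1)   # handles duplicate votes
    return acc, radii

def select_circle(acc, radii, intensity, reference, top=30, factor=0.9):
    """Rank the accumulator cells and accept the first circle whose inner
    half-radius disc reaches `factor` times the reference intensity."""
    order = np.argsort(acc, axis=None)[::-1][:top]
    yy, xx = np.ogrid[:intensity.shape[0], :intensity.shape[1]]
    for idx in order:
        b, a, k = np.unravel_index(idx, acc.shape)
        c = radii[k]
        inner = (xx - a) ** 2 + (yy - b) ** 2 <= (c / 2.0) ** 2
        if intensity[inner].max() >= factor * reference:
            return int(a), int(b), int(c)
    return None
```

For the DRIVE images, `edge_map` would be the binarized Sobel or Canny output, `intensity` the preprocessed Y channel, and `reference` the maximum intensity computed during preprocessing.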
If the test failed, the circle was rejected, and the next circle was tested.

III. RESULTS

The proposed method was tested with the 40 images from the DRIVE database [4], [5]. The edge images obtained using the Sobel operators were binarized using a threshold of 0.02 (the threshold was chosen by analyzing the appearance of the OD in an edge image). Fig. 2 shows an example of a successfully detected OD. Fig. 3 shows an acceptable detection of the OD, where the detected circle overlaps the bright area of the OD. Fig. 4 and Fig. 5 show two cases where the ODs were missed. The success rate, including both good and acceptable detections, is 37 out of 40 images, or 92.50%. Using the Canny edge detector (automatically thresholded using the MATLAB [18] function) instead of the Sobel operators, seven ODs were missed; two results were acceptable; and the rest were successful. A success rate of 80% was achieved.


C. The Hough Transform for the Detection of Circles

Hough [19] proposed a method to detect straight lines in images. The Hough transform has been extended to identify




REFERENCES

[1] Youssif AAHAR, Ghalwash AZ, and Ghoneim AASAR. Optic disc detection from normalized digital fundus images by means of a vessels direction matched filter. IEEE Transactions on Medical Imaging, 27(1):11–18, 2008.
[2] Park M, Jin JS, and Luo S. Locating the optic disc in retinal images. In Proceedings of the International Conference on Computer Graphics, Imaging and Visualisation, page 5, Sydney, Qld., Australia, 26-28 July 2006. IEEE.
[3] Structured Analysis of the Retina, http://www.ces.clemson.edu/~ahoover/stare/, accessed on March 24, 2008.
[4] Staal J, Abràmoff MD, Niemeijer M, Viergever MA, and van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 23(4):501–509, 2004.
[5] DRIVE: Digital Retinal Images for Vessel Extraction, accessed on March 24, 2008.
[6] ter Haar F. Automatic localization of the optic disc in digital colour images of the human retina. Master's thesis, Utrecht University, Utrecht, The Netherlands, 2005.
[7] Barrett SF, Naess E, and Molvik T. Employing the Hough transform to locate the optic disk. Biomedical Sciences Instrumentation, 37:81–86, 2001.
[8] Chrástek R, Wolf M, Donath K, Michelson G, and Niemann H. Optic disc segmentation in retinal images. In Bildverarbeitung für die Medizin, pages 263–266, 2002.
[9] Chrástek R, Wolf M, Donath K, Niemann H, Paulus D, Hothorn T, Lausen B, Lämmer R, Mardin CY, and Michelson G. Automated segmentation of the optic nerve head for diagnosis of glaucoma. Medical Image Analysis, 9:297–314, 2005.
[10] Foracchia M, Grisan E, and Ruggeri A. Detection of optic disc in retinal images by means of a geometrical model of vessel structure. IEEE Transactions on Medical Imaging, 23(10):1189–1195, 2004.
[11] Ying H, Zhang M, and Liu JC. Fractal-based automatic localization and segmentation of optic disc in retinal images. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 4139–4141, Lyon, France, August 23-26, 2007. IEEE.
[12] Kim SK, Kong HJ, Seo JM, Cho BJ, Park KH, Hwang JM, Kim DM, Chung H, and Kim HC. Segmentation of optic nerve head using warping and RANSAC. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 900–903, Lyon, France, August 23-26, 2007. IEEE.
[13] Gonzalez RC and Woods RE. Digital Image Processing. Prentice Hall, Upper Saddle River, NJ, 2nd edition, 2002.
[14] Soares JVB, Leandro JJG, Cesar Jr. RM, Jelinek HF, and Cree MJ. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging, 25(9):1214–1222, 2006.
[15] Oloumi Faraz, Rangayyan RM, Oloumi Foad, Eshghzadeh-Zanjani P, and Ayres FJ. Detection of blood vessels in fundus images of the retina using Gabor wavelets. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 6451–6454, Lyon, France, 22-26 August 2007. IEEE.
[16] Rangayyan RM. Biomedical Image Analysis. CRC, Boca Raton, FL, 2005.
[17] Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679–698, 1986.
[18] The MathWorks, accessed on March 24, 2008.
[19] Hough PVC. Method and means for recognizing complex patterns. US Patent 3,069,654, December 18, 1962.
[20] Duda RO and Hart PE. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11–15, 1972.
[21] Lalonde M, Beaulieu M, and Gagnon L. Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching. IEEE Transactions on Medical Imaging, 20(11):1193–1200, 2001.

TABLE I
Success rates of methods for the detection of the OD.

Method of detection        Success rate
Youssif et al. [1]         100.00% (DRIVE); 98.77% (STARE)
Ying et al. [11]           97.50% (DRIVE)
Park et al. [2]            90.25% (DRIVE)
Foracchia et al. [10]      97.53% (STARE)
ter Haar [6]               93.8% (STARE)
Our proposed method        92.50% (DRIVE); 40.24% (STARE)

The method was also tested with 82 images from the STARE dataset [3]. The method located 33 ODs using the Sobel operators (automatically thresholded) and 18 ODs using the Canny edge detector (thresholded at 0.17).

IV. DISCUSSION

In Table I, we have listed the success rates of locating the OD reported by a few selected methods published in the literature and reviewed in Section I-A. We have focused on methods that have been tested with images from the DRIVE and STARE databases. Our proposed method can locate the center of the OD and find a circular approximation of its boundary. However, it might fail when the OD is dim or blurred. The appearance of the OD in the images in the STARE dataset varies significantly due to various types of retinal pathology; as a result, our proposed method did not yield good results. Based on the properties of the OD, our proposed method does not require preliminary detection of blood vessels and hence has low complexity. Similar methods used by Barrett et al. [7], ter Haar [6], and Chrástek et al. [8], [9] were not tested with the publicly available DRIVE database to facilitate comparative analysis.

V. CONCLUSION AND FUTURE WORK

We have proposed a method for the automatic detection of the OD in fundus images of the retina. A comparison of two methods of edge detection, the Sobel operators and the Canny method, was performed. It was found that the Sobel operators give clearer edge maps and lead to better detection of the OD using the Hough transform. Among the 40 images in the DRIVE database, our proposed method correctly detected the OD in 92.50% of the cases. We also conducted a preliminary test with the STARE database. The proposed method did not work well in cases where the OD is not circular and where bright exudates are present. Further studies are required to incorporate additional characteristics of the OD to improve the efficiency of detection.

VI. ACKNOWLEDGMENTS

This work was supported by the Natural Sciences and Engineering Research Council of Canada. We thank Fábio J. Ayres for assistance in this work.


Fig. 2. (a) DRIVE testing image 01; (b) edge image using the Sobel operators; (c) successfully detected OD.

Fig. 3. (a) DRIVE testing image 12; (b) edge image using the Sobel operators; (c) acceptable detection of the OD.

Fig. 4. (a) DRIVE training image 28; (b) edge image using the Sobel operators; (c) OD missed.

Fig. 5. (a) DRIVE testing image 03; (b) edge image using the Sobel operators; (c) OD missed.