(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 6, 2011
Iris Image Pre-Processing and Minutiae Points Extraction
Archana R., Computer Engineering, BVDUCOE, Pune, Maharashtra, India
J. Naveenkumar, Computer Engineering, BVDUCOE, Pune, Maharashtra, India
Prof. Dr. Suhas H. Patil, HOD, Computer Engineering, BVDUCOE, Pune, Maharashtra, India
Abstract: An efficient method for personal identification based on the pattern of the human iris is proposed in this paper. Crypto-biometrics is an emerging architecture in which cryptography and biometrics are merged to achieve high-level security systems. Iris recognition is a method of biometric authentication that uses pattern-recognition techniques based on high-resolution images of the irides of an individual's eyes. Here we discuss recognizing the iris and storing the pattern of the recognized iris.
Keywords: biometrics, cryptography, pattern recognition, Canny edge detection, Hough transform
1. INTRODUCTION
Independently, both biometrics and cryptography play a vital role in the field of security. A blend of these two technologies can produce a high-level security system, known as a crypto-biometric system, in which the cryptographic system encrypts and decrypts messages using biometric templates. The same developing technologies that make life easier also embed people in an increasingly complicated technological structure, so security is more important than ever, and detailed research is devoted to building the most reliable systems. Iris recognition is one of the most reliable of these leading security technologies [1]. Iris recognition technology combines computer vision, pattern recognition, statistical inference, and optics. Its purpose is real-time, high-confidence recognition of a person's identity by mathematical analysis of the random patterns that are visible within the iris of an eye from some distance. Because the iris is a protected internal organ whose random texture is stable throughout life, it can serve as a kind of living passport or living password that one need not remember but can always present. Because the randomness of iris patterns has very high dimensionality, recognition decisions can be made with confidence levels high enough to support rapid and reliable exhaustive searches through national-sized databases [2], [3].
 
2. BIOMETRIC OBJECT RECOGNITION
Robust representations for pattern recognition must be invariant under transformations in the size, position, and orientation of the patterns. For the case of iris recognition, this means that we must create a representation that is invariant to the optical size of the iris in the image (which depends upon both the distance to the eye and the camera's optical magnification factor), the size of the pupil within the iris, the location of the iris within the image, and the iris orientation, which depends upon head tilt, torsional eye rotation within its socket, and camera angles, compounded with imaging through pan/tilt eye-finding mirrors that introduce additional image rotation factors as a function of eye position, camera position, and mirror angles (the overall pre-processing flow is shown in Figure 1). Fortunately, invariance to all of these factors can readily be achieved by mapping the iris into a normalized pseudopolar coordinate system. The dilation and constriction of the elastic meshwork of the iris when the pupil changes size is intrinsically modelled by this coordinate system as the stretching of a homogeneous rubber sheet, having the topology of an annulus anchored along its outer perimeter, with tension controlled by an (off-centred) interior ring of variable radius.
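As an illustration of the rubber-sheet style normalization described above, the following is a minimal sketch, not the authors' implementation, assuming the pupil and iris boundary circles have already been located; the function name rubber_sheet_normalize and the output resolution are illustrative choices.

import numpy as np

def rubber_sheet_normalize(image, pupil, iris, radial_res=64, angular_res=256):
    # Map the annular iris region onto a fixed-size rectangular grid.
    # pupil and iris are (cx, cy, r) circles from the localization step; the
    # remapping is invariant to pupil dilation and to the overall iris size.
    px, py, pr = pupil
    ix, iy, ir = iris
    out = np.zeros((radial_res, angular_res), dtype=image.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    for j, theta in enumerate(thetas):
        # Boundary points on the pupil (inner) and iris (outer) circles.
        x_in, y_in = px + pr * np.cos(theta), py + pr * np.sin(theta)
        x_out, y_out = ix + ir * np.cos(theta), iy + ir * np.sin(theta)
        for i, r in enumerate(np.linspace(0.0, 1.0, radial_res)):
            # Linear interpolation between the two boundaries ("rubber sheet").
            x = (1.0 - r) * x_in + r * x_out
            y = (1.0 - r) * y_in + r * y_out
            yy = int(np.clip(np.round(y), 0, image.shape[0] - 1))
            xx = int(np.clip(np.round(x), 0, image.shape[1] - 1))
            out[i, j] = image[yy, xx]
    return out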
Figure 1: Iris image pre-processing and pattern generation (iris image pre-processing, noise removal/filtering, pattern generation)
The main functional components of extant iris recognition systems are image acquisition, iris localization, and pattern matching. In evaluating designs for these components, one must consider a wide range of technical issues. Chief among these are the physical nature of the iris, optics, image processing and analysis, and human factors. All of these considerations must be combined to yield robust solutions while incurring modest computational expense and allowing a compact design. Claims that the structure of the iris is unique to an individual and stable with age come from two main sources. The first source of evidence is clinical observation: during the course of examining large numbers of eyes, ophthalmologists and anatomists have noted that the detailed pattern of an iris, even between the left and right irises of a single person, appears to be highly distinctive. Another interesting aspect of the iris from a biometric point of view has to do with its moment-to-moment dynamics. Due to the complex interplay of the iris's muscles, the diameter of the pupil is in a constant state of small oscillation. Potentially, this movement could be monitored to confirm that a live specimen is being evaluated. Further, since the iris reacts very quickly to changes in impinging illumination (on the order of hundreds of milliseconds for contraction), monitoring the reaction to a controlled illuminant could provide similar evidence.
 
3. IRIS LOCALIZATION AND NORMALIZATION TECHNIQUES
We use iris images from the UBIRIS database, which contains a total of 1865 iris images taken in different time frames. Each iris image has a resolution of 800x600, which is converted to 320x240. Canny edge detection is performed in both the vertical and the horizontal direction [4], [5]. The algorithm runs in five separate steps:
1. Smoothing: blurring the image to remove noise.
2. Finding gradients: edges should be marked where the gradients of the image have large magnitudes.
3. Non-maximum suppression: only local maxima should be marked as edges.
4. Double thresholding: potential edges are determined by thresholding.
5. Edge tracking by hysteresis: final edges are determined by suppressing all edges that are not connected to a very certain (strong) edge.
It is inevitable that all images taken from a camera will contain some amount of noise. To prevent noise from being mistaken for edges, it must be reduced, so the image is first smoothed by applying a Gaussian filter. The kernel of a Gaussian filter with a standard deviation of σ = 1.4 is shown in Figure 2. After smoothing the image and eliminating the noise, the next step is to find the edge strength by taking the gradient of the image.
Figure 2: Gaussian filter kernel with a standard deviation of σ = 1.4
Figure 3: Iris after smoothing
The Sobel operator performs a 2-D spatial gradient measurement on an image, from which the approximate absolute gradient magnitude (edge strength) at each point can be found. It uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows); they are shown below.
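The two masks referred to above are the standard Sobel kernels. As a minimal sketch, not the authors' code, assuming NumPy and SciPy are available and that image is a greyscale array, the smoothing and gradient-estimation steps could look as follows; the function name sobel_gradients is illustrative.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Standard 3x3 Sobel masks: SOBEL_X estimates the gradient in the x-direction
# (columns), SOBEL_Y in the y-direction (rows).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)

def sobel_gradients(image, sigma=1.4):
    # Gaussian smoothing with sigma = 1.4, as in the text, followed by
    # convolution with the two masks to estimate Gx and Gy.
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)
    gx = convolve(smoothed, SOBEL_X)
    gy = convolve(smoothed, SOBEL_Y)
    return gx, gy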
The magnitude, or edge strength, of the gradient is then approximated using the formula |G| = |Gx| + |Gy|. Whenever the gradient in the x-direction is equal to zero, the edge direction has to be 90 degrees or 0 degrees, depending on the value of the gradient in the y-direction: if Gy is zero, the edge direction is 0 degrees; otherwise it is 90 degrees. The formula for finding the edge direction is simply θ = arctan(Gy / Gx). Once the edge direction is known, the next step is to relate it to a direction that can be traced in an image. Suppose the pixels of a 5x5 image are aligned as follows:
x x x x x
x x x x x
x x a x x
x x x x x
x x x x x
Looking at pixel "a", there are only four possible directions when describing the surrounding pixels: 0 degrees (the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees (the vertical direction), or 135 degrees (along the negative diagonal). The edge orientation therefore has to be resolved into whichever of these four directions it is closest to (for example, if the orientation angle is found to be 3 degrees, it is set to 0 degrees).
The edge pixels remaining after the non-maximum suppression step are marked with their strength pixel by pixel. Many of these will probably be true edges in the image, but some may be caused by noise or colour variations, for instance due to rough surfaces. The simplest way to discern between them would be a single threshold, so that only edges stronger than a certain value are preserved. The Canny edge detection algorithm instead uses double thresholding: edge pixels stronger than the high threshold are marked as strong, edge pixels weaker than the low threshold are suppressed, and edge pixels between the two thresholds are marked as weak.
Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking up of an edge contour caused by the operator output fluctuating above and below the threshold. If a single threshold T1 is applied to an image, and an edge has an average strength equal to T1, then due to noise there will be instances where the edge dips below the threshold; equally, it will also extend above the threshold, making the edge look like a dashed line. To avoid this, hysteresis uses two thresholds, a high and a low. Any pixel in the image that has a value greater than T1 is presumed to be an edge pixel and is marked as such immediately. Then, any pixels that are connected to this edge pixel and that have a value greater than T2 are also selected as edge pixels. If you think of following an edge, you need a gradient of T2 to start, but you do not stop until you hit a gradient below T1.
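To make the strong/weak classification and the hysteresis linking concrete, here is a minimal sketch, not the authors' implementation, of double thresholding followed by connectivity-based edge tracking; it assumes NumPy and SciPy, a gradient-magnitude array named magnitude, and illustrative threshold names low and high.

import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(magnitude, low, high):
    # Strong pixels exceed the high threshold; weak pixels lie between the
    # two thresholds and are kept only if connected to a strong pixel.
    strong = magnitude >= high
    weak = (magnitude >= low) & ~strong
    # Label connected components over all candidate pixels; a component
    # survives when it contains at least one strong pixel.
    labels, _ = label(strong | weak)
    keep = np.unique(labels[strong])
    keep = keep[keep != 0]
    return np.isin(labels, keep)

Pixels below the low threshold never enter the candidate set, which matches the suppression of weak edges described above.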
Figure 4: Iris after Canny edge detection
The iris images in the UBIRIS database have an iris radius of 60 to 100 pixels, which was found manually and given to the Hough transform. If we apply the Hough transform first to the iris/sclera boundary and then to the iris/pupil boundary, the results are accurate. The purpose of the Hough transform is to group edge points into object candidates by performing an explicit voting procedure over a set of parameterized image objects. The output of this step is the radius and the x, y centre parameters of the inner and outer circles. In the image space, a circle can be described as r² = x² + y², where r is the radius, and can be plotted graphically for each pair of image points (x, y) [6], [7].
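The voting procedure can be sketched as a brute-force circular Hough transform over the edge map produced by the Canny stage. This is an illustrative sketch only, not the authors' implementation; the radius range of 60 to 100 pixels follows the text, while the function name hough_circles and the angular sampling are assumptions.

import numpy as np

def hough_circles(edge_map, r_min=60, r_max=100):
    # Vote in (cx, cy, r) space for every edge pixel and return the circle
    # with the most votes. edge_map is a boolean array from the Canny stage.
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for k, r in enumerate(radii):
        for theta in thetas:
            # Each edge point votes for all centres that could produce it.
            cx = np.round(xs - r * np.cos(theta)).astype(int)
            cy = np.round(ys - r * np.sin(theta)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc[k], (cy[ok], cx[ok]), 1)
    k, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy, radii[k]

Running this first on the full edge map for the iris/sclera boundary, and then restricting the search to the interior of the detected outer circle for the iris/pupil boundary, mirrors the two-pass procedure described above.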
 