A Feedback Design for Rotation Invariant Feature Extraction in Implementation with Iris Credentials
M. Sankari
Department of Computer Applications, Nehru Institute of Engineering and Technology, Coimbatore, India. sankarim2@gmail.com
R. Bremananth
Abstract— Rotation invariant feature extraction is an essential task in computer vision and pattern recognition: recognizing an object must be invariant to the scale, translation and orientation of its patterns. In iris recognition, the system should represent the iris patterns in a way that is invariant to the size of the iris in the image. This size depends on the distance from the sensor to the subject's eye and on the external illumination of the environment, both of which change the pupil diameter. Another invariance requirement is translation: the extracted iris features should be position-independent even though the eye may appear anywhere in the acquired image. These two invariances are achieved by weight-based localization approaches. However, iris orientation estimation is an important problem; solving it avoids having to preserve multiple orientation parameters. Multiple source points are used to estimate the orientation of the segmented object. After the angular deviation of the segmented object is estimated, the object is rotated back to its principal orientation and the feature extraction process is applied. A multiresolution approach, the wavelet transform, is employed for feature extraction, as it efficiently captures the frequency and spatial texture deviations present in irises. In this paper, we develop a feedback design that combines the Radon transform with wavelet-based statistical analysis for iris recognition in two phases. To check the viability of the proposed approaches, in the first phase the invariant features are compared directly using weighted distance (WD) measures, and in the second phase a Hamming neural network is trained to recognize the known patterns.
Keywords- Iris credentials; invariant features; rotation estimation; multiresolution analysis;
I. INTRODUCTION
In computer vision and pattern recognition, rotation invariant feature extraction is an essential task: recognizing an object must be invariant to the scale, translation and orientation of its patterns. This paper emphasizes invariant feature extraction and statistical analysis. In iris recognition, the system should represent the iris patterns in a way that is invariant to the size of the iris in the image. This size depends on the distance from the sensor to the subject's eye and on the external illumination of the environment, which change the pupil diameter. Another invariance requirement is translation: iris features should be independent of the position of the iris pattern, which may occur anywhere in the acquired eye image. However, iris orientation estimation is an important problem; solving it avoids having to preserve multiple orientation parameters. For example, seven relative orientations were maintained for iris best matching in the literature [1], and seven rotation angles (-9, -6, -3, 0, 3, 6 and 9 degrees) were used by Li Ma et al. [2]. In real-time imaging, due to head tilt, mirror angle and sensor position, iris images are captured at widely varied angles or in divergent positions. We estimate the rotation angle of the iris portion within the acquired image using multiple line-integral approaches, which provide better accuracy in real-time capture. Local binary patterns, gray-level and auto-correlation features have been used to estimate the orientation of texture patterns, projecting angles that are locally invariant to rotation [3]. In [4], texture rotation invariance was achieved with autoregressive models, which use neighborhood points on several circles to project the rotation angle of the object. Aditya Vailaya et al. [5] dealt with a Bayesian learning framework using small code features extracted by linear vector quantization; these features can be used for automatic image rotation detection. A hidden Markov model and multichannel sub-bands were used to estimate the rotation angles of gray-level images in [6]. In this work, we propose Radon transform based multipoint sources to estimate the rotation angle of real-time objects.

Classification is the final stage of a pattern recognition system, where each unknown pattern is assigned to a particular category. In an iris recognition system, a person is automatically recognized based on his or her iris pattern, already trained by the system, much as a brain is taught certain kinds of sample patterns. In the testing process, the system recalls the trained iris patterns through a weighted distance specified by the system. If the threshold is attained, the system genuinely accepts the person; otherwise a false alarm sounds. However, finding the statistical level is tedious work, because it decides whether a pattern is genuine or fake. Hence the combinatorics of the iris code sequence should be handled by means of a test of statistical independence. Moreover, the failure of iris recognition is principally concerned with the test of statistical independence, because it absorbs many degrees of freedom. The test is nearly assured to pass whenever iris codes extracted from two different eyes are compared, and it can fail only when an iris code is compared with another version of itself. The test of statistical independence was implemented by the Hamming distance in
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 6, September 2010. http://sites.google.com/site/ijcsis/ ISSN 1947-5500