A Feedback Design for Rotation Invariant Feature Extraction in Implementation with Iris Credentials

M. Sankari
Department of Computer Applications, Nehru Institute of Engineering and Technology, Coimbatore, INDIA. sankarim2@gmail.com

R. Bremananth
School of EEE, Information Engg. (Div.), Nanyang Technological University, Singapore. bremresearch@gmail.com
Abstract — Rotation invariant feature extraction is an essential task in computer vision and pattern recognition problems; that is, recognition of an object must be invariant to the scale, translation and orientation of its patterns. In iris recognition, the system should represent the iris patterns in a way that is invariant to the size of the iris in the image, which depends on the distance from the sensor to the subject's eye position and on the external illumination of the environment, both of which change the pupil diameter. Another invariant factor is translation: the extracted iris features should be positionally independent even though the eye may appear anywhere in the acquired image. These two invariances are achieved by weight-based localization approaches. However, estimating the iris orientation remains an important problem when preserving selective orientation parameters. Multiple source points are used to estimate the orientation of the segmented object; after the angular deviation is estimated, the object is rotated to its principal direction and the feature extraction process is applied. A multiresolution approach, the wavelet transform, is employed for feature extraction and captures the frequency and spatial texture deviations present in irises. In this paper, we present a feedback design that combines the Radon transform with wavelet-based statistical analysis of iris recognition in two ways. To check the viability of the proposed approach, in the first phase the invariant features are directly compared using weighted distance (WD) measures, and in the second phase a Hamming neural network is trained to recognize the known patterns.

Keywords- Iris credentials; Invariant Features; Rotation estimation; Multiresolution analysis;
I. INTRODUCTION
In computer vision and pattern recognition, rotation invariant feature extraction is an essential task; that is, recognition of an object must be invariant to the scale, translation and orientation of its patterns. This paper emphasizes invariant feature extraction and statistical analysis. In iris recognition, the system should represent the iris patterns in a way that is invariant to the size of the iris in the image, which depends on the distance from the sensor to the subject's eye position and on the external illumination of the environment, both of which change the pupil diameter. Another invariant factor is translation: the iris features should be positionally independent of the iris pattern, which may occur anywhere in the acquired eye image. However, estimating the iris orientation remains an important problem when preserving selective orientation parameters; for example, seven relative orientations were maintained for the best-match iris comparison in [1], and seven rotation angles (-9, -6, -3, 0, 3, 6 and 9 degrees) were used by Li Ma et al. [2]. In real-time imaging, iris images are captured at widely varying angles or divergent positions owing to head tilt, mirror angle and sensor position. We estimate the rotation angle of the iris portion within the acquired image using multiple line-integral approaches, which provide better accuracy in real-time capture. Local binary patterns, gray-level and auto-correlation features have been used to estimate the orientation of texture patterns, projecting angles that are locally invariant to rotation [3]. In [4], texture rotation invariance was achieved with autoregressive models, using several circular neighborhood points to project the rotation angle of the object. Aditya Vailaya et al. [5] dealt with a Bayesian learning framework using small code features extracted by linear vector quantization; these features can be used for automatic image rotation detection. A hidden Markov model and multichannel sub-bands were used for estimating rotation angles of gray-level images in [6]. In this work, we propose Radon transform based multipoint sources to estimate the rotation angle of real-time objects.

Classification is the final stage of a pattern recognition system, where each unknown pattern is assigned to a particular category. In an iris recognition system, a person is automatically recognized from his or her iris pattern, which the system has already been trained on, much as a brain is taught a certain kind of sample pattern. In the testing process, the system recalls the trained iris patterns through a weighted distance specified by the system. If the threshold is attained, the system genuinely accepts the person; otherwise a false alarm is raised. However, finding this statistical threshold is tedious work because it decides whether a pattern is judged genuine or fake. Hence the combinatorics of the iris code sequence should be handled by a test of statistical independence. Moreover, failure in iris recognition is principally concerned with a test of statistical independence because it absorbs many degrees of freedom. The test is almost certain to be passed whenever iris codes extracted from two different eyes are compared, and it can fail only when an iris code is compared with another version of itself. The test of statistical independence was implemented using the Hamming distance in
[1] with a set of mask bits to exclude non-iris artifacts. Li Ma et al. [7] proposed a classifier design based on the exclusive-OR operation to compute the match between pairs of iris bits. In [2], the authors worked with a nearest-centre classifier to recognize diverse pairs of iris patterns. A competitive neural network with linear vector quantization was reported for both identification and recognition of iris patterns by Shinyoung Lim et al. [8]. Our main contribution in this paper is a feedback design (Fig. 1) to extract an appropriate set of rotation invariant features based on the Radon and wavelet transforms. An iterative process is repeated until a set of essential invariant features is extracted from the subject. We perform two phases of statistical analysis of rotation invariant iris recognition: in phase I, wavelet features are directly compared with weighted distance (WD) measures, and in phase II the invariant features are trained and recognized by the Hamming neural network.
Fig. 1. A feedback design of rotation invariant feature extraction (rotation estimation using multiple sources; rotation correction to the principal direction; wavelet-based rotation invariant extraction; if the features do not provide the best identification, another suitable set of rotation invariant features is sought, otherwise the invariant features are enrolled).
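As an illustration of the masked Hamming distance test described above (the comparison of iris bits with mask bits, following [1] and the XOR matching of [7]), the sketch below compares two binary iris codes while ignoring bits flagged by their masks. It is a minimal sketch, not this paper's classifier: the array names, the code length of 2048 bits and the 0.32 acceptance threshold are illustrative assumptions only.

import numpy as np

def masked_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits among the bits both masks mark as valid.

    code_a, code_b : binary iris codes (arrays of 0/1)
    mask_a, mask_b : 1 where the bit is usable (not eyelid, eyelash or reflection)
    """
    valid = mask_a & mask_b                      # bits usable in both codes
    disagreements = (code_a ^ code_b) & valid    # XOR marks differing bits
    return disagreements.sum() / max(valid.sum(), 1)

# Illustrative usage with random codes (real codes come from the feature extractor).
rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)
code_b = rng.integers(0, 2, 2048, dtype=np.uint8)
mask = np.ones(2048, dtype=np.uint8)
hd = masked_hamming_distance(code_a, code_b, mask, mask)
accept = hd < 0.32   # hypothetical decision threshold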
The remainder of this paper is organized as follows. Section II covers invariance and the estimation of the rotation angle. Radon and wavelet based rotation invariant feature extraction is described in Section III. Section IV presents the results obtained with the proposed methodologies, while concluding remarks and future research directions are given in Section V.

II. INVARIANCE IN ROTATION
A 2D rotation is applied to an object by repositioning it along a circular path. A rotation angle θ and a pivot point about which the object is to be rotated are specified to generate the rotation. Positive angle values produce counterclockwise rotation about the pivot point, while clockwise rotation requires negative angle values. The rotation transformation can also be described as a rotation about an axis that is perpendicular to the xy plane and passes through the pivot point.
 
The rotation transformation equations are determined from position $(x_1, y_1)$ to position $(x_2, y_2)$ through an angle $B$ relative to the coordinate origin. The original angular displacement of the point from the x-axis is $A$. With $r$ the distance of the point from the origin, the trigonometric ratios give $\sin(A) = y_1/r$, $\sin(A+B) = y_2/r$, $\cos(A+B) = x_2/r$ and $\cos(A) = x_1/r$. From the compound angle formula

$\sin(A+B) = \sin(A)\cos(B) + \cos(A)\sin(B)$,   (1)

substituting the trigonometric ratios gives

$y_2/r = (y_1/r)\cos(B) + (x_1/r)\sin(B)$,   (2)

$y_2 = y_1\cos(B) + x_1\sin(B)$,   (3)

$y_2 = x_1\sin(B) + y_1\cos(B)$.   (4)

Likewise, substituting the trigonometric ratios into

$\cos(A+B) = \cos(A)\cos(B) - \sin(A)\sin(B)$   (5)

yields

$x_2/r = (x_1/r)\cos(B) - (y_1/r)\sin(B)$,   (6)

$x_2 = x_1\cos(B) - y_1\sin(B)$.   (7)

Therefore, from Eqs. (4) and (7) we obtain the counterclockwise rotation matrix, and the new coordinate position can be found as described in Eq. (8). The basics of rotation and line integrals are incorporated together to form the equations for projecting the object from single and multiple source points.
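As a quick worked illustration of the rotation equations above, the following minimal sketch (not code from the paper) applies the counterclockwise rotation matrix of Eq. (8) to a point, using assumed coordinates.

import numpy as np

def rotate_point(x1, y1, b_degrees):
    """Counterclockwise rotation of (x1, y1) about the origin by B degrees, per Eq. (8)."""
    b = np.deg2rad(b_degrees)
    rotation = np.array([[np.cos(b), -np.sin(b)],
                         [np.sin(b),  np.cos(b)]])
    x2, y2 = rotation @ np.array([x1, y1])
    return x2, y2

# Example: the point (1, 0) rotated by 90 degrees lands on (0, 1), up to rounding.
print(rotate_point(1.0, 0.0, 90.0))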
A. Multipoint source
Based on the basics of rotation, the multipoint source method computes line integrals along parallel beams in a specific direction. A projection of an image f(x, y) is a set of line integrals that represents the image. This phase takes multiple parallel beams from different angles by rotating the source around the centre of the image, and the new coordinate position after rotation is

$\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos(B) & -\sin(B) \\ \sin(B) & \cos(B) \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}.$   (8)

This method is based on the Radon transform, which estimates the angle of rotation using the projection data in different orientations. A fusion of the Radon and Fourier transforms has been used for digital watermarking that is invariant to rotation, scale and translation [9]. A parallel algorithm for the fast Radon transform and its inverse was proposed by Mitra et al. [10]. The Radon transform was employed for estimating the angle of rotated texture by Kourosh et al. [11]. Radon transform based image object recognition was proposed by Jun Zhang et al. [12]; this method is robust and invariant to rotation, scale and translation of the image object. Fig. 2 shows a multipoint source at a specified angle for estimating the rotation angle of a part of the iris. The method projects the image intensity along a radial line at a specific angle from the multipoint sources. The multipoint projection computes, for any angle θ, the Radon transform $R_{\theta}(x')$ of f(x, y), which is the line integral along paths parallel to the y' axis. After applying the multipoint source function $R_{\theta}(x')$, the resultant data contain rows and columns: each column holds the projection data for one angle in θ, with the respective coordinates along the x' axis. The procedure for applying the multipoint source projection to estimate the angle is as follows: the image is rotated to a specific angle counterclockwise by the bi-cubic interpolation method.
Fig. 2. Multipoint estimation using multipoint sources.
The rotation angle is assumed to range from 1° to 180° in order to find the peak region of rotation angles. After applying the multipoint sources, Radon transform coefficients are generated for each angle, and the standard deviation of the coefficients is computed to locate the maximum deviation, which indicates the rotation angle; this is illustrated in Fig. 3. Then, using the estimated angle $\hat{\theta}$, the object is rotated back to its original principal direction with bi-cubic interpolation: if $\hat{\theta}$ is positive, the object is rotated by $-(\hat{\theta} + 90°)$ clockwise; if $\hat{\theta}$ is negative or above 90°, it is rotated by $-(\hat{\theta} - 90°)$ clockwise.
Fig. 3. Illustration of orientation angle estimation using multipoint.
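A minimal sketch of this angle-estimation step is given below, assuming scikit-image and NumPy are available. The 1°-180° sweep, the standard-deviation criterion and the ±90° correction rule follow the description above; the function and variable names are illustrative assumptions, not the paper's implementation.

import numpy as np
from skimage.transform import radon, rotate

def estimate_rotation_angle(segmented_iris):
    """Estimate the orientation of a segmented grayscale iris region.

    One Radon projection is taken for every angle from 1 to 180 degrees;
    the angle whose projection has the largest standard deviation is returned.
    """
    angles = np.arange(1, 181)
    sinogram = radon(segmented_iris, theta=angles, circle=False)  # one column per angle
    deviations = sinogram.std(axis=0)
    return int(angles[np.argmax(deviations)])

def correct_to_principal_direction(segmented_iris):
    """Rotate the region back to its principal direction (bi-cubic interpolation, order=3)."""
    theta_hat = estimate_rotation_angle(segmented_iris)
    correction = -(theta_hat + 90) if theta_hat > 0 else -(theta_hat - 90)
    return rotate(segmented_iris, correction, order=3, preserve_range=True)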
III. IRIS WAVELET FEATURE ANALYSIS
In this phase a wavelet based feature extraction process is employed to extract the features obscured in the iris patterns. This is an essential task for distinguishing one pattern from others, because some features may produce the same type of response for diverse patterns, which confounds the pattern recognition hypothesis that separates one class from another. To overcome this uncertainty the system needs an efficient way to extract quality features from the acquired pattern. The iris provides a sufficient amount of inter-class variability while minimising intra-class variability, so the characteristics of these patterns can be extracted efficiently with little computation. Among the various feature extractors, wavelet series approximate sharp transitions much more accurately than Fourier series; consequently, wavelet analysis replicates constant measurements perfectly. It produces a better approximation for data that exhibit local variation because, owing to its basis functions, each term in a wavelet series has compact support within a finite interval. Another reason to employ wavelets is orthogonality: the information carried by one term is independent of the information carried by the others, so there is no redundancy in the feature extraction, and neither computation time nor storage is wasted on the wavelet coefficients that are computed or stored. A further property of wavelets is multiresolution, which resembles the biological sensory system: many physical systems are organised into different levels or scales of some variable. It provides an economical structure whose computational complexity is O(N), where N data points are to be accessed [13]. In the current literature various computer vision and signal processing applications have been based on wavelet theory [14], such as detecting self-similarity, de-noising, compression, analysis and recognition. The technique has a proven ability to provide high coding efficiency and spatial and quality features. However, wavelet features are not rotation invariant because of their directional sensitivity. Hence this approach first estimates the rotation angle of the extracted pattern and rotates it to its principal direction; multiresolution wavelets are then employed to extract features from the rotation-corrected patterns. In the iris recognition process, this approach adopts the Daubechies (db) wavelet to decompose the iris patterns into multiple resolution sub-bands. These sub-bands are used to transform the well-distributed, complex iris patterns into a set of one-dimensional iris feature codes. Decomposition divides the given iris image into four sub-bands: approximation, horizontal, vertical and diagonal coefficients.
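The decomposition into these four sub-bands, and its repetition on the approximation described below in Eq. (9), can be sketched with the PyWavelets library as follows. This is an illustrative snippet under the assumption that the segmented iris region is available as a 2D NumPy array; 'db4' is only one possible choice of Daubechies wavelet, since the paper does not fix the order here, and the final feature-vector construction is a hypothetical example.

import numpy as np
import pywt

# Hypothetical segmented iris region (in practice, the unwrapped iris texture).
iris = np.random.rand(64, 256)

# Level-1 decomposition: approximation a1 and detail sub-bands (h1, v1, d1).
a1, (h1, v1, d1) = pywt.dwt2(iris, 'db4')

# Level-2 decomposition: apply the transform again to the approximation a1.
a2, (h2, v2, d2) = pywt.dwt2(a1, 'db4')

# Flatten selected sub-bands into a one-dimensional feature vector (illustrative only).
feature_code = np.concatenate([band.ravel() for band in (a2, h2, v2, d2)])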
 
A 2D Daubechies wavelet transform of an iris image I can be carried out in two steps: first, a 1D wavelet transform is performed on each row of I, producing a new image I1; second, I1 is taken as the input and a 1D transform is performed on each of its columns. A level-1 wavelet transform of an image, applied again to the approximation to obtain level 2, can be described as

$I \mapsto \left( a_1 \mid h_1, v_1, d_1 \right), \qquad a_1 \mapsto \left( a_2 \mid h_2, v_2, d_2 \right),$   (9)

where the sub-images $a_1$, $h_1$, $v_1$ and $d_1$ represent the level-1 approximation, horizontal, vertical and diagonal coefficients, and $a_2$, $h_2$, $v_2$ and $d_2$ the level-2 coefficients. The approximation is created by computing trends along the rows of I followed by trends along the columns; a trend is the running average of the sub-signals in the given image and yields a lower-frequency version of I. The other sub-signals (horizontal, vertical and diagonal) are created by taking fluctuations, i.e., running differences of the sub-signals. Each coefficient represents a spatial area corresponding to one quarter of the segmented iris image size. The low and high frequencies represent a bandwidth corresponding to