(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 11, 2011
Transformation Invariance and Luster Variability in the Real-Life Acquisition of Biometric Patterns
R. Bremananth
Information Systems and Technology Department, Sur University College, Affiliated to Bond University, Australia
P.O. 440, Postal Code 411, Sur, Oman
bremresearch@gmail.com / bremananth@suc.edu.om
Abstract— In real-life scenarios, obtaining transformation-invariant feature extraction is a challenging task in computer vision. Biometric recognition suffers from diverse luster variations and transformed patterns, especially for face and biometric features. These patterns depend mainly on the acquisition distance from the sensor to the subject's location and on the external luster of the environment, which produce diverse changes in the biometric features. Another invariance aspect is translation and rotation: face and biometric features should be positionally independent, since an Active-Region-of-Pattern (AROP) can occur anywhere in the acquired image. In this paper, we propose a Jacobian-based transformation invariance scheme. The method is incorporated to obtain the essential features required for transformation-invariant recognition. The results show that the proposed method is robust in real-life computer vision applications.

Keywords- Biometric; Luster variations; Jacobian transformation; Transformation invariant patterns;
I. INTRODUCTION
Transformation-invariant pattern recognition plays an essential role in computer vision, pattern recognition, document analysis, image understanding and medical imaging. A system that copes with the transformation distortions of real-life acquisition becomes an efficient recognition or identification system. In addition, features extracted from identical sources should be classified into the same classes under diverse luster and other deformations. An invariant pattern recognition system is capable of adjusting to exterior artifacts and produces minimal false positives for patterns obtained from intra-class samples. The aim of this paper is to propose transformation-invariant pattern recognition that improves the performance of a recognition system.

Images can be acquired either by a still camera or by extracting frames from the motion sequence of a video camera using a standard frame grabber or capture card. The latter method is more suitable for real-life processing because it produces a sequence of images from which the system can choose the best frame for preprocessing. Image status checking is carried out to select an image that is suitable for further processing such as binarization, localization and other recognition operations. Threshold analysis is an essential process for choosing a set of minimum and maximum values for real-life images, which provides efficient preprocessing under different kinds of luster. In the current literature, image quality assessment was discussed by Li Ma et al [1] to select a sharp biometric image from the input sequence using the Fourier transform. However, no capturing distance between the camera and the subject's position was reported in the literature [2] [3]. Currently, a biometric camera can capture eye images up to 36 inches away with clear pigments of biometrics, whereas this paper analyses biometric images acquired from 18 inches to 48 inches. Moreover, eye images are captured at divergent orientations, with different luster, and at varying distances between the biometric camera and the subjects, all of which are challenges to the proposed methodology. Furthermore, an anti-spoofing module aims to admit only living human beings by checking pupil diameter variations under diverse luster at the same capturing distance. It prevents artificial eye images from being enrolled in or verified by the system. This method is known as a challenge-response test.

Invariant feature extraction is a difficult problem in computer vision when recognizing a person in a non-invasive manner, for instance from a long distance. It provides high security in public-domain applications such as e-election, bank transactions, network login and other automatic person identification systems. The algorithms can be categorized into four types: quadrature-phasor encoding, texture analysis, zero-crossing and local variation methods; rotation-invariant feature extraction for biometrics was suggested by Daugman [3], Li Ma et al [1], Li Ma et al [2] and Bremananth et al [4][5][6], respectively. However, these methods have limitations such as masking bits for occlusion avoidance, shifting of feature bits, and the several templates required to make a system rotation invariant. Locating the active-region-of-pattern (AROP) is a complicated process in diverse environments and luster variations that includes luster
correction, invariance localization, segmentation, transformation-invariant feature extraction and recognition.

The remainder of this paper is organized as follows. Section II emphasizes transformation pattern extraction and the geometric and luster transformation functions. Issues of transformation-invariant pattern recognition are described in Section III. Section IV depicts the results obtained with the proposed methodologies. Concluding remarks and future research directions are given in Section V.

II. TRANSFORMATION PATTERNS EXTRACTION
The basic geometric transformations are usually employed in computer graphics and visualization, and are often executed in image analysis, pattern recognition and image understanding as well (Milan Sonka et al [7]). They allow the exclusion of image deformations that occur when images are captured in real-life conditions. If one strives to match two different images of the same subject, an image transformation is required to compensate for their changes in orientation, size and shape. For example, if one tries to capture and match remotely sensed eye images of the same area even after a minute, the recent image will not match exactly with the previous image due to factors such as position, scale, rotation and changes in the patina. To examine these alterations, it is necessary to execute an image transformation and then recognize the images. Skew occurs while capturing images with an obvious orientation at diverse angles. These variations may be very tiny, but they can be critical if the orientation is relied upon in subsequent processing. This is normally a problem in computer vision applications such as character, biometric and license plate recognition.

The basic transformation is a vector function T that maps the pixel (x, y) to a new position (x', y'), described as

    x' = T_x(x, y),    y' = T_y(x, y),                                    (1)

where T_x and T_y are the transformation equations. It transforms pixels on a point-to-point basis. The commonly used transformations in recognition systems are pixel coordinate and brightness transformations. Pixel coordinate transformation maps the coordinate points of the input pixels to points in the output image. Figure 1 illustrates pixel coordinate transformation.
Figure 1. Pixel coordinate transformation for a biometric image on an image plane.
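A minimal sketch of the point-to-point mapping of Equation (1), assuming NumPy; the transform chosen here is an arbitrary illustration, not a transform from this work:

```python
import numpy as np

def apply_point_transform(coords, t_x, t_y):
    """Map pixel coordinates (x, y) to (x', y') with a vector transform T = (T_x, T_y)."""
    x, y = coords[:, 0], coords[:, 1]
    return np.stack([t_x(x, y), t_y(x, y)], axis=1)

# Coordinates of a 4x4 image grid.
xs, ys = np.meshgrid(np.arange(4), np.arange(4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

# An arbitrary example transform: mild mixing of x and y plus a translation.
new_coords = apply_point_transform(
    coords,
    t_x=lambda x, y: 0.9 * x + 0.1 * y + 2.0,   # T_x(x, y)
    t_y=lambda x, y: -0.1 * x + 0.9 * y + 1.0,  # T_y(x, y)
)
print(new_coords[:4])
```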
Equation (1) is usually approximated by a polynomial (Milan Sonka et al [7]) as shown below:

    x' = Σ_{r=0}^{m} Σ_{k=0}^{m−r} a_{rk} x^r y^k,    y' = Σ_{r=0}^{m} Σ_{k=0}^{m−r} b_{rk} x^r y^k,        (2)

where a_{rk}, b_{rk} are linear coefficients, (x, y) is the known point and (x', y') is the transformed point in the output image. It is possible to determine a_{rk}, b_{rk} by solving the linear equations, if both sets of coordinate points are known. When the geometric transform does not change rapidly with position in the image, lower-order approximation polynomials (m = 2 or 3) are used with 6-10 pairs of corresponding points. These points should be distributed in the image in such a way that they can express the geometric transformation; typically, corresponding points are spread uniformly. When the geometric transform is sensitive to the distribution of corresponding points in the input, a higher degree of approximating polynomial is used. Equation (3) is approximated with four pairs of corresponding points by the bilinear transform described as

    x' = a_0 + a_1 x + a_2 y + a_3 xy,    y' = b_0 + b_1 x + b_2 y + b_3 xy.        (3)

The affine transformation requires three pairs of corresponding points to find the coefficients, as in (4). The affine transform includes geometric transformations such as rotation, translation, scaling and skewing:

    x' = a_0 + a_1 x + a_2 y,    y' = b_0 + b_1 x + b_2 y.        (4)
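A minimal sketch, assuming NumPy and already-known corresponding point pairs, of estimating the affine coefficients of Equation (4) (and, analogously, the bilinear coefficients of Equation (3)) by least squares; the function names are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y  (Eq. 4).
    src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])          # design matrix [1, x, y]
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)     # a0, a1, a2
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)     # b0, b1, b2
    return a, b

def fit_bilinear(src, dst):
    """Least-squares fit of the bilinear model of Eq. (3); needs N >= 4 pairs."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y])   # design matrix [1, x, y, xy]
    a, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b

# Example with three exact correspondences (a pure translation by (2, -1)).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([2.0, -1.0])
a, b = fit_affine(src, dst)
print(a, b)   # approx [2, 1, 0] and [-1, 0, 1]
```

With exactly three correspondences the affine fit is unique; with more pairs the least-squares solution averages out localization noise in the corresponding points.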
A transformation applied to the entire image may alter the coordinate system. The Jacobian J provides information about how the coordinates are modified by the transformation. It is represented as

    J = ∂(x', y') / ∂(x, y) = det [ ∂x'/∂x  ∂x'/∂y ; ∂y'/∂x  ∂y'/∂y ].        (5)

If the transformation is singular, J = 0; if the area of the image is invariant under the transformation, J = 1. The Jacobians of the bilinear and affine transforms are given in (6) and (7), respectively:

    J = a_1 b_2 − a_2 b_1 + (a_1 b_3 − a_3 b_1) x + (a_3 b_2 − a_2 b_3) y,        (6)

    J = a_1 b_2 − a_2 b_1.        (7)
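A minimal sketch, assuming NumPy, of evaluating the Jacobians of Equations (6) and (7) to check singularity (J = 0) and area invariance (J = 1); the coefficient layout follows Equations (3) and (4):

```python
import numpy as np

def jacobian_affine(a, b):
    """J = a1*b2 - a2*b1 for the affine transform of Eq. (4)  (Eq. 7)."""
    return a[1] * b[2] - a[2] * b[1]

def jacobian_bilinear(a, b, x, y):
    """Position-dependent Jacobian of the bilinear transform of Eq. (3)  (Eq. 6)."""
    return (a[1] * b[2] - a[2] * b[1]
            + (a[1] * b[3] - a[3] * b[1]) * x
            + (a[3] * b[2] - a[2] * b[3]) * y)

# A pure rotation keeps the area invariant, so J should be 1.
phi = np.deg2rad(30.0)
a = [0.0, np.cos(phi), np.sin(phi)]    # x' =  x cos(phi) + y sin(phi)
b = [0.0, -np.sin(phi), np.cos(phi)]   # y' = -x sin(phi) + y cos(phi)
print(np.isclose(jacobian_affine(a, b), 1.0))             # True: area-preserving
print(jacobian_affine([0, 2, 0], [0, 0, 3]))               # 6.0: scaling by 2 and 3
print(jacobian_bilinear([0, 1, 0, 0], [0, 0, 1, 0], 5, 5)) # 1.0: identity, any (x, y)
```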
9http://sites.google.com/site/ijcsis/ISSN 1947-5500
 
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No.11, 2011
A. Geometric Transformations
Biometric feature extraction depends on geometric data transformation. Face and biometric images mainly exhibit rotation transformations. In the real-life scenario, patterns acquired by the sensors diverge notably due to rotation, translation and scaling, so even robust algorithms may struggle to extract unique templates that preserve their prominent nature. For example, Table I describes some of the geometric transformations that can occur during the acquisition of biometric patterns. With these seven transformation types, we believe biometric patterns can be adapted to the circumstances that habitually disturb them.
TABLE I. GEOMETRIC TRANSFORMATION FUNCTIONS.

No. | Transformation type | Transformation function (x', y') | J
1 | Rotation through an angle φ about the origin, clockwise | x' = x cos φ + y sin φ;  y' = −x sin φ + y cos φ | J = 1
2 | Rotation through an angle φ about the origin, anticlockwise | x' = x cos φ − y sin φ;  y' = x sin φ + y cos φ | J = 1
3 | Rotation through an angle φ about a rotation point (x_r, y_r), anticlockwise | x' = x_r + (x − x_r) cos φ − (y − y_r) sin φ;  y' = y_r + (x − x_r) sin φ + (y − y_r) cos φ | J = 1
4 | Scaling by a in the x-axis and b in the y-axis | x' = ax;  y' = by | J = ab
5 | Fixed-point scaling about (x_f, y_f) | x' = x_f + (x − x_f) a;  y' = y_f + (y − y_f) b | J = ab
6 | Skew by the angle φ | x' = x + y tan φ;  y' = y | J = 1
7 | Translation by (Δx, Δy) | x' = x + Δx;  y' = y + Δy | J = 1
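A minimal sketch, assuming NumPy, that implements a few rows of Table I and checks their Jacobians numerically against a central-difference form of Equation (5); the helper names are illustrative:

```python
import numpy as np

def rotate_about(x, y, phi, xr=0.0, yr=0.0):
    """Row 3 of Table I: anticlockwise rotation by phi about (xr, yr)."""
    xp = xr + (x - xr) * np.cos(phi) - (y - yr) * np.sin(phi)
    yp = yr + (x - xr) * np.sin(phi) + (y - yr) * np.cos(phi)
    return xp, yp

def skew(x, y, phi):
    """Row 6 of Table I: skew by the angle phi."""
    return x + y * np.tan(phi), y

def numeric_jacobian(transform, x, y, eps=1e-6):
    """Central-difference estimate of J = det(d(x', y')/d(x, y))  (Eq. 5)."""
    dxdx = (transform(x + eps, y)[0] - transform(x - eps, y)[0]) / (2 * eps)
    dxdy = (transform(x, y + eps)[0] - transform(x, y - eps)[0]) / (2 * eps)
    dydx = (transform(x + eps, y)[1] - transform(x - eps, y)[1]) / (2 * eps)
    dydy = (transform(x, y + eps)[1] - transform(x, y - eps)[1]) / (2 * eps)
    return dxdx * dydy - dxdy * dydx

phi = np.deg2rad(25.0)
print(numeric_jacobian(lambda x, y: rotate_about(x, y, phi, 10.0, 5.0), 3.0, 7.0))  # ~1.0
print(numeric_jacobian(lambda x, y: skew(x, y, phi), 3.0, 7.0))                     # ~1.0
print(numeric_jacobian(lambda x, y: (2.0 * x, 3.0 * y), 3.0, 7.0))                  # ~6.0 = ab
```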
 
Any complex distortion can be approximated by partitioning the image into smaller rectangular sub-images. The distortion is estimated for each sub-image from corresponding pairs of pixels using the affine or bilinear method, and each sub-image is then corrected separately. An optical camera is a passive sensor that introduces non-negligible non-linearities in raster scanning and a non-constant sampling period when capturing a moving object. Several such distortions must be tackled, as in remote sensing. The main sources of rotation, skew, scale, translation and non-linear distortion are a wrong position or orientation of the camera or sensor with respect to the object, or a diverse way of acquiring the object. Figure 2 shows some of the distortions that occur while capturing an object with any type of passive sensor.

Line non-linearity distortion is caused by the variable distance of the object from the camera mirror, as shown in Fig. 2a. A camera mirror rotating at constant speed causes panoramic distortion, shown in Fig. 2b. The rotation or shake of an object during image capture produces skew distortion, as shown in Fig. 2c. Shear distortion is represented in Fig. 2d. Variation of the distance between the object and the camera provokes change-of-scale distortion, as shown in Fig. 2e. Figure 2f shows perspective distortion.

Figure 2. Types of distortion occurring in real-life acquisition.
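A minimal sketch of the block-wise correction idea described above, assuming NumPy and assuming an inverse affine mapping has already been estimated for each rectangular sub-image (for example with a fit such as the one sketched after Equation (4)); the names and the nearest-neighbour resampling choice are illustrative:

```python
import numpy as np

def correct_blockwise(image, block, affine_per_block):
    """Approximate a complex distortion by applying one inverse affine map per
    rectangular sub-image, with nearest-neighbour resampling.
    affine_per_block[(top, left)] = (a, b), a = (a0, a1, a2), b = (b0, b1, b2)."""
    h, w = image.shape
    out = np.zeros_like(image)
    for top in range(0, h, block):
        for left in range(0, w, block):
            a, b = affine_per_block[(top, left)]
            ys, xs = np.mgrid[top:min(top + block, h), left:min(left + block, w)]
            u = a[0] + a[1] * xs + a[2] * ys          # source x for each output pixel
            v = b[0] + b[1] * xs + b[2] * ys          # source y for each output pixel
            u = np.clip(np.rint(u).astype(int), 0, w - 1)
            v = np.clip(np.rint(v).astype(int), 0, h - 1)
            out[ys, xs] = image[v, u]                 # nearest-neighbour resampling
    return out

# Demo: 8x8 image, 4x4 blocks, every block shifted right by one pixel.
img = np.arange(64, dtype=float).reshape(8, 8)
shift = {(t, l): ((-1.0, 1.0, 0.0), (0.0, 0.0, 1.0))
         for t in (0, 4) for l in (0, 4)}
print(correct_blockwise(img, 4, shift))
```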
B. Luster Transformation Function
Luster transformation functions principally compensate for the sheens and gleams of picture elements that are revealed during the acquisition of biometric patterns. Here, we implemented some of the luster functions that most strongly affect the patterns and features of face and biometric images. Table II lists some of the luster transformation functions.
TABLE II. LUSTER TRANSFORMATION FUNCTIONS.

No. | Transformation type | Transformation function
1 | Nearest neighbour | f_n(x, y) = g(round(x), round(y))
2 | Linear interpolation | f_n(x, y) = (1 − a)(1 − b) g(l, k) + a(1 − b) g(l + 1, k) + (1 − a) b g(l, k + 1) + a b g(l + 1, k + 1),  where l = round(x), a = x − l, k = round(y), b = y − k
3 | Bi-cubic interpolation | f_n(x, y) = Σ_{l=−∞}^{∞} Σ_{k=−∞}^{∞} g(l, k) h_n(x − l, y − k),  where h_n is the interpolation kernel, h_n(x) = 1 − 2|x|^2 + |x|^3 for 0 ≤ |x| < 1;  4 − 8|x| + 5|x|^2 − |x|^3 for 1 ≤ |x| < 2;  0 otherwise,  and g(·, ·) is the sampled version of the input image
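A minimal sketch, assuming NumPy, of the three luster (brightness) interpolation functions of Table II; note that the lattice point in the linear case is taken here with floor rather than round so that the weights a, b stay in [0, 1], and the kernel follows the bi-cubic formula in the table:

```python
import numpy as np

def nearest_neighbour(g, x, y):
    """Row 1 of Table II: f_n(x, y) = g(round(x), round(y))."""
    return g[int(round(y)), int(round(x))]

def linear_interpolation(g, x, y):
    """Row 2 of Table II: bilinear brightness interpolation of the sampled image g."""
    l, k = int(np.floor(x)), int(np.floor(y))       # integer lattice point
    a, b = x - l, y - k                             # fractional offsets in [0, 1)
    return ((1 - a) * (1 - b) * g[k, l]
            + a * (1 - b) * g[k, l + 1]
            + (1 - a) * b * g[k + 1, l]
            + a * b * g[k + 1, l + 1])

def cubic_kernel(t):
    """Row 3 of Table II: the bi-cubic interpolation kernel h_n."""
    t = abs(t)
    if t < 1:
        return 1 - 2 * t**2 + t**3
    if t < 2:
        return 4 - 8 * t + 5 * t**2 - t**3
    return 0.0

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(nearest_neighbour(img, 0.4, 0.6))      # 20.0
print(linear_interpolation(img, 0.5, 0.5))   # 15.0
print(cubic_kernel(0.5), cubic_kernel(1.5))  # 0.625, -0.125
```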
III. ISSUES IN BIOMETRIC TRANSFORMATION INVARIANT PATTERNS
The issues in biometric transformation-invariant patterns, such as subsist detection during image acquisition, image quality measurement and tilted biometric patterns, have been addressed in this work to achieve a rotation-invariant recognition system.
A. Subsist Detection
Subsist detection is required to determine whether the biometric sample is actually presented by a living human or originates from an artificial source. One of the vulnerable points in the system is the user data capture interface, which should accept signals from the genuine subject and reject artificial sources such as printed pictures of biometrics, spurious fingers or eyes, video clips and any kind of eye-like objects. A challenge-response test verifies pupil diameter variations in the imaging. It monitors the diameter of eye images under