(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, 2010, ISSN 1947-5500
An Efficient Feature Extraction Technique for Texture Learning
R. Suguna
Research Scholar, Department of Information Technology
Madras Institute of Technology, Anna University
Chennai - 600 044, Tamil Nadu, India.
hitec_suguna@hotmail.com

P. Anandhakumar
Assistant Professor, Department of Information Tech.
Madras Institute of Technology, Anna University
Chennai - 600 044, Tamil Nadu, India.
anandh@annauniv.edu
Abstract
This paper presents a new methodology for discovering features of texture images. An Orthonormal Polynomial based Transform is used to extract features from the images. Using an orthonormal polynomial basis function, polynomial operators of different sizes are generated, and these operators are applied over the images to capture texture features. The training images are segmented into fixed-size blocks and features are extracted from each block: the operators are applied over the block, and their inner product yields the transform coefficients. This set of transform coefficients forms the feature set of a particular texture class. Using a clustering technique, a codebook is generated for each class; significant class-representative vectors are then calculated, which characterize the textures. Once the orthonormal basis function of a particular size is found, the operators can be realized with a few matrix operations, so the approach is computationally simple. A Euclidean distance measure is used in the classification phase. The transform coefficients have rotation-invariant capability: in the training phase the classifier is trained with samples at one particular angle and tested with samples at different angles. Texture images are collected from the Brodatz album. Experimental results show that the proposed approach provides good discrimination between the textures.
Keywords: Texture Analysis; Orthonormal Transform; codebook generation; Texture Class representatives; Texture Characterization.
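The training and classification pipeline summarized in the abstract (per-class feature sets, codebooks built by clustering, Euclidean nearest-vector matching) can be sketched as follows. The feature extraction itself is abstracted away here, and the function names and the choice of plain k-means are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: the final cluster centers act as the codebook."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every feature vector to its nearest center (Euclidean)
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def build_codebooks(features_by_class, k=4):
    """One codebook of class-representative vectors per texture class."""
    return {c: kmeans(F, k) for c, F in features_by_class.items()}

def classify(x, codebooks):
    """Label of the class whose nearest codebook vector is closest to x."""
    return min(codebooks,
               key=lambda c: np.linalg.norm(codebooks[c] - x, axis=1).min())
```

The per-class codebook keeps classification cheap at test time: one distance computation per representative vector rather than per training block.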
I. INTRODUCTION
Texture can be regarded as the visual appearance of a surface or material. Textures appear in numerous objects and environments and can consist of very different elements. Texture analysis is a basic issue in image processing and computer vision, and a key problem in many application areas such as object recognition, remote sensing and content-based image retrieval. A human may describe textured surfaces with adjectives like fine, coarse, smooth or regular, but finding mathematical features that indicate the same properties is very difficult: we recognize texture when we see it, yet it is very difficult to define. In computer vision, the visual appearance of a view is captured with digital imaging and stored as image pixels. Texture analysis researchers agree that there is significant variation in intensity levels or colors between nearby pixels, and that at the limit of resolution the image is non-homogeneous. This spatial non-homogeneity of pixels corresponds to the visual texture of the imaged material, which may result from physical surface properties such as roughness. Image resolution is important in texture perception; low-resolution images typically contain very homogeneous textures.

The appearance of texture depends upon three ingredients: (i) some local 'order' is repeated over a region that is large in comparison to the order's size, (ii) the order consists in the nonrandom arrangement of elementary parts, and (iii) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region [1].

Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties; texture is the most important visual cue in identifying such homogeneous regions. This is called texture classification. The goal of texture classification is to produce a classification map of the input image in which each uniform textured region is identified with the texture class it belongs to [2].

Texture analysis methods have been utilized in a variety of application domains. Texture plays an important role in automated inspection, medical image processing, document processing and remote sensing. In the detection of defects in texture images, most applications have been in the domain of textile inspection. Some diseases, such as interstitial fibrosis, affect the lungs in such a manner that the resulting changes in the X-ray images are texture changes as opposed to clearly delineated lesions; in such applications, texture analysis methods are ideally suited. Texture also plays a significant role in document processing and character recognition, where the text regions in a document are characterized by their high-frequency content. Texture analysis has further been used extensively to classify remotely sensed images. Land-use classification, where homogeneous regions with different types of terrain (such as wheat, bodies of water and urban regions) must be identified, is an important application; Haralick et al. [3] used gray-level co-occurrence features to analyze remotely sensed images.

Since we are interested in the interpretation of images, we can define texture as the characteristic variation in intensity of a region of an image that allows us to recognize and describe it and to outline its boundaries. The degrees of randomness and regularity are the key measures when characterizing a texture. In texture analysis, the similar textural elements that are replicated over a region of the image are called texels. This leads us to characterize textures in the following ways:
 
• The texels will have various sizes and degrees of uniformity.
• The texels will be oriented in various directions.
• The texels will be spaced at varying distances in different directions.
• The contrast will have various magnitudes and variations.
• Various amounts of background may be visible between texels.
• The variations composing the texture may each have varying degrees of regularity.

It is quite clear that a texture is a complicated entity to measure, primarily because many parameters are likely to be required to characterize it. Characterization of textured materials is usually very difficult, and the goal of characterization depends on the application. In general, the aim is to give a description of the analyzed material, which can be, for example, the classification result for a finite number of classes or a visual exposition of the surfaces. This gives additional information compared to color or shape measurements alone. Sometimes it is not even possible to obtain color information at all, as in night vision with infrared cameras. Color measurements are also usually more sensitive than texture to varying illumination conditions, making them harder to use in demanding environments such as outdoor scenes. Texture measures can therefore be very useful in many real-world applications, including outdoor scene image analysis.

To exploit texture in applications, the measures should be accurate in detecting different texture structures, yet remain invariant or robust to varying conditions that affect the texture appearance. Computational complexity should not be too high, to preserve realistic use of the methods. Different applications set various requirements on texture analysis methods, and the selection of measures is usually done with respect to the specific application.

Typically, textures and their analysis methods are divided into two main categories with different computational approaches: stochastic and structural. Structural textures are often man-made, with a very regular appearance consisting, for example, of line or square primitive patterns systematically located on the surface (e.g. brick walls). In structural texture analysis, the properties and appearance of the textures are described with rules that specify what kinds of primitive elements occur in the surface and how they are located. Stochastic textures are usually natural and consist of randomly distributed texture elements, which again can be, for example, lines or curves (e.g. tree bark).
The analysis of these kinds of textures is based on statistical properties of image pixels and regions. The above categorization of textures is not the only possible one; several others exist as well, for example artificial vs. natural, or micro textures vs. macro textures. Regardless of the categorization, texture analysis methods try to describe the properties of the textures in a proper way. It depends on the application what kinds of properties should be sought from the textures under inspection and how to do so; this is rarely an easy task.

One of the major problems when developing texture measures is to include invariant properties in the features. It is very common in a real-world environment that, for example, the illumination changes over time and causes variations in the texture appearance. Texture primitives can also rotate and be located in many different ways, which causes further problems. On the other hand, if the features are too invariant, they might not be discriminative enough.
 
II. TEXTURE MODELS
Image texture has a number of perceived qualities which play an important role in describing it. One of the defining qualities of texture is the spatial distribution of gray values; the use of statistical features is therefore one of the early methods proposed in the machine vision literature.

The gray-level co-occurrence matrix approach is based on studies of the statistics of pixel intensity distributions. The early paper by Haralick et al. [4] presented 14 texture measures, and these were used successfully to classify many types of materials, for example wood, corn, grass and water. However, Conners and Harlow [5] found that only five of these measures were normally used, viz. "energy", "entropy", "correlation", "local homogeneity" and "inertia". The size of the co-occurrence matrix is high, and a suitable choice of d (distance) and θ (angle) has to be made to obtain relevant features.

A novel texture energy approach was presented by Laws [6]. This involved the application of simple filters to digital images. The basic filters he used were common Gaussian, edge-detector and Laplacian-type filters, designed to highlight points of high "texture energy" in the image. Ade investigated the theory underlying Laws' approach and developed a revised rationale in terms of eigenfilters [7]. Each eigenvalue gives the part of the variance of the original image that can be extracted by the corresponding filter; filters that give rise to low variances can be taken to be relatively unimportant for texture recognition.

The structural models of texture assume that textures are composed of texture primitives. The texture is produced by the placement of these primitives according to certain placement rules. This class of algorithms is, in general, limited in power unless one is dealing with very regular textures. Structural texture analysis consists of two major steps: (a) extraction of the texture elements, and (b) inference of the placement rule. An approach to model texture by structural means is described by Fu [8]. In this approach the texture image is regarded as texture primitives arranged according to a placement rule. The primitive can be as simple as a single pixel that takes a gray value, but it is usually a collection of pixels. The placement rule is defined by a tree grammar; a texture is then viewed as a string in the language defined by the grammar, whose terminal symbols are the texture primitives. An advantage of this method is that it can be used for texture generation as well as texture analysis.

Model-based texture analysis methods are based on the construction of an image model that can be used not only to describe texture but also to synthesize it. The model parameters capture the essential perceived qualities of texture. Markov random fields (MRFs) have been popular for modeling images: they capture the local (spatial) contextual information in an image by assuming that the intensity at each pixel depends only on the intensities of the neighboring pixels. Many natural surfaces have a statistical quality of roughness and self-similarity at different scales, and fractals have become popular for modeling these properties in image processing.

However, the majority of existing texture analysis methods make the explicit or implicit assumption that texture images are acquired from the same viewpoint (e.g. the same scale and orientation), which limits these methods. In many practical applications it is very difficult or impossible to ensure that captured images have the same translation, rotation or scaling between each other. Texture analysis should ideally be invariant to viewpoint. Furthermore, based on cognitive theory and our own perceptive experience, no matter how a texture image is changed under translation, rotation, scaling or even perspective distortion, it is always perceived as the same texture by a human observer.
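As a concrete illustration of the co-occurrence approach, the sketch below (plain NumPy; the image is assumed to be pre-quantized to a small number of gray levels, and the function names are illustrative) builds a GLCM for one displacement and computes four of the measures named above:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Co-occurrence matrix of gray-level pairs at displacement (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()                      # normalize to joint probabilities

def cooccurrence_measures(P):
    """Energy, entropy, inertia and local homogeneity of a normalized GLCM."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "energy": float((P ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "inertia": float(((i - j) ** 2 * P).sum()),     # a.k.a. contrast
        "local_homogeneity": float((P / (1.0 + (i - j) ** 2)).sum()),
    }
```

Different choices of (d, θ) yield different matrices, which is exactly the parameter-selection issue noted above.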
Invariant texture analysis is thus highly desirable from both the practical and the theoretical viewpoint. Recent developments include work on automated visual inspection. Ojala et al. [9] and Manthalkar et al. [10] aimed at rotation-invariant texture classification, while Pun and Lee [11] aimed at scale invariance. Davis [12] describes a tool called the polarogram for image texture analysis and used it to obtain invariant texture features. In Davis's method, the co-occurrence matrix of a texture image must be computed prior to the polarograms. However, it is well known that a texture image can produce a set of co-occurrence matrices for the different values of θ and d, which results in a set of polarograms corresponding to one texture; a single polarogram is not enough to describe a texture image, and how many polarograms are required remains an open problem. The polar grid is also used by Mayorga and Ludeman [13] for rotation-invariant texture analysis. Their features are extracted from texture edge statistics obtained through directional derivatives among circularly layered data, and two sets of invariant features are used for classification: the first is obtained by computing the circularly averaged differences in gray level between pixels; the second computes the correlation function along circular levels. Many recent publications demonstrate that Zernike moments perform well in practice for obtaining geometric invariance.

Local frequency analysis has also been used for texture analysis. One of the best known methods uses Gabor filters and is based on magnitude information [14]; phase information has been used in [15], and histograms together with spectral information in [16].
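A minimal sketch of the magnitude-based Gabor approach mentioned above; the kernel parameters, the direct (slow) convolution, and the mean-magnitude pooling are illustrative assumptions rather than the method of [14]:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def filter_valid(img, kernel):
    """Direct 'valid' correlation, kept simple rather than fast."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=complex)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def gabor_magnitude_features(img, freqs=(0.1, 0.2), n_thetas=4):
    """Mean magnitude response per (frequency, orientation) channel."""
    return np.array([np.abs(filter_valid(img, gabor_kernel(f, np.pi * t / n_thetas))).mean()
                     for f in freqs for t in range(n_thetas)])
```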
Ojala and Pietikäinen [17] proposed a multichannel approach to texture description by approximating joint occurrences of multiple features with marginal distributions, as 1-D histograms, and combining the similarity scores for the 1-D histograms into an aggregate similarity score. Ojala also introduced a generalized approach to gray-scale and rotation-invariant texture classification based on local binary patterns [18], and presented the current status of a new initiative aimed at developing a versatile framework and image database for empirical evaluation of texture analysis algorithms. Another frequently used approach to texture description characterizes the texture with distributions of quantized filter responses (Leung and Malik [19]; Varma and Zisserman [20]). Ahonen proved that the local binary pattern operator can be seen as a filter operator based on local derivative filters at different orientations together with a special vector quantization function [21]. A rotation-invariant extension to the blur-insensitive local phase quantization texture descriptor is presented by Ojansivu [22].

Unitary transformations are also used to represent images. A simple and powerful class of transform coding is linear block transform coding, where the entire image is partitioned into a number of non-overlapping blocks and a transformation is applied to each block to yield transform coefficients. This is necessitated by the fact that the original pixel values of the image are highly correlated. A framework using orthogonal polynomials for edge detection and texture analysis is presented in [23] [24].
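Linear block transform coding, as described above, amounts to a pair of matrix multiplications per block. The sketch below uses an orthonormal DCT-II basis as a stand-in transform; the orthogonal-polynomial basis of [23] [24] would slot in the same way:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the basis vectors."""
    k, t = np.mgrid[0:n, 0:n]
    A = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2.0 * n))
    A[0] /= np.sqrt(2.0)                 # first row gets weight 1/sqrt(n)
    return A

def block_transform(img, A):
    """Apply B = A X A^T to every non-overlapping n x n block of img."""
    n = A.shape[0]
    out = np.zeros(img.shape, dtype=float)
    for y in range(0, img.shape[0] - n + 1, n):
        for x in range(0, img.shape[1] - n + 1, n):
            out[y:y + n, x:x + n] = A @ img[y:y + n, x:x + n] @ A.T
    return out
```

Because A is orthonormal, each block is recovered exactly as A^T B A; decorrelating the highly correlated pixel values is what makes the coefficients useful as features.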
 
III. ORTHONORMAL POLYNOMIAL TRANSFORM
 
A linear 2-D image formation system is usually considered around a Cartesian-coordinate-separable, blurring, point spread operator, in which the image I results from the superposition of point-source impulses weighted by the values of the object f. Expressing the object function f in terms of derivatives of the image function I relative to its Cartesian coordinates is very useful for analyzing the image. The point spread function M(x, y) can be considered a real-valued function defined for (x, y) ∈ X × Y, where X and Y are ordered subsets of real values. In the case of a gray-level image of size (n × n), where X (rows) consists of a finite set labeled for convenience as {0, 1, 2, ..., n−1}, the function M(x, y) reduces to a sequence of functions

    M(i, t) = u_i(t),   t = 0, 1, ..., n−1.   (1)

The linear two-dimensional transform can then be defined by the point spread operator M(x, y), with M(i, t) = u_i(t), as shown in equation (2): the transform coefficients β(i, j) are the inner products of the operators with the image,

    β(i, j) = Σ_{x=0}^{n−1} Σ_{y=0}^{n−1} u_i(x) u_j(y) I(x, y).   (2)
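The basis construction itself is not fully specified in this excerpt (it follows [23] [24]). As a sketch under that caveat, one way to obtain such a basis is to Gram-Schmidt-orthonormalize the monomial sequences t^i sampled at the n points, after which equation (2) is just two matrix multiplications, consistent with the computational-simplicity claim in the abstract:

```python
import numpy as np

def orthonormal_poly_basis(n):
    """Rows u_i(t): orthonormalized monomials t**i sampled at t = 0..n-1.
    (An assumed construction; the paper's own basis follows its refs [23][24].)"""
    t = np.arange(n, dtype=float)
    V = np.vstack([t ** i for i in range(n)])   # rows: 1, t, t^2, ...
    U = np.zeros_like(V)
    for i in range(n):
        v = V[i].copy()
        for j in range(i):                      # subtract earlier projections
            v -= (U[j] @ V[i]) * U[j]
        U[i] = v / np.linalg.norm(v)
    return U

def transform_coefficients(block, U):
    """Equation (2): beta(i, j) = sum_x sum_y u_i(x) u_j(y) I(x, y) = U I U^T."""
    return U @ block @ U.T
```

Once U is computed for one block size, applying the operators to every block is only these few matrix operations.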
 