(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, 2010, ISSN 1947-5500
An Efficient Feature Extraction Technique
for Texture Learning
R. Suguna
Research Scholar, Department of Information Technology
Madras Institute of Technology, Anna University
Chennai 600 044, Tamil Nadu, India.
hitec_suguna@hotmail.com
P. Anandhakumar
Assistant Professor, Department of Information Tech.
Madras Institute of Technology, Anna University
Chennai 600 044, Tamil Nadu, India.
anandh@annauniv.edu
Abstract— This paper presents a new methodology for extracting features from texture images. An orthonormal polynomial based transform is used to extract the features. Using the orthonormal polynomial basis functions, polynomial operators of different sizes are generated. These operators are applied over the images to capture the texture features. The training images are segmented into fixed size blocks and features are extracted from them. The operators are applied over each block, and their inner product yields the transform coefficients. This set of transform coefficients forms the feature set of a particular texture class. Using a clustering technique, a codebook is generated for each class. Then significant class representative vectors, which characterize the textures, are calculated. Once the orthonormal basis function of a particular size is found, the operators can be realized with a few matrix operations, and hence the approach is computationally simple. The Euclidean distance measure is used in the classification phase. The transform coefficients have rotation invariant capability. In the training phase the classifier is trained with samples at one particular rotation angle and tested with samples at different angles. Texture images are collected from the Brodatz album. Experimental results show that the proposed approach provides good discrimination between the textures.
Keywords— Texture Analysis; Orthonormal Transform; Codebook Generation; Texture Class Representatives; Texture Characterization
I. INTRODUCTION
Texture can be regarded as the visual appearance of a
surface or material. Textures appear in numerous objects and
environments in the universe and they can consist of very
different elements. Texture analysis is a basic issue in image
processing and computer vision. It is a key problem in many
application areas, such as object recognition, remote sensing,
content-based image retrieval and so on. A human may
describe textured surfaces with adjectives like fine, coarse,
smooth or regular. But finding the correlation with
mathematical features indicating the same properties is very
difficult. We recognize texture when we see it but it is very
difficult to define. In computer vision, the visual appearance of
the view is captured with digital imaging and stored as image
pixels. Texture analysis researchers agree that in a texture there is significant variation in intensity levels or colors between nearby pixels, and that at the limit of resolution there is non-homogeneity. Spatial non-homogeneity of pixels corresponds to the visual texture of the imaged material, which may result from physical surface properties such as roughness. Image resolution is important in texture perception, and low-resolution images typically contain very homogeneous textures.
The appearance of texture depends upon three ingredients: (i) some local 'order' is repeated over a region which is large in comparison to the order's size, (ii) the order consists in the nonrandom arrangement of elementary parts, and (iii) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region [1].
Image texture, defined as a function of the spatial variation
in pixel intensities (gray values), is useful in a variety of
applications and has been a subject of intense study by many
researchers. One immediate application of image texture is the
recognition of image regions using texture properties. Texture
is the most important visual cue in identifying these types of
homogeneous regions. This is called texture classification. The
goal of texture classification then is to produce a classification
map of the input image where each uniform textured region is
identified with the texture class it belongs to [2].
Texture analysis methods have been utilized in a variety
of application domains. Texture plays an important role in
automated inspection, medical image processing, document
processing and remote sensing. In the detection of defects in
texture images, most applications have been in the domain
of textile inspection. Some diseases, such as interstitial fibrosis, affect the lungs in such a manner that the resulting changes in the X-ray images are texture changes as opposed to clearly delineated lesions. Texture analysis methods are ideally suited for such images.
Texture plays a significant role in document processing and
character recognition. The text regions in a document are
characterized by their high frequency content. Texture
analysis has been extensively used to classify remotely sensed
images. Land use classification where homogeneous regions
with different types of terrains (such as wheat, bodies of water,
urban regions, etc.) need to be identified is an important application. Haralick et al. [3] used gray level co-occurrence features to analyze remotely sensed images.
Since we are interested in the interpretation of images, we can define texture as the characteristic variation in intensity of a region of an image which should allow us to recognize and describe it and outline its boundaries. The degrees of randomness and of regularity will be the key measures when characterizing a texture. In texture analysis, the similar textural elements that are replicated over a region of the image are called texels. This leads us to characterize textures in the following ways:
• The texels will have various sizes and degrees of
uniformity
• The texels will be oriented in various directions
• The texels will be spaced at varying distances in different
directions
• The contrast will have various magnitudes and variations
• Various amounts of background may be visible between
texels
• The variations composing the texture may each have
varying degrees of regularity
It is quite clear that a texture is a complicated entity to
measure. The reason is primarily that many parameters are
likely to be required to characterize it. Characterization of
textured materials is usually very difficult and the goal of
characterization depends on the application. In general, the aim
is to give a description of analyzed material, which can be, for
example, the classification result for a finite number of classes
or visual exposition of the surfaces. It gives additional information compared to color or shape measurements of the objects alone. Sometimes it is not even possible to obtain color information at all, as in night vision with infrared cameras. Color measurements are usually more sensitive to varying illumination conditions than texture, making them harder to use in demanding environments like outdoor conditions. Therefore texture measures can be very useful in many real-world applications, including, for example, outdoor scene image analysis.
To exploit texture in applications, the measures should be
accurate in detecting different texture structures, but still be
invariant or robust with varying conditions that affect the
texture appearance. Computational complexity should not be
too high to preserve realistic use of the methods. Different
applications set various requirements on the texture analysis
methods, and usually selection of measures is done with respect
to the specific application.
Typically textures and the analysis methods related to them
are divided into two main categories with different
computational approaches: the stochastic and the structural
methods. Structural textures are often man-made with a very
regular appearance consisting, for example, of line or square
primitive patterns that are systematically located on the surface
(e.g. brick walls). In structural texture analysis the properties
and the appearance of the textures are described with different
rules that specify what kind of primitive elements there are in
the surface and how they are located. Stochastic textures are
usually natural and consist of randomly distributed texture
elements, which again can be, for example, lines or curves (e.g.
tree bark). The analysis of these kinds of textures is based on
statistical properties of image pixels and regions. The above
categorization of textures is not the only possible one; there
exist several others as well, for example, artificial vs. natural or
micro textures vs. macro textures. Regardless of the categorization, texture analysis methods try to describe the properties of the textures in a proper way. Which properties should be sought from the textures under inspection, and how to extract them, depends on the application. This is rarely an easy task.
One of the major problems when developing texture
measures is to include invariant properties in the features. It is
very common in a realworld environment that, for example,
the illumination changes over time, and causes variations in the
texture appearance. Texture primitives can also be rotated and
located in many different ways, which also causes problems. On
the other hand, if the features are too invariant, they might not
be discriminative enough.
II. TEXTURE MODELS
Image texture has a number of perceived qualities which
play an important role in describing texture. One of the
defining qualities of texture is the spatial distribution of gray
values. The use of statistical features is therefore one of the
early methods proposed in the machine vision literature.
The gray-level co-occurrence matrix approach is based on studies of the statistics of pixel intensity distributions. The early paper by Haralick et al. [4] presented 14 texture measures, and these were used successfully for classification of many types of materials, for example wood, corn, grass and water. However, Conners and Harlow [5] found that only five of these measures were normally used, viz. "energy", "entropy", "correlation", "local homogeneity", and "inertia". The size of the co-occurrence matrix is large, and a suitable choice of d (distance) and θ (angle) has to be made to obtain relevant features.
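As a concrete illustration of the (d, θ) dependence, the following numpy sketch (our own toy example, not taken from any cited implementation) builds a co-occurrence matrix for a single offset and computes two of the five measures named above, energy and entropy:

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Gray-level co-occurrence matrix for one offset (d, theta).

    img: 2D array of integer gray levels in [0, levels).
    theta is in degrees; only 0/45/90/135 are meaningful on a pixel grid.
    """
    dy = -int(round(d * np.sin(np.deg2rad(theta))))
    dx = int(round(d * np.cos(np.deg2rad(theta))))
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()          # normalize to joint probabilities

def energy(P):
    # sum of squared co-occurrence probabilities
    return float((P ** 2).sum())

def entropy(P):
    # Shannon entropy of the co-occurrence distribution, in bits
    nz = P[P > 0]
    return float(-(nz * np.log2(nz)).sum())

# tiny 4-level toy image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, d=1, theta=0, levels=4)
print(energy(P), entropy(P))
```

A different choice of d or θ produces a different matrix P, which is exactly why the feature values depend on that choice.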
A novel texture energy approach was presented by Laws [6]. It involved the application of simple filters to digital images. The basic filters he used were common Gaussian, edge detector, and Laplacian-type filters, designed to highlight points of high "texture energy" in the image. Ade investigated the theory underlying Laws' approach and developed a revised rationale in terms of eigenfilters [7]. Each eigenvalue gives the part of the variance of the original image that can be extracted by the corresponding filter. The filters that give rise to low variances can be taken to be relatively unimportant for texture recognition.
The structural models of texture assume that textures are
composed of texture primitives. The texture is produced by
the placement of these primitives according to certain
placement rules. This class of algorithms, in general, is
limited in power unless one is dealing with very regular
textures. Structural texture analysis consists of two major
steps: (a) extraction of the texture elements, and (b) inference
of the placement rule. An approach to model the texture by
structural means is described by Fu [8]. In this approach the
texture image is regarded as texture primitives arranged
according to a placement rule. The primitive can be as simple
as a single pixel that can take a gray value, but it is usually a
collection of pixels. The placement rule is defined by a tree
grammar. A texture is then viewed as a string in the language
defined by the grammar whose terminal symbols are the texture
primitives. An advantage of this method is that it can be used
for texture generation as well as texture analysis.
Model based texture analysis methods are based on the
construction of an image model that can be used not only to
describe texture, but also to synthesize it. The model
parameters capture the essential perceived qualities of texture.
Markov random fields (MRFs) have been popular for modeling
images. They are able to capture the local (spatial) contextual
information in an image. These models assume that the
intensity at each pixel in the image depends on the intensities
of only the neighboring pixels. Many natural surfaces have a
statistical quality of roughness and selfsimilarity at different
scales. Fractals are very useful and have become popular in
modeling these properties in image processing.
However, the majority of existing texture analysis methods make the explicit or implicit assumption that texture images are acquired from the same viewpoint (e.g. the same scale and orientation). This limits the applicability of these methods.
In many practical applications, it is very difficult or impossible
to ensure that images captured have the same translations,
rotations or scaling between each other. Texture analysis
should be ideally invariant to viewpoints. Furthermore, based
on the cognitive theory and our own perceptive experience,
given a texture image, no matter how it is changed under
translation, rotation and scaling or even perspective distortion,
it is always perceived as the same texture image by a human
observer. Invariant texture analysis is thus highly desirable
from both the practical and theoretical viewpoint.
Recent developments include work on automated visual inspection. Ojala et al. [9] and Manthalkar et al. [10] aimed at rotation invariant texture classification. Pun and Lee [11] aim at scale invariance. Davis [12] describes a new tool (called the polarogram) for image texture analysis and used it to obtain invariant texture features. In Davis's method, the co-occurrence matrix of a texture image must be computed prior to the polarograms. However, it is well known that a texture image can produce a set of co-occurrence matrices due to the different values of θ and d. This also results in a set of polarograms corresponding to a texture. One polarogram alone is not enough to describe a texture image, and how many polarograms are required remains an open problem. The polar grid is also used by
Mayorga and Ludeman [13] for rotation invariant texture
analysis. The features are extracted on the texture edge
statistics obtained through directional derivatives among
circularly layered data. Two sets of invariant features are used
for texture classification. The first set is obtained by computing
the circularly averaged differences in the gray level between
pixels. The second computes the correlation function along
circular levels. It is demonstrated by many recent publications
that Zernike moments perform well in practice to obtain
geometric invariance.
Local frequency analysis has been used for texture analysis.
One of the best known methods uses Gabor filters and is based
on the magnitude information [14]. Phase information has been
used in [15] and histograms together with spectral information
in [16]. Ojala T & Pietikäinen M [17] proposed a
multichannel approach to texture description by approximating
joint occurrences of multiple features with marginal
distributions, as 1D histograms, and combining similarity
scores for 1D histograms into an aggregate similarity score.
Ojala T introduced a generalized approach to gray scale and rotation invariant texture classification based on local binary patterns [18], and also presented the current status of a new initiative aimed at developing a versatile framework and image database for empirical evaluation of texture analysis algorithms. Another frequently used approach in texture description is characterizing the texture with distributions of quantized filter responses (Leung and Malik; Varma and Zisserman) [19] [20]. Ahonen T proved that the local binary pattern operator can be seen as a filter operator based on local derivative filters at different orientations combined with a special vector quantization function [21].
A rotation invariant extension to the blur insensitive local
phase quantization texture descriptor is presented by Ojansivu
V [22].
Unitary transformations are also used to represent images. A simple and powerful class of transform coding is linear block transform coding, where the entire image is partitioned into a number of non-overlapping blocks and a transformation is applied to each block to yield transform coefficients. This is necessitated by the fact that the original pixel values of the image are highly correlated. A framework using orthogonal polynomials for edge detection and texture analysis is presented in [23] [24].
III. ORTHONORMAL POLYNOMIAL TRANSFORM
A linear 2D image formation system is usually considered to be built around a Cartesian-coordinate-separable, blurring, point spread operator, in which the image I results from the superposition of point sources of impulse weighted by the value of the object f. Expressing the object function f in terms of derivatives of the image function I relative to its Cartesian coordinates is very useful for analyzing the image. The point spread function M(x, y) can be considered to be a real valued function defined for (x, y) ∈ X × Y, where X and Y are ordered subsets of real values. In the case of a gray-level image of size (n x n), where X (rows) consists of a finite set which for convenience is labeled {0, 1, 2, …, n−1}, the function M(x, y) reduces to a sequence of functions:

M(i, t) = u_i(t),   t = 0, 1, …, n−1        (1)
The linear two dimensional transformation can be defined by the point spread operator M(x, y) (with M(i, t) = u_i(t)) as shown in equation (2):

β′(ζ, η) = Σ_{y∈Y} Σ_{x∈X} M(ζ, x) M(η, y) I(x, y)        (2)
Considering both X and Y to be the finite set of values {0, 1, 2, …, n−1}, equation (2) can be written in matrix notation as

|β′_i| = (H ⊗ H)ᵀ I        (3)

where ⊗ is the outer product, |β′_i| are the n² coefficients of transformation arranged in dictionary sequence, I is the image, and the point spread operator H is
H =
| u_0(t_1)   u_1(t_1)   …   u_{n−1}(t_1) |
| u_0(t_2)   u_1(t_2)   …   u_{n−1}(t_2) |
|    ⋮           ⋮                ⋮      |
| u_0(t_n)   u_1(t_n)   …   u_{n−1}(t_n) |        (4)
We consider the set of orthogonal polynomials u_0(t), u_1(t), …, u_{n−1}(t) of degrees 0, 1, 2, …, n−1 respectively to construct the polynomial operators of different sizes from equation (4) for n ≥ 2 and t_i = i. The generating formula for the polynomials is as follows:
u_{i+1}(t) = (t − μ) u_i(t) − b_i(n) u_{i−1}(t)   for i ≥ 1        (5)

u_1(t) = t − μ,   and   u_0(t) = 1,
where

b_i(n) = ⟨u_i, u_i⟩ / ⟨u_{i−1}, u_{i−1}⟩ = Σ_{t=1}^{n} u_i²(t) / Σ_{t=1}^{n} u_{i−1}²(t)        (6)
and

μ = (1/n) Σ_{t=1}^{n} t        (7)
Considering the range of values of t to be t_i = i, i = 1, 2, 3, …, n, we get

b_i(n) = i²(n² − i²) / (4(4i² − 1))        (8)

μ = (1/n) Σ_{t=1}^{n} t = (n + 1)/2        (9)
We can construct point-spread operators H of different sizes from equation (4) using the above orthogonal polynomials for n ≥ 2 and t_i = i. The orthogonal basis functions for n = 2 and n = 3 are given below:

| 1  −1 |        | 1  −1   1 |
| 1   1 |        | 1   0  −2 |
                 | 1   1   1 |
Orthonormal basis functions can be derived from orthogonal sets. Suppose that S is a set of vectors in an inner product space.
(a) If each pair of distinct vectors from S is orthogonal, then we call S an orthogonal set.
(b) If S is an orthogonal set and each of the vectors in S also has a norm of 1, then we call S an orthonormal set.
To enforce the orthonormal property, divide each vector by its norm. Suppose S = {v_1, v_2, v_3} forms an orthogonal set. Then ⟨v_1, v_2⟩ = ⟨v_2, v_3⟩ = ⟨v_1, v_3⟩ = 0. Any vector v can be turned into a vector of norm 1 by dividing by its norm:

(1/‖v‖) v        (10)
To convert S to an orthonormal set, divide each vector by its norm:

û_i = (1/‖v_i‖) v_i,   i = 1, 2, 3        (11)
After finding the orthonormal basis functions, the operators are generated by applying the outer product. For an orthonormal basis function of size n, n² operators are generated. Applying the operators over a block of the image, we get the transform coefficients.
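The normalization of equations (10)-(11) and the outer product construction of the operators can be sketched as follows (a toy numpy illustration using the n = 3 basis given in the text; the block values are invented):

```python
import numpy as np

# Orthogonal basis for n = 3 as given in the text (columns u_0, u_1, u_2).
H = np.array([[1., -1.,  1.],
              [1.,  0., -2.],
              [1.,  1.,  1.]])

# Normalize each column to unit norm -- equations (10)-(11).
Hn = H / np.linalg.norm(H, axis=0)
assert np.allclose(Hn.T @ Hn, np.eye(3))        # orthonormality check

# n^2 polynomial operators via outer products of basis columns.
n = H.shape[0]
ops = [np.outer(Hn[:, i], Hn[:, j]) for i in range(n) for j in range(n)]

# Transform coefficients of one n x n image block: inner product of each
# operator with the block (equivalently Hn.T @ block @ Hn).
block = np.array([[10., 12., 11.],
                  [ 9., 13., 12.],
                  [11., 10., 14.]])
coeffs = np.array([(op * block).sum() for op in ops])
print(coeffs)
```

The per-operator inner products and the matrix form Hn.T @ block @ Hn agree term by term, which is why the approach needs only a few matrix operations per block.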
IV. METHODOLOGY
Sample images representing different textures are collected. We collected the images from the Outex texture database. Each image is of size 128 x 128. Images of each texture are partitioned into two groups, a training set and a test set.
The process involved in capturing the texture characterization is depicted in Figure 1. Each training image is partitioned into non-overlapping blocks of size M x M. We have chosen M = 4. Features are extracted from each block using the orthonormal polynomial based transform as described in Section III. From each block a k-dimensional feature vector is generated. A codebook is built for each class. The algorithm for construction of the codebook is discussed below.
Figure 1. Process involved in Texture Characterization
A. Codebook Generation Algorithm
Input: Training images of texture T_i
Output: Codebook of the texture T_i
1. Read the image Tr(m) from the texture class T_i, where m = 1, 2, …, M; M denotes the number of training images in T_i, and i = 1, 2, …, L; L denotes the number of textures. The size of Tr(m) is 128 x 128.
2. Each image is partitioned into p x p blocks, giving P blocks for each training image; p = 4.
3. For each block, apply the orthonormal polynomial based transform using the set of (p x p) polynomial operators and extract the feature coefficients. The inner product between a polynomial operator and an image block results in one transform coefficient. We get p² coefficients for each block.
4. Rearrange the feature coefficients into a 1D array in descending sequence.
5. Take only d coefficients to form the feature vector z, where z = {z(j), j = 1, 2, …, d; d < k}.
6. From the P blocks, get P x d coefficients.
7. Repeat steps 2-6 for all images in T_i and collect the z vectors.
Apply a clustering technique to cluster the feature vectors of T_i. The number of clusters decides the codebook size. The means of the clusters form the code vectors.
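The clustering stage can be sketched with a naive k-means (our own minimal implementation for illustration, not the paper's; real use would rely on a library clustering routine):

```python
import numpy as np

def build_codebook(features, k, iters=20):
    """Toy k-means: the cluster mean vectors become the code vectors.

    features: (N, d) array of per-block feature vectors of one texture class.
    """
    # naive deterministic init: k vectors spread across the data
    codebook = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each feature vector to its nearest code vector
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each code vector to the mean of its cluster
        for j in range(k):
            if (labels == j).any():
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

# two well-separated blobs of d = 4 dimensional "block features"
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 4)), rng.normal(5, 0.1, (50, 4))])
codebook, labels = build_codebook(feats, k=2)
print(codebook.round(1))
```

Here the codebook size k is a free parameter; as noted in the conclusion, varying it trades off codebook compactness against discrimination.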
B. Building Class Representative Vector
Input: Images of size N x N, texture codebook
Output: Class representative vector R_i
1. For each image in T_i, generate the code indices associated with the corresponding codebook.
2. Find the number of occurrences of each code index for each image.
3. Compute the mean of the occurrences to generate the class representative vector R_i, where i = 1, 2, …, L; L is the number of textures.
4. Repeat steps 1-3 for all T_i.
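Steps 1-3 above can be sketched as follows (a toy numpy illustration; the codebook and feature values are invented):

```python
import numpy as np

def code_indices(features, codebook):
    """Index of the nearest code vector for each feature vector."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def representative(images_features, codebook):
    """Class representative vector: mean, over the class's images, of the
    per-image histogram of code-index occurrences."""
    k = len(codebook)
    hists = [np.bincount(code_indices(f, codebook), minlength=k)
             for f in images_features]
    return np.mean(hists, axis=0)

# toy example: 2 code vectors, 3 "images" of 4 block-feature vectors each
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
imgs = [np.array([[0.1, 0.0], [0.9, 1.1], [0.0, 0.2], [1.0, 0.8]])] * 3
R = representative(imgs, codebook)
print(R)   # each image contributes the histogram [2, 2]
```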
C. Texture Classification
Given any texture image, this phase determines to which texture class the image is relevant. Images from the test set are partitioned into non-overlapping blocks of size M x M. Features are extracted using the orthonormal polynomial transform. Consulting the codebooks, code indices are generated and the corresponding input representative vector is formed. Compute the distance d_i between the class representative vector R_i and the input image representative vector IR_i for T_i. Euclidean distance is used as the similarity measure:

d_i = dist(IR_i, R_i)

Find min(d_i) to obtain the texture class.
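The decision rule min(d_i) amounts to nearest neighbor matching on the representative vectors; a minimal sketch (with our own toy numbers):

```python
import numpy as np

def classify(input_rep, class_reps):
    """Return index i of the texture class whose representative vector R_i
    is nearest (Euclidean distance) to the input representative vector."""
    d = [np.linalg.norm(input_rep - R) for R in class_reps]
    return int(np.argmin(d))

class_reps = [np.array([4.0, 0.0, 0.0]),    # R_1
              np.array([0.0, 4.0, 0.0]),    # R_2
              np.array([1.0, 1.0, 2.0])]    # R_3
IR = np.array([0.5, 3.5, 0.0])              # input image representative
print(classify(IR, class_reps))             # nearest to R_2 -> index 1
```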
V. RESULTS AND DISCUSSION
We demonstrate the performance of the proposed transform coefficients on texture image data that have been used in recent studies on rotation invariant texture classification. Since the data include samples from several rotation angles, we also present results for a more challenging setup, where the samples of just one particular rotation angle are used for training the texture classifier, which is then tested with the samples of the other rotation angles.
A. Image Data and Experimental Setup
The image data included 12 textures from the Brodatz album. The textures are presented at 6 different rotation angles (0°, 30°, 60°, 90°, 120°, and 150°). There were 16 images for each class and angle (hence 1248 images in total). Each texture class comprises the following subsets of images: 16 'original' images, 16 images rotated at 30°, 16 rotated at 60°, 16 rotated at 90°, 16 rotated at 120°, and 16 rotated at 150°. The size of each image is 128 x 128.
The texture classes considered for our study are shown in Figure 2. The texture classes are divided into two sets: Texture Set 1 contains structural textures (regular patterns) and Texture Set 2 contains stochastic textures (irregular patterns). Texture Set 1 includes {bark, brick, bubbles, raffia, straw, weave}; Texture Set 2 includes {grass, leather, pigskin, sand, water, wool}.
The statistical features of the texture classes are studied first. The mean and variance of the texture classes are computed and depicted in Figure 3 to Figure 6.
Figure 2. Sample Images of Textures
Figure 3. Mean of Structural Textures
Figure 4. Mean of Stochastic Textures
Figure 5. Variance of Structural Textures
Figure 6. Variance of Stochastic Textures
B. Contribution of Transform Coefficients
Each texture class with rotation angle 0° is taken for training; the other images are used for testing. For each texture class a codebook is generated with the training samples and a class representative vector is estimated. Figure 7 and Figure 8 show the representatives of the textures.
Figure 7. Class Representatives of Structural Textures
Figure 8. Class Representatives of Stochastic Textures
Table 1 and Table 2 present results for the challenging experimental setup where the classifier is trained with samples of just one rotation angle and tested with samples of the other rotation angles.
Texture     30°     60°     90°     120°    150°
Bark        86.6    68.75   86.6    75.0    68.75
Brick       75.0    86.6    87.5    75.0    86.6
Bubbles     93.75   93.75   100     100     100
Raffia      100     93.75   87.5    87.5    87.5
Straw       56.25   62.5    68.75   56.25   62.5
Weave       93.75   100     100     100     93.75

Table 1. Classification accuracies (%) of structural textures trained with one rotation angle (0°) and tested with the other rotation angles (columns give the testing angle)
Texture     30°     60°     90°     120°    150°
Grass       100     100     100     93.75   93.75
Leather     87.5    93.75   87.5    93.75   93.75
Pigskin     87.5    87.5    93.75   75.0    68.75
Sand        75.0    75.0    68.75   68.75   75.0
Water       100     93.75   93.75   87.5    87.5
Wool        86.6    86.6    62.5    68.75   75.0

Table 2. Classification accuracies (%) of stochastic textures trained with one rotation angle (0°) and tested with the other rotation angles (columns give the testing angle)
It is observed that among the structural textures, Bark is misclassified as Straw and a few samples as Brick. Brick is misclassified as Raffia. Straw is misclassified as Bark and Bubbles. In the case of the stochastic textures, Sand is misclassified as Pigskin, and Wool is misclassified as Pigskin and Sand. Compared to the structural textures, the performance on the stochastic textures is good. The performance of the structural textures and stochastic textures is shown in Figure 9 and Figure 10.
Figure 9. Classification Performance of Structural Textures
Figure 10. Classification Performance of Stochastic Textures
The overall performance on structural and stochastic textures is reported in Figure 11 and Figure 12. If the mean difference between the textures is small, their classification performance degrades.
Figure 11. Overall Classification Performance of Structural Textures
Figure 12. Overall Classification Performance of Stochastic Textures
We have also compared the performance of our feature extraction method with other approaches. Table 3 shows the comparative study with other texture models.

Texture model                                Recognition rate (%)
Co-occurrence matrix                         78.6
Autocorrelation method                       76.1
Laws texture measure                         82.2
Orthonormal transformed feature extraction   89.2

Table 3. Performance of various texture measures in classification
VI. CONCLUSION
An efficient way of extracting features from textures has been presented. From the orthonormal basis, new operators are generated. These operators perform well in characterizing the textures. The operator can be used for grayscale and rotation invariant texture classification. Experimental results are appreciable when the original (unrotated) image samples are used for learning and the classifier is tested on different rotation angles. Computational simplicity is another advantage, since the operator is evaluated by computing an inner product, which reduces implementation time. The efficiency can be further improved by varying the codebook size and the dimension of the feature vectors.
REFERENCES
[1] Hawkins, J. K., "Textural Properties for Pattern Recognition", in Picture Processing and Psychopictorics (B. Lipkin and A. Rosenfeld, Eds.), Academic Press, New York, 1969.
[2] Chen, C. H., Pau, L. F. and Wang, P. S. P. (Eds.), The Handbook of Pattern Recognition and Computer Vision (2nd Edition), pp. 207-248, World Scientific Publishing Co., 1998.
[3] Haralick, R. M., Shanmugam, K. and Dinstein, I. (1973), "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, pp. 610-621.
[4] Haralick, R. M. (1979), "Statistical and Structural Approaches to Texture", Proc. IEEE, 67, No. 5, 786-804.
[5] Conners, R. W. and Harlow, C. A. (1980), "Toward a Structural Textural Analyzer Based on Statistical Methods", Computer Graphics and Image Processing, 12, 224-256.
[6] Laws, K. I. (1979), "Texture Energy Measures", Proc. Image Understanding Workshop, pp. 47-51.
[7] Ade, F. (1983), "Characterization of Texture by 'Eigenfilters'", Signal Processing, 5, No. 5, 451-457.
[8] Fu, K. S. (1982), Syntactic Pattern Recognition and Applications, Prentice-Hall, New Jersey.
[9] Ojala, T., Pietikäinen, M. and Mäenpää, T. (2002), "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", IEEE Trans. Pattern Anal. Mach. Intell., 24, No. 7, 971-987.
[10] Manthalkar, R., Biswas, P. K. and Chatterji, B. N. (2003), "Rotation Invariant Texture Classification Using Even Symmetric Gabor Filters", Pattern Recognition Letters, 24, No. 12, 2061-2068.
[11] Pun, C. M. and Lee, M. C. (2003), "Log-Polar Wavelet Energy Signatures for Rotation and Scale Invariant Texture Classification", IEEE Trans. Pattern Anal. Mach. Intell., 25, No. 5, 590-603.
[12] Davis, L. S. (1981), "Polarogram: A New Tool for Image Texture Analysis", Pattern Recognition, 13(3), 219-223.
[13] Mayorga, M. A. and Ludeman, L. C. (1994), "Shift and Rotation Invariant Texture Recognition with Neural Nets", Proceedings of the IEEE International Conference on Neural Networks, pp. 4078-4083.
[14] Manthalkar, R., Biswas, P. K. and Chatterji, B. N. (2003), "Rotation Invariant Texture Classification Using Even Symmetric Gabor Filters", Pattern Recognition Letters, 24, No. 12, 2061-2068.
[15] Vo, A. P., Oraintara, S. and Nguyen, T. T. (2007), "Using Phase and Magnitude Information of the Complex Directional Filter Bank for Texture Image Retrieval", Proc. IEEE Int. Conf. on Image Processing (ICIP '07), pp. 61-64.
[16] Liu, X. and Wang, D. (2003), "Texture Classification Using Spectral Histograms", IEEE Trans. Image Processing, 12(6), 661-670.
[17] Ojala, T. and Pietikäinen, M. (1998), "Nonparametric Multichannel Texture Description with Simple Spatial Operators", Proc. 14th International Conference on Pattern Recognition, Brisbane, Australia, 1052-1056.
[18] Ojala, T., Pietikäinen, M. and Mäenpää, T. (2001), "A Generalized Local Binary Pattern Operator for Multiresolution Gray Scale and Rotation Invariant Texture Classification", Advances in Pattern Recognition, ICAPR 2001 Proceedings, Lecture Notes in Computer Science 2013, Springer, 397-406.
[19] Leung, T. and Malik, J. (2001), "Representing and Recognizing the Visual Appearance of Materials Using Three-Dimensional Textons", Int. J. Comput. Vision, 43(1), 29-44.
[20] Varma, M. and Zisserman, A. (2005), "A Statistical Approach to Texture Classification from Single Images", International Journal of Computer Vision, 62(1-2), 61-81.
[21] Ahonen, T. and Pietikäinen, M. (2008), "A Framework for Analyzing Texture Descriptors", Proc. Third International Conference on Computer Vision Theory and Applications (VISAPP 2008), Madeira, Portugal, 1:507-512.
[22] Ojansivu, V. and Heikkilä, J. (2008), "A Method for Blur and Affine Invariant Object Recognition Using Phase-Only Bispectrum", Proc. Image Analysis and Recognition (ICIAR 2008), Póvoa de Varzim, Portugal, 5112:527-536.
[23] Krishnamoorthi, R. (1998), A Unified Framework with Orthogonal Polynomials for Edge Detection, Texture Analysis and Compression in Color Images, Ph.D. Thesis.
[24] Krishnamoorthi, R. and Kannan, N. (2009), "A New Integer Image Coding Technique Based on Orthogonal Polynomials", Image and Vision Computing, Vol. 27(8), 999-1006.
Anandhakumar P received the Ph.D. degree in CSE from Anna University in 2006. He is working as an Assistant Professor in the Dept. of IT, MIT Campus, Anna University. His research areas include image processing and networks.
Suguna R received the M.Tech degree in CSE from IIT Madras, Chennai in 2004. She is currently pursuing the Ph.D. degree in the Dept. of IT, MIT Campus, Anna University.