(IJCSIS) International Journal of Computer Science and Information Security, Vol. 10, No. 3, March 2012

Image Classification in Transform Domain

Dr. H. B. Kekre
Professor, Computer Engineering, Mukesh Patel School of Technology Management and Engineering, NMIMS University, Vileparle (W), Mumbai 400-056, India. hbkekre@yahoo.com

Dr. Tanuja K. Sarode
Associate Professor, Computer Engineering, Thadomal Shahani Engineering College, Bandra (W), Mumbai 400-050, India. tanuja_0123@yahoo.com

Jagruti K. Save
Ph.D. Scholar, MPSTME, NMIMS University; Associate Professor, Fr. C. Rodrigues College of Engineering, Bandra (W), Mumbai 400-050, India. jagrutik_save@yahoo.com

Abstract— Organizing images into meaningful categories using low level or high level features is an important task in image databases. Although image classification has been studied for many years, it is still a challenging problem within multimedia and computer vision. In this paper a generic image classification approach using different transforms is proposed. The two main steps in image classification are feature extraction and the classification algorithm. This paper proposes to generate the feature vector from an image transform. The paper also investigates the effectiveness of different transforms (Discrete Fourier Transform, Discrete Cosine Transform, Discrete Sine Transform, Hartley Transform and Walsh Transform) in the classification task. The size of the feature vector is also varied to see its impact on the result. Classification is done using a nearest neighbor classifier. Euclidean and Manhattan distances are used as the similarity measure. Images from the Wang database are used to carry out the experiments. The experimental results and a detailed analysis are presented.

Keywords- Image classification; Image Transform; Discrete Fourier Transform (DFT); Discrete Sine Transform (DST); Discrete Cosine Transform (DCT); Hartley Transform; Walsh Transform; Nearest neighbor classifier.

I. INTRODUCTION

Though image classification is usually not a very difficult task for humans, it has proven to be an extremely complex task for machines. In the existing literature, most frameworks for image classification include two main steps: feature extraction and a classification algorithm. In the first step, some discriminative features are extracted to represent the image content, such as color [1][2], shape [3] and texture [4]. There has been a lot of research work done in the area of feature extraction. A saliency map is used to extract features to classify both the query image and database images into attentive and non-attentive classes [5]. The image texture feature is calculated based on the gray-level co-occurrence matrix (GLCM) [6]. The color co-occurrence method, in which both the color and texture of an image are taken into account, is used to generate the features [7]. Transforms have been applied to the gray scale image to generate the feature vector [8]. In the classification algorithm step, various multi-class classifiers such as the k nearest neighbor classifier [9], Support Vector Machine (SVM) [10][11], Artificial Neural Network [12][13] and Genetic Algorithm [14] are used.

II. IMAGE TRANSFORMS

A. Discrete Fourier Transform (DFT)

The discrete Fourier transform (DFT) is one of the most important transforms used in digital signal processing and image processing [15]. The two dimensional discrete Fourier transform for an image f(x, y) of size N by N is given by equation 1.

F(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \, e^{-j 2\pi (ux/N + vy/N)},  for 0 ≤ u, v ≤ N-1    (1)
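The double summation in equation 1 is separable, so it can be evaluated as two matrix products with the 1-D DFT kernel. A minimal NumPy sketch (the paper's experiments use MATLAB; this Python version is only illustrative):

```python
import numpy as np

# Illustrative sketch of equation 1: the 2-D DFT of an N x N image,
# built from the kernel matrix W[u, x] = exp(-j*2*pi*u*x/N).
# Separability gives F = W f W^T.
def dft2(f):
    N = f.shape[0]
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # 1-D DFT kernel
    return W @ f @ W.T                            # transform columns, then rows

f = np.random.rand(8, 8)
F = dft2(f)
# Matches the library FFT, which implements the same definition
assert np.allclose(F, np.fft.fft2(f))
```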

B. Discrete Cosine Transform (DCT)

The discrete cosine transform (DCT), introduced by Ahmed, Natarajan and Rao [16], has been used in many applications of digital signal processing, data compression, information hiding and content based image retrieval (CBIR) systems [17]. The DCT is closely related to the discrete Fourier transform. It is a separable linear transformation; that is, the two-dimensional transform is equivalent to a one-dimensional DCT performed along a single dimension followed by a one-dimensional DCT in the other dimension. The two dimensional DCT can be written in terms of pixel values f(x, y) for x, y = 0, 1, ..., N-1 and the frequency-domain transform coefficients F(u, v) as shown in equation 2.

F(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\!\left[\frac{(2x+1)u\pi}{2N}\right] \cos\!\left[\frac{(2y+1)v\pi}{2N}\right],  for 0 ≤ u, v ≤ N-1    (2)

where

http://sites.google.com/site/ijcsis/ ISSN 1947-5500


\alpha(u) = \sqrt{1/N} for u = 0,  \alpha(u) = \sqrt{2/N} for 1 ≤ u ≤ N-1
\alpha(v) = \sqrt{1/N} for v = 0,  \alpha(v) = \sqrt{2/N} for 1 ≤ v ≤ N-1
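Because the DCT is separable, the 2-D transform of equation 2 also reduces to two matrix products. A NumPy sketch using the orthonormal DCT-II matrix defined by the α(u), α(v) above:

```python
import numpy as np

# Illustrative sketch of equation 2: build the orthonormal DCT-II matrix
# with alpha(0) = sqrt(1/N) and alpha(u) = sqrt(2/N) otherwise, then use
# separability: F = C f C^T.
def dct_matrix(N):
    u = np.arange(N)[:, None]
    x = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)                    # alpha(0) row
    return C

def dct2(f):
    C = dct_matrix(f.shape[0])
    return C @ f @ C.T

f = np.random.rand(8, 8)
F = dct2(f)
C = dct_matrix(8)
assert np.allclose(C @ C.T, np.eye(8))            # rows are orthonormal
```

With this normalization the transform is invertible as f = C^T F C, which is why the DCT coefficients can serve as a lossless re-description of the plane before row means are taken.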

C. Discrete Sine Transform (DST)

The discrete sine transform was introduced by A. K. Jain in 1974. The two dimensional sine transform is defined by equation 3.

F(u,v) = \frac{2}{N+1} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \sin\!\left[\frac{(x+1)(u+1)\pi}{N+1}\right] \sin\!\left[\frac{(y+1)(v+1)\pi}{N+1}\right],  for 0 ≤ u, v ≤ N-1    (3)

The discrete sine transform has been widely used in signal and image processing [18][19].
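A NumPy sketch of equation 3. Note one deliberate substitution: the paper's scale factor is 2/(N+1), while the sketch uses the orthonormal sqrt(2/(N+1)) variant so that the DST matrix is symmetric and its own inverse, which makes the result easy to check:

```python
import numpy as np

# Illustrative sketch of equation 3. The paper scales by 2/(N+1); here the
# orthonormal factor sqrt(2/(N+1)) is used instead, making the DST-I matrix
# S symmetric and self-inverse.
def dst_matrix(N):
    j = np.arange(1, N + 1)   # (x+1) and (u+1) indices from equation 3
    return np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(j, j) / (N + 1))

def dst2(f):
    S = dst_matrix(f.shape[0])
    return S @ f @ S.T

S = dst_matrix(8)
f = np.random.rand(8, 8)
assert np.allclose(S @ S, np.eye(8))              # S is its own inverse
assert np.allclose(dst2(dst2(f)), f)              # transform round-trips
```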

D. Discrete Hartley Transform (DHT)

The Hartley transform [20] is an integral transform closely related to the Fourier transform. It has some advantages over the Fourier transform in the analysis of real signals as it avoids the use of complex arithmetic. A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields [21]. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform. The discrete two dimensional Hartley transform for an image of size N x N is defined as in equation 4.

F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \, cas\!\left(\frac{2\pi(ux+vy)}{N}\right),  for 0 ≤ u, v ≤ N-1,  where cas θ = cos θ + sin θ    (4)
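A NumPy sketch of equation 4, evaluated directly from the definition. Unlike the DFT and DCT above, the kernel cas(2π(ux+vy)/N) does not factor into a product of 1-D kernels, so the sketch loops over (u, v); the result is checked against the standard identity DHT = Re(DFT) - Im(DFT):

```python
import numpy as np

# Illustrative sketch of equation 4 with cas(t) = cos(t) + sin(t).
# The 2-D cas kernel is not separable, so the definition is evaluated
# as a double loop over the output coefficients.
def dht2(f):
    N = f.shape[0]
    n = np.arange(N)
    F = np.empty((N, N))
    for u in range(N):
        for v in range(N):
            theta = 2 * np.pi * (u * n[:, None] + v * n[None, :]) / N
            F[u, v] = np.sum(f * (np.cos(theta) + np.sin(theta)))
    return F / N                                  # 1/N factor from equation 4

f = np.random.rand(8, 8)
F = dht2(f)
G = np.fft.fft2(f)
assert np.allclose(F, (G.real - G.imag) / 8)      # DHT = Re(DFT) - Im(DFT), scaled
```

The final assertion illustrates the text's point: the Hartley transform carries the same information as the DFT while producing real outputs from real inputs.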

E. Discrete Walsh Transform (DWT)

The Walsh transform [22] has become quite useful in the applications of image processing [23][24]. Walsh functions were established as a set of normalized orthogonal functions, analogous to sine and cosine functions, but having uniform values of ±1 throughout their segments. The Walsh transform matrix is defined as a set of N rows, denoted Wj, for j = 0, 1, ..., N-1, which have the following properties:

• Wj takes on the values +1 and -1.
• Wj[0] = 1 for all j.
• Wj × [Wk]^t = 0 for j ≠ k, and Wj × [Wk]^t = N for j = k.
• Wj has exactly j zero crossings, for j = 0, 1, ..., N-1.

Each row Wj is even (when j is even) and odd (when j is odd) with respect to its midpoint.
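The four properties above can be verified numerically. A sketch assuming N is a power of two; the Walsh matrix is built here from the Sylvester Hadamard construction with rows reordered by zero crossings (sequency), a construction added for illustration rather than taken from the paper:

```python
import numpy as np

# Build a Walsh matrix by generating the Sylvester Hadamard matrix and
# sorting its rows by number of sign changes (sequency order), then
# check each property from the list above.
def walsh_matrix(N):
    H = np.array([[1]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])           # Sylvester doubling step
    crossings = np.count_nonzero(np.diff(H, axis=1), axis=1)
    return H[np.argsort(crossings)]               # sequency (Walsh) order

W = walsh_matrix(8)
assert np.all(np.abs(W) == 1)                     # values are +1 / -1
assert np.all(W[:, 0] == 1)                       # Wj[0] = 1 for all j
assert np.allclose(W @ W.T, 8 * np.eye(8))        # Wj x Wk^t = 0 (j != k), N (j = k)
zc = np.count_nonzero(np.diff(W, axis=1), axis=1)
assert np.all(zc == np.arange(8))                 # row j has exactly j zero crossings
```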

III. ROW MEAN VECTOR

The row mean vector [25][26] is the set of averages of the intensity values of the respective rows, as shown in equation 5.

Row mean vector = [Avg(Row 1), Avg(Row 2), ..., Avg(Row N)]^T    (5)
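Equation 5 is a one-line operation on a plane stored as a 2-D array; a tiny sketch:

```python
import numpy as np

# Equation 5 in code: the row mean vector is the per-row average of a plane.
plane = np.arange(12.0).reshape(3, 4)   # toy 3 x 4 "image plane"
row_mean = plane.mean(axis=1)           # [Avg(Row 1), Avg(Row 2), Avg(Row 3)]
assert np.allclose(row_mean, [1.5, 5.5, 9.5])
```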

IV. PROPOSED ALGORITHM

The image database is divided into a training set and a testing set. The feature vector of each training/testing image is calculated. Given an image to be classified from the testing set, a nearest neighbor classifier compares it against the images of the training set, in order to identify the most similar image and consequently the correct class. Euclidean and Manhattan distances are used as the similarity measure.

A. Generation of feature vector

1. For each color image f(x,y), generate its three color (R, G, and B) planes fR(x,y), fG(x,y) and fB(x,y) respectively.
2. Apply transform T (DCT, DFT, DST, Hartley, Walsh) on the columns of the three image planes as given in equations 6 to 8 to get the column transformed images.

[T] × [fR(x,y)] = [FR(x,v)]    (6)
[T] × [fG(x,y)] = [FG(x,v)]    (7)
[T] × [fB(x,y)] = [FB(x,v)]    (8)

3. Calculate the row mean vector of each column transformed image.
4. Make a feature vector of size 75 by fusing the row mean vectors of the R, G, and B planes: take the first 25 values from the R plane, followed by the first 25 values from the G plane, followed by the first 25 values from the B plane.



5. Do the above process for training images to generate the feature database.

Other feature vector sizes, namely 150 (50R + 50G + 50B), 225 (75R + 75G + 75B), 300 (100R + 100G + 100B), 450 (150R + 150G + 150B), and 768 (256R + 256G + 256B), are also considered when generating feature vectors.
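Steps 1-5 can be sketched as follows, assuming the image is stored as an H x W x 3 RGB array. The DCT matrix stands in for the generic transform T (any of the five transforms could be substituted), and the helper names are illustrative, not the paper's:

```python
import numpy as np

# Sketch of the feature-vector generation steps: column-transform each
# color plane, take its row mean vector, and keep the first k values
# from each plane.
def dct_matrix(N):
    u = np.arange(N)[:, None]
    x = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def feature_vector(img, k=25):
    """Column-transform each plane, take its row mean, keep the first k values."""
    T = dct_matrix(img.shape[0])
    parts = []
    for p in range(3):                       # R, G, B planes in turn
        col_transformed = T @ img[:, :, p]   # equations 6-8: T applied to columns
        parts.append(col_transformed.mean(axis=1)[:k])
    return np.concatenate(parts)             # length 3k, e.g. 75 for k = 25

img = np.random.rand(256, 256, 3)
fv = feature_vector(img)                      # size-75 feature vector
assert fv.shape == (75,)
```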

B. Classification

1. In this phase, for the given testing images, their feature vectors are generated.
2. The Euclidean distance and Manhattan distance are calculated between each testing image feature vector and each training image feature vector.
3. The minimum distance indicates the most similar training image for that testing image. The given testing image is then assigned to the corresponding class.

We have also considered another training set where each feature vector is the average of the feature vectors of all training images of a particular class.
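The classification phase above amounts to a 1-nearest-neighbour search with the two distance measures. A sketch on synthetic data (the training vectors below only illustrate the mechanics; they are not features from the Wang database):

```python
import numpy as np

# Sketch of the classification steps: find the training feature vector at
# minimum Euclidean or Manhattan distance and return its class label.
def nearest_class(test_fv, train_fvs, train_labels, metric="euclidean"):
    if metric == "euclidean":
        d = np.sqrt(((train_fvs - test_fv) ** 2).sum(axis=1))
    else:                                     # Manhattan (city-block) distance
        d = np.abs(train_fvs - test_fv).sum(axis=1)
    return train_labels[np.argmin(d)]         # class of the most similar image

rng = np.random.default_rng(0)
train_fvs = rng.random((40, 75))              # 8 classes x 5 training images
train_labels = np.repeat(np.arange(8), 5)
test_fv = train_fvs[12] + 0.001               # close to a class-2 training image
assert nearest_class(test_fv, train_fvs, train_labels) == 2
assert nearest_class(test_fv, train_fvs, train_labels, "manhattan") == 2
```

The averaged training set described above would simply replace `train_fvs` with 8 per-class mean vectors.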

V. RESULTS

The implementation of the proposed technique is done in MATLAB 7.0 using a computer with an Intel Core 2 Duo Processor T8100 (2.1 GHz) and 2 GB RAM. The proposed technique is tested on the Wang image database. This database was created by the group of Professor Wang at Pennsylvania State University [27]. The experiment is carried out on 8 classes of the Wang database. For testing, 30 images from each class were used, and for training, 5 images from each class were used. Thus the total number of testing images was 240 and the total number of training images was 40; the training set contains 40 feature vectors. The proposed method is also implemented using another training set that contains 8 feature vectors, where each feature vector is the average of the feature vectors of all training images of the same class. Fig. 1 shows the sample database of training images and Fig. 2 shows the sample database of testing images.

Figure 1. Sample database of training images
Figure 2. Sample database of testing images

Each image is resized to 256 x 256. Table I and Table II show the number of correctly classified images (out of 240) for different transforms over different vector sizes for the two training sets. The correctness of classification is checked visually. With the average training set, the Walsh transform gives better performance compared to the other transforms, with Manhattan distance as the similarity measure. If Euclidean distance is used, then a feature vector size of 768 gives marginally better performance for all transforms. Considering the results shown in Table I, the best results are obtained with Manhattan distance as the similarity measure; DST, Walsh and DFT gave better performance, in that order. The individual class classification performance using these two similarity measures is shown in Table III to Table VI. For this purpose the vector size is selected based on the performance. For the Euclidean distance criterion, the number of correctly classified images in each class for different transforms over the two training sets is shown in Table III and Table IV, with feature vector size 768. If the Manhattan distance criterion is used, then there is a variation in the performance of the transforms across feature vector sizes; in most cases a vector size of 225 gives better performance. Using this vector size, the number of correctly classified images in each class for different transforms over the two training sets is shown in Table V and Table VI.

