
Content Based Image Retrieval using Gabor Filters

and Color Coherence Vector


Jyotsna Singh∗ , Ahsaas Bajaj† , Anirudh Mittal‡ , Ansh Khanna§ and Rishabh Karwayun¶
Department of Electronics and Communications Engineering
Netaji Subhas Institute of Technology
New Delhi, India
Email: ∗ jsingh.nsit@gmail.com, † bajajahsaas@gmail.com,
‡ anirudh7795@gmail.com, § anshkhanna720@gmail.com, ¶ rkarwayun95@gmail.com

Abstract—Images have become a standard for information consumption and storage, far replacing text in domains such as museums, news stations, medicine and remote sensing. Such images constitute the majority of data consumed on the Internet today, and their volume increases constantly. Most of these images are unlabeled and devoid of any keywords. The swift and continuous increase in the use of images, together with their unlabeled nature, demands efficient and accurate content-based image retrieval systems. A considerable number of systems have been designed for this task that derive features from a query image and return the most similar images. This paper presents one such efficient and accurate system, which uses the color and texture information of images to retrieve the best possible results. The proposed method uses the Color Coherence Vector (CCV) for color feature extraction and Gabor filters for texture features. The results were significantly better than those of several popular studies.

Index Terms—Color coherence vector, Image retrieval, Gabor filter, Legendre moment

I. INTRODUCTION

Content Based Image Retrieval (CBIR), also referred to as Content-Based Visual Information Retrieval (CBVIR) and Query By Image Content (QBIC), applies techniques of computer vision to the problem of searching for relevant digital images in large datasets, also known as the image-retrieval problem. 'Content-based' means that the search analyzes the contents of the image rather than its metadata (e.g. tags, keywords, or any other description associated with the image). The term content covers any kind of information that can be derived directly from the image itself, such as colors, shapes and textures. Even though it is possible to annotate images manually by providing metadata for each image in a database, such a process is time consuming (especially for large datasets), and the metadata may not represent the image accurately. Because of these limitations, as well as the large range of possible uses for efficient image retrieval, interest in CBIR has grown in recent times.

A CBIR method [1] is centered around extracting features by which different images in a dataset can be compared and relevant images retrieved. A typical CBIR system extracts the visual contents of the images in the database into a multi-dimensional feature vector for each image. The feature vectors of all the images in the database are collectively referred to as a feature database. To retrieve relevant images, the user provides one or more example images or sketched figures. The CBIR system transforms these inputs into its feature-vector representation. The distances or similarities between the feature vectors of the query example and those of the images in the database are then computed, and retrieval is performed on that basis. The search results can then be sorted by their distance or similarity to the query example. Many similarity (or image-distance) measures have been developed in the literature.

This paper mainly utilizes the two features most commonly used in CBIR systems: texture and color [2], [3]. Since neither image size nor orientation affects the color composition of an image, color is among the most widely employed features for image classification. Swain et al. [4] and Viet Tran [5] proposed the color histogram method for representing the color feature. However, color histograms do not include spatial information and hence are not robust to significant spatial changes [6]. Pass et al. [7] classified each image pixel as either coherent or incoherent, based on the similarity in color between a pixel and its neighbors, and represented this classification by a split histogram called the Color Coherence Vector (CCV), which has proved highly effective for color feature extraction. While color extraction is simple and fast to implement, it alone cannot capture the complete information about an image. Different methods of texture representation have been studied in the fields of pattern recognition and computer vision. The Gabor and wavelet transforms, among other multi-resolution filtering techniques, characterize texture by the statistical distribution of image intensity. Manjunath and Ma [8] demonstrated that a multi-filtering approach using a Gabor filter bank yields the most accurate results. Gabor wavelets inherently provide multi-resolution analysis of both spatial frequency and orientation, and they can be conveniently designed to extract regions of varying texture from an image efficiently. It has also been shown that image analysis with Gabor filters is in synchronization with the human visual system, providing superior analytical resolution compared to other methods of texture extraction.

In this paper, the authors propose a CBIR system based on an

978-1-5386-6678-4/18/$31.00 2018
c IEEE 290
efficient combination of CCV for color feature extraction and Gabor filters for texture feature extraction. Section II discusses the proposed technique in brief. Section III presents a detailed description of the feature extraction process: texture feature vector extraction in III-A, image segmentation in III-B, color moment extraction using Legendre polynomials in III-C, and color coherence vectors in III-D. Simulation results are given in Section IV. Lastly, Section V includes concluding remarks based on the results obtained from the simulations.

II. PROPOSED TECHNIQUES

Huang et al. [9] proposed a CBIR technique involving combined color and texture features. Moments (mean, variance and skewness) of the Hue, Saturation and Value (HSV) components of an image are used as its color features, and Gabor texture descriptors are utilized as texture features. In the work proposed by [10], an image retrieval system is designed using a combined feature set with a color auto-correlogram as the color feature and Gabor descriptors as the texture feature; a wavelet transform is added to the feature set to represent shape and improve results.

Extraction of texture features using Gabor filters requires appropriate design of a filter bank tuned to several orientations and spatial frequencies so as to cover the spatial-frequency space [11]. Each wavelet of the filter bank can be tuned to a specific frequency and orientation. This multi-channel filtering approach decomposes the image into a number of filtered images, and features are extracted from these filtered images. In this paper, the mean (μ) and standard deviation (σ) of the filtered images are computed and subsequently used to obtain the texture feature vectors.

Several different algorithms were tried for analyzing the color space in this work. In the initial stage of the proposed technique, higher-order Legendre moments are computed to generate the color feature vector of the image. In image processing, computer vision and related fields, image moments [12] are widely chosen for their attractive properties as descriptors of pixel intensities. Color information is gathered from the image (in the HSV color space) and stored in the form of a color feature vector, which constitutes the feature database. The same technique was repeated after separating the foreground objects from the background of the images; to achieve this, a color-based segmentation technique using the K-means clustering algorithm and the L*a*b color space was employed. Once the segmentation of an image is complete, higher-order Legendre moments are extracted from the region of interest to represent its color features.

An extensive study of existing algorithms revealed that the positional attributes of colors were ignored while analyzing the color space: sparse and aggregated color distributions produced similar feature vectors because pixels of the same color were simply aggregated without taking their relative positions into consideration. CCV is superior to techniques such as color histograms in that it prevents coherent pixels in one image from being matched to incoherent pixels in another, thus allowing fine distinctions not provided by most techniques. In the next stage of the algorithm, Color Coherence Vectors are used for color feature extraction over the image dataset. The CCV technique provides a more accurate analysis of the color space of an image than the techniques used earlier in the study. Treating pixels as parts of connected components not only segments pixels of similar color in a discretized color space but also does so in a computationally efficient manner.

After computing the entire feature database, the distances between the color and texture features of the query image and those in the database are calculated using the Manhattan similarity measure. The distances obtained are combined by applying suitable weights, and finally the top-k images are retrieved after sorting. However, color and texture features may not be equally important for the database under consideration, so when distances from the query image are computed, different weights must be assigned to the respective color and texture distances. These weights may vary from 0.1 to 0.9, and their sum should be 1. To achieve optimal results, a grid search is used: a linear scan over the weights is performed, keeping their sum equal to 1, and the best weights found are used for further computation.

III. FEATURE EXTRACTION

In CBIR, as the name suggests, the features of the image are to be retrieved from the content itself. In this paper, only the color and texture features of the images are considered.

A. Gabor Filters for the Texture Feature Space

Gabor filters are linear filters used for texture analysis. They can analyse the presence of specific frequencies in specified directions within a localized region around the region of interest. Fundamentally, Gabor filters are a group of wavelets, each capturing energy at a specific frequency and in a specific direction [13]. Expanding a signal over this basis provides a localized frequency description, and the energy distributions obtained can be used to extract texture features. Furthermore, Gabor filters provide optimal joint resolution in the time and frequency domains compared to the Fourier transform. Mathematically, in the spatial domain a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave:

ψ(x, y) = (f²/(πγη)) exp(−(f²/γ²)x′² − (f²/η²)y′²) exp(j2πfx′)

x′ = x cos θ + y sin θ ,  y′ = −x sin θ + y cos θ

where f and θ indicate the scale and orientation of the Gabor wavelet respectively, γ is the sharpness along the major axis, and η is the sharpness along the minor axis. The multi-resolution and multi-orientation analysis can be defined as f_u = f_max/(√2)^u and θ_v = vπ/V, where f_max is the maximum central frequency. The central frequencies of the different wavelets should be near the characteristic texture frequencies of the corresponding regions. A total of six orientations (M) and four scales (N) have been chosen, generating 24 filters in the filter bank. The Gabor filters are applied on the image

8th International Advance Computing Conference (IACC). 291


with different scales and orientations to obtain an array of magnitudes:

E(m, n) = Σ_x Σ_y |G_mn(x, y)|    (1)

where G_mn(x, y) is the filtered image for m = 0, 1, ..., M−1 and n = 0, 1, ..., N−1. The magnitude E(m, n) represents the energy content of the image at the specific scale-orientation pair (m, n). For each of the 24 filtered images, the mean (μ_mn) and standard deviation (σ_mn) are calculated to generate the texture feature vector:

μ_mn = E(m, n)/(P·Q) ,  σ_mn = √( Σ_x Σ_y (|G_mn(x, y)| − μ_mn)² / (P·Q) )    (2)

where P and Q are the image dimensions.

Fig. 1. Sample Images Before and After Segmentation
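As an illustration, the filter-bank computation of equations (1) and (2) can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation: the kernel size, f_max = 0.25 and the use of circular (FFT-based) convolution are assumptions made here for brevity.

```python
import numpy as np

def gabor_kernel(f, theta, gamma=1.0, eta=1.0, size=21):
    """Complex spatial-domain Gabor kernel per the paper's formula."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = (f**2 / (np.pi * gamma * eta)) * np.exp(
        -(f**2 / gamma**2) * xp**2 - (f**2 / eta**2) * yp**2)
    return envelope * np.exp(2j * np.pi * f * xp)

def texture_features(img, M=6, N=4, f_max=0.25):
    """48-dim texture vector: mean and std of |G_mn| for each of M*N filters."""
    feats = []
    F = np.fft.fft2(img)
    for n in range(N):                      # scales: f_u = f_max / (sqrt(2))**u
        f = f_max / (np.sqrt(2) ** n)
        for m in range(M):                  # orientations: theta_v = v*pi/V
            theta = m * np.pi / M
            K = np.fft.fft2(gabor_kernel(f, theta), s=img.shape)
            mag = np.abs(np.fft.ifft2(F * K))          # |G_mn(x, y)|
            feats.extend([mag.mean(), mag.std()])      # mu_mn, sigma_mn of eq. (2)
    return np.array(feats)

img = np.random.rand(64, 64)
v = texture_features(img)
print(v.shape)   # (48,)
```

With M = 6 orientations and N = 4 scales this yields the 48 texture features per image used later in Section IV.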

B. Image Segmentation

For image-segment-based classification, the image is segmented into homogeneous areas, and image features are then extracted according to the specific requirements. To separate the foreground objects from the background, a color-based segmentation technique using the K-means clustering algorithm and the L*a*b color space was employed. The K-means clustering algorithm [14] is an unsupervised machine learning technique generally used for unlabeled data. Its aim is to find structure in the data by dividing it into K distinct groups such that the observations within a cluster are similar; each cluster is represented by its centroid. The L*a*b color space describes all perceivable colors in three dimensions: L for lightness, and a and b for the color components green-red and blue-yellow. All of the color information is stored in a and b. The following steps separate the homogeneous foreground region from the background:

1) Read the image and convert it to the L*a*b color space. This helps in better quantifying the visual differences; the color information is contained in the a and b channels.
2) The K-means algorithm is used to segment the pixels into 2 clusters, foreground and background, using the Euclidean distance measure. For every pixel it returns an index corresponding to its cluster, and each pixel is labeled with this index.
3) The indices obtained are reshaped to form an image of only black and white pixels, corresponding to the original image.
4) The pixel corresponding to the index value at the top-left corner of the image is considered background, and its value is set to 0. This separates the background from the foreground.

Once the segmentation of an image is complete, Legendre moments are computed on the segmented image (region of interest) to extract the higher-order color features.

C. Moments for the Color Feature Space

Color moments are most commonly used to extract and represent the color feature of an image. For each channel of the image, the first-order and second-order moments are computed to generate the color feature vector:

E_{r,i} = (1/N) Σ_{j=1..N} I_ij ,  σ_{r,i} = √( (1/N) Σ_{j=1..N} (I_ij − E_{r,i})² )    (3)

where I_ij is the value of the j-th pixel in the i-th color channel and N is the number of pixels. The number of features for each image is then 6 (mean and variance for each of the 3 channels). When these color features and the Gabor texture features were combined to form the feature database for image retrieval, precision was quite low. This led to the conclusion that computing only the mean and variance for color features may not be enough: higher-order color information must also be captured from the image. The computation of color moments by mean and variance can be extended by calculating Legendre moments, which can contain higher-order information. Legendre moments have many advantages over basic moments (mean, variance, etc.) [15]: their values are invariant to geometric transformations, and they are also helpful for identifying objects with unique shapes. Legendre moments are determined from the Legendre polynomials, which satisfy the following recurrence relation of order k:

LP_k(m) = [ (2k − 1) m LP_{k−1}(m) − (k − 1) LP_{k−2}(m) ] / k    (4)

where LP_0(m) = 1, LP_1(m) = m and k > 1. The polynomial LP_l(n) is defined similarly. Using the Legendre polynomials LP_k(m), LP_l(n) and the image intensity function I(m, n), the two-dimensional Legendre moments of order (k + l) are defined in [16]. Normalizing the equation in discrete form gives the Legendre moments as:

LM_kl = λ_kl Σ_{i=0..N−1} Σ_{j=0..N−1} LP_k(m_i) LP_l(n_j) I(i, j)    (5)

where λ_kl = (2k + 1)(2l + 1)/N² is the normalizing constant, and m_i and n_j lie in the range [−1, 1] and are known as the normalized pixel coordinates: m_i = 2i/(N − 1) − 1, n_j = 2j/(N − 1) − 1

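The foreground/background separation of Section III-B can be sketched with a plain K-means over the two chroma channels. This assumes the RGB-to-L*a*b conversion has already been done (e.g. by an image library); the farthest-point initialization and the synthetic test image are illustrative choices, not from the paper.

```python
import numpy as np

def kmeans_segment(ab, k=2, iters=20):
    """Cluster pixels of an (H, W, 2) a*b chroma image into k groups with
    K-means (Euclidean distance), returning an (H, W) label map."""
    h, w, _ = ab.shape
    pts = ab.reshape(-1, 2).astype(float)
    # farthest-point initialization: deterministic, avoids duplicate centers
    centers = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                 # recompute centroids
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)

def foreground_mask(labels):
    """Step 4: the cluster of the top-left pixel is taken as background."""
    return (labels != labels[0, 0]).astype(np.uint8)

# synthetic a*b image: uniform background with a distinct 20x20 central blob
ab = np.zeros((40, 40, 2))
ab[10:30, 10:30] = 5.0
mask = foreground_mask(kmeans_segment(ab))
print(mask.sum())   # 400 foreground pixels
```

The resulting binary mask is what the Legendre moments are then restricted to when extracting color features from the region of interest.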


D. Color Coherence Vector (CCV)

Most color extraction techniques involve some sort of aggregation, which makes it possible to generate similar color features for unrelated images. For example, color histograms give the count of pixels of each color within a fixed list of color ranges and do not differentiate between sparse and contiguous color distributions. CCV overcomes this drawback by taking into account the spatial distribution of colors in an image and the coherence of its pixels.

We begin by discretizing the color space into N color buckets, each indicating a specific color, so that the entire color space of the image consists of only N distinct colors. In the next step, the coherency of a pixel is defined, with the aim of classifying each pixel as a coherent or incoherent member of the color bucket it belongs to. A pixel is coherent if it belongs to a large group of pixels of the same color; otherwise it is incoherent. We utilize the notion of connected components to determine pixel groups within an image. Formally, a connected component C is a maximal set of pixels such that for any two pixels p, p′ ∈ C there is a path in C between p and p′. A sequence of pixels p, p₁, p₂, ..., pₙ, p′ is a path in C if each pixel pᵢ is in C and any two sequential pixels pᵢ, pᵢ₊₁ are adjacent; two pixels are adjacent if one is among the eight closest neighbors of the other. The time complexity of computing connected components varies linearly with the number of pixels in the image.

After this computation, each pixel belongs to exactly one connected component, and each pixel can be classified as coherent or incoherent based on the size of that component: all pixels whose connected component exceeds a fixed size threshold τ are coherent pixels. Therefore, for a given color bucket, some of the pixels of that color will be coherent while others will be incoherent. For the j-th discretized color, let αⱼ be the number of coherent pixels and βⱼ the count of incoherent ones. The pair (αⱼ, βⱼ), computed for each color, is called the coherence pair for the j-th color. The Color Coherence Vector (CCV) of an image consists of all such coherence pairs, ((α₁, β₁), ..., (α_N, β_N)), each pair representing one color bucket.

Comparison of CCVs: Consider two images P and P′ with CCVs C_P and C_P′ respectively. Let αⱼ be the number of coherent pixels in bucket j for image P, and similarly α′ⱼ for image P′. In the same manner, let the numbers of incoherent pixels be βⱼ and β′ⱼ for P and P′. Their CCVs can then be defined as:

C_P = ((α₁, β₁), (α₂, β₂), ..., (α_N, β_N)) and C_P′ = ((α′₁, β′₁), (α′₂, β′₂), ..., (α′_N, β′_N))    (6)

The total number of coherent and incoherent pixels in a bucket j can be the same for P and P′, that is αⱼ + βⱼ = α′ⱼ + β′ⱼ, even though these pixels are entirely coherent in P and entirely incoherent in P′. So applying a naive absolute distance measure between P and P′, as in [7], might lead the algorithm to suggest high similarity between two images that are extremely different in their coherent-incoherent structure. This drawback is rectified by taking the coherent and incoherent details of a bucket into account separately [7], i.e.

Δ = Σ_{j=1..N} ( |αⱼ − α′ⱼ| + |βⱼ − β′ⱼ| )

This distance measure provides a more accurate illustration of the difference between two images compared using the CCV technique.

Fig. 2. Block Diagram of Experimental Setup and Testing

IV. EXPERIMENTAL SETUP AND RESULTS

Using the texture extraction of Section III-A, a texture feature database is created for all images in the dataset. There are 24 filtered images for each original image in the dataset, which contains 1000 images divided into 10 classes. Since the mean and variance of all the filtered images are calculated, the texture feature database takes the form of a matrix with 1000 rows and 48 columns. For color features, Legendre moments up to second order are considered, as the information becomes less relevant when the order is increased. These moments are computed for each channel of the image, and the feature vector is the concatenation of the moments of all three channels; this forms a color feature database of 1000 rows and 18 columns. To test and validate the techniques discussed in this paper, the database by James Z. Wang et al. [17], [18] is used. It contains 1,000 pictures divided among ten different classes with 100 images per class: African, sea, building, bus, dinosaur, elephant, flower, horse, mountain, and food. Refer to the flowchart in Fig. 2 for the setup. To evaluate the performance of the proposed technique, the following steps are performed:

• For each image in the dataset, color features and texture features are computed and stored in the database.
• An image is passed as the query, and its color and texture features are computed.
• The color and texture features of the input image are then compared with all the vectors in the feature database and checked for similarity using a distance factor. Images are then sorted in increasing order of distance and the top k images are retrieved. In this paper, the Manhattan distance metric [19] is used for computing the similarity distance.

This section also presents the simulation results to analyze the performance of the proposed work. Gabor Filter is used for



Fig. 3. Top 10 images retrieved from the database for a Horse query image using the proposed CCV and Gabor method.
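The coherence-pair computation of Section III-D can be sketched as follows. This is a minimal sketch, assuming the image has already been discretized into color-bucket indices; `n_colors`, `tau` and the flood-fill component labeling are illustrative choices, not from the paper.

```python
import numpy as np

def ccv(img, n_colors=8, tau=25):
    """Color Coherence Vector of a 2D array of color-bucket indices:
    an (n_colors, 2) array of (alpha_j, beta_j) coherence pairs."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)      # connected-component ids
    sizes = []
    for i in range(h):                        # flood fill over 8-neighborhoods
        for j in range(w):
            if labels[i, j] >= 0:
                continue
            comp, stack, labels[i, j] = len(sizes), [(i, j)], len(sizes)
            count = 0
            while stack:
                y, x = stack.pop()
                count += 1
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < h and 0 <= xx < w
                                and labels[yy, xx] < 0
                                and img[yy, xx] == img[i, j]):
                            labels[yy, xx] = comp
                            stack.append((yy, xx))
            sizes.append(count)
    coherent = np.array(sizes)[labels] > tau  # per-pixel coherence flag
    pairs = np.zeros((n_colors, 2), dtype=int)
    for c in range(n_colors):
        pairs[c, 0] = np.sum((img == c) & coherent)    # alpha_j
        pairs[c, 1] = np.sum((img == c) & ~coherent)   # beta_j
    return pairs

def ccv_distance(p, q):
    """Delta = sum_j |alpha_j - alpha'_j| + |beta_j - beta'_j|."""
    return int(np.abs(p - q).sum())

img = np.zeros((20, 20), dtype=int)
img[5:15, 5:15] = 1                 # one large color-1 blob on a color-0 field
pairs = ccv(img, n_colors=2, tau=25)
print(pairs)                        # both components exceed tau, so beta_j = 0
```

Each pass over the image touches every pixel a constant number of times, matching the linear time complexity noted in Section III-D.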

TABLE I
PRECISION OF PROPOSED CBIR SYSTEM USING CCV AND GABOR FOR
Nr (TOTAL RETRIEVED IMAGES) = 10 AND 20.

Class Number | Class Name | Precision (Nr = 10) | Precision (Nr = 20)
      1      | HUMANS     |          9          |         16
      2      | BEACHES    |          7          |         10
      3      | MONUMENTS  |          8          |         11
      4      | BUSES      |         10          |         20
      5      | DINOSAURS  |         10          |         20
      6      | ELEPHANTS  |         10          |         18
      7      | FLOWERS    |         10          |         17
      8      | HORSES     |          9          |         15
      9      | MOUNTAINS  |          8          |         12
     10      | FOOD       |          8          |         15
             |            |     MEAN = 8.9      |     MEAN = 15.4

Fig. 4. Accuracy for feature vector having combination of Legendre moment with and without segmentation, CCV and Gabor for different mean values (Nr = 10).
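The retrieval and precision evaluation described in Section IV can be sketched as below. This is a sketch, not the authors' code: the toy feature database, the 18/48 color-texture split and the equal default weights are illustrative assumptions.

```python
import numpy as np

def retrieve(query_vec, feature_db, k=10, w_color=0.5, w_texture=0.5, split=18):
    """Rank database images by weighted Manhattan distance of color (first
    `split` dims) and texture (remaining dims); return indices of the top k."""
    d_color = np.abs(feature_db[:, :split] - query_vec[:split]).sum(axis=1)
    d_texture = np.abs(feature_db[:, split:] - query_vec[split:]).sum(axis=1)
    dist = w_color * d_color + w_texture * d_texture
    return np.argsort(dist)[:k]

def precision(retrieved, labels, query_label):
    """Fraction of retrieved images that share the query's class."""
    return np.mean(labels[retrieved] == query_label)

# toy database: 100 images, 10 classes, 66-dim features clustered by class
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(10), 10)
db = labels[:, None] * 10.0 + rng.normal(0, 0.1, (100, 66))
top = retrieve(db[0], db, k=10)
print(precision(top, labels, labels[0]))   # 1.0 on this well-separated toy data
```

In the paper's grid search, w_color is scanned from 0.1 to 0.9 with w_texture = 1 − w_color, and the pair giving the best mean precision is kept.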

Fig. 5. Simulation results showing the comparative performance of the proposed technique (CCV and Gabor) and Huang et al. [9] for Nr = 15.

texture feature extraction in all of the following techniques (Section III-A). Different feature vectors are extracted for representing color:

• Legendre: Legendre moments giving 6 features (Section III-C), used along with Gabor filters giving 48 features.
• Use of segmentation: Legendre moments after applying the segmentation technique (Section III-B), along with Gabor filters.
• CCV: Color Coherence Vector (Section III-D) and Gabor filters, resulting in 54 features in total.

The values of precision are used to judge the performance of the proposed system and give information about its effectiveness. Let the total number of images retrieved be Nr; precision then equals the number of relevant images retrieved (from the same class) divided by Nr. The value of accuracy (in %) is calculated in terms of precision. As shown in Fig. 4, the Legendre moment and Gabor filter combination without segmentation gives better results than the segmentation approach. Using segmentation with texture extraction leads to inaccuracies, since the Gabor filter bank may treat the eliminated background (which is set to black) as one homogeneous texture, whereas in the original image the background may consist of many different objects, each with a disparate texture pattern. Fig. 4 also shows the percentage accuracy obtained with the Color Coherence Vector (CCV) combined with Gabor filters. The results clearly indicate that this method gives the best results among the experiments. Considering the results for all classes, the accuracy rate is highest for the dinosaur, bus, elephant and flower classes with any retrieval method, due to the simplicity of their content. However, for complex content such as mountains, beaches, tribals and monuments, CCV gives much better results than the other approaches. The CBIR scheme proposed in this paper uses CCV for color and Gabor filters for texture feature extraction. Fig. 3 shows the ten most relevant results retrieved from the database for one query using the proposed scheme. As all the results are from the 'Horse' class, the same as the query image, the results are quite encouraging. Elaborate results of the proposed CBIR system with the CCV and Gabor feature vector, for Nr equal to 10 and 20 respectively, are presented in



Table I. The results are best for the bus, dinosaur, elephant and flower classes. As can be seen, the results are above 80% even for complex images.

The results of the proposed method are compared with the approach of [9], which uses HSV histogram equalization for color feature extraction and Gabor filters for texture feature extraction. The proposed method exceeds the other by a margin of 16%, with a mean accuracy of 79.34% (Fig. 5). The method of [10] is also compared with the proposed method, with results presented in Fig. 6; that study uses a color auto-correlogram for color features, Gabor filters for texture features, and a wavelet transform for shape features. The proposed method again performs better, with a mean accuracy of 89%. Also, CCV and Gabor performed above 90% in the majority of classes from the database. Table II shows the comparison between the proposed and existing methods in terms of mean accuracy.

Fig. 6. Simulation results showing the comparative performance of the proposed technique (CCV and Gabor) and Anandh et al. [10] for Nr = 10.

TABLE II
COMPARISON WITH EXISTING METHODS IN TERMS OF MEAN ACCURACY

Images Retrieved (Nr) | Accuracy of Proposed Technique (CCV and Gabor) | Accuracy of Existing Methods | Existing Techniques
         10           |                    89%                         |             83%              | Color Auto-Correlogram, Gabor Filters, Wavelet Transform [10]
         15           |                  79.34%                        |            63.6%             | HSV Histogram Equalization, Gabor Filters [9]

V. CONCLUSION

Several combinations of feature vectors are explored in this paper, and the values of precision and accuracy are used to evaluate their respective performance. Initially, a combined feature vector including Legendre moments and Gabor filters was used to represent the color and texture of the image database. The simulations were performed with and without segmentation of the images, and it was observed that better accuracy was obtained without image segmentation; this may be due to the contribution of the background to the content of an image. In the second stage, the Color Coherence Vector (CCV) and Gabor filters were used for color and texture representation respectively. This combined feature vector produced more accurate and precise results compared to existing techniques. The proposed CBIR system also returned the best match as the query image itself (among the top matched images) extracted from the database.

REFERENCES

[1] F. Long, H. Zhang, and D. D. Feng, "Multimedia information retrieval and management," Technological Fundamentals and Applications, 2003.
[2] A. K. Jain and K. Karu, "Learning texture discrimination masks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 2, pp. 195–205, 1996.
[3] T. Caelli and D. Reye, "On the classification of image regions by colour, texture and shape," Pattern Recognition, vol. 26, no. 4, pp. 461–470, 1993.
[4] M. J. Swain and D. H. Ballard, "Color indexing," International Journal of Computer Vision, vol. 7, no. 1, pp. 11–32, 1991.
[5] L. Viet Tran, "Efficient image retrieval with statistical color descriptors," Ph.D. dissertation, Linköping University Electronic Press, 2003.
[6] J. Morovic and P.-L. Sun, "Accurate 3d image colour histogram transformation," Pattern Recognition Letters, vol. 24, no. 11, pp. 1725–1735, 2003.
[7] G. Pass, R. Zabih, and J. Miller, "Comparing images using color coherence vectors," in Proceedings of the Fourth ACM International Conference on Multimedia. ACM, 1997, pp. 65–73.
[8] B. S. Manjunath and W.-Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837–842, 1996.
[9] Z.-C. Huang, P. P. Chan, W. W. Ng, and D. S. Yeung, "Content-based image retrieval using color moment and gabor texture feature," in Machine Learning and Cybernetics (ICMLC), 2010 International Conference on, vol. 2. IEEE, 2010, pp. 719–724.
[10] A. Anandh, K. Mala, and S. Suganya, "Content based image retrieval system based on semantic information using color, texture and shape features," in Computing Technologies and Intelligent Data Engineering (ICCTIDE), International Conference on. IEEE, 2016, pp. 1–8.
[11] K. Hammouda and E. Jernigan, "Texture segmentation using gabor filters," Cent. Intell. Mach, vol. 2, no. 1, pp. 64–71, 2000.
[12] M. R. Teague, "Image analysis via the general theory of moments," JOSA, vol. 70, no. 8, pp. 920–930, 1980.
[13] D. Zhang, A. Wong, M. Indrawan, and G. Lu, "Content-based image retrieval using gabor texture features," IEEE Transactions PAMI, pp. 13–15, 2000.
[14] S. Lloyd, "Least squares quantization in pcm," IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[15] P. Srivastava, O. Prakash, and A. Khare, "Content-based image retrieval using moments of wavelet transform," in Control, Automation and Information Sciences (ICCAIS), 2014 International Conference on. IEEE, 2014, pp. 159–164.
[16] R. Mukundan and K. Ramakrishnan, Moment Functions in Image Analysis: Theory and Applications. World Scientific, 1998.
[17] J. Li and J. Z. Wang, "Automatic linguistic indexing of pictures by a statistical modeling approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1075–1088, 2003.
[18] J. Z. Wang, J. Li, and G. Wiederhold, "Simplicity: Semantics-sensitive integrated matching for picture libraries," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947–963, 2001.
[19] J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques (The Morgan Kaufmann Series in Data Management Systems). Morgan Kaufmann, 2000.
