
Improving Image Retrieval Performance by Using Both Color and Texture Features

Dengsheng Zhang
Gippsland School of Computing and Information Technology
Monash University, Churchill, Victoria 3842, Australia

Abstract
Most content-based image retrieval (CBIR) systems use color as the image feature. However, image retrieval using color features often gives disappointing results, because in many cases images with similar colors do not have similar content. Color methods incorporating spatial information have been proposed to solve this problem; however, these methods often result in feature vectors of very high dimension, which drastically slows down retrieval. In this paper, a method combining both the color and the texture features of an image is proposed to improve retrieval performance. Given a query, images in the database are first ranked using color features. The top ranked images are then re-ranked according to their texture features. Results show that the second process improves retrieval performance significantly.

Keywords: CBIR, color, texture, image retrieval.

1. Introduction
In recent years, Content Based Image Retrieval (CBIR) has been proposed in an attempt to index images automatically and objectively [1, 2]. In CBIR, images in the database are represented using low level image features such as color, texture and shape, which are extracted from the images automatically. Among these low level features, color features are the most widely used for image retrieval, because color is the most intuitive feature and can be extracted from images conveniently [3, 4]. However, image retrieval using color features often gives disappointing results, because in many cases images with similar colors do not have similar content. This is because the global color features computed often fail to capture the color distributions or textures within the image. Several methods have been proposed to incorporate spatial color information in an attempt to avoid this color confusion [5, 6, 7, 8]. However, these methods often result in feature vectors of very high dimension, which drastically slows down retrieval.

In this paper, we propose a method combining both color and texture features to improve retrieval performance. We compute both the color and the texture features of the images, and images in the database are indexed using both types of features. During the retrieval process, given a query image, images in the database are first ranked using color features. In a second process, a number of the top ranked images are selected and re-ranked according to their texture features. Because the texture features are extracted globally from the image, they are not an accurate description of the image in some cases. Therefore, we provide the user with two alternatives: retrieval based on color features alone, and retrieval based on the combined features. When retrieval based on color fails, the user turns to the other alternative, the combined retrieval. By integrating these two alternatives, retrieval performance is improved significantly.

The rest of the paper is organized as follows. In Section 2, the color descriptor is presented. Section 3 briefly describes the rotation invariant Gabor texture features. Section 4 describes the indexing and retrieval process. Retrieval results and performance are reported in Section 5, and the paper is concluded in Section 6.

2. Robust Color Features
To extract robust color features, we use the perceptually weighted histogram, or PWH, proposed by Lu et al. [9]. Basically, the PWH is obtained by working in the CIE L*u*v* color space and assigning a perceptual weight to each histogram bin according to the distance between the color of a pixel and the color of the bin. We briefly describe the details of the PWH in the following.

The first step in extracting color features is to select an appropriate color space. Several color spaces are available, such as RGB, CMYK, HSV and CIE L*u*v*. Most digital images are stored in the RGB color space. However, RGB is not perceptually uniform: two colors with a larger distance between them can be perceptually more similar than another two colors with a smaller distance. Simply put, color distance in RGB space does not represent perceptual color distance. In view of this drawback, the CIE L*u*v* space is chosen, because it is a uniform color space in terms of color distance. To use the L*u*v* space, color values are first converted from RGB into CIE XYZ space with a linear transform, and then from CIE XYZ into L*u*v* using the following transform:

  L* = 116 × (Y/Yn)^(1/3) − 16,   if Y/Yn > 0.008856
  L* = 903.3 × (Y/Yn),            if Y/Yn ≤ 0.008856
  u* = 13 × L* × (u′ − u′n)
  v* = 13 × L* × (v′ − v′n)

where

  u′ = 4X / (X + 15Y + 3Z)
  v′ = 9Y / (X + 15Y + 3Z)
  u′n = 4Xn / (Xn + 15Yn + 3Zn)
  v′n = 9Yn / (Xn + 15Yn + 3Zn)

and (Xn, Yn, Zn) is the reference white in XYZ space.

There are usually millions of colors in a color space. To build a color histogram from an image, the color space is normally quantized into a number of color bins, with each bin representing a set of neighboring colors. In the L*u*v* space, however, representative colors are used instead.
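For concreteness, the conversion above can be sketched in Python. The paper does not give the RGB-to-XYZ matrix, so the sRGB/D65 matrix and white point below are assumptions, and gamma decoding of the stored RGB values is omitted:

```python
import math

# Linear RGB -> XYZ matrix for sRGB primaries with a D65 white point
# (an assumption; the paper only states that a linear transform is used).
M = [[0.4124, 0.3576, 0.1805],
     [0.2126, 0.7152, 0.0722],
     [0.0193, 0.1192, 0.9505]]
WHITE = (0.9505, 1.0, 1.0890)  # reference white (Xn, Yn, Zn)

def rgb_to_luv(r, g, b):
    """Convert linear RGB in [0, 1] to CIE L*u*v* per the formulas above."""
    x, y, z = (sum(M[i][j] * c for j, c in enumerate((r, g, b))) for i in range(3))
    xn, yn, zn = WHITE
    t = y / yn
    l_star = 116.0 * t ** (1.0 / 3.0) - 16.0 if t > 0.008856 else 903.3 * t

    def uv(X, Y, Z):
        # chromaticity coordinates u', v'
        d = X + 15.0 * Y + 3.0 * Z
        return (4.0 * X / d, 9.0 * Y / d) if d else (0.0, 0.0)

    u_p, v_p = uv(x, y, z)
    un_p, vn_p = uv(xn, yn, zn)
    return (l_star, 13.0 * l_star * (u_p - un_p), 13.0 * l_star * (v_p - vn_p))
```

For the reference white itself the conversion gives L* = 100 with u* = v* = 0, which is a convenient sanity check.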

Rather than quantizing each color channel by a constant step, 512 representative colors, uniformly distributed in L*u*v* space, are used; the number of representative colors is given by the combinations of eight representative values on each of the L, u and v components [10].

In contrast to conventional histogram building, which assigns the color of each pixel to a single color bin, the PWH assigns the color of each pixel to its 10 nearest color bins, based on the following weight:

  wi = (1/di) / (1/d1 + 1/d2 + … + 1/d10)

where

  di = √[(Li − L0)² + (ui − u0)² + (vi − v0)²],

(Li, ui, vi) is the color of bin i, and (L0, u0, v0) is the color of the pixel to be assigned. The use of the PWH overcomes the drawback of conventional histogram methods, which in many situations assign a pixel color to a bin of a quite different color. The PWH also overcomes the drawback of having to assign two quite different colors to the same color bin. As a result, the PWH is much more accurate in representing an image than the conventional histogram.

Nevertheless, the PWH is still a global representation of the image. The retrieval result given by the PWH usually reflects the overall color tone of the images rather than the actual image content expected by the user, and it may fail to retrieve similar images in some situations. In such situations, the texture features of the images can be used to help rank images with similar content closer.

3. Rotation Invariant Texture Features
Texture is an important feature of objects in an image. Two images with different content can usually be distinguished by their texture features even when the images share similar colors. The main purpose of texture-based retrieval is to find images or regions with similar texture. Gabor filters (or Gabor wavelets) are widely adopted to extract texture features from images for retrieval [11]. Gabor filters are a group of wavelets, with each wavelet capturing energy at a specific frequency and a specific direction. Expanding a signal over this basis provides a localized frequency description, therefore capturing local features/energy of the signal. Texture features can then be extracted from this group of energy distributions. The scale (frequency) and orientation tunable property of the Gabor filter makes it especially useful for texture analysis.

For a given image I(x, y) of size P×Q, its discrete Gabor wavelet transform is given by a convolution:

  Gmn(x, y) = Σs Σt I(x − s, y − t) g*mn(s, t),   s, t = 0, 1, …, K

where K is the filter mask size and g*mn is the complex conjugate of gmn, a class of self-similar functions generated from dilation and rotation of the following mother wavelet:

  g(x, y) = [1 / (2π σx σy)] · exp[−(1/2)(x²/σx² + y²/σy²)] · exp(j2πWx)

where W is called the modulation frequency.

After applying Gabor filters to the image at different scales and orientations, an array of magnitudes is obtained:

  E(m, n) = Σx Σy |Gmn(x, y)|,   m = 0, 1, …, M − 1;  n = 0, 1, …, N − 1

These magnitudes represent the energy content of the image at different scales and orientations. It is assumed that we are interested in images or regions that have homogeneous texture; therefore, the mean µmn and the standard deviation σmn of the magnitudes of the transformed coefficients are used to represent the homogeneous texture feature of the region. A feature vector f (the texture representation) is created using µmn and σmn as the feature components [11]. Five scales and six orientations are used in the common implementation, and the Gabor texture feature vector is thus given by:

  f = (µ00, σ00, µ01, σ01, …, µ45, σ45)

The similarity between two texture patterns is measured by the Euclidean distance between their Gabor feature vectors. To make the extracted texture features robust to image rotation, a simple circular shift on the feature map is used [12]. Because the texture features are extracted globally from the image, however, they are still global features.

4. Image Retrieval Using Both Color and Texture Features
In this section, image indexing and retrieval using color and combined features are described. Each image in the database is indexed using both the color and the texture features described above; that is, in the indexed database, images are represented by the indices of their features. Given a query image, the images in the database, called target images, are ranked in descending order of similarity to the query image. The ranking of similarity is determined by the distance between the feature vector of the query image and the feature vector of the target image, where the feature components fij can be either the color features or the texture features. For two images with features (f11, f12, …, f1n) and (f21, f22, …, f2n) respectively, the distance between the two images is given by

  d = √[(f11 − f21)² + (f12 − f22)² + … + (f1n − f2n)²]

In the retrieval, images are first retrieved using the PWH color features. A number of the top ranked images are then selected and re-ranked using their texture features, and the top ranked images are presented to the user.
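The texture feature extraction described above can be sketched as follows. This is a minimal pure-Python illustration, not the implementation of [11]: the filter-bank parameters (mask size, modulation frequency W, σ, dilation factor) are assumed values, and the convolution wraps circularly at the image borders:

```python
import math

def gabor_kernel(scale, orientation, size=9, W=0.5, sigma=2.0, a=2.0):
    """One filter g_mn: the mother wavelet dilated by a**(-scale) and
    rotated by the orientation angle (a simplified form of [11])."""
    theta = orientation * math.pi / 6.0          # six orientation steps
    s = a ** (-scale)                            # dilation factor
    kernel = []
    for y in range(-(size // 2), size // 2 + 1):
        row = []
        for x in range(-(size // 2), size // 2 + 1):
            xr = s * (x * math.cos(theta) + y * math.sin(theta))
            yr = s * (-x * math.sin(theta) + y * math.cos(theta))
            env = math.exp(-0.5 * (xr * xr + yr * yr) / sigma ** 2) / (2 * math.pi * sigma ** 2)
            row.append(complex(env * math.cos(2 * math.pi * W * xr),
                               env * math.sin(2 * math.pi * W * xr)))
        kernel.append(row)
    return kernel

def gabor_features(image, scales=5, orientations=6):
    """Return f = (mu_00, sigma_00, ..., mu_45, sigma_45): mean and standard
    deviation of |G_mn| for each scale m and orientation n."""
    P, Q = len(image), len(image[0])
    f = []
    for m in range(scales):
        for n in range(orientations):
            g = gabor_kernel(m, n)
            K = len(g)
            mags = []
            for x in range(P):
                for y in range(Q):
                    acc = 0j
                    for s in range(K):
                        for t in range(K):
                            # convolution with the conjugate filter
                            acc += image[(x - s) % P][(y - t) % Q] * g[s][t].conjugate()
                    mags.append(abs(acc))
            mu = sum(mags) / len(mags)
            sigma = math.sqrt(sum((v - mu) ** 2 for v in mags) / len(mags))
            f.extend([mu, sigma])
    return f

def circular_shift(f, orientations=6):
    """Rotation invariance [12]: circularly shift the orientation axis so the
    dominant-energy orientation comes first."""
    pairs = [f[i:i + 2] for i in range(0, len(f), 2)]
    rows = [pairs[i:i + orientations] for i in range(0, len(pairs), orientations)]
    # dominant orientation = column with the largest total mean energy
    dom = max(range(orientations), key=lambda n: sum(r[n][0] for r in rows))
    out = []
    for r in rows:
        for n in range(orientations):
            out.extend(r[(dom + n) % orientations])
    return out
```

A practical system would use FFT-based convolution and a properly normalized filter bank; the sketch only illustrates how µmn and σmn form the feature vector and how the circular shift reorders the orientation axis.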

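The two-stage ranking described above can be sketched as follows. The dictionary keys 'color' and 'texture' and the cut-off top_k are illustrative names, not from the paper:

```python
import math

def euclidean(f1, f2):
    """Distance between two feature vectors, as in Section 4."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def combined_retrieval(query, database, top_k=50):
    """Rank the database by color first, then re-rank the top_k by texture.

    `query` and each database entry are dicts holding the PWH color
    vector under 'color' and the Gabor vector under 'texture'.
    """
    # Stage 1: rank the whole database by color distance (ascending).
    by_color = sorted(database, key=lambda img: euclidean(query['color'], img['color']))
    head, tail = by_color[:top_k], by_color[top_k:]
    # Stage 2: re-rank only the top-ranked images by texture distance.
    head = sorted(head, key=lambda img: euclidean(query['texture'], img['texture']))
    return head + tail
```

Because only the top_k color-ranked images are re-ranked, the texture distances are computed for a small candidate set rather than for the whole database, which keeps the second stage cheap.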
L. i.e. retrieval precision using combined features is very robust and has a smooth drop of performance.c. news photos. AAAI/MIT Press. 1993. Jeju. Some of the retrieval screen shots are shown in Figure 3 to demonstrate the comparison between retrieval using combined features and retrieval using color features.. Swain and D. where Ri-ct stands for retrieval using combined features and Ri-c for retrieval using color features. we will be able to improve retrieval performance of combined features further. our method outperforms the common color feature retrieval significantly. the improvement of retrieval performance is guaranteed. Retrieval Experiments and Results In the experiment. the user then selects the retrieval using combined features. In Proc. [5] J. they are shown in Figure 1. The screen shots are arranged in pairs.c. Martínez. March 2002. T. In M. For each query. …. the images in the database are first ranked using color features according to the distance between the query and the target images. 1999. S. ed. precision-recall pair. retrievals using combined features give very good results while the same retrievals using color features fail almost completely. [2] J. As can be observed. the overall performance may not be improved in those approaches. is used. Since we give the users control to select the type of retrieval..173-187. Intelligent Multimedia Information Retrieval. retrieval using color features only and retrieval using combined features. Lu. Artech House. Chang. The server side is a Java application which handles online indexing and multi-user query. pp. it can be observed that the retrieval precision using color features has a sharp drop from 40% of recalls. the precision of the retrieval at each level of the recall is obtained.e.. “Fast Signature-based Color-spatial Image Retrieval”. In contrast to most conventional combined approaches which may not give better performance than individual features. 
we use the database which is used by MPEG-7 for natural image retrieval test [13]. P = r/n. query by example. selecting query from list and navigation tools. i. 30 images are randomly selected from the database as queries. ISO/IEC JTC1/SC29/WG11 N4674. e. 1991. A Java-based client-server retrieval framework has been developed to conduct the retrieval test. In the combined retrieval. By using regional features instead of global features. Ooi. However. F. From the precision-recall chart of Figure 2. and R4ct. Experiment results show. [4] M. 6. Simith and S. San Jose. International Journal of Computer Vision. MPEG-7 Overview (version 7).. landscapes.1908. the retrieval performance of combined features outperforms that of color features significantly. “Querying by Color Regions Using the VisualSEEK Content-Based Visual Query System”. This is done for all the 30 queries. Recall R measures the robustness of the retrieval. To evaluate the retrieval performance. therefore. 6).466 images from large varieties. However. 2. K. buildings. Precision P measures the accuracy of the retrieval. “Multimedia Database Management Systems”. M. [6] T. In some cases. References: [1] G. Recall R is defined as the ratio of the number of retrieved relevant images r to the total number m of relevant images in the whole database. Texture and Shape. i. it always helps to improve the retrieval results in our system. scenery photos. We have used the PWH color features to improve conventional histogram features. Chua. We also plan to use more queries to test the retrieval performance. Then a number of the top ranked images are selected and they are then re-ranked using their texture features. This is different from most of other approaches which integrates the two features into one and Figure 1. Vol. R2-ct. CA. i. The 30 random selected query images. If the color retrieval result is not satisfactory. 
The client side is a Java applet which presents retrieval result and also provides useful user interface such as query by sketch. the combined retrievals give much better results than retrievals using color features only. we let the user to decide if texture features is needed. On Storage and Retrieval for Image and Video Databases. R = r/m. Niblack et al. the common evaluation method. an image retrieval method using both color features and combined features has been proposed. the 30 retrieval results are averaged to give the final precision-recall chart of Figure 2. SPIE Conf. R. The QBIC Project: Querying Images By Content Using Color.situations. “Color Indexing”. segmentation is a complex problem which has no desirable result at this moment. provides users with only one alternative. flowers. Maybury.g. we plan to segment image automatically into homogenous texture regions using split and merging technique. Among the 30 queries tested. group of images from video sequence etc. 17 of the retrieval results have been improved significantly by using combined features. 1996. As the choice of retrieval using combined features is determine by users. A robust texture feature which is suitable for image retrieval has also been presented in the paper. cartoons. C. each pair is labeled as Ri-ct and Ri-c (i = 1. in all the cases demonstrated. of IEEE .e. [3] W. Images with different orientations have also been included. Conclusions In this paper. Ballard. our approach provides users with two alternatives.e. Solution to this problem can be segmenting image into regions and extracting texture features from the regions. Overall. The database consists of 5. The types of images in the database include animals. 7(1):11-32. Therefore. Tan and B. 5. Precision P is defined as the ratio of the number of retrieved relevant images r to the total number of retrieved images n. In the future.

December 13-15. Australia. 1996. Sydney. R1-ct R1-c R2-ct R2-c R3-ct R3-c R4-ct R4-c R5-ct R5-c R6-ct R6-c Figure 3. 2002. . Precision-Recall Chart 100 90 80 70 Precision Color-Texture Color 60 50 40 30 20 10 0 0 20 40 Recall 60 80 100 Figure 2. Chua. In Proc. 1998. Miller. In Proc. 1996. S. Y. Manjunath and W. “Integrated Color-spatial Approach to Content-based Image Retrieval”. of First IEEE Pacific-Rim Conference on Multimedia (PCM'00). Beijing. [12] D. S. T. “Comparing Images Using Color Coherence Vectors”. “Texture Features for Browsing and Retrieval of Large Image Data”. [10] B. Hsu. IEEE Transactions on Circuits and Systems for Video Technology. Pass. S. In Proc. [11] B. Chua and H. Zhang and G. “Color-based Image Retrieval Incorporating Relevance Feedback”. August. Y. Manjunath et al. In Proc. [8] G. 2001. of International Conference on Signal Processing. 11(6):703-715. of the 4th ACM Multimedia Conference. pp11501153. “Content-based Image Retrieval Using Gabor Texture Features”. 1997. 2000. “Color and Texture Descriptors”. 1995. Lu. pp362-369. pp. Ontario. Pung. K. “Using Perceptually Weighted Histograms for Color-Based Image Retrieval”. [7] W. pp65-73. [13] B. Sample screen shots of retrievals using color and combined features. Average precision-recall of 30 queries. on Multimedia Computing and Systems. IEEE Trans. S. R. Canada. Honors Thesis. Phillips. PAMI-18 (8):837-842. China. Monash University. [9] G. pp305-313. Zabih and J.International Conf. Ma. Lu and J. of ACM Multimedia.392395.