Detection And Evaluation Of Crohns Disease Lesions From Wireless Capsule Endoscopy Images

Tom Mathew1, D. Sridharan2
1PG Student, 2Associate Professor
1,2Department of Electronics and Communication Engineering, College of Engineering, Anna University, Chennai-600 025
1tommathew16@gmail.com, 2sridhar@annauniv.edu
Contact no. +919445319619
ABSTRACT

This paper describes a method for the automatic detection of contractions in the small bowel by analyzing Wireless Capsule Endoscopy (WCE) images. Based on the characteristics of contraction images and the yellow pus formed on lesion-affected areas, a procedure that includes analysis of temporal and spatial features is proposed. Our work is the first study to systematically explore supervised classification for CD lesions, a classifier cascade to classify discrete lesions, as well as quantitative assessment of lesion severity. For temporal features, the image sequence is examined to detect contractions through their edges. For spatial features, descriptions of the directions at the edge pixels are used to determine contractions using a classification method. The experimental results show the effectiveness of our method, which detects a total of 83% of cases. The developed methods show high agreement with ground-truth severity ratings manually assigned by an expert, and good precision (>90% for lesion detection) across varying severity.

Keywords—Charge Coupled Device (CCD), Hue Saturation Value (HSV)

I. INTRODUCTION

FIG. I. Examples of CE lesions: normal small bowel lumen, and mild, moderate, and severe Crohn’s lesions (clockwise from left).

In this paper, we present a method for the recognition of intestinal contractions during WCE of the small bowel, based on efficient combinations of image features and the physiological properties of contraction frequencies. WCE produces large amounts of data (approximately 50 000 images) that must then be manually reviewed by a clinician. Such large datasets provide an opportunity for the application of image analysis and supervised learning methods. The capsule contains a CCD camera and an RF antenna to transmit the images to a storage device. Automated analysis of CE images has so far focused only on detection, and often only for bleeding. Compared to these detection approaches, we explored assessment of discrete lesions [1] created by mucosal inflammation in Crohn’s disease (CD) [4], [5]. Our work is the first study to systematically explore supervised classification for CD lesions, a classifier cascade to classify discrete lesions, as well as quantitative assessment of lesion severity.

Contractions were successfully detected using a three-stage procedure. In Stage 1, potential contractions were detected using various filters in LabVIEW [3] and in MATLAB; the possible contractions were then evaluated based on the similarities between consecutive frames, allowing negative cases to be discarded. In Stage 2, image segmentation [6] is performed using the watershed algorithm and a region of interest, and lesions are detected using the Hue-Saturation-Value (HSV) model. For a final decision, in Stage 3, the spatial features of the possible contractions are classified using an edge direction histogram, and a color classification training interface [7] summarizes the disease and non-disease images with the help of pattern scores. The results suggest that this method could provide a non-invasive means of intestinal motility analysis requiring minimal attention from physicians.
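The Stage 2 lesion masking in the HSV model (the yellow-object mask of Fig. IV) can be sketched as follows; the hue window and the saturation/value floors are illustrative assumptions, not thresholds taken from the paper:

```python
import colorsys

def yellow_mask(pixels, h_range=(0.10, 0.20), s_min=0.5, v_min=0.5):
    """Boolean mask of 'yellowish' exudate pixels in an RGB image.

    pixels: 2D list of (r, g, b) tuples with 8-bit channel values.
    A pixel is kept only if it passes the hue window AND the
    saturation AND value floors (hue mask & saturation mask & value
    mask, as in Fig. IV). The threshold values are illustrative.
    """
    mask = []
    for row in pixels:
        mrow = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            mrow.append(h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min)
        mask.append(mrow)
    return mask
```

Pure yellow (255, 255, 0) has hue 1/6 ≈ 0.167 and falls inside the window, while red and green pixels are rejected by the hue mask alone.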

FIG. II. Contractions in sample images taken from wireless capsule endoscopy.

II. PREVIOUS METHODS

In comparison to other content-based image retrieval [8] and conventional picture archiving and communication systems [9], only a few research systems [10]–[12] have investigated visual medical imaging such as conventional endoscopy or CE images. The literature describes some methods for complexity reduction in reading CE studies. However, this image analysis [13]–[18] has focused on detection of anatomy and anomalies. This includes detection of the lumen and its contractions, fluids such as blood and intestinal juices [19], as well as extraneous matter such as food and bubbles. For example, Bashar et al. [16] present a method using color information and apply it on data from three CE studies to detect "noninteresting" images containing excessive food, fecal matter, or air bubbles. They compare the performance of their methods with that of Gabor and discrete wavelet feature methods. As detection of obscure bleeding is one of the main clinical applications, several research groups have investigated detection of bleeding. For example, Hwang et al. [14] apply expectation maximization (EM) clustering on a dataset of around 15 000 CE images for blood detection. Jung et al. [20] also report a blood detection method employing color information, whereas Mackiewicz et al. [18] use adaptive color histograms. By contrast, our previous analysis has focused on analysis of CE images for semiautomated assignment of diagnostic attributes, primarily for CD lesions [21]–[23]. Small bowel CD lesions could not be imaged prior to the introduction of CE without invasive procedures. Discrete (punched-out) erosions and ulcers are a characteristic of CD. The severity of mucosal damage, and conversely, mucosal healing by anti-inflammatory therapies, are common clinical assessments [24], [25].
Because of the discrete nature and the large volume of image data involved, CD lesions in CE are a good candidate for automated analysis. We aim to perform this assessment of lesions with a level of consistency and accuracy comparable to human observers. Our prior work has investigated several aspects of this analysis. In [21], we presented results on the detection of CD lesions in CE images using support vector machines (SVMs) [26], [27]. We demonstrated 96.5% average accuracy of classification of lesions over whole images from ten CE studies using color features alone, comparable to the high results reported in the art for simpler detection tasks such as bleeding [14], [20]. In [22], we extended this study to the use of ordinal regression techniques to develop a ranking function correlating with clinical annotation using a small number of available pairwise comparisons.

FIG. III. Block diagram of the analysis of Crohn’s disease detection: lesion detection followed by identification and classification score.

In [23] and [28], we described experiments demonstrating the use of both SVMs and boosting methods to classify misregistrations for duplicate lesion detection. Our goal here was to further investigate study reading complexity reduction, by reducing the number of images that a clinician would need to review by removing images that do not contain a CD lesion. Second, and more importantly, in the remaining lesion images, we assigned a lesion severity to help the clinician with consistent diagnosis. This sequential approach lends itself to multiclass and cascaded classifications, and given the good experimental performance of the lesion detection classifier, we used a simple cascade of two classifiers. Current clinical workflow [2] supports such a semiautomatic approach, where the clinician may segment the lesion images of interest, and then use the lesion severity assignment to compute a consistent patient severity score. We also investigated whether the availability of additional information, for example, a region of interest (ROI), helps in either automated detection of the lesions, or in the assignment of lesion severity.
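The simple cascade of two classifiers can be sketched as follows; `detect`, `grade`, and the `yellow_frac` feature are hypothetical stand-ins for the trained classifiers, chosen only to illustrate the control flow:

```python
def cascade(detect, grade, images):
    """Two-stage cascade: stage 1 discards images without a CD lesion,
    so the clinician never reviews them; stage 2 assigns a severity
    only to the images that the first stage accepts."""
    results = {}
    for name, img in images.items():
        if detect(img):                  # stage 1: lesion vs. normal
            results[name] = grade(img)   # stage 2: severity class
    return results

# Hypothetical stand-in classifiers keyed on a single "fraction of
# yellowish exudate pixels" feature (illustrative thresholds only).
detect = lambda img: img["yellow_frac"] > 0.05
grade = lambda img: "severe" if img["yellow_frac"] > 0.3 else "mild"
```

In practice both stages would be trained classifiers (e.g., SVMs as in [21]); the reading-time reduction comes from the first stage filtering images before the more expensive severity assignment runs.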

FIG. IV. HSV histograms and masks: hue, saturation, and value band images; pixel-count histograms of all bands and of each band individually; and the mask of only the yellow objects, computed as hue mask & saturation mask & value mask.

V. IMAGE SEGMENTATION

III. EDGE AND COLOR FEATURES

The edge histogram descriptor uses 2 × 2 edge operators to capture edges in four directions (0°, 45°, 90°, and 135°) plus non-directional edges; edges can also be captured by filters such as Canny and Gabor filters. Each image is divided into 16 equal nonoverlapping blocks, and each of these blocks may then be further divided into a configurable number of sub-blocks. The five 2 × 2 edge filters are then applied to each sub-block, and the edge responses are accumulated. The HSV color descriptor contains the representative colors, their percentages in the image, and several other statistical measures. The dominant color descriptor (DCD) is typically used as a direct comparison descriptor based on metrics of spatial coherence computed from the generated dominant colors. Here, we primarily treat it as a color histogram, an approximation that may not extend to general use beyond CE analysis for CD, due to more widely varying image characteristics compared to CE images. In a CE image and its corresponding dominant color image, the lesion exudate and the inflammation surrounding the lesion are significantly different from the normal color distribution of the intestinal lumen. Therefore, we initialize the color descriptors to nominal exudate and inflammation colors. With the dominant colors (c = <r, g, b>) and their percentages, and using the color mask, we can identify the areas having CD lesions.

IV. TEXTURE FEATURES

We based our texture features on the MPEG-7 texture descriptors as well. The homogeneous texture descriptor (HTD) uses Gabor filters of different scales and orientations.
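One scale/orientation of a Gabor filter bank such as the HTD uses can be sketched as follows; the parameterization follows the common Gabor formulation (orientation, wavelength, Gaussian envelope), not MPEG-7's exact normalization:

```python
import math

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a Gabor kernel: a cosine carrier at the given
    orientation (theta, radians) and wavelength, modulated by an
    elliptical Gaussian envelope (sigma, aspect ratio gamma).
    Returns a size x size list of lists (size should be odd)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

A texture bank convolves the image with several such kernels (varying `theta` and `wavelength`) and summarizes the response energies per region.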

In grey-scale mathematical morphology, the watershed transform, originally proposed by Digabel and Lantuéjoul [30], [29] and later improved by Beucher and Lantuéjoul [32], is the method of choice for image segmentation [31]–[33]. Generally speaking, image segmentation is the process of isolating objects in the image from the background, i.e., partitioning the image into disjoint regions such that each region is homogeneous with respect to some property, such as grey value or texture. The watershed transform can be classified as a region-based segmentation approach. The intuitive idea underlying this method comes from geography: it is that of a landscape or topographic relief which is flooded by water, watersheds being the divide lines of the domains of attraction of rain falling over the region. An alternative approach is to imagine the landscape being immersed in a lake, with holes pierced in local minima. Basins (also called catchment basins) will fill up with water starting at these local minima, and, at points where water coming from different basins would meet, dams are built. When the water level has reached the highest peak in the landscape, the process is stopped. As a result, the landscape is partitioned into regions or basins separated by dams, called watershed lines or simply watersheds. The main goal of the watershed segmentation algorithm is to find these watershed lines in an image in order to separate the distinct regions. Let M1, M2, …, MR be the sets of coordinates of the regional minima of an image g(x, y), where g(x, y) is the pixel value at coordinate (x, y), and let C(Mi) denote the coordinates of the catchment basin associated with regional minimum Mi.

Step 1. Find the minimum and maximum pixel values of g(x, y), min and max. The topography will be flooded in integer flood increments starting from n = min + 1. Let Cn(Mi) be the coordinates in the catchment basin associated with minimum Mi that are flooded at stage n.

Step 2. Compute T[n] = {(s, t) | g(s, t) < n}, the set of coordinates lying below the current flood level n.

Step 3. Derive the set Q[n] of connected components of T[n]. For each connected component q ∈ Q[n], there are three conditions:
a. If q ∩ C[n − 1] is empty, connected component q is incorporated into C[n − 1] to form C[n], because it represents a newly encountered minimum.
b. If q ∩ C[n − 1] contains one connected component of C[n − 1], connected component q is incorporated into C[n − 1] to form C[n], because q lies within the catchment basin of some regional minimum.
c. If q ∩ C[n − 1] contains more than one connected component of C[n − 1], q represents all or part of a ridge separating two or more catchment basins, so we find the points of the ridge(s) and set them as a "dam".

Step 4. Construct C[n] according to the conditions above, and set n = n + 1.

Step 5. Repeat Steps 3 and 4 until n reaches max + 1.
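The flooding procedure of Steps 1–5 can be sketched as follows. This is a deliberately simplified version: at a ridge (condition c), every newly flooded pixel of the component is marked as a dam, so dams can be thicker than the one-pixel watershed lines of the full algorithm:

```python
from collections import deque

def _components(mask):
    """4-connected components of True cells in a 2D boolean grid."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def watershed(g):
    """Flood grey-level image g (2D list of ints) in integer increments.
    Returns a label grid: positive integers are catchment basins,
    -1 marks dam (watershed) pixels, following Steps 1-5 above."""
    h, w = len(g), len(g[0])
    labels = [[0] * w for _ in range(h)]          # 0 = not yet flooded
    lo = min(min(r) for r in g)
    hi = max(max(r) for r in g)
    next_label = 1
    for n in range(lo + 1, hi + 2):               # Steps 1 & 5
        mask = [[g[i][j] < n for j in range(w)] for i in range(h)]  # T[n]
        for comp in _components(mask):            # q in Q[n]
            basins = {labels[y][x] for y, x in comp if labels[y][x] > 0}
            if not basins:                        # (a) new minimum
                for y, x in comp:
                    labels[y][x] = next_label
                next_label += 1
            elif len(basins) == 1:                # (b) grows one basin
                b = basins.pop()
                for y, x in comp:
                    if labels[y][x] == 0:
                        labels[y][x] = b
            else:                                 # (c) ridge: build a dam
                for y, x in comp:
                    if labels[y][x] == 0:
                        labels[y][x] = -1
    return labels
```

On the one-row image `[[0, 2, 0]]` the two minima become basins 1 and 2, and the middle pixel, where the floods would meet, becomes a dam.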

FIG. VI. Edge features: creation of the 85-dimension edge histogram (top), and an example of accumulated edge responses for a sub-block size of 4 (bottom).
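The 80 local bins of the edge histogram (the remaining bins of the 85-dimension descriptor are global aggregates) can be sketched as follows, assuming 2 × 2-pixel sub-blocks and an illustrative response threshold:

```python
import math

# The five 2x2 edge filters: vertical, horizontal, 45°, 135°, non-directional.
FILTERS = [
    ((1, -1), (1, -1)),
    ((1, 1), (-1, -1)),
    ((math.sqrt(2), 0), (0, -math.sqrt(2))),
    ((0, math.sqrt(2)), (-math.sqrt(2), 0)),
    ((2, -2), (-2, 2)),
]

def edge_histogram(img, threshold=11.0):
    """80-bin local edge histogram over a 4x4 grid of image blocks.
    img: 2D list of grey values with dimensions divisible by 8.
    Each 2x2 sub-block votes for the filter with the strongest
    response, if that response exceeds the threshold; per-block bins
    are normalized by the number of sub-blocks."""
    h, w = len(img), len(img[0])
    bh, bw = h // 4, w // 4
    hist = [0.0] * 80                      # 16 blocks x 5 edge types
    for by in range(4):
        for bx in range(4):
            count = 0
            for y in range(by * bh, (by + 1) * bh - 1, 2):
                for x in range(bx * bw, (bx + 1) * bw - 1, 2):
                    resp = [abs(img[y][x] * f[0][0] + img[y][x + 1] * f[0][1]
                                + img[y + 1][x] * f[1][0] + img[y + 1][x + 1] * f[1][1])
                            for f in FILTERS]
                    m = max(resp)
                    if m >= threshold:
                        hist[(by * 4 + bx) * 5 + resp.index(m)] += 1
                    count += 1
            if count:
                for k in range(5):
                    hist[(by * 4 + bx) * 5 + k] /= count
    return hist
```

An image of alternating vertical stripes puts all of each block's mass into its vertical-edge bin, which is a quick sanity check on the filter ordering.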

FIG. V. Image segmentation processed with the watershed algorithm. Panels: gradient magnitude (gradmag); watershed transform of the gradient magnitude; opening (Io); opening-by-reconstruction (Iobr); opening-closing (Ioc); opening-closing by reconstruction (Iobrcbr); regional maxima of opening-closing by reconstruction (fgm); regional maxima superimposed on the original image (I2); modified regional maxima superimposed on the original image (fgm4); thresholded opening-closing by reconstruction (bw); markers and object boundaries superimposed on the original image (I4); colored watershed label matrix (Lrgb).

V. FEATURE EXTRACTION

The vision literature reports [33]–[35] a wide range of color, edge, and texture features. The limited literature on analysis of CE images also describes creation of compact representations for further analysis. Vu et al. [36] use edge features for contraction detection. Zheng et al. [34] use color and texture features in their decision support system, while Lee et al. [17] focus on hue, saturation, and intensity color features in their topographic segmentation system. We have previously used [21] MPEG-7 visual descriptors and Haralick texture features [36], color histograms [22], and a range of other features [23] in our investigations. We investigated a range of other features also used in our preliminary work [21]–[23], [29] for these experiments. In [21] and [22], we tested color and intensity histograms, MPEG-7 features, as well as other texture features. In [23], we investigated a range of other common color and edge histograms and features. Commonly used features such as the scale-invariant feature transform (SIFT) were found to be of limited use due to the lack of such features in the relatively small (100s of pixels across) ROIs and the high noise in the CE images; however, this work primarily focused on binary classification for CD and did not evaluate all possible feature choices. A CD lesion typically develops as an ulcer covered by "yellowish" exudate and circumscribed by inflammation. The boundaries of the ulcerous lesion as well as the "reddish" inflammation become larger and better identifiable with increasing severity of the CD lesion. CD lesions also disrupt the carpet-like texture of the villi-covered lumen. Therefore, color (of the exudate and inflammation), edge (for ulcer and lesion boundaries), and texture (the villous texture) information may all serve to identify the severity of a CD lesion, and we used edge, color, and texture features customized to emphasize this information. In previous work, and here in Table III, we report classification using individual color, edge, and texture descriptors, and use customized features based on these results for lesion classification as follows.

VI. COLOR CLASSIFYING INTERFACE

Use the Color Classification Training Interface to train a color classifier by manually classifying color samples into new or existing color classes. Based on those samples, the color classifier can classify unknown samples into a known class. Steps involving the Color Classification Training Interface:
1. Select File»Open Images from the directory.
2. Create color classes using different patterns from an image.

Complete the following steps to create the yellow class and train the color classifier:
1. Use the navigation buttons to locate ColorClassification_1.png.
2. Select the Add Samples tab and click Add Class.
3. Enter Yellow as the new class label and click OK. Create classes such as no-disease, moderate, and severe for accurate matching.
4. Draw an ROI within the yellow portion of the WCE image for the severe class.
5. Ensure that the severe class is selected in the Classes table and click Add Sample on the yellow portion.
6. Select the Classify tab and click Train Classifier.
7. Select the Classify tab to view the classification results.

Testing the color classifier:
1. Class Label—The class to which the classifier has assigned the sample in the ROI.
2. Classification Score—The degree of certainty that a sample belongs to one class instead of another class. 1000 is the best possible score.
3. Identification Score—The degree of similarity between a sample and the samples in the class to which the sample is assigned. 1000 is the best possible score.
4. Closest Sample—Displays the learned sample that most closely matches the sample in the ROI.
5. Distances Table—Lists the distance between a sample and the samples in each existing class. Lower numbers indicate a closer match.
6. Color Vector Tab—Illustrates the differences between the hue, saturation, and intensity color planes of the ROI and the currently selected class. Select a different class in the Distances table to change the currently selected class.
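The classification score reported by such an interface can be mimicked with a nearest-sample classifier; the 0-1000 scaling below is an assumption for illustration, not NI's actual formula:

```python
def classify_color(sample, classes):
    """Assign an (h, s, v) sample to the class of its nearest trained
    sample, and report a 0-1000 classification score: the certainty
    that the sample belongs to the winning class rather than the
    runner-up (1000 = unambiguous). Scoring formula is illustrative.

    classes: dict mapping class name -> list of (h, s, v) samples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # distance from the sample to the closest trained sample per class
    dists = {name: min(dist(sample, s) for s in samples)
             for name, samples in classes.items()}
    ranked = sorted(dists, key=dists.get)
    best = ranked[0]
    if len(ranked) == 1 or dists[ranked[1]] == 0:
        return best, 1000
    score = int(1000 * (1 - dists[best] / (dists[best] + dists[ranked[1]])))
    return best, score
```

A sample identical to a trained "severe" sample scores a perfect 1000; nearby samples score lower as the runner-up class gets relatively closer, which mirrors how the Distances table and Classification Score behave.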

COMPARISON TABLE I


FIG. VII. NI Color Classification Training Interface software.

TABLE II. Pattern score of a trained classifier.

VII. CONCLUSION

In this paper, we gave an overview of research mainly focused on the detection or classification of Crohn's disease and other pathologies of interest in endoscopy of the GI tract. We noticed that there is rising interest in this research topic, especially throughout the last two decades. We also gave an overview of different parts of the GI tract and the respective pathologies of current research interest. We have also created a unique annotated database of CE images of CD lesions and associated clinical findings that could be used to develop and evaluate a wide range of tools for content-based image retrieval in CE assessment. Our experiments show the promise of simplifying CE study analysis for CD assessment. An automated instrument could greatly reduce the time needed for assessment by hiding the unnecessary images from clinical review, if requested by the reviewer. We aim to integrate the presented methods in a suitable tool for such further testing. The lesion rankings used here were generated by a single expert. In our continuing study, we are investigating using multiple rankings by one expert, and rankings from multiple experts, for training and validating our statistical methods. With multiple lesion severities (mild, moderate, and severe), multiple classifiers may also be combined to create more complex cascades, as well as boosted classifiers. We have previously investigated boosted classifiers for duplicate information detection in [28], and we will investigate more complex lesion severity classifiers as additional data becomes available. While the developed classifiers perform with adequate accuracy (>90% for lesion detection), these results still need to be validated in larger studies to be practically useful. The proposed methodology also has wider applications to other common discrete IBD lesions, such as those due to nonsteroidal anti-inflammatory drugs.

Here we use the NI color classification tool to replace the normal-versus-disease classification before processing the remaining images to perform severity analysis using individual or boosted classifiers. Our statistical classification on ROIs for the experimental database performs comparably to whole images. We conjecture that this may be due to loss of relevant "normal" information and relative size information included in the whole images, or it may be an artifact of the specific ROIs exported here. We are currently exploring alternative ROIs; however, annotation of ROIs is an additional step in the clinical workflow, and from an application perspective it is unnecessary if whole images can provide satisfactory performance.

VIII. REFERENCES
[1] R. Kumar, Q. Zhao, S. Seshamani, G. Mullin, G. Hager, and T. Dassopoulos, "Assessment of Crohn's disease lesions in wireless capsule endoscopy images."
[2] J. Gerber, A. Bergwerk, and D. Fleischer, "A capsule endoscopy guide for the practicing clinician: Technology and troubleshooting."
[3] C. G. Relf, Image Acquisition and Processing with LabVIEW.
[4] J. Gerber, A. Bergwerk, and D. Fleischer, "A capsule endoscopy guide for the practicing clinician: Technology and troubleshooting," Gastrointest. Endosc., vol. 66, no. 6, pp. 1188–1195, 2007.
[5] T. Nakamura and A. Terano, "Capsule endoscopy: Past, present, and future," J. Gastroenterol., vol. 43, pp. 93–99, 2008.
[6] D. Comaniciu and P. Meer, "Robust analysis of feature spaces: Color image segmentation," 1997.
[7] Color Classification Interface Trainer, NI Vision Development Module.
[8] Y. Liu, D. Zhang, G. Lu, and W. Ma, "A survey of content-based image retrieval with high-level semantics," Pattern Recog., vol. 40, no. 1, pp. 262–282, 2007.
[9] H. Muller, N. Michoux, D. Bandon, and A. Geissbuhler, "A review of content-based image retrieval systems in medical applications—Clinical benefits and future directions," Int. J. Med. Inf., vol. 73, no. 1, pp. 1–23, 2003.
[10] J.-M. Cauvin, C. Le Guillou, B. Solaiman, M. Robaszkiewicz, P. Le Beux, and C. Roux, "Computer-assisted diagnosis system in digestive endoscopy," IEEE Trans. Inf. Technol. Biomed., vol. 7, no. 4, pp. 256–262, Dec. 2003.
[11] D. K. Iakovidis, D. E. Maroulis, and S. A. Karkanis, "An intelligent system for automatic detection of gastrointestinal adenomas in video endoscopy," Comput. Biol. Med., vol. 36, no. 10, pp. 1084–1103, 2006.
[12] S. Xia, D. Ge, W. Mo, and Z. Zhang, "A content-based retrieval system for endoscopic images," in Proc. Annu. Int. Conf. Eng. Med. Biol. Soc., Jan. 2005, vol. 2, pp. 1720–1723.
[13] M. Coimbra, P. Campos, and J. Cunha, "Topographic segmentation and transit time estimation for endoscopic capsule exams," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2006, vol. 2, pp. 1164–1167.
[14] S. Hwang, J. Oh, J. Cox, S. J. Tang, and H. F. Tibbals, "Blood detection in wireless capsule endoscopy using expectation maximization clustering," in Proc. SPIE, 2006, vol. 6144, pp. 577–587.
[15] L. Igual, S. Segui, J. Vitria, F. Azpiroz, and P. Radeva, "Eigenmotion-based detection of intestinal contractions," in Proc. Int. Conf. Computer Analysis of Images and Patterns (LNCS 4673). Berlin: Springer-Verlag, 2007, pp. 293–300.
[16] M. K. Bashar, K. Mori, Y. Suenaga, T. Kitasaka, and Y. Mekada, "Detecting informative frames from wireless capsule endoscopic video using color and texture features," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (LNCS 5242). Berlin: Springer-Verlag, 2008, pp. 603–611.
[17] J. Lee, J. Oh, S. K. Shah, X. Yuan, and S. J. Tang, "Automatic classification of digestive organs in wireless capsule endoscopy videos," presented at the ACM Symp. on Applied Computing, Seoul, Korea, 2007.
[18] M. Mackiewicz, M. Fisher, and C. Jamieson, "Bleeding detection in wireless capsule endoscopy using adaptive colour histogram model and support vector classification," in Proc. SPIE, 2008, vol. 6914, p. 69140R.
[19] F. Vilarino, P. Spyridonos, O. Pujol, J. Vitria, and P. Radeva, "Automatic detection of intestinal juices in wireless capsule video endoscopy," in Proc. 18th Int. Conf. Pattern Recog., Washington, DC: IEEE Computer Society, 2006, pp. 719–722.
[20] Y. Jung, Y. Kim, D. Lee, and J. Kim, "Active blood detection in a high resolution capsule endoscopy using color spectrum transformation," in Proc. Int. Conf. Biomed. Eng. Informat., 2008, vol. 1, pp. 859–862.
[21] S. Bejakovic, R. Kumar, T. Dassopoulos, G. Mullin, and G. Hager, "Analysis of Crohn's disease lesions in capsule endoscopy images," in Proc. Int. Conf. Robot. Autom., May 2009, pp. 2793–2798.
[22] R. Kumar, P. Rajan, S. Bejakovic, S. Seshamani, G. Mullin, T. Dassopoulos, and G. Hager, "Learning disease severity for capsule endoscopy images," in Proc. IEEE Int. Symp. Biomed. Imag., 2009, pp. 1314–1317.
[23] S. Seshamani, P. Rajan, R. Kumar, H. Girgis, G. Mullin, T. Dassopoulos, and G. Hager, "A boosted registration framework for lesion matching," in Proc. Med. Image Comput. Comput. Assist. Intervent., 2009, pp. 582–589.
[24] F. Schnitzler, H. Fidder, M. Ferrante, M. Noman, I. Arijs, G. Van Assche, I. Hoffman, K. Van Steen, S. Vermeire, and P. Rutgeerts, "Mucosal healing predicts long-term outcome of maintenance therapy with infliximab in Crohn's disease," Inflamm. Bowel Dis., vol. 15, no. 9, pp. 1295–1301, 2009.
[25] P. Rutgeerts, S. Vermeire, and G. Van Assche, "Mucosal healing in inflammatory bowel disease: Impossible ideal or therapeutic target?," Gut, vol. 56, no. 4, pp. 453–455, 2007.
[26] S. Tong and E. Chang, "Support vector machine active learning for image retrieval," in MULTIMEDIA '01: Proc. Ninth ACM Int. Conf. Multimedia, New York: ACM, 2001, pp. 107–118.
[27] C. Bishop, Pattern Recognition and Machine Learning. New York: Springer-Verlag, 2006.
[28] S. Seshamani, R. Kumar, G. Hager, T. Dassopoulos, and G. Mullin, "A meta method for image matching," IEEE Trans. Med. Imag., vol. 30, no. 8, pp. 1468–1479, 2011.
[29] J. Serra, Image Analysis and Mathematical Morphology. New York: Academic Press, 1982.
[30] C. Lantuéjoul, "La squelettisation et son application aux mesures topologiques des mosaïques polycristallines," Ph.D. thesis, École des Mines, Paris, 1978.
[31] S. Beucher and F. Meyer, "The morphological approach to segmentation: The watershed transformation," in Mathematical Morphology in Image Processing, E. R. Dougherty, Ed. New York: Marcel Dekker, 1993, ch. 12, pp. 433–481.
[32] S. Beucher and C. Lantuéjoul, "Use of watersheds in contour detection," in Proc. Int. Workshop on Image Processing, Real-Time Edge and Motion Detection/Estimation, Rennes, Sep. 1979.
[33] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, Oct. 2005.
[34] B. Manjunath, J. Ohm, V. Vasudevan, and A. Yamada, "Color and texture descriptors," IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 6, pp. 703–715, Jun. 2001.
[35] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision. Florence, KY: Thomson-Engineering (now Cengage Learning), 2007.
[36] H. Vu, T. Echigo, R. Sagawa, K. Yagi, M. Shiba, K. Higuchi, T. Arakawa, and Y. Yagi, "Contraction detection in small bowel from an image sequence of wireless capsule endoscopy," in Proc. Int. Conf. Medical Image Computing and Computer Assisted Intervention (LNCS 4791). Berlin: Springer-Verlag, 2007, pp. 775–783.