
Thanks to the significant growth of Computer Vision techniques, Image Classification problems are nowadays supported by a tremendous number of novel algorithms and technologies. However, no matter how robust these algorithms are, they require their input data, which is information extracted from images, to be meaningful and representative. "Texture" is considered one of the most important characteristics for depicting pictorial information and feeding it into classification models. It is a fundamental pattern element used in the human interpretation of color photographs and contains useful information for discrimination purposes [1]. From my perspective, comprehending this concept would help me progress more smoothly when encountering techniques built on or implementing "textural features" [1]. As a consequence, the question "How can textural features be found and applied to solve image classification or categorization problems in Computer Vision?" is relevant and essential to enhancing my learning.

Relevance of selected articles:

1. R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image
Classification," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3,
no. 6, pp. 610-621, Nov. 1973.
2. T. Ojala, M. Pietikainen and T. Maenpaa, "Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns," in IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, July 2002.

These two research papers are extremely relevant to my question because both study how to extract the "textural features" of images and how those features are applied to categorize images from real-life datasets. Haralick et al. [1] clearly explain the definition of "texture" and its relationship with other image characteristics such as "spectral" and "contextual" ones. They develop a procedure for extracting a set of textural features from an image under the assumption that this information is contained in the overall spatial relationships that the gray tones in the image have to one another [1]. Moreover, they introduce two classification algorithms that employ these textural features to categorize image blocks. Finally, experiments are conducted to investigate the usefulness of this methodology. Ojala et al. [2] note that real-world textures are often not uniform, exhibiting variations in orientation, scale, gray scale, etc.; hence, they introduce a more powerful approach to classifying images using gray-scale and rotation invariant texture features. Importantly, two experiments are also set up to justify their findings.

Critiques of articles:
The first research paper gives me an understandable definition of "textural features" and clearly describes a step-by-step methodology for computing them. The authors walk me through a procedure in which "Gray-Tone Spatial-Dependence Matrices" are first constructed to record how often one gray tone appears in a specified spatial relationship to another gray tone in the image [1]. Then, a set of 14 textural-feature measures is computed from these matrices, and some of the measures are used as inputs for training classifier models. The two classification algorithms introduced, the "Piecewise Linear Discriminant Function Method" and the "Min-Max Decision Rule", show how these "textural features" can serve as inputs for training models; however, in my opinion, the paper does not provide adequate mathematical derivations or explanations of their behavior [1]. Finally, Haralick et al. [1] conduct experiments to demonstrate the applicability and importance of these features in practical image classification. For example, on the "Satellite Imagery Data Set", the overall accuracy of the classifier using "texture features" is 83.5 percent, which outperforms that of a classifier using only "spectral features" (74-77 percent) [1].
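
To make this pipeline concrete, here is a minimal sketch of the co-occurrence step and a few of the derived measures. It relies on scikit-image's graycomatrix and graycoprops functions and a tiny synthetic image; the library, the image, and the parameter choices are my assumptions for illustration, not code from the paper.

```python
# Minimal sketch of the GLCM pipeline described above, using
# scikit-image (my tooling choice; not from the original paper).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Tiny synthetic 4-level gray-tone image (values in 0..3).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

# Co-occurrence matrices for distance 1 at the four angles Haralick
# uses (0, 45, 90, 135 degrees); symmetric and normalized so each
# matrix holds joint probabilities p(i, j).
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=4, symmetric=True, normed=True)

# A few of the 14 measures; graycoprops returns one value per
# (distance, angle) pair.
for prop in ("ASM", "contrast", "correlation"):
    print(prop, graycoprops(glcm, prop).round(3))
```

In the paper itself, Haralick et al. [1] then take the mean and range of each measure over the four angular matrices, so that the final feature set does not depend strongly on direction.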

The strength of the second research paper is threefold. Firstly, it points out the difficulty posed by real-world texture variations and argues convincingly that the proposed improvement is essential. The approach, developed from the joint distribution of the gray values of a circularly symmetric neighbor set of pixels in a local neighborhood, is not only invariant to "monotonic gray-scale transformation" and "rotation" but also computationally simple, a combination that earlier studies had not achieved [2]. Secondly, all of the properties mentioned above, and the terminology used throughout the paper, are justified and explained in an intuitive, hierarchical manner. Lastly, the two experiments clearly demonstrate the robustness of the method. The first examines the rotation-invariance property by training the classifier with samples at a single rotation angle and testing it with samples at nine other rotation angles, while the second examines the approach's invariance to gray-scale variations; excellent results are obtained in both.
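
As an illustration of the operator discussed above, the sketch below computes a rotation-invariant uniform LBP histogram with scikit-image's local_binary_pattern; the (P, R) = (8, 1) neighborhood, the random test image, and the library are my choices for the example, not the authors' implementation.

```python
# Minimal sketch of the rotation-invariant uniform LBP descriptor
# described above, via scikit-image (my tooling choice).
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

P, R = 8, 1  # 8 circularly symmetric neighbors at radius 1
# method="uniform" gives the gray-scale and rotation invariant
# uniform patterns; labels run 0..P for uniform codes plus one
# "non-uniform" bin, P + 2 labels in total.
lbp = local_binary_pattern(image, P, R, method="uniform")

# The texture descriptor is the normalized histogram of LBP labels.
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist.round(3))
```

Since the operator thresholds each neighbor against the center pixel, any monotonic gray-scale transformation leaves the binary patterns, and hence the histogram, unchanged, which is precisely the invariance probed by the second experiment.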

Lessons learned and reflection:

These papers give me an insight into the definition and application of "texture" in Computer Vision, as well as other related technical concepts that I have already encountered or will encounter when researching this topic. While both papers discuss methodologies for extracting "textural features" and applying them as inputs to classification models, the first paper gives me a better fundamental understanding of "texture", and the second gives me the idea of a more robust and prevalent technique utilizing this property. I have also learned some key points:
1. “Texture” contains information about the spatial distribution of tonal variations
within a neighborhood.
2. There are multiple methods to define and extract “Textural features” from
images.
3. Based on the four "Gray-Tone Spatial-Dependence Matrices", 14 textural features are computed: "Angular Second Moment", "Contrast", "Correlation", "Variance", "Inverse Difference Moment", "Sum Average", "Sum Variance", "Sum Entropy", "Entropy", "Difference Variance", "Difference Entropy", two "Information Measures of Correlation", and the "Maximal Correlation Coefficient" (two of these measures are evaluated by hand in the sketch after this list).
4. The operator for detecting "uniform Local Binary Patterns" can be applied to any quantization of the angular space and at any spatial resolution.
5. The image classification approach using "uniform Local Binary Patterns" is gray-scale invariant, rotation invariant, and computationally simple.
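
As referenced in point 3, here is a hand evaluation of two of these measures from a small normalized matrix; the matrix values are invented purely for illustration, and the base-2 logarithm is my choice rather than something the paper fixes.

```python
# Hand computation of two Haralick measures from a normalized
# gray-tone spatial-dependence matrix p (rows/cols index gray tones).
# The matrix values here are made up purely for illustration.
import numpy as np

p = np.array([[0.25, 0.10, 0.00],
              [0.10, 0.25, 0.05],
              [0.00, 0.05, 0.20]])
assert np.isclose(p.sum(), 1.0)  # p(i, j) are joint probabilities

# f1, Angular Second Moment: sum of squared entries; large for
# homogeneous textures dominated by a few gray-tone pairs.
asm = np.sum(p ** 2)

# f9, Entropy: -sum p * log p over nonzero entries; large when the
# gray-tone pairs are spread out evenly.
nz = p[p > 0]
entropy = -np.sum(nz * np.log2(nz))

print(f"ASM = {asm:.4f}, Entropy = {entropy:.4f} bits")
```

The two measures tend to move in opposite directions: ASM peaks when a few gray-tone pairs dominate, while Entropy peaks when all pairs are equally likely.
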
References:
[1] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image
Classification," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-
3, no. 6, pp. 610-621, Nov. 1973.
[2] T. Ojala, M. Pietikainen and T. Maenpaa, "Multiresolution gray-scale and
rotation invariant texture classification with local binary patterns," in IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp.
971-987, July 2002.
