(IJCSIS) International Journal of Computer Science and Information Security, Vol.
application. Haralick et al. used gray level co-occurrence features to analyze remotely sensed images. Since we are interested in the interpretation of images, we can define texture as the characteristic variation in intensity of a region of an image which should allow us to recognize and describe it and outline its boundaries. The degrees of randomness and of regularity are the key measures when characterizing a texture. In texture analysis the similar textural elements that are replicated over a region of the image are called texels. This leads us to characterize textures in the following ways:
The texels will have various sizes and degrees of uniformity
The texels will be oriented in various directions
The texels will be spaced at varying distances in different directions
The contrast will have various magnitudes and variations
Various amounts of background may be visible between texels
The variations composing the texture may each have varying degrees of regularity

It is quite clear that a texture is a complicated entity to measure, primarily because many parameters are likely to be required to characterize it. Characterization
of materials is usually very difficult, and the goal of characterization depends on the application. In general, the aim is to give a description of the analyzed material, which can be, for example, the classification result for a finite number of classes or a visual exposition of the surfaces. It gives additional information compared to color or shape measurements of the objects alone. Sometimes it is not even possible to obtain color information at all, as in night vision with infrared cameras. Color measurements are usually more sensitive to varying illumination conditions than texture, making them harder to use in demanding environments such as outdoor conditions. Therefore texture measures can be very useful in many real-world applications, including, for example, outdoor scene image analysis.

To exploit texture in applications, the measures should be accurate in detecting different texture structures, yet remain invariant or robust under varying conditions that affect the texture appearance. Computational complexity should not be too high, in order to preserve realistic use of the methods. Different applications set various requirements on the texture analysis methods, and the selection of measures is usually done with respect to the specific application.

Typically, textures and the analysis methods related to them are divided into two main categories with different computational approaches: the stochastic and the structural methods. Structural textures are often man-made, with a very regular appearance consisting, for example, of line or square primitive patterns that are systematically located on the surface (e.g. brick walls). In structural texture analysis the properties and the appearance of the textures are described with rules that specify what kind of primitive elements there are in the surface and how they are located. Stochastic textures are usually natural and consist of randomly distributed texture elements, which again can be, for example, lines or curves (e.g. tree bark).
The analysis of these kinds of textures is based on statistical properties of image pixels and regions. The above categorization of textures is not the only possible one; several others exist as well, for example, artificial vs. natural, or micro textures vs. macro textures. Regardless of the categorization, texture analysis methods try to describe the properties of the textures in a proper way. It depends on the application what kind of properties should be sought from the textures under inspection and how to do that. This is rarely an easy task.

One of the major problems when developing texture measures is to include invariant properties in the features. It is very common in a real-world environment that, for example, the illumination changes over time and causes variations in the texture appearance. Texture primitives can also rotate and be located in many different ways, which also causes problems. On the other hand, if the features are too invariant, they might not be discriminative enough.

II.
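The statistical approach mentioned above — describing a texture through the statistical properties of its pixels — can be illustrated with simple first-order gray-level statistics, which depend only on the gray-level histogram of a region and not on pixel positions. The following is a minimal sketch in Python; the function name and the sample arrays are illustrative, not taken from the paper.

```python
import numpy as np

def first_order_stats(region):
    """First-order gray-level statistics of an image region.

    These are among the simplest statistical texture descriptors:
    mean and variance of intensity, and the entropy of the
    gray-level histogram.
    """
    g = np.asarray(region, dtype=np.float64).ravel()
    hist, _ = np.histogram(g, bins=256, range=(0, 256))
    p = hist / hist.sum()          # gray-level probabilities
    p_nz = p[p > 0]                # avoid log(0) for empty bins
    return {
        "mean": g.mean(),
        "variance": g.var(),
        "entropy": -np.sum(p_nz * np.log2(p_nz)),
    }

# Hypothetical 4x4 regions: a uniform patch vs. a two-level checkerboard
flat = np.full((4, 4), 128, dtype=np.uint8)
checker = np.indices((4, 4)).sum(axis=0) % 2 * 255

print(first_order_stats(flat))     # zero variance and zero entropy
print(first_order_stats(checker))  # high variance, 1 bit of entropy
```

Such histogram-based measures already separate a smooth surface from a high-contrast one, but because they ignore the spatial arrangement of gray values they cannot distinguish textures with the same histogram — which is exactly what the second-order (co-occurrence) statistics discussed next address.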
Image texture has a number of perceived qualities which play an important role in describing texture. One of the defining qualities of texture is the spatial distribution of gray values. The use of statistical features is therefore one of the early methods proposed in the machine vision literature. The gray-level co-occurrence matrix approach is based on studies of the statistics of pixel intensity distributions. The early paper by Haralick et al. presented 14 texture measures, and these were used successfully for classification of many types of materials, for example, wood, corn, grass and water. However, Conners and Harlow found that only five of these measures were normally used, viz. “energy”, “entropy”, “correlation”, “local homogeneity”, and “inertia”. The size of the co-occurrence matrix is high, and a suitable choice of d (distance) and θ
(angle) has to be made to get relevant features.

A novel texture energy approach was presented by Laws. This involved the application of simple filters to digital images. The basic filters he used were common Gaussian, edge detector, and Laplacian-type filters, designed to highlight points of high “texture energy” in the image. Ade investigated the theory underlying Laws’ approach and developed a revised rationale in terms of eigenfilters. Each eigenvalue gives the part of the variance of the original image that can be extracted by the corresponding filter. The filters that give rise to low variances can be taken to be relatively unimportant for texture recognition.

The structural models of
texture generally involve two major steps: (a) extraction of the texture elements, and (b) inference of the placement rule. An approach to model the texture by