
REMOTE SENSING AND GIS APPLICATIONS

UNIT II

Image analysis: Introduction, elements of visual interpretation, digital image processing - image
preprocessing, image enhancement, image classification, supervised classification, unsupervised
classification.

INTRODUCTION

Image analysis is the extraction of meaningful information from images, mainly from digital
images by means of digital image processing techniques. Image analysis tasks can be as simple as
reading bar-coded tags or as sophisticated as identifying a person from their face.

Computers are indispensable for the analysis of large amounts of data and for tasks that require
complex computation. On the other hand, the human visual cortex is an excellent image analysis
apparatus, especially for extracting higher-level information.

Remote sensing data can be analyzed using visual image interpretation techniques if the data
are in hardcopy or pictorial form. Visual interpretation is used extensively to locate specific features
and conditions, which are then geo-coded for inclusion in GIS. Visual image interpretation techniques
have certain disadvantages: they may require extensive training and are labor intensive. In this
technique, the spectral characteristics are not always fully evaluated because of the limited ability of
the eye to discern tonal values and analyze spectral changes. If the data are in digital form, the remote
sensing data can be analyzed using digital image processing techniques, and such a database can be
used in raster GIS. In applications where spectral patterns are more informative, it is preferable to
analyze digital data rather than pictorial data.

VISUAL INTERPRETATION

Interpretation is the process of extraction of qualitative and quantitative information about
objects from aerial photographs or satellite images. Interpretation is generally called image
interpretation, except when it is carried out on aerial photographs, in which case it is known as aerial
photo interpretation. Based on the mode of interpretation, it can be categorized into visual and digital
interpretation. Visual interpretation involves visual analysis of aerial photographs and satellite
images. When the interpretation is carried out with the help of computer software, it is known as
digital interpretation.
Comparison of merits and demerits of visual (human) and digital interpretation techniques

Visual (human) interpretation

Merits:
• Image analyst's experience and knowledge is available for interpretation
• Very good for extraction of spatial information

Demerits:
• Time consuming
• Interpretation results may vary with time and person depending upon their experience and knowledge

Digital interpretation

Merits:
• Time effective: requires much less time
• Results can be exactly reproduced any number of times
• Extraction of quantitative information is possible and easier

Demerits:
• Image analyst's experience and knowledge is not available
• Poor in extracting spatial information

ELEMENTS OF VISUAL INTERPRETATION

Visual interpretation of aerial photographs involves the study of various basic characteristics
of an object. In the case of satellite images, these characteristics of objects are studied with reference
to a single or multiple spectral bands, because more than one image is generally acquired in different
spectral regions of the electromagnetic spectrum.

However, the basic elements are tone, texture, shape, size, pattern, shadow, location and
association, similar to those used in aerial photo interpretation. Image interpretation employs a
combination of the following eight elements:

1. Tone 2. Size 3. Shape 4. Texture 5. Association

6. Shadow 7. Site 8. Pattern

A systematic study and visual interpretation of satellite images usually involves consideration of two
basic elements, namely image elements and terrain elements. Of the eight elements listed above, the
first seven comprise the image elements, and the eighth element, pattern, is the terrain element,
expressed in drainage, landform, erosion, soil, vegetation and land-use patterns. These elements are
shown in the order of their complexity in Fig.
Fig- Ternary plot showing the primary ordering of image elements that are fundamental to the image analysis
process

Fig- Diagrammatic representation of elements of visual image interpretation

1. TONE

Tone refers to the colour or relative brightness of an object in a colour image and the
relative and quantitative shades of gray in a black and white image. As studied earlier, the tonal
variation is due to the reflection, transmission or absorption characteristics of an object. These
may vary from one object to another and from one band to another. Tone is one of the most
basic elements because it is difficult to discern the other elements without tonal differences. In
general, a smooth surface tends to have higher reflectance than a rougher surface.

Tone in aerial photographs is influenced by the following factors:


• light reflectivity of the object
• angle of reflected light
• the geographic latitude
• type of photography and film sensitivity
• light transmission of filters and
• photographic processing.

Fig- Satellite imagery of Indira Gandhi National Open University campus at New Delhi. Place mark shows
the New Academic Complex

2. SIZE

Objects can be misinterpreted if their sizes are not evaluated properly. The size of objects
in an image is a function of scale; hence, size must be considered in the context of the scale of a
photograph/image. Although the third dimension, the height of objects, is not readily measurable
on satellite images, valuable information can be derived from the shadows of the objects. The size
of an object can be an important tool for its identification in two ways. First, the size of an image
object or feature is relative in relation to other objects on the image. This is probably the most
direct and important function of size, as it provides the interpreter with an intuitive notion of the
scale and resolution of an image even though no measurements or calculations may have been
made. This role is achieved by recognition of familiar objects such as dwellings, highways and
rivers, as shown in the figure. Second, absolute measurements can be equally valuable as
interpretation aids. You should remember that the size of an object in an image depends on the
scale and resolution of the image.
Fig- Variation in sizes and shapes in the images provides clues for different objects. (a) Automobiles, (b)
railway track, (c) baseball court, (d) trailer, (e) swimming pool and (f) a meandering river

3. SHAPE

Shape relates to the general form, configuration or outline of an individual object.
Shape is one of the most important single factors for recognizing objects from images (a-e).
Regular geometric shapes are usually indicators of human presence and use. Similarly,
irregular shapes are usually indicators of natural objects, as shown in Fig. f. Some objects can
be identified almost solely on the basis of their shapes. For example, a railway line is usually
readily distinguished from a highway or an unmetalled road because its shape consists of long
straight tangents and gentle curves, as opposed to the shape of a highway, as shown in Fig. b.
You should remember that the shape of an object viewed from above may be quite different
from its profile view. For planar objects, it is easier to calculate the areal dimensions on
imagery, e.g., the river shown in Fig. f. Features in nature often have such distinctive shapes
that shape alone might be sufficient to provide clear identification; e.g., beaches, ponds, lakes
and rivers occur in specific shapes unlike others found in nature.

4. TEXTURE

Texture is an expression of roughness or smoothness as exhibited by an image. It is
the rate of change of tonal values (frequency of tonal changes). Texture signifies the
frequency of change and arrangement of tones in an image and is produced by an aggregate
of unit features too small to be clearly recognized individually on an image.

Texture can be expressed qualitatively as coarse, moderate, fine, very fine, smooth,
rough, rippled and mottled. It is rather easier to distinguish various textural classes visually
than with digital techniques. Texture is thus dependent upon the tone, shape, size, pattern
and scale of the imagery, and is produced by a mixture of features that are too small to be
seen individually. For example, grass and water generally appear 'smooth' while trees or a
forest canopy may appear 'rough', as shown in Fig.

5. ASSOCIATION

Association is the occurrence of features in relation to their surroundings. Sometimes a
single feature by itself may not be distinctive enough to permit its identification. Association
specifies the occurrence of certain objects or features in association with a particular object or
feature.

Many features can be easily identified by examining the associated features. For
example, a primary school and a high school may be similar flat-roofed building structures,
but it may be possible to identify the high school by its association with an adjacent football
field.

6. SHADOW

Shadow is an especially important clue in the interpretation of objects in the following
two ways:

1. The outline or shape of a shadow provides a profile view of objects, which aids in
image interpretation and

2. Objects within shadow reflect little light and are difficult to discern on an image, which
hinders interpretation.

Taller features cast larger shadows than shorter features, as observed in the figure. Military
image interpreters are often primarily interested in the identification of individual items of
equipment. Shadow is significant in distinguishing subtle differences that may not otherwise
be visible.
Fig- Taller objects such as the Qutub Minar cast larger shadows than smaller objects such as buildings and
trees

7. SITE

Site refers to topographic position; for example, sewage treatment facilities are
positioned at low topographic sites near streams or rivers to collect waste flowing through the
system from higher locations. The relationship of a feature to surrounding features provides
clues towards its identity. You can also consider the example of certain tree species located in
areas of specific altitudes. Similarly, identification of landforms can help in deciphering the
underlying geology. Many rock types have distinct topographic expressions; for example,
some kinds of sedimentary rocks are typically exposed in the form of alternating ridge and
valley topography.

8. PATTERN

You have read about the seven image elements. It is now time to discuss the terrain
element, which is also a significant element in image interpretation. The terrain elements
include drainage, topography/landform, soil, vegetation and land-use patterns.

Pattern develops in an image due to spatial arrangement of objects. Hence, pattern can
be defined as the spatial arrangement of objects in an image. Certain objects can be easily
identified because of their pattern. A particular pattern may have a genetic relation with
several factors of its origin.
Fig- Satellite image showing Doon valley and surroundings. The drainage patterns and lithological
differences can be clearly observed

Fig- Monitoring land cover change over time. Here you can see that agricultural fields observed in the image
of year 1975 have been converted to human settlements by 2000

DIGITAL IMAGE PROCESSING

DEFINITION

 Digital Image Processing is the manipulation of digital data with the help of computer
hardware and software to produce digital maps in which specific information has been
extracted and highlighted.

Different stages in Digital Image Processing

Satellite remote sensing data in general, and digital data in particular, have been used
as basic inputs for the inventory and mapping of natural resources of the earth's surface such as
forestry, soils, geology and agriculture. Spaceborne remote sensing data suffer from a variety
of radiometric, atmospheric and geometric errors (due to sensor characteristics, the atmosphere,
the earth's rotation and so on). These distortions diminish the accuracy of the information
extracted and reduce the utility of the data, so the errors need to be corrected.

In order to update and compile maps with high accuracy, the satellite digital data have
to be manipulated using image processing techniques. Digital image data are fed into the
computer, the computer is programmed to insert these data into an equation or a series of
equations, and the results of the computation for each and every pixel are stored. These results
are called look-up-table (LUT) values for a new image that may be manipulated further to
extract information of the user's interest. Virtually all the procedures may be grouped into one
or more of the following broad types of operations, namely,

1. Pre-processing, and

2. Image Processing

1. Image Pre-processing

Remotely sensed raw data generally contain flaws and deficiencies introduced by the
imaging sensor mounted on the satellite. The correction of deficiencies and removal of flaws
present in the data are termed pre-processing. This correction involves correcting geometric
distortions, calibrating the data radiometrically and eliminating the noise present in the data.
All the pre-processing methods are considered under three heads, namely,

a) Radiometric correction method,


b) Atmospheric correction method,
c) Geometric correction methods.

a) Radiometric correction method


 Radiometric errors are caused by detector imbalance and atmospheric deficiencies.
Radiometric corrections are transformations on the data to remove these errors,
which are geometrically independent. Radiometric corrections are also called
cosmetic corrections and are done to improve the visual appearance of the image.
Some of the radiometric corrections are as follows:

1. Correction for missing lines

2. Correction for periodic line striping
3. Random noise correction

1. Correction for missing lines


 Line drop occurs due to recording problems when one of the detectors of the
sensor in question either given wrong data or stops functioning.
 The LANDSAT, for example has 16 detectors in all its bands, except the
thermal band.
 A loss of one of the detectors would result in every 16 th scan line being of zero
that would plot as a black line on the image.
 Dropped lines are normally corrected by replacing the line with pixel values in
the above or below, or with the average of the two.

Fig- Correction for missing lines
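A minimal sketch of this correction in Python, assuming the band is held as a 2-D NumPy array of DN values and the indices of the dropped lines are already known (the function and argument names are illustrative):

```python
import numpy as np

def fix_missing_lines(band, bad_rows):
    """Replace dropped scan lines with the average of the lines above
    and below. band: 2-D array of DN values; bad_rows: indices of the
    zero-valued lines (hypothetical helper, for illustration)."""
    fixed = band.astype(np.float64)
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < band.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0  # mean of the neighbouring lines
    return fixed.astype(band.dtype)
```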


2. Correction for periodic line striping

 Striping was common in early Landsat data due to variations and drift in the
response over time of the six detectors.
 The 'drift' was different for each of the six detectors, causing the same
brightness to be represented differently by each detector.
 The corrective process makes a relative correction among the six detectors to
bring their apparent values in line with each other.

Fig- Correction for periodic line striping
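A sketch of this relative correction, assuming a sensor with six detectors whose scan lines interleave every sixth row; each detector's lines are rescaled so that their mean and spread match those of the whole band:

```python
import numpy as np

def destripe(band, n_detectors=6):
    """Relative destriping sketch: match each detector's mean and
    standard deviation to those of the whole image."""
    out = band.astype(np.float64)
    mu_all, sd_all = out.mean(), out.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors]            # scan lines from one detector
        mu_d, sd_d = rows.mean(), rows.std()
        out[d::n_detectors] = (rows - mu_d) * (sd_all / (sd_d + 1e-12)) + mu_all
    return out
```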

3. Random noise correction

Fig- Random noise correction
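The source provides only a figure for this step; one common approach, shown here as an assumption, is a small moving-window median filter, which removes isolated noisy pixels without blurring edges as strongly as a mean filter:

```python
import numpy as np
from scipy.ndimage import median_filter

# Each pixel is replaced by the median of its 3x3 neighbourhood;
# 'noisy' is a stand-in for a real image band.
noisy = np.random.randint(0, 256, (512, 512)).astype(np.uint8)
cleaned = median_filter(noisy, size=3)
```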

b) Atmospheric correction method


The value recorded at any pixel location on a remotely sensed image is not a record
of the true ground-leaving radiance at that point, because the signal is attenuated due
to absorption and scattering. The atmosphere thus has an effect on the measured
brightness value of a pixel. Other difficulties are caused by variations in illumination
geometry. Atmospheric path radiance introduces haze into the imagery, thereby
decreasing the contrast of the data.
Fig- Major Effect due to Atmosphere

Fig- Atmospheric Correction
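One simple, widely used first-order haze correction (an assumption here, since the text names no specific method) is dark-object subtraction: the darkest pixels in a scene, such as deep water or shadow, should record near-zero radiance, so any offset they show is attributed to path radiance and removed from the whole band:

```python
import numpy as np

def dark_object_subtraction(band):
    """Sketch of dark-object subtraction (DOS): estimate the path-radiance
    offset from the darkest pixels and subtract it from the band."""
    dark_value = np.percentile(band, 0.1)   # robust darkest-pixel estimate
    corrected = band.astype(np.float64) - dark_value
    return np.clip(corrected, 0, None)      # keep values non-negative
```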

c) Geometric correction methods.


Remotely sensed images are not maps. Frequently, information extracted from
remotely sensed images is integrated with map data in a Geographical Information
System (GIS). The transformation of a remotely sensed image into a map with proper
scale and projection properties is called geometric correction. Geometric correction
of a remotely sensed image is required when the image is to be used in one of the
following circumstances:

1. To transform an image to match your map projection

2. To locate points of interest on map and image.


3. To overlay temporal sequence of images of the same area, perhaps acquired by different sensors.

4. To integrate remotely sensed data with GIS.

Fig- Geometric Correction Method

2. Image Processing

 The second stage in Digital Image Processing entails five different operations, namely,

A. Image Registration

B. Image Enhancement

C. Image Filtering

D. Image Transformation

E. Image Classification

A. Image Registration

 Image registration is the translation and alignment process by which two images/maps
of like geometries and of the same set of objects are positioned coincident with
respect to one another, so that corresponding elements of the same ground area appear
in the same place on the registered images. This is often called image-to-image
registration. One more important concept with respect to the geometry of a satellite
image is rectification. Rectification is the process by which the geometry of an image
area is made planimetric. This process almost always involves relating Ground
Control Point (GCP) pixel coordinates to a precise geometric reference, since each
pixel can then be referenced not only by row and column in the digital image but also
rigorously in degrees, feet or metres in a standard map projection. Whenever accurate
area, direction and distance measurements are required, geometric rectification is
required. This is often called image-to-map rectification.

Fig- Image Registration
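A sketch of GCP-based rectification under two illustrative assumptions: a first-order (affine) polynomial relating map coordinates to raw pixel coordinates, and nearest-neighbour resampling onto a regular map grid. All function and parameter names are hypothetical:

```python
import numpy as np

def fit_affine(gcp_map, gcp_img):
    """Least-squares fit of an affine map->image transform from GCPs.
    gcp_map: (N, 2) map coordinates (x, y); gcp_img: (N, 2) pixel
    coordinates (col, row). Returns a (3, 2) coefficient matrix."""
    A = np.hstack([gcp_map, np.ones((gcp_map.shape[0], 1))])  # [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, gcp_img, rcond=None)
    return coeffs

def rectify(image, coeffs, out_shape, map_origin, pixel_size):
    """Resample 'image' onto a regular map grid by nearest neighbour."""
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    x = map_origin[0] + cols * pixel_size
    y = map_origin[1] - rows * pixel_size        # y decreases downwards
    pts = np.stack([x.ravel(), y.ravel(), np.ones(x.size)], axis=1)
    src = pts @ coeffs                           # (col, row) in the raw image
    c = np.clip(np.round(src[:, 0]).astype(int), 0, image.shape[1] - 1)
    r = np.clip(np.round(src[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[r, c].reshape(out_shape)
```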

B. Image Enhancement

 Low sensitivity of the detector, weak signal from the objects present on the earth's
surface, similar reflectance of different objects and environmental conditions at the
time of recording are the major causes of low contrast of the image. Another problem
that complicates the photographic display of a digital image is that the human eye is
poor at discriminating the slight spectral or radiometric differences that may
characterize a feature.

 Image enhancement techniques improve the quality of an image as perceived by the
human eye. There exists a wide variety of techniques for improving image quality.
Contrast stretch, density slicing, edge enhancement and spatial filtering are the most
commonly used techniques.

 Image enhancement methods are applied separately to each band of a multispectral
image.
Fig- Contrast Enhancement by Histogram Stretch
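A minimal sketch of the linear contrast (histogram) stretch shown above, applied band by band as the text describes; the 2nd and 98th percentile cut-offs are an illustrative choice:

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Map the chosen percentile range of input DNs onto the full
    0-255 display range, clipping the tails."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```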

Fig- Density Slice for Surface Temperature Visualization


Fig- Edge Enhancement

Fig- Spatial filtering

C. Image Filtering

 A characteristic of remotely sensed images is a parameter called spatial frequency,
defined as the number of changes in brightness value per unit distance in any
particular part of an image. An area with few changes in brightness value is an area
of low frequency; conversely, where the brightness value changes dramatically over
short distances, the area is one of high frequency.
 Spatial filtering is the process of dividing the image into its constituent spatial
frequencies and selectively altering certain spatial features. This technique increases
the analyst's ability to discriminate detail.

 The three types of spatial filters used in remote sensor data processing are

1. Low Pass Filters,

2. Band Pass Filters, and

3. High Pass Filters

1. Low Pass Filters

Fig- Contrast-stretched image (left) and low pass filtered image (right)

2. Band Pass Filters

Fig- In the example, sinusoidal noise is added to an image and removed with a band pass filter

3. High Pass Filters

Fig- Original image (left) and high pass filtered image (right)
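Hedged sketches of the two kernel-based spatial filters (a band pass filter is usually built in the frequency domain and is omitted here); the 3x3 kernels are common textbook choices, not the source's own:

```python
import numpy as np
from scipy.ndimage import convolve

low_pass = np.ones((3, 3)) / 9.0                   # 3x3 moving average: smooths detail
high_pass = np.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]], dtype=float)  # centre-weighted kernel: sharpens edges

image = np.random.rand(256, 256)                   # stand-in for a real band
smoothed = convolve(image, low_pass, mode='nearest')
sharpened = convolve(image, high_pass, mode='nearest')
```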

D. Image Transformation

 All transformations of an image are based on arithmetic operations. The resulting
images may have properties that make them more suited for a particular purpose
than the original.

 Two transformations are commonly used. These techniques are

1. Normalised Difference Vegetation Index (NDVI)

2. Principal Component Analysis (PCA)

1. Normalised Difference Vegetation Index (NDVI)

Remotely sensed data are used extensively for large-area vegetation monitoring.
Typically, the spectral bands used for this purpose are the visible and near infrared bands.
Various mathematical combinations of these bands have been used for the computation of
NDVI, which is an indicator of the presence and condition of green vegetation. These
mathematical quantities are referred to as vegetation indices.
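The standard formulation is NDVI = (NIR - Red) / (NIR + Red), which ranges from -1 to +1, with high values over dense green vegetation and values near zero or below over bare soil and water. A sketch:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero
```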
Fig- Normalized Difference Vegetation Index of Honshu, Japan

Fig- Normalized Difference Vegetation Index of India

2. Principal Component Analysis (PCA)

The application of Principal Component Analysis (PCA) to raw remotely sensed data
produces a new image which is more interpretable than the original data. The main
advantage of PCA is that it may be used to compress the information content of a
number of bands into just two or three transformed principal component images.
Fig- The application of Principal Components Analysis (PCA) on raw remotely sensed data of an urban
area, followed by unsupervised classification
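A sketch of PCA on a multiband stack via the eigendecomposition of the band covariance matrix; the (n_bands, rows, cols) array layout is an assumption for illustration:

```python
import numpy as np

def principal_components(stack, n_components=3):
    """Project each pixel's band vector onto the leading eigenvectors
    of the band covariance matrix (illustrative sketch)."""
    n_bands = stack.shape[0]
    pixels = stack.reshape(n_bands, -1).astype(np.float64)
    pixels -= pixels.mean(axis=1, keepdims=True)   # centre each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(pixels))
    order = np.argsort(eigvals)[::-1]              # largest variance first
    pcs = eigvecs[:, order[:n_components]].T @ pixels
    return pcs.reshape(n_components, *stack.shape[1:])
```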
E. Image Classification

 Image classification is a procedure to automatically categorize all pixels in an image
of a terrain into land use and land cover classes. This concept is dealt with under a
broad subject, namely, “pattern recognition”. Spectral pattern recognition refers to the
family of classification procedures that utilize pixel-by-pixel spectral information as
the basis for automated land cover classification. Spatial pattern recognition classifies
image pixels on the basis of their spatial relationship with the surrounding pixels.

There are two approaches to image classification

1. Supervised classification: It is the process of identification of classes within remote
sensing data with inputs from, and as directed by, the user in the form of training data,
and

2. Unsupervised classification: It is the process of automatic identification of natural
groups or structures within remote sensing data.

Merits and demerits of supervised and unsupervised classification methods


1. Supervised Classification

Fig- Basic Steps in Supervised Classification

Supervised classification, as the name implies, requires human guidance. An analyst selects a
group of contiguous pixels from part of an image, known as a training area, that defines the DN
values in each channel for a class. A classification algorithm computes certain properties of the
set of training pixels, for example, the mean DN for each channel. Then, the DN values of each
pixel in the image are compared with the attributes of the training set.

This approach is based on the statistics of training areas representing different ground objects,
selected subjectively by users on the basis of their own knowledge or experience.
Classification is controlled by the user's knowledge but, on the other hand, is constrained and
may even be biased by their subjective view.
Fig- Using supervised classification, pixels are classified into different categories

Fig- Training data in supervised classification

In the following, we will discuss the parallelepiped and maximum likelihood algorithms
of supervised image classification.

a) Parallelepiped classifier

The parallelepiped classifier uses the class limits stored in each class signature to
determine whether a given pixel falls within a class or not. The class limits specify the
dimensions of each side of a parallelepiped surrounding the mean of the class in feature space.
If the pixel falls inside the parallelepiped, it is assigned to the class. However, if the pixel falls
within more than one class, it is put in the overlap class. If the pixel does not fall inside any
class, it is assigned to the null class.

Fig- Using the parallelepiped approach, pixel 1 is classified as forest and pixel 2 is classified as urban
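A sketch of this decision rule, including the overlap and null classes; the per-class box limits (for example, mean ± k standard deviations per band from the training data) are assumed to be given:

```python
import numpy as np

def parallelepiped_classify(pixels, class_limits):
    """pixels: (N, n_bands) DN vectors; class_limits: dict mapping a
    class name to (min_vector, max_vector) box limits per band."""
    labels = []
    for px in pixels:
        hits = [name for name, (lo, hi) in class_limits.items()
                if np.all(px >= lo) and np.all(px <= hi)]
        if len(hits) == 1:
            labels.append(hits[0])      # inside exactly one box
        elif hits:
            labels.append('overlap')    # inside more than one box
        else:
            labels.append('null')       # inside no box
    return labels
```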
b) Maximum Likelihood classifier

The maximum likelihood classifier is one of the most widely used classifiers in remote
sensing. In this method, a pixel is assigned to the class for which it has the maximum
likelihood of membership. This classification algorithm uses training data to estimate the
means and variances of the classes, which are then used to estimate the probabilities that a
pixel belongs to the different classes. Maximum likelihood classification considers not only
the mean or average values in assigning classification but also the variability of brightness
values in each class around the mean. It is the most powerful of the classification algorithms
as long as accurate training data are provided and certain assumptions regarding the
distributions of classes are valid.
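A sketch of the Gaussian maximum likelihood rule: with a mean vector and covariance matrix estimated from training data for each class, every pixel goes to the class with the highest log-likelihood. Names and array layout are illustrative:

```python
import numpy as np

def max_likelihood_classify(pixels, class_stats):
    """pixels: (N, n_bands); class_stats: dict mapping a class name to
    (mean_vector, covariance_matrix) from training data."""
    names = list(class_stats)
    scores = []
    for name in names:
        mu, cov = class_stats[name]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (logdet + maha))        # log-likelihood up to a constant
    best = np.argmax(np.stack(scores), axis=0)
    return [names[i] for i in best]
```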

2. Unsupervised Classification

As the name implies, this form of classification is done without interpretive guidance
from an analyst. An algorithm automatically organizes similar pixel values into groups that
become the basis for different classes. This is entirely based on the statistics of the image
data distribution and is often called clustering.

The result of an unsupervised classification is an image of statistical clusters, where
the classified image still needs interpretation based on knowledge of the thematic contents of
the clusters.

Fig- Steps of unsupervised classification

a) K-Means Clustering

The k-means algorithm assigns each pixel to a group based on an initial selection of mean
values. The iterative redefinition of groups continues till the means reach a threshold beyond
which they do not change. Pixels belonging to the groups are then classified using a
minimum-distance-to-means or other principle. The k-means clustering algorithm thus helps
split a given unknown dataset into a fixed number (k) of user-defined clusters. The objective
of the algorithm is to minimize variability within the clusters.

Fig- Unsupervised Classification
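A minimal k-means sketch following the steps above: choose k initial means, assign every pixel to its nearest mean, recompute the means, and stop once they no longer change:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """pixels: (N, n_bands) array; returns per-pixel labels and means."""
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)            # nearest-mean assignment
        new_means = np.array([pixels[labels == c].mean(axis=0)
                              if np.any(labels == c) else means[c]
                              for c in range(k)])
        if np.allclose(new_means, means):            # converged
            break
        means = new_means
    return labels, means
```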


b) ISODATA Clustering

The ISODATA (Iterative Self-Organising Data Analysis Technique) clustering method is an
extension of the k-means clustering method (ERDAS, 1999). It represents an iterative
classification algorithm and is useful when one is not sure of the number of clusters present in
an image. It is iterative because it makes a large number of passes through the remote sensing
dataset until specified results are obtained. Good results are obtained if all bands in the remote
sensing image have similar data ranges. It includes automated merging of similar clusters and
splitting of heterogeneous clusters.
