UNIT II
Image analysis: Introduction, elements of visual interpretation, digital image processing- image
preprocessing, image enhancement, image classification, supervised classification, unsupervised
classification.
INTRODUCTION
Image analysis is the extraction of meaningful information from images, mainly from digital
images by means of digital image processing techniques. Image analysis tasks can be as simple as
reading bar-coded tags or as sophisticated as identifying a person from their face.
Computers are indispensable for the analysis of large amounts of data and for tasks that require
complex computation. On the other hand, the human visual cortex is an excellent image analysis
apparatus, especially for extracting higher-level information.
Remote sensing data can be analyzed using visual image interpretation techniques if the data
are in hardcopy or pictorial form. Visual interpretation is used extensively to locate specific features
and conditions, which are then geo-coded for inclusion in a GIS. Visual image interpretation
techniques have certain disadvantages: they may require extensive training and are labor intensive.
In this technique, the spectral characteristics are not always fully evaluated because of the limited
ability of the eye to discern tonal values and analyze spectral changes. If the data are in digital
mode, the remote sensing data can be analyzed using digital image processing techniques, and such
a database can be used in a raster GIS. In applications where spectral patterns are more informative,
it is preferable to analyze digital data rather than pictorial data.
VISUAL INTERPRETATION
Visual interpretation of aerial photographs involves the study of various basic characteristics
of an object. In the interpretation of satellite images, these characteristics of objects are studied
with reference to a single spectral band or to multiple spectral bands, because more than one image
is generally acquired in different spectral regions of the electromagnetic spectrum.
The basic elements are tone, texture, shape, size, pattern, shadow, location and
association, similar to those used in aerial photo interpretation. Image interpretation employs a
combination of the following eight elements.
A systematic study and visual interpretation of satellite images usually involves consideration of two
basic elements, namely image elements and terrain elements. Of the eight elements listed above,
the first seven comprise the image elements, and the eighth element, pattern, is the terrain element,
covering drainage, landform, erosion, soil, vegetation and land-use patterns. These elements are shown
in the order of their complexity in Fig.
Fig- Ternary plot showing the primary ordering of image elements that are fundamental to the image analysis
process
1. TONE
Tone refers to the colour or relative brightness of an object in a colour image and to the
relative and quantitative shades of gray in a black and white image. As studied earlier, tonal
variation is due to the reflection, transmission or absorption characteristics of an object. It
may vary from one object to another and from one band to another. Tone is one of the most
basic elements because it is difficult to discern the other elements without tonal differences. In
general, smooth surfaces tend to have a higher reflectance than rougher surfaces, which reflect
less.
Fig- Satellite imagery of Indira Gandhi National Open University campus at NewDelhi. Place mark shows
the New Academic Complex
2. SIZE
Objects can be misinterpreted if their sizes are not evaluated properly. The size of an object
in an image is a function of scale; hence, the size of an object must be considered in the context
of the scale of the photograph/image. Although the third dimension, the height of
the objects, is not readily measurable on satellite images, valuable information can be
derived from the shadows of the objects. The size of an object can be an important tool for its
identification in two ways. First, the size of an image object or feature is relative in relation
to other objects on the image. This is probably the most direct and important function of size,
as it provides the interpreter with an intuitive notion of the scale and resolution of an
image even though no measurements or calculations may have been made.
This role is achieved by the recognition of familiar objects such as dwellings, highways and
rivers, as shown in the figure. Second, absolute measurements can be equally valuable as
interpretation aids. You should remember that the size of an object in an image depends on the
scale and resolution of the image.
Fig- Variation in size and shapes in the images provides clues for different objects. (a) Automobiles, (b)
railway track, (c) baseball court, (d) trailer, (e) swimming pool and (f) a meandering river
3. SHAPE
Shape refers to the general form, configuration or outline of an individual object. Regular
geometric shapes and straight edges usually indicate man-made features, whereas natural
features tend to have irregular shapes.
4. TEXTURE
Texture refers to the frequency of tonal change in an image, produced by an aggregation of
unit features too small to be discerned individually. It is commonly described as smooth,
medium or rough.
5. ASSOCIATION
Many features can be easily identified by examining their associated features. For
example, a primary school and a high school may be similar flat-roofed building structures,
but it may be possible to identify the high school by its association with an adjacent football
field.
6. SHADOW
Shadow is important to interpreters in two opposing respects:
1. The outline or shape of a shadow provides a profile view of objects, which aids
image interpretation, and
2. Objects within shadow reflect little light and are difficult to discern on an image, which
hinders interpretation.
Taller features cast larger shadows than shorter features, as observed in the figure. Military image
interpreters are often primarily interested in the identification of individual items of equipment.
Shadow is significant in distinguishing subtle differences that may not otherwise be visible.
Fig- Taller objects such as the Qutub Minar cast larger shadow than smaller objects such as buildings and
trees
7. SITE
Site refers to the topographic position; for example, sewage treatment facilities are
positioned at low topographic sites near streams or rivers to collect waste flowing through the
system from higher locations. The relationship of a feature to its surrounding features provides
clues towards its identity. You can also consider the example of certain tree species located in
areas of specific altitudes. Similarly, identification of landforms can help in deciphering the
underlying geology. Many rock types have distinct topographic expressions; for
example, some kinds of sedimentary rocks are typically exposed in the form of alternating
ridge and valley topography.
8. PATTERN
You have read about the seven image elements. It is now time to discuss the
terrain element, which is also a significant element in image interpretation. The terrain
elements include drainage, topography/landform, soil, vegetation and land-use
patterns.
Pattern develops in an image due to the spatial arrangement of objects. Hence, pattern can
be defined as the spatial arrangement of objects in an image. Certain objects can be easily
identified because of their pattern. A particular pattern may have a genetic relation with
several factors of its origin.
Fig- Satellite image showing Doon valley and surroundings. The drainage patterns and lithological
differences can be clearly observed
Fig- Monitoring land cover change over time. Here you can see agricultural field as observed in the image
of year 1975 has been converted to human settlements in 2000
DEFINITION
Digital image processing is the manipulation of digital data with the help of
computer hardware and software to produce digital maps in which specific
information has been extracted and highlighted.
Satellite remote sensing data in general, and digital data in particular, have been used
as basic inputs for the inventory and mapping of natural resources of the earth's surface, such as
forestry, soils, geology and agriculture. Spaceborne remote sensing data suffer from a
variety of radiometric, atmospheric and geometric errors, effects of the earth's rotation and so on. These
distortions diminish the accuracy of the information extracted and reduce the utility of
the data, so these errors need to be corrected.
In order to update and compile maps with high accuracy, the satellite digital data have
to be manipulated using image processing techniques. Digital image data are fed into the
computer, and the computer is programmed to insert these data into an equation or a series of
equations and then store the result of the computation for each and every pixel. These results
are called look-up-table (LUT) values for a new image that may be manipulated further to
extract information of the user's interest. Virtually all the procedures may be grouped into one or
more of the following broad types of operations, namely,
1. Pre-processing, and
2. Image Processing
1. Image Pre-processing
Remotely sensed raw data generally contain flaws and deficiencies received from the
imaging sensor mounted on the satellite. The correction of deficiencies and removal of flaws
present in the data through suitable methods is termed pre-processing. This correction
involves correcting geometric distortions, calibrating the data radiometrically and
eliminating the noise present in the data. All the pre-processing methods are
considered under these three heads, namely geometric correction, radiometric correction and
noise removal.
Striping was common in early Landsat data due to variations and drift over time in the
response of the six detectors.
The 'drift' was different for each of the six detectors, causing the same
brightness to be represented differently by each detector.
The corrective process made a relative correction among the six detectors to
bring their apparent values in line with each other.
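The relative correction described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Landsat calibration procedure: it simulates six drifting detectors by adding an offset to every sixth row, then rescales each detector's rows so that their mean and spread match the statistics of the whole image.

```python
import numpy as np

# Hypothetical striped scene: rows 0, 6, 12, ... come from detector 0,
# rows 1, 7, 13, ... from detector 1, and so on for six detectors
rng = np.random.default_rng(0)
image = rng.normal(100.0, 10.0, size=(12, 8))
offsets = np.array([0.0, 5.0, -5.0, 10.0, -10.0, 3.0])  # simulated drift
for d in range(6):
    image[d::6] += offsets[d]

# Relative correction: bring each detector's mean and spread in line
# with the statistics of the whole image
target_mean, target_std = image.mean(), image.std()
corrected = image.copy()
for d in range(6):
    rows = corrected[d::6]
    corrected[d::6] = (rows - rows.mean()) / rows.std() * target_std + target_mean

# After correction the per-detector means agree with each other
det_means = [corrected[d::6].mean() for d in range(6)]
```

After the correction, all six per-detector means coincide with the image mean, so the striping pattern disappears.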
2. Image Processing
The second stage in digital image processing entails five different operations, namely,
A. Image Registration
B. Image Enhancement
C. Image Filtering
D. Image Transformation
E. Image Classification
A. Image Registration
Image registration is the translation and alignment process by which two images/maps
of like geometry and of the same set of objects are positioned coincident with
respect to one another, so that corresponding elements of the same ground area appear
in the same place on the registered images. This is often called image-to-image
registration. Another important concept with respect to the geometry of a satellite image
is rectification. Rectification is the process by which the geometry of an image area is
made planimetric. This process almost always involves relating Ground Control Point
(GCP) pixel coordinates to a precise geometric reference, so that each pixel can be
referenced not only by its row and column in the digital image but also rigorously
in degrees, feet or metres in a standard map projection. Whenever accurate area,
direction and distance measurements are required, geometric rectification is
needed. This is often called image-to-map rectification.
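The GCP-based rectification described above can be sketched with a least-squares affine fit. The control points and map coordinates below are invented for illustration; real rectification software may also use higher-order polynomial or rigorous sensor models.

```python
import numpy as np

# Hypothetical ground control points: (row, col) pixel coordinates paired
# with map coordinates (x, y) in metres in some standard projection
pixel = np.array([[10., 20.], [40., 25.], [15., 60.], [50., 55.]])
mapxy = np.array([[500040., 799980.],
                  [500050., 799920.],
                  [500120., 799970.],
                  [500110., 799900.]])

# Fit a 2-D affine transform  [x, y] = [row, col, 1] @ A  by least squares
G = np.hstack([pixel, np.ones((len(pixel), 1))])
A, *_ = np.linalg.lstsq(G, mapxy, rcond=None)

def pixel_to_map(row, col):
    """Reference a pixel in map units rather than by row and column."""
    return np.array([row, col, 1.0]) @ A
```

With the transform fitted, any pixel, not just the GCPs, can be rigorously referenced in map units, which is exactly what rectification provides.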
B. Image Enhancement
Low sensitivity of the detector, weak signals from the objects present on the earth's
surface, similar reflectance of different objects and environmental conditions at the
time of recording are the major causes of the low contrast of an image. Another problem
that complicates the photographic display of a digital image is that the human eye is poor at
discriminating the slight spectral or radiometric differences that may characterize a
feature. Image enhancement techniques improve the visual interpretability of an image by
increasing the apparent contrast among its features.
C. Image Filtering
The three types of spatial filters used in remote sensor data processing are low-pass,
band-pass and high-pass filters.
Fig- In the example, we add some sinusoidal noise to an image and filter it with a band pass filter
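Low-pass and high-pass filtering can be sketched with a simple moving-average kernel. This is an illustrative example on synthetic data, not the band-pass procedure of the figure: a 3x3 mean kernel smooths the image (low-pass), and subtracting the smoothed image from the original leaves the detail component (high-pass).

```python
import numpy as np

# A 3x3 mean kernel acts as a low-pass filter; subtracting the smoothed
# image from the original gives a simple high-pass (edge/detail) image.
rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, size=(16, 16))  # synthetic noisy band

kernel = np.ones((3, 3)) / 9.0

def convolve2d(img, k):
    """Naive 'valid' 2-D convolution with a small kernel."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

low_pass = convolve2d(image, kernel)
high_pass = image[1:-1, 1:-1] - low_pass  # detail left after smoothing
```

Smoothing reduces the pixel-to-pixel variation, which is why low-pass filtering suppresses random noise while high-pass filtering emphasizes edges.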
D. Image Transformation
Two transformations are commonly used: vegetation indices and Principal Component
Analysis (PCA). Remotely sensed data are used extensively for large-area vegetation monitoring;
typically, the spectral bands used for this purpose are the visible and near infrared bands.
Various mathematical combinations of these bands have been used for the computation of the
NDVI, which is an indicator of the presence and condition of green vegetation. These
mathematical quantities are referred to as vegetation indices.
Fig-Normalized Difference Vegetation Index of Honshu, Japan
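The NDVI computation mentioned above is the ratio (NIR - Red) / (NIR + Red). The reflectance values below are invented for illustration; healthy vegetation, which reflects strongly in the near infrared and absorbs red light, yields values approaching +1.

```python
import numpy as np

# Hypothetical reflectance values for a 2x2 patch of pixels
red = np.array([[0.10, 0.30],
                [0.08, 0.25]])
nir = np.array([[0.60, 0.32],
                [0.55, 0.28]])

# NDVI = (NIR - Red) / (NIR + Red), always in the range [-1, +1]
ndvi = (nir - red) / (nir + red)
```

In this sketch the left column (strong NIR, weak red) represents vegetated pixels with high NDVI, while the right column (similar NIR and red) represents bare or built-up surfaces with NDVI near zero.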
The application of Principal Component Analysis (PCA) to raw remotely sensed data
produces a new image which is more interpretable than the original data. The main
advantage of PCA is that it may be used to compress the information content of a
number of bands into just two or three transformed principal component images.
Fig- The application of Principal Components Analysis (PCA) on raw remotely sensed data of urban
Area by Unsupervised Classification.
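The band-compression property of PCA can be sketched as follows. The four "bands" here are synthetic and deliberately built from one shared underlying signal, mimicking the strong inter-band correlation of real multispectral data; the decomposition itself is standard eigen-analysis of the band covariance matrix.

```python
import numpy as np

# Four hypothetical, highly correlated bands built from one shared signal
rng = np.random.default_rng(0)
n_pixels = 500
signal = rng.normal(size=(n_pixels, 1))
gains = np.array([[1.0, 0.5, -0.8, 0.3]])
bands = signal @ gains + 0.1 * rng.normal(size=(n_pixels, 4))

# PCA: eigen-decomposition of the band covariance matrix
centred = bands - bands.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]            # largest variance first
components = centred @ eigvecs[:, order]     # principal component images

# Fraction of total variance captured by the first component
explained = eigvals[order][0] / eigvals.sum()
```

Because the four bands are highly correlated, the first principal component alone captures nearly all of the variance, which is exactly why two or three component images can replace the full band set.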
E. Image Classification
Supervised classification, as the name implies, requires human guidance. An analyst selects a
group of contiguous pixels from part of an image, known as a training area, that defines the DN
values in each channel for a class. A classification algorithm computes certain properties of the
set of training pixels, for example, the mean DN for each channel. Then, the DN values of each pixel
in the image are compared with the attributes of the training set.
This approach is based on the statistics of training areas representing different ground objects
selected subjectively by users on the basis of their own knowledge or experience.
Classification is controlled by the user's knowledge but, on the other hand, is constrained and may
even be biased by their subjective view.
Fig- Using supervised classification
Fig- Training data in supervised classification
1. Supervised Classification
a) Parallelepiped classifier
The parallelepiped classifier uses the class limits stored in each class signature to
determine whether a given pixel falls within a class or not. The class limits specify the dimensions
of each side of a parallelepiped surrounding the mean of the class in feature space. If the pixel
falls inside a parallelepiped, it is assigned to that class. However, if the pixel falls within more than
one class, it is put in the overlap class. If the pixel does not fall inside any class, it is assigned
to the null class.
Fig- Using the parallelepiped approach, pixel 1 is classified as forest and pixel 2 is classified as urban
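The box test described above can be sketched directly. The class names and DN limits below are hypothetical two-band training limits invented for illustration.

```python
import numpy as np

# Hypothetical class limits: (min DN, max DN) per band for each class
limits = {
    "forest": (np.array([20, 60]), np.array([40, 90])),
    "urban":  (np.array([70, 30]), np.array([95, 55])),
}

def classify(pixel):
    """Assign a pixel to the class whose parallelepiped contains it."""
    hits = [name for name, (lo, hi) in limits.items()
            if np.all(pixel >= lo) and np.all(pixel <= hi)]
    if len(hits) == 1:
        return hits[0]
    return "overlap" if hits else "null"
```

A pixel inside exactly one box takes that class; a pixel inside two boxes goes to the overlap class, and a pixel inside none goes to the null class, mirroring the three cases in the text.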
b) Maximum Likelihood classifier
The maximum likelihood classifier is one of the most widely used classifiers in remote
sensing. In this method, a pixel is assigned to the class for which it has the maximum likelihood
of membership. The classification algorithm uses training data to estimate the means and
variances of the classes, which are then used to estimate the probabilities that a pixel belongs to
the different classes. Maximum likelihood classification considers not only the mean or average
values in assigning classification but also the variability of brightness values in each class
around the mean. It is the most powerful of the classification algorithms as long as accurate
training data are provided and certain assumptions regarding the distributions of classes are
valid.
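Under the usual assumption that each class follows a multivariate Gaussian distribution, the decision rule above can be sketched as comparing class log-likelihoods. The class means and covariance matrices below are hypothetical statistics standing in for values estimated from training data.

```python
import numpy as np

# Hypothetical per-class (mean, covariance) estimated from training areas
stats = {
    "water":  (np.array([15.0, 10.0]), np.array([[4.0, 0.0], [0.0, 4.0]])),
    "forest": (np.array([30.0, 70.0]), np.array([[25.0, 5.0], [5.0, 30.0]])),
}

def log_likelihood(pixel, mean, cov):
    """Log of the multivariate Gaussian density (constant term dropped)."""
    diff = pixel - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + diff @ np.linalg.inv(cov) @ diff)

def classify(pixel):
    """Assign the pixel to the class with the greatest log-likelihood."""
    return max(stats, key=lambda c: log_likelihood(pixel, *stats[c]))
```

Note that the covariance matrix enters the rule, which is how this classifier accounts for the variability of brightness values around each class mean, not just the mean itself.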
2. Unsupervised Classification
As the name implies, this form of classification is done without interpretive guidance
from an analyst. An algorithm automatically organizes similar pixel values into groups that
become the basis for different classes. This approach is based entirely on the statistics of the
image data distribution and is often called clustering.
a) K-Means Clustering
The k-means algorithm assigns each pixel to a group based on an initial selection of mean values.
The iterative redefinition of the groups continues until the change in the means falls below a
threshold. Pixels belonging to the groups are then classified using a minimum-distance-to-means
or other principle. The k-means clustering algorithm thus helps split a given
unknown dataset into a fixed, user-defined number (k) of clusters. The objective of the
algorithm is to minimize the variability within each cluster.
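The iteration described above can be sketched as follows. The two "spectral clusters" are synthetic, and the initial means are deterministically taken from the data for illustration; practical implementations choose initial means randomly and stop when the means change by less than a threshold rather than after a fixed number of iterations.

```python
import numpy as np

def kmeans(pixels, init_means, n_iter=20):
    """Minimal k-means sketch: assign each pixel to its nearest mean,
    then recompute each mean from the pixels assigned to it."""
    means = init_means.astype(float).copy()
    for _ in range(n_iter):
        # distance of every pixel to every current mean
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(means)):
            if np.any(labels == j):
                means[j] = pixels[labels == j].mean(axis=0)
    return labels, means

# Two well-separated hypothetical spectral clusters (two bands)
rng = np.random.default_rng(0)
cluster_a = rng.normal([20.0, 20.0], 1.0, size=(50, 2))
cluster_b = rng.normal([80.0, 80.0], 1.0, size=(50, 2))
pixels = np.vstack([cluster_a, cluster_b])

# Initial means taken from the data (one pixel from each cluster)
labels, means = kmeans(pixels, init_means=pixels[[0, 50]])
```

Because the clusters are tight and far apart, the algorithm recovers them exactly, and the final means settle at the cluster centres, which minimizes the within-cluster variability.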