(16CE4025)
Credits: 3.5
Internal Marks: 30
External Marks: 70
Structure of the Syllabus
UNIT I
Basics of Remote Sensing: Components of remote sensing -
Electromagnetic radiation, electromagnetic spectrum - EMR interaction
with atmosphere - EMR interaction with Earth Surface Materials -
Atmospheric Windows and its Significance.
UNIT II
Platforms, Sensors and Resolutions: Types of platforms - ground, airborne,
and space-borne platforms, Types and classification of sensors - Sensor
resolution - spatial, spectral, radiometric and temporal - Image data
characteristics - Digital image data formats - band interleaved by pixel,
band interleaved by line, band sequential
UNIT III
Image Analysis: Introduction, elements of visual interpretations, digital
image processing, image enhancement, image classification, supervised
classification, unsupervised classification
UNIT IV
Geographical Information System: Introduction, key components, map
projections, Data – Spatial and Non-Spatial, spatial data input, raster data
models, vector data models, raster versus vector.
UNIT V
Spatial data analysis: Introduction, overlay function-vector overlay
operations, arithmetic operators, comparison and logical operators,
conditional expressions, overlay using a decision table
RS and GIS Applications: Land use and land cover, agriculture, forestry,
geology, geomorphology, urban applications, flood zone delineation and
mapping.
UNIT-III
Image Analysis
Introduction :
Tone or color:
• Tone is the relative brightness of grey levels on a black-and-white image or
of colours on a colour/FCC image. Tone is a measure of the intensity of the
radiation reflected or emitted by the objects of the terrain.
• Objects with lower reflectance appear relatively dark and objects with higher
reflectance appear bright. Figure (a) represents a band imaged in the NIR region
of the electromagnetic spectrum. Water does not reflect in the NIR region, so
rivers appear black, while vegetation reflects strongly and appears bright.
• Our eyes can discriminate only 16-20 grey levels in a black-and-white photograph,
while more than a hundred colours can be distinguished in a colour photograph. In
multispectral imaging, an optimal set of three bands is used to generate a colour
composite image.
• A False Colour Composite (FCC) using the NIR, red and green bands is the most
preferred combination for visual interpretation. In a standard FCC, the NIR band
passes through the red channel, the red band through the green channel and the
green band through the blue channel.
• Vegetation reflects strongly in the NIR region of the electromagnetic spectrum;
therefore in a standard FCC vegetation appears red (Fig. b), which makes the FCC
well suited to vegetation identification.
Fig. Satellite image of area in (a) grey scale and in (b) standard FCC
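The band-to-channel assignment described above can be sketched as follows. This is a minimal illustration, assuming the three bands are available as 2-D NumPy arrays; the DN values used here are hypothetical.

```python
import numpy as np

# Hypothetical 2x2 single-band images (DN values), standing in for real data:
nir   = np.array([[200, 180], [30, 25]], dtype=np.uint8)  # vegetation bright, water dark
red   = np.array([[40, 35], [20, 18]], dtype=np.uint8)
green = np.array([[60, 55], [25, 22]], dtype=np.uint8)

# Standard FCC band-to-channel assignment:
#   NIR -> red channel, red -> green channel, green -> blue channel
fcc = np.dstack([nir, red, green])

# Vegetation pixels (high NIR, low red/green) are dominated by the red
# display channel, which is why vegetation appears red in a standard FCC.
print(fcc.shape)  # (rows, cols, 3)
```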
Size:
•Size of objects on images must be considered in the context of the image scale or
resolution. It is important to assess the size of a target relative to other objects in the
scene, as well as the absolute size, to aid in the interpretation of that target.
• A quick approximation of target size can lead directly to an appropriate
interpretation more quickly. The most commonly measured parameters are length,
width, perimeter, area, and occasionally volume.
•For example, if an interpreter had to distinguish zones of land use, and had
identified an area with a number of buildings in it, large buildings such as factories
or warehouses would suggest commercial property, whereas small buildings would
indicate residential use as shown in below Fig.
Shape:
•Shape refers to the general form, configuration or outline of an individual object.
Shape is one of the most important single factors for recognizing object from an
image.
•Generally regular shapes, squares, rectangles, circles are signs of man-made objects,
e.g., buildings, roads, and cultivated fields, whereas irregular shapes, with no
distinct geometrical pattern are signs of a natural environment, e.g., a river, forest.
• A common case of misinterpretation is between roads and rail lines: roads can
have sharp turns and perpendicular junctions, whereas rail lines do not.
• From the shape of the object in the following image, it can easily be said that
the dark-blue coloured object is a river, as shown in the below Fig.
Texture:
•Texture refers to the frequency of tonal variation in an image.
•Texture is produced by an aggregate unit of features which may be
too small to be clearly discerned individually on the image.
•It depends on shape, size, pattern and shadow of terrain features.
• Texture is always scale- or resolution-dependent. Objects with the same
reflectance may differ in texture, which helps in their identification.
• As an example, in a high-resolution image grassland and tree crowns may have
similar tone, but grassland will have a smoother texture than trees.
•Smooth texture refers to less tonal variation and rough texture refers
to abrupt tonal variation in an imagery or photograph.
Pattern:
•Pattern refers to the spatial arrangement of the objects. Objects both
natural and manmade have a pattern which aids in their recognition.
•The repetition of certain general form or relationship in tones and
texture creates a pattern, which is characteristic of this element in
image interpretation.
• In the below fig. it can easily be understood that, at the bottom left
corner of the image, there is a plantation, where the trees are nearly
equally spaced, whereas the upper right and bottom right corners show
natural vegetation.
Despite the similar colour/reflectance (green) throughout the image, three
distinct land cover types can be seen from the image texture. The triangular
patch at the bottom left corner is the plantation, which has a rough texture
where individual trees can be seen. At the top right of the image, the trees
are closer together and the tree canopies merge, forming a medium texture.
The bottom right corner has a smooth texture, indicating that it is probably
an open field with short grass.
Shadow:
•Shadow is a helpful element in image interpretation. It also creates difficulties for some
objects in their identification in the image.
•Knowing the time of photography, we can estimate the solar elevation/illumination, which
helps in height estimation of objects.
•The outline or shape of a shadow affords an impression of the profile view of objects.
But objects within shadow become difficult to interpret.
•Shadow is also useful for enhancing or identifying topography and landforms, particularly in
radar imagery.
• The pixel, the individual cell into which the image is divided, is usually
square for easy use in a computer, and carries at its centre an integer value
representing the average intensity of the cell (the cell value).
1) Linear Contrast Enhancement:
• In raw imagery, the useful data often populate only a small portion
of the available range of digital values (commonly 8 bits, or 256 levels).
•In our example, the minimum value (occupied by actual data) in the
histogram is 84 and the maximum value is 153. These 70 levels
occupy less than one-third of the full 256 levels available.
•A linear stretch uniformly expands this small range to cover the full
range of values from 0 to 255. This enhances the contrast in the
image with light toned areas appearing lighter and dark areas
appearing darker, making visual interpretation much easier. This
graphic illustrates the increase in contrast in an image before (left)
and after (right) a linear contrast stretch.
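The linear stretch described above can be sketched in a few lines, using the histogram limits from the text (minimum DN 84, maximum DN 153); the image values themselves are illustrative.

```python
import numpy as np

# A small image whose DNs lie in the narrow range [84, 153]:
img = np.array([[84, 100], [120, 153]], dtype=np.float64)

dn_min, dn_max = 84, 153  # occupied histogram limits from the text

# Linearly expand [dn_min, dn_max] to the full display range [0, 255]:
stretched = (img - dn_min) / (dn_max - dn_min) * 255.0
stretched = np.clip(np.round(stretched), 0, 255).astype(np.uint8)

print(stretched)  # 84 maps to 0, 153 maps to 255
```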
A uniform distribution of the input range of values across the full range
may not always be an appropriate enhancement, particularly if the
input range is not uniformly distributed. In this case, a histogram-
equalized stretch may be better.
•This stretch assigns more display values (range) to the frequently
occurring portions of the histogram. In this way, the detail in these
areas will be better enhanced relative to those areas of the original
histogram where values occur less frequently. In other cases, it may
be desirable to enhance the contrast in only a specific portion of the
histogram.
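A histogram-equalized stretch can be sketched as follows: display values are assigned via the cumulative distribution of DNs, so frequently occurring DNs receive more of the output range. The sample data are hypothetical.

```python
import numpy as np

# Hypothetical 8-bit image as a flat array of DNs; DN 10 occurs most often:
img = np.array([10, 10, 10, 10, 50, 50, 200, 255], dtype=np.uint8)

# Cumulative fraction of pixels at or below each DN level:
hist = np.bincount(img, minlength=256)
cdf = np.cumsum(hist) / img.size

# Map each pixel through the CDF: common DNs are spread further apart:
equalized = np.round(cdf[img] * 255).astype(np.uint8)

print(equalized)  # the four DN-10 pixels jump to the middle of the range
```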
2) Non-Linear Contrast Enhancement: In non-linear stretching, the DN values are not
stretched linearly to uniformly occupy the entire display range. Different non-linear contrast
stretching methods are available. Some of them are the following.
a) Histogram-equalized stretch
b) Piece-wise linear stretch
c) Logarithmic, power law or Gaussian stretch
Fig.2 (a) Sample histogram of an image and (b) Function used for histogram equalized stretch
b) Piece-wise linear stretch: In a piece-wise linear stretch, different linear
functions are used for enhancing the DN values in different ranges within the
same image. In other words, different parts of the histogram are stretched by
different amounts. This may be described as
DNout = a DNin + c
where the gain a and offset c take different values in each DN range.
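A piece-wise linear stretch amounts to linear interpolation between chosen breakpoints, with a different slope per segment. The breakpoints below are hypothetical choices, not values from the text.

```python
import numpy as np

# Sample input DNs spanning three ranges of the histogram:
img = np.array([0, 25, 50, 75, 100, 255], dtype=np.float64)

# Hypothetical breakpoints: range [50, 100] gets a much steeper stretch
# than [0, 50] or [100, 255], i.e. different (gain, offset) per segment.
in_pts = [0, 50, 100, 255]
out_pts = [0, 20, 200, 255]

stretched = np.round(np.interp(img, in_pts, out_pts)).astype(np.uint8)
print(stretched)
```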
Filtering:
• Filtering is usually carried out on a single band. It is used for spatial
image enhancement, for example to reduce noise or to sharpen blurred images.
• The gain is defined as 1 divided by the sum of all kernel coefficients (ki).
In general, the sum of the kernel coefficients, after multiplication by the
gain, should be equal to 1 to result in an image with approximately the same
range of grey values.
•The effect of using a kernel is illustrated in below figure, which shows how
the output value is calculated in terms of average filtering.
Fig. Input and output result of a filtering operation: the
neighbourhood in the original image determines the value of
the output. In this situation a smoothing filter was applied.
• The effect of applying this averaging filter is that the image becomes
blurred or smoothed. When dealing with the speckle effect in radar imagery,
the result of applying this filter is to reduce the speckle.
• To emphasize the value of the central pixel, a larger value can be put in the
centre of the kernel. As a result, less drastic blurring takes place. In addition,
it is necessary to take into account that the horizontal and vertical
neighbours should influence the result more strongly than the diagonal ones.
• The reason for this is that the direct neighbours are closer to the central
pixel. The resulting kernel, for which the gain is 1/16 = 0.0625, is given in
Table 2.
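The kernel described above, and the gain-weighted neighbourhood sum that produces one output pixel, can be sketched as follows; the 3x3 image patch is illustrative.

```python
import numpy as np

# Weighted smoothing kernel: centre emphasized, direct neighbours weighted
# more strongly than diagonal ones. Sum of coefficients = 16, so gain = 1/16.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64)
gain = 1.0 / kernel.sum()  # 1/16 = 0.0625

# A 3x3 neighbourhood with one bright central pixel (e.g. speckle):
img = np.array([[10, 10, 10],
                [10, 90, 10],
                [10, 10, 10]], dtype=np.float64)

# Output value for the central pixel: gain-weighted sum of the neighbourhood.
out_centre = gain * np.sum(kernel * img)
print(out_centre)  # the spike is smoothed toward its neighbours
```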
• An example of a feature vector is (13, 55), which tells us that DN 13 and DN 55
are stored for band 1 and band 2 respectively. This vector can be plotted in a
two-dimensional graph.
Fig 2 Plotting of the values of a pixel in the feature space for a two and three
band image.
•Similarly, this approach can be visualized for a three band situation in a
three dimensional graph. A graph that shows the values of the feature
vectors is called a feature space or feature space plot as shown in fig 2.
• It illustrates how a feature vector (related to one pixel) is plotted in the
feature space for two and three bands. In practice, two-axis feature space
plots are usually used.
•For four bands, this already yields six combinations: band1 and 2, 1 and 3, 1
and 4, bands 2 and 3, 2 and 4, and bands 3 and 4.
•Plotting the combinations of the values of all the pixels of one image yields
a large cluster of points. Such a plot is also referred to as a scatterplot as
shown in fig 3. A scatterplot provides information about the combinations of
pixel values that occur within the image. Note that some combinations will
occur more frequently and can be visualized by using intensity or colour.
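The feature-vector and band-pair ideas above can be sketched briefly; the band arrays are hypothetical, and only the (13, 55) example pixel comes from the text.

```python
import numpy as np
from itertools import combinations

# Two hypothetical single-band images; each pixel's DNs across the bands
# form its feature vector.
band1 = np.array([[13, 20], [30, 40]])
band2 = np.array([[55, 60], [65, 70]])

# Feature vector of the top-left pixel, matching the (13, 55) example:
fv = (int(band1[0, 0]), int(band2[0, 0]))

# For a four-band image, the possible two-band feature space plots:
pairs = list(combinations([1, 2, 3, 4], 2))
print(fv, len(pairs))  # six band-pair combinations
```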
Fig 3: Scatterplot of two bands of a digital image. Note the units (DN
values) along the x- and y- axes. The intensity at a point in the feature
space is related to the number of pixels at that point.
iii) Distances and clusters in the feature space: Distance in the feature
space is expressed as 'Euclidean distance', and the units are DN (as this is
the unit of the axes).
•Each cluster of feature vectors (class) occupies its own area in feature
space. Fig. 4 shows the basic assumption for image classification: a specific
part of the feature space corresponds to a specific class.
•Once the classes have been defined in the feature space, each image pixel
can be compared to these classes and assigned to the corresponding class.
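The assignment step above can be sketched as follows: compute the Euclidean distance (in DN) from a pixel's feature vector to each class centre and pick the nearest. The class means are hypothetical; only the (13, 55) feature vector echoes the earlier example.

```python
import numpy as np

# Feature vector of one pixel in a two-band feature space (band1, band2):
pixel = np.array([13.0, 55.0])

# Hypothetical class cluster centres in the feature space:
class_means = {"water": np.array([10.0, 50.0]),
               "vegetation": np.array([80.0, 120.0])}

# Euclidean distance (in DN) from the pixel to each class centre:
distances = {name: float(np.linalg.norm(pixel - mean))
             for name, mean in class_means.items()}

# Assign the pixel to its nearest class:
assigned = min(distances, key=distances.get)
print(assigned)
```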
SUPERVISED CLASSIFICATION
• The training samples are representative of the known classes of interest to the
analyst. Classification methods that rely on the use of training patterns are
called supervised classification methods. The three basic steps (Fig. 7) involved
in a typical supervised classification procedure are as follows:
•(i) Training stage: The analyst identifies representative training areas and develops
numerical descriptions of the spectral signatures of each land cover type of interest
in the scene.
• (ii) The classification stage: Each pixel in the image data set is categorised
into the land cover class it most closely resembles. If the pixel is
insufficiently similar to any training data set, it is usually labelled 'Unknown'.
•(iii) The output stage: The results may be used in a number of different ways.
Three typical forms of output products are thematic maps, tables and digital data
files which become input data for GIS. The output of image classification becomes
input for GIS for spatial analysis of the terrain. Fig. 8 depicts the flow of operations
to be performed during image classification of remotely sensed data of an area
which ultimately leads to create database as an input for GIS. Plate 6 shows the land
use/ land cover colour coded image, which is an output of image classification.
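The three stages can be sketched with a simple minimum-distance-to-means classifier, which is one possible supervised method (the text does not prescribe a particular one); the training samples, DN values and the distance threshold are all hypothetical.

```python
import numpy as np

# (i) Training stage: compute the mean spectral signature of each class
# from hypothetical training pixels (band1 DN, band2 DN):
training = {"water":      np.array([[10, 48], [12, 52]], dtype=float),
            "vegetation": np.array([[78, 118], [82, 122]], dtype=float)}
means = {cls: samples.mean(axis=0) for cls, samples in training.items()}

# (ii) Classification stage: label each pixel by its nearest class mean;
# pixels too far from every mean are labelled 'Unknown'.
def classify(pixel, means, max_dist=50.0):
    dists = {cls: float(np.linalg.norm(pixel - m)) for cls, m in means.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= max_dist else "Unknown"

# (iii) Output stage: one thematic label per pixel, ready as GIS input:
pixels = [np.array([11.0, 50.0]), np.array([80.0, 120.0]), np.array([200.0, 5.0])]
labels = [classify(p, means) for p in pixels]
print(labels)
```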
UNSUPERVISED CLASSIFICATION
• One way to perform a classification is to plot all pixels (all feature vectors) of
the image in a feature space, and then to analyze the feature space and to
group the feature vectors into clusters. The name of this process is
unsupervised classification.
• In this process there is no knowledge about "thematic" land cover class names,
such as town, road, cropland, etc. All it can do is find that there appear
to be (for example) 16 different "things" in the image and give them numbers
(1 to 16). Each of these "things" is called a "spectral class".
• The result can be a raster map, in which each pixel has a class (from 1 to 16),
according to the cluster to which the image feature vector of the
corresponding pixel belongs. After the process is finished, it is up to the user to
find the relationship between spectral and thematic classes. It may well turn
out that one thematic class is split into several spectral ones or, worse,
that several thematic classes ended up in the same cluster.
• Various unsupervised classification (clustering) algorithms exist.
Usually they are not completely automatic; the user must specify
some parameters, such as the (approximate) number of clusters to
obtain, the maximum cluster size (in the feature space), the minimum
distance (also in the feature space) allowed between different
clusters, etc.
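The clustering idea can be sketched with a tiny k-means loop, one common clustering algorithm (the text does not name one). The feature vectors, the choice k = 2 and the initial centres are illustrative; real implementations also expose parameters such as cluster size and minimum inter-cluster distance.

```python
import numpy as np

# Hypothetical feature vectors (band1 DN, band2 DN) for four pixels:
pixels = np.array([[10, 50], [12, 52], [80, 120], [82, 118]], dtype=float)
centres = pixels[[0, 2]].copy()  # initial cluster centres, k = 2

for _ in range(10):
    # Assign each feature vector to its nearest cluster centre (Euclidean):
    d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Recompute each centre as the mean of its members:
    centres = np.array([pixels[labels == k].mean(axis=0) for k in range(2)])

# 'labels' holds spectral class numbers only; relating them to thematic
# classes (water, vegetation, ...) is left to the user afterwards.
print(labels)
```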