
REMOTE SENSING & GIS

(16CE4025)
Credits: 3.5
Internal Marks: 30
External Marks: 70
Structure of the Syllabus
UNIT I
Basics of Remote Sensing: Components of remote sensing -
Electromagnetic radiation, electromagnetic spectrum - EMR interaction
with atmosphere - EMR interaction with Earth Surface Materials -
Atmospheric Windows and its Significance.

UNIT II
Platforms, Sensors and Resolutions: Types of platforms - ground, airborne,
and space borne platforms, Types and classification of sensors - Sensor
resolution - spectral, radiometric and temporal - Image data
characteristics - Digital image data formats - band interleaved by pixel,
band interleaved by line, band sequential.
UNIT III
Image Analysis: Introduction, elements of visual interpretations, digital
image processing, image enhancement, image classification, supervised
classification, unsupervised classification

UNIT IV
Geographical Information System: Introduction, key components, map
projections, Data – Spatial and Non-Spatial, spatial data input, raster data
models, vector data models, raster versus vector.

UNIT V
Spatial data analysis: Introduction, overlay function-vector overlay
operations, arithmetic operators, comparison and logical operators,
conditional expressions, overlay using a decision table
 
RS and GIS Applications: Land use and Land cover , agriculture, forestry,
geology, geomorphology, urban applications, flood zone delineation and
mapping.
UNIT-III
Image Analysis
Introduction:

Image: A picture or representation of an object or scene on paper or a display screen. Remotely sensed images are digital representations of the Earth.

Image Analysis is the extraction of meaningful information from images.

Image Interpretation: It is the act of examining images for the purpose of identifying objects and judging their significance.

• Interpreters study remotely sensed data and attempt, through logical processes, to detect, identify, measure and evaluate the significance of environmental and cultural objects, patterns and spatial relationships.
Elements of Visual Image Interpretation: 
• Image is a pictorial representation of the energy reflected
and / or emitted from the scene in different parts of the
electromagnetic spectrum.
• These appear in many shapes, tones, sizes etc. on an image depending upon the scene, source and sensor characteristics. An understanding of these image elements is essential to describe the perception and significance of the objects.
• Nine image elements have been identified, of which a few are interconnected. The order of these elements, given below, is not strictly hierarchical.
• Primary : Tone / Colour
• Secondary : Size, Shape and Texture
• Tertiary : Pattern, Height and Shadow
• Higher : Site and Association

Tone or color:
• Tone is the relative brightness of grey level on a black-and-white image, or colour on a colour/FCC image. Tone is a measure of the intensity of the radiation reflected or emitted by objects in the terrain.

• Objects with lower reflectance appear relatively dark and objects with higher reflectance appear bright. Figure (a) shows a band imaged in the NIR region of the electromagnetic spectrum: rivers do not reflect in the NIR region and thus appear black, while vegetation reflects strongly and thus appears bright.
• Our eyes can discriminate only 16-20 grey levels in a black-and-white photograph, while hundreds of colours can be distinguished in a colour photograph. In multispectral imaging, an optimal set of three bands is used to generate a colour composite image.
• A False Colour Composite (FCC) using the NIR, red and green bands is the most preferred combination for visual interpretation. In a standard FCC, the NIR band is displayed through the red channel, the red band through the green channel and the green band through the blue channel.
• Vegetation reflects strongly in the NIR region of the electromagnetic spectrum; therefore, in a standard FCC vegetation appears red (Fig. b), which makes the FCC well suited to vegetation identification.

Fig. Satellite image of area in (a) grey scale and in (b) standard FCC
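As an illustration, the sketch below builds a standard FCC by stacking the NIR, red and green bands into the red, green and blue display channels. It assumes the three bands are already available as 2-D NumPy arrays (the variable names are hypothetical) and uses a simple min-max scaling for display.

```python
import numpy as np

def standard_fcc(nir, red, green):
    """Stack three bands into a standard False Colour Composite:
    NIR -> red channel, red -> green channel, green -> blue channel."""
    def to_byte(band):
        # Simple min-max scaling of one band to 0-255 for display.
        band = band.astype(np.float64)
        lo, hi = band.min(), band.max()
        return np.uint8(255.0 * (band - lo) / (hi - lo + 1e-9))
    # Last axis holds the display channels in (R, G, B) order.
    return np.dstack([to_byte(nir), to_byte(red), to_byte(green)])

# Usage (hypothetical band arrays): fcc = standard_fcc(nir_band, red_band, green_band)
# Healthy vegetation, which is bright in NIR, then appears red in the composite.
```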
Size:
•Size of objects on images must be considered in the context of the image scale or
resolution. It is important to assess the size of a target relative to other objects in the
scene, as well as the absolute size, to aid in the interpretation of that target.
• A quick approximation of target size can lead to an appropriate interpretation more quickly. The most commonly measured parameters are length, width, perimeter, area and, occasionally, volume.
•For example, if an interpreter had to distinguish zones of land use, and had
identified an area with a number of buildings in it, large buildings such as factories
or warehouses would suggest commercial property, whereas small buildings would
indicate residential use as shown in below Fig.
Shape:
•Shape refers to the general form, configuration or outline of an individual object.
Shape is one of the most important single factors for recognizing object from an
image.
•Generally regular shapes, squares, rectangles, circles are signs of man-made objects,
e.g., buildings, roads, and cultivated fields, whereas irregular shapes, with no
distinct geometrical pattern are signs of a natural environment, e.g., a river, forest.
• A common case of misinterpretation is between roads and railway lines: roads can have sharp turns and perpendicular junctions, but railway lines do not.
• From the shape of the object in the figure below, it can easily be said that the dark blue coloured object is a river.
Texture:
•Texture refers to the frequency of tonal variation in an image.
•Texture is produced by an aggregate unit of features which may be
too small to be clearly discerned individually on the image.
• It depends on the shape, size, pattern and shadow of terrain features.
• Texture is always scale or resolution dependent. Objects with the same reflectance may differ in texture, and this difference helps in their identification.
• As an example, in a high resolution image grassland and tree crowns may have a similar tone, but grassland will have a smooth texture compared to trees.
•Smooth texture refers to less tonal variation and rough texture refers
to abrupt tonal variation in an imagery or photograph.
Pattern:
•Pattern refers to the spatial arrangement of the objects. Objects both
natural and manmade have a pattern which aids in their recognition.
•The repetition of certain general form or relationship in tones and
texture creates a pattern, which is characteristic of this element in
image interpretation.
• In the figure below it can easily be seen that the area at the bottom left corner of the image is a plantation, where the trees are nearly equally spaced, whereas the upper right and bottom right corners show natural vegetation.
Although the colour/reflectance (green) is similar throughout the image, three distinct land cover types can be distinguished from the image texture. The triangular patch at the bottom left corner is the plantation, which has a rough texture where individual trees can be seen. Towards the top right of the image the trees are closer together and the tree canopies merge, forming a medium textural pattern. The bottom right corner has a smooth texture, indicating that it is probably an open field with short grass.
Shadow:
•Shadow is a helpful element in image interpretation. It also creates difficulties for some
objects in their identification in the image.
•Knowing the time of photography, we can estimate the solar elevation/illumination, which
helps in height estimation of objects.
•The outline or shape of a shadow affords an impression of the profile view of objects.
But objects within shadow become difficult to interpret.
• Shadow is also useful for enhancing or identifying topography and landforms, particularly in radar imagery.

Fig. Shadows of objects used for interpretation.
Site:
• Site refers to topographic or geographic location. It is an important element in image interpretation when objects cannot be clearly identified using the previous elements.
• A very high reflectance feature in a Himalayan valley may be snow or cloud, but in Kerala the same feature cannot be interpreted as snow.
Association:
• Association refers to the occurrence of certain features in relation to other objects in the imagery. In an urban area a smooth vegetation pattern generally indicates a playground or grassland rather than agricultural land, as shown in the figure below.

Fig. Satellite image of an urban area.


DIGITAL IMAGE PROCESSING
• Data is available either in hard-copy form or in digital form.
• For hard-copy data, visual interpretation techniques can be adopted. If the data are in digital form, they can be analyzed using digital image processing techniques.
The basic character of a digital image:
- Though an image appears to be a continuous-tone photograph, it is actually composed of a two-dimensional array of discrete picture elements, also called pixels.

- The intensity of each pixel corresponds to the average brightness, or radiance, measured electronically over the ground area corresponding to that pixel.

- The pixel, which is an individual cell, is usually square for easy use in a computer and carries at its centre an integer value of average intensity (cell value).

- A digital image has coordinates of pixel number, normally counted from left to right, and line number, counted from top to bottom.

- The most important factor involved is the pixel size. If the pixel size is large, the appearance of the image becomes coarse and sampling becomes difficult, while if the pixel size is small, the data volume becomes very large.
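The sketch below illustrates this structure with a tiny single-band image: the DN values are purely illustrative, and pixels are addressed by line (counted from the top) and pixel number (counted from the left).

```python
import numpy as np

# A tiny 3-line x 4-pixel single-band image; the DN values are purely illustrative.
image = np.array([[ 84,  90, 101,  95],
                  [ 88, 120, 140, 130],
                  [ 92, 135, 153, 148]], dtype=np.uint8)

line, pixel = 2, 3        # line counted from the top, pixel counted from the left
print(image[line, pixel]) # -> 148, the DN (average brightness) stored in that cell
print(image.shape)        # -> (3, 4): 3 lines, 4 pixels per line
```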

Digital Image Processing

• Preprocessing
• Image Enhancement
• Image Transformation
• Image Classification and Analysis
Preprocessing
• Remotely sensed raw data received from the imaging sensor generally contain flaws and deficiencies.
• The correction of deficiencies and removal of flaws present in the data are termed pre-processing methods.
• Thus, preprocessing involves operations that are normally required prior to the main data analysis and extraction of information.
• It involves processes referred to as image restoration, reconstruction or rectification of the image.
• Operations include (i) geometric correction and (ii) radiometric correction.
Geometric correction:
• Correction of geometric distortions arising from sensor-Earth geometry variations; contributing factors include the motion of the scanning system, the motion of the platform, platform altitude and velocity, terrain relief, and the curvature and rotation of the Earth.
• It also includes conversion/transformation of the remote sensing data to real-world coordinates, such as latitude/longitude, on the Earth's surface.
Radiometric correction
• Includes correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so that it accurately represents the reflected or emitted radiation measured by the sensor.
Image Enhancement
Image Enhancement: It is the process of making an image more
interpretable for a particular application (Faust, 1989)

• Enhancements are used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image.

• Even though radiometric corrections for illumination, atmospheric influences and sensor characteristics may be applied prior to distribution of the data to the user, the image may still not be optimized for visual interpretation.

•In raw imagery, the useful data often populates only a small portion
of the available range of digital values (commonly 8 bits or 256 levels).

Contrast enhancement: This involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds.
• The key to understanding contrast enhancements is to understand the concept of an image histogram.

Histogram: A graphical representation of the brightness values that comprise an image. The brightness values (e.g. 0-255) are displayed along the x-axis of the graph, and the frequency of occurrence of each of these values in the image is shown on the y-axis.

• By manipulating the range of digital values in an image, graphically represented by its histogram, we can apply various enhancements to the data.
• There are many different techniques and methods of enhancing contrast and detail in an image; we will cover only a few common ones here. The simplest types of enhancement are the linear contrast stretch and the non-linear contrast stretch.

1. Linear contrast stretch: This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range.
• In this procedure the histogram of the input image is generated and the lower (DNlow) and upper (DNup) limits are determined. The output digital number DNout for each pixel is determined as

DNout = [(DN - DNlow) / (DNup - DNlow)] x (DNmax - DNmin + 1)

where DNmax and DNmin are the maximum and minimum grey values that can be displayed.
• This procedure is also called a min-max contrast stretch. It is not necessary to use only the lower and upper limits of the histogram in the above equation; instead, DNlow and DNup may be specified at a fixed percentage of pixels into the tails of the histogram, which results in a percentage linear contrast stretch. When the input image histogram is not Gaussian, a piecewise linear contrast stretch may be applied, in which a series of breakpoint DN values is specified along with DNup, DNlow, DNmin and DNmax.

•In our example, the minimum value (occupied by actual data) in the
histogram is 84 and the maximum value is 153. These 70 levels
occupy less than one-third of the full 256 levels available.

•A linear stretch uniformly expands this small range to cover the full
range of values from 0 to 255. This enhances the contrast in the
image with light toned areas appearing lighter and dark areas
appearing darker, making visual interpretation much easier. This
graphic illustrates the increase in contrast in an image before (left)
and after (right) a linear contrast stretch.
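A minimal sketch of the min-max stretch formula given above is shown below, using the limits from this example (DNlow = 84, DNup = 153); the input array name is hypothetical.

```python
import numpy as np

def linear_stretch(image, dn_low=None, dn_up=None, dn_min=0, dn_max=255):
    """Min-max linear contrast stretch:
    DNout = (DN - DNlow) / (DNup - DNlow) * (DNmax - DNmin + 1)."""
    img = image.astype(np.float64)
    dn_low = img.min() if dn_low is None else dn_low
    dn_up = img.max() if dn_up is None else dn_up
    out = (img - dn_low) / (dn_up - dn_low) * (dn_max - dn_min + 1)
    return np.uint8(np.clip(out, dn_min, dn_max))

# Example from the text: the input DNs occupy only 84-153 (70 of the 256 levels);
# the stretch expands them to fill the full 0-255 display range.
# stretched = linear_stretch(raw_band, dn_low=84, dn_up=153)   # raw_band is hypothetical
```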

A uniform distribution of the input range of values across the full range
may not always be an appropriate enhancement, particularly if the
input range is not uniformly distributed. In this case, a histogram-
equalized stretch may be better.
•This stretch assigns more display values (range) to the frequently
occurring portions of the histogram. In this way, the detail in these
areas will be better enhanced relative to those areas of the original
histogram where values occur less frequently. In other cases, it may
be desirable to enhance the contrast in only a specific portion of the
histogram.
2) Non-Linear Contrast Enhancement: In non-linear stretching, the DN values are not
stretched linearly to uniformly occupy the entire display range. Different non-linear contrast
stretching methods are available. Some of them are the following.

a)Histogram-equalized stretch
 b)Piece-wise linear stretch
c)Logarithmic, power law or Gaussian stretch

a) Histogram-equalized stretch: In a histogram-equalized stretch the DN values are enhanced based on their frequency in the original image. Thus, DN values corresponding to the peaks of the histogram are assigned a wider output range. Fig. 1 compares the histogram of a raw image with those of images enhanced using linear stretching and histogram-equalized stretching.
Fig 1. Histograms of (a) Unstretched image (b) Linear contrast stretched image
(c) Histogram equalised image
Fig.2 (a) shows a sample histogram of an image and Fig.2 (b) shows the
corresponding histogram-equalization stretch function. Input DN values
corresponding to the peak of the histogram are stretched to a wider range
as shown in the figure.

Fig.2 (a) Sample histogram of an image and (b) Function used for histogram equalized stretch
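The sketch below is one possible implementation of a histogram-equalized stretch: the normalised cumulative histogram is used as a look-up table, so frequently occurring DN ranges receive a wider share of the output range. It assumes an 8-bit single-band image.

```python
import numpy as np

def histogram_equalize(image, levels=256):
    """Histogram-equalized stretch: DN ranges near histogram peaks
    receive a wider share of the output range."""
    hist, _ = np.histogram(image.flatten(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                      # normalised cumulative histogram (0-1)
    lut = np.uint8((levels - 1) * cdf)  # look-up table: input DN -> output DN
    return lut[image]                   # apply the mapping to every pixel

# equalized = histogram_equalize(raw_band)   # raw_band: hypothetical 8-bit single-band image
```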
b) Piece-wise linear stretch: In a piece-wise linear stretch, different linear functions are used for enhancing the DN values in different ranges within the same image. In other words, different parts of the histogram are stretched by different amounts.

c) Logarithmic, power law or Gaussian stretch: In logarithmic stretching, curves having the shape of the logarithmic function are used for rescaling the original DN levels into the wider output range, as shown in Fig. 3. It may be described as

DNout = a log(DN + b) + c

where a, b and c are stretch parameters. This stretching preferentially emphasizes detail in dark regions. A power-law stretch may be described as

DNout = a DN^b + c

and is used when the emphasis is on showing detailed structure in the bright regions of the image.
Fig. 3. A sample logarithmic stretch function
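The sketch below implements the two non-linear functions quoted above; the stretch parameters a, b and c are chosen only for illustration and would normally be tuned so that the output stays within the 0-255 display range.

```python
import numpy as np

def log_stretch(image, a=45.0, b=1.0, c=0.0):
    """Logarithmic stretch DNout = a*log(DN + b) + c; expands detail in dark regions."""
    out = a * np.log(image.astype(np.float64) + b) + c
    return np.uint8(np.clip(out, 0, 255))

def power_stretch(image, a=1.0, b=1.5, c=0.0):
    """Power-law stretch DNout = a*DN**b + c (DN normalised to 0-1);
    with b > 1 it expands detail in bright regions."""
    dn = image.astype(np.float64) / 255.0
    out = 255.0 * (a * dn ** b + c)
    return np.uint8(np.clip(out, 0, 255))
```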
Filtering: Filtering is the process by which the tonal variations in an image, in
selected ranges or frequencies of the pixel values, are enhanced or suppressed.
Or in other words, filtering is the process that selectively enhances or
suppresses particular wavelengths or pixel DN values within an image.

•This can be usually carried out on a single band. It is used for spatial image
enhancement, for example to reduce noise or to sharpen blurred images.

Kernel: A kernel defines the output pixel value as a linear combination of the pixel values in a neighbourhood around the corresponding position in the input image. For a specific kernel, a so-called gain can be calculated as

gain = 1 / Σ ki

where the sum runs over all kernel coefficients ki. In general, the sum of the kernel coefficients, after multiplication by the gain, should equal 1, so that the output image has approximately the same range of grey values as the input.

• The effect of using a kernel is illustrated in the figure below, which shows how the output value is calculated for average filtering.
Fig. Input and output result of a filtering operation: the
neighbourhood in the original image determines the value of
the output. In this situation a smoothing filter was applied.

The significance of the gain factor is explained in the next two subsections (Noise Reduction and Edge Enhancement). In these examples only small neighbourhoods of 3 x 3 kernels are considered; in practice other kernel dimensions may be used.
Noise reduction: Consider the kernel shown in Table 1, in which all coefficients equal 1. This means that the values of the nine pixels in the neighbourhood are summed.
• Subsequently, the result is divided by 9 so that the overall pixel values in the output image remain in the same range as the input image. In this situation the gain is 1/9 = 0.11.

• The effect of applying this averaging filter is that the image becomes blurred or smoothed. When dealing with the speckle effect in radar imagery, applying this filter reduces the speckle.

Table 1. Filter kernel for smoothing

•In the above kernel, all pixels have equal contribution in the calculation of
the result. It is also possible to define a weighted average.

• To emphasize the value of the central pixel, a larger value can be put in the centre of the kernel. As a result, less drastic blurring takes place. In addition, it is necessary to take into account that the horizontal and vertical neighbours influence the result more strongly than the diagonal ones.
• The reason for this is that the direct neighbours are closer to the central pixel. The resulting kernel, for which the gain is 1/16 = 0.0625, is given in Table 2.

Table 2. Filter kernel for weighted smoothing


Edge enhancement: Another application of filtering is to emphasize local differences in grey values, for example those related to linear features such as roads, canals and geological faults. This is done using an edge-enhancing filter, which calculates the difference between the central pixel and its neighbours.
• This is implemented using negative values for the non-central kernel coefficients. An example of an edge enhancement filter is shown in Table 3.

Table 3. Filter kernel for edge enhancement.

The gain is calculated as follows: 1 / (16 - 8) = 1/8 = 0.125. The sharpening effect can be made stronger by using smaller values for the central pixel (with a minimum of 9). An example of the effect of using smoothing and edge enhancement is shown in the figure below.
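A minimal sketch applying the three kernels of Tables 1-3 is given below. It uses scipy.ndimage.convolve and multiplies each kernel by its gain (1 divided by the sum of its coefficients) so that the output stays in roughly the same grey-value range; the input band name is hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve

smoothing = np.ones((3, 3))              # Table 1: all coefficients 1, gain 1/9
weighted  = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]])        # Table 2: centre-weighted, gain 1/16
edge      = np.array([[-1, -1, -1],
                      [-1, 16, -1],
                      [-1, -1, -1]])     # Table 3: negative neighbours, gain 1/(16-8) = 1/8

def apply_kernel(image, kernel):
    gain = 1.0 / kernel.sum()            # gain so the weighted coefficients sum to 1
    out = convolve(image.astype(np.float64), kernel * gain, mode='nearest')
    return np.uint8(np.clip(out, 0, 255))

# smoothed  = apply_kernel(band, smoothing)   # blurs the image / reduces radar speckle
# sharpened = apply_kernel(band, edge)        # emphasizes edges and linear features
```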
Figure. Original image (middle), edge enhanced image (left) and
smoothed image (right).
Image Classification
Introduction: In the process of visual image interpretation
human vision plays a crucial role in extracting information
from image data.

• Although computers may be used for visualization and digitization, the interpretation itself is carried out by the operator. In digital image classification, however, the operator instructs the computer to perform an interpretation according to certain conditions.

• These conditions are defined by the operator. Image classification is one of the techniques in the domain of digital image interpretation. Other techniques include automatic object recognition (for example, road detection) and scene reconstruction (for example, generation of 3D object models).
Principles of Image Classification:
i) Image Space: A digital image is a 2D array of elements. In each element, the energy reflected or emitted from the corresponding area on the Earth's surface is stored.

• The spatial arrangement of the measurements defines the image, or image space. Depending on the sensor, data are recorded in n bands, as shown in Fig. 1 below. Digital image elements are usually stored as 8-bit DN values (range 0-255).

Fig 1. The structure of a multi-band image


ii) Feature Space: In one pixel, the values in (for example) two bands can be
regarded as components of a two-dimensional vector, the feature vector.

• An example of a feature vector is (13, 55), which indicates that 13 DN and 55 DN are stored for band 1 and band 2 respectively. This vector can be plotted in a two-dimensional graph.
 

Fig 2 Plotting of the values of a pixel in the feature space for a two and three
band image.
• Similarly, this approach can be visualized for a three-band situation in a three-dimensional graph. A graph that shows the values of the feature vectors is called a feature space, or feature space plot, as shown in Fig. 2, which illustrates how a feature vector (related to one pixel) is plotted in the feature space for two and three bands. Usually we only find two-axis feature space plots.

• Note that plotting values is difficult for a four- or more-dimensional case. A practical solution when dealing with four or more bands is to plot all possible combinations of two bands separately.
• For four bands, this already yields six combinations: bands 1 and 2, 1 and 3, 1 and 4, 2 and 3, 2 and 4, and 3 and 4 (see the sketch after Fig. 3 below).

•Plotting the combinations of the values of all the pixels of one image yields
a large cluster of points. Such a plot is also referred to as a scatterplot as
shown in fig 3. A scatterplot provides information about the combinations of
pixel values that occur within the image. Note that some combinations will
occur more frequently and can be visualized by using intensity or colour.
Combinations of two bands from four-band (1, 2, 3, 4) data:
1) 1 and 2
2) 1 and 3
3) 1 and 4
4) 2 and 3
5) 2 and 4
6) 3 and 4

Fig 3: Scatterplot of two bands of a digital image. Note the units (DN
values) along the x- and y- axes. The intensity at a point in the feature
space is related to the number of pixels at that point.
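The short sketch below lists the six two-band combinations of a four-band image and draws a scatterplot (feature space plot) for one pair; the band arrays here are random stand-ins for real DN data.

```python
from itertools import combinations
import numpy as np
import matplotlib.pyplot as plt

# Stand-in four-band image: four 100 x 100 arrays of 8-bit DN values (random here).
rng = np.random.default_rng(0)
bands = {i: rng.integers(0, 256, size=(100, 100)) for i in range(1, 5)}

pairs = list(combinations(bands, 2))
print(pairs)   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] -> six feature space plots

# Scatterplot (feature space plot) for one band pair:
bx, by = pairs[0]
plt.scatter(bands[bx].ravel(), bands[by].ravel(), s=1)
plt.xlabel(f'Band {bx} (DN)')
plt.ylabel(f'Band {by} (DN)')
plt.show()
```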
iii) Distances and clusters in the feature space: Distance in the feature space is expressed as Euclidean distance, and the units are DN (as this is the unit of the axes).

• In a two-dimensional feature space the distance can be calculated according to Pythagoras' theorem. In the situation of the figure below, the distance between (10, 10) and (40, 30) equals the square root of (40 - 10)^2 + (30 - 10)^2. For three or more dimensions, the distance is calculated in a similar way.

Fig: Euclidean distance between two points is calculated using Pythagoras' theorem.
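A minimal sketch of this distance calculation, using the two feature vectors from the figure:

```python
import numpy as np

a = np.array([10, 10])   # feature vector of one pixel (band 1 DN, band 2 DN)
b = np.array([40, 30])

# Pythagoras: sqrt((40 - 10)^2 + (30 - 10)^2)
distance = np.sqrt(np.sum((a - b) ** 2))
print(distance)          # -> about 36.06 DN; np.linalg.norm(a - b) gives the same result
```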
Image classification: The scatterplot shown in fig.3 gives information about
the distribution of corresponding pixel values in two bands of an image.
Fig.4 shows a feature space in which the feature vectors have been plotted
for six specific land cover classes (grass, water, trees, etc).

•Each cluster of feature vectors (class) occupies its own area in feature
space. Fig. 4 shows the basic assumption for image classification: a specific
part of the feature space corresponds to a specific class.

•Once the classes have been defined in the feature space, each image pixel
can be compared to these classes and assigned to the corresponding class.

• Classes to be distinguished in an image classification need to have different spectral characteristics. This can, for example, be analyzed by comparing spectral reflectance curves.
• Fig. 4 also illustrates the limitation of image classification: if classes do not have distinct clusters in the feature space, image classification can only give results to a certain level of reliability.
• The principle of image classification is that a pixel is assigned to a class based on its feature vector, by comparing it to predefined clusters in the feature space. Doing so for all image pixels results in a classified image.
• The crux of image classification is in comparing the feature vector to predefined clusters, which requires definition of the clusters and methods for comparison.
• Definition of the clusters is an interactive process and is carried out during the training process. Comparison of the individual pixels with the clusters takes place using classifier algorithms.

Fig. 4 Feature space showing the respective clusters of six classes; note that each class occupies a limited area in the feature space.
IMAGE CLASSIFICATION PROCESS

The process of image classification typically involves five steps:
1. Selection and preparation of the image data. Depending on the cover types
to be classified, the most appropriate sensor, the most appropriate date(s) of
acquisition and the most appropriate wavelength bands should be selected.

2. Definition of the clusters in the feature space. Here two approaches are possible: supervised classification and unsupervised classification. In a supervised classification, the operator defines the clusters during the training process; in an unsupervised classification, a clustering algorithm automatically finds and defines a number of clusters in the feature space.
 
3. Selection of classification algorithm. Once the spectral classes have been defined in the feature space, the operator needs to decide how the pixels (based on their DN values) are assigned to the classes, based on different criteria.
4. Running the actual classification. Once the training data have been established and the classifier algorithm selected, the actual classification can be carried out. This means that, based on its DN values, each individual pixel in the image is assigned to one of the defined classes, as shown in the figure below.
 
5. Validation of the result. Once the classified image has been produced, its quality is assessed by comparing it to reference data (ground truth). This requires selection of a sampling technique, generation of an error matrix, and the calculation of error parameters (a small example follows the figure below).

Fig. 6 The result of classification of a multispectral image (a) is a raster in which each cell is assigned to some thematic class (b).
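As a small illustration of step 5, the sketch below builds an error (confusion) matrix from classified labels and reference (ground-truth) labels at sample points and computes the overall accuracy; the labels are purely illustrative.

```python
import numpy as np

# Illustrative class labels (0..n-1) at a set of reference (ground truth) sample points.
reference  = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
classified = np.array([0, 1, 1, 1, 2, 2, 2, 2, 1, 0])

n_classes = 3
error_matrix = np.zeros((n_classes, n_classes), dtype=int)
for ref, cls in zip(reference, classified):
    error_matrix[ref, cls] += 1          # rows: reference class, columns: mapped class

overall_accuracy = np.trace(error_matrix) / error_matrix.sum()
print(error_matrix)
print(f'Overall accuracy: {overall_accuracy:.0%}')   # correct (diagonal) / total samples
```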
• Most examples deal with a two-dimensional situation (two bands) for reasons of simplicity and visualization. In principle, however, image classification can be carried out on any n-dimensional data set. Visual image interpretation, by contrast, limits itself to an image that is composed of a maximum of three bands.

Preparation for image classification
• Image classification serves a specific goal: converting image data into thematic data. In the application context, one is interested in the thematic characteristics of an area (pixel) rather than its reflection values.
• Thematic characteristics such as land cover, land use, soil type or mineral type can be used for further analysis and as input to models. In addition, image classification can also be considered as data reduction: the n multispectral bands result in a single-valued raster file.
SUPERVISED CLASSIFICATION
• A supervised classification algorithm requires a training sample for each class,
that is, a collection of data points known to have come from the class of
interest.

• The classification is thus based on how "close" a point to be classified is to each training sample. We shall not attempt to define the word "close" other than to say that both geometric and statistical distance measures are used in practical pattern recognition algorithms.

• The training samples are representative of the known classes of interest to the analyst. Classification methods that rely on the use of training patterns are called supervised classification methods. The three basic steps (Fig. 7) involved in a typical supervised classification procedure are as follows:
•(i) Training stage: The analyst identifies representative training areas and develops
numerical descriptions of the spectral signatures of each land cover type of interest
in the scene.

• (ii) The classification stage: Each pixel in the image data set is categorised into the land cover class it most closely resembles. If the pixel is insufficiently similar to any training data set, it is usually labelled 'Unknown'.

• (iii) The output stage: The results may be used in a number of different ways. Three typical forms of output products are thematic maps, tables and digital data files, which become input data for GIS. The output of image classification thus becomes input for GIS for spatial analysis of the terrain. Fig. 8 depicts the flow of operations performed during image classification of remotely sensed data of an area, which ultimately leads to the creation of a database as an input for GIS. Plate 6 shows the land use/land cover colour-coded image, which is an output of image classification.
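As an illustration of the training and classification stages, the sketch below uses a minimum-distance-to-means rule, one common supervised classifier (the text does not prescribe a particular algorithm); the training samples and the multi-band image are assumed to be NumPy arrays.

```python
import numpy as np

def train_class_means(training_samples):
    """Training stage: mean feature vector (spectral signature) per class.
    training_samples: {class_name: array of shape (n_pixels, n_bands)} from training areas."""
    return {name: pixels.mean(axis=0) for name, pixels in training_samples.items()}

def classify_min_distance(image, class_means, max_distance=None):
    """Classification stage: assign each pixel to the class whose mean is nearest
    (Euclidean distance in the feature space); -1 is used for 'Unknown'."""
    rows, cols, n_bands = image.shape
    pixels = image.reshape(-1, n_bands).astype(np.float64)
    names = list(class_means)
    means = np.array([class_means[n] for n in names])               # (n_classes, n_bands)
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    if max_distance is not None:
        labels[dist.min(axis=1) > max_distance] = -1                # too dissimilar -> 'Unknown'
    return labels.reshape(rows, cols), names
```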
UNSUPERVISED CLASSIFICATION
• One way to perform a classification is to plot all pixels (all feature vectors) of
the image in a feature space, and then to analyze the feature space and to
group the feature vectors into clusters. The name of this process is
unsupervised classification.

• In this process there is no prior knowledge about "thematic" land cover class names, such as town, road, cropland etc. All the process can do is find that there appear to be (for example) 16 different "things" in the image and give them numbers (1 to 16). Each of these "things" is called a "spectral class".

• The result can be a raster map in which each pixel has a class (from 1 to 16), according to the cluster to which the image feature vector of the corresponding pixel belongs. After the process is finished, it is up to the user to find the relationship between spectral and thematic classes. It may well be discovered that one thematic class is split into several spectral classes or, worse, that several thematic classes end up in the same cluster.
• Various unsupervised classification (clustering) algorithms exist. Usually, they are not completely automatic; the user must specify some parameters, such as the (approximate) number of clusters to obtain, the maximum cluster size (in the feature space), and the minimum distance (also in the feature space) that is allowed between different clusters.

• The process "builds" clusters as it scans through the image. Typically, when a cluster becomes larger than the maximum size, it is split into two clusters; on the other hand, when two clusters get nearer to each other than the minimum distance, they are merged into one.
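The sketch below mimics this behaviour with a basic k-means style clustering of the feature vectors: the user supplies only the number of spectral classes, and the resulting cluster numbers still have to be related to thematic classes afterwards. Splitting and merging on size/distance thresholds, as described above, is omitted for brevity.

```python
import numpy as np

def unsupervised_classify(image, n_clusters=16, n_iter=20, seed=0):
    """Cluster all feature vectors into n_clusters spectral classes (basic k-means)."""
    rows, cols, n_bands = image.shape
    pixels = image.reshape(-1, n_bands).astype(np.float64)
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)]  # initial centres
    for _ in range(n_iter):
        dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)                  # assign each pixel to the nearest centre
        for k in range(n_clusters):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)   # move centre to cluster mean
    return labels.reshape(rows, cols) + 1             # spectral classes numbered 1..n_clusters

# spectral_map = unsupervised_classify(multiband_image, n_clusters=16)   # image: rows x cols x bands
```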
Fig. Reflectance spectra of surface samples of five mineral soils:
a) Organic dominated
b) Minimally altered
c) Iron altered
d) Organic altered
e) Iron dominated
