Geometric correction: Geometric correction (often referred to as image warping) is the
process of digitally manipulating image data so that the image's projection precisely matches a
specific map projection or reference surface. It compensates for distortions introduced by the
sensor geometry, the platform's position and attitude, terrain relief and Earth curvature, and
aims to produce a corrected image with a high level of geometric integrity. If an image is not
corrected for geometric distortions, the (x, y) location of a given pixel will not correspond to
its correct geographic location. Geometric correction is achieved by establishing the
relationship between the image coordinate system and the geographic coordinate system,
using sensor calibration data, measured position and attitude data, ground control points,
atmospheric conditions, etc.
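As a minimal sketch of the ground-control-point approach described above, a first-order (affine) polynomial relating image coordinates to ground coordinates can be fitted by least squares. The GCP values below are hypothetical, and numpy is assumed:

```python
import numpy as np

# Hypothetical ground control points: (column, row) in the raw image and
# the corresponding (easting, northing) on the ground.
image_xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
ground_en = np.array([[500000, 4000000], [503000, 4000050],
                      [500050, 3997000], [503050, 3997050]], dtype=float)

# First-order polynomial: E = a0 + a1*x + a2*y and N = b0 + b1*x + b2*y,
# fitted to the GCPs by least squares.
A = np.column_stack([np.ones(len(image_xy)), image_xy])
coeffs_e, *_ = np.linalg.lstsq(A, ground_en[:, 0], rcond=None)
coeffs_n, *_ = np.linalg.lstsq(A, ground_en[:, 1], rcond=None)

def to_ground(x, y):
    """Map an image pixel (x, y) to ground coordinates via the fitted polynomial."""
    return (coeffs_e[0] + coeffs_e[1] * x + coeffs_e[2] * y,
            coeffs_n[0] + coeffs_n[1] * x + coeffs_n[2] * y)

print(to_ground(50, 50))
```

In practice more GCPs than coefficients are used, and higher-order polynomials or rigorous sensor models replace the affine fit when the distortion is more complex.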
Atmospheric correction: Atmospheric correction is the process of removing the effects of the
atmosphere from the reflectance values of images taken by satellite or airborne sensors.
Atmospheric effects in optical remote sensing are significant and complex, dramatically
altering the spectral nature of the radiation reaching the sensor. The atmosphere both absorbs
and scatters various wavelengths of the visible spectrum, which must pass through the
atmosphere twice: once on the way from the sun to the object, and again as the reflected
radiation travels back up to the sensor. Atmospheric correction converts at-sensor radiance
values to surface reflectance values, and can significantly improve the quality of remote
sensing data by removing or compensating for atmospheric effects such as scattering and
absorption.
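One simple atmospheric correction technique (not named in the text above, but widely used) is Dark Object Subtraction: the darkest pixel in a band is assumed to have near-zero surface reflectance, so its value estimates the additive path radiance. A sketch with hypothetical digital numbers, assuming numpy:

```python
import numpy as np

# Hypothetical digital numbers (DN) for one band; in practice these would
# first be converted to at-sensor radiance using the sensor's gain/offset.
band = np.array([[12, 15, 40],
                 [18, 60, 95],
                 [13, 30, 120]], dtype=float)

# Dark Object Subtraction (DOS): the darkest pixel's value estimates the
# additive haze/path-radiance term, which is subtracted from every pixel.
haze = band.min()
corrected = np.clip(band - haze, 0, None)
print(corrected)
```

Physics-based methods (radiative transfer models) are more accurate but require atmospheric parameters that DOS does not.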
Image enhancement
Image enhancement deals with procedures for making a raw image more interpretable for a
particular application. It includes stretching and filtering.
Stretching is done to produce an image of optimal contrast by utilizing the full brightness range
(from black to white through a variety of grey tones) of the display medium. Stretching is
commonly of two types: linear contrast stretching and histogram equalization.
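As a sketch of the simplest case, a linear stretch maps the darkest and brightest DN values onto the full 0-255 display range. The band values below are hypothetical, and numpy is assumed:

```python
import numpy as np

# Hypothetical 8-bit band whose values occupy only part of the 0-255 range,
# giving a low-contrast display.
band = np.array([[50, 60, 70],
                 [80, 90, 100],
                 [110, 120, 130]], dtype=float)

# Linear contrast stretch: map [min, max] onto the full range [0, 255].
lo, hi = band.min(), band.max()
stretched = (band - lo) / (hi - lo) * 255.0
print(stretched.round())
```

Histogram equalization instead redistributes the DN values so each grey level carries roughly equal numbers of pixels.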
Image filtering changes the appearance of an image by altering the values of its pixels.
Increasing contrast and adding a variety of special effects are some of the results of applying
filters. In image processing, filters are mainly used to suppress either the high frequencies in
the image (i.e. smoothing the image) or the low frequencies (i.e. enhancing or detecting edges
in the image).
Low pass filter (smoothing filter): Smoothing an image is easy; the basic problem is to do so
without losing interesting features such as edges.
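A minimal low-pass filter is the 3x3 mean (moving-average) filter: each pixel is replaced by the average of its neighbourhood, which suppresses high-frequency noise. A sketch assuming numpy, with a hypothetical noisy image (the border handling here is a simplification):

```python
import numpy as np

def mean_filter(img):
    """3x3 low-pass (mean) filter; border pixels are left unchanged."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i-1:i+2, j-1:j+2].mean()
    return out

# A flat image with one bright "noise" pixel: smoothing spreads the
# spike over its neighbourhood, reducing its amplitude.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = mean_filter(img)
print(smoothed[2, 2])   # 9.0 averaged over nine pixels -> 1.0
```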
High pass filter (edge-enhancement filter): Used to define the boundaries of steep gradients in
DN values, known as edges. High pass filters are designed to emphasize high frequencies and
to suppress low frequencies; applying a high pass filter has the effect of enhancing edges.
There are two types of high pass filter:
1. Gradient (directional) filters: Used to enhance specific linear trends. They are designed
in such a way that edges running in a certain direction (e.g. horizontal, vertical or
diagonal) are enhanced.
2. Laplacian (non-directional) filters: They enhance linear features in any direction in an
image. They do not look at the gradient itself, but at the changes in gradient.
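The non-directional behaviour of the Laplacian can be sketched by convolving a hypothetical image with the classic 3x3 Laplacian kernel, assuming numpy (border handling simplified to zeros):

```python
import numpy as np

# Classic non-directional Laplacian kernel: it responds to changes in
# gradient in any direction, so edges are enhanced regardless of trend.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve3x3(img, kernel):
    """Apply a 3x3 kernel; border pixels are set to 0 for simplicity."""
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = (img[i-1:i+2, j-1:j+2] * kernel).sum()
    return out

# A vertical edge: DN jumps from 10 to 50 between columns 2 and 3.
img = np.full((5, 6), 10.0)
img[:, 3:] = 50.0
edges = convolve3x3(img, laplacian)
# The response is zero in flat areas and large (of opposite signs) on
# either side of the edge.
```

A gradient filter such as a horizontal-difference kernel would respond only to edges of one orientation; the Laplacian responds to all of them.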
Image Interpretation
Interpretation elements:
1. Tone/Hue: Directly related to the amount of light (energy) reflected from the surface.
Different types of rock, soil or vegetation most likely have different tones; increasing
moisture content gives darker grey tones.
2. Shape: Characterizes many terrain objects visible in the image, e.g. in geomorphological
mapping, and helps identify objects such as built-up areas, roads and railroads,
agricultural fields and fishery ponds.
3. Size: Farm size and water body size are important for agricultural studies; width
determines the road type, e.g. primary road or secondary road.
4. Pattern: The spatial arrangement of objects (such as concentric, radial or checkerboard);
used for landform, land use and erosion studies.
5. Texture: The frequency of tonal change (e.g. coarse/fine, smooth/rough, even/uneven,
mottled, speckled, granular, linear, woolly); often related to terrain roughness.
6. Site: The topographic or geographic location of a feature, e.g. backswamps in a
floodplain, or mangroves in the coastal zone.
7. Association: A combination of objects makes it possible to infer their meaning or
function, e.g. industry associated with a transport system, or salinity features in the
coastal zone.
False colour composite: A false colour image is the representation of a multispectral image
produced using bands other than visible red, green and blue as the red, green and blue
components of the display. False colour composites allow us to visualize wavelengths that the
human eye cannot see (i.e. near-infrared and beyond). Using bands such as near-infrared
highlights spectral differences and often increases the interpretability of the data. Many
different false colour composites can be used to highlight different features.
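A common example (the "standard" false colour composite) displays near-infrared as red, red as green, and green as blue, so healthy vegetation appears bright red. A sketch assuming numpy and hypothetical single-band arrays:

```python
import numpy as np

# Hypothetical bands from a multispectral sensor, scaled to 0-255.
green = np.full((4, 4), 60, dtype=np.uint8)
red   = np.full((4, 4), 40, dtype=np.uint8)
nir   = np.full((4, 4), 200, dtype=np.uint8)  # vegetation reflects NIR strongly

# Standard false colour composite: NIR -> display red, red -> display
# green, green -> display blue.
fcc = np.dstack([nir, red, green])
print(fcc.shape)   # (4, 4, 3): rows, columns, display channels
```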
Image Classification
Feature Vector - For one pixel, the values in two bands can be regarded as components of a
2-D vector, the feature vector.
Feature Space - Graph that shows the values of the feature vectors is called feature space or
feature space plot.
Scatterplot - Plotting the combinations of the values of all the pixels of one image yields a large
cluster of points. Such a plot is also referred to as a scatterplot.
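The feature vectors of all pixels can be sketched as follows, assuming numpy and two hypothetical bands of a 3x3 image; a scatterplot of the resulting array is the feature space plot:

```python
import numpy as np

# Two hypothetical bands of the same 3x3 image.
band1 = np.array([[10, 12, 50], [11, 52, 55], [48, 51, 53]])
band2 = np.array([[20, 22, 90], [21, 88, 92], [85, 91, 89]])

# Each pixel's feature vector is its (band1, band2) pair; stacking all
# pixels gives the cloud of points plotted in feature space.
features = np.column_stack([band1.ravel(), band2.ravel()])
print(features.shape)   # (9, 2): nine pixels, two bands each
```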
Principles of Image Classification
Classification Steps
Feature or band Selection: Decide on the bands or spectral channels to be used in the
classification. The selection depends on the characteristics of the objects or land cover
classes you want to differentiate.
Data Normalization: Normalize the data to ensure that each band contributes equally to the
classification process. This step is crucial when working with datasets with different
measurement scales.
Determination of the Number of Clusters (k): Decide how many clusters (k) the data are to be
partitioned into.
Cluster Assignment: Apply the chosen clustering algorithm (e.g., k-means) to the
normalized data. The algorithm assigns each pixel to one of the k clusters based on the
similarity of its spectral signature.
Class Label Assignment: Examine the resulting clusters and assign class labels based on the
spectral characteristics of the clusters. This step involves visual interpretation or analysis of
the spectral profiles associated with each cluster.
Accuracy Assessment or Validation (Optional): If ground truth data are available, conduct
an accuracy assessment to evaluate the performance of the unsupervised classification. This
step is optional, as unsupervised classification does not rely on training samples.
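The normalization and cluster-assignment steps above can be sketched with a minimal k-means implementation, assuming numpy; the two spectral groups ("water" and "vegetation") and their band values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-band pixels forming two spectral clusters,
# e.g. water (low NIR) and vegetation (high NIR).
water = rng.normal([20, 10], 2, size=(50, 2))
veg   = rng.normal([60, 90], 2, size=(50, 2))
pixels = np.vstack([water, veg])

# Data normalization: zero mean, unit variance per band, so each band
# contributes equally to the distance measure.
X = (pixels - pixels.mean(axis=0)) / pixels.std(axis=0)

# Cluster assignment with k-means, k = 2; centres are initialized from
# two data points for simplicity.
k = 2
centres = np.stack([X[0], X[-1]])
for _ in range(20):
    # Assign each pixel to the nearest centre in feature space.
    labels = np.argmin(((X[:, None] - centres) ** 2).sum(axis=-1), axis=1)
    # Update each centre to the mean of its assigned pixels.
    centres = np.array([X[labels == c].mean(axis=0) for c in range(k)])
```

Class labels ("water", "vegetation") would then be assigned to the numeric clusters by inspecting their spectral profiles, which is the label-assignment step described above.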