
Image correction

Geometric correction: Image geometry correction (often referred to as image warping) is the
process of digitally manipulating image data so that the image’s projection precisely matches a
specific projection surface or shape. It compensates for the distortion created by off-axis
projector or screen placement or a non-flat screen surface by applying a pre-compensating
inverse distortion to the image in the digital domain. The purpose of geometric correction is to
compensate for these distortions and ultimately produce a corrected image with a high level of
geometric integrity. If an image is not corrected for geometric distortions, the x,y location of a
given pixel will not correspond to the correct geographic location. Geometric correction removes
these distortions by establishing the relationship between the image coordinate system and the
geographic coordinate system, using sensor calibration data, measured position and attitude
data, ground control points, atmospheric conditions, etc.
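
As an illustration, the sketch below fits a first-order (affine) polynomial mapping from image coordinates to map coordinates using ground control points. The GCP values and the numpy-only approach are hypothetical stand-ins for a real rectification workflow:

```python
import numpy as np

# Hypothetical ground control points: pixel (col, row) -> map (x, y).
pixel = np.array([[10, 15], [480, 22], [25, 500], [470, 490]], dtype=float)
ground = np.array([[500010.0, 4200485.0], [500480.0, 4200478.0],
                   [500025.0, 4200000.0], [500470.0, 4200010.0]])

# First-order polynomial (affine) model: x = a0 + a1*c + a2*r, same for y.
# Solve the two least-squares systems from the GCPs.
A = np.column_stack([np.ones(len(pixel)), pixel])        # design matrix [1, c, r]
coef_x, *_ = np.linalg.lstsq(A, ground[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ground[:, 1], rcond=None)

def pixel_to_map(col, row):
    """Map an image coordinate to a geographic coordinate."""
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

print(pixel_to_map(240, 250))  # map coordinate of an interior pixel
```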

Radiometric correction: This is the process of improving the visual quality of an image by
correcting pixel values that do not match the reflectance or spectral emission of the object.
Radiometric correction makes the image better and easier to analyze. In this correction, the
digital number (DN) values of the image are converted to radiance values using calibration
coefficients that are usually provided in the image header files; this conversion is a form of
absolute calibration. Radiometric correction is done to calibrate the pixel values and/or correct
for errors in those values, and it improves the interpretability and quality of remotely sensed
data. Radiometric calibration and correction are particularly important when comparing multiple
data sets over a period of time. The main purpose of applying radiometric corrections is to
reduce the influence of errors or inconsistencies in image brightness values that may limit one's
ability to interpret or quantitatively process and analyze digital remotely sensed images.
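
A minimal sketch of the DN-to-radiance conversion, assuming the usual linear calibration model; the gain and offset values below are hypothetical placeholders for coefficients read from an image header:

```python
import numpy as np

# Hypothetical calibration coefficients, normally read from the image header.
gain, offset = 0.037, 3.2          # radiance per DN, and sensor bias

dn = np.array([[54, 61, 70],       # toy 3x3 band of digital numbers
               [58, 66, 75],
               [60, 72, 81]], dtype=float)

radiance = gain * dn + offset      # at-sensor spectral radiance
print(radiance)
```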

Atmospheric correction: This is the process of removing the scattering and absorption effects of
the atmosphere on the reflectance values of images taken by satellite or airborne sensors.
Atmospheric effects in optical remote sensing are significant and complex, dramatically altering
the spectral nature of the radiation reaching the remote sensor. The atmosphere both absorbs
and scatters various wavelengths of the visible spectrum, which must pass through the
atmosphere twice: once from the sun to the object, and again as the radiation travels back up to
the image sensor. Atmospheric correction converts the radiance values to surface reflectance
values. It can significantly improve the quality of remote sensing data by removing or
compensating for atmospheric effects such as scattering, absorption, and reflection.
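
As a simplified illustration, the sketch below converts radiance to top-of-atmosphere reflectance, which corrects for illumination geometry but not for scattering or absorption (full atmospheric correction requires a radiative-transfer model). All input values are hypothetical stand-ins for metadata:

```python
import numpy as np

# Hypothetical inputs; ESUN and sun elevation normally come from metadata.
radiance = np.array([[80.2, 85.7], [90.1, 95.4]])  # at-sensor radiance
esun = 1536.0            # mean solar exoatmospheric irradiance for the band
d = 1.0098               # Earth-Sun distance in astronomical units
sun_elev_deg = 55.0      # sun elevation angle from metadata
theta_s = np.deg2rad(90.0 - sun_elev_deg)          # solar zenith angle

# Top-of-atmosphere reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s))
reflectance = np.pi * radiance * d**2 / (esun * np.cos(theta_s))
print(reflectance)
```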

Image enhancement

Image enhancement deals with procedures for making a raw image better interpretable for a
particular application. It includes stretching and filtering.

Stretching is done to produce an image of optimal contrast by utilizing the full brightness range
(from black to white through a variety of gray tones) of the display medium. Stretching is of two
types.

1. Contrast enhancement/global enhancement: transforms the raw data using statistics
computed over the whole data set. During stretching, the DN value at the low end of the
original histogram is assigned to extreme black and a value at the high end is assigned to
extreme white; the remaining pixel values are distributed between these extremes.
Examples: linear contrast stretch, histogram-equalized stretch (non-linear) and piece-wise
contrast stretch (a minimal linear-stretch sketch follows this list).
2. Spatial or local enhancement: considers only local conditions, which can vary
considerably over an image. Examples: image smoothing and sharpening.
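
A minimal sketch of a linear contrast stretch on a single-band numpy array; the 2nd/98th percentile cut-offs are a common but arbitrary choice:

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the low percentile to 0 (black)
    and the high percentile to 255 (white), clipping the tails."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

band = np.random.randint(40, 120, size=(100, 100))   # dull, low-contrast band
out = linear_stretch(band)
print(band.min(), band.max(), "->", out.min(), out.max())
```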

Image filtering changes the appearance of an image by altering the values of its pixels.
Increasing the contrast as well as adding a variety of special effects to images are some of the
results of applying filters. In image processing, filters are mainly used to suppress either the
high frequencies in the image (i.e. smoothing the image) or the low frequencies (i.e. enhancing
or detecting edges in the image).

Low pass filter (smoothing filter): Smoothing an image is easy; the basic problem is to do so
without losing interesting features.

High pass filter (edge-enhancement filter): Used to define the boundaries of steep gradients in
DN values, known as edges. High pass filters are designed to emphasize high frequencies and to
suppress low frequencies; applying a high pass filter therefore has the effect of enhancing edges.
There are two types of high pass filter (see the convolution sketch after this list).

1. Gradient (directional) filters: Used to enhance specific linear trends. They are designed
in such a way that edges running in a certain direction (e.g. horizontal, vertical or
diagonal) are enhanced.
2. Laplacian (non-directional) filters: They enhance linear features in any direction
in an image. They do not look at the gradient itself, but at changes in the gradient.
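
A minimal sketch of both filter types using simple convolution kernels; the 3x3 mean and Laplacian kernels are standard textbook examples, applied here to a toy random band:

```python
import numpy as np
from scipy import ndimage

band = np.random.rand(100, 100)                 # toy single-band image

# Low pass: 3x3 mean kernel averages each pixel with its neighbours,
# suppressing high-frequency detail (smoothing).
mean_kernel = np.full((3, 3), 1 / 9)
smoothed = ndimage.convolve(band, mean_kernel)

# High pass: Laplacian kernel responds to changes in gradient in any
# direction, so edges are emphasised regardless of orientation.
laplacian_kernel = np.array([[ 0, -1,  0],
                             [-1,  4, -1],
                             [ 0, -1,  0]])
edges = ndimage.convolve(band, laplacian_kernel)
```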

Image Interpretation

Image Interpretation is done to extract the required information. In general, information
extraction methods from remote sensing imagery can be subdivided into two groups, viz. Visual
Image Interpretation and Digital Image Processing.

Visual Image Interpretation: Information extraction based on visual analysis or interpretation
of the data. Typical examples of this approach are visual interpretation methods for urban
mapping, soil mapping, geomorphological mapping, forest mapping, natural vegetation
mapping, cadastral mapping, land use mapping and many others.

Interpretation elements:

1. Tone/Hue: Tone/hue is directly related to the amount of light (energy) reflected from the
surface. Different types of rock, soil or vegetation most likely have different tones, and
increasing moisture content gives darker grey tones.
2. Shape: Characterizes many terrain objects visible in the image (built-up areas, roads and
railroads, agricultural fields, fishery ponds etc.), and is especially useful in
geomorphological mapping.
3. Size: Farm size and water body size are important for agricultural studies. Width
determines the road type, e.g., primary road, secondary road, et cetera.
4. Pattern: Spatial arrangement of objects (such as concentric, radial, checkerboard etc.).
Used for landform, land use and erosion studies.
5. Texture: Frequency of tonal change (e.g., coarse/fine, smooth/rough, even/uneven,
mottled, speckled, granular, linear, woolly). Often related to terrain roughness.
6. Site: Topographic or geographic location (where the feature is located), e.g., backswamps
in a floodplain, mangroves in the coastal zone.
7. Association: A combination of objects makes it possible to infer their meaning or
function, e.g., industry with a transport system, or salinity features in a coastal zone.

False colour composite: A false colour image is the representation of a multispectral image
produced using bands other than visible red, green and blue as the red, green and blue
components of the display. False colour composites allow us to visualize wavelengths that the
human eye cannot see (i.e. near-infrared and beyond). Using bands such as near-infrared
highlights spectral differences and often increases the interpretability of the data. There
are many different false colour composites that can be used to highlight different features.
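
A minimal sketch of building a composite by stacking bands into display channels; the random arrays are hypothetical stand-ins for co-registered image bands:

```python
import numpy as np

def norm(band):
    """Scale a band to the 0-1 range for display."""
    b = band.astype(float)
    return (b - b.min()) / (b.max() - b.min())

# Hypothetical co-registered bands of equal size.
nir, red, green = (np.random.rand(50, 50) for _ in range(3))

# Standard false colour: NIR -> red channel, red -> green, green -> blue,
# so healthy vegetation (high NIR reflectance) appears bright red.
false_colour = np.dstack([norm(nir), norm(red), norm(green)])
```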

Natural or True Colour Composites: A natural or true colour composite is an image
displaying a combination of the visible red, green and blue bands mapped to the corresponding
red, green and blue channels on the computer display. The resulting composite resembles what
would be observed naturally by the human eye: vegetation appears green, water appears dark
blue to black, and bare ground and impervious surfaces appear light grey and brown. Many
people prefer true colour composites, as colours appear natural to our eyes, but subtle
differences in features are often difficult to recognize. Natural colour images can be low in
contrast and somewhat hazy due to the scattering of blue light by the atmosphere.

Digital Image Processing: semi-automatic processing by the computer. Examples include
automatic generation of DTMs, image classification and calculation of surface parameters.

Image Classification

Feature Vector - For one pixel, the values in two bands can be regarded as the components of a
2-D vector, the feature vector.

Feature Space - The graph that shows the values of the feature vectors is called the feature
space or feature space plot.

Scatterplot - Plotting the combinations of the values of all the pixels of one image yields a large
cluster of points. Such a plot is also referred to as a scatterplot.
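
A minimal sketch of a two-band feature space plot, using synthetic bands in place of real image data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two co-registered toy bands; each pixel's (band1, band2) pair is its
# feature vector in a 2-D feature space.
band1 = np.random.normal(80, 10, (100, 100))
band2 = np.random.normal(120, 15, (100, 100))

plt.scatter(band1.ravel(), band2.ravel(), s=1)
plt.xlabel("Band 1 DN")
plt.ylabel("Band 2 DN")
plt.title("Feature space (scatterplot)")
plt.show()
```
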
Principles of Image Classification

 The classes to be distinguished in an image classification need to have different spectral
characteristics.
 A pixel is assigned to a class based on its feature vector, by comparing it to predefined
clusters in the feature space.
 Doing so for all image pixels results in a classified image.
 Classification involves assigning each pixel to a predefined cluster, using a defined
method of comparison.
 Definition of the clusters is an interactive process and is carried out during the training
process.
 Comparison of the individual pixels with the clusters takes place using classifier
algorithms.

Classification Step

 Selection and preparation of the image data.
 Definition of the clusters (classes) in the feature space.
 Selection of the classification algorithm.
 Running the actual classification.
 Validation of the result (ground truth).

Types of image classification:

1. Supervised image classification is a procedure for identifying spectrally similar areas on
an image by identifying 'training' sites of known targets and then extrapolating those spectral
signatures to other areas of unknown targets. Examples: maximum likelihood, minimum
distance, parallelepiped, binary encoding, Mahalanobis distance, spectral angle mapping (a
minimal minimum-distance sketch follows the steps below).
Acquisition of Remote Sensing Data: Obtain satellite or aerial imagery with spectral
bands suitable for the classification task. Common bands include visible, near-infrared,
and thermal bands.
Define Classes or Land Cover Types: Identify and define the classes or land cover
types you want to classify in the image. These classes could include water bodies, urban
areas, forests, agriculture, etc.
Collect Training Samples: Select representative pixels from the image that belong to
each class. These pixels are used to train the classification algorithm. The number of
training samples should be sufficient to capture the spectral variability within each class.
Extract Spectral Signatures: For each training sample, extract the spectral signature by
collecting the pixel values across all relevant bands. These spectral signatures represent
the unique spectral characteristics of each land cover class.
Training the Classifier: Use a supervised classification algorithm, such as Support
Vector Machines (SVM), Random Forest, or Maximum Likelihood Classifier, to train the
model based on the extracted spectral signatures. The algorithm learns to differentiate
between different classes using the training samples.
Validation and Accuracy Assessment: Assess the accuracy of the classification by
using an independent set of samples not used during training. Compare the classified
results with ground truth data to evaluate the accuracy of the classification.
Mapping and area estimation: After accuracy assessment, areas under various land use
and land cover classes are to be estimated and the thematic map is to be generated.
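
A minimal sketch of one of the classifiers listed above, minimum distance to class means, using synthetic training statistics rather than real spectral signatures:

```python
import numpy as np

# Synthetic training signatures: mean feature vector per class
# (e.g., [red, NIR] means for water, vegetation, bare soil).
class_means = np.array([[30.0, 20.0],    # water
                        [40.0, 150.0],   # vegetation
                        [90.0, 100.0]])  # bare soil
class_names = ["water", "vegetation", "bare soil"]

def minimum_distance(pixels, means):
    """Assign each pixel to the class whose mean is nearest (Euclidean)."""
    # distances has shape (n_pixels, n_classes)
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.random.rand(100, 2) * 160       # toy 2-band pixel spectra
labels = minimum_distance(pixels, class_means)
print([class_names[i] for i in labels[:5]])
```
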
2. Unsupervised classification is where the outcomes (groupings of pixels with common
characteristics) are based on the software analysis of an image without the user providing
sample classes. The computer uses techniques to determine which pixels are related and
groups them into classes. The user specifies the number of classes and the spectral classes are
created solely based on the numerical information in the data (i.e. the pixel values for each of
the bands or indices). Algorithms: k-means clustering, ISODATA clustering, etc.

Unsupervised classification is a technique used in remote sensing to categorize pixels in an
image into different classes or clusters without prior knowledge of the ground truth. Unlike
supervised classification, where the algorithm is trained using labeled training samples,
unsupervised classification relies on inherent patterns and similarities within the data. The
most common method for unsupervised classification is clustering, and one popular
algorithm for this purpose is the k-means clustering algorithm.
Data Preparation: Choose the remote sensing image or dataset that you want to classify.
This could be a multispectral or hyperspectral image acquired by satellite or aerial platforms.
Perform necessary preprocessing steps, such as radiometric calibration, atmospheric
correction, and geometric correction, to ensure the quality of the data.

Feature or band Selection: Decide on the bands or spectral channels to be used in the
classification. The selection depends on the characteristics of the objects or land cover
classes you want to differentiate.

Data Normalization: Normalize the data to ensure that each band contributes equally to the
classification process. This step is crucial when working with datasets with different
measurement scales.

Selection of Clustering Algorithm: The clustering algorithm is to be selected. K-means
clustering, a popular unsupervised classification algorithm, separates the data into k clusters
based on the similarity of pixel values.

Determination of the Number of Clusters (k): Determine the number of clusters (k) into which
the whole data set is to be classified.

Cluster Assignment: Apply the chosen clustering algorithm (e.g., k-means) to the
normalized data. The algorithm assigns each pixel to one of the k clusters based on the
similarity of its spectral signature.

Class Label Assignment: Examine the resulting clusters and assign class labels based on the
spectral characteristics of the clusters. This step involves visual interpretation or analysis of
the spectral profiles associated with each cluster.

Accuracy Assessment or Validation (Optional): If ground truth data are available, conduct
an accuracy assessment to evaluate the performance of the unsupervised classification. This
step is optional, as unsupervised classification does not rely on training samples.
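
A minimal k-means sketch in plain numpy, illustrating the assign-and-update loop on a synthetic pixels-by-bands matrix; a production workflow would use a library implementation:

```python
import numpy as np

def kmeans(pixels, k=3, iters=20, seed=0):
    """Minimal k-means: alternately assign pixels to the nearest cluster
    centre, then recompute each centre as the mean of its pixels."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Distance of every pixel to every centre: (n_pixels, k).
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

spectra = np.random.rand(500, 4)       # toy pixels-by-bands matrix
labels, centres = kmeans(spectra, k=3)
print(np.bincount(labels))             # pixels per spectral cluster
```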
