
Unit 5: Digital Image Processing and Classification

11/24/2021 By: Wondifraw Nigussie 1


Chapter contents
5.1 Preprocessing methods
5.1.1 Image rectification and image corrections
5.1.2 Image enhancement techniques: Histograms, Contrasts, Stretching
5.1.3 Filtering: Low and high pass filters
5.2 Image Classification
5.2.1 Supervised classification
5.2.2 Unsupervised classification
5.2.3 Classification accuracy assessment
5.2.4 Error matrix
5.2.5 LU/LC change analysis
5.3 Segmentation



Preprocessing

 Digital image processing of remote sensing imagery involves the identification and/or
measurement of various targets in an image in order to extract useful information about them.

 The information in an image is presented in tones or colors.

 Each cell of a digital image is called a pixel, and the number representing the brightness of
the pixel is called a digital number (DN).

 Digital image processing can improve image visual quality, selectively enhance and highlight
particular image features and classify, identify and extract spectral and spatial patterns
representing different phenomena from images.



 Image processing cannot add information beyond what is contained in the original image data,
although it can optimize the visualization so that we see more information in the enhanced
images than in the original.

 Extracting information from images is a subjective process; it depends on:

 the quality and characteristics of the image we have, and

 the experience and knowledge of the interpreter.



 Raw remotely sensed data, received from imaging sensors mounted on satellite platforms,
generally contain flaws and deficiencies.

 The methods used to correct these deficiencies and remove flaws from the data are termed
pre-processing methods.

 This correction model involves the initial processing of raw image data to correct geometric
and radiometric distortions.

 There is no definitive list of “standard” preprocessing steps, because each project requires
individual attention, and some preprocessing decisions may be a matter of personal preference.



 Depending on the user's requirement, some standard correction procedures may be carried
out by the ground station operators before the data is delivered to the end-user.

 These procedures include:

 radiometric correction to correct for uneven sensor response over the whole image and

 geometric correction to correct for geometric distortion due to Earth's rotation and other
imaging conditions (such as oblique viewing).

 The purpose of image preprocessing is to correct distorted or degraded image data to create a
more faithful representation of the original scene.

 This typically involves the initial processing of raw image data to apply radiometric and
atmospheric corrections and to minimize geometric distortions, followed by layer stacking,
resampling, image enhancement, mosaicking and subsetting, which are of utmost importance for
land use / land cover analysis.

 Layer Stacking

 Layer stacking is a method to build a new multiband file from geo-referenced images of
various pixel sizes, extents, and projections.

 Mosaicking and Subsetting

 All scenes from the same year are mosaicked together in their spatial sequence to get a
single image that covers all parts of the study area.

 From the mosaicked image, the portion that falls within the study area can be extracted
(subsetted) to limit the mosaicked image to the study area boundary.
5.1.1 Image rectification and image corrections

 It is concerned with the correction of distortion, degradation, and noise induced in the
imaging process.
 The aim of image restoration and rectification is to produce a corrected image that is as close
as possible, both geometrically and radiometrically, to the original object's radiant
energy distribution.
Geometric Correction
 Ground control consists of carefully located positions of known longitude and latitude, UTM
coordinates, or other known grid coordinates that are recognizable on images and can be used
to determine geometric corrections.

 The original pixels are then re-sampled to match the correct geometric coordinates.
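Re-sampling can be sketched with the simplest rule, nearest neighbor. A minimal, hypothetical illustration (the function name is ours, and plain index scaling stands in for the full geometric transformation that a real rectification would apply):

```python
def resample_nearest(image, out_rows, out_cols):
    """Nearest-neighbor re-sampling: each output cell takes the value of
    the nearest input pixel (here approximated by index scaling)."""
    rows, cols = len(image), len(image[0])
    return [[image[int(i * rows / out_rows)][int(j * cols / out_cols)]
             for j in range(out_cols)]
            for i in range(out_rows)]

# Doubling the resolution of a 2x2 grid simply repeats each pixel.
resampled = resample_nearest([[1, 2], [3, 4]], 4, 4)
```

Nearest neighbor preserves the original DN values (no averaging), which is why it is often preferred before classification.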



Radiometric Correction

 Radiometric correction of remote sensing data normally involves the process of correcting
radiometric errors or distortions of digital images to improve the quality of satellite images

 Radiometric correction is mainly performed to normalize differences among scenes of imagery
taken at different time periods.
Image enhancement
 The aim of digital enhancement is to amplify slight differences in brightness between features
for better clarity of the image scene.

 This means digital enhancement increases the separability (contrast) between the interested
classes or features.



 Digital image enhancement may be defined as mathematical operations applied to digital
remote sensing input data to improve the visual appearance of an image for better
interpretability or subsequent digital analysis.

 To improve the visual interpretability of an image by increasing the apparent/noticeable
distinction between the features of the scene.

 The objective is to create a new image from the original image in order to increase the amount
of information that can be visually interpreted from the data.



Con…

 Enhancement operations are normally applied to image data after the appropriate restoration
procedures have been performed.

Common problems that can be removed by image enhancement:

(1) Low sensitivity of detectors,

(2) Weak signal of objects present on earth surface,

(3) Similar reflection of different objects,

(4) Environmental conditions at the time of recording, and

(5) The human eye's poor ability to discriminate slight differences.



5.1.2 Image enhancement techniques: Histograms, Contrasts, Stretching

 The term enhancement is used to mean the alteration of the appearance of an image in such a way that the
information contained in that image is more readily interpreted visually in terms of a particular need.

 The image enhancement techniques are applied either to single-band images or separately to the individual
bands of a multiband image set

Contrast Stretching

 To expand the narrow range of brightness values of an input image over a wider range of gray values

 Certain features may reflect more energy than others. This results in good contrast within the image and features
that are easy to distinguish

 The contrast level between the features in an image is low when features reflect nearly the same level of energy

 Often only a small portion of the sensor's full dynamic range is used, and the corresponding
image is dull and lacking in contrast or overly bright.



Con…
 The result is an image lacking in contrast - but by remapping the DN distribution to the full
display capabilities of an image processing system, we can recover a beautiful image.
 Contrast stretching falls into two categories:
1. Linear Contrast Stretching
2. Histogram Equalization

Linear Contrast Stretching


 Stretches the range of data from the lower values to the higher values so there is higher
contrast when an image is displayed

[Histogram figure: frequency of pixels vs. brightness value, 0–255]
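The stretch can be sketched as a simple linear remapping of each DN. A minimal illustration (the function name and the DN range 84–153 are hypothetical, chosen only to show a narrow input range expanded onto the full 0–255 display range):

```python
def linear_stretch(dn, old_min, old_max, new_min=0, new_max=255):
    """Linearly remap a digital number (DN) from [old_min, old_max]
    onto [new_min, new_max]."""
    scale = (new_max - new_min) / (old_max - old_min)
    return round(new_min + (dn - old_min) * scale)

# A narrow input range 84..153 is expanded over the full display range.
stretched = [linear_stretch(dn, 84, 153) for dn in [84, 100, 120, 153]]
```

The minimum input DN maps to 0 and the maximum to 255, so previously similar gray levels become visually distinguishable.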



Linear stretch [figure]

Example of linear stretching [figure]


Histogram Equalization (or uniform distribution stretch):

 Input pixels are redistributed to produce a uniform population density of pixels along the
output axis,

which results in the output histogram having a wide spacing of bins (a bin being all pixels
with the same DN) in the center of the distribution curve and a close spacing of the
less-populated bins at the head and tail of the histogram.

 A histogram is a graphical representation of the brightness values that comprise an image.
The brightness values (i.e. 0–255) are displayed along the x-axis of the graph.

 The frequency of occurrence of each of these values in the image is shown on the y-axis.
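The redistribution described above can be sketched with a lookup table built from the cumulative histogram. A minimal sketch (our own illustrative names, operating on a flat list of DNs for brevity; real images are 2-D):

```python
def equalize(pixels, levels=256):
    """Histogram equalization: map DNs through the normalized cumulative
    histogram so output values are spread approximately uniformly."""
    n = len(pixels)
    # Histogram: frequency of occurrence of each brightness value.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Lookup table (only meaningful for DN values actually present).
    lut = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for v in range(levels)]
    return [lut[p] for p in pixels]

equalized = equalize([50, 50, 100, 200])
```

Heavily populated DNs are pushed apart while sparse head/tail values are compressed, matching the bin-spacing behavior described above.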



5.1.3 Spatial filtering

 Encompasses another set of digital processing functions which are used to enhance the
appearance of an image

 Spatial filters are designed to highlight or suppress specific features in an image based on
their spatial frequency.

 Spatial filters pass (emphasize) or suppress (de-emphasize) image data of various spatial
frequencies.

 Spatial frequency refers to the number of changes in brightness value for any area within a
scene.



 High spatial frequency → rough areas
– High frequency corresponds to image elements of smallest size
– An area with high spatial frequency will have rapid change in digital values with distance
(i.e. dense urban areas and street networks)
 Low spatial frequency → smooth areas
– Low frequency corresponds to image elements of (relatively) large size.
– An object with a low spatial frequency only changes slightly over many pixels and will
have gradual transitions in digital values (i.e. a lake or a smooth water surface).

Numerical Filters: Low-pass Filters


 A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and
reduce the smaller detail in an image.
 Thus, low-pass filters generally serve to smooth the appearance of an image.
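A low-pass filter can be sketched as a moving-window average. A minimal sketch (illustrative names; assumes a 3x3 window, with edge pixels averaged over whatever neighbors exist):

```python
def low_pass(image, size=3):
    """Mean (smoothing) filter: each output pixel is the average of its
    neighborhood, which suppresses high-frequency detail."""
    rows, cols = len(image), len(image[0])
    k = size // 2
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Clip the window at the image edges.
            vals = [image[m][n]
                    for m in range(max(0, i - k), min(rows, i + k + 1))
                    for n in range(max(0, j - k), min(cols, j + k + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

# A single bright pixel is smeared across its neighborhood.
smoothed = low_pass([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
```

The isolated bright pixel (a high-frequency feature) is heavily attenuated, which is exactly the smoothing effect described above.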



High-pass Filter
 High-pass filters do the opposite and serve to sharpen the appearance of fine detail in an
image.
 One implementation of a high-pass filter first applies a low-pass filter to an image and then
subtracts the result from the original, leaving behind only the high spatial frequency
information.
 Streets and highways, and some streams and ridges, are greatly emphasized. The trademark
of a high pass filter image is that linear features commonly are defined as bright lines with a
dark border.
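The subtraction implementation described above can be sketched directly: smooth the image with a mean filter, then subtract the result from the original. A minimal sketch (illustrative names; a 3x3 mean filter serves as the low-pass step):

```python
def high_pass(image):
    """High-pass by subtraction: original minus its 3x3 mean-filtered
    (low-pass) version, leaving only high spatial-frequency detail."""
    rows, cols = len(image), len(image[0])

    def local_mean(i, j):
        # Average over the 3x3 neighborhood, clipped at the edges.
        vals = [image[m][n]
                for m in range(max(0, i - 1), min(rows, i + 2))
                for n in range(max(0, j - 1), min(cols, j + 2))]
        return sum(vals) / len(vals)

    return [[image[i][j] - local_mean(i, j) for j in range(cols)]
            for i in range(rows)]

# A linear feature (the bright middle row) is strongly emphasized.
edges = high_pass([[0, 0, 0], [9, 9, 9], [0, 0, 0]])
```

Uniform areas come out as zero, while linear features like the bright row survive, consistent with the street-and-ridge emphasis noted above.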



5.2 Image Classification

 Image classification automatically categorizes all pixels in an image into land use / land
cover classes using spectral features, i.e. the brightness and "color" information contained in
each pixel.

 There is a relationship between land cover and measured reflectance values, which is used to
extract information from the image data.

 Image classification refers to the computer-assisted interpretation of remotely sensed images.


 A human analyst attempting to classify features in an image uses the elements of visual
interpretation to identify homogeneous groups of pixels which represent various features or
land cover classes of interest.
Digital image classification
 Uses the spectral information represented by the digital numbers in one or more spectral
bands, and attempts to classify each individual pixel based on this spectral information.
 The objective is to assign all pixels in the image to particular classes or themes (e.g.
water, agriculture).
Information classes and spectral classes
 Information classes are those categories of interest that the analyst is actually trying to
identify in the imagery, such as different kinds of crops, different forest types
 Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their
brightness values in the different spectral channels of the data.



Con…

 Common classification procedures can be broken down into two broad subdivisions known
as unsupervised and supervised classification based on the methods used.

5.2.1 Supervised classification


 The analyst identifies in the imagery homogeneous, representative samples of the different
surface cover types (information classes) of interest, which are known as training areas.
 The selection of appropriate training areas is based on the analyst's familiarity with the
geographical area and their knowledge of the actual surface cover types present in the image
 Once the computer has determined the signatures for each class, each pixel in the image is
compared to these signatures and labeled as the class it most closely "resembles" digitally.
 Thus, in a supervised classification we first identify the information classes (land cover
types), which are then used to determine the spectral classes that represent them.
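The "most closely resembles" step can be sketched with a minimum-distance decision rule, one of several common rules (maximum likelihood is another; the slides do not specify one). A minimal sketch with hypothetical class names and signature values:

```python
def classify_pixel(pixel, signatures):
    """Assign a pixel to the class whose training signature (mean DN per
    band) it most closely resembles, using Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(signatures, key=lambda name: dist(pixel, signatures[name]))

# Hypothetical signatures derived from training areas: mean DN per band.
signatures = {"water": [20, 15, 10], "agriculture": [60, 90, 45]}
label = classify_pixel([22, 18, 12], signatures)
```

In practice the signatures are the per-band statistics computed from the analyst's training areas, and every pixel in the scene is labeled this way.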



5.2.2 Unsupervised classification
 It requires no advance information about the classes of interest.
 Unsupervised classification examines the data and breaks it into the most prevalent natural
spectral groupings, or clusters, present in the data.
 The analyst then identifies these clusters as land cover classes through a combination of
familiarity with the region and ground truth visits.
 It is important to recognize, however, that the clusters unsupervised classification produces
are not information classes, but spectral classes (i.e., they group together features (pixels)
with similar reflectance patterns).
 It is thus usually the case that the analyst needs to reclassify spectral classes into information
classes.
 The system might identify classes for asphalt and cement which the analyst might later group
together, creating an information class called pavement.
 Unsupervised classification in essence reverses the supervised classification process.



Con…
 Spectral classes are grouped first, based solely on the numerical information in the data, and
are then matched by the analyst to information classes.
 Programs, called clustering algorithms, are used to determine the natural (statistical)
groupings or structures in the data.
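Clustering can be sketched for a single band with a minimal k-means procedure (illustrative only, not a production algorithm; assumes k ≥ 2 and scalar DN values, whereas real classifiers cluster multi-band vectors):

```python
def kmeans_1d(values, k, iters=20):
    """Group single-band DN values into k spectral clusters by repeatedly
    assigning each value to the nearest cluster mean."""
    lo, hi = min(values), max(values)
    # Spread the initial cluster centers evenly over the data range.
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: recompute each cluster mean.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Dark pixels (~11) and bright pixels (~200) separate into two clusters.
labels, centers = kmeans_1d([10, 12, 11, 200, 198, 202], 2)
```

The resulting clusters are spectral classes only; as the text notes, the analyst must still relabel them as information classes.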
5.2.3 Accuracy Assessment
 Accuracy assessment is the assessment of the accuracy of the final images produced and is
typically used to express the degree of 'correctness' of a classification.
 In essence, classification accuracy is typically taken to mean the degree to which the derived
image classification agrees with reality or conforms to the 'truth'.
 A set of reference pixels representing geographic points on the classified image is required
for the accuracy assessment.
 If information derived from remote sensing data is to be used in some decision-making
process, then it is critical that some measure of its quality be known.



Con…

 Classification accuracy assessment is a general term for comparing the classification to
geographical data that are assumed to be true, in order to determine the accuracy of the
classification process.

 Usually, the assumed true data are derived from ground truth. It is usually not practical to
ground truth or otherwise test every pixel of a classified image.

 Therefore a set of reference pixels is usually used.

 Reference pixels are points on the classified image for which actual data are (will be) known.
The reference pixels are randomly selected.



Con…

 Once a classification exercise has been carried out there is a need to determine the degree of
error in the end-product. These errors could be thought of as being due to incorrect labeling
of the pixels.

 The basic idea is to compare the predicted classification (supervised or unsupervised) of each
pixel with the actual classification as discovered by ground truth



5.2.4 The Confusion Matrix (Error Matrix)
 The most commonly-used method of representing the degree of accuracy of a classification is
to build a confusion (or error) matrix.
 The analyst selects a sample of pixels and then visits the sites (or vice-versa), and builds a
confusion matrix. This is used to determine the nature and frequency of errors.
 Overall map accuracy = total on diagonal / grand total
Confusion matrix table

                                        Reference Data
                           Vegetation  Non-vegetation  Total  User's accuracy (%)
Classification  Vegetation        172              28    200                 86.0
Data       Non-vegetation          20             280    300                 93.3
                     Total        192             308    500
Producer's accuracy (%)          89.6            90.9
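The accuracy measures can be computed directly from the matrix. A minimal sketch in Python (variable names are ours), using the vegetation example above:

```python
# Confusion matrix: rows = classified data, columns = reference data.
matrix = [[172, 28],    # classified as vegetation
          [20, 280]]    # classified as non-vegetation

row_totals = [sum(row) for row in matrix]        # per classified class
col_totals = [sum(col) for col in zip(*matrix)]  # per reference class
diagonal = sum(matrix[i][i] for i in range(len(matrix)))
grand_total = sum(row_totals)

# Overall map accuracy = total on diagonal / grand total.
overall = diagonal / grand_total
# User's accuracy: correct pixels / row total (reliability of the map).
users = [matrix[i][i] / row_totals[i] for i in range(len(matrix))]
# Producer's accuracy: correct pixels / column total (omission error).
producers = [matrix[i][i] / col_totals[i] for i in range(len(matrix))]
```

Here the overall accuracy is 452/500 = 90.4%, and the user's and producer's values reproduce the table's 86% / 93.3% and 89.6% / 90.9%.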



Con…

 In some empirical studies, it is noted that a minimum accuracy value of 85% is required for
effective and reliable land cover change analysis.

 Based on the purpose of the land cover map, different people may use different accuracy
levels.
5.2.5 LU/LC change analysis (land transformation): change detection
 Land use / land cover change analysis from remotely sensed data is used to analyze the extent
and nature of natural resources in a specific area and to formulate mitigation measures.

 It has the advantage of showing change and no-change areas as well as 'from-to' information.
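The 'from-to' information can be captured in a change matrix that cross-tabulates each pixel's class at the two dates. A minimal sketch (class names and maps are hypothetical):

```python
def change_matrix(before, after, classes):
    """'From-to' change detection: count, for every pair of classes, the
    pixels that moved from one class (time 1) to the other (time 2)."""
    m = {(f, t): 0 for f in classes for t in classes}
    for b_row, a_row in zip(before, after):
        for b, a in zip(b_row, a_row):
            m[(b, a)] += 1
    return m

# Hypothetical classified maps for two dates.
before = [["forest", "forest"], ["water", "crop"]]
after = [["forest", "crop"], ["water", "crop"]]
changes = change_matrix(before, after, ["forest", "water", "crop"])
```

Diagonal entries are the no-change pixels; off-diagonal entries give the 'from-to' transitions (here, one forest pixel converted to crop).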



Con…

 Land cover is the biophysical state of the earth's surface and immediate subsurface.

 Land cover "describes the physical state of the land surface: as in cropland, mountains, or
forests".

 Land use involves both the manner in which the biophysical attributes of the land are
manipulated and the intent underlying that manipulation, i.e. the purpose for which the land is
used.

 Land use is the way in which, and the purpose for which, human beings employ the land
and its resources.



5.3 Image segmentation

 Image segmentation is the process of partitioning a digital image into multiple segments
(sets of pixels, also known as image objects).

 The goal of segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze.
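The partitioning idea can be sketched with the simplest possible approach: threshold the image, then group connected pixels into segments (image objects). This is only an illustration with names of our choosing; operational segmentation algorithms (e.g. multi-band region growing) are considerably more sophisticated:

```python
def segment(image, t):
    """Threshold, then label 4-connected regions by flood fill; each
    distinct positive label is one segment (image object)."""
    rows, cols = len(image), len(image[0])
    mask = [[image[i][j] > t for j in range(cols)] for i in range(rows)]
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and labels[i][j] == 0:
                next_label += 1
                stack = [(i, j)]
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and mask[r][c] and labels[r][c] == 0):
                        labels[r][c] = next_label
                        stack += [(r + 1, c), (r - 1, c),
                                  (r, c + 1), (r, c - 1)]
    return labels

# Two bright regions become two separate image objects (labels 1 and 2).
objects = segment([[9, 9, 0], [0, 0, 0], [0, 9, 9]], 5)
```

Each labeled set of pixels is one "image object" in the sense defined above, ready for object-based analysis instead of per-pixel analysis.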



Thank you!!!
Enjoy
Remote Sensing!!

