Al Neelain University
Faculty of Petroleum and Minerals
Fifth Year
2013-2014
The range 0-255 represents the full grey scale, where each level corresponds to a tone: the zero value defines the pure black tone, whereas 255 defines the white tone.
Architecture of a digital image
Raster array and accompanying digital number (DN) values for a single-band image.
Dark pixels have low DN values while bright pixels have high DN values.
Image Resolution
• Digital images are usually described by their resolution.
• Resolution is a broad term commonly used to describe the number of
pixels that can be displayed on a display device or the area on the
ground that a pixel represents in an image file.
Four major types of resolution are always considered:
Spectral resolution: describes the specific wavelength intervals that a sensor
can record;
Spatial resolution: describes the area on the ground represented by each
pixel;
Radiometric resolution: indicates the number of possible data file values in
each band;
Temporal resolution: indicates how often a sensor obtains imagery of a
particular area.
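Radiometric resolution lends itself to a quick worked check: an n-bit sensor can distinguish 2^n discrete levels. A minimal sketch (the bit depths shown are illustrative):

```python
# Radiometric resolution: an n-bit sensor records 2**n discrete DN levels.
def dn_levels(bits):
    """Number of distinct digital-number values for a given bit depth."""
    return 2 ** bits

print(dn_levels(8))  # 256 levels (DNs 0-255), as in 8-bit data
print(dn_levels(6))  # 64 levels (DNs 0-63)
```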
The spectral resolution of a sensor system is the number and width of spectral bands in
the sensing device. The simplest form of spectral resolution is a sensor with one band
only, which senses visible light. An image from this sensor would be similar in
appearance to a black and white photograph from an aircraft. A sensor with three
spectral bands in the visible region of the EM spectrum would collect similar information
to that of the human vision system. The Landsat TM sensor has seven spectral bands
located in the visible and near to mid infrared parts of the spectrum.
The spatial resolution (ground resolution) is the ground area imaged for the instantaneous
field of view (IFOV) of the sensing device. Spatial resolution may also be described as the
ground surface area that forms one pixel in the satellite image.
Band sequential (BSQ) format is optimal for spatial (X, Y) access of any part of a single
spectral band. Good for multispectral images.
Band interleaved by pixel (BIP) format stores all bands for the first pixel, then all bands
for the second pixel, and so on, interleaved up to the number of pixels. This
format provides optimum performance for spectral (Z) access of the image data. Good for
hyperspectral images.
Band interleaved by line (BIL) format stores the first line of the first band, then the first
line of the second band, then the first line of the third band, interleaved up to the number of bands. Subsequent lines for each band
are interleaved in similar fashion. This format provides a compromise in performance between
spatial and spectral processing and is the recommended file format for most ENVI processing
tasks. Good for images with 20-60 bands.
[Figure: a 4-row x 5-column raster of DN values for Bands 1-4 of a toy image, and the same values rearranged in BIL, BSQ and BIP storage order.]
Band sequential (BSQ) format stores information
for the image one band at a time. In other words,
data for all pixels of band 1 are stored first, then
data for all pixels of band 2, and so on.
Value=image(c, r, b)
Band interleaved by pixel (BIP) data is similar to
BIL data, except that the data for each pixel is
written band by band. For example, with the same
three-band image, the data for bands 1, 2 and 3 are
written for the first pixel in column 1; the data for
bands 1, 2 and 3 are written for the first pixel in
column 2; and so on.
Value=image(b, c, r)
Band interleaved by line (BIL) data stores pixel
information band by band for each line, or row, of
the image. For example, given a three-band image,
all three bands of data are written for row 1, all
three bands of data are written for row 2, and so
on, until the total number of rows in the image is
reached.
Value=image(c, b, r)
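The three layouts and their index conventions can be sketched with plain Python nested lists, the outermost index varying slowest to mirror each format's file order (a toy 2-row x 3-column x 2-band image with synthetic values):

```python
# Toy image: 2 rows (r), 3 columns (c), 2 bands (b); value = 100*b + 10*r + c
rows, cols, bands = 2, 3, 2

def value(r, c, b):
    return 100 * b + 10 * r + c

# BSQ: all pixels of band 0, then all pixels of band 1 -> index [b][r][c]
bsq = [[[value(r, c, b) for c in range(cols)] for r in range(rows)] for b in range(bands)]
# BIL: for each row, all bands of that row -> index [r][b][c]
bil = [[[value(r, c, b) for c in range(cols)] for b in range(bands)] for r in range(rows)]
# BIP: for each pixel, all bands -> index [r][c][b]
bip = [[[value(r, c, b) for b in range(bands)] for c in range(cols)] for r in range(rows)]

# The same sample (row 1, column 2, band 1) retrieved from each layout:
r, c, b = 1, 2, 1
assert bsq[b][r][c] == bil[r][b][c] == bip[r][c][b] == value(r, c, b)
```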
Most of the image processing functions available in image analysis systems can be grouped into four categories:
Pre-processing
Image Enhancement
Image Transformation
Image Classification and Analysis
Pre-processing involves operations that are essentially done prior to other image processing
techniques and includes image rectification and image registration. Image rectification
includes the radiometric and geometric correction of the satellite raw data.
Image enhancement involves techniques for the creation of new modified images that contain
more information to ease the visual interpretation of the overall image &/or certain features.
Image transformation usually deals with the processing of multiple band images of the same scene.
Image classification involves the categorization of the pixels of a scene into various thematic classes.
Computing the statistics of the multispectral satellite data is the initial step towards digital image
processing. The statistical parameters usually calculated during the different phases of digital
image processing include: the maximum and minimum values of the digital numbers
(DN), or "brightness values (BV)", for each band of the whole satellite data set, the mean,
the variance and standard deviation, the correlation matrix, and the frequency of brightness
values in each band. Some of these parameters are defined below.
Minimum and maximum: $\min_k = \min_i BV_{ik}$, $\max_k = \max_i BV_{ik}$

Mean: $\mu_k = \frac{1}{n}\sum_{i=1}^{n} BV_{ik}$

Variance: $\mathrm{var}_k = \frac{\sum_{i=1}^{n}\left(BV_{ik}-\mu_k\right)^2}{n-1}$

Standard deviation: $s_k = \sqrt{\mathrm{var}_k}$

Coefficient of variation (CV): $CV_k = \frac{s_k}{\mu_k}$

Skewness, kurtosis and higher-order moments are defined per band in a similar fashion.
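These per-band statistics can be computed with a short plain-Python sketch (the five DNs are illustrative; note the n - 1 denominator for the sample variance):

```python
import math

band = [130, 165, 100, 135, 145]  # sample DNs for one band

n = len(band)
mean = sum(band) / n
# Sample variance uses n - 1 in the denominator, as in the formulas above.
var = sum((bv - mean) ** 2 for bv in band) / (n - 1)
std = math.sqrt(var)
cv = std / mean  # coefficient of variation

print(min(band), max(band), mean, round(var, 1))  # 100 165 135.0 562.5
```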
Multivariate Image Statistics
Remote sensing research is often concerned with the
measurement of how much radiant flux is reflected or emitted
from an object in more than one band. It is useful to compute
multivariate statistical measures such as covariance and
correlation among the several bands to determine how the
measurements covary. Variance–covariance and correlation
matrices are used in remote sensing principal components
analysis (PCA), feature selection, classification and
accuracy assessment.
Covariance
The different remote-sensing-derived spectral measurements for each pixel
often change together in some predictable fashion. If there is no relationship
between the brightness value in one band and that of another for a given
pixel, the values are mutually independent; that is, an increase or decrease in
one band’s brightness value is not accompanied by a predictable change in
another band’s brightness value. Because spectral measurements of
individual pixels may not be independent, some measure of their mutual
interaction is needed. This measure, called the covariance, is the joint
variation of two variables about their common mean.
$SP_{kl} = \sum_{i=1}^{n} BV_{ik}\,BV_{il} - \frac{\left(\sum_{i=1}^{n} BV_{ik}\right)\left(\sum_{i=1}^{n} BV_{il}\right)}{n}$

$\mathrm{cov}_{kl} = \frac{SP_{kl}}{n-1}$
Correlation
To estimate the degree of interrelation between variables in a manner not influenced by
measurement units, the correlation coefficient, is commonly used. The correlation
between two bands of remotely sensed data, rkl, is the ratio of their covariance (covkl)
to the product of their standard deviations (sksl); thus:
$r_{kl} = \frac{\mathrm{cov}_{kl}}{s_k s_l}$
If we square the correlation coefficient (rkl), we obtain the sample coefficient of
determination (r2), which expresses the proportion of the total variation in the values
of “band l” that can be accounted for or explained by a linear relationship with the
values of the random variable “band k.” Thus a correlation coefficient (rkl) of 0.70
results in an r2 value of 0.49, meaning that 49% of the total variation of the values of
“band l” in the sample is accounted for by a linear relationship with values of “band k.”
Worked example (5 pixels, 4 bands):

Pixel  Band 1 (green)  Band 2 (red)  Band 3 (NIR)  Band 4 (NIR)
(1,1)  130             57            180           205
(1,2)  165             35            215           255
(1,3)  100             25            135           195
(1,4)  135             50            200           220
(1,5)  145             65            205           235

For bands 1 and 2, the cross products (Band 1 x Band 2) are 7,410; 5,775; 2,500; 6,750 and 9,425, so $\sum BV_{i1}BV_{i2} = 31{,}860$, with $\sum BV_{i1} = 675$ and $\sum BV_{i2} = 232$. Hence

$SP_{12} = 31{,}860 - \frac{675 \times 232}{5} = 540$, and $\mathrm{cov}_{12} = \frac{540}{4} = 135$.
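The worked covariance example can be reproduced in a few lines of plain Python (band values are the five sample pixels for bands 1 and 2):

```python
import math

band1 = [130, 165, 100, 135, 145]
band2 = [57, 35, 25, 50, 65]
n = len(band1)

# Corrected sum of products: SP_kl = sum(x*y) - sum(x)*sum(y)/n
sp12 = sum(x * y for x, y in zip(band1, band2)) - sum(band1) * sum(band2) / n
cov12 = sp12 / (n - 1)
print(sp12, cov12)  # 540.0 135.0

# Correlation coefficient r_kl = cov_kl / (s_k * s_l)
def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

r12 = cov12 / (std(band1) * std(band2))
```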
Image distortions are often classified into geometric distortions
and radiometric distortions; errors of both types occur during satellite
data acquisition.
This processing category includes those operations that are normally required
prior to the main data analysis and information extraction. They are
necessary to restore the image to resemble the scene on the terrain by
compensating for image errors, noise, and geometric distortions introduced
during the scanning, recording and playback operations (Sabins, 1999), and are
generally grouped into radiometric or geometric corrections:
Rectification is the process of transforming the data from one grid system into
another. In other words, rectification is a procedure that leads to the adjustment of the grid of a
satellite scene in terms of position and radiance. Rectification procedures are usually
carried out whenever the satellite data are unprojected or need to be reprojected, and can be done in two ways:
• first, by registering the data to another image that has already been projected (image-to-image registration), and
• second, by registering the data to a digital georeferenced map (image-to-map registration).
The corrections referred to here are the ones related to registering an image to a
specified geographic projection, or another image.
The geometric registration process involves
identifying the image coordinates of several
clearly discernible points, called ground control points
(or GCPs), in the distorted image (A - A1 to
A4), and matching them to their true
positions in ground coordinates (e.g.
latitude, longitude). The true ground
coordinates are typically measured from a
map (B - B1 to B4), in paper or digital format.
This is image-to-map registration.
Geometric Corrections ……….cont.
It is well known that the shorter wavelengths of ultraviolet and blue light suffer
scattering when passing through the atmosphere. This process is termed selective
scattering. Selective scattering is caused by gases such as nitrogen, oxygen and
carbon dioxide; it causes haze and reduces the contrast ratio of images. A major
objective of remote sensing is to record the amount of energy reflected from every
particular part of the earth's surface. Since the selective scattering of EMR by the
atmosphere adds a portion to the recorded radiation reflected from the
surface of the earth, correction for this portion is essential. This correction is termed haze
correction.
Haze Correction ………….cont.
Unfortunately, the accurate estimation of the added value needs precise data about the
atmospheric conditions, e.g. temperature, relative humidity, atmospheric pressure and
visibility, which is nearly always impossible to obtain for extended areas.
Haze correction is based on the assumption that, in each band of a scene under
investigation, there should be some pixels at or close to zero brightness value, but that the
atmospheric effect has added a constant value to each pixel in each band (Richards & Jia,
1999). Completely dark pixels might correspond to areas of deep, clear, open water bodies or
deep shadows (Gupta, 2003).
Such pixels record a non-zero value, usually reflected by the minimum DN in
each band. In this study the dark-pixel subtraction or histogram-minimum method
described by Crane (1971) and Chavez et al. (1977) has been adopted for the haze removal.
Thorough examination of the histograms of each band reveals a progressive offset increase
of the plots from band 7 through to band 1. The amount of this offset accounts for the path
radiance, which is then subtracted from each pixel in each corresponding band.
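A minimal sketch of this dark-pixel (histogram-minimum) subtraction, assuming a single band held as a nested list of DNs and a constant additive path radiance (the values are illustrative):

```python
# Histogram-minimum (dark-pixel) haze correction: subtract each band's
# minimum DN, assumed to equal the path radiance added by scattering.
def haze_correct(band):
    offset = min(min(row) for row in band)
    return [[dn - offset for dn in row] for row in band]

band = [[14, 60, 35],
        [22, 14, 90]]
corrected = haze_correct(band)
print(corrected)  # the darkest pixel becomes 0
```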
BANDS DISPLAY
It is well known that the human eye can discriminate more color hues than grey
levels or tones. Colored images are more informative than black and white
images. It is also known that white can result from the mixing of equal
amounts of Red (R), Green (G) and Blue (B), whereas other colors can be
obtained by mixing different percentages of R, G & B. This process of color
mixing is known as additive color mixing. To display a colored satellite image
only three bands are needed. These bands should be of the same resolution and
registered to the same geographic system. Each of the three bands is displayed
in one of the three major color channels, Red, Green or Blue. The resultant is a
color composite image. RGB color composite images are simple and effective as
the mixing of the three primary colors (red, green & blue) can produce a wide
range of colors (Harris et. al, 1999).
BANDS DISPLAY
A natural color composite image results from the red, green and blue wavelength
bands when represented in R, G, B respectively. Other band combinations
produce false-color images. A simple rule for band combination is to
render the most informative band for a particular purpose in red, the next in
green and the least informative in blue (Drury, 1993).
The six reflected bands of Landsat 7 (Bands 1, 2, 3, 4, 5 & 7) give the chance for
20 combinations, for each of which there are six RGB orders (options).
ENORMOUS CONFUSION!
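The combination count is easy to verify with Python's itertools: choosing 3 of the 6 reflected bands gives C(6,3) = 20 combinations, each displayable in 3! = 6 RGB orders:

```python
from itertools import combinations, permutations

bands = [1, 2, 3, 4, 5, 7]  # Landsat 7 reflected bands
combos = list(combinations(bands, 3))
orders = list(permutations("RGB"))

print(len(combos))               # 20 three-band combinations
print(len(orders))               # 6 RGB orders per combination
print(len(combos) * len(orders)) # 120 possible color composites
```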
Landsat7 band3 set in the Red Channel
Image enhancement involves the modification of the original set of digital numbers.
Point operations modify the digital numbers of each pixel of the image under
enhancement, while local operations modify the digital numbers in the context of
the surrounding ones (Jensen, 1996).
Enhancing certain features of an image may occur at the expense of other features
which may become relatively subdued (Gupta, 2003).
Image Enhancement … … … … … … … cont.
Enhancements are used to make the image easier to interpret and understand
visually.
Raw image data are usually dark and occupy only a limited portion of the
dynamic range (0 – 255 brightness levels in the case of 8-bit data).
Contrast enhancement involves changing the original values (DN or BV) so that
more of the available range is used, thereby increasing the contrast between targets
and their backgrounds. The key to understanding contrast enhancements is to
understand the concept of an image histogram.
Image Histogram
There are different techniques and methods of enhancing contrast and detail in an
image.
Image Enhancement … … … … … … … cont.
The raw satellite data are usually dim and lack contrast, because natural
features have a low range of reflectance in a specific wave band (Gibson &
Digital numbers in satellite data do not encompass the entire dynamic range of
the grey-scale (0 – 255) and are often compressed into a small part of the
available range.
To view satellite images, the full grey-scale must be utilized by "stretching" the
digital numbers in the raw data. Data stretching is either linear or nonlinear.
Linear contrast stretching rescales the digital number range of the raw image to
fill the whole dynamic range (0 – 255) by assigning 0 to the digital number
with the minimum value and 255 to the digital number with the maximum value,
and stretching the middle digital numbers accordingly.
This type of contrast stretching is largely dependent upon the statistics of the scene:
mainly the mean and the standard deviation, in addition to the minimum and the
maximum values, control the calculation. The output DN values are calculated by the
equation:

$DN_{out} = \frac{DN_{in} - DN_{min}}{DN_{max} - DN_{min}} \times 255$
Image Enhancement … … … … … … … cont.
This form of enhancement works well if the histogram values are uniformly
distributed.
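A minimal min-max linear stretch, following the rescaling just described (the raw DNs are illustrative):

```python
# Min-max linear stretch: DN_out = (DN_in - min) / (max - min) * 255
def linear_stretch(band):
    lo, hi = min(band), max(band)
    return [round((dn - lo) / (hi - lo) * 255) for dn in band]

raw = [60, 60, 100, 158]   # raw DNs occupying a narrow range
print(linear_stretch(raw)) # min maps to 0, max to 255
```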
Non-Linear Contrast Enhancement
In these methods, the input and output data values follow a non-linear
transformation. The general form of the non-linear contrast
enhancement is defined by y = f (x), where x is the input data value and
y is the output data value. The non-linear contrast enhancement
techniques have been found to be useful for enhancing the color contrast
between closely related classes and the subclasses of a main class.
A type of non-linear contrast stretch involves scaling the input data
logarithmically. This enhancement has the greatest impact on the brightness
values found in the darker part of the histogram. It can be reversed to
enhance values in the brighter part of the histogram by scaling the input data
using an inverse log function.
Non-linear contrast stretches are used when certain parts of the histogram need
to be stretched more than the rest. Logarithmic and exponential
contrast stretches, in addition to the Gaussian and histogram-equalization
stretches, represent the most popular types of non-linear contrast stretch.
Histogram Equalization: this stretch assigns more display values (range) to the
frequently occurring portions of the histogram. In this way, the detail in these areas
will be better enhanced relative to those areas of the original histogram where
values occur less frequently.
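Histogram equalization can be sketched via the cumulative distribution function (CDF): DNs that occur frequently are spread across a wider output range. A plain-Python sketch on a 1-D list of DNs (illustrative values):

```python
# Histogram equalization via the cumulative distribution function (CDF).
def equalize(band, levels=256):
    n = len(band)
    hist = [0] * levels
    for dn in band:
        hist[dn] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    # Map each DN to its scaled CDF value over the full output range.
    return [round((cdf[dn] - 1) / (n - 1) * (levels - 1)) for dn in band]

band = [52, 52, 52, 60, 60, 80]
print(equalize(band))  # frequent DNs get pushed apart
```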
Adaptive Enhancement: (can have other names): in this situation it may be desirable
to enhance the contrast in only a specific portion(s) of the image histogram. Each
portion can then be stretched independently from other portions.
Other forms of enhancement exist, for example applying a particular equation, such
as a square root, to the brightness values; an analyst can also develop his own
equations.
Linear Contrast Stretch (source Lillesand and Kiefer, 1993)
Image Enhancement … … … … … … … cont.
Spatial Frequency
The number of changes of brightness values per unit distance for any part of
an image.
Defines the “roughness” of the brightness variations occurring in an image.
Spatial frequency is the analog of the frequency of a wave in time.
Spatial Frequency in an Image
• High Frequency
Occurs at feature-to-feature boundaries.
Enhancing the high spatial frequencies of an image helps in distinguishing the
different features and their boundaries.
• Low Frequency
Occurs in homogeneous areas, where no small feature variations are seen.
Enhancing the low frequencies suppresses unwanted noise in image
details.
Image Enhancement … … … … … … … cont.
Spatial Filtering
These are operations that involve enhancing
rough-textured features, e.g. edges.
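A common way to enhance high spatial frequencies is to convolve the image with a small high-pass kernel. A minimal sketch (a Laplacian-style 3x3 kernel, interior pixels only, no border handling; the DNs are illustrative):

```python
# 3x3 high-pass filter: emphasizes abrupt DN changes (edges) and
# returns 0 in perfectly homogeneous (low-frequency) areas.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def high_pass(img):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r][c] = sum(KERNEL[i][j] * img[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3))
    return out

flat = [[50] * 3 for _ in range(3)]
print(high_pass(flat)[1][1])  # 0 in a homogeneous area
```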
The ratio function is: $BV_{i,j,r} = \frac{BV_{i,j,k}}{BV_{i,j,l}}$
where $BV_{i,j,r}$ is the output ratio value for the pixel at row i, column j; $BV_{i,j,k}$ is the
brightness value at the same location in band k, and $BV_{i,j,l}$ is the brightness value
in band l. Unfortunately, the computation is not always simple since $BV_{i,j} = 0$ is
possible. However, there are alternatives. For example, the mathematical domain
of the function is 1/255 to 255 (i.e., the range of the ratio function includes all
values beginning at 1/255, passing through 1 and ending at 255). The way to
overcome this problem is simply to give any $BV_{i,j}$ with a value of 0 the value of 1.
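The zero-DN workaround can be sketched directly: replace any 0 with 1 before dividing (the band values are illustrative):

```python
# Band ratio with the zero-DN workaround: any DN of 0 is replaced by 1
# before dividing, keeping every ratio finite.
def band_ratio(band_k, band_l):
    return [[max(a, 1) / max(b, 1) for a, b in zip(rk, rl)]
            for rk, rl in zip(band_k, band_l)]

nir = [[80, 0], [120, 60]]
red = [[40, 20], [0, 60]]
print(band_ratio(nir, red))  # [[2.0, 0.05], [120.0, 1.0]]
```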
Ratio images can be meaningfully interpreted because they can be directly related
to the spectral properties of materials. Ratioing can be thought of as a method of
enhancing minor differences between materials by defining the slope of spectral
curve between two bands. We must understand that dissimilar materials having
similar spectral slopes but different albedos, which are easily separable on a
standard image, may become inseparable on ratio images.
The figure below shows a situation where deciduous and coniferous vegetation
occurs on both the sunlit and shadowed sides of a ridge. In the individual
bands the reflectance values are lower in the shadowed area, and it would be
difficult to match vegetation there with the same vegetation on the sunlit side.
The ratio values, however, are nearly identical in the shadowed and sunlit areas,
so each vegetation type has a similar signature on the ratio image. This
removal of illumination differences also eliminates the dependence of ratio
images on topography.
The difference between deciduous and coniferous trees is that Conifer
trees have those prickly, thorny leaves which don't really fall off, while
deciduous trees shed their leaves in the fall, and the leaves are mainly
flat, wide, and thin. Most softwoods are coniferous while hardwoods are
deciduous. Coniferous trees stay green all year round, but deciduous
change leaf color.
Image Histogram
Image processing software can illustrate the distribution of the BV (DN) values within a
satellite image. The distribution of the BVs is displayed as a histogram, where the
spread of the BV values (from the minimum BV to the maximum BV in the image data set) is shown
on the X-axis, while the frequency of each BV value (how many pixels in the image have
that value) is shown on the Y-axis.
The histogram enables the digital image analyst to quickly assess the type of
distribution maintained by the data (normal, bimodal or skewed). Histograms are also
useful in planning contrast enhancements.
A lookup table (LUT) graphs the intensity of the input pixel value relative to the
output BV observed on the scene. The curve provides information about the
relationship between the unaltered raw data and the adjusted display data.
Scatter Plot Diagrams
DIP software can generate scatter plots that show the correlation between the
different bands of the same scene. The diagram graphs the BVs of one band relative
to those of another band of the same scene. Bands that are highly correlated will
produce plots with a linear relationship and slight deviation from the line. On the
contrary, bands that are not well correlated will lack a linear relationship, and the
BVs of both bands will be distributed randomly in the plot.
Scatter plot diagrams allow for a quick assessment of the usefulness of particular
band combinations.
Image Classification and Analysis
Image classification and analysis operations are used to digitally identify and
classify pixels in the data. Classification is the categorization of all the pixels in a scene or
a subset of a scene (within all the bands or a set of selected bands) into classes
(land cover/use classes, lithologies, vegetation types, etc.).
Image Classification is usually performed on multi-channel data sets (A) and
this process assigns each pixel in an image to a particular class, land cover or
theme (B) based on a certain set of statistical characteristics of the pixel
brightness values.
Image classification uses the spectral information represented by the BVs (DNs)
in one or more spectral bands, and attempts to classify each individual pixel based
on this spectral information. This type of classification is termed spectral
pattern recognition.
ISODATA unsupervised classification calculates class means evenly distributed in the data
space then iteratively clusters the remaining pixels using minimum distance techniques.
Each iteration recalculates means and reclassifies pixels with respect to the new means.
Iterative class splitting, merging, and deleting are performed based on input threshold
parameters. All pixels are classified to the nearest class unless a standard deviation or
distance threshold is specified, in which case some pixels may be unclassified if they do not
meet the selected criteria. This process continues until the number of pixels in each class
changes by less than the selected pixel change threshold or the maximum number of
iterations is reached.
ISODATA Method
ISODATA First Iteration
Unsupervised Cluster Busting
K-Means unsupervised classification calculates initial class means evenly distributed in the
data space then iteratively clusters the pixels into the nearest class using a minimum
distance technique. Each iteration recalculates class means and reclassifies pixels with
respect to the new means. All pixels are classified to the nearest class unless a standard
deviation or distance threshold is specified, in which case some pixels may be unclassified
if they do not meet the selected criteria. This process continues until the number of pixels
in each class changes by less than the selected pixel change threshold or the maximum
number of iterations is reached (ENVI 4.8 Help file).
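The iterative assign-then-recompute loop shared by K-Means and ISODATA can be sketched on single-band DNs (the initial means and pixel values are illustrative; a pixel-change threshold of zero stops the loop when assignments stabilize):

```python
# Minimal K-Means sketch: assign each pixel to the nearest class mean,
# recompute the means, and repeat until assignments no longer change.
def kmeans(pixels, means, max_iter=100):
    for _ in range(max_iter):
        classes = [min(range(len(means)), key=lambda k: abs(p - means[k]))
                   for p in pixels]
        new_means = []
        for k in range(len(means)):
            members = [p for p, c in zip(pixels, classes) if c == k]
            new_means.append(sum(members) / len(members) if members else means[k])
        if new_means == means:  # zero pixel-change threshold reached
            return classes, means
        means = new_means
    return classes, means

pixels = [10, 12, 11, 200, 205, 198]       # two clear brightness clusters
classes, means = kmeans(pixels, [0.0, 255.0])
print(classes)  # [0, 0, 0, 1, 1, 1]
```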
Parallelepiped Classification
Minimum Distance to Means Classification
Nearest Neighbor Classification
Maximum Likelihood Classification
Image Classification ……………………cont.
The numerical information in all spectral bands for the pixels in the
training areas are used to "train" the computer to recognize spectrally
similar areas for each class. This is how the classification is achieved.
Similarity between any pixel and the training areas is based on
statistical information, such as the mean, standard deviation, variance and
covariance. A maximum likelihood algorithm is widely used to calculate
the probability of a pixel belonging to a particular class. The probabilities
are calculated based on the above statistical information.
The set of the statistical information for each class is called “spectral
signature of the class”.
THANK YOU