mkoch@bu.edu
Lineaments
[Figure: lineaments shown as low-frequency and high-frequency image features.]
Image Convolution
Spatial filtering is a local operation in which pixel values in the original image are
modified on the basis of the gray levels of neighboring pixels. This is done by
moving a window, or kernel, over each pixel in the image, applying a mathematical
calculation to the pixel values under that window, and replacing the central pixel
with the new value. The window is moved along in both the row and column
directions one pixel at a time, and the calculation is repeated until the entire image
has been filtered and a new image has been generated. By varying the calculation
performed and the weightings of the individual pixels in the filter window, filters
can be designed to enhance or suppress different types of features. This procedure
is also called image convolution.
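The sliding-window procedure above can be sketched in a few lines of Python. This is a minimal illustration, not production code; it uses the 5×5 image and 3×3 averaging kernel from the slide's worked example, where the target pixel comes out as 23/9 ≈ 2.56.

```python
import numpy as np

# 5x5 image of DN values (the slide's worked example)
image = np.array([
    [3, 3, 4, 4, 5],
    [2, 3, 3, 4, 4],
    [1, 2, 2, 3, 3],
    [1, 1, 2, 4, 4],
    [1, 2, 3, 20, 20],
], dtype=float)

# 3x3 averaging kernel: every neighbor weighted equally
kernel = np.ones((3, 3))

def convolve(img, k):
    """Slide the kernel over every interior pixel and replace the
    central pixel with sum(kernel * neighborhood) / sum(kernel)."""
    kh, kw = k.shape
    out = img.copy()  # border pixels are left unchanged in this sketch
    for r in range(kh // 2, img.shape[0] - kh // 2):
        for c in range(kw // 2, img.shape[1] - kw // 2):
            patch = img[r - kh // 2: r + kh // 2 + 1,
                        c - kw // 2: c + kw // 2 + 1]
            out[r, c] = np.sum(k * patch) / np.sum(k)
    return out

filtered = convolve(image, kernel)
print(round(filtered[1, 1], 2))  # target pixel: 23 / 9 -> 2.56
```

Varying the kernel weights (instead of all ones) is what turns this same loop into a smoothing, sharpening, or edge-enhancement filter.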
Convolution Matrix

Image DN values (5×5), with the target pixel at the center of the 3×3 neighborhood in the upper-left corner:

3  3  4  4  5
2  3  3  4  4
1  2  2  3  3
1  1  2  4  4
1  2  3 20 20

Kernel (3×3 averaging filter; common kernel sizes are 3×3, 5×5, 7×7, etc.):

1  1  1
1  1  1
1  1  1

Each kernel weight is multiplied by the image value beneath it (1×3, 1×3, 1×4; 1×2, 1×3, 1×3; 1×1, 1×2, 1×2), and the products are summed:

New DN = Sum(Kernel × Image Neighborhood) / Sum(Kernel) = 23 / 9 = 2.56
[Figure: four frames showing the 3×3 kernel sliding across the image one pixel at a time.]
Directional Filters
Arrows indicate the direction of linear features enhanced by the edge-enhancement filter. Contrast along a linear feature increases, as does its width.
TM band 5 of a part of the central Ethiopian Rift, with an edge-enhancement filter, which sharpens contrast only a little. Distance E-W is 37 km. Lineament interpretation showing mainly tensional rift faults.
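Directional filters use kernels whose weights are asymmetric along one axis, so they respond only to edges in a chosen orientation. The slide does not give the actual kernels used, so the Prewitt-style pair below is an assumption for illustration.

```python
import numpy as np

# A Prewitt-style directional kernel (an assumed example; the slide does
# not specify its kernels). This one responds to vertical (N-S) edges,
# i.e. brightness changes in the east-west direction.
vertical_edges = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

# Transposing the kernel rotates its response by 90 degrees.
horizontal_edges = vertical_edges.T

# A small test scene: a bright vertical strip on a dark background.
scene = np.zeros((5, 5))
scene[:, 2:] = 10.0

def response(img, k):
    """Kernel response at the central pixel. No normalization, since an
    edge-detection kernel's weights sum to zero."""
    return float(np.sum(k * img[1:4, 1:4]))

print(response(scene, vertical_edges))    # strong response across the strip edge
print(response(scene, horizontal_edges))  # zero: the scene has no horizontal edges
```

Because the weights sum to zero, flat areas produce no response; only features oriented across the kernel's gradient direction are enhanced, which is exactly the behavior the arrows on the slide indicate.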
Data Fusion
Radar images differ from multi-spectral images and therefore contain
complementary information, highlighting different aspects of the landforms and
outcropping rocks.
The combination of radar and multi-spectral images, also termed data fusion, requires
that both have the same geometric properties; in other words, geocoding has to be done
to ensure that pixel sizes and coordinate systems are the same. This requires resampling
of at least one of the images, usually the radar image. To make a radar image of hilly
terrain suitable for fusion, a good DEM has to be available to place the radar pixels in
their proper planimetric positions, because of relief distortion. Examples of the fusion of
radar and multi-spectral images are shown in the next slides. After fusing both image
types, a land cover classification was performed.
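The resampling step can be sketched as follows. This is a toy nearest-neighbor example on made-up arrays; a real workflow would resample georeferenced rasters (e.g. with GDAL) and, in hilly terrain, use the DEM to correct relief distortion first.

```python
import numpy as np

# A coarse "radar" image (assumed values) is resampled by nearest
# neighbor onto the finer grid of a "multi-spectral" image so that
# pixel sizes match and the bands can be fused.
radar = np.array([[1.0, 2.0],
                  [3.0, 4.0]])   # 2x2 coarse pixels
target_shape = (4, 4)            # grid of the multi-spectral image

def nearest_resample(img, shape):
    # Map each output row/column back to the nearest input row/column.
    rows = np.arange(shape[0]) * img.shape[0] // shape[0]
    cols = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(rows, cols)]

radar_resampled = nearest_resample(radar, target_shape)
print(radar_resampled.shape)  # now matches the multi-spectral grid
```

Once both images share one grid and coordinate system, their bands can be stacked and passed together to a land cover classifier.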
[Figures: ASTER image, PALSAR image, and the classified image.]
A = agriculture
B = sandstone
C = limestone
D = basalt
E = eolian sed.
F = structures
RGB and IHS Color Models
Intensity, Hue and Saturation Transform
Intensity, hue, and saturation together are called the IHS system and can be
represented as a cylinder, with the main axis measuring intensity,
distance from the center measuring saturation, and position along
the rim indicating hue.
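The IHS cylinder is closely related to the HSV model in Python's standard library, which can serve as an approximate illustration (exact IHS definitions vary between remote-sensing packages, so treat this as a sketch, not the transform used on the slides).

```python
import colorsys

# RGB values scaled to [0, 1] before conversion (an example pixel).
r, g, b = 200 / 255, 120 / 255, 40 / 255   # an orange-brown pixel

h, s, v = colorsys.rgb_to_hsv(r, g, b)

# h: position around the rim of the cylinder (hue, 0-1 maps to 0-360 deg)
# s: distance from the central axis (saturation)
# v: position along the main axis (intensity)
print(round(h * 360), round(s, 2), round(v, 2))  # 30 deg, s=0.8
```

In image fusion, a common trick is to transform an RGB composite to IHS, replace the intensity channel with a higher-resolution band (e.g. radar), and transform back.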
Spectral Ratioing
The process of dividing the pixels in one image or band by the corresponding
pixels in a second image or band is known as image or band ratioing. Ratio
images serve to highlight subtle variations in the spectral responses of various
surface covers. By ratioing the data from two different spectral bands, the
resultant image enhances variations in the slopes of the spectral reflectance
curves between the two spectral ranges that may otherwise be masked by
pixel brightness variations in each of the bands. Ratios are useful for
differentiating between areas of stressed and non-stressed vegetation, for
discriminating rock types, etc.
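The key property of a ratio image, suppressing brightness variation while preserving spectral slope, can be shown with two made-up DN values (the numbers below are purely illustrative):

```python
# The same surface cover in full sun and in shadow: the DN values in
# both bands halve, but the ratio between the bands is unchanged,
# so the shadow effect is cancelled in the ratio image.
band_a_sun, band_b_sun = 80.0, 40.0      # illustrative DN values
band_a_shade, band_b_shade = 40.0, 20.0  # same cover, half the illumination

print(band_a_sun / band_b_sun)      # 2.0
print(band_a_shade / band_b_shade)  # 2.0 -> identical despite the shadow
```

This is why ratios such as NIR/red separate vegetation conditions and rock types that differ in spectral slope but overlap in overall brightness.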
Ground Truthing
Spectral Library
Spectral Feature Extraction Across Scales
Spectral Angle Mapper
[Figure: a pixel spectrum and a reference spectrum plotted as vectors in a Band a vs. Band b scatter plot, separated by an angle.]
• The length of a vector resembles reflection intensity.
• The difference between spectra is described by the angle between them.
• By assessing the angle difference between each pixel and a reference spectrum, an image can be divided into any number of classes.
Spectral Angle Mapper (SAM) is an automated method for comparing image spectra to individual
spectra or to a spectral library. The algorithm determines the similarity between two spectra by
calculating the spectral angle between them, treating them as vectors in n-D space, where n is the
number of bands.
Two different materials are represented in a 2D scatter plot by a point for each given illumination, or as a
line (vector) for all possible illuminations. Because SAM uses only the direction of the spectra, not the
length, SAM is insensitive to variations in illumination. Poorly illuminated pixels fall closer to the origin of
the scatter plot. The color of a material is defined by the direction of its unit vector. SAM determines the
similarity of an unknown spectrum to a reference spectrum.
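The spectral angle described above is the arccosine of the normalized dot product of the two spectra. A minimal sketch, with made-up four-band spectra, also demonstrates the insensitivity to illumination:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between two spectra treated as vectors
    in n-D space, where n is the number of bands."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    # clip guards against floating-point values just outside [-1, 1]
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illumination only scales a spectrum, so the angle is unchanged:
reference = np.array([0.2, 0.4, 0.6, 0.8])       # assumed reference spectrum
bright = 2.0 * reference                          # same material, well lit
dark = 0.3 * reference                            # same material, poorly lit
other = np.array([0.8, 0.6, 0.4, 0.2])           # a different material

print(spectral_angle(bright, reference))   # ~0: same direction as reference
print(spectral_angle(dark, reference))     # ~0: poor illumination, same angle
print(spectral_angle(other, reference))    # clearly nonzero: different material
```

Classifying an image then amounts to computing this angle per pixel against each library spectrum and assigning the pixel to the reference with the smallest angle (below some threshold).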
Application: Mineral Mapping