
1/31/2018

Image processing:

A: data restoration
B: image enhancement
C: image analysis
D: accuracy assessment

Lecture materials by Dr. Thomas Schneider

Steps of an image interpretation session: 1/3

A: data restoration – removal of data registration errors

• sensitivity differences of detector elements


• unsystematic image distortions
• geometric correction:
• earth's rotation
• earth's curvature
• panorama effect
• instability of the platform:
• altitude
• velocity
• pitch
• roll
• yaw
• topographic effects
• resampling: changing the spatial or spectral resolution of the data set

sensitivity differences of detector elements

Preprocessing: striping

MOMS-2P data:
Band 1: blue (449 – 511 nm)

Striping occurs due to differing detector sensitivities of a line sensor array

Lake
Starnberg


unsystematic image distortions

Result of applying a noise reduction algorithm:
a: original image data with noise-induced "salt and pepper" appearance
b: image resulting from application of the algorithm in (c)
c: typical noise correction algorithm employing a 3x3 neighbourhood; "WEIGHT" is an analyst-specified weighting factor. The lower the weight, the greater the number of pixels considered
(from Lillesand and Kiefer, 1999)
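The conditional replacement described in the caption can be sketched as follows; `despeckle` is a hypothetical helper, not the original Lillesand and Kiefer code, and treats WEIGHT as a simple DN threshold:

```python
def despeckle(image, weight):
    """Replace noise pixels by the mean of their 3x3 neighbourhood.

    A pixel counts as noise when it deviates from the mean of its
    eight neighbours by more than `weight` DN. The lower the weight,
    the more pixels are replaced."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = [image[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
            mean = sum(neighbours) / 8.0
            if abs(image[r][c] - mean) > weight:
                out[r][c] = mean
    return out
```

With a low weight the isolated bright "salt" pixel is replaced by the neighbourhood mean; with a very high weight it survives.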

unsystematic image distortions 2


Scan direction:

Possible reasons for the “smearing” of light objects:


• “memory effect” : detector is not “empty” when registering the next pixel
• resampling error: mistake made while eliminating systematic sensor deficits
• electronics error: mistake during the registration or preprocessing of the data

Preprocessing: geometric distortion 1/4

Earth's rotation
The effect of earth rotation on scanner imagery
a. Image formed by lines arranged in a square grid
b. Offset of successive lines to the west to correct for the
rotation of earth´s surface during the frame acquisition
time

Earth's curvature
Effect of earth curvature on the size of a pixel in the scan
direction (across track)
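The westward offset of successive lines can be estimated from the rotation speed of Earth's surface at the scene latitude. A minimal sketch with standard constants; the frame acquisition time is an assumed input, not a mission specification:

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate [rad/s]
R_EARTH = 6.378e6  # equatorial radius [m]

def westward_offset(latitude_deg, frame_time_s):
    """Offset of the last scan line relative to the first: ground
    speed of Earth's surface at the given latitude multiplied by
    the frame acquisition time."""
    surface_speed = OMEGA * R_EARTH * math.cos(math.radians(latitude_deg))
    return surface_speed * frame_time_s
```

At the equator the surface moves at roughly 465 m/s, so even a frame time of a few tens of seconds accumulates an offset of several kilometres, which is why the lines must be shifted west during correction.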


Preprocessing: geometric distortions 2/4

The displacement Δs due to a difference in surface height Δh across a scan line is calculated as follows:

Δs = Δh * ds / hg

with:
hg = height above ground
Δh = h – H (height difference to surface plane)
Δsm = sm – sg (distance to map position)
Δsi = si – sg (distance to image position)

Effect of scan angle on pixel size at constant angular IFOV

Panorama effect | Topographic effects
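The formula can be checked numerically; `displacement` is an illustrative helper, evaluated here with the Landsat TM values from the table on the next slide:

```python
def displacement(delta_h, ds, hg):
    """In-line displacement Δs = Δh * ds / hg.

    delta_h: surface height difference Δh [m]
    ds, hg: scan distance and height above ground in the same unit
            (e.g. both in km), so Δs comes out in metres."""
    return delta_h * ds / hg

# Landsat TM: hg = 705 km, ds = 92 km, Δh = 100 m  ->  Δs ≈ 13 m
```

The same call reproduces the other table entries, e.g. roughly 3.7 m for Spot with ds = 30 km and hg = 822 km.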

Preprocessing: geometric distortions 3/4


Of interest: 'in line position' displacements due to topographic height differences.
Example: 100 m and 500 m height differences calculated for Landsat TM and Spot.

Sensor     spat. res. [m]  hg [km]  max. obs. angle vmax [°]  ds [km]  max. displacement Δs from map position [m] (in pixels)
                                                                       at Δh = 100 m   at Δh = 500 m
TM         30              705      +/-7.4                    92       13 (0.4)        65.2 (2.2)
Spot ms    20              822      +/-2.1                    30       3.7 (0.2)       18.2 (0.9)
Spot ms    20              822      30                        475      57.8 (2.9)      288.9 (14.4)
Spot pan   10              822      +/-2.1                    30       3.7 (0.4)       18.2 (1.8)
Spot pan   10              822      30                        475      57.8 (5.8)      288.9 (28.9)

The cross-track pointing capability of the Spot system of +/-27° from the nadir position increases the displacement drastically.

Preprocessing:
geometric distortions
4/4
Geometric distortion in satellite remote sensing data (desired vs. disturbed position): earth rotation, spacecraft velocity, altitude variation, pitch variation, roll variation, yaw variation, scan skew.
After: Bernstein and Ferneyhough, 1975; Sabins, 1978


Principle of image
geocoding
(from DIBAG)

Preprocessing: resampling

Resampling is a data manipulation technique used at different positions in the


image processing chain:
• to fit the signal measured over the IFOV to the nominal pixel size
• to correct for geometric distortions during data take
• to recalculate the pixel size according to the project grid size
• to georectify the data
Source: Taranik, 1978

Nearest neighbour bilinear interpolation
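The two resampling schemes named in the figure caption can be sketched for a single output position; `nearest_neighbour` and `bilinear` are hypothetical helpers working on a list-of-lists image with fractional coordinates:

```python
def nearest_neighbour(image, x, y):
    """Resample by taking the DN of the closest original pixel;
    original DNs are preserved unchanged."""
    return image[round(y)][round(x)]

def bilinear(image, x, y):
    """Resample by distance-weighting the four surrounding pixels;
    the result is a smoothed, interpolated DN."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = image[y0][x0] * (1 - fx) + image[y0][x0 + 1] * fx
    bottom = image[y0 + 1][x0] * (1 - fx) + image[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy
```

Nearest neighbour keeps the radiometry intact (important before classification), while bilinear interpolation produces visually smoother output.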


Flight platform instability error correction

Georectification and overlay


Original Ikonos data set

DGM and passpoints

Thanks to ZGIS Uni Salzburg and Joanneum Research Graz for processing


Results of preprocessing steps:

• Data representing physical radiation


measurements (radiometric calibration)
• Data fitting into the geographic reference
system chosen as base for the investigation or
the administrative GIS data holding environment
(geometric rectification)

Data base for image analysis

Steps of an image interpretation session: 2/3

B. image enhancement: change the visual impression

• visualisation
• RGB display / MCGB printer output
• false colour image concept
• contrast stretching
• linear / non-linear
• histogram manipulations
• transformations in spectral space (RGB to IHS)
• filter operations: change intensity according to surrounding pixels
• etc.

image enhancement: visualization


Color theory differentiates between "pure" and mixed colors. "Pure" colors are the so-called endmembers of the color scheme, usually drawn as a cube with the corners red, green, blue, yellow, magenta, cyan, black and white (see next slide). All other colors of the visible spectrum can be reproduced by combining three "additive" or three "subtractive" colors.

additive colors: red, green, blue; subtractive colors: yellow, magenta, cyan



White corner of an RGB color cube in isometric projection

Source: RokerHRO - Eigenes Werk, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6523698


Color space: adjustment in MS PowerPoint

Printers used for hardcopy output usually work according to the subtractive color scheme with the ink colors cyan/magenta/yellow. Brightness is controlled by the fraction of black ink added to the basic CMY colors.
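The complement relation between additive screen colors and subtractive ink colors can be written out directly for 8-bit values; `rgb_to_cmy` is an illustrative name:

```python
def rgb_to_cmy(r, g, b):
    """Subtractive (printer) colors are the complement of the
    additive RGB values in the 8-bit range 0..255."""
    return 255 - r, 255 - g, 255 - b
```

Pure screen red, for example, is printed with full magenta and yellow ink and no cyan.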

Registration, digitalisation, grey value recoding, visualisation

CCD line with 5700 detector elements (MOMS-02)


Position of spectral bands and greyvalue representation of six spectral bands of a Daedalus ATM data set

From B&W to coloured visualisation

„additive color
scheme“ :
visualization in
the RGB
environment of a
computer screen

Cathode ray tube (Braun's tube) with three cathode "guns": red, green, blue (RGB). Flat-screen monitor systems work with red, green and blue LEDs to produce a colour image.

result: true or false color image "Morro Bay"

Image enhancement: principle of histogram stretch

The original histogram (a) shows DN values from 60 to 158. A better differentiation of this region
will be helpful for image interpretation. This is done by redistributing the existing DNs using
different functions. The result is a DN value distribution covering the whole dynamic range.


Image enhancement: histogram stretching

Our screen is able to display 256 greyvalues per band (theoretically 16,777,216 colors). Often we are interested in objects displayed within a specific brightness range (9 to 77 or 158 to 234).
To get a more detailed insight, this range is rescaled band by band: the new image stretches the greyvalues of this range to the full 256 greyvalues (screen limit).
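The band-by-band rescaling can be sketched as a linear stretch with clipping; `linear_stretch` is a hypothetical helper, not the ENVI implementation:

```python
def linear_stretch(dn, low, high):
    """Map the brightness range [low, high] of interest onto the
    full 0..255 display range; DNs outside the range are clipped."""
    if dn <= low:
        return 0
    if dn >= high:
        return 255
    return round((dn - low) / (high - low) * 255)
```

Applied with low = 9 and high = 77, the subrange of interest fills the whole dynamic range of the screen.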

Results of some “default” ENVI stretch operations

RGB = NIR/red/green; equalization, Gaussian, linear, and square-root stretch

Image enhancement: moving window filter technique


Results of some filter operations with an Ikonos data set

Original, RGB = NIR/red/green; sharpen with a 10 x 10 window; "smooth" with a 5 x 5 window; "Sobel" filter

Results of image enhancement procedures:

• Better identification of features


• Enhancement of special features like edges
• Simplifies training and verification area selection
• Improves the visual aspect of an image

Steps of an image interpretation session: 3/3

C. image analysis
• multivariate data analysis (traditional)
• multidimensional spectral space
• indices calculations
• ratios, e.g. simple ratio = band 1 / band 2
• vegetation indices like NDVI = (band 1 – band 2) / (band 1 + band 2)
• principal component calculation
• supervised classification
• unsupervised classification
• object oriented image analysis

D. accuracy assessment
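The index calculations above, with the parentheses made explicit; in this sketch band 1 is assumed to be NIR and band 2 red:

```python
def simple_ratio(band1, band2):
    """Simple ratio = band 1 / band 2."""
    return band1 / band2

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red); note the parentheses,
    the difference and the sum are formed before dividing."""
    return (nir - red) / (nir + red)
```

NDVI is bounded between -1 and +1, with vegetated surfaces giving clearly positive values because of their strong NIR reflectance.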


Image analysis:

Basic assumptions:

• Surfaces may be arranged in classes according to reflected


radiation intensities in different bands (wavelength ranges)
• Pixels with similar intensities of the reflected radiation at the sensor
belong to the same object class
• Objects of thematic interest are represented by clustered pixel
groups (neighbourhood principle: the chance that a neighbouring pixel
belongs to the same class is higher than for any other class)
• The grey value "profile" or "spectral signature" allows the
identification of object classes

Traditional image processing classification base: the single pixel vs. the 'moving window'

The procedure works pixel by pixel. The moving-window procedure calculates the value of the central pixel as a function of the surrounding pixels.

Pixel: x 1932, y 1819
Window: x 2219-2221, y 1613-1615

Result: DN of the 4 DPA multispectral bands:
Band 1 = 42, Band 2 = 45, Band 3 = 33, Band 4 = 46

Result: DN of the 4 DPA multispectral bands for the central pixel (3 x 3 filter):
Band 1 = 41, Band 2 = 47, Band 3 = 32, Band 4 = 46
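The moving-window computation for the central pixel can be sketched per band; `window_mean` is a hypothetical helper implementing a plain 3 x 3 mean (the filter actually used for the DPA values may differ):

```python
def window_mean(image, r, c):
    """DN of the central pixel (r, c) as the mean of its 3 x 3
    window; the pixel itself is included in the average."""
    values = [image[r + dr][c + dc]
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(values) / 9.0
```

Run once per band, this yields one filtered DN per band for the central pixel, as in the example values above.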

Spectral feature space concept

Each single pixel is characterized by its set of DNs in the different spectral bands of a multispectral data set.

The position of a pixel in a 2- or 3-dimensional feature space is expressed by a vector controlled by the DNs in the bands defining the feature space (red arrow).

The computer analyzes the n-dimensional feature space of the n-band data set.


Coniferous

3 domains:
1. The image domain (2D: area imaged)
2. The spectral domain (spectral signature of each pixel: coniferous)
3. The 2(n)-dimensional feature space (set of features characterizing a class; visualized for the class coniferous in a 2-D plot of red versus infrared)

“Image domain” and row profiles

Image domain,
showing the spatial
distribution of
homogeneous
surfaces

"Profiles" of row 20 for band 3 (red, left) and band 4 (NIR, right)


“image domain” and “feature space” concepts

Image matrix: 70 columns, 50 rows. Image domain (spatial distribution): lake, forest, agricultural plot, meadow.

2-dimensional feature space of bands 3 (red) and 4 (NIR): greyvalues of band 4 (NIR) on the horizontal axis and greyvalues of band 3 (red) on the vertical axis (0 to 256), with clusters for soil/agricultural plot, meadow, forest and lake.

image domain (spatial distribution) versus feature space (band-value-defined position of a pixel in an n-dimensional feature space)


“spatial” versus “spectral” dimension

Wiese (meadow), Waschsee, Fichte (spruce)

classification-strategies

unsupervised classification
• pixels are assigned to a class on the basis of homogeneity criteria
("cluster analysis")
• labelling of the classes by a human analyst is done after
class assignment

supervised classification
• thematic classes are defined
• test and training areas are delineated by the operator by
visual interpretation of the image domain
• classifiers assign each pixel to a predefined class by
comparing the pixel values with the class values as defined by
the training samples
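The supervised strategy can be sketched as a minimum-distance-to-means classifier, one simple member of the classifier family discussed below; `class_means` and `classify` are hypothetical helper names:

```python
def class_means(training):
    """Mean DN vector per class, computed from the training samples
    delineated by the operator: {label: [[dn_band1, dn_band2], ...]}."""
    means = {}
    for label, samples in training.items():
        n = len(samples)
        means[label] = [sum(band) / n for band in zip(*samples)]
    return means

def classify(pixel, means):
    """Assign the pixel to the class whose mean is closest
    (squared Euclidean distance in the feature space)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda label: dist2(pixel, means[label]))
```

Every pixel of the scene is then run through `classify`, comparing its DN vector with the class values defined by the training samples.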

Classification strategies:
when assigning a pixel to a class there are three options:

• cluster of an object class in the spectral feature space (plane of bands A and B) as represented by the training areas identified and delineated by visual interpretation
• region of the feature space which does not belong to the object class but is represented by the training (sample) areas, e.g. a car on the road ("errors of commission")
• region of the feature space which belongs to the object class but is not represented by the training areas ("errors of omission")

Sample point placement: "good" vs. "not helpful"
(feature space axis: spectral band A)


Gaussian distribution and standard deviation

The "normal" or Gaussian distribution describes the frequency of occurrence of randomly distributed values of a data set.

The standard deviation (SD, symbol σ) is the square root of the variance, the most common measure of spread for distributions, especially the Gaussian distribution.

Concept of selected classifier

Most classifiers require a Gaussian distribution of the pixel values (DNs) for an object class. Common thresholds for the assignment of a pixel to a class are 1, 2 or 3 standard deviations with respect to the center (maximum) position.
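The per-band standard-deviation threshold test can be sketched as follows; `within_k_sigma` is a hypothetical helper, not a function of any specific package:

```python
from statistics import mean, pstdev

def within_k_sigma(dn, class_samples, k=2):
    """Accept a pixel DN for a class when it lies within k standard
    deviations of the class mean, estimated from training samples
    of one band."""
    mu = mean(class_samples)
    sigma = pstdev(class_samples)
    return abs(dn - mu) <= k * sigma
```

In a multiband data set the test is repeated per band; a pixel failing the threshold in any band is rejected or left unclassified.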


classifiers

Discriminant analysis classifier. The asterisk is an unknown pixel classified as "moorland". The cross is an unknown pixel classified as bare soil. (Source unknown)


Each classifier yields different results!

TM, 09 Aug. 1992; CIR, 12 Aug. 1993
Box / Maximum Likelihood threshold

Thematic map representing land cover classes of the Sava river floodplains in Croatia

Bottlenecks of conventional image processing :

• missing methodology for the automatic


classification of high resolution satellite data
• missing methodology for data fusion, data merge,
image fusion, data concatenation, sensor merge,
resolution merge
• the scaling problem
• the problem of similar spectral signatures of
differing objects (“spectral confusion”)

'Object-oriented' methods promise to solve, or at least bypass, these problems!!


‘Object-oriented’ (‘oo’) or ‘object based’


image analysis (OBIA)
approach

with
eCognition software

Geometric resolution: the mixed pixel problem

The DN of each pixel represents the mean signal of all objects within the pixel area.

Therefore the DN is a function of pixel size, too.

Change of fractal dimension: for forest at around 4 – 6 m pixel size
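The mixed-pixel statement can be written as an area-weighted mean; `mixed_pixel_dn` is an illustrative helper taking (area fraction, object DN) pairs:

```python
def mixed_pixel_dn(fractions_and_dns):
    """DN of a mixed pixel as the area-weighted mean of the signals
    of all objects inside the pixel area; the fractions must sum
    to 1."""
    return sum(f * dn for f, dn in fractions_and_dns)
```

A pixel half covered by shadowed forest (DN 30) and half by a bright meadow (DN 130), for example, reports a DN of 80 that belongs to neither class, which is exactly the mixed-pixel problem.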

Inventory sample plot imaged by CIR aerial photography and the Daedalus
ATM multispectral scanner, October 86, Auerhahn, Harz mountains

Sample plot 22

Aerial photograph | Daedalus ATM representation, 1.25 * 1.25 m pixel


Daedalus ATM, 3 flight heights, October 86, Auerhahn, Harz mountains

5 * 5 m (4000 m) | 2.50 * 2.50 m (2000 m) | 1.25 * 1.25 m (1000 m)

ATM 5
highly damaged
middle
slightly

Histograms and sum curves of the sample plots "highly", "mid" and "low" damaged.
Source: Kenneweg, Förster, Runkel (1991): Untersuchungen und Kartierung von Waldschäden mit Methoden der Fernerkundung

eCognition, segmentation to classification

DPA data, RGB = NIR, red, green, 0.75 m pixel | hierarchy level 1.1.1 "objects of interest" | hierarchy level 1.1.1.1 "object primitives" | hierarchy level 1.1 "classification results"

eCognition procedure

Segmentation base: 6 spectral bands, texture, context

Output: table with statistics of homogeneous areas ("objects")


Visualisation of the “standard deviation” attribute on stand level

RGB = St.Dev. MIR, St.Dev. NIR, St.Dev. Red; background = orthophoto in grey.

I: image domain with selected object in yellow
II: spectral domain per class (red, infrared)
IV: database per object including topology

eCognition segmentation = database creation

eCognition classification = database operation !!!


Multilayer concept in object based analysis

GIS eCognition

Hierarchical Net of Image Objects :


Level d = GIS Objects

Level c = SPOT 20 * 20 m Objects

Level b = “merged image” Objects

Level a = pixel layer of the "merged image": digital orthophoto 1 * 1 m, Spot 20 * 20 m

Lidar Aerial photograph GIS-Data

Data-Fusion

Multi-source, multi-scale data concatenation of an eCognition evaluation:

Level 1: 1 * 1 m orthophoto (detail showing subobjects inside the area covered by a Spot pixel)
Level 2: NDVI Spot 4 + 1 * 1 m orthophoto
Level 3: 20 * 20 m net of Spot pixels + forest stand border
Level 4: forest stand map

The attribute table of the eCognition segmentation feeds the attribute table of the forest stand.


Outlook on future work:
development of thematic and problem-focused expert systems !!

Information base:
• remote sensing data
• thematic GIS data, e.g. an attribute table:

   Bestk.  Nbstf  Bhfn
1  54765   12344  12134
2  54766   23455  34535
3  54767   45355  78895
4  55123   33666  13959
5  56234   98374  45644

Knowledge base: theoretical and practical knowledge

Formalisation: inference machine, resulting maps

Enough information about the context ?

What can you identify ?

Basic decisions!

• Which resolution do we need for our task?
• Is it always necessary to work with the highest resolution?
• Which image processing approach is most appropriate, which is sufficient?


Change detection for water run-off simulations (EU, ALPMON): August 1985 vs. August 1992; ATKIS forestry data.
Legend: kernel area, increase, decrease, unchanged, non-forest.

Area statistics for different pixel sizes (EU, ALPMON project, 2000)

[Charts: class area versus pixel size for the classes meadows, forest (wide), forest (strict), swamp, broad-leaved, coniferous, rock wide > 15, rock wide < 15, rock strict > 15, rock strict < 15, water, not classified, no data]

Summary:

class driven approach:
• sufficient for modelling or inventories on regional level (climatic models, water run-off models, land cover/use statistics) at scales from 1:50,000 to 1:1,000,000

object based approach:
• promises to be superior in case of:
• administration and management of area-covering objects
• planning purposes on local scale (1:5,000 to 1:10,000)


Verification or accuracy assessment

The process of accuracy assessment is the final step of an image analysis session and evaluates the quality of the classification. Usually a confusion matrix is created by comparing:

a. systematically distributed samples of the classification results with the reality as interpreted by visual interpretation or ground truth mapping ("producer's accuracy")
b. the reality as interpreted by visual interpretation or ground truth mapping with the results of the classification ("user's accuracy")

The diagonal of this matrix gives us the "overall" accuracy.

The confusion matrix can be created by visual interactive examination; image processing packages also offer other methods, e.g. splitting the training areas into two groups, one used as training set, the other as verification set.

A per-class accuracy of better than 75% is acceptable in a natural environment.
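The producer's, user's and overall accuracies can be computed from the confusion matrix as follows; `accuracies` is a hypothetical helper assuming reference classes in rows and classification results in columns (conventions differ between packages):

```python
def accuracies(matrix):
    """matrix[i][j] = number of samples of reference class i that
    were labelled as class j by the classification.

    Returns (producer's accuracy per class, user's accuracy per
    class, overall accuracy)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    diagonal = sum(matrix[i][i] for i in range(n))
    producers = [matrix[i][i] / sum(matrix[i]) for i in range(n)]
    users = [matrix[i][i] / sum(matrix[r][i] for r in range(n))
             for i in range(n)]
    return producers, users, diagonal / total
```

Row sums give the producer's view (how much of the reference class was found), column sums the user's view (how reliable a mapped class is), and the diagonal over the grand total the overall accuracy.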

Confusion matrix

example from a MOMS-02 D2 classification in Mexico

And remember:
"Remote Sensing" comprises a method pool for
data acquisition as well as for data evaluation.
An RS data set is an image of the landscape at
the time of data take and in this sense a spatial
"document".
Evaluations of RS data are important sources for
landscape analysis and successive GIS
updates.


GIS: reality, table, visualisation function; RS: image, table

• in GIS the visualisation of the real-world model is performed via a table with already extracted, generalised information
• in RS the real world is directly imaged by the data; the goal of remote sensing data evaluations is to retrieve an attribute table describing the real world as accurately as possible

after: Roeland de Kok, 2001

Scales in RS

Platform                        Registration unit         Information unit        Information type
Satellite                       region (Harz mountains)   pixel (30 m)            digital grey-/colour values (RGB)
Airborne multispectral scanner  forest stand              pixel (1 m – 5 m)       digital grey-/colour values (RGB)
Aerial photograph               forest stand/details      single tree structure   colour
Terrain                         tree in the stand         single tree, branches   original symptoms

source: Kenneweg, Förster, Runkel (1991): Untersuchungen und Kartierung von Waldschäden mit Methoden der Fernerkundung

Change detection, global

• Forest loss

(video: Forest loss1212_005_AR_EN.mp4)


Signature combinations in fast response cases

Forest fire

Fast response microwave


• Earthquake Italy 1: 24th of August 2016

Fast response microwave

• Earthquake in
Italy 2016


Fast response microwave

• Earthquake Italy 2, 30th of October 2016

Details

•Title Wave height during Hurricane Irma


•Released 31/10/2017 1:05 pm
•Copyright contains modified Copernicus Sentinel data (2017),
processed by DLR
•Description
Copernicus Sentinel-1 radar mission images were used to measure
waves of up to 10 m high under Hurricane Irma as it struck Cuba and
the Florida keys on 9 and 10 September 2017, respectively.
Read full story: Sentinel-1 sees through hurricanes
•Id 385580



•Activity Observing the Earth
•Mission Sentinel-1
•System Copernicus


Details
•Title Hurricane Harvey
•Released 25/08/2017 3:42 pm
•Copyright contains modified Copernicus Sentinel
data (2017), processed by ESA, CC BY-SA 3.0
IGO
•Description
The Copernicus Sentinel-3A satellite saw the
temperature at the top of Hurricane Harvey on 25
August 2017 at 04:06 GMT as the storm
approached the US state of Texas.
The brightness temperature of the clouds at the top
of the storm, some 12–15 km above the ocean,
range from about –80°C near the eye of the storm
to about 20°C at the edges.
Hurricanes are one of the forces of nature that can
be tracked only by satellites, providing up-to-date
imagery so that authorities know when to take
precautionary measures. Satellites deliver
information on a storm’s extent, wind speed and
path, and on key features such as cloud thickness,
temperature, and water and ice content.
Sentinel-3’s Sea and Land Surface Temperature
Radiometer measures energy radiating from
Earth’s surface in nine spectral bands and two
viewing angles.
•Id 382898

•Activity Observing the Earth
•Mission Sentinel-3
•System Copernicus
•Location USA
•Keywords Hurricanes, Storms, Temperature,
Radiometer



…and thousands of other nice application examples may be found on the homepages of the space agencies, like:

• http://www.esa.int/spaceinimages/Images


Control questions:
• Which are the four main steps of an image processing session?
• How does geocoding work? What's behind the term 'resampling'?
• How are digital images registered? What does 'digital number' (DN) mean?
• What does 'image analysis' mean? Which are the basic assumptions?
• What's the idea of the 'spectral feature space' concept?
• Which are the three domains explored by an image processing session?
• Which classification approaches do you know, and what is the fundamental
difference?
• Which is the function of training samples?
• What is a 'classifier' used for? Do you know some of them (concept)?
• Please comment on the statement: 'each classifier yields different results'
• Which are the fundamental differences between pixel- and object-based image
processing approaches?
• What's behind the 'mixed pixel' problem in remote sensing data?
• Describe in a few words how object oriented image analysis works!
• Which are the key features of 'oo' image analysis?
• Which basic decisions have to be taken when conceptualizing an image
analysis project?
• Which is the function of the final accuracy assessment step?
