
Digital Image Processing

Methods used in photogrammetry

Fundamentals
Photogrammetric image processing methods are developed and
applied in the fields of image acquisition, preprocessing and segmentation.
Methods:

Image measuring

Line following

Image matching

Object recognition

Handling image data


Image pyramids

Each level is Gaussian smoothed and reduced in size; the complete pyramid increases the amount of image data by about 30% (see the sketch below).

Compression

Lossless compression

Lossy compression
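For the image pyramid mentioned above, a minimal sketch of how it can be built, assuming NumPy and SciPy are available (the number of levels and the smoothing parameter sigma are arbitrary choices, not values from the text):

import numpy as np
from scipy.ndimage import gaussian_filter   # SciPy assumed to be available

def gaussian_pyramid(img, levels=4, sigma=1.0):
    """Each pyramid level: Gaussian smoothing, then subsampling by a factor of 2."""
    pyramid = [img.astype(np.float32)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)   # low-pass before subsampling
        pyramid.append(smoothed[::2, ::2])               # keep every 2nd row and column
    return pyramid

img = np.random.rand(256, 256).astype(np.float32)        # stand-in for a real image
pyr = gaussian_pyramid(img)
extra = sum(level.size for level in pyr[1:]) / img.size
print([level.shape for level in pyr], round(extra, 2))   # roughly 0.33, i.e. about a third of extra data

Summing the reduced levels shows where the roughly 30% of additional data comes from.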

Image pre-processing
Histogram
Provides the frequency distribution of the pixel values in the
image

The absolute frequency a_S(g) of each grey value g and the total number of pixels M give the relative frequency

$p_S(g) = \frac{a_S(g)}{M}$

Quantities derived from the histogram: minimum and maximum (contrast), mean, variance, entropy, symmetry.

Among the most important is the entropy:

$H = -\sum_{g=0}^{255} p_S(g)\,\log_2 p_S(g)$
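The histogram quantities above can be computed directly from an 8-bit image; the following NumPy-only sketch (function name and test image are illustrative) returns the relative frequencies together with min/max, mean, variance and the entropy H:

import numpy as np

def histogram_stats(img):
    """Relative grey value frequencies p(g) and entropy H of an 8-bit image."""
    counts = np.bincount(img.ravel(), minlength=256)     # absolute frequencies a(g)
    p = counts / img.size                                 # relative frequencies p(g) = a(g)/M
    nz = p[p > 0]                                         # 0 * log2(0) is treated as 0
    entropy = -np.sum(nz * np.log2(nz))                   # H = -sum p(g) log2 p(g)
    return p, img.min(), img.max(), img.mean(), img.var(), entropy

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in image
p, g_min, g_max, mean, var, H = histogram_stats(img)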

Contrast enhancement
Manipulating the brightness and contrast of an image changes the
distribution of the pixel values, visible for example along an image edge.

Linear contrast stretching

Histogram equalization

Other simple operations


Thresholding

Image combination (arithmetic, logical, bitwise)
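A hedged sketch of the two main contrast operations listed above, using only NumPy (function names and the 8-bit value range are assumptions):

import numpy as np

def stretch(img, g_min=0, g_max=255):
    """Linear contrast stretching of an 8-bit image onto the range [g_min, g_max]."""
    lo, hi = int(img.min()), int(img.max())
    out = (img.astype(np.float32) - lo) / max(hi - lo, 1) * (g_max - g_min) + g_min
    return out.astype(np.uint8)

def equalize(img):
    """Histogram equalization: map each grey value through the cumulative distribution."""
    counts = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(counts) / img.size
    lut = np.round(255 * cdf).astype(np.uint8)   # look-up table derived from the CDF
    return lut[img]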

Filter operations
Smoothing filters (low-pass filters) are mainly used for
pixel noise suppression.

The Gaussian filter possesses optimal smoothing properties.

Moving-average (box) filter:

$H_{3,3} = \frac{1}{9}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$

Binomial filters (approximations of the Gaussian):

$H_{3,3} = \frac{1}{16}\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix} \qquad H_{5,5} = \frac{1}{256}\begin{pmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{pmatrix}$

Continuous Gaussian function:

$f(x, y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$

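Applying such a smoothing mask is a plain correlation with the image; a small sketch assuming SciPy is available (test image and border mode are illustrative):

import numpy as np
from scipy.ndimage import correlate          # SciPy assumed to be available

# 3x3 binomial mask (Gaussian approximation), normalized by 1/16
H33 = np.array([[1, 2, 1],
                [2, 4, 2],
                [1, 2, 1]], dtype=np.float32) / 16.0

img = np.random.rand(64, 64).astype(np.float32)       # stand-in for a grey value image
smoothed = correlate(img, H33, mode="nearest")         # low-pass filtering / noise suppression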

Morphological operators

Application of a non-linear filter for the enhancement or suppression of black and white regions with (known) properties.

Two fundamental functions based on Boolean operators are defined:

Erosion, which leads to the shrinking of regions: a pixel is set to 1 only if all pixels in the filter region (e.g. 3x3 elements) match the structuring element; otherwise it is set to 0.

Dilation, which leads to the extension of connected regions: a pixel is set to 1 if at least one matching pixel is present in the filter region.
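A minimal NumPy-only sketch of these two functions for a binary (boolean) image, assuming a fully set square filter region as structuring element (the 3x3 size is only an example):

import numpy as np

def erode(binary, size=3):
    """Erosion: a pixel stays 1 only if ALL pixels in the size x size window are 1."""
    pad = size // 2
    padded = np.pad(binary, pad, constant_values=False)
    out = np.ones_like(binary)
    for dy in range(size):
        for dx in range(size):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

def dilate(binary, size=3):
    """Dilation: a pixel becomes 1 if AT LEAST ONE pixel in the window is 1."""
    pad = size // 2
    padded = np.pad(binary, pad, constant_values=False)
    out = np.zeros_like(binary)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out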

Morphological operators
Dilation and erosion can also be applied sequentially:

Opening: erosion followed by dilation; removes small objects.

Closing: dilation followed by erosion; closes small gaps.
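Opening and closing are just compositions of erosion and dilation; the sketch below uses SciPy's ready-made binary_opening and binary_closing (the toy mask, the speck and the hole are illustrative):

import numpy as np
from scipy.ndimage import binary_opening, binary_closing  # SciPy assumed to be available

mask = np.zeros((60, 60), dtype=bool)
mask[10:40, 10:40] = True                 # large object
mask[20, 50] = True                       # isolated one-pixel speck
mask[25:27, 25:27] = False                # small hole inside the object

structure = np.ones((3, 3), dtype=bool)   # 3x3 structuring element

opened = binary_opening(mask, structure)  # erosion then dilation: the speck disappears
closed = binary_closing(mask, structure)  # dilation then erosion: the hole is filled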

Edge extraction
Edges are primary image structures used by the human
visual system for object recognition.
Characteristics:

a significant change in adjacent pixel values perpendicular to the edge direction

edges have a direction and a magnitude

formed by small image structures

correspond to high frequencies in the frequency domain

Edge extraction
First order differential filter
$s'(x) = \frac{ds}{dx} = \lim_{\Delta x \to 0} \frac{s(x+\Delta x) - s(x)}{\Delta x}$

$\mathrm{grad}\ s = \left(\frac{\partial s(x,y)}{\partial x},\ \frac{\partial s(x,y)}{\partial y}\right)$

Gradient magnitude:

$|\mathrm{grad}\ s| = \sqrt{\left(\frac{\partial s(x,y)}{\partial x}\right)^2 + \left(\frac{\partial s(x,y)}{\partial y}\right)^2}$

Gradient (edge) direction from the ratio of the partial derivatives:

$\tan\alpha = \frac{\partial s(x,y)/\partial y}{\partial s(x,y)/\partial x}$

Discrete approximation:

$s'(x) \approx \frac{s(x+1) - s(x)}{1}$

Sobel operator:

$H_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix} \qquad H_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$
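Applying the Sobel masks and combining the two components into edge magnitude and direction could look as follows, assuming SciPy (the test image is a stand-in):

import numpy as np
from scipy.ndimage import correlate          # SciPy assumed to be available

Hx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)   # Sobel mask, x
Hy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)   # Sobel mask, y

img = np.random.rand(64, 64).astype(np.float32)      # stand-in image
gx = correlate(img, Hx)                               # approximation of ds/dx
gy = correlate(img, Hy)                               # approximation of ds/dy
magnitude = np.hypot(gx, gy)                          # edge magnitude
direction = np.arctan2(gy, gx)                        # edge direction (radians)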

Edge extraction
Second order differential filter
$s''(x) = \frac{d^2 s}{dx^2} = \lim_{\Delta x \to 0} \frac{s(x+\Delta x) - 2\,s(x) + s(x-\Delta x)}{(\Delta x)^2}$

Discrete approximation:

$s''(x) \approx s(x+1) - 2\,s(x) + s(x-1)$

Laplacian (sum of the second derivatives):

$\nabla^2 s = \frac{\partial^2 s}{\partial x^2} + \frac{\partial^2 s}{\partial y^2}$

Laplacian filter

$\frac{\partial^2}{\partial x^2}: \begin{pmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{pmatrix} \qquad \frac{\partial^2}{\partial y^2}: \begin{pmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{pmatrix} \qquad \nabla^2: \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$

Edge extraction
Laplacian of Gaussian filter
The Laplacian filter is sensitive to noise; better results can be
expected if the image is smoothed in advance with a Gaussian filter.
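A short sketch of this Laplacian-of-Gaussian idea, assuming SciPy's gaussian_laplace is available (sigma and the zero-crossing test are illustrative choices):

import numpy as np
from scipy.ndimage import gaussian_laplace    # SciPy assumed to be available

img = np.random.rand(128, 128).astype(np.float32)      # stand-in image
response = gaussian_laplace(img, sigma=2.0)             # Gaussian smoothing, then Laplacian
# Candidate edges lie at zero crossings of the LoG response, e.g. between
# horizontally neighbouring pixels whose response changes sign:
zero_cross = np.sign(response[:, :-1]) != np.sign(response[:, 1:])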

Hough Transform
The Hough transform is based on the condition that all points on an analytical curve can be described by one common set of parameters.

$r = x\cos\theta + y\sin\theta$

Based on that fact, a straight line y = mx + b can be represented as a point (r, θ) in the parameter space.

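A small from-scratch sketch of the voting scheme for straight lines, using the normal form r = x cos(theta) + y sin(theta); the accumulator resolution and the synthetic test line are assumptions:

import numpy as np

def hough_lines(edge_mask, n_theta=180, n_r=200):
    """Accumulate votes in (r, theta) space: every edge pixel votes for all
    straight lines r = x cos(theta) + y sin(theta) passing through it."""
    h, w = edge_mask.shape
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    r_max = np.hypot(h, w)
    acc = np.zeros((n_r, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        acc[r_idx, np.arange(n_theta)] += 1
    return acc, thetas

edge_mask = np.zeros((100, 100), dtype=bool)
edge_mask[np.arange(100), np.arange(100)] = True       # synthetic diagonal line
acc, thetas = hough_lines(edge_mask)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)   # peak = dominant line (r, theta)

Each peak in the accumulator corresponds to one dominant straight line in the edge image.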

Enhanced edge operators

Simple methods for edge detection often do not deliver satisfactory
results. An edge filter should have the following characteristics:

Robustness

Simple parametrization (without interactive input)

High subpixel accuracy

Minimum computational effort

Examples:

Canny and Deriche operators

Edge extraction in image pyramids (Köthe 1997)

Least-squares edge operators (El-Hakim 1996)
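As an example of such an enhanced operator, the Canny detector is available ready-made in OpenCV; the thresholds and the synthetic test image below are arbitrary:

import numpy as np
import cv2                                   # OpenCV assumed to be available

img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 200                      # bright square on a dark background
edges = cv2.Canny(img, 50, 150)              # 50 / 150: lower / upper hysteresis thresholds

The two thresholds control the hysteresis step that links strong and weak edge responses.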

Geometric image transformation
The term rectification denotes a general modification of pixel coordinates, e.g. for:

Translation and rotation

Change of scale or size

Correction of distortion effects

Projective rectification

Orthophoto production

Texture mapping

It is generally performed in two stages:

Transformation of pixel coordinates

Calculation of (output) pixel values

$s'(x', y') = G(x, y)$

(the pixel value at the transformed output position (x', y') is taken from the grey value function G at the corresponding input position (x, y))
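A sketch of the two-stage procedure using indirect rectification (each output pixel is mapped back into the input image and its value interpolated); SciPy is assumed, and the similarity transform is purely illustrative:

import numpy as np
from scipy.ndimage import map_coordinates    # SciPy assumed to be available

def rectify(img, inverse_transform, out_shape):
    """Stage 1: transform each output pixel coordinate back into the input image.
    Stage 2: calculate the output pixel value by interpolating the input image."""
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    src_x, src_y = inverse_transform(xs, ys)
    return map_coordinates(img, [src_y, src_x], order=1)   # bilinear interpolation

def similarity(x, y, angle=np.deg2rad(5), scale=0.9, tx=10.0, ty=-5.0):
    """Illustrative inverse mapping: rotation + scale + shift."""
    return (scale * (np.cos(angle) * x - np.sin(angle) * y) + tx,
            scale * (np.sin(angle) * x + np.cos(angle) * y) + ty)

img = np.random.rand(200, 200).astype(np.float32)           # stand-in input image
out = rectify(img, similarity, (200, 200))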

Geometric image transformation
Stereo image rectification

This process is useful for stereo vision, because the 2-D stereo correspondence problem is reduced to a 1-D problem.
Steps:

Find homologous points

Remove outliers

Compute the fundamental matrix

Rectify the images

Fusiello, Andrea (2000). "Epipolar Rectification".
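With OpenCV the steps above can be sketched as follows; the synthetic point cloud and camera parameters only stand in for real homologous points from feature matching, and the image size passed to stereoRectifyUncalibrated is an assumption:

import numpy as np
import cv2                                    # OpenCV assumed to be available

# Synthetic homologous points: a small 3-D point cloud projected into two views.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 3)) + [0.0, 0.0, 5.0]
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
ang = 0.05
R = np.array([[np.cos(ang), 0.0, np.sin(ang)],
              [0.0, 1.0, 0.0],
              [-np.sin(ang), 0.0, np.cos(ang)]])      # small rotation of the right camera

def project(points, R, t):
    x = (K @ (R @ points.T + t.reshape(3, 1))).T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

pts1 = project(X, np.eye(3), np.zeros(3))             # homologous points, left image
pts2 = project(X, R, np.array([-0.2, 0.0, 0.0]))      # homologous points, right image

# Fundamental matrix with RANSAC; the mask flags inliers, i.e. outliers are removed.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
inl = mask.ravel() == 1

# Rectifying homographies H1, H2 that make homologous epipolar lines horizontal.
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inl], pts2[inl], F, (640, 480))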

Geometric image transformation
Geometric Rectification
Raw remotely sensed data gathered by satellite or aircraft are representations of the irregular surface of the Earth. Remotely sensed images are distorted both by the curvature of the Earth and by the sensor being used. The process of shifting pixel locations to remove such distortion is known as rectification or georectification. It involves two steps:

Spatial interpolation

Intensity interpolation

The geometric relationship between the input pixel coordinates (column and row, referred to as x, y) and the associated map coordinates of the same point (X, Y) must be identified. Polynomial equations are used to convert the source image coordinates into the reference map coordinates.
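A minimal NumPy sketch of fitting such a polynomial (here first order, i.e. affine) to ground control points; the GCP coordinates are placeholders:

import numpy as np

# Placeholder ground control points (GCPs): pixel coordinates (x, y) and the
# corresponding map coordinates (X, Y); the numbers are purely illustrative.
xy = np.array([[10, 12], [480, 15], [25, 500], [470, 490], [250, 260]], dtype=float)
XY = np.array([[356015.0, 4499982.0], [356720.0, 4499978.0],
               [356038.0, 4499250.0], [356705.0, 4499265.0],
               [356375.0, 4499610.0]])

# First-order polynomial (affine): X = a0 + a1*x + a2*y, Y = b0 + b1*x + b2*y
A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
ax, *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
ay, *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)

def pixel_to_map(x, y):
    """Convert source image coordinates to map coordinates with the fitted polynomial."""
    return ax[0] + ax[1] * x + ax[2] * y, ay[0] + ay[1] * x + ay[2] * y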

Geometric image transformation

Intensity interpolation involves the extraction of a brightness value from an x, y location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image.

There are several methods of brightness value (BV) intensity interpolation that can be applied, including:

nearest neighbor,

bilinear interpolation, and

cubic convolution.

NEAREST NEIGHBOUR

Nearest neighbour resampling uses the digital value from the pixel in
the original image which is nearest to the new pixel location in the
corrected image. This is the simplest method and does not alter the
original values, but may result in some pixel values being duplicated
while others are lost. This method also tends to result in a disjointed
or blocky image appearance.

BILINEAR INTERPOLATION

Bilinear interpolation resampling takes a weighted average of the 4
pixels in the original image nearest to the new pixel location. The
averaging process alters the original pixel values and creates entirely
new digital values in the output image. This may be undesirable if
further processing and analysis, such as classification based on
spectral response, is to be done. If this is the case, resampling may
best be done after the classification process.

CUBIC CONVOLUTION

Cubic convolution resampling calculates a distance-weighted average
of a block of sixteen pixels from the original image which surround the
new output pixel location. As with bilinear interpolation, this method
results in completely new pixel values. However, these two methods
both produce images which have a much sharper appearance and
avoid the blocky appearance of the nearest neighbour method.
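The three resampling methods can be compared with SciPy's map_coordinates, where the interpolation order selects the method; note that order=3 is a cubic spline, which is close in spirit to, but not identical with, cubic convolution:

import numpy as np
from scipy.ndimage import map_coordinates    # SciPy assumed to be available

img = np.random.rand(50, 50).astype(np.float32)        # stand-in input image
ys, xs = np.mgrid[0:100, 0:100] / 2.0                   # output grid mapped back to the input

nearest  = map_coordinates(img, [ys, xs], order=0)  # nearest neighbour: values copied, none altered
bilinear = map_coordinates(img, [ys, xs], order=1)  # weighted average of the 4 nearest pixels
cubic    = map_coordinates(img, [ys, xs], order=3)  # cubic spline over a 4x4 neighbourhood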
