
Tracking of Radar-Detected Precipitation-Centroids

using Scale-Space Methods for Automatic Focussing

The SARTrE Algorithm

Diploma Thesis in Meteorology

Submitted to the

Faculty of Mathematics and Natural Sciences
of the Rheinische Friedrich-Wilhelms-Universität Bonn

Jürgen Lorenz Simon

October 12, 2004


Declaration
I hereby declare that I have written this thesis independently, that I have used no aids or sources other than those indicated, and that all quotations have been marked as such.

Preface
When I started my Diploma Thesis, the idea of developing a tracking algorithm was only part of a much more ambitious plan to develop a nowcasting algorithm. During the course of working on the topic, it dawned on me that creating a working tracking algorithm was not a trifle, but a veritable task in itself. Moreover, when I read about the application of scale-space methods to tracking problems in fields other than meteorology, I became interested in Scale-Space Theory itself. Realising that the two could be interconnected in a way beneficial to meteorological applications as well, I was diverted from the original plan and began to investigate the topic more deeply. During the short time of this work, the simplicity and beauty of the scale space appealed to me, and although I have taken but a first glance, the multitude of possibilities it seems to offer to all sorts of problems concerned with deriving information linked to scale is overwhelming. The problem of scale has somehow always interested me: as a youth I was fascinated by fractals, especially by the self-similarity of their structures at small and large scales. And although I learned a lot during the course of writing this work, the discovery of scale-space theory itself was among the biggest rewards for me.

Thanks
I would like to express my gratitude towards my mother for her unbroken faith in me over the winding and often erratic course of my life. Special thanks to Prof. G. Heinemann for accepting the proposal of this thesis in the first place, for showing patience or exerting pressure as appropriate, and for providing numerous valuable hints and constructive criticism, which helped to improve the quality of the work a lot. All of my friends for giving me support, lending an ear or leaving me alone when appropriate. Gordon Dove for optimisation hints, general suggestions as well as improving my English. Mark Jackson for cheering me up. Maren Timmer for helping with the pedagogic aspects and for moral support. D. Meetschen and Eva Heuel for providing software and data as well as advice. Very special thanks to my girlfriend for moral support and for standing back when I needed the time, much obliged.

Bonn, 14th of March, 2004.

Contents

1 About Radar Meteorology 5

2 Radar Data 6
2.1 Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Clutter Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3 Digital Image Processing Basics 19


3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Spatial Convolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.1 Using Masks for Convolution . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.2 Types of Masks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Neighbourhood Averaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.1 Arithmetic Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.2 Maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.3 Median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.4 Percentile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4.1 Absolute or Adaptive? . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.5 Other Filters Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.5.1 Isolated Bright Pixel Filtering . . . . . . . . . . . . . . . . . . . . . . . 26
3.5.2 Speckle Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

4 Scale Space Theory 29


4.1 Basic Conception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 Short Introduction to Gaussian Scale Space . . . . . . . . . . . . . . . . . . . 30
4.2.1 Effective Width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.2 Extension to 2D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.3 Isotropic Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Blobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.2 Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3.3 Edge Linking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3.4 Holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3.5 Area Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.4 Scale Space Representation in 2D . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.5 Blob Detection in Scale-Space Images . . . . . . . . . . . . . . . . . . . . . . 40
4.6 Automatic Detection of Prevalent Signals . . . . . . . . . . . . . . . . . . . . 43

5 Tracking and Scale Space 47
5.1 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 Centroid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Geometric Centre of Boundary . . . . . . . . . . . . . . . . . . . . . . 49
5.2.2 Centre of Reflectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2.3 Scale Space Centre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3 Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.4 Tracking Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.5 Visualisation of Tracking Data . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.6 Estimation of Quality, False Alarm Rates . . . . . . . . . . . . . . . . . . . . 58

6 Case Studies 59
6.1 Tracking at Fixed Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 Tracking at Automatically Selected Scale . . . . . . . . . . . . . . . . . . . . 67
6.3 Tracking at Higher Velocities . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.4.1 Linear Contrast Stretching . . . . . . . . . . . . . . . . . . . . . . . . 76
6.4.2 Percentile Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . 78

7 Discussion and Outlook 81

A Programming Techniques 83
A.1 Object Oriented Programming (OOP) . . . . . . . . . . . . . . . . . . . . . . 83
A.2 Objective-C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.3 Libraries and Third Party Software Used . . . . . . . . . . . . . . . . . . . . . 84
A.4 Macintosh Programming and Tools . . . . . . . . . . . . . . . . . . . . . . . . 84

B Software and Data 85

Chapter 1

About Radar Meteorology

RADAR is short for Radio Detection and Ranging. Like many other great inventions (the transistor, penicillin, X-rays, ...) it was discovered by a fortunate combination of sheer luck and awareness. In R. E. Rinehart's book, Radar for Meteorologists [13], the discovery is described as follows:
”...In September 1922, the wooden steamer Dorchester plied up the Potomac
River and passed between a transmitter and receiver being used for experimen-
tal U.S. Navy high-frequency radio communications. The two researchers con-
ducting the tests, Albert Hoyt Taylor and Leo C. Young, had sailed on ships
and knew the difficulty in guarding enemy vessels seeking to penetrate harbours
and fleet formations under darkness. Quickly putting the serendipitous finding
together, the men proposed using radio waves like a burglar alarm, stringing
up an electromagnetic curtain across harbour entrances and between ships. But
receiving no response to the suggestion, and with many demands on their time,
the investigators let the idea wither on the vine.”
From that first incident to the modern radar systems used for civil and military purposes today, a long time has passed. Radar is now an everyday tool, used to detect and guide aeroplanes or ships, to measure distances between cars in automatic control systems or even to detect objects hidden underground. The first radars used for meteorological purposes were obtained from the military after WWII, whose by then well-developed equipment became available for civil use. Another great step for meteorological applications was the development of the Doppler radar, which allows not only for the detection of objects by their reflected radiation, but also for measuring their velocity radial to the radar site through the Doppler effect.

Modern radars work by alternately emitting a focused pulse of energy (a ray) and detecting, at short time intervals, the portion of the radiation reflected from objects in its path. From the speed of light and the elapsed time, the range of the object from the radar can be estimated. By changing the radar's azimuth and/or elevation angle, two- or even three-dimensional images of reflectivity can be obtained. By measuring the phase shift between the back-scattered and the emitted radiation, a radial velocity can be measured. For a good introduction to the history, theory and technical details of radar, see Rinehart's book [13].

Chapter 2

Radar Data

Owing to the way radar data is obtained, its natural format is organised into rays, one for each scanned angle, and within each ray a set of range gates, one for each time interval at which the back-scattered radiation was sampled. The natural coordinate system is thus planar polar coordinates. In reality, the scanned surface is more often than not a shallow cone, since the radar beam usually has some elevation above the perfect horizontal. These polar coordinates can be transformed into a Cartesian coordinate system, where the origin is usually chosen to represent the radar site. This is called a Plan Position Indicator (PPI) display. The term goes back to the beginnings of radar meteorology, when the PPI was indeed an oscilloscope display with the radar beam taking sweeps, leaving detected targets in its wake.

2.1 Coordinate Systems


Two views are frequently used in this work: plain polar coordinates and Cartesian coordinates.
Plain Polar Coordinates:
This form of display is a simple way to get a first glance at the data contained in a scan. The rays are plotted in ascending order of their azimuth angles from left to right, one pixel per ray. The range gates within each ray are plotted vertically, starting with 0 at the bottom and increasing in distance upwards, again one pixel per gate. This gives a plain view of the data, good enough for simple filtering tasks which do not have to take actual distances into account, like thresholding or cluttermap sampling/filtering. The natural resolution for this type of image is number of rays × number of range gates. An example of this view is given in Fig. 2.1.
Cartesian Coordinates:
A simple Cartesian transformation from polar coordinates is relatively easy. However, the process gets more complex when introducing interpolation, which compensates for the lack of sampled values and the increasing size of the sampled volumes, as well as the difference in height as the radar beam progresses outward. Both modes were used in this thesis and are also available as options in the software developed alongside it.¹ When using simple projection, the values are written into the Cartesian display
¹ Thanks to D. Meetschen, Meteorological Institute Bonn, for providing interpolation code.

Figure 2.1: Plain Polar Coordinates
Azimuth scan, 28 Sep 1999, 9:36 GMT+2, range 50 km, elevation 2.57°.

without considering whether a value was already plotted at that point ('last wins'). See Figure 2.2 for an example and Figure 2.3 for the same data in interpolated form.
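The 'last wins' projection can be sketched in a few lines. The following Python fragment is a minimal illustration of the idea, not the software written for this thesis (which used Objective-C); the function name, grid size and the convention that azimuth 0° points north and increases clockwise are assumptions of the sketch.

import numpy as np

def project_to_cartesian(scan, azimuths_deg, size=200, max_range_km=50.0):
    """Project a polar scan (rays x gates) onto a Cartesian grid.

    'Last wins': later rays simply overwrite earlier values that
    map to the same Cartesian pixel.
    """
    n_rays, n_gates = scan.shape
    gate_km = max_range_km / n_gates           # radial extent of one gate
    km_per_px = 2.0 * max_range_km / size      # grid covers the full range
    img = np.zeros((size, size))
    cx = cy = size // 2                        # radar site at the centre
    for ray, az in zip(scan, azimuths_deg):
        phi = np.deg2rad(az)                   # assumed: 0 deg = North, clockwise
        for m in range(n_gates):
            r = (m + 0.5) * gate_km
            x = cx + int(round(r * np.sin(phi) / km_per_px))
            y = cy - int(round(r * np.cos(phi) / km_per_px))
            if 0 <= x < size and 0 <= y < size:
                img[y, x] = ray[m]             # overwrite: 'last wins'
    return img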

Figure 2.2: Projection onto Cartesian Coordinates
Azimuth scan, 28 Sep 1999, 9:36 GMT+2, range 50 km, elevation 2.57°, Cartesian projection.

Figure 2.3: Interpolation onto Cartesian Coordinates
Azimuth scan, 28 Sep 1999, 9:36 GMT+2, range 50 km, elevation 2.57°, Cartesian interpolation.

2.2 Values
Reflectivity data from the X-Band radar installed in Bonn, as used in this work, comes in unsigned char values, i.e. a range of integers [0..255]. The reflectivity is calculated using the formula $Z[\mathrm{dBZ}] = -31.5\,\mathrm{dBZ} + 0.5 \cdot Z[\mathrm{byte}]$. For most of the data processing this conversion is omitted, though, because the byte-valued format proves advantageous in terms of grayscale representation. In addition, the data contains a time stamp and angular properties for each ray and for the scan as a whole. For visually matching a given grey value back to a reflectivity value, the legend in Figure 2.4 may be referenced:

Figure 2.4: Reflectivity Legend
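In code, the mapping between byte values and dBZ is a one-liner in either direction. This is a direct transcription of the formula above; the function names and the clipping in the inverse direction are my additions.

import numpy as np

def byte_to_dbz(z_byte):
    """Z[dBZ] = -31.5 dBZ + 0.5 * Z[byte]."""
    return -31.5 + 0.5 * np.asarray(z_byte, dtype=float)

def dbz_to_byte(z_dbz):
    """Inverse mapping, clipped to the valid byte range [0..255]."""
    z = np.round((np.asarray(z_dbz, dtype=float) + 31.5) / 0.5)
    return np.clip(z, 0, 255).astype(np.uint8)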

2.3 Clutter Filtering
Clutter is radiation reflected off static ground targets like trees, buildings, hills etc. This is mainly due to the fact that the geometric properties of the radar 'beam' are far from ideal. Viewed across its axis, the radar 'beam' has multiple local maxima (lobes) of radiation. While the absolute maximum, the main lobe, contains most of the energy, some energy is emitted in the secondary maxima, called the side lobes, whose axes point away from the main axis. Thus, even when placing the radar on a raised point with a clear line of sight (for the main beam), the side lobes will produce ground clutter. In the light of this fact, it is understandable that clutter is mostly found in the near range around the radar. Of course there is also a dependency on the orographic circumstances, which differ from site to site. Clutter may well be among the most intense reflectivity in the data, since the objects giving rise to it are often of significantly higher density and possess better reflective properties than most meteorological targets do, except maybe hail. Thus, in order to obtain a more meteorologically relevant view of the data, it is desirable to find means of filtering clutter out. One strong indicator of clutter is a target being stationary (trees, buildings, mountains, large radio antennas, etc.). A Doppler radar can identify clutter with relative ease by the absence of radial movement. Although the X-Band radar in Bonn is capable of detecting Doppler velocities now, that was not always the case: the radar was modernised and enabled for Doppler detection in 1998. The data chosen for this thesis is from before that time, and thus a different approach to distinguishing clutter from real targets was required. Apart from adopting the cluttermap approach, a method of stochastic decision making and weighted interpolation was developed.

The chosen approach is based on a concept known as cluttermaps. A cluttermap is a map of the radar's surroundings containing reflectivity values for days on which no 'significant' meteorological targets were detected. It is reckoned that the signal on such days will mainly be due to clutter. In the course of this thesis, the cluttermap was stored as a linked list, where the positions are indicated by azimuth angle and range gate number. The collection of a cluttermap is done as follows: given a suitable scan, the rays in the scan are traced individually. Whenever a byte value exceeds 0 (or a suitable threshold), it is looked up in the cluttermap. Should the coordinate (angle, gate number) already exist in the cluttermap, the reflectivity value found in the scan is added to the reflectivity value of that cluttermap point (node), and a counter indicating how many scans have contributed to this specific point is increased, allowing a simple average of the found reflectivity values to be calculated later. Should the point not exist in the cluttermap, a new node is created, initialised with the scan's value for that point and inserted into the linked list at this spot. The choice of a linked list instead of a full array is based on two thoughts. First, only values which actually contain something are stored; clutter does not by a long shot fill a radar scan, it is relatively sparse. Second, the azimuth angles in rays of different scans are not perfectly constant. Although the azimuth angles were rounded somewhat prior to collecting the cluttermap,² the problem remains in principle. With the linked list approach the cluttermap simply gets more dense should additional azimuth angles appear. After adding a few scans, the cluttermap contains at each point the summed reflectivity values and the number of scans taken into account for that position.
² To ease the processing, the azimuth angles, which originally come in a precision of $10^{-2}$, are rounded down one digit in precision to $10^{-1}$. The maximum angular error thus made is $\Delta\phi = \pm 0.05°$. At the maximum range of 100 km for extended azimuth scans, this angular error translates into a maximum dislocation error of $\Delta r = 100\,\mathrm{km} \cdot \mathrm{rad}(0.05°) \approx 87.3\,\mathrm{m}$. This was found tolerable for this process, since clutter is mostly found in a range of 0-25 km, where the error according to the same evaluation is about 21 m. The rounding error of $\approx 5\%$ seems acceptable for the purpose.

For days with great changes in weather conditions it can be necessary to create more than one cluttermap (or at least use more scans) to account for the impact of different weather conditions on the path of the radar beam. For days with more stationary conditions, one cluttermap suffices and fewer scans are required. For practical purposes, it has proven advantageous to obtain a new cluttermap for each day, provided sufficiently event-free intervals can be found in the data.
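A minimal sketch of the cluttermap collection might look as follows. For brevity it uses a Python dictionary keyed by (rounded azimuth, gate index) instead of the linked list described above; the sparsity benefit is the same, and all names are hypothetical.

from collections import defaultdict

def add_scan_to_cluttermap(cluttermap, scan, azimuths_deg, threshold=0):
    """Accumulate one 'clear day' scan into the cluttermap.

    Each entry maps (azimuth rounded to 0.1 deg, gate index) to a
    [sum, count] pair, so the average clutter value at a node can
    later be recovered as sum / count.
    """
    for ray, az in zip(scan, azimuths_deg):
        az_key = round(az, 1)              # rounding described in the text
        for m, z in enumerate(ray):
            if z > threshold:              # store only non-empty gates
                node = cluttermap[(az_key, m)]
                node[0] += int(z)          # summed reflectivity (byte values)
                node[1] += 1               # number of contributing scans

cluttermap = defaultdict(lambda: [0, 0])
# for scan, az in clear_day_scans: add_scan_to_cluttermap(cluttermap, scan, az)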

How can this cluttermap be leveraged to reduce clutter in scans? Remember that the cluttermap contains those positions in the scan which have been found to be cluttered under 'clear' conditions, the number of scans indicating so, and the summed-up clutter reflectivity values.

A first approach might be to simply subtract the average clutter reflectivity at each
position in the cluttermap from the reflectivity found in the scan to be corrected. This
approach is based on the assumption that the overall reflectivity at a cluttered position is
the sum of the reflectivity of the meteorological target plus the clutter’s reflectivity (simple
superposition). Consider this basic form of the radar equation for multiple targets:

$$P_r = \frac{P_t G^2 \lambda^2}{(4\pi)^3} \sum_i \frac{\sigma_i}{R_i^4} \qquad (2.1)$$

where $P_r$ is the average received power, $P_t$ the transmitted power, $G$ the gain of the radar and $\lambda$ the radar's wavelength. The sum on the right contains $\sigma_i$, the $i$-th target's scattering cross section, and its distance to the radar, $R_i$. The backscattering cross section $\sigma$ is calculated by taking the shape (diameter facing the radar's direction), the dielectric properties and the radar's wavelength into account. According to this equation, in the absence of any meteorologically relevant targets, the clutter's back-scattered power could be measured and afterwards subtracted from the measurement, since it appears to be additive (through the sum on the right-hand side). In practice, however, this path leads to big errors, ripping 'holes' into the radar image. Why is this? For a start, the path of the radar beam is heavily influenced by atmospheric fields like temperature and humidity; stationary ground targets therefore appear to be moving in the radar's view. In addition, the radar beam is somewhat attenuated by travelling through a medium filled with backscattering targets. These effects of energetic and directional obfuscation render the simplistic superposition approach somewhat useless. In spite of the cluttermap information, the problem of determining how much radiation at a given point in a sample is owed to clutter persists.

In what other way could the information in the cluttermap aid us? Could it be possible
to leverage the cluttermap for estimating at least the likelihood of a point being cluttered?
And should the likelihood be high, could we apply a correction based on more information
than just the cluttermap? The following paragraph develops a method for doing just that.

Stochastic Ray-Interpolation Filter


Consider a sample of a meteorological target which is known to be at least partially due to ground clutter. A human observer would find it relatively easy to identify clutter by looking at a sequence of images, identifying the stationary bits and taking the structure of the clutter into account. When clutter and other targets are present in the same area, the human observer would still be able to tell clutter from other objects to some extent, drawing on collected experience. One chief aspect in this decision-making process would surely be continuity, the larger structure of the objects seen. The presented method tries to take that concept into account when distinguishing clutter from non-clutter. Knowledge about the stationary targets is collected in the aforementioned cluttermap. In order to get a view of the structure of detected objects, the scan is considered ray-wise. The main assumption is as follows:
The more the measured reflectivity at a given coordinate deviates from the av-
erage cluttermap value, the more likely the value is to be correct.
Assume a cluttermap $C = \{C(\phi, m) \mid \phi \in [0, 360), m \in [1, N_{gates}]\}$, where $N_{gates}$ is the number of range gates the radar produces in a ray. Further, let a radar scan consist of $N_{rays}$ rays at angles $\phi_n$, each containing $N_{gates}$ range gates: $Z = \{Z(\phi_n, m) \mid n \in [1, N_{rays}]; m \in [1, N_{gates}]\}$. The method works by traversing all points (nodes) of the cluttermap and comparing them to the corresponding points in the scan. What interests us is the likelihood of the point under consideration, $Z(\phi_n, m)$, being obfuscated by clutter $C(\phi = \phi_n, m)$. An estimate is proposed in the following form:³

$$P_{clutter}(Z(\phi_n, m)) = \operatorname{erfc}\left(\frac{2\,|Z(\phi_n, m) - C(\phi = \phi_n, m)|}{255}\right) \qquad (2.2)$$
Should the probability $P_{clutter}$ exceed a pre-set threshold $P_{crit}$, the point in the scan is assumed to be heavily contaminated by clutter and thus in dire need of correction.⁴
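The decision step of Eq. (2.2) is easily expressed with the complementary error function from scipy; a minimal sketch, with hypothetical names:

import numpy as np
from scipy.special import erfc

def clutter_probability(z, c):
    """Eq. (2.2): likelihood that sample z is dominated by clutter c."""
    return erfc(2.0 * np.abs(z - c) / 255.0)

P_CRIT = 0.9                                   # threshold used in the examples
# a sample close to its cluttermap average is flagged for correction:
flagged = clutter_probability(180, 170) > P_CRIT   # True: erfc(0.078) ~ 0.91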
Now that a decision has been made, the sample's value needs correction. In order to take the continuity of the data along the ray into account, the data is modelled as a polynomial g of order N in the index coordinate, within a certain range upwards (further away from the radar site) and downwards (closer to it) of each range gate under consideration. Should the downward range cross the origin (the radar site), samples from the diametrically opposite ray (or the ray closest to being diametrical) are taken into account. For the sake of simplicity, assume a fixed ray angle φ and consider only the range gate coordinate m:

$$g(m) = \sum_{j=0}^{N} a_j m^j \qquad (2.3)$$

To obtain the coefficients $a_j$, a least-squares fit is done, taking a range of K values up and down the ray into account. Let $f(m)$ denote the sampled value at the given range gate m. Then the least-squares fit through m is obtained by defining the squared error function I:

$$I = \sum_{\hat m = m-K}^{m+K} \left(g(\hat m) - w_{\hat m} f(\hat m)\right)^2 \qquad (2.4)$$

where the $w_{\hat m}$ are weights on the observations. Since we want to minimise the error by adjusting the coefficients, we differentiate I with respect to each $a_j$:

$$\frac{\partial I}{\partial a_j} = \frac{\partial}{\partial a_j} \sum_{\hat m = m-K}^{m+K} \left(g(\hat m) - w_{\hat m} f(\hat m)\right)^2 \equiv 0 \qquad (2.5)$$
³ The factor 2 in the argument of the error function serves the purpose of extending the range of the argument a bit, thus making fuller use of the value range of the error function and yielding more distinguishable results. The value 255 is owed to the fact that the range of possible values is [0, 255] and serves to normalise the argument.
⁴ This formula was in its basic form derived by inspiration. The complementary Gaussian error function was chosen simply for its mathematical properties (see Fig. 2.5): the closer the sampled value is to the cluttermap value, the smaller the argument of the function and the closer the result (the 'likelihood') gets to 1. Note that this approach introduces one parameter, the threshold likelihood $P_{crit}$.

Figure 2.5: Gaussian Error Functions
Gaussian error function erf(x) and complementary Gaussian error function erfc(x), plotted for x in [0, 2].

Carrying out the differentiation for coefficient $a_j$, replacing g with its definition and reordering gives:

$$\sum_{i=0}^{N} a_i \sum_{\hat m = m-K}^{m+K} \hat m^{j}\, \hat m^{i} = \sum_{\hat m = m-K}^{m+K} w_{\hat m} f(\hat m)\, \hat m^{j} \qquad (2.6)$$

By shifting the indices so that $[m-K, m+K]$ transforms to $[0, 2K]$, defining a transition which maps $n \longrightarrow \hat m(n)$ and considering each j, this can be written in matrix form as:

$$\mathbf{G}\,\mathbf{a} = \mathbf{v} \qquad (2.7)$$

where $\mathbf{G}$ denotes the matrix containing the elements

$$G_{ij} = \sum_{n=0}^{2K} \hat m(n)^{i}\, \hat m(n)^{j} \qquad (2.8)$$

the vector $\mathbf{a}$ the polynomial coefficients $(a_0 \ldots a_N)$, and the vector $\mathbf{v}$ the observations, with

$$v_i = \sum_{n=0}^{2K} w_{\hat m(n)} f(\hat m(n))\, \hat m(n)^{i} \qquad (2.9)$$

Now the coefficient vector a can be determined by inverting the matrix G.

The observations are weighted through the $w_m$, according to a scheme based on their credibility with respect to clutter. For each observation point $f_m$, an estimate is made of how likely it is to be influenced by clutter, using the cluttermap value $C_m$. The proposed weighting scheme makes use of the following function:

$$w_m = \operatorname{erf}(2\,|f_m - C_m|/255) \qquad (2.10)$$

This way, values that exhibit a higher probability of being cluttered receive less credit, expressed through w, than less cluttered ones (see again Fig. 2.5 for the Gaussian error function). Since the abscissa defined by the range gate indexing was chosen to have its origin at the range gate under consideration, the evaluation of the fitted value for this special point simplifies to the value of the coefficient $a_0$.

With this procedure, a device is at hand to correct clutter in radar data. Given a cluttermap C and a scan Z, each point in Z is checked against C and, if $P_{clutter}$ exceeds a selectable threshold $P_{crit}$, the point in Z is replaced by the fitted value $a_0$.
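Putting Eqs. (2.7)-(2.10) together, the correction of a single range gate can be sketched as follows. The sketch assumes the fitting window does not cross the radar site (the diametrical-ray handling described above is omitted), and all names are mine:

import numpy as np
from scipy.special import erf

def corrected_value(f, c, K=20, N=3):
    """Return the fitted value a0 for the gate at the window centre.

    f : byte values along the ray at relative positions -K..K
    c : average cluttermap values on the same window (0 where no node)
    Implements Eqs. (2.4)-(2.10): weights w = erf(2|f-c|/255), the fit
    target is w*f, and the normal equations G a = v are solved.
    """
    f = np.asarray(f, dtype=float)
    c = np.asarray(c, dtype=float)
    x = np.arange(-K, K + 1, dtype=float)      # origin at the gate itself
    w = erf(2.0 * np.abs(f - c) / 255.0)       # Eq. (2.10)
    G = np.array([[np.sum(x ** (i + j)) for j in range(N + 1)]
                  for i in range(N + 1)])      # Eq. (2.8)
    v = np.array([np.sum(w * f * x ** i) for i in range(N + 1)])  # Eq. (2.9)
    a = np.linalg.solve(G, v)                  # solve G a = v (Eq. 2.7)
    return a[0]                                # fit evaluated at x = 0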

For the following three examples, the parameters were chosen as follows: $P_{crit} = 0.9$, $K = 20$, $N = 3$. All scans were taken from July 12th, 1999. The scans shown in Figure 2.6 were used to collect the cluttermap. Figures 2.8 to 2.12 show corrections for individual rays and range gates (see the captions). Note that the corrected value used in each situation is the value of the fit at x = 0 (corresponding to $a_0$ by construction).

Figure 2.6: Cluttermap Scans
Two scans from July 12th, 1999, constituting the cluttermap. Left: 10:01, Right: 10:06.

Figure 2.8 shows the clutter and the fitting procedure for a situation where no larger structure is present in the current sample (red curve) in the vicinity of the clutter (green curve). Since in that situation the difference between cluttermap and sampled values is small, and no larger structure is present in the ray to indicate a 'proper' signal, the resulting fit is close to 0 overall.

The situation has changed in Figure 2.10. A large precipitation signal has wandered into the centre from the northeast and is partially covering the cluttered area. It can be seen how the presence of the larger structure in the ray 'pulls up' the weights and sample values, thus raising the fitted value.

In Figure 2.12 the precipitation echo has wandered further southwest and now covers the clutter completely. The large structure present in the ray pulls up the fit from both sides. Also clearly visible is how the weights react to the change from cluttered to non-cluttered areas.

This procedure is not fully mature yet; it still leaves small holes in the precipitation. Since these holes don't pose a problem for subsequent stages, the quality was deemed good enough for the course of this work. At an early stage of development the whole procedure was tried using simple linear regression, which basically boils down to setting the order of the interpolation polynomial to 1. It turns out that the linear approach is too crude: since a larger structure with a distinct curvature should be captured, and not only the next few points, the simple linear process tends to underestimate the reflectivity a lot, resulting in holes or artificial low-level plateaus.

Figure 2.7: Cluttermap Correction 1
Left: 10:06, no correction. Right: corrected.

Figure 2.8: Ray Interpolation Example: Clutter Only
Showing the fit for July 12th, 10:06, azimuth angle 0, range gate no. 5. Plotted against range gate distance: samples, clutter, weights (scaled by 100), weighted samples and fit, in Z [byte value].

Figure 2.9: Cluttermap Correction 2
Left: 12:31, no correction. Right: corrected.

Figure 2.10: Ray Interpolation Example: Clutter Partially Covered by Another Event
Showing the fit for July 12th, 14:31, azimuth angle 0, range gate no. 6. Plotted against range gate distance: samples, clutter, weights (scaled by 100), weighted samples and fit, in Z [byte value].

Figure 2.11: Cluttermap Correction 3
Left: 13:46, no correction. Right: corrected.

Figure 2.12: Ray Interpolation Example: Clutter Completely Covered by Another Event
Showing the fit for July 12th, 15:46, azimuth angle 89, range gate no. 2. Plotted against range gate distance: samples, clutter, weights (scaled by 100), weighted samples and fit, in Z [byte value].

Chapter 3

Digital Image Processing Basics

The data produced by the radar system is, in its original form, not very suitable for the subsequent stages of edge and object detection. It first needs a transformation onto the Cartesian plane and a couple of filtering operations. Since the data can be viewed as a natural grayscale image, it is only natural to refer to methods for processing digital imagery as appropriate for the treatment of this data. This section introduces some basic concepts and methods used in the course of this work.

The algorithms devised for processing digital images are legion. They range from simple pixel-wise operations (like thresholding) to algorithms taking the whole image data into account, like Fourier transformations. It would be well beyond the scope of this work to give an authoritative overview, so only the techniques actually used will be considered. For an extensive discussion of the topic see Gonzalez/Woods, Digital Image Processing [1], from which all digital image processing techniques were taken, except for the ones developed by the author himself.

3.1 Definitions
An image in the sense of image processing is a set of equally-dimensioned rectangular matrices of values, which define properties for each pixel in each cell of the corresponding matrices. The combination of all this information determines the appearance of the pixel in the resulting image. A good example are the well-known RGB images, which need three matrices containing the colour information for red, green and blue for each pixel. Since the algorithms used to process these matrices are more often than not identical for each information matrix, the image most widely used when explaining digital image processing procedures is a grayscale image. It needs only one matrix containing the pixel values from a defined range of values. Radar data from the X-Band radar in Bonn comes in a range of unsigned char [0..255] and can thus be looked upon as a natural grayscale image. All following procedures will make use of that convention. Another helpful construction for the purpose of processing is defining the image as a function f(x, y), which yields the grayscale value at pixel coordinates (x, y).

3.2 Spatial Convolutions
Convolving an image is among the simplest tools in image processing. It can be thought of as an image transformation by which the values of the pixels neighbouring a pixel under convolution are used in some discrete function (the convolution kernel) to determine the pixel value for the resulting image. The neighbourhood can be rectangular or a circle of influence, and the parameter determining its size (also called the convolution kernel size) may vary. The kernel function itself may be constant or depend on the spatial coordinate or the values found in the neighbourhood. More often than not, though, the parameters and size of the convolution are constant, which gives rise to a significant simplification of the process: masks.

3.2.1 Using Masks for Convolution


A convolution with a mask is done by taking the neighbourhood of a pixel and applying a mask of weights to it, calculating the sum of the weighted neighbourhood values. Fig. 3.1 illustrates this process for a simple 3x3 neighbourhood. A mask in the context of convolution is often also called its kernel; see also [1], p. 48. Each pixel in the input image corresponds to

Figure 3.1: 3x3 neighbourhood around the central point z5.

$c = \sum_{i=1}^{9} z_i w_i$ in the output image, with the neighbourhood values $z_i$ numbered as in Fig. 3.1 and $w_i$ the mask weights. Repeating this process for each pixel results in the convolution of the image with the mask. Of course a mask needn't be limited to 3x3. The concept of masks has proven so generic and useful that the engineers of the Java programming language introduced a class in their graphics library for just this purpose in version 1.4.1.
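A naive implementation of mask convolution, following the weighted-sum definition above, might look like this (borders are simply left untouched; a production version would pad or mirror them):

import numpy as np

def convolve_mask(image, mask):
    """Convolve a grayscale image with a small odd-sized square mask.

    Each output pixel is the weighted sum c = sum_i z_i * w_i of its
    neighbourhood; border pixels are left unchanged in this sketch.
    """
    h, w = image.shape
    k = mask.shape[0] // 2
    out = image.astype(float).copy()
    for y in range(k, h - k):
        for x in range(k, w - k):
            region = image[y - k:y + k + 1, x - k:x + k + 1]
            out[y, x] = np.sum(region * mask)
    return out

mean_mask = np.full((3, 3), 1.0 / 9.0)   # the simple averager of Fig. 3.2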

3.2.2 Types of Masks


The choice of the weights in the mask is of course completely determined by the purpose the mask serves. Masks can perform differentiation, smoothing, averaging and a lot more; it all depends on the parameters.

Averaging Masks
Among the simplest uses for masks is averaging. By setting all the weights to 1 and dividing the convolution image by the number of weights in the mask, each pixel in the result contains the arithmetic average of the neighbourhood of the pixel (including itself). A slightly more advanced use could be setting all diagonal entries to 0, thus limiting the neighbourhood to straight lines. A better solution, though, is choosing the weights according to the number of values under consideration. The mask shown in Figure 3.2 calculates the arithmetic average of a 3x3 neighbourhood. As a general guideline, the sum of the weights has to be 1 for averaging. The result of averaging is demonstrated in Figures 3.3 and 3.4.

Figure 3.2: Arithmetic Mean Averaging Mask


A simple averager.

One of the biggest disadvantages of this method is the blurring, which makes edges considerably harder to locate. We will introduce a more subtle method of averaging later, the Gaussian blur filter.

Figure 3.3: Averaging Example, Unfiltered


Uncorrected radar data from July 12th, 1999 12:31 transformed into Cartesian coordinates
at a 200x200 resolution.

Figure 3.4: Averaging Example, Filtered
Result of convolving the image once with the averager shown in Fig. 3.2. Notice how the bright spots have been averaged out and some of the smaller gaps have been filled.

Derivative Masks
As stated in Gonzalez/Woods, Digital Image Processing [1], p. 197: if the averaging process can be viewed as an analogue to integration, and this smoothes images, the opposite can be expected from differential masks. Since differentiation on a two-dimensional domain yields a vector, and the magnitude of the gradient is the length of that vector, calculating the gradient by using masks requires two masks, one for the x and one for the y direction:

$$\nabla f = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}, \qquad |\nabla f| = \sqrt{(\partial f/\partial x)^2 + (\partial f/\partial y)^2}$$
Now let a 3x3 neighbourhood around a given point be numbered as indicated in Fig. 3.1. Then the gradient magnitude can be approximated as:

$$|\nabla f| \approx |(z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)| + |(z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)|$$

where the first term corresponds to the approximate gradient in y, $G_y$, and the second term to its counterpart in x, $G_x$. This scheme gives rise to a pair of masks known in image processing as Prewitt operators, which can be seen in Fig. 3.5. Another form of differential operators, known as Sobel operators, has the advantage of weighting the axis-oriented values over the diagonal elements, providing a smoother result than the Prewitt operator. The two Sobel operators are shown in Fig. 3.6. Generally, differential masks have coefficients that sum to 0. For in-depth information on the presented operators, see [1].
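As an illustration, the Sobel masks of Fig. 3.6 can be applied with any 2-D convolution routine to approximate the gradient magnitude; this sketch uses scipy.ndimage:

import numpy as np
from scipy.ndimage import convolve

# The Sobel masks of Fig. 3.6 for the x and y directions
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient_magnitude(image):
    """Approximate |grad f| via the two Sobel convolutions."""
    gx = convolve(image.astype(float), SOBEL_X)
    gy = convolve(image.astype(float), SOBEL_Y)
    return np.hypot(gx, gy)    # sqrt(Gx^2 + Gy^2)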

Figure 3.5: Prewitt Operators
The Prewitt operators for the x (left) and y (right) directions respectively.

Figure 3.6: Sobel Operators


The Sobel operators for the x (left) and y (right) directions respectively.

3.3 Neighbourhood Averaging


As the name implies, a pixel under a neighbourhood averaging process is replaced by some mean value of its surrounding pixels. In this case the size of the convolution makes a big difference: bigger influence radii tend to smear values around more than small ones, and the specific types of averaging preserve or discard the fine structure of the image to different degrees. In this work, four types of averaging have been taken into account.

3.3.1 Arithmetic Mean


The arithmetic mean of the values in a sample is calculated and replaces the original pixel value [1]. This method has the same weakness as the median (see below), and often median and mean are indeed identical: it tends to underestimate the brightness in sparsely populated areas of the image and blurs the data considerably. For an example of applying this method (3x3 neighbourhood, equivalent to a radius of 1 pixel) see Fig. 3.4.

3.3.2 Maximum
The pixel value is replaced by the maximum of the values found in the sample. It is a very good filter for enhancing structural views of the data and filling gaps, but it destroys a lot of the fine-grained structure. It is the steam-hammer among the presented methods, but good for boundary finding in weak data.

3.3.3 Median
The median of a sample of values is defined as the 0.5 percentile of these values [1]: it is the value above which half of the sample values lie, and below which the other half lie. An example of using a median filter on a 3x3 neighbourhood on the data presented in Fig. 3.3 is shown in Fig. 3.7.

3.3.4 Percentile
The best-suited averaging method found in the course of this work was the percentile filter. A predefined percentile is chosen, and for each sample the original pixel value is replaced by that percentile of the neighbourhood values. A carefully chosen percentile value has all the desirable properties of the maximum filter, yet preserves the fine-grained structure of the data a lot better than all other methods. It is computationally more intensive, since an interpolation is done for each sample, but in practical application this difference was found to be imperceptible, and the results justify the extra effort involved. Note that the maximum filter is the 100% percentile and the median is the 50% percentile.
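A percentile filter is only a few lines around numpy's percentile routine, which performs the interpolation mentioned above; a minimal sketch:

import numpy as np

def percentile_filter(image, radius=1, q=80.0):
    """Replace each pixel by the q-th percentile of its neighbourhood.

    q=100 reproduces the maximum filter, q=50 the median; borders are
    left unchanged in this sketch.
    """
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            region = image[y - radius:y + radius + 1,
                           x - radius:x + radius + 1]
            out[y, x] = np.percentile(region, q)  # interpolates between values
    return out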

Figure 3.7: Median Averaging Example


July 12th, 1999 12:31, Median Averaging in a 3x3 neighbourhood.

3.4 Thresholding
Thresholding denotes the process of limiting the range of possible values for the purpose of differentiating between the background and foreground of a given image. Often the term thresholding is used synonymously with a highpass filter, where all values must lie above a certain value to pass the filter. Thresholding can just as well mean the reverse (lowpass) or a combination (bandpass); for the purpose of this work, only a highpass filter was implemented and used.

Figure 3.8: Percentile Averaging Example
July 12th, 1999 12:31, Percentile Averaging 80% in a 3x3 neighbourhood.


3.4.1 Absolute or Adaptive?


Absolute thresholding uses a static value or range of values which are allowed to pass. It is most applicable in situations where the range of interest in the data is known a priori. Adaptive thresholding is a process where the threshold value is not set in advance, but defined on the fly as a certain portion of a dynamic range. Which of these two variants is used depends a great deal on the problem under consideration. For radar data with a closely defined range of values (as provided by the X-Band radar in Bonn), static thresholding has proven to give good results on average. As a convention, let $T_{abs}$ denote absolute threshold values in dBZ and $T_{rel}$ adaptive thresholds in percent.
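Both variants amount to a single comparison per pixel. A sketch of the two highpass flavours, with the dBZ-to-byte conversion of Section 2.2 noted in a comment:

import numpy as np

def threshold_absolute(image, t_byte):
    """Highpass: keep values at or above a fixed byte threshold."""
    return np.where(image >= t_byte, image, 0)

def threshold_adaptive(image, t_rel):
    """Highpass: keep values above a fraction t_rel of the dynamic range."""
    lo, hi = image.min(), image.max()
    return np.where(image >= lo + t_rel * (hi - lo), image, 0)

# T_abs = 12.5 dBZ corresponds to byte value (12.5 + 31.5) / 0.5 = 88:
# filtered = threshold_absolute(img, 88)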

3.5 Other Filters Used
3.5.1 Isolated Bright Pixel Filtering
Single points are isolated pixels which differ considerably in brightness from their immediate surroundings. Since they cause trouble in later stages of the object detection, namely in the Gaussian scale-space analysis, a procedure was devised to remove them. For each point, the differences with all points in a 4x4 neighbourhood are considered. If more than two exceed the chosen maximum gradient, the pixel is assumed to be either an isolated point of strong reflectivity or part of a line-like structure of that type, and is replaced by a simple arithmetic average of its surrounding pixels. Otherwise it passes unchanged. The following figures illustrate this using a maximum gradient of 100/pixel (100 dBZ/km).¹

Figure 3.9: Single Bright Spots on plain polar image


Uncorrected data from July 11th, 11:01. Single bright spots are clearly visible

Figure 3.10: Single Bright Spots removed


Corrected using max. gradient 100/pixel, 68 (0.17%) of the original pixels were corrected.

¹ The assumption that a gradient of 100 dBZ/km indicates fallacious measurements is based on [2].
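A sketch of the bright-pixel filter; for symmetry it uses a 3x3 neighbourhood (eight neighbours) rather than the 4x4 window mentioned above, and the names are mine:

import numpy as np

def remove_bright_pixels(image, max_gradient=100, min_exceed=3):
    """Replace isolated bright pixels by the mean of their neighbours.

    A pixel is treated as fallacious when more than two of the
    differences to its neighbours exceed max_gradient (byte values).
    """
    h, w = image.shape
    out = image.astype(float).copy()
    corrected = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = image[y - 1:y + 2, x - 1:x + 2].astype(float)
            diffs = np.abs(region - image[y, x])
            if np.count_nonzero(diffs > max_gradient) >= min_exceed:
                out[y, x] = (region.sum() - image[y, x]) / 8.0
                corrected += 1
    return out, corrected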

3.5.2 Speckle Filtering
Speckle consists of small particles randomly distributed over the image; it can be thought of as dust, scratches or other small-scale noise in e.g. a photograph. In the context of this work, speckle is defined as small-scale objects which need removal in order not to disturb higher layers of processing. Especially when using Gaussian blur filtering, small-scale yet highly intense spots in the data can get spread out widely in the process, resulting in noise in the scale space. Therefore, the following procedure was devised to get rid of it.

A pixel radius is chosen along with a minimum coverage percentage. Each pixel in the image is then taken as the midpoint of a disk with said radius, and the coverage is calculated. Since the data consists solely of bright blobs on a dark background, and this background is defined as pixels of value 0, the coverage is simply the number of non-zero pixels divided by the overall number of pixels taken into consideration. If that number equals or exceeds the chosen percentage, the pixel is considered part of a large enough structure and passes the filter. If, on the other hand, the coverage around that point is smaller than the chosen percentage, it doesn't make it into the result. The parameters, however, need to be chosen very carefully, since too high a threshold for a given radius results in the removal of too many boundary points from originally sufficiently large structures. A good combination of values was found to be a radius of 11 pixels with a required coverage of at least 10%, at a resolution of 200x200 pixels. Fig. 3.11 and Fig. 3.12 illustrate the method.
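The coverage test can be sketched directly from the description above; a disk mask is precomputed and clipped at the image borders (all names hypothetical):

import numpy as np

def despeckle(image, radius=11, min_coverage=0.10):
    """Remove pixels that are not part of a sufficiently large structure.

    For each non-zero pixel, the fraction of non-zero pixels within a
    disk of the given radius is computed; pixels below min_coverage
    are set to background (0).
    """
    h, w = image.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2) <= radius**2       # circular area of influence
    out = image.copy()
    nonzero = image > 0
    for y in range(h):
        for x in range(w):
            if not nonzero[y, x]:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = disk[y0 - y + radius:y1 - y + radius,
                     x0 - x + radius:x1 - x + radius]
            coverage = np.count_nonzero(nonzero[y0:y1, x0:x1] & d) / d.sum()
            if coverage < min_coverage:
                out[y, x] = 0
    return out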

Note the small remains of speckle near unfiltered areas in the top right-hand area. If a very small-scale object lies near enough to a bigger one, enough points from the adjacent bigger structure make it into the area of influence of the smaller one, keeping it alive through the filter. Since this is limited to a fraction of the radius of influence, the errors introduced are not of importance. Another configuration fooling the filter are dense yet singular spots, which keep each other alive; a remainder owed to this configuration can be seen right in the centre of the de-speckled image. Overall, however, the presented method delivers good enough results for the subsequent processing stages.

Figure 3.11: Image with Speckle
July 13th, 1999, 11:41. The shown image was produced by applying a cluttermap correction, removing bright spots, thresholding at 12.5 dBZ and projecting onto the Cartesian plane using a resolution of 200x200 pixels. In the centre, remains of the cluttermap correction can be seen.

Figure 3.12: Despeckled Image


The same data after applying a speckle filter of radius 11 pixels and minimum coverage of 10%. 72
values were removed.

Chapter 4

Scale Space Theory

4.1 Basic Conception


The conceptual foundations of scale-space theory are very intuitive. Consider a single tree. On a fine scale it exhibits leaves and twigs. Looking at the tree from a little further off renders the concept of describing the tree by its twigs somewhat pointless; the observable details would rather be branches and trunk. Backing off even further reveals the tree's overall shape, which might roughly be cylindrical or spherical. At the scale of a forest, however, even the unit "tree" seems inappropriate. The level of detail used to describe something depends largely on the scale at which the object is perceived.

Although this concept is intuitively easy to understand, it lacked a mathematical formulation in terms of signal processing for a long time, despite the fact that all the necessary mathematical concepts were ready by the mid-1800s [5]. It is interesting that, although in the western hemisphere the scale-space idea is usually said to have first appeared in a 1983 paper by A. P. Witkin [6] or an unpublished report by Stansfield (1980), Weickert points out that the first Gaussian scale-space formulation was proposed by Taizo Iijima in Japan in 1959. Two theories of scale-space have developed surprisingly independently of each other in Japan and the western world. A comparison of the two theories was done by Weickert in his paper "Scale Space was discovered in Japan" [5], which is also a good, compact introduction to the general ideas of the theory.

Within the confines of any given image¹, the concept of scale becomes somewhat relative. Lindeberg states in his book "Scale Space Theory in Computer Vision" [3]: 'The extent of any real world object is determined by two scales, the inner scale and the outer scale. The outer scale of an object or a feature may be said to correspond to the (minimum) size of a window that completely contains the object or the feature, while the inner scale may loosely be said to correspond to the scale at which substructures of the feature or object begin to appear.'
Scale-space theory is a mathematical model which strives to give a robust and usable description of the property 'scale'.
¹ image in this work is used synonymously with 2-D signal representations.

4.2 Short Introduction to Gaussian Scale Space
This section basically summarises Lindeberg, 1994, Chapter 2. Consider a one-dimensional 'image' $F: \mathbb{R} \to \mathbb{R}$. Now a scale parameter $t \in \mathbb{R}^+$ is introduced; small values of t shall represent finer, larger values coarser scales. The image F is then abstracted into coarser and coarser scales by gradually increasing t, resulting in a family of images parameterised by t. This family, $L(x, t)$, is called the scale-space representation of the image. It contains information about each object in F at each considered scale, which has some similarity with the wavelet approach. As opposed to wavelets, though, the scale-space representation does not shrink in size as the scale parameter increases; scale space is useless for data compression.
How does the abstraction take place? For an illustration, a one-dimensional signal is instructive. Again, let $F: \mathbb{R} \to \mathbb{R}$. The scale-space representation L of F starts at scale 0 (the original image), and images at coarser scales are given by convolution with a scale-space kernel g:

$$L(x, 0) = F(x) \qquad (4.1)$$
$$L(x, t) = g(x, t) * F \qquad (4.2)$$

which is calculated in the form of a convolution of F with g:

$$L(x, t) = \int_{\lambda=-\infty}^{\infty} g(\lambda, t) F(x - \lambda)\, d\lambda \qquad (4.3)$$

Although many possible scale-space kernels are conceivable², the Gaussian kernel $g(\cdot, t)$³ has by far the most important standing in the field of scale-space theory:

$$g(x, t) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t} \qquad (4.4)$$
It has a number of desirable properties (see Lindeberg, 1994). First of all, it is normalised in the sense that

$$\int_{x \in \mathbb{R}} g(x, t)\, dx = 1 \qquad (4.5)$$

It has a semi-group property, which results in the fact that the convolution of a Gaussian kernel with a Gaussian kernel is another Gaussian kernel:

$$g(\cdot, t_1) * g(\cdot, t_2) = g(\cdot, t_1 + t_2) \qquad (4.6)$$

This has a technically important implication for scale-space representations: a scale-space representation $L(x, t_2)$ can be computed from a scale-space representation $L(x, t_1)$ with $t_1 < t_2$ through convolution with a Gaussian kernel $g(\cdot, t_2 - t_1)$:

$$L(x, t_2) = g(\cdot, t_2 - t_1) * L(x, t_1) \qquad (4.7)$$
This is the cascade smoothing property of the scale-space representation. Furthermore, the kernel is separable in N dimensions, such that an N-dimensional Gaussian kernel $g: \mathbb{R}^N \to \mathbb{R}$ can be written as

$$g(\mathbf{x}, t) = \prod_{i=1}^{N} g(x_i, t) \qquad (4.8)$$

which reduces the number of operations needed for computing convolution masks in the spatial domain considerably.
² The two properties making a kernel useful are being unimodal and positive.
³ $g(\cdot, t)$ meaning $g(x, t)\ \forall\, x \in \mathbb{R}$.
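The properties above are easy to verify numerically. The following sketch samples the kernel of Eq. (4.4), renormalises it after truncation, and checks the cascade smoothing property (4.7) on a random 1-D signal (the comparison skips the borders, where truncation and padding effects differ):

import numpy as np

def gaussian_kernel(t, width):
    """Sampled 1-D Gaussian scale-space kernel g(x, t) of Eq. (4.4)."""
    x = np.arange(-width, width + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return g / g.sum()        # renormalise the truncated kernel (Eq. 4.5)

# Cascade smoothing, Eq. (4.7): smoothing to t1 and then by t2 - t1
# equals (up to truncation and border effects) smoothing to t2 directly.
signal = np.random.default_rng(0).random(256)
t1, t2 = 4.0, 16.0
L1 = np.convolve(signal, gaussian_kernel(t1, 30), mode="same")
L2_cascade = np.convolve(L1, gaussian_kernel(t2 - t1, 30), mode="same")
L2_direct = np.convolve(signal, gaussian_kernel(t2, 30), mode="same")
assert np.allclose(L2_cascade[60:-60], L2_direct[60:-60], atol=1e-3)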

4.2.1 Effective Width
In practical applications the Gaussian kernel is calculated only up to a certain distance from its origin, its effective width $x_{max}$. In this work, this distance was determined for each scale as the point $x_{max}(t)$ at which the value of $g(x_{max}(t), t)$ has decayed to a fraction of 0.01 of $g(0, t)$. This value of 0.01 is called the decay $\delta_g$ and is adjustable in the software, although it was mostly left at its default. Thus, the width of the kernel operator can be calculated through:

$$\delta_g = \frac{g(x_{max}, t)}{g(0, t)} = \frac{\frac{1}{\sqrt{2\pi t}}\, e^{-x_{max}^2/2t}}{\frac{1}{\sqrt{2\pi t}}\, e^{0}} = e^{-x_{max}^2/2t} \qquad (4.9)$$

and thus

$$x_{max}(t) = \sqrt{-2\, t \ln \delta_g} \qquad (4.10)$$

which is also the width of the mask used to calculate the kernel.⁴
The width of the kernel is expressed in image coordinates, where the basic unit is one pixel; for relating $x_{max}$ to distances in metres, the resolution of the image has to be taken into account.
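Eq. (4.10) directly gives the mask width for a requested scale; a two-line helper:

import numpy as np

def effective_width(t, decay=0.01):
    """Eq. (4.10): x_max at which g(x, t) falls to 'decay' times g(0, t)."""
    return np.sqrt(-2.0 * t * np.log(decay))

x_max = effective_width(16.0)   # sqrt(-2 * 16 * ln 0.01) ~ 12.1 pixels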

How does convolution with a Gaussian kernel affect the data? Figure 4.1 shows a scale-space representation of random data which has been modulated by a sine. Scale increases from bottom to top:

⁴ In the literature on scale-space, the effective width is often deduced from the thought that the weighted averaging introduced by the Gaussian kernel is similar to measuring the signal at point x through a circular aperture of characteristic length $\sigma = \sqrt{t}$, so for example in Lindeberg, 1994.

Figure 4.1: 1-D Scale Space Representation
Scale increases from 0 (bottom) to 0.8 (top).

Notice how the small-scale random signal becomes less and less important as scale increases; the structure that remains is the larger-scale, sinusoidal variation.

Of course Gaussian filtering in its own right is a well-known technique for de-noising noisy data and nothing new. However, in the context of scale space, the 'noise' is not an unwanted part to be filtered out, but simply a property of the given signal at the scale where it is visible. The scale-space representation is constituted by the whole family of curves, parameterised by t, at different levels of detail.

4.2.2 Extension to 2D
The extension to a higher dimension is straightforward. The image function is extended to $F: \mathbb{R}^2 \to \mathbb{R}$, and the Gaussian kernel becomes

$$g(\mathbf{r}, t) = \frac{1}{2\pi t}\, e^{-|\mathbf{r}|^2/2t}$$

where $\mathbf{r} \in \mathbb{R}^2$ (the normalisation factor follows from the separability property (4.8)). The convolution of F with g is the integral over the whole domain:

$$L(\mathbf{r}, t) = \int_{\lambda \in \mathbb{R}^2} g(\lambda, t) F(\mathbf{r} - \lambda)\, d\lambda$$

The scale-space representation of a 2D image is a 3D space, where the scaled versions of F stack up along the t axis in $L(\mathbf{r}, t)$. Outlines of structures in scale-space appear as upside-down domes or mountains.

4.2.3 Isotropic Diffusion


It is interesting to see how the scale parameter came to be denoted by t. This has historical reasons: it was observed that this smoothing process has a physical counterpart, heat diffusion. It is often mentioned in the literature on scale space that the Gaussian kernel is a solution of the heat diffusion equation (Lindeberg, 1994, pp. 43)

$$\partial_t L = \frac{1}{2} \nabla^2 L$$

with initial condition $L(x, 0) = f(x)$. Indeed, one of the first ideas concerned with abstracting image details goes back to the Perona-Malik filter (P. Perona, J. Malik), whose underlying idea was to take a given signal, let it diffuse through an isotropic medium for a certain time t and then observe the results. Because the linear diffusion process has the disadvantage of dislocating edges, further attempts have been made using a medium which is, for instance, less diffusive at edges (areas of high gradients) than in areas of shallow gradients, in order to preserve the structure of the prevalent edges better. This is called inhomogeneous linear diffusion. Both types, as well as their non-linear siblings, are presented in a compact manner in [4].

4.3 Blobs
4.3.1 Definition
Grayscale imagery is composed of areas of different brightness. Blobs are areas in the image where a desired property remains relatively stable and which are somewhat distinguished from their surroundings. In grayscale images, the two candidates are the bright blob on dark background and its evil twin, the dark blob on bright background. In the case of radar data in the given representation this is particularly easy: we have only bright areas against a dark background, since only the bright areas are of interest.

4.3.2 Edge Detection


A blob in the given data is also determined by an edge, and an edge is determined by an area of high gradient and a zero crossing in the Laplacian. This is best illuminated by a little walk across an edge, coming from an area of relatively homogeneous low intensity (low gradient) and heading for a bright area. In the region of the transition the gradient increases, up to the point in the transition where the change of intensity declines and a relatively homogeneous, albeit brighter, region of intensity is entered. Figure 4.2 illustrates this:

Figure 4.2: Intensity, gradient, Laplacian at edges
Intensity f(x) making a transition from low (left) to high (right) values, together with its gradient and Laplacian.

Figure 4.3: Mexican Hat Laplacian Mask

Notice how the gradient reaches its maximum in the middle of the slope, and observe how the Laplacian changes sign in the process. There are two basic techniques for obtaining the location of edges using derivative operators: gradient maxima and Laplacian zero crossings. In the course of this work, the Laplacian zero crossing was used, approximated by the mask shown in Fig. 4.3, which is a second-order derivative of a Gaussian smoothing operator (see [1], chapter 7), a so-called Mexican Hat operator. Only points with negative Laplacian were considered as candidates for edge points. That way the edge is actually located inside the bright blobs. A demonstration of this can be seen in Fig. 4.4.

Figure 4.4: Laplacian mask, detected edges


Edges detected by the Laplacian mask detector after the data has been put through a low
smoothing Gaussian filter. Edges are already linked and coloured accordingly.

For the following procedure let F be the original image. F is first smoothed using a Gaussian kernel $g(\cdot, t)$⁵ in order to prevent the very noise-sensitive Laplacian from producing spurious responses, resulting in a smoothed image G; then the Mexican Hat edge detection, denoted by MH, is applied. This smoothing is somewhat redundant, since the Mexican Hat operator was constructed with a smoothing property itself, but the results are nonetheless usable.

$$G = g(\cdot, t) * F$$
$$E = MH * G$$

An edge point is every point in E less than zero.
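A sketch of this edge detection step; instead of a single Mexican Hat mask it chains scipy's Gaussian smoothing and discrete Laplacian, which is equivalent up to discretisation, and follows the text in keeping all points with negative response as candidates:

import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def edge_candidates(F, t=2.0):
    """Candidate edge points: negative Laplacian of the smoothed image.

    G = g(., t) * F, E = Laplacian(G); every point with E < 0 is kept,
    which places the detected edge just inside the bright blobs.
    """
    G = gaussian_filter(F.astype(float), sigma=np.sqrt(t))  # sigma = sqrt(t)
    E = laplace(G)
    return E < 0.0     # boolean mask of edge-point candidates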

4.3.3 Edge Linking


The edge points alone are not very useful; they have to be linked into chains enclosing objects. For convenience, all points classified as edge points are taken from the edge detection output and stored in a simple list containing the location of each edge point in image coordinates. A recursive scheme was used to detect closed chains in this list. For each closed chain, a blob object was created.

Let all N points in E satisfying the edge criteria be collected into a list $S = \{n_1, n_2, ..., n_N\}$. Every node $n_i$ is composed of the location in image coordinates and a pointer to the next entry, $n_{i+1}$.⁶
⁵ $g(\cdot, t)$ means a Gaussian convolution kernel with scale t.

35
entry, ni+1 6

Starting with an empty boundary node list b_1, the first node n_0 ∈ S is added to b_1.
Then the immediate 8-neighbourhood of n_0 is searched in S. Every point found to be a
direct neighbour is considered part of the boundary b_1 and added, if it hasn't been
added already. Then this new-found friend is subjected to the same treatment. This process
continues until no more new points can be added to b_1. Afterwards, b_1 is removed from S
and the process starts all over again, this time with b_2, until S is empty. This results in K
closed boundaries:
$$b_1 = \{n_1, n_2, ..., n_{N_1}\}$$
$$b_2 = \{n_1, n_2, ..., n_{N_2}\}$$
$$\vdots$$
$$b_K = \{n_1, n_2, ..., n_{N_K}\}$$

where the sets $b_j$ are disjoint in space and their union is S:

$$S = \bigcup_i b_i$$

For each now-closed boundary $b_j$ a Blob object $B_j$ is created, and the boundary is stored
within it for future use. A sketch of the linking scheme follows.
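The following C sketch assumes edge points are stored in a flat array; the recursion of the original scheme is replaced here by an equivalent explicit stack, and the O(N²) neighbour search is a simplification (a spatial hash would speed it up).

#include <stdlib.h>

typedef struct { int x, y; int label; } Node;

/* Returns 1 if a and b are 8-neighbours (adjacent, not identical). */
static int neighbours(const Node *a, const Node *b) {
    int dx = abs(a->x - b->x), dy = abs(a->y - b->y);
    return (dx <= 1 && dy <= 1) && (dx + dy > 0);
}

/* Link n edge points into closed chains b_1..b_K; returns K. Instead of
 * removing finished boundaries from S, each node carries a label. */
int link_edges(Node *s, int n) {
    int k = 0;
    int *stack = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) s[i].label = -1;
    for (int i = 0; i < n; i++) {
        if (s[i].label >= 0) continue;        /* already part of a boundary */
        s[i].label = k;
        int top = 0;
        stack[top++] = i;
        while (top > 0) {                     /* grow boundary b_k */
            int cur = stack[--top];
            for (int j = 0; j < n; j++) {
                if (s[j].label < 0 && neighbours(&s[cur], &s[j])) {
                    s[j].label = k;           /* new-found friend joins b_k */
                    stack[top++] = j;
                }
            }
        }
        k++;                                  /* b_k finished, start the next */
    }
    free(stack);
    return k;
}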

4.3.4 Holes
As said before, the data under consideration only contains bright Blobs on dark background.
Nevertheless it is quite common to have areas of no signal completely enclosed by areas
bearing significant signal. These spots are called holes in the context of this work, and they
pose a problem: since the edge detection algorithm finds the boundaries between the hole
and the surrounding bright area like any other transition, spurious Blobs are generated. In
order to remove these, each combination of Blobs is checked. Consider two boundaries $b_j$
and $b_k$. If

$$b_j \cap b_k = b_k$$

where ∩ denotes complete geometrical inclusion (every point in $b_k$ is checked whether it is
completely confined within $b_j$; if true, it is added to the result of the operator), then $b_k$ is
considered to be a hole and removed from the list of Blobs. This has to be done since the
following area sampling algorithm would be fooled by holes and run astray. A sketch of one
possible confinement test follows.
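The thesis does not spell out the confinement test itself; the following C sketch therefore uses a crude four-direction enclosure heuristic, which is exact for convex boundaries and only approximate otherwise. All names are illustrative.

typedef struct { int x, y; } Pt;

/* Crude confinement heuristic: p counts as inside boundary b if b has
 * points to its left and right on the same row, and above and below on
 * the same column. */
static int confined(Pt p, const Pt *b, int nb) {
    int l = 0, r = 0, above = 0, below = 0;
    for (int i = 0; i < nb; i++) {
        if (b[i].y == p.y) { if (b[i].x < p.x) l = 1; if (b[i].x > p.x) r = 1; }
        if (b[i].x == p.x) { if (b[i].y < p.y) above = 1; if (b[i].y > p.y) below = 1; }
    }
    return l && r && above && below;
}

/* Returns 1 if boundary bk lies completely inside boundary bj,
 * i.e. bj ∩ bk = bk in the notation above: then bk is a hole. */
int is_hole(const Pt *bk, int nk, const Pt *bj, int nj) {
    for (int i = 0; i < nk; i++)
        if (!confined(bk[i], bj, nj)) return 0;
    return 1;
}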

4.3.5 Area Sampling


In order to obtain the measured values that are actually contained within the found bound-
aries, one must first know where inside actually is. To this end, a gradient walk was applied:
consider a boundary $b_j$. For each point $n_i$ in $b_j$ the gradient in L(·, t), which was used to
obtain the boundaries, is calculated, and a step is taken in that direction. This is repeated
until a point $p_0$ is encountered which is not a boundary point; it is assumed that the inside
is found. From $p_0$ the inside is traversed horizontally, once in the direction of increasing
and once in the direction of decreasing x coordinates, each until a point in $b_j$ is encountered.
En route, for each coordinate in the walk, the according value in F (the original data) is
added to the Blob $B_j$'s area values $A_j$, an image of the same dimensions as F where all
values have initially been set to 0. The resulting image $A_j$ contains only those values from F
which lie inside, but not on, the boundary $b_j$. A sketch of the horizontal traversal follows.
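The traversal for a single row can be sketched in C as follows, assuming the interior seed point (x0, y) has already been found by the gradient walk and boundary membership is available as a bitmap; names and layout are illustrative.

/* Copy values of F strictly between the boundary hits of row y into the
 * Blob's area image A. 'onBoundary' marks boundary pixels of b_j. */
void sample_row(const float *F, float *A, const unsigned char *onBoundary,
                int w, int x0, int y) {
    for (int x = x0; x >= 0 && !onBoundary[y * w + x]; x--)
        A[y * w + x] = F[y * w + x];   /* walk left until the boundary */
    for (int x = x0 + 1; x < w && !onBoundary[y * w + x]; x++)
        A[y * w + x] = F[y * w + x];   /* walk right until the boundary */
}

Covering the whole interior requires repeating this per row (or an equivalent flood fill), which is omitted here.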

4.4 Scale Space Representation in 2D


Since images say more than words, let's take a look at how 2D signals evolve under scale-
space transformations. The sequence shown in Figures 4.5 and 4.6 shows the scale-space
representation of the azimuth scan data from September 8th, 1998, 19:38. The original image
was subjected to thresholding at T_abs = 12.5 dBZ, single bright spots were removed and
the result was projected onto the Cartesian plane at a resolution of 400x400 pixels. For every
subsequent image, the scale parameter t was doubled.

Please observe that the brightness shown has been adjusted to represent the whole range
of values of a Gaussian-blurred image. Using the fixed-value grayscale mapping would
have made the Blobs almost invisible, because the Gaussian kernel not only smoothes the
image but also levels the values down: the higher the scale, the lower the resulting
signal. It is clearly visible how scaling up discards more and more of the internal details of
the signal; at large scales, only a rough description of the original shape remains visible.
The internal scale of the image shown could roughly be estimated to lie around 128.

Figure 4.5: Scale Space Representation 1
Scale-space representation of the azimuth scan, 8th of September 1998, at scales 0 (original
image), 2, 4, 6, 8, 16 and 32.

Figure 4.6: Scale Space Representation 2
The scale-space representation continued for scales 64, 128, 256, 512, 1024 and 2048.
4.5 Blob Detection in Scale-Space Images
The problem posed by images under Gaussian scale-space transformation for detecting ob-
jects is clearly the absence, or massive dislocation, of clean edges. Since the Gaussian blur
tends to smooth the edges out, artificial edges have to be re-introduced. How can this be
done? A simple approach is to subject L(x, t) to a thresholding procedure. Since the
Gaussian kernel g(·, t) tones the values down more and more with increasing scale t, it is a
good idea to use adaptive thresholding. The following series repeats the process of the pre-
vious section on the same data, but this time each slice of the scale-space representation
is subjected to an adaptive thresholding at T_rel = 20%, which means the lowest 20% of
the data are trashed (set to 0). This value will subsequently also be referred to as the
cut-off value. After thresholding, the edge detection introduced in Section 4.3.2 was applied.
See Figures 4.7 and 4.8 for results; a sketch of the thresholding step follows below.
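A minimal C sketch of the cut-off, assuming (since the text does not fully specify it) that the 20% refers to a fraction of the current slice maximum:

/* Adaptive ("cut-off") thresholding at t_rel: values below t_rel times
 * the maximum of the current scale-space slice L are set to 0. */
void adaptive_threshold(float *L, int n, float t_rel) {
    float max = 0.0f;
    for (int i = 0; i < n; i++)
        if (L[i] > max) max = L[i];
    float cut = t_rel * max;
    for (int i = 0; i < n; i++)
        if (L[i] < cut) L[i] = 0.0f;
}

Making the cut relative to the slice maximum compensates for the levelling-down of values with increasing scale described above.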

Observing their scale-space representation, it is clearly visible that the resulting boundaries
settle around the prevalent structures in the original data. The number of detected Blobs
K decreases with increasing scale t, as could be expected.

Figure 4.7: Edge Detection in Scale Space Images 1
Thresholded edge detection at scales 2, 4 and 8. Left: scale-space representation. Right:
resulting boundaries on original data.

Figure 4.8: Edge Detection in Scale Space Images 2
Thresholded edge detection at scales 16, 32 and 64. Left: scale-space representation.
Right: resulting boundaries on original data.
4.6 Automatic Detection of Prevalent Signals
As could be seen in the previous section, an increasing scale parameter t leads to prevalence
of the most significant and dampening of the less significant features. The scale at which
the prevalent features remain while the insignificant ones disappear varies considerably from
image to image; it depends a great deal on the complexity of the scenery. Prevalent, in
the scale-space sense, is always relative to the scale of the image features present. This
means that an approach aiming at a similar level of detail (in scale-space terms) in
subsequent images cannot work properly with a fixed scale. Thus, an automatic process
capable of distinguishing the prevalent from the insignificant Blobs would be highly
desirable. The question is, though: how can prevalent be defined in terms of scale-space?

Consider the following idea: given the fact that (in general) the number of detected
Blobs decreases as the scale parameter t increases, could it be reckoned that Blobs surviving
the upscale process for a given number of repetitions are the prevalent Blobs?

Figure 4.9: Automatic Scale Detection, Original Image

Azimuth scan used for demonstrating automatic scale-space analysis. The data has been
thresholded at T_abs = 12.5 dBZ, contrast-stretched, single-point filtered with a gradient of
25 dBZ/330m and interpolated onto the Cartesian plane.

This idea shall be used for the following procedure (a sketch in code follows below). Starting
with a low scale parameter t_0, the number of Blobs is detected. The scale is increased by a
fixed increment δt, and the number of Blobs found now is compared to the previous number.
This process is repeated until the number of detected Blobs stabilises over N_max iterations.
The parameter N_max determines the scale-space persistence required for any given object
to be classified as prevalent. The resulting automatically selected scale is chosen to be the
scale parameter t of the first slice L(·, t) of the stable series, in order to conserve maximum
detail. The complete set of required parameters thus is the start scale t_0, the scale increment
δt and the scale-space persistence N_max. Blobs considered persistent are thus required to
remain distinguishable over an effective scale difference of N_max · δt.
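In code, the selection loop might look as follows; countBlobs() stands for the whole smooth/threshold/edge-detect/link pipeline and, like all names here, is a hypothetical helper, not the SARTrE API.

/* Automatic scale selection: climb the scale axis until the Blob count
 * has been stable for n_max iterations; return the scale of the first
 * slice of the stable series. */
float select_scale(const float *F, int w, int h,
                   float t0, float dt, int n_max,
                   int (*countBlobs)(const float *, int, int, float)) {
    float t = t0, t_stable = t0;
    int stable = 0;
    int k_prev = countBlobs(F, w, h, t);
    while (stable < n_max) {
        t += dt;                                /* increase scale by dt */
        int k = countBlobs(F, w, h, t);
        if (k == k_prev) {
            if (stable == 0) t_stable = t - dt; /* series starts one step back */
            stable++;
        } else {
            stable = 0;                         /* count changed: start over */
            k_prev = k;
        }
    }
    return t_stable;
}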

Figure 4.9 shows an azimuth scan from July 1999. An extensive signal is present on
the west side; the upper east side is populated by smaller, scattered signals. Figure 4.10
illustrates the automatic upscale process for N_max of 1, 2, 4, 6, 8 and 10. Depending on the
N_max setting, different Blobs 'prevail' or structures merge into larger Blobs, as expected.

Figure 4.10: Automatic Scale Detection Results
Blob boundaries detected at N_max of 1, 2, 4, 6, 8 and 10.
Frequently, repeated convolution with the Gaussian kernel destroys borders between ob-
jects which lie close to each other but are connected by low-intensity areas. In order to
alleviate this effect, a feature called Inprocess cut-off was introduced. It works by threshold-
ing each intermediary scale-space representation before performing the next upscaling step.
In the further course of this thesis, the intermediary cut-off is denoted by T_inp (an adaptive
threshold). The net effect is that boundaries move closer to the local maxima of the scale-space
representation; see Figure 4.11. The Inprocess cut-off should be used with care: if chosen too
high, it results in massive loss of information. A value yielding solid results was found to be
T_inp = 0.1 (10% adaptive threshold).

Figure 4.11: In-Process Cut-Off Results


Nmax = 5. Top left: No Cut-off, Top Right: Tinp = 0.05, Bottom Left: Tinp = 0.1, Bottom
Right: Tinp = 0.2.

Chapter 5

Tracking and Scale Space

Tracking means to extract data about movement from subsequent sets of data. The move-
ment need not be physical movement between two points in time; other parameters changing
between two images may be suitable as well (for example, tracking of objects under scale
transformations).

The speciality of the SARTrE (Scale Adaptive Radar Tracking Environment; the Environment
refers to the reusable software libraries developed for this work) tracking tools lies in the
ability to automatically select features worth tracking in the context of all objects in any
given snapshot, and in the correlation procedure, which takes histograms of Blob content
(signatures) into account. The focus of attention is drawn to the salient image structures by
applying the automatic detection procedure presented in Section 4.6.

There exist a couple of Tracking algorithms based on different principles to obtain infor-
mation about what happened between time t and t + ∆t:

Centroid-Tracking :
can be applied if the trackable data can be decomposed into distinct objects under
some criteria. A centroid - a designated point - is assigned to each object. Subsequent
images are analysed with the goal of finding the same object at its new position, and
the displacement of the object between the two images is estimated as the displacement
of its centroid. Of course, the problem of correlating objects from one image to another
depends on the nature of the image or object and the criteria used. A certain
grey value, a geometrical shape or another suitable form of signature may be used.
Often, the search is narrowed by some a-priori or otherwise obtained information
about maximum possible object velocity and object size, restricting the search
window in a subsequent image. It was first applied in meteorology by Barclay and
Wilk (1970). A recent adoption of this form of tracking is the Trace3D algorithm,
developed in Karlsruhe by J. Handwerker, 2002 [9].

Statistical Cross-Correlation :
is not concerned with individual objects as such, but with the extraction of flow patterns
in image series. This is achieved by defining a box size and statistically correlating all
possible boxes at time t with all possible boxes at time t + ∆t. The boxes obtaining the
highest correlation are connected. The resulting field of displacement vectors depends on
the box size as well as on the data. Statistical box correlation suffers from ambiguities
inherent in the correlation process and is often highly sensitive to changes in box
size. For an illustration of the ambiguity problem, see E. Heuel, 2004 [14]. An example
of this type is the TREC algorithm (Rinehart, 1981) [10], which was improved by
L. Li, W. Schmid and J. Joss, 1994 (COTREC) [11] through directional post-processing,
applying the continuity equation to the vector field delivered by TREC; the results
were used for Nowcasting. This was the basis for the improved algorithm developed
at the ETH Zurich by S. Mecklenburg, 2000 [12].
Tracer Tracking :
A special form of semi-automatic Tracking is applied when the object under observation
exhibits little clue as to its motion, for instance when determining flow patterns and veloci-
ties in fluids. In this case, a tracer is picked or introduced and the motion of the tracer
is tracked instead. An example is the estimation of rotational velocities in a tornado
by tagging debris carried by it and following it through a series of high-resolution film
frames. In the context of radar meteorology, this form of indirect tracking has no real
significance.

For this work, tracking Blob centroids seemed the natural approach to tracking precipitation.
As signature, the histogram of reflectivity within each Blob was chosen. The correlation
was performed using a weighting scheme including spatial displacement, histogram size and
histogram shape (via Kendall's Tau correlation).

5.1 Histogram
A histogram of reflectivity values contains the counts of each value from the range of (dis-
crete) possible values. In our case, the range was chosen to be the natural range present
in the data, where values range over [0..255]. Each Blob area A was scanned and the found
values counted up (a sketch follows below). As an example, the histograms of the Blobs
detected in Figure 5.1 are shown in Figure 5.2.
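The counting step itself is simple; a C sketch, assuming the Blob's area values are available as bytes:

/* Build the reflectivity histogram (signature) of a Blob from its area
 * image A: one count bin per possible byte value. */
void blob_histogram(const unsigned char *A, int n, int hist[256]) {
    for (int v = 0; v < 256; v++) hist[v] = 0;
    for (int i = 0; i < n; i++) hist[A[i]]++;
    /* Class 0 collects all pixels outside the Blob; the histogram size
     * |H| used later sums over classes 1..255 only. */
}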

Figure 5.1: Histograms, Detected Blobs
Azimuth Scan, July 12th 1999, 13:01. Four distinct Blobs detected, with identifiers
#1, #2, #3 and #4.

5.2 Centroid
5.2.1 Geometric Centre of Boundary
When saying that the centroid of the object is used for determining its displacement, the
question was left open what the centroid actually is. At first glance, the geometric centroid
of the boundary points comes to mind. Assume a boundary b from a Blob B containing N
points $\mathbf{x}_i = (x_i, y_i)$:

$$x_{centre} = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad (5.1)$$

$$y_{centre} = \frac{1}{N}\sum_{i=1}^{N} y_i \qquad (5.2)$$

This has a big drawback: since Blobs tend to change in shape, yet may stay relatively intact
in terms of overall size and (foremost) position, the geometric centre of the boundary
points may yield spurious movement. Although that option was left in the software for
pedagogic purposes, it is not a good choice. Two other candidates proved to be a lot more
stable:

Figure 5.2: Histograms

Histograms N(Z) over Z [byte value] of the four Blobs in Fig. 5.1. Top left: #1, top right: #2,
bottom left: #4, bottom right: #3.

5.2.2 Centre of Reflectivity


To give the centroid a bit more anchoring in the actual data, the reflectivity can be
viewed as a distribution of 'mass' over the area of the Blob. Having the reflectivity (as
measured by F(x, y) and also present in A for each Blob separately) play the role of density
or mass, it is possible to calculate a centroid based on the well-known centre-of-mass equation
for mass-point distributions. Assume N values in the area A(x) of a Blob B at locations
$\mathbf{x}_1 ... \mathbf{x}_N$, where $\mathbf{x} = (x, y) \in \mathbb{R}^2$:

$$A_{sum} = \sum_{i=1}^{N} A(\mathbf{x}_i) \qquad (5.3)$$

$$x_{centre} = \frac{1}{A_{sum}}\sum_{i=1}^{N} x_i\, A(\mathbf{x}_i) \qquad (5.4)$$

$$y_{centre} = \frac{1}{A_{sum}}\sum_{i=1}^{N} y_i\, A(\mathbf{x}_i) \qquad (5.5)$$

where $A_{sum}$ is the sum of reflectivity in the area and $A(\mathbf{x}_i)$ the value measured at each
point $\mathbf{x}_i$. This way the centroid follows the distribution of reflectivity, wherever the boundary
might be.
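A direct transcription of Eqs. 5.3-5.5 into C, assuming the Blob's area image A is stored row-major; names and layout are illustrative:

/* Centre of reflectivity: reflectivity plays the role of mass.
 * Returns 0 if the Blob area carries no signal. */
int centre_of_reflectivity(const float *A, int w, int h,
                           float *xc, float *yc) {
    float sum = 0.0f, sx = 0.0f, sy = 0.0f;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float a = A[y * w + x];
            sum += a;                 /* A_sum (Eq. 5.3) */
            sx  += x * a;             /* running sum of x_i * A(x_i) */
            sy  += y * a;
        }
    }
    if (sum <= 0.0f) return 0;
    *xc = sx / sum;                   /* Eq. 5.4 */
    *yc = sy / sum;                   /* Eq. 5.5 */
    return 1;
}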

5.2.3 Scale Space Centre


With the introduction of the scale-space methods, another possibility of marking the centroid
appeared, which is closely linked to the centre of reflectivity. Instead of basing the weighting
in Eqs. 5.3-5.5 on the originally sampled reflectivity values, the scale-space centroid picks the
scale-space representation L(·, t) from the edge-detection stage. The same equations are
applied, only this time the values are sampled from the Gaussian scale-space representation
L(x, t) instead of A(x) directly:

$$L_{sum} = \sum_{i=1}^{N} L(\mathbf{x}_i, t) \qquad (5.6)$$

$$x_{centre} = \frac{1}{L_{sum}}\sum_{i=1}^{N} x_i\, L(\mathbf{x}_i, t) \qquad (5.7)$$

$$y_{centre} = \frac{1}{L_{sum}}\sum_{i=1}^{N} y_i\, L(\mathbf{x}_i, t) \qquad (5.8)$$

Figure 5.3: Centroids
Azimuth Scan, July 7th 1999, 20:31. Different methods to obtain the centroid. Top left:
data with boundaries. Top right: geometrical. Bottom left: reflectivity. Bottom right:
scale-space.

5.3 Correlation
Tracking consists of recording the displacements and histogram developments of Blobs
through time. At time t_0 there is of course nothing to match against, and the current
Blobs are provided with unique IDs and stored in a collection. At all subsequent times,
however, it is correlation's task to transfer the IDs from old Blobs to the new Blobs identified
as their successors. The Tracks are then based on subsequent Blobs with the same ID.

Consider two images at two different points in time, t_1 and t_2 = t_1 + ∆t: F(t_1) and F(t_2),
also called snapshots. A critical time difference can be set, which determines the maximum
time between two snapshots for which correlation is attempted. If the time difference ∆t
between the two snapshots exceeds ∆t_max, the correlation is omitted and the new Blobs
simply replace the previous Blobs, being assigned entirely fresh IDs. This is useful in
situations with fast-moving objects and sparse data: in such situations it is best to lift the
pencil and start over, instead of producing errors in the resulting tracks.

Assume ∆t is within reasonable limits and the snapshots have yielded two sets of Blobs,
$B^{prev}$ and $B^{new}$. For each new Blob $b_i^{new} \in B^{new}$ a table is calculated, which
contains a set of values with respect to each old Blob $b_j^{prev} \in B^{prev}$:

centroid displacement $d_R$ :
This is simply the distance between the centroids of $b_i^{new}$ and $b_j^{prev}$ in metres.

displacement correlation value $\tau_R$ :
After all displacements have been calculated, they are normalised by the maximum
displacement value found in all correlations and fed into a complementary Gaussian
error function, resulting in values nearer to 1 the closer the argument gets to 0. The
resulting value ranges in ]0..1] and is named $\tau_R$.

histogram size difference $d|H|$ :
|H| is defined as the number of non-zero values that went into the histogram, that is,
simply the sum of all counts for all classes except class 0. The difference between the
histogram sizes, $d|H|$, is calculated for each pair $b_i^{new}$ and $b_j^{prev}$.

histogram size correlation $\tau_H$ :
is obtained by normalising the differences $d|H|$ with the highest present difference and
feeding this value into the complementary Gaussian error function again. As usual, this
yields a value which approaches 1 as $d|H|$ approaches 0. This value is called $\tau_H$.

histogram shape correlation $\tau_K$ :
The Kendall rank correlation is a statistical correlation suitable for data which has to
satisfy only one criterion: it must be rankable. The ranks are then correlated in categories
of concordant or discordant alone. No assumption about the underlying distribution is
made and none of its parameters are estimated (non-parametric correlation). Kendall's
Tau is described in Numerical Recipes in C [7], chapter 14. Basically, the correlation
compares data by counting the occurrences of higher in rank (concordant, aka con),
lower in rank (discordant, aka dis) or equal (a tie). If the tie occurs in x, the count goes
to an extra counter ($extra_x$); if it occurs in y, it is an $extra_y$. If the tie occurs in
both, it is not counted at all.
The basic formula to calculate Kendall's Tau according to Numerical Recipes in C [7] is:

$$\tau_K = \frac{con_{all} - dis_{all}}{\sqrt{con_x + dis_x + extra_x}\,\sqrt{con_y + dis_y + extra_y}} \qquad (5.9)$$

How does this apply to the histograms? Each histogram consists of value counts ($y_i$)
in the 256 classes of possible values ($x_i$). In x every value will be a tie, since all classes
are present in both histograms at all times (by construction). This leaves only the $y_i$'s
of the two histograms of $b_i^{new}$ and $b_j^{prev}$ to be compared, which are the counts
for the classes $x_i$, and these usually differ. Using Kendall's Tau yields a parameter
which is not bound to the absolute numerical values of the compared histograms, but
merely to their difference in shape. $\tau$ ranges from −1 (completely anti-correlated) to +1
(completely correlated). A sketch of this computation on histograms is given after
Eq. 5.11 below.

coverage, previous by new :
This value is not used for correlation, but for determining merges and splits (see below).
Consider two arbitrary Blobs $b_i$ and $b_j$ and their respective areas $A_i$ and $A_j$. Let the
coverage operator $\sqsubseteq$ (read: covered by) be defined as:

$$b_i \sqsubseteq b_j = \frac{|\{(x, y) \in A_i : A_i(x, y) > 0 \wedge A_j(x, y) > 0\}|}{|\{(x, y) \in A_i : A_i(x, y) > 0\}|} \qquad (5.10)$$

or in human-readable form: what percentage of the area covered by $b_i$ is covered by
$b_j$ as well? Clearly, if that value reaches 1, $b_i$ is completely covered by $b_j$; if the value
is 0, they are completely distinct (in terms of covered ground). This coverage value for
the current pair is computed as $b_j^{prev} \sqsubseteq b_i^{new}$ and used to check for merges.

coverage, new by previous :
This is just the same operator applied in reverse order, $b_j^{new} \sqsubseteq b_i^{prev}$, and it is
used for detecting splits.

When all correlative values of all possible pairs have been calculated, the values $\tau_R$, $\tau_H$
and $\tau_K$ are summed up with weights in order to obtain an overall correlation value for each
pair:

$$\tau_j^i = w_R\,\tau_{R\,j}^i + w_H\,\tau_{H\,j}^i + w_K\,\tau_{K\,j}^i \qquad (5.11)$$

where the subscript index j denotes the new and the superscript index i the previous Blob
involved. The purpose of the weights $w_R$, $w_H$ and $w_K$ is to provide a device to put more
emphasis on one or another aspect during operation. For most parts of the work they were
all set to 1, but in some situations the tracking accuracy could be improved, depending on
the situation in the data sets, by putting more weight on one or the other. By setting one
of the weights to zero, it is even possible to eliminate the corresponding aspect from the
tracking completely. Assuming all weights at their default value 1, the overall correlation
index ranges from −1 (totally anti-correlated Kendall-$\tau$, no spatial or histogram size
correlation) to +3 (perfect match). The following procedure needs no adjustment when the
weights are changed, because it works on a strictly relative principle.
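The following C sketch computes Eq. 5.9 on two histograms and the weighted sum of Eq. 5.11. The O(256²) pair loop follows the counting description above rather than the faster formulations in [7]; all names are illustrative assumptions.

#include <math.h>

/* Kendall's Tau over the 256 histogram classes of two Blobs. Only the
 * counts are compared pairwise, since the classes themselves are ties
 * by construction. */
double kendall_tau(const int h1[256], const int h2[256]) {
    long con = 0, dis = 0, extra1 = 0, extra2 = 0;
    for (int i = 0; i < 256; i++) {
        for (int j = i + 1; j < 256; j++) {
            int d1 = h1[i] - h1[j], d2 = h2[i] - h2[j];
            int s1 = (d1 > 0) - (d1 < 0);      /* sign comparison avoids */
            int s2 = (d2 > 0) - (d2 < 0);      /* any product overflow   */
            if (s1 * s2 > 0) con++;            /* concordant pair */
            else if (s1 * s2 < 0) dis++;       /* discordant pair */
            else if (d1 != 0) extra1++;        /* tie in h2 only */
            else if (d2 != 0) extra2++;        /* tie in h1 only */
            /* a tie in both is not counted at all */
        }
    }
    double n1 = sqrt((double)(con + dis + extra1));
    double n2 = sqrt((double)(con + dis + extra2));
    return (n1 > 0.0 && n2 > 0.0) ? (double)(con - dis) / (n1 * n2) : 0.0;
}

/* Weighted overall correlation of one Blob pair (Eq. 5.11). */
double overall_tau(double tauR, double tauH, double tauK,
                   double wR, double wH, double wK) {
    return wR * tauR + wH * tauH + wK * tauK;
}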

The actual matchmaking is made by traversing the $\tau_j^i$ in descending order and pairing
the Blobs $b_j^{new}$ with $b_i^{prev}$ accordingly. If $b_i^{prev}$ was already matched to a new
Blob, the next lower unmatched $\tau_j^i$ is chosen. Pairing means to assign the ID of
$b_i^{prev}$ to $b_j^{new}$. (Matchmaking, according to Webster's Revised Unabridged Dictionary
of 1913: "busy in making or contriving marriages"; I hear it is still particularly alive in some
areas of Ireland, where it is considered an honest pastime for elderly folk. Rem. of the
Author.) Before a match is made official, a couple of constraints have to be obeyed; a sketch
of the check follows the list:

maximum velocity $v_{abs}^{max}$ :
The value $v_{abs}^{max}$ is one fixed parameter of the Tracking process, which is mandatory.
It limits the displacement of the centroid in the time ∆t between the two images. Since
that time isn't always the same, a simple maximum-range constraint wouldn't work.
If the velocity resulting from the displacement of the centroids of two Blobs which
were matched by the correlation exceeds $v_{abs}^{max}$, the match is rejected and the
new Blob is given a fresh ID.

average velocity $v_{av}$ :
When entering a new Tracking sequence, $v_{av}$ is set to $v_{abs}^{max}$. Subsequently, $v_{av}$ is
calculated as the mean value of the detected velocities greater than 0. The constraint
resulting from this is determined by a factor $c_{av}$ such that $v_{av}^{max} = c_{av} \cdot v_{av}$. This
leaves room for variation of the velocity up to the factor $c_{av}$ above the mean velocity of
the previous snapshot. If $v_{av}^{max}$ is exceeded, the match is rejected and the new Blob is
given a fresh ID.
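A minimal C sketch of the two velocity checks, assuming velocities are obtained as centroid displacement over time difference; all names are illustrative.

/* Constraint check for a candidate match: the implied velocity must stay
 * below both the fixed absolute maximum and c_av times the running mean
 * velocity. dR is the centroid displacement in metres, dt in seconds. */
int match_allowed(double dR, double dt,
                  double v_abs_max, double v_av, double c_av) {
    if (dt <= 0.0) return 0;
    double v = dR / dt;              /* implied velocity in m/s */
    if (v > v_abs_max) return 0;     /* absolute maximum exceeded */
    if (v > c_av * v_av) return 0;   /* too far above the recent mean */
    return 1;                        /* match may be made official */
}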

After the matches have been made, they still have to be validated in the light of yet
another aspect, which is concerned with the development of Blobs from and into one another
over time. A situation frequently arising in radar data is the merge of several previously
distinct Blobs into one new Blob, or the split of one previous Blob into several Blobs in the
succeeding snapshot. This poses a problem: if the correlation indicates (and it might well
do) that a couple of participants in the merge or split match, and the velocity constraints
are observed, then the resulting centroid displacement will be wrong in these cases. The
method developed here to handle these problems is based upon the previously introduced
coverage operator $\sqsubseteq$.
Merges :
A merge is defined as a situation where the area of multiple previous Blobs is covered
to a certain degree by the same newly detected Blob. The coverage of every previous
Blob $b_i^{prev}$ by every new Blob $b_j^{new}$ is calculated. If that coverage exceeds a pre-set
threshold $cov_{crit}$, the old Blob is added to a list of candidates $C_j^{merge}$ for a merge into
the new Blob:

$$b_i^{prev} \sqsubseteq b_j^{new} > cov_{crit} \longrightarrow C_j^{merge}\ {+}{=}\ b_i^{prev}. \qquad (5.12)$$

If, at the end of comparing all previous Blobs with the new Blob $b_j^{new}$, $C_j^{merge}$ contains
more than one Blob from the previous image, a merge is assumed. In that case,
the matching (if any) of Blob $b_j^{new}$ is undone and it is given a fresh ID.

Splits :
The reverse situation arises when a Blob $b_i^{prev}$ from the previous image splits into
multiple Blobs $b_j^{new}$ in the recent image. In this case, the same procedure is applied
in reverse. For each old Blob, the coverage with every new Blob is calculated:

$$b_j^{new} \sqsubseteq b_i^{prev} > cov_{crit} \longrightarrow C_i^{split}\ {+}{=}\ b_j^{new}. \qquad (5.13)$$

Again, if the number of Blobs found in $C_i^{split}$ exceeds 1, a split is assumed to have taken
place. In that case, all the new Blobs in the split list are given new IDs, effectively
undoing all matches already made with those.
A critical coverage value $cov_{crit}$ = 0.3 was found to be sufficient in all situations considered
during the course of this work. Ideally, the coverage would take into account the individual
size of the participating objects as well as the overall velocity sensed in past scans. The
presented method works quite well, but leaves some room for improvement. A sketch of the
coverage computation follows.
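In C, the coverage operator (Eq. 5.10) and the merge test (Eq. 5.12) might be sketched as follows, assuming area images of identical dimensions; the split test is the same with the operands swapped. Names and layout are illustrative.

/* Coverage operator: fraction of the pixels of area image Ai that also
 * carry signal in Aj. A pixel belongs to a Blob where its value is > 0. */
double coverage(const float *Ai, const float *Aj, int n) {
    int both = 0, own = 0;
    for (int k = 0; k < n; k++) {
        if (Ai[k] > 0.0f) {
            own++;
            if (Aj[k] > 0.0f) both++;
        }
    }
    return own > 0 ? (double)both / (double)own : 0.0;
}

/* A merge into new Blob j is assumed when more than one previous Blob
 * covers it beyond cov_crit; prevA holds nPrev area images of size n. */
int is_merge(const float **prevA, int nPrev, const float *newA,
             int n, double cov_crit) {
    int candidates = 0;
    for (int i = 0; i < nPrev; i++)
        if (coverage(prevA[i], newA, n) > cov_crit) candidates++;
    return candidates > 1;
}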

If a pairing from the correlation made it through the constraint and merge/split facilities,
it is assumed valid and stored. The storing happens in a dedicated object, which
creates separate lists of subsequent Blobs with the same IDs. A Track is generated from
this archive by traversing the stored Blobs for each ID in the order of their time-stamps and
connecting the centroids.

5.4 Tracking Output
The results of a run on a series of images can be exported into a file which contains an
entry for each Track, consisting of a list of all nodes in that track. Each line in a Track's
node list contains the following fields, separated by whitespace and formatted according to
the UNIX printf convention (see man printf on a UNIX box):

Table 5.1: SARTrE Output Format of Tracks

Parameter               Format                  Description
Date and Time           dd.mm.yy HH:SS          Time format in usual UNIX style.
Centroid Position       at[%5.2f,%5.3f] km      Cartesian distances from the radar site in x and y.
Velocity                v=%4.2f m/s             Average velocity for each time-slice after the first.
Histogram Size          |H|=%5d                 Size of the histogram (see Section 5.1).
Maximum Reflectivity    Z max=%3.1f dBZ         Highest reflectivity of the Blob.
Average Reflectivity    Z av=%3.1f dBZ          Average reflectivity of the Blob.

5.5 Visualisation of Tracking Data
Tracks were visualised in the following manner: a hollow square is a starting point, a filled
triangle an end point and a filled disk an intermediary node. These marked points are
connected by straight lines, practically imposing a linear fit. Two modes of display are
possible:

Table 5.2: Visual Options for Tracks

All             For every snapshot, all Tracks detected in the Run so far are shown.
Current Tracks  Only those Tracks are shown which belong to Blobs visible in the
                current snapshot.

Background images can be chosen from the following selection:

Table 5.3: Visual Options for Background

Nothing means all data is plotted against a plain, dark background.


Raw shows images filtered in the plain polar stage by thresholding,
single bright-points and cluttermap correction projected onto
the Cartesian plane.
Filtered shows the image after passing through the filters in the Cartesian
representation: Neighbourhood Averaging and De-speckle (if enabled).
Scale Space shows the current Scale-Space Representation after Adaptive
Thresholding (’cut-off’ ).
Orography depicts a height profile of the area around the radar site within
the range of the current scan type as grayscale values. The data
was derived from the GTOPO30 data set. Higher altitudes appear brighter.

Further image elements are the type of scan and the time the scan was obtained in
GMT in the lower left corner, a spatial measure indicating 10 km in the top left corner and,
if applicable, the scale parameter used in the lower right-hand corner. (Thanks to Dirk
Meetschen from the Meteorological Institute in Bonn for providing data and software for
Orography.)

5.6 Estimation of Quality, False Alarm Rates
For an estimation of the quality of the Tracking, a number of criteria were defined. They
were applied by a manual inspection of the results of Tracking Runs, time-slice per time-slice.

Overall number of Tracks, $K_{all}$ :
Simply the number of all detected Tracks over the duration of the run. Tracks are
denoted as $T_i$, where $i \in [1..K_{all}]$. The number of nodes per Track is denoted
by $|T_i|$.

Overall number of Track Segments, $S_{all}$ :
If a Track contains K Blobs (aka nodes), it contains K − 1 segments, where a segment
means a line drawn between the centroids of two subsequent Blobs in the Track. The
overall number of segments in a complete run thus is

$$S_{all} = \sum_{i=1}^{K_{all}} (|T_i| - 1) = \left(\sum_{i=1}^{K_{all}} |T_i|\right) - K_{all} \qquad (5.14)$$

Number of Spurious Tracks, $K_{spur}$ :
Spurious Tracks are Tracks which contain obviously wrong segments. This is, of
course, a somewhat subjective decision, but from the cases presented later it should
become clear what is meant: spurious Tracks are Tracks containing segments where a
mismatch has occurred.

Number of Spurious Segments, $S_{spur}$ :
Since Tracks declared as spurious quite often contain a lot of correct segments as well,
it is useful to evaluate the number of spurious segments separately. This is also done
with a view to a (not yet implemented) directional post-processing of Tracks, which
would dissect a Track containing wrong segments into new Tracks without them. $S_{spur}$
sums up the spurious segments from all found Tracks.

False Alarm Rate for Tracks, $FAR_{tracks}$ :
$FAR_{tracks}$ is calculated as the ratio of spurious to correct Tracks:

$$FAR_{tracks} = \frac{K_{spur}}{K_{all} - K_{spur}} \qquad (5.15)$$

False Alarm Rate for Segments, $FAR_{segments}$ :
$FAR_{segments}$ is calculated as the ratio of spurious to correct segments:

$$FAR_{segments} = \frac{S_{spur}}{S_{all} - S_{spur}} \qquad (5.16)$$

Chapter 6

Case Studies

6.1 Tracking at Fixed Scale


For a first look at the results of Tracking alone, a fixed scale was chosen. The following sec-
tion illustrates how Tracking looks in this implementation for a series of scans from July 7th,
1999. The cluttermap used was compiled from the scans at 4:31, 4:41 and 7:01 (GMT+2).
The scans were thresholded at 12.5 dBZ, subjected to bright-spot filtering at 25 dBZ/500m
and de-speckled using a radius of 11 pixels and a minimum coverage of 10%.

The sum of all Tracks detected on that day is shown in Figure 6.1. Obviously, some of
the detected Tracks are not correct. Since no directional smoothing has been applied, errors
of that sort can't be detected yet; directional smoothing is a feature yet to be implemented.
Manually removing the obviously wrong Tracks leads to Figure 6.2. The removed spurious
Tracks are depicted in Figure 6.3.

Cartesian Resolution        300x300
Thresholding                T_abs = 12 dBZ
Bright Spot Filtering       20 dBZ/pixel
Cluttermap Scans            July 7th 1999, 4:31, 4:41, 7:01 (GMT)
Cluttermap Correction       P_crit = 0.75, K = 20, M = 3
Despeckling                 r = 10, min. coverage 10%
Scale Space Representation  t = 8, cut-off T_rel = 20%, δg = 1%
Blob Detection              min. |H| = 100, cov_crit = 30%
Tracking Constraints        v_max = 20 m/s, c_av = 2.0, no ∆t_max constraint
Depicted Info               IDs, Centroids, Boundary, Current Tracks
Background Image            Raw Mode

Table 6.1: Parameters for Fixed Scale Run


July 7th, 1999, 2:00 GMT+2 - July 8th, 1:41 GMT+2

Figure 6.1: Fixed Scale, All Tracks
All Tracks detected July 7th '99, 2:00 GMT+2 - July 8th, 1:41 GMT+2.

Figure 6.2: Fixed Scale, Spurious Removed
Tracks remaining after removal of the spurious Tracks, July 7th '99, 2:00 GMT+2 - July
8th, 1:41 GMT+2.

Figure 6.3: Fixed Scale, Spurious Tracks
The spurious Tracks detected July 7th '99, 2:00 GMT+2 - July 8th, 1:41 GMT+2.
The following table lists the spurious Tracks found in the run:

ID Color Start End |T | Spurious Segments


40 blue 13:06 14:11 10 1
43 gold 13:16 13:36 3 2
62 red 14:16 14:46 5 1
97 green 16:31 18:46 20 1
129 green 17:46 19:11 12 1
180 blue 20:11 20:46 6 1
188 blue 21:11 21:36 4 1

Table 6.2: List of Spurious Tracks for Fixed Scale Run

And here is a quality estimate according to the quality criteria defined in Section 5.6:

Kall 102
Kspur 7
FARtracks 0.0737 (7.37%)

Sall 384
Sspur 8
FARsegments 0.0213 (2.13%)

Table 6.3: Quality Estimate for the Fixed Scale Run

This clearly indicates that the algorithm's estimated precision suffers significantly when all
information from Tracks containing spurious segments is thrown away, and that directional
post-processing would be strongly advisable for unsupervised operation.

What happened at the moments where the correlation was wrong? It might be instruc-
tive for the understanding of the algorithm to take a closer look. As an example, consider
the wrong matching for the Blob with the ID #43 in the step from 13:16 to 13:31. The
individual situations before and after the wrong matching are shown in Figures 6.4 and 6.5.

Figure 6.4: Spurious Track Analysis, Before Mismatch

Spurious Track for Blob #43, situation before the mismatch, July 7th '99, 13:16 GMT+2.

Figure 6.5: Spurious Track Analysis, After Mismatch
Spurious Track for Blob #43, situation after the mismatch, July 7th '99, 13:31 GMT+2.

Let's take a peek at the correlation details to find out what caused the mismatch. The
correct match for #43 would have been #45. Remember that for each detected Blob in the
new image, the full set of correlations is calculated with respect to each Blob in the previous
image. Every new Blob is given a temporary ID, which simply ranges from #0 to #K−1,
where K is the number of new Blobs.

ID dR[m] τR dH τH τK τsum P vN N vO
32 56871.2 0.288063 0.5865 0.406849 0.6124 1.3073 0.00 0.00
42 69790.4 0.192330 0.2837 0.688276 0.3090 1.1896 0.00 0.00
43 4344.2 0.935322 0.1844 0.794264 0.3738 2.1034 0.00 0.00
36 63195.5 0.237793 0.0355 0.960003 0.3832 1.5810 0.00 0.00
41 11537.2 0.829361 0.0567 0.936047 0.3533 2.1187 0.00 0.00
37 39326.7 0.462558 0.0903 0.898358 0.3053 1.6662 0.00 0.00
40 6134.5 0.908766 0.2879 0.683918 0.4205 2.0132 0.00 0.00
44 19427.7 0.716666 0.2034 0.773625 0.3814 1.8717 0.00 0.00

Table 6.4: Correlation Table for new Blob #1


Correlating new Blob #1 (histogram size 141, center at [66,64] (image coordinates))

The Blob from the new set that actually received the match, however, was new Blob #0,
whose correlation table looks like this:

ID dR[m] τR dH τH τK τsum P vN N vO
32 54366.6 0.309824 0.6950 0.325657 0.4436 1.0790 0.00 0.00
42 75705.5 0.157299 0.0288 0.967460 0.1350 1.2598 0.00 0.00
43 7865.3 0.883189 0.0957 0.892396 0.3930 2.1686 0.00 0.00
36 69863.2 0.191867 0.2353 0.739318 0.3667 1.2979 0.00 0.00
41 10380.7 0.846241 0.2180 0.757807 0.3822 1.9863 0.00 0.00
37 39188.8 0.464129 0.3290 0.641701 0.3671 1.4729 0.00 0.00
40 13366.7 0.802823 0.4747 0.501969 0.3491 1.6539 0.00 0.00
44 27158.1 0.611926 0.4124 0.559716 0.3239 1.4955 0.00 0.00

Table 6.5: Correlation Table for new Blob #0


Correlating new Blob #0 (histogram size 104, centre at [43,72] (image coordinates))

Why was new Blob #0 favoured over new Blob #1? The spatial correlation for #0 is
0.883189, but for #1 it is 0.935322. The histogram size correlation with #0 is 0.892396, with
#1 it is 0.794264. The histogram shape correlation is 0.3930 for #0 and 0.3738 for #1. This
is a case where the spatial correlation clearly indicates the correct match of old #43 with
new #1, but the match is rejected because the histogram size and shape correlations outweigh
the spatial one.

This situation was presented in detail to show the use of the weights. In situations where
a lot of similarly sized objects are present in a small area of the image, it makes sense to
increase the weight of the spatial correlation, $w_R$, in order to minimise errors. However,
there is another method to optimise the tracking in unsupervised mode, using a scale-space
approach; it is presented in the following section.

6.2 Tracking at Automatically Selected Scale
The following procedure differs in but one aspect from the method in Section 6.1: the
scale parameter t for the choice of the scale-space representation L(·, t) of F(x) used for
Blob detection is not pre-set and held constant during a Tracking Run, but automati-
cally determined for each slice, based on the prevalent-feature detection process presented
in Section 4.6. Consequently, some tracks of objects will be missed if they are insignificant
in the context of the prevalent signals present in the data. Why would that be useful? The
main motivation for this step is the idea that a Tracking algorithm which is supposed to
deliver stable results in an unsupervised situation will improve its overall performance if it
focusses on the most significant features. In order to prevent too ruthless upscaling and the
consequent loss of information, the upscaling process should be undertaken carefully.

The following results were produced using the same data as in Section 6.1. All resulting
Tracks are shown in Figure 6.6. Manually removing the obviously wrong Tracks again
leads to Figure 6.7. The removed spurious Tracks are depicted in Figure 6.8.

Cartesian Resolution        300x300
Thresholding                T_abs = 12 dBZ
Bright Spot Filtering       20 dBZ/pixel
Cluttermap Scans            July 7th 1999, 4:31, 4:41, 7:01 (GMT)
Cluttermap Correction       P_crit = 0.75, K = 20, M = 3
Despeckling                 r = 10, min. coverage 10%
Scale Space Representation  t_0 = 8, δt = 1, N_max = 10, cut-off T_rel = 20%, δg = 1%
Blob Detection              min. |H| = 100, cov_crit = 30%
Tracking Constraints        v_max = 20 m/s, c_av = 2.0, no ∆t_max constraint
Depicted Info               IDs, Centroids, Boundary, Current Tracks
Background Image            Raw Mode

Table 6.6: Parameters for Automatic Scale Run


July 7th, 1999, 2:00 GMT+2 - July 8th, 1:41 GMT+2

Figure 6.6: Automatic Scale, All Tracks
All Tracks detected July 7th '99, 2:00 GMT+2 - July 8th, 1:41 GMT+2.

Figure 6.7: Automatic Scale, Spurious Tracks Removed
Tracks remaining after removal of the spurious Tracks, July 7th '99, 2:00 GMT+2 - July
8th, 1:41 GMT+2.

Figure 6.8: Automatic Scale, Spurious Tracks
The spurious Tracks detected July 7th '99, 2:00 GMT+2 - July 8th, 1:41 GMT+2.
The following table lists the spurious Tracks found in the run:

ID Color Start End |T | Spurious Segments


175 gold 21:16 21:32 2 1
174 red 21:16 21:32 2 1

Table 6.7: List of Spurious Tracks for Automatic Scale Run

And again a quality estimate according to the quality criteria defined in Section 5.6:

Kall 89
Kspur 2
FARtracks 0.0230 (2.3%)

Sall 313
Sspur 2
FARsegments 0.0064 (0.64%)

Table 6.8: Quality Estimate for the Automatic Scale Run

The improvements in tracking quality are significant: $FAR_{tracks}$ is less than a third of
that in the fixed-scale case, and the segment point of view agrees, with $FAR_{segments}$
dropping well below 1%.

6.3 Tracking at higher velocities
The case presented from July 7th '99 was comparatively easy, since the tracked objects were
well distinguished and the wind speeds on that day relatively low, which makes tracking
easier. The last case presented is therefore a day from autumn '99, with high wind speeds and
objects closer together. To prevent the algorithm from merging too many objects, an in-process
cut-off was used during the automatic scale detection phase (see Section 4.6). The day analysed
is September 28th, 1999. Figure 6.9 shows the sum of all tracks detected. The detected wind
velocities were on average 12-15 m/s.

Cartesian Resolution        300x300
Thresholding                T_abs = 12 dBZ
Bright Spot Filtering       20 dBZ/pixel
Cluttermap Scans            Sep. 28th 1999, 2:06, 2:11 (GMT+2)
Cluttermap Correction       P_crit = 0.75, K = 20, M = 3
Despeckling                 r = 10, min. coverage 10%
Scale Space Representation  t_0 = 8, δt = 1, N_max = 10, cut-off T_rel = 20%, δg = 1%, T_inp = 10%
Blob Detection              min. |H| = 100, cov_crit = 30%
Tracking Constraints        v_max = 20 m/s, c_av = 2.0, no ∆t_max constraint
Depicted Info               IDs, Centroids, Boundary, Current Tracks
Background Image            Raw Mode

Table 6.9: Parameters for Automatic Scale Run with Inprocess cut-off
Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.

Figure 6.9: Automatic Scale with Inprocess cut-off, All Tracks
All Tracks detected Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.

Figure 6.10: Automatic Scale with Inprocess cut-off, Spurious Tracks Removed
Tracks remaining after removal of the spurious Tracks, Sep 28th, 1999, 2:00 GMT+2 - Sep
28th, 1:41 GMT+2.

Figure 6.11: Automatic Scale with Inprocess cut-off, Spurious Tracks
The spurious Tracks detected Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.
The following table lists the spurious Tracks found in the run:

ID Color Start End |T | Spurious Segments


48 blue 9:06 9:32 4 1
68 blue 10:16 11:01 6 1
72 blue 10:16 10:41 4 1
84 blue 10:46 11:31 6 1
88 blue 11:46 12:11 4 1
109 green 14:16 14:46 5 1
118 red 15:41 16:01 3 1
122 red 15:41 16:01 3 1
130 red 17:01 17:31 5 1
140 blue 18:06 19:06 10 1
155 gold 0:01 01:01 8 1

Table 6.10: List of Spurious Tracks for Automatic Scale Run with Inprocess cut-off

And again a quality estimate according to the quality criteria defined in Section 5.6:

Kall 94
Kspur 11
FARtracks 0.13250 (13.25%)

Sall 348
Sspur 11
FARsegments 0.02972 (2.98%)

Table 6.11: Quality Estimate for the Automatic Scale Run with Inprocess cut-off

6.4 Experimental Results


The most obvious flaw in tracking centroids is that stratiform precipitation is very hard
to track. When the image features exhibit very little detail and the radar range is covered
to a great extent, the problem is impossible to solve for the algorithm in its current form.
However, the methods used in this work do not even remotely exhaust the tricks available
in the image processing toolbox. The following results are from a very early adoption of
more elaborate methods and are to be seen as strictly experimental.

6.4.1 Linear Contrast Stretching


Usually, a data set does not make full use of the range of possible values. By stretching
the values present over the whole range of possible values, more image detail can be obtained.
If the stretch is based on a linear function, the process is called linear contrast stretching;
see Gonzalez/Woods, 1992 [1]. Figure 6.12 shows a situation from September 26th, 1999.
The scenery exhibits little contrast. After stretching the contrast, the image looks slightly
different, as shown in Figure 6.12 (bottom).

Figure 6.12: Contrast Enhancement
Sep 26th, 1999, 4:11 GMT+2. Top: before contrast enhancement. Bottom: after linear
contrast stretch.

A little more detail and a generally brighter image is the result; a sketch of the stretch
follows below. When combined with a form of thresholding based on histograms, the results
can be utilised for tracking. This thresholding is described in the next section.
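A minimal C sketch of the stretch on a byte image; keeping class 0 (no signal) fixed at 0 and mapping the occupied value range onto [1..255] is an assumption made here to preserve the background, not a detail taken from the thesis.

/* Linear contrast stretch: map the occupied value range [lo..hi] of
 * the non-zero pixels onto [1..255]; background (0) stays untouched. */
void contrast_stretch(unsigned char *img, int n) {
    int lo = 255, hi = 0;
    for (int i = 0; i < n; i++) {
        if (img[i] == 0) continue;          /* ignore background */
        if (img[i] < lo) lo = img[i];
        if (img[i] > hi) hi = img[i];
    }
    if (hi <= lo) return;                   /* nothing to stretch */
    for (int i = 0; i < n; i++)
        if (img[i] > 0)
            img[i] = (unsigned char)(1 + ((img[i] - lo) * 254) / (hi - lo));
}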

6.4.2 Percentile Thresholding


Percentile thresholding works by accumulating a histogram of all values in the image. The
histogram is restricted to the non-zero values, and a definable percentile is calculated: the
class at which the accumulated count reaches the chosen percentile is established as the
threshold, and all values below it are trashed by setting them to zero (a sketch follows
below). Consider the contrast-stretched image in Figure 6.12 (bottom). Processing it with
a percentile threshold of 0.75 (the 75% percentile) results in the image shown in Figure 6.13.
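A C sketch of the procedure on a byte image; the exact handling of ties at the percentile class is an assumption, and all names are illustrative.

/* Percentile thresholding: find the class at which the cumulative count
 * of non-zero values reaches the given percentile (0..1), then zero out
 * every value below that class. */
void percentile_threshold(unsigned char *img, int n, double percentile) {
    long hist[256] = {0}, total = 0, acc = 0;
    int cut = 255;
    for (int i = 0; i < n; i++)
        if (img[i] > 0) { hist[img[i]]++; total++; }
    if (total == 0) return;
    for (int v = 1; v < 256; v++) {
        acc += hist[v];
        if ((double)acc / (double)total >= percentile) { cut = v; break; }
    }
    for (int i = 0; i < n; i++)
        if (img[i] < cut) img[i] = 0;       /* trash values below the cut */
}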

Figure 6.13: Percentile Thresholding


Sep 26th, 1999, 4:11 GMT+2 after Contrast enhancement and subsequent Percentile
Thresholding at 0.75

This looks like an image which could be used for centroid tracking. The process of
stretching the contrast and thresholding the image at the value above which only 25% of
the values reside makes viable input for a scaling procedure. Figure 6.14 shows the result of
the Blob detection procedure on Figure 6.13, and the same Blobs on the original, unfiltered
data are shown for comparison in Figure 6.14 (bottom).

Based on this procedure, a run was undertaken for the time between 3:01 and 6:31
GMT+2. Owing to the high wind speeds and the dynamic development of the precipitation
on that day (making for big changes in the internal structure), a time slice constraint
∆t_crit = 600 s was applied. Scale-space detection was set to use N_max = 5 and an
Inprocess cut-off of T_inp = 20%. The Percentile Threshold was set to 85%. Figure 6.15
shows the results for these 4 hours.

Figure 6.14: Blobs on Contrast Enhanced and Thresholded Image
Sep 26th, 1999, 4:11 GMT+2. Top: detected Blobs after contrast enhancement and
subsequent Percentile Thresholding at 0.75. Bottom: Blob boundaries on unfiltered data.

Figure 6.15: Experimental Run, All Tracks


Sep 26th, 1999, 4:11-6:31 GMT+2.

Chapter 7

Discussion and Outlook

The overall performance of the algorithm in its current state is not too bad. It leaves a
lot of room for improvement, though. One of the most prominent problems arises in the
situation where detected areas are about to leave the radar's range. The resulting shrink in
area, and the seemingly different movement it exposes to the algorithm, leads to obviously
wrong tracks. This could be prevented by an interpolation through the nodes in a track's
history and an estimate of the time at which the Blob leaves the range. The same is true for
the reverse situation, where objects are entering the radar's range. These situations often
lead to mismatches as well, mistaking an object freshly wandering into range for another that
was already within range and has wandered further inward. In general this is a problem
inherent in the way the algorithm works: on single objects. The mismatch problem could
also be tackled by an interpolation scheme which takes the overall direction of all Tracks in
processing at the time into account. Overall, a directional smoothing post-processing or
in-processing scheme would prove beneficial.

The ability to track not only the position, but also the development of the object in
terms of reflectivity intensities over time, could provide helpful additional information for
selecting the correct relation between rain-rate and reflectivity. As shown in a paper by
C. Reudenbach, G. Heinemann et al., 2001 [8], the Z/R relation has a twofold nature: it
looks different during the phase of precipitation build-up than during decay. Tracking the
reflectivity histograms could provide a Nowcasting algorithm using SARTrE output data
with valuable information for estimating the future development of the tracked precipitation
areas in terms of size (as denoted by histogram size) as well as rain-rates (through histogram
development), for better quantitative forecasts.

In order to get a better impression of the performance, a more thorough statistical anal-
ysis of the algorithm would be needed. The presented method is admittedly rudimentary,
and the decisions between correct and incorrect tracking were based on rather subjective
criteria. Until more critical and extensive tests have been carried out, the estimated
performance values should be viewed with caution.

One of the biggest weaknesses the algorithm exposes is the inability to deliver data in
situations where large-scale stratiform precipitation covers the radar's range to a large extent.
It is conceivable that an approach making further use of scale-space theory and digital im-
age processing techniques, such as contrast enhancement and more elaborate thresholding,
could improve the usability in situations where the coverage of the radar's detection area is
very high, but some structure remains visible inside. A rough first sketch of this was presented
in Section 6.4.

Whether the ability to focus on salient image features by Automatic Scale Selection
proves useful depends on whether the algorithm can be improved to treat all sorts
of weather situations accordingly. An idea in this direction is to perform a complete scale-
space analysis of each snapshot (determining the scale of each object separately by tracking
its lifetime in the scale-space representation), and to extract information to adjust certain
parameters of the Blob detection stage automatically, for instance by steering a contrast
enhancing scheme locally, accentuating centres of reflectivity (see Section 6.4), or by using
an anisotropic, possibly non-linear diffusion process for extracting trackable details even
from mostly stratiform precipitation. If this can be achieved, the algorithm would prove a
good basis for a completely unsupervised tracking system, providing continuous tracking
data. As far as I know, the application of scale-space methods to this special field of
Tracking is not widely explored yet and leaves much to do.

Scale-space methods could also prove useful for Tracking based on statistical correla-
tion. These algorithms are often sensitive to the box size chosen, and the size of the box
could in turn be determined automatically by using a scale-space approach to find the scale
of the prominent image features and linking the box size to it. This wouldn't alleviate the
ambiguity problem, but it might reduce the output noise somewhat.

Comparison of the presented method to other tracking algorithms is difficult. The closest
recent relative to SARTrE is the Trace3D algorithm, but a direct comparison is hard, for the
two algorithms have different foci. The latter concentrates on convective cores by applying
a semi-adaptive thresholding scheme: precipitation outside the thresholded ranges is not
taken into account, and the correlation procedure is based on velocity interpolation. It also
contains a simple directional smoothing facility, based on the improbability of crossing tra-
jectories. SARTrE, on the contrary, doesn't focus on reflectivity cores as such, but rather on
the impression the distribution of reflectivity leaves in its scale-space representation. Also,
the correlation process is based on different assumptions and techniques. SARTrE has no
directional preference or interpolation facilities (yet). For a direct comparison, a run of
both algorithms on the same data sets would be interesting.

An idea for the far future might be to use a hybrid model: according to the weather
situation, either centroid or cross-correlation tracking could be used. Which algorithm to
apply to a given situation could be decided by an accordingly trained neural network, which
could base its decision on coverage, gradients or other suitable input.

Appendix A

Programming Techniques

A.1 Object Oriented Programming (OOP)


The object-oriented paradigm, although going back to the late 1960s (Simula-67), didn't
pick up for a long time. The first language to enable programmers to make full use
of it was Smalltalk, developed at the Xerox PARC research group in 1972. The language
caught on slowly, since the demands of OOP are somewhat higher than those of procedural
languages, but with the increased availability of powerful personal computers, the idea gained
momentum and finally took off during the early 1980s. The most widespread OO
language today is C++, a language based on C and developed by Bjarne Stroustrup in 1985.
Other languages moved to adopt the principle; a well-remembered first step at bringing OOP
to a wider audience was Borland's TurboPascal 5.5, which extended the Pascal language
with OO features. This development led to the OO language Delphi, which is widely used
on "WinTel" machines today. As opposed to procedural languages, which separate func-
tionality into different procedures and where data and processing are quite separated, the
object-oriented paradigm is based on the idea that data and the processing of it are two aspects
of the same thing, and consequently strives to unite them in an abstract unit: an object.

An object consists of data fields, which are called members or attributes in this context,
processing facilities operating on this internal data (called member functions or methods),
and an interface exposing data and functionality to other objects, making it possible for ob-
jects to exchange messages. Since objects also constitute data types, it is possible to
exchange messages consisting of objects themselves. One of the most distinguishing features
is the ability to build complex object hierarchies based on an ontological conceptualisation
of problems, which allows for flexible, runtime-bound solutions unprecedented by proce-
dural approaches. A complete discussion of the concept is beyond the scope of this thesis;
a good starting point is the CETUS links website, http://www.cetus-links.org,
which is a good source on all practical aspects of OOP.

A.2 Objective-C
The OOP language used in this thesis is Objective-C. As opposed to C++, which has the
big drawback of falling apart into multiple proprietary dialects implemented by different
vendors, Objective-C suffers no dissension. It is a unique language standard, and the only
difference lies in the libraries used to constitute the root object of all objects: one by GNU
and the other by Apple. The differences are marginal, and porting one to the other is a matter
of replacing a handful of statements. Objective-C is a true superset of the C language:
everything written in standard ANSI C can be adopted into Objective-C seamlessly.
It is based on the message-passing structures of Smalltalk, which allows for very flexible
runtime behaviour and - if needed - almost type-free programming. It was chosen as
the language for the software developed in the context of this thesis partly for these
properties and partly because it is the OO language of choice when dealing with Macintosh
programming in general: the Cocoa frameworks of Apple's UNIX-based operating system
Mac OS X are built on it.

A.3 Libraries and Third Party Software Used


For numerical processing, two libraries were used: the GNU Scientific Library and the
library provided by Numerical Recipes. Visualisation was done through GNU's plotutils
package. GUI and application programming made heavy use of Apple's Cocoa library.
Assimilation of radar data, orography handling as well as Cartesian interpolation were based
on code by D. Meetschen, Meteorological Institute of Bonn. The thesis paper was written
using LaTeX (iTeXMac software). Graphic format conversion and re-scaling were done with
the fantastic GraphicConverter (Lemkesoft) application. Thoughts were held together by iBlog
(Lifli Software). For diagrams and the like, OmniGraffle Professional (OmniGroup) was used.
The complete software was developed using Apple's Xcode.

A.4 Macintosh Programming and Tools


Programming on a Macintosh is exceedingly easy. All developer tools are free and include a world-class GUI construction tool, a stable, well-documented and easy-to-use IDE for code development and debugging (ProjectBuilder/XCode), and even profiling and process-sampling tools. In the author's experience it is the best platform for rapid code development and application building among those he has worked with. The G4 processor delivers impressive number-crunching capability and possesses a parallel floating-point vector processing unit called AltiVec, which is capable of performing four single-precision floating-point operations in one processor cycle (see the sketch below). Virginia Tech recognised the potential of Apple technology for scientific computing and recently built a cluster supercomputer from 1100 dual-2GHz G5 machines, which debuted at position 3 in the TOP500 list of supercomputers with an exceptionally favourable cost/performance ratio. Programming, processing and writing for this thesis were done on a Macintosh 1GHz G4 PowerBook with 512 MB of memory.
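As an illustration, here is a hedged sketch of AltiVec usage; since Objective-C is a superset of C (Sect. A.2), the plain-C function below compiles unchanged in an Objective-C source file. The function name is illustrative, and the code assumes a G4/G5 CPU with AltiVec-enabled compilation (e.g. gcc -faltivec), 16-byte-aligned arrays and a length divisible by four.

    #include <altivec.h>

    /* computes y[i] = a*x[i] + y[i], four floats per loop iteration */
    void saxpy_altivec(int n, float a, const float *x, float *y)
    {
        /* replicate the scalar into all four vector lanes
           (brace-literal syntax; older compilers use (vector float)(a,a,a,a)) */
        vector float va = (vector float){ a, a, a, a };
        int i;
        for (i = 0; i < n; i += 4) {
            vector float vx = vec_ld(0, x + i);      /* load four floats */
            vector float vy = vec_ld(0, y + i);
            /* fused multiply-add: vx*va + vy, four lanes at once */
            vec_st(vec_madd(vx, va, vy), 0, y + i);  /* store four floats */
        }
    }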

Appendix B

Software and Data

The complete documentation for the presented runs, including the correlation table for each snapshot, is much too extensive to be put into this document. The files containing all three runs, with images and tracking data, can be obtained from the author on request. Send an email to
juergen_simon@mac.com
and provide the keyword "SARTrE" in the subject to get past my rigorous spam filter. The data used to generate the runs is property of the Meteorological Institute of the University of Bonn and can therefore not be distributed. An abstraction layer in the software is used to assimilate radar data; by changing it to your needs, you should be able to adapt the algorithm to various formats with relative ease (a hedged sketch of such a layer is given below). The software is still under development, but will be distributed at some point under the GNU General Public License (GPL). It is composed of frameworks for assimilation, visualisation and processing, and a Cocoa-based application for OS X. It is developed following the well-known MVC (Model-View-Controller) paradigm, so adding a new front-end should be easy. Send an email to the above-mentioned address if you want to be kept posted.
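To illustrate the idea of the abstraction layer, the following is a minimal Objective-C sketch of how such an interface could look. The protocol and method names are hypothetical and do not reflect the actual SARTrE interface.

    #import <Foundation/Foundation.h>

    /* Hypothetical protocol for a radar data source: a reader for any new
       file format adopts it, and the rest of the software only ever talks
       to an id<RadarDataSource>, never to a concrete format. */
    @protocol RadarDataSource
    - (BOOL)openScanAtPath:(NSString *)path;    /* open a single radar scan */
    - (unsigned)rayCount;                       /* number of rays in the scan */
    - (unsigned)binCount;                       /* number of range bins per ray */
    - (float)reflectivityAtRay:(unsigned)ray
                           bin:(unsigned)bin;   /* reflectivity value in dBZ */
    @end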

List of Tables

5.1 SARTrE Output Format of Tracks . . . . . . . . . . . . . . . . . . . . . . . . 56


5.2 Visual Options for Tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.3 Visual Options for Background . . . . . . . . . . . . . . . . . . . . . . . . . . 57

6.1 Parameters for Fixed Scale Run . . . . . . . . . . . . . . . . . . . . . . . . . . 59


6.2 List of Spurious Tracks for Fixed Scale Run . . . . . . . . . . . . . . . . . . . 63
6.3 Quality Estimate for the Fixed Scale Run . . . . . . . . . . . . . . . . . . . . 63
6.4 Correlation Table for new Blob #1 . . . . . . . . . . . . . . . . . . . . . . . . 66
6.5 Correlation Table for new Blob #0 . . . . . . . . . . . . . . . . . . . . . . . . 66
6.6 Parameters for Automatic Scale Run . . . . . . . . . . . . . . . . . . . . . . . 67
6.7 List of Spurious Tracks for Automatic Scale Run . . . . . . . . . . . . . . . . 71
6.8 Quality Estimate for the Automatic Scale Run . . . . . . . . . . . . . . . . . 71
6.9 Parameters for Automatic Scale Run with Inprocess cut-off . . . . . . . . . . 72
6.10 List of Spurious Tracks for Automatic Scale Run with Inprocess cut-off . . . 76
6.11 Quality Estimate for the Automatic Scale Run with Inprocess cut-off . . . . . 76

List of Figures

2.1 Plain Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


2.2 Projection onto Cartesian Coordinates . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Interpolation onto Cartesian Coordinates . . . . . . . . . . . . . . . . . . . . 8
2.4 Reflectivity Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5 Gaussian Error Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Cluttermap Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.7 Cluttermap Correction 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.8 Ray Interpolation Example: Clutter Only . . . . . . . . . . . . . . . . . . . . 16
2.9 Cluttermap Correction 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.10 Ray Interpolation Example: Clutter partially covered by another event . . . . 17
2.11 Cluttermap Correction 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.12 Ray Interpolation Example: Clutter completely covered by another event . . 18

3.1 3x3 Neighbourhood with numbering scheme . . . . . . . . . . . . . . . . . . . 20


3.2 Arithmetic Mean Averaging Mask . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Averaging Example, Unfiltered . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Averaging Example, Filtered . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Prewitt Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.6 Sobel Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.7 Median Averaging Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.8 Percentile Averaging Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.9 Single Bright Spots on plain polar image . . . . . . . . . . . . . . . . . . . . . 26
3.10 Single Bright Spots removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.11 Image with Speckle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.12 Despeckled Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.1 1-D Scale Space Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 32


4.2 Intensity, gradient, Laplacian at edges . . . . . . . . . . . . . . . . . . . . . . 34
4.3 Mexican Hat Laplacian Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.4 Laplacian mask, detected edges . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5 Scale Space Representation 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.6 Scale Space Representation 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.7 Edge Detection in Scale Space Images 1 . . . . . . . . . . . . . . . . . . . . . 41
4.8 Edge Detection in Scale Space Images 2 . . . . . . . . . . . . . . . . . . . . . 42
4.9 Automatic Scale Detection, Original Image . . . . . . . . . . . . . . . . . . . 43
4.10 Automatic Scale Detection Results . . . . . . . . . . . . . . . . . . . . . . . . 45
4.11 In-Process Cut-Off Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5.1 Histograms, Detected Blobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.2 Histograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.3 Centroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

6.1 Fixed Scale, All Tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60


6.2 Fixed Scale, Spurious Removed . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Fixed Scale, Spurious Tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.4 Spurious Track Analysis, Before Mismatch . . . . . . . . . . . . . . . . . . . . 64
6.5 Spurious Track Analysis, After Mismatch . . . . . . . . . . . . . . . . . . . . 65
6.6 Automatic Scale, All Tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.7 Automatic Scale, Spurious Tracks Removed . . . . . . . . . . . . . . . . . . . 69
6.8 Automatic Scale, Spurious Tracks . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.9 Automatic Scale with Inprocess cut-off, All Tracks . . . . . . . . . . . . . . . 73
6.10 Automatic Scale with Inprocess cut-off, Spurious Tracks Removed . . . . . . 74
6.11 Automatic Scale with Inprocess cut-off, Spurious Tracks . . . . . . . . . . . . 75
6.12 Contrast Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.13 Percentile Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.14 Blobs on Contrast Enhanced and Thresholded Image . . . . . . . . . . . . . . 79
6.15 Experimental Run, All Tracks . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Bibliography

[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Addison-Wesley Publishing Company, Inc., 1993
[2] E. A. Mueller, Statistics of high radar gradients, Journal of Applied Meteorology, Vol. 16, 1977
[3] Tony Lindeberg, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994
[4] J. Weickert, A Review of Nonlinear Diffusion Filtering, in: Scale-Space Theory in Computer Vision, LNCS Vol. 1252, Springer, 1997
[5] J. Weickert, Scale-Space has been Discovered in Japan, Technical Report DIKU-TR-97/18, Department of Computer Science, University of Copenhagen, August 1997
[6] A. P. Witkin, Scale-Space Filtering, Proc. Eighth Int. Joint Conf. on Artificial Intelligence (IJCAI '83, Karlsruhe, Aug. 8-12, 1983), Vol. 2, pp. 1019-1022, 1983
[7] William H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Second Edition, Cambridge University Press, 1992
[8] C. Reudenbach, G. Heinemann, E. Heuel, J. Bendix, W. Winiger, Investigation of summertime convective rainfall in Western Europe based on a synergy of remote sensing data and numerical models, Meteorol. Atmos. Physics, 76, 23-41, 2001
[9] J. Handwerker, Cell tracking with TRACE3D - a new algorithm, Atmos. Res., 61, 15-34, 2002
[10] R. E. Rinehart and E. T. Garvey, Three-dimensional storm motion detection by conventional weather radar, Nature, 273, 287-289, 1978
[11] L. Li, W. Schmid, J. Joss, Nowcasting of motion and growth of precipitation with radar over a complex orography, Journal of Applied Meteorology, Vol. 34, pp. 1286-1300, 1995
[12] S. Mecklenburg, Nowcasting precipitation in an Alpine region with a radar echo tracking algorithm, Dissertation, ETH Zurich, Diss. ETH No. 13608, 2000
[13] R. E. Rinehart, Radar for Meteorologists, 3rd Edition, Rinehart Publications, 1997
[14] E. Heuel, Quantitative Niederschlagsbestimmung aus Radardaten: Ein Vergleich von unterschiedlichen Verfahren unter Einbeziehung der Statistischen Objektiven Analyse, PhD thesis, Meteorological Institute, University of Bonn, 2004, 162 pp.
