Submitted to the
Faculty of Mathematics and Natural Sciences
of the Rheinische Friedrich-Wilhelms-Universität Bonn
Preface
When I started my Diploma thesis, the idea of developing a tracking algorithm was only part of a much more ambitious plan to develop a nowcasting algorithm. During the course of working on the topic, it dawned on me that creating a working tracking algorithm was no trifle, but a veritable task in itself. Moreover, when I read about the application of Scale-Space methods to tracking problems outside the meteorological field, I became interested in Scale-Space Theory itself. Realising that both could be interconnected in a way beneficial to meteorological applications as well, I was diverted from the original plan and began to investigate the topic more deeply. During the short time of this work, the simplicity and beauty of the Scale-Space appealed to me, and although I have taken but a first glance, the multitude of possibilities it seems to offer for all sorts of problems concerned with deriving information linked to scale is overwhelming. The problem of scale has somehow always interested me: as a youth I was fascinated by fractals, especially because of the self-similarity of their structures at small and large scales. And although I learned a lot during the course of writing this work, the discovery of Scale-Space theory itself was among the biggest rewards for me.
Thanks
I would like to express my gratitude towards my mother, for her unbroken faith in me over the winding and often erratic course of my life. Special thanks to Prof. G. Heinemann for accepting the proposal of this thesis in the first place, for showing patience or exerting pressure as appropriate, and for providing numerous valuable hints and constructive criticism, which helped to improve the quality of the work a lot. Thanks to all of my friends for giving me support, lending an ear or leaving me alone when appropriate. Gordon Dove for optimisation hints, general suggestions as well as improving my English. Mark Jackson for cheering me up. Maren Timmer for helping with the pedagogic aspects and for moral support. D. Meetschen and Eva Heuel for providing software and data as well as advice. Very special thanks to my girlfriend for moral support and for standing back when I needed the time; much obliged.
Contents
2 Radar Data
  2.1 Coordinate Systems
  2.2 Values
  2.3 Clutter Filtering

5 Tracking and Scale Space
  5.1 Histogram
  5.2 Centroid
    5.2.1 Geometric Centre of Boundary
    5.2.2 Centre of Reflectivity
    5.2.3 Scale Space Centre
  5.3 Correlation
  5.4 Tracking Output
  5.5 Visualisation of Tracking Data
  5.6 Estimation of Quality, False Alarm Rates

6 Case Studies
  6.1 Tracking at Fixed Scale
  6.2 Tracking at Automatically Selected Scale
  6.3 Tracking at Higher Velocities
  6.4 Experimental Results
    6.4.1 Linear Contrast Stretching
    6.4.2 Percentile Thresholding

A Programming Techniques
  A.1 Object Oriented Programming (OOP)
  A.2 Objective-C
  A.3 Libraries and Third Party Software Used
  A.4 Macintosh Programming and Tools
Chapter 1
RADAR is short for Radio Detection and Ranging. Like many other great inventions (the transistor, penicillin, X-rays, ...) it was discovered by a fortunate combination of sheer luck and awareness. In R. E. Rinehart's book, Radar for Meteorologists [13], the discovery is described as follows:
“...In September 1922, the wooden steamer Dorchester plied up the Potomac
River and passed between a transmitter and receiver being used for experimen-
tal U.S. Navy high-frequency radio communications. The two researchers con-
ducting the tests, Albert Hoyt Taylor and Leo C. Young, had sailed on ships
and knew the difficulty in guarding enemy vessels seeking to penetrate harbours
and fleet formations under darkness. Quickly putting the serendipitous finding
together, the men proposed using radio waves like a burglar alarm, stringing
up an electromagnetic curtain across harbour entrances and between ships. But
receiving no response to the suggestion, and with many demands on their time,
the investigators let the idea wither on the vine.”
From that first incident to the modern radar systems used for civil and military purposes today, a long time has passed. Radar is now an everyday tool, used to detect and guide aeroplanes or ships, measure distances between cars in automatic control systems or even to detect objects hidden underground. The first radars used for meteorological purposes were obtained from the military after WWII, whose by then well-developed equipment became available for civil use. Another great step for meteorological applications was the development of the Doppler radar, which allows not only the detection of objects by their reflected radiation, but also the measurement of their velocity radial to the radar site through the Doppler effect.

Modern radars work by alternately emitting a bundled pulse of energy (a ray) and detecting, in short time intervals, the portion of radiation reflected from objects in its path. From the speed of light and the time interval, the range of the object from the radar can be estimated. By changing the radar's azimuth and/or elevation angle, two- or even three-dimensional images of reflectivity can be obtained. By measuring the phase shift between back-scattered and emitted radiation, a radial velocity can be measured. For a good introduction to the history, theoretical and technical details of radar, see Rinehart's book [13].
Chapter 2
Radar Data
2.1 Coordinate Systems
Owing to the way radar data is obtained, its natural format is organised into rays, one for each scanned angle, and within each ray a set of range gates, one for each time interval at which the back-scattered radiation was sampled. The natural coordinate system for the data thus is plane polar coordinates. In reality, the plane is more often than not a shallow cone, since the radar beam usually has an elevation above the perfect horizontal. These polar coordinates can be transformed into a Cartesian coordinate system, where the origin is usually chosen to represent the radar site. The result is called a Plan Position Indicator (PPI) display, a term that goes back to the beginning of radar meteorology, when the PPI was indeed an oscilloscope's display with the radar beam taking sweeps, leaving detected targets in its wake.
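To make the transformation concrete, mapping a range gate from plane polar coordinates onto the Cartesian PPI plane can be sketched as follows (a minimal Python sketch for illustration only; the gate length of 250 m is an assumed value, not a property of the Bonn radar, and the thesis software itself is written in Objective-C, cf. Appendix A):

```python
import math

def gate_to_cartesian(azimuth_deg, gate_index, gate_length_m=250.0):
    """Map an (azimuth, range gate) pair to Cartesian (x, y) with the radar
    site at the origin; azimuth is measured clockwise from north.
    The gate length of 250 m is an assumed value for illustration."""
    r = gate_index * gate_length_m
    phi = math.radians(azimuth_deg)
    x = r * math.sin(phi)  # positive x points east
    y = r * math.cos(phi)  # positive y points north
    return x, y
```

Scan conversion then amounts to applying this mapping to every (azimuth, gate) pair and depositing the reflectivity in the resulting grid cell, which is exactly where the 'last wins' versus interpolation question of Figures 2.2 and 2.3 arises.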
Figure 2.1: Plane Polar Coordinates
Azimuth Scan, 28 Sep. 1999, 9:36 GMT+2, Range 50 km, Elevation 2.57°.

without considering whether there was already a value plotted at that point ('last wins'). See Figure 2.2 for an example and Figure 2.3 for the same data in interpolated form.
Figure 2.3: Interpolation onto Cartesian Coordinates
Azimuth Scan, 28 Sep. 1999, 9:36 GMT+2, Range 50 km, Elevation 2.57°, Cartesian Interpolation.
2.2 Values

Reflectivity data from the X-Band radar installed in Bonn used in this work comes as unsigned char values, i.e. integers in the range [0..255]. The reflectivity is calculated using the formula Z[dBZ] = −31.5 dBZ + 0.5 · Z[byte]. For most of the data processing this conversion is omitted, though, because the byte-valued format proves advantageous in terms of grayscale representation. The data also contains time-stamp and angular properties for each ray and for the scan as a whole. For optically matching a given grey value back to a reflectivity value, a legend may be referenced.
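The conversion above is simple enough to state as a pair of helper functions (a sketch; the function names are chosen here purely for illustration):

```python
def byte_to_dbz(z_byte):
    """Convert a byte value [0..255] to reflectivity, Z[dBZ] = -31.5 + 0.5 * Z[byte]."""
    return -31.5 + 0.5 * z_byte

def dbz_to_byte(z_dbz):
    """Inverse mapping, rounded back to the nearest representable byte value."""
    return round((z_dbz + 31.5) / 0.5)
```

The byte range [0..255] thus covers reflectivities from −31.5 dBZ up to 96 dBZ in steps of 0.5 dBZ.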
2.3 Clutter Filtering
Clutter is radiation reflected off static ground targets like trees, buildings, hills etc. This is mainly due to the fact that the geometric properties of the radar 'beam' are far from ideal. The radar 'beam' has, viewed across its axis, multiple local maxima (lobes) of radiation. While the absolute maximum, the main lobe, contains most of the energy, some energy is emitted in the secondary maxima, called the side lobes, whose axes point away from the main axis. Thus, even when placing the radar on a raised point with a clear line of sight (for the main beam), the side lobes will produce ground clutter. In the light of this fact, it is understandable that clutter is mostly found in the near range around the radar. Of course there is also a dependency on the orographic circumstances of the radar site, which differ from site to site. Clutter may well be among the most intense reflectivity in the data, since the objects giving rise to clutter are often of significantly higher density and possess better reflective properties than most meteorological targets do, except maybe hail. Thus, in order to obtain a more meteorologically relevant view of the data, it is desirable to find means of filtering clutter out. One strong indicator of clutter is a target being stationary (trees, buildings, mountains, large radio antennas, etc.). A Doppler radar can identify clutter with relative ease through the absence of radial movement. Although the X-Band radar in Bonn is capable of detecting Doppler velocities now, that was not always the case: the radar was modernised and enabled for Doppler detection in 1998. The data chosen for this thesis is from before that time, and thus a different approach to distinguishing clutter from real targets was required. Apart from adopting the cluttermap approach, a method of stochastic decision making and weighted interpolation was developed.
the next digit down in precision, to 10⁻¹. The maximum angular error thus made is Δφ = ±0.05°. At the maximum range of 100 km for extended azimuth scans, this angular error translates into a maximum dislocation error of Δr = 100 km · rad(0.05°) ≈ 87.3 m. This was found tolerable for this process, since clutter is mostly found in the range 0-25 km, where the error by the same evaluation is about 21 m. The rounding error of ≈ 5% seems acceptable for the purpose.
and the number of scans taken into account for each position. For days with great changes in weather conditions it can be necessary to create more than one cluttermap (or at least use more scans) to account for the impact of different weather conditions on the path of the radar beam. For days with more stationary conditions, one cluttermap suffices and fewer scans are required. For practical purposes, it has proven advantageous to obtain a new cluttermap for each day, provided sufficiently event-free intervals can be found in the data.

How can this cluttermap be leveraged to reduce clutter in scans? Remember that the cluttermap contains those positions in the scan which have been found to be cluttered in 'clear' conditions, the number of scans indicating so, and the summed-up clutter reflectivity values.
A first approach might be simply to subtract the average clutter reflectivity at each position in the cluttermap from the reflectivity found in the scan to be corrected. This approach is based on the assumption that the overall reflectivity at a cluttered position is the sum of the reflectivity of the meteorological target and the clutter's reflectivity (simple superposition). Consider this basic form of the radar equation for multiple targets:
P_r = (P_t G² λ² / (4π)³) · Σ_i σ_i / R_i⁴   (2.1)
where P_r is the average received power, P_t the transmitted power, G the gain of the radar, and λ the radar's wavelength. The sum on the right contains σ_i, the i-th target's scattering cross section, and its distance to the radar, R_i. The backscattering cross section σ is calculated by taking into account the shape (diameter facing the radar's direction), the dielectric properties and the radar's wavelength. According to this equation, in the absence of any meteorologically relevant targets the clutter's back-scattered power could be measured and subsequently subtracted from the measurement, since it appears to be additive (through the sum on the right-hand side). However, in practice this path leads to big errors, ripping 'holes' into the radar image. Why is that? For a start, the path of the radar beam is heavily influenced by atmospheric fields like temperature and humidity; stationary ground targets therefore appear to be moving in the radar's view. In addition, the radar beam is attenuated by travelling through a medium filled with backscattering targets. These effects of energetic and directional obfuscation render the simplistic superposition approach somewhat useless. In spite of the cluttermap information, the problem of determining how much radiation at a given point in a sample is owed to clutter persists.
In what other way could the information in the cluttermap aid us? Could it be possible
to leverage the cluttermap for estimating at least the likelihood of a point being cluttered?
And should the likelihood be high, could we apply a correction based on more information
than just the cluttermap? The following paragraph develops a method for doing just that.
his collected experience. One chief aspect in this decision-making process would surely be continuity, the larger structure of the objects seen. The presented method tries to take that concept into account when distinguishing clutter from non-clutter. Knowledge about the stationary targets is collected in the aforementioned cluttermap. In order to get a view of the structure of detected objects, the scan is considered ray-wise. The main assumption is as follows:

The more the measured reflectivity at a given coordinate deviates from the average cluttermap value, the more likely the value is to be correct.

Assume a cluttermap C = {C(φ, m) | φ ∈ [0, 360), m ∈ [1, N_gates]}, where N_gates is the number of range gates the radar produces in a ray. Further, let a radar scan consist of N_rays rays at angles φ_n, each containing N_gates range gates: Z = {Z(φ_n, m) | n ∈ [1, N_rays]; m ∈ [1, N_gates]}. The method works by traversing all points (nodes) of the cluttermap and comparing them to the corresponding points in the scan. What interests us is the likelihood of the point under consideration, Z(φ_n, m), being obfuscated by clutter, C(φ = φ_n, m). An estimate is proposed in the following form:
P_clutter(Z(φ_n, m)) = erfc( 2 |Z(φ_n, m) − C(φ = φ_n, m)| / 255 )   (2.2)³
Should the probability P_clutter exceed a pre-set threshold P_crit, the point in the scan is assumed to be heavily contaminated by clutter and thus in dire need of correction.⁴
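The decision rule of Eq. (2.2) can be sketched as follows (Python's math.erfc stands in for the Gaussian complementary error function; the function names are chosen here for illustration):

```python
from math import erfc

def clutter_likelihood(z, c):
    """Eq. (2.2): likelihood that the sampled byte value z is dominated by
    the cluttermap byte value c; both lie in [0, 255]."""
    return erfc(2.0 * abs(z - c) / 255.0)

def needs_correction(z, c, p_crit=0.9):
    """A sample is flagged for correction when the likelihood exceeds P_crit."""
    return clutter_likelihood(z, c) > p_crit
```

Since erfc(0) = 1, a sample identical to the cluttermap value yields likelihood 1, and the likelihood decays monotonically as the deviation grows.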
Now that a decision has been made, the sample's value needs correction. In order to take the continuity of the data along the ray into account, the data is modelled as a polynomial g of order N in the index coordinate, within a certain range upwards (further away from the radar site) and downwards (closer to it) of each range gate under consideration. Should the downward range cross the origin (the radar site), samples from the diametrically opposite ray (or the ray closest to being diametrical) are taken into account. For the sake of simplicity, assume a fixed ray angle φ and consider only the range gate coordinate m:
g(m) = Σ_{j=0}^{N} a_j m^j   (2.3)
where the w_m̂ are weights on the observations f(m̂), and the weighted squared error over a window of K range gates on either side of m is

I = Σ_{m̂=m−K}^{m+K} ( g(m̂) − w_m̂ f(m̂) )²   (2.4)

Since we want to minimise this error by adjusting the coefficients, we differentiate I with respect to each a_j:
∂I/∂a_j = ∂/∂a_j Σ_{m̂=m−K}^{m+K} ( g(m̂) − w_m̂ f(m̂) )² ≡ 0   (2.5)
³ The factor 2 in the argument of the error function serves the purpose of extending the range of the argument a bit, thus making fuller use of the value range of the error function and yielding more distinguishable results. The value 255 is owed to the fact that the range of possible values is [0, 255] and serves to normalise the argument.

⁴ This formula was, in its basic form, derived by inspiration. The Gaussian error function was chosen simply for its mathematical properties (see Fig. 2.5). The closer the sampled value is to the cluttermap value, the smaller the argument of the complementary error function and the closer the result (the 'likelihood') gets to 1. Note that this approach introduces one parameter, the threshold likelihood P_crit.
Figure 2.5: The Gaussian error function erf(x) and its complement erfc(x), plotted over x ∈ [0, 2].
Carrying out the differentiation for coefficient a_j, replacing g with its definition and reordering gives:

Σ_{i=0}^{M} a_i Σ_{m̂=m−K}^{m+K} m̂^{j+i} = Σ_{m̂=m−K}^{m+K} w_m̂ f(m̂) m̂^j   (2.6)

This is a linear system of equations

G a = v   (2.7)

with the matrix G containing the power sums

G_{ji} = Σ_{m̂=m−K}^{m+K} m̂^{j+i}   (2.8)

the vector a the polynomial coefficients (a_0 ... a_N), and the vector v the observations, with

v_i = Σ_{n=0}^{2K} w_{m̂(n)} f(m̂(n)) m̂(n)^i   (2.9)

where m̂(n) enumerates the 2K+1 range gates of the window.
The observations are weighted through the w_m̂, according to a scheme based on their credibility with respect to clutter: for each point of the observation, f(m̂), an estimate is made of how likely it is to be influenced by clutter, using the cluttermap value C(m̂), and the weighting scheme maps this estimate through the Gaussian error function. This way, values that exhibit a higher probability of being cluttered receive less credit, expressed through w, than less cluttered ones. See again Fig. 2.5 for the Gaussian error function. Since the abscissa defined by the range gate indexing was chosen to have its origin at the range gate under consideration, evaluating the fit at this special point simplifies to reading off the coefficient a_0.
With this procedure, a device is at hand to correct clutter in radar data. Given a cluttermap C and a scan Z, each point in Z is checked against C and, if P_clutter exceeds a selectable threshold P_crit, the point in Z is replaced by the fitted value a_0.
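The fit step can be condensed into a short sketch that sets up and solves the normal equations (2.6)-(2.9) for one range gate (a numpy sketch under the simplifying assumption that the window does not cross the origin; the function name is chosen for illustration, and the thesis software itself is written in Objective-C, cf. Appendix A):

```python
import numpy as np

def fit_gate(f, w, K=20, order=3):
    """Weighted polynomial fit along a ray segment.
    f, w: samples and weights for the 2K+1 gates centred on the gate under
    consideration (index m̂ runs from -K to K, gate itself at m̂ = 0).
    Returns the fit value at the gate, i.e. the coefficient a0."""
    m_hat = np.arange(-K, K + 1)
    # G[j, i] = sum over the window of m̂^(j+i), cf. eqs (2.6) and (2.8)
    G = np.array([[np.sum(m_hat ** (j + i)) for i in range(order + 1)]
                  for j in range(order + 1)], dtype=float)
    # v[j] = sum of w * f * m̂^j, cf. eq (2.9)
    v = np.array([np.sum(w * f * m_hat ** j) for j in range(order + 1)])
    a = np.linalg.solve(G, v)
    return a[0]  # g(0) = a0 by construction
```

With all weights equal to 1 and data that is itself a polynomial of order at most 3, the fit reproduces the data exactly, so the returned value equals the sample at the gate.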
For the following three examples, the parameters were chosen as follows: P_crit = 0.9, K = 20, M = 3. All scans were taken on July 12th, 1999. The scans shown in Figure 2.6 were used to collect the cluttermap. Figures 2.8 to 2.12 show a correction for the ray at angle 0, 5th range gate. Note that the corrected value used in each situation is the value of the fit at x = 0 (corresponding to a_0 by construction).
Figure 2.8 shows the clutter and fitting procedure for a situation where no larger structure is present in the current sample (red curve) in the vicinity of the clutter (green curve). Since in that situation the difference between cluttermap and sampled values is small and no larger structure is present in the ray to indicate 'proper' signal, the resulting fit is close to 0 overall.

The situation changes in Figure 2.10. A large precipitation signal has wandered into the centre from the northeast and partially covers the cluttered area. It can be seen how the presence of the larger structure in the ray 'pulls up' the weights and sample values, thus raising the fitted value.
In Figure 2.12 the precipitation echo has wandered even further southwest and now covers the clutter completely. The large structure present in the ray pulls up the fit from both sides. Also clearly visible is how the weights react to the change from cluttered to non-cluttered areas.

This procedure is not fully mature yet; it still leaves small holes in the precipitation. Since these holes don't pose a problem for subsequent stages, the quality was deemed good enough for the course of this work. At an early stage of development the whole procedure was tried using simple linear regression, which basically boils down to setting the order of the interpolation polynomial to 1. It turns out that the linear approach is too crude: since a larger structure with a distinct curvature should be captured, and not only the next few points, the simple linear process tends to underestimate the reflectivity a lot, resulting in holes or artificial low-level plateaus.
Figure 2.7: Cluttermap Correction 1
Left: 10:06 No Correction. Right: Corrected
Figure 2.8: Ray Interpolation Example
Plot of 'samples_0.5', 'clutter_0.5', 'weights_0.5', 'weighed_samples_0.5' and 'fit_0.5' against range gate distance; ordinate: Z [byte value], weight [100*weight].
Figure 2.9: Cluttermap Correction 2
Left: 12:31 No Correction. Right: Corrected
Plot for Figure 2.10: 'samples_0.6', 'clutter_0.6', 'weights_0.6', 'weighed_samples_0.6' and 'fit_0.6' against range gate distance; ordinate: Z [byte value], weight [100*weight].
Figure 2.10: Ray Interpolation Example: Clutter partially covered by another event
Showing the fit for July 12th, 14:31. The fit was done for azimuth angle 0 and range gate no. 6.
Figure 2.11: Cluttermap Correction 3
Left: 13:46 No Correction. Right: Corrected
Plot for Figure 2.12: 'samples_89.2', 'clutter_89.2', 'weights_89.2', 'weighed_samples_89.2' and 'fit_89.2' against range gate distance; ordinate: Z [byte value], weight [100*weight].
Figure 2.12: Ray Interpolation Example: Clutter completely covered by another event
Showing the fit for July 12th, 15:46. The fit was done for azimuth angle 89 and range gate no. 2.
Chapter 3
The data produced by the radar system in its original form is not very suitable for subsequent stages of the edge and object detection processing. It first needs transformation onto the Cartesian plane and a couple of filtering operations. Since the data can be viewed as a natural grayscale image, it is only natural to turn to methods for processing digital imagery as appropriate for the treatment of this data. This section introduces some basic concepts and methods used in the course of this work.

The algorithms devised for processing digital images are legion. They range from simple pixel-wise operations (like thresholding) to algorithms taking the whole image data into account, like Fourier transformations. It would be well beyond the scope of this work to give an authoritative overview, so only the techniques used will be considered. For an extensive discussion of the topic see Gonzalez/Woods, Digital Image Processing [1], from where all digital image processing techniques were taken, except for the ones developed by the author himself.
3.1 Definitions
An image in the sense of image processing is a set of equally dimensioned rectangular matrices of values, which define properties for the pixel in each cell of the corresponding matrices. The combination of all this information determines the appearance of the pixel in the resulting image. A good example are the well-known RGB images, which need three matrices containing the colour information for red, green and blue for each pixel. Since the algorithms used to process these matrices are more often than not identical for each information matrix, the image most widely used when explaining digital image processing procedures is a grayscale image. It needs only one matrix containing the pixel values from a defined range of values. Radar data from the X-Band radar in Bonn comes in a range of unsigned char [0..255] and can thus be looked upon as a natural grayscale image. All following procedures will make use of that convention. Another helpful construction for the purpose of processing is defining the image as a function f(x, y) which yields the grayscale value at pixel coordinates (x, y).
3.2 Spatial Convolutions
Convolution is among the simplest tools in image processing. It can be thought of as an image transformation by which the values of the pixels neighbouring the pixel under convolution are fed into some discrete function (the convolution kernel) to determine the pixel's value in the resulting image. The neighbourhood can be rectangular or a circle of influence, and the parameter determining its size (also called the convolution kernel size) may vary. The kernel function itself may be constant or may depend on the spatial coordinate or the values found in the neighbourhood. More often than not, though, the parameters and size of the convolution are constant, which gives rise to a significant simplification of the process: masks.
to c(p_j) = Σ_{i=1}^{9} p_i w_i in the output image. Repeating this process for each pixel results in the convolution of the image with the mask. Of course a mask needn't be limited to 3x3. The concept of masks has proven so generic and useful that the engineers of the Java programming language introduced a class in their graphics library for just this purpose in version 1.4.1.
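As an illustration of the mask mechanism, a direct (unoptimised) 3x3 convolution might look as follows (a Python sketch for illustration; the border handling, here 'leave unchanged', is a choice made for this sketch):

```python
import numpy as np

def convolve3x3(img, mask):
    """Convolve a grayscale image with a 3x3 mask of weights.
    Each interior pixel becomes the weighted sum of its 3x3 neighbourhood;
    border pixels are left unchanged in this sketch."""
    out = img.astype(float)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            out[y, x] = np.sum(region * mask)
    return out
```

With a mask of nine equal weights 1/9, this reduces to the 3x3 arithmetic averager discussed next.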
Averaging Masks
Among the simplest uses for masks is averaging. By setting all the weights to 1 and dividing the convolution image by the number of weights in the mask, each pixel in the result contains the arithmetic average of the neighbourhood of the pixel (including itself). A slightly more advanced use could be setting all diagonal entries to 0, thus limiting the neighbourhood to straight lines. A better solution, though, is choosing the weights according to the number of values under consideration. The mask shown in Figure 3.2 calculates the arithmetic average of a 3x3 neighbourhood. As a general guideline, the sum of the weights has to be 1 for averaging. The result of averaging is demonstrated in Figures 3.3 and 3.4. One of the biggest disadvantages of this method is the blurring, which makes edges considerably harder to locate. We will introduce a more subtle method of averaging later, the Gaussian blur filter.
Figure 3.4: Averaging Example, Filtered
Result of convoluting the image once with the averager shown in fig.3.2. Notice how the
bright spots have been averaged out and some of the smaller gaps have been filled.
Derivative Masks
As stated in Gonzalez/Woods, Digital Image Processing [1], p. 197, if the averaging process can be viewed as an analogue of integration, and this smoothes images, the opposite can be expected of differential masks. Since differentiation on a two-dimensional domain yields a vector, and the magnitude of the gradient is the length of that vector, calculating the gradient by using masks requires two masks, one for the x and one for the y direction:
" #
∂f
f = ∇f = ∂x
∂f
∂y
p
|∇f | = kfk = (∂f /∂x)2 + (∂f /∂y)2
Now let a 3x3 neighbourhood around a given point be numbered as indicated in Fig. 3.1. The gradient can then be approximated by two difference terms over this neighbourhood, the first corresponding to the approximate gradient in y, G_y, and the second to its counterpart in x, G_x. This scheme gives rise to a pair of masks known as Prewitt operators in image processing, shown in Fig. 3.5. Another form of differential operators, known as Sobel operators, has the advantage of emphasising the axis-oriented values over the diagonal elements, providing a smoother result than the Prewitt operator. The two Sobel operators are shown in Fig. 3.6. Generally, the coefficients of a differential mask sum to 0. For in-depth information on the presented operators, see [1].
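A gradient-magnitude computation with the two Sobel masks can be sketched as follows (the mask coefficients are the standard Sobel ones; the orientation convention is an assumption of this sketch and may differ from the thesis figures):

```python
import numpy as np

# Standard Sobel masks for the x and y direction
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient_magnitude(img):
    """|∇f| ≈ sqrt(Gx² + Gy²), with Gx and Gy obtained by convolving
    with the Sobel masks; border pixels are set to 0."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            gx[y, x] = np.sum(region * SOBEL_X)
            gy[y, x] = np.sum(region * SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

A vertical step edge yields a strong response along the edge and zero response in the flat regions, which is exactly the behaviour edge detection relies on.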
Figure 3.5: Prewitt Operators
The Prewitt Operator for x (left) and y (right) direction correspondingly.
3.3.2 Maximum

The pixel value is replaced by the maximum of the values found in the sample. It is a very good filter for enhancing structural views of the data and filling gaps, but it destroys a lot of the fine-grain structure. It is the steam-hammer among the presented methods, but good for boundary finding in weak data.
3.3.3 Median

The median of a sample of values is defined as the 0.5 percentile of these values [1]: it is the value in the sample above which half of the values lie and below which the other half lie. An example of applying a median filter to a 3x3 neighbourhood of the data presented in Fig. 3.3 is shown in Fig. 3.7.
3.3.4 Percentile
The best-suited averaging method found in the course of this work was the percentile filter. A percentile is chosen in advance, and for each sample the original pixel value is replaced by that percentile of the sample. A carefully chosen percentile has all the desirable properties of the maximum filter, yet preserves the fine-grain structure of the data a lot better than all the other methods. It is computationally more intensive, since an interpolation is done for each sample, but in practical application the difference was found to be imperceptible, and the results justify the extra effort. Note that the maximum filter is the 100% percentile and the median is the 50% percentile.
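A percentile filter over a square neighbourhood can be sketched as follows (a Python sketch; the square rather than disk-shaped neighbourhood is a simplification made here for brevity):

```python
import numpy as np

def percentile_filter(img, q=80, radius=1):
    """Replace each pixel by the q-th percentile of its (2r+1)x(2r+1)
    neighbourhood. q=100 reproduces the maximum filter, q=50 the median.
    Border pixels are left unchanged in this sketch."""
    out = img.astype(float)
    h, w = img.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            region = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.percentile(region, q)
    return out
```

The interpolation mentioned above is what np.percentile performs when the requested percentile falls between two sample values.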
3.4 Thresholding

Thresholding denotes the process of limiting the range of possible values for the purpose of differentiating between the background and foreground of a given image. Often the term thresholding is used synonymously with a highpass filter, where all values must lie above a certain value to pass the filter. Thresholding can just as well mean the reverse (lowpass) or a combination of both (bandpass). For the purpose of this work, only a highpass filter was implemented and used.

Figure 3.8: Percentile Averaging Example
July 12th, 1999, 12:31, percentile averaging 80% in a 3x3 neighbourhood.
3.5 Other Filters Used
3.5.1 Isolated Bright Pixel Filtering
Single points are isolated pixels which differ considerably in brightness from their immediate surroundings. Since they cause trouble in later stages of the object detection (namely in the Gaussian scale space analysis), a procedure was devised to remove them. For each point, the differences with all points in a 4x4 neighbourhood are considered. If more than two of them exceed the chosen maximum gradient, the pixel is assumed to be either an isolated point of strong reflectivity or part of a line-like structure of that type, and is replaced by a simple arithmetic average of its surrounding pixels. Otherwise it passes unchanged. The following figures illustrate this using a maximum gradient of 100/pixel (100 dBZ/km).
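A sketch of such a bright-pixel filter follows (for brevity this sketch uses a 3x3 neighbourhood rather than the 4x4 neighbourhood described above, and the replacement value is the arithmetic average of the eight surrounding pixels; both are choices made for illustration):

```python
import numpy as np

def remove_bright_pixels(img, max_gradient=100):
    """Replace pixels that differ from more than two of their neighbours
    by more than max_gradient with the average of the surrounding pixels."""
    out = img.astype(float)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            diffs = np.abs(region - img[y, x])
            # count neighbours exceeding the maximum gradient (centre diff is 0)
            if np.count_nonzero(diffs > max_gradient) > 2:
                out[y, x] = (region.sum() - img[y, x]) / 8.0
    return out
```

An isolated spike is thus flattened into its surroundings, while a pixel with at most two strong neighbours, e.g. on a legitimate edge, passes unchanged.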
3.5.2 Speckle Filtering
Speckle consists of small particles randomly distributed over the image; it can be thought of as dust, scratches or other small-scale noise in e.g. a photograph. In the context of this work, speckle is defined as small-scale objects which need removal in order not to disturb higher layers of processing. Especially when using Gaussian blur filtering, small-scale yet highly intense spots in the data can get spread out widely, resulting in noise in the scale space. Therefore, the following procedure was devised to get rid of it.
A pixel radius is chosen along with a minimum coverage percentage. Each pixel in the
image is then taken as the midpoint of a disk with said radius, and the coverage within
the disk is calculated. Since the data consists solely of bright Blobs on a dark
background, and this background is defined as pixels of value 0, the coverage is simply
the number of non-zero pixels divided by the overall number of pixels taken into
consideration. If that number equals or exceeds the chosen percentage, the pixel is
considered part of a large enough structure and passes the filter. If, on the other
hand, the coverage around that point is smaller than the chosen percentage, it does not
make it into the result. The parameters need to be chosen very carefully, since too high
a threshold for a given radius results in the removal of too many boundary points from
originally sufficiently large structures. A good combination of values was found to be a
radius of 11 pixels with a required coverage of at least 10% at a resolution of 200x200
pixels. Fig. 3.11 and Fig. 3.12 illustrate the method.
Note the small remains of speckle near unfiltered areas in the top right-hand area. If
a very small-scale object lives near enough to a bigger one, enough points from the
adjacent bigger structure fall into the area of influence of the smaller one, keeping it
alive through the filter. Since this is limited to a fraction of the radius of influence,
the errors introduced are not of importance. Another configuration fooling the filter are
dense yet singular spots which keep each other alive; a remainder owed to this configuration
can be seen right in the centre of the de-speckled image. Overall, however, the presented
method delivers good enough results for the subsequent processing stages.
Figure 3.11: Image with Speckle
July 13th 1999, 11:41. The shown image was produced by applying a cluttermap correction,
removing bright spots, thresholding at 12.5 dBZ and projecting onto the Cartesian plane using
a resolution of 200x200 pixels. In the centre, remains of the cluttermap correction can be seen.
Chapter 4
Although this concept is conceptually very easy to grasp, it resisted a mathematical
formulation in terms of signal processing for some time, despite the fact that all the
necessary mathematical concepts were ready by the mid-1800s [5]. It is interesting
that, although the scale-space idea in the western hemisphere is usually said to have
appeared first in a paper by A. P. Witkin [6] in 1983 or an unpublished report by Stansfield
(1980), Weickert points out that the first Gaussian scale-space formulation was proposed by
Taizo Ijima in Japan in 1959. Two theories of scale-space thus developed surprisingly
independently of each other in Japan and the western world. A comparison of the two theories
was done by Weickert in his paper ”Scale-Space was discovered in Japan” [5], which is also
a good, compact introduction to the general ideas of the theory.
Within the confines of any given image1 the concept of scale becomes somewhat relative.
Lindeberg states in his book ”Scale Space Theory in Computer Vision” [3]: ’The extent of
any real world object is determined by two scales, the inner scale and the outer scale. The
outer scale of an object or a feature may be said to correspond to the (minimum) size of a
window, that completely contains the object or the feature, while the inner scale may loosely
be said to correspond to the scale at which substructures of the feature or object begin to
appear’.
Scale Space Theory is a mathematical model, which strives to give a robust and usable
description of the property ’scale’.
1 image in this work is used synonymously to 2-D signal representations.
4.2 Short Introduction to Gaussian Scale Space
This section basically subsumes Lindeberg,1994, Chapter 2. Consider a one dimensional
’image’ F : IR −→ IR. Now a scale parameter t ∈ IR+ is introduced. Small values of t shall
represent finer–, larger values coarser scales. Then the image F is abstracted into coarser
and coarser scales by gradually increasing t, resulting in a family F (x, t) of images, param-
eterized by t. This family is called the scale space representation of the image, L(x, t). It
contains information of each object in F at each considered scale. This has some similarity
with the wavelet approach. As opposed to wavelets, the scale space representation does
shrink in size as the scale parameter increases. Scale Space is useless for data compression.
How does the abstraction take place? For an illustration, a one-dimensional signal is
instructive. Again, let F : IR −→ IR. The scale-space representation L of F starts at
scale 0 (the original image) and images at coarser scales are given by convolution with a
scale-space kernel g:
L(x, 0) = F (x) (4.1)
L(x, t) = g(x, t) ∗ F (4.2)
which is calculated in the form of a convolution of F with g:
L(x, t) = ∫_{λ=−∞}^{∞} g(λ, t) F(x − λ) dλ    (4.3)
Although many possible scale-space kernels are conceivable 2 , the Gaussian kernel g(·, t) 3
has by far the most important stance in the field of scale-space theory:
g(x, t) = (1/√(2πt)) e^{−x²/2t}    (4.4)
It has a number of desirable properties (see Lindeberg, 1994). First of all, it is
normalised in the sense that

∫_{x∈IR} g(x, t) dx = 1    (4.5)
It has a semi-group property, which results in the fact that the convolution of a Gaussian
kernel with a Gaussian kernel is another Gaussian kernel:

g(·, t1) ∗ g(·, t2) = g(·, t1 + t2)    (4.6)

This has a technically important implication for scale-space representations: a scale-space
representation L(x, t2) can be computed from a scale-space representation L(x, t1) with
t1 < t2 through convolution with a Gaussian kernel g(·, t2 − t1):

L(x, t2) = g(·, t2 − t1) ∗ L(x, t1)    (4.7)
This is the cascade smoothing property of the scale-space representation. Furthermore, the
kernel is separable in N dimensions, such that an N-dimensional Gaussian kernel
g : IR^N −→ IR can be written as

g(x, t) = ∏_{i=1}^{N} g(x_i, t)    (4.8)

which reduces the number of processing operations needed for computing convolution masks in
the spatial domain considerably.
2 The two properties required to make a kernel useful: being unimodal and positive
3 g(·, t) meaning g(x, t) ∀ x ∈ IR
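The separability of Eq. 4.8 can be sketched as two 1-D convolution passes over a 2-D image. The kernel truncation follows the effective width of the next section; all function names are illustrative, not those of the thesis software:

```python
import numpy as np

def gauss_kernel_1d(t, decay=0.01):
    """Sampled 1-D Gaussian g(x, t), truncated at the effective width
    x_max(t) = sqrt(-2 t ln(decay)) of Eq. 4.10 and renormalised."""
    xmax = int(np.ceil(np.sqrt(-2.0 * t * np.log(decay))))
    x = np.arange(-xmax, xmax + 1)
    k = np.exp(-x * x / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return k / k.sum()

def gauss_blur_2d(img, t):
    """Separable 2-D Gaussian blur: one 1-D convolution per axis (Eq. 4.8),
    reducing the cost per pixel from O(w^2) to O(2w) for kernel width w."""
    k = gauss_kernel_1d(t)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```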
4.2.1 Effective Width
In practical applications the Gaussian kernel is only calculated out to a certain distance
from its origin, its effective width x_max. In this work, this distance was determined for
each scale as the point x_max(t) at which the value of g(x_max(t), t) has decayed to a
fraction δg of g(0, t). This fraction was called the decay δg and is adjustable in the
software, although it was mostly left at its default of 0.01. Thus, the width of the kernel
operator was calculated through:

δg = g(x_max, t) / g(0, t) = ((1/√(2πt)) e^{−x_max²/2t}) / ((1/√(2πt)) e^0) = e^{−x_max²/2t}    (4.9)

and thus

x_max(t) = √(−2 t ln δg)    (4.10)
which is also the width of the mask used to calculate the kernel.4
The width of the kernel is expressed in image coordinates, where the basic unit is one
pixel. For relating x_max to distances in [m], the resolution of the image has to be taken
into account.
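Eq. 4.10 translates directly into code. The optional metres conversion reflects the resolution remark above; the parameter names are assumptions of this sketch:

```python
import math

def effective_width(t, decay=0.01, metres_per_pixel=None):
    """Effective kernel width x_max(t) = sqrt(-2 t ln(decay)), Eq. 4.10.

    `decay` is the relative value delta_g at which the kernel is cut off.
    If metres_per_pixel is given, the width is returned in metres instead
    of pixels (the image resolution must be known for that conversion).
    """
    xmax = math.sqrt(-2.0 * t * math.log(decay))
    return xmax * metres_per_pixel if metres_per_pixel else xmax
```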
How does convolution with a Gaussian kernel affect the data? Figure 4.1 shows a scale-space
representation of random data which has been modulated by a sine. Scale increases
from bottom to top:
4 In the literature on scale-space, the effective width is often deduced from the idea that the weighted
averaging introduced by the Gaussian kernel is similar to measuring the signal at point x through a circular
aperture of characteristic length σ = √t, so for example in Lindeberg, 1994.
Figure 4.1: 1-D Scale Space Representation
Scale increases from 0 (bottom) to 0.8 (top).
Notice how the small-scaled random signal gets less and less important as the scale
increases. The structure that remains is the larger-scaled sinusoidal variation.
Of course Gaussian filtering in its own right is a well-known technique for de-noising noisy
data and nothing new. In the context of scale space, however, the ”noise” is not an unwanted
part to be filtered out, but simply the property of the given signal at the scale where it
is visible. The scale-space representation is constituted by the whole family of curves,
parameterised by t, at different levels of detail.
4.2.2 Extension to 2D
The extension into a higher dimension is straightforward. The image function is extended
to F : IR² −→ IR and the Gaussian kernel becomes

g(r, t) = (1/(2πt)) e^{−|r|²/2t}

where r ∈ IR². The convolution of F with g(·, t) is the integral over the whole domain:

L(r, t) = ∫_{λ∈IR²} g(λ, t) F(r − λ) dλ
The scale-space representation of a 2D image is a 3D space, where the scaled versions of
F stack up along the t axis in L(x, t). Outlines of structures in scale-space appear as
upside-down domes or mountains.
4.3 Blobs
4.3.1 Definition
Grayscale imagery is composed of areas of different brightness. Blobs are areas in the image
where a desired property remains relatively stable and which are somewhat distinguished
from their surroundings. In grayscale images, the two candidates are the bright Blob on dark
background and its evil twin, the dark Blob on bright background. In the case of radar data
in the given representation this is particularly easy: there are only bright areas against a
dark background, since only the bright areas are of interest.
[Figure: a one-dimensional signal f(x), its gradient and its Laplacian.]
Notice how the gradient reaches its maximum in the middle of the slope, and observe how
the Laplacian changes sign in the process. There are two basic techniques for obtaining the
location of edges using derivative operators: gradient maxima and Laplacian zero crossings.
For the course of this work, the Laplacian zero crossing was used, approximated by
the mask shown in Fig. 4.3, which is a second-order derivative of a Gaussian smoothing
operator (see [1], chapter 7), a so-called Mexican Hat operator. Only points with negative
Laplacian were considered as candidates for edge points; that way the edge is actually
located inside the bright Blobs. A demonstration of this can be seen in Fig. 4.4.
For the following procedure let F be the original image. F is first smoothed using a
Gaussian kernel g(·, t)5 in order to keep the very noise-sensitive Laplacian under control,
resulting in a smoothed image G; then the Mexican Hat edge detection, denoted
by MH, is applied. The smoothing is somewhat redundant, since the Mexican Hat operator
carries a smoothing property itself, but the results are nonetheless usable.
G = g(·, t) ∗ F
E = MH ∗ G
Let all N points in E satisfying the edge criteria be collected into a list S = {n1 , n2 , ..., nN }.
Every node ni is composed of the location in image coordinates and a pointer to the next
5 g(·, t) means Gaussian convolution kernel with scale t
entry, ni+1.6
Starting with an empty boundary node list b1, the first node n0 ∈ S is added to b1.
Then the immediate 8-neighbourhood of n0 is searched in S. Every point found to be a
direct neighbour is considered part of the boundary b1 and added, if it has not been
added already. Then this newly found friend is subjected to the same treatment. The process
continues until no more new points can be added to b1. Afterwards, b1 is removed from S
and the process starts all over again, this time with b2, until S is empty. This results in
K closed boundaries:
b1 = {n1, n2, ..., nN1}
b2 = {n1, n2, ..., nN2}
...
bK = {n1, n2, ..., nNK}
where the sets bj are pairwise disjoint in space and their union is S:

S = ∪i bi
For each now closed boundary bj a Blob object Bj is created and the boundary is stored
within for future use.
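The boundary-growing procedure above can be sketched as a standard 8-connected component search over the set S of edge points. The stack-based traversal and the function name are implementation choices of this sketch, not necessarily those of the thesis software:

```python
def group_boundaries(points):
    """Group edge points into closed boundaries via 8-neighbour search.

    `points` is a set of (x, y) edge pixels (the list S of the text).
    Starting from an arbitrary seed, all transitively 8-connected points
    are collected into one boundary; the process repeats until S is empty.
    """
    remaining = set(points)
    boundaries = []
    while remaining:
        seed = next(iter(remaining))
        remaining.discard(seed)
        boundary, stack = [], [seed]
        while stack:
            x, y = stack.pop()
            boundary.append((x, y))
            # visit all 8 neighbours still waiting in S
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (x + dx, y + dy)
                    if nb in remaining:
                        remaining.discard(nb)
                        stack.append(nb)
        boundaries.append(boundary)
    return boundaries
```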
4.3.4 Holes
As said before, the data under consideration only contains bright Blobs on dark background.
Nevertheless it is quite common to have areas of no signal completely enclosed by areas
bearing significant signal. These spots are called holes in the context of this work and they
pose a problem: since the edge detection algorithm finds the boundaries between the hole
and the surrounding bright area like any other transition, spurious Blobs are generated. In
order to remove these, each combination of Blobs is checked: consider two boundaries bj
and bk. If bk is completely contained within bj (complete geometrical inclusion7), then bk
is considered to be a hole and removed from the list of Blobs. This has to be done since
the following area sampling algorithm would be fooled by holes and run astray.
which lie inside, but not on, the boundary bj .
Please observe that the brightness shown has been adjusted to represent the whole range
of values of the Gaussian-blurred image. Using the fixed-value grayscale mapping would
have made the Blobs almost invisible, because the Gaussian kernel not only smoothes the
image but also levels the values down: the higher the scale, the lower the resulting
signal. It is clearly visible how scaling up discards more and more of the internal details
of the signal, and at large scales only a rough description of the original shape remains
visible. The inner scale of the image shown could roughly be estimated to lie around 128.
Figure 4.5: Scale Space Representation 1
Scale-space representation of the azimuth scan, September 8th 1998, at scales 0 (original
image), 2, 4, 6, 8, 16 and 32.
Figure 4.6: Scale Space Representation 2
Selected points in the scale-space representation, continued for scales 64, 128, 256, 512,
1024 and 2048.
4.5 Blob Detection in Scale-Space Images
The problem posed by images under Gaussian scale-space transformation for detecting
objects is clearly the absence, or massive dislocation, of clean edges. Since the Gaussian
blur tends to smooth the edges out, artificial edges have to be re-introduced. How can this
be done? A simple approach is to subject L(x, t) to a thresholding procedure. Since the
Gaussian kernel g(·, t) tones the values down more and more with increasing scale t, it is a
good idea to use adaptive thresholding. The following series repeats the process of the
previous section on the same data, but this time each slice of the scale-space representation
is subjected to an adaptive thresholding at Trel = 20%. This value will subsequently also be
referred to as the cut-off value.9 After thresholding, the edge detection introduced from
Section 4.3.2 onwards was applied. See Figures 4.7 and 4.8 for the results.
By observing their scale-space representation it is clearly visible that the resulting
boundaries settle around the prevalent structures in the original data. The number of
detected Blobs K decreases with increasing scale t, as could be expected.
9 which means the lowest 20% of the data are trashed (set to 0)
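A sketch of such an adaptive threshold follows, assuming that "adaptive" means relative to the maximum of each scale-space slice (an interpretation consistent with the footnote); the function name is illustrative:

```python
import numpy as np

def adaptive_threshold(img, t_rel=0.20):
    """Adaptive (relative) thresholding of a scale-space slice.

    Values below t_rel times the slice maximum are set to zero, so the
    cut-off follows the overall damping of the signal with scale.
    """
    out = img.copy()
    out[out < t_rel * img.max()] = 0
    return out
```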
Figure 4.7: Edge Detection in Scale Space Images 1
Thresholded edge detection at scales 2, 4 and 8. Left: scale-space representation. Right:
resulting boundaries on the original data.
Figure 4.8: Edge Detection in Scale Space Images 2
Thresholded edge detection at scales 16, 32 and 64. Left: scale-space representation.
Right: resulting boundaries on the original data.
4.6 Automatic Detection of Prevalent Signals
As could be seen in the previous section, an increasing scale parameter t leads to prevalence
of the most significant and dampening of the less significant features. The scale, at which
the prevalent features remain while the insignificant disappear, does vary considerably from
image to image. It depends a great deal on the complexity of the scenery. Prevalent, in
the scale-space context, is always to be seen in the context of the scale of the present image
features. This means, that an approach based on a similar level of detail (in scale space
terms) in subsequent images can not work properly with a fixed scale. Thus, an automatic
process capable of distinguishing the prevalent from the insignificant Blobs would be highly
desirable. The question is though: how can prevalent be defined in terms of scale-space?
Consider the following idea: Given the fact that (in general) the number of detected
Blobs decreases as the scale parameter t increases, could it be reckoned that Blobs surviving
the upscale process for a given number of repetitions are the prevalent Blobs?
This idea is used for the following procedure. Starting at a low scale parameter t0,
the number of Blobs is detected. The scale is increased by a fixed increment δt and the
number of Blobs found is compared to the previous number. This process is repeated
until the number of detected Blobs stabilises over Nmax iterations. The parameter Nmax
determines the scale-space persistence required for any given object to be classified
as prevalent. The automatically selected scale is chosen to be the scale parameter
t of the first scale-space slice L(·, t) of the stable series, in order to conserve
maximum detail. The complete set of required parameters is thus the start scale t0, the
scale increment δt and the scale-space persistence Nmax. Blobs considered persistent
are thus required to remain distinguishable over an effective scale difference of Nmax · δt.
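The scale-selection loop can be sketched as follows. The Blob detection itself is abstracted into a callback, and the safety bound `t_limit` is an addition of this sketch, not a parameter of the thesis software:

```python
def select_scale(count_blobs, t0, dt, n_max, t_limit=10000.0):
    """Automatic scale selection by Blob-count persistence.

    `count_blobs(t)` returns the number of Blobs detected at scale t.
    Scale grows from t0 in steps of dt until the count has stayed
    constant for n_max steps; the first scale of the stable series is
    returned in order to conserve maximum detail.
    """
    stable_since, stable_count = t0, count_blobs(t0)
    t = t0
    while t < t_limit:
        t += dt
        n = count_blobs(t)
        if n != stable_count:
            # count changed: the stable series restarts here
            stable_since, stable_count = t, n
        elif t - stable_since >= n_max * dt:
            return stable_since
    return stable_since
```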
Figure 4.9 shows an azimuth scan from July 1999. An extensive signal is present on
the west side, the upper east side is populated by smaller, scattered signals. Figure 4.10
illustrates the automatic upscale process for Nmax of 1,2,4,6,8,10. Depending on the Nmax
setting, different Blobs ’prevail’ or structures merge into larger Blobs as expected.
Figure 4.10: Automatic Scale Detection Results
Blob Boundaries detected at Nmax of 1,2,4,6,8 and 10
Frequently, repeated convolution with the Gaussian kernel destroys borders between objects
which lie close to each other but are connected by low-intensity areas. In order to
alleviate this effect, a feature called inprocess cut-off was introduced. It works by
thresholding each intermediary scale-space representation before performing the next
upscaling step. In the further course of this thesis, the intermediary cut-off is denoted
by Tinp (adaptive threshold). The net effect is that boundaries move closer to the local
maxima of the scale-space representation; see Figure 4.11. The inprocess cut-off should be
used with care: if chosen too high, it results in a massive loss of information. A value
yielding solid results was found to be Tinp = 0.1 (10% adaptive threshold).
Chapter 5
Tracking means extracting data about movement from subsequent sets of data. The movement
need not be physical movement between two points in time; other parameters changing
between two images may be suitable (for example the tracking of objects under scale
transformations).
The speciality of the SARTrE1 tracking tools lies in the ability to automatically
select features worth tracking in the context of all objects in any given snapshot, and
in the correlation procedure, which takes histograms of Blob content (signatures) into
account. The focus of attention is drawn to the salient image structures by applying the
automatic detection procedure presented in Section 4.6.
There exist a couple of tracking algorithms based on different principles for obtaining
information about what happened between time t and t + ∆t:
Centroid-Tracking :
can be applied if the trackable data can be decomposed into distinct objects under
some criteria. A centroid – a designated point – is assigned to each object. Subsequent
images are analysed with the goal of finding the same object at its new position, and
the displacement of the object between the two images is estimated as the displacement
of its centroid. Of course the problem of correlating objects from one image to another
depends on the nature of the image or object and the criteria used: a certain
grey value, a geometrical shape or another suitable form of signature may be used.
Often, the search is narrowed by a-priori or otherwise obtained information
about the maximum possible object velocity and the size of the object, restricting the
search window in the subsequent image. Centroid tracking was first applied in meteorology
by Barclay and Wilk (1970). A recent adoption of this form of tracking is the Trace3D
algorithm, developed in Karlsruhe by J. Handwerker, 2002 [9].
Statistical Cross-Correlation :
is not concerned with individual objects as such, but with the extraction of flow patterns
from image series. This is achieved by defining a box size and statistically correlating
all possible boxes at time t with all possible boxes at time t + ∆t. The boxes achieving
the highest correlation are connected. The resulting field of displacement vectors depends
on the box size as well as on the data. Statistical box correlation suffers from
ambiguities inherent in the correlation process and is often highly sensitive to changes
in box size; for an illustration of the ambiguity problem, see E. Heuel 2004 [14]. An
example of this type is the TREC algorithm (Rhinehard 1981) [10], which was improved by
L. Li, W. Schmid and J. Joss 1994 (COTREC) [11] through directional post-processing,
applying the continuity equation to the vector field delivered by TREC; the results
were used for Nowcasting. This was the basis for the improved algorithm developed
at the ETH Zurich by S. Mecklenburg, 2000 [12].
1 The abbreviation SARTrE is short for Scale Adaptive Radar Tracking Environment. The Environment
mentioned refers to the reusable software libraries developed for this work.
Tracer Tracking :
A special form of semi-automatic tracking is applied when the object under observation
exhibits little clue as to its motion, for instance when determining flow patterns and
velocities in fluids. In this case, a tracer is picked or introduced and the motion of the
tracer is tracked instead. An example is the estimation of rotational velocities in a
tornado by tagging debris carried by it and following it through a series of high-resolution
film frames. In the context of radar meteorology, this form of indirect tracking has no real
significance.
For the course of this work, the natural approach to tracking precipitation seemed to be
tracking Blob centroids. As signature, the histogram of reflectivity within each Blob was
chosen. The correlation was performed using a weighting scheme including spatial
displacement, histogram size and histogram shape (via Kendall’s Tau correlation).
5.1 Histogram
A histogram of reflectivity values contains the counts of each value from the range of
(discrete) possible values. In our case, the range was chosen to be the natural range
present in the data, where values range over [0..255]. Each Blob area A was scanned and the
found values counted up. As an example, the histograms of the Blobs detected in Figure 5.3
are shown in Figure 5.2.
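Computing such a histogram is a one-liner over the Blob's pixel mask; representing the Blob area by a boolean mask is an assumption of this sketch:

```python
import numpy as np

def blob_histogram(img, mask):
    """Histogram of reflectivity values inside one Blob area.

    `mask` is a boolean array marking the Blob's pixels; values are
    byte-valued reflectivities in [0..255], so 256 classes are counted.
    """
    return np.bincount(img[mask].ravel(), minlength=256)
```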
Figure 5.1: Histograms, Detected Blobs
Azimuth Scan, July 12th 1999, 13:01. Four distinct Blobs detected with identifiers
#1,#2,#3,#4
5.2 Centroid
5.2.1 Geometric Centre of Boundary
When saying that the centroid of the object is used for determining its displacement, the
question was left open what the centroid actually is. At first glance, the geometric centroid
of the boundary points comes to mind. Assume a boundary b from a Blob B containing N
points xi = (xi, yi).

xcentre = (1/N) Σ_{i=1}^{N} xi    (5.1)

ycentre = (1/N) Σ_{i=1}^{N} yi    (5.2)
This has a big drawback: since Blobs tend to change in shape, yet may stay relatively intact
in terms of overall size and (foremost) position, the geometric centre of the boundary
points may yield spurious movement. Although that option was left in the software for
pedagogic purposes, it is not a good choice. Two other candidates proved to be a lot more
stable:
[Figure 5.2: Histograms of the detected Blobs ’h200’, ’h477’, ’h760’ and ’h7255’; counts
N(Z) plotted against reflectivity Z in byte values.]
area A(x) of a Blob B at locations x1 ...xN , where x = (x, y) ∈ IR2 .
Asum = Σ_{i=1}^{N} A(xi)    (5.3)

xcentre = (1/Asum) Σ_{i=1}^{N} xi A(xi)    (5.4)

ycentre = (1/Asum) Σ_{i=1}^{N} yi A(xi)    (5.5)
where Asum is the sum of reflectivity in the area and A(xi ) the value measured at each
point xi . This way the centroid follows the distribution of reflectivity, wherever the boundary
might be.
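Eqs. 5.3-5.5 can be sketched directly. Representing the Blob area by a boolean mask over the image is an assumption of this illustration:

```python
import numpy as np

def reflectivity_centroid(img, mask):
    """Reflectivity-weighted centroid of a Blob (Eqs. 5.3-5.5).

    Each pixel position is weighted by its reflectivity value A(x_i),
    so the centroid follows the mass of the signal, not the boundary.
    """
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(float)
    a_sum = w.sum()
    return (xs * w).sum() / a_sum, (ys * w).sum() / a_sum
```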
Figure 5.3: Centroids
Azimuth scan, July 7th 1999, 20:31. Different methods to obtain the centroid. Top left:
data with boundaries. Top right: geometrical. Bottom left: reflectivity. Bottom right:
scale space.
5.3 Correlation
Tracking consists of recording the displacements and histogram developments of Blobs
through time. At time t0 there is of course nothing to match against, so the current
Blobs are provided with unique IDs and stored in a collection. At all subsequent times,
however, the correlation’s task is to transfer the IDs from old Blobs to the new Blobs
identified as their successors. The tracks are based on subsequent Blobs carrying the same ID.
Consider two images (also called snapshots) at two different points in time, F(t1) and
F(t2) with t2 = t1 + ∆t. A critical time difference ∆tmax can be set, which determines the
maximum time between two snapshots for which correlation is attempted. If the time
difference ∆t between the two snapshots exceeds ∆tmax, the correlation is omitted and the
new Blobs simply replace the previous Blobs, receiving entirely fresh IDs. This is useful
in situations with fast-moving objects and sparse data; in such situations it is best to
lift the pencil and start over, instead of producing errors in the resulting tracks.
Assume ∆t is within reasonable limits and the snapshots have yielded a number of Blobs,
B_prev and B_new. For each new Blob b_i^new ∈ B_new a table is calculated, which contains
a set of values with respect to each old Blob b_j^prev ∈ B_prev:
centroid displacement dR :
This is simply the distance between the centroids of b_i^new and b_j^prev in metres.
displacement correlation value τR :
After all displacements have been calculated, they are normalised by the maximum
displacement value found over all correlations and fed into the complementary Gaussian
error function, yielding values nearer to 1 the closer the argument gets to 0. The
resulting value ranges over ]0..1] and is named τR.
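The mapping from a normalised displacement to τR can be sketched with the standard complementary error function; the function name is illustrative:

```python
import math

def tau_r(displacement, max_displacement):
    """Displacement correlation value tau_R.

    The displacement is normalised by the largest displacement found
    over all candidate pairs and fed into the complementary error
    function erfc, yielding values in ]0..1] that approach 1 as the
    displacement approaches 0.
    """
    return math.erfc(displacement / max_displacement)
```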
histogram size difference d|H| :
|H| is defined as the number of non-zero values that went into the histogram, i.e.
simply the sum of all counts of all classes except class 0. The difference between the
histogram sizes, d|H|, is calculated for each pair b_i^new and b_j^prev.
histogram size correlation τH :
is obtained by normalising the differences d|H| by the highest present difference and
feeding this value to the complementary Gaussian error function again. As usual, this
yields a value which approaches 1 as d|H| approaches 0. This value is called τH.
histogram shape correlation τK :
The Kendall rank correlation is a statistical correlation suitable for data which has
to meet only one criterion: it should be rankable. The ranks are then compared in
categories of concordant or discordant alone. No assumption about the underlying
distribution is made and none of its parameters are estimated (non-parametric
correlation). Kendall’s Tau is described in Numerical Recipes in C [7], Chapter 14.
Basically, the correlation compares data by counting the occurrences of higher in rank
(concordant, aka con), lower in rank (discordant, aka dis) or equal (a tie). If the tie
occurs in x, the count goes to an extra counter (extra_x); if it occurs in y, it counts
as an extra_y. If the tie occurs in both, it is not counted at all.
The basic formula to calculate Kendall’s Tau according to Numerical Recipes in C [7]
is:

τK = (con_all − dis_all) / ( √(con_x + dis_x + extra_x) · √(con_y + dis_y + extra_y) )    (5.9)
How does this apply to the histograms? Each histogram consists of value counts (yi)
in the 256 classes of possible values (xi). In x every value will be a tie, since all
classes are present in both histograms at all times (by construction). This leaves only
the yi’s of the two histograms of b_i^new and b_j^prev to be compared, which are the
counts for the classes xi, and these usually differ. Using Kendall’s Tau yields a
parameter which is not bound to the absolute numerical values of the histograms compared,
but merely to their difference in shape. τ ranges from −1 (completely anti-correlated)
to +1 (completely correlated).
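The pair counting described above can be sketched in a straightforward O(n²) form; the function name is illustrative and the loop is unoptimised compared to a production implementation:

```python
def kendall_tau(y1, y2):
    """Kendall rank correlation of two equal-length count sequences.

    Concordant and discordant pairs are counted; ties in either
    sequence go to separate extra counters, and double ties are ignored,
    following the formula of Eq. 5.9.
    """
    con = dis = extra1 = extra2 = 0
    n = len(y1)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = y1[i] - y1[j], y2[i] - y2[j]
            if a == 0 and b == 0:
                continue          # tie in both sequences: not counted
            if a == 0:
                extra1 += 1
            elif b == 0:
                extra2 += 1
            elif (a > 0) == (b > 0):
                con += 1          # same ordering in both sequences
            else:
                dis += 1          # opposite ordering
    denom = ((con + dis + extra1) * (con + dis + extra2)) ** 0.5
    return (con - dis) / denom if denom else 0.0
```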
coverage, previous by new :
This value is not used for correlation, but for determining merges and splits (see below).
Consider two arbitrary Blobs bi and bj and their respective areas Ai and Aj. Let the
coverage operator v3 be defined as:

bi v bj = |{(x, y) ∈ Ai : Ai(x, y) > 0 ∧ Aj(x, y) > 0}| / |{(x, y) ∈ Ai : Ai(x, y) > 0}|    (5.10)

or in human-readable form: what percentage of the area covered by bi is covered by
bj as well? Clearly, if that value reaches 1, bi is completely covered by bj; if the value
is 0, they are completely distinct (in terms of covered ground). This coverage value for
the current pair is computed as b_j^prev v b_i^new and used to check for merges.
coverage, new by previous :
This is just the same operator applied in reverse order, b_j^new v b_i^prev, and is used
for detecting splits.
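Eq. 5.10 can be sketched on a common pixel grid; the common grid and the function name are assumptions of this illustration:

```python
import numpy as np

def coverage(a_i, a_j):
    """Coverage operator of Eq. 5.10: the fraction of the non-zero area
    of Blob i that is also covered by non-zero area of Blob j.

    a_i and a_j are the Blobs' area arrays on a common grid, zero
    outside the respective Blob.
    """
    own = a_i > 0
    both = own & (a_j > 0)
    return both.sum() / own.sum() if own.any() else 0.0
```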
When all correlative values of all possible pairs have been calculated, the values τR, τH
and τK are summed up with weights in order to obtain an overall correlation value for each
pair:

τ_j^i = wR τR^ij + wH τH^ij + wK τK^ij    (5.11)

where the subscript index j denotes the new and the superscript index i the previous Blob
involved. The purpose of the weights wR, wH and wK is to provide a device for putting more
emphasis on one or another aspect during operation. For most of the work they were all
set to 1, but in some situations the tracking accuracy could be improved, depending on the
data-sets, by putting more weight on one or the other. By setting one of the weights to
zero, it is even possible to eliminate the corresponding aspect completely from the
tracking. Assuming all weights at their default value 1, the overall correlation index
ranges from −1 (totally anti-correlated Kendall-τ, no spatial or histogram size correlation)
to +3 (perfect match). The following procedure needs no adjustment when the weights are
changed, because it works on a strictly relative principle.
The actual matchmaking4 5 is done by traversing the τ_j^i in descending order and pairing
the Blobs b_j^new with b_i^prev accordingly. If b_i^prev has already been matched to a new
Blob, the next-lower unmatched τ_j^i is chosen. Pairing means assigning the ID of b_i^prev
to b_j^new.
Before the match is made official, a couple of constraints have to be obeyed first:
maximum velocity v_abs^max :
The value v_abs^max is one fixed, mandatory parameter of the tracking process. It
limits the displacement of the centroid in the time ∆t between the two images. Since
that time is not always the same, a simple maximum range constraint would not work.
If the velocity resulting from the displacement of the centroids of two Blobs matched
by the correlation exceeds v_abs^max, the match is rejected and the new Blob is given
a fresh ID.
3 read: covered by
4 From Webster’s Revised Unabridged Dictionary (1913): Matchmaking Match”mak‘ing a. Busy in
making or contriving marriages; as, a matchmaking woman.
5 I hear it is particularly alive still in some areas of Ireland, where it is considered an honest pastime for
average velocity vav :
When entering a new tracking sequence, vav is set to v_abs^max. Subsequently, vav is
calculated as the mean value of the detected velocities greater than 0. The resulting
constraint is determined by a factor cav such that vav^max = cav · vav. This leaves
room for variation of the velocity up to the factor cav from the mean velocity of
the previous snapshot. If vav^max is exceeded, the match is rejected and the new Blob
is given a fresh ID.
After the matches have been made, they still have to be validated in the light of yet
another aspect, which concerns the development of Blobs from and into one another
over time. A situation frequently arising in radar data is the merge of several previously
distinct Blobs into one new Blob, or the split of one previous Blob into several Blobs in
the succeeding snapshot. This poses a problem: if the correlation indicates (and it might
well do) that a couple of participants in the merge or split match, and the velocity
constraints are observed, then the resulting centroid displacement will be wrong in these
cases. The method developed here to handle these problems is based upon the previously
introduced coverage operator v.
Merges :
A merge is defined as a situation where the area of multiple previous Blobs is covered
to a certain degree by the same newly detected Blob. The coverage of every previous
Blob b_i^prev by every new Blob b_j^new is calculated. If that coverage exceeds a pre-set
threshold cov_crit, the old Blob is added to a list of candidates C_j^merge for a merge
into the new Blob:

b_i^prev v b_j^new > cov_crit −→ C_j^merge += b_i^prev    (5.12)

If, at the end of comparing all previous Blobs with the new Blob b_j^new, C_j^merge
contains more than one Blob from the previous image, then a merge is assumed. In that
case, the matching (if any) of Blob b_j^new is undone and it is given a fresh ID.
Splits:
The reverse situation arises when a Blob b_i^prev from the previous image splits into
multiple Blobs b_j^new in the recent image. In this case, the same procedure is applied
in reverse. For each old Blob, the coverage with every new Blob is calculated:

    b_j^new ⊑ b_i^prev > cov_crit  →  C_i^split += b_j^new .    (5.13)

Again, if the number of Blobs found in C_i^split exceeds 1, a split is assumed to have taken
place. In that case, all the new Blobs in the split list are given new IDs, effectively
undoing all matches already made with those.
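The merge/split test built on the coverage operator can be sketched as follows. As an assumption for illustration only, a Blob is represented here by the set of its pixel coordinates, and "b1 covered by b2" is taken as the fraction of b1's pixel area that b2 overlaps:

```python
# Sketch of the merge/split candidate search (Eqs. 5.12 and 5.13).
# Assumption: a Blob is a set of pixel coordinates; coverage(b1, b2) is
# the fraction of b1's area overlapped by b2.

COV_CRIT = 0.3  # critical coverage found sufficient in this work

def coverage(b1, b2):
    """Fraction of Blob b1's area covered by Blob b2."""
    return len(b1 & b2) / len(b1)

def merge_candidates(prev_blobs, new_blob):
    """Previous Blobs covered by new_blob beyond COV_CRIT (Eq. 5.12).
    A merge is assumed if more than one candidate is found."""
    return [b for b in prev_blobs if coverage(b, new_blob) > COV_CRIT]

def split_candidates(prev_blob, new_blobs):
    """New Blobs covered by prev_blob beyond COV_CRIT (Eq. 5.13).
    A split is assumed if more than one candidate is found."""
    return [b for b in new_blobs if coverage(b, prev_blob) > COV_CRIT]
```

If two previous Blobs are both fully contained in one new Blob, both end up in the candidate list and a merge is declared; the reverse holds for splits.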
A critical coverage value cov_crit = 0.3 was found to be sufficient in all situations that
were considered during the course of this work. Ideally, the coverage would take the
individual size of the participating objects, as well as the overall velocity sensed in the
past scans, into account. The presented method works quite well, but leaves some room for
improvement.
If a pairing from the correlation has made it through the constraint and merge/split facilities
so far, it is assumed valid and stored. The storing happens in a dedicated object, which
creates separate lists of subsequent Blobs with the same IDs. A Track is generated from
this archive by traversing the stored Blobs for each ID in the order of their time-stamps and
connecting the centroids.
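The archive traversal can be sketched in a few lines. The data layout below (tuples of ID, time-stamp and centroid) is an assumption for illustration, not SARTrE's internal representation:

```python
# Sketch of Track generation from the match archive: Blobs sharing an ID
# are collected, ordered by time-stamp, and their centroids connected.
from collections import defaultdict

def build_tracks(stored_blobs):
    """stored_blobs: iterable of (blob_id, timestamp, centroid) tuples.
    Returns {blob_id: [centroid, ...]} with centroids in time order."""
    archive = defaultdict(list)
    for blob_id, timestamp, centroid in stored_blobs:
        archive[blob_id].append((timestamp, centroid))
    # sort each ID's nodes by time-stamp, keep only the centroids
    return {bid: [c for _, c in sorted(nodes)]
            for bid, nodes in archive.items()}
```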
5.4 Tracking Output
The results of a run on a series of images can be exported into a file which contains an
entry for each Track, consisting of a list of all nodes in that track. Each line in a Track's
node list contains the following fields, separated by whitespace and formatted according to
the UNIX printf standard (see man printf on a UNIX box):
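A sketch of such an export is given below. The fields used here (ID, time-stamp and centroid coordinates) are purely illustrative assumptions, not SARTrE's actual field list; only the whitespace-separated, printf-formatted layout is taken from the description above:

```python
# Sketch of a Track export with printf-style formatting (illustrative
# fields only: track ID, time-stamp, centroid x/y).

def export_tracks(tracks, path):
    """tracks: {track_id: [(timestamp, x, y), ...]}. One line per node."""
    with open(path, "w") as f:
        for track_id, nodes in tracks.items():
            for timestamp, x, y in nodes:
                # fields separated by whitespace, printf-style conversions
                f.write("%d %d %10.1f %10.1f\n" % (track_id, timestamp, x, y))
```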
5.5 Visualisation of Tracking Data
Tracks were visualised in the following manner: a hollow square is a starting point, a filled
triangle an end point and a filled disk an intermediary node. These marked points are
connected by straight lines, practically imposing a linear fit. Two modes of display are
possible:
All: For every snapshot, all Tracks detected in the Run so far are shown.
Current: Only those Tracks are shown which belong to Blobs visible in the current snapshot.
Background images can be chosen from the following selection⁶:
Further image elements include the type of scan and the time the scan was obtained in
GMT in the lower left corner, a spatial measure indicating 10 km in the top left corner, and
the scale parameter used in the lower right hand corner, if applicable.
6 Thanks to Dirk Meetschen from the Meteorological Institute in Bonn for providing data and software
for Orography.
5.6 Estimation of Quality, False Alarm Rates
For an estimation of the quality of the Tracking, a number of criteria were defined. They
were applied by a manual inspection of the results of Tracking Runs, time-slice per time-slice.
    FAR_segments = S_spur / (S_all − S_spur)    (5.16)
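As a sanity check, the rates can be computed directly. The track-level rate is assumed here to take the same form as Eq. 5.16 with K in place of S; this assumption reproduces the values tabulated in the case studies:

```python
# False alarm rate as in Eq. 5.16: spurious count over the remaining
# (non-spurious) count. Assumed to apply to both segments (S) and,
# analogously, whole tracks (K).

def false_alarm_rate(n_spurious, n_all):
    return n_spurious / (n_all - n_spurious)

# Values from the fixed-scale run of the first case study:
far_tracks = false_alarm_rate(7, 102)    # 7 / 95  ~ 0.0737
far_segments = false_alarm_rate(8, 384)  # 8 / 376 ~ 0.0213
```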
Chapter 6
Case Studies
The sum of all Tracks detected on that day is shown in Figure 6.1. Obviously some of
the detected Tracks are not correct. Since no directional smoothing has been applied, errors
of that sort can't be detected yet; directional smoothing is a feature yet to be implemented.
Manually removing the obviously wrong Tracks leads to Figure 6.2. The removed spurious
Tracks are depicted in Figure 6.3.
Figure 6.1: Fixed Scale, All Tracks
All Tracks detected July, 7th ’99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
Figure 6.2: Fixed Scale, Spurious Removed
All Tracks with spurious Tracks removed, July 7th '99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
Figure 6.3: Fixed Scale, Spurious Tracks
Spurious Tracks detected July, 7th ’99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
The following table contains a list of spurious nodes found in the run:
And here is a quality estimate according to the quality criteria defined in Section 5.6:
Kall 102
Kspur 7
FARtracks 0.0737 (7.37%)
Sall 384
Sspur 8
FARsegments 0.0213 (2.13%)
This clearly indicates that the algorithm’s precision suffers significantly from throwing
away all information from Tracks containing spurious segments, and that directional post-
processing would be strongly advisable in a unsupervised operation.
What happened at the moments where the correlation was wrong? It might be instructive
for the understanding of the algorithm to take a closer look. As an example, consider
the wrong matching for the Blob with the ID #43 in the step from 13:16 to 13:31. The
individual situations before and after the wrong matching are shown in Figures 6.4 and 6.5.
Figure 6.5: Spurious Track Analysis, After Mismatch
Spurious Track for Blob #43, situation after the mismatch, July 7th '99 13:31 GMT+2.
Let's take a peek at the correlation details to find out what caused the mismatch. The
correct match for #43 would have been #45. Remember that for each detected Blob in the
new image, the full set of correlations is calculated with each Blob in the previous image.
Every new Blob is given a temporary ID, which simply ranges from #0 .. #K-1, where K is
the number of new Blobs.
ID dR[m] τR dH τH τK τsum P vN N vO
32 56871.2 0.288063 0.5865 0.406849 0.6124 1.3073 0.00 0.00
42 69790.4 0.192330 0.2837 0.688276 0.3090 1.1896 0.00 0.00
43 4344.2 0.935322 0.1844 0.794264 0.3738 2.1034 0.00 0.00
36 63195.5 0.237793 0.0355 0.960003 0.3832 1.5810 0.00 0.00
41 11537.2 0.829361 0.0567 0.936047 0.3533 2.1187 0.00 0.00
37 39326.7 0.462558 0.0903 0.898358 0.3053 1.6662 0.00 0.00
40 6134.5 0.908766 0.2879 0.683918 0.4205 2.0132 0.00 0.00
44 19427.7 0.716666 0.2034 0.773625 0.3814 1.8717 0.00 0.00
The correct Blob from the new set would have been new Blob #0, whose correlation
table looks like this:
ID dR[m] τR dH τH τK τsum P vN N vO
32 54366.6 0.309824 0.6950 0.325657 0.4436 1.0790 0.00 0.00
42 75705.5 0.157299 0.0288 0.967460 0.1350 1.2598 0.00 0.00
43 7865.3 0.883189 0.0957 0.892396 0.3930 2.1686 0.00 0.00
36 69863.2 0.191867 0.2353 0.739318 0.3667 1.2979 0.00 0.00
41 10380.7 0.846241 0.2180 0.757807 0.3822 1.9863 0.00 0.00
37 39188.8 0.464129 0.3290 0.641701 0.3671 1.4729 0.00 0.00
40 13366.7 0.802823 0.4747 0.501969 0.3491 1.6539 0.00 0.00
44 27158.1 0.611926 0.4124 0.559716 0.3239 1.4955 0.00 0.00
Why was new Blob #0 favoured over new Blob #1? The spatial correlation for #0 is
0.883189, but for #1 it is 0.935322. The histogram size correlation with #0 is 0.892396, with #1
it is 0.794264. The histogram shape correlation is 0.3930 for #0 and 0.3738
for #1. This is a case where, although the spatial correlation clearly indicates the correct
match of old #43 with new #1, the match is rejected because the histogram size and shape
correlations outweigh the spatial one.
This situation was presented in detail to show the use of weights. In situations where
many similarly sized objects are present in a small area of the image, it makes sense to
increase the weight of the spatial correlation, wR, in order to minimise errors. However,
there is another method to optimise the tracking in unsupervised mode, by using a Scale-
Space approach; it is presented in the following section.
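The selection via the correlation sum can be sketched as follows. With unit weights, the plain sum of the spatial, histogram size and histogram shape correlations reproduces the τsum column of the tables above; the weights wR, wH and wK are the tuning knobs discussed (a sketch, not the actual SARTrE code):

```python
# Sketch of match selection via the weighted correlation sum.
# With unit weights, tau_sum = tau_R + tau_H + tau_K reproduces the
# tables above; raising w_r favours the spatial correlation.

def tau_sum(tau_r, tau_h, tau_k, w_r=1.0, w_h=1.0, w_k=1.0):
    return w_r * tau_r + w_h * tau_h + w_k * tau_k

def best_match(candidates, **weights):
    """candidates: {new_id: (tau_r, tau_h, tau_k)}; pick highest tau_sum."""
    return max(candidates, key=lambda i: tau_sum(*candidates[i], **weights))
```

Using the values from the tables for old Blob #43 against new Blobs #0 and #1, unit weights reproduce the mismatch (2.1686 for #0 beats 2.1034 for #1), while increasing the spatial weight, e.g. to 3, flips the decision to the correct #1.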
6.2 Tracking at Automatically Selected Scale
The following procedure differs in but one aspect from the method in Section 6.1: The
Scale Parameter t for the choice of Scale-Space Representation L(., t) of F (x) chosen for
detection of blobs, is not pre-set and held constant during a Tracking Run, but automati-
cally determined for each slice based on the prevalent feature detection process presented
in Section 4.6. Consequently, some tracks of objects will be missed if they are insignificant
in the context of the prevalent signals present in the data. Why would that be useful? The
main motivation for this step is the idea, that a Tracking Algorithm which is supposed to
deliver stable results in an unsupervised situation, will improve its overall performance, if it
focusses on the most significant features. In order to prevent too ruthless upscaling and the
consequent loss of information, the upscaling process should be undertaken carefully.
The following results were produced using the same data as in Section 6.1. All resulting
Tracks are shown in Figure 6.6. Manually removing the obviously wrong Tracks again
leads to Figure 6.7. The removed spurious Tracks are depicted in Figure 6.8.
Figure 6.6: Automatic Scale, All Tracks
All Tracks detected July, 7th ’99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
Figure 6.7: Automatic Scale, Spurious Tracks Removed
All Tracks with spurious Tracks removed, July 7th '99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
Figure 6.8: Automatic Scale, Spurious Tracks
Spurious Tracks detected July, 7th ’99 2:00 GMT+2 - July 8th, 1:41 GMT+2.
The following table contains a list of spurious nodes found in the run:
And again a quality estimate according to the quality criteria defined in Section 5.6:
Kall 89
Kspur 2
FARtracks 0.0230 (2.3%)
Sall 313
Sspur 2
FARsegments 0.0064 (0.64%)
The improvements in the quality of tracking are significant. FARtracks drops to less than a
third of its value in the fixed scale case, and the same holds from the segment point of view:
FARsegments dropped well below 1%.
6.3 Tracking at higher velocities
The presented case from July 7th '99 was comparatively easy, since the objects to track were
well distinguished and the wind speeds on that day relatively low, which makes tracking
easier. Thus one last case presented is a day from Autumn '99, with high wind speeds and
closer objects. To prevent the algorithm from merging too many objects, an in-process cut-off
was used during the automatic scale detection phase (see Section 4.6). The day analysed is
September 28th, 1999. Figure 6.9 shows the sum of all tracks detected. The detected wind
velocities were on average 12-15 m/s.
Table 6.9: Parameters for Automatic Scale Run with Inprocess cut-off
Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.
Figure 6.9: Automatic Scale with Inprocess cut-off, All Tracks
All Tracks detected Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.
Figure 6.10: Automatic Scale with Inprocess cut-off, Spurious Tracks Removed
All Tracks with spurious Tracks removed, Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.
Figure 6.11: Automatic Scale with Inprocess cut-off, Spurious Tracks
Spurious Tracks detected Sep 28th, 1999, 2:00 GMT+2 - Sep 28th, 1:41 GMT+2.
The following table contains a list of spurious nodes found in the run:
Table 6.10: List of Spurious Tracks for Automatic Scale Run with Inprocess cut-off
And again a quality estimate according to the quality criteria defined in Section 5.6:
Kall 94
Kspur 11
FARtracks 0.13250 (13.25%)
Sall 348
Sspur 11
FARsegments 0.02972 (2.98%)
Table 6.11: Quality Estimate for the Automatic Scale Run with Inprocess cut-off
Figure 6.12: Contrast Enhancement
Sep 26th, 1999, 4:11 GMT+2 Top: Before contrast enhancement, Bottom: After linear
contrast stretch.
A little more detail and a generally brighter image is the result. When combined with a
form of thresholding which is based on Histograms, the results can be utilised for tracking.
This thresholding is described in the next section.
This looks like an image which could be used for centroid tracking. The process of
stretching the contrast and thresholding the image at the value above which only 25% of
the values reside makes viable input for a scaling procedure. Figure 6.14 shows the result of
the Blob detection procedure on Figure 6.13 (top), and for comparison the same Blobs on
the original, unfiltered data (bottom).
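The pre-processing just described can be sketched in a few lines: a linear contrast stretch onto the full value range, followed by thresholding at the value above which only a given fraction of the values reside. This is an illustrative sketch on a flat list of values, not the actual image-processing code:

```python
# Sketch of the pre-processing: linear contrast stretch, then percentile
# thresholding (here keeping the top 25% of values, as described above).

def contrast_stretch(values, lo=0.0, hi=255.0):
    """Linearly map the value range of `values` onto [lo, hi]."""
    vmin, vmax = min(values), max(values)
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

def percentile_threshold(values, keep_fraction=0.25):
    """Zero out all values below the (1 - keep_fraction) percentile."""
    cut = sorted(values)[int((1.0 - keep_fraction) * len(values))]
    return [v if v >= cut else 0.0 for v in values]
```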
Based on this procedure, a run was undertaken for the time between 3:01 and 6:31
GMT+2. Owing to the high wind-speeds and the dynamic development of the precipitation
on that day (making for big changes in the internal structure), a time slice constraint
∆tcrit = 600 s was applied. Scale-Space detection used Nmax = 5 and an Inprocess
cut-off of Tinp = 20%. The Percentile Threshold was set to 85%.
Figure 6.14: Blobs on Contrast Enhanced and Thresholded Image
Sep 26th, 1999, 4:11 GMT+2, Top: Detected Blobs after Contrast enhancement and
subsequent Percentile Thresholding at 0.75. Bottom: Blob Boundaries on unfiltered data.
Figure 6.15 shows the results for these 4 hours.
Chapter 7
The overall performance of the algorithm in its current state is not too bad. It leaves a
lot of room for improvement, though. One of the most prominent problems arises in the
situation where detected areas are about to leave the radar's range. The resulting shrink in
area and the seemingly different movement presented to the algorithm lead to obviously wrong
tracks. This could be prevented by an interpolation through the nodes in the track's history
and an estimate of the time at which the blob is leaving the range. The same is true for the
reverse situation, where objects are entering the radar's range. These situations often lead to
mismatches, mistaking an object freshly wandering in for another that was already within
range and has wandered further inward. In general, this is a problem inherent to the way the
algorithm works - on single objects. The mismatch problem could also be tackled by an
interpolation scheme which takes the overall direction of all Tracks in processing at the
time into account. Overall, a directional smoothing post-processing or in-processing scheme
would prove beneficial.
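One possible form of such a directional smoothing scheme, purely a sketch and not part of SARTrE, would be to reject a new track segment whose direction deviates from the track's mean heading by more than a threshold angle:

```python
# Sketch of a possible directional consistency check (not implemented in
# SARTrE): reject a new segment whose direction deviates too far from the
# track's coarse mean heading.
import math

def heading(p, q):
    """Direction of the segment from p to q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def direction_consistent(track, new_point, max_dev=math.pi / 3):
    """track: list of centroids in time order; needs >= 2 nodes to judge."""
    if len(track) < 2:
        return True
    mean = heading(track[0], track[-1])    # coarse mean heading
    dev = abs(heading(track[-1], new_point) - mean)
    dev = min(dev, 2 * math.pi - dev)      # wrap around +/- pi
    return dev <= max_dev
```

A near-reversal of direction between consecutive snapshots, which is improbable for precipitation areas, would then be flagged as a likely mismatch.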
The ability to track not only the position, but also the development of the object in
terms of reflectivity intensities over time, could provide helpful additional information for
selecting the correct relation between rain-rate and reflectivity. As shown in a paper by
C. Reudenbach, G. Heinemann et al., 2001 [8], the Z/R-Relation has a twofold nature: it looks
different during the phase of precipitation build-up than during its decay. Tracking the reflec-
tivity histograms could provide a Nowcasting algorithm using SARTrE output data with
valuable information for the estimation of the future development of the tracked precipitation
areas in terms of size (as denoted by histogram size) as well as rain-rates (through histogram
development), for better quantitative forecasts.
In order to get a better impression of the performance, a more thorough statistical anal-
ysis of the algorithm would be needed. The presented method is admittedly rudimentary,
and the decisions between correct and incorrect tracking were based on rather subjective
criteria. Until more critical and extensive tests have been undertaken, the estimated
performance values should be viewed with caution.
One of the biggest weaknesses the algorithm exposes is the inability to deliver data in
situations where large-scale stratiform precipitation covers the radar's range to a large extent.
It is conceivable that an approach making further use of Scale-Space theory and digital im-
age processing techniques, such as contrast enhancement and more elaborate thresholding,
could improve the use in situations where the coverage of the radar's detection area is very
high, but some structure remains visible inside. A rough first sketch of this was presented
in Section 6.4.
Whether the ability to focus on salient image features by Automatic Scale Selection
proves useful depends on whether the algorithm can be improved to treat all sorts
of weather situations accordingly. An idea in this direction is to perform a complete Scale-
Space1 analysis of each snapshot, and extract information to adjust certain parameters of
the Blob detection stage automatically, for instance by steering a contrast enhancing scheme
locally, accentuating centres of reflectivity (see Section 6.4), or by using an anisotropic, prob-
ably non-linear diffusion process for extracting trackable details even from mostly stratiform
precipitation. If this can be achieved, the algorithm would prove a good basis for a com-
pletely unsupervised tracking system, providing continuous tracking data. As far as I know,
the application of Scale-Space methods to this special field of Tracking is not explored widely
yet and leaves much to do.
Also, Scale-Space methods could prove useful for Tracking based on statistical correla-
tion. These algorithms are often sensitive to the box-size chosen, and the size of the box
could in turn be automatically determined by using a Scale-Space approach to find the scale
of the prominent image features and link the box size to it. This wouldn't alleviate the
ambiguity problem, but it might reduce the output noise somewhat.
Comparison of the presented method to other tracking algorithms is difficult. The closest
recent relative to SARTrE is the Trace3D algorithm [9]. A direct comparison is hard, for the
two algorithms have different foci. The latter concentrates on convective cores by applying
a semi-adaptive thresholding scheme; precipitation outside the thresholded ranges is not
taken into account, and the correlation procedure is based on velocity interpolation. It also
contains a simple directional smoothing facility, based on the improbability of crossing tra-
jectories. SARTrE, on the contrary, doesn't focus on reflectivity cores as such, but rather on
the impression the distribution of reflectivity leaves in its Scale-Space representation. Also,
the correlation process is based on different assumptions and techniques. SARTrE has no
directional preference or interpolation facilities (yet). For a direct comparison, a run of
both algorithms on the same data sets would be interesting.
An idea for the far future might be to use a hybrid model: according to the weather
situation, either centroid- or cross-correlation tracking could be used. Which algorithm to
apply to a given situation could be decided by an accordingly trained neural network,
which could base the decision on coverage, gradients or other suitable input.
1 This means determining the scale of each object separately by tracking its lifetime in the Scale-Space Representation.
Appendix A
Programming Techniques
An object consists of data fields, which are called members or attributes in this context;
processing facilities operating on this internal data (called member functions or methods);
and an interface exposing data and functionality to other objects, making it possible for ob-
jects to exchange messages. Since objects also constitute data types, it is possible to
exchange messages consisting of objects themselves. One of the most distinguishing features
is the ability to build complex object hierarchies based on an ontological conceptualisation
of problems, which allows for flexible, runtime-bound solutions unprecedented in proce-
dural approaches. A complete discussion of the concept is beyond the scope of this thesis;
a good starting point covering all practical aspects of OOP is the CETUS links website:
http://www.cetus-links.org.
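The concepts above can be made concrete in a few lines. The example is given in Python rather than Objective-C purely for brevity, and the class names are illustrative, not taken from the SARTrE code base:

```python
# Minimal illustration of the OOP concepts above: members hold an object's
# data, methods operate on it, and calling a method on another object
# amounts to sending it a message (here, a message carrying an object).

class Blob:
    def __init__(self, pixels):
        self.pixels = set(pixels)    # member / attribute

    def area(self):                  # method operating on the members
        return len(self.pixels)

class Track:
    def __init__(self):
        self.nodes = []

    def append(self, blob):          # message whose argument is an object
        self.nodes.append(blob)

    def total_area(self):            # interface exposed to other objects
        return sum(b.area() for b in self.nodes)
```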
A.2 Objective-C
The OOP language used in this thesis is Objective-C. As opposed to C++, which has the
big drawback of falling apart into multiple proprietary dialects implemented by different
vendors, Objective-C suffers no such dissension. It is a unique language standard, and the only
difference lies in the libraries used to constitute the root object of all objects, one by GNU
and the other by Apple. The differences are marginal, and porting from one to the other is a matter
of replacing a handful of statements. Objective-C is a true superset of the C language:
everything written in standard ANSI-C can be adopted into Objective-C seamlessly.
It is based on the message-passing structures of Smalltalk, which allows for very flexible
runtime behaviour and, if needed, almost type-free programming. It was chosen as
the language for the software developed in the context of this thesis partly for these
properties and partly because it is the OO language of choice when dealing with Macintosh
programming in general. Apple's new UNIX-based operating system Mac OS X is built
around it.
Appendix B
The complete documentation for presented runs including the correlation table for each
snapshot is much too extensive to be put into this document. The files containing all three
runs with images and tracking data can be obtained from the author on request. Send an
email to
juergen_simon@mac.com
and provide the keyword ”SARTrE” in the subject to avoid my rigourous spam filter. The
data used to generate runs is property of the Meteorological University of Bonn and can
therefore not be distributed. An abstraction layer in the software is used to assimilate
radar data. By changing it to your needs, you should be able to adopt the algorithm
for various formats with relative ease. The software is still under development, but will be
distributed at some point under the GNU Public License (GPL). Its composed of Frameworks
for Assimilation, Visualisation, Processing and a Cocoa-based application for OSX. It is
developed obeying the well known MVC (Model-View-Controller) paradigm, so adopting a
new front-end should be easy. Send an email to the above mentioned address if you want to
keep posted.
List of Tables
List of Figures
5.2 Histograms
5.3 Centroids
Bibliography
[1] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Addison Wesley
Publishing Company, Inc., 1993
[2] E. A. Mueller, Statistics of high radar gradients, Journal of Applied Meteorology, 1977,
Volume 16
[3] Tony Lindeberg, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers,
1994
[4] J. Weickert, A Review of Nonlinear Diffusion Filtering, in Scale-Space Theory
in Computer Vision, LNCS, Vol. 1252, Springer, 1997
[5] J. Weickert, Scale-Space has been Discovered in Japan, Technical Report DIKU-TR-
97/18, Department of Computer Science, University of Copenhagen, August 1997
[6] A. P. Witkin, Scale-Space Filtering, Proc. Eighth Int. Joint Conf. on Artificial Intelligence
(IJCAI '83, Karlsruhe, Aug. 8-12, 1983), Vol. 2, 1019-1022, 1983
[7] William H. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Sec-
ond Edition, Cambridge University Press, 1992
[8] C. Reudenbach, G. Heinemann, E. Heuel, J. Bendix, W. Winiger, Investigation of
summertime convective rainfall in Western Europe based on a synergy of remote sensing
data and numerical models, Meteorol. Atmos. Physics, 76, 23-41, 2001
[9] J. Handwerker, Cell tracking with TRACE3D - a new algorithm, Atmos. Res., 61,
15-34, 2002
[10] R. E. Rinehart, E. T. Garvey, Three-dimensional storm motion detection by conven-
tional weather radar, Nature, 273, 287-289, 1978
[11] L. Li, W. Schmid, J. Joss, Nowcasting of Motion and Growth of Precipitation with
Radar over a complex Orography, Journal of Applied Meteorology, Volume 34,
pp. 1286-1300, 1995
[12] S. Mecklenburg, Nowcasting precipitation in an Alpine region with a radar echo tracking
algorithm, Dissertation, ETH Zurich, Diss. ETH No. 13608, 2000
[13] R. E. Rinehart, Radar for Meteorologists, 3rd Edition, Rinehart Publications, 1997
[14] E. Heuel, Quantitative Niederschlagsbestimmung aus Radardaten. Ein Vergleich von
unterschiedlichen Verfahren unter Einbeziehung der Statistischen Objektiven Analyse,
PhD thesis, Meteorological Institute, University of Bonn, 162 p., 2004