
Wavelet Enhancement of

Borehole Radar Data

Guy A. R. Antoine
Supervised by Dr. G.R.J. Cooper

Submitted in partial fulfilment
of the requirements for BSc (Hons) at
the University of the Witwatersrand
Department of Geophysics
November 2002

Abstract
Borehole radar has proven to be a useful tool in the South African mining
environment. Due to rapid attenuation of microwave frequency electromagnetic
radiation in high-permittivity materials, the deep reflections on borehole radar images
are often noisy. The project investigated the use of wavelets in enhancing two radar
datasets provided by the CSIR. Two aspects of wavelet-based processing were
considered: de-noising and attribute analysis.

The stationary wavelet transform was used to de-noise the data both across-trace and
down-trace. Results were compared with those obtained by filtering in the space and
frequency domains. Wavelets were found to be superior in de-noising the data
across-trace. When processing down-trace, no significant improvements were made
with any technique.

The complex continuous wavelet transform was used to calculate the following
attributes: instantaneous phase, instantaneous frequency and reflection strength. In
contrast to standard attribute analysis, the wavelet attributes could be calculated at
multiple scales allowing greater versatility. The small-scale instantaneous phase was
useful in detecting edges between reflectors. The instantaneous frequency highlighted
lateral discontinuities in the form of faults and pinchouts.

Table of contents

Chapter 1 - Introduction 1-9


Chapter 2 - Background 2-11
2.1 History of GPR 2-11
2.1.1 Extra-terrestrial GPR 2-12
2.1.2 Why GPR? 2-12
2.2 Physical characteristics and system design 2-13
2.2.1 Range 2-15
2.2.1.1 Material-attenuation loss La 2-15
2.2.1.2 Spreading Loss Ls 2-16
2.2.1.3 Target-Scattering loss Lsc 2-17
2.2.1.4 Summary 2-17
2.2.2 Dielectric properties of materials 2-18
2.2.3 Detectability 2-20
2.2.4 Clutter 2-20
2.2.4.1 System clutter 2-21
2.2.4.2 External clutter 2-21
2.2.5 Resolution 2-22
2.2.5.1 Depth Resolution 2-22
2.2.5.2 Plan resolution 2-23
2.2.6 Modulation technique 2-23
2.3 Borehole radar (BHR) 2-24
2.3.1 The Aardwolf BR40 2-26
2.3.2 Directionality Problems 2-27
2.4 Summary 2-27
Chapter 3 - Theory 3-28
3.1 The Fourier transform 3-28
3.2 The Short-Time Fourier transform 3-29
3.3 The wavelet transform 3-31
3.3.1 The Continuous Wavelet Transform 3-31
3.3.2 Frequency and scale 3-32
3.3.3 Wavelet scalograms 3-33
3.3.4 Discrete and continuous sampling 3-35
3.4 The discrete wavelet transform 3-36
3.4.1 Wavelet Decomposition 3-37
3.4.2 Wavelet synthesis 3-38
3.4.3 Aliasing and quadrature mirror filters 3-38
3.4.4 The dilation equation and the wavelet equation 3-39
3.4.5 The fast wavelet transform 3-40
3.5 The stationary wavelet transform 3-40
3.6 Wavelet packet analysis 3-41
3.7 Wavelet properties 3-42
3.7.1 Describing wavelets 3-43

3.7.2 Vanishing moments 3-43


3.7.3 Regularity 3-44
3.7.4 Support 3-44
3.7.5 Further constraints imposed on DWT wavelets 3-44
3.7.5.1 Orthogonal wavelet bases 3-44
3.7.5.2 Biorthogonal Wavelet basis 3-45
3.7.6 Complex wavelets 3-46
3.7.7 Custom wavelets 3-46
3.8 Choice of wavelet 3-46
3.8.1 DWT and SWT 3-47
3.8.2 Complex continuous wavelet transform 3-49
3.9 Wavelet De-noising 3-53
3.9.1 Decomposition 3-53
3.9.2 Thresholding 3-53
3.9.3 Noise model selection 3-55
3.9.4 Variance adaptive thresholding 3-56
3.9.5 Reconstruction 3-57
3.9.6 One-dimensional or two-dimensional? 3-57
3.10 Application of wavelet de-noising in seismic and radar processing 3-57
Chapter 4 – Data Processing and Results 4-59
4.1 Trend removal 4-59
4.1.1 Discussion 4-62
4.2 Padding techniques 4-63
4.2.1 Hermite cubic spline interpolation 4-64
4.2.2 Exponential decay taper 4-66
4.2.3 Symmetric padding 4-68
4.2.4 Discussion 4-69
4.3 De-noising 4-69
4.4 Down trace 4-70
4.4.1 Wavelet denoising 4-70
4.4.2 Fourier denoising 4-73
4.4.3 Discussion 4-76
4.5 Across trace 4-76
4.5.1 Wavelet denoising 4-78
4.5.2 Fourier denoising 4-83
4.5.3 Discussion 4-85
4.6 Space domain filtering 4-88
4.7 Complex trace analysis 4-90
4.7.1 Introduction 4-90
4.7.2 Reflection strength 4-91
4.7.3 Instantaneous phase 4-92
4.7.4 Instantaneous frequency 4-93
4.7.5 Wavelet calculation of attributes 4-94
4.7.6 Discussion 4-98
Chapter 5 - Conclusion 5-101

5.1 Future work 5-102


5.2 Acknowledgements 5-102
Chapter 6 - References 6-103
Chapter 7 - Appendix 7-106
7.1 wavedetrend.m 7-107
7.2 expad.m 7-108
7.3 expadx.m 7-109
7.4 swtpad.m 7-110
7.5 swtpadx.m 7-111
7.6 waveden.m 7-112
7.7 wavedenx.m 7-114
7.8 fftden.m 7-116
7.9 fftdenx.m 7-117
7.10 spden.m 7-118
7.11 enhance.m 7-119

Table of figures
Figure 2-1 Material attenuation as a function of frequency for a medium-loss soil.__
_________________________________________________________________ 2-16
Table 2-1: Adjustment of the range law for different types of target.___________ 2-16
Figure 2-2: Signal amplitude against time. ______________________________ 2-18
Figure 2-3 Effect of moisture content of rock on relative permittivity. _________ 2-19
Figure 2-4 Radar probing range through some typical ‘rocks’. ______________ 2-19
Figure 2-5: Illustration of relative signal strengths for various sources and targets._
_________________________________________________________________ 2-22
Figure 2-6: A portion of the electromagnetic spectrum. Ground penetrating radar
operates in the order of 50MHz to 2GHz.________________________________ 2-24
Figure 3-1 Example illustrating how the addition of a single spike to a signal alters
the power spectrum. ________________________________________________ 3-29
Figure 3-2 Breakdown of the time-frequency plane for the STFT. The tiles represent
the coverage in the time-frequency plane of a given basis function. Note the fixed
resolution in both time and frequency. __________________________________ 3-30
Figure 3-3 The CWT illustrated schematically. (a) The wavelet is convolved with the
signal. It is then rescaled and the convolution repeated (b). _________________ 3-32
Figure 3-4 Defining the centre frequency for (a) Daubechies 2 wavelet (b) Coiflet 1
wavelet. __________________________________________________________ 3-33
Figure 3-5 Comparison of the spread in time and frequency for the CWT and STFT.
Regions of influence of a Dirac pulse at t=t0 is shown a) for the CWT and b) for the
STFT. The influence of three sinusoids of frequencies f0, 2f0, 4f0 are shown c) for the
CWT and d) the STFT. ______________________________________________ 3-34
Figure 3-6 Comparison of scalograms calculated from (a) the DWT, and (b) the
CWT. ____________________________________________________________ 3-35
Figure 3-7 Dyadic sampling grid in the time-scale plane. Each node corresponds to a
wavelet basis function hj,k(t) with scale 2-j and shift 2-jk. ____________________ 3-36
Figure 3-8 Breakdown of the time-frequency plane for the discrete wavelet transform.
_________________________________________________________________ 3-36
Figure 3-9 DWT signal decomposition of a sinewave with added noise (S) into
approximation (cA) and detail (cD).____________________________________ 3-37
Figure 3-10 Typical decomposition tree for the DWT. _____________________ 3-38
Figure 3-11 Reconstruction (synthesis) process performed by the IDWT._______ 3-38
Figure 3-12 Transformation matrix of the FWT. Odd rows consist of the low pass
filter. Even rows consist a high pass filter. Together, these rows form a quadrature
mirror filter. ______________________________________________________ 3-40
Figure 3-13 Schematic of the generalised decomposition step at level j for the SWT._
_________________________________________________________________ 3-41
Figure 3-14 Typical decomposition tree from wavepacket analysis. Each level is
broken into an approximation and detail.________________________________ 3-41
Figure 3-15 Breakdown of the time-frequency plane for wavepacket analysis.
Compare with that for standard wavelet analysis (figure 3.8). _______________ 3-42
Figure 3-16 Using entropy to determine whether to decompose the next level. The
'best tree' is displayed on the right._____________________________________ 3-42
Figure 3-17 The effect of increasing the number of vanishing moments of the
Daubechies wavelet. A quadratic a) is analysed with b) 1 c) 2, and d) 3 vanishing
moments. The result of d) is practically zero, except for some edge effects. _____ 3-43

Figure 3-18 a) The Haar wavelet is orthogonal to b) its own dilations, and c) its own
translations. ______________________________________________________ 3-45
Figure 3-19 Two wavelets from the Biorthogonal wavelet family._____________ 3-46
Figure 3-20 Typical borehole radar trace taken from dataset Bhr2. ___________ 3-47
Table 3-1 Distinguishing properties for the highest order Symlets and Coiflets
wavelets. _________________________________________________________ 3-48
Figure 3-21 Wavelet functions for (a) Coiflets 5 wavelet, and (b) Symlets 8 wavelet.
Scaling functions for (c) Coiflets 5 wavelet, and (d) Symlets 8 wavelet. ________ 3-48
Figure 3-22 Complex Gaussian wavelet showing (a) Real part (b) Imaginary part (c)
Modulus, and (d) Phase. _____________________________________________ 3-49
Table 3-2 Summary of wavelet family properties, Part 1. ___________________ 3-51
Table 3-3 Summary of wavelet family properties, Part 2. ___________________ 3-52
Figure 3-23 Comparison of Hard, Soft and Qian thresholding on a straight line. 3-54
Figure 4-1 50-bin Histogram plot for datasets a) BHR1, and b) BHR2. Note the
nearly uniform distribution of BHR1 indicating that histogram equalisation might
have been performed on the data.______________________________________ 4-60
Figure 4-2 Typical trace from BHR1 showing prominent trend. A 6th degree
polynomial fit highlights the approximate trend. __________________________ 4-60
Figure 4-3 Typical trace from BHR1 comparing 5th level wavelet approximation to
6th degree polynomial approximation. __________________________________ 4-61
Figure 4-4 Wavelet detrending of typical trace. Detrending was performed using the
5th level approximation. _____________________________________________ 4-61
Figure 4-5 Comparison between the Bhr1 dataset (a) before and (b) after trend
removal. The detail (and noise) is made clearer in the second image. _________ 4-62
Figure 4-6 Wiggle trace plot of the portion of Bhr1 giving rise to the 'spotted' effect
seen on figure 4.5, (a) before trend removal and (b) after. __________________ 4-63
Figure 4-8 The Bhr1 dataset (a) before and (b) after wavelet and residual linear
detrending. Note the resulting streaks. __________________________________ 4-65
Figure 4-9 Traces 918 to 928 from BHR1 (a) before and (b) after wavelet and linear
trend removal. _____________________________________________________ 4-66
Figure 4-10 Typical trace after detrending. Trace has been padded to 256 samples
using an exponential decay function (Equation 4.2) . a) Decay constant k = 0.01 b)
Decay constant k=0.6 _______________________________________________ 4-67
Figure 4-11 Power spectra for the exponentially padded curves depicted in figure
4.10. Note the aliasing of frequencies evident in the blue curve. ______________ 4-67
Figure 4-12 Types of signal extension. Dashed line indicates padded section. ___ 4-68
Figure 4-13 Symmetric extension of trace 250 of dataset Bhr1. Signal has been
extended to twice its original length. ___________________________________ 4-69
Figure 4-14 Trace 250 from Bhr1 prepared for SWT denoising. Red line indicates
approximate point where signal changes from saturated to unsaturated. _______ 4-70
Figure 4-15 Level one detail for trace 250. Red lines indicate the fixed form
threshold. ________________________________________________________ 4-71
Figure 4-16 Trace 250 with independent thresholding applied. ______________ 4-71
Figure 4-17 Reconstructed detail of trace 250 before (black) and after thresholding
(red). ____________________________________________________________ 4-72
Figure 4-18 Trace 250 before and after wavelet denoising. _________________ 4-72
Figure 4-19 Test block showing BHR1 data before and after trace-by-trace wavelet
denoising. ________________________________________________________ 4-73
Figure 4-20 Power spectrum for trace 250 from dataset Bhr1. The red dashed line
marks the cutoff frequency of 90 Hz used in the lowpass filtering. Note that the
frequency values are relative and do not represent true frequency characteristics of
the signal. ________________________________________________ 4-74
Figure 4-21 Comparison of trace 250 before (black) and after (red) lowpass filtering.
_________________________________________________________________ 4-74
Figure 4-22 Comparison between wavelet and fourier denoising on trace 250. __ 4-75
Figure 4-23 Test image before and after low pass filtering at 90Hz.___________ 4-76
Figure 4-24 Time slice across sample 10 from dataset Bhr1. The relatively smooth
signal indicates good correlation between traces. _________________________ 4-77
Figure 4-25 Time slice through sample 180 from dataset Bhr1. The signal has been
extended symmetrically to 2240 samples in preparation for SWT denoising. ____ 4-78
Figure 4-26 Reconstructed detail for section 180. Red lines show the detail after
Heuristic SURE thresholding._________________________________________ 4-79
Figure 4-27 Denoised image using fixed form thresholding to level three
decomposition. Image was taken between traces 1500 and 1800 and samples 160 to
200 from dataset Bhr1. Note the streaky appearance. ______________________ 4-80
Figure 4-28 (a) Test image, (b) Heuristic Sure thresholded denoising, (c) Fixed form
thresholded denoising. An eight level decomposition was used in denoising both
images. __________________________________________________________ 4-81
Figure 4-29 Dataset Bhr1 before and after wavelet denoising. _______________ 4-82
Figure 4-30 Pseudocolor image of wavelet-denoised data. __________________ 4-83
Figure 4-31 Portion of the time slice taken across sample 180. Overlaid is the
smoothed output from low pass filtering using different cutoff frequencies. _____ 4-84
Figure 4-32 Portion of signal comparing wavelet and fourier denoising. _______ 4-84
Figure 4-33 Low pass filtering using 200Hz and 40Hz cut-off frequencies. The
wavelet-denoised image is included as a comparison. ______________________ 4-85
Figure 4-34 Images from lower 100 samples of dataset Bhr1. Wavelet denoising is
compared with low pass filtering at 40Hz. _______________________________ 4-86
Figure 4-35 Wavelet denoising of dataset Bhr2 using the function 'wavedenx '. _ 4-88
Figure 4-36 Filtering of dataset Bhr1 in the space domain. The 'original' image has
been detrended. ____________________________________________________ 4-89
Figure 4-37 Isometric diagram of analytic signal of a seismic trace.__________ 4-91
Figure 4-38 (a) Denoised test image, (b) Reflection strength ________________ 4-92
Figure 4-39 (a) Test image, (b) Reflection strength ________________________ 4-92
Figure 4-40 (a) Denoised test image, and (b) Instantaneous phase ___________ 4-93
Figure 4-41 (a) Denoised test image, and (b) Instantaneous Frequency ________ 4-94
Figure 4-42 (a) Denoised test image. Wavelet calculation at scale 10 of (b)
Reflection Strength, (c) Instantaneous Phase, (d) Instantaneous Frequency. ____ 4-94
Figure 4-43 (a) Denoised test image. Wavelet calculation at scale 4 of (b) Reflection
Strength, (c) Instantaneous Phase, (d) Instantaneous Frequency. _____________ 4-95
Figure 4-44 (a) Denoised test image. Wavelet calculation at scale 1 of (b) Reflection
Strength, (c) Instantaneous Phase, (d) Instantaneous Frequency. _____________ 4-96
Figure 4-45 Reflection strength calculated (a) from the analytic signal, and (c) from
the complex continuous wavelet transform (Scale 4). Instantaneous phase calculated
(b) from the analytic signal, and (d) from the complex continuous wavelet transform
(Scale 4). _________________________________________________________ 4-97
Figure 4-46 (a) Instantaneous phase calculated at scale = 1 contoured over original
denoised test data. (b) Instantaneous frequency calculated at scale = 3 contoured
over original denoised data. __________________________________________ 4-98
Figure 4-47 Bhr2 dataset after wavelet denoising. ________________________ 4-99
Figure 4-48 Wavelet calculated reflection strength at scale 2. _______________ 4-99

Figure 4-49 Wavelet calculated instantaneous phase at scale 2._____________ 4-100


Figure 4-50 Wavelet calculated frequency at scale 2______________________ 4-100

Chapter 1 - Introduction
In 1972, Rex Morey and Art Drake began to sell the first commercial ground
penetrating radar (GPR) systems. Since then the range of applications for ground
penetrating radar has been expanding steadily. Currently GPR is used in archaeology,
engineering, remote sensing, and geophysics. It has also been used by the military for
mine detection, and by NASA for space exploration. This project focuses on a
borehole radar system developed by the CSIR for use in South African mines.

Borehole radar has proven to be useful in the mine environment for mine planning
and ore body mapping. The Aardwolf BR40 has a slimline design allowing it to fit
into 38 mm EX boreholes, which are typically encountered in the Bushveld platinum
mines. Borehole radar provides the highest resolution of any geophysical technique,
approaching centimetre resolution under the right conditions. The highly resistive
quartzites typical of the South African mining environment allow penetration depths
of up to 100 metres. Where conditions are not
ideal and rocks have high moisture content, the performance is compromised by rapid
attenuation of electromagnetic radiation. As a result, system noise starts to dominate
the signal at the late arrivals.

Usually noise filtering is performed in the frequency domain using the fast Fourier
transform. The Fourier transform uses a sinusoidal basis to represent a signal; it
assumes a smooth, continuous, infinite signal whose properties do not evolve in time.
Consequently it does not perform well on non-stationary signals such as seismic or
radar traces. The wavelet transform on the other hand uses a small scalable ‘wavelet’
that allows both time and frequency localisation of the transformed signal.
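The localisation contrast can be shown with a minimal, self-contained sketch. The thesis processing was done in MATLAB with the Wavelet Toolbox; the sketch below uses plain Python, a naive DFT, and the Haar wavelet purely for illustration (none of these are the tools or wavelets used in the project). A single spike is picked out by one level of Haar detail coefficients, while its Fourier spectrum spreads energy over every frequency.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: split an even-length signal into
    approximation (local averages) and detail (local differences)."""
    a = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    d = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    return a, d

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (for illustration only)."""
    n = len(signal)
    mags = []
    for k in range(n):
        re = sum(signal[t] * math.cos(2*math.pi*k*t/n) for t in range(n))
        im = -sum(signal[t] * math.sin(2*math.pi*k*t/n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A single spike in an otherwise flat signal.
x = [0.0] * 16
x[6] = 1.0

a, d = haar_dwt(x)
mags = dft_magnitudes(x)
print([round(v, 3) for v in d])     # detail coefficients: nonzero only at the spike
print([round(m, 3) for m in mags])  # DFT: equal energy on every frequency
```

The wavelet detail coefficients locate the spike in time, whereas the Fourier spectrum says only that all frequencies are present; this is the non-stationarity problem described above.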

The project investigates the use of wavelet de-noising to enhance two borehole radar
datasets provided by the CSIR.

A common practice in seismic data analysis is the calculation of the analytic signal
from which instantaneous attributes are defined. The complex continuous wavelet
transform allows the calculation of instantaneous attributes in a similar fashion to the
analytic signal. The second part of the project investigates the calculation of
instantaneous attributes at different scales.

Chapter 2 gives background on ground penetrating radar, its history, application and
the physics involved with radar propagation through earth material. The use of
borehole radar in the South African mining environment is introduced. Finally, the
particular instrument used for the data acquisition, the Aardwolf BR40, is discussed.

Chapter 3 deals with the theory of frequency domain filters. The Fourier technique is
reviewed and its inadequacies in dealing with non-stationary data are highlighted. The
wavelet transform is then introduced as a solution and a particular application of
wavelets in denoising is discussed. The chapter finishes with a summary of the current
state of affairs in the application of wavelet de-noising to radar and reflection seismic
data.

In chapter 4, the two datasets are processed using wavelet de-noising techniques. For
comparison, de-noising is also performed in the frequency domain using the Fourier
transform, and in the space domain using a low-pass filter kernel. Finally, the
instantaneous attributes of the borehole radar data are calculated using both the
Hilbert transform and the complex continuous wavelet transform. Chapter 5 discusses
the conclusion and makes recommendations for future work.

The data processing was performed in Matlab using the Wavelet Toolbox. The
program code developed for the processing is listed in the Appendix.

Chapter 2 - Background

RADAR is an acronym coined in 1934 for RAdio Detection And Ranging (Rees,
2001). Ground penetrating radar (GPR), also known as subsurface radar, is a radar
system with enough gain and appropriate bandwidth to allow it to probe
considerable distances through rock. The concepts of electromagnetic wave
propagation and scattering are used in order to image changes in electrical and
magnetic properties in the ground. There exist a range of techniques, most aimed at
locating objects or interfaces buried beneath the earth’s surface (Daniels, 1988). It
may be performed from the surface of the earth, within a borehole or between
boreholes, from aircraft or satellite. It has the highest resolution in subsurface imaging
of any geophysical method, approaching centimetres under the right conditions. The
depth of investigation is largely dependent on the material properties, ranging from
less than a meter to over 5 kilometres (Olhoeft, 2000).

This chapter starts with a history of subsurface radar. The advantages of GPR as an
exploration tool will be discussed briefly. A large section will cover the physical
characteristics of GPR in a broad fashion, defining terms such as range, resolution and
clutter. Some aspects of system design, such as the modulation technique, will also be
mentioned. The last section will cover a particular application of GPR, namely
borehole radar (BHR), which is the focus of this project.

2.1 History of GPR


The first use of electromagnetic signals to determine the location of buried objects is
attributed to Leimbach and Löwy in 1910. Their technique consisted of burying dipole
antennas in an array of vertical boreholes and comparing the magnitude of signals
received when successive pairs of antennas were used to transmit and receive. A
crude image of the region within the array could be formed. The authors also
described an alternative technique, which used separate, surface mounted antennas to
detect the reflection from an interface due to ground water or an ore deposit. An
extension of this technique allowed the depth to a buried interface to be determined
through the interference between the reflected wave and the leaked wave between the
two separate antennas. This method incorporated continuous wave operation (the
different modes of signal modulation will be discussed later). The first use of pulsed
techniques to detect buried objects was in 1926 with the work of Hülsenbeck. He
realized that any dielectric variations, not necessarily involving conductivity, would
produce reflections. The technique, through its easier realization of directional
sources, was found to have advantages over seismic methods.

The first actual GPR survey was performed in Austria in 1929, by W. Stern, to sound
the depth of a glacier. The technology was then largely forgotten until the late 1950s,
when interest in GPR was renewed after U.S. Air Force radar altimeters were found
to see through the ice as aircraft attempted to land in Greenland; the misread
altitudes caused aircraft to crash into the ice. This sparked investigations into the use of radar to see into the
subsurface, not only for ice sounding but also for mapping subsoil properties and the
water table.

Subsequently, pulsed techniques were developed for the probing of salt deposits,
desert sand, and rock formations. Coal seam probing was investigated but the higher
attenuation in coal meant that depths greater than a few meters were impractical.

A new impetus was given to the subject in the early 1970’s with the Apollo 17 Lunar
Sounder Experiment. A system much like Stern’s original glacier sounder was
proposed, designed and eventually flown to the moon. The lunar sounder experiment
exploited one of the advantages of GPR over seismic techniques, namely the ability to
use remote, non-contacting antennas rather than the ground contacting geophones
required in seismic investigations.

In 1972, Rex Morey and Art Drake began to sell commercial ground penetrating radar
systems. Since then the range of applications has been expanding steadily and
includes: archaeology, road and railbed quality assessment, locations of voids and
containers, tunnels and mineshafts, pipe and cable detection, landmine detection, as
well as remote sensing by satellite (including ice thickness monitoring of arctic
shipping routes).

2.1.1 Extra-terrestrial GPR


Recent high-resolution images from the Mars Orbital Camera (MOC) on board the
Mars Global Surveyor (MGS) orbiter have revealed the possibility of the presence of
ground ice and probably some water lenses in the near sub-surface of Mars (Heggy,
2002).

GPR was incorporated in the Mars-96 Project, with the scientific goals of measuring
the thickness of the Martian permafrost and subsurface layers, as well as determining
their electromagnetic characteristics (Daniels, 1996). Unfortunately the third stage
rocket failed while the probe was on its third revolution around the earth, and the
mission plummeted into the Pacific Ocean.

Future developments will see GPR integrated into the 2003 Mars Express orbiter
(ESA) as the Mars Advanced Radar for Subsurface and Ionosphere Sounding
(MARSIS) experiment. Another radar sounding experiment is planned for 2005, on
board the Mars Reconnaissance Orbiter (MRO) (Heggy, 2002).

The first landed GPR experiment on Mars will be the 2007 Mars NETLANDER
project (Berthelier, 2000; Heggy, 2002). A miniature range-gated step-frequency
(RGSF) GPR has also been developed for the rover associated with the Mars 2009
Smart Lander Mission (Soon Sam Kim, 2002).

The above discussion makes it clear that GPR is becoming an important tool for
extra-terrestrial subsurface imaging.

2.1.2 Why GPR?


There exist a multitude of geophysical techniques for the location of buried objects.
Choosing the best tool depends on the nature of the target and the surrounding
geology. Some of the operational advantages of GPR will now be discussed.

i. Rapid surveying is allowed by the fact that the antennas do not have to be in
contact with the surface of the earth. This is possible because the dielectric
impedance ratio between free air and the earth’s surface is low, typically
around 2 to 4. In comparison, typical acoustic impedance ratios are of the
order of 100. GPRs developed by the Stanford Research Institute have been
flown at heights of 400m in synthetic-aperture mode and have imaged buried
metallic landmines (Daniels, 1996).
ii. With GPR, a large degree of customisation is possible, allowing the best
system design for a particular target type. Most of this customisation is in the
form of antenna design. Antennas must be designed to have adequate
properties of bandwidth and beam shape. Daniels (1996) includes a concise
section on the available antenna types.
iii. As with EM prospecting, the transmitted bandwidth may be chosen to suit the
problem. Signal sources are available which can produce sub-nanosecond
impulses or, alternatively, can be programmed to produce a wide range of
modulation types. However, there is a trade-off between depth of penetration
and resolution.
iv. In general, any dielectric discontinuity is detected and can be imaged. Targets
can be classified on their geometry: planar interfaces; long, thin objects;
localized spherical or cuboidal objects. The radar system can be designed to
detect a given target type preferentially. Consequently, GPR equipment is
largely application driven.

The primary factor to be concerned with when assessing the usefulness of GPR is the
signal attenuation in the given material at the desired operating frequency. Dry
materials have much lower signal attenuation than wet ones. As a rule, a material that
has a high conductivity at low frequencies will have a large attenuation. Generalising
from this: gravel, sand, dry rock and fresh water are relatively easy to probe whereas
salt water, clay soils and conductive ores or minerals are less so. Probing the latter
materials requires a reduction in the transmitted frequency at the cost of reduced
resolution.

2.2 Physical characteristics and system design


Radar probing of the earth involves irradiating the ground with microwave frequency
electromagnetic radiation. A wide range of EM techniques is utilized in geophysical
exploration, but the theory governing those techniques cannot be applied to the
propagation of EM radiation through the earth at radar frequencies. Consequently, the
signal response is very different from that of conventional EM.

In conventional EM, an important result, obtained from Maxwell's equations in
isotropic media, is the propagation constant k²:

k² = iμ₀σω + εμω²    (2.1)

where k is the wave number, ω is the frequency of the EM field, μ is the magnetic
permeability of the medium in which the wave is propagating, usually taken as the
magnetic permeability of free space, μ₀ = 4π × 10⁻⁷ H/m, ε is the dielectric
permittivity of the medium, and σ is the conductivity of the medium.

Usually, in conventional EM theory, the frequency is chosen to be so low that the
second term can be neglected, simplifying the propagation constant to:

k² ≈ iμ₀σω    (2.2)

If, however, the frequency is chosen to be high enough, as is the case for radar, the
first term becomes negligible (but not zero); consequently:

ω²μ₀ε >> ωμ₀σ    (2.3)

The medium is then referred to as a lossy dielectric medium, and the expression for
the dielectric permittivity is modified to include the conduction effects:

ε* = ε + iσ/ω    (2.4)

ε* is referred to as the complex permittivity. Complex expressions are also derived
for the index of refraction and impedance in a lossy dielectric.
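As a numerical illustration of Equations 2.3 and 2.4, the sketch below evaluates the complex permittivity and the ratio of the displacement term to the conduction term for a resistive rock. The material values (εr = 6, σ = 10⁻⁴ S/m) and the 100 MHz operating frequency are assumed for illustration, not measured values from the project.

```python
import math

# Physical constants.
eps0 = 8.854e-12        # permittivity of free space, F/m
mu0 = 4*math.pi*1e-7    # permeability of free space, H/m

# Assumed (illustrative) material and survey parameters.
eps_r = 6.0             # relative permittivity
sigma = 1e-4            # conductivity, S/m
f = 100e6               # operating frequency, Hz
w = 2*math.pi*f         # angular frequency

eps = eps_r * eps0
# Equation 2.4: complex permittivity of a lossy dielectric.
eps_star = complex(eps, sigma / w)
tan_delta = sigma / (w * eps)    # loss tangent

# Equation 2.3: at radar frequencies the displacement term dominates.
displacement = w**2 * mu0 * eps
conduction = w * mu0 * sigma
print(tan_delta)                  # much less than 1 for a low-loss material
print(displacement / conduction)  # ratio much greater than 1
```

For these assumed values the displacement term exceeds the conduction term by more than two orders of magnitude, justifying the lossy-dielectric treatment.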

The most important result to recognize is that EM fields at high frequencies propagate
through the earth as visible light does, being reflected and refracted when passing
through regions of different electrical properties. Where the wavelength of radiation is
smaller than the interfaces it encounters, the radiation obeys the laws of geometrical
optics. A sharp contrast in electric properties will result in a reflection and scattering
of the EM wave. Ideally in such a system we would like to measure the travel time of
the EM field from the surface to the target reflector and back. The character of the
reflected field can yield information on the change in properties at the interface. This,
in essence, is the principle of the ground penetrating radar concept (Zhdanov, 1994).

As a geophysical technique, GPR has much in common with the seismic reflection
method (McCann, 1988). Consequently, seismic techniques have been employed in
GPR, especially in the processing of data. McCann et al. (1988) compare the two
methods, illustrating with case histories. It is found that the two techniques are
complementary. Dry material transmits EM energy better than wet material while the
converse is true for seismic energy. GPR however, has a limited depth of penetration
and this has restricted its versatility as an exploration tool. It has found specialized
applications, one of which is delineating ore-bodies in mines, which will be discussed
in more detail later.

In order for a GPR system to operate effectively it must achieve:


i. Adequate signal to clutter ratio
ii. Adequate signal to noise ratio
iii. Adequate depth resolution
iv. Adequate spatial resolution.

These requirements will be addressed in the following sections on range, clutter,
resolution and the dielectric properties of materials.

2.2.1 Range
The range of GPR systems is governed primarily by the total path loss. The total path
loss for a particular distance is given by (Daniels, 1996):

Lt = Le + Lm + Lt1 + Lt2 + Ls + La + Lsc  dB (2.5)

where:

Le = antenna efficiency loss
Lm = antenna mismatch losses
Lt1 = transmission loss from air to material
Lt2 = retransmission loss from material to air
Ls = antenna spreading losses
La = attenuation loss of material
Lsc = target scattering loss

The major contributing terms are: material loss La, spreading loss Ls, and scattering
loss Lsc.

2.2.1.1 Material-attenuation loss La


The one-way attenuation loss of the material is given by

La = 8.686 R (2πf) √[ (µ0µrε0εr / 2)(√(1 + tan²δ) − 1) ]  dB (2.6)

where:

f = frequency, Hz
tanδ = loss tangent of material
εr = relative permittivity of material
ε0 = absolute permittivity of free space
µr = relative magnetic permeability of material
µ0 = magnetic permeability of free space

Note that it is normal to double the one-way attenuation loss in order to account for
the two-way travel path.
2-16

Figure 2-1 Material attenuation as a function of frequency for a medium-loss soil. From (Daniels,
1996).

It is important to note that the attenuation loss of the material is directly proportional
to the frequency of the transmitted wave: higher frequencies result in greater material
attenuation losses. Figure 2.1 shows the material attenuation as a function of
frequency. The material attenuation is also highly dependent on the relative
permittivity of the material.
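As a quick numerical check of equation 2.6, the loss can be evaluated directly. The sketch below is a minimal illustration, assuming SI units; the material values (εr = 9, tan δ = 0.1) and function name are illustrative, not taken from the text.

```python
import math

# A sketch of equation 2.6 (one-way material-attenuation loss).
# Material values below (eps_r = 9, tan_delta = 0.1) are illustrative only.
MU_0 = 4 * math.pi * 1e-7   # magnetic permeability of free space, H/m
EPS_0 = 8.854e-12           # permittivity of free space, F/m

def attenuation_loss_db(R, f, eps_r, tan_delta, mu_r=1.0):
    """One-way attenuation loss La in dB over a path of R metres."""
    root = math.sqrt(MU_0 * mu_r * EPS_0 * eps_r / 2.0
                     * (math.sqrt(1.0 + tan_delta ** 2) - 1.0))
    return 8.686 * R * 2.0 * math.pi * f * root

# Loss is directly proportional to frequency: 500 MHz suffers five times
# the loss of 100 MHz over the same 10 m path in the same medium.
low = attenuation_loss_db(10.0, 100e6, eps_r=9.0, tan_delta=0.1)
high = attenuation_loss_db(10.0, 500e6, eps_r=9.0, tan_delta=0.1)
```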

2.2.1.2 Spreading Loss Ls


Spreading loss from the conventional radar equation assumes that the target lies in the
far field, i.e. z > zF, where

zF = D²/λ (2.7)

is the Fresnel distance in free space, with D the largest antenna dimension and λ the
free-space wavelength. In ground penetrating radar, this assumption does not
necessarily hold and the target may lie in the near field. For simplicity, however,
the conventional (far-field) radar range equation is considered:

Ls = 10 log10 [ Gt Ar / (4πR²)² ]  dB (2.8)

where:

Gt = gain of transmitting antenna
Ar = receiving aperture
R = range to target

Equation 2.8 assumes a point source scatterer, which is not always the case. The range
law may need adjusting for the different types of targets as shown in Table 2.1 below.

Nature of target                        Magnitude of received signal
Point scatterer (small void)            R⁻⁴
Line reflector (pipeline)               R⁻³
Planar reflector (smooth interface)     R⁻²
Table 2-1: Adjustment of the range law for different types of target. After (Daniels, 1996).
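The range laws of Table 2.1 translate into received-signal levels that fall off at 40, 30 and 20 dB per decade of range respectively. A minimal sketch, with an illustrative helper function and range values of our own choosing:

```python
import math

# Relative received-signal level in dB for the range laws of Table 2.1,
# normalized to R = 1 m. The exponent n is 4 for a point scatterer,
# 3 for a line reflector and 2 for a planar reflector.
def received_level_db(R, n):
    return -10.0 * n * math.log10(R)

ranges = [1.0, 2.0, 10.0]
point = [received_level_db(R, 4) for R in ranges]   # falls as R^-4
line = [received_level_db(R, 3) for R in ranges]    # falls as R^-3
plane = [received_level_db(R, 2) for R in ranges]   # falls as R^-2
```

At ten times the range, the point scatterer's return has dropped 40 dB while the planar reflector's has dropped only 20 dB, which is why smooth interfaces remain visible far deeper than small voids.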

2.2.1.3 Target-Scattering loss Lsc


For a plane interface between two materials, where the lateral dimensions of both the
interface and the overburden are large, then

Lsc = 20 log |(Ζ1 − Ζ2)/(Ζ1 + Ζ2)| + 20 log σ  dB

where:

Ζ1 = characteristic impedance of the first layer of material
Ζ2 = characteristic impedance of the second layer of material
σ = target radar cross-section (as a proportion of the intersection of both antenna
beam patterns)

The important characteristic of the above relation is that a larger contrast in
impedance gives a stronger reflection, and hence a smaller scattering loss. The
scattering cross section is proportional to the square of the polarisability α of the
material,

σ = (k⁴ / 6πε0²) α²  (2.9)

hence more polarisable materials scatter more strongly.
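The two terms of the scattering-loss relation can be evaluated directly; since the reflection coefficient and σ are both at most one, both log terms are at most 0 dB, and the first term approaches 0 dB as the impedance contrast grows. A hedged sketch with illustrative impedance values:

```python
import math

# A sketch of the target-scattering loss relation above. Z1, Z2 are the
# characteristic impedances of the two layers; sigma is the target radar
# cross-section as a proportion of the antenna beam intersection
# (0 < sigma <= 1). All numeric values are illustrative.
def scattering_loss_db(Z1, Z2, sigma):
    reflection = abs(Z1 - Z2) / (Z1 + Z2)
    return 20.0 * math.log10(reflection) + 20.0 * math.log10(sigma)

# The reflection term moves toward 0 dB as the impedance contrast grows.
small_contrast = scattering_loss_db(100.0, 110.0, 1.0)   # ~ -26 dB
large_contrast = scattering_loss_db(100.0, 300.0, 1.0)   # ~ -6 dB
```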

2.2.1.4 Summary
Figure 2.2 below summarises the previously discussed signal losses as a function of
time. Clutter is the dominant noise at short times (higher frequencies) and system
noise dominates at longer times (lower frequencies). The figure illustrates that for a
given signal detection threshold, the maximum depth of investigation decreases
rapidly with increasing frequency. For this reason, most GPR systems operate at
frequencies below 2 GHz.

Figure 2-2: Signal amplitude against time. From (Daniels, 1996).

For more details on the above relations, the reader is referred to Daniels (1996) and
Rees (2001).

2.2.2 Dielectric properties of materials


Propagation of EM waves through dielectric materials can be described by two
theories:
i. Geometrical optics, which becomes relevant in dry materials. The wavelength
of the radiated wave has to be much less than the object’s dimensions and the
material involved is required to be an electrical insulator.
ii. Electromagnetic wave theory.

In EM wave theory, the velocity of propagation through the material is given by

vr = c / √εr  (2.10)

where:

εr = relative permittivity
c = speed of light in a vacuum

It is clear from the above relation that the controlling factor is the relative
permittivity, which is primarily governed by the moisture content of the material. At
low microwave frequencies, water has a relative permittivity of approximately 80,
while the solid constituents of most dry soils have relative permittivities in the range
2–9. Figure 2.3 shows the effect of moisture content of rock on relative
permittivity.
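Equation 2.10 is easily evaluated; the permittivity values below are illustrative of dry and moist rock rather than measured figures.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

# Equation 2.10: propagation velocity in a dielectric.
def velocity(eps_r):
    return C / math.sqrt(eps_r)

v_dry = velocity(4.0)    # dry rock, illustrative eps_r ~ 4
v_wet = velocity(25.0)   # moist rock, eps_r raised by water (water: eps_r ~ 80)
```

The wave slows markedly as moisture content rises, which is why velocity (and hence time-to-depth conversion) is so sensitive to water content.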

Figure 2-3 Effect of moisture content of rock on relative permittivity. From (Daniels, 1996).

Figure 2-4 Radar probing range through some typical ‘rocks’. From (Cook, 1975).

The permittivity also varies with frequency but can be considered constant for most
materials over the range of frequencies utilized for GPR work.

Cook (1975) performed laboratory measurements of radio-frequency complex
permittivity on various common earth materials. Effort was made to preserve the
moisture content of the samples. His work is best summarized in the well-known
diagram reproduced in figure 2.4, which illustrates the maximum depths of
penetration and the appropriate upper frequencies of operation for commonly
encountered earth materials. The figure shows that typical maximum penetration
depths, apart from some very low-attenuation dielectrics, rarely exceed 20
wavelengths. Low frequencies of operation improve depth of penetration but higher
frequencies are needed for resolution.

Cook concludes that low loss propagation will be possible in certain granites,
limestones, high-grade coals and dry concrete. Useful but shorter depth of penetration
may be achieved in other coals, gypsums, oil shales, dry sandstones, high-grade tar
sands, and schists. Probing distances of less than 3 meters are predicted for most
shales, clays and fine-grained soils.

2.2.3 Detectability
In time domain systems it is more useful to consider the peak voltages instead of total
loss Lt. The detectability is defined as:

D′ = 20 log (Vt / Vr)  (2.11)
where:
Vt = Peak transmitted voltage
Vr = Peak voltage of reflected signal

The limiting factor of the detectability of a system is the noise performance of the
receiver. The peak voltage of the received signal must be greater than the peak voltage
of the system noise, provided clutter signals are low. In reality, however, clutter
signals are stronger than the system noise.
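Equation 2.11 in code, with illustrative voltages of our own choosing:

```python
import math

# Equation 2.11: detectability as the ratio of peak transmitted voltage to
# peak received voltage, in dB. The example values are illustrative only.
def detectability_db(v_transmit, v_receive):
    return 20.0 * math.log10(v_transmit / v_receive)

# A 100 V transmitted pulse returning a 10 microvolt echo requires
# 140 dB of detectability from the system.
d = detectability_db(100.0, 10e-6)
```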

2.2.4 Clutter
In most GPR surveys, clutter is the limiting noise factor. For this reason, a whole
section has been dedicated to describing it.

Clutter can be defined as those signals that are unrelated to the target scatter
characteristics but occur in the same sample-time window and have similar spectral
characteristics to the target wavelet¹ (Daniels, 1996). GPR is vulnerable to extremely
high levels of clutter at short ranges.

Clutter can be divided into two primary categories, system clutter and external clutter
(Oswald, 1988).

¹ This definition is different to the clutter encountered in conventional radar.

2.2.4.1 System clutter


This includes:

• Reflections and scattering from components of the radar system outside the
antennas including the support structure.
• The direct arrival of the transmitted signal between the transmitting and
receiver antennas, referred to as breakthrough.

Great care goes into the system design with the aim of reducing system clutter.
Absorbing materials are used in the equipment to reduce reflections. In most cases,
the breakthrough signal can be filtered out. Bridging of the antenna by dielectric
anomalies on the surface, however, can alter the breakthrough in a random manner as
the antenna is moved over the surface, reducing the effectiveness of such signal
processing. The mechanical instability of the radar probe also determines the extent to
which simple processing can be used to remove system clutter. The antenna’s support
structure is not perfectly rigid and the clutter will change with small movements of the
antenna.

2.2.4.2 External clutter


Most clutter arises from multiple reflections between the ground surface and the
receiving antenna. It cannot be filtered out as antenna breakthrough can, and
consequently it imposes a serious constraint on the overall detectability. Other factors
that contribute to the external clutter are:
i. Local variations in the characteristic impedance of the ground surface
ii. Inclusion of small groups of reflective sources within the material being
penetrated.
iii. Reflections from objects in the side lobes of the antenna.

It is possible to quantify the rate of change of peak clutter signal level as a function of
time. In most cases this parameter sets a limit to the detection capability. Figure 2.5
shows relative signal strengths for the case of ice thickness measurement (Oswald,
1988). Note the domination of the signal response by the different clutter types over
the system noise (thermal and jitter).

Figure 2-5: Illustration of relative signal strengths for various sources and targets. From (Oswald,
1988).

2.2.5 Resolution
When designing a survey it is important to determine the required resolution. The
vertical resolution (depth resolution) is considered separately from the horizontal
resolution (plan resolution). Whereas the depth resolution depends primarily on the
signal bandwidth, the plan resolution depends on the attenuation of the material.

2.2.5.1 Depth Resolution


It is well known that the earth acts as a low pass filter. The filtering effect of the earth
is to stretch the transmitted wavelet, effectively decreasing the signal bandwidth. This
is known as wavelet dispersion and presents itself in the radar image as a
characteristic ‘blurriness’ that increases with depth. Because of wavelet dispersion,
the bandwidth of the received signal is the important consideration in the calculation
of depth resolution.

The required receiver bandwidth can be calculated from the power spectrum of the
received signal. An alternative technique uses the autocorrelation function. In most
cases, the required receiver bandwidth is greater than 500 MHz. A typical receiver
bandwidth of 1 GHz is required for a depth resolution of 5 to 20 centimeters,
depending on the value of εr.
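The quoted figures can be reproduced approximately with the common bandwidth-resolution rule Δd ≈ v/2B. This rule is an assumption on our part, not a formula given in the text, but with v = c/√εr it lands in the quoted 5–20 cm range for a 1 GHz bandwidth:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

# Depth resolution from the common approximation delta_d ~ v / (2B),
# with v = c / sqrt(eps_r). This formula is an assumption here, not one
# quoted in the text; eps_r values are illustrative.
def depth_resolution_m(bandwidth_hz, eps_r):
    v = C / math.sqrt(eps_r)
    return v / (2.0 * bandwidth_hz)

res_dry = depth_resolution_m(1e9, 4.0)    # ~7.5 cm for eps_r = 4
res_damp = depth_resolution_m(1e9, 9.0)   # ~5.0 cm for eps_r = 9
```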

Greater depth resolution is generally achieved in wetter materials; however, the
accompanying attenuation reduces the depth of penetration. Thus, for a given
frequency at a given depth, attenuation increases depth resolution but decreases depth
of penetration. The two effects balance out, so the effective depth resolution is
independent of material attenuation.

An approximate empirical relationship is that the depth resolution is of the order of
10% to 20% of the profiling range.

2.2.5.2 Plan resolution


Plan resolution only becomes important when seeking localized targets and when
distinguishing between more than one target at the same depth. The plan resolution is
controlled by the antenna characteristics and the signal processing techniques
employed. Plan resolution can be approximated by:

∆x = 4d √[ ln 2 / (2 + αd) ]  (2.12)
where:
d = depth to target
α = attenuation constant of material

Equation 2.12 implies that plan resolution improves in higher-attenuation material,
assuming there is sufficient signal under the prevailing clutter conditions; a high-gain
antenna is therefore usually required for good plan resolution. Plan resolution also
degrades with depth. In low attenuation material, synthetic aperture processing can be
applied to recover plan resolution.
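The behaviour described above can be sketched numerically. The exact form of equation 2.12 assumed below, ∆x = 4d√[ln 2/(2 + αd)], is our reading of the relation, and the depth and attenuation values are illustrative:

```python
import math

# Sketch of plan resolution versus depth d (m) and attenuation constant
# alpha (Np/m), assuming the form dx = 4d * sqrt(ln 2 / (2 + alpha*d)).
# That form, and all numeric values, are assumptions for illustration.
def plan_resolution_m(d, alpha):
    return 4.0 * d * math.sqrt(math.log(2.0) / (2.0 + alpha * d))

shallow_low_loss = plan_resolution_m(5.0, 0.1)    # baseline
shallow_high_loss = plan_resolution_m(5.0, 1.0)   # higher attenuation: sharper
deep_low_loss = plan_resolution_m(20.0, 0.1)      # greater depth: coarser
```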

2.2.6 Modulation technique


So far, no attention has been paid to the nature of the transmitted signal. The majority
of systems make use of a time-domain impulse signal. A sharp impulse of EM energy
is created through the discharge of a capacitor, and the received signal is recorded in
the off time of the transmitter. The received signal is a transient and will contain a
range of frequencies; the bandwidth of the transmitted signal depends on the pulse
duration. Depths greater than 30 meters require pulse durations of the order of 40 ns
(bandwidths of approximately 50 MHz). Very short-range precision probing uses
pulse lengths of the order of 1.0 ns or less (bandwidths of approximately 2 GHz). To put
these bandwidths into context, figure 2.6 shows the electromagnetic spectrum.

Figure 2-6: A portion of the electromagnetic spectrum. Ground penetrating radar operates in the order
of 50 MHz to 2 GHz. From (Rees, 2001).

Most time domain systems use amplitude modulation (AM). The received signal is
usually an average of a number of samples, which improves the signal-to-noise ratio.
However, the improvement per additional sample falls off rapidly as the number of
averaged signals increases. In practice, an optimum number of averages is 16
(Daniels, 1996).
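The √N improvement from averaging, and its diminishing returns, can be demonstrated with synthetic noise; the trace length and seed below are arbitrary choices for the demonstration:

```python
import math
import random

# Averaging N noisy repeats of the same trace reduces the noise amplitude
# by roughly sqrt(N), with diminishing returns as N grows. Trace length
# and seed are arbitrary demonstration values.
random.seed(0)

def stacked_noise_rms(n_stack, n_samples=4000):
    """RMS of averaged zero-mean unit-variance noise over n_stack repeats."""
    total = [0.0] * n_samples
    for _ in range(n_stack):
        for i in range(n_samples):
            total[i] += random.gauss(0.0, 1.0)
    avg = [t / n_stack for t in total]
    return math.sqrt(sum(a * a for a in avg) / n_samples)

rms_1 = stacked_noise_rms(1)     # ~1.0
rms_16 = stacked_noise_rms(16)   # noise down by ~sqrt(16) = 4
```

Going from 1 to 16 averages cuts the noise amplitude by a factor of about four; going from 16 to 256 would buy only another factor of four for sixteen times the acquisition effort, which is the diminishing return behind the practical optimum quoted above.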

Alternative systems use stepped-frequency, synthesized-impulse and frequency-
modulated (FMCW) designs.

FMCW is a preferred technique where targets are shallow and frequencies in excess
of 1 GHz are used. As the frequency of operation increases, it becomes progressively
easier to design and build FMCW systems with wide bandwidths, whereas it becomes
progressively more difficult to design AM systems.

The relative simplicity of concept and lower cost of production of AM systems have
been powerful factors contributing to their popularity in the past. As the cost of
components decreases, there may be an increase in the use of frequency-modulated
and stepped-frequency systems.

2.3 Borehole radar (BHR)


Borehole radar has never been a popular logging method (Siever, 2000). Two main
reasons account for this:
1. Drilling mud and other conductive materials in the borehole limit the
applications of BHR dramatically. The technique is limited to stable holes

where no hydrostatic pressure is necessary. Pure water has to be able to
replace drilling mud, as the high conductivity of drilling mud attenuates the
signal. Van Brakel et al. (2002) discuss the effect of wet drilling on borehole
radar performance.
2. Special logging cables are required. Optic fibre or co-axial cables are
generally used in BHR. This incompatibility hinders the use of GPR. A tool is
required which can utilise standard logging cables and cable heads.

Despite these limitations, two environments have proven to be especially suited to
BHR: the dry-salt environment, and the hard-rock deep-mine environment. As a
consequence, specialist BHR has been used successfully to image salt domes (Holser,
1972) and to delineate ore bodies in mines.

BHR has been found to be valuable in the South African deep-mine environment, in
particular for the mapping of the gold-bearing Ventersdorp Contact Reef (VCR).
Detailed orebody delineation on a mine scale is usually carried out by interpretation of
drillhole core and mine cuttings (Turner, 2000). The use of BHR is complementary to
delineation by drilling and the two approaches can be very powerful when combined
effectively. The use of BHR in mining results in improved mine plans, reduced
dilution and improved ore recovery.

Mine planning within the deep mining environment requires a geophysical technique
that will address the following geological problems:
i. Rapid detection and delineation of faults exhibiting throws greater than 2
meters
ii. Dykes
iii. Water fissures
iv. Hanging wall parting planes
v. Rolls – Undulations in the VCR

With the exception of (v), the above geological features cause mine hazards. In many
cases, these hazards result in fatalities. The features are
randomly located and often have varied orientations, making them unpredictable. A
high-resolution tool is required to image the ore body and its surroundings ahead of
mining.

In March 1998, the industry-wide ‘DEEPMINE’ research program launched a project
to investigate the applicability of two geophysical techniques to predict the ore body
geometry ahead of mining:
i. Electromagnetic techniques
ii. Seismic techniques

BHR was identified as the most immediately applicable EM technique for the
following reasons. Boreholes provide access to a ‘clean’ environment, free from the
interference due to metallic support structures and mining-induced fracture zones
associated with mine tunnels at depths greater than ±1.5 km. Resolution of less than
1 meter can be obtained with BHR, which is better than the resolution achievable
with radio wave tomography.

The host quartzites, typical of the South African gold-mining environment, are highly
resistive, with resistivity values in excess of 1000 Ωm. Although the VCR itself is
not a good dielectric reflector, the lava overlying the VCR is a good dielectric and
acts as a marker horizon. AX boreholes (diameter 48mm), typically 125m long, are
routinely drilled into the VCR to probe suspected structures and sample the reef.
Many of these holes can be used for BHR allowing the reef to be imaged (Simmat,
2002).

Tricket et al. (2000) carried out two case studies in the South African deep mine
environment. The results confirmed that the VCR acts as a good 10-100MHz BHR
target. Current systems allow up to 1000 m of borehole to be logged, with a range of
penetration of approximately 85 m. Long inclined boreholes (LIBs), used to probe
undeveloped blocks of ground, are ideal for BHR logging, as they are drilled semi-
parallel to the reef.

The Bushveld platinum mines of South Africa have similarities with the gold mine
environment. The platinum mines use smaller diameter exploration drill holes (38mm
wide), requiring an especially compact system design. Investigations are being made
into the use of BHR to delineate potholes, which are characterized by (usually) drastic
dips in the platiniferous horizons (Vogt, 2002).

2.3.1 The Aardwolf BR40


The two data sets being processed in this project were collected from mines in the
Witwatersrand area using a BHR developed by the CSIR, the ‘Aardwolf BR40’. The
instrument is a slimline, omnidirectional borehole radar capable of fitting into 38 mm
diameter boreholes. The instrument had to meet the following requirements of the
deep-mine environment (Vogt, 2002):
1. Slimline design – the radar probe is required to fit, without snagging, into EX
(38 mm diameter) boreholes, typical of the Bushveld platinum mine
environment. The gold mine environment uses more forgiving AX (48 mm
diameter) boreholes.
2. Temperature resistance – the BHR has to be able to operate in virgin rock
temperatures of 50°C–70°C.
3. Robustness – the underground environment is generally humid and wet, with
corrosive water. A resistant, watertight casing is required.
4. Non-gravity-feed winch – the probe has to be usable in holes drilled
horizontally or upwards.

The Aardwolf BR40 has the following specifications:

• Both transmitter and receiver antennas are dipole antennas
• Transmitter capable of 90 MHz bandwidth
• The receiver can stack a maximum of 256 traces; four 256-stacked
traces can be displayed per second
• Instantaneous sampling

2.3.2 Directionality Problems


Eisenburger and Gundelach (Eisenburger, 2000) describe the concept of a directional
BHR antenna. A cross-loop antenna is used, which requires significant radial
dimensions. Increased directional resolution requires a larger diameter. The problem
facing BHR is that it is difficult to build thin, efficient directional antennas. As a
result, most slimline borehole radars like the Aardwolf BR40 are cylindrically
omnidirectional, and cannot discriminate the sources of reflections in a 3D environment.

Vogt (2002) recognizes the importance of this in interpretation of BHR data from the
mine environment. He states that in the absence of directional data it is vital to
interpret the radar image in three dimensions using as much a priori information as
possible. The processed BHR image is inserted into a 3D environment with
information obtained from 3D surface seismics. By rotating the BHR data around the
borehole, strong reflections in the data might be correlated with known or inferred
reflectors.

Turner et al. (2000) discuss the use of drill hole information and the logic of specular
reflection (i.e. angle of incidence equals angle of reflection), in defining the direction
of a reflection.

Simmat et al. (2002) describe a technique that uses the curvature of the borehole
trajectory to provide directional information. They conclude that if the target field is
sparsely populated and the trajectory of the borehole is accurately known, then
curvature can be used to break the symmetry of a synthetic aperture radar
reconstruction, and distinguish between objects on the target plane that lie to the
convex or the concave side of the borehole.

Van Dongen et al. (2002) have designed and developed a directional BHR; however,
they do not specify the dimensions of the instrument.

2.4 Summary
GPR has found a range of applications in geophysical prospecting, civil engineering
and archaeology. In South Africa it has been found to be particularly useful in the
deep-mine environment, where it is used for mine planning, helping to delineate the
orebody and avoid hazards. The typical exploration drillholes are 38 mm–48 mm in
diameter, requiring a slimline borehole probe. This restricts antenna design and, as a
result, most borehole radars for in-mine use are omnidirectional. It is important to
take this into account during the processing phase: as much a priori information as
possible must be used in interpreting the data in order to quantify the direction of a
particular reflection with respect to the borehole. A problem in processing BHR data
is system noise in the late-time portion of the signal. The non-stationary nature of a
typical borehole radar trace means that traditional Fourier methods will perform
poorly in identifying and removing noise from the data. The similarity of seismic
data to GPR data suggests that the methods applied in seismics should be successful
in GPR with minor modifications. One such method is the discrete wavelet transform,
which has proven successful in enhancing seismic data. The theory of the
discrete wavelet transform will be discussed in chapter 3.

Chapter 3 - Theory
3.1 The Fourier transform
The aim of signal analysis is to extract relevant information from a signal by
transforming it in some manner (Rioul, 1991). It can be viewed as a communication
problem. The objective is to make the interpreter aware of the information content of
the data. In many geophysical applications, the frequency content of a signal is
important for interpretation; for example, noise from power lines in aeromagnetic
surveys can be filtered out by removing energy at the associated frequency. Filtering
out low frequency information and retaining the higher frequencies can enhance
details such as dykes. Many filters are more easily applied in the frequency domain.
Some, while being theoretically possible in the time domain are impossible to use in
practice (Cooper, 2001).

The Fourier transform allows a signal x(t) to be transformed from its time domain
representation to its frequency domain representation as follows:

X(f) = ∫_{−∞}^{+∞} x(t) e^{−2iπft} dt  (3.1)

where x(t) is assumed to be a stationary signal; that is, its properties do not evolve in
time. The coefficients of X(f) define the relative contribution of the power of a
particular frequency f to the signal x(t). They are derived as shown in equation 3.1 by
the inner product of the signal with a complex exponential basis function. The
complex exponential can be rewritten as

e^{−2iπft} = cos(2πft) − i sin(2πft)  (3.2)

Thus the basis for the Fourier transform is a combination of real cosine and imaginary
sine waves. The Fourier coefficients of X(f) describe the amount of each basis
function represented in the signal and hence provide a measure of the frequency
content of the signal (Ridsdill-Smith, 2000). The basis functions of the Fourier
transform are orthogonal; this implies that the signal is represented efficiently with
minimum redundancy. The inverse transform is defined as

x(t) = ∫_{−∞}^{+∞} X(f) e^{2iπft} df  (3.3)

In addition to the stationarity assumption, the Fourier transform assumes the signal is
smooth, continuous and infinite. Any abrupt change in the signal’s characteristics will
be averaged out over the whole frequency axis in X(f). This is easily illustrated;
consider the Fourier power spectrum of two sine waves shown in figure 3.1 below.
The addition of a single spike drastically changes the power spectrum.

Figure 3-1 Example illustrating how the addition of a single spike to a signal alters the power
spectrum. From (Cooper, 2001).
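The effect shown in figure 3.1 is easy to reproduce. The sketch below uses a naive O(N²) DFT (adequate for a short demonstration) on an illustrative two-sine signal: the clean spectrum is concentrated in four bins, while a single time-domain spike raises the level everywhere:

```python
import cmath
import math

# Naive DFT power spectrum, O(N^2) - fine for a short demonstration.
def power_spectrum(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) ** 2 for f in range(n)]

n = 128
clean = [math.sin(2 * math.pi * 5 * t / n) + math.sin(2 * math.pi * 20 * t / n)
         for t in range(n)]
spiked = list(clean)
spiked[40] += 10.0  # a single spike in the time domain

p_clean = power_spectrum(clean)
p_spiked = power_spectrum(spiked)

# The clean spectrum lives in bins 5 and 20 (and their images 123 and 108);
# summing every other bin measures the background level, which the spike
# raises across the whole frequency axis.
peaks = (5, 20, 108, 123)
background_clean = sum(p for f, p in enumerate(p_clean) if f not in peaks)
background_spiked = sum(p for f, p in enumerate(p_spiked) if f not in peaks)
```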

It is clear from figure 3.1 that non-stationary signals require more than the Fourier
transform can offer. A method is needed that will introduce time dependency, while
still maintaining linearity. This can be achieved by ‘windowing’ the signal x(t), into
portions in which the signal is approximately stationary. Alternatively, the sine wave
basis functions can be modified to be more concentrated in time. This is known as
introducing compact support and is achieved by the short time Fourier transform.

3.2 The Short-Time Fourier transform


A two-dimensional time-frequency representation S(t,f), is needed to represent the
non-stationary signal x(t). The concept is similar to a musical score, which shows
notes of particular frequencies played in time. Gabor was the first to define S(t,f), by
adapting the Fourier transform as follows. Consider a signal x(t) and assume it is
stationary when viewed through a window g(t) of limited extent, centred at time
location τ. The Fourier transform (equation 3.1) of the windowed signal x(t)g*(t−τ)
yields the Short-Time Fourier Transform (STFT) (Rioul, 1991)

STFT(τ, f) = ∫ x(t) g*(t − τ) e^{−2iπft} dt  (3.4)

which maps the signal into a two-dimensional function in a time-frequency plane (τ,f).

The analysis of a signal in this way depends critically on the choice of the window
g(t). There is a trade-off between time and frequency resolution. Consider the ability
of the STFT to distinguish between two sinusoids. Given a window function g(t) and
its Fourier transform G(f), we define the ‘bandwidth’ of the filter ∆f as

∆f² = ∫ f² |G(f)|² df / ∫ |G(f)|² df  (3.5)

where the denominator is the energy of g(t). Two sinusoids will be discriminated only
if their frequency difference is greater than ∆f.

The spread in time of the filter is defined in a similar way:

∆t² = ∫ t² |g(t)|² dt / ∫ |g(t)|² dt  (3.6)

where the denominator again is the energy of g(t). Two pulses in time can only be
discriminated by the STFT if they are more than ∆t apart. The resolution of time and
frequency is lower-bounded; one has to measure at least one period in order to
establish the frequency. This can be formulated as the uncertainty principle known
from Quantum mechanics as the Heisenberg inequality:

1
∆t∆f ≥ (3.7)

The uncertainty principle implies that one can trade time resolution for frequency
resolution and vice versa.

An important property of the STFT is that once a window has been chosen, the
time-frequency resolution given by equations 3.5 and 3.6 is fixed over the entire time-
frequency plane. A characteristic scale is introduced, illustrated by a breakdown of the
time-frequency plane shown in figure 3.2. A signal analysed by the STFT, will allow
each component to be analysed with good time resolution or good frequency
resolution, but not both.

Figure 3-2 Breakdown of the time-frequency plane for the STFT. The tiles represent the coverage in the
time-frequency plane of a given basis function. Note the fixed resolution in both time and frequency.
From (Rioul, 1991).
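Equation 3.4 can be sketched as a discrete windowed DFT. The Gaussian window and the test signal below are illustrative choices; note that the window length, once chosen, fixes both time and frequency resolution, as discussed above:

```python
import cmath
import math

# A discrete STFT sketch: a fixed-length Gaussian window slides along the
# signal and a DFT is taken of each windowed segment. Window shape, length
# and hop are illustrative choices.
def stft(x, win_len, hop):
    n = len(x)
    half = win_len // 2
    window = [math.exp(-0.5 * ((i - half) / (win_len / 6.0)) ** 2)
              for i in range(win_len)]
    frames = []
    for start in range(0, n - win_len + 1, hop):
        seg = [x[start + i] * window[i] for i in range(win_len)]
        frames.append([abs(sum(seg[t] * cmath.exp(-2j * math.pi * f * t / win_len)
                               for t in range(win_len)))
                       for f in range(win_len // 2)])
    return frames

# A tone that switches frequency halfway through: the STFT localizes the
# switch to within about one window length.
n = 256
x = [math.sin(2 * math.pi * (4 if t < n // 2 else 16) * t / 64) for t in range(n)]
frames = stft(x, win_len=64, hop=32)
first = frames[0].index(max(frames[0]))   # dominant bin in the first frame
last = frames[-1].index(max(frames[-1]))  # dominant bin in the last frame
```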

3.3 The wavelet transform


3.3.1 The Continuous Wavelet Transform
To overcome the problem of the characteristic scale introduced by the STFT, one can
let the resolution of ∆f and ∆t vary in the time-frequency plane, introducing a multi-
resolution analysis.

In the continuous wavelet transform (CWT) this is achieved by translating and


rescaling a basis function h(t), which is termed the basic wavelet or mother wavelet.
Intuitively, by increasing the central frequency of the mother wavelet, the time
resolution is increased. Consequently the frequency resolution ∆f is reduced. Thus it
can be said that ∆f is proportional to f, or

∆f / f = c  (3.8)

where c is a constant. The transform can then be thought of as a filter bank composed
of band-pass filters with constant relative bandwidth. This is known as constant-Q
analysis in the signal processing community. Instead of the frequency response of the
analysis filter being regularly spaced over the time-frequency plane as in the STFT
case, they are regularly spread in a logarithmic scale.

CWT analysis allows arbitrarily good time resolution at high frequencies and
arbitrarily good frequency resolution at low frequencies. Using higher frequencies in
order to increase time resolution can eventually separate two very close short pulses.

Scaling of the mother wavelet h(t) is achieved as follows

ha(t) = (1/a^p) h(t/a)  (3.9)

where a is the scale factor; a < 1 results in compression of the mother wavelet and
a > 1 stretches the mother wavelet. The factor 1/a^p is used for energy normalization
and controls the stretching in the vertical direction. If p > 0, the effect of this term is
to compress the signal vertically whenever it is stretched horizontally, and vice versa
(p < 0 is not used). Typically p = ½, but different values may be used depending on
the chosen wavelet.

Time localization is achieved by translating the wavelet through the data. Rescaling
the wavelet and repeating the process achieves frequency localization. Formulating
this process leads to the continuous wavelet transform

CWT(τ, a) = (1/a^p) ∫_{−∞}^{+∞} x(t) h*((t − τ)/a) dt  (3.10)

Figure 3-3 The CWT illustrated schematically. (a) The wavelet is convolved with the signal. It is then
rescaled and the convolution repeated (b). From (Cooper, 2001).

Figure 3.3 above illustrates the process schematically. Since the same prototype
wavelet is used at all scales the analysis is self-similar. There exist a wide variety of
wavelets h(t). They can be thought of as constituting a toolbox. Depending on the job
at hand certain tools will be more appropriate than others. This allows a certain degree
of customisation of the CWT. Section 3.7 will discuss wavelet properties and
commonly used wavelets are listed in section 3.8. It is sufficient to mention now that
h(t) need not be a complex function, allowing us to deal only with real-valued
transforms.
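Equation 3.10 can be implemented directly (if slowly) for a real-valued wavelet. The sketch below uses the Mexican-hat wavelet as an example of a real mother wavelet, with p = ½; the effective-support cutoff of 5a is a convenience assumption:

```python
import math

# The Mexican-hat (second derivative of a Gaussian) mother wavelet,
# chosen here as an illustrative real-valued h(t).
def mexican_hat(t):
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

# Direct (slow) CWT per equation 3.10 with p = 1/2. The wavelet is
# truncated at |t| < 5a, an assumption that its support is effectively zero
# beyond that.
def cwt(x, scales):
    n = len(x)
    out = []
    for a in scales:
        half = int(5 * a)
        row = []
        for tau in range(n):
            acc = 0.0
            for t in range(max(0, tau - half), min(n, tau + half + 1)):
                acc += x[t] * mexican_hat((t - tau) / a)
            row.append(acc / math.sqrt(a))
        out.append(row)
    return out

# A lone spike responds most strongly at small scale, at its own location.
x = [0.0] * 64
x[32] = 1.0
coeffs = cwt(x, scales=[1.0, 4.0])
peak_small = max(range(64), key=lambda i: abs(coeffs[0][i]))
```

Translating the wavelet gives the time localization (the τ loop), while rescaling and repeating gives the frequency localization (the a loop), exactly as described above.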

3.3.2 Frequency and scale


In wavelet analysis it is not common to talk about frequency; the term ‘scale’ is used
instead. Scale in wavelet analysis is used in a similar fashion to the scale in
geographical maps: large scales correspond to a global view, whereas small scales
correspond to more detailed views.

Scale is linked to frequency in that it depends on the frequency of the wavelet being
used. Since wavelets usually contain power at more than one frequency, we define the
centre frequency, which captures the wavelet’s main oscillations. The centre
frequency is defined by associating the wavelet with a purely periodic signal of
frequency fc. Figure 3.4 illustrates the process.

Figure 3-4 Defining the centre frequency for (a) Daubechies 2 wavelet (b) Coiflet 1 wavelet. From
(Mathworks, 2002).

If the centre frequency of a wavelet is known, a ‘pseudo’ frequency can be defined
which is related to scale as follows:

fa = fc / (a∆)  (3.11)

where fc is the centre frequency of the wavelet, a is the scale, and ∆ is the sampling
period.
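Equation 3.11 in code. The centre frequency of 0.667 used below is approximately that of the Daubechies-2 wavelet of figure 3.4; the 1 ns sampling period is an illustrative value:

```python
# Equation 3.11: converting a wavelet scale to a 'pseudo' frequency.
# fc ~ 0.667 approximates the Daubechies-2 centre frequency; the 1 ns
# sampling period is an illustrative choice.
def pseudo_frequency(centre_freq, scale, sampling_period):
    return centre_freq / (scale * sampling_period)

# With 1 GHz sampling (1 ns period), scale 8 maps to roughly 83 MHz.
f_a = pseudo_frequency(0.667, 8.0, 1e-9)
```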

3.3.3 Wavelet scalograms


The wavelet scalogram is defined as the square modulus of the CWT. It is the
distribution of energy in the time-scale plane and is expressed as power per frequency
unit. The energy of the signal is distributed unevenly over the time-scale plane as
shown in figure 3.5.

Figure 3.5a shows the influence of the signal’s behaviour on the scalogram around
t = t 0 . Note that the analysis is limited to a cone and is therefore very localized
around t 0 for small scales. This is known as the cone of influence. The width of the
cone depends on the compact support of the wavelet used at each scale (See section
3.7). It is interesting to compare the scalogram to the equivalent spectrogram used for
the STFT. Figure 3.5b shows that the region of influence of the STFT is constant over
all frequencies.

Since the frequency response of the wavelet transform is logarithmic in frequency, the
area of influence of some pure frequency f 0 in the signal increases with f 0 in the
scalogram (figure 3.5c). In the spectrogram of the STFT, the influence is constant
over all frequencies as demonstrated in figure 3.5d.

Figure 3-5 Comparison of the spread in time and frequency for the CWT and STFT. The regions of
influence of a Dirac pulse at t=t0 are shown in a) for the CWT and b) for the STFT. The influence of
three sinusoids of frequencies f0, 2f0, 4f0 is shown in c) for the CWT and d) for the STFT. From
(Rioul, 1991).

The CWT contains a high degree of redundancy, meaning that more information is
retained than is necessary to reconstruct the original signal. This redundancy makes
the CWT particularly suited to analysis by scalograms. Trends are reinforced and
information is made more visible. This is especially true in the analysis of subtle
information. Thus, the analysis gains in readability and in ease of interpretation what
it loses in terms of memory requirements. Figure 3.6 below illustrates this. In figure
3.6a the scalogram is computed with a non-redundant transform. Notice the loss in
readability. Figure 3.6b illustrates how the transient character of signals is highlighted
in the CWT.

Figure 3-6 Comparison of scalograms calculated from (a) the DWT, and (b) the CWT. From
(Mathworks, 2002).

3.3.4 Discrete and continuous sampling


In computation, every signal is sampled discretely, so why is the term continuous
wavelet transform used? The continuous wavelet transform is distinguished from the
discrete wavelet transform (discussed in section 3.4) by the range of scales and
positions it operates at. The continuous wavelet transform is continuously scalable
and translatable whereas the discrete wavelet transform (DWT) is not.

As discussed in section 3.3.3, the CWT is redundant. This redundancy comes at great
computational cost, hence the CWT is not suited to filtering or data compression.

It turns out that there is a natural way in which to sample a signal, defined by
Nyquist’s rule (Rioul, 1991). Two scales a0 < a1 roughly correspond to two
frequencies f0 > f1. The wavelet coefficients at scale a1 can therefore be subsampled
at (f0/f1)th the rate of the coefficients at scale a0. The wavelet is sampled as follows

    h_{j,k}(t) = a₀^(−j/2) h(a₀^(−j) t − kT)        (3.12)

where j and k are integers. The resulting wavelet coefficients are

    c_{j,k} = ∫ x(t) h*_{j,k}(t) dt        (3.13)

The reconstruction problem is to find a0, T and h(t) such that x(t) is defined by a set of
orthonormal basis functions hj,k(t)

    x(t) ≈ c Σ_j Σ_k c_{j,k} h_{j,k}(t)        (3.14)

where c is a constant which does not depend on the signal. If a0 is close enough to 1
and if T is small enough, then the wavelet functions are overcomplete, and signal
reconstruction takes place within non-restrictive conditions on h(t). If the sampling is
sparse, a0=2, then a true orthonormal basis will be obtained only for very special
choices of h(t).

In summary, for equation 3.14 to work there is a trade-off between redundancy and
restrictions on h(t). If the signal is highly oversampled, only mild restrictions are
placed on h(t). If on the other hand, the signal is sampled close to the critical
sampling, then h(t) is highly constrained. If a0 is chosen to be 2, the sampling is
known as dyadic sampling (figure 3.7). Wavelet basis functions are chosen with
scales and positions based on powers of two. If dyadic sampling is chosen, the
analysis will be much more efficient and just as accurate, provided the wavelet h(t) is
chosen carefully. This type of analysis is known as the discrete wavelet transform.

Figure 3-7 Dyadic sampling grid in the time-scale plane. Each node corresponds to a wavelet basis
function hj,k(t) with scale 2-j and shift 2-jk. From (Mallat, 1998).

Figure 3.8 shows a breakdown of the time-frequency plane for the discrete wavelet
transform. The blocks are of equal area, obeying Heisenberg’s uncertainty principle.
Higher frequencies have better spatial resolution and lower frequencies have better
frequency resolution. Compare this with the breakdown of the time-frequency plane
for the STFT.

Figure 3-8 Breakdown of the time-frequency plane for the discrete wavelet transform.From (Cooper,
2001)

3.4 The discrete wavelet transform


The discrete wavelet transform (DWT) uses dyadic sampling and requires an
orthogonal or biorthogonal wavelet to represent the data. This results in computational
efficiency at the expense of a more restrictive set of conditions on the choice of
wavelets and a loss of translational invariance: the DWTs of a signal and of a
time-shifted version of that signal are not simply shifted versions of each other.
The process by which the DWT operates is termed ‘decomposition’.

3.4.1 Wavelet Decomposition


In the late seventies and early eighties, two coding methods were developed
independently, the multiresolution pyramid, and subband coding (Rioul, 1991). Both
techniques involve decomposing the signal into an approximation (large scale) and
detail (small scale) using a two-channel filter bank. In subband coding, the
approximation is calculated and downsampled by 50%. The detail is then calculated
and also downsampled by 50%. The downsampling is necessary to avoid redundancy.
The DWT follows the same principle. Figure 3.9 shows the signal decomposition for a
sine wave with added noise. The filters are shown as a lowpass filter for the
approximation and a highpass filter for the detail. Note that, because both channels
are downsampled, the total number of output samples is roughly the same as the
original number of samples.

The actual lengths of the detail and approximation might be slightly more than half
the length of the original signal. This has to do with the filtering process, which is
implemented by convolving the signal with a filter. The convolution ‘smears’ the
signal, introducing extra samples into the result.

Figure 3-9 DWT signal decomposition of a sinewave with added noise (S) into approximation (cA) and
detail (cD). From (Mathworks, 2002).
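For the Haar filter pair (chosen here purely for illustration; the figure is computed with a different wavelet), one level of this decomposition reduces to a scaled pairwise average and a scaled pairwise difference, each downsampled by two:

```python
import math

def haar_dwt_level1(x):
    """One level of the DWT with the Haar pair: the approximation cA is a
    scaled pairwise average (lowpass), the detail cD a scaled pairwise
    difference (highpass), each downsampled by two."""
    s = 1.0 / math.sqrt(2.0)
    cA = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    cD = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return cA, cD

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA, cD = haar_dwt_level1(x)
# len(cA) == len(cD) == 4: together the two halves preserve the sample count.
```

With the two-tap Haar filters there is no convolution smearing, so the two halves add up exactly to the original length; longer filters produce the slight excess described above.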

The process can be repeated by successively decomposing each new approximation,


breaking the signal down into lower resolution components. The process is shown
schematically as a wavelet decomposition tree in figure 3.10.

Figure 3-10 Typical decomposition tree for the DWT. From (Mathworks, 2002).

3.4.2 Wavelet synthesis


The original signal can be reconstructed or synthesized by the application of the
inverse discrete wavelet transform (IDWT). Where the DWT involved downsampling,
the IDWT requires upsampling to reconstitute the original signal. This is achieved by
inserting zeros between samples.

The wavelet reconstruction process is shown schematically in figure 3.11. What


would happen if zeros were inserted in place of one of the blocks of the
decomposition tree? If the details were set to zero before synthesis, a smooth
approximation of the original signal would be reconstructed.
approximations may be set to zero, resulting in only the detail being reconstructed.
The number of levels excluded will determine the degree of filtering. In multi-level
analysis, there are several ways to re-assemble the original signal. This flexibility is
useful for compression and denoising of signals. Denoising will be discussed in detail
in section 3.9.

Figure 3-11 Reconstruction (synthesis) process performed by the IDWT. From (Mathworks, 2002).
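Continuing the illustrative Haar example (an assumption for compactness; the thesis processing used other wavelets), the inverse transform interleaves the upsampled channels, and zeroing the detail before synthesis yields the smooth approximation described above:

```python
import math

def haar_dwt_level1(x):
    # Forward one-level Haar DWT (pairwise average and difference).
    s = 1.0 / math.sqrt(2.0)
    return ([s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)],
            [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)])

def haar_idwt_level1(cA, cD):
    """Inverse one-level Haar DWT: upsample both channels (interleave)
    and apply the reconstruction filters."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(cA, cD):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA, cD = haar_dwt_level1(x)
exact = haar_idwt_level1(cA, cD)                 # perfect reconstruction
smooth = haar_idwt_level1(cA, [0.0] * len(cD))   # details zeroed
# smooth is the pairwise-average version of x: [5, 5, 11, 11, 7, 7, 5, 5]
```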

3.4.3 Aliasing and quadrature mirror filters


In the wavelet decomposition, the downsampling process usually results in aliasing of
the original signal. The reconstruction filters can be designed carefully in order to
cancel out this aliasing effect. The low and high pass decomposition filters together
with their associated reconstruction filters, form a system known as quadrature mirror
filters. It turns out that the quadrature mirror filter dictates the shape of the wavelet
used in the analysis. The most important constraint on the filter coefficients is that of
perfect reconstruction. The perfect reconstruction conditions are explained in
(Strang, 1996), p. 103–113.

These conditions, along with the orthogonality conditions, place stringent
requirements on the choice of filters. The requirements are addressed further in
section 3.7.
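The filter relationships can be checked numerically. The sketch below builds the Daubechies 2 lowpass filter from its closed form, derives the highpass partner by the "alternating flip", and verifies three identities (unit energy, double-shift orthogonality, and lowpass/highpass orthogonality) that underlie perfect reconstruction; this is only an illustration, not the complete set of conditions given in (Strang, 1996):

```python
import math

# Daubechies 2 lowpass (decomposition) filter from its closed form.
r3, s = math.sqrt(3.0), 4.0 * math.sqrt(2.0)
h = [(1 - r3) / s, (3 - r3) / s, (3 + r3) / s, (1 + r3) / s]

# The highpass partner of the quadrature mirror pair is the
# "alternating flip" of the lowpass: g[k] = (-1)^k * h[N-1-k].
g = [((-1) ** k) * h[len(h) - 1 - k] for k in range(len(h))]

energy = sum(c * c for c in h)                      # unit energy: equals 1
dbl_shift = sum(h[k] * h[k + 2] for k in range(2))  # double-shift orthogonality: 0
cross = sum(hk * gk for hk, gk in zip(h, g))        # lowpass-highpass orthogonality: 0
```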

3.4.4 The dilation equation and the wavelet equation


At this point it is useful to consider how wavelets are related to the coefficients of the
quadrature mirror filters used in the DWT.

The wavelet is determined from the high-pass coefficients h(k) using the wavelet
equation

    ψ(t) = 2 Σ_k h(k) φ(2t − k)        (3.15)

where φ is the scaling function. The factor of 2 in the argument compresses the
scaling function, and −k shifts it.

The low-pass coefficients g(k) are linked to the scaling function by the dilation
equation

    φ(t) = 2 Σ_k g(k) φ(2t − k)        (3.16)

Note the appearance of two timescales in equation (3.16), making it a ‘two-scale


difference equation’. This joint appearance of t and 2t is the novelty of the dilation
equation, and also provides a source of great difficulty when constructing appropriate
filter coefficients. The implications of two scales are (Strang, 1996):
i. There is not necessarily a solution for Φ(t)
ii. The solution is zero outside the interval 0 ≤ t < N
iii. The solution seldom has an elementary formula
iv. The solution is not likely to be a smooth function

As a consequence, not all wavelets have scaling functions (e.g. Mexican Hat and
Morlet wavelets), so not all can be used by the DWT.

The dilation equation demonstrates that any function that can be represented by the
scaling function Φ(t) can also be represented as a combination of the dilated scaling
functions Φ(2t) (Ridsdill-Smith, 2000). Thus the space of functions spanned by the
scaling functions Φ(t), is ‘nested’ within the space of functions spanned by Φ(2t).
This idea of nested spaces forms the essence of multiresolution analysis, characteristic
of wavelets. The Wavelet function Ψ(t) is also nested within the space of functions
spanned by the scaling functions Φ(2t).

So in the DWT, the basis is described by two functions, the scaling function Φ, and
the wavelet function Ψ, which are generated from the low-pass and high-pass filter
coefficients respectively. Furthermore, these functions are nested within the dilations
of the scaling function.

3.4.5 The fast wavelet transform


In 1988, Mallat designed a fast DWT implementation known as the fast wavelet
transform (FWT). The filter scheme is known classically in the signal processing
community as a two channel subband coder using quadrature mirror filters
(Mathworks, 2002). The filter coefficients are arranged in a transformation matrix as
shown in figure 3.12. Every odd row consists of the scaling (lowpass) coefficients
interleaved with zeros. These rows produce a weighted moving average, forming the
approximation term of the transform. The even rows consist of the wavelet (highpass)
coefficients and zeros. These rows form a weighted moving difference, which
constitutes the detail of the signal (Press, 1992).

Figure 3-12 Transformation matrix of the FWT. Odd rows consist of the low pass filter. Even rows
consist of the high pass filter. Together, these rows form a quadrature mirror filter. From (Press, 1992).

The odd and even rows form what is known in the signal processing community as
quadrature mirror filters. A pyramidal decomposition tree (figure 3.10) is used as
described for the DWT in section 3.4. The downsampling is done as part of the
application of the transformation matrix. The FWT is an O(n) operation, compared
with O(n log n) for the fast Fourier transform.

3.5 The stationary wavelet transform


As mentioned briefly in section 3.4, the DWT is not translation invariant. For
example, the DWT of a sine wave and the DWT of the same sine wave shifted by π/4
will not be shifted versions of each other. This shift variance results from the
downsampling
process, sometimes referred to as decimation. In the DWT, even samples are retained
and the odd samples are thrown out.

One can just as easily choose to retain the odd samples and throw away the even
samples. Considering both types of decimation, if the signal is being decomposed to
level N, there will be 2^N possible decompositions of the signal. This transform is
called the ε-decimated DWT.

The ε-decimated DWT maintains translation invariance by averaging together the


various ‘shifts’ of the DWT during synthesis. It is easy to modify the DWT algorithm
in order to calculate the ε-decimated DWT, but this requires many iterations. Another
simple but efficient algorithm has been developed, known as the stationary wavelet
transform (SWT).

For a level-1 decomposition, the SWT algorithm is identical to the DWT except that
no downsampling occurs. Thus the approximation and detail are the same length as
the original signal. The next step involves upsampling the filter coefficients before
calculating the next approximation and detail. The generalized step is illustrated in
figure 3.13 below for coefficients at level j.

Figure 3-13 Schematic of the generalised decomposition step at level j for the SWT. From (Mathworks,
2002).

The most important application of the SWT is denoising where it is important to


retain the stationarity of a signal.
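A minimal sketch of the idea (assuming, for compactness, the Haar pair and circular boundary handling; neither choice is from the thesis processing): the same filters as the DWT are applied without downsampling, so shifting the input merely shifts the coefficients.

```python
import math

def haar_swt_level1(x):
    """One level of an undecimated (stationary) Haar transform with
    circular boundaries: no downsampling, so cA and cD keep the
    original signal length."""
    s = 1.0 / math.sqrt(2.0)
    n = len(x)
    cA = [s * (x[i] + x[(i + 1) % n]) for i in range(n)]
    cD = [s * (x[i] - x[(i + 1) % n]) for i in range(n)]
    return cA, cD

def shift(x, k):
    # Circular shift by k samples.
    return x[-k:] + x[:-k]

x = [1.0, 5.0, 2.0, 8.0, 3.0, 7.0, 4.0, 6.0]
cA, cD = haar_swt_level1(x)
cA1, cD1 = haar_swt_level1(shift(x, 1))
# Translation invariance: shifting the signal just shifts the coefficients,
# i.e. cA1 == shift(cA, 1) and cD1 == shift(cD, 1).
```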

3.6 Wavelet packet analysis


In conventional wavelet analysis, the signal is split into an approximation and detail
as discussed in section 3.4. Each successive level of the decomposition operates only
on the approximation, resulting in a decomposition tree similar to figure 3.10. This
allows n+1 possible reconstructions for an n level decomposition.

In addition, it is possible to operate on the detail at each level, splitting it into an


approximation and further detail as shown in figure 3.14. This type of analysis is
referred to as wavelet packet analysis and yields more than 2^(2^(n−1)) possible
reconstructions! The time-scale plane breakdown is presented in figure 3.15 for a
comparison with that from standard wavelet analysis (figure 3.8).

Figure 3-14 Typical decomposition tree from wavepacket analysis. Each level is broken into an
approximation and detail. From (Mathworks, 2002).

Figure 3-15 Breakdown of the time-frequency plane for wavepacket analysis. Compare with that for
standard wavelet analysis (figure 3.8). From (Mallat, 1998).

Choosing the best possible synthesis presents an interesting problem. Often entropy is
used to decide whether to proceed further with the decomposition of a particular
branch. The entropy of the wavelet coefficients at each node is calculated. To decide
which branches of the decomposition tree to use, the algorithm will proceed to the
next level only if the sum of the entropies of the next level is less than the entropy of
the current node. An example is presented in figure 3.16 below.

Figure 3-16 Using entropy to determine whether to decompose the next level. The 'best tree' is
displayed on the right. From (Mathworks, 2002).
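The split-or-stop rule can be sketched in a few lines. The unnormalised Shannon-style entropy below is one common choice of cost function, and the function names are illustrative:

```python
import math

def shannon_entropy(coeffs):
    """Unnormalised Shannon-style entropy of a coefficient vector:
    E = -sum(c^2 * log(c^2)) over the nonzero coefficients."""
    return -sum(c * c * math.log(c * c) for c in coeffs if c != 0.0)

def keep_split(parent, child_a, child_d):
    """Split a node only if the two children together are 'cheaper' to
    describe than the parent, i.e. their summed entropy is lower."""
    return (shannon_entropy(child_a) + shannon_entropy(child_d)
            < shannon_entropy(parent))

# A constant node: its Haar split packs the energy into two approximation
# coefficients and zero details, lowering the entropy, so the split is kept.
split_it = keep_split([1.0, 1.0, 1.0, 1.0],
                      [math.sqrt(2.0), math.sqrt(2.0)], [0.0, 0.0])
```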

Wavelet packet analysis is particularly useful in image compression and signal


denoising, which will be further discussed in section 3.9. The algorithm is an n log n
process, like the fast Fourier transform.

3.7 Wavelet properties


The mother wavelet h(t) used in the CWT need only satisfy the admissibility
condition in order to ensure reconstruction of a signal from its wavelet coefficients.
The admissibility condition is given by

    ∫_{−∞}^{+∞} |ψ̂₀(ω)|² / |ω| dω < ∞        (3.17)

An obvious consequence of equation 3.17 is that the Fourier transform of the mother
wavelet, ψ̂₀(ω), must vanish at ω = 0 in order to satisfy the admissibility condition.

This is equivalent to stating that the wavelet must have a zero mean, resulting in the
characteristic oscillatory shape of admissible wavelets (Ridsdill-Smith, 2000). For a
wavelet to be of practical use, it must also be localized in the space-frequency plane.

3.7.1 Describing wavelets


As mentioned in section 3.4.3, the wavelet shape is derived from its quadrature mirror
filter. As a result, wavelets are defined by their mathematical properties. The
constraints imposed on the choice of wavelet are more stringent for the DWT as
mentioned in section 3.4. The section begins with the requirements of the CWT.

3.7.2 Vanishing moments


Due to the admissibility condition, a wavelet must possess at least one vanishing
moment in order to have a zero mean. The number of vanishing moments a wavelet
possesses is an important property to consider when choosing a wavelet. A wavelet
with n vanishing moments is orthogonal to a polynomial of degree n-1. Therefore a
polynomial of degree k will show zero detail when analysed with a wavelet having
k+1 vanishing moments. The effect of the number of vanishing moments on the
analysis of a quadratic is illustrated in figure 3.17. Note how the detail decreases with
each vanishing moment until n=3, where the detail is practically zero (there is some
edge effect).

Figure 3-17 The effect of increasing the number of vanishing moments of the Daubechies wavelet. A
quadratic a) is analysed with b) 1 c) 2, and d) 3 vanishing moments. The result of d) is practically zero,
except for some edge effects. From (Cooper, 2001).
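This annihilation property is easy to verify numerically. The sketch below is an illustration using the Daubechies 2 filter, which has two vanishing moments: the highpass output of a straight line is zero away from the edges.

```python
import math

# Daubechies 2 lowpass filter from its closed form, and its highpass
# partner via the alternating flip g[k] = (-1)^k * h[N-1-k].
r3, s = math.sqrt(3.0), 4.0 * math.sqrt(2.0)
h = [(1 - r3) / s, (3 - r3) / s, (3 + r3) / s, (1 + r3) / s]
g = [((-1) ** k) * h[len(h) - 1 - k] for k in range(len(h))]

# Two vanishing moments: g is orthogonal to constants and to straight
# lines, so the detail of a degree-1 polynomial vanishes.
ramp = [0.5 * t + 3.0 for t in range(32)]
detail = [sum(g[k] * ramp[n - k] for k in range(4))
          for n in range(3, 32)]        # edge-free (valid) outputs only
peak = max(abs(d) for d in detail)      # zero to machine precision
```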

3.7.3 Regularity
Wavelets can be thought of as studying the regularity of a signal. Usually a wavelet is
chosen for a particular application to suit the signal being analysed. A smooth signal
requires a smooth wavelet for best results and vice versa. The regularity is a
quantitative measure of the smoothness of a signal. A simple definition follows
(Mathworks, 2002).

The regularity s of a signal f will be defined. If the signal is s-times continuously


differentiable at x0 and s is an integer ≥ 0 , then the regularity is s.

If the derivative of f of order m behaves like |x − x₀|^r locally around x₀, then
s = m + r with 0 < r < 1.

The regularity of f in a domain is equal to that of its least regular point. The greater s,
the more regular the signal.

3.7.4 Support
The support of a wavelet is characterised by the speed of convergence to 0 of the
wavelet function ψ(t), or of its Fourier transform ψ̂(ω), as the time t or the
frequency ω goes to infinity. This quantifies both
time and frequency localizations. If an orthogonal wavelet has n vanishing moments,
then it must have a support of at least 2n-1.

The term ‘compact support’ is often used in describing some wavelets. Compact
support means the wavelet has finite support, i.e. it is identically zero outside a
finite interval. Not all wavelets have compact support (e.g. the Meyer wavelet).
Wavelets having a compact
support are better used in analysis on a local scale. This is the case for Haar and
Daubechies wavelets, for example.

3.7.5 Further constraints imposed on DWT wavelets


The DWT requires more stringent conditions to be placed on the choice of wavelet. In
particular, the wavelet is required to be orthogonal or biorthogonal. The wavelet must
also have a corresponding scaling function (section 3.4.4).

3.7.5.1 Orthogonal wavelet bases


Two functions are orthogonal if

    ⟨f_n, f_m⟩ = ∫_a^b f_n(x) f_m*(x) dx = 0   if n ≠ m        (3.18)

and orthonormal if, in addition,

    ⟨f_n, f_n⟩ = 1        (3.19)

Orthogonal basis functions offer distinct computational advantages as they avoid


redundancy, since all terms with n ≠ m are zero. Orthogonal basis functions allow a
signal to be reconstructed in only one way.

In order to form an orthogonal basis, a wavelet must be orthogonal to its own dilations
and translations. For example consider the Haar wavelet given by
ψ (t ) = (+1,+1,−1,−1) shown in figure 3.18a. Its dilation is given by ψ (2t ) and its
dilated translation by ψ (2t − 1) (figures 3.18b and 3.18c).

Figure 3-18 a) The Haar wavelet is orthogonal to b) its own dilations, and c) its own translations. After
(Cooper, 2001).

Using equation 3.18 to check for orthogonality:

    ∫_{−∞}^{+∞} ψ(t) ψ(2t) dt = (1 × 1) + (1 × −1) = 0

    ∫_{−∞}^{+∞} ψ(t) ψ(2t − 1) dt = (−1 × 1) + (−1 × −1) = 0        (3.20)

    ∫_{−∞}^{+∞} ψ(2t) ψ(2t − 1) dt = (1 × 0) + (0 × −1) = 0

Equations 3.20 confirm that the Haar wavelet is orthogonal to its translations by k
and its dilations by 2^j.
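The checks in equation 3.20 amount to discrete inner products, which can be reproduced directly with the sampled wavelets of figure 3.18:

```python
def inner(u, v):
    # Discrete version of the inner product in equation 3.18.
    return sum(a * b for a, b in zip(u, v))

# Sampled Haar wavelet, its dilation, and its dilated translation:
psi      = [1, 1, -1, -1]   # psi(t)
psi_2t   = [1, -1, 0, 0]    # psi(2t)
psi_2t_1 = [0, 0, 1, -1]    # psi(2t - 1)

checks = (inner(psi, psi_2t),
          inner(psi, psi_2t_1),
          inner(psi_2t, psi_2t_1))   # all zero: mutually orthogonal
```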

Other orthogonal wavelet families include: Daubechies, Coiflets, Meyer, and Symlets.

3.7.5.2 Biorthogonal Wavelet basis


Except for the Haar wavelet, a symmetrical wavelet will not allow perfect
reconstruction if the same symmetrical wavelet is used in both the decomposition and
synthesis of a signal. This problem can be avoided by using a different wavelet for the
reconstruction process.

Using different wavelets also allows the useful wavelet properties for analysis (e.g.
oscillations, zero moments) to be incorporated in the decomposition wavelet. Useful
synthesis properties (e.g. regularity) can be incorporated into the reconstruction
wavelet. The two wavelets have to be related by duality in the following sense:

    ∫ ψ̃_{j,k}(t) ψ_{j′,k′}(t) dt = 0   for j ≠ j′
                                                        (3.21)
    ∫ φ̃_{0,k}(t) φ_{0,k′}(t) dt = 0   for k ≠ k′
This is known as the biorthogonality condition. Figure 3.19 below shows two
biorthogonal wavelet pairs. The numbers refer to the vanishing moments of the
wavelet pair.

Figure 3-19 Two wavelets from the Biorthogonal wavelet family. From (Mathworks, 2002).

An important caveat of biorthogonal wavelets is that Parseval’s theorem does not
hold in biorthogonal systems: the power measured in the wavelet domain is not the
same as the power in the time domain.

3.7.6 Complex wavelets


Complex wavelets can be designed to have a non-oscillatory envelope in the space
domain. This makes the interpretation of wavelet coefficients easier. They also allow
phase information to be displayed.

3.7.7 Custom wavelets


Wavelets may even be customized to suit the data. The Perrier wavelet has been
constructed to optimise analysis of potential field data.

3.8 Choice of wavelet


This section explains the choice of wavelets used in analysing the project data. Matlab
has 15 different wavelet families to choose from. The collection is best thought of as a
toolbox. For efficiency the correct tool should be chosen for the job. One would not
hammer a nail into a plank using a screwdriver. Tables 2 and 3 at the end of this
section summarise the distinguishing properties of the 15 available wavelet families in
Matlab.

A process of deduction was used to choose two wavelets, one for use with the discrete
and stationary wavelet transforms, and one for the complex continuous wavelet
transform.

3.8.1 DWT and SWT


The bulk of the project made use of the stationary wavelet transform. This restricted
the choice of wavelet to one that has a scaling function Φ. Despite this restriction,
there are still eight available wavelet families in Matlab that may be used. As
mentioned previously the choice of wavelet depends strongly on the nature of the
data. Wavelets are classed by their mathematical properties and these can be used as a
guide when selecting a particular wavelet.

A desirable property for the wavelet is compact support. A wavelet with compact
support is not affected by frequencies in the data outside the support width of the
wavelet. Wavelets with smaller support width allow analysis on a more local scale. A
small support width is advantageous in extracting the detail of a signal. This is a
useful property when denoising signals. The requirement of a compactly supported
orthogonal wavelet restricts the available wavelet families to: Haar, Daubechies,
Symlets and Coiflets.

The next selection criterion used was regularity. Regularity is essentially a measure of
smoothness. The wavelet’s regularity should be chosen to suit the signal. More
simply, a smooth signal should be analysed with a smooth wavelet and vice versa. A
borehole radar trace is shown in figure 3.20 below. The signal can be described as
smooth; consequently a wavelet with high regularity is needed. This immediately
eliminates the Haar wavelet, which is not regular. Regularity is difficult to calculate
for wavelets that don’t have an associated function. No precise measurement of
regularity was found in the literature for the Symlets and Coiflets families. The higher
order Daubechies wavelets have an approximate regularity of 0.2N where N is the
order. This gives a maximum regularity of 2 for the Daubechies wavelets. Judging by
eye, the Coiflets and Symlets wavelets appear to be of comparable regularity.
Another selection criterion will be used to further narrow the selection.

Figure 3-20 Typical borehole radar trace taken from dataset Bhr2.

The signal in figure 3.20 shows some degree of symmetry (depending on the chosen
origin). If the wavelet is chosen to suit the signal, a symmetric wavelet is required.
This requirement eliminates the Daubechies family, whose wavelets are far from
symmetrical.

In comparing the Coiflets and Symlets families, there is a trade-off between the
number of vanishing moments, and the smallest possible support width. As mentioned
in section 3.7, the number of vanishing moments of a wavelet is an important
consideration. For a complicated signal like the one in figure 3.20, the highest
possible number of vanishing moments is required.

The Coiflets family has vanishing moments for the scaling function Φ as well as for
the wavelet Ψ. The Symlets family has vanishing moments only for the wavelet Ψ. On
these grounds the Coiflets family is the more attractive choice. Table 2.1 below lists the
distinguishing properties of the highest order wavelets from the Symlets and Coiflets
families. Their wavelet and scaling functions are plotted in figure 3.21.

Table 2-1 Distinguishing properties for the highest order Symlets and Coiflets wavelets.

    Property                  Coiflets    Symlets
    Order                     5           8
    Support width             29          15
    Vanishing moments (Ψ)     10          8
    Vanishing moments (Φ)     9           0

Although the Coiflets 5 wavelet has the larger number of vanishing moments, the
Symlets 8 wavelet does have a smaller support width. Initially, the data was processed
using both wavelets. The comparison showed no visible difference between the two
wavelets.

Figure 3-21 Wavelet functions for (a) Coiflets 5 wavelet, and (b) Symlets 8 wavelet. Scaling functions
for (c) Coiflets 5 wavelet, and (d) Symlets 8 wavelet.

Taking the number of vanishing moments as being more important than the support
width, the Coiflets 5 wavelet was selected as the most suitable. It was used for both
the denoising and wavelet detrending of the radar data.

3.8.2 Complex continuous wavelet transform


In section 4.7, the complex continuous wavelet transform was used to calculate a
complex trace that could be used to define instantaneous attributes. This restricted the
choice of wavelet to complex wavelets. The four available complex wavelet families
in Matlab are: complex Gaussian, complex Morlet, Shannon, and frequency B-spline.

None of the complex wavelets are compactly supported. The complex Morlet and
complex Gaussian families appear to be more regular than the Frequency B-Spline
and Shannon families. The choice between the complex Gaussian and complex Morlet
family was made in a heuristic manner. The shape of the complex Gaussian wavelet is
more similar to the trace shown in figure 3.20 than the Morlet. Figure 3.22 shows a
plot of the complex Gaussian 2 wavelet. The order of the wavelet corresponds to the
nth derivative of the complex Gaussian function. If ‘n’ is chosen to be even, the real
part will be symmetric and the imaginary part antisymmetric. The converse is true for
‘n’ odd.

Figure 3-22 Complex Gaussian wavelet showing (a) Real part (b) Imaginary part (c) Modulus, and (d)
Phase.

The complex Gaussian 2 wavelet was decided upon so that the imaginary part is
antisymmetric. This best represents the radar signal and since we are interested in
extracting the complex attributes of the signal it is better to match the imaginary part
of the wavelet to the signal. Figure 3.22 shows the modulus and phase of the wavelet.
They are simple functions. A simple response is preferred, as this will simplify
interpretation of the calculated instantaneous attributes.

Table 2-2 Summary of wavelet family properties, Part 1. From (Mathworks, 2002). The table compares
the morl, mexh, meyr, haar, dbN, symN, coifN and biorNr.Nd families against the following
properties: crude; infinitely regular; arbitrary regularity; compactly supported orthogonal;
compactly supported biorthogonal; symmetry; asymmetry; near symmetry; arbitrary number of
vanishing moments; vanishing moments for φ; existence of φ; orthogonal analysis; biorthogonal
analysis; exact reconstruction; FIR filters; continuous transform; discrete transform; fast
algorithm; explicit expression (for splines).

Table 2-3 Summary of wavelet family properties, Part 2. From (Mathworks, 2002). The table compares
the rbioNr.Nd, gaus, dmey, cgau, cmor, fbsp and shan families against the same properties as
Table 2-2, with the additional properties: complex valued; complex continuous transform;
FIR-based approximation.

3.9 Wavelet De-noising


Two of the most popular applications of wavelets in signal and image processing are
de-noising and compression. The techniques are closely related as both involve
removing unnecessary information from a signal. This section will outline the
principles of one-dimensional signal denoising, focusing on the specific techniques
used to denoise borehole radar data.

The general denoising procedure involves three steps:

i. Decompose the signal. This involves choosing a decomposition algorithm,


choosing a suitable wavelet, and choosing the level of decomposition.
ii. Threshold the wavelet coefficients. This involves selecting a threshold for
each level of the decomposition.
iii. Reconstruction. This involves computing the wavelet reconstruction using the
original approximation coefficients of the highest level N and the modified
detail coefficients of levels from 1 to N.

3.9.1 Decomposition
The stationary wavelet transform (section 3.5) was chosen for the decomposition. It is
particularly suited to denoising due to its shift invariance. Coifman and Donoho
(Coifman, 1995) state that the effect of the Gibbs phenomenon is far weaker in the
stationary wavelet transform than in the traditional DWT.

The Coiflets 5 wavelet was chosen for the decomposition, as discussed in section
3.8. Its advantages are compact support, regularity, near symmetry, and a high
number of vanishing moments.

The optimum level of decomposition was chosen empirically. A maximum-level
decomposition was performed and the amplitudes of the detail coefficients were
examined. When the detail coefficient amplitudes become negligibly small, there is
no benefit in decomposing further. The calculated thresholds for the detail can also
be examined: where no coefficients are thresholded, it is pointless to decompose
further.

3.9.2 Thresholding
The thresholding of coefficients is the most important aspect of wavelet denoising.
The threshold selection process decides which coefficients to set to zero and which to
keep.

There are two principal approaches to thresholding: hard and soft. Hard
thresholding is the simplest case, where coefficients below a certain threshold are set
to zero. It can be described mathematically as

D(Y, λ) = Y   if |Y| > λ   (3.22)

D(Y, λ) = 0   otherwise

where λ is the threshold. In the case of soft thresholding, otherwise known as wavelet
shrinkage, the surviving coefficients are shrunk toward zero as follows.

D(Y, λ) = sign(Y)·(|Y| − λ)   if |Y| > λ   (3.23)

D(Y, λ) = 0   otherwise

The effect of hard and soft thresholding is most easily understood by considering the
thresholding of a straight line shown in figure 3.23.

Figure 3-23 Comparison of Hard, Soft and Qian thresholding on a straight line. From (Qian, 2002).
Soft thresholding is the preferred technique for denoising applications because it
reduces the effect of the Gibbs phenomenon; however, Qian (Qian, 2002) shows that
hard thresholding is best at preserving edges. He proposes a compromise between the
two extremes, called Qian thresholding, demonstrated by the green curve in figure
3.23 above.

Qian thresholding is given by the function

D(Y, λ) = Y·(|Y|^Q − λ^Q) / |Y|^Q   if |Y| > λ, and   (3.24)

D(Y, λ) = 0   otherwise.

If Q = 1, equation 3.24 is equivalent to soft thresholding, and as Q → ∞ it tends to
hard thresholding. Qian thresholding does indeed preserve edges; however, there is a
trade-off: the Qian-thresholded image is noisier than the soft-thresholded one. The
borehole radar traces do not have sharp edges, so soft thresholding was chosen.
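The three rules can be written out directly. A Python sketch (the thesis implementation is in Matlab), following equations 3.22 to 3.24:

```python
import math

def hard(y, lam):
    """Equation 3.22: keep the coefficient only if |y| exceeds lam."""
    return y if abs(y) > lam else 0.0

def soft(y, lam):
    """Equation 3.23 (wavelet shrinkage): survivors are shrunk
    toward zero by lam."""
    return math.copysign(abs(y) - lam, y) if abs(y) > lam else 0.0

def qian(y, lam, q):
    """Equation 3.24: q = 1 reproduces soft thresholding, and
    q -> infinity tends to hard thresholding."""
    return y * (abs(y) ** q - lam ** q) / abs(y) ** q if abs(y) > lam else 0.0
```

For example, qian(3.0, 1.0, 1) equals soft(3.0, 1.0) = 2.0, while qian(3.0, 1.0, 200) is numerically indistinguishable from hard(3.0, 1.0) = 3.0.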

The threshold value is calculated automatically using a statistical measure. Matlab
has four threshold selection rules, which are described below.

• Stein’s Unbiased Estimate of Risk (SURE) – Uses the quadratic loss function
to estimate the risk for a particular threshold value. The threshold value is
selected by minimising the risk.
• Fixed Form threshold – A fixed form (universal) threshold equal to
√(2·log(length(x))), scaled by the noise level, is applied.

• Heuristic Sure – Combines both the above selection rules. The Rigorous Sure
selection is used until the signal to noise ratio becomes very small. When such
a situation is detected, the fixed form thresholding is used. The switch is made
because the unbiased risk estimate is very noisy if the data vector has a small
l² norm. The Matlab Heuristic Sure threshold selection is based on Donoho
and Johnstone’s “Sureshrink” technique (Donoho, 1994). It is described as
being smoothness adaptive. If the signal contains a jump, this is preserved in
the reconstruction. Where the signal is smooth, the reconstruction will be as
smooth as the mother wavelet allows.
• Minimax – A fixed threshold is chosen to yield minimax performance for
mean square error against an ideal procedure. The minimax principle is used in
statistics to design estimators. Since the de-noised signal can be assimilated to
the estimator of the unknown regression function, the minimax estimator is the
option that realizes the minimum, over a given set of functions, of the
maximum mean square error (Donoho, 1992).

For a mathematical treatment of the aforementioned thresholding techniques, the
reader is referred to (Donoho, 1992; Donoho, 1994; Donoho, 1995; Mallat, 1992;
Ogden, 1995).
Ogden, 1995).

The Minimax and Rigorous Sure selection rules tend to be more conservative in
thresholding whereas the Fixed Form selection rule is more severe. The Heuristic Sure
selection is a compromise. It was found to be both effective and robust in the
denoising of the radar data.

3.9.3 Noise model selection


The standard noise model can be expressed as

s(n) = f(n) + σ·e(n)   (3.25)

where time n is equally spaced, e(n) is the noise model and σ is the noise level. In
the simplest model e(n) is Gaussian white noise and σ = 1 . The threshold selection
rules are based upon the Gaussian white noise model.

Deviations from Gaussian white noise can be dealt with by rescaling the output
threshold. Matlab has two rescaling options corresponding to unscaled white noise
and non-white noise.

If the model is unscaled white noise then σ ≠ 1 and the noise level σ must be
estimated. The first-level detail coefficients are essentially noise coefficients with
standard deviation equal to σ. A robust method has to be used to estimate σ for two
reasons: the first level may contain a small number of coefficients belonging to the
signal (provided f is sufficiently regular), and there are usually edge effects that are
pure artefacts of computation at the signal’s edges. It turns out that the median
absolute deviation of the coefficients provides a robust estimate of σ.
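This estimate, together with the fixed form rule of section 3.9.2, can be sketched in a few lines of Python (the thesis used Matlab's built-in rules; the function names here are illustrative):

```python
import math

def mad_sigma(detail):
    """Robust noise-level estimate from the first-level detail
    coefficients: median(|d|) / 0.6745, the Gaussian scaling of
    the median absolute deviation for zero-mean noise."""
    devs = sorted(abs(d) for d in detail)
    n = len(devs)
    med = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    return med / 0.6745

def fixed_form_threshold(detail, signal_length):
    """Fixed form (universal) threshold sigma * sqrt(2 log N),
    rescaled for unscaled white noise via the MAD estimate."""
    return mad_sigma(detail) * math.sqrt(2.0 * math.log(signal_length))
```

Because the median is used, a few large signal coefficients or edge artefacts mixed into the first-level detail barely affect the estimate.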

For non-white noise, the thresholds must be rescaled using a level-dependent
estimation of the noise level. A separate noise level σ_lev is calculated for each
level of detail independently. Once again the median absolute deviation is used as a
robust estimate of σ_lev.

It was found that the unscaled white noise model suited the radar data best.

3.9.4 Variance adaptive thresholding


Threshold selection using the above rules fails in cases where the noise variance is
non-stationary, i.e. the noise level is a function of time, σ (t ) . This problem is dealt
with by estimating the change points in the noise variance throughout the signal. The
signal is then thresholded independently in the intervals between change points.

The process first involves completely isolating the noise coefficients from the first
level detail. This is done by replacing the largest 2% of coefficients with the mean value
of the coefficients. The Matlab function wvarchg, based on the work by Marc
Lavielle, is then used to estimate the change points in the noise variance.

Marc Lavielle investigated the problem of estimating an unknown number of change
points in the marginal distribution of a sequence of dependent variables. He showed
that the number of change points could be estimated by minimising a penalised
contrast function. The method also applies to the case of a discretised non-parametric
distribution, as in the case of wavelet coefficients.

The method assumes that the number of segments K within a sequence Y of
observations is unknown, and that this number is bounded above by a known K_max.
The configuration of changes τ, the vector of parameters θ and the number of
segments K can be estimated by minimising a penalised contrast function
J̃_n(τ, θ, K), defined by

J̃_n(τ, θ, K) = Σ_{k=1}^{K} W_n(Y_k, θ_k) + β_n·K

for any K ∈ {1, 2, …, K_max}, and any (τ, θ) ∈ T_K × Θ_K.

W_n(Y_k, θ_k) is the contrast function computed over segment k of τ. The sequence
{β_n} is positive and tends to 0 as n tends to infinity. The parameter β_n sets a
trade-off between the fit to the observations and the size of the model, penalising an
excessive number of segments.

The minimisation problem is then formulated as

J̃_n(τ̂_n, θ̂_n, K̂_n) ≤ J̃_n(τ, θ, K),   for all (τ, θ, K) ∈ T_K × Θ_K × {1, 2, …, K_max}

The rate of convergence is ||τ̂_n − τ*|| = O_P(n⁻¹) and does not depend on the
covariance structure of the process.

Once the change points have been calculated, the signal is split into segments having
uniform noise variance. These segments are thresholded independently. They are then
recombined before wavelet reconstruction.
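The idea can be sketched as follows in Python (the helper names are hypothetical; the thesis used Matlab's wvarchg). A single variance change point is located by minimising a penalised contrast over the two candidate segments, and each segment is then soft-thresholded with its own MAD-derived threshold:

```python
import math

def _variance(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg) / len(seg)

def find_change_point(coeffs, min_seg=8):
    """One-change-point simplification of the penalised-contrast
    idea: minimise n_left*log(var_left) + n_right*log(var_right)
    over all admissible split positions t."""
    n, best_t, best_cost = len(coeffs), None, float("inf")
    for t in range(min_seg, n - min_seg):
        cost = (t * math.log(_variance(coeffs[:t]) + 1e-12)
                + (n - t) * math.log(_variance(coeffs[t:]) + 1e-12))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

def interval_threshold(coeffs, t):
    """Soft-threshold each interval with its own MAD-based
    universal threshold, then recombine."""
    out = []
    for seg in (coeffs[:t], coeffs[t:]):
        sigma = sorted(abs(c) for c in seg)[len(seg) // 2] / 0.6745
        lam = sigma * math.sqrt(2.0 * math.log(len(coeffs)))
        out += [math.copysign(abs(c) - lam, c) if abs(c) > lam else 0.0
                for c in seg]
    return out
```

On a sequence whose variance jumps partway through, as at the saturated/unsaturated transition seen in the radar traces, find_change_point recovers the jump, and each side is then thresholded against its own noise level rather than a single global one.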

3.9.5 Reconstruction
The thresholded detail coefficients are then used with the original approximation to
reconstruct the signal.

3.9.6 One-dimensional or two-dimensional?


Wavelet denoising can be performed using one-dimensional or two-dimensional
wavelets. The 2D wavelet decomposition splits each detail into three components:
horizontal, vertical and diagonal. Each detail component contains coefficients from
the entire image. Consequently, the thresholding process in 2D is applied on a more
global scale when compared with 1D thresholding. Furthermore it is not possible to
implement variance adaptive thresholding. 2D denoising treats the data as an image,
and is better suited to removing noise from photographs and for compression
purposes.

Zhang and Ulrych (2002) suggest that the shortfall of 2D wavelet denoising in seismic
data is that the wavelet is typically a tensor multiplication of its 1D wavelet
equivalent, emphasizing vertical, horizontal and diagonal dependence of the data.
They introduce a hyperbolic 2D wavelet suited to the hyperbolic trajectories observed
in seismic data. The advantage of using such a wavelet in 2D denoising is that the
trace-to-trace coherence is exploited. Building such a wavelet for use in Matlab is a
complicated process and is beyond the scope of the project. Without such a
customized wavelet, 2D denoising is unsatisfactory.

3.10 Application of wavelet de-noising in seismic and radar processing

This section will outline the current state of affairs in the application of wavelet
denoising to seismic and radar data.

Duval and Galibert (Duval, 2002), describe the use of the stationary wavelet
transform in coherent noise filtering of seismic data. In seismics, coherent noise
occurs in the form of ground-roll and air-waves. Ground-roll is introduced by the
interference of the surface wave with the reflected wave. The air-wave is generated if
the seismic source is in contact with the atmosphere. Both ground roll and air-waves
overlap in frequency with the reflected waves. Coherent noise presents one of the
most complicated issues in the processing of land seismic data. Wavelet filtering has
recently begun to challenge the popular and robust frequency-wavenumber filter.

Duval and Galibert show that use of the stationary wavelet transform is an
improvement over the discrete wavelet transform, being less subject to aliasing and
the Gibbs phenomenon.

Yu et al. (Yu, 2002) treat two separate cases of the ground roll problem: when there is
partial overlap in frequency between signal and noise, and where there is full overlap
in frequency between signal and noise.

In the first case Yu et al. use the stationary wavelet transform in order to denoise the
signal. They define two hard threshold selection techniques designed specifically to
remove ground roll and air-waves.

In the second case, where there is full overlap in frequency, the situation is more
complex. A common approach is simply to mute the whole ground roll region. Yu et
al. instead use coherent events outside the ground roll region as a target waveform. A
correlation function is then designed to search for reflections obscured by ground roll.
They wavelet filter the data recursively to generate a large number of filtered
versions. These are then compared via cross correlation to the target waveform. The
optimal solution is the one that provides the highest correlation value of the filtered
trace with the target waveform.
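The selection step can be sketched as follows (a Python sketch with illustrative names, not Yu et al.'s code). Each recursively filtered version is scored by its normalised cross-correlation with the target waveform, and the best-scoring version is kept:

```python
import math

def ncc(a, b):
    """Zero-lag normalised cross-correlation of two equal-length traces."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_filtered_version(candidates, target):
    """Pick the filtered trace that correlates best with the target
    waveform -- the selection criterion described above."""
    return max(candidates, key=lambda c: ncc(c, target))
```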

Zhang and Ulrych (Zhang, 2002) state that since seismic data exhibit a high trace-to-
trace coherence, denoising methods that are based on individual traces cannot
attenuate noise optimally. Yu et al. used cross correlation to overcome this problem,
however Zhang and Ulrych use a customized 2D wavelet to take into account the
coherent nature of the data. The 2D wavelet is based upon the hyperbolic trajectory
typical of a seismic reflection. 2D wavelet denoising was performed both on synthetic
and real datasets. The denoising is especially simple and effective and ground roll was
attenuated. The technique does not perform well however if the reflection events are
not true hyperbola.

Pipan et al. (Pipan, 2002) have applied wavelet de-noising to airborne radar data from
the Vostok area (Antarctica). The technique was compared with Fourier and
polynomial techniques. Results show that wavelet de-noising was more effective in
removing both the low frequency (wow effect) and high frequency noise. The de-
noising was performed with the discrete wavelet transform using a Daubechies
wavelet at the second level.

Chapter 4 – Data Processing and Results


Two borehole radar datasets were provided by the CSIR for processing, ‘Bhr1’ and
‘Bhr2’. This chapter outlines all work performed on the data. The first two sections
deal with data preparation. Sections 4.4 and 4.5 compare Fourier and wavelet
denoising. Section 4.6 briefly outlines space domain filtering. Section 4.7
describes the attribute analysis which was performed both using the Hilbert transform
and the complex continuous wavelet transform.

Data preparation is just as important as the applied transform. Poorly prepared data
will cause artefacts to be created during the transformation. Frequency domain and
wavelet transforms require delicate preparation of the signal in order to preserve its
original frequency characteristics. For this reason the following two sections outline
the techniques that were used to prepare the data. These include removing trends as
well as padding the signal to the required number of samples.

The associated functions were written in Matlab and their code is included in the
appendix.

4.1 Trend removal


Function ‘wavedetrend’

The Fast Fourier and Discrete Wavelet transforms assume a continuous signal of
infinite length. For all intents and purposes, one can imagine the input signal as
repeating itself infinitely. If the start and end points do not have the same value, there
will be a repeated discontinuity, which will introduce unwanted frequencies into the
analysis. Any regional trends in the data have the same effect of introducing spurious
frequencies, predominantly in the low frequency portion of the spectrum. If the
signal’s characteristics are to be correctly represented, care must be taken to remove
the trends in the signal and to ensure continuity at the boundaries.

Traces from Bhr1 showed a prominent trend, which obscured a lot of the detail and
noise in the data. Figure 4.1 shows two 50-bin histograms, one for each dataset. Bhr1
shows a nearly uniform distribution whereas Bhr2 shows a normal distribution. This
is because Bhr1 has been processed with histogram equalisation. The histogram
equalisation has introduced trends into the data that have to be removed before
processing. Figure 4.2 below shows a typical trace from Bhr1. A 6th degree
polynomial highlights the approximate trend. Higher order polynomials decayed too
rapidly at the end of the trace.

Figure 4-1 50-bin Histogram plot for datasets a) BHR1, and b) BHR2. Note the nearly uniform
distribution of BHR1 indicating that histogram equalisation was performed on the data.

Figure 4-2 Typical trace from BHR1 showing prominent trend. A 6th degree polynomial fit highlights
the approximate trend.

It is not a trivial matter to automatically remove the above non-linear trend. The 6th
degree polynomial fit shown in figure 4.2 above is a good approximation for most of
the curve, however it drops off too rapidly at the end. There is no guarantee that a 6th
degree polynomial will work as well for all other traces. It is unlikely that this fit will
work at all for other datasets, as it is too specific.

A more adaptive approach was required. The simplest such approach is to apply a low
pass filter over each trace, obtaining a smooth approximation to the trace. The only
problem with this approach is selecting the appropriate frequencies to discard.
Various moving average kernels were also applied to the curve without success. A
different approach was needed. A discrete wavelet transform was performed on each
trace and the level five approximation was selected. The choice to take the level five
approximation was an empirical decision. It gave a smooth approximation,
highlighting the regional trend in the data.
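The operation amounts to subtracting a coarse wavelet approximation from each trace. A Python sketch using the Haar approximation (piecewise block means) in place of the Coiflets 5 approximation used by wavedetrend; the trace length is assumed divisible by 2^level:

```python
def wavelet_trend(trace, level=5):
    """Level-`level` Haar approximation: average pairwise `level`
    times, then hold each average over its 2**level source samples."""
    approx = list(trace)
    for _ in range(level):
        approx = [(approx[2 * i] + approx[2 * i + 1]) / 2.0
                  for i in range(len(approx) // 2)]
    block = 2 ** level
    return [a for a in approx for _ in range(block)]

def wavelet_detrend(trace, level=5):
    """Subtract the smooth approximation, leaving the detail."""
    trend = wavelet_trend(trace, level)
    return [x - t for x, t in zip(trace, trend)]
```

Unlike a fixed-degree polynomial fit, the approximation adapts to each trace, so the same setting can be applied across a whole dataset.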

The Coiflets 5 wavelet was chosen for the transform. The choice of this particular
wavelet is discussed in section 3.8.

Figure 4.3 below compares the wavelet approximation with the 6th degree polynomial
fit.

Figure 4-3 Typical trace from BHR1 comparing 5th level wavelet approximation to 6th degree
polynomial approximation.

It is clear that the data is better represented by the wavelet approximation. Figure 4.4
displays the above typical trace before and after wavelet detrending. Figure 4.5 shows
the entire Bhr1 dataset before and after wavelet trend removal illustrating its effect on
bringing out the detail of the dataset.

Figure 4-4 Wavelet detrending of typical trace. Detrending was performed using the 5th level
approximation.

Figure 4-5 Comparison between the Bhr1 dataset (a) before and (b) after trend removal. The detail
(and noise) is made clearer in the second image.

The 5th level approximation also performed well on the Bhr2 dataset. It should work
on most BHR data; however, it is suggested that it be tested on a few traces
individually before running the whole batch.

4.1.1 Discussion
There is a spotted stripe between traces 1035 and 1043. These traces were examined in
more detail in order to determine the nature of this effect.

Figure 4.6a shows a wiggle trace plot of the section from traces 1035 to 1045. Traces
1038 to 1042 show a sudden boost in the gain towards the end of the trace, which
could not be explained. It is possible that the gain settings were too high during
acquisition. Figure 4.6b confirms that the trends have been correctly removed,
resulting in traces with mean close to zero. Because of the saturation at the end of the
trace, when the section is imaged, the colourmap will assign white to the minima and
black to the maxima in the high amplitude portion of the signal, resulting in the
‘spotted’ appearance in figure 4.5. In conclusion, the portion of traces from 1038 to
1042 are bad traces. The effect of these bad traces may be suppressed by applying a
time-varying gain filter, such as those commonly applied in seismic data processing.

Figure 4-6 Wiggle trace plot of the portion of Bhr1 giving rise to the 'spotted' effect seen on figure 4.5,
(a) before trend removal and (b) after.

4.2 Padding techniques


As mentioned previously, the FFT and DWT see the input dataset as infinitely
periodic. For this reason the first and last points of each trace have to coincide,
guaranteeing that there will be no step discontinuity in the data when viewed by the
transform.

In addition the FFT and DWT use dyadic sampling as discussed in section 3.3.4. This
requires the input signal length to be a power of 2. Usually data is not so conveniently
sampled and has to be lengthened or truncated before running the DWT. The way in
which this is done is crucial. If a signal is carelessly prepared, severe aliasing will
result.

The SWT algorithm only requires the signal length to be divisible by 2^n, where n is
the level of decomposition. This is a distinct advantage, since very little padding is
usually required compared to the DWT, reducing computation time.
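The difference in required padding is easy to quantify. A small Python sketch (the helper names are illustrative):

```python
def dwt_padded_length(n):
    """Smallest power of two >= n (dyadic length for the DWT/FFT)."""
    p = 1
    while p < n:
        p *= 2
    return p

def swt_padded_length(n, level):
    """Smallest multiple of 2**level >= n, sufficient for an SWT
    decomposed to `level` levels."""
    step = 2 ** level
    return -(-n // step) * step   # ceiling division
```

A 1000-sample trace must grow to 1024 samples for the DWT, but needs no padding at all for a 3-level SWT (1000 is already divisible by 8); a 1001-sample trace needs only 1008.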

When performing the FFT, DWT or SWT in Matlab, the signal is automatically
extended to the required length. This must be kept in mind when applying the Matlab
transforms. It turns out that the default extension is to simply pad the signal with
zeros. In the SWT, a periodic extension is applied, but no effort is made to keep the
boundaries continuous. None of the automatic extension options ensure continuity at
the boundaries. It is therefore important to extend the signal oneself in order to
prevent aliasing of the signal.

4.2.1 Hermite cubic spline interpolation


The initial approach taken was to remove a small linear fit between the start and end
points and then pad the trace out with a cubic Hermite spline. The standard cubic
spline reacted badly to the flat sections in the high-gain part of the signal; for this
reason the Hermite cubic spline was used. Although the technique worked well for
most traces, it was not found to be robust enough for automation.

Figure 4.7b below shows a typical trace after the removal of the residual linear trend.

Figure 4-7 Typical trace from BHR1 showing a) non-linear trend removal, and b) residual linear trend
removal.

Figure 4.8b shows the entire BHR1 dataset after trend removal. The image looks
streaky with some prominent black stripes such as the portion of traces from 921 to
925.

Figure 4-8 The Bhr1 dataset (a) before and (b) after wavelet and residual linear detrending. Note the
resulting streaks.

Figure 4.9b shows a wiggle trace plot of the traces causing “streaking” in the region
921 to 925. Figure 4.9a illustrates these traces before trend removal. The large ‘kick’
at the end of the trace causes the endpoint to be far off zero. Since the linear removal
aims to shift the endpoints to zero, a large gradient results. When the linear trend is
removed, the mean of the trace is shifted to a negative value, resulting in the trace
being plotted as a dark streak in the image.

Figure 4-9 Traces 918 to 928 from BHR1 (a) before and (b) after wavelet and linear trend removal.

The problem can be overcome by applying an adaptive gain filter to the traces before
detrending. Such filters are commonly applied in seismic data processing. Coding the
filter is beyond the scope of the project and since code for such a filter was
unavailable, a different approach was employed. Also note that it is preferable to
implement adaptive gain control during acquisition to avoid aliasing from post
processing (Daniels, 1996).

4.2.2 Exponential decay taper


Function ‘expad’ and ‘expadx’
A simple and effective way to pad out the signal while ensuring continuity at the
boundaries is to apply a smoothly varying function at the edges. The effect is to gently
taper the signal to zero. Suitable functions are:

Cosine bell:        A′ = A·cos^n( π(x − x0) / (2(x1 − x0)) )   (4.1)

Exponential decay:  A′ = A·e^(−k(x − x0)²)   (4.2)

Butterworth:        A′ = A / (1 + (x/x0)^n)   (4.3)

(x0 is the boundary point and x1 is the last extended point)



The exponential decay function was used to pad the data to the required length. Any
of the above functions give similar results. Figure 4.10 below shows a typical trace
that has been padded using two different decay constants.

Figure 4-10 Typical trace after detrending. Trace has been padded to 256 samples using an
exponential decay function (Equation 4.2) . a) Decay constant k = 0.01 b) Decay constant k=0.6

It is important to choose the right decay parameter for the taper. If the taper is too
sharp, a discontinuity will remain, aliasing the high frequency portion of the spectrum.
A smooth taper like the one in figure 4.10a will add low frequency power to the
spectrum, as illustrated by the blue curve in figure 4.11. Figure 4.10a has an added
problem in that the gradient is discontinuous at the start of the taper; this will
introduce high frequencies into the data.

Figure 4.10b is a better choice of k, maintaining a balance between smoothness and
continuity. Its spectrum is plotted in red in figure 4.11.

Figure 4-11 Power spectra for the exponentially padded curves depicted in figure 4.10. Note the
aliasing of frequencies evident in the blue curve.

4.2.3 Symmetric padding


Function ‘swtpad’ and ‘swtpadx’
When using the SWT, a different padding technique was applied. The SWT requires
that the signal length be divisible by 2^n, where n is the level of decomposition.
Commonly, only a minimal number of samples are needed to fill out the signal.

When applying an exponential taper over short segments (for example 4 samples),
there are too few samples to fully represent the taper. As a result the taper is cut at the
border before it reaches zero, causing a discontinuity at the borders. The problem can
obviously be avoided by extending the signal further, to a power of 2. When the SWT
was run on traces extended in such a manner, however, aliasing resulted. This
motivated the exploration of alternative padding techniques.

The wavelet toolbox in Matlab has a built in signal extension function, wextend. It
allows extension by
i. Zero padding
ii. Periodic padding (wraparound)
iii. Symmetric padding (reflection)
iv. Smooth padding (Order 1 or 0)

The problems with using zero padding have already been discussed. Periodic
extension works well if the signal is genuinely periodic or close to it. It has the added
advantage of creating a circulant matrix of the signal. The DWT algorithm converts
the circulant matrix to a diagonal matrix, allowing for rapid computation. Smooth
padding fits a low order polynomial to the border data points and uses this to
extrapolate new points outside the signal. Order 1 works well in general for smooth
signals. It uses a linear extension to fit the first two and last two values. Order 0
(constant padding) smooth padding just repeats the first and last points. Figure 4.12
below illustrates the application of the first three methods.

Figure 4-12 Types of signal extension. Dashed line indicates padded section. From (Strang, 1996).

It is important to note that both zero-padding and periodic padding can introduce a
jump discontinuity if the first and last points of the signal are not equal. In theory, this
is avoided by using symmetric extension. It replicates boundary values symmetrically,
essentially reflecting the original signal. The problem with symmetric extension
however, is that it may introduce a discontinuity in the first derivative as seen in
figure 4.12. This is not as serious as the jump discontinuities caused by zero and
periodic padding. By default symmetric padding is used by the DWT algorithm at
every stage of the decomposition process. Symmetric padding works the best on
images.

Symmetric extension is an attractive choice in theory. In practice, however, symmetric
extension will still create a jump discontinuity at the edges if the signal is not padded
out to exactly twice its original length, as in figure 4.13. It is rarely the case that
signals need to be extended to exactly twice their length, so what advantage does
symmetric padding provide? By reflecting the signal, the same amplitude and
frequency characteristics of the signal are used to extend it. Aliasing will occur due to
the jump discontinuity at the boundary; however, the SWT is quite robust and exact
reconstruction is still possible.

Figure 4-13 Symmetric extension of trace 250 of dataset Bhr1. Signal has been extended to twice its
original length.
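A Python sketch of symmetric (half-point) extension, in the manner of Matlab's wextend in 'sym' mode; the signal is reflected about its last sample, bouncing back if the pad is longer than the signal itself:

```python
def symmetric_pad(signal, target_len):
    """Extend signal to target_len by symmetric reflection:
    ..., x[n-2], x[n-1] | x[n-1], x[n-2], ... (boundary sample repeated)."""
    out = list(signal)
    i, step = len(signal) - 1, -1
    while len(out) < target_len:
        out.append(signal[i])
        i += step
        if i < 0 or i >= len(signal):   # bounce at either end for long pads
            step = -step
            i += step
    return out
```

Doubling the length reproduces the mirror image exactly; any other target length leaves the residual jump at the far edge of the pad discussed above, which the SWT tolerates.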

4.2.4 Discussion
Wavelet detrending was found to be efficient in removing non-linear trends from the
data. In particular, it enhanced the Bhr1 dataset, bringing out the late time portion of
the image. An exponential taper was used to pad out the signal to the required length
and ensure continuity at the signal’s boundaries. The exponential taper has the
advantage of being robust when automated. The taper can easily be removed after
processing.

4.3 De-noising
Two different approaches were taken in denoising the dataset. The first is covered in
section 4.4 and tackles the data on a trace-by-trace basis. This approach is preferred as
there is no directional bias; the data is denoised along the same direction it was
collected. No significant improvement was made using wavelet denoising in this way.
The next approach, discussed in section 4.5, processes the data across traces. The
technique worked well, and directional effects were not significant.

Wavelet de-noising is known to be more efficient than Fourier filtering; however,
Fourier techniques are still popular. In both sections standard Fourier techniques are
used to denoise the data as a comparison.

Section 4.6 briefly looks at smoothing the data in the space domain. Once again this is
included as a comparison.

This section only focuses on the techniques used in denoising the data. The details of
data preparation were discussed in sections 4.1 and 4.2. All algorithms were
programmed as Matlab functions, and the program code can be found in the appendix.

4.4 Down trace


4.4.1 Wavelet denoising
Function ‘waveden’

The dataset was prepared using the wavedetrend function. Next, the data was padded
symmetrically using the function swtpad.

The SWT denoising algorithm decomposes each trace to its 3rd level approximation.
Further decomposition gave negligible improvement at greater computational cost.
The Coiflets 5 wavelet was used; section 3.8 discusses the choice of wavelet.

Figure 4.14 below shows a typical trace prepared for denoising.

Figure 4-14 Trace 250 from Bhr1 prepared for SWT denoising. Red line indicates approximate point
where signal changes from saturated to unsaturated.

The Aardwolf system was designed to have maximum possible gain. This improves
penetration, however the early arrivals are saturated and amplitude information is lost.
One expects to see a difference in the noise characteristics between the saturated and
unsaturated portion of the signal.

Figure 4.15 overleaf shows the level 1 detail for the above signal with Fixed Form
thresholding applied. As expected, the early time coefficients show a different
variance compared with the late time coefficients. For this reason variance adaptive
thresholding was applied (Section 3.9).

Figure 4-15 Level one detail for trace 250. Red lines indicate the fixed form threshold.
The variance adaptive algorithm identified a single change point. Figure 4.16 shows
the first level detail with interval dependent thresholding applied.

Figure 4-16 Trace 250 with interval dependent thresholding applied.


Note how the change point corresponds to the transition between saturated and
unsaturated portion of the signal. The late time portion of the signal is now effectively
thresholded.

Threshold selection was performed using fixed form thresholding. The different
available thresholding techniques are discussed in section 3.9.2. Fixed form
thresholding was found to set the highest threshold. Even so, very little of the first
level detail was excluded. Figure 4.17 shows each level before and after thresholding.

Figure 4-17 Reconstructed detail of trace 250 before (black) and after thresholding (red).

The thresholded coefficients were then reconstructed using the inverse SWT. Figure
4.18 compares the original and denoised traces.

Figure 4-18 Trace 250 before and after wavelet denoising.

The denoised trace (red) correlates well with the original trace, giving us confidence
in the shift invariance of the SWT. There are no drastic changes to the signal. Some of
the ‘spikiness’ has been removed, essentially smoothing the trace. Focussing on the
portion from samples 150 to 200, it is clear that smaller kinks in the signal have been
smoothed out nicely. It is extremely difficult to determine, without some control,
whether these kinks are reflectors or indeed noise.

Figure 4.19 shows the test block image before and after wavelet denoising. It is
immediately clear that not much has been removed.

Figure 4-19 Test block showing BHR1 data before and after trace-by-trace wavelet denoising.

4.4.2 Fourier denoising


Function ‘fftden’
For comparison, standard Fourier analysis was used to denoise the data on a trace-by-
trace basis. Once again the data was prepared by wavedetrend and then padded using
expad. The function fftden uses a simple low-pass filter algorithm.

Each trace was transformed into the frequency domain. The power spectrum for a
single trace is shown in figure 4.20 below.

Figure 4-20 Power spectrum for trace 250 from dataset Bhr1. The red dashed line marks the cutoff
frequency of 90 Hz used in the lowpass filtering. Note that the frequency values are relative and do not
represent true frequency characteristics of the signal.

The noise characteristically lies in the high frequency portion of the signal. A cut-off
frequency of 90 Hz was used. An exponentially tapered window was used to smoothly
remove power at unwanted frequencies, avoiding ringing due to the Gibbs
phenomenon. Figure 4.21 shows the trace before and after filtering.
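The filter can be sketched with a naive DFT in Python (this is not the thesis's fftden, which uses Matlab's FFT; bin indices here stand in for the relative frequencies of figure 4.20). Bins beyond the cutoff are attenuated with a smooth exponential roll-off rather than a brick wall:

```python
import cmath
import math

def lowpass(trace, cutoff_bin, taper=1.0):
    """Naive O(n^2) DFT low-pass: transform, taper bins above
    cutoff_bin with exp(-taper*(d - cutoff)^2) to avoid Gibbs
    ringing, then inverse transform. A sketch, not a production FFT."""
    n = len(trace)
    spec = [sum(trace[t] * cmath.exp(-2j * math.pi * f * t / n)
                for t in range(n)) for f in range(n)]
    for f in range(n):
        d = min(f, n - f)   # distance from DC; the spectrum is symmetric
        if d > cutoff_bin:
            spec[f] *= math.exp(-taper * (d - cutoff_bin) ** 2)
    return [sum(spec[f] * cmath.exp(2j * math.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]
```

Applied to a sum of a low- and a high-frequency sinusoid, only the low component survives; as noted below, anything sharing the stop band with the noise is removed along with it.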

Figure 4-21 Comparison of trace 250 before (black) and after (red) lowpass filtering.

The result is a smoothed version of the original trace. The problem with applying a
low pass filter is that there is no attempt to discriminate between the signal and the
noise. Useful detail may be removed with the noise. Figure 4.22 compares the Fourier
and wavelet algorithms using the same test trace.

Figure 4-22 Comparison between wavelet and Fourier denoising on trace 250.

It is clear that the wavelet technique is superior in preserving the signal information.
The Fourier filter has the effect of shifting peaks and does not preserve amplitude
information as well. Figure 4.23 shows the test image before and after low pass
filtering at 90 Hz. Despite the drawbacks of the low pass filtering, the Fourier
technique is effective in smoothing the image. Although detail is lost, the image is
made more pleasing to the eye. Filtering at lower frequencies is at the expense of
severe loss in detail.

Figure 4-23 Test image before and after low pass filtering at 90 Hz.

4.4.3 Discussion
Low pass filtering of the data results in a smoother, more pleasing image. However,
detail is lost in the process. This can be seen in figure 4.21, which shows a single
filtered trace. Figure 4.22 compares the filtered result from both wavelet and Fourier
processing. The wavelet processing is superior in that it preserves the signal, however
it does not filter enough of the signal. This is evident in figure 4.19, which shows the
wavelet filtered result on a number of traces.

We have to conclude that the thresholding technique is not efficient enough in
identifying the noise. The reason is that the noise characteristics are similar to the
signal characteristics, a common problem in seismic and GPR processing. The fixed
form thresholding is designed to pick out white noise. Its failure in this application
implies either that the noise is not white, or that the noise is buried in the signal,
making it hard to identify.

One way to attack the problem is to use an adaptive, target-oriented method. Yu et
al. (2002) describe such an adaptive algorithm. The technique is discussed in section
3.9.4. It may be adapted for GPR data, however the implementation would be beyond
the scope of the project.

4.5 Across trace


In an ideal scenario, one expects fairly good correlation between adjacent traces. A
section at any particular sample should appear relatively smooth. Figure 4.24 below
shows a time slice from the early arrivals. The signal is relatively smooth as expected.

Figure 4-24 Time slice across sample 10 from dataset Bhr1. The relatively smooth signal indicates
good correlation between traces.

Figure 4.25 is a time slice through the late time arrivals of the same dataset. The
section appears noisy, the ‘noise’ essentially being due to poor correlation between
traces. By smoothing the signal in figure 4.25, most of the noise will be filtered out.
The problem with such an approach is the directional bias with which the dataset is
processed; horizontal reflectors will be preferentially picked out. We wish to use wavelet
denoising, which will allow the most sensitivity in selecting the noise with minimal
smoothing of the signal. This will serve to minimise the directional bias. The
technique will be compared with Fourier processing.

Figure 4-25 Time slice through sample 180 from dataset Bhr1. The signal has been extended
symmetrically to 2240 samples in preparation for SWT denoising.

4.5.1 Wavelet denoising


Function ‘wavedenx’

The data was detrended using the function wavedetrend. The dataset was then
padded symmetrically across trace using the function swtpadx. The function
wavedenx was then used to perform denoising. The processing of a single trace will
be shown, allowing an illustrated explanation of the algorithm.

Figure 4.25 above shows a noisy section at sample number 180, which has been
prepared for wavelet denoising. The signal is then decomposed using the SWT with
the Coiflets 5 wavelet. Figure 4.26 shows the reconstructed detail for levels one to six.

Figure 4-26 Reconstructed detail for section 180. Red lines show the detail after Heuristic SURE
thresholding.

The first three levels of detail show high amplitude coefficients. This is an indication
that the signal contains a lot of noise. As before, independent thresholding was
performed. The method for calculating the breakpoints is slightly different to that
described for trace-by-trace denoising. Instead of using the first level detail to analyse
the variance in the noise, the third level of detail was used. This had to be done as the
Matlab function wvarchg used to estimate the change points in the variance fails if it
is run on the first level detail.

It is obvious that the first three levels of detail contain most of the noise. If fixed form
thresholding is performed on these levels, most of the coefficients are thresholded out,
although some of the very high amplitude coefficients are kept. This gives rise to a
few unwanted spots in the resulting image. Figure 4.27 shows a late time portion from
dataset Bhr1 illustrating the spots left by fixed form thresholding.

Figure 4-27 Denoised image using fixed form thresholding to level three decomposition. Image was
taken between traces 1500 and 1800 and samples 160 to 200 from dataset Bhr1. Note the streaky
appearance.

The best results were obtained by zeroing the coefficients from the first three levels.
Thereafter, heuristic SURE or fixed form thresholding can be performed. It was
found that fixed form thresholding was too severe, smoothing the signal
unnecessarily. This results in a directionally biased image. The heuristic SURE
thresholding allowed denoising at deep levels without the risk of over smoothing the
image (see figure 4.26). Figure 4.28 shows a test image, which has been denoised
using the two different thresholding techniques. An eight level decomposition was
performed in both cases.
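The level-selective scheme can be sketched as follows. This is hypothetical: PyWavelets does not ship Matlab's heuristic SURE rule, so a per-level universal soft threshold stands in for it on the retained levels, while the finest levels are zeroed outright as described above.

```python
import numpy as np
import pywt

def swt_denoise_slice(section, wavelet="coif5", level=5, zero_levels=3):
    """Across-trace denoising: zero the finest details, threshold the rest.

    Hypothetical sketch; soft thresholding replaces heuristic SURE.
    """
    n = len(section)
    pad = -n % (2 ** level)
    x = np.pad(section, (0, pad), mode="symmetric")

    coeffs = pywt.swt(x, wavelet, level=level)   # coarsest level first
    out = []
    for i, (cA, cD) in enumerate(coeffs):
        lev = level - i                          # decomposition level of cD
        if lev <= zero_levels:                   # levels 1..3: mostly noise
            out.append((cA, np.zeros_like(cD)))
        else:
            sigma = np.median(np.abs(cD)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))
            out.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(out, wavelet)[:n]
```

Setting `zero_levels=0` recovers plain thresholding at every level, which is the gentler, general-purpose behaviour recommended in the text.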

Figure 4-28 (a) Test image, (b) Heuristic Sure thresholded denoising, (c) Fixed form thresholded
denoising. An eight level decomposition was used in denoising both images.

It is clear from figure 4.28c that the fixed form thresholding technique over-smooths
the signal. This results in a strong horizontal bias, which can be seen clearly in the
elongation of the bottommost reflectors.

In this particular case, the best results were obtained by simply zeroing the first three
levels of detail coefficients. This is not suggested as a general practice. Rather,
heuristic SURE thresholding should be used for all levels as a robust general-purpose
filter. When using this thresholding technique, the dataset may be decomposed to the
deepest possible level without degradation to the dataset due to smoothing. A level
five decomposition was sufficient to denoise dataset Bhr1. Any further decomposition
gave no improvement at great computational expense. For this reason, it is advisable
to test the algorithm on a small portion of the dataset to determine the optimal level of
decomposition before running it on the entire dataset. The algorithm can of course be
programmed to automatically detect optimum thresholds, and these refinements are
recommended as future work.

Figure 4.29 below shows the dataset Bhr1 before and after wavelet denoising. Figure
4.30 shows the same image in pseudocolour. The deeper reflectors are considerably
enhanced when compared with the original image.

Figure 4-29 Dataset Bhr1 before and after wavelet denoising.



Figure 4-30 Pseudocolour image of wavelet-denoised data.

4.5.2 Fourier denoising


Function ‘fftdenx’
The data was first detrended using wavedetrend and then padded using the function
expadx.

Once again a simple low pass filter was used to denoise the data across trace. The
same FFT algorithm used to denoise the data on a trace-by-trace basis was modified
to process across trace.

Figure 4.31 shows a portion of the test section from sample 180. Plotted on the same
axis is the filtered output using two different cutoff frequencies.

Figure 4-31 Portion of the time slice taken across sample 180. Overlaid is the smoothed output from
low pass filtering using different cutoff frequencies.

The low pass filtering approach appears to be effective in smoothing the signal. The
problem, however, comes when trying to choose the appropriate cut-off frequency.
This can be done by trial and error; however, this is tedious. One is also not aware of
how much signal is being excluded with the noise. Figure 4.32 compares the wavelet-
denoised trace with a similar trace obtained by low pass filtering with a cutoff
frequency of 40 Hz. The two traces are very similar, indicating that low pass filtering
will have similar results to wavelet denoising. Figure 4.33 shows processing of the
test image at two different cut-off frequencies. A wavelet-denoised image is included
in figure 4.33 for comparison.

Figure 4-32 Portion of signal comparing wavelet and Fourier denoising.



Figure 4-33 Low pass filtering using 200 Hz and 40 Hz cut-off frequencies. The wavelet-denoised image
is included as a comparison.

4.5.3 Discussion
Figure 4.33 suggests that there is very little difference between the 40 Hz low pass
image and the wavelet-denoised image. There are subtle differences, which become
more apparent on a larger scale. Figure 4.34 compares Fourier and wavelet denoised
images created from the late time arrivals of dataset Bhr1.

Figure 4-34 Images from lower 100 samples of dataset Bhr1. Wavelet denoising is compared with low
pass filtering at 40 Hz.

Figure 4.34 confirms that the wavelet denoising is an improvement on Fourier
processing. There is no doubt that the low pass filtered image could be improved with
more work and more elaborate frequency domain filtering. However, this would
constitute a project in itself. The point is that excellent results were obtained using
standard wavelet denoising techniques.

Fourier filtering is often an attractive option as the FFT is computationally cheap. It
must be noted, however, that the FFT requires dyadic sampling, compared with the
SWT, whose signal length only has to be a multiple of 2^n, where n is the level of
decomposition. Thus the FFT usually requires more samples to pad the signal. In
practice then, the two functions fftdenx and wavedenx have similar computational
demands.
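The padding comparison can be made concrete with a small helper. This is hypothetical (the thesis functions compute their own padding), and it assumes the radix-2 FFT case, where the signal must be extended to the next power of two.

```python
def pad_samples(n, level):
    """Extra samples needed by a radix-2 FFT vs. an SWT of `level` levels.

    Hypothetical helper: the FFT pads to the next power of two, the SWT
    only to the next multiple of 2**level.
    """
    next_pow2 = 1 << (n - 1).bit_length()    # dyadic length for the FFT
    fft_pad = next_pow2 - n
    swt_pad = -n % (1 << level)              # multiple of 2**level for the SWT
    return fft_pad, swt_pad
```

For a 2000-sample trace and a five-level decomposition this gives 48 samples of FFT padding against only 16 for the SWT.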

Another advantage of the wavelet denoising is its robustness. Wavelet denoising
using the above function requires less user intervention than low pass filtering. One
does not need to pick cut-off frequencies. The wavelet thresholding automatically
discerns between signal and noise. Figure 4.35 overleaf shows the dataset Bhr2
before and after wavelet denoising.

No adjustments were made to the function wavedenx (developed using the dataset
Bhr1) before running it on the new dataset. One can see from the original image that
the dataset’s characteristics are quite different from those of Bhr1. The denoised image is a
successful enhancement of the original. To achieve a comparable result using low pass
filtering would require several attempts.

Figure 4-35 Wavelet denoising of dataset Bhr2 using the function 'wavedenx'.

4.6 Space domain filtering


Similar results to Fourier filtering can be achieved by filtering in the space domain. In
this section filtering is done using a lowpass filter kernel. Since smoothing across
trace tends to give good results, various simple averaging kernels were convolved
with the dataset. Figure 4.36 shows dataset Bhr1 after convolution with a 1×7
averaging kernel. This is equivalent to lowpass filtering across trace.

Figure 4-36 Filtering of dataset Bhr1 in the space domain. The 'original' image has been detrended.

The image is improved, illustrating that even averaging to obtain a lowpass
approximation may enhance the data. The advantage with convolution kernels is that
the image is processed rapidly. The disadvantages are strong directional bias and little
fine control on the amount of smoothing.
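The space-domain smoothing can be sketched with a simple convolution. This is a hypothetical illustration; it assumes the array is laid out with axis 0 indexing time samples and axis 1 indexing traces, so a 1×7 kernel averages across trace.

```python
import numpy as np

def average_across_trace(data, width=7):
    """Smooth a radar image across trace with a 1 x `width` averaging kernel.

    Hypothetical sketch: assumes axis 0 indexes time samples and
    axis 1 indexes traces.
    """
    kernel = np.ones(width) / width
    # convolve each time-sample row with the averaging kernel
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=data)
```

Widening the kernel increases the smoothing, but also the horizontal bias noted above.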

4.7 Complex trace analysis


4.7.1 Introduction
Complex trace analysis is common practice in seismic interpretation. In complex trace
analysis a seismic trace is viewed as the real part of a complex trace or analytic signal,
which can be uniquely calculated from the real seismic trace. The analytic signal
allows the calculation of instantaneous phase and frequency. The instantaneous
frequency differs from the frequency calculated by the Fourier transform in that it is a
value associated with a particular point in time. The instantaneous attributes provide a
powerful interpretation aid. Another attribute that can be calculated using the complex
trace is reflection strength. In this section, complex trace analysis is performed on
borehole radar data.

Traditionally, the complex trace is calculated using the Hilbert transform. The
normalized Hilbert time-domain operator in digital form is given by

$$f^{*}(t) = \frac{2}{\pi} \sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty} \frac{\sin^{2}(\pi n/2)}{n}\, f(t - n\Delta t) \qquad (4.4)$$

where $\Delta t$ is the sample interval (Taner, 1979). The analytic signal of the trace can
then be created. Its real part is the original trace and the Hilbert transform constitutes
the imaginary part.

$$F(t) = f(t) + i f^{*}(t) = A(t)e^{i\theta(t)} \qquad (4.5)$$

The analytic signal F(t) can be thought of as the trace in complex space of a vector
which is continually changing its length and rotating, thus tracing out an irregular
helix as shown in Figure 4.37 (Taner, 1979). The real and quadrature traces are given
by the projection of the trace of the rotating vector on the real and imaginary planes.

Figure 4-37 Isometric diagram of analytic signal of a seismic trace. After Taner (1979).

Luo and Marhoon (2001) have designed an enhanced edge detection algorithm for use
on 3D seismic datasets. The algorithm is based upon the generalized Hilbert
transform. The generalized Hilbert transform is performed in the frequency domain
using the windowed Fourier transform. This allows complex trace analysis at a range
of scales, highlighting both rapid and gentle changes in 3D seismic data. Expanding
upon this concept, the complex continuous wavelet transform will be used to provide
multi-scale complex trace analysis.

Section 3.7.6 briefly introduces complex wavelets. Complex wavelets can be used in
the continuous wavelet transform (CWT) to yield complex wavelet coefficients. These
coefficients can be used like the analytic signal to provide instantaneous attributes.
Furthermore, the complex continuous wavelet transform allows complex trace
analysis at an infinite range of scales.

The section will begin by introducing each attribute, using the analytic signal as an
example. The same attributes will then be calculated at selected scales using the
complex continuous wavelet transform allowing a comparison between the two
techniques.

4.7.2 Reflection strength


Reflection strength is independent of phase; it may have its maximum at phase points
other than the peaks of the real trace, especially where an event is the composite of
several reflections. In seismic processing, high reflection strengths are often
associated with major lithologic changes. The reflection strength corresponds to A(t)
of equation 4.5. It is calculated by taking the modulus of the analytic signal.

$$A(t) = |F(t)| = \sqrt{f^{2}(t) + f^{*2}(t)} \qquad (4.6)$$

Figure 4.38 shows the reflection strength calculated for the test image from the
wavelet denoised dataset Bhr1.

Figure 4-38 (a) Denoised test image, (b) Reflection strength

Interestingly, the directional bias in denoising across trace presents itself strongly in
the reflection strength as a lateral smearing in the amplitude region of 30. Figure 4.39
shows the reflection strength calculated from the noisy image.

Figure 4-39 (a) Test image, (b) Reflection strength

The noise appears to have a high reflection strength implying a low signal to noise
ratio. This would explain the difficulty in denoising down trace.

4.7.3 Instantaneous phase


The instantaneous phase corresponds to $\theta(t)$ in equation 4.5. It is calculated as
follows:

$$\theta(t) = \tan^{-1}\!\left(\frac{f^{*}(t)}{f(t)}\right) \qquad (4.7)$$

Equation 4.7 only gives the principal phase value. In order to make the phase
information continuous, the phase has to be ‘unwrapped’ by determining the location of
$2\pi$ phase jumps and correcting them. A simple way to obtain values associated with
continuous phase is to take the cosine of the phase values. Since
$\cos(\theta) = \cos(\theta \pm 2\pi)$, the $2\pi$ phase jumps are removed.

The phase calculated in equation 4.7 is associated with a particular point in time.
When imaged in colour, this allows any phase angle to be followed from trace to
trace. The effect is to emphasize the continuity of events. In calculating instantaneous
phase, all amplitude information is lost. This means that weak coherent events are
made clearer (usually the late time arrivals). In seismic processing instantaneous
phase is used to show: discontinuities, faults, pinchouts, angularities, and events with
different dip attitudes that interfere with each other.

A special colourmap is used to display phase information. The phase angles + π and
− π are the same. For this reason, the two end nodes of the colourmap are the same. A
standard colourmap in Matlab, “HSV”, has this property.

Figure 4.40 shows the calculation of instantaneous phase for the test image.

Figure 4-40 (a) Denoised test image, and (b) Instantaneous phase

4.7.4 Instantaneous frequency


The instantaneous frequency is simply the rate of change of the time-dependent phase,
given by

$$\omega(t) = \frac{d\theta(t)}{dt} \qquad (4.8)$$
Once again, the phase has to be ‘unwrapped’ before calculating the instantaneous
frequency. The instantaneous frequency is highly sensitive to lateral changes in
lithology like pinchouts and edges. In seismic analysis fracture zones in brittle rock
are sometimes associated with low-frequency shadows. The application of
instantaneous frequency to borehole radar data might prove to be a useful diagnosis
tool as to the degree of fracturing in the surrounding rock.
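Equations 4.5 to 4.8 translate directly into code. The sketch below uses scipy's Hilbert transform to build the analytic signal; note that dividing by $2\pi$ converts the angular frequency of equation 4.8 into a frequency in Hz.

```python
import numpy as np
from scipy.signal import hilbert

def complex_trace_attributes(trace, dt=1.0):
    """Reflection strength, instantaneous phase and frequency (eqs 4.5-4.8)."""
    F = hilbert(trace)                     # analytic signal f(t) + i f*(t)
    strength = np.abs(F)                   # A(t), equation 4.6
    phase = np.angle(F)                    # theta(t), equation 4.7 (wrapped)
    # unwrap the 2*pi jumps before differentiating (equation 4.8)
    freq = np.gradient(np.unwrap(phase), dt) / (2.0 * np.pi)
    return strength, phase, freq
```

For display, `np.cos(phase)` gives the continuous-looking phase image discussed above without explicit unwrapping.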

Figure 4.41 below shows the application of instantaneous frequency to the test image.

Figure 4-41 (a) Denoised test image, and (b) Instantaneous Frequency

4.7.5 Wavelet calculation of attributes


The complex Gaussian wavelet (order 2) was chosen for the decomposition. Section
3.8 discusses the choice of wavelet. The function enhance was used to calculate the
attributes and the program code is in the appendix.
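The multi-scale computation can be sketched with PyWavelets' CWT. This is a hypothetical stand-in for the thesis function enhance; 'cgau2' is PyWavelets' name for the order-2 complex Gaussian wavelet, and each row of the output corresponds to one scale.

```python
import numpy as np
import pywt

def cwt_attributes(trace, scales, dt=1.0):
    """Per-scale reflection strength, phase and frequency from the complex CWT.

    Hypothetical sketch of the multi-scale attribute calculation.
    """
    coef, _ = pywt.cwt(trace, scales, "cgau2", sampling_period=dt)
    strength = np.abs(coef)                              # one row per scale
    phase = np.angle(coef)
    freq = np.gradient(np.unwrap(phase, axis=1), dt, axis=1) / (2.0 * np.pi)
    return strength, phase, freq
```

Passing a single small scale reproduces the fine-detail behaviour described below, while larger scales give the smoother, packet-averaged attributes.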

Figures 4.42 to 4.44 show the calculation of the wavelet attributes at different scales.

Figure 4-42 (a) Denoised test image. Wavelet calculation at scale 10 of (b) Reflection Strength, (c)
Instantaneous Phase, (d) Instantaneous Frequency.

Figure 4-43 (a) Denoised test image. Wavelet calculation at scale 4 of (b) Reflection Strength, (c)
Instantaneous Phase, (d) Instantaneous Frequency.

Figure 4-44 (a) Denoised test image. Wavelet calculation at scale 1 of (b) Reflection Strength, (c)
Instantaneous Phase, (d) Instantaneous Frequency.

The wavelet calculated attributes are similar in appearance to those calculated using
the Hilbert transform. Larger scales provide an attribute for a larger packet of
rockmass. This may be useful when the interpreter is interested in features of a
particular scale. In particular, calculation of the reflection strength at smaller scales
provides more detail than is possible with the Hilbert transform. Using the
instantaneous phase and frequency as guides, the scale 4 attributes are most similar to
those calculated from the analytic signal. Figure 4.45 compares the reflection strength
and instantaneous phase of the analytic signal and the scale 4 complex continuous
wavelet transform.

Figure 4-45 Reflection strength calculated (a) from the analytic signal, and (c) from the complex
continuous wavelet transform (Scale 4). Instantaneous phase calculated (b) from the analytic signal,
and (d) from the complex continuous wavelet transform (Scale 4).

The wavelet calculated reflection strength shows more detail than the analytic signal
reflection strength. The wavelet calculated instantaneous phase is exactly π out of
phase with the analytic signal instantaneous phase. Apart from the phase shift, the
instantaneous phases resemble each other.

The attributes can be overlaid on the original data in order to accentuate particular
features. Figure 4.46a shows such a composite image. The original data provides a
base map and the instantaneous phase is contoured over it. The contours were
calculated at scale = 1. In this image the dense contours serve to delineate boundaries
between different layers. This may be useful where reflections are weak. Figure 4.46b
shows the instantaneous frequency calculated at scale = 3 as the overlying contours.
Here, the clustering of contours is associated with discontinuities where layers are
faulted or pinch out.

Figure 4-46 (a) Instantaneous phase calculated at scale = 1 contoured over original denoised test
data. (b) Instantaneous frequency calculated at scale = 3 contoured over original denoised data.

4.7.6 Discussion
This section has shown that complex trace analysis can be performed on borehole
radar data using the complex continuous wavelet transform. The scale four wavelet
calculated attributes were found to be of similar appearance to the analytic signal
attributes. On comparison, the wavelet calculated reflection strength provided a more
detailed analysis, with smaller scales yielding finer detail. Figure
4.46b shows that the instantaneous frequency attribute may aid in enhancing
discontinuities in borehole radar data. Figure 4.46a shows that instantaneous phase
may be useful in delineating boundaries between weak reflectors.

The complex attributes are most valuable when interpreted as a set. The independence
of instantaneous frequency from amplitude, as well as its high sensitivity to discontinuous
events, can be exploited. In seismics, complex attributes are used for horizon picking.
This involves cross correlating adjacent traces in a moving window. Wavelets have
the advantage of being able to choose a particular scale for the attribute calculation.
By using wavelet attributes for the horizon picking, one might improve the results
obtained with standard horizon picking procedures.

Figure 4.47 shows the denoised Bhr2 dataset. Figures 4.48 to 4.50 show the attributes
calculated using the complex continuous wavelet transform at scale 2.

Figure 4-47 Bhr2 dataset after wavelet denoising.

Figure 4-48 Wavelet calculated reflection strength at scale 2.



Figure 4-49 Wavelet calculated instantaneous phase at scale 2.

Figure 4-50 Wavelet calculated instantaneous frequency at scale 2.



Chapter 5 - Conclusion
Borehole Radar has proven to be a useful tool in the South African mining
environment. It provides high-resolution images of the rock mass surrounding the
borehole, allowing a detailed mapping of the orebody and the surrounding host rock.
The information acquired with borehole radar allows for detailed mine planning,
which ultimately increases profits and reduces mining hazards.

The technique is prone to noise where it encounters material of high relative
permittivity. The electromagnetic wave is dispersed and attenuated, allowing system
noise to dominate the deeper reflections. Standard de-noising techniques are not well
suited to radar data as the traces have non-stationary characteristics. The first part of
the project investigated the use of wavelets in de-noising the data. The data was also
processed using Fourier and space domain techniques for a comparison.

Section 4.4 looked at processing the data down-trace. No significant improvement
was observed in the image after wavelet de-noising. Examining individual traces
revealed that only a small fraction of the noise had been removed. Most importantly,
there was no aliasing of the original signal. Fourier de-noising allowed the image to
be smoothed, resulting in a more pleasing image. The problem with the Fourier
technique was severe aliasing of the signal. In conclusion, no single technique was
effective in de-noising the data down-trace. The wavelet de-noising, however, does
have scope for improvement. The available thresholding techniques were not efficient
in identifying coherent noise buried in the signal. An adaptive thresholding technique
is required.

Better results were obtained when de-noising across trace. Wavelet denoising was
successful in improving the image without aliasing. The algorithm appears to be
robust. When tested on a second dataset without modifications, excellent results were
obtained. Fourier techniques produced comparable results; however, the wavelet image
was superior. Fourier de-noising requires the user to manually select a cutoff
frequency. Directionality problems were encountered when setting a cutoff frequency
that was too low. The choice of the cutoff is crucial for the success of the algorithm.
In conclusion, wavelet de-noising gives excellent results without the need for user
intervention. Fourier de-noising is not automatic, in that it requires the interpreter to
select a suitable cutoff frequency. This usually requires several attempts before a
suitable image is obtained.

Filtering was also done in the space domain using averaging filter kernels of different
sizes. The best results were obtained with a 1×7 averaging kernel. The image was
improved slightly, however wavelet and Fourier de-noising were found to be superior.
The problem with using larger filter kernels in the space domain was a directional
bias; horizontal reflectors were preferably enhanced.

The second part of the project investigated the use of the complex continuous
wavelet transform in defining complex attributes. Attribute analysis is common
practice in seismic interpretation. The attributes are usually calculated using the
Hilbert transform, which is scale invariant. The use of the wavelet transform allows
the introduction of scale into the calculation of attributes. Instantaneous phase,
instantaneous frequency and reflection strength were calculated. The scale 4 attributes

were found to be most similar to those calculated from the Hilbert transform. The use
of larger scales in the instantaneous phase and frequency resulted in less cluttered
images, which may be more readily interpreted. When the scale 1 instantaneous phase
was contoured over the original data, the contours outlined the reflection boundaries
perfectly. The scale 3 instantaneous frequency, when contoured over the original data,
had the effect of highlighting discontinuities such as faults and pinchouts. In conclusion,
the complex wavelet allows a lot more versatility in the calculation and interpretation
of attributes than the Hilbert transform.

As proposed, wavelet techniques, due to their inherent spatial localisation, are well
suited to the processing of borehole radar data. Their application to the processing of
seismic and radar data is still in its infancy; however there is no doubt that wavelets
will have a lot more to contribute in the near future.

5.1 Future work


The fact that de-noising across trace worked well indicates that the optimum approach
will be to de-noise down dip. This will require the reflector dip to be quantified prior
to wavelet de-noising. The problem of estimating the dip of reflectors is common in
seismic data processing. One technique uses the covariance in a moving window to
estimate the dip. Taking the dip into account will serve to improve the results obtained
with the wavelet de-noising and minimise the risk of aliasing due to directional bias.

Yu et al. (2002) describe an adaptive wavelet de-noising algorithm for removing
coherent noise from land seismic data. The adaptive thresholding of wavelet
coefficients is based on a clean seismic trace that is selected outside the noisy area.
The problem with applying this technique to radar data is that there are no sufficiently
clean traces that one can use as a target. One might try recursively thresholding the
signal until the de-noised signal satisfies a smoothness criterion. A robust measure of
smoothness has to be defined and this is suggested as future work.

In seismic data processing horizon picking is often performed on the complex
attributes. The versatility provided by the wavelet attributes may be used to enhance
standard horizon picking algorithms.

Zang and Ulrych (2002) have built a two-dimensional wavelet based on the
hyperbolic moveout curve encountered in seismic reflection sections. The wavelet
performed well in denoising seismic data. Due to the similarities between seismic and
GPR reflection sections, it is expected that the 2D wavelet will also perform well on
radar data. The project did not allow enough time to build Zang and Ulrych’s wavelet
for use in Matlab. Given time, this is worth pursuing.

5.2 Acknowledgements
I would like to thank the CSIR for funding the project. In particular, my thanks go
to Declan Vogt who provided me with the data as well as help on the technicalities of
the radar system. I’d also like to thank my supervisor, Dr. Gordon Cooper for his
valuable advice and constructive criticism when reading through my early drafts.

Author’s contact details


Postal address: Guy Antoine
PO Box 208
Wits
2050
E-mail: zeebs@websurfer.co.za
Mobile: +27-73-161-2230

Chapter 6 - References
Berthelier, J.J., Ney, R., Meyer, A., Hamelin, M., Legac, C., Costard, F., Reineix, A.,
Martinat, B., Kofman, W., Paillou, P., and Duvanaud, C., 2000. The GPR on
Mars NETLANDER, Proc. of the 8th International conference on ground
penetrating radar, Queensland, Australia
Coifman, R.R., and Donoho, D.L., 1995. Translation invariant De-Noising, Lecture
notes in statistics, pp. 125-150. Alternate source, retrieved November 22,
2002, http://citeseer.nj.nec.com/80329.html
Cook, J.C., 1975. Radar transparencies of mine and tunnel rocks. Geophysics, 40(5):
865-885.
Cooper, G.R.J., and Peltoniemi, M., 2001. EAGE wavelet workshop., EAGE wavelet
workshop, Amsterdam
Daniels, D.J., Gunton, D. J., and Scott, H. F., 1988. Introduction to subsurface radar.
IEE Proc. F., 135(4): 278-321.
Daniels, D.J., 1996. Surface penetrating radar. Radar, Sonar, Navigation and Avionics
Series 6. IEE, U.K.
Donoho, D.L., and Johnstone, I.M., 1992. Minmax estimation via wavelet shrinkage.
Retrieved November 22, 2002, From
http://citeseer.nj.nec.com/donoho92minimax.html
Donoho, D.L., and Johnstone, I.M., 1994. Adapting to unknown smoothness via
wavelet shrinkage. Retrieved November 22, 2002, From
http://citeseer.nj.nec.com/donoho94denoising.html
Donoho, D.L., 1995. De-noising by soft thresholding. IEEE Transactions on
information theory, 41(3): 613-627.
Duval, L.C., and Galibert, P.Y., 2002. Efficient coherent noise filtering: an
application of shift-invariant wavelet denoising., EAGE 64th Conference and
Exhibition, Florence, Italy
Eisenburger, D., and Gundelach, V., 2000. Borehole radar measurements in complex
geological structures, Proc. of the 8th international conference on ground
penetrating radar, Queensland, Australia
Heggy, E., Paillou F. P., Demontoux, F., Ruffié, G., and Grandjean, G., 2002. Water
detection in the martian subsurface, Proc. of the 9th International Conf. on
Ground Penetrating Radar, California, U.S.A
Holser, W.T., Brown, R.J.S., Roberts, F.A., Fredrikson, O.A., and Unterberger, R.R.,
1972. Radar logging of a salt dome. Geophysics, 37(5): 889-906.
Lavielle, M., 1999. Detection of multiple changes in a sequence of dependent
variables. Stoch. Proc. and their Applications, 83(2): 79-102.

Luo, T., and Marhoon, M., 2001. Seismic edge detection using the generalised Hilbert
transform, EAGE 63rd Conference and technical exhibition, Amsterdam, The
Netherlands
Mallat, S., Hwang, W.L., 1992. Singularity detection and processing with wavelets.
IEEE Transactions on information theory, 38(2): 617-643.
Mallat, S., 1998. A wavelet tour of signal processing. Academic press.
Mathworks, T., 2002. Wavelet Toolbox Documentation. Retrieved November 22,
From
http://www.mathworks.com/access/helpdesk/help/toolbox/wavelet/wavelet.sht
ml
Mccann, D.M., Jackson, P.D., and Fenning, P.J., 1988. Introduction to subsurface
radar. IEE Proceedings F, 135(4): 380-390.
Ogden, T.a., Parzen, E., 1995. Change-point approach to analytic wavelet
thresholding. Retrieved November 22, From
http://citeseer.nj.nec.com/ogden96changepoint.html
Olhoeft, G.R., 2000. Maximizing the information return from ground penetrating
radar. J. Applied Geophys, 43: 175-187.
Oswald, G.K.A., 1988. Geophysical Radar design. IEE Proc. F., 135(4): 371-379.
Pipan, M., De Vecchi, M., Forte, E., and Tabacco, I., 2002. Wavelet based processing
of airborne radar data from East Antarctica, EAGE 64th Conference and
Exhibition, Florence, Italy
Press, W.H., Teukolsky, S.A., Vetterling, W.T, Flannery, B.P., 1992. Numerical
Recipes in Fortran, Second Edition. Cambridge University Press, 963 pp.
Qian, J., 2002. Denoising by wavelet transform. Retrieved November 22, 2002 From
http://www-ece.rice.edu/~joejqian/elec532.pdf
Rees, W.G., 2001. Physical principles of remote sensing. Second edition. Cambridge
University Press, U.K., 9-41, 175-191, 213-245 pp.
Ridsdill-Smith, T.A., 2000. The Application of the Wavelet Transform to the
Processing of Aeromagnetic Data. PhD Thesis, University of Western
Australia, Perth, 167 pp.
Rioul, O., and Vetterli, M., 1991. Wavelets and signal processing. IEEE Signal
Process. Mag., 8(Oct): 14-38.
Siever, K., 2000. Three-dimensional borehole radar measurements - A standard
logging method ?, Proc. of the 8th international conference on ground
penetrating radar, Queensland, Australia
Simmat, C.M., Osman, N., Hargreaves, J.E., and Mason I.M., 2002. Borehole radar
imaging from deviating boreholes, Proc. of the 9th international conference on
ground penetrating radar, California, U.S.A
Soon Sam Kim, C., S.R., Mysoor, N.R., Ulmer, C.T., and Arvidson, R.E., 2002.
Miniature ground penetrating radar for planetary subsurface
characterization: Preliminary field test results, Proc. of the 9th international
conference for ground penetrating radar, California, U.S.A.
Strang, G., and Nguyyen, T., 1996. Wavelets and filter banks. Wellesley-Cambridge
Press, U.S.A., 490 pp.
Taner, M.T., Koelhler, F., Sheriff, R.E., 1979. Complex seismic trace analysis.
Geophysics, 44(6): 1041-1063.
Trickett, J.C., Stevenson, F., Vogt, D., Mason, I., Hargreaves, J., Eybers, H., Fynn, R.,
and Meyering, M., 2000. The application of borehole radar to South Africa's
ultra-deep gold mining environment, Proc. of the 8th international conference
on ground penetrating radar, Queensland, Australia
6-105

Turner, G., Mason, I., Hargreaves, J., and Wellington, A., 2000. Borehole radar
surveying for orebody delineation, Proc. of the 8th international conference on
ground penetrating radar, Queensland, Australia
Van Brakel, W.J.A., Van Wyk, M.D., Rütschlin, M., and Cloete, J.H., 2002. The
effect of wet drilling in kaolinitic strata on borehole radar performance, Proc.
of the 9th international conference on ground penetrating radar, California,
U.S.A
Van Dongen, K.W.A., Van Den Berg, P.M., and Fokkema, J.T., 2002. A directional
borehole radar for three-dimensional imaging, Proc. of the 9th international
conference on ground penetrating radar, California, U.S.A
Vogt, D., 2002. A slimline borehole radar for in-mine use, Proc. of the 9th
international conference on ground penetrating radar, California, U.S.A
Yu, Z., McMechan, G. A., Ferguson, J. F., and Anno, P. D., 2002. Adaptive wavelet
filtering of seismic data in the wavelet transform domian. Journal of Seismic
Exploration(11): 223-246.
Zhang, R., and Ulrych, T.J., 2002. Physical wavelety frame denoising of seismic data,
EAGE 64th Conference and Exhibition, Florence, Italy
Zhdanov, M.S., And Keller, G.V., 1994. The geoelectrical methods in geophysical
exploration. Elsevier Science B.V., pp. 710-720.
7-106

Chapter 7 - Appendix
The following Matlab functions were written using Matlab version 6, Release 13, and require the Wavelet Toolbox in order to run. Instructions on how to use each function are listed in its first comment lines. To view these instructions in Matlab, simply type ‘help function’, where function is the name of the function in question.
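The functions pass data between one another through .mat files, so a processing sequence is simply a chain of calls. Below is a minimal sketch of one possible across-trace de-noising run; the file names are placeholders, and the input file is assumed to contain a samples-by-traces matrix named data.

```matlab
% Hypothetical processing chain; 'raw.mat' must hold a matrix 'data'
% (samples x traces), and the intermediate file names are placeholders.
wavedetrend('raw.mat','detrend.mat');   % remove the low-frequency trend
swtpadx('detrend.mat','pad.mat',5);     % pad across trace for a level-5 SWT
wavedenx('pad.mat','clean.mat',5);      % de-noise across trace with the SWT
enhance('clean.mat','attrib.mat',10);   % wavelet attributes at scale 10
```

Each call saves its output file and also returns the processed matrix, so intermediate results can be inspected directly in the workspace.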


7.1 wavedetrend.m
function ret = wavedetrend(fileo,files);
%Guy Antoine 29-09-2002
%Uses fifth level wavelet approximation to detrend dataset on a
%trace-by-trace basis
%
%wavedetrend(file0,files)
%fileo = Input file
%files = Output file

load(fileo);

trend = zeros(size(data));

[m n] = size(data); %m=samples n=trace


maxval = max(max(data));

for j = 1:n %Loop through trace by trace


[c,l] = wavedec(data(:,j),5,'coif5'); %Perform wavelet decomposition to level 5 using Coiflet 5
c(l(1)+1:end) = 0; %Set all detail coefficients to zero, keeping the approximation
trend(:,j) = waverec(c,l,'coif5'); %Reconstruct the trend from the approximation coeffs

data(1:m,j) = data(1:m,j)-trend(1:m,j); %Remove the trend


end

save(files,'data');

ret = data; %Keeps detrended data in workspace as 'ans'



7.2 expad.m
function ret = expad(fileo,files,type);
%Guy Antoine 29-09-2002
%This function applies an exponential taper to the data in order to pad it
%out to the required length.
%expad(fileo,files,type)
%fileo = Input file
%files = Output file
%type = 'd' Pads out to power of 2
%type = 's' Pads out to a factor of 2^level for SWT

load(fileo);

[m n]=size(data); %m=samples n=traces

if rem(m,2)==0; %If even


datao = data;
else
datao = [data' data(end,:)']'; %If odd then repeat last point to make even
[m n]=size(datao); %Recalculate m and n
end

%Find out how much to pad out to


if type == 'd'
pad = pow2(nextpow2(m)); %For dyadic sampling
elseif type == 's'
pad = ceil(m/2^3)*2^3; %For 2^level used in SWT
else
error('Invalid type. Please use "s" for swt and "d" for dwt'); %Stop here, otherwise pad is undefined below
end

%These will save the bell curves


dataext1 = zeros((pad-m)/2,n);
dataext2 = zeros((pad-m)/2,n);
dataext3 = zeros((pad-m)/2,n);

for j = 1:n %Trace by trace

for i = 1:(pad-m)/2
dataext1(i,j)=datao(1,j).*exp(-0.5.*(i-(pad-m)./2 ).^2); %Exponential taper
dataext2(i,j)=datao(m,j).*exp(-0.5.*(i-(pad-m)./2 ).^2);
end

for i = 0:(pad-m)/2-1;
dataext3(i+1,j) = dataext2((pad-m)/2-i,j); %Flip the end trace around
end

end

data = [dataext1' datao' dataext3']'; %Splice them all together

save(files,'data');

ret = data;

7.3 expadx.m
function ret = expadx(fileo,files);
%Guy Antoine 29-09-2002
%This function applies an exponential taper to the data in order to pad it
%out to the required length. Note padding is done across trace
%expadx(fileo,files)
%fileo = Input file
%files = Output file

load(fileo);

[m n]=size(data); %m=pseudotrace n=pseudosample

if rem(n,2)==0; %If even


datao = data;
else
datao = [data';data(:,end)']'; %If odd then repeat last point to make even
[m n]=size(datao); %Recalculate m and n
end

%Find out how much to pad out to


pad = pow2(nextpow2(n)); %For dyadic sampling

%These will save the bell curves


dataext1 = zeros(m,(pad-n)/2);
dataext2 = zeros(m,(pad-n)/2);
dataext3 = zeros(m,(pad-n)/2);

for j = 1:m %Trace by trace

for i = 1:(pad-n)/2
dataext1(j,i)=datao(j,1).*exp(-0.5.*(i-(pad-n)./2 ).^2); %Exponential taper
dataext2(j,i)=datao(j,n).*exp(-0.5.*(i-(pad-n)./2 ).^2);
end

for i = 0:(pad-n)/2-1;
dataext3(j,i+1) = dataext2(j,(pad-n)/2-i); %Flip the end trace around
end

end

data = [dataext1';datao';dataext3']'; %Splice them all together

save(files,'data');

ret = data;

7.4 swtpad.m
function ret = swtpad(fileo,files,lev);
%Guy Antoine 29-09-2002
%Pads out the data to a factor of 2^level for use with the SWT
%Note this pads down trace
%swtpad(fileo,files,lev)
%fileo = Input file
%files = Output file
%lev = level of SWT decomposition

load(fileo);

[m n] = size(data); %m=samples n=traces

if rem(m,2) ~= 0 %If odd then repeat the last sample to make even (as in swtpadx)
data = [data;data(end,:)];
m = m+1;
end

pad = floor((ceil(m/2^lev)*2^lev-m)/2); %Padding per side so the length is a multiple of 2^lev

for j = 1:n %Trace by trace


%This will pad the signal using symmetric extension on both sides
dataext(:,j)=wextend('1','sym',data(:,j),pad,'b');
end

data=dataext;

save(files,'data');

ret = data;

7.5 swtpadx.m
function ret = swtpadx(fileo,files,lev);
%Guy Antoine 29-09-2002
%Pads out the data to a factor of 2^level for use with the SWT
%Note this pads across trace
%swtpadx(fileo,files,lev)
%fileo = Input file
%files = Output file
%lev = level of SWT decomposition

load(fileo);
data=data'; %Flip dataset around
[m n] = size(data); %m=samples n=traces

if rem(m,2) ~= 0 %If odd then repeat the last point to make even
data = [data;data(end,:)];
m = m+1;
end

pad = floor((ceil(m/2^lev)*2^lev-m)/2); %Calculate length to pad out to

for j = 1:n %Trace by trace


%This will pad the signal using symmetric extension on both sides
dataext(:,j)=wextend('1','sym',data(:,j),pad,'b');
end

data=dataext'; %Flip dataset back

save(files,'data');

ret = data;

7.6 waveden.m
function ret = waveden(fileo,files);
%Guy Antoine 29-09-2002
%Denoises data down trace using the SWT
%waveden(fileo,files)
%fileo = input file
%files = output file

load(fileo);

[m n] = size(data); %m=samples n=trace

clean = zeros(size(data));

for j = 1:n
%Perform SWT to level 3 using coiflets 5
swc = swt(data(:,j),3,'coif5');
swd = swc(1:end-1,:); %Detail coefficients
swa = swc(end,:); %Approximation coefficients

mzero = zeros(size(swd)); %Matrix of zeros

%Reconstruct only the detail at level 1


D = mzero;
swcfs = mzero;
swcfs(1,:) = swd(1,:);
D(1,:) = iswt(mzero,swcfs,'coif5');

%Extract noise from detail 1


det = D(1,:);
x = sort(abs(det));
red = x(fix(length(x)*0.98));
ind = find(abs(det)>red);
det(ind)=mean(det);

%Estimate change points in the variance of the noise


[cp_est,kopt,t_est] = wvarchg(det);

%Set up matrix of break points to segment coefficients for independent thresholding

brk = [0 cp_est length(swc)];

%This fancy little loop thresholds for any no of intervals


for i = 1:kopt+1;
sws = swc(:,brk(i)+1:brk(i+1));
Cth = zeros(size(sws));
thr = wthrmngr('sw1ddenoLVL','sqtwolog',sws,'sln'); %Uses fixed form thresholding (unscaled white noise)

for k=1:3
Cth(k,:) = wthresh(sws(k,:),'s',thr(k));
end

if i == 1
Call = Cth;
else
Call = [Call Cth];


end
end

%------------------Rebuild the denoised signal--------------------

Call(4,:) = swc(4,:);
den = iswt(Call,'coif5')';
clean(:,j) = den;

end

data = clean;

save(files,'data');

ret = data;

7.7 wavedenx.m
function ret = wavedenx(fileo,files,lev);
%Guy Antoine 29-09-2002
%Denoises data across trace using the SWT
%wavedenx(fileo,files,lev)
%fileo = input file
%files = output file
%lev = level of decomposition

load(fileo);
data=data'; %Just flip around
[m n] = size(data); %m=samples n=trace

clean = zeros(size(data));

for j = 1:n
%Perform SWT to level lev using Coiflet 5
swc = swt(data(:,j),lev,'coif5');
swd = swc(1:end-1,:); %Detail coefficients
swa = swc(end,:); %Approximation coefficients

mzero = zeros(size(swd)); %Matrix of zeros

%Reconstruct only the detail at level 3


D = mzero;
swcfs = mzero;
swcfs(3,:) = swd(3,:);
D(3,:) = iswt(mzero,swcfs,'coif5');

%Extract noise from detail 3


det = D(3,:);
x = sort(abs(det)); %Sort into ascending order
red = x(fix(length(x)*0.98)); %Find 98% value of coefficients
ind = find(abs(det)>red); %Find coefficients larger than 98%
det(ind)=mean(det); %Set 2% of largest coefficients to mean

%Estimate change points in the variance of the noise using extracted detail
[cp_est,kopt,t_est] = wvarchg(det);

%Set up matrix of break points to segment coefficients for independent thresholding

brk = [0 cp_est length(swc)];

%This fancy little loop thresholds for any no of intervals


for i = 1:kopt+1;
sws = swc(:,brk(i)+1:brk(i+1));
Cth = zeros(size(sws));
%thr = wthrmngr('sw1ddenoLVL','sqtwolog',sws,'sln'); %Uses fixed form thresholding
thr = wthrmngr('sw1ddenoLVL','heursure',sws,'sln'); %Uses heuristic SURE thresholding (unscaled white noise)

for k=4:lev %Throw away first 3 levels


Cth(k,:) = wthresh(sws(k,:),'s',thr(k));
end

if i == 1
Call = Cth;
else
Call = [Call Cth];
end
end

%------------------Rebuild the denoised signal--------------------

Call(lev+1,:) = swc(lev+1,:);
den = iswt(Call,'coif5')';
clean(:,j) = den;

end

data = clean'; %Flip it back

save(files,'data');

ret = data;

7.8 fftden.m
function ret = fftden(fileo,files,cut);
%Guy Antoine 5-10-2002
%Low pass filtering down trace
%fftden(fileo,files,cut)
%fileo = Input file
%files = Output file
%cut = cutoff frequency

load(fileo);

[m n]=size(data);

y = zeros(size(data));
f = zeros(size(data));

for j = 1:n

y(:,j) = fft(data(:,j),m);
y(:,j) = fftshift(y(:,j));
f(:,j) = (1000.*((-(m./2)+1):(m./2))./m)';

for i = 1:m
if abs(f(i,j)) > cut;
y(i,j)=y(i,j).*exp(-0.05.*(abs(f(i,j))-cut).^2);
end
end

y(:,j) = ifftshift(y(:,j));
y(:,j)=real(ifft(y(:,j),m));

end

data = y;

save(files,'data')

ret = data;

7.9 fftdenx.m
function ret = fftdenx(fileo,files,cut);
%Guy Antoine 5-10-2002
%Low pass filtering across trace
%fftdenx(fileo,files,cut)
%fileo = Input file
%files = Output file
%cut = cutoff frequency

load(fileo);
data = data'; %Flip around
[m n]=size(data);

y = zeros(size(data));
f = zeros(size(data));

for j = 1:n

y(:,j) = fft(data(:,j),m);
y(:,j) = fftshift(y(:,j));
f(:,j) = (1000.*((-(m./2)+1):(m./2))./m)';

for i = 1:m
if abs(f(i,j)) > cut;
y(i,j)=y(i,j).*exp(-0.05.*(abs(f(i,j))-cut).^2);
end
end

y(:,j) = ifftshift(y(:,j));
y(:,j)=real(ifft(y(:,j),m));

end

data = y'; %Flip back

save(files,'data')

ret = data;

7.10 spden.m
function ret = spden(fileo,files,n,m);
%Guy Antoine 29-09-2002
%Convolves filter kernel with the data
%spden(fileo,files,n,m)
%fileo = input file
%files = output file
%n,m = size of kernel

load(fileo);

f = ones(n,m)./(n*m); %Averaging kernel, normalised so amplitudes are preserved


fil = conv2(data,f,'same'); %Convolve, keeping the original data size
figure;subplot(2,1,1);imagesc(data);subplot(2,1,2);imagesc(fil);colormap('gray');

data=fil;

save(files,'data');

ret = data; %Keeps filtered data in workspace as 'ans'

7.11 enhance.m
function ret = enhance(fileo,files,lev);
%Guy Antoine 25-10-2002
%Calculates instantaneous attributes at a selected scale using the complex
%continuous wavelet transform
%enhance(fileo,files,lev)
%fileo = input file
%files = output file
%lev = scale

load(fileo);
c = zeros(size(data));
[m n] = size(data);

for i = 1:n
c(:,i) = cwt(data(:,i),lev,'cgau2')'; %Perform decomposition using complex Gaussian wavelet
end
c = c(11:256-11,:); %Removes padded points - adjust these indices to match the size of the padded region
ph = cos(angle(c)); %Calculate instantaneous phase

fr = diff(ph); %Calculate instantaneous frequency


mod = abs(c); %Calculate reflection intensity
data=data(11:256-11,:); %Remove padded points (adjust indices to match the padded region)
%Now plot histograms

figure;
subplot(3,1,1);hist(mod(:),50);title('mod');
subplot(3,1,2);hist(ph(:),50);title('phase');
subplot(3,1,3);hist(fr(:),50);title('freq');

%Plot attributes side by side for comparison


figure;
subplot(2,2,1);imagesc(data,[-80 80]);axis off;title('Original');colorbar('vert');
subplot(2,2,2);imagesc(mod);axis off;title('Reflection Strength');colorbar('vert');
subplot(2,2,3);imagesc(ph);axis off;title('Instantaneous Phase');colorbar('vert');
subplot(2,2,4);imagesc(fr,[-0.5 0.5]);axis off;title('Instantaneous Frequency');colorbar('vert');
figure;subplot(2,2,3);imagesc(ph);colormap('hsv');axis off;title('Instantaneous Phase');colorbar('vert');

save(files,'ph','fr','mod');

ret = ph;
