1 Wavelets and wavelet thresholding
Every theory starts from an idea. The wavelet idea is simple and clear. At a first confrontation, the mathematics that work out this idea might appear strange and difficult. Nevertheless, after a while, this theory leads to insight in the mechanisms in wavelet based algorithms in a variety of applications. This chapter discusses the wavelet idea and explains the wavelet slogans. For the mathematics, we refer to the numerous publications. Comprehensive introductions to the field include [84, 12, 10, 73]. Other, sometimes more application oriented or more theoretical treatments can be found in [64, 28, 90, 57, 79, 50, 5, 49]. Books on wavelets in statistics, such as [75, 96], include large parts on non-parametric regression.
1.1 Exploiting sample correlations
1.1.1 The input problem: sparsity
Suppose we are given a discrete signal y. In practice, this signal is often digital, i.e. quantized and possibly transformed into a binary form. Figure 1.1 shows how these discrete data can be represented on a continuous line as a piecewise constant function. This piecewise constant is of course not the only possible continuous representation.

Typically, adjacent points show strong correlations. Only at a few points do we find large jumps. Storing all these values separately seems a waste of storage capacity. Therefore, we take a pair of neighbors y_{2k} and y_{2k+1} and compute average and difference coefficients:

  s_k = (y_{2k} + y_{2k+1})/2,    d_k = (y_{2k+1} - y_{2k})/2
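A single averaging/differencing step can be sketched in a few lines of code. This is a minimal illustration; the function name and the use of NumPy are my own, not from the text:

```python
import numpy as np

def haar_step(y):
    """One level of the Haar transform: pairwise averages and differences.

    For each pair (y[2k], y[2k+1]) the average is s[k] = (y[2k] + y[2k+1]) / 2
    and the difference is d[k] = (y[2k+1] - y[2k]) / 2, so the second point
    lies d[k] above the average and the first point d[k] below it.
    """
    y = np.asarray(y, dtype=float)
    s = (y[0::2] + y[1::2]) / 2   # averages: a blurred, half-length signal
    d = (y[1::2] - y[0::2]) / 2   # differences: small where neighbors agree
    return s, d

s, d = haar_step([2.0, 4.0, 4.0, 4.0, 8.0, 0.0])
# averages [3., 4., 4.]; differences [1., 0., -4.] -- large only at the jump
```

Note that where neighbors are equal, the difference coefficient is exactly zero; this is the source of the sparsity discussed below.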
Figure 1.1. Using correlations between neighboring samples leads to a sparse representation of the input. This is the Haar wavelet transform. (Panel labels: Input = digital signal; Average, three levels; Difference, three levels.)

In the figure, the averages are represented on the second line as a piecewise constant, just like the input, but the difference coefficients appear as two opposite blocks: every pair of two opposite blocks is one coefficient. This coefficient tells how far the first data point was under the average of the pair and, at the same time, how much the second data point was above this average. 'Adding' the left plot and the right one returns the input on top. This 'adding' is indeed the inverse operation: each pair is recovered from its average s_k and difference d_k as

  y_{2k} = s_k - d_k,    y_{2k+1} = s_k + d_k.
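The inverse step can be checked directly in code: subtracting and adding the difference to the average recovers the original pair. A small sketch (the function and variable names are my own):

```python
import numpy as np

def inverse_haar_step(s, d):
    """Undo one Haar step: y[2k] = s[k] - d[k], y[2k+1] = s[k] + d[k]."""
    s, d = np.asarray(s, dtype=float), np.asarray(d, dtype=float)
    y = np.empty(2 * len(s))
    y[0::2] = s - d   # first point of each pair sits below the average
    y[1::2] = s + d   # second point sits above it by the same amount
    return y

# Round trip: averages [3, 4, 4] and differences [1, 0, -4]
# give back the pairs (2, 4), (4, 4), (8, 0).
y = inverse_haar_step([3.0, 4.0, 4.0], [1.0, 0.0, -4.0])
```

Since the step loses no information, repeating it at every scale preserves the perfect reconstruction property mentioned below.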
The average signal is somehow a blurred version of the input. We can repeat the same procedure on the averages again. Eventually, this operation decomposes the input into one global average plus difference signals at several locations on the axis and with different widths, scales, or resolutions. Since each step is invertible, the whole transform satisfies the perfect reconstruction property. This is called the Haar transform, after Alfred Haar, who was the first to study it in 1910, long before the actual wavelet history began [45].

As Figure 1.1 illustrates, most of the difference coefficients are small. The largest coefficient appears at the location of the biggest 'jump' in the input signal. This is even more striking in the more realistic example of Figure 1.2. In this picture, all coefficients are plotted on one line. Dashed lines indicate the boundaries between scales. Only a few coefficients are significant. They indicate the singularities (jumps) in the input. This sparsity is a common characteristic of all wavelet transforms. Wavelet transforms are said to have a decorrelating property.
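Repeating the averaging step on the successive averages yields the full decomposition described above: one global average plus difference coefficients at every scale. A sketch of this recursion (assuming, as in Figure 1.1, an input length that is a power of two; the function name is my own):

```python
import numpy as np

def haar_transform(y):
    """Full Haar decomposition: repeat the average/difference step
    on the averages until only one global average remains.

    Returns the global average followed by the difference coefficients,
    coarsest scale first. Assumes len(y) is a power of two.
    """
    y = np.asarray(y, dtype=float)
    details = []
    while len(y) > 1:
        s = (y[0::2] + y[1::2]) / 2   # averages at the next coarser scale
        d = (y[1::2] - y[0::2]) / 2   # differences at the current scale
        details.append(d)             # finest scales are appended first
        y = s                         # recurse on the blurred half-length signal
    # global average, then details from coarsest to finest scale
    return np.concatenate([y] + details[::-1])

# A signal with a single jump: almost all coefficients are exactly zero,
# illustrating the sparsity (decorrelating property) of the transform.
c = haar_transform([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])
# c == [3., 2., 0., 0., 0., 0., 0., 0.]
```

Only the global average (3) and one coarse-scale difference (2, at the jump) survive; all finer coefficients vanish because neighboring samples are equal.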
Figure 1.2. Test signal (left) and Haar transform (right): all coefficients are plotted on one line. Dashed lines indicate the boundaries between scales.
1.1.2 Basis functions and multiresolution
The input vector y can be seen as coefficients for a basis of characteristic functions ('block' functions), as shown on top of Figure 1.3: i.e. we can write the continuous