
# Introduction to Wavelet Transform and Image Compression

Student: Kang-Hua Hsu
Advisor: Jian-Jiun Ding
E-mail: r96942097@ntu.edu.tw
Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, ROC

DISP@MD531

## Outline (1)

- Introduction
- Multiresolution Analysis (MRA)
  - Subband Coding
  - Haar Transform
  - Multiresolution Expansion
- Wavelet Transform (WT)
  - Continuous WT
  - Discrete WT
  - Fast WT
  - 2-D WT
- Wavelet Packets
- Fundamentals of Image Compression
  - Coding Redundancy
  - Interpixel Redundancy
  - Psychovisual Redundancy
  - Image Compression Model

## Outline (2)

- Lossless Compression
  - Variable-Length Coding
  - Bit-plane Coding
  - Lossless Predictive Coding
- Lossy Compression
  - Lossy Predictive Coding
  - Transform Coding
  - Wavelet Coding
- Conclusion
- References


## Introduction (1) - WT vs. FT

Bases of the FT: time-unlimited weighted sinusoids of different frequencies, so the FT carries no temporal information. Bases of the WT: small waves of limited duration and varying frequency, called wavelets. The WT therefore retains temporal information and is more adaptive.


## Introduction (2) - WT vs. TFA

Temporal information is related to time-frequency analysis, which is constrained by the Heisenberg uncertainty principle. Compare the tiles in a time-frequency plane (Heisenberg cells):


## Introduction (3) - MRA

MRA represents and analyzes signals at more than one resolution. Two related operations with ties to MRA:

- Subband coding
- Haar transform

MRA is just a concept; the wavelet-based transform is one way to implement it.


## Introduction (4) - WT

The WT can be classified according to whether its input and output are continuous or discrete:

- Continuous WT (CWT)
- Discrete WT (DWT)
  - 1-D DWT
  - 2-D transform (for image processing)
  - Fast WT (FWT): exploits a recursive relation among the coefficients


## MRA - Subband Coding (1)

Since the bandwidth of each resulting subband is smaller than that of the original image, the subbands can be downsampled without loss of information. We wish to select the analysis filters $h_0(n)$, $h_1(n)$ and synthesis filters $g_0(n)$, $g_1(n)$ so that the input can be perfectly reconstructed. There are two families of solutions: biorthogonal and orthonormal filter banks.


## MRA - Subband Coding (2)

Biorthogonal filter bank:

$$\langle g_0(k),\, h_0(2n-k)\rangle = \delta(n), \qquad \langle g_0(k),\, h_1(2n-k)\rangle = 0$$
$$\langle g_1(k),\, h_1(2n-k)\rangle = \delta(n), \qquad \langle g_1(k),\, h_0(2n-k)\rangle = 0$$

Orthonormal (it is also biorthogonal) filter bank:

$$g_1(n) = (-1)^n\, g_0(2K-1-n), \qquad h_i(n) = g_i(2K-1-n), \; i \in \{0,1\} \;\text{(time-reversed relation)}$$

where $2K$ denotes the number of coefficients in each filter. The other three filters can be obtained from one prototype filter.
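As a concrete check of these relations, the sketch below (Python with NumPy; an illustration added here, not part of the original slides) derives $g_1$, $h_0$, $h_1$ from a prototype $g_0$ using the orthonormal relations, with the two-tap Haar filter as the prototype:

```python
import numpy as np

def filters_from_prototype(g0):
    """Derive the remaining three filters of an orthonormal two-band
    filter bank from the prototype synthesis filter g0, using
    g1(n) = (-1)^n g0(2K-1-n) and h_i(n) = g_i(2K-1-n), with 2K = len(g0)."""
    g0 = np.asarray(g0, dtype=float)
    n = np.arange(len(g0))
    g1 = (-1.0) ** n * g0[::-1]   # modulated, time-reversed prototype
    h0 = g0[::-1]                 # analysis filters: time-reversed synthesis
    h1 = g1[::-1]
    return g1, h0, h1

# Haar prototype: g0 = [1/sqrt(2), 1/sqrt(2)], so K = 1
g0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
g1, h0, h1 = filters_from_prototype(g0)
```

For the symmetric Haar prototype, $h_0$ coincides with $g_0$, and $g_1$, $h_1$ come out as the expected sum/difference pair.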

## MRA - Subband Coding (3)

1-D to 2-D: apply 1-D two-band subband coding to the rows and then to the columns of the original image.

Here $a$ is the approximation subband (its histogram is spread out, so it compresses poorly) and $d$ denotes the detail subbands (their histograms are concentrated around zero, so they are easily modeled and highly compressible).

The FWT can be implemented by subband coding!


## Haar Transform

$Y = HXH^T$ puts the lower-frequency components of $X$ at the top-left corner of $Y$. This is similar to the DWT. For $N = 4$:

$$H = \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ \sqrt{2} & -\sqrt{2} & 0 & 0 \\ 0 & 0 & \sqrt{2} & -\sqrt{2} \end{bmatrix}$$

Each row implies both a resolution (frequency) and a location (time).
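A minimal numerical sketch of $Y = HXH^T$ with the 4-point Haar matrix (Python with NumPy; the toy input `X` is an assumption for illustration). Since $H$ is orthogonal, $X$ is recovered exactly as $H^T Y H$:

```python
import numpy as np

s2 = np.sqrt(2.0)
# 4-point Haar matrix: row 0 averages; the remaining rows take
# differences at progressively finer locations.
H = 0.5 * np.array([[ 1,   1,   1,   1],
                    [ 1,   1,  -1,  -1],
                    [s2, -s2,   0,   0],
                    [ 0,   0,  s2, -s2]])

X = np.arange(16.0).reshape(4, 4)  # a toy "image"
Y = H @ X @ H.T                    # low frequencies land at the top-left of Y
X_rec = H.T @ Y @ H                # exact inverse: H is orthogonal
```

For this smooth toy image, the largest-magnitude entry of `Y` is the top-left (DC) term, which is the point of the slide.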


## Multiresolution Expansions (1)

$$f(x) = \sum_k \alpha_k\, \varphi_k(x)$$

$\alpha_k$: the real-valued expansion coefficients; $\varphi_k(x)$: the real-valued expansion functions.

Scaling function $\varphi(x)$: spans the approximation of the signal.

$$\varphi_{j,k}(x) = 2^{j/2}\, \varphi(2^j x - k)$$

(the scaled copies are the reason for its name). If we define $V_j = \operatorname{span}_k\{\varphi_{j,k}(x)\}$, then $V_0 \subset V_1 \subset V_2 \subset \cdots$

Refinement equation: $\varphi(x) = \sum_n h_\varphi(n)\, \sqrt{2}\, \varphi(2x - n)$, where the $h_\varphi(n)$ are the scaling function coefficients.


## Multiresolution Expansions (2)

Four requirements of the scaling function:

- The scaling function is orthogonal to its integer translates.
- The subspaces spanned by the scaling function at low scales are nested within those spanned at higher scales.
- The only function that is common to all $V_j$ is $f(x) = 0$; that is, $V_{-\infty} = \{0\}$.
- Any function can be represented with arbitrary precision, because the coarser portions can be represented in terms of the finer ones.


## Multiresolution Expansions (3)

The wavelet function $\psi(x)$ spans the difference between any two adjacent scaling subspaces $V_j$ and $V_{j+1}$. The functions

$$\psi_{j,k}(x) = 2^{j/2}\, \psi(2^j x - k)$$

span the subspace $W_j$.


## Multiresolution Expansions (4)

$$\psi(x) = \sum_n h_\psi(n)\, \sqrt{2}\, \varphi(2x - n)$$

$h_\psi(n)$: the wavelet function coefficients. Relation between the scaling coefficients and the wavelet coefficients:

$$h_\psi(n) = (-1)^n\, h_\varphi(1 - n)$$

This is similar to the relation between the impulse responses of the analysis and synthesis filters in subband coding; there is a time-reversal relation in both cases.


## CWT

The definition of the CWT is

$$W_\psi(s,\tau) = \frac{1}{\sqrt{|s|}} \int_{-\infty}^{\infty} f(x)\, \psi\!\left(\frac{x-\tau}{s}\right) dx$$

a continuous input mapped to a continuous output with two continuous variables, translation $\tau$ and scale $s$. Inverse transform:

$$f(x) = \frac{1}{C_\psi} \int_0^{\infty}\!\!\int_{-\infty}^{\infty} W_\psi(s,\tau)\, \frac{\psi_{s,\tau}(x)}{s^2}\, d\tau\, ds$$

It is guaranteed to be reversible if the admissibility criterion is satisfied:

$$C_\psi = \int_{-\infty}^{\infty} \frac{|\Psi(f)|^2}{|f|}\, df < \infty$$

Hard to implement!


## DWT (1)

Wavelet series expansion:

$$f(x) = \sum_k c_{j_0}(k)\, \varphi_{j_0,k}(x) + \sum_{j=j_0}^{\infty}\sum_k d_j(k)\, \psi_{j,k}(x)$$

$j_0$: arbitrary starting scale; $c_{j_0}(k)$: approximation or scaling coefficients; $d_j(k)$: detail or wavelet coefficients.

$$c_{j_0}(k) = \langle f(x), \varphi_{j_0,k}(x)\rangle = \int f(x)\, \varphi_{j_0,k}(x)\, dx$$
$$d_j(k) = \langle f(x), \psi_{j,k}(x)\rangle = \int f(x)\, \psi_{j,k}(x)\, dx$$

This is still the continuous case. If we change the integrals to summations, the DWT is obtained.


## DWT (2)

$$f(x) = \frac{1}{\sqrt{M}} \sum_k W_\varphi(j_0,k)\, \varphi_{j_0,k}(x) + \frac{1}{\sqrt{M}} \sum_{j=j_0}^{\infty}\sum_k W_\psi(j,k)\, \psi_{j,k}(x)$$

$$W_\varphi(j_0,k) = \frac{1}{\sqrt{M}} \sum_{x=0}^{M-1} f(x)\, \varphi_{j_0,k}(x), \qquad W_\psi(j,k) = \frac{1}{\sqrt{M}} \sum_{x=0}^{M-1} f(x)\, \psi_{j,k}(x)$$

The coefficients measure the similarity (in linear algebra, the orthogonal projection) of $f(x)$ with the basis functions $\varphi_{j_0,k}(x)$ and $\psi_{j,k}(x)$.


## FWT (1)

By the two refinement relations mentioned earlier,

$$\varphi(x) = \sum_n h_\varphi(n)\, \sqrt{2}\, \varphi(2x - n), \qquad \psi(x) = \sum_n h_\psi(n)\, \sqrt{2}\, \varphi(2x - n)$$

we can then derive

$$W_\psi(j,k) = \sum_m h_\psi(m - 2k)\, W_\varphi(j+1,m) = \left.\, h_\psi(-n) * W_\varphi(j+1,n)\, \right|_{n = 2k,\; k \ge 0}$$
$$W_\varphi(j,k) = \sum_m h_\varphi(m - 2k)\, W_\varphi(j+1,m) = \left.\, h_\varphi(-n) * W_\varphi(j+1,n)\, \right|_{n = 2k,\; k \ge 0}$$


## FWT (2)

When the input consists of samples of a function or an image, we can exploit this relation between adjacent-scale coefficients to obtain all of the scaling and wavelet coefficients without ever evaluating the scaling and wavelet functions:

$$W_\psi(j,k) = \sum_m h_\psi(m - 2k)\, W_\varphi(j+1,m) = \left.\, h_\psi(-n) * W_\varphi(j+1,n)\, \right|_{n = 2k,\; k \ge 0}$$
$$W_\varphi(j,k) = \sum_m h_\varphi(m - 2k)\, W_\varphi(j+1,m) = \left.\, h_\varphi(-n) * W_\varphi(j+1,n)\, \right|_{n = 2k,\; k \ge 0}$$


## FWT (3)

The FWT resembles the two-band subband coding scheme! The constraints for perfect reconstruction are the same as in subband coding:

$$h_0(n) = h_\varphi(-n), \qquad h_1(n) = h_\psi(-n)$$
$$g_0(n) = h_0(-n) = h_\varphi(n), \qquad g_1(n) = h_1(-n) = h_\psi(n)$$

## 2-D WT (1)

$$\varphi(x,y) = \varphi(x)\varphi(y), \qquad \psi^H(x,y) = \psi(x)\varphi(y)$$
$$\psi^V(x,y) = \varphi(x)\psi(y), \qquad \psi^D(x,y) = \psi(x)\psi(y)$$

These wavelets are naturally directionally sensitive.
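A sketch of the separable 2-D decomposition in Python/NumPy (illustration only; the H/V/D subband labels below follow one common convention and are an assumption here): apply the 1-D Haar step to the rows, then to the columns, producing the four subbands.

```python
import numpy as np

def haar_pairs(x):
    """One 1-D Haar analysis step along the last axis."""
    lo = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)
    hi = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2.0)
    return lo, hi

def dwt2_haar(img):
    """Separable 2-D DWT: filter the rows first, then the columns."""
    lo, hi = haar_pairs(img)        # row filtering
    a, v = haar_pairs(lo.T)         # column filtering of the low band
    h, d = haar_pairs(hi.T)         # column filtering of the high band
    return a.T, h.T, v.T, d.T       # approximation + H/V/D details
```

For a constant image, all the energy lands in the approximation subband and every detail subband is zero, matching the "energy in the lower band" observation on the next slide.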


## 2-D WT (2)

Note that the upper-left subimage resembles the original image, because the energy of an image is usually concentrated in the lower bands.


## Wavelet Packets

A wavelet packet is a more flexible decomposition, in which the detail subbands, not only the approximation, may be split further.


## Fundamentals of Image Compression (1)

Goal: to convey the same information with the least amount of data (bits). Three kinds of redundancy in an image:

- Coding redundancy
- Interpixel redundancy
- Psychovisual redundancy

Image compression is achieved when these redundancies are reduced or eliminated.

## Fundamentals of Image Compression (2)

Image compression can be classified as

- Lossless (error-free: no distortion after reconstruction)
- Lossy

Information theory is an important tool here. Data ≠ information: information is merely carried by the data.


## Fundamentals of Image Compression (3)

Evaluation of lossless compression:

$$\text{Compression ratio: } C_R = \frac{n_1}{n_2}, \qquad \text{Relative data redundancy: } R_D = 1 - \frac{1}{C_R}$$

Evaluation of lossy compression: the root-mean-square (rms) error

$$e_{\text{rms}} = \left[ \frac{1}{MN} \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \left( \hat{f}(x,y) - f(x,y) \right)^2 \right]^{1/2}$$


## Coding Redundancy

If there is a set of codewords that represents the original data with fewer bits, the original data is said to have coding redundancy.

We can obtain the probability information from the histogram of the original image. Variable-length coding: assign shorter codewords to the more probable gray levels.


## Interpixel Redundancy (1)

Interpixel redundancy results from the correlation between neighboring pixels. Because the value of any given pixel can be reasonably predicted from the values of its neighbors, the information carried by individual pixels is relatively small.


## Interpixel Redundancy (2)

To reduce interpixel redundancy, the original image is transformed into a more efficient, usually non-visual, format. This transformation is called mapping. Examples:

- Run-length coding, e.g. 10000000 (a 1 followed by seven 0s) → (1, 111), where 111 is the run length 7 in binary.
- Difference coding.
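A minimal run-length coder for a binary sequence (Python, illustrative only; practical schemes additionally pack the runs into fixed- or variable-length bit fields):

```python
def rle_encode(bits):
    """Collapse a sequence into (value, run_length) pairs."""
    runs = []
    prev, count = bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

# The slide's example: a 1 followed by seven 0s
runs = rle_encode([1, 0, 0, 0, 0, 0, 0, 0])
```

The round trip is lossless: decoding the runs reproduces the input exactly, so only the mapping changed, not the information.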


## Psychovisual Redundancy

Humans do not respond with equal sensitivity to every pixel; for example, edges are more noticeable to us. Exploiting this means information loss: we truncate or coarsely quantize the gray levels (or coefficients) that will not significantly impair the perceived image quality. Similarly, animation takes advantage of the persistence of vision to reduce the scanning rate.


## Image Compression Model

The quantizer is not always necessary. The mapper either

1. reduces the interpixel redundancy to compress directly, e.g. by run-length coding, or
2. makes the data more amenable to compression in a later stage; for example, DCT or DWT coefficients are good candidates for the quantization stage.

## Lossless Compression

The input can be reconstructed without distortion; no quantizer is involved in the compression procedure. Compression ratios generally range from 2 to 10. There is a trade-off between the compression ratio and the computational complexity.


## Variable-Length Coding

It assigns fewer bits to the more probable gray levels than to the less probable ones. It reduces only the coding redundancy. Example: Huffman coding.
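A compact Huffman construction (Python, stdlib `heapq`; a sketch added here that assumes a known symbol-probability table, which is what the image histogram supplies in practice):

```python
import heapq

def huffman_codes(prob):
    """Build a prefix-free code from {symbol: probability} by repeatedly
    merging the two least probable nodes."""
    # Each heap entry: [probability, tie-breaker, {symbol: partial codeword}]
    heap = [[p, i, {s: ''}] for i, (s, p) in enumerate(prob.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for s in lo[2]:
            lo[2][s] = '0' + lo[2][s]   # prepend a bit as we climb the tree
        for s in hi[2]:
            hi[2][s] = '1' + hi[2][s]
        heapq.heappush(heap, [lo[0] + hi[0], tie, {**lo[2], **hi[2]}])
        tie += 1
    return heap[0][2]

codes = huffman_codes({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125})
```

For these dyadic probabilities the codeword lengths are 1, 2, 3, 3 bits, so the average length (1.75 bits/symbol) equals the source entropy, the best any variable-length code can do.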


## Bit-plane Coding

A monochrome or color image is decomposed into a series of binary images (bit planes), which are then compressed by a binary compression method. It reduces the interpixel redundancy.


## Lossless Predictive Coding

It encodes the difference between the actual and the predicted value of each pixel. It reduces the interpixel redundancy of closely spaced pixels; how much redundancy it removes depends on the predictor.
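With the simplest predictor (the previous pixel), the scheme reduces to difference coding; a sketch (Python/NumPy, added for illustration with a hypothetical scanline):

```python
import numpy as np

def predictive_encode(x):
    """Residuals e[n] = x[n] - x[n-1]; the first sample is sent as-is.
    For smooth data the residuals cluster near zero, so they are
    cheaper to entropy-code than the raw samples."""
    x = np.asarray(x, dtype=int)
    e = np.empty_like(x)
    e[0] = x[0]
    e[1:] = x[1:] - x[:-1]
    return e

def predictive_decode(e):
    """Invert by accumulating the residuals (no information is lost)."""
    return np.cumsum(e)

row = np.array([100, 101, 103, 103, 102, 104])   # a smooth scanline
residuals = predictive_encode(row)
```

The residuals after the first sample stay within ±2 here, while the raw samples need 7 bits each, which is exactly the redundancy the predictor removes.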


## Lossy Compression

The input cannot be reconstructed without distortion, because accuracy is sacrificed. It employs a quantizer. Compression ratios range from 10 to 100 (much higher than in the lossless case). There is a trade-off between reconstruction accuracy and compression performance.


## Lossy Predictive Coding

It is simply lossless predictive coding with a quantizer added. The quantizer is designed to minimize the quantization error, and there is a trade-off between quantizer complexity and quantization error. Delta modulation (DM) is a simple example, exploiting oversampling and a 1-bit quantizer.

## Transform Coding (1)

Most of the information is concentrated in a small number of the transformed coefficients, so we truncate or coarsely quantize the coefficients that carry little information.

The goal of the transformation is to pack as much information as possible into the smallest number of transform coefficients. Compression is achieved during the quantization of the transformed coefficients, not during the transformation itself.
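A hedged sketch of the idea (Python/NumPy, not from the slides; the orthonormal DCT-II matrix is built by hand rather than taken from a library): transform a block, keep only the `keep` largest-magnitude coefficients, then inverse-transform.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T = I)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def transform_code(block, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients, then
    reconstruct. The truncation step is where compression happens."""
    C = dct_matrix(block.shape[0])
    Y = C @ block @ C.T                       # forward 2-D transform
    thresh = np.sort(np.abs(Y).ravel())[-keep]
    Y_trunc = np.where(np.abs(Y) >= thresh, Y, 0.0)
    return C.T @ Y_trunc @ C                  # inverse transform
```

Keeping every coefficient reconstructs the block exactly; discarding the small ones trades a modest rms error for the reduced coefficient count.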

## Transform Coding (2)

The more coefficients we truncate, the higher the compression ratio, but the rms error between the reconstructed image and the original also increases. Every stage can be adapted to the local image content. Choosing the transform is a trade-off between information-packing ability and computational complexity:

| Transform | Information packing ability | Computational complexity |
|-----------|-----------------------------|--------------------------|
| KLT       | Best                        | High                     |
| WHT       | Not good                    | Lowest                   |
| DCT       | Good                        | Low                      |


## Transform Coding (3)

Disadvantage: blocking artifacts (visible errors) appear under high compression, caused by the subdivision into subimages. Size of the subimage: increasing it gives a higher compression ratio, but also higher computational complexity and larger blocks.


## How to solve the blocking artifact problem? Using the WT!

## Wavelet Coding (1)

Wavelet coding is not merely transform coding that uses the wavelet transform: there is no subdivision into blocks!

Subdivision is unnecessary because:

- the FWT is computationally efficient, and
- the basis functions are of limited duration.

This avoids the blocking artifact!

## Wavelet Coding (2)

We truncate only the detail coefficients. Regarding the decomposition level: the first few decompositions extract the majority of the detail, so too many decomposition levels just waste time.


## Wavelet Coding (3)

Quantization with a dead zone: set a threshold and truncate the detail coefficients whose magnitudes are smaller than the threshold.
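A one-line sketch of dead-zone thresholding (Python/NumPy; the vector of detail coefficients below is a hypothetical example):

```python
import numpy as np

def dead_zone(coeffs, threshold):
    """Zero every detail coefficient whose magnitude falls inside the
    dead zone [-threshold, threshold]; larger coefficients pass through."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) < threshold, 0.0, c)

details = np.array([0.1, -2.0, 0.4, 3.0, -0.3])
quantized = dead_zone(details, 1.0)
```

The resulting long runs of zeros are what the subsequent entropy coder exploits.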


## Conclusion

The WT is a powerful tool for analyzing signals, and it has many applications, such as image compression. However, most of them are not yet widely adopted because of various disadvantages, and our future work is to improve them. For example, we could improve adaptive transform coding, including the shape of the subimages, the selection of the transform, and the quantizer design. These are all active research topics.


## References

- R. C. Gonzalez and R. E. Woods, *Digital Image Processing*, 2nd edition, Prentice Hall, 2002.
- J. C. Goswami and A. K. Chan, *Fundamentals of Wavelets*, John Wiley & Sons, New York, 1999.
- Contributors of Wikipedia, "Arithmetic coding," available at http://en.wikipedia.org/wiki/Arithmetic_coding.
- Contributors of Wikipedia, "Lempel-Ziv-Welch," available at http://en.wikipedia.org/wiki/Lempel-Ziv-Welch.
- S. Haykin, *Communication Systems*, 4th edition, John Wiley & Sons, New York, 2001.
