
QUANTIZATION

Abstract

The purpose of this paper is to introduce an image compression scheme using a combination of the wavelet packet transform [1] and pyramidal vector quantization [2]. All the wavelet packet bases corresponding to the various tree structures have been considered, and the best one has been chosen based upon the peak signal-to-noise ratio and compression ratio of the reconstructed image. The first step is input image decorrelation using the wavelet packet transform; the second step in the coder construction is the design of a pyramid vector quantizer. Pyramid Vector Quantization (PVQ) was first introduced by Fischer [2] as a fast and efficient method of quantizing Laplacian-like data, such as that generated by transforms (especially wavelet transforms) or sub-band filters in an image compression system. PVQ has very simple, systematic encoding and decoding algorithms and does not require codebook storage. These properties have led to high-performance, fast PVQ image compression systems for both transform and subband decompositions. The proposed algorithm provides good compression performance.

1. Introduction

Wavelet packet transformations have a wide variety of applications in computer graphics, including radiosity, multiresolution painting, curve design, mesh optimization, volume visualization, image searching, animation control, and image compression. In this paper a new decomposition scheme for 2-D wavelet packets, based on adaptive selection of 2-D wavelets according to the compressibility of their coefficients, is presented, and the application of this scheme to image compression is discussed.

Although many image compression techniques are currently available, there is still a need to develop faster and more robust algorithms to compress images. In this paper, we propose high-frequency image compression using wavelet packets [1] and Pyramidal Vector Quantization [2].


2. Image Compression: Brief Review

A common characteristic of most images is that neighboring pixels are correlated. Therefore, the most important task is to find a less correlated representation of the image. The fundamental components of compression are the reduction of redundancy and irrelevancy. Redundancy reduction aims at removing duplication from the image. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the human visual system (HVS). Three types of redundancy can be identified: spatial redundancy, spectral redundancy, and temporal redundancy. In still images, compression is achieved by removing spatial and spectral redundancy.

Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression can be achieved by transforming the data, projecting the representation onto a smaller set of data, and encoding the result. The underlying basis of the reduction process is the removal of redundant data. Digital image compression techniques aim at reducing coding, interpixel, and psychovisual redundancy. Data compression is achieved when at least one of these redundancies is reduced or eliminated. Coding redundancy can be reduced using the probability of occurrence of the events (such as gray-level values). To reduce interpixel redundancy, the 2-D pixel array normally used for human viewing and interpretation must be transformed (or mapped) into a more efficient (but not necessarily visible) format. The transformation is called reversible if the original image elements can be reconstructed from the transformed data set. Psychovisual redundancy can be reduced using the fact that the human eye does not respond with equal sensitivity to all visual information; consequently, certain information can be eliminated without significantly impairing the perceived quality of the image.

4. Error Metrics

Two of the error metrics used to compare image compression techniques are the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The MSE is the cumulative squared error between the compressed and the original image, whereas the PSNR is a measure of the peak error. For two m x n monochrome images I and K, where one image is considered a noisy approximation of the other, the MSE is defined as:

    MSE = (1 / (m n)) * sum_{i=0}^{m-1} sum_{j=0}^{n-1} [ I(i, j) - K(i, j) ]^2

The PSNR (in dB) is defined as:

    PSNR = 10 log10( MAX_I^2 / MSE ) = 20 log10( MAX_I / sqrt(MSE) )

Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAX_I is 2^B - 1.

A lower MSE means less error, and because of the inverse relation between MSE and PSNR, this translates to a higher PSNR. A higher PSNR is desirable because it means the signal-to-noise ratio is higher; here, the "signal" is the original image and the "noise" is the reconstruction error. A compression scheme with a lower MSE (and hence a higher PSNR) can therefore be recognized as the better one.
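Assuming 8-bit grayscale images stored as NumPy arrays, the two metrics above can be sketched as:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two equal-sized grayscale images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; max_val is MAX_I (255 for 8-bit)."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Example: a flat 8x8 image vs. an approximation with one pixel off by 10
img = np.full((8, 8), 100, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 110
print(mse(img, noisy))    # 100/64 = 1.5625
print(psnr(img, noisy))   # ~46.19 dB
```

The cast to float64 before subtracting matters: subtracting uint8 arrays directly would wrap around instead of going negative.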


5. Wavelet Packets

The idea of wavelet packets is the same as that of wavelets; the only difference is that wavelet packets offer a more complex and flexible analysis, because in wavelet packet analysis the details as well as the approximations are split. Wavelet packet decomposition therefore yields a large family of bases from which we can look for the best representation.

Wavelet packets represent a generalization of multiresolution decomposition and comprise the entire family of subband-coded (tree) decompositions. They can be characterized as a recursive application of the highpass and lowpass filters that form a Quadrature Mirror Filter (QMF) pair. The calculation of the DWT begins by filtering a signal with the highpass and lowpass filters and then downsampling the output; the process is repeated by applying the QMF pair to the output of the lowpass filter. In the wavelet packet (WP) decomposition, the recursive procedure is applied to all the coarse-scale approximation and detail signals, which leads to a complete WP tree. This is an idea of R. R. Coifman, Y. Meyer, and V. Wickerhauser [1]. The calculation of wavelet packets is often characterized schematically by a binary tree in which each branch represents the highpass- or lowpass-filtered output of a root node, as shown in Figure 1. Wavelet packets are efficiently implementable from arbitrary sub-band decomposition trees, and this may sometimes lead to adaptive wavelet transforms.

Wavelet packet based image coding can be performed in three steps:

1. Selection of the optimal wavelet packets from the best tree structure.
2. Vector quantization of the optimal wavelet packet coefficients.
3. Entropy coding of the quantized values.

The input image f(m, n), of resolution 256 x 256, is subdivided into low- and high-frequency bands by performing linear convolution of the gray-scale input image with the lowpass and highpass FIR filter coefficients. The downsampled outputs are again subdivided into high- and low-frequency bands, and the step is repeated. This is continued up to three levels of decomposition, i.e., until we obtain eight wavelet packets. Only two levels are shown in Figure 1.

The synthesis filters are used for the reconstruction of the input image. The upsampled outputs are convolved with the corresponding lowpass and highpass synthesis filters, and the outputs are added to give the reconstructed output. The careful design of the QMFs is very important for achieving good performance in sub-band coding.
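The analysis/synthesis loop described above can be sketched in one dimension using the Haar QMF pair as a stand-in for the paper's (unspecified) FIR filters; three levels of full-tree splitting give the eight packets mentioned in the text:

```python
import numpy as np

def haar_split(x):
    """One QMF analysis step: lowpass/highpass filtering + downsample by 2."""
    x = np.asarray(x, dtype=np.float64)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # lowpass branch
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # highpass branch
    return approx, detail

def haar_merge(approx, detail):
    """One synthesis step: upsample and combine, exactly inverting haar_split."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def wp_decompose(x, levels):
    """Full wavelet packet tree: split BOTH branches at every level,
    giving 2**levels sub-bands (eight packets at three levels)."""
    bands = [np.asarray(x, dtype=np.float64)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_split(b)]
    return bands

signal = np.arange(16.0)
packets = wp_decompose(signal, 3)
print(len(packets))                     # 8

# Perfect reconstruction: merge adjacent packets back up the tree
while len(packets) > 1:
    packets = [haar_merge(packets[i], packets[i + 1])
               for i in range(0, len(packets), 2)]
print(np.allclose(packets[0], signal))  # True
```

For images the same split is applied along rows and then columns, giving the LL, LH, HL, and HH bands of Figure 1.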

Wavelet packets [3], [4] offer more flexibility than ordinary wavelet-based coding. In wavelet packet coding, all possible tree structures are searched for a particular filter set, instead of using a fixed wavelet packet tree, and the wavelet packet coefficients of each tree structure are determined. All the wavelet packets from the different tree structures are then carefully examined in order to select the optimal wavelet packets that best match the space-frequency characteristics of the input image.

The optimal wavelet packets are selected based on two performance factors:

1. Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image.
2. Compression ratio.
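As an illustration of searching over all tree structures, the following sketch uses the classical entropy cost of Coifman and Wickerhauser [1] on a 1-D Haar tree. Note this is only a simplified stand-in: the paper's own selection criterion is PSNR and compression ratio, not entropy.

```python
import numpy as np

def entropy_cost(band):
    """Simplified per-band entropy cost: low when energy is concentrated
    in few coefficients (i.e., the band compresses well)."""
    p = band ** 2
    total = np.sum(p)
    if total == 0:
        return 0.0
    p = p[p > 0] / total
    return -np.sum(p * np.log(p))

def haar_split(x):
    """One Haar QMF analysis step (lowpass, highpass), downsampled by 2."""
    x = np.asarray(x, dtype=np.float64)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def best_basis(band, max_depth):
    """Return (cost, leaves): keep this node whole, or split it further,
    whichever gives the lower total cost. This implicitly compares every
    tree structure up to max_depth."""
    own = entropy_cost(band)
    if max_depth == 0 or len(band) < 2:
        return own, [band]
    lo, hi = haar_split(band)
    c_lo, leaves_lo = best_basis(lo, max_depth - 1)
    c_hi, leaves_hi = best_basis(hi, max_depth - 1)
    if c_lo + c_hi < own:
        return c_lo + c_hi, leaves_lo + leaves_hi
    return own, [band]

signal = np.sin(np.linspace(0, 8 * np.pi, 64))
cost, leaves = best_basis(signal, 3)
print(len(leaves), cost)
```

Because the chosen basis is the cheaper of "keep" and "split" at every node, its cost can never exceed that of the undecomposed signal.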

Figure 1. Image decomposition: the input image is filtered by the lowpass (L) and highpass (H) filters and downsampled, and each branch is filtered again, producing the LL, LH, HL, and HH sub-bands.

6. Vector Quantization

Quantization [5], [6] refers to the process of approximating the continuous set of values in the image data with a finite, preferably small, set of values. The input to a quantizer is the original data, and the output is always one of a finite number of levels. The quantizer is a function whose set of output values is discrete and usually finite. This is obviously a process of approximation, and a good quantizer is one which represents the original signal with minimum loss or distortion. There are two types of quantization: scalar quantization and vector quantization. In scalar quantization, each input symbol is treated separately in producing the output, while in vector quantization the input symbols are grouped together into vectors and processed jointly to give the output. This grouping of data, and treating it as a single unit, increases the optimality of the vector quantizer, but at the cost of increased computational complexity [5].

According to Shannon's rate-distortion theory, better results are always obtained when vectors rather than scalars are quantized. Developed by Gersho and Gray in 1980, vector quantization has proven to be a powerful tool for digital image compression [7]. The method encodes a sequence of samples (a vector) rather than encoding each sample individually. Encoding is performed by approximating the sequence to be coded by a vector belonging to a catalogue of shapes, usually known as a codebook or dictionary. The codebook consists of code vectors; the number of code vectors in the codebook is 2^(RL), where R is the rate in bits per pixel (bpp) and L is the vector dimension.
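A minimal sketch of the encoding step, assuming a codebook is already available (codebook design, e.g. by the LBG algorithm, is a separate problem):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Return, for each input vector, the index of the nearest code
    vector (minimum squared Euclidean distance) -- the core VQ step."""
    diffs = vectors[:, None, :] - codebook[None, :, :]
    dists = np.sum(diffs ** 2, axis=2)   # dists[i, j] = ||v_i - c_j||^2
    return np.argmin(dists, axis=1)

# Toy codebook: with rate R = 1 bpp and dimension L = 2,
# the codebook holds 2**(R*L) = 4 code vectors.
codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
vectors = np.array([[0.1, 0.2], [0.9, 0.8]])
indices = vq_encode(vectors, codebook)
reconstructed = codebook[indices]        # decoder: simple table lookup
print(indices)                           # [0 3]
```

Only the indices need to be transmitted; the decoder recovers an approximation by table lookup, which is why codebook search dominates the computational cost.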

The process of pyramidal vector quantization [8] involves the following steps:

1. All the vectors (of dimension L) taken from the wavelet-transformed images (sub-band images) are projected onto the surface of the pyramid S(L, λ) = { x : |x_1| + ... + |x_L| = λ }, such that the projection yields the least mean squared error (Figure 2). The main outcome of this projection is that the dynamic range of the actual vectors is decreased while producing the least distortion.

2. The vectors lying on the surface of the pyramid S(L, λ) are then scaled to an inner pyramid S(L, K) with a scaling factor γ, where the inner pyramid S(L, K) is chosen based on the rate criterion R bits/dimension.

Figure 2. Projection of the vectors (black dots) taken from the sub-band images onto the surface of the pyramid S(L, λ).
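A simplified sketch of these two steps, assuming Fischer's definition of the pyramid S(L, K) as the set of integer vectors whose absolute values sum to K; the greedy rounding correction below is an illustrative stand-in for the least-MSE projection, not the paper's exact procedure:

```python
import numpy as np

def pvq_quantize(v, K):
    """Quantize v to an integer point on the pyramid surface
    S(L, K) = { x : |x_1| + ... + |x_L| = K }.
    Step 1: scale v onto the surface; step 2: round to integers,
    greedily correcting components until the L1 norm is exactly K."""
    v = np.asarray(v, dtype=np.float64)
    l1 = np.sum(np.abs(v))
    if l1 == 0:
        q = np.zeros(len(v), dtype=int)
        q[0] = K                       # arbitrary surface point for zero input
        return q
    scaled = K * v / l1                # projection onto the pyramid surface
    q = np.rint(scaled).astype(int)
    s = np.where(scaled >= 0, 1, -1)   # sign of each component
    while np.sum(np.abs(q)) < K:       # rounding undershot: grow a magnitude
        i = np.argmax(np.abs(scaled) - np.abs(q))
        q[i] += s[i]
    while np.sum(np.abs(q)) > K:       # rounding overshot: shrink a magnitude
        over = np.where(np.abs(q) > 0, np.abs(q) - np.abs(scaled), -np.inf)
        i = np.argmax(over)
        q[i] -= s[i]
    return q

q = pvq_quantize([0.7, -0.3, 0.1], K=5)
print(q, np.sum(np.abs(q)))            # [ 3 -1  1] 5
```

Because every point on S(L, K) can be enumerated systematically, the index of q can be computed without storing a codebook, which is the key advantage of PVQ noted in the abstract.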

7. Huffman Coding

Huffman coding [9] is based on probability and entropy. Huffman codes exploit the entropy of the message to give a good compression ratio. Huffman compression is a lossless compression technique. The algorithm can be stated as follows:

1. Input all symbols (characters) along with their respective frequencies.
2. Create a leaf node (with NULL left and right links) for each symbol.
3. Let S be the set containing all the nodes created in step 2.
4. Sort the nodes (symbols) in S with respect to their frequencies.
5. Create a new node combining the 2 nodes (symbols) with the least frequencies.
6. The frequency of this new combined node will be equal to the sum of the frequencies of the 2 nodes which were combined; the new node becomes their parent.
7. Replace, in S, the 2 nodes which were combined with the new combined node.
8. Repeat steps 4-7 until only one node remains in S; this node is the root of the Huffman tree.
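The steps above can be sketched with a binary heap standing in for the repeated sorting of S:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build Huffman codes: repeatedly merge the two lowest-frequency
    nodes until a single root remains, then read codes off the tree."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tie_breaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {heap[0][2]: "0"}
    tick = len(heap)                        # unique tie-breaker counter
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (left, right)))
        tick += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
print(codes)   # 'a' (most frequent) gets the shortest code
```

The tie-breaker counter keeps the heap from ever comparing two tree nodes directly, which would otherwise raise a TypeError on mixed leaf/internal entries.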

8. Experimental Results

Figure 3 shows the experimental results for two different levels of decomposition (levels 2 and 3). From Figure 3, it is clear that as the level of decomposition increases, the quality of the image decreases; that is, the reconstructed image becomes less similar to the input image as the level of decomposition increases. The obtained results are given in Table 1.

Table 1. PSNR and compression ratio at different levels of decomposition.

Level of Decomposition    PSNR (dB)    Compression Ratio
1                         31.33         3.1476
2                         27.22         9.6780
3                         23.74        27.7122
4                         20.02        46.5165


From the table, it is clear that as the level of decomposition increases, the compression ratio increases, but the PSNR decreases.

The different levels of decomposition are shown in Figures 4a, 4b and 5a, 5b, 5c. From Figures 4 and 5, it is clear that as the level of decomposition increases, the quality of the image decreases.

Figure 4a. Second-level decomposition in the LL band. Figure 4b. Third-level decomposition in the LL band.

Figure 5a. Second-level decomposition in the LH band. Figure 5b. Second-level decomposition in the HL band. Figure 5c. Second-level decomposition in the HH band.


9. Conclusion

The image compression technique based on the wavelet packet transform and pyramidal vector quantization is found to perform well, with good compression ratios and PSNR values in the range of 20-40 dB. The results demonstrate a significant gain in the percentage of zero coefficients in the wavelet packet representation, with a good signal-to-noise ratio for standard as well as high-frequency images. The performance of the proposed scheme is competitive with that of state-of-the-art coders in terms of PSNR and compression ratio.

References

1. R. R. Coifman and M. V. Wickerhauser, "Entropy-based algorithms for best basis selection," IEEE Trans. Inform. Theory, vol. 38, pp. 713-718, Mar. 1992.
2. T. R. Fischer, "A pyramid vector quantizer," IEEE Trans. Inform. Theory, vol. IT-32, pp. 568-583, July 1986.
3. M. B. Martin and A. E. Bell, "New image compression techniques using multiwavelets and multiwavelet packets," IEEE Transactions on Image Processing, March 2001.
4. S. Kasaei and M. Deriche, "A novel fingerprint image compression technique using wavelet packets and pyramid lattice vector quantization," IEEE Transactions on Image Processing, vol. 11, no. 12, December 2002.
5. K. P. Soman and K. I. Ramachandran, Insight into Wavelets: From Theory to Practice, Prentice Hall India, New Delhi, 2002.
6. G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, Wellesley, MA, 1996. http://www.math.mit.edu/~gs/books/wfb.html
7. M. Nelson, The Data Compression Book, New York: M&T Books, 1992.
8. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, 1992.
9. Image Processing Toolbox for Use with MATLAB, The MathWorks Inc., Natick, MA, second edition, 1997.
