INTRODUCTION
Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive, multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology.

For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established. The performance of these coders generally degrades at low bit rates, mainly because of the underlying block-based Discrete Cosine Transform (DCT) scheme. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Because of these advantages, the top contenders in the JPEG-2000 standard are all wavelet-based compression algorithms.




IMAGE COMPRESSION
Image compression is a technique for processing images: the compression of graphics for storage or transmission. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some finer details in the image can be sacrificed to save storage space.

Compression is basically of two types:
1. Lossy compression
2. Lossless compression

Lossy compression
Lossy compression of data concedes a certain loss of accuracy in exchange for greatly increased compression. An image reconstructed following lossy compression contains degradation relative to the original, often because the compression scheme completely discards redundant information. Under normal viewing conditions, however, no visible loss is perceived. Lossy compression proves effective when applied to graphics images and digitized voice.


Lossless compression
Here the reconstructed image after compression is numerically identical to the original image. Lossless compression consists of those techniques guaranteed to generate an exact duplicate of the input data stream after a compress/expand cycle. This is the type of compression used when storing database records, spreadsheets or word-processing files. Lossless compression can only achieve a modest amount of compression.
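As a quick illustration of the exact-duplicate guarantee (using Python's standard zlib library, a general-purpose lossless codec, not one of the image schemes discussed in this report):

```python
import zlib

data = b"lossless compression must reproduce the input exactly " * 100
compressed = zlib.compress(data, level=9)
restored = zlib.decompress(compressed)

# An exact duplicate is recovered after a compress/expand cycle,
# while the compressed stream occupies far less space.
assert restored == data
print(len(data), "->", len(compressed), "bytes")
```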

Compression using wavelet transforms belongs to a class of techniques called transform coding. The objectives of transform coding are:
I) To create a representation of the data in which there is less correlation among the coefficient values. This is called decorrelating the data.
II) To have a representation in which it is possible to quantize different coordinates with different precision.

IMAGE COMPRESSION SYSTEM
A typical lossy image compression system consists of three closely connected components, namely (a) Source encoder (b) Quantizer (c) Entropy encoder.

Input signal/image -> Source encoder -> Quantizer -> Entropy encoder -> Compressed signal/image
FIGURE 1 IMAGE COMPRESSION SYSTEM

Source encoder
This is a linear transformer in which the given signal or image is transformed to a different domain. The other two components are discussed later.
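The three components can be sketched as a composition of toy stages. All three stage implementations here are illustrative placeholders (pairwise averages/differences, uniform rounding, run-length coding), not the algorithms a real codec uses:

```python
def source_encode(signal):
    # Placeholder transform stage: pairwise averages and differences.
    # For smooth data the differences come out near zero (decorrelation).
    avg  = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + diff

def quantize(coeffs, step=1.0):
    # Map continuous coefficients to a finite set of integer levels.
    return [round(c / step) for c in coeffs]

def entropy_encode(levels):
    # Placeholder entropy stage: run-length encode the level sequence.
    out, run = [], 1
    for prev, cur in zip(levels, levels[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((levels[-1], run))
    return out

signal = [10.0, 10.2, 10.1, 9.9, 50.0, 50.3, 10.0, 10.1]
print(entropy_encode(quantize(source_encode(signal))))
```

The near-zero difference coefficients collapse into a single run, which is where the compression comes from.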

STEPS IN COMPRESSION
The usual steps involved in compressing an image are:
1. Specifying the rate (bits available) and distortion (tolerable error) parameters for the target image.
2. Dividing the image data into various classes, based on their importance.
3. Dividing the available bit budget among these classes such that the distortion is a minimum.
4. Quantizing each class separately using the bit allocation information.
5. Encoding each class separately using an entropy coder and writing to the file.

Bit allocation
The first step in compressing an image is to segregate the image data into different classes. Depending on the importance of the data it contains, each class is allocated a portion of the total bit budget, such that the compressed image has the minimum possible distortion. This procedure is called bit allocation. Rate-Distortion theory is often used for solving the problem of allocating bits to a set of classes, or for bit-rate control in general. The theory aims at reducing the distortion for a given target bit rate by optimally allocating bits to the various classes of data. One approach to solving the problem of optimal bit allocation using Rate-Distortion theory is explained below.

1. Initially, all classes are allocated a predefined maximum number of bits.
2. For each class, one bit is reduced from its quota of allocated bits, and the distortion due to the reduction of that one bit is calculated.
3. Of all the classes, the class with minimum distortion for a reduction of one bit is noted, and one bit is reduced from its quota of bits.
4. The total distortion D for all classes is calculated, and the total rate for all classes is calculated as R = Σ p(i) * B(i), where p is the probability and B is the bit allocation for each class.
5. The target rate and distortion specifications are compared with the values obtained above. If they are not met, go to step 2.
6. In this way one bit at a time is reduced until optimality is achieved in either distortion or target rate, or both.

Classifying image data
An image is represented as a two-dimensional array of coefficients, each coefficient representing the brightness level at that point. When looking from a higher perspective, the coefficients cannot be differentiated as more important ones and lesser important ones. But most natural images have smooth colour variations, with the fine details being represented as sharp edges in between the smooth variations.
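The six-step allocation loop described under Bit allocation can be sketched as follows. This is a toy sketch: the exponential distortion model and the example class probabilities are assumptions made for illustration, not part of the procedure itself:

```python
def allocate_bits(probs, max_bits, target_rate):
    """Greedy one-bit-at-a-time rate-distortion allocation (toy sketch).

    probs[i] : probability (relative size) of class i
    Assumed distortion model: distortion halves per extra bit (2**(-2b)).
    """
    bits = [max_bits] * len(probs)              # step 1: start at the maximum

    def distortion(b):                          # assumed per-class distortion
        return 2.0 ** (-2 * b)

    def rate():                                 # R = sum p(i) * B(i)
        return sum(p * b for p, b in zip(probs, bits))

    while rate() > target_rate:                 # steps 2-6: trim until target met
        # pick the class whose one-bit reduction increases distortion least
        best = min(
            (i for i in range(len(bits)) if bits[i] > 0),
            key=lambda i: probs[i] * (distortion(bits[i] - 1) - distortion(bits[i])),
        )
        bits[best] -= 1
    return bits

print(allocate_bits([0.7, 0.2, 0.1], max_bits=8, target_rate=3.0))
```

Important classes (high probability, high remaining bits) lose bits last, so the final allocation is skewed toward them.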

Technically, the smooth variations in colour can be termed low-frequency variations and the sharp variations high-frequency variations. The low-frequency components constitute the base of an image, and the high-frequency components add upon them to refine the image, thereby giving a detailed image. Hence the smooth variations demand more importance than the details. Separating the smooth variations and details of the image can be done in many ways. One such way is the decomposition of the image using the Discrete Wavelet Transform (DWT).

DWT of an image
A low pass filter and a high pass filter are chosen, such that they exactly halve the frequency range between themselves. This filter pair is called the analysis filter pair. First, the low pass filter is applied to each row of data, thereby obtaining the low-frequency components of the row. But since the low pass filter is a half-band filter, the output data contains frequencies only in the first half of the original frequency range, so it can be subsampled by two, and the output then contains only half the original number of samples. Next, the high pass filter is applied to the same row of data, and similarly the high pass components are separated and placed by the side of the low pass components.
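A minimal sketch of one analysis pass over a row, assuming the simplest possible half-band pair (unnormalized Haar averages and differences; practical codecs use longer wavelet filters):

```python
def analyze_row(row):
    """One level of Haar analysis: low pass and high pass, each subsampled by two."""
    assert len(row) % 2 == 0
    low  = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]  # smooth part
    high = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]  # detail part
    return low + high   # high pass components placed beside the low pass ones

print(analyze_row([4, 4, 8, 8, 2, 6, 9, 3]))
# -> [4.0, 8.0, 4.0, 6.0, 0.0, 0.0, -2.0, 3.0]
```

Note how the smooth pairs (4,4) and (8,8) leave zero detail coefficients, while the sharp transitions show up in the high pass half.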

This procedure is done for all rows. Next, the filtering is done for each column of the intermediate data. The resulting two-dimensional array of coefficients contains four bands of data, each labelled LL (low-low), HL (high-low), LH (low-high) and HH (high-high). The LL band can be decomposed once again in the same manner, thereby producing even more subbands. This can be done up to any level, thereby resulting in a pyramidal decomposition as shown.

FIGURE 2 SINGLE-, TWO- AND THREE-LEVEL DECOMPOSITION

The LL band at the highest level can be classified as most important, and the other detail bands can be classified as of lesser importance, with the degree of importance decreasing from the top of the pyramid to the bands at the bottom.
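The row-then-column filtering that produces the four bands can be sketched for a small image (again assuming Haar filters for brevity):

```python
def haar_step(v):
    # One analysis pass: averages (low pass) followed by differences (high pass).
    low  = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    high = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return low + high

def dwt2_level(image):
    rows = [haar_step(r) for r in image]            # filter and subsample each row
    cols = [haar_step(list(c)) for c in zip(*rows)] # then each column
    return [list(r) for r in zip(*cols)]            # back to row-major order

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
# Top-left quadrant is LL, top-right HL, bottom-left LH, bottom-right HH.
for row in dwt2_level(img):
    print(row)
```

For this smooth test image the three detail bands come out all-zero; only the LL band carries information, which is exactly why it is the most important band.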

Inverse DWT of an image
Just as a forward transform is used to separate the image data into various classes of importance, a reverse transform is used to reassemble the various classes of data into a reconstructed image. A pair of high pass and low pass filters is used here also. This filter pair is called the synthesis filter pair. The filtering procedure is just the opposite: we start from the topmost level, apply the filters columnwise first and then rowwise, and proceed to the next level, till we reach the first level.

Quantization
Quantization refers to the process of approximating the continuous set of values in the image data with a finite set of values. The input to a quantizer is the original data, and the output is always one among a finite number of levels. The quantizer is a function whose set of output values is discrete and usually finite. Obviously, this is a process of approximation, and a good quantizer is one which represents the original signal with minimum loss or distortion. A quantizer can be specified by its input partitions and output levels. If the input range is divided into levels of equal spacing, the quantizer is termed a uniform quantizer; if not, it is termed a non-uniform quantizer. A uniform quantizer can be easily specified by its lower bound and step size.

Also, implementing a uniform quantizer is easier than a non-uniform quantizer. In a uniform quantizer with step size r, if the input falls between n*r and (n+1)*r, the quantizer outputs the symbol n.

FIGURE 3 UNIFORM QUANTIZER

Just the same way a quantizer partitions its input and outputs discrete levels, a dequantizer is one which receives the output levels of a quantizer and converts them back into normal data, by translating each level into a reproduction point in the actual range of data. The optimum quantizer (encoder) and optimum dequantizer (decoder) must satisfy the following conditions:
• Given the output levels or partitions of the encoder, the best decoder is one that puts the reproduction points x' on the centres of mass of the partitions. This is known as the centroid condition.
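A sketch of a uniform quantizer with step size r, and a dequantizer that places each reproduction point at the centre of its partition (the centre is the centroid for a flat input distribution):

```python
import math

def quantize(x, r):
    # Outputs the symbol n when x falls between n*r and (n+1)*r.
    return math.floor(x / r)

def dequantize(n, r):
    # Reproduction point at the centre of the partition [n*r, (n+1)*r).
    return (n + 0.5) * r

r = 2.0
for x in [0.3, 1.9, 5.4, -0.7]:
    n = quantize(x, r)
    x1 = dequantize(n, r)
    # The quantization error (x - x1) never exceeds half a step.
    assert abs(x - x1) <= r / 2
    print(x, "->", n, "->", x1)
```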

• Given the reproduction points of the decoder, the best encoder is one that puts the partition boundaries exactly in the middle of the reproduction points, i.e. each x is translated to its nearest reproduction point. This is known as the nearest neighbour condition.

The quantization error (x - x') is used as a measure of the optimality of the quantizer and dequantizer.

Entropy coding
Entropy means the amount of information present in the data, and an entropy coder encodes a given set of symbols with the minimum number of bits required to represent them. After the data has been quantized into a finite set of values, it can be encoded using an entropy coder to give additional compression. Two of the most popular entropy coding schemes are Huffman coding and arithmetic coding.
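A compact Huffman coder sketch (the standard greedy tree-merging algorithm; the example symbol string is made up for illustration):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from symbol frequencies."""
    counts = Counter(symbols)
    # Heap items: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

data = "aaaabbbcd"                 # frequent symbols get the shortest codes
codes = huffman_codes(data)
print(codes)
print(sum(len(codes[s]) for s in data), "bits vs", 2 * len(data), "for fixed 2-bit codes")
```

The frequent symbol 'a' receives a one-bit code while the rare 'c' and 'd' receive three-bit codes, which is how the average code length drops below a fixed-length code.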

CHIP PROVIDES WAVELET TRANSFORMS
Analog Devices has developed a family of general-purpose wavelet codec chips. The latest chip, the ADV601LC, claims to accommodate compression ratios from visually lossless to as great as 350-to-1. In wavelet-based compression processing, the silicon area needed for compression is the same as the area needed for decompression. In contrast, other compression techniques require more work and special circuitry to compress than to decompress a signal. The figure below shows the architecture of the chip: a digital component video I/O port; wavelet filters with decimator and interpolator; adaptive quantizer; run-length coder; Huffman coder; on-chip transform buffer; host I/O port and FIFO; and a DRAM manager with external DRAM.

FIGURE 4 ARCHITECTURE OF ADV601LC

In encode mode, the ADV601LC accepts component digital video through its video interface and delivers a compressed video stream through its host interface. In decode mode, the IC accepts a compressed bit stream through its host interface and delivers component digital video through its video interface. The ADV601LC compresses images by filtering the video into 42 separate frequency bands. The chip then optimizes each band to include only frequencies the naked eye can discern. Because the eye lacks sensitivity at high frequencies, there is no reason to compress and store this information.

ADVANTAGES
Wavelet video processing technology offers some enticing features:
1. The high image compression ratios reduce the hard disk storage capacity needed for real-time recording and for archival storage.
2. Because wavelet transforms compress the entire frame, wavelet processing has higher resolution than DCT-based JPEG and MPEG.
3. It facilitates efficient post-processing to further compress the already-compressed images for archival storage.
4. In magnification mode, images can be enlarged almost to infinity without the pixelation effects that accompany linear zooms.
5. The compressed video file cannot be edited; any change makes it impossible to decompress the image.
6. Wavelet processing captures every image and creates a mathematical map of the entire image, from which it can be determined whether the image has undergone alterations. This aspect is important for courtroom evidence.

APPLICATIONS
1. JPEG 2000 uses wavelet transforms to compress images.
2. MPEG-4 uses wavelet tiling to allow the division of images into several tiles, each with separate encoding.
3. Kallix Corp. uses wavelet technology in its video surveillance systems.

CONCLUSION
Wavelet-based coding provides substantial improvement in picture quality at low bit rates because of the overlapping basis functions and the better energy compaction property of wavelet transforms. Because of their inherent multiresolution nature, wavelet-based codecs facilitate progressive transmission of images, thereby allowing variable bit rates. The JPEG-2000 standard incorporates wavelet technology. Image coding based on models of human perception, scalability, robustness, error resilience, joint source-channel coding, and complexity are a few of the many outstanding challenges in image coding still to be fully resolved, and they may affect image data compression performance in the years to come. Interesting issues like obtaining accurate models of images, optimal representations of such models, and rapidly computing such optimal representations are the grand challenges facing the data compression community.

BIBLIOGRAPHY
1. Raghuveer M. Rao and Ajit S. Bopardikar, "Wavelet Transforms: Introduction to Theory and Applications", Pearson Education Asia.
2. Jaideva C. Goswami and Andrew K. Chan, "Fundamentals of Wavelets: Theory, Algorithms and Applications", Wiley-Interscience Publication.
3. Y. T. Chan, "Wavelet Basics", Kluwer Academic Publishers.
4. Bill Travis, "Wavelets both implode and explode images", EDN, December 2000.
5. http://engineering.rowan.edu/~polikar/WAVELETS/WTtutorial.html

ABSTRACT
The biggest obstacle to the multimedia revolution is digital obesity. This is the bloat that occurs when pictures, sound and video are converted from their natural analog form into computer language for manipulation or transmission. Compression lowers the cost of storage and transmission by packing data into a smaller space. In the present explosion of high-quality data, the need to compress it with less distortion is the need of the hour. One of the hottest areas of advanced compression is wavelet compression. Wavelet video processing technology offers some alluring features, including high compression ratios and eye-pleasing enlargements.

CONTENTS
1. INTRODUCTION
2. IMAGE COMPRESSION
3. IMAGE COMPRESSION SYSTEM
   • Steps in compression
   • Bit allocation
   • Classifying image data
   • DWT of an image
   • Inverse DWT of an image
   • Quantization
   • Entropy coding
4. CHIP PROVIDES WAVELET TRANSFORMS
5. ADVANTAGES
6. APPLICATIONS
7. CONCLUSION
8. BIBLIOGRAPHY
