G. PULLAREDDY ENGINEERING COLLEGE, KURNOOL
B. Sri Krishna Acharya, III ECE, Srikrishna3118@gmail.com
J. K. Sashank, III ECE, sashankjujaru@gmail.com

Presenting: IMAGE COMPRESSION USING WAVELETS
(An application of digital image processing)

At

JNTU KAKINADA, Vizianagaram

Abstract: The proliferation of digital technology has not only accelerated the pace of development of many image processing software packages and multimedia applications, but has also motivated the need for better compression algorithms. Uncompressed multimedia (graphics, audio and video) data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds, and digital communication system performance, demand for data storage capacity and data-transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive, multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images, but has made compression of such signals central to storage and communication technology. In this paper we present a brief overview of basic image compression techniques, wavelet coding, and some useful algorithms.

Contents:
• Introduction
• Definition
• Basic principles
• Classification
• Block diagram
• JPEG Standard
• Multiresolution and wavelets
• Advanced wavelet coding schemes
• Advantages
• Disadvantages
• Conclusion

Definition: Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.

Introduction: Often the signals we wish to process are in the time domain, but in order to process them more easily other information, such as frequency, is required. Mathematical transforms translate the information of signals into different representations. For example, the Fourier transform converts a signal between the time and frequency domains, so that the frequencies present in a signal can be seen. However, the Fourier transform cannot provide information on which frequencies occur at specific times in the signal, as time and frequency are viewed independently. Heisenberg's Uncertainty Principle states that as the resolution of a signal improves in the time domain, its frequency resolution gets worse. Ideally, a method of multiresolution is needed, which allows certain parts of the signal to be resolved well in time and other parts to be resolved well in frequency. The power and magic of wavelet analysis is exactly this multiresolution, obtained by zooming in on different sections of the signal.

Basic principles of image compression: A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task then is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, two types of redundancy can be identified in an image:
• Spatial redundancy, or correlation between neighboring pixel values (a small numeric illustration follows below).
• Spectral redundancy, or correlation between different color planes or spectral bands.
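As a rough illustration of spatial redundancy (not part of the original paper), the following Python sketch measures the correlation between horizontally adjacent pixels. A smooth synthetic ramp stands in for a natural image; any real grayscale image loaded as a 2-D array could be used instead.

import numpy as np

# Synthetic stand-in for a natural image: a smooth ramp plus mild noise.
rows, cols = 256, 256
image = np.linspace(0, 255, cols)[None, :] + np.random.normal(0, 5, (rows, cols))

# Correlation between each pixel and its right-hand neighbour.
left = image[:, :-1].ravel()
right = image[:, 1:].ravel()
corr = np.corrcoef(left, right)[0, 1]
print(f"adjacent-pixel correlation: {corr:.3f}")  # close to 1 -> high spatial redundancy

Values close to 1 indicate exactly the redundancy that decorrelating transforms such as the DCT or DWT try to remove.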

Classification: Two ways of classifying compression techniques are mentioned here.

(A) Lossless vs. lossy compression: In lossless compression schemes, the reconstructed image, after compression, is numerically identical to the original image. However, lossless compression can only achieve a modest amount of compression. An image reconstructed following lossy compression contains degradation relative to the original; often this is because the compression scheme completely discards redundant information. However, lossy schemes are capable of achieving much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless).

(B) Predictive vs. transform coding: In predictive coding, information already sent or available is used to predict future values, and the difference is coded. Since this is done in the image or spatial domain, it is relatively simple to implement and is readily adapted to local image characteristics. Differential Pulse Code Modulation (DPCM) is one particular example of predictive coding. Transform coding, on the other hand, first transforms the image from its spatial-domain representation to a different type of representation using some well-known transform, and then codes the transformed values (coefficients). This method provides greater data compression compared to predictive methods, although at the expense of greater computation.

Block diagram: A typical image compression system consists of three closely connected components, namely (a) source encoder, (b) quantizer, and (c) entropy encoder. Compression is accomplished by applying a linear transform to decorrelate the image data, quantizing the resulting transform coefficients, and entropy coding the quantized values.

Source encoder (or linear transformer): Over the years, a variety of linear transforms have been developed, which include the Discrete Fourier Transform (DFT), the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and many more, each with its own advantages and disadvantages.
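To make the predictive-coding idea concrete, here is a minimal 1-D DPCM sketch (our own illustration, not taken from the paper): each sample is predicted from the previously reconstructed one, and only the quantized prediction error is transmitted. The step size of 4 is an arbitrary choice.

import numpy as np

def dpcm_encode(samples, step=4):
    """Toy 1-D DPCM: predict each sample from the previous reconstructed
    sample and transmit the quantized prediction error."""
    residuals = []
    prediction = 0
    for x in samples:
        error = x - prediction
        q = int(round(error / step))        # quantized prediction error
        residuals.append(q)
        prediction = prediction + q * step  # decoder-side reconstruction
    return residuals

def dpcm_decode(residuals, step=4):
    samples, prediction = [], 0
    for q in residuals:
        prediction = prediction + q * step
        samples.append(prediction)
    return samples

row = [100, 102, 104, 103, 101, 120, 119, 118]  # one made-up image row
enc = dpcm_encode(row)
print(enc)               # small residuals -> cheaper to entropy code
print(dpcm_decode(enc))  # approximation of the original row

Because neighboring pixels are similar, the residuals are small and cluster around zero, which is what makes them inexpensive to entropy code.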

Quantizer: A quantizer simply reduces the number of bits needed to store the transformed coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process and is the main source of compression in an encoder.

Entropy encoder: An entropy encoder further compresses the quantized values losslessly to give better overall compression. It uses a model to accurately determine the probabilities for each quantized value and produces an appropriate code based on these probabilities, so that the resultant output code stream is smaller than the input stream. The most commonly used entropy encoders are the Huffman encoder and the arithmetic encoder, although for applications requiring fast execution, simple run-length encoding (RLE) has proven very effective. It is important to note that a properly designed quantizer and entropy encoder are absolutely necessary, along with an optimum signal transformation, to get the best possible compression.

JPEG: For still image compression, the `Joint Photographic Experts Group' or JPEG standard has been established by ISO. JPEG established the first international standard for still image compression, in which the encoders and decoders are DCT-based. The JPEG standard specifies three modes, namely sequential, progressive, and hierarchical, for lossy encoding, and one mode of lossless encoding. The DCT-based encoder can be thought of as essentially compressing a stream of 8x8 blocks of image samples: each 8x8 block makes its way through each processing step and yields output in compressed form into the data stream.

Fig. 1(a) JPEG encoder block diagram, and 1(b) JPEG decoder block diagram.

Despite all the advantages of JPEG compression schemes based on the DCT, namely simplicity, satisfactory performance, and availability of special-purpose hardware for implementation, these schemes are not without their shortcomings.
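The following sketch (an illustrative simplification, not the full JPEG pipeline) shows the core of a DCT-based encoder on a single 8x8 block: level shift, 2-D DCT, uniform quantization, and reconstruction. Real JPEG uses an 8x8 perceptual quantization table and entropy coding of the quantized coefficients; the single step size of 16 here is an arbitrary assumption.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT of an 8x8 block with orthonormal scaling
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def code_block(block, step=16):
    """Transform, uniformly quantize, then dequantize and inverse
    transform one 8x8 block."""
    coeffs = dct2(block - 128.0)            # level shift as in JPEG
    quantized = np.round(coeffs / step)     # many coefficients become zero
    return idct2(quantized * step) + 128.0  # reconstructed block

block = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in data
print(np.abs(block - code_block(block)).max())            # reconstruction error

The coarser the quantization step, the more coefficients are driven to zero and the larger the reconstruction error, which is the lossy trade-off described above.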

Since the input image needs to be ``blocked,'' correlation across the block boundaries is not eliminated. This results in noticeable and annoying ``blocking artifacts,'' particularly at low bit rates. The Lapped Orthogonal Transform (LOT) attempts to solve this problem by using smoothly overlapping blocks. Although blocking effects are reduced in LOT-compressed images, the increased computational complexity of such algorithms does not justify a wide replacement of the DCT by the LOT.

Fig. 2(a) Original Lena image, and (b) Lena reconstructed with the DC component only, to show the blocking artifact.

Over the past several years, the wavelet transform has gained widespread acceptance in signal processing in general, and in image compression research in particular. In many applications, wavelet-based schemes (also referred to as subband coding) outperform other coding schemes such as those based on the DCT.

Multiresolution and wavelets: Wavelet analysis can be used to divide the information of an image into approximation and detail subsignals.

The approximation subsignal shows the general trend of pixel values, while the three detail subsignals show the vertical, horizontal and diagonal details, or changes, in the image. If these details are very small, they can be set to zero without significantly changing the image. The value below which details are considered small enough to be set to zero is known as the threshold. The greater the number of zeros, the greater the compression that can be achieved. The amount of information retained by an image after compression and decompression is known as the "energy retained," and this is proportional to the sum of the squares of the pixel values. If the energy retained is 100% then the compression is known as "lossless," as the image can be reconstructed exactly; this occurs when the threshold value is set to zero, meaning that the detail has not been changed. If any values are changed then energy is lost, and this is known as lossy compression. Ideally, during compression the number of zeros and the energy retention will both be as high as possible. However, as more zeros are obtained, more energy is lost, so a balance between the two needs to be found.

The power of wavelets comes from the use of multiresolution. Rather than examining an entire signal through the same window, different parts of the signal are viewed through windows of different sizes (or resolutions). High-frequency parts of the signal use a small window to give good time resolution, while low-frequency parts use a big window to get good frequency information. In Fourier analysis a signal is broken up into sine and cosine waves of different frequencies; it effectively re-writes the signal in terms of different sine and cosine waves. Wavelet analysis does a similar thing: it takes a "mother wavelet," and the signal is then translated into shifted and scaled versions of this mother wavelet. An important point is that in wavelet analysis the "windows" have equal area, even though their height and width may vary. The area of the window is controlled by Heisenberg's Uncertainty Principle: as the frequency resolution gets bigger, the time resolution must get smaller. Wavelet coding schemes are especially suitable for applications where scalability and tolerable degradation are important.

Subband coding: The fundamental concept behind Subband Coding (SBC) is to split up the frequency band of a signal (an image in our case) and then to code each subband using a coder and bit rate accurately matched to the statistics of that band.
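A minimal sketch of the approximation/detail split and thresholding described above, using the PyWavelets (pywt) library; the threshold value of 0.1 and the random stand-in image are arbitrary assumptions for illustration only.

import numpy as np
import pywt

image = np.random.rand(256, 256)  # stand-in for a grayscale image

# One level of 2-D DWT: approximation plus horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

# Hard-threshold the detail subsignals (threshold chosen arbitrarily here).
threshold = 0.1
cH, cV, cD = (pywt.threshold(d, threshold, mode='hard') for d in (cH, cV, cD))

reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')

# "Energy retained" as described above: ratio of sums of squared values.
energy_retained = np.sum(reconstructed**2) / np.sum(image**2)
zeros = sum(np.count_nonzero(d == 0) for d in (cH, cV, cD))
print(f"energy retained: {100*energy_retained:.2f}%, zeroed detail coefficients: {zeros}")

Raising the threshold produces more zeros (better compression) but retains less energy, which is exactly the balance discussed above.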

SBC has been used extensively, first in speech coding and later in image coding, because of its inherent advantages, namely variable bit assignment among the subbands as well as coding error confinement within the subbands. At the decoder, the subband signals are decoded, upsampled, passed through a bank of synthesis filters, and properly summed up to yield the reconstructed image. The process can be iterated to obtain higher-band decomposition filter trees.

Fig. 3(a) Separable 4-subband filter bank, and 3(b) Partition of the frequency domain.

Such filters must meet additional and often conflicting requirements. These include a short impulse response of the analysis filters, to preserve the localization of image features and to allow fast computation; a short impulse response of the synthesis filters, to prevent spreading of artifacts (ringing around edges) resulting from quantization errors; and linear phase of both types of filters, since nonlinear phase introduces unpleasant waveform distortions around edges. Orthogonality is another useful requirement, since orthogonal filters, in addition to preserving energy, implement a unitary transform between the input and the subbands. But, as in the 1-D case, in two-band Finite Impulse Response (FIR) systems linear phase and orthogonality are mutually exclusive, and so orthogonality is sacrificed to achieve linear phase.

From subband to wavelet coding: Over the years, there have been many efforts leading to improved and efficient designs of filterbanks and subband coding techniques. Since 1990, methods very similar and closely related to subband coding have been proposed by various researchers under the name of Wavelet Coding (WC), using filters specifically designed for this purpose.
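To make the analysis/synthesis structure concrete, here is a minimal 1-D two-band sketch of our own (not from the paper) using the orthonormal Haar pair. Practical subband image coders use longer, carefully designed linear-phase filters, but the split-downsample / upsample-combine pattern is the same.

import numpy as np

def haar_analysis(x):
    """Split a 1-D signal (even length) into a low-pass (average) and a
    high-pass (difference) subband, each at half the sample rate."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def haar_synthesis(low, high):
    """Upsample the two subbands and combine them to reconstruct the
    original signal exactly (perfect reconstruction)."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
low, high = haar_analysis(signal)
print(np.allclose(haar_synthesis(low, high), signal))  # True: exact reconstruction

In a coder, each subband would be quantized and entropy coded with a bit rate matched to its statistics before synthesis, as the text describes.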

Link between the wavelet transform and filterbanks: Construction of orthonormal families of wavelet basis functions can be carried out in continuous time. However, the same can also be derived by starting from discrete-time filters. Daubechies was the first to discover that discrete-time filters, or QMFs, can be iterated and, under certain regularity conditions, will lead to continuous-time wavelets. This is a very practical and extremely useful wavelet decomposition scheme, since FIR discrete-time filters can be used to implement it. It follows that the orthonormal wavelet bases correspond to a subband coding scheme with the exact reconstruction property, using the same FIR filters for reconstruction as for coding. So, the subband coding developed earlier is in fact a form of wavelet coding in disguise. Wavelets did not gain popularity in image coding until Daubechies established this link in the late 1980s. Later, a systematic way of constructing a family of compactly supported biorthogonal wavelets was developed by Cohen, Daubechies, and Feauveau (CDF).

An example of wavelet decomposition: There are several ways wavelet transforms can decompose a signal into various subbands. These include uniform decomposition, octave-band decomposition, and adaptive wavelet-packet decomposition. Of these, octave-band decomposition is the most widely used. This is a non-uniform band-splitting method that decomposes the lower-frequency part into narrower bands, while the high-pass output at each level is left without any further decomposition.

Fig. 4(a) Three-level octave-band decomposition of the Lena image, and (b) Spectral decomposition and ordering.
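A short sketch of a three-level octave-band decomposition using PyWavelets (our illustration; the random array stands in for an actual test image). The Haar wavelet is used for simplicity; the CDF biorthogonal family mentioned above is also available in the library (e.g. 'bior2.2').

import numpy as np
import pywt

image = np.random.rand(256, 256)  # stand-in for a grayscale test image

# Three-level octave-band decomposition: only the approximation band
# is split again at each level, as described above.
coeffs = pywt.wavedec2(image, 'haar', level=3)

print("coarsest approximation:", coeffs[0].shape)
for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"detail set {i} (coarsest to finest):", cH.shape)

# The untouched coefficients reconstruct the image exactly.
print(np.allclose(pywt.waverec2(coeffs, 'haar'), image))

The printed shapes shrink by a factor of two per level, mirroring the non-uniform band splitting of Fig. 4.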

Advanced wavelet coding schemes

A. Recent developments in subband and wavelet coding: The interplay between the three components of any image coder cannot be overemphasized, since a properly designed quantizer and entropy encoder are absolutely necessary, along with an optimum signal transformation, to get the best possible compression. Many enhancements have been made to the standard quantizers and encoders to take advantage of how the wavelet transform works on an image, the properties of the HVS, and the statistics of the transformed coefficients. A number of more sophisticated variations of the standard entropy encoders have also been developed. Over the past few years, a variety of novel and sophisticated wavelet-based image coding schemes have been developed. These include EZW, SPIHT, SFQ, CREW, EPWIC, EBCOT, SR, Second Generation Image Coding, Image Coding using Wavelet Packets, Wavelet Image Coding using VQ, and Lossless Image Compression using Integer Lifting. We will briefly discuss a few of these interesting algorithms here.

1. Embedded Zerotree Wavelet (EZW) compression: In an octave-band wavelet decomposition, each coefficient in the high-pass bands of the wavelet transform has four coefficients corresponding to its spatial position in the octave band above it in frequency. Because of this very structure of the decomposition, a smarter way of encoding the coefficients was needed to achieve better compression results. In 1993 Shapiro called this structure a zerotree of wavelet coefficients, and presented his elegant algorithm for entropy encoding, called the Embedded Zerotree Wavelet (EZW) algorithm. The zerotree is based on the hypothesis that if a wavelet coefficient at a coarse scale is insignificant with respect to a given threshold T, then all wavelet coefficients of the same orientation at the same spatial location at finer scales are likely to be insignificant with respect to T. The idea is to define a tree of zero symbols which starts at a root that is itself zero and is labeled as end-of-block. Since the tree grows as powers of four, many insignificant coefficients at higher-frequency subbands (finer resolutions) can be discarded. The EZW algorithm encodes the tree structure so obtained. This results in bits that are generated in order of importance, yielding a fully embedded code. The main advantage of this encoding is that the encoder can terminate the encoding at any point, thereby allowing a target bit rate to be met exactly.
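The parent-child relation and the significance test at the heart of zerotree coding can be sketched as follows (a toy two-level illustration of ours, not Shapiro's full EZW coder; the coefficient values and threshold are made up).

import numpy as np

def children(row, col):
    """For a coefficient at (row, col) in one subband, return the four
    spatially corresponding coefficients in the next finer subband of
    the same orientation (the 2x2 block at doubled coordinates)."""
    return [(2*row, 2*col), (2*row, 2*col + 1),
            (2*row + 1, 2*col), (2*row + 1, 2*col + 1)]

def is_zerotree_root(coarse, fine, row, col, threshold):
    """In this two-level toy, a coefficient is a zerotree root if it and
    all four of its children are insignificant w.r.t. the threshold."""
    if abs(coarse[row, col]) >= threshold:
        return False
    return all(abs(fine[r, c]) < threshold for r, c in children(row, col))

coarse = np.array([[3.0, -1.0], [0.5, 20.0]])    # made-up coarse detail band
fine = np.random.uniform(-2, 2, (4, 4))          # made-up finer detail band
print(is_zerotree_root(coarse, fine, 0, 1, threshold=8))  # True for this example

A single zerotree symbol then stands in for the root and all of its descendants, which is why so many insignificant fine-scale coefficients can be discarded at once.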

Similarly, the decoder can stop decoding at any point, resulting in the image that would have been produced at the rate of the truncated bit stream. Partial ordering of the transformed coefficients by magnitude with a set-partitioning sorting algorithm, ordered bit-plane transmission of refinement bits, and exploitation of the self-similarity of the image wavelet transform across different scales are the three key concepts in EZW. Since its inception in 1993, many enhancements have been made to make the EZW algorithm more robust and efficient. One very popular and improved variation of the EZW is the SPIHT algorithm.

2. Set Partitioning in Hierarchical Trees (SPIHT) algorithm: SPIHT offered an alternative explanation of the principles of operation of the EZW algorithm, to better understand the reasons for its excellent performance. In addition, its authors offer a new and more effective implementation of the modified EZW algorithm based on set partitioning in hierarchical trees, and call it the SPIHT algorithm. They also present a scheme for progressive transmission of the coefficient values that incorporates the concepts of ordering the coefficients by magnitude and transmitting the most significant bits first. An efficient way to code the ordering information is also proposed. They use a uniform scalar quantizer and claim that the ordering information makes this simple quantization method more efficient than expected. According to them, results from the SPIHT coding algorithm in most cases surpass those obtained from the EZW algorithm. The algorithm produces excellent results without any pre-stored tables or codebooks, training, or prior knowledge of the image source.

Fig. 5(a) Structure of zerotrees, and 5(b) Scanning order of subbands for encoding.
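The idea of ordering coefficients by magnitude and sending the most significant bits first can be sketched in a few lines (a toy illustration of ours, not the SPIHT set-partitioning procedure itself; the coefficient values and the number of bit planes are arbitrary).

import numpy as np

def bitplane_stream(coefficients, num_planes=4):
    """Toy embedded coding: sort coefficients by magnitude, then emit
    one bit plane at a time, most significant plane first. Truncating
    the stream early still yields a coarse approximation."""
    coeffs = np.asarray(coefficients)
    order = np.argsort(-np.abs(coeffs))          # largest magnitudes first
    magnitudes = np.abs(coeffs[order]).astype(int)
    for plane in range(num_planes - 1, -1, -1):  # MSB plane down to LSB plane
        bits = (magnitudes >> plane) & 1
        yield plane, bits.tolist()

coeffs = [13, -3, 7, 0, 1, -9]
for plane, bits in bitplane_stream(coeffs):
    print(f"bit plane {plane}: {bits}")

Stopping after any plane gives the best approximation available at that bit budget, which is the embedded property shared by EZW and SPIHT.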

Advantages of DWT-based coding:
• There is no need to divide the input image into non-overlapping 2-D blocks, so higher compression ratios are possible and blocking artifacts are avoided.
• It allows good localization in both the time (spatial) and frequency domains.
• Transformation of the whole image introduces inherent scaling.
• Better identification of which data is relevant to human perception, and hence a higher compression ratio.
• Higher flexibility: the wavelet function can be freely chosen.

Disadvantages of DWT-based coding:
• The cost of computing the DWT may be higher than that of the DCT.
• The use of larger DWT basis functions or wavelet filters produces blurring and ringing noise near edge regions in images or video frames.
• Longer compression time.
• Lower quality than JPEG at low compression rates.

Conclusion: Wavelet analysis is very powerful and extremely useful for compressing data such as images. Its power comes from its multiresolution nature. Although other transforms have been used, for example the DCT used in the JPEG format, wavelet analysis can be seen to be far superior, in that it achieves higher compression ratios and does not create blocking artifacts. Because of their inherent multiresolution nature, wavelet-based coders also facilitate progressive transmission of images, thereby allowing variable bit rates. The upcoming JPEG-2000 standard will incorporate much of this research work and will address many important aspects of image coding for the next millennium. However, current data compression methods might still be far away from the ultimate limits imposed by the underlying structure of specific data sources such as images. Interesting issues such as obtaining accurate models of images, finding optimal representations of such models, and rapidly computing such optimal representations are the "Grand Challenges" facing the data compression community. The interaction of harmonic analysis with data compression, joint source-channel coding, image coding based on models of human perception, scalability, robustness, error resilience, and complexity are a few of the many outstanding challenges in image coding still to be fully resolved, and they may affect image data compression performance in the years to come.
