

Wavelets are mathematical functions that divide data into different frequency components and then study each component with a resolution matched to its scale. The wavelet analysis procedure is to adopt a wavelet prototype function, called an analyzing wavelet or mother wavelet. Wavelets have advantages over traditional Fourier methods in analyzing physical situations where the signals contain discontinuities and sharp spikes. Whereas Fourier analysis consists of breaking up a signal into sine waves of various frequencies, wavelet analysis breaks up a signal into shifted and scaled versions of the original wavelet. An important property of a wavelet basis is that it provides multi-resolution analysis, which makes it possible to analyze any signal without sacrificing either time or frequency resolution. Wavelet transforms have received significant attention recently due to their suitability for a number of important signal and image processing tasks, such as image compression and denoising. Wavelet-based methods became the state-of-the-art technique for image compression after Embedded Zerotree Wavelet (EZW) image coding was developed by J. M. Shapiro in 1993. Image compression techniques, especially non-reversible or lossy ones, have been known to grow computationally more complex as they grow more efficient. EZW, however, is not only competitive in performance with the most complex techniques, but also extremely fast in execution. The embedded zerotree algorithm has proven to be a simple algorithm that requires no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. It has the property of generating the bits of the bit stream in order of importance, thus yielding a fully embedded code. A significant feature of the EZW algorithm is that the encoding can be terminated at any point, making any target bit rate achievable.
Even in the process of decoding, the decoder can truncate the bit stream at any point and still produce the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. Until now, EZW has been used only for image compression. In this thesis, the application of the EZW algorithm to both image compression and image denoising is explained and programmed with an example using MATLAB. A noised image is decomposed into a set of wavelet coefficients. Initially, soft thresholding is performed on the wavelet coefficients using a certain threshold, which varies with the signal-to-noise ratio of the image. The threshold is chosen so that the reconstructed (denoised) image achieves an acceptable PSNR. The EZW algorithm is then applied to these coefficients, which are coded in several passes by measuring them against a particular threshold in each pass. The image is finally coded into a set of code lists, a dominant and a subordinate code list for every pass, representing the corresponding significant coefficient values. The reduction in bits due to the EZW algorithm is also explained.


Introduction

A wavelet literally means a small wave or a ripple. In mathematical terms it is defined as a function which breaks the given data into small frequency components, where each frequency component can be analyzed separately. Wavelets find their importance in mathematics, medicine, quantum physics, biology (cell membrane recognition), metallurgy (characterizing rough surfaces), internet traffic description and, in particular, electrical engineering. They are also used in many other applications such as digital signal processing, image compression, image denoising, fingerprint recognition, image encoding, etc. Wavelets are a relatively new signal processing method, introduced to overcome the shortcomings of the Fourier transform, which cannot be applied to non-stationary signals. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelets are able to separate data into different frequency components and study each component with a resolution matched to its scale. In wavelet transforms a signal is represented by a sum of scaled and shifted wavelets. Narrow wavelets are comparable to high frequency sinusoids and wide wavelets are comparable to low frequency sinusoids. All wavelets are derived from a single mother wavelet.

Different types of Wavelets

The first step in using wavelet analysis for any kind of application is to decide on a type of mother wavelet. The original signal can then be represented as translations and dilations of the mother wavelet. Many families of mother wavelets exist, and each has its own strengths and drawbacks. The different families make different trade-offs between how compactly they are localized in space and how smooth they are. Hence, the choice of wavelet usually depends on the particular application.


Haar Wavelet: This is the simplest of all wavelets. The Haar wavelet is discontinuous and resembles a step function, as shown in the figure.

Fig1.1: Haar Wavelet

The main advantage of the Haar wavelet is that it is orthogonal and has the symmetry property.

Daubechies Wavelet

Daubechies wavelets are compactly supported orthogonal wavelets, with implementations for both the continuous and the discrete wavelet transform. The Daubechies family of wavelets is written 'dbN', where 'db' stands for Daubechies and 'N' is the number of vanishing moments. The db1 wavelet is the same as the Haar wavelet.




Fig.1.2: Daubechies Wavelet

Biorthogonal Wavelets

These wavelets exhibit orthogonality and symmetry. They are widely used in various signal and image processing applications that require perfect reconstruction properties. Most importantly, they exhibit the property of linear phase, which is needed for signal and image reconstruction.

Fig.1.3: Biorthogonal Wavelets (a) bior2.2 (b) bior3.9

CHAPTER 2
WAVELET TRANSFORM

The application of wavelets to a particular signal involves multiplying different segments of the signal with a wavelet function, which generates the Continuous Wavelet Transform (CWT). The CWT cannot be practically computed using a digital computer, so it is necessary to discretize the transform. This generates the Discrete Wavelet Transform (DWT), obtained by sampling the continuous wavelet transform at a frequency of twice its highest frequency (the Nyquist criterion).

Discrete Wavelet Transform

In the DWT, to analyze a signal at different scales, the signal is passed through a series of high pass and low pass filters. To analyze the high and low frequencies, filters of different cutoff frequencies are used. The resolution of the signal is changed by the filtering operations, and the scale is changed by upsampling and downsampling. Downsampling a signal corresponds to reducing the sampling rate, i.e. removing some of the samples of the signal; downsampling by a factor of 'n' reduces the number of samples in the signal 'n' times. Upsampling a signal corresponds to increasing the sampling rate by adding new samples to the signal; upsampling by a factor of 'n' increases the number of samples in the signal by a factor of 'n'.

Fig2.1: (a) Original sequence (b) Upsampled sequence
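The up- and downsampling operations described above can be sketched in a few lines. This is illustrative Python rather than the MATLAB used for the thesis implementation:

```python
def downsample(x, n=2):
    """Keep every n-th sample: reduces the number of samples n times."""
    return x[::n]

def upsample(x, n=2):
    """Insert n-1 zeros after every sample: increases the sample count n times."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0] * (n - 1))
    return y
```

For n = 2 these are exactly the operations used between the filtering stages of the DWT analysis and synthesis filter banks.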

The decomposition of the signal into different frequency bands is obtained by successive high pass and low pass filtering of the time domain signal. These filters are called analysis filters. The high pass and low pass filter outputs after the first stage each contain half the number of samples; these constitute the DWT coefficients of level one. At every level, filtering and downsampling halve the number of samples. Successive decompositions are carried out until there are 2 DWT coefficients left. The DWT of the original signal is obtained by concatenating all the coefficients, starting from the last level of the decomposition to the first. The above procedure is known as subband coding. The reverse of this procedure reconstructs the signal: upsampling followed by low pass and high pass filters (called synthesis filters), which is referred to as interpolation of the signal.

Fig2.2: Four band subband encoding
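The analysis filter bank described above can be sketched with the Haar filter pair (an illustrative assumption; any filters satisfying the multiresolution requirements would do), in Python:

```python
import math

def haar_analysis(x):
    """One level of subband coding with the orthonormal Haar filter pair:
    the low pass output is the approximation, the high pass output the
    detail; each has half the number of samples of the input."""
    s = math.sqrt(2.0)
    low = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    high = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return low, high

def dwt(x, levels):
    """Repeat the analysis on the low band and concatenate the
    coefficients from the last level of decomposition to the first."""
    coeffs = []
    for _ in range(levels):
        x, d = haar_analysis(x)
        coeffs.insert(0, d)          # details, coarsest level first
    coeffs.insert(0, x)              # final approximation in front
    return coeffs
```

A constant signal, for example, ends up with all of its energy in the final approximation and zeros in every detail band.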

In the case of an image, the filter is applied along the rows and then along the columns; the operation thus results in four bands: low-low, low-high, high-low and high-high. The low frequency part of the image carries most of the information, and the high frequency part adds intricate details to the image. The low-low frequency part can be processed further, so that it is again subdivided into four bands.

Fig2.3: 2-D Wavelet decomposition for the Image
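The row-then-column filtering that produces the four bands can be sketched as follows, again assuming the Haar filter pair for simplicity:

```python
import math

def haar_pair(x):
    """Haar low pass and high pass outputs of a 1-D sequence."""
    s = math.sqrt(2.0)
    return ([(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)],
            [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)])

def dwt2_one_level(img):
    """Filter the rows, then the columns, giving the bands LL, LH, HL, HH."""
    lo_rows, hi_rows = [], []
    for row in img:                      # filter along the rows
        lo, hi = haar_pair(row)
        lo_rows.append(lo)
        hi_rows.append(hi)

    def filter_columns(mat):             # filter along the columns
        t = list(map(list, zip(*mat)))   # transpose
        lo, hi = [], []
        for col in t:
            l, h = haar_pair(col)
            lo.append(l)
            hi.append(h)
        return list(map(list, zip(*lo))), list(map(list, zip(*hi)))

    LL, LH = filter_columns(lo_rows)
    HL, HH = filter_columns(hi_rows)
    return LL, LH, HL, HH
```

For a flat image all the detail bands come out as zeros, illustrating that LL carries most of the information.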

CHAPTER 3
IMAGE COMPRESSION AND DENOISING

Introduction

Before going into the details of image compression and denoising, one should have a clear idea about images. All digital images are two dimensional signals. They specify a color value for every pixel within their space. An image's storage requirements are a function of the image size and the color depth. Color depth describes the number of different colors available to an image and is measured in bits: m-bit color means 2^m colors are available. Most commonly used is 8-bit color, which offers 256 different colors. Any 8-bit image is stored with color values ranging from 0 to 255; the mapping of these values to RGB values is stored in a color map. True color images have colors specified with red, green and blue values. That means there are 256 shades each of red, green and blue, which combine to give 2^24 different colors in the true color palette. The human eye is said to be able to differentiate between a maximum of about 2^24 colors.

The size of an image in bits is equal to the number of pixels multiplied by the color depth in bits. Consider a 512×512 image with 8-bit color depth:

Size = 512×512×8 bits = 256 kilobytes

This is the storage requirement for an image of average size and few colors. A true color version of the same image takes three times as much space, and images used in photogrammetry and aerial mapping often occupy more than 100 megabytes. All kinds of uncompressed images require a considerable amount of storage capacity and transmission bandwidth, so there is a need to reduce the size of the image: image compression is necessary.
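The storage arithmetic above can be checked with a short sketch:

```python
def image_bits(width, height, bit_depth):
    """Storage requirement in bits: number of pixels times color depth."""
    return width * height * bit_depth

# 512 x 512 image at 8 bits per pixel, converted bits -> bytes -> kilobytes:
size_kb = image_bits(512, 512, 8) // 8 // 1024

# A true color (24 bits per pixel) version takes three times as much space.
true_color_bits = image_bits(512, 512, 24)
```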

Image compression

Image compression is nothing but minimizing the size of a graphics file by reducing the number of bits needed to represent an image, without degrading the quality of the image to an unacceptable level. The reduction of file size allows more images to be stored in a given amount of memory space and also makes it faster for images to be sent over the internet or downloaded from web pages. For example, a high density diskette with a capacity of 1.4 MB can store about 14 pictures if the original picture is 100 kbyte. If the pictures can be compressed by 20 to 1 without any perceptual distortion, the capacity increases to about 280 pictures. So with a 20:1 compression ratio, the space, bandwidth and transmission time requirements can be reduced by a factor of 20 with acceptable quality.

All graphics files can be compressed in two ways: lossless and lossy. In a lossless compression scheme, the reconstructed image after compression is numerically identical to the original one. TIFF is an image format that can be compressed in a lossless way; it can achieve only a modest amount of compression, because it removes only the truly redundant information from the image. In a lossy compression scheme, the reconstructed image is degraded compared to the original. By using this scheme we can achieve much higher compression ratios and, most importantly, no visible loss is perceived, so we can say that it is visually lossless. JPEG is an image format that is based on this scheme.

The two fundamental components of compression are redundancy reduction, which aims at removing duplication from the image, and irrelevancy reduction, which omits parts of the signal that will not be noticed by the human visual system. A common characteristic of images is that neighboring pixels are correlated and therefore contain redundant information, so the first task is to find a less correlated representation of the image.

Before the evolution of wavelets, the Discrete Cosine Transform (DCT) was used extensively for image compression. The DCT can be regarded as a discrete time version of the Fourier cosine series. It is close to the Discrete Fourier Transform (DFT), which can be used to convert a signal into elementary frequency components, but unlike the DFT it is real valued and gives a better approximation of a signal with very few coefficients. Despite all the advantages of JPEG compression based on the DCT, such as simplicity, satisfactory performance and the availability of special purpose hardware for implementation, it lacks something: since the image needs to be "blocked" to begin the decomposition, correlation across the block boundaries cannot be eliminated. This results in noticeable blocking artifacts, particularly at lower bit rates, which can be annoying to the human eye. Lapped Orthogonal Transforms (LOT), which employ smoothly overlapping blocks, have been used to solve this problem. Although blocking effects are reduced in LOT-compressed images, the increased computational complexity of such algorithms does not justify replacing the DCT by the LOT.

Wavelets in Image Compression

Over the past several years, the wavelet transform has gained widespread acceptance in signal processing, and especially in image processing. The basis of the DWT can be composed of any functions (wavelets) that satisfy the requirements of multiresolution analysis; the choice of wavelet depends on the contents and resolution of the image. The DWT can be used efficiently in image coding applications because of its data reduction capabilities, and it has some properties which make it a better choice than the DCT for image compression. The DWT has higher decorrelation and energy compaction efficiency, so it can provide better image quality at higher compression ratios. It represents the image on different resolution levels, and the localization of wavelet functions, both in time and frequency, gives the DWT the potential for good representation of images with fewer coefficients, especially at higher resolutions. In a DWT-based system, the entire image is transformed and compressed as a single data object, rather than block by block as in a DCT-based system, allowing a uniform distribution of compression error across the entire image.

The discrete wavelet transform is identical to a hierarchical subband system. When the discrete wavelet transform is applied to an image, the image is divided into four subbands, each subband corresponding to the output of two consecutive filters; they can be named LL1, HL1, LH1 and HH1, as shown in the figure.

Fig.3.1: First stage of DWT of image

Each coefficient represents a spatial area corresponding to approximately a 2×2 area of the original picture. LL1 corresponds to the approximation of the image, while HL1, LH1 and HH1 correspond to its details. Subband LL1 contains the coarser wavelet coefficients, and hence more information of the image is contained in it: the lowest frequency subband (the approximation) contains a larger part of the information than the higher frequency subbands (the details). To obtain the next coarser scale of wavelet coefficients, the subband LL1 is further decomposed and critically sampled. Once again three subbands of details are generated, and the remaining lowest frequency subband (the approximation) contains the coarser information at this scale. In a two-scale wavelet decomposition, each coefficient in the subbands LL2, HL2, LH2 and HH2 represents a spatial area corresponding to approximately a 4×4 area of the original image, as shown in the figure, and HL1, LH1 and HH1 represent the finest scale wavelet coefficients. The process can be continued until the final scale or the desired scale is reached. For best reconstruction results, the decomposition is typically applied on an image from 3 to 5 scales.

Fig.3.2: Two scale wavelet decomposition of Image

Image Denoising

In the real world we often have disturbances on images taken from bodies or from space; phenomena like reflection create 'noise' on images. Under ideal conditions this noise might decrease to negligible levels, so that no denoising is required, but for further data analysis one needs image analysis to access several useful data. When one has noise on images, one has not got good data to work with and often cannot give the correct solution to a problem. For all practical purposes, we must remove the noise corrupting a signal in order to recover that signal. Noise suppression is a very delicate and difficult task: if one wants to delete the noise, there is every chance of erasing the information behind it. The challenge is to remove as much noise as possible without really losing important data. In medical images, a tradeoff between noise reduction and the preservation of the actual image features has to be made in such a way that only diagnostically irrelevant image content is removed.

In the early days, everyone worked with methods using first or second order statistics in the original signal domain. Later on, with the evolution of transforms, there was confusion about whether denoising should take place in the original signal domain or in a transform domain and, if in a transform domain, whether one should use the Fourier transform for the time-frequency domain or the wavelet transform for the time-scale domain. Many people have described the development of wavelet transforms as revolutionizing modern signal and image processing over the past two decades. After the evolution of wavelets, the method of wavelet thresholding has been used extensively for denoising images.

Wavelet shrinkage is a method of noise reduction that tries to remove the wavelet coefficients that correspond to noise. Since noise is uncorrelated and usually small with respect to the signal, it is assumed that the corresponding wavelet coefficients will be uncorrelated too and will probably be small as well. The idea is therefore to remove these small wavelet coefficients before reconstruction and so remove the noise. This method is not perfect, because some small wavelet coefficients result from parts of the signal of interest and cannot be distinguished from noisy coefficients; they will be thrown away, which is unwarranted, and the reconstructed image can look somewhat blurred. But if the threshold is well chosen, based on the statistical properties of the input signal and the required reconstruction quality, the result is quite remarkable.

There are two types of wavelet shrinkage. One is hard thresholding, in which one simply throws away all wavelet coefficients smaller than a certain threshold. The other is soft thresholding, where one subtracts a constant value from the magnitude of all wavelet coefficients and throws away all the coefficients that become smaller than zero (considering only positive values). This wavelet shrinkage is very similar to lossy EZW compression and decompression. The way EZW works, it throws out all the wavelet coefficients that do not meet the quality criteria one sets for reconstruction. These are the small coefficients: in lossy EZW, they are the coefficients that are coded at the end of the EZW stream and used to lift already decoded coefficients to their original level, adding extra detail.
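The two shrinkage rules described above can be sketched as follows (illustrative Python; the thesis implementation itself is in MATLAB):

```python
def hard_threshold(coeffs, t):
    """Hard thresholding: zero every coefficient whose magnitude is
    below the threshold, leave the rest untouched."""
    return [c if abs(c) >= t else 0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink every magnitude toward zero by t,
    dropping anything whose magnitude falls below zero."""
    def shrink(c):
        m = abs(c) - t
        return 0 if m <= 0 else m * (1 if c > 0 else -1)
    return [shrink(c) for c in coeffs]
```

Note the difference on surviving coefficients: hard thresholding keeps them unchanged, while soft thresholding also reduces their magnitude by the threshold, which is what makes it behave like the truncated refinement bits of a lossy EZW stream.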

CHAPTER 4
EZW ENCODING

Introduction

The EZW algorithm operates on the wavelet coefficients of an image and uses wavelet transforms, which explains the 'W' in EZW. A biorthogonal filter is used to determine the wavelet coefficients. Biorthogonal wavelets exhibit orthogonality and symmetry and are widely used in various signal and image processing applications that require perfect reconstruction properties. Most importantly, they exhibit the property of linear phase, which is needed for signal and image reconstruction.

The 'Z' in EZW stands for the concept of the zerotree, which is the basis for the entire compressed encoding process. Zerotree coding can efficiently encode a significance map of wavelet coefficients by predicting the absence of significant information across scales.

The EZW encoder is based on progressive encoding, compressing an image into a bit stream with increasing accuracy: as more bits are added to the stream, the decoded image contains more detail. This kind of encoding is known as embedded encoding, which explains the 'E' in EZW. The surprising part of embedded encoding is that terminating the encoding at any arbitrary point in the encoding process does not produce any artifacts.

The Zerotree

A wavelet transform transforms a signal from the time domain to the joint time-scale domain. This means that the wavelet coefficients are two dimensional: to compress the transformed signal, not only the coefficient values but also their positions in time have to be coded. In the process of the wavelet transform, sub-sampling of the coefficients is performed at every level, which develops a relationship between the wavelet coefficients across the different subbands. A coefficient in a low subband can be thought of as having four descendants in the next higher subband. The four descendants each also have four descendants in the next higher subband, and hence a quad-tree develops, as shown in the figure.

Fig4.1: Relation between wavelet coefficients in different subbands

A zerotree can be defined as a quad-tree of which all nodes are equal to or smaller than the root. The tree is coded with a single symbol during encoding and can be reconstructed by the decoder as a quad-tree filled with zeros.

Fig4.2: Quad-tree: every root with four leaves

Fig4.Scanning of coefficients We will follow the Morton Scan order for scanning of coefficients as depicted in the following picture.3: Morton Scan Order 18 .

Encoding Process

In order to encode a matrix filled with values, the simplest way would be to represent them directly in the form of bits, which would consume a large number of bits depending on the size of the values. Instead, when a threshold is set, the same matrix can be encoded just by specifying whether each value is greater or smaller than the threshold, using the symbols 'H' or 'L'. This considerably reduces the number of bits required to transmit the same matrix, and better results can be obtained by repeating the procedure with different threshold values. This can be explained with a simple example.

Fig4.4: Zerotree Encoding

In the above figure, a Morton scan of an unknown matrix and the corresponding codes obtained by measuring the values against a particular threshold are shown. In the normal process of encoding, the matrix would be encoded as 'HHLH HLLH LLLL HLHL'. With the application of the zerotree concept, as the 'L' in the upper left part is followed by four 'L's in the lower left part, these five codes are replaced by a single code 'T', and the code becomes 'HHTH HLLH HLHL'. The same logic is followed in the EZW encoding process, where the zerotree plays an additional significant role in further reducing the number of coefficients to be transmitted.

A coefficient at a coarse scale is called a parent, and all coefficients corresponding to the same spatial location at the next finer scale of similar orientation are called its children. For a given parent, the set of all coefficients at all finer scales of similar orientation are called its descendants.

Fig4.5: Parent child dependencies of subbands

As a first step in the process of encoding, a threshold value T0 is chosen. The threshold can be chosen according to bit plane coding as

T0 = 2^floor(log2(MAX(|c(x,y)|)))

where MAX(.) represents the maximum magnitude over the image and c(x,y) represents a coefficient. Coefficients are then compared against this threshold value in a predefined scan order; the scanning is performed in such a way that no child node is scanned before its parent. A code is then generated for every coefficient as follows:

p - positive, if the coefficient is larger than the threshold.
n - negative, if it is smaller than minus the threshold.
t - zerotree root, if it and all of its descendants are smaller in magnitude than the threshold.
z - isolated zero, if it is smaller in magnitude than the threshold but it is not the root of a zerotree (it has at least one significant descendant).

Every pass or scan of all wavelet coefficients has two sub-passes: a dominant pass and a subordinate pass.
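The initial threshold and the four symbols can be sketched as follows; here `descendants` is assumed to be a flat list of all descendant coefficient values of the node being coded (an illustrative representation, not the thesis's data structure):

```python
import math

def initial_threshold(coeffs):
    """T0 = 2**floor(log2(max |coefficient|)), as in bit plane coding."""
    m = max(abs(c) for row in coeffs for c in row)
    return 2 ** int(math.floor(math.log2(m)))

def classify(value, threshold, descendants):
    """EZW symbol for one coefficient: p, n, t (zerotree root) or z."""
    if value >= threshold:
        return 'p'
    if value <= -threshold:
        return 'n'
    if all(abs(d) < threshold for d in descendants):
        return 't'
    return 'z'
```

With the worked example used later in this chapter, a maximum magnitude of 63 gives T0 = 32, and a coefficient of magnitude 31 with a significant descendant of 47 is coded as an isolated zero.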

The dominant pass creates a significance map of all coefficients with respect to the current threshold. For the first pass, the significant coefficients lie in the range (T0, 2T0). The subordinate pass outputs a '1' if a coefficient lies in the upper half of the interval, (3T0/2, 2T0), and a '0' if it lies in the lower half, (T0, 3T0/2), as shown in the figure.

Fig4.6: First dominant and subordinate pass and coefficient quantization values

The process continues with the threshold decreased by a factor of 2 for the following passes:

Ti = Ti-1/2

In order to keep track of which coefficients are significant (already mapped) and which are not, EZW maintains two separate lists, the dominant list and the subordinate list. The dominant list contains the location information of all the coefficients that have not yet been found significant, ordered so that coefficients of the current subband appear in the dominant list before coefficients of the next subband. The subordinate list contains the location information of all the coefficients that have been found significant in previous passes. The dominant passes are made only on the coefficients of the dominant list, i.e. those not found significant in a previous pass; once a coefficient is found to be significant in a dominant pass, it is moved to the subordinate list and is not scanned in the following dominant passes.

The encoding can be stopped at any time when a target bit rate is reached, and the decoder can still reproduce the image at the reduced rate at which the encoding was stopped. If terminated at an arbitrary point, the bit stream cannot be decoded for the last symbol to a valid one, since that code word has been truncated. This makes the embedded code very useful in rate-constrained or distortion-constrained applications such as image transfer over the internet.

Every pass has two sub-passes: the dominant pass and the subordinate pass. The dominant pass contains the codes for all the positions, whereas the subordinate pass has the MSB corresponding only to the significant values of each pass. Values coded as 'p' and 'n' are said to be significant, as their absolute values are greater than the threshold. After being coded, their positions are replaced with zeroes in the original matrix, in order to prevent re-coding of the same values. The process can be depicted in the form of the flow diagram shown in the figure.

Fig4.7: Flow chart for encoding a coefficient for a significance map

EZW Algorithm

The EZW algorithm can be summarized as follows:

1. The initial threshold is set as T0 = 2^floor(log2(MAX(|c(x,y)|))), where MAX(|c(x,y)|) is the maximum magnitude in the coefficient matrix. The wavelet coefficients are placed on the dominant list.

2. Dominant pass: scan the coefficients on the dominant list with respect to the current threshold Ti and the subband ordering. Depending on its value, each coefficient is coded as p (positive), n (negative), z (isolated zero) or t (root of a zerotree). The significant values (coded as 'p' or 'n') are left for the subordinate pass; their positions are replaced by zero and the dominant list is updated.

3. Subordinate pass: the subordinate list is updated with the positions of the significant values of the current pass, and for each one a '1' or '0' is output, depending on whether it lies in the upper or lower half of its quantization interval.

4. The threshold is reduced by 2, i.e. Ti+1 = Ti/2.

5. Steps (2), (3) and (4) are repeated until the desired image quality or bit rate is reached.
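Step 2 can be sketched as a toy dominant pass over a hand-built parent-child structure. This is an illustrative sketch, not the thesis code: the node ids, the `children` map and the scan list are assumptions standing in for the quad-tree and Morton scan of a real transform.

```python
def dominant_pass(values, children, scan, T):
    """One EZW dominant pass (a sketch).

    values   : dict node -> coefficient (significant ones are zeroed here)
    children : dict node -> list of child nodes (empty at the finest scale)
    scan     : node ids in scan order, parents before children
    T        : current threshold
    Returns the symbol string and the newly significant magnitudes."""
    def subtree_insignificant(node):
        return (abs(values[node]) < T and
                all(subtree_insignificant(c) for c in children[node]))

    symbols, significant, skipped = [], [], set()
    for node in scan:
        if node in skipped:                 # inside an already coded zerotree
            continue
        v = values[node]
        if v >= T:
            symbols.append('p'); significant.append(abs(v)); values[node] = 0
        elif v <= -T:
            symbols.append('n'); significant.append(abs(v)); values[node] = 0
        elif subtree_insignificant(node):
            symbols.append('t')
            stack = list(children[node])    # all descendants need no symbol
            while stack:
                d = stack.pop()
                skipped.add(d)
                stack.extend(children[d])
        else:
            symbols.append('z')
    return ''.join(symbols), significant
```

On a five-node toy tree with values 63, -34, 31 (whose child is 47) and 10, a threshold of 32 produces the symbols p, n, z, t, p, mirroring the start of the worked example below.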

Example of EZW Encoding and Decoding

Let us assume an 8×8 image with 3-scale wavelet transform coefficients, shown in Figure 4.8, to explain the order of the operations. Out of the six dominant passes and five subordinate passes that can be generated, the sequence of operations for the first and second dominant and subordinate passes is explained here. The Morton scan order is followed for scanning the coefficients.

Fig.4.8: An example of an 8×8 image that is transformed by a certain wavelet transform.

First level

Encoder section:

- The maximum coefficient magnitude is 63. Hence the initial threshold is T0 = 32.
- The scanning starts with the first coefficient, 63. As this is greater than 32, a positive code 'p' is generated.
- The next coefficient (as per the Morton scan) is -34, which is less than minus the threshold, and hence an 'n' is generated.
- Even though the following coefficient (of magnitude 31) is insignificant with respect to the threshold 32, it has a significant descendant two generations down in subband LH1 with magnitude 47. Thus, an isolated zero symbol 'z' is generated.

- The coefficient 23 is less than 32, and its descendants ((3, -12, -14, 8) in subband HH2 and all coefficients in subband HH1) also have magnitudes less than 32. Hence it is a zerotree and a zerotree symbol 't' is generated; no code will be generated for any coefficient in subbands HH2 and HH1 during the current dominant pass. It is to be noted that no symbols are then generated in subband HH2, which would normally precede subband HL1 in the scan.
- The next value, 49, is greater than 32, hence a 'p' is generated. The magnitude 10 is less than 32 and all its descendants in HL1 also have magnitudes less than 32, so it is coded with a zerotree 't'.
- Though 14 is insignificant with respect to 32, its descendant 47 in LH1 is significant. Hence, an isolated zero symbol 'z' is generated.
- Similarly, -13, 15, -9 and -7 are all insignificant together with their descendants, and are coded as zerotrees.
- As coefficients in HL1, LH1 and HH1 have no descendants in the coding process, the isolated zero and zerotree symbols can be combined into a single code in these subbands; the insignificant coefficients scanned there generate that code.
- As 47 is significant with respect to 32, it carries a 'p'.

The series of codes generated at the end of the first dominant pass is

D1: p n z t p t t t t z t t t t t t t p t t

The magnitudes 63, 34, 49 and 47 have been identified as significant in the first dominant pass. The four values are moved to the subordinate list and their positions are replaced with zeros:

Sublist = [63 34 49 47]

In the first subordinate pass, the significant coefficients of the above dominant pass are refined. The magnitudes are partitioned into the uncertainty intervals (32, 48) and (48, 64), with symbols '0' and '1' respectively. The first entry, 63, lies in the upper uncertainty interval (48, 64) and hence is given a '1'. The next value, 34, lies in (32, 48) and hence gets a '0'. Similarly, 49 and 47 get '1' and '0', as they belong to the upper and lower uncertainty intervals respectively. The output of the first subordinate pass is

S1: 1 0 1 0

Decoder section:

The encoded bits undergo two passes, the dominant and the subordinate pass. In the dominant pass, the decoder checks whether each encoded symbol is a 'p', 'n', 't' or 'z' and reconstructs a corresponding value. In the subordinate pass, it checks whether each refinement bit is a '1' or a '0' and accordingly refines the already constructed value towards the original value. After the first level, the reconstructed significant coefficients look like 56, -40, 56 and 40.

Second level

Encoder section:

The above describes the first level, with a threshold of 32. The process continues in the second dominant pass at the new threshold of 16, and a new set of dominant and subordinate codes is generated. In this dominant pass, -31 and 23 are found to be significant; they are coded as 'n' and 'p' respectively and moved to the subordinate list. The series of codes generated at the end of the second dominant pass is

D2: z t n p t t t t t t t t

The new subordinate list now looks as follows:

Sublist = [63 34 49 47 31 23]

In the second subordinate pass, the magnitudes are partitioned into the uncertainty intervals (16, 24), (24, 32), (32, 40), (40, 48), (48, 56) and (56, 64), with symbols '0' and '1' for the lower and upper halves. The first entry, 63, lies in the upper uncertainty interval (56, 64) and hence is given a '1'. The next value, 34, lies in (32, 40) and hence gets a '0'. Similarly, 49 lies in (48, 56) and gets a '0', 47 lies in (40, 48) and gets a '1', and 31 and 23 get '1' and '0', as they belong to the upper and lower halves of their uncertainty intervals respectively. The output of the second subordinate pass is

S2: 1 0 0 1 1 0
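The decoder's refinement arithmetic can be sketched as successive interval halving; this is an illustrative sketch of the reconstruction rule, decoding one coefficient at a time from the threshold at which it became significant and its subordinate bits:

```python
def refine(T, sign, bits):
    """Decode one coefficient found significant at threshold T.

    The value is known to lie in [T, 2T); each subordinate bit keeps the
    upper ('1') or lower ('0') half, and the decoder reconstructs the
    centre of the final interval, with the sign from the 'p'/'n' symbol."""
    lo, hi = T, 2 * T
    for b in bits:
        mid = (lo + hi) / 2
        if b == '1':
            lo = mid
        else:
            hi = mid
    return sign * (lo + hi) / 2
```

With no subordinate bits, a coefficient found significant at T = 32 is reconstructed as 48; one bit refines it to 56 or 40, two bits to 60, 52, 44 or 36, and so on, matching the worked example.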

Decoder Section: The encoded bits again undergo the two passes. In the dominant pass, the decoder checks whether each encoded symbol is a 'p', 'n', 't' or 'z' and reconstructs the newly significant coefficients to particular values. In the subordinate pass, it checks whether each refinement bit is coded as '1' or '0' and, depending on that, refines the already constructed values towards the original values. After the second level, the reconstructed coefficients look like 60, -36, 52, 44, -28, 20. By this we can clearly see that the reconstructed coefficients get more refined at every level.

Similarly, encoding and decoding of the data continue for further levels and can be stopped at any time. The continual alternation between dominant and subordinate passes adds more and more refinement bits for higher fidelity of the image and can be stopped at a desired bit rate or image quality. One can continue the process for all six levels and obtain perfect reconstruction. After the inverse transform of the decoded coefficients, the original image is reproduced.
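The decoder's successive refinement can be traced numerically. The sketch below (illustrative Python; `ezw_reconstruct` is a hypothetical name, not a thesis function) starts each coefficient found significant at threshold T at ±(T + T/2) and then moves it a quarter interval per refinement bit, reproducing the values quoted above:

```python
def ezw_reconstruct(values, thresholds):
    """values: true signed coefficients, in subordinate-list order;
    thresholds: successive dominant-pass thresholds, e.g. [32, 16]."""
    recon = {}
    for T in thresholds:
        # Dominant pass: coefficients newly significant at this threshold
        # start at the centre of (T, 2T), with the appropriate sign.
        for i, v in enumerate(values):
            if i not in recon and abs(v) >= T:
                recon[i] = (T + T // 2) if v >= 0 else -(T + T // 2)
        # Subordinate pass: one refinement bit per significant coefficient.
        for i in recon:
            sgn = 1 if recon[i] >= 0 else -1
            if abs(values[i]) % T >= T // 2:
                recon[i] += sgn * (T // 4)   # upper half: move away from zero
            else:
                recon[i] -= sgn * (T // 4)   # lower half: move towards zero
    return [recon.get(i, 0) for i in range(len(values))]

print(ezw_reconstruct([63, -34, 49, 47, -31, 23], [32]))      # [56, -40, 56, 40, 0, 0]
print(ezw_reconstruct([63, -34, 49, 47, -31, 23], [32, 16]))  # [60, -36, 52, 44, -28, 20]
```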

Calculation of MSE and PSNR

Two error metrics are used to compare the image denoising techniques: the MSE (Mean Square Error) and the PSNR (Peak Signal to Noise Ratio).

MSE (Mean Square Error), defined by

    MSE = (1 / (N1 * N2)) * SUM over i,j of [x(i,j) - x'(i,j)]^2

where x(i,j) is the original image, x'(i,j) is the reconstructed (denoised) image, and N1 and N2 are the dimensions of the image.

PSNR (Peak Signal to Noise Ratio), defined by

    PSNR = 10 * log10( 255^2 / MSE )
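A direct Python translation of these two metrics (a sketch; the thesis computes them in MATLAB):

```python
import math

def mse(orig, recon):
    # Mean squared error over an N1 x N2 image given as nested lists.
    n1, n2 = len(orig), len(orig[0])
    return sum((orig[i][j] - recon[i][j]) ** 2
               for i in range(n1) for j in range(n2)) / (n1 * n2)

def psnr(orig, recon, peak=255):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    m = mse(orig, recon)
    return float('inf') if m == 0 else 10 * math.log10(peak * peak / m)

print(mse([[1, 2], [3, 4]], [[1, 2], [3, 0]]))  # 4.0
print(psnr([[0]], [[255]]))                     # 0.0
```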

CHAPTER 5
EXPERIMENTAL RESULTS

This project develops MATLAB code for denoising an image using the EZW algorithm, which was primarily used for image compression. Initially an image is taken and different noise levels are applied to it. The noise level is estimated by calculating the Signal to Noise Ratio (SNR); depending on the noise level, the threshold is chosen and the algorithm is applied to the image.

First, a sample image named 'catherine' is taken and a particular noise level (SNR = 8.2803 dB) is added to the image. After applying the algorithm on the image, the results are shown in the following figure: the original image, the noised image (after adding noise), the denoised image (after applying the algorithm), and the PSNR values for original to noised and original to denoised images.

(a) Original Image   (b) Noised Image   (c) Denoised Image
Fig. 5.1: Catherine1 (SNR = 8.2803 dB)

Original to noised: PSNR = 14.13 dB
Original to denoised: PSNR = 20.45 dB

Again the image 'catherine' is taken and the added noise level is now changed to SNR = 10.2465 dB. After applying the algorithm on the image, the results are shown in the following figure: the original image, the noised image, the denoised image, and the PSNR values for original to noised and original to denoised images.

Original Image   Noised Image   Denoised Image
Fig. 5.2: Catherine2 (SNR = 10.2465 dB)

Original to noised: PSNR = 16.10 dB
Original to denoised: PSNR = 23.24 dB

Another sample image named 'woman' is taken and a particular noise level (SNR = 8.2135 dB) is added to the image. After applying the algorithm on the image, the results are shown in the following figure: the original image, the noised image, the denoised image, and the PSNR values for original to noised and original to denoised images.

Original Image   Noised Image   Denoised Image
Fig. 5.3: Woman1 (SNR = 8.2135 dB)

Original to noised: PSNR = 14.18 dB
Original to denoised: PSNR = 17.53 dB

The image 'woman' is taken again and a noise level of SNR = 6.6298 dB is added to the image. After applying the algorithm on the image, the results are shown in the following figure: the original image, the noised image, the denoised image, and the PSNR values for original to noised and original to denoised images.

Original Image   Noised Image   Denoised Image
Fig. 5.4: Woman2 (SNR = 6.6298 dB)

Original to noised: PSNR = 12.55 dB
Original to denoised: PSNR = 15.57 dB

Results:

IMAGE        SNR          Original to Noised PSNR    Original to Denoised PSNR
Catherine1   8.2803 dB    14.13 dB                   20.45 dB
Catherine2   10.2465 dB   16.10 dB                   23.24 dB
Woman1       8.2135 dB    14.18 dB                   17.53 dB
Woman2       6.6298 dB    12.55 dB                   15.57 dB

Fig. 5.5: Results

CHAPTER 6
CONCLUSION

Recent research on image processing has given wavelet image coding the most attention, due to its high performance at a reasonable image quality. A major breakthrough has been achieved by the wavelet transform with subband classification and zerotree quantization. EZW is one such compression technique, and denoising of an image is another addition to its usefulness, apart from its being an excellent compression method.

By observing the results, we can clearly see that the denoised image has a better PSNR than the noised image; the PSNR is improved by almost 5 dB. This is similar to low-pass filtering, in which the high-frequency content, mostly noise along with some negligible image detail, is eliminated. So this algorithm can be used for applications which need variable levels of smoothness depending on requirements. When a comparison is performed between EZW and median filtering, it is prominently noticed that the image denoised using EZW has better reconstruction quality than the image denoised using median filtering. It can be concluded that this algorithm can be efficiently used for both compression and denoising at the same time.

MATLAB CODE FOR IMPLEMENTATION

Main Program

function func_ezw_demo_main
% main program
clear all; close all; clc;

fprintf('---------- Load Image ----------------\n');
% Load original image.
load catherine;
figure(1); image(X); title('Original Image'); colormap(map);
fprintf('done!\n');

% Add noise to the image.
y = 40*randn(256,256);
s = X + y;
figure(2); image(s); title('noised Image'); colormap(map);

% Estimate the signal to noise ratio of the noised image.
u = reshape(X,1,256*256); u = u.^2;
v = reshape(y,1,256*256); v = v.^2;
SNR = 10*log10(sum(u)/sum(v))

Q = 255;
MSE = sum(sum((s-X).^2))/size(X,1)/size(X,2);
fprintf('The psnr performance is %.2f dB\n', 10*log10(Q*Q/MSE));

img_orig = s;
n = size(img_orig,1);
n_log = log2(n);
level = n_log;

% wavelet basis
type = 'bior4.4';
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters(type);

fprintf('---------- Wavelet Decomposition ----------------\n');
% img_wavedata: wavelet coefficients of the input image
[img_wavedata, S] = func_DWT(img_orig, level, Lo_D, Hi_D);
fprintf('done!\n');
figure(3); image(img_wavedata); title('dwt'); colormap(map);

fprintf('---------- EZW Encoding ----------------\n');
ezw_encoding_threshold = 100;
treshold = pow2(floor(log2(max(max(abs(img_wavedata))))));
[img_enc_significance_map, img_enc_refinement] = func_ezw_enc(img_wavedata, ezw_encoding_threshold);
fprintf('done!\n');

fprintf('---------- EZW Decoding ----------------\n');
img_wavedata_dec = func_ezw_dec(n, treshold, img_enc_significance_map, img_enc_refinement);
fprintf('done!\n');
figure(4); image(img_wavedata_dec); title('idwt'); colormap(map);

fprintf('---------- Inverse Wavelet Decomposition ----------------\n');
img_reconstruct = func_InvDWT(img_wavedata_dec, S, Lo_R, Hi_R, level);
fprintf('done!\n');
figure(5); image(img_reconstruct); title('final'); colormap(map);

fprintf('---------- Performance ----------------\n');
Q = 255;
MSE = sum(sum((img_reconstruct-X).^2))/size(X,1)/size(X,2);
fprintf('The psnr performance is %.2f dB\n', 10*log10(Q*Q/MSE));
fprintf('done!\n');

Wavelet Decomposition:

function [I_W, S] = func_DWT(I, level, Lo_D, Hi_D)
% input:  I     : input image
%         level : wavelet decomposition level
%         Lo_D  : low-pass decomposition filter
%         Hi_D  : high-pass decomposition filter
% output: I_W   : decomposed image vector
%         S     : corresponding bookkeeping matrix

% The output wavelet 2-D decomposition structure [C,S] contains the
% wavelet decomposition vector C and the corresponding bookkeeping matrix S.
[C, S] = func_Mywavedec2(I, level, Lo_D, Hi_D);

S(:,3) = S(:,1).*S(:,2);      % dim of detail coef matrices
L = length(S);
I_W = zeros(S(L,1), S(L,2));

% approx part
I_W(1:S(1,1), 1:S(1,2)) = reshape(C(1:S(1,3)), S(1,1), S(1,2));

for k = 2:L-1
    rows    = [sum(S(1:k-1,1))+1 : sum(S(1:k,1))];
    columns = [sum(S(1:k-1,2))+1 : sum(S(1:k,2))];
    % horizontal part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3);
    I_W(1:S(k,1), columns) = reshape(C(c_start:c_stop), S(k,1), S(k,2));
    % vertical part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3);
    I_W(rows, 1:S(k,2)) = reshape(C(c_start:c_stop), S(k,1), S(k,2));
    % diagonal part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k,3));
    I_W(rows, columns) = reshape(C(c_start:c_stop), S(k,1), S(k,2));
end

function [c,s] = func_Mywavedec2(x,n,varargin)
% Analogous to [C,S] = WAVEDEC2(X,N,'wname'), where Lo_D is the
% decomposition low-pass filter and Hi_D is the decomposition
% high-pass filter.
if errargn(mfilename,nargin,[3:4],nargout,[0:2]), error('*'); end
if errargt(mfilename,n,'int'), error('*'); end
if nargin==3
    [Lo_D,Hi_D] = wfilters(varargin{1},'d');
else
    Lo_D = varargin{1}; Hi_D = varargin{2};
end

% Initialization.
c = [];
s = [size(x)];

for i=1:n
    [x,h,v,d] = dwt2(x,Lo_D,Hi_D,'mode','per');     % decomposition
    c = [h(:)' v(:)' d(:)' c];                      % store details
    s = [size(x); s];                               % store size
end

% Last approximation.
c = [x(:)' c];
s = [size(x); s];

EZW Encoding

function [significance_map, refinement] = func_ezw_enc(img_wavedata, ezw_encoding_threshold)
% img_wavedata: wavelet coefficients to encode
% ezw_encoding_threshold: determines where to stop encoding
% significance_map: a string matrix containing significance data for the
%     different passes ('p','n','z','t'), where each row contains data
%     for a different scanning pass
% refinement: a string matrix containing refinement data for the
%     different passes ('0' or '1'), where each row contains data for a
%     different scanning pass

significance_map = [];
refinement = [];
img_wavedata_save = img_wavedata;

% Morton scan order
n = size(img_wavedata,1);
scan = func_morton([0:(n*n)-1], n);

% Initial threshold
init_threshold = pow2(floor(log2(max(max(abs(img_wavedata))))))
threshold = init_threshold;
subordinate_list = [];

while (threshold >= ezw_encoding_threshold)
    [str, list, img_wavedata] = func_dominant_pass(img_wavedata, threshold, scan);
    significance_map = strvcat(significance_map, char(str));

    if(threshold == init_threshold)
        subordinate_list = list;
    else
        subordinate_list = func_rearrange_list(subordinate_list, list, scan, img_wavedata_save);
    end

    [encoded, subordinate_list] = func_subordinate_pass(subordinate_list, threshold);
    refinement = strvcat(refinement, strrep(num2str(encoded), ' ', ''));

    threshold = threshold / 2;
end
significance_map
refinement

Morton Scan

function scan = func_morton(pos, n)
bits = log2(n*n);             % number of bits needed to represent position
bin = dec2bin(pos(:), bits);  % convert position to binary
scan = [bin2dec(bin(:,1:2:bits-1)) bin2dec(bin(:,2:2:bits))];

Dominant Pass

function [signif_map, subordinate_list, img_wavedata] = func_dominant_pass(img_wavedata, threshold, scan)
signif_map = [];
subordinate_list = [];
signif_index = 1;
subordinate_index = 1;
dim = size(img_wavedata,1);
data = img_wavedata;

for element = 1:dim*dim
    row = scan(element,1)+1;
    column = scan(element,2)+1;

    % to check whether element should be processed
    if(~isnan(data(row, column)) & data(row, column) < realmax)
        if(data(row, column) >= threshold)
            signif_map(signif_index) = 'p';
            signif_index = signif_index + 1;
            subordinate_list(1, subordinate_index) = data(row, column);
            subordinate_list(2, subordinate_index) = threshold + threshold/2;
            subordinate_index = subordinate_index + 1;
            data(row, column) = 0;
        elseif(data(row, column) <= -threshold)
            signif_map(signif_index) = 'n';
            signif_index = signif_index + 1;
            subordinate_list(1, subordinate_index) = data(row, column);
            subordinate_list(2, subordinate_index) = -threshold - threshold/2;
            subordinate_index = subordinate_index + 1;
            data(row, column) = 0;
        else % abs(data(row, column)) < threshold
            % to determine whether element is a zerotree root
            if(row < dim/2 | column < dim/2)
                mask = func_treemask(row, column, dim);
                masked = data .* mask;
                % compare data to threshold
                if(isempty(find(abs(masked) >= threshold)))
                    % element is zerotree root
                    signif_map(signif_index) = 't';
                    signif_index = signif_index + 1;

column = scan(element. row = scan(element. y_min = 2*y_min .data = data + (mask*realmax). y. % % % % x. % index add_list for element = 1:size(scan. while(x_max <= dim & y_max <= dim). y_min:y_max) = 1. signif_index = signif_index + 1. x. x_max = 2*x_max. % calculate new subset x_min = 2*x_min . wavedata). data(index) = img_wavedata(index). subordinate_list = []. % index original_list a_index = 1.1. else % element is isolated zero signif_map(signif_index) = 'z'. mask(x_min:x_max.1).dim).1. scan. end Rearrange List function subordinate_list = func_rearrange_list(orig_list. add_list.1)+1.2)+1. x_min x_max y_min y_max = = = = x.y. end end end end index = find(data == realmax). func_treemask: function mask = func_treemask(x. y is the position in the matrix of the node where the EZW tree should start. o_index = 1. y_max = 2*y_max. top left is x=1 y=1 dim is the dimension of the mask (should be the same dimension as the wavelet data) mask = zeros(dim). y. 42 .

    elseif(size(add_list,2) >= a_index & wavedata(row, column) == add_list(1, a_index))
        subordinate_list = [subordinate_list add_list(:, a_index)];
        a_index = a_index + 1;
    end
end

Subordinate Pass

function [encoded, subordinate_list] = func_subordinate_pass(subordinate_list, threshold)
% subordinate_list: current subordinate list containing coefficients
% threshold: current threshold to use when comparing
% encoded: matrix containing 0's and 1's for refinement of the subordinate list
% subordinate_list: new subordinate list (second row containing
%     reconstruction values)

encoded = zeros(1, size(subordinate_list,2));
encoded(find(abs(subordinate_list(1,:)) > abs(subordinate_list(2,:)))) = 1;

% update the reconstruction values
for i = 1:length(encoded)
    if(encoded(i) == 1)
        if(subordinate_list(1,i) > 0)
            subordinate_list(2,i) = subordinate_list(2,i) + threshold/4;
        else
            subordinate_list(2,i) = subordinate_list(2,i) - threshold/4;
        end
    else
        if(subordinate_list(1,i) > 0)
            subordinate_list(2,i) = subordinate_list(2,i) - threshold/4;
        else
            subordinate_list(2,i) = subordinate_list(2,i) + threshold/4;
        end
    end
end

EZW Decoder:

function img_wavedata_dec = func_ezw_decode(dim, threshold, significance_map, refinement)
% dim: dimension of the wavelet matrix to reconstruct
% threshold: initial threshold used while encoding
% significance_map: a string matrix containing significance data

% refinement: a string matrix containing refinement data
% img_wavedata_dec: reconstructed wavelet coefficients

img_wavedata_dec = zeros(dim, dim);
scan = func_morton([0:(dim*dim)-1], dim);

% number of steps in the significance map (and refinement data)
steps = size(significance_map, 1);

for step = 1:steps
    % to decode significance map
    img_wavedata_dec = func_decode_significancemap(img_wavedata_dec, significance_map(step,:), threshold, scan);
    % to decode refinement data
    img_wavedata_dec = func_decode_refine(img_wavedata_dec, refinement(step,:), threshold, scan);
    threshold = threshold/2;
end

func_decode_significancemap:

function img_wavedata_dec = func_decode_significancemap(img_wavedata_dec, significance_map, threshold, scan)
% img_wavedata_dec: input wavelet coefficients
% significance_map: string containing the significance map ('p','n','z' and 't')
% threshold: threshold to use during this decoding pass (dominant pass)
% scan: scan order to use (Morton)
% img_wavedata_dec: the decoded wavelet coefficients

backup = img_wavedata_dec;
n = size(img_wavedata_dec, 1);
index = 1;

for element = 1:n*n
    % to get matrix index for element
    row = scan(element,1)+1;
    column = scan(element,2)+1;
    % to check whether element should be processed
    if(isfinite(img_wavedata_dec(row, column)))

        % to determine type of element
        if(significance_map(index) == 'p')
            img_wavedata_dec(row, column) = threshold + threshold/2;
        elseif(significance_map(index) == 'n')
            img_wavedata_dec(row, column) = -threshold - threshold/2;
        elseif(significance_map(index) == 'z')
            img_wavedata_dec(row, column) = 0;
        else % 't': zerotree root, mark all descendants as skipped
            img_wavedata_dec(row, column) = 0;
            mask = func_treemask_inf(row, column, n);
            img_wavedata_dec = img_wavedata_dec + mask;
        end
        index = index + 1;
    end
end
img_wavedata_dec(find(img_wavedata_dec > realmax)) = 0;
img_wavedata_dec = img_wavedata_dec + backup;

func_decode_refine:

function img_wavedata_dec = func_decode_refine(img_wavedata_dec, refinement, threshold, scan)
% img_wavedata_dec: input wavelet coefficients
% refinement: string containing refinement data ('0' and '1')
% threshold: threshold to use during this refinement pass
% scan: scan order to use (Morton)
% img_wavedata_dec: the refined wavelet coefficients

n = size(img_wavedata_dec, 1);
index = 1;

for element = 1:n*n
    % get matrix index for element
    row = scan(element,1)+1;
    column = scan(element,2)+1;

    if(img_wavedata_dec(row, column) ~= 0)
        ref = refinement(index);
        if(ref == '1')
            if(img_wavedata_dec(row, column) > 0)
                img_wavedata_dec(row, column) = img_wavedata_dec(row, column) + threshold/4;
            else
                img_wavedata_dec(row, column) = img_wavedata_dec(row, column) - threshold/4;
            end

        else
            if(img_wavedata_dec(row, column) > 0)
                img_wavedata_dec(row, column) = img_wavedata_dec(row, column) - threshold/4;
            else
                img_wavedata_dec(row, column) = img_wavedata_dec(row, column) + threshold/4;
            end
        end
        index = index + 1;
    end
end

func_treemask_inf:

function mask = func_treemask_inf(x, y, dim)
% marks all descendants of the node at (x, y) with inf, so that they are
% skipped during the rest of the decoding pass
mask = zeros(dim);
% calculate new subsets until the matrix border is reached
x_min = 2*x - 1; x_max = 2*x;
y_min = 2*y - 1; y_max = 2*y;
while(x_max <= dim & y_max <= dim)
    mask(x_min:x_max, y_min:y_max) = inf;
    x_min = 2*x_min - 1;
    x_max = 2*x_max;
    y_min = 2*y_min - 1;
    y_max = 2*y_max;
end

IDWT

function im_rec = func_InvDWT(I_W, S, Lo_R, Hi_R, level)
% input:  I_W   : decomposed image vector
%         S     : corresponding bookkeeping matrix
%         Lo_R  : low-pass reconstruction filter
%         Hi_R  : high-pass reconstruction filter
%         level : wavelet decomposition level
% output: im_rec: reconstructed image

L = length(S);
m = I_W;
C1 = zeros(1, S(1,3)+3*sum(S(2:L-1,3)));

% approx part

C1(1:S(1,3)) = reshape(m(1:S(1,1), 1:S(1,2)), 1, S(1,3));

for k = 2:L-1
    rows    = [sum(S(1:k-1,1))+1 : sum(S(1:k,1))];
    columns = [sum(S(1:k-1,2))+1 : sum(S(1:k,2))];
    % horizontal part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3);
    C1(c_start:c_stop) = reshape(m(1:S(k,1), columns), 1, c_stop-c_start+1);
    % vertical part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + S(k,3) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3);
    C1(c_start:c_stop) = reshape(m(rows, 1:S(k,2)), 1, c_stop-c_start+1);
    % diagonal part
    c_start = S(1,3) + 3*sum(S(2:k-1,3)) + 2*S(k,3) + 1;
    c_stop  = S(1,3) + 3*sum(S(2:k,3));
    C1(c_start:c_stop) = reshape(m(rows, columns), 1, c_stop-c_start+1);
end

if ((L-2) > level)
    temp = zeros(1, length(C1) - (S(1,3)+3*sum(S(2:(level+1),3))));
    C1(S((level+2),3)+1 : length(C1)) = temp;
end

S(:,3) = [];
im_rec = func_Mywaverec2(C1, S, Lo_R, Hi_R);

func_Mywaverec2:

function x = func_Mywaverec2(c,s,varargin)
if errargn(mfilename,nargin,[3:4],nargout,[0:1]), error('*'); end
x = func_Myappcoef2(c,s,varargin{:},0);

func_Myappcoef2:

function a = func_Myappcoef2(c,s,varargin)
if errargn(mfilename,nargin,[3:5],nargout,[0:1]), error('*'); end
rmax = size(s,1);
nmax = rmax - 2;
if ischar(varargin{1})

    [Lo_R,Hi_R] = wfilters(varargin{1},'r');
    next = 2;
else
    Lo_R = varargin{1};
    Hi_R = varargin{2};
    next = 3;
end
if nargin >= (2+next)
    n = varargin{next};
else
    n = nmax;
end
if (n < 0) | (n > nmax) | (n ~= fix(n))
    errargt(mfilename,'invalid level value','msg');
    error('*');
end

nl = s(1,1);
nc = s(1,2);
a = zeros(nl,nc);
a(:) = c(1:nl*nc);

rm = rmax + 1;
for p = nmax:-1:n+1
    [h,v,d] = detcoef2('all',c,s,p);
    a = idwt2(a,h,v,d,Lo_R,Hi_R,s(rm-p,:),'mode','per');
end
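Two of the helpers above are easy to sanity-check outside MATLAB. The Python sketch below (illustrative only; `morton_scan` and `descendant_count` are hypothetical names) mirrors func_morton's bit de-interleaving and counts the quadtree descendants that func_treemask covers below a node:

```python
def morton_scan(n):
    # mirrors func_morton: bits at odd string positions form the row,
    # bits at even positions form the column
    bits = (n * n).bit_length() - 1
    order = []
    for pos in range(n * n):
        b = format(pos, '0%db' % bits)
        order.append((int(b[0::2], 2), int(b[1::2], 2)))
    return order

def descendant_count(r, c, n):
    # size of the quadtree descendant set below a node at 0-indexed (r, c)
    # in an n x n coefficient matrix; children of (r, c) are
    # (2r, 2c), (2r, 2c+1), (2r+1, 2c), (2r+1, 2c+1), recursively
    count, frontier = 0, [(r, c)]
    while 2 * frontier[0][0] < n and 2 * frontier[0][1] < n:
        frontier = [(2 * rr + dr, 2 * cc + dc)
                    for rr, cc in frontier for dr in (0, 1) for dc in (0, 1)]
        count += len(frontier)
    return count

print(morton_scan(4)[:8])
# [(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
print(descendant_count(1, 1, 8))  # 20: 4 children + 16 grandchildren
```

The Z-shaped order visits each quadtree quadrant completely before moving on, which is what lets the encoder emit a whole zerotree with a single 't' symbol.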

REFERENCES

1. J. M. Shapiro: Embedded Image Coding Using Zerotrees of Wavelet Coefficients. IEEE Transactions on Signal Processing, Vol. 41, No. 12, pp. 3445-3462, December 1993.
2. M. Antonini, M. Barlaud, P. Mathieu and I. Daubechies: Image Coding Using Wavelet Transform. IEEE Transactions on Image Processing, Vol. 1, No. 2, pp. 205-220, April 1992.
3. M. Vetterli and C. Herley: Wavelets and Filter Banks: Theory and Design. IEEE Transactions on Signal Processing, Vol. 40, No. 9, pp. 2207-2232, September 1992.
4. D. L. Donoho: De-noising by Soft-Thresholding. IEEE Transactions on Information Theory, Vol. 41, No. 3, pp. 613-627, May 1995.
5. R. C. Gonzalez and R. E. Woods: Digital Image Processing. Pearson Education.