
(IJCSIS) International Journal of Computer Science and Information Security,

(Vol. 9 No. 3), 2011.

COMPRESSION TECHNIQUES AND WATER MARKING OF DIGITAL IMAGE USING WAVELET TRANSFORM AND SPIHT CODING
G. Prasanna Lakshmi, Computer Science, IBSAR, Karjat, India (Prasanalaxmi@yahoo.com); Dr. D. A. Chandulal, Professor and HOD, Computer Science, IBSAR, India (dr.chandulal@yahoo.com); Dr. K. T. V. Reddy, Professor & Principal, Electronics & Telecommunications Dept., India (ktvreddy@rediffmail.com)

techniques w.r.t. the objective fidelity criteria. The objective fidelity criteria are: a) MSE (mean square error): the lower the MSE, the closer the compressed image is to the original image; b) PSNR (peak signal-to-noise ratio): the higher the PSNR, the closer the compressed image is to the original image. The amount of compression is measured using the CR (compression ratio) for each elimination technique; the higher the compression ratio, the greater the compression. Further, the two coding techniques were compared w.r.t. encoding and decoding time. The proposed block diagram for compression and decompression (at the transmitter and receiver) is:

Fig: 1.1 ENCODER

I. INTRODUCTION

Advances that facilitate electronic publishing and commerce also heighten threats of intellectual property theft and unlawful tampering. One approach to this problem involves embedding an invisible structure into a host signal to mark its ownership. These structures are called digital watermarks, and the associated embedding process is called digital watermarking. One major driving force for research in this area is the need for effective copyright protection for digital imagery: in such an application a serial number or a message is embedded into the image to protect it and to identify the copyright holder, so the objective of watermarking is an authenticity check. In this project the discrete wavelet transform of an image is used, which splits the image into two parts: an approximation part and a detail part. Using this transformation the details of the image can be extracted, and control over the details permits identifying the invisible ones; hence a watermark can be inserted by changing only the less important details of the image. The watermark should survive image processing operations such as compression. This project compresses the image using two different techniques, Huffman and SPIHT coding. SPIHT (set partitioning in hierarchical trees) is a new and very fast encoding technique. The SPIHT algorithm is based on three concepts: a) ordered bit-plane progressive transmission; b) a set partitioning sorting algorithm; c) spatial orientation trees. In this project each coding technique, i.e., Huffman and SPIHT, is also performed with two elimination techniques of the wavelet transform, HH elimination (LL and LH bands) and H* elimination (only the LL band), and the two are then compared.

Fig: 1.2 DECODER

II. DISCRETE WAVELET TRANSFORM

In DWT, we pass the time-domain signal through various high pass and low pass filters, which filter out either the high frequency or the low frequency portions of the signal. This procedure is repeated, and every time some portion of the signal corresponding to some frequencies is removed. There are two data elimination methods used in the wavelet transform, HH elimination and H* elimination, and in this project we use both. The proposed architecture for the HH and H* elimination techniques is shown below.

http://sites.google.com/site/ijcsis/ ISSN 1947-5500


This is the connection between wavelet theory and the digital watermarking of images that gives this paper its title.

V. THE WATERMARK INSERTION SYSTEM

Fig: 1.3 H* ELIMINATION TECHNIQUE

Various insertion techniques, such as amplitude modulation of selected frequencies, are used for inserting a message into an image. The watermark insertion system used in this paper is based on the ASCII codes of the characters of the message to be inserted: watermarking is performed by applying the wavelet transform and then altering the chosen frequencies of the original image according to the ASCII codes of the characters in the message. The 8-bit code of each character is embedded into the LSBs of pixels starting from a chosen location.

VI. COMPRESSION
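The ASCII/LSB insertion described above can be sketched as follows. This is an illustrative sketch, not the authors' exact implementation: the function names and the use of a flat list of 8-bit pixel values are assumptions for the example.

```python
# Hypothetical sketch of LSB message embedding: each character's 8-bit
# ASCII code is written, bit by bit (MSB first), into the least
# significant bits of consecutive pixel values starting at `start`.

def embed_message(pixels, message, start=0):
    """Embed `message` into the LSBs of a flat list of 8-bit pixel values."""
    out = list(pixels)
    bits = [(ord(c) >> (7 - i)) & 1 for c in message for i in range(8)]
    for k, bit in enumerate(bits):
        out[start + k] = (out[start + k] & 0xFE) | bit  # replace the LSB only
    return out

def extract_message(pixels, length, start=0):
    """Read back `length` characters from the LSBs."""
    chars = []
    for j in range(length):
        code = 0
        for i in range(8):
            code = (code << 1) | (pixels[start + 8 * j + i] & 1)
        chars.append(chr(code))
    return "".join(chars)
```

Because only the LSB of each touched pixel changes, no pixel value moves by more than 1, which is what keeps the insertion visually imperceptible.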

Fig: 1.4 HH ELIMINATION TECHNIQUE

III. DIGITAL WATERMARKING OF STILL IMAGES

Images are one of the most heavily used categories of multimedia signals; for example, an estimated 80% of the data transmitted over the internet are images. This is why it is very important to study digital watermarking methods for images. In this paper a novel watermarking approach which embeds a watermark in the discrete wavelet domain of the image is presented. This approach provides information on the specific frequencies of the image that have been modified.

IV. WATERMARKING USING WAVELETS

After watermarking, the next step for proper transmission of the image is to compress it, to reduce the bandwidth required for transmission and the memory needed to store the image. Compression refers to the process of reducing the amount of data required to represent a given quantity of information, i.e., removing redundant data. The data which contains no relevant information is called data redundancy, given by

    Rd = 1 − 1/Cr

where Cr is the compression ratio, given by

    Cr = n1/n2

and n1 and n2 are the numbers of information-carrying units of the input and output images. Compression thus refers to removing the redundancies so that the image takes less memory and less bandwidth for transmission. In a digital image, three basic redundancies are present: interpixel redundancies, psychovisual redundancies and coding redundancies.

VII. INTERPIXEL REDUNDANCIES
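The two formulas above can be illustrated numerically; the bit counts used here are hypothetical example values, not measurements from the paper.

```python
# Cr = n1/n2 and Rd = 1 - 1/Cr, where n1 and n2 are the information-
# carrying unit counts (e.g. bits) of the input and compressed images.

def compression_ratio(n1, n2):
    return n1 / n2

def relative_redundancy(cr):
    return 1 - 1 / cr

# Hypothetical example: an 8-bit 256x256 image (524288 bits) compressed
# to 131072 bits gives Cr = 4 and Rd = 0.75, i.e. 75% of the original
# data was redundant.
cr = compression_ratio(256 * 256 * 8, 131072)
rd = relative_redundancy(cr)
```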

The discrete wavelet transform of an image splits the image into two parts: an approximation part and a detail part. Using this transformation, the details of an image can be extracted, and control over the details permits identifying the invisible ones. This is very important because, by changing only the less important details of an image, it is easy to insert a watermark while keeping the insertion procedure invisible. This can be a very simple and fast procedure. By transforming these details, a new image, very similar to the original one, can be obtained. This new image can be regarded as the watermarked image associated with the original one, and their difference can be considered the watermark embedded in the original image. So the discrete wavelet transform can be used to embed a watermark into an image.

This represents the inter-correlation between the pixels within an image. These redundancies are eliminated by applying an image transform, which maps the original image data into another mathematical space where it is easier to compress the data by representing it with fewer bits than the original image. In this project, the wavelet transform used for watermarking also removes the interpixel redundancies.

VIII. PSYCHOVISUAL REDUNDANCIES

This is the information which has less relative importance than other information in normal visual processing. These redundancies are removed by quantization, i.e., the mapping of a broad range of input values to a limited number of output values. Quantization is applied to the output obtained after applying the DWT. Quantization is an irreversible process, hence information lost cannot be regained during decompression.

IX. CODING REDUNDANCIES
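The many-to-few mapping and its irreversibility, as described for quantization above, can be seen in a minimal uniform quantizer; the step size of 16 is an arbitrary illustrative choice, not a value from the paper.

```python
# Uniform quantization: many input values map to one output level,
# so dequantization cannot recover the original values exactly.

def quantize(values, step=16):
    """Map each value to the index of its nearest quantization level."""
    return [round(v / step) for v in values]

def dequantize(indices, step=16):
    """Map level indices back to representative values."""
    return [i * step for i in indices]
```

For example, with step 16 the inputs 3, 20 and 250 quantize to levels 0, 1 and 16, which dequantize to 0, 16 and 256: close to, but not equal to, the originals, which is exactly the irreversible information loss described above.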

Coding involves mapping the discrete data from the quantizer onto a code in an optimal manner, i.e., constructing codes such that the number of bits used to represent the data is reduced. Compression is achieved by assigning fewer bits to the more probable gray levels than to the less probable ones. In this project we applied two coding techniques: Huffman coding and SPIHT coding (Set Partitioning in Hierarchical Trees).

X. HUFFMAN CODING

A repeatable or reproducible means of quantifying the nature and extent of information loss is highly desirable. Two general classes of criteria for digital images are 1) objective fidelity criteria and 2) subjective fidelity criteria. In this paper we use objective fidelity criteria: 1) MSE (mean square error): as the MSE decreases, the clarity of the image increases, i.e., the compressed image is closer to the original image; 2) PSNR (peak signal-to-noise ratio): as the PSNR increases, the clarity of the image increases, i.e., the compressed image is closer to the original image; 3) CR (compression ratio): as the CR increases, we achieve more compression.
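The two fidelity measures above can be computed directly from their standard definitions for 8-bit images; this is a sketch, since the paper does not give an implementation.

```python
import math

def mse(original, compressed):
    """Mean square error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)

def psnr(original, compressed, peak=255):
    """PSNR in dB: 10 * log10(peak^2 / MSE), infinite for identical images."""
    m = mse(original, compressed)
    return float('inf') if m == 0 else 10 * math.log10(peak ** 2 / m)
```

With every pixel off by exactly 1 gray level, the MSE is 1.0 and the PSNR is about 48.13 dB, consistent with the rule above that a smaller MSE corresponds to a larger PSNR.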

The Huffman coding technique is the most popular technique for removing coding redundancies. Huffman coding yields the smallest possible number of code symbols per source symbol; in terms of the noiseless coding theorem, the resulting code is optimal for a fixed value of n, subject to the constraint that the source symbols be coded one at a time. The Huffman algorithm can be described in five steps:
1. Find the gray level probabilities for the image by computing its histogram.
2. Order the probabilities from smallest to largest.
3. Combine the smallest two by addition.
4. Repeat from step 2 until only two probabilities are left.
5. Working backward along the tree, generate the code by alternately assigning 0 and 1.

XI. SPIHT CODING (SET PARTITIONING IN HIERARCHICAL TREES)
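The five Huffman steps above can be transcribed into a short routine. This sketch uses Python's heapq to keep the probabilities ordered and is not the paper's implementation; the function name is an assumption.

```python
import heapq
from itertools import count

def huffman_codes(probabilities):
    """Map symbol -> binary code string for a {symbol: probability} dict."""
    tie = count()  # tie-breaker so heapq never has to compare symbol lists
    heap = [(p, next(tie), [(sym, "")]) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, lo = heapq.heappop(heap)  # the two smallest probabilities...
        p2, _, hi = heapq.heappop(heap)
        # ...are combined by addition; prepending 0/1 to every code in each
        # branch mirrors the backward assignment along the tree (step 5).
        merged = [(s, "0" + c) for s, c in lo] + [(s, "1" + c) for s, c in hi]
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return dict(heap[0][2])
```

For dyadic probabilities {0.5, 0.25, 0.125, 0.125} the resulting code lengths are 1, 2, 3 and 3 bits: the most probable symbol gets the fewest bits, exactly the behavior described in Section IX above.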

2. LITERATURE REVIEW

2.1 TRANSFORMS

WHAT IS A TRANSFORM? WHY DO WE NEED TRANSFORMS?

SPIHT coding offers a new, fast and different implementation based on set partitioning in hierarchical trees, which provides better performance than many other coding techniques; it has become a benchmark state-of-the-art algorithm for image compression. The SPIHT algorithm is based on three concepts: a) ordered bit-plane progressive transmission; b) a set partitioning sorting algorithm; c) spatial orientation trees. SPIHT has the following advantages: a) optimized for progressive image transmission; b) produces a fully embedded coded file; c) simple quantization algorithm; d) fast coding and decoding; e) wide applicability; f) good image quality and high PSNR; g) can code to an exact bit rate or distortion; h) efficient combination with error protection.

XII. FIDELITY CRITERIA
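One of the three SPIHT concepts listed above, the spatial orientation tree, can be made concrete: each wavelet coefficient at one scale parents a 2x2 block of coefficients at the next finer scale. The sketch below shows only this tree structure for non-root coefficients, not the full SPIHT coder, and the function names are illustrative.

```python
# In the usual spatial orientation tree, coefficient (i, j) parents the
# four coefficients (2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1).
# (Roots in the coarsest band are handled specially in full SPIHT and
# are not modeled here.)

def children(i, j):
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

def descendants(i, j, size):
    """All descendants of (i, j) inside a size x size coefficient array."""
    out, frontier = [], children(i, j)
    while frontier and frontier[0][0] < size:
        out.extend(frontier)
        frontier = [c for node in frontier for c in children(*node)]
    return out
```

SPIHT's set partitioning works by testing whole descendant sets like these for significance against a bit-plane threshold, which is what makes the coding both fast and embedded.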

A transform is a mathematical operation that takes a function or sequence and maps it into another one. Transforms are used because: a) the transform of a function may give additional or hidden information about the original function which may not be available or obvious otherwise; b) the transform of an equation may be easier to solve than the original equation (recall Laplace transforms for differential equations); c) the transform of a function or sequence may require less storage, hence providing data compression; d) an operation may be easier to apply on the transformed function than on the original function (recall convolution). Mathematical transformations are applied to signals to obtain further information that is not readily available in the raw signal. Most signals in practice are time-domain signals in their raw format, i.e., whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude. When we plot a time-domain signal we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most image-processing applications. In many cases, the most distinguishing information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the frequency (spectral) components of that signal; it shows what frequencies exist in the signal.


Intuitively, we all know that frequency has something to do with the rate of change of something. If something (a mathematical or physical variable, to be technically correct) changes rapidly, we say that it is of high frequency, whereas if this variable does not change rapidly, i.e., it changes smoothly, we say that it is of low frequency. If this variable does not change at all, then we say it has zero frequency. For example, the publication frequency of a daily newspaper is higher than that of a monthly magazine. Frequency is measured in cycles/second, or with a more common name, in "Hertz". Now look at the following figures: the first is a sine wave at 3 Hz, the second at 10 Hz, as shown below.

The following shows the FT of the 50 Hz signal:

Fig: 2.1.2 FT OF A 50 Hz SIGNAL

Although the FT is probably the most popular transform in use, there are many other transforms that are used quite often by engineers and mathematicians. The Hilbert transform, the short-time Fourier transform, Wigner distributions, the Radon transform, and of course our featured transform, the wavelet transform, constitute only a small portion of the huge list of transforms available at the engineer's and mathematician's disposal. Every transformation technique has its own area of application, with advantages and disadvantages, and the wavelet transform (WT) is no exception. For a better understanding of the need for the WT, let's look at the FT more closely. The FT and the WT are both reversible transforms; that is, they allow going back and forth between the raw and the processed (transformed) signal. However, only one of the two kinds of information is available at any given time: no frequency information is available in the time-domain signal, and no time information is available in the Fourier-transformed signal. The natural question that comes to mind is whether it is necessary to have both the time and the frequency information at the same time. Recall that the FT gives the frequency information of the signal, which means that it tells us how much of each frequency exists in the signal, but it does not tell us when in time these frequency components exist. This information is not required when the signal is so-called stationary, i.e., in stationary signals all frequency components that exist in the signal exist throughout the entire duration of the signal: there is 10 Hz at all times, there is 50 Hz at all times, and there is 100 Hz at all times, as shown below.

Fig: 2.1.1 SINE WAVES WITH DIFFERENT FREQUENCIES

So how do we measure frequency, or how do we find the frequency content of a signal or an image? The answer is the FOURIER TRANSFORM (FT). If the FT of a time-domain signal is taken, the frequency-amplitude representation of that signal is obtained. In other words, we now have a plot with one axis being frequency and the other being amplitude, which tells us how much of each frequency exists in our signal or image. For example, if we take the FT of the electric current that we use in our houses, we get one spike at 50 Hz and nothing elsewhere, since that signal has only a 50 Hz frequency component.
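This claim can be checked numerically. The sketch below uses NumPy (an assumed tool, not the paper's toolchain) to take the FT of one second of a 50 Hz sine sampled at 1000 Hz.

```python
import numpy as np

fs = 1000                        # sampling rate in Hz (assumed for the demo)
t = np.arange(fs) / fs           # one second of samples
x = np.sin(2 * np.pi * 50 * t)   # a pure 50 Hz sine

spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)       # frequency axis in Hz
peak = freqs[np.argmax(spectrum)]             # location of the single spike
```

All of the spectral energy lands in the 50 Hz bin: the spike at 50 Hz with essentially nothing elsewhere, exactly as described above for the mains-current example.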


Fig: 2.1.3 STATIONARY SIGNAL

Fig: 2.1.6 FT OF A NON-STATIONARY SIGNAL

Do not worry about the little ripples at this time; they are due to sudden changes from one frequency component to another. Now compare Figures 2.1.4 and 2.1.6. The similarity between these two spectra should be apparent. Both of them show four spectral components at exactly the same frequencies, i.e., at 10, 25, 50, and 100 Hz. Other than the ripples and the difference in amplitude (which can always be normalized), the two spectra are almost identical, although the corresponding time-domain signals are not even close to each other. The signals involve the same frequency components, but the first one has these frequencies at all times while the second has them in different intervals. So how can the spectra of two entirely different signals look so much alike? Recall that the FT gives the spectral content of the signal, but it gives no information regarding where in time those spectral components appear. Therefore, the FT is not a suitable technique for non-stationary signals, with one exception: the FT can be used for non-stationary signals if we are only interested in what spectral components exist in the signal, and not in where they occur. However, if this information is needed, i.e., if we want to know what spectral components occur at what time (interval), then the Fourier transform is not the right transform to use. When the time localization of the spectral components is needed, a transform giving the time-frequency representation of the signal is needed. Hence we turn to the WAVELET TRANSFORM.

2.2) THE WAVELET TRANSFORM

The wavelet transform is a transform of this type, i.e., it provides a time-frequency representation. There are other transforms which give this information too, such as the short-time Fourier transform and Wigner distributions.
The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The WT was developed as an alternative to the STFT (short-time Fourier transform). The advantages of the wavelet transform are:
a. It overcomes the fixed-resolution problem of the STFT by using a variable-length window.
b. Analysis windows of different lengths are used for different frequencies.
c. For analysis of high frequencies, narrower windows are used for better time resolution.

Fig: 2.1.4 FT OF A STATIONARY SIGNAL

Note the four spectral components corresponding to the frequencies 10, 25, 50 and 100 Hz. Contrary to the signal above, the signal shown below is a non-stationary signal whose frequency constantly changes in time; it is known as a "chirp" signal.

Fig: 2.1.5 NON-STATIONARY SIGNAL

The above plot shows a signal with four different frequency components at four different time intervals, hence a non-stationary signal. The interval 0 to 300 ms has a 100 Hz sinusoid, the interval 300 to 600 ms has a 50 Hz sinusoid, the interval 600 to 800 ms has a 25 Hz sinusoid, and finally the interval 800 to 1000 ms has a 10 Hz sinusoid. The following is its FT:


d. For analysis of low frequencies, wider windows are used for better frequency resolution.
e. This works well if the signal to be analyzed mainly consists of slowly varying characteristics with occasional short high-frequency bursts.
f. The Heisenberg principle still holds.
g. The function used to window the signal is called the wavelet.

Wavelet transforms are basically built on two properties: a) the scaling property and b) the translation property.

Translation property: translation is a time shift, f(t) → f(t − τ), which determines the location of the analysis window in time.

Scaling property: scaling, f(t) → f(a·t) with a > 0, has a meaning similar to that of scale in maps. If a > 1, contraction takes place, i.e., low scale (high frequency); if 0 < a < 1, dilation (expansion) takes place, i.e., large scale (lower frequency).
A. Large scale: overall view, long-term behavior.
B. Small scale: detail view, local behavior.

Continuous Wavelet Transform:

The continuous wavelet transform is obtained using the equation

    CWT(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt

where τ is the translation parameter, s is the scale parameter, and ψ(t) is the mother wavelet.

Computation of CWT

DISCRETE WAVELET TRANSFORM:


We need not use a uniform sampling rate for the translation parameter, since we do not need as high a time-sampling rate when the scale is high (low frequency). Let's consider the following sampling grid:


Fig: 2.2.3 1D WAVELET TRANSFORMS

Fig: 2.2.2 SAMPLING GRID

SCALING FUNCTION:


The equation for a 1D DWT (one level of analysis filtering followed by subsampling by 2) is

    ylow[k] = Σn x[n] · h[2k − n],    yhigh[k] = Σn x[n] · g[2k − n]

where h[n] and g[n] are the low pass and high pass analysis filters.

Consider an example as shown below for a 2D wavelet transform.

2D WAVELET FUNCTIONS:


Fig: 2.2.5 SINGLE STAGE DECOMPOSITION

Fig: 2.2.4 IMPLEMENTATION OF 2D WAVELET TRANSFORM

In DWT, we pass the time-domain signal through various high pass and low pass filters, which filter out either the high frequency or the low frequency portions of the signal. This procedure is repeated, and every time some portion of the signal corresponding to some frequencies is removed. This is the technique used for compression of an image using the wavelet transform. Here is how it works. The WT can be performed using two elimination methods: 1) the HH elimination method and 2) the H* elimination method. The elimination method is chosen based on the required compression. Now suppose we have a signal which has frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a high pass and a low pass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the low pass portion) and the portion corresponding to 500-1000 Hz (the high pass portion). Then we take either portion (usually the low pass portion) or both, and do the same thing again. This operation is called decomposition. The figures below show single stage and multi stage decomposition in the wavelet transform.

Fig: 2.2.6 MULTI STAGE DECOMPOSITION

Assuming that we have taken the low pass portion, we now have 3 sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz and 500-1000 Hz. Then we take the low pass portion again and pass it through low and high pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a pre-defined level. We then have a bunch of signals which actually represent the same signal, but each corresponding to a different frequency band. We know which signal corresponds to which frequency band, and based on the required compression ratio some frequency bands are computed and some are skipped, as shown below. The results of applying the discrete wavelet transform to the image in a single stage and in multiple stages are shown below.
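The recursive band splitting in the 1000 Hz example above can be written out explicitly; this is an illustrative sketch and the function name is an assumption.

```python
# At each level the remaining low band [0, f] splits into [0, f/2]
# (kept for further splitting) and [f/2, f] (emitted as a detail band);
# the final low band is the approximation.

def decomposition_bands(max_freq, levels):
    """Return the (low, high) frequency bands after `levels` splits."""
    bands, low = [], max_freq
    for _ in range(levels):
        bands.append((low / 2, low))   # high pass portion at this level
        low /= 2
    bands.append((0.0, low))           # final approximation band
    return bands
```

For a 1000 Hz signal and three levels this reproduces the bands in the text: 500-1000 Hz, 250-500 Hz, 125-250 Hz, and the 0-125 Hz approximation.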



Fig: 2.2.7 ORIGINAL IMAGE

Fig: 2.2.8 FIRST STAGE DISCRETE WAVELET TRANSFORM


In wavelet analysis the signal is multiplied by a function (a wavelet, analogous to the window function in the STFT), and the transform is computed separately for different segments of the time-domain signal. The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform. The term wavelet means small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported); the wave refers to the condition that the function is oscillatory. In terms of frequency, low frequencies (high scales) correspond to global information about a signal (which usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (which usually lasts a relatively short time). Fortunately, in practical applications, low scales (high frequencies) do not last for the entire duration of the signal, unlike those shown in the figure, but usually appear from time to time as short bursts or spikes.

The discrete wavelet transform (DWT), on the other hand, provides sufficient information both for analysis and synthesis of the original signal, with a significant reduction in computation time, and it is considerably easier to implement than the CWT. In the discrete case, filters of different cutoff frequencies are used to analyze the signal at different scales. The signal is passed through a series of high pass filters to analyze the high frequencies, and through a series of low pass filters to analyze the low frequencies. The resolution of the signal, which is a measure of the amount of detail information in the signal, is changed by the filtering operations, and the scale is changed by upsampling and downsampling (subsampling) operations. Subsampling a signal corresponds to reducing the sampling rate, or removing some of the samples of the signal.
For example, subsampling by two refers to dropping every other sample of the signal; subsampling by a factor n reduces the number of samples in the signal n times. Upsampling a signal corresponds to increasing the sampling rate of a signal by adding new samples to it. For example, upsampling by two refers to adding a new sample, usually a zero or an interpolated value, between every two samples of the signal; upsampling by a factor of n increases the number of samples by a factor of n. The procedure starts with passing the signal (sequence) through a half-band digital low pass filter with impulse response h[n]. Filtering a signal corresponds to the mathematical operation of convolution of the signal with the impulse response of the filter. The convolution operation in discrete time is defined as follows:

    (x * h)[n] = Σk x[k] · h[n − k]
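The three operations just described (convolution, subsampling and upsampling) can be written in their simplest form as an illustrative sketch; the function names are assumptions.

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum over k of x[k] * h[n - k]."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def downsample(x, n=2):
    """Keep every n-th sample (subsampling by n)."""
    return x[::n]

def upsample(x, n=2):
    """Insert n-1 zeros after each sample (upsampling by n)."""
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (n - 1))
    return out
```

Note that downsampling after upsampling by the same factor recovers the original sequence, while the reverse order does not, which is why information is lost only in the subsampling direction.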


The unit of frequency is of particular importance at this point. In discrete signals, frequency is expressed in terms of radians. Accordingly, the sampling frequency of the signal is equal to 2π radians in terms of radial frequency. Therefore, the highest frequency component that exists in a signal will be π radians, if the signal is sampled at the Nyquist rate (which is twice the maximum frequency that exists in the signal); that is, the Nyquist rate corresponds to π rad in the discrete frequency domain. Therefore, using Hz is not appropriate for discrete signals. After passing the signal through a half-band low pass filter, half of the samples can be eliminated according to the Nyquist rule, since the signal now has a highest frequency of π/2 radians instead of π radians. Simply discarding every other sample will subsample the signal by two, and the signal will then have half the number of points. The scale of the signal is now doubled. Note that the low pass filtering removes the high frequency information, but leaves the scale unchanged; only the subsampling process changes the scale. Resolution, on the other hand, is related to the amount of information in the signal, and therefore it is affected by the filtering operations. Half-band low pass filtering removes half of the frequencies, which can be interpreted as losing half of the information; therefore, the resolution is halved after the filtering operation. Note, however, that the subsampling operation after filtering does not affect the resolution, since removing half of the spectral components from the signal makes half the number of samples redundant anyway. Half the samples can be discarded without any loss of information. In summary, the low pass filtering halves the resolution, but leaves the scale unchanged. The signal is then subsampled by 2 since half of the samples are redundant. This doubles the scale.
This procedure can mathematically be expressed as

    y[n] = Σk x[k] · h[2n − k]

Having said that, we now look at how the DWT is actually computed. The DWT analyzes the signal at different frequency bands with different resolutions by decomposing the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are associated with low pass and high pass filters, respectively. The decomposition of the signal into different frequency bands is simply obtained by successive high pass and low pass filtering of the time-domain signal. The original signal x[n] is first passed through a half-band high pass filter g[n] and a low pass filter h[n]. After the filtering, half of the samples can be eliminated according to the Nyquist rule, since the signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be subsampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can mathematically be expressed as follows:

    yhigh[k] = Σn x[n] · g[2k − n]
    ylow[k]  = Σn x[n] · h[2k − n]

where yhigh[k] and ylow[k] are the outputs of the high pass and low pass filters, respectively, after subsampling by 2. This decomposition halves the time resolution, since only half the number of samples now characterizes the entire signal. However, it doubles the frequency resolution, since the frequency band of the signal now spans only half the previous frequency band, effectively reducing the uncertainty in frequency by half. The above procedure, also known as subband coding, can be repeated for further decomposition. At every level, the filtering and subsampling result in half the number of samples (and hence half the time resolution) and half the frequency band spanned (and hence double the frequency resolution). The figure below illustrates this procedure, where x[n] is the original signal to be decomposed, and h[n] and g[n] are the low pass and high pass filters, respectively. The bandwidth of the signal at every level is marked on the figure as "f".
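One level of this decomposition can be demonstrated with the Haar filters h = [1/2, 1/2] (low pass) and g = [1/2, −1/2] (high pass), an assumed concrete choice since the text does not fix a particular wavelet. Note how the number of samples is halved, exactly as the equations above state.

```python
# One level of DWT analysis with Haar filters: filtering by h and g
# followed by subsampling by 2, i.e.
#   ylow[k]  = (x[2k] + x[2k+1]) / 2   (averages: coarse approximation)
#   yhigh[k] = (x[2k] - x[2k+1]) / 2   (differences: detail information)

def haar_level(x):
    """Return (ylow, yhigh) for an even-length sequence x."""
    ylow = [(x[2 * k] + x[2 * k + 1]) / 2 for k in range(len(x) // 2)]
    yhigh = [(x[2 * k] - x[2 * k + 1]) / 2 for k in range(len(x) // 2)]
    return ylow, yhigh
```

Applying this repeatedly to the ylow output is precisely the multi-level decomposition described in the following example.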

Fig: 2.2.10 DWT COEFFICIENTS The Sub band Coding Algorithm As an example, suppose that the original signal x[n] has 512 sample points, spanning a frequency band of zero to p rad/s. At the first decomposition level, the signal is passed through the high pass and low pass filters, followed by sub sampling by 2. The output of the high pass filter has 256 points (hence half the time resolution), but it only spans the frequencies p/2 to p rad/s (hence double the frequency resolution). These 256 samples constitute the first level of DWT coefficients. The output of the low pass filter also has 256 samples, but it spans the other half of the .
frequency band, frequencies from 0 to π/2 rad/s. This signal is then passed through the same low pass and high pass filters for further decomposition. The output of the second low pass filter followed by subsampling has 128 samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second high pass filter followed by subsampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The second high pass filtered signal constitutes the second level of DWT coefficients. This signal has half the time resolution, but twice the frequency resolution, of the first level signal. In other words, time resolution has decreased by a factor of 4, and frequency resolution has increased by a factor of 4, compared to the original signal. The low pass filter output is then filtered once again for further decomposition. This process continues until two samples are left. For this specific example there would be 8 levels of decomposition, each having half the number of samples of the previous level. The DWT of the original signal is then obtained by concatenating all coefficients starting from the last level of decomposition (the remaining two samples, in this case). The DWT will then have the same number of coefficients as the original signal. The frequencies that are most prominent in the original signal will appear as high amplitudes in the region of the DWT signal that includes those particular frequencies. The difference from the Fourier transform is that the time localization of these frequencies is not lost. However, the time localization has a resolution that depends on the level at which the frequencies appear. If the main information of the signal lies in the high frequencies, as happens most often, the time localization of these frequencies will be more precise, since they are characterized by a larger number of samples.
If the main information lies only at very low frequencies, the time localization will not be very precise, since few samples are used to express the signal at these frequencies. This procedure in effect offers good time resolution at high frequencies and good frequency resolution at low frequencies. Suppose we have a 256-sample long signal sampled at 10 MHz and we wish to obtain its DWT coefficients. Since the signal is sampled at 10 MHz, the highest frequency component that exists in the signal is 5 MHz. At the first level, the signal is passed through the low pass filter h[n] and the high pass filter g[n], the outputs of which are subsampled by two. The high pass filter output gives the first level DWT coefficients. There are 128 of them, and they represent the signal in the [2.5, 5] MHz range. These 128 samples are the last 128 samples plotted. The low pass filter output, which also has 128 samples but spans the frequency band [0, 2.5] MHz, is further decomposed by passing it through the same h[n] and g[n]. The output of the second high pass filter gives the level 2 DWT coefficients, and these 64 samples precede the 128 level 1 coefficients in the plot. The output of the second low pass filter is further decomposed, once again by passing it through the filters h[n] and g[n]. The output of the third high pass filter gives the level 3 DWT coefficients. These 32 samples precede the level 2 DWT coefficients in the plot. The procedure continues until only 1 DWT coefficient can be computed at level 9. This one coefficient is the first to be
plotted in the DWT plot. This is followed by 1 level 8 coefficient, 2 level 7 coefficients, 4 level 6 coefficients, 8 level 5 coefficients, 16 level 4 coefficients, 32 level 3 coefficients, 64 level 2 coefficients and finally 128 level 1 coefficients. Note that fewer and fewer samples are used at lower frequencies; therefore, the time resolution decreases as frequency decreases, but since the frequency interval also decreases at low frequencies, the frequency resolution increases. Obviously, the first few coefficients would not carry much information, simply due to the greatly reduced time resolution. One area that has benefited the most from this particular property of the wavelet transform is image processing. DWT can be used to reduce the image size without losing much of the resolution. Here is how: for a given image, you can compute the DWT of, say, each row, and discard all values in the DWT that are less than a certain threshold. We then save only those DWT coefficients that are above the threshold for each row, and when we need to reconstruct the original image, we simply pad each row with as many zeros as the number of discarded coefficients and use the inverse DWT to reconstruct each row of the original image. We can also analyze the image at different frequency bands, and reconstruct the original image using only the coefficients of a particular band.
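The row-wise threshold-and-reconstruct idea can be sketched as follows (illustrative Python; the project itself is implemented in MATLAB, and the unnormalized average/difference Haar step and the sample pixel row are assumptions for the example):

```python
def haar_forward(row):
    """Single-level Haar DWT of one image row: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    avg = [(row[2*i] + row[2*i + 1]) / 2 for i in range(len(row) // 2)]
    dif = [(row[2*i] - row[2*i + 1]) / 2 for i in range(len(row) // 2)]
    return avg, dif

def haar_inverse(avg, dif):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

row = [100, 102, 101, 99, 50, 52, 200, 198]   # hypothetical gray levels
avg, dif = haar_forward(row)

# "Compression": discard (zero out) detail coefficients below a threshold
T = 2.0
dif_kept = [d if abs(d) >= T else 0 for d in dif]
approx = haar_inverse(avg, dif_kept)

# Without thresholding the reconstruction is exact
assert haar_inverse(avg, dif) == row
# With thresholding it is close: the per-pixel error stays below the threshold
assert all(abs(a - b) < T for a, b in zip(approx, row))
```

The zeroed detail coefficients play the role of the "padded zeros" mentioned in the text: they cost nothing to store but still allow an approximate inverse DWT.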

2.3) WATERMARKING
Digital watermarking is an adaptation of the commonly used and well-known paper watermarks to the digital world. It describes methods and technologies that allow hiding information, for example a number or text, in digital media such as images, video and audio. The embedding takes place by manipulating the content of the digital data, meaning the information is not embedded in a frame around the data. There are two types of watermarks: 1) VISIBLE WATERMARKS: These are visible to the viewer, as in a bond paper, where they mark the paper type.

2) INVISIBLE WATERMARKS: These are invisible to the viewer and are useful for identifying the authorized owner.

6. Imperceptibility: The watermark should not be visible to the human visual system (HVS) and should not degrade the image quality. 7. Reliability: The application should return the watermark reliably each time.

2.3.2) DOMAINS USED IN WATERMARKING


Spatial domain: This is one of the simplest techniques, obtained by LSB substitution, i.e. obtain the bit planes of the host image and replace the zero (least significant) bit plane of the host image with the watermark image.
Advantages: a) a simple technique; b) the original image is not required to retrieve the watermark from the watermarked image; c) no blocking artifacts; d) maximum capacity.
Disadvantages: a) prone to tampering and to attacks like compression, rotation, scaling, translation, cropping, etc.
Transform domain: The host image is transformed into another domain using the DCT, Hartley, or wavelet transform, etc. The watermark image is embedded in the frequency coefficients of the transformed host image. The watermark is extracted from the watermarked image by taking the inverse transform and identifying the coefficients.
Advantages: a) robustness; b) resistance to rotation, scaling, translation and compression; c) imperceptibility.
Disadvantages: a) less capacity; b) computationally complex; c) blocking artifacts due to block processing.
Hybrid domain: This is intermediate between the spatial and transform domains; it is a combination of both.
Advantages: a) increases the capacity of the watermark; b) makes use of the benefits of transform domains; c) maximizes the immunity of the watermark against various distortion attacks.
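The spatial-domain LSB substitution described above can be sketched in a few lines (illustrative Python; the host pixel values and watermark bits are hypothetical 8-bit examples):

```python
def embed_lsb(host_pixels, watermark_bits):
    """Replace the zero (least significant) bit plane of the host
    pixels with the watermark bits."""
    return [(p & ~1) | b for p, b in zip(host_pixels, watermark_bits)]

def extract_lsb(pixels):
    """Recover the watermark by reading back the zero bit plane."""
    return [p & 1 for p in pixels]

host = [200, 131, 54, 77, 18, 255, 0, 96]   # hypothetical 8-bit gray levels
wm   = [1, 0, 1, 1, 0, 0, 1, 0]             # watermark bit plane

marked = embed_lsb(host, wm)
assert extract_lsb(marked) == wm
# Each pixel changes by at most 1 gray level, so the mark stays invisible
assert all(abs(m - p) <= 1 for m, p in zip(marked, host))
```

The fragility noted in the disadvantages is visible here: any operation that disturbs the lowest bit plane (e.g. lossy compression) destroys the watermark.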

Fig: 2.3.2 INVISIBLE WATERMARKING
The first applications of watermarking that came to mind were related to copyright protection of digital media. In the past, duplicating artwork was quite complicated and required great expertise so that the counterfeit looked like the original. In the digital world, however, this is no longer true: it is extremely easy for anyone to duplicate digital data, even without any loss of quality.

Fig: 2.3.3 Classification of information hiding technique

2.3.1) REQUIREMENTS OF DIGITAL WATERMARKING


Digital watermarking has to meet the following requirements:
1. Perceptual transparency: The algorithm must embed data without affecting the perceptual quality of the underlying host signal.
2. Security: A secure data embedding procedure cannot be broken unless an unauthorized user gains access to the secret key that controls the insertion of data into the host signal.
3. Robustness: The watermark must survive attacks by lossy data compression and image manipulation like cut and paste, filtering, etc.
4. Unambiguity: Retrieval of the watermark should unambiguously identify the owner.
5. Universality: The same watermark algorithm should be applicable to all multimedia under consideration.
2.3.3) APPLICATIONS OF WATERMARKING Watermarking has a wide range of applications. Watermarks can be used for:

1. Data Hiding -- providing private secret messages
2. Copyright Protection -- to prove ownership
3. Copy Control -- to trace illegal copies and license agreements
4. Data Authentication -- to check if content is modified
5. Broadcast Monitoring -- for commercial advertisement
6. Copy Protection -- to protect against illegal copying of the information
2.3.4) WATERMARKING COMPONENTS
The components of a watermarking system are its visibility, capacity and robustness.

Digital signatures or digital envelopes (transformed forms of a message) are used by specialized systems or organizations to check the authenticity of a message. The embedding mechanism entails imposing imperceptible changes on the host signal to generate a watermarked signal containing the watermark information, while the extraction routine attempts to reliably recover the hidden watermark from a possibly tampered watermarked signal. Images are one of the most used categories of multimedia signals; for example, 80% of the data transmitted over the internet are images. This is why it is very important to study the digital watermarking of images.

2.3.5) TYPES OF WATERMARK IN TERMS OF FIDELITY
There are three types of watermarks in terms of fidelity:
a) Robust Watermark: This watermark has the ability to withstand various image attacks, thus providing authentication.
b) Fragile Watermark: This watermark is mainly used for detecting modification of data. It gets degraded even by a slight modification of the data in the image.
c) Semi-Fragile Watermark: This is intermediate between fragile and robust watermarks. It is not robust against all possible image attacks.
2.3.6) WATERMARK INSERTION SYSTEM
Watermarks can be inserted using various techniques, such as: a) flipping the lowest order bit of chosen pixels; b) superimposing a symbol over an area of the image; c) using color separation, i.e. the watermark appears in only one color band; d) applying a transform and then altering chosen frequencies of the original.

CAPACITY: This refers to the amount of information we are able to insert into the host image:
Capacity = (bytes of hidden data) / (bytes of cover image)

ROBUSTNESS:
It refers to the ability of the inserted information to withstand image modifications. At present, digital watermarking research primarily involves the identification of effective signal processing strategies to discreetly, robustly, and unambiguously hide the watermark information in multimedia signals. The general process involves the use of a key which must be used to successfully embed and extract the hidden information. The embedding mechanism entails imposing imperceptible changes on the host signal to generate a watermarked signal containing the watermark information, while the extraction routine attempts to reliably recover the hidden watermark from a possibly tampered watermarked signal. The objective of this project is the security of the image, and security has the following objectives:
- confidentiality of the transmitted information,
- integrity of the transmitted information,
- authenticity of the transmitted information,
- non-repudiation of the transmitted information,
- availability of the required information and of the required services.
The authenticity of the image can be verified by another person or system connected to the same network. This kind of authenticity check is very important and has been intensively developed in recent years. The author of the message sends a transformed form of another message, related to the first one, to a third entity. By processing this transformed form of the message, the third entity can establish the author. Today, digital signatures or digital envelopes (transformed forms of a message) are used for this purpose.
2.4) COMPRESSION
Compression refers to the process of reducing the amount of data required to represent a given quantity of information. The compression system model consists of two parts: 1. the compressor and 2. the decompressor.
Image compression model

Fig: 2.4.1 SOURCE ENCODER

Fig: 2.4.2 SOURCE DECODER
Data which carries no relevant information is called redundant data. The relative data redundancy is given by Rd = 1 - 1/CR, where CR is the compression ratio, CR = n1/n2, and n1 and n2 are the number of information carrying units in the input and output images. In a digital image three basic redundancies are present which can be eliminated for compression: 1) interpixel redundancy, 2) coding redundancy, 3) psychovisual redundancy.
2.4.1) INTERPIXEL REDUNDANCIES
These are the correlations which exist between pixels due to structural or geometric relationships between the objects in the image; some redundancy arises from this inter-correlation between the pixels within an image. It goes by a variety of names, such as spatial redundancy, geometric redundancy, and interframe redundancy. To eliminate it, the 2-D pixel array normally used for human viewing and interpretation must be transformed into a more efficient format, i.e. the differences between adjacent pixels are used to represent the image. This is usually done by applying transforms, and the process is also known as mapping. In this project interpixel redundancies are removed by using wavelet transforms.
2.4.2) PSYCHOVISUAL REDUNDANCIES
The human eye does not respond with equal sensitivity to all visual information. Certain information simply has less relative importance than other information in normal visual processing. Such information is said to be psychovisually redundant and can be removed without significantly impairing the quality of image perception, because the information itself is not essential for normal visual processing. Since the elimination of psychovisually redundant data results in a loss of quantitative information, it is commonly referred to as quantization. Quantization is the mapping of a broad range of input values to a limited number of output values. Quantization is an irreversible process and results in lossy compression.
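The quantization just described -- mapping a broad range of inputs to a limited number of output values -- can be sketched as a uniform scalar quantizer (illustrative Python; the 16-level choice is an assumption for the example):

```python
def quantize(value, levels=16):
    """Uniformly quantize an 8-bit gray level (0-255) to one of
    `levels` representative values (the midpoint of each bin)."""
    step = 256 // levels
    return (value // step) * step + step // 2

# Every input inside a bin maps to the same output: the process is
# irreversible, which is exactly why quantization is lossy.
assert quantize(0) == 8 and quantize(15) == 8
assert quantize(255) == 248
# Only 16 distinct output values remain out of 256 possible inputs
assert len({quantize(v) for v in range(256)}) == 16
```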
2.4.3) CODING REDUNDANCIES In the process of removing coding redundancy, the shortest code word is assigned to the gray levels that occur most frequently in an image, i.e. fewer bits are assigned to the most probable gray levels than to the less probable ones, and this achieves data compression. This process is referred to as variable length coding. Coding redundancies are removed by the process of encoding. There are various encoding processes, such as: 1) Huffman coding, 2) arithmetic coding, 3) LZW coding, 4) bit plane coding, 5) SPIHT coding.
In this project we have used two different encoding techniques, i.e. Huffman coding and SPIHT coding, using both the HH and H* elimination techniques.
2.4.4) HUFFMAN CODING
The Huffman code, developed by D. Huffman in 1952, is a minimum length code and the most popular technique for removing coding redundancy. Huffman coding yields the smallest possible number of code symbols per source symbol. In terms of the noiseless coding theorem, the resulting code is optimal for a fixed value of n, subject to the constraint that the source symbols be coded one at a time. Given a statistical distribution of the gray levels (the histogram), the Huffman algorithm will generate a code that is as close as possible to the minimum bound. For complex images, Huffman coding alone will typically reduce the file by 10% to 50%, but this ratio can be improved to 2:1 or 3:1 by preprocessing to remove irrelevant information. The Huffman algorithm can be described in five steps:
1. Find the gray level probabilities for the image by finding the histogram.
2. Order the input probabilities from smallest to largest.
3. Combine the smallest two by addition.
4. Go to step 2, until only two probabilities are left.
5. By working backward along the tree, generate the code by alternating assignment of 0 and 1.
An example of how the Huffman coding algorithm works is shown below: 1) The first step in Huffman's approach is to create a series of source reductions by ordering the probabilities of the symbols under consideration and combining the lowest probability symbols into a single symbol that replaces them in the next source reduction, as shown in the tabular column below.

The second step in Huffman's procedure is to code each reduced source, starting with the smallest source and working back to the original source; the minimal length binary code for a two symbol source is the symbols 0 and 1.
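The five steps above can be sketched with a standard heap-based construction (illustrative Python; the symbol probabilities below are hypothetical, since the paper's own table is not reproduced here):

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codes: repeatedly merge the two least probable
    symbols, prepending 0/1 to the codes on each side of the merge."""
    heap = [[p, [sym, ""]] for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Hypothetical gray level probabilities (a histogram normalized to 1)
probs = {"a1": 0.4, "a2": 0.3, "a3": 0.1, "a4": 0.1, "a5": 0.06, "a6": 0.04}
codes = huffman_codes(probs)

# The most probable symbol gets the shortest code word
assert len(codes["a1"]) == 1
# The code is prefix-free: no code word is a prefix of another
words = list(codes.values())
assert not any(u != v and v.startswith(u) for u in words for v in words)
```

The prefix-free property is what makes left-to-right decoding of the encoded bit string unambiguous, as the text notes later.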

So, SPIHT can be very useful for applications where the user can quickly inspect the image and decide if it should really be downloaded, is good enough to be saved, or needs refinement. A. Optimized Embedded Coding: Suppose you need to compress an image for three remote users. Each one has different needs of image reproduction quality, and you find that those qualities can be obtained with the image compressed to at least 8 Kb, 30 Kb, and 80 Kb, respectively. If you use a non-embedded encoder (like JPEG), to save in transmission costs (or time) you must prepare one file for each user. On the other hand, if you use an embedded encoder (like SPIHT), then you can compress the image to a single 80 Kb file, and then send the first 8 Kb of the file to the first user, the first 30 Kb to the second user, and the whole file to the third user. Surprisingly, with SPIHT all three users would get (for the same file size) an image quality comparable or superior to the most sophisticated non-embedded encoders available today. SPIHT achieves this feat by optimizing the embedded coding process and always coding the most important information first. B. Compression Algorithm: SPIHT represents a small "revolution" in image compression because it broke the trend toward more complex (in both the theoretical and the computational senses) compression schemes. While researchers had been trying to improve previous schemes for image coding using very sophisticated vector quantization, SPIHT achieved superior results using the simplest method: uniform scalar quantization. Thus, it is much easier to design fast SPIHT codecs. C. Encoding/Decoding Speed: The SPIHT process represents a very effective form of entropy coding. When SPIHT coding is compared to other coding techniques, the difference in compression is small, showing that it is not necessary to use slow methods. A straightforward consequence of the compression simplicity is the greater coding/decoding speed.
The SPIHT algorithm is nearly symmetric, i.e., the time to encode is nearly equal to the time to decode. (Complex compression algorithms tend to have encoding times much larger than their decoding times.) D. Applications: SPIHT exploits properties that are present in a wide variety of images. It has been successfully tested on natural (portraits, landscapes, weddings, etc.) and medical (X-ray, CT, etc.) images. Furthermore, its embedded coding process proved to be effective over a broad range of reconstruction qualities. For instance, it can code fair-quality portraits and high-quality medical images equally well (as compared with other methods under the same conditions).

The final code appears at the far left in the above table, which shows that fewer bits are allotted to the most probable symbols. Huffman encoded symbols can be decoded by examining the individual symbols of the string in a left to right manner.
2.4.5) SPIHT CODING
SPIHT coding offers a new, fast and different implementation based on set partitioning in hierarchical trees, which provides better performance than other coding techniques. It has become the benchmark state-of-the-art algorithm for image compression. SPIHT has the following advantages: 1) good image quality and high PSNR, especially for color images; 2) it is optimized for progressive image transmission; 3) it produces a fully embedded coded file; 4) a simple quantization algorithm; 5) fast coding/decoding time; 6) wide application, completely adaptive; 7) can be used for lossless compression; 8) can code to an exact bit rate or distortion; 9) efficient combination with error protection.
Image quality: SPIHT yields very good quality visual images by exploiting the properties of wavelet transformed images.
Progressive image transmission: In some systems with progressive image transmission (like WWW browsers) the quality of the displayed image follows the sequence: (a) weird abstract art; (b) you begin to believe that it is an image of something; (c) CGA-like quality; (d) lossless recovery. With very fast links the transition from (a) to (d) can be so fast that you will never notice. With slow links (how "slow" depends on the image size, colors, etc.) the time from one stage to the next grows exponentially, and it may take hours to download a large image. Considering that it may be possible to recover an excellent-quality image using 10-20 times fewer bits, it is easy to see the inefficiency. Furthermore, the mentioned systems are not efficient even for lossless transmission. The problem is that such widely used schemes employ a very primitive progressive image transmission method.
On the other extreme, SPIHT is a state-of-the-art method that was designed for optimal progressive transmission (and still beats most non-progressive methods!). It does so by producing a fully embedded coded file in a manner that at any moment the quality of the displayed image is the best available for the number of bits received up to that moment.

E. Lossless Compression: SPIHT codes the individual bits of the image wavelet transform coefficients following a bit-plane sequence. Thus, it is capable of recovering the image perfectly (every single bit of it) by coding all bits of the transform. In other words, the property that SPIHT yields progressive transmission with practically no penalty in compression efficiency applies to lossless compression too. Rate or Distortion Specification: Almost all image compression methods developed so far do not have precise rate control. For some methods you specify a target rate, and the program tries to give something that is not too far from what you wanted. For others you specify a "quality factor" and wait to see if the size of the file fits your needs (if not, just keep trying...). The embedded coding property of SPIHT allows exact bit rate control, without any penalty in performance (no bits wasted on padding or whatever). The same property also allows exact mean squared-error (MSE) distortion control. Even though the MSE is not the best measure of image quality, it is far superior to other criteria used for quality specification. F. Error Protection: Errors in the compressed file cause havoc for practically all important image compression methods. This is not exactly related to variable length entropy coding, but to the necessity of using context generation for efficient compression. For instance, Huffman codes have the ability to quickly recover after an error. However, if they are used to code run-lengths, then that property is useless because all runs after an error would be shifted.

G. Use with Graphics: SPIHT uses wavelets designed for natural images. It was not developed for artificially generated graphical images that have very wide areas of the same color. Even though there are methods that try to compress both graphic and natural images efficiently, the best results for graphics have been obtained with methods like the Lempel-Ziv algorithm. Actually, graphics can be compressed much more effectively using the rules that generated them. There is still no "universal compression" scheme; in the future we will use more extensively what is already being used by WWW browsers: one decoder for text, another for sound, another for natural images (how about SPIHT?), another for video, etc.
2.4.6) SPIHT ALGORITHM
The SPIHT algorithm is based on 3 concepts: 1) ordered bit plane progressive transmission; 2) set partitioning sorting algorithm; 3) spatial orientation trees.
Ordered Bit Plane Progressive Transmission: A major objective in a progressive transmission scheme is to select the most important information -- that which yields the largest distortion reduction -- to be transmitted first. It incorporates two concepts: ordering the coefficients by magnitude and transmitting the most significant bits (MSBs) first.
Set Partitioning Sorting Algorithm: The sorting algorithm divides the set of pixels into partitioning subsets Tm and performs the significance test using the function
Sn(T) = 1, if max {|c(i, j)| : (i, j) in T} >= 2^n
Sn(T) = 0, otherwise
where n is the number of the pass (bit plane).
Spatial Orientation Trees:
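The set significance test Sn(T) can be sketched directly (illustrative Python; the coefficient set {13, 10, 6, 4} is a sample descendant set consistent with the worked example in this text):

```python
def significance(coeffs, n):
    """Sn(T): 1 if the set holds any coefficient with magnitude >= 2^n,
    0 otherwise (n is the number of the pass, i.e. the bit plane)."""
    return 1 if max(abs(c) for c in coeffs) >= 2 ** n else 0

D_set = [13, 10, 6, 4]             # sample descendant coefficient set
assert significance(D_set, 4) == 0  # max magnitude 13 < 2^4 = 16: insignificant
assert significance(D_set, 3) == 1  # 13 >= 2^3 = 8: significant next pass
```

A set that tests insignificant is coded with a single 0 bit, which is where SPIHT's efficiency comes from.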

SPIHT is no exception to this rule. One difference, however, is that due to SPIHT's embedded coding property, it is much easier to design efficient error-resilient schemes. This happens because with embedded coding the information is sorted according to its importance, and the requirement for powerful error correction codes decreases from the beginning to the end of the compressed file. If an error is detected but not corrected, the decoder can discard the data after that point and still display the image obtained with the bits received before the error. Also, with bit-plane coding the error effects are limited to below the previously coded planes. Another reason is that SPIHT generates two types of data. The first is sorting information, which needs error protection as explained above. The second consists of uncompressed sign and refinement bits, which do not need special protection because they affect only one pixel. While SPIHT can yield gains like 3 dB PSNR over methods like JPEG, its use in noisy channels, combined with error protection as explained above, leads to much larger gains, like 6-12 dB.
Fig: 2.4.3 SPATIAL ORIENTATION TREES

O(i, j): set of coordinates of all offspring of node (i, j); children only
D(i, j): set of coordinates of all descendants of node (i, j); children, grandchildren, great-grandchildren, etc.
H(i, j): set of all tree roots (nodes in the highest pyramid level); the parents
L(i, j) = D(i, j) - O(i, j): all descendants except the offspring; grandchildren, great-grandchildren, etc.
For practical applications the following sets are used to store the ordered lists:
LSP: list of significant pixels
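For the usual dyadic arrangement, the offspring of node (i, j) are the 2x2 block starting at (2i, 2j), and D and L follow from that rule; a sketch (illustrative Python, assuming an 8x8 transform so that L is non-empty):

```python
def O(i, j, size):
    """Offspring of node (i, j): its four children, if inside the array."""
    kids = [(2*i, 2*j), (2*i, 2*j + 1), (2*i + 1, 2*j), (2*i + 1, 2*j + 1)]
    return {(a, b) for a, b in kids if a < size and b < size}

def D(i, j, size):
    """All descendants: children, grandchildren, and so on."""
    out, frontier = set(), O(i, j, size)
    while frontier:
        out |= frontier
        frontier = {c for (a, b) in frontier for c in O(a, b, size)}
    return out

def L(i, j, size):
    """Descendants minus offspring: grandchildren and beyond."""
    return D(i, j, size) - O(i, j, size)

# In an 8x8 transform, node (0, 1) has 4 children and 16 grandchildren
assert O(0, 1, 8) == {(0, 2), (0, 3), (1, 2), (1, 3)}
assert len(D(0, 1, 8)) == 20
assert L(0, 1, 8) == D(0, 1, 8) - O(0, 1, 8)
```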

LIP: list of insignificant pixels
LIS: list of insignificant sets
To illustrate how SPIHT coding works, let's look at the following example.

D(0,1); similarly, the sets D(1,0) and D(1,1) are insignificant, hence we transmit two 0s. We need not process the LSP since it is empty. Update LSPT to LSP. The transmitted bit stream is 10000000 (8 bits).
LIP = {(0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = {(0,0)}

INITIALIZATION
LIP: (0,0) = 26, (0,1) = 6, (1,0) = -7, (1,1) = 7
LIS: D(0,1) = {13, 10, 6, 4}, D(1,0) = {4, -4, 2, -2}, D(1,1) = {4, -3, -2, 0}
LSP: empty
Fig: 2.4.6 BLOCK DIAGRAM AFTER FIRST SORTING PASS
After Second Sorting Pass: n = 4 - 1 = 3, threshold T1 = 2^n = 2^3 = 8.
Process LIP: S(0,1) = 6, S(1,0) = -7, S(1,1) = 7 are insignificant, hence we transmit three 0s.
Process LIS: The set D(0,1) contains 13 and 10, which are > T1, hence we transmit 1 for the set; then we transmit 10 for 13 and again 10 for 10, and move (0,2) and (0,3) to LSPT. The offspring 6 and 4 are < T1, so we transmit two 0s and move (1,2) and (1,3) to the LIP. The sets D(1,0) and D(1,1) are insignificant, hence we transmit two 0s.
Process LSP: C(0,0) = 26 = (11010)2 -- transmit the nth MSB = 1.
Update LSPT to LSP. The transmitted bit stream is 0001101000001 (13 bits).
LIP = {(0,1), (1,0), (1,1), (1,2), (1,3)}
LIS = {D(1,0), D(1,1)}
LSP = {(0,0), (0,2), (0,3)}

n = floor(log2(max coefficient)) = floor(log2(26)) = 4

The first step in SPIHT coding is the initialization of the sets LSP, LIS and LIP.
Fig 2.4.5 INITIALIZATION
Then the pixels are sorted according to the threshold given below.
After First Sorting Pass

Threshold T0 = 2^n = 2^4 = 16


Process LIP: S(0,0) = 26 > T0, so we transmit 1; since 26 is positive, we transmit 0; then move (0,0) to LSPT (temporary). S(0,1) = 6, S(1,0) = -7, S(1,1) = 7 are all < T0, hence insignificant; therefore transmit three 0s.
Process LIS: DS(0,1) = 13, DS(1,0) = 4 and DS(1,1) = 4 (the maximum descendant magnitudes) are all less than T0, hence we transmit a 0 for each complete set.
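The first sorting pass can be checked mechanically (illustrative Python; the 4x4 coefficient matrix is assembled from the root values and descendant sets given in this example):

```python
C = [[26,  6, 13, 10],    # roots (0,0),(0,1),(1,0),(1,1) in the top-left block
     [-7,  7,  6,  4],    # offspring of (0,1) occupy columns 2-3 of rows 0-1
     [ 4, -4,  4, -3],    # offspring of (1,0) and (1,1) fill rows 2-3
     [ 2, -2, -2,  0]]
T0 = 16

def offspring(i, j, size=4):
    kids = [(2*i, 2*j), (2*i, 2*j + 1), (2*i + 1, 2*j), (2*i + 1, 2*j + 1)]
    return [(a, b) for a, b in kids if a < size and b < size]

def descendants(i, j, size=4):
    out, stack = [], offspring(i, j, size)
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(offspring(node[0], node[1], size))
    return out

bits = ""
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:          # process LIP
    if abs(C[i][j]) >= T0:
        bits += "1" + ("0" if C[i][j] >= 0 else "1")   # significant + sign bit
    else:
        bits += "0"
for i, j in [(0, 1), (1, 0), (1, 1)]:                  # process LIS
    sig = max(abs(C[a][b]) for a, b in descendants(i, j)) >= T0
    bits += "1" if sig else "0"

assert bits == "10000000"   # matches the 8-bit stream of the example
```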
Fig: 2.4.7 BLOCK DIAGRAM AFTER SECOND SORTING PASS

DECODER
An example of decoding the above transmitted bits is shown below.
First receive: Get n = 4, T0 = 2^n = 16.
LIP = {(0,0), (0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = { }
The received bit stream is 10000000 (8 bits).
Process LIP: Get 1 = S(0,0) is significant; the next bit is zero, hence a positive value. Move (0,0) to the LSP, then reconstruct C(0,0) = (3/2)T0 = (3/2)·16 = 24. Get three 0s = S(0,1), S(1,0), S(1,1) are insignificant.
Process LIS: Get three 0s = DS(0,1), DS(1,0), DS(1,1) are insignificant.
LIP = {(0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = {(0,0)}

28 0 12 12
0 0 0 0
0 0 0 0
0 0 0 0
Fig: 2.4.9 PIXELS AFTER SECOND RECEIVE
2.5) DECOMPRESSION


The decompression process is exactly the reverse of the compression process. Decompression involves decoding, which consists of Huffman decoding or SPIHT decoding. The reconstruction is done using the inverse wavelet transform. The watermark can be retrieved by extracting the LSBs of the output image.
2.6) FIDELITY CRITERIA
During the removal of redundancies, i.e. compression, some information of interest may be lost; a repeatable or reproducible means of quantifying the nature and extent of information loss is highly desirable. There are two types of criteria used to make such an assessment: 1) objective fidelity criteria and 2) subjective fidelity criteria.
Subjective Fidelity Criteria: Most decompressed images are ultimately evaluated by human observers, therefore in the subjective criteria image quality is measured by the subjective evaluation of human observers, done by showing a decompressed image to a cross section of viewers and averaging their evaluations. The evaluations may be made using an absolute rating scale or by side by side comparison of the original and decompressed images.
Objective Fidelity Criteria: This offers a very simple and convenient mechanism for evaluating information loss. Here the level of information loss is expressed as a function of the original image and the decompressed image. For objective fidelity we use the MSE (mean square error), PSNR (peak signal to noise ratio) and CR (compression ratio). Let f(x, y) and f'(x, y) represent an input image and its decompressed version; then for any value of x, y the error e(x, y) is defined as e(x, y) = f'(x, y) - f(x, y), and the total error between two images of size M×N is
e_total = Σ (x = 0 to M-1) Σ (y = 0 to N-1) [f'(x, y) - f(x, y)]
243 http://sites.google.com/site/ijcsis/ ISSN 1947-5500

24 0 0 0

0 0 0 0

0 0 0 0

0 0 0 0

Fig: 2.4.8 PIXELS AFTER FIRST RECEIVE

Second receive: n = 4 - 1 = 3, T1 = 2^n = 8.
LIP = {(0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = {(0,0)}
The transmitted bit stream is 0001101000001 (13 bits).
Process LIP: Get 000 = S(0,1), S(1,0), S(1,1) are insignificant.
Process LIS: Get 1 = D(0,1) is significant.
Get 10 = C(0,2) is positively significant; move (0,2) to LSP, then reconstruct C(0,2) = +(3/2)T1 = (3/2)8 = 12.
Get 10 = C(0,3) is positively significant; move (0,3) to LSP, then reconstruct C(0,3) = +(3/2)T1 = (3/2)8 = 12.
Get 00 = C(1,2), C(1,3) are insignificant; move them to LIP.
Get 00 = D(1,0), D(1,1) are insignificant.
Process LSP: Get 1, then add 2^(n-1) = 2^2 = 4 to C(0,0): 24 + 4 = 28.
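The reconstruction arithmetic of the two passes above follows a simple rule. The sketch below (in Python, with helper names of our own choosing, not the project's MATLAB code) reproduces the numbers of the example:

```python
def initial_estimate(threshold):
    """A coefficient found significant at threshold T is reconstructed
    at the middle of the interval [T, 2T), i.e. (3/2)*T."""
    return 1.5 * threshold

def refine(value, threshold, bit):
    """One refinement bit per LSP entry: in the variant used above, a 1
    adds half the current threshold (2**(n-1)); a 0 would subtract it."""
    return value + threshold / 2 if bit else value - threshold / 2

# First receive: n = 4, T0 = 16, C(0,0) becomes significant.
c00 = initial_estimate(16)            # 24.0
# Second receive: n = 3, T1 = 8, C(0,2) and C(0,3) become significant.
c02 = initial_estimate(8)             # 12.0
c03 = initial_estimate(8)             # 12.0
# Refinement bit 1 for C(0,0): 24 + 2**(n-1) = 24 + 4 = 28.
c00 = refine(c00, 8, 1)               # 28.0
```

Each further pass halves the threshold, so the reconstructed coefficients converge to their true values as more bits arrive.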


The root mean square error is the square root of the squared error averaged over the M x N array:

RMSE = [ (1/MN) Σ(x=0..M-1) Σ(y=0..N-1) [f'(x, y) - f(x, y)]^2 ]^(1/2)

and the mean square error MSE is the quantity under the square root. If f'(x, y) is considered as the sum of the original image f(x, y) and a noise signal e(x, y), the mean square signal to noise ratio is

SNR = Σ Σ [f'(x, y)]^2 / Σ Σ [f'(x, y) - f(x, y)]^2

while the peak signal to noise ratio reported in the results is

PSNR = 10 log10( 255^2 / MSE ) dB

The compression ratio is given by

CR = (number of input pixels) / (number of output pixels)
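These criteria translate directly into code. The sketch below uses helper names of our own and assumes 8-bit images with peak value 255, which is the form consistent with the PSNR values reported in Section 4:

```python
import numpy as np

def mse(f, g):
    """Mean square error between original f and decompressed g."""
    d = g.astype(np.float64) - f.astype(np.float64)
    return float(np.mean(d ** 2))

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(f, g)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(n_in, n_out):
    """CR = number of input pixels / number of output values."""
    return n_in / n_out

f = np.zeros((4, 4), dtype=np.uint8)
g = np.full((4, 4), 2, dtype=np.uint8)     # every pixel off by 2
print(mse(f, g))                           # 4.0
print(round(psnr(f, g), 2))                # 42.11
```

As a sanity check, an MSE of 4.167 under this PSNR formula gives about 41.9 dB, matching Fig: 4.24.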

In this project we use the above Objective Fidelity Criteria for assessing image quality, and a comparison is made between Huffman and SPIHT w.r.t. MSE, PSNR and CR.

2.7) INTRODUCTION TO MATLAB 7.0

Fig 2.7.1 MATLAB

The software used in this project is MATLAB 7.0. MATLAB stands for Matrix Laboratory. The very first version of MATLAB, written at the University of New Mexico and Stanford University in the late 1970s, was intended for use in matrix theory, linear algebra and numerical analysis. Later, with the addition of several toolboxes, the capabilities of MATLAB expanded, and today it is a very powerful tool in the hands of an engineer. It offers a powerful programming language, excellent graphics and a wide range of expert knowledge. MATLAB is published by, and is a trademark of, The MathWorks, Inc. The focus in MATLAB is on computation, not mathematics: symbolic expressions and manipulations are not possible (except through the optional Symbolic Toolbox, a clever interface to Maple). All results are not only numerical but inexact, thanks to the rounding errors inherent in computer arithmetic.

The limitation to numerical computation can be seen as a drawback, but it is a source of strength too: MATLAB is much preferred to Maple, Mathematica and the like when it comes to numerics. On the other hand, compared to other numerically oriented languages like C++ and FORTRAN, MATLAB is much easier to use and comes with a huge standard library. The unfavorable comparison here is a gap in execution speed. This gap is not always dramatic, and it can often be narrowed or closed with good MATLAB programming. Moreover, one can link other codes into MATLAB, or vice versa, and MATLAB now optionally supports parallel computing. Still, MATLAB is usually not the tool of choice for maximum-performance computing. Typical uses include: a) math and computation b) algorithm development c) modeling, simulation and prototyping d) data analysis, exploration and visualization e) scientific and engineering graphics f) application development, including graphical user interface building. MATLAB is an interactive system whose basic data element is an array. Perhaps the easiest way to visualize MATLAB is to think of it as a full-featured calculator. Like a basic calculator, it does simple math such as addition, subtraction, multiplication and division. Like a scientific calculator, it handles square roots, complex numbers, logarithms and trigonometric operations such as sine, cosine and tangent. Like a programmable calculator, it can be used to store and retrieve data; you can create, execute and save sequences of commands, and you can make comparisons and control the order in which commands are executed. Finally, as a powerful calculator it allows you to perform matrix algebra, to manipulate polynomials and to plot data. When you start MATLAB the following window will appear:



Fig: 2.7.2 Main screen of MATLAB

When you start MATLAB, you get a multipaneled desktop. The layout and behavior of the desktop and its components are highly customizable (and may in fact already be customized for your site). The component that is the heart of MATLAB is called the Command Window, located on the right by default. Here you can give MATLAB commands typed at the prompt, >>. Unlike FORTRAN and other compiled computer languages, MATLAB is an interpreted environment: you give a command, and MATLAB executes it right away before asking for another. At the top left you can see the Current Directory. In general MATLAB is aware only of files in the current directory (folder) and on its path, which can be customized. For simple problems, entering commands at the MATLAB prompt is fast and efficient. However, as the number of commands increases, or when you wish to change the value of a variable and then re-evaluate all the other variables, typing at the command prompt becomes tedious. MATLAB provides a logical solution: place all your commands in a text file and then tell MATLAB to evaluate those commands. These files are called script files or simply M-files. To create an M-file, choose NEW from the File menu and then choose M-file, or click the appropriate icon in the command window. Then you will see this window:

The heart and soul of MATLAB is linear algebra. In fact, MATLAB was originally a contraction of Matrix Laboratory. More so than any other language, MATLAB encourages and expects you to make heavy use of arrays, vectors and matrices. MATLAB is oriented towards minimizing development and interaction time, not computational time. In some cases even the best MATLAB code may not keep up with good C code, but the gap is not always wide. In fact, on core linear algebra routines such as matrix multiplication and linear system solution, there is very little practical difference in performance. MATLAB's language has features that can make certain operations far more concise than the equivalent loops in C or FORTRAN. After you type your commands, save the file with an appropriate name in the directory "work". Functions are the main way to extend the capabilities of MATLAB. Compared to scripts, they are much better at compartmentalizing tasks. Each function starts with a line such as

function [out1, out2] = myfun(in1, in2, in3)

The variables in1, etc. are input arguments, and out1, etc. are output arguments. You can have as many of each type as you like (including zero) and call them whatever you want. The name myfun should match the name of the disk file.

3) METHODOLOGY Flow diagram showing the methodology of work for ENCODER

Fig: 2.7.3 M-File Screen

To run it, go to the command prompt and simply type its name, or press F5 in the M-file window. MATLAB is huge: nobody can tell you everything that you personally will need to know, nor could you remember it all anyway. It is essential that you become familiar with the online help. There are two levels of help: if you need quick help on the syntax of a command, use help. For example, help plot shows right in the Command Window all the ways in which you can use the plot command. Typing help by itself gives you a list of categories that themselves yield lists of commands. Typing doc followed by a command name brings up more extensive help in a separate window. For example, doc plot is better formatted and more informative than help plot. In the left panel one sees a hierarchical, browsable display of all the online documentation. Typing doc alone, or selecting Help from the menu, brings up the help window at its root page.

Fig: 3.1 ENCODER FLOW DIAGRAM The flow diagram showing the methodology of work done for DECODER


Fig: 3.2 DECODER FLOW DIAGRAM

Fig: 4.5 Image after embedding the message using Huffman coding and LL band (H* Elimination)

4) RESULTS
Input image:
The first block is the input image. In this project we used a part of a satellite image as input, which is shown below.

Fig: 4.3 Second Stage Discrete Wavelet Transform

Fig: 4.1 ORIGINAL IMAGE


WAVELET TRANSFORM

The Discrete Wavelet Transform was applied to this input image in two stages. In the first stage only the L and H bands were formed, as shown below.

Then the embedding process was carried out using the HH and H* Elimination methods. The message embedded in this project was "SIT DEPT"; any other message can be embedded as well, since the software provides for embedding an arbitrary message.
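As a rough illustration of this kind of embedding (a simple LSB scheme of our own, standing in for the project's actual MATLAB routine), message bits can be written into the least significant bits of the detail coefficients and read back later:

```python
import numpy as np

def embed_bits(detail, bits):
    """Replace the LSBs of integer detail coefficients with message bits.
    (Illustrative sketch; the paper's actual embedding details differ.)"""
    flat = detail.astype(np.int32).ravel().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & ~1) | b
    return flat.reshape(detail.shape)

def extract_bits(detail, n):
    """Read the first n LSBs back out of the band."""
    return [int(c) & 1 for c in detail.ravel()[:n]]

msg = "SIT DEPT"
bits = [(ord(ch) >> k) & 1 for ch in msg for k in range(8)]  # LSB-first
band = np.arange(64).reshape(8, 8)          # stand-in detail band
wm = embed_bits(band, bits)
out = extract_bits(wm, len(bits))
decoded = "".join(chr(sum(out[i * 8 + k] << k for k in range(8)))
                  for i in range(len(msg)))
print(decoded)                              # SIT DEPT
```

Because only the LSBs of detail coefficients change, the visible image is nearly unaffected, which is the point of hiding the mark in the less important details.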

4.1) HUFFMAN RESULTS:
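The figures in this section show Huffman-coded results. As a reminder of how Huffman coding assigns shorter codes to more frequent symbols (a minimal sketch of our own, not the project's MATLAB implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, tiebreak, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prefix 0 for the lighter subtree
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # prefix 1 for the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0], nxt] + lo[2:] + hi[2:])
        nxt += 1
    return {sym: code for sym, code in heap[0][2:]}

data = [0, 0, 0, 0, 1, 1, 2, 3]              # skewed symbol distribution
codes = huffman_codes(data)
encoded = "".join(codes[s] for s in data)
# The most frequent symbol (0) gets a 1-bit code, so 8 two-bit symbols
# compress from 16 bits down to 14.
print(len(encoded))                          # 14
```

The more skewed the coefficient histogram after quantization, the larger the gain, which is why entropy coding sits at the end of the compression pipeline.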

Fig: 4.4 Image before embedding the message using Huffman coding and LL band (H* Elimination)

Fig: 4.2 First Stage Discrete Wavelet Analysis

Further, a second-stage wavelet transform was applied to this to create the LL, LH, HL and HH bands, as shown below.
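The two-stage decomposition can be sketched with a Haar filter pair (a simplified stand-in for whatever wavelet the project actually used; all names below are ours):

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step along the last axis: average | difference."""
    lo = (x[..., 0::2] + x[..., 1::2]) / 2.0
    hi = (x[..., 0::2] - x[..., 1::2]) / 2.0
    return lo, hi

def haar_merge(lo, hi):
    """Inverse of haar_split (perfect reconstruction)."""
    out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
    out[..., 0::2] = lo + hi
    out[..., 1::2] = lo - hi
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Stage 1: split the image into L and H bands.
L, H = haar_split(img)
# Stage 2: split each band along the other direction -> LL, LH, HL, HH.
LL, LH = (b.T for b in haar_split(L.T))
HL, HH = (b.T for b in haar_split(H.T))

# Inverse wavelet transform: undo stage 2, then stage 1.
L_rec = haar_merge(LL.T, LH.T).T
H_rec = haar_merge(HL.T, HH.T).T
img_rec = haar_merge(L_rec, H_rec)
assert np.allclose(img_rec, img)            # perfect reconstruction
```

Dropping the detail bands before the inverse step (H* or HH Elimination) is what trades reconstruction error for compression in the experiments that follow.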



Image after decoding using Huffman decoding technique and H* Elimination

Further, the embedding was done using both bands, i.e. LL and LH (HH Elimination method), and the results were:

Fig: 4.6
Image before embedding the message using Huffman coding and HH Elimination

Fig: 4.9
Image after reconstruction (IDWT) using Huffman decoding and H* Elimination

Fig: 4.7 Image after embedding using Huffman coding and HH Elimination

Fig: 4.10
Image after decoding using Huffman decoding and HH Elimination

Fig: 4.8


Final output image using Huffman coding and H* Elimination

Fig: 4.13 Final output image using Huffman coding and HH Elimination
4.2) SPIHT CODING AND DECODING RESULTS:

Fig: 4.11 Image after reconstruction using Huffman decoding and HH Elimination

Fig: 4.14 Image before embedding using SPIHT coding and H* Elimination

Fig: 4.12

Fig: 4.15


Image after embedding using SPIHT coding and H* Elimination

Image after embedding using SPIHT coding and HH Elimination

Fig: 4.16 Image before embedding using SPIHT coding and HH Elimination

Fig: 4.18 Image after decoding using SPIHT and H* Elimination

Fig: 4.17

Fig: 4.19



Image after reconstruction (IDWT) using SPIHT and H* Elimination

Fig: 4.22 Fig: 4.20


Image after decoding using SPIHT and HH Elimination

Final output image using SPIHT coding and H* Elimination

Fig: 4.23 Final output image using SPIHT coding and HH Elimination

Fig: 4.21 Image after reconstruction (IDWT) using SPIHT and HH Elimination



4.3) COMPARISON OF RESULTS

Finally the results were compared for Huffman coding between H* and HH Elimination with respect to MSE, PSNR and CR, and the results obtained were:

Fig: 4.26 Result of SPIHT coding using H* Elimination

The results were MSE: 4.18, PSNR: 41.94, CR: 2.13

Fig: 4.24 Result of compression using Huffman coding and H* Elimination method The results of Huffman coding and H* Elimination method were MSE: 4.167 PSNR: 41.94 CR: 5.91

Fig: 4.27 Results of SPIHT coding using HH Elimination

MSE: 0.42, PSNR: 51.69, CR: 1.68

Finally a comparison was made between the two coding techniques, i.e. Huffman coding and SPIHT coding, with respect to coding and decoding time, and the results obtained were:

Fig: 4.25 Result of compression using Huffman coding and HH Elimination method

MSE: 0.42, PSNR: 51.89, CR: 2.75




Fig: 4.28 RESULT OF COMPARISON BETWEEN ENCODING AND DECODING TIME

Fig: 5.2 Screen after importing the original image


Huffman Coding Time: 4.146 sec Huffman Decoding Time: 1.807 sec SPIHT Coding Time: 1.723 sec SPIHT Decoding Time: 0.511 sec
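Timings like these can be collected with a simple wall-clock wrapper (a generic sketch; the project's figures were measured in MATLAB, so the function names here are ours):

```python
import time

def timed(fn, *args):
    """Run fn(*args) once and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

# Stand-ins for the encoder/decoder calls being compared.
data = list(range(200_000))
_, t_enc = timed(sorted, data)
_, t_dec = timed(list, data)
print(f"encode: {t_enc:.4f} s, decode: {t_dec:.4f} s")
```

For fair comparisons, each codec should be timed on the same input and averaged over several runs, since single wall-clock measurements are noisy.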

5) A GLANCE THROUGH THE GUIS OF THE PROJECT


This is the main screen of the software. Choose the encoding technique from this screen.

This is the screen after importing the input image. Click on the Message button and write the message to be embedded in the input image, then click on the Embed button to embed the message and compress the image. The decoding operation is applied by pressing the Retrieve button, and the reconstructed image is obtained as shown below.

Fig: 5.1 Main Screen

Choose the band, i.e. LL, or LL and LH, and import the input image by clicking on the Browse button.

Fig: 5.3 Screen after reconstructing the image using Huffman coding and H* Elimination

Then the values of MSE, PSNR and CR are obtained by pressing the Validate button.



Fig: 5.6 Screen after reconstructing the image using both LL and LH bands and Huffman coding

Fig: 5.4 Screen showing the results of MSE, PSNR and CR using Huffman coding and H* Elimination Then these results are cleared by pressing the clear button and the above process is repeated by choosing both LL and LH bands as shown below

Fig: 5.7 Screen showing the values of MSE, PSNR and CR using Huffman coding and HH Elimination

Fig: 5.5 Screen after importing and embedding the message using HH Elimination

SPIHT CODING



Fig: 5.10 Screen showing the values of MSE, PSNR and CR of SPIHT coding and H* Elimination
Fig: 5.8 Screen after importing the image using SPIHT coding and H* Elimination (LL band)

Fig: 5.9 Screen after reconstructing the image using SPIHT coding and H* Elimination

Fig: 5.11 Screen after importing the input image using SPIHT coding and HH Elimination



Fig: 5.12 Screen after reconstructing the image using SPIHT coding and HH Elimination

Fig: 5.14 Screen showing the encoding and decoding time for Huffman and SPIHT coding

6) CONCLUSION
The results obtained are put in tabular form for easy comparison.

Fig: 5.13 Screen showing the values of MSE, PSNR and CR using SPIHT coding and HH Elimination

Finally the two techniques, i.e. Huffman coding and SPIHT coding, were compared by clicking the Validate button on the main screen, as shown in Fig: 5.14.


When the above results are analyzed we come to the following conclusions:
a) In both Huffman coding and SPIHT coding, when a comparison is made between the H* and HH Elimination techniques, the MSE of 4.1 is reduced to 0.4 when two bands are used, indicating that the signal error decreases as the number of bands increases.
b) Similarly, in both Huffman and SPIHT coding, the PSNR increases when two bands (HH Elimination) are used, indicating that the signal dominates the noise more as the number of bands increases.
c) Comparing the CR, both techniques show a decrease in compression as the number of bands increases.
d) Comparing the two techniques, the MSE and PSNR are almost the same for both, whereas SPIHT gives about 50% less compression than Huffman coding, whether the single LL band (H* Elimination) or both LL and LH bands (HH Elimination) are used.
e) In terms of encoding and decoding time, SPIHT encoding is about four times faster than Huffman encoding, and SPIHT decoding is very fast compared to Huffman decoding, around 12 times faster.
Finally we come to the conclusion that whenever there is a need for large compression we go for only the LL band (H* Elimination), at the cost of some error in the signal; but if we need more clarity of the image we

compromise with less compression and go for both the LL and LH bands (HH Elimination).

                 ENCODING TIME   DECODING TIME
HUFFMAN CODING   4.146 sec       11.807 sec
SPIHT CODING     1.723 sec       0.511 sec

Also, when choosing among the two techniques, i.e. Huffman and SPIHT, a compromise should be made between the speed and the amount of compression, because SPIHT is very fast but gives less compression compared to Huffman, which is slow but gives more compression compared to SPIHT. Therefore SPIHT is used for very large images like satellite images, where compression can be achieved very fast but with a compromise in compression ratio.

7) FUTURE DEVELOPMENT
Though this project has been tested for attacks on the watermark due to compression, it can further be tested against various other attacks on the watermark, such as noise filtering and other digital image processing operations. The project can also be extended by embedding an image in an image. Future work on this project would be Visual Cryptography, where n images are encoded in such a way that when all shares are stacked together, the human visual system alone can decrypt the hidden message without any cryptographic computations. It is basically hiding a colored image in multiple colored cover images. This scheme achieves lossless recovery and reduces the noise in the cover images without adding any computational complexity.

                 LL BAND (H* ELIMINATION)         LL AND LH BANDS (HH ELIMINATION)
                 MSE     PSNR    CR               MSE     PSNR    CR
HUFFMAN CODING   4.167   41.94   5.91             0.42    51.89   2.75
SPIHT CODING     4.18    41.94   2.13             0.42    51.69   1.68

8) LIST OF FIGURES
Fig: 1.1 Encoder
Fig: 1.2 Decoder
Fig: 1.3 H* Elimination Technique
Fig: 1.4 HH Elimination Technique
Fig: 2.1.1 Sine Waves with Different Frequencies



Fig: 2.1.2 FT of a 50 Hz Signal
Fig: 2.1.3 Stationary Signal
Fig: 2.1.4 FT of a Stationary Signal
Fig: 2.1.5 Non-Stationary Signal
Fig: 2.1.6 FT of a Non-Stationary Signal
Fig: 2.2.1 Computation of CWT
Fig: 2.2.2 Sampling Grid
Fig: 2.2.3 1D Wavelet Transform
Fig: 2.2.4 Implementation of 2D Wavelet Transform
Fig: 2.2.5 Single Stage Decomposition
Fig: 2.2.6 Multistage Decomposition
Fig: 2.2.7 Original Image
Fig: 2.2.8 First Stage Discrete Wavelet Transform
Fig: 2.2.9 Second Stage Discrete Wavelet Transform
Fig: 2.2.10 DWT Coefficients
Fig: 2.3.1 Visible Watermarking
Fig: 2.3.2 Invisible Watermarking
Fig: 2.3.3 Classification of Information Hiding Techniques
Fig: 2.4.1 Source Encoder
Fig: 2.4.2 Source Decoder
Fig: 2.4.3 Spatial Orientation Trees
Fig: 2.4.4 A 4x4 Matrix Showing the Pixel Values of a Digital Image
Fig: 2.4.5 Initialization
Fig: 2.4.6 Block Diagram after First Sorting Pass
Fig: 2.4.7 Block Diagram after Second Sorting Pass
Fig: 2.4.8 Pixels after First Receive
Fig: 2.4.9 Pixels after Second Receive
Fig: 2.7.1 MATLAB
Fig: 2.7.2 Main Screen of MATLAB
Fig: 2.7.3 M-File Screen
Fig: 3.1 Encoder Flow Diagram
Fig: 3.2 Decoder Flow Diagram
Fig: 4.1 Original Image
Fig: 4.2 First Stage Discrete Wavelet Analysis
Fig: 4.3 Second Stage Discrete Wavelet Transform
Fig: 4.4 Image before Embedding the Message using Huffman Coding and H* Elimination
Fig: 4.5 Image after Embedding the Message using Huffman Coding and H* Elimination
Fig: 4.6 Image before Embedding the Message using Huffman Coding and HH Elimination
Fig: 4.7 Image after Embedding using Huffman Coding and HH Elimination
Fig: 4.8 Image after Decoding using Huffman Decoding Technique and H* Elimination
Fig: 4.9 Image after Reconstruction (IDWT) using Huffman Decoding and H* Elimination
Fig: 4.10 Image after Decoding using Huffman Decoding and HH Elimination
Fig: 4.11 Image after Reconstruction (IDWT) using Huffman Decoding and HH Elimination
Fig: 4.12 Final Output Image using Huffman Coding and H* Elimination
Fig: 4.13 Final Output Image using Huffman Coding and HH Elimination
Fig: 4.14 Image before Embedding using SPIHT Coding and H* Elimination
Fig: 4.15 Image after Embedding using SPIHT Coding and H* Elimination
Fig: 4.16 Image before Embedding using SPIHT Coding and HH Elimination
Fig: 4.17 Image after Embedding using SPIHT Coding and HH Elimination
Fig: 4.18 Image after Decoding using SPIHT and H* Elimination
Fig: 4.19 Image after Reconstruction (IDWT) using SPIHT and H* Elimination
Fig: 4.20 Image after Decoding using SPIHT and HH Elimination
Fig: 4.21 Image after Reconstruction (IDWT) using SPIHT and HH Elimination
Fig: 4.22 Final Output Image using SPIHT Coding and H* Elimination
Fig: 4.23 Final Output Image using SPIHT Coding and HH Elimination
Fig: 4.24 Result of Compression using Huffman Coding and H* Elimination Method
Fig: 4.25 Result of Compression using Huffman Coding and HH Elimination Method
Fig: 4.26 Result of SPIHT Coding using H* Elimination
Fig: 4.27 Results of SPIHT Coding using HH Elimination
Fig: 4.28 Result of Comparison between Encoding and Decoding Time
Fig: 5.1 Main Screen
Fig: 5.2 Screen after Importing the Original Image
Fig: 5.3 Screen after Reconstructing the Image using Huffman Coding and H* Elimination
Fig: 5.4 Screen showing the Results of MSE, PSNR and CR using Huffman Coding and H* Elimination
Fig: 5.5 Screen after Importing and Embedding the Message using Huffman and HH Elimination
Fig: 5.6 Screen after Reconstructing the Image using Huffman Coding and HH Elimination
Fig: 5.7 Screen showing the Values of MSE, PSNR and CR using Huffman Coding and HH Elimination
Fig: 5.8 Screen after Importing the Image using SPIHT Coding and H* Elimination
Fig: 5.9 Screen after Reconstructing the Image using SPIHT Coding and H* Elimination
Fig: 5.10 Screen showing the Values of MSE, PSNR and CR of SPIHT Coding using H* Elimination
Fig: 5.11 Screen after Importing the Input Image using SPIHT Coding and HH Elimination
Fig: 5.12 Screen after Reconstructing the Image using SPIHT Coding and HH Elimination
Fig: 5.13 Screen showing the Values of MSE, PSNR and CR using SPIHT Coding and HH Elimination
Fig: 5.14 Screen showing the Encoding and Decoding Time for Huffman and SPIHT Coding



