

Chapter: I
Introduction
Data compression is a technique for reducing redundancy in data so that it can be represented more compactly. Image compression is an application of data compression and can be lossy or lossless. For high-value images such as medical images, where loss of critical information is unacceptable, lossless or visually lossless compression is preferred. In addition, in many countries lossy compression of medical images used for diagnosis is forbidden by law.
Medical images are widely used for surgical planning and diagnosis. They contain pictures of the human body and are increasingly produced in digital form. Imaging devices improve every day and generate more data per patient. Because patient profiles require long-term storage of medical images, compression is needed, and for this purpose the compression ratio is important. However, in real-time applications such as telemedicine and online diagnosis systems, which require hardware implementation, the simplicity of the compression algorithm also plays an important role, since it accelerates the computation process.
As can be seen in Fig. 1, lossless compression consists of two major parts: transformation and coding. The input image goes through transformation and encoding steps and emerges in a shorter form as a compressed bit-stream. In lossy compression, a quantization step is usually added to this flow.

Fig 1. Flowchart of Lossless Compression.

Lossless JPEG, JPEG-LS and the lossless mode of JPEG2000 are lossless methods issued by the JPEG committee and are widely used around the world. Huffman coding, Golomb-Rice coding and arithmetic coding are used in these standards. The output of the transformation step is the input to these encoders. Transformation decorrelates the input image and reduces its entropy value. Entropy is a measure of the compressibility achievable by encoding; it indicates the required bits per pixel and is calculated as:

E = - Σi pi log2( pi )   (1)

where pi is the probability of intensity i.

Dept. of ECE, SJBIT

Page 1
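The entropy of formula (1) can be computed directly from the pixel histogram; a minimal pure-Python sketch (function and variable names are illustrative, not from the report):

```python
import math
from collections import Counter

def entropy_bits_per_pixel(pixels):
    """Shannon entropy of a pixel sequence: E = -sum(p_i * log2(p_i))."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A tiny image with four equally likely intensities has entropy log2(4) = 2 bits/pixel.
print(entropy_bits_per_pixel([0, 1, 2, 3]))  # → 2.0
```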

1.1.1 Objective of the Project
1. To study and develop MATLAB code for the JPEG encoding scheme.
2. To utilize the properties of the DCT and zig-zag encoding for JPEG image compression.
3. To obtain parameters such as the compression ratio for different images.

1.1.2 Approach and Implementation
1. Literature survey: study of the encoding standard adopted by the JPEG committee, study of other image compression techniques, study of the Discrete Cosine Transform, and study of the need for the zig-zag scan in the context of image compression.
2. Implementation of the DCT, quantization and zig-zag encoding to obtain the compressed image.
3. Development of software: coding the JPEG encoder in the MATLAB editor and evaluating the code with respect to the quality metrics.

1.3 Probable Approach:



Figure 2. Block Diagram of New Transformation Method
The input image is divided by n; the rounded quotient is kept in Quotient and the remainder of the division in Remainder. The divisor is chosen so that the sum of the quotient depth and the remainder depth equals the input image depth. Therefore, the divisor is a power of two and smaller than the largest possible intensity value:

n = 2^m,  m = 1, 2, …, k-1

For a specific n, the quotient and remainder are:

q(i,j) = floor( I(i,j) / n )
r(i,j) = I(i,j) mod n
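The quotient/remainder split above amounts to an integer divmod per pixel; a short sketch (pure Python, illustrative names; the project itself uses MATLAB):

```python
def divide_step(image, n):
    """Split each intensity into quotient q = floor(I/n) and remainder r = I mod n."""
    quotient  = [[pixel // n for pixel in row] for row in image]
    remainder = [[pixel %  n for pixel in row] for row in image]
    return quotient, remainder

image = [[200, 201], [37, 64]]
q, r = divide_step(image, 8)   # n = 2^3: an 8-bit input yields a 5-bit q and a 3-bit r
print(q)  # → [[25, 25], [4, 8]]
print(r)  # → [[0, 1], [5, 0]]
```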

Chapter: II
Image compression:

Image compression is the process of minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size
allows more images to be stored in a given amount of disk or memory space. It also
reduces the time required for images to be sent over the Internet or downloaded from
Web pages.
Image, audio and video data require substantial storage space, large transmission bandwidth and long transmission times to store, process and transmit. At the present state of technology, the only solution is to compress the multimedia data before storage and transmission and to decompress it at the receiver for playback.
A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task is then to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System. In general, three types of redundancy can be identified:
Spatial redundancy: neighboring pixels are not independent but correlated with each other.
Spectral redundancy: correlation between different color planes or spectral bands.
Temporal redundancy: correlation between adjacent frames in a sequence of images (video applications).
Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial and spectral redundancy as much as possible. Since we are focusing on still image compression, we are not concerned with temporal redundancy.

2.2 Different classes of compression techniques
Compression techniques fall into two broad classes:


Lossless Compression
Lossless image compression, as the name implies, involves no loss of information. If data have been losslessly compressed, the original data can be reconstructed exactly from the compressed data. Lossless compression is used for applications that cannot tolerate any difference between the original and reconstructed data.
Text compression is one such area: it is very important that the reconstruction be identical to the original text.
Lossy Compression
Lossy compression techniques involve some loss of information; data compressed with lossy algorithms generally cannot be reconstructed exactly. In return for accepting this distortion in the reconstruction, we can obtain much higher compression ratios. In many applications, such as speech, image and video, this lack of exact reconstruction is not a problem.


Different methods of compression for image:

Differential Encoding:
In this method the difference between the actual value of a sample and a prediction of that value is encoded; it is also known as predictive encoding. Example techniques include differential pulse code modulation (DPCM), delta modulation and adaptive pulse code modulation, which differ in their prediction part.
This method is suitable where successive signal samples do not differ much but are not zero, e.g. video (differences between frames) and some audio signals. DPCM uses a simple prediction scheme in which the current value is predicted from the previous actual value:
fpredict(ti) = factual(ti-1)



If the source is highly correlated (as most images are), a simple Markov model can be used in which the current value predicts the next one. We then simply need to encode the prediction error e(ti) = factual(ti) - fpredict(ti).
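The previous-sample predictor described above can be sketched in a few lines (an illustrative Python sketch; the report's own implementation is in MATLAB):

```python
def dpcm_encode(samples):
    """Encode each sample as the difference from the previous actual sample."""
    errors = [samples[0]]                     # first sample is sent as-is
    for i in range(1, len(samples)):
        errors.append(samples[i] - samples[i - 1])
    return errors

def dpcm_decode(errors):
    """Rebuild the signal by accumulating the prediction errors."""
    samples = [errors[0]]
    for e in errors[1:]:
        samples.append(samples[-1] + e)
    return samples

signal = [100, 102, 101, 105]
print(dpcm_encode(signal))               # → [100, 2, -1, 4]
print(dpcm_decode(dpcm_encode(signal)))  # → [100, 102, 101, 105]
```

Note how the errors cluster near zero for correlated sources, which is exactly what lowers the entropy before coding.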

2.3.2 Arithmetic coding:
Arithmetic coding is a form of variable-length entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters are stored with fewer bits and rarely occurring characters with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, it encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.
Arithmetic coding is commonly employed for lossless image compression.
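The interval-narrowing idea behind arithmetic coding can be illustrated with a toy encoder that returns the final sub-interval (not a production coder; symbol probabilities are assumed known and floating-point precision is ignored):

```python
def arithmetic_interval(message, probs):
    """Narrow [0, 1) once per symbol; any number in the final interval encodes the message."""
    # cumulative lower bound of each symbol's slice of [0, 1)
    cum, lo_bounds = 0.0, {}
    for sym, p in probs.items():
        lo_bounds[sym] = cum
        cum += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        high = low + span * (lo_bounds[sym] + probs[sym])
        low  = low + span * lo_bounds[sym]
    return low, high

low, high = arithmetic_interval("ab", {"a": 0.5, "b": 0.5})
print(low, high)  # the message "ab" maps to the interval [0.25, 0.5)
```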

2.3.3 Sub-band coding: (SBC)
It is a form of transform coding that breaks a signal into a number of different frequency bands and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.
Simply put, subband coding is an approach to compression that relies on separating the source output into different bands of frequencies using digital filters. Each
of these constituent parts is then encoded using one or more of the methods described above. Since digital filters are used, subband coding mainly deals with filter design: the analysis filter bank and the synthesis filter bank. The scheme is shown below.

Figure-2.1 Filter Bank used in Sub Band Coding
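A two-band split with the simplest averaging/differencing (Haar-like) analysis/synthesis pair illustrates the idea of perfect reconstruction (an illustrative sketch, not the specific filter bank of Figure 2.1):

```python
def analysis(x):
    """Split an even-length signal into average (lowpass) and difference (highpass) bands."""
    low  = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return low, high

def synthesis(low, high):
    """Recombine the two bands; for this filter pair the reconstruction is exact."""
    x = []
    for l, h in zip(low, high):
        x += [l + h, l - h]
    return x

x = [10, 12, 8, 8, 4, 6]
low, high = analysis(x)
print(low, high)             # → [11.0, 8.0, 5.0] [-1.0, 0.0, -1.0]
print(synthesis(low, high))  # → [10.0, 12.0, 8.0, 8.0, 4.0, 6.0]
```

Each band can then be quantized and coded with a precision suited to its perceptual importance.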

2.3.4 Transform coding:
Transform coding is a type of data compression for "natural" data such as audio signals or photographic images. The coding is typically lossy, resulting in a lower quality copy of the original input.
In transform coding, knowledge of the application is used to choose which information to discard, thereby lowering the bandwidth. The remaining information can then be compressed by a variety of methods. When the output is decoded, the result may not be identical to the original input, but it is expected to be close enough for the purpose of the application.
The transforms of interest for image compression are the Karhunen-Loève transform (KLT), the Discrete Cosine Transform (DCT), the Discrete Sine Transform (DST) and the Discrete Walsh-Hadamard Transform (DWHT).

2.3.5 Wavelet based compression
In Fourier transform (FT) we represent a signal in terms of sinusoids. FT
provides a signal which is localized only in the frequency domain. It does not give any
information of the signal in the time domain. Basis functions of the wavelet transform
(WT) are small waves located in different times. They are obtained using scaling and
translation of a scaling function and wavelet function. Therefore, the WT is localized in
both time and frequency.
In addition, the WT provides a multiresolution system. Multiresolution is useful in
several applications. For instance, image communications and image data base are such
applications. If a signal has a discontinuity, FT produces many coefficients with large
magnitude (significant coefficients). But WT generates a few significant coefficients
around the discontinuity. Nonlinear approximation is a method to benchmark the
approximation power of a transform. In nonlinear approximation we keep only a few
significant coefficients of a signal and set the rest to zero. Then we reconstruct the signal
using the significant coefficients. WT produces a few significant coefficients for the
signals with discontinuities, thus we obtain better results for WT nonlinear approximation
when compared with the FT.
Most natural signals are smooth with a few discontinuities, i.e. they are piecewise smooth; speech and natural images are such signals. Hence, the WT has a better capability for representing these signals than the FT. Good nonlinear approximation leads to efficiency in several applications such as compression and denoising.


Typical Image Coder
A typical lossy image compression system is shown in Figure 2.2. It consists of three closely connected components, namely the source encoder, the quantizer and the entropy encoder. Compression is accomplished by applying a linear transform to decorrelate the image data, quantizing the resulting transform coefficients and entropy coding the quantized values.



Figure 2.2 Typical Lossy Image Encoder

Source Encoder:
Different kinds of linear transforms have been developed, including the Discrete Fourier Transform (DFT), Discrete Sine Transform (DST), Discrete Cosine Transform (DCT), Discrete Wavelet Transform and many more, each with its own advantages, disadvantages and applications.

Quantizer:
A quantizer simply reduces the number of bits needed to store the transform coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process and is the main source of compression in an encoder. Quantization can be performed on each individual coefficient, which is called scalar quantization, or on a group of coefficients together, which is known as vector quantization.
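Scalar quantization is a divide-and-round at the encoder and a multiply at the decoder; a sketch (the step size of 10 is an arbitrary illustration):

```python
def quantize(coeff, step):
    """Many-to-one mapping: round the coefficient to the nearest multiple of step."""
    return round(coeff / step)

def dequantize(index, step):
    """Reconstruct the coefficient; the precision lost in quantize() is not recoverable."""
    return index * step

coeffs = [127.3, -31.8, 4.2, 0.6]
indices = [quantize(c, 10) for c in coeffs]
print(indices)                               # → [13, -3, 0, 0]
print([dequantize(i, 10) for i in indices])  # → [130, -30, 0, 0]
```

The small coefficients collapse to zero, which is where most of the compression in a transform coder comes from.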

Entropy encoder:
An entropy encoder further compresses the quantized values losslessly to give better
overall compression. It uses a model to accurately determine the probabilities for each
quantized value and produces an appropriate code based on these probabilities, so that the resultant output code is smaller than the input stream. The most commonly used entropy encoders are arithmetic and Huffman encoders, although for applications requiring fast execution zig-zag run-length encoding has proven more effective.
It is important to note that a properly designed quantizer and entropy encoder, along with an optimum signal transformation, are absolutely necessary to obtain the best possible compression.

Chapter: III

JPEG Lossless Image Compression and Transformation
3.1.1 Transformation
Transformation is used to decorrelate the input image and transform image into
another domain. Transformation tries to concentrate the energy of the input image into
comparatively few coefficients [7]. Important attributes of a good transformation are
energy compaction, computational complexity and receiver complexity [8]. Therefore,
the objective of transformation is to transform image into coefficients matrix to be an
appropriate input for coding step. One of objectives of transformation is to reduce the
entropy value of input image. Thus, it is a measure of goodness for any transformation.


General Block Diagram of the DPCM


Fig 3. General Structure of DPCM
In the first step, image pixels are predicted from their neighboring pixels by one of the prediction models of Table 1. The neighboring positions are shown in Fig. 4. Subtracting the predicted image from the input image gives the prediction error, which is the transformed image.

Fig 4. Neighbor Pixels for Predicting Pixel X.

Table 1 illustrates the predictors and their equivalent hardware requirements. Since hardware optimization is important, it is better to favor simplicity over efficiency in predictor selection. In order to have a fair comparison, the same predictor is used in the comparison step.

Dept. of ECE, SJBIT

Page 11


Fig 5. a) 512×512 Grayscale Lena Image. b) Original Image
Histogram. c) DPCM Transformed Image Histogram

The engine of our new transform method is illustrated in block diagram form in Fig. 6. Division is first applied to the input image. Then the quotients of the division are predicted by one of the prediction equations of Table 1. Next, the predicted matrix is rescaled in the multiplication step. Adding the remainders of the division step to the rescaled matrix produces the predicted image, and subtracting the predicted image from the original image produces the transformed image. When the intensities of an image are in the range [0, 2^k - 1], the image has k-bit depth. Typical grayscale images have 8- to 16-bit depth. RGB images consist of three matrices R, G and B, each with a depth similar to a grayscale image. In this paper, grayscale images with 8-bit depth are used.



As shown in Fig. 6, the input image is divided by n; the rounded quotient is kept in Quotient and the remainder of the division in Remainder. The divisor is chosen so that the sum of the quotient depth and the remainder depth equals the input image depth. Therefore, the divisor is a power of two and smaller than the largest possible intensity value:

n = 2^m,  m = 1, 2, …, k-1   (3)

For a specific n of (3), the quotient and remainder are:

q(i,j) = floor( I(i,j) / n )   (4)
r(i,j) = I(i,j) mod n   (5)

For an input image I(i,j), the intensities lie in the interval

0 ≤ I(i,j) ≤ 2^k - 1   (6)

Therefore, from (4) and (5), the quotients and remainders are in the ranges

0 ≤ I(i,j) / n ≤ (2^k - 1) / n
0 ≤ q(i,j) ≤ (2^k - 1) / 2^m
0 ≤ r(i,j) ≤ 2^m - 1

Then, as a result:

q(i,j) ∈ [0, 2^(k-m) - 1]   (7)
r(i,j) ∈ [0, 2^m - 1]   (8)

From (7) and (8), q(i,j) has (k-m)-bit depth and r(i,j) has m-bit depth; adding the two depths gives a total of k, which equals the input image depth. Another advantage of n being a power of two appears from a hardware implementation point of view: the division becomes an m-bit right shift, which is easily implemented with a shift register instead of a divider.
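Because n = 2^m, the division of (4) and (5) reduces to bit operations, mirroring the shift-register implementation; a sketch for one pixel:

```python
m = 3          # n = 2^m = 8
pixel = 201    # an 8-bit intensity (k = 8)

q = pixel >> m               # quotient: m-bit right shift, leaving a 5-bit value
r = pixel & ((1 << m) - 1)   # remainder: keep only the low m bits

print(q, r)  # → 25 1
# The depths add up: 5-bit quotient + 3-bit remainder = 8-bit pixel.
assert q * (1 << m) + r == pixel
```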


Next, the reason for dividing the intensities before prediction is analyzed. The quotients of the division step are the input of the prediction part. As will be proved below, if we divide the image by n, the probability of correct prediction increases. Assume a linear prediction model such as prediction number five of Table 1 (X = B); then the probability of correct prediction is the probability of that intensity value, as shown in (9). Rr is the number of repetitions of intensity r in an M×N image, and Pr is the probability of r. In this case, Pr is the probability of correct prediction:

Pr = Rr / (M×N)   (9)
We have q(i,j) from (4), obtained by dividing each pixel of the input image by n. Hence, the probability of correct prediction for q(i,j) is

Pq = Rq / (M×N),  q = 0, 1, …, (2^k - 1) / n   (10)

where Rq is the number of repetitions of the new intensity value q in the divided image.
When a single number replaces a whole interval of values, its total number of repetitions increases. For example, if the input image is divided by 32, all the values between 0 and 31 become 0, so the number of repetitions of 0 increases. From (10), with d the number of original intensity values that map to the same quotient (d ≤ n), the new repetition count Rq and probability Pq are

Rq = d × Rr   (11)
Pq = (d × Rr) / (M×N) = d × Pr   (12)

As can be seen from (12), the probability of a correct prediction increases. Viewed differently, the probability of correct prediction over a smaller interval of values is higher than over a bigger one; hence better prediction is achieved. The objective of our method is to estimate the prediction error, so a predicted image is needed. The next step is rescaling the predicted matrix from the prediction step: the predicted matrix is multiplied by n, and the remainder is added back to protect the accuracy of the prediction. Hence, the predicted image is calculated as

P(i,j) = qp(i,j) × n + r(i,j)   (13)

in which qp(i,j) is the predicted matrix, i.e. the output of the prediction step.
The prediction error, or transformed image T(i,j), is obtained by

T(i,j) = I(i,j) - P(i,j)   (14)
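Equations (4)-(5) and (13)-(14) can be chained into one small sketch of the transform, here using the simple predictor X = B (the left neighbor) and operating on a single row for brevity (purely illustrative Python; the report's code is in MATLAB):

```python
def transform_row(row, n):
    """Divide, predict each quotient from its left neighbor, rescale, and subtract."""
    q = [p // n for p in row]     # eq. (4)
    r = [p %  n for p in row]     # eq. (5)
    # predictor X = B: predicted quotient is the previous quotient (first kept as-is)
    qp = [q[0]] + q[:-1]
    predicted = [qp[i] * n + r[i] for i in range(len(row))]   # eq. (13)
    return [row[i] - predicted[i] for i in range(len(row))]   # eq. (14)

row = [200, 201, 210, 180]
errors = transform_row(row, 8)
print(errors)  # → [0, 0, 8, -32]
assert all(e % 8 == 0 for e in errors)  # every error is a multiple of n, as (17) predicts
```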



The image transformed by this method has a narrower intensity distribution than DPCM output; therefore more energy compaction takes place. This happens because the transformed image values have less variety than the output of DPCM. As the following derivation shows, the variety of transformed image values of this method is n times smaller.
Similar to (13), I(i,j) can be written as

I(i,j) = q(i,j) × n + r(i,j)   (15)

r is equal in both (13) and (15). Hence, we can say:

I(i,j) - q(i,j)×n = P(i,j) - qp(i,j)×n
I(i,j) - P(i,j) = n × ( q(i,j) - qp(i,j) )   (16)

From (14) and (16), we have (17) for the transformed image:

T(i,j) = n × ( q(i,j) - qp(i,j) )   (17)

As a result of (7):

0 ≤ qp(i,j) ≤ 2^(k-m) - 1  and  0 ≤ q(i,j) ≤ 2^(k-m) - 1
0 ≤ | q(i,j) - qp(i,j) | ≤ 2^(k-m) - 1   (18)

So all possible prediction error magnitudes Tv(i,j) for our transformation are

Tv(i,j) = n × V,  V = 0, 1, …, 2^(k-m) - 1   (19)

In DPCM, all possible prediction error values are in the range [0, 2^k - 1]. According to (19), the variety of outputs of this method is n times smaller, so the energy compaction is n times better, which is a good attribute. All outputs of the method are multiples of n. For example, we applied this method with n equal to 32 to the 512×512 Barbara image with 8-bit depth and compared its histogram to the histogram of the lossless JPEG transformation output in Fig. 7. From (19), the variety of prediction error values is

Tv(i,j) = 32 × V,  V = 0, 1, …, 2^3 - 1

So there are only 8 different possible magnitudes for the prediction error, namely [0, 32, 64, …, 224]. Including the negative counterparts gives 16 values, but we shift them so that the output contains only unsigned integers. DPCM outputs, on the other hand, vary over the range 0 to 255. As can be seen from Fig. 7, the new method causes less variety in the output and, as a result, more energy compaction and more entropy reduction.



Chapter 4
Design and Implementation
4.1 Introduction
The JPEG algorithm was implemented by writing MATLAB code to compress grayscale images. The code computes the 2-D DCT and the divided-method encoding for storage/transmission.
The flow of the developed code is as follows.
We excise a 3×3 part of the Barbara image and show all the steps of the new method in Fig. 8. n is equal to 8 for this example and prediction number one (X = (B+D)/2) is used. As can be seen in Fig. 8, all prediction error values are multiples of 8, so all the energy compacts into 32 different values:

Tv(i,j) = 8 × V,  V = 0, 1, …, 2^5 - 1 (= 31)

It is clear from this numerical example that keeping the remainder helps considerably with the energy compaction. It should also be noted that a better predictor gives better results.
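The DCT/quantization/zig-zag stage of the developed code can be sketched as follows (a Python illustration; the project code itself is in MATLAB, and the 4×4 block and step size here are arbitrary choices for the example):

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (O(N^4); fine for illustration)."""
    N = len(block)
    def alpha(u): return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def zigzag(block):
    """Scan an N x N block along anti-diagonals, as in JPEG."""
    N = len(block)
    order = sorted(((i, j) for i in range(N) for j in range(N)),
                   key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[i][j] for i, j in order]

block = [[100] * 4 for _ in range(4)]   # flat block: all energy lands in the DC term
coeffs = dct2(block)
quantized = [[round(c / 10) for c in row] for row in coeffs]
print(zigzag(quantized))   # DC coefficient 40 comes first; every other entry is 0
```

Because the zig-zag scan places the long run of zeros at the end, a run-length or entropy coder can then represent the block very compactly.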



The entropy value decreases as energy compaction improves, and a better entropy value allows more compression. Therefore, transforming the image with more energy compaction yields better compression. Assume an image with an equal probability distribution over all intensities. According to (1), the entropy of such an image is

E = - M×N ( log2(1/L) / (M×N) ) = - log2(1/L),  L = 2^k   (21)

where L is the number of intensities. For an 8-bit image, L equals 256; therefore the entropy of the described image is k. For a specific n, the number of intensity values decreases n times, and from (3) and by division,

Ln = 2^(k-m) = L/n   (22)

where Ln is the number of intensities of the divided image. From (21) and (22), for an image transformed by the new method, the new entropy value is

E = - log2(1/Ln) = - log2(n/L) = k - m   (23)

This means the required bits per pixel, i.e. the entropy, decreases, which leads to more compression. For example, for the Barbara image, which has a non-uniform intensity distribution, the entropy values of the different methods are calculated in Table 3.

As Table 3 shows, a bigger n causes more entropy reduction, as expected from the equations. The new method therefore reduces the entropy value and narrows the intensity distribution, which yields better compression and demonstrates the efficiency of the introduced method.
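The k - m prediction of (23) can be checked numerically on a synthetic uniform image (an illustrative sketch):

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy in bits per pixel."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Uniform 8-bit image: every intensity 0..255 appears once, so entropy = k = 8.
pixels = list(range(256))
print(entropy(pixels))             # → 8.0

# Divide by n = 2^m with m = 5: entropy drops to k - m = 3 bits/pixel, as in (23).
divided = [p // 32 for p in pixels]
print(entropy(divided))            # → 3.0
```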



4.2 Challenges considered in a lossless compression
A. Computational Complexity
Computational complexity is another factor that determines the efficiency of a method. It can be measured in CPU cycles, hardware gates, memory bandwidth, and so on [5], depending on the target application. As a rough indication, we compare the operations and order of the transformations. This algorithm has lower computational complexity than the transformation of JPEG2000, because it needs no filter banks or convolutions, but it has higher computational complexity than lossless JPEG: the new method requires two more shift operations and one more addition than the transformation of lossless JPEG. On the other hand, as mentioned before, its complexity is less than that of JPEG-LS.
Since no hierarchical loop is added to the older DPCM algorithm, the order of the algorithm is O(n), where n here is the number of pixels, M×N. It should be noted that choosing a less complex predictor such as X = B can decrease the total computational complexity and accelerate the transformation process. For example, the predictor X = (B+D)/2 needs a one-bit shift register and an adder block, whereas X = B can be implemented with only a selector. However, the predictor must also provide good entropy reduction. For CT image compression, X = B is an efficient predictor.

B. Lossless Transformation
Lossless compression of medical images was our main purpose in introducing this method. Hence, for a lossless implementation we have to prevent any loss of information; thus n = 2 must be picked. Then the only possible values for the remainder are 0 and 1, and there is no loss of critical information in the reconstruction phase. In image reconstruction, a single failure in intensity estimation (e.g. reconstructing 201 instead of 200) causes about a 0.39 percent intensity change, which can be disregarded. Another way to ensure that the method is lossless is to save or send
the remainders with the transformed image. When we divide the image by two, each remainder can be saved in one bit, so the remainder matrix is 1/8 the size of the original image. Hence, in the case of strictly lossless compression, the remainders can be sent and the compression is still acceptable. Calculating the remainders in an intelligent manner during reconstruction, which will be our future work, is another solution for lossless compression.
For a less complex lossless scheme, we used shift-with-carry: the remainder of the dividing process is saved in the carry, and in the rescaling step one left shift with carry adds the remainder back to the predicted matrix. The explicit addition is thereby eliminated and the computational complexity decreases.
Lossless transformation of this method is thus achieved by adding two shift registers to the DPCM of lossless JPEG, so the complexity stays low.
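With the remainders kept alongside the errors, the transform is exactly invertible; a round-trip sketch for one row with n = 2 and the predictor X = B (illustrative Python, with the first pixel sent raw as in DPCM):

```python
def forward(row, n):
    """Transform a row: first pixel raw, then error = pixel - (predicted quotient * n + remainder)."""
    q = [p // n for p in row]
    r = [p %  n for p in row]
    t = [row[0]] + [row[i] - (q[i - 1] * n + r[i]) for i in range(1, len(row))]
    return t, r

def inverse(t, r, n):
    """Exact reconstruction: remainders plus errors recover every pixel left to right."""
    row = [t[0]]
    for i in range(1, len(t)):
        q_prev = row[-1] // n
        row.append(q_prev * n + r[i] + t[i])
    return row

row = [200, 201, 198, 255, 0]
t, r = forward(row, 2)
assert inverse(t, r, 2) == row   # lossless: n = 2 with the remainders kept
print(inverse(t, r, 2))          # → [200, 201, 198, 255, 0]
```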

C. Near-Lossless Transformation
For near-lossless or visually lossless transformation, n is set to 4. This causes at most a 1.56 percent intensity change in the reconstruction of the transformed image. For critical images with a large amount of detail, we suggest using the lossless scheme for the important parts and the near-lossless scheme for the other parts; a region of interest (ROI) can be designated for near-lossless coding.

D. Lossy Transformation
Lossy compression is achieved by applying a bigger n in the division step. If we pick a very big n, such as 64, we may lose a great deal of information: the predicted image will not have acceptable quality and, consequently, neither will the reconstructed image. The choice of divisor therefore depends on the required quality.



E. Error
Another way to measure the goodness of the method is to calculate the root mean square error (RMSE) between the input image and the predicted image, estimated by (24) [9]:

RMSE = sqrt( Σi Σj ( I(i,j) - P(i,j) )^2 / (M×N) )   (24)

Inasmuch as the RMSE depends on the prediction error, better prediction causes less error. The root mean square signal-to-noise ratio (SNRrms) is another error estimate, calculated as the square root of the mean square signal-to-noise ratio (SNRms) [9], shown in (25):

SNRms = Σi Σj P(i,j)^2 / Σi Σj ( I(i,j) - P(i,j) )^2   (25)

In the experimental results of the next section, the error and signal-to-noise ratio are reported.

Final implementation using the DCT:



Chapter 5
Simulation and Results
5.1 Measurement Parameters:
In our project, MATLAB code has been written to compress and store the image after encoding any grayscale image. Each compressed image is evaluated on two parameters, PSNR and compression ratio (in our project we evaluated the compression ratio).

5.1.1 Peak Signal-to-Noise Ratio
PSNR is defined as the ratio between the maximum possible power of a signal and the power of the noise that affects the fidelity of its representation. Because signals have a wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. PSNR is most commonly used as a measure of the quality of reconstruction of lossy compression codecs. The signal in this case is the original data, and the noise is the error introduced by the compression. When comparing compression codecs, PSNR is used as an approximation of human perception of reconstruction quality; therefore in some cases one reconstruction may appear closer to the original than another even though it has a lower PSNR (a higher PSNR normally indicates higher quality). PSNR is most easily defined via the mean squared error (MSE). Typical PSNR values in lossy image and video compression are between 30 dB and 50 dB, and acceptable values for wireless transmission quality loss are considered to be 20 dB to 25 dB.
When two images are identical, the MSE is zero and the PSNR is undefined.


5.1.2 Data Compression Ratio:
The data compression ratio quantifies the reduction in data representation size produced by a data compression algorithm. It is defined as the ratio of the number of bits used to represent the data before compression to the number of bits required to represent the data after compression.
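Both evaluation parameters can be computed in a few lines (a generic Python sketch, not the project's MATLAB code; the peak value of 255 assumes 8-bit images):

```python
import math

def psnr(original, reconstructed, peak=255):
    """PSNR in dB via the MSE; undefined (infinite) when the images are identical."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def compression_ratio(bits_before, bits_after):
    """Ratio of representation sizes before and after compression."""
    return bits_before / bits_after

orig  = [52, 55, 61, 66]
recon = [52, 54, 61, 67]
print(round(psnr(orig, recon), 2))      # → 51.14 (dB)
print(compression_ratio(8 * 4, 3 * 4))  # ≈ 2.67
```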



5.2 Simulation Results


Lena Image

Figure 8. Simulation Results for the Lena Image

Chapter 6
Conclusion


As has been proved and illustrated by the simulations, our new method achieves more energy compaction than lossless JPEG and, as a result, more entropy reduction. Its entropy reduction is even about 4 percent better than the DWT step of JPEG2000, in spite of its low computational complexity. Hence, it is an efficient method for the lossless or near-lossless transformation of medical images where real-time operation and hardware simplicity are of concern, such as in telemedicine, teleradiology and online diagnosis applications.

References

[1] S. Burak, C. Tomasi, B. Girod and C. Beaulieu. "Medical Image Compression based on
Region of Interest with Application to Colon CT images.", Proc. of 23rd Int. Conf. of the IEEE
Engineering in Medicine and Biology Society, 2453-2456, Istanbul, Turkey, 2001.
[2] R. Starosolski, "Simple fast and adaptive lossless image compression algorithm", Software: Practice and Experience, Wiley InterScience, pp. 37-56, 2007.
[3] A. Savakis and M. Piorun, "Benchmarking and Hardware Implementation of JPEG-LS,"
ICIP'02, Rochester, NY, Sept. 2002.
[4] R. Calderbank, I. Daubechies, W. Sweldens and B. L. Yeo, "Lossless image compression using integer to integer wavelet transforms", in Proc. ICIP-97, IEEE International Conference on Image Processing, vol. 1, pp. 596-599, Santa Barbara, California, Oct. 1997.
[5] T. Ebrahimi, D. Santa Cruz, J. Askelöf, M. Larsson and C. Christopoulos, “JPEG 2000 Still
Image Coding Versus Other Standards", SPIE Int. Symposium, Invited paper in Special Session
on JPEG2000, San Diego California USA, 30 July - 4 August 2000.
[6] F. Dufaux, G.J. Sullivan, and T. Ebrahimi, "The JPEGXR Image Coding Standard" IEEE
Signal Processing Magazine, pp. 195-204, Nov, 2009.
[7] H. Foos, E. Muka, Richard.M. Slone, B.J. Erickson, M.J. Flynn, D.A. Clunie, L. Hildebrand,
K. Kohm and S. Young, "JPEG2000 Compression of medical imagery", DICOM SPIE MI,
vol3980, San Diego, 2000.
[8] T. Bose, Digital Signal and Image Processing, pp. 541-644, John Wiley & Sons, Inc.
