
Bachelor Thesis
Electrical Engineering
June 2020

Lossless Image Compression using MATLAB
Comparative Study

Surya Teja Kodukulla

Department of Mathematics and Natural Sciences


Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden
This thesis is submitted to the Department of Mathematics and Natural Sciences at Blekinge
Institute of Technology in partial fulfillment of the requirements for the degree of Bachelor of
Science in Electrical Engineering.

Contact Information:
Author(s):
Surya Teja Kodukulla
E-mail: suko19@student.bth.se

Supervisor:
Benny Lövström

University Examiner:
Irina Gertsovich

Department of Mathematics and Natural Sciences
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden

Internet : www.bth.se
Phone : +46 455 38 50 00
Fax : +46 455 38 50 57
Abstract

Context: Image compression is one of the key applications in commercial,
research, defence and medical fields. Large image files cannot be processed
or stored quickly and efficiently. Hence, compressing images while
maintaining the best possible quality is very important for real-world
applications.
Objectives: Lossy compression is widely popular for image compression
and is used in commercial applications. However, for many image-related
tasks the quality needs to remain high while the file size stays
comparatively low. Hence, lossless compression algorithms are compared
in this study to determine which algorithm retains the image quality
while providing a decent compression ratio.
Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in
lossless mode, and DWT. The compression techniques are implemented in
MATLAB using the Image Processing Toolbox. The compressed images are
compared for subjective image quality. The images are compressed with
emphasis on maintaining the quality rather than on minimising file size.
Result: The LZW implementation produces binary images and therefore fails
to deliver a lossless result in this study. Huffman and RLE produce
similar results with compression ratios in the range of 2.5 to 3.7; both
algorithms are based on redundancy reduction. The DCT and DWT algorithms
compress every element of the image matrix while maintaining lossless
quality, with compression ratios in the range of 2 to 3.5.
Conclusion: The DWT algorithm is the most suitable for compressing an
image efficiently in a lossless manner. As wavelets are used in this
compression, all the elements of the image are compressed while retaining
the quality. Huffman and RLE also produce lossless images, but across a
large variety of images some images may not be compressed with full
efficiency.

Keywords: Algorithms, Compression Ratio, Efficiency, Image Compression, Lossless, Lossy, Quality
Acknowledgments

I would like to express my sincere gratitude towards Mr. Benny Lövström for
guiding me through the course and providing all the help I needed and asked
for to complete the project in an efficient manner. I would also like to thank
Irina Gertsovich for guiding me before the start of the thesis work. Her valuable
guidance was essential for the start of my thesis.

I would also like to thank my parents, family, friends and colleagues for sup-
porting and helping me to reach the preset goals and deadlines for this project.

- Surya Teja Kodukulla

Contents

Abstract i

Acknowledgments ii

1 Introduction 1
1.1 Lossy Compression . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Lossless Compression . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Aims and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 MATLAB as a tool . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Related Work 4
2.1 Photographic data . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Modal Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Work on Image Compression . . . . . . . . . . . . . . . . . . . . . 4

3 Method 7
3.1 Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 LZW (Lempel-Ziv-Welch) . . . . . . . . . . . . . . . . . . . . . . 8
3.3 RLE (Run Length Encoding) . . . . . . . . . . . . . . . . . . . . 8
3.4 DCT (Discrete Cosine Transform) . . . . . . . . . . . . . . . . . . 9
3.5 DWT (Discrete Wavelet Transformation) . . . . . . . . . . . . . . 9

4 Implementation 11
4.1 LZW (Lempel-Ziv-Welch) . . . . . . . . . . . . . . . . . . . . . . 11
4.2 RLE (Run Length Encoding) . . . . . . . . . . . . . . . . . . . . 11
4.3 Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.4 DCT (Discrete Cosine Transform) . . . . . . . . . . . . . . . . . . 14
4.5 DWT (Discrete Wavelet Transform) . . . . . . . . . . . . . . . . . 17
4.6 Implementation in MATLAB . . . . . . . . . . . . . . . . . . . . 19

5 Results 20
5.1 Results for LZW (Lempel-Ziv-Welch) . . . . . . . . . . . . . . . . 20
5.2 Results for RLE (Run Length Encoding) . . . . . . . . . . . . . . 21

5.3 Results for Huffman . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.4 Results for DCT (Discrete Cosine Transform) . . . . . . . . . . . 27
5.5 Results for DWT (Discrete Wavelet Transform) . . . . . . . . . . 30
5.6 Overview of Results . . . . . . . . . . . . . . . . . . . . . . . . . . 33

6 Discussion 34

7 Conclusions and Future Work 35


7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

References 37

List of Figures

4.1 Flowgraph of LZW algorithm . . . . . . . . . . . . . . . . . . . . 12


4.2 Flowgraph of RLE algorithm . . . . . . . . . . . . . . . . . . . . . 13
4.3 Flowgraph of Huffman algorithm . . . . . . . . . . . . . . . . . . 15
4.4 Flowgraph of DCT algorithm . . . . . . . . . . . . . . . . . . . . 16
4.5 Masking Matrix used in DCT . . . . . . . . . . . . . . . . . . . . 17
4.6 Flowgraph of DWT algorithm . . . . . . . . . . . . . . . . . . . . 18

5.1 Berry Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


5.2 Compressed using LZW. . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 Berry Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.4 Result of RLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5 Bird Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.6 Result of RLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.7 Dog Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.8 Result of RLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.9 Flower Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.10 Result of RLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.11 Lamp Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.12 Result of RLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.13 Berry Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.14 Result of Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.15 Bird Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.16 Result of Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.17 Dog Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.18 Result of Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.19 Flower Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.20 Result of Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.21 Lamp Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.22 Result of Huffman . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.23 Berry Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.24 Result of DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.25 Bird Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.26 Result of DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.27 Dog Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

5.28 Result of DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.29 Flower Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.30 Result of DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.31 Lamp Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.32 Result of DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.33 Berry Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.34 Result of DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.35 Bird Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.36 Result of DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.37 Dog Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.38 Result of DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.39 Flower Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.40 Result of DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.41 Lamp Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.42 Result of DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

List of Tables

5.1 Compression Ratio for LZW Algorithm . . . . . . . . . . . . . . . 20


5.2 Compression Ratio for RLE Algorithm . . . . . . . . . . . . . . . 21
5.3 Compression Ratio for Huffman Algorithm . . . . . . . . . . . . . 24
5.4 Compression Ratio for DCT Algorithm . . . . . . . . . . . . . . . 27
5.5 Compression Ratio for DWT Algorithm . . . . . . . . . . . . . . . 30

Chapter 1
Introduction

Image compression is a widely used technique to reduce image size during
storage and processing. With the increasing quality and size of images,
compression has become essential in day-to-day life. With the growing use of
cloud storage, compression plays a significant role in storing large numbers
of images online [1].

Image compression is efficient when the amount of data necessary to represent
an image is reduced, so that the storage space decreases and the image
transfer efficiency increases.

An image is represented digitally using two-dimensional Cartesian coordinates
as I(m,n). The indices m and n denote the rows and columns of the image, and
the coordinate (m,n) identifies the pixel at that position, with indexing
starting from the top left corner of the image. Images can also be
represented in abstract spaces depending on the application, and the
coordinates can be extended to spaces of three or more dimensions [1] [2].

Different colour channels define the intensity at each pixel location in the
image. Each pixel location contains a value which is converted to a colour by
assigning the value to a level in a colour map. Grey-scale is the most common
colour map, containing all shades from black to white. Apart from grey-scale
images, true-colour images hold a vector with Red, Green and Blue components
for each pixel position. Such an image can be considered as three
two-dimensional planes [2].

The compression techniques are of two types, lossy and lossless compression.

1.1 Lossy Compression


Lossy compression mainly focuses on reducing the file size. The file size is
reduced significantly, but the quality of the image deteriorates compared to
the original [3]. Lossy compression trades fidelity for a given transmission
and storage budget, and it minimizes the number of bits required for an image
at the cost of a loss of information.

1.2 Lossless Compression


Lossless compression does not usually reduce the size of the image
significantly, but the quality of the image is preserved to the maximum
possible extent. This type of compression is useful for research and image
assessment applications [4]. The main aim of this compression is to represent
the image with the smallest possible number of bits, increasing the
transmission rate and decreasing the storage requirements.

Though lossy compression gives a very high reduction in bit rate, low-quality
images are not acceptable from an application point of view, so lossless
compression is suitable for broad areas of application such as medical
imaging, depth sensing, defence and research [5]. Lossless compression is
applied to a wide variety of files such as images, PDFs and text documents.
Not all lossless compression algorithms are applicable to images [6]. Hence,
this study focuses on applying different lossless compression algorithms to
images and comparing the results for the best quality and compression ratio
obtained.

1.3 Aims and Objectives


The main aim of this study is to compare different lossless compression
algorithms such as Huffman, LZW (Lempel-Ziv-Welch), RLE (Run Length Encoding),
DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) for
effective image compression without any quality loss. The subjective quality
of the compressed image is considered to check the validity of the lossless
compression.

The other objectives of this thesis are:

1. To check whether the improvement in the compressed file size for lossless
image compression can be used for better functionality and applications.

2. To combine the compression algorithms for high-quality image resolution
using MATLAB.

3. To compare the efficiency of compression by finding the compression ratio
for different compression algorithms.

1.4 Research Questions


1. Can the high-resolution images be compressed without major compromise in
the quality by comparing different lossless compression algorithms?

2. Can the high-resolution image file size be reduced by using the best com-
pression algorithm after comparing the algorithms?

1.5 MATLAB as a tool


MATLAB with the Image Processing Toolbox is an important tool for this study.
Image processing and implementing the algorithms are simpler and more
efficient when done in MATLAB. The functions available for decomposing and
compressing images make it easier to find a solution for any image processing
problem. Though other programming languages can be used for image
compression, with the advantage of its built-in libraries and functions
MATLAB provides an easier way to perform image compression.
Chapter 2
Related Work

Lossless compression is based on the predictable characteristics of the data
to be compressed. The data from a still image can be divided into two
categories:
1. Photographic data
2. Modal data

2.1 Photographic data


This is data that represents intensities in a continuous pattern. This type
of data is called Tone Art in photographic terms [7].

2.2 Modal Data


This is data that represents intensities in a discontinuous pattern. This
type of data is called Line Art in photographic terms.

2.3 Work on Image Compression


Lossless image compression has been shown to outperform the PNG, JPEG2000 and
WebP codecs in reducing file size while retaining the quality of the image.
This comparison was made by implementing the compression with a probabilistic
model for entropy coding [7] [8].

Lossless compression has also been implemented using an artificial neural
network as a predictor. The prediction process is made lossless by predicting
the same integers at the compression and decompression stages. However, this
method relies on artificial neural networks, which may give a good
compression ratio but are not widely available [1].

Another method is to encrypt the image in JPEG and then compress it using
lossless standards. It uses block-based encryption, which increases security,
but the compression quality can be compromised. The encryption methods have
also not been designed for international lossless compression standards [9].

Algorithms such as the DTT (Discrete Tchebichef Transform), usually used for
lossy compression, have also been applied to lossless compression. This
method has a limited area of application and is comparable to the standard
JPEG codec [8].

A hardware implementation of lossless image compression has also been tested
using a Spartan 3 EDK kit. This method speeds up the compression and
decompression of images, but hardware implementation is not an optimal
approach for everyday image compression [10].

Images have also been encoded using the RLE algorithm on bit planes.
Higher-level bit planes were encoded using the RLE algorithm, whereas the
other bit planes were encoded using arithmetic coding. By combining
arithmetic coding and RLE, a high compression efficiency was achieved. This
method of compression is suited for grey-scale images [11].

The LZW algorithm has been implemented for lossy compression of grey-scale
images, with a compression efficiency of around 40%. A GIF encoder with the
LZW algorithm was used to achieve this compression [12].

Huffman encoding has also been used to compress grey-scale images. This is a
lossless compression in which the source symbols are first reduced and
Huffman coding is then applied. The method yields a 10% higher compression
ratio than the usual compression using Huffman encoding [13].

The RLE algorithm has been used for semi-lossless compression of images. This
method maps the colours of the image to a vector. RLE is applied to the
vector to obtain a new vector with colour values and their frequencies. As
the frequencies are stored in vectors, the size of the image is reduced [14].

The RLE algorithm has also been applied to Computer Generated Holograms
(CGH), exploiting the recurrence of patterns in the CGH algorithm used to
generate images. The calculation time is improved by 88% by using the RLE
algorithm [15].

Bits-back coding combined with HiLLoC has been tested for lossless
compression. This method was evaluated against available codecs on ImageNet
images, and it can also be combined with machine learning for better
compression ratios [6].

Lossy image compression using Huffman encoding has been used for medical
image compression, where Huffman encoding is used to compress grey-scale
scanned images for storage and research purposes [16]. Wavelet compression
has also been used for image denoising and compression in MATLAB for
biomedical research applications [17].

In the previous implementations, lossless image compression has typically
been tested using one particular algorithm or method, and most of the
implementations were made for grey-scale images. As colour images are widely
used in most applications, lossless compression of colour images is
considered in this study. This study compares the lossless compression
algorithms Huffman, LZW, RLE, DCT and DWT to determine the optimum algorithm
for retaining the image quality as efficiently as possible. The subjective
quality is taken into consideration when comparing the results obtained for
each algorithm.
Chapter 3
Method

The algorithms are implemented in MATLAB code. The Linnaeus 5 data-set [18]
has been used to test the lossless compression algorithms, which are Huffman,
LZW (Lempel-Ziv-Welch), RLE (Run Length Encoding), DCT (Discrete Cosine
Transform) and DWT (Discrete Wavelet Transform). Each algorithm is used to
compress the images provided in the Linnaeus 5 data-set, which consists of
the following categories of images:
1. Berries
2. Birds
3. Dogs
4. Flowers
5. Portraits
6. Other images such as signboards, lamp posts, water bodies etc.

These images have a resolution of 256 x 256 pixels. MATLAB is chosen for the
implementation as the Image Processing Toolbox compiles the algorithms
faster, and the functions in MATLAB provide a superior way of processing the
images. The algorithms are implemented as follows:

3.1 Huffman
Huffman encoding is based on the probability of occurrence of values in the
given data. It assigns a variable-length code to each character, and the
frequency of occurrence of a character decides its code length. Characters
which occur most frequently are given shorter codes, and characters which
occur less frequently are given longer codes [19]. In order to compress the
data, two trees are created:
1. Huffman Tree
2. Transverse Tree

Based on their probability of occurrence, the characters are reordered in
descending order. As per these probabilities, a new set of codes is generated
and rearranged again. The process continues until the grouping comes down to
one code [20].
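For example, for four symbols with probabilities 0.4, 0.3, 0.2 and 0.1,
repeatedly merging the two least probable groups yields the codes 0, 10, 110
and 111, giving an average code length of 1.9 bits per symbol instead of the
2 bits needed by a fixed-length code.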

3.2 LZW (Lempel-Ziv-Welch)


The LZW (Lempel-Ziv-Welch) compression algorithm is one of the oldest
lossless compression algorithms, widely used for Unix file formats and GIF
image compression. The algorithm compresses by reading symbols and grouping
them into strings from the given data. The strings are converted to codes,
and since the space required to store the codes is smaller, compression is
achieved [6] [21]. The algorithm works as follows (a minimal sketch is given
after this list):
1. A table is created with numerous entries, with 4096 being a common choice.
2. Single bytes are represented by codes 0 to 255 in the table.
3. The single bytes 0 to 255 are used to start the encoding, and compression
is performed using codes 256 to 4095.
4. The algorithm identifies repeated sequences in the data and enters them in
the code table.
5. Decoding takes place by translating the code table back to the actual data
it represents.
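A minimal MATLAB sketch of these steps is given below. It encodes a vector of
byte values, the dictionary is keyed by the numeric sequences themselves, and
the 4096-entry limit mentioned above is not enforced; it is only an
illustration, not the thesis implementation of chapter 4.

function codes = lzw_encode(bytes)
% Minimal LZW encoder sketch for a vector of byte values (0-255).
    key = @(v) sprintf('%d,', v);              % numeric sequence -> dictionary key
    dict = containers.Map('KeyType', 'char', 'ValueType', 'double');
    for k = 0:255
        dict(key(k)) = k;                      % codes 0-255 represent single bytes
    end
    nextCode = 256;
    w = [];
    codes = [];
    for s = double(bytes(:)')
        wc = [w s];
        if isKey(dict, key(wc))
            w = wc;                            % keep extending the current string
        else
            codes(end+1) = dict(key(w));       %#ok<AGROW> emit code for the known prefix
            dict(key(wc)) = nextCode;          % add the new sequence to the table
            nextCode = nextCode + 1;
            w = s;
        end
    end
    if ~isempty(w)
        codes(end+1) = dict(key(w));           % flush the last pending string
    end
end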

3.3 RLE (Run Length Encoding)


Run Length Encoding reads the data and recognises identical values that occur
in consecutive runs. These repeated values are stored as a single data value,
so the redundancy is reduced and the size of the file decreases. The
algorithm is effective if the file has a large amount of redundancy; for data
without repeating values, the algorithm will increase the file size [9].

The algorithm works by reducing the physical size of a string of repeating
data, which is called a run. The run is encoded into two bytes:
The first byte stores the run count, which is the number of values in the
run. The run count contains n-1 if the encoded run contains n characters.
The second byte stores the run value, which is the value of the character
present in the run. This value is in the range 0 to 255.
The two bytes combined are called an RLE packet. An RLE packet is generated
for every new value or when the number of characters in a run exceeds the
limit. A minimal sketch of this packet encoding is given at the end of this
section.
RLE has three methods of compressing the data:
1. Encoding along the X-axis
2. Encoding along the Y-axis
3. Zig-zag encoding

Encoding along X-axis


This method of encoding is called sequential processing. One-dimensional
lines are scanned from a two-dimensional map of data by encoding the bitmap
from the upper left corner to the right end of each line [22].

Encoding along Y-axis


This method of encoding is also sequential processing, but it is performed
along the Y-axis, i.e. over the columns.

Zig-zag Encoding
This type of encoding covers the bitmap in two dimensions instead of scanning
only one dimension. The method scans the map in a diagonal fashion: the
scanning begins at the upper left corner and continues in a diagonal zig-zag
manner until the bottom right corner is reached [23].

Sequential encoding is best suited for compressing text data containing
character strings or sequences. In order to compress image files, the
encoding must be done in two dimensions, and zig-zag encoding is best suited
for this [2].
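A minimal MATLAB sketch of the run count / run value packets described above
is shown below for a one-dimensional data vector; two-dimensional image data
would first be linearised, for instance with the zig-zag scan, before this
encoding is applied. It is an illustration only, not the thesis code.

function packets = rle_encode(v)
% Minimal RLE sketch: each row of packets is [run count, run value],
% where the run count stores n-1 for a run of n identical values.
    v = v(:)';
    packets = zeros(0, 2);
    i = 1;
    while i <= numel(v)
        runLen = 1;
        while i + runLen <= numel(v) && v(i + runLen) == v(i) && runLen < 256
            runLen = runLen + 1;                % extend the run, capped at 256 values
        end
        packets(end+1, :) = [runLen - 1, v(i)]; %#ok<AGROW> one RLE packet
        i = i + runLen;
    end
end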

3.4 DCT (Discrete Cosine Transform)


The Discrete Cosine Transform expresses a sequence of data points as a sum of
cosine functions oscillating at different frequencies [4]. It is related to
the Fourier transform restricted to real numbers, in which the Fourier series
coefficients of a periodic sequence are used [23]. DCT is generally
considered lossy, but with a proper masking technique in DCT-II it can be
used as a lossless technique to compress images [3] [24]. The coefficients
generated after the transform is performed are used for compression of the
data; depending on which coefficients are compressed, the compression can be
made lossy or lossless [25] [26]. The compression is achieved by matrix
multiplications of the image matrix with the DCT matrix and its transpose.
The DCT matrix available in the Image Processing Toolbox of MATLAB can be
used in the implementation.
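For reference, the two-dimensional DCT-II applied to an N x N block I(m,n)
has the standard form

C(u,v) = \alpha(u)\,\alpha(v)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} I(m,n)\,
\cos\!\left[\frac{\pi(2m+1)u}{2N}\right]\cos\!\left[\frac{\pi(2n+1)v}{2N}\right],

where \alpha(0) = \sqrt{1/N} and \alpha(k) = \sqrt{2/N} for k > 0; this is the
orthonormal transform realised by the DCT matrix from the Image Processing
Toolbox.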

3.5 DWT (Discrete Wavelet Transformation)


The Discrete Wavelet Transform is a compression technique in which discrete
sampling of wavelets is used to achieve compression [8]. Wavelets form an
orthonormal series generated from the image, and they are used to represent a
square-integrable function. A wavelet carries information in both time and
value. The wavelet transform allows a change in the time extension but not in
the shape of the function, and it gives similar information to the short-time
Fourier transform along with the additional wavelet properties [27]. DWT
compression can be implemented at different levels of decomposition of the
image wavelets. Haar wavelet decomposition and compression are chosen for
this study, as the computation time of the Haar wavelet is short and it does
not require multiplications. It contains many zero elements, so addition of
elements is easy with the Haar wavelet [28].
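As a small illustration that the transform step itself loses no information,
the sketch below performs one level of Haar decomposition and reconstructs
the input exactly, using the Wavelet Toolbox functions dwt2 and idwt2:

A = magic(4);                              % small example matrix
[cA, cH, cV, cD] = dwt2(A, 'haar');        % approximation and detail coefficients
Arec = idwt2(cA, cH, cV, cD, 'haar');      % inverse transform
max(abs(A(:) - Arec(:)))                   % ~0, i.e. perfect reconstruction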
Chapter 4
Implementation

The algorithms have been implemented in MATLAB. The same set of images has
been used for compression with each algorithm in order to compare the
results.

4.1 LZW (Lempel-Ziv-Welch)


This algorithm is implemented as described in section 3.2 by appending all
the elements of the image into a dictionary. Consecutive elements are checked
for concatenation: if the concatenation already exists 1 is returned,
otherwise 0 is returned, a code is generated for the element, and the code is
mapped to the element in the dictionary [29] [6] [21]. The elements are
replaced in the dictionary, and the indices for the elements are updated. The
encoded image consists of the codes generated for the elements based on these
concatenations [30] [29].

In the decoding process, the codes generated for the elements are used to
rebuild a dictionary and an image vector based on the concatenations. The
redundancy is reduced by using the generated codes, and similar elements are
not repeated thanks to the new values. This reduces the size of the image
[31]. The flowgraph in figure 4.1 describes the working of the algorithm.

4.2 RLE (Run Length Encoding)


The algorithm is implemented as described in section 3.3 by keeping track of
the relation between the words encountered and the code values. As the codes
are placed in place of the words and the file is compressed, the efficiency
of the algorithm is higher if the redundancy in the data is high [9]. The
compression of the input stream is done in a single pass and does not require
prior information about the input data [15]. In order to produce a lossless
compressed image, the RLE algorithm is combined with wavelet decomposition.
The basic implementation of RLE for colour images produces grey-scale images;
hence the image is decomposed into wavelets, and the data from the wavelets
is quantised. The RLE algorithm is then applied using zig-zag encoding, as
described in section 3.3 [23].

Figure 4.1: Flowgraph of LZW algorithm

The image is reconstructed using inverse zig-zag decoding, which takes the
encoded image and the dimensions of the image as inputs. Zig-zag decoding is
possible due to the quantisation of the wavelet data, as it has discrete
values. The zig-zag decoded data is still in wavelet form, and is
reconstructed to obtain the RLE-compressed image. The flowgraph in figure 4.2
describes the working of the algorithm.
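A possible MATLAB sketch of the zig-zag scan used here is shown below; it is
an illustration of the scanning order from section 3.3 rather than the exact
thesis code, and the inverse scan simply writes the vector back to the matrix
positions in the same order.

function v = zigzag_scan(A)
% Scan a 2-D matrix into a 1-D vector along anti-diagonals,
% alternating direction to follow a zig-zag path.
    [rows, cols] = size(A);
    v = zeros(1, rows * cols);
    idx = 1;
    for s = 2:(rows + cols)                             % s = row index + column index
        if mod(s, 2) == 0
            r = min(s - 1, rows):-1:max(1, s - cols);   % even diagonals run upwards
        else
            r = max(1, s - cols):min(s - 1, rows);      % odd diagonals run downwards
        end
        for rr = r
            v(idx) = A(rr, s - rr);
            idx = idx + 1;
        end
    end
end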

Figure 4.2: Flowgraph of RLE algorithm



4.3 Huffman
The implementation follows section 3.1. The image is encoded based on the
colours present in it, and codes are created for each colour. Huffman codes
are created based on the frequency of occurrence of each colour. This reduces
the size of the image without any loss in its subjective quality [16].

In order to execute the compression, the size of the image is calculated as
the number of rows and columns, and a matrix is formed with the same
dimensions as the image. Histogram data is used to count the number of pixels
for each tonal value in the image; the pixels are counted separately for the
red, green and blue channels. A cell array is created with the matrix
elements from the image. A variable stores the ratio of the number of pixels
from the histogram to the number of elements in the cell array [32]. The
codes generated for the different pixels are stored in the cell array. The
binary codes are converted to decimal numbers, and a matrix is formed with
the same dimensions as the original image. The flowgraph in figure 4.3
describes the working of the algorithm.
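A sketch of this per-channel coding using the huffmandict, huffmanenco and
huffmandeco functions (available in the MATLAB Communications Toolbox,
assumed here; the file name is a placeholder) is given below. The thesis
implementation builds the code table manually as described above.

img = imread('berry.jpg');                     % placeholder file name
red = double(img(:, :, 1));                    % one colour channel
symbols = 0:255;
counts = histcounts(red(:), -0.5:1:255.5);     % histogram of tonal values
p = counts / sum(counts);
keep = p > 0;                                  % huffmandict needs nonzero probabilities
dict = huffmandict(symbols(keep), p(keep));    % variable-length code table
enc = huffmanenco(red(:), dict);               % encoded bit stream
dec = huffmandeco(enc, dict);                  % decode back to pixel values
assert(isequal(reshape(dec, size(red)), red)); % reconstruction is lossless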

4.4 DCT (Discrete Cosine Transform)


The implementation follows section 3.4. The image is compressed using a
square matrix that masks the image matrix. The larger the masked region, the
higher the compression and the loss in quality. With proper masking values in
the matrix, the compression can be kept lossless while still reducing the
size of the image. The masking matrix is predefined and can be changed based
on the requirements [4].

To compress the images using the DCT algorithm, an 8 x 8 pixel block mask is
created to select the DCT coefficients. The entries of this block mask are
either 0 or 1, and it is used for masking the image matrix. The image matrix
is multiplied with the DCT matrix and its transpose. The DCT matrix is
provided by the Image Processing Toolbox in MATLAB as an n x n matrix. The
result of this matrix multiplication is multiplied element-wise with the
block mask, i.e. the masking matrix, and the resulting matrix is again
multiplied with the DCT matrix and its transpose [33] [34]. This results in a
compressed image matrix with the higher-energy coefficients in the upper left
corner of each DCT block. The masking matrix is shown in figure 4.5.
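A sketch of this block-wise masked DCT on a single grey-scale channel, using
the Image Processing Toolbox functions dctmtx and blockproc, is shown below;
the file name is a placeholder, and the mask keeps only the upper-left 4 x 4
coefficients as an illustrative choice, not the exact matrix of figure 4.5.

I = im2double(rgb2gray(imread('berry.jpg')));          % placeholder file name
T = dctmtx(8);                                         % 8 x 8 DCT matrix
mask = zeros(8);  mask(1:4, 1:4) = 1;                  % keep low-frequency coefficients
C = blockproc(I, [8 8], @(blk) T * blk.data * T');     % block-wise forward DCT
C = blockproc(C, [8 8], @(blk) mask .* blk.data);      % apply the masking matrix
Irec = blockproc(C, [8 8], @(blk) T' * blk.data * T);  % block-wise inverse DCT
imshowpair(I, Irec, 'montage');                        % subjective quality comparison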

Figure 4.3: Flowgraph of Huffman algorithm



Figure 4.4: Flowgraph of DCT algorithm



Figure 4.5: Masking Matrix used in DCT

As the elements in the masking matrix other than those in the upper left
corner are zero, only the higher-energy coefficients are kept for
compression, which preserves the image quality. When combined with coding of
the error residual sequence, this DCT implementation produces a lossless
image [35] [36]. The flowgraph in figure 4.4 describes the working of the
algorithm.

4.5 DWT (Discrete Wavelet Transform)


As mentioned in section 3.5, when the wavelet transform is performed,
coefficients for every pixel in the image are produced. There is no
compression at this stage, as it is only a transform. The coefficients
generated are more easily compressed because, statistically, the information
is concentrated in a few coefficients; the whole process is called transform
coding [37] [34]. The coefficients are then quantized and encoded to obtain
the compression. Among the different wavelets available, the Haar wavelet
transform is the fastest and most efficient way to perform the compression.
This wavelet captures both the frequencies and the times at which they occur,
which allows it to operate on a discrete function [38].

To perform the wavelet transformation, four coefficient matrices cA, cH, cV
and cD are produced, corresponding to the level 1 approximation, horizontal
details, vertical details and diagonal details respectively. Similarly, a
level 2 decomposition can be made from the level 1 approximation. The data
from the level 2 decomposition, together with the level 1 decomposition, is
stored in a vector. After the multilevel decomposition, the original image
can be synthesized or reconstructed from the coefficients in this vector
[39]. Based on this decomposed data, the image can be compressed using the
Wavelet Toolbox in MATLAB with the commands 'ddencmp' and 'wpdencmp'. After
compression of the decomposed data stored as vectors, the images are
reconstructed from the coefficients obtained after compression. While
performing the image compression using the decomposed data, the data obtained
after the first level of decomposition is also compressed along with the
second-level decomposed data in the vector in order to prevent any subjective
deterioration of the image [39] [40]. The flowgraph in figure 4.6 describes
the working of the algorithm.

Figure 4.6: Flowgraph of DWT algorithm
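A sketch of this procedure on one grey-scale channel, using the Wavelet
Toolbox functions wavedec2, ddencmp and wpdencmp mentioned above, is given
below (the file name is a placeholder); it is an illustration rather than the
exact thesis code.

I = im2double(rgb2gray(imread('berry.jpg')));          % placeholder file name
[c, s] = wavedec2(I, 2, 'haar');                       % two-level Haar decomposition:
                                                       % coefficient vector c, bookkeeping s
[thr, sorh, keepapp, crit] = ddencmp('cmp', 'wp', I);  % default compression parameters
[Icomp, tree, perf0, perfl2] = ...
    wpdencmp(I, sorh, 2, 'haar', crit, thr, keepapp);  % compression (decomposition is
                                                       % repeated internally by wpdencmp)
imshowpair(I, Icomp, 'montage');                       % subjective quality comparison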

4.6 Implementation in MATLAB


The lossless algorithms are implemented using the MATLAB software. The
algorithms have been executed on a specific dataset of images, Linnaeus 5. As
not all the algorithms were originally designed to compress images, they have
been applied to images to check their compatibility. Different functions have
been created for the implementation, and the compressed images are displayed
at the end of the execution. The compression ratio is calculated to check the
compression efficiency, and the compressed image is checked for subjective
quality. The compression ratio for each algorithm is evaluated for the
different classes of images, which are berries, birds, dogs, flowers,
portraits and other images.
Chapter 5
Results

The algorithms have been executed as described in chapters 3 and 4. The
images compressed using the algorithms are compared in this chapter, and the
compression ratios have been calculated on the same images for each
algorithm. Not all the algorithms could yield lossless compression. The
subjective quality of the images has been considered mainly as a check that
the algorithms have been implemented correctly and have not lost image data,
since true lossless image compression should give back the exact image after
decompression. The compression ratio has been calculated using equation 5.1.

Compression Ratio = Original File Size/Compressed File Size (5.1)
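Equation 5.1 can be evaluated directly from the file sizes on disk, as in the
sketch below (the file names are placeholders):

orig = dir('berry_original.png');                   % original image file
comp = dir('berry_compressed.png');                 % compressed image file
compressionRatio = orig.bytes / comp.bytes;         % equation 5.1
fprintf('Compression ratio: %.2f\n', compressionRatio);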

5.1 Results for LZW (Lempel-Ziv-Welch)


The LZW algorithm implementation returned binary images. As the
implementation uses a dictionary and maps the generated codes to the elements
in the dictionary, colour images could not be returned after compression. The
binary images yield compression ratios in the range of 6.4 to 7.6. As the
compressed output is binary, the compression cannot be considered lossless,
since image data is lost. An example of this lossy compression and
decompression is shown in figures 5.1 and 5.2 respectively. The dictionary in
which the elements of the image are stored for encoding cannot be used to
store all the colour maps. The compression ratios obtained for the LZW
algorithm are given in table 5.1.

Table 5.1: Compression Ratio for LZW Algorithm


Image Compression ratio
Birds 6.93
Berry 7.12
Dogs 6.88
Flower 7.25
Portraits 7.55
Other Images 6.44


Figure 5.1: Berry Image Figure 5.2: Compressed using LZW.

5.2 Results for RLE (Run Length Encoding)


Run Length Encoding uses zig-zag encoding; after applying the algorithm, a
fully reconstructed image is returned with the compression ratios listed in
table 5.2. The reconstructed image is lossless considering the subjective
image quality. All the categories of images in the dataset return lossless
compression. The compression ratios range between 2.5 and 3.2. Figures 5.4,
5.6, 5.8, 5.10 and 5.12 show the results of image compression using the RLE
algorithm.

Table 5.2: Compression Ratio for RLE Algorithm


Image Compression ratio
Birds 2.77
Berry 2.91
Dogs 2.83
Flower 3.13
Portraits 3.06
Other Images 2.53

Figure 5.3: Berry Image Figure 5.4: Result of RLE

Figure 5.5: Bird Image Figure 5.6: Result of RLE



Figure 5.7: Dog Image Figure 5.8: Result of RLE

Figure 5.9: Flower Image Figure 5.10: Result of RLE



Figure 5.11: Lamp Post Figure 5.12: Result of RLE

The subjective image quality is similar to that of the original image used for
compression. The time taken for compiling the code increases with increasing
size and resolution of the image.

5.3 Results for Huffman


Huffman encoding also produces a lossless image after compression; it uses a
probability function to encode and compress. The compression ratios for
Huffman encoding are given in table 5.3. The result of the compression is
lossless considering the subjective image quality. Figures 5.14, 5.16, 5.18,
5.20 and 5.22 show the results of compression using the Huffman algorithm.
The compilation of the Huffman algorithm is faster than that of RLE. The
compression ratios range between 2.8 and 3.8 and are better than those of
RLE.

Table 5.3: Compression Ratio for Huffman Algorithm


Image Compression ratio
Birds 3.27
Berry 2.89
Dogs 3.10
Flower 3.73
Portraits 3.66
Other Images 3.23

Figure 5.13: Berry Image Figure 5.14: Result of Huffman

Figure 5.15: Bird Image Figure 5.16: Result of Huffman



Figure 5.17: Dog Image Figure 5.18: Result of Huffman

Figure 5.19: Flower Image Figure 5.20: Result of Huffman



Figure 5.21: Lamp Post Figure 5.22: Result of Huffman

5.4 Results for DCT (Discrete Cosine Transform)


The Discrete Cosine Transform produces a lossless image, as the masking
matrix is adjusted to produce an image without any loss in quality. The
compression is efficient for particular images, and the compression ratios
are given in table 5.4. The results of the compression are lossless, as the
subjective image quality is similar to the original. Figures 5.24, 5.26,
5.28, 5.30 and 5.32 show the results obtained for DCT compression. The
compression ratios range between 2.1 and 3.1. Though the compression ratios
are a bit lower than those of Huffman, the compilation is faster and the
design and execution of the algorithm are easier. The masking matrix can be
modified to improve the compression ratio, but this may result in a loss of
data, which cannot be considered lossless compression.

Table 5.4: Compression Ratio for DCT Algorithm


Image Compression ratio
Birds 2.17
Berry 2.61
Dogs 2.43
Flower 3.09
Portraits 3.11
Other Images 2.83

Figure 5.23: Berry Image Figure 5.24: Result of DCT

Figure 5.25: Bird Image Figure 5.26: Result of DCT



Figure 5.27: Dog Image Figure 5.28: Result of DCT

Figure 5.29: Flower Image Figure 5.30: Result of DCT



Figure 5.31: Lamp Post Figure 5.32: Result of DCT

The transform coefficients with higher energy are compressed in order to
retain the image quality. The low-energy coefficients are either retained or
set to zero to keep the compression lossless and prevent data loss.

5.5 Results for DWT (Discrete Wavelet Transform)


The Discrete Wavelet Transform results in a lossless image after compression.
The Haar wavelet coefficients are quantized and compressed to give the best
possible image quality. The compression ratios for DWT are given in table
5.5. Figures 5.34, 5.36, 5.38, 5.40 and 5.42 show the results of DWT
compression. The compression ratios range between 2.4 and 3.7, which is
higher than DCT and lower than Huffman. The existing MATLAB commands for
wavelet decomposition and synthesis of the image make the algorithm easier to
implement. The number of levels of decomposition using the Haar wavelet can
also be increased.

Table 5.5: Compression Ratio for DWT Algorithm


Image Compression ratio
Birds 3.17
Berry 2.79
Dogs 2.43
Flower 3.69
Portraits 2.65
Other Images 2.95

Figure 5.33: Berry Image Figure 5.34: Result of DWT

Figure 5.35: Bird Image Figure 5.36: Result of DWT



Figure 5.37: Dog Image Figure 5.38: Result of DWT

Figure 5.39: Flower Image Figure 5.40: Result of DWT



Figure 5.41: Lamp Post Figure 5.42: Result of DWT

5.6 Overview of Results


Except for the LZW algorithm, the algorithms implemented in this study were
able to perform lossless compression. The compression ratios and compilation
speeds in MATLAB differed between algorithms. The Huffman algorithm returned
better compression ratios but had longer compilation times. RLE returned
decent compression ratios, but the implementation is complex and may
terminate without completing for images with higher resolution. The DCT and
DWT implementations returned lossless images with decent compression ratios
and compilation speeds faster than Huffman and RLE. Comparing DCT and DWT,
DWT compression is simpler to implement, without much complexity in
execution.
Chapter 6
Discussion

In order to compare the lossless algorithms that can be used for image compres-
sion, different implementations of image compression have been studied, and the
lossless algorithms that can be modified for colour image compression were im-
plemented.

The LZW algorithm was unable to return a lossless compression of the images;
combinations with other methods, such as splitting into wavelets or combining
with RLE, were experimented with, but the implementation was not successful.
The RLE and Huffman algorithms originally returned grey-scale images. The RLE
algorithm was modified by quantizing the data before encoding with the
algorithm. At first the algorithm encoded only the rows of the image matrix,
and the implementation did not convert the result into a two-dimensional
image with a colour map. The reconstruction of the image could not be done in
the beginning, as there was no specification of how the image should be
formed. The algorithm was then extended to the columns, and zig-zag encoding
was used to encode and decode the two-dimensional image. The algorithm was
then able to return a colour image.

Similar to RLE, the Huffman algorithm at first could not return colour
images. As the algorithm is based on probabilities, the final codes obtained
needed to be decoded and then converted to coordinates of the image. This
conversion was initially not done efficiently, and different functions were
implemented to obtain the reconstructed lossless image. The decoded codes
were then converted to a cell array, which was used to reconstruct the image.

The DCT was made lossless by modifying the masks used when encoding the image
for compression. The DWT was first used with only one level of wavelet
transformation; it was then modified to transform the image a second time
without losing quality.

Chapter 7
Conclusions and Future Work

7.1 Conclusion
The DWT algorithm produces lossless images with decent compression ratios for
all kinds of images used in this study. Images with higher resolution were
also compressed without any interruption in compiling the algorithm. Though
the compression ratios were a bit lower compared to Huffman encoding, the DWT
algorithm is more reliable for high-resolution image compression. The
execution of the DWT algorithm is also simple, without major complications
and complexity compared to the other algorithms used in this study. The DCT
algorithm also produces lossless compressed images with decent compression
ratios, but the matrix conversion and compression of coefficients may become
complex when compressing a larger number of high-resolution images. Modifying
the DCT algorithm is also complex, as changing the masking matrix may not
return lossless compression.

For the Huffman and RLE algorithms, though the compression ratios are a bit
higher than for the DWT and DCT algorithms, the compilation is not stable as
the resolution of the images increases. The execution of Huffman encoding may
encounter a sudden pause or terminate without any output, which makes the
Huffman algorithm unreliable for lossless compression of high-resolution
images. In order to produce lossless colour images using the RLE algorithm,
the execution is very complex, with many functions and encoding and decoding
techniques to be implemented, and compression of high-resolution images fails
in most cases. As the LZW algorithm produces binary images as output, it
cannot be considered lossless, and it requires modifications to implement
lossless compression and generate colour images as output.

Therefore, in this study lossless compression algorithms have been compared,
and considering the subjective image quality it can be concluded that the DWT
algorithm is best suited for lossless compression of high-resolution images,
given the simple execution and implementation of the algorithm in MATLAB
without much complexity. The file size can be reduced by lossless compression
with decent compression ratios. The subjective image quality after lossless
compression and reduction of the file size remains the same as that of the
original images. The DCT algorithm could be combined with other algorithms,
or the matrix implementation could be changed, without making the
implementation complex. The RLE and Huffman algorithms could be modified to
perform lossless compression of high-resolution images without interruptions
and complexities.

7.2 Future Work


To improve the compression ratios of the DWT algorithm, the number of levels
of wavelet decomposition can be increased before the compression is
performed. The DCT algorithm can be combined with DWT or Huffman in order to
produce a lossless image. The masking matrix used to produce the compressed
image can be modified, along with the matrix multiplication process, to
improve the compression without making the process complex to execute. The
Huffman algorithm can be made reliable for higher-resolution images by
attempting to apply the algorithm to the wavelets and synthesizing the image
at the end of the compression. The RLE algorithm can be modified by exploring
encoding techniques other than zig-zag encoding. Combinations of algorithms
can also be attempted to achieve a simpler execution.

In this study the lossless algorithms were only compared. However, the
algorithms can be combined with techniques like machine learning and neural
networks, or with other methods of image compression, to produce more
efficient compression. The lossless compression can also be extended to video
files by compressing them frame by frame. The compression can also be
evaluated for speed in order to modify the algorithms for faster compression
and better transmission over a network.
References

[1] Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and
Luc Van Gool. Practical full resolution learned lossless image compression.
In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 10629–10638, 2019.

[2] Chris Solomon and Toby Breckon. Fundamentals of Digital Image Processing:
A practical approach with examples in Matlab. John Wiley & Sons, 2011.

[3] Kenta Kurihara, Shoko Imaizumi, Sayaka Shiota, and Hitoshi Kiya. An
encryption-then-compression system for lossless image compression stan-
dards. IEICE transactions on information and systems, 100(1):52–56, 2017.
Publisher: The Institute of Electronics, Information and Communication
Engineers.

[4] M. D. Manigandan and S. Deepa. Comprehensive study on the effect of


entropy encoding algorithms on medical image compression. International
Research Journal of Engineering and Technology, 5(4):3460–3468, 2018.

[5] Mohamed Uvaze Ahamed Ayoobkhan, Eswaran Chikkannan, Kannan Ra-


makrishnan, and Saravana Balaji Balasubramanian. Prediction-based Loss-
less Image Compression. In International Conference on ISMAC in Compu-
tational Vision and Bio-Engineering, pages 1749–1761. Springer, 2018.

[6] James Townsend, Thomas Bird, Julius Kunze, and David Barber. HiLLoC:
Lossless Image Compression with Hierarchical Latent Variable Models. arXiv
preprint arXiv:1912.09953, 2019.

[7] Lina J. Karam. Chapter 16 - Lossless Image Compression. In Al Bovik,
editor, The Essential Guide to Image Processing, pages 385–419. Academic
Press, Boston, 2009.

[8] Bin Xiao, Gang Lu, Yanhong Zhang, Weisheng Li, and Guoyin Wang. Loss-
less image compression based on integer Discrete Tchebichef Transform. Neu-
rocomputing, 214:587–593, 2016. Publisher: Elsevier.

[9] Bogdan Rusyn, Oleksiy Lutsyk, Yuriy Lysak, Adolf Lukenyuk, and Lubomyk
Pohreliuk. Lossless image compression in the remote sensing applications.


In 2016 IEEE First International Conference on Data Stream Mining &


Processing (DSMP), pages 195–198. IEEE, 2016.

[10] N. Muthukumaran and R. Ravi. Hardware implementation of architecture


techniques for fast efficient lossless image compression system. Wireless Per-
sonal Communications, 90(3):1291–1315, 2016. Publisher: Springer.

[11] Med Karim Abdmouleh, Atef Masmoudi, and Med Salim Bouhlel. A new
method which combines arithmetic coding with rle for Lossless image com-
pression. 2012. Publisher: Scientific Research Publishing.

[12] S. W. Chiang and L. M. Po. Adaptive lossy LZW algorithm for palettised
image compression. Electronics Letters, 33(10):852–854, 1997. Publisher:
IET.

[13] Paul G. Howard and Jeffrey Scott Vitter. Parallel lossless image compression
using Huffman and arithmetic coding. In Data Compression Conference,
1992., pages 299–308. IEEE, 1992.

[14] Rafeeq Al-Hashemi, Ayman Al-Dmour, Fares Fraij, and Ahmed Musa. A
Grayscale Semi-Lossless Image Compression Technique Using RLE. Journal
of Applied Computer Science & Mathematics, (10), 2011.

[15] Takashi Nishitsuji, Tomoyoshi Shimobaba, Takashi Kakue, and Tomoyoshi


Ito. Fast calculation of computer-generated hologram using run-length en-
coding based recurrence relation. Optics express, 23(8):9852–9857, 2015.
Publisher: Optical Society of America.

[16] Awwal Mohammed Rufai, Gholamreza Anbarjafari, and Hasan Demirel.


Lossy medical image compression using Huffman coding and singular value
decomposition. In 2013 21st Signal Processing and Communications Appli-
cations Conference (SIU), pages 1–4. IEEE, 2013.

[17] Vipul Sharan, Naveen Keshari, and Tanay Mondal. Biomedical image de-
noising and compression in wavelet using MATLAB. International Journal
of Innovative Science and Modern Engineering (IJISME) ISSN, pages 2319–
6386, 2014. Publisher: Citeseer.

[18] G. Chaladze and L. Kalatozishvili. Linnaeus 5 Dataset for Machine Learning.
http://chaladze.com/l5/, 2017.

[19] Mamta Sharma. Compression using Huffman coding. IJCSNS International


Journal of Computer Science and Network Security, 10(5):133–141, 2010.

[20] Rachit Patel, Virendra Kumar, Vaibhav Tyagi, and Vishal Asthana. A fast
and improved Image Compression technique using Huffman coding. In 2016

International Conference on Wireless Communications, Signal Processing


and Networking (WiSPNET), pages 2283–2286. IEEE, 2016.

[21] A. Alarabeyyat, S. Al-Hashemi, T. Khdour, M. Hjouj Btoush, S. Bani-


Ahmad, R. Al-Hashemi, and S. Bani-Ahmad. Lossless image compression
technique using combination methods. Journal of Software Engineering and
Applications, 5(10):752, 2012. Publisher: Scientific Research Publishing.

[22] Hassan K. Albahadily, V. Yu Tsviatkou, and V. K. Kanapelka. Grayscale


image compression using bit plane slicing and developed RLE algorithms.
Int. J. Adv. Res. Comput. Commun. Eng., 6:309–314, 2017.

[23] Amit Birajdar, Harsh Agarwal, Manan Bolia, and Vedang Gupte. Im-
age Compression using Run Length Encoding and its Optimisation. In
2019 Global Conference for Advancement in Technology (GCAT), pages 1–6.
IEEE, 2019.

[24] Nageswara Rao Thota and Srinivasa Kumar Devireddy. Image Compression
Using Discrete Cosine Transform. page 9.

[25] Andrew B. Watson. Image compression using the discrete cosine transform.
Mathematica journal, 4(1):81, 1994. Publisher: Redwood City, Ca.: Ad-
vanced Book Program, Addison-Wesley Pub. Co., c1990-.

[26] Prabhakar Telagarapu, V. Jagan Naveen, A. Lakshmi Prasanthi, and G. Vi-


jaya Santhi. Image compression using DCT and wavelet transformations.
International Journal of Signal Processing, Image Processing and Pattern
Recognition, 4(3):61–74, 2011.

[27] M. Mozammel Hoque Chowdhury and Amina Khatun. Image compression


using discrete wavelet transform. International Journal of Computer Science
Issues (IJCSI), 9(4):327, 2012. Publisher: International Journal of Computer
Science Issues (IJCSI).

[28] Kamrul Hasan Talukder and Koichi Harada. Haar Wavelet Based Ap-
proach for Image Compression and Quality Assessment of Compressed Im-
age. page 8.

[29] B. Eswara Reddy and K. Venkata Narayana. A lossless image compression


using traditional and lifting based wavelets. Signal & Image Processing,
3(2):213, 2012. Publisher: Academy & Industry Research Collaboration
Center (AIRCC).

[30] Nasir Memon, Christine Guillemot, and Rashid Ansari. 5.6 - The JPEG
Lossless Image Compression Standards. In Al Bovik, editor, Handbook of
Image and Video Processing (Second Edition), Communications, Networking
and Multimedia, pages 733–745. Academic Press, Burlington, second edition,
2005.

[31] Daniel Tretter, Nasir Memon, and Charles A. Bouman. 5.7 - Multispectral
Image Coding. In Al Bovik, editor, Handbook of Image and Video Processing
(Second Edition), Communications, Networking and Multimedia, pages
747–760. Academic Press, Burlington, second edition, 2005.

[32] C. Saravanan and R. Ponalagusamy. Lossless grey-scale image compression


using source symbols reduction and Huffman coding. International Journal
of Image Processing (IJIP), 3(5):246, 2009.

[33] Navpreet Saroya and Prabhpreet Kaur. Analysis of image compression algo-
rithm using DCT and DWT transforms. International Journal of Advanced
Research in Computer Science and Software Engineering, 4(2), 2014.

[34] Nushwan Y. Baithoon. Combined DWT and DCT Image Compression Us-
ing Sliding RLE Technique. Baghdad Science Journal, 8(3):832–839, 2011.
Publisher: Baghdad University.

[35] Mehnaz Tabassum. Lossless Image Compression using Discrete Cosine


Transform (DCT). 2017 ISBN 978-6202055864.

[36] Giridhar Mandyam, Nasir Ahmed, and Neeraj Magotra. Lossless Image
Compression Using the Discrete Cosine Transform. Journal of Visual Com-
munication and Image Representation, 8(1):21 – 26, 1997.

[37] D. Mehta and K. Chauhan. Image Compression using DCT and DWT-
Technique. International Journal of Engineering Sciences & Research Tech-
nology, 2(8):2133–2139, 2013.

[38] R. Praisline Jasmi, B. Perumal, and M. Pallikonda Rajasekaran. Comparison


of image compression techniques using huffman coding, DWT and fractal
algorithm. In 2015 International Conference on Computer Communication
and Informatics (ICCCI), pages 1–5. IEEE, 2015.

[39] Y. Sukanya and J. Preethi. Analysis of image compression algorithms using


wavelet transform with GUI in Matlab. IJRET: International Journal of
Research in Engineering and Technology, 2013.

[40] Ya-feng LIU, Bo-qing XU, and Jia JIA. Application of Wavelet Transform
in Image Compression Based on Matlab [J]. Computer and Modernization,
6, 2005.
