Lossless Image Compression Techniques Comparative Study
Ashraf Y. A. Maghari
Islamic University of Gaza
Abstract - In image compression, we reduce the quantity of pixels used in image representation without excessively changing the image's visual appearance. Reducing image size makes images easier to share, transmit and store. This paper examines the performance of a set of lossless data compression algorithms, namely RLE, Delta encoding and Huffman coding, on binary, grey level and RGB images.

The selected algorithms are implemented and evaluated on different aspects: compression ratio, saving storage percentage and compression time. A set of predefined images is used as a test bed, and the performance of the algorithms on these parameters is measured and tabulated.

The results showed that the Delta algorithm is the best in terms of compression ratio and saving storage percentage, while Huffman encoding is the best technique when evaluated by compression time.

Key Words: compression; image; Delta; RLE; Huffman; coding; algorithm

1. INTRODUCTION

With the invention of recent smart computing devices, the generation, transmission and sharing of digital images have increased excessively. As more small electronic devices incorporate cameras and give users the technology to share captured images directly on the Internet, storage devices face a growing need to store huge amounts of image data effectively [1].

Transmission of raw images over different types of networks demands extra bandwidth, because image data contains more information than simple text or document files [1]. Therefore, image size needs to be reduced before images are stored or transmitted. Diverse studies have been conducted on how image data can best be compressed without sacrificing image quality.

Image compression methods can be classified in several ways. One of the most important classification criteria is whether the compression algorithm removes some part of the data in a way that cannot be recovered during decompression. An algorithm that removes part of the data is called lossy data compression, and an algorithm that reproduces exactly what was compressed after decompression is called lossless data compression [2]. Lossy data compression is usually used when perfect consistency with the original data is not necessary after decompression; typical examples are video and picture data. Lossless data compression is used for text files, database tables and medical images, where laws and regulations may require exact reconstruction. Various lossless data compression algorithms have been proposed and used; some of the main techniques are Huffman coding, Run Length Encoding, Delta encoding, Arithmetic encoding and dictionary-based encoding.
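The lossless property can be stated as a round-trip test: decompressing the compressed data must reproduce the input exactly. A minimal sketch of that test, using Python's standard zlib module purely to illustrate the property (zlib is not one of the techniques studied in this paper):

```python
import zlib

# A lossless codec must reproduce the input exactly after a
# compress/decompress round trip.
original = bytes(range(256)) * 64        # arbitrary sample data
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original              # exact reconstruction: lossless
print(len(original), "->", len(compressed), "bytes")
```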
This paper examines the performance of the Run Length Encoding (RLE), Huffman encoding and Delta encoding algorithms. The performance of these algorithms for compressing images is evaluated and compared.

The remainder of this paper is organized as follows: Section II describes the background concepts of image compression and related work. Section III states the problem. Section IV describes the methodology used. Section V details the experimental results and discussion. Section VI presents the conclusion and future work.

2. Background

2.1 IMAGE COMPRESSION

Image compression is the process intended to yield a compact representation of an image, thereby reducing the image storage and transmission requirements by reducing the amount of information required to represent a digital image.
Every image has redundant data. Redundancy means duplication of data in the image: either pixels repeated across the image, or a pattern that occurs frequently in the image. Image compression works by taking advantage of this redundant information, and reducing redundancy helps to achieve a saving of an image's storage space. Image compression is achieved when one or more of these redundancies are reduced or eliminated. In image compression, three basic data redundancies can be identified and exploited; compression is achieved by the removal of one or more of them [3].

2.1.1 Inter Pixel Redundancy

"In image neighboring pixels are not statistically independent. It is due to the correlation between the neighboring pixels of an image. This type of redundancy is called Inter-pixel redundancy. This type of redundancy is sometime also called spatial redundancy. This redundancy can be explored in several ways, one of which is by predicting a pixel value based on the values of its neighboring pixels. In order to do so, the original 2-D array of pixels is usually mapped into a different format, e.g., an array of differences between adjacent pixels. If the original image pixels can be reconstructed from the transformed data set the mapping is said to be reversible" [4].
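As a concrete illustration of such a reversible mapping, the following sketch shows a minimal form of delta encoding, one of the techniques evaluated later (this is an assumed minimal variant, not necessarily the exact implementation used in the experiments). Each pixel is replaced by its difference from the previous one, so correlated neighbours produce values clustered near zero:

```python
def delta_encode(pixels):
    """First pixel kept as-is; every other entry is the difference to its left neighbour."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    """Running sum restores the original pixel values."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 103, 103, 104]
enc = delta_encode(row)            # [100, 1, 2, 0, 1] -> small values near zero
assert delta_decode(enc) == row    # the mapping is reversible
```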
2.1.2 Coding Redundancy

"Consists in using variable length code words selected as to match the statistics of the original source, in this case, the image itself or a processed version of its pixel values. This type of coding is always reversible and usually implemented using lookup tables (LUTs). Examples of image coding schemes that explore coding redundancy are the Huffman codes and the arithmetic coding technique" [3].
2.1.3 Psycho Visual Redundancy

Many experiments on the psychophysical aspects of human vision have proven that the human eye does not respond with equal sensitivity to all incoming visual information; some pieces of information are more important than others. Most of the image coding algorithms in use today exploit this type of redundancy, such as the Discrete Cosine Transform (DCT) based algorithm at the heart of the JPEG encoding standard [3].
2.2 Types of Compression:

Compression can be of two types: lossless compression and lossy compression.

2.2.1 Lossless Compression:

If no data is lost in the compression process and an exact replica of the original image can be retrieved by decompressing the compressed image, then the compression is of the lossless type. Text compression is generally of the lossless type. Lossless compression techniques can be broadly categorized into two classes [6]:

Entropy Based Encoding: In this compression process the algorithm first counts the frequency of occurrence of each pixel value in the image. The compression technique then replaces the pixel values with algorithm-generated codes. These generated codes are fixed for a certain pixel value of the original image and do not depend on the content of the image. The length of a generated code is variable, and it depends on the frequency of the corresponding pixel value in the original image [6] (a sketch follows this list).

Dictionary Based Encoding: This encoding process is also known as substitution encoding. In this process the encoder maintains a data structure known as a 'Dictionary', which is basically a collection of strings. The encoder matches substrings chosen from the original pixel stream against the dictionary; if a successful match is found, the substring is replaced by a reference to the dictionary in the encoded file [6].
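The classic example of entropy-based encoding is Huffman coding, one of the techniques evaluated in this paper. A minimal sketch of how a Huffman code table can be built (an illustration of the idea, not the authors' implementation; a complete coder would also pack the codes into bits and store the table):

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a variable-length prefix code: frequent symbols get shorter codes."""
    # Priority queue of (frequency, tiebreak, subtree); leaves are symbols.
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))  # ...merged
        count += 1
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):          # internal node: recurse left/right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: assign the accumulated code
            codes[node] = prefix or "0"
    walk(heap[0][2])
    return codes

codes = huffman_code(b"aaaaabbbc")
# 'a' (5 occurrences) gets a code no longer than 'c' (1 occurrence)
assert len(codes[ord("a")]) <= len(codes[ord("c")])
```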
2.2.2 Lossy Compression:

Lossy compression is generally used for image, audio and video data, where the compression process discards some of the less important data. An exact replica of the original file cannot be retrieved from the compressed file; decompressing the compressed data yields only a close approximation of the original file [6].

2.3 DATA COMPRESSION TECHNIQUES

Various kinds of image compression algorithms have been proposed to date, many of them lossless. This paper examines the performance of the Run Length Encoding (RLE) algorithm, the Huffman encoding algorithm and the Delta algorithm. The performance of the listed algorithms for compressing images is evaluated and compared.

2.3.1 Run length encoding (RLE):

Run length encoding (RLE) is a method that allows data compression for information in which pixels are repeated constantly. The method is based on the fact that a repeated pixel can be substituted by a number indicating how many times the pixel is repeated, followed by the pixel itself [7].
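A minimal sketch of this idea (an illustration; the evaluated implementation may differ in how counts and pixels are packed):

```python
def rle_encode(pixels):
    """Replace each run of identical pixels with a [count, pixel] pair."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, p])       # start a new run
    return runs

def rle_decode(runs):
    """Expand each [count, pixel] pair back into a run of pixels."""
    out = []
    for count, p in runs:
        out.extend([p] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
enc = rle_encode(row)                 # [[4, 0], [2, 255], [3, 0]]
assert rle_decode(enc) == row         # lossless round trip
```

RLE pays off when runs are long (binary images); on images with few repeated neighbours, the count overhead can make the output larger than the input.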
4. METHODOLOGY:
Figure 2: Screenshot of the software used
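In outline, the evaluation runs each compressor on each test image, records the output size and the wall-clock time, and derives the metrics reported in Section 5. A minimal sketch of such a harness (an assumption about the procedure, not the software shown in Figure 2; `compress` stands for any of the three encoders above):

```python
import time

def evaluate(compress, pixels):
    """Run one compressor over one image and report the metrics used in Section 5.
    Sizes are measured as entry counts, matching the "pixels" columns in the tables."""
    start = time.perf_counter()
    encoded = compress(pixels)
    elapsed_ms = (time.perf_counter() - start) * 1000
    original, compressed = len(pixels), len(encoded)
    return {
        "compression ratio": compressed / original,
        "compression factor": original / compressed,
        "saving percentage": (1 - compressed / original) * 100,
        "compression time (ms)": elapsed_ms,
    }
```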
5. EXPERIMENTAL RESULTS AND DISCUSSION

5.1 Results:

The results of the three techniques (RLE, Delta and Huffman) over the 7 test images are shown in Tables 1, 2 and 3. The tables show the compressed image size in pixels, the compression ratio, the compression factor, the saving percentage and the compression time in ms.
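These metrics follow their standard definitions, which are consistent with the tabulated values:

\[
\text{compression ratio} = \frac{\text{compressed size}}{\text{original size}}, \qquad
\text{compression factor} = \frac{\text{original size}}{\text{compressed size}},
\]
\[
\text{saving percentage} = \Bigl(1 - \frac{\text{compressed size}}{\text{original size}}\Bigr) \times 100\%.
\]

For example, image 1 under RLE (Table 1) gives a ratio of 1225/8100 ≈ 0.15, a factor of 8100/1225 ≈ 6.61, and a saving of about 85%.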
5.1.1 RLE algorithm:

Table 1: RLE algorithm results

Image No. | Image type | Original image size (pixels) | Compressed image size (pixels) | Compression ratio | Compression factor | Saving percentage | Compression time (ms)
1 | Binary | 8100 | 1225 | 0.15 | 6.61 | 85% | 0.25
2 | Binary | 16384 | 2984 | 0.18 | 5.49 | 82% | 0.5
3 | Grey level | 157960 | 137025 | 0.87 | 1.15 | 13% | 6.5
4 | Grey level | 65536 | 57293 | 0.87 | 1.14 | 13% | 2.75
5 | RGB | 473880 | 419885 | 0.89 | 1.13 | 11% | 20.25

[Tables 2 and 3 (Delta and Huffman results) remain only as fragments. Recoverable rows: image 1 (Binary): 8100 to 550 pixels, ratio 0.07, factor 14.73, saving 93%, 0.25 ms; image 4: 65536 to 111066 pixels, ratio 1.69, factor 0.59, saving -69%, 2 ms; image 5 (RGB): 157960 to 928728 pixels, ratio 5.88, factor 0.17, saving -488%, 15.5 ms.]

5.2 Discussion:

[Tables 4, 5 and 6 compare the RLE, Huffman and Delta techniques per image on compression ratio, saving percentage and compression time (ms); only the row for image 6 of the compression-time table is recoverable: 14.75, 33 and 19.75 ms.]
From the compression time comparison presented in Table 6 and shown in Figure 5, we can notice that:

- In the case of binary images, the compression time was very small for all techniques.
- In the case of grey level and RGB images, Huffman encoding was the best technique, followed by RLE, while Delta encoding was the worst.

In the future, more compression techniques will be used and compared over a larger data set of images and video files, until the best compression technique is reached.

6. REFERENCES:

[5] M. Grossberg, I. Gladkova, S. Gottipati, M. Rabinowitz, P. Alabi, T. George, and A. Pacheco, "A comparative study of lossless compression algorithms on multi-spectral imager data," Proc. 2009 Data Compression Conference (DCC), 2009.

[6] A. K. Bhattacharjee, T. Bej, and S. Agarwal, "Comparison study of lossless data compression algorithms for text data," IOSR J. Comput. Eng., vol. 11, no. 6, pp. 15-19, 2013.

[12] N. Efford, "Digital Image Processing: A Practical Introduction Using Java," 2000.
Test images used in the experiments:

Image No. | File name | Image type | Size (pixels)
2 | chess.png | Binary image | 16384
3 | mattgrey.jpg | Grey level image | 157960
4 | matthead.png | Grey level image | 65536
5 | matthew1.jpg | RGB image | 157960
6 | matthew1.png | RGB image | 157960
7 | frplanet.png | RGB image | 65536