
Difference between Lossless Compression and Lossy Compression.

Lossy Compression:
1. The technique involves some loss of information.
2. Data that has been compressed using this technique can't be recovered and reconstructed exactly.
3. Used for applications that can tolerate some difference between the original and reconstructed data.
4. In return for accepting this distortion in the reconstructed data, we obtain a high compression rate.
5. Sound and image compression use lossy compression.
6. More data can be accommodated in the channel.
7. Distortion is introduced.
8. E.g. (i) Telephone system, (ii) Video CD.
9. Example: When viewing a reconstruction of a video sequence, the fact that the reconstruction is different from the original is generally not important as long as the differences do not result in annoying artifacts. Thus, video is generally compressed using lossy compression.

Lossless Compression:
1. Involves no loss of information.
2. If data has been (losslessly) compressed, the original data can be recovered from the compressed data.
3. Used for applications that can't tolerate any difference between the original and reconstructed data.
4. Since no information is lost, the compression ratio is small.
5. Text compression uses lossless compression.
6. Less data can be accommodated in the channel.
7. Distortionless.
8. E.g. (i) Fax machine, (ii) Radiological imaging.
9. Example: Text compression. It is very important that the reconstruction is identical to the original text, as a very small difference can result in statements with very different meanings. Consider "Do not send Money" and "Do now send Money".
Explain image compression with the help of block diagram.

Compression techniques are of two types, lossy and lossless. A typical image compression system comprises two main blocks: an encoder (compressor) and a decoder (decompressor). The image f(x,y) is fed to the encoder, which encodes the image so as to make it suitable for transmission. The decoder receives this transmitted signal and reconstructs the output image f̂(x,y). If the system is error free, f̂(x,y) will be a replica of f(x,y).

The encoder and the decoder are made up of two blocks each. The encoder is made up of a source encoder and a channel encoder. The source encoder removes the input redundancies, while the channel encoder increases the noise immunity of the source encoder's output. The decoder consists of a channel decoder and a source decoder. The function of the channel decoder is to ensure that the system is immune to noise. Hence, if the channel between the encoder and the decoder is noise free, the channel encoder and the channel decoder are omitted.

Source encoder and decoder:

The three basic types of redundancy in an image are inter-pixel redundancy, coding redundancy and psychovisual redundancy. Run-length coding is used to eliminate or reduce inter-pixel redundancy, Huffman encoding is used to eliminate or reduce coding redundancy, while I.G.S. quantization is used to reduce psychovisual redundancy. The job of the source decoder is to get back the original signal. Run-length coding, Huffman encoding and I.G.S. coding are examples of source encoding/decoding techniques.

The input image is first passed through a mapper. The mapper reduces inter-pixel redundancy. The mapping stage is lossless and hence a reversible operation. The output of the mapper is passed to a quantizer block. The quantizer reduces psychovisual redundancy. It compresses the data by eliminating some information and is therefore an irreversible operation. Quantization is what makes a scheme such as JPEG a lossy compression; hence, in the case of lossless compression, the quantizer block is eliminated. The final block of the source encoder is the symbol encoder. This block creates a variable-length code to represent the output of the quantizer. The Huffman code is a typical example of a symbol encoder. The symbol encoder reduces coding redundancy.

The source decoder performs exactly the reverse operations of the symbol encoder and the mapper blocks. It is important to note that the source decoder has only two blocks: since quantization is irreversible, an inverse quantizer block does not exist. Here the channel is assumed noise free, and hence the channel encoder and channel decoder have been ignored.

Channel encoder and decoder:

The channel encoder is used to make the system immune to transmission noise. Since the output of the source encoder has very little redundancy, it is highly susceptible to noise. The channel encoder inserts a controlled form of redundancy into the source encoder output, making it more noise resistant.

  Run length coding


Run-length coding (RLC) is effective when long sequences of the same symbol occur. Run-length coding exploits spatial redundancy by coding the number of symbols in a run. The term run indicates the repetition of a symbol, while the term run-length represents the number of repeated symbols. Run-length coding maps a sequence of numbers into a sequence of symbol pairs. Images with large areas of constant shade are good candidates for this kind of compression. Run-length coding can be classified into (i) 1D run-length coding and (ii) 2D run-length coding.

The first row of the image has 15 grey values. The first code specifies the grey value, followed by the length of the run. Hence the RLE of the first row is simply:

(2, 3) (5, 8) (6, 4)

i.e. grey value 2 repeated 3 times, 5 repeated 8 times and 6 repeated 4 times (3 + 8 + 4 = 15 values). As is evident, run-length encoding achieves considerable compaction in images which have a fairly constant background. RLE eliminates inter-pixel redundancy. Run-length encoding also has a serious problem: for images with few long runs, RLE can double the size of the file.
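The scheme can be sketched as a simple encoder/decoder pair; the sample row below is a hypothetical 15-value row matching the example above:

```python
def rle_encode(row):
    """Encode a sequence as (value, run-length) pairs."""
    pairs = []
    for v in row:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1          # extend the current run
        else:
            pairs.append([v, 1])       # start a new run
    return [(v, n) for v, n in pairs]

def rle_decode(pairs):
    """Expand (value, run-length) pairs back to the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

row = [2]*3 + [5]*8 + [6]*4            # 15 grey values
print(rle_encode(row))                 # [(2, 3), (5, 8), (6, 4)]
assert rle_decode(rle_encode(row)) == row
```

Note how a row with no runs (e.g. alternating values) would produce one pair per pixel, doubling the data -- the failure case mentioned above.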

Given below is a table of 8 symbols and their frequency of occurrence. Give the Huffman code for each symbol.

Symbol     S1    S2    S3    S4    S5    S6    S7    S8
Frequency  0.25  0.15  0.06  0.08  0.21  0.14  0.07  0.04

Symbol   Probability   Codeword   Length of code
S1       0.25          01         2
S2       0.15          001        3
S3       0.06          1010       4
S4       0.08          0000       4
S5       0.21          11         2
S6       0.14          100        3
S7       0.07          0001       4
S8       0.04          1011       4
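The code lengths in the table can be verified by building the Huffman tree programmatically. A minimal stdlib-only sketch that merges the two least probable nodes repeatedly and tracks how many merges (bits) each symbol accumulates; it also computes the average code length Σ pᵢlᵢ:

```python
import heapq
from itertools import count

def huffman_lengths(freqs):
    """Return the Huffman code length assigned to each symbol."""
    tiebreak = count()  # prevents comparing symbol lists on frequency ties
    heap = [(f, next(tiebreak), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)   # two least probable nodes
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:              # every symbol under a merge gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, next(tiebreak), syms1 + syms2))
    return lengths

freqs = {'S1': 0.25, 'S2': 0.15, 'S3': 0.06, 'S4': 0.08,
         'S5': 0.21, 'S6': 0.14, 'S7': 0.07, 'S8': 0.04}
lengths = huffman_lengths(freqs)
avg = sum(freqs[s] * lengths[s] for s in freqs)
print(lengths)          # {'S1': 2, 'S2': 3, 'S3': 4, 'S4': 4, 'S5': 2, 'S6': 3, 'S7': 4, 'S8': 4}
print(round(avg, 2))    # 2.79
```

The lengths agree with the codeword table above, giving an average of 2.79 bits/symbol against 3 bits for a fixed-length code. (Huffman codewords themselves are not unique; only the lengths are determined.)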
What are different types of redundancies in digital image?
Explain in detail.

(i) Redundancy can be broadly classified into Statistical redundancy and Psycho visual
redundancy.
(ii) Statistical redundancy can be classified into inter-pixel redundancy and coding redundancy.
(iii) Inter-pixel can be further classified into spatial redundancy and temporal redundancy.
(iv) Spatial redundancy is the correlation between neighboring pixel values.
(v) Spectral redundancy is the correlation between different color planes or spectral bands.
(vi) Temporal redundancy is the correlation between adjacent frames in a sequence of images in video applications.
(vii) Image compression research aims at reducing the number of bits needed to represent an
image by removing the spatial and spectral redundancies as much as possible.
(viii) In digital image compression, three basic data redundancies can be identified and exploited:
Coding redundancy, Inter-pixel redundancy and Psychovisual redundancy.

 Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the gray levels of an image are coded in a way that uses more code symbols
than absolutely necessary to represent each gray level then the resulting image is
said to contain coding redundancy.
 Inter-pixel Spatial Redundancy:
o Inter-pixel redundancy is due to the correlation between the neighboring pixels in
an image.
o That means neighboring pixels are not statistically independent. The gray levels
are not equally probable.
o The value of any given pixel can be predicted from the values of its neighbors;
that is, they are highly correlated.
o The information carried by individual pixel is relatively small. To reduce the
inter-pixel redundancy the difference between adjacent pixels can be used to
represent an image.
 Inter-pixel Temporal Redundancy:
o Inter-pixel temporal redundancy is the statistical correlation between pixels from
successive frames in video sequence.
o Temporal redundancy is also called inter frame redundancy. Temporal
redundancy can be exploited using motion compensated predictive coding.
o Removing a large amount of redundancy leads to efficient video compression.
 Psychovisual Redundancy:
o The psychovisual redundancies exist because human perception does not involve
quantitative analysis of every pixel or luminance value in the image.
o Eliminating it removes real visual information; this is possible only because that
information is not essential for normal visual processing.
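The coding redundancy described above can be quantified by comparing a fixed-length code with the source entropy. A sketch assuming a hypothetical image with four grey levels that are not equally probable:

```python
import math

# Hypothetical 4-level image: the levels are not equally probable
probs = [0.5, 0.25, 0.125, 0.125]
fixed_bits = 2                                   # fixed-length code: 2 bits/level
entropy = -sum(p * math.log2(p) for p in probs)  # minimum average bits/level
print(entropy)               # 1.75
print(fixed_bits - entropy)  # 0.25 bits/pixel of coding redundancy
```

A variable-length code (such as Huffman) assigns short codewords to probable levels and approaches the 1.75-bit entropy, removing the redundancy.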
State Objective and Subjective Fidelity Criteria of Image
evaluation.
Obtain the Huffman code for the word COMMITTEE.
Explain all the steps in JPEG image compression standard.

JPEG COMPRESSION STEPS


Step 1 (Transformation): Color images are transformed from RGB into a
luminance/chrominance image. (The eye is sensitive to luminance, not chrominance, so the
chrominance part can lose much data and thus can be highly compressed.)
Step 2 (Down Sampling): Down sampling is done for the chrominance components and not for the
luminance component. Down sampling is done at a ratio of 2:1 horizontally and 1:1 vertically
(2h1v). Thus the image reduces in size; since the Y (luminance) component is not touched, there is no
noticeable loss of image quality.
Step 3 (Organizing in Groups): The pixels of each color component are organized in groups of
8×8 pixels called "data units". If the number of rows or columns is not a multiple of 8, the bottom
row and rightmost column are duplicated.
Step 4 (Discrete Cosine Transform): The Discrete Cosine Transform (DCT) is then applied to
each data unit to create an 8×8 map of transformed components. The DCT involves some loss of
information due to the limited precision of computer arithmetic. This means that even before
quantization there will be some loss of image quality, but it is normally small.
Step 5 (Quantization): Each of the 64 transformed components in the data unit is divided by a
separate number called its quantization coefficient (QC) and then rounded to an integer. This
is where information is lost irretrievably; large QCs cause more loss. In general, most JPEG
implementations use the QC tables recommended by the JPEG standard.
Step 6 (Encoding): The 64 quantized transformed coefficients (which are now integers) of each
data unit are encoded using a combination of RLE and Huffman coding.
Step 7 (Adding Header): The last step adds a header containing all the JPEG parameters used and
outputs the result.
The compressed file may be in one of three formats:
1. Interchange format: the file contains the compressed image and all the tables
needed by the decoder (quantization tables and Huffman code tables).
2. Abbreviated format: the file contains the compressed image and may contain a
few tables (since the same encoder-decoder pair is used and they have some tables built
in).
3. Abbreviated format for table-specification data: the file contains just the
tables, followed by a number of compressed images. Since the images are compressed by the same
encoder with the same tables, when they are to be decoded they are sent to the decoder preceded
by one file with the table-specification data.
The JPEG decoder performs the reverse steps. Thus JPEG is a symmetric compression method.
Short note on: JPEG 2000

1. JPEG 2000 standard for the compression of still images is based on the Discrete Wavelet
Transform (DWT). This transform decomposes the image using functions called
wavelets.
2. The basic idea is to have a more localized analysis of the information which is not
possible using cosine functions whose temporal or spatial supports are identical to the
data.

JPEG 2000 Advantages:

3. Better image quality than JPEG at the same file size, or alternatively 25-35% smaller file
sizes at the same quality.
4. Good image quality at low bit rates (even with compression ratios over 80:1).
5. Low-complexity option for devices with limited resources.
6. Scalable image files: no decompression is needed for reformatting. With JPEG 2000, the
image that best matches the target device can be extracted from a single compressed file
on a server. Options include:
a) Image sizes from thumbnail to full size.
b) Grayscale to full 3 channel color.
c) Low quality image to lossless (identical to original image)
7. JPEG 2000 is more suitable for web graphics than baseline JPEG because it supports an
alpha channel (transparency component).
8. Region of Interest (ROI): One can define some more interesting parts of image, which are
coded with more bits than surrounding areas.

Block Diagram of JPEG 2000


A. Segment Generator :

 It checks the tone of the image based on the bpp value. It accepts color or grayscale images.
 Based on the tone of the image, it generates variable-length segments of the image called 'tiles'.
Because of this improved functionality JPEG 2000 can accept non-standard image sizes.
 It divides the complete image into blocks whose dimensions are multiples of 8 pixels, i.e.
blocks of size 8 × 8.
 If the image dimensions are not exact multiples of 8 pixels, extra zeros are added; this process
is called zero padding.

B. Level Shifter:

 It converts the unsigned values of an image into signed values. This process
increases the efficiency of the discrete cosine/wavelet transform.
 If the bpp value is 8 bits, the number of possible levels of an image is 2^8 = 256 (range 0 to
255). This unsigned range is converted into the signed range -128 to 127.
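The level shift amounts to subtracting half the number of levels from every sample; a one-line sketch:

```python
def level_shift(samples, bpp=8):
    """Shift unsigned samples into a signed range centered on zero."""
    offset = 1 << (bpp - 1)           # 128 for 8 bpp
    return [p - offset for p in samples]

print(level_shift([0, 128, 255]))     # [-128, 0, 127]
```

Centering the data on zero reduces the magnitude of the DC term the transform has to carry.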

C. Discrete Wavelet Transform:


 It is a localized image transform. Because of the localized processing, the LF and HF
components get scanned row-wise as well as column-wise. The image is thereby divided into 4 subbands:
LL: LF component row-wise and column-wise
LH: LF component row-wise and HF component column-wise
HL: HF component row-wise and LF component column-wise
HH: HF component row-wise and column-wise
 This DWT process is based on 2-dimensional subband coding.
 1-dimensional subband coding is shown below.

DWT Process
D. Quantizer

 The quantizer rounds off the frequency-domain image with respect to amplitude as well
as the coordinates of the image; the amount of rounding is described by a fixed rounding table.
 Rounding the fractional value of a frequency-domain coefficient to the nearest
integer is called amplitude rounding. In coordinate rounding, unnecessary HF
components of the image are rounded off.

E. Zig- Zag Encoder

 It converts the 2-dimensional frequency-domain image into a one-dimensional matrix by a
scanning process. The zigzag encoder doesn't follow natural (raster) scanning; it follows
diagonal scanning as shown in the figure below:

 Zig Zag Encoder

 Because of this diagonal scanning, the elements of the one-dimensional matrix are
arranged on a frequency basis. The size of the one-dimensional array for each block is 1 × 64.
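The diagonal scan can be sketched as a generic zig-zag traversal (an illustration of the ordering, not a particular codec's implementation). Anti-diagonals of the block are walked in alternating directions:

```python
def zigzag(block):
    """Flatten a square 2-D block into 1-D using diagonal (zig-zag) scan order."""
    n = len(block)
    out = []
    for d in range(2 * n - 1):                 # d indexes the anti-diagonals
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            cells.reverse()                    # even diagonals run bottom-left to top-right
        out.extend(block[i][j] for i, j in cells)
    return out

block = [[0,  1,  2,  3],
         [4,  5,  6,  7],
         [8,  9, 10, 11],
         [12, 13, 14, 15]]                     # 4x4 demo; an 8x8 block yields 1x64
print(zigzag(block))  # [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```

Applied to an 8×8 block the output is the 1 × 64 array mentioned above, with low-frequency coefficients first.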

F. Huffman Encoder

 It converts fixed-length frequency-domain values into variable-length codes
depending on the probability of each frequency-domain value.
 The bulk of the compression in JPEG is achieved in the Huffman encoder block. The
blocks before the Huffman encoder function in such a way that they improve the
compression ratio provided by the Huffman encoder.

G. RLE

 The compression ratio provided by the Huffman encoder is increased by run-length encoding,
because the output of the Huffman encoder contains runs of zeros and ones.
Short note: JPEG

JPEG is a widely used image compression technique. It is used in image processing systems such
as copiers, scanners and digital cameras. These devices often require high-speed image
compression techniques.
JPEG Encoder:
In the figure, DCT stands for the Discrete Cosine Transform and IDCT stands for the Inverse
Discrete Cosine Transform. The input image is partitioned into 8x8 sub-blocks. The DCT is
computed on each of the 8x8 blocks of pixels. The coefficient with zero frequency in both
directions is called the 'DC coefficient' and the remaining 63 coefficients are called the 'AC
coefficients'. The DCT processing step lays the foundation for achieving data compression by
concentrating most of the signal in the lower spatial frequencies. The 64 DCT coefficients are
scalar quantized using uniform quantization tables based upon psychovisual experiments.
After the DCT coefficients are quantized, they are ordered according to the zigzag scan
shown in the figure below. The zigzag scanning is based upon the observation that most of the
high-frequency coefficients are zero after quantization.

JPEG Decoder:
The JPEG standard is a broad standard encompassing several compression and transmission
modes. In order to facilitate future expansion to other modes, this implementation has a very
modular construction. A single thread of control code handles all individual routines (kernels),
which are called multiple times as required by the application. This control code will always be
in 'C' to facilitate changes in the control architecture. The encoder quantizes and Huffman-encodes
the DC coefficients obtained from the DCT module. In JPEG the DC component is differentially
encoded, i.e. the difference between the present and the preceding DC component is computed, and
this difference is quantized and encoded. Quantization involves an inherent division operation
with an element from the quantizer table. In this implementation a reciprocal quantizer table, pre-
computed from the quantizer table, is used.
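The differential DC encoding described above can be sketched with hypothetical DC values from a sequence of blocks; neighboring blocks have similar DC levels, so the differences are small and cheap to code:

```python
# Hypothetical DC coefficients from five consecutive 8x8 blocks
dc = [240, 242, 242, 245, 243]

# Encoder: first value sent as-is, then differences to the previous block
diffs = [dc[0]] + [dc[k] - dc[k - 1] for k in range(1, len(dc))]
print(diffs)  # [240, 2, 0, 3, -2]

# Decoder: a running sum recovers the original DC values exactly
recovered, total = [], 0
for d in diffs:
    total += d
    recovered.append(total)
assert recovered == dc
```

The small differences cluster around zero, which is exactly the distribution a Huffman code exploits best.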
