SOME IDEAS
Data compression refers to the process of
reducing the amount of data required to
represent a given quantity of information.
Data are the means by which information is
conveyed.
Varying amounts of data can be used to represent
the same amount of information.
Using extra data to represent the same information
leads to data redundancy.
Data redundancy is a central issue in digital
image compression.
DATA REDUNDANCY
Let n1 and n2 be the amount of data in two data sets that
represent the same information.
The relative data redundancy RD of the first data set can be
defined as
RD = 1 − 1/CR
where CR = n1/n2 is the compression ratio.
CODING REDUNDANCY
Let rk represent the gray levels of an image and
pr(rk) the probability of occurrence of the gray
level rk. If l(rk) is the number of bits used to
code rk, the average number of bits required to
represent each pixel is
Lavg = Σ_{k=0}^{L−1} l(rk) pr(rk)
EXAMPLE OF VARIABLE LENGTH CODING
Lavg = 2×0.25 + 1×0.47 + 3×0.25 + 3×0.03 = 1.81 bits/pixel
CR = 8/1.81 ≈ 4.42
RD = 1 − 1/4.42 ≈ 0.774
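The numbers above can be reproduced directly; the (probability, code length) pairs below are taken from the example.

```python
# Reproduce the variable-length-coding example.
# Each (probability, code length in bits) pair is one gray level.
symbols = [(0.25, 2), (0.47, 1), (0.25, 3), (0.03, 3)]

# Average code length: Lavg = sum of l(rk) * pr(rk)
l_avg = sum(p * l for p, l in symbols)

# Compression ratio against the original 8-bit representation,
# and the resulting relative data redundancy RD = 1 - 1/CR.
c_r = 8 / l_avg
r_d = 1 - 1 / c_r

print(round(l_avg, 2), round(c_r, 2), round(r_d, 3))  # 1.81 4.42 0.774
```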
INTERPIXEL REDUNDANCY
Usually, the gray levels of successive pixels are
correlated, leading to interpixel redundancy.
This correlation can be exploited by coding the image
in a non-image data format, e.g. run-length coding.
For example, if a particular line of a binary
image contains, in sequence, 63 white, 87 black,
37 white, 5 black, 4 white, 556 black, 62 white
and 210 black pixels, then it can be coded as:
(1,63); (0,87); (1,37); (0,5); (1,4); (0,556); (1,62); (0,210)
So, the 1024 bits of the line can be coded in 88 bits (8 pairs × 11 bits each).
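The run-length pairs above can be produced with a short sketch; the 11-bit figure assumes 1 bit for the pixel value and 10 bits for the run length.

```python
def run_length_encode(bits):
    """Encode a sequence of 0/1 pixels as (value, run length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([b, 1])     # start a new run
    return [(v, n) for v, n in runs]

# The line from the example: 63 white (1), 87 black (0), 37 white, ...
line = [1]*63 + [0]*87 + [1]*37 + [0]*5 + [1]*4 + [0]*556 + [1]*62 + [0]*210
pairs = run_length_encode(line)
print(pairs)            # [(1, 63), (0, 87), (1, 37), (0, 5), (1, 4), (0, 556), (1, 62), (0, 210)]
print(len(pairs) * 11)  # 88 bits, at 11 bits per (value, run length) pair
```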
PSYCHOVISUAL REDUNDANCY
The eye does not respond with equal sensitivity to all
visual information.
Some information has less relative importance than
other information in normal visual processing. This
information is said to be psychovisually redundant.
The elimination of psychovisually redundant data is
referred to as quantization.
Improved Gray Scale (IGS) quantization is commonly
used.
It recognizes the eye's sensitivity to edges and breaks
them up by adding to each pixel a pseudorandom
number generated from the low-order bits of the
neighboring pixel.
EXAMPLE IGS CODE
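A minimal sketch of 4-bit IGS quantization as described above; the carry is skipped when a pixel's high-order bits are all 1, preventing overflow (function and variable names are illustrative).

```python
def igs_quantize(pixels, bits=4):
    """Improved Gray Scale quantization of 8-bit pixels to `bits` bits.

    Each pixel gets the low-order bits of the previous sum added to it
    before truncation, which breaks up false contouring at edges.
    """
    shift = 8 - bits                 # number of low-order bits dropped
    low_mask = (1 << shift) - 1      # 0x0F for 4-bit output
    codes, prev_sum = [], 0
    for p in pixels:
        # Skip the carry when the pixel's high-order bits are all 1.
        if p >> shift == (1 << bits) - 1:
            s = p
        else:
            s = p + (prev_sum & low_mask)
        codes.append(s >> shift)     # IGS code: high-order bits of the sum
        prev_sum = s
    return codes

print(igs_quantize([108, 139, 135, 244]))  # [6, 9, 8, 15]
```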
FIDELITY CRITERIA
As removal of psychovisually redundant data
leads to loss of quantitative visual information,
the nature and extent of information loss need to
be measured.
There are two types of assessment:
Objective fidelity criteria
When the amount of information loss is expressed as a
function of the input (original) image and the output
(decompressed) image, the assessment is said to be based on
objective fidelity criteria.
Subjective fidelity criteria
When the decompressed image is visually assessed by a
cross-section of viewers and their evaluations are averaged,
the assessment is said to be based on subjective fidelity criteria.
OBJECTIVE FIDELITY CRITERIA
A good example is the root-mean-square (rms)
error between the input and output images.
Let f(x,y) denote an input image and fˆ(x,y)
denote the approximation of f(x,y) that results
from successive compression and decompression
of f(x,y).
For any value of x and y, the error e(x,y) is
defined as
e(x,y) = fˆ(x,y) − f(x,y)
So, the total error in the approximated image is
e_tot = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [fˆ(x,y) − f(x,y)]
and the root-mean-square error is
e_rms = [ (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (fˆ(x,y) − f(x,y))² ]^{1/2}
The mean-square signal-to-noise ratio of
the output image is
SNR_ms = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} fˆ(x,y)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [fˆ(x,y) − f(x,y)]²
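These definitions can be checked with a short script; the 2×2 images below are illustrative values, not taken from the slides.

```python
def rms_error(f, f_hat):
    """Root-mean-square error between input image f and output f_hat."""
    m, n = len(f), len(f[0])
    total = sum((f_hat[x][y] - f[x][y]) ** 2
                for x in range(m) for y in range(n))
    return (total / (m * n)) ** 0.5

def snr_ms(f, f_hat):
    """Mean-square signal-to-noise ratio of the output image."""
    num = sum(v * v for row in f_hat for v in row)
    den = sum((f_hat[x][y] - f[x][y]) ** 2
              for x in range(len(f)) for y in range(len(f[0])))
    return num / den

f     = [[100, 102], [98, 101]]   # input image (illustrative)
f_hat = [[101, 100], [98, 103]]   # decompressed approximation
print(rms_error(f, f_hat))        # 1.5
```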
IMAGE COMPRESSION MODEL
f(x,y) → Source Encoder → Channel Encoder → Channel
• Source Encoder: removes input redundancies
• Channel Encoder: increases noise immunity
* For a noise-free channel, the Channel Encoder and Channel Decoder may be omitted
SOURCE ENCODER
f(x,y) → Mapper → Quantizer → Symbol Encoder → Channel
Mapper:
Converts the image into a usually non-image format designed to reduce
interpixel redundancies in the input image.
The operation is generally reversible.
May or may not directly reduce the amount of data (e.g. run-length
coding).
Quantizer:
It reduces the accuracy of the mapper’s output in accordance with some
fidelity criteria.
This stage reduces psychovisual redundancy.
Symbol Encoder:
It creates a fixed- or variable-length code to represent the quantizer
output.
It reduces coding redundancy.
SOURCE DECODER
Channel → Symbol Decoder → Inverse Mapper → fˆ(x,y)
CHANNEL ENCODER AND DECODER
This pair plays an important role when the
channel is noise-prone.
Some controlled redundant bits are added to the
source-encoded data.
Mostly used: Hamming Code
Bit positions: c11 c10 c9 c8 c7 c6 c5 c4 c3 c2 c1
Bit contents:  d7  d6  d5 r4 d4 d3 d2 r3 d1 r2 r1
The even-parity bits r1–r4 (at positions c1, c2, c4, c8) are computed as
c1 = c3 ⊕ c5 ⊕ c7 ⊕ c9 ⊕ c11
c2 = c3 ⊕ c6 ⊕ c7 ⊕ c10 ⊕ c11
c4 = c5 ⊕ c6 ⊕ c7
c8 = c9 ⊕ c10 ⊕ c11
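A sketch of the corresponding (11,7) encoder, assuming the standard even-parity positions and the bit layout above (function and variable names are illustrative).

```python
def hamming_11_7_encode(data_bits):
    """Encode 7 data bits [d7..d1] into an 11-bit Hamming codeword.

    Code bit positions c11..c1 hold d7 d6 d5 r4 d4 d3 d2 r3 d1 r2 r1,
    with even-parity bits at positions 1, 2, 4 and 8.
    """
    assert len(data_bits) == 7
    d7, d6, d5, d4, d3, d2, d1 = data_bits
    c = [0] * 12                       # c[1]..c[11]; c[0] unused
    c[3], c[5], c[6], c[7] = d1, d2, d3, d4
    c[9], c[10], c[11] = d5, d6, d7
    # Each parity bit covers the positions whose binary index contains it.
    c[1] = c[3] ^ c[5] ^ c[7] ^ c[9] ^ c[11]
    c[2] = c[3] ^ c[6] ^ c[7] ^ c[10] ^ c[11]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[8] = c[9] ^ c[10] ^ c[11]
    return c[11:0:-1]                  # codeword as c11 .. c1

print(hamming_11_7_encode([1, 0, 1, 1, 0, 0, 1]))
```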
HUFFMAN CODING
It is a type of variable-length lossless coding.
Step 1:
Create a series of source reductions by ordering the
probabilities of the symbols under consideration.
Combine the two lowest-probability symbols into a
single symbol that replaces them in the next source
reduction.
Step 2:
Code each reduced source, starting with the smallest
source and working back to the original source.
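The two steps above can be sketched with a binary heap; the symbol names and probabilities below are taken from the earlier variable-length-coding example.

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code from a {symbol: probability} mapping.

    Step 1 (source reductions) is the repeated merging of the two least
    probable nodes; Step 2 assigns a 0/1 bit at every merge, working
    back from the smallest reduced source to the original symbols.
    """
    heap = [[p, [s, ""]] for s, p in sorted(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)       # two least probable nodes...
        hi = heapq.heappop(heap)
        for pair in lo[1:]:            # ...each get one more code bit
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(tuple(pair) for pair in heap[0][1:])

probs = {"r0": 0.25, "r1": 0.47, "r2": 0.25, "r3": 0.03}
codes = huffman_code(probs)
l_avg = sum(probs[s] * len(c) for s, c in codes.items())
print(codes, round(l_avg, 2))  # Lavg = 1.81 bits/pixel, as in the example
```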
HUFFMAN CODING – STEP 1
HUFFMAN CODING – STEP 2
ARITHMETIC CODING
LZW CODING
Named after the inventors: Lempel-Ziv-Welch
Eliminates interpixel redundancies.
EXAMPLE: 4×4 8-BIT IMAGE
39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126
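A sketch of the LZW encoder applied to this image; the dictionary starts with the 256 single-value entries, following the usual 8-bit convention.

```python
def lzw_encode(pixels):
    """LZW-encode a sequence of 8-bit values.

    The dictionary starts with the 256 single-value sequences; every
    (current sequence + next pixel) not yet in the dictionary gets the
    next free code, which eliminates interpixel redundancy.
    """
    dictionary = {(i,): i for i in range(256)}
    next_code = 256
    w, out = (), []
    for k in pixels:
        wk = w + (k,)
        if wk in dictionary:
            w = wk                       # keep extending the match
        else:
            out.append(dictionary[w])    # emit code for longest match
            dictionary[wk] = next_code   # add the new sequence
            next_code += 1
            w = (k,)
    if w:
        out.append(dictionary[w])
    return out

# The 4x4 image above, row by row.
image = [39, 39, 126, 126] * 4
print(lzw_encode(image))  # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]
```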
LOSSLESS PREDICTIVE CODING MODEL
[Figure: encoder and decoder block diagrams]
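A minimal sketch of the lossless model, assuming a first-order previous-sample predictor (the model itself permits any predictor).

```python
def encode(samples):
    """Lossless predictive encoder: emit errors e(n) = f(n) - f(n-1).

    Uses a first-order 'previous sample' predictor; the first sample
    is sent as-is so the decoder can start the prediction loop.
    """
    errors, prev = [samples[0]], samples[0]
    for s in samples[1:]:
        errors.append(s - prev)  # small, low-entropy values for smooth images
        prev = s
    return errors

def decode(errors):
    """Invert the encoder by adding each error to the running prediction."""
    samples = [errors[0]]
    for e in errors[1:]:
        samples.append(samples[-1] + e)
    return samples

row = [100, 102, 105, 105, 104]    # illustrative pixel row
print(encode(row))                 # [100, 2, 3, 0, -1]
assert decode(encode(row)) == row  # lossless: exact reconstruction
```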
LOSSY PREDICTIVE CODING MODEL
[Figure: encoder and decoder block diagrams]
DELTA MODULATION
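A sketch of delta modulation, the simplest lossy predictive coder: the predictor is the previous reconstructed sample and the quantizer has only two levels, ±ζ (the step size ζ = 4.5 and the sample values are illustrative).

```python
def delta_modulate(samples, zeta=4.5):
    """Delta modulation: lossy predictive coding with a 1-bit quantizer.

    The predictor is the previous reconstructed sample; the quantizer
    output is +zeta or -zeta depending on the sign of the prediction
    error, so each sample costs a single bit.
    """
    recon = [samples[0]]           # decoder state, tracked by the encoder
    bits = []
    for s in samples[1:]:
        e = s - recon[-1]          # prediction error
        step = zeta if e > 0 else -zeta
        bits.append(1 if e > 0 else 0)
        recon.append(recon[-1] + step)
    return bits, recon

samples = [14, 20, 26, 27, 28]
bits, recon = delta_modulate(samples)
print(bits, recon)  # [1, 1, 1, 1] [14, 18.5, 23.0, 27.5, 32.0]
```

A step size too small for the initial ramp causes slope overload; here the coder overshoots at the end, illustrating granular noise.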
THANK YOU