
UNIT-II

LOSSY COMPRESSION
QUANTIZATION
The Quantization Problem
• Encoder mapping
  • Maps a range of input values to a codeword
  • Irreversible mapping
  • If the source is analog → A/D converter

• Decoder mapping
  • Maps the codeword to a (fixed) value representing the range
  • Knowledge of the source distribution can help us pick a better value to represent each range
  • If the output is analog → D/A converter

• Informally, the encoder mapping is called the quantization process, and the decoder mapping is called the inverse quantization process
Quantization Examples
• 3-bit quantizer: encoder (A/D) and decoder (D/A) mappings (figure)
• Digitizing a sine wave (figure)
Quantization Function
• A quantizer describes the relation between the encoder input values and the decoder output values
• Example of a quantization function: (figure not reproduced)
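The encoder/decoder mappings above can be sketched in code. The following is a minimal illustration of a 3-bit uniform quantizer; the step size, input range, and function names are assumptions chosen for the example, not values from the slides:

```python
# Minimal 3-bit uniform quantizer sketch (step size and range are
# illustrative assumptions, not from the slides).

def encode(x, step=0.25, m=8):
    """Encoder (A/D): map x to one of m codeword indices 0..m-1."""
    i = int(x // step) + m // 2          # which interval x falls into
    return max(0, min(m - 1, i))         # clamp: out-of-range inputs saturate

def decode(i, step=0.25, m=8):
    """Decoder (D/A): map a codeword index to the midpoint of its interval."""
    return (i - m // 2) * step + step / 2

# Encoding then decoding is irreversible: every x in [0.25, 0.5)
# comes back as the single representative value 0.375.
```

Note that every input falling in a given interval decodes to the same representative value, which is exactly the irreversibility of the encoder mapping described above.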
Quantization Problem Formulation
• Input:
  • X – random variable
  • f_X(x) – probability density function (pdf)
• Output:
  • {b_i}, i = 0..M – decision boundaries
  • {y_i}, i = 1..M – reconstruction levels
• Discrete processes are often approximated by continuous distributions
  • Example: Laplacian model of pixel differences
• If the source is unbounded, then the first and the last decision boundaries are ±∞ (they are often called "saturation" values)
Quantization Error
• If the quantization operation is denoted by Q(·), then
  Q(x) = y_i if b_{i−1} < x ≤ b_i
• The mean squared quantization error (MSQE) is then

  σ²_q = ∫_{−∞}^{∞} (x − Q(x))² f_X(x) dx = Σ_{i=1}^{M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx
• Quantization error is also called quantization noise or quantizer distortion, e.g., in the additive noise model:

  quantizer output = quantizer input + quantization noise

Quantized Bit Rate with Fixed-Length Coding
• If the number of quantizer outputs is M, then the rate (per symbol) of the quantizer output is

  R = ⌈log₂ M⌉

• Example: M = 8 → R = 3

Quantized Bit Rate with Variable-Length Coding
• If we allow the use of variable-length codes, such as Huffman codes or arithmetic codes, then along with the size of the alphabet, the selection of the decision boundaries will also affect the rate of the quantizer.
• According to the codeword assignment (table not reproduced here), if the output y_4 occurs, we use 2 bits to encode it, while if the output y_1 occurs, we need 4 bits to encode it.
• Obviously, the rate will depend on how often we have to encode y_4 versus how often we have to encode y_1. In other words, the rate will depend on the probability of occurrence of the outputs.
• If l_i is the length of the codeword corresponding to the output y_i, and P(y_i) is the probability of occurrence of y_i, then the rate is given by

  R = Σ_{i=1}^{M} l_i P(y_i)

• The probabilities {P(y_i)} depend on the decision boundaries {b_i}.
• The probability of y_i occurring is given by

  P(y_i) = ∫_{b_{i−1}}^{b_i} f_X(x) dx
Optimization of Quantization
Therefore the rate R is a function of the decision boundaries and is given by the expression

  R = Σ_{i=1}^{M} l_i ∫_{b_{i−1}}^{b_i} f_X(x) dx

• For a given source input, the partitions we select and the representation for those partitions will determine the distortion incurred during the quantization process.
• The partitions we select and the binary codes for the partitions will determine the rate of the quantizer.
• Thus, the problems of finding the optimum partitions, codes, and representation levels are all linked.
UNIFORM QUANTIZER
