
Uniform Quantization

• Simplest type of quantizer.
• All quantization intervals are of the same size.
• Decision boundaries are evenly spaced (step size ∆), except for the two
outermost intervals.
• Reconstruction:
  – Usually the midpoint of each interval is selected as the representative value.
• Quantizer types:
  – Midrise quantizer: zero is not an output level (representative level).
  – Midtread quantizer: zero is an output level.
• Because the midtread quantizer has zero as one of its output levels, it is
especially useful in situations where it is important that the zero value
be represented exactly, for example in control systems where an accurate
zero is important, and in audio coding schemes where silence periods must
be represented.
• A midrise quantizer is used if the number of levels is even, and a
midtread quantizer if the number of levels is odd.
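As a quick illustration (a minimal Python sketch, not part of the original
slides), the two quantizer types can be written as one-liners; the clipping
of the two outermost intervals is omitted for brevity:

```python
import numpy as np

def midrise(x, delta):
    """Midrise: output levels at +/-delta/2, +/-3*delta/2, ... (zero is NOT a level)."""
    return delta * (np.floor(x / delta) + 0.5)

def midtread(x, delta):
    """Midtread: output levels at 0, +/-delta, +/-2*delta, ... (zero IS a level)."""
    return delta * np.round(x / delta)

x = np.array([-0.9, -0.1, 0.0, 0.1, 0.9])
print(midrise(x, 1.0))    # [-0.5 -0.5  0.5  0.5  0.5]
print(midtread(x, 1.0))   # [-1. -0.  0.  0.  1.]
```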
Midrise vs. Midtread Quantizer

[Figure: transfer characteristics of the midrise and midtread quantizers]
i) Uniform Quantization of Uniformly Distributed Sources

• If the source is uniformly distributed in [−X_max, X_max] and the output is
quantized by an M-level uniform quantizer, then the quantization step size
is given by

$$\Delta \;=\; \frac{2X_{\max}}{M}$$

(Example: for X_max = 4 and M = 4, the decision boundaries are −4, −2, 0, 2, 4
and ∆ = 2(4)/4 = 2.)
and the distortion is

$$\sigma_q^2 \;=\; 2\sum_{i=1}^{M/2} \int_{(i-1)\Delta}^{i\Delta} \left(x - \frac{2i-1}{2}\Delta\right)^2 f_X(x)\,dx \;=\; \frac{\Delta^2}{12}$$

Here

$$f_X(x) \;=\; \frac{1}{X_{\max} - (-X_{\max})} \;=\; \frac{1}{2X_{\max}}$$

and the representative level (midpoint) of the i-th interval is

$$\frac{(i-1)\Delta + i\Delta}{2} \;=\; \frac{(2i-1)\Delta}{2}$$
Alternative Method for MSQE Derivation

We can also compute the MSQE by examining the behaviour of the quantization
error q, given by

q = x − Q(x).

The figure shows the quantization error versus the input signal for an
eight-level uniform quantizer (midrise type). The quantization error
q ∈ [−∆/2, ∆/2]. As the input is uniform, the quantization error is also
uniformly distributed over this interval:
$$f_Q(q) \;=\; \frac{1}{\Delta}, \qquad -\frac{\Delta}{2} \le q \le \frac{\Delta}{2}$$

so that

$$\sigma_q^2 \;=\; \int_{-\Delta/2}^{\Delta/2} q^2\,\frac{1}{\Delta}\,dq \;=\; \frac{\Delta^2}{12}$$
• The SNR of quantization: the signal variance σ_s² for a uniform random
variable taking values in the interval [−X_max, X_max] is obtained as

$$\sigma_s^2 \;=\; \frac{(2X_{\max})^2}{12}$$

so the signal-to-noise ratio is

$$\mathrm{SNR\,(dB)} \;=\; 10\log_{10}\frac{\sigma_s^2}{\sigma_q^2} \;=\; 10\log_{10}\left(\frac{(2X_{\max})^2}{12}\cdot\frac{12}{\Delta^2}\right) \;=\; 10\log_{10} M^2 \;=\; 6.02\,n \ \mathrm{dB}$$

since Δ = 2X_max/M and, for an n-bit quantizer, M = 2^n.

• Thus for every additional bit in the quantizer, we get an increase in the
signal-to-noise ratio of 6.02 dB.

• This is a well-known result and is often used to get an indication of the
maximum gain available if we increase the rate.
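As a numerical check of the 6.02 dB/bit rule (a minimal sketch under the
assumptions above: uniform input, midrise quantizer, ∆ = 2X_max/M):

```python
import numpy as np

rng = np.random.default_rng(0)
x_max = 1.0
x = rng.uniform(-x_max, x_max, 1_000_000)  # uniform source in [-x_max, x_max]

for n_bits in range(2, 9):
    M = 2 ** n_bits                  # number of levels
    delta = 2 * x_max / M            # step size
    # midrise uniform quantizer, clipped to the M levels
    q = delta * (np.floor(x / delta) + 0.5)
    q = np.clip(q, -x_max + delta / 2, x_max - delta / 2)
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - q)**2))
    print(f"{n_bits} bits: SNR = {snr_db:5.2f} dB (theory {6.02*n_bits:5.2f})")
```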
ii) Uniform Quantization of Non-uniform Sources (Gaussian and Laplacian)

• Quite often the sources we deal with do not have a uniform distribution;
however, we still want the simplicity of a uniform quantizer.

• In these cases, even if the sources are bounded, simply dividing the range
of the input by the number of quantization levels does not produce a very
good design.

• This approach becomes totally impractical when we model our sources with
distributions that are unbounded, such as the Gaussian distribution.
Therefore, the PDF (probability density function) of the source has to be
included in the design process.
• If the input is unbounded, the quantization error is no longer bounded
either.

• The bounded error is called granular error or granular noise, while the
unbounded error is called overload error or overload noise.

• In the expression for the MSQE (see below), the first term represents the
granular noise, while the second term represents the overload noise. The
probability that the input will fall into the overload region is called the
overload probability.
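The slide does not reproduce the MSQE expression it refers to. For an M-level
symmetric midrise quantizer with step size ∆ and an unbounded input, the
decomposition conventionally takes the following form (granular term first,
overload term second); this reconstruction follows standard treatments and
should be checked against the course text:

$$\sigma_q^2 \;=\; \underbrace{2\sum_{i=1}^{\frac{M}{2}-1}\int_{(i-1)\Delta}^{i\Delta} \left(x-\frac{2i-1}{2}\Delta\right)^{2} f_X(x)\,dx}_{\text{granular noise}} \;+\; \underbrace{2\int_{\left(\frac{M}{2}-1\right)\Delta}^{\infty} \left(x-\frac{M-1}{2}\Delta\right)^{2} f_X(x)\,dx}_{\text{overload noise}}$$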
Mismatching Effects

• The first mismatch occurs when the assumed distribution type matches the
actual distribution type, but the variance of the input is different from
the assumed variance.

• The second mismatch occurs when the actual distribution type is different
from the distribution type assumed when obtaining the value of the step
size.
Variance Mismatching

[Figure: effect of variance mismatch on quantizer performance]
Adaptive Quantization

• One way to deal with the mismatch problem is to adapt the quantizer to the
statistics of the input.

• There are two main approaches to adapting the quantizer parameters:
1) Offline (forward adaptive approach)
2) Online (backward adaptive approach)
Block Diagram of Adaptive Quantizer

• Forward adaptive (encoder-side analysis):
  – Divide the input source into blocks
  – Analyze block statistics
  – Set the quantization scheme
  – Send the scheme to the decoder via a side channel

• Backward adaptive (decoder-side analysis):
  – Adaptation based on the quantizer output only
  – Adjust ∆ accordingly (encoder and decoder stay in sync)
  – No side channel necessary
Forward Adaptive Quantization

• In forward adaptive quantization, the source output is divided into blocks
of data.

• Each block is analyzed before quantization, and the quantization parameters
are set accordingly.

• The settings of the quantizer are then transmitted to the receiver as side
information.
• The size of the block of data is also important, since it affects some of
the statistical parameters.

• If the size of the block is too large, the adaptation process may not
capture the changes taking place in the input statistics. A large block
size also means more delay, which may not be tolerable in certain
applications.

• On the other hand, small block sizes mean that the side information has to
be transmitted more often, which in turn means the amount of overhead per
sample increases.

• The selection of the block size is a trade-off between the increase in side
information necessitated by small block sizes and the loss of fidelity due
to large block sizes.
• The variance estimation procedure is simple. At time n we use a block of N
future samples to compute an estimate of the variance:

$$\hat{\sigma}_q^2 \;=\; \frac{1}{N}\sum_{i=0}^{N-1} x_{n+i}^2$$

• The mean of the input is assumed to be zero.

• The variance information also needs to be quantized so that it can be
transmitted to the receiver.

• Usually, the number of bits used to quantize the value of the variance is
significantly larger than the number of bits used to quantize the sample
values.
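A minimal Python sketch of forward adaptation (the function names and the
loading factor c are illustrative assumptions, not from the slides): each
block's variance estimate sets a step size that is sent as side information.

```python
import numpy as np

def forward_adaptive_quantize(x, block_size=128, n_bits=3, c=4.0):
    """Forward-adaptive uniform quantizer sketch.
    For each block: estimate the variance, derive a step size, quantize.
    The per-block step size is the side information sent to the decoder."""
    M = 2 ** n_bits
    out, side_info = [], []
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        sigma = np.sqrt(np.mean(block ** 2))   # variance estimate (zero mean assumed)
        delta = 2 * c * sigma / M if sigma > 0 else 1.0
        q = delta * (np.floor(block / delta) + 0.5)              # midrise quantizer
        q = np.clip(q, -(M/2 - 0.5) * delta, (M/2 - 0.5) * delta)
        out.append(q)
        side_info.append(delta)                # transmitted once per block
    return np.concatenate(out), side_info

rng = np.random.default_rng(1)
# a source whose variance changes from block to block
x = np.concatenate([rng.normal(0, s, 128) for s in (0.1, 1.0, 5.0)])
xq, deltas = forward_adaptive_quantize(x)
print("per-block step sizes:", np.round(deltas, 3))
```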
• Disadvantages of forward adaptive quantization:
  – Side information has to be sent, which reduces the compression ratio.
  – Block-size trade-offs: a large block size means more delay, a small block
size means more overhead.
  – Coding delay: no sample in a block can be quantized until the whole block
has been seen.
Backward Adaptation Addresses These Drawbacks as Follows:

• Monitor which quantization cells the past samples fall in (see the sketch
below):
  – Increase the step size if the outer cells are too common
  – Decrease the step size if the inner cells are too common

• Because the adaptation is based on past quantized values:
  – No side information is needed for the decoder to synchronize with the
encoder
  – No delay, because the current sample is quantized based on past samples
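A minimal Python sketch of this step-size update rule, in the spirit of a
Jayant quantizer; the multiplier values are illustrative assumptions, not
from the slides:

```python
import numpy as np

def jayant_quantize(x, n_bits=2, delta0=1.0):
    """Backward-adaptive (Jayant-style) midrise quantizer sketch.
    The step size is rescaled by a multiplier indexed by the PREVIOUS
    output cell, so the decoder can track delta with no side information."""
    M = 2 ** n_bits
    # one multiplier per cell magnitude: inner cells shrink delta, outer cells grow it
    multipliers = np.linspace(0.8, 1.6, M // 2)            # illustrative values
    delta = delta0
    out = np.empty_like(x, dtype=float)
    for i, sample in enumerate(x):
        cell = min(int(abs(sample) / delta), M // 2 - 1)   # 0 = innermost cell
        sign = 1.0 if sample >= 0 else -1.0
        out[i] = sign * (cell + 0.5) * delta               # midrise output level
        delta = min(max(delta * multipliers[cell], 1e-6), 1e6)  # adapt, keep sane
    return out

rng = np.random.default_rng(2)
print(np.round(jayant_quantize(rng.normal(0, 3.0, 8)), 3))
```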
Backward Adaptive Quantization

• In backward adaptive quantization, the adaptation is performed based on the
quantizer output. As this is available to both transmitter and receiver,
there is no need for side information.

• In backward adaptive quantization, only the past quantized samples are
available for use in adapting the quantizer.

• The values of the input are known only to the encoder; therefore, this
information cannot be used to adapt the quantizer.

• If we studied the output of the quantizer for a long period of time, we
could get some idea about the mismatch from the distribution of the output
values.
NON-UNIFORM QUANTIZATION

μ-Law / A-Law

• The μ-law algorithm is a companding algorithm, primarily used in the
digital telecommunication systems of North America and Japan.

• Its purpose is to reduce the dynamic range of an audio signal.

• In the analog domain, this can increase the signal-to-noise ratio achieved
during transmission.

• In the digital domain, it can reduce the quantization error (hence
increasing the signal-to-quantization-noise ratio).

• It is midtread at the origin.

• The A-law algorithm is used in the rest of the world.

• The A-law algorithm provides a slightly larger dynamic range than the μ-law
at the cost of worse proportional distortion for small signals. By
convention, A-law is used for an international connection if at least one
country uses it.

• It is midrise at the origin.
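The slides do not give the companding formulas. As a reference sketch, the
standard μ-law compressor and expander (with the conventional μ = 255) can be
written as follows; cascading them around a uniform quantizer yields a
non-uniform quantizer overall:

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Standard mu-law compressor, x normalized to [-1, 1]:
    F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu)"""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

# compress, quantize uniformly, then expand: a non-uniform quantizer overall
x = np.array([-0.5, -0.01, 0.001, 0.01, 0.5])
y = mu_law_compress(x)
delta = 2 / 256                       # 8-bit uniform quantizer in compressed domain
yq = delta * np.round(y / delta)      # midtread in the compressed domain
print(np.round(mu_law_expand(yq), 4))
```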
pdf-Optimized Quantization (Lloyd-Max Algorithm)

A direct approach for locating the best nonuniform quantizer, if we have a
probability model for the source, is to find the b_i and y_i that minimize
the MSQE:

$$\sigma_q^2 \;=\; \sum_{j=1}^{M} \int_{b_{j-1}}^{b_j} (x - y_j)^2 f_X(x)\,dx \qquad \text{....... Eqn 1}$$
• Setting the derivative of Equation 1 with respect to y_j to zero, and
solving for y_j, we get

$$y_j \;=\; \frac{\int_{b_{j-1}}^{b_j} x\, f_X(x)\,dx}{\int_{b_{j-1}}^{b_j} f_X(x)\,dx} \qquad \text{---- Eqn 2}$$

• The output point for each quantization interval is the centroid of the
probability mass in that interval.
• Taking the derivative with respect to b_j and setting it equal to zero, we
get an expression for b_j as

$$b_j \;=\; \frac{y_{j+1} + y_j}{2} \qquad \text{---- Eqn 3}$$

which can be rearranged as

$$y_{j+1} \;=\; 2b_j - y_j \qquad \text{---- Eqn 4}$$

• The decision boundary is simply the midpoint of the two neighboring
reconstruction levels.

• Solving these two equations will give us the values for the reconstruction
levels and decision boundaries that minimize the mean squared quantization
error.

• From Eqns. 2 and 3, to solve for y_j we need the values of b_j and b_{j−1},
and to solve for b_j we need the values of y_{j+1} and y_j.

• Therefore, the Lloyd-Max algorithm is introduced to solve these two
equations iteratively.
• This algorithm works as follows; let us apply it to a specific situation.

• Suppose we want to design an M-level symmetric midrise quantizer.

• From the figure, we see that in order to design this quantizer, we need to
obtain the reconstruction levels {y_1, y_2, ..., y_{M/2}} and the decision
boundaries {b_1, b_2, ..., b_{M/2−1}}.

• The reconstruction levels {y_{−1}, y_{−2}, ..., y_{−M/2}} and the decision
boundaries {b_{−1}, b_{−2}, ..., b_{−(M/2−1)}} can be obtained through
symmetry.

• The decision boundary b_0 is zero, and the decision boundary b_{M/2} is
simply the largest value the input can take on (for unbounded inputs this
would be ∞).
• Let j = 1 in Equation 2:

$$y_1 \;=\; \frac{\int_{b_0}^{b_1} x\, f_X(x)\,dx}{\int_{b_0}^{b_1} f_X(x)\,dx} \qquad \text{------ Eqn 5}$$

• As b_0 is known to be 0, we have two unknowns in this equation, b_1 and
y_1.

• We make a guess at y_1 and later we will try to refine this guess.

• Using this guess in Equation 5, we numerically find the value of b_1 that
satisfies Equation 5.
• Setting j equal to 1 in Equation 4,

y_2 = 2b_1 − y_1

• This value of y_2 can then be used in Equation 2 with j = 2 to find b_2,
which in turn can be used to find y_3.

• We continue this process until we obtain values for {y_1, y_2, ..., y_{M/2}}
and {b_1, b_2, ..., b_{M/2−1}}.

• The accuracy of all the values obtained to this point depends on the
quality of our initial estimate of y_1.
• We can check this by noting that y_{M/2} should be the centroid of the
probability mass of the interval [b_{M/2−1}, b_{M/2}].

• We know b_{M/2} from our knowledge of the data.

• Therefore, we can compute the centroid integral and compare it with the
previously computed value of y_{M/2}.

• If the difference is less than some tolerance threshold, we can stop.

• Otherwise, we adjust the estimate of y_1 in the direction indicated by the
sign of the difference and repeat the procedure.
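A minimal Python sketch of a Lloyd-Max design for a zero-mean, unit-variance
Gaussian source. Instead of refining the y_1 guess as described above, it
uses the equivalent fixed-point alternation between Eqn 2 (centroids) and
Eqn 3 (midpoints); scipy is assumed to be available:

```python
import numpy as np
from scipy.stats import norm

def lloyd_max(M=8, tol=1e-8, max_iter=1000):
    """Lloyd-Max quantizer for a zero-mean, unit-variance Gaussian source.
    Alternates Eqn 3 (boundaries = midpoints) and Eqn 2 (levels = centroids)
    until convergence."""
    y = np.linspace(-2, 2, M)                 # initial reconstruction levels
    for _ in range(max_iter):
        b = (y[1:] + y[:-1]) / 2              # Eqn 3: midpoints
        edges = np.concatenate(([-np.inf], b, [np.inf]))
        # Eqn 2: centroid of a standard Gaussian over [lo, hi]:
        # E[X | lo < X < hi] = (pdf(lo) - pdf(hi)) / (cdf(hi) - cdf(lo))
        lo, hi = edges[:-1], edges[1:]
        y_new = (norm.pdf(lo) - norm.pdf(hi)) / (norm.cdf(hi) - norm.cdf(lo))
        if np.max(np.abs(y_new - y)) < tol:
            break
        y = y_new
    return edges, y

edges, levels = lloyd_max(M=4)
print("boundaries:", np.round(edges[1:-1], 4))  # expect about [-0.9816, 0, 0.9816]
print("levels:    ", np.round(levels, 4))       # expect about [-1.5104, -0.4528, 0.4528, 1.5104]
```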
Properties of the Lloyd-Max Quantizer

1. The mean values of the input and output of a Lloyd-Max quantizer are
equal.

2. For a given Lloyd-Max quantizer, the variance of the output is always less
than or equal to the variance of the input:

E[(Q(X))²] ≤ E[X²]
Proof: using the centroid condition (Eqn 2), E[X Q(X)] = E[(Q(X))²], so

σ_q² = E[(X − Q(X))²] = E[X²] − 2E[X Q(X)] + E[(Q(X))²] = E[X²] − E[(Q(X))²] ≥ 0.
3. The mean squared quantization error for a Lloyd-Max quantizer is given by

$$\sigma_q^2 \;=\; \sigma_x^2 - \sum_{j=1}^{M} y_j^2\, P(b_{j-1} < X \le b_j)$$

• Let N be the random variable corresponding to the quantization error,
N = X − Q(X). Then for a given Lloyd-Max quantizer

$$E[Q(X)\,N] \;=\; 0$$

• That is, for a given Lloyd-Max quantizer, the quantizer output and the
quantization noise are orthogonal.
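These properties are easy to verify numerically (a sketch using the textbook
M = 4 Lloyd-Max values for a unit Gaussian):

```python
import numpy as np

# Lloyd-Max quantizer for a unit Gaussian, M = 4 (textbook values)
boundaries = np.array([-0.9816, 0.0, 0.9816])
levels = np.array([-1.5104, -0.4528, 0.4528, 1.5104])

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 1_000_000)
qx = levels[np.digitize(x, boundaries)]   # map each sample to its cell's level
n = x - qx                                # quantization noise

print("E[X], E[Q(X)]:", x.mean().round(4), qx.mean().round(4))  # ~equal (property 1)
print("E[Q(X)^2] <= E[X^2]:", (qx**2).mean() <= (x**2).mean())  # property 2
print("E[Q(X) * N]:", (qx * n).mean().round(4))                 # ~0 (orthogonality)
```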