
Differential Pulse Code Modulation (DPCM)
Yenduva Yogeswari
191220053
Digital Communication Project
PCM

Pulse Code Modulation (PCM) is a technique by which an analog signal is converted into digital form so that it can be transmitted over a digital network.
WHY DPCM

Correlated neighboring samples
When the input analog signal is sampled at a rate higher than the Nyquist rate, successive samples become highly correlated: there is very little difference between the amplitudes of adjacent samples. When all of these samples are quantized and encoded, the transmitted signal carries a great deal of redundant information. To use the bandwidth effectively, this redundancy should not be transmitted.

Large-amplitude input signals
The number of quantization levels depends on the signal range: a larger input signal needs more levels, and more levels mean more bits per sample. A smaller input needs fewer levels and hence fewer bits, so more compression can be achieved.

To solve both problems, i.e., to reduce the redundant information and to achieve more compression, only the difference between successive samples is transmitted. This is why the scheme is called differential PCM.
DPCM Principle

Highly correlated samples, when encoded by plain PCM, leave redundant information in the bit stream. To remove this redundancy and obtain a better output, a predicted sample value, derived from the previous outputs, is subtracted from the input, and only the quantized difference is transmitted. Such a process is called the Differential PCM technique.
DPCM Transmitter

x(nTs)   sampled input
x^(nTs)  predicted sample
e(nTs)   difference between the sampled input and the predicted sample, often called the prediction error
eq(nTs)  quantized output
xq(nTs)  predictor input, which is the summer output of the predictor output and the quantizer output
In PCM, x(nTs) is given directly as input to the quantizer, so the quantizer input range may be large.
In DPCM, the input to the quantizer is the error signal e(nTs), whose range is small compared to PCM.
e(nTs) = x(nTs) – x^(nTs)
Quantization output is given by
eq(nTs) = Q[e(nTs)]
eq(nTs) = e(nTs) + q(nTs)
where q (nTs) is the quantization error

Predictor input is the sum of quantizer output and predictor output


xq(nTs)=x^(nTs)+eq(nTs)
xq(nTs) = x^(nTs) + e(nTs) + q(nTs)
xq(nTs) = x^(nTs) + x(nTs) – x^(nTs) + q(nTs)
xq(nTs) = x(nTs) + q(nTs)
The predictor produces the assumed samples from the previous outputs of the transmitter
circuit. The same predictor circuit is used in the decoder to reconstruct the original input.
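The transmitter loop above can be sketched in a few lines of Python. This is a minimal illustration only, assuming the first-order predictor x^(n) = xq(n-1) (initialised to 0) and a round-to-nearest-integer quantizer; practical DPCM systems use more elaborate predictors and quantizers.

```python
def dpcm_encode(samples):
    """DPCM transmitter sketch: quantize the prediction error, not the sample.

    Assumes a first-order predictor x^(n) = xq(n-1), initialised to 0,
    and a round-to-nearest-integer quantizer -- illustrative choices.
    """
    xq_prev = 0              # predictor output x^(n), initially 0
    encoded = []
    for x in samples:
        e = x - xq_prev          # prediction error e(n) = x(n) - x^(n)
        eq = round(e)            # quantized error eq(n) = Q[e(n)]
        encoded.append(eq)
        xq_prev = xq_prev + eq   # predictor input xq(n) = x^(n) + eq(n)
    return encoded

print(dpcm_encode([2.1, 2.2, 2.3, 2.6, 2.7, 2.8]))  # -> [2, 0, 0, 1, 0, 0]
```

Note that the predictor is fed xq(n), not the raw sample x(n), so the transmitter tracks exactly the same state the receiver will reconstruct.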
DPCM Receiver

• The receiver input is the same as the encoded transmitter output (assuming an error-free channel).
• The predictor assumes a value based on the previous outputs. The decoded quantized error is summed with the predictor output to reconstruct the sample:
xq(nTs) = x^(nTs) + eq(nTs)
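Correspondingly, a minimal sketch of the receiver, assuming the same first-order predictor x^(n) = xq(n-1) (initialised to 0) as in the transmitter:

```python
def dpcm_decode(encoded):
    """DPCM receiver sketch: each received quantized error is added to the
    predicted value, xq(n) = x^(n) + eq(n), using the same first-order
    predictor x^(n) = xq(n-1) as the transmitter (illustrative assumption)."""
    xq_prev = 0              # predictor output x^(n), initially 0
    reconstructed = []
    for eq in encoded:
        xq = xq_prev + eq        # xq(n) = x^(n) + eq(n)
        reconstructed.append(xq)
        xq_prev = xq
    return reconstructed

print(dpcm_decode([2, 0, 0, 1, 0, 0]))  # -> [2, 2, 2, 3, 3, 3]
```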
ADVANTAGES
• Bandwidth requirement is less in DPCM compared to PCM.
• Quantization error is reduced because the prediction filter shrinks the quantizer's input range.
• The number of bits used to represent one sample is also reduced compared to PCM.
DISADVANTAGES
• Bit rate is higher than that of delta modulation.
• Practical usage is limited.
• A predictor circuit is required, which is complex.
APPLICATIONS
• The DPCM technique is mainly used in speech, image and audio signal compression. Applying DPCM to signals with strong correlation between successive samples leads to good compression ratios.
• The method is suitable for real-time applications.
SNR of DPCM

(SNR)o = σx2 / σQ2

σx2 is the variance of the original input x(nTs)
σQ2 is the variance of the quantized output eq(nTs)
(SNR)o = (σx2/ σe2)*(σe2/ σQ2)
σe2 is variance of prediction error e(nTs)
σe2/ σQ2 = (SNR)p = prediction error to quantization noise ratio
σx2/ σe2 = Gp = prediction gain produced by the differential quantization
(SNR)o= Gp*(SNR)p
Gp > 1 means there is a gain in SNR due to the differential quantization.
On fixing σx2 , Gp is maximized by minimizing the variance σe2 of the prediction error.
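As an illustrative check of this point, the hypothetical sketch below estimates Gp for a sine wave sampled far above its Nyquist rate. The predictor is simply x^(n) = x(n-1), and quantization is ignored for simplicity; both are assumptions made only for this demonstration.

```python
import math

# A sine sampled at ~200 samples per cycle has highly correlated
# neighbouring samples, so the first-difference (prediction error)
# variance is far smaller than the signal variance, giving Gp >> 1.
N = 1000
x = [math.sin(2 * math.pi * n / 200) for n in range(N)]  # oversampled sine
e = [x[n] - x[n - 1] for n in range(1, N)]               # prediction error

def var(v):
    m = sum(v) / len(v)
    return sum((s - m) ** 2 for s in v) / len(v)

Gp = var(x) / var(e)   # prediction gain sigma_x^2 / sigma_e^2
print(Gp > 1)          # differential coding pays off for correlated samples
```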
EXAMPLE
Assume first order prediction filter x^(nTs) = xq(n-1)
Let input samples = {2.1,2.2,2.3,2.6,2.7,2.8}
ENCODER

x(n)   x^(n)=xq(n-1)   e(n)=x(n)-x^(n)   eq(n)   xq(n)=x^(n)+eq(n)
2.1    0 (initially)    2.1               2       0+2=2
2.2    2                0.2               0       2+0=2
2.3    2                0.3               0       2+0=2
2.6    2                0.6               1       2+1=3
2.7    3               -0.3               0       3+0=3
2.8    3               -0.2               0       3+0=3

Transmitted bit sequence: 2, 0, 0, 1, 0, 0 → 010 000 000 001 000 000

DECODER

eq(n)   x^(n)=xq(n-1)   xq(n)=x^(n)+eq(n)
2       0 (initially)    0+2=2
0       2                2+0=2
0       2                2+0=2
1       2                2+1=3
0       3                3+0=3
0       3                3+0=3

Reconstructed samples: {2, 2, 2, 3, 3, 3}

x(nTs)  = 2.1, 2.2, 2.3, 2.6, 2.7, 2.8
eq(nTs) = 2, 0, 0, 1, 0, 0
e(nTs)  = 2.1, 0.2, 0.3, 0.6, -0.3, -0.2

Mean of x(nTs) = (2.1+2.2+2.3+2.6+2.7+2.8)/6 = 2.45


Mean of e(nTs) = (2.1+0.2+0.3+0.6-0.3-0.2)/6 = 0.45
Mean of eq(nTs)=(2+0+0+1+0+0)/6 = 0.5
Variance of x(nTs)= σx2= ∑((x-2.45)^2)/6 = 0.069167
Variance of e(nTs)= σe2= ∑((e-0.45)^2)/6 = 0.6358
Variance of eq(nTs)= σQ2= ∑((eq-0.5)^2)/6 = 0.5833
SNR = Gp*SNRp = 0.1088*1.0900 = 0.1186
(Here Gp < 1 because the large start-up error e(0) = 2.1 dominates the prediction-error variance over such a short sequence; for long, well-correlated signals Gp exceeds 1.)
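The figures above can be reproduced with a short Python sketch, using the population variance (divide by n) exactly as in these slides:

```python
# Reproduce the worked-example SNR figures (plain Python, no libraries).
def mean(v):
    return sum(v) / len(v)

def variance(v):
    """Population variance, as used in the slides (divide by n)."""
    m = mean(v)
    return sum((s - m) ** 2 for s in v) / len(v)

x  = [2.1, 2.2, 2.3, 2.6, 2.7, 2.8]    # x(nTs), original samples
e  = [2.1, 0.2, 0.3, 0.6, -0.3, -0.2]  # e(nTs), prediction errors
eq = [2, 0, 0, 1, 0, 0]                # eq(nTs), quantized errors

Gp   = variance(x) / variance(e)    # prediction gain, ~0.1088
SNRp = variance(e) / variance(eq)   # error-to-noise ratio, ~1.0900
print(round(Gp * SNRp, 4))          # output SNR, ~0.1186
```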
THANK YOU
