
Differential Pulse Code Modulation (DPCM)

• In the PCM system, each sample is quantized and encoded independently.
• Previous sample values have no effect on the quantization of new samples.
• Sampling a message signal (random process) at (or slightly above) the Nyquist rate results in
• Sample values that are highly correlated
• This means that the previous samples give some information about the next sample
• For instance, if the previous sample values were small, it is highly probable that the next sample value will also be small
• Encoding these highly correlated samples results in a lot of redundant bits
• Correlation information can be utilized to remove the redundancies
• Better waveform coding (fewer bits per sample) ⇒ DPCM
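The correlation claim above is easy to check numerically. The sketch below (illustrative values, not from the slides) samples a sinusoid well above its Nyquist rate and measures the correlation between adjacent samples:

```python
import math

# Illustrative example: sample a 100 Hz sinusoid at 8 kHz
# (well above the 200 Hz Nyquist rate) and measure how strongly
# adjacent samples are correlated.
fs, f0, N = 8000.0, 100.0, 8000
m = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]

def corrcoef(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

rho = corrcoef(m[:-1], m[1:])   # correlation between m[n-1] and m[n]
print(round(rho, 4))
```

For this oversampled sinusoid the adjacent-sample correlation comes out close to 1, which is exactly the redundancy DPCM exploits.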
DPCM (contd.)
• Consider the simplest form of DPCM:
• Take the difference between two adjacent samples (current and previous), then quantize and encode it:
𝑒[𝑛] = 𝑚[𝑛] − 𝑚[𝑛 − 1]        (1)
• Because two adjacent samples are highly correlated, their difference has a small dynamic range (DR) compared to the original sample values
• Since step size ∆ = DR/L (L = number of quantization levels),
for the same quality (same ∆), a small DR requires a smaller L
⇒ fewer bits per sample
At the receiver
𝑚[𝑛] = 𝑚[𝑛 − 1] + 𝑒[𝑛]        (2)
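Equations (1) and (2) can be sketched in code. This is a minimal illustration (function and variable names are hypothetical, and a simple mid-rise uniform quantizer is assumed); note that the encoder predicts from its own *reconstructed* previous sample, exactly as the decoder will, so quantization errors do not accumulate:

```python
# Minimal sketch of first-order DPCM (difference coding), assuming a
# uniform quantizer with step size `delta` (names hypothetical).

def quantize(e, delta):
    """Round the prediction error to the nearest multiple of delta."""
    return delta * round(e / delta)

def dpcm_encode(samples, delta):
    """e[n] = m[n] - reconstructed m[n-1]; the encoder tracks the same
    reconstruction the decoder will, so errors do not accumulate."""
    recon_prev = 0.0
    out = []
    for m in samples:
        eq = quantize(m - recon_prev, delta)
        out.append(eq)
        recon_prev += eq          # m[n] = m[n-1] + e[n]  (Eq. 2)
    return out

def dpcm_decode(errors):
    recon, prev = [], 0.0
    for eq in errors:
        prev += eq                # Eq. (2) at the receiver
        recon.append(prev)
    return recon

samples = [0.0, 0.3, 0.7, 0.9, 0.8, 0.4]
enc = dpcm_encode(samples, delta=0.1)
dec = dpcm_decode(enc)
# Reconstruction error stays bounded by delta/2 at every sample
```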
DPCM (contd.)
• Since the samples are correlated, the current sample 𝑚[𝑛] can be predicted from the previous samples
• Quantize and encode the difference between the sample 𝑚[𝑛] and its predicted value 𝑚̂[𝑛]
𝑒[𝑛] = 𝑚[𝑛] − 𝑚̂[𝑛]        (3)
• Linear predictor of the form
𝑚̂[𝑛] = ∑_{k=1}^{p} 𝜔𝑘 𝑚[𝑛 − 𝑘]   (convolution sum)        (4)
where the 𝜔𝑘 are predictor coefficients (weights), chosen such that the mean square error between the sample and its predicted value is minimized:
𝐸{𝑒²[𝑛]} = 𝐸{(𝑚[𝑛] − 𝑚̂[𝑛])²} = 𝐸{𝑚²[𝑛]} − 2𝐸{𝑚[𝑛]𝑚̂[𝑛]} + 𝐸{𝑚̂²[𝑛]}        (5)
Let the sample values 𝑚[𝑛] be drawn from a stationary random process M(t) with zero mean.
DPCM (contd.)
Let us denote
𝑅𝑀[𝑘] = autocorrelation of the process M(t) for a lag of k
𝑅𝑀[𝑘] = 𝐸{𝑚[𝑛]𝑚[𝑛 − 𝑘]}        (6)
𝑅𝑀[0] = 𝐸{𝑚²[𝑛]} = variance of the process M(t) = average signal power
Putting (4) in (5), we have
𝐸{𝑒²[𝑛]} = 𝑅𝑀[0] − 2 ∑_{k=1}^{p} 𝜔𝑘 𝐸{𝑚[𝑛]𝑚[𝑛 − 𝑘]} + ∑_{j=1}^{p} ∑_{k=1}^{p} 𝜔𝑗 𝜔𝑘 𝐸{𝑚[𝑛 − 𝑗]𝑚[𝑛 − 𝑘]}
          = 𝑅𝑀[0] − 2 ∑_{k=1}^{p} 𝜔𝑘 𝑅𝑀[𝑘] + ∑_{j=1}^{p} ∑_{k=1}^{p} 𝜔𝑗 𝜔𝑘 𝑅𝑀[𝑗 − 𝑘]        (7)
Differentiating (7) w.r.t. 𝜔𝑘 and setting the result equal to zero, we get
∑_{j=1}^{p} 𝜔𝑗 𝑅𝑀[𝑗 − 𝑘] = 𝑅𝑀[𝑘] = 𝑅𝑀[−𝑘],   1 ≤ k ≤ p        (8)
• Solving the above set of equations (usually referred to as Wiener-Hopf equations for
linear prediction), one can find the optimal set of predictor coefficients 𝜔𝑘 .
DPCM (contd.)
• Eqn. (8) can be written in matrix form as
𝑹𝑋 𝝎𝑜 = 𝒓𝑋 (9)
where
𝝎𝑜 = 𝑝1 optimum coefficient vector = 𝜔1 , 𝜔2 , … … … … . , 𝜔𝑝
𝒓𝑋 = 𝑝1 autocorrelation vector = 𝑅𝑋 1 , 𝑅𝑋 2 , … … … … . , 𝑅𝑋 [𝑝]
𝑹𝑋 = 𝑝𝑝 autocorrelation matrix

𝑅𝑋 0 𝑅𝑋 1 𝑅𝑋 𝑝 − 1

… 𝑅 𝑝−2
𝑅𝑋 1 𝑅 0
𝑹𝑋 = …𝑋
. 𝑋

… …
𝑅𝑋 𝑝 − 1 𝑅𝑋 𝑝 − 2 … 𝑅𝑋 0
Assume that inverse of 𝑹𝑋 exist, then
𝝎𝑜 = 𝒓𝑋 𝑹𝑋−1
p coefficients can be uniquely determined by knowing the p+1 autocorrelations.
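The solution of (9) can be sketched numerically. The example below (helper names are hypothetical, and plain Gaussian elimination stands in for a library solver) estimates the autocorrelations from data and solves 𝑹𝑋 𝝎𝑜 = 𝒓𝑋; for an AR(1) test signal x[n] = a·x[n−1] + noise, the first predictor weight should come out close to a:

```python
# Sketch: estimate R_X[0..p] from data and solve R_X w = r_X for the
# optimal predictor coefficients (names hypothetical).
import math, random

def autocorr(x, k):
    """Biased sample autocorrelation estimate of E{x[n] x[n-k]}."""
    return sum(x[n] * x[n - k] for n in range(k, len(x))) / len(x)

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

# AR(1) test signal: the optimal one-step weight should be close to a.
random.seed(0)
a, x = 0.9, [0.0]
for _ in range(20000):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))

p = 2
R = [[autocorr(x, abs(j - k)) for k in range(p)] for j in range(p)]  # Toeplitz
r = [autocorr(x, k) for k in range(1, p + 1)]
w = solve(R, r)   # expect w[0] near 0.9 and w[1] near 0
```

Note that only the p+1 values 𝑅𝑋[0], …, 𝑅𝑋[𝑝] are needed, matching the statement above.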
DPCM (contd.)
• Linear predictor is implemented as a tapped delay line filter
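The tapped-delay-line structure can be sketched directly (class and method names are illustrative): the line holds the p most recent samples and the prediction is their weighted sum, i.e. Eq. (4):

```python
# A p-tap linear predictor as a tapped delay line (FIR) filter.
from collections import deque

class TappedDelayLinePredictor:
    def __init__(self, weights):
        self.w = list(weights)
        # delay line of p past samples, newest first
        self.line = deque([0.0] * len(self.w), maxlen=len(self.w))

    def predict(self):
        # m_hat[n] = sum_k w_k * m[n-k]   (Eq. 4)
        return sum(wk * mk for wk, mk in zip(self.w, self.line))

    def update(self, sample):
        self.line.appendleft(sample)   # shift the delay line

pred = TappedDelayLinePredictor([0.9, -0.1])
pred.update(1.0)        # m[n-2] after the next update
pred.update(2.0)        # m[n-1]
print(pred.predict())   # 0.9*2.0 + (-0.1)*1.0 = 1.7
```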
DPCM (contd.)
Adaptive DPCM
• Speech is a non-stationary process
• Correlation values 𝑅𝑋 [𝑘] changes with time
• Predictor coefficients should be made adaptive
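One common way to adapt the coefficients (a sketch of the general idea, not the specific ADPCM standard) is a least-mean-squares (LMS) update, which nudges each weight in the direction that reduces the instantaneous squared error, 𝜔𝑘 ← 𝜔𝑘 + μ·e[n]·m[n−k]:

```python
# Hypothetical LMS-adaptive linear predictor; parameter values are
# illustrative, not from the slides.
import math

def lms_predict(signal, p=2, mu=0.05):
    w = [0.0] * p
    hist = [0.0] * p            # hist[k-1] holds m[n-k]
    errors = []
    for m in signal:
        m_hat = sum(wk * hk for wk, hk in zip(w, hist))
        e = m - m_hat
        errors.append(e)
        # LMS weight update: w_k <- w_k + mu * e[n] * m[n-k]
        w = [wk + mu * e * hk for wk, hk in zip(w, hist)]
        hist = [m] + hist[:-1]  # shift the delay line
    return w, errors

# On a predictable signal, late prediction errors shrink as w adapts
sig = [math.sin(0.1 * n) for n in range(2000)]
w, errs = lms_predict(sig)
early = sum(e * e for e in errs[:100])
late = sum(e * e for e in errs[-100:])
```

For non-stationary speech the same update simply keeps running, tracking the slowly changing correlations.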
Delta Modulation
• DM is a simplified version of DPCM
• If a message signal is sampled at a rate much higher than the Nyquist rate
• Adjacent samples are highly correlated
• Only a two-level (1-bit) quantizer is needed
• It provides a staircase approximation of the oversampled signal, as shown in the adjacent figure
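The staircase behaviour is easy to sketch: at each oversampled instant the encoder sends only the sign of the error, and the decoder steps the staircase up or down by a fixed step ∆. A minimal illustration (step size and rates are hypothetical):

```python
# Delta modulation as 1-bit DPCM.
import math

def dm_encode(samples, delta):
    approx, bits = 0.0, []
    for m in samples:
        bit = 1 if m >= approx else 0       # sign of m[n] - staircase
        approx += delta if bit else -delta  # staircase approximation
        bits.append(bit)
    return bits

def dm_decode(bits, delta):
    approx, out = 0.0, []
    for bit in bits:
        approx += delta if bit else -delta
        out.append(approx)
    return out

# Oversample a 50 Hz sinusoid at 32 kHz (far above Nyquist), so the
# per-sample slope stays below delta and the staircase can keep up.
fs, f0, delta = 32000.0, 50.0, 0.02
sig = [math.sin(2 * math.pi * f0 * n / fs) for n in range(2000)]
bits = dm_encode(sig, delta)
recon = dm_decode(bits, delta)
```

With this step size the staircase tracks the signal to within a couple of steps; shrinking ∆ or slowing the sample rate until the signal outruns the staircase reproduces the slope overload distortion discussed next.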
DM (contd.)
• Slope overload distortion
• Occurs in regions where 𝑚(𝑡) has a steep slope
• To avoid it:
∆/𝑇𝑠 ≥ max|𝑑𝑚(𝑡)/𝑑𝑡|
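The no-overload condition gives a worked example. For a sinusoid m(t) = A sin(2πft) the maximum slope is 2πfA, so the smallest safe step size is ∆ = 𝑇𝑠 · 2πfA (the numbers below are illustrative, not from a specific problem in the slides):

```python
# Smallest step size that avoids slope overload for a sinusoid,
# from delta / T_s >= max|dm/dt| = 2*pi*f*A. Values are illustrative.
import math

def min_step(A, f, fs):
    """Minimum delta for slope-overload-free DM at sample rate fs."""
    Ts = 1.0 / fs
    max_slope = 2 * math.pi * f * A
    return Ts * max_slope

delta = min_step(A=1.0, f=1000.0, fs=64000.0)
print(round(delta, 5))   # 2*pi*1000/64000 ~= 0.09817
```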

• Occur in the region of low slope


