
Chapter 2

2.1 Sampling Theorem
1. A finite-energy, strictly band-limited signal (i.e., containing no frequencies higher than $f_m$ hertz) is completely described by the samples (values) of the signal taken at instants of time separated by $1/2f_m$ seconds (transmitter end).
2. A finite-energy, strictly band-limited signal (i.e., containing no frequencies higher than $f_m$ hertz) may be completely recovered from its samples (values) taken at a rate of $2f_m$ samples per second (receiver end).

2.2 Proof of Sampling Theorem

To prove the sampling theorem, consider an analog signal $x(t)$ (Fig. 2.1(a)) which is continuous in both time and amplitude. The spectrum of $x(t)$ is band-limited to $f_m$ Hz, as shown in Fig. 2.1(b). Suppose that we sample the signal $x(t)$ at a uniform interval of every $T_s$ seconds. This uniform sampling can be accomplished by multiplying $x(t)$ by the impulse train $\delta_{T_s}(t)$ of Fig. 2.1(c), which results in the sampled signal $x_\delta(t)$ shown in Fig. 2.1(e). The sampled signal consists of impulses spaced every $T_s$ seconds (the sampling interval). We thus obtain an infinite sequence of samples spaced $T_s$ seconds apart with amplitudes $x(nT_s)$, where $n = 0, \pm 1, \pm 2, \ldots$. We refer to $T_s$ as the sampling period and its reciprocal $f_s = 1/T_s$ as the sampling rate. The mathematical relation between the sampled signal $x_\delta(t)$ and the original analog signal $x(t)$ is

$$x_\delta(t) = x(t)\,\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s) \qquad (2.1)$$

where $\delta(t - nT_s)$ is the Dirac delta function located at time $t = nT_s$. In Eq. (2.1), each delta function in the series is weighted by the corresponding sample value of the input signal $x(t)$, i.e., $x(nT_s)$.

As the impulse train $\delta_{T_s}(t)$ is a periodic signal with period $T_s$, it can be expressed as an exponential Fourier series with coefficients $C_n = 1/T_s$, as shown by Eq. (2.2):

$$\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} C_n\, e^{jn2\pi f_s t} = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} e^{jn2\pi f_s t} \qquad (2.2)$$

Therefore,

$$x_\delta(t) = x(t)\,\delta_{T_s}(t) = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} x(t)\, e^{jn2\pi f_s t} \qquad (2.3)$$

To find $X_\delta(f)$, the Fourier transform of $x_\delta(t)$, we take the Fourier transform of the summation in Eq. (2.3). By the frequency-shifting property, the transform of the $n$th term is shifted by $nf_s$:

$$x(t) \xrightarrow{FT} X(f), \qquad x(t)\,e^{jn2\pi f_s t} \xrightarrow{FT} X(f - nf_s)$$

Therefore,

$$X_\delta(f) = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} X(f - nf_s) \qquad (2.4)$$

This means that the spectrum $X_\delta(f)$ consists of $X(f)$ (the spectrum of $x(t)$), scaled by the constant $1/T_s$ and repeating periodically with period $f_s = 1/T_s$ Hz, as shown in Fig. 2.1(f).

Fig. 2.1. (a) A continuous-time signal. (b) Spectrum of the continuous-time signal. (c) Impulse train as sampling function. (d) Multiplier. (e) Sampled signal. (f) Spectrum of the sampled signal.

If we are to reconstruct $x(t)$ from $x_\delta(t)$, we should be able to recover $X(f)$ from $X_\delta(f)$. This is possible if there is no overlap between successive cycles of $X_\delta(f)$. Fig. 2.1(f) shows that this requires

$$f_s \ge 2 f_m \qquad (2.5)$$

Also, the sampling interval is $T_s = 1/f_s$. Therefore,

$$T_s \le \frac{1}{2 f_m} \qquad (2.6)$$

Thus, as long as the sampling frequency $f_s$ is greater than twice the signal bandwidth $f_m$ (in hertz), $X_\delta(f)$ will consist of non-overlapping repetitions of $X(f)$. When this is true, Fig. 2.1(f) shows that $x(t)$ can be recovered from its samples by passing the sampled signal $x_\delta(t)$ through an ideal low-pass filter of bandwidth $f_m$ Hz. The minimum sampling rate $f_s = 2f_m$ required to recover $x(t)$ from its samples $x_\delta(t)$ is called the Nyquist rate for $x(t)$, and the corresponding sampling interval $T_s = 1/2f_m$ is called the Nyquist interval for $x(t)$, given by Eqs. (2.5) and (2.6) respectively.
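The spectral replication of Eq. (2.4) can be seen numerically. The following minimal Python sketch (all parameter values are assumed for illustration) multiplies a band-limited tone by an impulse train on a dense time grid and locates the resulting spectral peaks at $f_m$ and $f_s \pm f_m$:

```python
import numpy as np

# A minimal numerical sketch of Eq. (2.4) under assumed example parameters:
# multiplying x(t) by an impulse train replicates X(f) at multiples of fs.
f_m, f_s = 50.0, 400.0            # signal frequency and sampling rate (Hz)
dt = 1e-4                         # dense grid approximating continuous time
t = np.arange(0.0, 1.0, dt)
x = np.cos(2 * np.pi * f_m * t)   # band-limited "analog" signal

train = np.zeros_like(x)          # impulse train delta_Ts(t) on the dense grid
step = int(round(1.0 / (f_s * dt)))
train[::step] = 1.0 / dt          # unit-area impulses spaced Ts apart

Xd = np.abs(np.fft.rfft(x * train))
freqs = np.fft.rfftfreq(len(t), d=dt)
# Peaks appear at f_m and at n*f_s +/- f_m, i.e. 50, 350, 450, 750, 850 Hz, ...
for f0 in (f_m, f_s - f_m, f_s + f_m):
    k = np.argmin(np.abs(freqs - f0))
    print(f"{freqs[k]:6.0f} Hz : {Xd[k]:.1f}")
```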

2.3 Signal Reconstruction

The process of reconstructing a continuous-time signal $x(t)$ from its samples is known as interpolation. Fig. 2.1 gives a constructive proof that a signal $x(t)$ band-limited to $f_m$ Hz can be reconstructed (interpolated) exactly from its samples. This is done by passing the sampled signal through an ideal low-pass filter of bandwidth $f_m$ Hz.

The expression for the sampled signal is written as

$$x_\delta(t) = x(t)\,\delta_{T_s}(t) \qquad (2.7)$$

As seen from Eq. (2.3), the sampled signal contains a component $(1/T_s)\,x(t)$ for $n = 0$. To recover $x(t)$ [or $X(f)$], the sampled signal given by Eq. (2.1) must be passed through an ideal low-pass filter of bandwidth $f_m$ Hz and gain $T_s$. Such an ideal filter has the transfer function

$$H(f) = T_s\,\mathrm{rect}\!\left(\frac{f}{2 f_m}\right) \qquad (2.8)$$

2.3.1 Ideal Reconstruction

To recover the analog signal from its uniform samples, the ideal interpolation filter transfer function found in Eq. (2.8) is shown in Fig. 2.2(a).

The impulse response of this filter, i.e., the inverse Fourier transform of $H(f)$, is

$$h(t) = F^{-1}\!\left[T_s\,\mathrm{rect}\!\left(\frac{f}{2 f_m}\right)\right] = 2 f_m T_s\,\mathrm{sinc}(2\pi f_m t) \qquad (2.9)$$

Assuming that sampling is done at the Nyquist rate, $T_s = 1/2f_m$, so that $2 f_m T_s = 1$. Substituting this value into Eq. (2.9), we have

$$h(t) = \mathrm{sinc}(2\pi f_m t) \qquad (2.10)$$


Fig. 2.2. (a) Ideal reconstruction filter. (b) Impulse response of the ideal reconstruction filter. (c) Reconstructed signal.

The impulse response $h(t)$ given in Eq. (2.10) is shown in Fig. 2.2(b). It can be seen from Fig. 2.2(b) that $h(t) = 0$ at all Nyquist sampling instants, i.e., at $t = \pm n/2f_m$, except at $t = 0$. Now when the sampled signal $x_\delta(t)$ is applied at the input of the filter, the output will be $x(t)$. Each sample in $x_\delta(t)$, being an impulse, produces a sinc pulse of height equal to the strength of the sample. Addition of the sinc pulses produced by all the samples results in $x(t)$. For instance, the $k$th sample of the input $x_\delta(t)$ is the impulse $x(kT_s)\,\delta(t - kT_s)$; the filter output for this impulse is $x(kT_s)\,h(t - kT_s)$. Therefore, the filter output for $x_\delta(t)$, i.e., $x(t)$, may be expressed as the sum given in Eq. (2.11):

$$x(t) = \sum_k x(kT_s)\,h(t - kT_s) \qquad (2.11)$$

$$x(t) = \sum_k x(kT_s)\,\mathrm{sinc}[2\pi f_m(t - kT_s)] \qquad (2.12)$$

$$x(t) = \sum_k x(kT_s)\,\mathrm{sinc}(2\pi f_m t - k\pi) \qquad (2.13)$$
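The interpolation formula of Eq. (2.12) can be checked numerically. The sketch below (signal, bandwidth, and durations are assumed example values) sums sinc pulses centered on the samples; the small residual error comes from truncating the infinite sum to a finite record:

```python
import numpy as np

# A minimal sketch of Eq. (2.12), assuming Nyquist-rate sampling of an
# example band-limited signal; sums sinc pulses to interpolate x(t).
def sinc_interpolate(samples, Ts, t):
    """Reconstruct x(t) from uniform samples x(k*Ts) via Eq. (2.12)."""
    k = np.arange(len(samples))
    # np.sinc(u) = sin(pi*u)/(pi*u), so np.sinc((t - k*Ts)/Ts) equals the
    # document's sinc(2*pi*fm*(t - k*Ts)) when Ts = 1/(2*fm)
    return np.array([np.sum(samples * np.sinc((ti - k * Ts) / Ts)) for ti in t])

f_m = 10.0                                  # assumed signal bandwidth (Hz)
Ts = 1.0 / (2 * f_m)                        # Nyquist interval
k = np.arange(200)
x_k = np.sin(2 * np.pi * 3.0 * k * Ts)      # 3 Hz tone, well inside the band

t = np.linspace(2.0, 8.0, 500)              # interior points (avoid edge truncation)
x_hat = sinc_interpolate(x_k, Ts, t)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * 3.0 * t)))
print(f"max reconstruction error: {err:.2e}")
```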
Flat top and Natural sampling (self study)

2.4 Practical Considerations

Various practical issues arise during the conversion of an analog signal to digital form using sampling. Some of the issues occurring in practical implementations are:

1. Unrealizable Ideal Reconstruction Filters

If the signal is sampled at exactly the Nyquist rate (i.e., $f_s = 2f_m$), the spectrum $X_\delta(f)$ consists of repetitions of $X(f)$ without any gap between successive cycles, as shown in Fig. 2.11(a). To recover $x(t)$ from $x_\delta(t)$, we need to pass the sampled signal through an ideal low-pass filter. However, an ideal low-pass filter is unrealizable in practice, so a filter with a certain amount of roll-off must be used. In doing so, during recovery a portion of the sideband of the neighboring spectral cycle in $X_\delta(f)$ is also passed along with the required message-signal spectrum, causing distortion.

Fig. 2.11. (a) Distortion due to a non-ideal reconstruction filter. (b) Sampling at a rate higher than Nyquist to compensate for a non-ideal reconstruction filter.

One way to minimize this effect is to sample the signal at a rate higher than the Nyquist rate (i.e., $f_s > 2f_m$). This results in $X_\delta(f)$ consisting of repetitions of $X(f)$ with a finite band gap between successive cycles, as shown in Fig. 2.11(b). We can then recover $X(f)$ from $X_\delta(f)$ by using a low-pass filter with a gradual cutoff characteristic, as shown in Fig. 2.11(b) by the dotted line.

2. Aliasing

According to the Nyquist criterion, the original signal can be recovered from its samples only if the signal is sampled at a rate greater than or equal to the Nyquist rate (i.e., $f_s \ge 2 f_m$). If the signal is sampled at a rate lower than the Nyquist rate (i.e., $f_s < 2 f_m$), then successive cycles of the spectrum $X_\delta(f)$ overlap with each other, as shown in Fig. 2.12.
Fig. 2.12. Aliasing effect.

When the signal is under-sampled (i.e., $f_s < 2 f_m$), some amount of spectral folding or aliasing results. From Fig. 2.12 it is clear that, due to aliasing, it is not possible to recover the original signal $x(t)$ from the sampled signal $x_\delta(t)$ by just using a low-pass filter, since the spectral cycles overlap in the folding region, resulting in distortion. Signals encountered in real life are usually time-limited rather than band-limited, so deciding on a sampling frequency is always a problem. Therefore, the signal is first passed through a low-pass filter that blocks all frequencies above $f_m$ Hz. This low-pass filter, known as an anti-aliasing (or pre-alias) filter, band-limits the signal to $f_m$ Hz, making it easier to decide the sampling frequency.
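A short numerical illustration of aliasing, with assumed example frequencies: a 70 Hz tone sampled at 100 Hz produces exactly the same samples as a 30 Hz tone.

```python
import numpy as np

# A minimal sketch (assumed example values): undersampling a tone makes it
# indistinguishable from a lower-frequency alias at f_s - f.
f = 70.0          # tone frequency (Hz)
f_s = 100.0       # sampling rate, below the Nyquist rate 2*f = 140 Hz
n = np.arange(64)
undersampled = np.cos(2 * np.pi * f * n / f_s)
alias = np.cos(2 * np.pi * (f_s - f) * n / f_s)   # 30 Hz tone

print(np.allclose(undersampled, alias))   # True: the sample sets are identical
```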

Subsampling (go through video)

Chapter 3

Go through PAM and PWM

PCM is not modulation in the conventional sense. Modulation is the process in which some parameter of a carrier is varied according to the instantaneous value of the message signal. In PCM, the only stage in which this happens is sampling.
Fig. 3.9. (a) Block diagram of a PCM encoder: the analog signal is sampled (giving a discrete signal), quantized (giving a digital signal), and encoded to produce the encoded output. (b) Block diagram of a PCM decoder: the encoded input is decoded, passed through a holding circuit, and then through a low-pass filter.

1.2.1 Sampling

The first step of analog-to-digital conversion, or PCM, is sampling. The incoming message signal (analog signal) is sampled with a train of narrow rectangular pulses (as studied under flat-top sampling).

According to the Nyquist criterion, to ensure perfect reconstruction of the message at the receiver, the sampling rate must be greater than twice the highest frequency component $f_m$ of the message signal. In practice, a low-pass pre-alias filter is used at the front end of the sampler in order to exclude frequencies greater than $f_m$ before sampling. Thus, sampling converts a continuously varying message signal (analog signal) into a limited number of discrete values per second (a discrete signal).

1.2.2 Quantization

An analog signal, such as voice, has a continuous range of amplitudes, and therefore its samples cover a continuous amplitude range. In other words, within a finite amplitude range of the signal we find an infinite number of amplitude levels.
Fig. 3.10. Operation of quantization: an infinite number of amplitude levels is mapped to a finite set of levels.

However, it is not necessary to transmit the exact amplitudes of the samples. The human senses (ears or eyes) can detect only finite intensity differences. This means that the original analog signal may be approximated by a signal constructed of discrete amplitudes. The existence of a finite number of discrete amplitude levels is a basic condition of PCM. Clearly, if we assign the discrete amplitude levels with sufficiently close spacing, we may make the approximated signal practically indistinguishable from the original analog signal.

Quantization is the process of transforming the sample amplitude $x(nT_s)$ of a message signal $x(t)$ at time $t = nT_s$ into a discrete amplitude $x_q(nT_s)$ taken from a finite set of possible amplitudes, as shown in Fig. 3.10.

Quantizers can be of uniform or non-uniform type. In a uniform quantizer the representation levels are uniformly spaced, whereas in a non-uniform quantizer they are not.

1. Uniform quantization

Fig. 3.11. Uniform quantization: the amplitude range from $X_L$ to $X_H$ is divided into equal intervals of step size $\Delta$, with a quantization level at the center of each interval.

Fig. 3.11 shows the quantization levels of a message signal $x(t)$. The operation of uniform quantization can be described by the steps given below (a numerical sketch follows this list):

- Consider a message signal $x(t)$ with peak values $X_H$ and $X_L$. The separation between $X_H$ and $X_L$ is divided into $N$ equal intervals, each of size $\Delta$, where $\Delta$ is called the step size and is given by

$$\Delta = \frac{X_H - X_L}{N}$$

In Fig. 3.11 a specific example is given with $N = 8$ levels. At the center of each of these intervals we allocate a quantization level, denoted $x_0, x_1, x_2, \ldots, x_7$.

- The quantized signal $x_q(t)$ is generated by allocating to the message signal the quantization level of the interval within which it falls: the message signal within the range $X_0$ is represented by the single level $x_0$, that in the range $X_1$ by the single level $x_1$, and so on.

- Thus the quantized output does not change continuously with the message signal, but only jumps from one level to another. For example, while the message signal remains in the range $X_0$, the quantizer output is $x_0$; when the message signal crosses from $X_0$ into $X_1$, the quantizer output jumps to $x_1$.
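A minimal sketch of the uniform quantizer described above, with assumed example values ($X_L = -1$, $X_H = +1$, $N = 8$):

```python
import numpy as np

# A minimal sketch of the uniform quantizer described in the steps above,
# using assumed example values X_L = -1, X_H = +1, N = 8 levels.
def uniform_quantize(x, x_lo, x_hi, n_levels):
    delta = (x_hi - x_lo) / n_levels                 # step size
    idx = np.floor((x - x_lo) / delta)               # interval index 0..N-1
    idx = np.clip(idx, 0, n_levels - 1)
    return x_lo + (idx + 0.5) * delta                # level at interval center

t = np.linspace(0, 1, 8, endpoint=False)
x = np.sin(2 * np.pi * t)
xq = uniform_quantize(x, -1.0, 1.0, 8)
print(np.round(xq, 3))   # every output is one of the 8 center levels
```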

2. Non-uniform quantization

In uniform quantization, once the step size is fixed, the quantization noise power remains constant, since it depends only on the step size. However, the signal power is not constant; it is proportional to the square of the signal amplitude. Thus for weak signals the signal power is small while the quantization noise power stays the same, resulting in a decrease of the signal-to-quantization-noise ratio (SQNR). A non-uniform quantizer has a nonlinear characteristic and a step size that is not constant: the step size is varied according to the signal level to keep the SQNR adequately high. Non-uniform quantization is practically achieved through a process called companding. The word companding is derived from two words, compression and expansion:

Companding = Compressing + Expanding

It is difficult to implement non-uniform quantization directly because the changing levels of the input signal are not known in advance. Thus, non-uniform quantization is implemented as a compressor followed by a uniform quantizer. At the receiver, the reverse process of compression, called expansion, is carried out.

Fig. 3.13. Companding process: compressor, uniform quantizer, and expander in cascade.

The input-output characteristic of a non-uniform quantizer is shown in Fig. 3.14. It shows the companding characteristic, which is the combination of the compressor and expander characteristics.

Fig.3.14 Compander characteristics

The compressor curve provides higher gain to weak input signals and smaller gain to strong input signals. The expander characteristic is exactly the inverse of the compressor characteristic, resulting in recovery of the original amplitudes at the receiver.

Types of companding characteristics

A number of companding laws have been proposed. Among these, the two widely used compression laws are:

i. µ-law companding
For a given input $x$, the expression for the compressed output $y$ is

$$y = F(x) = \frac{\log\!\left(1 + \mu \dfrac{|x|}{x_{max}}\right)}{\log(1 + \mu)}, \qquad 0 \le \frac{|x|}{x_{max}} \le 1 \qquad (3.1)$$

In normalized form,

$$y_n = \frac{\log(1 + \mu x_n)}{\log(1 + \mu)}, \qquad 0 \le x_n \le 1 \qquad (3.2)$$

In Eq. (3.2), $x_n$ and $y_n$ are the normalized input and output, and the expression holds only for positive amplitudes. For representation of both positive and negative amplitudes of the input and output, the signum function $\mathrm{Sgn}(x)$ is used, resulting in Eq. (3.3):

$$y_n = \frac{\log(1 + \mu |x_n|)}{\log(1 + \mu)}\,\mathrm{Sgn}(x_n), \qquad -1 \le x_n \le 1 \qquad (3.3)$$

The compression characteristic $F(x)$ is continuous, with approximately linear behavior for low input levels of $x$ and logarithmic behavior for high input levels. When $\mu = 0$, the µ-law reduces to the special case of simple uniform quantization.

Fig.3.15 µ law compression characteristics

The normalized µ-law compression characteristic $y_n$ is shown in Fig. 3.15 for $0 \le x_n \le 1$, along with curves for three different values of µ. A practical value of µ is 255. The µ-law is used for PCM telephone systems in the United States, Canada, and Japan.
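A minimal sketch of µ-law companding per Eq. (3.3) and its exact inverse (the expander), using the practical value µ = 255:

```python
import numpy as np

# A minimal sketch of mu-law companding, Eq. (3.3), and its inverse,
# with the practical value mu = 255 mentioned above.
MU = 255.0

def compress(x_n):
    """mu-law compressor for normalized input in [-1, 1], Eq. (3.3)."""
    return np.sign(x_n) * np.log1p(MU * np.abs(x_n)) / np.log1p(MU)

def expand(y_n):
    """Expander: the exact inverse of the compressor."""
    return np.sign(y_n) * np.expm1(np.abs(y_n) * np.log1p(MU)) / MU

x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
y = compress(x)
print(np.round(y, 3))                 # weak inputs get proportionally more gain
print(np.allclose(expand(y), x))      # True: expansion recovers the amplitudes
```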

ii. A-law companding

For a given input $x$, the expression for the compressed output $y$ is

$$y = F(x) = \begin{cases} \dfrac{A\,|x|/x_{max}}{1 + \log A}, & 0 \le \dfrac{|x|}{x_{max}} \le \dfrac{1}{A} \\[2ex] \dfrac{1 + \log\!\left(A\,|x|/x_{max}\right)}{1 + \log A}, & \dfrac{1}{A} \le \dfrac{|x|}{x_{max}} \le 1 \end{cases} \qquad (3.4)$$

In normalized form,

$$y_n = F(x_n) = \begin{cases} \dfrac{A x_n}{1 + \log A}, & 0 \le x_n \le \dfrac{1}{A} \\[2ex] \dfrac{1 + \log(A x_n)}{1 + \log A}, & \dfrac{1}{A} \le x_n \le 1 \end{cases} \qquad (3.5)$$

In Eq. (3.5), $x_n$ and $y_n$ are the normalized input and output, and the expression holds only for positive amplitudes. The compression characteristic $F(x)$ is piecewise: a linear segment for low input levels of $x$ and a logarithmic characteristic for high input levels.

Fig.3.16 A law compression characteristics

When $A = 1$, the A-law reduces to the special case of simple uniform quantization. The normalized A-law compression characteristic $y_n$ is shown in Fig. 3.16 for $0 \le x_n \le 1$, along with curves for three different values of A. A practical value of A is 87.56. A-law companding is used for PCM telephone systems in Europe. Both the µ-law and A-law curves have odd symmetry (i.e., $y_n(-x_n) = -y_n(x_n)$).

1.2.3 Encoding

After sampling and quantization, the analog message signal is limited to a discrete set of values, but it is still not suited to transmission over a telephone line or radio link. To make the transmitted signal more robust to noise, interference, and other channel impairments, we require encoding of the discrete set of samples obtained after sampling and quantization.

One of the most widely used encodings is the binary code. In a binary code, each discrete sample value is assigned a unique combination of 1's and 0's called a codeword. Suppose that each codeword consists of n bits. Then, using such a code, we can represent a total of $2^n$ distinct numbers. For example, if the sampled values are quantized into one of 4 available levels, then each level can be represented by a 2-bit codeword (since $2^2 = 4$).

1.2.4 Signal to Quantization Noise ratio for a PCM system (Linear quantization)
In PCM the maximum quantization error ($q_e$) is $\pm\Delta/2$, where Δ is the step size (i.e., $-\Delta/2 \le q_e \le \Delta/2$ with equal probability). This gives the uniform PDF

$$f_Q(q_e) = \frac{1}{b - a} = \frac{1}{(\Delta/2) - (-\Delta/2)} = \frac{1}{\Delta} \qquad (3.6)$$

Fig. 3.17. Uniform PDF of the quantization noise.

For a uniform distribution the mean is 0, and the mean-square quantization noise equals the variance:

$$P_q = \frac{\Delta^2}{12} \qquad (3.7)$$

where $P_q$ is the average power of the quantization noise. It is seen from Eq. (3.7) that the quantization error power in uniform quantization depends only on the step size.
Suppose we use a sinusoidal modulating signal that swings between $X_H$ and $X_L$, where $X_H = +x_m$ and $X_L = -x_m$, and suppose the number of levels used is N. The quantization step size is then

$$\Delta = \frac{X_H - X_L}{N} = \frac{x_m - (-x_m)}{N} = \frac{2 x_m}{N} \qquad (3.8)$$

where N is the total number of representation levels. In PCM we code each representation level in binary form using 1's and 0's, so we can write $N = 2^n$, where n is the number of bits per sample. Substituting into Eq. (3.8),

$$\Delta = \frac{2 x_m}{2^n} = \frac{x_m}{2^{n-1}} \qquad (3.9)$$

Substituting this value of Δ into Eq. (3.7), the average noise power is

$$P_q = \frac{\Delta^2}{12} = \left(\frac{x_m}{2^{n-1}}\right)^2 \frac{1}{12}$$

$$P_q = \frac{x_m^2}{3 \cdot 4^n} \qquad (3.10)$$

Assuming that the average signal power is $\overline{x^2}$, the signal-to-quantization-noise ratio is

$$SQNR = \frac{\overline{x^2}}{P_q} = \frac{\overline{x^2}}{x_m^2 / (3 \cdot 4^n)} = \frac{\overline{x^2}}{x_m^2}\cdot 3 \cdot 4^n \qquad (3.11)$$

The ratio $\overline{x^2}/x_m^2$ can be replaced by the normalized signal power

$$v = \frac{\overline{x^2}}{x_m^2} \qquad (3.12)$$

Then the SQNR is

$$SQNR = v \cdot 3 \cdot 4^n \qquad (3.13)$$

In terms of dB, the SQNR is

$$SQNR(\mathrm{dB}) = P(\mathrm{dB}) + 10\log 3 + 10\,n\log 4 \qquad (3.14)$$

$$SQNR(\mathrm{dB}) = P(\mathrm{dB}) + 4.8 + 6n \qquad (3.15)$$

where $P(\mathrm{dB}) = 10\log_{10} v$. This means that each extra bit (n) used to represent each quantization level increases the SQNR by 6 dB.
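The 6 dB-per-bit rule of Eq. (3.15) can be verified numerically. The sketch below assumes a full-scale sinusoidal input, for which $v = 1/2$ and hence $P(\mathrm{dB}) \approx -3$ dB:

```python
import numpy as np

# A minimal numerical check of Eq. (3.15), assuming a full-scale sine input
# (v = 1/2, so P(dB) ~ -3 dB): each added bit buys about 6 dB of SQNR.
def sqnr_db(n_bits):
    xm = 1.0
    t = np.linspace(0, 1, 100_000, endpoint=False)
    x = xm * np.sin(2 * np.pi * t)                 # full-scale sinusoid
    delta = 2 * xm / 2**n_bits                     # step size, Eq. (3.9)
    xq = np.clip(np.floor(x / delta) * delta + delta / 2, -xm, xm)
    return 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))

for n in (6, 8, 10):
    print(n, round(sqnr_db(n), 1), round(-3.0 + 4.8 + 6 * n, 1))  # measured vs Eq. (3.15)
```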

Signaling rate (r), or baud rate, is the number of symbols transmitted per second. For a binary waveform the signaling rate is expressed in bits per second:

$$r = \frac{\text{Bits}}{\text{Sample}} \times \frac{\text{Samples}}{\text{Second}}$$

$$r = n f_s \qquad (3.25)$$

Consider a band-limited signal with highest frequency component $f_m$, as shown in Fig. 3.18. We know from the Nyquist criterion that $f_s \ge 2 f_m$. Nyquist's formula for multi-level signaling over a noiseless channel gives

$$r = 2 B \log_2 M \qquad (3.26)$$

Thus, for binary signaling (i.e., M = 2) in PCM,

$$B_{PCM} = \frac{\text{Signaling rate } (r)}{2}$$

$$B = \frac{r}{2} \qquad (3.27)$$

$$B = \frac{1}{2}\, n f_s = \frac{1}{2}\, n\,(2 f_m)$$

Therefore, the minimum bandwidth required as per the Nyquist criterion for zero ISI is

$$B = n f_m \qquad (3.28)$$

In practice,

$$B = (1 + \alpha)\, n f_m \qquad (3.29)$$

where $\alpha$ is the roll-off factor. For example, a voice signal band-limited to $f_m = 4$ kHz and encoded with n = 8 bits per sample requires a minimum PCM bandwidth of $B = 8 \times 4 = 32$ kHz.

Go through what is TDM

The signaling rate of a TDM system multiplexing N message signals is

$$R = N f_s \qquad (3.32)$$

In order to satisfy the Nyquist criterion, $f_s \ge 2 f_m$. Thus the signaling rate of a TDM system is given by

$$R = 2 N f_m \qquad (3.33)$$

The minimum transmission bandwidth of a TDM channel is given by

$$B = \frac{1}{2} \times \text{Signaling rate } (R)$$

Therefore, the transmission bandwidth is given by

$$B = \frac{1}{2}\,(2 N f_m) = N f_m \qquad (3.34)$$

Hence, the minimum bandwidth is

$$B = N f_m \qquad (3.35)$$

Go through T1 and E1 hierarchy

DPCM transmitter

Fig. 3.24. DPCM transmitter.

Fig. 3.24 shows the transmitter of a DPCM system. Let $x(nT_s)$ be the sampled sequence of the message signal $x(t)$, sampled at intervals of $T_s$. Let $\hat{x}(nT_s)$ be the prediction of $x(nT_s)$, derived from the previous samples up to $x[(n-1)T_s]$. The comparator evaluates the difference between $x(nT_s)$ and $\hat{x}(nT_s)$ to determine what is known as the prediction error $e(nT_s)$:

$$e(nT_s) = x(nT_s) - \hat{x}(nT_s) \qquad (3.36)$$

This error signal is quantized to produce the quantized version of the error signal, $e_q(nT_s)$. The quantized output can be represented as

$$e_q(nT_s) = e(nT_s) + q(nT_s) \qquad (3.37)$$

where $q(nT_s)$ is the quantization error. The quantized error signal $e_q(nT_s)$ and the previous predicted signal are added to generate the input to the prediction filter, which then generates the current predicted output; this prediction comes closer and closer to the actual sampled signal after every cycle. Since the quantized error signal $e_q(nT_s)$ is small, it can be encoded using a small number of bits. Thus, the number of bits per sample is reduced in DPCM.

Now, the prediction filter input $x_q(nT_s)$ is obtained by

$$x_q(nT_s) = \hat{x}(nT_s) + e_q(nT_s) \qquad (3.38)$$

Substituting the value of $e_q(nT_s)$ from Eq. (3.37) into Eq. (3.38), we get

$$x_q(nT_s) = \hat{x}(nT_s) + e(nT_s) + q(nT_s) \qquad (3.39)$$

From Eq. (3.36) we can write

$$e(nT_s) + \hat{x}(nT_s) = x(nT_s) \qquad (3.40)$$

Substituting this into Eq. (3.39), we get

$$x_q(nT_s) = x(nT_s) + q(nT_s) \qquad (3.41)$$

From Eq. (3.41) it can be concluded that, irrespective of the properties of the prediction filter, the quantized version of the signal, $x_q(nT_s)$, is the sum of the original sample value $x(nT_s)$ and the quantization error $q(nT_s)$. The quantized error signal $e_q(nT_s)$ is transmitted over the channel.

DPCM receiver

Fig. 3.25. DPCM receiver.

The receiver shown in Fig. 3.25 is identical to the shaded portion of the transmitter. The inputs in both cases are also the same (i.e., $e_q(nT_s)$). Therefore, the predictor output must be $\hat{x}(nT_s)$ (the same as the predictor output at the transmitter), and the receiver output (which is the predictor input) is also the same:

$$x_q(nT_s) = x(nT_s) + q(nT_s) \qquad (3.42)$$

Thus we receive the desired signal $x(nT_s)$ plus the quantization noise $q(nT_s)$. The received samples $x_q(nT_s)$ are decoded and passed through a low-pass filter for digital-to-analog conversion.

Since the correlation between successive samples is high, fewer levels are required for encoding, and thus the required bandwidth is reduced. The main disadvantage is the complexity of implementing the prediction filter.

Prediction filter

The prediction filter is in general a tapped-delay-line (transversal) filter, in which the predicted value $\hat{x}(nT_s)$ is modeled as a linear combination of past values of the quantized input.

Fig. 3.26. Transversal filter (tapped delay line) used as a linear predictor.

The output of the prediction filter is the linear sum of the N previous quantized samples scaled by the predictor coefficients:

$$\hat{x}(nT_s) = \sum_{k=1}^{N} a_k\, x_q[(n-k)T_s] \qquad (3.43)$$

where N is the order of the prediction filter.
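A minimal DPCM sketch, assuming the simplest first-order predictor ($\hat{x}(nT_s) = x_q[(n-1)T_s]$, i.e., $a_1 = 1$ in Eq. (3.43)) and a uniform error quantizer; it confirms that the reconstruction error stays bounded by the quantization error alone, per Eq. (3.41):

```python
import numpy as np

# A minimal first-order DPCM sketch (assumed predictor x_hat[n] = xq[n-1])
# with a coarse uniform quantizer for the prediction error.
def dpcm_encode(x, delta=0.05):
    xq_prev, codes = 0.0, []
    for sample in x:
        e = sample - xq_prev                     # prediction error, Eq. (3.36)
        eq = delta * np.round(e / delta)         # quantized error
        codes.append(eq)
        xq_prev = xq_prev + eq                   # xq[n] = x_hat[n] + eq[n], Eq. (3.38)
    return np.array(codes)

def dpcm_decode(codes):
    return np.cumsum(codes)                      # same accumulator as the encoder

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)
x_rec = dpcm_decode(dpcm_encode(x))
print(f"max |error|: {np.max(np.abs(x_rec - x)):.3f}")   # bounded per Eq. (3.41)
```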

2.2.1 Delta Modulation (DM)

In the DPCM waveform-coding technique, the number of bits to be transmitted is considerably reduced in comparison to PCM. However, in reducing the transmission bandwidth, the complexity of the system is increased compared to PCM, so a complexity-bandwidth trade-off must be maintained. Delta Modulation (DM) makes this trade-off in the opposite direction: transmission bandwidth is traded off for reduced system complexity. DM is less complex than DPCM, but at the cost of increased bandwidth compared to DPCM.

In DM, the incoming message signal $x(t)$ is oversampled at a rate much higher than the Nyquist rate. This considerably reduces the sample-to-sample amplitude difference, in other words generating high correlation between successive samples, so 1-bit quantization of the difference signal becomes possible. Delta modulation can thus be viewed as a 1-bit DPCM scheme.

Some points on delta modulation:

1. We sample the signal at a rate higher than the Nyquist rate to get better correlation between successive samples.
2. Each sample is represented by a single bit.

Fig. 3.27. Delta modulation.

In DM we use a first-order predictor which, as seen earlier, is just a time delay of $T_s$ (the sampling interval). Hence the present sample value is compared with the previous sample value, and the resulting information of amplitude increase or decrease is transmitted. The input signal $x(t)$ is approximated by a step signal with a fixed step size. The difference between the input signal $x(t)$ and the staircase approximation is confined to two levels, +Δ and −Δ. If the difference is positive, the approximated signal is increased by one step, Δ, and if the difference is negative, the approximated signal is reduced by Δ. The encoding is done such that when the staircase steps down, a '0' is transmitted, and when it steps up, a '1' is transmitted. Hence, for each sample, only one bit is transmitted. Fig. 3.27 shows the analog signal $x(t)$ and its staircase approximation produced by delta modulation.

DM transmitter

Fig. 3.28. DM transmitter.

The error between the present sample of $x(t)$ and the last approximated sample is given as

$$e(nT_s) = x(nT_s) - \hat{x}(nT_s) \qquad (3.44)$$

where $e(nT_s)$ is the error at the present sample, $x(nT_s)$ is the present sample of $x(t)$, and $\hat{x}(nT_s)$ is the latest approximation from the staircase waveform. If the present value of the staircase output is $u(nT_s)$, then $\hat{x}(nT_s) = u[(n-1)T_s]$.

Now let us define a term $b(nT_s)$ as the output of the one-bit quantizer, which produces only +Δ and −Δ:

$$b(nT_s) = \Delta\,\mathrm{Sgn}[e(nT_s)] \qquad (3.45)$$

Thus, the sign of the step Δ is determined by the sign of $e(nT_s)$. In other words,

$$b(nT_s) = \begin{cases} +\Delta & \text{if } x(nT_s) \ge \hat{x}(nT_s) \quad \text{[transmit binary '1']} \\ -\Delta & \text{if } x(nT_s) < \hat{x}(nT_s) \quad \text{[transmit binary '0']} \end{cases}$$

Fig. 3.28 shows the transmitter of DM. The summer in the accumulator adds the quantizer output (±Δ) to the previous sample approximation, giving the present sample approximation:

$$u(nT_s) = u(nT_s - T_s) + b(nT_s) \qquad (3.46)$$

$$u(nT_s) = u[(n-1)T_s] + [\pm\Delta] \qquad (3.47)$$

The previous sample approximation $u[(n-1)T_s]$ is restored by delaying by one sample period $T_s$. The error signal $e(nT_s)$ is obtained by subtraction of the staircase approximation $\hat{x}(nT_s)$ from the sampled input signal $x(nT_s)$.

Thus, depending on the sign of $e(nT_s)$, the one-bit quantizer generates an output of +Δ or −Δ. If the step is +Δ, a binary '1' is transmitted, and if it is −Δ, a binary '0' is transmitted.

DM receiver

The receiver of DM consists of an accumulator followed by a low-pass filter, as shown in Fig. 3.29. The accumulator generates the staircase approximation signal: its output is delayed by one sample period $T_s$ and then added to the current input. If the received bit is '1', it adds Δ to the delayed signal, and if the received bit is '0', it subtracts Δ from the delayed signal. The out-of-band quantization noise in the high-frequency staircase waveform is rejected by passing it through a low-pass filter with a bandwidth equal to the original signal bandwidth.

Fig. 3.29. DM receiver.
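A minimal delta-modulation sketch with assumed example values, implementing the one-bit transmitter loop and the accumulator receiver described above:

```python
import numpy as np

# A minimal delta-modulation sketch under assumed example values: 1-bit
# encoding of an oversampled sine, decoded by the matching accumulator.
def dm_encode(x, delta):
    u, bits = 0.0, []
    for sample in x:
        bit = 1 if sample >= u else 0            # sign of e(nTs), Eq. (3.45)
        u += delta if bit else -delta            # staircase update, Eq. (3.46)
        bits.append(bit)
    return bits

def dm_decode(bits, delta):
    steps = np.where(np.array(bits) == 1, delta, -delta)
    return np.cumsum(steps)                      # accumulator (before the LPF)

fs, fm = 8000.0, 100.0                           # heavy oversampling (fs >> 2*fm)
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * fm * t)
delta = 0.08                                     # chosen to satisfy Eq. (3.48)
y = dm_decode(dm_encode(x, delta), delta)
print(f"max |error|: {np.max(np.abs(y - x)):.3f}")
```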

(i) Slope overload distortion

If the rate of change of the staircase approximation is slow in comparison to the slope (rate of change) of the message signal (i.e., the step size is not sufficient to follow the rate of change of the signal), the resulting error is called slope overload distortion. To evaluate the condition under which slope overload can be eliminated, let us consider the example given below.

Example 3.2 Consider a sinusoidal signal of amplitude $A_m$ and frequency $f_m$ applied to a delta modulator with representation level ±Δ. Show that, in order to avoid slope overload distortion, it is necessary that

$$A_m < \frac{\Delta}{2\pi f_m T_s}$$

where $T_s$ is the sampling period.

Solution

Let us consider a sine wave represented as

$$x(t) = A_m \sin(2\pi f_m t)$$

The maximum slope of the signal is

$$\text{Maximum slope} = \left.\frac{dx(t)}{dt}\right|_{max}$$

Now, the maximum slope that delta modulation can track is

$$\text{Maximum slope} = \frac{\text{Step size}}{\text{Sampling period}} = \frac{\Delta}{T_s}$$

For the given signal, slope overload is avoided if the slope of the signal is less than the slope capability of the delta modulator. Thus,

$$\left.\frac{dx(t)}{dt}\right|_{max} < \frac{\Delta}{T_s}$$

$$2\pi f_m A_m \left|\cos(2\pi f_m t)\right|_{max} < \frac{\Delta}{T_s}$$

$$2\pi f_m A_m < \frac{\Delta}{T_s}$$

$$A_m < \frac{\Delta}{2\pi f_m T_s}$$

Thus, to avoid slope overload, the amplitude $A_m$ of the given sine wave must be less than $\Delta/(2\pi f_m T_s)$.

(ii) Granular noise

When the slope of the signal is low (i.e., the signal is almost constant with respect to time) and the step size Δ is relatively large, the approximation swings back and forth between −Δ and +Δ, causing a high noise level. This type of noise is known as granular noise and can be minimized by reducing the step size Δ.

Fig. 3.30. Slope overload distortion and granular noise in DM.

Therefore, a large step size is required to reduce slope overload distortion, while a small step size is required to reduce granular noise. Adaptive DM is a modification designed to overcome both errors.

2.2.2 Signal to quantization noise ratio in DM

We know that the condition to avoid slope overload distortion is expressed as

$$A_m \le \frac{\Delta}{2\pi f_m T_s} = \frac{\Delta}{2\pi}\left(\frac{f_s}{f_m}\right) \qquad (3.48)$$

The maximum output power, considering a sinusoidal signal with amplitude $A_m$, is

$$P_{max} = \frac{A_m^2}{2}$$

Hence,

$$P_{max} = \frac{A_m^2}{2} = \left[\frac{\Delta}{2\pi}\left(\frac{f_s}{f_m}\right)\right]^2 \frac{1}{2} = \frac{\Delta^2}{8\pi^2}\left(\frac{f_s^2}{f_m^2}\right) \qquad (3.49)$$

Now we require the expression for the quantization noise power.

Mean square quantization error in DM

In DM the maximum quantization error ($q_e$) is ±Δ, where Δ is the step size (i.e., $-\Delta \le q_e \le \Delta$ with equal probability). This gives the uniform PDF

$$f_Q(q_e) = \frac{1}{b - a} = \frac{1}{\Delta - (-\Delta)} = \frac{1}{2\Delta} \qquad (3.50)$$

Fig. 3.31. Uniform PDF of the error in DM.

For a uniform distribution the mean is 0, and the mean-square value of the quantization noise equals the variance:

$$P_q = \frac{\Delta^2}{3} \qquad (3.51)$$

where $P_q$ is the average power of the quantization noise. It is seen from Eq. (3.51) that the quantization error power depends only on the step size.

At the output of the DM receiver the signal is passed through a reconstruction low-pass filter (LPF). We assume the bandwidth of the LPF is such that

$$f_{LPF} \ge f_m \quad \text{and} \quad f_{LPF} \ll f_s$$

Now, assuming that the quantization noise power $P_q$ is distributed uniformly over the frequency band up to $f_s$, the PSD of the quantization noise is

$$G_q(f) = \frac{P_q}{2 f_s} \quad \text{for } |f| \le f_s \qquad (3.52)$$

$$G_q(f) = \frac{\Delta^2}{3}\,\frac{1}{2 f_s} \qquad (3.53)$$

Thus, the output quantization noise power within the bandwidth $f_{LPF}$ is given by

$$P_q' = \int_{-f_{LPF}}^{f_{LPF}} G_q(f)\, df \qquad (3.54)$$

$$P_q' = \frac{\Delta^2}{3}\,\frac{1}{2 f_s}\int_{-f_{LPF}}^{f_{LPF}} df \qquad (3.55)$$

$$P_q' = \frac{\Delta^2}{3}\,\frac{2 f_{LPF}}{2 f_s} = \frac{\Delta^2}{3}\,\frac{f_{LPF}}{f_s} \qquad (3.56)$$

Thus the expression for the output signal-to-quantization-noise ratio is

$$SQNR_{DM} = \frac{P_{max}}{P_q'} = \frac{\Delta^2}{8\pi^2}\left(\frac{f_s^2}{f_m^2}\right)\cdot\frac{3 f_s}{\Delta^2 f_{LPF}} \qquad (3.57)$$

$$SQNR_{DM} = \frac{3 f_s^3}{8\pi^2 f_m^2 f_{LPF}} \qquad (3.58)$$

For $f_{LPF} = f_m$, Eq. (3.58) becomes

$$SQNR_{DM} = \frac{3}{8\pi^2}\left(\frac{f_s}{f_m}\right)^3 \qquad (3.59)$$
Note that delta modulation does not require a conventional multi-bit analog-to-digital converter: the one-bit quantizer output itself forms the transmitted bit stream.

2.2.3 Comparison of PCM and DM(self study)

Linear predictive coding(self study)

Chapter 4

4.1.1. Information

A message is a sequence of symbols intended to reduce the uncertainty of the receiver. If the sequence of symbols does not change the uncertainty level of the receiver, then the message does not contain any information.

4.1.2. Entropy or Average information content

$$H(X) = E[I(x_k)] \qquad (4.9)$$

$$H(X) = \sum_{k=1}^{K} p_k\, I(x_k) \qquad (4.10)$$

$$H(X) = \sum_{k=1}^{K} p_k \log_2 \frac{1}{p_k} \qquad (4.11)$$

Entropy is the measure of the average information content per source symbol. If the information is measured in bits, then the unit of entropy is bits/symbol.

4.1.3. Information Rate

If the rate at which the source emits symbols is r, then the information rate R of the source is given by

$$R = r\,H(X) \ \text{bits/sec} \qquad (4.12)$$

Here R is the information rate, H(X) is the entropy (average information per symbol), and r is the rate at which symbols are generated.
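A minimal sketch of Eqs. (4.11) and (4.12) for an assumed example source:

```python
import math

# A minimal sketch of Eqs. (4.11)-(4.12) for an assumed example source.
def entropy(probs):
    """H(X) in bits/symbol, Eq. (4.11)."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]    # assumed symbol probabilities
r = 1000                             # assumed symbol rate (symbols/sec)
H = entropy(probs)
print(f"H(X) = {H} bits/symbol, R = {r * H} bits/sec")   # 1.75 and 1750.0
```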

4.1.4. Shannon-Hartley channel capacity theorem

$$C = B \log_2\left(1 + \frac{S}{N}\right) \ \text{bits/sec} \qquad (4.13)$$

where C is the channel capacity, B is the channel bandwidth in hertz, S is the signal power, and N is the noise power (i.e., $N = N_0 B$, with $N_0/2$ being the two-sided noise PSD).

Note: S/N is the ratio in watts/watt, not in dB.
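A quick numerical sketch of Eq. (4.13), with assumed example values (a 3 kHz channel at 30 dB SNR); note the dB-to-linear conversion before applying the formula:

```python
import math

# A minimal sketch of Eq. (4.13) for assumed example values: a 3 kHz channel
# at 30 dB SNR (roughly telephone-line conditions).
def capacity(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 30.0
snr = 10 ** (snr_db / 10)            # convert dB to watt/watt before using Eq. (4.13)
print(f"C = {capacity(3000.0, snr):.0f} bits/sec")   # about 29,902 bits/sec
```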

Implications of the theorem:

1. It indicates the upper limit on the data rate for reliable communication.

2. There is a trade-off between B and SNR for a given C.

3. Bandwidth compression.
Explain?

Theoretical limits of Shannon's channel capacity theorem

It might appear from Eq. (4.13) that if the channel bandwidth B = ∞, the channel capacity C = ∞. However, the noise power is proportional to the bandwidth (the noise considered is white, with a uniform power spectral density over the entire frequency range). Therefore, as the bandwidth B increases, the noise N also increases, and hence the channel capacity remains finite at $C_{max}$ even if B = ∞.

If $N_0/2$ is the two-sided noise power spectral density, then $N = N_0 B$, and

$$C = B \log_2\left(1 + \frac{S}{N_0 B}\right) \ \text{bits/sec} \qquad (4.15)$$

$$\lim_{B \to \infty} C = \lim_{B \to \infty} \frac{S}{N_0}\,\frac{N_0 B}{S}\,\log_2\left(1 + \frac{S}{N_0 B}\right) \qquad (4.16)$$

$$\lim_{B \to \infty} C = \frac{S}{N_0}\,\lim_{B \to \infty} \log_2\left(1 + \frac{S}{N_0 B}\right)^{N_0 B / S} \qquad (4.17)$$

The limit may be found with the help of the following standard expression:

$$\lim_{x \to 0}\, \log_2(1 + x)^{1/x} = \log_2 e = 1.44 \qquad (4.18)$$

Therefore, replacing $\dfrac{S}{N_0 B}$ by x, we have

$$\lim_{B \to \infty} C = \frac{S}{N_0}\,\lim_{x \to 0} \log_2(1 + x)^{1/x} \qquad (4.19,\ 4.20)$$

$$\lim_{B \to \infty} C = \frac{S}{N_0}\,\log_2 e \qquad (4.21)$$

$$\lim_{B \to \infty} C = C_{max} = 1.44\,\frac{S}{N_0} \qquad (4.22)$$

So as the bandwidth goes to infinity, the capacity approaches the finite value $1.44\,S/N_0$.
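The saturation of capacity toward Eq. (4.22) can be seen numerically; the sketch below assumes $S/N_0 = 1000$:

```python
import math

# A quick numeric look at Eq. (4.22) with assumed S/N0 = 1000: as B grows,
# C saturates near 1.44 * S/N0 instead of growing without bound.
S_over_N0 = 1000.0
for B in (1e3, 1e4, 1e5, 1e6):
    C = B * math.log2(1 + S_over_N0 / B)
    print(f"B = {B:>9.0f} Hz -> C = {C:7.1f} bits/sec")
print(f"limit: {1.44 * S_over_N0:.0f} bits/sec")
```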

Baseband data communication

A baseband data communication system using PAM has the functional blocks shown in Fig. 4.2.

Fig. 4.2. Block diagram of a baseband binary data transmission system.

The input binary sequence $\{b_k\}$ consists of symbols 1 and 0, each of duration $T_b$. This sequence is applied to a pulse generator, producing the discrete PAM signal

$$x(t) = \sum_{k=-\infty}^{\infty} a_k\, p_g(t - kT_b) \qquad (4.23)$$

where $p_g(t)$ denotes the basic pulse, whose amplitude $a_k$ depends upon the input data sequence as

$$a_k = \begin{cases} +1 & \text{if symbol } b_k \text{ is } 1 \\ -1 & \text{if symbol } b_k \text{ is } 0 \end{cases} \qquad (4.24)$$

and the pulse is normalized such that

$$p_g(0) = 1 \qquad (4.25)$$

The signal $x(t)$ is passed through a transmitting filter of impulse response $h_t(t)$. The output is further modified as it passes through the channel of impulse response $h_c(t)$. In addition, the channel adds random noise to the signal at the receiver input. The channel output is then passed through a receiving filter of impulse response $h_r(t)$. The receiving filter output is written as

$$y(t) = \mu \sum_k a_k\, p_r(t - kT_b - \tau_d) + n(t) \qquad (4.26)$$

where $\mu$ is a scaling factor, $\tau_d$ is the delay introduced by the system, and $n(t)$ is the noise. The pulse $p_r(t)$ is normalized such that

$$p_r(0) = 1 \qquad (4.27)$$

The resulting filter output y(t) is sampled synchronously with the transmitter, with the sampling instants determined by a clock or timing signal that is usually extracted from the receiving filter output. Finally, the sequence of samples thus obtained is used to reconstruct the original data sequence by means of a decision device. Specifically, the amplitude of each sample is compared to a threshold λ: if the threshold is exceeded, a decision is made in favor of symbol 1; if not, a decision is made in favor of symbol 0.

For simplicity of further analysis we assume that $\tau_d = 0$ and that the channel is noiseless, i.e., $n(t) = 0$.

In the frequency domain, the received pulse can be expressed as the response of the cascade connection of the transmitting filter, the channel, and the receiving filter to the pulse $p_g(t)$ applied at the input of this cascade. Therefore, we may relate $p_g(t)$ and $p_r(t)$ in the frequency domain by

$$P_r(f) = P_g(f)\, H_T(f)\, H_C(f)\, H_R(f) \qquad (4.28)$$

where $P_g(f)$ and $P_r(f)$ are the Fourier transforms of $p_g(t)$ and $p_r(t)$, respectively, and $H_T(f)$, $H_C(f)$, and $H_R(f)$ are the transfer functions of the transmitting filter, the channel, and the receiving filter, respectively.

The receiving filter output y(t), sampled at time $t = mT_b$, is

$$y(mT_b) = a_m + \sum_{\substack{k=-\infty\\ k\neq m}}^{\infty} a_k\, p_r(mT_b - kT_b) \qquad (4.29)$$

$$y(mT_b) = a_m + \sum_{\substack{k=-\infty\\ k\neq m}}^{\infty} a_k\, p_r[(m-k)T_b] \qquad (4.30)$$

In Eq. (4.30), the first term $a_m$ is the mth decoded bit, and the second term represents the residual effect of all the other transmitted bits on the decoding of the mth bit; this residual effect is called intersymbol interference (ISI).

ISI arises due to the dispersion of the pulse shape by the filters and the channel. Therefore, one of the major tasks of the system designer is to optimally design the transmitting and receiving filters and the shape of the basic pulse so as to minimize ISI.

Nyquist’s criterion for distortionless baseband binary transmission


Typically, the transfer function of the channel and the transmitted pulse shape are known.
However, as stated above, the problem is to determine the transfer function of the transmitter and

Suramya-Student Copy
receiving filters so as to reconstruct the transmitted data sequence {bk}. The receiver does this by
extracting and then decoding the corresponding sequence of weights, {ak}, from the output
y(t).at some time t = mTb. The decoding requires that the weighted pulse akpr(mTb - kTh) for k=m
be free from ISI due to the overlapping tails of all other weighted pulse represented by k  m .
This, in turn, requires that we control the received pulse pr(t), as shown by Eq.(4.31)

1 m  k (4.31)
p r (mTb  kTb )  
0 m  k

or, in general,

$$p_r(iT_b) = \begin{cases} 1 & i = 0 \\ 0 & i \neq 0 \end{cases} \qquad (4.32)$$

where the integer i = m − k and $p_r(0) = 1$. If $p_r(t)$ satisfies the condition of Eq. (4.31), the receiver output, given by Eq. (4.30), simplifies to

$$y(mT_b) = a_m \qquad (4.33)$$

which implies zero ISI. Hence, the condition of Eq. (4.32) ensures perfect reception in the absence of noise.

Frequency Domain Analysis

From a design point of view, it is informative to transform the condition of Eq. (4.32) into the frequency domain. Consider the sequence of samples {p_r(nT_b)}, where n = 0, ±1, ±2, .... From Chapter 2 on the sampling process for a low-pass function, we know that sampling in the time domain produces periodicity in the frequency domain. Thus, following Eq. (2.4) of Chapter 2, we may write

$$P_\delta(f) = R_b \sum_{n=-\infty}^{\infty} P_r(f - nR_b) \qquad (4.34)$$

where $R_b = 1/T_b$ is the bit rate, and $P_\delta(f)$ is the Fourier transform of the infinite periodic sequence of delta functions of period $T_b$ whose strengths are weighted by the respective sample values of $p_r(t)$. That is, $P_\delta(f)$ is given by

$$P_\delta(f) = \int_{-\infty}^{\infty} \left[\sum_{i=-\infty}^{\infty} p_r(iT_b)\,\delta(t - iT_b)\right] e^{-j2\pi f t}\, dt \qquad (4.35)$$
Let the integer i = m − k. Then m = k corresponds to i = 0, and likewise $m \neq k$ corresponds to $i \neq 0$. Accordingly, imposing the condition of Eq. (4.31) on the sample values of $p_r(t)$ in the integral of Eq. (4.35), we get

$$P_\delta(f) = \int_{-\infty}^{\infty} p_r(0)\,\delta(t)\, e^{-j2\pi f t}\, dt \qquad (4.36)$$

$$P_\delta(f) = p_r(0) \qquad (4.37)$$

where we have made use of the sifting property of the delta function. Since $p_r(0) = 1$ by normalization, we see from Eqs. (4.34) and (4.37) that the condition for zero ISI is satisfied if

$$\sum_{n=-\infty}^{\infty} P_r(f - nR_b) = T_b \qquad (4.38)$$

Eq. (4.31) in terms of the time function $p_r(t)$, or equivalently Eq. (4.38) in terms of the corresponding frequency function $P_r(f)$, is called the Nyquist criterion for distortionless baseband transmission in the absence of noise.

Ideal Solution

From Eq. (4.38) we see that $P_\delta(f)$ represents a series of shifted spectra. The simplest P(f) is obtained by permitting only one non-zero component in the series (i.e., for n = 0), with the range of frequencies extending from $-B_o$ to $B_o$, where $B_o$ denotes half the bit rate:

$$B_o = \frac{R_b}{2}$$

That is, P(f) is specified as the rectangular spectrum

$$P(f) = \frac{1}{2B_o}\,\mathrm{rect}\!\left(\frac{f}{2B_o}\right) \qquad (4.39)$$

This is the spectrum of a signal producing zero ISI, shown in Fig. 4.3(a). The time-domain representation of the signal is a sinc function, obtained by taking the inverse Fourier transform of Eq. (4.39):

$$p(t) = F^{-1}[P(f)] = F^{-1}\!\left[\frac{1}{2B_o}\,\mathrm{rect}\!\left(\frac{f}{2B_o}\right)\right] \qquad (4.40)$$

$$p(t) = \frac{\sin(2\pi B_o t)}{2\pi B_o t} \qquad (4.41)$$

$$p(t) = \mathrm{sinc}(2\pi B_o t) \qquad (4.42)$$

Fig. 4.3. (a) Graphical representation of P(f). (b) Time-domain response.

Fig. 4.4. The series of sinc pulses corresponding to the sequence 1011010.

Raised Cosine Spectrum (Practical Consideration)

The practical difficulties posed by the ideal Nyquist channel can be overcome by extending the bandwidth from $B_o = R_b/2$ to an adjustable value between $B_o$ and $2B_o$. In doing so, we use the series on the left side of Eq. (4.38),

$$\sum_{n=-\infty}^{\infty} P_r(f - nR_b) = T_b \qquad (4.43)$$

retain only the three terms corresponding to n = −1, 0, and 1, and restrict the frequency band of interest to $|f| \le B_o$, as shown by

$$P(f) + P(f - 2B_o) + P(f + 2B_o) = \frac{1}{2B_o}, \qquad -B_o \le f \le B_o \qquad (4.44)$$

We may now devise several band-limited functions that satisfy Eq. (4.44). One function that has many desirable features is the raised cosine spectrum. Its characteristic consists of a flat portion and a roll-off portion of sinusoidal form, expressed mathematically as

$$P(f) = \begin{cases} \dfrac{1}{2B_o}, & |f| < f_1 \\[2ex] \dfrac{1}{4B_o}\left\{1 + \cos\!\left[\dfrac{\pi(|f| - f_1)}{2B_o - 2f_1}\right]\right\}, & f_1 \le |f| < 2B_o - f_1 \\[2ex] 0, & |f| \ge 2B_o - f_1 \end{cases} \qquad (4.45)$$

The frequency $f_1$ and the bandwidth $B_o$ are related by

$$\alpha = 1 - \frac{f_1}{B_o} \qquad (4.46)$$

The parameter $\alpha$ is called the roll-off factor; it indicates the excess bandwidth over the ideal solution $B_o$. For $\alpha = 0$, that is, $f_1 = B_o$, we get the minimum-bandwidth solution described in the ideal case.

The time response p(t), that is, the inverse Fourier transform of P(f), is

$$p(t) = \mathrm{sinc}(2\pi B_o t)\,\frac{\cos(2\pi\alpha B_o t)}{1 - 16\alpha^2 B_o^2 t^2} \qquad (4.47)$$

The normalized frequency response of the raised cosine function is obtained by multiplying P(f) by $2B_o$, and is plotted in Fig. 4.5(a) for three values of α, namely 0, 0.5, and 1. The corresponding time response p(t) is plotted in Fig. 4.5(b).
Fig. 4.5. Responses for different roll-off factors. (a) Frequency response. (b) Time response.

This function consists of the product of two factors: the factor $\mathrm{sinc}(2\pi B_o t)$ associated with the ideal filter, and a second factor that decreases as $1/|t|^2$ for large |t|. The first factor ensures zero crossings of p(t) at the desired sampling instants $t = iT_b$, with i an integer (positive or negative).

The second factor reduces the tails of the pulse considerably below those obtained from the ideal low-pass filter, so that the transmission of binary waves using such pulses is relatively insensitive to sampling-time errors. In fact, the amount of ISI resulting from a timing error decreases as the roll-off factor α is increased from zero to unity.
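A minimal sketch of the raised-cosine pulse of Eq. (4.47), with assumed $R_b = 1$, confirming the zero-ISI property at the sampling instants $t = iT_b$ (the singular points of the second factor are handled by their L'Hopital limit, $\pi/4$):

```python
import numpy as np

# A minimal sketch of Eq. (4.47), checking zero ISI at t = i*Tb (assumed Rb = 1).
def raised_cosine(t, Bo, alpha):
    # np.sinc(2*Bo*t) = sin(2*pi*Bo*t)/(2*pi*Bo*t), the first factor in Eq. (4.47)
    den = 1 - (4 * alpha * Bo * t) ** 2
    singular = np.abs(den) < 1e-12
    second = np.where(singular,
                      np.pi / 4,      # L'Hopital limit at |t| = 1/(4*alpha*Bo)
                      np.cos(2 * np.pi * alpha * Bo * t) / np.where(singular, 1.0, den))
    return np.sinc(2 * Bo * t) * second

Tb, Bo = 1.0, 0.5                     # bit rate Rb = 1, so Bo = Rb/2
i = np.arange(-5, 6)
p = raised_cosine(i * Tb, Bo, alpha=0.5)
print(np.round(p, 6))                 # 1 at i = 0, 0 at every other instant
```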

The special case of α = 1 (i.e., $f_1 = 0$) is known as the full-cosine roll-off characteristic, for which the frequency response of Eq. (4.45) simplifies to

$$P(f) = \begin{cases} \dfrac{1}{4B_o}\left[1 + \cos\!\left(\dfrac{\pi f}{2B_o}\right)\right], & 0 \le |f| < 2B_o \\[2ex] 0, & |f| \ge 2B_o \end{cases} \qquad (4.48)$$

and the function p(t) simplifies to

$$p(t) = \frac{\mathrm{sinc}(4\pi B_o t)}{1 - 16 B_o^2 t^2} \qquad (4.49)$$
Correlative coding

Duobinary Signaling

Duobinary signaling embodies the basic idea behind correlative coding, where "duo" implies a doubling of the transmission capacity of a binary system.

Consider a binary input sequence {b_k} consisting of uncorrelated binary digits, each of duration $T_b$ seconds, with symbol 1 represented by a pulse of amplitude +1 volt and symbol 0 by a pulse of amplitude −1 volt. When this sequence is applied to a duobinary encoder, it is converted into a three-level output, namely −2, 0, and +2 volts. This transformation is produced according to the scheme shown in Fig. 4.6.

Fig. 4.6. Duobinary signaling scheme: the input binary sequence {b_k} and its one-bit-delayed version {b_{k-1}} are summed in the duobinary conversion filter H(f), passed through the ideal channel H_c(f), and sampled at t = kT_b to give the output sequence {c_k}.

The binary sequence {b_k} is first passed through a simple filter involving a single delay element. The digit $c_k$ at the duobinary coder output is the sum of the present binary digit $b_k$ and its previous value $b_{k-1}$:

$$c_k = b_k + b_{k-1} \qquad (4.50)$$

such that

$$c_k = \begin{cases} +2 & \text{if } b_k \text{ and } b_{k-1} \text{ are both 1} \\ 0 & \text{if } b_k \text{ and } b_{k-1} \text{ are different} \\ -2 & \text{if } b_k \text{ and } b_{k-1} \text{ are both 0} \end{cases} \qquad (4.51)$$

The effect of the transformation described by Eq. (4.50) is to change the input sequence {b_k} of uncorrelated binary digits into a sequence {c_k} of correlated digits. This correlation may be viewed as introducing intersymbol interference into the transmitted signal in an artificial manner. However, this intersymbol interference is under the designer's control, which is the basis of correlative coding.

An ideal delay element producing a delay of $T_b$ seconds has the transfer function $\exp(-j2\pi f T_b)$, so the transfer function of the simple filter shown in Fig. 4.6 is $1 + \exp(-j2\pi f T_b)$. Hence, the overall transfer function of this filter connected in cascade with the ideal channel $H_c(f)$ is

$$H(f) = H_c(f)\left[1 + \exp(-j2\pi f T_b)\right] \qquad (4.52)$$

$$H(f) = H_c(f)\left[\exp(j\pi f T_b) + \exp(-j\pi f T_b)\right]\exp(-j\pi f T_b) \qquad (4.53)$$

$$H(f) = 2 H_c(f)\cos(\pi f T_b)\exp(-j\pi f T_b) \qquad (4.54)$$

For an ideal channel of bandwidth $B_o = R_b/2$, we have

$$H_c(f) = \begin{cases} 1 & |f| \le R_b/2 \\ 0 & \text{otherwise} \end{cases} \qquad (4.55)$$

Thus the overall frequency response has the form of a half-cycle cosine function:

$$H(f) = \begin{cases} 2\cos(\pi f T_b)\exp(-j\pi f T_b) & |f| \le R_b/2 \\ 0 & \text{otherwise} \end{cases} \qquad (4.56)$$

The amplitude and phase responses are shown in Fig. 4.7(a) and Fig. 4.7(b), respectively. An advantage of this frequency response is that it can be easily approximated in practice.

Fig. 4.7. Frequency response of the duobinary conversion filter. (a) Amplitude response. (b) Phase response.

The corresponding impulse response consists of two sinc pulses, time-displaced by $T_b$ seconds:

$$h(t) = \frac{\sin(\pi t / T_b)}{\pi t / T_b} + \frac{\sin[\pi(t - T_b)/T_b]}{\pi(t - T_b)/T_b} \qquad (4.57)$$

$$h(t) = \frac{\sin(\pi t / T_b)}{\pi t / T_b} - \frac{\sin(\pi t / T_b)}{\pi(t - T_b)/T_b} \qquad (4.58)$$

$$h(t) = \frac{T_b^2\,\sin(\pi t / T_b)}{\pi t\,(T_b - t)} \qquad (4.59)$$

Fig. 4.8. Impulse response of the duobinary conversion filter.

Fig. 4.8 shows that h(t) takes two distinguishable (unit) values, at the sampling instants t = 0 and t = T_b.

Detection

The original data {b_k} may be detected from the duobinary-coded sequence {c_k} by subtracting the previously decoded binary digit from the currently received digit $c_k$, in accordance with Eq. (4.50). Let $\hat{b}_k$ represent the estimate of the original binary digit $b_k$; then

$$\hat{b}_k = c_k - \hat{b}_{k-1} \qquad (4.60)$$

Eq. (4.60) yields a correct $\hat{b}_k$ only if the previous bit $\hat{b}_{k-1}$ was correctly decoded at sampling instant t = (k − 1)T_b; otherwise the error propagates to subsequent decisions.
Fig. 4.9. A precoded duobinary scheme: the input binary sequence {b_k} passes through a precoder (a modulo-2 adder with a one-bit delay $T_b$, producing $a_k$), and the precoder output {a_k} is applied to the duobinary conversion filter H(f), followed by the ideal channel $H_c(f)$ and sampling at t = kT_b, to give the output sequence {c_k}.

A practical means of avoiding this error propagation is the use of precoding before the duobinary coding (Fig. 4.6), as shown in Fig. 4.9. The precoding operation performed on the input binary sequence {b_k} converts it into another binary sequence {a_k} defined by

$$a_k = b_k \oplus a_{k-1} \qquad (4.61)$$

The '⊕' sign in Eq. (4.61) represents modulo-2 addition, which is equivalent to the EXCLUSIVE-OR operation. The resulting precoder output {a_k} is applied to the duobinary coder, thereby producing a sequence {c_k} related to {a_k} as follows:

$$c_k = a_k + a_{k-1} \qquad (4.62)$$

bk  1 if ck  1 volt
ck ck Decision
Rectifier
device
bk  0 if ck  1 volt

Threshold=1
Fig.4.10 Detector for recovering original binary sequence from the precoded duobinary coder
output.

It is assumed that symbol 1 at the precoder output in Fig. 4.9 is represented by +1 volt and symbol 0 by −1 volt. If the mth bit $b_m$ is 0, then $a_m = a_{m-1}$ from Eq. (4.61), and it is evident that $c_m$ will be either +2 or −2. Similarly, if $b_m = 1$, then $a_m$ is the complement of $a_{m-1}$, i.e., $a_m$ and $a_{m-1}$ will always be different, resulting in $c_m = 0$.
Therefore, we find that

$$c_k = \begin{cases} +2 \text{ volts} & \text{if } a_k \text{ and } a_{k-1} \text{ are both 1} \\ 0 \text{ volts} & \text{if } a_k \text{ and } a_{k-1} \text{ are different} \\ -2 \text{ volts} & \text{if } a_k \text{ and } a_{k-1} \text{ are both 0} \end{cases} \qquad (4.63)$$

From Eq. (4.63), by setting the threshold levels at +1 and −1 we can correctly detect the original input binary sequence {b_k} from {c_k}:

$$\hat{b}_k = \begin{cases} \text{symbol 0} & \text{if } |c_k| > 1 \text{ volt} \\ \text{symbol 1} & \text{if } |c_k| < 1 \text{ volt} \end{cases} \qquad (4.64)$$

A block diagram of the detector is shown in Fig. 4.10. The detector does not require knowledge of any sample other than the present one; hence error propagation cannot occur in the detector of Fig. 4.10. The example below shows that the decoding of the present symbol ($b_k$) is not affected by an error in its past value ($b_{k-1}$).

Example: Assumption: $a_{k-1} = 1$

Consider an input binary sequence 0010110. The process of duobinary encoding and decoding is explained in the table below (the first precoder value is the assumed initial bit).

Input sequence (b_k):                              0    0    1    0    1    1    0
Precoder output (a_k = b_k ⊕ a_{k-1}):       1    1    1    0    0    1    0    0
Polar representation of a_k:                +1   +1   +1   -1   -1   +1   -1   -1
DB encoder output (c_k = a_k + a_{k-1}):         +2   +2    0   -2    0    0   -2
Decoded bits using Eq. (4.64):                     0    0    1    0    1    1    0

The same result is obtained for the assumption $a_{k-1} = 0$.
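A minimal sketch of the precoded duobinary scheme of Eqs. (4.61)-(4.64), reproducing the worked example above (with the assumed initial bit $a_{k-1} = 1$):

```python
# A minimal sketch of precoded duobinary encoding/decoding, Eqs. (4.61)-(4.64),
# reproducing the worked example above (assumed initial bit a_{k-1} = 1).
def duobinary_encode(bits, a_prev=1):
    polar = lambda v: 1 if v else -1          # symbol 1 -> +1 V, symbol 0 -> -1 V
    c = []
    for b in bits:
        a = b ^ a_prev                        # precoder, Eq. (4.61)
        c.append(polar(a) + polar(a_prev))    # duobinary coder, Eq. (4.62)
        a_prev = a
    return c

def duobinary_decode(c):
    # decision rule of Eq. (4.64): |c_k| > 1 -> symbol 0, |c_k| < 1 -> symbol 1
    return [0 if abs(ck) > 1 else 1 for ck in c]

bits = [0, 0, 1, 0, 1, 1, 0]
c = duobinary_encode(bits)
print(c)                              # [2, 2, 0, -2, 0, 0, -2], as in the table
print(duobinary_decode(c) == bits)    # True
```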

M-ary signaling

In general, the information $I_M$ carried by one M-ary symbol is

$$I_M = \log_2 M \ \text{bits} \qquad (4.74)$$

For equiprobable symbols, the source entropy is

$$H(X) = \log_2 M \qquad (4.78)$$

If the rate at which the source emits symbols is $R_s$, then the information rate R of the source is given by

$$R = R_s\,H(X) \ \text{bits/sec} \qquad (4.79)$$

$$R = R_s \log_2 M \ \text{bits/sec} \qquad (4.80)$$

The signaling interval duration $T_s = 1/R_s$ is the same for both binary and M-ary systems. Therefore, the absolute minimum bandwidth required to transmit $R_s \log_2 M$ bits/sec of information is $R_s/2$ Hz.

Under similar conditions (i.e., $T_b = T_s$), the signaling rate for a binary system is

$$R_b = R_s \log_2 2 = R_s \qquad (4.81)$$

An eye pattern provides a great deal of information about the performance of the pertinent
system, as shown in Fig.4.16.

Fig.4.16 Interpretation of eye pattern.


The information provided by the eye pattern is:

1. The width of the eye opening defines the time interval over which the received wave can be sampled without error from intersymbol interference. It is apparent that the preferred time for sampling is the instant at which the eye is open widest.

2. The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.

3. The height of the eye opening, at a specified sampling time, defines the margin over noise.

When the effect of intersymbol interference is severe, traces from the upper portion of the eye pattern cross traces from the lower portion, with the result that the eye is completely closed. In such a situation it is impossible to avoid errors due to the combined presence of intersymbol interference and noise in the system, and a solution has to be found to correct for them.

Line coding (self study)

High Density Bipolar (HDB) Signaling

Chapter 8

Example 8.1 Apply the Huffman coding procedure to the following message ensemble:

$$X = [x_1\ x_2\ x_3\ x_4\ x_5\ x_6\ x_7]$$

$$P = [0.4\ 0.2\ 0.12\ 0.08\ 0.08\ 0.08\ 0.04]$$

Solution

At each reduction the two least probable entries are combined and their probabilities added:

Message   Probability   First Red.   Second Red.   Third Red.   Fourth Red.   Fifth Red.
x1        0.4           0.4          0.4           0.4          0.4           0.6
x2        0.2           0.2          0.2           0.24         0.36          0.4
x3        0.12          0.12         0.16          0.2          0.24
x4        0.08          0.12         0.12          0.16
x5        0.08          0.08         0.12
x6        0.08          0.08
x7        0.04

Assigning a 1 to one branch and a 0 to the other at each combination, and reading the assigned bits from the final reduction back to the first, gives the codeword for each symbol. Tracing $x_4$ in this way gives the digits 1, 0, 1, 1 (first to fourth from the right), so the codeword for symbol $x_4$ is

$c_4 = 1101$

Similarly,

Code          Length
c1 = 0        1
c2 = 111      3
c3 = 101      3
c4 = 1101     4
c5 = 1100     4
c6 = 1001     4
c7 = 1000     4

$$\bar{L} = \sum_{k=1}^{7} p_k l_k = (0.4\times1) + (0.2\times3) + (0.12\times3) + (0.08\times4) + (0.08\times4) + (0.08\times4) + (0.04\times4) = 2.48 \text{ letters/message}$$
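A minimal Huffman-coding sketch for the ensemble of Example 8.1. Heap tie-breaking may produce codewords that differ from the hand construction, but the average length necessarily matches 2.48:

```python
import heapq

# A minimal Huffman coder sketch for the ensemble of Example 8.1.
def huffman(probs):
    heap = [(p, i, str(i)) for i, p in enumerate(probs)]   # (prob, tiebreak, group id)
    heapq.heapify(heap)
    codes = {str(i): "" for i in range(len(probs))}
    groups = {str(i): [str(i)] for i in range(len(probs))}
    while len(heap) > 1:
        p0, _, g0 = heapq.heappop(heap)        # least probable group: prefix bit '0'
        p1, _, g1 = heapq.heappop(heap)        # next group: prefix bit '1'
        for sym in groups[g0]:
            codes[sym] = "0" + codes[sym]
        for sym in groups[g1]:
            codes[sym] = "1" + codes[sym]
        groups[g0 + g1] = groups.pop(g0) + groups.pop(g1)
        heapq.heappush(heap, (p0 + p1, len(groups), g0 + g1))
    return [codes[str(i)] for i in range(len(probs))]

P = [0.4, 0.2, 0.12, 0.08, 0.08, 0.08, 0.04]
codes = huffman(P)
L_bar = sum(p * len(c) for p, c in zip(P, codes))
print(codes, f"L_bar = {L_bar:.2f}")          # average length 2.48
```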

8.1.1 Shannon-Fano Coding

Example 8.2 Apply the Shannon-Fano coding procedure to the same message ensemble:

$$X = [x_1\ x_2\ x_3\ x_4\ x_5\ x_6\ x_7]$$

$$P = [0.4\ 0.2\ 0.12\ 0.08\ 0.08\ 0.08\ 0.04]$$

Solution

These symbols can be coded by partitioning the table in two ways.

Partition 1: $X_1 = [x_1\ x_2]$, $X_2 = [x_3\ x_4\ x_5\ x_6\ x_7]$

Message   Probability   Encoded Message   Length
x1        0.4           0 0               2
x2        0.2           0 1               2
x3        0.12          1 0 0             3
x4        0.08          1 0 1             3
x5        0.08          1 1 0             3
x6        0.08          1 1 1 0           4
x7        0.04          1 1 1 1           4

$$\bar{L} = \sum_{k=1}^{7} p_k l_k = (0.4\times2) + (0.2\times2) + (0.12\times3) + (0.08\times3) + (0.08\times3) + (0.08\times4) + (0.04\times4) = 2.52 \text{ letters/message}$$

Partition 2: $X_1 = [x_1]$, $X_2 = [x_2\ x_3\ x_4\ x_5\ x_6\ x_7]$

Message   Probability   Encoded Message   Length
x1        0.4           0                 1
x2        0.2           1 0 0             3
x3        0.12          1 0 1             3
x4        0.08          1 1 0 0           4
x5        0.08          1 1 0 1           4
x6        0.08          1 1 1 0           4
x7        0.04          1 1 1 1           4

$$\bar{L} = \sum_{k=1}^{7} p_k l_k = (0.4\times1) + (0.2\times3) + (0.12\times3) + (0.08\times4) + (0.08\times4) + (0.08\times4) + (0.04\times4) = 2.48 \text{ letters/message}$$

Thus, the second method is better, as it gives a lower value of $\bar{L}$.

Now,

$$H(X) = -\sum_{k=1}^{7} p_k \log_2 p_k = -[(0.4\log_2 0.4) + (0.2\log_2 0.2) + (0.12\log_2 0.12) + 3\,(0.08\log_2 0.08) + (0.04\log_2 0.04)] \approx 2.42 \text{ bits/message}$$

Hence, the efficiency of the second method is

$$\eta = \frac{H(X)}{\bar{L}} = \frac{2.42}{2.48} \approx 97.6\%$$

8.2.1 Basic Terminology used in Coding Theory

1. Code word
A code word is the n-bit encoded block of bits. It contains message bits and parity (redundant) bits.

2. Code rate or code efficiency
The code rate or code efficiency ($r_c$) is defined as the ratio of the number of message bits (k) to the total number of bits (n) in a code word:

$$r_c = \frac{k}{n} \qquad (8.5)$$

3. Code vectors
We can visualize an n-bit code word as a point in an n-dimensional space. The coordinates or elements of this code vector are the bits of the code word. Fig. 8.3 shows the 3-bit code vectors, and Table 8.1 lists the 8 (i.e., $2^3$) possible combinations of a 3-bit code word. We can take bit $x_0$ to be on the X-axis, bit $x_1$ on the Y-axis, and bit $x_2$ on the Z-axis.

4. Hamming distance
The Hamming distance between two code words of the same code is the number of bit positions in which they differ.

5. Hamming weight of a code word
The Hamming weight of a code word x is defined as the number of non-zero elements in the code word. Equivalently, the Hamming weight of a code word is its distance from the all-zero code vector (the code word having all elements equal to zero).

6. Minimum distance ($d_{min}$)

The minimum distance $d_{min}$ of a linear block code is defined as the smallest Hamming distance between any pair of code vectors in the code. Therefore, the minimum distance equals the smallest Hamming weight of the difference between any pair of code vectors. It can be proved that the minimum distance of a linear block code is the smallest Hamming weight of the non-zero code vectors in the code.

7. Role of $d_{min}$ in error detection and correction

Error detection is always possible when the number of transmission errors in a code word is less than the minimum distance $d_{min}$, because then the erroneous word is not a valid code word. But when the number of errors equals or exceeds $d_{min}$, the erroneous word may correspond to another valid code word, and the errors cannot be detected. The error detection and correction capabilities of a coding technique depend on the minimum distance $d_{min}$ as follows:

a. For detecting up to s errors per word, $d_{min} \ge (s + 1)$ must be satisfied.
b. For correcting up to t errors per word, $d_{min} \ge (2t + 1)$ must be satisfied.
c. For correcting up to t errors and detecting s > t errors per word, $d_{min} \ge (t + s + 1)$ must be satisfied.

1. Linear Block Code

Generation of a linear block code

A message block of k bits,

$$D = [d_1\ d_2\ \ldots\ d_k] \qquad (8.6)$$

is encoded into an n-bit code word,

$$C = [C_1\ C_2\ \ldots\ C_n] \qquad (8.7)$$

By encoding, there are $2^k$ distinct code words, with one unique code word for each distinct message block. This set of $2^k$ code words, also known as code vectors, is called an (n, k) block code. The rate efficiency of this code is $k/n$.

Systematic Linear Block Code

$$[C_1\ C_2\ \ldots\ C_n] = [d_1\ d_2\ \ldots\ d_k] \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 & p_{11} & p_{12} & \cdots & p_{1,n-k} \\ 0 & 1 & 0 & \cdots & 0 & p_{21} & p_{22} & \cdots & p_{2,n-k} \\ 0 & 0 & 1 & \cdots & 0 & p_{31} & p_{32} & \cdots & p_{3,n-k} \\ \vdots & & & & \vdots & \vdots & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 & p_{k,1} & p_{k,2} & \cdots & p_{k,n-k} \end{bmatrix}_{k \times n} \qquad (8.10)$$

or

$$C = DG \qquad (8.11)$$

where G, the $k \times n$ matrix on the right-hand side of Eq. (8.10), is called the generator matrix of the code and is used in the encoding operation. It has the form

$$G = \left[\, I_k \mid P \,\right]_{k \times n} \qquad (8.12)$$
Parity check matrix

A parity check matrix H associated with an (n, k) block code is given by

$$H = \left[\, P^T \mid I_{n-k} \,\right]_{(n-k) \times n} \qquad (8.13)$$

$$H = \begin{bmatrix} p_{11} & p_{21} & \cdots & p_{k,1} & 1 & 0 & 0 & \cdots & 0 \\ p_{12} & p_{22} & \cdots & p_{k,2} & 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & & & \vdots \\ p_{1,n-k} & p_{2,n-k} & \cdots & p_{k,n-k} & 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \qquad (8.14)$$

where $P^T$ is the transpose of the matrix P. The parity check matrix can be used to verify whether a code word C was generated by the matrix G. The rule for verification is: C is a code word in the (n, k) block code generated by G if and only if

$$C H^T = [0\ 0\ \cdots\ 0] \qquad (8.15)$$

where $H^T$ is the transpose of H, given by

$$H^T = \begin{bmatrix} P \\ I_{n-k} \end{bmatrix} \qquad (8.16)$$

The parity check matrix H is used in the decoding operation as follows.

Syndrome Testing

For a received vector R = C + E, where E is the error pattern, the syndrome is defined as

$$S = R H^T \qquad (8.18)$$

Eq. (8.18) can be rewritten as

$$S = (C + E) H^T \qquad (8.19)$$

$$S = C H^T + E H^T \qquad (8.20)$$

$$S = E H^T \qquad (8.21)$$

since $C H^T = [0\ 0\ \cdots\ 0]$. Thus the syndrome of the received vector is zero if R is a valid code vector. If an error occurs in transmission, the syndrome S of the received vector is non-zero. Moreover, S is related to E, and the decoder uses S to detect and correct the error.
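A minimal sketch of systematic linear block encoding and syndrome testing per Eqs. (8.11)-(8.21), using the well-known (7,4) Hamming parity matrix as an assumed example:

```python
import numpy as np

# A minimal sketch of systematic (7,4) encoding and syndrome testing,
# with the Hamming(7,4) P matrix as an assumed example.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])            # G = [I_k | P], Eq. (8.12)
H = np.hstack([P.T, np.eye(3, dtype=int)])          # H = [P^T | I_{n-k}], Eq. (8.13)

D = np.array([1, 0, 1, 1])
C = D @ G % 2                                       # C = DG, Eq. (8.11)
print("code word:", C, "syndrome:", C @ H.T % 2)    # syndrome is all zeros, Eq. (8.15)

E = np.zeros(7, dtype=int); E[2] = 1                # single-bit error pattern
R = (C + E) % 2
S = R @ H.T % 2                                     # S = EH^T, Eq. (8.21)
print("syndrome of corrupted word:", S)             # equals row 2 of H^T
```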

2. Cyclic code

Non-Systematic Cyclic Code

$$c(x) = d(x)\, g(x) \qquad (8.29)$$

Note that there are $2^k$ distinct code polynomials (code words). For a cyclic code, a cyclic shift of any code word is still a code word. Code words generated in this way are non-systematic in form.

Systematic Cyclic Code

$$c(x) = x^{n-k}\, d(x) + \rho(x) \qquad (8.30)$$

where $\rho(x)$ is the remainder from dividing $x^{n-k} d(x)$ by $g(x)$, i.e.,

$$\rho(x) = \mathrm{Rem}\left[\frac{x^{n-k}\, d(x)}{g(x)}\right] \qquad (8.31)$$

Syndrome decoding for cyclic codes

$$s(x) = \mathrm{Rem}\left[\frac{r(x)}{g(x)}\right] \qquad (8.35)$$

$$s(x) = \mathrm{Rem}\left[\frac{c(x) + e(x)}{g(x)}\right] \qquad (8.36)$$

$$s(x) = \mathrm{Rem}\left[\frac{e(x)}{g(x)}\right] \qquad (8.37)$$
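A minimal sketch of systematic cyclic encoding per Eqs. (8.30)-(8.31) and syndrome computation per Eq. (8.35), for the assumed example of the (7,4) cyclic code with generator $g(x) = 1 + x + x^3$:

```python
# A minimal sketch of systematic cyclic encoding, Eqs. (8.30)-(8.31), and
# syndrome computation, Eq. (8.35), for the assumed (7,4) cyclic code with
# g(x) = 1 + x + x^3. Polynomials are bit lists, lowest degree first.
def poly_rem(dividend, divisor):
    """Remainder of GF(2) polynomial division."""
    rem = dividend[:]
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:
            for j, b in enumerate(divisor):
                rem[i - len(divisor) + 1 + j] ^= b
    return rem[:len(divisor) - 1]

g = [1, 1, 0, 1]                  # g(x) = 1 + x + x^3
d = [1, 0, 1, 1]                  # d(x) = 1 + x^2 + x^3
shifted = [0, 0, 0] + d           # x^(n-k) * d(x), with n - k = 3
rho = poly_rem(shifted, g)        # Eq. (8.31)
c = [r ^ s for r, s in zip(rho + [0] * 4, shifted)]   # c(x), Eq. (8.30)
print("code word:", c)
print("syndrome:", poly_rem(c, g))                    # all zeros, Eq. (8.35)
c[2] ^= 1                                             # inject a single-bit error
print("syndrome with error:", poly_rem(c, g))         # non-zero
```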

8.2.3 Convolutional Code

1. Constraint length (N)
The constraint length is N = (M + 1), where M is the number of memory (flip-flop) stages.

2. Code dimension
A convolutional code is described by the triple (n, k, M), where n output bits are generated for every k input bits with memory order M.

Fig. 8.6 shows a convolutional encoder with n = 2 and N = 3. Hence the code rate of this encoder is k/n = 1/2.

Fig. 8.6. Convolutional encoder with constraint length N = 3, rate 1/2: the input bit k and the outputs of flip-flops FF1 (M1) and FF2 (M2) feed two modulo-2 adders producing c1 and c2, which are read out in turn by a commutator.

Encoder

The operation of the convolutional encoder is explained with the help of Fig. 8.6. In the figure, the encoder consists of two flip-flops and two modulo-2 adders. The encoder takes a single-bit input, which is converted into a two-bit code word. The outputs of the two modulo-2 adders are defined by the following equations:

$$c_1 = k \oplus FF1 \oplus FF2 \qquad (8.38)$$

$$c_2 = k \oplus FF2 \qquad (8.39)$$
Operation

i. Initially, suppose that all shift registers are clear.
ii. The first message bit enters FF1, and during the bit period of this message bit the commutator samples both outputs.
iii. Therefore a single input bit is converted into a 2-bit code word.
iv. When the next message bit enters FF1, the first bit's contents are shifted to FF2, and during the bit duration ($T_b$) of the second message bit the commutator samples $c_1$ and $c_2$ to generate another code block.
v. In this way the status of the first bit influences two further blocks of code words, for a total of 6 bits (including the code word generated immediately, i.e., when the bit is at the input stage). Therefore the influence span (constraint length) is N = (M + 1); see the code sketch below.
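A minimal sketch of the rate-1/2, constraint-length-3 encoder of Fig. 8.6, implementing Eqs. (8.38) and (8.39):

```python
# A minimal sketch of the rate-1/2, constraint-length-3 encoder of Fig. 8.6,
# implementing Eqs. (8.38)-(8.39) with shift registers FF1 and FF2.
def conv_encode(bits):
    ff1 = ff2 = 0                     # shift registers, initially clear
    out = []
    for k in bits:
        c1 = k ^ ff1 ^ ff2            # Eq. (8.38)
        c2 = k ^ ff2                  # Eq. (8.39)
        out += [c1, c2]               # commutator reads c1 then c2
        ff1, ff2 = k, ff1             # shift the register contents
    return out

print(conv_encode([1, 0, 1, 1]))      # each input bit yields a 2-bit block
```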
