Prepared by J.Saranya, Lecturer/ECE
DIGITAL COMMUNICATION III YEAR ECE A & B
UNIT I DIGITAL COMMUNICATION SYSTEM
SAMPLING:
When the signal is not band-limited, under-sampling causes aliasing. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.

Sampling Theorem for strictly band-limited signals:
1. A signal g(t) which is limited to −W < f < W can be completely described by its samples g(n/2W).
2. The signal g(t) can be completely recovered from its samples g(n/2W).

Nyquist rate = 2W
Nyquist interval = 1/(2W)
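The aliasing condition above can be checked numerically. A minimal sketch (the tone frequencies are illustrative, not from the notes): the apparent frequency of a real sinusoid after ideal impulse sampling.

```python
# Sketch: apparent (aliased) frequency of a real tone of frequency f
# sampled at rate fs. If fs > 2f (above the Nyquist rate) the tone is
# preserved; otherwise it folds down to a lower frequency.
def alias_frequency(f, fs):
    """Frequency a tone f appears at after sampling at rate fs."""
    return abs(f - round(f / fs) * fs)

print(alias_frequency(7.0, 10.0))   # 7 Hz tone, 10 Hz sampling: aliases to 3 Hz
print(alias_frequency(7.0, 20.0))   # fs = 2W satisfied: stays at 7 Hz
```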
Let g_δ(t) denote the ideal sampled signal:

g_δ(t) = Σ_{n=−∞}^{∞} g(nT_s) δ(t − nT_s)    (3.1)

where T_s : sampling period, f_s = 1/T_s : sampling rate.

From Table A6.3 we have

g_δ(t) ⇌ f_s Σ_{m=−∞}^{∞} G(f − m f_s)    (3.2)

or we may apply the Fourier Transform on (3.1) to obtain

G_δ(f) = Σ_{n=−∞}^{∞} g(nT_s) exp(−j2πnfT_s)    (3.3)

or

G_δ(f) = f_s G(f) + f_s Σ_{m=−∞, m≠0}^{∞} G(f − m f_s)    (3.5)

If G(f) = 0 for |f| ≥ W and T_s = 1/(2W), then

G_δ(f) = Σ_{n=−∞}^{∞} g(n/2W) exp(−jπnf/W)    (3.4)

With
1. G(f) = 0 for |f| ≥ W
2. f_s = 2W

we find from Equation (3.5) that

G(f) = (1/2W) G_δ(f),   −W < f < W    (3.6)

Substituting (3.4) into (3.6), we may rewrite G(f) as

G(f) = (1/2W) Σ_{n=−∞}^{∞} g(n/2W) exp(−jπnf/W),   −W < f < W    (3.7)

G(f) is uniquely determined by g(n/2W) for −∞ < n < ∞; that is, {g(n/2W)} contains all the information of g(t).
To reconstruct g(t) from {g(n/2W)}, we may have

g(t) = ∫_{−∞}^{∞} G(f) exp(j2πft) df
     = ∫_{−W}^{W} (1/2W) Σ_{n=−∞}^{∞} g(n/2W) exp(−jπnf/W) exp(j2πft) df
     = Σ_{n=−∞}^{∞} g(n/2W) (1/2W) ∫_{−W}^{W} exp[j2πf(t − n/2W)] df    (3.8)
     = Σ_{n=−∞}^{∞} g(n/2W) sin(2πWt − nπ)/(2πWt − nπ)
     = Σ_{n=−∞}^{∞} g(n/2W) sinc(2Wt − n),   −∞ < t < ∞    (3.9)

(3.9) is an interpolation formula for g(t).
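The interpolation formula (3.9) can be sketched directly. This is a minimal illustration (the test signal and window of samples are assumptions, not from the notes); with a finite number of samples there is a small truncation error at off-grid instants.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, W, t):
    """Interpolation formula (3.9): g(t) = sum_n g(n/2W) sinc(2Wt - n).
    'samples' maps the integer index n to the sample value g(n/2W)."""
    return sum(g_n * sinc(2 * W * t - n) for n, g_n in samples.items())

# Band-limited test signal g(t) = cos(2*pi*t), sampled at 2W = 8 Hz (W = 4):
W = 4.0
samples = {n: math.cos(2 * math.pi * n / (2 * W)) for n in range(-400, 401)}

# Truncation error at an off-grid instant t = 0.3 (small but nonzero):
print(abs(reconstruct(samples, W, 0.3) - math.cos(2 * math.pi * 0.3)))
```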
Figure 3.3 (a) Spectrum of a signal. (b) Spectrum of an
undersampled version of the signal exhibiting the aliasing
phenomenon.
Figure 3.4 (a) Anti-alias filtered spectrum of an information-bearing signal. (b) Spectrum
of instantaneously sampled version of the signal, assuming the use of a sampling rate
greater than the Nyquist rate. (c) Magnitude response of reconstruction filter.
Pulse-Amplitude Modulation:
Let s(t) denote the sequence of flat-top pulses:

s(t) = Σ_{n=−∞}^{∞} m(nT_s) h(t − nT_s)    (3.10)

h(t) = 1,   0 < t < T
     = 1/2, t = 0, t = T    (3.11)
     = 0,   otherwise

The instantaneously sampled version of m(t) is

m_δ(t) = Σ_{n=−∞}^{∞} m(nT_s) δ(t − nT_s)    (3.12)

m_δ(t) ⋆ h(t) = ∫_{−∞}^{∞} m_δ(τ) h(t − τ) dτ
             = Σ_{n=−∞}^{∞} m(nT_s) ∫_{−∞}^{∞} δ(τ − nT_s) h(t − τ) dτ    (3.13)

Using the sifting property, we have

m_δ(t) ⋆ h(t) = Σ_{n=−∞}^{∞} m(nT_s) h(t − nT_s)    (3.14)
Pulse Amplitude Modulation – Natural and Flat-Top
Sampling:

The PAM signal s(t) is

s(t) = m_δ(t) ⋆ h(t)    (3.15)

S(f) = M_δ(f) H(f)    (3.16)

Recall (3.2): g_δ(t) ⇌ f_s Σ_{m=−∞}^{∞} G(f − m f_s), so

M_δ(f) = f_s Σ_{k=−∞}^{∞} M(f − k f_s)    (3.17)

S(f) = f_s Σ_{k=−∞}^{∞} M(f − k f_s) H(f)    (3.18)
The most common technique for sampling voice in PCM
systems is to use a sample-and-hold circuit.
The instantaneous amplitude of the analog (voice) signal is
held as a constant charge on a capacitor for the duration of
the sampling period Ts.
This technique is useful for holding the sample constant
while other processing is taking place, but it alters the
frequency spectrum and introduces an error, called aperture
error, resulting in an inability to recover exactly the original
analog signal.
The amount of error depends on how much the analog signal
changes during the holding time, called the aperture time.
To estimate the maximum voltage error possible, determine
the maximum slope of the analog signal and multiply it by
the aperture time ΔT.
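The worst-case estimate above can be sketched numerically. A hedged example (the tone amplitude, frequency and aperture time are illustrative values, not from the notes):

```python
import math

# Worst-case aperture error ~ (max slope of the analog signal) x (aperture time).
# For a sinusoid m(t) = A*sin(2*pi*f*t), the maximum slope is 2*pi*f*A.
def max_aperture_error(A, f, aperture_time):
    return 2 * math.pi * f * A * aperture_time

# 1 V, 3.4 kHz voice-band tone held for 1 microsecond:
print(max_aperture_error(1.0, 3400.0, 1e-6))  # about 0.021 V worst case
```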
Recovering the original message signal m(t) from the PAM signal:

The PAM signal is passed through a low-pass filter of bandwidth W; the filter output is M_δ(f) H(f). Note that the Fourier transform of h(t) is given by

H(f) = T sinc(fT) exp(−jπfT)    (3.19)

This introduces amplitude distortion and a delay of T/2: the aperture effect.

Let the equalizer response be

1/|H(f)| = 1/(T sinc(fT)) = πf / sin(πfT)    (3.20)

Ideally the original signal m(t) can be recovered completely.
Other Forms of Pulse Modulation:
In pulse width modulation (PWM), the width of each pulse is
made directly proportional to the amplitude of the information
signal.
In pulse position modulation, constantwidth pulses are used,
and the position or time of occurrence of each pulse from some
reference time is made directly proportional to the amplitude of
the information signal.
Pulse Code Modulation (PCM) :
Pulse code modulation (PCM) is produced by an analog-to-digital
conversion process.
As in the case of other pulse modulation techniques, the rate
at which samples are taken and encoded must conform to the
Nyquist sampling rate.
The sampling rate must be greater than twice the highest
frequency in the analog signal:
f_s > 2f_A(max)
Quantization Process:
Define partition cells

J_k : {m_k < m ≤ m_{k+1}},   k = 1, 2, …, L    (3.21)

where m_k is the decision level or decision threshold.

Amplitude quantization: the process of transforming the sample amplitude m(nT_s) of a message signal m(t) into a discrete amplitude ν(nT_s), as shown in Fig. 3.9.

If m(nT_s) ∈ J_k, then the quantizer output is ν_k, k = 1, 2, …, L, where the ν_k are the representation or reconstruction levels and Δ is the step size.

The mapping

ν = g(m)    (3.22)

is called the quantizer characteristic, which is a staircase function.
Figure 3.10 Two types of quantization: (a) midtread and (b)
midrise.
Quantization Noise:
Figure 3.11 Illustration of the quantization process
Let the quantization error be denoted by the random variable Q of sample value q:

q = m − ν    (3.23)

Q = M − V,   (E[M] = 0)    (3.24)

Assuming a uniform quantizer of the midrise type, the step size is

Δ = 2 m_max / L    (3.25)

where m_max is the maximum input amplitude and L is the total number of levels.

f_Q(q) = 1/Δ,  −Δ/2 < q ≤ Δ/2    (3.26)
       = 0,    otherwise

σ_Q² = E[Q²] = ∫ q² f_Q(q) dq = (1/Δ) ∫_{−Δ/2}^{Δ/2} q² dq = Δ²/12    (3.28)
v
When the quantized sample is expressed in binary form,

L = 2^R    (3.29)

where R is the number of bits per sample:

R = log₂ L    (3.30)

Δ = 2 m_max / 2^R    (3.31)

σ_Q² = (1/3) m_max² 2^{−2R}    (3.32)

Let P denote the average power of m(t). Then

(SNR)_o = P / σ_Q² = (3P / m_max²) 2^{2R}    (3.33)

(SNR)_o increases exponentially with increasing R (bandwidth).
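Equation (3.33) can be evaluated directly. A minimal sketch, assuming a full-load sinusoidal input (so P = m_max²/2, an assumption not stated in the notes), which recovers the familiar "about 6 dB per bit" rule:

```python
import math

# Output SNR of a uniform quantizer per eq. (3.33), in decibels.
# For a full-load sinusoid P = m_max**2 / 2, giving SNR ~ 6.02*R + 1.76 dB.
def quantizer_snr_db(R, P, m_max=1.0):
    return 10 * math.log10((3 * P / m_max**2) * 2 ** (2 * R))

for R in (8, 12, 16):
    print(R, round(quantizer_snr_db(R, P=0.5), 2))
```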
Pulse Code Modulation (PCM):
Figure 3.13 The basic elements of a PCM system
Quantization (nonuniform quantizer):
Compression laws. (a) μ-law. (b) A-law.
μ-law:

|ν| = log(1 + μ|m|) / log(1 + μ)    (3.48)

d|m| / d|ν| = [log(1 + μ) / μ] (1 + μ|m|)    (3.49)

A-law:

|ν| = A|m| / (1 + log A),               0 ≤ |m| ≤ 1/A    (3.50)
    = (1 + log(A|m|)) / (1 + log A),    1/A ≤ |m| ≤ 1

d|m| / d|ν| = (1 + log A) / A,          0 ≤ |m| ≤ 1/A    (3.51)
            = (1 + log A)|m|,           1/A ≤ |m| ≤ 1
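The μ-law compressor (3.48) can be sketched in a few lines. The value μ = 255 is the standard North American choice (an assumption here, since the notes do not fix μ); the point of the curve is that small amplitudes are boosted while full scale maps to full scale.

```python
import math

# Sketch of the mu-law compressor, eq. (3.48), sign-preserving,
# for normalized input samples m in [-1, 1].
def mu_law(m, mu=255.0):
    return math.copysign(math.log1p(mu * abs(m)) / math.log1p(mu), m)

print(mu_law(1.0))    # full scale maps to 1.0
print(mu_law(0.01))   # a small amplitude is boosted well above 0.01
```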
Figure 3.15 Line codes for the electrical representations of binary
data.
(a) Unipolar NRZ signaling. (b) Polar NRZ signaling.
(c) Unipolar RZ signaling. (d) Bipolar RZ signaling.
(e) Split-phase or Manchester code.
Noise consideration in PCM systems:
(Channel noise, quantization noise)
Time-Division Multiplexing (TDM):
Digital Multiplexers :
Virtues, Limitations and Modifications of PCM:
Advantages of PCM
1. Robustness to noise and interference
2. Efficient regeneration
3. Efficient SNR and bandwidth tradeoff
4. Uniform format
5. Ease of add and drop
6. Secure
Delta Modulation (DM) :
 
     
   
     
   
Let m[n] = m(nT_s), n = 0, ±1, ±2, …
where T_s is the sampling period and m(nT_s) is a sample of m(t).

The error signal is

e[n] = m[n] − m_q[n − 1]    (3.52)

e_q[n] = Δ sgn(e[n])    (3.53)

m_q[n] = m_q[n − 1] + e_q[n]    (3.54)

where m_q[n] is the quantizer output, e_q[n] is the quantized version of e[n], and Δ is the step size.
The modulator consists of a comparator, a quantizer, and an
accumulator
The output of the accumulator is
Two types of quantization errors :
Slope overload distortion and granular noise
   
m_q[n] = Δ Σ_{i=1}^{n} sgn(e[i]) = Σ_{i=1}^{n} e_q[i]    (3.55)
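Equations (3.52)–(3.55) can be sketched as a short loop. This is a minimal illustration (the input samples and step size are assumptions); the staircase m_q[n] tracks the input one step at a time.

```python
# One-bit delta modulator per eqs. (3.52)-(3.54): the accumulator output
# m_q[n] is a staircase approximation of the input samples.
def delta_modulate(samples, step):
    m_q, bits, track = 0.0, [], []
    for m in samples:
        e = m - m_q                      # (3.52) error vs. last staircase value
        e_q = step if e >= 0 else -step  # (3.53) one-bit quantization
        m_q += e_q                       # (3.54) accumulator
        bits.append(1 if e_q > 0 else 0)
        track.append(m_q)
    return bits, track

bits, track = delta_modulate([0.1, 0.2, 0.3, 0.3], step=0.1)
print(bits)
print([round(v, 1) for v in track])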
Slope Overload Distortion and Granular Noise:
DeltaSigma modulation (sigmadelta modulation):
Adding an integrator to the modulator relieves the
drawback of delta modulation (the differentiator).
Beneficial effects of using integrator:
1. Pre-emphasize the low-frequency content
2. Increase correlation between adjacent samples
(reduce the variance of the error signal at the quantizer input)
3. Simplify receiver design
Because the transmitter has an integrator, the receiver
consists simply of a low-pass filter.
(The differentiator in the conventional DM receiver is cancelled
by the integrator )
 
     
       
 
Denote the quantization error by q[n]:

m_q[n] = m[n] + q[n]    (3.56)

Recalling (3.52), we have

e[n] = m[n] − m[n − 1] − q[n − 1]    (3.57)

Except for q[n − 1], the quantizer input is a first backward difference of the input signal.

To avoid slope-overload distortion, we require

Δ/T_s ≥ max |dm(t)/dt|    (3.58)

On the other hand, granular noise occurs when the step size Δ is too large relative to the local slope of m(t).
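Condition (3.58) gives a concrete minimum step size for a known input. A hedged sketch for a sinusoidal input (the tone parameters are illustrative assumptions):

```python
import math

# For m(t) = A*sin(2*pi*f*t), max |dm/dt| = 2*pi*f*A, so condition (3.58)
# requires the step size Delta >= 2*pi*f*A / fs to avoid slope overload.
def min_step_for_sine(A, f, fs):
    return 2 * math.pi * f * A / fs

# 1 kHz tone, unit amplitude, sampled at 64 kHz:
print(min_step_for_sine(1.0, 1000.0, 64000.0))
```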
Linear Prediction (to reduce the sampling rate):
Consider a finite-duration impulse response (FIR)
discrete-time filter which consists of three blocks:
1. Set of p (p: prediction order) unit-delay elements (z⁻¹)
2. Set of multipliers with coefficients w₁, w₂, …, w_p
3. Set of adders (Σ)
 
     
   
         
The filter output (the linear prediction of the input) is

x̂[n] = Σ_{k=1}^{p} w_k x[n − k]    (3.59)

The prediction error is

e[n] = x[n] − x̂[n]    (3.60)

Let the index of performance be the mean-square error

J = E[e²[n]]    (3.61)

Find w₁, w₂, …, w_p to minimize J. From (3.59), (3.60) and (3.61) we have

J = E[x²[n]] − 2 Σ_{k=1}^{p} w_k E[x[n] x[n−k]] + Σ_{j=1}^{p} Σ_{k=1}^{p} w_j w_k E[x[n−j] x[n−k]]    (3.62)
       
   
       
   
   
     
Assume X(t) is a stationary process with zero mean (E[x[n]] = 0), so that

σ_X² = E[x²[n]] − (E[x[n]])² = E[x²[n]]

The autocorrelation is

R_X(kT_s) = R_X[k] = E[x[n] x[n − k]]

We may simplify J as

J = σ_X² − 2 Σ_{k=1}^{p} w_k R_X[k] + Σ_{j=1}^{p} Σ_{k=1}^{p} w_j w_k R_X[k − j]    (3.63)

∂J/∂w_k = −2 R_X[k] + 2 Σ_{j=1}^{p} w_j R_X[k − j] = 0

Σ_{j=1}^{p} w_j R_X[k − j] = R_X[k],   k = 1, 2, …, p    (3.64)

Equations (3.64) are called the Wiener-Hopf equations.
 
     
     
     
     
   
 
In matrix form,

w_o = R_X⁻¹ r_X,  if R_X⁻¹ exists    (3.66)

where w_o = [w₁, w₂, …, w_p]ᵀ,

R_X = [ R_X[0]    R_X[1]    …  R_X[p−1]
        R_X[1]    R_X[0]    …  R_X[p−2]
        ⋮                        ⋮
        R_X[p−1]  R_X[p−2]  …  R_X[0] ]

and r_X = [R_X[1], R_X[2], …, R_X[p]]ᵀ.

Substituting (3.64) into (3.63) yields

J_min = σ_X² − 2 Σ_{k=1}^{p} w_k R_X[k] + Σ_{k=1}^{p} w_k R_X[k]
      = σ_X² − Σ_{k=1}^{p} w_k R_X[k]
      = σ_X² − r_Xᵀ R_X⁻¹ r_X    (3.67)

Since r_Xᵀ R_X⁻¹ r_X > 0, J_min is always less than σ_X².
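The Wiener-Hopf equations (3.64) can be solved by hand for a small prediction order. A sketch for p = 2 (the AR(1) autocorrelation R_X[k] = a^|k| used in the demo is an assumed example, not from the notes):

```python
# Solve the p = 2 Wiener-Hopf system (3.64):
#   [[R0, R1], [R1, R0]] [w1, w2]^T = [R1, R2]^T
# by Cramer's rule.
def wiener_hopf_p2(R0, R1, R2):
    det = R0 * R0 - R1 * R1
    w1 = (R1 * R0 - R1 * R2) / det
    w2 = (R0 * R2 - R1 * R1) / det
    return w1, w2

# For an AR(1)-like autocorrelation R_X[k] = a**|k| (here a = 0.9) the
# optimum predictor is w1 = a, w2 = 0: only the most recent sample helps.
print(wiener_hopf_p2(1.0, 0.9, 0.81))
```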
Linear adaptive prediction :
   
   
The predictor is adaptive in the following sense:
1. Compute w_k, k = 1, 2, …, p, starting from any initial values.
2. Do iterations using the method of steepest descent.

Define the gradient vector

g_k = ∂J/∂w_k,   k = 1, 2, …, p    (3.68)

Let w_k[n] denote the value at iteration n. Then update

w_k[n + 1] = w_k[n] − (1/2) μ g_k,   k = 1, 2, …, p    (3.69)

where μ is a step-size parameter and the factor 1/2 is for convenience of presentation.
   
           
   
           
           
     
       
g_k = ∂J/∂w_k = −2 R_X[k] + 2 Σ_{j=1}^{p} w_j R_X[k − j]
    = −2 E[x[n] x[n−k]] + 2 Σ_{j=1}^{p} w_j E[x[n−j] x[n−k]],   k = 1, 2, …, p    (3.70)

To simplify the computation we use x[n]x[n−k] for E[x[n]x[n−k]] (ignore the expectation):

ĝ_k[n] = −2 x[n] x[n−k] + 2 Σ_{j=1}^{p} ŵ_j[n] x[n−j] x[n−k],   k = 1, 2, …, p    (3.71)

ŵ_k[n + 1] = ŵ_k[n] − (1/2) μ ĝ_k[n]
           = ŵ_k[n] + μ x[n−k] ( x[n] − Σ_{j=1}^{p} ŵ_j[n] x[n−j] )
           = ŵ_k[n] + μ x[n−k] e[n],   k = 1, 2, …, p    (3.72)

where, by (3.59) and (3.60),

e[n] = x[n] − Σ_{j=1}^{p} ŵ_j[n] x[n−j]    (3.73)

The above equations are called the least-mean-square (LMS) algorithm.
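The LMS recursion (3.72) fits in a short loop. A minimal sketch, run on a deterministic alternating input (an assumed test signal for which the best one-tap predictor is exactly w₁ = −1, since each sample is the negative of the previous one):

```python
# LMS adaptive predictor per eqs. (3.71)-(3.72).
def lms_predict(x, p, mu):
    w = [0.0] * p
    for n in range(p, len(x)):
        x_hat = sum(w[k] * x[n - 1 - k] for k in range(p))
        e = x[n] - x_hat                    # prediction error (3.60)
        for k in range(p):
            w[k] += mu * e * x[n - 1 - k]   # LMS update (3.72)
    return w

x = [(-1) ** n for n in range(300)]   # alternating +1, -1, +1, ...
w = lms_predict(x, p=1, mu=0.1)
print(w)   # converges toward [-1.0]
```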
Figure 3.27
Block diagram illustrating the linear adaptive prediction process
Differential Pulse-Code Modulation (DPCM):
Usually PCM has a sampling rate higher than the Nyquist rate.
The encoded signal then contains redundant information; DPCM can
efficiently remove this redundancy.
Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.
Input signal to the quantizer is defined by:
     
 
     
 
       
 
e[n] = m[n] − m̂[n]    (3.74)

where m̂[n] is a prediction value.

The quantizer output is

e_q[n] = e[n] + q[n]    (3.75)

where q[n] is the quantization error.

The prediction filter input is

m_q[n] = m̂[n] + e_q[n]    (3.76)
       = m̂[n] + e[n] + q[n]    (3.77)

⇒ m_q[n] = m[n] + q[n]    (3.78)
From (3.74)
Processing Gain:
Adaptive Differential Pulse-Code Modulation
(ADPCM):
Need for coding speech at low bit rates , we have two
aims in mind:
1. Remove redundancies from the speech signal as far
as possible.
2. Assign the available bits in a perceptually efficient
manner.
   
The output signal-to-noise ratio of the DPCM system is

(SNR)_o = σ_M² / σ_Q²    (3.79)

where σ_M² and σ_Q² are the variances of m[n] (E[m[n]] = 0) and q[n].

(SNR)_o = (σ_M² / σ_E²)(σ_E² / σ_Q²) = G_p (SNR)_Q    (3.80)

where σ_E² is the variance of the prediction error and the signal-to-quantization noise ratio is

(SNR)_Q = σ_E² / σ_Q²    (3.81)

Processing Gain:

G_p = σ_M² / σ_E²    (3.82)

Design a prediction filter to maximize G_p (minimize σ_E²).
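The factorization (3.80)–(3.82) is easy to sketch numerically. The variances below are illustrative assumptions (not measured values from the notes); the point is that shrinking the prediction-error variance adds directly to the output SNR in decibels.

```python
import math

def processing_gain(var_m, var_e):
    return var_m / var_e                  # G_p, eq. (3.82)

def dpcm_snr_db(var_m, var_e, var_q):
    gp = processing_gain(var_m, var_e)
    snr_q = var_e / var_q                 # (SNR)_Q, eq. (3.81)
    return 10 * math.log10(gp * snr_q)    # (SNR)_o, eq. (3.80)

# If prediction shrinks the signal variance by 10x, G_p contributes 10 dB:
print(round(10 * math.log10(processing_gain(1.0, 0.1)), 1))
print(round(dpcm_snr_db(1.0, 0.1, 0.001), 1))
```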
Figure 3.29 Adaptive quantization with backward estimation
(AQB).
Figure 3.30 Adaptive prediction with backward estimation (APB).
UNIT II: BASEBAND FORMATTING TECHNIQUES
CORRELATIVE LEVEL CODING:
Correlative-level coding (partial response signaling)
– adding ISI to the transmitted signal in a controlled
manner
Since ISI introduced into the transmitted signal is
known, its effect can be interpreted at the receiver
A practical method of achieving the theoretical
maximum signaling rate of 2W symbol per second in a
bandwidth of W Hertz
Using realizable and perturbation-tolerant filters
Duobinary Signaling :
Duo : doubling of the transmission capacity of a straight binary
system
Binary input sequence {bk} : uncorrelated binary symbol 1, 0
a_k = +1 if symbol b_k is 1
a_k = −1 if symbol b_k is 0

c_k = a_k + a_{k−1}
The tails of h_I(t) decay as 1/t², which is a faster rate of
decay than the 1/t encountered in the ideal Nyquist channel.
Let â_k represent the estimate of the original pulse a_k as
conceived by the receiver at time t = kT_b.
Decision feedback : technique of using a stored estimate of
the previous symbol
Propagate : drawback, once error are made, they tend to
propagate through the output
Precoding : practical means of avoiding the error propagation
phenomenon before the duobinary coding
H_I(f) = H_Nyquist(f) [1 + exp(−j2πfT_b)]
       = H_Nyquist(f) [exp(jπfT_b) + exp(−jπfT_b)] exp(−jπfT_b)
       = 2 H_Nyquist(f) cos(πfT_b) exp(−jπfT_b)

H_I(f) = 2 cos(πfT_b) exp(−jπfT_b),   |f| ≤ 1/(2T_b)
       = 0,                            otherwise
h_I(t) = sin(πt/T_b)/(πt/T_b) + sin[π(t − T_b)/T_b]/[π(t − T_b)/T_b]
       = T_b² sin(πt/T_b) / [πt(T_b − t)]

H_Nyquist(f) = 1,   |f| ≤ 1/(2T_b)
             = 0,   otherwise
Precoding: d_k = b_k ⊕ d_{k−1}

d_k = symbol 1 if either symbol b_k or symbol d_{k−1} (but not both) is 1
    = symbol 0 otherwise
{dk} is applied to a pulseamplitude modulator, producing a
corresponding two-level sequence of short pulses {a_k}, where
+1 or –1 as before
c_k = a_k + a_{k−1}

c_k = 0    if data symbol b_k is 1
c_k = ±2   if data symbol b_k is 0

so the receiver decides b_k = 1 when |c_k| < 1 and b_k = 0 when |c_k| > 1;
if |c_k| = 1, it makes a random guess in favor of symbol 1 or 0.
Modified Duobinary Signaling :
Nonzero at the origin : undesirable
Subtracting amplitude-modulated pulses spaced 2T_b seconds apart (with precoding):

c_k = a_k − a_{k−2}

H_IV(f) = H_Nyquist(f) [1 − exp(−j4πfT_b)]
        = 2j H_Nyquist(f) sin(2πfT_b) exp(−j2πfT_b)
H_IV(f) = 2j sin(2πfT_b) exp(−j2πfT_b),   |f| ≤ 1/(2T_b)
        = 0,                               elsewhere
h_IV(t) = sin(πt/T_b)/(πt/T_b) − sin[π(t − 2T_b)/T_b]/[π(t − 2T_b)/T_b]
        = 2T_b² sin(πt/T_b) / [πt(2T_b − t)]
Precoding: d_k = b_k ⊕ d_{k−2}

d_k = symbol 1 if either symbol b_k or symbol d_{k−2} (but not both) is 1
    = symbol 0 otherwise
|c_k| = 1 : make a random guess in favor of symbol 1 or 0

If |c_k| > 1, say symbol b_k is 1
If |c_k| < 1, say symbol b_k is 0
Generalized form of correlativelevel coding:
h(t) = Σ_{n=0}^{N−1} w_n sinc(t/T_b − n)
Baseband M-ary PAM Transmission:
Produce one of M possible amplitude levels
T : symbol duration
1/T: signaling rate, symbol per second, bauds
– Equal to log2M bit per second
T_b : bit duration of equivalent binary PAM
To realize the same average probability of symbol error,
transmitted power must be increased by a factor of
M²/log₂M compared to binary PAM
Tapped-delay-line equalization:
Approach to high speed transmission
– Combination of two basic signal-processing operations
– Discrete PAM
– Linear modulation scheme
The number of detectable amplitude levels is often limited by
ISI
Residual distortion for ISI : limiting factor on data rate of the
system
Equalization : to compensate for the residual distortion
Equalizer : filter
– A device well-suited for the design of a linear equalizer
is the tapped-delay-line filter
– Total number of taps is chosen to be (2N+1)
P(t) is equal to the convolution of c(t) and h(t)
nT=t sampling time, discrete convolution sum
Nyquist criterion for distortionless transmission, with T used
in place of Tb, normalized condition p(0)=1
Zeroforcing equalizer
– Optimum in the sense that it minimizes the peak
distortion(ISI) – worst case
– Simple implementation
– The longer the equalizer, the more closely it approaches the
ideal condition for distortionless transmission
Adaptive Equalizer :
The channel is usually time varying
– Difference in the transmission characteristics of the
individual links that may be switched together
– Differences in the number of links in a connection
Adaptive equalization
h(t) = Σ_{k=−N}^{N} w_k δ(t − kT)

p(t) = c(t) ⋆ h(t) = c(t) ⋆ Σ_{k=−N}^{N} w_k δ(t − kT)
     = Σ_{k=−N}^{N} w_k c(t) ⋆ δ(t − kT)
     = Σ_{k=−N}^{N} w_k c(t − kT)

At the sampling times t = nT (discrete convolution sum):

p(nT) = Σ_{k=−N}^{N} w_k c((n − k)T)

Zero-forcing condition:

p(nT) = 1,  n = 0
      = 0,  n = ±1, ±2, …, ±N
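The zero-forcing conditions form a small linear system in the taps. A hedged sketch (the channel samples c(0) = 1, c(T) = 0.25 are an assumed toy channel, not from the notes) that solves the 3-tap (N = 1) case and verifies that ISI is nulled at the sample points:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for j in range(i, n + 1):
                M[r][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def zero_forcing_taps(c, N):
    """Taps w_k with sum_k w_k c[(n-k)T] = 1 at n = 0 and 0 at n = +/-1..N."""
    A = [[c.get(n - k, 0.0) for k in range(-N, N + 1)] for n in range(-N, N + 1)]
    b = [1.0 if n == 0 else 0.0 for n in range(-N, N + 1)]
    return solve(A, b)

channel = {0: 1.0, 1: 0.25}       # assumed channel: one postcursor
w = zero_forcing_taps(channel, N=1)
p = [sum(w[k + 1] * channel.get(n - k, 0.0) for k in (-1, 0, 1)) for n in (-1, 0, 1)]
print(w)
print(p)   # ISI forced to zero at the three sample points, p(0) = 1
```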
– Adjusts itself by operating on the input signal
Training sequence
– Pre-call equalization
– Channel changes little during an average data call
Prechannel equalization
– Require the feedback channel
Postchannel equalization
synchronous
– Tap spacing is the same as the symbol duration of
transmitted signal
Least-Mean-Square Algorithm:
Adaptation may be achieved
– By observing the error between the desired pulse shape and
the actual pulse shape
– Using this error to estimate the direction in which the
tapweight should be changed
Meansquare error criterion
– More general in application
– Less sensitive to timing perturbations
d_n : desired response,  e_n : error signal,  y_n : actual response

The mean-square error is defined by the cost function

ε = E[e_n²]

∂ε/∂w_k = 2 E[ e_n ∂e_n/∂w_k ] = −2 E[ e_n x_{n−k} ] = −2 R_ex(k)

Ensemble-averaged cross-correlation:

R_ex(k) = E[ e_n x_{n−k} ]
Optimality condition for minimum meansquare error
The mean-square error is a second-order, parabolic function of the
tap weights, i.e., a multidimensional bowl-shaped surface.
The adaptive process makes successive adjustments to the tap weights,
seeking the bottom of the bowl (the minimum value).
Steepest descent algorithm
– The successive adjustments to the tap weights are made in the
direction opposite to the gradient vector
– Recursive formula (μ : step-size parameter)
LeastMeanSquare Algorithm
– Steepestdescent algorithm is not available in an
unknown environment
– Approximation to the steepest descent algorithm using
instantaneous estimate
LMS is a feedback system
In the case of small µ, roughly similar to steepest
descent algorithm
∂ε/∂w_k = 0,  for k = 0, ±1, …, ±N

w_k(n + 1) = w_k(n) − (1/2) μ ∂ε/∂w_k,  k = 0, ±1, …, ±N
           = w_k(n) + μ R_ex(k),        k = 0, ±1, …, ±N

Instantaneous estimates:

R̂_ex(k) = e_n x_{n−k}

w_k(n + 1) = w_k(n) + μ e_n x_{n−k}
Operation of the equalizer:
Training mode
– Known sequence is transmitted and a synchronized
version is generated in the receiver
– Use the training sequence, so called pseudonoise(PN)
sequence
Decisiondirected mode
– After training sequence is completed
– Track relatively slow variation in channel characteristic
Large μ : fast tracking, excess mean square error
Implementation Approaches:
Analog
– CCD, Tapweight is stored in digital memory, analog
sample and multiplication
– Symbol rate is too high
Digital
– Sample is quantized and stored in shift register
– Tap weight is stored in shift register, digital
multiplication
Programmable digital
– Microprocessor
– Flexibility
– Same H/W may be time shared
Decision-Feedback Equalization:
Baseband channel impulse response : {hn}, input : {xn}
Using data decisions made on the basis of precursor to take
care of the postcursors
– The decision would obviously have to be correct
y_n = Σ_k h_k x_{n−k}
    = h_0 x_n + Σ_{k<0} h_k x_{n−k} + Σ_{k>0} h_k x_{n−k}

where the k < 0 terms are the precursors and the k > 0 terms are the postcursors.
Feedforward section : tappeddelayline equalizer
Feedback section : the decision is made on previously
detected symbols of the input sequence
– Nonlinear feedback loop by decision device
Eye Pattern:
Experimental tool for such an evaluation in an insightful
manner
– Synchronized superposition of all the signal of interest
viewed within a particular signaling interval
Eye opening : interior region of the eye pattern
Decision-feedback equalizer adaptation (tap vector split into a
feedforward part w^(1) and a feedback part w^(2)):

w_n = [ w_n^(1) ; w_n^(2) ],   v_n = [ x_n ; a_n ]

e_n = a_n − w_nᵀ v_n

w_{n+1}^(1) = w_n^(1) + μ e_n x_n
w_{n+1}^(2) = w_n^(2) + μ e_n a_n
In the case of an M-ary system, the eye pattern contains (M − 1)
eye openings, where M is the number of discrete amplitude
levels.
Interpretation of Eye Diagram:
UNIT III BASEBAND CODING TECHNIQUES
ASK, OOK, MASK:
• The amplitude (or height) of the sine wave varies to transmit
the ones and zeros
• One amplitude encodes a 0 while another amplitude encodes
a 1 (a form of amplitude modulation)
Binary amplitude shift keying, Bandwidth:
• d ≥ 0, related to the condition of the line
B = (1 + d) × S = (1 + d) × N × 1/r
Implementation of binary ASK:
Frequency Shift Keying:
• One frequency encodes a 0 while another frequency encodes
a 1 (a form of frequency modulation)
FSK Bandwidth:
• Limiting factor: Physical capabilities of the carrier
• Not susceptible to noise as much as ASK
• Applications
– On voicegrade lines, used up to 1200bps
– Used for highfrequency (3 to 30 MHz) radio
transmission
– used at higher frequencies on LANs that use coaxial
cable
s(t) = A cos(2πf₁t),  binary 1
     = A cos(2πf₂t),  binary 0
DBPSK:
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
Concept of a constellation :
s(t) = A cos(2πf_c t + π/4),   for dibit 11
     = A cos(2πf_c t + 3π/4),  for dibit 01
     = A cos(2πf_c t − 3π/4),  for dibit 00
     = A cos(2πf_c t − π/4),   for dibit 10
M-ary PSK:
Using multiple phase angles, with each angle having more than one
amplitude, multiple signal elements can be achieved
– D = modulation rate, baud
– R = data rate, bps
– M = number of different signal elements = 2L
– L = number of bits per signal element
D = R / L = R / log₂ M
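The rate relation D = R/log₂M can be sketched with assumed line rates (the 9600 bps figure is illustrative, not from the notes):

```python
import math

# An M-ary symbol carries log2(M) bits, so the modulation (baud) rate
# is the bit rate divided by the bits per symbol: D = R / log2(M).
def baud_rate(R, M):
    return R / math.log2(M)

print(baud_rate(9600, 8))    # 8-PSK: 3 bits/symbol -> 3200 baud
print(baud_rate(9600, 16))   # 16 signal elements: 4 bits/symbol -> 2400 baud
```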
QAM:
– As an example of QAM, 12 different phases are
combined with two different amplitudes
– Since only 4 phase angles have 2 different amplitudes,
there are a total of 16 combinations
– With 16 signal combinations, each baud equals 4 bits of
information (2⁴ = 16)
– Combine ASK and PSK such that each signal
corresponds to multiple bits
– More phases than amplitudes
– Minimum bandwidth requirement same as ASK or PSK
QAM and QPR:
• QAM is a combination of ASK and PSK
– Two different signals sent simultaneously on the same
carrier frequency
– M=4, 16, 32, 64, 128, 256
• Quadrature Partial Response (QPR)
– 3 levels (+1, 0, −1), so 9-QPR, 49-QPR
Offset quadrature phaseshift keying (OQPSK):
• QPSK can have 180 degree jump, amplitude fluctuation
• By offsetting the timing of the odd and even bits by one bit
period, or half a symbol period, the in-phase and quadrature
components will never change at the same time.
Generation and Detection of Coherent BPSK:
Figure 6.26 Block diagrams for (a) binary FSK transmitter and
(b) coherent binary FSK receiver.
Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time function s₁φ₁(t). (c) Waveform of scaled time function s₂φ₂(t). (d) Waveform of the MSK signal s(t) obtained by adding s₁φ₁(t) and s₂φ₂(t) on a bit-by-bit basis.
Fig. 6.28
Figure 6.29 Signalspace diagram for MSK system.
Generation and Detection of MSK Signals:
Figure 6.31 Block diagrams for (a) MSK transmitter and (b)
coherent MSK receiver.
UNIT IV BASEBAND RECEPTION TECHNIQUES
• Forward Error Correction (FEC)
– Coding designed so that errors can be corrected at the
receiver
– Appropriate for delay-sensitive and one-way
transmission (e.g., broadcast TV) of data
– Two main types, namely block codes and convolutional
codes. We will only look at block codes
Block Codes:
• We will consider only binary data
• Data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into blocks of length n bits
(codeword), where in general n>k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords,
– Dataword d = (d1 d2….dk)
– Codeword c = (c1 c2……..cn)
• The redundancy introduced by the code is quantified by the
code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate
Hamming Distance:
• Error control capability is determined by the Hamming
distance
• The Hamming distance between two codewords is equal to
the number of differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
• Alternatively, can compute by adding codewords (mod 2)
=01001001 (now count up the ones)
• The maximum number of detectable errors is d_min − 1
• The maximum number of correctable errors is given by
t = ⌊(d_min − 1)/2⌋
where d_min is the minimum Hamming distance between 2
codewords and ⌊x⌋ means the largest integer not greater than x
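The bookkeeping above can be sketched directly, reusing the two codewords from the example:

```python
# Hamming distance between two equal-length codewords, and the
# error-correction capability t = floor((d_min - 1) / 2).
def hamming_distance(c1, c2):
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

def correctable_errors(d_min):
    return (d_min - 1) // 2

print(hamming_distance("10011011", "11010010"))   # 3, as in the example
print(correctable_errors(3))                      # t = 1
```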
Linear Block Codes:
• As seen from the second Parity Code example, it is possible
to use a table to hold all the codewords for a code and to
lookup the appropriate codeword based on the supplied
dataword
• Alternatively, it is possible to create codewords by addition
of other codewords. This has the advantage that there is now
no longer the need to hold every possible codeword in the
table.
• If there are k data bits, all that is required is to hold k linearly
independent codewords, i.e., a set of k codewords none of
which can be produced by linear combinations of 2 or more
codewords in the set.
• The easiest way to find k linearly independent codewords is
to choose those which have „1‟ in just one of the first k
positions and „0‟ in the other k1 of the first k positions.
• For example for a (7,4) code, only four codewords are
required, e.g.,
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011, the first, third
and fourth codewords in the list are added together, giving
1011010
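This row-selection arithmetic can be sketched in code. The generator matrix below is the systematic (7,4) matrix consistent with this example (the dataword occupies the first four positions, and dataword 1011 yields the codeword 1011010 stated above):

```python
# Encoding c = dG (mod 2): XOR together the rows of G selected by the
# 1-bits of the dataword d.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(d, G):
    c = [0] * len(G[0])
    for bit, row in zip(d, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

print(encode([1, 0, 1, 1], G))   # [1, 0, 1, 1, 0, 1, 0] -- dataword leads
```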
• This process will now be described in more detail
• An (n,k) block code has code vectors
d=(d1 d2….dk) and
c=(c1 c2……..cn)
• The block coding process can be written as c=dG
where G is the Generator Matrix
• Thus,
• The rows a_i must be linearly independent, i.e.,
Since codewords are given by summations of the ai vectors,
then to avoid 2 datawords having the same codeword the ai vectors
must be linearly independent.
• Sum (mod 2) of any 2 codewords is also a codeword, i.e.,
Since for datawords d1 and d2 we have;
So,
G = [ a₁        [ a₁₁ a₁₂ … a₁n
      a₂    =     a₂₁ a₂₂ … a₂n
      ⋮            ⋮          ⋮
      a_k ]       a_k1 a_k2 … a_kn ]

c = Σ_{i=1}^{k} d_i a_i

d₃ = d₁ + d₂

c₃ = Σ_{i=1}^{k} d₃ᵢ aᵢ = Σ_{i=1}^{k} (d₁ᵢ + d₂ᵢ) aᵢ = Σ_{i=1}^{k} d₁ᵢ aᵢ + Σ_{i=1}^{k} d₂ᵢ aᵢ

c₃ = c₁ + c₂
Error Correcting Power of LBC:
• The Hamming distance of a linear block code (LBC) is
simply the minimum Hamming weight (number of 1‟s or
equivalently the distance from the all 0 codeword) of the
nonzero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore to find min Hamming distance just need to search
among the 2k codewords to find the min Hamming weight –
far simpler than doing a pair wise check for all possible
codewords.
Linear Block Codes – example 1:
• For example a (4,2) code, suppose;
a1 = [1011]
a2 = [0101]
• For d = [1 1], then;
Linear Block Codes – example 2:
• A (6,5) code with
(Example 1)

G = [ 1 0 1 1
      0 1 0 1 ]

c = [1 0 1 1] + [0 1 0 1] = [1 1 1 0]

(Example 2)

G = [ 1 0 0 0 0 1
      0 1 0 0 0 1
      0 0 1 0 0 1
      0 0 0 1 0 1
      0 0 0 0 1 1 ]
• This is an even single-parity code
Systematic Codes:
• For a systematic block code the dataword appears unaltered
in the codeword – usually at the start
• The generator matrix has the structure
G = [I | P],   R = n − k
• P is often referred to as the parity bits
• I is the k×k identity matrix; it ensures the dataword appears at the
beginning of the codeword. P is a k×R matrix.
Decoding Linear Codes:
• One possibility is a ROM lookup table
• In this case received codeword is used as an address
• Example – Even single parity check code;
Address Data
000000 0
000001 1
000010 1
000011 0
……… .
• Data output is the error flag, i.e., 0 – codeword ok,
• If no error, data word is first k bits of codeword
• For an error correcting code the ROM can also store data
words
G = [ I | P ] = [ 1 0 … 0  p₁₁ p₁₂ … p₁R
                  0 1 … 0  p₂₁ p₂₂ … p₂R
                  ⋮                     ⋮
                  0 0 … 1  p_k1 p_k2 … p_kR ]
• Another possibility is algebraic decoding, i.e., the error flag
is computed from the received codeword (as in the case of
simple parity codes)
• How can this method be extended to more complex error
detection and correction codes?
Parity Check Matrix:
• A linear block code is a linear subspace S
sub
of all length n
vectors (Space S)
• Consider the subset S null of all length n vectors in space S
that are orthogonal to all length n vectors in S
sub
• It can be shown that the dimensionality of S
null
is nk, where
n is the dimensionality of S and k is the dimensionality of
S
sub
• It can also be shown that S
null
is a valid subspace of S and
consequently S
sub
is also the null space of S
null
• S
null
can be represented by its basis vectors. In this case the
generator basis vectors (or „generator matrix‟ H) denote the
generator matrix for S
null
 of dimension nk = R
• This matrix is called the parity check matrix of the code
defined by G, where G is obviously the generator matrix for
S
sub
 of dimension k
• Note that the number of vectors in the basis defines the
dimension of the subspace
• So the dimension of H is nk (= R) and all vectors in the null
space are orthogonal to all the vectors of the code
• Since the rows of H, namely the vectors bi are members of
the null space they are orthogonal to any code vector
• So a vector y is a codeword only if yHT=0
• Note that a linear block code can be specified by either G or
H
Parity Check Matrix:
R = n − k
• So H is used to check if a codeword is valid
• The rows of H, namely b_i, are chosen to be orthogonal to the rows of G, namely a_i
• Consequently the dot product of any valid codeword with any b_i is zero
This is so since any codeword is a linear combination c = Σ d_i a_i of the rows of G,
and so b_j · c = Σ d_i (a_i · b_j) = 0
• This means that a codeword is valid (but not necessarily correct) only if cH^T = 0. To ensure this it is required that the rows of H are independent and are orthogonal to the rows of G
• That is, the b_i span the remaining R (= n − k) dimensions of the codespace
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2
    | b_1 |   | b11 b12 ... b1n |
H = | b_2 | = | b21 b22 ... b2n |
    |  :  |   |  :   :        : |
    | b_R |   | bR1 bR2 ... bRn |
c = Σ_{i=1}^{k} d_i a_i

b_j · c = Σ_{i=1}^{k} d_i (a_i · b_j) = 0
• In this example the H matrix has only one row, namely b1.
This vector is orthogonal to the plane containing the rows of
the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1, yielding a nonzero dot product between itself and b1.
Error Syndrome:
• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s, of a received codeword, c_r
s = c_r H^T
• If c_r is corrupted by the addition of an error vector, e, then
c_r = c + e
and
s = (c + e) H^T = cH^T + eH^T = 0 + eH^T
The syndrome depends only on the error
• That is, we can add the same error pattern to different code words and get the same syndrome.
– There are 2^(n−k) syndromes but 2^n error patterns
– For example, for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction is possible in this case
– Another example: a (7,4) code has 8 syndromes and 128 error patterns
– With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions as well as the zero value to indicate no errors
• Now we need to determine which error pattern caused the syndrome
• For systematic linear block codes, H is constructed as follows,
G = [I | P] and so H = [P^T | I]
where I is the k × k identity for G and the R × R identity for H
• Example, (7,4) code, dmin = 3
Error Syndrome – Example:
• For a correct received codeword c_r = [1 1 0 1 0 0 1]
In this case,

G = [I | P] = | 1 0 0 0 0 1 1 |
              | 0 1 0 0 1 0 1 |
              | 0 0 1 0 1 1 0 |
              | 0 0 0 1 1 1 1 |

H = [P^T | I] = | 0 1 1 1 1 0 0 |
                | 1 0 1 1 0 1 0 |
                | 1 1 0 1 0 0 1 |

                                | 0 1 1 |
                                | 1 0 1 |
                                | 1 1 0 |
s = c_r H^T = [1 1 0 1 0 0 1] · | 1 1 1 | = [0 0 0]
                                | 1 0 0 |
                                | 0 1 0 |
                                | 0 0 1 |
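A small sketch of this syndrome computation (plain Python, mod-2 arithmetic, same (7,4) parity check matrix):

```python
# Parity check matrix H = [P^T | I] of the (7,4) Hamming code above.
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def syndrome(cr, H):
    """s = c_r.H^T (mod 2); the all-zero syndrome means no error detected."""
    return [sum(cr[j] * row[j] for j in range(len(cr))) % 2 for row in H]

print(syndrome([1, 1, 0, 1, 0, 0, 1], H))   # [0, 0, 0] -> valid codeword
print(syndrome([1, 1, 0, 1, 0, 1, 1], H))   # nonzero   -> error detected
```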
Standard Array:
• The Standard Array is constructed as follows,
• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)
Hamming Codes:
• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n − k and n = 2^R − 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R − 1 other syndrome values, i.e., one for each bit that might need to be corrected
– This is achieved if each column of H is a different binary word – remember s = eH^T
• Systematic form of (7,4) Hamming code is,
(Standard Array layout:)

  c1 (all zero)   c2        ...   cM        syndrome s0
  e1              c2 + e1   ...   cM + e1   syndrome s1
  e2              c2 + e2   ...   cM + e2   syndrome s2
  e3              c2 + e3   ...   cM + e3   syndrome s3
  ...             ...       ...   ...       ...
  eN              c2 + eN   ...   cM + eN   syndrome sN
G = [I | P] = | 1 0 0 0 0 1 1 |
              | 0 1 0 0 1 0 1 |
              | 0 0 1 0 1 1 0 |
              | 0 0 0 1 1 1 1 |

H = [P^T | I] = | 0 1 1 1 1 0 0 |
                | 1 0 1 1 0 1 0 |
                | 1 1 0 1 0 0 1 |
• The original form is nonsystematic,
• Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H form a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the nonsystematic H is col. 7 in the systematic H.
Convolutional Code Introduction:
• Convolutional codes map information to code bits
sequentially by convolving a sequence of information bits
with “generator” sequences
• A convolutional encoder encodes K information bits to N>K
code bits at one time step
• Convolutional codes can be regarded as block codes for
which the encoder has a certain structure such that we can
express the encoding operation as convolution
• Convolutional codes are applied in applications that require
good performance with low implementation cost. They
operate on code streams (not in blocks)
• Convolutional codes have memory: previous bits are used to encode or decode the following bits (block codes are memoryless)
• Convolutional codes achieve good performance by expanding their memory depth
• Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages)
(Nonsystematic form of the (7,4) Hamming code:)

G = | 1 1 1 0 0 0 0 |
    | 1 0 0 1 1 0 0 |
    | 0 1 0 1 0 1 0 |
    | 1 1 0 1 0 0 1 |

H = | 0 0 0 1 1 1 1 |
    | 0 1 1 0 0 1 1 |
    | 1 0 1 0 1 0 1 |
• Constraint length C = n(L+1) is defined as the number of encoded bits a message bit can influence
• Convolutional encoder, k = 1, n = 2, L = 2
– A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner
– Thus the generated code is a function of the input and the state of the FSM
– In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits = constraint length C
– Thus, for generation of n-bit output, we require n shift registers in k = 1 convolutional encoders
Here each message bit influences
a span of C = n(L+1) = 3(1+1) = 6
successive output bits
x'_j   = m_j ⊕ m_{j−2} ⊕ m_{j−3}
x''_j  = m_j ⊕ m_{j−1} ⊕ m_{j−3}
x'''_j = m_j ⊕ m_{j−2}
Convolution point of view in encoding and generator matrix:
Example: Using generator matrix
Representing convolutional codes: Code tree:
(n,k,L) = (2,1,2) encoder
g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

x'_j  = m_j ⊕ m_{j−1} ⊕ m_{j−2}
x''_j = m_j ⊕ m_{j−2}

x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...
Representing convolutional codes compactly: code trellis and
state diagram:
State diagram
Inspecting state diagram: Structural properties of
convolutional codes:
• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state
• Assuming encoder zero initial state, encoded word for any
input of k bits can thus be obtained. For instance, below for
u=(1 1 1 0 1), encoded word v=(1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1
1, 1 1) is produced:
– encoder state diagram for (n,k,L) = (2,1,2) code
– note that the number of states is 2^(L+1) = 8
Distance for some convolutional codes:
THE VITERBI ALGORITHM:
• The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below from S0 to S0). The minimum distance is the sum of all path metrics
• that is maximized by the correct path
• The exhaustive maximum likelihood method must search all the paths in the phase trellis (2^k paths emerging from / entering into 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis
THE SURVIVOR PATH:
• Assume for simplicity a convolutional code with k=1, and up
to 2k = 2 branches can enter each state in trellis diagram
ln p(y, x_m) = Σ_{j=0}^{∞} ln p(y_j | x_mj)
• Assume the optimal path passes through S. Metric comparison is done by adding the metric of S to S1 and S2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded → not all path alternatives need to be considered
• Note that in principle the whole transmitted sequence must be received before a decision. However, in practice storing states for an input length of 5L is quite adequate
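The survivor selection above can be sketched with a small hard-decision Viterbi decoder. This is a hedged illustration for a (2,1,2) encoder with generators g1 = 111, g2 = 101 (i.e., the equations x'_j = m_j ⊕ m_{j−1} ⊕ m_{j−2}, x''_j = m_j ⊕ m_{j−2} given earlier); the notes' trellis figures may use a different encoder:

```python
def vencode(bits, state=(0, 0)):
    """(2,1,2) encoder; state holds (m_{j-1}, m_{j-2})."""
    out = []
    for b in bits:
        out += [b ^ state[0] ^ state[1], b ^ state[1]]  # g1 = 111, g2 = 101
        state = (b, state[0])
    return out, state

def viterbi(rx):
    """Hard-decision Viterbi: keep only the best (survivor) path per state."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: 0 if s == (0, 0) else float('inf') for s in states}
    paths = {s: [] for s in states}
    for j in range(0, len(rx), 2):
        new_m = {s: float('inf') for s in states}
        new_p = {s: [] for s in states}
        for s in states:
            for b in (0, 1):
                out, ns = vencode([b], s)
                d = (out[0] != rx[j]) + (out[1] != rx[j + 1])  # branch metric
                if metric[s] + d < new_m[ns]:                  # survivor test
                    new_m[ns] = metric[s] + d
                    new_p[ns] = paths[s] + [b]
        metric, paths = new_m, new_p
    best = min(states, key=lambda s: metric[s])
    return paths[best]

tx, _ = vencode([1, 0, 1, 1, 0, 0])   # last two zeros flush the register
rx = tx[:]
rx[3] ^= 1                            # one channel bit error
print(viterbi(rx))                    # recovers [1, 0, 1, 1, 0, 0]
```

A single channel error is corrected because every competing path accumulates a larger metric than the transmitted one.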
The maximum likelihood path:
The decoded ML code sequence is 11 10 10 11 00 00 00 whose
Hamming
distance to the received sequence is 4 and the respective decoded
sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum
distance path.
(Black circles denote the deleted branches, dashed lines: '1' was
applied)
How to end the decoding?
• In the previous example it was assumed that the register was
finally filled with zeros thus finding the minimum distance
path
• In practice with long code words zeroing requires feeding of
long sequence of zeros to the end of the message bits: this
wastes channel capacity & introduces delay
• To avoid this, path memory truncation is applied:
– Trace all the surviving paths to the depth where they merge
– The figure at right shows a common point at a memory depth J
– J is a random variable whose applicable magnitude, shown in the figure (5L), has been experimentally tested for negligible error rate increase
– Note that this also introduces a delay of 5L!
Hamming Code Example:
• H(7,4)
• Generator matrix G: its first part is the 4×4 identity matrix
• Message information vector p
• Transmission vector x
• Received vector r
and error vector e
• Parity check matrix H
(J > 5L stages of the trellis)
Error Correction:
• If there is no error, the syndrome vector z is all zeros
• If there is one error at location 2
• New syndrome vector z is
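A sketch of this correction step: for a single-bit error the syndrome equals the corresponding column of H, so locating the matching column identifies the bit to flip (plain Python, same (7,4) matrices as earlier):

```python
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def correct(r):
    """Correct a single-bit error by matching the syndrome to a column of H."""
    s = [sum(r[j] * row[j] for j in range(7)) % 2 for row in H]
    if any(s):
        # the syndrome of a one-bit error equals that bit's column of H
        pos = next(j for j in range(7) if [row[j] for row in H] == s)
        r = r[:]
        r[pos] ^= 1
    return r

c = [1, 1, 0, 1, 0, 0, 1]     # valid codeword from the earlier example
r = c[:]
r[1] ^= 1                     # one error at location 2
print(correct(r) == c)        # True: the error is located and flipped back
```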
Example of CRC:
Example: Using generator matrix:
g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

11 ⊕ 00 ⊕ 01 ⊕ 11 = 01
Turbo Codes:
• Background
– Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications.
(Path metric example: correct path bits 1+1+2+2+2 = 8, 8 · (−0.11) = −0.88; false path bits 1+1+0+0+0 = 2, 2 · (−2.30) = −4.6; total path metric −5.48)
– Performance within 0.5 dB of the channel capacity limit
for BPSK was demonstrated.
• Features of turbo codes
– Parallel concatenated coding
– Recursive convolutional encoders
– Pseudorandom interleaving
– Iterative decoding
Motivation: Performance of Turbo Codes
• Comparison:
– Rate 1/2 Codes.
– K=5 turbo code.
– K=14 convolutional code.
• Plot is from:
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by
C. Schlegel. IEEE Press, 1997
Pseudorandom Interleaving:
• The coding dilemma:
– Shannon showed that large blocklength random codes
achieve channel capacity.
– However, codes must have structure that permits
decoding with reasonable complexity.
– Codes with structure don't perform as well as random codes.
– "Almost all codes are good, except those that we can think of."
• Solution:
– Make the code appear random, while maintaining
enough structure to permit decoding.
– This is the purpose of the pseudorandom interleaver.
– Turbo codes possess random-like properties.
– However, since the interleaving pattern is known,
decoding is possible.
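Because the permutation is fixed and known to both ends, the decoder can always invert it. A minimal sketch (a seeded block interleaver; the seed value is an arbitrary illustrative choice):

```python
import random

def make_interleaver(n, seed=42):
    """A fixed pseudorandom permutation, reproducible from the seed."""
    perm = list(range(n))
    random.Random(seed).shuffle(perm)   # deterministic given the seed
    return perm

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

perm = make_interleaver(8)
data = [1, 0, 1, 1, 0, 0, 1, 0]
print(deinterleave(interleave(data, perm), perm) == data)   # True
```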
Why Interleaving and Recursive Encoding?
• In coded systems:
– Performance is dominated by low weight code words.
• A “good” code:
– will produce low weight outputs with very low
probability.
• An RSC code:
– Produces low weight outputs with fairly low
probability.
– However, some inputs still cause low weight outputs.
• Because of the interleaver:
– The probability that both encoders have inputs that
cause low weight outputs is very low.
– Therefore the parallel concatenation of both encoders
will produce a “good” code.
Iterative Decoding:
• There is one decoder for each elementary encoder.
• Each decoder estimates the a posteriori probability (APP) of
each data bit.
• The APPs are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.
– Performance generally improves from iteration to
iteration, but follows a law of diminishing returns
The Turbo Principle:
Turbo codes get their name because the decoder uses feedback, like a turbo engine
Performance as a Function of Number of Iterations:
Turbo Code Summary:
• Turbo code advantages:
– Remarkable power efficiency in AWGN and flat-fading channels for moderately low BER.
– Design tradeoffs suitable for delivery of multimedia services.
• Turbo code disadvantages:
– Long latency.
– Poor performance at very low BER.
– Because turbo codes operate at very low SNR, channel
estimation and tracking is a critical issue.
(Figure: BER versus Eb/No in dB (0.5 to 2 dB), BER axis from 10^0 down to 10^-7, with curves for 1, 2, 3, 6, 10 and 18 iterations)
• The principle of iterative or “turbo” processing can be
applied to other problems.
– Turbo multiuser detection can improve performance of coded multiple-access systems.
UNIT V BANDPASS SIGNAL TRANSMISSION AND
RECEPTION
• Spread data over wide bandwidth
• Makes jamming and interception harder
• Frequency hopping
– Signal broadcast over seemingly random series of
frequencies
• Direct Sequence
– Each bit is represented by multiple bits in transmitted
signal
– Chipping code
Spread Spectrum Concept:
• Input fed into channel encoder
– Produces narrow bandwidth analog signal around
central frequency
• Signal modulated using sequence of digits
– Spreading code/sequence
– Typically generated by pseudonoise/pseudorandom
number generator
• Increases bandwidth significantly
– Spreads spectrum
• Receiver uses same sequence to demodulate signal
• Demodulated signal fed into channel decoder
General Model of Spread Spectrum System:
Gains:
• Immunity from various noise and multipath distortion
– Including jamming
• Can hide/encrypt signals
– Only receiver who knows spreading code can retrieve
signal
• Several users can share same higher bandwidth with little
interference
– Cellular telephones
– Code division multiplexing (CDM)
– Code division multiple access (CDMA)
Pseudorandom Numbers:
• Generated by algorithm using initial seed
• Deterministic algorithm
– Not actually random
– If algorithm good, results pass reasonable tests of
randomness
• Need to know algorithm and seed to predict sequence
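A common spreading-sequence generator is a linear feedback shift register (LFSR); this is a hedged sketch with the illustrative primitive polynomial x^3 + x^2 + 1 (taps at stages 3 and 2), whose 3-bit register yields a maximal-length sequence of period 2^3 − 1 = 7:

```python
def lfsr(seed, taps, n):
    """Fibonacci LFSR: output the last stage, feed back the XOR of the taps."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]       # shift the register
    return out

seq = lfsr([1, 0, 0], taps=(3, 2), n=14)
print(seq[:7] == seq[7:])   # True: the sequence repeats with period 7
```

Knowing the taps (the algorithm) and the seed fully determines the sequence, which is exactly why the receiver can regenerate it.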
Frequency Hopping Spread Spectrum (FHSS):
• Signal broadcast over seemingly random series of
frequencies
• Receiver hops between frequencies in sync with transmitter
• Eavesdroppers hear unintelligible blips
• Jamming on one frequency affects only a few bits
Basic Operation:
• Typically 2^k carrier frequencies forming 2^k channels
• Channel spacing corresponds with bandwidth of input
• Each channel used for fixed interval
– 300 ms in IEEE 802.11
– Some number of bits transmitted using some encoding
scheme
• May be fractions of bit (see later)
– Sequence dictated by spreading code
Frequency Hopping Example:
Frequency Hopping Spread Spectrum System (Transmitter):
Frequency Hopping Spread Spectrum System (Receiver):
Slow and Fast FHSS:
• Frequency shifted every Tc seconds
• Duration of signal element is Ts seconds
• Slow FHSS has Tc > Ts
• Fast FHSS has Tc < Ts
• Generally fast FHSS gives improved performance in noise
(or jamming)
Slow Frequency Hop Spread Spectrum Using MFSK (M=4,
k=2)
Fast Frequency Hop Spread Spectrum Using MFSK (M=4,
k=2)
FHSS Performance Considerations:
• Typically large number of frequencies used
– Improved resistance to jamming
Direct Sequence Spread Spectrum (DSSS):
• Each bit represented by multiple bits using spreading
code
• Spreading code spreads signal across wider frequency
band
– In proportion to number of bits used
– 10 bit spreading code spreads signal across 10
times bandwidth of 1 bit code
• One method:
– Combine input with spreading code using XOR
– Input bit 1 inverts spreading code bit
– Input bit 0 doesn't alter spreading code bit
– Data rate equal to original spreading code
• Performance similar to FHSS
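The XOR method above can be sketched directly: a data 1 inverts the spreading code chips, a data 0 leaves them unchanged, and the receiver (which knows the code) undoes the spreading. The 10-chip code here is an arbitrary illustrative choice:

```python
code = [0, 1, 1, 0, 1, 0, 1, 0, 0, 1]   # illustrative 10-chip spreading code

def spread(bits, code):
    """Each data bit is XORed with every chip of the spreading code."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        # count chips inverted relative to the code: all for a 1, none for a 0
        flips = sum(x ^ c for x, c in zip(chips[i:i + n], code))
        out.append(1 if flips > n // 2 else 0)
    return out

tx = spread([1, 0, 1], code)
print(len(tx), despread(tx, code))   # 30 chips -> [1, 0, 1]
```

The majority decision in `despread` also hints at the noise immunity: a few corrupted chips per bit do not change the decoded value.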
Direct Sequence Spread Spectrum Example:
Direct Sequence Spread Spectrum Transmitter:
Direct Sequence Spread Spectrum Receiver:
Direct Sequence Spread Spectrum Using BPSK
Example:
Code Division Multiple Access (CDMA):
• Multiplexing Technique used with spread spectrum
• Start with data signal rate D
– Called bit data rate
• Break each bit into k chips according to fixed pattern specific
to each user
– User's code
• New channel has chip data rate kD chips per second
• E.g. k=6, three users (A,B,C) communicating with base
receiver R
• Code for A = <1, −1, −1, 1, −1, 1>
• Code for B = <1, 1, −1, −1, 1, 1>
• Code for C = <1, 1, −1, 1, 1, −1>
CDMA Example:
• Consider A communicating with the base
• The base knows A's code
• Assume communication is already synchronized
• A wants to send a 1
– Send chip pattern <1, −1, −1, 1, −1, 1>
• A's code
• A wants to send a 0
– Send chip pattern <−1, 1, 1, −1, 1, −1>
• Complement of A's code
• The decoder ignores other sources when using A's code to decode
– Orthogonal codes
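The decode step can be sketched as a correlation: the receiver takes the inner product of the received chips with the desired user's ±1 code, and orthogonal codes from other users contribute zero (a hedged illustration using two orthogonal codes of the kind above):

```python
code_a = [1, -1, -1, 1, -1, 1]   # illustrative chip codes;
code_b = [1, 1, -1, -1, 1, 1]    # code_a . code_b = 0 (orthogonal)

def tx_bit(bit, code):
    """A 1 sends the code, a 0 sends its complement."""
    return code if bit else [-c for c in code]

def decode(chips, code):
    """Correlate with the user's code: +k -> 1, -k -> 0."""
    corr = sum(x * c for x, c in zip(chips, code))
    return 1 if corr > 0 else 0

# Two users transmit simultaneously; the channel adds their chips:
rx = [a + b for a, b in zip(tx_bit(1, code_a), tx_bit(0, code_b))]
print(decode(rx, code_a), decode(rx, code_b))   # 1 0
```

Because the codes are orthogonal, each correlation isolates one user's bit even though the chips were summed on the channel.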
CDMA for DSSS:
• n users, each using a different orthogonal PN sequence
• Modulate each user's data stream
– Using BPSK
• Multiply by the spreading code of the user
CDMA in a DSSS Environment:
******************** ALL THE BEST ******************