
ECPC24 Digital Communication

Unit IV - Channel Coding Techniques

Topic 1: Linear Block Codes

Semester VI (Section A)

Department of ECE
Introduction to Linear Block Codes

We assume that the output of an information source is a sequence of binary digits "0" or "1".
This binary information sequence is segmented into message blocks of fixed length, denoted by u.
Each message block consists of k information digits, so there are a total of 2^k distinct messages.
The encoder transforms each input message u into a binary n-tuple v with n > k.
This n-tuple v is referred to as the code word (or code vector) of the message u.
There are 2^k distinct code words.
This set of 2^k code words is called a block code.
For a block code to be useful, there should be a one-to-one correspondence between a message u and its code word v.
A desirable structure for a block code to possess is linearity; with this structure, the encoding complexity is greatly reduced.

          u                          v
    ------------->  | Encoder |  ------------->
    Message block                Code word
    (k info. digits)             (n-tuple, n > k)
    (2^k distinct messages)      (2^k distinct code words)
A block code of length n with 2^k code words is called a linear (n, k) code if and only if its 2^k code words form a k-dimensional subspace of the vector space of all n-tuples over the Galois field GF(2) (a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules).

In fact, a binary block code is linear if and only if the modulo-2 sum of any two code words is also a code word.

The all-zero n-tuple 0 must be a code word.

The block code given in Table 1 is a (7, 4) linear code.
Table 1: The (7, 4) linear block code

For large k, it is virtually impossible to build the lookup table.
Since an (n, k) linear code C is a k-dimensional subspace of the vector space Vn of all binary n-tuples, it is possible to find k linearly independent code words g0, g1, …, gk-1 in C such that every code word v is a linear combination of them:

    v = u0 g0 + u1 g1 + ··· + uk-1 gk-1        (1)

where ui = 0 or 1 for 0 ≤ i < k.

Let us arrange these k linearly independent code words as the rows of a k × n matrix as follows:

        ⎡ g0   ⎤   ⎡ g00      g01      g02      . . .  g0,n-1    ⎤
        ⎢ g1   ⎥   ⎢ g10      g11      g12      . . .  g1,n-1    ⎥
    G = ⎢  .   ⎥ = ⎢  .        .        .               .        ⎥   (2)
        ⎢  .   ⎥   ⎢  .        .        .               .        ⎥
        ⎣ gk-1 ⎦   ⎣ gk-1,0   gk-1,1   gk-1,2   . . .  gk-1,n-1  ⎦

where gi = (gi0, gi1, …, gi,n-1) for 0 ≤ i < k

If u = (u0, u1, …, uk-1) is the message to be encoded, the corresponding code word can be given as follows:

    v = u · G
                                ⎡ g0   ⎤
                                ⎢ g1   ⎥
      = (u0, u1, …, uk-1) ·     ⎢  .   ⎥        (3)
                                ⎢  .   ⎥
                                ⎣ gk-1 ⎦

      = u0 g0 + u1 g1 + ··· + uk-1 gk-1
Because the rows of G generate the (n, k) linear code C, the matrix G is called a generator matrix for C.

Note that any k linearly independent code words of an (n, k) linear code can be used to form a generator matrix for the code.

It follows from (3) that an (n, k) linear code is completely specified by the k rows of a generator matrix G.
Example 1
The (7, 4) linear code given in Table 1 has the following matrix as a generator matrix:

        ⎡ g0 ⎤   ⎡ 1 1 0 1 0 0 0 ⎤
    G = ⎢ g1 ⎥ = ⎢ 0 1 1 0 1 0 0 ⎥
        ⎢ g2 ⎥   ⎢ 1 1 1 0 0 1 0 ⎥
        ⎣ g3 ⎦   ⎣ 1 0 1 0 0 0 1 ⎦

If u = (1 1 0 1) is the message to be encoded, its corresponding code word, according to (3), would be

    v = 1 · g0 + 1 · g1 + 0 · g2 + 1 · g3
      = (1 1 0 1 0 0 0) + (0 1 1 0 1 0 0) + (1 0 1 0 0 0 1)
      = (0 0 0 1 1 0 1)
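The encoding step v = u · G of Example 1 can be sketched in a few lines of Python (a plain-integer sketch over GF(2); the function name `encode` is illustrative):

```python
# Generator matrix of the (7, 4) code from Example 1 (rows g0..g3)
G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]

def encode(u, G):
    """Compute v = u . G over GF(2): the modulo-2 sum of the rows g_i with u_i = 1."""
    v = [0] * len(G[0])
    for ui, row in zip(u, G):
        if ui:
            v = [vj ^ gj for vj, gj in zip(v, row)]
    return v
```

With u = (1 1 0 1) this reproduces the code word (0 0 0 1 1 0 1) computed above.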
A desirable property for a linear block code is the systematic structure of the code words, as shown in Fig. 1, where a code word is divided into two parts:
The message part consists of k information digits.
The redundant checking part consists of n − k parity-check digits.
A linear block code with this structure is referred to as a linear systematic block code.

    Redundant checking part | Message part
         n − k digits       |   k digits

Fig. 1 Systematic format of a code word
A linear systematic (n, k) code is completely specified by a k × n matrix G of the following form, a P matrix followed by a k × k identity matrix:

        ⎡ g0   ⎤   ⎡ p00      p01      . . .  p0,n-k-1    | 1 0 0 . . . 0 ⎤
        ⎢ g1   ⎥   ⎢ p10      p11      . . .  p1,n-k-1    | 0 1 0 . . . 0 ⎥
    G = ⎢ g2   ⎥ = ⎢ p20      p21      . . .  p2,n-k-1    | 0 0 1 . . . 0 ⎥   (4)
        ⎢  .   ⎥   ⎢  .        .              .           |       .       ⎥
        ⎣ gk-1 ⎦   ⎣ pk-1,0   pk-1,1   . . .  pk-1,n-k-1  | 0 0 0 . . . 1 ⎦

where pij = 0 or 1
Let u = (u0, u1, …, uk-1) be the message to be encoded. The corresponding code word is

    v = (v0, v1, v2, …, vn-1) = (u0, u1, …, uk-1) · G        (5)

It follows from (4) and (5) that the components of v are

    vn-k+i = ui                                for 0 ≤ i < k        (6a)  (message)
    vj = u0 p0j + u1 p1j + ··· + uk-1 pk-1,j   for 0 ≤ j < n − k    (6b)  (parity check)
Equation (6a) shows that the rightmost k digits of a code word v are identical to the information digits u0, u1, …, uk-1 to be encoded.

Equation (6b) shows that the leftmost n − k redundant digits are linear sums of the information digits.

The n − k equations given by (6b) are called the parity-check equations of the code.
Example 2
Consider the matrix G given in Example 1.
Let u = (u0, u1, u2, u3) be the message to be encoded.
Let v = (v0, v1, v2, v3, v4, v5, v6) be the corresponding code word.
Solution:

                                     ⎡ 1 1 0 1 0 0 0 ⎤
    v = u · G = (u0, u1, u2, u3) ·   ⎢ 0 1 1 0 1 0 0 ⎥
                                     ⎢ 1 1 1 0 0 1 0 ⎥
                                     ⎣ 1 0 1 0 0 0 1 ⎦
By matrix multiplication, we obtain the following digits of the code word v:

    v6 = u3
    v5 = u2
    v4 = u1
    v3 = u0
    v2 = u1 + u2 + u3
    v1 = u0 + u1 + u2
    v0 = u0 + u2 + u3

The code word corresponding to the message (1 0 1 1) is (1 0 0 1 0 1 1).

For any k × n matrix G with k linearly independent rows, there exists an (n − k) × n matrix H with n − k linearly independent rows such that any vector in the row space of G is orthogonal to the rows of H, and any vector that is orthogonal to the rows of H is in the row space of G.

An n-tuple v is a code word in the code generated by G if and only if v · HT = 0.
This matrix H is called a parity-check matrix of the code.
The 2^(n-k) linear combinations of the rows of matrix H form an (n, n − k) linear code Cd.
This code is the null space of the (n, k) linear code C generated by matrix G.
Cd is called the dual code of C.
If the generator matrix of an (n, k) linear code is in the systematic form of (4), the parity-check matrix may take the following form:

    H = [ In-k  PT ]

        ⎡ 1 0 0 . . . 0   p00        p10        . . .  pk-1,0     ⎤
        ⎢ 0 1 0 . . . 0   p01        p11        . . .  pk-1,1     ⎥
      = ⎢ 0 0 1 . . . 0   p02        p12        . . .  pk-1,2     ⎥   (7)
        ⎢       .                                                 ⎥
        ⎣ 0 0 0 . . . 1   p0,n-k-1   p1,n-k-1   . . .  pk-1,n-k-1 ⎦

Let hj be the jth row of H. Then

    gi · hj = pij + pij = 0    for 0 ≤ i < k and 0 ≤ j < n − k

This implies that G · HT = 0.
Let u = (u0, u1, …, uk-1) be the message to be encoded.
In systematic form the corresponding code word would be

    v = (v0, v1, …, vn-k-1, u0, u1, …, uk-1)

Using the fact that v · HT = 0, we obtain

    vj + u0 p0j + u1 p1j + ··· + uk-1 pk-1,j = 0        (8)

Rearranging the equations of (8), we obtain the same parity-check equations as (6b).
An (n, k) linear code is completely specified by its parity-check matrix.

Example 3
Consider the generator matrix of the (7, 4) linear code given in Example 1.
The corresponding parity-check matrix is

        ⎡ 1 0 0 1 0 1 1 ⎤
    H = ⎢ 0 1 0 1 1 1 0 ⎥
        ⎣ 0 0 1 0 1 1 1 ⎦
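The relation G · HT = 0 for this pair of matrices can be checked numerically; a sketch using the G of Example 1 and the H of Example 3:

```python
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def dot_gf2(a, b):
    """Inner product of two binary vectors over GF(2)."""
    return sum(x & y for x, y in zip(a, b)) % 2

# G . HT = 0 means every row of G is orthogonal to every row of H
orthogonal = all(dot_gf2(g, h) == 0 for g in G for h in H)
```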
Summary

For any (n, k) linear block code C, there exists a k × n matrix G whose row space gives C.

There exists an (n − k) × n matrix H such that an n-tuple v is a code word in C if and only if v · HT = 0.

If G is of the form given by (4), then H may take the form given by (7), and vice versa.
Based on equations (6a) and (6b), the encoding circuit for an (n, k) linear systematic code can be implemented easily.
The encoding circuit is shown in Fig. 2, where
a box denotes a shift-register stage (flip-flop),
pij denotes a connection if pij = 1 and no connection if pij = 0,
⊕ denotes a modulo-2 adder.
As soon as the entire message has entered the message register, the n − k parity-check digits are formed at the outputs of the n − k modulo-2 adders.

The complexity of the encoding circuit is linearly proportional to the block length.
The encoding circuit for the (7, 4) code given in Table 1 is shown in Fig. 3.
Figure 2: Encoding circuit for an (n, k) linear systematic code

Figure 3: Encoding circuit for the (7, 4) code given in Table 1


Syndrome and Error Detection

Let v = (v0, v1, …, vn-1) be a code word that was transmitted over a noisy channel.
Let r = (r0, r1, …, rn-1) be the received vector at the output of the channel. Then

    r = v + e                                        (9)

where e = r + v = (e0, e1, …, en-1) is an n-tuple with

    ei = 1 for ri ≠ vi
    ei = 0 for ri = vi

The n-tuple e is called the error vector (or error pattern).
Upon receiving r, the decoder must first determine whether r contains transmission errors.
If the presence of errors is detected, the decoder will take action to locate and correct the errors, or request a retransmission of v.
When r is received, the decoder computes the following (n − k)-tuple:

    s = r · HT = (s0, s1, …, sn-k-1)                 (10)

which is called the syndrome of r.
s = 0 if and only if r is a code word; the receiver accepts r as the transmitted code word.
s ≠ 0 if and only if r is not a code word; the presence of errors has been detected.
When the error pattern e is identical to a nonzero code word (i.e., r contains errors but s = r · HT = 0), the errors go undetected; error patterns of this kind are called undetectable error patterns.
Since there are 2^k − 1 nonzero code words, there are 2^k − 1 undetectable error patterns.
Based on (7) and (10), the syndrome digits are as follows:

    s0 = r0 + rn-k p00 + rn-k+1 p10 + ··· + rn-1 pk-1,0
    s1 = r1 + rn-k p01 + rn-k+1 p11 + ··· + rn-1 pk-1,1
    .                                                          (11)
    sn-k-1 = rn-k-1 + rn-k p0,n-k-1 + rn-k+1 p1,n-k-1 + ··· + rn-1 pk-1,n-k-1

The syndrome s is the vector sum of the received parity digits (r0, r1, …, rn-k-1) and the parity-check digits recomputed from the received information digits (rn-k, rn-k+1, …, rn-1).

A general syndrome circuit is shown in Fig. 4.
Figure 4: General syndrome circuit

Example 4
The parity-check matrix is given in Example 3.
Let r = (r0, r1, r2, r3, r4, r5, r6) be the received vector.
The syndrome is given by

    s = (s0, s1, s2) = (r0, r1, r2, r3, r4, r5, r6) · HT

where

         ⎡ 1 0 0 ⎤
         ⎢ 0 1 0 ⎥
         ⎢ 0 0 1 ⎥
    HT = ⎢ 1 1 0 ⎥
         ⎢ 0 1 1 ⎥
         ⎢ 1 1 1 ⎥
         ⎣ 1 0 1 ⎦
The syndrome digits are

    s0 = r0 + r3 + r5 + r6
    s1 = r1 + r3 + r4 + r5
    s2 = r2 + r4 + r5 + r6

Figure 5: The syndrome circuit for the (7, 4) code given in Table 1
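The three syndrome equations of Example 4 can be sketched directly in Python (the function name is illustrative):

```python
def syndrome(r):
    """Syndrome digits of the (7, 4) code, from the three equations above."""
    r0, r1, r2, r3, r4, r5, r6 = r
    s0 = r0 ^ r3 ^ r5 ^ r6
    s1 = r1 ^ r3 ^ r4 ^ r5
    s2 = r2 ^ r4 ^ r5 ^ r6
    return [s0, s1, s2]
```

A valid code word such as (1 0 0 1 0 1 1) yields s = (0 0 0); flipping any single bit yields a nonzero syndrome.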
Since r is the vector sum of v and e, it follows from (10) that

    s = r · HT = (v + e) · HT = v · HT + e · HT

However, v · HT = 0. Consequently, we obtain the following relation between the syndrome and the error pattern:

    s = e · HT                                       (12)
If the parity-check matrix H is expressed in the systematic form given by (7), multiplying out e · HT yields the following linear relationship between the syndrome digits and the error digits (n − k equations in n variables):

    s0 = e0 + en-k p00 + en-k+1 p10 + ··· + en-1 pk-1,0
    s1 = e1 + en-k p01 + en-k+1 p11 + ··· + en-1 pk-1,1
    .                                                          (13)
    sn-k-1 = en-k-1 + en-k p0,n-k-1 + ··· + en-1 pk-1,n-k-1
The syndrome digits are linear combinations of the error digits, and they can be used for error correction.
The n − k linear equations of (13) do not have a unique solution; they have 2^k solutions.
There are 2^k error patterns that result in the same syndrome, and the true error pattern e is one of them.
The decoder has to determine the true error vector from a set of 2^k candidates.
To minimize the probability of a decoding error, the most probable error pattern that satisfies the equations of (13) is chosen as the true error vector.
Example 5
We consider the (7, 4) code whose parity-check matrix is given in Example 3.
Let v = (1 0 0 1 0 1 1) be the transmitted code word and r = (1 0 0 1 0 0 1) the received vector.
The receiver computes the syndrome

    s = r · HT = (1 1 1)

The receiver then attempts to determine the true error vector e = (e0, e1, e2, e3, e4, e5, e6), which yields the syndrome above:

    1 = e0 + e3 + e5 + e6
    1 = e1 + e3 + e4 + e5
    1 = e2 + e4 + e5 + e6

There are 2^4 = 16 error patterns that satisfy the equations above.
The error vector e = (0 0 0 0 0 1 0) has the smallest number of nonzero components.
If the channel is a binary symmetric channel (BSC), e = (0 0 0 0 0 1 0) is the most probable error vector that satisfies the equations above.
Taking e = (0 0 0 0 0 1 0) as the true error vector, the receiver decodes the received vector r = (1 0 0 1 0 0 1) into the following code word:

    v* = r + e = (1 0 0 1 0 0 1) + (0 0 0 0 0 1 0) = (1 0 0 1 0 1 1)

Here v* is indeed the actual transmitted code word.
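The decoding rule of Example 5 (choose the minimum-weight error pattern whose syndrome matches the observed one) can be brute-forced over all 2^7 candidate patterns for a code this small; a sketch:

```python
from itertools import product

H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r):
    """s = r . HT over GF(2)."""
    return [sum(x & y for x, y in zip(r, h)) % 2 for h in H]

def decode(r):
    """Return r + e for the minimum-weight e whose syndrome matches that of r."""
    s = syndrome(r)
    e = min((c for c in product([0, 1], repeat=7) if syndrome(c) == s), key=sum)
    return [ri ^ ei for ri, ei in zip(r, e)]
```

For r = (1 0 0 1 0 0 1) the minimum-weight pattern is e = (0 0 0 0 0 1 0), recovering v = (1 0 0 1 0 1 1) as in the example.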
The Minimum Distance of a Block Code

Let v = (v0, v1, …, vn-1) be a binary n-tuple. The Hamming weight (or simply weight) of v, denoted by w(v), is defined as the number of nonzero components of v.
For example, the Hamming weight of v = (1 0 0 0 1 1 0) is 3.

Let v and w be two n-tuples. The Hamming distance between v and w, denoted d(v, w), is defined as the number of places in which they differ.
For example, the Hamming distance between v = (1 0 0 1 0 1 1) and w = (0 1 0 0 0 1 1) is 3.

Given a block code C, the minimum Hamming distance of C, denoted dmin, is defined as

    dmin = min{ d(v, w) : v, w ∈ C, v ≠ w }
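For a code as small as the (7, 4) code of Table 1, dmin can be found by enumerating all 2^4 = 16 code words; a sketch using the generator matrix of Example 1:

```python
from itertools import combinations, product

G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(u):
    """v = u . G over GF(2)."""
    v = [0] * 7
    for ui, row in zip(u, G):
        if ui:
            v = [a ^ b for a, b in zip(v, row)]
    return v

def hamming(v, w):
    """Number of places in which v and w differ."""
    return sum(a != b for a, b in zip(v, w))

codebook = [encode(u) for u in product([0, 1], repeat=4)]
d_min = min(hamming(v, w) for v, w in combinations(codebook, 2))
```

For this code the enumeration gives d_min = 3 (for a linear code this equals the minimum weight of the nonzero code words).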
Error-Detecting and Error-Correcting Capabilities of a Block Code

If the minimum distance of a block code C is dmin, any two distinct code vectors of C differ in at least dmin places.
A block code with minimum distance dmin is capable of detecting all error patterns of dmin − 1 or fewer errors.
However, it cannot detect all error patterns of dmin errors, because there exists at least one pair of code vectors that differ in dmin places, and there is an error pattern of dmin errors that will carry one into the other.
The random-error-detecting capability of a block code with minimum distance dmin is dmin − 1.
A block code with minimum distance dmin guarantees correction of all error patterns of t = ⌊(dmin − 1)/2⌋ or fewer errors, where ⌊(dmin − 1)/2⌋ denotes the largest integer no greater than (dmin − 1)/2.
The parameter t = ⌊(dmin − 1)/2⌋ is called the random-error-correcting capability of the code.

Refs:
1. "Communication Systems" by Simon Haykin
2. "Digital Communication" by John G. Proakis
3. Online resources
Topic 2: Cyclic Codes
Cyclic Codes

Generator Polynomial: equations (1)-(3) (designed using the generator polynomial)

Assignment:
1. Example 10.2 (page 639)
2. Example 10.3 (page 648)
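Since the equations of this topic did not survive in the notes, here is a hedged sketch of systematic cyclic encoding by polynomial division, using the standard (7, 4) cyclic code generator g(X) = 1 + X + X³ as an assumed example (the actual generator used on the slides is not recoverable):

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; coefficient lists, lowest degree first."""
    r = list(dividend)
    d = len(divisor) - 1                        # degree of the divisor
    for i in range(len(r) - 1, d - 1, -1):
        if r[i]:                                # cancel the leading term
            for j, c in enumerate(divisor):
                r[i - d + j] ^= c
    return r[:d]

def cyclic_encode(u, g, n):
    """Systematic cyclic encoding: v(X) = b(X) + X^(n-k) u(X), with
    b(X) = X^(n-k) u(X) mod g(X) the parity polynomial."""
    k = len(u)
    b = poly_mod([0] * (n - k) + list(u), g)    # parity digits
    return b + list(u)
```

For example, with g = [1, 1, 0, 1] (i.e. 1 + X + X³) and u = (1 0 0 1), the parity digits come out as (0 1 1), giving the code word (0 1 1 1 0 0 1).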
Convolutional Codes

After reading this lesson, you will learn about:
- Basic concepts of convolutional codes
- State diagram representation
- Tree diagram representation
- Trellis diagram representation
- Catastrophic convolutional codes
- Hard-decision Viterbi algorithm
- Soft-decision Viterbi algorithm

Convolutional codes are commonly described using two parameters: the code rate
and the constraint length. The code rate, k/n, is expressed as a ratio of the number of bits
into the convolutional encoder (k) to the number of channel symbols output by the
convolutional encoder (n) in a given encoder cycle. The constraint length parameter, K,
denotes the "length" of the convolutional encoder, i.e. how many k-bit stages are
available to feed the combinatorial logic that produces the output symbols. Closely
related to K is the parameter m, which indicates how many encoder cycles an input bit is
retained and used for encoding after it first appears at the input to the convolutional
encoder. The m parameter can be thought of as the memory length of the encoder.

Convolutional codes are widely used as channel codes in practical communication systems for error correction. The encoded bits depend on the current k input bits and a few past input bits. The main decoding strategy for convolutional codes is based on the widely used Viterbi algorithm. As a result of the wide acceptance of convolutional codes, there have been several approaches to modify and extend this basic coding scheme. Trellis coded modulation (TCM) and turbo codes are two such examples. In TCM, redundancy is added by combining coding and modulation into a single operation; this is achieved without any reduction in data rate or expansion in bandwidth, as would be required by an error-correcting coding scheme alone.

A simple convolutional encoder is shown in Fig. 6.35.1. The information bits are fed in small groups of k bits at a time to a shift register. The output encoded bits are obtained by modulo-2 addition (EXCLUSIVE-OR operation) of the input information bits and the contents of the shift registers, which hold a few previous information bits.

Fig. 6.35.1 A convolutional encoder with k = 1, n = 2 and r = 1/2

If the encoder generates a group of n encoded bits per group of k information bits, the code rate R is commonly defined as R = k/n. In Fig. 6.35.1, k = 1 and n = 2. The number K of elements in the shift register, which decides for how many codewords one information bit affects the encoder output, is known as the constraint length of the code. For the present example, K = 3.

The shift register of the encoder is initialized to the all-zero state before the encoding operation starts. It is easy to verify that the encoded sequence is 00 11 10 00 01 … for an input message sequence of 01011….
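The encoder of Fig. 6.35.1 appears to be the standard rate-1/2, K = 3 encoder with generator taps (1 1 1) and (1 0 1) — an assumption, since the figure did not survive extraction cleanly, but one that reproduces the encoded sequence quoted above. A sketch:

```python
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, K = 3 convolutional encoder; register starts in the all-zero state."""
    s1 = s2 = 0                              # the two memory elements
    out = []
    for b in bits:
        reg = (b, s1, s2)                    # current input plus register contents
        for g in taps:                       # one output bit per generator
            out.append(sum(t & r for t, r in zip(g, reg)) % 2)
        s1, s2 = b, s1                       # shift
    return out
```

Feeding in 0 1 0 1 1 yields the codeword pairs 00 11 10 00 01, matching the text.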

The operation of a convolutional encoder can be explained in several equivalent ways: by a) state diagram representation, b) tree diagram representation and c) trellis diagram representation.

a) State Diagram Representation

A convolutional encoder may be defined as a finite state machine. The contents of the rightmost (K − 1) shift register stages define the state of the encoder. So, the encoder in Fig. 6.35.1 has four states. The transition of the encoder from one state to another, as caused by input bits, is depicted in the state diagram. Fig. 6.35.2 shows the state diagram of the encoder in Fig. 6.35.1. A new input bit causes a transition from one state to another. The path information between the states, denoted as b/c1c2, represents the input information bit b and the corresponding output bits (c1c2). Again, it is not difficult to verify from the state diagram that an input information sequence b = (1011) generates the encoded sequence c = (11, 10, 00, 01).

Fig. 6.35.2 State diagram representation for the encoder in Fig. 6.35.1

b) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. Fig. 6.35.3 shows the tree diagram for the encoder in Fig. 6.35.1. The encoded bits are labeled on the branches of the tree. Given an input sequence, the encoded sequence can be read directly from the tree. As an example, an input sequence (1011) results in the encoded sequence (11, 10, 00, 01).

Fig. 6.35.3 A tree diagram for the encoder in Fig. 6.35.1

c) Trellis Diagram Representation

The trellis diagram of a convolutional code is obtained from its state diagram. All state transitions at each time step are explicitly shown in the diagram to retain the time dimension, as is present in the corresponding tree diagram. Usually, supporting descriptions of the state transitions, corresponding input and output bits, etc. are labeled in the trellis diagram. It is interesting to note that the trellis diagram, which describes the operation of the encoder, is very convenient for describing the behavior of the corresponding decoder, especially when the famous Viterbi Algorithm (VA) is followed. Fig. 6.35.4 shows the trellis diagram for the encoder in Fig. 6.35.1.

Fig. 6.35.4(a) Trellis diagram for the encoder in Fig. 6.35.1

Input data sequence m:     1  1  0  1  1  …
Transmitted codeword U:    11 01 01 00 01 …
Received sequence Z:       11 01 01 10 01 …

Fig. 6.35.4(b) Trellis diagram used in the decoder corresponding to the encoder in Fig. 6.35.1
Catastrophic Convolutional Code

The taps of the shift registers used in a convolutional encoder are to be chosen carefully so that the code can effectively correct errors in the received data stream. One measure of the error correction capability of a convolutional code is its minimum free distance, dfree, which indicates the minimum weight (counted in number of '1'-s) of a path that branches out from the all-zero path of the code trellis and again merges with the all-zero path. For example, the code in Fig. 6.35.1 has dfree = 5. Most convolutional codes of present interest are good for correcting random errors rather than error bursts. If we assume that a sufficient number of bits in a received bit sequence are error free and then a few bits are erroneous randomly, the decoder is likely to correct these errors. It is expected that (following a hard-decision decoding approach, explained later) the decoder will correct up to (dfree − 1)/2 errors in case of such events. So, the taps of a convolutional code should be chosen to maximize dfree. There is no unique method of finding convolutional codes of arbitrary rate and constraint length that ensure maximum dfree. However, comprehensive descriptions of taps for 'good' convolutional codes of practical interest have been prepared through extensive computer search techniques and otherwise.

While choosing a convolutional code, one should also avoid a 'catastrophic convolutional code'. Such codes can be identified from the state diagram. The state diagram of a catastrophic convolutional code includes at least one loop in which a nonzero information sequence corresponds to an all-zero output sequence. The state diagram of a catastrophic convolutional code is shown in Fig. 6.35.5.

Fig. 6.35.5 Example of a catastrophic code

Hard-Decision and Soft-Decision Decoding


Hard-decision and soft-decision decoding are based on the type of quantization
used on the received bits. Hard-decision decoding uses 1-bit quantization on the received
samples. Soft-decision decoding uses multi-bit quantization (e.g. 3 bits/sample) on the
received sample values.
Hard-Decision Viterbi Algorithm

The Viterbi Algorithm (VA) finds a maximum likelihood (ML) estimate of a transmitted code sequence c from the corresponding received sequence r by maximizing the probability p(r|c) that sequence r is received conditioned on the estimated code sequence c. Sequence c must be a valid coded sequence.

The Viterbi algorithm utilizes the trellis diagram to compute the path metrics. The channel is assumed to be memoryless, i.e. the noise sample affecting a received bit is independent of the noise samples affecting the other bits. The decoding operation starts from state '00', i.e. with the assumption that the initial state of the encoder is '00'. With receipt of one noisy codeword, the decoding operation progresses by one step deeper into the trellis diagram. The branches associated with a state of the trellis tell us about the corresponding codewords that the encoder may generate starting from that state. Hence, upon receipt of a codeword, it is possible to note the 'branch metric' of each branch by determining the Hamming distance of the received codeword from the valid codeword associated with that branch. The branch metrics of all branches, associated with all the states, are calculated similarly.

Now, at each depth of the trellis, each state also carries some ‘accumulated path
metric’, which is the addition of metrics of all branches that construct the ‘most likely
path’ to that state. As an example, the trellis diagram of the code shown in Fig. 6.35.1,
has four states and each state has two incoming and two outgoing branches. At any depth
of the trellis, each state can be reached through two paths from the previous stage and as
per the VA, the path with lower accumulated path metric is chosen. In the process, the
‘accumulated path metric’ is updated by adding the metric of the incoming branch with
the ‘accumulated path metric’ of the state from where the branch originated. No decision
about a received codeword is taken from such operations and the decoding decision is
deliberately delayed to reduce the possibility of erroneous decision.

The basic operations carried out as per the hard-decision Viterbi Algorithm after receiving one codeword are summarized below:
a) All the branch metrics of all the states are determined;

b) Accumulated metrics of all the paths (two in our example code) leading to a state are calculated, taking into consideration the 'accumulated path metrics' of the states from where the most recent branches emerged;

c) Only the one path entering a state with the minimum 'accumulated path metric' is chosen as the 'survivor path' for the state (or, equivalently, 'node');

d) So, at the end of this process, each state has one 'survivor path'. The 'history' of a survivor path is also maintained by the node appropriately (e.g. by storing the codewords or the information bits which are associated with the branches making up the path);

e) Steps a) to d) are repeated and the decoding decision is delayed until a sufficient number of codewords has been received. Typically, the delay in decision making = L × K codewords, where L is an integer, e.g. 5 or 6. For the code in Fig. 6.35.1, a decision delay of 5 × 3 = 15 codewords may be sufficient for most occasions. This means we decide about the first received codeword after receiving the 16th codeword. The decision strategy is simple. Upon receiving the 16th codeword and carrying out steps a) to d), we compare the 'accumulated path metrics' of all the states (four in our example) and choose the state with the minimum overall 'accumulated path metric' as the 'winning node' for the first codeword. Then we trace back the history of the path associated with this winning node to identify the codeword tagged to the first branch of the path and declare this codeword as the most likely transmitted first codeword.

The above procedure is repeated for each received codeword thereafter. Thus, the decision for a codeword is delayed, but once the decision process starts, we decide once for every received codeword. For most practical applications, including delay-sensitive digital speech coding and transmission, a decision delay of L × K codewords is acceptable.
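The steps above can be sketched as a hard-decision Viterbi decoder for a rate-1/2, K = 3 code. The taps (1 1 1), (1 0 1) are an assumption consistent with the example encoder, and for brevity the traceback is done once at the end rather than with the sliding L × K decision delay described above:

```python
def viterbi_decode(received, taps=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding; `received` is a list of 2-bit codewords."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]           # (s1, s2) register contents
    # start in state 00: only it has a finite accumulated path metric
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    history = {s: [] for s in states}                   # survivor-path input bits
    for z in received:
        new_metric = {s: float("inf") for s in states}
        new_history = {s: [] for s in states}
        for (s1, s2) in states:
            for b in (0, 1):                            # candidate input bit
                reg = (b, s1, s2)
                branch = tuple(sum(t & r for t, r in zip(g, reg)) % 2 for g in taps)
                bm = sum(x != y for x, y in zip(branch, z))   # Hamming branch metric
                m = metric[(s1, s2)] + bm               # accumulated path metric
                nxt = (b, s1)
                if m < new_metric[nxt]:                 # keep only the survivor path
                    new_metric[nxt] = m
                    new_history[nxt] = history[(s1, s2)] + [b]
        metric, history = new_metric, new_history
    best = min(states, key=lambda s: metric[s])         # winning node
    return history[best]
```

Applied to the received sequence Z = 11 01 01 10 01 of Fig. 6.35.4(b), which contains one bit error, this recovers the input data sequence m = 1 1 0 1 1.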

Soft-Decision Viterbi Algorithm

In soft-decision decoding, the demodulator does not assign a '0' or a '1' to each received bit but uses multi-bit quantized values. The soft-decision Viterbi algorithm is very similar to the hard-decision algorithm except that the squared Euclidean distance is used in the branch metrics instead of the simpler Hamming distance. However, the performance of a soft-decision VA is much more impressive compared to its HDD (Hard-Decision Decoding) counterpart [Fig. 6.35.6 (a) and (b)]. The computational requirement of a Viterbi decoder grows exponentially as a function of the constraint length, and hence it is usually limited in practice to constraint lengths of K = 9.

Fig. 6.35.6 (a) Decoded BER vs input BER for rate-half convolutional codes with the Viterbi Algorithm: 1) K = 3 (HDD), 2) K = 5 (HDD), 3) K = 3 (SDD), and 4) K = 5 (SDD). HDD: Hard-Decision Decoding; SDD: Soft-Decision Decoding.

Fig. 6.35.6 (b) Decoded BER vs Eb/No (in dB) for rate-half convolutional codes with the Viterbi Algorithm: 1) Uncoded system; 2) K = 3 (HDD) and 3) K = 3 (SDD).
Coded Modulation Schemes

After reading this lesson, you will learn about:
- Trellis coded modulation (TCM)
- Set partitioning in TCM
- Decoding TCM

The modulated waveform in a conventional uncoded carrier modulation scheme contains all information about the modulating message signal. As we have discussed earlier in Module #5, the modulating signal uses the quadrature carrier sinusoids in PSK and QAM modulations. The modulator accepts the message signal in discrete-time, discrete-amplitude form (binary or multilevel) and processes it following the chosen modulation scheme to satisfy the transmission and reception requirements. The modulating signal is generally treated as a random signal and the modulating symbols are viewed as statistically independent and having equal probability of occurrence. This traditional approach has several merits, such as (i) simplified system design and analysis because of the modular approach, (ii) design of modulation and transmission schemes which are independent of the behavior of the signal source, and (iii) availability of well-developed theory and practice for the design of receivers which are optimum (or near optimum).

To put it simply, the modular approach to system design allows one to design a modulation scheme almost independently of the preceding error control encoder, and the design of an encoder largely independently of the modulation format. Hence, the end-to-end system performance is made up of the gains contributed by the encoder, modulator and other modules separately.

However, it is interesting to note that the demarcation between a coder and a


modulator is indeed artificial and one may very well imagine a combined coded
modulation scheme. The traditional approach is biased more towards optimization of an
encoder independent of the modulation format and on optimization of a modulation
scheme independent of the coding strategy. In fact, such approach of optimization at the
subsystem level may not ensure an end-to-end optimized system design.

A systems approach towards a combined coding and modulation scheme is
meaningful in order to obtain better system performance. Significant progress has taken
place over the last two decades on the concepts of combined coding and modulation (also
referred to as coded modulation schemes). Several schemes have been suggested in the
literature, and advanced modems have been successfully developed exploiting the new
concepts. Amongst the several strategies that have been popular, trellis coded
modulation (TCM) is a prominent one. A trellis coded modulation scheme improves the
reliability of a digital transmission system without bandwidth expansion or reduction of
data rate when compared to an uncoded transmission scheme using the same transmission
bandwidth.

We discuss the basic features of TCM in this section after introducing some
concepts, such as distance measure, common to all coded modulation schemes. A TCM
scheme uses the concept of tree or trellis coding, hence the name 'TCM'. However,
some interesting combined coding and modulation schemes have been suggested recently
following the concepts of block coding as well.

Distance Measure
Let us consider a 2-dimensional signal space of Fig.6.36.1 showing a set of signal
points. The two dimensions are defined by two orthonormal basis functions
corresponding to information symbols, which are to be modulated. The dimensions and
hence the x- and y-axes may also be interpreted as ‘real’ and ‘imaginary’ axes when
complex low pass equivalent representation of narrowband modulated signals is
considered. The points in the constellation are distinguishable from one another as their
locations are separate. Several possible subsets have been indicated in Fig.6.36.1
capturing multiple modulation formats.

[Figure: four candidate subsets of signal constellations are indicated.]

Fig.6.36.1 Some two-dimensional signal constellations (M = 2^m, m = 1, …, 8) of TCM codes.
Let there be ‘N’ valid signal points, denoted by (xi + jyi) , 1≤ i ≤ N, in the two
dimensional signal space such that each point signifies an information bearing symbol (or
signal) different from one another. Now, the Euclidean distance dij between any two
signal points, say (xi, yi) and (xj, yj), in this 2-D Cartesian signal space may be expressed
as dij^2 = (xi − xj)^2 + (yi − yj)^2. Note that, for detecting a received symbol at the demodulator in
presence of noise etc., it is important to maximize the minimum distance (say, dmin)
between any two adjacent signal points in the space. This implies that the N signal points
should be well distributed in the signal space such that the dmin amongst them is the
largest. This strategy ensures good performance of a demodulation scheme especially
when all the symbols are equally likely to occur.
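The squared-distance expression above is easy to check exhaustively for a small signal set. A minimal sketch, assuming a unit-energy QPSK constellation as the example (the points themselves are an illustrative choice, not from the figure):

```python
import itertools
import math

# Example signal set: unit-circle QPSK points as complex numbers
# (an assumed constellation, used only to illustrate the computation).
points = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]

def d_min(points):
    """Minimum Euclidean distance over all pairs of signal points."""
    return min(abs(p - q) for p, q in itertools.combinations(points, 2))

# For QPSK on the unit circle, d_min works out to sqrt(2).
```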

Suppose we wish to transmit data from a source emitting two information bits
every T seconds. One can design a system in several ways to accomplish the task such as
the following:
(i) use uncoded QPSK modulation, with one signal carrying two information bits
transmitted every T seconds.

(ii) use a convolutional code of rate r = 2/3 and same QPSK modulation. Each QPSK
symbol now carries 4/3 information bits and hence, the symbol duration should be
reduced to 2T/3 seconds. This implies that the required transmission bandwidth is 50%
more compared to the scheme in (i).

(iii) use a convolutional code of rate r = 2/3 and an 8-Phase Shift Keying (8PSK) modulation
scheme to ensure a symbol duration of T sec. Each symbol, now consisting of 3 bits,
carries two bits of information and no expansion in transmission bandwidth is necessary.
This is the basic concept of TCM.

Now, an M-ary PSK modulation scheme is known to become increasingly power
inefficient for larger values of M. That is, to ensure an average BER of, say, 10^-5, an 8-PSK
modulation scheme needs more Eb/No than QPSK, and a 16-PSK scheme needs
even more Eb/No. So, one may apprehend that the scheme in (iii) may be power
inefficient, but actually this is not true, as the associated convolutional code ensures a
considerable improvement in the symbol detection process. It has been found that an
impressive overall coding gain of 3 to 6 dB may be achieved at an average BER
of 10^-5. So, the net result of this approach of combined coding and modulation is some
coding gain at no extra bandwidth. The complexity of such a scheme is comparable to
that of a scheme employing coding (with soft-decision decoding) and demodulation
schemes separately.

TCM is extensively used in high bit rate modems using telephone cables. The additional
coding gain due to trellis-coded modulation has made it possible to increase the speed of
the transmission.
Set Partitioning
The central feature of TCM is based on the concept of signal-set partitioning that
creates scope of redundancy for coding in the signal space. The minimum Euclidean
distance (dmin) of a TCM scheme is maximized through set partitioning.

The concept of set partitioning is shown in Fig. 6.36.2 for a 16-QAM signal
constellation. The constellation consists of 16 signal points where each point is
represented by four information bits. The signal set is successively divided into smaller
sets with higher values of minimum intra-set distance. The smallest signal constellations
finally obtained are labeled as D0, D1, … , D7 in Fig. 6.36.2. The following basic rules
are followed for set-partitioning:
Rule #1: Members of the same partition are assigned to parallel transitions.
Rule #2: Members of the next larger partition are assigned to adjacent transitions, i.e.
transitions stemming from, or merging in the same node.
Assumption: All the signal points are equally likely to occur.

[Figure: the partition tree. Bit Z0 splits the full set A into B0 and B1; bit Z1 splits each of
these into C0, C1, C2 and C3; bit Z2 splits those into the final subsets D0, D1, …, D7.]

Fig. 6.36.2 Set partitioning of the 16QAM constellation
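The growth of intra-set distance under partitioning can be checked numerically. The sketch below uses 8-PSK (the constellation targeted by the encoder of Fig. 6.36.4) rather than the 16-QAM of Fig. 6.36.2, purely because its splits are shorter to write; each level keeps alternate points and enlarges the minimum intra-set distance:

```python
import itertools
import math

# Unit-circle 8-PSK points (assumed unit-energy constellation).
points = [complex(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]

def intra_min(pts):
    """Minimum Euclidean distance within one subset."""
    return min(abs(p - q) for p, q in itertools.combinations(pts, 2))

level_a = points        # full set:    d0 = 2 sin(pi/8), about 0.765
level_b = points[::2]   # QPSK subset: d1 = sqrt(2)
level_c = points[::4]   # BPSK subset: d2 = 2
```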

A TCM encoder consists of a convolutional encoder cascaded with a signal mapper.
Fig. 6.36.3 shows the general structure of a TCM encoder. As shown in the figure, a
group of m bits is considered at a time. A rate n/(n+1) convolutional encoder codes n
information bits into a codeword of (n+1) bits, while the remaining (m−n) bits are not
encoded. The resulting group of (n+1) + (m−n) = m+1 bits is used to select one of the
2^(m+1) points in the signal space following the technique of set-partitioning.
[Figure: of the m input bits per interval, n bits enter the rate n/(n+1) convolutional
encoder, whose n+1 coded bits select one of the subsets; the m−n uncoded bits select a
signal point within the chosen subset in the signal-mapping block.]
Fig. 6.36.3 General structure of TCM encoder

The convolutional encoder may be one of several possible types, such as a linear
non-systematic feedforward type or a feedback type. Fig. 6.36.4 shows an encoder of
rate r = 2/3, suitable for use with an 8-point signal constellation.

[Figure: two input bits per interval and a shift register with modulo-2 adders produce the
three coded bits Zn2, Zn1, Zn0 that address the 8-PSK signal mapper.]
Fig. 6.36.4 Structure of a TCM encoder based on r=2/3 convolutional coding and 8-PSK
modulation format

Decoding TCM
The concept of code trellis, as introduced in Lesson #35, is expanded to describe
the operation of a TCM decoder. Distance properties of a TCM scheme can be studied
through its trellis diagram in the same way as for convolutional codes. The TCM
decoder-demodulator searches the most likely path through the trellis from the received
sequence of noisy signal points. Because of noise, the chosen path may not always
coincide with the correct path. The Viterbi algorithm is also used in the TCM decoder.
Note that there is a one-to-one correspondence between signal sequences and the paths in
a trellis. Hence, the maximum-likelihood (ML) decision on a sequence of received
symbols consists of searching for the trellis path with the minimum Euclidean distance to
the received sequence. The resultant trellis for TCM looks somewhat different from
the trellis of a convolutional code alone: there will be multiple parallel branches
between two nodes, all of which correspond to the same bit pattern in the first (n+1)
bits obtained from the convolutional encoder.
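The ML search described above can be sketched as a Viterbi recursion whose branch metric is the squared Euclidean distance to each received point. The two-state trellis and its branch symbols below are a made-up toy example, not Ungerboeck's actual code:

```python
# Toy 2-state trellis: state -> list of (next_state, branch_symbol).
# The complex branch symbols are hypothetical, chosen only for illustration.
TRELLIS = {
    0: [(0, 1 + 0j), (1, -1 + 0j)],
    1: [(0, 0 + 1j), (1, 0 - 1j)],
}

def viterbi_euclidean(received, start_state=0):
    """Return the branch choices of the trellis path with the smallest
    total squared Euclidean distance to the received sequence."""
    survivors = {start_state: (0.0, [])}          # state -> (metric, path)
    for r in received:
        nxt_survivors = {}
        for state, (metric, path) in survivors.items():
            for branch, (nxt, sym) in enumerate(TRELLIS[state]):
                m = metric + abs(r - sym) ** 2    # Euclidean branch metric
                if nxt not in nxt_survivors or m < nxt_survivors[nxt][0]:
                    nxt_survivors[nxt] = (m, path + [branch])
        survivors = nxt_survivors
    return min(survivors.values())[1]             # best surviving path
```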
Code Acquisition
After reading this lesson, you will learn about
• Basics of Code Acquisition Schemes;
• Classification of Code Acquisition Schemes;

Code synchronization is the process of achieving and maintaining proper
alignment between the reference code in a spread spectrum receiver and the spreading
sequence that has been used in the transmitter to spread the information bits. Usually,
code synchronization is achieved in two stages: a) code acquisition and b) code tracking.
Acquisition is the process of initially attaining coarse alignment (typically within ± half
of the chip duration), while tracking ensures that fine alignment within a chip duration is
maintained. In this lesson, we primarily discuss some concepts of code acquisition.

Code Acquisition Schemes

Acquisition is basically a process of searching through an uncertainty region,
which may be one-dimensional (e.g. in time alone) or two-dimensional (in time and
frequency, if there is drift in carrier frequency due to the Doppler effect) until the
correct code phase is found. The uncertainty region is divided into a number of cells. In
the one-dimensional case, one cell may be as small as a fraction of a PN chip interval.

The acquisition process has to be reliable and the average time taken to acquire
the proper code phase also should be small. There may be other operational requirements
depending on the application. For example, some systems may have a specified limit in
terms of the time interval ‘T’ within which acquisition should be complete. For such
systems, an important parameter that may be maximized by the system designer is
Prob(t_acq ≤ T). However, for other systems, it may be sufficient to minimize the mean
acquisition time E[t_acq].

A basic operation, often carried out during the process of code acquisition is the
correlation of the incoming spread signal with a locally generated version of the
spreading code sequence (obtained by multiplying the incoming and local codes and
accumulating the result) and comparing the correlator output with a set threshold to
decide whether the codes are in phase or not. The correlator output is usually less than the
peak autocorrelation of the spreading code due to a) noise and interference in the received
signal or b) time or phase misalignment or c) any implementation related imperfections.
If the threshold is set too high (equal or close to the autocorrelation peak), the correct
phase may be missed, i.e. the probability of missed detection (P_M) is high. On the other hand,
if the threshold is set too low, acquisition may be declared at an incorrect phase (or time
instant), resulting in a high probability of false acquisition (P_FA). A tradeoff between the
two is thus desired.

There are several acquisition schemes well researched and reported in technical
publications. The majority of code acquisition schemes can be broadly classified
according to a) the detector structure used in the receiver (Fig. 7.39.1) and b) the search
strategy used (Fig. 7.39.2).

Detector Structure
• coherent or non-coherent
• Bayes, Neyman-Pearson, or other decision criteria
• active, or passive (matched filter)
• fixed dwell time (single or multiple dwell) or variable dwell time (sequential detection)

Fig.7.39.1 Classification of detector structures used for code acquisition on DS-SS systems
Search Strategy
• maximum-likelihood (parallel)
• serial search, further classified by the permitted acquisition time (limited or
non-limited), the detector structure, the handling of false acquisition (returning or
absorbing state, with random or fixed penalty time), and the sweep strategy over the
uncertainty region (discrete step with uniform or non-uniform dwell time, or continued
sweep; Z-type or expanding window; broken or continuous)
• sequential estimation

Fig. 7.39.2 Classification of acquisition schemes based on search strategy

The detector structure may be coherent or non-coherent in nature. If the carrier
frequency and phase are not precisely known, a non-coherent structure of the receiver is
preferred. Fig 7.39.3 shows a coarse code acquisition scheme for a non-coherent detector.
[Figure: the received signal r(t) is multiplied in two quadrature branches by √2 cos(wct)
and −√2 sin(wct), correlated with the local code, squared, summed, and compared
against a threshold to decide between hypotheses H1 (codes aligned) and H0 (not aligned).]
Fig. 7.39.3 Structure of a non-coherent detector

A code acquisition scheme may also be active or passive. In a receiver employing
active code acquisition (sometimes called code detection), portions of the incoming and
the local codes (a specific phase-shifted version) are multiplied bit by bit and the product
is accumulated over a reasonable interval before comparison is made against a decision
threshold. If the accumulated value does not exceed the threshold, the process is repeated
with new samples of the incoming code and for another offset version of the local code.

A passive detector, on the other hand, is much like a matched filter with the
provision to store the incoming code samples in a shift register. With every incoming
chip, a decision is made (high decision rate) based on a correlation interval equal to the
length of the matched filter (MF) correlator. A disadvantage of this approach is the need
for more hardware when large correlation intervals are necessary.
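The correlate-and-threshold step of the active search just described can be sketched as follows, assuming noiseless ±1 chips and a 7-chip m-sequence whose off-peak cyclic autocorrelation is −1/7 (the code, delay, and threshold values are illustrative assumptions):

```python
# 7-chip m-sequence in +/-1 form (assumed example spreading code).
CODE = [1, 1, 1, -1, 1, -1, -1]

def acquire(received, local_code, threshold=0.9):
    """Serial search: slide the local code one chip per cell and declare
    acquisition when the normalized correlation exceeds the threshold."""
    n = len(local_code)
    for offset in range(n):
        replica = local_code[offset:] + local_code[:offset]
        corr = sum(r * c for r, c in zip(received, replica)) / n
        if corr > threshold:
            return offset        # coarse code phase found (in chips)
    return None                  # no cell exceeded the threshold

# Incoming code delayed by 3 chips relative to the local generator.
rx = CODE[3:] + CODE[:3]
```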

Code acquisition schemes are also classified based on the criterion used for
deciding the threshold. For example, a Bayes’ code detector minimizes the average
probability of missed detection while a Neyman Pearson detector minimizes the
probability of missed detection for a particular value of PFA .

The examination or observation interval is known as 'dwell time', and several
code acquisition schemes (and the corresponding detectors) are named after the dwell
time that they use (Fig. 7.39.1). Detectors working with fixed dwell times may or may not
employ a verification stage (multiple dwell times). A verification process enhances the
reliability of decision and hence is incorporated in many practical systems.
Classification based on search strategy
As noted earlier, the acquisition schemes are also classified on the basis of the
search strategy in the region of uncertainty (Fig 7.39.2). Following the maximum
likelihood estimation technique, the incoming signal is simultaneously (in parallel)
correlated with all possible time-shifted versions of the local code and the local code
phase that yields the highest output is declared as the phase of the incoming code
sequence. This method requires a large number of correlators. However, the strategy
may also be implemented (somewhat approximately) in a serial manner, by correlating
the incoming code with each phase-shifted version of the local code, and taking a
decision only after the entire code length is scanned. While one correlator is sufficient for
this approach, the acquisition time increases linearly with the correlation length. Further,
since the noise conditions are usually not the same for all code phases, this approach is
not strictly a maximum likelihood estimation algorithm.

Another search strategy is sequential estimation. This is based on the assumption
that, in the absence of noise, if 'n' consecutive bits of the incoming PN code (where 'n'
is the length of the PN code generator) are loaded into the receiver's code generator and
used as the initial condition, the successively generated bits will automatically be
in phase with the incoming ones. Cross-correlation between the generated and incoming
codes is done to check whether synchronization has been attained or not. If not, the next n
bits are estimated and loaded. This algorithm usually yields rapid acquisition and hence is
called the RASE (Rapid Acquisition by Sequential Estimation) algorithm. However, this
scheme works well only when the noise associated with the received spread signal is low.

An important family of code acquisition algorithms is known as serial search.
Following the serial search technique, a group of cells (probable pockets in the
uncertainty region) are searched in a serial fashion until the correct cell (implying the
correct code phase) is found. This process of serial search may be successful anywhere in
the uncertainty region, and hence the average acquisition time is much less than that in the
maximum likelihood estimation schemes, though a maximum likelihood estimation
scheme yields a more accurate result. Incidentally, a serial search algorithm performs better
than the RASE algorithm at low CNR. Serial search schemes are also easier to implement. In
the following, we briefly mention a practical code search technique known as
'Subsequence Matched Filtering'.

Proposed in 1984, this is a rapid acquisition scheme for CDMA systems that
employs Subsequence Matched Filtering (SMF). The detector consists of several
correlator-based matched filters, each matched to a subsequence of length M. The
subsequences may or may not be contiguous, depending on the length of the code and the
hardware required to span it. With every incoming chip of the received waveform, the
largest of the SMF value is compared against a pre-selected threshold. Threshold
exceedance leads to loading that subsequence in the local PN generator for correlation
over a longer time interval. If the correlation value does not exceed a second threshold, a
negative feedback is generated, and a new state estimate is loaded into the local
generator. Otherwise, the PN generation and correlation process continues. Fig. 7.39.4
shows the structure of a Subsequence Matched Filter based code acquisition process.

[Figure: the received signal r(t) is down-converted in quadrature branches, low-pass
filtered, and applied to L subsequence matched filters per branch; the squared outputs are
summed, the largest SMF output is compared against a first threshold, and a second
threshold on a longer correlation drives the decision controller that loads the local PN
code generator.]

Fig. 7.39.4 Structure of a Subsequence Matched Filter based code acquisition process
M-ary modulation techniques

UNIT – III
M-ary modulation
1. Quadrature PSK (QPSK), using M=4 symbols
2. M-ary PSK (8PSK, using M=8 symbols,16PSK, using
M=16 symbols).
3. Quadrature amplitude modulation (QAM), a
combination of PSK and ASK .
4. Continuous phase modulation (CPM) methods
(i)Minimum-shift keying (MSK)
(ii)Gaussian minimum-shift keying (GMSK)
(iii)Continuous-phase frequency-shift keying (CPFSK)
5. Content: Simon Haykin, 4th edition,
Page Nos: 354–360, 365–373, 387–403 and 413–420
Error control coding

UNIT – IV
contents

• Introduction
• Discrete Memoryless Channels
• Linear Block Codes
• Cyclic Codes
• Maximum Likelihood decoding of Convolutional Codes
• Trellis-Coded Modulation
Note: Content available in Communication
Systems by Simon Haykin
Introduction

• Cost-effective facility for transmitting
information at a given rate with a specified level of reliability
and quality
—measured by the signal energy per bit-to-noise power density ratio
—achieved practically via error-control coding
• Error-control methods
• Error-correcting codes
Note: Content available in Communication
Systems by Simon Haykin (text book)
Discrete Memoryless Channels

• Discrete memoryless channels (see fig. 11.1) are
described by a set of transition probabilities
—in the simplest form, binary coding {0,1} is used, of
which the BSC is an appropriate example
—channel noise is modelled as an additive white Gaussian
noise channel
• the two above lead to so-called hard-decision decoding
—other solutions use so-called soft-decision decoding
Linear Block Codes

• A code is said to be linear if any two words in
the code can be added in modulo-2 arithmetic
to produce a third code word in the code.
• A systematic linear block code has n bits, of which k bits are
always identical to the message sequence.
• The remaining n−k bits are computed from the message
bits in accordance with a prescribed encoding
rule that determines the mathematical
structure of the code
—these bits are called parity bits
Linear Block Codes

• Normally code equations are written in the
form of matrices (1-by-k message vector)
—P is the k-by-(n−k) coefficient matrix
—I (of order k) is the k-by-k identity matrix
—G is the k-by-n generator matrix
• Another way to show the relationship between
the message bits and parity bits
—H is the parity-check matrix
Linear Block Codes

• In syndrome decoding, the generator matrix
(G) is used in the encoding at the transmitter
and the parity-check matrix (H) at the receiver
—if a bit is corrupted, r = c + e, and this leads to two important
properties
• the syndrome depends only on the error pattern, not on the
transmitted code word
• all error patterns that differ by a code word have the same syndrome
Linear Block Codes

• The Hamming distance (and the minimum
distance) can be used to calculate the difference
between code words
• We have a certain number (2^k) of code
vectors, whose subsets constitute a
standard array for an (n,k) linear block code
• We pick the error pattern of a given code
—coset leaders are the most obvious error patterns
Linear Block Codes

• Example: Let H be the parity-check
matrix whose column vectors are
—(1110), (0101), (0011), (0001), (1000), (1111)
—the code generator G gives us the following codes (c):
• 000000, 100101, 111010, 011111
—Let us find n, k and n−k.
(Here n = 6; there are 2^k = 4 code words, so k = 2 and n−k = 4.)
Linear Block Codes

• Examples of (7,4) Hamming code words and error patterns
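The syndrome properties above make single-error correction mechanical. A sketch for a (7,4) Hamming code in systematic form G = [I | P]; this particular P is one standard choice and not necessarily the arrangement used in these slides:

```python
# One standard parity submatrix for the (7,4) Hamming code (assumed choice).
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]

def encode(m):
    """Systematic encoding: code word = 4 message bits + 3 parity bits."""
    parity = [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return m + parity

def syndrome(r):
    """Recompute parity from the message part and XOR with received parity."""
    return [(sum(r[i] * P[i][j] for i in range(4)) + r[4 + j]) % 2
            for j in range(3)]

def correct(r):
    """Flip the single bit whose H-column matches the syndrome."""
    s = syndrome(r)
    if s == [0, 0, 0]:
        return r                       # no detectable error
    cols = [[P[i][j] for j in range(3)] for i in range(4)] + \
           [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # columns of H = [P^T | I]
    i = cols.index(s)
    return r[:i] + [r[i] ^ 1] + r[i + 1:]
```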
Cyclic Codes

• Cyclic codes form a subclass of linear block codes
• A binary code is said to be cyclic if it exhibits the
two following properties
—the sum of any two code words in the code is also a code
word (linearity)
• this means that cyclic codes are linear block codes
—any cyclic shift of a code word in the code is also a code
word (cyclic property)
• expressed mathematically in polynomial notation
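The cyclic-shift property can be verified directly in polynomial arithmetic over GF(2). The sketch below uses the (7,4) generator g(x) = 1 + x + x^3 (the polynomial that appears in the end-semester question in these notes); coefficient lists are low-degree-first:

```python
G = [1, 1, 0, 1]        # g(x) = 1 + x + x^3, low-degree-first
N, K = 7, 4

def poly_mod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    a = a[:]
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            for j in range(len(g)):
                a[i - (len(g) - 1) + j] ^= g[j]
    return a[:len(g) - 1]

def cyclic_encode(msg):
    """Systematic encoding: parity = x^(n-k) m(x) mod g(x)."""
    shifted = [0] * (N - K) + msg        # multiply m(x) by x^(n-k)
    return poly_mod(shifted, G) + msg    # parity bits, then message bits

cw = cyclic_encode([1, 0, 1, 1])
rotated = cw[-1:] + cw[:-1]              # one cyclic shift of the code word
```

Both `cw` and its cyclic shift divide g(x) exactly, which is the defining property.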
Convolutional Codes

• Convolutional codes work in a serial manner, which
suits applications where the message bits arrive
serially rather than in large blocks
• The encoder of a convolutional code can be viewed
as a finite-state machine that consists of an M-stage
shift register with prescribed connections to n
modulo-2 adders, and a multiplexer that
serializes the outputs of the adders
Convolutional Codes

• Convolutional codes are portrayed in graphical form


by using three different diagrams
—Code Tree
—Trellis
—State Diagram
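As a concrete example, the rate-1/2 encoder with impulse responses (111) and (101), the generators used in the exam question included in these notes, can be sketched as a two-stage shift register:

```python
# Generator taps, current input bit first: (111) and (101).
G1, G2 = (1, 1, 1), (1, 0, 1)

def conv_encode(bits):
    """Rate-1/2 feedforward convolutional encoder, constraint length 3."""
    state = [0, 0]                       # two-stage shift register
    out = []
    for b in bits:
        window = [b] + state             # current bit plus register contents
        out.append(sum(w * g for w, g in zip(window, G1)) % 2)
        out.append(sum(w * g for w, g in zip(window, G2)) % 2)
        state = [b, state[0]]            # shift the register
    return out
```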
Maximum Likelihood Decoding of
Convolutional Codes

• We can define a log-likelihood function for a
convolutional code in terms of Hamming
distance
• The book presents an example algorithm (Viterbi)
—the Viterbi algorithm is a maximum-likelihood decoder, which
is optimum for an AWGN channel (see fig. 11.17)
• initialisation
• computation step
• final step
Trellis-Coded Modulation

• Here coding is described as a process of imposing
certain patterns on the transmitted signal
• Trellis-coded modulation has three features
—the number of signal points is larger than what is required,
thereby allowing redundancy without sacrificing
bandwidth
—convolutional coding is used to introduce a certain
dependency between successive signal points
—soft-decision decoding is done in the receiver
To get full marks:
References:
• Class notes
• 11th chapter in your text book(Error control
coding )
• This PPT
• Practice based on model question paper.
Spread spectrum Modulation

UNIT- V
Spread Spectrum
• Pseudo-random noise sequence generator –
construction and generation of PN sequence
• Types:
(i) Frequency hopping
— Signal broadcast over a seemingly random series of frequencies
(ii) Direct sequence
— Each bit is represented by multiple bits in the transmitted signal

Note: Content available in Communication Systems by Simon Haykin (text book)
Spread Spectrum Concept
• Input fed into channel encoder
— Produces narrow bandwidth analog signal around central
frequency
• Signal modulated using sequence of digits
— Spreading code/sequence
— Typically generated by pseudonoise/pseudorandom number
generator
• Increases bandwidth significantly
— Spreads spectrum
• Receiver uses same sequence to demodulate signal
• Demodulated signal fed into channel decoder
General Model of Spread
Spectrum System
Pseudorandom Numbers
• Generated by algorithm using initial seed
• Deterministic algorithm
—Not actually random
—If algorithm good, results pass reasonable tests of
randomness
• Need to know algorithm and seed to predict
sequence
Frequency Hopping Spread
Spectrum (FHSS)
• Signal broadcast over seemingly random series
of frequencies
• Receiver hops between frequencies in sync with
transmitter
• Eavesdroppers hear unintelligible blips
• Jamming on one frequency affects only a few
bits
Basic Operation
• Typically 2^k carrier frequencies forming 2^k
channels
• Channel spacing corresponds with bandwidth of
input
• Each channel used for fixed interval
—300 ms in IEEE 802.11
—Some number of bits transmitted using some
encoding scheme
• May be fractions of bit (see later)
—Sequence dictated by spreading code
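The channel-selection step above can be sketched as follows; the base frequency, channel spacing, and PN bits are assumed example values, with k = 2 bits of the spreading code selecting one of 2^k = 4 carriers per hop:

```python
BASE_HZ = 2.402e9       # assumed first carrier frequency
SPACING_HZ = 1e6        # assumed channel spacing
K = 2                   # PN bits per hop -> 2**K channels

def hop_frequencies(pn_bits, k=K):
    """Map successive k-bit groups of the spreading code to carriers."""
    freqs = []
    for i in range(0, len(pn_bits) - k + 1, k):
        index = int("".join(str(b) for b in pn_bits[i:i + k]), 2)
        freqs.append(BASE_HZ + index * SPACING_HZ)
    return freqs
```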
Frequency Hopping Example
Frequency Hopping Spread
Spectrum System (Transmitter)
Frequency Hopping Spread
Spectrum System (Receiver)
Slow and Fast FHSS
• Frequency shifted every Tc seconds
• Duration of signal element is Ts seconds
• Slow FHSS has Tc ≥ Ts
• Fast FHSS has Tc < Ts
• Generally fast FHSS gives improved performance
in noise (or jamming)
Slow Frequency Hop Spread
Spectrum Using MFSK (M=4, k=2)
Fast Frequency Hop Spread
Spectrum Using MFSK (M=4, k=2)
FHSS Performance
Considerations
• Typically large number of frequencies used
—Improved resistance to jamming
Direct Sequence Spread
Spectrum (DSSS)
• Each bit represented by multiple bits using spreading
code
• Spreading code spreads signal across wider frequency
band
— In proportion to number of bits used
— 10 bit spreading code spreads signal across 10 times bandwidth
of 1 bit code
• One method:
— Combine input with spreading code using XOR
— Input bit 1 inverts spreading code bit
— Input zero bit doesn’t alter spreading code bit
— Resulting chip rate equals that of the spreading code
• Performance similar to FHSS
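The XOR spreading method in the list above can be sketched directly; the 5-chip spreading code is an assumed example, real systems use much longer PN sequences:

```python
CODE = [0, 1, 0, 1, 1]   # assumed 5-chip spreading code

def spread(bits, code=CODE):
    """Each input bit is XOR-combined with the whole spreading code:
    a 1 inverts the code, a 0 leaves it unchanged."""
    chips = []
    for b in bits:
        chips.extend(c ^ b for c in code)
    return chips

def despread(chips, code=CODE):
    """Re-apply the code chip by chip and majority-vote each bit group."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        votes = sum(c ^ k for c, k in zip(chips[i:i + n], code))
        bits.append(1 if votes > n // 2 else 0)
    return bits
```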
Direct Sequence Spread
Spectrum Example
Direct Sequence Spread
Spectrum Transmitter
Direct Sequence Spread
Spectrum Using BPSK Example
Approximate
Spectrum of
DSSS Signal
Code Division Multiple Access
(CDMA)
• Multiplexing Technique used with spread spectrum
• Start with data signal rate D
— Called bit data rate
• Break each bit into k chips according to fixed pattern
specific to each user
— User’s code
• New channel has chip data rate kD chips per second
• E.g. k=6, three users (A,B,C) communicating with base
receiver R
• Code for A = <1,-1,-1,1,-1,1>
• Code for B = <1,1,-1,-1,1,1>
• Code for C = <1,1,-1,1,1,-1>
CDMA Example
CDMA Explanation
• Consider A communicating with base
• Base knows A’s code
• Assume communication already synchronized
• A wants to send a 1
— Send chip pattern <1,-1,-1,1,-1,1>
• A’s code
• A wants to send 0
— Send chip pattern <-1,1,1,-1,1,-1>
• Complement of A’s code
• Decoder ignores other sources when using A’s code to
decode
— Orthogonal codes
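The decoding arithmetic of the example above can be sketched with the codes for users A and B given in the text (k = 6): correlating the received chips with A's code yields +6 for A's 1, −6 for A's 0, and 0 for B's transmission, which is why the decoder can ignore the other source:

```python
CODE_A = [1, -1, -1, 1, -1, 1]   # user A's code, from the text
CODE_B = [1, 1, -1, -1, 1, 1]    # user B's code, from the text

def encode(bit, code):
    """Bit 1 -> send the code; bit 0 -> send its complement."""
    return code if bit == 1 else [-c for c in code]

def decode(chips, code):
    """Correlate received chips with a user's code: +k means 1,
    -k means 0, and a value near 0 means 'some other user'."""
    return sum(r * c for r, c in zip(chips, code))
```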
NATIONAL INSTITUTE OF TECHNOLOGY, TIRUCHIRAPALLI-620 015
B.Tech. DEGREE VI SEMESTER, END ASSESSMENT - APRIL 2019

Branch : ECE
Sub code : ECPC24 – Digital Communication Time: 9.30AM-12.30PM
Date : 29.04.2019 Max Marks: 50

Answer ALL questions


1. (a) Define sampling theorem. If 20 Band limited signals (1000 Hz) are sampled at Nyquist
rate and then quantized to 16 levels, calculate the pulse width of each bit to support the
multiplex of these signals. [2]
(b) What is aliasing? How it can be prevented? Explain with necessary diagram. [2]
(c) Explain the working of PCM system with an example and derive the expression for signal
to noise ratio. List its limitations over the Delta Modulation system. [3]
(d) Explain the concept of modified duo-binary scheme with an example and how is it superior
to duo-binary coding? [3]
2. (a)To transmit a bit sequence 10011011, draw the wave form using Unipolar RZ; Unipolar
NRZ;Bipolar RZ [2]
(b) Is it possible to detect a FSK signal using ASK receiver? If yes, explain. [2]
(c) Discuss in detail the generation, detection using coherent and non-coherent detector, signals
space diagram and error probability of BFSK. [4]
(d) Compare the performance of all the basic digital modulation techniques with help of Eb/No
versus BER plot and comment on the same. [2]
3. (a) Sketch the ASK, PSK,4QAM signal for the given sequence 0110101100. [2]
(b) What are the benefits of M - ary modulation techniques? Draw the signal space diagram of
16 QAM.[2]
(c) Explain the working of QPSK system with relevant block diagram and signal space
diagram and derive the error probability. [3]
(d) Write short notes on (i) MSK (ii) MPSK [3]
4. (a) Define the following terms (i) Channel coding (ii) Systematic code (iii) Hamming distance
(iv) Code rate [2]
(b) For a convolution coder with the impulse responses (111) and (101),
(i) Find the code word for the message sequence [1010] using code tree method
(ii) Find the transmitted code word using Viterbi algorithm for the received code
0101000000.[4]
(c) If the generator polynomial of a (7,4) cyclic code is 1+X+X3, draw and explain the
working of encoder and syndrome calculator diagrams with an example. [3]
(d) Explain how the error syndrome helps in correcting a single error of a (7,4) linear block
code with the generator matrix
G = [1 0 0 1 1 0; 0 1 0 1 0 1; 0 0 1 0 1 1] [2]

5. (a) What are needs of spread spectrum modulation? Explain the working of baseband DS-
spread spectrum modulation with relevant block diagram and waveforms. [3]
(b) Derive the processing gain and jamming margin of Pass band DS- spread spectrum
modulation system. Find the jamming margin, if the system has the following parameters
(i) Information bit duration Tb=4.095ms
(ii) PN chip duration Tc=1µs [5]
(c) An FH/MFSK signal uses two bits per symbol and three bits per hop. Illustrate the
variation of the slow FH/MFSK signal with time for one complete period of the PN
sequence [001110011001001]. [2]
Unit - V

Topic 3: Spread
Spectrum Modulation
VI Semester (Section A)

Department of ECE
After reading this lesson, you will learn about
• Basic concept of Spread Spectrum Modulation;
• Advantages of Spread Spectrum (SS) Techniques;
• Types of spread spectrum (SS) systems;
• Features of Spreading Codes;
• Applications of Spread Spectrum;

Introduction
Spread spectrum communication systems are widely used today in a variety of
applications for different purposes, such as access of the same radio spectrum by multiple
users (multiple access), anti-jamming capability (so that signal transmission cannot be
interrupted or blocked by spurious transmission from an enemy), interference rejection,
secure communications, multi-path protection, etc. However, irrespective of the
application, all spread spectrum communication systems satisfy the following criteria:
(i) As the name suggests, bandwidth of the transmitted signal is much greater
than that of the message that modulates a carrier.
(ii) The transmission bandwidth is determined by a factor independent of the
message bandwidth.
The power spectral density of the modulated signal is very low and usually comparable to
background noise and interference at the receiver.

As an illustration, let us consider the DS-SS system shown in Fig 7.38.1(a) and
(b). A random spreading code sequence c(t) of chosen length is used to
'spread' (multiply) the modulating signal m(t). Sometimes a high-rate pseudo-noise code
is used for the purpose of spreading. Each bit of the spreading code is called a 'chip'.
The duration of a chip (Tc) is much smaller than the duration of an information bit
(T). Let us consider binary phase shift keying (BPSK) for modulating a carrier by this
spread signal. If m(t) represents a binary information bit sequence and c(t) represents a
binary spreading sequence, the 'spreading' or multiplication operation reduces to
modulo-2 or ex-or addition. For example, if the modulating signal m(t) is available at the
rate of 10 kbits per second and the spreading code c(t) is generated at the rate of 1 Mbits
per second, the spread signal d(t) is generated at the rate of 1 Mega chips per second.
So, the null-to-null main lobe bandwidth of the spread signal is now 2 MHz. We say that
the bandwidth has been 'spread' by this operation by a factor of one hundred. This factor is
known as the spreading gain or process gain (PG). The process gain in a practical system
is chosen based on the application.
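The numbers in this example give the processing gain directly; a short check, with the rates taken from the text:

```python
import math

BIT_RATE = 10e3     # information rate from the text: 10 kbit/s
CHIP_RATE = 1e6     # spreading-code rate from the text: 1 Mchip/s

# Processing gain PG = Tb / Tc = chip rate / bit rate
pg = CHIP_RATE / BIT_RATE
pg_db = 10 * math.log10(pg)   # the same gain expressed in dB
```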
[Figure: the message m(t) and the output c(t) of a pseudo-noise code generator are
combined in a binary adder; the result drives a balanced modulator at carrier frequency
f0 to produce the transmitted signal s(t).]

Fig: 1 (a) Direct sequence spread spectrum transmitter

[Figure: the received signal s(t) + n(t) is despread using a locally generated PN code
(after code phase reconstruction driven by a local clock), passed through a narrowband
filter, and applied to a message demodulator to obtain the estimate m̂(t).]

Fig: 1 (b) Direct sequence spread spectrum receiver

On BPSK modulation, the spread signal becomes s(t) = d(t)·cos ωt. Fig. 1(b)
shows the baseband processing operations necessary after carrier demodulation. Note
that, at the receiver, the operation of despreading requires the generation of the same
spreading code in correct phase with the incoming code. The pseudo-noise (PN) code
synchronizing module detects the phase of the incoming code sequence, mixed with the
information sequence, and aligns the locally generated code sequence appropriately.
After this important operation of code alignment (i.e. synchronization), the received signal
is ‘despread’ with the locally generated spreading code sequence. The despreading
operation results in a narrowband signal, modulated by the information bits only. So, a
conventional demodulator may be used to obtain the message signal estimate.
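The spread/despread round trip described above can be sketched at baseband (a toy illustration with an assumed small processing gain and a randomly chosen code; modulo-2 addition plays the role of both spreading and despreading):

```python
import random

PG = 8                     # chips per bit (assumed small value for illustration)
rng = random.Random(1)     # fixed seed so the sketch is repeatable

message = [rng.randint(0, 1) for _ in range(16)]   # information bits m(t)
code = [rng.randint(0, 1) for _ in range(PG)]      # spreading code c(t), one period per bit

# Spreading: modulo-2 (EX-OR) addition of each bit with the chip sequence
spread = [b ^ c for b in message for c in code]

# Despreading with a perfectly aligned local copy of the code,
# then a per-bit decision (all chips agree in this noiseless sketch)
recovered = []
for i in range(len(message)):
    chips = [spread[i * PG + j] ^ code[j] for j in range(PG)]
    recovered.append(1 if sum(chips) > PG // 2 else 0)

print(recovered == message)  # True: despreading undoes the spreading
```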
Advantages of Spread Spectrum (SS) Techniques
a) Reduced interference: In SS systems, interference from undesired sources is
considerably reduced due to the processing gain of the system.

b) Low susceptibility to multi-path fading: Because of its inherent frequency diversity
properties, a spread spectrum system offers resistance to degradation in signal
quality due to multi-path fading. This is particularly beneficial for designing mobile
communication systems.

c) Co-existence of multiple systems: With proper design of pseudo-random
sequences, multiple spread spectrum systems can co-exist.

d) Immunity to jamming: An important feature of spread spectrum is its ability to
withstand strong interference, sometimes generated by an enemy to block the
communication link. This is one reason for the extensive use of spectrum
spreading in military communications.

Types of SS
Based on the kind of spreading modulation, spread spectrum systems are broadly
classified as:
(i) Direct sequence spread spectrum (DS-SS) systems
(ii) Frequency hopping spread spectrum (FH-SS) systems
(iii) Time hopping spread spectrum (TH-SS) systems.
(iv) Hybrid systems

Direct Sequence (DS) Spread Spectrum System (DSSS)

The simplified scheme shown in Fig. 1 is of this type. The information signal in
DSSS transmission is spread at baseband, and the spread signal is then modulated by a
carrier in a second stage. Following this approach, the process of modulation is separate
from the spreading operation. An important feature of a DSSS system is its ability to
operate in the presence of strong co-channel interference. A popular definition of the
processing gain (PG) of a DSSS system is the ratio of the spread signal bandwidth to the
message bandwidth.

A DSSS system can reduce the effects of interference on the transmitted
information. An interfering signal may be reduced by a factor which may be as high as
the processing gain. That is, a DSSS system can withstand more interference if the
length of the PN sequence is increased. The output signal-to-noise ratio of a DSSS
receiver may be expressed as (SNR)o = PG · (SNR)i, where (SNR)i is the signal-to-noise
ratio before the despreading operation is carried out.
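The SNR relation above is a simple multiplication, which becomes an addition in dB; a minimal check with assumed example numbers:

```python
import math

# (SNR)_o = PG * (SNR)_i for a DSSS receiver; values below are assumptions.
pg = 100            # processing gain (e.g. 1 Mchip/s over 10 kbit/s)
snr_in = 0.5        # SNR before despreading (linear, about -3 dB)

snr_out = pg * snr_in                 # SNR after despreading (linear)
snr_out_db = 10 * math.log10(snr_out)

print(snr_out)                 # 50.0
print(round(snr_out_db, 1))    # 17.0 dB (a 20 dB gain on top of -3 dB)
```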
A major disadvantage of a DSSS system is the ‘Near-Far effect’, illustrated
in Fig.2.
[Figure] Fig. 2 Near-far effect: a receiver operating with code A, an intended transmitter using code A farther away, and a nearby transmitter using code B.

This effect is prominent when an interfering transmitter is closer to the receiver than the
intended transmitter. Although the cross-correlation between codes A and B is low, the
correlation between the received signal from the interfering transmitter and code A can
be higher than the correlation between the received signal from the intended transmitter
and code A. So, detection of proper data becomes difficult.

Frequency Hopping Spread Spectrum

Another basic spread spectrum technique is frequency hopping. In a frequency
hopping (FH) system, the carrier frequency remains constant within each time chip but
changes (‘hops’) from chip to chip. An example FH signal is shown in Fig. 3.

[Figure] Fig. 3 Illustration of the principle of frequency hopping: power plotted against frequency and time, with the desired signal hopping from one frequency to another.

Frequency hopping systems can be divided into fast-hop and slow-hop systems. A
fast-hop FH system is one in which the hopping rate is greater than the message bit rate,
while in a slow-hop system the hopping rate is smaller than the message bit rate. The
distinction matters because the two FH types behave considerably differently. The
FH receiver is usually non-coherent. A typical non-coherent receiver architecture is
represented in Fig. 4.
[Figure] Fig. 4 Block diagram of a non-coherent frequency-hopping receiver: a frequency multiplier fed by a digital frequency synthesizer (driven by the PN code generator) de-hops the input; FSK demodulation and error correction recover the message m(t), while early-late gates and a code loop filter steer the clock VCO.

The incoming signal is multiplied by the signal from the PN generator identical to
the one at the transmitter. Resulting signal from the mixer is a binary FSK, which is then
demodulated in a "regular" way. Error correction is then applied in order to recover the
original signal. The timing synchronization is accomplished through the use of early-late
gates, which control the clock frequency.

Time Hopping
In a time hopping system, time is divided into frames, which in turn are subdivided
into M time slots. As the message is transmitted, only one time slot in each frame is
modulated with information (by any modulation scheme). This time slot is chosen using
the PN generator.

All of the message bits gathered in the previous frame are then transmitted in a
burst during the time slot selected by the PN generator. If we let Tf = frame duration and
k = number of message bits in one frame, with Tf = k × t_m, then the width of each time
slot in a frame is Tf/M and the width of each bit in the time slot is Tf/(kM), or just t_m/M.
Thus, the transmitted signal bandwidth is 2M times the message bandwidth.
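These slot-width relations can be verified directly with assumed example numbers (M, k and t_m below are arbitrary illustrative choices, not values from the text):

```python
from fractions import Fraction

M = 8                        # time slots per frame (assumed)
k = 4                        # message bits gathered per frame (assumed)
t_m = Fraction(1, 1000)      # message bit duration in seconds, i.e. 1 kbit/s (assumed)

Tf = k * t_m                 # frame duration: Tf = k * t_m
slot_width = Tf / M          # each time slot lasts Tf / M
bit_width = Tf / (k * M)     # each burst bit lasts Tf/(k*M), i.e. t_m / M

print(slot_width)            # 1/2000 s
print(bit_width == t_m / M)  # True: the two expressions agree
print(2 * M)                 # bandwidth expansion factor 2M = 16
```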
A typical time hopping receiver is shown in Fig.5. The PN code generator
drives an on-off switch in order to accomplish switching at a given time in the frame. The
output of this switch is then demodulated appropriately. Each message burst is stored and
re-timed to the original message rate in order to recover the information. Time hopping is
at times used in conjunction with other spread spectrum modulations such as DS or
FH. Table 1 presents a brief comparison of major features of various SS schemes.

[Figure] Fig. 5 Block diagram of a time hopping receiver: the PN code generator (clocked by a VCO) gates an on-off switch through an AND gate; the switch output is demodulated, bit-synchronized, and the bursts are stored and re-timed to recover the message.


Direct Sequence
  Merits:   i) Simpler to implement
            ii) Low probability of interception
            iii) Can withstand multi-access interference reasonably well
  Demerits: i) Code acquisition may be difficult
            ii) Susceptible to the Near-Far problem
            iii) Affected by jamming

Frequency Hopping
  Merits:   i) Less affected by the Near-Far problem
            ii) Better for avoiding jamming
            iii) Less affected by multi-access interference
  Demerits: i) Needs FEC
            ii) Frequency acquisition may be difficult

Time Hopping
  Merits:   i) Bandwidth efficient
            ii) Simpler than an FH system
  Demerits: i) Elaborate code acquisition is needed
            ii) Needs FEC

Table 1 Comparison of features of various spreading techniques

Hybrid System: DS/FH

The DS/FH spread spectrum technique is a combination of the direct-sequence and
frequency hopping schemes. One data bit is divided over several carrier frequencies
(Fig. 6).
[Figure] Fig. 6 A hybrid DS-FH spreading scheme: in successive frequency-hop time periods, the PN code is transmitted on a different one of four carriers.

Since the FH sequence and the PN codes are coupled, each user is assigned a
combination of an FH sequence and a PN code.

Features of Spreading Codes


Several spreading codes are popular for use in practical spread spectrum systems.
Some of these are maximal-length sequence (m-sequence) codes, Gold codes, Kasami
codes and Barker codes. In this section, m-sequences are briefly discussed.
These are the longest codes that can be generated by a shift register of a specific length,
say, L. An L-stage shift register and a few EX-OR gates can be used to generate an
m-sequence of length 2^L - 1. Fig. 7 shows an m-sequence generator using L memory
elements, such as flip-flops. If we keep on clocking such a sequence generator, the
sequence repeats after 2^L - 1 bits. The number of 1s and the number of 0s in the
complete sequence differ by one. That is, if L = 8, there will be 128 ones and 127
zeros in one complete cycle of the sequence. Further, the periodic autocorrelation of an
m-sequence is -1 except for relative shifts within (0 ± 1) chip (Fig. 8). This behavior of the
autocorrelation function is somewhat similar to that of thermal noise, as the
autocorrelation shows the degree of correspondence between the code and its phase-
shifted version. Hence, m-sequences are also known as pseudo-noise or PN
sequences.
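A minimal software sketch of such a generator is shown below; the tap positions correspond to the primitive polynomial x^4 + x + 1, an assumed example chosen so the register has only four stages:

```python
def m_sequence(taps, L):
    """One period (2**L - 1 bits) of an m-sequence from an L-stage shift
    register with EX-OR feedback from the listed stages (1-based). The taps
    must correspond to a primitive polynomial for maximal length."""
    state = [1] * L                    # any non-zero seed works
    out = []
    for _ in range(2 ** L - 1):
        out.append(state[-1])          # output taken from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]         # modulo-2 sum of the tapped stages
        state = [fb] + state[:-1]      # shift and feed back
    return out

seq = m_sequence((4, 1), 4)            # x^4 + x + 1 gives a length-15 sequence
print(len(seq), seq.count(1), seq.count(0))  # 15 8 7: ones exceed zeros by one
```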

[Figure] Fig. 7 Maximal length pseudo random sequence generator: a clocked chain of memory elements whose tapped outputs are combined modulo-2 and fed back to the input.

[Figure] Fig. 8 Autocorrelation function of a PN sequence: R(τ) is periodic with period NTc, peaking at 1 and taking the value -1/N between the peaks.


Another interesting property of an m-sequence is that the sequence, when added
(modulo-2) to a cyclically shifted version of itself, results in another shifted version of
the original sequence. For moderate and large values of L, multiple sequences of the
same length exist. The cross-correlations of all these codes have been studied. All
these properties of a PN sequence are useful in the design of a spread spectrum system.
Sometimes, to indicate the occurrence of specific patterns, we define a ‘run’
as a series of consecutive identical bits. For example, consider the sequence
1011010. We say the sequence has three runs of a single ‘0’, two runs of a single ‘1’ and
one run of two ones. In a maximal-length sequence of length 2^L - 1, there are
exactly 2^(L-(p+2)) runs of length p for both ones and zeros, except that there is only one
run containing L ones and one containing (L-1) zeros. There is no run of zeros of
length L or of ones of length (L-1). That is, the number of runs of each length is a
decreasing power of two as the run length increases.
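The run counts quoted for the example sequence 1011010 can be checked directly:

```python
from itertools import groupby

def runs(bits):
    """Return (value, length) pairs for the maximal runs of identical bits."""
    return [(v, len(list(g))) for v, g in groupby(bits)]

r = runs([1, 0, 1, 1, 0, 1, 0])
print(r)  # [(1, 1), (0, 1), (1, 2), (0, 1), (1, 1), (0, 1)]
# i.e. three runs of a single '0', two runs of a single '1', one run of two ones
```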

It is interesting to note that multiple m-sequences exist for a particular value of
L > 2. The number of available m-sequences is φ(2^L - 1)/L, where the numerator
φ(2^L - 1) is the Euler totient, i.e. the number of positive integers, including 1, that
are less than (2^L - 1) and relatively prime to it. When (2^L - 1) is itself a prime
number, all positive integers less than it are relatively prime to it. For example, if
L = 5, it is easy to find that the number of possible sequences is 30/5 = 6.
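A brute-force check of this count (with the totient computed by direct gcd enumeration, which is fine for small L):

```python
from math import gcd

def num_m_sequences(L):
    """phi(2**L - 1) / L: the number of distinct m-sequences of degree L."""
    n = 2 ** L - 1
    phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)  # Euler totient
    return phi // L

print(num_m_sequences(5))  # 6, matching phi(31)/5 = 30/5
print(num_m_sequences(4))  # 2, since phi(15) = 8 and 8/4 = 2
```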
If the period of an m-sequence is N chips, then N = 2^n - 1, where n is the number of
stages in the code generator. The autocorrelation function of an m-sequence is periodic in
nature and assumes only two values, viz. 1 and (-1/N), when the shift parameter τ is an
integral multiple of the chip duration.

Several properties of PN sequences are used in the design of DS systems. Some
features of maximal-length pseudo-random periodic sequences (m-sequences or PN
sequences) are noted below:
a) Over one period of the sequence, the number of ‘+1’ differs from the number
of ‘-1’ by exactly one.
b) Also the number of positive runs equals the number of negative runs.
c) Half of the runs of bits of the same sign (i.e. +1 or -1) in every period are of
length 1, one fourth of the runs are of length 2, one eighth of the runs are of
length 3, and so on.
d) The autocorrelation of a periodic m-sequence is two-valued.
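The two-valued autocorrelation property can be checked on a length-7 m-sequence (generated from x^3 + x + 1 and mapped to ±1; the particular sequence is an assumed example):

```python
def periodic_autocorr(chips, tau):
    """Normalized periodic autocorrelation of a bipolar sequence at shift tau."""
    N = len(chips)
    return sum(chips[i] * chips[(i + tau) % N] for i in range(N)) / N

# Length-7 m-sequence 1110100 (from x^3 + x + 1), mapped 1 -> +1, 0 -> -1
seq = [1, 1, 1, -1, 1, -1, -1]
values = {round(periodic_autocorr(seq, tau), 6) for tau in range(len(seq))}
print(sorted(values))  # [-0.142857, 1.0]: only the two values 1 and -1/N
```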

Applications of Spread Spectrum


A specific example of the use of spread spectrum technology is the North
American Code Division Multiple Access (CDMA) Digital Cellular (IS-95) standard.
The CDMA employed in this standard uses a spread spectrum signal with 1.23-MHz
spreading bandwidth. Since in a CDMA system every user is a source of interference to
other users, control of the transmitted power has to be employed (due to near-far
problem). Such control is provided by sophisticated algorithms built into control stations.
The standard also recommends use of forward error-correction coding with interleaving,
speech activity detection and variable-rate speech encoding. Walsh code is used to
provide 64 orthogonal sequences, giving rise to a set of 64 orthogonal ‘code channels’.
The spread signal is sent over the air interface with QPSK modulation with Root Raised
Cosine (RRC) pulse shaping. Other examples of the use of spread spectrum technology in
commercial applications include satellite communications, wireless LANs based on the
IEEE 802.11 standard, etc.
