
CHANNEL CODING

• Error Control Coding (ECC)
• Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver
• Done to prevent the output of erroneous bits despite noise and other imperfections in the channel
• The positions of the error control coding and decoding are shown in the transmission model
TRANSMISSION MODEL

[Block diagram]
Transmitter: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator (Transmit Filter, etc.) → X(w)
Channel: Hc(w), with additive noise N(w)
Receiver: Y(w) → Demod (Receive Filter, etc.) → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
BLOCK CODES

• Binary data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into a block of length n bits (codeword), where in general n > k
• This is known as an (n, k) block code
BLOCK CODES

• A vector notation is used for the datawords and codewords:
   Dataword d = (d1 d2 … dk)
   Codeword c = (c1 c2 … cn)
• The redundancy introduced by the code is quantified by the code rate:
   Code rate = k/n
   i.e., the higher the redundancy, the lower the code rate
BLOCK CODE - EXAMPLE

• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)

ERROR CONTROL PROCESS

[Block diagram]
Source-coded data (e.g., …101101…) is chopped into k-bit datawords (e.g., 1000) → channel coder → codeword (n bits) → channel → codeword + possible errors (n bits) → channel decoder → dataword (k bits) + error flags
ERROR CONTROL PROCESS
• Decoder gives corrected data
• May also give error flags to
   • Indicate reliability of decoded data
   • Help with schemes employing multiple layers of error correction
BASIC DEFINITIONS

• Linearity:
   If m1 → c1 and m2 → c2,
   then m1 ⊕ m2 → c1 ⊕ c2,
   where m is a k-bit information sequence and c is an n-bit codeword.
   ⊕ is bit-by-bit mod-2 addition without carry.
• Linear code: the sum of any two codewords is a codeword (see the sketch below).
• Observation: the all-zero sequence is a codeword in every linear block code.
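
As an aside (not from the slides), a minimal Python sketch of this closure property, using a hypothetical (3,2) even-parity code chosen only for illustration:

# Hypothetical (3,2) even-parity code, used only to illustrate linearity.
code = {"000", "011", "101", "110"}

def mod2_add(a, b):
    # Bit-by-bit mod-2 addition (XOR), no carry.
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

# The sum of any two codewords is again a codeword,
# and the all-zero sequence belongs to the code.
assert all(mod2_add(c1, c2) in code for c1 in code for c2 in code)
assert "000" in code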
BASIC DEFINITIONS (CONT’D)
• Def: The weight of a codeword ci, denoted by w(ci), is the number of nonzero elements in the codeword.
• Def: The minimum weight of a code, wmin, is the smallest weight of the nonzero codewords in the code.
• Theorem: In any linear code, dmin = wmin
• Systematic codes: codeword = [ n-k check bits | k information bits ]
• Any linear block code can be put in systematic form
LINEAR BLOCK CODES
• The number of codewords is 2^k since there are 2^k distinct messages.
• The set of vectors {gi} must be linearly independent since we must have a set of unique codewords.
• Linearly independent vectors mean that no vector gi can be expressed as a linear combination of the other vectors.
• These vectors are called basis vectors of the vector space C.
• The dimension of this vector space is the number of basis vectors, which is k.
• gi ∈ C, i.e., the rows of G are all legal codewords.
LINEAR BLOCK CODES

• If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k positions.
LINEAR BLOCK CODES
• For example, for a (7,4) code, only four codewords are required, e.g.,

   1 0 0 0 1 1 0
   0 1 0 0 1 0 1
   0 0 1 0 0 1 1
   0 0 0 1 1 1 1

• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together, giving 1011010
• This process will now be described in more detail
LINEAR BLOCK CODES
• An (n,k) block code has code vectors
   d = (d1 d2 … dk) and
   c = (c1 c2 … cn)
• The block coding process can be written as c = dG, where G is the Generator Matrix (a sketch of this step follows below)
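
A minimal Python sketch of this step (assuming numpy; the generator rows are the four (7,4) codewords listed on the previous slide):

import numpy as np

# Generator matrix built from the four codewords on the previous slide.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(d, G):
    # c = d G, with all arithmetic taken mod 2.
    return np.mod(np.dot(d, G), 2)

print(encode(np.array([1, 0, 1, 1]), G))   # -> [1 0 1 1 0 1 0], as in the worked example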
LINEAR BLOCK CODES – EXAMPLE 1

• For example, a (4,2) code; suppose

   G = | 1 0 1 1 |      a1 = [1 0 1 1]
       | 0 1 0 1 |      a2 = [0 1 0 1]

• For d = [1 1]:

   c = 1·a1 + 1·a2 = [1 0 1 1] + [0 1 0 1] = [1 1 1 0]
LINEAR BLOCK CODES – EXAMPLE 2

• A (6,5) code with

       | 1 0 0 0 0 1 |
       | 0 1 0 0 0 1 |
   G = | 0 0 1 0 0 1 |
       | 0 0 0 1 0 1 |
       | 0 0 0 0 1 1 |

• is an even single-parity-check code (see the sketch below)
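
Equivalently (an illustrative sketch, not from the slides), encoding with this G simply appends one even-parity bit to the five data bits:

def encode_parity(d):
    # d is a list of 5 data bits; append their mod-2 sum as the parity bit.
    return d + [sum(d) % 2]

print(encode_parity([1, 0, 1, 1, 0]))   # -> [1, 0, 1, 1, 0, 1]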
GENERATOR MATRIX (CONT’D)
• c = dG
• G can be put in the systematic form G = [ P | Ik ]:

       | 1 1 0   1 0 0 0 |
   G = | 0 1 1   0 1 0 0 |
       | 1 1 1   0 0 1 0 |
       | 1 0 1   0 0 0 1 |

   where P is k × (n-k) and Ik is k × k
EXAMPLE: ENCODING CIRCUIT
[Figure: encoding circuit]
ERROR CORRECTING POWER OF LBC

• The minimum Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1’s, or equivalently the distance from the all-zero codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1 + c2), as shown previously
• For an LBC, c1 + c2 = c3
• So d(c1,c2) = w(c1 + c2) = w(c3)
• Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight – far simpler than doing a pairwise check for all possible codewords (see the sketch below).
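
A sketch of that search in Python for the (7,4) generator used earlier (helper names are mine):

import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 2^k datawords, encode each, and record the Hamming weight.
weights = []
for d in itertools.product([0, 1], repeat=4):
    c = np.mod(np.dot(d, G), 2)
    if c.any():                      # skip the all-zero codeword
        weights.append(int(c.sum()))

print("dmin = wmin =", min(weights))   # 3 for this code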
PARITY-CHECK MATRIX
DECODING USING H MATRIX
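
The H matrix itself is not reproduced in these notes. As a sketch: for a systematic generator G = [Ik | P], a parity-check matrix can be taken as H = [P^T | I(n-k)], and every codeword then satisfies cH^T = 0. In Python, using the earlier (7,4) generator (names are mine):

import numpy as np

k, n = 4, 7
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

P = G[:, k:]                                      # k x (n-k) parity part
H = np.hstack([P.T, np.eye(n - k, dtype=int)])    # H = [P^T | I(n-k)]

# Every row of G (hence every codeword) gives a zero syndrome.
assert not np.mod(np.dot(G, H.T), 2).any()
print(H)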
DECODING
• Let c be transmitted and r be received, where
   r = c + e
• e = error pattern = (e1 e2 … en), where
   ei = 1 if an error has occurred in the i-th location
   ei = 0 otherwise
• The weight of e determines the number of errors.
• If the error pattern can be determined, decoding can be achieved by:
   c = r + e
DECODING (CONT’D)
Consider the (7,4) code.
(1) Let 1101000 be transmitted and 1100000 be received.
    Then: e = 0001000 (an error in the fourth location)
(2) Let r = 1110100. What was transmitted? Possible scenarios:
         c          e
    #2   0110100    1000000
    #1   1101000    0011100
    #3   1011100    0101000
    The first scenario (#2, a single-bit error) is the most probable.
SYNDROME DECODING
Decoding Procedure:
1. For the received vector r, compute the syndrome s = rH^T.
2. Using the table, identify the error pattern e.
3. Add e to r to recover the transmitted codeword c. (This procedure is sketched below.)
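
A sketch of this procedure for single-error correction, using the same (7,4) parity-check matrix as in the earlier sketch; the syndrome table maps each single-bit error pattern to its syndrome (all names are illustrative):

import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
n = H.shape[1]

# Build the syndrome table for all single-bit error patterns.
table = {}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    table[tuple(np.mod(np.dot(e, H.T), 2))] = e

def correct(r):
    s = tuple(np.mod(np.dot(r, H.T), 2))        # 1. compute s = r H^T
    e = table.get(s, np.zeros(n, dtype=int))    # 2. look up the error pattern
    return np.mod(r + e, 2)                     # 3. c = r + e (mod 2)

r = np.array([1, 0, 1, 1, 0, 1, 1])             # codeword 1011010 with its last bit flipped
print(correct(r))                               # -> [1 0 1 1 0 1 0]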
DECODING (CONT’D)
CYCLIC CODES

• The Hamming code is useful, but there exist codes that offer the same (if not greater) error control capability while being much simpler to implement.
• A cyclic code is a linear code in which any cyclic shift of a codeword is still a codeword (see the quick check below).
• This makes encoding/decoding much simpler; no matrix multiplication is needed.
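
A quick Python check of the cyclic-shift property (a sketch; g(x) = x^3 + x + 1 is a standard (7,4) cyclic-code generator assumed here, not taken from the slides):

def polymul(a, b):
    # Carry-less (mod-2) polynomial multiplication, polynomials held as ints.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

n = 7
g = 0b1011                                   # g(x) = x^3 + x + 1 (assumed generator)
code = {polymul(d, g) for d in range(16)}    # all 16 codewords, c(x) = d(x) g(x)

def rotate(c):
    # One cyclic shift of an n-bit codeword.
    return ((c << 1) | (c >> (n - 1))) & ((1 << n) - 1)

# Any cyclic shift of a codeword is still a codeword.
assert all(rotate(c) in code for c in code)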
CYCLIC CODES

• Polynomial representation of cyclic codes:
   C(x) = Cn-1 x^(n-1) + Cn-2 x^(n-2) + … + C1 x + C0,
   where, in this course, the coefficients belong to the binary field {0,1}.
• That is, if the codeword is (1010011) (c6 first, c0 last), we write it as x^6 + x^4 + x + 1.
• Addition and subtraction of polynomials – done by performing binary addition or subtraction on each bit individually, with no carry and no borrow.
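
For instance, a small helper (illustrative only) that prints the polynomial for a codeword given most-significant coefficient first:

def poly_str(bits):
    # bits is a string c(n-1) ... c0, most-significant coefficient first.
    n = len(bits)
    terms = []
    for i, b in enumerate(bits):
        p = n - 1 - i
        if b == "1":
            terms.append("1" if p == 0 else "x" if p == 1 else f"x^{p}")
    return " + ".join(terms) if terms else "0"

print(poly_str("1010011"))   # -> x^6 + x^4 + x + 1, as in the slide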
CYCLIC CODES
• Subclass of Linear Block Codes.
• Described in polynomial form
• Cyclic Linear Block Code Theorem
• Special Case: Systematic Cyclic Codes
• Systematic Cyclic Code as Linear Block Code
• Decoding Algorithm of Systematic Cyclic Code
• Encoding Circuit for generating Systematic Cyclic Code:

   c(x) = x^(n-k) d(x) + ρ(x);   ρ(x) = Rem[ x^(n-k) d(x) / g(x) ]
   g(x) = x^(n-k) + g1 x^(n-k-1) + … + g(n-k-1) x + 1
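
A Python sketch of this systematic encoding rule, again assuming the (7,4) code with g(x) = x^3 + x + 1 (the generator is an assumed example, not from the slides):

def polymod(a, b):
    # Remainder of mod-2 polynomial division (polynomials held as ints).
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

n, k = 7, 4
g = 0b1011                          # g(x) = x^3 + x + 1 (assumed)

def encode_systematic(d):
    # c(x) = x^(n-k) d(x) + rho(x),  rho(x) = Rem[ x^(n-k) d(x) / g(x) ]
    shifted = d << (n - k)
    return shifted ^ polymod(shifted, g)

c = encode_systematic(0b1001)
print(format(c, "07b"))             # message bits 1001 followed by 3 parity bits
assert polymod(c, g) == 0           # every codeword is divisible by g(x)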
