
Unit-II

Error control codes (FEC)


Objectives
Definitions
• A word is a sequence of symbols.
• A code is a set of vectors called codewords.
• The Hamming weight w(c) of a codeword (or any vector) is the number of non-zero elements in the codeword.
• The Hamming distance d(c1, c2) between two codewords is the number of positions in which the codewords differ.
• d(c1, c2) = w(c1 – c2).
Properties
Example.1
Example.2
Block Code (n, k)
Example
• Code C = {00000, 10100, 11110, 11001} is a
block code of block length equal to 5.

• M = 4, k = 2 and n = 5
• The sequence to be encoded is 1 0 0 1 0 1 0 0 1 1 …
• Partitioned: 10 01 01 00 11 …
• Codewords: 11110 10100 10100 00000 11001 …
Code Rate
Minimum Distance
Minimum Weight
Properties
Parity-check codes

minimum weight w*= 2

d* = 2.
Algebra - Field
Group, Ring & Field
Galois Fields – Finite Field
Properties
GF(2)
GF(7)
GF(4)
GF(6) Ring
Error control techniques

• Automatic Repeat reQuest (ARQ)


– Full-duplex connection, error detection codes
– The receiver sends feedback to the transmitter indicating whether an error is detected in the received packet (Negative Acknowledgement, NACK) or not (Acknowledgement, ACK).
– The transmitter retransmits the previously sent packet if it receives a NACK (a toy sketch follows after this list).
• Forward Error Correction (FEC)
– Simplex connection, error correction codes
– The receiver tries to correct some errors
• Hybrid ARQ (ARQ+FEC)
– Full-duplex, error detection and correction codes
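As a toy illustration of the ARQ mechanism, the sketch below simulates a stop-and-wait exchange. The channel model, the single even-parity bit used for error detection, and all function names are assumptions made only for this illustration.

# Toy stop-and-wait ARQ: the receiver checks one even-parity bit and replies
# ACK/NACK; the transmitter repeats the packet until an ACK arrives.
# Note: a single parity bit only detects an odd number of bit errors.
import random

def send_with_arq(data_bits, flip_prob=0.2, max_tries=10):
    packet = data_bits + [sum(data_bits) % 2]          # append even-parity bit
    for attempt in range(1, max_tries + 1):
        received = [b ^ (random.random() < flip_prob) for b in packet]
        if sum(received) % 2 == 0:                     # parity OK -> send ACK
            return received[:-1], attempt
        # parity failed -> send NACK, transmitter retransmits
    raise RuntimeError("gave up after repeated NACKs")

data, tries = send_with_arq([1, 0, 1, 1, 0])
print(data, "delivered after", tries, "transmission(s)")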
Linear block codes

• The information bit stream is chopped into blocks of k bits.


• Each block is encoded to a larger block of n bits.
• The coded bits are modulated and sent over channel.
• The reverse procedure is done at the receiver.

[Figure: data block (k bits) → channel encoder → codeword (n bits); n − k redundant bits are added]

Code rate:  Rc = k / n
Linear block codes – cont’d

• The Hamming weight of a vector U, denoted w(U), is the number of non-zero elements in U.

• The Hamming distance between two vectors U and V is the number of elements in which they differ:

d(U, V) = w(U ⊕ V)

• The minimum distance of a block code is

dmin = min_{i≠j} d(Ui, Uj) = min_i w(Ui)
Linear block codes – cont’d

• Error detection capability is given by

e = dmin − 1

• The error-correcting capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is

t = ⌊(dmin − 1) / 2⌋
Linear block codes – cont’d

• For memoryless channels, the probability that the decoder commits an erroneous decoding (block error probability) is bounded by

PM ≤ Σ_{j=t+1}^{n} C(n, j) p^j (1 − p)^(n−j)

– p is the transition probability (bit error probability) of the channel; C(n, j) is the binomial coefficient.

• The decoded bit error probability is approximately

PB ≈ (1/n) Σ_{j=t+1}^{n} j · C(n, j) p^j (1 − p)^(n−j)
Linear block codes – cont’d

• Encoding in (n,k) block code

U = mG

(u1, u2, …, un) = (m1, m2, …, mk) · [V1 ; V2 ; … ; Vk]    (V1, …, Vk are the rows of G)

(u1, u2, …, un) = m1·V1 + m2·V2 + … + mk·Vk    (the codeword)

– The rows of G are linearly independent.
– The sum of any two codewords is also a codeword.
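The encoding operation U = mG is easy to sketch in code: the codeword is the modulo-2 sum of the rows of G selected by the non-zero message bits. The example below uses, as test data, the generator matrix of the (6,3) code discussed on the following slides; the function name is illustrative.

# Block encoding U = m G over GF(2).
def encode(m, G):
    n = len(G[0])
    u = [0] * n
    for bit, row in zip(m, G):
        if bit:
            u = [(a + b) % 2 for a, b in zip(u, row)]
    return u

# Generator matrix of the (6,3) example that follows.
G = [[1,1,0,1,0,0],
     [0,1,1,0,1,0],
     [1,0,1,0,0,1]]
print(encode([1,1,0], G))   # [1,0,1,1,1,0] -> codeword 101110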
Linear block codes – cont’d
• Systematic block code (n,k)
– For a systematic code, the first (or last) k elements in
the codeword are information bits.

G = [P | Ik]

Ik : k × k identity matrix
P : k × (n − k) coefficient matrix; it gives the dependency of the parity bits on the message bits

U = (u1, u2, …, un) = (p1, p2, …, p_{n−k}, m1, m2, …, mk)
    where p1, …, p_{n−k} are the parity bits and m1, …, mk are the message bits
Linear block codes – cont’d

• For any linear (n, k) code we can find an (n − k) × n matrix H whose rows are orthogonal to the rows of G:

G H^T = 0

• H is called the parity check matrix and its rows are linearly independent.
• For systematic linear block codes:

H = [I_{n−k} | P^T]
Example (7,4)
Linear block codes – cont’d

• Example: Block code (6,3)


      [V1]   [1 1 0 1 0 0]
G =   [V2] = [0 1 1 0 1 0]
      [V3]   [1 0 1 0 0 1]

Message vector    Codeword
000               000000
100               110100
010               011010
110               101110
001               101001
101               011101
011               110011
111               000111
Encoder Circuit
Encoder Circuit (7,4)
Linear block codes – cont’d

[Figure: transmission chain — Data source → Format → Channel encoding (m → U) → Modulation → Channel → Demodulation/Detection → Channel decoding (r → m̂) → Format → Data sink]

r = U + e
r = (r1, r2, …, rn)   received codeword or vector
e = (e1, e2, …, en)   error pattern or vector
• Syndrome testing:
– S is the syndrome of r, corresponding to the error pattern e.

S = r H^T = (U + e) H^T = m G H^T + e H^T = e H^T      (since G H^T = 0)
Linear block codes – cont’d

• Standard array and syndrome table decoding


1. Calculate S = r H^T.
2. Find the coset leader ê = e_i corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.

Û = r + ê = (U + e) + ê = U + (e + ê)

– Note that
• If ê = e, the error is corrected.
• If ê ≠ e, an undetectable decoding error occurs.
Linear block codes – cont’d

U = (101110) is transmitted; r = (001110) is received.

Error pattern    Syndrome
000000           000
000001           101
000010           011
000100           110
001000           001
010000           010
100000           100
010001           111

The syndrome of r is computed:  S = r H^T = (001110) H^T = (100)
The error pattern corresponding to this syndrome is ê = (100000).
The corrected vector is estimated:  Û = r + ê = (001110) + (100000) = (101110)
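The whole decoding example above can be reproduced with a short sketch. H below is the parity check matrix H = [I_3 | P^T] of the (6,3) code, and the coset-leader table is built from the error patterns listed in the syndrome table; the variable names are illustrative.

# Syndrome-table decoding for the (6,3) example: S = r H^T selects a coset
# leader (estimated error pattern), which is added back to r.
H = [[1,0,0,1,0,1],
     [0,1,0,1,1,0],
     [0,0,1,0,1,1]]          # H = [I_3 | P^T] for the (6,3) code above

def syndrome(r, H):
    # S = r H^T: each syndrome bit is the mod-2 inner product of r with a row of H.
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# Coset leaders indexed by syndrome (single-error patterns plus 010001).
coset_leader = {syndrome(e, H): e for e in
                [[0,0,0,0,0,0], [0,0,0,0,0,1], [0,0,0,0,1,0], [0,0,0,1,0,0],
                 [0,0,1,0,0,0], [0,1,0,0,0,0], [1,0,0,0,0,0], [0,1,0,0,0,1]]}

r = [0,0,1,1,1,0]                      # received vector
e_hat = coset_leader[syndrome(r, H)]   # syndrome (1,0,0) -> e = 100000
u_hat = [(a + b) % 2 for a, b in zip(r, e_hat)]
print(u_hat)                           # [1,0,1,1,1,0] = transmitted codeword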
Repetition Codes
• (n,1) code, e.g. (5,1)
• G = [1 1 1 1 1]
• H = [1 0 0 0 1
       0 1 0 0 1
       0 0 1 0 1
       0 0 0 1 1]
Hamming codes
 Hamming codes are a subclass of linear block codes and belong to
the category of perfect codes.
 Hamming codes are expressed as a function of a single integer
m ≥ 2:

Code length: n = 2^m − 1
Number of information bits: k = 2^m − m − 1
Number of parity bits: n − k = m
Error correction capability: t = 1

 The columns of the parity-check matrix, H, consist of all non-zero


binary m-tuples.
Hamming codes
• Example: Systematic Hamming code (7,4)
1 0 0 0 1 1 1
H  0 1 0 1 0 1 1  [I 33 PT ]
0 0 1 1 1 0 1
0 1 1 1 0 0 0 
1 0 1 0 1 0 0
G   [P I 44 ]
1 1 0 0 0 1 0
 
1 1 1 0 0 0 1 
Cyclic block codes
• Cyclic codes are a subclass of linear block
codes.
• Encoding and syndrome calculation are easily
performed using feedback shift-registers.
– Hence, relatively long block codes can be
implemented with reasonable complexity.
• BCH and Reed-Solomon codes are cyclic codes.
Cyclic block codes

• A linear (n,k) code is called a Cyclic code if every cyclic shift of a codeword is also a codeword.

– Linearity and Cyclic property


“i” cyclic shifts of U:

U = (u0, u1, u2, …, u_{n−1})
U^(i) = (u_{n−i}, u_{n−i+1}, …, u_{n−1}, u0, u1, u2, …, u_{n−i−1})

– Example:
U = (1101)
U^(1) = (1110)   U^(2) = (0111)   U^(3) = (1011)   U^(4) = (1101) = U
Cyclic block codes
• The algebraic structure of Cyclic codes implies expressing codewords in polynomial form:

U(X) = u0 + u1·X + u2·X^2 + … + u_{n−1}·X^{n−1}      (degree n − 1)

• Relationship between a codeword and its cyclic shifts:

X^i U(X) = X^i (u0 + u1·X + … + u_{n−i−1}·X^{n−i−1} + u_{n−i}·X^{n−i} + … + u_{n−1}·X^{n−1})

         = u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1} + u_{n−i}·X^n + … + u_{n−1}·X^{n+i−1}

         = u_{n−i}·X^n + … + u_{n−1}·X^{n+i−1} + u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1}

Cyclic block codes

X^i U(X) = u_{n−i}·X^n + … + u_{n−1}·X^{n+i−1} + u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1}

Adding the zero-valued pairs u_{n−i} + u_{n−i}, …, u_{n−1}·X^{i−1} + u_{n−1}·X^{i−1} (addition is modulo 2):

= u_{n−i} + (u_{n−i} + u_{n−i}·X^n) + … + u_{n−1}·X^{i−1} + (u_{n−1}·X^{i−1} + u_{n−1}·X^{n+i−1}) + u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1}

= u_{n−i}·(X^n + 1) + … + u_{n−1}·X^{i−1}·(X^n + 1) + u_{n−i} + … + u_{n−1}·X^{i−1} + u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1}


Cyclic block codes
Let

U^(i)(X) = u_{n−i} + … + u_{n−1}·X^{i−1} + u0·X^i + u1·X^{i+1} + … + u_{n−i−1}·X^{n−1}

q(X) = u_{n−i} + u_{n−i+1}·X + … + u_{n−1}·X^{i−1}

Then  X^i U(X) = q(X)·(X^n + 1) + U^(i)(X), that is

U^(1)(X) = X·U(X)  modulo (X^n + 1)

and, by extension,

U^(i)(X) = X^i·U(X)  modulo (X^n + 1)
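The identity U^(i)(X) = X^i U(X) modulo (X^n + 1) can be checked with a small sketch. Polynomials over GF(2) are stored as integers whose bit i is the coefficient of X^i; the helper names are illustrative. The example uses U = (1101) from the earlier slide.

# Check that X^i U(X) mod (X^n + 1) really produces the i-th cyclic shift.
def poly_mod(a, m):
    """Remainder of a(X) divided by m(X) over GF(2) (integer bit representation)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def cyclic_shift(bits, i):
    """i cyclic shifts of a codeword given as (u0, u1, ..., u_{n-1})."""
    return bits[-i:] + bits[:-i]

n = 4
u = [1, 1, 0, 1]                         # U = (1101) -> U(X) = 1 + X + X^3
u_poly = sum(b << k for k, b in enumerate(u))
m_poly = (1 << n) | 1                    # X^n + 1

for i in range(1, n + 1):
    shifted = poly_mod(u_poly << i, m_poly)          # X^i U(X) mod (X^n + 1)
    print([(shifted >> k) & 1 for k in range(n)] == cyclic_shift(u, i))  # True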
Cyclic block codes

Let C be a binary (n,k) linear cyclic code

 Within the set of code polynomials in C, there is a


unique monic polynomial g(X) with minimal degree r, called the generator polynomial:

g(X) = g0 + g1·X + … + gr·X^r

 g(X) is a polynomial of degree r = n − k and is a factor of X^n + 1.

 Every code polynomial U(X) in C can be expressed uniquely as

U(X) = m(X)·g(X)
Cyclic block codes
m(X) = m0 + m1·X + … + m_{k−1}·X^{k−1}        (message polynomial)
p(X) = p0 + p1·X + … + p_{n−k−1}·X^{n−k−1}     (parity polynomial)

In systematic form,

U(X) = p(X) + X^{n−k}·m(X)

Dividing X^{n−k}·m(X) by g(X) gives a quotient q(X) and remainder p(X):

X^{n−k}·m(X) = q(X)·g(X) + p(X),   i.e.   X^{n−k}·m(X) / g(X) = q(X) + p(X) / g(X)

so that U(X) = p(X) + X^{n−k}·m(X) = q(X)·g(X) is a multiple of g(X).
Cyclic block codes

• Systematic encoding algorithm for an


(n,k) Cyclic code:
1. Multiply the message polynomial m(X) by X^{n−k}.

2. Divide the result of Step 1 by the generator polynomial g(X). Let p(X) be the remainder.

3. Add p(X) to X^{n−k}·m(X) to form the codeword U(X).
Cyclic block codes

 The orthogonality of G and H in polynomial form is


expressed as

g(X)·h(X) = X^n + 1

 This means h(X) is also a factor of X^n + 1.

 Row i, i = 1, …, k, of the generator matrix is formed by the coefficients of the (i − 1)th cyclic shift of the generator polynomial:

       [ g(X)         ]   [ g0 g1 … gr  0  …  0 ]
       [ X·g(X)       ]   [ 0  g0 g1 … gr  …  0 ]
G =    [      ⋮        ] = [          ⋱          ]
       [ X^(k−1)·g(X) ]   [ 0  …  0  g0 g1 … gr ]
Cyclic block codes

• Example: For the systematic (7,4) Cyclic code


with generator polynomial g(X) = 1 + X + X^3:

1. Find the codeword for the message m = (1011).

n = 7, k = 4, n − k = 3
m = (1011)  →  m(X) = 1 + X^2 + X^3
X^{n−k}·m(X) = X^3·m(X) = X^3·(1 + X^2 + X^3) = X^3 + X^5 + X^6

Divide X^{n−k}·m(X) by g(X):

X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)·(1 + X + X^3) + 1
                  quotient q(X)        generator g(X)   remainder p(X)

Form the codeword polynomial:

U(X) = p(X) + X^3·m(X) = 1 + X^3 + X^5 + X^6   →   U = (1 0 0 1 0 1 1)
                                                        parity bits | message bits
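The three encoding steps (multiply by X^{n−k}, divide by g(X), append the remainder) are reproduced by the sketch below, which recovers the codeword U = (1001011) for m = (1011) and g(X) = 1 + X + X^3. The integer bit-vector representation and the function names are illustrative.

# Systematic cyclic encoding: U(X) = p(X) + X^(n-k) m(X), where p(X) is the
# remainder of X^(n-k) m(X) divided by g(X). GF(2) polynomials are integers
# with bit i holding the coefficient of X^i.
def poly_mod(a, g):
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def cyclic_encode(m_bits, g_poly, n):
    k = len(m_bits)
    m_poly = sum(b << i for i, b in enumerate(m_bits))
    shifted = m_poly << (n - k)              # X^(n-k) m(X)
    p_poly = poly_mod(shifted, g_poly)       # remainder p(X)
    u_poly = shifted ^ p_poly                # p(X) + X^(n-k) m(X)
    return [(u_poly >> i) & 1 for i in range(n)]

g = 0b1011                                   # g(X) = 1 + X + X^3
print(cyclic_encode([1, 0, 1, 1], g, 7))     # [1,0,0,1,0,1,1] -> U = (1001011)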
Cyclic block codes

2. Find the generator and parity check matrices, G and H,


respectively.
g(X) = 1 + 1·X + 0·X^2 + 1·X^3  →  (g0, g1, g2, g3) = (1101)

      [1 1 0 1 0 0 0]      Not in systematic form. We do the following:
G =   [0 1 1 0 1 0 0]      row(1) + row(3) → row(3)
      [0 0 1 1 0 1 0]      row(1) + row(2) + row(4) → row(4)
      [0 0 0 1 1 0 1]

      [1 1 0 1 0 0 0]                      [1 0 0 1 0 1 1]
G =   [0 1 1 0 1 0 0]  = [P | I_4×4]   H = [0 1 0 1 1 1 0]  = [I_3×3 | P^T]
      [1 1 1 0 0 1 0]                      [0 0 1 0 1 1 1]
      [1 0 1 0 0 0 1]
Cyclic block codes

• Syndrome decoding for Cyclic codes:


– The received codeword in polynomial form is given by

r(X) = U(X) + e(X)

where r(X) is the received codeword and e(X) the error pattern.

– The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:

r(X) = q(X)·g(X) + S(X)      (S(X) = syndrome)

– With the syndrome and the standard array, the error is estimated.

• In Cyclic codes, the size of the standard array is considerably reduced.


Circuit for encoding systematic cyclic codes

[Figure: encoder circuit — an (n − k)-stage feedback shift register with a feedback switch (positions '0' and '1') and a check-bit switch]

• In the circuit, the message first flows into the shift register with the feedback switch set to '1'; afterwards the check-bit switch is turned on and the feedback switch is set to '0', enabling the check bits to be output.
Decoding cyclic codes: syndrome table

s(x) = e(x) mod g(x)      (16.20)

Using the notation of this example:

Decoding cyclic codes: error correction (Table 16.6)

s(x) = r(x) mod g(x)
Decoding circuit for (7,4) code: syndrome computation

[Figure: syndrome computation circuit — the received code bits x0, x1, …, x_{n−1} are shifted into a 3-stage register; a switch selects position '0' (shift in) or '1' (shift out)]

G(p) = p^3 + p + 1

• To start with, the switch is in position '0'.
• The shift register is then stepped until all the received code bits have entered the register.
• This results in a 3-bit syndrome (n − k = 3):
S(p) = R(p) mod G(p)
which is then left in the register.
• The switch is then turned to position '1', which drives the syndrome out of the register.
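The division performed by the syndrome register can be imitated bit-serially in software. The sketch below clocks the received bits in highest-degree coefficient first and leaves S(p) = R(p) mod G(p) in the register; it is only a functional model and may differ from the exact gate-level convention of the figure.

# Bit-serial syndrome computation for the (7,4) cyclic code with
# G(p) = p^3 + p + 1, mimicking the divider shift register.
def serial_syndrome(r_bits, g_int, n_k):
    """r_bits = (r0, r1, ..., r_{n-1}); g_int encodes G(p) including p^(n-k)."""
    reg = 0
    for bit in reversed(r_bits):          # clock in the X^(n-1) coefficient first
        reg = (reg << 1) | bit
        if reg >> n_k:                    # degree reached n-k: subtract G(p)
            reg ^= g_int
    return [(reg >> i) & 1 for i in range(n_k)]

g = 0b1011                                # G(p) = p^3 + p + 1
print(serial_syndrome([1,0,0,1,0,1,1], g, 3))   # codeword -> [0, 0, 0]
print(serial_syndrome([0,0,0,1,0,1,1], g, 3))   # error in first bit -> non-zero syndrome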
Some block codes that can be
realized by cyclic codes
• (n,1) Repetition codes. High coding gain (minimum distance is always n), but very low rate: 1/n.
• (n,k) Hamming codes. Minimum distance is always 3, so they can detect 2 errors and correct one error. n = 2^m − 1, k = n − m, m ≥ 3.
• Maximum-length codes. For every integer k ≥ 3 there exists a maximum-length code (n,k) with n = 2^k − 1 and dmin = 2^(k−1).
• BCH codes. For every integer m ≥ 3 there exists a code with n = 2^m − 1, k ≥ n − mt and dmin ≥ 2t + 1, where t is the error correction capability.
• (n,k) Reed-Solomon (RS) codes. They work with k symbols of m bits each, which are encoded to yield codewords of n symbols. For these codes n = 2^m − 1, the number of check symbols is n − k = 2t, and dmin = 2t + 1.
• Nowadays BCH and RS codes are very popular due to large dmin, a large number of codes, and easy generation.
• Code selection criteria: number of codes, correlation properties, coding gain, code rate, error correction/detection properties.

Task: find out from the literature what is meant by dual codes!


Equivalent Codes
Example
Singleton bound
Nearest Neighbour decoding
Example
Perfect Code
Coset
Example
Example
Syndrome Decoding
Probability of Error Correction
Hamming Bound or Sphere Packing Bound
Cyclic code
Polynomial
Monic
Example
Polynomial Division
Field
Ring
