

Chapter 5: Linear Block Codes

Vahid Meghdadi

University of Limoges
meghdadi@ensil.unilim.fr

Vahid Meghdadi Chapter 5: Linear Block Codes


Outline

1. Basic principles
   - Introduction
   - Block codes
   - Error detection
   - Error correction
2. Linear Block Coding




Introduction

- Transmission goes through a noisy channel.
- Transmission errors can occur: 1's become 0's and 0's become 1's.
- To correct the errors, some redundancy bits are added to the information sequence; at the receiver, this redundancy is exploited to locate the transmission errors.
- Here, only binary transmission is considered.
- In this chapter we try to add this redundancy to the information bits optimally.


Block codes (1/3)

Definition of block coding:

- A message m is a binary sequence of size k: m ∈ A^k, where A = {0, 1}.
- To each message m corresponds a codeword c, a binary sequence of size n, with n > k.


Block codes (2/3)

- The code space contains 2^n points, but only 2^k of them are valid codewords.
- A code must be a one-to-one mapping (injection).
Example:
m ∈ {00, 01, 10, 11}
c ∈ {000, 011, 101, 110}
This is a single parity-check code: 001, 010, 100, 111 ∉ C.


Block code (3/3)

Definition:
The rate of a code is defined as k/n. For example, for the previous code n = 3 and k = 2, so the rate is 2/3.


Linear codes

Definition:
A binary code is linear if the following condition is satisfied:

∀c1 ∈ C, ∀c2 ∈ C : c1 + c2 ∈ C

Example: with the previous code,

011 + 101 = 110 ∈ C
011 + 110 = 101 ∈ C
101 + 110 = 011 ∈ C


Hamming distance and Hamming weight (1/2)

Definition:
The Hamming weight of a binary sequence is the number of 1s in the sequence.

w([101]) = 2, w([11101]) = 4

Definition:
The Hamming distance between two binary sequences v and w is the number of places where they differ.
Example:
v = [110101], w = [110011] ⇒ d(v, w) = 2


Hamming distance and Hamming weight (2/2)

Property 1: d(v, w) + d(w, x) ≥ d(v, x) (triangle inequality)
Property 2: d(v, w) = w(v + w)
Example:
v = [110101], w = [110011]
v + w = [000110], so d(v, w) = w(v + w) = 2
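These definitions translate directly into a few lines of code. A minimal sketch (function names are illustrative, not from the slides):

```python
def hamming_weight(v):
    """Hamming weight: the number of 1s in a binary sequence."""
    return sum(v)

def hamming_distance(v, w):
    """Hamming distance: the number of positions where v and w differ."""
    return sum(a != b for a, b in zip(v, w))

v = [1, 1, 0, 1, 0, 1]
w = [1, 1, 0, 0, 1, 1]
# Property 2: d(v, w) = w(v + w), where + is bitwise modulo-2 addition (XOR)
vw = [a ^ b for a, b in zip(v, w)]
print(hamming_distance(v, w), hamming_weight(vw))  # 2 2
```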


Minimum distance of block code

dmin = min{d(v, w) : v, w ∈ C, v ≠ w}

If the code is linear:

dmin = min{w(v + w) : v, w ∈ C, v ≠ w}
     = min{w(x) : x ∈ C, x ≠ 0}
     = wmin

Theorem: the minimum distance of a linear block code is equal to the minimum weight of its non-zero codewords.
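The theorem can be checked by brute force on the single parity-check code above (an illustrative sketch; variable names are ours):

```python
from itertools import combinations

# Sketch: check dmin = wmin on the (3, 2) single parity-check code.
C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def d(v, w):
    return sum(a != b for a, b in zip(v, w))

dmin = min(d(v, w) for v, w in combinations(C, 2))
wmin = min(sum(c) for c in C if any(c))  # minimum weight of non-zero codewords
print(dmin, wmin)  # 2 2
```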


Good codes
A good code is one whose minimum distance is large.
For the previous example dmin = 2:

{00, 01, 10, 11} ⇒ {000, 011, 101, 110}


Error detection (1/3)

- The noisy channel introduces errors: c is sent, r is received, where r = c + e.
- e is called the error pattern; w(e) is the number of errors that occurred during the transmission.
- For a code with minimum distance dmin, any two distinct codewords differ in at least dmin places.
- No error pattern of weight dmin − 1 or less can change one codeword into another.


Error detection (2/3)

- Any error pattern of weight at most dmin − 1 can be detected.
- Some error patterns of weight dmin or larger can be detected as well.
Example: C = {000, 011, 101, 110} with dmin = 2:
- any single error is detected;
- the weight-3 error pattern [111] is detected as well, since it is not a codeword.


Error detection (3/3)

- If c + e ∈ C with e ≠ 0, the error cannot be detected. Because the code is linear, this happens exactly when e is itself a non-zero codeword.
- So, for an (n, k) code, there are exactly 2^n − 2^k error patterns that can be detected.
Example: for the previous code, all the following error patterns can be detected:
{[001], [010], [100], [111]}
For this code n = 3, k = 2, so 2^n − 2^k = 4.
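The count 2^n − 2^k can be verified by enumeration (an illustrative sketch):

```python
from itertools import product

# Sketch: enumerate the detectable error patterns of the (3, 2) single
# parity-check code, i.e. the patterns that are not codewords.
C = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
n, k = 3, 2

detectable = [e for e in product((0, 1), repeat=n) if e not in C]
print(detectable)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
print(len(detectable) == 2 ** n - 2 ** k)  # True
```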


Error Detection Capacity

- Error detection fails only when the error pattern is identical to a non-zero codeword: in that case c + e ∈ C and the error is not detected.
Definition: weight distribution
- Let Ai be the number of codewords of weight i.
- A0, A1, ..., An are called the weight distribution of the code.


Code example

For this example, a (7,4) code:

A0 = 1, A1 = A2 = 0, A3 = 7, A4 = 7, A5 = A6 = 0, A7 = 1

Undetected Error Probability

Definition: Pu(E), the probability of undetected error:

Pu(E) = Prob(e ∈ C, e ≠ 0)

In a BSC channel with crossover probability p:

Pu(E) = Σ_{i=1}^{n} Ai p^i (1 − p)^(n−i)


Example

Considering the previous (7,4) code:

Pu(E) = 7p^3(1 − p)^4 + 7p^4(1 − p)^3 + p^7

For example, if p = 0.01 then Pu(E) ≈ 7 × 10^−6.

[BSC diagram: 0 → 0 and 1 → 1 with probability 1 − p; 0 → 1 and 1 → 0 with probability p]
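The numerical value can be checked with a short computation (a sketch; the helper name is ours):

```python
# Sketch: evaluate Pu(E) for the (7, 4) code from its weight distribution.
def p_undetected(p, A):
    n = len(A) - 1
    return sum(Ai * p ** i * (1 - p) ** (n - i) for i, Ai in enumerate(A) if i >= 1)

A = [1, 0, 0, 7, 7, 0, 0, 1]  # weight distribution A0..A7
print(p_undetected(0.01, A))  # about 6.8e-6, i.e. roughly 7e-6
```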


Error correction 1/2


How many errors can be corrected?
Let t be a positive integer such that 2t + 1 ≤ dmin < 2t + 2.
Let v be the transmitted sequence, r the received sequence, and w any other codeword. We can write:

d(v, r) + d(w, r) ≥ d(v, w)

Suppose that t′ errors occur, so d(v, r) = t′. For any v, w ∈ C with v ≠ w:

d(w, v) ≥ dmin ≥ 2t + 1

So d(w, r) ≥ 2t + 1 − t′. If t′ ≤ t, then:

d(w, r) ≥ t + 1 > t ≥ d(v, r)

Conclusion: for any error pattern of t or fewer errors, r is closer to the transmitted codeword v than to any other codeword. So any t or fewer errors can be corrected.

Error correction 2/2

The number of vectors (n-tuples) at distance exactly u from a codeword is C(n, u).
Definition: the Hamming sphere of radius t around a codeword contains all vectors at Hamming distance up to t from it; its volume is

V(n, t) = Σ_{j=0}^{t} C(n, j)
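A sketch of the sphere-volume computation (using Python's `math.comb`; the function name is ours):

```python
from math import comb

# Sketch: Hamming sphere volume V(n, t) = sum of C(n, j) for j = 0..t.
def sphere_volume(n, t):
    return sum(comb(n, j) for j in range(t + 1))

# For the (7, 4) Hamming code: V(7, 1) = 8 = 2^(7-4), so the spheres of
# radius 1 around the 2^4 codewords fill the whole space (a perfect code).
print(sphere_volume(7, 1))  # 8
```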


Conclusion
- A block code with minimum distance dmin guarantees correction of all error patterns of weight t = ⌊(dmin − 1)/2⌋ or less.
- t = ⌊(dmin − 1)/2⌋ is called the error-correcting capability of the code.
- Some patterns of weight t + 1 or more can be corrected as well.
- In general, for a t-error-correcting (n, k) linear code, 2^(n−k) error patterns can be corrected.
- For a t-error-correcting (n, k) linear code using an optimal error correcting algorithm at the receiver, the probability that the receiver makes a wrong correction is upper bounded by

  P(E) < Σ_{i=t+1}^{n} C(n, i) p^i (1 − p)^(n−i)

Linear Block Coding



Introduction

- Given n and k, how can the code be designed?
- How can an information sequence be mapped to the corresponding codeword?
- We are interested in a systematic design.


Code subspace

- An (n, k) linear code is a k-dimensional subspace of the vector space of all binary n-tuples, so it is possible to find k linearly independent codewords g0, g1, ..., gk−1 that span this space.
- Any codeword can then be written as a linear combination of these basis vectors:

  c = m0·g0 + m1·g1 + ... + mk−1·gk−1


Generator matrix

We can arrange these k linearly independent codewords (vectors) as the rows of a k × n matrix:

    [ g0   ]   [ g0,0     g0,1     ...  g0,n−1   ]
G = [ g1   ] = [ g1,0     g1,1     ...  g1,n−1   ]
    [ ...  ]   [ ...      ...      ...  ...      ]
    [ gk−1 ]   [ gk−1,0   gk−1,1   ...  gk−1,n−1 ]

c = mG
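The encoding rule c = mG can be sketched as follows. The generator matrix below is an assumed example that spans the single parity-check code used earlier; it is not taken from the slides:

```python
# Sketch: encoding c = mG over GF(2) (bitwise XOR of the selected rows of G).
def encode(m, G):
    """Multiply message m by generator matrix G over GF(2)."""
    c = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

# Assumed generator matrix for the (3, 2) single parity-check code
G = [[0, 1, 1],
     [1, 0, 1]]
print(encode([1, 1], G))  # [1, 1, 0]
```

Running `encode` over all four messages reproduces exactly the codeword set {000, 011, 101, 110}.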


Equivalent code

- One can obtain the same code (same set of codewords) with a different generator matrix: for example, by exchanging any two rows of G.
- The only difference is that we obtain another mapping from information sequences to codewords.
- We can also replace any row by a linear combination of that row with another row.
- All the codes obtained this way are equivalent.


Systematic codes

Definition:
- If in every codeword we can find the corresponding information sequence unchanged, the code is called systematic. It is convenient to group these information bits either at the end or at the beginning of the codeword.
- In this case the generator matrix can be divided into two sub-matrices: G = [P | I].


Systematic codes

    [ g0   ]   [ p0,0     p0,1     ...  p0,n−k−1     1 0 ... 0 ]
G = [ g1   ] = [ p1,0     p1,1     ...  p1,n−k−1     0 1 ... 0 ]
    [ ...  ]   [ ...      ...      ...  ...          ...       ]
    [ gk−1 ]   [ pk−1,0   pk−1,1   ...  pk−1,n−k−1   0 0 ... 1 ]

G = [P_{k×(n−k)}  I_{k×k}]

So for a given message m the codeword will be c = mG = [mP  m], a 1 × n vector. The first part of the codeword is the parity bits and the second is the systematic part.
Note: any linear code can be transformed into a systematic code.


Example

Example: u = [1101] and

    [ 1 1 1 1 0 0 0 ]
G = [ 1 0 1 0 1 0 0 ]
    [ 1 1 0 0 0 1 0 ]
    [ 0 1 1 0 0 0 1 ]

So the codeword will be [0011101]: the sum of the first, second and fourth rows of G.
The last four bits are the systematic part of the codeword.
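The example can be reproduced in a few lines (an illustrative sketch):

```python
# Sketch: reproduce the example u = [1101] with the G given above.
G = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]
u = [1, 1, 0, 1]

c = [0] * 7
for bit, row in zip(u, G):
    if bit:  # rows 1, 2 and 4 of G are summed (mod 2)
        c = [a ^ b for a, b in zip(c, row)]
print(c)           # [0, 0, 1, 1, 1, 0, 1]
print(c[3:] == u)  # True: the last four bits are the message itself
```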


Parity check matrix

For any k × n matrix G with k linearly independent rows, there exists an (n − k) × n matrix H with n − k linearly independent rows such that any vector in the row space of G is orthogonal to the rows of H, and vice versa. So we can say:
- An n-tuple c is a codeword generated by G if and only if cH^T = 0. The matrix H is called the parity check matrix.
- The rows of H span a code space called the dual code of G.
- GH^T = 0


Parity check matrix of systematic codes

For the systematic code G = [P_{k×(n−k)}  I_{k×k}], the parity check matrix is simply H = [I_{(n−k)×(n−k)}  P^T_{(n−k)×k}]. (Why?)
Exercise: for the following generator matrix, give the generator matrix of its dual code.

    [ 1 1 1 1 0 0 0 ]
G = [ 1 0 1 0 1 0 0 ]
    [ 1 1 0 0 0 1 0 ]
    [ 0 1 1 0 0 0 1 ]

Note: a non-systematic linear code can be transformed into systematic form, and then the parity check matrix can be calculated. This parity check matrix can also be used for the original code.
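A sketch checking H = [I | P^T] against the G of the exercise (the construction follows the slide; variable names are ours):

```python
# Sketch: build H = [I | P^T] from the systematic G = [P | I] above,
# then verify G H^T = 0 over GF(2).
G = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]
k, n = 4, 7

P = [row[:n - k] for row in G]                             # k x (n-k)
I = [[int(i == j) for j in range(n - k)] for i in range(n - k)]
Pt = [[P[i][j] for i in range(k)] for j in range(n - k)]   # transpose of P
H = [I[j] + Pt[j] for j in range(n - k)]                   # (n-k) x n

# Every row of G must be orthogonal (mod 2) to every row of H
ok = all(sum(a & b for a, b in zip(g, h)) % 2 == 0 for g in G for h in H)
print(ok)  # True
```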


Minimum distance of the code

Theorem: the minimum distance of a code is equal to the smallest number of columns of H that are linearly dependent. That is, every set of dmin − 1 or fewer columns of H is linearly independent, and there is some set of dmin columns that is linearly dependent.


Singleton bound

For an (n, k) linear code:

dmin ≤ n − k + 1

Proof: rank(H) = n − k, so any n − k + 1 columns of H must be linearly dependent (the row rank of a matrix is equal to its column rank). So dmin cannot be larger than n − k + 1 (by the previous theorem).
Definition: a code for which dmin = n − k + 1 is called a "maximum distance separable" (MDS) code.


Systematic Error Detection

- An n-tuple vector r is a codeword iff

  rH^T = 0

- The product S = rH^T is a 1 × (n − k) vector called the syndrome of r.
- If the syndrome is not zero, a transmission error is declared.


Systematic Error Correction

Maximum likelihood error correction:

ĉ = arg min_{c∈C} dH(c, r)

Should we test all the possibilities? For a (256,200) binary code, there are 2^200 ≈ 1.6 × 10^60 candidates to be verified!
Solution: syndrome decoding, which decreases the number of possibilities.


Syndrome Decoding

r is the received sequence:

r = c + e
S = rH^T = (c + e)H^T = cH^T + eH^T = eH^T

Conclusion: the syndrome is not a function of the transmitted codeword but only of the error pattern. So we can construct a table of possible error patterns with their corresponding syndromes.


Syndrome Table Construction

- There are 2^n possible received vectors.
- There are 2^k valid codewords.
- There are 2^(n−k) possible syndromes.
First we generate all the error patterns of weight 1, calculate the corresponding syndromes, and put them into a table.
Then we increase the error pattern weight and do the same thing as before.
Each time, if the new syndrome has already been saved, the pattern is thrown away and we continue with the next error pattern.
When all the 2^(n−k) syndromes are found, the table is complete.
For the previous example (256,200), the size of the table is 2^56 ≈ 7.2 × 10^16 entries, which is still too big.
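The table-construction procedure can be sketched as follows, using the H of the (7,3) code from the next exercise. Names and fill order are our implementation choices; with ties beyond the correcting capability t, the table simply keeps the first pattern found:

```python
from itertools import combinations

# Sketch: syndrome table built by increasing error-pattern weight,
# then used to decode a received word with a single error.
H = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
n = 7

def syndrome(r):
    return tuple(sum(a & b for a, b in zip(r, h)) % 2 for h in H)

table = {syndrome([0] * n): [0] * n}
w = 1
while len(table) < 2 ** 4 and w <= n:
    for pos in combinations(range(n), w):
        e = [1 if i in pos else 0 for i in range(n)]
        table.setdefault(syndrome(e), e)  # keep only the lightest pattern seen
    w += 1

r = [1, 1, 1, 1, 0, 1, 0]      # a codeword with one bit flipped
e = table[syndrome(r)]
c = [a ^ b for a, b in zip(r, e)]
print(e, c)  # [0, 1, 0, 0, 0, 0, 0] [1, 0, 1, 1, 0, 1, 0]
```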


Exercise

    [ 1 0 0 0 0 1 1 ]
H = [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 ]

It is a (7,3) code with dmin = 4. Complete the syndrome table, which has 16 rows:

Error pattern   Syndrome
0000000         0000
1000000         1000
...             ...
0000001         1101
1100000
1010000
...             ...
1110000         1110


Exercise

Suppose r = [0, 0, 1, 1, 0, 1, 1]. Calculate the syndrome and then correct r in the maximum likelihood sense. Use the same H as in the previous example.


Hamming Code
Definition: if the syndrome table is complete for all error patterns with w(e) ≤ t, the code is called perfect.
For example, for the (7, 4) code with t = 1, the syndrome table is completed by the error patterns of weight 1 alone. These codes with t = 1 are called Hamming codes.
For Hamming codes, there are 2^(n−k) − 1 non-zero syndromes. This number must be equal to the number of error patterns of weight 1, i.e. 2^(n−k) − 1 = n. Calling the number of parity bits m = n − k, we can write for a Hamming code:

k = 2^m − m − 1

So the valid Hamming codes are: (3, 1), (7, 4), (15, 11), ..., (2^m − 1, 2^m − m − 1).
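The parameter family can be enumerated directly (an illustrative sketch):

```python
# Sketch: valid Hamming-code parameters (n, k) = (2^m - 1, 2^m - m - 1),
# checking the perfect-code condition 2^(n-k) - 1 = n for t = 1.
params = []
for m in range(2, 6):
    n = 2 ** m - 1
    k = 2 ** m - m - 1
    assert 2 ** (n - k) - 1 == n  # number of non-zero syndromes equals n
    params.append((n, k))
print(params)  # [(3, 1), (7, 4), (15, 11), (31, 26)]
```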

Error Detection Performance Example

As we saw before, the probability that a received sequence is mistaken for a valid codeword is given by:

Pu(E) = Σ_{i=1}^{n} Ai p^i (1 − p)^(n−i)

For a (7,4) code the weight distribution is [1, 0, 0, 7, 7, 0, 0, 1], and for a (15,11) code it is [1, 0, 0, 35, 105, 168, 280, 435, 435, 280, 168, 105, 35, 0, 0, 1]. Using a BSC derived from a BPSK system, with p = Q(√(2R Eb/N0)), we can plot the probability that a received sequence is erroneously taken for a codeword.
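The quoted crossover probabilities (e.g. p = 0.0165 for R = 4/7 at 6 dB) can be reproduced with Q(x) = erfc(x/√2)/2 (a sketch using the standard-library `math.erfc`; the function name is ours):

```python
from math import erfc, sqrt

# Sketch: BSC crossover probability for coded BPSK, p = Q(sqrt(2 R Eb/N0)).
def bsc_p(rate, ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(sqrt(2 * rate * ebn0) / sqrt(2))

print(round(bsc_p(4 / 7, 6), 4))    # 0.0165
print(round(bsc_p(11 / 15, 6), 4))  # 0.0078
```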


Pu curves

[Plot: Pu versus SNR (dB) for the (7,4) and (15,11) codes]

- R = 4/7:
  Eb/N0 = 6 dB → p = 0.0165
  Eb/N0 = 10 dB → p = 3.6 × 10^−4
- R = 11/15:
  Eb/N0 = 6 dB → p = 0.0078
  Eb/N0 = 10 dB → p = 6.41 × 10^−5

It means that at 10 dB of Eb/N0, out of about 10^11 received sequences, just one codeword error goes undetected (this rate refers to codewords, not to bits).

Error correction probability for perfect codes

Up to t errors are corrected by a perfect code, where t = ⌊(dmin − 1)/2⌋. So the probability that a received sequence falls outside the Hamming sphere of radius t can be calculated easily; it is the probability of failure for a correcting receiver:

P(F) = 1 − Σ_{j=0}^{t} C(n, j) p^j (1 − p)^(n−j)
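A sketch of this computation (the p value is an illustrative assumption, not from the slides):

```python
from math import comb

# Sketch: failure probability of a t-error-correcting perfect code on a BSC.
def p_failure(n, t, p):
    return 1 - sum(comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(t + 1))

# (7, 4) Hamming code (t = 1) at an assumed crossover probability p = 0.01
print(p_failure(7, 1, 0.01))  # about 2.0e-3
```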


Error correction probability for perfect codes


[Plot: P(F) versus Eb/N0 (dB) for the (7,4) and (15,11) codes]

Note that this is not the bit error rate but the word error rate.

Soft Decision Decoding


- A BPSK modulation is used to send the codeword bits (−√Ec, +√Ec).
- The Hamming distance of two codewords is related to their Euclidean distance by dE = 2√(Ec · dH).
- With an (n, k, dmin) block code and AWGN noise with σ² = N0/2, a transmitted codeword is a point in n-dimensional space.
- Ec and Eb are related by Ec = R·Eb, where R = k/n is the code rate.
- Supposing there are on average K codewords at distance dmin from a codeword, the probability of block decoding error (by the union bound) is

  P(E) ≈ K · Q(√(2 R dmin Eb/N0))

- Neglecting K, the asymptotic coding gain over the uncoded system is R·dmin.

Modifications to Linear Codes: Extension


An (n + 1, k, d + 1) code can be constructed from an (n, k, d) code by:
1. adding a last all-zero column to H;
2. adding a last all-one row to the new H;
3. using linear operations to obtain an equivalent systematic code;
4. finding the generator matrix G.
Exercise: give the systematic generator matrix of the (8, 4, 4) code obtained from the following (7, 4, 3) code:

    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]


Modifications to Linear Codes: Puncturing

A code is punctured by deleting one column of the H matrix, transforming an (n, k) code into an (n − 1, k) code. The minimum distance of the code may decrease by one.
