
Communications Lab

Experiment 7: “Linear Block Codes”

Department: Communication Systems

Name: Matr.-Nr.:

Supervisor: Date:


The preparatory exercises must be solved prior to the date of the experiment.

Contents
1 Introduction

2 Theoretical background
  2.1 Linear block codes
    2.1.1 Encoding
    2.1.2 Decoding
  2.2 Reed-Solomon codes
    2.2.1 Galois fields
    2.2.2 DFT-based coding
    2.2.3 Algebraic decoding

3 Preparatory Exercises
  3.1 Linear block codes
  3.2 Reed-Solomon codes

4 Experimental Tasks
  4.1 Hamming codes
    4.1.1 Generation of Hamming codes
    4.1.2 Encoding using generator matrix
    4.1.3 Channel modeling
    4.1.4 Decoding using parity check matrix
    4.1.5 Error statistics
  4.2 Reed-Solomon codes
    4.2.1 Encoding
    4.2.2 Channel modeling
    4.2.3 Decoding
  4.3 Comparison between Hamming and Reed-Solomon codes

1 Introduction
When transmitting information data via a noisy channel, channel coding is used in order to
reduce the bit-error ratio (BER) at the receiver. Channel coding techniques add redundancy to
the transmitted data that is also known to the receiver. The receiver therefore has additional
information which it exploits to detect and correct errors that occurred during the transmission.
In this seminar, the class of linear block codes is investigated. Reed-Solomon codes, as special
linear block codes, are powerful symbol-oriented codes applied in modern communication
standards such as Digital Audio Broadcasting (DAB) and Digital Video Broadcasting (DVB), as
well as in audio compact discs (CD).

2 Theoretical background
In the following, some facts (repeated from the exercises) about linear block codes and, subse-
quently, Reed-Solomon codes will be summarized that are intended to assist the preparatory
exercises.

2.1 Linear block codes


2.1.1 Encoding

Each linear block code can be described by:

c = u · G     (1)

where u is the uncoded information word with k bits, c is the corresponding code word for the
information word u with n bits, and G is the k × n generator matrix of the block code.
With the information word u = (u0 u1 u2), the matrix multiplication equals (here an example
generator matrix is taken):

c = u · G

                 | 1 1 0 1 0 0 1 |
  = (u0 u1 u2) · | 1 0 1 0 0 1 1 |
                 | 1 1 1 0 1 0 0 |

  = u0 · (1 1 0 1 0 0 1)     (1st row of G)
  + u1 · (1 0 1 0 0 1 1)     (2nd row of G)
  + u2 · (1 1 1 0 1 0 0)     (3rd row of G) .

Note that summation and multiplication are done in the binary domain (0 + 0 = 0, 0 + 1 = 1,
1 + 1 = 0). The complete code is given by all linear combinations of the rows of G. A clear
scheme for obtaining all linear combinations is given next:
            n = 7 columns
       | 1 1 0 1 0 0 1 |
k = 3  | 1 0 1 0 0 1 1 |  = G
       | 1 1 1 0 1 0 0 |

                                          wH(ci)
u0 = (0 0 0)  →  c0 = (0 0 0 0 0 0 0)      0
u1 = (0 0 1)  →  c1 = (1 1 1 0 1 0 0)      4
u2 = (0 1 0)  →  c2 = (1 0 1 0 0 1 1)      4
u3 = (0 1 1)  →  c3 = (0 1 0 0 1 1 1)      4
u4 = (1 0 0)  →  c4 = (1 1 0 1 0 0 1)      4
u5 = (1 0 1)  →  c5 = (0 0 1 1 1 0 1)      4
u6 = (1 1 0)  →  c6 = (0 1 1 1 0 1 0)      4
u7 = (1 1 1)  →  c7 = (1 0 0 1 1 1 0)      4
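As a cross-check, the code table above can be reproduced with a few lines of Python (the experiment itself uses Matlab; this is only an illustrative sketch):

```python
import itertools

# Rows of the example generator matrix G of the (7,3) code.
G = [[1, 1, 0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0, 1, 1],
     [1, 1, 1, 0, 1, 0, 0]]

def encode(u, G):
    # c = u * G over GF(2)
    return [sum(u[i] * G[i][j] for i in range(len(G))) % 2
            for j in range(len(G[0]))]

# the complete code: all 2^k linear combinations of the rows of G
code = [encode(u, G) for u in itertools.product([0, 1], repeat=3)]
weights = [sum(c) for c in code]
d_min = min(w for w in weights if w > 0)   # minimum weight = minimum distance
print(d_min)   # 4
```

All eight codewords and their Hamming weights match the table, confirming dmin = 4.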

The minimum distance dmin of the code is the minimum number of digits in which two code
words differ. It is shown in the lecture that the minimum distance equals the minimum
weight of the nonzero code words:

dmin = min { wH(ci) | ci ≠ 0 } = 4 .

The number of errors in the code words that can be detected at the decoder side is

te = dmin − 1 = 3 .

The number of errors that can be corrected at the decoder side is

t = (dmin − 2) / 2   if dmin is even
t = (dmin − 1) / 2   if dmin is odd.

In this example, it therefore holds: t = 1.


Each linear block code can be converted into an equivalent systematic code:

G → G′    (G′ is a k × n matrix)
c → c′ .

The code words of a systematic code are composed of the information words and parity bits. The
generator matrix of the systematic code G′ has the following structure:

G′ = ( Ik | P )    Ik : identity matrix (k × k)
                   P : parity bit matrix (k × (n − k)) .

The rows of G′ are generated by combinations of the rows of G such that the first part of
G′ is the identity matrix Ik:

(1st + 2nd + 3rd) row of G  →  ( 1 0 0 | 1 1 1 0 )
(2nd + 3rd) row of G        →  ( 0 1 0 | 0 1 1 1 )   = G′ .
(1st + 3rd) row of G        →  ( 0 0 1 | 1 1 0 1 )

Thus, for the example code the parity bit matrix P is


 
1 1 1 0
P= 0 1 1 1  .
1 1 0 1

The code words of the systematic code are obtained by the matrix equation:

c′ = u · G′

With

     | 1 0 0 | 1 1 1 0 |
G′ = | 0 1 0 | 0 1 1 1 |     (8)
     | 0 0 1 | 1 1 0 1 |

an example code word for this systematic code is given by:

ua = (1 0 1)  →  c′a = ( 1 0 1 | 0 0 1 1 ) ,

where the first k = 3 digits equal ua and the last n − k = 4 digits are the parity check bits.

2.1.2 Decoding

The parity check matrix H′ is used to detect and correct errors. An important property of every
parity check matrix H is:

c · HT = 0 ,   if c is a valid code word
x · HT ≠ 0 ,   if x is not a valid code word.

Now, the way to generate H′ is described when the generator matrix of the systematic
code is given. The structures of the generator and parity check matrix are

G′ = ( Ik | P )        (generator matrix)
H′ = ( PT | In−k )     (parity check matrix).

Example: With the above determined parity bit matrix P, the parity check matrix is

    | 1 1 1 0 |          | 1 0 1 |
P = | 0 1 1 1 |  →  PT = | 1 1 1 |
    | 1 1 0 1 |          | 1 1 0 |
                         | 0 1 1 |

     | 1 0 1 | 1 0 0 0 |
H′ = | 1 1 1 | 0 1 0 0 |
     | 1 1 0 | 0 0 1 0 |
     | 0 1 1 | 0 0 0 1 |

Note that H′ is the parity check matrix for both code c and code c′ since they are equivalent
codes (codes with the same set of code words)!
Let us consider the following transmission model:

u → [G′] → x → channel → y → [H′] → x (error-free)

where the output vector y is given by

y = x + e ,

with x the code word vector and e the error vector.
Calculating the so-called syndrome vector s yields

s = y · H′T = (x + e) · H′T = x · H′T + e · H′T = e · H′T ,

since x · H′T = 0 for every valid code word x.

If only single errors are considered, the syndrome table consists of all possible resulting vectors
when e contains only one “1”:

Error at bit no. 2, e.g. e = ( 0 0 1 0 0 0 0 )

⇒ s = e · H′T = ( 1 1 0 1 ) ≙ third column of H′ .

Hence, the complete syndrome table for this example is


error at bit no. syndrome s
0 1 1 1 0
1 0 1 1 1
2 1 1 0 1
3 1 0 0 0
4 0 1 0 0
5 0 0 1 0
6 0 0 0 1
no error 0 0 0 0
Decoding steps:
The decoding process is therefore done in the following steps:

Step 1: Calculate the syndrome s by evaluating s = y · H′T.

Step 2: Check s (case distinction):

if s = 0 ⇒ accept the received word (perhaps more than te = 3 errors occurred)

if s ≠ 0 ⇒ search in the table
a) s included in the table ⇒ determine the error vector e
b) s not included in the table ⇒ more than t = 1 errors
⇒ not correctable

Step 3: Correct the error by calculating ycorr = y + e .

Consider the following example with the parity check matrix:

      | 1 1 1 0 |
      | 0 1 1 1 |
      | 1 1 0 1 |
H′T = | 1 0 0 0 |     (9)
      | 0 1 0 0 |
      | 0 0 1 0 |
      | 0 0 0 1 |

ya = ( 0 0 0 1 1 0 1 )  ⇒  sa = ya · H′T = ( 1 1 0 1 )

The according error vector is obtained from the syndrome table:

sa = ( 1 1 0 1 )  ⇒  ea = ( 0 0 1 0 0 0 0 )

The corrected vector becomes therefore


ya,corr = ya + ea = ( 0 0 1 1 1 0 1 ) .
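The three decoding steps applied to this example can be sketched in a few lines of Python (the experiment itself uses Matlab; H′T is hard-coded from the example above):

```python
# Syndrome decoding for the (7,3) example code; the rows of H'^T are the
# syndromes of single errors at bit positions 0..6.
Ht = [[1, 1, 1, 0],
      [0, 1, 1, 1],
      [1, 1, 0, 1],
      [1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, 1]]

def syndrome(y):
    # s = y * H'^T over GF(2)
    return [sum(y[i] * Ht[i][j] for i in range(7)) % 2 for j in range(4)]

def correct(y):
    s = syndrome(y)
    if s == [0, 0, 0, 0]:
        return y                # step 2: accept the received word
    if s in Ht:
        y = y.copy()
        y[Ht.index(s)] ^= 1     # step 3: flip the bit found in the table
        return y
    return None                 # more than t = 1 error: not correctable

ya = [0, 0, 0, 1, 1, 0, 1]
print(syndrome(ya))   # [1, 1, 0, 1] -> error at bit no. 2
print(correct(ya))    # [0, 0, 1, 1, 1, 0, 1]
```

The syndrome lookup works here because, for a single error at bit i, the syndrome equals the i-th row of H′T.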

2.2 Reed-Solomon codes


2.2.1 Galois-Fields
Now, an overview of Galois fields is given. Two types of Galois fields exist:

Direct fields:   GF(p),   p = prime number
Extended fields: GF(p^m), p = prime number,
                          m = integer number, m > 1.

The direct field GF(p) consists of the elements

GF(p) = {0, 1, 2, · · · , p − 1}   (valid only for direct fields).

Some properties of the elements of a Galois field (direct or extended) are summarized:

• The result of the addition or multiplication of two elements is again an element of the
Galois field:

ai ⊕ ak = al ∈ GF,   ⊕ ≙ modulo-p addition
ai ⊗ ak = am ∈ GF,   ⊗ ≙ modulo-p multiplication.

• Every nonzero element of the Galois field GF(p) can be written as ak = (z^x) mod p

with 0 ≤ x ≤ p − 2, where z is the so-called primitive element.

• Inverse elements have the property:

– with respect to addition: a ⊕ (−a) = 0  ⇒  a + (−a) = n · p,
  where (−a) is the inverse element, (−a) ∈ GF.
– with respect to multiplication: a ⊗ (a^−1) = 1  ⇒  a · (a^−1) = n · p + 1,
  where (a^−1) is the inverse element, (a^−1) ∈ GF.
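A brute-force Python sketch for finding inverse elements in a direct field GF(p), here for p = 5 (the lab itself works in Matlab; this is only for illustration):

```python
# Additive and multiplicative inverses in GF(p): -a satisfies a + (-a) = 0
# (mod p), a^-1 satisfies a * a^-1 = 1 (mod p). Brute force is fine for small p.
p = 5

def add_inv(a):
    return (-a) % p

def mul_inv(a):
    # a = 0 has no multiplicative inverse
    return next(b for b in range(1, p) if (a * b) % p == 1)

neg = {a: add_inv(a) for a in range(p)}
inv = {a: mul_inv(a) for a in range(1, p)}
print(neg)   # {0: 0, 1: 4, 2: 3, 3: 2, 4: 1}
print(inv)   # {1: 1, 2: 3, 3: 2, 4: 4}
```

These are exactly the correspondences used later in the decoding example, e.g. −2 ≙ 3 and 3^−1 ≙ 2 in GF(5).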

2.2.2 DFT-based coding


• Design rules:
For Reed-Solomon codes, the Singleton bound is reached. Hence, we get the relation

dmin = n − k + 1

as a rule to design the code. E.g., if the code shall be able to correct t = 1 error, then
with dmin = 2t + 1 = 3, for the code length n and the information word length k it holds
that n − k = 2. n depends on the type of Galois field and is n = p^m − 1. If p and m are
given, then k can be calculated.

• Transformations (e.g.: t = 1, p = 5, m = 1, n = 4, k = 2):

Let a = (a0, a1, a2, a3) be the code word vector in the time domain and A = (A0, A1, A2, A3)
the code word vector in the frequency domain. Then they are related to each other by the
Fourier transform:

AT = MDFT · aT

| A0 |       | 1  1    1    1   |   | a0 |
| A1 | = − · | 1 z^−1 z^−2 z^−3 | · | a1 |
| A2 |       | 1 z^−2 z^−4 z^−6 |   | a2 |
| A3 |       | 1 z^−3 z^−6 z^−9 |   | a3 |

with z = 2 being the primitive element. In a Galois field GF(p^m), z^i can be written as

z^i = z^(i mod (p^m − 1)) .

To get rid of any negative exponent, one can add k · (p^m − 1) to the exponent without
changing the result:

z^i = z^([i + k·(p^m − 1)] mod (p^m − 1)) ,   here with p^m − 1 = 5^1 − 1 = 4 .

Example:

           | 1  1   1   1  | z = 2, mod 5 | 1 1 1 1 | inv. elem. | 4 4 4 4 |
MDFT = − · | 1 z^3 z^2 z^1 |     = −      | 1 3 4 2 |     =      | 4 2 1 3 |
           | 1 z^2 z^0 z^2 |              | 1 4 1 4 |            | 4 1 4 1 |
           | 1 z^1 z^2 z^3 |              | 1 2 4 3 |            | 4 3 1 2 |

The inverse transform is equivalent:

aT = MIDFT · AT

        | 1  1   1   1  |   | 1  1   1   1  |   | 1 1 1 1 |
MIDFT = | 1 z^1 z^2 z^3 | = | 1 z^1 z^2 z^3 | = | 1 2 4 3 |
        | 1 z^2 z^4 z^6 |   | 1 z^2 z^0 z^2 |   | 1 4 1 4 |
        | 1 z^3 z^6 z^9 |   | 1 z^3 z^2 z^1 |   | 1 3 4 2 |

An important property of the matrices is

MDFT · MIDFT = MIDFT · MDFT = I .
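The matrices of the example can be checked numerically. A Python sketch (the exponent handling follows z^i = z^(i mod (p^m − 1)) from above; the lab itself uses Matlab):

```python
# Transform matrices over GF(5) with z = 2, n = 4:
# M_DFT = -[z^(-ij)], M_IDFT = [z^(ij)], exponents reduced mod p^m - 1.
p, n, z = 5, 4, 2

def zpow(e):
    # z^e with the exponent reduced mod p^m - 1 (here p - 1 = n = 4)
    return pow(z, e % (p - 1), p)

M_dft = [[(-zpow(-i * j)) % p for j in range(n)] for i in range(n)]
M_idft = [[zpow(i * j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    # matrix product over GF(5)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) % p for j in range(n)]
            for i in range(n)]

print(M_dft)                   # [[4, 4, 4, 4], [4, 2, 1, 3], [4, 1, 4, 1], [4, 3, 1, 2]]
print(matmul(M_dft, M_idft))   # identity matrix
```

The product check confirms that the leading factor −1 ≙ 4 on MDFT exactly compensates the sum Σ z^(−ik) z^(kj) = n δij = −δij in GF(5).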

• Coding procedure (e.g.: t = 1, p = 5, m = 1, n = 4, k = 2):

The code words are designed in the frequency domain by setting the last n − k symbols to zero,

A = ( A0 A1 | 0 0 ) ,

and transformed into the time domain. The time-domain code words are then transmitted
via the channel. When transforming back into the frequency domain, errors can be detected
on the last n − k digits, which are zero in case of an error-free transmission. If they are not,
an error occurred during the transmission.
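As a sketch of the coding step, the following Python snippet applies the IDFT matrix from the GF(5) example above to a frequency-domain word with zeroed parity frequencies (the information symbols A0 = 2, A1 = 3 are chosen arbitrarily for illustration):

```python
# Coding step: information symbols on the first k frequency digits, zeros on
# the n - k parity frequencies, then transform into the time domain.
p = 5
M_idft = [[1, 1, 1, 1],
          [1, 2, 4, 3],
          [1, 4, 1, 4],
          [1, 3, 4, 2]]

def idft(A):
    # a^T = M_IDFT * A^T over GF(5)
    return [sum(M_idft[i][j] * A[j] for j in range(4)) % p for i in range(4)]

A = [2, 3, 0, 0]     # frequency domain: (A0 A1 | 0 0)
a = idft(A)          # time-domain code word to be transmitted
print(a)             # [0, 3, 4, 1]
```

Transforming a back with MDFT would reproduce (2 3 | 0 0); nonzero parity frequencies after the channel therefore indicate transmission errors.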

2.2.3 Algebraic decoding


The RS-code used as an example is RS(4, 2) over GF(5).

• Transmission model:
Again, the transmission model is given by the sum of coded word and error vector: r =
a + e. Note that r is the received vector in the time domain.

• Error check:
By transforming r into the frequency domain (DFT), errors can be detected on the digits
where zeros have been set. Without error, the received vector in the frequency domain
must be:

R = ( A0 A1 | 0 0 ) ,

where the first k digits carry the information word and the last n − k digits are the
“parity frequencies”.
• Error correction:
If there are errors, the last digits can be regarded as syndromes. E.g.:

R = ( 4 1 | 2 3 )   ⇒   S = ( S0 S1 ) = ( 2 3 ) .

Since the Fourier transform is a linear operation,

R = A + E

also holds. A consists of ’0’s on the last (n − k) digits by the design rules. Therefore the
syndromes S are directly the last (n − k) digits of the error vector E in the frequency
domain.
One way to detect and correct errors for RS codes is based on the error position polyno-
mial. The decoding steps are summarized:

– Determine the syndrome S.

– Determine the error position polynomial C(x):
The error position polynomial c(x) ∘—• C(x) has the properties:

time domain:      ci · ei = 0  ⇒  ci = 0 if ei ≠ 0,
                  i.e. ci = 0 if an error occurs at position i
frequency domain: C(x) · E(x) = 0 mod (x^n − 1)
                  ⇒ x = z^i (z is the primitive element) is a zero if an error
                  occurs at position i
                  ⇒ C(x) = ∏_{i: ei ≠ 0} ( x − z^i ) .

The coefficients of C(x) = C0 + · · · + Ce · x^e (degree e), with e being the
number of errors, shall be found. The assumed number of errors e
must not exceed the maximum number of errors t that can be corrected; thus it
holds: e ≤ t.
The matrix representation for the set of equations is

| S_e       · · ·  S_0        |   | C0 |   | 0 |
| ...       ...    ...        | · | .. | = | . |     (10)
| S_{2t−1}  · · ·  S_{2t−e−1} |   | Ce |   | 0 |

where the matrix has 2t − e rows and e + 1 columns.

The equation is fulfilled for the correct value of e, which is unknown a priori. There-
fore, one should start with the most probable case e = 1:

( S1 S0 ) · ( C0 C1 )T = 0
⇔ S1 · C0 + S0 · C1 = 0
⇔ 3 · C0 + 2 · C1 = 0 .

Since there are more variables than equations, C(x) is normalized to e.g. C1 = 1.
C(x) can always be normalized, as only its zeros are important. The solution for C(x)
is found by:

3 · C0 + 2 = 0     | −2 ≙ +3 (inverse element)
⇔ 3 · C0 = 3       | · 3^−1 ≙ · 2 (inverse element)
⇔ C0 = 6 mod 5 = 1
⇒ C(x) = 1 + 1 · x .

Note that if the matrix equation is not solvable for e = 1, try e = 2, 3, 4, . . . , t. If
the matrix equation is not solvable at all, a correction is not possible.
– Find the position of the error (optional):
Once the error position polynomial is found, the next step is to look for the positions
of the errors, which can be determined by the following relation:

C(x = z^i) = 0  ⇔  error at position i   (i = 0 · · · n − 1, z = 2) .

In this example, all values of i are tested:

C(z^0 = 1) = 2 ≠ 0
C(z^1 = 2) = 3 ≠ 0
C(z^2 = 4) = 5 mod 5 = 0   ⇒  error at position i = 2
C(z^3 = 3) = 4 ≠ 0   (clear, since only a single error was assumed, e = 1) .

Now the position of the error has been found. The value of the error is calculated
in the next step.
– Find the error vector in the frequency domain:
The last (n − k) digits of E correspond to S, which can be seen from:

E = ( E0 E1 | E2 E3 ) = ( E0 E1 | S0 S1 ) = ( E0 E1 | 2 3 ) ,

where the first k = 2 digits are still unknown and the last n − k = 2 digits are the
syndromes.

The remaining k digits Ej are determined by the recursion formula:

Ej = −(C0^−1) · Σ_{i=1}^{e} Ci · E_{(j−i) mod n} ,   j = 0 · · · k − 1 .

In this example, E0 and E1 are

E0 = −(C0^−1) · C1 · E_{(0−1) mod 4} = −E3 = −3 ≙ 2
E1 = −(C0^−1) · C1 · E_{(1−1) mod 4} = −E0 = −2 ≙ 3 .

Finally, the complete error vector in the frequency domain is E = (2 3 2 3).


– Correct the received vector by subtracting the error vector (in GF(5)). E.g.:

Â = R “−” E = ( 4 1 2 3 ) − ( 2 3 2 3 ) = ( 2 3 0 0 ) .
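The complete algebraic decoding chain for this RS(4, 2) example can be verified with a short Python sketch (specialized to the single-error case e = 1 worked above; the lab itself uses Matlab):

```python
# Algebraic decoding of the worked RS(4,2) example over GF(5), z = 2, t = 1.
p, n, k, z = 5, 4, 2, 2

def zpow(e):
    return pow(z, e % (p - 1), p)      # z^e with exponent reduced mod p^m - 1

def mul_inv(a):
    # brute-force multiplicative inverse in GF(p)
    return next(b for b in range(1, p) if (a * b) % p == 1)

R = [4, 1, 2, 3]                       # received word, frequency domain
S = R[k:]                              # syndromes = parity frequencies (2, 3)

# error position polynomial C(x) = C0 + C1*x, normalized to C1 = 1:
# S1*C0 + S0*C1 = 0  =>  C0 = -S0 / S1
C1 = 1
C0 = (-S[0] * mul_inv(S[1])) % p

# error position: C(z^i) = 0  <=>  error at position i
pos = next(i for i in range(n) if (C0 + C1 * zpow(i)) % p == 0)

# frequency-domain error vector: last n-k digits are the syndromes, the
# remaining digits follow from the recursion E_j = -C0^-1 * C1 * E_(j-1)
E = [0, 0] + S
for j in range(k):
    E[j] = (-mul_inv(C0) * C1 * E[(j - 1) % n]) % p

A_hat = [(R[i] - E[i]) % p for i in range(n)]
print(pos, E, A_hat)   # 2 [2, 3, 2, 3] [2, 3, 0, 0]
```

The result matches the text: a single error at position 2, E = (2 3 2 3) and Â = (2 3 0 0).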

3 Preparatory Exercises
3.1 Linear block codes
Given is a linear block code with the generator matrix
 
1 1 0 0 1 0 1
G= 0 1 1 1 1 0 0  .
1 1 1 0 0 1 1

1. Calculate the number of valid codewords N and the code rate RC. Specify the complete
code C.

2. Determine the generator matrix G′ of the appropriate systematic (separable) code C′.
What is the codeword of the systematic code for the information word ua = (0 1 0)?

3. Specify the parity check matrix H′ of the systematic code C ′ .

4. Determine the syndrome table for single errors.

5. How is a received word y checked for errors and corrected? What are the valid codewords
that belong to the received words

ya = ( 1 0 0 1 0 0 1 )

yb = ( 1 1 1 0 0 1 1 ) ?

3.2 Reed-Solomon codes


A Reed-Solomon code RS(n, k) over the Galois field GF(5) with the primitive element z = 2
is to be generated, which is able to correct t = 1 error in a codeword.

1. Insert in the table the missing inverse elements with respect to addition and multiplication
for a ∈ GF(5).

a 0 1 2 3 4
−a
a−1

2. Determine the codeword length n, the number of information digits k, and the minimum
distance dmin.

3. Determine the matrix MDFT that describes the discrete Fourier transform (DFT) of the
codeword vectors a and A:
AT = MDFT · aT
The matrix MDFT should only consist of elements of GF(5).

4. Determine the corresponding matrix MIDFT that describes the inverse discrete Fourier
transform (IDFT) of the codeword vectors a and A:

aT = MIDFT · AT

5. Calculate the codeword that belongs to the information word A0 = 4, A1 = 1 by using
the IDFT.

The codeword a is transmitted via a channel. An error occurs during the transmission, such
that an error vector e = (0 2 0 0) is added to the codeword.

6. Find the received vector r and calculate the syndrome S using the DFT.

7. Determine the error position polynomial C(x) in the frequency domain and the error
positions of the received vector.

8. Determine the complete error vector E in the frequency domain.

9. Find the estimated value of the codeword  in the frequency domain.



4 Experimental Tasks
4.1 Hamming codes
4.1.1 Generation of Hamming codes

The purpose of this assignment is to build a Hamming code to reduce errors in a noisy binary
symmetric channel with error rate p_err. We will construct functions that are capable of en-
coding and decoding any single-error-correcting (n, k) Hamming code, where n is the number
of code bits and k is the number of information bits. What are possible (n, k) combinations?
Initialize the variables:
p_err = 0.01; % bit error probability
n = ???; % codeword length
k = ???; % information word length
m = n-k; % number of parity bits
The formation of the generator matrix is simply a matter of arranging binary coded decimal
(BCD) vectors of m bits in numerical order, where m = n-k is the number of parity bits. The
vectors that correspond to integer powers of 2 (1, 2, 4, 8, 16, etc.) are not included:
% generate matrix P (parity matrix)
P=[]; % initialize P
for iC=1:n
if (log2(iC)-floor(log2(iC)))>0 % check if position is not 1,2,4,8
P=[P bcd(iC,m)’]; % form P
end
end
G = [eye(k),P’]; % add identity matrix
Display the generator matrix for a (7,4) and a (31,26) Hamming code.
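For reference, the same construction can be written in Python (a sketch mirroring the Matlab loop above; the helper bcd is assumed to return the m-bit binary representation of a position):

```python
# Generator and parity-check matrices of a single-error-correcting Hamming
# code: the rows of P' are the binary representations of all positions in
# 1..n that are not powers of two.
def hamming_matrices(n, k):
    m = n - k

    def bits(i):
        # m-bit binary representation of i, most significant bit first
        return [(i >> b) & 1 for b in range(m - 1, -1, -1)]

    P = [bits(i) for i in range(1, n + 1) if i & (i - 1) != 0]  # skip 1,2,4,...
    # G = [I_k | P'], H = [P | I_m] as in the systematic form of the text
    G = [[1 if c == r else 0 for c in range(k)] + P[r] for r in range(k)]
    H = [[P[r][c] for r in range(k)] + [1 if j == c else 0 for j in range(m)]
         for c in range(m)]
    return G, H

G, H = hamming_matrices(7, 4)
# sanity check: every row of G is orthogonal to every row of H over GF(2)
ok = all(sum(g[i] * h[i] for i in range(7)) % 2 == 0 for g in G for h in H)
print(ok)   # True
```

Calling hamming_matrices(31, 26) gives the second requested code; the orthogonality check G · HT = 0 holds for any valid (n, k) pair.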

4.1.2 Encoding using generator matrix

We want to transmit an image file named ’image.jpg’.


Read the data in and display it by typing:
[I,map] = imread(’image.jpg’);
level = graythresh(I);
X = im2bw(I,level);
X = double(X);
[xr,xc] = size(X);
In order to limit the amount of data to be transmitted, we converted the figure to a black/white
one.
The information bits are encoded by first reshaping the raw data u into a matrix U with k
columns, padding the data as necessary at the end with zeros. The number of rows in U is
such that its product with k equals the number of information bits plus the number of padded bits.
% convert data to convenient format that it can be encoded
vInfo = X(:); % long column vector

if ( floor(length(vInfo)/k)< length(vInfo)/k )
a = mod(length(vInfo),k);
vInfo = [vInfo; zeros(k-a,1)];
end
U = reshape(vInfo,prod(size(vInfo))/k,k);
[ur,uc] = size(U); % U has dimension: num infowords x k
The product of U and G yields a ur × n matrix C of coded bits.
% Encoding all info words
C = mod(U*G,2); % C has dimension: num infowords x n
Now, C contains the code words for all information words in U.
Look at different information words and their corresponding codewords to see that it is a sys-
tematic code, e.g.: U(18454,:) and C(18454,:).

4.1.3 Channel modeling

The channel is modeled as a binary symmetric channel with error probability p_err. A Matlab
function channel.m has been prepared to model the channel. The function creates a vector of
length b of which e positions are randomly set to 1 (the other positions are 0). Thus, p_err = e/b,
and b is made equal to the total number of bits in C. The error vector is reshaped into a matrix N
with the same dimensions as C. Then the received code words CN are generated by XORing the
error matrix with the code matrix.
% produce error vector with error probability p_err
N = errorv(prod(size(C)),round(prod(size(C))*p_err));
N = reshape(N,prod(size(C))/n,n); % same dimension as C
CN = xor(C,N); % add errors
Find the number of errors in N using the Matlab command find and compare it with the length
of N to verify the error probability.
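A rough Python analogue of the error-vector generation may help to see what errorv does (this is only a sketch, not the lab's actual Matlab routine):

```python
import random

# Hypothetical analogue of errorv(b, e): a binary vector of length b with
# exactly e ones at random positions, so the empirical error rate is e/b.
def errorv(b, e, seed=None):
    rng = random.Random(seed)
    v = [0] * b
    for pos in rng.sample(range(b), e):
        v[pos] = 1
    return v

N = errorv(1000, 10, seed=1)
print(sum(N) / len(N))   # empirical error rate 0.01
```

Because exactly e of the b positions are flipped, counting the ones in N always reproduces p_err = e/b exactly, which is what the find-based check above verifies.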

4.1.4 Decoding using parity check matrix

Remember the structure of the parity check matrix of a systematic code. The parity check
matrix is readily computed from the matrix P above.
H = [P,eye(m)]; % parity matrix of systematic Hamming code
Display the parity check matrix H.
The binary syndromes S are calculated by multiplying the code matrix by the transposed parity
check matrix. This results in a ur × m matrix:
% calculate syndrom vectors for all infowords
S = mod(CN*H’,2);
The decimal syndromes S_de are calculated from a simple binary to decimal conversion:
% get decimal value of syndroms
S_de = sum(((ones(ur,1)*(2.^((m-1):-1:0) )).*S)’);
These syndromes must then be related to the positions in the parity check matrix. A position
matrix Pm is generated by finding the decimal equivalent of the columns of H:

% to check position of match, get decimal value for syndrome in H


Pm = sum(((ones(n,1)*(2.^((m-1):-1:0) )).*H’)’);
Look at H, Pm and S_de of a (7,4) code.
Then, the errors in CN are corrected:
% decoded matrix V
V = CN; % start with erroneous data
for iC = 1:ur % go through all infowords
% if position ~= 0, find position of error by comparing decimal position
% of syndrome vector and parity matrix
[dummy,pos] = find(Pm==S_de(iC));
% correct bits by XORing with 1
V(iC,pos) = xor(CN(iC,pos),1);
end
The decoded data V are split into information bits and parity bits:
% Hamming codes are systematic codes, therefore splitting into info and
% parity bits is easily possible
U_dec = V(:,1:k); % info bits
U_par = V(:,k+1:end); % parity bits
The errors in the information data can be found by:
vErr = xor(U_dec,U);
Padded bits (np of them) are removed from the information data, and both U_dec and U_par
are reshaped back into matrices. U_dec should resemble the original information data.
% reshape info bit matrix (necessary only if zeros had been padded)
U_dec = U_dec(:);
U_dec = U_dec(1:xr*xc,1);
X_dec = reshape(U_dec,xr,xc);
The parity data are reshaped into a matrix having the same number of rows as the original
information data U, and an appropriate number of columns such that the number of parity bits
is equal to (or slightly less than) the number of parity bits in V.
Plot the received image by using imshow:
figure;
subplot(121); imshow(X); % original image
xlabel([’BER = 0’]);
subplot(122); imshow(X_dec); % received and decoded image
title([’p_{err}=’,num2str(p_err)]);

4.1.5 Error Statistics

The bit error ratio is estimated from:


Ne_dec = sum(vErr(:));
BER_exp = Ne_dec/ur/uc
xlabel([’BER = ’,num2str(BER_exp)]);
where ur and uc are the dimensions of the original information data matrix U.

Test the simulation program with different parameter values for p_err and different codes.
Determine the BER and fill in the following table:

p_err     10^−1   10^−2   10^−3   10^−4
(7,4)
(15,11)
(31,26)

Fig. 1: Bit-error ratios (BER) for Hamming codes

Plot the BERs versus the channel error rate p_err for the different codes. What conclusions
can you draw from the results?

4.2 Reed-Solomon codes


In this assignment, we want to protect data against errors by Reed-Solomon codes over the
extended Galois field GF(2^3), which should be able to correct t = 2 errors.
What are the codeword length n and information word length k?

4.2.1 Encoding
We want to transmit the same image as in 4.1. Since Matlab offers the possibility to calculate
within Galois fields and even to encode using a Reed-Solomon coder, we only need to adapt the
format of the message data. For the extended Galois field GF(2^m), m bits must be combined
such that one codeword contains m · n bits.
Start a new m-file and enter:
p_err = 0.01; % bit error probability
n = 7; % codeword length
k = 3; % information word length
m = 3; % extension field
% Note that one codeword consists of m*n bits!!!

% read data
% =========
[I,map] = imread(’image.jpg’);
level = graythresh(I);
X = im2bw(I,level);
X = double(X);
[xr,xc] = size(X);
% convert data to convenient format that it can be encoded
u = X(:);
if ( floor(length(u)/(k*m))< length(u)/(k*m) )
rest = mod(length(u),k*m);
% if there is rest, pad with zeros

u = [u;zeros(m*k-rest,1)];
end
Then, convert m bits to decimal values. Note that Reed-Solomon codes are not binary codes:
U = reshape(u,m,length(u)/m);
U = U’;
U = bi2de(U); % convert bits to decimal values
U = reshape(U,length(U)/k,k);
Display ten rows of U by typing disp(U(1:10,:)) and disp(U(10001:10010,:)).
After reshaping the vector U into a matrix with the number of information words as the number
of rows, the Matlab encoding routine requires the input message to be a variable of type
Galois field (Matlab-specific).
U_gf = gf(U,m);
Now we can pass the message to the encoder:
C = rsenc(U_gf,n,k);

4.2.2 Channel modeling


As in the example before, random bit errors according to the previously specified channel error
ratio are generated. But before we can add the error matrix to the code matrix, the error
vector N must have the same format as C (see the steps above).
% produce error vector with error probability p_err
N = errorv(m*prod(size(C)),round(m*prod(size(C))*p_err));
% again, m bits are collected and transformed into decimal values
N = reshape(N,m,length(N)/m)’;
N = bi2de(N);
% form a matrix with dimension: num code words x m*n
N = reshape(N,length(N)/n,n);
% Matlab requires variable type conversion to Galois fields
N_gf = gf(N,m);
% add errors, note that results are still within the defined Galois field
CN = C+N;
Display the first hundred elements of N.

4.2.3 Decoding
The resulting matrix CN can be passed to the decoding routine
% Reed-Solomon decoding using Matlab toolbox function
[U_dec,cnumerr] = rsdec(CN,n,k);
The number of rows of U_dec is again equivalent to the number of information words. We
convert the matrix of decoded symbols back into a long vector and reverse all operations with
respect to the transmitter.
% convert matrix of decoded symbols to vector back
U_dec = U_dec(:);
% reverse operation with respect to transmitter

% IMPORTANT: variable type back conversion is realized by ’.’-command!


U_dec = de2bi(double(U_dec.x))’;
U_dec = U_dec(:);
X_dec = U_dec(1:xr*xc);
X_dec = reshape(X_dec,xr,xc);
Afterwards, the bit-error ratio is calculated by comparing transmitted and received bits:
errors = xor(X_dec,X);
Ne_dec = sum(errors(:));
BER_exp = Ne_dec/xr/xc
Finally, plot the transmitted and decoded image.
figure;
subplot(121); imshow(X); % original image
xlabel(’BER = 0’);
subplot(122); imshow(X_dec); % received and decoded image
xlabel([’BER = ’,num2str(BER_exp)]);
title([’p_{err}=’,num2str(p_err)]);

4.3 Comparison between Hamming and Reed-Solomon codes


It is well known that Reed-Solomon codes are well suited in the case of burst errors (see scratches
on CDs). In order to analyze this behaviour, a Matlab routine named errorv2 is available
which generates pairwise bit errors.
Task: Run the simulations for Hamming coding and Reed-Solomon coding with the following chan-
nel error ratios and different channel models. Fill in the following tables:

p_err                  10^−1   10^−2   10^−3   10^−4
BER (single errors)
BER (burst errors)

Fig. 2: Hamming code (7,4)

p_err                  10^−1   10^−2   10^−3   10^−4
BER (single errors)
BER (burst errors)

Fig. 3: Reed-Solomon code (7,3)

Do these experimental results agree with the theory?
