
ECNG 4302 Fundamentals of Communications II
Dr. Ahmed Khattab
Linear Block Codes & Syndrome Decoding
Electronics Engineering Department
American University in Cairo

Block Diagram of Digital Communication System

• So far, we have addressed the modulator, channel, and demodulator.


• We also addressed source coding, which efficiently represents the data.

Error Control Codes
• The introduction of controlled redundancy into the transmitted bit stream to enable detection and/or correction, at the Rx, of errors that occur due to channel noise and other impairments.

• The trade-off is increased bandwidth and complexity.

• Two types of error control codes


• Automatic repeat request (ARQ)
• Forward error correction (FEC)


Automatic repeat request (ARQ)


• The decoder attempts to detect if any errors occurred and requests a
re-transmission if it detects an error.
• Requires a 2-way link.
• Reduces throughput
• More errors can be detected than can be corrected.
• Applications: cases where it is more convenient to retransmit the packet rather than correct the errors within it:
• IEEE 802.11 (WiFi)
• IEEE 802.15.4 (Zigbee)

Forward error correction (FEC)
• The decoder at the Rx attempts to correct all errors.

• Must have error detection and correction.

• FEC is applied in situations where retransmissions are costly or impossible:
• Satellite Communications
• One-way communication links
• When transmitting to multiple receivers in multicast.
• FEC information is usually added to mass storage devices to enable
recovery of corrupted data, and is widely used in modems.

• FEC is our focus…


Types of Errors
• Single Error
• Due to uncorrelated AWGN

• Burst Error
• Due to fading, interference, crosstalk
• An INTERLEAVER is used to convert a burst error into many single errors (a sketch follows the block diagram below).
• The interleaver randomizes the order of the transmitted bits in time at the Tx; the de-interleaver restores the bits to their original order prior to decoding at the Rx.
[Figure: coded transmission chain — Info. bits → Channel Encoder → Interleaving → Modulator → Channel → Demodulator → De-interleaving → Channel Decoder → Estimates of bits]
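The following Python sketch illustrates the idea with a simple block (row/column) interleaver; the 4×4 dimensions and the function names are illustrative assumptions, not values taken from these slides.

# Minimal block-interleaver sketch: write bits row-by-row into a matrix and
# read them out column-by-column, so a burst of channel errors is spread
# across many codewords. The 4x4 layout below is illustrative only.

def interleave(bits, rows=4, cols=4):
    """Reorder bits so that consecutive input bits end up far apart in time."""
    assert len(bits) == rows * cols
    # write row-wise, read column-wise
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows=4, cols=4):
    """Inverse permutation: restores the original bit order at the Rx."""
    assert len(bits) == rows * cols
    # write column-wise, read row-wise
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(16))              # stand-in for 16 coded bits
tx = interleave(data)
assert deinterleave(tx) == data     # the de-interleaver undoes the interleaver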
Channel Capacity and Channel Coding
• In 1948, Shannon demonstrated that by proper encoding of the information, errors induced in the channel can be reduced (Pe → 0) subject to R < C.

• Channel coding adds extra bits (redundancy) to data


transmitted over the channel to combat the errors due to
channel noise and fading.


Types Of Channel Coding


Linear Block Code
• Each k-bit information block is encoded into an n-bit codeword depending on the k bits only.
• Information is segmented into BLOCKs that are treated separately.
• Code defined as (n, k).
• Code rate R = k/n.
• Combinational logic.

Convolutional Code
• Each k-bit information block is encoded into an n-bit codeword depending on the k bits and the previous m bits.
• Encoding is performed on a sliding window of message bits.
• Code defined as (n, k, m).
• Code rate R = k/n.
• Sequential logic.

Linear Block Codes
• For every k message (info) bits, n code bits are generated, where n > k,
  i.e., (n − k) redundant (parity) bits are introduced.
• The n code bits thus generated are called a codeword.
• The code rate R = k/n, 0 < R < 1.

  [Figure: m (k bits) → Linear Block Encoder → c (n bits)]

• Input m: a k-bit information message with 2^k different patterns.
• Output c: an n-bit encoded codeword, where n > k, with 2^k different patterns.


• Computation of parity bits:
  • Important note: all additions in this section are modulo-2 additions (XOR).
  • Let the message vector be m (1 × k),
    the code vector be c (1 × n),
    and the parity vector be b (1 × (n − k)).
  • c = mG, where G is the generator matrix:
    (1 × n) = (1 × k)(k × n)
  • For systematic codes:
    • G = [P | I_k], where P is k × (n − k) and I_k is the k × k identity matrix,
    • i.e., the message bits are included unchanged in the codeword.
    • Therefore c = m[P | I_k] = [mP | m] = [b | m] (see the encoding sketch below).
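A minimal Python sketch of systematic encoding, assuming an arbitrary illustrative (6, 3) parity matrix P that is not taken from these slides; it simply evaluates c = mG = [b | m] modulo 2.

import numpy as np

# Systematic LBC encoding sketch: c = mG with G = [P | I_k], arithmetic mod 2.
# The parity matrix P below is an arbitrary illustrative choice.
k, n = 3, 6
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])                       # k x (n-k)
G = np.hstack([P, np.eye(k, dtype=int)])        # G = [P | I_k], size k x n

def encode(m):
    """Return the n-bit codeword c = mG (mod 2); c = [b | m] for systematic G."""
    return (np.array(m) @ G) % 2

print(encode([1, 0, 1]))    # parity bits b = mP followed by the message itself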

b = mP, where

    P = [ p_{0,0}      p_{0,1}   ...   p_{0,(n−k−1)}
          p_{1,0}      ...       ...   ...
          ...          ...       ...   ...
          p_{(k−1),0}  ...       ...   p_{(k−1),(n−k−1)} ]

so that b_i = Σ_{j=0}^{k−1} m_j · p_{j,i} (modulo 2), with each p_{j,i} = 0 or 1.
j 0
• Notes:
  • Any codeword is a linear combination of the rows of G.
  • The set of all possible codewords is called 'the code'; there are 2^k codewords, obtained by multiplying all 2^k possible messages m by G.

• Linear block codes (LBC):
  • A block code is linear iff α1·ci + α2·cj gives another codeword, where α1, α2 ∈ {0, 1} and ci, cj are any two codewords.
  • This property is called closure.
• Minimum distance dmin of an LBC:
  • The Hamming distance between codewords c1 and c2, denoted dH(c1, c2), is defined as the number of locations in which the respective elements differ.
    • Ex: c1 = 111, c2 = 000 → d(c1, c2) = 3
  • The Hamming weight of a codeword ci, denoted w(ci), is defined as the number of non-zero elements in ci.
    • In the previous example, w(c1) = 3 and w(c2) = 0.
  • The minimum distance dmin is the smallest Hamming distance between any pair of codewords: dmin = min d(ci, cj), i ≠ j.
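A small pure-Python sketch of the two definitions above (the function names are illustrative):

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

def hamming_weight(c):
    """Number of non-zero elements in a codeword."""
    return sum(bit != 0 for bit in c)

print(hamming_distance([1, 1, 1], [0, 0, 0]))                   # 3, as in the example above
print(hamming_weight([1, 1, 1]), hamming_weight([0, 0, 0]))     # 3 0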
• Note:
  • A linear block code must contain the all-zeros codeword.
  • From the closure property, dmin = min w(ci), excluding the all-zeros codeword.
  • In the previous example, dmin = 3.
• In general, the larger the dmin, the higher the error correcting/detecting capability of the code.
• For correction: up to t = ⌊(dmin − 1)/2⌋ errors will be corrected.
• For detection: up to t = dmin − 1 errors will be detected.
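A one-function sketch of these two bounds (the function name is illustrative):

def capability(d_min):
    """Guaranteed error correction/detection capability implied by d_min."""
    t_correct = (d_min - 1) // 2    # errors guaranteed correctable
    t_detect = d_min - 1            # errors guaranteed detectable
    return t_correct, t_detect

print(capability(3))    # (1, 2): e.g. the rate-1/3 repetition code below
print(capability(5))    # (2, 4): the rate-1/5 repetition code used later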


• Example: rate-1/3 repetition code
  • 000 was sent.
  • Received 001: the error can be detected and corrected.
  • Received 011: the errors can be detected (but not corrected).
  • Received 111: the errors can neither be detected nor corrected (it is a valid codeword).
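A minimum-distance (majority-vote) decoding sketch that reproduces these decisions; the codeword list and names below are specific to this rate-1/3 example.

CODEWORDS = ["000", "111"]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(r):
    # pick the codeword closest in Hamming distance to the received word
    return min(CODEWORDS, key=lambda c: hamming(c, r))

for r in ["001", "011", "111"]:
    print(r, "->", min_distance_decode(r))
# 001 -> 000 : single error detected and corrected
# 011 -> 111 : two errors are detectable (011 is not a codeword), but they
#              exceed t = 1, so min-distance decoding picks the wrong codeword
# 111 -> 111 : three errors turn one codeword into another, so nothing can
#              be detected or corrected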

• Example: Consider the generator matrix

      G = [ 0 1 1 0 0
            1 0 0 1 0
            1 1 0 0 1 ]

• Obtain the code described by G and find dmin.
• Solution:
  • G is k × n with k = 3 and n = 5.
  • c = mG


• We let m take all values 000, 001, ..., 111 and multiply by G:

      m        c = mG
      [000]    [00000]
      [001]    [11001]
      [010]    [10010]
      [011]    [01011]
      [100]    [01100]
      [101]    [10101]
      [110]    [11110]
      [111]    [00111]

• Hence, dmin = 2 (the minimum weight of the non-zero codewords).
• Hence, the code is not guaranteed to correct any errors: correcting t = 1 error requires dmin ≥ 2t + 1 = 3.
• However, it can detect 1 error.
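A sketch that reproduces this enumeration and the dmin computation for the example G (numpy is used for the mod-2 matrix product):

import numpy as np
from itertools import product

G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 1, 0, 0, 1]])

# all 2^k codewords c = mG (mod 2), for m = 000, 001, ..., 111
codewords = [tuple((np.array(m) @ G) % 2) for m in product([0, 1], repeat=3)]
for m, c in zip(product([0, 1], repeat=3), codewords):
    print(m, c)

# d_min = smallest Hamming weight among the non-zero codewords
d_min = min(sum(c) for c in codewords if any(c))
print("d_min =", d_min)     # 2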

LBC Encoding Circuits

[Figure: LBC encoding circuit — the message bits m0, m1, ..., mk−1 feed modulo-2 adders through the generator coefficients (g0,0, ..., gk−1,0, ...) to produce the code bits c0, c1, ..., cn−1.]


Channel Model

[Figure: channel model — the received vector r (1 × n) equals the transmitted codeword c (1 × n) plus an error vector e (1 × n): r = c + e.]

Decoding of LBC’s:
• The parity check matrix H:
  • Define H = [I_{n−k} | P^T], an (n − k) × n matrix, so that

        H^T = [ I_{n−k}
                P       ]    (n × (n − k))

  • Hence GH^T = [P | I_k] · [I_{n−k} ; P] = P + P = 0.
  • Recall that c = mG. Post-multiplying by H^T:

        cH^T = mGH^T = 0    (1 × (n − k))
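A sketch that builds H for the G = [P | I_k] convention used above and checks GH^T = 0 (mod 2); the (5, 2) parity matrix P below is an illustrative choice.

import numpy as np

# Build H = [I_{n-k} | P^T] for a systematic G = [P | I_k] and verify G H^T = 0 (mod 2).
k, n = 2, 5
P = np.array([[1, 0, 1],
              [0, 1, 1]])                              # k x (n-k), illustrative
G = np.hstack([P, np.eye(k, dtype=int)])               # G = [P | I_k]
H = np.hstack([np.eye(n - k, dtype=int), P.T])         # H = [I_{n-k} | P^T]

print((G @ H.T) % 2)    # all-zeros k x (n-k) matrix, so every codeword satisfies cH^T = 0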


Binary Symmetric Channel (BSC)


[Figure: BSC transition diagram — input bits 0 and 1 map to output bits 0 and 1, each flipping with crossover probability p.]
• The modulator, channel, and demodulator are together represented by the transition (crossover) probability p.
• We can therefore define an error vector e (1 × n), and r = c + e.
• At the receiver, we post-multiply r by H^T. The result is called the error-syndrome vector s (the syndrome):
    s = rH^T = (c + e)H^T = cH^T + eH^T = eH^T
• Clearly, if e = 0 then s = 0, and the decoder will conclude that no errors occurred on the channel (see the sketch below).
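A sketch that passes a codeword of the same illustrative (5, 2) code through a simulated BSC and verifies that rH^T and eH^T give the same syndrome; the crossover probability p = 0.1 and the fixed seed are arbitrary choices.

import numpy as np

# BSC simulation + syndrome computation for the illustrative (5, 2) code above.
P = np.array([[1, 0, 1],
              [0, 1, 1]])
G = np.hstack([P, np.eye(2, dtype=int)])               # [P | I_k]
H = np.hstack([np.eye(3, dtype=int), P.T])             # [I_{n-k} | P^T]

rng = np.random.default_rng(0)
p = 0.1                                                # BSC crossover probability

c = (np.array([1, 0]) @ G) % 2                         # transmitted codeword
e = (rng.random(5) < p).astype(int)                    # random BSC error pattern
r = (c + e) % 2                                        # received vector r = c + e

print((r @ H.T) % 2, (e @ H.T) % 2)                    # identical: s depends only on e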

• Example:
• A repetition code of rate 1/5:
  • G = [1 1 1 1 | 1], k = 1, n = 5
  • c1 = 00000, c2 = 11111, dmin = 5
  • It is guaranteed to correct up to 2 errors.
  • It is guaranteed to detect up to 4 errors.
• Assume e = [01001] and c = [11111], so r = [10110].

      s = rH^T = [1 0 1 1 0] [ 1 0 0 0
                               0 1 0 0
                               0 0 1 0
                               0 0 0 1
                               1 1 1 1 ] = [1 0 1 1]

Hard Decision Decoding Of LBC’s


• For BSC, minimum distance decoding is optimum in the sense of
minimizing error probability.
• An efficient way of performing min distance decoding is syndrome
decoding.
• Recall that s = rH^T = eH^T; we have 2^(n−k) different syndromes but 2^n possible error patterns.
• Hence, several error patterns result in the same syndrome.
• Note: s depends only on e.
• We try to find the most likely error pattern that results in a given
syndrome.

• Syndrome decoding:
• I) We form the standard array:
  1. List all the codewords in the first row, starting with the all-zeros codeword.
  2. List error patterns (coset leaders) in the first column, starting with the lowest-weight error vectors (i.e., all single errors, then all double errors, and so on).
  3. After inserting each error vector (coset leader), add the coset leader to all codewords in the first row to complete the coset. Each chosen coset leader must not have appeared in the array thus far (steps 2 and 3 are performed together).
  • Repeat steps 2 and 3 until all 2^(n−k) rows are filled.


• The standard array (coset leaders in the first column; 2^(n−k) cosets in total):

      c1 = e1   c2        c3        ...   c_{2^k}     ← all codewords
      e2        c2 + e2   c3 + e2   ...
      e3        c2 + e3   c3 + e3   ...
      ...

• II) We form the syndrome table from e1·H^T, e2·H^T, ..., e_{2^(n−k)}·H^T, i.e., from the coset leaders:

      Syndrome                         Coset leader
      s1 = e1·H^T                      e1
      s2 = e2·H^T                      e2
      ...                              ...
      s_{2^(n−k)} = e_{2^(n−k)}·H^T    e_{2^(n−k)}
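A condensed sketch of steps I and II: instead of storing the full standard array, it scans error patterns in order of increasing weight and records the first pattern seen for each of the 2^(n−k) syndromes as that coset leader. It uses the (5, 2) code from the example on the following slides; note that coset leaders of equal weight are not unique, so the last two entries may legitimately differ from the table shown later.

import numpy as np
from itertools import product

G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1]])                 # [I_k | P], the (5, 2) example code
P = G[:, 2:]
H = np.hstack([P.T, np.eye(3, dtype=int)])      # H = [P^T | I_{n-k}]

# error patterns sorted by weight: all-zeros, then single errors, then double errors, ...
error_patterns = sorted(product([0, 1], repeat=5), key=sum)

syndrome_table = {}
for e in error_patterns:
    s = tuple((np.array(e) @ H.T) % 2)
    if s not in syndrome_table:                 # keep the lowest-weight (most likely) pattern
        syndrome_table[s] = e
    if len(syndrome_table) == 2 ** 3:           # all 2^(n-k) distinct syndromes found
        break

for s, e in syndrome_table.items():
    print(s, e)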

• III) Now, we are ready to decode.

• We compute the syndrome s from the received vector r and pick the corresponding most likely error pattern ê.
• Then

      ĉ = r + ê = c + e + ê  (= c if ê = e)
• Ex: A code with

      G = [ 1 0 1 0 1
            0 1 0 1 1 ] = [I_k | P]

• Assume that the codeword 01011 was sent. The received vector was:
  a) 11011 (1 error)
  b) 11001 (2 errors)
  c) 11010 (2 errors)
• Decode in each case.

• Solution: to get the codewords, compute c = mG for all m:

      m       c = mG
      [00]    [00000]
      [01]    [01011]
      [10]    [10101]
      [11]    [11110]

• Hence dmin = 3.
• Standard array:

      00000  01011  10101  11110
      10000  11011  00101  01110
      01000  00011  11101  10110
      00100  01111  10001  11010
      00010  01001  10111  11100
      00001  01010  10100  11111
      11000  10011  01101  00110
      10010  11001  00111  01100

• To calculate the syndrome table, we must obtain H^T.
• Since G = [I_k | P], we have H = [P^T | I_{n−k}], and therefore

      H^T = [ P
              I_{n−k} ] = [ 1 0 1
                            0 1 1
                            1 0 0
                            0 1 0
                            0 0 1 ]

      Syndrome    Error pattern (coset leader)
      000         00000
      001         00001
      010         00010
      100         00100
      011         01000
      101         10000
      110         11000
      111         10010

a) r = 11011
  • s = rH^T = [101]
  • From the syndrome table, ê = 10000.
  • ĉ = r + ê = 01011 (which is the transmitted codeword).
  • dmin = 3 guarantees correcting all single errors.
b) r = 11001
  • s = rH^T = [111], therefore ê = 10010.
  • ĉ = r + ê = 01011 (which is the transmitted codeword).
c) r = 11010 — exercise: complete (a decoding sketch for cases (a) and (b) follows below).
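A sketch of the complete syndrome-decoding loop for this example, using the syndrome table from the previous slide; it reproduces cases (a) and (b), and case (c) is left for the reader to run as the exercise.

import numpy as np

# Syndrome decoding of the (5, 2) example code: compute s = rH^T, look up the
# coset leader e_hat, and output c_hat = r + e_hat (mod 2).
G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1]])                         # [I_k | P]
H = np.hstack([G[:, 2:].T, np.eye(3, dtype=int)])       # H = [P^T | I_{n-k}]

# Syndrome table from the slide above (syndrome -> coset leader).
table = {
    (0, 0, 0): (0, 0, 0, 0, 0), (0, 0, 1): (0, 0, 0, 0, 1),
    (0, 1, 0): (0, 0, 0, 1, 0), (1, 0, 0): (0, 0, 1, 0, 0),
    (0, 1, 1): (0, 1, 0, 0, 0), (1, 0, 1): (1, 0, 0, 0, 0),
    (1, 1, 0): (1, 1, 0, 0, 0), (1, 1, 1): (1, 0, 0, 1, 0),
}

def decode(r):
    r = np.array(r)
    s = tuple((r @ H.T) % 2)        # the syndrome depends only on the error pattern
    e_hat = np.array(table[s])      # most likely error pattern for this syndrome
    return (r + e_hat) % 2          # c_hat = r + e_hat

print(decode([1, 1, 0, 1, 1]))  # case (a): [0 1 0 1 1], the transmitted codeword
print(decode([1, 1, 0, 0, 1]))  # case (b): [0 1 0 1 1] again, since 10010 is the coset leader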
