ECE4601
Communication Systems

Week 15

Error Detection and Correction

Standard Array and Syndrome Decoding


Convolutional Codes

© 2011, Georgia Institute of Technology

Error Detection

• A linear block code can detect all error patterns of d_min − 1 or fewer errors.
• If the error pattern e ≠ 0 is itself a codeword, then no errors are detected: the syndrome of y = c + e is s = (c + e)H^T = eH^T = 0.
• Hence there are 2^k − 1 undetectable error patterns (the nonzero codewords), out of 2^n − 1 possible nonzero error patterns.
• The number of detectable error patterns is

  2^n − 1 − (2^k − 1) = 2^n − 2^k

• Usually, 2^k − 1 is a small fraction of 2^n − 2^k.


• Example: (7,4) Hamming code. There are 2^4 − 1 = 15 undetectable error patterns and 2^7 − 2^4 = 112 detectable error patterns.


Weight Distribution

• Let A_i be the number of codewords of weight i.
• The set {A_0, A_1, . . . , A_n} is called the weight distribution.
• The weight distribution can be expressed as a weight enumerator polynomial

  A(z) = A_0 z^0 + A_1 z^1 + · · · + A_n z^n

• Example: (7,4) Hamming code

  A_0 = 1,  A_3 = 7,  A_4 = 7,  A_7 = 1

  A(z) = 1 + 7z^3 + 7z^4 + z^7
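
As a check, the weight distribution can be computed by enumerating all 2^7 binary 7-tuples and keeping those in the null space of the parity-check matrix H (the H for this code is given on the syndrome decoding slide below); a minimal Python sketch:

from itertools import product

# Parity-check matrix of the (7,4) Hamming code (from the syndrome slide).
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

# x is a codeword iff every parity check is satisfied: H x^T = 0 (mod 2).
def is_codeword(x):
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

A = [0] * 8                     # A[i] = number of codewords of weight i
for x in product((0, 1), repeat=7):
    if is_codeword(x):
        A[sum(x)] += 1

print(A)   # [1, 0, 0, 7, 7, 0, 0, 1], i.e. A(z) = 1 + 7z^3 + 7z^4 + z^7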


Probability of Undetected Error

• The probability of an undetected error is

  P_e(U) = P(e is a nonzero codeword) = \sum_{i=1}^{n} A_i P(w(e) = i)

• For a binary symmetric channel, each particular error pattern of weight i occurs with probability P(w(e) = i) = p^i (1 − p)^{n−i}, and

  P_e(U) = \sum_{i=1}^{n} A_i p^i (1 − p)^{n−i}

• Example: For the (7,4) Hamming code

  P_e(U) = 7p^3(1 − p)^4 + 7p^4(1 − p)^3 + p^7

  – For p = 10^−2, P_e(U) ≈ 7 × 10^−6.
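
The example figure is easy to reproduce from the weight distribution; a minimal sketch (the helper name is illustrative):

# P_e(U) for the (7,4) Hamming code on a BSC with crossover probability p,
# using the weight enumerator A(z) = 1 + 7z^3 + 7z^4 + z^7.
def p_undetected(p, A=(1, 0, 0, 7, 7, 0, 0, 1), n=7):
    # i = 0 is skipped: the all-zero pattern means no error occurred.
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

print(p_undetected(1e-2))   # ~6.8e-06, i.e. about 7 x 10^-6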


Error Correction

• A linear block code can correct all error patterns of t or fewer errors, where

  t = ⌊(d_min − 1)/2⌋

  and ⌊x⌋ is the largest integer ≤ x.
• A code is usually capable of correcting many error patterns of t + 1 or more errors, but not all of them. In fact, up to 2^{n−k} error patterns may be corrected, equal to the number of syndromes.
• The probability of error for a binary symmetric channel is

  P(E) ≤ 1 − P(t or fewer errors) = 1 − \sum_{i=0}^{t} \binom{n}{i} p^i (1 − p)^{n−i}
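
For the (7,4) Hamming code (d_min = 3, so t = 1) the bound is easy to evaluate; a minimal sketch:

from math import comb

# Bound on block error probability over a BSC:
# P(E) <= 1 - sum_{i=0}^{t} C(n,i) p^i (1-p)^(n-i).
def block_error_bound(n, t, p):
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

print(block_error_bound(n=7, t=1, p=1e-2))   # ~2.0e-03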


The Standard Array

1. Write out all 2^k codewords in a row, starting with c_0 = 0.
2. From the remaining 2^n − 2^k n-tuples, select an error pattern e_2 of weight 1 and place it under c_0. Under each codeword c_i, put c_i + e_2.
3. Select a minimum-weight error pattern e_3 from the remaining unused n-tuples and place it under c_0 = 0. Under each codeword c_i, put c_i + e_3.
4. Repeat step 3 until all n-tuples have been used.


Example
 
G = [ 1 1 0 0
      0 1 0 1 ]

        0000   1100   0101   1001
e_2:    0001   1101   0100   1000
e_3:    0010   1110   0111   1011
e_4:    0011   1111   0110   1010

Property: every n-tuple appears once and only once in the array.
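
The construction is mechanical; a Python sketch that rebuilds this array (the codeword order and the choice among equal-weight coset leaders are arbitrary, so the printout may be permuted):

from itertools import product

n, k = 4, 2
G = [(1, 1, 0, 0),
     (0, 1, 0, 1)]

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

# All 2^k codewords: binary combinations of the rows of G.
codewords = []
for msg in product((0, 1), repeat=k):
    c = (0,) * n
    for bit, row in zip(msg, G):
        if bit:
            c = add(c, row)
    codewords.append(c)

# Repeatedly pick a minimum-weight unused n-tuple as the next coset
# leader and place its coset under the codeword row.
used = set(codewords)
array = [codewords]
for e in sorted(product((0, 1), repeat=n), key=sum):
    if e not in used:
        row = [add(c, e) for c in codewords]
        array.append(row)
        used.update(row)

for row in array:
    print('  '.join(''.join(map(str, v)) for v in row))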


Error Correction

• When y is received, find y in the standard array. If y is in row i and column j, then e_i is the most likely error pattern, and y is decoded into y + e_i = c_j.
• A code is capable of correcting all the e_i, called coset leaders. If the error pattern is not a coset leader, then erroneous decoding will result.
• Every (n, k) linear code can correct 2^{n−k} error patterns, including the 0 vector. Recall that the same code can detect 2^n − 2^k error patterns. For large n,

  2^{n−k} / (2^n − 2^k) ≈ 2^{−k}

  is a small number. Hence, the code can detect many more errors than it can correct.


Syndrome Decoding

• Fact: all 2^k n-tuples in the same row (coset) of the standard array have the same syndrome.

1. Compute the syndrome s = yH^T.
2. Locate the coset leader e_ℓ for which e_ℓ H^T = s.
3. Decode y into y + e_ℓ.


(7,4) Hamming Code


 

H = [ 1 0 0 1 0 1 1
      0 1 0 1 1 1 0
      0 0 1 0 1 1 1 ]

     e          s
  0000000      000
  0000001      101
  0000010      111
  0000100      011
  0001000      110
  0010000      001
  0100000      010
  1000000      100

Example: Receive y = (1110000). Compute s = yH^T = (111). The coset leader with syndrome (111) is e = (0000010), so decode y into c = (1110000) + (0000010) = (1110010).
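
The table-lookup decoder is only a few lines of Python; a sketch that reproduces this example:

H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]
n = 7

def syndrome(y):
    # s = y H^T (mod 2): one parity check per row of H.
    return tuple(sum(h * yi for h, yi in zip(row, y)) % 2 for row in H)

# Coset-leader table: map the syndrome of each single-bit error to it.
leaders = {(0, 0, 0): (0,) * n}
for i in range(n):
    e = tuple(int(j == i) for j in range(n))
    leaders[syndrome(e)] = e

y = (1, 1, 1, 0, 0, 0, 0)
e = leaders[syndrome(y)]                      # (0, 0, 0, 0, 0, 1, 0)
c = tuple((a + b) % 2 for a, b in zip(y, e))
print(syndrome(y), c)                         # (1, 1, 1) (1, 1, 1, 0, 0, 1, 0)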


Rate-1/2 Encoder
[Figure: the input a feeds a shift register; two modulo-2 adders form the outputs b^(1) and b^(2).]

Rate 1/2 binary convolutional encoder.


Rate-2/3 Encoder

[Figure: the inputs a^(1) and a^(2) feed shift registers; modulo-2 adders form the outputs b^(1), b^(2), and b^(3).]

Rate 2/3 binary convolutional encoder.


Generator Sequences

• The connections to the shift register are described by generator sequences.

  – Rate-1/2 code example

      g^(1) = (1, 1, 1)
      g^(2) = (1, 0, 1)

  – Rate-2/3 code example

      g_1^(1) = (1, 1),  g_1^(2) = (0, 1),  g_1^(3) = (1, 1)
      g_2^(1) = (0, 1),  g_2^(2) = (1, 0),  g_2^(3) = (1, 0)

• The contribution of the ith input sequence a^(i) to the jth output is the modulo-2 convolution

  b_i^(j) = a^(i) ∗ g_i^(j)
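
Encoding is just modulo-2 convolution of the input with each generator sequence; a minimal sketch for the rate-1/2 example:

# b_l = sum_m g_m a_{l-m} (mod 2), with the register assumed zero
# before and after the input sequence.
def conv_mod2(a, g):
    out_len = len(a) + len(g) - 1
    return [sum(g[m] * a[l - m] for m in range(len(g))
                if 0 <= l - m < len(a)) % 2 for l in range(out_len)]

def encode(a, g1=(1, 1, 1), g2=(1, 0, 1)):
    b1, b2 = conv_mod2(a, g1), conv_mod2(a, g2)
    return [bit for pair in zip(b1, b2) for bit in pair]   # interleave

print(encode([1, 0, 1, 1]))   # -> 11 10 00 01 01 11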


State Description

• The state of the encoder at time ℓ is

  ̺_ℓ = ( a_{ℓ−1}^{(1)}, · · · , a_{ℓ−ν_1}^{(1)} ; · · · ; a_{ℓ−1}^{(k)}, · · · , a_{ℓ−ν_k}^{(k)} )

  where ν_i is the length of the shift register for input i.
• There are a total of N_S = 2^{ν_T} encoder states, where ν_T = \sum_{i=1}^{k} ν_i is defined as the total encoder memory.
• For a rate-1/n code, the encoder state at epoch ℓ is simply

  ̺_ℓ = ( a_{ℓ−1}, · · · , a_{ℓ−ν} )


Rate-1/2 Code

[Figure: four-state diagram, states σ(0)–σ(3), with branches labeled input/output: 0/00, 1/11, 0/10, 1/01, 0/11, 1/00, 0/01, 1/10.]

State diagram for the rate-1/2 binary convolutional encoder example.
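
The diagram can be regenerated by stepping the encoder through every state and input; a sketch with states written as (a_{ℓ−1}, a_{ℓ−2}):

g1, g2 = (1, 1, 1), (1, 0, 1)

def step(state, a):
    window = (a,) + state                    # (a_l, a_{l-1}, a_{l-2})
    b1 = sum(g * w for g, w in zip(g1, window)) % 2
    b2 = sum(g * w for g, w in zip(g2, window)) % 2
    return (a, state[0]), (b1, b2)           # next state, output pair

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for a in (0, 1):
        nxt, (b1, b2) = step(state, a)
        print(f"{state} --{a}/{b1}{b2}--> {nxt}")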


Augmented State Diagram

[Figure: augmented state diagram — the zero state σ(0) is split into source and sink nodes, its self-loop is removed, and each branch carries a gain D^w N^u L, where w is the Hamming weight of the branch output and u is the number of input ones on the branch.]

Modified state diagram for the rate-1/2 binary convolutional encoder.


Transfer Function

• Any appropriate technique can be used to obtain the transfer function, such as Mason's gain formula.
• For the rate-1/2 code example

  T(D, N, L) = D^5 N L^3 / (1 − DNL(L + 1))
             = D^5 N L^3 + D^6 N^2 L^4 (L + 1) + D^7 N^3 L^5 (L + 1)^2
               + · · · + D^{k+5} N^{k+1} L^{k+3} (L + 1)^k + · · ·

• The term D^{k+5} N^{k+1} L^{k+3} (L + 1)^k means there are 2^k paths at Hamming distance k + 5 from the all-zeroes path, caused by k + 1 input ones. Of these 2^k paths, \binom{k}{n} have length k + n + 3, for n = 0, 1, . . . , k.
• The free Hamming distance is d_free = 5; this is the weight of the minimum-weight nonzero codeword.
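
The series expansion can be reproduced symbolically, since T = X/(1 − Y) = X(1 + Y + Y^2 + · · ·) with X = D^5NL^3 and Y = DNL(L + 1); a sympy sketch:

from sympy import symbols, expand

D, N, L = symbols('D N L')
X = D**5 * N * L**3       # gain of the shortest detour from the zero path
Y = D * N * L * (L + 1)   # gain of the feedback loop

# First three terms of X * (1 + Y + Y^2 + ...)
print(expand(sum(X * Y**m for m in range(3))))
# D**5*L**3*N + D**6*L**4*N**2 + D**6*L**5*N**2 + D**7*L**5*N**3 + ...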


Trellis Diagram
[Figure: trellis over epochs 0–8 for states σ(0)–σ(3); each branch is labeled with its output pair, drawn in one line style for input 1 and another for input 0.]

Trellis diagram for the rate-1/2 binary convolutional code.
