
ECE 141 Lecture 12:

Linear Block Codes


Properties of Linear Block Codes, Hamming Distance and Hamming Weight, Examples of Linear
Block Codes

ECE 141: DIGITAL COMMUNICATIONS 1


Errors in Communication
❑ Suppose we have Pe = 10⁻³ (seems pretty good)
❑ However, this is not good enough…
❑ Consider a file mapped and converted into n = 250 symbols
❑ The file cannot be reconstructed if even one symbol is wrong
❑ The “file” probability of error is 1 − (1 − Pe)ⁿ ≈ nPe = 250 × 10⁻³ = 0.25
❑ This is bad! However, we cannot do much, because noise is inevitable
❑ Modulation only operates at the symbol level, not the file level
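The arithmetic above can be checked directly; a minimal sketch using the values from the slide:

```python
# Probability that at least one of n symbols is wrong, given the
# per-symbol error probability Pe from the slide.
Pe = 1e-3
n = 250

exact = 1 - (1 - Pe) ** n   # exact "file" error probability
approx = n * Pe             # small-Pe approximation used on the slide

print(round(exact, 4))  # 0.2213
print(approx)           # 0.25
```

The approximation nPe slightly overstates the exact value, as expected from the union bound.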



Discrete-Time AWGN
Channel Capacity

❑ Consider the I/O relationship for a discrete-time AWGN channel:

y[n] = x[n] + z[n],  with z[n] ~ N(0, σ²)

❑ Power constraint:

(1/N) Σ_{n=1}^{N} |x[n]|² ≤ P

❑ We need to identify how to design the code → use the sphere-packing interpretation


Sphere Packing Interpretation
❑ Consider an N-dimensional codebook such that the codewords lie on an N-dimensional sphere of radius √(NP) (due to the power constraint)
❑ Due to AWGN, we expect each received word to lie in the vicinity of the transmitted codeword, within radius √(Nσ²)
❑ We expect that most y will lie inside a sphere of radius √(N(P + σ²))
❑ Vanishing error probability → no overlapping spheres!


Sphere Packing Interpretation
❑ How many non-overlapping spheres can be packed into the larger sphere?

# of codewords = 2^{NR} ≤ (Volume of y-sphere)/(Volume of x-sphere) = (√(N(P + σ²)))^N / (√(Nσ²))^N

❑ Capacity (highest data rate) is given by:

R ≤ (1/N) log₂[(√(N(P + σ²)))^N / (√(Nσ²))^N] = (1/2) log₂(1 + P/σ²)
❑ How to achieve it?
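As a quick numeric sketch of the capacity formula just derived (the SNR value below is an arbitrary illustration, not from the slides):

```python
import math

# AWGN capacity C = (1/2) log2(1 + P/sigma^2), in bits per (real) channel use.
def awgn_capacity(P, sigma2):
    return 0.5 * math.log2(1 + P / sigma2)

print(awgn_capacity(P=15.0, sigma2=1.0))  # 2.0
```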



Linear Block Codes
Information (K bits) → Block Encoder → Codeword (N bits) → Discrete-time AWGN → Received word (N bits) → Block Decoder → Decoded information (K bits)

❑ A block (or word) of k data bits b = (b₀, b₁, …, b_{k−1}) is mapped into a block of n coded bits c = (c₀, c₁, …, c_{n−1}), where bᵢ and cᵢ can be 1 or 0
❑ A block code is linear if cᵢ + cⱼ is also a codeword (closure property).
▪ NOTE: Addition is modulo-2
❑ Example: (5,2) code

b         c
b₀ = 00   c₀ = 00000
b₁ = 01   c₁ = 01101
b₂ = 10   c₂ = 10011
b₃ = 11   c₃ = 11110

• A linear code must also contain the all-zero codeword, since cᵢ + cᵢ = 0
• A code is systematic if the K leftmost bits are the data bits
• The remaining (N − K) bits are called parity bits
Code Construction using
Generator Matrix
❑ A linear block code can be generated by using a generator matrix G with K rows and N columns:

c = mG

c = [c₁ c₂ … c_N] ∈ {0,1}ⁿ
m = [m₁ m₂ … m_K] ∈ {0,1}ᵏ
G = [gᵢⱼ] ∈ {0,1}^{K×N}

❑ Generator matrix in the previous example, the (5,2) code:

𝐆 = [ 1 0 0 1 1
      0 1 1 0 1 ]
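The encoding rule c = mG over GF(2) can be sketched with modulo-2 arithmetic, using the (5,2) generator above:

```python
# Block encoding c = mG (mod 2) for the (5,2) code on the slide.
G = [[1, 0, 0, 1, 1],
     [0, 1, 1, 0, 1]]

def encode(m, G):
    # each codeword bit is the mod-2 inner product of m with a column of G
    return [sum(mi * gij for mi, gij in zip(m, col)) % 2 for col in zip(*G)]

print(encode([0, 1], G))  # [0, 1, 1, 0, 1]
print(encode([1, 1], G))  # [1, 1, 1, 1, 0]
```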



Code Construction using
Generator Matrix
❑ A systematic code can be generated by a matrix G of the form:

𝐆 = [ 𝐈_{k×k} ⋮ 𝐏_{k×(n−k)} ]

where 𝐏 (k × (n − k)) is the generator for the parity-check bits.

❑ Intuition behind parity bits and message bits:
▪ We can define a matrix H (called the parity-check matrix) such that
𝐇 = [ 𝐏ᵀ ⋮ 𝐈_{(n−k)×(n−k)} ]
▪ Multiplication of the partitioned matrices (modulo-2) gives:
𝐇𝐆ᵀ = [ 𝐏ᵀ ⋮ 𝐈_{(n−k)×(n−k)} ] [ 𝐈_{k×k} ; 𝐏ᵀ ] = 𝐏ᵀ + 𝐏ᵀ = 𝟎
▪ What if we multiply a codeword by Hᵀ?

c𝐇ᵀ = m𝐆𝐇ᵀ = 𝟎
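The identity GHᵀ = 0 can be verified numerically; a sketch for the (5,2) code, with G = [I | P] and H = [Pᵀ | I] as defined on the slides:

```python
# Checks G H^T = 0 (mod 2) for the (5,2) code.
G = [[1, 0, 0, 1, 1],
     [0, 1, 1, 0, 1]]
H = [[0, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [1, 1, 0, 0, 1]]

def matmul_mod2(A, B):
    # modulo-2 matrix product
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

Ht = [list(col) for col in zip(*H)]  # H transpose
print(matmul_mod2(G, Ht))  # [[0, 0, 0], [0, 0, 0]]
```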



Notes
❑ A linear code is systematic if the 𝑘 leftmost bits of the codeword 𝒄 are the original uncoded message 𝒎:
𝒄 = [𝑚₁ … 𝑚ₖ 𝑐₁ … 𝑐ₙ₋ₖ]
❑ The bits 𝑐₁ … 𝑐ₙ₋ₖ are the parity bits.
❑ The parity matrix 𝑷 (𝑘 × (𝑛 − 𝑘)) is chosen such that the rows of the generator matrix 𝑮 are linearly independent.
❑ Using a generator matrix of the form [𝑰ₖ ⋮ 𝑷], the resulting linear code is called a systematic code.
❑ The parity-check matrix 𝑯 is used to check whether a codeword is valid.
❑ Addition of valid codewords results in another valid codeword.



Example: (5,2) Code
❑ The resulting codeword length is 5 bits and the input has 2 bits. The generator matrix is:

𝑮 = [ 1 0 0 1 1
      0 1 1 0 1 ]

❑ A codeword is generated using the equation:
𝒄 = 𝒎𝑮
❑ From the generator matrix, the components of the codeword 𝒄 = [𝑐₁ 𝑐₂ 𝑐₃ 𝑐₄ 𝑐₅] can be computed from 𝒎 = [𝑚₁ 𝑚₂]:
𝑐₁ = 𝑚₁   𝑐₂ = 𝑚₂   𝑐₃ = 𝑚₂
𝑐₄ = 𝑚₁   𝑐₅ = 𝑚₁ + 𝑚₂



Example: (5,2) Code
𝑐1 = 𝑚1 𝑐2 = 𝑚2 𝑐3 = 𝑚2
𝑐4 = 𝑚1 𝑐5 = 𝑚1 + 𝑚2
𝒎 𝒄
00 00000
01 01101
10 10011
11 11110

❑ The 𝟎 vector is always present in a linear code.
❑ Any other combination of bits is an invalid codeword and can be identified as such at the receiver.



Example: (5,2) Code
❑ The parity check matrix is:
0 1 1 0 0
𝑯= 1 0 0 1 0
1 1 0 0 1
𝒎 𝒄 𝒄𝑯𝑻
00 00000 0
01 01101 0
10 10011 0
11 11110 0



Example: (7,3) Code
❑ Determine the generator matrix of the following set of
codewords and corresponding messages:
𝒎 𝒄
000 0000000
001 0011011
010 0101110
011 0110101
100 1001101
101 1010110
110 1100011
111 1111000



Example: (7,3) Code
❑ The generator is of the form:
𝑮 = [𝑰ₖ 𝑷]
❑ Solve for 𝑷, which is a 𝑘 × (𝑛 − 𝑘) → 3 × 4 matrix:

𝑷 = [ 𝑝₁₁ 𝑝₁₂ 𝑝₁₃ 𝑝₁₄
      𝑝₂₁ 𝑝₂₂ 𝑝₂₃ 𝑝₂₄
      𝑝₃₁ 𝑝₃₂ 𝑝₃₃ 𝑝₃₄ ]

❑ Use three distinct messages that are not 𝟎; the parity bits of each resulting codeword determine one row of 𝑷.
❑ For this example, we’ll use 001, 010, and 100. For 𝑝₁₁, 𝑝₂₁, and 𝑝₃₁:
𝑝₃₁ = 1   𝑝₂₁ = 1   𝑝₁₁ = 1



Example: (7,3) Code
𝑝11 𝑝12 𝑝13 𝑝14
𝑷 = 𝑝21 𝑝22 𝑝23 𝑝24
𝑝31 𝑝32 𝑝33 𝑝34
❑ For 𝑝12 , 𝑝22 , and 𝑝32 :
𝑝32 = 0 𝑝22 = 1 𝑝12 = 1
❑ For 𝑝13 , 𝑝23 , and 𝑝33 :
𝑝33 = 1 𝑝23 = 1 𝑝13 = 0
❑ For 𝑝14 , 𝑝24 , 𝑝34 :
𝑝34 = 1 𝑝24 = 0 𝑝14 = 1
❑ The generator matrix is:
1 0 0 1 1 0 1
𝑮= 0 1 0 1 1 1 0
0 0 1 1 0 1 1
Hamming Weight
❑ The weight of a codeword 𝒄 ∈ 𝐶 is denoted by 𝑤 𝒄 which
is the number of nonzero components in the codeword.
𝒎 𝒄 𝑤 𝒄
00 00000 0
01 01101 3
10 10011 3
11 11110 4
❑ For a linear code, there is exactly one codeword with a weight of 0, which is the 𝟎 codeword.
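A minimal sketch of the weight computation, applied to the (5,2) codewords in the table above:

```python
# Hamming weight = number of nonzero components of a codeword.
def weight(c):
    return sum(1 for bit in c if bit != 0)

codewords = [[0,0,0,0,0], [0,1,1,0,1], [1,0,0,1,1], [1,1,1,1,0]]
print([weight(c) for c in codewords])  # [0, 3, 3, 4]
```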



Hamming Distance
❑ For any codewords 𝒄₁, 𝒄₂ ∈ 𝐶, the distance between the two codewords is:
𝑑 = 𝑤(𝒄₁ − 𝒄₂)
❑ In the field GF(2), subtraction and addition are the same, thus:
𝑑 = 𝑤(𝒄₁ + 𝒄₂)
❑ Since 𝒄𝟏 + 𝒄𝟐 is also a valid codeword, it follows that the set
of distances between valid codewords is equal to the set of
weights of valid codewords.



Minimum Distance and
Minimum Weight
❑ The minimum distance of a code is the minimum of all possible distances between two distinct codewords:

𝑑ₘᵢₙ = min 𝑑(𝒄₁, 𝒄₂) over 𝒄₁, 𝒄₂ ∈ 𝐶, 𝒄₁ ≠ 𝒄₂

❑ The minimum weight of a set of codewords is the minimum weight excluding the zero codeword:

𝑤ₘᵢₙ = min 𝑤(𝒄) over 𝒄 ∈ 𝐶, 𝒄 ≠ 𝟎

❑ For a set of valid codewords:

𝑑ₘᵢₙ = 𝑤ₘᵢₙ
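This equality can be checked by brute force on the (5,2) code; a sketch comparing pairwise distances against nonzero-codeword weights:

```python
from itertools import combinations

# d_min via pairwise Hamming distances; w_min via nonzero-codeword weights.
codewords = [[0,0,0,0,0], [0,1,1,0,1], [1,0,0,1,1], [1,1,1,1,0]]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

d_min = min(distance(a, b) for a, b in combinations(codewords, 2))
w_min = min(sum(c) for c in codewords if any(c))
print(d_min, w_min)  # 3 3
```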



Example: (7,4) Code
❑ Solve for the minimum Hamming weight from the codewords
generated by 𝑮 below.
1 0 0 0 1 0 1
𝑮= 0 1 0 0 1 1 1
0 0 1 0 1 1 0
0 0 0 1 0 1 1
❑ The elements of a codeword are defined by the equations:
𝑐₁ = 𝑚₁   𝑐₂ = 𝑚₂   𝑐₃ = 𝑚₃   𝑐₄ = 𝑚₄
𝑐₅ = 𝑚₁ + 𝑚₂ + 𝑚₃
𝑐₆ = 𝑚₂ + 𝑚₃ + 𝑚₄
𝑐₇ = 𝑚₁ + 𝑚₂ + 𝑚₄



Example: (7,4) Code
❑ The minimum weight is 𝑤ₘᵢₙ = 3, which is also the minimum distance.

𝒎      𝒄         𝑤(𝒄)
0000   0000000   0
0001   0001011   3
0010   0010110   3
0011   0011101   4
0100   0100111   4
0101   0101100   3
0110   0110001   3
0111   0111010   4
1000   1000101   3
1001   1001110   4
1010   1010011   4
1011   1011000   3
1100   1100010   3
1101   1101001   4
1110   1110100   4
1111   1111111   7



Effective Data Rate
❑ Generally, a linear block code has 𝑛 bits for a valid codeword
and 𝑘 bits for the message.
❑ If we transmit through the channel at a rate of 𝑅_b bits per second, the rate of useful data transferred is only:

𝑅̃_b = (𝑘/𝑛) 𝑅_b

❑ We transmit useful data at only a fraction of the actual data rate.
❑ This is the trade-off for reliable communication.





Repetition Code
❑ Idea: repeat each bit N times ⟹ data rate R = 1/N
❑ There are many ways to realize repetition; we focus on its block-code formulation. [system diagram not reproduced]


Repetition Code
❑ A code with message length equal to 1 and codeword length 𝑁.
❑ A data bit is transmitted 𝑁 times, which results in only 2 valid codewords.

𝒎   𝒄         𝑤(𝒄)
0   000 … 0   0
1   111 … 1   𝑁

❑ The minimum distance and weight are both 𝑁. This results in a highly reliable communication system at the expense of data rate:

𝑅̃_b = 𝑅_b / 𝑁
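Decoding a repetition code reduces to a majority vote; a minimal sketch (N odd so the vote cannot tie):

```python
# Majority-vote decoding for an N-times repetition code (N odd):
# the decoder outputs the bit value that occurs most often.
def repetition_decode(r):
    return 1 if sum(r) > len(r) / 2 else 0

print(repetition_decode([1, 0, 0, 1, 1]))  # 1 (two flips corrected for N = 5)
print(repetition_decode([0, 1, 0, 0, 0]))  # 0
```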



Even Parity/Odd Parity Check
❑ This is a type of code with 𝑘 = 𝑛 − 1, which means only 1 bit is used for the parity check.
❑ Even parity means every codeword has even weight; otherwise, it is odd parity.
❑ The generator matrix is:
𝑮 = [𝑰ₙ₋₁ 𝑷]
❑ where 𝑷 is an (𝑛 − 1) × 1 all-ones matrix; this results in an even parity check. Complementing the parity bit (adding 1 to the product of the message with 𝑷) results in an odd parity check.



Example: Even Parity
❑ Fill in the following table with the corresponding codewords if an even parity check is used.
𝒎 𝒄
000 0000
001 0011
010 0101
011 0110
100 1001
101 1010
110 1100
111 1111
❑ Note: Regardless of message length, the minimum distance is
always 𝑑𝑚𝑖𝑛 = 2.



Hamming Code
❑ These are linear block codes with parameters 𝑛 = 2^𝑚 − 1 and 𝑘 = 2^𝑚 − 𝑚 − 1 for some integer 𝑚 ≥ 3, where 𝑚 = 𝑛 − 𝑘 is the number of parity bits.
❑ The parity-check matrix 𝑯 is an 𝑚 × (2^𝑚 − 1) matrix whose columns are all possible nonzero binary vectors of length 𝑚.
❑ Example: (7,4) Hamming code

𝑮 = [ 1 0 0 0 1 0 1
      0 1 0 0 1 1 1
      0 0 1 0 1 1 0
      0 0 0 1 0 1 1 ]

𝑯 = [ 1 1 1 0 1 0 0
      0 1 1 1 0 1 0
      1 1 0 1 0 0 1 ]

The columns of 𝑯 are all possible vectors of length 3 except 000.



Hamming Code
❑ The data rate of a Hamming code is:

𝑅̃_b = (2^𝑚 − 𝑚 − 1)/(2^𝑚 − 1) · 𝑅_b

❑ As 𝑚 → ∞, 𝑅̃_b → 𝑅_b, which means minimal data rate is lost as 𝑚 becomes larger.
❑ The minimum distance and weight of the Hamming code are 𝑑ₘᵢₙ = 3 regardless of the value of 𝑚.
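A sketch of how the rate k/n = (2^m − m − 1)/(2^m − 1) approaches 1 as m grows:

```python
# Hamming-code rate k/n as a function of the number of parity bits m.
def hamming_rate(m):
    return (2**m - m - 1) / (2**m - 1)

print(round(hamming_rate(3), 3))   # 0.571 -> the (7,4) code
print(round(hamming_rate(10), 3))  # 0.99
```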



Maximum-Length Codes
❑ Maximum-length codes are the dual of Hamming codes: the generator matrix of a maximum-length code is equal to the parity-check matrix of the corresponding Hamming code.

𝑮_MLC = 𝑯_HC

❑ Example: the (7,4) Hamming code becomes the (7,3) maximum-length code:

𝑮 = [ 1 1 1 0 1 0 0
      0 1 1 1 0 1 0
      1 1 0 1 0 0 1 ]

The columns are all possible vectors of length 3 except 000.



Example: (7,3) Maximum-
Length Codes
𝒎 𝒄
000 0000000
001 1101001
010 0111010
011 1010011
100 1110100
101 0011101
110 1001110
111 0100111

❑ Except for the 𝟎 codeword, the weights are all equal with:
𝑤 = 2𝑚−1 = 4



Maximum-Length Codes
❑ For any message length 𝑚, the weight of every nonzero maximum-length codeword is 𝑤 = 2^{𝑚−1}.
❑ The rate of the code is:

𝑅̃_b = 𝑚/(2^𝑚 − 1) · 𝑅_b

❑ Unlike the Hamming code, the maximum-length code’s rate decreases as 𝑚 increases: the maximum-length code parallels the repetition code, just as the Hamming code parallels the even parity check.
❑ The trade-off between minimum distance/weight and data rate can be observed.



Reed-Muller Code
❑ The codeword length is 𝑛 = 2^𝑚 with order 𝑟 < 𝑚.
❑ The message length 𝑘 is:

𝑘 = Σ_{𝑖=0}^{𝑟} (𝑚 choose 𝑖)

❑ This setup achieves a minimum distance of 2^{𝑚−𝑟}.
❑ The Reed-Muller code has flexible parameters and simple decoding algorithms.



Reed-Muller Code
❑ The 𝑟-th order Reed-Muller code has a generator matrix defined by the structure:

𝑮 = [ 𝑮₀ ; 𝑮₁ ; … ; 𝑮ᵣ ]

❑ 𝑮₀ is a 1 × 𝑛 matrix whose elements are all ones. 𝑮₁ is an 𝑚 × 𝑛 matrix whose columns are the distinct binary sequences of length 𝑚 in natural binary order (the binary representations of 0, 1, …, 2^𝑚 − 1, MSB in the top row):

𝑮₁ = [ 0 0 ⋯ 1 1
       ⋮ ⋮ ⋱ ⋮ ⋮
       0 0 ⋯ 1 1
       0 1 ⋯ 0 1 ]



Reed-Muller Code
❑ The matrix 𝑮₂ is a bigger matrix whose rows are the bitwise products of 2 rows of 𝑮₁. This makes 𝑮₂ an (𝑚 choose 2) × 𝑛 matrix.
❑ Similarly, the rows of 𝑮ᵢ for 3 ≤ 𝑖 ≤ 𝑟 contain the bitwise products of some 𝑖 rows of 𝑮₁.
❑ Example: first-order (2³, 4) Reed-Muller code generator:

𝑮 = [ 1 1 1 1 1 1 1 1
      0 0 0 0 1 1 1 1
      0 0 1 1 0 0 1 1
      0 1 0 1 0 1 0 1 ]

❑ NOTE: The Reed-Muller code is not a systematic code.
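The first-order generator [G₀; G₁] above can be built programmatically; a sketch for general m:

```python
# First-order Reed-Muller generator: G0 is all ones, and the columns of G1
# are 0 .. 2^m - 1 in natural binary order (MSB in the top row of G1).
def rm1_generator(m):
    n = 2 ** m
    G0 = [[1] * n]
    G1 = [[(j >> (m - 1 - i)) & 1 for j in range(n)] for i in range(m)]
    return G0 + G1

for row in rm1_generator(3):
    print(row)
# [1, 1, 1, 1, 1, 1, 1, 1]
# [0, 0, 0, 0, 1, 1, 1, 1]
# [0, 0, 1, 1, 0, 0, 1, 1]
# [0, 1, 0, 1, 0, 1, 0, 1]
```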



Comparison of Codes

Code                           Min Distance/Weight   Code Rate
(𝑁, 1) Repetition Code         𝑁                     (1/𝑁) · 𝑅_b
(𝑁, 𝑁−1) Even Parity Check     2                     ((𝑁−1)/𝑁) · 𝑅_b
(𝑛, 𝑘) Hamming Code            3                     ((2^𝑚 − 𝑚 − 1)/(2^𝑚 − 1)) · 𝑅_b
(𝑛, 𝑘) Maximum-Length Code     2^{𝑚−1}               (𝑚/(2^𝑚 − 1)) · 𝑅_b
𝑟-th order Reed-Muller Code    2^{𝑚−𝑟}               (Σ_{𝑖=0}^{𝑟} (𝑚 choose 𝑖) / 2^𝑚) · 𝑅_b




Receiving Codewords
❑ From a receiver’s perspective, for an (𝑛, 𝑘) code there are 2^𝑛 possible words that it can receive.
❑ Only 2^𝑘 out of the 2^𝑛 possible words are valid codewords that can be decoded to their original messages.
❑ A received word 𝒓 is the noise-corrupted version of a transmitted codeword 𝒄, represented by this equation:

𝒓 = 𝒄 + 𝒆

❑ 𝒆 is the error vector, where 𝑤(𝒆) is the number of bit errors that occurred when 𝒄 was transmitted.



Error Detection using
Syndrome
❑ Suppose we sent the codeword c over a noisy binary channel. We can express the vector r as:

r = c + e,  where eᵢ = 1 if an error occurred at the i-th location, and eᵢ = 0 otherwise

❑ We can define a corresponding syndrome s for the vector r:

s = rHᵀ = (c + e)Hᵀ = cHᵀ + eHᵀ = eHᵀ

The parity-check matrix allows us to compute the syndrome, which depends only on the error.

❑ If there are no errors, then the syndrome is the zero vector!
❑ Q: Give a scenario wherein we are unable to detect an error.
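A sketch of the syndrome computation s = rHᵀ (mod 2), reusing the (5,2) parity-check matrix from the earlier slides:

```python
# Syndrome s = r H^T (mod 2): zero for valid codewords, nonzero when
# a detectable error is present.
H = [[0, 1, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [1, 1, 0, 0, 1]]

def syndrome(r, H):
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

print(syndrome([0, 1, 1, 0, 1], H))  # [0, 0, 0] -> valid codeword
print(syndrome([0, 1, 1, 1, 1], H))  # nonzero  -> error detected
```

Note the Q above: an error pattern equal to a valid codeword also yields the all-zero syndrome and therefore goes undetected.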



Importance of Minimum
Distance
❑ Importance of d_min: it defines the error-correcting capability of the code!
❑ Define: a t-error-correcting code is a code that can correct any error pattern e such that w(e) ≤ t.
❑ Using the nearest-neighbor strategy, we can correct all error patterns of Hamming weight w(e) ≤ t provided that the minimum distance is greater than or equal to 2t + 1.



Minimum Distance Criterion
❑ Geometric interpretation: decoding spheres of radius t around the codewords do not overlap when d(cᵢ, cⱼ) ≥ 2t + 1, but can overlap when d(cᵢ, cⱼ) ≤ 2t.
❑ An (n, k) linear block code of minimum distance d_min can correct up to t errors if and only if:

t ≤ ⌊(d_min − 1)/2⌋



Error Correction using
Syndrome
❑ Given: 2^k codeword vectors {c₁, c₂, …, c_{2^k}} and the received vector r.
❑ The decoder has the task of partitioning the 2^n possible received vectors into 2^k disjoint subsets. The resulting array of subsets is called the standard array.
❑ Standard array construction:
▪ The 2^k code vectors are placed in a row with the all-zero code vector c₁ as the leftmost element.
▪ An error pattern e₂ is picked and placed under c₁, and a second row is formed by adding e₂ to each of the remaining code vectors in the first row; it is important that the error pattern chosen as the first element in a row has not previously appeared in the standard array.
▪ Step 2 is repeated until all the possible error patterns have been accounted for.



Example: (7,3) Maximum
Length Code
❑ Solve for the syndrome of each possible codeword when
▪ there is no error
▪ there is one bit error at the most significant bit.

1 1 1 0 1 0 0
𝑮= 0 1 1 1 0 1 0
1 1 0 1 0 0 1
1 0 1 1 0 0 0
𝑯= 1 1 1 0 1 0 0
1 1 0 0 0 1 0
0 1 1 0 0 0 1



Example: (7,3) Maximum
Length Code
❑ No bit errors

𝒎 𝒄 𝒔 = 𝒄𝑯𝑻
000 0000000 0000
001 1101001 0000
010 0111010 0000
011 1010011 0000
100 1110100 0000
101 0011101 0000
110 1001110 0000
111 0100111 0000



Example: (7,3) Maximum
Length Code
❑ One bit error at the most significant bit

𝒎     𝒄 + 1000000   𝒔 = 𝒓𝑯ᵀ
000 1000000 1110
001 0101001 1110
010 1111010 1110
011 0010011 1110
100 0110100 1110
101 1011101 1110
110 0001110 1110
111 1100111 1110



Error Correction using
Syndrome
❑ Example of standard array construction for an (n,k) code: each row is a coset, and its leftmost entry is the coset leader. [array figure not reproduced]

❑ Syndrome decoding:
▪ Compute the syndrome s = rHT
▪ Within a coset characterized by the syndrome, identify the coset leader, e0
▪ Compute the code vector c = r + e0 as the decoded version of r



Error Correction Using Hard
Decisions
❑ Using the maximum-likelihood decision rule, the codeword with the minimum distance from the received word will be selected:

𝒄̂ = argmin 𝑤(𝒓 + 𝒄) over 𝒄 ∈ {𝒄₁, …, 𝒄_{2^𝑘}}

❑ This is equivalent to choosing the minimum-weight error pattern 𝒆̂ whose syndrome matches that of the received word:

𝒆̂𝑯ᵀ = 𝒓𝑯ᵀ,  then 𝒄̂ = 𝒓 + 𝒆̂

❑ where 𝑯 is the parity-check matrix. Note that the syndrome depends only on the error pattern, not on the transmitted codeword.



Undetected Errors
❑ To correct 𝑡 errors, 𝑑ₘᵢₙ ≥ 2𝑡 + 1 is required; error patterns with more bit errors may escape detection.
❑ For 𝑖 bit errors present in the received word, the number of possible error vectors is:

(𝑛 choose 𝑖) = 𝑛! / ((𝑛 − 𝑖)! 𝑖!)

❑ If 𝒆 ∈ {𝒄₁, 𝒄₂, …, 𝒄_{2^𝑘}} (a nonzero codeword), then the error becomes undetected (recall: codeword plus codeword equals codeword).



Example: (5,2) Code
❑ The generator and parity-check matrices for the (5,2) code are:

𝑮 = [ 1 0 0 1 1
      0 1 1 0 1 ]

𝑯 = [ 0 1 1 0 0
      1 0 0 1 0
      1 1 0 0 1 ]
❑ The corresponding message to codeword mapping is
𝒎 𝒄
00 00000
01 01101
10 10011
11 11110



Example: (5,2) Code
❑ Let the error be 𝒆 = 01101 (a valid codeword): the errors are undetected.

𝒎    𝒄 + 𝒆   𝒔 = 𝒓𝑯ᵀ
00   01101   000
01   00000   000
10   11110   000
11   10011   000

❑ Let the error be 𝒆 = 01100:

𝒎    𝒄 + 𝒆   𝒔 = 𝒓𝑯ᵀ
00   01100   001
01   00001   001
10   11111   001
11   10010   001



Undetected Errors
❑ An error is undetected exactly when the error vector equals a valid codeword other than the 𝟎 codeword.
❑ Given that the probability of bit error is 𝑃:

𝑃_ud = Σ 𝑃(𝒆 = 𝒄) over 𝒄 ∈ {𝒄₁, …, 𝒄_{2^𝑘}}, 𝒄 ≠ 𝟎

𝑃_ud = Σ_{𝑗=1}^{𝑛} 𝐴ⱼ 𝑃^𝑗 (1 − 𝑃)^{𝑛−𝑗}

❑ 𝐴ⱼ is the number of codewords with weight 𝑗.
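A numeric sketch for the (5,2) code, whose weight distribution from its codeword table is A₃ = 2, A₄ = 1; the bit-error probability P below is an illustrative value, not from the slides:

```python
# Undetected-error probability P_ud = sum_j A_j P^j (1-P)^(n-j)
# for the (5,2) code with weight distribution A_3 = 2, A_4 = 1.
P = 1e-2
n = 5
A = {3: 2, 4: 1}
P_ud = sum(Aj * P**j * (1 - P)**(n - j) for j, Aj in A.items())
print(round(P_ud, 9))  # 1.97e-06
```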



Probability of Error with Error
Correction
❑ There is a decoding error if 𝑤(𝒆) > 𝑡, where 𝑡 is the number of correctable bit errors in a received word.
❑ The upper bound on the probability of error is:

𝑃(𝑒) ≤ Σ_{𝑗=𝑡+1}^{𝑛} (𝑛 choose 𝑗) 𝑃^𝑗 (1 − 𝑃)^{𝑛−𝑗}

❑ Note that the probability of having 𝑡 + 1 errors is much greater than the probability of having 𝑡 + 2, which simplifies the upper bound:

𝑃(𝑒) ≈ (𝑛 choose 𝑡+1) 𝑃^{𝑡+1}

❑ Only valid if 𝑃 ≪ 1. Note: (1 − 𝑃)^{𝑛−𝑡−1} ≈ 1.
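How tight is the leading-term approximation? A sketch comparing it with the full tail sum; n, t, and P below are illustrative values:

```python
from math import comb

# Exact tail bound vs. leading-term approximation P(e) ~ C(n, t+1) P^(t+1).
n, t, P = 7, 1, 1e-3
exact_bound = sum(comb(n, j) * P**j * (1 - P)**(n - j)
                  for j in range(t + 1, n + 1))
approx = comb(n, t + 1) * P**(t + 1)

print(round(approx, 8))                # 2.1e-05
print(round(exact_bound / approx, 2))  # 1.0 (the approximation is tight for P << 1)
```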



Probability of Error with Error
Correction
❑ The effective bit error rate after channel coding becomes:

𝑃_b ≈ ((2𝑡 + 1)/𝑛) 𝑃(𝑒) = (2𝑡 + 1) · (𝑛 − 1)! / ((𝑡 + 1)! (𝑛 − 𝑡 − 1)!) · 𝑃^{𝑡+1}

where 2𝑡 + 1 is the number of bit errors in the most probable failure scenario when 𝑃 ≪ 1, and 𝑃(𝑒) is the probability that a codeword of length 𝑛 is in error.

❑ For 𝑡 = 1:

𝑃_b ≈ 1.5(𝑛 − 1)𝑃²

❑ Note that if 𝑃 ≪ 1, then 𝑃_b < 𝑃.



Example: (23,12) Triple Error
Correcting Code
❑ Let 𝑃_b = 10⁻⁵ in both the uncoded and coded cases using BPSK. The error-correcting capability of the code is 𝑡 = 3. Compare the SNR needed in both cases.
❑ For the uncoded case:

𝑃_b = 10⁻⁵ = 𝑄(√(2𝐸_b/𝑁₀)) → 𝐸_b/𝑁₀ = 9.6 dB



Example: (23,12) Triple Error
Correcting Code
❑ For the coded case:

𝑃_b = 10⁻⁵ = (2𝑡 + 1) · (𝑛 − 1)! 𝑃^{𝑡+1} / ((𝑡 + 1)! (𝑛 − 𝑡 − 1)!)

10⁻⁵ = 7 × (22! / (4! × 19!)) 𝑃⁴ → 𝑃 = 7.8 × 10⁻³

❑ The SNR becomes:

𝑄(√((2𝐸_b/𝑁₀) · (12/23))) = 7.8 × 10⁻³ → 𝐸_b/𝑁₀ = 7.5 dB

❑ The coding gain is 𝐶𝐺 = 9.6 − 7.5 = 2.1 dB.
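The slide's numbers can be re-derived numerically; a sketch where `Q_inv` is a hypothetical bisection helper, not part of the lecture:

```python
from math import erfc, sqrt, log10

# Gaussian tail Q(x) = 0.5*erfc(x/sqrt(2)); Q_inv inverts it by bisection.
def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def Q_inv(p, lo=0.0, hi=10.0):
    for _ in range(100):  # bisection on the monotone decreasing tail
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) > p else (lo, mid)
    return (lo + hi) / 2

# Uncoded BPSK: Pb = Q(sqrt(2 Eb/N0)) = 1e-5
snr_uncoded_db = 10 * log10(Q_inv(1e-5) ** 2 / 2)
# Coded: Q(sqrt(2 Eb/N0 * 12/23)) = 7.8e-3
snr_coded_db = 10 * log10(Q_inv(7.8e-3) ** 2 / 2 * 23 / 12)

print(round(snr_uncoded_db, 1))                 # 9.6
print(round(snr_coded_db, 1))                   # 7.5
print(round(snr_uncoded_db - snr_coded_db, 1))  # 2.1
```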

