Introduction To Binary Linear Block Codes: Yunghsiang S. Han
[Figure: block diagram — the channel introduces errors; the demodulator and decoder recover the received information.]
Channel Model
1. The time-discrete memoryless channel (TDMC) is a channel
specified by an arbitrary input space A, an arbitrary output
space B, and for each element a in A , a conditional probability
measure on every element b in B that is independent of all other
inputs and outputs.
2. An example of TDMC is the Additive White Gaussian Noise
channel (AWGN channel). Another commonly encountered
channel is the binary symmetric channel (BSC).
AWGN Channel

    Pr(r_j | c_j) = (1/√(π N_0)) exp( −(r_j − (−1)^{c_j} √E)^2 / N_0 ).
[Figure: the conditional densities Pr(r_j | 0) and Pr(r_j | 1) plotted versus r_j.]
where

    Q(x) = (1/√(2π)) ∫_{x}^{∞} e^{−y^2/2} dy.
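In code, Q(x) can be evaluated through the standard-library complementary error function, since Q(x) = (1/2) erfc(x/√2); a minimal Python sketch (the helper name q_func is mine, not from the notes):

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x): integral of the N(0,1) pdf from x to infinity."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(q_func(0.0))  # → 0.5, since half the standard normal mass lies above 0
```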
[Figure: binary symmetric channel — input 0 (resp. 1) is received as 0 (resp. 1) with probability 1 − p and flipped with crossover probability p.]
Generator Matrix
1. An (n, k) BLBC can be specified by any set of k linearly
independent codewords c0 ,c1 ,. . . ,ck−1 . If we arrange the k
codewords into a k × n matrix G, G is called a generator matrix
for C.
2. Let u = (u0 , u1 , . . . , uk−1 ), where uj ∈ GF (2).
Example

    G = [ 1 0 0 1 1 0 ]
        [ 0 1 0 1 0 1 ]
        [ 0 0 1 0 1 1 ]

The codeword for message u is

    c = uG.

Let

    H = [ 1 1 0 1 0 0 ]
        [ 1 0 1 0 1 0 ]
        [ 0 1 1 0 0 1 ]

be a parity-check matrix of C.
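As a concrete illustration of c = uG, a minimal Python sketch that encodes a message with the example generator matrix above (the function name encode is mine, not from the notes):

```python
# Generator matrix of the example (6, 3) code.
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def encode(u, G):
    """Return c = uG with all arithmetic in GF(2)."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

print(encode([1, 0, 1], G))  # → [1, 0, 1, 1, 0, 1]
```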
Parity-Check Matrix
1. A parity check for C is an equation of the form

    a_0 c_0 ⊕ a_1 c_1 ⊕ . . . ⊕ a_{n−1} c_{n−1} = 0.

If G = [ I_k  A ] and H = [ −A^T  I_{n−k} ], then

    G H^T = [ I_k  A ] [ −A^T  I_{n−k} ]^T = −A + A = 0_{k×(n−k)}.
    s = yH^T = (c ⊕ e)H^T = eH^T
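That the syndrome depends only on the error pattern can be checked numerically; a small Python sketch using the example parity-check matrix above (helper names are mine):

```python
# Parity-check matrix of the example (6, 3) code.
H = [
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]

def syndrome(y, H):
    """Return s = yH^T over GF(2)."""
    return [sum(y[j] * row[j] for j in range(len(y))) % 2 for row in H]

c = [1, 0, 1, 1, 0, 1]   # a codeword of the example code (uG for u = 101)
e = [0, 0, 0, 1, 0, 0]   # single-bit error pattern
y = [(ci + ei) % 2 for ci, ei in zip(c, e)]

print(syndrome(c, H))  # → [0, 0, 0]: codewords have zero syndrome
print(syndrome(y, H))  # equals the syndrome of e alone
assert syndrome(y, H) == syndrome(e, H)
```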
[Figure: system block diagram —
 u ∈ {0,1}^k → Encoder → c ∈ {0,1}^n → Modulator → m(c) ∈ {1,−1}^n
 → AWGN channel → r ∈ R^n → Demodulator → Decoder → c′, u.]
    Σ_{j∈S(c,c_ℓ)} ln( Pr(r_j | c_{ℓj}) / Pr(r_j | c_j) ) ≥ 0   for all c ∈ C.

Since

    φ_j = ln( Pr(r_j | 0) / Pr(r_j | 1) ) = (4√E / N_0) r_j ,

then

    φ = (4√E / N_0) r.

4. For the BSC,

    φ_j = ln( (1 − p)/p )   if r_j = 0;
    φ_j = ln( p/(1 − p) )   if r_j = 1.
    Σ_{j∈S(c,c_ℓ)} ln( Pr(r_j | c_{ℓj}) / Pr(r_j | c_j) ) ≥ 0
⇔  Σ_{j∈S(c,c_ℓ)} 2 ln( Pr(r_j | c_{ℓj}) / Pr(r_j | c_j) ) ≥ 0
⇔  Σ_{j∈S(c,c_ℓ)} ( (−1)^{c_{ℓj}} φ_j − (−1)^{c_j} φ_j ) ≥ 0
⇔  Σ_{j=0}^{n−1} (−1)^{c_{ℓj}} φ_j ≥ Σ_{j=0}^{n−1} (−1)^{c_j} φ_j
⇔  Σ_{j=0}^{n−1} ( φ_j − (−1)^{c_{ℓj}} )^2 ≤ Σ_{j=0}^{n−1} ( φ_j − (−1)^{c_j} )^2

Thus,

    Σ_{j=0}^{n−1} ( φ_j − (−1)^{c_{ℓj}} )^2 ≤ Σ_{j=0}^{n−1} ( φ_j − (−1)^{c_j} )^2   for all c ∈ C.
    Σ_{j=0}^{n−1} (−1)^{c_{ℓj}} φ_j ≥ Σ_{j=0}^{n−1} (−1)^{c_j} φ_j
⇔  (1/2) Σ_{j=0}^{n−1} [ (−1)^{y_j} − (−1)^{c_{ℓj}} ] φ_j ≤ (1/2) Σ_{j=0}^{n−1} [ (−1)^{y_j} − (−1)^{c_j} ] φ_j
⇔  Σ_{j=0}^{n−1} (y_j ⊕ c_{ℓj}) |φ_j| ≤ Σ_{j=0}^{n−1} (y_j ⊕ c_j) |φ_j|
⇔  Σ_{j=0}^{n−1} e_{ℓj} |φ_j| ≤ Σ_{j=0}^{n−1} e_j |φ_j|

Thus,

    Σ_{j=0}^{n−1} e_{ℓj} |φ_j| ≤ Σ_{j=0}^{n−1} e_j |φ_j|   for all e ∈ E(s).
1. d_E^2( m(c), φ ) = Σ_{j=0}^{n−1} ( φ_j − (−1)^{c_j} )^2

2. ⟨ φ, m(c) ⟩ = Σ_{j=0}^{n−1} (−1)^{c_j} φ_j

3. d_D( c, φ ) = Σ_{j=0}^{n−1} (y_j ⊕ c_j) |φ_j| = Σ_{j∈T} |φ_j| , where T = { j | y_j ≠ c_j }.
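The equivalence of the three metrics for ranking codewords can be spot-checked in code. A Python sketch, with a made-up received vector φ and the codewords of the earlier (6, 3) example (all names and numeric values here are illustrative, not from the notes):

```python
def sq_euclid(c, phi):      # metric 1: squared Euclidean distance d_E^2(m(c), phi)
    return sum((p - (-1) ** cj) ** 2 for cj, p in zip(c, phi))

def correlation(c, phi):    # metric 2: inner product <phi, m(c)> (to be maximized)
    return sum((-1) ** cj * p for cj, p in zip(c, phi))

def discrepancy(c, phi):    # metric 3: d_D(c, phi), with y the hard decision of phi
    y = [0 if p > 0 else 1 for p in phi]
    return sum(abs(p) for yj, cj, p in zip(y, c, phi) if yj != cj)

phi = [0.9, -1.2, 0.3, -0.1, 1.4, -0.7]   # illustrative soft values
codewords = [[0,0,0,0,0,0], [1,0,1,1,0,1], [0,1,0,1,0,1], [1,1,1,0,0,0]]

best = min(codewords, key=lambda c: sq_euclid(c, phi))
assert best == min(codewords, key=lambda c: -correlation(c, phi))
assert best == min(codewords, key=lambda c: discrepancy(c, phi))
print(best)  # → [0, 1, 0, 1, 0, 1]
```

All three selections agree because d_E^2 and d_D are each a fixed constant minus twice the correlation, so they induce the same ordering on codewords.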
2.
    k E_b = n E_s  →  E_b = E_s / R,

where E_b is the energy per information bit and E_s the energy per
received symbol.
3. For a coding scheme, the coding gain at a given bit error
probability is defined as the difference between the energy per
information bit required by the coding scheme to achieve that bit
error probability and the energy per information bit required by
uncoded transmission to achieve it.
4. If the φ_j are i.i.d. for all j, then

    Pr( Σ_{j∈S(c)} φ_j < 0 | A_0 ) = Pr( Σ_{j∈S(v)} φ_j < 0 | A_0 )

for any vector v with the same Hamming weight as c.
5. Let {E1 , E2 , . . . , En } be events in a probability space S. The
union bound states that the sum of the probabilities of the
individual events is greater than or equal to the probability of
the union of the events. That is,
    Pr( ∪_i E_i ) ≤ Σ_i Pr(E_i),

so that

    Pr(E | A_0) ≤ Σ_{w=d_min}^{n} A_w Pr( Σ_{j=1}^{w} φ_j < 0 | A_0 ).
5.
    Pr(E_w | A_0) = Pr( Σ_{j=1}^{w} r_j < 0 | A_0 )
                  ≈ ∫_{−∞}^{0} (1/√(πwN_0)) exp( −(x − w√E_s)^2 / (wN_0) ) dx
                  = Q( (2wE_s/N_0)^{1/2} ),

so that, with E_s = R E_b,

    Pr(E | A_0) ≤ Σ_{w=d_min}^{n} A_w Q( (2wRE_b/N_0)^{1/2} ).
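A hedged numerical sketch of this bound: since the notes quote only A_dmin for the codes discussed below (not the full weight spectrum), the snippet evaluates just the dominant w = d_min term of the union-bound sum. Function names and the sample SNR points are mine:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_first_term(A_dmin, dmin, R, gamma_b):
    """Dominant (w = dmin) term of the union bound: A_dmin * Q(sqrt(2*dmin*R*gamma_b))."""
    return A_dmin * q_func(math.sqrt(2.0 * dmin * R * gamma_b))

# Parameters of the (48, 24) extended QR code from the notes: dmin = 12, A_dmin = 17296.
R = 24 / 48
for snr_db in (4.0, 6.0, 8.0):
    gamma_b = 10 ** (snr_db / 10.0)   # Eb/N0 converted from dB
    print(snr_db, union_bound_first_term(17296, 12, R, gamma_b))
```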
The minimum distance d_min and the number A_{d_min} of codewords of
weight d_min of a code are two major factors which determine the bit
error rate performance of the code.
9. Since Q(x) ≤ (1/(√(2π) x)) e^{−x^2/2} for all x > 0,

    Pr(E_b) ≈ A_{d_min} √( d_min / (4πnkγ_b) ) e^{−R d_min γ_b}        (1)

where γ_b = E_b/N_0.
10. We may use Equation 1 to calculate the bit error probability of a
code at moderate to high SNRs.
11. The bit error probabilities of the (48, 24) and (72, 36) binary
extended quadratic residue (QR) codes are given in the next two
slides [3]. Both codes have d_min = 12; A_{d_min} is 17296 for the
(48, 24) code and 2982 for the (72, 36) code.
[Figure: BER versus γ_b (dB).]
    ≤ A_{d_min} (d_min/n) exp( −d_min R E_b/N_0 ).

4. Let

    A_{d_min} (d_min/n) exp( −d_min R E_b/N_0 ) = exp( −E_b′/N_0 ).

5. Taking the logarithm of both sides and noting that
log[ (d_min/n) A_{d_min} ] is negligible for large SNR, we have

    E_b′ / E_b = d_min R.

6. The asymptotic coding gain is

    G_a = 10 log[ d_min R ].
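A one-line Python sketch of this formula (the function name is mine); for a rate-1/2 code with d_min = 12, such as the (48, 24) QR code above, it gives about 7.78 dB:

```python
import math

def asymptotic_coding_gain_db(dmin: int, R: float) -> float:
    """Asymptotic coding gain G_a = 10 log10(dmin * R), in dB."""
    return 10.0 * math.log10(dmin * R)

print(round(asymptotic_coding_gain_db(12, 0.5), 2))  # → 7.78
```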
[Figure: binary symmetric erasure channel — each input bit is received correctly with probability 1 − p − p_e, flipped with probability p, and erased (output e) with probability p_e.]
5. All vectors of weights ≤ t are coset leaders since they are all
correctable.
6. A standard array can be constructed as follows:
(a) Write down all the codewords of C in a single row starting
with the all-zero codeword. Remove all codewords from V n .
(b) Select from the remaining vectors without replacement one of
the vectors with the smallest weight. Write the selected
vector down in the column under the all-zero codeword.
(c) Add the selected vector to each nonzero codeword, write each
sum down under the respective codeword, and then remove
these sums from V n .
(d) Repeat steps 6b and 6c until V n is empty.
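The construction steps above translate directly into code; a Python sketch that builds the standard array of the example (6, 3) code from earlier (variable names are mine):

```python
from itertools import product

# Generator matrix of the example (6, 3) code.
G = [[1, 0, 0, 1, 1, 0], [0, 1, 0, 1, 0, 1], [0, 0, 1, 0, 1, 1]]

def encode(u):
    return tuple(sum(u[i] * G[i][j] for i in range(3)) % 2 for j in range(6))

# Step (a): first row is the code itself; remove it from V^n.
codewords = [encode(u) for u in product((0, 1), repeat=3)]
remaining = set(product((0, 1), repeat=6)) - set(codewords)

array_rows = [codewords]
while remaining:
    # Step (b): pick a remaining vector of smallest weight as coset leader.
    leader = min(remaining, key=lambda v: (sum(v), v))
    # Step (c): add the leader to every codeword to form the coset row.
    row = [tuple(l ^ c for l, c in zip(leader, cw)) for cw in codewords]
    array_rows.append(row)
    remaining -= set(row)   # step (d): repeat until V^n is empty

print(len(array_rows))     # → 8 rows, one per coset (2^(n-k))
print(len(array_rows[0]))  # → 8 vectors per row (2^k)
```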
    y_1 ⊕ y_2 = (y_c ⊕ c_1) ⊕ (y_c ⊕ c_2) = c_1 ⊕ c_2 = c_3

Thus, the sum of any two vectors in the same row of a standard
array results in a codeword.
2. It is easy to see that the number of vectors in any row of a
standard array is 2^k, the size of C, since no two vectors in the
same row are equal.
3. It is clear that, for any y, if adding a vector to y results in a
codeword, then that vector is in the same row as y.
4. Every vector appears exactly once in the standard array. Note that

    yH^T = y′H^T  ⇔  (y ⊕ y′)H^T = 0  ⇔  y ⊕ y′ = c ∈ C,

a codeword. Thus, two vectors have the same syndrome if and only if
they lie in the same coset.
Let

    V(n, d) = Σ_{j=0}^{d} C(n, j).

Then

    2^k · V(n, ρ) ≥ 2^n

and

    log_2 V(n, ρ) ≥ n − k,

which is called the sphere-covering bound.

    n − k ≥ log_2 V(n, t)   since   2^n ≥ 2^k · V(n, t).
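A quick check of these bounds in Python for the (15, 11) Hamming code with t = 1 (the function name V mirrors the notation above):

```python
from math import comb

def V(n: int, d: int) -> int:
    """Number of vectors within Hamming distance d of a fixed center: sum of C(n, j)."""
    return sum(comb(n, j) for j in range(d + 1))

# Sphere-packing check for the (15, 11) Hamming code with t = 1.
n, k, t = 15, 11, 1
print(V(n, t))                      # → 16 = 2^(n-k): the bound holds with equality,
assert 2 ** k * V(n, t) <= 2 ** n   #   i.e. the Hamming code is a perfect code
```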
where

    A = [ 1 1 1 ]
        [ 0 1 1 ]
        [ 1 0 1 ]
        [ 1 1 0 ]
         [ 000 0 000 ]
         [ 011 0 011 ]
         [ 101 0 101 ]
    C2 = [ 110 0 110 ]  =  [ C2 0 C2  ]
         [ 000 1 111 ]     [ C2 1 C̄2 ]
         [ 011 1 100 ]
         [ 101 1 010 ]
         [ 110 1 001 ]

where C̄2 denotes the bitwise complements of the codewords of C2.
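The table above can be reproduced in a few lines; a Python sketch that builds the new code as {(c, 0, c)} ∪ {(c, 1, c̄)} for c in the base code and checks that the result is linear (the construction is read off from the listed codewords; variable names are mine):

```python
# Base code C2 = {000, 011, 101, 110} (the length-3 even-weight code).
C2 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

extended = []
for c in C2:
    extended.append(c + (0,) + c)                        # rows (c, 0, c)
    extended.append(c + (1,) + tuple(1 - b for b in c))  # rows (c, 1, complement of c)

# The result is linear: the XOR of any two codewords is again a codeword.
S = set(extended)
assert all(tuple(a ^ b for a, b in zip(x, y)) in S for x in S for y in S)
print(len(S))  # → 8 codewords of length 7
```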
then there exists one more (n − k)-tuple vector that can be filled
into H.
8. Since the dual code of the (n, n − k) simplex code is the (n, k)
Hamming code, the weight enumerator of the Hamming code is

    B(x) = 2^{−(n−k)} (1 + x)^n [ 1 + n ( (1 − x)/(1 + x) )^{(n+1)/2} ]
         = (1/(n + 1)) { (1 + x)^n + n (1 − x)(1 − x^2)^{(n−1)/2} }.

Note that n = 2^m − 1 and n − k = m for the (n, n − k) simplex
code.
9. The weight enumerator for a (15, 11) Hamming code is

    A(x) = (1/16) { (1 + x)^15 + 15 (1 − x)(1 − x^2)^7 }
         = 1 + 35x^3 + 105x^4 + 168x^5 + 280x^6 + 435x^7 + 435x^8
           + 280x^9 + 168x^10 + 105x^11 + 35x^12 + x^15.
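This expansion can be verified mechanically; a Python sketch that expands the closed form with plain integer coefficient lists (helper names are mine) and recovers the listed coefficients:

```python
# Verify A(x) = (1/16){(1+x)^15 + 15(1-x)(1-x^2)^7} for the (15, 11) Hamming code.
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_pow(p, e):
    r = [1]
    for _ in range(e):
        r = poly_mul(r, p)
    return r

def poly_add(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(m)]

term1 = poly_pow([1, 1], 15)                          # (1 + x)^15
term2 = poly_mul([15, -15], poly_pow([1, 0, -1], 7))  # 15(1 - x)(1 - x^2)^7
A = [c // 16 for c in poly_add(term1, term2)]         # weight distribution A_0 .. A_15

print(A[3], A[4], A[15], sum(A))  # → 35 105 1 2048 (total = 2^11 codewords)
```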
 ¿  ¿
[Figure: relations among code modifications — extending the (7, 4) Hamming code gives the (8, 4) linear code, and puncturing reverses this; expurgating the (7, 4) code gives the (7, 3) linear code, and augmenting reverses this; lengthening the (7, 3) code gives the (8, 4) code, and shortening reverses this.]

(7, 4) Hamming code:

    G = [ 1 0 0 0 1 1 1 ]      H = [ 1 0 1 1 1 0 0 ]
        [ 0 1 0 0 0 1 1 ]          [ 1 1 0 1 0 1 0 ]
        [ 0 0 1 0 1 0 1 ]          [ 1 1 1 0 0 0 1 ]
        [ 0 0 0 1 1 1 0 ]

(7, 3) linear code:

    G = [ 1 0 0 0 1 1 1 ]      H = [ 1 1 1 1 1 1 1 ]
        [ 0 1 0 1 1 0 1 ]          [ 1 0 1 1 1 0 0 ]
        [ 0 0 1 1 0 1 1 ]          [ 1 1 0 1 0 1 0 ]
                                   [ 1 1 1 0 0 0 1 ]

(8, 4) linear code:

    G = [ 1 0 0 0 1 1 1 0 ]    H = [ 1 1 1 1 1 1 1 1 ]
        [ 0 1 0 0 0 1 1 1 ]        [ 0 1 0 1 1 1 0 0 ]
        [ 0 0 1 0 1 1 0 1 ]        [ 0 1 1 0 1 0 1 0 ]
        [ 0 0 0 1 1 0 1 1 ]        [ 0 1 1 1 0 0 0 1 ]
References
[1] M. Bossert, Channel Coding for Telecommunications. New York,
NY: John Wiley and Sons, 1999.