Abstract—Gallager's low density binary parity check codes have been shown to have near Shannon limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q > 2, with Binary Symmetric Channels and Binary Gaussian Channels. We find a significant improvement over the performance of the binary codes, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.

I. Introduction

Codes defined in terms of a non-systematic low density parity check matrix [1], [2] are asymptotically good, and can be practically decoded with Gallager's belief propagation algorithm [3], [4], [5]. Our proof in [5] shows that they are asymptotically good codes for a wide class of channels, not just for the memoryless binary symmetric channel. Results presented in [4] showed these codes (which we call `LDPC' codes) have near Shannon limit performance when decoded using the belief propagation algorithm.

Binary LDPC codes may be generalised to finite fields GF(q) in a natural way. In the remainder of this paper we use a vector space over the finite field GF(q) where q = 2^b. Elements of GF(q) will be called symbols, and we use the term bits when referring to the binary representation of symbols.

Definition 1: The weight of a vector or matrix is the number of non-zero symbols in it. The density of a source of random symbols is the expected fraction of non-zero symbols. The overlap between two vectors is the number of coordinates in which both vectors have non-zero entries.

To reduce the probability of introducing low weight codewords, the weight 2 columns are constructed systematically. To generate codewords, we would use Gaussian elimination to derive the generator matrix. There is a possibility that the rows of H are not independent (though for odd t, this has small probability); in this case H is a parity check matrix for a code with the same N and with smaller M. So H defines a code with rate of at least K/N. Results are quoted here based on the assumption that the rate is equal to K/N.

III. Channel models

We will use these codes to communicate over binary channels, making no special use of the algebraic structure of GF(q). Moving to GF(q) makes the codes more complex while decoding remains tractable. We have applied our codes to the binary symmetric channel (BSC) and the Binary Gaussian Channel with inputs ±s and additive noise of variance σ² = 1. If one communicates using a code of rate R then it is conventional to describe the signal to noise ratio (SNR) by Eb/N0 = s²/(2Rσ²) and to report this number in decibels as 10 log10(Eb/N0). We define the received bit to be the sign of the channel output, and set the likelihood of the nth noise bit being 1 to g_n^1 = 1/(1 + exp(2s|y_n|/σ²)), where y_n is the output of the channel. We also define g_n^0 = 1 − g_n^1. In the case of the BSC, g_n^1 is independent of n.

In GF(2^b) each noise symbol x_n consists of b noise bits x_n1 … x_nb. Our channel models are memoryless binary channels, so we can set the likelihood of the noise symbol x_n being equal to a to

   f_n^a := ∏_{i=1}^{b} g_ni^{a_i}   for each a ∈ GF(q).
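The factorised symbol likelihood above can be made concrete with a short sketch. The following Python function (the name and data layout are ours, purely illustrative) enumerates the q = 2^b symbols as bit tuples and multiplies the per-bit likelihoods g_ni^0, g_ni^1 to obtain f_n^a:

```python
import itertools

def symbol_likelihoods(g, b):
    """Combine per-bit noise likelihoods into per-symbol likelihoods.

    g is a list of b pairs (g_i^0, g_i^1): the likelihoods that noise
    bit i of this symbol equals 0 or 1.  Returns a dict mapping each of
    the q = 2**b symbols a (represented as a bit tuple) to
    f^a = prod_{i=1}^{b} g_i^{a_i}, as in the channel-model section.
    """
    f = {}
    for a in itertools.product((0, 1), repeat=b):
        p = 1.0
        for g_i, a_i in zip(g, a):
            p *= g_i[a_i]
        f[a] = p
    return f
```

For a BSC with flip probability 0.1 and b = 2, every bit has g = (0.9, 0.1), so f^(0,0) = 0.81 and the likelihoods over all four symbols sum to 1, since the per-bit likelihoods are normalised.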
the probability of check m being satisfied if symbol n of x is considered fixed at a and the other noise symbols have a separable distribution given by the probabilities {q_mn'^a : n' ∈ N(m)\n, a ∈ GF(q)}.

A. Initialisation

We initialise the values of q_mn^a to f_n^a, the likelihood that x_n = a according to the channel model as outlined in section III.

[...] If i, j are successive indices in N(m) with j > i then

   Prob[m_j = a] = Σ_{s,t : H_mj t + s = a} Prob[m_i = s] q_mj^t        (2)

Similarly we can calculate the distribution of each m_k. Now we can update the values of r_mn^a using

   r_mn^a = Prob[(m_(n−1) + m_(n+1)) = z_m − H_mn a]                    (3)
          = Σ_{s,t : s + t = z_m − H_mn a} Prob[m_(n−1) = s] Prob[m_(n+1) = t]   (4)

C. Updating q_mn

For each m and n and for a ∈ GF(q) we update:

   q_mn^a = α_mn f_n^a ∏_{j ∈ M(n)\m} r_jn^a

V. Results

We compare codes of blocklength N symbols over GF(2^b) with binary codes of length Nb, i.e. codes of length [...] fields GF(2), GF(4), and GF(8). The vertical axis shows the empirical bit-error probability. It should be pointed out that all the errors were detected errors: the decoder reported that it had failed. The BSC results show a monotonic improvement as we increase the order of the fields on which the codes are based.

Fig. 1. Comparison of performance of LDPC codes for a BSC over GF(2), GF(4), and GF(8). These codes had column weight 3 and rate 1/2. From top to bottom: (N, q) = (2000,2); (1000,4); (6000,2); (2000,8). (Vertical axis: empirical bit-error probability.)

Figure 2 shows the performance of LDPC codes over a Binary Gaussian Channel. Three codes of rate 1/3 with average column weight 2.5 are shown, over fields GF(2), GF(4), and GF(8). Also shown is a code of rate 0.26 with column weight 2.3 over GF(16). We compare the performance of these LDPC codes with recent Turbo code results [8].

Fig. 2 (caption, partial). [...] right: JPL Turbo code, rate 1/4, blocklength 65536; LDPC code with rate 0.26, average column weight 2.3, (N, q) = (6000,16); JPL Turbo code, rate 1/3, blocklength 49152; LDPC codes with rate 1/3, average column weight 2.5: (N, q) = (6000,8), (9000,4), [...]
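The q_mn update quoted in the decoding section, q_mn^a = α_mn f_n^a ∏_{j ∈ M(n)\m} r_jn^a, can be sketched in a few lines of Python. This is a minimal illustration under our own assumed data layout (messages held in dicts keyed by symbol), not the authors' implementation; α_mn is taken to be the constant that normalises q_mn to a distribution over the q symbols:

```python
def update_q(f_n, r_n, m):
    """One q-message update at symbol node n:
    q_mn^a = alpha_mn * f_n^a * prod over checks j in M(n)\{m} of r_jn^a.

    f_n : dict mapping symbol a -> channel likelihood f_n^a
    r_n : dict mapping check index j in M(n) -> dict a -> r_jn^a
    m   : the check whose incoming message is excluded from the product
    Returns q_mn as a dict over symbols a, normalised to sum to 1.
    """
    q = {}
    for a, f_a in f_n.items():
        p = f_a
        for j, r_jn in r_n.items():
            if j != m:           # exclude the message from check m itself
                p *= r_jn[a]
        q[a] = p
    alpha = 1.0 / sum(q.values())  # alpha_mn: normalising constant
    return {a: alpha * p for a, p in q.items()}
```

For example, with a uniform channel likelihood over two symbols and two neighbouring checks, excluding check 0 leaves q proportional to f_n^a times the single remaining r-message.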