
ITW 1998, Killarney, Ireland, June 22 - 26

Low Density Parity Check Codes over GF(q)


Matthew C. Davey and David J. C. MacKay
Cavendish Laboratory, Cambridge CB3 0HE, England,
mcdavey@mrao.cam.ac.uk, mackay@mrao.cam.ac.uk.

Abstract - Binary Low Density Parity Check (LDPC) codes have been shown to have near Shannon limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q > 2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes, which can be used to obtain good constructions for finite codes. We also present empirical results for the Gaussian channel, including a rate 1/4 code with bit error probability of 10^-4 at Eb/N0 = -0.05 dB.
I. INTRODUCTION

We consider a class of error correcting codes first described by Gallager in 1962 [2]. These recently rediscovered low density parity check (LDPC) codes are defined in terms of a sparse parity check matrix and are known to be asymptotically good for all channels with symmetric stationary ergodic noise [6]. Practical decoding of these codes is possible using an approximate belief propagation algorithm, and near Shannon limit performance has been reported [7].

We consider the generalisation of binary LDPC codes to finite fields GF(q), q > 2, and demonstrate a significant improvement in empirical performance. Although little is known about the theoretical properties of the approximate belief propagation algorithm when applied to this decoding problem, Monte Carlo methods may be used to simulate the behaviour of the decoding algorithm applied to an infinite LDPC code. We have used such Monte Carlo results to design better codes for practical decoding.

In section II we define low density parity check codes and in section III we describe the decoding algorithm. Section IV presents the results of the Monte Carlo simulation and empirical decoding results are presented in section V.
II. CODE CONSTRUCTION

The codes are defined in terms of a low density parity check matrix H as follows. We choose a source block length K, a transmitted block length N and a mean column weight t > 2. The weight of a vector is the number of non-zero components in that vector. We define M = (N - K) to be the number of parity checks in the code. H is a rectangular matrix with M rows and N columns. We construct H such that the weight of each column is at least 2, the mean column weight is t and the weight per row is as uniform as possible.

We fill the non-zero entries in H from the elements of a finite field GF(q), q = 2^b, according to a carefully selected random distribution: rather than using the uniform distribution, we choose the entries in each row to maximise the entropy of the corresponding bit of the syndrome vector z = Hx, where x is a sample from the assumed channel noise model.

Although the code construction is largely random, we may reduce the probability of introducing low weight codewords by constructing the weight 2 columns systematically. To generate codewords we would derive the generator matrix using Gaussian elimination.

If the rows of H are not independent (for odd t, this has small probability), H is a parity check matrix for a code with the same N and with smaller M. So H defines a code with rate of at least K/N.

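For concreteness, here is a minimal Python sketch of this style of construction. It is our own illustration, not the authors' code: the non-zero values are drawn uniformly from GF(q) \ {0} rather than by the entropy-maximising rule described above, the weight 2 columns are not given the systematic treatment mentioned above, and all function names are hypothetical.

    import random

    def make_parity_check_matrix(N, K, t_mean, q, seed=0):
        """Sketch: M x N low density parity check matrix over GF(q), q = 2^b.

        Every column has weight at least 2, the mean column weight is
        approximately t_mean (this simple scheme assumes 2 < t_mean <= 3),
        and entries go into the currently lightest rows so that the weight
        per row stays as uniform as possible.
        """
        rng = random.Random(seed)
        M = N - K                              # number of parity checks
        H = [[0] * N for _ in range(M)]

        col_weight = [2] * N                   # minimum column weight 2
        extra = int(round((t_mean - 2) * N))   # columns promoted to weight 3
        for j in rng.sample(range(N), extra):
            col_weight[j] += 1

        row_weight = [0] * M
        for j in range(N):
            # Place this column's entries into the lightest rows.
            rows = sorted(range(M), key=lambda i: (row_weight[i], rng.random()))
            for i in rows[:col_weight[j]]:
                H[i][j] = rng.randrange(1, q)  # uniform non-zero field element
                row_weight[i] += 1
        return H

    H = make_parity_check_matrix(N=2000, K=500, t_mean=2.3, q=16)  # rate 1/4

A generator matrix for producing codewords would then be obtained from H by Gaussian elimination over GF(q), as described above.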
III. DECODING ALGORITHM

We transmit a vector t which is received as r = t + n, where n is a sample from the channel noise distribution. An instance of the decoding problem requires finding the most probable vector x such that Hx = z, where z is the syndrome vector z := Hr = Hn and the likelihood of x is determined by the channel model. The decoding algorithm we use is a generalisation of the approximate belief propagation algorithm [8] used by Gallager [2] and MacKay and Neal [7, 6]. The complexity of decoding scales as Ntq^2 per iteration.

We refer to elements of n as noise symbols and elements of z as checks. The belief propagation algorithm may be viewed as a message passing algorithm on a directed bipartite graph defined by the parity check matrix H. Each node is associated with a check or a noise symbol. Let edge e_ij connect check i with noise symbol j. For each edge e_ij in the graph, quantities q_ij^a and r_ij^a are iteratively updated. q_ij^a approximates the probability that the jth element of x is a, given the information obtained from all checks other than i. r_ij^a approximates the probability that the ith check is satisfied if element j of x is a and the other noise symbols have a separable distribution given by the appropriate q_ij' messages. We initialise the algorithm by setting the q_ij^a to the likelihood that the jth element of x is a, as given by the channel model.

After each iteration we make a tentative decoding x̂ by choosing, for each element, the noise symbol that receives the largest vote from the r_ij^a messages. If Hx̂ = z then the decoding algorithm halts, having identified a valid decoding of the syndrome; otherwise the algorithm repeats. A failure is declared if some maximum number of iterations (e.g. 500) passes without a valid decoding.

In the absence of loops the algorithm would converge to the correct posterior distribution over noise vectors x. In the presence of cycles convergence is not guaranteed, but decoding performance proves to be very good.
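To make the updates concrete, here is a minimal Python sketch of a q-ary sum-product decoder of this kind for GF(16). It is our own rendering, not the authors' implementation: field addition is bitwise XOR, multiplication is reduced by a primitive polynomial, the check-to-symbol update builds the distribution of the partial syndrome sum by convolving the incoming q messages (each convolution costs O(q^2), matching the Ntq^2 scaling), and, as one common variant, the tentative vote folds in the channel likelihood.

    import numpy as np

    Q = 16
    GF_POLY = 0b10011                  # x^4 + x + 1, primitive for GF(16)

    def gf_mul(a, b):
        """Multiply in GF(16); addition in GF(2^b) is bitwise XOR."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & Q:                  # degree 4 reached: reduce
                a ^= GF_POLY
        return r

    def decode(H, z, likelihood, max_iter=500):
        """Sum-product decoding of the syndrome z = Hn over GF(16).

        H[i][j] in GF(16), 0 meaning no edge; likelihood[j][a] = P(n_j = a).
        Returns a tentative noise vector x with Hx = z, or None on failure.
        """
        M, N = len(H), len(H[0])
        checks = [[j for j in range(N) if H[i][j]] for i in range(M)]
        syms = [[i for i in range(M) if H[i][j]] for j in range(N)]
        q_msg = {(i, j): np.array(likelihood[j], float)
                 for i in range(M) for j in checks[i]}
        r_msg = {}
        for _ in range(max_iter):
            # r update: distribution of the partial syndrome sum over the
            # other symbols of check i, built by iterated convolution.
            for i in range(M):
                for j in checks[i]:
                    dist = np.zeros(Q)
                    dist[0] = 1.0
                    for j2 in checks[i]:
                        if j2 == j:
                            continue
                        new = np.zeros(Q)
                        for s in range(Q):
                            for a in range(Q):
                                new[s ^ gf_mul(H[i][j2], a)] += dist[s] * q_msg[i, j2][a]
                        dist = new
                    r_msg[i, j] = np.array(
                        [dist[z[i] ^ gf_mul(H[i][j], a)] for a in range(Q)])
            # q update: channel likelihood times the other checks' r messages.
            for j in range(N):
                for i in syms[j]:
                    m = np.array(likelihood[j], float)
                    for i2 in syms[j]:
                        if i2 != i:
                            m *= r_msg[i2, j]
                    q_msg[i, j] = m / m.sum()
            # Tentative decoding: largest vote per symbol.
            x = [int(np.argmax(np.prod([r_msg[i, j] for i in syms[j]], axis=0)
                               * np.array(likelihood[j], float)))
                 for j in range(N)]
            # Halt when the tentative vector reproduces the syndrome.
            s = [0] * M
            for i in range(M):
                for j in checks[i]:
                    s[i] ^= gf_mul(H[i][j], x[j])
            if s == list(z):
                return x
        return None                    # failure after max_iter iterations

An efficient implementation would reuse forward and backward partial convolutions across each check rather than recomputing them for every excluded edge, but the per-iteration cost remains of order Ntq^2.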
IV. MONTE CARLO SIMULATION

We can use Monte Carlo methods to simulate an infinite LDPC code in order to investigate the properties of the decoding algorithm in the absence of loops. We would expect the behaviour to be similar to the empirical performance as the blocklength of the codes is increased. We form an ensemble of noise symbols N := {n_i, {Q_i^a : a in GF(q)}}, emitting initial messages Q_i^a according to our channel model. We then form a fragment of an infinite graph using this ensemble, reflecting our matrix construction, and propagate the Q messages down to a new noise symbol to produce an element of a new ensemble N'. This ensemble N' will represent an ensemble of noise symbols after one iteration of the decoding algorithm. We iterate the procedure to produce successive ensembles containing approximations to the distribution of Q messages in an infinite network after an arbitrary number of iterations. For successful decoding the average Shannon entropy of the Q messages should become arbitrarily small as decoding progresses.

We declare a decoding 'success' if the average entropy of the Q messages in our ensemble drops below some chosen threshold. With this approach it is possible to investigate the effect of changes in field order, code construction and noise level on the decoding performance.
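The following Python fragment is our own simplified illustration of this procedure, not the authors' code: it is specialised to GF(2), a (t, w)-regular ensemble and a binary symmetric channel with flip probability f (the paper's ensembles are over GF(q) and Figure 1 is for the Gaussian channel), so each Q message reduces to a single probability, carried here as a log-likelihood ratio.

    import math, random

    def h2(p):
        """Binary entropy in bits."""
        return 0.0 if p <= 0.0 or p >= 1.0 else -p*math.log2(p) - (1-p)*math.log2(1-p)

    def mc_infinite_code(f, t=3, w=4, pop=10000, iters=80, seed=1):
        """Monte Carlo simulation of BP decoding on an infinite, cycle-free
        LDPC code: a (t, w)-regular GF(2) ensemble (t/w = 3/4 gives rate 1/4)
        on a binary symmetric channel with flip probability f.  Returns the
        average entropy of the message ensemble after each iteration.
        (Reduce pop for a quick run.)
        """
        rng = random.Random(seed)
        L0 = math.log((1 - f) / f)
        channel = lambda: L0 if rng.random() > f else -L0
        msgs = [channel() for _ in range(pop)]      # initial ensemble N
        history = []
        for _ in range(iters):
            new = []
            for _ in range(pop):
                # Propagate down a fresh tree fragment to one new noise
                # symbol: t - 1 checks feed it, each combining w - 1
                # messages drawn from the current pool.
                total = channel()
                for _ in range(t - 1):
                    prod = 1.0
                    for _ in range(w - 1):
                        prod *= math.tanh(rng.choice(msgs) / 2.0)
                    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
                    total += 2.0 * math.atanh(prod)
                new.append(total)
            msgs = new                              # the ensemble N'
            clamped = (max(min(L, 50.0), -50.0) for L in msgs)
            history.append(sum(h2(1.0/(1.0 + math.exp(L))) for L in clamped) / pop)
        return history

    print(mc_infinite_code(f=0.12)[-1])   # near 0: decoding succeeds here

Each iteration builds a fresh cycle-free neighbourhood per sample and propagates messages down to a new noise symbol; the average entropy falling towards zero corresponds to the success criterion described above.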

Fig. 1: Gaussian channel: minimum signal to noise ratio for which the average bit entropy reaches the success threshold within 80 iterations, as a function of mean column weight. Rate 1/4 code.

In Figure 1 we present results for the binary Gaussian channel. The results suggest that for best performance we should choose the highest order code that is feasible, bearing in mind that the decoding time per iteration scales as q^2. We then choose an appropriate mean column weight, which will be lower as the order increases.
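As a rough worked illustration of this trade-off (our own arithmetic, using the Ntq^2 scaling quoted above rather than a calculation from the paper): at N = 24000 and mean column weight t = 3, one iteration costs of order 24000 x 3 x 2^2, about 2.9 x 10^5 operations, over GF(2), but 24000 x 3 x 16^2, about 1.8 x 10^7 operations, over GF(16), a factor of 64. The compensation is that the higher order field permits a lower mean column weight and, as the results above suggest, a lower decodable signal to noise ratio.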
V. EMPIRICAL RESULTS

We have used the results of the MC analysis to construct finite codes with performance very close to the Shannon limit [1]. We have produced rate 1/4 codes with mean column weight 2.3 with bit error probability of 10^-4 at Eb/N0 = 0.2 dB.

Recent work by Spielman et al. [4, 5] showed that carefully constructed irregular parity check matrices (non-uniform weight per column) could give improved performance for codes over binary fields. Preliminary results for irregular matrices defined over GF(8) and GF(4) have produced very encouraging results, presented in figure 2. We include a code with bit error rate of 10^-4 at Eb/N0 = -0.05 dB, a slight improvement over the best turbo codes of which this author is aware.

We find that the codes presented in [4], with mean column weight 8, perform poorly over higher order fields. Motivated by the Monte Carlo results, we tried irregular constructions with quite low weight. The best code in figure 2 has a mean column weight of 3.4, but contains column weights of up to 33, as shown in table 1. The row weight is almost uniform.

    Col. weight   2      3      9      13     17     33
    Fraction      0.67   0.23   0.04   0.03   0.02   0.01

Tab. 1: Parameters of a good irregular code for GF(16), rate 0.25

No investigation of the parameters of irregular LDPC codes has yet been performed using MC methods, but we expect more careful choice of code parameters to yield further improvements.
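As a small illustration of how such a profile might be used (our own sketch, with hypothetical names), the fragment below samples column weights from the table 1 distribution and checks the resulting mean; the output could feed a construction routine like the one sketched in section II.

    import random

    # Column-weight profile of table 1: weight -> fraction of columns.
    PROFILE = {2: 0.67, 3: 0.23, 9: 0.04, 13: 0.03, 17: 0.02, 33: 0.01}

    def sample_column_weights(N, profile, rng=random.Random(0)):
        """Draw a column weight for each of the N columns from the profile."""
        weights, fractions = zip(*sorted(profile.items()))
        return rng.choices(weights, weights=fractions, k=N)

    ws = sample_column_weights(24000, PROFILE)
    print(sum(ws) / len(ws))   # mean column weight, close to 3.4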
Fig. 2: Empirical results for the Gaussian channel, rate 1/4. Left to right: irregular LDPC, GF(8), blocklength 24000 bits; JPL Turbo [3]; regular LDPC, GF(16), blocklength 24448 bits; irregular LDPC, GF(2), blocklength 64000 bits [4]; regular LDPC, GF(2), blocklength 40000 bits [6].

REFERENCES

[1] M. C. Davey and D. J. C. MacKay. Low density parity check codes over GF(q). IEEE Communications Letters, June 1998.
[2] R. G. Gallager. Low density parity check codes. IRE Trans. Info. Theory, IT-8:21-28, Jan 1962.
[3] JPL. Turbo codes performance. Available from http://www331.jpl.nasa.gov/public/TurboPerf.html, August 1996.
[4] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman. Improved low-density parity-check codes using irregular graphs and belief propagation. Submitted to ISIT98, 1998.
[5] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann. Practical loss-resilient codes. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing (STOC), 1997.
[6] D. J. C. MacKay. Good error correcting codes based on very sparse matrices. Submitted to IEEE Transactions on Information Theory. Available from http://wol.ra.phy.cam.ac.uk/, 1997.
[7] D. J. C. MacKay and R. M. Neal. Near Shannon limit performance of low density parity check codes. Electronics Letters, 32(18):1645-1646, August 1996. Reprinted in Electronics Letters, vol. 33, no. 6, 13th March 1997, p. 457-458.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, 1988.

