Abstract - Binary Low Density Parity Check (LDPC) codes have been shown to have near Shannon limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q > 2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes which can be used to obtain good constructions for finite codes. We also present empirical results for the Gaussian channel including a rate 1/4 code with low bit error probability at Eb/N0 = -0.05 dB.

0-7803-4408-1/98/$10.00 © 1998 IEEE

I. INTRODUCTION

We consider a class of error correcting codes first described by Gallager in 1962 [2]. These recently rediscovered low density parity check (LDPC) codes are defined in terms of a sparse parity check matrix and are known to be asymptotically good for all channels with symmetric stationary ergodic noise [6]. Practical decoding of these codes is possible using an approximate belief propagation algorithm and near Shannon limit performance has been reported [7].

We consider the generalisation of binary LDPC codes to finite fields GF(q), q > 2, and demonstrate a significant improvement in empirical performance. Although little is known about the theoretical properties of the approximate belief propagation algorithm when applied to this decoding problem, Monte Carlo methods may be used to simulate the behaviour of the decoding algorithm applied to an infinite LDPC code. We have used such Monte Carlo results to design better codes for practical decoding.

In section II we define low density parity check codes and in section III we describe the decoding algorithm. Section IV presents the results of the Monte Carlo simulation and empirical decoding results are presented in section V.

II. CODE CONSTRUCTION

The codes are defined in terms of a low density parity check matrix H as follows. We choose a source block length K, a transmitted block length N and a mean column weight t > 2. The weight of a vector is the number of non-zero components in that vector. We define M = (N - K) to be the number of parity checks in the code. H is a rectangular matrix with M rows and N columns. We construct H such that the weight of each column is at least 2, the mean column weight is t and the weight per row is as uniform as possible.

We fill the non-zero entries in H from the elements of a finite field GF(q), q = 2^b, according to a carefully selected random distribution: rather than using the uniform distribution we choose the entries in each row to maximise the entropy of the corresponding bit of the syndrome vector z = Hx, where x is a sample from the assumed channel noise model. Although the code construction is largely random, we may reduce the probability of introducing low weight codewords by constructing the weight 2 columns systematically. To generate codewords we would derive the generator matrix using Gaussian elimination.

If the rows of H are not independent (for odd t, this has small probability) H is a parity check matrix for a code with the same N and with smaller M. So H defines a code with rate of at least K/N.

III. DECODING ALGORITHM

We transmit a vector t which is received as r = t + n, where n is a sample from the channel noise distribution. An instance of the decoding problem requires finding the most probable vector x such that Hx = z, where z is the syndrome vector z := Hr = Hn and the likelihood of x is determined by the channel model. The decoding algorithm we use is a generalisation of the approximate belief propagation algorithm [8] used by Gallager [2] and MacKay and Neal [7, 6]. The complexity of decoding scales as Ntq^2 per iteration.

We refer to elements of n as noise symbols and elements of z as checks. The belief propagation algorithm may be viewed as a message passing algorithm on a directed bipartite graph defined by the parity check matrix H. Each node is associated with a check or a noise symbol. Let edge e_ij connect check i with noise symbol j. For each edge e_ij in the graph quantities q_ij^a and r_ij^a are iteratively updated. q_ij^a approximates the probability that the jth element of x is a, given the information obtained from all checks other than i. r_ij^a approximates the probability that the ith check is satisfied if element j of x is a and the other noise symbols have a separable distribution given by the appropriate q_ij'. We initialise the algorithm by setting the q_ij^a to the likelihood that the jth element of x is a, as given by the channel model.

After each iteration we make a tentative decoding x̂ by choosing, for each element, the noise symbol that receives the largest vote from the r_ij^a messages. If Hx̂ = z then the decoding algorithm halts having identified a valid decoding of the syndrome, otherwise the algorithm repeats. A failure is declared if some maximum number of iterations (e.g. 500) occurs without a valid decoding.

In the absence of loops the algorithm would converge to the correct posterior distribution over noise vectors x. In the presence of cycles convergence is not guaranteed, but decoding performance proves to be very good.

IV. MONTE CARLO SIMULATION

We can use Monte Carlo methods to simulate an infinite LDPC code in order to investigate the properties of the decoding algorithm in the absence of loops. We would expect the behaviour to be similar to the empirical performance as the blocklength of the codes increased. We form an ensemble of noise symbols N := {n_i, {Q_i^a}_{a in GF(q)}} emitting initial
messages Q_i^a according to our channel model. We then form a fragment of an infinite graph using this ensemble, reflecting our matrix construction, and propagate the Q messages down to a new noise symbol to produce an element of a new ensemble N'. This ensemble N' will represent an ensemble of noise symbols after one iteration of the decoding algorithm. We iterate the procedure to produce successive ensembles containing approximations to the distribution of Q messages in an infinite network after an arbitrary number of iterations. For successful decoding the average Shannon entropy of the Q messages should become arbitrarily small as decoding progresses.

We declare a decoding 'success' if the average entropy of the Q messages in our ensemble drops below some chosen threshold. With this approach it is possible to investigate the effect of changes in field order, code construction and noise level on the decoding performance.

…with quite low weight. The best code in figure 2 has a mean column weight of 3.4, but contains column weights of up to 33, as shown in table 1. The row weight is almost uniform.

Col. weight:  2     3     9     13    17    33
fraction:     0.67  0.23  0.04  0.03  0.02  0.01

Tab. 1: Parameters of good irregular code for GF(16), rate 0.25

No investigation of the parameters of irregular LDPC codes has yet been performed using MC methods, but we expect more careful choice of code parameters to yield further improvements.
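As an illustration, a parity check matrix with the column-weight profile of Tab. 1 can be generated along the lines of section II. The sketch below assumes numpy, represents GF(16) elements as the integers 1 to 15, and, for brevity, fills non-zero entries uniformly at random rather than with the entropy-optimised row distribution the paper actually uses; the function and parameter names are our own.

```python
import numpy as np

# Sketch: sparse parity check matrix H over GF(16) with the irregular
# column-weight profile of Tab. 1 (mean column weight approx 3.4).
# Non-zero entries are drawn uniformly from 1..15 here, NOT from the
# entropy-optimised distribution described in section II.
PROFILE = {2: 0.67, 3: 0.23, 9: 0.04, 13: 0.03, 17: 0.02, 33: 0.01}

def build_H(N, rate=0.25, q=16, seed=0):
    rng = np.random.default_rng(seed)
    M = N - int(rate * N)                      # M = N - K parity checks
    H = np.zeros((M, N), dtype=np.uint8)
    # draw a weight for each column from the profile of Tab. 1
    weights = rng.choice(list(PROFILE), size=N, p=list(PROFILE.values()))
    for j, w in enumerate(weights):
        rows = rng.choice(M, size=int(w), replace=False)  # distinct rows
        H[rows, j] = rng.integers(1, q, size=int(w))      # non-zero GF(16)
    return H

H = build_H(1000)
print(H.shape)                         # (750, 1000): rate 1/4
print((H != 0).sum(axis=0).mean())     # mean column weight, close to 3.4
```

With the entropy-optimised entry distribution and the systematic weight-2 columns of section II, the same skeleton would yield the construction the paper evaluates.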
[Fig. 2: performance curves; legend entries include Reg GF(2), Irreg GF(2) and Turbo; visible axis ticks include 2, 2.2 and 1e-05.]
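The halting rule of section III (stop as soon as Hx̂ = z) needs only syndrome evaluation over GF(2^b). A minimal sketch for GF(16) follows, assuming the primitive polynomial x^4 + x + 1 for the field representation (the paper does not specify one); the helper names are our own.

```python
# GF(16) arithmetic via exp/log tables (primitive poly x^4 + x + 1),
# and the syndrome evaluation used in the halting test of section III.

def _build_tables():
    exp, log = [0] * 30, [0] * 16
    a = 1
    for i in range(15):
        exp[i] = a
        log[a] = i
        a <<= 1
        if a & 0x10:
            a ^= 0x13          # reduce modulo x^4 + x + 1
    for i in range(15, 30):
        exp[i] = exp[i - 15]   # wrap-around so LOG sums need no modulo
    return exp, log

EXP, LOG = _build_tables()

def gf_mul(x, y):
    if x == 0 or y == 0:
        return 0
    return EXP[LOG[x] + LOG[y]]

def syndrome(H, x):
    # z_i = sum_j H[i][j] * x[j] over GF(16); addition is XOR
    z = []
    for row in H:
        s = 0
        for h, xj in zip(row, x):
            s ^= gf_mul(h, xj)
        z.append(s)
    return z

H = [[1, 2, 0], [0, 3, 7]]
x = [5, 1, 4]
print(syndrome(H, x))   # → [7, 12]
# The decoder of section III halts once syndrome(H, x_hat) equals the
# received syndrome z for the current tentative decoding x_hat.
```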
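The Monte Carlo stopping criterion of section IV (declare success once the average Shannon entropy of the Q messages falls below a chosen threshold) is simple to state in code. The sketch below assumes numpy; the example ensembles are hand-made stand-ins rather than the output of a real decoding run, and the 0.01-bit threshold is our own choice.

```python
import numpy as np

# Average Shannon entropy (in bits) of an ensemble of Q messages.
# Each row of Q is one message: a distribution over the q field elements.
def average_entropy(Q, eps=1e-12):
    Q = np.asarray(Q, dtype=float)
    return float(-(Q * np.log2(Q + eps)).sum(axis=1).mean())

# Section IV's criterion: decoding 'success' once the average entropy
# drops below a chosen threshold (0.01 bits here, an assumed value).
def mc_success(Q, threshold=0.01):
    return average_entropy(Q) < threshold

# Near-certain messages over GF(4): entropy close to 0 bits.
confident = np.array([[0.9997, 0.0001, 0.0001, 0.0001]] * 8)
print(mc_success(confident))                 # True
# Uniform messages: entropy is 2 bits for q = 4.
uniform = np.full((8, 4), 0.25)
print(round(average_entropy(uniform), 6))    # 2.0
```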