
Iterative Encoding of Low-Density Parity-Check Codes

David Haley, Alex Grant and John Buetefuer


Institute for Telecommunications Research
University of South Australia
Mawson Lakes Blvd
Mawson Lakes SA 5095 Australia
e-mail: dhaley@spri.levels.unisa.edu.au

Abstract—Motivated by the potential to reuse the decoder architecture, and thus reduce circuit space, we explore the use of iterative encoding techniques which are based upon the graphical representation of the code. We design codes by identifying associated encoder convergence constraints and also eliminating some well known undesirable properties for sum-product decoding, such as 4-cycles. In particular we show how the Jacobi method for iterative matrix inversion can be viewed as message passing and employed as the core of an iterative encoder. Example constructions of both regular and irregular LDPC codes that are encodable using this method are investigated.
I. INTRODUCTION

Since the rediscovery of low-density parity-check (LDPC) codes [1], some effort has been directed into finding computationally efficient encoders. This has been motivated by the fact that, in general, matrix-vector multiplication has complexity O(n^2) for block length n.

Following on from [2], [3], a class of codes built from irregular cascaded graphs was introduced in [4], together with a message passing algorithm for erasure decoding these codes. The cascaded graph structure allows these codes to be encoded using an algorithm similar to the erasure decoder. In [5] it was proposed that the parity check matrix for irregular LDPC codes be constructed such that it is in approximately lower triangular form. In this case an appropriate encoder architecture can exploit the fact that most of the parity bits are computable using sparse operations, leading to approximately linear time encoding complexity. In fact, it is known that the parity check matrix for most LDPC codes can be manipulated into approximately triangular form, such that the coefficient of the quadratic term in the encoding complexity is made quite small. Furthermore, performance optimized LDPC codes actually admit linear time encoding [6].
Consider a binary systematic (n, k) code with codewords arranged as row vectors x = [x_p | x_u], where x_u are the information bits and x_p are the parity bits. Likewise partition the parity check matrix, H = [H_p | H_u]. Thus x is a codeword iff [H_p | H_u][x_p | x_u]^T = 0, or equivalently H_p x_p^T = H_u x_u^T. Defining b = H_u x_u^T, encoding becomes equivalent to solving

    H_p x_p^T = b                                                    (1)

For an m x m non-singular H_p, we have x_p^T = H_p^{-1} b.
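To make the problem statement concrete, the following is a minimal sketch (ours, not from the paper) that encodes a toy systematic code by solving (1) over F_2 with ordinary Gaussian elimination. The matrices, bit values and the helper name solve_gf2 are illustrative assumptions only.

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination (A square, non-singular)."""
    A = A.copy() % 2
    b = b.copy() % 2
    m = A.shape[0]
    for col in range(m):
        pivot = col + int(np.argmax(A[col:, col]))   # first remaining row with a 1 here
        if A[pivot, col] == 0:
            raise ValueError("matrix is singular over GF(2)")
        A[[col, pivot]] = A[[pivot, col]]            # swap pivot row into place
        b[[col, pivot]] = b[[pivot, col]]
        for r in range(m):                           # clear the column in every other row
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b                                         # A has been reduced to I, so b = x

# Toy (n, k) = (6, 3) example: H = [H_p | H_u] with an invertible 3 x 3 H_p.
H_p = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=np.uint8)
H_u = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], dtype=np.uint8)
x_u = np.array([1, 0, 1], dtype=np.uint8)            # information bits
b = H_u @ x_u % 2                                    # b = H_u x_u^T
x_p = solve_gf2(H_p, b)                              # parity bits, the solution of (1)
x = np.concatenate([x_p, x_u])                       # codeword [x_p | x_u]
assert np.all(np.hstack([H_p, H_u]) @ x % 2 == 0)    # every parity check is satisfied
```

The point of the remainder of the paper is to avoid this kind of dense solve by exploiting the sparsity of H_p and the message passing decoder architecture.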
In this paper we investigate iterative solution methods for (1) and the corresponding convergence criteria and constraints imposed on H_p. Our goal is to develop encoding techniques which converge quickly and which re-use the sum-product message passing decoder architecture described in [7]. The idea behind using the code constraints to perform encoding on the graph is not new and was originally suggested by Tanner [8]. The work presented here forms a link between this concept and classical iterative matrix inversion techniques, allowing the design of good codes that encode quickly. By reusing the decoder architecture for encoding, both operations can be performed by the same circuit on a time switched basis. Hence, by eliminating the need for a separate dedicated encoder circuit we aim to reduce the overall size of the communication system.

Encoding and decoding operations are represented in the usual way as message passing on bipartite graphs. Variable nodes will be labelled v_i and check nodes labelled c_j (the nodes can also have values associated with them, and we shall re-use the symbols v_i and c_j for this purpose). The matrix H specifies the edges of the graph. We define A(v_i) as the set of all check nodes adjacent to variable node v_i, specified by column i of H. Similarly, A(c_j) is specified by row j of H. Variable-to-check messages will be denoted λ_{v_i→c_j} and check-to-variable messages λ_{c_j→v_i}.

This work was supported by Southern-Poro Communications and the Australian Government under ARC SPIRT C00002232.



II. THE SUM-PRODUCT ENCODER

It is well known that if H_p is upper triangular then encoding (solution of (1)) may be performed in m steps by simply performing back substitution, which implies solution for each of the parity bits in a particular order. Hence upper triangular matrices are of interest. For any upper triangular A with elements from F_2 (the binary Galois field),

    A non-singular  ⇔  diag A = I                                    (2)

(since the diagonal elements are the eigenvalues, none of which may be 0). Let 𝒯 be the set of all non-singular m x m matrices that may be made upper triangular using only row and column permutations.
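As a concrete illustration of the back-substitution step (our sketch, assuming an upper triangular matrix with unit diagonal as in (2); the 4 x 4 system is a made-up example):

```python
import numpy as np

def back_substitute_gf2(U, b):
    """Solve U x = b over GF(2) in m steps for upper triangular U with diag U = I."""
    m = U.shape[0]
    x = np.zeros(m, dtype=np.uint8)
    for i in range(m - 1, -1, -1):                   # x_m first, then x_{m-1}, ..., x_1
        x[i] = (b[i] + U[i, i + 1:] @ x[i + 1:]) % 2
    return x

U = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=np.uint8)
b = np.array([1, 0, 1, 1], dtype=np.uint8)
x = back_substitute_gf2(U, b)
assert np.all(U @ x % 2 == b)
```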
Consider a binary erasure channel with the mapping M of the output: M(0) = +1, M(1) = -1, M(erasure) = 0. The message passing erasure decoder, c.f. [4], operates as follows (all arithmetic is real).

Algorithm 1.
1. Initialization: Set v_i ∈ {0, -1, +1} to be the received symbol corresponding to variable v_i. Initialize all messages to 0.
2. Variable→Check: From each variable v_i to each c ∈ A(v_i) send

    λ_{v_i→c} = v_i + Σ_{c' ∈ A(v_i)\c} λ_{c'→v_i}

3. Check→Variable: From each check c_j to each variable v ∈ A(c_j) send

    λ_{c_j→v} = Π_{u ∈ A(c_j)\v} sgn(λ_{u→c_j})

4. Stop/Continue: If, for every v_i, at least one λ_{c_j→v_i} ≠ 0, then exit; otherwise return to 2.
The decoder of Algorithm 1 can be used for encoding certain types of LDPC codes, as we shall now show.

Theorem 1. Let A ∈ 𝒯 and b ∈ {-1, +1}^m be given. Algorithm 1 solves Ax = b in at most m iterations, without regard to the actual order of node updates.

Proof. Without loss of generality assume A upper triangular. Let x = A^{-1} b. Construct the bipartite graph with variable nodes v_i connected to checks c_j according to A. Also connected to each check c_j is the additional variable node v'_j. Initialize the nodes (step 1) with v'_j = M(b_j) ∈ {-1, +1} and v_i = 0.

Call v_i active if at least one λ_{c_j→v_i} ≠ 0. An active v_i is correct if λ_{c_j→v_i} ∈ {sgn M(x_i), 0} for all c_j ∈ A(v_i). For any correct v_i, sgn λ_{v_i→c_j} ∈ {sgn M(x_i), 0}. In the case that every node is either correct or not active, nodes can only be made correct, left correct, or left inactive at the next Step 3, since each new λ_{c_j→v_i} ∈ {M(x_i), 0}.

After the first Step 3, v_1 will be correct (since the only non-zero incoming message will be M(b_1)). Similarly, any other nodes activated will be correct.

Assume there is a set of k ≥ 1 correct nodes C and that every node v ∉ C is inactive. It remains only to show that at least one correct node is created at the next Step 3. This is true since there will exist an integer j ≥ 1 such that v_1, ..., v_j are correct. At the next Step 3, v_{j+1} ∉ C will be correct since, by A triangular and (2), c_i ∈ A(v_i) and c_j ∉ A(v_i) for j < i. Likewise, v_j ∈ A(c_j) and v_i ∉ A(c_j) for i > j. The induction requires at most m steps before every node is correct. □

Hence, if H_p ∈ 𝒯, we may perform encoding by applying Algorithm 1 to (1), initializing the variables representing x_u with ±1 and those representing x_p with 0. The idea of Theorem 1 is certainly not new, but we have not seen it made explicit.
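The sketch below (ours, and deliberately simplified) illustrates encoding by erasure decoding in the spirit of this remark: the parity positions are treated as erasures and recovered one check at a time. It uses a peeling schedule rather than the exact message schedule of Algorithm 1, and the small matrices are hypothetical examples.

```python
import numpy as np

def encode_by_peeling(H_p, H_u, x_u):
    """Recover the parity part of [x_p | x_u] by erasure 'decoding': repeatedly
    satisfy any check of H_p that currently involves exactly one unknown parity bit."""
    m = H_p.shape[0]
    b = H_u @ x_u % 2                        # known contribution of the information bits
    x_p = np.full(m, -1, dtype=np.int8)      # -1 marks an erased (unknown) parity bit
    for _ in range(m):                       # at most m sweeps for a triangular H_p
        for j in range(m):                   # check node c_j <-> row j of H_p
            unknown = [i for i in range(m) if H_p[j, i] and x_p[i] < 0]
            if len(unknown) == 1:            # exactly one erased neighbour: solve for it
                i = unknown[0]
                known = sum(int(x_p[k]) for k in range(m) if H_p[j, k] and k != i)
                x_p[i] = (int(b[j]) + known) % 2
        if np.all(x_p >= 0):
            break
    return x_p.astype(np.uint8)

H_p = np.array([[1, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=np.uint8)   # upper triangular
H_u = np.array([[1, 0], [0, 1], [1, 1]], dtype=np.uint8)
x_u = np.array([1, 1], dtype=np.uint8)
x_p = encode_by_peeling(H_p, H_u, x_u)
x = np.concatenate([x_p, x_u])
assert np.all(np.hstack([H_p, H_u]) @ x % 2 == 0)
```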
The number of iterations required for convergence may be greatly reduced below the upper bound of m for LDPC codes, as they are represented by sparse matrices. It is possible to design H_p ∈ 𝒯 using a tiered approach, similar to that described in [5]. In this construction, the parity bits for one or more tiers will be evaluated at each iteration, and therefore the total number of iterations may be set by the designer.

The selection of H_u is always arbitrary with respect to the sum-product encodability of H.
III. ENCODING VIA ITERATIVE MATRIX INVERSION

Having reduced the encoding problem statement to one of matrix inversion, it is natural to wonder whether classical iterative matrix inversion techniques, such as those described in [9], can be applied.

Suppose we wish to solve Ax = b. Split A according to A = S - T. We can then write Sx = Tx + b, and try the iteration

    S x_{k+1} = T x_k + b                                            (3)

for some initial guess x_0. In order to compute x_{k+1} easily, S should be easily invertible. The Gauss-Seidel method chooses S triangular, so for A ∈ 𝒯 we see that the method of the previous section actually implements Gauss-Seidel (in this case simply back-substitution). The classical Jacobi method chooses S = diag(A) and converges for any initial guess provided the spectral radius of the real or complex matrix S^{-1}T is less than 1. We will consider the use of this method for F_2 matrices, necessitating different convergence criteria.

Over F_2, S invertible implies S = I and diag A = I. Hence (3) becomes

    x_{k+1} = (A ⊕ I) x_k + b                                        (4)

Theorem 2. For arbitrary x_0, the iteration (4) yields x_{k'} = A^{-1} b for k' ≥ k if and only if (A ⊕ I)^k = 0.

Proof. Let the error term at iteration k be e_k = (x - x_k). Subtracting x_{k+1} = T x_k + b from x = Tx + b gives e_{k+1} = T e_k, so e_k = T^k e_0, where e_0 is the error of the initialization x_0. Hence the error term vanishes for iterations k' ≥ k if T^k = 0. Conversely, if T^k ≠ 0 for all k, the algorithm will fail to universally converge, since the error will be zero only if e_0 is in the null space of T^k, which cannot be guaranteed independently of the initial guess. □
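A small numerical check of Theorem 2 (our sketch, not from the paper): for an upper triangular A we find the smallest k with (A ⊕ I)^k = 0 and confirm that k iterations of (4) starting from x_0 = 0 return A^{-1} b. The function names and the 4 x 4 example are ours.

```python
import numpy as np

def gf2_matmul(A, B):
    return (A.astype(np.int64) @ B.astype(np.int64)) % 2

def nilpotency_index(T, max_power=64):
    """Smallest k with T^k = 0 over GF(2), or None if none is found up to max_power."""
    P = np.eye(T.shape[0], dtype=np.int64)
    for k in range(1, max_power + 1):
        P = gf2_matmul(P, T)
        if not P.any():
            return k
    return None

def jacobi_encode_gf2(A, b, k):
    """Iterate x_{k+1} = (A xor I) x_k xor b from x_0 = 0, as in (4)."""
    T = (A + np.eye(A.shape[0], dtype=A.dtype)) % 2
    x = np.zeros(A.shape[0], dtype=np.int64)
    for _ in range(k):
        x = (T.astype(np.int64) @ x + b) % 2
    return x

A = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=np.uint8)
b = np.array([1, 0, 1, 1], dtype=np.int64)
k = nilpotency_index((A + np.eye(4, dtype=np.uint8)) % 2)    # smallest k with (A xor I)^k = 0
x = jacobi_encode_gf2(A, b, k)
assert np.all(A.astype(np.int64) @ x % 2 == b)               # x = A^{-1} b over GF(2)
```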

Based on Theorem 2, we can in principle construct reversible LDPC codes that are iteratively encodable in k iterations using (4), by selecting H_p such that (H_p ⊕ I)^k = 0. We call such codes Jacobi encodable. It is interesting to note that the codes with H_p ∈ 𝒯 mentioned in the last section are also Jacobi encodable.
Theorem 3. Any code with upper triangular H_p is Jacobi encodable over F_2.

Proof. Let T = H_p ⊕ I. Hence diag T = 0. Each successive power of T will therefore be upper triangular, with the first non-zero entry of each row occurring at least one place later. Thus T^k = 0 for some k. □

We may view the Jacobi iteration as message passing on a bipartite graph formed as follows. Let variable node v_i correspond to x_i and let the nodes v'_j correspond to b_j. The v_i are connected to checks c_j according to A and the v'_j are connected to c_j. This is the same connection structure as required for sum-product decoding. The Jacobi message passing schedule, for the binary mapping M(0) = +1, M(1) = -1, is defined as follows.

[Fig. 1. Jacobi algorithm as message passing.]

Algorithm 2.
1. Initialization: Set all v_i = +1 and v'_j = M(b_j).
2. Variable→Check: Send μ_{v_i→c} = v_i to all c ∈ A(v_i)\c_i.
3. Check→Variable: Each check c_j sends μ_{c_j→v_j} = Π_{u ∈ A(c_j)\v_j} μ_{u→c_j} to v_j only. Let v_j = μ_{c_j→v_j}. Return to step 2.

An example of how this algorithm operates on the graph is shown in Fig. 1. During each iteration variables may be updated in parallel. For clarity, Fig. 1 shows only those messages used to update v_2.

We note that Algorithm 2 has a strong resemblance to the sum-product decoder. In fact, the update process for μ_{c_j→v_j} in the Jacobi method is identical to that used in the check-to-variable update in the sum-product case, so the decoder architecture may be reused. It is also worth noting that only one operation per node needs to be performed in each step of the Jacobi method, compared to one per connected edge for each of the nodes in the sum-product case. Therefore the Jacobi encoder implementation offers the potential for reduced power consumption.
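The schedule of Algorithm 2 can be written directly in terms of the ±1 node values, with the check-node product playing the role of XOR under the mapping M. The sketch below is our own rendering of this parallel update (it reuses the 4 x 4 example from the earlier Jacobi sketch) and is not the authors' implementation.

```python
import numpy as np

def jacobi_message_passing(A, b, num_iter):
    """Algorithm-2-style schedule: node values live in {+1, -1} under M(0) = +1,
    M(1) = -1; each check c_j sends one product message to its own variable v_j
    per iteration (a sketch; variable names are ours)."""
    m = A.shape[0]
    M = lambda bit: 1 - 2 * int(bit)               # binary mapping
    v = np.ones(m, dtype=np.int64)                 # v_i = +1, i.e. initial guess x_0 = 0
    v_b = np.array([M(bit) for bit in b])          # auxiliary nodes v'_j carry M(b_j)
    for _ in range(num_iter):
        new_v = v.copy()
        for j in range(m):                         # all checks update in parallel
            prod = v_b[j]
            for i in range(m):
                if A[j, i] and i != j:             # neighbours of c_j other than v_j
                    prod *= v[i]
            new_v[j] = prod                        # v_j <- product message from c_j
        v = new_v
    return (1 - v) // 2                            # map {+1, -1} back to {0, 1}

A = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=np.uint8)
b = np.array([1, 0, 1, 1], dtype=np.uint8)
x = jacobi_message_passing(A, b, num_iter=4)       # (A xor I)^4 = 0, so 4 iterations suffice
assert np.all(A.astype(int) @ x % 2 == b)
```

Because each check emits a single message per iteration, the work per iteration is one product per check node rather than one per edge, which is the saving noted above.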
IV. REVERSIBLE LDPC CODES

In this section we demonstrate the use of the F_2 Jacobi convergence rule to design codes which are iteratively encodable in two iterations of the Jacobi method. We therefore seek a matrix H_p with (H_p ⊕ I)^2 = 0, i.e. H_p^2 = I.

There are many rules that can be applied to build a matrix H_p of this form. Here we build an example code using some of the simplest. We may begin with any matrix A for which A^2 = I, for example A = I, and grow it to the desired size of H_p by recursively applying either of two growth rules. In both cases, if A^2 = I then B^2 = I, where B denotes the grown matrix. The first method provides some flexibility in growing the column and row density distribution of A, whereas the second method allows us to expand A without altering the distribution. Neither method introduces new cycles of length 4 into the graph of H. We complete H for a rate 1/2 code, building H_u by adding randomly generated columns of weight 4 to the right hand side of H_p, and rejecting columns that would introduce a 4-cycle.
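A quick check of the two-iteration criterion (our sketch; the block matrix below is a hypothetical H_p chosen only because it satisfies H_p^2 = I over F_2, not one of the paper's constructions):

```python
import numpy as np

def gf2_power(M, k):
    P = np.eye(M.shape[0], dtype=np.int64)
    for _ in range(k):
        P = (P @ M.astype(np.int64)) % 2
    return P

# Hypothetical H_p = [[I, I], [0, I]], which satisfies H_p^2 = I over GF(2).
I2 = np.eye(2, dtype=np.int64)
H_p = np.block([[I2, I2], [np.zeros((2, 2), dtype=np.int64), I2]])

assert np.array_equal(gf2_power(H_p, 2), np.eye(4, dtype=np.int64))   # H_p^2 = I
T = (H_p + np.eye(4, dtype=np.int64)) % 2                             # H_p xor I
assert not gf2_power(T, 2).any()                                      # (H_p xor I)^2 = 0

# Two Jacobi iterations of (4) from x_0 = 0 already give x_p = H_p^{-1} b.
b = np.array([1, 1, 0, 1], dtype=np.int64)
x1 = (T @ np.zeros(4, dtype=np.int64) + b) % 2
x2 = (T @ x1 + b) % 2
assert np.array_equal(H_p @ x2 % 2, b)
```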
We have constructed an n = 512 code using this method and observed that its BER performance compares well to that of a regular code. However, this simple code contains some single weight columns, which are undesirable for sum-product decoding. For such columns, only a single edge connects the variable node to the remainder of the graph. If the variable becomes corrupted then it will always pass the corrupted message value along this edge, thus connected checks may not be satisfied, preventing their use as a stopping criterion. The codes constructed in the following sections do not contain any single weight columns.

V. REGULAR CONSTRUCTION

In this section we allow four encoder iterations and build (3,6)-regular codes, constructing H_p as an m x m circulant matrix. The first row of a circulant matrix is specified by the polynomial c(x), where the coefficient of x^{j-1} gives the jth column entry. The ith row of the matrix is then specified by the polynomial p(x) = x^{i-1} c(x) mod (x^m + 1).
Theorem 4. If C is a binary m x m circulant matrix, where m = 2^q for q > 2, built from cyclic rotations of the first row polynomial c(x) = 1 + x + x^{m/4+1}, then C is an invertible (3,3)-regular matrix satisfying the condition (C ⊕ I)^4 = 0.

Proof. Given that the weight of c(x) is 3 and the transpose of a circulant matrix is also circulant, it follows that C is (3,3)-regular. Without loss of generality, we note that the statement (C ⊕ I)^4 = 0 is equivalent to C^4 = I, and that if this holds then C must be invertible. The algebra of C over F_2 is isomorphic to the algebra of polynomials modulo x^m + 1 having coefficients from F_2 [10]. It therefore remains only to show that c^4(x) ≡ 1 modulo x^m + 1:

    c^4(x) = x^{m+4} + x^4 + 1 = 1 mod (x^m + 1)                     □
An example circulant matrix for m = 8 which satisfies Theorem 4, having the first row polynomial c(x) = 1 + x + x^3, follows:

    C = [ 1 1 0 1 0 0 0 0
          0 1 1 0 1 0 0 0
          0 0 1 1 0 1 0 0
          0 0 0 1 1 0 1 0
          0 0 0 0 1 1 0 1
          1 0 0 0 0 1 1 0
          0 1 0 0 0 0 1 1
          1 0 1 0 0 0 0 1 ]

[Fig. 2. Random and reversible regular LDPC codes.]
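The sketch below (ours) builds this m = 8 circulant from its first row polynomial c(x) = 1 + x + x^3 and checks the properties claimed in Theorem 4: C is (3,3)-regular, C^4 = I and (C ⊕ I)^4 = 0. As an extra illustration it also checks that Kronecker growth with an identity block, B = kron(C, I), preserves the nilpotency condition; applying that rule to C is our own example, not a construction from the paper.

```python
import numpy as np

def circulant_from_poly(m, exponents):
    """m x m binary circulant whose first row has ones at the given powers of x."""
    first_row = np.zeros(m, dtype=np.int64)
    first_row[list(exponents)] = 1
    return np.stack([np.roll(first_row, i) for i in range(m)])   # row i is x^i * c(x)

def gf2_power(M, k):
    P = np.eye(M.shape[0], dtype=np.int64)
    for _ in range(k):
        P = (P @ M) % 2
    return P

m = 8
C = circulant_from_poly(m, [0, 1, 3])                            # c(x) = 1 + x + x^3
assert np.all(C.sum(axis=0) == 3) and np.all(C.sum(axis=1) == 3) # (3,3)-regular
assert np.array_equal(gf2_power(C, 4), np.eye(m, dtype=np.int64))# C^4 = I
T = (C + np.eye(m, dtype=np.int64)) % 2
assert not gf2_power(T, 4).any()                                 # (C xor I)^4 = 0

B = np.kron(C, np.eye(2, dtype=np.int64))                        # grow with B = kron(C, I)
TB = (B + np.eye(2 * m, dtype=np.int64)) % 2
assert not gf2_power(TB, 4).any()                                # the condition is preserved
```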
VI. IRREGULAR CONSTRUCTION

To build H_p we start with a 4-cycle free seed matrix A which has the property (A ⊕ I)^4 = 0. We then grow it to the desired size of H_p by recursively applying either of the following two rules, where kron represents the matrix Kronecker product:

    B = kron(A, I)        B = [(A ⊕ I) …]

In both cases (B ⊕ I)^4 = 0, and the column and row density distribution of B is equal to that of A. Neither method introduces new cycles of length 4.

Richardson et al. [12], [13] have shown how density evolution can be used to compute the capacity of a given ensemble of randomly constructed LDPC codes. They define the threshold as the maximum level of noise such that the probability of error tends to zero as both the block length and the number of decoder iterations tend to infinity. Chung et al. [14] have since presented a less complex Gaussian approximation algorithm for determining the threshold over AWGN channels and sum-product decoding. Using these algorithms the authors provide optimized irregular distribution sequences for good irregular codes. We note that both algorithms are based upon random LDPC constructions, and depend upon the "local tree assumption" that the girth of the graph will be large enough to sustain cycle free local subgraphs during decoding [14]. Here H_p is structured and we are interested in observing the effect that this has on the decoder performance.

[Fig. 3. Random and reversible irregular LDPC codes.]

The matrix H_p created above has equal column and row density distributions λ(x) = ρ(x) = 0.6667x + 0.3333x^2, using the notation from [13], with respect to edges. We build H_u randomly for maximum column weight λ_max = 9, so that the overall distribution of H is close to that of the density evolution optimized code (5) from [13], which has a noise threshold σ* = 0.9540 (E_b/N_0 = 0.4090 dB):

    λ(x) = 0.27684x + 0.28342x^2 + 0.43974x^8
    ρ(x) = 0.01568x^5 + 0.85244x^6 + 0.13188x^7                      (5)

The noise threshold for the designed distribution (6) of H, evaluated using Chung's Gaussian approximation calculator [15], is σ* = 0.9412 (E_b/N_0 = 0.5260 dB):

    λ(x) = 0.27664x + 0.28331x^2 + 0.44005x^8
    ρ(x) = 0.88599x^6 + 0.11401x^7                                   (6)
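As a consistency check on these edge-perspective degree distributions (our sketch; the coefficients are those printed in (5) and (6), while the exponents shown and the helper design_rate are our reconstruction), the design rate r = 1 - ∫ρ / ∫λ should come out close to 1/2 for both pairs:

```python
# Edge-perspective distributions: lambda(x) = sum_i l_i x^(i-1), rho(x) likewise,
# using the notation of [13]. Design rate: r = 1 - (sum_i r_i / i) / (sum_i l_i / i).

def design_rate(lam, rho):
    """lam, rho: dicts mapping node degree i to the fraction of edges of that degree."""
    integ = lambda dist: sum(frac / deg for deg, frac in dist.items())
    return 1.0 - integ(rho) / integ(lam)

# Distribution (5): the density evolution optimized code from [13].
lam5 = {2: 0.27684, 3: 0.28342, 9: 0.43974}
rho5 = {6: 0.01568, 7: 0.85244, 8: 0.13188}
# Distribution (6): the designed distribution of the reversible code.
lam6 = {2: 0.27664, 3: 0.28331, 9: 0.44005}
rho6 = {7: 0.88599, 8: 0.11401}

print(round(design_rate(lam5, rho5), 4))   # ~0.5
print(round(design_rate(lam6, rho6), 4))   # ~0.5
```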
The performance of the optimized n = 1008 reversible code, using a sum-product decoder for 1000 iterations over an AWGN channel, is compared to that of the optimized n = 1000 random code from [13] in Fig. 3. We see that it compares well until around E_b/N_0 = 1.8 dB. A possible explanation for the divergence after this point is the fact that the seed matrix, although 4-cycle free, contains several cycles of length 6. The methods used to grow the seed above also grow the number of 6-cycles. As a result, this particular structure of H_p violates the local tree assumption in many instances.

VII. CONCLUSIONS

We have presented practical algorithms for iterative encoding of LDPC codes which make use of the architecture in place for a sum-product decoder. In each case we have illustrated how codes may be designed to encode within a guaranteed number of iterations. We have drawn a link between iterative encoding/decoding and classical iterative matrix inversion techniques. The Jacobi method was proposed as an iterative encoding algorithm with very low complexity.

Examples of both regular and irregular reversible codes were constructed and their performance analyzed. The example regular reversible LDPC codes compare well to those constructed randomly, while it appears that there is still work to be done in building optimized irregular reversible structures.

The efficient re-use of circuit space and the potential for reduced power consumption presented by the low complexity Jacobi encoder is of obvious practical relevance.

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes. MIT Press, 1963.
[2] M. Sipser and D. A. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1710-1722, Nov. 1996.
[3] D. A. Spielman, "Linear-time encodable and decodable error-correcting codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1723-1731, Nov. 1996.
[4] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical loss-resilient codes," in Proc. 29th Symp. Theory of Computing, pp. 150-159, 1997.
[5] D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of constructions of irregular Gallager codes," IEEE Trans. Commun., vol. 47, pp. 1449-1454, Oct. 1999.
[6] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 638-656, Feb. 2001.
[7] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498-519, Feb. 2001.
[8] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, Sep. 1981.
[9] G. Strang, Linear Algebra and its Applications. Saunders College Publishing, 3rd ed., 1988.
[10] M. Karlin, "New binary coding results by circulants," IEEE Trans. Inform. Theory, vol. 15, pp. 81-92, 1969.
[11] D. J. C. MacKay, "Encyclopedia of sparse graph codes," http://wol.ra.phy.cam.ac.uk/mackay/codes/
[12] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
[13] T. J. Richardson and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, Feb. 2001.
[14] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, pp. 657-670, Feb. 2001.
[15] S.-Y. Chung, "Threshold calculation using a Gaussian approximation," http://truth.mit.edu/~sychung/gath.html

