Based on Theorem 2, we can in principle construct reversible LDPC codes that are iteratively encodable in k iterations using (4) by selecting H₂ such that (H₂ ⊕ I)^k = 0. We call such codes Jacobi encodable. It is interesting to note that the codes with upper triangular H₂ mentioned in the last section are also Jacobi encodable.
Theorem 3. Any code with upper triangular H₂ is Jacobi encodable over F₂.

Proof. Let T = H₂ ⊕ I. Hence diag T = 0. Each successive power of T will therefore be upper triangular, with the first non-zero entry of each row occurring at least one place later. Thus T^k = 0 for some k. □
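The nilpotency argument above can be checked numerically, together with the convergence of a Jacobi-style iteration x ← (H₂ ⊕ I)x ⊕ b that it guarantees. A minimal sketch over F₂ (assuming numpy; the small H₂ below is our illustrative example, not one from the paper):

```python
import numpy as np

# Illustrative upper triangular H2 over F2 (unit diagonal).
H2 = np.array([[1, 1, 0, 1],
               [0, 1, 1, 0],
               [0, 0, 1, 1],
               [0, 0, 0, 1]], dtype=np.uint8)

T = (H2 + np.eye(4, dtype=np.uint8)) % 2    # T = H2 (+) I, zero diagonal
# T is strictly upper triangular, hence nilpotent: T^4 = 0 over F2.
T4 = np.linalg.matrix_power(T.astype(int), 4) % 2
print(T4.any())                              # False: T^4 = 0

# Jacobi iteration x <- T x (+) b converges to the solution of H2 x = b.
b = np.array([1, 0, 1, 1], dtype=np.uint8)
x = np.zeros(4, dtype=np.uint8)
for _ in range(4):                           # at most k = 4 iterations
    x = (T @ x + b) % 2
print((H2 @ x % 2 == b).all())               # True: x solves H2 x = b
```

Because T is strictly upper triangular, the iteration terminates in at most n steps, matching the theorem.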
Algorithm 2.
1. Initialization: Set all v_i = +1 and u_j = b_j.
2. Variable→Check: Send μ_{v_i→c} = v_i to all c ∈ A(v_i)\c_i.
3. Check→Variable: Each check c_j sends μ_{c_j→v_j} = u_j ∏_{v ∈ A(c_j)\v_j} μ_{v→c_j} to v_j only. Let v_j = μ_{c_j→v_j}. Return to step 2.
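The check update in step 3 is ordinary F₂ addition in disguise: in the ±1 representation, a product of messages corresponds to the XOR of the underlying bits, which is what makes the sum-product check-node hardware reusable for encoding. A small sketch (the helper names are ours, not the paper's):

```python
from itertools import product

# Map between F2 bits and the +/-1 ("soft bit") representation.
to_pm1 = lambda bit: 1 - 2 * bit       # 0 -> +1, 1 -> -1
to_bit = lambda s: (1 - s) // 2        # +1 -> 0, -1 -> 1

# Over all 3-bit inputs, the check-node product equals the XOR of the bits.
for bits in product((0, 1), repeat=3):
    prod = 1
    for bit in bits:
        prod *= to_pm1(bit)
    assert to_bit(prod) == bits[0] ^ bits[1] ^ bits[2]
print("check-node product = XOR in the +/-1 representation")
```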
An example of how this algorithm operates on the graph is shown in Fig. 1. During each iteration variables may be updated in parallel. For clarity Fig. 1 shows only those messages used to update v₂.

We note that Algorithm 2 has a strong resemblance to the sum-product decoder. In fact, the update process for μ_{c_j→v_j} in the Jacobi method is identical to that used in the sum-product case, so the decoder architecture may be reused. It is also worth noting that only one operation per node needs to be performed in each step of the Jacobi method, compared to one per connected edge for each of the nodes in the sum-product case. Therefore the Jacobi encoder implementation offers the potential for reduced power consumption.

IV. REVERSIBLE LDPC CODES

In this section we demonstrate the use of the F₂ Jacobi convergence rule to design codes which are iteratively encodable in two iterations of the Jacobi method. We therefore seek a matrix H₂ with (H₂ ⊕ I)² = 0, or equivalently H₂² = I.

There are many rules that can be applied to build a matrix H₂ of this form. Here we build an example code using some of the simplest. We may begin with any matrix A for which A² = I, for example A = I, and grow it to the desired size of H₂ by recursively applying either of two rules. In both cases if A² = I then B² = I. The first method provides some flexibility in growing the column and row density distribution of A, whereas the second method allows us to expand A without altering the distribution. Neither method introduces new cycles of length 4 into the graph of H. We complete H for a rate 1/2 code, building H₁ by adding randomly generated columns of weight 4 to the right-hand side of H₂ and rejecting columns that would introduce a 4-cycle.

We have constructed an n = 512 code using this method and observed that its BER performance compares well to that of a regular code. However, this simple code contains some single weight columns, which are undesirable for sum-product decoding. For such columns, only a single edge connects the variable node to the remainder of the graph. If the variable becomes corrupted then it will always pass the corrupted message value along this edge, thus connected checks may not be satisfied, preventing their use as a stopping criterion. The codes constructed in the following sections do not contain any single weight columns.

V. REGULAR CONSTRUCTION

In this section we allow four encoder iterations and build (3,6)-regular codes, constructing H₂ as an m × m
circulant matrix. The first row of a circulant matrix is specified by the polynomial c(x), where the coefficient of x^{j-1} represents the jth column entry. The ith row of the matrix is then specified by the polynomial p(x) = x^{i-1} c(x) mod (x^m + 1).
Theorem 4. If C is a binary m × m circulant matrix, where m = 2^q for q ≥ 2, built from cyclic rotations of the first row polynomial c(x) = 1 + x + x^{m/4+1}, then C is an invertible (3,3)-regular matrix, satisfying the condition (C ⊕ I)⁴ = 0.

Proof. Given that the weight of c(x) is 3 and the transpose of a circulant matrix is also circulant, it follows that C is (3,3)-regular. We note that the statement (C ⊕ I)⁴ = 0 is equivalent to C⁴ = I, and that if this holds then C must be invertible. The algebra of C over F₂ is isomorphic to the algebra of polynomials modulo x^m + 1 having coefficients from F₂ [10]. It therefore remains only to show that c⁴(x) ≡ 1 modulo x^m + 1. Since squaring is linear over F₂,

c⁴(x) = x^{m+4} + x⁴ + 1
      = 1 mod (x^m + 1).  □
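The polynomial identity in the proof can be confirmed mechanically by building the circulant itself; a minimal sketch for m = 8 (assuming numpy):

```python
import numpy as np

m = 8
first_row = np.zeros(m, dtype=int)
for e in (0, 1, m // 4 + 1):          # c(x) = 1 + x + x^{m/4+1}
    first_row[e] = 1

# Row i of the circulant is the first row cyclically shifted right by i.
C = np.array([np.roll(first_row, i) for i in range(m)])

# (3,3)-regular: every row and column has weight 3.
assert (C.sum(axis=0) == 3).all() and (C.sum(axis=1) == 3).all()

# C^4 = I over F2, equivalently (C (+) I)^4 = 0.
C4 = np.linalg.matrix_power(C, 4) % 2
assert (C4 == np.eye(m)).all()
print("C^4 = I over F2")
```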
An example circulant matrix for m = 8, which satisfies Theorem 4, having the first row polynomial c(x) = 1 + x + x³, follows.

C = [ 1 1 0 1 0 0 0 0 ]
    [ 0 1 1 0 1 0 0 0 ]
    [ 0 0 1 1 0 1 0 0 ]
    [ 0 0 0 1 1 0 1 0 ]
    [ 0 0 0 0 1 1 0 1 ]
    [ 1 0 0 0 0 1 1 0 ]
    [ 0 1 0 0 0 0 1 1 ]
    [ 1 0 1 0 0 0 0 1 ]

Fig. 2. Random and reversible regular LDPC codes

VI. IRREGULAR CONSTRUCTION

To build H₂ we start with a 4-cycle free seed matrix A which has the property (A ⊕ I)⁴ = 0. We then grow it to the desired size of H₂ by recursively applying either of the following two rules, where kron represents the matrix Kronecker product:

B = kron(A, I)    or    B = kron(I, A).

In both cases (B ⊕ I)⁴ = 0 and the column and row density distribution of B is equal to that of A. Neither method introduces new cycles of length 4.

Richardson et al. [12], [13] have shown how density evolution can be used to compute the capacity of a given
ensemble of randomly constructed LDPC codes. They define the threshold as the maximum level of noise such that the probability of error tends to zero as both the block length and the number of decoder iterations tend to infinity. Chung et al. [14] have since presented a less complex Gaussian approximation algorithm for determining the threshold over AWGN channels and sum-product decoding. Using these algorithms the authors provide optimized irregular distribution sequences for good irregular codes. We note that both algorithms are based upon random LDPC constructions, and depend upon the "local tree assumption" that the girth of the graph will be large enough to sustain cycle-free local subgraphs during decoding [14]. Here H₂ is structured and we are interested in observing the effect that this has on the decoder performance.
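The growth rules used for the constructions above are easy to validate numerically. A sketch (assuming numpy, and using the m = 8 circulant of Theorem 4 as a stand-in seed, since the paper's irregular seed matrix is not reproduced in the text):

```python
import numpy as np

# Stand-in seed: the m = 8 circulant from Theorem 4, for which (A (+) I)^4 = 0.
m = 8
A = np.array([np.roll([1, 1, 0, 1, 0, 0, 0, 0], i) for i in range(m)])

def nilpotent4(M):
    """Check (M (+) I)^4 = 0 over F2."""
    N = (M + np.eye(len(M), dtype=int)) % 2
    return not (np.linalg.matrix_power(N, 4) % 2).any()

assert nilpotent4(A)
for B in (np.kron(A, np.eye(2, dtype=int)),    # rule 1
          np.kron(np.eye(2, dtype=int), A)):   # rule 2
    assert nilpotent4(B)                       # growth preserves (B (+) I)^4 = 0
    # Column-weight multiset simply doubles: the density distribution is unchanged.
    assert sorted(B.sum(axis=0)) == sorted(A.sum(axis=0)) * 2
print("both growth rules preserve nilpotency and the density distribution")
```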
Fig. 3. Random and reversible irregular LDPC codes

The matrix H₂ created above has equal column and row density distributions λ(s) = ρ(s) = 0.6667s + 0.3333s² using the notation from [13], with respect to edges. We build H₁ randomly for maximum column weight λmax = 9 so that the overall distribution of H is close to that for the density evolution optimized code (5) from [13], which has a noise threshold σ* = 0.9540 (Eb/N0 = 0.4090 dB).

λ(s) = 0.27684s + 0.28342s² + 0.43974s⁸
ρ(s) = 0.01568s⁵ + 0.85244s⁶ + 0.13188s⁷    (5)

The noise threshold for the designed distribution (6) of H, evaluated using Chung's Gaussian approximation calculator [15], is σ* = 0.9412 (Eb/N0 = 0.5260 dB).

λ(s) = 0.27664s + 0.28331s² + 0.44005s⁸
ρ(s) = 0.88599s⁶ + 0.11401s⁷    (6)

The performance of the optimized n = 1008 reversible code, using a sum-product decoder for 1000 iterations over an AWGN channel, is compared to that of the optimized n = 1000 random code from [13] in Fig. 3. We see that it compares well until around Eb/N0 = 1.8 dB. A possible explanation for the divergence after this point is the fact that the seed matrix, although 4-cycle free, contains several cycles of length 6. The methods used to grow the seed above also grow the number of 6-cycles. As a result, this particular structure of H₂ violates the local tree assumption in many instances.

VII. CONCLUSIONS

We have presented practical algorithms for iterative encoding of LDPC codes which make use of the architecture in place for a sum-product decoder. In each case we have illustrated how codes may be designed to encode within a guaranteed number of iterations. We have drawn a link between iterative encoding/decoding and classical iterative matrix inversion techniques. The Jacobi method was proposed as an iterative encoding algorithm with very low complexity.

Examples of both regular and irregular reversible codes were constructed and their performance analyzed. The example regular reversible LDPC codes compare well to those constructed randomly, while it appears that there is still work to be done in building optimized irregular reversible structures.

The efficient re-use of circuit space and potential for reduced power consumption presented by the low complexity Jacobi encoder is of obvious practical relevance.

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes. MIT Press, 1963.
[2] M. Sipser and D. A. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1710-1722, Nov. 1996.
[3] D. A. Spielman, "Linear-time encodable and decodable error-correcting codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1723-1731, Nov. 1996.
[4] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical loss-resilient codes," in Proc. 29th Symp. Theory of Computing, pp. 150-159, Aug. 1997.
[5] D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of constructions of irregular Gallager codes," IEEE Trans. Commun., vol. 47, pp. 1449-1454, Oct. 1999.
[6] T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 638-656, Feb. 2001.
[7] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498-519, Feb. 2001.
[8] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, pp. 533-547, Sep. 1981.
[9] G. Strang, Linear Algebra and Its Applications. Saunders College Publishing, 3rd ed., 1988.
[10] M. Karlin, "New binary coding results by circulants," IEEE Trans. Inform. Theory, vol. 15, pp. 81-92, 1969.
[11] D. J. C. MacKay, "Encyclopedia of sparse graph codes," http://wol.ra.phy.cam.ac.uk/mackay/codes/
[12] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
[13] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619-637, Feb. 2001.
[14] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, pp. 657-670, Feb. 2001.
[15] S.-Y. Chung, "Threshold calculation using a Gaussian approximation," http://truth.mit.edu/~sychung/gath.html