
Iteration-constrained design of IRA codes

Alex Grant, Ingmar Land and Guangsong Wang


Institute for Telecommunications Research
University of South Australia
{alex.grant,ingmar.land}@unisa.edu.au, wangy012@mymail.unisa.edu.au

Abstract—This paper addresses extrinsic information chart based design of non-systematic irregular repeat-accumulate codes for a given number of decoding iterations. This criterion is of practical importance in many applications where complexity or latency is limited. Our main contribution is a novel formulation of the optimization problem in a particular way that makes its actual evaluation possible. This is achieved by introducing additional optimization variables and constraints which have the effect of “unrolling” the iteration. Our approach enforces the finite iteration condition, while avoiding the need to know arbitrary repeated functional compositions of the component EXIT functions. We restrict attention to the binary erasure channel, for which the EXIT chart approach is exact. Extension to other sparse-graph codes and other communication channels is straightforward.
I. INTRODUCTION
Sparse-graph codes with iterative message-passing decoding represent the state-of-the-art of modern channel coding. The two major classes are low-density parity-check (LDPC) codes and irregular repeat-accumulate (IRA) codes. For both classes, encoding and decoding complexity are linear in the code length, and there are efficient code design methods [1–3].

The typical design goal in the literature is capacity-approaching codes: codes that have maximum rate while the probability of error diminishes with increasing code length and increasing number of iterations. The predominant approaches are density evolution and extrinsic information transfer (EXIT) charts [1, 4, 5]. While density evolution provides exact results for the decoding threshold, the EXIT chart method employs a Gaussian approximation. The advantage of the latter, however, is that code design can often be formulated in terms of a convex optimization problem, allowing convenient numerical evaluation. Tradeoffs between the two methods for the design of IRA codes were discussed in [6]. For erasure channels, the two methods are equivalent.

Two important parameters characterizing implementation complexity are the parity-check matrix density and the number of decoding iterations. The relationship between parity-check density and performance in terms of the gap to capacity has been analyzed in [7, 8]; the required number of iterations for successful decoding and corresponding bounds have been addressed in [9]. As opposed to LDPC codes, non-systematic IRA codes can achieve capacity on the binary erasure channel with bounded complexity per information bit [7].

Whereas the above approaches assume infinite iteration, the number of iterations required to achieve a target performance is of practical interest. In [10] a method for the design of irregular LDPC codes was proposed, which minimizes the number of iterations for a given target error probability. The key concept is to approximate the number of iterations by a continuous function, and use this as the objective function in the optimization. Starting from a capacity-achieving code, the variable-node degree distribution is iteratively adapted to improve this objective function.

We take a different approach, seeking a code that is optimized for a fixed number of iterations. We restrict ourselves to non-systematic IRA codes over the binary erasure channel (BEC). Extension to other sparse-graph codes and to other communication channels (given the usual assumptions and approximations with EXIT charts) is possible.

II. IRA CODES AND EXIT CHARTS

We assume familiarity with low-density parity-check codes, their graphical representation, iterative decoding, and EXIT chart based design [1–3, 5]. We focus on non-systematic irregular repeat-accumulate codes, where information bits are repeated, interleaved, passed through parity-checks, and finally scrambled by an accumulator [6, 11]. Only the code bits produced by the accumulator are transmitted over the channel. We restrict our analysis to the binary erasure channel, for which the EXIT chart approach is exact [12].

The encoder maps k information bits onto n code bits; the code rate is R = k/n. The code is defined by the edge-perspective degree polynomials, λ(z) = Σ_i λ_i z^(i−1) for the variable nodes and ρ(z) = Σ_i ρ_i z^(i−1) for the check nodes, where λ_i, i = 1, 2, . . . denotes the proportion of edges connected to variable nodes of degree i, and ρ_i, i = 1, 2, . . . denotes the proportion of edges connected to check nodes of degree i. According to common practice in the literature, ρ_i does not count the connection to the accumulator. The design rate is

    R_d = ( Σ_i λ_i/i ) / ( Σ_i ρ_i/i ) = ∫_0^1 λ(z) dz / ∫_0^1 ρ(z) dz ≤ R.   (1)

The code bits are transmitted over a binary erasure channel with erasure probability δ and capacity C = 1 − δ.
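As a quick sanity check of (1), the design rate can be computed directly from the two degree distributions. The following short sketch does this in Python; the distributions shown are illustrative placeholders, not values taken from this paper.

# Design rate R_d of (1) from edge-perspective degree distributions.
# The example values of lambda and rho are placeholders for illustration.
lam = {3: 0.4, 4: 0.6}   # lambda_i: fraction of edges attached to degree-i variable nodes
rho = {1: 0.2, 3: 0.8}   # rho_i: fraction of edges attached to degree-i check nodes

Rd = sum(l / i for i, l in lam.items()) / sum(r / j for j, r in rho.items())
print(f"design rate R_d = {Rd:.3f}")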
The iterative decoder consists of three components: the variable-node decoder, the check-node decoder, and the accumulator decoder. Only the accumulator obtains observations from the channel. In every iteration the component decoders are activated in the order variable-check-accumulator-check. In the EXIT chart method, the mutual information transfer between the component decoders is modeled and used to analyze convergence behavior [5]. For the following analysis,
we combine the check-node decoder and the accumulator decoder into a single unit, henceforth referred to as the check-accumulator decoder. Since we have assumed a binary erasure channel, all virtual channels are also erasure channels and EXIT chart analysis is exact [12].

As shown in Figure 1, we orient the extrinsic information transfer chart with the x-axis representing the variable-node extrinsic information I_Ev and the check-accumulator a-priori information I_Aca. The y-axis carries the variable-node a-priori information I_Av and the check-accumulator extrinsic information I_Eca. In order to avoid proliferation of subscripts, where it will not cause confusion, we simply use x and y to represent these quantities. The variable-node EXIT function f(y) maps from y = I_Av onto x = I_Ev. Conversely, the check-accumulator function g(x) maps from x = I_Aca onto y = I_Eca.
[Figure 1: EXIT chart with x-axis x = I_Aca = I_Ev and y-axis y = I_Av = I_Eca, showing the curves f(y) and g(x), one decoding trajectory step from (x^(ℓ−1), y^(ℓ−1)) to x^(ℓ), and the area A between the curves.]

Fig. 1. Example EXIT chart of a non-systematic IRA code: variable-node EXIT function f(y) and check-accumulator EXIT function g(x).

The variable node EXIT function is given by [12]

    f(y) = Σ_i λ_i f_i(y)   (2)
    f_i(y) = 1 − (1 − y)^(i−1),   i = 1, 2, . . .   (3)

where f_i is the transfer function for a degree-i variable node. The check-accumulator EXIT function is [12]

    g(x) = Σ_i ρ_i g_i(x),   (4)
    g_i(x) = ( C / (1 − (1 − C) x^i) )^2 · x^(i−1),   i = 1, 2, . . .   (5)

where g_i is the transfer function for a degree-i check node combined with the accumulator. We can show that f(y) and g(x) are linear in λ and ρ and convex in y and x, respectively.
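For later reference, (2)–(5) translate directly into code. The sketch below is ours (function and variable names are not from the paper) and assumes the degree distributions are given as dictionaries mapping degree to edge fraction.

# EXIT functions of a non-systematic IRA code on the BEC, cf. (2)-(5).
def f(y, lam):
    """Variable-node EXIT function (2)-(3): f(y) = sum_i lambda_i (1 - (1 - y)^(i-1))."""
    return sum(l * (1.0 - (1.0 - y) ** (i - 1)) for i, l in lam.items())

def g(x, rho, C):
    """Check-accumulator EXIT function (4)-(5):
    g(x) = sum_i rho_i (C / (1 - (1 - C) x^i))^2 x^(i-1)."""
    return sum(r * (C / (1.0 - (1.0 - C) * x ** i)) ** 2 * x ** (i - 1)
               for i, r in rho.items())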
Denote by x^(ℓ) = I_Ev^(ℓ) the extrinsic information at the output of the variable node decoder after iteration ℓ, and let y^(ℓ) = I_Av^(ℓ) be the corresponding a-priori information. With reference to Figure 1, the corresponding values of the trajectory for L iterations are given recursively for ℓ = 1, 2, . . . by

    x^(ℓ) = f(y^(ℓ−1)),   y^(ℓ) = g(x^(ℓ)).   (6)

The initial values are x^(0) = 0 and y^(0) = g(0), resulting from the fact that the variable nodes are not directly connected to the communication channel.

We can eliminate the y^(ℓ) from (6) to obtain x^(ℓ) = f(g(x^(ℓ−1))). Going a step further, we may also substitute these equalities into each other to express the variable node extrinsic information after L iterations, I_Ev^(L), as

    x^(L) = h^L(0)   (7)

where h(x) = f(g(x)) and h^L denotes L-fold functional composition.
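The recursion (6) is straightforward to run numerically. The sketch below assumes the functions f and g from the earlier sketch and uses placeholder degree distributions; the value returned for x is exactly h^L(0) in the notation of (7).

def trajectory(lam, rho, C, L):
    """Run the decoding trajectory (6): x(0) = 0, y(0) = g(0),
    then x(l) = f(y(l-1)) and y(l) = g(x(l)) for l = 1..L."""
    x, y = 0.0, g(0.0, rho, C)
    for _ in range(L):
        x = f(y, lam)      # variable-node update
        y = g(x, rho, C)   # check-accumulator update
    return x, y            # x equals h^L(0) of (7)

# Illustrative distributions and channel; not the code designed in Section IV.
print(trajectory({3: 0.4, 4: 0.6}, {1: 0.2, 3: 0.8}, C=0.7, L=10))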
The areas under the variable node curve (with the orientation indicated in Figure 1) and under the check-accumulator curve are given by

    ∫_0^1 (1 − f(y)) dy = Σ_i λ_i/i = ∫_0^1 λ(z) dz,   (8)
    ∫_0^1 g(x) dx = C Σ_j ρ_j/j = C ∫_0^1 ρ(z) dz,   (9)

respectively. Substituting from (1) we have the following theorem, first proved for low-density parity-check codes in [12].

Theorem 1 (Area Theorem):

    C − R_d = A / ∫_0^1 ρ(z) dz,   (10)
    A = ∫_0^1 g(x) dx − ( 1 − ∫_0^1 f(y) dy ),   (11)

where A is the area between the check-accumulator curve g and the variable node curve f.

Thus, finding the code with the smallest rate gap from capacity corresponds to minimizing the area between f and g. Note a minor difference from the original LDPC result, where ∫ρ sets a “scaling factor”. We do not require the inner check-accumulator code to have rate 1.
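Theorem 1 is easy to verify numerically: integrate f and g (from the earlier sketch) on a grid and compare A / ∫ρ with C − R_d. The grid size and the example distributions below are our own choices.

def area_theorem_check(lam, rho, C, n=20000):
    """Return (C - R_d, A / integral of rho), i.e. the two sides of (10)."""
    ts = [(k + 0.5) / n for k in range(n)]                 # midpoint rule on (0, 1)
    area_g = sum(g(x, rho, C) for x in ts) / n             # integral of g, cf. (9)
    area_f = sum(f(y, lam) for y in ts) / n
    A = area_g - (1.0 - area_f)                            # (11)
    int_rho = sum(r / j for j, r in rho.items())           # equals integral of rho(z)
    Rd = sum(l / i for i, l in lam.items()) / int_rho      # (1)
    return C - Rd, A / int_rho

print(area_theorem_check({3: 0.4, 4: 0.6}, {1: 0.2, 3: 0.8}, C=0.7))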
For code analysis and design, it is useful to express the (information) bit error rate of the decoder as a function of the mutual information exchanged during the iterative process.

Theorem 2 (Bit Error Rate): The bit error rate P_b of a non-systematic IRA code transmitted over a BEC, with variable node degree distribution λ and a-priori information I_Av (to the variable nodes), is given by

    P_b = (1 − I_v) / 2   (12)
    I_v = ( Σ_j λ_j/j )^(−1) Σ_i (λ_i/i) ( 1 − (1 − I_Av)^i ).   (13)

The value I_v denotes the overall variable node information.
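Equations (12)–(13) in code form, again as a small sketch with our own naming:

def bit_error_rate(lam, I_Av):
    """P_b of (12), with I_v the node-perspective average of (13)."""
    norm = sum(l / i for i, l in lam.items())
    Iv = sum((l / i) * (1.0 - (1.0 - I_Av) ** i) for i, l in lam.items()) / norm
    return (1.0 - Iv) / 2.0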
Note that P_b is monotonically decreasing in I_Av. For fixed design rate R_d,

    I_v = R_d ( Σ_j ρ_j/j )^(−1) Σ_i (λ_i/i) ( 1 − (1 − I_Av)^i ),   (14)

which is linear in the λ_i, convex in the ρ_i and concave in I_Av.

With sufficiently many iterations, the decoder will converge to the fixed point x = f(g(x)) = h(x), which for a well-designed code will occur at a value of x very close to 1, corresponding to a low bit error rate. To prevent an undesirable error floor, we may wish to enforce a condition that the iteration does not get stuck at a fixed point prior to some target value x = 1 − ξ for some small ξ. To this end, we require x < h(x) for all x ∈ (1 − ξ, 1). For this range of x, we use a Taylor series expansion around x = 1, and obtain the condition x < h(1) + h′(1) · (x − 1), which is equivalent to

    h′(1) = λ_2 · ( (2 − C)/C · Σ_{j≥1} j ρ_j − 1 ) < 1,   (15)

referred to as the stability condition. This condition is always met if λ_2 = 0 (this choice is usually bad for capacity-achieving codes). Otherwise it is equivalent to

    Σ_{j≥1} j ρ_j < ( (1 + λ_2)/λ_2 ) · ( C/(2 − C) ).   (16)
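Checking the stability condition for a candidate pair of degree distributions is a one-liner; the sketch below evaluates (15) directly (the example call uses made-up distributions).

def is_stable(lam, rho, C):
    """Stability condition (15): lambda_2 * ((2 - C)/C * sum_j j*rho_j - 1) < 1."""
    lam2 = lam.get(2, 0.0)
    edge_avg_check_degree = sum(j * r for j, r in rho.items())
    return lam2 * ((2.0 - C) / C * edge_avg_check_degree - 1.0) < 1.0

print(is_stable({2: 0.2, 3: 0.8}, {1: 0.2, 3: 0.8}, C=0.7))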
we have analytical expressions for fi (y) and gi (x). In more
III. C ODE D ESIGN general settings, we may only have numerical samples of these
The prevailing approach to code design finds the degree dis- functions.
tribution which results in the minimal rate loss from capacity. Motivated by (6), the key novel step in this paper is to
According to Theorem 1, this corresponds to a curve fitting introduce dummy variables x` , y` , ` = 0, 1, . . . , L. With
problem. Equivalently, for fixed ρ, reference to Figure 1, these are the values of the variable
Pthe maximum design rate
is achieved by maximization of i λi /i. This results in the node extrinsic and a-priori information at iteration `. For given
following linear program. target rate R, we formulate our design problem as follows
Problem 1 (Capacity Approaching Code): Given the chan- (where we have used (14)).
nel capacity C and check-node degree distribution ρ, find the Problem 2 (Minimum bit error rate after L iterations):
variable-node degree distribution λ? as the solution to the Given R < C determine the variable node degree distribution
following linear program: λ?L and check node degree distribution ρ?L that minimize
X λi the bit error rate after L iterations as the solution to:
max (17)  X −1 X 
ρi λi 
λ
i
i max Rd 1 − (1 − yL )
i
(22)
X λ,ρ,x,y i i
s.t. λi · fi (y) > g −1 (y), y ∈ [0, 1) (18) i i

i s.t. (1), (19), (20), (21) and


C X ρi ≥ 0, i = 1, 2, . . . (23)
(1 + λ2 ) − λ2 jρj > 0 (19)
2−C
X
j ρi = 1, (24)
λi ≥ 0, i = 1, 2, . . . (20) i
X x0 = 0, y0 = g(x0 ) (25)
λi = 1. (21) X
i x` = λi fi (y`−1 ), l = 1, 2, . . . , L (26)
i
The constraints (18) ensure that there are no sub-optimal X
fixed points and (19) is the stability condition. y` = ρi gi (x` ), l = 1, 2, . . . , L (27)
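One way to make Problem 1 concrete is to sample the no-fixed-point condition on a finite grid and pass the resulting LP to a generic solver. The sketch below uses scipy.optimize.linprog and enforces (18) in the equivalent form f(g(x)) ≥ x + eps on a grid of x values; the grid density, the margin eps and the degree set are our own choices, and the LP may be infeasible if the chosen maximal degree is too small for the given channel.

import numpy as np
from scipy.optimize import linprog

def capacity_approaching_lambda(degrees, rho, C, grid=200, eps=1e-4):
    """Sketch of Problem 1: maximize sum_i lambda_i / i subject to (18)-(21)."""
    def g(x):  # check-accumulator EXIT function (4)-(5)
        return sum(r * (C / (1.0 - (1.0 - C) * x ** j)) ** 2 * x ** (j - 1)
                   for j, r in rho.items())
    xs = np.linspace(0.0, 1.0 - 1e-3, grid)
    # (18) sampled as: sum_i lambda_i f_i(g(x_k)) >= x_k + eps
    A_ub = [[-(1.0 - (1.0 - g(x)) ** (i - 1)) for i in degrees] for x in xs]
    b_ub = [-(x + eps) for x in xs]
    # (19): lambda_2 * (sum_j j rho_j - C/(2 - C)) <= C/(2 - C) - eps
    s = sum(j * r for j, r in rho.items())
    A_ub.append([(s - C / (2.0 - C)) if i == 2 else 0.0 for i in degrees])
    b_ub.append(C / (2.0 - C) - eps)
    res = linprog(c=[-1.0 / i for i in degrees],             # maximize sum_i lambda_i / i
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * len(degrees)], b_eq=[1.0],   # (21); (20) via bounds
                  bounds=[(0.0, 1.0)] * len(degrees))
    return dict(zip(degrees, res.x)) if res.success else None

# Example: the degree set and check distribution of Section IV, at delta = 0.47.
print(capacity_approaching_lambda([2, 3, 4, 5, 6, 13, 20, 30], {1: 0.2, 3: 0.8}, C=0.53))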
In many situations of practical interest, the implementation complexity of the decoder is limited, and it is therefore of interest to consider the design of codes that reduce decoder complexity, even if this is at the expense of reduced code rate. Complexity constraints may be introduced into the optimization problem (17) to a certain extent via (a) fixing the maximal variable node degree (which needs to be done anyhow in order to have a finite number of variables in Problem 1), or (b) introducing other linear constraints on the λ_i, e.g. corresponding to the cost of high degree nodes.

For a code optimized according to (17), operating at rates close to capacity, it may take many iterations to achieve the target bit error rate. It is therefore of great practical interest to consider the design of codes that achieve the best possible performance after a fixed number of iterations, denoted by L. One obvious approach would be to replace the objective function in (17) with the bit error rate at iteration L. From (12) and (13), we see that this is equivalent to maximizing the variable node information at iteration L,

    I_v^(L) = ( Σ_j λ_j/j )^(−1) Σ_i (λ_i/i) ( 1 − (1 − g(h^(L−1)(0)))^i ),
    h^(L−1)(x) = f(g(f(g( . . . f(g(x)) . . . )))),   with f(g(·)) applied L − 1 times.

Note that I_v^(L) is a multinomial of degree L times the square of the maximal variable node degree. This could potentially present numerical difficulties for optimization. Furthermore, this formulation requires us to compute h^L for each value of L of interest. In the case of IRA codes and the BEC, we have analytical expressions for f_i(y) and g_i(x). In more general settings, we may only have numerical samples of these functions.

Motivated by (6), the key novel step in this paper is to introduce dummy variables x_ℓ, y_ℓ, ℓ = 0, 1, . . . , L. With reference to Figure 1, these are the values of the variable node extrinsic and a-priori information at iteration ℓ. For a given target rate R, we formulate our design problem as follows (where we have used (14)).

Problem 2 (Minimum bit error rate after L iterations): Given R < C, determine the variable node degree distribution λ*_L and check node degree distribution ρ*_L that minimize the bit error rate after L iterations as the solution to:

    max_{λ,ρ,x,y} R_d ( Σ_i ρ_i/i )^(−1) Σ_i (λ_i/i) ( 1 − (1 − y_L)^i )   (22)
    s.t. (1), (19), (20), (21) and
    ρ_i ≥ 0,   i = 1, 2, . . .   (23)
    Σ_i ρ_i = 1,   (24)
    x_0 = 0,   y_0 = g(x_0)   (25)
    x_ℓ = Σ_i λ_i f_i(y_(ℓ−1)),   ℓ = 1, 2, . . . , L   (26)
    y_ℓ = Σ_i ρ_i g_i(x_ℓ),   ℓ = 1, 2, . . . , L   (27)

In this setup, the target rate R would be chosen to be less than capacity, sacrificing some code rate for potential performance improvements after L iterations.
The new constraints (25), (26), (27) ensure consistency of the decoding trajectory according to (6). The trajectory constraints are convex; however, as discussed earlier, the objective function is neither convex nor concave.

We have empirically observed that there is very little bit error rate penalty incurred by replacing the bit error rate objective function (22) with y_L or x_L (i.e. maximizing the information passed between the decoders). In fact, this may be of interest if the code is an inner component of some other outer iteration, e.g. a multi-user decoder. This results in the following optimization problem:

Problem 3 (Maximal information after L iterations): Given R < C, determine the variable node degree distribution λ*_L and check node degree distribution ρ*_L that maximize the variable node extrinsic information after L iterations as the solution to the following convex optimization problem:

    max_{λ,ρ,x,y} x_L   (28)
    s.t. (1), (19), (20), (21), (23), (24), (25), (26), (27)
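To illustrate how Problems 2 and 3 might be set up in practice, the sketch below feeds the unrolled trajectory to a generic nonlinear solver (scipy.optimize.minimize with SLSQP). For simplicity it keeps ρ fixed and reads constraint (1) as pinning the design rate to the target R; these simplifications, the starting point and all solver settings are our own choices, and a generic solver may only return a local optimum, so the result is illustrative rather than a reproduction of the paper's designs. Replacing the objective −x_L by the negated expression in (22) gives the corresponding sketch for Problem 2.

import numpy as np
from scipy.optimize import minimize

def maximal_information_design(degrees, rho, C, R, L=10):
    """Sketch of Problem 3: maximize x_L over (lambda, x_1..x_L) subject to the
    unrolled trajectory constraints (25)-(27), with rho held fixed."""
    nd = len(degrees)
    def g(x):  # check-accumulator EXIT function (4)-(5)
        return sum(r * (C / (1.0 - (1.0 - C) * x ** j)) ** 2 * x ** (j - 1)
                   for j, r in rho.items())
    def split(v):                        # v = [lambda_1..lambda_nd, x_1..x_L]
        return v[:nd], v[nd:]
    def trajectory_residual(v):          # x_l - sum_i lambda_i f_i(g(x_{l-1})), with x_0 = 0
        lam, x = split(v)
        prev = np.concatenate(([0.0], x[:-1]))
        return [x[l] - sum(lam[k] * (1.0 - (1.0 - g(prev[l])) ** (d - 1))
                           for k, d in enumerate(degrees)) for l in range(L)]
    sum_rho_inv = sum(r / j for j, r in rho.items())
    cons = [
        {"type": "eq", "fun": lambda v: np.sum(split(v)[0]) - 1.0},             # (21)
        {"type": "eq", "fun": lambda v: sum(split(v)[0][k] / d
                      for k, d in enumerate(degrees)) - R * sum_rho_inv},       # (1), read as R_d = R
        {"type": "eq", "fun": trajectory_residual},                             # (25)-(27)
    ]
    if 2 in degrees:                      # stability condition (19)
        idx2, s = degrees.index(2), sum(j * r for j, r in rho.items())
        cons.append({"type": "ineq",
                     "fun": lambda v: C / (2.0 - C) - split(v)[0][idx2] * (s - C / (2.0 - C))})
    v0 = np.concatenate((np.full(nd, 1.0 / nd), np.linspace(0.1, 0.9, L)))
    res = minimize(lambda v: -split(v)[1][-1], v0,
                   bounds=[(0.0, 1.0)] * (nd + L), constraints=cons,
                   method="SLSQP", options={"maxiter": 500})
    lam, x = split(res.x)
    return dict(zip(degrees, lam)), x[-1]

# Example call matching the setting of Section IV (delta = 0.3, R = 0.5, L = 10).
lam_L, x_L = maximal_information_design([2, 3, 4, 5, 6, 13, 20, 30],
                                        {1: 0.2, 3: 0.8}, C=0.7, R=0.5, L=10)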
Note that by “unrolling” the iteration, we obtain constraints that are independent of the maximum number of iterations L. Furthermore, we do not need to know the L-fold functional composition f(g(f( . . . ) . . . )). Instead, we only need to know f(y) and g(x) individually. This could be an additional advantage if these characteristics are only available numerically (e.g. for other kinds of codes).
IV. NUMERICAL EXAMPLE

We seek a code of rate R = 0.5 for transmission over a BEC with erasure rate δ = 0.3 that achieves a low bit error probability with L = 10 decoding iterations. We consider variable node degrees from the set {2, 3, 4, 5, 6, 13, 20, 30}, and (for simplicity) the fixed check-node distribution ρ_1 = 0.2, ρ_3 = 0.8. (Note that degree-1 check nodes are required for non-systematic IRA codes.) Solving Problem 3 for maximal information, we obtain the variable node distribution λ^(1), see Table I, and the EXIT function f^(1), depicted in Fig. 2.

As an alternative approach, we designed a capacity-approaching code, according to Problem 1, for a BEC with a higher erasure rate, namely δ = 0.47, to obtain a code rate of R = 0.5. We then use this code with L = 10 decoding iterations for the given BEC with δ = 0.3. The resulting variable node distribution λ^(2) is given in Table I, and the EXIT function f^(2) is depicted in Fig. 2.

[Figure 2: EXIT chart and decoding trajectories over x = I_Aca = I_Ev and y = I_Av = I_Eca.]

Fig. 2. EXIT chart and decoding trajectories: check-accumulator EXIT function g; variable-node EXIT functions for the maximal-information code (Problem 3), f^(1), and the ‘capacity-achieving’ code (Problem 1), f^(2).

For the maximal-information code, the tunnel between the EXIT functions is clearly more open towards the end, and for a given number of iterations a better performance can be achieved. As a result, the bit error rate after L = 10 iterations is 10^(−11) for the maximal-information code and only 10^(−3) for the ‘capacity-achieving’ code.
TABLE I
VARIABLE NODE DEGREE DISTRIBUTIONS (TWO-DIGIT PRECISION).

    i          2     3     4     5     6     13    20    30
    λ^(1)_i    0     0.11  0.66  0.08  0.06  0.06  0.02  0.01
    λ^(2)_i    0.16  0.15  0     0     0.54  0.14  0     0
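The decoding trajectory of Section II gives a quick, self-contained way to compare the two distributions of Table I after L = 10 iterations. Because Table I is rounded to two digits (λ^(2) sums to 0.99 as printed), the bit error rates computed this way are only indicative and need not reproduce the quoted 10^(−11) and 10^(−3) exactly.

def f(y, lam):
    return sum(l * (1.0 - (1.0 - y) ** (i - 1)) for i, l in lam.items())

def g(x, rho, C):
    return sum(r * (C / (1.0 - (1.0 - C) * x ** j)) ** 2 * x ** (j - 1)
               for j, r in rho.items())

def pb_after_L(lam, rho, C, L):
    """Run schedule (6) for L iterations, then evaluate P_b via (12)-(13) at I_Av = y_L."""
    x, y = 0.0, g(0.0, rho, C)
    for _ in range(L):
        x = f(y, lam)
        y = g(x, rho, C)
    norm = sum(l / i for i, l in lam.items())
    Iv = sum((l / i) * (1.0 - (1.0 - y) ** i) for i, l in lam.items()) / norm
    return (1.0 - Iv) / 2.0

rho = {1: 0.2, 3: 0.8}
table_I = {
    "lambda^(1) (Problem 3)": {3: 0.11, 4: 0.66, 5: 0.08, 6: 0.06, 13: 0.06, 20: 0.02, 30: 0.01},
    "lambda^(2) (Problem 1)": {2: 0.16, 3: 0.15, 6: 0.54, 13: 0.14},
}
for name, lam in table_I.items():
    print(name, pb_after_L(lam, rho, C=0.7, L=10))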

ACKNOWLEDGMENT

This work has been supported in part by the Australian Research Council under the ARC Discovery Grant DP0986089.

REFERENCES

[1] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
[2] F. Kschischang and B. Frey, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 498–519, 2001.
[3] H. Loeliger, “An introduction to factor graphs,” IEEE Signal Process. Mag., 2004.
[4] T. Richardson and R. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599–618, 2001.
[5] S. ten Brink and G. Kramer, “Design of repeat-accumulate codes for iterative detection and decoding,” IEEE Trans. Signal Process., vol. 51, no. 11, pp. 2764–2772, 2003.
[6] A. Roumy, S. Guemghar, G. Caire, and S. Verdú, “Design methods for irregular repeat-accumulate codes,” IEEE Trans. Inform. Theory, vol. 50, no. 8, pp. 1711–1727, 2004.
[7] H. D. Pfister, I. Sason, and R. Urbanke, “Capacity-achieving ensembles for the binary erasure channel with bounded complexity,” IEEE Trans. Inform. Theory, vol. 51, pp. 2352–2379, July 2005.
[8] G. Wiechman and I. Sason, “Parity-check density versus performance of binary linear block codes: New bounds and applications,” IEEE Trans. Inform. Theory, vol. 53, no. 2, pp. 550–579, 2007.
[9] I. Sason and G. Wiechman, “Bounds on the number of iterations for turbo-like code ensembles over the binary erasure channel,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), pp. 1898–1902, 2008.
[10] B. Smith, M. Ardakani, W. Yu, and F. Kschischang, “Design of irregular LDPC codes with optimized performance-complexity tradeoff,” IEEE Trans. Commun., vol. 58, no. 2, pp. 489–499, 2010.
[11] H. Jin, A. Khandekar, and R. McEliece, “Irregular repeat-accumulate codes,” in Proc. Int. Symp. on Turbo Codes & Rel. Topics, pp. 1–8, 2000.
[12] A. Ashikhmin, G. Kramer, and S. ten Brink, “Extrinsic information transfer functions: Model and erasure channel properties,” IEEE Trans. Inform. Theory, vol. 50, no. 11, pp. 2657–2673, 2004.