Werner Henkel, Senior Member, IEEE, Nazia S. Islam, and M. Ahmed Leghari
Jacobs University Bremen
Bremen, Germany
Emails: werner.henkel@ieee.org, {w.henkel, n.islam, m.leghari}@jacobs-university.de
Abstract—Usually, coding and a channel and the corresponding equalization and decoding are seen as a serial concatenation. One might consider the solution to be iterative equalization and decoding in a Turbo fashion. We show that it is not necessary to run LDPC decoding and equalization as separate entities in a Turbo manner; instead, decision-feedback equalization (DFE) can be integrated directly into the LDPC decoder, performing DFE-like operations as part of the LDPC message passing.

I. INTRODUCTION

A very common equalizer structure is the decision-feedback equalizer shown in Fig. 1 [1], [2].

Fig. 1. Decision Feedback Equalizer (linear feed-forward filter yielding a minimum-phase overall response F(z), slicer, and feedback filter F(z) − 1)

After the feed-forward filter, the overall transfer function is minimum phase; hence, the energy is concentrated at the beginning of the impulse response. The slicer cannot easily be replaced by a decoder due to the decoding delay. One typical solution is Tomlinson-Harashima precoding [3], [4], where the feedback filter is moved to the transmitter, thereby avoiding any error propagation. The structure is shown in Fig. 2, where the feed-forward filter is seen as part of the minimum-phase overall channel F(z).

Tomlinson-Harashima precoding requires a duplex channel, since the precoder coefficients need to be known at the transmitter. At best, some compromise coefficient might be chosen, leaving the remaining equalization to the feed-forward filter.

Another typical solution is suitable for convolutional codes, where one computes the feedback operations for all paths in a Viterbi algorithm, thereby, of course, increasing the complexity substantially.

With the invention of Turbo coding by Berrou et al. [5], many applications apart from the parallel and serial concatenation of codes were investigated, the serial concatenation of coding and an ISI (inter-symbol interference) channel and the corresponding iterative treatment of equalization and decoding being one of them [6]–[9]. A figure from [6] is redrawn here as Fig. 3 to show the typical Turbo decoding of the serial structure.

Fig. 3. Turbo equalization [6] (MAP equalizer and MAP decoder exchanging the LLRs LC(cn), LE(xn), L(cn), L(xn), LD(cn) via the interleaver π and deinterleaver π−1)

In a Turbo-like operation, instead of a DFE variant, one could think of realizing the channel equalization by a soft-output Viterbi algorithm, a BCJR, or a windowed BCJR algorithm, if the number of states is or can be made sufficiently small. This allows a straightforward exchange of LLRs (log-likelihood ratios) between the equalizer and the decoder. If the decoder is a Turbo or LDPC decoder, this would mean some iterations there in between the equalizer operations.
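The conventional DFE recursion of Fig. 1 can be illustrated with a short simulation. This is a minimal sketch only, assuming BPSK (±1) symbols and the first-order minimum-phase tail F(z) = 1 + f1·z⁻¹ used later in this paper; the noise level and block length are illustrative choices, not taken from the paper's simulations.

```python
# Sketch of the conventional DFE of Fig. 1, assuming BPSK (+-1) symbols and
# the first-order minimum-phase tail F(z) = 1 + f1*z^-1 used in this paper.
# Noise level and block length are illustrative choices.
import random

def dfe(received, f1):
    """Slice symbol by symbol, feeding back the hard decision weighted by
    the tail coefficient f1 to cancel the post-cursor ISI."""
    decisions = []
    prev = 0.0                      # no decision exists before the first symbol
    for r in received:
        corrected = r - f1 * prev   # subtract the tail of the impulse response
        d = 1.0 if corrected >= 0.0 else -1.0   # hard slicer
        decisions.append(d)
        prev = d
    return decisions

random.seed(1)
f1 = 0.5
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]
received = [symbols[i] + f1 * (symbols[i - 1] if i > 0 else 0.0)
            + random.gauss(0.0, 0.3) for i in range(len(symbols))]
decisions = dfe(received, f1)
errors = sum(d != s for d, s in zip(decisions, symbols))
print("symbol errors out of 2000:", errors)
```

A wrong decision is fed back and can trigger error propagation; this is the effect that Tomlinson-Harashima precoding, and later the soft feedback of Section II, avoid.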
Fig. 2. Tomlinson-Harashima precoding (feedback F(z) − 1 with modulo-2M reduction at the transmitter, equivalently realized as an addition of C(z) · 2M; the channel F(z) with AWGN N(z) and a modulo-2M detector)

The next section discusses how a separate realization with “outer” Turbo-like iterations can be avoided and how a DFE can directly be integrated into the Tanner graph for decoding an LDPC code.

II. INTEGRATED EQUALIZATION AND MESSAGE PASSING DECODING

In a recent paper at ISTC 2018 [10], we showed how Markov properties of a source can directly be integrated into the LDPC decoding without the need of
an iterative, Turbo-like procedure exchanging informa- The analog decoder input ri is the one to be used in
tion between a BCJR algorithm handling the Markov the equalizer operations according to
source and message passing inside the Tanner graph ri −f1 ·λ(ξi−1 ) = ri −f1 ·tanh (ξi−1 /2) , i = 1, 2, . . .
of the LDPC code. Although the iterative procedure is (3)
slightly superior to the integrated one, the complexity where i is the variable node counter.
is lower for the integrated solution, since no separate Equation (3) is illustrated in Fig. 4. Comparing with
BCJR algorithm is needed. An ISI channel, however, the standard DFE structure in Fig. 1 will make the
creates the dependencies after encoding. Hence, the se- operation (3) obvious. In the DFE, the effect of the tail
quence between a component creating memory and the of the impulse response is subtracted from subsequent
LDPC encoding is the opposite for both applications. symbols. Now, instead of a hard decision value after
One has a Markov source followed by an LDPC en- a slicer, we take the variable-node estimate instead,
coder, the other consists of an LDPC encoder followed which combines all impinging LLRs. However, not the
by an ISI channel. The Markov source properties could LLRs themselves are used, but, since the equalizer is
be integrated by additional LLR forwarding between processing analog values, the corresponding soft-value
variable nodes. An equalizer is, however, an operation is computed following Eq. (1). Clearly, Fig. 4 shows
related to analog signals. This means an extension of a DFE extension integrated into the Tanner graph.
the Tanner graph with a transition to analog signals Some care has to be taken regarding the scheduling
and also the reverse to process LLRs. of the operations. Especially, one should not perform
For illustration purposes, we assume a very short any up-front equalizing step just based on the intrinsic
minimum phase impulse response after the feed- (not code protected) input. This will lead to inferior
forward filter to be described in z domain by F (z) = performance.
1 + f1 z −1 and for the actual simulations, for now, we At the beginning, the intrinsic information will be
have only taken F (z) = 1+0.5z −1 . The generalization use to initialize the variable node contents to feed
to higher orders is obvious and later shown in the the DFE graph segment for the following variable-
formulation of the modified variable node equation. to-check node equation and to feed the first run of
DFE is a deterministic operation, wherein a hard the check-node equation. The first actual variable-
decided value after the slicer is subtracted from the node operation, i.e., from variable to check node will
next sample(s), thereby eliminating the tail F (z) − 1 combine the direct intrinsic information, the impulse-
of the impulse response in a denoised fashion. In an response-tail cancellation through tanh and fi and the
LDPC Tanner graph with message-passing decoding, other incoming LLRs from check nodes. The variable
the variable nodes represent the decisions, although and check node equations are:
their quality will improve with time and the values " M
#
(l) 2 X
are not hard decided but represented by log-likelihood vm (i) = 2 r(i) − fm tanh(ξi−m ) +
ratios, which we call ξi at the ith variable node. The σn
m=1
LLR ξi represents a so-called soft bit, which we write dv (i)
(l−1)
X
as + uk (i), ∀m = 1...dv (i)(4)
,
k=1,k6=m
λ(ξ) = EX {x} = (+1) · P (X = +1) + (l) dc (j) (l)
uk (j) Y vm (j)
+(−1) · P (X = −1) tanh = tanh , ∀k = 1...dc (j)(5)
.
2 2
m=1,m6=k
eξ 1
= ξ
− The check-node equation (5) is the standard one.
1+e 1 + eξ (l)
vm (i) is the variable-to-check LLR at the mth edge
eξ − 1 eξ/2 − e−ξ/2 (l)
= = of the ith variable node. uk (j) is the check-to-variable
1 + eξ eξ/2 + e−ξ/2 LLR at the k th edge of the j th check node. dv (i)
= tanh (ξ/2) . (1)
and dc (j) denote the variable and check node degree
at variable and check nodes, i and j , respectively.
This allows to represent the LLRs as analog values The term in rectangular brackets represents the analog
to be subtracted after weighting with f1 (fi in general DFE-like operation with the usual factor given by
for higher order ISI channel models) from the intrinsic Eq. (2).
input to the next variable node. The intrinsic informa- Note that ξi−m denotes the sum of all incoming
tion depends on the analog value in the form LLRs, which in usual LDPC decoding would only
be computed as final result after all iterations are
2
p(ri |ci = +1 e+2ri /2σn 2 completed. With our integrated DFE/LDPC solution,
Lintr i = ln = ln −2r /2σ2 = 2 · ri . this overall sum will be needed at every iteration.
p(ri |ci = −1 e i n σn
(2) In Fig. 4, we show the case with M = 1.
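The update rules (4) and (5), together with the intrinsic term (2) and the tail cancellation (3), can be sketched in code. This is a minimal, illustrative sketch only: it assumes a toy (7,4) Hamming parity-check matrix standing in for an LDPC code, transmission of the all-zero codeword with BPSK (code bit 0 → +1), F(z) = 1 + 0.5 z⁻¹, and a hypothetical noise variance; it is not the authors' simulation setup.

```python
# Minimal, illustrative sketch of the integrated DFE/LDPC message passing,
# Eqs. (2)-(5). Assumptions (not the authors' setup): a toy (7,4) Hamming
# parity-check matrix stands in for the LDPC code, the all-zero codeword is
# sent with BPSK (code bit 0 -> +1), F(z) = 1 + 0.5 z^-1, sigma_n^2 = 0.1.
import math
import random

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
n_checks, N = len(H), len(H[0])
f = [0.5]                      # impulse-response tail f_1, ..., f_M
sigma2 = 0.1                   # noise variance sigma_n^2

random.seed(2)
x = [1.0] * N                  # all-zero codeword -> all +1 symbols
r = []
for i in range(N):             # ISI channel: r_i = x_i + f_1 x_{i-1} + noise
    isi = sum(fm * x[i - 1 - m] for m, fm in enumerate(f) if i - 1 - m >= 0)
    r.append(x[i] + isi + random.gauss(0.0, math.sqrt(sigma2)))

u = [[0.0] * N for _ in range(n_checks)]   # check-to-variable LLRs
xi = [2.0 * ri / sigma2 for ri in r]       # initial total LLRs from Eq. (2)

for _ in range(20):
    # intrinsic part of Eq. (4): DFE-like tail cancellation with soft bits
    intr = []
    for i in range(N):
        cancel = sum(fm * math.tanh(xi[i - 1 - m] / 2.0)
                     for m, fm in enumerate(f) if i - 1 - m >= 0)
        intr.append(2.0 / sigma2 * (r[i] - cancel))
    # variable-to-check messages, Eq. (4)
    v = [[0.0] * N for _ in range(n_checks)]
    for j in range(n_checks):
        for i in range(N):
            if H[j][i]:
                v[j][i] = intr[i] + sum(u[k][i] for k in range(n_checks)
                                        if k != j and H[k][i])
    # check-to-variable messages, Eq. (5): the standard tanh rule
    for j in range(n_checks):
        for i in range(N):
            if H[j][i]:
                prod = 1.0
                for m in range(N):
                    if m != i and H[j][m]:
                        prod *= math.tanh(v[j][m] / 2.0)
                prod = max(min(prod, 0.999999), -0.999999)  # keep atanh finite
                u[j][i] = 2.0 * math.atanh(prod)
    # total LLR xi_i, needed in every iteration by the DFE graph segment
    xi = [intr[i] + sum(u[j][i] for j in range(n_checks) if H[j][i])
          for i in range(N)]

decoded = [0 if l >= 0 else 1 for l in xi]
print("decoded bits:", decoded)
```

Note how the total LLR ξ_i is recomputed in every iteration, since the DFE graph segment needs it, as pointed out above.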
Fig. 4. Tanner graph segment with the integrated DFE operating on the analog inputs r_{i−3}, . . . , r_{i+3}

1 The final paper will additionally show results with the ISI channel of [6] to allow for a direct comparison. Simulation time did not yet allow us to provide the full set of results.
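The soft-bit relation in Eq. (1) and the Gaussian intrinsic LLR in Eq. (2) can also be verified numerically; the following small sketch uses purely illustrative values.

```python
# Numerical sanity check of Eq. (1), lambda(xi) = tanh(xi/2) = P(+1) - P(-1),
# and of Eq. (2), ln p(r|+1)/p(r|-1) = 2 r / sigma_n^2; values are illustrative.
import math

errs = []
for xi in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    p_plus = math.exp(xi) / (1.0 + math.exp(xi))     # P(X = +1) for LLR xi
    errs.append(abs((2.0 * p_plus - 1.0) - math.tanh(xi / 2.0)))

sigma2, r = 0.5, 0.8                                 # illustrative values
num = math.exp(-(r - 1.0) ** 2 / (2.0 * sigma2))     # p(r | x = +1), up to scale
den = math.exp(-(r + 1.0) ** 2 / (2.0 * sigma2))     # p(r | x = -1), up to scale
errs.append(abs(math.log(num / den) - 2.0 * r / sigma2))
print("max deviation:", max(errs))
```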