Using Priors

Michael Tüchler, Andrew Singer, Ralf Koetter

July 3, 2000
Abstract

For data transmission over channels with intersymbol interference (ISI), a soft-in soft-out (SISO) equalizer is introduced to correct for the ISI distortion using linear filtering. In addition to the corrupted symbols from the channel, it is assumed that the SISO equalizer has prior information available about the occurrence probability of the value of each symbol. Algorithms are derived to incorporate this prior knowledge in an efficient way. Exact and approximate implementations of the algorithm are derived. The complexity of these implementations grows quadratically and linearly, respectively, in the equalizer filter length and the channel impulse response length.
1 Introduction
In many practical communication scenarios, digital data is transmitted over analog channels with an impulse response that is significantly longer than the symbol period. The resulting distortion is referred to as inter-symbol interference (ISI). Such channels occur, e.g., in wireless communication, magnetic recording, or underwater data transmission. The received signal is further corrupted by noise introduced during transmission over the channel or in the receiver front end. Figure 1 depicts an uncoded data transmission system in its discrete-time baseband representation. The data is mapped to, in general, complex-valued symbols chosen from a given signaling constellation. The channel is assumed to have a finite-length impulse response (FIR) with time-invariant coefficients h_k, k = -M1, -M1+1, ..., M2, and overall response length M = M1 + M2 + 1. The h_k are assumed to be known or at least available as estimates ĥ_k. The noise is modeled to be additive.
A common approach in many practical systems for recovering the transmitted data in presence of ISI and noise is the use of minimum mean squared error (MMSE) equalization, as shown in Figure 1. We consider linear equalizers (LE) and decision feedback equalizers (DFE). Both obtain estimates of the transmitted symbols by linear filtering of the received symbols, while the DFE forms a symbol estimate by also filtering the past decisions. The functional elements of such equalizers are linear filters, whose parameters are determined based on the channel response and a cost criterion, such as the zero forcing (ZF) or MMSE criterion.
Figure 1: Uncoded data transmission system in its discrete-time baseband representation: data, signal mapper, ISI channel with additive noise, equalizer with prior information, inverse signal mapper, data estimate.
2 Basics
For the derivations of the algorithms, we define a specific data transmission system, depicted in Figure 2, and note that extensions to more elaborate systems, i.e., multiple channels or higher-order signal constellations, are straightforward. For simplicity, we assume binary phase shift keying (BPSK) yielding symbols x_n taken from the alphabet {-1, +1} and assumed to be independent at the transmitter. We also assume additive white Gaussian noise (AWGN), i.e., the noise samples w_n are independent and identically distributed (i.i.d.) with normal probability density function (pdf)

f_w(w) = N(0, σ_w^2), where N(μ, σ^2) ≜ (1/√(2π σ^2)) exp( -(w - μ)^2 / (2 σ^2) ).   (1)
Figure 2: Particular data transmission system used for the derivations: the symbols x_n pass through the ISI channel h[n], AWGN w_n is added, and the MMSE estimator maps the channel output z_n and the priors L(x_n) to the extrinsic information L_e^MMSE(x_n).
The channel output symbols z_n are given as

z_n = Σ_{k=-M1}^{M2} h_k x_{n-k} + w_n.   (2)

The sequence z = [ ... z_{n-1} z_n z_{n+1} ... ]^T is the input of the SISO equalizer, which furthermore has available the prior information L(x_n) for each x_n, defined as the log-likelihood ratio (LLR)

L(x_n) ≜ ln( Pr{x_n = +1} / Pr{x_n = -1} ).   (3)
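The LLR of Eq. (3) maps a symbol probability to a single real number and back. A minimal Python sketch (the function names are ours, not from the paper):

```python
import math

def llr(p_plus: float) -> float:
    """L(x_n) = ln(Pr{x_n=+1}/Pr{x_n=-1}) for BPSK, Eq. (3)."""
    return math.log(p_plus / (1.0 - p_plus))

def prob_plus(L: float) -> float:
    """Invert the LLR: Pr{x_n=+1} = 1/(1 + exp(-L))."""
    return 1.0 / (1.0 + math.exp(-L))

# A uniform prior Pr{x_n=+1} = 0.5 corresponds to L(x_n) = 0,
# and the mapping is invertible.
assert abs(llr(0.5)) < 1e-12
assert abs(prob_plus(llr(0.9)) - 0.9) < 1e-12
```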
The optimal SISO equalizer output is the a posteriori log-likelihood ratio

L^MAP(x_n) ≜ ln( Pr{x_n = +1 | z} / Pr{x_n = -1 | z} ).   (4)

The quantity L^MAP(x_n) can be broken up using Bayes' rule into the sum of the prior information L(x_n) and the "extrinsic" information L_e^MAP(x_n), which is the knowledge about the value of x_n in addition to L(x_n):

L^MAP(x_n) = ln( Pr{z | x_n = +1} / Pr{z | x_n = -1} ) + ln( Pr{x_n = +1} / Pr{x_n = -1} )
           = ln( Σ_{x: x_n = +1} p(z | x) Pr{x} / Σ_{x: x_n = -1} p(z | x) Pr{x} ) + ln( Pr{x_n = +1} / Pr{x_n = -1} )
           ≜ L_e^MAP(x_n) + L(x_n).   (5)
Figure 3: SISO equalizer with a linear equalizer (linear filter c[n]) as estimator (MMSE-LE).

3 SISO Equalization

3.1 MMSE-LE
The SISO equalizer with a linear equalizer as estimator to compute x̂_n (MMSE-LE) is depicted in Figure 3. Part of the estimator is a linear filter with time-varying coefficients c_{k,n}, k = -N1, -N1+1, ..., N2, of length N = N1 + N2 + 1. Using the vectors

x_n ≜ [ x_{n+M1+N1}  x_{n+M1+N1-1}  ...  x_{n-M2-N2} ]^T,
w_n ≜ [ w_{n+N1}  w_{n+N1-1}  ...  w_{n-N2} ]^T,
z_n ≜ [ z_{n+N1}  z_{n+N1-1}  ...  z_{n-N2} ]^T,
m_n^z ≜ [ m^z_{n+N1}  m^z_{n+N1-1}  ...  m^z_{n-N2} ]^T,
c_n ≜ [ c_{-N1,n}  c_{-N1+1,n}  ...  c_{N2,n} ]^T,   (6)
and the N x (N+M-1) channel convolution matrix

H ≜ [ h_{-M1}  h_{-M1+1}  ...  h_{M2}   0        ...       0
      0        h_{-M1}    h_{-M1+1} ... h_{M2}   ...       0
      ...
      0        ...        0        h_{-M1}  h_{-M1+1} ...  h_{M2} ],   (7)

the estimator output x̂_n according to Figure 3 can be expressed in matrix-vector form:

z_n = H x_n + w_n,
x̂_n = c_n^H ( z_n - m_n^z ) + m_n^x.   (8)
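The banded structure of H in Eq. (7) and the windowed model of Eq. (8) can be sketched in Python (numpy is assumed available; the tap values and window sizes are hypothetical):

```python
import numpy as np

# Hypothetical 3-tap channel h_k, k = -M1..M2 (M1 = M2 = 1, so M = 3).
h = np.array([0.3, 0.9, 0.3])
M1, M2 = 1, 1
N1, N2 = 2, 2
N = N1 + N2 + 1                         # equalizer window length
M = M1 + M2 + 1

# Banded Toeplitz convolution matrix H of size N x (N+M-1), Eq. (7).
H = np.zeros((N, N + M - 1))
for row in range(N):
    H[row, row:row + M] = h

# Windowed channel output, Eq. (8): z_n = H x_n + w_n (noise omitted here).
x = np.array([1., -1., 1., 1., -1., -1., 1.])   # x_{n+M1+N1} ... x_{n-M2-N2}
z = H @ x
# Each entry of z is the convolution sum of Eq. (2) over one window position.
assert z.shape == (N,)
assert abs(z[0] - (h[0]*x[0] + h[1]*x[1] + h[2]*x[2])) < 1e-12
```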
The quantity x̂_n is the linear MMSE estimate, defined in Appendix A, of the symbol x_n given the observation z_n. From Eqs. (43) and (44) we can identify expressions for the parameters c_n, m_n^z, and m_n^x:

c_n = Cov(z_n, z_n)^{-1} Cov(z_n, x_n) = ( σ_w^2 I_N + H Cov(x_n, x_n) H^H )^{-1} H Cov(x_n, x_n) u,
m_n^z = E{z_n} = H E{x_n},
m_n^x = E{x_n},   (9)

with the selection vector u defined in Eq. (10), given the noise statistics E{w_n} = 0_N and Cov(w_n, w_n) = σ_w^2 I_N. With the standard assumption that the x_n are independent and take on all alphabet symbols equally likely, we have E{x_n} = 0 and Cov(x_n, x_m) = δ[n - m], given the symbol alphabet {+1, -1}. This yields the well-known time-invariant MMSE linear equalizer solution [7]:
c_UP ≜ ( σ_w^2 I_N + H H^H )^{-1} H u,   m_n^z = 0_N,   m_n^x = 0,   x̂_n = c_UP^H z_n,
u ≜ [ 0_{1x(N1+M1)}  1  0_{1x(N2+M2)} ]^T.   (10)

The resulting specific parameter set c_UP is called the uniform prior (UP) solution, since the assumption of x_n being equally likely +1 or -1 is equivalent to prior information L(x_n) given as

L(x_n) = 0,   for all n.   (11)
In general we have L(x_n) in R. In this case the statistics of x_n vary with n, resulting in time-varying parameters (c_n, m_n^z, m_n^x). Taking the prior information L(x_n) into account and assuming independence of the x_n, the first and second order statistics of x_n are as follows:

E{x_n} = Pr{x_n = +1} (+1) + Pr{x_n = -1} (-1)
       = exp(L(x_n)) / (1 + exp(L(x_n))) - 1 / (1 + exp(L(x_n)))
       = tanh( L(x_n) / 2 ),
Cov(x_n, x_m) = { 1 - tanh^2( L(x_n)/2 ),  if n = m;
                  0,                        otherwise. }   (12)
Figure 4: SISO equalizer with a decision feedback equalizer as estimator (MMSE-DFE): forward filter c[n] and strictly causal backward filter b[n] operating on the delayed decisions x̂^d_{n-1}.
m_n^x = tanh( L(x_n) / 2 ),
m_n^x ≜ [ m^x_{n+M1+N1}  m^x_{n+M1+N1-1}  ...  m^x_{n-M2-N2} ]^T,
D_n ≜ diag( [ 1-(m^x_{n+M1+N1})^2   1-(m^x_{n+M1+N1-1})^2   ...   1-(m^x_{n-M2-N2})^2 ] ),
s ≜ H u,
c_n = ( σ_w^2 I_N + H D_n H^H + (m_n^x)^2 s s^H )^{-1} s,
x̂_n = c_n^H ( z_n - H m_n^x + m_n^x s ).   (13)

The equation set (13) describes the computation that has to be performed for every symbol in the received sequence.
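The per-symbol computation of Eq. (13) can be sketched as follows (numpy assumed; the channel and priors are hypothetical, noise is set to zero, and the priors are made perfect so that the result is easy to check):

```python
import numpy as np

# Hypothetical setup: 3-tap channel, length-5 filter.
h = np.array([0.3, 0.9, 0.3]); M1 = M2 = 1; N1 = N2 = 2
N, M = N1 + N2 + 1, M1 + M2 + 1
sigma_w2 = 0.1
H = np.zeros((N, N + M - 1))
for row in range(N):
    H[row, row:row + M] = h
u = np.zeros(N + M - 1); u[N1 + M1] = 1.0
s = H @ u

x = np.array([1., -1., 1., 1., -1., -1., 1.])   # transmitted symbol window
m = np.tanh(0.5 * 50.0 * x)                     # E{x_i} for very confident priors
D = np.diag(1.0 - m**2)                         # Cov(x_n, x_n), Eq. (12)
z = H @ x                                       # noiseless channel output window

mx = m[N1 + M1]                                 # m_n^x of the symbol estimated
R = sigma_w2 * np.eye(N) + H @ D @ H.T + mx**2 * np.outer(s, s)
c = np.linalg.solve(R, s)                       # c_n, Eq. (13)
x_hat = c @ (z - H @ m + mx * s)                # extrinsic-style estimate
# With perfect priors this collapses to |s|^2/(sigma_w^2 + |s|^2) = 0.99/1.09.
assert 0.9 < x_hat < 1.0
```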
3.2 MMSE-DFE
The SISO equalizer with a decision feedback equalizer as estimator (MMSE-DFE) is depicted in Figure 4. One component of the estimator is the forward filter with time-varying coefficients c_{k,n}, k = -N1, -N1+1, ..., N2, of length N = N1 + N2 + 1, and the strictly causal backward filter with time-varying coefficients b_{k,n}, k = 1, 2, ..., Nb, of length Nb. The backward path contains previous estimates x̂_{n-i}, i > 0, which are discretized to x̂^d_n in {+1, -1} (BPSK) by an appropriate slicer h(.) prior to feedback, e.g.,

x̂^d_n = h(x̂_n) = { +1,  x̂_n >= 0;
                     -1,  x̂_n < 0. }
Without forward filtering, the maximum useful length of the backward filter, Nb, is M2, since only past decided estimates x̂^d_{n-i}, i > 0, are available which could be used to remove ISI. If the forward filter has a further delay of about N2 symbols, the ISI caused by N2 + M2 symbols can be processed in the backward filter. Hence, we restrict the choice of Nb to

Nb <= N2 + M2,   (14)
where M - 1 is the maximal value of Nb. Using the quantities already defined for the MMSE-LE and the vectors

x̂^d_n ≜ [ x̂^d_{n-1}  x̂^d_{n-2}  ...  x̂^d_{n-Nb} ]^T,
b_n ≜ [ b_{1,n}  b_{2,n}  ...  b_{Nb,n} ]^T,   (15)

the estimator output x̂_n is expressed in matrix-vector form according to Figure 4 as follows:

z_n = H x_n + w_n,
x̂_n = c_n^H ( z_n - m_n^z ) - b_n^H ( x̂^d_n - m_n^d ) + m_n^x.   (16)
The quantity x̂_n is now the linear MMSE estimate of x_n given the observations z_n and x̂^d_n. From Eqs. (43) and (44) we can identify expressions for the parameters c_n, b_n, m_n^z, m_n^d, and m_n^x:

[ c_n ; -b_n ] = [ σ_w^2 I_N + H Cov(x_n, x_n) H^H    H Cov(x_n, x̂^d_n) ;
                   Cov(x̂^d_n, x_n) H^H               Cov(x̂^d_n, x̂^d_n) ]^{-1}
                 [ H Cov(x_n, x_n) u ;  Cov(x̂^d_n, x_n) u ],
m_n^z = E{z_n} = H E{x_n},
m_n^d = E{x̂^d_n},
m_n^x = E{x_n}.   (17)
The statistics of x̂^d_n are identical to the statistics of x_n, since we assume that the MMSE-DFE is error-free:

x̂^d_n = x_n.   (18)

With the particular choice of the MMSE-DFE backward filter length Nb in Eq. (14), the covariance matrices Cov(x_n, x̂^d_n), Cov(x̂^d_n, x̂^d_n), and Cov(x̂^d_n, x_n) follow directly from Eq. (18) as submatrices of Cov(x_n, x_n).
By introducing

J ≜ Cov(x̂^d_n, x̂^d_n)^{-1} Cov(x̂^d_n, x_n) = [ 0_{Nb x (M1+N1+1)}  I_{Nb}  0_{Nb x (M2+N2-Nb)} ],

we obtain

b_n = J H^H c_n,

which can be verified from Eq. (17). We see that the DFE output x̂_n is thus only a function of the forward filter parameters. Rewriting Eq. (16) with this finding yields

m̄_n ≜ [ m^x_{n+M1+N1}  ...  m^x_n   x̂^d_{n-1}  ...  x̂^d_{n-Nb}   m^x_{n-Nb-1}  ...  m^x_{n-M2-N2} ]^T,
x̂_n = c_n^H ( z_n - m_n^z ) - c_n^H H J^H ( x̂^d_n - m_n^d ) + m_n^x = c_n^H ( z_n - H m̄_n ) + m_n^x.   (19)
Also for the MMSE-DFE, we present at first the common time-invariant (UP) solution, where the symbols x_n are assumed to take on the values +1 and -1 with equal probability. Using Eq. (12) with prior information L(x_n) as in Eq. (11) yields

c_UP = ( σ_w^2 I_N + H diag([ 1_{1x(M1+N1+1)}  0_{1xNb}  1_{1x(M2+N2-Nb)} ]) H^H )^{-1} H u,
m̄_n = [ 0_{1x(M1+N1+1)}  x̂^d_{n-1}  ...  x̂^d_{n-Nb}  0_{1x(M2+N2-Nb)} ]^T.   (20)

For general priors L(x_n) in R we compute the statistics of x_n and x̂^d_n according to Eq. (12). The algorithm to compute x̂_n with an MMSE-DFE including the well-behaved constraint

N2 = M1,   Nb = M - 1,   N1 = N - M1 - 1,

is now as follows:

m_n^x = tanh( L(x_n) / 2 ),
m̄_n = [ m^x_{n+M1+N1}  ...  m^x_{n+1}  m^x_n  x̂^d_{n-1}  ...  x̂^d_{n-Nb} ]^T,
D_n = diag([ 1-(m^x_{n+M1+N1})^2  ...  1-(m^x_{n+1})^2   1   0_{1xNb} ]),
c_n = ( σ_w^2 I_N + H D_n H^H )^{-1} s,
x̂_n = c_n^H ( z_n - H m̄_n + m_n^x s ),
x̂^d_n = h(x̂_n).   (21)
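The bookkeeping in Eq. (21), replacing the means of the Nb already-decided past symbols by the fed-back hard decisions and forcing their variances to zero, can be sketched as follows (numpy assumed; all numeric values are hypothetical):

```python
import numpy as np

M1, M2 = 1, 2
N = 5
N2 = M1                      # constraint of Eq. (21)
N1 = N - M1 - 1
Nb = M1 + M2                 # Nb = M - 1, Eq. (39)

L_priors = np.array([0.8, -0.4, 1.2, 0.6, -2.0, 0.3, 1.0, -0.5])  # L(x_i)
m = np.tanh(L_priors / 2.0)                                       # Eq. (12)
decisions = np.array([1.0, -1.0, -1.0])                           # x^d_{n-1..n-Nb}

center = M1 + N1                       # position of x_n in the window
m_bar = m.copy()
m_bar[center + 1: center + 1 + Nb] = decisions   # decisions replace soft means
d = 1.0 - m_bar**2
d[center] = 1.0                        # extrinsic: no prior on x_n itself
d[center + 1: center + 1 + Nb] = 0.0   # decided symbols carry no uncertainty

assert m_bar.shape == (N + M1 + M2,)
assert d[center] == 1.0 and np.all(d[center + 1:] == 0.0)
```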
4 Computing the Extrinsic Information

The pdfs f_{x̂_n | x_n = k}(x) are time-varying as a function of the prior information L(x_i), i = n-M2-N2, n-M2-N2+1, ..., n+M1+N1, of the symbols affecting x̂_n. An exact implementation for computing f_{x̂_n | x_n = k}(x) requires an infeasible complexity for each received symbol [4]. We follow the approach of Wang and Poor [8] in deriving an approximate expression for f_{x̂_n | x_n = k}(x) and L_e^MMSE(x_n):

f_{x̂_n | x_n = +1}(x) ≈ N( μ_n^{(+1)}, σ_n^2 ),
f_{x̂_n | x_n = -1}(x) ≈ N( μ_n^{(-1)}, σ_n^2 ).

The time-varying first- and second-order statistics, μ_n^{(+1)}, μ_n^{(-1)}, and σ_n^2, are obtained using Eqs. (13) (MMSE-LE) or (21) (MMSE-DFE), respectively. The result for the MMSE-LE is

μ_n^{(+1)} = E{ x̂_n | x_n = +1 } = c_n^H ( E{w_n} + H E{x_n | x_n = +1} - H m_n^x + m_n^x s ) = c_n^H s,
μ_n^{(-1)} = E{ x̂_n | x_n = -1 } = -c_n^H s,
σ_n^2 = E{ x̂_n x̂_n^* | x_n = +1 } - |μ_n^{(+1)}|^2 = E{ x̂_n x̂_n^* | x_n = -1 } - |μ_n^{(-1)}|^2
      = c_n^H ( σ_w^2 I_N + H D_n H^H + (m_n^x)^2 s s^H ) c_n - |μ_n^{(+1)}|^2
      = c_n^H s - |μ_n^{(+1)}|^2,   (22)
using Eqs. (13) and (12). Similarly, the result for the MMSE-DFE is

μ_n^{(+1)} = c_n^H ( H E{x_n | x_n = +1} - H E{m̄_n | x_n = +1} + m_n^x s ) = c_n^H s,
μ_n^{(-1)} = -c_n^H s,
σ_n^2 = c_n^H ( σ_w^2 I_N + H D_n H^H ) c_n - |μ_n^{(+1)}|^2 = c_n^H s - |μ_n^{(+1)}|^2,
using Eqs. (21), (12), and (18). The SISO equalizer output approximation is finally

L_e^MMSE(x_n) = ln( f_{x̂_n | x_n = +1}(x̂_n) / f_{x̂_n | x_n = -1}(x̂_n) ) = 2 x̂_n μ_n^{(+1)} / σ_n^2,   (23)

using Eq. (5), which with μ_n^{(+1)} = s^H c_n and σ_n^2 = s^H c_n - |s^H c_n|^2 simplifies to

L_e^MMSE(x_n) = 2 x̂_n / ( 1 - s^H c_n ).   (24)

For the MMSE-LE, it is shown [8] that for any given prior information the signal to noise ratio (SNR), given by

SNR = |E{ x̂_n | x_n = +1 }|^2 / Var{ x̂_n | x_n = +1 } = |E{ x̂_n | x_n = -1 }|^2 / Var{ x̂_n | x_n = -1 } = |s^H c_n|^2 / σ_n^2,   (25)

is greater than the SNR with uniform priors as in Eq. (11), with a maximum if the prior information resembles the transmitted symbols x_n, i.e., L(x_n) in {-∞, +∞}.
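The equivalence of the two LLR forms in Eqs. (23) and (24) is a one-line check (function names are ours):

```python
# With mu = s^H c_n and sigma^2 = mu - mu^2 (Eqs. (22)-(24)),
# 2*x_hat*mu/sigma^2 equals 2*x_hat/(1-mu).
def llr_from_moments(x_hat: float, mu: float) -> float:
    sigma2 = mu - mu**2              # sigma_n^2 = mu_n^(+1) - |mu_n^(+1)|^2
    return 2.0 * x_hat * mu / sigma2

def llr_direct(x_hat: float, mu: float) -> float:
    return 2.0 * x_hat / (1.0 - mu)  # Eq. (24)

mu, x_hat = 0.7, 0.35                # hypothetical filter gain and estimate
assert abs(llr_from_moments(x_hat, mu) - llr_direct(x_hat, mu)) < 1e-12
```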
5 Implementation

In this section we examine the computational complexity of the algorithms for the SISO equalizer in presence of general input priors L(x_n) in R. The most expensive calculation for computing L_e^MMSE(x_n) is the inversion of an N x N matrix (see Eqs. (13) and (21)) for each received symbol z_n. A direct implementation of the inversion requires an order of complexity that is cubic in the matrix dimension. An exact low-complexity implementation requiring a square order and an approximate implementation requiring a linear order are derived in this section.
5.1 Exact Implementation

5.1.1 MMSE-LE
Given Eq. (13), to compute the estimate x̂_n with an MMSE-LE, the time-varying matrix to be inverted is

R_n ≜ σ_w^2 I_N + H D_n H^H + (m_n^x)^2 s s^H.   (26)

The matrices R_n and R_{n+1} overlap in an (N-1) x (N-1) submatrix, which we expose with the partitionings

R_n ≜ [ R_O   r_O ;  r_O^H  ρ_O ],   R_{n+1} ≜ [ ρ_N  r_N^H ;  r_N  R_N ],
s ≜ [ s_O^T  ψ_O ]^T ≜ [ ψ_N  s_N^T ]^T,   (27)

where s_O collects the first N-1 entries of s and ψ_O its last entry, and likewise ψ_N and s_N. The shared submatrix is isolated via

R_O ≜ R̃_O + (m_n^x)^2 s_O s_O^H,   R_N ≜ R̃_N + (m^x_{n+1})^2 s_N s_N^H,   (28)

where R̃_O and R̃_N are the corresponding submatrices of σ_w^2 I_N + H D_n H^H and σ_w^2 I_N + H D_{n+1} H^H, respectively, satisfying R̃_O = R̃_N. The inverses are partitioned accordingly:

R_n^{-1} ≜ A_n ≜ [ A_O  a_O ;  a_O^H  α_O ],   R_{n+1}^{-1} ≜ A_{n+1} ≜ [ α_N  a_N^H ;  a_N  A_N ],   (29)
where we use the relationship that the inverse of a Hermitian matrix is also Hermitian. The inverse A_n is required to compute c_n and is therefore stored and updated at every time step. Next we derive a scheme to compute R̃_O^{-1} from A_n and then A_{n+1} from R̃_N^{-1} using the identity R̃_O = R̃_N.

The inverse of the submatrix R_O of R_n is expressed in terms of components of A_n by solving R_n A_n = I_N using Eqs. (28) and (29):

R_O A_O + r_O a_O^H = I_{N-1},   R_O a_O + r_O α_O = 0_{N-1}
  =>  R_O^{-1} = A_O - a_O a_O^H / α_O.   (30)

Applying the matrix inversion lemma to R̃_O = R_O - (m_n^x)^2 s_O s_O^H yields

R̃_O^{-1} = ( R_O - (m_n^x)^2 s_O s_O^H )^{-1}
          = R_O^{-1} + R_O^{-1} s_O s_O^H R_O^{-1} / ( (m_n^x)^{-2} - s_O^H R_O^{-1} s_O ).   (31)
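The rank-one update of Eq. (31) avoids a full re-inversion; a numerical check of this Sherman-Morrison step (numpy assumed, random but seeded test matrices):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4
A = rng.standard_normal((n, n))
R_O = A @ A.T + 5.0 * np.eye(n)          # symmetric positive definite
s_O = rng.standard_normal(n)
mx2 = 0.04                               # (m_n^x)^2, small enough to keep PD

R_O_inv = np.linalg.inv(R_O)
v = R_O_inv @ s_O
# Eq. (31): inverse of R_O - mx2 * s_O s_O^H as a rank-one correction.
tilde_inv = R_O_inv + np.outer(v, v) / (1.0 / mx2 - s_O @ v)

direct = np.linalg.inv(R_O - mx2 * np.outer(s_O, s_O))
assert np.allclose(tilde_inv, direct)
```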
The expression yielding c_n in Eq. (13) can be written using Eqs. (27) and (29) as

c_n = A_n s = [ A_O s_O + ψ_O a_O ;  a_O^H s_O + ψ_O α_O ].   (32)

We introduce the vector

v_O ≜ R_O^{-1} s_O = A_O s_O - a_O ( a_O^H s_O ) / α_O

to simplify notation and obtain

R̃_O^{-1} = R_O^{-1} + v_O v_O^H / ( (m_n^x)^{-2} - s_O^H v_O ).   (33)

The partitioning for R_{n+1} in Eq. (28) gives an expression to compute R_N^{-1} similar to Eq. (31):
v_N ≜ R̃_N^{-1} s_N,
R_N^{-1} = ( R̃_N + (m^x_{n+1})^2 s_N s_N^H )^{-1}
         = R̃_N^{-1} - (m^x_{n+1})^2 v_N v_N^H / ( 1 + (m^x_{n+1})^2 s_N^H v_N ),   (34)
where the vector v_N was introduced. Using the partitioning for R_{n+1} and A_{n+1} in Eq. (29), we express A_N, a_N, and α_N in terms of R_N, r_N, and ρ_N:

r'_N ≜ R_N^{-1} r_N,
α_N = 1 / ( ρ_N - r_N^H r'_N ),
a_N = -α_N r'_N,
A_N = R_N^{-1} + α_N r'_N r'^H_N,   (35)

where we ordered the equations to optimize the computation by using the quantity r'_N and already computed components of A_{n+1}. Similar to Eq. (32) we express the parameter vector c_{n+1} using Eq. (27) and Eq. (29) as
aN
= aN
=
~N
R
= vN
=
sN
aH
N
aN
aN
sN
:
sN
AN
(36)
sN
+ aH
N
sN
H
vN vN
H
(mxn+1 )2 + sN
1
H s
vN vN
N
1
(mxn+1 )2
0H
= aN (sN
+ sH
N
vN
x
1 + (mn+1 )2 sH
N vN
vN
vN
r N sN );
+ aN r0 N r0 H
N
+ aN
(r0
N r0 N :
11
0H
r
vN .
N N sN
sN
0
r
+ aN
N sN )
sN
(37)
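The bordering identities of Eq. (35), recovering the inverse of the grown matrix R_{n+1} from R_N^{-1} alone, can be verified numerically (numpy assumed, seeded random test data):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
B = rng.standard_normal((n, n))
R_N = B @ B.T + 3.0 * np.eye(n)                       # positive definite block
r_N = rng.standard_normal(n)
rho_N = float(r_N @ np.linalg.solve(R_N, r_N)) + 2.0  # keeps R_{n+1} PD

R_N_inv = np.linalg.inv(R_N)
r_p = R_N_inv @ r_N                        # r'_N
alpha = 1.0 / (rho_N - r_N @ r_p)          # alpha_N
a = -alpha * r_p                           # a_N
A_N = R_N_inv + alpha * np.outer(r_p, r_p)

R_next = np.block([[np.array([[rho_N]]), r_N[None, :]],
                   [r_N[:, None], R_N]])
direct = np.linalg.inv(R_next)
assert np.allclose(direct[0, 0], alpha)
assert np.allclose(direct[0, 1:], a)
assert np.allclose(direct[1:, 1:], A_N)
```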
The new components ρ_N and r_N of R_{n+1} follow from Eq. (26):

[ ρ_N ; r_N ] = ( σ_w^2 I_N + H D_{n+1} H^H ) [ 1  0_{1x(N-1)} ]^T + (m^x_{n+1})^2 ψ_N^* s.   (38)

The update algorithm is finished by computing the estimate x̂_{n+1} using Eq. (13) and finally L_e^MMSE(x_{n+1}) using Eq. (24). To bootstrap the time-recursive update algorithm, an initialization is required, e.g., by computing A_1 and c_1 directly using Eq. (13) at the starting time step n = 1.
The matrix-vector multiplication H m^x_{n+1} in Eq. (13) is a part of the convolution of the sequence {m^x_n} with the channel response h[n]. Hence, it can be implemented recursively by shifting the vector H m^x_n, yielding an O(M) complexity for this operation:

m^z_n = Σ_{k=-M1}^{M2} h_k m^x_{n-k},
H m^x_n = [ m^z_{n+N1}  m^z_{n+N1-1}  ...  m^z_{n-N2} ]^T,
H m^x_{n+1} = [ m^z_{n+1+N1}  m^z_{n+N1}  ...  m^z_{n+1-N2} ]^T,

since only one new mean value m^z_n has to be computed per received symbol. The quantity H D_{n+1} H^H [ 1  0_{1x(N-1)} ]^T in Eq. (38) is obtained by computing first D_{n+1} H^H [ 1  0_{1x(N-1)} ]^T and multiplying next with H, which is an O(M^2) operation. Overall we have a dominant O(N^2 + M^2) complexity per received symbol z_n, given the estimator filter length N and the channel length M.

The update step of the recursion algorithm is summarized in Figure 5.
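The O(M) sliding update of H m^x_n is simply a moving slice of the convolution; a short check (numpy assumed, hypothetical taps and priors):

```python
import numpy as np

# {m^z_n} = {h_k} * {m^x_n}: advancing n shifts the window by one entry,
# so only one new mean m^z has to be computed per received symbol.
h = np.array([0.3, 0.9, 0.3])                    # hypothetical taps h_{-M1}..h_{M2}
m_x = np.tanh(0.5 * np.array([0.4, -1.0, 2.0, 0.1, -0.7, 1.5, -0.2]))
m_z = np.convolve(m_x, h)                        # all window sums at once

N = 3                                            # demo window length
win_n = m_z[2:2 + N]                             # H m^x_n
win_n1 = m_z[3:3 + N]                            # H m^x_{n+1}
# Consecutive windows share N-1 entries; only one entry is new.
assert np.allclose(win_n[1:], win_n1[:-1])
```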
5.1.2 MMSE-DFE
The derivation of the recursive update algorithm for the MMSE-DFE is closely related to the MMSE-LE approach. We restrict ourselves to the setup

Nb = N2 + M2 = M - 1,   (39)

which is a natural choice, since the complete ISI caused by the M - 1 interfering symbols is included. However, the following update algorithm requires this choice.

Given Eqs. (21) and (39), the time-varying matrix to be inverted now is

R_n ≜ σ_w^2 I_N + H D_n H^H.
The matrices R_n and R_{n+1} are partitioned as in Eq. (27),

R_n ≜ [ R_O  r_O ;  r_O^H  ρ_O ],   R_{n+1} ≜ [ ρ_N  r_N^H ;  r_N  R_N ],

but the shared submatrix is now isolated via

R_O ≜ R̃_O + s_O s_O^H,   R_N ≜ R̃_N + (m^x_{n+1})^2 s_N s_N^H,   with R̃_O = R̃_N,   (40)

since the variance assigned to x_n in D_n changes from 1 (at time step n) to 0 (at time step n+1, after the decision x̂^d_n), while the variance assigned to x_{n+1} changes from 1 - (m^x_{n+1})^2 to 1.
Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w^2,
- equalizer parameters N1, N2,
- equalizer filter parameters c_{n-1} and inverse of the covariance matrix R_{n-1}^{-1} at time step n-1.

Initialization:
- compute M, N, H, s, s_O, s_N, ψ_N, m^x_{n-1}, m^x_n, and D_n,
- define variables c = c_{n-1}, A = R_{n-1}^{-1}.

Recursion step:
- Partition: A = [ A_O  a_O ;  a_O^H  α_O ],
- Update: form R_O^{-1} (Eq. (30)), v_O and R̃_O^{-1} = R̃_N^{-1} (Eqs. (32)-(33)), R_N^{-1} (Eq. (34)), the new components ρ_N and r_N (Eq. (38)), A_N, a_N, α_N (Eq. (35)), and c_n (Eqs. (36)-(37)),
- Assemble: A = [ α_N  a_N^H ;  a_N  A_N ].

Output:
- extrinsic information L_e^MMSE(x_n) = 2 c^H ( z_n - H m^x_n + m^x_n s ) / ( 1 - s^H c ),
- equalizer filter parameters c_n = c and inverse of the covariance matrix R_n^{-1} = A at time step n.

Figure 5: O(N^2 + M^2) recursive update step of the exact SISO equalizer implementation (MMSE-LE).
The new components ρ_N and r_N of R_{n+1} are obtained as

[ ρ_N ; r_N ] = ( σ_w^2 I_N + H D_{n+1} H^H ) [ 1  0_{1x(N-1)} ]^T.   (41)

Per received symbol only the quantities p_{0,n} and H [ 0_{1xN}  (x̂^d_n - m^x_n)  0_{1x(M-2)} ]^T have to be computed, which are O(M) operations. The quantity H D_{n+1} H^H [ 1  0_{1x(N-1)} ]^T in Eq. (41) is computed from right to left similar to the MMSE-LE approach, which is an O(M^2) operation. For the entire update algorithm we have O(N^2 + M^2) complexity per received symbol z_n.

The update step of the recursion algorithm is summarized in Figure 6.
5.2 Approximate Implementation
We now outline a scheme to compute x̂_n in Eqs. (13) and (21) approximately. The main idea of this approach is to not compute the filter parameter vector c_n for each received symbol z_n as a function of the input priors L(x_n). Instead, we use the time-invariant UP parameter vector c_UP, given in Eqs. (10) (MMSE-LE) and (20) (MMSE-DFE), which has to be computed only once. The effect of the prior information on the filter parameters is thus simply neglected. However, the prior information L(x_n) is still employed to compute the time-varying statistics E{x_n} and Cov(x_n, x_m) as in Eqs. (13) and (21). The estimate x̂_n with this linear estimator, called appMMSE-LE, is obtained as follows:

c_UP = ( σ_w^2 I_N + H H^H )^{-1} s,
x̂_n = c_UP^H ( z_n - H m^x_n + m^x_n s ),

which is an adaptation of the MMSE-LE solution in Eq. (13). The MMSE-DFE solution in Eq. (21) is modified to (appMMSE-DFE)

m̄_n = [ m^x_{n+M1+N1}  ...  m^x_n   x̂^d_{n-1}  ...  x̂^d_{n-Nb}   m^x_{n-Nb-1}  ...  m^x_{n-M2-N2} ]^T,
c_UP = ( σ_w^2 I_N + H diag([ 1_{1x(M1+N1+1)}  0_{1xNb}  1_{1x(M2+N2-Nb)} ]) H^H )^{-1} s,
x̂_n = c_UP^H ( z_n - H m̄_n + m^x_n s ),
x̂^d_n = h(x̂_n).
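A sketch of the appMMSE-LE above: one fixed UP filter, but time-varying means from the priors (numpy assumed; channel taps and priors are hypothetical, noise is omitted, and the priors are made perfect so the result is checkable):

```python
import numpy as np

h = np.array([0.3, 0.9, 0.3]); M1 = M2 = 1; N1 = N2 = 2
N, M = N1 + N2 + 1, M1 + M2 + 1
sigma_w2 = 0.1
H = np.zeros((N, N + M - 1))
for row in range(N):
    H[row, row:row + M] = h
u = np.zeros(N + M - 1); u[N1 + M1] = 1.0
s = H @ u
c_UP = np.linalg.solve(sigma_w2 * np.eye(N) + H @ H.T, s)  # computed once

x = np.array([1., -1., 1., 1., -1., -1., 1.])
m = np.tanh(0.5 * 50.0 * x)            # perfect priors for a checkable result
z = H @ x                              # noiseless for the demo
# Per symbol, only O(N + M) work remains: subtract means, add back m_n^x s.
x_hat = c_UP @ (z - H @ m + m[N1 + M1] * s)
assert np.sign(x_hat) == np.sign(x[N1 + M1])
```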
Input:
- received symbols z_n, prior information L(x_n), previous estimates x̂_i, i < n,
- channel and receiver characteristics h[n], M1, M2, σ_w^2,
- equalizer parameter N,
- equalizer forward filter parameters c_{n-1} and inverse of the covariance matrix R_{n-1}^{-1} at time step n-1.

Initialization:
- compute M, Nb = M - 1, N1 = N - M1 - 1, N2 = M1, H, s, s_O, s_N, m^x_n, and D_n,
- define variables c = c_{n-1}, A = R_{n-1}^{-1}.

Recursion step:
- Partition: A = [ A_O  a_O ;  a_O^H  α_O ],
- Update: form R_O^{-1}, R̃_O^{-1} = R̃_N^{-1}, R_N^{-1}, the new components ρ_N and r_N (Eq. (41)), A_N, a_N, α_N, and c_n, by applying Eqs. (30)-(37) with the partitioning of Eq. (40),
- Assemble: A = [ α_N  a_N^H ;  a_N  A_N ].

Output:
- extrinsic information L_e^MMSE(x_n) = 2 c^H ( z_n - H m̄_n + m^x_n s ) / ( 1 - s^H c ),
- equalizer forward filter parameters c_n = c and inverse of the covariance matrix R_n^{-1} = A at time step n.

Figure 6: O(N^2 + M^2) recursive update step of the exact SISO equalizer implementation (MMSE-DFE).
As derived in Sections 5.1.1 and 5.1.2, computing the term H m^x_n is an O(M) operation. The calculation of x̂_n and finally L_e^MMSE(x_n) is of order O(N + M) per received symbol z_n, besides the initial computational load to compute the parameter vector c_UP.

For the mapping from x̂_n to L_e^MMSE(x_n) we again approximate the pdfs f_{x̂_n | x_n = +1}(x) and f_{x̂_n | x_n = -1}(x) with a single Gaussian distribution each. The parameters μ_n^{(+1)}, μ_n^{(-1)}, and σ_n^2 are derived in a manner similar to Eqs. (22) and (23). For the appMMSE-LE and the appMMSE-DFE we obtain:
μ_n^{(+1)} = E{ x̂_n | x_n = +1 } = c_UP^H s = -E{ x̂_n | x_n = -1 } = -μ_n^{(-1)},
σ_n^2 = E{ x̂_n x̂_n^* | x_n = +1 } - |μ_n^{(+1)}|^2 = c_UP^H ( σ_w^2 I_N + H D_n H^H ) c_UP - |μ_n^{(+1)}|^2.

Since D_n ⪯ I_N, the variance with priors is upper-bounded by the UP variance:

c_UP^H ( σ_w^2 I_N + H D_n H^H ) c_UP - |μ_n^{(+1)}|^2 <= c_UP^H ( σ_w^2 I_N + H H^H ) c_UP - |μ_n^{(+1)}|^2,

where D_n ⪯ I_N means that I_N - D_n is a positive semidefinite matrix. The proof for the MMSE-DFE is similar.
We can also estimate σ_n^2 given a length-L sequence of symbol estimates [ x̂_1 ... x̂_L ]^T and prior information [ L(x_1) ... L(x_L) ]^T with the mean σ̂^2 = E_L{σ_n^2} over the distribution of the prior information. To compute σ̂^2, we can use the time average to approximate E_L{σ_n^2}:

σ̂^2 ≈ (1/L) ( Σ_{n=1, x̂_n >= 0}^{L} | μ^{(+1)} - x̂_n |^2 + Σ_{n=1, x̂_n < 0}^{L} | μ^{(-1)} - x̂_n |^2 ),

where we assumed that x_n = +1 was sent whenever x̂_n >= 0, and vice versa. The SISO equalizer output is obtained as follows:

L_e^MMSE(x_n) = 2 x̂_n μ_n^{(+1)} / σ_n^2   or   L_e^MMSE(x_n) = 2 x̂_n μ^{(+1)} / σ̂^2.   (42)
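The time-average variance estimate above can be sketched as follows (numpy assumed; μ and the estimates are hypothetical values):

```python
import numpy as np

mu = 0.8                                       # hypothetical mu^(+1) = s^H c_UP
x_hat = np.array([0.9, 0.7, -0.75, 0.85, -0.9, 0.6])

# Sort the estimates by their sign decision and average squared deviations
# from +mu or -mu, as in the time average above.
pos, neg = x_hat[x_hat >= 0], x_hat[x_hat < 0]
sigma2_hat = (np.sum((mu - pos) ** 2) + np.sum((-mu - neg) ** 2)) / len(x_hat)
assert sigma2_hat > 0.0

# The resulting extrinsic LLRs of Eq. (42): L_e = 2 * x_hat * mu / sigma2_hat.
L_e = 2.0 * x_hat * mu / sigma2_hat
assert np.all(np.sign(L_e) == np.sign(x_hat))
```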
Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w^2,
- equalizer parameters N1, N2.

Initialization:
- compute M, N, H, s, and m^x_n = tanh( L(x_n)/2 ) for (1 - M2 - N2) <= n <= (L + M1 + N1),
- compute the UP filter parameter vector and the mean parameter:
  c = ( σ_w^2 I_N + H H^H )^{-1} s,   μ = c^H s.

Compute estimates:
FOR n = 1 TO L DO
  m^x_n = [ m^x_{n+M1+N1}  ...  m^x_{n-M2-N2} ]^T,
  x̂_n = c^H ( z_n - H m^x_n + m^x_n s ),
END.

Compute output:
FOR n = 1 TO L DO L_e^MMSE(x_n) = 2 x̂_n μ / σ̂^2.

Figure 7: O(N + M) approximate SISO equalizer implementation (appMMSE-LE).
Input:
- received symbols z_n, prior information L(x_n),
- channel and receiver characteristics h[n], M1, M2, σ_w^2,
- equalizer parameters N, Nb = M - 1.

Initialization:
- compute N1 = N - M1 - 1, N2 = M1, H, s,
- compute m^x_n = tanh( L(x_n)/2 ) for (1 - M2 - N2) <= n <= (L + M1 + N1),
- compute the UP filter parameter vector and the mean parameter:
  c = ( σ_w^2 I_N + H diag([ 1_{1x(M1+N1+1)}  0_{1xNb}  1_{1x(M2+N2-Nb)} ]) H^H )^{-1} s,   μ = c^H s.

Compute estimates:
FOR n = 1 TO L DO
  m̄_n = [ m^x_{n+M1+N1}  ...  m^x_n   x̂^d_{n-1}  ...  x̂^d_{n-Nb}   m^x_{n-Nb-1}  ...  m^x_{n-M2-N2} ]^T,
  x̂_n = c^H ( z_n - H m̄_n + m^x_n s ),
  x̂^d_n = h(x̂_n),
END.

Estimate variance:
σ̂^2 = (1/L) ( Σ_{n=1, x̂_n >= 0}^{L} | μ - x̂_n |^2 + Σ_{n=1, x̂_n < 0}^{L} | μ + x̂_n |^2 ).

Compute output:
FOR n = 1 TO L DO L_e^MMSE(x_n) = 2 x̂_n μ / σ̂^2.

Figure 8: O(N + M) approximate SISO equalizer implementation (appMMSE-DFE).
6 Results

To obtain BER performance results, we simulated data transmission (10^7 bits) of i.i.d. BPSK modulated data symbols x_n over the length 11 channel with transfer function H(z). The SISO equalizer filter parameters were set to (N1 = 7, N2 = 7) for the linear equalizer and (N1 = 9, N2 = 5, Nb = 10) for the decision feedback equalizer.
Several cases were examined. In the first case, the receiver has no prior information (L(x_n) = 0, for all n) available. In the second case, prior information L(x_n) was generated according to the distribution

f_L(l) = { 1/20,  -10 <= l <= 10;
           0,     else. }
The signal-to-noise ratio is defined as

Es/N0 ≜ 10 log10( E{|z_n|^2} / N0 ) = 10 log10( E{|x_n|^2} Σ_{k=-M1}^{M2} |h_k|^2 / N0 )
      = 10 log10( Σ_{k=-M1}^{M2} |h_k|^2 / (2 σ_w^2) ) dB.

A reference receiver using the prior information only decides

x̂_n = { +1,  L(x_n) >= 0;
         -1,  L(x_n) < 0. }
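The Es/N0 definition above in a one-function sketch (the tap values are hypothetical, chosen to have unit energy):

```python
import math

# Es/N0 in dB from the channel taps and noise variance (N0 = 2*sigma_w^2),
# for unit-energy BPSK symbols, E{|x_n|^2} = 1.
def es_over_n0_db(h, sigma_w2):
    return 10.0 * math.log10(sum(hk * hk for hk in h) / (2.0 * sigma_w2))

# A unit-energy channel at sigma_w^2 = 0.05 gives 10 dB.
h = [0.5, 0.5, 0.5, 0.5]                  # sum of squares = 1
assert abs(es_over_n0_db(h, 0.05) - 10.0) < 1e-9
```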
Figure 9: BER performance of equalization using prior information. References: (—) MAP detector, (x—) prior information only. Proposed algorithms: (·—) MMSE-LE, (o·—) appMMSE-LE, (- -) MMSE-DFE, (o- -) appMMSE-DFE.
A Linear MMSE Estimation

The linear MMSE estimator forms X̂_n from the observation Z_n, which is statistically related to X_n via p(Z_n | X_n):

X̂_n = c_n^H Z_n + d_n,   ( c_n, d_n ) = argmin_{c in C^N, d in C} E{ |D_n|^2 },   (43)

where D_n ≜ X̂_n - X_n is the estimation error. Setting the derivatives of E{|D_n|^2} with respect to the real and imaginary parts of c_n and d_n to zero yields the conditions

∂E{|D_n|^2} / ∂Re{c_n} = 0_N,   ∂E{|D_n|^2} / ∂Im{c_n} = 0_N,
∂E{|D_n|^2} / ∂Re{d_n} = 2 E{Re{D_n}} = 0,   ∂E{|D_n|^2} / ∂Im{d_n} = 2 E{Im{D_n}} = 0.

The first two and last two conditions can be written more compactly as 2 E{D_n Z_n^*} = 0_N and 2 E{D_n} = 0. Solving this equation system gives expressions for the parameters c_n and d_n:

c_n = Cov(Z_n, Z_n)^{-1} Cov(Z_n, X_n),
d_n = E{X_n} - c_n^H E{Z_n}.   (44)
References

[1] R. Chang and J. Hancock, "On receiver structures for channels having memory," IEEE Transactions on Information Theory, vol. IT-12, pp. 463-468, October 1966.

[2] L. Bahl et al., "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, pp. 284-287, March 1974.

[3] G. D. Forney, "Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference," IEEE Transactions on Information Theory, vol. 18, pp. 363-378, May 1972.

[4] M. Tüchler, "Iterative equalization using priors," Master's thesis, University of Illinois at Urbana-Champaign, U.S.A., 2000.

[5] J. Nelson, A. Singer, and R. Koetter, "Linear iterative turbo-equalization (LITE) for dual channels," Proc. of the Thirty-third Asilomar Conf. on Signals, Systems, and Computers, 1999.

[6] R. Graham, D. Knuth, and O. Patashnik, Concrete Mathematics. Reading, Massachusetts: Addison-Wesley, 1994.

[7] J. Smee and N. Beaulieu, "New methods for evaluating equalizer error rate performance," IEEE Proceedings of the 45th Vehicular Technology Conference, vol. 1, pp. 87-91, 1995.

[8] X. Wang and H. Poor, "Turbo multiuser detection and equalization for coded CDMA in multipath channels," IEEE International Conference on Universal Personal Communications, vol. 2, pp. 1123-1127, 1998.

[9] S. Haykin, Adaptive Filter Theory, 3rd Edition. Upper Saddle River, New Jersey: Prentice Hall, 1996.

[10] H. Poor, An Introduction to Signal Detection and Estimation. New York: Springer Verlag, 1994.