
KEEE494: 2nd Semester 2009, Week 12

MMSE Detection for Spatial Multiplexing MIMO (SM-MIMO)


In spatial multiplexing MIMO, the transmit data from the $N_T$ transmit antennas can be written in vector form as
$$\mathbf{x} = [x_1, x_2, \ldots, x_{N_T}]^T.$$
The channel matrix $\mathbf{G}$ for $N_T$ transmit and $N_R$ receive antennas is an $N_R \times N_T$ matrix. The received signal at the $l$th receive antenna can be written as
$$r_l = g_{l1} x_1 + g_{l2} x_2 + \cdots + g_{lN_T} x_{N_T} + n_l$$
where $n_l$ is the AWGN with variance $\sigma^2$. We can construct the received signal vector as
$$\mathbf{r} = [r_1, r_2, \ldots, r_{N_R}]^T.$$
Now we want to detect the transmitted signal vector $\mathbf{x}$ by observing $\mathbf{r}$, assuming that the channel gain matrix $\mathbf{G}$ is perfectly known. In this class we consider the minimum mean square error (MMSE) criterion for the detection of $\mathbf{x}$.
Figure 1: Minimum mean square error (MMSE) detection for spatial multiplexing MIMO, where the channel vector $\mathbf{g}_l$ consists of the channels from the $N_T$ transmit antennas to the $l$th receive antenna.
Find the matrix $\mathbf{W}$ that achieves
$$\min_{\mathbf{W}} E\left[\left\|\mathbf{x} - \mathbf{W}^H \mathbf{r}\right\|^2\right]$$
where $\mathbf{W}$ is an $N_R \times N_T$ matrix.
The solution is shown to be given by
$$\mathbf{W}^H = \left(\mathbf{G}^H \mathbf{G} + \frac{1}{\mathrm{SNR}}\,\mathbf{I}_{N_T}\right)^{-1} \mathbf{G}^H,$$
which is known as the Wiener solution. In the above equation, $\mathbf{I}_{N_T}$ is the $N_T \times N_T$ identity matrix and $\mathrm{SNR}$ is the signal-to-noise ratio of each data stream.
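As a quick illustration, the following is a minimal numerical sketch of this detector in Python/numpy (not part of the original notes; the antenna counts, the 20 dB SNR, unit-power QPSK symbols, and the i.i.d. Rayleigh channel are all assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, N_R = 2, 4          # transmit / receive antennas (illustrative)
SNR = 100.0              # 20 dB per data stream (assumed)
sigma2 = 1.0 / SNR       # noise variance for unit-power symbols

# i.i.d. Rayleigh-fading channel: CN(0,1) entries, N_R x N_T
G = (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# Unit-power QPSK symbols, one per transmit antenna
x = (rng.choice([-1, 1], N_T) + 1j * rng.choice([-1, 1], N_T)) / np.sqrt(2)

# Received vector r = G x + n
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_R)
                           + 1j * rng.standard_normal(N_R))
r = G @ x + n

# MMSE detector: W^H = (G^H G + I_{N_T}/SNR)^{-1} G^H, applied as x_hat = W^H r
WH = np.linalg.solve(G.conj().T @ G + np.eye(N_T) / SNR, G.conj().T)
x_hat = WH @ r  # soft symbol estimates; slice to the QPSK alphabet for decisions
print(np.round(x_hat, 3), x)
```

Note that `np.linalg.solve` is used rather than forming the matrix inverse explicitly, which is the usual numerically preferable choice.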
General solution of minimum mean square error (MMSE) detection
Problem description: Let us assume we have a desired signal $d[n]$, and that $d[n]$ is somehow corrupted into $u[n]$. Then, observing $u[n]$ and using the linear filter $\mathbf{W}$, we want to detect $d[n]$. The problem is illustrated in Fig. 2.

$d[n]$ : desired signal
$u[n]$ : input signal to the MMSE (or Wiener) filter, which contains the desired signal $d[n]$
$y[n]$ : estimate of $d[n]$, i.e., $\hat{d}[n]$

As an example, $u[n] = d[n] + z[n]$ for an AWGN channel and $u[n] = h[n]d[n] + z[n]$ for a fading channel.
Figure 2: Problem description of the Wiener filter.
Now we want to find the optimum weight vector $\mathbf{W} = [w_0, w_1, \ldots, w_{K-1}]^T$ such that the mean-squared error is minimized. The error is defined as
$$e[n] = d[n] - y[n]$$
where
$$y[n] = \sum_{k=0}^{K-1} w_k^*\, u[n-k]$$
with $w_k = a_k + j b_k$. Now let us define the cost function $J$ as
$$J = E[e[n]\, e^*[n]] = E[|e[n]|^2].$$
The MMSE solution for $\mathbf{W}$ is given as
$$\mathbf{W}_{\mathrm{opt}} = \arg\min_{\mathbf{W}} J.$$
Solution:
$$e[n] = d[n] - y[n] = d[n] - \sum_{k=0}^{K-1} (a_k - j b_k)\, u[n-k].$$
Let us define a gradient operator $\nabla_k$ with respect to the real part $a_k$ and the imaginary part $b_k$ as
$$\nabla_k = \frac{\partial}{\partial a_k} + j\,\frac{\partial}{\partial b_k}, \qquad k = 0, 1, \ldots, K-1.$$
Applying the operator $\nabla_k$ to the cost function $J$ yields
$$\nabla_k J = \frac{\partial J}{\partial a_k} + j\,\frac{\partial J}{\partial b_k}, \qquad k = 0, 1, \ldots, K-1.$$
Now the MMSE solution is obtained when
$$\nabla_k J = 0 \quad \text{for all } k = 0, 1, \ldots, K-1,$$
or equivalently
$$\nabla_k J = E\!\left[\frac{\partial e[n]}{\partial a_k}\, e^*[n] + \frac{\partial e^*[n]}{\partial a_k}\, e[n] + j\,\frac{\partial e[n]}{\partial b_k}\, e^*[n] + j\,\frac{\partial e^*[n]}{\partial b_k}\, e[n]\right].$$
Note that
$$\frac{\partial e[n]}{\partial a_k} = -u[n-k], \qquad \frac{\partial e[n]}{\partial b_k} = j\,u[n-k],$$
$$\frac{\partial e^*[n]}{\partial a_k} = -u^*[n-k], \qquad \frac{\partial e^*[n]}{\partial b_k} = -j\,u^*[n-k].$$
Using the above, we have
$$\nabla_k J = E\!\left[-u[n-k]\,e^*[n] - u^*[n-k]\,e[n] - u[n-k]\,e^*[n] + u^*[n-k]\,e[n]\right] = -2\,E\!\left[u[n-k]\,e^*[n]\right] = 0,$$
or equivalently
$$E[u[n-k]\,e^*[n]] = 0, \qquad k = 0, 1, \ldots, K-1,$$
i.e., the estimation error is orthogonal to (uncorrelated with) every sample the filter observes. Substituting $e^*[n] = d^*[n] - \sum_{l=0}^{K-1} w_l\, u^*[n-l]$ gives
$$E\!\left[u[n-k]\left(d^*[n] - \sum_{l=0}^{K-1} w_l\, u^*[n-l]\right)\right] = 0.$$
We can rewrite this as
$$E[u[n-k]\,d^*[n]] = \sum_{l=0}^{K-1} w_l\, E[u[n-k]\,u^*[n-l]],$$
which is called the Wiener-Hopf equation. Note that in the Wiener-Hopf equation, the left-hand side is the cross-correlation between $u[n-k]$ and $d[n]$ at lag $k$, and the right-hand side involves the auto-correlation function of the filter input at lag $l-k$. Let us express these as
$$\phi_{uu}(l-k) = E[u[n-k]\,u^*[n-l]]$$
$$\phi_{ud}(k) = E[u[n-k]\,d^*[n]].$$
Hence, the Wiener-Hopf equation can be rewritten as
$$\sum_{l=0}^{K-1} w_l\, \phi_{uu}(l-k) = \phi_{ud}(k), \qquad k = 0, 1, \ldots, K-1.$$
Let us define
$$\mathbf{u}[n] = [u[n], u[n-1], \ldots, u[n-K+1]]^T,$$
and define the correlation matrix $\mathbf{R}$ as
$$\mathbf{R} = \begin{bmatrix}
\phi_{uu}(0) & \phi_{uu}(1) & \cdots & \phi_{uu}(K-1) \\
\phi_{uu}^*(1) & \phi_{uu}(0) & \cdots & \phi_{uu}(K-2) \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{uu}^*(K-1) & \phi_{uu}^*(K-2) & \cdots & \phi_{uu}(0)
\end{bmatrix}$$
and
$$\mathbf{T} = E[\mathbf{u}[n]\,d^*[n]] = [\phi_{ud}(0), \phi_{ud}(1), \ldots, \phi_{ud}(K-1)]^T.$$
Then, the Wiener-Hopf equation can be rewritten as
$$\mathbf{R}\mathbf{W} = \mathbf{T}.$$
Finally, the Wiener solution is given as
$$\mathbf{W} = \mathbf{R}^{-1}\mathbf{T}.$$
Figure 3: Transversal linear Wiener filter.
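To make the solution concrete, here is a small numerical sketch (not part of the original notes; the filter length $K=4$, the white unit-power $d[n]$, and the AWGN example $u[n] = d[n] + z[n]$ from above are illustrative choices). It estimates $\mathbf{R}$ and $\mathbf{T}$ from sample averages, solves $\mathbf{W} = \mathbf{R}^{-1}\mathbf{T}$, and checks the orthogonality condition $E[u[n-k]\,e^*[n]] = 0$ derived earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 100_000, 4  # sample size and filter length (illustrative values)

# AWGN example from above: u[n] = d[n] + z[n]; everything is real-valued
# here, so the conjugates drop out of the correlations.
d = rng.standard_normal(N)            # desired signal, unit power
u = d + 0.5 * rng.standard_normal(N)  # noisy observation

# Data matrix with U[n, k] = u[n - k]; drop the first K wrapped-around rows.
U = np.stack([np.roll(u, k) for k in range(K)], axis=1)[K:]
dd = d[K:]

R = U.T @ U / len(U)   # R[k, l] ~= phi_uu(l - k) = E[u[n-k] u[n-l]]
T = U.T @ dd / len(U)  # T[k]    ~= phi_ud(k)     = E[u[n-k] d[n]]

w = np.linalg.solve(R, T)  # Wiener solution W = R^{-1} T
e = dd - U @ w             # error e[n] = d[n] - y[n]

print(w)                         # ~[0.8, 0, 0, 0]: only lag 0 helps for white d
print(np.abs(U.T @ e / len(U)))  # orthogonality: E[u[n-k] e[n]] ~= 0
```

Since $d[n]$ is white with unit power and the noise variance is $0.25$, the closed form gives $w_0 = 1/1.25 = 0.8$ and zero for the remaining taps, which the sample-based solution reproduces.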
Now, in the MIMO case of interest, applying the Wiener solution gives
$$\hat{x}_1 = \sum_{k=1}^{N_R} w_{1k}^*\, r_k = \mathbf{w}_1^H \mathbf{r}$$
$$\hat{x}_2 = \sum_{k=1}^{N_R} w_{2k}^*\, r_k = \mathbf{w}_2^H \mathbf{r}$$
$$\vdots$$
$$\hat{x}_{N_T} = \sum_{k=1}^{N_R} w_{N_T k}^*\, r_k = \mathbf{w}_{N_T}^H \mathbf{r}$$
where
$$\mathbf{w}_l = [w_{l1}, w_{l2}, \ldots, w_{lN_R}]^T.$$
Then, from the Wiener solution, $\mathbf{w}_l$ is given as
$$\mathbf{w}_l = \mathbf{R}^{-1}\mathbf{T}_l.$$
Now let us find $\mathbf{R}$ and $\mathbf{T}_l$. Note that $\mathbf{R} = E[\mathbf{r}\mathbf{r}^H]$ is an $N_R \times N_R$ matrix given as
$$\mathbf{R} = \begin{bmatrix}
E[r_1 r_1^*] & E[r_1 r_2^*] & \cdots & E[r_1 r_{N_R}^*] \\
\vdots & \vdots & \ddots & \vdots \\
E[r_{N_R} r_1^*] & E[r_{N_R} r_2^*] & \cdots & E[r_{N_R} r_{N_R}^*]
\end{bmatrix}$$
where
$$r_k = g_{k1} x_1 + g_{k2} x_2 + \cdots + g_{kN_T} x_{N_T} + n_k = \mathbf{g}_k^T \mathbf{x} + n_k.$$
Hence,
$$E[r_k r_l^*] = E\!\left[(g_{k1}x_1 + \cdots + g_{kN_T}x_{N_T} + n_k)(g_{l1}^* x_1^* + \cdots + g_{lN_T}^* x_{N_T}^* + n_l^*)\right]$$
$$= E\!\left[g_{k1}g_{l1}^*|x_1|^2 + g_{k2}g_{l2}^*|x_2|^2 + \cdots + g_{kN_T}g_{lN_T}^*|x_{N_T}|^2 + g_{k1}g_{l2}^*\,x_1 x_2^* + \cdots + g_{kN_T}g_{l,N_T-1}^*\,x_{N_T} x_{N_T-1}^* + n_k n_l^*\right].$$
Since $E[x_i x_j^*] = 0$ for $i \neq j$ (the data streams are uncorrelated) and $\mathbf{g}_k$ and $\mathbf{g}_l$ are known, this can be rewritten as
$$E[r_k r_l^*] = g_{k1}g_{l1}^*\,E[|x_1|^2] + g_{k2}g_{l2}^*\,E[|x_2|^2] + \cdots + g_{kN_T}g_{lN_T}^*\,E[|x_{N_T}|^2] + \sigma^2\,\delta(k-l).$$
Note that $E[|x_l|^2] = P$ is the signal power for all $l$. Then,
$$E[r_k r_l^*] = P\left(g_{k1}g_{l1}^* + \cdots + g_{kN_T}g_{lN_T}^*\right) + \sigma^2\,\delta(k-l)
= P\,[g_{l1}^*, g_{l2}^*, \ldots, g_{lN_T}^*]\begin{bmatrix} g_{k1} \\ g_{k2} \\ \vdots \\ g_{kN_T} \end{bmatrix} + \sigma^2\,\delta(k-l)
= P\,(\mathbf{g}_l^H \mathbf{g}_k) + \sigma^2\,\delta(k-l).$$
In matrix form, this is simply $\mathbf{R} = P\,\mathbf{G}\mathbf{G}^H + \sigma^2\,\mathbf{I}_{N_R}$.
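This closed form is easy to check by simulation (a sketch, not part of the original notes; $P$, $\sigma^2$, the antenna counts, and QPSK signaling are arbitrary choices): averaging $\mathbf{r}\mathbf{r}^H$ over many symbol and noise realizations should approach $P\,\mathbf{G}\mathbf{G}^H + \sigma^2\,\mathbf{I}_{N_R}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N_T, N_R = 2, 3                   # antenna counts (illustrative)
P, sigma2, M = 1.0, 0.1, 200_000  # signal power, noise variance, trials (assumed)

# One fixed, known channel realization with i.i.d. CN(0,1) entries
G = (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# M realizations of power-P QPSK symbols and CN(0, sigma2) noise
X = (rng.choice([-1, 1], (N_T, M))
     + 1j * rng.choice([-1, 1], (N_T, M))) * np.sqrt(P / 2)
Z = (rng.standard_normal((N_R, M))
     + 1j * rng.standard_normal((N_R, M))) * np.sqrt(sigma2 / 2)
Rx = G @ X + Z  # each column is one received vector r

# Sample estimate of R = E[r r^H] versus the closed form P G G^H + sigma2 I
R_hat = Rx @ Rx.conj().T / M
R_closed = P * (G @ G.conj().T) + sigma2 * np.eye(N_R)
print(np.max(np.abs(R_hat - R_closed)))  # small, and shrinks as M grows
```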
On the other hand,
$$\mathbf{T}_l = E[\mathbf{r}\,x_l^*] = E\begin{bmatrix} r_1 x_l^* \\ r_2 x_l^* \\ \vdots \\ r_{N_R} x_l^* \end{bmatrix} = \begin{bmatrix} g_{1l}P \\ g_{2l}P \\ \vdots \\ g_{N_R l}P \end{bmatrix} = P\,\bar{\mathbf{g}}_l,$$
where $\bar{\mathbf{g}}_l = [g_{1l}, g_{2l}, \ldots, g_{N_R l}]^T$ denotes the $l$th column of $\mathbf{G}$; indeed, since $r_k = g_{k1}x_1 + g_{k2}x_2 + \cdots + g_{kN_T}x_{N_T} + n_k$, we have $E[r_k x_l^*] = g_{kl}P$. Now the MMSE solution is
$$\mathbf{w}_l = \mathbf{R}^{-1}\mathbf{T}_l = \left(P\begin{bmatrix}
\mathbf{g}_1^H\mathbf{g}_1 & \mathbf{g}_2^H\mathbf{g}_1 & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_1 \\
\mathbf{g}_1^H\mathbf{g}_2 & \mathbf{g}_2^H\mathbf{g}_2 & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_2 \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{g}_1^H\mathbf{g}_{N_R} & \mathbf{g}_2^H\mathbf{g}_{N_R} & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_{N_R}
\end{bmatrix} + \sigma^2\,\mathbf{I}_{N_R}\right)^{-1} P\,\bar{\mathbf{g}}_l$$
$$= \left(\begin{bmatrix}
\mathbf{g}_1^H\mathbf{g}_1 & \mathbf{g}_2^H\mathbf{g}_1 & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_1 \\
\mathbf{g}_1^H\mathbf{g}_2 & \mathbf{g}_2^H\mathbf{g}_2 & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_2 \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{g}_1^H\mathbf{g}_{N_R} & \mathbf{g}_2^H\mathbf{g}_{N_R} & \cdots & \mathbf{g}_{N_R}^H\mathbf{g}_{N_R}
\end{bmatrix} + \frac{\sigma^2}{P}\,\mathbf{I}_{N_R}\right)^{-1}\bar{\mathbf{g}}_l.$$
We can collect the optimal weight vectors into a matrix $\mathbf{W}$ as
$$\mathbf{W} = [\mathbf{w}_1\;\mathbf{w}_2\;\cdots\;\mathbf{w}_{N_T}] = \mathbf{G}\left(\mathbf{G}^H\mathbf{G} + \frac{1}{\mathrm{SNR}}\,\mathbf{I}_{N_T}\right)^{-1},$$
where $\mathrm{SNR} = P/\sigma^2$,
or
$$\mathbf{W}^H = \left(\mathbf{G}^H\mathbf{G} + \frac{1}{\mathrm{SNR}}\,\mathbf{I}_{N_T}\right)^{-1}\mathbf{G}^H,$$
which matches the expression stated at the beginning of these notes.
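As a final numerical sanity check (a sketch, not part of the original notes; all parameter values are assumed), the two forms above agree through the push-through identity $\mathbf{G}^H(\mathbf{G}\mathbf{G}^H + c\mathbf{I})^{-1} = (\mathbf{G}^H\mathbf{G} + c\mathbf{I})^{-1}\mathbf{G}^H$, and both match the column-by-column construction $\mathbf{w}_l = \mathbf{R}^{-1}\mathbf{T}_l$ derived above:

```python
import numpy as np

rng = np.random.default_rng(3)
N_T, N_R = 2, 4        # antenna counts (illustrative)
P, sigma2 = 1.0, 0.05  # assumed signal power and noise variance
SNR = P / sigma2

G = (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# Form 1: W = G (G^H G + I/SNR)^{-1}
W1 = G @ np.linalg.inv(G.conj().T @ G + np.eye(N_T) / SNR)

# Form 2: W obtained from W^H = (G^H G + I/SNR)^{-1} G^H
W2 = np.linalg.solve(G.conj().T @ G + np.eye(N_T) / SNR, G.conj().T).conj().T

# Column by column: w_l = R^{-1} T_l with R = P G G^H + sigma2 I and
# T_l = P g_l, where g_l is the lth column of G as in the derivation of T_l
R = P * (G @ G.conj().T) + sigma2 * np.eye(N_R)
W3 = np.linalg.solve(R, P * G)

print(np.allclose(W1, W2), np.allclose(W1, W3))  # True True
```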