
Chapter 9: Gaussian channel

University of Illinois at Chicago ECE 534, Natasha Devroye


Chapter 9 outline

• Definitions

• Capacity of Gaussian noise channels: achievability and converse

• Bandlimited channels

• Parallel Gaussian channels

• Colored Gaussian noise channels

• Gaussian channels with feedback



Motivation

• Our goal is to determine the capacity of an AWGN channel

[Figure: fading AWGN channel — the input X is scaled by a fading coefficient h and Gaussian noise N ~ N(0, P_N) is added, giving Y = hX + N; input and output waveforms are sketched over time.]



Motivation

• Our goal is to determine the capacity of an AWGN channel

[Figure: the same fading AWGN channel as above — Y = hX + N with N ~ N(0, P_N).]

$$C = \frac{1}{2}\log\frac{|h|^2 P + P_N}{P_N} = \frac{1}{2}\log(1 + \mathrm{SNR}) \quad \text{(bits/channel use)}$$
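A quick numeric check of this formula (a sketch; h, P, and P_N take illustrative values, not ones from the slide):

```python
import math

def awgn_capacity(h, P, PN):
    """Capacity of Y = hX + N, N ~ N(0, PN), input power constraint P (bits/channel use)."""
    snr = (abs(h) ** 2) * P / PN
    return 0.5 * math.log2(1 + snr)

# Illustrative values: unit fading, SNR = 15 dB.
print(awgn_capacity(h=1.0, P=10 ** 1.5, PN=1.0))  # ~2.51 bits/channel use
```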



Definitions

Can capacity be infinite?




Thought experiment: 1 bit over AWGN channel

• Send 1 bit with power constraint P. How would you do it, and what is the
associated probability of error Pe?
[Figure: AWGN channel with noise variance N and input power constraint P, followed by the equivalent binary symmetric channel: 0 → 0 and 1 → 1 with probability 1 − f, crossover with probability f.]

• Turn a Gaussian channel into a discrete binary symmetric channel with crossover probability Pe!
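A minimal sketch of one natural way to do it (antipodal signaling, an assumption rather than something stated on the slide): send x = +√P for bit 1 and x = −√P for bit 0, decode by the sign of y = x + n, giving crossover probability f = Pe = Q(√(P/N)).

```python
import math

def q_function(x):
    """Tail probability of a standard Gaussian, Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bsc_crossover(P, N):
    """Pe for one bit sent as x = +sqrt(P) or -sqrt(P) over Gaussian noise of variance N,
    decoded by the sign of y = x + n."""
    return q_function(math.sqrt(P / N))

print(bsc_crossover(P=4.0, N=1.0))  # Q(2) ~ 0.0228
```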



Definitions - information capacity

[Figure: AWGN channel — Gaussian noise of variance N is added to an input with power constraint P.]
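Filling this slide's gap with the standard definition (it matches the chapter summary, eq. 9.163): the information capacity of the power-constrained Gaussian channel is

$$C = \max_{f(x)\,:\,E[X^2] \le P} I(X; Y) = \frac{1}{2}\log\Bigl(1 + \frac{P}{N}\Bigr) \ \text{bits per channel use.}$$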
Definitions - Gaussian code

[Figure: W ∈ {1, …, M} → Encoder → channel (+ noise) → Decoder → Ŵ ∈ {1, …, M}]
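A sketch of what the figure stands for (the standard setup; the feedback variant is spelled out later in these slides): a (2^{nR}, n) code is a set of codewords x^n(1), …, x^n(2^{nR}), one per message W, each satisfying the power constraint below, together with a decoding rule producing Ŵ from Y^n:

$$\frac{1}{n}\sum_{i=1}^{n} x_i^2(w) \le P, \qquad w = 1, 2, \ldots, 2^{nR}.$$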



Definitions: achievable rate and capacity



Intuition about why it works - sphere packing
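A sketch of the usual sphere-packing count (the standard heuristic, not written out on the slide): received sequences Y^n lie in a ball of radius √(n(P+N)), each codeword owns a noise ball of radius √(nN), so the number of essentially non-overlapping noise balls — and hence of decodable codewords — is at most about

$$\frac{\bigl(\sqrt{n(P+N)}\,\bigr)^{n}}{\bigl(\sqrt{nN}\,\bigr)^{n}} = \Bigl(\frac{P+N}{N}\Bigr)^{n/2} = 2^{\,n \cdot \frac{1}{2}\log\left(1 + \frac{P}{N}\right)},$$

matching the capacity formula.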





Channel coding: achievability

• We will prove achievability, then the converse

• Need concepts of typical sets

• Need idea that Gaussians maximize entropy for a given variance constraint
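A quick numeric illustration of that last point (a sketch, with an arbitrary variance): the closed-form differential entropies of a Gaussian and of a uniform density with the same variance.

```python
import math

var = 2.0  # any fixed variance

# Differential entropies in bits.
h_gaussian = 0.5 * math.log2(2 * math.pi * math.e * var)  # (1/2) log(2*pi*e*var)
h_uniform = 0.5 * math.log2(12 * var)                     # uniform on [-a, a] with a^2/3 = var

print(h_gaussian, h_uniform, h_gaussian > h_uniform)      # Gaussian has the larger entropy
```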



Typical sets



Properties of jointly typical sets



Achievability ⇐



Converse

[Figure: W ∈ {1, …, M} → Encoder → channel (+ noise) → Decoder → Ŵ ∈ {1, …, M}]



Bandlimited Gaussian Channels

[Figure: channel impulse response h(t) and ideal bandlimited frequency response H(ω), supported on [−W, W].]





Bandlimited Gaussian Channel
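The result these bandlimited-channel slides build to, as stated in the chapter summary (eq. 9.164), for bandwidth W, two-sided noise power spectral density N_0/2, and signal power P:

$$C = W \log\Bigl(1 + \frac{P}{N_0 W}\Bigr) \ \text{bits per second.}$$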



Example: telephone channel

• Telephone signals are bandlimited to 3300 Hz.

• The SNR is 33 dB.

• What is capacity of a telephone line?
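A sketch of the computation (interpreting the numbers above as W = 3300 Hz and P/(N_0 W) = 33 dB):

```python
import math

W = 3300.0                   # bandwidth in Hz
snr_db = 33.0
snr = 10 ** (snr_db / 10)    # ~1995 in linear terms

C = W * math.log2(1 + snr)   # bits per second
print(C)                     # roughly 3.6e4 bits/s, i.e. about 36 kb/s
```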



Parallel Gaussian channels

Question: how to distribute power across the parallel channels?

[Figure 9.3: Parallel Gaussian channels — Y_j = X_j + Z_j for j = 1, …, k, with independent noises Z_j ~ N(0, N_j).]

(From Cover & Thomas, Section 9.4.) We calculate the distribution that achieves the information capacity for this channel. The fact that the information capacity is the supremum of achievable rates can be proved by methods identical to those in the proof of the capacity theorem for single Gaussian channels and will be omitted. Since Z_1, Z_2, …, Z_k are independent,

\begin{align}
I(X_1, \ldots, X_k; Y_1, \ldots, Y_k)
&= h(Y_1, \ldots, Y_k) - h(Y_1, \ldots, Y_k \mid X_1, \ldots, X_k) \notag \\
&= h(Y_1, \ldots, Y_k) - h(Z_1, \ldots, Z_k \mid X_1, \ldots, X_k) \notag \\
&= h(Y_1, \ldots, Y_k) - h(Z_1, \ldots, Z_k) \tag{9.68} \\
&= h(Y_1, \ldots, Y_k) - \sum_i h(Z_i) \tag{9.69} \\
&\le \sum_i \bigl[ h(Y_i) - h(Z_i) \bigr] \tag{9.70} \\
&\le \sum_i \frac{1}{2} \log\Bigl(1 + \frac{P_i}{N_i}\Bigr), \tag{9.71}
\end{align}

where P_i = E[X_i^2] and \sum_i P_i = P.
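A sketch of the answer to the question above (consistent with the water-filling figure and eq. 9.165 in the summary, not a full derivation): maximizing \sum_i \frac{1}{2}\log(1 + P_i/N_i) over P_i ≥ 0 subject to \sum_i P_i = P with a Lagrange multiplier gives

$$P_i = (\nu - N_i)^+, \qquad \text{where } \nu \text{ is chosen so that } \sum_i (\nu - N_i)^+ = P;$$

power is poured over the noise levels up to a common water level ν, and channels whose noise exceeds ν receive no power.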




Parallel Gaussian channels



[Figure 9.4: Water-filling for parallel channels — powers P_1 and P_2 fill the gaps above noise levels N_1 and N_2 up to a common water level; channel 3, whose noise level N_3 sits above the water level, receives no power.]


Waterfilling


[Figure 9.4 again (water-filling over channels 1–3), annotated with the question: C = ?]
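A minimal numeric sketch of the water-filling allocation drawn above (the noise levels and power budget are illustrative assumptions, not values from the slide; bisection on the water level ν enforces Σ_i (ν − N_i)^+ = P):

```python
import math

def water_fill(noise, total_power, iters=100):
    """Water-filling: P_i = max(nu - N_i, 0) with sum_i P_i = total_power."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):                        # bisection on the water level nu
        nu = (lo + hi) / 2.0
        if sum(max(nu - n, 0.0) for n in noise) > total_power:
            hi = nu
        else:
            lo = nu
    return [max(nu - n, 0.0) for n in noise]

noise = [1.0, 2.0, 5.0]   # N1, N2, N3 (illustrative)
P = 4.0                   # total power budget (illustrative)
powers = water_fill(noise, P)
C = sum(0.5 * math.log2(1 + p / n) for p, n in zip(powers, noise))
print(powers, C)          # here channel 3 gets no power, as in Figure 9.4
```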


Colored Gaussian noise



What does white noise correspond to?







Colored Gaussian noise - optimal powers
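A minimal numeric sketch of the optimal powers for colored noise, assuming the eigenvalue water-filling characterization stated in the chapter summary (eq. 9.166): decorrelate the noise, then water-fill over the eigenvalues λ_i of K_Z.

```python
import numpy as np

def colored_noise_capacity(KZ, total_power, iters=100):
    """Water-fill over the eigenvalues of the noise covariance KZ (cf. eq. 9.166)."""
    lam = np.linalg.eigvalsh(KZ)                  # noise eigenvalues (ascending, positive)
    lo, hi = float(lam.min()), float(lam.max()) + total_power
    for _ in range(iters):                        # bisection for the water level nu
        nu = (lo + hi) / 2.0
        if np.maximum(nu - lam, 0.0).sum() > total_power:
            hi = nu
        else:
            lo = nu
    powers = np.maximum(nu - lam, 0.0)
    n = len(lam)
    C = (0.5 * np.log2(1 + powers / lam)).sum() / n   # bits per transmission
    return powers, C

# Illustrative 2x2 noise covariance (an assumption, not from the slides).
KZ = np.array([[2.0, 1.0], [1.0, 2.0]])           # eigenvalues 1 and 3
powers, C = colored_noise_capacity(KZ, total_power=2.0)
print(powers, C)
```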







Gaussian channels with feedback
[Figure 9.6: Gaussian channel with feedback — Y_i = X_i + Z_i, with the input allowed to depend on past outputs.]

(From Cover & Thomas, Section 9.6.) The feedback allows the input of the channel to depend on the past values of the output.

A (2^{nR}, n) code for the Gaussian channel with feedback consists of a sequence of mappings x_i(W, Y^{i−1}), where W ∈ {1, 2, …, 2^{nR}} is the input message and Y^{i−1} is the sequence of past values of the output. Thus, x(W, ·) is a code function rather than a codeword. In addition, we require that the code satisfy a power constraint,

$$\frac{1}{n} E\!\left[\sum_{i=1}^{n} x_i^2(w, Y^{i-1})\right] \le P, \qquad w \in \{1, 2, \ldots, 2^{nR}\}, \tag{9.99}$$

where the expectation is over all possible noise sequences.

(Slide note: time-varying = channel with memory!)

We characterize the capacity of the Gaussian channel in terms of the covariance matrices of the input X and the noise Z. Because of the feedback, X^n and Z^n are not independent; X_i depends causally on the past values of Z. In the next section we prove a converse for the Gaussian channel with feedback and show that we achieve capacity if we take X to be Gaussian.

We now state an informal characterization of the capacity of the channel with and without feedback.

1. With feedback. The capacity C_{n,FB} in bits per transmission of the time-varying Gaussian channel with feedback is

$$C_{n,\mathrm{FB}} = \max_{\frac{1}{n}\operatorname{tr}(K_X^{(n)}) \le P} \frac{1}{2n} \log \frac{|K_{X+Z}^{(n)}|}{|K_Z^{(n)}|}. \tag{9.100}$$






Gaussian channels with feedback

• But how much does feedback really give you?

• Let's find bounds which relate the capacity with feedback to the capacity without feedback.

• To do this we'll need some technical lemmas ...








Does feedback increase capacity?

• In a discrete memoryless channel?

• In an additive white Gaussian noise channel?

• In a colored Gaussian noise channel?


SUMMARY

Maximum entropy. $\max_{E X^2 = \alpha} h(X) = \frac{1}{2}\log 2\pi e \alpha$.

Gaussian channel. $Y_i = X_i + Z_i$; $Z_i \sim N(0, N)$; power constraint $\frac{1}{n}\sum_{i=1}^n x_i^2 \le P$; and

$$C = \frac{1}{2}\log\Bigl(1 + \frac{P}{N}\Bigr) \ \text{bits per transmission.} \tag{9.163}$$

Bandlimited additive white Gaussian noise channel. Bandwidth $W$; two-sided power spectral density $N_0/2$; signal power $P$; and

$$C = W \log\Bigl(1 + \frac{P}{N_0 W}\Bigr) \ \text{bits per second.} \tag{9.164}$$

Water-filling (k parallel Gaussian channels). $Y_j = X_j + Z_j$, $j = 1, 2, \ldots, k$; $Z_j \sim N(0, N_j)$; $\sum_{j=1}^{k} X_j^2 \le P$; and

$$C = \sum_{i=1}^{k} \frac{1}{2}\log\Bigl(1 + \frac{(\nu - N_i)^+}{N_i}\Bigr), \tag{9.165}$$

where $\nu$ is chosen so that $\sum_i (\nu - N_i)^+ = nP$.

Additive nonwhite Gaussian noise channel. $Y_i = X_i + Z_i$; $Z^n \sim N(0, K_Z)$; and

$$C = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\log\Bigl(1 + \frac{(\nu - \lambda_i)^+}{\lambda_i}\Bigr), \tag{9.166}$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $K_Z$ and $\nu$ is chosen so that $\sum_i (\nu - \lambda_i)^+ = P$.

Capacity without feedback

$$C_n = \max_{\operatorname{tr}(K_X) \le nP} \frac{1}{2n}\log\frac{|K_X + K_Z|}{|K_Z|}. \tag{9.167}$$

Capacity with feedback

$$C_{n,\mathrm{FB}} = \max_{\operatorname{tr}(K_X) \le nP} \frac{1}{2n}\log\frac{|K_{X+Z}|}{|K_Z|}. \tag{9.168}$$

Feedback bounds

$$C_{n,\mathrm{FB}} \le C_n + \tfrac{1}{2}. \tag{9.169}$$

$$C_{n,\mathrm{FB}} \le 2 C_n. \tag{9.170}$$

PROBLEMS

9.1 Channel with two independent looks at Y. Let Y1 and Y2 be conditionally independent and conditionally identically distributed given X.
(a) Show that I(X; Y1, Y2) = 2 I(X; Y1) − I(Y1; Y2).
(b) Conclude that the capacity of the channel X → (Y1, Y2) is less than twice the capacity of the channel X → Y1.