Communication and Detection Theory: Lecture 1
Amos Lapidoth
ETH Zurich
© Amos Lapidoth 2017
Why Should I Take this Class?
This Class Is Not for You if:
If You Are Staying:
The Plan for Today
• Course information
• A point-to-point digital communication system.
• Functions, signals, and time-reversal.
• The inner product, orthogonality, and energy.
• The Fourier Transform (review):
• On 2π, f , ω, i, and j.
• Conjugate Symmetry
• Parseval’s Theorem
• The definition of bandwidth.
The Textbook
Amos Lapidoth, A Foundation in Digital Communication, Second Edition.
• Greater mathematical precision than the lecture.
• The lecture’s level suffices for the exam.
• The exercises, however, are at the course’s level.
• Open-book exam.
• No electronic devices allowed in the exam.
If you have seen some of the material before, now is the chance to
cover it in depth. I love questions!
Warning!
[Block diagram: a point-to-point digital communication system. Transmitter:
the source (a waveform, a file, etc.) feeds the source encoder (bits), then
the encryption encoder (encrypted bits), then the channel encoder (waveform),
and then the channel. Receiver: channel decoder, decryption decoder, source
decoder, and reconstruction.]
Functions
• A function or a mapping
u: A → B
Signals
Caution
• While u and u(·) denote functions, u(t) denotes the result of
applying u to t. If u is a real-valued signal then u(t) is a real
number!
• If x and y are signals, then
  x ⋆ y
denotes their convolution.
• The value of their convolution at time 17 is
  (x ⋆ y)(17).
• It is also perfectly fine to write x(t − 2).
Reflecting a Signal: ~x : t ↦ x(−t)
[Plots: a signal x(t) with its support near t = 1, and its reflection ~x(t),
mirrored about t = 0, with its support near t = −1.]
The Energy in a Real Signal: Ch. 3
The Energy in a Complex Signal
The Inner Product
We define the inner product ⟨u, v⟩ between the signals u and v as
  ⟨u, v⟩ ≜ ∫_{−∞}^{∞} u(t) v*(t) dt.
The Fourier Transform: Ch. 6
The Fourier Transform of an integrable signal x : ℝ → ℂ is the
mapping x̂ : ℝ → ℂ defined by
  x̂ : f ↦ ∫_{−∞}^{∞} x(t) e^{−i2πft} dt.
In contrast to the notation X(jω):
• we use i for √(−1);
• we use x̂ instead of X(·), so
• we can write things like x̂̂ = ~x,
• and reserve upper-case letters for random things.
• We use f instead of ω.
• But most important is the 2π.
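A quick numeric sanity check of this definition (not from the slides; the rect pulse, the grid, and the tolerance are illustrative assumptions): the FT of the unit rect should be the sinc.

```python
import numpy as np

# FT of the unit rect (|t| <= 1/2) should be f ↦ sin(πf)/(πf).
t = np.linspace(-5, 5, 20001)
dt = t[1] - t[0]
x = (np.abs(t) <= 0.5).astype(float)
f = 1.3                                      # an arbitrary test frequency
xhat = np.sum(x*np.exp(-2j*np.pi*f*t))*dt    # Riemann sum for the FT integral
assert abs(xhat - np.sinc(f)) < 1e-3         # np.sinc(f) = sin(πf)/(πf)
```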
The Inverse Fourier Transform
For integrable x̂, the signal can be recovered as
  x : t ↦ ∫_{−∞}^{∞} x̂(f) e^{i2πft} df.
Parseval’s Theorem
  ⟨u, v⟩ = ⟨û, v̂⟩
and
  ‖u‖₂ = ‖û‖₂.
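A discrete analogue of Parseval's Theorem can be checked with the FFT (a minimal sketch, not from the slides; the normalization 1/N is the DFT convention, not the continuous-time one):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(1024)
U = np.fft.fft(u)
# discrete Parseval: sum |u[n]|^2 = (1/N) sum |U[k]|^2
assert np.isclose(np.sum(np.abs(u)**2), np.sum(np.abs(U)**2)/1024)
```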
The Fourier Transform of Real Signals (1)
The Fourier Transform of Real Signals (2)
The Fourier Transform of Real Signals (3)
The Fourier Transform of Real Signals (4)
Since
  x̂ : f ↦ ∫_{−∞}^{∞} x(t) e^{−i2πft} dt,
we have, for real x,
  x̂(−f) = ∫_{−∞}^{∞} x(t) e^{−i2π(−f)t} dt
         = ∫_{−∞}^{∞} x(t) e^{i2πft} dt
         = ( ∫_{−∞}^{∞} ( x(t) e^{i2πft} )* dt )*
         = ( ∫_{−∞}^{∞} x*(t) e^{−i2πft} dt )*
         = ( ∫_{−∞}^{∞} x(t) e^{−i2πft} dt )*
         = x̂*(f).
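The DFT analogue of this conjugate symmetry can be verified numerically (an illustrative sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
X = np.fft.fft(x)
# for real x, X[N-k] = conj(X[k]) — the DFT analogue of x̂(−f) = x̂*(f)
assert np.allclose(X[1:][::-1], np.conj(X[1:]))
assert abs(X[0].imag) < 1e-9   # the DC term is real
```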
An Important Fourier Pair
[Figure: an important Fourier pair. The time-domain pulse has its first zero
at 1/α; the frequency-domain cutoff is γ, with δ = 2γβ and γ = 1/(2α).]
Bandwidth
We say that x is bandlimited to W Hz if
  x̂(f) = 0,  |f| > W.
[Plot: x̂(f), supported on the interval −W ≤ f ≤ W.]
Some Notes on Bandwidth
• Find the shortest symmetric interval containing all the
frequencies where x̂ is not zero.
• Half the length of this interval is the bandwidth.
• We seek a symmetric interval, but we only measure its part
where the frequencies are positive.
• Not to be confused with bandwidth around the carrier
frequency fc .
Another Example of Bandwidth
Not to Be Confused with
Bandwidth around a Carrier Frequency
More Precise Definition of Bandwidth
The Ideal Unit-Gain Lowpass Filter
Here
  LPF_{Wc}(t) ≜ 2Wc sin(2πWc t)/(2πWc t) if t ≠ 0, and 2Wc if t = 0,
for t ∈ ℝ.
[Plot: the frequency response L̂PF_{Wc}(f), equal to 1 for |f| ≤ Wc and 0
otherwise.]
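The unit-gain property can be checked numerically (a sketch, not from the slides; the truncation window and tolerances are illustrative assumptions):

```python
import numpy as np

def lpf(t, Wc):
    # impulse response of the ideal unit-gain lowpass filter of cutoff Wc
    t = np.asarray(t, dtype=float)
    out = np.full_like(t, 2*Wc)
    nz = t != 0
    out[nz] = np.sin(2*np.pi*Wc*t[nz])/(np.pi*t[nz])   # = 2Wc sinc(2Wc t)
    return out

Wc = 2.0
t = np.linspace(-100, 100, 400001)
dt = t[1] - t[0]
h = lpf(t, Wc)
H = lambda f: np.sum(h*np.exp(-2j*np.pi*f*t))*dt       # truncated FT of h
assert abs(H(1.0) - 1) < 0.01     # gain ≈ 1 inside the band
assert abs(H(3.0)) < 0.01         # gain ≈ 0 outside the band
```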
The Two Are Not the Same
Set of Measure Zero (1): (Section 2.5)
Set of Measure Zero (2)
Set of Measure Zero (3)
N ⊆ [a1 , b1 ] ∪ [a2 , b2 ] ∪ · · · .
Communication and Detection Theory: Lecture 2
Amos Lapidoth
ETH Zurich
Today: Ch. 7
• Passband signals.
• Bandpass filters (Sec. 6.3).
• Signals that are bandlimited to W Hz around a carrier
frequency fc .
• Bandwidth around a carrier frequency.
• Multiplying a signal by a carrier.
• The Analytic representation.
• The Baseband representation.
• Inner products in passband and baseband.
• Baseband representation of xPB ? yPB .
• Baseband representation of xPB ? h.
The FT of a Real Passband Signal
[Plot: ŷ(f) for a real passband signal, supported on fc − W/2 ≤ |f| ≤ fc + W/2.]
Passband Signals
The Ideal Unit-Gain Bandpass Filter
  B̂PF_{W,fc}(f) ≜ I{ | |f| − fc | ≤ W/2 },  f ∈ ℝ.
[Plot: B̂PF_{W,fc}(f), equal to 1 on two bands of width W centered at ±fc.]
Bandwidth around fc
[Plot: ŷ(f), supported on fc − W/2 ≤ |f| ≤ fc + W/2; the bandwidth around fc
is W.]
Remarks on the Bandwidth around fc
The Bandwidth around fc Depends on fc
[Plot: the same spectral support, fc − W/2 ≤ |f| ≤ fc + W/2, measured with
respect to a different carrier yields a different bandwidth around the
carrier.]
The FT of t ↦ x(t) e^{i2πfc t} is f ↦ x̂(f − fc).
  ∫_{−∞}^{∞} x(t) e^{i2πfc t} e^{−i2πft} dt = ∫_{−∞}^{∞} x(t) e^{−i2π(f−fc)t} dt
                                            = x̂(f − fc).
Likewise,
  t ↦ x(t) e^{−i2πfc t} has FT f ↦ x̂(f + fc).
So
  t ↦ x(t) cos(2πfc t) has FT f ↦ ½ x̂(f − fc) + ½ x̂(f + fc),
because
  cos(2πfc t) = ½ e^{i2πfc t} + ½ e^{−i2πfc t}.
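The frequency-shift property has an exact discrete counterpart that can be checked with the FFT (an illustrative sketch; the pulse and the carrier bin k0 are assumptions, not from the slides):

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.exp(-0.5*((n - N/2)/40.0)**2)       # a smooth "baseband" pulse
k0 = 100                                    # carrier bin (illustrative)
X = np.fft.fft(x)
Y = np.fft.fft(x*np.exp(2j*np.pi*k0*n/N))
# multiplying by e^{i2π k0 n/N} shifts the DFT by k0 bins
assert np.allclose(Y, np.roll(X, k0))
```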
Multiplication by a Carrier Doubles the Bandwidth
Recall that
  t ↦ x(t) cos(2πfc t) has FT f ↦ ½ x̂(f − fc) + ½ x̂(f + fc).
[Plots: x̂(f), supported on |f| ≤ W, and ŷ(f) for y : t ↦ x(t) cos(2πfc t)—two
copies of x̂ scaled by ½ and centered at ±fc, each of width 2W.]
The Analytic and the Baseband Representations
First Aid
The Analytic Representation
[Plots: x̂PB(f), supported around both ±fc, and x̂A(f), the FT of the analytic
representation, supported only on positive frequencies around fc.]
From xA to xPB
Because xPB is real,
  x̂PB(−f) = x̂*PB(f),  f ∈ ℝ.
And, by definition,
  x̂A(f) = x̂PB(f) if f ≥ 0, and 0 otherwise.
So,
  x̂PB(f) = x̂A(f) + x̂*A(−f),  f ∈ ℝ,
and hence, as we next argue,
  xPB(t) = 2 Re( xA(t) ),  t ∈ ℝ.
The FT of x∗ and Re(x)
Proof:
• Parseval.
• xPB is real, so |x̂PB| is symmetric.
  ‖xPB‖₂² = ∫_{−∞}^{∞} |x̂PB(f)|² df
          = 2 ∫₀^∞ |x̂PB(f)|² df
          = 2 ∫₀^∞ |x̂A(f)|² df
          = 2 ‖xA‖₂².
⟨xPB, yPB⟩ = 2 Re ⟨xA, yA⟩
  ‖u + v‖₂² = ⟨u + v, u + v⟩
            = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
            = ‖u‖₂² + ⟨u, v⟩ + ⟨u, v⟩* + ‖v‖₂²
            = ‖u‖₂² + ‖v‖₂² + 2 Re ⟨u, v⟩.
Thus,
  ‖xA + yA‖₂² = ‖xA‖₂² + ‖yA‖₂² + 2 Re ⟨xA, yA⟩,
  ‖xPB + yPB‖₂² = ‖xPB‖₂² + ‖yPB‖₂² + 2 ⟨xPB, yPB⟩.
The result now follows from
  ‖xPB‖₂² = 2‖xA‖₂²,  ‖yPB‖₂² = 2‖yA‖₂²,  ‖xPB + yPB‖₂² = 2‖xA + yA‖₂².
[Plots: x̂PB(f) and ŷPB(f), two real passband signals with the same support
around ±fc.]
Some Comments on the Analytic Representation
The Baseband Representation of xPB w.r.t. fc
x̂BB (f ) = x̂A (f + fc ), f ∈ R.
[Plots: x̂PB(f); x̂A(f); and x̂BB(f), the analytic spectrum shifted down to
baseband.]
[Plots: x̂PB(f); the shifted spectrum x̂PB(f + fc), with components at baseband
and at −2fc; the filter g₀(f) with cutoff Wc; and the result x̂BB(f), supported
on |f| ≤ W/2.]
From xPB to xBB
  xBB = ( t ↦ e^{−i2πfc t} xPB(t) ) ⋆ ǧ₀,
where g₀ : f ↦ g₀(f) is any integrable function satisfying
  g₀(f) = 1,  |f| ≤ W/2,
and
  g₀(f) = 0,  |f + 2fc| ≤ W/2.
For example,
  xBB = ( t ↦ e^{−i2πfc t} xPB(t) ) ⋆ LPF_{Wc},
  W/2 ≤ Wc ≤ 2fc − W/2.
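A discrete illustration of this downconversion (a sketch, not from the slides; the sample rate, carrier, and baseband tones are assumptions chosen so every tone falls exactly on an FFT bin, making the recovery exact):

```python
import numpy as np

N = 4096
t = np.arange(N)/N                    # one second at N Hz (assumed)
fc = 500.0                            # carrier (assumed integer Hz)
# a complex baseband signal occupying |f| <= 20 Hz
x_bb = (1.0*np.exp(2j*np.pi*5*t) + 0.5j*np.exp(-2j*np.pi*12*t)
        + 0.3*np.exp(2j*np.pi*17*t))
x_pb = 2*np.real(x_bb*np.exp(2j*np.pi*fc*t))   # xPB = 2 Re(xBB e^{i2π fc t})
# downconvert, then ideal-lowpass (cutoff Wc = 40 Hz) by zeroing FFT bins
y = x_pb*np.exp(-2j*np.pi*fc*t)
Y = np.fft.fft(y)
Y[np.abs(np.fft.fftfreq(N, d=1/N)) > 40] = 0   # removes the image at -2 fc
x_rec = np.fft.ifft(Y)
assert np.allclose(x_rec, x_bb)                # xBB recovered
```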
Convolving a Complex Signal with a Real Signal
If h is real-valued, then
  Re( x ⋆ h ) = Re(x) ⋆ h,
  Im( x ⋆ h ) = Im(x) ⋆ h.
The In-Phase and Quadrature Components
Because xPB is real,
  Re( xPB(t) e^{−i2πfc t} ) = xPB(t) cos(2πfc t),  t ∈ ℝ,
  Im( xPB(t) e^{−i2πfc t} ) = −xPB(t) sin(2πfc t),  t ∈ ℝ.
From xPB to xBB
[Block diagram: xPB(t) is split into two arms. The upper arm multiplies by
cos(2πfc t) and lowpass-filters (LPF_{Wc}) to produce Re xBB(t); the lower arm
multiplies by −sin(2πfc t) (the carrier shifted by 90°) and lowpass-filters to
produce Im xBB(t); here W/2 ≤ Wc ≤ 2fc − W/2.]
Bandwidth
Recovering xPB from xBB and fc
Recall that
  xBB(t) ≜ e^{−i2πfc t} xA(t),  t ∈ ℝ,
and that
  xPB(t) = 2 Re( xA(t) ),  t ∈ ℝ.
Consequently,
  xPB(t) = 2 Re( xBB(t) e^{i2πfc t} ),  t ∈ ℝ.
[Plots: x̂BB(f); the up-shifted x̂BB(f − fc); and x̂*BB(−f − fc), its mirror
image at −fc; together the last two form x̂PB(f).]
Some Remarks on the Baseband Representation
Inner Products
  ⟨xPB, yPB⟩ = 2 Re ⟨xBB, yBB⟩
Proof:
  ⟨xBB, yBB⟩ = ∫_{−∞}^{∞} xBB(t) y*BB(t) dt
             = ∫_{−∞}^{∞} e^{−i2πfc t} xA(t) ( e^{−i2πfc t} yA(t) )* dt
             = ∫_{−∞}^{∞} e^{−i2πfc t} xA(t) e^{i2πfc t} y*A(t) dt
             = ⟨xA, yA⟩.
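The identity ⟨xPB, yPB⟩ = 2 Re ⟨xBB, yBB⟩ can be verified on a discrete grid (a sketch, not from the slides; the random baseband tones and integer carrier are illustrative assumptions that make the Riemann sums exact):

```python
import numpy as np

N, fc = 4096, 500.0
t = np.arange(N)/N

def tones(seed):
    # a random complex baseband signal occupying |f| <= 20 Hz (illustrative)
    rng = np.random.default_rng(seed)
    f = np.arange(-20, 21)
    c = rng.standard_normal(f.size) + 1j*rng.standard_normal(f.size)
    return (c[None, :]*np.exp(2j*np.pi*np.outer(t, f))).sum(axis=1)

x_bb, y_bb = tones(1), tones(2)
x_pb = 2*np.real(x_bb*np.exp(2j*np.pi*fc*t))
y_pb = 2*np.real(y_bb*np.exp(2j*np.pi*fc*t))
ip = lambda u, v: np.sum(u*np.conj(v))/N      # Riemann sum for ⟨u, v⟩ on [0, 1)
assert np.isclose(ip(x_pb, y_pb).real, 2*np.real(ip(x_bb, y_bb)))
assert abs(ip(x_pb, y_pb).imag) < 1e-9        # the passband inner product is real
```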
Orthogonality in Passband
Two real passband signals xPB , yPB are orthogonal iff the inner
product between their baseband representations is purely imaginary.
The Baseband Representation of xPB ⋆ yPB
[Figure 7.13: the convolution of two real passband signals and its baseband
representation—x̂PB(f), ŷPB(f), their product x̂PB(f) ŷPB(f) (peak 1.5), and
the corresponding baseband spectra x̂BB(f), ŷBB(f), and x̂BB(f) ŷBB(f).]
The Baseband Representation of xPB ⋆ h
Here
• xPB is a real passband signal that is bandlimited to W Hz
around fc , but
• h is a general (not necessarily bandpass) real impulse
response.
• xPB ? h is the filter’s response to xPB .
[Plots: x̂PB(f), of bandwidth W and height 1 around ±fc; ĥ(f), the filter's
frequency response; and the product x̂PB(f) ĥ(f).]
Frequency Response w.r.t. the bandwidth W around fc (1)
[Plot: ĥ(f) near fc and the band of width W around fc over which it matters;
in baseband terms, −W/2 ≤ f ≤ W/2.]
Frequency Response w.r.t. the bandwidth W around fc (2)
[Plot: x̂BB(f), supported on −W/2 ≤ f ≤ W/2.]
Next Week
Thank you!
Communication and Detection Theory: Lecture 3
Amos Lapidoth
ETH Zurich
March 7, 2017
Today
Chapters 4 and 8:
• L2 as vector space.
• Finite-dimensional subspaces of L2 : bases, dimension,
orthonormal bases, and the Gram-Schmidt procedure (review).
• Expressing a signal in terms of a given orthonormal basis.
• Projections onto a finite-dimensional subspace.
• The projection and closest element in the subspace.
• Complete Orthonormal Systems (CONS).
• CONS and Parseval’s Theorem.
• The Sampling Theorem as an orthonormal expansion.
Amplification and Superposition
L₂ is the space of energy-limited signals. It is closed under superposition:
  t ↦ u(t) + v(t),  t ∈ ℝ.
Linear Subspaces
  u₁ + u₂ ∈ U,  u₁, u₂ ∈ U;
  αu ∈ U,  α ∈ ℂ, u ∈ U.
Another Linear Subspace
All signals of the form
  t ↦ p(t) e^{−|t|},
where p is a polynomial of degree at most 3. If
  u : t ↦ (α₀ + α₁t + α₂t² + α₃t³) e^{−|t|},
then
  αu : t ↦ (αα₀ + αα₁t + αα₂t² + αα₃t³) e^{−|t|}.
If u is as above and
  v : t ↦ (β₀ + β₁t + β₂t² + β₃t³) e^{−|t|},
then
  u + v : t ↦ ( (α₀+β₀) + (α₁+β₁)t + (α₂+β₂)t² + (α₃+β₃)t³ ) e^{−|t|}.
Linear Combinations, Span, and Independence
• v ∈ L₂ is a linear combination of (v₁, . . . , vₙ) if it equals
  α₁v₁ + · · · + αₙvₙ,  i.e.,  Σ_{ν=1}^{n} α_ν v_ν,
for some α₁, . . . , αₙ ∈ ℂ.
• span(v₁, . . . , vₙ) is the set of all vectors in L₂ that are linear
combinations of (v₁, . . . , vₙ).
• span(v₁, . . . , vₙ) is a linear subspace of L₂.
• The n-tuple (v₁, . . . , vₙ) is linearly independent if
  Σ_{ν=1}^{n} α_ν v_ν = 0  ⟹  ( α_ν = 0,  ν = 1, . . . , n ).
Dimension, Finite and Infinite
Some Examples
  t ↦ p(t) e^{−|t|},
  t ↦ I{t = 17}.
‖u‖₂ as the “Length” of the Signal u(·)
For u, v ∈ L₂ and α ∈ ℂ,
  ‖αu‖₂ = |α| ‖u‖₂.
Also,
  ‖u‖₂ − ‖v‖₂ ≤ ‖u + v‖₂ ≤ ‖u‖₂ + ‖v‖₂,  u, v ∈ L₂,
because of the triangle inequality.
[Figure: the triangle formed by u, v, and u + v.]
A Pythagorean Theorem
Last time:
  ‖u + v‖₂² = ‖u‖₂² + ‖v‖₂² + 2 Re ⟨u, v⟩,
so
  ‖u + v‖₂² = ‖u‖₂² + ‖v‖₂²,  u and v orthogonal.
By induction,
  ‖u₁ + · · · + uₙ‖₂² = ‖u₁‖₂² + · · · + ‖uₙ‖₂²,  u₁, . . . , uₙ pairwise orthogonal.
Indeed, if u ≜ u₁ and v ≜ u₂ + · · · + uₙ, then the pairwise orthogonality
implies ⟨u, v⟩ = 0 because
  ⟨u, v⟩ = ⟨u₁, u₂ + · · · + uₙ⟩ = ⟨u₁, u₂⟩ + · · · + ⟨u₁, uₙ⟩ = 0.
Hence ‖u + v‖₂² = ‖u‖₂² + ‖v‖₂², i.e.,
  ‖u₁ + · · · + uₙ‖₂² = ‖u₁‖₂² + ‖u₂ + · · · + uₙ‖₂².
Now use the induction hypothesis.
Projecting v onto u (1)
  w = (length of v) cos(angle between v and u) · u / (length of u).
Projecting v onto u (2)
And since
  w = ( ⟨v, u⟩ / ‖u‖₂² ) u,
we obtain the Cauchy–Schwarz Inequality
  |⟨v, u⟩| ≤ ‖v‖₂ ‖u‖₂.
Orthonormal Tuples
Orthonormal Tuples Are Linearly Independent
If
  Σ_{ℓ=1}^{n} α_ℓ φ_ℓ = 0,
then, for every ℓ′,
  0 = ⟨0, φ_ℓ′⟩
    = ⟨ Σ_{ℓ=1}^{n} α_ℓ φ_ℓ , φ_ℓ′ ⟩
    = Σ_{ℓ=1}^{n} α_ℓ ⟨φ_ℓ, φ_ℓ′⟩
    = Σ_{ℓ=1}^{n} α_ℓ I{ℓ = ℓ′}
    = α_ℓ′.
An Orthonormal Basis
If (φ₁, . . . , φ_d) is an orthonormal basis for U ⊂ L₂, then
  u = Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩ φ_ℓ,  u ∈ U.
Since (φ₁, . . . , φ_d) is a basis for U, any u ∈ U can be expressed as
  u = Σ_{ℓ=1}^{d} α_ℓ φ_ℓ.
And
  ⟨u, φ_ℓ′⟩ = ⟨ Σ_{ℓ=1}^{d} α_ℓ φ_ℓ , φ_ℓ′ ⟩
            = Σ_{ℓ=1}^{d} α_ℓ ⟨φ_ℓ, φ_ℓ′⟩
            = Σ_{ℓ=1}^{d} α_ℓ I{ℓ = ℓ′}
            = α_ℓ′.
Projections (1)
Lemma
Let (φ₁, . . . , φ_d) be an orthonormal basis for U ⊂ L₂. Let v ∈ L₂
be arbitrary.
1. v − Σ_{ℓ=1}^{d} ⟨v, φ_ℓ⟩ φ_ℓ is orthogonal to every signal in U:
  ⟨ v − Σ_{ℓ=1}^{d} ⟨v, φ_ℓ⟩ φ_ℓ , u ⟩ = 0,  v ∈ L₂, u ∈ U.
Projections (2)
Proof.
The signal v − Σ_{ℓ=1}^{d} α_ℓ φ_ℓ is orthogonal to φ_ℓ′ iff α_ℓ′ = ⟨v, φ_ℓ′⟩.
Hence:
1. If α_ℓ′ = ⟨v, φ_ℓ′⟩ for all ℓ′ ∈ {1, . . . , d}, then v − Σ_{ℓ=1}^{d} α_ℓ φ_ℓ is
orthogonal to each basis function and hence to every u ∈ U.
2. If w is in U, it can be written as Σ_{ℓ=1}^{d} α_ℓ φ_ℓ, and if, additionally,
v − w is orthogonal to each φ_ℓ′, then α_ℓ′ must equal ⟨v, φ_ℓ′⟩.
Projections (3)
Projection as Best Approximation
Let (φ1 , . . . , φd ) be an orthonormal basis for U ⊂ L2 . Let v ∈ L2
be arbitrary. Then the projection w of v onto U is the element in
U that, among all the elements of U, is closest to v in the sense
that
  ‖v − u‖₂ ≥ ‖v − w‖₂,  u ∈ U.
Proof.
Let u be any element of U. Then so is w − u. Since v − w is
orthogonal to all elements of U it is a fortiori also orthogonal to
w − u. Thus, by Pythagoras,
  ‖v − u‖₂² = ‖(v − w) + (w − u)‖₂² = ‖v − w‖₂² + ‖w − u‖₂² ≥ ‖v − w‖₂².
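The two facts—the error v − w is orthogonal to U, and w is the closest element of U—can be checked numerically in ℝ⁵ (an illustrative sketch, not from the slides; the QR factorization is just a convenient way to get an orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # orthonormal basis (columns)
v = rng.standard_normal(5)
w = Q @ (Q.T @ v)              # w = Σ_ℓ ⟨v, φ_ℓ⟩ φ_ℓ, the projection onto U
assert np.allclose(Q.T @ (v - w), 0)              # v − w ⊥ U
for _ in range(100):           # w beats random elements of U
    u = Q @ rng.standard_normal(2)
    assert np.linalg.norm(v - u) >= np.linalg.norm(v - w) - 1e-12
```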
Energy and Inner Products and Orthonormal Bases (1)
Proof.
Follows from the Pythagorean Theorem and
  u = Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩ φ_ℓ,  u ∈ U.
Energy and Inner Products and Orthonormal Bases (2)
Let (φ₁, . . . , φ_d) be an orthonormal basis for U ⊂ L₂. Then
  ‖v‖₂² ≥ Σ_{ℓ=1}^{d} |⟨v, φ_ℓ⟩|²,  v ∈ L₂.
Proof.
Let w be the projection of v onto U. Then
  ‖v‖₂² = ‖v − w‖₂² + ‖w‖₂² ≥ ‖w‖₂² = Σ_{ℓ=1}^{d} |⟨v, φ_ℓ⟩|².
Energy and Inner Products and Orthonormal Bases (3)
Let (φ₁, . . . , φ_d) be an orthonormal basis for U ⊂ L₂. Then
  ⟨v, u⟩ = Σ_{ℓ=1}^{d} ⟨v, φ_ℓ⟩ ⟨u, φ_ℓ⟩*,  v ∈ L₂, u ∈ U.
Since u ∈ U,
  u = Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩ φ_ℓ.
Consequently, using the sesquilinearity of the inner product,
  ⟨v, u⟩ = ⟨ v, Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩ φ_ℓ ⟩
         = Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩* ⟨v, φ_ℓ⟩.
Does Every Finite-Dimensional Subspace Have an
Orthonormal Basis?
In general, no:
  { u ∈ L₂ : u(t) = 0 whenever t ≠ 17 }.
Gram-Schmidt
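The Gram-Schmidt procedure can be sketched in a few lines of numpy (an illustration for real vectors, not from the slides; the drop tolerance is an assumption):

```python
import numpy as np

def gram_schmidt(vs):
    """Orthonormalize a list of real vectors, dropping dependent ones."""
    basis = []
    for v in vs:
        w = v.astype(float)
        for b in basis:
            w = w - np.dot(w, b)*b      # subtract the projection onto b
        if np.linalg.norm(w) > 1e-10:   # keep only independent directions
            basis.append(w/np.linalg.norm(w))
    return basis

vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([2., 1., 1.])]
B = gram_schmidt(vs)                    # third vector = first + second
G = np.array([[np.dot(a, b) for b in B] for a in B])
assert len(B) == 2 and np.allclose(G, np.eye(2))
```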
The Benefits of Orthonormal Bases
  u = Σ_{ℓ=1}^{d} ⟨u, φ_ℓ⟩ φ_ℓ,  u ∈ U.
  ‖u‖₂² = Σ_{ℓ=1}^{d} |⟨u, φ_ℓ⟩|²,  u ∈ U.
  ⟨v, u⟩ = Σ_{ℓ=1}^{d} ⟨v, φ_ℓ⟩ ⟨u, φ_ℓ⟩*,  v ∈ L₂, u ∈ U.
  w = Σ_{ℓ=1}^{d} ⟨v, φ_ℓ⟩ φ_ℓ.
Complete Orthonormal System (CONS)
Definition: . . . , φ₋₁, φ₀, φ₁, . . . is a complete orthonormal system (CONS)
for the linear subspace U ⊆ L₂ if:
1. φ_ℓ ∈ U,  ℓ ∈ ℤ.
2. ⟨φ_ℓ, φ_ℓ′⟩ = I{ℓ = ℓ′},  ℓ, ℓ′ ∈ ℤ.
3. ‖u‖₂² = Σ_{ℓ=−∞}^{∞} |⟨u, φ_ℓ⟩|²,  u ∈ U.
If {φ_ℓ} ⊂ U are orthonormal, then the following are equivalent:
• ∀u ∈ U and ∀ε > 0 there exists some positive integer L(ε) and
coefficients α₋L(ε), . . . , αL(ε) ∈ ℂ s.t.
  ‖ u − Σ_{ℓ=−L(ε)}^{L(ε)} α_ℓ φ_ℓ ‖₂ < ε.  (3)
• For every u ∈ U
  lim_{L→∞} ‖ u − Σ_{ℓ=−L}^{L} ⟨u, φ_ℓ⟩ φ_ℓ ‖₂ = 0.  (4)
• For every u ∈ U
  ‖u‖₂² = Σ_{ℓ=−∞}^{∞} |⟨u, φ_ℓ⟩|².  (5)
• For every u, v ∈ U
  ⟨u, v⟩ = Σ_{ℓ=−∞}^{∞} ⟨u, φ_ℓ⟩ ⟨v, φ_ℓ⟩*.  (6)
The Frequency Domain
Energy-Limited Signals that Are Bandlimited to W Hz
Consider g satisfying
  g(f) = 0,  |f| > W,
and
  ∫_{−W}^{W} |g(f)|² df < ∞.
Thus, if
  V = { g ∈ L₂ : g(f) = 0 whenever |f| > W }.
If {ψ_ℓ} is a CONS for V, then {ψ̌_ℓ} is a CONS for V̌
Let {ψ_ℓ} be a CONS for V. Orthonormality follows from Parseval:
  ⟨ψ̌_ℓ, ψ̌_ℓ′⟩ = ⟨ψ_ℓ, ψ_ℓ′⟩,  ℓ, ℓ′ ∈ ℤ.
Completeness also follows from Parseval:
  ‖g‖₂² = ‖ǧ‖₂²,  g ∈ V.
A CONS for V̌
The sequence of signals that are defined for every integer ℓ by
  t ↦ √(2W) sinc(2Wt + ℓ).
Indeed,
  ⟨ x, t ↦ √(2W) sinc(2Wt + ℓ) ⟩ = ⟨x, ψ̌_ℓ⟩
    = ⟨ǧ, ψ̌_ℓ⟩
    = ⟨g, ψ_ℓ⟩
    = ∫_{−W}^{W} g(f) ( (1/√(2W)) e^{iπℓf/W} )* df
    = (1/√(2W)) ∫_{−W}^{W} g(f) e^{−iπℓf/W} df
    = (1/√(2W)) x(−ℓ/(2W)),  ℓ ∈ ℤ.
Applying this with φ_ℓ = ψ̌_ℓ
Theorem (L₂-Sampling Theorem)
Let x be an energy-limited signal that is bandlimited to W Hz, and let
  T = 1/(2W).
Let . . . , x(−2T), x(−T), x(0), x(T), x(2T), . . . be the samples of x. Then
  lim_{L→∞} ∫_{−∞}^{∞} | x(t) − Σ_{ℓ=−L}^{L} x(−ℓT) sinc(t/T + ℓ) |² dt = 0,
and
  ∫_{−∞}^{∞} |x(t)|² dt = T Σ_{ℓ=−∞}^{∞} |x(ℓT)|².
[Table 8.1: the duality between the Sampling Theorem and the Fourier Series
Representation.]
The Sampling Theorem also holds for
complex signals.
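Sinc interpolation can be demonstrated numerically (a sketch, not from the slides; the test signal, the truncation of the sample sum, and the tolerance are illustrative assumptions—the series converges slowly, so many samples are kept):

```python
import numpy as np

W = 4.0                      # bandlimit in Hz (assumed)
T = 1/(2*W)                  # Nyquist period
# a bandlimited signal: sinusoids at 1 Hz and 3 Hz, both below W
x = lambda t: np.sin(2*np.pi*1.0*t) + 0.5*np.cos(2*np.pi*3.0*t)
n = np.arange(-2000, 2001)   # truncated sample index range
t = np.linspace(-1, 1, 101)
# x(t) ≈ sum_n x(nT) sinc(t/T − n), with np.sinc(u) = sin(πu)/(πu)
x_rec = np.array([np.sum(x(n*T)*np.sinc(tt/T - n)) for tt in t])
assert np.max(np.abs(x_rec - x(t))) < 0.05
```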
An Isomorphism
u(`T) = α` , ` ∈ Z.
Next Week
Thank you!
Communication and Detection Theory: Lecture 4
Amos Lapidoth
ETH Zurich
Today
Bandwidth vs. Bandwidth-around-fc
[Plot: ŷ(f), supported on fc − W/2 ≤ |f| ≤ fc + W/2.]
The Holy Grail
or
Recall
The Solution
Sampling:
• From xPB generate xBB .
• Sample xBB at its Nyquist rate
Complex Sampling
We seek the samples of xBB:
  xBB(ℓ/W),  ℓ ∈ ℤ.
Recalling that
  xBB = ( t ↦ e^{−i2πfc t} xPB(t) ) ⋆ LPF_{Wc},
where
  W/2 ≤ Wc ≤ 2fc − W/2,
we obtain
  xBB(ℓ/W) = ( ( t ↦ e^{−i2πfc t} xPB(t) ) ⋆ LPF_{Wc} )(ℓ/W)
           = ( ( t ↦ xPB(t) cos(2πfc t) ) ⋆ LPF_{Wc} )(ℓ/W)
             − i ( ( t ↦ xPB(t) sin(2πfc t) ) ⋆ LPF_{Wc} )(ℓ/W),  ℓ ∈ ℤ.
Complex Sampling
[Block diagram: xPB(t) is multiplied by cos(2πfc t) and lowpass-filtered
(LPF_{Wc}) to give Re xBB(t), which is sampled at t = ℓ/W to give Re xBB(ℓ/W);
in the second arm, multiplication by −sin(2πfc t) (the carrier shifted by 90°)
and lowpass filtering give Im xBB(t), sampled to give Im xBB(ℓ/W); here
W/2 ≤ Wc ≤ 2fc − W/2.]
Reconstruction
By the (Pointwise) Sampling Theorem,
  xBB(t) = Σ_{ℓ=−∞}^{∞} xBB(ℓ/W) sinc(Wt − ℓ),  t ∈ ℝ.
Consequently,
  xPB(t) = 2 Re( e^{i2πfc t} Σ_{ℓ=−∞}^{∞} xBB(ℓ/W) sinc(Wt − ℓ) ),  t ∈ ℝ.
Since sinc(·) is real,
  xPB(t) = 2 Re( e^{i2πfc t} Σ_{ℓ=−∞}^{∞} xBB(ℓ/W) sinc(Wt − ℓ) )
         = 2 Σ_{ℓ=−∞}^{∞} Re( xBB(ℓ/W) ) sinc(Wt − ℓ) cos(2πfc t)
           − 2 Σ_{ℓ=−∞}^{∞} Im( xBB(ℓ/W) ) sinc(Wt − ℓ) sin(2πfc t),  t ∈ ℝ.
Convergence in L₂ (1)
  lim_{L→∞} ‖ t ↦ xBB(t) − Σ_{ℓ=−L}^{L} xBB(ℓ/W) sinc(Wt − ℓ) ‖₂² = 0.
So the energy in the latter is twice the energy of the former and thus
also converges to zero.
Convergence in L₂ (2)
Thus,
  lim_{L→∞} ‖ t ↦ xPB(t) − 2 Re( e^{i2πfc t} Σ_{ℓ=−L}^{L} xBB(ℓ/W) sinc(Wt − ℓ) ) ‖₂ = 0.
Convergence in L2 (3)
The Sampling Theorem for Real Passband Signals
Let xPB be a real energy-limited passband signal that is bandlimited to
W Hz around the carrier frequency fc. Let xBB(ℓ/W) denote the time-ℓ/W
sample of xBB.
1. xPB can be reconstructed from the samples in the L₂ sense:
  lim_{L→∞} ∫_{−∞}^{∞} ( xPB(t) − 2 Re( e^{i2πfc t} Σ_{ℓ=−L}^{L} xBB(ℓ/W) sinc(Wt − ℓ) ) )² dt = 0.

A stochastic process is a mapping
  X : Ω × ℝ → ℝ,  (ω, t) ↦ X(ω, t).
Sample Paths
  X(ω, ·) : ℝ → ℝ,  t ↦ X(ω, t).
A Stochastic Process and its Sample-Paths
  X(t) = x₀(t) if D = 0, and x₁(t) if D = 1,  t ∈ ℝ,
where
  x₀(t) = A e^{−t/T} if t/T ≥ 0, and 0 otherwise,  t ∈ ℝ,
  x₁(t) = A if 0 ≤ t/T ≤ 1, and 0 otherwise,  t ∈ ℝ,
has two sample-paths: x₀(·) and x₁(·).
Time-t Sample
  X(·, t) : Ω → ℝ,  ω ↦ X(ω, t).
A Formal Definition
A stochastic process ( X(t), t ∈ T ) is an indexed family of random
variables that are defined on a common probability space (Ω, F, P).
From Bits to Real Numbers
D1 , D2 , . . . , Dk .
The mapping
ϕ : {0, 1}k → Rn
(d1 , . . . , dk ) 7→ (x1 , . . . , xn )
Examples (1)
• n = k, i.e., one bit per real symbol, and
  X_j = +1 if D_j = 0, and −1 if D_j = 1,  j = 1, . . . , k.
Here n = k/2, and the rate is two bits per real symbol.
Examples (2)
Here n = 2k, and the rate is half a bit per real symbol.
Block-Mode Mapping of Bits to Real Numbers
where
  k′ = (k/K) K.
From Real Numbers to Waveforms with Linear Modulation
The Energy in the Transmitted Waveform
The transmitted energy is a random variable!
  ‖X‖₂² = ∫_{−∞}^{∞} X²(t) dt
        = ∫_{−∞}^{∞} ( A Σ_{ℓ=1}^{n} X_ℓ g_ℓ(t) )² dt
        = ∫_{−∞}^{∞} A Σ_{ℓ=1}^{n} X_ℓ g_ℓ(t) · A Σ_{ℓ′=1}^{n} X_ℓ′ g_ℓ′(t) dt
        = A² Σ_{ℓ=1}^{n} Σ_{ℓ′=1}^{n} X_ℓ X_ℓ′ ∫_{−∞}^{∞} g_ℓ(t) g_ℓ′(t) dt
        = A² Σ_{ℓ=1}^{n} Σ_{ℓ′=1}^{n} X_ℓ X_ℓ′ ⟨g_ℓ, g_ℓ′⟩.
In particular,
  ‖X‖₂² = A² Σ_{ℓ=1}^{n} X_ℓ²,  {g_ℓ} orthonormal.
Gram-Schmidt to the Rescue
Recovering the Symbols with a Matched Filter
  ϕ : (D₁, . . . , D_k) ↦ (X₁, . . . , Xₙ).
  X(t) = A Σ_{ℓ=1}^{n} X_ℓ φ_ℓ(t),  t ∈ ℝ,
with (φ₁, . . . , φₙ) orthonormal.
How can we recover the symbols?
  X_ℓ = (1/A) ⟨X, φ_ℓ⟩,  ℓ = 1, . . . , n.
This requires n inner-product computations.
Computing Inner Products with a Matched Filter
  ⟨u, φ⟩ = ( u ⋆ ~φ* )(0),  u, φ ∈ L₂.
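The discrete version of this identity—an inner product equals the time-0 output of a filter matched to φ—can be checked directly (an illustrative sketch with real signals, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(64)
phi = rng.standard_normal(64)
# matched filter: convolve u with the time-reversed phi (real, so no conjugate)
mf = np.convolve(u, phi[::-1])
# "time 0" of the full convolution is index len(phi) - 1
assert np.isclose(mf[len(phi) - 1], np.dot(u, phi))
```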
Computing Many Inner Products Is Sometimes Easy
If
  g_j : t ↦ φ(t − t_j),  j = 1, . . . , J,
then the inner products
  ⟨u, g_j⟩,  j = 1, . . . , J,
can all be read off the output of a single filter matched to φ.
Back to Linear Modulation
  X(t) = A Σ_{ℓ=1}^{n} X_ℓ φ_ℓ(t),  t ∈ ℝ,
with (φ₁, . . . , φₙ) orthonormal. Suppose now that
  φ_ℓ(t) = φ(t − ℓTs),  ℓ ∈ {1, . . . , n}, t ∈ ℝ.
Then we can compute all the inner products using one circuit!
  X_ℓ = (1/A) ∫_{−∞}^{∞} X(τ) φ_ℓ(τ) dτ
      = (1/A) ∫_{−∞}^{∞} X(τ) φ(τ − ℓTs) dτ
      = (1/A) ∫_{−∞}^{∞} X(τ) ~φ(ℓTs − τ) dτ
      = (1/A) ( X ⋆ ~φ )(ℓTs),  ℓ = 1, . . . , n.
Recovering the Symbols Using One MF
[Block diagram: X(·) is fed to the matched filter ~φ, whose output is sampled
at t = ℓTs to produce A X_ℓ.]
This motivates
  X(t) = A Σ_{ℓ=1}^{n} X_ℓ φ(t − ℓTs),  t ∈ ℝ.
Pulse Amplitude Modulation
The bits D₁, . . . , D_k are mapped to the symbols X₁, . . . , Xₙ, and
  X(t) = A Σ_{ℓ=1}^{n} X_ℓ g(t − ℓTs),  t ∈ ℝ.
• A ≥ 0 is a scaling factor.
• g : ℝ → ℝ—pulse shape.
• Ts > 0—baud period.
• 1/Ts—baud rate: 1/Ts real symbols per second.
If the time shifts of g are orthonormal—φ—then the signal carries 1/Ts real
dimensions per second. [Portrait: J.M.E. Baudot (1845–1903).]
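A toy discrete PAM transmitter and matched-filter receiver (a sketch, not from the slides; the rect pulse, the 4-point constellation, and the samples-per-symbol count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 8                                   # samples per symbol (assumed)
A = 2.0
sym = rng.choice([-3., -1., 1., 3.], size=16)   # 4-PAM symbols
g = np.ones(sps)/np.sqrt(sps)             # unit-energy rect pulse; its shifts
X = np.zeros(16*sps)                      # by Ts are orthonormal
for l, s in enumerate(sym):
    X[l*sps:(l+1)*sps] += A*s*g           # X(t) = A Σ_ℓ X_ℓ g(t − ℓTs)
# recover the symbols with the matched filter sampled at ℓTs
rec = np.array([np.dot(X[l*sps:(l+1)*sps], g) for l in range(16)])/A
assert np.allclose(rec, sym)
```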
The Constellation
For example,
{−5, −3, −1, +1, +3, +5}
or
{−2, −1, +1, +2}.
Constellation Parameters
  δ ≜ min_{x,x′ ∈ X, x ≠ x′} |x − x′|.
Uncoded Transmission
Next Week
Reading Assignment:
• PAM Implementation Considerations (Section 10.12).
• Stationary Discrete-Time SP (Chapter 13).
Thank you!
Communication and Detection Theory: Lecture 5
Amos Lapidoth
ETH Zurich
Nyquist’s Criterion
Nyquist’s Criterion
• Nyquist’s Criterion.
• The Self-Similarity Function.
• Excess Bandwidth.
• Band-edge symmetry.
PAM with a Shift-Orthonormal Pulse Shape
  X(t) = A Σ_{ℓ=1}^{n} X_ℓ φ(t − ℓTs),  t ∈ ℝ,
where
  ∫_{−∞}^{∞} φ(t − ℓTs) φ(t − ℓ′Ts) dt = I{ℓ′ = ℓ},  ℓ, ℓ′ ∈ {1, . . . , n}.
• Energy:
  ‖X‖₂² = A² Σ_{ℓ=1}^{n} X_ℓ².
• Recovering X_ℓ is easy:
  X_ℓ = A⁻¹ ⟨X, t ↦ φ(t − ℓTs)⟩ = A⁻¹ ( X ⋆ ~φ )(ℓTs),  ℓ = 1, . . . , n.
• Instead of
  ∫_{−∞}^{∞} φ(t − ℓTs) φ(t − ℓ′Ts) dt = I{ℓ′ = ℓ},  ℓ, ℓ′ ∈ {1, . . . , n},
we’ll require
  ∫_{−∞}^{∞} φ(t − ℓTs) φ(t − ℓ′Ts) dt = I{ℓ′ = ℓ},  ℓ, ℓ′ ∈ ℤ.
The Time Shifts Are Orthogonal when they don’t Overlap
The Self-Similarity Function R_vv of v ∈ L₂
  R_vv : τ ↦ ∫_{−∞}^{∞} v(t + τ) v*(t) dt,  τ ∈ ℝ.
[Figure: v*(t) and the shifted v(t + τ), overlapping by the lag τ.]
The Self-Similarity Function R_vv of v ∈ L₂
  R_vv : τ ↦ ∫_{−∞}^{∞} v(t + τ) v*(t) dt,  τ ∈ ℝ.
1. Value at zero:
  R_vv(0) = ∫_{−∞}^{∞} |v(t)|² dt.
2. Maximum at zero:
  |R_vv(τ)| ≤ R_vv(0),  τ ∈ ℝ.
3. Conjugate symmetry:
  R_vv(−τ) = R*_vv(τ),  τ ∈ ℝ.
4. Integral representation:
  R_vv(τ) = ∫_{−∞}^{∞} |v̂(f)|² e^{i2πfτ} df,  τ ∈ ℝ.
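Properties 1-3 can be checked on a discrete, complex-valued signal (an illustrative sketch, not from the slides; the zero-padded sum plays the role of the integral):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(128) + 1j*rng.standard_normal(128)

def R(v, tau):
    # discrete self-similarity: sum_n v[n + tau] conj(v[n]), zero-padded
    n = len(v)
    vp = np.concatenate([np.zeros(n, complex), v, np.zeros(n, complex)])
    return np.sum(vp[n + tau:2*n + tau]*np.conj(v))

taus = range(-50, 51)
vals = np.array([R(v, k) for k in taus])
assert np.isclose(vals[50], np.sum(np.abs(v)**2))       # value at zero
assert np.all(np.abs(vals) <= np.abs(vals[50]) + 1e-9)  # maximum at zero
assert np.allclose(vals[::-1], np.conj(vals))           # conjugate symmetry
```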
Some Proofs
  R_vv : τ ↦ ∫_{−∞}^{∞} v(t + τ) v*(t) dt,  τ ∈ ℝ.
3. Conjugate symmetry follows by substituting s ≜ t + τ in
  R_vv(τ) = ∫_{−∞}^{∞} v(t + τ) v*(t) dt
          = ∫_{−∞}^{∞} v(s) v*(s − τ) ds
          = ( ∫_{−∞}^{∞} v(s − τ) v*(s) ds )*
          = R*_vv(−τ),  τ ∈ ℝ.
4. The integral representation follows from Parseval:
  R_vv(τ) = ∫_{−∞}^{∞} v(t + τ) v*(t) dt
          = ⟨ t ↦ v(t + τ), t ↦ v(t) ⟩
          = ⟨ f ↦ e^{i2πfτ} v̂(f), f ↦ v̂(f) ⟩
          = ∫_{−∞}^{∞} e^{i2πfτ} |v̂(f)|² df,  τ ∈ ℝ.
More Properties of R_vv
5. Uniform Continuity: R_vv is uniformly continuous.
(The IFT of an integrable function is uniformly continuous.)
6. Convolution Representation:
  R_vv(τ) = ( v ⋆ ~v* )(τ),  τ ∈ ℝ.
Indeed, substituting s ≜ t + τ,
  R_vv(τ) = ∫_{−∞}^{∞} v(t + τ) v*(t) dt
          = ∫_{−∞}^{∞} v(s) v*(s − τ) ds
          = ∫_{−∞}^{∞} v(s) ~v*(τ − s) ds
          = ( v ⋆ ~v* )(τ).
Shift-Orthonormality and the Self-Similarity Func.
Let φ be energy limited. The shift-orthonormality condition
  ∫_{−∞}^{∞} φ(t − ℓTs) φ*(t − ℓ′Ts) dt = I{ℓ = ℓ′},  ℓ, ℓ′ ∈ ℤ,
holds iff R_φφ(ℓTs) = I{ℓ = 0} for all ℓ ∈ ℤ.
Proof: Recall
  R_vv : τ ↦ ∫_{−∞}^{∞} v(t + τ) v*(t) dt,  τ ∈ ℝ.
Substituting s ≜ t − ℓ′Ts,
  ∫_{−∞}^{∞} φ(t − ℓTs) φ*(t − ℓ′Ts) dt = ∫_{−∞}^{∞} φ( s + (ℓ′ − ℓ)Ts ) φ*(s) ds
                                        = R_φφ( (ℓ′ − ℓ)Ts ).
Nyquist Pulse
We say that v : ℝ → ℂ is a Nyquist Pulse of parameter Ts if
  v(ℓTs) = I{ℓ = 0},  ℓ ∈ ℤ.
[Portrait: Harry Nyquist (1889–1976).]
Nyquist’s Criterion
Let Ts > 0 be given, and assume v = ǧ for some integrable g : f ↦ g(f).
Then v is a Nyquist Pulse of parameter Ts iff
  lim_{J→∞} ∫_{−1/(2Ts)}^{1/(2Ts)} | Ts − Σ_{j=−J}^{J} g( f + j/Ts ) | df = 0.
[Plots: g(f), of height Ts on −1/(2Ts) ≤ f ≤ 1/(2Ts); its translates
g(f + 1/Ts) and g(f − 1/Ts); and the folded sum Σ_{j=−∞}^{∞} g(f + j/Ts) = Ts.]
Proof of Nyquist’s Criterion (1)
Consider the folded spectrum
  f ↦ (1/√Ts) Σ_{j=−∞}^{∞} g( f + j/Ts ),  −1/(2Ts) ≤ f ≤ 1/(2Ts).
Proof of Nyquist’s Criterion (2)
  v(−ℓTs) = ∫_{−∞}^{∞} g(f) e^{−i2πfℓTs} df
          = Σ_{j=−∞}^{∞} ∫_{j/Ts − 1/(2Ts)}^{j/Ts + 1/(2Ts)} g(f) e^{−i2πfℓTs} df
          = Σ_{j=−∞}^{∞} ∫_{−1/(2Ts)}^{1/(2Ts)} g( f̃ + j/Ts ) e^{−i2π( f̃ + j/Ts )ℓTs} df̃
          = Σ_{j=−∞}^{∞} ∫_{−1/(2Ts)}^{1/(2Ts)} g( f̃ + j/Ts ) e^{−i2πf̃ℓTs} df̃
          = ∫_{−1/(2Ts)}^{1/(2Ts)} Σ_{j=−∞}^{∞} g( f̃ + j/Ts ) e^{−i2πf̃ℓTs} df̃
          = ∫_{−1/(2Ts)}^{1/(2Ts)} (1/√Ts) Σ_{j=−∞}^{∞} g( f̃ + j/Ts ) · √Ts e^{−i2πf̃ℓTs} df̃.
Characterization of Shift-Orthonormal Pulses
Let φ : ℝ → ℂ be energy-limited and Ts > 0. The condition
  ∫_{−∞}^{∞} φ(t − ℓTs) φ*(t − ℓ′Ts) dt = I{ℓ = ℓ′},  ℓ, ℓ′ ∈ ℤ,
holds iff τ ↦ R_φφ(τ) is a Nyquist Pulse of parameter Ts. And
  R_φφ(τ) = ∫_{−∞}^{∞} |φ̂(f)|² e^{i2πfτ} df,  τ ∈ ℝ.
Minimum Bandwidth of Shift-Orthonormal Pulses
Let Ts > 0 be fixed, and let φ be an energy-limited signal that is
bandlimited to W Hz. If the time shifts of φ by integer multiples
of Ts are orthonormal, then
  W ≥ 1/(2Ts).
Equality is achieved if
  φ̂(f) = √Ts I{ |f| ≤ 1/(2Ts) },  f ∈ ℝ,
and, in particular, by the sinc(·) pulse
  φ(t) = (1/√Ts) sinc(t/Ts),  t ∈ ℝ.
[Plots: φ̂(f); |φ̂(f)|²; the translates |φ̂(f − 1/Ts)|² and |φ̂(f + 1/Ts)|²; and
their sum |φ̂(f + 1/Ts)|² + |φ̂(f)|² + |φ̂(f − 1/Ts)|² on −1/(2Ts) ≤ f ≤ 1/(2Ts).]
Excess Bandwidth
Band-Edge Symmetry
Assume Ts > 0 and φ a real energy-limited signal that is bandlimited to
W Hz, where W < 1/Ts, so φ is of excess bandwidth smaller than 100%.
Then φ is shift orthonormal iff
  |φ̂( 1/(2Ts) − f )|² + |φ̂( 1/(2Ts) + f )|² ≡ Ts,  0 < f ≤ 1/(2Ts).
[Plot: |φ̂(f)|² near the band edge f = 1/(2Ts); the roll-off is symmetric
about the point (1/(2Ts), Ts/2).]
Sketch of Proof (1)
[Plot: |φ̂(f)|² near f = 1/(2Ts), illustrating the band-edge symmetry about
(1/(2Ts), Ts/2).]
Sketch of Proof (2)
• φ real ⇒ |φ̂(f)| symmetric ⇒ it suffices to consider f ≥ 0.
• Excess bandwidth < 100% ⇒ only two terms contribute to the sum (for f > 0):
  |φ̂(f)|² + |φ̂(f − 1/Ts)|² ≡ Ts,  0 ≤ f < 1/(2Ts).
Substituting f′ ≜ 1/(2Ts) − f leads to the condition
  |φ̂( 1/(2Ts) − f′ )|² + |φ̂( −f′ − 1/(2Ts) )|² ≡ Ts,  0 < f′ ≤ 1/(2Ts).
Raised-Cosine (1)
[Plot: |φ̂(f)|², flat at Ts for |f| ≤ (1−β)/(2Ts), rolling off to zero at
(1+β)/(2Ts).]
  |φ̂(f)|² = Ts,                                                   0 ≤ |f| ≤ (1−β)/(2Ts),
  |φ̂(f)|² = (Ts/2) [ 1 + cos( (πTs/β)( |f| − (1−β)/(2Ts) ) ) ],  (1−β)/(2Ts) < |f| ≤ (1+β)/(2Ts),
  |φ̂(f)|² = 0,                                                   |f| > (1+β)/(2Ts).
Raised-Cosine (2)
  φ̂(f) = √Ts,                                                      0 ≤ |f| ≤ (1−β)/(2Ts),
  φ̂(f) = √(Ts/2) √( 1 + cos( (πTs/β)( |f| − (1−β)/(2Ts) ) ) ),    (1−β)/(2Ts) < |f| ≤ (1+β)/(2Ts),
  φ̂(f) = 0,                                                       |f| > (1+β)/(2Ts).
  φ(t) = (4β/(π√Ts)) · [ cos( (1+β)π t/Ts ) + sin( (1−β)π t/Ts ) / (4β t/Ts) ] / ( 1 − (4β t/Ts)² ),  t ∈ ℝ.
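That the raised-cosine family satisfies Nyquist's criterion—its folded spectrum is constant—can be checked numerically (a sketch, not from the slides; Ts and β are illustrative choices):

```python
import numpy as np

Ts, beta = 1.0, 0.35

def rc_psd(f):
    # |phi_hat(f)|^2 for the raised-cosine family
    f = np.abs(np.asarray(f, dtype=float))
    lo, hi = (1 - beta)/(2*Ts), (1 + beta)/(2*Ts)
    out = np.where(f <= lo, Ts, 0.0)
    mid = (f > lo) & (f <= hi)
    return np.where(mid, 0.5*Ts*(1 + np.cos(np.pi*Ts/beta*(f - lo))), out)

# Nyquist's criterion: sum_j |phi_hat(f + j/Ts)|^2 = Ts on |f| <= 1/(2 Ts)
f = np.linspace(-0.5/Ts, 0.5/Ts, 1001)
total = sum(rc_psd(f + j/Ts) for j in range(-3, 4))
assert np.allclose(total, Ts)
```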
[Plots: the pulse φ(t) and its self-similarity function R_φφ(τ), which
vanishes at τ = ±Ts, ±2Ts, . . .]
A refresher on discrete-time stochastic processes.
Read Chapter 13!
Stationary Processes
A discrete-time SP (X_ν) is stationary (or strict-sense stationary, or
strongly stationary) if for every n ∈ ℕ and all integers η, η′,
  ( X_η, . . . , X_{η+n−1} ) and ( X_{η′}, . . . , X_{η′+n−1} ) have the same law.
In particular,
  (X_ν, ν ∈ ℤ) stationary ⟹ X_ν has the law of X₁,  ν ∈ ℤ,
and
  (X_ν, ν ∈ ℤ) stationary ⟹ (X_ν, X_ν′) has the law of (X_{η+ν}, X_{η+ν′}),  ν, ν′, η ∈ ℤ.
Wide-Sense Stationary Processes
We say that a discrete-time SP (X_ν, ν ∈ ℤ) is wide-sense stationary
(WSS) if it is of finite variance, its mean E[X_ν] does not depend on ν,
and Cov[X_ν, X_ν′] depends on ν and ν′ only via the difference ν − ν′.
Choosing ν = ν′ we obtain
  (X_ν, ν ∈ ℤ) WSS ⟹ Var[X_ν] = Var[X₁],  ν ∈ ℤ.
Stationarity and Wide-Sense Stationarity
Every finite-variance discrete-time stationary SP is WSS.
Indeed,

  (Xν, ν ∈ Z) stationary ⟹ Xν has the same law as X1,  ν ∈ Z,

so

  (Xν, ν ∈ Z) stationary ⟹ E[Xν] = E[X1],  ν ∈ Z.

Likewise,

  (Xν, ν ∈ Z) stationary ⟹ (Xν, Xν′) has the same law as (Xη+ν, Xη+ν′),  ν, ν′, η ∈ Z,

so

  (Xν, ν ∈ Z) stationary ⟹ E[Xν Xν′] = E[Xη+ν Xη+ν′],  ν, ν′, η ∈ Z.
c
Lecture 5,
Amos Lapidoth 2017
The Autocovariance Function
The autocovariance function KXX : Z → R of a WSS discrete-time SP (Xν) is defined by

  KXX(η) ≜ Cov[Xν+η, Xν],  η ∈ Z

(which, by wide-sense stationarity, does not depend on ν).

Key Properties: KXX is of positive type, i.e.,

  ∑_{ν=1}^{n} ∑_{ν′=1}^{n} αν αν′ KXX(ν − ν′) ≥ 0,  α1, …, αn ∈ R.
c
Lecture 5,
Amos Lapidoth 2017
Proof of Key Properties
  ∑_{ν=1}^{n} ∑_{ν′=1}^{n} αν αν′ KXX(ν − ν′) = ∑_{ν=1}^{n} ∑_{ν′=1}^{n} αν αν′ Cov[Xν, Xν′]

    = Cov[ ∑_{ν=1}^{n} αν Xν , ∑_{ν′=1}^{n} αν′ Xν′ ]

    = Var[ ∑_{ν=1}^{n} αν Xν ]

    ≥ 0.
c
Lecture 5,
Amos Lapidoth 2017
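The positive-type property can be seen numerically: the matrix K[i, j] = KXX(i − j) built from any valid autocovariance must be positive semidefinite. A minimal sketch, not from the lecture, using the AR(1)-style autocovariance KXX(η) = ρ^{|η|} (an arbitrary illustrative choice):

```python
import numpy as np

# K_XX(eta) = rho^{|eta|} is a valid autocovariance (of an AR(1) process),
# so the Toeplitz matrix K[i, j] = K_XX(i - j) is positive semidefinite.
rho, n = 0.8, 8
idx = np.arange(n)
K = rho ** np.abs(np.subtract.outer(idx, idx))

eig_min = np.linalg.eigvalsh(K).min()          # smallest eigenvalue

rng = np.random.default_rng(0)
# The quadratic form sum_{v,v'} a_v a_v' K_XX(v - v') for random coefficients.
quad_min = min(float(a @ K @ a) for a in rng.standard_normal((1000, n)))
```

Both the smallest eigenvalue and every sampled quadratic form come out nonnegative, matching the proof above.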
The Power Spectral Density Function
We say that the discrete-time WSS SP Xν is of
power spectral density SXX if SXX : [−1/2, 1/2) → R is
nonnegative, symmetric, integrable, and
  KXX(η) = ∫_{−1/2}^{1/2} SXX(θ) e^{i2πηθ} dθ,  η ∈ Z.   (13)
c
Lecture 5,
Amos Lapidoth 2017
PSD when KXX Is Absolutely Summable
If

  ∑_{η=−∞}^{∞} |KXX(η)| < ∞,

then the PSD exists and is given by

  SXX(θ) = ∑_{η=−∞}^{∞} KXX(η) e^{−i2πθη},  θ ∈ [−1/2, 1/2).
c
Lecture 5,
Amos Lapidoth 2017
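The pair KXX ↔ SXX can be checked numerically: summing the series for SXX and then integrating per (13) should recover the autocovariance. A minimal sketch, not from the lecture; the choice KXX(η) = ρ^{|η|} and the grid size are illustrative assumptions.

```python
import numpy as np

# Autocovariance K(eta) = rho^{|eta|}; its PSD is S(theta) = sum_eta K(eta) e^{-i 2 pi theta eta}.
rho, M, Ngrid = 0.6, 60, 2000
eta = np.arange(-M, M + 1)
theta = (np.arange(Ngrid) + 0.5) / Ngrid - 0.5          # midpoint grid on [-1/2, 1/2)
S = (rho ** np.abs(eta)[:, None] * np.exp(-2j * np.pi * np.outer(eta, theta))).sum(axis=0)

# Inverting per (13) (integral approximated by the grid mean) should recover K(1) = rho.
K1 = (S * np.exp(2j * np.pi * 1 * theta)).mean()
```

S comes out real, nonnegative, and symmetric, and the inversion recovers KXX(1) = ρ essentially exactly.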
Next Week
Thank you!
c
Lecture 5,
Amos Lapidoth 2017
Communication and Detection Theory: Lecture 6
Amos Lapidoth
ETH Zurich
c
Lecture 6,
Amos Lapidoth 2017
Today
• Energy in PAM.
• Defining power in PAM.
• Zero-mean signals for additive noise channels.
• The power when:
• The symbols form a centered WSS discrete-time SP.
• Bi-infinite block-mode.
• The pulse shape is shift orthonormal.
c
Lecture 6,
Amos Lapidoth 2017
Pulse Amplitude Modulation
ϕ : {0, 1}k → Rn
(d1 , . . . , dk ) 7→ (x1 , . . . , xn ),
c
Lecture 6,
Amos Lapidoth 2017
Block-Mode Mapping of Bits to Real Numbers
where
k
k0 = K.
K
c
Lecture 6,
Amos Lapidoth 2017
Energy in Transmitting a Single Block (1)
The energy,

  ω ↦ ∫_{−∞}^{∞} X²(ω, t) dt,

is a RV; its expectation, the expected energy, is

  E ≜ E[ ∫_{−∞}^{∞} X²(t) dt ].
c
Lecture 6,
Amos Lapidoth 2017
E = E[ ∫_{−∞}^{∞} X²(t) dt ]

  = A² E[ ∫_{−∞}^{∞} ( ∑_{ℓ=1}^{N} Xℓ g(t − ℓTs) )² dt ]

  = A² E[ ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} Xℓ g(t − ℓTs) Xℓ′ g(t − ℓ′Ts) dt ]

  = A² ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] g(t − ℓTs) g(t − ℓ′Ts) dt

  = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] ∫_{−∞}^{∞} g(t − ℓTs) g(t − ℓ′Ts) dt

  = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts ).
c
Lecture 6,
Amos Lapidoth 2017
Energy in Transmitting a Single Block (3)
• Since

  Rgg(τ) = ∫_{−∞}^{∞} |ĝ(f)|² e^{i2πfτ} df,  τ ∈ R,

we can also express the energy as

  E = A² ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|² df.

• The expression sometimes simplifies:

  E = A² ‖g‖₂² ∑_{ℓ=1}^{N} E[Xℓ²],  if the time shifts {t ↦ g(t − ℓTs)} are orthogonal;

  E = A² ‖g‖₂² ∑_{ℓ=1}^{N} E[Xℓ²],  if the (Xℓ, ℓ ∈ Z) are zero-mean and uncorrelated.
c
Lecture 6,
Amos Lapidoth 2017
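The single-block energy formula E = A² ∑ℓ ∑ℓ′ E[Xℓ Xℓ′] Rgg((ℓ − ℓ′)Ts) can be checked against a Monte Carlo estimate. A minimal sketch, not from the lecture; the triangular pulse, the symbol covariance, and all parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A, Ts, N, dt = 1.5, 1.0, 4, 0.02

# Sampled pulse: a triangle of width 2*Ts, so adjacent shifts overlap.
tp = np.arange(-1.0, 1.0, dt)
g = 1.0 - np.abs(tp)

# Shifted pulses g(t - l*Ts) on a common grid covering all N shifts.
step = int(round(Ts / dt))
length = len(g) + (N - 1) * step
G = np.zeros((N, length))
for l in range(N):
    G[l, l * step : l * step + len(g)] = g

Rgg = dt * (G @ G.T)                         # Rgg((l - l')Ts) on the grid

M = rng.standard_normal((N, N))              # symbols X = M Z, so E[X X^T] = M M^T
cov = M @ M.T
E_formula = A**2 * np.sum(cov * Rgg)

Z = rng.standard_normal((20000, N))
X = Z @ M.T
S = A * (X @ G)                              # each row: one sampled realization of X(t)
E_sim = dt * (S**2).sum(axis=1).mean()       # Monte Carlo expected energy
```

The simulated expected energy agrees with the double-sum formula up to Monte Carlo error.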
Defining Power
The power P in the SP X(t), t ∈ R is
  P ≜ lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ].
c
Lecture 6,
Amos Lapidoth 2017
Gordian Knot
c
Lecture 6,
Amos Lapidoth 2017
The Alexandrian Solution
We shall assume
• Bounded Symbols:

  |Xℓ| ≤ γ,  ℓ ∈ Z.

• Decaying Pulse:

  |g(t)| ≤ β / ( 1 + |t/Ts|^{1+α} ),  t ∈ R,

for some
c
Lecture 6,
Amos Lapidoth 2017
Generating Infinitely Many Symbols
• Just assume that (Xν) forms a WSS SP.
• Bi-Infinite Block Encoding.
• Shift-orthonormal pulse shape.
c
Lecture 6,
Amos Lapidoth 2017
Bi-Infinite Block Encoding
c
Lecture 6,
Amos Lapidoth 2017
Zero-Mean Signals for Additive-Noise Channels
[Block diagram: {Dj} is mapped by TX1 to X; the transmitter subtracts a constant c to form X − c; the channel adds noise N to give Y = X − c + N; the receiver adds c back to obtain X + N, from which RX1 produces the guesses {Dj^est}.]

The choice c = E[W] minimizes E[(W − c)²]:

  E[(W − c)²]
    = E[ ( (W − E[W]) + (E[W] − c) )² ]
    = E[(W − E[W])²] + 2 E[W − E[W]] (E[W] − c) + (E[W] − c)²    (the middle term is zero)
    = E[(W − E[W])²] + (E[W] − c)²
    ≥ E[(W − E[W])²]
    = Var[W].

To minimize

  (1/(2T)) ∫_{−T}^{T} E[ (X(t) − c(t))² ] dt,
c
Lecture 6,
Amos Lapidoth 2017
The Power when X` Is Zero-Mean and WSS
We ignore how the symbols were generated and assume
  E[Xℓ] = 0,  ℓ ∈ Z,
c
Lecture 6,
Amos Lapidoth 2017
E[ ∫_τ^{τ+Ts} X²(t) dt ]
  = A² E[ ∫_τ^{τ+Ts} ( ∑_{ℓ=−∞}^{∞} Xℓ g(t − ℓTs) )² dt ]
  = A² ∫_τ^{τ+Ts} ∑_{ℓ=−∞}^{∞} ∑_{ℓ′=−∞}^{∞} E[Xℓ Xℓ′] g(t − ℓTs) g(t − ℓ′Ts) dt
  = A² ∫_τ^{τ+Ts} ∑_{ℓ=−∞}^{∞} ∑_{m=−∞}^{∞} E[Xℓ Xℓ+m] g(t − ℓTs) g( t − (ℓ + m)Ts ) dt
  = A² ∫_τ^{τ+Ts} ∑_{m=−∞}^{∞} KXX(m) ∑_{ℓ=−∞}^{∞} g(t − ℓTs) g( t − (ℓ + m)Ts ) dt
  = A² ∑_{m=−∞}^{∞} KXX(m) ∑_{ℓ=−∞}^{∞} ∫_{τ−ℓTs}^{τ+Ts−ℓTs} g(t′) g(t′ − mTs) dt′
  = A² ∑_{m=−∞}^{∞} KXX(m) ∫_{−∞}^{∞} g(t′) g(t′ − mTs) dt′
  = A² ∑_{m=−∞}^{∞} KXX(m) Rgg(mTs).
c
Lecture 6,
Amos Lapidoth 2017
The Power when X` Is Zero-Mean and WSS
c
Lecture 6,
Amos Lapidoth 2017
The Sandwich Theorem
If bn ≤ an ≤ cn for n = 1, 2, 3, …, and both bn → a and cn → a, then an → a.
c
Lecture 6,
Amos Lapidoth 2017
First Application of the Sandwich Theorem
Using

  ξ − 1 < ⌊ξ⌋ ≤ ξ,  ξ ∈ R,

we obtain

  (1/(2T)) (2T/Ts − 1) < (1/(2T)) ⌊2T/Ts⌋ ≤ (1/(2T)) (2T/Ts),

where both bounds tend to 1/Ts. Consequently,

  lim_{T→∞} (1/(2T)) ⌊2T/Ts⌋ = 1/Ts,  Ts > 0.
c
Lecture 6,
Amos Lapidoth 2017
Second Application of the Sandwich Theorem
Using

  ξ ≤ ⌈ξ⌉ < ξ + 1,  ξ ∈ R,

we obtain

  (1/(2T)) (2T/Ts) ≤ (1/(2T)) ⌈2T/Ts⌉ < (1/(2T)) (2T/Ts) + 1/(2T).

Consequently,

  lim_{T→∞} (1/(2T)) ⌈2T/Ts⌉ = 1/Ts,  Ts > 0.
c
Lecture 6,
Amos Lapidoth 2017
Third Application of the Sandwich Theorem
  ⌊2T/Ts⌋ E[ ∫_τ^{τ+Ts} X²(t) dt ] ≤ E[ ∫_{−T}^{T} X²(t) dt ] ≤ ⌈2T/Ts⌉ E[ ∫_τ^{τ+Ts} X²(t) dt ].

Dividing by 2T,

  (1/(2T)) ⌊2T/Ts⌋ E[ ∫_τ^{τ+Ts} X²(t) dt ]
    ≤ (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] ≤
  (1/(2T)) ⌈2T/Ts⌉ E[ ∫_τ^{τ+Ts} X²(t) dt ],

where the outer coefficients tend to 1/Ts. Using this and

  E[ ∫_τ^{τ+Ts} X²(t) dt ] = A² ∑_{m=−∞}^{∞} KXX(m) Rgg(mTs),

we conclude

  P = (A²/Ts) ∑_{m=−∞}^{∞} KXX(m) Rgg(mTs).
c
Lecture 6,
Amos Lapidoth 2017
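The power formula P = (A²/Ts) ∑_m KXX(m) Rgg(mTs) can be compared with the time-averaged power of a long simulated PAM signal. A minimal sketch, not from the lecture; the MA(1) symbol process and triangular pulse are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
A, Ts, dt, L = 1.0, 1.0, 0.02, 8000
step = int(round(Ts / dt))

# MA(1) symbols: X_l = Z_l + 0.5 Z_{l-1}  =>  K_XX(0) = 1.25, K_XX(+-1) = 0.5.
Z = rng.standard_normal(L + 1)
X = Z[1:] + 0.5 * Z[:-1]

tp = np.arange(-1.0, 1.0, dt)
g = 1.0 - np.abs(tp)                         # triangle pulse of width 2*Ts

ups = np.zeros(L * step)
ups[::step] = X
s = A * np.convolve(ups, g)                  # sampled PAM signal
P_sim = dt * (s**2).sum() / (L * Ts)         # time-averaged power

r = dt * np.correlate(g, g, "full")          # Rgg on the grid
mid = len(g) - 1
P_formula = (A**2 / Ts) * (1.25 * r[mid] + 2 * 0.5 * r[mid + step])
```

Only the lags m = 0, ±1 contribute here since KXX vanishes elsewhere for an MA(1) process; the simulated and predicted powers agree up to the finite-horizon averaging error.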
Special Cases and Different Forms
Since
∞
1 2 X
P= A KXX (m) Rgg (mTs ),
Ts m=−∞
1 2
P= A kgk22 σX
2
, 2
X` centered, variance σX , uncorrelated .
Ts
Also,
∞ Z ∞
1 2 X
P= A KXX (m) |ĝ(f )|2 ei2πf mTs df
Ts m=−∞ | −∞ {z }
Rgg (mTs )
2 Z ∞ ∞
X
A
= KXX (m) ei2πf mTs |ĝ(f )|2 df.
Ts −∞ m=−∞
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (1)
  Dν ≜ (DνK+1, …, DνK+K),  ν ∈ Z,
  Xν ≜ enc(Dν),  ν ∈ Z,
  (XνN+1, …, XνN+N) = Xν,  ν ∈ Z.
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (2)
Thus,
Es
P= .
Ts
c
Lecture 6,
Amos Lapidoth 2017
As in the single-block computation,

  E = E[ ∫_{−∞}^{∞} X²(t) dt ]
    = A² E[ ∫_{−∞}^{∞} ( ∑_{ℓ=1}^{N} Xℓ g(t − ℓTs) )² dt ]
    = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts ).
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (3)
  E = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts ),

so

  P = ( A² / (N Ts) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts ),

or

  P = ( A² / (N Ts) ) ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|² df.
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (4)
To derive the power we express X(·) as

  X(t) = A ∑_{ℓ=−∞}^{∞} Xℓ g(t − ℓTs)
       = A ∑_{ν=−∞}^{∞} ∑_{η=1}^{N} XνN+η g( t − (νN + η)Ts )
       = A ∑_{ν=−∞}^{∞} u( Xν, t − νN Ts ),  t ∈ R,

where

  u : (x1, …, xN, t) ↦ ∑_{η=1}^{N} xη g(t − ηTs).
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (5)
• Because the law of Dν does not depend on ν, neither does
the law of Xν (= enc(Dν )):
L
Xν = Xν 0 , ν, ν 0 ∈ Z.
c
Lecture 6,
Amos Lapidoth 2017
E[ ∫_τ^{τ+NTs} X²(t) dt ]
  = E[ ∫_τ^{τ+NTs} ( A ∑_{ν=−∞}^{∞} u(Xν, t − νNTs) )² dt ]
  = A² ∫_τ^{τ+NTs} ∑_{ν=−∞}^{∞} ∑_{ν′=−∞}^{∞} E[ u(Xν, t − νNTs) u(Xν′, t − ν′NTs) ] dt
  = A² ∫_τ^{τ+NTs} ∑_{ν=−∞}^{∞} E[ u²(Xν, t − νNTs) ] dt       (cross terms vanish: the blocks are independent and zero-mean)
  = A² ∫_τ^{τ+NTs} ∑_{ν=−∞}^{∞} E[ u²(X0, t − νNTs) ] dt       (the blocks are identically distributed)
  = A² ∑_{ν=−∞}^{∞} ∫_{τ−νNTs}^{τ−(ν−1)NTs} E[ u²(X0, t′) ] dt′
  = A² ∫_{−∞}^{∞} E[ u²(X0, t′) ] dt′
  = E[ ∫_{−∞}^{∞} ( A ∑_{ℓ=1}^{N} Xℓ g(t′ − ℓTs) )² dt′ ],  τ ∈ R.
c
Lecture 6,
Amos Lapidoth 2017
The Power in Bi-Infinite Block-Mode (7)
There are ⌊2T/(NTs)⌋ disjoint length-NTs half-open intervals contained in the interval [−T, T), and ⌈2T/(NTs)⌉ such intervals suffice to cover [−T, T), so

  ⌊2T/(NTs)⌋ E[ ∫_{−∞}^{∞} ( A ∑_{ℓ=1}^{N} Xℓ g(t − ℓTs) )² dt ]
    ≤ E[ ∫_{−T}^{T} X²(t) dt ] ≤
  ⌈2T/(NTs)⌉ E[ ∫_{−∞}^{∞} ( A ∑_{ℓ=1}^{N} Xℓ g(t − ℓTs) )² dt ].
c
Lecture 6,
Amos Lapidoth 2017
Time Shifts of Pulse Shape Are Orthonormal
  X(t) = A ∑_{ℓ=−∞}^{∞} Xℓ φ(t − ℓTs),  t ∈ R,

  ∫_{−∞}^{∞} φ(t − ℓTs) φ(t − ℓ′Ts) dt = I{ℓ = ℓ′},  ℓ, ℓ′ ∈ Z.

Assume

  |φ(t)| ≤ β / ( 1 + |t/Ts|^{1+α} ),  t ∈ R,

for some α, β > 0, and that |Xℓ| ≤ γ, ℓ ∈ Z. Then

  lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] = (A²/Ts) lim_{L→∞} (1/(2L+1)) ∑_{ℓ=−L}^{L} E[Xℓ²].
c
Lecture 6,
Amos Lapidoth 2017
Some Intuition (1)
We focus on the case Ts = 1. We need to study

  E[ ∫_{−T}^{T} X²(t) dt ] = E[ ∫_{−∞}^{∞} ( A ∑_{ℓ=−∞}^{∞} Xℓ φ(t − ℓTs) )² I{|t| ≤ T} dt ].

Define

  φℓ : t ↦ φ(t − ℓ)

and its “windowed version”

  φℓ,w : t ↦ φ(t − ℓ) I{|t| ≤ T},

so

  ∫_{−T}^{T} X²(t) dt = A² ‖ ∑_{ℓ=−∞}^{∞} Xℓ φℓ,w ‖₂².

The windowed time-shifts {φℓ,w} are not orthogonal. . .
c
Lecture 6,
Amos Lapidoth 2017
Some Intuition (2)
For fixed (large) ν and all T > ν, define

  X0 = ∑_{|ℓ| ≤ T−ν} Xℓ φℓ,w,
  X1 = ∑_{T−ν < |ℓ| ≤ T+ν} Xℓ φℓ,w,
  X2 = ∑_{T+ν < |ℓ| < ∞} Xℓ φℓ,w.

We seek

  E[ ‖X0 + X1 + X2‖₂² ].

[Time axis with marks at −T−ν, −T, −T+ν, T−ν, T, T+ν.]
c
Lecture 6,
Amos Lapidoth 2017
Some Intuition (4)
(1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ]
  = (1/(2T)) E[ ‖X0 + X1 + X2‖₂² ]
  ≈ (1/(2T)) E[ ‖X0‖₂² ]
  = (1/(2T)) A² E[ ‖ ∑_{|ℓ| ≤ T−ν} Xℓ φℓ,w ‖₂² ]
  ≈ (1/(2T)) A² E[ ‖ ∑_{|ℓ| ≤ T−ν} Xℓ φℓ ‖₂² ]
  = (1/(2T)) A² ∑_{ℓ=−(T−ν)}^{T−ν} E[Xℓ²]
  = ( (2(T−ν)+1) / (2T) ) · A² · ( 1/(2(T−ν)+1) ) ∑_{ℓ=−(T−ν)}^{T−ν} E[Xℓ²],

where the first factor tends to 1.
c
Lecture 6,
Amos Lapidoth 2017
Recap (1)
• Energy in transmitting a single block:

  E = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts ).

• Energy per bit and per real symbol:

  Eb ≜ E/K  [energy/bit],
  Es ≜ E/N  [energy/real symbol].
c
Lecture 6,
Amos Lapidoth 2017
Recap (2)
c
Lecture 6,
Amos Lapidoth 2017
Recap (3)
c
Lecture 6,
Amos Lapidoth 2017
Next Week
Thank you!
c
Lecture 6,
Amos Lapidoth 2017
Communication and Detection Theory: Lecture 7
Amos Lapidoth
ETH Zurich
April 4, 2017
c
Lecture 7,
Amos Lapidoth 2017
Today
c
Lecture 7,
Amos Lapidoth 2017
What Are the Issues?
c
Lecture 7,
Amos Lapidoth 2017
Two Approaches to Definitions
1. How the quantity is computed, e.g., the derivative of y at ξ:

  lim_{h→0} ( y(ξ + h) − y(ξ) ) / h.

2. How is the quantity used:
• A map’s coloring number is the minimum number of colors that suffice to color the countries under the restriction that no two countries sharing a border have the same color.
c
Lecture 7,
Amos Lapidoth 2017
The Preservation-of-Sweat Law
c
Lecture 7,
Amos Lapidoth 2017
An Example: Charge Density
%(x′, y′, z′) is

  lim_{Δ↓0} ( Charge in Box { |x − x′| ≤ Δ/2, |y − y′| ≤ Δ/2, |z − z′| ≤ Δ/2 } ) / ( Volume of that box ),

i.e.,

  lim_{Δ↓0} ( Charge in Box { |x − x′| ≤ Δ/2, |y − y′| ≤ Δ/2, |z − z′| ≤ Δ/2 } ) / Δ³.
c
Lecture 7,
Amos Lapidoth 2017
Pros and Cons of the Second Approach
c
Lecture 7,
Amos Lapidoth 2017
The Definition of Charge Density Revisited
%(·) is the charge density if for every region D ⊂ R3
Z
Charge in D = %(x, y, z) dx dy dz, D ⊂ R3 .
(x,y,z)∈D
c
Lecture 7,
Amos Lapidoth 2017
Operational Power Spectral Density

This suggests something like

  Power of X in D = ∫_{f ∈ D} SXX(f) df,  D ⊂ R,

that is,

  Power of X in D = ∫_{all frequencies} I{f ∈ D} SXX(f) df,  D ⊂ R.

But what does this mean?
c
Lecture 7,
Amos Lapidoth 2017
Some Hand-waving
c
Lecture 7,
Amos Lapidoth 2017
Uniqueness
c
Lecture 7,
Amos Lapidoth 2017
Insisting on Symmetry
Suppose we have found some S(·) satisfying
Z ∞
Power of X ? h = |ĥ(f )|2 S(f ) df, h real and “nice.”
−∞
Define
1
S̃(f ) = S(f ) + S(−f ) , f ∈ R.
2
Then
S̃(f ) + S̃(−f ) = S(f ) + S(−f ), f ∈ R,
so
Z ∞
Power of X ? h = |ĥ(f )|2 S̃(f ) df, h real and “nice”
−∞
and
S̃(·) is symmetric.
c
Lecture 7,
Amos Lapidoth 2017
The Definition of the Operational PSD
The continuous-time real SP X(t) is of operational power
spectral density SXX if it is a measurable SP; SXX : R → R is
integrable and symmetric; and for every stable real filter of impulse
response h ∈ L1
Z ∞
Power in X ? h = SXX (f ) |ĥ(f )|2 df.
−∞
c
Lecture 7,
Amos Lapidoth 2017
Uniqueness
If both SXX and S′XX(·) are operational PSDs for (X(t)), then the set of frequencies at which they differ is of Lebesgue measure zero.
(Corollary 15.3.6)
c
Lecture 7,
Amos Lapidoth 2017
Nonnegativity
If X(t) is of operational PSD SXX , then SXX must be
nonnegative except possibly on a set of frequencies of Lebesgue
measure zero.
(Corollary 15.3.3)
c
Lecture 7,
Amos Lapidoth 2017
Filtering PAM Signals
If you know how to compute the power in PAM, then you also
know how to compute the power in a filtered PAM.
c
Lecture 7,
Amos Lapidoth 2017
Filtering a PAM Signal—Proof
• Convolution is linear:

  (X ⋆ h)(t) = ( ( σ ↦ A ∑_{ℓ=−∞}^{∞} Xℓ g(σ − ℓTs) ) ⋆ h )(t)
             = A ∑_{ℓ=−∞}^{∞} Xℓ ∫_{−∞}^{∞} h(s) g(t − s − ℓTs) ds
             = A ∑_{ℓ=−∞}^{∞} Xℓ (g ⋆ h)(t − ℓTs),  t ∈ R.
c
Lecture 7,
Amos Lapidoth 2017
X` Is Centered and WSS
  P = (A²/Ts) ∑_{m=−∞}^{∞} KXX(m) Rgg(mTs)
    = ∫_{−∞}^{∞} (A²/Ts) ∑_{m=−∞}^{∞} KXX(m) e^{i2πfmTs} |ĝ(f)|² df.

Since (g ⋆ h)^(f) = ĝ(f) ĥ(f), f ∈ R:

  Power in X ⋆ h = ∫_{−∞}^{∞} [ (A²/Ts) ∑_{m=−∞}^{∞} KXX(m) e^{i2πfmTs} |ĝ(f)|² ] |ĥ(f)|² df,

where the bracketed term is SXX(f).
c
Lecture 7,
Amos Lapidoth 2017
Bi-Infinite Block Mode
  P = ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] Rgg( (ℓ − ℓ′)Ts )
    = ∫_{−∞}^{∞} ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|² df.

Since (g ⋆ h)^(f) = ĝ(f) ĥ(f), f ∈ R:

  Power in X ⋆ h = ∫_{−∞}^{∞} [ ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|² ] |ĥ(f)|² df,

where the bracketed term is SXX(f). Use

  ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} aℓ,ℓ′ = ∑_{ℓ=1}^{N} aℓ,ℓ + ∑_{ℓ=2}^{N} ∑_{ℓ′=1}^{ℓ−1} ( aℓ,ℓ′ + aℓ′,ℓ ).

What about when the time shifts of the pulse shape by integer multiples of Ts are orthonormal?

  lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] = (A²/Ts) lim_{L→∞} (1/(2L+1)) ∑_{ℓ=−L}^{L} E[Xℓ²].

This property isn’t preserved under filtering: φ ⋆ h need not have it.
c
Lecture 7,
Amos Lapidoth 2017
The Bandwidth of a SP
We say that a SP X(t) of operational PSD SXX is
bandlimited to W Hz if, except on a set of frequencies of Lebesgue
measure zero, SXX (f ) is zero whenever |f | > W.
The smallest
W to which X(t) is limited is the bandwidth of
X(t) .
c
Lecture 7,
Amos Lapidoth 2017
The Bandwidth of PAM
c
Lecture 7,
Amos Lapidoth 2017
Proof (1)
SXX(f) contains the factor |ĝ(f)|² in all the expressions above, so

  ĝ(f) = 0 ⟹ SXX(f) = 0.
c
Lecture 7,
Amos Lapidoth 2017
Proof (2)
  SXX(f) = ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|².

There could be frequencies where SXX(f) is zero but ĝ(f) is not, i.e., the zeros of

  σ(f) ≜ ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts}      (set m ≜ ℓ − ℓ′)
        = ∑_{m=−N+1}^{N−1} γm e^{i2πfmTs}
        = ∑_{m=−N+1}^{N−1} γm z^m  evaluated at z = e^{i2πfTs},

where

  γm = ( A²/(NTs) ) ∑_{ℓ=max{1, m+1}}^{min{N, N+m}} E[Xℓ Xℓ−m],  m ∈ {−N+1, …, N−1}.
c
Lecture 7,
Amos Lapidoth 2017
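The regrouping of the double sum into the trigonometric polynomial ∑_m γm e^{i2πfmTs} can be verified numerically. A minimal sketch, not from the lecture; the symmetric matrix C standing in for E[Xℓ Xℓ′] and the test frequency are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
N, A, Ts = 5, 1.0, 1.0
M = rng.standard_normal((N, N))
C = M @ M.T                                  # stands in for the matrix E[X_l X_l']

def gamma(m):
    """gamma_m = (A^2/(N Ts)) * sum over valid l of E[X_l X_{l-m}]."""
    lo, hi = max(1, m + 1), min(N, N + m)
    return (A**2 / (N * Ts)) * sum(C[l - 1, l - m - 1] for l in range(lo, hi + 1))

f = 0.317
double_sum = (A**2 / (N * Ts)) * sum(
    C[l - 1, lp - 1] * np.exp(2j * np.pi * f * (l - lp) * Ts)
    for l in range(1, N + 1) for lp in range(1, N + 1)
)
poly = sum(gamma(m) * np.exp(2j * np.pi * f * m * Ts) for m in range(-N + 1, N))
```

The double sum and the γm-polynomial agree to machine precision, and since C is symmetric the value is real, as σ(f) must be.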
Proof (3)
SXX (f ) is zero while ĝ(f ) is not only if ei2πf Ts is a root of
  z ↦ ∑_{m=−N+1}^{N−1} γm z^m.
c
Lecture 7,
Amos Lapidoth 2017
Recap (1)
• X(t) is of operational PSD SXX if SXX : R → R is
symmetric and for every stable real filter
  Power in X ⋆ h = ∫_{−∞}^{∞} SXX(f) |ĥ(f)|² df.
c
Lecture 7,
Amos Lapidoth 2017
Recap (2)
• If (Xℓ) is WSS and centered,

  SXX(f) = (A²/Ts) ∑_{m=−∞}^{∞} KXX(m) e^{i2πfmTs} |ĝ(f)|².

• In bi-infinite block-mode,

  SXX(f) = ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Xℓ Xℓ′] e^{i2πf(ℓ−ℓ′)Ts} |ĝ(f)|².
c
Lecture 7,
Amos Lapidoth 2017
Recap (3)
• A SP X(t) of operational PSD SXX is bandlimited to W Hz
if, except on a set of frequencies of Lebesgue measure zero,
SXX (f ) is zero whenever |f | > W.
• The smallest W to which X(t) is bandlimited is called the
bandwidth of X(t) .
• The bandwidth of a nonzero PAM signal equals the
bandwidth of its pulse shape.
c
Lecture 7,
Amos Lapidoth 2017
Next Week
Thank you!
c
Lecture 7,
Amos Lapidoth 2017
Communication and Detection Theory: Lecture 8
Amos Lapidoth
ETH Zurich
c
Lecture 8,
Amos Lapidoth 2017
Today
c
Lecture 8,
Amos Lapidoth 2017
Passband Communication
We assume throughout
W
fc > .
2
c
Lecture 8,
Amos Lapidoth 2017
The Good-Old Baseband
The pulse shape

  t ↦ √(2W) sinc(2Wt)

is of bandwidth W, and its time shifts by integer multiples of 1/(2W) are orthonormal. By using it with PAM we can send symbols arriving at rate

  Rs  [real symbol/second]

as the coefficients in a linear combination of orthonormal signals whose bandwidth does not exceed

  Rs/2  [Hz].

For each 1 Hz at baseband we obtain 2 real dimensions per second: our spectral efficiency is

  2  [real dimension/sec] / [baseband Hz].
c
Lecture 8,
Amos Lapidoth 2017
Objective
Transmit real symbols arriving at rate Rs [real symbol/second] as the coefficients in a linear combination of orthonormal passband signals occupying a bandwidth of W Hz around the carrier frequency fc, where W equals Rs/2:

  2  [real dimension/sec] / [passband Hz],

equivalently

  1  [complex dimension/sec] / [passband Hz].
c
Lecture 8,
Amos Lapidoth 2017
The PAM Solution—Not Great
c
Lecture 8,
Amos Lapidoth 2017
QAM in a Nutshell
c
Lecture 8,
Amos Lapidoth 2017
The QAM Signal
• Map the bits to complex symbols

  ϕ : {0, 1}^k → C^n.

• The rate is

  k/n  [bit/complex symbol].

• The baseband representation of the transmitted signal is

  XBB(t) = A ∑_{ℓ=1}^{n} Cℓ g(t − ℓTs),  t ∈ R.
c
Lecture 8,
Amos Lapidoth 2017
Alternative Representation
Using the identity 2 Re(w z) = √2 Re(w) · 2 Re( (1/√2) z ) + √2 Im(w) · 2 Re( i (1/√2) z ) with w = Cℓ:

  XPB(t) = √2 A ∑_{ℓ=1}^{n} Re(Cℓ) gI,ℓ(t) + √2 A ∑_{ℓ=1}^{n} Im(Cℓ) gQ,ℓ(t),  t ∈ R,

where

  gI,ℓ(t) ≜ 2 Re( (1/√2) g(t − ℓTs) e^{i2πfc t} ),  with baseband representation gI,ℓ,BB(t) = (1/√2) g(t − ℓTs),

  gQ,ℓ(t) ≜ 2 Re( i (1/√2) g(t − ℓTs) e^{i2πfc t} ),  with baseband representation gQ,ℓ,BB(t) = (i/√2) g(t − ℓTs).
c
Lecture 8,
Amos Lapidoth 2017
If the Pulse Shape is Real:
  XPB(t) = 2A ∑_{ℓ=1}^{n} Re(Cℓ) g(t − ℓTs) cos(2πfc t)
         − 2A ∑_{ℓ=1}^{n} Im(Cℓ) g(t − ℓTs) sin(2πfc t),  g real.
c
Lecture 8,
Amos Lapidoth 2017
QAM Modulator with a Real Pulse Shape
[Block diagram: {Cℓ} is split into Re(·) and Im(·) branches. The real parts drive a PAM modulator producing A ∑ℓ Re(Cℓ) g(t − ℓTs), which is multiplied by cos(2πfc t); the imaginary parts drive a PAM modulator producing A ∑ℓ Im(Cℓ) g(t − ℓTs), which is multiplied by −sin(2πfc t) (the carrier shifted by 90°). The sum of the two branches is xPB(t)/2.]
c
Lecture 8,
Amos Lapidoth 2017
Bandwidth Considerations
c
Lecture 8,
Amos Lapidoth 2017
Orthogonality Considerations (1)
If the pulse shape φ satisfies

  ∫_{−∞}^{∞} φ(t − ℓTs) φ*(t − ℓ′Ts) dt = I{ℓ = ℓ′},  ℓ, ℓ′ ∈ Z,

then

  …, ψI,−1, ψQ,−1, ψI,0, ψQ,0, ψI,1, ψQ,1, …

are orthonormal functions, where

  ψI,ℓ : t ↦ 2 Re( (1/√2) φ(t − ℓTs) e^{i2πfc t} ),  ℓ ∈ Z,
  ψQ,ℓ : t ↦ 2 Re( i (1/√2) φ(t − ℓTs) e^{i2πfc t} ),  ℓ ∈ Z.
c
Lecture 8,
Amos Lapidoth 2017
Orthogonality Considerations (2)
ψI,` (t)
n z}| {
√ X 1
XPB (t) = 2A Re(C` ) 2 Re √ φ(t − `Ts ) ei2πfc t
2
`=1 | {z }
ψI,`,BB (t)
ψQ,` (t)
n z }| {
√ X 1
+ 2A Im(C` ) 2 Re i √ φ(t − `Ts ) ei2πfc t , t ∈ R.
2
`=1 | {z }
ψQ,`,BB (t)
so xPB and yPB are orthogonal iff hxBB , yBB i is purely imaginary.
c
Lecture 8,
Amos Lapidoth 2017
Orthogonality Considerations (3)
  ⟨ψI,ℓ, ψI,ℓ′⟩ = 2 Re ⟨ψI,ℓ,BB, ψI,ℓ′,BB⟩
               = 2 Re ⟨ t ↦ (1/√2) φ(t − ℓTs), t ↦ (1/√2) φ(t − ℓ′Ts) ⟩
               = Re I{ℓ = ℓ′}
               = I{ℓ = ℓ′},

  ⟨ψQ,ℓ, ψQ,ℓ′⟩ = 2 Re ⟨ψQ,ℓ,BB, ψQ,ℓ′,BB⟩
               = 2 Re ⟨ t ↦ i (1/√2) φ(t − ℓTs), t ↦ i (1/√2) φ(t − ℓ′Ts) ⟩
               = Re( i i* I{ℓ = ℓ′} )
               = I{ℓ = ℓ′},

  ⟨ψI,ℓ, ψQ,ℓ′⟩ = 2 Re ⟨ t ↦ (1/√2) φ(t − ℓTs), t ↦ i (1/√2) φ(t − ℓ′Ts) ⟩
               = Re( i* I{ℓ = ℓ′} ) = 0.
c
Lecture 8,
Amos Lapidoth 2017
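These inner products can be checked numerically for a concrete pulse. A minimal sketch, not from the lecture: it uses the sinc pulse (unit-energy with orthonormal unit shifts) and an arbitrary carrier fc = 5 > W/2; for a real pulse, ψI,ℓ and ψQ,ℓ reduce to the cos/−sin forms used below.

```python
import numpy as np

fc, Ts, dt, T = 5.0, 1.0, 0.01, 200.0
t = (np.arange(-int(T / dt), int(T / dt)) + 0.5) * dt   # symmetric midpoint grid

phi = lambda tau: np.sinc(tau)                 # unit-energy, bandwidth 1/2, shift-orthonormal
psi_I = lambda l: np.sqrt(2) * phi(t - l * Ts) * np.cos(2 * np.pi * fc * t)
psi_Q = lambda l: -np.sqrt(2) * phi(t - l * Ts) * np.sin(2 * np.pi * fc * t)
ip = lambda a, b: dt * np.dot(a, b)            # Riemann-sum inner product

nI = ip(psi_I(0), psi_I(0))      # expect about 1
nQ = ip(psi_Q(0), psi_Q(0))      # expect about 1
xII = ip(psi_I(0), psi_I(1))     # expect about 0
xIQ = ip(psi_I(0), psi_Q(0))     # expect about 0
```

The residual errors come only from truncating the slowly decaying sinc tails at |t| = 200.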
Spectral Efficiency
• Choose φ of bandwidth W/2, e.g., φ : t ↦ √W sinc(Wt).
• The QAM signal is then of bandwidth W around fc.
• To satisfy Nyquist,

  Ts ≥ 1/W.

• We are then sending complex symbols at rate 1/Ts, i.e., W complex symbols per second.
• This corresponds to 2W real symbols per second.
• By orthogonality, we achieve

  2  [real dimension/sec] / [passband Hz].
c
Lecture 8,
Amos Lapidoth 2017
Mission Accomplished
c
Lecture 8,
Amos Lapidoth 2017
QAM Constellations
Ci ∈ C, i = 1, . . . , n.
c
Lecture 8,
Amos Lapidoth 2017
[Figure: the 4-QAM, 16-QAM, 8-PSK, and 32-QAM constellations.]
c
Lecture 8,
Amos Lapidoth 2017
The Constellation’s Parameters
  δ ≜ min_{c, c′ ∈ C, c ≠ c′} |c − c′|.
c
Lecture 8,
Amos Lapidoth 2017
Recovering the Symbols
c
Lecture 8,
Amos Lapidoth 2017
Computing ⟨r, ψI,ℓ⟩ and ⟨r, ψQ,ℓ⟩ (1)
More generally, we’ll compute these inner products for

  XPB(t) = √2 A ∑_{ℓ=1}^{n} Re(Cℓ) gI,ℓ(t) + √2 A ∑_{ℓ=1}^{n} Im(Cℓ) gQ,ℓ(t),  t ∈ R,

with gI,ℓ,BB(t) = (1/√2) g(t − ℓTs) and gQ,ℓ,BB(t) = (i/√2) g(t − ℓTs).

[Block diagram: r(t) passes through a bandpass filter BPF_{W,fc} to give s(t); s(t) is multiplied by cos(2πfc t) and lowpass filtered (LPF_{Wc}) to give Re(sBB); a 90°-shifted branch multiplies by −sin(2πfc t) and lowpass filters to give Im(sBB); here W/2 ≤ Wc ≤ 2fc − W/2.]
c
Lecture 8,
Amos Lapidoth 2017
  ⟨r, gI,ℓ⟩ = √2 Re ⟨sBB, t ↦ g(t − ℓTs)⟩,
  ⟨r, gQ,ℓ⟩ = √2 Im ⟨sBB, t ↦ g(t − ℓTs)⟩

can be computed with real operations:

  ⟨r, gI,ℓ⟩ = √2 Re ∫_{−∞}^{∞} sBB(t) g*(t − ℓTs) dt
            = √2 ∫_{−∞}^{∞} Re(sBB(t)) Re(g(t − ℓTs)) dt + √2 ∫_{−∞}^{∞} Im(sBB(t)) Im(g(t − ℓTs)) dt,

and

  ⟨r, gQ,ℓ⟩ = √2 Im ∫_{−∞}^{∞} sBB(t) g*(t − ℓTs) dt
            = √2 ∫_{−∞}^{∞} Im(sBB(t)) Re(g(t − ℓTs)) dt − √2 ∫_{−∞}^{∞} Re(sBB(t)) Im(g(t − ℓTs)) dt.
c
Lecture 8,
Amos Lapidoth 2017
Computing ⟨r, ψI,ℓ⟩ and ⟨r, ψQ,ℓ⟩: Real Pulse Shape

When g is real the imaginary-part terms vanish:

  ⟨r, gI,ℓ⟩ = √2 Re ∫_{−∞}^{∞} sBB(t) g*(t − ℓTs) dt = √2 ∫_{−∞}^{∞} Re(sBB(t)) g(t − ℓTs) dt,

  ⟨r, gQ,ℓ⟩ = √2 Im ∫_{−∞}^{∞} sBB(t) g*(t − ℓTs) dt = √2 ∫_{−∞}^{∞} Im(sBB(t)) g(t − ℓTs) dt.

c
Lecture 8,
Amos Lapidoth 2017
Bandpass Filtering and Baseband Conversion
[Block diagram: r(t) → BPF_{W,fc} → s(t); mixing with cos(2πfc t) and lowpass filtering (LPF_{Wc}) yields Re(sBB); the 90°-shifted branch with −sin(2πfc t) yields Im(sBB); W/2 ≤ Wc ≤ 2fc − W/2.]
c
Lecture 8,
Amos Lapidoth 2017
Matched Filtering in Baseband (g Real)
c
Lecture 8,
Amos Lapidoth 2017
Filtering QAM Signals
c
Lecture 8,
Amos Lapidoth 2017
[Figure (textbook p. 131, Sec. 7.7 “Energy-Limited Passband Signals”): x̂PB(f) of bandwidth W around ±fc; a filter response ĥ(f); and the product x̂PB(f) ĥ(f).]
c
Lecture 8,
Amos Lapidoth 2017
[Figure: ĥ(f) around fc with bandwidth W, and the corresponding baseband response supported on |f| ≤ W/2.]
c
Lecture 8,
Amos Lapidoth 2017
The frequency response of the real impulse response h ∈ L₁ with respect to the bandwidth W around the carrier frequency fc is the mapping

  f ↦ ĥ(f + fc) I{ |f| ≤ W/2 }.

The FT of

  (xPB ⋆ h)BB

is the product of x̂BB by the filter’s frequency response with respect to the bandwidth W around the carrier frequency fc:

  f ↦ x̂BB(f) ĥ(f + fc) I{ |f| ≤ W/2 }.
c
Lecture 8,
Amos Lapidoth 2017
[Figure (textbook p. 131): x̂PB(f), ĥ(f), and their product x̂PB(f) ĥ(f), each shown around ±fc.]
c
Lecture 8,
Amos Lapidoth 2017
[Figure: the baseband representation x̂BB(f), supported on |f| ≤ W/2.]
c
Lecture 8,
Amos Lapidoth 2017
Returning to Filtered QAM
The baseband representation of QAM is a complex PAM, so

  X̂BB(f) = A ∑_{ℓ=1}^{n} Cℓ e^{−i2πfℓTs} ĝ(f),  f ∈ R.

The baseband representation of XPB ⋆ h is hence of FT

  f ↦ A ∑_{ℓ=1}^{n} Cℓ e^{−i2πfℓTs} ĝ(f) ĥ(f + fc),  f ∈ R.

In the time domain,

  (XPB ⋆ h)BB(t) = A ∑_{ℓ=1}^{n} Cℓ p(t − ℓTs),

where

  p(t) = ∫_{−∞}^{∞} ĝ(f) ĥ(f + fc) e^{i2πft} df,  t ∈ R.

Thus (XPB ⋆ h)BB is a complex PAM with g replaced by p.
c
Lecture 8,
Amos Lapidoth 2017
Filtering a QAM Signal
c
Lecture 8,
Amos Lapidoth 2017
Complex Random Variables
Z = X + iY.
c
Lecture 8,
Amos Lapidoth 2017
The Density of a CRV
Thus,

  fZ(z) = ∂²/∂x∂y Pr[ Re(Z) ≤ x, Im(Z) ≤ y ]  evaluated at x = Re(z), y = Im(z),  z ∈ C.
c
Lecture 8,
Amos Lapidoth 2017
The Expectation of a CRV
c
Lecture 8,
Amos Lapidoth 2017
Proper CRV
A CRV Z is proper if it is zero-mean, of finite variance, and E[Z²] = 0. Since

  E[Z²] = E[Re(Z)² − Im(Z)²] + i 2 E[Re(Z) Im(Z)],

the condition E[Z²] = 0 is equivalent to

  E[Re(Z)²] = E[Im(Z)²]  and  E[Re(Z) Im(Z)] = 0.

Thus Z is proper iff: Z is of zero mean; Re(Z) and Im(Z) have the same finite variance; and Re(Z) and Im(Z) are uncorrelated.
c
Lecture 8,
Amos Lapidoth 2017
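The properness conditions are easy to check empirically. A minimal sketch, not from the lecture, for a circularly symmetric Gaussian sample (an arbitrary illustrative choice): the pseudo-variance E[Z²] should vanish while Var[Z] stays finite.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)
z = (x + 1j * y) / np.sqrt(2)      # zero mean; Re and Im: equal variance 1/2, uncorrelated

mean = z.mean()
pseudo = (z**2).mean()             # estimates E[Z^2]: vanishes for a proper CRV
var = (np.abs(z) ** 2).mean()      # estimates Var[Z]: about 1 here
```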
The Covariance Matrix of Re(Z), Im(Z)
In general, the covariance matrix of (Re(Z), Im(Z)) is

  [  Var[Re(Z)]           Cov[Re(Z), Im(Z)]  ]
  [  Cov[Re(Z), Im(Z)]    Var[Im(Z)]         ].

But if Z is proper, it is

  [  ½ Var[Z]    0         ]
  [  0           ½ Var[Z]  ].
c
Lecture 8,
Amos Lapidoth 2017
The Covariance
  Cov[Z, W] ≜ E[ (Z − E[Z]) (W − E[W])* ].
c
Lecture 8,
Amos Lapidoth 2017
Properties of the Covariance (1)
1. Conjugate Symmetry:

  Cov[Z, W] = Cov[W, Z]*.

2. Sesquilinearity:

  Cov[αZ, W] = α Cov[Z, W],
  Cov[Z1 + Z2, W] = Cov[Z1, W] + Cov[Z2, W],
  Cov[Z, βW] = β* Cov[Z, W],
  Cov[Z, W1 + W2] = Cov[Z, W1] + Cov[Z, W2].
c
Lecture 8,
Amos Lapidoth 2017
Properties of the Covariance (2)
Var[Z] = Cov[Z, Z] .
c
Lecture 8,
Amos Lapidoth 2017
WSS Discrete-Time Complex Stochastic Processes
A discrete-time CSP (Zν) is wide-sense stationary if:
1. For every ν ∈ Z the CRV Zν is of finite variance.
2. The mean of Zν does not depend on ν.
3. E[Zν Zν′*] depends on ν and ν′ only via ν − ν′:

  E[Zν Zν′*] = E[Zν+η Zν′+η*],  ν, ν′, η ∈ Z.
c
Lecture 8,
Amos Lapidoth 2017
Autocovariance Function
The autocovariance function of a WSS CSP (Zν) is

  KZZ(η) ≜ Cov[Zν+η, Zν],  η ∈ Z.

Key properties:
• KZZ is conjugate symmetric:

  KZZ(−η) = KZZ(η)*,  η ∈ Z.
c
Lecture 8,
Amos Lapidoth 2017
The PSD of a Complex Discrete-Time SP
Zν is of power spectral density SZZ if
  KZZ(η) = ∫_{−1/2}^{1/2} SZZ(θ) e^{i2πηθ} dθ,  η ∈ Z.
c
Lecture 8,
Amos Lapidoth 2017
The PSD when the Autocovariance Function is Summable
If the autocovariance function KZZ is absolutely summable, i.e.,

  ∑_{η=−∞}^{∞} |KZZ(η)| < ∞,

then the PSD exists and is given by

  SZZ(θ) = ∑_{η=−∞}^{∞} KZZ(η) e^{−i2πθη},  θ ∈ [−1/2, 1/2).
c
Lecture 8,
Amos Lapidoth 2017
The Intuition
The complex exponentials are orthonormal:

  ∫_{−1/2}^{1/2} e^{i2π(η−η′)θ} dθ = I{η = η′},  η, η′ ∈ Z.

Hence,

  ∫_{−1/2}^{1/2} S(θ) e^{i2πηθ} dθ = ∫_{−1/2}^{1/2} ∑_{η′=−∞}^{∞} KXX(η′) e^{−i2πη′θ} e^{i2πηθ} dθ
    = ∑_{η′=−∞}^{∞} KXX(η′) ∫_{−1/2}^{1/2} e^{−i2πη′θ} e^{i2πηθ} dθ
    = ∑_{η′=−∞}^{∞} KXX(η′) ∫_{−1/2}^{1/2} e^{i2π(η−η′)θ} dθ
    = ∑_{η′=−∞}^{∞} KXX(η′) I{η = η′}
    = KXX(η),  η ∈ Z.
c
Lecture 8,
Amos Lapidoth 2017
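The orthonormality of the complex exponentials on [−1/2, 1/2) underpinning this argument can be checked directly. A minimal sketch, not from the lecture; the grid size and the test frequencies are arbitrary.

```python
import numpy as np

Ngrid = 4096
theta = (np.arange(Ngrid) + 0.5) / Ngrid - 0.5           # midpoint grid on [-1/2, 1/2)

def ip(eta, eta_p):
    """Riemann-sum approximation of the inner product of two complex exponentials."""
    return (np.exp(2j * np.pi * eta * theta) * np.exp(-2j * np.pi * eta_p * theta)).mean()

same = ip(3, 3)        # expect 1
diff = ip(3, 5)        # expect 0
```

On a uniform midpoint grid the sum of roots of unity cancels exactly, so both values are correct to machine precision.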
Next Week
Thank you!
c
Lecture 8,
Amos Lapidoth 2017
Communication and Detection Theory: Lecture 9
Amos Lapidoth
ETH Zurich
c
Lecture 9,
Amos Lapidoth 2017
Today
• Energy in QAM.
• Power in QAM.
• Operational PSD of QAM.
c
Lecture 9,
Amos Lapidoth 2017
Sending a Single Block
• K IID random bits D1, …, DK are transmitted.
• These bits are mapped by

  enc : {0, 1}^K → C^N

to N complex symbols C1, …, CN.
• The transmitted signal is

  X(t) = 2 Re( XBB(t) e^{i2πfc t} )
       = 2 Re( A ∑_{ℓ=1}^{N} Cℓ g(t − ℓTs) e^{i2πfc t} ),  t ∈ R,
c
Lecture 9,
Amos Lapidoth 2017
Assumptions
c
Lecture 9,
Amos Lapidoth 2017
The Energy in a Single Block
We seek

  E ≜ E[ ∫_{−∞}^{∞} X²(t) dt ].

Since XBB(·) is bandlimited to W/2 Hz, and since fc > W/2,

  E = 2 E[ ∫_{−∞}^{∞} |XBB(t)|² dt ].
c
Lecture 9,
Amos Lapidoth 2017
The Energy in Baseband
E[ ∫_{−∞}^{∞} |XBB(t)|² dt ]
  = E[ ∫_{−∞}^{∞} | A ∑_{ℓ=1}^{N} Cℓ g(t − ℓTs) |² dt ]
  = E[ ∫_{−∞}^{∞} ( A ∑_{ℓ=1}^{N} Cℓ g(t − ℓTs) ) ( A ∑_{ℓ′=1}^{N} Cℓ′ g(t − ℓ′Ts) )* dt ]
  = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] ∫_{−∞}^{∞} g(t − ℓTs) g*(t − ℓ′Ts) dt
  = A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] Rgg( (ℓ′ − ℓ)Ts ).

In particular,

  E[ ∫_{−∞}^{∞} |XBB(t)|² dt ] = A² ‖g‖₂² ∑_{ℓ=1}^{N} E[|Cℓ|²],
  if E[Cℓ Cℓ′*] = E[|Cℓ|²] I{ℓ = ℓ′},  ℓ, ℓ′ ∈ {1, …, N}.
c
Lecture 9,
Amos Lapidoth 2017
The Energy in XPB
Since

  Rgg(τ) = ∫_{−∞}^{∞} |ĝ(f)|² e^{i2πfτ} df,  τ ∈ R,

we have

  E = 2A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] Rgg( (ℓ′ − ℓ)Ts )
    = 2A² ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2πf(ℓ′−ℓ)Ts} |ĝ(f)|² df.

Only expectations of the form E[Cℓ Cℓ′*] show up; not E[Cℓ Cℓ′]. We define the energy per bit Eb

  Eb ≜ E/K

and the energy per complex symbol Es

  Es ≜ E/N.
c
Lecture 9,
Amos Lapidoth 2017
Relating Power in Passband to Power in Baseband
• The energy in a passband signal is twice the energy in its
baseband representation.
• But power is trickier:
  (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ]  ≠  2 · (1/(2T)) E[ ∫_{−T}^{T} |XBB(t)|² dt ],
c
Lecture 9,
Amos Lapidoth 2017
C` Is Zero-Mean and WSS (1)
Assume that C` is a zero-mean WSS discrete-time CSP of
autocovariance function KCC :
E[C` ] = 0, ` ∈ Z,
c
Lecture 9,
Amos Lapidoth 2017
E[ ∫_τ^{τ+Ts} |XBB(t)|² dt ]
  = A² E[ ∫_τ^{τ+Ts} | ∑_{ℓ=−∞}^{∞} Cℓ g(t − ℓTs) |² dt ]
  = A² ∫_τ^{τ+Ts} ∑_{ℓ=−∞}^{∞} ∑_{ℓ′=−∞}^{∞} E[Cℓ Cℓ′*] g(t − ℓTs) g*(t − ℓ′Ts) dt
  = A² ∫_τ^{τ+Ts} ∑_{m=−∞}^{∞} ∑_{ℓ′=−∞}^{∞} E[Cℓ′+m Cℓ′*] g( t − (ℓ′ + m)Ts ) g*(t − ℓ′Ts) dt
  = A² ∫_τ^{τ+Ts} ∑_{m=−∞}^{∞} KCC(m) ∑_{ℓ′=−∞}^{∞} g( t − (ℓ′ + m)Ts ) g*(t − ℓ′Ts) dt
  = A² ∑_{m=−∞}^{∞} KCC(m) ∑_{ℓ′=−∞}^{∞} ∫_{τ−ℓ′Ts}^{τ+Ts−ℓ′Ts} g(t′ − mTs) g*(t′) dt′
  = A² ∑_{m=−∞}^{∞} KCC(m) ∫_{−∞}^{∞} g*(t′) g(t′ − mTs) dt′
  = A² ∑_{m=−∞}^{∞} KCC(m) Rgg*(mTs).
c
Lecture 9,
Amos Lapidoth 2017 s
We lower-bound the energy of XBB(·) in the interval [−T, +T] by

  ⌊2T/Ts⌋ E[ ∫_τ^{τ+Ts} |XBB(t)|² dt ]

and upper-bound it by

  ⌈2T/Ts⌉ E[ ∫_τ^{τ+Ts} |XBB(t)|² dt ],
c
Lecture 9,
Amos Lapidoth 2017
The Power in Passband
and

  lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] = (2A²/Ts) ∫_{−∞}^{∞} ∑_{m=−∞}^{∞} KCC(m) e^{−i2πfmTs} |ĝ(f)|² df.
c
Lecture 9,
Amos Lapidoth 2017
The Power in QAM in Bi-Infinite Block-Mode
If enc(·) produces zero-mean symbols from IID random bits:

  PBB = ( 1/(NTs) ) E[ ∫_{−∞}^{∞} | A ∑_{ℓ=1}^{N} Cℓ g(t − ℓTs) |² dt ]
      = ( A²/(NTs) ) ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2πf(ℓ′−ℓ)Ts} |ĝ(f)|² df.

Consequently,

  lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] = Es/Ts,

where Es = E/N, and

  E = 2A² ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] Rgg( (ℓ′ − ℓ)Ts )
    = 2A² ∫_{−∞}^{∞} ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2πf(ℓ′−ℓ)Ts} |ĝ(f)|² df.
c
Lecture 9,
Amos Lapidoth 2017
Time Shifts of Pulse Shape Are Orthonormal
Suppose

  X(t) = 2 Re( A ∑_{ℓ=−∞}^{∞} Cℓ φ(t − ℓTs) e^{i2πfc t} ),  t ∈ R.

Then

  lim_{T→∞} (1/(2T)) E[ ∫_{−T}^{T} X²(t) dt ] = (2A²/Ts) lim_{L→∞} (1/(2L+1)) ∑_{ℓ=−L}^{L} E[|Cℓ|²],
c
Lecture 9,
Amos Lapidoth 2017
The Operational PSD of QAM
  SXX(f) = SBB( |f| − fc ),  f ∈ R.
c
Lecture 9,
Amos Lapidoth 2017
XBB Is Bandlimited to W/2 Hz
  SBB(f) = 0,  |f| > W/2.

More precisely, we’ll assume that XBB is of operational PSD SBB(·) and show that it is also of operational PSD

  f ↦ SBB(f) I{ |f| ≤ W/2 }.
c
Lecture 9,
Amos Lapidoth 2017
Power in XBB ⋆ h
  = Power in ( t ↦ A ∑_{ℓ∈Z} Cℓ g(t − ℓTs) ) ⋆ h
  = Power in t ↦ A ∑_{ℓ∈Z} Cℓ (g ⋆ h)(t − ℓTs)
  = Power in t ↦ A ∑_{ℓ∈Z} Cℓ ( (g ⋆ LPF_{W/2}) ⋆ h )(t − ℓTs)
  = Power in t ↦ A ∑_{ℓ∈Z} Cℓ ( g ⋆ (h ⋆ LPF_{W/2}) )(t − ℓTs)
  = Power in ( t ↦ A ∑_{ℓ∈Z} Cℓ g(t − ℓTs) ) ⋆ (h ⋆ LPF_{W/2})
  = ∫_{−∞}^{∞} SBB(f) | ĥ(f) I{|f| ≤ W/2} |² df
  = ∫_{−∞}^{∞} SBB(f) I{|f| ≤ W/2} |ĥ(f)|² df.
c
Lecture 9,
Amos Lapidoth 2017
The Baseband Representation of X ? h
c
Lecture 9,
Amos Lapidoth 2017
[Figure (textbook p. 131): x̂PB(f), ĥ(f), and their product x̂PB(f) ĥ(f), each shown around ±fc.]
c
Lecture 9,
Amos Lapidoth 2017
[Figure: ĥ(f) around fc with bandwidth W, and the corresponding ĥ′BB(f) supported on |f| ≤ W/2.]
c
Lecture 9,
Amos Lapidoth 2017
Power in X ⋆ h
  = 2 · Power in XBB ⋆ h′BB
  = 2 ∫_{−∞}^{∞} SBB(f) |ĥ′BB(f)|² df
  = 2 ∫_{−∞}^{∞} SBB(f) | ĥ(f + fc) I{|f| ≤ W/2} |² df
  = 2 ∫_{−∞}^{∞} SBB(f) |ĥ(f + fc)|² df
  = 2 ∫_{−∞}^{∞} SBB(f̃ − fc) |ĥ(f̃)|² df̃
  = ∫_{−∞}^{∞} SBB(f̃ − fc) |ĥ(f̃)|² df̃ + ∫_{−∞}^{∞} SBB(f̃ − fc) |ĥ(−f̃)|² df̃     (h real, so |ĥ(−f̃)| = |ĥ(f̃)|)
  = ∫_{−∞}^{∞} SBB(f̃ − fc) |ĥ(f̃)|² df̃ + ∫_{−∞}^{∞} SBB(−f′ − fc) |ĥ(f′)|² df′
  = ∫_{−∞}^{∞} ( SBB(f − fc) + SBB(−f − fc) ) |ĥ(f)|² df
  = ∫_{−∞}^{∞} SBB( |f| − fc ) |ĥ(f)|² df.
c
Lecture 9,
Amos Lapidoth 2017
Computing SBB (·)
• To compute SBB(·) we need the power in XBB ⋆ h.
• Also for complex PAM, feeding XBB to a filter of impulse response h is tantamount to changing its pulse shape from g to g ⋆ h:

  (X ⋆ h)(t) = ( ( σ ↦ A ∑_{ℓ=−∞}^{∞} Xℓ g(σ − ℓTs) ) ⋆ h )(t)
             = A ∑_{ℓ=−∞}^{∞} Xℓ ∫_{−∞}^{∞} h(s) g(t − s − ℓTs) ds
             = A ∑_{ℓ=−∞}^{∞} Xℓ (g ⋆ h)(t − ℓTs),  t ∈ R.

Consequently,

  SXX(f) = (A²/Ts) ∑_{m=−∞}^{∞} KCC(m) e^{−i2π(|f|−fc)mTs} |ĝ( |f| − fc )|²,  f ∈ R.
c
Lecture 9,
Amos Lapidoth 2017
C` Zero-Mean, Variance-σC2 , and Uncorrelated
In this case

  (A²/Ts) ∑_{m=−∞}^{∞} KCC(m) e^{−i2π(|f|−fc)mTs} |ĝ( |f| − fc )|²,  f ∈ R,

simplifies to

  SXX(f) = (A²/Ts) σC² |ĝ( |f| − fc )|²,  f ∈ R.
c
Lecture 9,
Amos Lapidoth 2017
[Figure 18.1: the relationship between the Fourier Transform of the pulse shape ĝ(f), its squared magnitude |ĝ(f)|², and |ĝ( |f| − fc )|².]
c
Lecture 9,
Amos Lapidoth 2017
The Operational PSD of QAM in Bi-Infinite Block-Mode
To compute the operational PSD of XBB, replace g with g ⋆ h:

  Power in XBB ⋆ h = ∫_{−∞}^{∞} ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2πf(ℓ′−ℓ)Ts} |ĝ(f)|² |ĥ(f)|² df.

Hence,

  SBB(f) = ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2πf(ℓ′−ℓ)Ts} |ĝ(f)|²,  f ∈ R.

Consequently,

  SXX(f) = ( A²/(NTs) ) ∑_{ℓ=1}^{N} ∑_{ℓ′=1}^{N} E[Cℓ Cℓ′*] e^{i2π(|f|−fc)(ℓ′−ℓ)Ts} |ĝ( |f| − fc )|².
c
Lecture 9,
Amos Lapidoth 2017
You have all read Chapter 19.
But let’s quickly review the Q-function.
c
Lecture 9,
Amos Lapidoth 2017
Standard Gaussian
  fW(w) = ( 1/√(2π) ) e^{−w²/2},  w ∈ R.

It is of zero mean and unit variance.

[Figure: the density fW(w).]
c
Lecture 9,
Amos Lapidoth 2017
Gaussian Random Variables
• X is a centered Gaussian if

  X = aW

for some deterministic a ∈ R, where W is a standard Gaussian.
c
Lecture 9,
Amos Lapidoth 2017
Standardizing a Gaussian
If X ∼ N(µ, σ²) with σ > 0, then

  (X − µ)/σ ∼ N(0, 1)

and is thus a standard Gaussian.
c
Lecture 9,
Amos Lapidoth 2017
The Q-Function
The Q-function maps every α ∈ R to the probability that a
standard Gaussian exceeds it:
Q(α) ≜ (1/√(2π)) ∫_α^∞ e^{−ξ²/2} dξ, α ∈ ℝ.
[Plot: Q(α) is the area under the standard Gaussian density to the right of α.]
c
Lecture 9,
Amos Lapidoth 2017
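As a quick numerical sanity check, the Q-function can be evaluated via the complementary error function using the standard identity Q(α) = ½ erfc(α/√2); a minimal sketch:

```python
import math

def Q(alpha: float) -> float:
    # Q(alpha) = (1/sqrt(2*pi)) * integral from alpha to infinity of exp(-x^2/2) dx.
    # Standard identity: Q(alpha) = 0.5 * erfc(alpha / sqrt(2)).
    return 0.5 * math.erfc(alpha / math.sqrt(2.0))

# A standard Gaussian exceeds 0 with probability 1/2.
print(Q(0.0))            # 0.5
# The symmetry identity Q(-alpha) = 1 - Q(alpha).
print(Q(-1.0) + Q(1.0))  # 1.0
```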
The Q-Function
[Plot of Q(α): decreasing from 1 toward 0, with Q(0) = 1/2.]
c
Lecture 9,
Amos Lapidoth 2017
The Q-Function and Intervals
The CDF of a Standard Gaussian:
FW (w) = Pr[W ≤ w]
= 1 − Pr[W ≥ w]
= 1 − Q(w), w ∈ R.
More generally,
Pr[a ≤ W ≤ b] = Pr[W ≥ a] − Pr[W ≥ b]
= Q(a) − Q(b), a ≤ b.
If X ∼ N(µ, σ²) with σ > 0, then
Pr[a ≤ X ≤ b] = Pr[X ≥ a] − Pr[X ≥ b], a ≤ b
= Pr[(X − µ)/σ ≥ (a − µ)/σ] − Pr[(X − µ)/σ ≥ (b − µ)/σ], σ > 0
= Q((a − µ)/σ) − Q((b − µ)/σ), a ≤ b, σ > 0.
c
Lecture 9,
Amos Lapidoth 2017
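The last formula is easy to evaluate numerically; a small sketch (Q via the standard erfc identity):

```python
import math

def Q(alpha):
    return 0.5 * math.erfc(alpha / math.sqrt(2.0))

def prob_in_interval(a, b, mu, sigma):
    # Pr[a <= X <= b] for X ~ N(mu, sigma^2), sigma > 0:
    # Q((a - mu)/sigma) - Q((b - mu)/sigma).
    return Q((a - mu) / sigma) - Q((b - mu) / sigma)

# For X ~ N(0, 1), Pr[-1 <= X <= 1] should be about 0.6827.
print(round(prob_in_interval(-1, 1, 0, 1), 4))
```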
The Q-Function and Rays
Letting a → −∞ we obtain
Pr[X ≤ b] = 1 − Q((b − µ)/σ), σ > 0.
c
Lecture 9,
Amos Lapidoth 2017
The Q-Function with Negative Arguments
• The standard Gaussian density is symmetric. Let
W ∼ N (0, 1).
Consequently,
Q(−α) = 1 − Q(α), α ∈ R.
c
Lecture 9,
Amos Lapidoth 2017
[Plots: Q(α) and Q(−α) shown as complementary tail areas of the standard Gaussian density.]
Figure 19.4: The identity Q(α) + Q(−α) = 1.
c
Lecture 9,
Amos Lapidoth 2017
Linear Combinations of Independent Gaussians
Let Z_1, …, Z_J be independent with Z_j ∼ N(0, σ_j²), and let
α_1, …, α_J ∈ ℝ
be deterministic constants. Then
Σ_{j=1}^{J} α_j Z_j ∼ N(0, σ²), σ² = Σ_{j=1}^{J} α_j² σ_j².
c
Lecture 9,
Amos Lapidoth 2017
Next Week
Thank you!
c
Lecture 9,
Amos Lapidoth 2017
Communication and Detection Theory:
Lecture 10
Amos Lapidoth
ETH Zurich
May 2, 2017
c
Lecture 10,
Amos Lapidoth 2017
Today
c
Lecture 10,
Amos Lapidoth 2017
Guessing H
H takes on the values 0 and 1 according to the prior (π0, π1).
A guessing rule is a mapping φ_Guess: ℝ^d → {0, 1};
the optimal probability of error is denoted p*(error).
c
Lecture 10,
Amos Lapidoth 2017
Guessing in the Absence of Observables
Check by case!
c
Lecture 10,
Amos Lapidoth 2017
The Joint Law of H and Y
We are typically given the prior (π0 , π1 ) and the conditionals
c
Lecture 10,
Amos Lapidoth 2017
Intuition
Suppose the observation is a scalar Y .
Pr[H = 0 | Y = y_obs] ≈ lim_{δ↓0} Pr[H = 0, Y ∈ (y_obs − δ, y_obs + δ)] / Pr[Y ∈ (y_obs − δ, y_obs + δ)].
Now approximate
Pr[H = 0, Y ∈ (y_obs − δ, y_obs + δ)] = π0 ∫_{y_obs−δ}^{y_obs+δ} f_{Y|H=0}(y) dy ≈ π0 2δ f_{Y|H=0}(y_obs), δ ≪ 1,
Pr[Y ∈ (y_obs − δ, y_obs + δ)] = ∫_{y_obs−δ}^{y_obs+δ} f_Y(y) dy ≈ 2δ f_Y(y_obs), δ ≪ 1,
c
Lecture 10,
Amos Lapidoth 2017
Advice
c
Lecture 10,
Amos Lapidoth 2017
Guessing after Observing Y—Heuristics
Having observed that Y = yobs , we associate with H the a
posteriori probabilities Pr[H = 0|Y = yobs ], Pr[H = 1|Y = yobs ].
φ*_Guess(y_obs) = 0 if Pr[H = 0 | Y = y_obs] ≥ Pr[H = 1 | Y = y_obs]; 1 otherwise,
i.e.,
φ*_Guess(y_obs) = 0 if π0 f_{Y|H=0}(y_obs) ≥ π1 f_{Y|H=1}(y_obs); 1 otherwise.
The conditional error probability is
p*(error | Y = y_obs) = min{Pr[H = 0 | Y = y_obs], Pr[H = 1 | Y = y_obs]},
so
p*(error) = ∫_{ℝ^d} min{Pr[H = 0 | Y = y], Pr[H = 1 | Y = y]} f_Y(y) dy
= ∫_{ℝ^d} min{π0 f_{Y|H=0}(y), π1 f_{Y|H=1}(y)} dy.
c
Lecture 10,
Amos Lapidoth 2017
The Error
Let φ_Guess: ℝ^d → {0, 1} be any guessing rule, and let D = {y ∈ ℝ^d : φ_Guess(y) = 0}.
Then
p(error | H = 0) = ∫_{y∉D} f_{Y|H=0}(y) dy, p(error | H = 1) = ∫_{y∈D} f_{Y|H=1}(y) dy,
and
p(error) = π0 ∫_{y∉D} f_{Y|H=0}(y) dy + π1 ∫_{y∈D} f_{Y|H=1}(y) dy
= ∫_{ℝ^d} (π0 f_{Y|H=0}(y) I{y ∉ D} + π1 f_{Y|H=1}(y) I{y ∈ D}) dy.
c
Lecture 10,
Amos Lapidoth 2017
The Main Result
If φ*_Guess guesses “H = 0” only when
π0 f_{Y|H=0}(y_obs) ≥ π1 f_{Y|H=1}(y_obs),
i.e., if
φ*_Guess(y_obs) = 0 ⟹ π0 f_{Y|H=0}(y_obs) ≥ π1 f_{Y|H=1}(y_obs),
then φ*_Guess is optimal.
c
Lecture 10,
Amos Lapidoth 2017
Proof
Let φGuess : Rd → {0, 1} be any guessing rule, and
D = {y ∈ Rd : φGuess (y) = 0}.
Then
Pr[φ_Guess(Y) ≠ H]
= ∫_{ℝ^d} (π0 f_{Y|H=0}(y) I{y ∉ D} + π1 f_{Y|H=1}(y) I{y ∈ D}) dy
≥ ∫_{ℝ^d} min{π0 f_{Y|H=0}(y), π1 f_{Y|H=1}(y)} dy.
But φ*_Guess(·) achieves this lower bound! Indeed, with
D* = {y ∈ ℝ^d : φ*_Guess(y) = 0}
we have
π0 f_{Y|H=0}(y) I{y ∉ D*} + π1 f_{Y|H=1}(y) I{y ∈ D*}
= min{π0 f_{Y|H=0}(y), π1 f_{Y|H=1}(y)}, y ∈ ℝ^d.
c
Lecture 10,
Amos Lapidoth 2017
Randomized Guessing Rules
Θ ∼ U ([0, 1])
Random Number
Generator
c
Lecture 10,
Amos Lapidoth 2017
Randomized Guessing Rules Are not Better
Deterministic rules are randomized rules where b(yobs ) ∈ {0, 1}.
c
Lecture 10,
Amos Lapidoth 2017
Alternative Proof
The randomized rule is a deterministic rule based on (Y, Θ)!
Similarly,
c
Lecture 10,
Amos Lapidoth 2017
The Maximum A Posteriori Rule
φ_MAP(y_obs)
≜ 0 if Pr[H = 0 | Y = y_obs] > Pr[H = 1 | Y = y_obs],
  1 if Pr[H = 0 | Y = y_obs] < Pr[H = 1 | Y = y_obs],
  U{0, 1} if Pr[H = 0 | Y = y_obs] = Pr[H = 1 | Y = y_obs],
= 0 if π0 f_{Y|H=0}(y_obs) > π1 f_{Y|H=1}(y_obs),
  1 if π0 f_{Y|H=0}(y_obs) < π1 f_{Y|H=1}(y_obs),
  U{0, 1} if π0 f_{Y|H=0}(y_obs) = π1 f_{Y|H=1}(y_obs).
c
Lecture 10,
Amos Lapidoth 2017
The Likelihood-Ratio Function
LR: ℝ^d → [0, ∞],
LR(y) ≜ f_{Y|H=0}(y) / f_{Y|H=1}(y), y ∈ ℝ^d,
using the convention
α/0 = ∞, α > 0, and 0/0 = 1.
Using this function,
φ_MAP(y_obs) = 0 if LR(y_obs) > π1/π0,
               1 if LR(y_obs) < π1/π0,
               U{0, 1} if LR(y_obs) = π1/π0, (π0, π1, f_Y(y_obs) > 0).
c
Lecture 10,
Amos Lapidoth 2017
The Maximum-Likelihood Rule
c
Lecture 10,
Amos Lapidoth 2017
The Bhattacharyya Bound
p*(error) = ∫_{ℝ^d} min{π0 f_{Y|H=0}(y), π1 f_{Y|H=1}(y)} dy
≤ ∫_{ℝ^d} √(π0 f_{Y|H=0}(y) π1 f_{Y|H=1}(y)) dy
= √(π0 π1) ∫_{ℝ^d} √(f_{Y|H=0}(y) f_{Y|H=1}(y)) dy
≤ (1/2) ∫_{ℝ^d} √(f_{Y|H=0}(y) f_{Y|H=1}(y)) dy,
where we have used
min{a, b} ≤ √(ab) ≤ (a + b)/2, a, b ≥ 0.
Thus,
p*(error) ≤ (1/2) ∫_{ℝ^d} √(f_{Y|H=0}(y) f_{Y|H=1}(y)) dy.
c
Lecture 10,
Amos Lapidoth 2017
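As a numerical sanity check of the bound for the equally likely pair N(A, σ²) vs. N(−A, σ²) (the example treated on the following slides), a minimal sketch using crude Riemann sums:

```python
import math

def f(y, mean, sigma):
    # Gaussian density of mean `mean` and standard deviation `sigma`.
    return math.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Equally likely hypotheses: N(A, sigma^2) vs. N(-A, sigma^2).
A, sigma, pi0, pi1 = 1.0, 1.0, 0.5, 0.5

# Crude left-Riemann sums over a wide grid.
dy = 1e-3
ys = [-10 + i * dy for i in range(int(20 / dy))]
p_err = sum(min(pi0 * f(y, A, sigma), pi1 * f(y, -A, sigma)) for y in ys) * dy
bhatta = 0.5 * sum(math.sqrt(f(y, A, sigma) * f(y, -A, sigma)) for y in ys) * dy

print(p_err <= bhatta)  # True: the Bhattacharyya bound dominates p*(error)
```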
Testing the Mean of a Univariate Gaussian (1)
H is uniform and
f_{Y|H=0}(y) = (1/√(2πσ²)) e^{−(y−A)²/(2σ²)}, y ∈ ℝ,
f_{Y|H=1}(y) = (1/√(2πσ²)) e^{−(y+A)²/(2σ²)}, y ∈ ℝ,
for some deterministic A, σ > 0.
Since the prior is uniform, the MAP and the ML rules both guess
“H = 0” or “H = 1” depending on whether LR(yobs ) is greater or
smaller than one.
c
Lecture 10,
Amos Lapidoth 2017
Testing the Mean of a Univariate Gaussian (2)
LR(y) = f_{Y|H=0}(y) / f_{Y|H=1}(y)
= ( (1/√(2πσ²)) e^{−(y−A)²/(2σ²)} ) / ( (1/√(2πσ²)) e^{−(y+A)²/(2σ²)} )
= e^{4yA/(2σ²)}, y ∈ ℝ.
LR(y_obs) > 1 ⟺ e^{4 y_obs A/(2σ²)} > 1
⟺ ln e^{4 y_obs A/(2σ²)} > ln 1
⟺ 4 y_obs A/(2σ²) > 0
⟺ y_obs > 0.
c
Lecture 10,
Amos Lapidoth 2017
Testing the Mean of a Univariate Gaussian (3)
Likewise
LR(y_obs) < 1 ⟺ e^{4 y_obs A/(2σ²)} < 1
⟺ ln e^{4 y_obs A/(2σ²)} < ln 1
⟺ 4 y_obs A/(2σ²) < 0
⟺ y_obs < 0.
The MAP and ML rules thus guess “H = 0,” if yobs > 0; they
guess “H = 1,” if yobs < 0; and they guess “H = 0” or “H = 1”
equiprobably, if yobs = 0 (i.e., in the case of a tie).
c
Lecture 10,
Amos Lapidoth 2017
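In code, the resulting MAP/ML detector is just a sign threshold; a minimal sketch (values of A and σ chosen arbitrarily):

```python
import math

A, sigma = 1.5, 0.8  # arbitrary example parameters, A > 0

def LR(y):
    # Likelihood ratio e^{4yA/(2 sigma^2)} = e^{2yA/sigma^2}.
    return math.exp(2 * y * A / sigma ** 2)

def guess(y_obs):
    # MAP = ML rule under a uniform prior: guess "H = 0" iff LR >= 1,
    # equivalently y_obs >= 0 (the tie at 0 is resolved as 0 here).
    return 0 if LR(y_obs) >= 1 else 1

# LR > 1 exactly when y > 0.
assert all((LR(y) > 1) == (y > 0) for y in [-3.0, -0.1, 0.2, 4.0])
print(guess(0.7), guess(-2.3))  # 0 1
```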
Testing the Mean of a Univariate Gaussian (4)
c
Lecture 10,
Amos Lapidoth 2017
Testing the Mean of a Univariate Gaussian (4)
pMAP(error | H = 1) = Pr[Y > 0 | H = 1] = Q(A/σ),
because, conditional on H = 1, the RV Y is N(−A, σ²), so the
origin is A/σ standard deviations away (to the right).
Since, by symmetry, pMAP(error | H = 0) = Q(A/σ) as well, the
unconditional probability of error is also Q(A/σ).
c
Lecture 10,
Amos Lapidoth 2017
[Figure: the density over y with the decision regions “guess H = 1” (y < 0) and “guess H = 0” (y > 0), the means at −A and A, and the area corresponding to pMAP(error | H = 0).]
c
Lecture 10,
Amos Lapidoth 2017
Testing the Mean of a Univariate Gaussian (6)
The Bhattacharyya Bound:
p*(error) ≤ (1/2) ∫_{−∞}^{∞} √(f_{Y|H=0}(y) f_{Y|H=1}(y)) dy
= (1/2) ∫_{−∞}^{∞} √( (1/√(2πσ²)) e^{−(y−A)²/(2σ²)} · (1/√(2πσ²)) e^{−(y+A)²/(2σ²)} ) dy
= (1/2) e^{−A²/(2σ²)} ∫_{−∞}^{∞} (1/√(2πσ²)) e^{−y²/(2σ²)} dy
= (1/2) e^{−A²/(2σ²)}.
As an aside, we obtained
Q(α) ≤ (1/2) e^{−α²/2}, α ≥ 0.
c
Lecture 10,
Amos Lapidoth 2017
Deterministic Processing is Futile
c
Lecture 10,
Amos Lapidoth 2017
More General Processing
The processor generates Θ independently of (H, Y) and forms
g(Y, Θ).
This too is futile!
Cannot outperform an optimal rule based on (yobs , θobs ), where
fY,Θ|H=0 (yobs , θobs ) = fY|H=0 (yobs ) fΘ (θobs ),
fY,Θ|H=1 (yobs , θobs ) = fY|H=1 (yobs ) fΘ (θobs ).
But,
LR(y_obs, θ_obs) = f_{Y,Θ|H=0}(y_obs, θ_obs) / f_{Y,Θ|H=1}(y_obs, θ_obs)
= ( f_{Y|H=0}(y_obs) f_Θ(θ_obs) ) / ( f_{Y|H=1}(y_obs) f_Θ(θ_obs) )
= f_{Y|H=0}(y_obs) / f_{Y|H=1}(y_obs), f_Θ(θ_obs) ≠ 0
= LR(y_obs), f_Θ(θ_obs) ≠ 0.
c
Lecture 10,
Amos Lapidoth 2017
Recall that X and Y are conditionally independent given Z, i.e., form the Markov chain
X −− Z −− Y,
if
P_{X,Y|Z}(x, y|z) = P_{X|Z}(x|z) P_{Y|Z}(y|z), P_Z(z) > 0.
c
Lecture 10,
Amos Lapidoth 2017
[Block diagram: y_obs is fed to an adder that forms y_obs + W, which is fed to the MAP rule for testing N(α0, σ² + δ²) vs. N(α1, σ² + δ²) with prior (π0, π1); the local randomness W ∼ N(0, δ²) comes from a Gaussian RV generator.]
W independent of (Y, H).
A randomized rule for N(α0, σ²) vs. N(α1, σ²) that attains the
optimal probability of error for N(α0, σ² + δ²) vs. N(α1, σ² + δ²).
c
Lecture 10,
Amos Lapidoth 2017
Sufficient Statistics—an Example (1)
Let H have a uniform prior. We observe (Y1, Y2). Conditional on
H = 0, they are IID N(0, σ0²), whereas conditional on H = 1 they
are IID N(0, σ1²), where σ0 > σ1 > 0. Thus,
f_{Y1,Y2|H=0}(y1, y2) = (1/(2πσ0²)) exp(−(y1² + y2²)/(2σ0²)), y1, y2 ∈ ℝ,
f_{Y1,Y2|H=1}(y1, y2) = (1/(2πσ1²)) exp(−(y1² + y2²)/(2σ1²)), y1, y2 ∈ ℝ.
LR(y1, y2) > 1 ⟺ exp( (1/2)(1/σ1² − 1/σ0²)(y1² + y2²) ) > σ0²/σ1²
⟺ (1/2)(1/σ1² − 1/σ0²)(y1² + y2²) > ln(σ0²/σ1²)
⟺ ((σ0² − σ1²)/(2σ0²σ1²))(y1² + y2²) > ln(σ0²/σ1²)
⟺ y1² + y2² > (2σ0²σ1²/(σ0² − σ1²)) ln(σ0²/σ1²).
c
Lecture 10,
Amos Lapidoth 2017
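The decision thus depends on the observation only through the radius squared T = Y1² + Y2²; a minimal sketch of the threshold rule (values of σ0, σ1 chosen arbitrarily):

```python
import math

sigma0, sigma1 = 2.0, 1.0  # arbitrary example values with sigma0 > sigma1 > 0

# Threshold on t = y1^2 + y2^2 derived above.
tau = (2 * sigma0**2 * sigma1**2 / (sigma0**2 - sigma1**2)) * math.log(sigma0**2 / sigma1**2)

def guess(y1, y2):
    # Guess "H = 0" (the larger-variance hypothesis) iff the radius squared exceeds tau.
    return 0 if y1**2 + y2**2 > tau else 1

print(round(tau, 4))     # the threshold on the radius squared
print(guess(3.0, 3.0))   # far from the origin -> 0
print(guess(0.1, -0.2))  # near the origin -> 1
```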
Sufficient Statistics—an Example (3)
c
Lecture 10,
Amos Lapidoth 2017
Sufficient Statistics—Informal Definition
A mapping T: ℝ^d → ℝ^{d′} forms a sufficient statistic for the
densities f_{Y|H=0}(·) and f_{Y|H=1}(·) if the likelihood-ratio LR(y_obs)
can be computed from T(y_obs) for every y_obs in ℝ^d.
c
Lecture 10,
Amos Lapidoth 2017
Sufficient Statistics—Formal Definition
A mapping T: ℝ^d → ℝ^{d′} forms a sufficient statistic for the
densities f_{Y|H=0}(·) and f_{Y|H=1}(·) on ℝ^d if it is Borel measurable
and if there exists a set Y0 ⊂ ℝ^d of Lebesgue measure zero and a
Borel measurable function ζ: ℝ^{d′} → [0, ∞] such that for all
y_obs ∈ ℝ^d satisfying
y_obs ∉ Y0 and f_{Y|H=0}(y_obs) + f_{Y|H=1}(y_obs) > 0
we have
f_{Y|H=0}(y_obs) / f_{Y|H=1}(y_obs) = ζ(T(y_obs)),
where on the LHS of the above we define a/0 to be +∞ whenever
a > 0.
c
Lecture 10,
Amos Lapidoth 2017
Basing the Decision on a Sufficient Statistic Is Optimal
If T: ℝ^d → ℝ^{d′} is a sufficient statistic for the densities f_{Y|H=0}(·)
and f_{Y|H=1}(·), then, for every prior of H, there exists a decision
rule that guesses H based on T(Y) and which is as good as any
optimal guessing rule based on Y.
Indeed, the rule
φ_T(T(y_obs)) = 0 if ζ(T(y_obs)) > π1/π0,
                1 if ζ(T(y_obs)) < π1/π0,
                U{0, 1} if ζ(T(y_obs)) = π1/π0
c
Lecture 10,
Amos Lapidoth 2017
Computability of the a Posteriori Distribution—Informal
T: ℝ^d → ℝ^{d′} is a sufficient statistic for f_{Y|H=0}(·) and f_{Y|H=1}(·)
iff
for every prior (π0, π1) there exist functions
t ↦ ψm(π0, π1, t), m = 0, 1,
such that (ψ0(π0, π1, T(y_obs)), ψ1(π0, π1, T(y_obs)))ᵀ
equals
(Pr[H = 0 | Y = y_obs], Pr[H = 1 | Y = y_obs])ᵀ.
c
Lecture 10,
Amos Lapidoth 2017
After Identifying a Sufficient Statistic (2)
Method 2: Use the MAP rule for guessing H based on the new
d0 -dimensional observations tobs = T (yobs ). You’ll need the
conditional densities of T = T (Y) given H.
φ_Guess(T(y_obs)) = 0 if π0 f_{T|H=0}(T(y_obs)) > π1 f_{T|H=1}(T(y_obs)),
                    1 if π0 f_{T|H=0}(T(y_obs)) < π1 f_{T|H=1}(T(y_obs)),
c
Lecture 10,
Amos Lapidoth 2017
Applying Method 2 in the Example
The squares of two IID centered Gaussians sum to an exponential:
f_{T|H=0}(t) = (1/(2σ0²)) exp(−t/(2σ0²)), t ≥ 0,
f_{T|H=1}(t) = (1/(2σ1²)) exp(−t/(2σ1²)), t ≥ 0.
So,
f_{T|H=0}(t) / f_{T|H=1}(t) = (σ1²/σ0²) exp( t (1/(2σ1²) − 1/(2σ0²)) ), t ≥ 0,
ln( f_{T|H=0}(t) / f_{T|H=1}(t) ) = ln(σ1²/σ0²) + t (1/(2σ1²) − 1/(2σ0²)), t ≥ 0.
We thus guess “H = 0” if the log likelihood-ratio is positive,
t ≥ (2σ0²σ1²/(σ0² − σ1²)) ln(σ0²/σ1²)
⟺ y1² + y2² ≥ (2σ0²σ1²/(σ0² − σ1²)) ln(σ0²/σ1²).
c
Lecture 10,
Amos Lapidoth 2017
Multi-Dimensional Binary Gaussian Hypothesis Testing
H is of nondegenerate prior (π0, π1). The observable is
Y = (Y^(1), …, Y^(J))ᵀ with
H = 0: Y^(j) = s0^(j) + Z^(j), j = 1, 2, …, J,
H = 1: Y^(j) = s1^(j) + Z^(j), j = 1, 2, …, J,
where Z^(1), Z^(2), …, Z^(J) are IID N(0, σ²) and
s0 = (s0^(1), …, s0^(J))ᵀ, s1 = (s1^(1), …, s1^(J))ᵀ
are deterministic. The Euclidean inner product and norm in ℝ^J are
⟨u, v⟩_E ≜ Σ_{j=1}^{J} u^(j) v^(j),
‖u‖ ≜ √⟨u, u⟩_E = √( Σ_{j=1}^{J} (u^(j))² ).
c
Lecture 10,
Amos Lapidoth 2017
The Likelihood Function
LR(y) = f_{Y|H=0}(y) / f_{Y|H=1}(y)
= ( Π_{j=1}^{J} (1/√(2πσ²)) exp(−(y^(j) − s0^(j))²/(2σ²)) )
  / ( Π_{j=1}^{J} (1/√(2πσ²)) exp(−(y^(j) − s1^(j))²/(2σ²)) )
= Π_{j=1}^{J} exp( −(y^(j) − s0^(j))²/(2σ²) + (y^(j) − s1^(j))²/(2σ²) ), y ∈ ℝ^J.
c
Lecture 10,
Amos Lapidoth 2017
The Log-Likelihood Function
LLR(y) = (1/(2σ²)) Σ_{j=1}^{J} ( (y^(j) − s1^(j))² − (y^(j) − s0^(j))² )
= (1/σ²) ( ⟨y, s0 − s1⟩_E + (‖s1‖² − ‖s0‖²)/2 )
= (1/σ²) ( ⟨y, s0 − s1⟩_E − (⟨s0, s0 − s1⟩_E + ⟨s1, s0 − s1⟩_E)/2 )
= (‖s0 − s1‖/σ²) ( ⟨y, (s0 − s1)/‖s0 − s1‖⟩_E
    − (⟨s0, (s0 − s1)/‖s0 − s1‖⟩_E + ⟨s1, (s0 − s1)/‖s0 − s1‖⟩_E)/2 )
= (‖s0 − s1‖/σ²) ( ⟨y, φ⟩_E − (⟨s0, φ⟩_E + ⟨s1, φ⟩_E)/2 ), y ∈ ℝ^J,
where
φ = (s0 − s1)/‖s0 − s1‖
is a unit-norm vector pointing from s1 to s0.
c
Lecture 10,
Amos Lapidoth 2017
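The inner-product form of the LLR can be checked against the componentwise sum; a small sketch with arbitrary example vectors:

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

def llr_direct(y, s0, s1, sigma):
    # (1/(2 sigma^2)) * sum_j [(y_j - s1_j)^2 - (y_j - s0_j)^2]
    return sum(((yj - b) ** 2 - (yj - a) ** 2) for yj, a, b in zip(y, s0, s1)) / (2 * sigma ** 2)

def llr_vector(y, s0, s1, sigma):
    # <y, s0 - s1>_E / sigma^2 + (||s1||^2 - ||s0||^2) / (2 sigma^2)
    d = [a - b for a, b in zip(s0, s1)]
    return inner(y, d) / sigma ** 2 + (norm(s1) ** 2 - norm(s0) ** 2) / (2 * sigma ** 2)

y, s0, s1, sigma = [1.0, -0.5, 2.0], [1.0, 0.0, 1.0], [-1.0, 0.5, 0.0], 0.7
assert abs(llr_direct(y, s0, s1, sigma) - llr_vector(y, s0, s1, sigma)) < 1e-9
print("componentwise and inner-product forms of the LLR agree")
```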
Decision Rule
An optimal rule is to guess “H = 0” when LLR(y) ≥ ln(π1/π0):
[Figure: the decision threshold along φ for the cases π0 < π1, π0 = π1, and π0 > π1.]
c
Lecture 10,
Amos Lapidoth 2017
[Figure: the vectors s0 and s1, the unit vector φ, and an observation y.]
Then,
Pr[‖Y − s1‖ ≤ ‖Y − s0‖] = Q(‖s0 − s1‖/(2σ)).
c
Lecture 10,
Amos Lapidoth 2017
Error Probability Lemma
Pr[‖Y − s1‖ ≤ ‖Y − s0‖]
= Pr[‖Z + s0 − s1‖ ≤ ‖Z‖]
= Pr[‖Z + s0 − s1‖² ≤ ‖Z‖²]
= Pr[‖Z‖² + ‖s0 − s1‖² + 2⟨Z, s0 − s1⟩_E ≤ ‖Z‖²]
= Pr[−2⟨Z, s0 − s1⟩_E ≥ ‖s0 − s1‖²]
= Pr[2⟨Z, s0 − s1⟩_E ≥ ‖s0 − s1‖²]
c
Lecture 10,
Amos Lapidoth 2017
Linear Combinations of Independent Gaussians
Let Z_1, …, Z_J be independent with Z_j ∼ N(0, σ_j²), and let
α_1, …, α_J ∈ ℝ
be deterministic constants. Then
Σ_{j=1}^{J} α_j Z_j ∼ N(0, σ²), σ² = Σ_{j=1}^{J} α_j² σ_j².
c
Lecture 10,
Amos Lapidoth 2017
For Our Problem
For a uniform prior,
Pr[error | H = 0] = Pr[error | H = 1] = Pr[error] = Q(‖s0 − s1‖/(2σ)).
More generally,
pMAP(error | H = 0) = Q( ‖s0 − s1‖/(2σ) + (σ/‖s0 − s1‖) ln(π0/π1) ),
pMAP(error | H = 1) = Q( ‖s0 − s1‖/(2σ) + (σ/‖s0 − s1‖) ln(π1/π0) ),
p*(error) = π0 Q( ‖s0 − s1‖/(2σ) + (σ/‖s0 − s1‖) ln(π0/π1) )
          + π1 Q( ‖s0 − s1‖/(2σ) + (σ/‖s0 − s1‖) ln(π1/π0) ).
c
Lecture 10,
Amos Lapidoth 2017
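These expressions are straightforward to evaluate; a minimal sketch (Q via the standard erfc identity, example numbers arbitrary):

```python
import math

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def map_error_probs(dist, sigma, pi0, pi1):
    # dist = ||s0 - s1||; returns (p(error|H=0), p(error|H=1), p*(error)).
    p0 = Q(dist / (2 * sigma) + (sigma / dist) * math.log(pi0 / pi1))
    p1 = Q(dist / (2 * sigma) + (sigma / dist) * math.log(pi1 / pi0))
    return p0, p1, pi0 * p0 + pi1 * p1

# Uniform prior: both conditional error probabilities collapse to Q(dist/(2 sigma)).
p0, p1, p = map_error_probs(2.0, 1.0, 0.5, 0.5)
assert abs(p0 - p1) < 1e-12 and abs(p - Q(1.0)) < 1e-12
print(round(p, 4))
```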
Random Parameter Not Observed—Nuisance Parameter
Instead of f_{Y|H=0}(y_obs) and f_{Y|H=1}(y_obs), we are given
f_Θ(·), f_{Y|Θ=θ,H=0}(·), f_{Y|Θ=θ,H=1}(·), Θ independent of H.
f_{Y|H=0}(y_obs) = ∫_θ f_{Y,Θ|H=0}(y_obs, θ) dθ
= ∫_θ f_{Y|Θ=θ,H=0}(y_obs) f_{Θ|H=0}(θ) dθ
= ∫_θ f_{Y|Θ=θ,H=0}(y_obs) f_Θ(θ) dθ.
(Think about conditioning on H = 0 as specifying the law.)
f_{Y|H=1}(y_obs) = ∫_θ f_{Y|Θ=θ,H=1}(y_obs) f_Θ(θ) dθ.
LR(y_obs) = ∫_θ f_{Y|Θ=θ,H=0}(y_obs) f_Θ(θ) dθ / ∫_θ f_{Y|Θ=θ,H=1}(y_obs) f_Θ(θ) dθ.
c
Lecture 10,
Amos Lapidoth 2017
Random Parameter Observed
If Θ is observed, we merely view the observable as (Y, Θ). Since Θ is
independent of H, f_{Y,Θ|H=0}(y_obs, θ_obs) = f_{Y|H=0,Θ=θ_obs}(y_obs) f_Θ(θ_obs),
and likewise under H = 1, so
LR(y_obs, θ_obs) = f_{Y|H=0,Θ=θ_obs}(y_obs) / f_{Y|H=1,Θ=θ_obs}(y_obs).
c
Lecture 10,
Amos Lapidoth 2017
Next Week
Thank you!
c
Lecture 10,
Amos Lapidoth 2017
Communication and Detection Theory:
Lecture 11
Amos Lapidoth
ETH Zurich
May 9, 2017
Multi-Hypothesis Testing
c
Lecture 11,
Amos Lapidoth 2017
Today
c
Lecture 11,
Amos Lapidoth 2017
Multiple Hypotheses
M takes value in the set M = {1, …, M}, where M ≥ 2,
according to the prior
πm = Pr[M = m], m ∈ M, with πm > 0, m ∈ M.
The conditional densities are f_{Y|M=m}(·), m ∈ M.
A guessing rule is a mapping φ_Guess: ℝ^d → M;
the optimal probability of error is p*(error).
c
Lecture 11,
Amos Lapidoth 2017
Guessing in the Absence of Observables
• Only M deterministic decision rules: φ1, …, φM, where
φm guesses “M = m”.
• Optimal is to guess an m̃ satisfying πm̃ = max_{m′∈M} πm′.
c
Lecture 11,
Amos Lapidoth 2017
The Joint Law of M and Y
c
Lecture 11,
Amos Lapidoth 2017
Guessing in the Presence of Observables
• After observing that Y = yobs , we associate with each
m ∈ M the a posteriori probability Pr[M = m|Y = yobs ].
• We pick the message of highest a posteriori probability.
• A tie occurs when more than one outcome attains the highest
a posteriori probability. Any one of the maximum-achieving
messages will do.
• We thus guess “m̃,” only if
Pr[M = m̃ | Y = y_obs] = max_{m′∈M} Pr[M = m′ | Y = y_obs].
• For this rule
p*(correct | Y = y_obs) = max_{m′∈M} Pr[M = m′ | Y = y_obs],
p*(error | Y = y_obs) = 1 − max_{m′∈M} Pr[M = m′ | Y = y_obs],
p*(error) = 1 − ∫_{ℝ^d} max_{m′∈M} Pr[M = m′ | Y = y] f_Y(y) dy.
c
Lecture 11,
Amos Lapidoth 2017
The Main Result
Guessing any element of
M̃(y_obs)
≜ { m̃ ∈ M : Pr[M = m̃ | Y = y_obs] = max_{m′∈M} Pr[M = m′ | Y = y_obs] }
= { m̃ ∈ M : πm̃ f_{Y|M=m̃}(y_obs) = max_{m′∈M} πm′ f_{Y|M=m′}(y_obs) }
is optimal.
c
Lecture 11,
Amos Lapidoth 2017
Proof
Given any φ_Guess(·), define the disjoint sets
Dm = { y ∈ ℝ^d : φ_Guess(y) = m }, m ∈ M.
Pr(correct) = Σ_{m∈M} πm ∫_{Dm} f_{Y|M=m}(y) dy
= Σ_{m∈M} πm ∫_{ℝ^d} f_{Y|M=m}(y) I{y ∈ Dm} dy
= ∫_{ℝ^d} Σ_{m∈M} πm f_{Y|M=m}(y) I{y ∈ Dm} dy
≤ ∫_{ℝ^d} max_{m∈M} { πm f_{Y|M=m}(y) } dy.
Equality is attained if
y ∈ Dm̃ ⟹ πm̃ f_{Y|M=m̃}(y) = max_{m′∈M} πm′ f_{Y|M=m′}(y).
c
Lecture 11,
Amos Lapidoth 2017
Randomized Rules, the MAP, and the ML Rules
c
Lecture 11,
Amos Lapidoth 2017
Processing
M −− Y −− Z (a Markov chain).
c
Lecture 11,
Amos Lapidoth 2017
Multi-Hypothesis Testing for 2D Signals
f_{Y^(1),Y^(2)|M=m}(y^(1), y^(2))
= (1/(2πσ²)) exp( −((y^(1) − am)² + (y^(2) − bm)²) / (2σ²) ).
c
Lecture 11,
Amos Lapidoth 2017
8PSK
8PSK corresponds to M = 8 and
am = A cos(2πm/8), bm = A sin(2πm/8), m = 1, …, 8.
[Figure: the eight constellation points (a1, b1), …, (a8, b8) on a circle of radius A.]
c
Lecture 11,
Amos Lapidoth 2017
The “Nearest-Neighbor” Decoding Rule
Since M is uniform, the MAP picks an element of
argmax_{m′∈M} f_{Y^(1),Y^(2)|M=m′}(y^(1), y^(2))
= argmax_{m′∈M} (1/(2πσ²)) exp( −((y^(1) − am′)² + (y^(2) − bm′)²)/(2σ²) )
= argmax_{m′∈M} exp( −((y^(1) − am′)² + (y^(2) − bm′)²)/(2σ²) )
= argmax_{m′∈M} ( −((y^(1) − am′)² + (y^(2) − bm′)²)/(2σ²) )
= argmin_{m′∈M} ((y^(1) − am′)² + (y^(2) − bm′)²)/(2σ²)
= argmin_{m′∈M} { (y^(1) − am′)² + (y^(2) − bm′)² }
= argmin_{m′∈M} { ‖y − sm′‖ }.
c
Lecture 11,
Amos Lapidoth 2017
Nearest-Neighbor Decoding for 8PSK
[Figure: the decision regions in the (y^(1), y^(2))-plane; the wedge around the constellation point for m = 1 is “guess 1”.]
c
Lecture 11,
Amos Lapidoth 2017
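For a uniform prior the MAP rule reduces to nearest-neighbor decoding, which is a one-liner; a minimal 8PSK sketch:

```python
import math

A = 1.0
# 8PSK constellation: s_m = (A cos(2*pi*m/8), A sin(2*pi*m/8)), m = 1..8.
points = {m: (A * math.cos(2 * math.pi * m / 8), A * math.sin(2 * math.pi * m / 8))
          for m in range(1, 9)}

def nearest_neighbor(y1, y2):
    # Guess the message whose constellation point is closest to (y1, y2).
    return min(points, key=lambda m: (y1 - points[m][0]) ** 2 + (y2 - points[m][1]) ** 2)

print(nearest_neighbor(0.9, 0.1))   # close to (A, 0), i.e., m = 8
print(nearest_neighbor(-0.1, 1.2))  # close to (0, A), i.e., m = 2
```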
Error Analysis for 8PSK
c
Lecture 11,
Amos Lapidoth 2017
[Figure: the 8PSK decision regions in the (y^(1), y^(2))-plane.]
c
Lecture 11,
Amos Lapidoth 2017
The Union-of-Events Bound
• The probability of the union of two disjoint events is the sum
of their probabilities.
• Given two not necessarily disjoint events V and W,
V ∪ W = W ∪ (V \ W),
so
Pr(V ∪ W) = Pr(W) + Pr(V \ W).
• To study Pr(V \ W), note that
V = (V \ W) ∪ (V ∩ W).
so
Pr(V \ W) = Pr(V) − Pr(V ∩ W).
Hence,
Pr(V ∪ W) = Pr(V) + Pr(W) − Pr(V ∩ W)
≤ Pr(V) + Pr(W).
c
Lecture 11,
Amos Lapidoth 2017
The Union-of-Events Bound for Finite Collections of Events
If V1, V2, …, is a finite (or countably-infinite) collection of events,
then
Pr(∪_j Vj) ≤ Σ_j Pr(Vj).
Note that
y ∈ Bm,m′ ⇏ the MAP will not guess m
(there could be a tie resolved in m’s favor);
but if m was not guessed by the MAP rule, then some m′ which is not
equal to m must have had an a posteriori probability that is at
least as high as that of m:
m was not guessed ⟹ Y ∈ ∪_{m′≠m} Bm,m′.
c
Lecture 11,
Amos Lapidoth 2017
The implication
m was not guessed ⟹ Y ∈ ∪_{m′≠m} Bm,m′
implies that
Pr[m was not guessed] ≤ Pr[ Y ∈ ∪_{m′≠m} Bm,m′ ].
pMAP(error | M = m) ≤ Pr[ Y ∈ ∪_{m′≠m} Bm,m′ | M = m ]
= Pr[ ∪_{m′≠m} {ω ∈ Ω : Y(ω) ∈ Bm,m′} | M = m ]
≤ Σ_{m′≠m} Pr[ {ω ∈ Ω : Y(ω) ∈ Bm,m′} | M = m ]
= Σ_{m′≠m} Pr[ Y ∈ Bm,m′ | M = m ]
= Σ_{m′≠m} ∫_{Bm,m′} f_{Y|M=m}(y) dy.
c
Lecture 11,
Amos Lapidoth 2017
The Union-of-Events Bound in Hypothesis Testing
pMAP(error | M = m)
≤ Σ_{m′≠m} Pr[ Y ∈ Bm,m′ | M = m ]
= Σ_{m′≠m} ∫_{Bm,m′} f_{Y|M=m}(y) dy
= Σ_{m′≠m} Pr[ πm′ f_{Y|M=m′}(Y) ≥ πm f_{Y|M=m}(Y) | M = m ],
where
Bm,m′ = { y ∈ ℝ^d : πm′ f_{Y|M=m′}(y) ≥ πm f_{Y|M=m}(y) }.
If ties occur with probability zero, then Pr[Y ∈ Bm,m′ | M = m] is
the conditional probability of error of the MAP rule for
f_{Y|M=m}(·) vs. f_{Y|M=m′}(·) with prior ( πm/(πm + πm′), πm′/(πm + πm′) ).
c
Lecture 11,
Amos Lapidoth 2017
The Union Bound for 8-PSK—pMAP (error|M = 4)
[Figure: the regions B4,3 and B4,5 around the constellation points 3, 4, and 5, and their union B4,3 ∪ B4,5.]
The events Y ∈ B4,5 and Y ∈ B4,3 are not mutually exclusive, but,
c
Lecture 11,
Amos Lapidoth 2017
Optimal Guessing Rule
Having observed that Y = y, the MAP rule randomly picks an
element from the set
M̃(y)
= { m̃ ∈ M : πm̃ f_{Y|M=m̃}(y) = max_{m′∈M} πm′ f_{Y|M=m′}(y) }
= { m̃ ∈ M : ln(πm̃ f_{Y|M=m̃}(y)) = max_{m′∈M} ln(πm′ f_{Y|M=m′}(y)) }.
Here
ln(πm f_{Y|M=m}(y)) = ln πm − (J/2) ln(2πσ²) − (1/(2σ²)) Σ_{j=1}^{J} (y^(j) − sm^(j))².
c
Lecture 11,
Amos Lapidoth 2017
Optimal Rule for a Uniform Prior
c
Lecture 11,
Amos Lapidoth 2017
Uniform Prior and Equi-Norm Vectors
A further simplification arises when
‖s1‖ = ‖s2‖ = · · · = ‖sM‖.
In this case the nearest-neighbor rule coincides with the “highest
correlation rule”
M̃(y) = argmax_{m̃∈M} Σ_{j=1}^{J} y^(j) sm̃^(j),
because in the expansion ‖y − sm̃‖² = ‖y‖² − 2⟨y, sm̃⟩_E + ‖sm̃‖²
the term ‖y‖² is (always) common to all messages, and the term
‖sm̃‖² is common by assumption.
c
Lecture 11,
Amos Lapidoth 2017
Ties
If the mean vectors s1, …, sM are distinct,
pMAP(error | M = m)
≤ Σ_{m′≠m} Pr[ πm′ f_{Y|M=m′}(Y) ≥ πm f_{Y|M=m}(Y) | M = m ]
= Σ_{m′≠m} Q( ‖sm − sm′‖/(2σ) + (σ/‖sm − sm′‖) ln(πm/πm′) ).
Thus,
p*(error) ≤ Σ_{m∈M} πm Σ_{m′≠m} Q( ‖sm − sm′‖/(2σ) + (σ/‖sm − sm′‖) ln(πm/πm′) ).
c
Lecture 11,
Amos Lapidoth 2017
The Union Bound for the Gaussian Problem with a
Uniform Prior
pMAP(error | M = m) ≤ Σ_{m′≠m} Q( ‖sm − sm′‖/(2σ) ), M uniform,
p*(error) ≤ (1/M) Σ_{m∈M} Σ_{m′≠m} Q( ‖sm − sm′‖/(2σ) ), M uniform.
c
Lecture 11,
Amos Lapidoth 2017
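For 8PSK these bounds are easy to tabulate; a sketch computing the union bound on pMAP(error | M = m) under a uniform prior (A and σ chosen arbitrarily):

```python
import math

def Q(a):
    return 0.5 * math.erfc(a / math.sqrt(2.0))

A, sigma, M = 1.0, 0.25, 8
pts = [(A * math.cos(2 * math.pi * m / M), A * math.sin(2 * math.pi * m / M))
       for m in range(M)]

def union_bound(m):
    # Sum over m' != m of Q(||s_m - s_m'|| / (2 sigma)), uniform prior.
    xm, ym = pts[m]
    return sum(Q(math.hypot(xm - x, ym - y) / (2 * sigma))
               for i, (x, y) in enumerate(pts) if i != m)

# By symmetry of the constellation, the bound is the same for every message.
bounds = [union_bound(m) for m in range(M)]
assert max(bounds) - min(bounds) < 1e-9
print(round(bounds[0], 6))
```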
A Lower Bound
If the score of Message m′ is higher than that of Message m, then
the MAP decoder will surely not guess “M = m.”
(Whether it will guess “M = m′” depends on the other scores.)
Thus, for each message m′ ≠ m
pMAP(error | M = m)
≥ Pr[ πm′ f_{Y|M=m′}(Y) > πm f_{Y|M=m}(Y) | M = m ]
= Q( ‖sm − sm′‖/(2σ) + (σ/‖sm − sm′‖) ln(πm/πm′) ).
To tighten the bound we maximize over m′:
pMAP(error | M = m) ≥ max_{m′∈M\{m}} Q( ‖sm − sm′‖/(2σ) + (σ/‖sm − sm′‖) ln(πm/πm′) ),
p*(error) ≥ Σ_{m∈M} πm max_{m′∈M\{m}} Q( ‖sm − sm′‖/(2σ) + (σ/‖sm − sm′‖) ln(πm/πm′) ).
This simplifies for a uniform prior:
c
Lecture 11,
Amos Lapidoth 2017
A Lower Bound for the Gaussian Problem with a
Uniform Prior
When the prior is uniform,
pMAP(error | M = m) ≥ max_{m′∈M\{m}} Q( ‖sm − sm′‖/(2σ) ).
[Figure 22.1: A black box that, when fed any prior {πm} and T(y_obs) (but
not the observation y_obs directly), produces a probability vector
(ψ1({πm}, T(y_obs)), …, ψM({πm}, T(y_obs)))ᵀ that is equal to
(Pr[M = 1 | Y = y_obs], …, Pr[M = M | Y = y_obs])ᵀ
whenever both the condition Σ_m πm f_{Y|M=m}(y_obs) > 0 and the
condition y_obs ∉ Y0 are satisfied.]
c
Lecture 11,
Amos Lapidoth 2017
Technicalities
c
Lecture 11,
Amos Lapidoth 2017
Sufficient Statistic—Formal Definition
T: ℝ^d → ℝ^{d′} forms a sufficient statistic for the densities f_{Y|M=1}, …,
f_{Y|M=M} on ℝ^d if it is Borel measurable and if for some Y0 ⊂ ℝ^d
of Lebesgue measure zero we have that for every prior {πm} there
exist M Borel measurable functions from ℝ^{d′} to [0, 1]
T(y_obs) ↦ ψm({πm}, T(y_obs)), m ∈ M,
such that the vector
(ψ1({πm}, T(y_obs)), …, ψM({πm}, T(y_obs)))ᵀ
is a probability vector and such that this probability vector equals
(Pr[M = 1 | Y = y_obs], …, Pr[M = M | Y = y_obs])ᵀ
whenever both the condition y_obs ∉ Y0 and the condition
Σ_{m=1}^{M} πm f_{Y|M=m}(y_obs) > 0
are satisfied.
c
Lecture 11,
Amos Lapidoth 2017
Guessing Based on a Sufficient Statistic Is Optimal
c
Lecture 11,
Amos Lapidoth 2017
Sufficiency Implies Pairwise Sufficiency
Pr[M = m | Y = y] ≜ πm f_{Y|M=m}(y)/f_Y(y) if f_Y(y) > 0, and 1/M
otherwise, for m ∈ M, y ∈ ℝ^d; we ignore the second case. For a
uniform prior
Pr[M = m′ | Y = y] = M⁻¹ f_{Y|M=m′}(y) / Σ_{m∈M} M⁻¹ f_{Y|M=m}(y),
Pr[M = m″ | Y = y] = M⁻¹ f_{Y|M=m″}(y) / Σ_{m∈M} M⁻¹ f_{Y|M=m}(y).
Dividing,
Pr[M = m′ | Y = y] / Pr[M = m″ | Y = y] = f_{Y|M=m′}(y) / f_{Y|M=m″}(y),
so if the LHS is computable from T(y) then so is the RHS.
c
Lecture 11,
Amos Lapidoth 2017
Pairwise Sufficiency Implies Sufficiency
Consider M densities {fY|M =m (·)}m∈M on Rd , and assume that
0
T : Rd → Rd forms a sufficient statistic for every pair of densities
fY|M =m0 (·), fY|M =m00 (·), where m0 6= m00 are both in M. Then
T (·) is a sufficient statistic for the M densities {fY|M =m (·)}m∈M .
Pr[M = m | Y = y] = πm f_{Y|M=m}(y) / f_Y(y), f_Y(y) > 0, m ∈ M, y ∈ ℝ^d.
Consequently, for any prior,
Pr[M = m | Y = y] = πm f_{Y|M=m}(y) / f_Y(y)
= πm f_{Y|M=m}(y) / Σ_{m′∈M} πm′ f_{Y|M=m′}(y)
= πm / Σ_{m′∈M} πm′ ( f_{Y|M=m′}(y) / f_{Y|M=m}(y) ).
c
Lecture 11,
Amos Lapidoth 2017
A Markov Condition
A Borel measurable function T: ℝ^d → ℝ^{d′} forms a sufficient
statistic for the M densities {f_{Y|M=m}(·)}_{m∈M} if, and only if, for
any prior {πm}
M −− T(Y) −− Y (a Markov chain).
c
Lecture 11,
Amos Lapidoth 2017
Simulating the Observables
• The condition
M −− T(Y) −− Y
is equivalent to the conditional distribution of Y given (M, T(Y)) being
the same as the conditional distribution of Y given T(Y) alone.
• Stated differently, the distribution of Y given T (Y) under
fY|M =m does not depend on m.
• If we generate Ỹ from T (Y) according to this conditional
law, then Ỹ will be of the same conditional law given M = m
as Y.
• We could then feed Ỹ to a decoder that was designed for Y
and get the same performance!
c
Lecture 11,
Amos Lapidoth 2017
Guessing Based on the Simulated Observables
[Block diagram: T(y_obs) and local randomness Θ from a random number
generator are fed to a simulator of P_{Y|T(Y)=T(y_obs)}, producing
Ỹ(T(y_obs), Θ), which is fed to the given rule for guessing M based on Y.]
Figure 22.2: If T (Y) forms a sufficient statistic for guessing M based on Y, then—
even though Y cannot typically be recovered from T (Y)—the performance of any
given detector based on Y can be achieved based on T (Y) and a local random
number generator as follows. Using T (yobs ) and local randomness Θ, one produces
a Ỹ whose conditional law given M = m is the same as that of Y, for each m ∈ M.
One then feeds Ỹ to the given detector.
c
Lecture 11,
Amos Lapidoth 2017
The Example Revisited (1)
Let H have a uniform prior. We observe (Y1, Y2). Conditional on
H = 0, they are IID N(0, σ0²), whereas conditional on H = 1 they
are IID N(0, σ1²), where σ0 > σ1 > 0. Thus,
f_{Y1,Y2|H=0}(y1, y2) = (1/(2πσ0²)) exp(−(y1² + y2²)/(2σ0²)), y1, y2 ∈ ℝ,
f_{Y1,Y2|H=1}(y1, y2) = (1/(2πσ1²)) exp(−(y1² + y2²)/(2σ1²)), y1, y2 ∈ ℝ.
LR(y1, y2) > 1 ⟺ exp( (1/2)(1/σ1² − 1/σ0²)(y1² + y2²) ) > σ0²/σ1²
⟺ (1/2)(1/σ1² − 1/σ0²)(y1² + y2²) > ln(σ0²/σ1²)
⟺ ((σ0² − σ1²)/(2σ0²σ1²))(y1² + y2²) > ln(σ0²/σ1²)
⟺ y1² + y2² > (2σ0²σ1²/(σ0² − σ1²)) ln(σ0²/σ1²).
c
Lecture 11,
Amos Lapidoth 2017
The Example Revisited (3)
c
Lecture 11,
Amos Lapidoth 2017
The Example Revisited—Simulating the Observables
c
Lecture 11,
Amos Lapidoth 2017
Guessing Based on the Simulated Observables
[Block diagram: T(y_obs) and local randomness Θ from a random number
generator are fed to a simulator of P_{Y|T(Y)=T(y_obs)}, producing
Ỹ(T(y_obs), Θ), which is fed to the given rule for guessing M based on Y.]
Figure 22.2: If T (Y) forms a sufficient statistic for guessing M based on Y, then—
even though Y cannot typically be recovered from T (Y)—the performance of any
given detector based on Y can be achieved based on T (Y) and a local random
number generator as follows. Using T (yobs ) and local randomness Θ, one produces
a Ỹ whose conditional law given M = m is the same as that of Y, for each m ∈ M.
One then feeds Ỹ to the given detector.
c
Lecture 11,
Amos Lapidoth 2017
Guessing whether M Lies in a Given Subset of M
c
Lecture 11,
Amos Lapidoth 2017
Indeed
Pr[Y ∈ A | M ∈ K] = (1/Pr[M ∈ K]) Pr[M ∈ K, Y ∈ A]
= (1/Pr[M ∈ K]) Σ_{m∈K} Pr[M = m, Y ∈ A]
= (1/Pr[M ∈ K]) Σ_{m∈K} Pr[M = m] Pr[Y ∈ A | M = m]
= (1/Pr[M ∈ K]) Σ_{m∈K} πm ∫_A f_{Y|M=m}(y) dy
= ∫_A ( (1/Pr[M ∈ K]) Σ_{m∈K} πm f_{Y|M=m}(y) ) dy,
and the integrand is f_{Y|M∈K}(y).
c
Lecture 11,
Amos Lapidoth 2017
Sufficiency and Testing whether M is in K
Let T: ℝ^d → ℝ^{d′} form a sufficient statistic for the M densities
{f_{Y|M=m}(·)}_{m∈M}. Then T(·) is also sufficient for
f_{Y|M∈K}(·), f_{Y|M∉K}(·).
Indeed,
f_{Y|M∈K}(y) = (1/Pr[M ∈ K]) Σ_{m∈K} πm f_{Y|M=m}(y),
f_{Y|M∉K}(y) = (1/Pr[M ∉ K]) Σ_{m∉K} πm f_{Y|M=m}(y),
so
f_{Y|M∈K}(y) / f_{Y|M∉K}(y)
= (Pr[M ∉ K]/Pr[M ∈ K]) · Σ_{m∈K} πm f_{Y|M=m}(y) / Σ_{m∉K} πm f_{Y|M=m}(y)
= (Pr[M ∉ K]/Pr[M ∈ K]) · Σ_{m∈K} πm (f_{Y|M=m}(y)/f_{Y|M=1}(y)) / Σ_{m∉K} πm (f_{Y|M=m}(y)/f_{Y|M=1}(y)),
and all the terms are computable from T(y) and M’s PMF.
c
Lecture 11,
Amos Lapidoth 2017
Next Week
Thank you!
c
Lecture 11,
Amos Lapidoth 2017
Communication and Detection Theory:
Lecture 12
Amos Lapidoth
ETH Zurich
c
Lecture 12,
Amos Lapidoth 2017
Today
c
Lecture 12,
Amos Lapidoth 2017
The Multivariate Gaussian Distribution (Chapter 23).
c
Lecture 12,
Amos Lapidoth 2017
Definitions
c
Lecture 12,
Amos Lapidoth 2017
Orthogonal Matrix—Definition
A matrix U ∈ ℝ^{n×n} is orthogonal if
UUᵀ = I_n,
or, equivalently,
UᵀU = I_n.
c
Lecture 12,
Amos Lapidoth 2017
Writing U in terms of its columns,
U = (ψ1 · · · ψn),
the condition UᵀU = I_n is
I_n = (ψ1 · · · ψn)ᵀ (ψ1 · · · ψn)
= ( ψ1ᵀψ1  ψ1ᵀψ2  · · ·  ψ1ᵀψn
    ψ2ᵀψ1  ψ2ᵀψ2  · · ·  ψ2ᵀψn
    ⋮       ⋮       ⋱      ⋮
    ψnᵀψ1  ψnᵀψ2  · · ·  ψnᵀψn ).
c
Lecture 12,
Amos Lapidoth 2017
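A quick numeric check that a rotation matrix satisfies UᵀU = I_n, i.e., that its columns are orthonormal; a minimal sketch:

```python
import math

theta = 0.73  # arbitrary rotation angle
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def gram(U):
    # Compute U^T U entrywise: (U^T U)_{ij} = psi_i^T psi_j for columns psi_i of U.
    n = len(U)
    return [[sum(U[k][i] * U[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

G = gram(U)
for i in range(2):
    for j in range(2):
        assert abs(G[i][j] - (1.0 if i == j else 0.0)) < 1e-12
print("U^T U = I: the columns are orthonormal")
```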
The Columns of an Orthogonal Matrix
c
Lecture 12,
Amos Lapidoth 2017
2 × 2 Orthogonal Matrices
c
Lecture 12,
Amos Lapidoth 2017
Eigenvectors of Symmetric Matrices (1)
Aψ = λψ.
c
Lecture 12,
Amos Lapidoth 2017
Eigenvectors of Symmetric Matrices (2)
A (ψ1 · · · ψn) = (Aψ1 · · · Aψn)
and
(ψ1 · · · ψn) diag(λ1, …, λn) = (λ1ψ1 · · · λnψn).
c
Lecture 12,
Amos Lapidoth 2017
Eigenvectors of Symmetric Matrices (3)
The eigenvectors/eigenvalues relation is thus AU = UΛ, where
U = (ψ1 · · · ψn) and Λ = diag(λ1, …, λn).
Thus,
A = UΛU⁻¹.
The orthonormality of the eigenvectors is equivalent to U being
orthogonal. So, alternatively,
A = UΛUᵀ.
c
Lecture 12,
Amos Lapidoth 2017
Spectral Decomposition Theorem for Real Symmetric
Matrices
Every symmetric A ∈ ℝ^{n×n} can be written as A = UΛUᵀ, with U orthogonal and Λ diagonal.
c
Lecture 12,
Amos Lapidoth 2017
Positive Semidefinite Matrices
• K ∈ ℝ^{n×n} is positive semidefinite or nonnegative definite, K ⪰ 0,
if K is symmetric and
αᵀKα ≥ 0, α ∈ ℝⁿ.
• K is positive definite, K ≻ 0,
if K is symmetric and
αᵀKα > 0, α ≠ 0, α ∈ ℝⁿ.
c
Lecture 12,
Amos Lapidoth 2017
Characterizing Positive Semidefinite Matrices
K = ST S
K = UΛUT ,
c
Lecture 12,
Amos Lapidoth 2017
Characterizing Positive Definite Matrices
K = UΛUT ,
c
Lecture 12,
Amos Lapidoth 2017
Finding S Satisfying K = ST S
Given K ⪰ 0, how can we find a matrix S satisfying K = SᵀS?
There are many. E.g., find matrices U and Λ as above satisfying
K = UΛUᵀ.
Define
Λ^{1/2} = diag(√λ1, …, √λn).
Now choose
S = Λ^{1/2} Uᵀ.
Indeed, with this definition of S we have
SᵀS = (Λ^{1/2} Uᵀ)ᵀ (Λ^{1/2} Uᵀ)
= U Λ^{1/2} Λ^{1/2} Uᵀ = UΛUᵀ = K.
c
Lecture 12,
Amos Lapidoth 2017
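For a 2×2 positive semidefinite K the recipe can be carried out by hand, using the closed-form eigendecomposition of a symmetric 2×2 matrix; an illustrative sketch with an arbitrary example K:

```python
import math

def sqrt_factor(a, b, c):
    # K = [[a, b], [b, c]] symmetric PSD; return S with S^T S = K
    # via K = U Lam U^T and S = Lam^{1/2} U^T (closed-form 2x2 eigendecomposition).
    half_tr = (a + c) / 2
    radius = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    lam1, lam2 = half_tr + radius, half_tr - radius
    # Unit eigenvector for lam1 (b assumed nonzero in this sketch).
    n1 = math.hypot(b, lam1 - a)
    u1 = (b / n1, (lam1 - a) / n1)
    u2 = (-u1[1], u1[0])  # orthogonal complement = eigenvector for lam2
    r1, r2 = math.sqrt(lam1), math.sqrt(max(lam2, 0.0))
    # S = diag(r1, r2) @ U^T, where U has columns u1, u2.
    return [[r1 * u1[0], r1 * u1[1]],
            [r2 * u2[0], r2 * u2[1]]]

S = sqrt_factor(2.0, 1.0, 2.0)  # K = [[2, 1], [1, 2]], eigenvalues 3 and 1
# Verify S^T S = K entrywise.
StS = [[sum(S[k][i] * S[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(StS[i][j] - [[2, 1], [1, 2]][i][j]) < 1e-9 for i in range(2) for j in range(2))
print("S^T S reproduces K")
```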
Random Vectors
c
Lecture 12,
Amos Lapidoth 2017
Expectations
c
Lecture 12,
Amos Lapidoth 2017
The Covariance Matrix
The n × n covariance matrix K_XX of the random n-vector X is
K_XX ≜ E[ (X − E[X]) (X − E[X])ᵀ ]
= E[ (X^(1) − E[X^(1)], …, X^(n) − E[X^(n)])ᵀ (X^(1) − E[X^(1)], …, X^(n) − E[X^(n)]) ]
= ( Var[X^(1)]         Cov[X^(1), X^(2)]  · · ·  Cov[X^(1), X^(n)]
    Cov[X^(2), X^(1)]  Var[X^(2)]         · · ·  Cov[X^(2), X^(n)]
    ⋮                   ⋮                   ⋱      ⋮
    Cov[X^(n), X^(1)]  Cov[X^(n), X^(2)]  · · ·  Var[X^(n)] ).
c
Lecture 12,
Amos Lapidoth 2017
The Covariance Matrix of a Subset of the Components
The covariance matrix of a subvector is obtained by keeping only the
corresponding rows and columns of K_XX; e.g., the covariance matrix of
(X^(2), X^(4))ᵀ consists of the entries of K_XX in rows 2 and 4 and
columns 2 and 4.
c
Lecture 12,
Amos Lapidoth 2017
Mean and Covariance under Linear Transformation
If H is a random matrix and A, B are deterministic matrices then
E[AH] = A E[H], E[HB] = E[H] B.
The transpose operation commutes with expectation:
E[Hᵀ] = (E[H])ᵀ.
As to the covariance matrix,
Y = AX ⟹ K_YY = A K_XX Aᵀ.
Indeed,
K_YY ≜ E[ (Y − E[Y])(Y − E[Y])ᵀ ]
= E[ (AX − E[AX])(AX − E[AX])ᵀ ]
= E[ A(X − E[X]) (A(X − E[X]))ᵀ ]
= E[ A(X − E[X])(X − E[X])ᵀ Aᵀ ]
= A E[ (X − E[X])(X − E[X])ᵀ ] Aᵀ
= A K_XX Aᵀ.
c
Lecture 12,
Amos Lapidoth 2017
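The identity K_YY = A K_XX Aᵀ can be verified on a toy discrete distribution (chosen arbitrarily for this sketch):

```python
# X takes four equally likely values in R^2 (centered by construction); Y = A X.
xs = [(1.0, 0.0), (-1.0, 0.0), (0.0, 2.0), (0.0, -2.0)]
A = [[1.0, 2.0],
     [0.0, 3.0]]

def apply(A, x):
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

def cov(samples):
    # Covariance matrix of a centered, equally likely finite distribution.
    n = len(samples)
    return [[sum(s[i] * s[j] for s in samples) / n for j in range(2)] for i in range(2)]

Kxx = cov(xs)
Kyy_direct = cov([apply(A, x) for x in xs])
# A Kxx A^T:
AK = [[sum(A[i][k] * Kxx[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
Kyy_formula = [[sum(AK[i][k] * A[j][k] for k in range(2)) for j in range(2)] for i in range(2)]

assert all(abs(Kyy_direct[i][j] - Kyy_formula[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("K_YY = A K_XX A^T verified on the toy distribution")
```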
Singular Covariance Matrices—Example
Let X be centered with covariance matrix
K_XX = ( 3  5  7
         5  9  13
         7  13 19 ).
As we’ll see, because the columns of K_XX satisfy
−(3, 5, 7)ᵀ + 2(5, 9, 13)ᵀ − (7, 13, 19)ᵀ = 0,
it follows that
−X^(1) + 2X^(2) − X^(3) = 0, with probability one.
Consequently, in manipulating X we can pick the two components
X^(2), X^(3), which are of nonsingular covariance matrix ( 9 13; 13 19 ), and
keep track “on the side” of the fact that X^(1) is equal, with
probability one, to 2X^(2) − X^(3).
c
Lecture 12,
Amos Lapidoth 2017
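The linear dependence among the columns of this K_XX is quick to verify; a minimal sketch:

```python
K = [[3, 5, 7],
     [5, 9, 13],
     [7, 13, 19]]

# Columns c1, c2, c3 of K satisfy -c1 + 2*c2 - c3 = 0.
for i in range(3):
    assert -K[i][0] + 2 * K[i][1] - K[i][2] == 0

# Hence alpha = (-1, 2, -1) satisfies K alpha = 0, so K is singular and
# the variance of -X1 + 2*X2 - X3, namely alpha^T K alpha, is zero.
alpha = (-1, 2, -1)
assert all(sum(K[i][j] * alpha[j] for j in range(3)) == 0 for i in range(3))
print("K is singular; -X1 + 2*X2 - X3 has zero variance")
```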
Manipulating Singular Covariance Matrices
c
Lecture 12,
Amos Lapidoth 2017
The Characteristic Function
c
Lecture 12,
Amos Lapidoth 2017
Random Vectors of Identical Characteristic Functions
Have Identical Laws
Establishing Independence via the Characteristic Function
X and Y are independent iff
\[
E\big[e^{i(\varpi_1 X + \varpi_2 Y)}\big] = E\big[e^{i \varpi_1 X}\big]\, E\big[e^{i \varpi_2 Y}\big], \quad \varpi_1, \varpi_2 \in \mathbb{R}. \quad (16)
\]
Gaussian Random Vectors
A random n-vector X is Gaussian if for some positive integer m
there exists an n × m matrix A; a standard Gaussian random
m-vector W; and a deterministic vector µ ∈ Rn such that
\[
X \overset{\mathcal{L}}{=} A W + \mu.
\]
The law of X does not determine A. Not even the number of its
columns m. It only determines AAT .
Every positive semidefinite matrix is the covariance matrix of some
centered Gaussian random vector.
Examples and Basic Properties
1. Every N(µ, σ²) random variable, when viewed as a random 1-vector, is Gaussian.
Such a random variable has the same law as σW + µ, where W is a standard univariate Gaussian.
2. Every deterministic vector is a Gaussian vector.
Choose the matrix A as the all-zero matrix 0.
3. If the components of X are independent univariate Gaussians
(not necessarily of equal variance), then X is a Gaussian
vector.
Choose A to be an appropriate diagonal matrix.
The Definition of Independent Vectors
Stacking Independent Gaussians Yields a Gaussian
Let A1 ∈ R^{n1×m1} and µ1 ∈ R^{n1} be such that X1 has the same law as A1 W1 + µ1, where W1 is a standard Gaussian m1-vector. Similarly, let A2 ∈ R^{n2×m2} and µ2 ∈ R^{n2} represent the n2-vector X2. Let W1 & W2 be independent standard Gaussians. Then
\[
\underbrace{\begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}}_{A}
\begin{pmatrix} W_1 \\ W_2 \end{pmatrix}
+ \underbrace{\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}}_{\mu}
= \begin{pmatrix} A_1 W_1 + \mu_1 \\ A_2 W_2 + \mu_2 \end{pmatrix}
\overset{\mathcal{L}}{=} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix},
\]
where we have used that if X1 & X2 are independent, X1 and X1′ have the same law, X2 and X2′ have the same law, and X1′ & X2′ are independent, then
\[
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}
\overset{\mathcal{L}}{=}
\begin{pmatrix} X_1' \\ X_2' \end{pmatrix}.
\]
An Affine Transformation of a Gaussian Is a Gaussian
Indeed, if X has the same law as AW + µ, then
\[
C X + d \overset{\mathcal{L}}{=} C(A W + \mu) + d = (CA) W + (C\mu + d),
\]
so CX + d is Gaussian.
Permuting and Selecting Components
Permuting the components of a Gaussian vector results in a
Gaussian vector. Hence we speak of jointly Gaussian without
specifying the order.
Choose C as a permutation matrix, e.g.,
\[
\begin{pmatrix} X^{(3)} \\ X^{(1)} \\ X^{(2)} \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X^{(1)} \\ X^{(2)} \\ X^{(3)} \end{pmatrix}.
\]
Constructing a random p-vector from a Gaussian n-vector by
picking p of its components (allowing for repetition) yields a
Gaussian vector.
Picking is also an affine transformation, e.g.,
\[
\begin{pmatrix} X^{(3)} \\ X^{(1)} \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix} X^{(1)} \\ X^{(2)} \\ X^{(3)} \end{pmatrix}.
\]
Every Component of a Gaussian Vector is a Gaussian RV
The Mean and Covariance Determine the
Law of a Gaussian
Computing the Characteristic Function of
a Gaussian Vector
• We compute ΦX (·) when X is a Gaussian n-vector.
• We need to compute E[e^{iϖᵀX}] for every ϖ ∈ Rⁿ.
• ϖᵀX is a Gaussian 1-vector, whose sole component is thus a Gaussian RV. Its mean is ϖᵀµ and its variance is ϖᵀ K_XX ϖ:
\[
\varpi^T X \sim \mathcal{N}\big(\varpi^T \mu,\; \varpi^T K_{XX}\, \varpi\big), \quad \varpi \in \mathbb{R}^n.
\]
There Is only One Gaussian Distribution of Given Mean
and Covariance
• Every positive semidefinite matrix is the covariance matrix of
some centered Gaussian random vector.
• The law of a Gaussian is determined by the mean and
covariance.
For every µ ∈ Rn and every positive semidefinite matrix K ∈ Rn×n
there exists one, and only one, Gaussian distribution of mean µ
and covariance matrix K. We denote it
N (µ, K) .
\[
X \sim \mathcal{N}(\mu, K) \implies \Phi_X(\varpi) = e^{-\frac{1}{2} \varpi^T K \varpi + i \varpi^T \mu}, \quad \varpi \in \mathbb{R}^n.
\]
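The closed form can be checked by Monte Carlo: compare the empirical value of E[e^{iϖᵀX}] with the formula. The mean, mixing matrix, and frequency vector below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
K = A @ A.T                          # covariance of X = A W + mu

# Samples of X and an arbitrary frequency vector w (the slide's varpi).
W = rng.standard_normal((2, 100_000))
X = A @ W + mu[:, None]
w = np.array([0.3, -0.5])

phi_empirical = np.mean(np.exp(1j * (w @ X)))
phi_formula = np.exp(-0.5 * w @ K @ w + 1j * w @ mu)
assert abs(phi_empirical - phi_formula) < 0.02
```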
Jointly Gaussian Vectors
Independence between Jointly Gaussian Vectors
Suppose that X and Y are jointly Gaussian. Then they are
independent iff they are uncorrelated.
• Independence always implies uncorrelatedness.
• Suppose now that X and Y are centered, jointly Gaussian,
and uncorrelated. Let X0 and Y0 be independent random
L L
vectors such that X0 = X and Y0 = Y.
\[
\begin{pmatrix} X' \\ Y' \end{pmatrix} \text{ is Gaussian of covariance }
\begin{pmatrix} K_{XX} & 0 \\ 0 & K_{YY} \end{pmatrix}, \text{ like }
\begin{pmatrix} X \\ Y \end{pmatrix}!
\]
More Generally
Pairwise Independence
Pairwise Independence of Jointly Gaussians
The Matrix A Can be Chosen Square
If X is a centered Gaussian n-vector, then there exists a deterministic square n × n matrix A such that X has the same law as AW, where W is a standard Gaussian n-vector. Indeed, we can write
\[
K_{XX} = S^T S.
\]
A Canonical Representation of a Centered Gaussian
We can generate any Gaussian by stretching a standard Gaussian
and rotating the result:
Let X be a centered Gaussian n-vector. Then
\[
X \overset{\mathcal{L}}{=} U \Lambda^{1/2} W
= U \begin{pmatrix} \sqrt{\lambda_1}\, W^{(1)} \\ \vdots \\ \sqrt{\lambda_n}\, W^{(n)} \end{pmatrix}.
\]
If X ∼ N(µ, σ²) with σ ≠ 0, then (X − µ)/σ is standard. What about vectors?
Suppose X ∼ N(µ, K), where µ ∈ Rⁿ and K ≻ 0. Let Λ and U be as in the spectral representation of K. Then,
\[
\Lambda^{-1/2}\, U^T (X - \mu) \sim \mathcal{N}(0, I_n),
\]
where Λ^{−1/2} is the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal elements of Λ.
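A numerical sketch of this whitening transform with an arbitrary positive definite K: generate X = U Λ^{1/2} W + µ, then verify that Λ^{−1/2} Uᵀ(X − µ) has approximately identity covariance and zero mean:

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([1.0, 2.0, 3.0])
B = rng.standard_normal((3, 3))
K = B @ B.T + 0.5 * np.eye(3)        # positive definite covariance

# Spectral representation K = U diag(lam) U^T.
lam, U = np.linalg.eigh(K)

# Generate X ~ N(mu, K) by stretching and rotating a standard Gaussian.
Wstd = rng.standard_normal((3, 200_000))
X = U @ (np.sqrt(lam)[:, None] * Wstd) + mu[:, None]

# Whiten: Lambda^{-1/2} U^T (X - mu) should be ~ N(0, I).
Z = (1 / np.sqrt(lam))[:, None] * (U.T @ (X - mu[:, None]))
assert np.allclose(np.cov(Z), np.eye(3), atol=0.05)
assert np.allclose(Z.mean(axis=1), 0, atol=0.05)
```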
The Density of a Gaussian Vector (1)
If X ∼ N(0, K) with K ≻ 0, then X has the same law as BW, where
\[
B = U \Lambda^{1/2}.
\]
Thus,
\[
f_X(x) = \frac{\exp\big(-\frac{1}{2} x^T K^{-1} x\big)}{\sqrt{(2\pi)^n \det(K)}}, \quad x \in \mathbb{R}^n.
\]
Linear Functionals of Gaussian Vectors
• A linear functional on Rn is a linear mapping from Rn to R.
• All linear functionals are of the form
x 7→ αT x.
Proof
Next Week
Thank you!
Communication and Detection Theory:
Lecture 13
Amos Lapidoth
ETH Zurich
c
Lecture 13,
Amos Lapidoth 2017
Today
Notation
A continuous-time stochastic process X(t), t ∈ R is a family of
random variables that are defined on a common probability space
(Ω, F, P ) and that are indexed by the reals.
The Finite-Dimensional Distributions
The FDDs of a continuous-time SP X(t) is the collection of all
joint distributions of
X(t1 ), . . . , X(tn )
where
• n can be any positive integer and
• t1 , . . . , tn ∈ R are arbitrary epochs.
To specify the FDDs of X(t) we must specify for every n ∈ N
and for every choice of the epochs t1 , . . . , tn ∈ R the distribution of
X(t1 ), . . . , X(tn ) .
Do the FDDs Tell Us Everything about a SP?
The σ-algebra generated by X(t) is the set of events whose probability can be computed from the FDDs of X(t) using only the axioms of probability.
Independent Stochastic Processes
X(t) and Y (t) are independent stochastic processes if for
every n ∈ N and any choice of the epochs t1 , . . . , tn ∈ R,
X(t1 ), . . . , X(tn ) & Y (t1 ), . . . , Y (tn ) are independent.
Gaussian SP: Definition
X(t) is a Gaussian stochastic process if for every n ∈ N and
every choice of the epochs t1 , . . . , tn ∈ R, the random vector
(X(t1 ), . . . , X(tn ))T is Gaussian.
The FDDs of Gaussian SPs
If X(t) is a centered Gaussian SP, then its FDDs are determined
by the mapping
(t1 , t2 ) 7→ Cov X(t1 ), X(t2 ) , t1 , t2 ∈ R.
Proof: Since X(t) is a Gaussian SP,
\[
\big(X(t_1), \ldots, X(t_n)\big)^T
\]
is Gaussian, and its law is specified by its mean (which is zero) and its covariance matrix, which is determined by the mapping:
\[
\begin{pmatrix}
\operatorname{Cov}[X(t_1), X(t_1)] & \operatorname{Cov}[X(t_1), X(t_2)] & \cdots & \operatorname{Cov}[X(t_1), X(t_n)] \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{Cov}[X(t_n), X(t_1)] & \operatorname{Cov}[X(t_n), X(t_2)] & \cdots & \operatorname{Cov}[X(t_n), X(t_n)]
\end{pmatrix}.
\]
Stationary Stochastic Processes
X(t) is stationary if all its time shifts have identical FDDs: for
every τ ∈ R, every n ∈ N and all epochs t1 , . . . , tn ∈ R,
\[
\big(X(t_1 + \tau), \ldots, X(t_n + \tau)\big) \overset{\mathcal{L}}{=} \big(X(t_1), \ldots, X(t_n)\big).
\]
• Choosing n = 1: if X(t) is stationary, then all its samples
have the same distribution
\[
X(t) \overset{\mathcal{L}}{=} X(t + \tau), \quad t, \tau \in \mathbb{R}.
\]
• Choosing n = 2: if X(t) is stationary, then the joint
distribution of any two of its samples depends on the elapsed
time between them and not on the absolute time at which
they are taken
\[
\big(X(t_1), X(t_2)\big) \overset{\mathcal{L}}{=} \big(X(t_1 + \tau), X(t_2 + \tau)\big), \quad t_1, t_2, \tau \in \mathbb{R}.
\]
Wide-Sense Stationary Stochastic Processes
X(t), t ∈ R is wide-sense stationary if
1. it is of finite variance;
2. its mean is constant: E[X(t)] = E[X(t + τ)], t, τ ∈ R; and
3. its autocovariance is shift-invariant: Cov[X(t1 + τ), X(t2 + τ)] = Cov[X(t1), X(t2)], t1, t2, τ ∈ R.
Every finite-variance stationary SP is also wide-sense stationary. Indeed, if X(t) is stationary then
\[
X(t) \overset{\mathcal{L}}{=} X(t + \tau), \quad t, \tau \in \mathbb{R},
\]
\[
\big(X(t_1), X(t_2)\big) \overset{\mathcal{L}}{=} \big(X(t_1 + \tau), X(t_2 + \tau)\big), \quad t_1, t_2, \tau \in \mathbb{R}.
\]
Autocovariance Function
The autocovariance function KXX : R → R of a WSS SP X(t) is
KXX (τ ) , Cov X(t + τ ), X(t) ,
(which does not depend on t because X(t) is WSS).
Stationary Gaussian Stochastic Processes
Assume now that X(t) is WSS. We’ll show
\[
\big(X(t_1 + \tau), \ldots, X(t_n + \tau)\big) \overset{\mathcal{L}}{=} \big(X(t_1), \ldots, X(t_n)\big).
\]
The former's covariance is
\[
\begin{pmatrix}
\operatorname{Cov}[X(t_1), X(t_1)] & \cdots & \operatorname{Cov}[X(t_1), X(t_n)] \\
\vdots & \ddots & \vdots \\
\operatorname{Cov}[X(t_n), X(t_1)] & \cdots & \operatorname{Cov}[X(t_n), X(t_n)]
\end{pmatrix}.
\]
The FDDs of a Stationary Gaussian SP
The FDDs of a centered stationary Gaussian SP are fully specified
by its autocovariance function.
Proof: Since X(t) is Gaussian, the vector
T
X(t1 ), . . . , X(tn )
is Gaussian, and its law thus determined by its mean & covariance.
• Its mean is 0 because X(t) is centered.
• The Row-j Column-ℓ entry of its covariance matrix is
\[
\operatorname{Cov}[X(t_j), X(t_\ell)] = K_{XX}(t_j - t_\ell),
\]
which the autocovariance function determines.
The PSD of a Continuous-Time WSS SP
The WSS SP X(t) is of power spectral density (PSD) S_XX if S_XX : R → R is nonnegative, symmetric, and integrable, and its IFT is the autocovariance function K_XX of X(t):
\[
K_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f)\, e^{i 2\pi f \tau}\, df, \quad \tau \in \mathbb{R}.
\]
Remarks on the PSD
If KXX is continuous at the origin and integrable, then X(t) is of
PSD K̂XX (·). (Proposition 25.7.1.)
The PSD and Operational PSD of a WSS SP
Let X(t) be a measurable, centered, WSS SP of a continuous
autocovariance function KXX . Let S(·) be a nonnegative,
symmetric, integrable function. Then the following two conditions
are equivalent:
1. KXX is the Inverse Fourier Transform of S(·).
2. For every integrable h : R → R, the power in X ⋆ h is given by
\[
\text{Power of } X \star h = \int_{-\infty}^{\infty} S(f)\, |\hat{h}(f)|^2\, df.
\]
(Theorem 25.14.3)
The Average Power
Some technicalities:
• For some ω ∈ Ω the mapping t 7→ X 2 (ω, t) might be
ill-behaved.
• The mapping from ω to the result of the integral might not
be measurable.
These difficulties are eliminated if X(t) is measurable.
Power in a Centered WSS SP
If X(t) is a measurable, centered, WSS SP of autocovariance
function KXX , then for all a < b
\[
\omega \mapsto \frac{1}{b - a} \int_a^b X^2(\omega, t)\, dt
\]
defines a RV (possibly taking on the value +∞) satisfying
\[
E\bigg[\frac{1}{b - a} \int_a^b X^2(t)\, dt\bigg] = K_{XX}(0).
\]
The power in X(t) is thus K_XX(0).
Proof: Swapping integration and expectation we obtain
\[
E\bigg[\int_a^b X^2(t)\, dt\bigg]
= \int_a^b E\big[X^2(t)\big]\, dt
= \int_a^b K_{XX}(0)\, dt
= (b - a)\, K_{XX}(0).
\]
Linear Functionals
Let X(t) be WSS. We wish to study the RV
\[
\omega \mapsto \int_{-\infty}^{\infty} X(\omega, t)\, s(t)\, dt.
\]
As to the variance:
Linear Functionals
\[
\operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} s(t)\, K_{XX}(t - \tau)\, s(\tau)\, dt\, d\tau
\]
\[
= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} s(\sigma + \tau)\, K_{XX}(\sigma)\, s(\tau)\, d\sigma\, d\tau
= \int_{-\infty}^{\infty} K_{XX}(\sigma) \int_{-\infty}^{\infty} s(\sigma + \tau)\, s(\tau)\, d\tau\, d\sigma.
\]
Or
\[
\operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
= \int_{-\infty}^{\infty} S_{XX}(f)\, |\hat{s}(f)|^2\, df.
\]
What if X(t) is WSS but not Centered?
Consider the centered SP X̃(t) ≜ X(t) − µ, where µ denotes the (constant) mean of X(t). Then
\[
\operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
= \operatorname{Var}\bigg[\int_{-\infty}^{\infty} \big(\tilde{X}(t) + \mu\big)\, s(t)\, dt\bigg]
\]
\[
= \operatorname{Var}\bigg[\int_{-\infty}^{\infty} \tilde{X}(t)\, s(t)\, dt + \mu \int_{-\infty}^{\infty} s(t)\, dt\bigg]
= \operatorname{Var}\bigg[\int_{-\infty}^{\infty} \tilde{X}(t)\, s(t)\, dt\bigg].
\]
Linear Functionals of Gaussian Processes
If X(t) is stationary and Gaussian, then
\[
\int_{-\infty}^{\infty} X(t)\, s(t)\, dt + \sum_{\nu=1}^{n} \alpha_\nu X(t_\nu) \quad \text{is a Gaussian RV.}
\]
Here:
• s : R → R is deterministic and integrable;
• n is an arbitrary nonnegative integer;
• α1 , . . . , αn ∈ R are arbitrary coefficients; and
• the epochs t1 , . . . , tn ∈ R are arbitrary.
Some Intuition
Computing the Mean
\[
E\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt + \sum_{\nu=1}^{n} \alpha_\nu X(t_\nu)\bigg]
= E\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg] + \sum_{\nu=1}^{n} \alpha_\nu\, E\big[X(t_\nu)\big]
\]
\[
= E[X(0)] \bigg(\int_{-\infty}^{\infty} s(t)\, dt + \sum_{\nu=1}^{n} \alpha_\nu\bigg).
\]
The Variance (1)
\[
\operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt + \sum_{\nu=1}^{n} \alpha_\nu X(t_\nu)\bigg]
= \operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
\]
\[
+ \operatorname{Var}\bigg[\sum_{\nu=1}^{n} \alpha_\nu X(t_\nu)\bigg]
+ 2 \sum_{\nu=1}^{n} \alpha_\nu \operatorname{Cov}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt,\; X(t_\nu)\bigg].
\]
We already saw
\[
\operatorname{Var}\bigg[\int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
= \int_{-\infty}^{\infty} K_{XX}(\sigma)\, R_{ss}(\sigma)\, d\sigma.
\]
The Variance (2)
It remains to compute the covariance
\[
E\bigg[X(t_\nu) \int_{-\infty}^{\infty} X(t)\, s(t)\, dt\bigg]
= E\bigg[\int_{-\infty}^{\infty} X(t)\, X(t_\nu)\, s(t)\, dt\bigg]
\]
\[
= \int_{-\infty}^{\infty} s(t)\, E\big[X(t)\, X(t_\nu)\big]\, dt
= \int_{-\infty}^{\infty} s(t)\, K_{XX}(t - t_\nu)\, dt.
\]
Linear Functionals of a Gaussian SP Are Jointly Gaussian
The m linear functionals
\[
\int_{-\infty}^{\infty} X(t)\, s_1(t)\, dt + \sum_{\nu=1}^{n_1} \alpha_{1,\nu}\, X(t_{1,\nu}), \quad \ldots, \quad
\int_{-\infty}^{\infty} X(t)\, s_m(t)\, dt + \sum_{\nu=1}^{n_m} \alpha_{m,\nu}\, X(t_{m,\nu})
\]
of a measurable, stationary, Gaussian SP X(t) are jointly
Gaussian.
Here:
• m ∈ N is the number of functionals;
• the m functions {s_j}_{j=1}^{m} are integrable;
• the coefficients {αj,ν } and the epochs {tj,ν } are deterministic
real numbers for all j ∈ {1, . . . , m} and all ν ∈ {1, . . . , nj }.
\[
\operatorname{Cov}\bigg[\int_{-\infty}^{\infty} X(t)\, s_j(t)\, dt + \sum_{\nu=1}^{n_j} \alpha_{j,\nu}\, X(t_{j,\nu}),\;
\int_{-\infty}^{\infty} X(t)\, s_k(t)\, dt + \sum_{\nu'=1}^{n_k} \alpha_{k,\nu'}\, X(t_{k,\nu'})\bigg]
\]
\[
= \operatorname{Cov}\bigg[\int_{-\infty}^{\infty} X(t)\, s_j(t)\, dt,\; \int_{-\infty}^{\infty} X(t)\, s_k(t)\, dt\bigg]
+ \sum_{\nu=1}^{n_j} \alpha_{j,\nu} \operatorname{Cov}\bigg[X(t_{j,\nu}),\; \int_{-\infty}^{\infty} X(t)\, s_k(t)\, dt\bigg]
\]
\[
+ \sum_{\nu'=1}^{n_k} \alpha_{k,\nu'} \operatorname{Cov}\bigg[X(t_{k,\nu'}),\; \int_{-\infty}^{\infty} X(t)\, s_j(t)\, dt\bigg]
+ \sum_{\nu=1}^{n_j} \sum_{\nu'=1}^{n_k} \alpha_{j,\nu}\, \alpha_{k,\nu'} \operatorname{Cov}\big[X(t_{j,\nu}), X(t_{k,\nu'})\big],
\quad j, k \in \{1, \ldots, m\}.
\]
[Figure: the PSD of white Gaussian noise w.r.t. the bandwidth W: S_NN(f) = N0/2 for |f| ≤ W.]
Key Properties of White Gaussian Noise (1)
• If s is any integrable function that is bandlimited to W Hz,
\[
\int_{-\infty}^{\infty} N(t)\, s(t)\, dt \sim \mathcal{N}\Big(0,\; \frac{N_0}{2}\, \|s\|_2^2\Big).
\]
• If s1, . . . , sm are integrable functions that are bandlimited to W Hz, then the m random variables
\[
\int_{-\infty}^{\infty} N(t)\, s_1(t)\, dt, \quad \ldots, \quad \int_{-\infty}^{\infty} N(t)\, s_m(t)\, dt
\]
are jointly Gaussian.
Key Properties of White Gaussian Noise (2)
• If φ1, . . . , φm are integrable functions that are bandlimited to W Hz and are orthonormal, then
\[
\int_{-\infty}^{\infty} N(t)\, \phi_1(t)\, dt, \; \ldots, \; \int_{-\infty}^{\infty} N(t)\, \phi_m(t)\, dt
\sim \text{IID } \mathcal{N}\Big(0, \frac{N_0}{2}\Big).
\]
• If s is any integrable function that is bandlimited to W Hz,
\[
K_{NN} \star s = \frac{N_0}{2}\, s.
\]
• If s is an integrable function that is bandlimited to W Hz, then for every epoch t ∈ R
\[
\operatorname{Cov}\bigg[\int_{-\infty}^{\infty} N(\sigma)\, s(\sigma)\, d\sigma,\; N(t)\bigg] = \frac{N_0}{2}\, s(t).
\]
Proof (1)
\[
\operatorname{Cov}\bigg[\int_{-\infty}^{\infty} N(t)\, s_j(t)\, dt,\; \int_{-\infty}^{\infty} N(t)\, s_k(t)\, dt\bigg]
= \int_{-\infty}^{\infty} S_{NN}(f)\, \hat{s}_j(f)\, \hat{s}_k^*(f)\, df
\]
\[
= \int_{-W}^{W} S_{NN}(f)\, \hat{s}_j(f)\, \hat{s}_k^*(f)\, df
= \frac{N_0}{2} \int_{-W}^{W} \hat{s}_j(f)\, \hat{s}_k^*(f)\, df
= \frac{N_0}{2}\, \langle s_j, s_k \rangle, \quad j, k \in \{1, \ldots, m\}.
\]
Proof (2)
\[
\big(K_{NN} \star s\big)(t)
= \int_{-\infty}^{\infty} s(\tau)\, K_{NN}(t - \tau)\, d\tau
= \int_{-\infty}^{\infty} s(\tau) \int_{-\infty}^{\infty} S_{NN}(f)\, e^{i 2\pi f (t - \tau)}\, df\, d\tau
\]
\[
= \int_{-\infty}^{\infty} S_{NN}(f)\, e^{i 2\pi f t} \int_{-\infty}^{\infty} s(\tau)\, e^{-i 2\pi f \tau}\, d\tau\, df
= \int_{-\infty}^{\infty} S_{NN}(f)\, \hat{s}(f)\, e^{i 2\pi f t}\, df
\]
\[
= \int_{-W}^{W} S_{NN}(f)\, \hat{s}(f)\, e^{i 2\pi f t}\, df
= \frac{N_0}{2} \int_{-W}^{W} \hat{s}(f)\, e^{i 2\pi f t}\, df
= \frac{N_0}{2}\, s(t), \quad t \in \mathbb{R}.
\]
Proof (3)
\[
\operatorname{Cov}\bigg[\int_{-\infty}^{\infty} N(\sigma)\, s(\sigma)\, d\sigma,\; N(t)\bigg]
= \int_{-\infty}^{\infty} S_{NN}(f)\, \hat{s}(f)\, e^{i 2\pi f t}\, df
= \int_{-W}^{W} S_{NN}(f)\, \hat{s}(f)\, e^{i 2\pi f t}\, df
\]
\[
= \frac{N_0}{2} \int_{-W}^{W} \hat{s}(f)\, e^{i 2\pi f t}\, df
= \frac{N_0}{2}\, s(t), \quad t \in \mathbb{R}.
\]
Projecting a SP
If X is a measurable WSS SP, and if φ1 , . . . , φd ∈ L1 ∩ L2 are
orthonormal, then the projection of X onto span(φ1 , . . . , φd ) is
the SP
\[
(\omega, t) \mapsto \sum_{\ell=1}^{d} \langle X, \phi_\ell \rangle(\omega)\, \phi_\ell(t),
\]
i.e.,
\[
\sum_{\ell=1}^{d} \langle X, \phi_\ell \rangle\, \phi_\ell.
\]
Projecting White Noise
If N(t) is white Gaussian noise of power spectral density N0/2 w.r.t. the bandwidth W, and if φ1, . . . , φd are orthonormal integrable signals that are bandlimited to W Hz, then
\[
\sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell
\quad \text{and} \quad
N - \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell
\]
are independent stochastic processes.
A Small Detour (1)
Suppose
N = g(N ) + h(N ), (17a)
where
g(N ) and h(N ) are independent. (17b)
Then
\[
N \overset{\mathcal{L}}{=} G + H \quad \text{(18a)}
\]
whenever
\[
G \overset{\mathcal{L}}{=} g(N), \quad H \overset{\mathcal{L}}{=} h(N), \quad G \text{ and } H \text{ are independent.} \quad \text{(18b)}
\]
A Small Detour (2)
Simulating White Noise of a Given Projection
Define
\[
N_1 \triangleq \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell, \qquad
N_2 \triangleq N - \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell,
\]
i.e.,
\[
N_1(t) \triangleq \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell(t), \qquad
N_2(t) \triangleq N(t) - \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell(t).
\]
We need to show that for every n ∈ N and epochs t1, . . . , tn ∈ R
\[
\big(N_1(t_1), \ldots, N_1(t_n)\big)^T \quad \text{and} \quad \big(N_2(t_1), \ldots, N_2(t_n)\big)^T
\]
are independent Gaussian vectors. They are jointly Gaussian because they are linear functionals of a Gaussian SP:
\[
N_1(t_\nu) = \Big\langle N, \sum_{\ell=1}^{d} \phi_\ell(t_\nu)\, \phi_\ell \Big\rangle
= \int_{-\infty}^{\infty} N(t) \sum_{\ell=1}^{d} \phi_\ell(t_\nu)\, \phi_\ell(t)\, dt,
\]
\[
N_2(t_{\nu'}) = \int_{-\infty}^{\infty} N(t) \Big(- \sum_{\ell=1}^{d} \phi_\ell(t_{\nu'})\, \phi_\ell(t)\Big)\, dt + N(t_{\nu'}).
\]
It thus remains to establish that they are uncorrelated.
Because N(t) is centered, N1(tν) and N2(tν′) are centered and
\[
\operatorname{Cov}\big[N_1(t_\nu), N_2(t_{\nu'})\big] = E\big[N_1(t_\nu)\, N_2(t_{\nu'})\big].
\]
It thus remains to establish
\[
E\big[N_1(t_\nu)\, N_2(t_{\nu'})\big] = 0, \quad \nu, \nu' \in \{1, \ldots, n\}.
\]
More generally,
\[
E\big[N_1(t)\, N_2(t')\big]
= E\bigg[\sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell(t)
\Big(N(t') - \sum_{\ell'=1}^{d} \langle N, \phi_{\ell'} \rangle\, \phi_{\ell'}(t')\Big)\bigg]
\]
\[
= \sum_{\ell=1}^{d} \phi_\ell(t)\, E\big[\langle N, \phi_\ell \rangle\, N(t')\big]
- \sum_{\ell=1}^{d} \sum_{\ell'=1}^{d} \phi_\ell(t)\, \phi_{\ell'}(t')\, E\big[\langle N, \phi_\ell \rangle \langle N, \phi_{\ell'} \rangle\big]
\]
\[
= \frac{N_0}{2} \sum_{\ell=1}^{d} \phi_\ell(t)\, \phi_\ell(t')
- \frac{N_0}{2} \sum_{\ell=1}^{d} \phi_\ell(t)\, \phi_\ell(t')
= 0, \quad t, t' \in \mathbb{R}.
\]
Next Week
Thank you!
Communication and Detection Theory:
Lecture 14
Amos Lapidoth
ETH Zurich
c
Lecture 14,
Amos Lapidoth 2017
Signals in White Noise
The “message” M takes value in M = {1, . . . , M}, with prior
πm = Pr[M = m], m ∈ M.
The “observation” Y (t) is a continuous-time SP.
Conditional on M = m, Y(t) = sm(t) + N(t).
(I.e., guessing rules that are measurable w.r.t. the σ-algebra generated by Y(t).)
From a SP to a Random Vector
² That is, measurable w.r.t. the σ-algebra generated by Y.
Computing the Inner Products
[Block diagram: Y(t) is passed through the matched filters ~φ1, . . . , ~φd, each sampled at t = 0, to produce ⟨Y, φ1⟩, . . . , ⟨Y, φd⟩, which are fed to the decision rule that produces the guess.]
More Generally
• span(φ1 , . . . , φd ) need not equal span(s1 , . . . , sM ): it suffices
that
span(φ1 , . . . , φd ) ⊇ span(s1 , . . . , sM ).
• The same holds for any vector S provided that T is
computable from S.
• If s̃1 , . . . , s̃n are integrable signals that are bandlimited to W
Hz and
span(s1 , . . . , sM ) ⊆ span(s̃1 , . . . , s̃n ),
then it is optimal to base our guess on
\[
\big(\langle Y, \tilde{s}_1 \rangle, \ldots, \langle Y, \tilde{s}_n \rangle\big)^T.
\]
Indeed, T is computable from this vector because
\[
\phi_\ell = \sum_{j=1}^{n} \alpha_{\ell,j}\, \tilde{s}_j
\implies \langle Y, \phi_\ell \rangle = \sum_{j=1}^{n} \alpha_{\ell,j}\, \langle Y, \tilde{s}_j \rangle.
\]
The Conditional Law of T
Given M = m, we have Y = sm + N so
\[
T = \big(\langle s_m, \phi_1 \rangle, \ldots, \langle s_m, \phi_d \rangle\big)^T
+ \big(\langle N, \phi_1 \rangle, \ldots, \langle N, \phi_d \rangle\big)^T.
\]
Reduction to the Multi-Dimensional Multi-Hypothesis
Gaussian Problem
Before → Now:
• observed vector: Y → T
• number of components of the observed vector: J → d
• variance of the noise added to each component: σ² → N0/2
• number of hypotheses: M → M
• conditional mean of the observation given M = m: (sm^(1), . . . , sm^(J))ᵀ → (⟨sm, φ1⟩, . . . , ⟨sm, φd⟩)ᵀ
• sum of squared components of the mean vector: Σ_{j=1}^{J} (sm^(j))² → Σ_{ℓ=1}^{d} ⟨sm, φℓ⟩² = ∫_{−∞}^{∞} sm²(t) dt
Optimal Rule Based on T
Optimal Rule Based on T—Uniform Prior
What if s1 , . . . , sM all Have the same Energy?
Since (φ1, . . . , φd) is orthonormal,
\[
s_m = \sum_{\ell=1}^{d} \langle s_m, \phi_\ell \rangle\, \phi_\ell, \quad m \in \mathcal{M},
\]
and
\[
\|s_m\|_2^2 = \sum_{\ell=1}^{d} \langle s_m, \phi_\ell \rangle^2, \quad m \in \mathcal{M}.
\]
Consequently,
\[
\|s_1\|_2 = \|s_2\|_2 = \cdots = \|s_M\|_2
\implies \sum_{\ell=1}^{d} \langle s_1, \phi_\ell \rangle^2
= \sum_{\ell=1}^{d} \langle s_2, \phi_\ell \rangle^2
= \cdots
= \sum_{\ell=1}^{d} \langle s_M, \phi_\ell \rangle^2.
\]
In this case the Euclidean norms of all mean vectors are equal!
Optimal Rule for a Uniform Prior and Equi-Energy Mean
Signals
The Decision Rule without Reference to a Basis
\[
\ln \pi_{m'} - \frac{1}{N_0} \sum_{\ell=1}^{d} \big(\langle Y, \phi_\ell \rangle - \langle s_{m'}, \phi_\ell \rangle\big)^2
\]
can be expressed by opening the square as
\[
\ln \pi_{m'}
- \frac{1}{N_0} \sum_{\ell=1}^{d} \langle Y, \phi_\ell \rangle^2
+ \frac{2}{N_0} \sum_{\ell=1}^{d} \langle Y, \phi_\ell \rangle \langle s_{m'}, \phi_\ell \rangle
- \frac{1}{N_0} \sum_{\ell=1}^{d} \langle s_{m'}, \phi_\ell \rangle^2.
\]
\[
\sum_{\ell=1}^{d} \langle Y, \phi_\ell \rangle \langle s_m, \phi_\ell \rangle
= \Big\langle Y, \sum_{\ell=1}^{d} \langle s_m, \phi_\ell \rangle\, \phi_\ell \Big\rangle
= \langle Y, s_m \rangle, \quad m \in \mathcal{M}.
\]
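The resulting correlator receiver (maximize ln πm + (2/N0)⟨Y, sm⟩ − (1/N0)‖sm‖² over m) can be sketched in discrete time. The signals, prior, and discrete white-noise surrogate below are illustrative assumptions, not the lecture's:

```python
import numpy as np

rng = np.random.default_rng(4)

N0 = 0.1
dt = 0.01
t = np.arange(0, 1, dt)

# Illustrative mean signals (stand-ins for s_1, s_2, s_3) and prior.
S = np.stack([np.sin(2 * np.pi * t),
              np.sin(4 * np.pi * t),
              np.cos(2 * np.pi * t)])
prior = np.array([0.5, 0.3, 0.2])

def map_guess(y):
    # Maximize ln(pi_m) + (2/N0) <y, s_m> - (1/N0) ||s_m||^2.
    inner = (S * y).sum(axis=1) * dt
    energy = (S ** 2).sum(axis=1) * dt
    return int(np.argmax(np.log(prior) + 2 * inner / N0 - energy / N0))

# Monte Carlo: the rule recovers the sent message most of the time.
trials, errors = 500, 0
for _ in range(trials):
    m = int(rng.integers(3))
    y = S[m] + rng.standard_normal(t.size) * np.sqrt(N0 / (2 * dt))
    errors += (map_guess(y) != m)
error_rate = errors / trials
```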
Optimal Rule
Performance Analysis
As in the Multi-Dimensional Gaussian Multi-Hypothesis Problem! The correspondence is as before: Y → T; J → d; σ² → N0/2; M → M; the conditional mean of the observation given M = m is (⟨sm, φ1⟩, . . . , ⟨sm, φd⟩)ᵀ; and the sum of squared components of the mean vector is Σℓ ⟨sm, φℓ⟩² = ∫ sm²(t) dt.
And note
\[
\sum_{\ell=1}^{d} \big(\langle s_m, \phi_\ell \rangle - \langle s_{m'}, \phi_\ell \rangle\big)^2
= \|s_m - s_{m'}\|_2^2.
\]
\[
p_{\text{MAP}}(\text{error}|M = m)
\le \sum_{m' \ne m} Q\bigg(\frac{\|s_m - s_{m'}\|_2}{\sqrt{2 N_0}}
+ \frac{\sqrt{N_0/2}}{\|s_m - s_{m'}\|_2} \ln \frac{\pi_m}{\pi_{m'}}\bigg)
\]
\[
p_{\text{MAP}}(\text{error}|M = m)
\le \sum_{m' \ne m} Q\bigg(\sqrt{\frac{\|s_m - s_{m'}\|_2^2}{2 N_0}}\bigg),
\quad M \text{ uniform}
\]
\[
p_{\text{MAP}}(\text{error}|M = m)
\ge \max_{m' \ne m} Q\bigg(\frac{\|s_m - s_{m'}\|_2}{\sqrt{2 N_0}}
+ \frac{\sqrt{N_0/2}}{\|s_m - s_{m'}\|_2} \ln \frac{\pi_m}{\pi_{m'}}\bigg)
\]
\[
p_{\text{MAP}}(\text{error}|M = m)
\ge \max_{m' \ne m} Q\bigg(\sqrt{\frac{\|s_m - s_{m'}\|_2^2}{2 N_0}}\bigg),
\quad M \text{ uniform}
\]
Antipodal Signaling
Consider the binary case with a uniform prior
s0 = −s1 = s,
where s is a nonzero integrable signal that is bandlimited to W Hz,
Es , ksk22 .
Here span(s0 , s1 ) is one dimensional and is spanned by the
unit-norm signal
s
φ= .
ksk2
We guess based on
T = hY, φi .
Conditional on H = 0, we have T ∼ N(√Es, N0/2), whereas conditional on H = 1, we have T ∼ N(−√Es, N0/2).
How to guess H based on T we have already seen, with
\[
A = \sqrt{E_s}, \qquad \sigma^2 = \frac{N_0}{2}.
\]
[Figure: the conditional density fY(y) with means ±A; the region y < 0 is “Guess H = 1,” the region y ≥ 0 is “Guess H = 0,” and the shaded tail of the density centered at A below 0 is pMAP(error|H = 0).]
Antipodal Signaling
It is optimal to guess “H = 0” if T ≥ 0 and to guess “H = 1” if
T < 0. That is,
Z ∞
Guess “H = 0” if Y (t) s(t) dt ≥ 0.
−∞
Substituting
\[
\sqrt{E_s} = A, \qquad \frac{N_0}{2} = \sigma^2,
\]
we obtain
\[
p^*(\text{error}) = Q\bigg(\sqrt{\frac{2 E_s}{N_0}}\bigg).
\]
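A quick Monte Carlo sketch of antipodal signaling (Es and N0 are illustrative values): the empirical error rate of the threshold rule should match Q(√(2Es/N0)):

```python
import math
import numpy as np

rng = np.random.default_rng(5)

Es, N0 = 1.0, 1.0
A = math.sqrt(Es)
sigma = math.sqrt(N0 / 2)

def Q(x):
    # Gaussian tail: Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2))

trials = 200_000
bits = rng.integers(2, size=trials)               # H in {0, 1}, uniform
T = np.where(bits == 0, A, -A) + sigma * rng.standard_normal(trials)
guesses = np.where(T >= 0, 0, 1)                  # guess H = 0 iff T >= 0

p_emp = np.mean(guesses != bits)
p_theory = Q(math.sqrt(2 * Es / N0))
assert abs(p_emp - p_theory) < 0.005
```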
The Probability of Error
Define
\[
T^{(\ell)} = \int_{-\infty}^{\infty} Y(t)\, \frac{s_\ell(t)}{\sqrt{E_s}}\, dt, \quad \ell \in \{1, \ldots, M\}.
\]
We guess “M = m” if T^(m) = max_{m′∈M} T^(m′), with ties being resolved at random among the components of T that are maximal. The mean signals are distinct, and hence the probability of a tie is zero, and
\[
p_{\text{MAP}}(\text{error}|M = m)
= \Pr\Big[\max\big\{T^{(1)}, \ldots, T^{(m-1)}, T^{(m+1)}, \ldots, T^{(M)}\big\} > T^{(m)} \,\Big|\, M = m\Big].
\]
The Conditional Law of T
Conditional on M = m,
• the components of T are independent,
• with the m-th being N(√Es, N0/2), and
• the other components N(0, N0/2).
\[
p_{\text{MAP}}(\text{error}|M = 1)
= \Pr\Big[\max\big\{T^{(2)}, \ldots, T^{(M)}\big\} > T^{(1)} \,\Big|\, M = 1\Big]
\]
\[
= 1 - \Pr\Big[\max\big\{T^{(2)}, \ldots, T^{(M)}\big\} \le T^{(1)} \,\Big|\, M = 1\Big]
\]
\[
= 1 - \int_{-\infty}^{\infty} f_{T^{(1)}|M=1}(t)\,
\Pr\Big[\max\big\{T^{(2)}, \ldots, T^{(M)}\big\} \le t \,\Big|\, M = 1, T^{(1)} = t\Big]\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} f_{T^{(1)}|M=1}(t)\,
\Pr\Big[\max\big\{T^{(2)}, \ldots, T^{(M)}\big\} \le t \,\Big|\, M = 1\Big]\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} f_{T^{(1)}|M=1}(t)\,
\Pr\Big[T^{(2)} \le t, \ldots, T^{(M)} \le t \,\Big|\, M = 1\Big]\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} f_{T^{(1)}|M=1}(t)\,
\Big(\Pr\big[T^{(2)} \le t \,\big|\, M = 1\big]\Big)^{M-1}\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} f_{T^{(1)}|M=1}(t)\,
\bigg(1 - Q\Big(\frac{t}{\sqrt{N_0/2}}\Big)\bigg)^{M-1}\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{\pi N_0}}\,
e^{-\frac{(t - \sqrt{E_s})^2}{N_0}}
\bigg(1 - Q\Big(\frac{t}{\sqrt{N_0/2}}\Big)\bigg)^{M-1}\, dt
\]
\[
= 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\tau^2/2}
\bigg(1 - Q\Big(\tau + \sqrt{\frac{2 E_s}{N_0}}\Big)\bigg)^{M-1}\, d\tau.
\]
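The final integral is easy to evaluate numerically and to cross-check by simulating the conditional law of T directly. The parameters M, Es, and N0 below are illustrative choices:

```python
import math
import numpy as np

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_err_orthogonal(M, Es_over_N0):
    # Riemann sum for 1 - int phi(tau) (1 - Q(tau + sqrt(2 Es/N0)))^(M-1) dtau.
    dtau = 0.005
    tau = np.arange(-10, 10, dtau)
    phi = np.exp(-tau ** 2 / 2) / math.sqrt(2 * math.pi)
    inner = (1 - np.vectorize(Q)(tau + math.sqrt(2 * Es_over_N0))) ** (M - 1)
    return 1 - np.sum(phi * inner) * dtau

# Monte Carlo cross-check: conditional on M = 1, the m-th component of T
# is N(sqrt(Es), N0/2) and the others are IID N(0, N0/2).
rng = np.random.default_rng(6)
M, Es, N0 = 4, 4.0, 2.0
trials = 100_000
T = math.sqrt(N0 / 2) * rng.standard_normal((trials, M))
T[:, 0] += math.sqrt(Es)
p_mc = np.mean(T[:, 0] < T[:, 1:].max(axis=1))
assert abs(p_mc - p_err_orthogonal(M, Es / N0)) < 0.01
```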
Zero-Mean Signals for Additive-Noise Channels
[Block diagram: the data {Dj} enter TX1; at TX2 the constant c is subtracted, so X − c is transmitted; the channel adds N, yielding Y = X − c + N; at RX2 the constant c is added back, giving X + N; RX1 produces the estimates {Dj^est}.]
The minimizing constant is c = E[W]. Indeed,
\[
E\big[(W - c)^2\big]
= E\Big[\big((W - E[W]) + (E[W] - c)\big)^2\Big]
\]
\[
= E\big[(W - E[W])^2\big]
+ 2 \underbrace{E\big[W - E[W]\big]}_{0} \big(E[W] - c\big)
+ \big(E[W] - c\big)^2
\]
\[
= E\big[(W - E[W])^2\big] + \big(E[W] - c\big)^2
\ge E\big[(W - E[W])^2\big]
= \operatorname{Var}[W],
\]
To minimize
\[
\frac{1}{2T} \int_{-T}^{T} E\Big[\big(X(t) - c(t)\big)^2\Big]\, dt,
\]
we thus choose c(t) = E[X(t)].
The M-ary Simplex
Start from Orthogonal Keying and subtract the mean.
Let φ1, . . . , φM be orthonormal. Let φ̄ be their “center of gravity”
\[
\bar{\phi} = \frac{1}{M} \sum_{m \in \mathcal{M}} \phi_m.
\]
The M-ary Simplex is a scaled version of
\[
\phi_1 - \bar{\phi}, \quad \ldots, \quad \phi_M - \bar{\phi}.
\]
[Figure: for M = 2, the orthonormal pair φ1, φ2, their center of gravity φ̄, and the difference vectors φ1 − φ̄ and φ2 − φ̄.]
Constructing the 3-Ary Simplex
The Simplex: Inner Products and Energies
φ1, . . . , φM are orthonormal and
\[
\bar{\phi} = \frac{1}{M} \sum_{m \in \mathcal{M}} \phi_m.
\]
Consequently,
\[
\big\langle \phi_{m'} - \bar{\phi},\; \phi_{m''} - \bar{\phi} \big\rangle
= \langle \phi_{m'}, \phi_{m''} \rangle
- \langle \phi_{m'}, \bar{\phi} \rangle
- \langle \phi_{m''}, \bar{\phi} \rangle
+ \langle \bar{\phi}, \bar{\phi} \rangle
\]
\[
= I\{m' = m''\}
- \frac{1}{M} \Big\langle \phi_{m'}, \sum_m \phi_m \Big\rangle
- \frac{1}{M} \Big\langle \phi_{m''}, \sum_m \phi_m \Big\rangle
+ \frac{1}{M^2} \Big\| \sum_m \phi_m \Big\|_2^2
\]
\[
= I\{m' = m''\} - \frac{1}{M} - \frac{1}{M} + \frac{M}{M^2}
= I\{m' = m''\} - \frac{1}{M}, \quad m', m'' \in \mathcal{M}.
\]
Normalization
Since
\[
\big\| \phi_m - \bar{\phi} \big\|_2^2 = 1 - \frac{1}{M} = \frac{M - 1}{M},
\]
we define the energy-Es M-ary simplex constellation as
\[
s_m = \sqrt{E_s}\, \sqrt{\frac{M}{M - 1}}\, \big(\phi_m - \bar{\phi}\big), \quad m \in \mathcal{M},
\]
with the result that
\[
\|s_m\|_2^2 = E_s \quad \text{and} \quad
\langle s_{m'}, s_{m''} \rangle = -\frac{E_s}{M - 1}, \quad m' \ne m''.
\]
{sm} can be viewed as the result of subtracting the center of gravity from orthogonal signals of energy Es M/(M − 1):
\[
s_m = \sqrt{E_s}\, \sqrt{\frac{M}{M - 1}}\, \phi_m
- \sqrt{E_s}\, \sqrt{\frac{M}{M - 1}}\, \bar{\phi}, \quad m \in \mathcal{M}.
\]
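A small NumPy sketch of this construction, using the standard basis of R^M as the orthonormal family (a finite-dimensional stand-in for the signals): the Gram matrix confirms ‖sm‖² = Es and ⟨sm′, sm″⟩ = −Es/(M − 1):

```python
import numpy as np

M, Es = 4, 1.0

# Orthonormal phi_1..phi_M (standard basis rows), their center of gravity,
# and the scaled simplex constellation.
phi = np.eye(M)
phi_bar = phi.mean(axis=0)
s = np.sqrt(Es * M / (M - 1)) * (phi - phi_bar)

G = s @ s.T                                   # Gram matrix of {s_m}
assert np.allclose(np.diag(G), Es)            # ||s_m||^2 = Es
off = G[~np.eye(M, dtype=bool)]
assert np.allclose(off, -Es / (M - 1))        # <s_m', s_m''> = -Es/(M-1)
```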
The Probability of Error for the Simplex
\[
p^*(\text{error})
= 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\tau^2/2}
\bigg(1 - Q\Big(\tau + \sqrt{\frac{M}{M - 1}\, \frac{2 E_s}{N_0}}\Big)\bigg)^{M-1}\, d\tau.
\]
From the Simplex to Orthogonal Keying
If ψ is of unit energy and orthogonal to {s1, . . . , sM}, then
\[
\Big\{ s_m + \frac{1}{\sqrt{M - 1}}\, \sqrt{E_s}\, \psi \Big\}_{m \in \mathcal{M}}
\]
are orthogonal, each of energy Es M/(M − 1).
[Figure: for M = 2, adding √Es ψ to s1 and to s2 yields the orthogonal pair s1 + √Es ψ and s2 + √Es ψ.]
Decoding the Simplex
To decode, add (1/√(M − 1)) √Es ψ to Y and use a decoder for orthogonal keying.
Bi-Orthogonal Keying
From a SP to a Random Vector
of identical performance.
³ That is, measurable w.r.t. the σ-algebra generated by Y.
A Toy Problem
We observe a pair (Y1 , Y2 ).
H=0: Y1 = s0 + N1 , Y2 = N2 .
H=1: Y1 = s1 + N1 , Y2 = N2 .
Here
N1 ∼ N(0, σ²) and N2 ∼ N(0, σ²)
are independent of H.
(Later s0 , s1 will be waveforms and N1 , N2 stochastic processes.)
• Can Y2 be discarded?
• Does (y1 , y2 ) 7→ y1 form a sufficient statistic?
Not necessarily!
Can Y2 be Discarded?
Discarding Y2 when N1 and N2 Are Independent
[Block diagram: (y1, y2) are fed to the given rule for guessing H based on (Y1, Y2), producing the guess.]
[Block diagram: the given rule for guessing H based on (Y1, Y2) fed with (y1, y2), and the same rule fed with y1 alone, its second input missing.]
[Block diagram: the given rule fed with (y1, y2), and the same rule fed with (y1, ỹ2), where Ỹ2 ∼ fN2(·) is generated from local randomness.]
Back to the Real Problem (1)
\[
Y_1 \triangleq \sum_{\ell=1}^{d} \langle Y, \phi_\ell \rangle\, \phi_\ell, \qquad
Y_2 \triangleq Y - \sum_{\ell=1}^{d} \langle Y, \phi_\ell \rangle\, \phi_\ell.
\]
Since Y = Y1 + Y2, we can guess based on (Y1, Y2).
Conditional on M = m,
\[
Y_1 = \sum_{\ell=1}^{d} \big\langle (s_m + N), \phi_\ell \big\rangle\, \phi_\ell
= \sum_{\ell=1}^{d} \langle s_m, \phi_\ell \rangle\, \phi_\ell
+ \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell
= s_m + \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell,
\]
and
\[
Y_2 = N - \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell.
\]
Back to the Real Problem (2)
Conditional on M = m,
\[
Y_1 = s_m + N_1, \qquad Y_2 = N_2,
\]
where
\[
N_1 = \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell, \qquad
N_2 = N - \sum_{\ell=1}^{d} \langle N, \phi_\ell \rangle\, \phi_\ell.
\]
QED.
[Block diagram: Y(t) enters the decision device directly (top). Equivalently (bottom): the projections ⟨Y, φ1⟩, . . . , ⟨Y, φd⟩ are used to form the reconstruction Σℓ ⟨Y, φℓ⟩ φℓ; a locally generated SP N′(t) of the same FDDs as N(t) contributes N′ − Σℓ ⟨N′, φℓ⟩ φℓ after projection and subtraction; the sum Y′(t) is fed to the decision device, which produces the guess.]
Thank you!