
Chapter 10

The Application of Detection and


Estimation Theory to
Communications

Problem 10.1

a. Given H_1, Z = N, so

   f_Z(z|H_1) = f_N(n)|_{n=z} = 10 e^{-10z} u(z)

   Given H_2, Z = S + N, where S and N are independent. Thus, the resulting pdf under
   H_2 is the convolution of the separate pdfs of S and N:

   f_Z(z|H_2) = \int_{-\infty}^{\infty} f_S(z - \tau) f_N(\tau)\, d\tau
              = \int_{-\infty}^{\infty} 2 e^{-2(z-\tau)} u(z - \tau)\, 10 e^{-10\tau} u(\tau)\, d\tau
              = 20 e^{-2z} \int_0^z e^{-8\tau}\, d\tau, \quad z \ge 0
              = 20 e^{-2z} \left[-\tfrac{1}{8} e^{-8\tau}\right]_0^z, \quad z \ge 0
              = 2.5 e^{-2z} \left(1 - e^{-8z}\right) u(z)
              = 2.5 \left(e^{-2z} - e^{-10z}\right) u(z)

   which is the result given in the problem statement.


b. The likelihood ratio is

   \Lambda(Z) = \frac{f_Z(Z|H_2)}{f_Z(Z|H_1)} = \frac{2.5 \left(e^{-2Z} - e^{-10Z}\right)}{10 e^{-10Z}}
              = 0.25 \left(e^{8Z} - 1\right), \quad Z \ge 0

   Note that an uppercase Z is used because we substitute data to carry out the test.

c. The threshold is

   \eta = \frac{Pr(H_1\ true)(c_{21} - c_{11})}{Pr(H_2\ true)(c_{12} - c_{22})}
        = \frac{(1/3)(7 - 0)}{(2/3)(7 - 0)} = \frac{1}{2}

d. The likelihood ratio test becomes

   0.25 \left(e^{8Z} - 1\right) \gtrless_{H_1}^{H_2} \frac{1}{2}

   This can be reduced to

   Z \gtrless_{H_1}^{H_2} \frac{\ln(4/2 + 1)}{8} = 0.1373

e. The probability of detection is

   P_D = 1 - P_M = 2.5 \int_{0.1373}^{\infty} \left(e^{-2z} - e^{-10z}\right) dz
       = 2.5 \left[-\tfrac{1}{2} e^{-2z} + \tfrac{1}{10} e^{-10z}\right]_{0.1373}^{\infty}
       = 2.5 \left[\tfrac{1}{2} e^{-0.2746} - \tfrac{1}{10} e^{-1.373}\right] = 0.8865

   The probability of false alarm is

   P_F = \int_{0.1373}^{\infty} 10 e^{-10z}\, dz = e^{-1.373} = 0.2533

   Therefore, the risk is

   Risk = Pr(H_1) c_{21} + Pr(H_2) c_{22} + Pr(H_2)(c_{12} - c_{22}) P_M
          - Pr(H_1)(c_{21} - c_{11})(1 - P_F)
        = \frac{1}{3}(7) + \frac{2}{3}(0) + \frac{2}{3}(7 - 0)(1 - 0.8865) - \frac{1}{3}(7 - 0)(1 - 0.2533)
        = \frac{7}{3} + \frac{14}{3}(0.1135) - \frac{7}{3}(0.7467) = 1.1208
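These numbers are easy to check numerically. A minimal Python sketch (variable names are ours) that evaluates the threshold and the closed-form integrals above; note that P_D follows directly from the part (f) expression P_D = 1.25 e^{-2\gamma} - 0.25 e^{-10\gamma}:

```python
import math

# Threshold of the reduced LRT: 0.25*(exp(8Z) - 1) compared against eta = 1/2
eta = 0.5
gamma = math.log(4 * eta + 1) / 8        # = ln(3)/8

# P_F = integral of 10 exp(-10 z) from gamma to infinity
P_F = math.exp(-10 * gamma)

# P_D = integral of 2.5 (exp(-2 z) - exp(-10 z)) from gamma to infinity
P_D = 1.25 * math.exp(-2 * gamma) - 0.25 * math.exp(-10 * gamma)

# Bayes risk with c11 = c22 = 0, c21 = c12 = 7, Pr(H1) = 1/3, Pr(H2) = 2/3
risk = (1 / 3) * 7 * P_F + (2 / 3) * 7 * (1 - P_D)

print(round(gamma, 4), round(P_F, 4), round(P_D, 4), round(risk, 4))
# -> 0.1373 0.2533 0.8865 1.1208
```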

f. Consider the threshold set at \gamma. Then

   P_F = e^{-10\gamma} = 10^{-3} \Rightarrow \gamma = 0.3 \ln(10) = 0.6908

   to give P_F = 10^{-3}. Also, from part (e),

   P_D = 2.5 \int_{\gamma}^{\infty} \left(e^{-2z} - e^{-10z}\right) dz
       = 1.25 e^{-2\gamma} - 0.25 e^{-10\gamma}, \quad \gamma \ge 0

   For values of \gamma \ge 0, P_D is a monotonically decreasing function of \gamma with maximum
   value of 1 at \gamma = 0. For \gamma = 0.6908, P_D = 0.3137.

g. From part (f),

   P_F = e^{-10\gamma} \quad and \quad P_D = 1.25 e^{-2\gamma} - 0.25 e^{-10\gamma}

   A plot of P_D versus P_F for various values of \gamma constitutes the operating characteristic
   of the test.

Problem 10.2

a. The likelihood ratio is


\Lambda(Z) = \frac{e^{-|Z|}/2}{e^{-Z^2/2}/\sqrt{2\pi}} = \sqrt{\pi/2}\, e^{Z^2/2 - |Z|}

b. The decision regions are determined by the set of inequalities

   \Lambda(Z) = \sqrt{\pi/2}\, e^{Z^2/2 - |Z|} \gtrless_{H_1}^{H_2} \eta

   or

   Z^2/2 - |Z| \gtrless_{H_1}^{H_2} \ln \eta - \tfrac{1}{2} \ln(\pi/2) = \beta

   Thus, solve the equation

   Z^2/2 - |Z| = \beta

   for Z > 0 and Z < 0 for the decision boundary points:

   Z > 0: \quad Z^2/2 - Z - \beta = 0 \Rightarrow Z_{1,2} = 1 \pm \sqrt{1 + 2\beta}
   Z < 0: \quad Z^2/2 + Z - \beta = 0 \Rightarrow Z_{3,4} = -1 \pm \sqrt{1 + 2\beta}

   For example, for \beta = 0.1,

   Z_{1,2} = 1 \pm \sqrt{1.2} = 1 \pm 1.0954 = 2.0954, -0.0954; \quad Z > 0
   Z_{3,4} = -1 \pm \sqrt{1.2} = -1 \pm 1.0954 = 0.0954, -2.0954; \quad Z < 0

   Only the roots consistent with the sign constraint on each branch are valid, so the
   boundaries are at Z = \pm 2.0954, and the decision regions in this particular case are

   R_1: \quad -\infty < Z < -2.0954; \quad 2.0954 < Z < \infty
   R_2: \quad -2.0954 \le Z \le 2.0954

   Note that the equality sign can be associated with either region.
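The boundary points can be confirmed numerically; a small Python sketch for the example value \beta = 0.1:

```python
import math

beta = 0.1  # example value of ln(eta) - 0.5*ln(pi/2)

# Roots of Z^2/2 - |Z| = beta on each side of the origin
z_pos = [1 + math.sqrt(1 + 2 * beta), 1 - math.sqrt(1 + 2 * beta)]    # Z > 0 branch
z_neg = [-1 + math.sqrt(1 + 2 * beta), -1 - math.sqrt(1 + 2 * beta)]  # Z < 0 branch

# Keep only roots consistent with the sign constraint of each branch
boundaries = [z for z in z_pos if z > 0] + [z for z in z_neg if z < 0]
print([round(z, 4) for z in boundaries])   # -> [2.0954, -2.0954]
```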

Problem 10.3

Under both hypotheses, the statistic Z is zero-mean and Gaussian. Under hypothesis
H_1 (noise alone) Z is zero-mean Gaussian with variance \sigma_n^2, and under hypothesis
H_2 (signal plus noise) it is zero-mean Gaussian with variance \sigma_s^2 + \sigma_n^2. Thus, the
likelihood ratio test is

\Lambda(Z) = \sqrt{\frac{\sigma_n^2}{\sigma_s^2 + \sigma_n^2}} \cdot \frac{\exp\left[-Z^2/2\left(\sigma_s^2 + \sigma_n^2\right)\right]}{\exp\left[-Z^2/2\sigma_n^2\right]} \gtrless_{H_1}^{H_2} \eta = \frac{Pr(H_1)(c_{21} - c_{11})}{Pr(H_2)(c_{12} - c_{22})}

or

\exp\left[-Z^2/2\left(\sigma_s^2 + \sigma_n^2\right) + Z^2/2\sigma_n^2\right] \gtrless_{H_1}^{H_2} \sqrt{\frac{\sigma_s^2 + \sigma_n^2}{\sigma_n^2}}\, \eta

or

Z^2 \left(\frac{1}{\sigma_n^2} - \frac{1}{\sigma_s^2 + \sigma_n^2}\right) = Z^2 \frac{\sigma_s^2}{\sigma_n^2 \left(\sigma_s^2 + \sigma_n^2\right)} \gtrless_{H_1}^{H_2} 2 \ln \eta + \ln\frac{\sigma_s^2 + \sigma_n^2}{\sigma_n^2}

or

Z^2 \gtrless_{H_1}^{H_2} \frac{\sigma_n^2 \left(\sigma_s^2 + \sigma_n^2\right)}{\sigma_s^2} \left[2 \ln \eta + \ln\frac{\sigma_s^2 + \sigma_n^2}{\sigma_n^2}\right]

For the rest of the problem, we assume \sigma_s^2 = 3, \sigma_n^2 = 1, so the LRT becomes

Z^2 \gtrless_{H_1}^{H_2} \frac{4}{3} \left[2 \ln \eta + \ln 4\right]

The boundaries of the decision regions are at Z_{1,2} = \pm\sqrt{\frac{4}{3} \left[2 \ln \eta + \ln 4\right]}, which
results in the decision regions

R_1: \quad -\sqrt{\frac{4}{3} \left[2 \ln \eta + \ln 4\right]} \le Z \le \sqrt{\frac{4}{3} \left[2 \ln \eta + \ln 4\right]}

and

R_2: \quad -\infty < Z < -\sqrt{\frac{4}{3} \left[2 \ln \eta + \ln 4\right]}; \quad \sqrt{\frac{4}{3} \left[2 \ln \eta + \ln 4\right]} < Z < \infty

a. For c_{11} = c_{22} = 0, c_{21} = c_{12}, and Pr(H_1) = Pr(H_2) = 1/2 we have \eta = 1 so that
   Z_{1,2} = \pm\sqrt{\frac{4}{3} \ln 4} = \pm 1.3596.

b. For c_{11} = c_{22} = 0, c_{21} = c_{12}, p_0 = Pr(H_1) = 1/4, and q_0 = 1 - p_0 = Pr(H_2) = 3/4
   we have \eta = \frac{1/4}{3/4} = \frac{1}{3} so that Z_{1,2} = \pm\sqrt{\frac{4}{3} \left[2 \ln(1/3) + \ln 4\right]}, which is imaginary
   (i.e., Z^2 is being compared with a negative number in the LRT), which means that
   the decision is always in favor of H_2 (signal plus noise).

c. For c_{11} = c_{22} = 0, c_{21} = c_{12}/2, and p_0 = Pr(H_1) = Pr(H_2) = 1/2 we have \eta = \frac{c_{21}}{c_{12}} = \frac{1}{2} so
   that Z_{1,2} = \pm\sqrt{\frac{4}{3} \left[2 \ln(1/2) + \ln 4\right]} = 0, so the decision is in favor of H_2.

d. For c_{11} = c_{22} = 0, c_{21} = 2c_{12}, and p_0 = Pr(H_1) = Pr(H_2) = 1/2 we have \eta = \frac{c_{21}}{c_{12}} = 2 so
   that Z_{1,2} = \pm\sqrt{\frac{4}{3} \left[2 \ln 2 + \ln 4\right]} = \pm 1.9227.
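The decision boundaries for the four cases can be computed with a short Python sketch (the function name is ours):

```python
import math

def boundary(eta, var_s=3.0, var_n=1.0):
    """Return the positive decision boundary |Z| of the LRT, or None when the
    right-hand side is negative (the decision then always favors H2)."""
    coeff = var_n * (var_s + var_n) / var_s
    rhs = coeff * (2 * math.log(eta) + math.log((var_s + var_n) / var_n))
    return math.sqrt(rhs) if rhs >= 0 else None

print(round(boundary(1.0), 4))    # part (a) -> 1.3596
print(boundary(1 / 3))            # part (b) -> None (always choose H2)
print(boundary(0.5))              # part (c) -> 0.0
print(round(boundary(2.0), 4))    # part (d) -> 1.9227
```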

Problem 10.4

For \sigma_n^2 = 9, \sigma_s^2 = 16 the LRT of Problem 10.3 becomes

Z^2 \gtrless_{H_1}^{H_2} \frac{9(25)}{16} \left[2 \ln \eta + \ln\frac{25}{9}\right] = 28.125 \ln \eta + 14.367

The boundaries of the decision regions are at Z_{1,2} = \pm\sqrt{28.125 \ln \eta + 14.367}, which
results in the decision regions

R_1: \quad -\sqrt{28.125 \ln \eta + 14.367} \le Z \le \sqrt{28.125 \ln \eta + 14.367}

and

R_2: \quad -\infty < Z < -\sqrt{28.125 \ln \eta + 14.367}; \quad \sqrt{28.125 \ln \eta + 14.367} < Z < \infty

a. For c_{11} = c_{22} = 0, c_{21} = c_{12}, and Pr(H_1) = Pr(H_2) = 1/2 we have \eta = 1 so that
   Z_{1,2} = \pm\sqrt{14.367} = \pm 3.7904, giving decision regions of

   R_1: \quad -3.7904 \le Z \le 3.7904
   R_2: \quad -\infty < Z < -3.7904; \quad 3.7904 < Z < \infty

   The probability of false alarm is the probability that a decision is made in favor of
   signal plus noise when noise alone is present, i.e., the probability of the two tails of
   the H_1 density:

   P_F = 1 - \int_{-3.7904}^{3.7904} \frac{\exp\left(-z^2/2 \cdot 9\right)}{\sqrt{2\pi \cdot 9}}\, dz = 2Q\left(\frac{3.7904}{3}\right) = 2Q(1.2635) = 0.2064

   The probability of detection is the probability that a decision is made in favor of signal
   plus noise when signal plus noise is present, and is given in this case by

   P_D = 2 \int_{3.7904}^{\infty} \frac{\exp\left(-z^2/2 \cdot 25\right)}{\sqrt{2\pi \cdot 25}}\, dz = 2Q\left(\frac{3.7904}{5}\right) = 2Q(0.7581) = 0.4484

   The risk is

   Risk = Pr(H_1) c_{21} + Pr(H_2) c_{22} + Pr(H_2)(c_{12} - c_{22}) P_M - Pr(H_1)(c_{21} - c_{11})(1 - P_F)
        = \frac{1}{2}(1) + \frac{1}{2}(0) + \frac{1}{2}(1)(1 - 0.4484) - \frac{1}{2}(1)(1 - 0.2064)
        = 0.3790

b. For c_{11} = c_{22} = 0, c_{21} = c_{12} = 1, Pr(H_1) = 1/4, and Pr(H_2) = 3/4 we have \eta = 1/3 so
   that Z_{1,2} = \pm\sqrt{28.125 \ln(1/3) + 14.367}, which is imaginary. This says Z^2 is being
   compared with a negative number in the LRT, which means that the decision is always
   in favor of H_2 (signal plus noise). Since H_2 is always chosen, the probability of false
   alarm (deciding signal plus noise when noise alone is present) is 1, and the probability
   of detection is also 1. The risk is

   Risk = \frac{1}{4}(1) + \frac{3}{4}(0) + \frac{3}{4}(1)(1 - 1) - \frac{1}{4}(1)(1 - 1) = \frac{1}{4}

c. For c_{11} = c_{22} = 0, c_{21} = c_{12}/2 = 1/2, and Pr(H_1) = Pr(H_2) = 1/2 we have \eta = 1/2
   so that Z_{1,2} = \pm\sqrt{28.125 \ln(1/2) + 14.367}, which is again imaginary (28.125 \ln(1/2) =
   -19.495 < -14.367). The decision is always in favor of H_2, giving P_F = 1 and
   P_D = 1. The risk is

   Risk = \frac{1}{2}\left(\frac{1}{2}\right) + \frac{1}{2}(0) + \frac{1}{2}(1)(1 - 1) - \frac{1}{2}\left(\frac{1}{2}\right)(1 - 1) = \frac{1}{4}

d. For c_{11} = c_{22} = 0, c_{21} = 2c_{12} = 2, and Pr(H_1) = Pr(H_2) = 1/2 we have \eta = 2 so that
   Z_{1,2} = \pm\sqrt{28.125 \ln 2 + 14.367} = \pm 5.8191, giving decision regions of

   R_1: \quad -5.8191 \le Z \le 5.8191
   R_2: \quad -\infty < Z < -5.8191; \quad 5.8191 < Z < \infty

   The probability of false alarm is

   P_F = 1 - \int_{-5.8191}^{5.8191} \frac{\exp\left(-z^2/2 \cdot 9\right)}{\sqrt{2\pi \cdot 9}}\, dz = 2Q\left(\frac{5.8191}{3}\right) = 2Q(1.9397) = 0.0524

   The probability of detection is

   P_D = 2 \int_{5.8191}^{\infty} \frac{\exp\left(-z^2/2 \cdot 25\right)}{\sqrt{2\pi \cdot 25}}\, dz = 2Q\left(\frac{5.8191}{5}\right) = 2Q(1.1638) = 0.2445

   The risk is

   Risk = Pr(H_1) c_{21} + Pr(H_2) c_{22} + Pr(H_2)(c_{12} - c_{22}) P_M - Pr(H_1)(c_{21} - c_{11})(1 - P_F)
        = \frac{1}{2}(2) + \frac{1}{2}(0) + \frac{1}{2}(1)(1 - 0.2445) - \frac{1}{2}(2)(1 - 0.0524)
        = 1 + 0.3778 - 0.9476 = 0.4302
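With c_{11} = c_{22} = 0 the Bayes risk reduces to Risk = Pr(H_1) c_{21} P_F + Pr(H_2) c_{12} P_M, which gives a compact numerical check of the four cases. A sketch (Q(x) is the Gaussian tail integral; function names are ours):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def risk(eta, c21, c12, p1, p2):
    # LRT threshold on Z^2 for sigma_n^2 = 9, sigma_s^2 = 16
    rhs = 28.125 * math.log(eta) + 14.367
    if rhs < 0:                      # boundary imaginary: always decide H2
        PF, PD = 1.0, 1.0
    else:
        z = math.sqrt(rhs)
        PF = 2 * Q(z / 3.0)          # tail probability under H1 (std dev 3)
        PD = 2 * Q(z / 5.0)          # tail probability under H2 (std dev 5)
    return p1 * c21 * PF + p2 * c12 * (1 - PD)

print(round(risk(1.0, 1, 1, 0.5, 0.5), 4))       # part (a)
print(round(risk(1 / 3, 1, 1, 0.25, 0.75), 4))   # part (b) -> 0.25
print(round(risk(0.5, 0.5, 1, 0.5, 0.5), 4))     # part (c) -> 0.25
print(round(risk(2.0, 2, 1, 0.5, 0.5), 4))       # part (d)
```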

Problem 10.5
Define A = (a_1, a_2, a_3) and B = (b_1, b_2, b_3). Define scalar multiplication by
\alpha(a_1, a_2, a_3) = (\alpha a_1, \alpha a_2, \alpha a_3) and vector addition by
(a_1, a_2, a_3) + (b_1, b_2, b_3) = (a_1 + b_1, a_2 + b_2, a_3 + b_3).
The properties listed under "Structure of Signal Space" in the text then become:

1. \alpha_1 A + \alpha_2 B = (\alpha_1 a_1 + \alpha_2 b_1, \alpha_1 a_2 + \alpha_2 b_2, \alpha_1 a_3 + \alpha_2 b_3) \in R^3;

2. \alpha(A + B) = \alpha(a_1 + b_1, a_2 + b_2, a_3 + b_3)
   = (\alpha a_1 + \alpha b_1, \alpha a_2 + \alpha b_2, \alpha a_3 + \alpha b_3) = (\alpha a_1, \alpha a_2, \alpha a_3) + (\alpha b_1, \alpha b_2, \alpha b_3)
   = \alpha A + \alpha B \in R^3;

3. \alpha_1(\alpha_2 A) = (\alpha_1 \alpha_2) A \in R^3 (follows by writing out in component form);

4. 1 \cdot A = A (follows by writing out in component form);

5. The unique element 0 is (0, 0, 0) so that A + 0 = A;

6. -A = (-a_1, -a_2, -a_3) so that A + (-A) = 0.

Problem 10.6
a. |A| = \sqrt{A \cdot A} = \sqrt{1^2 + 3^2 + 2^2} = \sqrt{14}; \quad |B| = \sqrt{B \cdot B} = \sqrt{5^2 + 1^2 + 3^2} = \sqrt{35};
   \cos\theta = \frac{A \cdot B}{|A||B|} = \frac{1 \cdot 5 + 3 \cdot 1 + 2 \cdot 3}{\sqrt{14}\,\sqrt{35}} = 0.6325.

b. |A| = \sqrt{6^2 + 2^2 + 4^2} = \sqrt{56}; \quad |B| = \sqrt{2^2 + 2^2 + 2^2} = \sqrt{12};
   \cos\theta = \frac{6 \cdot 2 + 2 \cdot 2 + 4 \cdot 2}{\sqrt{56}\,\sqrt{12}} = 0.9258.

c. |A| = \sqrt{4^2 + 3^2 + 1^2} = \sqrt{26}; \quad |B| = \sqrt{3^2 + 4^2 + 5^2} = \sqrt{50};
   \cos\theta = \frac{4 \cdot 3 + 3 \cdot 4 + 1 \cdot 5}{\sqrt{26}\,\sqrt{50}} = 0.8043.

d. |A| = \sqrt{3^2 + 3^2 + 2^2} = \sqrt{22}; \quad |B| = \sqrt{(-1)^2 + (-2)^2 + 3^2} = \sqrt{14};
   \cos\theta = \frac{3(-1) + 3(-2) + 2 \cdot 3}{\sqrt{22}\,\sqrt{14}} = -0.1709.
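A quick Python check of the four cosines (vectors taken from the computations above):

```python
import math

def cos_angle(a, b):
    # cos(theta) = (A . B) / (|A| |B|)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(round(cos_angle((1, 3, 2), (5, 1, 3)), 4))    # (a) -> 0.6325
print(round(cos_angle((6, 2, 4), (2, 2, 2)), 4))    # (b) -> 0.9258
print(round(cos_angle((4, 3, 1), (3, 4, 5)), 4))    # (c) -> 0.8043
print(round(cos_angle((3, 3, 2), (-1, -2, 3)), 4))  # (d) -> -0.1709
```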

Problem 10.7

a. Use the energy signal scalar product definition:

   (x_1, x_2) = \lim_{T \to \infty} \int_0^T 2 e^{-4t}\, dt = \frac{1}{2}

b. Use the energy signal scalar product definition as in (a):

   (x_1, x_2) = \lim_{T \to \infty} \int_0^T e^{-(7 - j2)t}\, dt = \frac{1}{7 - j2}

c. Use the power signal scalar product definition:

   (x_1, x_2) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T \cos(2\pi t) \cos(4\pi t)\, dt = 0

d. Use the power signal scalar product definition:

   (x_1, x_2) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T 5 \cos(2\pi t)\, dt = 0

Problem 10.8

Consider

(x, y) = \lim_{T \to \infty} \int_{-T}^T x(t) y(t)\, dt

Then

(y, x) = \lim_{T \to \infty} \int_{-T}^T y(t) x(t)\, dt = \lim_{T \to \infty} \int_{-T}^T x(t) y(t)\, dt = (x, y)

Also

(\alpha x, y) = \lim_{T \to \infty} \int_{-T}^T \alpha x(t) y(t)\, dt = \alpha \lim_{T \to \infty} \int_{-T}^T x(t) y(t)\, dt = \alpha (x, y)

and

(x + y, z) = \lim_{T \to \infty} \int_{-T}^T [x(t) + y(t)] z(t)\, dt
           = \lim_{T \to \infty} \int_{-T}^T x(t) z(t)\, dt + \lim_{T \to \infty} \int_{-T}^T y(t) z(t)\, dt
           = (x, z) + (y, z)

Finally

(x, x) = \lim_{T \to \infty} \int_{-T}^T x(t) x(t)\, dt = \lim_{T \to \infty} \int_{-T}^T |x(t)|^2\, dt \ge 0

The scalar product for power signals is considered in the same manner.

Problem 10.9

Since the signals are assumed real,

||x_1 + x_2||^2 = \lim_{T \to \infty} \int_{-T}^T [x_1(t) + x_2(t)]^2\, dt
              = \lim_{T \to \infty} \int_{-T}^T x_1^2(t)\, dt + 2 \lim_{T \to \infty} \int_{-T}^T x_1(t) x_2(t)\, dt + \lim_{T \to \infty} \int_{-T}^T x_2^2(t)\, dt
              = ||x_1||^2 + 2(x_1, x_2) + ||x_2||^2

From this it is seen that

||x_1 + x_2||^2 = ||x_1||^2 + ||x_2||^2

if and only if (x_1, x_2) = 0.

Problem 10.10
It follows that

||x_1||^2 = \left(\sum_n a_n \phi_n,\ \sum_m a_m \phi_m\right)
          = \sum_n \sum_m a_n a_m (\phi_n, \phi_m)
          = \sum_{n=1}^N |a_n|^2 \quad because \quad (\phi_n, \phi_m) = \delta_{nm}

and similarly ||x_2||^2 = \sum_{n=1}^N |b_n|^2.

Also, in a similar fashion,

(x_1, x_2) = \sum_{n=1}^N a_n b_n

Thus we must show that

\left(\sum_{n=1}^N a_n b_n\right)^2 \le \sum_{n=1}^N |a_n|^2 \sum_{n=1}^N |b_n|^2

Writing both sides out we get

\sum_n \sum_m a_n a_m b_n b_m \le \sum_n \sum_m a_n a_n b_m b_m

This is equivalent to

\sum_n \sum_m \left(2 a_n a_n b_m b_m - 2 a_n a_m b_n b_m\right) \ge 0

or

\sum_n \sum_m \left(a_n a_n b_m b_m - 2 a_n a_m b_n b_m + a_m a_m b_n b_n\right) \ge 0

where the n and m indices have been interchanged in half of the first sum. The
inequality is therefore equivalent to

\sum_m \sum_n |a_n b_m - a_m b_n|^2 \ge 0

which is seen to be true because each term in the sum is nonnegative.

Problem 10.11
By straightforward integration, it follows that

||x_1||^2 = \int_{-1}^1 1^2\, dt = 2 \quad or \quad ||x_1|| = \sqrt{2}

||x_2||^2 = \int_{-1}^1 t^2\, dt = \frac{2}{3} \quad or \quad ||x_2|| = \sqrt{\frac{2}{3}}

||x_3||^2 = \int_{-1}^1 (1 + t)^2\, dt = \frac{8}{3} \quad or \quad ||x_3|| = \sqrt{\frac{8}{3}}

Also

(x_1, x_2) = 0 \quad and \quad (x_3, x_1) = 2

Since (x_1, x_2) = 0, they are orthogonal. Choose them as basis functions (not
normalized). Clearly, x_3 is their vector sum.

Problem 10.12
a. |A| = \sqrt{1^2 + 3^2 + 2^2} = \sqrt{14}; \quad |B| = \sqrt{5^2 + 1^2 + 3^2} = \sqrt{35};
   A \cdot B = 1 \cdot 5 + 3 \cdot 1 + 2 \cdot 3 = 14; \quad |A \cdot B| = 14 \le |A||B| = \sqrt{14}\,\sqrt{35} = 22.1359,
   which is true.

b. |A| = \sqrt{56}; \quad |B| = \sqrt{12}; \quad A \cdot B = 6 \cdot 2 + 2 \cdot 2 + 4 \cdot 2 = 24;
   |A \cdot B| = 24 \le |A||B| = \sqrt{56}\,\sqrt{12} = 25.923, which is true.

c. |A| = \sqrt{26}; \quad |B| = \sqrt{50}; \quad A \cdot B = 4 \cdot 3 + 3 \cdot 4 + 1 \cdot 5 = 29;
   |A \cdot B| = 29 \le |A||B| = \sqrt{26}\,\sqrt{50} = 36.0555, which is true.

d. |A| = \sqrt{22}; \quad |B| = \sqrt{14}; \quad A \cdot B = 3(-1) + 3(-2) + 2 \cdot 3 = -3;
   |A \cdot B| = 3 \le |A||B| = \sqrt{22}\,\sqrt{14} = 17.5499, which is true.
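The four Schwarz-inequality checks can be automated; a short Python sketch:

```python
import math

def schwarz_holds(a, b):
    # Schwarz inequality for vectors: |A . B| <= |A| |B|
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return abs(dot) <= norm_a * norm_b

pairs = [((1, 3, 2), (5, 1, 3)),
         ((6, 2, 4), (2, 2, 2)),
         ((4, 3, 1), (3, 4, 5)),
         ((3, 3, 2), (-1, -2, 3))]
print([schwarz_holds(a, b) for a, b in pairs])  # -> [True, True, True, True]
```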

Problem 10.13
Normalize the first vector to form the first orthonormal vector:

\phi_1 = \frac{x_1}{|x_1|} = \frac{3i + 2j - k}{\sqrt{3^2 + 2^2 + (-1)^2}} = \frac{1}{\sqrt{14}} (3i + 2j - k)
       = 0.8018\, i + 0.5345\, j - 0.2673\, k

Take the second vector and subtract its component along \phi_1:

v_2 = (-2i + 5j + k) - \left[(-2i + 5j + k) \cdot \frac{1}{\sqrt{14}}(3i + 2j - k)\right] \frac{1}{\sqrt{14}}(3i + 2j - k)
    = -2.6429\, i + 4.5714\, j + 1.2143\, k

Normalize v_2 to form the second orthonormal vector:

\phi_2 = \frac{-2.6429\, i + 4.5714\, j + 1.2143\, k}{\sqrt{(-2.6429)^2 + (4.5714)^2 + (1.2143)^2}}
       = -0.4878\, i + 0.8437\, j + 0.2241\, k

Take the third vector and subtract its components along \phi_1 and \phi_2:

v_3 = (6i - 2j + 7k) - \left[(6i - 2j + 7k) \cdot \phi_1\right] \phi_1 - \left[(6i - 2j + 7k) \cdot \phi_2\right] \phi_2
    = 3.0146\, i - 0.4307\, j + 8.1825\, k

Normalize:

\phi_3 = \frac{3.0146\, i - 0.4307\, j + 8.1825\, k}{\sqrt{3.0146^2 + 0.4307^2 + 8.1825^2}}
       = 0.3453\, i - 0.0493\, j + 0.9372\, k

Finally, take the fourth vector and subtract its components along the other three:

v_4 = (3i + 8j - 3k) - \left[(3i + 8j - 3k) \cdot \phi_1\right] \phi_1 - \left[(3i + 8j - 3k) \cdot \phi_2\right] \phi_2
      - \left[(3i + 8j - 3k) \cdot \phi_3\right] \phi_3
    = 0i + 0j + 0k \quad (actually of the order of 10^{-14})

Therefore there are only three orthonormal vectors.
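The same computation can be sketched in Python: classical Gram-Schmidt applied to the four vectors reproduces the orthonormal vectors found above and drops the dependent fourth one (function names are ours):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors, tol=1e-10):
    """Classical Gram-Schmidt; vectors that are (numerically) linearly
    dependent on the ones already processed are dropped."""
    basis = []
    for v in vectors:
        w = list(map(float, v))
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm > tol:
            basis.append([wi / norm for wi in w])
    return basis

vecs = [(3, 2, -1), (-2, 5, 1), (6, -2, 7), (3, 8, -3)]
basis = gram_schmidt(vecs)
print(len(basis))                       # -> 3 (fourth vector is dependent)
print([round(c, 4) for c in basis[1]])  # -> [-0.4878, 0.8437, 0.2241]
```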

Problem 10.14

a. A set of normalized basis functions is

   \phi_1(t) = \frac{1}{\sqrt{2}}\, s_1(t)

   \phi_2(t) = \sqrt{\frac{2}{3}} \left[s_2(t) - \frac{1}{2} s_1(t)\right]

   \phi_3(t) = \sqrt{3} \left[s_3(t) - \frac{2}{3} s_1(t) - \frac{2}{3} s_2(t)\right]

b. Evaluate (s_1, \phi_1), (s_2, \phi_1), (s_2, \phi_2), etc., to get

   s_1 = \sqrt{2}\, \phi_1; \quad s_2 = \frac{1}{\sqrt{2}}\, \phi_1 + \sqrt{\frac{3}{2}}\, \phi_2; \quad s_3 = \sqrt{2}\, \phi_1 + \sqrt{\frac{2}{3}}\, \phi_2 + \frac{1}{\sqrt{3}}\, \phi_3

   Note that a basis function set could have been obtained by inspection. For example,
   three unit-height, unit-width, nonoverlapping pulses spanning the interval could have
   been used.

Problem 10.15

a. A basis set is

   \phi_1(t) = \sqrt{\frac{2}{T}} \cos(2\pi f_c t) \quad and \quad \phi_2(t) = \sqrt{\frac{2}{T}} \sin(2\pi f_c t), \quad 0 \le t \le T

b. The coordinates of the signal vectors are

   x_i = \int_0^T s_i(t) \phi_1(t)\, dt = \sqrt{E} \cos\left(\frac{i\pi}{4}\right)

   and

   y_i = \int_0^T s_i(t) \phi_2(t)\, dt = -\sqrt{E} \sin\left(\frac{i\pi}{4}\right)

   where E = A^2 T. Thus

   s_i(t) = \sqrt{E} \cos\left(\frac{i\pi}{4}\right) \phi_1(t) - \sqrt{E} \sin\left(\frac{i\pi}{4}\right) \phi_2(t), \quad i = 0, 1, 2, 3, 4, 5, 6, 7

   The signal point for i = 0 is at \left(\sqrt{E}, 0\right), with the rest at multiples of \pi/4 radians on
   a circle of radius \sqrt{E} centered at the origin.

Problem 10.16

a. Normalize x_1(t) to produce \phi_1(t):

   ||x_1||^2 = \int_0^{\infty} \left(e^{-t}\right)^2 dt = \int_0^{\infty} e^{-2t}\, dt = \left[-\frac{1}{2} e^{-2t}\right]_0^{\infty} = \frac{1}{2}

   \Rightarrow \phi_1(t) = \frac{x_1(t)}{1/\sqrt{2}} = \sqrt{2}\, e^{-t} u(t)

   Take the second signal and subtract its component along \phi_1:

   v_2(t) = e^{-2t} u(t) - \left[\int_0^{\infty} \sqrt{2}\, e^{-t} e^{-2t}\, dt\right] \sqrt{2}\, e^{-t} u(t)
          = e^{-2t} u(t) - \left[-\frac{\sqrt{2}}{3} e^{-3t}\right]_0^{\infty} \sqrt{2}\, e^{-t} u(t)
          = e^{-2t} u(t) - \frac{2}{3} e^{-t} u(t)

   Normalize v_2(t) to produce \phi_2(t):

   ||v_2||^2 = \int_0^{\infty} \left(e^{-2t} - \frac{2}{3} e^{-t}\right)^2 dt = \int_0^{\infty} \left(e^{-4t} - \frac{4}{3} e^{-3t} + \frac{4}{9} e^{-2t}\right) dt
            = \left[-\frac{1}{4} e^{-4t} + \frac{4}{9} e^{-3t} - \frac{2}{9} e^{-2t}\right]_0^{\infty} = \frac{1}{36}

   \Rightarrow \phi_2(t) = \frac{v_2(t)}{1/6} = \left(6 e^{-2t} - 4 e^{-t}\right) u(t)

   Take the third signal and subtract its components along \phi_1 and \phi_2:

   v_3(t) = e^{-3t} u(t) - \left[\int_0^{\infty} \sqrt{2}\, e^{-t} e^{-3t}\, dt\right] \sqrt{2}\, e^{-t} u(t)
            - \left[\int_0^{\infty} \left(6 e^{-2t} - 4 e^{-t}\right) e^{-3t}\, dt\right] \left(6 e^{-2t} - 4 e^{-t}\right) u(t)
          = \left(e^{-3t} - \frac{6}{5} e^{-2t} + \frac{3}{10} e^{-t}\right) u(t)

   Normalize v_3(t) to produce \phi_3(t):

   ||v_3||^2 = \int_0^{\infty} \left(e^{-3t} - \frac{6}{5} e^{-2t} + \frac{3}{10} e^{-t}\right)^2 dt
            = \int_0^{\infty} \left(e^{-6t} - \frac{12}{5} e^{-5t} + \frac{51}{25} e^{-4t} - \frac{18}{25} e^{-3t} + \frac{9}{100} e^{-2t}\right) dt
            = \frac{1}{600}

   \Rightarrow \phi_3(t) = \frac{v_3(t)}{1/\sqrt{600}} = \sqrt{6} \left(10 e^{-3t} - 12 e^{-2t} + 3 e^{-t}\right) u(t)

b. There does not appear to be a general pattern developing.

Problem 10.17
Normalize s_1(t) to produce \phi_1(t):

||s_1||^2 = \int_0^2 1^2\, dt = 2
\Rightarrow \phi_1(t) = \frac{s_1(t)}{\sqrt{2}} = \frac{1}{\sqrt{2}}, \quad 0 \le t \le 2

Take the second signal and subtract its component along \phi_1:

v_2(t) = \cos(\pi t) - \left[\int_0^2 \frac{1}{\sqrt{2}} \cos(\pi t)\, dt\right] \frac{1}{\sqrt{2}}
       = \cos(\pi t) - \frac{1}{2} \left[\frac{1}{\pi} \sin(\pi t)\right]_0^2
       = \cos(\pi t), \quad 0 \le t \le 2

Normalize it:

||v_2||^2 = \int_0^2 \cos^2(\pi t)\, dt = \int_0^2 \left[\frac{1}{2} + \frac{1}{2} \cos(2\pi t)\right] dt = 1
\Rightarrow \phi_2(t) = s_2(t) = \cos(\pi t), \quad 0 \le t \le 2

A similar operation with the third signal shows that

\phi_3(t) = s_3(t) = \sin(\pi t), \quad 0 \le t \le 2

Take the fourth signal and subtract its components along \phi_1, \phi_2, and \phi_3:

v_4(t) = \sin^2(\pi t) - \left[\int_0^2 \frac{1}{\sqrt{2}} \sin^2(\pi t)\, dt\right] \frac{1}{\sqrt{2}}
         - \left[\int_0^2 \cos(\pi t) \sin^2(\pi t)\, dt\right] \cos(\pi t)
         - \left[\int_0^2 \sin(\pi t) \sin^2(\pi t)\, dt\right] \sin(\pi t)
       = \sin^2(\pi t) - \frac{1}{2} = \frac{1}{2} - \frac{1}{2} \cos(2\pi t) - \frac{1}{2}
       = -\frac{1}{2} \cos(2\pi t), \quad 0 \le t \le 2

Normalize it:

||v_4||^2 = \int_0^2 \frac{1}{4} \cos^2(2\pi t)\, dt = \int_0^2 \left[\frac{1}{8} + \frac{1}{8} \cos(4\pi t)\right] dt = \frac{1}{4}
\Rightarrow \phi_4(t) = \frac{v_4(t)}{1/2} = -\cos(2\pi t), \quad 0 \le t \le 2

From these results we find that

s_1(t) = \sqrt{2}\, \phi_1(t)
s_2(t) = \phi_2(t)
s_3(t) = \phi_3(t)
s_4(t) = \frac{1}{2} - \frac{1}{2} \cos(2\pi t) = \frac{1}{\sqrt{2}}\, \phi_1(t) + \frac{1}{2}\, \phi_4(t)

Problem 10.18
a. Normalize x_1(t) to produce \phi_1(t):

   ||x_1||^2 = \int_{-1}^1 t^2\, dt = \left[\frac{t^3}{3}\right]_{-1}^1 = \frac{2}{3}
   \Rightarrow \phi_1(t) = \frac{x_1(t)}{\sqrt{2/3}} = \sqrt{\frac{3}{2}}\, t, \quad -1 \le t \le 1

   Take the second signal and subtract its component along \phi_1:

   v_2(t) = t^2 - \left[\int_{-1}^1 \sqrt{\frac{3}{2}}\, t \cdot t^2\, dt\right] \sqrt{\frac{3}{2}}\, t
          = t^2 - \frac{3}{2} \left[\frac{t^4}{4}\right]_{-1}^1 t = t^2, \quad -1 \le t \le 1

   Normalize v_2(t) to produce \phi_2(t):

   ||v_2||^2 = \int_{-1}^1 t^4\, dt = \left[\frac{t^5}{5}\right]_{-1}^1 = \frac{2}{5}
   \Rightarrow \phi_2(t) = \frac{v_2(t)}{\sqrt{2/5}} = \sqrt{\frac{5}{2}}\, t^2, \quad -1 \le t \le 1

   Take the third signal and subtract its components along \phi_1 and \phi_2:

   v_3(t) = t^3 - \left[\int_{-1}^1 \sqrt{\frac{3}{2}}\, t \cdot t^3\, dt\right] \sqrt{\frac{3}{2}}\, t - \left[\int_{-1}^1 \sqrt{\frac{5}{2}}\, t^2 \cdot t^3\, dt\right] \sqrt{\frac{5}{2}}\, t^2
          = t^3 - \frac{3}{2} \left[\frac{t^5}{5}\right]_{-1}^1 t - \frac{5}{2} \left[\frac{t^6}{6}\right]_{-1}^1 t^2
          = t^3 - \frac{3}{5}\, t, \quad -1 \le t \le 1

   Normalize v_3(t) to produce \phi_3(t):

   ||v_3||^2 = \int_{-1}^1 \left(t^3 - \frac{3}{5} t\right)^2 dt = \int_{-1}^1 \left(t^6 - \frac{6}{5} t^4 + \frac{9}{25} t^2\right) dt
            = \frac{2}{7} - \frac{6}{5} \cdot \frac{2}{5} + \frac{9}{25} \cdot \frac{2}{3} = \frac{8}{175}
   \Rightarrow \phi_3(t) = \frac{v_3(t)}{\sqrt{8/175}} = \sqrt{\frac{7}{2}} \left(\frac{5}{2} t^3 - \frac{3}{2} t\right), \quad -1 \le t \le 1

   Take the fourth signal and subtract its components along \phi_1, \phi_2, and \phi_3:

   v_4(t) = t^4 - \left[\int_{-1}^1 \sqrt{\frac{3}{2}}\, t \cdot t^4\, dt\right] \sqrt{\frac{3}{2}}\, t - \left[\int_{-1}^1 \sqrt{\frac{5}{2}}\, t^2 \cdot t^4\, dt\right] \sqrt{\frac{5}{2}}\, t^2
            - \left[\int_{-1}^1 \sqrt{\frac{7}{2}} \left(\frac{5}{2} t^3 - \frac{3}{2} t\right) t^4\, dt\right] \sqrt{\frac{7}{2}} \left(\frac{5}{2} t^3 - \frac{3}{2} t\right)
          = t^4 - \frac{5}{7}\, t^2, \quad -1 \le t \le 1

   Normalize v_4(t) to produce \phi_4(t):

   ||v_4||^2 = \int_{-1}^1 \left(t^4 - \frac{5}{7} t^2\right)^2 dt = \int_{-1}^1 \left(t^8 - \frac{10}{7} t^6 + \frac{25}{49} t^4\right) dt
            = \frac{2}{9} - \frac{10}{7} \cdot \frac{2}{7} + \frac{25}{49} \cdot \frac{2}{5} = \frac{8}{9 \cdot 49}
   \Rightarrow \phi_4(t) = \frac{v_4(t)}{\sqrt{8/(9 \cdot 49)}} = \frac{21}{2\sqrt{2}} \left(t^4 - \frac{5}{7} t^2\right), \quad -1 \le t \le 1

b. The pattern is not apparent.

Problem 10.19
By the time delay and modulation theorems,

\Phi_k(f) = \frac{\tau}{2} \left[\mathrm{sinc}\left(\tau f - \frac{1}{2}\right) + \mathrm{sinc}\left(\tau f + \frac{1}{2}\right)\right] e^{-j2\pi k \tau f}

The total energy in the kth signal is

E = \int_{-\tau/2}^{\tau/2} \cos^2(\pi t/\tau)\, dt = \int_{-\tau/2}^{\tau/2} \left[\frac{1}{2} + \frac{1}{2} \cos(2\pi t/\tau)\right] dt = \frac{\tau}{2}

The energy for |f| \le W is

E_W = \int_{-W}^W \frac{\tau^2}{4} \left[\mathrm{sinc}\left(\tau f - \frac{1}{2}\right) + \mathrm{sinc}\left(\tau f + \frac{1}{2}\right)\right]^2 df
    = \frac{\tau}{4} \int_{-\tau W}^{\tau W} \left[\mathrm{sinc}\left(v - \frac{1}{2}\right) + \mathrm{sinc}\left(v + \frac{1}{2}\right)\right]^2 dv, \quad v = \tau f
    = \frac{\tau}{2} \int_0^{\tau W} \left[\mathrm{sinc}\left(v - \frac{1}{2}\right) + \mathrm{sinc}\left(v + \frac{1}{2}\right)\right]^2 dv \quad (even\ integrand)

Thus

\frac{E_W}{E} = \int_0^{\tau W} \left[\mathrm{sinc}\left(v - \frac{1}{2}\right) + \mathrm{sinc}\left(v + \frac{1}{2}\right)\right]^2 dv

The MATLAB program below computes E_W/E:

   % pr10_19
   %
   disp('W_tau    E_W/E')
   for tau_W = 0.7:.1:1.3
       v = 0:0.01:tau_W;
       y = (sinc(v-0.5)+sinc(v+0.5)).^2;
       EW_E = trapz(v, y);
       disp([tau_W, EW_E])
   end

The results are given below:

   >> pr10_19
   W_tau    E_W/E
   0.7000   0.8586
   0.8000   0.9106
   0.9000   0.9467
   1.0000   0.9701
   1.1000   0.9838
   1.2000   0.9909
   1.3000   0.9939

The product \tau W lies between 0.8 and 0.9 for E_W/E = 11/12 = 0.9167.
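A Python equivalent of the MATLAB program is sketched below (here sinc is the normalized sinc, \sin(\pi x)/(\pi x), matching MATLAB's definition; the trapezoidal rule is the same idea as trapz):

```python
import math

def sinc(x):
    # Normalized sinc, as in MATLAB: sin(pi x)/(pi x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def ew_over_e(tau_w, n=2000):
    # Trapezoidal estimate of the normalized in-band energy integral
    h = tau_w / n
    f = lambda v: (sinc(v - 0.5) + sinc(v + 0.5)) ** 2
    return h * (0.5 * (f(0.0) + f(tau_w)) + sum(f(k * h) for k in range(1, n)))

for tau_w in (0.8, 0.9, 1.0):
    print(tau_w, round(ew_over_e(tau_w), 4))
```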

Problem 10.20
Write

P_c = \int_{-\infty}^{\infty} \frac{e^{-y^2}}{\sqrt{\pi}} \left[\int_{-\infty}^{y + \sqrt{E/N_0}} \frac{e^{-x^2}}{\sqrt{\pi}}\, dx\right] dy

and let z = x - y. This gives

P_c = \int_{-\infty}^{\sqrt{E/N_0}} \frac{e^{-z^2/2}}{\pi} \left[\int_{-\infty}^{\infty} e^{-2(y + z/2)^2}\, dy\right] dz

Complete the square in the exponent of the inside integral and use a table of definite
integrals to show that it evaluates to \sqrt{\pi/2}. The result then reduces to

P_c = \int_{-\infty}^{\sqrt{E/N_0}} \frac{e^{-z^2/2}}{\sqrt{2\pi}}\, dz = 1 - Q\left(\sqrt{E/N_0}\right)

which gives P_E = Q\left(\sqrt{E/N_0}\right), the desired result.

Problem 10.21

a. By inspection of the given waveform set it follows that a satisfactory set of orthonormal
   functions is

   \phi_1(t) = \sqrt{2} \cos(2\pi t), \quad 0 \le t \le 1
   \phi_2(t) = \sqrt{2} \sin(2\pi t), \quad 0 \le t \le 1
   \phi_3(t) = \sqrt{2} \cos(4\pi t), \quad 0 \le t \le 1
   \phi_4(t) = \sqrt{2} \sin(4\pi t), \quad 0 \le t \le 1

   For i = 0, 1, 2, 3,

   s_i(t) = A \cos(i\pi/2) \cos(2\pi t) - A \sin(i\pi/2) \sin(2\pi t)
          = \sqrt{E} \left[\cos(i\pi/2)\, \phi_1(t) - \sin(i\pi/2)\, \phi_2(t)\right]

   For i = 4, 5, 6, 7,

   s_i(t) = A \cos[(i-4)\pi/2] \cos(4\pi t) - A \sin[(i-4)\pi/2] \sin(4\pi t)
          = \sqrt{E} \left[\cos[(i-4)\pi/2]\, \phi_3(t) - \sin[(i-4)\pi/2]\, \phi_4(t)\right]

b. The receiver looks like Figure 10.5 with four multipliers and integrators, one for each
   orthonormal function. The signal coordinates A = [A_{ij}], i = 0, 1, \ldots, 7; j = 1, 2, 3, 4,
   are

   A_0 = \left(\sqrt{E}, 0, 0, 0\right) \qquad A_4 = \left(0, 0, \sqrt{E}, 0\right)
   A_1 = \left(0, -\sqrt{E}, 0, 0\right) \qquad A_5 = \left(0, 0, 0, -\sqrt{E}\right)
   A_2 = \left(-\sqrt{E}, 0, 0, 0\right) \qquad A_6 = \left(0, 0, -\sqrt{E}, 0\right)
   A_3 = \left(0, \sqrt{E}, 0, 0\right) \qquad A_7 = \left(0, 0, 0, \sqrt{E}\right)

   From (10.96) the decision rule is

   Choose H_\ell such that d^2 = \sum_{j=1}^4 (Z_j - A_{\ell j})^2 = minimum

   where

   Z_j = A_{ij} + N_j = \int_0^{T=1} y(t) \phi_j(t)\, dt = \int_0^1 [s_i(t) + n(t)] \phi_j(t)\, dt, \quad j = 1, 2, 3, 4

   This can be done suboptimally in two stages: (1) Correlate with two tones, one at
   frequency 1 Hz and the other at frequency 2 Hz, and choose the output having the
   largest envelope (or squared envelope) (note that this will involve a correlation with
   both a sine and a cosine at each frequency); (2) Taking the pair of outputs from the
   correlators (one sine and one cosine) having the largest output envelope, determine
   in which \pi/2-radian range the output pair lies (i.e., 0 to \pi/2, \pi/2 to \pi, \pi to 3\pi/2, or
   3\pi/2 to 2\pi radians).

c. As before, E[N_i N_j] = \frac{N_0}{2} \delta_{ij}. According to the suboptimum strategy given above, we
   make the comparison

   (y, \phi_1)^2 + (y, \phi_2)^2 \gtrless (y, \phi_3)^2 + (y, \phi_4)^2

   Suppose s_0(t) was really transmitted. Then the probability of a correct decision on
   frequency is

   P_{cf} = Pr\left[\left(\sqrt{E} + N_1\right)^2 + N_2^2 > N_3^2 + N_4^2\right]
         = Pr\left[\left(\sqrt{E} + N_1\right)^2 > N_3^2 + N_4^2 - N_2^2\right]

   The probability of correct decision on signal phase, from the analysis of quadriphase
   in Chapter 9 resulting in (9.15), is

   P_{cp} = 1 - 2Q\left(\sqrt{E/N_0}\right)

   By symmetry, these expressions hold for either frequency and any phase. The overall
   probability of correct reception is P_c = P_{cf} P_{cp}, for an overall probability of error of

   P_E = 1 - P_{cf} P_{cp}

Problem 10.22

a. The space is three-dimensional with signal points at the eight points

   \left(\pm\sqrt{E_s/3},\ \pm\sqrt{E_s/3},\ \pm\sqrt{E_s/3}\right)

   The optimum partitions are planes determined by the coordinate axes taken two at
   a time (three planes). Thus the optimum decision regions are the eight octants of
   the signal space.

b. Consider S_1. Given the partitioning discussed in part (a), we make a correct decision
   only if

   ||Z - S_1||^2 < ||Z - S_2||^2

   and

   ||Z - S_1||^2 < ||Z - S_4||^2

   and

   ||Z - S_1||^2 < ||Z - S_8||^2

   where S_2, S_4, and S_8 are the nearest-neighbor signal points to S_1, each differing from
   S_1 in a single coordinate. These are the most probable errors. Substituting Z =
   (Z_1, Z_2, Z_3) and S_1 = \sqrt{E_s/3}\,(1, 1, 1), S_2 = \sqrt{E_s/3}\,(-1, 1, 1), S_4 = \sqrt{E_s/3}\,(1, -1, 1),
   and S_8 = \sqrt{E_s/3}\,(1, 1, -1), the above conditions become

   \left(Z_1 - \sqrt{E_s/3}\right)^2 < \left(Z_1 + \sqrt{E_s/3}\right)^2, \quad \left(Z_2 - \sqrt{E_s/3}\right)^2 < \left(Z_2 + \sqrt{E_s/3}\right)^2,
   and \left(Z_3 - \sqrt{E_s/3}\right)^2 < \left(Z_3 + \sqrt{E_s/3}\right)^2

   by canceling like terms on each side. For S_1, these reduce to

   Z_1 > 0, \quad Z_2 > 0, \quad and \quad Z_3 > 0

   to define the decision region for S_1. Therefore, the probability of correct decision is

   P[\mathrm{correct\ dec.}|s_1(t)] = Pr(Z_1 > 0, Z_2 > 0, Z_3 > 0) = [Pr(Z_i > 0)]^3

   because the noises along each coordinate axis are independent. Note that E[Z_i] =
   (E_s/3)^{1/2} for all i. The noise variances, and therefore the variances of the Z_i's, are all
   N_0/2. Thus

   P[\mathrm{correct\ dec.}|s_1(t)] = \left[\int_0^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\left(-\frac{1}{N_0}\left(y - \sqrt{E_s/3}\right)^2\right) dy\right]^3
        = \left[\frac{1}{\sqrt{2\pi}} \int_{-\sqrt{2E_s/3N_0}}^{\infty} \exp\left(-\frac{u^2}{2}\right) du\right]^3, \quad u = \sqrt{\frac{2}{N_0}}\left(y - \sqrt{E_s/3}\right)
        = \left[1 - Q\left(\sqrt{2E_s/3N_0}\right)\right]^3

   Since this is independent of the signal chosen, this is the average probability of correct
   detection. The symbol error probability is 1 - P_c. Generalizing to n dimensions, we
   have

   P_E = 1 - \left[1 - Q\left(\sqrt{2E_s/nN_0}\right)\right]^n = 1 - \left[1 - Q\left(\sqrt{2E_b/N_0}\right)\right]^n

   where n = \log_2 M. Note that E_b = E_s/n since each symbol carries n bits and
   M = 2^n.
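The final expression is easy to evaluate; a minimal Python sketch of P_E versus E_b/N_0 (the 9.6 dB operating point is just an illustrative choice of ours):

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_hypercube(eb_n0_db, n):
    # P_E = 1 - [1 - Q(sqrt(2 Eb/N0))]^n for vertices-of-a-hypercube signaling
    eb_n0 = 10 ** (eb_n0_db / 10)
    return 1 - (1 - Q(math.sqrt(2 * eb_n0))) ** n

for n in (1, 2, 3, 4):
    print(n, f"{pe_hypercube(9.6, n):.3e}")
```

As expected from the union bound, P_E grows with n but stays below n times the n = 1 value.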

c. The symbol error probability is plotted in Fig. 10.1 (P_s versus E_b/N_0 in dB for
   n = 1, 2, 3, 4).

Problem 10.23

a. Integrate the product of the (i-1)st and ith signals over [0, T_s]:

   \int_0^{T_s} A^2 \cos\{2\pi [f_c + (i-1)\Delta f] t\} \cos[2\pi (f_c + i \Delta f) t]\, dt
   = \frac{A^2}{2} \int_0^{T_s} \cos[2\pi (2f_c + (2i-1)\Delta f) t]\, dt + \frac{A^2}{2} \int_0^{T_s} \cos[2\pi (\Delta f) t]\, dt
   = \frac{A^2}{2} \left[\frac{\sin[2\pi (2f_c + (2i-1)\Delta f) t]}{2\pi (2f_c + (2i-1)\Delta f)} + \frac{\sin[2\pi (\Delta f) t]}{2\pi (\Delta f)}\right]_0^{T_s}
   = \frac{A^2}{2} \left[\frac{\sin[2\pi (2f_c + (2i-1)\Delta f) T_s]}{2\pi (2f_c + (2i-1)\Delta f)} + \frac{\sin[2\pi (\Delta f) T_s]}{2\pi (\Delta f)}\right]
   \approx \frac{A^2 T_s}{2} \cdot \frac{\sin(2\pi \Delta f T_s)}{2\pi \Delta f T_s} = \frac{A^2 T_s}{2} \mathrm{sinc}(2 \Delta f T_s)

   where the first term has been neglected because it is small for typical values of f_c.
   The remaining term is 0 for its argument equal to a nonzero integer. The smallest
   such integer is 1, which gives

   2 \Delta f T_s = 1 \quad or \quad \Delta f = \frac{1}{2T_s}

b. We need 1/2T_s Hz of bandwidth per signal, so

   M = \frac{W}{1/2T_s} = 2WT_s \quad or \quad W = \frac{M}{2T_s}

c. For vertices-of-a-hypercube signaling, M = 2^n where n is the dimensionality (number
   of orthogonal functions), which is 2WT_s, so

   \log_2 M = 2WT_s \quad or \quad W = \frac{\log_2 M}{2T_s}
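The orthogonality condition of part (a) can be checked by numerical integration; a sketch where f_c = 10 Hz and T_s = 1 s are illustrative values of ours:

```python
import math

def correlation(delta_f, fc=10.0, Ts=1.0, n=20000):
    # Trapezoidal estimate of int_0^Ts cos(2 pi fc t) cos(2 pi (fc+delta_f) t) dt
    h = Ts / n
    g = lambda t: math.cos(2 * math.pi * fc * t) * math.cos(2 * math.pi * (fc + delta_f) * t)
    return h * (0.5 * (g(0.0) + g(Ts)) + sum(g(k * h) for k in range(1, n)))

print(round(correlation(0.5), 4))    # delta_f = 1/(2 Ts): ~ 0 (orthogonal)
print(round(correlation(0.25), 4))   # half the minimum spacing -> 0.3222 (not orthogonal)
```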

Problem 10.24

a. By definition, the new signal set is

   s_i'(t) = s_i(t) - \frac{1}{M} \sum_{j=0}^{M-1} s_j(t)

   The signal energy for the ith signal in the new set is

   E_s' = \int_0^{T_s} \left[s_i(t) - \frac{1}{M} \sum_{j=0}^{M-1} s_j(t)\right]^2 dt
        = \int_0^{T_s} \left[s_i^2(t) - \frac{2}{M} \sum_{j=0}^{M-1} s_i(t) s_j(t) + \frac{1}{M^2} \sum_{j=0}^{M-1} \sum_{k=0}^{M-1} s_j(t) s_k(t)\right] dt
        = E_s - \frac{2}{M} \sum_{j=0}^{M-1} E_s \delta_{ij} + \frac{1}{M^2} \sum_{j=0}^{M-1} \sum_{k=0}^{M-1} E_s \delta_{jk}, \quad \delta_{jk} = 0,\ j \ne k\ and\ 1,\ j = k
        = E_s - \frac{2}{M} E_s + \frac{1}{M} E_s
        = E_s \left(1 - \frac{1}{M}\right) = \frac{M-1}{M} E_s

b. The correlation coefficient between two different signals is

   \rho_{mn} = \frac{1}{E_s'} \int_0^{T_s} \left[s_m(t) - \frac{1}{M} \sum_{j=0}^{M-1} s_j(t)\right] \left[s_n(t) - \frac{1}{M} \sum_{k=0}^{M-1} s_k(t)\right] dt, \quad m \ne n
            = \frac{1}{E_s'} \left[E_s \delta_{mn} - \frac{1}{M} \sum_{j=0}^{M-1} E_s \delta_{nj} - \frac{1}{M} \sum_{k=0}^{M-1} E_s \delta_{mk} + \frac{1}{M^2} \sum_{j=0}^{M-1} \sum_{k=0}^{M-1} E_s \delta_{jk}\right]
            = \frac{1}{E_s'} \left[0 - \frac{E_s}{M} - \frac{E_s}{M} + \frac{E_s}{M}\right] = -\frac{E_s}{M} \cdot \frac{M}{E_s (M-1)}
            = -\frac{1}{M-1}, \quad m \ne n

c. Simply solve for E_s in the expression of part (a) and substitute for E_s in the orthogonal
   signaling result. This gives

   P_{c,\,simplex} = \int_{-\infty}^{\infty} \left\{1 - Q\left[v + \sqrt{\frac{2E_s'}{N_0} \left(\frac{M}{M-1}\right)}\right]\right\}^{M-1} \frac{e^{-v^2/2}}{\sqrt{2\pi}}\, dv

d. The union bound approximation for the symbol error probability for coherent orthog-
   onal signaling is given by (9.67) as

   P_{s,\,orthog} \approx (M-1)\, Q\left(\sqrt{\frac{E_s}{N_0}}\right)

   Using the substitution used in part (c), we get

   P_{s,\,simplex} \approx (M-1)\, Q\left(\sqrt{\frac{E_s'}{N_0} \left(\frac{M}{M-1}\right)}\right)

   The bit error probability can be approximated as

   P_{b,\,simplex} \approx \frac{M}{2}\, Q\left(\sqrt{\log_2(M)\, \frac{E_b}{N_0} \left(\frac{M}{M-1}\right)}\right)

   Plots are given in Fig. 10.2 (P_b versus E_b/N_0 in dB for M = 2, 4, 8, 16, and 32, for
   M-ary coherent FSK and M-ary simplex signaling).

Problem 10.25
In the equation

P(E|H_1) = \int_0^{\infty} \left[\int_{r_1}^{\infty} f_{R_2}(r_2|H_1)\, dr_2\right] f_{R_1}(r_1|H_1)\, dr_1

substitute

f_{R_1}(r_1|H_1) = \frac{2r_1}{2E\sigma^2 + N_0}\, e^{-r_1^2/(2E\sigma^2 + N_0)}, \quad r_1 \ge 0

and

f_{R_2}(r_2|H_1) = \frac{2r_2}{N_0}\, e^{-r_2^2/N_0}, \quad r_2 > 0

The inside integral becomes

I = e^{-r_1^2/N_0}

When substituted in the first equation, it can be reduced to

P(E|H_1) = \int_0^{\infty} \frac{2r_1}{2E\sigma^2 + N_0}\, e^{-r_1^2/(2E\sigma^2 + N_0)}\, e^{-r_1^2/N_0}\, dr_1 = \int_0^{\infty} \frac{2r_1}{2E\sigma^2 + N_0}\, e^{-r_1^2/K}\, dr_1

where

K = \frac{N_0 \left(2E\sigma^2 + N_0\right)}{2\left(E\sigma^2 + N_0\right)}

The remaining integral is then easily integrated, since its integrand is a perfect differential
if written as

P(E|H_1) = \frac{K}{2E\sigma^2 + N_0} \int_0^{\infty} \frac{2r_1}{K}\, e^{-r_1^2/K}\, dr_1

In the integral, let v = r_1^2/K to get

\int_0^{\infty} \frac{2r_1}{K}\, e^{-r_1^2/K}\, dr_1 = \int_0^{\infty} e^{-v}\, dv = 1

so that finally

P(E|H_1) = \frac{K}{2E\sigma^2 + N_0} = \frac{1}{2E\sigma^2 + N_0} \cdot \frac{N_0 \left(2E\sigma^2 + N_0\right)}{2\left(E\sigma^2 + N_0\right)}
         = \frac{N_0}{2\left(E\sigma^2 + N_0\right)} = \frac{1}{2\left[1 + \frac{1}{2}\left(\frac{2E\sigma^2}{N_0}\right)\right]}

which is the same as (10.142).

Problem 10.26

a. Use the basis set

   \phi_{ci}(t) = \sqrt{\frac{2}{T}} \cos \omega_i t, \quad 0 \le t \le T
   \phi_{si}(t) = \sqrt{\frac{2}{T}} \sin \omega_i t, \quad 0 \le t \le T, \quad i = 1, 2, \ldots, M

   Base the decision on Z = (Z_{c1}, Z_{s1}, \ldots, Z_{cM}, Z_{sM}), where Z_{ci} = (y, \phi_{ci}) and
   Z_{si} = (y, \phi_{si}). Suppose that the ith hypothesis is true. Then the components of Z
   are

   Z_{cj} = \sqrt{E_j}\, G_{cj} + N_{cj}, \quad j = i; \qquad Z_{cj} = N_{cj}, \quad j \ne i
   Z_{sj} = \sqrt{E_j}\, G_{sj} + N_{sj}, \quad j = i; \qquad Z_{sj} = N_{sj}, \quad j \ne i

   where G_{cj} = G_j \cos\theta and G_{sj} = G_j \sin\theta are independent Gaussian random variables
   with variances \sigma^2, and N_{cj}, N_{sj} are independent Gaussian random variables with
   variances N_0/2. The pdf of Z given that hypothesis H_i is true is

   f_{Z|H_i}(z|H_i) = \frac{\exp\left[-\left(z_{ci}^2 + z_{si}^2\right)/\left(E_i \sigma^2 + N_0\right)\right]}{\pi\left(E_i \sigma^2 + N_0\right)} \cdot \frac{\exp\left[-\sum_{j=1,\, j \ne i}^M \left(z_{cj}^2 + z_{sj}^2\right)/N_0\right]}{\left(\pi N_0\right)^{M-1}}

   Decide hypothesis H_i is true if

   f_{Z|H_i}(z|H_i) \ge f_{Z|H_j}(z|H_j), \quad all\ j

   This constitutes the maximum likelihood decision rule since the hypotheses are equally
   probable. Alternatively, we can take the natural log of both sides and use that as a
   test. This reduces to computing

   \frac{E_i \sigma^2}{E_i \sigma^2 + N_0} \left(z_{ci}^2 + z_{si}^2\right)

   and choosing the signal corresponding to the largest. If the signals have equal energy,
   then the optimum receiver is seen to be an M-signal replica of the one shown in Fig.
   10.8a.

b. Assume that the E_i's are equal. Note that Z_i = \sqrt{Z_{ci}^2 + Z_{si}^2} is Rayleigh distributed
   given H_i. Without loss of generality assume that H_1 is given. Then

   f_{Z_1|H_1}(z_1|H_1) = \frac{z_1}{E\sigma^2 + N_0} \exp\left[-z_1^2/2\left(E\sigma^2 + N_0\right)\right], \quad z_1 \ge 0

   f_{Z_i|H_1}(z_i|H_1) = \frac{z_i}{N_0} \exp\left(-z_i^2/2N_0\right), \quad z_i \ge 0, \quad i \ne 1

   The probability of correct decision is

   P_c = E\{Pr[Z_i < Z_1,\ all\ i = 2, 3, \ldots, M]\}
       = \int_0^{\infty} \frac{z_1 \exp\left[-z_1^2/2\left(E\sigma^2 + N_0\right)\right]}{E\sigma^2 + N_0} \left[\int_0^{z_1} \frac{z_i}{N_0} \exp\left(-z_i^2/2N_0\right) dz_i\right]^{M-1} dz_1
       = \int_0^{\infty} \frac{z_1 \exp\left[-z_1^2/2\left(E\sigma^2 + N_0\right)\right]}{E\sigma^2 + N_0} \left[1 - \exp\left(-z_1^2/2N_0\right)\right]^{M-1} dz_1

   Using the binomial theorem,

   \left[1 - \exp\left(-z_1^2/2N_0\right)\right]^{M-1} = \sum_{i=0}^{M-1} \binom{M-1}{i} (-1)^{M-1-i} \exp\left[-(M-1-i)\, z_1^2/2N_0\right]

   so that

   P_c = \frac{1}{E\sigma^2 + N_0} \sum_{i=0}^{M-1} \binom{M-1}{i} (-1)^{M-1-i} \int_0^{\infty} z_1 \exp\left(-K_i z_1^2\right) dz_1

   where

   K_i = \frac{1}{2\left(E\sigma^2 + N_0\right)} + \frac{M-1-i}{2N_0}

   Carrying out the integration \left(\int_0^{\infty} z_1 e^{-K_i z_1^2}\, dz_1 = 1/2K_i\right) gives

   P_c = \sum_{i=0}^{M-1} \binom{M-1}{i} (-1)^{M-1-i} \frac{N_0}{N_0 + \left(E\sigma^2 + N_0\right)(M-1-i)}
       = \sum_{i=0}^{M-1} \binom{M-1}{i} \frac{(-1)^{M-1-i}}{1 + \left(E\sigma^2/N_0 + 1\right)(M-1-i)} = 1 - P_E

Problem 10.27

a. It can be shown that under hypothesis H_1, Y_1 is the sum of squares of 2N independent
   Gaussian random variables, each having mean zero and variance

   \sigma_{11}^2 = \frac{2\sigma^2 E}{N} + \frac{N_0}{2}

   and Y_2 is the sum of squares of 2N independent Gaussian random variables, each
   having zero mean and variance

   \sigma_{21}^2 = \frac{N_0}{2}

   Therefore, from Problem 5.37, the pdfs under hypothesis H_1 are

   f_{Y_1}(y_1|H_1) = \frac{y_1^{N-1}\, e^{-y_1/2\sigma_{11}^2}}{\left(2\sigma_{11}^2\right)^N \Gamma(N)}, \quad y_1 \ge 0

   and

   f_{Y_2}(y_2|H_1) = \frac{y_2^{N-1}\, e^{-y_2/N_0}}{2^N (N_0/2)^N \Gamma(N)}, \quad y_2 \ge 0

b. The probability of error is

   P_E = Pr(E|H_1) = Pr(E|H_2) = Pr(Y_2 > Y_1|H_1)
       = \int_0^{\infty} \left[\int_{y_1}^{\infty} f_{Y_2}(y_2|H_1)\, dy_2\right] f_{Y_1}(y_1|H_1)\, dy_1

   Using integration by parts, it can be shown that

   I(x, a) = \int_x^{\infty} z^n e^{-az}\, dz = e^{-ax} \sum_{i=0}^n \frac{n!}{i!\, a^{n+1-i}}\, x^i

   This can be used to get the formula given in the problem statement.

c. Plots of P_E versus SNR in dB are given for N = 1 to 13 in steps of 2 in Fig. 10.3.

Problem 10.28

a. The characteristic function is found and used to obtain the moments. By de…nition,
Z 1 m
j !
(j!) = E e = ej ! e m 1
d
0 (m)
To integrate, note that Z 1
1
ej !
e d =
0 j!
Di¤erentiate both sides m 1 times with respect to . This gives
Z 1
( 1)m 1 (m 1)!
ej ! ( 1)m 1 m 1 e d =
0 ( j!)m
m
Multiply both sides by and cancel like terms to get
Z 1 m m
ej !
e m 1
d =
0 (m) ( j!)m
33

PE for FSK signaling with diversity in Rayleigh fading: N = 1 - 13 paths in steps of 2


0
10

-1
10

-2
10

-3
10
PE

-4
10

-5
10

-6
10

-7
10
0 2 4 6 8 10 12 14 16 18 20
SNR, dB
34CHAPTER 10. THE APPLICATION OF DETECTION AND ESTIMATION THEORY TO COMMUNICA

Thus m
(j!) =
( j!)m
Di¤erentiating this with respect to !, we get

0 m 2 m (m + 1)
E[ ]=j (0) and E = j2 00
(0) =

The variance is
2 m
Var ( ) = E E2 ( ) = 2

b. Use Bayes' rule. But first we need $f_Z(z)$, which is obtained as follows:

$$
f_Z(z) = \int_0^\infty f_{Z|\lambda}(z \mid \lambda)\, f_\lambda(\lambda)\, d\lambda
       = \int_0^\infty \lambda e^{-\lambda z}\, \frac{\beta^m \lambda^{m-1} e^{-\beta\lambda}}{\Gamma(m)}\, d\lambda
       = \frac{m\beta^m}{(\beta + z)^{m+1}} \int_0^\infty \frac{(\beta + z)^{m+1}\, \lambda^m\, e^{-(\beta + z)\lambda}}{\Gamma(m+1)}\, d\lambda
       = \frac{m\beta^m}{(\beta + z)^{m+1}}
$$

where the last integral is evaluated by noting that the integrand is a pdf and therefore integrates to unity. Using Bayes' rule, we find that

$$
f_{\lambda|Z}(\lambda \mid z) = \frac{(\beta + z)^{m+1}\, \lambda^m\, e^{-(\beta + z)\lambda}}{\Gamma(m+1)}
$$

Note that this is the same pdf as in part (a) except that $\beta$ has been replaced by $\beta + z$ and $m$ has been replaced by $m+1$. Therefore, we can infer the moments from part (a) as well. They are

$$
E[\lambda \mid Z] = \frac{m+1}{\beta + Z} \quad\text{and}\quad \mathrm{Var}[\lambda \mid Z] = \frac{m+1}{(\beta + Z)^2}
$$

c. Assume the observations are independent. Then

$$
f_{Z_1 Z_2|\lambda}(z_1, z_2 \mid \lambda) = \lambda^2 e^{-\lambda(z_1 + z_2)}, \quad z_1, z_2 \ge 0
$$

The integration to find the joint pdf of $Z_1$ and $Z_2$ is similar to the procedure used to find the pdf of $Z$ above, with the result that

$$
f_{Z_1 Z_2}(z_1, z_2) = \frac{m(m+1)\beta^m}{(\beta + z_1 + z_2)^{m+2}}
$$

Again using Bayes' rule, we find that

$$
f_{\lambda|Z_1 Z_2}(\lambda \mid z_1, z_2) = \frac{(\beta + z_1 + z_2)^{m+2}\, \lambda^{m+1}\, e^{-(\beta + z_1 + z_2)\lambda}}{\Gamma(m+2)}
$$

Since this is of the same form as in parts (a) and (b), we can extend the results there for the moments to

$$
E[\lambda \mid Z_1, Z_2] = \frac{m+2}{\beta + Z_1 + Z_2} \quad\text{and}\quad \mathrm{Var}[\lambda \mid Z_1, Z_2] = \frac{m+2}{(\beta + Z_1 + Z_2)^2}
$$

d. By examining the pattern developed in parts (a), (b), and (c), we conclude that

$$
f_{\lambda|Z_1 \cdots Z_K}(\lambda \mid z_1, z_2, \ldots, z_K) = \frac{(\beta + z_1 + z_2 + \cdots + z_K)^{m+K}\, \lambda^{m+K-1}\, e^{-(\beta + z_1 + z_2 + \cdots + z_K)\lambda}}{\Gamma(m+K)}
$$

and

$$
E[\lambda \mid Z_1, Z_2, \ldots, Z_K] = \frac{m+K}{\beta + Z_1 + Z_2 + \cdots + Z_K}, \quad
\mathrm{Var}[\lambda \mid Z_1, Z_2, \ldots, Z_K] = \frac{m+K}{(\beta + Z_1 + Z_2 + \cdots + Z_K)^2}
$$
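The pattern in parts (a)-(d) is a conjugate update: a gamma prior on the exponential rate stays gamma under exponential observations, with shape $m \to m+K$ and rate $\beta \to \beta + \sum z_k$. A minimal Python sketch (illustrative parameter values, not from the text):

```python
# Conjugate-update sketch implied by parts (a)-(d): a Gamma(m, beta) prior on
# the exponential rate lambda, combined with K independent exponential
# observations z_1..z_K, stays Gamma with shape m+K and rate beta + sum(z_k).
def gamma_posterior(m, beta, observations):
    """Return (shape, rate) of the posterior after the observations."""
    return m + len(observations), beta + sum(observations)

m, beta = 2.0, 1.0
m_post, beta_post = gamma_posterior(m, beta, [0.5, 1.5])
post_mean = m_post / beta_post       # (m+2)/(beta+z1+z2) = 4/3
post_var = m_post / beta_post**2     # (m+2)/(beta+z1+z2)^2 = 4/9
print(post_mean, post_var)
```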

Problem 10.29
The mean of the individual samples is $A$, as is the mean of $\hat{a}_{ML}(\mathbf{Z})$. Therefore

$$
\mathrm{var}[\hat{a}_{ML}(\mathbf{Z})] = E\left[\left(\frac{1}{K}\sum_{k=1}^{K} (Z_k - A)\right)^2\right]
= \frac{1}{K^2}\, E\left[\sum_{k=1}^{K}\sum_{j=1}^{K} (Z_k - A)(Z_j - A)\right]
= \frac{1}{K^2} \sum_{k=1}^{K}\sum_{j=1}^{K} E[(Z_k - A)(Z_j - A)]
$$

But, by the independence of the samples and the fact that $Z_k - A = N_k$,

$$
E[(Z_k - A)(Z_j - A)] = \begin{cases} 0, & k \ne j \\ \sigma_n^2, & k = j \end{cases}
$$

Thus

$$
\mathrm{var}[\hat{a}_{ML}(\mathbf{Z})] = \frac{1}{K^2} \sum_{k=1}^{K}\sum_{j=1}^{K} \sigma_n^2\, \delta_{kj}
= \frac{1}{K^2} \sum_{k=1}^{K} \sigma_n^2
= \frac{\sigma_n^2}{K}
$$
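This variance is easy to confirm by simulation. A Python (NumPy) sketch, not part of the text, with illustrative parameter values:

```python
# Monte Carlo check of var[a_hat_ML] = sigma_n^2 / K for the sample-mean
# estimator of A in independent Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
A, sigma_n, K, trials = 2.0, 1.5, 10, 200_000
Z = A + sigma_n * rng.standard_normal((trials, K))
a_hat = Z.mean(axis=1)                   # the ML estimate, one per trial
print(a_hat.mean(), A)                   # unbiased: both near 2.0
print(a_hat.var(), sigma_n**2 / K)       # both near 0.225
```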

Problem 10.30
The conditional-mean and Bayes estimates are the same if:

1. $C(x)$ is symmetric;
2. $C(x)$ is convex upward;
3. $f_{A|Z}$ is symmetric about the mean.

Armed with this, we find the following to be true:

a. Same - all conditions are satisfied.

b. Same - all conditions are satisfied.

c. Condition 3 is not satisfied.

d. Condition 2 is not satisfied.

Problem 10.31

a. The conditional pdf of the noise samples is

$$
f_Z\left(z_1, z_2, \ldots, z_K \mid \sigma_n^2\right) = \left(2\pi\sigma_n^2\right)^{-K/2} \exp\left[-\sum_{i=1}^{K} z_i^2 / 2\sigma_n^2\right]
$$

The maximum-likelihood estimate for $\sigma_n^2$ maximizes this expression. It can be found by differentiating the pdf with respect to $\sigma_n^2$ and setting the result equal to zero. Solving for $\sigma_n^2$, we find the maximum-likelihood estimate to be

$$
\hat{\sigma}_n^2 = \frac{1}{K} \sum_{i=1}^{K} Z_i^2
$$

b. The variance of the estimate is

$$
\mathrm{var}\left(\hat{\sigma}_n^2\right) = \frac{2\sigma_n^4}{K}
$$

c. Yes.

d. $\sum_{i=1}^{K} Z_i^2$ is a sufficient statistic.
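Both parts (a) and (b) can be verified by simulation. A Python (NumPy) sketch, not part of the text, with illustrative parameter values:

```python
# Monte Carlo check: sigma_hat_n^2 = (1/K) * sum(Z_i^2) has mean sigma_n^2
# (unbiased) and variance 2*sigma_n^4/K for zero-mean Gaussian samples.
import numpy as np

rng = np.random.default_rng(2)
sigma_n2, K, trials = 2.0, 20, 500_000
Z = np.sqrt(sigma_n2) * rng.standard_normal((trials, K))
est = (Z**2).mean(axis=1)                 # the ML estimate, one per trial
print(est.mean(), sigma_n2)               # both near 2.0
print(est.var(), 2 * sigma_n2**2 / K)     # both near 0.4
```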

Problem 10.32

We base our estimate on the observation $Z_0 = M_0 + N$, where $N$ is a Gaussian random variable of mean zero and variance $N_0/2$. The sample value $M_0$ is Gaussian with mean $m_0$ and variance $\sigma_m^2$. The figure of merit is mean-squared error. Thus we use the conditional-mean estimate, which coincides with the MAP estimate because the random variables involved are Gaussian. That is,

$$
\hat{m}_0 = E(M_0 \mid Z_0 = z_0) = \alpha z_0
$$

where

$$
\alpha = \frac{\sigma_m^2}{\sigma_m^2 + N_0/2}
$$

It is of interest to compare this with the ML estimate, which is

$$
\hat{m}_{0,ML} = Z_0
$$

which results if we have no a priori knowledge of the parameter. Note that the MAP estimate reduces to the ML estimate when $\sigma_m^2 \to \infty$. Note also that for $N_0 \to \infty$ (i.e., small signal-to-noise ratio) we don't use the observed value of $Z_0$, but instead estimate $m_0$ as zero (its mean).
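The benefit of the shrinkage factor $\alpha$ shows up directly in mean-squared error. A Python (NumPy) sketch, not part of the text; for simplicity it assumes $M_0$ is zero-mean, and the parameter values are illustrative:

```python
# Monte Carlo comparison of the MAP (shrinkage) and ML estimates of M0 from
# Z0 = M0 + N. M0 is taken zero-mean here (an assumption for this sketch),
# giving m_hat = alpha * Z0 with alpha = sigma_m^2 / (sigma_m^2 + N0/2).
import numpy as np

rng = np.random.default_rng(3)
sigma_m2, N0, trials = 1.0, 4.0, 500_000
M0 = np.sqrt(sigma_m2) * rng.standard_normal(trials)
Z0 = M0 + np.sqrt(N0 / 2.0) * rng.standard_normal(trials)
alpha = sigma_m2 / (sigma_m2 + N0 / 2.0)
mse_map = np.mean((alpha * Z0 - M0) ** 2)   # theory: alpha * N0/2 = 2/3
mse_ml = np.mean((Z0 - M0) ** 2)            # theory: N0/2 = 2
print(mse_map, mse_ml)
```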

Problem 10.33

a. Use the basis function set

$$
\phi_1(t) = \sqrt{\frac{2}{T}}\cos\omega_c t \quad\text{and}\quad \phi_2(t) = \sqrt{\frac{2}{T}}\sin\omega_c t
$$

and trigonometric expansion of the received data as

$$
H_1:\ y(t) = A\cos\theta\cos\omega_c t - A\sin\theta\sin\omega_c t + n(t)
$$

$$
H_2:\ y(t) = -A\cos\theta\cos\omega_c t + A\sin\theta\sin\omega_c t + n(t)
$$

to write the data in vector form as

$$
H_1:\ \mathbf{Z} = (Z_1, Z_2) = \left(A\sqrt{\frac{T}{2}}\cos\theta + N_1,\ -A\sqrt{\frac{T}{2}}\sin\theta + N_2\right)
$$

$$
H_2:\ \mathbf{Z} = (Z_1, Z_2) = \left(-A\sqrt{\frac{T}{2}}\cos\theta + N_1,\ A\sqrt{\frac{T}{2}}\sin\theta + N_2\right)
$$

where

$$
N_1 = \int_0^T n(t)\,\phi_1(t)\, dt \quad\text{and}\quad N_2 = \int_0^T n(t)\,\phi_2(t)\, dt
$$

Note that $E[N_1] = E[N_2] = 0$ and $\mathrm{var}[N_1] = \mathrm{var}[N_2] = N_0/2$. Thus

$$
E[Z_1 \mid H_1] = A\sqrt{\frac{T}{2}}\cos\theta \triangleq A_1, \quad E[Z_2 \mid H_1] = -A\sqrt{\frac{T}{2}}\sin\theta \triangleq -A_2
$$

$$
E[Z_1 \mid H_2] = -A_1, \quad E[Z_2 \mid H_2] = A_2
$$

Hence the conditional pdfs of the data, given $\theta$ and the particular hypothesis, are

$$
f_{\mathbf{Z}|\theta, H_1}(z_1, z_2 \mid \theta, H_1) = \frac{1}{\pi N_0}\exp\left\{-\frac{1}{N_0}\left[(z_1 - A_1)^2 + (z_2 + A_2)^2\right]\right\}
$$

$$
f_{\mathbf{Z}|\theta, H_2}(z_1, z_2 \mid \theta, H_2) = \frac{1}{\pi N_0}\exp\left\{-\frac{1}{N_0}\left[(z_1 + A_1)^2 + (z_2 - A_2)^2\right]\right\}
$$

b. Assume equally likely hypotheses so that

$$
f_{\mathbf{Z}|\theta}(z_1, z_2 \mid \theta) = \frac{1}{2} f_{\mathbf{Z}|\theta, H_1}(z_1, z_2 \mid \theta, H_1) + \frac{1}{2} f_{\mathbf{Z}|\theta, H_2}(z_1, z_2 \mid \theta, H_2)
$$

Expanding the exponents and taking out common terms results in

$$
f_{\mathbf{Z}|\theta}(z_1, z_2 \mid \theta) = \frac{1}{2\pi N_0}\exp\left[-\frac{1}{N_0}\left(z_1^2 + z_2^2 + \frac{T}{2}A^2\right)\right]
\left\{\exp\left[\frac{2}{N_0}(A_1 z_1 - A_2 z_2)\right] + \exp\left[-\frac{2}{N_0}(A_1 z_1 - A_2 z_2)\right]\right\}
$$

$$
= \frac{1}{\pi N_0}\exp\left[-\frac{1}{N_0}\left(z_1^2 + z_2^2 + \frac{T}{2}A^2\right)\right]\cosh\left[\frac{2}{N_0}(A_1 z_1 - A_2 z_2)\right]
$$

The maximum-likelihood estimator for $\theta$ satisfies (the data variables are capitalized to signify actual data values)

$$
\left.\frac{\partial}{\partial\theta} \ln f_{\mathbf{Z}|\theta}(Z_1, Z_2 \mid \theta)\right|_{\theta = \hat\theta_{ML}} = 0
$$

From above,

$$
\ln f_{\mathbf{Z}|\theta}(Z_1, Z_2 \mid \theta) = -\ln(\pi N_0) - \frac{1}{N_0}\left(Z_1^2 + Z_2^2 + \frac{T}{2}A^2\right)
+ \ln\cosh\left[\frac{2}{N_0}(A_1 Z_1 - A_2 Z_2)\right]
$$

Taking the derivative with respect to $\theta$ and setting the result equal to 0 results in the condition

$$
\tanh\left[\sqrt{\frac{T}{2}}\,\frac{2A}{N_0}\left(Z_1\cos\theta - Z_2\sin\theta\right)\right]\sqrt{\frac{T}{2}}\,\frac{2A}{N_0}\left(Z_1\sin\theta + Z_2\cos\theta\right) = 0
$$

Using the definitions for $Z_1$ and $Z_2$, which are

$$
Z_1 = \int_0^T y(t)\sqrt{\frac{2}{T}}\cos\omega_c t\, dt \quad\text{and}\quad Z_2 = \int_0^T y(t)\sqrt{\frac{2}{T}}\sin\omega_c t\, dt
$$

this can be put into the form

$$
\tanh\left[\frac{4z}{AT}\int_0^T y(t)\cos(\omega_c t + \theta)\, dt\right]\frac{4z}{AT}\int_0^T y(t)\sin(\omega_c t + \theta)\, dt = 0
$$

where $z = A^2 T/2N_0$. This implies the Costas-loop-type structure shown in the problem statement, with integrators in place of the lowpass filters and a tanh function in the arm with the cosine multiplication. Note that for $z$ small, $\tanh(x) \simeq x$, and the phase estimator becomes a Costas loop if the integrators are viewed as lowpass filters.
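A discrete-time Python sketch of this tanh-weighted, Costas-type recursion (not part of the text): each iteration multiplies the quadrature correlation by tanh of the in-phase correlation and nudges the phase estimate. The step size, noise level, data-free signal model, and iteration count are all illustrative assumptions.

```python
# Discrete-time sketch of the recursion implied by the ML condition: the error
# signal is tanh(in-phase correlation) * (quadrature correlation).
import numpy as np

rng = np.random.default_rng(4)
A, theta_true, step, n_steps = 1.0, 0.7, 0.1, 4000
theta = 0.0
history = []
for _ in range(n_steps):
    # per-symbol correlator outputs referenced to the current estimate
    zc = A * np.cos(theta_true - theta) + 0.3 * rng.standard_normal()
    zs = A * np.sin(theta_true - theta) + 0.3 * rng.standard_normal()
    theta += step * np.tanh(zc) * zs   # error signal from the ML condition
    history.append(theta)
theta_avg = np.mean(history[-1000:])
print(theta_avg)   # settles near theta_true = 0.7
```

For small correlator outputs tanh(x) ≈ x, and the update reduces to the usual Costas-loop product of the two correlations.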
c. The variance of the phase estimate is lower bounded by applying the Cramer-Rao bound, which is

$$
\mathrm{var}\left(\hat\theta_{ML}\right) \ge \left\{-E\left[\frac{\partial^2 \ln f_{\mathbf{Z}|\theta}}{\partial\theta^2}\right]\right\}^{-1}
$$

The first derivative, from above, is

$$
\frac{\partial \ln f_{\mathbf{Z}|\theta}}{\partial\theta} = -\frac{2A}{N_0}\tanh\left[\frac{2A}{N_0}\int_0^T y(t)\cos(\omega_c t + \theta)\, dt\right]\int_0^T y(t)\sin(\omega_c t + \theta)\, dt
$$

Assume low SNR so that $\tanh(x) \simeq x$. Under this assumption the second derivative, after some work, can be shown to be

$$
\frac{\partial^2 \ln f_{\mathbf{Z}|\theta}}{\partial\theta^2} = -\left(\frac{2A}{N_0}\right)^2 \int_0^T\!\!\int_0^T y(t)\, y(t')\cos\omega_c(t - t')\, dt\, dt'
$$

After some development, it can be shown that

$$
E[y(t)\, y(t')] = \Pr[H_1]\, E[y(t)\, y(t') \mid H_1] + \Pr[H_2]\, E[y(t)\, y(t') \mid H_2]
$$

$$
= \frac{1}{2} E\left\{[A\cos(\omega_c t + \theta) + n(t)][A\cos(\omega_c t' + \theta) + n(t')]\right\}
+ \frac{1}{2} E\left\{[-A\cos(\omega_c t + \theta) + n(t)][-A\cos(\omega_c t' + \theta) + n(t')]\right\}
$$

$$
= \frac{A^2}{2}\cos\omega_c(t - t') + \frac{N_0}{2}\,\delta(t - t')
$$

so that

$$
-E\left[\frac{\partial^2 \ln f_{\mathbf{Z}|\theta}}{\partial\theta^2}\right]
= \left(\frac{2A}{N_0}\right)^2 \int_0^T\!\!\int_0^T E[y(t)\, y(t')]\cos\omega_c(t - t')\, dt\, dt'
$$

$$
= \left(\frac{2A}{N_0}\right)^2 \int_0^T\!\!\int_0^T \left[\frac{A^2}{2}\cos\omega_c(t - t') + \frac{N_0}{2}\,\delta(t - t')\right]\cos\omega_c(t - t')\, dt\, dt'
$$

$$
= \left(\frac{2A}{N_0}\right)^2\left[\frac{A^2}{2}\int_0^T\!\!\int_0^T \cos^2\omega_c(t - t')\, dt\, dt' + \frac{N_0 T}{2}\right]
= \left(\frac{2A}{N_0}\right)^2\left(\frac{A^2 T^2}{4} + \frac{N_0 T}{2}\right)
$$

Since the SNR is $z = \frac{A^2 T}{2N_0}$, this result becomes

$$
-E\left[\frac{\partial^2 \ln f_{\mathbf{Z}|\theta}}{\partial\theta^2}\right] = 4z(z + 1)
$$

Thus, the variance of the phase estimate, for low SNR, is bounded by

$$
\mathrm{var}\left(\hat\theta_{ML}\right) \ge \frac{1}{4z(z + 1)}
$$
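The bound is easy to tabulate. A small Python sketch (the SNR values chosen are illustrative):

```python
# Evaluating the low-SNR Cramer-Rao bound var(theta_hat) >= 1/[4z(z+1)]
# for a few SNRs, with z = A^2 T / (2 N0).
def crb_phase(z):
    return 1.0 / (4.0 * z * (z + 1.0))

for z_db in (0, 5, 10):
    z = 10.0 ** (z_db / 10.0)
    print(f"z = {z_db:2d} dB: var(theta) >= {crb_phase(z):.5f} rad^2")
```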

Problem 10.34
Write the impulse response as

$$
h(t) = \frac{1}{T}\,\Pi\!\left(\frac{t - T/2}{T}\right)
$$

and use the delay theorem to get

$$
H(f) = \frac{1}{T}\, T\,\mathrm{sinc}(Tf)\, e^{-j\pi f T} = \mathrm{sinc}(Tf)\, e^{-j\pi f T}
$$

Thus the equivalent noise bandwidth is

$$
B_N = \frac{1}{H_{\max}^2}\int_0^\infty |H(f)|^2\, df
= \int_0^\infty \mathrm{sinc}^2(Tf)\, df
= \frac{1}{T}\int_0^\infty \mathrm{sinc}^2(u)\, du
= \frac{1}{2T}
$$
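The result can be checked numerically. A Python (NumPy) sketch, not part of the text; the truncation point and value of $T$ are arbitrary choices:

```python
# Numerical check that the equivalent noise bandwidth of the T-second
# integrator is B_N = 1/(2T). np.sinc(x) = sin(pi x)/(pi x), matching the
# sinc convention above; truncating the integral at 500/T leaves only a
# ~1e-4 tail error.
import numpy as np

T = 1e-3
f = np.linspace(0.0, 500.0 / T, 2_000_001)
df = f[1] - f[0]
BN = np.sum(np.sinc(T * f) ** 2) * df   # integral of |H(f)|^2, H_max = 1
print(BN * 2 * T)                        # close to 1, i.e., B_N = 1/(2T)
```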

Problem 10.35

a. Note that $\cos\left(\cos^{-1} m\right) = m$ and $\sin\left(\cos^{-1} m\right) = \left(1 - m^2\right)^{1/2}$ to write

$$
s(t) = \sqrt{2P}\, m\sin(\omega_c t + \theta) \pm \sqrt{2P}\sqrt{1 - m^2}\cos(\omega_c t + \theta), \quad \pm\text{ sign due to data}
$$

Use the basis functions

$$
\phi_1(t) = \sqrt{\frac{2}{T}}\cos\omega_c t \quad\text{and}\quad \phi_2(t) = \sqrt{\frac{2}{T}}\sin\omega_c t
$$

to write

$$
s(t) = \sqrt{PT}\left(m\sin\theta \pm \sqrt{1 - m^2}\cos\theta\right)\phi_1(t) + \sqrt{PT}\left(m\cos\theta \mp \sqrt{1 - m^2}\sin\theta\right)\phi_2(t)
$$

b. Use the above basis functions and base a decision on the vector

$$
\mathbf{Z} = [Z_1, Z_2] = \left[\sqrt{PT}\left(m\sin\theta \pm \sqrt{1 - m^2}\cos\theta\right) + N_1,\ \sqrt{PT}\left(m\cos\theta \mp \sqrt{1 - m^2}\sin\theta\right) + N_2\right] \triangleq [S_1 + N_1,\ S_2 + N_2]
$$

where $N_1$ and $N_2$ are independent Gaussian random variables with zero means and variances $N_0/2$. The pdf of $\mathbf{Z}$ conditioned on the signal sign and $\theta$ is

$$
f_{\mathbf{Z}|\theta,\pm}(Z_1, Z_2 \mid \theta, \pm) = \frac{1}{\pi N_0}\exp\left\{-\frac{1}{N_0}\left[(Z_1 - S_1)^2 + (Z_2 - S_2)^2\right]\right\}
$$

To find the pdf conditioned only on $\theta$, use the fact that $\Pr(+\text{ bit}) = \Pr(-\text{ bit}) = 1/2$ to get

$$
f_{\mathbf{Z}|\theta}(Z_1, Z_2 \mid \theta) = \frac{1}{2} f_{\mathbf{Z}|\theta,+}(Z_1, Z_2 \mid \theta, +) + \frac{1}{2} f_{\mathbf{Z}|\theta,-}(Z_1, Z_2 \mid \theta, -)
$$

After considerable simplification we obtain

$$
f_{\mathbf{Z}|\theta}(Z_1, Z_2 \mid \theta) = C\exp\left[\frac{2m\sqrt{PT}}{N_0}(Z_1\sin\theta + Z_2\cos\theta)\right]
\cosh\left[\frac{2\sqrt{(1 - m^2)PT}}{N_0}(Z_1\cos\theta - Z_2\sin\theta)\right]
$$

where $C$ includes all terms independent of $\theta$. The log-likelihood function is

$$
L(\theta) = \frac{2m\sqrt{PT}}{N_0}(Z_1\sin\theta + Z_2\cos\theta)
+ \ln\cosh\left[\frac{2\sqrt{(1 - m^2)PT}}{N_0}(Z_1\cos\theta - Z_2\sin\theta)\right]
$$

Note that

$$
Z_1\sin\theta + Z_2\cos\theta = \int_0^T y(t)\sqrt{\frac{2}{T}}\sin(\omega_c t + \theta)\, dt
$$

$$
Z_1\cos\theta - Z_2\sin\theta = \int_0^T y(t)\sqrt{\frac{2}{T}}\cos(\omega_c t + \theta)\, dt
$$

so that

$$
L(\theta) = \frac{2m\sqrt{2P}}{N_0}\int_0^T y(t)\sin(\omega_c t + \theta)\, dt
+ \ln\cosh\left[\frac{2\sqrt{2P(1 - m^2)}}{N_0}\int_0^T y(t)\cos(\omega_c t + \theta)\, dt\right]
$$

The maximum-likelihood estimate satisfies $\partial L(\theta)/\partial\theta = 0$. Differentiating the above expression for $L(\theta)$ gives

$$
\frac{\partial L(\theta)}{\partial\theta} = 0 = \frac{2m\sqrt{2P}}{N_0}\int_0^T y(t)\cos(\omega_c t + \hat\theta_{ML})\, dt
- \frac{2\sqrt{2P(1 - m^2)}}{N_0}\tanh\left[\frac{2\sqrt{2P(1 - m^2)}}{N_0}\int_0^T y(t)\cos(\omega_c t + \hat\theta_{ML})\, dt\right]\int_0^T y(t)\sin(\omega_c t + \hat\theta_{ML})\, dt
$$

c. The block diagram follows the development of the one for Problem 10.33. The block diagram of the phase estimator has another arm that adds into the feedback to the VCO and acts as a phase-lock loop for the carrier component.

10.1 Computer Exercises

Computer Exercise 10.1

% file: ce10_1.m
%
% PF = Q(A) < 1e-3 => A > 3.08; A = ln(eta)/d + d/2
% PD = Q(B) > 0.95 => B < -1.65; B = ln(eta)/d - d/2
% d = A - B > 4.73;
% ln(eta) = (A^2 - B^2)/2 > 3.38 => eta > 29.37
%
clf
clear all;
d = [4.73 5 5.5 6];
log_eta = 3.38:0.01:9;
lend = length(d);
hold on
for j = 1:lend
    dj = d(j);
    af = log_eta/dj + dj/2;
    ad = log_eta/dj - dj/2;
    pf = qfn(af);
    pd = qfn(ad);

[Figure: Receiver operating characteristic; probability of detection versus probability of false alarm (x 10^-3) for d = 4.73, 5, 5.5, 6.]

    semilogy(pf, pd)
    text(pf(20), pd(20)-.01, ['d = ', num2str(dj)])
end
hold off
xlabel('Probability of False Alarm')
ylabel('Probability of Detection')
title('Receiver operating characteristic')

% This function computes the Gaussian Q-function
%
function Q = qfn(x)
Q = 0.5*erfc(x/sqrt(2));

Computer Exercise 10.2

% file: ce10_2.m
%
% From (10.171), sigmap^2/sigman^2 = (K + sigman^2/sigmaA^2)^-1
%
clf
clear all;
sigmaA2_sigman2 = 0.1:0.5:2.1;
Ls = length(sigmaA2_sigman2);
K = 1:25;
hold on
for l = 1:Ls
    sigma_ratio = sigmaA2_sigman2(l);
    sigmap2_sigman2 = 1./(K + 1./sigma_ratio);
    plot(K, sigmap2_sigman2)
    text(K(1), sigmap2_sigman2(1), ['\sigma_A^2/\sigma_n^2 = ', num2str(sigma_ratio)])
end
hold off
xlabel('K'), ylabel('\sigma_p^2/\sigma_n^2')

Computer Exercise 10.3

% file: ce10_3.m
% Simulation of a digital PLL
%
clf
clear all;
AA = char('-', ':', '--', '-.');
K = 1000;
epsilon = 0.1;
theta0 = 0.5*pi;
T = 1;
A = 1;
kk = 1:K;
sigma_n = [0.1 0.5 1];
Ln = length(sigma_n);
hold on
for n = 1:Ln
    sigma = sigma_n(n);
    Z1(1) = sqrt(T/2)*A*cos(theta0) + sigma*randn(1);
[Figure: \sigma_p^2/\sigma_n^2 versus K for \sigma_A^2/\sigma_n^2 = 0.1, 0.6, 1.1, 1.6, 2.1 (Computer Exercise 10.2).]

[Figure: First-order digital phase tracking device performance for \epsilon = 0.01; \theta(k) in rad versus k for \sigma_n = 0.1, 0.5, 1.]

    Z2(1) = -sqrt(T/2)*A*sin(theta0) + sigma*randn(1);
    theta(1) = theta0;
    for k = 1:K-1
        theta(k+1) = theta(k) + epsilon*atan2(Z2(k), Z1(k));
        Z1(k+1) = sqrt(T/2)*A*cos(theta(k+1)) + sigma*randn(1);
        Z2(k+1) = -sqrt(T/2)*A*sin(theta(k+1)) + sigma*randn(1);
    end
    plot(kk, theta, AA(n,:))
end
hold off
xlabel('k'), ylabel('\theta(k), rad')
legend(['\sigma_n = ', num2str(sigma_n(1))], ['\sigma_n = ', num2str(sigma_n(2))], ['\sigma_n = ', num2str(sigma_n(3))])
title(['First-order digital phase tracking device performance for \epsilon = ', num2str(epsilon)])

[Figure: First-order digital phase tracking device performance for \epsilon = 0.1; \theta(k) in rad versus k for \sigma_n = 0.1, 0.5, 1.]
